Date of Award


Degree Type


Degree Name

Master of Arts



Major Professor

Jessica Hay

Committee Members

Aaron Buss, Devin Casenhiser, Daniela Corbetta


Abstract

Human speech is necessarily multimodal, and audiovisual redundancies in speech may play a vital role in speech perception across the lifespan. Most previous studies have focused on how language is learned from auditory input, but how audiovisual speech information is perceived and comprehended remains less well understood. Here, I examine how audiovisual and visual-only speech information is represented for known words, and whether intersensory processing efficiency predicts the strength of lexical representations. To explore the relationship between intersensory processing ability (indexed by matching temporally synchronous auditory and visual stimulation) and the strength of lexical representations, adult participants completed an audiovisual word recognition task and the Intersensory Processing Efficiency Protocol (IPEP). Participants reliably identified the correct referent object across manipulations of modality (audiovisual vs. visual-only) and pronunciation (correct vs. mispronounced). Correlational analyses revealed no relationship between processing efficiency and visual speech information in lexical representations. However, the results presented here suggest that adults’ lexical representations robustly include visual speech information and that visual speech information is processed sublexically during speech perception.