Masters Theses
Date of Award
12-2017
Degree Type
Thesis
Degree Name
Master of Arts
Major
Psychology
Major Professor
Jessica Hay
Committee Members
Aaron Buss, Devin Casenhiser, Daniela Corbetta
Abstract
Human speech is necessarily multimodal, and audiovisual redundancies in speech may play a vital role in speech perception across the lifespan. Most previous studies have focused on how language is learned from auditory input, but how audiovisual speech information is perceived and comprehended remains less well understood. Here, I examine how audiovisual and visual-only speech information is represented for known words, and whether intersensory processing efficiency predicts the strength of lexical representations. To explore the relationship between intersensory processing ability (indexed by matching temporally synchronous auditory and visual stimulation) and the strength of lexical representations, adult subjects participated in an audiovisual word recognition task and the Intersensory Processing Efficiency Protocol (IPEP). Participants reliably identified the correct referent object across manipulations of modality (audiovisual vs. visual-only) and pronunciation (correctly pronounced vs. mispronounced). Correlational analyses did not reveal any relationship between processing efficiency and visual speech information in lexical representations. However, the results presented here suggest that adults' lexical representations robustly include visual speech information and that visual speech information is processed sublexically during speech perception.
Recommended Citation
Cannistraci, Ryan Andrew, "Do you see what I mean? The role of visual speech information in lexical representations." Master's Thesis, University of Tennessee, 2017.
https://trace.tennessee.edu/utk_gradthes/4992