Visual boundaries in sign motion: processing with and without lip-reading cues
Keywords: semantics, EEG, psycholinguistics, sign language, event visibility
Sign languages exhibit a higher degree of iconicity than spoken languages. Studies of a number of unrelated sign languages show that the event structure of verb signs is reflected in their phonological form (Wilbur, 2008; Malaia & Wilbur, 2012; Krebs et al., 2021). Previous research has shown that hearing non-signers (with no prior exposure to sign language) can use the iconicity inherent in the visual dynamics of a verb sign to correctly identify its event structure (telic vs. atelic). In two EEG experiments, hearing non-signers were presented with unfamiliar telic and atelic verb signs, which they classified in a two-choice lexical decision task in their native language. The first experiment assessed the timeline of neural processing in non-signers viewing telic and atelic signs without access to lip-reading cues in their native language, in order to characterize the pathways by which physical perceptual motion features are incorporated into linguistic processing. The second experiment further probed the impact of the visual information provided by lip-reading (decoding speech from visual cues on the speaker's face, most importantly the lips) on the processing of telic and atelic signs in non-signers.