When You CAN See the Difference: The Phonetic Basis of Sonority in American Sign Language

Tatiana Luchkina, Elena Koulidobrova, Jeffrey Palmer


Spoken and signed languages (SLs) deliver perceptual cues that exhibit varying degrees of perceptual validity during categorization: in spoken languages, listeners develop perceptual biases when integrating multiple acoustic dimensions during auditory categorization (Holt & Lotto, 2006). This leads us to expect differential perceptual validity for the dynamic gestural units HANDSHAPE, MOVEMENT, ORIENTATION, and LOCATION produced by the manual articulators in SLs. In this study, we use a closed-set sentence discrimination task developed by Bochner et al. (2011) to evaluate the perceptual salience of the gestural components of signs in American Sign Language (ASL) for naïve signers and for deaf L2 learners of ASL who are proficient in another SL. Our goal is to gauge which of these features are likely to constitute the phonetic basis of sonority in the sign modality and to convey phonemic contrasts perceptible even to first-time signers.

Twenty-five deaf L2 ASL signers and 28 hearing English speakers with no experience in any SL participated in this study. Results reveal that phonemic contrasts based on HANDSHAPE presented the greatest difficulty in phonological discrimination for sign-naïve participants. For all participants, contrasts based on ORIENTATION and LOCATION, which involve larger-scale articulators, were associated with robust categorical discrimination.


ASL; visual sonority; sign perception


DOI: https://doi.org/10.3765/amp.v8i0.4686

Copyright (c) 2020 Tatiana Luchkina, Elena Koulidobrova, Jeffrey Palmer

License URL: https://creativecommons.org/licenses/by/3.0/