Probing a Neural Network Model of Sound Change for Perceptual Integration


  • Cerys Ashley Hughes, University of Massachusetts Amherst



neural network, contrast shift, sound change, convolutional neural network, perception, Garner paradigm, English, stops, voicing


The cross-linguistic tendency for contrast shifts to occur between some cues more than others has been investigated typologically and experimentally (Yang 2019), but has received less attention in computational modeling. This paper adapts a human experimental paradigm (Kingston et al. 2008) to the speech perception component of a neural network model of sound change (Beguš 2020) to better understand how the model processes acoustic cues, in the context of Yang's proposal that auditory dimensions affect which cues are more likely to undergo contrast shift. Piloting this neural network probing technique, I find evidence that the model integrates different pairs of English stop voicing cues than humans do, suggesting that further amendments to the model are necessary to implement Yang's (2019) account. More generally, these results highlight potential acoustic processing differences between humans and the model under investigation, a convolutional neural network, an architecture commonly used in spoken language applications.






Supplemental Proceedings