Learning a gradient grammar of French liaison

Paul Smolensky, Eric Rosen, Matthew Goldrick

Abstract


In certain French words, an orthographically-final consonant is unpronounced except, in certain environments, when it precedes a vowel. This phenomenon, liaison, interacts significantly with several other patterns in French (including h-aspiré, schwa deletion, and the presence of other morphemes in the liaison context). We present a learning algorithm that acquires a grammar accounting for these patterns and their interactions. The learned grammar employs Gradient Symbolic Computation (GSC), incorporating weighted constraints and partially-activated symbolic representations. Grammatical analysis in the GSC framework includes the challenging determination of the numerical strengths of symbolic constituent activations (as well as of constraints). Here we present the first general algorithm for learning these quantities from empirical examples: the Error-Driven Gradient Activation Readjustment (EDGAR). Smolensky and Goldrick (2016) proposed a GSC analysis, with hand-determined numerical strengths, in which liaison derives from the coalescence of partially-activated input consonants. EDGAR allows us to extend this work to a wider range of liaison phenomena by automatically determining the more comprehensive set of numerical strengths required to generate the complex pattern of overall liaison behaviour.
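The abstract describes error-driven learning of gradient activation strengths. The toy sketch below illustrates only the general idea of such an update rule; the harmony function, constraint weights, learning rate, and all names here are hypothetical placeholders, not the actual EDGAR algorithm, which is specified in the full paper.

```python
# Toy illustration of an error-driven readjustment of a gradient
# activation, loosely inspired by the abstract's description of EDGAR.
# Everything here (harmony function, weights, learning rate) is a
# hypothetical simplification, not the authors' algorithm.

def harmony(activation, w_liaison, w_nocoda, liaison_candidate):
    """Toy harmony: the liaison candidate is rewarded in proportion to
    the gradient activation of the latent consonant; the non-liaison
    candidate receives a constant (toy) reward."""
    if liaison_candidate:
        return w_liaison * activation
    return w_nocoda

def edgar_step(activation, observed_liaison, lr=0.1,
               w_liaison=1.0, w_nocoda=0.5):
    """One error-driven step: if the grammar's predicted winner
    mismatches the observed form, nudge the activation accordingly."""
    predicted = (harmony(activation, w_liaison, w_nocoda, True)
                 > harmony(activation, w_liaison, w_nocoda, False))
    if predicted != observed_liaison:
        activation += lr if observed_liaison else -lr
    return min(1.0, max(0.0, activation))

# Repeated exposure to a context where liaison is observed raises the
# latent consonant's activation until the liaison candidate wins.
a = 0.2
for _ in range(20):
    a = edgar_step(a, observed_liaison=True)
```

In this simplification, the activation rises just until the liaison candidate's harmony exceeds the alternative's, then stabilizes, mirroring the error-driven (mismatch-triggered) character the abstract attributes to EDGAR.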


Keywords


liaison; Gradient Symbolic Computation; learning



DOI: https://doi.org/10.3765/amp.v8i0.4680

Copyright (c) 2020 Matthew Goldrick, Paul Smolensky, Eric Rosen

License URL: https://creativecommons.org/licenses/by/3.0/