Learning Parametric Stress without Domain-Specific Mechanisms

Authors

  • Aleksei Nazarov Harvard University
  • Gaja Jarosz University of Massachusetts Amherst

DOI:

https://doi.org/10.3765/amp.v4i0.4010

Keywords:

Principles and Parameters, Domain-General Learning, Expectation Maximization

Abstract

State-of-the-art learning mechanisms for stress in Optimality Theory (see, e.g., Tesar and Smolensky 2000; Boersma and Pater 2016; Jarosz 2013) make use of probabilistic mechanisms that are domain-general in that they do not refer to the content of constraints and need not be stipulated in UG. By contrast, Pearl (2007, 2011) has argued that domain-general probabilistic learners of parametric grammars (Yang 2002) are insufficient for word stress and that, instead, domain-specific learning mechanisms must be stipulated in UG alongside the parameters themselves. We propose a modification of Yang's (2002) learner, based on Jarosz's (2015) learner for Optimality Theory, which we call the Expectation Driven Parameter Learner, and show that this modification yields a dramatic improvement in accuracy (from 4.3% to 96%) on a representative typology generated by Dresher and Kaye's (1990) parameter set. This suggests that domain-general learning mechanisms may be sufficient for learning stress after all, contra Pearl (2007, 2011), regardless of which grammatical representation (parameters or violable constraints) is a better reflection of the human language capacity.
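The abstract names the Expectation Driven Parameter Learner but does not spell out its update procedure. As a rough, non-authoritative illustration, the Python sketch below shows one way an expectation-driven update over binary stress parameters could work, in the spirit of Yang's (2002) probabilistic parameter learner and Jarosz's (2015) expectation driven learning: the expected success of each value of a parameter is estimated by resampling the remaining parameters, and the parameter's probability is moved toward the relatively more successful value. All function names, the toy stress system, and the numeric settings are assumptions for illustration, not the paper's actual algorithm.

    import random

    # Illustrative sketch only: the names, the update rule details, and the toy
    # stress system below are hypothetical and are not taken from the paper.

    def sample_grammar(probs, fix=None):
        """Sample a full setting of binary parameters; optionally fix one value."""
        grammar = {p: (random.random() < q) for p, q in probs.items()}
        if fix is not None:
            param, value = fix
            grammar[param] = value
        return grammar

    def expected_success(probs, param, value, datum, generates, samples=20):
        """Estimate how often grammars with param = value generate the datum."""
        hits = sum(
            generates(sample_grammar(probs, fix=(param, value)), datum)
            for _ in range(samples)
        )
        return hits / samples

    def update(probs, datum, generates, rate=0.1):
        """Nudge each parameter probability toward its more successful value."""
        for param in probs:
            s_true = expected_success(probs, param, True, datum, generates)
            s_false = expected_success(probs, param, False, datum, generates)
            if s_true + s_false > 0:
                target = s_true / (s_true + s_false)
                probs[param] += rate * (target - probs[param])

    # Hypothetical two-parameter toy system: main stress falls on the leftmost
    # syllable, or else on the rightmost syllable, optionally skipping a final
    # extrametrical syllable. A datum is (word length, stressed syllable index).
    def toy_generates(grammar, datum):
        length, stressed = datum
        if grammar["MainLeft"]:
            pos = 0
        elif grammar["Extrametrical"] and length > 1:
            pos = length - 2
        else:
            pos = length - 1
        return pos == stressed

    probs = {"MainLeft": 0.5, "Extrametrical": 0.5}
    data = [(3, 1), (4, 2), (5, 3)]   # consistently penultimate stress
    for _ in range(100):
        for datum in data:
            update(probs, datum, toy_generates)
    print(probs)   # MainLeft should drift toward 0, Extrametrical toward 1

On this toy data set the probability of MainLeft drifts toward 0 and of Extrametrical toward 1, which is the intended behavior of a resampling-based update; the paper's actual learner, its parameter set (Dresher and Kaye 1990), and its evaluation are considerably richer than this sketch.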

Author Biography

  • Aleksei Nazarov, Harvard University
Lecturer in the Department of Linguistics, Harvard University

Published

2017-05-09