From experiment results to a constraint hierarchy with the ‘Rank Centrality’ algorithm

Jennifer L. Smith

Abstract


Rank Centrality (RC; Negahban, Oh, & Shah 2017) is a rank-aggregation algorithm that computes a total ranking of elements from noisy pairwise ranking information. I test RC as an alternative to incremental error-driven learning algorithms such as GLA-MaxEnt (Boersma & Hayes 2001; Jäger 2007) for modeling a constraint hierarchy on the basis of two-alternative forced-choice experiment results. For the case study examined here, RC agrees well with GLA-MaxEnt on the ordering of the constraints, but differs somewhat on the distance between constraints; in particular, RC assigns more extreme (low) positions to constraints at the bottom of the hierarchy than GLA-MaxEnt does. Overall, these initial results are promising, and RC merits further investigation as a constraint-ranking method in experimental linguistics.
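As background for the abstract, the core idea of Rank Centrality can be sketched as follows: build a Markov chain over the items (here, constraints) in which the probability of moving from item i to item j is proportional to how often j beat i in the pairwise data, and then score each item by its mass in the chain's stationary distribution. The sketch below is a minimal NumPy illustration of that idea under common simplifying assumptions (a dense comparison matrix, power iteration for the stationary distribution); it is not the paper's implementation, and the function name `rank_centrality` is hypothetical.

```python
import numpy as np


def rank_centrality(wins, n_iter=2000):
    """Score items from pairwise comparisons via Rank Centrality (sketch).

    wins[i, j] = number of times item i beat item j.
    Returns the stationary distribution of a comparison Markov chain;
    higher mass = higher-ranked item.
    """
    n = wins.shape[0]
    totals = wins + wins.T  # total comparisons for each pair
    # d_max: the largest number of distinct opponents any item faced,
    # used to normalize transition probabilities.
    d_max = np.count_nonzero(totals, axis=1).max()

    P = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and totals[i, j] > 0:
                # Move from i to j in proportion to how often j beat i,
                # so probability mass flows toward winners.
                P[i, j] = wins[j, i] / totals[i, j] / d_max
        P[i, i] = 1.0 - P[i].sum()  # self-loop keeps each row stochastic

    # Power iteration to approximate the stationary distribution.
    pi = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        pi = pi @ P
    return pi / pi.sum()
```

For example, with three items where item 0 usually beats items 1 and 2, and item 1 usually beats item 2, the returned scores order the items 0 > 1 > 2; in the constraint-ranking application, these stationary-distribution scores would then be read as positions in the hierarchy.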

Keywords


constraint ranking algorithms; Rank Centrality; Maximum Entropy; experimental phonology; loanword phonology


DOI: https://doi.org/10.3765/plsa.v5i1.4694

Copyright (c) 2020 Jennifer L. Smith

This work is licensed under a Creative Commons Attribution 4.0 International License.

Linguistic Society of America


ISSN (online): 2473-8689
