Missing the cues: LLMs’ insensitivity to semantic biases in relative clause attachment
DOI:
https://doi.org/10.3765/plsa.v10i1.5902

Keywords:
LLMs, relative clause attachment ambiguity, semantic bias, prompt format effect, prompt sensitivity

Abstract
We investigate whether large language models (LLMs) replicate English speakers' well-established preference for low attachment (LA) in relative clause (RC) ambiguities, and how they respond when semantic cues such as world knowledge and stereotypical associations (e.g., age or gender plausibility) conflict with this preference. Eight commercial LLMs spanning the Claude-3/3.5 and GPT-3.5/4o families were evaluated on structurally and semantically ambiguous stimuli, alongside items that introduced plausibility-based biases toward either high or low attachment. In the absence of disambiguating cues (i.e., no semantic bias), all models showed a strong preference for low attachment, consistent with human tendencies in ambiguous contexts. However, the models varied in their sensitivity to semantic information: the newer Claude-3.5 models frequently shifted toward high attachment when the LA interpretation was implausible, whereas the GPT-based models rarely did so. Attachment preferences were also affected by prompt format, suggesting that LLMs do not consistently integrate syntactic and semantic information in a stable, human-like way. These findings highlight both convergence and divergence between LLMs and human sentence processing, offering insight into the limits of current pretraining paradigms in handling structural ambiguity and world knowledge.
License
Copyright (c) 2025 Russell Scheinberg, So Young Lee, Ameeta Agrawal

This work is licensed under a Creative Commons Attribution 4.0 International License.
Published by the LSA with permission of the author(s) under a CC BY 4.0 license.