When Transformer models are more compositional than humans: The case of the depth charge illusion
DOI: https://doi.org/10.3765/elm.2.5370
Keywords: Transformer, language model, depth charge illusion, compositionality
Abstract
State-of-the-art Transformer-based language models like GPT-3 are very good at generating syntactically well-formed and semantically plausible text. However, it is unclear to what extent these models encode the compositional rules of human language and to what extent their impressive performance is due to the use of relatively shallow heuristics, which have also been argued to be a factor in human language processing. One example is the so-called depth charge illusion, which occurs when a semantically complex, incongruous sentence like "No head injury is too trivial to be ignored" is assigned a plausible but not compositionally licensed meaning ("Don't ignore head injuries, even if they appear to be trivial"). I present an experiment that investigated how depth charge sentences are processed by Transformer models, which are free of many human performance bottlenecks. The results are mixed: Transformers do show evidence of non-compositionality in depth charge contexts, but also appear to be more compositional than humans in some respects.
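As a minimal sketch of how such probing can be done (this is not the paper's actual protocol), one common approach is to compare a language model's log-probability for a depth charge sentence against a compositionally well-formed control. The choice of model ("gpt2"), the control sentence, and the scoring method below are illustrative assumptions, not details taken from the study.

```python
# Sketch: score a depth charge sentence vs. a control with a Transformer LM.
# Model, control sentence, and scoring method are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_logprob(sentence: str) -> float:
    """Summed token log-probability under the LM (higher = less surprising)."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=ids makes the model return the mean cross-entropy
        # over the n-1 predicted tokens; undo the mean to get a summed log-prob.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.size(1) - 1)

# The depth charge sentence from the abstract vs. a hypothetical control.
depth_charge = "No head injury is too trivial to be ignored."
control = "No head injury is too trivial to be treated."

print(f"depth charge: {sentence_logprob(depth_charge):.2f}")
print(f"control:      {sentence_logprob(control):.2f}")
```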
Published: 2023-01-27
Section: Articles
License: Published by the LSA with permission of the author(s) under a CC BY 4.0 license.