Reviewed by Silvia Kouwenberg and Jodianne Scott, University of the West Indies
Is codeswitching (CS) subject to constraints on grammar? This question is several decades old, and attempts to answer it have been frustrated by large numbers of ill-behaved utterances. Typically, responses take one of two forms: typological, in which different types of intrasentential CS are distinguished and constraints are proposed for each type; and model-internal, in which constraints are relaxed to bring all instances of CS within the same fold. Pieter Muysken takes the first approach in Bilingual speech: A typology of code-mixing (Cambridge: Cambridge University Press, 2000). In this volume, Carol Myers-Scotton’s solution is of the second kind. In fact, her model is intended to account for language contact phenomena more generally, witness the volume’s title as well as the chapters on ‘Convergence and attrition’ and ‘Lexical borrowing, split (mixed) languages, and creole formation’.
The foundation of M-S’s approach, which she first developed in Duelling languages: Grammatical structure in codeswitching (Oxford: Oxford University Press, 1993), is provided by the matrix language frame (MLF) model. The basic idea of the MLF model is that languages never participate equally in CS. Instead, one language provides the MLF into which elements of the embedded language(s) (EL) are inserted. It follows that there is also asymmetry in the imposition of grammatical constraints: EL elements must comply with the ML’s structural constraints. This is captured in the abstract level model, which requires congruence between ML and EL material at different levels of analysis to facilitate insertion of EL material. The 4-M model, which distinguishes content morphemes from different types of system morphemes (i.e. roughly, functional morphemes), accounts for the possible combinations of morphemes from the participating languages in a given frame, predicting that late system morphemes, whose functions are external to their head’s projection, are ML morphemes.
All this seems straightforward and promising as a testable model—until composite CS is brought up to deal with CS of the ‘ill-behaved’ kind. M-S brings these data within the reach of the model by relaxing the constraints on the determination of constituent structure and on the selection of late system morphemes. This allows a composite ML to be formed and provides ‘an abstract frame composed of grammatical projections from more than one variety’ (22). M-S considers composite CS to be produced by codeswitchers whose proficiency in both participating languages is insufficient (8)—as compared to classic codeswitchers, who are at least fluent in the ML—and describes composite CS in the context of language loss and language shift. In other words, instances of CS that fail to conform to the predictions of the MLF model can be thrust aside as aberrations. But who is to say that the speaker who inserted an Italian auxiliary into a French frame in No, parce que hanno donné des cours ‘No, because they have taught courses’ (Muysken 2000:23) is nonfluent in French and Italian? Or that the speaker who felt obliged to use a Spanish case-marker in They invite a El Boss ‘They invite El Boss’ (Muysken 2000:39) is nonfluent in English and Spanish? Both these cases involve the insertion of late system morphemes.
M-S has developed a model that accounts for ‘the good, the bad, and the ugly’. But it may just be that its price is the loss of the hallmark of scientific models: falsifiability.