When You CAN See the Difference: The Phonetic Basis of Sonority in American Sign Language



Introduction
Natural languages, spoken and signed, deliver perceptual cues which exhibit various degrees of perceptual validity in categorization. Phonological categories expressed through auditory signal in spoken languages, are distributed across dynamic gestural units in signed languages. Each gesture delivers important biomechanical and kinesiological information perceived by the signers as time-varying patterns of articulator motion (Sandler, 2017). Relevant phonological contrasts can be localized to one or more of the following articulatory features of signs: handshape, location of the sign relative to the signer's body, hand orientation, and movement. Non-manual information may also be contrastive.
Despite this radical difference in modalities, compelling evidence supporting a unification account for spoken and signed language phonologies comes from the observation that the articulatory features which spoken and signed languages deploy as markers of phonological contrasts are present in pre-linguistic infants' babbles regardless of their hearing ability. Human infants babble both vocally and manually and differ only in terms of the dominant babbling modality (Petitto & Marentette, 1991). Consistent with the findings of Petitto and Marentette, Best, Goldstein, Nam, and Tyler (2016) argue that the articulatory information absorbed by human infants during early development is amodal in nature and incorporates visual data from observing talking faces and even tactile data gathered from touching the caregiver's face or hands while they are talking/signing, e.g., when a mother and a child are in close proximity to each other. Not surprisingly, a number of perception studies, including Hildebrandt and Corina (2002), Williams and Newman (2016), and Stone, Petitto, and Bosworth (2017), report that not only native deaf signers but also hearing L2 learners of a sign language, as well as sign-naïve infants and adults, are responsive to phonetic detail and gestural parameters in a sign language. Gestural competence of non-signers has been established in the prior literature on sign perception as grounded in experience producing and perceiving communicative gestures and in a sense of how gestures function, with and without speech (see, e.g., Brentari, 2010; Bochner et al., 2011). This suggests that humans may be receptive to phonological features in the sign domain irrespective of previous sign exposure. Such receptiveness, then, must be more readily available for perceptually salient articulatory features of signs.
In spoken languages, acoustic cues are known to have various degrees of perceptual saliency in categorization, leading listeners to have perceptual biases when integrating multiple acoustic dimensions (Holt & Lotto, 2006). As a result, some acoustic dimensions play a greater role in determining the category membership of a speech sound than others. To use an example from Holt and Lotto (2006), of the more than 15 established cues to voicing, voice onset time is considered the primary cue due to its high perceptual robustness. While it is reasonable to predict differential perceptual validity for the dynamic gestural units produced by manual articulators in sign languages, an interesting related question is which articulatory parameters of signs are perceptually the most salient. Assuming dissimilar perceptual salience of handshape, location, movement, and orientation, one could further ask what phonological feature(s) may capitalize on it in languages that use a kinetic signal.

Towards an understanding of visual sonority
Traditionally, the relative sonority of speech sounds has been regarded as a conceptual representation of their perceptual salience (Williams & Newman, 2016). From the phonological standpoint, sonority may be characterized as a non-binary feature which has implications for language phonotactics and syllable well-formedness by setting requirements for a minimal degree of perceptual distinctiveness or contrastiveness for adjacent segments. Syllable organization in spoken languages can be captured by the Sonority Sequencing Principle, which aligns local sonority maxima with syllable peaks and less sonorous segments with onsets and codas. In a well-formed syllable, rising sonority is characteristic of the onset-nucleus transition, and falling sonority of the nucleus-coda transition, rendering the segments maximally distinct and therefore more clearly perceptible (e.g., Parker, 2011).
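To make the principle concrete, consider a minimal sketch: given a numeric sonority scale, a well-formed syllable shows rising sonority into the nucleus and falling sonority after it. The numeric values below are simplified assumptions for exposition, not a claim about any particular segment inventory.

```python
# Illustrative sketch of the Sonority Sequencing Principle (SSP).
# The sonority values are simplified assumptions for exposition only.
SONORITY = {"p": 1, "t": 1, "k": 1, "s": 2, "n": 3, "l": 4, "r": 5, "a": 7, "i": 7}

def obeys_ssp(syllable, nucleus_index):
    """True if sonority rises up to the nucleus and falls after it."""
    values = [SONORITY[seg] for seg in syllable]
    rising = all(values[i] < values[i + 1] for i in range(nucleus_index))
    falling = all(values[i] > values[i + 1]
                  for i in range(nucleus_index, len(values) - 1))
    return rising and falling

print(obeys_ssp("plan", 2))  # p < l < a > n: well-formed -> True
print(obeys_ssp("lpan", 2))  # l > p violates rising sonority -> False
```

The checker captures only the sequencing requirement itself; real phonotactics layer further language-specific restrictions on top of it.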
Sonority is regarded as a language universal, manifest in spoken and signed languages alike. In spoken languages, loudness has been established, although tentatively, as a psycho-acoustic correlate of sonority (e.g., Parker, 2008). A number of theoretical proposals detailing the nature of sonority in sign languages have been put forward, exploring sonority as a property of entire signs or of sublexical sign units. For example, Corina (1990) discusses the sonority of classes of sign features (movement, location, handshape, orientation); Brentari (1993) ranks the relative sonority of dynamic sign elements; and Perlmutter (1992) regards syllable components (e.g., morae) as the domain in which sonority may be realized. Despite the difference in these characterizations, most, if not all, proposals concur that visual sonority plays an important role in determining syllabic templates and word formation strategies in sign languages (Corina, 1990; Perlmutter, 1992; Sandler, 1993; Brentari, 1993). To illustrate, Liddell (1984), Sandler (1993), and Perlmutter (1992) draw a categorical difference between the movement and the non-movement components of ASL signs. More specifically, Sandler (1993) argues that in the sign domain, the relative sonority of two fundamental sign segments, movements and locations, determines their optimal sequencing relationship, leading ASL morphemes to exhibit an optimal sonority cycle. According to Sandler, movement units, comprising path movements and smaller-scale handshape-internal or orientation-internal movements, are inherently more sonorous than location units. These movement types serve as optimal syllable nuclei and present a structural basis of the dominant morphological templates in ASL.
The centrality of movement in determining sign well-formedness has been linked to the basic principles of the human visual system. Sandler (1993) summarizes the importance of motion-based signal for humans by noting that motion enhances visual perception. In associating movement with perceptual saliency in the sign domain, Brentari (1993) and Perlmutter (1992) compare its perceptual qualities in sign languages to the perceptual qualities of vowels in spoken languages. Perlmutter (1992), for example, draws a parallel between movements and vowels and between locations and consonants, and demonstrates that signs which do not have an underlyingly specified movement may only be considered well-formed when "repaired" by means of epenthetic movement, or M-epenthesis. Perlmutter (1992) and Brentari (1993) concur on the special status of the movement parameter by further observing that it provides an optimal "docking site" for secondary movement features, such as finger wiggling or hand-internal changes: these secondary articulations may be executed while a path movement gesture is in progress and, more generally, may only be co-articulated with the syllable nucleus.
Visual sonority, like sonority in the auditory domain, should be regarded as a property of the sensory signal which makes it qualitatively richer, so that it can be better absorbed by the perceptual system. "High-sonority signs", then, may be perceptually salient in the stream of articulatory data due to special visual characteristics of the incorporated movement feature. For example, movement may be perceived as salient because of its dynamic nature, and its salience may be further enhanced if its execution yields larger articulatory gestures (this is especially the case for path movements). Other kinds of movement are necessarily more bound in time and space: being hand-internal, they are confined to the spatial dimensions of the human hand. Brentari (1998) further argues that the amplitude of movement in a sign may be varied as the signer seeks to foreground or background information in an utterance or phrase. This use of movement as a suprasegmental feature is achieved by means of two prosodic mechanisms, proximization and distalization. When movement is proximized, it migrates from its default joint to a more proximal joint and is thereby magnified in size/amplitude. When made more distal, movement migrates from its default joint to a more distal joint and therefore undergoes reduction.
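The joint-migration mechanism just described can be sketched as movement along a simple ordered hierarchy of joints; the four-joint list below is a simplification for illustration and is not an exhaustive anatomical model.

```python
# Joint hierarchy from most proximal to most distal (an illustrative
# simplification of the proximization/distalization description above).
JOINTS = ["shoulder", "elbow", "wrist", "finger"]

def proximize(joint):
    """Shift movement one joint closer to the body (larger amplitude)."""
    i = JOINTS.index(joint)
    return JOINTS[max(i - 1, 0)]

def distalize(joint):
    """Shift movement one joint further from the body (reduced amplitude)."""
    i = JOINTS.index(joint)
    return JOINTS[min(i + 1, len(JOINTS) - 1)]

print(proximize("wrist"))   # elbow
print(distalize("elbow"))   # wrist
```

On this encoding, a proximized movement is realized at a larger-amplitude joint and a distalized one at a smaller-amplitude joint, with the shoulder and finger joints as the endpoints of the scale.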
Reflective of the different ways of understanding visual sonority, a number of sonority hierarchies have been proposed in the sign language literature, ranking the articulatory parameters movement, handshape, orientation, and location based on their relative sonority value. To illustrate, Corina (1990) regards movement as the most sonorous articulatory component of signs, followed by change in handshape, orientation, and location. Sandler (1993) proposes a hierarchy in which Locations, contacting and plain, are low in sonority and Movements, hand-internal and path, are high in sonority. According to Sandler's model, contacting locations are physical places of articulation which are in direct contact with the signer's body (e.g., when contact is established between the signing hand and the signer's body) and therefore limit movement. Trilled movements, on the other hand, combine greater-amplitude path movements with hand-internal trilling motion, rendering them the most sonorous aspects of signs. To summarize, a number of theoretical proposals endorse the special status of movement as a possible physical correlate of visual sonority. However, there seems to be a lack of agreement on how movement should be operationalized. For example, Sandler (1993) proposes that degrees of visual sonority may be operationalized as types of movement carried out during a sign's production. For Crasborn (2001), it is the perceived variation in the movement segment, reflected in the size/amplitude of the movement gesture present in a sign.
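The competing rankings can be made explicit by encoding them as ordered lists, most to least sonorous. The orderings below are illustrative paraphrases: the first follows the description of Corina (1990) above, and the second the joint-proximity ranking attributed to Brentari (1998) later in the text; neither is a complete rendering of the original proposals.

```python
# Illustrative encodings of two sonority rankings discussed in the text,
# ordered from most to least sonorous. These paraphrase the prose
# descriptions and are not full models of the original proposals.
HIERARCHIES = {
    "Corina (1990)":   ["movement", "handshape change", "orientation", "location"],
    "Brentari (1998)": ["location", "orientation", "handshape"],  # by joint proximity
}

def sonority_rank(proposal, parameter):
    """Return the 1-based sonority rank of a parameter (1 = most sonorous)."""
    ranking = HIERARCHIES[proposal]
    return ranking.index(parameter) + 1 if parameter in ranking else None

print(sonority_rank("Corina (1990)", "movement"))     # 1
print(sonority_rank("Brentari (1998)", "handshape"))  # 3
```

Making the rankings explicit in this way highlights the empirical disagreement: the two proposals place movement and location at opposite ends of the scale.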
Whereas movement has been endorsed by many as a possible basis for visual sonority, the perception literature has demonstrated that native deaf signers and second language learners of sign languages acquire and identify movement features significantly later in development and with more errors than the other phonological parameters of signs. To illustrate, adult native ASL signers demonstrate difficulties perceiving movement-based and handshape-based contrasts in visually degraded presentations of ASL utterances (Fischer, Delhorne, & Reed, 1999; Fischer & Tartter, 1985; Tartter & Fischer, 1982). Deaf children engage more proximal articulators (e.g., elbow or shoulder joints) to produce non-target-like, motorically simplified movement (Meier, 2006; Meier, Mauk, Cheek, & Moreland, 2008). Hearing ASL L2 learners discern movement-based phonological contrasts with more difficulty than handshape-based or location-based contrasts (Williams & Newman, 2016; Hilger et al., 2015). Finally, adult non-signers tested on their perception of sign articulatory features use superficial visual cues when processing sign language input and experience the most difficulty perceiving handshapes and movement (e.g., Bochner, Christie, Hauser, & Searls, 2011). To summarize, while there seems to be a consensus regarding the very important status of movement for sign well-formedness and its contribution to perceptual salience in the sign domain, an apparent empirical challenge is how best to account for the perception difficulties associated with movement, such as those reported in the sign language literature.
Brentari's (1998) Sonority Hierarchy for ASL assumes a different model of visual sonority, which hinges on the relative proximity of the articulating joint to the signer's body. Larger-scale articulators, including the shoulder, elbow, and wrist joints, deliver more perceptually salient contrasts than smaller-scale articulators, such as finger joints.
In a sense, the larger the sign being made due to the size of the articulators used, the more sonorous it is, because the larger articulators produce a 'louder' sign (Sandler & Lillo-Martin, 2006). On this view, the endpoints of this sonority scale are handshapes, highly configurationally complex but relatively low in sonority due to spatial compression, and locations, high in sonority due to being articulated by the most proximal and therefore largest joints, the shoulder and elbow.
In summary, the concept of visual sonority has been widely endorsed in the sign language literature. Various phonetic bases of visual sonority have been proposed, including the most dynamic component present in the sign articulation and the relative size of the active articulator. To shed more light on the perceptual correlate(s) of visual sonority, one may need to examine which articulatory sign features aid the most in the perception of ASL phonological contrasts. Furthermore, the perceptual salience of the articulatory parameters of signs may seem less critical for native ASL signers, whose sensitivity to even the most subtle contrasts is set high due to early exposure to a sign language. At the same time, it may be rather critical for individuals with native proficiency in a different sign language or for hearing individuals with no prior exposure to a sign language. For that reason, we regard two populations as particularly suited for gauging the relative perceptual contribution of the articulatory parameters handshape, orientation, location, and movement to discriminating phonological contrasts in ASL: adult deaf L2 learners of ASL with experience in a different sign language, and hearing sign-naïve individuals who are spoken language users with no prior exposure to any sign language.

The present study
The present study seeks to evaluate the relative perceptual salience of the gestural components of ASL signs: handshape (henceforth, HS), orientation (henceforth, ORI), location (henceforth, LOC), and movement (henceforth, MOV). To this end, we use a closed-set sentence discrimination task developed by Bochner et al. (2011) and test the perception of ASL phonological contrasts by sign-naïve speakers of American English and by deaf L2 learners of ASL proficient in another sign language. By doing so, we seek to determine which articulatory parameters of signs relay phonological contrasts perceptible even to sign-naïve individuals and which are likely to present areas of maximal difficulty in (non-native) phonological acquisition and the development of sign perception abilities. The findings of this study shed light on which gestural components of ASL signs could serve as the phonetic basis for visual sonority in the sign domain and enhance the perceptibility of phonemic contrasts for ASL L2 learners as well as first-time ASL signers.

Method
Using a closed-set sentence discrimination task developed by Bochner et al. (2011), we evaluated the relative perceptual salience of the articulatory features of American Sign Language (ASL), as proxied by the rate of successful discrimination of ASL sentence pairs which differed in one aspect of their visuo-spatial configuration; we also tested the ability of hearing English speakers with no experience in any sign language to detect phonological contrasts encoded by the ASL gestural components HS, ORI, MOV, and LOC.
During testing, participants were seated in front of a computer monitor in a quiet room and were presented with video recordings of sentence pairs in which the difference between the sentences, when present, was localized to a sentence-mid or a sentence-final sign and was lexical (e.g., MOTHER/FATHER, see Figures 1a and 1b) or morphological (e.g., 1-MONTH/6-MONTHS, not shown).
Figures 1a, b: The ASL signs "mother" and "father" present a minimal pair discriminated based on the Location parameter. The Location for "mother" is at the signer's chin and for "father" at the signer's forehead.
Each test trial contained a test sentence presented by a model native signer and reproduced by two different native ASL signers, in the following order: model → signer1; model → signer2. Participants judged each sentence pair as same or different, thus making two judgments per trial.
For every sentence pair, every possible combination of same/different judgments could occur. Each of the gestural components enabling successful discrimination of the minimal sign pairs was represented by 8 test items. The task included 6 practice trials, during which feedback was provided and the relevant articulatory contrast was explained, and 48 test trials; with two sentence pairs per trial, this yielded a total of 54 matching and 54 non-matching sentence pairs. The task was self-paced: participants initiated each new test trial with a button press and could take breaks between test trials as needed.
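The trial bookkeeping implied by these counts can be sketched as a quick arithmetic check, under our reading of the design in which each trial contributes two sentence-pair judgments:

```python
# Bookkeeping for the task design described above (our reading of the counts).
practice_trials = 6
test_trials = 48
judgments_per_trial = 2  # model vs. signer1, model vs. signer2

total_pairs = (practice_trials + test_trials) * judgments_per_trial
matching_pairs = total_pairs // 2  # same and different pairs are balanced

print(total_pairs)     # 108 sentence-pair judgments in total
print(matching_pairs)  # 54 matching and 54 non-matching pairs
```

The balanced split keeps chance performance at 50%, which is the baseline against which above-chance discrimination is assessed later in the paper.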

Participants
The sign-experienced participants in this study were 25 foreign-born Deaf L2 learners of ASL who reported extensive exposure to and proficiency in Emirati, Kuwaiti, and Saudi sign languages (mean age: 19;03; mean length of non-ASL sign language exposure = 193;8 mo.; mean length of ASL exposure = 15;2 mo.). Twenty-eight sign-naïve adult speakers of American English (21 females, mean age 27;09) also participated; all reported having normal hearing and no prior experience with any sign language.

Predictions
We anticipated that, owing to their extensive prior exposure to a different sign language, the deaf L2 ASL signers would discriminate phonological contrasts based on all articulatory sign components at a higher rate than our sign-naïve participants. We also anticipated that the sign-naïve participants would demonstrate a baseline level of sensitivity to perceptually salient phonological contrasts in ASL despite having no prior exposure to a sign language and despite processing sign language input as mostly visual (as opposed to linguistic) signal.
For both participant groups, we predicted to observe a differential contribution of the phonological contrast type (HS-based, LOC-based, ORI-based, or MOV-based) to the likelihood of successful discrimination. Given that perceptually salient phonological contrasts should facilitate discrimination, if MOV is salient and presents the phonetic basis for visual sonority, we should observe greater sensitivity to MOV-based contrasts. If the relative size of the active articulator (or, using Brentari's terminology, the proximity of the active articulator to the signer's body) presents the phonetic basis for visual sonority, we expected to obtain dissimilar rates of sensitivity to HS-based (maximally distal and lower in visual sonority) vs. LOC-based (maximally proximal and more sonorous) contrasts.

Figures 2a-c: Results of the closed-set sentence discrimination task. X-axis: mean discrimination accuracy (%) obtained from individual study participants. Y-axis: in Figures 2a and 2b, the cumulative length of sign-experienced participants' exposure to a sign language in months; in Figure 2c, the cumulative length of sign-naïve participants' exposure to ASL, operationalized as the percentage of the task items completed during testing in the present study.

Visual examination of participant responses supports our first set of predictions regarding the effect of prior exposure to a sign language, or a lack thereof, on task performance. Figures 2a-b demonstrate that experienced signers' accuracy on the closed-set sentence discrimination task was positively correlated with the amount of previous exposure to a sign language other than ASL (Figure 2a) as well as the amount of previous ASL exposure (Figure 2b). No such relationship could be established for sign-naïve participants (Figure 2c), pointing to a systematic difference between the discrimination strategies used by the two participant groups.
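The exposure-accuracy relationship reported for the experienced signers can be illustrated with a minimal Pearson correlation sketch. The exposure and accuracy values below are synthetic, invented purely for illustration; they are not the study's data.

```python
# Minimal Pearson correlation, computed from scratch for illustration.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Synthetic (invented) example: accuracy rising with months of exposure.
exposure = [12, 60, 120, 180, 240]        # months of sign language exposure
accuracy = [0.55, 0.62, 0.71, 0.78, 0.86] # proportion correct

print(pearson_r(exposure, accuracy))  # strong positive correlation
```

A value near +1 corresponds to the positive exposure-accuracy relationship seen for the experienced signers; the flat pattern for sign-naïve participants would yield a value near 0.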

Figure 3: Results of the closed-set sentence discrimination task. Mean discrimination accuracy (%) on all test items (left panel) and on sentence pairs differing based on a morphological contrast (right panel).

Figure 3 demonstrates the mean accuracy rates obtained in the present study (light and shaded bars) as well as in the study by Bochner et al. (2011) (dark bars), which used the same task, for lexical and morphological contrasts. The differences in the obtained mean accuracy rates for native and non-native ASL signers suggest that the L2 ASL learners had to allocate more processing resources to recognizing the phonological shape of signs in a non-native sign language.

Figure 4: Results of the closed-set sentence discrimination task. Y-axis: mean discrimination accuracy (%) for experienced ASL signers (dark line) and English speakers with no experience in a sign language (light line); X-axis: contrast type; HS = Handshape; LOC = Location; MVT = Movement; ORI = Orientation; SAME = identical sentences (no contrast).

In their analyses of sentence discrimination data obtained from 127 hearing adult signers (beginning and intermediate levels) and 10 adult native signers, Bochner et al. (2011) determined that MOV-based phonological contrasts were associated with the lowest discrimination accuracy, followed by Handshape- and Location-based contrasts. The mean accuracy rates for each phonological contrast type obtained in the present study are plotted in Figure 4. On most contrast types, both participant groups demonstrated above-chance discrimination, with the exception of the contrasts signaled by the non-manual feature Tongue. Figure 4 also demonstrates that the difference in accuracy between deaf L2 ASL learners and non-signers, except for HS-based contrasts, ranges between 9% and 17%.

Table 1: Variables (fixed and random effects) tested in the mixed-effects logistic regression analyses of the responses obtained in the closed-set sentence discrimination task.

Dependent variable: the log likelihood of a correct identification of the sentences as same or different
Fixed effects: articulatory sign parameters (HS, MVT, LOC, ORI); contrast type (lexical vs. morphological)
Random effects (intercepts and slopes): Participant; Test item
The responses of the sign-naïve participants and the L2 ASL learners (Hits and Misses) were modeled using two mixed-effects logistic regression models summarized in Table 1. A separate model was fit to the data obtained from the sign-experienced and the sign-naïve participants. The models estimated the log likelihood of a correct identification of the sentences in a pair as "same" or "different". The output of these analyses is presented in Table 2.

Table 2: Results of the mixed-effects logistic regressions (fixed effects) modeling responses on the closed-set sentence discrimination task. Labels "Handshape", "Location", "Movement", and "Orientation" refer to the phonological contrast type in the test trials featuring different (as opposed to same) sentences. Shaded cells display statistically significant effects (p < .05) and the different coefficient signs associated with the main effect of HS-based contrasts.
The logistic regression analyses returned significant main effects of each phonological contrast type except Movement-based contrasts. Contrasts based on the articulatory parameter Location were associated with a greater likelihood of successful discrimination by both participant groups. The models also revealed main effects which were in the opposite direction for the sign-experienced and the sign-naïve participants or held for the sign-experienced ASL L2 learners only. To illustrate, HS-based contrasts were associated with a greater likelihood of accurate discrimination by sign-experienced ASL L2 learners, but presented a maximally difficult test category for the sign-naïve participants, who were more likely to miss contrastive differences based on the Handshape parameter. Furthermore, only the experienced signers were significantly more accurate on the ORI-based contrasts (this effect approaches significance for the sign-naïve respondents) and demonstrated sensitivity to the nature of the contrast, lexical vs. morphological. When the sentence pair differed in terms of lexical meaning, the likelihood of correct discrimination was higher, pointing to a greater complexity associated with the processing and comprehension of morphological meanings.
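The direction of such effects can be made concrete with a minimal sketch of the model structure: dummy-coded contrast types enter the linear predictor, and the inverse-logit maps log-odds to a predicted probability of a correct response. The coefficient values below are arbitrary placeholders for illustration, not the fitted estimates from Table 2, and the sketch omits the random effects.

```python
import math

# Sketch of the fixed-effects structure of a logistic model: log-odds of a
# correct same/different judgment as a function of dummy-coded contrast type.
# Coefficients are arbitrary placeholders, NOT the fitted estimates.
COEFS = {"intercept": 1.0, "Location": 0.8, "Orientation": 0.4,
         "Handshape": -0.6, "Movement": 0.1}

def p_correct(contrast):
    """Predicted probability of a correct response for a contrast type."""
    log_odds = COEFS["intercept"] + COEFS.get(contrast, 0.0)
    return 1 / (1 + math.exp(-log_odds))  # inverse-logit

# A positive coefficient (e.g., Location) raises predicted accuracy relative
# to the baseline; a negative one (e.g., Handshape) lowers it.
print(p_correct("Location") > p_correct("Handshape"))  # True
```

In the fitted models, the sign and significance of each coefficient play exactly this role: a significant positive Location coefficient corresponds to the higher discrimination accuracy on Location-based contrasts.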
To summarize, the results of the closed-set sentence discrimination task reveal that phonological contrasts based on handshape, configurationally complex but spatially compressed and possibly low in visual sonority, presented a likely area of maximal difficulty for sign-naïve participants. For all participants, contrasts based on the location parameter and involving larger-scale articulators, such as the shoulder joint, were associated with robust categorical discrimination. Consistent with Brentari's (1998) model of visual sonority, orientation-based contrasts, too, are predicted to be more perceptually salient than handshape-based contrasts but less so than location-based contrasts. The results obtained from the sign-naïve participants as well as from the experienced signers are consistent with this proposal: numerically greater accuracy was associated with orientation-based contrasts, possibly because such contrasts engage the wrist joint, unlike the handshape-based contrasts, which are spatially limited to the dimensions of the signer's hand. Finally, movement-based contrasts did not pattern with either location-based contrasts (high probability of accurate discrimination) or handshape-based contrasts (lower probability of correct discrimination) and were not predictive of accurate discrimination for L2 ASL signers (z = 1.54, p = .13) or non-signers (z = .004, p = .97).
Taken as a whole, these results provide evidence that the relative size of the active articulator may present the possible phonetic basis for visual sonority in sign languages, consistent with the proposal for ASL detailed in Brentari (1998).

Discussion
The present study provides an empirical test of visual sonority in ASL. We used the closed-set sentence discrimination task developed by Bochner et al. (2011) to gauge the relative perceptual salience of the articulatory sign parameters Handshape, Orientation, Movement, and Location in cuing phonological contrasts during sentence comprehension. For this study, we selected participants who were more likely to demonstrate systematically greater sensitivity to visual salience, or sonority, in the sign domain due to non-native proficiency in the target sign language or due to a lack of previous exposure to linguistic signs. The populations of interest were therefore deaf non-ASL signers learning ASL as their second sign language and sign-naïve hearing speakers of American English.
The results are consistent with the prediction that spoken language use does not preclude non-signers from processing perceptually salient phonological contrasts in ASL, even when these contrasts are not reinforced with meaning. This is evident from the finding that the difference in the accuracy rate between the experienced ASL signers and the non-signers, except for handshape-based contrasts (~15% of test items), fell within a narrow range of 9-17% and contributed to the remarkable parallelism in the observed response patterns. We conclude that the processing foundations of sentence comprehension may be modality-independent but reflective of the perceptually salient aspects of the linguistic signal.
Our results support that phonological processing in the sign domain is anchored in the relative perceptual saliency of the features marking phonological contrasts. Consistent with the Sonority Hierarchy proposed by Brentari (1998), in ASL, phonological contrasts based on the Handshape parameter may present a likely area of maximal difficulty in phonological development, unlike contrasts based on Location and Orientation, which are high in visual sonority and perceptible even for first-time signers. More specifically, poor discrimination of handshape-based contrasts was characteristic only of our sign-naïve participants, whereas our sign-experienced ASL L2 learners were on average over 80% accurate in detecting such contrasts. The observed dissociation in the role that the handshape parameter plays in phonological categorization is in line with the previous findings by Grosvald, Gutierrez, Hafer, and Corina (2012) and Williams and Newman (2016). These earlier studies report that deaf signers, unlike hearing non-signers, perceive marked handshapes better than unmarked handshapes, pointing to the fact that even lower sonority segments may serve as robust discriminators, as long as they are systematically used as a distinctive feature in a given language.
This study reports no conclusive evidence regarding the status of movement in phonological categorization in ASL. One theoretical proposal we were interested in testing is whether the movement parameter provides the phonetic basis for visual sonority and, as such, would facilitate discrimination of movement-based contrasts due to the high perceptibility of movement in the sign domain (Brentari, 1992).
Recall that earlier studies reported phonological discrimination based on movement to be subject to sign language proficiency effects as well as visual signal quality effects (e.g., Emmorey, 2002). Whereas our instrument offered high-quality video recordings of sentences produced by native ASL signers, the language background of our participants could contribute to the lack of a main effect of movement-based contrasts in our inferential statistical analyses. Numerically, experienced signers were less accurate when discriminating movement-based phonological contrasts than contrasts based on the other parameters tested in the closed-set sentence discrimination task. Movement-based contrasts were the second most difficult test category for the naïve signers, who were the least accurate on handshape-based contrasts. Native-like proficiency in a sign language may be needed to draw more fine-grained distinctions between the informativeness of movement-based vs. other parameter-based phonological contrasts. This may be accounted for by appealing to the relatively greater computational complexity of movement, as well as its semblance to communicative gestures, both of which possibly compromised categorization for the sign-naïve participants. Finally, our null result may be attributed to the limitations of the test instrument, which did not systematically vary different movement types, such as path movements, executed by engaging larger articulators, and hand-internal movements.
The sonority status of the movement parameter in ASL remains unresolved in the present investigation. Future research must focus on testing a more nuanced view of movement. This will require, for example, accounting for movement that is more local and spatially compressed (hand-internal) vs. greater in amplitude (path). A more nuanced view of sign movement would also involve awareness of other articulatory parameters simultaneously co-implemented in the sign, e.g., Handshape for hand-internal movements and Location for path movements.

Conclusion
The present study finds that in perception, L2 learners of ASL proficient in a different sign language, as well as sign-naïve hearing speakers of American English, demonstrate selective sensitivity to the manual parameters of sign handshape, orientation, location, and movement. Results of a closed-set sentence discrimination task reveal that phonological contrasts cued by means of larger-scale articulators, such as the shoulder and elbow joints, and possibly high in visual sonority, are readily detectable even by first-time ASL perceivers. Our findings provide empirical support for Brentari's Sonority Hierarchy in sign language (Brentari, 1998; Sandler & Lillo-Martin, 2006). More broadly, our findings suggest that strategies for processing language input in the sign domain by sign-experienced and sign-naïve perceivers hinge on the perceptually salient properties of the linguistic signal and point to modality-independent foundations of natural languages, signed and spoken.