1. Fingerspelling and Its Role in Translanguaging. Languages (Basel) 2022; 7:278. PMID: 37920277; PMCID: PMC10622114; DOI: 10.3390/languages7040278
Abstract
Fingerspelling is a critical component of many sign languages. This manual representation of orthographic code is one key way in which signers engage in translanguaging, drawing from all of their linguistic and semiotic resources to support communication. Translanguaging in bimodal bilinguals is unique because it involves drawing from languages in different modalities, namely a signed language like American Sign Language and a spoken language like English (or its written form). Fingerspelling can be seen as a unique product of the unified linguistic system that translanguaging theories posit, as it blends features of both sign and print. The goals of this paper are twofold: to integrate existing research on fingerspelling in order to characterize it as a cognitive-linguistic phenomenon, and to discuss the role of fingerspelling in translanguaging and communication. We first review and synthesize research from linguistics and cognitive neuroscience to summarize our current understanding of fingerspelling: its production, comprehension, and acquisition. We then discuss how fingerspelling relates to translanguaging theories and how it can be incorporated into translanguaging practices to support literacy and other communication goals.
2. Cross-modal and cross-language activation in bilinguals reveals lexical competition even when words or signs are unheard or unseen. Proc Natl Acad Sci U S A 2022; 119:e2203906119. PMID: 36037359; PMCID: PMC9457174; DOI: 10.1073/pnas.2203906119
Abstract
We exploit the phenomenon of cross-modal, cross-language activation to examine the dynamics of language processing. Previous within-language work showed that seeing a sign coactivates phonologically related signs, just as hearing a spoken word coactivates phonologically related words. In this study, we conducted a series of eye-tracking experiments using the visual world paradigm to investigate the time course of cross-language coactivation in hearing bimodal bilinguals (Spanish and Spanish Sign Language) and unimodal bilinguals (Spanish and Basque). The aim was to gauge whether (and how) seeing a sign could coactivate words and, conversely, how hearing a word could coactivate signs, and how such cross-language coactivation patterns differ from within-language coactivation. The results revealed cross-language, cross-modal activation in both directions. Furthermore, comparison with previous findings of within-language lexical coactivation for spoken and signed language showed how the impact of temporal structure changes in different modalities. Spoken word activation follows the temporal structure of that word only when the word itself is heard; for signs, the temporal structure of the sign does not govern the time course of lexical access (location coactivation precedes handshape coactivation), even when the sign is seen. We provide evidence that this pattern of activation is instead motivated by how common the sublexical units of the signs are in the lexicon. These results reveal the interaction between the perceptual properties of the explicit signal and structural linguistic properties. Examining languages across modalities illustrates how this interaction impacts language processing.
3. Challenging the oral-only narrative: Enhancing early signed input for deaf children with hearing parents. Hrvatska Revija za Rehabilitacijska Istraživanja 2022; 58:6-26. PMID: 37396568; PMCID: PMC10311540; DOI: 10.31299/hrri.58.si.1
Abstract
Learning a language is, at its core, a process of noticing patterns in the language input surrounding the learner. Although many of these language patterns are complex and difficult for adult speakers/signers to recognize, infants are able to find and learn them from the youngest age, without explicit instruction. However, this impressive feat is dependent on children's early access to ample and well-formed input that displays the regular patterns of natural language. Such input is far from guaranteed for the great majority of deaf and hard of hearing (DHH) children, leading to well-documented difficulties and delays in linguistic development. Efforts to remedy this situation have focused disproportionately on amplifying DHH children's hearing levels, often through cochlear implants, as young as possible to facilitate early access to spoken language. Given the time required for cochlear implantation, its lack of guaranteed success, and the critical importance of exposing infants to quality language input as early as possible, a bimodal bilingual approach can optimize DHH infants' chances for on-time language development by providing them with both spoken and signed language input from the start. This paper addresses the common claim that signing with DHH children renders the task of learning spoken language more difficult, leading to delays and inferior language development, compared to DHH children in oral-only environments. That viewpoint has most recently been articulated by Geers et al. (2017a), which I will discuss as representative of the many studies promoting an oral-only approach. Contrary to their claims that signing degrades the language input available to DHH children, recent research has demonstrated that the formidable pattern-finding skills of newborn infants extend to linguistic cues in both the spoken and signed modalities, and that the additional challenge of simultaneously acquiring two languages is offset by important "bilingual advantages."
Of course, securing early access to high quality signed input for DHH children from hearing families requires considerable effort, especially since most hearing parents are still novice signers. This paper closes with some suggestions for how to address this challenge through partnerships between linguistics researchers and early intervention programs to support family-centered bimodal bilingual development for DHH children.
4. Manual and Spoken Cues in French Sign Language's Lexical Access: Evidence From Mouthing in a Sign-Picture Priming Paradigm. Front Psychol 2021; 12:655168. PMID: 34113290; PMCID: PMC8185165; DOI: 10.3389/fpsyg.2021.655168
Abstract
Although sign languages are gestural languages, some linguistic information can also be conveyed by spoken components such as mouthing. Mouthing usually tends to reproduce the most relevant phonetic part of the spoken word equivalent to the manual sign. A crucial issue in sign language research is therefore whether mouthing is part of the signs themselves, and to what extent it contributes to the construction of sign meaning. A further question is whether mouthing patterns constitute a phonological or a semantic cue in the lexical sign entry. This study investigated the role of mouthing in the processing of lexical signs in French Sign Language (LSF), according to the type of bilingualism (intramodal vs. bimodal). For this purpose, a behavioral sign-picture lexical decision experiment was designed. Intramodal signers (native deaf adults) and bimodal signers (fluent hearing adults) had to decide as quickly as possible whether a picture matched the sign seen just before. Five experimental conditions were created in which the sign-mouthing pairing was either congruent or incongruent. Our results showed a strong interference effect when the sign-mouthing pairing was incongruent, reflected in higher error rates and lengthened reaction times compared with the congruent condition. This finding suggests that both groups of signers use the lexical information contained in mouthing when accessing sign meaning. In addition, deaf intramodal signers showed stronger interference than hearing bimodal signers. Taken together, our data indicate that mouthing is a determining factor in LSF lexical access, particularly for deaf signers.
5. Effects of deafness and sign language experience on the human brain: voxel-based and surface-based morphometry. Language, Cognition and Neuroscience 2021; 36:422-439. PMID: 33959670; PMCID: PMC8096161; DOI: 10.1080/23273798.2020.1854793
Abstract
We investigated how deafness and sign language experience affect the human brain by comparing neuroanatomical structures across congenitally deaf signers (n = 30), hearing native signers (n = 30), and hearing sign-naïve controls (n = 30). Both voxel-based and surface-based morphometry results revealed deafness-related structural changes in visual cortices (grey matter), right frontal lobe (gyrification), and left Heschl's gyrus (white matter). The comparisons also revealed changes associated with lifelong signing experience: expansions in the surface area within left anterior temporal and left occipital lobes, and a reduction in cortical thickness in the right occipital lobe for deaf and hearing signers. Structural changes within these brain regions may be related to adaptations in the neural networks involved in processing signed language (e.g. visual perception of face and body movements). Hearing native signers also had unique neuroanatomical changes (e.g. reduced gyrification in premotor areas), perhaps due to lifelong experience with both a spoken and a signed language.
6. Cross-linguistic metaphor priming in ASL-English bilinguals: Effects of the Double Mapping Constraint. Sign Language and Linguistics 2020; 23:96-111. PMID: 33994844; PMCID: PMC8115326; DOI: 10.1075/sll.00045.sch
Abstract
Meir's (2010) Double Mapping Constraint (DMC) states that the use of iconic signs in metaphors is restricted to signs that preserve the structural correspondence between the articulators and the concrete source domain and between the concrete and metaphorical domains. We investigated ASL signers' comprehension of English metaphors whose translations either complied with the DMC ("Communication collapsed during the meeting") or violated it ("The acid ate the metal"). Metaphors were preceded by the ASL translation of the English verb, an unrelated sign, or a still video. Participants made sensibility judgments. Response times (RTs) were faster for DMC-compliant sentences with verb primes compared to unrelated primes or the still baseline. RTs for DMC-violation sentences were longer when preceded by verb primes. We propose that the structured iconicity of the ASL verbs primed the semantic features involved in the iconic mapping, and that these primed semantic features facilitated comprehension of DMC-compliant metaphors and slowed comprehension of DMC-violation metaphors.
7. Second language acquisition of American Sign Language influences co-speech gesture production. Bilingualism (Cambridge, England) 2020; 23:473-482. PMID: 32733161; PMCID: PMC7392225
Abstract
Previous work indicates that 1) adults with native sign language experience produce more manual co-speech gestures than monolingual non-signers, and 2) one year of ASL instruction increases gesture production in adults, but not enough to differentiate them from non-signers. To elucidate these effects, we asked early ASL-English bilinguals, fluent late second language (L2) signers (≥ 10 years of experience signing), and monolingual non-signers to retell a story depicted in cartoon clips to a monolingual partner. Early and L2 signers produced manual gestures at higher rates compared to non-signers, particularly iconic gestures, and used a greater variety of handshapes. These results indicate susceptibility of the co-speech gesture system to modification by extensive sign language experience, regardless of the age of acquisition. L2 signers produced more ASL signs and more handshape varieties than early signers, suggesting less separation between the ASL lexicon and the co-speech gesture system for L2 signers.
8. [Not Available]
Abstract
Bimodal bilinguals sometimes use code-blending, the simultaneous production of (parts of) an utterance in both speech and sign. We ask what spoken language material is blended with entity and handling depicting signs (DS): representations of action that combine discrete components with iconic depictions of aspects of a referenced event in a gradient, analog manner. We test a semantic approach under which DS may involve a demonstration, a predicate that obligatorily includes a modificational demonstrational component, and adopt a syntactic analysis that crucially distinguishes between entity and handling DS. Given the model of bilingualism we use, we expect that both types of DS can be produced with speech that occurs in the verbal structure, along with vocal gestures, but that speech including a subject will be blended only with handling DS, not entity DS. The data we report from three Codas, native bimodal bilinguals, from the United States and one from Brazil conform with this prediction.
9. ERP Evidence for Co-Activation of English Words during Recognition of American Sign Language Signs. Brain Sci 2019; 9:E148. PMID: 31234356; PMCID: PMC6627215; DOI: 10.3390/brainsci9060148
Abstract
Event-related potentials (ERPs) were used to investigate co-activation of English words during recognition of American Sign Language (ASL) signs. Deaf and hearing signers viewed pairs of ASL signs and judged their semantic relatedness. Half of the semantically unrelated signs had English translations that shared an orthographic and phonological rime (e.g., BAR-STAR) and half did not (e.g., NURSE-STAR). Classic N400 and behavioral semantic priming effects were observed in both groups. For hearing signers, targets in sign pairs with English rime translations elicited a smaller N400 compared to targets in pairs with unrelated English translations. In contrast, a reversed N400 effect was observed for deaf signers: target signs in English rime translation pairs elicited a larger N400 compared to targets in pairs with unrelated English translations. This reversed effect was overtaken by a later, more typical ERP priming effect for deaf signers who were aware of the manipulation. These findings provide evidence that implicit language co-activation in bimodal bilinguals is bidirectional. However, the distinct pattern of effects in deaf and hearing signers suggests that it may be modulated by differences in language proficiency and dominance as well as by asymmetric reliance on orthographic versus phonological representations.
10. Impact of Language Experience on Attention to Faces in Infancy: Evidence From Unimodal and Bimodal Bilingual Infants. Front Psychol 2018; 9:1943. PMID: 30459671; PMCID: PMC6232685; DOI: 10.3389/fpsyg.2018.01943
Abstract
Faces capture and maintain infants' attention more than other visual stimuli. The present study addresses the impact of early language experience on attention to faces in infancy. It was hypothesized that infants learning two spoken languages (unimodal bilinguals) and hearing infants of Deaf mothers learning British Sign Language and spoken English (bimodal bilinguals) would show enhanced attention to faces compared to monolinguals. The comparison between unimodal and bimodal bilinguals allowed differentiation of the effects of learning two languages from the effects of increased visual communication in hearing infants of Deaf mothers. Data are presented for two independent samples of infants: Sample 1 included 49 infants between 7 and 10 months (26 monolinguals and 23 unimodal bilinguals), and Sample 2 included 87 infants between 4 and 8 months (32 monolinguals, 25 unimodal bilinguals, and 30 bimodal bilingual infants with a Deaf mother). Eye-tracking was used to analyze infants' visual scanning of complex arrays including a face and four other stimulus categories. Infants from 4 to 10 months (all groups combined) directed their attention to faces faster than to non-face stimuli (i.e., attention capture), and directed more fixations to, and looked longer at, faces than non-face stimuli (i.e., attention maintenance). Unimodal bilinguals demonstrated increased attention capture and attention maintenance by faces compared to monolinguals. Contrary to predictions, bimodal bilinguals did not differ from monolinguals in attention capture and maintenance by face stimuli. These results are discussed in relation to the language experience of each group and the close association between face processing and language development in social communication.
11. Acquisition of Classifier Constructions in HKSL by Bimodal Bilingual Deaf Children of Hearing Parents. Front Psychol 2018; 9:1148. PMID: 30083114; PMCID: PMC6064956; DOI: 10.3389/fpsyg.2018.01148
Abstract
The current study focuses on the acquisition of classifier constructions in Hong Kong Sign Language (HKSL) by a group of Deaf children of hearing parents, aided or implanted. These children have been mainstreamed together since kindergarten, but their learning environment supports dual language input in Cantonese and HKSL on a daily basis. Classifier constructions were chosen because previous research suggested full mastery at a late age compared with other verb types, due to their morphosyntactic complexity. Also, crosslinguistic comparison between HKSL and Cantonese reveals differences in verb morphology as well as in the word order of the structures under investigation. We predicted that verb root and word order were the two domains in which crosslinguistic interaction would occur. At a more general level, given the specific learning environment and dual input condition, we examined whether these Deaf child learners could ultimately acquire classifier constructions. Fifteen Deaf children, divided into four groups based on duration of exposure to HKSL, participated in the study. Two Deaf children born to Deaf parents and three native HKSL signers served as controls. A picture description task was designed to elicit classifier constructions containing either a transitive, a locative existential, or a motion directional predicate. The findings revealed the Deaf children's gradual convergence on the adult grammar despite late exposure to HKSL. Evidence of crosslinguistic influence on word order came from the Deaf children's initial adoption of a Cantonese structure for locative existential and motion directional predicates. There was also a prolonged period of adherence to the SVO order across all grades. However, within this SVO structure, the verb revealed increasing morphological complexity as a function of longer duration of exposure. We interpreted the findings using Language Synthesis, arguing that it was the selection of morphosyntactic features in Numeration that triggered crosslinguistic interaction between Cantonese and HKSL in bimodal bilinguals.
12. Evidence for a bimodal bilingual disadvantage in letter fluency. Bilingualism (Cambridge, England) 2017; 20:42-48. PMID: 28785168; PMCID: PMC5544419; DOI: 10.1017/s1366728916000596
Abstract
Many bimodal bilinguals are immersed in a spoken language-dominant environment from an early age and, unlike unimodal bilinguals, do not necessarily divide their language use between languages. Nonetheless, early ASL-English bilinguals retrieved fewer words in a letter fluency task in their dominant language compared to monolingual English speakers with equal vocabulary level. This finding demonstrates that reduced vocabulary size and/or frequency of use cannot completely account for bilingual disadvantages in verbal fluency. Instead, retrieval difficulties likely reflect between-language interference. Furthermore, it suggests that the two languages of bilinguals compete for selection even when they are expressed with distinct articulators.
13. Language co-activation and lexical selection in bimodal bilinguals: Evidence from picture-word interference. Bilingualism (Cambridge, England) 2016; 19:264-276. PMID: 26989347; PMCID: PMC4790112; DOI: 10.1017/s1366728915000097
Abstract
We used picture-word interference (PWI) to discover a) whether cross-language activation at the lexical level can yield phonological priming effects when languages do not share phonological representations, and b) whether semantic interference effects occur without articulatory competition. Bimodal bilinguals fluent in American Sign Language (ASL) and English named pictures in ASL while listening to distractor words that were 1) translation equivalents, 2) phonologically related to the target sign through translation, 3) semantically related, or 4) unrelated. Monolingual speakers named pictures in English. Production of ASL signs was facilitated by words that were phonologically related through translation and by translation equivalents, indicating that cross-language activation spreads from lexical to phonological levels for production. Semantic interference effects were not observed for bimodal bilinguals, providing some support for a post-lexical locus of semantic interference, but which we suggest may instead reflect time course differences in spoken and signed production in the PWI task.
14. Psycholinguistic, cognitive, and neural implications of bimodal bilingualism. Bilingualism (Cambridge, England) 2016; 19:223-242. PMID: 28804269; PMCID: PMC5553278; DOI: 10.1017/s1366728915000085
Abstract
Bimodal bilinguals, fluent in a signed and a spoken language, exhibit a unique form of bilingualism because their two languages access distinct sensory-motor systems for comprehension and production. Differences between unimodal and bimodal bilinguals have implications for how the brain is organized to control, process, and represent two languages. Evidence from code-blending (simultaneous production of a word and a sign) indicates that the production system can access two lexical representations without cost, and the comprehension system must be able to simultaneously integrate lexical information from two languages. Further, evidence of cross-language activation in bimodal bilinguals indicates the necessity of links between languages at the lexical or semantic level. Finally, the bimodal bilingual brain differs from the unimodal bilingual brain with respect to the degree and extent of neural overlap for the two languages, with less overlap for bimodal bilinguals.
15. [Not Available]
Abstract
Bilingual children develop sensitivity to the language used by their interlocutors at an early age, reflected in differential use of each language by the child depending on their interlocutor. Factors such as discourse context and relative language dominance in the community may mediate the degree of language differentiation in preschool age children. Bimodal bilingual children, acquiring both a sign language and a spoken language, have an even more complex situation. Their Deaf parents vary considerably in access to the spoken language. Furthermore, in addition to code-mixing and code-switching, they use code-blending (expressions in both speech and sign simultaneously), an option uniquely available to bimodal bilinguals. Code-blending is analogous to code-switching sociolinguistically, but is also a way to communicate without suppressing one language. For adult bimodal bilinguals, complete suppression of the non-selected language is cognitively demanding. We expect that bimodal bilingual children also find suppression difficult, and use blending rather than suppression in some contexts. We also expect relative community language dominance to be a factor in children's language choices. This study analyzes longitudinal spontaneous production data from four bimodal bilingual children and their Deaf and hearing interlocutors. Even at the earliest observations, the children produced more signed utterances with Deaf interlocutors and more speech with hearing interlocutors. However, while three of the four children produced >75% speech alone in speech target sessions, they produced <25% sign alone in sign target sessions. All four produced bimodal utterances in both, but more frequently in the sign sessions, potentially because they find suppression of the dominant language more difficult. Our results indicate that these children are sensitive to the language used by their interlocutors, while showing considerable influence from the dominant community language.
16. [Not Available]. Letras de Hoje 2013; 48:389-398. PMID: 25506105; PMCID: PMC4262527
Abstract
This study investigates the phonological acquisition of Brazilian Portuguese (BP) by a group of 24 bimodal bilingual hearing children, who have unrestricted access to Brazilian Sign Language (Libras), and a group of 6 deaf children who use cochlear implants (CIs), with restricted or unrestricted access to Libras. The children's phonological system in BP was assessed with the Naming Task (Part A) of the ABFW Children's Language Test (Andrade et al., 2004). The results revealed that the hearing children and the deaf child with a CI, all with full access to Libras, showed the expected (normal) phonological acquisition for their age groups. We consider that early acquisition of and unrestricted access to Libras may have determined these children's performance in the oral tests.
17. The bimodal bilingual brain: effects of sign language experience. Brain and Language 2009; 109:124-132. PMID: 18471869; PMCID: PMC2680472; DOI: 10.1016/j.bandl.2008.03.005
Abstract
Bimodal bilinguals are hearing individuals who know both a signed and a spoken language. Effects of bimodal bilingualism on behavior and brain organization are reviewed, and an fMRI investigation of the recognition of facial expressions by ASL-English bilinguals is reported. The fMRI results reveal separate effects of sign language and spoken language experience on activation patterns within the superior temporal sulcus. In addition, the strong left-lateralized activation for facial expression recognition previously observed for deaf signers was not observed for hearing signers. We conclude that both sign language experience and deafness can affect the neural organization for recognizing facial expressions, and we argue that bimodal bilinguals provide a unique window into the neurocognitive changes that occur with the acquisition of two languages.