1. Caldwell HB. Sign and Spoken Language Processing Differences in the Brain: A Brief Review of Recent Research. Ann Neurosci 2022; 29:62-70. [PMID: 35875424] [PMCID: PMC9305909] [DOI: 10.1177/09727531211070538]
Abstract
Background: It is currently accepted that sign languages and spoken languages share significant processing commonalities. The evidence supporting this typically examines frontotemporal pathways, perisylvian language areas, hemispheric lateralization, and event-related potentials in typical settings. Recent evidence, however, has explored beyond this and uncovered numerous modality-dependent processing differences between sign languages and spoken languages by accounting for confounds that previously invalidated processing comparisons and by delving into the specific conditions in which those differences arise. Yet these processing differences are often dismissed as non-specific to language. Summary: This review examined recent neuroscientific evidence for processing differences between sign and spoken language modalities and the arguments against the importance of these differences. Key distinctions exist in the topography of the left anterior negativity (LAN) and in the modulation of event-related potential (ERP) components such as the N400. There is also differential activation of typical spoken language processing areas, such as the conditional role of the temporal areas in sign language (SL) processing. Importantly, sign language processing uniquely recruits parietal areas for processing phonology and syntax and requires the mapping of spatial information onto internal representations. Additionally, modality-specific feedback mechanisms distinctively involve proprioceptive post-output monitoring in sign languages, in contrast to the auditory and visual feedback mechanisms of spoken languages. The only study to find ERP differences post-production revealed earlier lexical access in sign than in spoken languages. Themes of temporality, the validity of the view that the two modalities rely on analogous anatomical mechanisms, and the comprehensiveness of current language models were also discussed to suggest improvements for future research. Key message: Current neuroscience evidence suggests various ways in which processing differs between sign and spoken language modalities, extending beyond simple differences between languages. Consideration and further exploration of these differences will be integral to developing a more comprehensive view of language in the brain.
Affiliation(s)
- Hayley Bree Caldwell
- Cognitive and Systems Neuroscience Research Hub (CSN-RH), School of Justice and Society, University of South Australia Magill Campus, Magill, South Australia, Australia
2. Liu L, Yan X, Li H, Gao D, Ding G. Identifying a supramodal language network in human brain with individual fingerprint. Neuroimage 2020; 220:117131. [PMID: 32622983] [DOI: 10.1016/j.neuroimage.2020.117131]
Abstract
Where is human language processed in the brain, independent of its form? We addressed this issue by analyzing the cortical responses to spoken, written, and signed sentences at the level of individual subjects. By applying a novel fingerprinting method based on the distributed pattern of brain activity, we identified a left-lateralized network composed of the superior temporal gyrus/sulcus (STG/STS), inferior frontal gyrus (IFG), precentral gyrus/sulcus (PCG/PCS), and supplementary motor area (SMA). In these regions, the local distributed activity pattern induced by any of the three language modalities can predict the activity pattern induced by the other two modalities, and such cross-modal prediction is individual-specific. The prediction is successful for speech-sign bilinguals across all possible modality pairs, but fails for monolinguals across sign-involved pairs. In comparison, conventional group-mean analysis detected shared cortical activations across modalities only in the STG, PCG/PCS, and SMA, and these shared activations were found in both groups. This study reveals the core language system in the brain that is shared by spoken, written, and signed language, and demonstrates that it is both possible and desirable to exploit individual differences for functional brain mapping.
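The fingerprint logic described in this abstract can be illustrated with a toy sketch: if a subject's distributed activity pattern is idiosyncratic but shared across modalities, the pattern measured in one modality should pick out the same subject's pattern in another modality. The sketch below uses synthetic data and a simple correlation-based matcher; the array sizes, noise levels, and matching rule are illustrative assumptions, not the study's actual pipeline.

```python
# Minimal sketch of cross-modal "fingerprint" identification, assuming each
# subject contributes one activity pattern (voxels in an ROI) per modality.
# Synthetic data only; preprocessing, ROI definition, and the paper's
# prediction model are not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_voxels = 20, 500

# Each subject has an idiosyncratic spatial pattern shared across modalities,
# plus modality-specific noise (the "supramodal" assumption).
base = rng.standard_normal((n_subjects, n_voxels))
spoken = base + 0.8 * rng.standard_normal((n_subjects, n_voxels))
signed = base + 0.8 * rng.standard_normal((n_subjects, n_voxels))

def identification_accuracy(patterns_a, patterns_b):
    """Identify each subject in modality B from their modality-A pattern
    by picking the subject whose pattern correlates most strongly."""
    hits = 0
    for i, pattern in enumerate(patterns_a):
        r = [np.corrcoef(pattern, other)[0, 1] for other in patterns_b]
        hits += int(np.argmax(r) == i)
    return hits / len(patterns_a)

print("spoken -> signed identification accuracy:",
      identification_accuracy(spoken, signed))
```

Under this setup, identification succeeds only when within-subject cross-modal similarity exceeds between-subject similarity, which is the sense in which the prediction is "individual-specific".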
Affiliation(s)
- Lanfang Liu
- Guangdong Provincial Key Laboratory of Social Cognitive Neuroscience and Mental Health, Department of Psychology, Sun Yat-sen University, Guangzhou, 510006, China; State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University & IDG/McGovern Institute for Brain Research, Beijing, 100875, China
- Xin Yan
- Department of Communicative Sciences and Disorders, Michigan State University, East Lansing, MI, 48823, United States; Mental Health Center, Wenhua College, Wuhan, 430000, China
- Hehui Li
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University & IDG/McGovern Institute for Brain Research, Beijing, 100875, China
- Dingguo Gao
- Guangdong Provincial Key Laboratory of Social Cognitive Neuroscience and Mental Health, Department of Psychology, Sun Yat-sen University, Guangzhou, 510006, China.
- Guosheng Ding
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University & IDG/McGovern Institute for Brain Research, Beijing, 100875, China.
3. Blanco-Elorrieta E, Emmorey K, Pylkkänen L. Language switching decomposed through MEG and evidence from bimodal bilinguals. Proc Natl Acad Sci U S A 2018; 115:9708-9713. [PMID: 30206151] [PMCID: PMC6166835] [DOI: 10.1073/pnas.1809779115]
Abstract
A defining feature of human cognition is the ability to quickly and accurately alternate between complex behaviors. One striking example of such an ability is bilinguals' capacity to rapidly switch between languages. This switching process minimally comprises disengagement from the previous language and engagement in a new language. Previous studies have associated language switching with increased prefrontal activity. However, it is unknown how the subcomputations of language switching individually contribute to this activity, because few natural situations enable full separation of disengagement and engagement processes during switching. We recorded magnetoencephalography (MEG) data from American Sign Language-English bilinguals who often sign and speak simultaneously, which allows engagement and disengagement to be dissociated. MEG data showed that turning a language "off" (switching from simultaneous to single-language production) led to increased activity in the anterior cingulate cortex (ACC) and dorsolateral prefrontal cortex (dlPFC), while turning a language "on" (switching from one language to two simultaneously) did not. The distinct representational nature of these on and off processes was also supported by multivariate decoding analyses. Additionally, Granger causality analyses revealed that (i) compared with "turning on" a language, "turning off" required stronger connectivity between left and right dlPFC, and (ii) dlPFC activity predicted ACC activity, consistent with models in which the dlPFC is a top-down modulator of the ACC. These results suggest that the burden of language switching lies in disengagement from the previous language rather than in engagement of a new language and that, in the absence of motor constraints, producing two languages simultaneously is not necessarily more cognitively costly than producing one.
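As a rough illustration of the directed-connectivity logic in this abstract (dlPFC activity predicting ACC activity), the sketch below runs a lag-1 Granger-style comparison on synthetic time series. The signal model, the single lag, and the variance-reduction measure are illustrative assumptions and do not reproduce the paper's MEG source reconstruction or statistics.

```python
# Illustrative lag-1 Granger-style test between two source time series
# (stand-ins for dlPFC and ACC). Synthetic data with a built-in
# dlPFC -> ACC influence, as in a top-down modulation scenario.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
dlpfc = np.zeros(n)
acc = np.zeros(n)
for t in range(1, n):
    dlpfc[t] = 0.6 * dlpfc[t - 1] + rng.standard_normal()
    # ACC is partly driven by past dlPFC activity.
    acc[t] = 0.5 * acc[t - 1] + 0.4 * dlpfc[t - 1] + rng.standard_normal()

def granger_gain(target, source, lag=1):
    """Reduction in residual variance when past `source` values are added
    to an autoregressive model of `target` (larger = stronger prediction)."""
    m = len(target)
    y = target[lag:]
    x_restricted = np.column_stack([np.ones(m - lag), target[:-lag]])
    x_full = np.column_stack([x_restricted, source[:-lag]])
    rss = lambda x: np.sum((y - x @ np.linalg.lstsq(x, y, rcond=None)[0]) ** 2)
    return 1.0 - rss(x_full) / rss(x_restricted)

print("dlPFC -> ACC gain:", round(granger_gain(acc, dlpfc), 3))
print("ACC -> dlPFC gain:", round(granger_gain(dlpfc, acc), 3))
```

With this toy model, the dlPFC-to-ACC gain clearly exceeds the reverse direction, which is the asymmetry the Granger analysis is designed to detect.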
Affiliation(s)
- Esti Blanco-Elorrieta
- Department of Psychology, New York University, New York, NY 10003
- NYU Abu Dhabi Institute, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Karen Emmorey
- School of Speech, Language, and Hearing Sciences, San Diego State University, San Diego, CA 92182
- Liina Pylkkänen
- Department of Psychology, New York University, New York, NY 10003
- NYU Abu Dhabi Institute, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Department of Linguistics, New York University, New York, NY 10003
4. van Berkel-van Hoof L, Hermans D, Knoors H, Verhoeven L. Benefits of augmentative signs in word learning: Evidence from children who are deaf/hard of hearing and children with specific language impairment. Research in Developmental Disabilities 2016; 59:338-350. [PMID: 27668401] [DOI: 10.1016/j.ridd.2016.09.015]
Abstract
BACKGROUND: Augmentative signs may facilitate word learning in children with vocabulary difficulties, for example, children who are Deaf/Hard of Hearing (DHH) and children with Specific Language Impairment (SLI). Although augmentative signs are also thought to aid second language learning in populations with typical language development, empirical evidence in favor of this claim is lacking. AIMS: We aimed to investigate whether augmentative signs facilitate word learning for DHH children, children with SLI, and typically developing (TD) children. METHODS AND PROCEDURES: Whereas previous studies taught children new labels for familiar objects, the present study taught new labels for new objects. In our word learning experiment, children were presented with pictures of imaginary creatures and pseudowords. Half of the words were accompanied by an augmentative pseudo-sign. The children were then tested on their receptive knowledge of the words. OUTCOMES AND RESULTS: The DHH children benefitted significantly from augmentative signs, but the children with SLI and their TD age-matched peers did not score significantly differently on words from the sign versus the no-sign condition. CONCLUSIONS AND IMPLICATIONS: These results suggest that using Sign-Supported Speech in classrooms of bimodal bilingual DHH children may support their spoken language development. The difference between earlier research findings and the present results may be caused by a difference in methodology.
Affiliation(s)
- Lian van Berkel-van Hoof
- Behavioural Science Institute, Radboud University, P.O. Box 9104, 6500 HE Nijmegen, The Netherlands.
- Daan Hermans
- Kentalis Academy, Royal Dutch Kentalis, P.O. Box 7, 5270 BA Sint-Michielsgestel, The Netherlands
- Harry Knoors
- Behavioural Science Institute, Radboud University, P.O. Box 9104, 6500 HE Nijmegen, The Netherlands; Kentalis Academy, Royal Dutch Kentalis, P.O. Box 7, 5270 BA Sint-Michielsgestel, The Netherlands
- Ludo Verhoeven
- Behavioural Science Institute, Radboud University, P.O. Box 9104, 6500 HE Nijmegen, The Netherlands
5. Weisberg J, Hubbard AL, Emmorey K. Multimodal integration of spontaneously produced representational co-speech gestures: an fMRI study. Language, Cognition and Neuroscience 2016; 32:158-174. [PMID: 29130054] [PMCID: PMC5675577] [DOI: 10.1080/23273798.2016.1245426]
Abstract
To examine whether more ecologically valid co-speech gesture stimuli elicit brain responses consistent with those found in studies that relied on scripted stimuli, we presented participants with spontaneously produced, meaningful co-speech gesture during fMRI scanning (n = 28). Speech presented with gesture (versus either presented alone) elicited heightened activity in bilateral posterior superior temporal, premotor, and inferior frontal regions. Within left temporal and premotor regions, but not inferior frontal regions, we identified small clusters with superadditive responses, suggesting that these discrete regions support both sensory and semantic integration. In contrast, surrounding areas and the inferior frontal gyrus may support either sensory or semantic integration. Reduced activation for speech with gesture in language-related regions indicates allocation of fewer neural resources when meaningful gestures accompany speech. Sign language experience did not affect co-speech gesture activation. Overall, our results indicate that scripted stimuli have minimal confounding influences; however, they may miss subtle superadditive effects.
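The superadditivity criterion referred to in this abstract (the combined response exceeding the sum of the unimodal responses) can be sketched as follows with synthetic per-participant ROI estimates; the sample size matches the reported n = 28, but the effect sizes and the one-sample t-test are illustrative assumptions, not the study's analysis.

```python
# Sketch of a superadditivity check for multisensory integration:
# does the speech+gesture response exceed speech alone plus gesture alone?
# Synthetic per-participant ROI betas with assumed effect sizes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 28  # participants (matches the reported sample size)

speech = 1.0 + 0.3 * rng.standard_normal(n)      # speech-alone betas
gesture = 0.6 + 0.3 * rng.standard_normal(n)     # gesture-alone betas
combined = 1.9 + 0.3 * rng.standard_normal(n)    # assumed superadditive ROI

superadditivity = combined - (speech + gesture)
result = stats.ttest_1samp(superadditivity, 0.0)
print(f"mean superadditivity = {superadditivity.mean():.2f}, "
      f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```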
Affiliation(s)
- Jill Weisberg
- Laboratory for Language and Cognitive Neuroscience, San Diego State University, 6495 Alvarado Rd., Suite 200, San Diego, CA 92120, USA
- Amy Lynn Hubbard
- Laboratory for Language and Cognitive Neuroscience, San Diego State University, 6495 Alvarado Rd., Suite 200, San Diego, CA 92120, USA
- Karen Emmorey
- Laboratory for Language and Cognitive Neuroscience, San Diego State University, 6495 Alvarado Rd., Suite 200, San Diego, CA 92120, USA
6. Giezen MR, Emmorey K. Semantic Integration and Age of Acquisition Effects in Code-Blend Comprehension. Journal of Deaf Studies and Deaf Education 2016; 21:213-221. [PMID: 26657077] [PMCID: PMC4886315] [DOI: 10.1093/deafed/env056]
Abstract
Semantic and lexical decision tasks were used to investigate the mechanisms underlying code-blend facilitation: the finding that hearing bimodal bilinguals comprehend signs in American Sign Language (ASL) and spoken English words more quickly when they are presented simultaneously than when each is presented alone. More robust facilitation effects were observed for semantic decision than for lexical decision, suggesting that lexical integration of signs and words within a code-blend occurs primarily at the semantic level, rather than at the level of form. Early bilinguals exhibited greater facilitation effects than late bilinguals for English (the dominant language) in the semantic decision task, possibly because early bilinguals are better able to process early visual cues from ASL signs and use these to constrain English word recognition. Comprehension facilitation via semantic integration of words and signs is consistent with co-speech gesture research demonstrating facilitative effects of gesture integration on language comprehension.
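The facilitation measure implied here (faster responses to simultaneous sign-word pairs than to either presented alone) can be sketched with synthetic reaction times; the participant count and mean RTs below are assumptions for illustration only, not the study's data.

```python
# Sketch of code-blend facilitation as an RT difference, computed separately
# for semantic and lexical decision. All means, SDs, and n are hypothetical.
import numpy as np

rng = np.random.default_rng(3)
n = 24  # hypothetical number of participants

def facilitation(rt_alone_mean, rt_blend_mean, sd=60):
    """Per-participant facilitation = RT(word alone) - RT(word + sign)."""
    rt_alone = rng.normal(rt_alone_mean, sd, n)
    rt_blend = rng.normal(rt_blend_mean, sd, n)
    return rt_alone - rt_blend

semantic = facilitation(820, 760)  # larger assumed benefit for semantic decision
lexical = facilitation(700, 685)   # smaller assumed benefit for lexical decision

print(f"semantic facilitation: {semantic.mean():.0f} ms")
print(f"lexical facilitation:  {lexical.mean():.0f} ms")
```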
7. Emmorey K, Giezen MR, Gollan TH. Psycholinguistic, cognitive, and neural implications of bimodal bilingualism. Bilingualism (Cambridge, England) 2016; 19:223-242. [PMID: 28804269] [PMCID: PMC5553278] [DOI: 10.1017/s1366728915000085]
Abstract
Bimodal bilinguals, fluent in a signed and a spoken language, exhibit a unique form of bilingualism because their two languages access distinct sensory-motor systems for comprehension and production. Differences between unimodal and bimodal bilinguals have implications for how the brain is organized to control, process, and represent two languages. Evidence from code-blending (simultaneous production of a word and a sign) indicates that the production system can access two lexical representations without cost, and the comprehension system must be able to simultaneously integrate lexical information from two languages. Further, evidence of cross-language activation in bimodal bilinguals indicates the necessity of links between languages at the lexical or semantic level. Finally, the bimodal bilingual brain differs from the unimodal bilingual brain with respect to the degree and extent of neural overlap for the two languages, with less overlap for bimodal bilinguals.
Affiliation(s)
- Karen Emmorey
- School of Speech, Language and Hearing Sciences, San Diego State University
- Tamar H Gollan
- University of California San Diego, Department of Psychiatry
8. Benitez-Quiroz CF, Wilbur RB, Martinez AM. The not face: A grammaticalization of facial expressions of emotion. Cognition 2016; 150:77-84. [PMID: 26872248] [DOI: 10.1016/j.cognition.2016.02.004]
Abstract
Facial expressions of emotion are thought to have evolved from the development of facial muscles used in sensory regulation and later adapted to express moral judgment. Negative moral judgment includes the expressions of anger, disgust and contempt. Here, we study the hypothesis that these facial expressions of negative moral judgment have further evolved into a facial expression of negation regularly used as a grammatical marker in human language. Specifically, we show that people from different cultures expressing negation use the same facial muscles as those employed to express negative moral judgment. We then show that this nonverbal signal is used as a co-articulator in speech and that, in American Sign Language, it has been grammaticalized as a non-manual marker. Furthermore, this facial expression of negation exhibits the theta oscillation (3-8 Hz) universally seen in syllable and mouthing production in speech and signing. These results provide evidence for the hypothesis that some components of human language have evolved from facial expressions of emotion, and suggest an evolutionary route for the emergence of grammatical markers.