1
Yang T, Fan X, Hou B, Wang J, Chen X. Linguistic network in early deaf individuals: A neuroimaging meta-analysis. Neuroimage 2024:120720. PMID: 38971484; DOI: 10.1016/j.neuroimage.2024.120720.
Abstract
This meta-analysis summarizes evidence from 44 neuroimaging experiments and characterizes the general linguistic network in early deaf individuals. Meta-analytic comparisons with hearing individuals found that a specific set of regions (in particular the left inferior frontal gyrus and posterior middle temporal gyrus) participates in supramodal language processing. Beyond previously described modality-specific differences, the present study showed that the left calcarine gyrus and the right caudate were additionally recruited in deaf compared with hearing individuals. The study further showed that the bilateral posterior superior temporal gyrus is shaped by cross-modal plasticity, whereas the left frontotemporal areas are shaped by early language experience. Although an overall left-lateralized pattern for language processing was observed in early deaf individuals, regional lateralization was altered in the inferior temporal gyrus and anterior temporal lobe. These findings indicate that the core language network functions in a modality-independent manner and provide a foundation for determining the contributions of sensory and linguistic experience in shaping the neural bases of language processing.
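The abstract does not name the meta-analytic algorithm, but coordinate-based neuroimaging meta-analyses of this kind are commonly run with activation likelihood estimation (ALE). Below is a minimal sketch using the NiMARE library under that assumption; the Sleuth-format coordinate file and output names are hypothetical placeholders, not materials from the study.

```python
# Minimal ALE sketch (assumed method, not confirmed by the abstract).
from nimare.io import convert_sleuth_to_dataset
from nimare.meta.cbma.ale import ALE

# Hypothetical Sleuth-format text file listing peak coordinates from the
# experiments on deaf individuals.
dset = convert_sleuth_to_dataset("deaf_language_foci.txt")

# Fit the ALE model and save the unthresholded z map. Deaf-versus-hearing
# comparisons would use ALESubtraction on two such datasets, followed by
# family-wise-error correction before interpretation.
result = ALE().fit(dset)
result.get_map("z").to_filename("deaf_language_ale_z.nii.gz")
```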
Affiliation(s)
- Tengyu Yang
- Department of Otolaryngology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, the People's Republic of China
- Xinmiao Fan
- Department of Otolaryngology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, the People's Republic of China
- Bo Hou
- Department of Radiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, the People's Republic of China
- Jian Wang
- Department of Otolaryngology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, the People's Republic of China
- Xiaowei Chen
- Department of Otolaryngology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, the People's Republic of China
2
Syntax through the looking glass: A review on two-word linguistic processing across behavioral, neuroimaging and neurostimulation studies. Neurosci Biobehav Rev 2022;142:104881. DOI: 10.1016/j.neubiorev.2022.104881.
3
Zhou X, Feng M, Hu Y, Zhang C, Zhang Q, Luo X, Yuan W. The Effects of Cortical Reorganization and Applications of Functional Near-Infrared Spectroscopy in Deaf People and Cochlear Implant Users. Brain Sci 2022;12(9):1150. PMID: 36138885; PMCID: PMC9496692; DOI: 10.3390/brainsci12091150.
Abstract
A cochlear implant (CI) is currently the only FDA-approved biomedical device that can restore hearing for the majority of patients with severe-to-profound sensorineural hearing loss (SNHL). Although both prelingually and postlingually deaf individuals benefit substantially from CIs, outcomes after implantation vary greatly. Numerous studies have examined the variables that affect CI outcomes, including the personal characteristics of CI candidates, environmental variables, and device-related variables, yet these variables can only roughly predict auditory performance with a CI, and up to 80% of the variability in outcomes remains unexplained. Brain structural and functional changes after hearing deprivation, that is, cortical reorganization, have gradually attracted the attention of neuroscientists. Cross-modal reorganization of the auditory cortex following deafness is thought to be a key factor in the success of CI. Because the neural mechanisms by which this reorganization affects CI learning and rehabilitation have not been revealed, its adaptive and maladaptive consequences for CI outcomes have recently been the subject of debate. This review describes the evidence for different roles of cross-modal reorganization in CI performance and explores the possible reasons. Understanding the core influencing mechanism also requires taking into account cortical changes from deafness to hearing restoration, but methodological issues have restricted longitudinal research on cortical function in CI users. Functional near-infrared spectroscopy (fNIRS) has been increasingly used for the study of brain function and language assessment in CI users because of its unique advantages and is considered to have great potential. Here, we review research on the cross-modal reorganization of the auditory cortex in deaf patients and CI recipients and seek to demonstrate the viability of fNIRS as a neuroimaging tool for predicting and assessing speech performance in CI recipients.
Affiliation(s)
- Xiaoqing Zhou
- Department of Otolaryngology, Chongqing General Hospital, Chongqing 401147, China
- Chongqing Medical University, Chongqing 400042, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chongqing Institute of Green and Intelligent Technology, University of Chinese Academy of Sciences, Chongqing 400714, China
- Menglong Feng
- Department of Otolaryngology, Chongqing General Hospital, Chongqing 401147, China
- Chongqing Medical University, Chongqing 400042, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chongqing Institute of Green and Intelligent Technology, University of Chinese Academy of Sciences, Chongqing 400714, China
- Yaqin Hu
- Department of Otolaryngology, Chongqing General Hospital, Chongqing 401147, China
- Chongqing Medical University, Chongqing 400042, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chongqing Institute of Green and Intelligent Technology, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chanyuan Zhang
- Department of Otolaryngology, Chongqing General Hospital, Chongqing 401147, China
- Chongqing Medical University, Chongqing 400042, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chongqing Institute of Green and Intelligent Technology, University of Chinese Academy of Sciences, Chongqing 400714, China
- Qingling Zhang
- Department of Otolaryngology, Chongqing General Hospital, Chongqing 401147, China
- Chongqing Medical University, Chongqing 400042, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chongqing Institute of Green and Intelligent Technology, University of Chinese Academy of Sciences, Chongqing 400714, China
- Xiaoqin Luo
- Department of Otolaryngology, Chongqing General Hospital, Chongqing 401147, China
- Chongqing Medical University, Chongqing 400042, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chongqing Institute of Green and Intelligent Technology, University of Chinese Academy of Sciences, Chongqing 400714, China
- Wei Yuan
- Department of Otolaryngology, Chongqing General Hospital, Chongqing 401147, China
- Chongqing Medical University, Chongqing 400042, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chongqing Institute of Green and Intelligent Technology, University of Chinese Academy of Sciences, Chongqing 400714, China
- Correspondence: Tel.: +86-23-63535180
4
Structural Brain Asymmetries for Language: A Comparative Approach across Primates. Symmetry (Basel) 2022. DOI: 10.3390/sym14050876.
Abstract
Humans are the only species that can speak. Nonhuman primates, however, share some 'domain-general' cognitive properties that are essential to language processes. Whether these shared cognitive properties are the result of continuous evolution (homologies) or of convergent evolution (analogies) remains difficult to demonstrate. Comparing their underlying structure, the brain, across species is therefore critical for weighing the evidence for either hypothesis. Key areas associated with language processes are the planum temporale, Broca's area, the arcuate fasciculus, the cingulate sulcus, the insula, the superior temporal sulcus, the inferior parietal lobe, and the central sulcus. These structures share a fundamental feature: they are functionally and structurally specialised to one hemisphere. Interestingly, several nonhuman primate species, such as chimpanzees and baboons, show human-like structural brain asymmetries for areas homologous to key language regions. The question then arises: for what function did these asymmetries arise in non-linguistic primates, if not for language per se? In an attempt to provide some answers, we review the literature on the lateralisation of the gestural communication system, which may represent the missing behavioural link to brain asymmetries for the homologues of language areas in our common ancestor.
5
Holmer E, Schönström K, Andin J. Associations Between Sign Language Skills and Resting-State Functional Connectivity in Deaf Early Signers. Front Psychol 2022;13:738866. PMID: 35369269; PMCID: PMC8975249; DOI: 10.3389/fpsyg.2022.738866.
Abstract
The processing of a language involves a neural language network including temporal, parietal, and frontal cortical regions. This applies to spoken as well as signed languages. Previous research suggests that spoken language proficiency is associated with resting-state functional connectivity (rsFC) between language regions and other regions of the brain. Given the similarities in neural activation for spoken and signed languages, rsFC-behavior associations should also exist for sign language tasks. In this study, we explored the associations between rsFC and two types of linguistic skills in sign language: phonological processing skill and accuracy in elicited sentence production. Fifteen adult, deaf early signers were enrolled in a resting-state functional magnetic resonance imaging (fMRI) study. In addition to fMRI data, behavioral tests of sign language phonological processing and sentence reproduction were administered. Using seed-to-voxel connectivity analysis, we investigated associations between behavioral proficiency and rsFC from language-relevant nodes: bilateral inferior frontal gyrus (IFG) and posterior superior temporal gyrus (STG). Results showed that worse sentence processing skill was associated with stronger positive rsFC between the left IFG and left sensorimotor regions. Further, sign language phonological processing skill was associated with positive rsFC from right IFG to middle frontal gyrus/frontal pole although this association could possibly be explained by domain-general cognitive functions. Our findings suggest a possible connection between rsFC and developmental language outcomes in deaf individuals.
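As a rough illustration of the seed-to-voxel analysis described above, the sketch below uses nilearn to compute a Fisher-z connectivity map from a single spherical seed. The file name, seed coordinate, and TR are illustrative assumptions, not parameters reported by the study.

```python
# Sketch of a single-subject seed-to-voxel resting-state connectivity map.
# File names, the seed coordinate, and the TR are placeholders.
import numpy as np
from nilearn.maskers import NiftiSpheresMasker, NiftiMasker

func_img = "sub-01_task-rest_desc-preproc_bold.nii.gz"  # hypothetical preprocessed run

# Approximate left IFG seed in MNI space (the study used bilateral IFG and pSTG seeds).
seed_masker = NiftiSpheresMasker(seeds=[(-48, 24, 12)], radius=8,
                                 standardize=True, t_r=2.0)
seed_ts = seed_masker.fit_transform(func_img)        # shape: (n_volumes, 1)

brain_masker = NiftiMasker(standardize=True, t_r=2.0)
brain_ts = brain_masker.fit_transform(func_img)      # shape: (n_volumes, n_voxels)

# Seed-to-voxel Pearson correlations on the standardized series, Fisher z-transformed.
n_vol = brain_ts.shape[0]
corr = brain_ts.T.dot(seed_ts) / n_vol
z_map = brain_masker.inverse_transform(np.arctanh(corr).T)
z_map.to_filename("sub-01_seed-LIFG_zmap.nii.gz")
# Group level: the per-subject z maps are then regressed on the behavioural
# score (e.g., sentence reproduction accuracy) in a second-level GLM.
```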
Affiliation(s)
- Emil Holmer
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Center for Medical Image Science and Visualization, Linköping, Sweden
- Correspondence: Emil Holmer
- Josefine Andin
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
6
Caldwell HB. Sign and Spoken Language Processing Differences in the Brain: A Brief Review of Recent Research. Ann Neurosci 2022;29:62-70. PMID: 35875424; PMCID: PMC9305909; DOI: 10.1177/09727531211070538.
Abstract
Background: It is currently accepted that sign languages and spoken languages have significant processing commonalities. The evidence supporting this often investigates only frontotemporal pathways, perisylvian language areas, hemispheric lateralization, and event-related potentials in typical settings. Recent work, however, has explored beyond this and uncovered numerous modality-dependent processing differences between sign languages and spoken languages, by accounting for confounds that previously invalidated processing comparisons and by examining the specific conditions in which the differences arise. Even so, these processing differences are often dismissed as unspecific to language. Summary: This review examines recent neuroscientific evidence for processing differences between sign and spoken language modalities, and the arguments against the importance of these differences. Key distinctions exist in the topography of the left anterior negativity (LAN) and in modulations of event-related potential (ERP) components such as the N400. There is also differential activation of typical spoken-language processing areas, for example the conditional role of the temporal areas in sign language (SL) processing. Importantly, sign language processing uniquely recruits parietal areas for processing phonology and syntax and requires the mapping of spatial information onto internal representations. Additionally, modality-specific feedback mechanisms distinctively involve proprioceptive post-output monitoring in sign languages, in contrast to the auditory and visual feedback mechanisms of spoken languages. The only study to find ERP differences post-production revealed earlier lexical access in sign than in spoken languages. Themes of temporality, the validity of a viewpoint based on analogous anatomical mechanisms, and the comprehensiveness of current language models are also discussed, with suggested improvements for future research. Key message: Current neuroscience evidence suggests various ways in which processing differs between sign and spoken language modalities that extend beyond simple differences between languages. Consideration and further exploration of these differences will be integral to developing a more comprehensive view of language in the brain.
Affiliation(s)
- Hayley Bree Caldwell
- Cognitive and Systems Neuroscience Research Hub (CSN-RH), School of Justice and Society, University of South Australia Magill Campus, Magill, South Australia, Australia
7
Matchin W, İlkbaşaran D, Hatrak M, Roth A, Villwock A, Halgren E, Mayberry RI. The Cortical Organization of Syntactic Processing Is Supramodal: Evidence from American Sign Language. J Cogn Neurosci 2022;34:224-235. PMID: 34964898; PMCID: PMC8764739; DOI: 10.1162/jocn_a_01790.
Abstract
Areas within the left-lateralized neural network for language have been found to be sensitive to syntactic complexity in spoken and written language. Previous research has revealed that these areas are active for sign language as well, but whether they are specifically responsive to syntactic complexity in sign language, independent of lexical processing, has yet to be established. To investigate this question, we used fMRI to image deaf native signers' comprehension of 180 sign strings in American Sign Language (ASL) with a picture-probe recognition task. The ASL strings were all six signs in length but varied across three levels of syntactic complexity: sign lists, two-word sentences, and complex sentences. Syntactic complexity significantly affected comprehension and memory, both behaviorally and neurally, by facilitating accuracy and response time on the picture-probe recognition task and eliciting a left-lateralized activation pattern in anterior and posterior superior temporal sulcus (aSTS and pSTS). Minimal or absent syntactic structure reduced picture-probe recognition and elicited activation in bilateral pSTS and occipitotemporal cortex. These results provide evidence from a sign language, ASL, that the combinatorial processing of aSTS and pSTS is supramodal in nature. The results further suggest that the neurolinguistic processing of ASL is characterized by overlapping and separable neural systems for syntactic and lexical processing.
Affiliation(s)
- William Matchin
- University of California San Diego
- University of South Carolina, Columbia
- Agnes Villwock
- University of California San Diego
- Humboldt University of Berlin
8
Abstract
The first 40 years of research on the neurobiology of sign languages (1960-2000) established that the same key left-hemisphere brain regions support both signed and spoken languages, based primarily on evidence from signers with brain injury and, at the end of the 20th century, on evidence from emerging functional neuroimaging technologies (positron emission tomography and fMRI). Building on this earlier work, this review focuses on what we have learned about the neurobiology of sign languages in the last 15-20 years, what controversies remain unresolved, and directions for future research. Production and comprehension processes are addressed separately in order to capture whether and how output and input differences between sign and speech impact the neural substrates supporting language. In addition, the review includes aspects of language that are unique to sign languages, such as pervasive lexical iconicity, fingerspelling, linguistic facial expressions, and depictive classifier constructions. Summary sketches of the neural networks supporting sign language production and comprehension are provided with the hope that these will inspire future research as we begin to develop a more complete neurobiological model of sign language processing.
9
How does inattention affect written and spoken language processing? Cortex 2021;138:212-227. PMID: 33713968; DOI: 10.1016/j.cortex.2021.02.007.
Abstract
The classic cocktail party effect suggests that some, but probably not all, levels of language processing can proceed without attention. We used whole-brain functional MRI to investigate how modality-specific and modality-independent language areas are modulated by the withdrawal of attention to another sensory modality (e.g., attending to vision during the presentation of auditory sentences, or vice versa). We tested the hypotheses that inattention may abolish sentence-level integration and eliminate top-down effects. In both written and spoken modalities, language processing was strongly modulated by the distraction of attention, but this inattention effect varied considerably depending on the area and hierarchical level of language processing. Under inattention, bottom-up activation remained in early modality-specific areas, particularly in superior temporal spoken-language areas, but the difference between sentences and word lists vanished. Under both attended and unattended conditions, ventral temporal cortices were activated in a top-down manner by spoken language more than by control stimuli, extending posteriorly to the Visual Word Form Area. We conclude that inattention prevents sentence-level syntactic and semantic integration but preserves some top-down cross-modal processing, plus a large degree of bottom-up modality-specific processing, including a ventral occipito-temporal specialization for letter strings in a known alphabet.
10
Altered Brain Activity and Functional Connectivity in Unilateral Sudden Sensorineural Hearing Loss. Neural Plast 2020;2020:9460364. PMID: 33029130; PMCID: PMC7527900; DOI: 10.1155/2020/9460364.
Abstract
Background: Sudden sensorineural hearing loss (SSNHL) is an otologic emergency and can lead to social difficulties and mental disorders in some patients. Although many studies have analyzed altered brain function in populations with hearing loss, little information is available about patients with idiopathic SSNHL. This study aimed to investigate brain functional changes in SSNHL via functional magnetic resonance imaging (fMRI). Methods: Thirty-six patients with SSNHL and thirty well-matched normal-hearing individuals underwent resting-state fMRI. Amplitude of low-frequency fluctuation (ALFF), fractional ALFF (fALFF), and functional connectivity (FC) values were calculated. Results: In the SSNHL patients, ALFF and fALFF were significantly increased in the bilateral putamen but decreased in the right calcarine cortex, right middle temporal gyrus (MTG), and right precentral gyrus. Widespread increases in FC were observed between brain regions, mainly including the bilateral auditory cortex, bilateral visual cortex, left striatum, left angular gyrus (AG), bilateral precuneus, and bilateral limbic lobes in patients with SSNHL. No decreased FC was observed. Conclusion: SSNHL causes functional alterations in brain regions, mainly in the striatum, auditory cortex, visual cortex, MTG, AG, precuneus, and limbic lobes within the acute period of hearing loss.
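The ALFF and fALFF measures mentioned in the Methods have standard definitions: ALFF is the summed amplitude of a voxel's power spectrum in the low-frequency band (conventionally 0.01 to 0.08 Hz), and fALFF expresses that amount as a fraction of the amplitude over the whole frequency range. The snippet below is an illustrative implementation with conventional parameter choices, not the study's exact pipeline.

```python
# Illustrative ALFF/fALFF computation for one detrended voxel time series.
# TR and band limits are conventional defaults, not values from this study.
import numpy as np

def alff_falff(ts, tr=2.0, low=0.01, high=0.08):
    """Return (ALFF, fALFF) for a 1-D voxel time series."""
    ts = ts - ts.mean()
    freqs = np.fft.rfftfreq(ts.size, d=tr)
    amp = np.abs(np.fft.rfft(ts)) / ts.size          # single-sided amplitude spectrum
    band = (freqs >= low) & (freqs <= high)
    alff = amp[band].sum()                           # low-frequency amplitude
    falff = alff / amp[1:].sum()                     # fraction of total (non-DC) amplitude
    return alff, falff

# Example with synthetic data: 240 volumes acquired at TR = 2 s.
rng = np.random.default_rng(0)
print(alff_falff(rng.standard_normal(240)))
```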
11
Chang CHC, Dehaene S, Wu DH, Kuo WJ, Pallier C. Cortical encoding of linguistic constituent with and without morphosyntactic cues. Cortex 2020;129:281-295. PMID: 32535379; DOI: 10.1016/j.cortex.2020.04.024.
Abstract
This study examined the brain areas involved in combining words into larger units when there are few or no morphosyntactic cues. We manipulated constituent length in word strings of the same overall length under two conditions: Mandarin sentences, which have sparse morphosyntactic cues, and nominal phrases, which have none [e.g., ((honey mustard) (chicken burger))]. Contrasting sentences with word lists revealed a network that largely overlapped with the one reported for languages with rich morphosyntactic cues, including left IFGorb/IFGtri and areas along the left STG/STS. Both conditions showed increased activation in left IFGtri/IFGorb within functional ROIs defined on the basis of a previous sentence-processing study, while the nominal phrases additionally revealed a constituent-length effect in bilateral dorsal IFGtri, left IFGoper, left pMTG/pSTG, left IPL, and several subcortical areas, which might reflect an increased reliance on semantic and pragmatic information. Moreover, in upper left IFGtri/IFGoper and the left thalamus/caudate, this effect increased with the participants' tendency to combine nouns into phrases. The absence of syntactic constraints on linguistic composition might highlight individual differences in cognitive control, which helps to integrate non-syntactic information.
Affiliation(s)
- Claire H C Chang
- Institute of Neuroscience, National Yang-Ming University, Taipei, Taiwan.
- Stanislas Dehaene
- Cognitive Neuroimaging Unit, CEA DSV/I2BM, INSERM, Université Paris-Sud, Université Paris-Saclay, NeuroSpin Center, Gif-sur-Yvette, France; Collège de France, Paris, France.
- Denise H Wu
- Institute of Cognitive Neuroscience, National Central University, Zhongli, Taiwan.
- Wen-Jui Kuo
- Institute of Neuroscience, National Yang-Ming University, Taipei, Taiwan; Brain Research Center, National Yang-Ming University, Taipei, Taiwan.
- Christophe Pallier
- Cognitive Neuroimaging Unit, CEA DSV/I2BM, INSERM, Université Paris-Sud, Université Paris-Saclay, NeuroSpin Center, Gif-sur-Yvette, France.
12
Sarbu M, Dehelean L, Munteanu CVA, Ica R, Petrescu AJ, Zamfir AD. Human caudate nucleus exhibits a highly complex ganglioside pattern as revealed by high-resolution multistage Orbitrap MS. J Carbohydr Chem 2019. DOI: 10.1080/07328303.2019.1669632.
Affiliation(s)
- Mirela Sarbu
- Department of Applied Physics, National Institute for Research and Development in Electrochemistry and Condensed Matter, Timisoara, Romania
- Liana Dehelean
- Department of Neurosciences, “Victor Babes” University of Medicine and Pharmacy, Timisoara, Romania
- Cristian V. A. Munteanu
- Department of Bioinformatics and Structural Biochemistry, Institute of Biochemistry of the Romanian Academy, Bucharest, Romania
- Raluca Ica
- Department of Applied Physics, National Institute for Research and Development in Electrochemistry and Condensed Matter, Timisoara, Romania
- Andrei J. Petrescu
- Department of Bioinformatics and Structural Biochemistry, Institute of Biochemistry of the Romanian Academy, Bucharest, Romania
- Alina D. Zamfir
- Department of Applied Physics, National Institute for Research and Development in Electrochemistry and Condensed Matter, Timisoara, Romania
- Department of Technical and Natural Sciences, “Aurel Vlaicu” University of Arad, Arad, Romania
13
Stroh AL, Rösler F, Dormal G, Salden U, Skotara N, Hänel-Faulhaber B, Röder B. Neural correlates of semantic and syntactic processing in German Sign Language. Neuroimage 2019;200:231-241. PMID: 31220577; DOI: 10.1016/j.neuroimage.2019.06.025.
Abstract
The study of deaf and hearing native users of signed languages can offer unique insights into how biological constraints and environmental input interact to shape the neural bases of language processing. Here, we use functional magnetic resonance imaging (fMRI) to address two questions: (1) Do semantic and syntactic processing in a signed language rely on anatomically and functionally distinct neural substrates, as has been shown for spoken languages? and (2) Does hearing status affect the neural correlates of these two types of linguistic processing? Deaf and hearing native signers performed a sentence judgement task on German Sign Language (Deutsche Gebärdensprache: DGS) sentences which were correct or contained either syntactic or semantic violations. We hypothesized that processing of semantic and syntactic violations in DGS relies on distinct neural substrates, as has been shown for spoken languages. Moreover, we hypothesized that effects of hearing status would be observed within auditory regions, as deaf native signers have been shown to activate auditory areas to a greater extent than hearing native signers when processing a signed language. Semantic processing activated low-level visual areas and the left inferior frontal gyrus (IFG), suggesting both modality-dependent and modality-independent processing mechanisms. Syntactic processing elicited increased activation in the right supramarginal gyrus (SMG). Moreover, psychophysiological interaction (PPI) analyses revealed a cluster in left middle occipital regions showing increased functional coupling with the right SMG during syntactic relative to semantic processing, possibly indicating spatial processing mechanisms that are specific to signed syntax. Effects of hearing status were observed in the right superior temporal cortex (STC): deaf but not hearing native signers showed greater activation for semantic violations than for syntactic violations in this region. Taken together, the present findings suggest that the neural correlates of language processing are partly determined by biological constraints but may additionally be influenced by the unique processing demands of the language modality and by different sensory experiences.
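For readers unfamiliar with PPI, the interaction term is built by multiplying the seed region's time course with the psychological contrast regressor; voxels that load on this term show condition-dependent coupling with the seed. The toy sketch below illustrates only the regressor construction, with an invented seed signal and block structure; a full PPI, as implemented in SPM or FSL, additionally deconvolves the seed BOLD signal before forming the interaction.

```python
# Toy construction of PPI regressors (physiological, psychological, interaction).
# The deconvolution step used in standard PPI/gPPI implementations is omitted.
import numpy as np

def ppi_regressors(seed_ts, task_contrast):
    """Return design-matrix columns [physio, psych, physio * psych]."""
    physio = (seed_ts - seed_ts.mean()) / seed_ts.std()   # seed (e.g., right SMG) time course
    psych = task_contrast - task_contrast.mean()          # e.g., +1 syntactic, -1 semantic blocks
    return np.column_stack([physio, psych, physio * psych])

# Example: 200 volumes, alternating 10-volume blocks of the two conditions.
rng = np.random.default_rng(1)
seed = rng.standard_normal(200)
contrast = np.tile(np.repeat([1.0, -1.0], 10), 10)
X = ppi_regressors(seed, contrast)
print(X.shape)  # voxel-wise regression on X tests for condition-dependent coupling
```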
Affiliation(s)
- Anna-Lena Stroh
- Biological Psychology and Neuropsychology, University of Hamburg, Germany.
- Frank Rösler
- Biological Psychology and Neuropsychology, University of Hamburg, Germany
- Giulia Dormal
- Biological Psychology and Neuropsychology, University of Hamburg, Germany
- Uta Salden
- Biological Psychology and Neuropsychology, University of Hamburg, Germany
- Nils Skotara
- Biological Psychology and Neuropsychology, University of Hamburg, Germany
- Barbara Hänel-Faulhaber
- Biological Psychology and Neuropsychology, University of Hamburg, Germany; Special Education, University of Hamburg, Germany
- Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Germany
14
Le Guen Y, Leroy F, Auzias G, Riviere D, Grigis A, Mangin JF, Coulon O, Dehaene-Lambertz G, Frouin V. The chaotic morphology of the left superior temporal sulcus is genetically constrained. Neuroimage 2018;174:297-307. DOI: 10.1016/j.neuroimage.2018.03.046.
15
Language and Sensory Neural Plasticity in the Superior Temporal Cortex of the Deaf. Neural Plast 2018;2018:9456891. PMID: 29853853; PMCID: PMC5954881; DOI: 10.1155/2018/9456891.
Abstract
Visual stimuli are known to activate the auditory cortex of deaf people, providing evidence of cross-modal plasticity. However, the mechanisms underlying such plasticity are poorly understood. In this functional MRI study, we presented two types of visual stimuli, language stimuli (words, sign language, and lip-reading) and a general stimulus (checkerboard), to investigate neural reorganization in the superior temporal cortex (STC) of deaf subjects and hearing controls. We found that all visual stimuli activated the STC only in the deaf subjects. The cross-modal activation induced by the checkerboard was mainly due to a sensory component conveyed via a feed-forward pathway from the thalamus and primary visual cortex and was positively correlated with duration of deafness, indicating a consequence of pure sensory deprivation. In contrast, the STC activity evoked by language stimuli was functionally connected to both the visual cortex and the frontotemporal areas and was highly correlated with the learning of sign language, suggesting a strong language component via a possible feedback modulation. While the sensory component exhibited specificity to features of a visual stimulus (e.g., selectivity for the form of words, bodies, or faces) and the language (semantic) component appeared to recruit a common frontotemporal neural network, the two components converged on the STC and caused plasticity with different multivoxel activity patterns. In summary, the present study revealed plausible neural pathways for auditory reorganization, related the activations of the reorganized cortical areas to developmental factors, and provided unique evidence towards understanding the neural circuits involved in cross-modal plasticity.