1. Kumar U, Dhanik K, Mishra M, Pandey HR, Keshri A. Mapping the unique neural engagement in deaf individuals during picture, word, and sign language processing: fMRI study. Brain Imaging Behav 2024;18:835-851. [PMID: 38523177] [DOI: 10.1007/s11682-024-00878-7]
Abstract
Employing functional magnetic resonance imaging (fMRI) techniques, we conducted a comprehensive analysis of neural responses during sign language, picture, and word processing tasks in a cohort of 35 deaf participants and contrasted these responses with those of 35 hearing counterparts. Our voxel-based analysis unveiled distinct patterns of brain activation during language processing tasks. Deaf individuals exhibited robust bilateral activation in the superior temporal regions during sign language processing, signifying the profound neural adaptations associated with sign comprehension. Similarly, during picture processing, the deaf cohort displayed activation in the right angular, right calcarine, right middle temporal, and left angular gyrus regions, elucidating the neural dynamics engaged in visual processing tasks. Intriguingly, during word processing, the deaf group engaged the right insula and right fusiform gyrus, suggesting compensatory mechanisms at play during linguistic tasks. Notably, the control group failed to manifest additional or distinctive regions in any of the tasks when compared to the deaf cohort, underscoring the unique neural signatures within the deaf population. Multivariate Pattern Analysis (MVPA) of functional connectivity provided a more nuanced perspective on connectivity patterns across tasks. Deaf participants exhibited significant activation in a myriad of brain regions, including bilateral planum temporale (PT), postcentral gyrus, insula, and inferior frontal regions, among others. These findings underscore the intricate neural adaptations in response to auditory deprivation. Seed-based connectivity analysis, utilizing the PT as a seed region, revealed unique connectivity patterns across tasks. These connectivity dynamics provide valuable insights into the neural interplay associated with cross-modal plasticity.
Affiliation(s)
- Uttam Kumar: Centre of Bio-Medical Research, Sanjay Gandhi Postgraduate Institute of Medical Sciences Campus, Lucknow, Uttar Pradesh 226014, India
- Kalpana Dhanik: Centre of Bio-Medical Research, Sanjay Gandhi Postgraduate Institute of Medical Sciences Campus, Lucknow, Uttar Pradesh 226014, India
- Mrutyunjaya Mishra: Department of Special Education (Hearing Impairments), Dr. Shakuntala Misra National Rehabilitation University, Lucknow, India
- Himanshu R Pandey: Centre of Bio-Medical Research, Sanjay Gandhi Postgraduate Institute of Medical Sciences Campus, Lucknow, Uttar Pradesh 226014, India
- Amit Keshri: Department of Neuro-Otology, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow, India
2. Cardin V, Kremneva E, Komarova A, Vinogradova V, Davidenko T, Zmeykina E, Kopnin PN, Iriskhanova K, Woll B. Resting-state functional connectivity in deaf and hearing individuals and its link to executive processing. Neuropsychologia 2023;185:108583. [PMID: 37142052] [DOI: 10.1016/j.neuropsychologia.2023.108583]
Abstract
Sensory experience shapes brain structure and function, and it is likely to influence the organisation of functional networks of the brain, including those involved in cognitive processing. Here we investigated the influence of early deafness on the organisation of resting-state networks of the brain and its relation to executive processing. We compared resting-state connectivity between deaf and hearing individuals across 18 functional networks and 400 ROIs. Our results showed significant group differences in connectivity between seeds of the auditory network and most large-scale networks of the brain, in particular the somatomotor and salience/ventral attention networks. When we investigated group differences in resting-state fMRI and their link to behavioural performance in executive function tasks (working memory, inhibition and switching), differences between groups were found in the connectivity of association networks of the brain, such as the salience/ventral attention and default-mode networks. These findings indicate that sensory experience influences not only the organisation of sensory networks, but that it also has a measurable impact on the organisation of association networks supporting cognitive processing. Overall, our findings suggest that different developmental pathways and functional organisation can support executive processing in the adult brain.
Affiliation(s)
- Velia Cardin: Deafness, Cognition and Language Research Centre, UCL, London, UK
- Elena Kremneva: Department of Radiology, Research Center of Neurology, Moscow, Russia
- Anna Komarova: Galina Zaitseva Centre for Deaf Studies and Sign Language, Moscow, Russia; Language Department, Moscow State Linguistics University, Moscow, Russia
- Valeria Vinogradova: Deafness, Cognition and Language Research Centre, UCL, London, UK; Galina Zaitseva Centre for Deaf Studies and Sign Language, Moscow, Russia; School of Psychology, University of East Anglia, Norwich, UK
- Tatiana Davidenko: Galina Zaitseva Centre for Deaf Studies and Sign Language, Moscow, Russia
- Elina Zmeykina: Department of Radiology, Research Center of Neurology, Moscow, Russia; Department of Neurology, University Medical Center Göttingen, Germany
- Petr N Kopnin: Department of Neurorehabilitation and Physiotherapy, Research Center of Neurology, Moscow, Russia
- Kira Iriskhanova: Language Department, Moscow State Linguistics University, Moscow, Russia
- Bencie Woll: Deafness, Cognition and Language Research Centre, UCL, London, UK
3. Cheng Q, Roth A, Halgren E, Klein D, Chen JK, Mayberry RI. Restricted language access during childhood affects adult brain structure in selective language regions. Proc Natl Acad Sci U S A 2023;120:e2215423120. [PMID: 36745780] [PMCID: PMC9963327] [DOI: 10.1073/pnas.2215423120]
Abstract
Because language is ubiquitous in the environment of infants, how early language experience affects the anatomical structure of the brain language system over the lifespan is not well understood. In this study, we investigated the effects of early language experience on the adult brain by examining anatomical features of individuals born deaf with typical or restricted language experience in early childhood. Twenty-two deaf adults participated whose primary language was American Sign Language and who were first immersed in it at ages ranging from birth to 14 y. The control group was 21 hearing non-signers. We acquired T1-weighted magnetic resonance images and used FreeSurfer [B. Fischl, Neuroimage 62, 774-781 (2012)] to reconstruct the brain surface. Using an a priori regions of interest (ROI) approach, we identified 17 language and 19 somatomotor ROIs in each hemisphere from the Human Connectome Project parcellation map [M. F. Glasser et al., Nature 536, 171-178 (2016)]. Restricted language experience in early childhood was associated with negative changes in adjusted grey matter volume and/or cortical thickness in bilateral fronto-temporal regions. No evidence of anatomical differences was observed in any of these regions when deaf signers with infant sign language experience were compared with hearing speakers with infant spoken language experience, showing that the effects of early language experience on the brain language system are supramodal.
Affiliation(s)
- Qi Cheng: Department of Linguistics, University of Washington, Seattle, WA 98195
- Austin Roth: Department of Linguistics, University of California San Diego, San Diego, CA 92093
- Eric Halgren: Department of Radiology, University of California San Diego, San Diego, CA 92093; Department of Neuroscience, University of California San Diego, San Diego, CA 92093
- Denise Klein: Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Montreal H3A 2B4, Canada
- Jen-Kai Chen: Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Montreal H3A 2B4, Canada
- Rachel I. Mayberry: Department of Linguistics, University of California San Diego, San Diego, CA 92093
4. Matchin W, İlkbaşaran D, Hatrak M, Roth A, Villwock A, Halgren E, Mayberry RI. The Cortical Organization of Syntactic Processing Is Supramodal: Evidence from American Sign Language. J Cogn Neurosci 2022;34:224-235. [PMID: 34964898] [PMCID: PMC8764739] [DOI: 10.1162/jocn_a_01790]
Abstract
Areas within the left-lateralized neural network for language have been found to be sensitive to syntactic complexity in spoken and written language. Previous research has revealed that these areas are active for sign language as well, but whether these areas are specifically responsive to syntactic complexity in sign language, independent of lexical processing, has yet to be established. To investigate this question, we used fMRI to neuroimage deaf native signers' comprehension of 180 sign strings in American Sign Language (ASL) with a picture-probe recognition task. The ASL strings were all six signs in length but varied at three levels of syntactic complexity: sign lists, two-word sentences, and complex sentences. Syntactic complexity significantly affected comprehension and memory, both behaviorally and neurally, by facilitating accuracy and response time on the picture-probe recognition task and eliciting a left-lateralized activation response pattern in the anterior and posterior superior temporal sulcus (aSTS and pSTS). Minimal or absent syntactic structure reduced picture-probe recognition and elicited activation in bilateral pSTS and occipital-temporal cortex. These results provide evidence from a sign language, ASL, that the combinatorial processing of anterior STS and pSTS is supramodal in nature. The results further suggest that the neurolinguistic processing of ASL is characterized by overlapping and separable neural systems for syntactic and lexical processing.
Affiliation(s)
- William Matchin: University of California San Diego; University of South Carolina, Columbia
- Agnes Villwock: University of California San Diego; Humboldt University of Berlin
5.
Abstract
The first 40 years of research on the neurobiology of sign languages (1960-2000) established that the same key left hemisphere brain regions support both signed and spoken languages, based primarily on evidence from signers with brain injury and, at the end of the 20th century, on evidence from emerging functional neuroimaging technologies (positron emission tomography and fMRI). Building on this earlier work, this review focuses on what we have learned about the neurobiology of sign languages in the last 15-20 years, what controversies remain unresolved, and directions for future research. Production and comprehension processes are addressed separately in order to capture whether and how output and input differences between sign and speech impact the neural substrates supporting language. In addition, the review includes aspects of language that are unique to sign languages, such as pervasive lexical iconicity, fingerspelling, linguistic facial expressions, and depictive classifier constructions. Summary sketches of the neural networks supporting sign language production and comprehension are provided with the hope that these will inspire future research as we begin to develop a more complete neurobiological model of sign language processing.
6. Finkl T, Hahne A, Friederici AD, Gerber J, Mürbe D, Anwander A. Language Without Speech: Segregating Distinct Circuits in the Human Brain. Cereb Cortex 2020;30:812-823. [PMID: 31373629] [DOI: 10.1093/cercor/bhz128]
Abstract
Language is a fundamental part of human cognition. The question of whether language is processed independently of speech, however, is still heavily discussed. The absence of speech in deaf signers offers the opportunity to disentangle language from speech in the human brain. Using probabilistic tractography, we compared brain structural connectivity of adult deaf signers who had learned sign language early in life to that of matched hearing controls. Quantitative comparison of the connectivity profiles revealed that the core language tracts did not differ between signers and controls, confirming that language is independent of speech. In contrast, pathways involved in the production and perception of speech displayed lower connectivity in deaf signers compared to hearing controls. These differences were located in tracts towards the left pre-supplementary motor area and the thalamus when seeding in Broca's area, and in ipsilateral parietal areas and the precuneus with seeds in left posterior temporal regions. Furthermore, the interhemispheric connectivity between the auditory cortices was lower in the deaf than in the hearing group, underlining the importance of the transcallosal connection for early auditory processes. The present results provide evidence for a functional segregation of the neural pathways for language and speech.
Affiliation(s)
- Theresa Finkl: Saxonian Cochlear Implant Centre, Phoniatrics and Audiology, Faculty of Medicine, Technische Universität Dresden, Fetscherstraße 74, Dresden, Germany
- Anja Hahne: Saxonian Cochlear Implant Centre, Phoniatrics and Audiology, Faculty of Medicine, Technische Universität Dresden, Fetscherstraße 74, Dresden, Germany
- Angela D Friederici: Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Johannes Gerber: Neuroradiology, Faculty of Medicine, Technische Universität Dresden, Dresden, Germany
- Dirk Mürbe: Department of Audiology and Phoniatrics, Charité-Universitätsmedizin, Berlin, Germany
- Alfred Anwander: Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
7. Chen L, Wang Y, Wen H. Numerical Magnitude Processing in Deaf Adolescents and Its Contribution to Arithmetical Ability. Front Psychol 2021;12:584183. [PMID: 33841229] [PMCID: PMC8026863] [DOI: 10.3389/fpsyg.2021.584183]
Abstract
Although most deaf individuals can use sign language or a mix of signed and spoken language, hearing loss still affects their language acquisition. Compensatory plasticity holds that the lack of auditory stimulation experienced by deaf individuals, such as those with congenital deafness, can be met by enhancements in visual cognition. Studies of hearing individuals have shown that visual form perception is the cognitive mechanism that could explain the association between numerical magnitude processing and arithmetic computation. We therefore examined numerical magnitude processing and its contribution to arithmetical ability in deaf adolescents, and explored the differences between congenital and acquired deafness. A total of 112 deaf adolescents (58 with congenital deafness) and 58 hearing adolescents performed a series of cognitive and mathematical tests. There were no significant differences between the congenital group and the hearing group, but the congenital group outperformed the acquired group in numerical magnitude processing (reaction time) and arithmetic computation. There was also a close association between numerical magnitude processing and arithmetic computation in all deaf adolescents; after controlling for demographic variables (age, gender, onset of hearing loss) and general cognitive abilities (non-verbal IQ, processing speed, reading comprehension), numerical magnitude processing predicted arithmetic computation in all deaf adolescents but not in the congenital group. The role of numerical magnitude processing (symbolic and non-symbolic) in deaf adolescents' mathematical performance should therefore receive attention in the training of arithmetical ability.
Affiliation(s)
- Lilan Chen: School of Psychology, Hainan Normal University, Haikou, China
- Yan Wang: Faculty of Education, Beijing Normal University, Beijing, China
- Hongbo Wen: Collaborative Innovation Center of Assessment Toward Basic Education Quality, Beijing Normal University, Beijing, China
8. Lieberman AM, Borovsky A. Lexical Recognition in Deaf Children Learning American Sign Language: Activation of Semantic and Phonological Features of Signs. Lang Learn 2020;70:935-973. [PMID: 33510545] [PMCID: PMC7837603] [DOI: 10.1111/lang.12409]
Abstract
Children learning language efficiently process single words and activate semantic, phonological, and other features of words during recognition. We investigated lexical recognition in deaf children acquiring American Sign Language (ASL) to determine how perceiving language in the visual-spatial modality affects lexical recognition. Twenty native- or early-exposed signing deaf children (ages 4 to 8 years) participated in a visual world eye-tracking study. Children were presented with a single ASL sign, a target picture, and three competitor pictures that varied in their phonological and semantic relationship to the target. Children shifted gaze to the target picture shortly after sign offset. Children showed robust evidence for activation of semantic but not phonological features of signs; in their behavioral responses, however, children were most susceptible to phonological competitors. These results demonstrate that single word recognition in ASL is largely parallel to spoken language recognition among children who are developing a mature lexicon.
Affiliation(s)
- Amy M Lieberman: Language and Literacy Department, Wheelock College of Education and Human Development, Boston University, 2 Silber Way, Boston, MA 02215
- Arielle Borovsky: Department of Speech, Language, and Hearing Sciences, Purdue University, 715 Clinic Drive, West Lafayette, IN 47907-2122
9. Leonard MK, Lucas B, Blau S, Corina DP, Chang EF. Cortical Encoding of Manual Articulatory and Linguistic Features in American Sign Language. Curr Biol 2020;30:4342-4351.e3. [PMID: 32888480] [DOI: 10.1016/j.cub.2020.08.048]
Abstract
The fluent production of a signed language requires exquisite coordination of sensory, motor, and cognitive processes. Similar to speech production, language produced with the hands by fluent signers appears effortless but reflects the precise coordination of both large-scale and local cortical networks. The organization and representational structure of the sensorimotor features underlying sign language phonology in these networks remain unknown. Here, we present a unique case study of high-density electrocorticography (ECoG) recordings from the cortical surface of a profoundly deaf signer during an awake craniotomy. While neural activity was recorded from sensorimotor cortex, the participant produced a large variety of movements in linguistic and transitional movement contexts. We found that at both the single-electrode and neural population levels, high-gamma activity reflected tuning for particular hand, arm, and face movements, which were organized along dimensions that are relevant for phonology in sign language. Decoding of manual articulatory features revealed a clear functional organization and population dynamics for these highly practiced movements. Furthermore, neural activity clearly differentiated linguistic and transitional movements, demonstrating encoding of language-relevant articulatory features. These results provide a novel and unique view of the fine-scale dynamics of complex and meaningful sensorimotor actions.
Affiliation(s)
- Matthew K Leonard: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; Center for Integrative Neuroscience, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA
- Ben Lucas: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; Center for Integrative Neuroscience, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA
- Shane Blau: Center for Mind and Brain, University of California, Davis, Davis, CA, USA; Department of Linguistics, University of California, Davis, Davis, CA, USA
- David P Corina: Center for Mind and Brain, University of California, Davis, Davis, CA, USA; Department of Linguistics, University of California, Davis, Davis, CA, USA
- Edward F Chang: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; Center for Integrative Neuroscience, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA
10. Matchin W, Wood E. Syntax-Sensitive Regions of the Posterior Inferior Frontal Gyrus and the Posterior Temporal Lobe Are Differentially Recruited by Production and Perception. Cereb Cortex Commun 2020;1:tgaa029. [PMID: 34296103] [PMCID: PMC8152856] [DOI: 10.1093/texcom/tgaa029]
Abstract
Matchin and Hickok (2020) proposed that the left posterior inferior frontal gyrus (PIFG) and the left posterior temporal lobe (PTL) both play a role in syntactic processing, broadly construed, attributing distinct functions to these regions with respect to production and perception. Consistent with this hypothesis, functional dissociations between these regions have been demonstrated with respect to lesion-symptom mapping in aphasia. However, neuroimaging studies of syntactic comprehension typically show similar activations in these regions. In order to identify whether these regions show distinct activation patterns with respect to syntactic perception and production, we performed an fMRI study contrasting the subvocal articulation and perception of structured jabberwocky phrases (syntactic), sequences of real words (lexical), and sequences of pseudowords (phonological). We defined two sets of language-selective regions of interest (ROIs) in individual subjects for the PIFG and the PTL using the contrasts [syntactic > lexical] and [syntactic > phonological]. We found robust significant interactions of comprehension and production between these two regions at the syntactic level, for both sets of language-selective ROIs. This suggests a core difference in the function of these regions with respect to production and perception, consistent with the lesion literature.
Affiliation(s)
- William Matchin: Communication Sciences and Disorders, University of South Carolina, Columbia, SC 29208, USA
- Emily Wood: Communication Sciences and Disorders, University of South Carolina, Columbia, SC 29208, USA
11. Crossmodal reorganisation in deafness: Mechanisms for functional preservation and functional change. Neurosci Biobehav Rev 2020;113:227-237. [DOI: 10.1016/j.neubiorev.2020.03.019]
12.
Abstract
Syntax, the structure of sentences, enables humans to express an infinite range of meanings through finite means. The neurobiology of syntax has been intensely studied but with little consensus. Two main candidate regions have been identified: the posterior inferior frontal gyrus (pIFG) and the posterior middle temporal gyrus (pMTG). Integrating research in linguistics, psycholinguistics, and neuroscience, we propose a neuroanatomical framework for syntax that attributes distinct syntactic computations to these regions in a unified model. The key theoretical advances are adopting a modern lexicalized view of syntax in which the lexicon and syntactic rules are intertwined, and recognizing a computational asymmetry in the role of syntax during comprehension and production. Our model postulates a hierarchical lexical-syntactic function to the pMTG, which interconnects previously identified speech perception and conceptual-semantic systems in the temporal and inferior parietal lobes, crucial for both sentence production and comprehension. These relational hierarchies are transformed via the pIFG into morpho-syntactic sequences, primarily tied to production. We show how this architecture provides a better account of the full range of data and is consistent with recent proposals regarding the organization of phonological processes in the brain.
Affiliation(s)
- William Matchin: Department of Communication Sciences and Disorders, University of South Carolina, Columbia, SC 29208, USA
- Gregory Hickok: Department of Cognitive Sciences, University of California, Irvine, Irvine, CA 92697, USA; Department of Language Science, University of California, Irvine, Irvine, CA 92697, USA
13. Malaia EA, Krebs J, Roehm D, Wilbur RB. Age of acquisition effects differ across linguistic domains in sign language: EEG evidence. Brain Lang 2020;200:104708. [PMID: 31698097] [PMCID: PMC6934356] [DOI: 10.1016/j.bandl.2019.104708]
Abstract
One of the key questions in the study of human language acquisition is the extent to which the development of neural processing networks for different components of language is modulated by exposure to linguistic stimuli. Sign languages offer a unique perspective on this issue, because prelingually Deaf children who receive access to complex linguistic input later in life provide a window into brain maturation in the absence of language, and into the subsequent neuroplasticity of neurolinguistic networks during late language learning. While the duration of sensitive periods for the acquisition of linguistic subsystems (sound, vocabulary, and syntactic structure) is well established on the basis of L2 acquisition in spoken language, the relative timelines for the development of neural processing networks for linguistic sub-domains in sign languages are unknown. We examined the neural responses of a group of Deaf signers, who had received access to signed input at varying ages, to three linguistic phenomena at the levels of classifier signs, syntactic structure, and information structure. The amplitude of the N400 response to the marked word order condition correlated negatively with the age of acquisition for syntax and information structure, indicating increased cognitive load in these conditions. Additionally, the combination of behavioral and neural data suggested that late learners preferentially relied on classifiers over word order for meaning extraction. This suggests that late acquisition of sign language significantly increases cognitive load during the analysis of syntax and information structure, but not word-level meaning.
Affiliation(s)
- Evie A Malaia: Department of Communicative Disorders, University of Alabama, Speech and Hearing Clinic, 700 Johnny Stallings Drive, Tuscaloosa, AL 35401, USA
- Julia Krebs: Research Group Neurobiology of Language, Department of Linguistics, University of Salzburg, Erzabt-Klotz-Straße 1, 5020 Salzburg, Austria; Centre for Cognitive Neuroscience (CCNS), University of Salzburg, Erzabt-Klotz-Straße 1, 5020 Salzburg, Austria
- Dietmar Roehm: Research Group Neurobiology of Language, Department of Linguistics, University of Salzburg, Erzabt-Klotz-Straße 1, 5020 Salzburg, Austria; Centre for Cognitive Neuroscience (CCNS), University of Salzburg, Erzabt-Klotz-Straße 1, 5020 Salzburg, Austria
- Ronnie B Wilbur: Department of Linguistics, Purdue University, Lyles-Porter Hall, West Lafayette, IN 47907-2122, USA; Department of Speech, Language, and Hearing Sciences, Purdue University, Lyles-Porter Hall, West Lafayette, IN 47907-2122, USA
14. Cheng Q, Roth A, Halgren E, Mayberry RI. Effects of Early Language Deprivation on Brain Connectivity: Language Pathways in Deaf Native and Late First-Language Learners of American Sign Language. Front Hum Neurosci 2019;13:320. [PMID: 31607879] [PMCID: PMC6761297] [DOI: 10.3389/fnhum.2019.00320]
Abstract
Previous research has identified ventral and dorsal white matter tracts as being crucial for language processing; their maturation correlates with increased language processing capacity. Unknown is whether the growth or maintenance of these language-relevant pathways is shaped by language experience in early life. To investigate the effects of early language deprivation and the sensory-motor modality of language on white matter tracts, we examined the white matter connectivity of language-relevant pathways in congenitally deaf people with or without early access to language. We acquired diffusion tensor imaging (DTI) data from two groups of individuals who experienced language from birth, twelve deaf native signers of American Sign Language (ASL) and twelve hearing L2 signers of ASL (native English speakers), and from three well-studied individual cases who experienced minimal language during childhood. The results indicate that the sensory-motor modality of early language experience does not affect the white matter microstructure between crucial language regions. Both groups with early language experience, deaf and hearing, showed leftward laterality in the two language-related tracts. However, all three cases with early language deprivation showed altered white matter microstructure, especially in the left dorsal arcuate fasciculus (AF) pathway.
Affiliation(s)
- Qi Cheng: Department of Linguistics, University of California, San Diego, San Diego, CA, United States
- Austin Roth: Department of Linguistics, University of California, San Diego, San Diego, CA, United States; Department of Radiology, University of California, San Diego, San Diego, CA, United States
- Eric Halgren: Department of Radiology, University of California, San Diego, San Diego, CA, United States
- Rachel I. Mayberry: Department of Linguistics, University of California, San Diego, San Diego, CA, United States
15. Qiao Y, Li X, Shen H, Zhang X, Sun Y, Hao W, Guo B, Ni D, Gao Z, Guo H, Shang Y. Downward cross-modal plasticity in single-sided deafness. Neuroimage 2019;197:608-617. [DOI: 10.1016/j.neuroimage.2019.05.031]
16. Stroh AL, Rösler F, Dormal G, Salden U, Skotara N, Hänel-Faulhaber B, Röder B. Neural correlates of semantic and syntactic processing in German Sign Language. Neuroimage 2019;200:231-241. [PMID: 31220577] [DOI: 10.1016/j.neuroimage.2019.06.025]
Abstract
The study of deaf and hearing native users of signed languages can offer unique insights into how biological constraints and environmental input interact to shape the neural bases of language processing. Here, we use functional magnetic resonance imaging (fMRI) to address two questions: (1) Do semantic and syntactic processing in a signed language rely on anatomically and functionally distinct neural substrates, as has been shown for spoken languages? and (2) Does hearing status affect the neural correlates of these two types of linguistic processing? Deaf and hearing native signers performed a sentence judgement task on German Sign Language (Deutsche Gebärdensprache: DGS) sentences that were correct or contained either syntactic or semantic violations. We hypothesized that processing of semantic and syntactic violations in DGS relies on distinct neural substrates, as has been shown for spoken languages. Moreover, we hypothesized that effects of hearing status would be observed within auditory regions, as deaf native signers have been shown to activate auditory areas to a greater extent than hearing native signers when processing a signed language. Semantic processing activated low-level visual areas and the left inferior frontal gyrus (IFG), suggesting both modality-dependent and modality-independent processing mechanisms. Syntactic processing elicited increased activation in the right supramarginal gyrus (SMG). Moreover, psychophysiological interaction (PPI) analyses revealed a cluster in left middle occipital regions showing increased functional coupling with the right SMG during syntactic relative to semantic processing, possibly indicating spatial processing mechanisms that are specific to signed syntax. Effects of hearing status were observed in the right superior temporal cortex (STC): deaf but not hearing native signers showed greater activation for semantic violations than for syntactic violations in this region. Taken together, the present findings suggest that the neural correlates of language processing are partly determined by biological constraints, but that they may additionally be influenced by the unique processing demands of the language modality and different sensory experiences.
Affiliation(s)
- Anna-Lena Stroh: Biological Psychology and Neuropsychology, University of Hamburg, Germany
- Frank Rösler: Biological Psychology and Neuropsychology, University of Hamburg, Germany
- Giulia Dormal: Biological Psychology and Neuropsychology, University of Hamburg, Germany
- Uta Salden: Biological Psychology and Neuropsychology, University of Hamburg, Germany
- Nils Skotara: Biological Psychology and Neuropsychology, University of Hamburg, Germany
- Barbara Hänel-Faulhaber: Biological Psychology and Neuropsychology, University of Hamburg, Germany; Special Education, University of Hamburg, Germany
- Brigitte Röder: Biological Psychology and Neuropsychology, University of Hamburg, Germany
17. Mayberry RI, Kluender R. Rethinking the critical period for language: New insights into an old question from American Sign Language. Biling (Camb Engl) 2018;21:886-905. [PMID: 30643489] [PMCID: PMC6329394] [DOI: 10.1017/s1366728917000724]
Abstract
The hypothesis that children surpass adults in long-term second-language proficiency is accepted as evidence for a critical period for language. However, the scope and nature of a critical period for language has been the subject of considerable debate. The controversy centers on whether the age-related decline in ultimate second-language proficiency is evidence for a critical period or something else. Here we argue that age-onset effects for first vs. second language outcome are largely different. We show this by examining psycholinguistic studies of ultimate attainment in L2 vs. L1 learners, longitudinal studies of adolescent L1 acquisition, and neurolinguistic studies of late L2 and L1 learners. This research indicates that L1 acquisition arises from post-natal brain development interacting with environmental linguistic experience. By contrast, L2 learning after early childhood is scaffolded by prior childhood L1 acquisition, both linguistically and neurally, making it a less clear test of the critical period for language.
Affiliation(s)
- Robert Kluender: Department of Linguistics, University of California San Diego
18. Blanco-Elorrieta E, Emmorey K, Pylkkänen L. Language switching decomposed through MEG and evidence from bimodal bilinguals. Proc Natl Acad Sci U S A 2018;115:9708-9713. [PMID: 30206151] [PMCID: PMC6166835] [DOI: 10.1073/pnas.1809779115]
Abstract
A defining feature of human cognition is the ability to quickly and accurately alternate between complex behaviors. One striking example of such an ability is bilinguals' capacity to rapidly switch between languages. This switching process minimally comprises disengagement from the previous language and engagement in a new language. Previous studies have associated language switching with increased prefrontal activity. However, it is unknown how the subcomputations of language switching individually contribute to these activities, because few natural situations enable full separation of disengagement and engagement processes during switching. We recorded magnetoencephalography (MEG) from American Sign Language-English bilinguals who often sign and speak simultaneously, which makes it possible to dissociate engagement and disengagement. MEG data showed that turning a language "off" (switching from simultaneous to single language production) led to increased activity in the anterior cingulate cortex (ACC) and dorsolateral prefrontal cortex (dlPFC), while turning a language "on" (switching from one language to two simultaneously) did not. The distinct representational nature of these on and off processes was also supported by multivariate decoding analyses. Additionally, Granger causality analyses revealed that (i) compared with "turning on" a language, "turning off" required stronger connectivity between left and right dlPFC, and (ii) dlPFC activity predicted ACC activity, consistent with models in which the dlPFC is a top-down modulator of the ACC. These results suggest that the burden of language switching lies in disengagement from the previous language as opposed to engaging a new language and that, in the absence of motor constraints, producing two languages simultaneously is not necessarily more cognitively costly than producing one.
Affiliation(s)
- Esti Blanco-Elorrieta: Department of Psychology, New York University, New York, NY 10003; NYU Abu Dhabi Institute, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Karen Emmorey: School of Speech, Language, and Hearing Sciences, San Diego State University, San Diego, CA 92182
- Liina Pylkkänen: Department of Psychology, New York University, New York, NY 10003; NYU Abu Dhabi Institute, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates; Department of Linguistics, New York University, New York, NY 10003
19. Language and Sensory Neural Plasticity in the Superior Temporal Cortex of the Deaf. Neural Plast 2018;2018:9456891. [PMID: 29853853] [PMCID: PMC5954881] [DOI: 10.1155/2018/9456891]
Abstract
Visual stimuli are known to activate the auditory cortex of deaf people, presenting evidence of cross-modal plasticity. However, the mechanisms underlying such plasticity are poorly understood. In this functional MRI study, we presented two types of visual stimuli, language stimuli (words, sign language, and lip-reading) and a general stimulus (a checkerboard), to investigate neural reorganization in the superior temporal cortex (STC) of deaf subjects and hearing controls. We found that, only in the deaf subjects, all visual stimuli activated the STC. The cross-modal activation induced by the checkerboard was mainly due to a sensory component via a feed-forward pathway from the thalamus and primary visual cortex, and was positively correlated with duration of deafness, indicating a consequence of pure sensory deprivation. In contrast, the STC activity evoked by language stimuli was functionally connected to both the visual cortex and the frontotemporal areas, and was highly correlated with the learning of sign language, suggesting a strong language component via a possible feedback modulation. While the sensory component exhibited specificity to features of a visual stimulus (e.g., selective to the form of words, bodies, or faces) and the language (semantic) component appeared to recruit a common frontotemporal neural network, the two components converged on the STC and caused plasticity with different multivoxel activity patterns. In summary, the present study showed plausible neural pathways for auditory reorganization, demonstrated correlations between activations of the reorganized cortical areas and developmental factors, and provided unique evidence towards the understanding of the neural circuits involved in cross-modal plasticity.
20. Shared neural correlates for building phrases in signed and spoken language. Sci Rep 2018;8:5492. [PMID: 29615785] [PMCID: PMC5882945] [DOI: 10.1038/s41598-018-23915-0]
Abstract
Research on the mental representation of human language has convincingly shown that sign languages are structured similarly to spoken languages. However, whether the same neurobiology underlies the online construction of complex linguistic structures in sign and speech remains unknown. To investigate this question with maximally controlled stimuli, we studied the production of minimal two-word phrases in sign and speech. Signers and speakers viewed the same pictures during magnetoencephalography recording and named them with semantically identical expressions. For both signers and speakers, phrase building engaged left anterior temporal and ventromedial cortices with similar timing, despite different linguistic articulators. Thus the neurobiological similarity of sign and speech goes beyond gross measures such as lateralization: the same fronto-temporal network achieves the planning of structured linguistic expressions.
21. Meade G, Lee B, Midgley KJ, Holcomb PJ, Emmorey K. Phonological and semantic priming in American Sign Language: N300 and N400 effects. Lang Cogn Neurosci 2018;33:1092-1106. [PMID: 30662923] [PMCID: PMC6335044] [DOI: 10.1080/23273798.2018.1446543]
Abstract
This study investigated the electrophysiological signatures of phonological and semantic priming in American Sign Language (ASL). Deaf signers made semantic relatedness judgments to pairs of ASL signs separated by a 1300 ms prime-target SOA. Phonologically related sign pairs shared two of three phonological parameters (handshape, location, and movement). Target signs preceded by phonologically related and semantically related prime signs elicited smaller negativities within the N300 and N400 windows than those preceded by unrelated primes. N300 effects, typically reported in studies of picture processing, are interpreted to reflect the mapping from the visual features of the signs to more abstract linguistic representations. N400 effects, consistent with rhyme priming effects in the spoken language literature, are taken to index lexico-semantic processes that appear to be largely modality independent. Together, these results highlight both the unique visual-manual nature of sign languages and the linguistic processing characteristics they share with spoken languages.
Affiliation(s)
- Gabriela Meade: Joint Doctoral Program in Language and Communicative Disorders, San Diego State University and University of California, San Diego, San Diego, CA, USA
- Brittany Lee: Joint Doctoral Program in Language and Communicative Disorders, San Diego State University and University of California, San Diego, San Diego, CA, USA
- Phillip J. Holcomb: Department of Psychology, San Diego State University, San Diego, CA, USA
- Karen Emmorey: School of Speech, Language, and Hearing Sciences, San Diego State University, San Diego, CA, USA
22. Neurolinguistic processing when the brain matures without language. Cortex 2018;99:390-403. [PMID: 29406150] [DOI: 10.1016/j.cortex.2017.12.011]
Abstract
The extent to which development of the brain language system is modulated by the temporal onset of linguistic experience relative to post-natal brain maturation is unknown. This crucial question cannot be investigated with the hearing population because spoken language is ubiquitous in the environment of newborns. Deafness blocks infants' language experience in a spoken form, and in a signed form when it is absent from the environment. Using anatomically constrained magnetoencephalography (aMEG), we neuroimaged lexico-semantic processing in a deaf adult whose linguistic experience began in young adulthood. Despite using language for 30 years after initially learning it, this individual exhibited limited neural response in the perisylvian language areas to signed words during the 300-400 ms temporal window, suggesting that the brain language system requires linguistic experience during brain growth to achieve functionality. The present case primarily exhibited neural activations in response to signed words in dorsolateral superior parietal and occipital areas bilaterally, replicating the neural patterns exhibited by two previously reported cases who matured without language until early adolescence (Ferjan Ramirez N, Leonard MK, Torres C, Hatrak M, Halgren E, Mayberry RI. 2014). The dorsal pathway appears to assume the task of processing words when the brain matures without experiencing the form-meaning network of a language.
23. Berger C, Kühne D, Scheper V, Kral A. Congenital deafness affects deep layers in primary and secondary auditory cortex. J Comp Neurol 2017;525. [PMID: 28643417] [PMCID: PMC5599951] [DOI: 10.1002/cne.24267]
Abstract
Congenital deafness leads to functional deficits in the auditory cortex for which early cochlear implantation can effectively compensate. Most of these deficits have been demonstrated functionally. Furthermore, the majority of previous studies on deafness have involved the primary auditory cortex; knowledge of higher-order areas is limited to effects of cross-modal reorganization. In this study, we compared the cortical cytoarchitecture of four cortical areas in adult hearing and congenitally deaf cats (CDCs): the primary auditory field A1; two secondary auditory fields, namely the dorsal zone and the second auditory field (A2); and a reference visual association field (area 7), in the same sections, stained using either Nissl or SMI-32 antibodies. The general cytoarchitectonic pattern and the area-specific characteristics in the auditory cortex remained unchanged in animals with congenital deafness. Whereas area 7 did not differ between the groups investigated, all auditory fields were slightly thinner in CDCs, this being caused by reduced thickness of layers IV-VI. The study documents that, while the cytoarchitectonic patterns are in general independent of sensory experience, reduced layer thickness is observed in layer IV and the infragranular layers of both primary and higher-order auditory fields. The study demonstrates differences in the effects of congenital deafness between supragranular and other cortical layers, but similar dystrophic effects in all investigated auditory fields.
Affiliation(s)
- Christoph Berger: Institute of AudioNeuroTechnology & Department of Experimental Otology, ENT Clinics, School of Medicine, Hannover Medical University, Hannover, Germany
- Daniela Kühne: Institute of AudioNeuroTechnology & Department of Experimental Otology, ENT Clinics, School of Medicine, Hannover Medical University, Hannover, Germany
- Verena Scheper: Institute of AudioNeuroTechnology & Department of Experimental Otology, ENT Clinics, School of Medicine, Hannover Medical University, Hannover, Germany
- Andrej Kral: Institute of AudioNeuroTechnology & Department of Experimental Otology, ENT Clinics, School of Medicine, Hannover Medical University, Hannover, Germany; School of Behavioral and Brain Sciences, The University of Texas, Dallas, USA
24. Evidence from Blindness for a Cognitively Pluripotent Cortex. Trends Cogn Sci 2017;21:637-648. [DOI: 10.1016/j.tics.2017.06.003]
25. Cardin V, Rudner M, De Oliveira RF, Andin J, Su MT, Beese L, Woll B, Rönnberg J. The Organization of Working Memory Networks is Shaped by Early Sensory Experience. Cereb Cortex 2017;28:3540-3554. [DOI: 10.1093/cercor/bhx222]
Affiliation(s)
- Velia Cardin: Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Deafness Cognition and Language Research Centre, Department of Experimental Psychology, University College London, 49 Gordon Square, London, UK; School of Psychology, University of East Anglia, Norwich Research Park, Norwich, UK
- Mary Rudner: Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Rita F De Oliveira: School of Applied Science, London South Bank University, 103 Borough Road, London, UK
- Josefine Andin: Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Merina T Su: Developmental Neurosciences Programme, UCL GOS Institute of Child Health, 30 Guilford Street, London, UK
- Lilli Beese: Deafness Cognition and Language Research Centre, Department of Experimental Psychology, University College London, 49 Gordon Square, London, UK
- Bencie Woll: Deafness Cognition and Language Research Centre, Department of Experimental Psychology, University College London, 49 Gordon Square, London, UK
- Jerker Rönnberg: Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
26. Cross-Modal Plasticity in Higher-Order Auditory Cortex of Congenitally Deaf Cats Does Not Limit Auditory Responsiveness to Cochlear Implants. J Neurosci 2016;36:6175-6185. [PMID: 27277796] [DOI: 10.1523/jneurosci.0046-16.2016]
Abstract
Congenital sensory deprivation can lead to reorganization of the deprived cortical regions by another sensory system. Such cross-modal reorganization may either compete with or complement the "original" inputs to the deprived area after sensory restoration and can thus be either adverse or beneficial for sensory restoration. In congenital deafness, a previous inactivation study documented that supranormal visual behavior was mediated by higher-order auditory fields in congenitally deaf cats (CDCs). However, both the auditory responsiveness of "deaf" higher-order fields and interactions between the reorganized and the original sensory input remain unknown. Here, we studied a higher-order auditory field responsible for the supranormal visual function in CDCs, the auditory dorsal zone (DZ). Hearing cats and visual cortical areas served as controls. Using mapping with microelectrode arrays, we demonstrate spatially scattered visual (cross-modal) responsiveness in the DZ, but show that this did not interfere substantially with robust auditory responsiveness elicited through cochlear implants. Visually responsive and auditory-responsive neurons in the deaf auditory cortex formed two distinct populations that did not show bimodal interactions. Therefore, cross-modal plasticity in the deaf higher-order auditory cortex had limited effects on auditory inputs. The moderate number of scattered cross-modally responsive neurons could be the consequence of exuberant connections formed during development that were not pruned postnatally in deaf cats. Although juvenile brain circuits are modified extensively by experience, the main driving input to the cross-modally (visually) reorganized higher-order auditory cortex remained auditory in congenital deafness. SIGNIFICANCE STATEMENT: In a common view, the "unused" auditory cortex of deaf individuals is reorganized to serve a compensatory sensory function during development. According to this view, cross-modal plasticity takes over the unused cortex and reassigns it to the remaining senses. Therefore, cross-modal plasticity might conflict with restoration of auditory function with cochlear implants. It is unclear whether the cross-modally reorganized auditory areas lose auditory responsiveness. We show that the presence of cross-modal plasticity in a higher-order auditory area does not reduce the auditory responsiveness of that area. Visual reorganization was moderate and spatially scattered, and there were no interactions between cross-modally reorganized visual and auditory inputs. These results indicate that cross-modal reorganization is less detrimental for neurosensory restoration than previously thought.
Collapse
|
27
|
Functional selectivity for face processing in the temporal voice area of early deaf individuals. Proc Natl Acad Sci U S A 2017; 114:E6437-E6446. [PMID: 28652333 DOI: 10.1073/pnas.1618287114] [Citation(s) in RCA: 56] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Brain systems supporting face and voice processing both contribute to the extraction of important information for social interaction (e.g., person identity). How does the brain reorganize when one of these channels is absent? Here, we explore this question by combining behavioral and multimodal neuroimaging measures (magneto-encephalography and functional imaging) in a group of early deaf humans. We show enhanced selective neural response for faces and for individual face coding in a specific region of the auditory cortex that is typically specialized for voice perception in hearing individuals. In this region, selectivity to face signals emerges early in the visual processing hierarchy, shortly after typical face-selective responses in the ventral visual pathway. Functional and effective connectivity analyses suggest reorganization in long-range connections from early visual areas to the face-selective temporal area in individuals with early and profound deafness. Altogether, these observations demonstrate that regions that typically specialize for voice processing in the hearing brain preferentially reorganize for face processing in born-deaf people. Our results support the idea that cross-modal plasticity in the case of early sensory deprivation relates to the original functional specialization of the reorganized brain regions.
Collapse
|
28
|
Abstract
Despite immense variability across languages, people can learn to understand any human language, spoken or signed. What neural mechanisms allow people to comprehend language across sensory modalities? When people listen to speech, electrophysiological oscillations in auditory cortex entrain to slow (~8 Hz) fluctuations in the acoustic envelope. Entrainment to the speech envelope may reflect mechanisms specialized for auditory perception. Alternatively, flexible entrainment may be a general-purpose cortical mechanism that optimizes sensitivity to rhythmic information regardless of modality. Here, we test these proposals by examining cortical coherence to visual information in sign language. First, we develop a metric to quantify visual change over time. We find quasiperiodic fluctuations in sign language, characterized by lower frequencies than fluctuations in speech. Next, we test for entrainment of neural oscillations to visual change in sign language, using electroencephalography (EEG) in fluent speakers of American Sign Language (ASL) as they watch videos in ASL. We find significant cortical entrainment to visual oscillations in sign language below 5 Hz, peaking at ~1 Hz. Coherence to sign is strongest over occipital and parietal cortex, in contrast to speech, where coherence is strongest over the auditory cortex. Nonsigners also show coherence to sign language, but entrainment at frontal sites is reduced relative to fluent signers. These results demonstrate that flexible cortical entrainment to language does not depend on neural processes that are specific to auditory speech perception. Low-frequency oscillatory entrainment may reflect a general cortical mechanism that maximizes sensitivity to informational peaks in time-varying signals.
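The entrainment test described here is, at bottom, spectral coherence between a brain signal and a stimulus time series, evaluated at low frequencies. A minimal Python sketch of that computation (simulated signals stand in for the EEG channel and the visual-change metric; the ~1 Hz rhythm, sampling rate, and segment length are assumptions for illustration, not the study's parameters):

import numpy as np
from scipy.signal import coherence

fs = 256                       # sampling rate in Hz (assumed)
t = np.arange(0, 300, 1 / fs)  # five minutes of signal
rng = np.random.default_rng(1)

# Stand-ins: a quasiperiodic ~1 Hz visual-change series and an EEG
# channel that partially follows it.
visual_change = np.sin(2 * np.pi * 1.0 * t) + rng.normal(0, 1, t.size)
eeg = 0.5 * np.sin(2 * np.pi * 1.0 * t + 0.4) + rng.normal(0, 1, t.size)

# Magnitude-squared coherence in Welch segments; long segments (8 s)
# give the frequency resolution needed below 5 Hz.
f, coh = coherence(eeg, visual_change, fs=fs, nperseg=fs * 8)
low = f < 5
print("peak coherence below 5 Hz at %.2f Hz" % f[low][np.argmax(coh[low])])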
Collapse
|
29
|
Kral A, Yusuf PA, Land R. Higher-order auditory areas in congenital deafness: Top-down interactions and corticocortical decoupling. Hear Res 2017; 343:50-63. [DOI: 10.1016/j.heares.2016.08.017] [Citation(s) in RCA: 43] [Impact Index Per Article: 6.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/26/2016] [Revised: 07/25/2016] [Accepted: 08/29/2016] [Indexed: 11/16/2022]
|
30
|
Ferjan Ramirez N, Leonard MK, Davenport TS, Torres C, Halgren E, Mayberry RI. Neural Language Processing in Adolescent First-Language Learners: Longitudinal Case Studies in American Sign Language. Cereb Cortex 2016; 26:1015-26. [PMID: 25410427 PMCID: PMC4737603 DOI: 10.1093/cercor/bhu273] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
One key question in neurolinguistics is the extent to which the neural processing system for language requires linguistic experience during early life to develop fully. We conducted a longitudinal anatomically constrained magnetoencephalography (aMEG) analysis of lexico-semantic processing in 2 deaf adolescents who had no sustained language input until 14 years of age, when they became fully immersed in American Sign Language. After 2 to 3 years of language experience, the adolescents' neural responses to signed words were highly atypical, localizing mainly to right dorsal frontoparietal regions and often responding more strongly to semantically primed words (Ferjan Ramirez N, Leonard MK, Torres C, Hatrak M, Halgren E, Mayberry RI. 2014. Neural language processing in adolescent first-language learners. Cereb Cortex. 24 (10): 2772-2783). Here, we show that after an additional 15 months of language experience, the adolescents' neural responses remained atypical in terms of polarity. While their responses to less familiar signed words still showed atypical localization patterns, the localization of responses to highly familiar signed words became more concentrated in the left perisylvian language network. Our findings suggest that the timing of language experience affects the organization of neural language processing; however, even in adolescence, language representation in the human brain continues to evolve with experience.
Collapse
Affiliation(s)
- Naja Ferjan Ramirez
- Department of Linguistics
- Multimodal Imaging Laboratory
- Institute for Learning and Brain Sciences, University of Washington, Seattle, WA 98195, USA
| | - Matthew K. Leonard
- Multimodal Imaging Laboratory
- Department of Radiology
- Department of Neurological Surgery, University of California, San Francisco, CA 94158, USA
| | | | | | - Eric Halgren
- Multimodal Imaging Laboratory
- Department of Radiology
- Department of Neuroscience and
- Kavli Institute for Brain and Mind, University of California, San Diego, La Jolla, CA 92093, USA
| | | |
Collapse
|
31
|
Williams JT, Darcy I, Newman SD. Modality-specific processing precedes amodal linguistic processing during L2 sign language acquisition: A longitudinal study. Cortex 2016; 75:56-67. [DOI: 10.1016/j.cortex.2015.11.015] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2015] [Revised: 09/22/2015] [Accepted: 11/17/2015] [Indexed: 12/13/2022]
|
32
|
Williams JT, Darcy I, Newman SD. Bimodal bilingualism as multisensory training?: Evidence for improved audiovisual speech perception after sign language exposure. Brain Res 2016; 1633:101-110. [DOI: 10.1016/j.brainres.2015.12.046] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2015] [Revised: 12/15/2015] [Accepted: 12/19/2015] [Indexed: 11/25/2022]
|
33
|
Higgins M, Lieberman AM. Deaf Students as a Linguistic and Cultural Minority: Shifting Perspectives and Implications for Teaching and Learning. J Educ (Boston) 2016; 196:9-18. [PMID: 32782418 PMCID: PMC7416902 DOI: 10.1177/002205741619600103] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Deaf children have traditionally been perceived and educated as a special-needs population. Over the past several decades, a number of factors have converged to enable a shift in perspective to one in which deaf children are viewed as a cultural and linguistic minority, and the education of deaf children is approached from a bilingual framework. In this article, we present the historical context in which such shifts in perspective have taken place and describe the linguistic, social, and cultural factors that shape a bilingual approach to deaf education. We further discuss the implications of a linguistic and cultural minority perspective of deaf children on language development, teacher preparation, and educational policy.
Collapse
Affiliation(s)
- Michael Higgins
- Kendall Demonstration Elementary School at Gallaudet University
| | - Amy M Lieberman
- Deaf Studies Program at Boston University, School of Education
| |
Collapse
|
34
|
MacSweeney M, Cardin V. What is the function of auditory cortex without auditory input? Brain 2015; 138:2468-70. [PMID: 26304150 DOI: 10.1093/brain/awv197] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Affiliation(s)
- Mairéad MacSweeney
- Institute of Cognitive Neuroscience, University College London
- ESRC Deafness, Cognition and Language Research Centre, University College London
| | - Velia Cardin
- ESRC Deafness, Cognition and Language Research Centre, University College London
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden
| |
Collapse
|
35
|
Cardin V, Smittenaar RC, Orfanidou E, Rönnberg J, Capek CM, Rudner M, Woll B. Differential activity in Heschl's gyrus between deaf and hearing individuals is due to auditory deprivation rather than language modality. Neuroimage 2015; 124:96-106. [PMID: 26348556 DOI: 10.1016/j.neuroimage.2015.08.073] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2015] [Revised: 08/23/2015] [Accepted: 08/24/2015] [Indexed: 10/23/2022] Open
Abstract
Sensory cortices undergo crossmodal reorganisation as a consequence of sensory deprivation. Congenital deafness in humans represents a particular case with respect to other types of sensory deprivation, because cortical reorganisation is not only a consequence of auditory deprivation, but also of language-driven mechanisms. Visual crossmodal plasticity has been found in secondary auditory cortices of deaf individuals, but it is still unclear if reorganisation also takes place in primary auditory areas, and how this relates to language modality and auditory deprivation. Here, we dissociated the effects of language modality and auditory deprivation on crossmodal plasticity in Heschl's gyrus as a whole, and in cytoarchitectonic region Te1.0 (likely to contain the core auditory cortex). Using fMRI, we measured the BOLD response to viewing sign language in congenitally or early deaf individuals with and without sign language knowledge, and in hearing controls. Results show that differences between hearing and deaf individuals are due to a reduction in activation caused by visual stimulation in the hearing group, which is more significant in Te1.0 than in Heschl's gyrus as a whole. Furthermore, differences between deaf and hearing groups are due to auditory deprivation, and there is no evidence that the modality of language used by deaf individuals contributes to crossmodal plasticity in Heschl's gyrus.
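Stripped of the fMRI modeling, the key comparison here is a between-group test on responses extracted from an anatomically defined region. A toy Python sketch of such an ROI contrast (per-participant betas are simulated; group sizes, means, and the two-group simplification are assumptions, since the study actually modeled deaf signers, deaf non-signers, and hearing controls):

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)
# Hypothetical mean BOLD betas for viewing sign language, one value per
# participant, extracted from a Te1.0 mask.
deaf = rng.normal(0.05, 0.30, 25)
hearing = rng.normal(-0.25, 0.30, 25)  # deactivation to visual input

t, p = ttest_ind(deaf, hearing)
print(f"t = {t:.2f}, p = {p:.4f}")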
Collapse
Affiliation(s)
- Velia Cardin
- Deafness, Cognition and Language Research Centre, 49 Gordon Square, University College London, London WC1H 0BT, UK; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden.
| | - Rebecca C Smittenaar
- Experimental Psychology, 26 Bedford Way, University College London, London WC1H 0AP, UK
| | - Eleni Orfanidou
- Deafness, Cognition and Language Research Centre, 49 Gordon Square, University College London, London WC1H 0BT, UK; School of Psychology, University of Crete, Greece
| | - Jerker Rönnberg
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden
| | - Cheryl M Capek
- School of Psychological Sciences, University of Manchester, Manchester M13 9PL, UK
| | - Mary Rudner
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden
| | - Bencie Woll
- Deafness, Cognition and Language Research Centre, 49 Gordon Square, University College London, London WC1H 0BT, UK
| |
Collapse
|
36
|
Halgren E, Kaestner E, Marinkovic K, Cash SS, Wang C, Schomer DL, Madsen JR, Ulbert I. Laminar profile of spontaneous and evoked theta: Rhythmic modulation of cortical processing during word integration. Neuropsychologia 2015; 76:108-24. [PMID: 25801916 PMCID: PMC4575841 DOI: 10.1016/j.neuropsychologia.2015.03.021] [Citation(s) in RCA: 38] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2014] [Revised: 03/18/2015] [Accepted: 03/18/2015] [Indexed: 01/01/2023]
Abstract
Theta may play a central role during language understanding and other extended cognitive processing, providing an envelope for widespread integration of participating cortical areas. We used linear microelectrode arrays in patients with epilepsy to define the circuits generating theta in inferotemporal, perirhinal, entorhinal, prefrontal and anterior cingulate cortices. In all locations, theta was generated by excitatory current sinks in middle layers, which receive predominantly feedforward inputs, alternating with sinks in superficial layers, which receive mainly feedback/associative inputs. Baseline and event-related theta were generated by indistinguishable laminar profiles of transmembrane currents and unit-firing. Word presentation could reset theta phase, permitting theta to contribute to late event-related potentials, even when theta power decreases relative to baseline. Limited recordings during sentence reading are consistent with rhythmic theta activity entrained by a given word modulating the neural background for the following word. These findings show that theta occurs spontaneously, and can be momentarily suppressed, reset and synchronized by words. Theta represents an alternation between feedforward/divergent and associative/convergent processing modes that may temporally organize sustained processing and optimize the timing of memory formation. We suggest that words are initially encoded via a ventral feedforward stream which is lexicosemantic in the anteroventral temporal lobe; its arrival may trigger a widespread theta rhythm that integrates the word within a larger context.
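Laminar sinks and sources of the kind described above are conventionally estimated with current source density (CSD) analysis: the negative second spatial derivative of the local field potential along the probe. A minimal sketch of the standard second-difference estimator (data simulated; electrode spacing is assumed and tissue conductivity is omitted, so units are relative):

import numpy as np

rng = np.random.default_rng(3)
n_ch, n_t, spacing_mm = 24, 1000, 0.1  # channels x samples; 0.1 mm spacing (assumed)
lfp = rng.normal(size=(n_ch, n_t)).cumsum(axis=1) * 1e-3  # stand-in LFP (mV)

# CSD ~ -d2(phi)/dz2 along depth; np.diff with n=2 applies the
# second-difference formula across adjacent channels.
csd = -np.diff(lfp, n=2, axis=0) / spacing_mm ** 2  # shape (n_ch - 2, n_t)

# Negative CSD marks a current sink (net inward current); alternating
# middle- vs superficial-layer sinks would appear as theta-rate sign
# flips across rows.
print(csd.shape)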
Collapse
Affiliation(s)
- Eric Halgren
- Departments of Radiology and Neurosciences, University of California at San Diego, La Jolla, CA 92069, USA.
| | - Erik Kaestner
- Interdepartmental Neurosciences Program, University of California at San Diego, La Jolla, CA 92069, USA
| | - Ksenija Marinkovic
- Department of Psychology, San Diego State University, San Diego, CA, USA
| | - Sydney S Cash
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02129, USA
| | - Chunmao Wang
- Departments of Radiology and Neurosciences, University of California at San Diego, La Jolla, CA 92069, USA; Interdepartmental Neurosciences Program, University of California at San Diego, La Jolla, CA 92069, USA; Department of Psychology, San Diego State University, San Diego, CA, USA; Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02129, USA; Department of Neurology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA; Department of Neurosurgery, Children's Hospital, Harvard Medical School, Boston, MA, USA; Institute of Cognitive Neuroscience and Psychology, Research Center for Natural Sciences, Hungarian Academy of Sciences, Budapest-1117, Hungary
| | - Donald L Schomer
- Department of Neurology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
| | - Joseph R Madsen
- Department of Neurosurgery, Children's Hospital, Harvard Medical School, Boston, MA, USA
| | - Istvan Ulbert
- Institute of Cognitive Neuroscience and Psychology, Research Center for Natural Sciences, Hungarian Academy of Sciences, Budapest-1117, Hungary
| |
Collapse
|
37
|
Weisberg J, McCullough S, Emmorey K. Simultaneous perception of a spoken and a signed language: The brain basis of ASL-English code-blends. Brain Lang 2015; 147:96-106. [PMID: 26177161 PMCID: PMC5769874 DOI: 10.1016/j.bandl.2015.05.006] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/15/2014] [Revised: 04/17/2015] [Accepted: 05/16/2015] [Indexed: 05/29/2023]
Abstract
Code-blends (simultaneous words and signs) are a unique characteristic of bimodal bilingual communication. Using fMRI, we investigated code-blend comprehension in hearing native ASL-English bilinguals who made a semantic decision (edible?) about signs, audiovisual words, and semantically equivalent code-blends. English and ASL recruited a similar fronto-temporal network with expected modality differences: stronger activation for English in auditory regions of bilateral superior temporal cortex, and stronger activation for ASL in bilateral occipitotemporal visual regions and left parietal cortex. Code-blend comprehension elicited activity in a combination of these regions, and no cognitive control regions were additionally recruited. Furthermore, code-blends elicited reduced activation relative to ASL presented alone in bilateral prefrontal and visual extrastriate cortices, and relative to English alone in auditory association cortex. Consistent with behavioral facilitation observed during semantic decisions, the findings suggest that redundant semantic content induces more efficient neural processing in language and sensory regions during bimodal language integration.
Collapse
Affiliation(s)
- Jill Weisberg
- Laboratory for Language and Cognitive Neuroscience, San Diego State University, 6495 Alvarado Rd., Suite 200, San Diego, CA 92120, USA.
| | - Stephen McCullough
- Laboratory for Language and Cognitive Neuroscience, San Diego State University, 6495 Alvarado Rd., Suite 200, San Diego, CA 92120, USA.
| | - Karen Emmorey
- Laboratory for Language and Cognitive Neuroscience, San Diego State University, 6495 Alvarado Rd., Suite 200, San Diego, CA 92120, USA.
| |
Collapse
|
38
|
Chabot N, Butler BE, Lomber SG. Differential modification of cortical and thalamic projections to cat primary auditory cortex following early- and late-onset deafness. J Comp Neurol 2015; 523:2297-320. [DOI: 10.1002/cne.23790] [Citation(s) in RCA: 43] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2014] [Revised: 04/07/2015] [Accepted: 04/08/2015] [Indexed: 12/26/2022]
Affiliation(s)
- Nicole Chabot
- Cerebral Systems Laboratory, University of Western Ontario, London, Ontario, Canada N6A 5C2
- Department of Physiology and Pharmacology, University of Western Ontario, London, Ontario, Canada N6A 5C1
- Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada N6A 5B7
| | - Blake E. Butler
- Cerebral Systems Laboratory, University of Western Ontario, London, Ontario, Canada N6A 5C2
- Department of Physiology and Pharmacology, University of Western Ontario, London, Ontario, Canada N6A 5C1
- Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada N6A 5B7
| | - Stephen G. Lomber
- Cerebral Systems Laboratory, University of Western Ontario, London, Ontario, Canada N6A 5C2
- Department of Psychology, University of Western Ontario, London, Ontario, Canada N6A 5C2
- Department of Physiology and Pharmacology, University of Western Ontario, London, Ontario, Canada N6A 5C1
- Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada N6A 5B7
- National Centre for Audiology, University of Western Ontario, London, Ontario, Canada N6A 1H1
| |
Collapse
|
39
|
Baus C, Costa A. On the temporal dynamics of sign production: An ERP study in Catalan Sign Language (LSC). Brain Res 2015; 1609:40-53. [PMID: 25801115 DOI: 10.1016/j.brainres.2015.03.013] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2014] [Revised: 03/05/2015] [Accepted: 03/08/2015] [Indexed: 11/30/2022]
Abstract
This study investigates the temporal dynamics of sign production and how particular aspects of the signed modality influence the early stages of lexical access. To that end, we explored the electrophysiological correlates of sign frequency and iconicity in a picture-signing task in a group of bimodal bilinguals. Moreover, a subset of the same participants performed the same task but named the pictures instead. Our results revealed that both frequency and iconicity influenced lexical access in sign production. At the ERP level, iconicity effects originated very early in the course of signing (while they were absent in the spoken modality), suggesting a stronger activation of the semantic properties of iconic signs. Moreover, frequency effects were modulated by iconicity, suggesting that lexical access in signed language is determined by the iconic properties of the signs. These results support the idea that lexical access is sensitive to the same phenomena in word and sign production, but that its time course is modulated by particular aspects of the modality in which a lexical item will finally be articulated.
Collapse
Affiliation(s)
- Cristina Baus
- Center of Brain and Cognition, CBC, Universitat Pompeu Fabra, Barcelona, Spain; Laboratoire de Psychologie Cognitive, CNRS and Université d'Aix-Marseille, Marseille, France.
| | - Albert Costa
- Center of Brain and Cognition, CBC, Universitat Pompeu Fabra, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain.
| |
Collapse
|
40
|
Campbell R, MacSweeney M, Woll B. Cochlear implantation (CI) for prelingual deafness: the relevance of studies of brain organization and the role of first language acquisition in considering outcome success. Front Hum Neurosci 2014; 8:834. [PMID: 25368567 PMCID: PMC4201085 DOI: 10.3389/fnhum.2014.00834] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2014] [Accepted: 09/30/2014] [Indexed: 11/13/2022] Open
Abstract
Cochlear implantation (CI) for profound congenital hearing impairment, while often successful in restoring hearing to the deaf child, does not always result in effective speech processing. Exposure to non-auditory signals during the pre-implantation period is widely held to be responsible for such failures. Here, we question the inference that such exposure irreparably distorts the function of auditory cortex, negatively impacting the efficacy of CI. Animal studies suggest that in congenital early deafness there is a disconnection between (disordered) activation in primary auditory cortex (A1) and activation in secondary auditory cortex (A2). In humans, one factor contributing to this functional decoupling is assumed to be abnormal activation of A1 by visual projections, including exposure to sign language. In this paper we show that this abnormal activation of A1 does not routinely occur, while A2 functions effectively supramodally and multimodally to deliver spoken language irrespective of hearing status. What, then, is responsible for poor outcomes for some individuals with CI and for apparent abnormalities in cortical organization in these people? Since infancy is a critical period for the acquisition of language, deaf children born to hearing parents are at risk of developing inefficient neural structures to support skilled language processing. A sign language, acquired by a deaf child as a first language in a signing environment, is cortically organized like a heard spoken language in terms of specialization of the dominant perisylvian system. However, very few deaf children are exposed to sign language in early infancy. Moreover, no studies to date have examined sign language proficiency in relation to cortical organization in individuals with CI. Given the paucity of such relevant findings, we suggest that the best guarantee of a good language outcome after CI is the establishment of a secure first language pre-implant, however that may be achieved, and whatever the success of auditory restoration.
Collapse
Affiliation(s)
- Ruth Campbell
- Deafness Cognition and Language Research Centre, University College London, London, UK
| | - Mairéad MacSweeney
- Deafness Cognition and Language Research Centre, University College London, London, UK
- Institute of Cognitive Neuroscience, University College London, London, UK
| | - Bencie Woll
- Deafness Cognition and Language Research Centre, University College London, London, UK
| |
Collapse
|
41
|
Pizzella V, Marzetti L, Penna SD, de Pasquale F, Zappasodi F, Romani GL. Magnetoencephalography in the study of brain dynamics. Funct Neurol 2014; 29:241-253. [PMID: 25764254 PMCID: PMC4370437] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
To progress toward understanding of the mechanisms underlying the functional organization of the human brain, either a bottom-up or a top-down approach may be adopted. The former starts from the study of the detailed functioning of a small number of neuronal assemblies, while the latter tries to decode brain functioning by considering the brain as a whole. This review discusses the top-down approach and the use of magnetoencephalography (MEG) to describe global brain properties. The main idea behind this approach is that the concurrence of several areas is required for the brain to instantiate a specific behavior/functioning. A central issue is therefore the study of brain functional connectivity and the concept of brain networks as ensembles of distant brain areas that preferentially exchange information. Importantly, the human brain is a dynamic device, and MEG is ideally suited to investigate phenomena on behaviorally relevant timescales, also offering the possibility of capturing behaviorally-related brain connectivity dynamics.
Collapse
|
42
|
Ferjan Ramirez N, Leonard MK, Torres C, Hatrak M, Halgren E, Mayberry RI. Neural language processing in adolescent first-language learners. Cereb Cortex 2014; 24:2772-83. [PMID: 23696277 PMCID: PMC4153811 DOI: 10.1093/cercor/bht137] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
The relation between the timing of language input and development of neural organization for language processing in adulthood has been difficult to tease apart because language is ubiquitous in the environment of nearly all infants. However, within the congenitally deaf population are individuals who do not experience language until after early childhood. Here, we investigated the neural underpinnings of American Sign Language (ASL) in 2 adolescents who had no sustained language input until they were approximately 14 years old. Using anatomically constrained magnetoencephalography, we found that recently learned signed words mainly activated right superior parietal, anterior occipital, and dorsolateral prefrontal areas in these 2 individuals. This spatiotemporal activity pattern was significantly different from the left fronto-temporal pattern observed in young deaf adults who acquired ASL from birth, and from that of hearing young adults learning ASL as a second language for a similar length of time as the cases. These results provide direct evidence that the timing of language experience over human development affects the organization of neural language processing.
Collapse
Affiliation(s)
| | | | | | | | - Eric Halgren
- Multimodal Imaging Laboratory
- Department of Radiology
- Department of Neurosciences
- Kavli Institute for Brain and Mind, University of California, San Diego, USA
| | | |
Collapse
|
43
|
Abstract
The study of congenitally deaf adult humans provides an opportunity to examine neuroanatomical plasticity resulting from altered sensory experience. However, attributing the source of the brain's structural variance in the deaf is complicated by the fact that deaf individuals also differ in their language experiences (e.g., sign vs spoken), which likely influence brain anatomy independently. Although the majority of deaf individuals in the United States are born to hearing parents and are exposed to English, not American Sign Language (ASL) as their first language, most studies on deafness have been conducted with deaf native users of ASL (deaf signers). This raises the question of whether observations made in deaf signers can be generalized. Using a factorial design, we compared gray (GMV) and white (WMV) matter volume in deaf and hearing native users of ASL, as well as deaf and hearing native users of English. Main effects analysis of sensory experience revealed less GMV in the deaf groups combined (compared with hearing groups combined) in early visual areas and less WMV in a left early auditory region. The interaction of sensory experience and language experience revealed that deaf native users of English had fewer areas of anatomical differences than did deaf native users of ASL (each compared with their hearing counterparts). For deaf users of ASL specifically, WMV differences resided in language areas such as the left superior temporal and inferior frontal regions. Our results demonstrate that cortical plasticity resulting from deafness depends on language experience and that findings from native signers cannot be generalized.
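The factorial design described above, sensory experience (deaf vs hearing) crossed with language experience (ASL vs English), maps onto a standard 2x2 ANOVA on regional volumes. A hedged Python sketch with simulated gray-matter values (cell sizes and effect sizes are invented; the actual analysis was run voxelwise with appropriate corrections):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(4)
n = 20  # participants per cell (hypothetical)
df = pd.DataFrame({
    "hearing": np.repeat(["deaf", "hearing"], 2 * n),
    "language": np.tile(np.repeat(["ASL", "English"], n), 2),
})
# Simulated regional gray-matter volume with a small deafness effect.
df["gmv"] = rng.normal(1.0, 0.1, 4 * n) - 0.05 * (df["hearing"] == "deaf")

model = smf.ols("gmv ~ hearing * language", data=df).fit()
print(anova_lm(model, typ=2))  # main effect of each factor + interaction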
Collapse
|
44
|
Emmorey K, McCullough S, Mehta S, Grabowski TJ. How sensory-motor systems impact the neural organization for language: direct contrasts between spoken and signed language. Front Psychol 2014; 5:484. [PMID: 24904497 PMCID: PMC4033845 DOI: 10.3389/fpsyg.2014.00484] [Citation(s) in RCA: 48] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2013] [Accepted: 05/03/2014] [Indexed: 11/24/2022] Open
Abstract
To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H2(15)O-PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface-level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language.
Collapse
Affiliation(s)
- Karen Emmorey
- Laboratory for Language and Cognitive Neuroscience, School of Speech, Language, and Hearing Sciences, San Diego State University, San Diego, CA, USA
| | - Stephen McCullough
- Laboratory for Language and Cognitive Neuroscience, School of Speech, Language, and Hearing Sciences, San Diego State University, San Diego, CA, USA
| | - Sonya Mehta
- Department of Psychology, University of Washington, Seattle, WA, USA; Department of Radiology, University of Washington, Seattle, WA, USA
| | | |
Collapse
|
45
|
Lyness CR, Woll B, Campbell R, Cardin V. How does visual language affect crossmodal plasticity and cochlear implant success? Neurosci Biobehav Rev 2013; 37:2621-30. [PMID: 23999083 PMCID: PMC3989033 DOI: 10.1016/j.neubiorev.2013.08.011] [Citation(s) in RCA: 47] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2013] [Revised: 08/07/2013] [Accepted: 08/21/2013] [Indexed: 11/14/2022]
Abstract
Cochlear implants (CI) are the most successful intervention for ameliorating hearing loss in severely or profoundly deaf children. Despite this, educational performance in children with CI continues to lag behind that of their hearing peers. From animal models and human neuroimaging studies it has been proposed that the integrative functions of auditory cortex are compromised by crossmodal plasticity. This has been argued to result partly from the use of a visual language. Here we argue that 'cochlear implant sensitive periods' comprise both auditory and language sensitive periods, and thus cannot be fully described with animal models. Despite prevailing assumptions, there is no evidence to link the use of a visual language to poorer CI outcome. Crossmodal reorganisation of auditory cortex occurs regardless of compensatory strategies, such as sign language, used by the deaf person. In contrast, language deprivation during early sensitive periods has been repeatedly linked to poor language outcomes. Language sensitive periods have largely been ignored when considering variation in CI outcome, leading to ill-founded recommendations concerning visual language in CI habilitation.
Collapse
Affiliation(s)
- C R Lyness
- Cognitive, Perceptual and Brain Sciences, 26 Bedford Way, University College London, London WC1H 0AP, UK.
| | | | | | | |
Collapse
|
46
|
Inubushi T, Sakai KL. Functional and anatomical correlates of word-, sentence-, and discourse-level integration in sign language. Front Hum Neurosci 2013; 7:681. [PMID: 24155706 PMCID: PMC3804906 DOI: 10.3389/fnhum.2013.00681] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2013] [Accepted: 09/27/2013] [Indexed: 11/17/2022] Open
Abstract
In both vocal and sign languages, we can distinguish word-, sentence-, and discourse-level integration in terms of hierarchical processes, which integrate various elements into another, higher level of constructs. In the present study, we used functional magnetic resonance imaging and voxel-based morphometry (VBM) to test three language tasks in Japanese Sign Language (JSL): word-level (Word), sentence-level (Sent), and discourse-level (Disc) decision tasks. We analyzed cortical activity and gray matter (GM) volumes of Deaf signers, and clarified three major points. First, we found that the activated regions in the frontal language areas gradually expanded in the dorso-ventral axis, corresponding to a difference in linguistic units for the three tasks. Moreover, the activations in each region of the frontal language areas were incrementally modulated with the level of linguistic integration. These dual mechanisms of the frontal language areas may reflect a basic organization principle of hierarchically integrating linguistic information. Secondly, activations in the lateral premotor cortex and inferior frontal gyrus were left-lateralized. Direct comparisons among the language tasks exhibited more focal activation in these regions, suggesting their functional localization. Thirdly, we found significantly positive correlations between individual task performances and GM volumes in localized regions, even when the ages of acquisition (AOAs) of JSL and Japanese were factored out. More specifically, correlations with the performances of the Word and Sent tasks were found in the left precentral/postcentral gyrus and insula, respectively, while correlations with those of the Disc task were found in the left ventral inferior frontal gyrus and precuneus. The unification of functional and anatomical studies would thus be fruitful for understanding human language systems from the aspects of both universality and individuality.
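Testing a brain-behavior correlation "even when the ages of acquisition are factored out" amounts to a partial correlation. A minimal Python sketch of one common implementation, regressing the covariates out of both variables and correlating the residuals (all data simulated; sample size and effect are arbitrary):

import numpy as np
from scipy.stats import pearsonr

def partial_corr(x, y, covars):
    # Correlate x and y after removing what the covariates explain in each.
    Z = np.column_stack([np.ones(len(x)), covars])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return pearsonr(rx, ry)

rng = np.random.default_rng(5)
n = 30
aoa = rng.uniform(0, 10, (n, 2))                    # AOAs of JSL and of Japanese
performance = rng.normal(size=n)                    # task performance
gm_volume = 0.5 * performance + rng.normal(size=n)  # regional GM volume
print(partial_corr(performance, gm_volume, aoa))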
Collapse
Affiliation(s)
- Tomoo Inubushi
- Department of Basic Science, Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan; Japan Society for the Promotion of Science, Tokyo, Japan
| | | |
Collapse
|
47
|
Strelnikov K, Rouger J, Demonet JF, Lagleyre S, Fraysse B, Deguine O, Barone P. Visual activity predicts auditory recovery from deafness after adult cochlear implantation. Brain 2013; 136:3682-95. [PMID: 24136826 DOI: 10.1093/brain/awt274] [Citation(s) in RCA: 95] [Impact Index Per Article: 8.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
Modern cochlear implantation technologies allow deaf patients to understand auditory speech; however, the implants deliver only a coarse auditory input, and patients must rely on long-term adaptive processes to achieve coherent percepts. In adults with post-lingual deafness, most of the progress in speech recovery occurs during the first year after cochlear implantation, but there is a large range of variability in the level of cochlear implant outcomes and the temporal evolution of recovery. It has been proposed that when profoundly deaf subjects receive a cochlear implant, the visual cross-modal reorganization of the brain is deleterious for auditory speech recovery. We tested this hypothesis in post-lingually deaf adults by analysing whether brain activity shortly after implantation correlated with the level of auditory recovery 6 months later. Based on brain activity induced by a speech-processing task, we found strong positive correlations in areas outside the auditory cortex. The highest positive correlations were found in the occipital cortex involved in visual processing, as well as in the posterior-temporal cortex known for audio-visual integration. The other area that positively correlated with auditory speech recovery was localized in the left inferior frontal area known for speech processing. Our results demonstrate that the functional level of the visual modality is related to the level of auditory recovery. Based on the positive correlation of visual activity with auditory speech recovery, we suggest that the visual modality may facilitate the perception of the word's auditory counterpart in communicative situations. The link demonstrated between visual activity and auditory speech perception indicates that visuoauditory synergy is crucial for cross-modal plasticity and for fostering speech-comprehension recovery in adult cochlear-implanted deaf patients.
Collapse
Affiliation(s)
- Kuzma Strelnikov
- Université de Toulouse, Cerveau and Cognition, Université Paul Sabatier, Toulouse, France
| | | | | | | | | | | | | |
Collapse
|
48
|
Leonard MK, Ferjan Ramirez N, Torres C, Hatrak M, Mayberry RI, Halgren E. Neural stages of spoken, written, and signed word processing in beginning second language learners. Front Hum Neurosci 2013; 7:322. [PMID: 23847496 PMCID: PMC3698463 DOI: 10.3389/fnhum.2013.00322] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2013] [Accepted: 06/11/2013] [Indexed: 11/23/2022] Open
Abstract
We combined magnetoencephalography (MEG) and magnetic resonance imaging (MRI) to examine how sensory modality, language type, and language proficiency interact during two fundamental stages of word processing: (1) an early word encoding stage, and (2) a later supramodal lexico-semantic stage. Adult native English speakers who were learning American Sign Language (ASL) performed a semantic task for spoken and written English words, and ASL signs. During the early time window, written words evoked responses in left ventral occipitotemporal cortex, and spoken words in left superior temporal cortex. Signed words evoked activity in right intraparietal sulcus that was marginally greater than for written words. During the later time window, all three types of words showed significant activity in the classical left fronto-temporal language network, the first demonstration of such activity in individuals with so little second language (L2) instruction in sign. In addition, a dissociation between semantic congruity effects and overall MEG response magnitude for ASL responses suggested shallower and more effortful processing, presumably reflecting novice L2 learning. Consistent with previous research on non-dominant language processing in spoken languages, the L2 ASL learners also showed recruitment of right hemisphere and lateral occipital cortex. These results demonstrate that late lexico-semantic processing utilizes a common substrate, independent of modality, and that proficiency effects in sign language are comparable to those in spoken language.
Collapse
Affiliation(s)
- Matthew K Leonard
- Department of Radiology, University of California San Diego, La Jolla, CA, USA; Multimodal Imaging Laboratory, Department of Radiology, University of California San Diego, La Jolla, CA, USA
| | | | | | | | | | | |
Collapse
|
49
|
Costanzo ME, McArdle JJ, Swett B, Nechaev V, Kemeny S, Xu J, Braun AR. Spatial and temporal features of superordinate semantic processing studied with fMRI and EEG. Front Hum Neurosci 2013; 7:293. [PMID: 23847490 PMCID: PMC3696724 DOI: 10.3389/fnhum.2013.00293] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2013] [Accepted: 06/03/2013] [Indexed: 11/13/2022] Open
Abstract
The relationships between the anatomical representation of semantic knowledge in the human brain and the timing of the neurophysiological mechanisms involved in manipulating such information remain unclear. This is the case for superordinate semantic categorization, the extraction of general features shared by broad classes of exemplars (e.g., living vs. non-living semantic categories). We proposed that, because of the abstract nature of this information, input from diverse modalities (visual or auditory, lexical or non-lexical) should converge and be processed in the same regions of the brain, at similar time scales, during superordinate categorization, specifically in a network of heteromodal regions, and late in the course of the categorization process. In order to test this hypothesis, we utilized electroencephalography and event-related potentials (EEG/ERP) with functional magnetic resonance imaging (fMRI) to characterize subjects' responses as they made superordinate categorical decisions (living vs. non-living) about objects presented as visual pictures or auditory words. Our results reveal that, consistent with our hypothesis, during the course of superordinate categorization, information provided by these diverse inputs appears to converge in both time and space: fMRI showed that heteromodal areas of the parietal and temporal cortices are active during categorization of both classes of stimuli. The ERP results suggest that superordinate categorization is reflected in a late positive component (LPC) with a parietal distribution and long latencies for both stimulus types. Within the areas and times in which modality-independent responses were identified, some differences between living and non-living categories were observed, with a more widespread spatial extent and longer-latency responses for categorization of non-living items.
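An LPC of the sort reported here is typically quantified as the mean ERP amplitude over a late post-stimulus window at the relevant electrodes. A toy Python sketch of that measurement on simulated epochs (sampling rate, trial count, and the 500-800 ms window are assumptions for illustration):

import numpy as np

fs = 250  # Hz (assumed)
rng = np.random.default_rng(6)
epochs = rng.normal(size=(80, fs))  # 80 trials x 1 s of post-stimulus samples

erp = epochs.mean(axis=0)  # average across trials -> ERP waveform

# Mean amplitude in a late window (500-800 ms), where a long-latency
# parietal LPC would be measured.
w0, w1 = int(0.5 * fs), int(0.8 * fs)
print(f"late-window mean amplitude: {erp[w0:w1].mean():.3f}")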
Collapse
Affiliation(s)
- Michelle E Costanzo
- Language Section, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, MD, USA; Department of Medicine, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
| | | | | | | | | | | | | |
Collapse
|
50
|
Abstract
There is a strong interaction between multisensory processing and the neuroplasticity of the human brain. On the one hand, recent research demonstrates that experience and training in various domains modify how information from the different senses is integrated; on the other hand, multisensory training paradigms seem to be particularly effective in driving functional and structural plasticity. Multisensory training affects early sensory processing within separate sensory domains, as well as the functional and structural connectivity between uni- and multisensory brain regions. In this review, we discuss the evidence for interactions of multisensory processes and brain plasticity and give an outlook on promising clinical applications and open questions.
Collapse
|