1
Cui G, Ren Y, Zhou X. Language as a modulator to cognitive and neurological systems. Acta Psychol (Amst) 2025; 254:104803. PMID: 39965507; DOI: 10.1016/j.actpsy.2025.104803.
Abstract
Language is a defining characteristic of humans, playing a crucial role in both species evolution and individual development. While traditional views, such as Chomsky's, emphasize language's dual functions in sensorimotor externalization and conceptual-intentional thought, its broader role as a modulator of cognitive and neurological systems remains underexplored. Here, we propose that language, due to its profound, accessible, and widespread neurological activation, serves as a pivotal modulator of these systems. This perspective provides new insights into the interconnection between language, cognition, and brain function, and points to novel therapeutic pathways that leverage the modulating capabilities of language for cognitive enhancement and neurological rehabilitation.
Affiliation(s)
- Gang Cui
- Department of Foreign Languages and Literatures, Tsinghua University, Beijing, China
- Yufei Ren
- Department of Foreign Languages and Literatures, Tsinghua University, Beijing, China
- Xiaoran Zhou
- Department of Foreign Languages and Literatures, Tsinghua University, Beijing, China
2
Li Y, Zhang J, Li X, Zhang Z. Uncovering narrative aging: an underlying neural mechanism compensated through spatial constructional ability. Commun Biol 2025; 8:104. PMID: 39837995; PMCID: PMC11751312; DOI: 10.1038/s42003-025-07501-5.
Abstract
"The narrative" is a complex cognitive process that has sparked a debate on whether its features age through maintenance or decline. To address this question, we attempted to uncover the narrative aging and its underlying neural characteristics with a cross-validation based cognitive neuro-decoding statistical framework. This framework used a total of 740 healthy older participants with completed narrative and extensive neuropsychological tests and MRI scans. The results indicated that narrative comprises macro and micro structures, with the macrostructure involving complex cognitive processes more relevant to aging. For the brain functional basis, brain hub nodes contributing to macrostructure were predominantly found in the angular gyrus and medial frontal lobe, while microstructure hub nodes were located in the supramarginal gyrus and middle cingulate cortex. Moreover, networks enriched by macrostructure included the default mode network and fronto-parietal network, indicating a higher functional gradient compared to the microstructure-enriched dorsal attention network. Additionally, an interesting finding showed that macrostructure increases in spatial contribution with age, suggesting a compensatory interaction where brain regions related to spatial-constructional ability have a greater impact on macrostructure. These results, supported by neural-level validation and multimodal structural MRI, provide detailed insights into the compensatory effect in the narrative aging process.
Affiliation(s)
- Yumeng Li
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, 100875, China
- Beijing Aging Brain Rejuvenation Initiative (BABRI) Centre, Beijing Normal University, Beijing, 100875, China
- Junying Zhang
- Institute of Basic Research in Clinical Medicine, China Academy of Chinese Medical Sciences, Beijing, 100700, China
- Xin Li
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, 100875, China.
- Beijing Aging Brain Rejuvenation Initiative (BABRI) Centre, Beijing Normal University, Beijing, 100875, China.
- Zhanjun Zhang
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, 100875, China.
- Beijing Aging Brain Rejuvenation Initiative (BABRI) Centre, Beijing Normal University, Beijing, 100875, China.
3
Haluts N, Levy D, Friedmann N. Bimodal aphasia and dysgraphia: Phonological output buffer aphasia and orthographic output buffer dysgraphia in spoken and sign language. Cortex 2025; 182:147-180. PMID: 39672692; DOI: 10.1016/j.cortex.2024.10.013.
Abstract
We report a case of crossmodal bilingual aphasia (aphasia in two modalities, spoken and sign language) and dysgraphia in both writing and fingerspelling. The patient, Sunny, was a 42-year-old woman who had sustained a left temporo-parietal stroke; she was a speaker of Hebrew, Romanian, and English, and an adult learner and daily user of Israeli Sign Language (ISL). We assessed Sunny's spoken and sign languages using a comprehensive test battery of naming, reading, and repetition tasks, and also analysed her spontaneous speech and sign production. Her writing and fingerspelling were assessed using tasks of dictation, naming, and delayed copying. In spoken language production, Sunny showed a classical phonological output buffer (POB) impairment in naming, reading, repetition, and spontaneous production, with phonological errors (transpositions, substitutions, insertions, and omissions) in words and pseudo-words, and whole-unit errors in morphological affixes, function-words, and number-words, with a length effect. Importantly, her error pattern in ISL was remarkably similar in the parallel tasks, with phonological errors in signs and pseudo-signs, affecting all the phonological parameters of the sign (movement, handshape, location, and orientation), and whole-unit errors in morphemes, function-signs, and number-signs. Sunny's impairment was selective to the POB, without phonological input, semantic-conceptual, or syntactic deficits. This shows for the first time how a POB impairment, a kind of conduction aphasia, manifests itself in a sign language, and indicates that the POB for sign language has the same cognitive architecture as the one for spoken language. It may also indicate similar neural underpinnings for spoken and sign languages. In writing, Sunny presents the first case of a selective type of dysgraphia in fingerspelling: orthographic (graphemic) output buffer dysgraphia. In both writing and fingerspelling, she made letter errors (letter transpositions, substitutions, insertions, and omissions), as well as morphological errors and errors in function words, and showed a length effect. Sunny's impairment was selective to the orthographic output buffer, whereas her reading, including orthographic input processing, was intact. This suggests that the orthographic output buffer is shared for writing and fingerspelling, at least in a late learner of sign language. The results shed further light on the architecture of phonological and orthographic production.
Affiliation(s)
- Neta Haluts
- Language and Brain Lab, Sagol School of Neuroscience, and School of Education, Tel Aviv University, Tel Aviv, Israel
- Doron Levy
- Language and Brain Lab, Sagol School of Neuroscience, and School of Education, Tel Aviv University, Tel Aviv, Israel
- Naama Friedmann
- Language and Brain Lab, Sagol School of Neuroscience, and School of Education, Tel Aviv University, Tel Aviv, Israel.
4
Yang T, Fan X, Hou B, Wang J, Chen X. Linguistic network in early deaf individuals: A neuroimaging meta-analysis. Neuroimage 2024; 299:120720. PMID: 38971484; DOI: 10.1016/j.neuroimage.2024.120720.
Abstract
This meta-analysis summarizes evidence from 44 neuroimaging experiments and characterizes the general linguistic network in early deaf individuals. Meta-analytic comparisons with hearing individuals found that a specific set of regions (in particular the left inferior frontal gyrus and posterior middle temporal gyrus) participates in supramodal language processing. In addition to previously described modality-specific differences, the present study showed that the left calcarine gyrus and the right caudate were additionally recruited in deaf compared with hearing individuals. Furthermore, the bilateral posterior superior temporal gyrus is shaped by cross-modal plasticity, whereas the left frontotemporal areas are shaped by early language experience. Although an overall left-lateralized pattern for language processing was observed in the early deaf individuals, regional lateralization was altered in the inferior frontal gyrus and anterior temporal lobe. These findings indicate that the core language network functions in a modality-independent manner, and provide a foundation for determining the contributions of sensory and linguistic experiences in shaping the neural bases of language processing.
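Coordinate-based meta-analyses of this kind are usually run with dedicated ALE software; the toy sketch below only illustrates the core idea (each reported peak blurred with a 3D Gaussian, per-experiment maps combined across experiments), with arbitrary grid sizes and coordinates:

```python
# Toy illustration of activation likelihood estimation (ALE). Grid size,
# kernel width, and focus coordinates are arbitrary toy values.
import numpy as np

GRID = (20, 20, 20)                       # coarse toy brain grid (voxels)
FWHM = 3.0                                # kernel width in voxels
SIGMA = FWHM / (2 * np.sqrt(2 * np.log(2)))

def modeled_activation(foci):
    """Probabilistic union of Gaussian blobs for one experiment's peaks."""
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in GRID], indexing="ij"),
                    axis=-1)
    ma = np.zeros(GRID)
    for focus in foci:
        d2 = ((grid - np.asarray(focus)) ** 2).sum(axis=-1)
        p = np.exp(-d2 / (2 * SIGMA**2))
        ma = 1 - (1 - ma) * (1 - p)       # union across foci
    return ma

experiments = [[(5, 5, 5), (6, 10, 8)], [(5, 6, 5)], [(15, 4, 12)]]
maps = [modeled_activation(f) for f in experiments]
ale = 1 - np.prod([1 - m for m in maps], axis=0)   # combine across experiments
print("peak ALE value:", ale.max())
```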
Affiliation(s)
- Tengyu Yang
- Department of Otolaryngology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, PR China
- Xinmiao Fan
- Department of Otolaryngology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, PR China
- Bo Hou
- Department of Radiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, PR China
- Jian Wang
- Department of Otolaryngology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, PR China.
- Xiaowei Chen
- Department of Otolaryngology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, PR China.
5
Kumar U, Dhanik K, Mishra M, Pandey HR, Keshri A. Mapping the unique neural engagement in deaf individuals during picture, word, and sign language processing: fMRI study. Brain Imaging Behav 2024; 18:835-851. PMID: 38523177; DOI: 10.1007/s11682-024-00878-7.
Abstract
Employing functional magnetic resonance imaging (fMRI) techniques, we conducted a comprehensive analysis of neural responses during sign language, picture, and word processing tasks in a cohort of 35 deaf participants and contrasted these responses with those of 35 hearing counterparts. Our voxel-based analysis unveiled distinct patterns of brain activation during language processing tasks. Deaf individuals exhibited robust bilateral activation in the superior temporal regions during sign language processing, signifying the profound neural adaptations associated with sign comprehension. Similarly, during picture processing, the deaf cohort displayed activation in the right angular, right calcarine, right middle temporal, and left angular gyrus regions, elucidating the neural dynamics engaged in visual processing tasks. Intriguingly, during word processing, the deaf group engaged the right insula and right fusiform gyrus, suggesting compensatory mechanisms at play during linguistic tasks. Notably, the control group failed to manifest additional or distinctive regions in any of the tasks when compared to the deaf cohort, underscoring the unique neural signatures within the deaf population. Multivariate Pattern Analysis (MVPA) of functional connectivity provided a more nuanced perspective on connectivity patterns across tasks. Deaf participants exhibited significant activation in a myriad of brain regions, including bilateral planum temporale (PT), postcentral gyrus, insula, and inferior frontal regions, among others. These findings underscore the intricate neural adaptations in response to auditory deprivation. Seed-based connectivity analysis, utilizing the PT as a seed region, revealed unique connectivity patterns across tasks. These connectivity dynamics provide valuable insights into the neural interplay associated with cross-modal plasticity.
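Seed-based connectivity analysis of the kind described here (with the planum temporale as seed) reduces in essence to correlating a seed time course with every voxel and Fisher-z transforming the result; a schematic numpy version with hypothetical shapes:

```python
# Schematic seed-based functional connectivity: correlate a seed region's
# time course with every voxel, then Fisher-z transform. Shapes are toy values.
import numpy as np

rng = np.random.default_rng(1)
n_timepoints, n_voxels = 240, 5000
data = rng.standard_normal((n_timepoints, n_voxels))   # preprocessed BOLD
seed = data[:, :50].mean(axis=1)                       # e.g., a planum temporale mask

# Pearson correlation of the seed with every voxel (columns are z-scored).
z = (data - data.mean(0)) / data.std(0)
zs = (seed - seed.mean()) / seed.std()
r = zs @ z / n_timepoints                              # correlation per voxel
fisher_z = np.arctanh(np.clip(r, -0.999999, 0.999999)) # for group statistics
print("max |r| with seed:", np.abs(r).max())
```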
Affiliation(s)
- Uttam Kumar
- Centre of Bio-Medical Research, Sanjay Gandhi Postgraduate Institute of Medical Sciences Campus, Lucknow, Uttar Pradesh, 226014, India.
- Kalpana Dhanik
- Centre of Bio-Medical Research, Sanjay Gandhi Postgraduate Institute of Medical Sciences Campus, Lucknow, Uttar Pradesh, 226014, India
- Mrutyunjaya Mishra
- Department of Special Education (Hearing Impairments), Dr. Shakuntala Misra National Rehabilitation University, Lucknow, India
- Himanshu R Pandey
- Centre of Bio-Medical Research, Sanjay Gandhi Postgraduate Institute of Medical Sciences Campus, Lucknow, Uttar Pradesh, 226014, India
- Amit Keshri
- Department of Neuro-Otology, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow, India
6
Yamamoto A, Kijima N, Utsugi R, Murakami K, Kuroda H, Tachi T, Hirayama R, Okita Y, Kagawa N, Kishima H. Awake surgery for a deaf patient using sign language: A case report. Surg Neurol Int 2024; 15:167. PMID: 38840599; PMCID: PMC11152539; DOI: 10.25259/SNI_52_2024.
Abstract
Background: Although awake surgery is the gold standard for resecting brain tumors in eloquent regions, patients with hearing impairment require special consideration during intraoperative tasks.
Case Description: We present a case of awake surgery using sign language in a 45-year-old right-handed male patient with hearing impairment and a neoplastic lesion in the left frontal lobe, pars triangularis (suspected to be a low-grade glioma). The patient primarily communicated through sign language and writing but was able to speak at a sufficiently audible level owing to childhood training. Although the patient remained asymptomatic, the tumor gradually grew in size, and awake surgery was performed for tumor resection. After the craniotomy, the patient was awake, and brain function mapping was performed using tasks such as counting, picture naming, and reading. A sign language-proficient nurse facilitated communication using sign language, and the patient responded vocally. Intraoperative tasks proceeded smoothly without speech arrest or verbal comprehension difficulties during electrical stimulation of the tumor-adjacent areas. Gross total tumor resection was achieved, and the patient exhibited no apparent complications. Pathological examination revealed a World Health Organization grade II oligodendroglioma with an isocitrate dehydrogenase 1 (IDH1) mutation and 1p/19q codeletion.
Conclusion: Because the patient in this case had no dysphonia, owing to training from childhood, tasks were presented in sign language and the patient responded vocally, which enabled a safe operation. In awake surgery for patients with hearing impairment, safe tumor resection can be achieved by tailoring intraoperative tasks to the degree of hearing impairment and dysphonia.
Affiliation(s)
- Reina Utsugi
- Department of Neurosurgery, Osaka University, Suita, Japan
- Koki Murakami
- Department of Neurosurgery, Osaka University, Suita, Japan
- Hideki Kuroda
- Department of Neurosurgery, Osaka University, Suita, Japan
- Tetsuro Tachi
- Department of Neurosurgery, Osaka University, Suita, Japan
- Yoshiko Okita
- Department of Neurosurgery, Osaka University, Suita, Japan
- Naoki Kagawa
- Department of Neurosurgery, Osaka University Graduate School of Medicine, Suita, Japan
7
Olson HA, Chen EM, Lydic KO, Saxe RR. Left-Hemisphere Cortical Language Regions Respond Equally to Observed Dialogue and Monologue. Neurobiol Lang (Camb) 2023; 4:575-610. PMID: 38144236; PMCID: PMC10745132; DOI: 10.1162/nol_a_00123.
Abstract
Much of the language we encounter in our everyday lives comes in the form of conversation, yet the majority of research on the neural basis of language comprehension has used input from only one speaker at a time. Twenty adults were scanned with functional magnetic resonance imaging while they passively observed audiovisual conversations. In a block-design task, participants watched 20 s videos of puppets speaking either to another puppet (the dialogue condition) or directly to the viewer (the monologue condition), while the audio was either comprehensible (played forward) or incomprehensible (played backward). Individually functionally localized left-hemisphere language regions responded more to comprehensible than incomprehensible speech but did not respond differently to dialogue than monologue. In a second task, participants watched videos (1-3 min each) of two puppets conversing with each other, in which one puppet was comprehensible while the other's speech was reversed. All participants saw the same visual input but were randomly assigned which character's speech was comprehensible. In left-hemisphere cortical language regions, the time course of activity was correlated only among participants who heard the same character speaking comprehensibly, despite identical visual input across all participants. For comparison, some individually localized theory of mind regions and right-hemisphere homologues of language regions responded more to dialogue than monologue in the first task, and in the second task, activity in some regions was correlated across all participants regardless of which character was speaking comprehensibly. Together, these results suggest that canonical left-hemisphere cortical language regions are not sensitive to differences between observed dialogue and monologue.
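The second task's logic rests on inter-subject correlation of regional time courses, grouped by which character was comprehensible; a bare-bones sketch of that computation (simulated data, leave-one-out averaging) is:

```python
# Bare-bones inter-subject correlation (ISC): compare each participant's
# regional time course with the mean of the others in the same group.
import numpy as np

def isc(timecourses):
    """Leave-one-out ISC for an array of shape (n_subjects, n_timepoints)."""
    rs = []
    for i in range(len(timecourses)):
        others = np.delete(timecourses, i, axis=0).mean(axis=0)
        rs.append(np.corrcoef(timecourses[i], others)[0, 1])
    return np.mean(rs)

rng = np.random.default_rng(2)
shared = rng.standard_normal(300)                   # stimulus-locked signal
group_a = shared + rng.standard_normal((10, 300))   # heard character A comprehensibly
group_b = rng.standard_normal((10, 300))            # no shared signal in this toy case

print("within-group ISC (A):", isc(group_a))        # high: shared comprehensible speech
print("within-group ISC (B):", isc(group_b))        # near zero in this toy example
```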
8
Martin KC, Seydell-Greenwald A, Turkeltaub PE, Chambers CE, Giannetti M, Dromerick AW, Carpenter JL, Berl MM, Gaillard WD, Newport EL. One right can make a left: sentence processing in the right hemisphere after perinatal stroke. Cereb Cortex 2023; 33:11257-11268. PMID: 37859521; PMCID: PMC10690853; DOI: 10.1093/cercor/bhad362.
Abstract
When brain regions that are critical for a cognitive function in adulthood are irreversibly damaged at birth, what patterns of plasticity support the successful development of that function in an alternative location? Here we investigate the consistency of language organization in the right hemisphere (RH) after a left hemisphere (LH) perinatal stroke. We analyzed fMRI data collected during an auditory sentence comprehension task on 14 people with large cortical LH perinatal arterial ischemic strokes (LHPS participants) and 11 healthy sibling controls using a "top voxel" approach that allowed us to compare the same number of active voxels across each participant and in each hemisphere for controls. We found (1) LHPS participants consistently recruited the same RH areas that were a mirror-image of typical LH areas, and (2) the RH areas recruited in LHPS participants aligned better with the strongly activated LH areas of the typically developed brains of control participants (when flipped images were compared) than the weakly activated RH areas. Our findings suggest that the successful development of language processing in the RH after a LH perinatal stroke may in part depend on recruiting an arrangement of frontotemporal areas reflective of the typical dominant LH.
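The "top voxel" approach equates the number of voxels analyzed per participant and hemisphere; a schematic version of the selection step, with a hypothetical t-map and an assumed k of 500:

```python
# Schematic "top voxel" selection: take the k most active voxels from each
# participant's contrast map so that every subject/hemisphere contributes
# the same number of voxels. The map and k are hypothetical.
import numpy as np

def top_voxel_mask(t_map, hemisphere_mask, k=500):
    """Boolean mask of the k highest-t voxels within one hemisphere."""
    t = np.where(hemisphere_mask, t_map, -np.inf)   # restrict to hemisphere
    threshold = np.sort(t, axis=None)[-k]           # k-th largest value
    return t >= threshold

rng = np.random.default_rng(3)
t_map = rng.standard_normal((40, 48, 40))           # sentence > baseline t-values
left = np.zeros(t_map.shape, dtype=bool)
left[:20] = True                                    # toy left-hemisphere mask
mask = top_voxel_mask(t_map, left, k=500)
print("voxels selected:", mask.sum())               # == 500 (barring ties)
```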
Affiliation(s)
- Kelly C Martin
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Georgetown University, Washington, DC 20057, United States
- Anna Seydell-Greenwald
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Georgetown University, Washington, DC 20057, United States
- MedStar National Rehabilitation Hospital, Washington, DC 20010, United States
- Peter E Turkeltaub
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Georgetown University, Washington, DC 20057, United States
- MedStar National Rehabilitation Hospital, Washington, DC 20010, United States
- Catherine E Chambers
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Georgetown University, Washington, DC 20057, United States
- MedStar National Rehabilitation Hospital, Washington, DC 20010, United States
- Margot Giannetti
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Georgetown University, Washington, DC 20057, United States
- MedStar National Rehabilitation Hospital, Washington, DC 20010, United States
- Alexander W Dromerick
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Georgetown University, Washington, DC 20057, United States
- MedStar National Rehabilitation Hospital, Washington, DC 20010, United States
- Jessica L Carpenter
- Division of Pediatric Neurology, Departments of Pediatrics and Neurology, University of Maryland School of Medicine, Baltimore MD 21201, United States
- Madison M Berl
- Children’s National Hospital and Center for Neuroscience, Washington, DC 20010, United States
- William D Gaillard
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Georgetown University, Washington, DC 20057, United States
- Children’s National Hospital and Center for Neuroscience, Washington, DC 20010, United States
- Elissa L Newport
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Georgetown University, Washington, DC 20057, United States
- MedStar National Rehabilitation Hospital, Washington, DC 20010, United States
9
Song L, Wang P, Li H, Weiss PH, Fink GR, Zhou X, Chen Q. Increased functional connectivity between the auditory cortex and the frontoparietal network compensates for impaired visuomotor transformation after early auditory deprivation. Cereb Cortex 2023; 33:11126-11145. PMID: 37814363; DOI: 10.1093/cercor/bhad351.
Abstract
Early auditory deprivation leads to a reorganization of large-scale brain networks involving and extending beyond the auditory system. It has been documented that visuomotor transformation is impaired after early deafness, associated with hyper-crosstalk between the task-critical frontoparietal network and the default-mode network. However, it remains unknown whether and how the reorganized large-scale brain networks involving the auditory cortex contribute to impaired visuomotor transformation after early deafness. Here, we asked deaf and early hard of hearing participants and normal hearing controls to judge the spatial location of a visual target. Compared with normal hearing controls, the superior temporal gyrus showed significantly increased functional connectivity with the frontoparietal network and the default-mode network in deaf and early hard of hearing participants, specifically during egocentric judgments. However, increased superior temporal gyrus-frontoparietal network and superior temporal gyrus-default-mode network coupling showed antagonistic effects on egocentric judgments. In deaf and early hard of hearing participants, increased superior temporal gyrus-frontoparietal network connectivity was associated with improved egocentric judgments, whereas increased superior temporal gyrus-default-mode network connectivity was associated with deteriorated performance in the egocentric task. Therefore, the data suggest that the auditory cortex exhibits compensatory neuroplasticity (i.e. increased functional connectivity with the task-critical frontoparietal network) to mitigate impaired visuomotor transformation after early auditory deprivation.
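The reported antagonistic pattern (STG-frontoparietal coupling helping, STG-default-mode coupling hurting egocentric judgments) is, computationally, a pair of brain-behavior correlations; a sketch with simulated values standing in for the real connectivity and task measures:

```python
# Sketch of the antagonistic brain-behavior pattern described above, with
# simulated numbers in place of the real connectivity and task measures.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
n = 30                                      # toy group size
stg_fpn = rng.standard_normal(n)            # STG-frontoparietal coupling
stg_dmn = rng.standard_normal(n)            # STG-default-mode coupling
# Simulate behavior that improves with FPN coupling, worsens with DMN coupling.
egocentric_acc = 0.6 * stg_fpn - 0.5 * stg_dmn + rng.standard_normal(n)

for name, fc in [("STG-FPN", stg_fpn), ("STG-DMN", stg_dmn)]:
    r, p = pearsonr(fc, egocentric_acc)
    print(f"{name} vs egocentric accuracy: r = {r:+.2f}, p = {p:.3f}")
```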
Affiliation(s)
- Li Song
- Center for Studies of Psychological Application and School of Psychology, South China Normal University, Guangzhou 510631, China
- Pengfei Wang
- Center for Studies of Psychological Application and School of Psychology, South China Normal University, Guangzhou 510631, China
- Hui Li
- Center for Studies of Psychological Application and School of Psychology, South China Normal University, Guangzhou 510631, China
- Peter H Weiss
- Cognitive Neuroscience, Institute of Neuroscience and Medicine (INM-3), Research Centre Jülich, Wilhelm-Johnen-Strasse, Jülich 52428, Germany
- Department of Neurology, University Hospital Cologne, Cologne University, Cologne 50937, Germany
- Gereon R Fink
- Cognitive Neuroscience, Institute of Neuroscience and Medicine (INM-3), Research Centre Jülich, Wilhelm-Johnen-Strasse, Jülich 52428, Germany
- Department of Neurology, University Hospital Cologne, Cologne University, Cologne 50937, Germany
- Xiaolin Zhou
- Shanghai Key Laboratory of Mental Health and Psychological Crisis Intervention, School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China
- Qi Chen
- Center for Studies of Psychological Application and School of Psychology, South China Normal University, Guangzhou 510631, China
- Cognitive Neuroscience, Institute of Neuroscience and Medicine (INM-3), Research Centre Jülich, Wilhelm-Johnen-Strasse, Jülich 52428, Germany
10
Hu J, Small H, Kean H, Takahashi A, Zekelman L, Kleinman D, Ryan E, Nieto-Castañón A, Ferreira V, Fedorenko E. Precision fMRI reveals that the language-selective network supports both phrase-structure building and lexical access during language production. Cereb Cortex 2023; 33:4384-4404. PMID: 36130104; PMCID: PMC10110436; DOI: 10.1093/cercor/bhac350.
Abstract
A fronto-temporal brain network has long been implicated in language comprehension. However, this network's role in language production remains debated. In particular, it remains unclear whether all or only some language regions contribute to production, and which aspects of production these regions support. Across 3 functional magnetic resonance imaging experiments that rely on robust individual-subject analyses, we characterize the language network's response to high-level production demands. We report 3 novel results. First, sentence production, spoken or typed, elicits a strong response throughout the language network. Second, the language network responds to both phrase-structure building and lexical access demands, although the response to phrase-structure building is stronger and more spatially extensive, present in every language region. Finally, contra some proposals, we find no evidence of brain regions, within or outside the language network, that selectively support phrase-structure building in production relative to comprehension. Instead, all language regions respond more strongly during production than comprehension, suggesting that production incurs a greater cost for the language network. Together, these results align with the idea that language comprehension and production draw on the same knowledge representations, which are stored in a distributed manner within the language-selective network and are used both to interpret and to generate linguistic utterances.
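The "robust individual-subject analyses" refer to functional-localizer methods in which each participant's language fROIs are defined from a localizer contrast and responses are then measured in independent data; a compressed sketch, with hypothetical shapes and an assumed top-10% threshold:

```python
# Compressed sketch of individual-subject fROI analysis: define a fROI as the
# top 10% of localizer-contrast voxels inside a parcel (run 1), then measure
# condition responses in held-out data (run 2). Shapes are toy values.
import numpy as np

rng = np.random.default_rng(5)
n_vox = 1000                                   # voxels in one anatomical parcel
localizer_t = rng.standard_normal(n_vox)       # e.g., sentences > nonwords, run 1
production_beta = rng.standard_normal(n_vox)   # production condition, run 2
comprehension_beta = rng.standard_normal(n_vox)

cutoff = np.quantile(localizer_t, 0.9)         # top 10% of the parcel (assumed)
froi = localizer_t >= cutoff                   # subject-specific fROI

print("fROI size:", froi.sum())
print("production response:", production_beta[froi].mean())
print("comprehension response:", comprehension_beta[froi].mean())
```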
Affiliation(s)
- Jennifer Hu
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- Hannah Small
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD 21218, United States
- Hope Kean
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Atsushi Takahashi
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Leo Zekelman
- Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA 02138, United States
- Elizabeth Ryan
- St. George’s Medical School, St. George’s University, Grenada, West Indies
- Alfonso Nieto-Castañón
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, MA 02215, United States
- Victor Ferreira
- Department of Psychology, UCSD, La Jolla, CA 92093, United States
- Evelina Fedorenko
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA 02138, United States
11
Berent I, Gervain J. Speakers aren't blank slates (with respect to sign-language phonology)! Cognition 2023; 232:105347. PMID: 36528980; DOI: 10.1016/j.cognition.2022.105347.
Abstract
A large literature has gauged the linguistic knowledge of signers by comparing sign-processing by signers and non-signers. Underlying this approach is the assumption that non-signers are devoid of any relevant linguistic knowledge, and as such, they present appropriate non-linguistic controls; a recent paper by Meade et al. (2022) articulates this view explicitly. Our commentary revisits this position. Informed by recent findings from adults and infants, we argue that the phonological system is partly amodal. We show that hearing infants use a shared brain network to extract phonological rules from speech and sign. Moreover, adult speakers who are sign-naïve demonstrably project knowledge of their spoken L1 to signs. So, when it comes to sign-language phonology, speakers are not linguistic blank slates. Disregarding this possibility could systematically underestimate the linguistic knowledge of signers and obscure the nature of the language faculty.
Affiliation(s)
- Judit Gervain
- INCC, CNRS & Université Paris Cité, Paris, France; DPSS, University of Padua, Italy
12
Xu L, Gong T, Shuai L, Feng J. Significantly different noun-verb distinguishing mechanisms in written Chinese and Chinese sign language: An event-related potential study of bilingual native signers. Front Neurosci 2022; 16:910263. DOI: 10.3389/fnins.2022.910263.
Abstract
Little is known about (a) whether bilingual signers possess dissociated neural mechanisms for noun and verb processing in written language (just like native non-signers), or whether they utilize similar neural mechanisms for such processing (given the general lack of part-of-speech criteria in sign languages); and (b) whether learning a language in another modality (L2) influences the corresponding neural mechanisms of L1. To address these issues, we conducted an electroencephalogram (EEG) based reading comprehension study with bimodal bilinguals, namely Chinese native deaf signers, whose L1 is Chinese Sign Language and L2 is written Chinese. Analyses identified significantly dissociated neural mechanisms in the bilingual signers' written noun and verb processing (which became more pronounced as their written Chinese proficiency increased), but not in their understanding of verbal and nominal meanings in Chinese Sign Language. These findings reveal a link between modality-based linguistic features and processing mechanisms, suggesting that processing the modality-based features of a language is unlikely to be affected by learning another language in a different modality, and that cross-modal language transfer is subject to modal constraints rather than explicit linguistic features.
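ERP effects of the kind reported here are typically quantified as mean amplitudes within a component time window (e.g., an N400-like 300-500 ms window), averaged over trials; a minimal sketch with simulated single-trial data:

```python
# Minimal sketch of ERP component quantification: average trials into an ERP,
# then take the mean amplitude in an N400-like window. All values simulated.
import numpy as np

srate = 500                                        # Hz
times = np.arange(-0.2, 0.8, 1 / srate)            # epoch: -200 to 800 ms
rng = np.random.default_rng(6)
trials = rng.standard_normal((100, times.size))    # 100 trials, one channel
# Inject a negative-going deflection 300-500 ms post-stimulus (toy N400).
trials[:, (times > 0.3) & (times < 0.5)] -= 2.0

erp = trials.mean(axis=0)                          # average across trials
window = (times >= 0.3) & (times <= 0.5)
n400_amp = erp[window].mean()
print(f"mean amplitude 300-500 ms: {n400_amp:.2f} (arbitrary units)")
```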
13
Zhou X, Feng M, Hu Y, Zhang C, Zhang Q, Luo X, Yuan W. The Effects of Cortical Reorganization and Applications of Functional Near-Infrared Spectroscopy in Deaf People and Cochlear Implant Users. Brain Sci 2022; 12:1150. PMID: 36138885; PMCID: PMC9496692; DOI: 10.3390/brainsci12091150.
Abstract
A cochlear implant (CI) is currently the only FDA-approved biomedical device that can restore hearing for the majority of patients with severe-to-profound sensorineural hearing loss (SNHL). While prelingually and postlingually deaf individuals benefit substantially from CI, outcomes after implantation vary greatly. Numerous studies have attempted to identify the variables that affect CI outcomes, including the personal characteristics of CI candidates, environmental variables, and device-related variables; yet because all these variables only roughly predict auditory performance with a CI, much of the variation, up to 80%, remains unexplained. Brain structure/function differences after hearing deprivation, that is, cortical reorganization, have gradually attracted the attention of neuroscientists. The cross-modal reorganization of the auditory cortex following deafness is thought to be a key factor in the success of CI. Because the neural mechanisms by which this reorganization affects CI learning and rehabilitation have not been revealed, its adaptive and maladaptive consequences for CI outcomes have recently been the subject of debate. This review describes the evidence for the different roles of cross-modal reorganization in CI performance and attempts to explore the possible reasons. Understanding the core influencing mechanism also requires taking into account the cortical changes from deafness to hearing restoration; however, methodological issues have restricted longitudinal research on cortical function in CI users. Functional near-infrared spectroscopy (fNIRS) has been increasingly used for the study of brain function and language assessment in CI because of its unique advantages and is considered to have great potential. Here, we review studies on auditory cortex reorganization in deaf patients and CI recipients, and then illustrate the feasibility of fNIRS as a neuroimaging tool for predicting and assessing speech performance in CI recipients.
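fNIRS converts optical density changes at two wavelengths into hemoglobin concentration changes via the modified Beer-Lambert law; a schematic two-wavelength inversion follows, where the extinction coefficients and path-length factors are placeholder values rather than calibrated constants:

```python
# Schematic modified Beer-Lambert law (MBLL) inversion for fNIRS:
# delta_OD(lambda) = (eps_HbO * dHbO + eps_HbR * dHbR) * d * DPF(lambda).
# Extinction coefficients and DPFs below are PLACEHOLDER values.
import numpy as np

eps = np.array([[1.5, 3.8],        # [eps_HbO, eps_HbR] at ~760 nm (placeholder)
                [2.5, 1.8]])       # [eps_HbO, eps_HbR] at ~850 nm (placeholder)
d = 3.0                            # source-detector separation, cm
dpf = np.array([6.0, 5.0])         # differential path-length factors (placeholder)

def mbll(delta_od):
    """Invert a 2-wavelength optical-density change into (dHbO, dHbR)."""
    # Each wavelength's equation is scaled by its effective path length d*DPF.
    a = eps * (d * dpf)[:, None]             # 2x2 system matrix
    return np.linalg.solve(a, delta_od)      # concentration changes

dhbo, dhbr = mbll(np.array([0.01, 0.02]))    # toy delta-OD at the two wavelengths
print(f"dHbO = {dhbo:.4g}, dHbR = {dhbr:.4g} (arbitrary concentration units)")
```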
Affiliation(s)
- Xiaoqing Zhou
- Department of Otolaryngology, Chongqing General Hospital, Chongqing 401147, China
- Chongqing Medical University, Chongqing 400042, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chongqing Institute of Green and Intelligent Technology, University of Chinese Academy of Sciences, Chongqing 400714, China
- Menglong Feng
- Department of Otolaryngology, Chongqing General Hospital, Chongqing 401147, China
- Chongqing Medical University, Chongqing 400042, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chongqing Institute of Green and Intelligent Technology, University of Chinese Academy of Sciences, Chongqing 400714, China
- Yaqin Hu
- Department of Otolaryngology, Chongqing General Hospital, Chongqing 401147, China
- Chongqing Medical University, Chongqing 400042, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chongqing Institute of Green and Intelligent Technology, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chanyuan Zhang
- Department of Otolaryngology, Chongqing General Hospital, Chongqing 401147, China
- Chongqing Medical University, Chongqing 400042, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chongqing Institute of Green and Intelligent Technology, University of Chinese Academy of Sciences, Chongqing 400714, China
- Qingling Zhang
- Department of Otolaryngology, Chongqing General Hospital, Chongqing 401147, China
- Chongqing Medical University, Chongqing 400042, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chongqing Institute of Green and Intelligent Technology, University of Chinese Academy of Sciences, Chongqing 400714, China
- Xiaoqin Luo
- Department of Otolaryngology, Chongqing General Hospital, Chongqing 401147, China
- Chongqing Medical University, Chongqing 400042, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chongqing Institute of Green and Intelligent Technology, University of Chinese Academy of Sciences, Chongqing 400714, China
- Wei Yuan
- Department of Otolaryngology, Chongqing General Hospital, Chongqing 401147, China
- Chongqing Medical University, Chongqing 400042, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chongqing Institute of Green and Intelligent Technology, University of Chinese Academy of Sciences, Chongqing 400714, China
- Correspondence: Tel.: +86-23-63535180
14
Leannah C, Willis AS, Quandt LC. Perceiving fingerspelling via point-light displays: The stimulus and the perceiver both matter. PLoS One 2022; 17:e0272838. PMID: 35972921; PMCID: PMC9380947; DOI: 10.1371/journal.pone.0272838.
Abstract
Signed languages such as American Sign Language (ASL) rely on visuospatial information that combines hand and bodily movements, facial expressions, and fingerspelling. Signers communicate in a wide array of sub-optimal environments, such as in dim lighting or from a distance. While fingerspelling is a common and essential part of signed languages, the perception of fingerspelling in difficult visual environments is not well understood. The movement and spatial patterns of ASL are well-suited to representation by dynamic Point Light Display (PLD) stimuli, in which human movement is shown as an array of moving dots affixed to joints on the body. We created PLD videos of fingerspelled location names. The location names were either Real (e.g., KUWAIT) or Pseudo-names (e.g., CLARTAND), and the PLDs showed either a High or a Low number of markers. In an online study, Deaf and Hearing ASL users (total N = 283) watched 27 PLD stimulus videos that varied by Word Type and Number of Markers. Participants watched the videos and typed the names they saw, along with how confident they were in their response. We predicted that when signers see ASL fingerspelling PLDs, language experience in ASL would be positively correlated with accuracy and self-rated confidence scores. We also predicted that Real location names would be understood better than Pseudo-names. Our findings supported those predictions. We also discovered a significant interaction between Age and Word Type, which suggests that as people age, they use outside-world knowledge to inform their fingerspelling comprehension. Finally, we examined accuracy and confidence in fingerspelling perception in early ASL users. Studying the relationship between language experience and PLD fingerspelling perception allows us to explore how hearing status, ASL fluency levels, and age of language acquisition affect the core ability of understanding fingerspelling.
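A point-light display reduces the signer to dot positions at tracked joints, so the High vs. Low marker manipulation amounts to selecting different-sized joint subsets per frame; a toy sketch with hypothetical joint names:

```python
# Toy sketch of building High vs Low marker point-light display (PLD) frames
# from motion-capture joint trajectories. Joint names/counts are hypothetical.
import numpy as np

rng = np.random.default_rng(7)
joints = ["wrist_r", "thumb_r", "index_r", "middle_r", "ring_r", "pinky_r",
          "wrist_l", "elbow_r", "elbow_l", "shoulder_r", "shoulder_l", "head"]
n_frames = 120                                     # ~4 s at 30 fps
motion = {j: rng.standard_normal((n_frames, 2)) for j in joints}  # x, y per frame

def pld_frames(joint_subset):
    """Stack selected joints into (n_frames, n_markers, 2) dot positions."""
    return np.stack([motion[j] for j in joint_subset], axis=1)

high = pld_frames(joints)                          # all markers
low = pld_frames(joints[:6])                       # reduced-marker condition
print("High-marker frames:", high.shape, "Low-marker frames:", low.shape)
```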
Affiliation(s)
- Carly Leannah
- Educational Neuroscience, Gallaudet University, Washington, DC, United States of America
- Athena S. Willis
- Educational Neuroscience, Gallaudet University, Washington, DC, United States of America
- Lorna C. Quandt
- Educational Neuroscience, Gallaudet University, Washington, DC, United States of America
15
Caldwell HB. Sign and Spoken Language Processing Differences in the Brain: A Brief Review of Recent Research. Ann Neurosci 2022; 29:62-70. PMID: 35875424; PMCID: PMC9305909; DOI: 10.1177/09727531211070538.
Abstract
Background: It is currently accepted that sign languages and spoken languages have significant processing commonalities. The evidence supporting this often merely investigates frontotemporal pathways, perisylvian language areas, hemispheric lateralization, and event-related potentials in typical settings. However, recent evidence has explored beyond this and uncovered numerous modality-dependent processing differences between sign languages and spoken languages by accounting for confounds that previously invalidated processing comparisons and by delving into the specific conditions in which they arise. Yet these processing differences are often shallowly dismissed as unspecific to language.
Summary: This review examined recent neuroscientific evidence for processing differences between sign and spoken language modalities and the arguments against these differences’ importance. Key distinctions exist in the topography of the left anterior negativity (LAN) and with modulations of event-related potential (ERP) components like the N400. There is also differential activation of typical spoken language processing areas, such as the conditional role of the temporal areas in sign language (SL) processing. Importantly, sign language processing uniquely recruits parietal areas for processing phonology and syntax and requires the mapping of spatial information to internal representations. Additionally, modality-specific feedback mechanisms distinctively involve proprioceptive post-output monitoring in sign languages, contrary to spoken languages’ auditory and visual feedback mechanisms. The only study to find ERP differences post-production revealed earlier lexical access in sign than spoken languages. Themes of temporality, the validity of an analogous anatomical mechanisms viewpoint, and the comprehensiveness of current language models were also discussed to suggest improvements for future research.
Key message: Current neuroscience evidence suggests various ways in which processing differs between sign and spoken language modalities that extend beyond simple differences between languages. Consideration and further exploration of these differences will be integral in developing a more comprehensive view of language in the brain.
Affiliation(s)
- Hayley Bree Caldwell
- Cognitive and Systems Neuroscience Research Hub (CSN-RH), School of Justice and Society, University of South Australia Magill Campus, Magill, South Australia, Australia
16
Becker Y, Claidière N, Margiotoudi K, Marie D, Roth M, Nazarian B, Anton JL, Coulon O, Meguerditchian A. Broca area homologue's asymmetry reflects gestural communication lateralisation in monkeys (Papio anubis). eLife 2022; 11:e70521. PMID: 35108197; PMCID: PMC8846582; DOI: 10.7554/eLife.70521.
Abstract
Manual gestures and speech recruit a common neural network, involving Broca’s area in the left hemisphere. Such speech-gesture integration gave rise to theories on the critical role of manual gesturing in the origin of language. Within this evolutionary framework, research on gestural communication in our closer primate relatives has received renewed attention for investigating its potential language-like features. Here, using in vivo anatomical MRI in 50 baboons, we found that communicative gesturing is related to a marker of Broca’s homologue in monkeys, namely the ventral portion of the Inferior Arcuate sulcus (IA sulcus). In fact, both the direction and the degree of handedness for gestural communication, but not handedness for object manipulation, are associated and correlated with contralateral depth asymmetry at this exact IA sulcus portion. In other words, baboons that prefer to communicate with their right hand have a deeper left-than-right IA sulcus than those preferring to communicate with their left hand, and vice versa. Interestingly, in contrast to handedness for object manipulation, the lateralisation of gestural communication is not associated with the depth asymmetry of the Central sulcus, suggesting a double dissociation of handedness types between manipulative action and gestural communication. It is thus not excluded that this specific gestural lateralisation signature within the baboons’ frontal cortex might reflect a phylogenetic continuity with language-related Broca’s lateralisation in humans.
Affiliation(s)
- Yannick Becker
- UMR7290, Laboratoire de Psychologie Cognitive, CNRS, Aix-Marseille University, Marseille, France
- Nicolas Claidière
- UMR7290, Laboratoire de Psychologie Cognitive, CNRS, Aix-Marseille University, Marseille, France
- Konstantina Margiotoudi
- UMR7290, Laboratoire de Psychologie Cognitive, CNRS, Aix-Marseille University, Marseille, France
- Damien Marie
- UMR7290, Laboratoire de Psychologie Cognitive, CNRS, Aix-Marseille University, Marseille, France
- Muriel Roth
- Centre IRMf Institut de Neurosciences de la Timone, CNRS, Aix-Marseille University, Marseille, France
- Bruno Nazarian
- Centre IRM Institut de Neurosciences de la Timone, CNRS, Aix-Marseille University, Marseille, France
- Jean-Luc Anton
- Centre IRM Institut de Neurosciences de la Timone, CNRS, Aix-Marseille University, Marseille, France
- Olivier Coulon
- Institut de Neurosciences de la Timone, CNRS, Aix-Marseille University, Marseille, France
- Adrien Meguerditchian
- Laboratoire de Psychologie Cognitive, CNRS, Aix-Marseille University, Marseille, France
17
Goldberg EB, Hillis AE. Sign language aphasia. Handb Clin Neurol 2022; 185:297-315. PMID: 35078607; DOI: 10.1016/B978-0-12-823384-9.00019-0.
Abstract
Signed languages are naturally occurring, fully formed linguistic systems that rely on the movement of the hands, arms, torso, and face within a sign space for production, and are perceived predominantly using visual perception. Despite stark differences in modality and linguistic structure, functional neural organization is strikingly similar to spoken language. Generally speaking, left frontal areas support sign production, and regions in the auditory cortex underlie sign comprehension, despite signers not relying on audition to process language. Given this, should a deaf or hearing signer suffer damage to the left cerebral hemisphere, language is vulnerable to impairment. Multiple cases of sign language aphasia have been documented following left hemisphere injury, and the general pattern of linguistic deficits mirrors those observed in spoken language. The right hemisphere likely plays a role in non-linguistic but critical visuospatial functions of sign language; therefore, individuals who are spared from damage to the left hemisphere but suffer injury to the right are at risk for a different set of communication deficits. In this chapter, we review the neurobiology of sign language and patterns of language deficits that follow brain injury in the deaf signing population.
Affiliation(s)
- Emily B Goldberg
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, United States.
- Argye Elizabeth Hillis
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, United States; Department of Physical Medicine and Rehabilitation, Johns Hopkins University School of Medicine, Baltimore, MD, United States; Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, United States
18
Miranda M, Arias F, Arain A, Newman B, Rolston J, Richards S, Peters A, Pick LH. Neuropsychological evaluation in American Sign Language: A case study of a deaf patient with epilepsy. Epilepsy Behav Rep 2022; 19:100558. PMID: 35856041; PMCID: PMC9287772; DOI: 10.1016/j.ebr.2022.100558.
Affiliation(s)
- Michelle Miranda
- University of Utah, Department of Neurology, Salt Lake City, UT 84132, USA
- Corresponding author at: University of Utah, Center for Alzheimer’s Care, Imaging, and Research (CACIR), 650 Komas Dr. Suite 106A, Salt Lake City, UT 84108, USA.
- Franchesca Arias
- Hinda & Arthur Marcus Institute for Aging Research at the Hebrew SeniorLife, Boston, MA 02131, USA
- Beth Israel Deaconess Medical Center, Department of Cognitive Neurology, Boston, 02215, USA
- Harvard Medical School, Boston, MA 02115, USA
- Amir Arain
- University of Utah, Department of Neurology, Salt Lake City, UT 84132, USA
- Blake Newman
- University of Utah, Department of Neurology, Salt Lake City, UT 84132, USA
- John Rolston
- University of Utah, Department of Neurology, Salt Lake City, UT 84132, USA
- University of Utah, Department of Neurosurgery, Salt Lake City, UT 84132, USA
- Sindhu Richards
- University of Utah, Department of Neurology, Salt Lake City, UT 84132, USA
- Angela Peters
- University of Utah, Department of Neurology, Salt Lake City, UT 84132, USA
- Lawrence H. Pick
- Gallaudet University, Department of Psychology, Washington, DC, 20002, USA
19
Martin KC, Ketchabaw WT, Turkeltaub PE. Plasticity of the language system in children and adults. Handb Clin Neurol 2022; 184:397-414. PMID: 35034751; PMCID: PMC10149040; DOI: 10.1016/B978-0-12-819410-2.00021-7.
Abstract
The language system is perhaps the most unique feature of the human brain's cognitive architecture. It has long been a quest of cognitive neuroscience to understand the neural components that contribute to the hierarchical pattern processing and advanced rule learning required for language. The most important goal of this research is to understand how language becomes impaired when these neural components malfunction or are lost to stroke, and ultimately how we might recover language abilities under these circumstances. Additionally, understanding how the language system develops and how it can reorganize in the face of brain injury or dysfunction could help us to understand brain plasticity in cognitive networks more broadly. In this chapter we will discuss the earliest features of language organization in infants, and how deviations in typical development can, but in some cases do not, lead to disordered language. We will then survey findings from adult stroke and aphasia research on the potential for recovering language processing in both the remaining left hemisphere tissue and in the non-dominant right hemisphere. Altogether, we hope to present a clear picture of what is known about the capacity for plastic change in the neurobiology of the human language system.
Affiliation(s)
- Kelly C Martin
- Department of Neurology, Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, United States
- W Tyler Ketchabaw
- Department of Neurology, Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, United States
- Peter E Turkeltaub
- Department of Neurology, Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, United States; Research Division, MedStar National Rehabilitation Hospital, Washington, DC, United States.
20
Abstract
The first 40 years of research on the neurobiology of sign languages (1960-2000) established that the same key left hemisphere brain regions support both signed and spoken languages, based primarily on evidence from signers with brain injury and, at the end of the 20th century, on evidence from emerging functional neuroimaging technologies (positron emission tomography and fMRI). Building on this earlier work, this review focuses on what we have learned about the neurobiology of sign languages in the last 15-20 years, what controversies remain unresolved, and directions for future research. Production and comprehension processes are addressed separately in order to capture whether and how output and input differences between sign and speech impact the neural substrates supporting language. In addition, the review includes aspects of language that are unique to sign languages, such as pervasive lexical iconicity, fingerspelling, linguistic facial expressions, and depictive classifier constructions. Summary sketches of the neural networks supporting sign language production and comprehension are provided with the hope that these will inspire future research as we begin to develop a more complete neurobiological model of sign language processing.
21
Berent I, de la Cruz-Pavía I, Brentari D, Gervain J. Infants differentially extract rules from language. Sci Rep 2021; 11:20001. PMID: 34625613; PMCID: PMC8501030; DOI: 10.1038/s41598-021-99539-8.
Abstract
Infants readily extract linguistic rules from speech. Here, we ask whether this advantage extends to linguistic stimuli that do not rely on the spoken modality. To address this question, we first examine whether infants can differentially learn rules from linguistic signs. We show that, despite having no previous experience with a sign language, six-month-old infants can extract the reduplicative rule (AA) from dynamic linguistic signs, and the neural response to reduplicative linguistic signs differs from reduplicative visual controls, matched for the dynamic spatiotemporal properties of signs. We next demonstrate that the brain response for reduplicative signs is similar to the response to reduplicative speech stimuli. Rule learning, then, apparently depends on the linguistic status of the stimulus, not its sensory modality. These results suggest that infants are language-ready. They possess a powerful rule system that is differentially engaged by all linguistic stimuli, speech or sign.
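The reduplicative rule (AA) is simply identity between adjacent units; classifying stimuli as rule-conforming or not can be sketched in a few lines (the unit labels are hypothetical stand-ins for syllables or sign components):

```python
# Tiny sketch of the reduplicative rule (AA): a two-unit sequence obeys the
# rule iff its units are identical.
def is_reduplicative(sequence):
    """True for AA-type sequences (e.g., ['ba', 'ba']), False for AB."""
    return len(sequence) == 2 and sequence[0] == sequence[1]

print(is_reduplicative(["ba", "ba"]))   # True  (AA)
print(is_reduplicative(["ba", "du"]))   # False (AB control)
```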
Collapse
Affiliation(s)
| | - Irene de la Cruz-Pavía
- Integrative Neuroscience and Cognition Center, Université de Paris & CNRS, Paris, France.,University of the Basque Country UPV/EHU, Vitoria-Gasteiz, Spain.,Basque Foundation for Science Ikerbasque, Bilbao, Spain
| | | | - Judit Gervain
- Integrative Neuroscience and Cognition Center, Université de Paris & CNRS, Paris, France.,University of Padua, Padua, Italy
| |
Collapse
|
22
|
Trettenbrein PC, Pendzich NK, Cramer JM, Steinbach M, Zaccarella E. Psycholinguistic norms for more than 300 lexical signs in German Sign Language (DGS). Behav Res Methods 2021; 53:1817-1832. [PMID: 33575986 PMCID: PMC8516755 DOI: 10.3758/s13428-020-01524-y] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/11/2020] [Indexed: 02/06/2023]
Abstract
Sign language offers a unique perspective on the human faculty of language by illustrating that linguistic abilities are not bound to speech and writing. In studies of spoken and written language processing, lexical variables such as age of acquisition have been found to play an important role, but such information is not yet available for German Sign Language (Deutsche Gebärdensprache, DGS). Here, we present a set of norms for frequency, age of acquisition, and iconicity for more than 300 lexical DGS signs, derived from subjective ratings by 32 deaf signers. We also provide additional norms for iconicity and transparency for the same set of signs derived from ratings by 30 hearing non-signers. In addition to empirical norming data, the dataset includes machine-readable information about a sign's correspondence in German and English, as well as annotations of lexico-semantic and phonological properties: one-handed vs. two-handed, place of articulation, most likely lexical class, animacy, verb type, (potential) homonymy, and potential dialectal variation. Finally, we include information about sign onset and offset for all stimulus clips from automated motion-tracking data. All norms, stimulus clips, data, and the code used for analysis are made available through the Open Science Framework in the hope that they may prove useful to other researchers: https://doi.org/10.17605/OSF.IO/MZ8J4.
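Because the norms are distributed as machine-readable tables, selecting stimuli by rated properties is a few lines in standard data tooling. A hedged sketch with pandas follows; the column names (gloss_english, aoa, iconicity_deaf) and values are invented for illustration, not the dataset's documented schema.

    import pandas as pd

    # Hypothetical excerpt of a norms table like the DGS dataset described above.
    norms = pd.DataFrame({
        "gloss_english": ["house", "drink", "politics"],
        "aoa": [2.1, 1.8, 9.4],              # rated age of acquisition (years)
        "iconicity_deaf": [5.9, 6.3, 1.7],   # 1 (arbitrary) .. 7 (iconic)
    })

    # Select early-acquired, highly iconic signs as experimental stimuli.
    stimuli = norms[(norms["aoa"] < 5) & (norms["iconicity_deaf"] > 5)]
    print(stimuli["gloss_english"].tolist())  # ['house', 'drink']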
Collapse
Affiliation(s)
- Patrick C Trettenbrein
- Department of Neuropsychology, Max Planck Institute for Human Cognitive & Brain Sciences, Stephanstraße 1a, Leipzig, 04103, Germany.
- International Max Planck Research School on Neuroscience of Communication: Structure, Function, & Plasticity (IMPRS NeuroCom), Stephanstraße 1a, Leipzig, 04103, Germany.
| | - Nina-Kristin Pendzich
- SignLab, Department of German Philology, Georg-August-University, Käte-Hamburger-Weg 3, Göttingen, 37073, Germany
| | - Jens-Michael Cramer
- SignLab, Department of German Philology, Georg-August-University, Käte-Hamburger-Weg 3, Göttingen, 37073, Germany
| | - Markus Steinbach
- SignLab, Department of German Philology, Georg-August-University, Käte-Hamburger-Weg 3, Göttingen, 37073, Germany
| | - Emiliano Zaccarella
- Department of Neuropsychology, Max Planck Institute for Human Cognitive & Brain Sciences, Stephanstraße 1a, Leipzig, 04103, Germany
| |
Collapse
|
23
|
Andin J, Holmer E, Schönström K, Rudner M. Working Memory for Signs with Poor Visual Resolution: fMRI Evidence of Reorganization of Auditory Cortex in Deaf Signers. Cereb Cortex 2021; 31:3165-3176. [PMID: 33625498 PMCID: PMC8196262 DOI: 10.1093/cercor/bhaa400] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2020] [Revised: 12/14/2020] [Accepted: 12/14/2020] [Indexed: 11/16/2022] Open
Abstract
Stimulus degradation adds to working memory load during speech processing. We investigated whether this applies to sign processing and, if so, whether the mechanism implicates secondary auditory cortex. We conducted an fMRI experiment where 16 deaf early signers (DES) and 22 hearing non-signers performed a sign-based n-back task with three load levels and stimuli presented at high and low resolution. We found decreased behavioral performance with increasing load and decreasing visual resolution, but the neurobiological mechanisms involved differed between the two manipulations and did so for both groups. Importantly, while the load manipulation was, as predicted, accompanied by activation in the frontoparietal working memory network, the resolution manipulation resulted in temporal and occipital activation. Furthermore, we found evidence of cross-modal reorganization in the secondary auditory cortex: DES had stronger activation and stronger connectivity between this and several other regions. We conclude that load and stimulus resolution have different neural underpinnings in the visual–verbal domain, which has consequences for current working memory models, and that for DES the secondary auditory cortex is involved in the binding of representations when task demands are low.
Collapse
Affiliation(s)
- Josefine Andin
- Department of Behavioural Science and Learning, Linköping University, Linköping, Sweden.,Swedish Institute for Disability Research, Linnaeus Centre HEAD, Sweden
| | - Emil Holmer
- Department of Behavioural Science and Learning, Linköping University, Linköping, Sweden.,Swedish Institute for Disability Research, Linnaeus Centre HEAD, Sweden.,Center for Medical Image Science and Visualization, Linköping, Sweden
| | | | - Mary Rudner
- Department of Behavioural Science and Learning, Linköping University, Linköping, Sweden.,Swedish Institute for Disability Research, Linnaeus Centre HEAD, Sweden.,Center for Medical Image Science and Visualization, Linköping, Sweden
| |
Collapse
|
24
|
The signing body: extensive sign language practice shapes the size of hands and face. Exp Brain Res 2021; 239:2233-2249. [PMID: 34028597 PMCID: PMC8282562 DOI: 10.1007/s00221-021-06121-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2020] [Accepted: 04/21/2021] [Indexed: 11/20/2022]
Abstract
The representation of the metrics of the hands is distorted, but is malleable through expert dexterity (magicians) and long-term tool use (baseball players). However, it remains unclear whether such modulation leads to a stable representation of the hand that is adopted in every circumstance, or whether the modulation is closely linked to the spatial context where the expertise occurs. To this aim, a group of 10 experienced Sign Language (SL) interpreters were recruited to study the selective influence of expertise and spatial localisation on the metric representation of the hands. Experiment 1 explored differences in hand size representation between the SL interpreters and 10 age-matched controls in near-reaching (Condition 1) and far-reaching space (Condition 2), using a localisation task. SL interpreters presented reduced hand size in the near-reaching condition, with characteristic underestimation of finger lengths and reduced overestimation of hand and wrist widths in comparison with controls. This difference was lost in far-reaching space, confirming that the effect of expertise on hand representations is closely linked to the spatial context where an action is performed. As SL interpreters are also experts in the use of their face for communicative purposes, the effects of expertise on the metrics of the face were also studied (Experiment 2). SL interpreters were more accurate than controls, with an overall reduction of width overestimation. Overall, expertise modifies the representation of relevant body parts in a specific and context-dependent manner. Hence, different representations of the same body part can coexist simultaneously.
Collapse
|
25
|
Loos C, Napoli DJ. Expanding Echo: Coordinated Head Articulations as Nonmanual Enhancements in Sign Language Phonology. Cogn Sci 2021; 45:e12958. [PMID: 34018245 DOI: 10.1111/cogs.12958] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2020] [Revised: 11/09/2020] [Accepted: 02/06/2021] [Indexed: 11/28/2022]
Abstract
Echo phonology was originally proposed to account for obligatory coordination of manual and mouth articulations observed in several sign languages. However, previous research into the phenomenon lacks clear criteria for which components of movement can or must be copied when the articulators are so different. Nor is there discussion of which nonmanual articulators can echo manual movement. Given the prosodic properties of echoes (coordination of onset/offset and of dynamics such as speed) as well as general motoric coordination of various articulators in the human body, we expect that the mouth is not the only nonmanual articulator involved in echo phonology. In this study, we look at a fixed set of lexical items across 36 sign languages and establish that the head can echo manual movement with respect to timing and to the axis/axes of manual movement. We propose that what matters in echo phonology is the visual percept of temporally coordinated movement that repeats a salient movement property in such a way as to give the visual impression of a copy. Our findings suggest that echoes are not obligatory motor couplings of two or more articulators but may enhance phonological distinctions that are otherwise difficult to see.
Collapse
Affiliation(s)
- Cornelia Loos
- Institut für Deutsche Gebärdensprache, Universität Hamburg
| | | |
Collapse
|
26
|
Xu M, Li D, Li P. Brain decoding in multiple languages: Can cross-language brain decoding work? BRAIN AND LANGUAGE 2021; 215:104922. [PMID: 33556764 DOI: 10.1016/j.bandl.2021.104922] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/18/2020] [Revised: 01/05/2021] [Accepted: 01/19/2021] [Indexed: 06/12/2023]
Abstract
The approach of cross-language brain decoding is to use models of brain decoding from one language to decode stimuli of another language. It has the potential to provide new insights into how our brain represents multiple languages. While it is possible to decode semantic information across different languages from neuroimaging data, the approach's overall success remains to be tested and depends on a number of factors such as cross-language similarity, age of acquisition/proficiency levels, and depth of language processing. We expect to see continued progress in this domain, from a traditional focus on words and concrete concepts toward the use of naturalistic experimental tasks involving higher-level language processing (e.g., discourse processing). The approach can also be applied to understand how cross-modal, cross-cultural, and other nonlinguistic factors may influence neural representations of different languages. This article provides an overview of cross-language brain decoding with suggestions for future research directions.
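The core logic of cross-language decoding is train-on-one-language, test-on-the-other. Below is a minimal sketch with scikit-learn on synthetic data; real studies decode voxel patterns against semantic vectors rather than the toy arrays used here.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_items, n_voxels = 40, 200

    # Synthetic "brain patterns" for the same two concepts in languages A and B,
    # sharing a common signal plus language-specific noise.
    labels = rng.integers(0, 2, n_items)
    signal = labels[:, None] * 0.8
    lang_a = signal + rng.normal(size=(n_items, n_voxels))
    lang_b = signal + rng.normal(size=(n_items, n_voxels))

    # Train a decoder on language A, then test it on language B.
    clf = LogisticRegression(max_iter=1000).fit(lang_a, labels)
    print("cross-language accuracy:", clf.score(lang_b, labels))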
Collapse
Affiliation(s)
- Min Xu
- Center for Brain Disorders and Cognitive Sciences, Shenzhen University, Shenzhen 518060, China; Center for Language and Brain, Shenzhen Institute of Neuroscience, Shenzhen 518060, China.
| | - Duo Li
- Department of Chinese and Bilingual Studies, Faculty of Humanities, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
| | - Ping Li
- Department of Chinese and Bilingual Studies, Faculty of Humanities, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China.
| |
Collapse
|
27
|
Luna S, Joubert S, Blondel M, Cecchetto C, Gagné JP. The Impact of Aging on Spatial Abilities in Deaf Users of a Sign Language. JOURNAL OF DEAF STUDIES AND DEAF EDUCATION 2021; 26:230-240. [PMID: 33221919 DOI: 10.1093/deafed/enaa034] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/04/2020] [Revised: 09/21/2020] [Accepted: 09/22/2020] [Indexed: 06/11/2023]
Abstract
Research involving the general population of people who use a spoken language to communicate has demonstrated that older adults experience cognitive and physical changes associated with aging. Notwithstanding the differences in the cognitive processes involved in sign and spoken languages, it is possible that aging can also affect cognitive processing in deaf signers. This research aims to explore the impact of aging on spatial abilities among sign language users. Results showed that younger signers were more accurate than older signers on all spatial tasks. Therefore, the age-related impact on spatial abilities found in the older hearing population can be generalized to the population of signers. Potential implications for sign language production and comprehension are discussed.
Collapse
Affiliation(s)
- Stéphanie Luna
- Faculty of Medicine, Université de Montréal
- Centre de recherche de l'Institut universitaire de gériatrie de Montréal
| | - Sven Joubert
- Department of Psychology, Université de Montréal
- Centre de recherche de l'Institut universitaire de gériatrie de Montréal
| | - Marion Blondel
- Centre National de Recherche Scientifique, Structures Formelles du Langage, Université Paris 8
| | - Carlo Cecchetto
- Centre National de Recherche Scientifique, Structures Formelles du Langage, Université Paris 8
- Departement of Psychology, University of Milan-Bicocca
| | - Jean-Pierre Gagné
- Faculty of Medicine, Université de Montréal
- Centre de recherche de l'Institut universitaire de gériatrie de Montréal
| |
Collapse
|
28
|
Banaszkiewicz A, Bola Ł, Matuszewski J, Szczepanik M, Kossowski B, Mostowski P, Rutkowski P, Śliwińska M, Jednoróg K, Emmorey K, Marchewka A. The role of the superior parietal lobule in lexical processing of sign language: Insights from fMRI and TMS. Cortex 2020; 135:240-254. [PMID: 33401098 DOI: 10.1016/j.cortex.2020.10.025] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2020] [Revised: 09/24/2020] [Accepted: 10/22/2020] [Indexed: 11/29/2022]
Abstract
There is strong evidence that neuronal bases for language processing are remarkably similar for sign and spoken languages. However, as meanings and linguistic structures of sign languages are coded in movement and space and decoded through vision, differences are also present, predominantly in occipitotemporal and parietal areas, such as superior parietal lobule (SPL). Whether the involvement of SPL reflects domain-general visuospatial attention or processes specific to sign language comprehension remains an open question. Here we conducted two experiments to investigate the role of SPL and the laterality of its engagement in sign language lexical processing. First, using unique longitudinal and between-group designs we mapped brain responses to sign language in hearing late learners and deaf signers. Second, using transcranial magnetic stimulation (TMS) in both groups we tested the behavioural relevance of SPL's engagement and its lateralisation during sign language comprehension. SPL activation in hearing participants was observed in the right hemisphere before and bilaterally after the sign language course. Additionally, after the course hearing learners exhibited greater activation in the occipital cortex and left SPL than deaf signers. TMS applied to the right SPL decreased accuracy in both hearing learners and deaf signers. Stimulation of the left SPL decreased accuracy only in hearing learners. Our results suggest that right SPL might be involved in visuospatial attention while left SPL might support phonological decoding of signs in non-proficient signers.
Collapse
Affiliation(s)
- A Banaszkiewicz
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
| | - Ł Bola
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
| | - J Matuszewski
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
| | - M Szczepanik
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
| | - B Kossowski
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
| | - P Mostowski
- Section for Sign Linguistics, Faculty of Polish Studies, University of Warsaw, Warsaw, Poland
| | - P Rutkowski
- Section for Sign Linguistics, Faculty of Polish Studies, University of Warsaw, Warsaw, Poland
| | - M Śliwińska
- Department of Psychology, University of York, Heslington, UK
| | - K Jednoróg
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
| | - K Emmorey
- Laboratory for Language and Cognitive Neuroscience, San Diego State University, San Diego, USA
| | - A Marchewka
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland.
| |
Collapse
|
29
|
Lieberman AM, Borovsky A. Lexical Recognition in Deaf Children Learning American Sign Language: Activation of Semantic and Phonological Features of Signs. LANGUAGE LEARNING 2020; 70:935-973. [PMID: 33510545 PMCID: PMC7837603 DOI: 10.1111/lang.12409] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Children learning language efficiently process single words and activate semantic, phonological, and other features of words during recognition. We investigated lexical recognition in deaf children acquiring American Sign Language (ASL) to determine how perceiving language in the visual-spatial modality affects lexical recognition. Twenty native- or early-exposed signing deaf children (ages 4 to 8 years) participated in a visual world eye-tracking study. Children were presented with a single ASL sign, target picture, and three competitor pictures that varied in their phonological and semantic relationship to the target. Children shifted gaze to the target picture shortly after sign offset. Children showed robust evidence for activation of semantic but not phonological features of signs; however, in their behavioral responses, children were most susceptible to phonological competitors. Results demonstrate that single word recognition in ASL is largely parallel to spoken language recognition among children who are developing a mature lexicon.
Collapse
Affiliation(s)
- Amy M Lieberman
- Language and Literacy Department, Wheelock College of Education and Human Development, Boston University, 2 Silber Way, Boston, MA 02215
| | - Arielle Borovsky
- Department of Speech, Language, and Hearing Sciences, Purdue University, 715 Clinic Drive, West Lafayette, IN 47907-2122
| |
Collapse
|
30
|
Coez A, Fillon L, Saitovitch A, Rutten C, Marlin S, Boisgontier J, Vinçon-Leite A, Lemaitre H, Grévent D, Roux CJ, Dangouloff-Ros V, Levy R, Bizaguet E, Rouillon I, Garabédian EN, Denoyelle F, Zilbovicius M, Loundon N, Boddaert N. Arterial spin labeling brain MRI study to evaluate the impact of deafness on cerebral perfusion in 79 children before cochlear implantation. Neuroimage Clin 2020; 29:102510. [PMID: 33369563 PMCID: PMC7777537 DOI: 10.1016/j.nicl.2020.102510] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2020] [Revised: 11/06/2020] [Accepted: 11/16/2020] [Indexed: 01/06/2023]
Abstract
Age at implantation is considered a major factor influencing outcomes after pediatric cochlear implantation. In the absence of acoustic input, it has been proposed that cross-modal reorganization can be detrimental to adaptation to the new electrical input provided by a cochlear implant. Here, through a retrospective study, we aimed to investigate differences in cerebral blood flow (CBF) at rest prior to implantation in children with congenital deafness compared to normally hearing children. In addition, we looked at the putative link between pre-operative rest-CBF and oral intelligibility scores at 12 months post-implantation. Finally, we observed the evolution of perfusion with age, within brain areas showing abnormal rest-CBF associated with deafness, in deaf children and in normally hearing children. In children older than 5 years, results showed a significant bilateral hypoperfusion of temporal regions in deaf children, particularly in Heschl's gyrus, and a significant hyperperfusion of occipital regions. Furthermore, in children older than 5 years, whole-brain voxel-by-voxel correlation analysis between pre-operative rest-CBF and oral intelligibility scores at 12 months post-implantation showed a significant negative correlation localized in the occipital regions: children who performed worse in the speech perception test one year after implantation were those presenting higher preoperative CBF values in these occipital regions. Finally, when comparing mean relative perfusion (extracted from the temporal regions found abnormal on whole-brain voxel-based analysis) across ages in patients and controls, we observed that the evolution of temporal perfusion differed significantly between deaf and normally hearing children: while temporal perfusion increased with age in normally hearing children, it remained stable in deaf children. We thus identified a critical period around 4 years of age where, in the context of auditory deprivation, there is a lack of synaptic activity in auditory regions. These results support the benefits of early cochlear implantation to maximize the effectiveness of auditory rehabilitation and to avoid cross-modal reorganization.
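At its core, the pre-operative analysis described here is a voxel-by-voxel correlation between resting CBF and a behavioral outcome score. A schematic numpy/scipy version on synthetic data; real analyses add smoothing, covariates, and multiple-comparison correction.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_children, n_voxels = 30, 5000

    cbf = rng.normal(size=(n_children, n_voxels))   # rest-CBF maps (children x voxels)
    scores = rng.normal(size=n_children)            # 12-month intelligibility scores

    # Pearson r between CBF and outcome at every voxel.
    r = np.array([stats.pearsonr(cbf[:, v], scores)[0] for v in range(n_voxels)])

    # In the study, outcome-relevant occipital voxels showed negative r;
    # here the data are random, so this just illustrates the computation.
    print("most negative voxelwise r:", r.min())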
Collapse
Affiliation(s)
- Arnaud Coez
- Service de radiologie pédiatrique, Hôpital Necker Enfants Malades, Assistance Publique Hôpitaux de Paris, APHP, Université de Paris, INSERM U1163, Institut Imagine, Paris, France; Service d'oto-rhino-laryngologie pédiatrique, Centre de Réference des Surdités Génétiques, Hôpital Necker Enfants Malades, AP-HP, Université de Paris, Paris, France; Laboratoire de correction auditive, Bizaguet, Paris, France; Institut de l'Audition, Paris, France.
| | - Ludovic Fillon
- Service de radiologie pédiatrique, Hôpital Necker Enfants Malades, Assistance Publique Hôpitaux de Paris, APHP, Université de Paris, INSERM U1163, Institut Imagine, Paris, France
| | - Ana Saitovitch
- Service de radiologie pédiatrique, Hôpital Necker Enfants Malades, Assistance Publique Hôpitaux de Paris, APHP, Université de Paris, INSERM U1163, Institut Imagine, Paris, France
| | - Caroline Rutten
- Service de radiologie pédiatrique, Hôpital Necker Enfants Malades, Assistance Publique Hôpitaux de Paris, APHP, Université de Paris, INSERM U1163, Institut Imagine, Paris, France
| | - Sandrine Marlin
- Service de Génétique Médicale, Centre de Référence des Surdités Génétiques, Hôpital Necker Enfants Malades, AP-HP, Université de Paris, Paris, France
| | - Jennifer Boisgontier
- Service de radiologie pédiatrique, Hôpital Necker Enfants Malades, Assistance Publique Hôpitaux de Paris, APHP, Université de Paris, INSERM U1163, Institut Imagine, Paris, France
| | - Alice Vinçon-Leite
- Service de radiologie pédiatrique, Hôpital Necker Enfants Malades, Assistance Publique Hôpitaux de Paris, APHP, Université de Paris, INSERM U1163, Institut Imagine, Paris, France
| | - Hervé Lemaitre
- Service de radiologie pédiatrique, Hôpital Necker Enfants Malades, Assistance Publique Hôpitaux de Paris, APHP, Université de Paris, INSERM U1163, Institut Imagine, Paris, France
| | - David Grévent
- Service de radiologie pédiatrique, Hôpital Necker Enfants Malades, Assistance Publique Hôpitaux de Paris, APHP, Université de Paris, INSERM U1163, Institut Imagine, Paris, France
| | - Charles-Joris Roux
- Service de radiologie pédiatrique, Hôpital Necker Enfants Malades, Assistance Publique Hôpitaux de Paris, APHP, Université de Paris, INSERM U1163, Institut Imagine, Paris, France
| | - Volodia Dangouloff-Ros
- Service de radiologie pédiatrique, Hôpital Necker Enfants Malades, Assistance Publique Hôpitaux de Paris, APHP, Université de Paris, INSERM U1163, Institut Imagine, Paris, France
| | - Raphaël Levy
- Service de radiologie pédiatrique, Hôpital Necker Enfants Malades, Assistance Publique Hôpitaux de Paris, APHP, Université de Paris, INSERM U1163, Institut Imagine, Paris, France
| | - Eric Bizaguet
- Service d'oto-rhino-laryngologie pédiatrique, Centre de Réference des Surdités Génétiques, Hôpital Necker Enfants Malades, AP-HP, Université de Paris, Paris, France; Laboratoire de correction auditive, Bizaguet, Paris, France
| | - Isabelle Rouillon
- Service d'oto-rhino-laryngologie pédiatrique, Centre de Réference des Surdités Génétiques, Hôpital Necker Enfants Malades, AP-HP, Université de Paris, Paris, France
| | - Eréa Noël Garabédian
- Service d'oto-rhino-laryngologie pédiatrique, Centre de Réference des Surdités Génétiques, Hôpital Necker Enfants Malades, AP-HP, Université de Paris, Paris, France
| | - Françoise Denoyelle
- Service d'oto-rhino-laryngologie pédiatrique, Centre de Réference des Surdités Génétiques, Hôpital Necker Enfants Malades, AP-HP, Université de Paris, Paris, France; Institut de l'Audition, Paris, France
| | - Monica Zilbovicius
- Service de radiologie pédiatrique, Hôpital Necker Enfants Malades, Assistance Publique Hôpitaux de Paris, APHP, Université de Paris, INSERM U1163, Institut Imagine, Paris, France; INSERM ERL "Developmental Trajectories & Psychiatry", Université Paris Saclay, Ecole Normale Supérieure Paris-Saclay, Université de Paris, CNRS, Centre Borelli, France
| | - Natalie Loundon
- Service d'oto-rhino-laryngologie pédiatrique, Centre de Réference des Surdités Génétiques, Hôpital Necker Enfants Malades, AP-HP, Université de Paris, Paris, France; Institut de l'Audition, Paris, France
| | - Nathalie Boddaert
- Service de radiologie pédiatrique, Hôpital Necker Enfants Malades, Assistance Publique Hôpitaux de Paris, APHP, Université de Paris, INSERM U1163, Institut Imagine, Paris, France; INSERM ERL "Developmental Trajectories & Psychiatry", Université Paris Saclay, Ecole Normale Supérieure Paris-Saclay, Université de Paris, CNRS, Centre Borelli, France
| |
Collapse
|
31
|
Shum J, Fanda L, Dugan P, Doyle WK, Devinsky O, Flinker A. Neural correlates of sign language production revealed by electrocorticography. Neurology 2020; 95:e2880-e2889. [PMID: 32788249 PMCID: PMC7734739 DOI: 10.1212/wnl.0000000000010639] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/24/2019] [Accepted: 05/20/2020] [Indexed: 11/15/2022] Open
Abstract
OBJECTIVE The combined spatiotemporal dynamics underlying sign language production remain largely unknown. To investigate these dynamics compared to speech production, we used intracranial electrocorticography during a battery of language tasks. METHODS We report a unique case of direct cortical surface recordings obtained from a neurosurgical patient with intact hearing who is bilingual in English and American Sign Language. We designed a battery of cognitive tasks to capture multiple modalities of language processing and production. RESULTS We identified 2 spatially distinct cortical networks: ventral for speech and dorsal for sign production. Sign production recruited perirolandic, parietal, and posterior temporal regions, while speech production recruited frontal, perisylvian, and perirolandic regions. Electrical cortical stimulation confirmed this spatial segregation, identifying mouth areas for speech production and limb areas for sign production. The temporal dynamics revealed superior parietal cortex activity immediately before sign production, suggesting its role in planning and producing sign language. CONCLUSIONS Our findings reveal a distinct network for sign language and detail the temporal propagation supporting sign production.
Collapse
Affiliation(s)
- Jennifer Shum
- From the Department of Neurology (J.S., L.F., P.D., W.K.D., O.D., A.F.), Comprehensive Epilepsy Center, and Department of Neurosurgery (W.K.D.), New York University School of Medicine, NY.
| | - Lora Fanda
- From the Department of Neurology (J.S., L.F., P.D., W.K.D., O.D., A.F.), Comprehensive Epilepsy Center, and Department of Neurosurgery (W.K.D.), New York University School of Medicine, NY
| | - Patricia Dugan
- From the Department of Neurology (J.S., L.F., P.D., W.K.D., O.D., A.F.), Comprehensive Epilepsy Center, and Department of Neurosurgery (W.K.D.), New York University School of Medicine, NY
| | - Werner K Doyle
- From the Department of Neurology (J.S., L.F., P.D., W.K.D., O.D., A.F.), Comprehensive Epilepsy Center, and Department of Neurosurgery (W.K.D.), New York University School of Medicine, NY
| | - Orrin Devinsky
- From the Department of Neurology (J.S., L.F., P.D., W.K.D., O.D., A.F.), Comprehensive Epilepsy Center, and Department of Neurosurgery (W.K.D.), New York University School of Medicine, NY
| | - Adeen Flinker
- From the Department of Neurology (J.S., L.F., P.D., W.K.D., O.D., A.F.), Comprehensive Epilepsy Center, and Department of Neurosurgery (W.K.D.), New York University School of Medicine, NY
| |
Collapse
|
32
|
Krebs J, Malaia E, Wilbur RB, Roehm D. Psycholinguistic mechanisms of classifier processing in sign language. J Exp Psychol Learn Mem Cogn 2020; 47:998-1011. [PMID: 33211523 DOI: 10.1037/xlm0000958] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Nonsigners viewing sign language are sometimes able to guess the meaning of signs by relying on the overt connection between form and meaning, or iconicity (cf. Ortega, Özyürek, & Peeters, 2020; Strickland et al., 2015). One word class in sign languages that appears to be highly iconic is classifiers: verb-like signs that can refer to location change or handling. Classifier use and meaning are governed by linguistic rules, yet in comparison with lexical verb signs, classifiers are highly variable in their morpho-phonology (variety of potential handshapes and motion direction within the sign). These open-class linguistic items in sign languages prompt a question about the mechanisms of their processing: Are they part of a gestural-semiotic system (processed like the gestures of nonsigners), or are they processed as linguistic verbs? To examine the psychological mechanisms of classifier comprehension, we recorded the electroencephalogram (EEG) activity of signers who watched videos of signed sentences with classifiers. We manipulated the sentence word order of the stimuli (subject-object-verb [SOV] vs. object-subject-verb [OSV]), contrasting the two conditions, which, according to different processing hypotheses, should incur increased processing costs for OSV orders. As previously reported for lexical signs, we observed an N400 effect for OSV compared with SOV, reflecting increased cognitive load for linguistic processing. These findings support the hypothesis that classifiers are a linguistic part of speech in sign language, extending the current understanding of processing mechanisms at the interface of linguistic form and meaning.
Collapse
Affiliation(s)
- Julia Krebs
- Research Group Neurobiology of Language, Department of Linguistics, University of Salzburg
| | - Evie Malaia
- Department of Communicative Disorders, University of Alabama
| | | | - Dietmar Roehm
- Research Group Neurobiology of Language, Department of Linguistics, University of Salzburg
| |
Collapse
|
33
|
Trettenbrein PC, Papitto G, Friederici AD, Zaccarella E. Functional neuroanatomy of language without speech: An ALE meta-analysis of sign language. Hum Brain Mapp 2020; 42:699-712. [PMID: 33118302 PMCID: PMC7814757 DOI: 10.1002/hbm.25254] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2020] [Accepted: 10/09/2020] [Indexed: 12/19/2022] Open
Abstract
Sign language (SL) conveys linguistic information using gestures instead of sounds. Here, we apply a meta‐analytic estimation approach to neuroimaging studies (N = 23; subjects = 316) and ask whether SL comprehension in deaf signers relies on the same primarily left‐hemispheric cortical network implicated in spoken and written language (SWL) comprehension in hearing speakers. We show that: (a) SL recruits bilateral fronto‐temporo‐occipital regions with strong left‐lateralization in the posterior inferior frontal gyrus known as Broca's area, mirroring functional asymmetries observed for SWL. (b) Within this SL network, Broca's area constitutes a hub which attributes abstract linguistic information to gestures. (c) SL‐specific voxels in Broca's area are also crucially involved in SWL, as confirmed by meta‐analytic connectivity modeling using an independent large‐scale neuroimaging database. This strongly suggests that the human brain evolved a lateralized language network with a supramodal hub in Broca's area which computes linguistic information independent of speech.
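Activation likelihood estimation, the method applied in this meta-analysis, models each reported focus as a 3D Gaussian and combines per-study maps as a probabilistic union. A stripped-down 1D illustration of that union step follows; real ALE works on 3D coordinates with sample-size-dependent kernel widths, and the foci below are invented.

    import numpy as np

    grid = np.linspace(0, 100, 101)          # 1D stand-in for brain voxels

    def modeled_activation(foci, fwhm=10.0):
        """Gaussian 'modeled activation' map for one study's foci (max across foci)."""
        sigma = fwhm / 2.355
        ma = np.zeros_like(grid)
        for f in foci:
            ma = np.maximum(ma, np.exp(-0.5 * ((grid - f) / sigma) ** 2))
        return ma

    studies = [[30.0, 62.0], [33.0], [60.0, 65.0]]   # invented foci per study
    ma_maps = [modeled_activation(f) for f in studies]

    # ALE score: probabilistic union across studies, 1 - prod(1 - MA_i).
    ale = 1.0 - np.prod([1.0 - m for m in ma_maps], axis=0)
    print("peak ALE:", round(ale.max(), 3), "at position", grid[ale.argmax()])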
Collapse
Affiliation(s)
- Patrick C Trettenbrein
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.,International Max Planck Research School on Neuroscience of Communication: Structure, Function, and Plasticity (IMPRS NeuroCom), Leipzig, Germany
| | - Giorgio Papitto
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.,International Max Planck Research School on Neuroscience of Communication: Structure, Function, and Plasticity (IMPRS NeuroCom), Leipzig, Germany
| | - Angela D Friederici
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Emiliano Zaccarella
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| |
Collapse
|
34
|
Banaszkiewicz A, Matuszewski J, Bola Ł, Szczepanik M, Kossowski B, Rutkowski P, Szwed M, Emmorey K, Jednoróg K, Marchewka A. Multimodal imaging of brain reorganization in hearing late learners of sign language. Hum Brain Mapp 2020; 42:384-397. [PMID: 33098616 PMCID: PMC7776004 DOI: 10.1002/hbm.25229] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2020] [Revised: 07/17/2020] [Accepted: 09/30/2020] [Indexed: 11/09/2022] Open
Abstract
The neural plasticity underlying language learning is a process rather than a single event. However, the dynamics of training-induced brain reorganization have rarely been examined, especially using a multimodal magnetic resonance imaging approach, which allows us to study the relationship between functional and structural changes. We focus on sign language acquisition in hearing adults who underwent an 8-month-long course and five neuroimaging sessions. We assessed what neural changes occurred as participants learned a new language in a different modality, as reflected by task-based activity, connectivity changes, and co-occurring structural alterations. Major changes in the activity pattern appeared after just 3 months of learning, as indicated by increases in activation within the modality-independent perisylvian language network, together with increased activation in modality-dependent parieto-occipital, visuospatial and motion-sensitive regions. Despite further learning, no alterations in activation were detected during the following months. However, enhanced coupling between left-lateralized occipital and inferior frontal regions was observed as proficiency increased. Furthermore, an increase in gray matter volume was detected in the left inferior frontal gyrus, which peaked at the end of learning. Overall, these results show the complexity and temporal distinctiveness of the various aspects of brain reorganization associated with learning a new language in a different sensory modality.
Collapse
Affiliation(s)
- Anna Banaszkiewicz
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
| | - Jacek Matuszewski
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
| | - Łukasz Bola
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland.,Institute of Psychology, Jagiellonian University, Kraków, Poland.,Department of Psychology, Harvard University, Boston, Massachusetts, USA
| | - Michał Szczepanik
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
| | - Bartosz Kossowski
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
| | - Paweł Rutkowski
- Section for Sign Linguistics, Faculty of Polish Studies, University of Warsaw, Warsaw, Poland
| | - Marcin Szwed
- Institute of Psychology, Jagiellonian University, Kraków, Poland
| | - Karen Emmorey
- Laboratory for Language and Cognitive Neuroscience, San Diego State University, San Diego, California, USA
| | - Katarzyna Jednoróg
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
| | - Artur Marchewka
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
| |
Collapse
|
35
|
Deng Q, Gu F, Tong SX. Lexical processing in sign language: A visual mismatch negativity study. Neuropsychologia 2020; 148:107629. [PMID: 32976852 DOI: 10.1016/j.neuropsychologia.2020.107629] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2020] [Revised: 09/08/2020] [Accepted: 09/14/2020] [Indexed: 10/23/2022]
Abstract
Event-related potential studies of spoken and written language show the automatic access of auditory and visual words, as indexed by mismatch negativity (MMN) or visual MMN (vMMN). The present study examined whether the same automatic lexical processing occurs in a visual-gestural language, i.e., Hong Kong Sign Language (HKSL). Using a classic visual oddball paradigm, deaf signers and hearing non-signers were presented with a sequence of static images representing HKSL lexical signs and non-signs. When compared with hearing non-signers, deaf signers exhibited an enhanced vMMN elicited by the lexical signs at around 230 ms, and a larger P1-N170 complex evoked by both lexical sign and non-sign standards at the parieto-occipital area in the early time window between 65 ms and 170 ms. These findings indicate that deaf signers implicitly process the lexical sign and that neural response differences between deaf signers and hearing non-signers occur at the early stage of sign processing.
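The vMMN reported here is a difference wave: the average response to deviants minus the average response to standards, summarized over a time window (around 230 ms above). A minimal numpy sketch on synthetic epochs; the sampling rate, trial counts, and injected effect are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(2)
    sfreq = 500                                   # sampling rate, Hz
    times = np.arange(-0.1, 0.5, 1 / sfreq)       # epoch from -100 to 500 ms

    # Synthetic single-trial epochs (trials x time); deviants carry an
    # extra negativity peaking near 230 ms.
    standards = rng.normal(size=(200, times.size))
    deviants = rng.normal(size=(40, times.size))
    deviants -= 1.5 * np.exp(-0.5 * ((times - 0.23) / 0.03) ** 2)

    diff_wave = deviants.mean(axis=0) - standards.mean(axis=0)

    # Mean amplitude in a 200-260 ms window, as one might quantify the vMMN.
    win = (times >= 0.20) & (times <= 0.26)
    print("vMMN mean amplitude:", diff_wave[win].mean())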
Collapse
Affiliation(s)
- Qinli Deng
- Human Communication, Development, and Information Sciences, Faculty of Education, The University of Hong Kong, Hong Kong, China.
| | - Feng Gu
- Human Communication, Development, and Information Sciences, Faculty of Education, The University of Hong Kong, Hong Kong, China; The College of Literature and Journalism, Sichuan University, Chengdu, China.
| | - Shelley Xiuli Tong
- Human Communication, Development, and Information Sciences, Faculty of Education, The University of Hong Kong, Hong Kong, China.
| |
Collapse
|
36
|
Leonard MK, Lucas B, Blau S, Corina DP, Chang EF. Cortical Encoding of Manual Articulatory and Linguistic Features in American Sign Language. Curr Biol 2020; 30:4342-4351.e3. [PMID: 32888480 DOI: 10.1016/j.cub.2020.08.048] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2020] [Revised: 07/17/2020] [Accepted: 08/13/2020] [Indexed: 01/08/2023]
Abstract
The fluent production of a signed language requires exquisite coordination of sensory, motor, and cognitive processes. Similar to speech production, language produced with the hands by fluent signers appears effortless but reflects the precise coordination of both large-scale and local cortical networks. The organization and representational structure of sensorimotor features underlying sign language phonology in these networks remains unknown. Here, we present a unique case study of high-density electrocorticography (ECoG) recordings from the cortical surface of a profoundly deaf signer during an awake craniotomy. While neural activity was recorded from sensorimotor cortex, the participant produced a large variety of movements in linguistic and transitional movement contexts. We found that at both single electrode and neural population levels, high-gamma activity reflected tuning for particular hand, arm, and face movements, which were organized along dimensions that are relevant for phonology in sign language. Decoding of manual articulatory features revealed a clear functional organization and population dynamics for these highly practiced movements. Furthermore, neural activity clearly differentiated linguistic and transitional movements, demonstrating encoding of language-relevant articulatory features. These results provide a novel and unique view of the fine-scale dynamics of complex and meaningful sensorimotor actions.
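High-gamma activity of the kind analyzed here is typically extracted by band-pass filtering the ECoG signal (roughly 70-150 Hz) and taking the Hilbert envelope. A hedged scipy sketch on one synthetic channel; the filter order, band edges, and burst timing are illustrative, not the study's pipeline.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 1000                                    # sampling rate, Hz
    t = np.arange(0, 2.0, 1 / fs)
    rng = np.random.default_rng(3)

    # Synthetic ECoG channel: noise plus a 100 Hz burst from 0.8-1.2 s.
    x = rng.normal(size=t.size)
    burst = (t > 0.8) & (t < 1.2)
    x[burst] += 3 * np.sin(2 * np.pi * 100 * t[burst])

    # Band-pass 70-150 Hz, then the Hilbert envelope gives high-gamma amplitude.
    b, a = butter(4, [70, 150], btype="band", fs=fs)
    envelope = np.abs(hilbert(filtfilt(b, a, x)))
    print("envelope larger during burst:",
          envelope[burst].mean() > envelope[~burst].mean())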
Collapse
Affiliation(s)
- Matthew K Leonard
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; Center for Integrative Neuroscience, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA
| | - Ben Lucas
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; Center for Integrative Neuroscience, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA
| | - Shane Blau
- Center for Mind and Brain, University of California, Davis, Davis, CA, USA; Department of Linguistics, University of California, Davis, Davis, CA, USA
| | - David P Corina
- Center for Mind and Brain, University of California, Davis, Davis, CA, USA; Department of Linguistics, University of California, Davis, Davis, CA, USA
| | - Edward F Chang
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; Center for Integrative Neuroscience, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA.
| |
Collapse
|
37
|
Keitel A, Gross J, Kayser C. Shared and modality-specific brain regions that mediate auditory and visual word comprehension. eLife 2020; 9:e56972. [PMID: 32831168 PMCID: PMC7470824 DOI: 10.7554/elife.56972] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2020] [Accepted: 08/18/2020] [Indexed: 12/22/2022] Open
Abstract
Visual speech carried by lip movements is an integral part of communication. Yet it remains unclear to what extent visual and acoustic speech comprehension are mediated by the same brain regions. Using multivariate classification of full-brain MEG data, we first probed where the brain represents acoustically and visually conveyed word identities. We then tested where these sensory-driven representations are predictive of participants' trial-wise comprehension. The comprehension-relevant representations of auditory and visual speech converged only in anterior angular and inferior frontal regions and were spatially dissociated from those representations that best reflected the sensory-driven word identity. These results provide a neural explanation for the behavioural dissociation of acoustic and visual speech comprehension and suggest that cerebral representations encoding word identities may be more modality-specific than often assumed.
Collapse
Affiliation(s)
- Anne Keitel
- Psychology, University of Dundee, Dundee, United Kingdom
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
| | - Joachim Gross
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Institute for Biomagnetism and Biosignalanalysis, University of Münster, Münster, Germany
| | - Christoph Kayser
- Department for Cognitive Neuroscience, Faculty of Biology, Bielefeld University, Bielefeld, Germany
| |
Collapse
|
38
|
Simon M, Lazzouni L, Campbell E, Delcenserie A, Muise-Hennessey A, Newman AJ, Champoux F, Lepore F. Enhancement of visual biological motion recognition in early-deaf adults: Functional and behavioral correlates. PLoS One 2020; 15:e0236800. [PMID: 32776962 PMCID: PMC7416928 DOI: 10.1371/journal.pone.0236800] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2019] [Accepted: 07/15/2020] [Indexed: 11/19/2022] Open
Abstract
Deafness leads to brain modifications that are generally associated with cross-modal activity of the auditory cortex, particularly in response to visual stimulation. In the present study, we explored the cortical processing of biological motion conveying either non-communicative (pantomimes) or communicative (emblems) information in early-deaf and hearing individuals, using fMRI. Behaviorally, deaf individuals showed an advantage in detecting communicative gestures relative to hearing individuals. Deaf individuals also showed significantly greater activation in the superior temporal cortex (including the planum temporale and primary auditory cortex) than hearing individuals. The activation levels in this region were correlated with deaf individuals’ response times. This study provides neural and behavioral evidence that cross-modal plasticity leads to functional advantages in the processing of biological motion following lifelong auditory deprivation.
Collapse
Affiliation(s)
- Marie Simon
- Département de Psychologie, Centre de recherche en neuropsychologie et cognition, Université de Montréal, Québec, Canada
| | - Latifa Lazzouni
- Département de Psychologie, Centre de recherche en neuropsychologie et cognition, Université de Montréal, Québec, Canada
| | - Emma Campbell
- Département de Psychologie, Centre de recherche en neuropsychologie et cognition, Université de Montréal, Québec, Canada
| | - Audrey Delcenserie
- Département de Psychologie, Centre de recherche en neuropsychologie et cognition, Université de Montréal, Québec, Canada
- École d’orthophonie et d’audiologie, Université de Montréal, Montréal, Québec, Canada
| | - Alexandria Muise-Hennessey
- Department of Psychology and Neuroscience, NeuroCognitive Imaging Lab, Dalhousie University, Halifax, Nova Scotia, Canada
| | - Aaron J. Newman
- Department of Psychology and Neuroscience, NeuroCognitive Imaging Lab, Dalhousie University, Halifax, Nova Scotia, Canada
| | - François Champoux
- École d’orthophonie et d’audiologie, Université de Montréal, Montréal, Québec, Canada
- Centre de recherche de l’Institut Universitaire de Gériatrie de Montréal, Montréal, Québec, Canada
| | - Franco Lepore
- Département de Psychologie, Centre de recherche en neuropsychologie et cognition, Université de Montréal, Québec, Canada
| |
Collapse
|
39
|
Liu L, Yan X, Li H, Gao D, Ding G. Identifying a supramodal language network in human brain with individual fingerprint. Neuroimage 2020; 220:117131. [PMID: 32622983 DOI: 10.1016/j.neuroimage.2020.117131] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2020] [Revised: 06/21/2020] [Accepted: 06/29/2020] [Indexed: 11/26/2022] Open
Abstract
Where is human language processed in the brain independent of its form? We addressed this issue by analyzing the cortical responses to spoken, written and signed sentences at the level of individual subjects. By applying a novel fingerprinting method based on the distributed pattern of brain activity, we identified a left-lateralized network composed of the superior temporal gyrus/sulcus (STG/STS), inferior frontal gyrus (IFG), precentral gyrus/sulcus (PCG/PCS), and supplementary motor area (SMA). In these regions, the local distributed activity pattern induced by any of the three language modalities can predict the activity pattern induced by the other two modalities, and such cross-modal prediction is individual-specific. The prediction is successful for speech-sign bilinguals across all possible modality pairs, but fails for monolinguals across sign-involved pairs. In comparison, conventional group-mean-focused analysis detected shared cortical activations across modalities only in the STG, PCG/PCS and SMA, and these shared activations were found in both groups. This study reveals a core language system in the brain that is shared by spoken, written and signed language, and demonstrates that it is possible and desirable to utilize information about individual differences for functional brain mapping.
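The fingerprinting logic can be sketched as pattern matching: a subject's activity pattern in one modality should correlate best with that same subject's pattern in another modality. A toy numpy version on synthetic patterns; the real method operates on distributed responses within each region of interest.

    import numpy as np

    rng = np.random.default_rng(4)
    n_subjects, n_features = 10, 300

    # Each subject has an idiosyncratic pattern shared across modalities.
    base = rng.normal(size=(n_subjects, n_features))
    speech = base + 0.5 * rng.normal(size=base.shape)
    sign = base + 0.5 * rng.normal(size=base.shape)

    # Identify each subject's sign pattern from their speech pattern:
    # cross-correlate all speech rows with all sign rows, then take the argmax.
    corr = np.corrcoef(speech, sign)[:n_subjects, n_subjects:]
    hits = (corr.argmax(axis=1) == np.arange(n_subjects)).mean()
    print("fingerprint identification accuracy:", hits)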
Collapse
Affiliation(s)
- Lanfang Liu
- Guangdong Provincial Key Laboratory of Social Cognitive Neuroscience and Mental Health, Department of Psychology, Sun Yat-sen University, Guangzhou, 510006, China; State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University & IDG/McGovern Institute for Brain Research, Beijing, 100875, China
| | - Xin Yan
- Department of Communicative Sciences and Disorders, Michigan State University, East Lansing, MI, 48823, United States; Mental Health Center, Wenhua College, Wuhan, 430000, China
| | - Hehui Li
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University & IDG/McGovern Institute for Brain Research, Beijing, 100875, China
| | - Dingguo Gao
- Guangdong Provincial Key Laboratory of Social Cognitive Neuroscience and Mental Health, Department of Psychology, Sun Yat-sen University, Guangzhou, 510006, China.
| | - Guosheng Ding
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University & IDG/McGovern Institute for Brain Research, Beijing, 100875, China.
| |
Collapse
|
40
|
Crossmodal reorganisation in deafness: Mechanisms for functional preservation and functional change. Neurosci Biobehav Rev 2020; 113:227-237. [DOI: 10.1016/j.neubiorev.2020.03.019] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2019] [Revised: 01/29/2020] [Accepted: 03/16/2020] [Indexed: 11/23/2022]
|
41
|
Castaldi E, Lunghi C, Morrone MC. Neuroplasticity in adult human visual cortex. Neurosci Biobehav Rev 2020; 112:542-552. [DOI: 10.1016/j.neubiorev.2020.02.028] [Citation(s) in RCA: 52] [Impact Index Per Article: 10.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2019] [Revised: 12/30/2019] [Accepted: 02/20/2020] [Indexed: 12/27/2022]
|
42
|
Abstract
Human language allows us to create an infinitude of ideas from a finite set of basic building blocks. What is the neurobiology of this combinatory system? Research has begun to dissect the neural basis of natural language syntax and semantics by analyzing the basics of meaning composition, such as two-word phrases. This work has revealed a system of composition that involves rapidly peaking activity in the left anterior temporal lobe and later engagement of the medial prefrontal cortex. Both brain regions show evidence of shared processing between comprehension and production, as well as between spoken and signed language. Both appear to compute meaning, not syntactic structure. This Review discusses how language builds meaning and lays out directions for future neurobiological research on the combinatory system.
Collapse
Affiliation(s)
- Liina Pylkkänen
- Departments of Linguistics and Psychology, New York University, New York, NY, USA.,NYUAD Institute, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
| |
Collapse
|
43
|
Simon M, Campbell E, Genest F, MacLean MW, Champoux F, Lepore F. The Impact of Early Deafness on Brain Plasticity: A Systematic Review of the White and Gray Matter Changes. Front Neurosci 2020; 14:206. [PMID: 32292323 PMCID: PMC7135892 DOI: 10.3389/fnins.2020.00206] [Citation(s) in RCA: 32] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2019] [Accepted: 02/25/2020] [Indexed: 11/29/2022] Open
Abstract
Background: Auditory deprivation alters cortical and subcortical brain regions, primarily linked to auditory and language processing, resulting in behavioral consequences. Neuroimaging studies have reported various degrees of structural change, yet multiple variables in deafness profiles need to be considered for proper interpretation of results. To date, many inconsistencies have been reported in the gray and white matter alterations following early profound deafness. The purpose of this study was to provide the first systematic review synthesizing gray and white matter changes in deaf individuals. Methods: We conducted a systematic review according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement in 27 studies comprising 626 deaf individuals. Results: Evidence shows that auditory deprivation significantly alters white matter across the primary and secondary auditory cortices. The most consistent alteration across studies was in the bilateral superior temporal gyri. Furthermore, reductions are reported in the fractional anisotropy of white matter fibers comprising the inferior fronto-occipital fasciculus, the superior longitudinal fasciculus, and the subcortical auditory pathway. The reviewed studies also suggest that gray and white matter integrity is sensitive to early sign language acquisition, attenuating the effect of auditory deprivation on neurocognitive development. Conclusions: These findings suggest that understanding cortical reorganization through gray and white matter changes in auditory and non-auditory areas is an important factor in the development of auditory rehabilitation strategies for the deaf population.
Collapse
Affiliation(s)
- Marie Simon
- Département de Psychologie, Centre de Recherche en Neuropsychologie et Cognition, Université de Montréal, Montreal, QC, Canada
| | - Emma Campbell
- Département de Psychologie, Centre de Recherche en Neuropsychologie et Cognition, Université de Montréal, Montreal, QC, Canada
| | - François Genest
- Département de Psychologie, Centre de Recherche en Neuropsychologie et Cognition, Université de Montréal, Montreal, QC, Canada
| | - Michèle W MacLean
- Département de Psychologie, Centre de Recherche en Neuropsychologie et Cognition, Université de Montréal, Montreal, QC, Canada
| | - François Champoux
- École d'Orthophonie et d'Audiologie, Université de Montréal, Montreal, QC, Canada
| | - Franco Lepore
- Département de Psychologie, Centre de Recherche en Neuropsychologie et Cognition, Université de Montréal, Montreal, QC, Canada
44
Pant R, Kanjlia S, Bedny M. A sensitive period in the neural phenotype of language in blind individuals. Dev Cogn Neurosci 2020; 41:100744. [PMID: 31999565 PMCID: PMC6994632 DOI: 10.1016/j.dcn.2019.100744] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2019] [Revised: 11/15/2019] [Accepted: 11/29/2019] [Indexed: 01/18/2023] Open
Abstract
Congenital blindness modifies the neural basis of language: "visual" cortices respond to linguistic information, and fronto-temporal language networks are less left-lateralized. We tested the hypothesis that this plasticity follows a sensitive period by comparing the neural basis of sentence processing between adult-onset blind (AB, n = 16), congenitally blind (CB, n = 22), and blindfolded sighted adults (n = 18). In Experiment 1, participants made semantic judgments for spoken sentences and, in a control condition, solved math equations. In Experiment 2, participants answered "who did what to whom" yes/no questions for grammatically complex (with syntactic movement) and simpler sentences. In a control condition, participants performed a memory task with non-words. In both experiments, the visual cortices of CB and AB but not sighted participants responded more to sentences than to the control conditions, though the effect was much larger in the CB group. Only the "visual" cortex of CB participants responded to grammatical complexity. Unlike the CB group, the AB group showed no reduction in the left-lateralization of the fronto-temporal language network relative to the sighted. These results suggest that congenital blindness modifies the neural basis of language differently from adult-onset blindness, consistent with a developmental sensitive period hypothesis.
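The reduced left-lateralization reported here is conventionally summarized with a laterality index, LI = (L - R) / (L + R), computed over activation in homologous left- and right-hemisphere regions. Below is a minimal sketch under that assumption; the threshold, the random "statistic maps", and the voxel-counting approach are illustrative, not the study's actual pipeline.

```python
import numpy as np

def laterality_index(left_stat, right_stat, threshold=3.0):
    """LI = (L - R) / (L + R) over suprathreshold voxel counts.
    +1 means fully left-lateralized, -1 fully right-lateralized."""
    L = np.sum(left_stat > threshold)
    R = np.sum(right_stat > threshold)
    return 0.0 if L + R == 0 else (L - R) / (L + R)

# Illustrative random "statistic maps" for the two hemispheres
rng = np.random.default_rng(0)
left = rng.normal(1.0, 2.0, 10_000)   # stronger left-hemisphere responses
right = rng.normal(0.5, 2.0, 10_000)
print(f"LI = {laterality_index(left, right):.2f}")
```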
Affiliation(s)
- Rashi Pant
- Department of Psychological and Brain Sciences, Johns Hopkins University, USA; Biological Psychology and Neuropsychology, University of Hamburg, Germany
- Shipra Kanjlia
- Department of Psychological and Brain Sciences, Johns Hopkins University, USA
- Marina Bedny
- Department of Psychological and Brain Sciences, Johns Hopkins University, USA
45
Neuroscience and Sign Language. PAJOUHAN SCIENTIFIC JOURNAL 2020. [DOI: 10.52547/psj.18.2.90] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/24/2023]
46
Mercure E, Evans S, Pirazzoli L, Goldberg L, Bowden-Howl H, Coulson-Thaker K, Beedie I, Lloyd-Fox S, Johnson MH, MacSweeney M. Language Experience Impacts Brain Activation for Spoken and Signed Language in Infancy: Insights From Unimodal and Bimodal Bilinguals. NEUROBIOLOGY OF LANGUAGE (CAMBRIDGE, MASS.) 2020; 1:9-32. [PMID: 32274469 PMCID: PMC7145445 DOI: 10.1162/nol_a_00001] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
Recent neuroimaging studies suggest that monolingual infants activate a left-lateralized frontotemporal brain network in response to spoken language, which is similar to the network involved in processing spoken and signed language in adulthood. However, it is unclear how brain activation to language is influenced by early experience in infancy. To address this question, we present functional near-infrared spectroscopy (fNIRS) data from 60 hearing infants (4 to 8 months of age): 19 monolingual infants exposed to English, 20 unimodal bilingual infants exposed to two spoken languages, and 21 bimodal bilingual infants exposed to English and British Sign Language (BSL). Across all infants, spoken language elicited activation in a bilateral brain network including the inferior frontal and posterior temporal areas, whereas sign language elicited activation in the right temporoparietal area. A significant difference in brain lateralization was observed between groups. Activation in the posterior temporal region was not lateralized in monolinguals and bimodal bilinguals, but was right-lateralized in response to both language modalities in unimodal bilinguals. This suggests that the experience of two spoken languages influences brain activation for sign language when it is encountered for the first time. Multivariate pattern analyses (MVPAs) could classify distributed patterns of activation within the left hemisphere for spoken and signed language in monolinguals (proportion correct = 0.68; p = 0.039) but not in unimodal or bimodal bilinguals. These results suggest that bilingual experience in infancy influences brain activation for language and that unimodal bilingual experience has a greater impact on early brain lateralization than bimodal bilingual experience.
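A decoding result of the kind reported (proportion correct = 0.68, p = 0.039) is typically obtained with a cross-validated classifier plus a permutation test on the accuracy. Here is a minimal scikit-learn sketch with synthetic data standing in for the fNIRS channel responses; the linear SVM, fold count, and injected effect size are assumptions, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, permutation_test_score
from sklearn.svm import SVC

# Synthetic stand-in for fNIRS features: trials x channels,
# labeled 0 = spoken language, 1 = sign language.
rng = np.random.default_rng(42)
X = rng.normal(size=(40, 20))
y = np.repeat([0, 1], 20)
X[y == 1, :5] += 0.8  # inject a weak class difference on a few channels

# Cross-validated accuracy plus a label-permutation null distribution
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
score, perm_scores, pvalue = permutation_test_score(
    SVC(kernel="linear"), X, y, cv=cv, n_permutations=1000, scoring="accuracy")
print(f"accuracy = {score:.2f}, permutation p = {pvalue:.3f}")
```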
Affiliation(s)
- Samuel Evans
- University College London, London, UK
- University of Westminster, London, UK
- Laura Pirazzoli
- Birkbeck, University of London, London, UK
- Boston Children's Hospital, Boston, Massachusetts, US
- Harriet Bowden-Howl
- University College London, London, UK
- University of Plymouth, Plymouth, Devon, UK
- Sarah Lloyd-Fox
- Birkbeck, University of London, London, UK
- University of Cambridge, Cambridge, Cambridgeshire, UK
- Mark H. Johnson
- Birkbeck, University of London, London, UK
- University of Cambridge, Cambridge, Cambridgeshire, UK
47
Rudner M, Orfanidou E, Kästner L, Cardin V, Woll B, Capek CM, Rönnberg J. Neural Networks Supporting Phoneme Monitoring Are Modulated by Phonology but Not Lexicality or Iconicity: Evidence From British and Swedish Sign Language. Front Hum Neurosci 2019; 13:374. [PMID: 31695602 PMCID: PMC6817460 DOI: 10.3389/fnhum.2019.00374] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2018] [Accepted: 10/03/2019] [Indexed: 11/18/2022] Open
Abstract
Sign languages are natural languages in the visual domain. Because they lack a written form, they provide a sharper tool than spoken languages for investigating lexicality effects, which may otherwise be confounded by orthographic processing. In a previous study, we showed that the neural networks supporting phoneme monitoring in deaf British Sign Language (BSL) users are modulated by phonology but not lexicality or iconicity. In the present study, we investigated whether this pattern generalizes to deaf Swedish Sign Language (SSL) users. BSL and SSL have largely overlapping phoneme inventories but are mutually unintelligible because their lexical overlap is small. This is important because it means that even when signs lexicalized in BSL are unintelligible to users of SSL, they are usually still phonologically acceptable. During fMRI scanning, deaf users of the two sign languages monitored signs that were lexicalized in either one or both of those languages for phonologically contrastive elements. Neural activation patterns relating to different linguistic levels of processing were similar across the two sign languages; in particular, we found no effect of lexicality, supporting the notion that apparent lexicality effects on sublexical processing of speech may be driven by orthographic strategies. As expected, we found an effect of phonology but not iconicity. Further, there was a difference in neural activation between the two groups in a motion-processing region of the left occipital cortex, possibly driven by cultural differences, such as education. Importantly, this difference was not modulated by the linguistic characteristics of the material, underscoring the robustness of the neural activation patterns relating to different linguistic levels of processing.
Affiliation(s)
- Mary Rudner
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Eleni Orfanidou
- Deafness, Cognition and Language Research Centre, Department of Experimental Psychology, University College London, London, United Kingdom; School of Psychology, University of Crete, Rethymno, Greece
- Lena Kästner
- Deafness, Cognition and Language Research Centre, Department of Experimental Psychology, University College London, London, United Kingdom; Department of Philosophy, Saarland University, Saarbrücken, Germany
- Velia Cardin
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Deafness, Cognition and Language Research Centre, Department of Experimental Psychology, University College London, London, United Kingdom; School of Psychology, University of East Anglia, Norwich, United Kingdom
- Bencie Woll
- Deafness, Cognition and Language Research Centre, Department of Experimental Psychology, University College London, London, United Kingdom
- Cheryl M Capek
- Division of Neuroscience & Experimental Psychology, School of Biological Sciences, University of Manchester, Manchester, United Kingdom
- Jerker Rönnberg
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
48
Sign and Speech Share Partially Overlapping Conceptual Representations. Curr Biol 2019; 29:3739-3747.e5. [PMID: 31668623 PMCID: PMC6839399 DOI: 10.1016/j.cub.2019.08.075] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2019] [Revised: 08/01/2019] [Accepted: 08/30/2019] [Indexed: 11/24/2022]
Abstract
Conceptual knowledge is fundamental to human cognition. Yet, the extent to which it is influenced by language is unclear. Studies of semantic processing show that similar neural patterns are evoked by the same concepts presented in different modalities (e.g., spoken words and pictures or text) [1, 2, 3]. This suggests that conceptual representations are "modality independent." However, an alternative possibility is that the similarity reflects retrieval of common spoken language representations. Indeed, in hearing spoken language users, text and spoken language are co-dependent [4, 5], and pictures are encoded via visual and verbal routes [6]. A parallel approach investigating semantic cognition shows that bilinguals activate similar patterns for the same words in their different languages [7, 8]. This suggests that conceptual representations are "language independent." However, this has only been tested in spoken language bilinguals. If different languages evoke different conceptual representations, this should be most apparent when comparing languages that differ greatly in structure. Hearing people with signing deaf parents are bilingual in sign and speech: languages conveyed in different modalities. Here, we test the influence of modality and bilingualism on conceptual representation by comparing semantic representations elicited by spoken British English and British Sign Language in hearing early sign-speech bilinguals. We show that representations of semantic categories are shared for sign and speech, but not for individual spoken words and signs. This provides evidence for partially shared representations for sign and speech and shows that language acts as a subtle filter through which we understand and interact with the world.
Highlights:
- RSA analyses show that semantic categories are shared for sign and speech
- Neural patterns for individual spoken words and signs differ
- Spoken word and sign form representations are found in auditory and visual cortices
- Language acts as a subtle filter through which we interact with the world
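The RSA logic behind these findings compares the geometry of neural responses: build a representational dissimilarity matrix (RDM) per language from pairwise distances between concept-evoked patterns, then rank-correlate the two RDMs. Below is a minimal sketch with synthetic voxel patterns; the concept count, noise levels, and correlation-distance metric are illustrative assumptions, not the authors' data or exact pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_concepts, n_voxels = 30, 200

# Synthetic patterns for the same concepts in speech and sign: a shared
# semantic component plus modality-specific noise.
semantic = rng.normal(size=(n_concepts, n_voxels))
speech = semantic + rng.normal(scale=1.5, size=(n_concepts, n_voxels))
sign = semantic + rng.normal(scale=1.5, size=(n_concepts, n_voxels))

# RDM per language: condensed vector of pairwise correlation distances
rdm_speech = pdist(speech, metric="correlation")
rdm_sign = pdist(sign, metric="correlation")

# Shared representational geometry = rank correlation of the two RDMs
rho, p = spearmanr(rdm_speech, rdm_sign)
print(f"RDM correlation: rho = {rho:.2f}, p = {p:.3g}")
```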
49
Zhang C, Lee TMC, Fu Y, Ren C, Chan CCH, Tao Q. Properties of cross-modal occipital responses in early blindness: An ALE meta-analysis. NEUROIMAGE-CLINICAL 2019; 24:102041. [PMID: 31677587 PMCID: PMC6838549 DOI: 10.1016/j.nicl.2019.102041] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/12/2019] [Revised: 09/20/2019] [Accepted: 10/17/2019] [Indexed: 11/10/2022]
Abstract
ALE meta-analysis reveals distributed brain networks for object and spatial functions in individuals with early blindness. ALE contrast analysis reveals specific activations in the left cuneus and lingual gyrus for language function, suggesting a reverse hierarchical organization of the visual cortex for early blind individuals. The findings contribute to visual rehabilitation in blind individuals by revealing the function-dependent and sensory-independent networks during nonvisual processing.
Cross-modal occipital responses appear to be essential for nonvisual processing in individuals with early blindness. However, it is not clear whether the recruitment of occipital regions depends on functional domain or on sensory modality. The current study utilized a coordinate-based meta-analysis to identify the distinct brain regions involved in the functional domains of object, spatial/motion, and language processing, and the common brain regions involved in both auditory and tactile modalities, in individuals with early blindness. Following the PRISMA guidelines, a total of 55 studies were included in the meta-analysis. The specific analyses revealed the brain regions that are consistently recruited for each function, such as the dorsal fronto-parietal network for spatial function and the ventral occipito-temporal network for object function. This is consistent with the literature, suggesting that the two visual streams are preserved in early blind individuals. The contrast analyses found specific activations in the left cuneus and lingual gyrus for language function. This finding is novel and suggests a reverse hierarchical organization of the visual cortex in early blind individuals. The conjunction analyses found common activations in the right middle temporal gyrus, the right precuneus, and a left parieto-occipital region. Clinically, this work contributes to visual rehabilitation in early blind individuals by revealing the function-dependent and sensory-independent networks engaged during nonvisual processing.
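The ALE computation itself is compact: each study's reported foci are smoothed with a Gaussian kernel, the study's modeled-activation (MA) map is the voxelwise maximum over its foci, and studies combine as a probabilistic union. The toy 1-D sketch below illustrates that computation; real ALE runs on 3-D brains with sample-size-dependent kernel widths and permutation-based thresholding, and the foci here are invented.

```python
import numpy as np

grid = np.arange(0.0, 100.0)  # toy 1-D "brain"
sigma = 4.0                   # kernel width; set from sample size in practice

def modeled_activation(foci):
    """Per-study MA map: voxelwise max over Gaussian kernels at each focus."""
    kernels = [np.exp(-((grid - f) ** 2) / (2 * sigma ** 2)) for f in foci]
    return np.max(kernels, axis=0)

# Hypothetical activation foci reported by three studies
studies = [[20, 55], [22, 60], [58]]
ma_maps = [modeled_activation(foci) for foci in studies]

# ALE map = probabilistic union across studies: 1 - prod(1 - MA)
ale = 1.0 - np.prod([1.0 - ma for ma in ma_maps], axis=0)
print(f"peak ALE {ale.max():.2f} at position {ale.argmax()}")
```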
Affiliation(s)
- Caiyun Zhang
- Psychology Department, School of Medicine, Jinan University, Guangzhou 510632, China
- Tatia M C Lee
- Laboratory of Neuropsychology, The University of Hong Kong, Hong Kong, China; Laboratory of Cognitive Affective Neuroscience, The University of Hong Kong, Hong Kong, China; The Affiliated Brain Hospital of Guangzhou Medical University, Guangzhou, China
- Yunwei Fu
- Guangdong-Hongkong-Macau Institute of CNS Regeneration, Ministry of Education CNS Regeneration Collaborative Joint Laboratory, Jinan University, Guangzhou, 510632, China
- Chaoran Ren
- Guangdong-Hongkong-Macau Institute of CNS Regeneration, Ministry of Education CNS Regeneration Collaborative Joint Laboratory, Jinan University, Guangzhou, 510632, China; Guangdong Key Laboratory of Brain Function and Diseases, Jinan University, Guangzhou, 510632, China; Co-innovation Center of Neuroregeneration, Nantong University, Nantong, 226001, China; Center for Brain Science and Brain-Inspired Intelligence, Guangdong-Hong Kong-Macao Greater Bay Area, Guangzhou, China
- Chetwyn C H Chan
- Applied Cognitive Neuroscience Laboratory, Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hong Kong, China
- Qian Tao
- Psychology Department, School of Medicine, Jinan University, Guangzhou 510632, China; Center for Brain Science and Brain-Inspired Intelligence, Guangdong-Hong Kong-Macao Greater Bay Area, Guangzhou, China
50
Grandchamp R, Rapin L, Perrone-Bertolotti M, Pichat C, Haldin C, Cousin E, Lachaux JP, Dohen M, Perrier P, Garnier M, Baciu M, Lœvenbruck H. The ConDialInt Model: Condensation, Dialogality, and Intentionality Dimensions of Inner Speech Within a Hierarchical Predictive Control Framework. Front Psychol 2019; 10:2019. [PMID: 31620039 PMCID: PMC6759632 DOI: 10.3389/fpsyg.2019.02019] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2019] [Accepted: 08/19/2019] [Indexed: 11/19/2022] Open
Abstract
Inner speech has been shown to vary in form along several dimensions. Along the condensation dimension, condensed forms of inner speech have been described that are thought to lack acoustic, phonological, and even syntactic qualities; at the other extreme, expanded forms display articulatory and auditory properties. Along dialogality, inner speech can be monologal, when we engage in internal soliloquy, or dialogal, when we recall past conversations or imagine future dialogs involving our own voice as well as the voices of others addressing us. Along intentionality, it can be intentional (when we deliberately rehearse material in short-term memory) or arise unintentionally (during mind wandering). We introduce the ConDialInt model, a neurocognitive predictive control model of inner speech that accounts for its varieties along these three dimensions. ConDialInt spells out the condensation dimension by including inhibitory control at the conceptualization, formulation, or articulatory planning stage. It accounts for dialogality by assuming internal model adaptations and by speculating on the neural processes underlying perspective switching. It explains the differences between intentional and spontaneous varieties in terms of monitoring. We present an fMRI study in which we probed varieties of inner speech along dialogality and intentionality to examine the validity of the neuroanatomical correlates posited in ConDialInt; condensation was also addressed informally. Our data support the hypothesis that expanded inner speech recruits speech production processes down to articulatory planning, resulting in a predicted signal, the inner voice, with auditory qualities. Along dialogality, covertly using an avatar's voice resulted in activation of right-hemisphere homologs of the regions involved in internal own-voice soliloquy, and in reduced cerebellar activation, consistent with internal model adaptation. Switching from a first-person to a third-person perspective resulted in activations in the precuneus and parietal lobules. Along intentionality, compared with intentional inner speech, mind wandering with inner speech episodes was associated with greater bilateral inferior frontal activation and decreased activation in left temporal regions; this is consistent with the reported subjective evanescence and presumably reflects condensation processes. Our results provide neuroanatomical evidence compatible with predictive control and in favor of the assumptions made in the ConDialInt model.
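The predictive-control claim at the heart of ConDialInt can be caricatured in a few lines: an articulatory plan's efference copy is run through a forward model to produce a predicted auditory signal (the inner voice), while condensation corresponds to inhibiting the pipeline at an earlier stage so that no auditory prediction is generated. Everything in the sketch below (stage names, signal shapes, weights) is an invented illustration of that logic, not the authors' implemented model.

```python
import numpy as np

def forward_model(articulatory_plan, weights):
    """Maps an articulatory plan to its predicted auditory consequence."""
    return weights @ articulatory_plan

def inner_speech(stage="articulation"):
    """Toy pipeline: conceptualization -> formulation -> articulatory planning.
    Inhibiting at an early stage (condensed inner speech) yields no inner
    voice; running the full pipeline (expanded inner speech) yields a
    predicted auditory signal with 'auditory qualities'."""
    rng = np.random.default_rng(0)
    message = rng.normal(size=8)        # conceptualization
    if stage == "conceptualization":
        return None                     # most condensed: no phonological form
    phonology = np.tanh(message)        # formulation
    if stage == "formulation":
        return None                     # condensed: no articulatory detail
    plan = 0.5 * phonology              # articulatory planning (efference copy)
    W = rng.normal(size=(8, 8)) / np.sqrt(8)
    return forward_model(plan, W)       # inner voice = predicted auditory signal

print("expanded form has an inner voice:", inner_speech("articulation") is not None)
print("condensed form does not:", inner_speech("formulation") is None)
```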
Affiliation(s)
- Romain Grandchamp
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- Lucile Rapin
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- Cédric Pichat
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- Célise Haldin
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- Emilie Cousin
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- Jean-Philippe Lachaux
- INSERM U1028, CNRS UMR5292, Brain Dynamics and Cognition Team, Lyon Neurosciences Research Center, Bron, France
- Marion Dohen
- Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Pascal Perrier
- Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Maëva Garnier
- Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Monica Baciu
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- Hélène Lœvenbruck
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France