1.
Yang T, Fan X, Hou B, Wang J, Chen X. Linguistic network in early deaf individuals: A neuroimaging meta-analysis. Neuroimage 2024:120720. PMID: 38971484. DOI: 10.1016/j.neuroimage.2024.120720.
Abstract
This meta-analysis summarizes evidence from 44 neuroimaging experiments and characterizes the general linguistic network in early deaf individuals. Meta-analytic comparisons with hearing individuals found that a specific set of regions (in particular the left inferior frontal gyrus and posterior middle temporal gyrus) participates in supramodal language processing. Beyond previously described modality-specific differences, the present study showed that the left calcarine gyrus and the right caudate were additionally recruited in deaf compared with hearing individuals. The study also showed that the bilateral posterior superior temporal gyrus is shaped by cross-modal plasticity, whereas the left frontotemporal areas are shaped by early language experience. Although an overall left-lateralized pattern for language processing was observed in early deaf individuals, regional lateralization was altered in the inferior temporal gyrus and anterior temporal lobe. These findings indicate that the core language network functions in a modality-independent manner and provide a foundation for determining the contributions of sensory and linguistic experience in shaping the neural bases of language processing.
Affiliation(s)
- Tengyu Yang: Department of Otolaryngology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, the People's Republic of China
- Xinmiao Fan: Department of Otolaryngology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, the People's Republic of China
- Bo Hou: Department of Radiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, the People's Republic of China
- Jian Wang: Department of Otolaryngology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, the People's Republic of China
- Xiaowei Chen: Department of Otolaryngology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, the People's Republic of China
2.
Matchin W, Almeida D, Hickok G, Sprouse J. Cortical networks responsive to phrase structure and subject island violations. bioRxiv 2024:2024.05.05.592579. PMID: 38746262. PMCID: PMC11092748. DOI: 10.1101/2024.05.05.592579.
Abstract
In principle, functional neuroimaging provides uniquely informative data in addressing linguistic questions, because it can indicate distinct processes that are not apparent from behavioral data alone. This could involve adjudicating the source of unacceptability via the different patterns of elicited brain responses to different ungrammatical sentence types. However, it is difficult to interpret brain activations to syntactic violations. Such responses could reflect processes not intrinsically related to linguistic representations, such as domain-general executive function abilities. In order to facilitate the potential use of functional neuroimaging methods to identify the source of different syntactic violations, we conducted an fMRI experiment to identify the brain activation maps associated with two distinct syntactic violation types: phrase structure (created by inverting the order of two adjacent words within a sentence) and subject islands (created by extracting a wh-phrase out of an embedded subject). The comparison of these violations to control sentences surprisingly showed no indication of a generalized violation response, with almost completely divergent activation patterns. Phrase structure violations seemingly activated regions previously implicated in verbal working memory and structural complexity in sentence processing, whereas the subject islands appeared to activate regions previously implicated in conceptual-semantic processing, broadly defined. We review our findings in the context of previous research on syntactic and semantic violations using event-related potentials. We suggest that functional neuroimaging is a potentially fruitful technique for unpacking the distinct sets of cognitive processes elicited by theoretically relevant syntactic violations, when interpreted with care and paired with appropriate control conditions.
Affiliation(s)
- William Matchin: Dept. of Communication Sciences and Disorders, University of South Carolina
- Gregory Hickok: Dept. of Cognitive Sciences and Dept. of Language Science, University of California, Irvine
3.
Mayberry RI, Hatrak M, Ilbasaran D, Cheng Q, Huang Y, Hall ML. Impoverished language in early childhood affects the development of complex sentence structure. Dev Sci 2024; 27:e13416. PMID: 37255282. PMCID: PMC10687309. DOI: 10.1111/desc.13416.
Abstract
The hypothesis that impoverished language experience affects complex sentence structure development around the end of early childhood was tested using a fully randomized, sentence-to-picture matching study in American Sign Language (ASL). The participants were ASL signers who had impoverished or typical access to language in early childhood. Deaf signers whose access to language was highly impoverished in early childhood (N = 11) primarily comprehended structures consisting of a single verb and argument (Subject or Object), agreeing verbs, and the spatial relation or path of semantic classifiers. They showed difficulty comprehending more complex sentence structures involving dual lexical arguments or multiple verbs. As predicted, participants with typical language access in early childhood, deaf native signers (N = 17) or hearing second-language learners (N = 10), comprehended the range of 12 ASL sentence structures, independent of the subjective iconicity or frequency of the stimulus lexical items, or length of ASL experience and performance on non-verbal cognitive tasks. The results show that language experience in early childhood is necessary for the development of complex syntax.
Research highlights:
- Previous research with deaf signers suggests an inflection point around the end of early childhood for sentence structure development.
- Deaf signers who experienced impoverished language until the age of 9 or older comprehend several basic sentence structures but few complex structures.
- Language experience in early childhood is necessary for the development of complex sentence structure.
Affiliation(s)
- Rachel I Mayberry: Department of Linguistics, University of California San Diego, La Jolla, California, USA
- Marla Hatrak: Department of Linguistics, University of California San Diego, La Jolla, California, USA
- Deniz Ilbasaran: Department of Linguistics, University of California San Diego, La Jolla, California, USA
- Qi Cheng: Department of Linguistics, University of Washington, Seattle, Washington, USA
- Yaqian Huang: Department of Linguistics, University of California San Diego, La Jolla, California, USA
- Matt L Hall: Communication Sciences and Disorders, Temple University, Philadelphia, Pennsylvania, USA
4.
Matchin W, den Ouden DB, Basilakos A, Stark BC, Fridriksson J, Hickok G. Grammatical Parallelism in Aphasia: A Lesion-Symptom Mapping Study. Neurobiology of Language 2023; 4:550-574. PMID: 37946730. PMCID: PMC10631800. DOI: 10.1162/nol_a_00117.
Abstract
Sentence structure, or syntax, is potentially a uniquely creative aspect of the human mind. Neuropsychological experiments in the 1970s suggested parallel syntactic production and comprehension deficits in agrammatic Broca's aphasia, thought to result from damage to syntactic mechanisms in Broca's area in the left frontal lobe. This hypothesis was sometimes termed overarching agrammatism, converging with developments in linguistic theory concerning central syntactic mechanisms supporting language production and comprehension. However, the evidence supporting an association among receptive syntactic deficits, expressive agrammatism, and damage to frontal cortex is equivocal. In addition, the relationship among a distinct grammatical production deficit in aphasia, paragrammatism, and receptive syntax has not been assessed. We used lesion-symptom mapping in three partially overlapping groups of left-hemisphere stroke patients to investigate these issues: grammatical production deficits in a primary group of 53 subjects and syntactic comprehension in two larger samples (N = 130 and 218) that overlapped with the primary group. Paragrammatic production deficits were significantly associated with multiple analyses of syntactic comprehension, particularly when incorporating lesion volume as a covariate, but agrammatic production deficits were not. The lesion correlates of impaired performance on syntactic comprehension were significantly associated with damage to temporal lobe regions, which were also implicated in paragrammatism, but not with the inferior and middle frontal regions implicated in expressive agrammatism. Our results provide strong evidence against the overarching agrammatism hypothesis. By contrast, our results suggest the possibility of an alternative grammatical parallelism hypothesis rooted in paragrammatism and a central syntactic system in the posterior temporal lobe.
Affiliation(s)
- William Matchin: Department of Communication Sciences and Disorders, University of South Carolina, Columbia, SC, USA
- Dirk-Bart den Ouden: Department of Communication Sciences and Disorders, University of South Carolina, Columbia, SC, USA
- Alexandra Basilakos: Department of Communication Sciences and Disorders, University of South Carolina, Columbia, SC, USA
- Brielle Caserta Stark: Department of Speech, Language and Hearing Sciences, Program for Neuroscience, Indiana University Bloomington, Bloomington, IN, USA
- Julius Fridriksson: Department of Communication Sciences and Disorders, University of South Carolina, Columbia, SC, USA
- Gregory Hickok: Department of Cognitive Sciences, Department of Language Science, University of California, Irvine, Irvine, CA, USA
5.
Musso M, Altenmüller E, Reisert M, Hosp J, Schwarzwald R, Blank B, Horn J, Glauche V, Kaller C, Weiller C, Schumacher M. Speaking in gestures: Left dorsal and ventral frontotemporal brain systems underlie communication in conducting. Eur J Neurosci 2023; 57:324-350. DOI: 10.1111/ejn.15883.
Abstract
Conducting constitutes a well-structured system of signs anticipating information concerning the rhythm and dynamics of a musical piece. Conductors communicate the musical tempo to the orchestra, unifying the individual instrumental voices to form an expressive musical Gestalt. In a functional magnetic resonance imaging (fMRI) experiment, 12 professional conductors and 16 instrumentalists conducted novel pieces of diverse orchestral and rhythmic complexity in real time. As control conditions, participants either listened to the stimuli or performed beat patterns, keeping the time of a metronome or of complex rhythms played by a drum. Activation of the left superior temporal gyrus (STG), supplementary and premotor cortex, and Broca's pars opercularis (F3op) was shared by both musician groups and separated conducting from the other conditions. Compared to instrumentalists, conductors activated Broca's pars triangularis (F3tri) and the STG, which differentiated conducting from time beating and reflected the increase in complexity during conducting. In comparison to conductors, instrumentalists activated F3op and F3tri when distinguishing complex rhythm processing from simple rhythm processing. Fibre selection from a normative human connectome database, constructed using a global tractography approach, showed that F3op and the STG are connected via the arcuate fasciculus, whereas F3tri and the STG are connected via the extreme capsule. As with language, the anatomical framework characterising conducting gestures is located in the left dorsal system centred on F3op. This system reflected the sensorimotor mapping for structuring gestures to musical tempo. The ventral system centred on F3tri may reflect the conductors' art of setting this musical tempo to the individual orchestra's voices in a global, holistic way.
Affiliation(s)
- Mariacristina Musso: Department of Neurology and Clinical Neuroscience, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Eckart Altenmüller: Institute of Music Physiology and Musicians' Medicine, Hannover University of Music, Drama and Media, Hannover, Germany
- Marco Reisert: Department of Medical Physics, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Jonas Hosp: Department of Neurology and Clinical Neuroscience, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Ralf Schwarzwald: Department of Neuroradiology, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Bettina Blank: Department of Neurology and Clinical Neuroscience, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Julian Horn: Department of Neurology and Clinical Neuroscience, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Volkmar Glauche: Department of Neurology and Clinical Neuroscience, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Christoph Kaller: Department of Medical Physics, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Cornelius Weiller: Department of Neurology and Clinical Neuroscience, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Martin Schumacher: Department of Neuroradiology, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
6.
Syntax through the looking glass: A review on two-word linguistic processing across behavioral, neuroimaging and neurostimulation studies. Neurosci Biobehav Rev 2022; 142:104881. DOI: 10.1016/j.neubiorev.2022.104881.
7.
Villwock A, Grin K. Somatosensory processing in deaf and deafblind individuals: How does the brain adapt as a function of sensory and linguistic experience? A critical review. Front Psychol 2022; 13:938842. PMID: 36324786. PMCID: PMC9618853. DOI: 10.3389/fpsyg.2022.938842.
Abstract
How do deaf and deafblind individuals process touch? This question offers a unique model for understanding the prospects and constraints of neural plasticity. Our brain constantly receives and processes signals from the environment and combines them into the most reliable information content. The nervous system adapts its functional and structural organization according to the input, and perceptual processing develops as a function of individual experience. However, there are still many unresolved questions regarding the deciding factors for these changes in deaf and deafblind individuals, and so far, findings are not consistent. To date, most studies have not taken the sensory and linguistic experiences of the included participants into account. As a result, the impact of sensory deprivation vs. language experience on somatosensory processing remains inconclusive. Even less is known about the impact of deafblindness on brain development. The resulting neural adaptations could be even more substantial, but no clear patterns have yet been identified. How do deafblind individuals process sensory input? Studies on deafblindness have mostly focused on single cases or groups of late-blind individuals. Importantly, the language backgrounds of deafblind communities are highly variable and include the use of tactile languages. So far, this kind of linguistic experience and its consequences have not been considered in studies on basic perceptual functions. Here, we provide a critical review of the literature, aiming to identify determinants of neuroplasticity and gaps in our current knowledge of somatosensory processing in deaf and deafblind individuals.
8.
Murphy E, Woolnough O, Rollo PS, Roccaforte ZJ, Segaert K, Hagoort P, Tandon N. Minimal Phrase Composition Revealed by Intracranial Recordings. J Neurosci 2022; 42:3216-3227. PMID: 35232761. PMCID: PMC8994536. DOI: 10.1523/jneurosci.1575-21.2022.
Abstract
The ability to comprehend phrases is an essential integrative property of the brain. Here, we evaluate the neural processes that enable the transition from single-word processing to a minimal compositional scheme. Previous research has reported conflicting timing effects of composition, and disagreement persists with respect to inferior frontal and posterior temporal contributions. To address these issues, 19 patients (10 male, 9 female) implanted with penetrating depth or surface subdural intracranial electrodes heard auditory recordings of adjective-noun, pseudoword-noun, and adjective-pseudoword phrases and judged whether the phrase matched a picture. Stimulus-dependent alterations in broadband gamma activity, low-frequency power, and phase-locking values across the language-dominant left hemisphere were derived. This revealed a mosaic located on the lower bank of the posterior superior temporal sulcus (pSTS), in which closely neighboring cortical sites displayed exclusive sensitivity to either lexicality or phrase structure, but not both. Distinct timings were found for effects of phrase composition (210-300 ms) and pseudoword processing (∼300-700 ms), and these were localized to neighboring electrodes in pSTS. The pars triangularis and temporal pole encoded anticipation of composition in broadband low frequencies, and both regions exhibited greater functional connectivity with pSTS during phrase composition. Our results suggest that the pSTS is a highly specialized region composed of sparsely interwoven heterogeneous constituents that encodes both lower- and higher-level linguistic features. This hub in pSTS for minimal phrase processing may form the neural basis for the human-specific computational capacity for forming hierarchically organized linguistic structures.
Significance Statement: Linguists have claimed that the integration of multiple words into a phrase demands a computational procedure distinct from single-word processing. Here, we provide intracranial recordings from a large patient cohort, with high spatiotemporal resolution, to track the cortical dynamics of phrase composition. Epileptic patients volunteered to participate in a task in which they listened to phrases (red boat) and word-pseudoword or pseudoword-word pairs (e.g., red fulg). At the onset of the second word in phrases, greater broadband high gamma activity was found in the posterior superior temporal sulcus in electrodes that exclusively indexed phrasal meaning and not lexical meaning. These results provide direct, high-resolution signatures of minimal phrase composition in humans, a potentially species-specific computational capacity.
Affiliation(s)
- Elliot Murphy: Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, Texas 77030; Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, Texas 77030
- Oscar Woolnough: Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, Texas 77030; Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, Texas 77030
- Patrick S Rollo: Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, Texas 77030; Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, Texas 77030
- Zachary J Roccaforte: Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, Texas 77030
- Katrien Segaert: School of Psychology and Centre for Human Brain Health, University of Birmingham, Birmingham B15 2TT, United Kingdom; Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands
- Peter Hagoort: Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, 6525 HR Nijmegen, The Netherlands
- Nitin Tandon: Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, Texas 77030; Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, Texas 77030; Memorial Hermann Hospital, Texas Medical Center, Houston, Texas 77030
9.
Abstract
The first 40 years of research on the neurobiology of sign languages (1960-2000) established that the same key left hemisphere brain regions support both signed and spoken languages, based primarily on evidence from signers with brain injury and, at the end of the 20th century, on evidence from emerging functional neuroimaging technologies (positron emission tomography and fMRI). Building on this earlier work, this review focuses on what we have learned about the neurobiology of sign languages in the last 15-20 years, what controversies remain unresolved, and directions for future research. Production and comprehension processes are addressed separately in order to capture whether and how output and input differences between sign and speech impact the neural substrates supporting language. In addition, the review includes aspects of language that are unique to sign languages, such as pervasive lexical iconicity, fingerspelling, linguistic facial expressions, and depictive classifier constructions. Summary sketches of the neural networks supporting sign language production and comprehension are provided with the hope that these will inspire future research as we begin to develop a more complete neurobiological model of sign language processing.