1. Morgan AM, Devinsky O, Doyle WK, Dugan P, Friedman D, Flinker A. Decoding words during sentence production: Syntactic role encoding and structure-dependent dynamics revealed by ECoG. bioRxiv 2025:2024.10.30.621177. [PMID: 39554006 PMCID: PMC11565881 DOI: 10.1101/2024.10.30.621177]
Abstract
Sentence production is the uniquely human ability to transform complex thoughts into strings of words. Despite the importance of this process, language production research has primarily focused on single words. However, it remains a largely untested assumption that the principles of word production generalize to more naturalistic utterances like sentences. Here, we investigate this using high-resolution neurosurgical recordings (ECoG) and an overt production experiment where patients produced six words in isolation (picture naming) and in sentences (scene description). We trained machine learning classifiers to identify the unique brain activity patterns for each word during picture naming, and used these patterns to decode which words patients were processing while they produced sentences. Our findings confirm that words share cortical representations across tasks, but reveal a division of labor within the language network. In sensorimotor cortex, words were consistently activated in the order in which they were said in the sentence. However, in inferior and middle frontal gyri (IFG and MFG), the order in which words were processed depended on the syntactic structure of the sentence. Deeper analysis of this pattern revealed a spatial code for representing a word's position in the sentence, with subjects selectively encoded in IFG and objects in MFG. Finally, we argue that the processes we observe in prefrontal cortex may impose a subtle pressure on language evolution, explaining why nearly all the world's languages position subjects before objects.
Affiliation(s)
- Orrin Devinsky
- Neurosurgery Department, NYU Grossman School of Medicine
- Adeen Flinker
- Neurology Department, NYU Grossman School of Medicine
- Biomedical Engineering Department, NYU Tandon School of Engineering
2. Trettenbrein PC, Zaccarella E, Friederici AD. Functional and structural brain asymmetries in sign language processing. Handb Clin Neurol 2025; 208:327-350. [PMID: 40074405 DOI: 10.1016/b978-0-443-15646-5.00021-x]
Abstract
The capacity for language constitutes a cornerstone of human cognition and distinguishes our species from other animals. Research in the cognitive sciences has demonstrated that this capacity is not bound to speech but can also be externalized in the form of sign language. Sign languages are the naturally occurring languages of the deaf and rely on movements and configurations of hands, arms, face, and torso in space. This chapter reviews the functional and structural organization of the neural substrates of sign language, as identified by neuroimaging research over the past decades. Most aspects of sign language processing in adult deaf signers markedly mirror the well-known, functional left-lateralization of spoken and written language. However, both hemispheres exhibit a certain equipotentiality for processing linguistic information and the right hemisphere seems to specifically support processing of some constructions unique to the signed modality. Crucially, the so-called "core language network" in the left hemisphere constitutes a functional and structural asymmetry in typically developed deaf and hearing populations alike: This network is (i) pivotal for processing complex syntax independent of the modality of language use, (ii) matures in accordance with a genetically determined biologic matrix, and (iii) may have constituted an evolutionary prerequisite for the emergence of the human capacity for language.
Affiliation(s)
- Patrick C Trettenbrein
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; International Max Planck Research School on Neuroscience of Communication: Structure, Function, and Plasticity (IMPRS NeuroCom), Leipzig, Germany; Experimental Sign Language Laboratory (SignLab), Department of German Philology, University of Göttingen, Göttingen, Germany.
- Emiliano Zaccarella
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Angela D Friederici
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
3. Haluts N, Levy D, Friedmann N. Bimodal aphasia and dysgraphia: Phonological output buffer aphasia and orthographic output buffer dysgraphia in spoken and sign language. Cortex 2025; 182:147-180. [PMID: 39672692 DOI: 10.1016/j.cortex.2024.10.013]
Abstract
We report a case of crossmodal bilingual aphasia (aphasia in two modalities, spoken and sign language) and dysgraphia in both writing and fingerspelling. The patient, Sunny, was a 42-year-old woman following a left temporo-parietal stroke, a speaker of Hebrew, Romanian, and English, and an adult learner and daily user of Israeli Sign Language (ISL). We assessed Sunny's spoken and sign languages using a comprehensive test battery of naming, reading, and repetition tasks, and also analysed her spontaneous speech and sign. Her writing and fingerspelling were assessed using tasks of dictation, naming, and delayed copying. In spoken language production, Sunny showed a classical phonological output buffer (POB) impairment in naming, reading, repetition, and spontaneous production, with phonological errors (transpositions, substitutions, insertions, and omissions) in words and pseudo-words, and whole-unit errors in morphological affixes, function-words, and number-words, with a length effect. Importantly, her error pattern in ISL was remarkably similar in the parallel tasks, with phonological errors in signs and pseudo-signs, affecting all the phonological parameters of the sign (movement, handshape, location, and orientation), and whole-unit errors in morphemes, function-signs, and number-signs. Sunny's impairment was selective to the POB, without phonological input, semantic-conceptual, or syntactic deficits. This shows for the first time how POB impairment, a kind of conduction aphasia, manifests itself in a sign language, and indicates that the POB for sign language has the same cognitive architecture as the one for spoken language. It may also indicate similar neural underpinnings for spoken and sign languages. In writing, Sunny represents the first case of a selective type of dysgraphia in fingerspelling, orthographic (graphemic) output buffer dysgraphia. In both writing and fingerspelling, she made letter errors (letter transpositions, substitutions, insertions, and omissions), as well as morphological errors and errors in function words, and showed a length effect. Sunny's impairment was selective to the orthographic output buffer, whereas her reading, including orthographic input processing, was intact. This suggests that the orthographic output buffer is shared for writing and fingerspelling, at least in a late learner of sign language. The results shed further light on the architecture of phonological and orthographic production.
Affiliation(s)
- Neta Haluts
- Language and Brain Lab, Sagol School of Neuroscience, and School of Education, Tel Aviv University, Tel Aviv, Israel
- Doron Levy
- Language and Brain Lab, Sagol School of Neuroscience, and School of Education, Tel Aviv University, Tel Aviv, Israel
- Naama Friedmann
- Language and Brain Lab, Sagol School of Neuroscience, and School of Education, Tel Aviv University, Tel Aviv, Israel.
4. Goldberg EB, Hillis AE. Sign language aphasia. Handb Clin Neurol 2022; 185:297-315. [PMID: 35078607 DOI: 10.1016/b978-0-12-823384-9.00019-0]
Abstract
Signed languages are naturally occurring, fully formed linguistic systems that rely on the movement of the hands, arms, torso, and face within a sign space for production, and are perceived predominantly using visual perception. Despite stark differences in modality and linguistic structure, functional neural organization is strikingly similar to spoken language. Generally speaking, left frontal areas support sign production, and regions in the auditory cortex underlie sign comprehension, despite signers not relying on audition to process language. Given this, should a deaf or hearing signer suffer damage to the left cerebral hemisphere, language is vulnerable to impairment. Multiple cases of sign language aphasia have been documented following left hemisphere injury, and the general pattern of linguistic deficits mirrors those observed in spoken language. The right hemisphere likely plays a role in non-linguistic but critical visuospatial functions of sign language; therefore, individuals who are spared from damage to the left hemisphere but suffer injury to the right are at risk for a different set of communication deficits. In this chapter, we review the neurobiology of sign language and patterns of language deficits that follow brain injury in the deaf signing population.
Affiliation(s)
- Emily B Goldberg
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, United States.
- Argye Elizabeth Hillis
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, United States; Department of Physical Medicine and Rehabilitation, Johns Hopkins University School of Medicine, Baltimore, MD, United States; Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, United States
5
Abstract
The first 40 years of research on the neurobiology of sign languages (1960-2000) established that the same key left hemisphere brain regions support both signed and spoken languages, based primarily on evidence from signers with brain injury and, at the end of the 20th century, on evidence from emerging functional neuroimaging technologies (positron emission tomography and fMRI). Building on this earlier work, this review focuses on what we have learned about the neurobiology of sign languages in the last 15-20 years, what controversies remain unresolved, and directions for future research. Production and comprehension processes are addressed separately in order to capture whether and how output and input differences between sign and speech impact the neural substrates supporting language. In addition, the review includes aspects of language that are unique to sign languages, such as pervasive lexical iconicity, fingerspelling, linguistic facial expressions, and depictive classifier constructions. Summary sketches of the neural networks supporting sign language production and comprehension are provided with the hope that these will inspire future research as we begin to develop a more complete neurobiological model of sign language processing.
6. Barker MS, Knight JL, Dean RJ, Mandelstam S, Richards LJ, Robinson GA. Verbal Adynamia and Conceptualization in Partial Rhombencephalosynapsis and Corpus Callosum Dysgenesis. Cogn Behav Neurol 2021; 34:38-52. [PMID: 33652468 DOI: 10.1097/wnn.0000000000000261]
Abstract
Verbal adynamia is characterized by markedly reduced spontaneous speech that is not attributable to a core language deficit such as impaired naming, reading, repetition, or comprehension. In some cases, verbal adynamia is severe enough to be considered dynamic aphasia. We report the case of a 40-year-old, left-handed, male native English speaker who presented with partial rhombencephalosynapsis, corpus callosum dysgenesis, and a language profile that is consistent with verbal adynamia, or subclinical dynamic aphasia, possibly underpinned by difficulties selecting and generating ideas for expression. This case is only the second investigation of dynamic aphasia in an individual with a congenital brain malformation. It is also the first detailed neuropsychological report of an adult with partial rhombencephalosynapsis and corpus callosum dysgenesis, and the only known case of superior intellectual abilities in this context.
Affiliation(s)
- Megan S Barker
- Neuropsychology Research Unit, School of Psychology, The University of Queensland, St Lucia, Brisbane, Australia
- Taub Institute, Columbia University Medical Center, New York, New York
- Jacquelyn L Knight
- Neuropsychology Research Unit, School of Psychology, The University of Queensland, St Lucia, Brisbane, Australia
- Ryan J Dean
- Queensland Brain Institute, The University of Queensland, St Lucia, Brisbane, Australia
- Simone Mandelstam
- Department of Radiology, University of Melbourne, Royal Children's Hospital, Parkville, Victoria, Australia
- Department of Paediatrics, University of Melbourne, Parkville, Victoria, Australia
- Florey Institute of Neuroscience and Mental Health, Melbourne, Victoria, Australia
- Linda J Richards
- Queensland Brain Institute, The University of Queensland, St Lucia, Brisbane, Australia
- School of Biomedical Sciences, The University of Queensland, St. Lucia, Brisbane, Australia
- Gail A Robinson
- Neuropsychology Research Unit, School of Psychology, The University of Queensland, St Lucia, Brisbane, Australia
- Queensland Brain Institute, The University of Queensland, St Lucia, Brisbane, Australia
7. Banaszkiewicz A, Bola Ł, Matuszewski J, Szczepanik M, Kossowski B, Mostowski P, Rutkowski P, Śliwińska M, Jednoróg K, Emmorey K, Marchewka A. The role of the superior parietal lobule in lexical processing of sign language: Insights from fMRI and TMS. Cortex 2020; 135:240-254. [PMID: 33401098 DOI: 10.1016/j.cortex.2020.10.025]
Abstract
There is strong evidence that neuronal bases for language processing are remarkably similar for sign and spoken languages. However, as meanings and linguistic structures of sign languages are coded in movement and space and decoded through vision, differences are also present, predominantly in occipitotemporal and parietal areas, such as superior parietal lobule (SPL). Whether the involvement of SPL reflects domain-general visuospatial attention or processes specific to sign language comprehension remains an open question. Here we conducted two experiments to investigate the role of SPL and the laterality of its engagement in sign language lexical processing. First, using unique longitudinal and between-group designs we mapped brain responses to sign language in hearing late learners and deaf signers. Second, using transcranial magnetic stimulation (TMS) in both groups we tested the behavioural relevance of SPL's engagement and its lateralisation during sign language comprehension. SPL activation in hearing participants was observed in the right hemisphere before and bilaterally after the sign language course. Additionally, after the course hearing learners exhibited greater activation in the occipital cortex and left SPL than deaf signers. TMS applied to the right SPL decreased accuracy in both hearing learners and deaf signers. Stimulation of the left SPL decreased accuracy only in hearing learners. Our results suggest that right SPL might be involved in visuospatial attention while left SPL might support phonological decoding of signs in non-proficient signers.
Affiliation(s)
- A Banaszkiewicz
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- Ł Bola
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- J Matuszewski
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- M Szczepanik
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- B Kossowski
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- P Mostowski
- Section for Sign Linguistics, Faculty of Polish Studies, University of Warsaw, Warsaw, Poland
- P Rutkowski
- Section for Sign Linguistics, Faculty of Polish Studies, University of Warsaw, Warsaw, Poland
- M Śliwińska
- Department of Psychology, University of York, Heslington, UK
- K Jednoróg
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- K Emmorey
- Laboratory for Language and Cognitive Neuroscience, San Diego State University, San Diego, USA
- A Marchewka
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland.
8. Mosley PE, Robinson K, Coyne T, Silburn P, Barker MS, Breakspear M, Robinson GA, Perry A. Subthalamic deep brain stimulation identifies frontal networks supporting initiation, inhibition and strategy use in Parkinson's disease. Neuroimage 2020; 223:117352. [DOI: 10.1016/j.neuroimage.2020.117352]
9. Shum J, Fanda L, Dugan P, Doyle WK, Devinsky O, Flinker A. Neural correlates of sign language production revealed by electrocorticography. Neurology 2020; 95:e2880-e2889. [PMID: 32788249 PMCID: PMC7734739 DOI: 10.1212/wnl.0000000000010639]
Abstract
OBJECTIVE The combined spatiotemporal dynamics underlying sign language production remain largely unknown. To investigate these dynamics compared to speech production, we used intracranial electrocorticography during a battery of language tasks. METHODS We report a unique case of direct cortical surface recordings obtained from a neurosurgical patient with intact hearing who is bilingual in English and American Sign Language. We designed a battery of cognitive tasks to capture multiple modalities of language processing and production. RESULTS We identified 2 spatially distinct cortical networks: ventral for speech and dorsal for sign production. Sign production recruited perirolandic, parietal, and posterior temporal regions, while speech production recruited frontal, perisylvian, and perirolandic regions. Electrical cortical stimulation confirmed this spatial segregation, identifying mouth areas for speech production and limb areas for sign production. The temporal dynamics revealed superior parietal cortex activity immediately before sign production, suggesting its role in planning and producing sign language. CONCLUSIONS Our findings reveal a distinct network for sign language and detail the temporal propagation supporting sign production.
Affiliation(s)
- Jennifer Shum, Lora Fanda, Patricia Dugan, Werner K Doyle, Orrin Devinsky, Adeen Flinker
- From the Department of Neurology (J.S., L.F., P.D., W.K.D., O.D., A.F.), Comprehensive Epilepsy Center, and Department of Neurosurgery (W.K.D.), New York University School of Medicine, NY.
10. Ramage AE, Aytur S, Ballard KJ. Resting-State Functional Magnetic Resonance Imaging Connectivity Between Semantic and Phonological Regions of Interest May Inform Language Targets in Aphasia. J Speech Lang Hear Res 2020; 63:3051-3067. [PMID: 32755498 PMCID: PMC7890222 DOI: 10.1044/2020_jslhr-19-00117]
Abstract
Purpose Brain imaging has provided puzzle pieces in the understanding of language. In neurologically healthy populations, the structure of certain brain regions is associated with particular language functions (e.g., semantics, phonology). In studies on focal brain damage, certain brain regions or connections are considered sufficient or necessary for a given language function. However, few of these account for the effects of lesioned tissue on the "functional" dynamics of the brain for language processing. Here, functional connectivity (FC) among semantic-phonological regions of interest (ROIs) is assessed to fill a gap in our understanding of the neural substrates of impaired language and whether connectivity strength can predict language performance on a clinical tool in individuals with aphasia. Method Clinical assessment of language, using the Western Aphasia Battery-Revised, and resting-state functional magnetic resonance imaging data were obtained for 30 individuals with chronic aphasia secondary to left-hemisphere stroke and 18 age-matched healthy controls. FC between bilateral ROIs was contrasted by group and used to predict Western Aphasia Battery-Revised scores. Results Network coherence was observed in healthy controls and participants with stroke. The left-right premotor cortex connection was stronger in healthy controls, as reported by New et al. (2015) in the same data set. FC of (a) connections between temporal regions, in the left hemisphere and bilaterally, predicted lexical-semantic processing for auditory comprehension and (b) ipsilateral connections between temporal and frontal regions in both hemispheres predicted access to semantic-phonological representations and processing for verbal production. Conclusions Network connectivity of brain regions associated with semantic-phonological processing is predictive of language performance in poststroke aphasia. The most predictive connections involved right-hemisphere ROIs, particularly those for which structural adaptations are known to be associated with recovered word retrieval performance. Predictions may be made, based on these findings, about which connections have potential as targets for neuroplastic functional changes with intervention in aphasia. Supplemental Material https://doi.org/10.23641/asha.12735785.
Affiliation(s)
- Amy E. Ramage
- Department of Communication Sciences and Disorders, University of New Hampshire, Durham
- Semra Aytur
- Department of Health Policy and Management, University of New Hampshire, Durham
- Kirrie J. Ballard
- Faculty of Medicine and Health and the Brain and Mind Centre, The University of Sydney, New South Wales, Australia
11. Matar SJ, Sorinola IO, Newton C, Pavlou M. Transcranial Direct-Current Stimulation May Improve Discourse Production in Healthy Older Adults. Front Neurol 2020; 11:935. [PMID: 32982943 PMCID: PMC7479316 DOI: 10.3389/fneur.2020.00935]
Abstract
Background: The use of transcranial direct-current stimulation (tDCS) for therapeutic and neurorehabilitation purposes has become increasingly popular in recent years. Previous research has found that anodal tDCS may enhance naming ability and verbal fluency in healthy participants. However, the effect of tDCS on more functional, higher-level language skills such as discourse production has yet to be understood. Aims: The present study aimed to investigate in healthy, older adults (a) the effect of anodal tDCS on discourse production vs. sham stimulation and (b) optimal electrode placement for tDCS to target language improvement at the discourse level. Methods: Fourteen healthy, older right-handed participants took part in this sham-controlled, repeated-measures pilot study. Each participant experienced three different experimental conditions: anodal tDCS on the left inferior frontal gyrus (IFG), anodal tDCS on the right IFG, and sham stimulation, while performing a storytelling task. Significant changes in language performance before and after each condition were examined in three discourse production tasks: recount, procedural and narrative. Results: Left and right IFG conditions showed a greater number of significant within-group improvements (p < 0.05) in discourse production compared to sham, with 6/12 for left IFG, 4/12 for right IFG, and 2/12 for sham. There were no significant differences noted between tDCS conditions. No relationship was noted between language performance and physical activity, age, or gender. Conclusions: This study suggests that anodal tDCS may significantly improve discourse production in healthy, older adults. In line with previous tDCS language studies, the left IFG is highlighted as an optimal stimulation site for the modulation of language in healthy speakers. The findings support further exploration of tDCS as a rehabilitative tool for higher-level language skills in persons with aphasia.
Affiliation(s)
- Shereen J Matar
- Centre for Human & Applied Physiological Sciences, Faculty of Life Sciences & Medicine, King's College London, London, United Kingdom
- Isaac O Sorinola
- Department of Public Health Sciences, Faculty of Life Sciences & Medicine, King's College London, London, United Kingdom
- Caroline Newton
- Division of Psychology & Language Sciences, Faculty of Brain Sciences, University College London, London, United Kingdom
- Marousa Pavlou
- Centre for Human & Applied Physiological Sciences, Faculty of Life Sciences & Medicine, King's College London, London, United Kingdom
12. Cwik JC, Vahle N, Woud ML, Potthoff D, Kessler H, Sartory G, Seitz RJ. Reduced gray matter volume in the left prefrontal, occipital, and temporal regions as predictors for posttraumatic stress disorder: a voxel-based morphometric study. Eur Arch Psychiatry Clin Neurosci 2020; 270:577-588. [PMID: 30937515 DOI: 10.1007/s00406-019-01011-2]
Abstract
The concept of acute stress disorder (ASD) was introduced as a diagnostic entity to improve the identification of traumatized people who are likely to develop posttraumatic stress disorder (PTSD). Neuroanatomical models suggest that changes in the prefrontal cortex, amygdala, and hippocampus play a role in the development of PTSD. Using voxel-based morphometry, this study aimed to investigate the predictive power of gray matter volume (GMV) alterations for developing PTSD. The GMVs of ASD patients (n = 21) were compared to those of PTSD patients (n = 17) and healthy controls (n = 18) in whole-brain and region-of-interest analyses. The GMV alterations seen in ASD patients shortly after the traumatic event (T1) were also correlated with PTSD symptom severity and symptom clusters 4 weeks later (T2). Compared with healthy controls, the ASD patients had significantly reduced GMV in the left visual cortex shortly after the traumatic event (T1) and in the left occipital and prefrontal regions 4 weeks later (T2); no significant differences in GMV were seen between the ASD and PTSD patients. Furthermore, a significant negative association was found between the GMV reduction in the left lateral temporal regions seen after the traumatic event (T1) and PTSD hyperarousal symptoms 4 weeks later (T2). Neither amygdala nor hippocampus alterations were predictive for the development of PTSD. These data suggest that gray matter deficiencies in the left hemispheric occipital and temporal regions in ASD patients may predict a liability for developing PTSD.
Affiliation(s)
- Jan Christopher Cwik
- Department of Clinical Psychology and Psychotherapy, Faculty of Human Sciences, Universität zu Köln, Pohligstr. 1, 50969, Cologne, Germany; Faculty of Psychology, Mental Health Research and Treatment Center, Ruhr-Universität Bochum, Bochum, Germany
- Nils Vahle
- Department of Psychology and Psychotherapy, University Witten/Herdecke, Witten, Germany
- Marcella Lydia Woud
- Faculty of Psychology, Mental Health Research and Treatment Center, Ruhr-Universität Bochum, Bochum, Germany
- Denise Potthoff
- Department of Neurology, Center for Neurology and Neuropsychiatry, Heinrich-Heine-Universität Düsseldorf, Düsseldorf, Germany
- Henrik Kessler
- Department of Psychosomatic Medicine and Psychotherapy, LWL University Hospital, Ruhr-Universität Bochum, Bochum, Germany
- Gudrun Sartory
- Department of Clinical Psychology and Psychotherapy, School of Human and Social Sciences, Bergische Universität Wuppertal, Wuppertal, Germany
- Rüdiger J Seitz
- Department of Neurology, Center for Neurology and Neuropsychiatry, Heinrich-Heine-Universität Düsseldorf, Düsseldorf, Germany
13. Riès SK, Nadalet L, Mickelsen S, Mott M, Midgley KJ, Holcomb PJ, Emmorey K. Pre-output Language Monitoring in Sign Production. J Cogn Neurosci 2020; 32:1079-1091. [PMID: 32027582 PMCID: PMC7234262 DOI: 10.1162/jocn_a_01542]
Abstract
A domain-general monitoring mechanism is proposed to be involved in overt speech monitoring. This mechanism is reflected in a medial frontal component, the error negativity (Ne), present in both errors and correct trials (Ne-like wave) but larger in errors than correct trials. In overt speech production, this negativity starts to rise before speech onset and is therefore associated with inner speech monitoring. Here, we investigate whether the same monitoring mechanism is involved in sign language production. Twenty deaf signers (American Sign Language [ASL] dominant) and 16 hearing signers (English dominant) participated in a picture-word interference paradigm in ASL. As in previous studies, ASL naming latencies were measured using the keyboard release time. EEG results revealed a medial frontal negativity peaking within 15 msec after keyboard release in the deaf signers. This negativity was larger in errors than correct trials, as previously observed in spoken language production. No clear negativity was present in the hearing signers. In addition, the slope of the Ne was correlated with ASL proficiency (measured by the ASL Sentence Repetition Task) across signers. Our results indicate that a similar medial frontal mechanism is engaged in preoutput language monitoring in sign and spoken language production. These results suggest that the monitoring mechanism reflected by the Ne/Ne-like wave is independent of output modality (i.e., spoken or signed) and likely monitors prearticulatory representations of language. Differences between groups may be linked to several factors including differences in language proficiency or more variable lexical access to motor programming latencies for hearing than deaf signers.
Affiliation(s)
- Karen Emmorey
- San Diego State University
- University of California, San Diego
14
Abstract
BACKGROUND Language and communication are fundamental to the human experience, and, traditionally, spoken language is studied as an isolated skill. However, before propositional language (i.e., spontaneous, voluntary, novel speech) can be produced, propositional content or 'ideas' must be formulated. OBJECTIVE This review highlights the role of broader cognitive processes, particularly 'executive attention', in the formulation of propositional content (i.e., 'ideas') for propositional language production. CONCLUSIONS Several key lines of evidence converge to suggest that the formulation of ideas for propositional language production draws on executive attentional processes. Larger-scale clinical research has demonstrated a link between attentional processes and language, while detailed case studies of neurological patients have elucidated specific idea formulation mechanisms relating to the generation, selection and sequencing of ideas for expression. Furthermore, executive attentional processes have been implicated in the generation of ideas for propositional language production. Finally, neuroimaging studies suggest that a widely distributed network of brain regions, including parts of the prefrontal and parietal cortices, supports propositional language production. IMPLICATIONS Theoretically driven experimental research studies investigating mechanisms involved in the formulation of ideas are lacking. We suggest that novel experimental approaches are needed to define the contribution of executive attentional processes to idea formulation, from which comprehensive models of spoken language production can be developed. Clinically, propositional language impairments should be considered in the context of broader executive attentional deficits.
15. Hoffman P. Reductions in prefrontal activation predict off-topic utterances during speech production. Nat Commun 2019; 10:515. [PMID: 30705284 PMCID: PMC6355898 DOI: 10.1038/s41467-019-08519-0]
Abstract
The ability to speak coherently is essential for effective communication but declines with age: older people more frequently produce tangential, off-topic speech. Little is known, however, about the neural systems that support coherence in speech production. Here, fMRI was used to investigate extended speech production in healthy older adults. Computational linguistic analyses were used to quantify the coherence of utterances produced in the scanner, allowing identification of the neural correlates of coherence for the first time. Highly coherent speech production was associated with increased activity in bilateral inferior prefrontal cortex (BA45), an area implicated in selection of task-relevant knowledge from semantic memory, and in bilateral rostrolateral prefrontal cortex (BA10), implicated more generally in planning of complex goal-directed behaviours. These findings demonstrate that neural activity during spontaneous speech production can be predicted from formal analysis of speech content, and that multiple prefrontal systems contribute to coherence in speech.
Affiliation(s)
- Paul Hoffman
- School of Philosophy, Psychology and Language Sciences, University of Edinburgh, Edinburgh, EH8 9JZ, UK.
16. Walenski M, Europa E, Caplan D, Thompson CK. Neural networks for sentence comprehension and production: An ALE-based meta-analysis of neuroimaging studies. Hum Brain Mapp 2019; 40:2275-2304. [PMID: 30689268 DOI: 10.1002/hbm.24523]
Abstract
Comprehending and producing sentences is a complex endeavor requiring the coordinated activity of multiple brain regions. We examined three issues related to the brain networks underlying sentence comprehension and production in healthy individuals: First, which regions are recruited for sentence comprehension and sentence production? Second, are there differences for auditory sentence comprehension vs. visual sentence comprehension? Third, which regions are specifically recruited for the comprehension of syntactically complex sentences? Results from activation likelihood estimation (ALE) analyses (from 45 studies) implicated a sentence comprehension network occupying bilateral frontal and temporal lobe regions. Regions implicated in production (from 15 studies) overlapped with the set of regions associated with sentence comprehension in the left hemisphere, but did not include inferior frontal cortex, and did not extend to the right hemisphere. Modality differences between auditory and visual sentence comprehension were found principally in the temporal lobes. Results from the analysis of complex syntax (from 37 studies) showed engagement of left inferior frontal and posterior temporal regions, as well as the right insula. The involvement of the right hemisphere in the comprehension of these structures has potentially important implications for language treatment and recovery in individuals with agrammatic aphasia following left hemisphere brain damage.
Affiliation(s)
- Matthew Walenski
- Center for the Neurobiology of Language Recovery, Northwestern University, Evanston, Illinois; Department of Communication Sciences and Disorders, School of Communication, Northwestern University, Evanston, Illinois
- Eduardo Europa
- Department of Neurology, University of California, San Francisco
- David Caplan
- Department of Neurology, Harvard Medical School, Massachusetts General Hospital, Boston, Massachusetts
- Cynthia K Thompson
- Center for the Neurobiology of Language Recovery, Northwestern University, Evanston, Illinois; Department of Communication Sciences and Disorders, School of Communication, Northwestern University, Evanston, Illinois; Department of Neurology, Feinberg School of Medicine, Northwestern University, Evanston, Illinois
17. Blanco-Elorrieta E, Emmorey K, Pylkkänen L. Language switching decomposed through MEG and evidence from bimodal bilinguals. Proc Natl Acad Sci U S A 2018; 115:9708-9713. [PMID: 30206151 PMCID: PMC6166835 DOI: 10.1073/pnas.1809779115]
Abstract
A defining feature of human cognition is the ability to quickly and accurately alternate between complex behaviors. One striking example of such an ability is bilinguals' capacity to rapidly switch between languages. This switching process minimally comprises disengagement from the previous language and engagement in a new language. Previous studies have associated language switching with increased prefrontal activity. However, it is unknown how the subcomputations of language switching individually contribute to these activities, because few natural situations enable full separation of disengagement and engagement processes during switching. We recorded magnetoencephalography (MEG) from American Sign Language-English bilinguals who often sign and speak simultaneously, which allows us to dissociate engagement and disengagement. MEG data showed that turning a language "off" (switching from simultaneous to single language production) led to increased activity in the anterior cingulate cortex (ACC) and dorsolateral prefrontal cortex (dlPFC), while turning a language "on" (switching from one language to two simultaneously) did not. The distinct representational nature of these on and off processes was also supported by multivariate decoding analyses. Additionally, Granger causality analyses revealed that (i) compared with "turning on" a language, "turning off" required stronger connectivity between left and right dlPFC, and (ii) dlPFC activity predicted ACC activity, consistent with models in which the dlPFC is a top-down modulator of the ACC. These results suggest that the burden of language switching lies in disengagement from the previous language as opposed to engaging a new language and that, in the absence of motor constraints, producing two languages simultaneously is not necessarily more cognitively costly than producing one.
Affiliation(s)
- Esti Blanco-Elorrieta
- Department of Psychology, New York University, New York, NY 10003
- NYU Abu Dhabi Institute, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Karen Emmorey
- School of Speech, Language, and Hearing Sciences, San Diego State University, San Diego, CA 92182
- Liina Pylkkänen
- Department of Psychology, New York University, New York, NY 10003
- NYU Abu Dhabi Institute, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Department of Linguistics, New York University, New York, NY 10003
18. Wen J, Yu T, Wang X, Liu C, Zhou T, Li Y, Li X. Continuous behavioral tracing-based online functional brain mapping with intracranial electroencephalography. J Neural Eng 2018; 15:054002. [DOI: 10.1088/1741-2552/aad405]
19. Magdalinou NK, Golden HL, Nicholas JM, Witoonpanich P, Mummery CJ, Morris HR, Djamshidian A, Warner TT, Warrington EK, Lees AJ, Warren JD. Verbal adynamia in parkinsonian syndromes: behavioral correlates and neuroanatomical substrate. Neurocase 2018; 24:204-212. [PMID: 30293517 PMCID: PMC6234546 DOI: 10.1080/13554794.2018.1527368]
Abstract
Verbal adynamia (impaired language generation, as during conversation) has not been assessed systematically in parkinsonian disorders. We addressed this in patients with Parkinson's dementia, progressive supranuclear palsy and corticobasal degeneration. All disease groups showed impaired verbal fluency and sentence generation versus healthy age-matched controls, after adjusting for general linguistic and executive factors. Dopaminergic stimulation in the Parkinson's group selectively improved verbal generation versus other cognitive functions. Voxel-based morphometry identified left inferior frontal and posterior superior temporal cortical correlates of verbal generation performance. Verbal adynamia warrants further evaluation as an index of language network dysfunction and dopaminergic state in parkinsonian disorders.
Affiliation(s)
- Nadia K Magdalinou
- Reta Lila Weston Institute of Neurological Studies, UCL Institute of Neurology, London, UK
- Hannah L Golden
- Dementia Research Centre, UCL Institute of Neurology, London, UK
- Jennifer M Nicholas
- Dementia Research Centre, UCL Institute of Neurology, London, UK; Department of Medical Statistics, London School of Hygiene and Tropical Medicine, London, UK
- Pirada Witoonpanich
- Dementia Research Centre, UCL Institute of Neurology, London, UK; Division of Neurology, Department of Medicine, Faculty of Medicine, Ramathibodi Hospital, Mahidol University, Bangkok, Thailand
- Huw R Morris
- Department of Clinical Neuroscience, UCL Institute of Neurology, London, UK
- Atbin Djamshidian
- Reta Lila Weston Institute of Neurological Studies, UCL Institute of Neurology, London, UK
- Tom T Warner
- Reta Lila Weston Institute of Neurological Studies, UCL Institute of Neurology, London, UK
- Andrew J Lees
- Reta Lila Weston Institute of Neurological Studies, UCL Institute of Neurology, London, UK
- Jason D Warren
- Dementia Research Centre, UCL Institute of Neurology, London, UK
20. Shared neural correlates for building phrases in signed and spoken language. Sci Rep 2018; 8:5492. [PMID: 29615785 PMCID: PMC5882945 DOI: 10.1038/s41598-018-23915-0]
Abstract
Research on the mental representation of human language has convincingly shown that sign languages are structured similarly to spoken languages. However, whether the same neurobiology underlies the online construction of complex linguistic structures in sign and speech remains unknown. To investigate this question with maximally controlled stimuli, we studied the production of minimal two-word phrases in sign and speech. Signers and speakers viewed the same pictures during magnetoencephalography recording and named them with semantically identical expressions. For both signers and speakers, phrase building engaged left anterior temporal and ventromedial cortices with similar timing, despite different linguistic articulators. Thus the neurobiological similarity of sign and speech goes beyond gross measures such as lateralization: the same fronto-temporal network achieves the planning of structured linguistic expressions.
21. Yang M, Yang P, Fan YS, Li J, Yao D, Liao W, Chen H. Altered Structure and Intrinsic Functional Connectivity in Post-stroke Aphasia. Brain Topogr 2017; 31:300-310. [PMID: 28921389 DOI: 10.1007/s10548-017-0594-7]
Abstract
Previous studies have demonstrated that alterations of gray matter exist in post-stroke aphasia (PSA) patients. However, so far, few studies have combined structural alterations of gray matter volume (GMV) and intrinsic functional connectivity (iFC) imbalances of resting-state functional MRI to investigate the mechanism underlying PSA. The present study investigated specific regions with GMV abnormality in patients with PSA (n = 17) and age- and sex-matched healthy controls (HCs, n = 20) using voxel-based morphometry. In addition, we examined whether there is a link between abnormal gray matter and altered iFC. Furthermore, we explored the correlations between abnormal iFC and clinical scores in aphasic patients. We found significantly increased GMV in the right superior temporal gyrus, right inferior parietal lobule (IPL)/supramarginal gyrus (SMG), and left middle occipital gyrus. Decreased GMV was found in the right caudate gyrus and bilateral thalami in PSA patients. Patients showed increased remote interregional FC between the right IPL/SMG and the right precuneus, right angular gyrus, and right superior occipital gyrus, and reduced FC between the right caudate gyrus and the supplementary motor area and dorsolateral superior frontal gyrus. Moreover, iFC strength between the left middle occipital gyrus and the left orbital middle frontal gyrus was positively correlated with the performance quotient. We suggest that GMV abnormality contributes to interregional FC in PSA. These results may provide useful information to understand the pathogenesis of post-stroke aphasia.
Affiliation(s)
- Mi Yang
- Key Laboratory for NeuroInformation of Ministry of Education, Center for Information in BioMedicine, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, People's Republic of China
- Department of Stomatology, the Fourth People's Hospital of Chengdu, Chengdu, 610036, People's Republic of China
- Pu Yang
- Key Laboratory for NeuroInformation of Ministry of Education, Center for Information in BioMedicine, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, People's Republic of China
- Yun-Shuang Fan
- Key Laboratory for NeuroInformation of Ministry of Education, Center for Information in BioMedicine, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, People's Republic of China
- Jiao Li
- Key Laboratory for NeuroInformation of Ministry of Education, Center for Information in BioMedicine, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, People's Republic of China
- Dezhong Yao
- Key Laboratory for NeuroInformation of Ministry of Education, Center for Information in BioMedicine, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, People's Republic of China
- Wei Liao
- Key Laboratory for NeuroInformation of Ministry of Education, Center for Information in BioMedicine, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, People's Republic of China.
- Huafu Chen
- Key Laboratory for NeuroInformation of Ministry of Education, Center for Information in BioMedicine, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, People's Republic of China.
22. Li L, Abutalebi J, Emmorey K, Gong G, Yan X, Feng X, Zou L, Ding G. How bilingualism protects the brain from aging: Insights from bimodal bilinguals. Hum Brain Mapp 2017; 38:4109-4124. [PMID: 28513102 DOI: 10.1002/hbm.23652]
Abstract
Bilingual experience can delay cognitive decline during aging. A general hypothesis is that the executive control system of bilinguals faces an increased load due to controlling two languages, and this increased load results in a more "tuned brain" that eventually creates a neural reserve. Here we explored whether such a neuroprotective effect is independent of language modality, i.e., not limited to bilinguals who speak two languages but also occurs for bilinguals who use a spoken and a signed language. We addressed this issue by comparing bimodal bilinguals to monolinguals in order to detect age-induced structural brain changes and to determine whether we can detect the same beneficial effects on brain structure, in terms of preservation of gray matter volume (GMV), for bimodal bilinguals as has been reported for unimodal bilinguals. Our GMV analyses revealed a significant interaction effect of age × group in the bilateral anterior temporal lobes, left hippocampus/amygdala, and left insula where bimodal bilinguals showed slight GMV increases while monolinguals showed significant age-induced GMV decreases. We further found through cortical surface-based measurements that this effect was present for surface area and not for cortical thickness. Moreover, to further explore the hypothesis that overall bilingualism provides neuroprotection, we carried out a direct comparison of GMV, extracted from the brain regions reported above, between bimodal bilinguals, unimodal bilinguals, and monolinguals. Bilinguals, regardless of language modality, exhibited higher GMV compared to monolinguals. This finding highlights the general beneficial effects provided by experience handling two language systems, whether signed or spoken.
Affiliation(s)
- Le Li
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, People's Republic of China
- Jubin Abutalebi
- Centre for Neurolinguistics and Psycholinguistics, University Vita Salute San Raffaele, Milan, Italy
- Karen Emmorey
- Laboratory for Language and Cognitive Neuroscience, School of Speech, Language, and Hearing Sciences, San Diego State University, San Diego, California
- Gaolang Gong
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, People's Republic of China
- Xin Yan
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, People's Republic of China
- Xiaoxia Feng
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, People's Republic of China
- Lijuan Zou
- College of Psychology and Education, Zaozhuang University, Zaozhuang, 277100, People's Republic of China
- Guosheng Ding
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, People's Republic of China
23. Narratives of focal brain injured individuals: A macro-level analysis. Neuropsychologia 2017; 99:314-325. [PMID: 28347806 DOI: 10.1016/j.neuropsychologia.2017.03.027]
Abstract
Focal brain injury can have detrimental effects on the pragmatics of communication. This study examined narrative production by people with unilateral brain damage (n=36) and healthy controls and focused on the complexity (content and coherence) and the evaluative aspect of their narratives to test the general hypothesis that the left hemisphere is biased to process microlinguistic information and the right hemisphere is biased to process macrolinguistic information. We found that the narratives of people with left hemisphere damage (LHD) were less likely to maintain the overall theme of the story and contained fewer evaluative comments. These deficits correlated with their performance on microlinguistic tasks. People with right hemisphere damage (RHD), as a group, appeared preserved in expressing narrative complexity and evaluations. Yet single-case analyses revealed that damage to particular regions in the right hemisphere, such as the dorsolateral prefrontal cortex (DLPFC), the anterior and superior temporal gyrus, the middle temporal gyrus, and the supramarginal gyrus, leads to problems in creating narratives. Our findings demonstrate that both hemispheres are necessary for competent narrative production. Poor production in LHD is related to microlinguistic language problems, whereas impaired abilities in RHD can be associated with the planning and working memory abilities required to relate events in a narrative.
24
Li L, Emmorey K, Feng X, Lu C, Ding G. Functional Connectivity Reveals Which Language the "Control Regions" Control during Bilingual Production. Front Hum Neurosci 2016; 10:616. [PMID: 27965563 PMCID: PMC5127791 DOI: 10.3389/fnhum.2016.00616] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2016] [Accepted: 11/18/2016] [Indexed: 11/28/2022] Open
Abstract
Bilingual studies have revealed critical roles for the dorsal anterior cingulate cortex (dACC) and the left caudate nucleus (Lcaudate) in controlling language processing, but how these regions manage activation of a bilingual’s two languages remains an open question. We addressed this question by identifying the functional connectivity (FC) of these control regions during a picture-naming task by bimodal bilinguals who were fluent in both a spoken and a signed language. To quantify language control processes, we measured the FC of the dACC and Lcaudate with a region specific to each language modality: left superior temporal gyrus (LSTG) for speech and left pre/postcentral gyrus (LPCG) for sign. Picture-naming occurred in either a single- or dual-language context. The results showed that in a single-language context, the dACC exhibited increased FC with the target language region, but not with the non-target language region. During the dual-language context when both languages were alternately the target language, the dACC showed strong FC to the LPCG, the region specific to the less proficient (signed) language. By contrast, the Lcaudate revealed a strong connectivity to the LPCG in the single-language context and to the LSTG (the region specific to spoken language) in the dual-language context. Our findings suggest that the dACC monitors and supports the processing of the target language, and that the Lcaudate controls the selection of the less accessible language. The results support the hypothesis that language control processes adapt to task demands that vary due to different interactional contexts.
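The core measure here is seed-based functional connectivity between a control region and modality-specific language regions. The snippet below is a minimal sketch of that computation on synthetic time series; the region names are reused from the abstract, while the data, coupling strengths, and any thresholds are invented assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
t = 300                                   # number of time points (synthetic)
dacc = rng.normal(size=t)                 # seed region time series (e.g., dACC)
lstg = 0.5 * dacc + rng.normal(size=t)    # speech-specific region, here coupled to the seed
lpcg = rng.normal(size=t)                 # sign-specific region, here uncoupled

def fc(seed, target):
    """Seed-based functional connectivity as a Fisher-z transformed Pearson correlation."""
    r = np.corrcoef(seed, target)[0, 1]
    return np.arctanh(r)

print("dACC-LSTG z:", round(fc(dacc, lstg), 3))
print("dACC-LPCG z:", round(fc(dacc, lpcg), 3))
# In the study, such z values would be computed per context (single- vs dual-language)
# and compared across conditions and participants.
```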
Affiliation(s)
- Le Li: State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Karen Emmorey: Laboratory for Language and Cognitive Neuroscience, School of Speech, Language, and Hearing Sciences, San Diego State University, San Diego, CA, USA
- Xiaoxia Feng: State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Chunming Lu: State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China; Centre for Collaboration and Innovation in Brain and Learning Sciences, Beijing, China
- Guosheng Ding: State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China; Centre for Collaboration and Innovation in Brain and Learning Sciences, Beijing, China
25
Emmorey K, Mehta S, McCullough S, Grabowski TJ. The neural circuits recruited for the production of signs and fingerspelled words. BRAIN AND LANGUAGE 2016; 160:30-41. [PMID: 27459390 PMCID: PMC5002375 DOI: 10.1016/j.bandl.2016.07.003] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/23/2016] [Revised: 07/06/2016] [Accepted: 07/11/2016] [Indexed: 06/06/2023]
Abstract
Signing differs from typical non-linguistic hand actions because movements are not visually guided, finger movements are complex (particularly for fingerspelling), and signs are not produced as holistic gestures. We used positron emission tomography to investigate the neural circuits involved in the production of American Sign Language (ASL). Different types of signs (one-handed (articulated in neutral space), two-handed (neutral space), and one-handed body-anchored signs) were elicited by asking deaf native signers to produce sign translations of English words. Participants also fingerspelled (one-handed) printed English words. For the baseline task, participants indicated whether a word contained a descending letter. Fingerspelling engaged ipsilateral motor cortex and cerebellar cortex in contrast to both one-handed signs and the descender baseline task, which may reflect greater timing demands and complexity of handshape sequences required for fingerspelling. Greater activation in the visual word form area was also observed for fingerspelled words compared to one-handed signs. Body-anchored signs engaged bilateral superior parietal cortex to a greater extent than the descender baseline task and neutral space signs, reflecting the motor control and proprioceptive monitoring required to direct the hand toward a specific location on the body. Less activation in parts of the motor circuit was observed for two-handed signs compared to one-handed signs, possibly because, for half of the signs, handshape and movement goals were spread across the two limbs. Finally, the conjunction analysis comparing each sign type with the descender baseline task revealed common activation in the supramarginal gyrus bilaterally, which we interpret as reflecting phonological retrieval and encoding processes.
26
Hartwigsen G, Henseler I, Stockert A, Wawrzyniak M, Wendt C, Klingbeil J, Baumgaertner A, Saur D. Integration demands modulate effective connectivity in a fronto-temporal network for contextual sentence integration. Neuroimage 2016; 147:812-824. [PMID: 27542723 DOI: 10.1016/j.neuroimage.2016.08.026] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2016] [Revised: 07/23/2016] [Accepted: 08/15/2016] [Indexed: 11/29/2022] Open
Abstract
Previous neuroimaging studies demonstrated that a network of left-hemispheric frontal and temporal brain regions contributes to the integration of contextual information into a sentence. However, it remains unclear how these cortical areas influence and drive each other during contextual integration. The present study used dynamic causal modeling (DCM) to investigate task-related changes in the effective connectivity within this network. We found increased neural activity in left anterior inferior frontal gyrus (aIFG), posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG) and anterior superior temporal sulcus/MTG (aSTS/MTG) that probably reflected increased integration demands and restructuring attempts during the processing of unexpected or semantically anomalous relative to expected endings. DCM analyses of this network revealed that unexpected endings increased the inhibitory influence of left aSTS/MTG on pSTS/MTG during contextual integration. In contrast, during the processing of semantically anomalous endings, left aIFG increased its inhibitory drive on pSTS/MTG. Probabilistic fiber tracking showed that effective connectivity between these areas is mediated by distinct ventral and dorsal white matter association tracts. Together, these results suggest that increasing integration demands require an inhibition of the left pSTS/MTG, which presumably reflects the inhibition of the dominant expected sentence ending. These results are important for a better understanding of the neural implementation of sentence comprehension on a large-scale network level and might influence future studies of language in post-stroke aphasia after focal lesions.
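DCM fits a bilinear neural state model in which experimental inputs can modulate the coupling between regions. The sketch below only simulates the forward (generative) part of such a model for two regions, to show how a condition-specific increase in an inhibitory connection plays out; the coupling values are illustrative assumptions, and a real DCM analysis additionally involves a hemodynamic model and Bayesian model inversion (e.g., in SPM).

```python
import numpy as np

# Bilinear DCM neural state equation: dz/dt = (A + u_mod * B) z + C * u_drive
A = np.array([[-1.0, 0.0],
              [-0.2, -1.0]])    # baseline coupling: region 0 (e.g., aIFG) weakly inhibits region 1 (pSTS/MTG)
B = np.array([[0.0, 0.0],
              [-0.4, 0.0]])     # modulatory effect of the "anomalous ending" input on that connection
C = np.array([1.0, 0.0])        # driving input enters region 0

dt, steps, u_drive = 0.01, 2000, 1.0
for cond, u_mod in [("expected", 0.0), ("anomalous", 1.0)]:
    z = np.zeros(2)
    for _ in range(steps):
        dz = (A + u_mod * B) @ z + C * u_drive
        z = z + dt * dz
    print(cond, "steady-state activity:", z.round(3))
# Stronger suppression of region 1 under the modulatory input mirrors the kind of
# condition-dependent coupling change that DCM parameter estimates describe.
```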
Affiliation(s)
- Gesa Hartwigsen: Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Language & Aphasia Laboratory, Department of Neurology, University of Leipzig, Germany
- Ilona Henseler: Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Anika Stockert: Language & Aphasia Laboratory, Department of Neurology, University of Leipzig, Germany
- Max Wawrzyniak: Language & Aphasia Laboratory, Department of Neurology, University of Leipzig, Germany
- Christin Wendt: Language & Aphasia Laboratory, Department of Neurology, University of Leipzig, Germany
- Julian Klingbeil: Language & Aphasia Laboratory, Department of Neurology, University of Leipzig, Germany
- Dorothee Saur: Language & Aphasia Laboratory, Department of Neurology, University of Leipzig, Germany
27
Gutierrez-Sigut E, Payne H, MacSweeney M. Examining the contribution of motor movement and language dominance to increased left lateralization during sign generation in native signers. BRAIN AND LANGUAGE 2016; 159:109-17. [PMID: 27388786 PMCID: PMC4980063 DOI: 10.1016/j.bandl.2016.06.004] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/02/2016] [Revised: 05/19/2016] [Accepted: 06/18/2016] [Indexed: 06/06/2023]
Abstract
The neural systems supporting speech and sign processing are very similar, although not identical. In a previous fTCD study of hearing native signers (Gutierrez-Sigut, Daws, et al., 2015) we found stronger left lateralization for sign than speech. Given that this increased lateralization could not be explained by hand movement alone, the contribution of motor movement versus 'linguistic' processes to the strength of hemispheric lateralization during sign production remains unclear. Here we directly contrast lateralization strength of covert versus overt signing during phonological and semantic fluency tasks. To address the possibility that hearing native signers' elevated lateralization indices (LIs) were due to performing a task in their less dominant language, here we test deaf native signers, whose dominant language is British Sign Language (BSL). Signers were more strongly left lateralized for overt than covert sign generation. However, the strength of lateralization was not correlated with the amount of time producing movements of the right hand. Comparisons with previous data from hearing native English speakers suggest stronger laterality indices for sign than speech in both covert and overt tasks. This increased left lateralization may be driven by specific properties of sign production such as the increased use of self-monitoring mechanisms or the nature of phonological encoding of signs.
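fTCD laterality indices are typically derived from the left minus right difference in task-related blood flow velocity changes, averaged in a window around the peak difference. The following is a minimal sketch of that computation on synthetic velocity envelopes; the sampling rate, window length, and signal shapes are assumptions, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 25                                   # samples per second (assumed)
t = np.arange(0, 20, 1 / fs)              # one 20-s task epoch
# Synthetic percent-change velocity envelopes relative to a pre-task baseline
left = 3.0 * np.exp(-((t - 8) ** 2) / 8) + rng.normal(0, 0.3, t.size)
right = 1.5 * np.exp(-((t - 8) ** 2) / 8) + rng.normal(0, 0.3, t.size)

diff = left - right
peak = np.argmax(np.abs(diff))
win = slice(max(0, peak - 2 * fs), peak + 2 * fs)   # 4-s window around the peak difference
li = diff[win].mean()
print(f"Lateralization index: {li:.2f} (positive = left-lateralized)")
```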
Affiliation(s)
- Eva Gutierrez-Sigut: Deafness, Cognition & Language Research Centre, University College London, United Kingdom; Departamento de Metodología de las Ciencias del Comportamiento, Universitat de València, Spain
- Heather Payne: Deafness, Cognition & Language Research Centre, University College London, United Kingdom; Institute of Cognitive Neuroscience, University College London, United Kingdom
- Mairéad MacSweeney: Deafness, Cognition & Language Research Centre, University College London, United Kingdom; Institute of Cognitive Neuroscience, University College London, United Kingdom
28
Coderre EL, Smith JF, van Heuven WJB, Horwitz B. The Functional Overlap of Executive Control and Language Processing in Bilinguals. BILINGUALISM (CAMBRIDGE, ENGLAND) 2016; 19:471-488. [PMID: 27695385 PMCID: PMC5042330 DOI: 10.1017/s1366728915000188] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
The need to control multiple languages is thought to require domain-general executive control (EC) in bilinguals such that the EC and language systems become interdependent. However, there has been no systematic investigation into how and where EC and language processes overlap in the bilingual brain. If the concurrent recruitment of EC during bilingual language processing is domain-general and extends to non-linguistic EC, we hypothesize that regions commonly involved in language processing, linguistic EC, and non-linguistic EC may be selectively altered in bilinguals compared to monolinguals. A conjunction of functional magnetic resonance imaging (fMRI) data from a flanker task with linguistic and nonlinguistic distractors and a semantic categorization task showed functional overlap in the left inferior frontal gyrus (LIFG) in bilinguals, whereas no overlap occurred in monolinguals. This research therefore identifies a neural locus of functional overlap of language and EC in the bilingual brain.
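A conjunction analysis of this kind reduces to asking which voxels exceed threshold in every contributing task map. The sketch below illustrates that logic on synthetic z-maps; the grid size, threshold, and planted cluster are invented for the example and are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(3)
shape = (40, 48, 40)                       # toy voxel grid
z_flanker = rng.normal(size=shape)
z_semantic = rng.normal(size=shape)
# Plant a common "LIFG-like" cluster so the conjunction is non-empty
z_flanker[10:14, 30:34, 18:22] += 5
z_semantic[10:14, 30:34, 18:22] += 5

z_thr = 3.1
conjunction = (z_flanker > z_thr) & (z_semantic > z_thr)   # voxels active in BOTH tasks
print("conjunction voxels:", int(conjunction.sum()))
# Comparing such conjunction maps between groups (bilinguals vs monolinguals) is the
# logic behind identifying a shared locus of language and executive control.
```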
Affiliation(s)
- Emily L Coderre: School of Psychology, University of Nottingham, Nottingham, UK; Brain Imaging and Modeling Section, Voice, Speech and Language Branch, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, MD, USA; Cognitive Neurology/Neuropsychology, Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Jason F Smith: Brain Imaging and Modeling Section, Voice, Speech and Language Branch, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, MD, USA; Affective and Translational Neuroscience Laboratory, Department of Psychology and Maryland Neuroimaging Center, University of Maryland, College Park, MD
- Barry Horwitz: Brain Imaging and Modeling Section, Voice, Speech and Language Branch, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, MD, USA
29
Jeong H, Sugiura M, Suzuki W, Sassa Y, Hashizume H, Kawashima R. Neural correlates of second-language communication and the effect of language anxiety. Neuropsychologia 2016; 84:e2-12. [DOI: 10.1016/j.neuropsychologia.2016.02.012] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
30
Sussman D, Pang EW, Jetly R, Dunkley BT, Taylor MJ. Neuroanatomical features in soldiers with post-traumatic stress disorder. BMC Neurosci 2016; 17:13. [PMID: 27029195 PMCID: PMC4815085 DOI: 10.1186/s12868-016-0247-x] [Citation(s) in RCA: 42] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2015] [Accepted: 03/21/2016] [Indexed: 11/25/2022] Open
Abstract
Background: Posttraumatic stress disorder (PTSD), an anxiety disorder that can develop after exposure to psychological trauma, impacts up to 20% of soldiers returning from combat-related deployment. Advanced neuroimaging holds diagnostic and prognostic potential for furthering our understanding of its etiology. Previous imaging studies on combat-related PTSD have focused on selected structures, such as the hippocampi and cortex, but none conducted a comprehensive examination of both the cerebrum and cerebellum. The present study provides a complete analysis of cortical, subcortical, and cerebellar anatomy in a single cohort. Forty-seven magnetic resonance images (MRIs) were collected from 24 soldiers with PTSD and 23 control soldiers. Each image was segmented into 78 cortical brain regions and 81,924 vertices using the corticometric iterative vertex-based estimation of thickness algorithm, allowing for both a region-based and a vertex-based cortical analysis, respectively. Subcortical volumetric analyses of the hippocampi, cerebellum, thalamus, globus pallidus, caudate, putamen, and many sub-regions were conducted following their segmentation using the Multiple Automatically Generated Templates Brain algorithm.
Results: Participants with PTSD were found to have reduced cortical thickness, primarily in the frontal and temporal lobes, with no preference for laterality. The region-based analyses further revealed localized thinning as well as thickening in several sub-regions. These results were accompanied by decreased volumes of the caudate and right hippocampus, as computed relative to total cerebral volume. Enlargement in several cerebellar lobules (relative to total cerebellar volume) was also observed in the PTSD group.
Conclusions: These data highlight the distributed structural differences between soldiers with and without PTSD, and emphasize the diagnostic potential of high-resolution MRI.
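The group comparisons described here are, at their core, per-region tests on morphometric measures with correction for multiple comparisons. Purely as an illustration (not the study's actual statistics), the sketch below runs region-wise two-sample t-tests on synthetic cortical thickness values with a Benjamini-Hochberg FDR correction; the region count matches the abstract, but the values and effect size are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_regions, n_ptsd, n_ctrl = 78, 24, 23
ptsd = rng.normal(2.50, 0.15, (n_ptsd, n_regions))
ctrl = rng.normal(2.55, 0.15, (n_ctrl, n_regions))   # slightly thicker cortex in controls (illustrative)

t, p = stats.ttest_ind(ptsd, ctrl, axis=0)

# Benjamini-Hochberg FDR across regions (adjusted p-values)
order = np.argsort(p)
adjusted = p[order] * n_regions / (np.arange(n_regions) + 1)
significant = np.zeros(n_regions, dtype=bool)
significant[order] = np.minimum.accumulate(adjusted[::-1])[::-1] <= 0.05
print("regions surviving FDR:", int(significant.sum()))
```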
Affiliation(s)
- D Sussman: Department of Diagnostic Imaging, The Hospital for Sick Children, 555 University Avenue, Toronto, ON, M5G 1X8, Canada
- E W Pang: Division of Neurology, Neuroscience and Mental Health Program, The Hospital for Sick Children, 555 University Avenue, Toronto, ON, M5G 1X8, Canada
- R Jetly: Directorate of Mental Health, Canadian Forces Health Services, Ottawa, ON, Canada
- B T Dunkley: Department of Diagnostic Imaging, The Hospital for Sick Children, 555 University Avenue, Toronto, ON, M5G 1X8, Canada
- M J Taylor: Department of Diagnostic Imaging, The Hospital for Sick Children, 555 University Avenue, Toronto, ON, M5G 1X8, Canada
31
Emmorey K, Giezen MR, Gollan TH. Psycholinguistic, cognitive, and neural implications of bimodal bilingualism. BILINGUALISM (CAMBRIDGE, ENGLAND) 2016; 19:223-242. [PMID: 28804269 PMCID: PMC5553278 DOI: 10.1017/s1366728915000085] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/18/2023]
Abstract
Bimodal bilinguals, fluent in a signed and a spoken language, exhibit a unique form of bilingualism because their two languages access distinct sensory-motor systems for comprehension and production. Differences between unimodal and bimodal bilinguals have implications for how the brain is organized to control, process, and represent two languages. Evidence from code-blending (simultaneous production of a word and a sign) indicates that the production system can access two lexical representations without cost, and the comprehension system must be able to simultaneously integrate lexical information from two languages. Further, evidence of cross-language activation in bimodal bilinguals indicates the necessity of links between languages at the lexical or semantic level. Finally, the bimodal bilingual brain differs from the unimodal bilingual brain with respect to the degree and extent of neural overlap for the two languages, with less overlap for bimodal bilinguals.
Affiliation(s)
- Karen Emmorey: School of Speech, Language and Hearing Sciences, San Diego State University
- Tamar H Gollan: University of California San Diego, Department of Psychiatry
32
Matchin W, Hickok G. 'Syntactic Perturbation' During Production Activates the Right IFG, but not Broca's Area or the ATL. Front Psychol 2016; 7:241. [PMID: 26941692 PMCID: PMC4763068 DOI: 10.3389/fpsyg.2016.00241] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2015] [Accepted: 02/05/2016] [Indexed: 01/30/2023] Open
Abstract
Research on the neural organization of syntax - the core structure-building component of language - has focused on Broca's area and the anterior temporal lobe (ATL) as the chief candidates for syntactic processing. However, these proposals have received considerable challenges. In order to better understand the neural basis of syntactic processing, we performed a functional magnetic resonance imaging experiment using a constrained sentence production task. We examined the BOLD response to sentence production for active and passive sentences, unstructured word lists, and syntactic perturbation. Perturbation involved cued restructuring of the planned syntax of a sentence mid utterance. Perturbation was designed to capture the effects of syntactic violations previously studied in sentence comprehension. Our experiment showed that Broca's area and the ATL did not exhibit response profiles consistent with syntactic operations - we found no increase of activation in these areas for sentences > lists or for perturbation. Syntactic perturbation activated a cortical-subcortical network including robust activation of the right inferior frontal gyrus (RIFG). This network is similar to one previously shown to be involved in motor response inhibition. We hypothesize that RIFG activation in our study and in previous studies of sentence comprehension is due to an inhibition mechanism that may facilitate efficient syntactic restructuring.
Affiliation(s)
- William Matchin: Cognitive Neuroscience of Language Laboratory, Department of Linguistics, University of Maryland, College Park, MD, USA
- Gregory Hickok: Auditory and Language Neuroscience Laboratory, Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, USA
33
Okada K, Rogalsky C, O'Grady L, Hanaumi L, Bellugi U, Corina D, Hickok G. An fMRI study of perception and action in deaf signers. Neuropsychologia 2016; 82:179-188. [PMID: 26796716 DOI: 10.1016/j.neuropsychologia.2016.01.015] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2015] [Revised: 01/11/2016] [Accepted: 01/13/2016] [Indexed: 11/30/2022]
Abstract
Since the discovery of mirror neurons, there has been a great deal of interest in understanding the relationship between perception and action, and the role of the human mirror system in language comprehension and production. Two questions have dominated research. One concerns the role of Broca's area in speech perception. The other concerns the role of the motor system more broadly in understanding action-related language. The current study investigates both of these questions in a way that bridges research on language with research on manual actions. We studied the neural basis of observing and executing American Sign Language (ASL) object and action signs. In an fMRI experiment, deaf signers produced signs depicting actions and objects as well as observed/comprehended signs of actions and objects. Different patterns of activation were found for observation and execution although with overlap in Broca's area, providing prima facie support for the claim that the motor system participates in language perception. In contrast, we found no evidence that action related signs differentially involved the motor system compared to object related signs. These findings are discussed in the context of lesion studies of sign language execution and observation. In this broader context, we conclude that the activation in Broca's area during ASL observation is not causally related to sign language understanding.
Affiliation(s)
- Kayoko Okada: Department of Psychological Sciences, Whittier College, Whittier, CA, United States; Department of Cognitive Sciences, University of California, Irvine, CA, United States
- Corianne Rogalsky: Department of Speech and Hearing Science, Arizona State University, Tempe, AZ, United States
- Lucinda O'Grady: Laboratory for Cognitive Neuroscience, The Salk Institute for Biological Studies, San Diego, CA, United States
- Leila Hanaumi: Laboratory for Cognitive Neuroscience, The Salk Institute for Biological Studies, San Diego, CA, United States
- Ursula Bellugi: Laboratory for Cognitive Neuroscience, The Salk Institute for Biological Studies, San Diego, CA, United States
- David Corina: Department of Linguistics, University of California, Davis, CA, United States
- Gregory Hickok: Department of Cognitive Sciences, University of California, Irvine, CA, United States
34
Gutierrez-Sigut E, Daws R, Payne H, Blott J, Marshall C, MacSweeney M. Language lateralization of hearing native signers: A functional transcranial Doppler sonography (fTCD) study of speech and sign production. BRAIN AND LANGUAGE 2015; 151:23-34. [PMID: 26605960 PMCID: PMC4918793 DOI: 10.1016/j.bandl.2015.10.006] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/31/2015] [Revised: 10/19/2015] [Accepted: 10/24/2015] [Indexed: 06/05/2023]
Abstract
Neuroimaging studies suggest greater involvement of the left parietal lobe in sign language compared to speech production. This stronger activation might be linked to the specific demands of sign encoding and proprioceptive monitoring. In Experiment 1 we investigate hemispheric lateralization during sign and speech generation in hearing native users of English and British Sign Language (BSL). Participants exhibited stronger lateralization during BSL than English production. In Experiment 2 we investigated whether this increased lateralization index could be due exclusively to the higher motoric demands of sign production. Sign naïve participants performed a phonological fluency task in English and a non-sign repetition task. Participants were left lateralized in the phonological fluency task but there was no consistent pattern of lateralization for the non-sign repetition in these hearing non-signers. The current data demonstrate stronger left hemisphere lateralization for producing signs than speech, which was not primarily driven by motoric articulatory demands.
Affiliation(s)
- Eva Gutierrez-Sigut: Deafness, Cognition & Language Research Centre, University College London, United Kingdom
- Richard Daws: Deafness, Cognition & Language Research Centre, University College London, United Kingdom
- Heather Payne: Deafness, Cognition & Language Research Centre, University College London, United Kingdom; Institute of Cognitive Neuroscience, University College London, United Kingdom
- Jonathan Blott: Deafness, Cognition & Language Research Centre, University College London, United Kingdom
- Chloë Marshall: UCL Institute of Education, University College London, United Kingdom
- Mairéad MacSweeney: Deafness, Cognition & Language Research Centre, University College London, United Kingdom; Institute of Cognitive Neuroscience, University College London, United Kingdom
35
Williams JT, Darcy I, Newman SD. Modality-independent neural mechanisms for novel phonetic processing. Brain Res 2015; 1620:107-15. [DOI: 10.1016/j.brainres.2015.05.014] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2015] [Revised: 04/28/2015] [Accepted: 05/11/2015] [Indexed: 01/20/2023]
36
Weisberg J, McCullough S, Emmorey K. Simultaneous perception of a spoken and a signed language: The brain basis of ASL-English code-blends. BRAIN AND LANGUAGE 2015; 147:96-106. [PMID: 26177161 PMCID: PMC5769874 DOI: 10.1016/j.bandl.2015.05.006] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/15/2014] [Revised: 04/17/2015] [Accepted: 05/16/2015] [Indexed: 05/29/2023]
Abstract
Code-blends (simultaneous words and signs) are a unique characteristic of bimodal bilingual communication. Using fMRI, we investigated code-blend comprehension in hearing native ASL-English bilinguals who made a semantic decision (edible?) about signs, audiovisual words, and semantically equivalent code-blends. English and ASL recruited a similar fronto-temporal network with expected modality differences: stronger activation for English in auditory regions of bilateral superior temporal cortex, and stronger activation for ASL in bilateral occipitotemporal visual regions and left parietal cortex. Code-blend comprehension elicited activity in a combination of these regions, and no cognitive control regions were additionally recruited. Furthermore, code-blends elicited reduced activation relative to ASL presented alone in bilateral prefrontal and visual extrastriate cortices, and relative to English alone in auditory association cortex. Consistent with behavioral facilitation observed during semantic decisions, the findings suggest that redundant semantic content induces more efficient neural processing in language and sensory regions during bimodal language integration.
Affiliation(s)
- Jill Weisberg: Laboratory for Language and Cognitive Neuroscience, San Diego State University, 6495 Alvarado Rd., Suite 200, San Diego, CA 92120, USA
- Stephen McCullough: Laboratory for Language and Cognitive Neuroscience, San Diego State University, 6495 Alvarado Rd., Suite 200, San Diego, CA 92120, USA
- Karen Emmorey: Laboratory for Language and Cognitive Neuroscience, San Diego State University, 6495 Alvarado Rd., Suite 200, San Diego, CA 92120, USA
37
Proverbio AM, Gabaro V, Orlandi A, Zani A. Semantic brain areas are involved in gesture comprehension: An electrical neuroimaging study. BRAIN AND LANGUAGE 2015; 147:30-40. [PMID: 26011745 DOI: 10.1016/j.bandl.2015.05.002] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/04/2014] [Revised: 04/13/2015] [Accepted: 05/02/2015] [Indexed: 06/04/2023]
Abstract
While the mechanism of sign language comprehension in deaf people has been widely investigated, little is known about the neural underpinnings of spontaneous gesture comprehension in healthy speakers. Bioelectrical responses to 800 pictures of actors showing common Italian gestures (e.g., emblems, deictic or iconic gestures) were recorded from 14 participants. Stimuli were selected from a wider corpus of 1122 gestures. Half of the pictures were preceded by an incongruent description. ERPs were recorded from 128 sites while participants decided whether the stimulus was congruent. Congruent pictures elicited a posterior P300 followed by late positivity, while incongruent gestures elicited an anterior N400 response. N400 generators were investigated with swLORETA reconstruction. Processing of congruent gestures activated face- and body-related visual areas (e.g., BA19, BA37, BA22), the left angular gyrus, and fronto-parietal mirror areas. The incongruent-congruent contrast particularly engaged linguistic and semantic brain areas, such as the left medial and superior temporal lobe.
Affiliation(s)
- Alice Mado Proverbio: NeuroMI-Milan Center for Neuroscience, Dept. of Psychology, University of Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, 20126 Milan, Italy
- Veronica Gabaro: NeuroMI-Milan Center for Neuroscience, Dept. of Psychology, University of Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, 20126 Milan, Italy
- Andrea Orlandi: NeuroMI-Milan Center for Neuroscience, Dept. of Psychology, University of Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, 20126 Milan, Italy; Institute of Bioimaging and Molecular Physiology, IBFM-CNR, Milan, Italy
- Alberto Zani: Institute of Bioimaging and Molecular Physiology, IBFM-CNR, Milan, Italy
38
Wilson SM, Lam D, Babiak MC, Perry DW, Shih T, Hess CP, Berger MS, Chang EF. Transient aphasias after left hemisphere resective surgery. J Neurosurg 2015; 123:581-93. [PMID: 26115463 DOI: 10.3171/2015.4.jns141962] [Citation(s) in RCA: 66] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
OBJECT: Transient aphasias are often observed in the first few days after a patient has undergone resection in the language-dominant hemisphere. The aims of this prospective study were to characterize the incidence and nature of these aphasias and to determine whether there are relationships between location of the surgical site and deficits in specific language domains.
METHODS: One hundred ten patients undergoing resection to the language-dominant hemisphere participated in the study. Language was evaluated prior to surgery and 2-3 days and 1 month postsurgery using the Western Aphasia Battery and the Boston Naming Test. Voxel-based lesion-symptom mapping was used to identify relationships between the surgical site location assessed on MRI and deficits in fluency, information content, comprehension, repetition, and naming.
RESULTS: Seventy-one percent of patients were classified as aphasic based on the Western Aphasia Battery 2-3 days postsurgery, with deficits observed in each of the language domains examined. Fluency deficits were associated with resection of the precentral gyrus and adjacent inferior frontal cortex. Reduced information content of spoken output was associated with resection of the ventral precentral gyrus and posterior inferior frontal gyrus (pars opercularis). Repetition deficits were associated with resection of the posterior superior temporal gyrus. Naming deficits were associated with resection of the ventral temporal cortex, with midtemporal and posterior temporal damage more predictive of naming deficits than anterior temporal damage. By 1 month postsurgery, nearly all language deficits were resolved, and no language measure except for naming differed significantly from its presurgical level.
CONCLUSIONS: These findings show that transient aphasias are very common after left hemisphere resective surgery and that the precise nature of the aphasia depends on the specific location of the surgical site. The patient cohort in this study provides a unique window into the neural basis of language because resections are discrete, their locations are not limited by vascular distribution or patterns of neurodegeneration, and language can be studied prior to substantial reorganization.
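Voxel-based lesion-symptom mapping compares, voxel by voxel, the behavioral scores of patients whose resection includes that voxel against those whose resection spares it. The sketch below shows that logic on a toy grid with synthetic resection masks and fluency scores; the grid size, minimum group size, and planted effect are all invented for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_pat, shape = 110, (20, 24, 20)
lesions = rng.random((n_pat,) + shape) < 0.10        # toy binary resection masks
fluency = rng.normal(8, 2, n_pat)
fluency[lesions[:, 5, 10, 8]] -= 3                   # resection of one voxel lowers fluency (illustrative)

tmap = np.full(shape, np.nan)
for idx in np.ndindex(shape):
    hit = lesions[(slice(None),) + idx]
    if 5 <= hit.sum() <= n_pat - 5:                  # require a minimum number of patients per group
        tmap[idx] = stats.ttest_ind(fluency[~hit], fluency[hit]).statistic
print("peak t:", round(float(np.nanmax(tmap)), 2),
      "at voxel", np.unravel_index(np.nanargmax(tmap), shape))
```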
Affiliation(s)
- Stephen M Wilson: Departments of Speech, Language, and Hearing Sciences and Neurology, University of Arizona, Tucson, Arizona
- Tina Shih: Department of Neurology and UCSF Epilepsy Center, University of California, San Francisco, California
- Edward F Chang: Department of Neurological Surgery and UCSF Epilepsy Center, University of California, San Francisco, California
39
Bilingualism alters brain functional connectivity between “control” regions and “language” regions: Evidence from bimodal bilinguals. Neuropsychologia 2015; 71:236-47. [DOI: 10.1016/j.neuropsychologia.2015.04.007] [Citation(s) in RCA: 47] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2014] [Revised: 03/21/2015] [Accepted: 04/04/2015] [Indexed: 01/12/2023]
40
Jeong H, Sugiura M, Suzuki W, Sassa Y, Hashizume H, Kawashima R. Neural correlates of second-language communication and the effect of language anxiety. Neuropsychologia 2015; 66:182-92. [DOI: 10.1016/j.neuropsychologia.2014.11.013] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2014] [Revised: 11/12/2014] [Accepted: 11/14/2014] [Indexed: 11/27/2022]
41
Geranmayeh F, Leech R, Wise RJS. Semantic retrieval during overt picture description: Left anterior temporal or the parietal lobe? Neuropsychologia 2014; 76:125-35. [PMID: 25497693 PMCID: PMC4582804 DOI: 10.1016/j.neuropsychologia.2014.12.012] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2014] [Revised: 12/07/2014] [Accepted: 12/08/2014] [Indexed: 11/15/2022]
Abstract
Retrieval of semantic representations is a central process during overt speech production. There is an increasing consensus that an amodal semantic 'hub' must exist that draws together modality-specific representations of concepts. Based on the distribution of atrophy and the behavioral deficit of patients with the semantic variant of fronto-temporal lobar degeneration, it has been proposed that this hub is localized within both anterior temporal lobes (ATL), and is functionally connected with verbal 'output' systems via the left ATL. An alternative view, dating from Geschwind's proposal in 1965, is that the angular gyrus (AG) is central to object-based semantic representations. In this fMRI study we examined the connectivity of the left ATL and parietal lobe (PL) with whole-brain networks known to be activated during overt picture description. We decomposed each of these two brain volumes into 15 regions of interest (ROIs), using independent component analysis. A dual regression analysis was used to establish the connectivity of each ROI with whole-brain networks. An ROI within the left anterior superior temporal sulcus (antSTS) was functionally connected to other parts of the left ATL, including anterior ventromedial left temporal cortex (partially attenuated by signal loss due to susceptibility artifact), a large left dorsolateral prefrontal region (including 'classic' Broca's area), extensive bilateral sensory-motor cortices, and the length of both superior temporal gyri. The time-course of this functionally connected network was associated with picture description but not with non-semantic baseline tasks. This system has the distribution expected for the production of overt speech with appropriate semantic content, and the auditory monitoring of the overt speech output. In contrast, the only left PL ROI that showed connectivity with brain systems most strongly activated by the picture-description task was in the superior parietal lobe (supPL). This region showed connectivity with predominantly posterior cortical regions required for the visual processing of the pictorial stimuli, with additional connectivity to the dorsal left AG and a small component of the left inferior frontal gyrus. None of the other PL ROIs that included part of the left AG were activated by Speech alone. The best interpretation of these results is that the left antSTS connects the proposed semantic hub (specifically localized to ventral anterior temporal cortex based on clinical neuropsychological studies) to posterior frontal regions and sensory-motor cortices responsible for the overt production of speech.
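The dual regression step named here can be summarized as two stages of least-squares regression: group-level spatial maps are regressed against a subject's data to obtain subject-specific time courses, and those time courses are then regressed against the data to obtain subject-specific spatial maps. The sketch below shows this on synthetic matrices; the matrix sizes and planted signal are assumptions, and real pipelines operate on preprocessed 4D images (e.g., with FSL or nilearn).

```python
import numpy as np

rng = np.random.default_rng(6)
n_vox, n_time, n_comp = 5000, 200, 15
group_maps = rng.normal(size=(n_vox, n_comp))        # group ICA spatial maps (e.g., ATL/PL ROIs)
true_ts = rng.normal(size=(n_time, n_comp))
subject_data = true_ts @ group_maps.T + rng.normal(0, 0.5, (n_time, n_vox))

# Stage 1: spatial regression -> subject-specific time course for each component
ts_hat, *_ = np.linalg.lstsq(group_maps, subject_data.T, rcond=None)   # (n_comp, n_time)
# Stage 2: temporal regression -> subject-specific spatial map for each component
maps_hat, *_ = np.linalg.lstsq(ts_hat.T, subject_data, rcond=None)     # (n_comp, n_vox)

print("recovered time-course correlation:",
      round(np.corrcoef(ts_hat[0], true_ts[:, 0])[0, 1], 2))
# The subject-wise maps (maps_hat) are what would enter group statistics on connectivity.
```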
Affiliation(s)
- Fatemeh Geranmayeh: Computational Cognitive and Clinical Neuroimaging Laboratory, Imperial College, Hammersmith Hospital, London W12 0NN, UK
- Robert Leech: Computational Cognitive and Clinical Neuroimaging Laboratory, Imperial College, Hammersmith Hospital, London W12 0NN, UK
- Richard J S Wise: Computational Cognitive and Clinical Neuroimaging Laboratory, Imperial College, Hammersmith Hospital, London W12 0NN, UK
42
Sensory-motor integration during speech production localizes to both left and right plana temporale. J Neurosci 2014; 34:12963-72. [PMID: 25253845 DOI: 10.1523/jneurosci.0336-14.2014] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Speech production relies on fine voluntary motor control of respiration, phonation, and articulation. The cortical initiation of complex sequences of coordinated movements is thought to result in parallel outputs, one directed toward motor neurons while the "efference copy" projects to auditory and somatosensory fields. It is proposed that the latter encodes the expected sensory consequences of speech and compares expected with actual postarticulatory sensory feedback. Previous functional neuroimaging evidence has indicated that the cortical target for the merging of feedforward motor and feedback sensory signals is left-lateralized and lies at the junction of the supratemporal plane with the parietal operculum, located mainly in the posterior half of the planum temporale (PT). The design of these studies required participants to imagine speaking or generating nonverbal vocalizations in response to external stimuli. The resulting assumption is that verbal and nonverbal vocal motor imagery activates neural systems that integrate the sensory-motor consequences of speech, even in the absence of primary motor cortical activity or sensory feedback. The present human functional magnetic resonance imaging study used univariate and multivariate analyses to investigate both overt and covert (internally generated) propositional and nonpropositional speech (noun definition and counting, respectively). Activity in response to overt, but not covert, speech was present in bilateral anterior PT, with no increased activity observed in posterior PT or parietal opercula for either speech type. On this evidence, the response of the left and right anterior PTs better fulfills the criteria for sensory target and state maps during overt speech production.
43
Pylkkänen L, Bemis DK, Blanco Elorrieta E. Building phrases in language production: An MEG study of simple composition. Cognition 2014; 133:371-84. [DOI: 10.1016/j.cognition.2014.07.001] [Citation(s) in RCA: 56] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2012] [Revised: 05/21/2014] [Accepted: 07/07/2014] [Indexed: 10/24/2022]
44
Bourguignon NJ. A rostro-caudal axis for language in the frontal lobe: the role of executive control in speech production. Neurosci Biobehav Rev 2014; 47:431-44. [PMID: 25305636 DOI: 10.1016/j.neubiorev.2014.09.008] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2014] [Accepted: 09/11/2014] [Indexed: 01/09/2023]
Abstract
The present article promotes a formal executive model of frontal functions underlying speech production, bringing together hierarchical theories of adaptive behavior in the (pre-)frontal cortex (pFC) and psycho- and neurolinguistic approaches to spoken language within an information-theoretic framework. Its biological plausibility is revealed through two Activation Likelihood Estimation meta-analyses carried out on a total of 41 hemodynamic studies of overt word and continuous speech production respectively. Their principal findings, considered in light of neuropsychological evidence and earlier models of speech-related frontal functions, support the engagement of a caudal-to-rostral gradient of pFC activity operationalized by the nature and quantity of speech-related information conveyed by task-related external cues (i.e., cue codability) on the one hand, and the total informational content of generated utterances on the other. In particular, overt reading or repetition and picture naming recruit primarily caudal motor-premotor regions involved in the sensorimotor and phonological aspects of speech; word and sentence generation engage mid- ventro- and dorsolateral areas supporting its basic predicative and syntactic functions; finally, rostral- and fronto-polar cortices subsume domain-general strategic processes of discourse generation for creative speech. These different levels interact in a top-down fashion, ranging representationally and temporally from the most general and extended to the most specific and immediate. The end-result is an integrative theory of pFC as the main executive component of the language cortical network, which supports the existence of areas specialized for speech communication and articulation and regions subsuming internal reasoning and planning. Prospective avenues of research pertaining to this model's principal predictions are discussed.
Affiliation(s)
- Nicolas J Bourguignon: Centre de recherche du CHU Sainte-Justine, Montreal, Canada; Département d'orthophonie et d'audiologie, Université de Montréal, Canada; Centre for Research on the Brain, Language and Music, Montreal, Canada
45
Coupled neural systems underlie the production and comprehension of naturalistic narrative speech. Proc Natl Acad Sci U S A 2014; 111:E4687-96. [PMID: 25267658 DOI: 10.1073/pnas.1323812111] [Citation(s) in RCA: 201] [Impact Index Per Article: 18.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/28/2023] Open
Abstract
Neuroimaging studies of language have typically focused on either production or comprehension of single speech utterances such as syllables, words, or sentences. In this study we used a new approach to functional MRI acquisition and analysis to characterize the neural responses during production and comprehension of complex real-life speech. First, using a time-warp based intrasubject correlation method, we identified all areas that are reliably activated in the brains of speakers telling a 15-min-long narrative. Next, we identified areas that are reliably activated in the brains of listeners as they comprehended that same narrative. This allowed us to identify networks of brain regions specific to production and comprehension, as well as those that are shared between the two processes. The results indicate that production of a real-life narrative is not localized to the left hemisphere but recruits an extensive bilateral network, which overlaps extensively with the comprehension system. Moreover, by directly comparing the neural activity time courses during production and comprehension of the same narrative we were able to identify not only the spatial overlap of activity but also areas in which the neural activity is coupled across the speaker's and listener's brains during production and comprehension of the same narrative. We demonstrate widespread bilateral coupling between production- and comprehension-related processing within both linguistic and nonlinguistic areas, exposing the surprising extent of shared processes across the two systems.
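Speaker-listener coupling of the kind described here boils down to correlating the speaker's regional time course with each listener's time course while allowing for a temporal lag. The sketch below illustrates that on synthetic time courses; the TR, lag range, and noise level are assumptions, and the published method additionally time-warps the production and comprehension recordings into a common timeline.

```python
import numpy as np

rng = np.random.default_rng(7)
n_tr = 450                                          # ~15 min at a 2-s TR (assumed)
speaker = rng.normal(size=n_tr)
lag = 2                                             # listener lags speaker by 2 TRs (illustrative)
listener = np.roll(speaker, lag) + rng.normal(0, 1.0, n_tr)

def lagged_coupling(sp, li, max_lag=5):
    """Correlation between speaker and listener time courses across candidate lags."""
    return {k: np.corrcoef(sp[: n_tr - k], li[k:])[0, 1] for k in range(max_lag + 1)}

coupling = lagged_coupling(speaker, listener)
best = max(coupling, key=coupling.get)
print(f"best lag: {best} TRs, r = {coupling[best]:.2f}")
```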
46
Hartwigsen G, Golombek T, Obleser J. Repetitive transcranial magnetic stimulation over left angular gyrus modulates the predictability gain in degraded speech comprehension. Cortex 2014; 68:100-10. [PMID: 25444577 DOI: 10.1016/j.cortex.2014.08.027] [Citation(s) in RCA: 47] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2014] [Revised: 07/08/2014] [Accepted: 08/26/2014] [Indexed: 10/24/2022]
Abstract
Increased neural activity in left angular gyrus (AG) accompanies successful comprehension of acoustically degraded but highly predictable sentences, as previous functional imaging studies have shown. However, it remains unclear whether the left AG is causally relevant for the comprehension of degraded speech. Here, we applied transient virtual lesions to either the left AG or superior parietal lobe (SPL, as a control area) with repetitive transcranial magnetic stimulation (rTMS) while healthy volunteers listened to and repeated sentences with high- versus low-predictable endings and different noise vocoding levels. We expected that rTMS of AG should selectively modulate the predictability gain (i.e., the comprehension benefit from sentences with high-predictable endings) at a medium degradation level. We found that rTMS of AG indeed reduced the predictability gain at a medium degradation level of 4-band noise vocoding (relative to control rTMS of SPL). In contrast, the behavioral perturbation induced by rTMS changed with increased signal quality. Hence, at 8-band noise vocoding, rTMS over AG versus SPL decreased the number of correctly repeated keywords for sentences with low-predictable endings. Together, these results show that the degree of the rTMS interference depended jointly on signal quality and predictability. Our results provide the first causal evidence that the left AG is a critical node for facilitating speech comprehension in challenging listening conditions.
Affiliation(s)
- Gesa Hartwigsen: Language & Aphasia Laboratory, Department of Neurology, University of Leipzig, Leipzig, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Department of Psychology, Christian-Albrechts-University, Kiel, Germany
- Thomas Golombek: Language & Aphasia Laboratory, Department of Neurology, University of Leipzig, Leipzig, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Jonas Obleser: Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
47
Xu Y, Tong Y, Liu S, Chow HM, AbdulSabur NY, Mattay GS, Braun AR. Denoising the speaking brain: toward a robust technique for correcting artifact-contaminated fMRI data under severe motion. Neuroimage 2014; 103:33-47. [PMID: 25225001 DOI: 10.1016/j.neuroimage.2014.09.013] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2014] [Revised: 07/29/2014] [Accepted: 09/04/2014] [Indexed: 11/24/2022] Open
Abstract
A comprehensive set of methods based on spatial independent component analysis (sICA) is presented as a robust technique for artifact removal, applicable to a broad range of functional magnetic resonance imaging (fMRI) experiments that have been plagued by motion-related artifacts. Although the applications of sICA for fMRI denoising have been studied previously, three fundamental elements of this approach have not yet been established: 1) a mechanistically-based ground truth for component classification; 2) a general framework for evaluating the performance and generalizability of automated classifiers; and 3) a reliable method for validating the effectiveness of denoising. Here we perform a thorough investigation of these issues and demonstrate the power of our technique by resolving the problem of severe imaging artifacts associated with continuous overt speech production. As a key methodological feature, a dual-mask sICA method is proposed to isolate a variety of imaging artifacts by directly revealing their extracerebral spatial origins. It also plays an important role in understanding the mechanistic properties of noise components in conjunction with temporal measures of physical or physiological motion. The potential of a spatially-based machine learning classifier and the general criteria for feature selection have both been examined, in order to maximize the performance and generalizability of automated component classification. The effectiveness of denoising is quantitatively validated by comparing the activation maps of fMRI with those of positron emission tomography acquired under the same task conditions. The general applicability of this technique is further demonstrated by the successful reduction of the distance-dependent effect of head motion on resting-state functional connectivity.
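The overall recipe, decompose the data into spatial components, flag those that behave like motion, and reconstruct without them, can be sketched in a few lines. The code below is a simplified stand-in using scikit-learn's FastICA on synthetic data and a single motion-correlation heuristic; the paper's actual dual-mask method, classifier, and features are far more elaborate.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(8)
n_time, n_vox = 150, 2000
motion = rng.normal(size=n_time)                      # a realignment-parameter trace
signal = np.sin(np.linspace(0, 20, n_time))
data = (np.outer(signal, rng.normal(size=n_vox)) +
        np.outer(motion, rng.normal(size=n_vox) * 2) +
        rng.normal(0, 0.5, (n_time, n_vox)))          # time x voxels

ica = FastICA(n_components=10, random_state=0)
sources = ica.fit_transform(data.T)                   # spatial ICA: voxels x components
timecourses = ica.mixing_                             # time x components

# Flag components whose time courses track head motion
r = np.array([abs(np.corrcoef(timecourses[:, k], motion)[0, 1]) for k in range(10)])
noise = r > 0.5
# Remove flagged components and reconstruct the cleaned data (time x voxels)
clean = (timecourses[:, ~noise] @ sources[:, ~noise].T) + ica.mean_[:, None]
print("flagged components:", int(noise.sum()), "cleaned shape:", clean.shape)
```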
Affiliation(s)
- Yisheng Xu: Language Section, Voice, Speech, and Language Branch, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, MD 20892, USA
- Yunxia Tong: Clinical Brain Disorders Branch, National Institute of Mental Health, National Institutes of Health, Bethesda, MD 20892, USA
- Siyuan Liu: Language Section, Voice, Speech, and Language Branch, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, MD 20892, USA
- Ho Ming Chow: Language Section, Voice, Speech, and Language Branch, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, MD 20892, USA
- Nuria Y AbdulSabur: Language Section, Voice, Speech, and Language Branch, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, MD 20892, USA; Department of Linguistics, University of Maryland, College Park, MD 20742, USA
- Govind S Mattay: Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104, USA
- Allen R Braun: Language Section, Voice, Speech, and Language Branch, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, MD 20892, USA
48
AbdulSabur NY, Xu Y, Liu S, Chow HM, Baxter M, Carson J, Braun AR. Neural correlates and network connectivity underlying narrative production and comprehension: A combined fMRI and PET study. Cortex 2014; 57:107-27. [DOI: 10.1016/j.cortex.2014.01.017] [Citation(s) in RCA: 63] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2013] [Revised: 05/14/2013] [Accepted: 01/27/2014] [Indexed: 11/16/2022]
49
Derix J, Iljina O, Weiske J, Schulze-Bonhage A, Aertsen A, Ball T. From speech to thought: the neuronal basis of cognitive units in non-experimental, real-life communication investigated using ECoG. Front Hum Neurosci 2014; 8:383. [PMID: 24982625 PMCID: PMC4056309 DOI: 10.3389/fnhum.2014.00383] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2013] [Accepted: 05/14/2014] [Indexed: 11/13/2022] Open
Abstract
Exchange of thoughts by means of expressive speech is fundamental to human communication. However, the neuronal basis of real-life communication in general, and of verbal exchange of ideas in particular, has rarely been studied until now. Here, our aim was to establish an approach for exploring the neuronal processes related to cognitive “idea” units (IUs) in conditions of non-experimental speech production. We investigated whether such units corresponding to single, coherent chunks of speech with syntactically-defined borders, are useful to unravel the neuronal mechanisms underlying real-world human cognition. To this aim, we employed simultaneous electrocorticography (ECoG) and video recordings obtained in pre-neurosurgical diagnostics of epilepsy patients. We transcribed non-experimental, daily hospital conversations, identified IUs in transcriptions of the patients' speech, classified the obtained IUs according to a previously-proposed taxonomy focusing on memory content, and investigated the underlying neuronal activity. In each of our three subjects, we were able to collect a large number of IUs which could be assigned to different functional IU subclasses with a high inter-rater agreement. Robust IU-onset-related changes in spectral magnitude could be observed in high gamma frequencies (70–150 Hz) on the inferior lateral convexity and in the superior temporal cortex regardless of the IU content. A comparison of the topography of these responses with mouth motor and speech areas identified by electrocortical stimulation showed that IUs might be of use for extraoperative mapping of eloquent cortex (average sensitivity: 44.4%, average specificity: 91.1%). High gamma responses specific to memory-related IU subclasses were observed in the inferior parietal and prefrontal regions. IU-based analysis of ECoG recordings during non-experimental communication thus elicits topographically- and functionally-specific effects. We conclude that segmentation of spontaneous real-world speech in linguistically-motivated units is a promising strategy for elucidating the neuronal basis of mental processing during non-experimental communication.
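The core signal-processing step, extracting IU-onset-locked high-gamma (70-150 Hz) amplitude changes from an ECoG channel, can be illustrated compactly. The sketch below uses a synthetic signal; the sampling rate, filter order, epoch windows, and baseline rule are assumptions rather than the authors' exact settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(9)
fs = 1000                                            # Hz (assumed)
t = np.arange(0, 60, 1 / fs)
ecog = rng.normal(0, 1, t.size)
onsets = np.arange(5, 55, 5)                         # IU onsets every 5 s (toy)
for o in onsets:                                     # add a 100-Hz burst after each onset
    idx = (t >= o) & (t < o + 0.5)
    ecog[idx] += 2 * np.sin(2 * np.pi * 100 * t[idx])

b, a = butter(4, [70, 150], btype="bandpass", fs=fs)
envelope = np.abs(hilbert(filtfilt(b, a, ecog)))     # high-gamma amplitude envelope

epochs = np.array([envelope[int((o - 1) * fs): int((o + 1) * fs)] for o in onsets])
baseline = epochs[:, :fs].mean(axis=1, keepdims=True)
change = 100 * (epochs / baseline - 1)               # percent change vs the 1-s pre-onset window
print("mean post-onset high-gamma increase: %.1f%%" % change[:, fs:].mean())
```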
Collapse
Affiliation(s)
- Johanna Derix
- Department of Neurosurgery, Epilepsy Center, University Medical Center Freiburg Freiburg, Germany ; Department of Neurobiology and Biophysics, Faculty of Biology, University of Freiburg Freiburg, Germany ; Bernstein Center Freiburg, University of Freiburg Freiburg, Germany
| | - Olga Iljina
- Department of Neurosurgery, Epilepsy Center, University Medical Center Freiburg Freiburg, Germany ; GRK 1624, University of Freiburg Freiburg, Germany ; Department of German Linguistics, University of Freiburg Freiburg, Germany ; Hermann Paul School of Linguistics, University of Freiburg Freiburg, Germany
| | - Johanna Weiske
- Department of Neurosurgery, Epilepsy Center, University Medical Center Freiburg, Freiburg, Germany; Department of Neurobiology and Biophysics, Faculty of Biology, University of Freiburg, Freiburg, Germany; Bernstein Center Freiburg, University of Freiburg, Freiburg, Germany
| | - Andreas Schulze-Bonhage
- Department of Neurosurgery, Epilepsy Center, University Medical Center Freiburg, Freiburg, Germany; Bernstein Center Freiburg, University of Freiburg, Freiburg, Germany
| | - Ad Aertsen
- Department of Neurobiology and Biophysics, Faculty of Biology, University of Freiburg, Freiburg, Germany; Bernstein Center Freiburg, University of Freiburg, Freiburg, Germany
| | - Tonio Ball
- Department of Neurosurgery, Epilepsy Center, University Medical Center Freiburg, Freiburg, Germany; Bernstein Center Freiburg, University of Freiburg, Freiburg, Germany
| |
Collapse
|
50
|
Emmorey K, McCullough S, Mehta S, Grabowski TJ. How sensory-motor systems impact the neural organization for language: direct contrasts between spoken and signed language. Front Psychol 2014; 5:484. [PMID: 24904497 PMCID: PMC4033845 DOI: 10.3389/fpsyg.2014.00484] [Citation(s) in RCA: 48] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2013] [Accepted: 05/03/2014] [Indexed: 11/24/2022] Open
Abstract
To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H₂¹⁵O-PET study of sign and spoken word production (picture naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL, and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface-level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language.
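The direct contrasts and the conjunction analysis mentioned in this abstract can be illustrated with a minimal voxel-wise sketch. This is not the authors' implementation; the minimum-statistic conjunction shown here is one standard formulation, and the array names, inputs (per-contrast t-maps), and thresholding are assumptions made for the example.

```python
import numpy as np

def direct_contrast(stat_map_a, stat_map_b):
    """Voxel-wise difference map for a direct contrast between two conditions
    (positive values: A > B; negative values: B > A)."""
    return stat_map_a - stat_map_b

def minimum_statistic_conjunction(t_map_speech, t_map_sign, t_threshold):
    """Conjunction via the minimum t-statistic: a voxel is flagged only if
    both the speech and the sign contrast exceed the threshold."""
    t_min = np.minimum(t_map_speech, t_map_sign)
    return t_min, t_min > t_threshold
```

Under this formulation, a region such as left inferior frontal gyrus or the superior temporal sulcus appears in the conjunction map only if it is activated by both modalities, which is the logic behind interpreting the overlap as an invariant bilateral perisylvian language system.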
Collapse
Affiliation(s)
- Karen Emmorey
- Laboratory for Language and Cognitive Neuroscience, School of Speech, Language, and Hearing Sciences, San Diego State University, San Diego, CA, USA
| | - Stephen McCullough
- Laboratory for Language and Cognitive Neuroscience, School of Speech, Language, and Hearing Sciences, San Diego State University, San Diego, CA, USA
| | - Sonya Mehta
- Department of Psychology, University of Washington, Seattle, WA, USA; Department of Radiology, University of Washington, Seattle, WA, USA
| | | |
Collapse
|