1. Yang T, Fan X, Hou B, Wang J, Chen X. Linguistic network in early deaf individuals: A neuroimaging meta-analysis. Neuroimage 2024;299:120720. PMID: 38971484. DOI: 10.1016/j.neuroimage.2024.120720.
Abstract
This meta-analysis summarizes evidence from 44 neuroimaging experiments and characterizes the general linguistic network in early deaf individuals. Meta-analytic comparisons with hearing individuals found that a specific set of regions (in particular the left inferior frontal gyrus and posterior middle temporal gyrus) participates in supramodal language processing. Beyond previously described modality-specific differences, the present study showed that the left calcarine gyrus and the right caudate were also recruited in deaf compared with hearing individuals. The study further showed that the bilateral posterior superior temporal gyrus is shaped by cross-modal plasticity, whereas the left frontotemporal areas are shaped by early language experience. Although an overall left-lateralized pattern for language processing was observed in the early deaf individuals, regional lateralization was altered in the inferior frontal gyrus and anterior temporal lobe. These findings indicate that the core language network functions in a modality-independent manner, and they provide a foundation for determining the contributions of sensory and linguistic experience in shaping the neural bases of language processing.
Affiliation(s)
- Tengyu Yang: Department of Otolaryngology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, PR China
- Xinmiao Fan: Department of Otolaryngology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, PR China
- Bo Hou: Department of Radiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, PR China
- Jian Wang: Department of Otolaryngology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, PR China
- Xiaowei Chen: Department of Otolaryngology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, PR China
2. Malaia EA, Borneman SC, Borneman JD, Krebs J, Wilbur RB. Prediction underlying comprehension of human motion: an analysis of Deaf signer and non-signer EEG in response to visual stimuli. Front Neurosci 2023;17:1218510. PMID: 37901437. PMCID: PMC10602904. DOI: 10.3389/fnins.2023.1218510.
Abstract
Introduction: Sensory inference and top-down predictive processing, reflected in human neural activity, play a critical role in higher-order cognitive processes such as language comprehension. However, the neurobiological bases of predictive processing in higher-order cognition are not well understood.
Methods: This study used electroencephalography (EEG) to track participants' cortical dynamics in response to Austrian Sign Language and reversed sign language videos, measuring neural coherence to optical flow in the visual signal. We then used machine learning to assess the entropy-based relevance of specific frequencies and regions of interest to brain-state classification accuracy.
Results: EEG features highly relevant for classification were distributed across language-processing regions in Deaf signers (frontal cortex and left hemisphere), whereas in non-signers such features were concentrated in visual and spatial processing regions.
Discussion: The results highlight the functional significance of predictive-processing time windows for sign language comprehension and biological motion processing, and the role of long-term experience (learning) in minimizing prediction error.
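The coherence analysis this abstract describes can be sketched compactly. The following Python is an illustrative reconstruction, not the authors' pipeline: it estimates magnitude-squared coherence between one EEG channel and an optical-flow magnitude series resampled to the EEG rate. The signals, sampling rates, window length, and frequency band are all placeholder assumptions.

```python
# Illustrative sketch (not the authors' pipeline): coherence between an EEG
# channel and the optical-flow magnitude of a video stimulus.
import numpy as np
from scipy.signal import coherence, resample

rng = np.random.default_rng(0)
fs_eeg = 250.0                     # assumed EEG sampling rate (Hz)
fps = 25                           # assumed video frame rate
n_sec = 60

eeg = rng.standard_normal(int(fs_eeg * n_sec))     # placeholder EEG channel
flow = np.abs(rng.standard_normal(fps * n_sec))    # placeholder flow magnitude

# Put both signals on a common time base before estimating coherence.
flow_rs = resample(flow, eeg.size)

# Magnitude-squared coherence in 4-s Hann windows.
f, cxy = coherence(eeg, flow_rs, fs=fs_eeg, nperseg=int(4 * fs_eeg))

# Average over a low-frequency band often implicated in visual entrainment.
band = (f >= 1) & (f <= 8)
print(f"mean 1-8 Hz EEG-flow coherence: {cxy[band].mean():.3f}")
```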
Affiliation(s)
- Evie A. Malaia: Department of Communicative Disorders, University of Alabama, Tuscaloosa, AL, United States
- Sean C. Borneman: Department of Communicative Disorders, University of Alabama, Tuscaloosa, AL, United States
- Joshua D. Borneman: Department of Linguistics, Purdue University, West Lafayette, IN, United States
- Julia Krebs: Linguistics Department, University of Salzburg, Salzburg, Austria; Centre for Cognitive Neuroscience, University of Salzburg, Salzburg, Austria
- Ronnie B. Wilbur: Department of Linguistics, Purdue University, West Lafayette, IN, United States; Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN, United States
3. Papafragou A, Ji Y. Events and objects are similar cognitive entities. Cogn Psychol 2023;143:101573. PMID: 37178616. DOI: 10.1016/j.cogpsych.2023.101573.
Abstract
Logico-semantic theories have long noted parallels between the linguistic representation of temporal entities (events) and spatial entities (objects): bounded (or telic) predicates such as fix a car resemble count nouns such as sandcastle because they are "atoms" that have well-defined boundaries, contain discrete minimal parts and cannot be divided arbitrarily. By contrast, unbounded (or atelic) phrases such as drive a car resemble mass nouns such as sand in that they are unspecified for atomic features. Here, we demonstrate for the first time the parallels in the perceptual-cognitive representation of events and objects even in entirely non-linguistic tasks. Specifically, after viewers form categories of bounded or unbounded events, they can extend the category to objects or substances respectively (Experiments 1 and 2). Furthermore, in a training study, people successfully learn event-to-object mappings that respect atomicity (i.e., grouping bounded events with objects and unbounded events with substances) but fail to acquire the opposite, atomicity-violating mappings (Experiment 3). Finally, viewers can spontaneously draw connections between events and objects without any prior training (Experiment 4). These striking similarities between the mental representation of events and objects have implications for current theories of event cognition, as well as the relationship between language and thought.
Affiliation(s)
- Anna Papafragou: Department of Linguistics, University of Pennsylvania, 3401-C Walnut St., Philadelphia, PA 19104, USA
- Yue Ji: Department of English, School of Foreign Languages, Beijing Institute of Technology, No. 5 South Street, Zhongguancun, Haidian District, Beijing 100081, China
4.
Abstract
Early sensory deprivation, such as deafness, shapes brain development in multiple ways. Deprived auditory areas become engaged in the processing of stimuli from the remaining modalities and in high-level cognitive tasks. Yet structural and functional changes have also been observed in non-deprived brain areas, which may suggest whole-brain network changes in deaf individuals. To explore this possibility, we compared the resting-state functional network organization of the brain in early deaf adults and hearing controls and examined global network segregation and integration. Relative to hearing controls, deaf adults exhibited decreased network segregation and an altered modular structure. In the deaf, regions of the salience network were coupled with the fronto-parietal network, while in the hearing controls, they were coupled with other large-scale networks. Deaf adults showed weaker connections between auditory and somatomotor regions, stronger coupling between the fronto-parietal network and several other large-scale networks (visual, memory, cingulo-opercular and somatomotor), and an enlargement of the default mode network. Our findings suggest that brain plasticity in deaf adults is not limited to changes in the auditory cortex but additionally alters the coupling between other large-scale networks and the development of functional brain modules. These widespread functional connectivity changes may provide a mechanism for the superior behavioral performance of the deaf in visual and attentional tasks.
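Network segregation of the kind compared here is often quantified as the normalized difference between mean within-network and between-network connectivity. Below is a minimal sketch of one common formulation under assumed data shapes (an ROI-by-time matrix and one network label per ROI); it is not the study's actual pipeline.

```python
# Sketch of a system-segregation metric:
# (mean within-network r - mean between-network r) / mean within-network r.
import numpy as np

rng = np.random.default_rng(0)
labels = rng.integers(0, 7, size=100)          # placeholder: 7 networks, 100 ROIs
net_sig = rng.standard_normal((7, 400))        # one shared signal per network
ts = 0.5 * net_sig[labels] + rng.standard_normal((100, 400))  # ROI time series

fc = np.corrcoef(ts)                           # ROI-by-ROI connectivity matrix
iu = np.triu_indices_from(fc, k=1)             # unique ROI pairs, no diagonal
same = labels[iu[0]] == labels[iu[1]]          # pairs within the same network

within, between = fc[iu][same].mean(), fc[iu][~same].mean()
print(f"segregation = {(within - between) / within:.3f}")
```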
5. Kumar U, Keshri A, Mishra M. Alteration of brain resting-state networks and functional connectivity in prelingual deafness. J Neuroimaging 2021;31:1135-1145. PMID: 34189809. DOI: 10.1111/jon.12904.
Abstract
Background and Purpose: Early hearing loss causes several changes in brain structure and function at multiple levels; these changes can be observed through neuroimaging. They are directly associated with the sensory loss (hearing) and with the acquisition of alternative communication strategies. Such plasticity might establish different connectivity patterns between resting-state networks (RSNs) and other brain regions. We performed resting-state functional magnetic resonance imaging (rs-fMRI) to evaluate these intrinsic modifications.
Methods: We used two methods to characterize the functional connectivity (FC) of RSN components in 20 prelingually deaf adults and 20 demographically matched hearing adults. rs-fMRI data were analyzed using independent component analysis (ICA) and region-of-interest seed-to-voxel correlation analysis.
Results: ICA identified altered FC of RSNs in the deaf group, observed in higher visual, auditory, default mode, salience, and sensorimotor networks. Seed-to-voxel correlation analysis suggested increased temporal coherence with other neural networks in the deaf group compared with the hearing control group.
Conclusion: These findings suggest a highly diverse resting-state connectivity pattern in prelingually deaf adults, resulting from compensatory cross-modal plasticity that includes both auditory and non-auditory regions.
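The seed-to-voxel step reduces to correlating one seed time course with every voxel time course and Fisher z-transforming the resulting map. A hedged sketch on synthetic arrays follows; all shapes and the seed definition are assumptions, not the study's parameters.

```python
# Sketch of seed-to-voxel functional connectivity: correlate a seed's time
# course with every voxel, then Fisher z-transform for group statistics.
import numpy as np

rng = np.random.default_rng(1)
data = rng.standard_normal((200, 5000))   # placeholder: 200 volumes x 5000 voxels
seed = data[:, :25].mean(axis=1)          # placeholder seed: mean of 25 voxels

# Pearson r between the seed and each voxel, via standardized signals.
z = (data - data.mean(0)) / data.std(0)
sz = (seed - seed.mean()) / seed.std()
r = z.T @ sz / seed.size                  # correlation map, one r per voxel

fisher_z = np.arctanh(np.clip(r, -0.999999, 0.999999))  # variance-stabilized
print(fisher_z.shape, fisher_z.mean())
```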
Affiliation(s)
- Uttam Kumar: Centre of Bio-Medical Research, Sanjay Gandhi Postgraduate Institute of Medical Sciences Campus, Lucknow, India
- Amit Keshri: Department of Neuro-otology, Sanjay Gandhi Postgraduate Institute of Medical Sciences Campus, Lucknow, India
- Mrutyunjaya Mishra: Department of Special Education (Hearing Impairments), Dr. Shakuntala Misra National Rehabilitation University, Lucknow, India
6. White matter alteration in adults with prelingual deafness: A TBSS and SBM analysis of fractional anisotropy data. Brain Cogn 2020;148:105676. PMID: 33388552. DOI: 10.1016/j.bandc.2020.105676.
Abstract
A loss of hearing in early life leads to alterations in important white matter (WM) networks. Previous studies of WM alterations in deaf adults have mainly involved univariate analysis of fractional anisotropy (FA) data and volumetric analysis, which yielded inconsistent results. To address this issue, we investigated FA value alterations in 38 prelingually deaf adults and compared the results with those obtained from the same number of adults with normal hearing, using univariate (tract-based spatial statistics) and multivariate (source-based morphometry) methods. The findings from tract-based spatial statistics indicated increased FA values in regions such as the left cingulate gyrus, left inferior frontal occipital fasciculus, left inferior longitudinal fasciculus and superior corona radiata, but decreased FA values in the left planum temporale of the deaf adults. Source-based morphometry analysis outlined higher FA values in regions such as the bilateral lingual gyrus, bilateral cerebellum, bilateral putamen and bilateral caudate, with a considerable decrease in the bilateral superior temporal region of the deaf group. These alterations in multiple neural regions might be linked to compensatory cross-modal reorganization attributable to early hearing loss.
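At its core, a voxelwise FA group comparison is a mass-univariate two-sample test with multiple-comparison control; real TBSS analyses use FSL's permutation-based inference rather than the toy test below. A sketch on synthetic FA values, with all numbers assumed:

```python
# Toy sketch of a mass-univariate FA group comparison (actual TBSS work uses
# FSL tooling and permutation inference; this shows only the core test).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
fa_deaf = rng.normal(0.45, 0.05, size=(38, 1000))    # 38 subjects x 1000 voxels
fa_hearing = rng.normal(0.44, 0.05, size=(38, 1000))

t, p = stats.ttest_ind(fa_deaf, fa_hearing, axis=0)  # voxelwise two-sample t

# Crude multiple-comparison control via Bonferroni, for illustration only.
sig = p < (0.05 / fa_deaf.shape[1])
print(f"{sig.sum()} voxels survive Bonferroni; max |t| = {np.abs(t).max():.2f}")
```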
7. Hribar M, Šuput D, Battelino S, Vovk A. Structural brain alterations in prelingually deaf. Neuroimage 2020;220:117042. PMID: 32534128. DOI: 10.1016/j.neuroimage.2020.117042.
Abstract
Functional studies show that our brain has a remarkable ability to reorganize itself in the absence of one or more sensory modalities. In this review, we gathered the available articles investigating structural alterations in congenitally deaf subjects. Some concentrated only on specific regions of interest (e.g., auditory areas), while others examined the whole brain. The majority of structural alterations were observed in the auditory white matter and were more pronounced in the right hemisphere. Decreased white matter volume or fractional anisotropy in the auditory areas was the most common finding in congenitally deaf subjects. Only a few studies observed alterations in the auditory grey matter. Preservation of the grey matter might be due to cross-modal plasticity, as well as to the limited sensitivity of current methods to microstructural alterations of grey matter. Structural alterations were also observed in the frontal, visual, and other cerebral regions, as well as in the cerebellum. The observed structural brain alterations in the deaf can probably be attributed mainly to cross-modal plasticity in the absence of sound input and to the use of sign rather than spoken language.
Affiliation(s)
- Manja Hribar: Center for Clinical Physiology, Faculty of Medicine, University of Ljubljana, Slovenia; Clinic for Otorhinolaryngology and Cervicofacial Surgery, University Medical Centre Ljubljana, Slovenia; Department of Otorhinolaryngology, Faculty of Medicine, University of Ljubljana, Slovenia
- Dušan Šuput: Center for Clinical Physiology, Faculty of Medicine, University of Ljubljana, Slovenia; Institute of Pathophysiology, Faculty of Medicine, University of Ljubljana, Slovenia
- Saba Battelino: Clinic for Otorhinolaryngology and Cervicofacial Surgery, University Medical Centre Ljubljana, Slovenia; Department of Otorhinolaryngology, Faculty of Medicine, University of Ljubljana, Slovenia
- Andrej Vovk: Center for Clinical Physiology, Faculty of Medicine, University of Ljubljana, Slovenia; Institute of Pathophysiology, Faculty of Medicine, University of Ljubljana, Slovenia
8. Ji Y, Papafragou A. Is there an end in sight? Viewers' sensitivity to abstract event structure. Cognition 2020;197:104197. DOI: 10.1016/j.cognition.2020.104197.
9. Malaia EA, Krebs J, Roehm D, Wilbur RB. Age of acquisition effects differ across linguistic domains in sign language: EEG evidence. Brain Lang 2020;200:104708. PMID: 31698097. PMCID: PMC6934356. DOI: 10.1016/j.bandl.2019.104708.
Abstract
One of the key questions in the study of human language acquisition is the extent to which the development of neural processing networks for different components of language is modulated by exposure to linguistic stimuli. Sign languages offer a unique perspective on this issue, because prelingually Deaf children who receive access to complex linguistic input later in life provide a window into brain maturation in the absence of language, and into the subsequent neuroplasticity of neurolinguistic networks during late language learning. While the durations of sensitive periods for acquisition of linguistic subsystems (sound, vocabulary, and syntactic structure) are well established on the basis of L2 acquisition in spoken language, the relative timelines for development of neural processing networks for linguistic sub-domains in sign languages are unknown. We examined the neural responses of a group of Deaf signers who received access to signed input at varying ages to three linguistic phenomena at the levels of classifier signs, syntactic structure, and information structure. The amplitude of the N400 response to the marked word order condition correlated negatively with the age of acquisition for syntax and information structure, indicating increased cognitive load in these conditions. Additionally, the combination of behavioral and neural data suggested that late learners preferentially relied on classifiers over word order for meaning extraction. This suggests that late acquisition of sign language significantly increases cognitive load during analysis of syntax and information structure, but not word-level meaning.
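Computationally, the reported brain-behavior relation is a correlation between per-participant N400 amplitudes and age of acquisition. A minimal sketch of that step follows; the window, sampling rate, channel averaging, and data are placeholders, not the study's parameters.

```python
# Sketch: correlate mean ERP amplitude in an assumed N400 window (300-500 ms)
# with each signer's age of sign language acquisition.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
fs = 500                                     # assumed sampling rate (Hz)
times = np.arange(-0.2, 0.8, 1 / fs)         # epoch time axis (s)
erp = rng.standard_normal((24, times.size))  # placeholder: 24 signers x time
aoa = rng.uniform(1, 18, size=24)            # placeholder ages of acquisition

win = (times >= 0.3) & (times <= 0.5)        # assumed N400 window
n400 = erp[:, win].mean(axis=1)              # mean amplitude per participant

r, p = stats.pearsonr(n400, aoa)             # the abstract reports a negative r
print(f"r = {r:.2f}, p = {p:.3f}")
```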
Affiliation(s)
- Evie A Malaia: Department of Communicative Disorders, University of Alabama, Speech and Hearing Clinic, 700 Johnny Stallings Drive, Tuscaloosa, AL 35401, USA
- Julia Krebs: Research Group Neurobiology of Language, Department of Linguistics, University of Salzburg, Erzabt-Klotz-Straße 1, 5020 Salzburg, Austria; Centre for Cognitive Neuroscience (CCNS), University of Salzburg, Erzabt-Klotz-Straße 1, 5020 Salzburg, Austria
- Dietmar Roehm: Research Group Neurobiology of Language, Department of Linguistics, University of Salzburg, Erzabt-Klotz-Straße 1, 5020 Salzburg, Austria; Centre for Cognitive Neuroscience (CCNS), University of Salzburg, Erzabt-Klotz-Straße 1, 5020 Salzburg, Austria
- Ronnie B Wilbur: Department of Linguistics, Purdue University, Lyles-Porter Hall, West Lafayette, IN 47907-2122, USA; Department of Speech, Language, and Hearing Sciences, Purdue University, Lyles-Porter Hall, West Lafayette, IN 47907-2122, USA
10. Ünal E, Ji Y, Papafragou A. From event representation to linguistic meaning. Top Cogn Sci 2019;13:224-242. DOI: 10.1111/tops.12475.
Affiliation(s)
- Yue Ji: Department of Linguistics, University of Delaware
11. Malaia EA, Wilbur RB. Syllable as a unit of information transfer in linguistic communication: The entropy syllable parsing model. Wiley Interdiscip Rev Cogn Sci 2019;11:e1518. PMID: 31505710. DOI: 10.1002/wcs.1518.
Abstract
To understand human language, both spoken and signed, the listener or viewer has to parse the continuous external signal into components. The question of what those components are (e.g., phrases, words, sounds, phonemes?) has been a subject of long-standing debate. We re-frame this question to ask: what properties of the incoming visual or auditory signal are indispensable to eliciting language comprehension? In this review, we assess the phenomenon of language parsing from a modality-independent viewpoint. We show that the interplay between dynamic changes in the entropy of the signal and neural entrainment to the signal at the syllable level (4-5 Hz range) is causally related to language comprehension in both speech and sign language. This modality-independent Entropy Syllable Parsing model for the linguistic signal offers insight into the mechanisms of language processing, suggesting common neurocomputational bases for syllables in speech and sign language. This article is categorized under: Linguistics > Linguistic Theory; Linguistics > Language in Mind and Brain; Linguistics > Computational Models of Language; Psychology > Language.
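The model rests on two measurable quantities: the time-varying entropy of the signal and power at the syllable rate (4-5 Hz). The sketch below computes simplified stand-ins for both on a placeholder envelope signal; it illustrates the quantities, not the model itself, and the window and bin counts are assumptions.

```python
# Sketch of the two model ingredients: sliding-window Shannon entropy of a
# signal envelope, and band power in the 4-5 Hz syllable-rate range.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(4)
fs = 100.0
env = np.abs(rng.standard_normal(int(60 * fs)))   # placeholder 60-s envelope

def window_entropy(x, bins=16):
    """Shannon entropy (bits) of one window's amplitude histogram."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log2(p)).sum()

win = int(fs)                                     # 1-s windows
ent = [window_entropy(env[i:i + win]) for i in range(0, env.size - win, win)]

f, pxx = welch(env, fs=fs, nperseg=int(4 * fs))
syll = pxx[(f >= 4) & (f <= 5)].mean()            # syllable-rate band power
print(f"mean entropy {np.mean(ent):.2f} bits; 4-5 Hz power {syll:.4f}")
```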
Affiliation(s)
- Evie A Malaia: Department of Communicative Disorders, University of Alabama, Tuscaloosa, Alabama
- Ronnie B Wilbur: Department of Speech, Language, Hearing Sciences, College of Health and Human Sciences, Purdue University, West Lafayette, Indiana; Linguistics, School of Interdisciplinary Studies, College of Liberal Arts, Purdue University, West Lafayette, Indiana
12. Papenmeier F, Maurer AE, Huff M. Linguistic information in auditory dynamic events contributes to the detection of fine, not coarse event boundaries. Adv Cogn Psychol 2019;15:30-40. PMID: 32509043. PMCID: PMC7265132. DOI: 10.5709/acp-0254-9.
Abstract
Human observers (comprehenders) segment dynamic information into discrete events. That is, although the sensory information is continuous, comprehenders perceive boundaries between two meaningful units of information. In narrative comprehension, comprehenders use linguistic, non-linguistic, and physical cues for this event boundary perception. Yet it is an open question, both theoretically and empirically, how linguistic and non-linguistic cues contribute to this process. The current study explores how linguistic cues contribute to participants' ability to segment continuous auditory information into discrete, hierarchically structured events. Native speakers of German and non-native speakers, who neither spoke nor understood German, segmented a German audio drama into coarse and fine events. Whereas native participants could make use of linguistic, non-linguistic, and physical cues for segmentation, non-native participants could only use non-linguistic and physical cues. We analyzed segmentation behavior in terms of the ability to identify coarse and fine event boundaries and the resulting hierarchical structure. Non-native listeners identified almost identical coarse event boundaries as native listeners, but missed some of the fine event boundaries identified by the native listeners. Interestingly, hierarchical event perception (as measured by hierarchical alignment and enclosure) was comparable for native and non-native participants. In summary, linguistic cues contributed particularly to the identification of certain fine event boundaries. The results are discussed with regard to current theories of event cognition.
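One simple way to operationalize alignment between segmentation levels is the distance from each coarse boundary to its nearest fine boundary. The sketch below uses that idea with an assumed tolerance and synthetic boundary times; the study's hierarchical alignment and enclosure measures are more elaborate.

```python
# Sketch of a boundary-alignment measure: how close is each coarse event
# boundary to the nearest fine boundary? (Tolerance and data are assumptions.)
import numpy as np

rng = np.random.default_rng(5)
fine = np.sort(rng.uniform(0, 600, size=40))     # placeholder fine boundaries (s)
coarse = np.sort(rng.choice(fine, size=10, replace=False)
                 + rng.normal(0, 1.0, size=10))  # coarse ones near some fine ones

# For each coarse boundary, distance to the nearest fine boundary.
dists = np.abs(coarse[:, None] - fine[None, :]).min(axis=1)
aligned = (dists <= 2.0).mean()                  # 2-s tolerance, an assumption
print(f"{aligned:.0%} of coarse boundaries align with a fine boundary")
```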
Affiliation(s)
- Frank Papenmeier: Department of Psychology, Eberhard Karls Universität Tübingen, Tübingen, Germany
- Annika E. Maurer: Department of Psychology, Eberhard Karls Universität Tübingen, Tübingen, Germany
- Markus Huff: Department of Psychology, Eberhard Karls Universität Tübingen, Tübingen, Germany; Department of Research Infrastructures, German Institute for Adult Education, Bonn, Germany
13. Malaia E, Wilbur RB. Visual and linguistic components of short-term memory: Generalized Neural Model (GNM) for spoken and sign languages. Cortex 2019;112:69-79. DOI: 10.1016/j.cortex.2018.05.020.
14. Blumenthal-Dramé A, Malaia E. Shared neural and cognitive mechanisms in action and language: The multiscale information transfer framework. Wiley Interdiscip Rev Cogn Sci 2018;10:e1484. PMID: 30417551. DOI: 10.1002/wcs.1484.
Abstract
This review compares how humans process action and language sequences produced by other humans. On the one hand, we identify commonalities between action and language processing in terms of cognitive mechanisms (e.g., perceptual segmentation, predictive processing, integration across multiple temporal scales), neural resources (e.g., the left inferior frontal cortex), and processing algorithms (e.g., comprehension based on changes in signal entropy). On the other hand, drawing on sign language with its particularly strong motor component, we also highlight what differentiates (both oral and signed) linguistic communication from nonlinguistic action sequences. We propose the multiscale information transfer framework (MSIT) as a way of integrating these insights and highlight directions in which future empirical research inspired by the MSIT framework might fruitfully evolve. This article is categorized under: Psychology > Language; Linguistics > Language in Mind and Brain; Psychology > Motor Skill and Performance; Psychology > Prediction.
Affiliation(s)
- Alice Blumenthal-Dramé: Department of English, Albert-Ludwigs-Universität Freiburg, Freiburg, Germany; Freiburg Institute for Advanced Studies, Freiburg, Germany
- Evie Malaia: Department of Communicative Disorders, University of Alabama, Tuscaloosa, Alabama; Freiburg Institute for Advanced Studies, Freiburg, Germany
15. Johnson L, Fitzhugh MC, Yi Y, Mickelsen S, Baxter LC, Howard P, Rogalsky C. Functional neuroanatomy of second language sentence comprehension: An fMRI study of late learners of American Sign Language. Front Psychol 2018;9:1626. PMID: 30237778. PMCID: PMC6136263. DOI: 10.3389/fpsyg.2018.01626.
Abstract
The neurobiology of sentence comprehension is well-studied, but the properties and characteristics of sentence processing networks remain unclear and highly debated. Sign languages (i.e., visual-manual languages), like spoken languages, have complex grammatical structures and thus can provide valuable insights into the specificity and function of brain regions supporting sentence comprehension. The present study aims to characterize how these well-studied spoken language networks can adapt in adults to be responsive to sign language sentences, which contain combinatorial semantic and syntactic visual-spatial linguistic information. Twenty native English-speaking undergraduates who had completed introductory American Sign Language (ASL) courses viewed videos of the following conditions during fMRI acquisition: signed sentences, signed word lists, English sentences and English word lists. Overall, our results indicate that native language (L1) sentence processing resources are responsive to ASL sentence structures in late L2 learners, but that certain L1 sentence processing regions respond differently to L2 ASL sentences, likely due to the nature of their contribution to language comprehension. For example, L1 sentence regions in Broca's area were significantly more responsive to L2 than to L1 sentences, supporting the hypothesis that Broca's area contributes to sentence comprehension as a cognitive resource when increased processing is required. Anterior temporal L1 sentence regions were sensitive to L2 ASL sentence structure but demonstrated no significant difference in activation between L1 and L2, suggesting that their contribution to sentence processing is modality-independent. Posterior superior temporal L1 sentence regions also responded to ASL sentence structure but were more activated by English than by ASL sentences. An exploratory analysis of the neural correlates of L2 ASL proficiency indicates that ASL proficiency is positively correlated with increased activation in response to ASL sentences in L1 sentence processing regions. Overall, these results suggest that the well-established fronto-temporal spoken language networks involved in sentence processing exhibit functional plasticity with late L2 ASL exposure, and thus are adaptable to syntactic structures widely different from those of an individual's native language. Our findings also provide valuable insights into the unique contributions of the inferior frontal and superior temporal regions, which are frequently implicated in sentence comprehension but whose exact roles remain highly debated.
Affiliation(s)
- Lisa Johnson: Department of Speech and Hearing Science, Arizona State University, Tempe, AZ, United States
- Megan C Fitzhugh: Department of Speech and Hearing Science, Arizona State University, Tempe, AZ, United States; Interdisciplinary Graduate Neuroscience Program, Arizona State University, Tempe, AZ, United States
- Yuji Yi: Department of Speech and Hearing Science, Arizona State University, Tempe, AZ, United States
- Soren Mickelsen: Department of Speech and Hearing Science, Arizona State University, Tempe, AZ, United States
- Leslie C Baxter: Barrow Neurological Institute and St. Joseph's Hospital and Medical Center, Phoenix, AZ, United States
- Pamela Howard: Department of Speech and Hearing Science, Arizona State University, Tempe, AZ, United States
- Corianne Rogalsky: Department of Speech and Hearing Science, Arizona State University, Tempe, AZ, United States
16. Subject preference emerges as cross-modal strategy for linguistic processing. Brain Res 2018;1691:105-117. PMID: 29627484. DOI: 10.1016/j.brainres.2018.03.029.
Abstract
Research on spoken languages has identified a "subject preference" processing strategy for tackling input that is syntactically ambiguous as to whether a sentence-initial NP is a subject or object. The present study documents that the "subject preference" strategy is also seen in the processing of a sign language, supporting the hypothesis that the "subject"-first strategy is universal and not dependent on the language modality (spoken vs. signed). Deaf signers of Austrian Sign Language (ÖGS) were shown videos of locally ambiguous signed sentences in SOV and OSV word orders. Electroencephalogram (EEG) data indicated higher cognitive load in response to OSV stimuli (i.e. a negativity for OSV compared to SOV), indicative of syntactic reanalysis cost. A finding that is specific to the visual modality is that the ERP (event-related potential) effect reflecting linguistic reanalysis occurred earlier than might have been expected, that is, before the time point when the path movement of the disambiguating sign was visible. We suggest that in the visual modality, transitional movement of the articulators prior to the disambiguating verb position or co-occurring non-manual (face/body) markings were used in resolving the local ambiguity in ÖGS. Thus, whereas the processing strategy of "subject preference" is cross-modal at the linguistic level, the cues that enable the processor to apply that strategy differ in signing as compared to speech.
17. Malaia E, Borneman JD, Wilbur RB. Information transfer capacity of articulators in American Sign Language. Lang Speech 2018;61:97-112. PMID: 28565932. DOI: 10.1177/0023830917708461.
Abstract
The ability to convey information is a fundamental property of communicative signals. For sign languages, which are overtly produced with multiple, completely visible articulators, the question arises as to how the various channels coordinate and interact with each other. We analyze motion capture data of American Sign Language (ASL) narratives, and show that the capacity of information throughput, mathematically defined, is highest on the dominant hand (DH). We further demonstrate that information transfer capacity is also significant for the non-dominant hand (NDH) and the head channel, as compared to control channels (ankles). We discuss both redundancy and independence in articulator motion in sign language, and argue that the NDH and the head articulators contribute to the overall information transfer capacity, indicating that they are neither completely redundant to, nor completely independent of, the DH.
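One rough proxy for per-channel throughput is the entropy of each articulator's discretized speed signal; the paper's formal definition differs, so the sketch below is only an illustration on placeholder motion-capture tracks, with the frame rate, scales, and bin edges all assumed.

```python
# Sketch: an entropy proxy for per-channel information throughput in motion
# capture (illustrative only; not the paper's formal capacity measure).
import numpy as np

rng = np.random.default_rng(6)
fs = 120                                   # assumed mocap frame rate (Hz)
n = fs * 30                                # 30 s of data

channels = {                               # placeholder 3-D position tracks
    "dominant_hand": rng.standard_normal((n, 3)) * 2.0,
    "nondominant_hand": rng.standard_normal((n, 3)) * 1.0,
    "head": rng.standard_normal((n, 3)) * 0.5,
    "ankle_control": rng.standard_normal((n, 3)) * 0.1,
}

edges = np.linspace(0.0, 15.0, 33)         # shared bins keep channels comparable

def entropy_bits(x, edges):
    """Shannon entropy (bits/sample) of a discretized speed signal."""
    counts, _ = np.histogram(x, bins=edges)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log2(p)).sum()

for name, pos in channels.items():
    speed = np.linalg.norm(np.diff(pos, axis=0), axis=1)  # per-frame speed
    print(f"{name:17s} {entropy_bits(speed, edges):.2f} bits/sample")
```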
18. Malaia E, Cockerham D, Rublein K. Visual integration of fear and anger emotional cues by children on the autism spectrum and neurotypical peers: An EEG study. Neuropsychologia 2017. PMID: 28633887. DOI: 10.1016/j.neuropsychologia.2017.06.014.
Abstract
Communication deficits in children with autism spectrum disorders (ASD) are often related to inefficient interpretation of emotional cues, which are conveyed visually through both facial expressions and body language. The present study examined ASD behavioral and ERP responses to emotional expressions of anger and fear, as conveyed by the face and body. Behavioral results showed significantly faster response times for the ASD group than for the typically developing (TD) group when processing fear, but not anger, in isolated face expressions, isolated body expressions, and in the integration of the two. In addition, EEG data for the N170 and P1 indicated processing differences between fear and anger stimuli only in the TD group, suggesting that individuals with ASD may not be distinguishing between emotional expressions. These results suggest that ASD children may employ a different neural mechanism for visual emotion recognition than their TD peers, possibly relying on inferential processing.
19. Current and future methodologies for quantitative analysis of information transfer in sign language and gesture data. Behav Brain Sci 2017;40:e63. DOI: 10.1017/s0140525x15002988.
Abstract
State-of-the-art methods of analysis of video data now include motion capture and optical flow from video recordings. These techniques allow for biological differentiation between visual communication and noncommunicative motion, enabling further inquiry into the neural bases of communication. The requirements for additional noninvasive methods of data collection and automatic analysis of natural gesture and sign language are discussed.
20. Malaia E, Borneman JD, Wilbur RB. Assessment of information content in visual signal: analysis of optical flow fractal complexity. Vis Cogn 2016. DOI: 10.1080/13506285.2016.1225142.
21.
Abstract
Comprehension of complex sentences is necessarily supported by both syntactic and semantic knowledge, but what linguistic factors trigger a reader's reliance on a specific system? This functional neuroimaging study orthogonally manipulated argument plausibility and verb event type to investigate the cortical bases of the semantic effect on argument comprehension during reading. The data suggest that telic verbs facilitate online processing by consolidating event schemas in episodic memory and by easing the computation of syntactico-thematic hierarchies in the left inferior frontal gyrus. The results demonstrate that syntax-semantics integration relies on trade-offs among a distributed network of regions for maximum comprehension efficiency.
Affiliation(s)
- Evie Malaia: Department of Curriculum and Instruction, Center for Mind, Brain, and Education, University of Texas at Arlington, Arlington, TX, USA
22. Malaia E, Talavage TM, Wilbur RB. Functional connectivity in task-negative network of the Deaf: effects of sign language experience. PeerJ 2014;2:e446. PMID: 25024915. PMCID: PMC4081178. DOI: 10.7717/peerj.446.
Abstract
Prior studies investigating cortical processing in Deaf signers suggest that life-long experience with sign language and/or auditory deprivation may alter the brain's anatomical structure and the function of brain regions typically recruited for auditory processing (Emmorey et al., 2010; Pénicaud et al., 2013, inter alia). We report the first investigation of the task-negative network in Deaf signers and its functional connectivity, that is, the temporal correlations among spatially remote neurophysiological events. We show that Deaf signers manifest increased functional connectivity between posterior cingulate/precuneus and left medial temporal gyrus (MTG), as well as between inferior parietal lobe and medial temporal gyrus in the right hemisphere, areas that have been found to show functional recruitment specifically during sign language processing. These findings suggest that the organization of the brain at the level of inter-network connectivity is likely affected by experience with processing visual language, although sensory deprivation could be another source of the difference. We hypothesize that connectivity alterations in the task-negative network reflect predictive/automatized processing of the visual signal.
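Functional connectivity in this sense is the temporal correlation between region time courses. A minimal sketch for one region pair, with a Fisher z value suitable for group-level comparison, follows; the region names and data are placeholders.

```python
# Sketch: functional connectivity as temporal correlation between two ROI
# time courses, with a Fisher z value for group comparison.
import numpy as np

rng = np.random.default_rng(7)
shared = rng.standard_normal(240)                # shared slow fluctuation
pcc = shared + 0.8 * rng.standard_normal(240)    # placeholder PCC/precuneus
mtg = shared + 0.8 * rng.standard_normal(240)    # placeholder left MTG

r = np.corrcoef(pcc, mtg)[0, 1]
print(f"r = {r:.2f}, Fisher z = {np.arctanh(r):.2f}")
```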
Affiliation(s)
- Evie Malaia: Center for Mind, Brain, and Education, University of Texas at Arlington, TX, USA
- Thomas M Talavage: Weldon School of Biomedical Engineering, Purdue University, IN, USA; School of Electrical and Computer Engineering, Purdue University, IN, USA
- Ronnie B Wilbur: Speech, Language, and Hearing Sciences, and Linguistics Program, Purdue University, IN, USA
23. Malaia E, Wilbur RB, Milkovic M. Kinematic parameters of signed verbs. J Speech Lang Hear Res 2013;56:1677-1688. PMID: 23926292. DOI: 10.1044/1092-4388(2013/12-0257).
Abstract
Purpose: Sign language users recruit physical properties of visual motion to convey linguistic information. Research on American Sign Language (ASL) indicates that signers systematically use kinematic features (e.g., velocity, deceleration) of dominant hand motion to distinguish specific semantic properties of verb classes in production (Malaia & Wilbur, 2012a) and process these distinctions as part of the phonological structure of these verb classes in comprehension (Malaia, Ranaweera, Wilbur, & Talavage, 2012). These studies are driven by the event visibility hypothesis of Wilbur (2003), who proposed that such use of kinematic features should be universal to sign languages (SLs) due to the grammaticalization of physics and geometry for linguistic purposes. In a prior motion capture study, Malaia and Wilbur (2012a) lent support to the event visibility hypothesis in ASL, but quantitative data from other SLs to test the generalization have been lacking.
Method: The authors investigated the kinematic parameters of predicates in Croatian Sign Language (Hrvatskom Znakovnom Jeziku [HZJ]).
Results: Kinematic features of verb signs were affected both by the event structure of the predicate (semantics) and by phrase position within the sentence (prosody).
Conclusion: The data demonstrate that kinematic features of motion in HZJ verb signs are recruited to convey morphological and prosodic information. This is the first crosslinguistic motion capture confirmation that specific kinematic properties of articulator motion are grammaticalized in other SLs to express linguistic features.
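Velocity and deceleration of the kind analyzed here follow directly from finite differences of 3-D marker positions. A sketch under an assumed frame rate and a synthetic single-stroke movement (the trajectory and units are placeholders):

```python
# Sketch: kinematic features (peak speed, peak deceleration) from a 3-D
# motion-capture track of the dominant hand, via finite differences.
import numpy as np

fs = 120.0                                    # assumed frame rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
# Placeholder bell-shaped movement along x, like a single sign stroke.
pos = np.stack([np.sin(np.pi * t), np.zeros_like(t), np.zeros_like(t)], axis=1)

vel = np.gradient(pos, 1 / fs, axis=0)        # velocity (units/s)
speed = np.linalg.norm(vel, axis=1)
accel = np.gradient(speed, 1 / fs)            # signed rate of change of speed

print(f"peak speed {speed.max():.2f} units/s; "
      f"peak deceleration {(-accel).max():.2f} units/s^2")
```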
24. Talavage TM, Gonzalez-Castillo J, Scott SK. Auditory neuroimaging with fMRI and PET. Hear Res 2013;307:4-15. PMID: 24076424. DOI: 10.1016/j.heares.2013.09.009.
Abstract
For much of the past 30 years, investigations of auditory perception and language have been enhanced or even driven by the use of functional neuroimaging techniques that specialize in localization of central responses. Beginning with investigations using positron emission tomography (PET) and gradually shifting primarily to the use of functional magnetic resonance imaging (fMRI), auditory neuroimaging has greatly advanced our understanding of the organization and response properties of brain regions critical to the perception of, and communication with, the acoustic world in which we live. As the complexity of the questions being addressed has increased, the techniques, experiments and analyses applied have also become more nuanced and specialized. A brief review of the history of these investigations sets the stage for an overview and analysis of how these neuroimaging modalities are becoming ever more effective tools for understanding the auditory brain. We conclude with a brief discussion of open methodological issues as well as potential clinical applications for auditory neuroimaging. This article is part of a Special Issue entitled Human Auditory Neuroimaging.
Affiliation(s)
- Thomas M Talavage: School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA; Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
25. Leonard MK, Ferjan Ramirez N, Torres C, Hatrak M, Mayberry RI, Halgren E. Neural stages of spoken, written, and signed word processing in beginning second language learners. Front Hum Neurosci 2013;7:322. PMID: 23847496. PMCID: PMC3698463. DOI: 10.3389/fnhum.2013.00322.
Abstract
We combined magnetoencephalography (MEG) and magnetic resonance imaging (MRI) to examine how sensory modality, language type, and language proficiency interact during two fundamental stages of word processing: (1) an early word encoding stage, and (2) a later supramodal lexico-semantic stage. Adult native English speakers who were learning American Sign Language (ASL) performed a semantic task for spoken and written English words, and ASL signs. During the early time window, written words evoked responses in left ventral occipitotemporal cortex, and spoken words in left superior temporal cortex. Signed words evoked activity in right intraparietal sulcus that was marginally greater than for written words. During the later time window, all three types of words showed significant activity in the classical left fronto-temporal language network, the first demonstration of such activity in individuals with so little second language (L2) instruction in sign. In addition, a dissociation between semantic congruity effects and overall MEG response magnitude for ASL responses suggested shallower and more effortful processing, presumably reflecting novice L2 learning. Consistent with previous research on non-dominant language processing in spoken languages, the L2 ASL learners also showed recruitment of right hemisphere and lateral occipital cortex. These results demonstrate that late lexico-semantic processing utilizes a common substrate, independent of modality, and that proficiency effects in sign language are comparable to those in spoken language.
Affiliation(s)
- Matthew K Leonard: Department of Radiology, University of California San Diego, La Jolla, CA, USA; Multimodal Imaging Laboratory, Department of Radiology, University of California San Diego, La Jolla, CA, USA