1
Querella P, Attout L, Fias W, Majerus S. From long-term to short-term: Distinct neural networks underlying semantic knowledge and its recruitment in working memory. Neuropsychologia 2024; 202:108949. PMID: 38971371. DOI: 10.1016/j.neuropsychologia.2024.108949.
Abstract
Although numerous studies suggest that working memory (WM) and semantic long-term knowledge interact, the nature and underlying neural mechanisms of this interaction remain poorly understood. Using functional magnetic resonance imaging (fMRI), this study investigated the extent to which neural markers of semantic knowledge in long-term memory (LTM) are activated during the WM maintenance stage in 32 young adults. First, the multivariate neural patterns associated with four semantic categories were determined via an implicit semantic activation task. Next, the participants maintained words - the names of the four semantic categories implicitly activated in the first task - in a verbal WM task. Multi-voxel pattern analyses showed reliable neural decoding of the four semantic categories in both the implicit semantic activation and the verbal WM tasks. Critically, however, no between-task classification of semantic categories was observed. Searchlight analyses showed that in the WM task, semantic category information could be decoded in anterior temporal areas associated with abstract semantic category knowledge. In the implicit semantic activation task, semantic category information was decoded in superior temporal, occipital and frontal cortices associated with domain-specific semantic feature representations. These results indicate that item-level semantic activation during verbal WM involves shallow rather than deep semantic information.
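The study's key contrast - reliable decoding within each task but no between-task transfer - follows standard MVPA cross-classification logic. A minimal sketch with synthetic data (the sizes, noise level, and nearest-centroid classifier are illustrative assumptions, not the paper's pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_trials = 50, 40                          # hypothetical: 50 voxels, 40 trials per task
categories = np.arange(4).repeat(n_trials // 4)   # 4 semantic categories, 10 trials each

# Each task gets its own category "code": a distinct mean voxel pattern per category.
codes_task_a = rng.normal(size=(4, n_vox))
codes_task_b = rng.normal(size=(4, n_vox))        # unrelated to task A's codes

def simulate(codes):
    """One run of noisy trial patterns around the category mean patterns."""
    return codes[categories] + 0.5 * rng.normal(size=(n_trials, n_vox))

def nearest_centroid(train_x, train_y, test_x):
    """Classify each test pattern by its nearest training-category centroid."""
    centroids = np.stack([train_x[train_y == c].mean(axis=0) for c in range(4)])
    dists = ((test_x[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)

train_a = simulate(codes_task_a)
within = (nearest_centroid(train_a, categories, simulate(codes_task_a)) == categories).mean()
between = (nearest_centroid(train_a, categories, simulate(codes_task_b)) == categories).mean()
print(f"within-task accuracy: {within:.2f}, between-task accuracy: {between:.2f}")
```

With unrelated category codes the between-task score hovers around the 0.25 chance level, mirroring the paper's null cross-task result; setting `codes_task_b = codes_task_a` would restore transfer.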
Affiliation(s)
- Pauline Querella
- Psychology and Cognitive Neuroscience Research Unit, University of Liège, Belgium.
- Lucie Attout
- Psychology and Cognitive Neuroscience Research Unit, University of Liège, Belgium; National Fund for Scientific Research, Belgium, Department of Psychology, Psychology and Cognitive Neuroscience Research Unit, University of Liège, Place des Orateurs 1 (B33), 4000, Liège, Belgium
- Wim Fias
- Department of Experimental Psychology, Ghent University, Belgium
- Steve Majerus
- Psychology and Cognitive Neuroscience Research Unit, University of Liège, Belgium; National Fund for Scientific Research, Belgium, Department of Psychology, Psychology and Cognitive Neuroscience Research Unit, University of Liège, Place des Orateurs 1 (B33), 4000, Liège, Belgium
2
Silva AB, Liu JR, Metzger SL, Bhaya-Grossman I, Dougherty ME, Seaton MP, Littlejohn KT, Tu-Chan A, Ganguly K, Moses DA, Chang EF. A bilingual speech neuroprosthesis driven by cortical articulatory representations shared between languages. Nat Biomed Eng 2024. PMID: 38769157. DOI: 10.1038/s41551-024-01207-5.
Abstract
Advancements in decoding speech from brain activity have focused on decoding a single language. Hence, the extent to which bilingual speech production relies on unique or shared cortical activity across languages has remained unclear. Here, we leveraged electrocorticography, along with deep-learning and statistical natural-language models of English and Spanish, to record and decode activity from speech-motor cortex of a Spanish-English bilingual with vocal-tract and limb paralysis into sentences in either language. This was achieved without requiring the participant to manually specify the target language. Decoding models relied on shared vocal-tract articulatory representations across languages, which allowed us to build a syllable classifier that generalized across a shared set of English and Spanish syllables. Transfer learning expedited training of the bilingual decoder by enabling neural data recorded in one language to improve decoding in the other language. Overall, our findings suggest shared cortical articulatory representations that persist after paralysis and enable the decoding of multiple languages without the need to train separate language-specific decoders.
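The shared-representation claim has a simple computational reading: if both languages drive the same articulatory feature code, a classifier trained on one language should transfer to the other. A toy sketch under that assumption (the feature dimensions, trial counts, linear read-out, and amplitude difference between languages are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_feat, n_trials = 60, 8, 40               # hypothetical sizes
syllables = rng.normal(size=(4, n_feat))          # 4 shared syllables as articulatory features
labels = np.arange(4).repeat(n_trials // 4)

# One cortical map from articulatory features to electrode activity, shared across languages.
shared_map = rng.normal(size=(n_feat, n_vox))

def record(language_gain):
    """Neural trials: shared articulatory code; language only scales amplitude slightly."""
    return language_gain * syllables[labels] @ shared_map + rng.normal(size=(n_trials, n_vox))

english, spanish = record(1.0), record(0.9)

def nearest_centroid(train_x, train_y, test_x):
    centroids = np.stack([train_x[train_y == c].mean(axis=0) for c in range(4)])
    d = ((test_x[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

# Train on "English" trials, test on "Spanish" trials of the same shared syllables.
cross_language = (nearest_centroid(english, labels, spanish) == labels).mean()
print(f"cross-language syllable accuracy: {cross_language:.2f}")
```

Because the articulatory code is shared, the syllable classifier generalizes across languages; with language-specific maps it would fall to chance.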
Affiliation(s)
- Alexander B Silva
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
- University of California, Berkeley - University of California, San Francisco Graduate Program in Bioengineering, Berkeley, CA, USA
- Jessie R Liu
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
- University of California, Berkeley - University of California, San Francisco Graduate Program in Bioengineering, Berkeley, CA, USA
- Sean L Metzger
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
- University of California, Berkeley - University of California, San Francisco Graduate Program in Bioengineering, Berkeley, CA, USA
- Ilina Bhaya-Grossman
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
- University of California, Berkeley - University of California, San Francisco Graduate Program in Bioengineering, Berkeley, CA, USA
- Maximilian E Dougherty
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Margaret P Seaton
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Kaylo T Littlejohn
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA, USA
- Adelyn Tu-Chan
- Department of Neurology, University of California, San Francisco, San Francisco, CA, USA
- Karunesh Ganguly
- Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
- Department of Neurology, University of California, San Francisco, San Francisco, CA, USA
- David A Moses
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
- Edward F Chang
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA.
- Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA.
- University of California, Berkeley - University of California, San Francisco Graduate Program in Bioengineering, Berkeley, CA, USA.
3
Timofeeva P, Finisguerra A, D’Argenio G, García AM, Carreiras M, Quiñones I, Urgesi C, Amoruso L. Switching off: disruptive TMS reveals distinct contributions of the posterior middle temporal gyrus and angular gyrus to bilingual speech production. Cereb Cortex 2024; 34:bhae188. PMID: 38741267. PMCID: PMC11090997. DOI: 10.1093/cercor/bhae188.
Abstract
The role of the left temporoparietal cortex in speech production has been extensively studied during native language processing, proving crucial in controlled lexico-semantic retrieval under varying cognitive demands. Yet, its role in bilinguals, fluent in both native and second languages, remains poorly understood. Here, we employed continuous theta burst stimulation (cTBS) to disrupt neural activity in the left posterior middle temporal gyrus (pMTG) and angular gyrus (AG) while Italian-Friulian bilinguals performed a cued picture-naming task. The task involved between-language blocks (naming objects in Italian or Friulian) and within-language blocks (naming objects ["knife"] or associated actions ["cut"] in a single language), in which participants could either maintain (non-switch) or change (switch) instructions based on cues. During within-language blocks, cTBS over the pMTG produced faster naming in high-demanding switch trials, while cTBS over the AG elicited slower latencies in low-demanding non-switch trials. No cTBS effects were observed in the between-language block. Our findings suggest a causal involvement of the left pMTG and AG in lexico-semantic processing across languages, with distinct contributions to controlled vs. "automatic" retrieval, respectively. However, they do not support shared control mechanisms for within- and between-language production. Altogether, these results inform neurobiological models of semantic control in bilinguals.
Affiliation(s)
- Polina Timofeeva
- Basque Center on Cognition, Brain, and Language (BCBL), Paseo Mikeletegi 69, 2nd floor, 20009 San Sebastian, Spain
- Universidad del País Vasco (UPV/EHU), Doctoral School, 48940, Sarriena s/n, Leioa, Spain
- Alessandra Finisguerra
- Scientific Institute, IRCCS E. Medea, Via Cialdini 29, 33037, Pasian di Prato, UD, Italy
- Giulia D’Argenio
- Laboratory of Cognitive Neuroscience, Department of Languages and Literatures, Communication, Education and Society, University of Udine, Via Margreth 3, 33100, Udine, Italy
- Adolfo M García
- Cognitive Neuroscience Center (CNC), University of San Andres, Vito Dumas 284, B1644 BID, Buenos Aires, Argentina
- Global Brain Health Institute (GBHI), University of California, Parnassus 513, CA 94143, San Francisco, United States & Trinity College Dublin, College Green, Dublin 2, D02X9W9, Ireland
- Departamento de Lingüística y Literatura, Facultad de Humanidades, Universidad de Santiago de Chile, Av. Libertador B. O'Higgins 3363, 9170022, Santiago de Chile, Chile
- Manuel Carreiras
- Basque Center on Cognition, Brain, and Language (BCBL), Paseo Mikeletegi 69, 2nd floor, 20009 San Sebastian, Spain
- Universidad del País Vasco (UPV/EHU), Doctoral School, 48940, Sarriena s/n, Leioa, Spain
- Ikerbasque, Basque Foundation for Science, Plaza Euskadi 5, 48009, Bilbao, Spain
- Ileana Quiñones
- Ikerbasque, Basque Foundation for Science, Plaza Euskadi 5, 48009, Bilbao, Spain
- Neurosciences Department, BioGipuzkoa Health Research Institute, Paseo Dr. Begiristain s/n, 20014, San Sebastian, Spain
- Cosimo Urgesi
- Scientific Institute, IRCCS E. Medea, Via Cialdini 29, 33037, Pasian di Prato, UD, Italy
- Laboratory of Cognitive Neuroscience, Department of Languages and Literatures, Communication, Education and Society, University of Udine, Via Margreth 3, 33100, Udine, Italy
- Lucia Amoruso
- Basque Center on Cognition, Brain, and Language (BCBL), Paseo Mikeletegi 69, 2nd floor, 20009 San Sebastian, Spain
- Cognitive Neuroscience Center (CNC), University of San Andres, Vito Dumas 284, B1644 BID, Buenos Aires, Argentina
- Ikerbasque, Basque Foundation for Science, Plaza Euskadi 5, 48009, Bilbao, Spain
4
Dong J, Yan H, Mei L, Wang G, Qu J, Liu X, Xu S, Jiang W, Zheng A, Feng G. Greater Pattern Similarity between Mother Tongue and Second Language in the Right ATL Facilitates Understanding of Written Language. Neuroscience 2024; 544:117-127. PMID: 38447688. DOI: 10.1016/j.neuroscience.2024.02.030.
Abstract
Previous research has mapped out the brain regions that respond to semantic stimuli presented visually and auditorily, but there is debate about whether semantic representation is modality-specific (only written or only spoken) or modality-invariant (both written and spoken). The mechanism of semantic representation underlying native (L1) and second language (L2) comprehension in different modalities, as well as how this mechanism is influenced by L2 proficiency, remains unclear. We used functional magnetic resonance imaging (fMRI) data from the OpenNeuro database to calculate neural pattern similarity across native and second languages (Spanish and English) for different input modalities (written and spoken) and learning sessions (before and after training). The correlations between behavioral performance and cross-language pattern similarity for L1 and L2 were also calculated. Spanish-English bilingual adolescents (N = 24; ages 16-17; 19 girls) participated in a 3-month English immersion after-school program. As L2 proficiency increased, greater cross-language pattern similarity between L1 and L2 spoken words was observed in the left pars triangularis. Cross-language pattern similarity between L1 and L2 written words was observed in the right anterior temporal lobe. Brain-behavior correlations indicated that increased cross-language pattern similarity between L1 and L2 written words in the right anterior temporal lobe was associated with L2 written word comprehension. This study identified an effective neurofunctional predictor related to L2 written word comprehension.
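Cross-language pattern similarity of the kind reported here is typically a voxel-pattern correlation between L1 and L2 responses, which can then be related to behavior across participants. A hedged sketch with simulated data (the generative model, voxel count, noise levels, and the behavioral link are assumptions for illustration only; N = 24 mirrors the study's sample size):

```python
import numpy as np

rng = np.random.default_rng(1)
n_sub, n_vox = 24, 60                 # N = 24 as in the study; voxel count is invented

def pearson(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

similarity, comprehension = [], []
for _ in range(n_sub):
    shared = rng.normal(size=n_vox)                  # word's modality-invariant pattern
    drift = rng.uniform(0.2, 2.0)                    # how far this subject's L2 departs from L1
    l1 = shared + 0.2 * rng.normal(size=n_vox)
    l2 = shared + drift * rng.normal(size=n_vox)
    r = pearson(l1, l2)                              # cross-language pattern similarity
    similarity.append(r)
    comprehension.append(0.5 + 0.4 * r + 0.02 * rng.normal())  # behavior tracks similarity

brain_behavior_r = pearson(np.asarray(similarity), np.asarray(comprehension))
print(f"brain-behavior correlation: {brain_behavior_r:.2f}")
```

Subjects whose L2 patterns stay closer to their L1 patterns show better simulated comprehension, which is the logic behind the reported brain-behavior correlation.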
Affiliation(s)
- Jie Dong
- Key Laboratory for Artificial Intelligence and Cognitive Neuroscience of Language, Xi'an International Studies University, Xi'an, China
- Hao Yan
- Key Laboratory for Artificial Intelligence and Cognitive Neuroscience of Language, Xi'an International Studies University, Xi'an, China
- Leilei Mei
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, 510631 Guangzhou, China
- Gang Wang
- Xi'an GEM Flower Changqing Hospital, Xi'an, China
- Jing Qu
- Key Laboratory of Behavioral and Mental Health of Gansu, Northwest Normal University, Lanzhou, China
- Xinyi Liu
- Key Laboratory for Artificial Intelligence and Cognitive Neuroscience of Language, Xi'an International Studies University, Xi'an, China
- Shanshan Xu
- Key Laboratory for Artificial Intelligence and Cognitive Neuroscience of Language, Xi'an International Studies University, Xi'an, China
- Wenjing Jiang
- Key Laboratory for Artificial Intelligence and Cognitive Neuroscience of Language, Xi'an International Studies University, Xi'an, China
- Aoke Zheng
- Key Laboratory for Artificial Intelligence and Cognitive Neuroscience of Language, Xi'an International Studies University, Xi'an, China
- Genyi Feng
- Xi'an GEM Flower Changqing Hospital, Xi'an, China.
5
Malik-Moraleda S, Jouravlev O, Taliaferro M, Mineroff Z, Cucu T, Mahowald K, Blank IA, Fedorenko E. Functional characterization of the language network of polyglots and hyperpolyglots with precision fMRI. Cereb Cortex 2024; 34:bhae049. PMID: 38466812. PMCID: PMC10928488. DOI: 10.1093/cercor/bhae049.
Abstract
How do polyglots - individuals who speak five or more languages - process their languages, and what can this population tell us about the language system? Using fMRI, we identified the language network in each of 34 polyglots (including 16 hyperpolyglots with knowledge of 10+ languages) and examined its response to the native language, non-native languages of varying proficiency, and unfamiliar languages. All language conditions engaged all areas of the language network relative to a control condition. Languages that participants rated as higher proficiency elicited stronger responses, except for the native language, which elicited a similar or lower response than a non-native language of similar proficiency. Furthermore, unfamiliar languages that were typologically related to the participants' high-to-moderate-proficiency languages elicited a stronger response than unfamiliar unrelated languages. The results suggest that the language network's response magnitude scales with the degree of engagement of linguistic computations (e.g. related to lexical access and syntactic-structure building). We also replicated a prior finding of weaker responses to native language in polyglots than non-polyglot bilinguals. These results contribute to our understanding of how multiple languages coexist within a single brain and provide new evidence that the language network responds more strongly to stimuli that more fully engage linguistic computations.
Affiliation(s)
- Saima Malik-Moraleda
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, United States
- Program in Speech and Hearing Bioscience and Technology, Harvard University, Boston, MA 02114, United States
- Olessia Jouravlev
- Department of Cognitive Science, Carleton University, Ottawa K1S 5B6, Canada
- Maya Taliaferro
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, United States
- Zachary Mineroff
- Eberly Center, Carnegie Mellon University, Pittsburgh, PA 15289, United States
- Theodore Cucu
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA 15289, United States
- Kyle Mahowald
- Department of Linguistics, The University of Texas at Austin, Austin, TX 78712, United States
- Idan A Blank
- Department of Psychology, University of California Los Angeles, Los Angeles, CA 90095, United States
- Evelina Fedorenko
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, United States
- Program in Speech and Hearing Bioscience and Technology, Harvard University, Boston, MA 02114, United States
6
Malik-Moraleda S, Jouravlev O, Taliaferro M, Mineroff Z, Cucu T, Mahowald K, Blank IA, Fedorenko E. Functional characterization of the language network of polyglots and hyperpolyglots with precision fMRI. bioRxiv 2024 [preprint]. PMID: 36711949. PMCID: PMC9882290. DOI: 10.1101/2023.01.19.524657.
Abstract
How do polyglots - individuals who speak five or more languages - process their languages, and what can this population tell us about the language system? Using fMRI, we identified the language network in each of 34 polyglots (including 16 hyperpolyglots with knowledge of 10+ languages) and examined its response to the native language, non-native languages of varying proficiency, and unfamiliar languages. All language conditions engaged all areas of the language network relative to a control condition. Languages that participants rated as higher proficiency elicited stronger responses, except for the native language, which elicited a similar or lower response than a non-native language of similar proficiency. Furthermore, unfamiliar languages that were typologically related to the participants' high-to-moderate-proficiency languages elicited a stronger response than unfamiliar unrelated languages. The results suggest that the language network's response magnitude scales with the degree of engagement of linguistic computations (e.g., related to lexical access and syntactic-structure building). We also replicated a prior finding of weaker responses to native language in polyglots than non-polyglot bilinguals. These results contribute to our understanding of how multiple languages co-exist within a single brain and provide new evidence that the language network responds more strongly to stimuli that more fully engage linguistic computations.
Affiliation(s)
- Saima Malik-Moraleda
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139
- Program in Speech and Hearing Bioscience and Technology, Harvard University, Boston, MA 02114
- Olessia Jouravlev
- Department of Cognitive Science, Carleton University, Ottawa, Canada, K1S 5B6
- Maya Taliaferro
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139
- Zachary Mineroff
- Eberly Center, Carnegie Mellon University, Pittsburgh, PA 15289
- Theodore Cucu
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA 15289
- Kyle Mahowald
- Department of Linguistics, The University of Texas at Austin, Austin, TX 78712
- Idan A. Blank
- Department of Psychology, University of California Los Angeles, CA 90095
- Evelina Fedorenko
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139
- Program in Speech and Hearing Bioscience and Technology, Harvard University, Boston, MA 02114
7
Wang J, Lin H, Cai Q. How Grammar Conveys Meaning: Language-Specific Spatial Encoding Patterns and Cross-Language Commonality in Higher-Order Neural Space. J Neurosci 2023; 43:7831-7841. PMID: 37714708. PMCID: PMC10648508. DOI: 10.1523/jneurosci.0599-23.2023.
Abstract
Languages come in different forms but have shared meanings to convey. Some meanings are expressed by sentence structure and morphologic inflections rather than content words, such as indicating time frame using tense. This fMRI study investigates whether there is a cross-language common representation of grammatical meanings that can be identified from neural signatures in the bilingual human brain. Based on the representations in intersentence neural similarity space, identifying the grammatical construction of a sentence in one language using models trained on the other language resulted in reliable accuracy. By contrast, cross-language identification of grammatical construction by spatially matched activation patterns was only marginally accurate. Brain locations representing grammatical meaning in the two languages were interleaved in common regions bilaterally. The locations of voxels representing grammatical features in the second language were more varied across individuals than those representing the first language. These findings suggest that grammatical meaning is represented by language-specific activation patterns, unlike lexical semantics. Commonality of grammatical meaning is neurally reflected only in the interstimulus similarity space.

SIGNIFICANCE STATEMENT Whether the human brain encodes sentence-level meanings beyond content words similarly across languages has been a long-standing question. We characterize the neural representations of similar grammatical meanings in different languages. Using complementary analytic approaches on fMRI data, we show that the same grammatical meaning is neurally represented as a common pattern of neural distances between sentences. The results suggest the possibility of identifying a specific grammatical meaning expressed through the different morphologic and syntactic implementations of different languages. The neural realization of grammatical meanings is constrained by the specific language being used, but the relationships between the neural representations of sentences are preserved across languages. These findings have theoretical implications for the distinction between grammatical and lexical meaning.
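The contrast between spatially matched patterns and the intersentence similarity space can be made concrete: rotating the voxel space destroys voxel-wise correspondence while preserving every between-sentence distance. A minimal sketch (sentence and voxel counts are arbitrary illustrative choices, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(2)
n_sent, n_vox = 12, 40

def pearson(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

lang_a = rng.normal(size=(n_sent, n_vox))            # sentence patterns in language 1
q, _ = np.linalg.qr(rng.normal(size=(n_vox, n_vox))) # random orthogonal rotation
lang_b = lang_a @ q                                  # language 2: same geometry, different code

# Spatially matched comparison: correlate the same sentence's voxel patterns across languages.
direct = float(np.mean([pearson(lang_a[i], lang_b[i]) for i in range(n_sent)]))

# Similarity-space comparison: correlate the two inter-sentence distance structures.
def pair_dists(x):
    return np.array([np.linalg.norm(x[i] - x[j])
                     for i in range(n_sent) for j in range(i + 1, n_sent)])

geometry = pearson(pair_dists(lang_a), pair_dists(lang_b))
print(f"voxel-matched r: {direct:.2f}, similarity-space r: {geometry:.2f}")
```

The voxel-matched correlation hovers near zero while the distance structure matches perfectly, which is the pattern the abstract reports: commonality only in the similarity space.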
Affiliation(s)
- Jing Wang
- Key Laboratory of Brain Functional Genomics (MOE & STCSM), Affiliated Mental Health Center (ECNU), Institute of Brain and Education Innovation, School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China
- Shanghai Changning Mental Health Center, Shanghai 200335, China
- Shanghai Center for Brain Science and Brain-Inspired Technology, East China Normal University, Shanghai 200062, China
- Hui Lin
- Shanghai Key Laboratory of Artificial Intelligence in Learning and Cognitive Science, LAIX Inc., Shanghai 200090, China
- Qing Cai
- Key Laboratory of Brain Functional Genomics (MOE & STCSM), Affiliated Mental Health Center (ECNU), Institute of Brain and Education Innovation, School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China
- Shanghai Changning Mental Health Center, Shanghai 200335, China
- Shanghai Center for Brain Science and Brain-Inspired Technology, East China Normal University, Shanghai 200062, China
- New York University-ECNU Institute of Brain and Cognitive Science, New York University, Shanghai 200062, China
8
Patel T, Morales M, Pickering MJ, Hoffman P. A common neural code for meaning in discourse production and comprehension. Neuroimage 2023; 279:120295. PMID: 37536526. DOI: 10.1016/j.neuroimage.2023.120295.
Abstract
How does the brain code the meanings conveyed by language? Neuroimaging studies have investigated this by linking neural activity patterns during discourse comprehension to semantic models of language content. Here, we applied this approach to the production of discourse for the first time. Participants underwent fMRI while producing and listening to discourse on a range of topics. We used a distributional semantic model to quantify the similarity between different speech passages and identified where similarity in neural activity was predicted by semantic similarity. When people produced discourse, speech on similar topics elicited similar activation patterns in a widely distributed and bilateral brain network. This network was overlapping with, but more extensive than, the regions that showed similarity effects during comprehension. Critically, cross-task neural similarities between comprehension and production were also predicted by similarities in semantic content. This result suggests that discourse semantics engages a common neural code that is shared between comprehension and production. Effects of semantic similarity were bilateral in all three representational similarity analyses (RSA), even while univariate activation contrasts in the same data indicated left-lateralised BOLD responses. This indicates that right-hemisphere regions encode semantic properties even when they are not activated above baseline. We suggest that right-hemisphere regions play a supporting role in processing the meaning of discourse during both comprehension and production.
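The analysis style described here is representational similarity analysis: compare the pairwise similarity structure of a semantic model's passage vectors with the pairwise similarity structure of the corresponding activation patterns. A schematic version with simulated data (the linear "neural code", all sizes, and the noise level are assumptions, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(3)
n_passages, n_dim, n_vox = 10, 20, 50

# Hypothetical distributional-model vectors for each speech passage.
topics = rng.normal(size=(n_passages, n_dim))
# Simulated activity: a linear "neural code" of the topic vectors plus noise.
code = rng.normal(size=(n_dim, n_vox))
activity = topics @ code + 0.5 * rng.normal(size=(n_passages, n_vox))

def rdm(x):
    """Pairwise correlation distances between rows (upper triangle)."""
    z = (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)
    c = z @ z.T / x.shape[1]
    iu = np.triu_indices(len(x), k=1)
    return 1 - c[iu]

def spearman(a, b):
    ra = a.argsort().argsort().astype(float)
    rb = b.argsort().argsort().astype(float)
    ra, rb = ra - ra.mean(), rb - rb.mean()
    return float(ra @ rb / np.sqrt((ra @ ra) * (rb @ rb)))

rsa_r = spearman(rdm(topics), rdm(activity))
print(f"model-to-brain RSA correlation: {rsa_r:.2f}")
```

Passages with similar topic vectors yield similar simulated activation patterns, so the two representational dissimilarity matrices correlate; the cross-task version of the analysis simply builds one RDM from production trials and one from comprehension trials.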
Affiliation(s)
- Tanvi Patel
- School of Philosophy, Psychology & Language Sciences, University of Edinburgh, 7 George Square, Edinburgh EH8 9JZ, UK
- Matías Morales
- School of Philosophy, Psychology & Language Sciences, University of Edinburgh, 7 George Square, Edinburgh EH8 9JZ, UK
- Martin J Pickering
- School of Philosophy, Psychology & Language Sciences, University of Edinburgh, 7 George Square, Edinburgh EH8 9JZ, UK
- Paul Hoffman
- School of Philosophy, Psychology & Language Sciences, University of Edinburgh, 7 George Square, Edinburgh EH8 9JZ, UK.
9
Zhao Y, Chen Y, Cheng K, Huang W. Artificial intelligence based multimodal language decoding from brain activity: A review. Brain Res Bull 2023; 201:110713. PMID: 37487829. DOI: 10.1016/j.brainresbull.2023.110713.
Abstract
Decoding brain activity is key to advancing brain-computer interface (BCI) technology, and advances in artificial intelligence (AI) continue to drive progress in brain-based language decoding. Existing research has mainly focused on a single modality and has paid insufficient attention to AI methods. Therefore, our objective is to provide an overview of relevant decoding research from the perspective of different modalities and methodologies. The modalities involve text, speech, image, and video, whereas the core method is using AI-built decoders to translate brain signals induced by multimodal stimuli into text or vocal language. The semantic information of brain activity can be successfully decoded into a language at various levels, ranging from words through sentences to discourses. However, the decoding effect is affected by various factors, such as the decoding model, vector representation model, and brain regions. Challenges and future directions are also discussed. The advances in brain language decoding and BCI technology will potentially assist patients with clinical aphasia in regaining the ability to communicate.
Affiliation(s)
- Yuhao Zhao
- College of Language Intelligence, Sichuan International Studies University, Chongqing 400031, PR China
- Yu Chen
- Technical College for the Deaf, Tianjin University of Technology, Tianjin 300384, PR China
- Kaiwen Cheng
- College of Language Intelligence, Sichuan International Studies University, Chongqing 400031, PR China.
- Wei Huang
- Sichuan Provincial Key Laboratory for Human Disease Gene Study, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu 611731, PR China.
10
Timofeeva P, Quiñones I, Geng S, de Bruin A, Carreiras M, Amoruso L. Behavioral and oscillatory signatures of switch costs in highly proficient bilinguals. Sci Rep 2023; 13:7725. PMID: 37173436. PMCID: PMC10176297. DOI: 10.1038/s41598-023-34895-1.
Abstract
Bilinguals with a high proficiency in their first (L1) and second language (L2) often show comparable reaction times when switching from their L1 to L2 and vice versa ("symmetrical switch costs"). However, the neurophysiological signatures supporting this effect are not well understood. Here, we ran two separate experiments and assessed behavioral and MEG responses in highly proficient Spanish-Basque bilinguals while they overtly named pictures in a mixed-language context. In the behavioral experiment, bilinguals were slower when naming items in switch relative to non-switch trials, and this switch cost was comparable for both languages (symmetrical). The MEG experiment mimicked the behavioral one, with switch trials showing more desynchronization than non-switch trials across languages (symmetric neural cost) in the alpha band (8-13 Hz). Source localization revealed the engagement of right parietal and premotor areas, which have been linked to language selection and inhibitory control, and of the left anterior temporal lobe (ATL), a cross-linguistic region housing conceptual knowledge that generalizes across languages. Our results suggest that highly proficient bilinguals implement a language-independent mechanism, supported by alpha oscillations, which is involved in cue-based language selection and facilitates conceptually-driven lexical access in the ATL, possibly by inhibiting non-target lexical items or disinhibiting target ones.
Affiliation(s)
- Polina Timofeeva
- BCBL, Basque Center On Brain, Language and Cognition, Paseo Mikeletegi 69, 2nd floor, 20009, Donostia/San Sebastian, Spain
- Universidad del País Vasco (UPV/EHU), 20009, San Sebastian, Spain
- Ileana Quiñones
- BCBL, Basque Center On Brain, Language and Cognition, Paseo Mikeletegi 69, 2nd floor, 20009, Donostia/San Sebastian, Spain
- Shuang Geng
- BCBL, Basque Center On Brain, Language and Cognition, Paseo Mikeletegi 69, 2nd floor, 20009, Donostia/San Sebastian, Spain
- Universidad del País Vasco (UPV/EHU), 20009, San Sebastian, Spain
- Angela de Bruin
- Department of Psychology, University of York, York, YO10 5DD, UK
- Manuel Carreiras
- BCBL, Basque Center On Brain, Language and Cognition, Paseo Mikeletegi 69, 2nd floor, 20009, Donostia/San Sebastian, Spain
- Universidad del País Vasco (UPV/EHU), 20009, San Sebastian, Spain
- Ikerbasque, Basque Foundation for Science, 48940, Bilbao, Spain
- Lucia Amoruso
- BCBL, Basque Center On Brain, Language and Cognition, Paseo Mikeletegi 69, 2nd floor, 20009, Donostia/San Sebastian, Spain.
- Ikerbasque, Basque Foundation for Science, 48940, Bilbao, Spain.
11
Li H, Cao Y, Chen C, Liu X, Zhang S, Mei L. The depth of semantic processing modulates cross-language pattern similarity in Chinese-English bilinguals. Hum Brain Mapp 2023; 44:2085-2098. [PMID: 36579666 PMCID: PMC9980893 DOI: 10.1002/hbm.26195] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2022] [Revised: 11/22/2022] [Accepted: 12/16/2022] [Indexed: 12/30/2022] Open
Abstract
Previous studies have investigated factors related to the degree of cross-language overlap in brain activations in bilinguals/multilinguals. However, it is still unclear whether and how the depth of semantic processing (a critical task-related factor) affects the neural pattern similarity between native and second languages. To address this question, 26 Chinese-English bilinguals were scanned with fMRI while performing a word naming task (i.e., a task with shallow semantic processing) and a semantic judgment task (i.e., a task with deep semantic processing) in both native and second languages. Based on three sets of representational similarity analysis (whole brain, ROI-based, and within-language vs. cross-language semantic representation), we found that select regions in the reading brain network showed higher cross-language pattern similarity and higher cross-language semantic representations during deep semantic processing than during shallow semantic processing. These results suggest that compared to shallow semantic processing, deep semantic processing may lead to greater language-independent processing (i.e., cross-language semantic representation) and cross-language pattern similarity, and provide direct quantitative neuroimaging evidence for cognitive models of bilingual lexical memory.
Affiliation(s)
- Huiling Li
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents, South China Normal University, Ministry of Education, Guangzhou, China
- School of Psychology, South China Normal University, Guangzhou, China
- Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, China
- Ying Cao
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents, South China Normal University, Ministry of Education, Guangzhou, China
- School of Psychology, South China Normal University, Guangzhou, China
- Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, China
- Chuansheng Chen
- Department of Psychological Science, University of California, Irvine, California, USA
- Xiaoyu Liu
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents, South China Normal University, Ministry of Education, Guangzhou, China
- School of Psychology, South China Normal University, Guangzhou, China
- Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, China
- Shuo Zhang
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents, South China Normal University, Ministry of Education, Guangzhou, China
- School of Psychology, South China Normal University, Guangzhou, China
- Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, China
- Leilei Mei
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents, South China Normal University, Ministry of Education, Guangzhou, China
12
Bişkin OT, Candemir C, Gonul AS, Selver MA. Diverse Task Classification from Activation Patterns of Functional Neuro-Images Using Feature Fusion Module. Sensors (Basel) 2023; 23:3382. [PMID: 37050440 PMCID: PMC10098749 DOI: 10.3390/s23073382] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/24/2023] [Revised: 03/08/2023] [Accepted: 03/20/2023] [Indexed: 06/19/2023]
Abstract
One of the emerging fields in functional magnetic resonance imaging (fMRI) is the decoding of different stimulations. The underlying idea is to reveal the hidden representative signal patterns of various fMRI tasks in order to achieve high task-classification performance. Unfortunately, when multiple tasks are processed, performance remains limited due to several challenges, which are rarely addressed since the majority of state-of-the-art studies cover a single neuronal activity task. Accordingly, the first contribution of this study is the collection and release of a rigorously acquired dataset, which contains cognitive, behavioral, and affective fMRI tasks together with resting state. After a comprehensive analysis of the pitfalls of existing systems on this new dataset, we propose an automatic multitask classification (MTC) strategy using a feature fusion module (FFM). FFM aims to create a unique signature for each task by combining deep features with time-frequency representations. We show that FFM creates a feature space that is superior for representing task characteristics compared to their individual use. Finally, for MTC, we test a diverse set of deep models and analyze their complementarity. Our results reveal higher classification accuracy compared to benchmarks. Both the dataset and the code are accessible to researchers for further developments.
Affiliation(s)
- Osman Tayfun Bişkin
- Department of Electrical and Electronics Engineering, Burdur Mehmet Akif Ersoy University, Burdur 15030, Turkey
- Cemre Candemir
- International Computer Institute, Ege University, Izmir 35100, Turkey
- Standardization of Computational Anatomy Techniques, SoCAT Lab, Ege University, Izmir 35100, Turkey
- Ali Saffet Gonul
- Standardization of Computational Anatomy Techniques, SoCAT Lab, Ege University, Izmir 35100, Turkey
- Department of Psychiatry, Medical Faculty, Ege University, Izmir 35100, Turkey
- Mustafa Alper Selver
- Department of Electrical and Electronics Engineering and Izmir Health Technologies Development and Accelerator (BioIzmir), Dokuz Eylul University, Izmir 35160, Turkey
13
Abstract
Neural decoding models can be used to decode neural representations of visual, acoustic, or semantic information. Recent studies have demonstrated neural decoders that are able to decode acoustic information from a variety of neural signal types, including electrocorticography (ECoG) and the electroencephalogram (EEG). In this study we explore how functional magnetic resonance imaging (fMRI) can be combined with EEG to develop an acoustic decoder. Specifically, we first used a joint EEG-fMRI paradigm to record brain activity while participants listened to music. We then used fMRI-informed EEG source localisation and a bi-directional long short-term memory deep learning network to first extract neural information from the EEG related to music listening and then to decode and reconstruct the individual pieces of music an individual was listening to. We further validated our decoding model by evaluating its performance on a separate dataset of EEG-only recordings. We were able to reconstruct music, via our fMRI-informed EEG source analysis approach, with a mean rank accuracy of 71.8% ([Formula: see text], [Formula: see text]). Using only EEG data, without participant-specific fMRI-informed source analysis, we were able to identify the music a participant was listening to with a mean rank accuracy of 59.2% ([Formula: see text], [Formula: see text]). This demonstrates that our decoding model may use fMRI-informed source analysis to aid EEG-based decoding and reconstruction of acoustic information from brain activity, and makes a step towards building EEG-based neural decoders for other complex information domains such as other acoustic, visual, or semantic information.
14
Hao S, Duan Y, Qi L, Li Z, Ren J, Nangale N, Yang C. A resting-state fMRI study of temporal lobe epilepsy using multivariate pattern analysis and Granger causality analysis. J Neuroimaging 2022; 32:977-990. [PMID: 35670638 DOI: 10.1111/jon.13012] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2022] [Revised: 05/09/2022] [Accepted: 05/10/2022] [Indexed: 11/29/2022] Open
Abstract
BACKGROUND AND PURPOSE Understanding the pathogenesis of temporal lobe epilepsy (TLE) is essential for its diagnosis and treatment. The study aimed to explore regional homogeneity (ReHo) and changes in effective connectivity (EC) between brain regions in TLE patients, hoping to discover potential abnormalities in certain brain regions in TLE patients. METHODS Resting-state functional magnetic resonance data were collected from 23 TLE patients and 32 normal controls (NC). ReHo was used as a feature of multivariate pattern analysis (MVPA) to explore the ability of its alterations in identifying TLE. Based on the results of the MVPA, certain brain regions were selected as seed points to further explore alterations in EC between brain regions using Granger causality analysis. RESULTS MVPA results showed that the classification accuracy for the TLE and NC groups was 87.27%, and the right posterior cerebellum lobe, right lingual gyrus (LING_R), right cuneus (CUN_R), and left superior temporal gyrus (STG_L) provided significant contributions. Moreover, the EC from STG_L to right fusiform gyrus (FFG_R) and LING_R and the EC from CUN_R to the right occipital superior gyrus (SOG_R) and right occipital middle gyrus (MOG_R) were altered compared to the NC group. CONCLUSION The MVPA results indicated that ReHo abnormalities in brain regions may be an important feature in the identification of TLE. The enhanced EC from STG_L to FFG_R and LING_R indicates a shift in language processing to the right hemisphere, and the weakened EC from SOG_R and MOG_R to CUN_R may reveal an underlying mechanism of TLE.
Affiliation(s)
- Siyao Hao
- Faculty of Life Science and Bioengineering, Beijing University of Technology, Beijing, China
- Ying Duan
- Beijing Universal Medical Imaging Diagnostic Center, Beijing, China
- Lei Qi
- Beijing Universal Medical Imaging Diagnostic Center, Beijing, China
- Zhimei Li
- Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Jiechuan Ren
- Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Chunlan Yang
- Faculty of Life Science and Bioengineering, Beijing University of Technology, Beijing, China
15
Lin Y, Hsieh PJ. Neural decoding of speech with semantic-based classification. Cortex 2022; 154:231-240. [DOI: 10.1016/j.cortex.2022.05.018] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2022] [Revised: 04/17/2022] [Accepted: 05/09/2022] [Indexed: 11/16/2022]
16
Geng S, Molinaro N, Timofeeva P, Quiñones I, Carreiras M, Amoruso L. Oscillatory dynamics underlying noun and verb production in highly proficient bilinguals. Sci Rep 2022; 12:764. [PMID: 35031665 PMCID: PMC8760282 DOI: 10.1038/s41598-021-04737-z] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2021] [Accepted: 12/30/2021] [Indexed: 11/09/2022] Open
Abstract
Words representing objects (nouns) and words representing actions (verbs) are essential components of speech across languages. While there is evidence regarding the organizational principles governing neural representation of nouns and verbs in monolingual speakers, little is known about how this knowledge is represented in the bilingual brain. To address this gap, we recorded neuromagnetic signals while highly proficient Spanish-Basque bilinguals performed a picture-naming task, and tracked the brain oscillatory dynamics underlying this process. We found theta (4-8 Hz) power increases and alpha-beta (8-25 Hz) power decreases, irrespective of category and the language in use, in a time window classically associated with the controlled retrieval of lexico-semantic information. When comparing nouns and verbs within each language, we found theta power increases for verbs as compared to nouns in bilateral visual cortices and cognitive control areas including the left SMA and right middle temporal gyrus. In addition, stronger alpha-beta power decreases were observed for nouns as compared to verbs in visual cortices and semantic-related regions such as the left anterior temporal lobe and right premotor cortex. No differences were observed between categories across languages. Overall, our results suggest that noun and verb processing recruit partially different networks during speech production but that these category-based representations are similarly processed in the bilingual brain.
Affiliation(s)
- Shuang Geng
- Basque Center on Cognition, Brain and Language (BCBL), 20009 San Sebastian, Spain
- University of the Basque Country, UPV/EHU, 48940 Bilbao, Spain
- Nicola Molinaro
- Basque Center on Cognition, Brain and Language (BCBL), 20009 San Sebastian, Spain
- IKERBASQUE, Basque Foundation for Science, 48009 Bilbao, Spain
- Polina Timofeeva
- Basque Center on Cognition, Brain and Language (BCBL), 20009 San Sebastian, Spain
- University of the Basque Country, UPV/EHU, 48940 Bilbao, Spain
- Ileana Quiñones
- Basque Center on Cognition, Brain and Language (BCBL), 20009 San Sebastian, Spain
- Manuel Carreiras
- Basque Center on Cognition, Brain and Language (BCBL), 20009 San Sebastian, Spain
- IKERBASQUE, Basque Foundation for Science, 48009 Bilbao, Spain
- University of the Basque Country, UPV/EHU, 48940 Bilbao, Spain
- Lucia Amoruso
- Basque Center on Cognition, Brain and Language (BCBL), 20009 San Sebastian, Spain
- IKERBASQUE, Basque Foundation for Science, 48009 Bilbao, Spain
17
Yang S, Zhang X, Jiang M. Bilingual Brains Learn to Use L2 Alliterations Covertly like Poets: Brain ERP Evidence. Front Psychol 2021; 12:691846. [PMID: 34621210 PMCID: PMC8491624 DOI: 10.3389/fpsyg.2021.691846] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2021] [Accepted: 08/11/2021] [Indexed: 01/02/2023] Open
Abstract
Bilinguals have been documented to access their native or first language (L1) during comprehension of their second language (L2). However, it is uncertain whether they can access L2 when reading their first language. This study used the event-related potential (ERP) technique to demonstrate implicit and unconscious access to English words when Chinese-English bilinguals read words in Chinese, their native language. The participants were asked to judge whether the Chinese words presented in pairs were semantically related or not, while remaining unaware of the occasional alliteration (repetition of the first phoneme) that arose when the Chinese words were translated into English. While the concealed prime in the English translations failed to affect reaction times, the alliteration significantly modulated the N400 among advanced English learners, especially for semantically unrelated word pairs. Critically, this modulation effect differed between bilinguals with high-level and normal-level English proficiency. These results indicate that L2 activation is an unconscious correlate of native-language processing that depends on L2 proficiency.
Affiliation(s)
- Siqin Yang
- Center for Psychology and Cognitive Science, Tsinghua University, Beijing, China
- Xiaochen Zhang
- Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Minghu Jiang
- Center for Psychology and Cognitive Science, Tsinghua University, Beijing, China
18
Asyraff A, Lemarchand R, Tamm A, Hoffman P. Stimulus-independent neural coding of event semantics: Evidence from cross-sentence fMRI decoding. Neuroimage 2021; 236:118073. [PMID: 33878380 PMCID: PMC8270886 DOI: 10.1016/j.neuroimage.2021.118073] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2020] [Revised: 04/06/2021] [Accepted: 04/11/2021] [Indexed: 11/25/2022] Open
Abstract
Multivariate neuroimaging studies indicate that the brain represents word and object concepts in a format that readily generalises across stimuli. Here we investigated whether this was true for neural representations of simple events described using sentences. Participants viewed sentences describing four events in different ways. Multivariate classifiers were trained to discriminate the four events using a subset of sentences, allowing us to test generalisation to novel sentences. We found that neural patterns in a left-lateralised network of frontal, temporal and parietal regions discriminated events in a way that generalised successfully over changes in the syntactic and lexical properties of the sentences used to describe them. In contrast, decoding in visual areas was sentence-specific and failed to generalise to novel sentences. In the reverse analysis, we tested for decoding of syntactic and lexical structure, independent of the event being described. Regions displaying this coding were limited and largely fell outside the canonical semantic network. Our results indicate that a distributed neural network represents the meaning of event sentences in a way that is robust to changes in their structure and form. They suggest that the semantic system disregards the surface properties of stimuli in order to represent their underlying conceptual significance.
Affiliation(s)
- Aliff Asyraff
- School of Philosophy, Psychology & Language Sciences, University of Edinburgh, 7 George Square, Edinburgh, EH8 9JZ, UK
- Rafael Lemarchand
- School of Philosophy, Psychology & Language Sciences, University of Edinburgh, 7 George Square, Edinburgh, EH8 9JZ, UK
- Andres Tamm
- School of Philosophy, Psychology & Language Sciences, University of Edinburgh, 7 George Square, Edinburgh, EH8 9JZ, UK
- Paul Hoffman
- School of Philosophy, Psychology & Language Sciences, University of Edinburgh, 7 George Square, Edinburgh, EH8 9JZ, UK.
19
Xu M, Li D, Li P. Brain decoding in multiple languages: Can cross-language brain decoding work? Brain Lang 2021; 215:104922. [PMID: 33556764 DOI: 10.1016/j.bandl.2021.104922] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/18/2020] [Revised: 01/05/2021] [Accepted: 01/19/2021] [Indexed: 06/12/2023]
Abstract
The approach of cross-language brain decoding is to use models of brain decoding from one language to decode stimuli of another language. It has the potential to provide new insights into how our brain represents multiple languages. While it is possible to decode semantic information across different languages from neuroimaging data, the approach's overall success remains to be tested and depends on a number of factors such as cross-language similarity, age of acquisition/proficiency levels, and depth of language processing. We expect to see continued progress in this domain, from a traditional focus on words and concrete concepts toward the use of naturalistic experimental tasks involving higher-level language processing (e.g., discourse processing). The approach can also be applied to understand how cross-modal, cross-cultural, and other nonlinguistic factors may influence neural representations of different languages. This article provides an overview of cross-language brain decoding with suggestions for future research directions.
Affiliation(s)
- Min Xu
- Center for Brain Disorders and Cognitive Sciences, Shenzhen University, Shenzhen 518060, China; Center for Language and Brain, Shenzhen Institute of Neuroscience, Shenzhen 518060, China.
- Duo Li
- Department of Chinese and Bilingual Studies, Faculty of Humanities, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
- Ping Li
- Department of Chinese and Bilingual Studies, Faculty of Humanities, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China.
20
Sheikh UA, Carreiras M, Soto D. Neurocognitive mechanisms supporting the generalization of concepts across languages. Neuropsychologia 2020; 153:107740. [PMID: 33388337 DOI: 10.1016/j.neuropsychologia.2020.107740] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2020] [Revised: 12/26/2020] [Accepted: 12/27/2020] [Indexed: 10/22/2022]
Abstract
The neurocognitive mechanisms that support the generalization of semantic representations across different languages remain to be determined. Current psycholinguistic models propose that semantic representations are likely to overlap across languages, although there is evidence also to the contrary. Neuroimaging studies observed that brain activity patterns associated with the meaning of words may be similar across languages. However, the factors that mediate cross-language generalization of semantic representations are not known. We here identify a key factor: the depth of processing. Human participants were asked to process visual words as they underwent functional MRI. We found that, during shallow processing, multivariate pattern classifiers could decode the word semantic category within each language in putative substrates of the semantic network, but there was no evidence of cross-language generalization in the shallow processing context. By contrast, when the depth of processing was higher, significant cross-language generalization was observed in several regions, including inferior parietal, ventromedial, lateral temporal, and inferior frontal cortex. These results are in keeping with distributed-only views of semantic processing and favour models based on multiple semantic hubs. The results also have ramifications for existing psycholinguistic models of word processing such as the BIA+, which by default assumes non-selective access to both native and second languages.
Affiliation(s)
- Usman Ayub Sheikh
- Basque Center on Cognition, Brain and Language, San Sebastian, Spain.
- Manuel Carreiras
- Basque Center on Cognition, Brain and Language, San Sebastian, Spain; Ikerbasque, Basque Foundation for Science, Bilbao, Spain; University of the Basque Country, Bilbao, Spain
- David Soto
- Basque Center on Cognition, Brain and Language, San Sebastian, Spain; Ikerbasque, Basque Foundation for Science, Bilbao, Spain.
21
Decoding visual information from high-density diffuse optical tomography neuroimaging data. Neuroimage 2020; 226:117516. [PMID: 33137479 PMCID: PMC8006181 DOI: 10.1016/j.neuroimage.2020.117516] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2020] [Revised: 10/12/2020] [Accepted: 10/23/2020] [Indexed: 12/27/2022] Open
Abstract
Background: Neural decoding could be useful in many ways, from serving as a neuroscience research tool to providing a means of augmented communication for patients with neurological conditions. However, applications of decoding are currently constrained by the limitations of traditional neuroimaging modalities. Electrocorticography requires invasive neurosurgery, magnetic resonance imaging (MRI) is too cumbersome for uses like daily communication, and alternatives like functional near-infrared spectroscopy (fNIRS) offer poor image quality. High-density diffuse optical tomography (HD-DOT) is an emerging modality that uses denser optode arrays than fNIRS to combine logistical advantages of optical neuroimaging with enhanced image quality. Despite the resulting promise of HD-DOT for facilitating field applications of neuroimaging, decoding of brain activity as measured by HD-DOT has yet to be evaluated. Objective: To assess the feasibility and performance of decoding with HD-DOT in visual cortex. Methods and Results: To establish the feasibility of decoding at the single-trial level with HD-DOT, a template matching strategy was used to decode visual stimulus position. A receiver operating characteristic (ROC) analysis was used to quantify the sensitivity, specificity, and reproducibility of binary visual decoding. Mean areas under the curve (AUCs) greater than 0.97 across 10 imaging sessions in a highly sampled participant were observed. ROC analyses of decoding across 5 participants established both reproducibility in multiple individuals and the feasibility of inter-individual decoding (mean AUCs > 0.7), although decoding performance varied between individuals. Phase-encoded checkerboard stimuli were used to assess more complex, non-binary decoding with HD-DOT. Across 3 highly sampled participants, the phase of a 60° wide checkerboard wedge rotating 10° per second through 360° was decoded with a within-participant error of 25.8±24.7°. Decoding between participants was also feasible based on permutation-based significance testing. Conclusions: Visual stimulus information can be decoded accurately, reproducibly, and across a range of detail (for both binary and non-binary outcomes) at the single-trial level (without needing to block-average test data) using HD-DOT data. These results lay the foundation for future studies of more complex decoding with HD-DOT and applications in clinical populations.
22
Luthra S, Correia JM, Kleinschmidt DF, Mesite L, Myers EB. Lexical Information Guides Retuning of Neural Patterns in Perceptual Learning for Speech. J Cogn Neurosci 2020; 32:2001-2012. [PMID: 32662731 PMCID: PMC8048099 DOI: 10.1162/jocn_a_01612] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
A listener's interpretation of a given speech sound can vary probabilistically from moment to moment. Previous experience (i.e., the contexts in which one has encountered an ambiguous sound) can further influence the interpretation of speech, a phenomenon known as perceptual learning for speech. This study used multivoxel pattern analysis to query how neural patterns reflect perceptual learning, leveraging archival fMRI data from a lexically guided perceptual learning study conducted by Myers and Mesite [Myers, E. B., & Mesite, L. M. Neural systems underlying perceptual adjustment to non-standard speech tokens. Journal of Memory and Language, 76, 80-93, 2014]. In that study, participants first heard ambiguous /s/-/∫/ blends in either /s/-biased lexical contexts (epi_ode) or /∫/-biased contexts (refre_ing); subsequently, they performed a phonetic categorization task on tokens from an /asi/-/a∫i/ continuum. In the current work, a classifier was trained to distinguish between phonetic categorization trials in which participants heard unambiguous productions of /s/ and those in which they heard unambiguous productions of /∫/. The classifier was able to generalize this training to ambiguous tokens from the middle of the continuum on the basis of individual participants' trial-by-trial perception. We take these findings as evidence that perceptual learning for speech involves neural recalibration, such that the pattern of activation approximates the perceived category. Exploratory analyses showed that left parietal regions (supramarginal and angular gyri) and right temporal regions (superior, middle, and transverse temporal gyri) were most informative for categorization. Overall, our results inform an understanding of how moment-to-moment variability in speech perception is encoded in the brain.
Affiliation(s)
| | - João M Correia
- University of Algarve
- Basque Center on Cognition, Brain and Language
| | | | - Laura Mesite
- MGH Institute of Health Professions
- Harvard Graduate School of Education
| | | |
23
Zheng B, Báez S, Su L, Xiang X, Weis S, Ibáñez A, García AM. Semantic and attentional networks in bilingual processing: fMRI connectivity signatures of translation directionality. Brain Cogn 2020; 143:105584. [PMID: 32485460 DOI: 10.1016/j.bandc.2020.105584] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2019] [Revised: 03/04/2020] [Accepted: 05/13/2020] [Indexed: 12/31/2022]
Abstract
Comparisons between backward and forward translation (BT, FT) have long illuminated the organization of bilingual memory, with neuroscientific evidence indicating that FT involves greater linguistic and attentional demands. However, no study has directly assessed the functional interaction between the relevant mechanisms. Against this background, we conducted the first fMRI investigation of functional connectivity (FC) differences between BT and FT. In addition to yielding lower behavioral outcomes, FT was characterized by increased FC between a core semantic hub (the left anterior temporal lobe, ATL) and key nodes of attentional and vigilance networks (left inferior frontal, left orbitofrontal, and bilateral parietal clusters). By contrast, distinct FC patterns for BT emerged only between the left ATL and the right thalamus, a region implicated in the automatic relaying of sensory information to cortical regions. FT thus seems to involve enhanced coupling between semantic and attentional mechanisms, suggesting that asymmetries in cross-language processing reflect dynamic interactions between linguistic and domain-general systems.
Affiliation(s)
- Binghan Zheng, School of Modern Languages & Cultures, Durham University, Durham, UK
- Sandra Báez, Grupo de Investigación Cerebro y Cognición Social, Bogotá, Colombia; Universidad de los Andes, Bogotá, Colombia
- Li Su, Department of Psychiatry, University of Cambridge, Cambridge, UK
- Xia Xiang, College of Science and Technology, Ningbo University, Zhejiang, China
- Susanne Weis, Institute of Systems Neuroscience, Heinrich Heine University Düsseldorf, Düsseldorf, Germany; Institute of Neuroscience and Medicine (INM-7: Brain and Behaviour), Research Centre Jülich, Jülich, Germany
- Agustín Ibáñez, Universidad de San Andrés, Buenos Aires, Argentina; National Scientific and Technical Research Council (CONICET), Buenos Aires, Argentina; Centre of Excellence in Cognition and its Disorders, Australian Research Council (ARC), Sydney, Australia; Center for Social and Cognitive Neuroscience (CSCN), School of Psychology, Universidad Adolfo Ibáñez, Santiago, Chile; Universidad Autónoma del Caribe, Barranquilla, Colombia
- Adolfo M García, Universidad de San Andrés, Buenos Aires, Argentina; National Scientific and Technical Research Council (CONICET), Buenos Aires, Argentina; Faculty of Education, National University of Cuyo (UNCuyo), Mendoza, Argentina; Departamento de Lingüística y Literatura, Facultad de Humanidades, Universidad de Santiago de Chile, Santiago, Chile

24. Brignoni-Perez E, Jamal NI, Eden GF. An fMRI study of English and Spanish word reading in bilingual adults. Brain Lang 2020; 202:104725. [PMID: 31978619] [PMCID: PMC7461633] [DOI: 10.1016/j.bandl.2019.104725]
Abstract
Reading relies on a left-lateralized brain system, including occipito-temporal (OTC), temporo-parietal, and inferior frontal (IFC) cortices. Neuroimaging studies have investigated whether activation in these cortices is modulated by a language's orthographic depth (the consistency of grapheme-to-phoneme conversion). In Spanish-English bilinguals, some but not all studies have reported activation differences between the two languages during reading. Here, we studied Spanish-English early bilingual adults living in the United States (N = 25; 17 females, 8 males). We examined local activity, functional connectivity, and spatially distributed activity patterns during English and Spanish word reading. We found overlap in local activity for the two languages in the left IFC, no differences in activation between them, and few differences in functional connectivity (none in pairs of regions known to be involved in reading). Yet spatially distributed patterns of brain activity differentiated English and Spanish in bilateral cerebellum/left OTC, the left superior occipital gyrus, the left IFC, and the left medial frontal gyrus. Overall, we found no evidence for differences in local activation or functional connectivity during English versus Spanish word processing in regions known to be involved in reading, yet we found brain-based evidence that Spanish-English bilinguals distinguish between the two languages.
Affiliation(s)
- Edith Brignoni-Perez, Interdisciplinary Program in Neuroscience, Georgetown University Medical Center, 4000 Reservoir Road NW, Washington, DC 20057, United States; Center for the Study of Learning, Georgetown University Medical Center, 4000 Reservoir Road NW, Washington, DC 20057, United States
- Nasheed I Jamal, Center for the Study of Learning, Georgetown University Medical Center, 4000 Reservoir Road NW, Washington, DC 20057, United States
- Guinevere F Eden, Interdisciplinary Program in Neuroscience, Georgetown University Medical Center, 4000 Reservoir Road NW, Washington, DC 20057, United States; Center for the Study of Learning, Georgetown University Medical Center, 4000 Reservoir Road NW, Washington, DC 20057, United States

25. Elli GV, Lane C, Bedny M. A Double Dissociation in Sensitivity to Verb and Noun Semantics Across Cortical Networks. Cereb Cortex 2019; 29:4803-4817. [PMID: 30767007] [DOI: 10.1093/cercor/bhz014]
Abstract
What is the neural organization of the mental lexicon? Previous research suggests that partially distinct cortical networks are active during verb and noun processing, but what information do these networks represent? We used multivoxel pattern analysis (MVPA) to investigate whether these networks are sensitive to lexicosemantic distinctions among verbs and among nouns and, if so, whether they are more sensitive to distinctions among words in their preferred grammatical class. Participants heard 4 types of verbs (light emission, sound emission, hand-related actions, mouth-related actions) and 4 types of nouns (birds, mammals, manmade places, natural places). As previously shown, the left posterior middle temporal gyrus (LMTG+) and inferior frontal gyrus (LIFG) responded more to verbs, whereas the left inferior parietal lobule (LIP), precuneus (LPC), and inferior temporal (LIT) cortex responded more to nouns. MVPA revealed a double dissociation in lexicosemantic sensitivity: classification was more accurate among verbs than nouns in the LMTG+, and among nouns than verbs in the LIP, LPC, and LIT. However, classification was similar for verbs and nouns in the LIFG, and above chance for the nonpreferred category in all regions. These results suggest that lexicosemantic information about verbs and nouns is represented in partially nonoverlapping networks.
Affiliation(s)
- Giulia V Elli, Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Connor Lane, Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Marina Bedny, Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA

26. Striking loss of second language in bilingual patients with semantic dementia. J Neurol 2019; 267:551-560. [PMID: 31705289] [DOI: 10.1007/s00415-019-09616-2]
Abstract
BACKGROUND: Studies of bilingual or multilingual patients with neurodegenerative diseases that disrupt language, such as the primary progressive aphasias (PPA), may contribute valuable information on language organization in the bilingual brain and on the factors affecting language decline. There is limited literature on bilingual PPA, and in particular on semantic dementia, a type of PPA with selective loss of semantic memory. We studied the nature and severity of naming and comprehension deficits across languages in bilingual patients with semantic dementia (SD).
METHODS: Sixteen bilingual patients with SD and 34 bilingual age-matched controls were administered the modified Boston Naming Test and components of the Cambridge Semantic Battery. The patients' performance on picture naming and word comprehension was compared across languages and with controls. The most proficient language on self-rating was labelled L1 and the less proficient L2.
RESULTS: We observed striking loss of the second language (L2) in SD for both receptive and expressive language, even in patients who were premorbidly fluent in their L2. Naming and comprehension in every patient's L2 were impaired relative to both their own first-language (L1) scores and controls' L2 scores. Furthermore, item-specific correct responses in each patient's L2 were a subset of their successes in L1.
DISCUSSION: The striking contrast in performance between the two languages of bilingual patients with SD indicates that a bilingual's L2, or less proficient language, is more vulnerable to neurodegeneration. Our findings also support a common semantic network in the brain for the different languages of bilinguals.

27. Sign and Speech Share Partially Overlapping Conceptual Representations. Curr Biol 2019; 29:3739-3747.e5. [PMID: 31668623] [PMCID: PMC6839399] [DOI: 10.1016/j.cub.2019.08.075]
Abstract
Conceptual knowledge is fundamental to human cognition. Yet, the extent to which it is influenced by language is unclear. Studies of semantic processing show that similar neural patterns are evoked by the same concepts presented in different modalities (e.g., spoken words and pictures or text) [1, 2, 3]. This suggests that conceptual representations are “modality independent.” However, an alternative possibility is that the similarity reflects retrieval of common spoken language representations. Indeed, in hearing spoken language users, text and spoken language are co-dependent [4, 5], and pictures are encoded via visual and verbal routes [6]. A parallel approach investigating semantic cognition shows that bilinguals activate similar patterns for the same words in their different languages [7, 8]. This suggests that conceptual representations are “language independent.” However, this has only been tested in spoken language bilinguals. If different languages evoke different conceptual representations, this should be most apparent comparing languages that differ greatly in structure. Hearing people with signing deaf parents are bilingual in sign and speech: languages conveyed in different modalities. Here, we test the influence of modality and bilingualism on conceptual representation by comparing semantic representations elicited by spoken British English and British Sign Language in hearing early, sign-speech bilinguals. We show that representations of semantic categories are shared for sign and speech, but not for individual spoken words and signs. This provides evidence for partially shared representations for sign and speech and shows that language acts as a subtle filter through which we understand and interact with the world. 
Highlights:
- RSA analyses show that semantic categories are shared for sign and speech
- Neural patterns for individual spoken words and signs differ
- Spoken word and sign form representations are found in auditory and visual cortices
- Language acts as a subtle filter through which we interact with the world

28.

29. Hu Z, Yang H, Yang Y, Nishida S, Madden-Lombardi C, Ventre-Dominey J, Dominey PF, Ogawa K. Common Neural System for Sentence and Picture Comprehension Across Languages: A Chinese-Japanese Bilingual Study. Front Hum Neurosci 2019; 13:380. [PMID: 31708762] [PMCID: PMC6823717] [DOI: 10.3389/fnhum.2019.00380]
Abstract
While common semantic representations for individual words across languages have been identified, a common meaning system at sentence-level has not been determined. In this study, fMRI was used to investigate whether an across-language sentence comprehension system exists. Chinese–Japanese bilingual participants (n = 32) were asked to determine whether two consecutive stimuli were related (coherent) or not (incoherent) to the same event. Stimuli were displayed with three different modalities (Chinese written sentences, Japanese written sentences, and pictures). The behavioral results showed no significant difference in accuracy and response times among the three modalities. Multi-voxel pattern analysis (MVPA) of fMRI data was used to classify the semantic relationship (coherent or incoherent) across the stimulus modalities. The classifier was first trained to determine coherency within Chinese sentences and then tested with Japanese sentences, and vice versa. A whole-brain searchlight analysis revealed significant above-chance classification accuracy across Chinese and Japanese sentences in the supramarginal gyrus (BA 40), extending into the angular gyrus (BA 39) as well as the opercular (BA 44) and triangular (BA 45) parts of the inferior frontal gyrus in the left hemisphere (cluster-level FWE corrected p < 0.05). Significant above-chance classification accuracy was also found across Japanese sentences and pictures in the supramarginal (BA 40) and angular gyrus (BA 39). These results indicate that a common meaning system for sentence processing across languages and modalities exists, and it involves the left inferior parietal gyrus.
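The cross-language decoding logic summarized in this abstract (train a classifier on coherent/incoherent trials in one language, test it on trials in the other) can be sketched with standard tools. Below is a minimal illustration using scikit-learn on synthetic "trial × voxel" patterns; the data, class labels, and variable names are purely illustrative, not the authors' pipeline.

```python
# Cross-condition MVPA sketch: fit on condition A, score on condition B.
# Above-chance transfer accuracy suggests a representation shared
# across conditions (here, a simulated shared signal).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200

# A weak multivoxel signal shared by the two simulated "languages".
signal = rng.normal(size=n_voxels)

def simulate(n):
    """Synthetic beta patterns: two classes around +/- the shared signal."""
    y = np.repeat([0, 1], n // 2)
    X = rng.normal(size=(n, n_voxels)) + np.outer(2 * y - 1, signal)
    return X, y

X_train, y_train = simulate(n_trials)  # e.g., "Chinese sentence" trials
X_test, y_test = simulate(n_trials)    # e.g., "Japanese sentence" trials

clf = make_pipeline(StandardScaler(), LinearSVC())
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)  # cross-condition transfer accuracy
```

In a searchlight variant, the same fit/score step is repeated for the voxels inside a small sphere centered on each voxel in turn, yielding an accuracy map.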
Affiliation(s)
- Zhengfei Hu, Department of Psychology, Hokkaido University, Sapporo, Japan
- Huixiang Yang, Department of Psychology, Hokkaido University, Sapporo, Japan
- Yuxiang Yang, Department of Psychology, Hokkaido University, Sapporo, Japan
- Shuhei Nishida, Department of Psychology, Hokkaido University, Sapporo, Japan
- Peter Ford Dominey, INSERM - U1093 Cognition, Action, and Sensorimotor Plasticity, Dijon, France
- Kenji Ogawa, Department of Psychology, Hokkaido University, Sapporo, Japan

30. Gao Y, Zhang Y, Cao Z, Guo X, Zhang J. Decoding Brain States From fMRI Signals by Using Unsupervised Domain Adaptation. IEEE J Biomed Health Inform 2019; 24:1677-1685. [PMID: 31514162] [DOI: 10.1109/jbhi.2019.2940695]
Abstract
With the development of deep learning in medical image analysis, decoding brain states from functional magnetic resonance imaging (fMRI) signals has made significant progress. Previous studies often utilized deep neural networks to automatically classify brain activity patterns related to diverse cognitive states. However, due to the individual differences between subjects and the variation in acquisition parameters across devices, the inconsistency in data distributions degrades the performance of cross-subject decoding. Besides, most current networks were trained in a supervised way, which is not suitable for the actual scenarios in which massive amounts of data are unlabeled. To address these problems, we proposed the deep cross-subject adaptation decoding (DCAD) framework to decipher the brain states. The proposed volume-based 3D feature extraction architecture can automatically learn the common spatiotemporal features of labeled source data to generate a distinct descriptor. Then, the distance between the source and target distributions is minimized via an unsupervised domain adaptation (UDA) method, which can help to accurately decode the cognitive states across subjects. The performance of the DCAD was evaluated on task-fMRI (tfMRI) dataset from the Human Connectome Project (HCP). Experimental results showed that the proposed method achieved the state-of-the-art decoding performance with mean 81.9% and 84.9% accuracies under two conditions (4 brain states and 9 brain states respectively) of working memory task. Our findings also demonstrated that UDA can mitigate the impact of the data distribution shift, thereby providing a superior choice for increasing the performance of cross-subject decoding without depending on annotations.
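The core idea of unsupervised domain adaptation (reducing the distribution shift between labeled source data and unlabeled target data before classification) can be illustrated with a much simpler baseline than the DCAD framework described above. The sketch below implements CORAL (correlation alignment), a standard UDA baseline, on synthetic features; it is not the authors' method, and all data and names are illustrative.

```python
# CORAL sketch: whiten source features, then re-color them with the
# target covariance, so second-order statistics match across domains.
import numpy as np

rng = np.random.default_rng(0)
n, d = 300, 10

# Source and target features with different covariance structure,
# mimicking subject-to-subject or scanner-to-scanner variation.
Xs = rng.normal(size=(n, d)) @ rng.normal(size=(d, d))
Xt = rng.normal(size=(n, d)) @ rng.normal(size=(d, d))

def matrix_power(C, p):
    """Fractional power of a symmetric positive-definite matrix."""
    vals, vecs = np.linalg.eigh(C)
    return vecs @ np.diag(vals ** p) @ vecs.T

def coral(Xs, Xt, eps=1e-6):
    """Align source features to the target covariance (no target labels)."""
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])
    return Xs @ matrix_power(Cs, -0.5) @ matrix_power(Ct, 0.5)

Xs_aligned = coral(Xs, Xt)

# The covariance mismatch to the target shrinks after alignment; a
# source-trained classifier then transfers better to the target domain.
before = np.linalg.norm(np.cov(Xs, rowvar=False) - np.cov(Xt, rowvar=False))
after = np.linalg.norm(np.cov(Xs_aligned, rowvar=False) - np.cov(Xt, rowvar=False))
```

Deep UDA methods such as the one in this paper pursue the same goal, but minimize a learned distribution distance inside a neural network rather than matching covariances in closed form.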

31. Gurunandan K, Carreiras M, Paz-Alonso PM. Functional plasticity associated with language learning in adults. Neuroimage 2019; 201:116040. [PMID: 31336190] [DOI: 10.1016/j.neuroimage.2019.116040]
Abstract
Learning a new language in adulthood is increasingly common and among the most difficult tasks attempted by adults. Adult language learners thus offer an excellent window into the nature of learning-dependent plasticity. The present functional magnetic resonance imaging (fMRI) study was aimed at characterising functional neuroplasticity in adults at different stages of learning a second language (L2). To this end, a total of 34 adults, either intermediate or advanced L2 learners, underwent MRI scanning while performing a semantic judgement task with print and speech stimuli. Three separate analytical approaches were used to comprehensively map neural differences: print-speech convergence, L1-L2 similarity, and functional connectivity with language control regions. Results revealed that (i) print-speech convergence was not affected by L2 proficiency level, (ii) L1-L2 similarity was significantly higher in intermediate than in advanced L2 learners, and (iii) functional coupling of language and language control areas was higher in the advanced relative to the intermediate group during reading comprehension. The results point to significant functional differences between intermediate and advanced language learners, indicating that, even well into adulthood, increasing L2 proficiency modulates the functional similarity between L1 and L2 and the connectivity between language comprehension and language control regions, particularly in reading comprehension.
Affiliation(s)
- Kshipra Gurunandan, BCBL, Basque Center on Cognition, Brain and Language, Donostia-San Sebastian, Spain
- Manuel Carreiras, BCBL, Basque Center on Cognition, Brain and Language, Donostia-San Sebastian, Spain; IKERBASQUE, Basque Foundation for Science, Bilbao, Spain; Department of Basque Language and Communication, EHU/UPV, Bilbao, Spain
- Pedro M Paz-Alonso, BCBL, Basque Center on Cognition, Brain and Language, Donostia-San Sebastian, Spain

32. Calabria M, Grunden N, Serra M, García-Sánchez C, Costa A. Semantic Processing in Bilingual Aphasia: Evidence of Language Dependency. Front Hum Neurosci 2019; 13:205. [PMID: 31258471] [PMCID: PMC6587373] [DOI: 10.3389/fnhum.2019.00205]
Abstract
Individuals with aphasia frequently show lexical retrieval deficits due to increased interference from semantically related competitors, a phenomenon that can be observed in tasks such as naming pictures grouped by semantic category. These deficits are explained in terms of impaired semantic control, a set of abilities that are to some extent dependent upon executive control (EC). However, the extent to which semantic control abilities can be affected in a second and non-dominant language has not been extensively explored. Additionally, findings in healthy individuals are inconclusive regarding the degree to which semantic processing is shared between languages. In this study, we explored the effect of brain damage on semantic processing by comparing the performance of bilingual individuals with aphasia on tasks involving semantic control during word production and comprehension. Furthermore, we explored whether semantic deficits are related to domain-general EC deficits. First, we investigated the naming performance of Catalan-Spanish bilinguals with fluent aphasia and age-matched healthy controls on a semantically blocked cyclic naming task in each of their two languages (Catalan and Spanish). This task measured semantic interference in terms of the difference in naming latencies between pictures grouped by the same semantic category or by different categories. Second, we explored whether lexical deficits extend to comprehension by testing participants in a word-picture matching task during a mixed-language condition. Third, we used a conflict monitoring task to explore the presence of EC deficits in patients with aphasia. We found two main results. First, in both language tasks, bilingual patients' performances were more affected than those of healthy controls when they performed the task in their non-dominant language. Second, there was a significant correlation between the speed of processing on the EC task and the magnitude of the semantic interference effect exclusively in the non-dominant language. Taken together, these results suggest that lexical retrieval may be selectively impaired in bilinguals under conditions where semantic competition is higher, i.e., in their non-dominant language; this could possibly be explained by an excessive amount of inhibition placed upon this language. Moreover, lexico-semantic impairments seem to be at least somewhat related to conflict monitoring deficits, suggesting a certain degree of overlap between EC and semantic control.
Affiliation(s)
- Marco Calabria, Center for Brain and Cognition, Pompeu Fabra University, Barcelona, Spain
- Nicholas Grunden, Center for Brain and Cognition, Pompeu Fabra University, Barcelona, Spain; Hospital de la Santa Creu i Sant Pau, Barcelona, Spain
- Mariona Serra, Center for Brain and Cognition, Pompeu Fabra University, Barcelona, Spain
- Albert Costa, Center for Brain and Cognition, Pompeu Fabra University, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats, Barcelona, Spain

33. Sheikh UA, Carreiras M, Soto D. Decoding the meaning of unconsciously processed words using fMRI-based MVPA. Neuroimage 2019; 191:430-440. [DOI: 10.1016/j.neuroimage.2019.02.010]

34. Grégoire L, Greening SG. Fear of the known: semantic generalisation of fear conditioning across languages in bilinguals. Cogn Emot 2019; 34:352-358. [PMID: 30987523] [DOI: 10.1080/02699931.2019.1604319]
Abstract
While modern theories of emotion emphasize the role of higher-order cognitive processes such as semantics in human emotion, much research into emotional learning has ignored the potential contributions of such processes. This study aimed to determine whether emotional learning affects semantic representations of words independent of perceptual features by assessing whether fear conditioning to a neutral word generalises across languages in bilingual participants. Two sessions differing according to the reinforced language were performed by English-Spanish bilinguals. In each session, a neutral word was reinforced by an electrical shock whereas its equivalent in the other language was never paired with shock. Across two sessions within our sample, we found replicable evidence that fear conditioning consistently transferred to the non-reinforced language as measured by both self-reported fear and electrodermal activity, irrespective of the conditioned language. Our findings extend knowledge about the role of semantic similarity in fear generalisation and highlight the importance of higher-order cognitive processes in human emotions.
Affiliation(s)
- Laurent Grégoire, CNAPs Lab, Department of Psychology, Louisiana State University, Baton Rouge, LA, USA
- Steven G Greening, CNAPs Lab, Department of Psychology, Louisiana State University, Baton Rouge, LA, USA

35. Ostarek M, Joosen D, Ishag A, de Nijs M, Huettig F. Are visual processes causally involved in "perceptual simulation" effects in the sentence-picture verification task? Cognition 2018; 182:84-94. [PMID: 30219635] [DOI: 10.1016/j.cognition.2018.08.017]
Abstract
Many studies have shown that sentences implying an object to have a certain shape produce a robust reaction time advantage for shape-matching pictures in the sentence-picture verification task. Typically, this finding has been interpreted as evidence for perceptual simulation, i.e., that access to implicit shape information involves the activation of modality-specific visual processes. It follows from this proposal that disrupting visual processing during sentence comprehension should interfere with perceptual simulation and obliterate the match effect. Here we directly test this hypothesis. Participants listened to sentences while seeing either visual noise that was previously shown to strongly interfere with basic visual processing or a blank screen. Experiments 1 and 2 replicated the match effect, but, crucially, visual noise did not modulate it. However, when an interference technique was used that targeted high-level semantic processing (Experiment 3), the match effect vanished. Visual noise specifically targeting high-level visual processes (Experiment 4) had only a minimal effect on the match effect. We conclude that the shape match effect in the sentence-picture verification paradigm is unlikely to rely on perceptual simulation.
Affiliation(s)
- Markus Ostarek, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; International Max Planck Research School for Language Sciences, The Netherlands
- Dennis Joosen, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Adil Ishag, International University of Africa, Khartoum, Sudan
- Monique de Nijs, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Falk Huettig, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands

36. Chen S, Fang J, An D, Xiao F, Chen D, Chen T, Zhou D, Liu L. The focal alteration and causal connectivity in children with new-onset benign epilepsy with centrotemporal spikes. Sci Rep 2018; 8:5689. [PMID: 29632387] [PMCID: PMC5890242] [DOI: 10.1038/s41598-018-23336-z]
Abstract
The aim of the current study was to identify the epileptic focus and examine its causal relationship to other brain regions in children with new-onset benign childhood epilepsy with centrotemporal spikes (BECTS). Resting-state functional magnetic resonance imaging (fMRI) was performed in 66 children with BECTS and 37 matched control children. We compared the amplitude of low frequency fluctuation (ALFF) signals between the two groups to find the potential epileptogenic zone (EZ), then used Granger causality analysis (GCA) to explore the causal effects of the EZ on the whole brain. Children with BECTS had significantly increased ALFF in the right Broca's area, and decreased ALFF in the bilateral fusiform gyrus. The patients also showed an increased driving effect from the EZ in Broca's area to the right prefrontal lobe, and decreased effects to the frontal lobe and posterior parts of the language network. The causal effect on left Wernicke's area correlated negatively with verbal IQ (VIQ) score. Our research on new-onset BECTS patients illustrates a possible compensatory mechanism in the language network at early stages of BECTS, and the negative correlation between GCA and VIQ suggests that epileptiform activity disturbs language. These findings shed light on the mechanisms of BECTS and its associated language dysfunction.
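Granger causality analysis of the kind used in this study asks whether the past of one signal improves prediction of another signal beyond that signal's own past. A from-scratch numerical sketch on synthetic time series (illustrative only; not the authors' fMRI pipeline, and the coupling here is built in by construction):

```python
# Granger-style F-test: compare a restricted autoregression of y on its
# own past against a full model that also includes the past of x.
import numpy as np

rng = np.random.default_rng(2)
T = 500
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    # y is driven by x with a one-step lag, plus its own past and noise
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

def rss(X, z):
    """Residual sum of squares of an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    r = z - X @ beta
    return float(r @ r)

z = y[1:]
ones = np.ones(T - 1)
X_restricted = np.column_stack([ones, y[:-1]])    # y's own past only
X_full = np.column_stack([ones, y[:-1], x[:-1]])  # plus x's past

rss_r, rss_f = rss(X_restricted, z), rss(X_full, z)
n, k, q = len(z), X_full.shape[1], 1  # q = number of added regressors
F = ((rss_r - rss_f) / q) / (rss_f / (n - k))
# A large F statistic indicates that x "Granger-causes" y.
```

Running the same test in the reverse direction (does the past of y improve prediction of x?) gives the asymmetry that defines a driving effect.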
Affiliation(s)
- Sihan Chen, Epilepsy Center, Department of Neurology, West China Hospital, Sichuan University, Chengdu, PR China
- Jiajia Fang, Department of Neurology, Fourth Affiliated Hospital, School of Medicine, Zhejiang University, Yiwu, PR China
- Dongmei An, Epilepsy Center, Department of Neurology, West China Hospital, Sichuan University, Chengdu, PR China
- Fenglai Xiao, Epilepsy Center, Department of Neurology, West China Hospital, Sichuan University, Chengdu, PR China
- Deng Chen, Epilepsy Center, Department of Neurology, West China Hospital, Sichuan University, Chengdu, PR China
- Tao Chen, Epilepsy Center, Department of Neurology, West China Hospital, Sichuan University, Chengdu, PR China
- Dong Zhou, Epilepsy Center, Department of Neurology, West China Hospital, Sichuan University, Chengdu, PR China
- Ling Liu, Epilepsy Center, Department of Neurology, West China Hospital, Sichuan University, Chengdu, PR China

37. Van de Putte E, De Baene W, Price CJ, Duyck W. Neural overlap of L1 and L2 semantic representations across visual and auditory modalities: a decoding approach. Neuropsychologia 2018; 113:68-77. [PMID: 29605594] [PMCID: PMC5946896] [DOI: 10.1016/j.neuropsychologia.2018.03.037]
Abstract
This study investigated whether brain activity in Dutch-French bilinguals during semantic access to concepts from one language could be used to predict neural activation during access to the same concepts from another language, in different language modalities/tasks. This was tested using multi-voxel pattern analysis (MVPA), within and across language comprehension (word listening and word reading) and production (picture naming). It was possible to identify the picture or word named, read or heard in one language (e.g. maan, meaning moon) based on the brain activity in a distributed bilateral brain network while, respectively, naming, reading or listening to the picture or word in the other language (e.g. lune). The brain regions identified differed across tasks. During picture naming, brain activation in the occipital and temporal regions allowed concepts to be predicted across languages. During word listening and word reading, across-language predictions were observed in the rolandic operculum and several motor-related areas (pre- and postcentral, the cerebellum). In addition, across-language predictions during reading were identified in regions typically associated with semantic processing (left inferior frontal, middle temporal cortex, right cerebellum and precuneus) and visual processing (inferior and middle occipital regions and calcarine sulcus). Furthermore, across modalities and languages, the left lingual gyrus showed semantic overlap across production and word reading. These findings support the idea of at least partially language- and modality-independent semantic neural representations.
Affiliation(s)
- Eowyn Van de Putte: Department of Experimental Psychology, Ghent University, Ghent, Belgium
- Wouter De Baene: Department of Experimental Psychology, Ghent University, Ghent, Belgium; Department of Cognitive Neuropsychology, Tilburg University, Tilburg, the Netherlands
- Cathy J Price: Wellcome Centre for Human Neuroimaging, Institute of Neurology, UCL, London, UK
- Wouter Duyck: Department of Experimental Psychology, Ghent University, Ghent, Belgium
38
Zheng L, Chen C, Liu W, Long Y, Zhao H, Bai X, Zhang Z, Han Z, Liu L, Guo T, Chen B, Ding G, Lu C. Enhancement of teaching outcome through neural prediction of the students' knowledge state. Hum Brain Mapp 2018; 39:3046-3057. [PMID: 29575392] [DOI: 10.1002/hbm.24059]
Abstract
The neural mechanism underlying the dyadic process of teaching is poorly understood. Although theories of teaching propose that, before any teaching takes place, the teacher predicts the knowledge state of the student(s) to enhance the teaching outcome, this theoretical Prediction-Transmission hypothesis has not been tested in any neuroimaging study. Using functional near-infrared spectroscopy-based hyperscanning, this study measured the brain activity of teacher-student pairs simultaneously. Results showed that a better teaching outcome was associated with higher time-lagged interpersonal neural synchronization (INS) between the right temporal-parietal junction (TPJ) of the teacher and the anterior superior temporal cortex (aSTC) of the student, when the teacher's brain activity preceded that of the student. Moreover, time course analyses suggested that such INS could mark the quality of the teaching outcome at an early stage of the teaching process. These results provide key neural evidence for the Prediction-Transmission hypothesis and suggest that INS plays an important role in successful teaching.
Affiliation(s)
- Lifen Zheng: State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, 100875, China
- Chuansheng Chen: Department of Psychology and Social Behavior, University of California, Irvine, California, 92697
- Wenda Liu: State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, 100875, China
- Yuhang Long: State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, 100875, China
- Hui Zhao: State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, 100875, China
- Xialu Bai: State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, 100875, China
- Zhanjun Zhang: State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, 100875, China
- Zaizhu Han: State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, 100875, China
- Li Liu: State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, 100875, China
- Taomei Guo: State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, 100875, China
- Baoguo Chen: Beijing Key Laboratory of Applied Experimental Psychology, School of Psychology, Beijing Normal University, Beijing, 100875, China
- Guosheng Ding: State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, 100875, China
- Chunming Lu: State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, 100875, China; IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, China
39
Popov V, Ostarek M, Tenison C. Practices and pitfalls in inferring neural representations. Neuroimage 2018; 174:340-351. [PMID: 29578030] [DOI: 10.1016/j.neuroimage.2018.03.041]
Abstract
A key challenge for cognitive neuroscience is deciphering the representational schemes of the brain. Stimulus-feature-based encoding models are becoming increasingly popular for inferring the dimensions of neural representational spaces from stimulus-feature spaces. We argue that such inferences are not always valid because successful prediction can occur even if the two representational spaces use different, but correlated, representational schemes. We support this claim with three simulations in which we achieved high prediction accuracy despite systematic differences in the geometries and dimensions of the underlying representations. Detailed analysis of the encoding models' predictions showed systematic deviations from ground-truth, indicating that high prediction accuracy is insufficient for making representational inferences. This fallacy applies to the prediction of actual neural patterns from stimulus-feature spaces and we urge caution in inferring the nature of the neural code from such methods. We discuss ways to overcome these inferential limitations, including model comparison, absolute model performance, visualization techniques and attentional modulation.
Affiliation(s)
- Vencislav Popov: Department of Psychology, Carnegie Mellon University, 5000 Forbes Avenue, Baker Hall, 15289, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, 4400 Fifth Ave, 15213, Pittsburgh, PA, USA
- Markus Ostarek: Max Planck Institute for Psycholinguistics, PO Box 310, 6500 AH Nijmegen, The Netherlands
- Caitlin Tenison: Department of Psychology, Carnegie Mellon University, 5000 Forbes Avenue, Baker Hall, 15289, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, 4400 Fifth Ave, 15213, Pittsburgh, PA, USA
40
Focal versus distributed temporal cortex activity for speech sound category assignment. Proc Natl Acad Sci U S A 2018; 115:E1299-E1308. [PMID: 29363598] [PMCID: PMC5819402] [DOI: 10.1073/pnas.1714279115]
Abstract
When listening to speech, phonemes are represented in a distributed fashion in our temporal and prefrontal cortices. How these representations are selected in a phonemic decision context, and in particular whether distributed or focal neural information is required for explicit phoneme recognition, is unclear. We hypothesized that focal and early neural encoding of acoustic signals is sufficiently informative to access speech sound representations and permit phoneme recognition. We tested this hypothesis by combining a simple speech-phoneme categorization task with univariate and multivariate analyses of fMRI, magnetoencephalography, intracortical, and clinical data. We show that neural information available focally in the temporal cortex prior to decision-related neural activity is specific enough to account for human phonemic identification.

Percepts and words can be decoded from distributed neural activity measures. However, the existence of widespread representations might conflict with the more classical notions of hierarchical processing and efficient coding, which are especially relevant in speech processing. Using fMRI and magnetoencephalography during syllable identification, we show that sensory and decisional activity colocalize to a restricted part of the posterior superior temporal gyrus (pSTG). Next, using intracortical recordings, we demonstrate that early and focal neural activity in this region distinguishes correct from incorrect decisions and can be machine-decoded to classify syllables. Crucially, significant machine decoding was possible from neuronal activity sampled across different regions of the temporal and frontal lobes, despite weak or absent sensory or decision-related responses. These findings show that speech-sound categorization relies on an efficient readout of focal pSTG neural activity, while more distributed activity patterns, although classifiable by machine learning, instead reflect collateral processes of sensory perception and decision.
41
Can Lextale-Esp discriminate between groups of highly proficient Catalan-Spanish bilinguals with different language dominances? Behav Res Methods 2017; 49:717-723. [PMID: 27004486] [DOI: 10.3758/s13428-016-0728-y]
Abstract
Researchers have recently introduced various LexTALE-type word recognition tests in order to assess vocabulary size in a second language (L2) mastered by participants. These tests correlate well with other measures of language proficiency in unbalanced bilinguals whose second language ability is well below the level of their native language. In the present study, we investigated whether LexTALE-type tests also discriminate at the high end of the proficiency range. In several regions of Spain, people speak both the regional language (e.g., Catalan or Basque) and Spanish to very high degrees. Still, because of their living circumstances, some consider themselves as either Spanish-dominant or regional-language dominant. We showed that these two groups perform differently on the recently published Spanish Lextale-Esp: The Spanish-dominant group had significantly higher scores than the Catalan-dominant group. We also showed that the noncognate words of the test have the highest discrimination power. This indicates that the existing Lextale-Esp can be used to estimate proficiency differences in highly proficient bilinguals with Spanish as an L2, and that a more sensitive test could be built by replacing the cognates.
42
Yang Y, Wang J, Bailer C, Cherkassky V, Just MA. Commonalities and differences in the neural representations of English, Portuguese, and Mandarin sentences: When knowledge of the brain-language mappings for two languages is better than one. Brain Lang 2017; 175:77-85. [PMID: 29045921] [DOI: 10.1016/j.bandl.2017.09.007]
Abstract
This study extended cross-language semantic decoding (based on a concept's fMRI signature) to the decoding of sentences across three different languages (English, Portuguese and Mandarin). A classifier was trained on either the mapping between words and activation patterns in one language or the mappings in two languages (using an equivalent amount of training data), and then tested on its ability to decode the semantic content of a third language. The model trained on two languages was reliably more accurate than a classifier trained on one language for all three pairs of languages. This two-language advantage was selective to abstract concept domains such as social interactions and mental activity. Representational Similarity Analyses (RSA) of the inter-sentence neural similarities resulted in similar clustering of sentences in all three languages, indicating a shared neural concept space among languages. These findings identify semantic domains that are common across these three languages versus those that are more language- or culture-specific.
Affiliation(s)
- Ying Yang: Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA
- Jing Wang: Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA
- Cyntia Bailer: Department of Foreign Language and Literature, Federal University of Santa Catarina, Brazil
- Marcel Adam Just: Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA
43
Dehghani M, Boghrati R, Man K, Hoover J, Gimbel SI, Vaswani A, Zevin JD, Immordino-Yang MH, Gordon AS, Damasio A, Kaplan JT. Decoding the neural representation of story meanings across languages. Hum Brain Mapp 2017; 38:6096-6106. [PMID: 28940969] [PMCID: PMC6867091] [DOI: 10.1002/hbm.23814]
Abstract
Drawing from a common lexicon of semantic units, humans fashion narratives whose meaning transcends that of their individual utterances. However, while brain regions that represent lower-level semantic units, such as words and sentences, have been identified, questions remain about the neural representation of narrative comprehension, which involves inferring cumulative meaning. To address these questions, we exposed English, Mandarin, and Farsi native speakers to native language translations of the same stories during fMRI scanning. Using a new technique in natural language processing, we calculated the distributed representations of these stories (capturing the meaning of the stories in high-dimensional semantic space), and demonstrate that using these representations we can identify the specific story a participant was reading from the neural data. Notably, this was possible even when the distributed representations were calculated using stories in a different language than the participant was reading. Our results reveal that identification relied on a collection of brain regions most prominently located in the default mode network. These results demonstrate that neuro-semantic encoding of narratives happens at levels higher than individual semantic units and that this encoding is systematic across both individuals and languages.
Affiliation(s)
- Kingson Man: University of Southern California, Los Angeles, CA
- Joe Hoover: University of Southern California, Los Angeles, CA
44
Van de Putte E, De Baene W, Brass M, Duyck W. Neural overlap of L1 and L2 semantic representations in speech: A decoding approach. Neuroimage 2017; 162:106-116. [DOI: 10.1016/j.neuroimage.2017.08.082]
45
Bocquelet F, Hueber T, Girin L, Chabardès S, Yvert B. Key considerations in designing a speech brain-computer interface. J Physiol Paris 2017; 110:392-401. [PMID: 28756027] [DOI: 10.1016/j.jphysparis.2017.07.002]
Abstract
Restoring communication in cases of aphasia is a key challenge for neurotechnologies. To this end, brain-computer strategies can be envisioned to allow artificial speech synthesis from the continuous decoding of neural signals underlying speech imagination. Such speech brain-computer interfaces do not yet exist, and their design should consider three key choices: the choice of appropriate brain regions to record neural activity from, the choice of an appropriate recording technique, and the choice of a neural decoding scheme in association with an appropriate speech synthesis method. These key considerations are discussed here in light of (1) the current understanding of the functional neuroanatomy of cortical areas underlying overt and covert speech production, (2) the available literature making use of a variety of brain recording techniques to better characterize and address the challenge of decoding cortical speech signals, and (3) the different speech synthesis approaches that can be considered depending on the level of speech representation (phonetic, acoustic or articulatory) envisioned to be decoded at the core of a speech BCI paradigm.
Affiliation(s)
- Florent Bocquelet: INSERM, BrainTech Laboratory U1205, F-38000 Grenoble, France; Univ. Grenoble Alpes, BrainTech Laboratory U1205, F-38000 Grenoble, France
- Thomas Hueber: Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, 38000 Grenoble, France
- Laurent Girin: Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, 38000 Grenoble, France
- Blaise Yvert: INSERM, BrainTech Laboratory U1205, F-38000 Grenoble, France; Univ. Grenoble Alpes, BrainTech Laboratory U1205, F-38000 Grenoble, France
46
Xu M, Baldauf D, Chang CQ, Desimone R, Tan LH. Distinct distributed patterns of neural activity are associated with two languages in the bilingual brain. Sci Adv 2017; 3:e1603309. [PMID: 28706990] [PMCID: PMC5507633] [DOI: 10.1126/sciadv.1603309]
Abstract
A large body of previous neuroimaging studies suggests that multiple languages are processed and organized in a single neuroanatomical system in the bilingual brain, although differential activation may be seen in some studies because of different proficiency levels and/or age of acquisition of the two languages. However, one important possibility is that the two languages may involve interleaved but functionally independent neural populations within a given cortical region, and thus, distinct patterns of neural computations may be pivotal for the processing of the two languages. Using functional magnetic resonance imaging (fMRI) and multivariate pattern analyses, we tested this possibility in Chinese-English bilinguals when they performed an implicit reading task. We found a broad network of regions wherein the two languages evoked different patterns of activity, with only partially overlapping patterns of voxels in a given region. These regions, including the middle occipital cortices, fusiform gyri, and lateral temporal, temporoparietal, and prefrontal cortices, are associated with multiple aspects of language processing. The results suggest the functional independence of neural computations underlying the representations of different languages in bilinguals.
Affiliation(s)
- Min Xu: Neuroimaging Laboratory, School of Biomedical Engineering, Shenzhen University Health Science Center, Shenzhen 518060, China; Center for Language and Brain, Shenzhen Institute of Neuroscience, Shenzhen 518057, China
- Daniel Baldauf: McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento 38100, Italy
- Chun Qi Chang: Neuroimaging Laboratory, School of Biomedical Engineering, Shenzhen University Health Science Center, Shenzhen 518060, China; Center for Language and Brain, Shenzhen Institute of Neuroscience, Shenzhen 518057, China
- Robert Desimone: McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Li Hai Tan: Neuroimaging Laboratory, School of Biomedical Engineering, Shenzhen University Health Science Center, Shenzhen 518060, China; Center for Language and Brain, Shenzhen Institute of Neuroscience, Shenzhen 518057, China
47
Li L, Abutalebi J, Emmorey K, Gong G, Yan X, Feng X, Zou L, Ding G. How bilingualism protects the brain from aging: Insights from bimodal bilinguals. Hum Brain Mapp 2017; 38:4109-4124. [PMID: 28513102] [DOI: 10.1002/hbm.23652]
Abstract
Bilingual experience can delay cognitive decline during aging. A general hypothesis is that the executive control system of bilinguals faces an increased load due to controlling two languages, and this increased load results in a more "tuned brain" that eventually creates a neural reserve. Here we explored whether such a neuroprotective effect is independent of language modality, i.e., not limited to bilinguals who speak two languages but also occurs for bilinguals who use a spoken and a signed language. We addressed this issue by comparing bimodal bilinguals to monolinguals in order to detect age-induced structural brain changes and to determine whether we can detect the same beneficial effects on brain structure, in terms of preservation of gray matter volume (GMV), for bimodal bilinguals as has been reported for unimodal bilinguals. Our GMV analyses revealed a significant interaction effect of age × group in the bilateral anterior temporal lobes, left hippocampus/amygdala, and left insula where bimodal bilinguals showed slight GMV increases while monolinguals showed significant age-induced GMV decreases. We further found through cortical surface-based measurements that this effect was present for surface area and not for cortical thickness. Moreover, to further explore the hypothesis that overall bilingualism provides neuroprotection, we carried out a direct comparison of GMV, extracted from the brain regions reported above, between bimodal bilinguals, unimodal bilinguals, and monolinguals. Bilinguals, regardless of language modality, exhibited higher GMV compared to monolinguals. This finding highlights the general beneficial effects provided by experience handling two language systems, whether signed or spoken.
Affiliation(s)
- Le Li: State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, People's Republic of China
- Jubin Abutalebi: Centre for Neurolinguistics and Psycholinguistics, University Vita Salute San Raffaele, Milan, Italy
- Karen Emmorey: Laboratory for Language and Cognitive Neuroscience, School of Speech, Language, and Hearing Sciences, San Diego State University, San Diego, California
- Gaolang Gong: State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, People's Republic of China
- Xin Yan: State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, People's Republic of China
- Xiaoxia Feng: State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, People's Republic of China
- Lijuan Zou: College of Psychology and Education, Zaozhuang University, Zaozhuang, 277100, People's Republic of China
- Guosheng Ding: State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, People's Republic of China
48
Vu AT, Phillips JS, Kay K, Phillips ME, Johnson MR, Shinkareva SV, Tubridy S, Millin R, Grossman M, Gureckis T, Bhattacharyya R, Yacoub E. Using precise word timing information improves decoding accuracy in a multiband-accelerated multimodal reading experiment. Cogn Neuropsychol 2017; 33:265-275. [PMID: 27686111] [DOI: 10.1080/02643294.2016.1195343]
Abstract
The blood-oxygen-level-dependent (BOLD) signal measured in functional magnetic resonance imaging (fMRI) experiments is generally regarded as sluggish and poorly suited for probing neural function at the rapid timescales involved in sentence comprehension. However, recent studies have shown the value of acquiring data with very short repetition times (TRs), not merely in terms of improvements in contrast-to-noise ratio (CNR) through averaging, but also in terms of additional fine-grained temporal information. Using multiband-accelerated fMRI, we achieved whole-brain scans at 3-mm resolution with a TR of just 500 ms at both 3T and 7T field strengths. By taking advantage of word timing information, we found that word decoding accuracy across two separate sets of scan sessions improved significantly, with better overall performance at 7T than at 3T. The effect of TR was also investigated; we found that substantial word timing information can be extracted using fast TRs, with diminishing benefits beyond TRs of 1000 ms.
Affiliation(s)
- An T Vu: Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, USA
- Jeffrey S Phillips: Department of Neurology, University of Pennsylvania, Philadelphia, PA, USA
- Kendrick Kay: Department of Psychology, Washington University in St. Louis, St. Louis, MO, USA
- Shannon Tubridy: Department of Psychology, New York University, New York, NY, USA
- Murray Grossman: Department of Neurology, University of Pennsylvania, Philadelphia, PA, USA
- Todd Gureckis: Department of Psychology, New York University, New York, NY, USA
- Essa Yacoub: Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, USA
49
Evans S. What Has Replication Ever Done for Us? Insights from Neuroimaging of Speech Perception. Front Hum Neurosci 2017; 11:41. [PMID: 28203154] [PMCID: PMC5285370] [DOI: 10.3389/fnhum.2017.00041]
Affiliation(s)
- Samuel Evans: Institute of Cognitive Neuroscience, University College London, London, UK; Department of Psychology, University of Westminster, London, UK
50
Murphy C, Rueschemeyer SA, Watson D, Karapanagiotidis T, Smallwood J, Jefferies E. Fractionating the anterior temporal lobe: MVPA reveals differential responses to input and conceptual modality. Neuroimage 2016; 147:19-31. [PMID: 27908787] [PMCID: PMC5315053] [DOI: 10.1016/j.neuroimage.2016.11.067]
Abstract
Words activate cortical regions in accordance with their modality of presentation (i.e., written vs. spoken), yet there is a long-standing debate about whether patterns of activity in any specific brain region capture modality-invariant conceptual information. Deficits in patients with semantic dementia highlight the anterior temporal lobe (ATL) as an amodal store of semantic knowledge but these studies do not permit precise localisation of this function. The current investigation used multiple imaging methods in healthy participants to examine functional dissociations within ATL. Multi-voxel pattern analysis identified spatially segregated regions: a response to input modality in anterior superior temporal gyrus (aSTG) and a response to meaning in more ventral anterior temporal lobe (vATL). This functional dissociation was supported by resting-state connectivity that found greater coupling for aSTG with primary auditory cortex and vATL with the default mode network. A meta-analytic decoding of these connectivity patterns implicated aSTG in processes closely tied to auditory processing (such as phonology and language) and vATL in meaning-based tasks (such as comprehension or social cognition). Thus we provide converging evidence for the segregation of meaning and input modality in the ATL.
Affiliation(s)
- Charlotte Murphy: Department of Psychology and York Neuroimaging Centre, University of York, UK
- David Watson: Department of Psychology and York Neuroimaging Centre, University of York, UK
- Jonathan Smallwood: Department of Psychology and York Neuroimaging Centre, University of York, UK
- Elizabeth Jefferies: Department of Psychology and York Neuroimaging Centre, University of York, UK