1. Deniz F, Tseng C, Wehbe L, Dupré la Tour T, Gallant JL. Semantic Representations during Language Comprehension Are Affected by Context. J Neurosci 2023; 43:3144-3158. PMID: 36973013; PMCID: PMC10146529; DOI: 10.1523/jneurosci.2459-21.2023.
Abstract
The meaning of words in natural language depends crucially on context. However, most neuroimaging studies of word meaning use isolated words and isolated sentences with little context. Because the brain may process natural language differently from how it processes simplified stimuli, there is a pressing need to determine whether prior results on word meaning generalize to natural language. fMRI was used to record human brain activity while four subjects (two female) read words in four conditions that vary in context: narratives, isolated sentences, blocks of semantically similar words, and isolated words. We then compared the signal-to-noise ratio (SNR) of evoked brain responses, and we used a voxelwise encoding modeling approach to compare the representation of semantic information across the four conditions. We find four consistent effects of varying context. First, stimuli with more context evoke brain responses with higher SNR across bilateral visual, temporal, parietal, and prefrontal cortices compared with stimuli with little context. Second, increasing context increases the representation of semantic information across bilateral temporal, parietal, and prefrontal cortices at the group level. In individual subjects, only natural language stimuli consistently evoke widespread representation of semantic information. Third, context affects voxel semantic tuning. Finally, models estimated using stimuli with little context do not generalize well to natural language. These results show that context has large effects on the quality of neuroimaging data and on the representation of meaning in the brain. Thus, neuroimaging studies that use stimuli with little context may not generalize well to the natural regime.
SIGNIFICANCE STATEMENT: Context is an important part of understanding the meaning of natural language, but most neuroimaging studies of meaning use isolated words and isolated sentences with little context. Here, we examined whether the results of neuroimaging studies that use out-of-context stimuli generalize to natural language. We find that increasing context improves the quality of neuroimaging data and changes where and how semantic information is represented in the brain. These results suggest that findings from studies using out-of-context stimuli may not generalize to natural language used in daily life.
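The voxelwise encoding-model approach mentioned in this abstract can be illustrated with a minimal sketch on simulated data: fit a regularized linear map from stimulus features to each voxel's response, then score prediction accuracy on held-out data. The dimensions, noise level, ridge penalty, and closed-form solver below are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 200 fMRI time points, 10 semantic
# features, 50 voxels (real studies use thousands of each).
T, F, V = 200, 10, 50
X = rng.standard_normal((T, F))           # stimulus feature matrix
W_true = rng.standard_normal((F, V))      # unknown voxel tuning
Y = X @ W_true + 0.5 * rng.standard_normal((T, V))  # noisy responses

# Split into estimation and validation sets, as in encoding studies.
X_tr, X_te = X[:150], X[150:]
Y_tr, Y_te = Y[:150], Y[150:]

def ridge_fit(X, Y, alpha=1.0):
    """Closed-form ridge regression: one weight vector per voxel."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ Y)

W = ridge_fit(X_tr, Y_tr)
Y_hat = X_te @ W

# Voxelwise prediction accuracy: correlation between predicted and
# observed held-out responses.
r = np.array([np.corrcoef(Y_hat[:, v], Y_te[:, v])[0, 1] for v in range(V)])
print(round(float(r.mean()), 2))
```

Comparing such held-out prediction accuracy across stimulus conditions is one way to quantify how strongly semantic features are represented in each voxel.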
Affiliation(s)
- Fatma Deniz
- Helen Wills Neuroscience Institute, University of California, Berkeley, California 94720
- Institute of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Berlin 10623, Germany
- Christine Tseng
- Helen Wills Neuroscience Institute, University of California, Berkeley, California 94720
- Leila Wehbe
- Machine Learning Department, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213
- Tom Dupré la Tour
- Helen Wills Neuroscience Institute, University of California, Berkeley, California 94720
- Jack L Gallant
- Helen Wills Neuroscience Institute, University of California, Berkeley, California 94720
- Department of Psychology, University of California, Berkeley, California 94720
2. Aguirre-Celis N, Miikkulainen R. How the Brain Dynamically Constructs Sentence-Level Meanings From Word-Level Features. Front Artif Intell 2022; 5:733163. PMID: 35527795; PMCID: PMC9069966; DOI: 10.3389/frai.2022.733163.
Abstract
How are words connected to the thoughts they help to express? Recent brain imaging studies suggest that word representations are embodied in different neural systems through which the words are experienced. Building on this idea, embodied approaches such as the Concept Attribute Representations (CAR) theory represent concepts as a set of semantic features (attributes) mapped to different brain systems. An intriguing challenge to this theory is that people weigh concept attributes differently based on context, i.e., they construct meaning dynamically according to the combination of concepts that occur in the sentence. This research addresses this challenge through the Context-dEpendent meaning REpresentations in the BRAin (CEREBRA) neural network model. Based on changes in the brain images, CEREBRA quantifies the effect of sentence context on word meanings. Computational experiments demonstrated that words in different contexts have different representations, that the changes observed in the concept attributes reveal unique conceptual combinations, and that the new representations are more similar to the other words in the sentence than to the original representations. Behavioral analysis further confirmed that the changes produced by CEREBRA are actionable knowledge that can be used to predict human responses. These experiments constitute a comprehensive evaluation of CEREBRA's context-based representations, showing that CARs can be dynamic and change based on context. Thus, CEREBRA is a useful tool for understanding how word meanings are represented in the brain, providing a framework for future interdisciplinary research on the mental lexicon.
Affiliation(s)
- Nora Aguirre-Celis
- Department of Computer Science, ITESM, Monterrey, Mexico
- Department of Computer Science, The University of Texas at Austin, Austin, TX, United States
- Risto Miikkulainen
- Department of Computer Science, The University of Texas at Austin, Austin, TX, United States
3. Rybář M, Daly I. Neural decoding of semantic concepts: A systematic literature review. J Neural Eng 2022; 19. PMID: 35344941; DOI: 10.1088/1741-2552/ac619a.
Abstract
Objective: Semantic concepts are coherent entities within our minds. They underpin our thought processes and are part of the basis for our understanding of the world. Modern neuroscience research is increasingly exploring how individual semantic concepts are encoded within our brains, and a number of studies are beginning to reveal key patterns of neural activity that underpin specific concepts. Building upon this basic understanding of the process of semantic neural encoding, neural engineers are beginning to explore tools and methods for semantic decoding: identifying which semantic concepts an individual is focused on at a given moment in time from recordings of their neural activity. In this paper we review the current literature on semantic neural decoding.
Approach: We conducted this review according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Specifically, we assess the eligibility of published peer-reviewed reports via a search of PubMed and Google Scholar. We identify a total of 74 studies in which semantic neural decoding is used to attempt to identify individual semantic concepts from neural activity.
Results: Our review reveals how modern neuroscientific tools have been developed to allow decoding of individual concepts from a range of neuroimaging modalities. We discuss specific neuroimaging methods, experimental designs, and machine learning pipelines that are employed to aid the decoding of semantic concepts. We quantify the efficacy of semantic decoders by measuring information transfer rates. We also discuss current challenges presented by this research area and present some possible solutions. Finally, we discuss some possible emerging and speculative future directions for this research area.
Significance: Semantic decoding is a rapidly growing area of research. However, despite its increasingly widespread popularity and use in neuroscientific research, this is the first literature review focusing on this topic across neuroimaging modalities and with a focus on quantifying the efficacy of semantic decoders.
Affiliation(s)
- Milan Rybář
- School of Computer Science and Electronic Engineering, University of Essex, Wivenhoe Park, Colchester, Essex, CO4 3SQ, United Kingdom
- Ian Daly
- School of Computer Science and Electronic Engineering, University of Essex, Wivenhoe Park, Colchester, Essex, CO4 3SQ, United Kingdom
4. Asyraff A, Lemarchand R, Tamm A, Hoffman P. Stimulus-independent neural coding of event semantics: Evidence from cross-sentence fMRI decoding. Neuroimage 2021; 236:118073. PMID: 33878380; PMCID: PMC8270886; DOI: 10.1016/j.neuroimage.2021.118073.
Abstract
Multivariate neuroimaging studies indicate that the brain represents word and object concepts in a format that readily generalises across stimuli. Here we investigated whether this was true for neural representations of simple events described using sentences. Participants viewed sentences describing four events in different ways. Multivariate classifiers were trained to discriminate the four events using a subset of sentences, allowing us to test generalisation to novel sentences. We found that neural patterns in a left-lateralised network of frontal, temporal and parietal regions discriminated events in a way that generalised successfully over changes in the syntactic and lexical properties of the sentences used to describe them. In contrast, decoding in visual areas was sentence-specific and failed to generalise to novel sentences. In the reverse analysis, we tested for decoding of syntactic and lexical structure, independent of the event being described. Regions displaying this coding were limited and largely fell outside the canonical semantic network. Our results indicate that a distributed neural network represents the meaning of event sentences in a way that is robust to changes in their structure and form. They suggest that the semantic system disregards the surface properties of stimuli in order to represent their underlying conceptual significance.
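The cross-sentence generalization logic described above can be sketched with simulated data: train a classifier on patterns evoked by some sentences describing each event and test it on held-out sentences with different surface forms. The event counts, noise level, and nearest-centroid classifier below are illustrative assumptions standing in for the study's actual multivariate classifiers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 4 events, each described by 6 different
# sentences; each sentence evokes a 100-dimensional voxel pattern
# that is a noisy copy of its event's prototype.
n_events, n_sent, n_vox = 4, 6, 100
prototypes = rng.standard_normal((n_events, n_vox))
patterns = (np.repeat(prototypes, n_sent, axis=0)
            + 0.8 * rng.standard_normal((n_events * n_sent, n_vox)))
labels = np.repeat(np.arange(n_events), n_sent)

# Cross-sentence generalization: train on the first 4 sentences per
# event, test on the 2 held-out sentences (novel surface forms).
train = np.tile([True] * 4 + [False] * 2, n_events)
test = ~train

# Nearest-centroid classifier with correlation similarity, a simple
# stand-in for the classifiers used in multivariate decoding.
centroids = np.array([patterns[train & (labels == e)].mean(axis=0)
                      for e in range(n_events)])

def classify(x):
    sims = [np.corrcoef(x, c)[0, 1] for c in centroids]
    return int(np.argmax(sims))

preds = np.array([classify(p) for p in patterns[test]])
acc = float((preds == labels[test]).mean())
print(acc)  # chance level is 0.25
```

Above-chance accuracy on the held-out sentences is the signature of stimulus-independent event coding; sentence-specific coding, as reported for visual areas, would fail this test.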
Affiliation(s)
- Aliff Asyraff
- School of Philosophy, Psychology & Language Sciences, University of Edinburgh, 7 George Square, Edinburgh, EH8 9JZ, UK
- Rafael Lemarchand
- School of Philosophy, Psychology & Language Sciences, University of Edinburgh, 7 George Square, Edinburgh, EH8 9JZ, UK
- Andres Tamm
- School of Philosophy, Psychology & Language Sciences, University of Edinburgh, 7 George Square, Edinburgh, EH8 9JZ, UK
- Paul Hoffman
- School of Philosophy, Psychology & Language Sciences, University of Edinburgh, 7 George Square, Edinburgh, EH8 9JZ, UK
5. Anderson AJ, Lalor EC, Lin F, Binder JR, Fernandino L, Humphries CJ, Conant LL, Raizada RDS, Grimm S, Wang X. Multiple Regions of a Cortical Network Commonly Encode the Meaning of Words in Multiple Grammatical Positions of Read Sentences. Cereb Cortex 2020; 29:2396-2411. PMID: 29771323; DOI: 10.1093/cercor/bhy110.
Abstract
Deciphering how sentence meaning is represented in the brain remains a major challenge to science. Semantically related neural activity has recently been shown to arise concurrently in distributed brain regions as successive words in a sentence are read. However, what semantic content is represented by different regions, what is common across them, and how this relates to words in different grammatical positions of sentences remains poorly understood. To address these questions, we apply a semantic model of word meaning to interpret brain activation patterns elicited in sentence reading. The model is based on human ratings of 65 sensory/motor/emotional and cognitive features of experience with words (and their referents). Through a process of mapping functional magnetic resonance imaging (fMRI) activation back into model space we test: which brain regions semantically encode content words in different grammatical positions (e.g., subject/verb/object); and what semantic features are encoded by different regions. In left temporal, inferior parietal, and inferior/superior frontal regions we detect the semantic encoding of words in all grammatical positions tested and reveal multiple common components of semantic representation. This suggests that sentence comprehension involves a common core representation of multiple words' meaning being encoded in a network of regions distributed across the brain.
Collapse
Affiliation(s)
| | - Edmund C Lalor
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA.,Department of Neuroscience, University of Rochester, Rochester, NY, USA.,School of Engineering, Trinity Centre for Bioengineering, and Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
| | - Feng Lin
- School of Nursing, University of Rochester, Rochester, NY, USA.,Psychiatry, University of Rochester, Rochester, NY, USA
| | - Jeffrey R Binder
- Department of Neurology, Medical College of Wisconsin, Milwaukee, WI, USA
| | | | - Colin J Humphries
- Department of Neurology, Medical College of Wisconsin, Milwaukee, WI, USA
| | - Lisa L Conant
- Department of Neurology, Medical College of Wisconsin, Milwaukee, WI, USA
| | - Rajeev D S Raizada
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
| | - Scott Grimm
- Department of Linguistics, University of Rochester, Rochester, NY, USA
| | - Xixi Wang
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA
| |
6. Frankland SM, Greene JD. Two Ways to Build a Thought: Distinct Forms of Compositional Semantic Representation across Brain Regions. Cereb Cortex 2020; 30:3838-3855. PMID: 32279078; DOI: 10.1093/cercor/bhaa001.
Abstract
To understand a simple sentence such as "the woman chased the dog", the human mind must dynamically organize the relevant concepts to represent who did what to whom. This structured recombination of concepts (woman, dog, chased) enables the representation of novel events, and is thus a central feature of intelligence. Here, we use functional magnetic resonance imaging (fMRI) and encoding models to delineate the contributions of three brain regions to the representation of relational combinations. We identify a region of anterior-medial prefrontal cortex (amPFC) that shares representations of noun-verb conjunctions across sentences: for example, a combination of "woman" and "chased" to encode woman-as-chaser, distinct from woman-as-chasee. This PFC region differs from the left-mid superior temporal cortex (lmSTC) and hippocampus, two regions previously implicated in representing relations. lmSTC represents broad role combinations that are shared across verbs (e.g., woman-as-agent), rather than narrow roles limited to specific actions (woman-as-chaser). By contrast, a hippocampal sub-region represents events sharing narrow conjunctions as dissimilar. The success of the hippocampal conjunctive encoding model is anti-correlated with generalization performance in amPFC on a trial-by-trial basis, consistent with a pattern separation mechanism. Thus, these three regions appear to play distinct, but complementary, roles in encoding compositional event structure.
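The noun-verb conjunction idea above can be sketched with a toy binding scheme: tie a concept vector to a role vector via an outer product, so that the same concept yields distinct patterns in different roles. The 8-dimensional vectors, arbitrary role vectors, and outer-product binding are illustrative assumptions, not the paper's encoding models.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 8-dimensional concept and role vectors (real models
# use richer, learned feature spaces).
dim = 8
woman = rng.standard_normal(dim)
chaser_role = rng.standard_normal(dim)   # woman chases
chasee_role = rng.standard_normal(dim)   # woman is chased

def bind(concept, role):
    """Bind a concept to a role via the outer product, flattened.
    One simple way to realize role-specific conjunctive codes."""
    return np.outer(concept, role).ravel()

woman_as_chaser = bind(woman, chaser_role)
woman_as_chasee = bind(woman, chasee_role)

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Identical ingredients, different bindings: the two patterns are
# far from identical, as required to represent who did what to whom.
sim = cos(woman_as_chaser, woman_as_chasee)
print(round(sim, 2))
```

A purely additive code (woman + chased) would collapse the two readings into one pattern; conjunctive binding is what keeps woman-as-chaser and woman-as-chasee distinct.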
Affiliation(s)
- Steven M Frankland
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08540
- Joshua D Greene
- Department of Psychology, Center for Brain Science, Harvard University, Cambridge, MA 02138
7. Information-Processing Model of Concept Formation – Is First Language Acquisition Universal? Cybernetics and Information Technologies 2018. DOI: 10.2478/cait-2018-0035.
Abstract
The analysis of child speech corpora shows that the process of acquisition of English and French displays identical development of children's expressions when speech utterances are represented as Fibonacci-weighted classes of concepts. A model of concept complexity and information processing based on principles of optimality is proposed to explain this statistical result.
8. Hamilton LS, Huth AG. The revolution will not be controlled: natural stimuli in speech neuroscience. Lang Cogn Neurosci 2018; 35:573-582. PMID: 32656294; PMCID: PMC7324135; DOI: 10.1080/23273798.2018.1499946.
Abstract
Humans have a unique ability to produce and consume rich, complex, and varied language in order to communicate ideas to one another. Still, outside of natural reading, the most common methods for studying how our brains process speech or understand language use only isolated words or simple sentences. Recent studies have upset this status quo by employing complex natural stimuli and measuring how the brain responds to language as it is used. In this article we argue that natural stimuli offer many advantages over simplified, controlled stimuli for studying how language is processed by the brain. Furthermore, the downsides of using natural language stimuli can be mitigated using modern statistical and computational techniques.
Affiliation(s)
- Liberty S. Hamilton
- Communication Sciences & Disorders, Moody College of Communication, The University of Texas at Austin, Austin, USA
- Department of Neurology, Dell Medical School, The University of Texas at Austin, Austin, USA
- Alexander G. Huth
- Department of Neuroscience, The University of Texas at Austin, Austin, USA
- Department of Computer Science, The University of Texas at Austin, Austin, USA
9. Wang J, Cherkassky VL, Just MA. Predicting the brain activation pattern associated with the propositional content of a sentence: Modeling neural representations of events and states. Hum Brain Mapp 2017; 38:4865-4881. PMID: 28653794; PMCID: PMC6867144; DOI: 10.1002/hbm.23692.
Abstract
Even though much has recently been learned about the neural representation of individual concepts and categories, neuroimaging research is only beginning to reveal how more complex thoughts, such as event and state descriptions, are neurally represented. We present a predictive computational theory of the neural representations of individual events and states as they are described in 240 sentences. Regression models were trained to determine the mapping between 42 neurally plausible semantic features (NPSFs) and thematic roles of the concepts of a proposition and the fMRI activation patterns of various cortical regions that process different types of information. Given a semantic characterization of the content of a sentence that is new to the model, the model can reliably predict the resulting neural signature, or, given an observed neural signature of a new sentence, the model can predict its semantic content. The models were also reliably generalizable across participants. This computational model provides an account of the brain representation of a complex yet fundamental unit of thought, namely, the conceptual content of a proposition. In addition to characterizing a sentence representation at the level of the semantic and thematic features of its component concepts, factor analysis was used to develop a higher-level characterization of a sentence, specifying the general type of event representation that the sentence evokes (e.g., a social interaction versus a change of physical state) and the voxel locations most strongly associated with each of the factors.
Affiliation(s)
- Jing Wang
- Center for Cognitive Brain Imaging, Psychology Department, Carnegie Mellon University, Pittsburgh, Pennsylvania
- Vladimir L Cherkassky
- Center for Cognitive Brain Imaging, Psychology Department, Carnegie Mellon University, Pittsburgh, Pennsylvania
- Marcel Adam Just
- Center for Cognitive Brain Imaging, Psychology Department, Carnegie Mellon University, Pittsburgh, Pennsylvania