1
Matchin W, Almeida D, Hickok G, Sprouse J. An fMRI study of phrase structure and subject island violations. bioRxiv 2024:2024.05.05.592579. [PMID: 38746262 PMCID: PMC11092748 DOI: 10.1101/2024.05.05.592579]
Abstract
In principle, functional neuroimaging provides uniquely informative data for addressing linguistic questions, because it can indicate distinct processes that are not apparent from behavioral data alone. This could involve adjudicating the source of unacceptability via the different patterns of brain responses elicited by different ungrammatical sentence types. However, brain activations to syntactic violations are difficult to interpret, because such responses could reflect processes that are not intrinsically related to linguistic representations, such as domain-general executive function. To facilitate the use of functional neuroimaging to identify the source of different syntactic violations, we conducted an fMRI experiment to identify the brain activation maps associated with two distinct syntactic violation types: phrase structure violations (created by inverting the order of two adjacent words within a sentence) and subject islands (created by extracting a wh-phrase out of an embedded subject). The comparison of these violations to control sentences surprisingly showed no indication of a generalized violation response, with almost completely divergent activation patterns for the two violation types. Phrase structure violations seemingly activated regions previously implicated in verbal working memory and structural complexity in sentence processing, whereas subject islands appeared to activate regions previously implicated in conceptual-semantic processing, broadly defined. We review our findings in the context of previous research on syntactic and semantic violations using event-related potentials. Although our results point to potentially distinct mechanisms underlying phrase structure and subject island violations, they are tentative and highlight important methodological considerations for future research in this area.
Affiliation(s)
- William Matchin
- Dept. of Communication Sciences and Disorders, University of South Carolina
- Diogo Almeida
- Program in Psychology, New York University Abu Dhabi
- Gregory Hickok
- Dept. of Cognitive Sciences and Dept. of Language Science, University of California, Irvine
- Jon Sprouse
- Program in Psychology, New York University Abu Dhabi
2
Engesser S, Ridley AR, Watson SK, Kita S, Townsend SW. Seeds of language-like generativity in bird call combinations. Proc Biol Sci 2024; 291:20240922. [PMID: 39412245 PMCID: PMC11521141 DOI: 10.1098/rspb.2024.0922]
Abstract
Language is unbounded in its generativity, enabling the flexible combination of words into novel sentences. Critically, these constructions are intelligible to others due to our ability to derive a sentence's compositional meaning from the semantic relationships among its components. Some animals also concatenate meaningful calls into compositional-like combinations to communicate more complex information. However, these combinations are structurally highly stereotyped, suggesting a bounded system of holistically perceived signals that impedes the processing of novel variants. Using long-term data and playback experiments on pied babblers, we demonstrate that, despite this production stereotypy, the birds can nevertheless process structurally modified and novel combinations of their calls, demonstrating a capacity for deriving meaning compositionally. Furthermore, differential responses to artificial combinations by fledglings suggest that this compositional sensitivity is acquired ontogenetically. Our findings demonstrate that animal combinatorial systems can be flexible at the perceptual level and that such perceptual flexibility may represent a precursor of language-like generativity.
Affiliation(s)
- Sabrina Engesser
- Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Affolternstrasse 56, Zurich 8050, Switzerland
- Department of Biology, University of Copenhagen, Universitetsparken 15, Copenhagen 2100, Denmark
- Amanda R. Ridley
- Centre for Evolutionary Biology, School of Biological Sciences, The University of Western Australia, 35 Stirling Highway, Perth, Western Australia 6009, Australia
- Percy FitzPatrick Institute of African Ornithology, University of Cape Town, Rondebosch, Cape Town 7701, South Africa
- Stuart K. Watson
- Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Affolternstrasse 56, Zurich 8050, Switzerland
- Department of Evolutionary Biology and Environmental Studies, University of Zurich, Winterthurerstrasse 190, Zurich 8057, Switzerland
- Department of Comparative Language Science, University of Zurich, Affolternstrasse 56, Zurich 8050, Switzerland
- Sotaro Kita
- Department of Psychology, University of Warwick, University Road, Coventry CV4 7AL, UK
- Simon W. Townsend
- Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Affolternstrasse 56, Zurich 8050, Switzerland
- Department of Psychology, University of Warwick, University Road, Coventry CV4 7AL, UK
- Department of Evolutionary Anthropology, University of Zurich, Winterthurerstrasse 190, Zurich 8057, Switzerland
3
Kauf C, Kim HS, Lee EJ, Jhingan N, Selena She J, Taliaferro M, Gibson E, Fedorenko E. Linguistic inputs must be syntactically parsable to fully engage the language network. bioRxiv 2024:2024.06.21.599332. [PMID: 38948870 PMCID: PMC11212959 DOI: 10.1101/2024.06.21.599332]
Abstract
Human language comprehension is remarkably robust to ill-formed inputs (e.g., word transpositions). This robustness has led some to argue that syntactic parsing is largely an illusion, and that incremental comprehension is more heuristic, shallow, and semantics-based than is often assumed. However, the available data are also consistent with the possibility that humans always perform rule-like symbolic parsing and simply deploy error correction mechanisms to reconstruct ill-formed inputs when needed. We put these hypotheses to a new, stringent test by examining brain responses to a) stimuli that should pose a challenge for syntactic reconstruction but allow for complex meanings to be built within local contexts through associative/shallow processing (sentences presented in a backward word order), and b) grammatically well-formed but semantically implausible sentences that should impede semantics-based heuristic processing. Using a novel behavioral syntactic reconstruction paradigm, we demonstrate that backward-presented sentences indeed impede the recovery of grammatical structure during incremental comprehension. Critically, these backward-presented stimuli elicit a relatively low response in the language areas, as measured with fMRI. In contrast, semantically implausible but grammatically well-formed sentences elicit a response in the language areas similar in magnitude to naturalistic (plausible) sentences. In other words, the ability to build syntactic structures during incremental language processing is both necessary and sufficient to fully engage the language network. Taken together, these results provide the strongest support to date for a generalized reliance of human language comprehension on syntactic parsing.
Affiliation(s)
- Carina Kauf
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- Hee So Kim
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- Elizabeth J. Lee
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- Niharika Jhingan
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- Jingyuan Selena She
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- Maya Taliaferro
- Department of Psychology, New York University, New York, NY 10012 USA
- Edward Gibson
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- Evelina Fedorenko
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- The Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA 02138 USA
4
Ozernov-Palchik O, O’Brien AM, Jiachen Lee E, Richardson H, Romeo R, Lipkin B, Small H, Capella J, Nieto-Castañón A, Saxe R, Gabrieli JDE, Fedorenko E. Precision fMRI reveals that the language network exhibits adult-like left-hemispheric lateralization by 4 years of age. bioRxiv 2024:2024.05.15.594172. [PMID: 38798360 PMCID: PMC11118489 DOI: 10.1101/2024.05.15.594172]
Abstract
Left hemisphere damage in adulthood often leads to linguistic deficits, but many cases of early damage leave linguistic processing preserved, and a functional language system can develop in the right hemisphere. To explain this early apparent equipotentiality of the two hemispheres for language, some have proposed that the language system is bilateral during early development and only becomes left-lateralized with age. We examined language lateralization using functional magnetic resonance imaging with two large pediatric cohorts (total n=273 children ages 4-16; n=107 adults). Strong, adult-level left-hemispheric lateralization (in activation volume and response magnitude) was evident by age 4. Thus, although the right hemisphere can take over language function in some cases of early brain damage, and although some features of the language system do show protracted development (magnitude of language response and strength of inter-regional correlations in the language network), the left-hemisphere bias for language is robustly present by 4 years of age. These results call for alternative accounts of early equipotentiality of the two hemispheres for language.
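The lateralization measure referenced here, comparing left- vs right-hemisphere activation volume, reduces to a simple index. Below is a minimal sketch of one common way to compute it, assuming voxelwise contrast statistics for each hemisphere are already available as arrays; the threshold, inputs, and function name are illustrative and not the paper's actual pipeline.

```python
import numpy as np

def lateralization_index(left_stats, right_stats, threshold=3.1):
    """Volume-based lateralization index (L - R) / (L + R) over suprathreshold voxels.

    left_stats, right_stats: voxelwise z- or t-values for a language contrast in the
    left and right hemispheres (illustrative inputs; actual pipelines differ).
    Returns a value in [-1, 1]; +1 = fully left-lateralized, -1 = fully right.
    """
    n_left = int(np.sum(left_stats > threshold))
    n_right = int(np.sum(right_stats > threshold))
    if n_left + n_right == 0:
        return float("nan")  # no suprathreshold activation in either hemisphere
    return (n_left - n_right) / (n_left + n_right)

# Toy usage with simulated statistics: stronger left-hemisphere responses
# yield an index close to +1.
rng = np.random.default_rng(0)
left = rng.normal(2.5, 1.5, size=10_000)
right = rng.normal(0.5, 1.5, size=10_000)
print(round(lateralization_index(left, right), 2))
```

An analogous index can be computed over response magnitudes by replacing the voxel counts with summed effect sizes within each hemisphere's language regions.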
Affiliation(s)
- Ola Ozernov-Palchik
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Amanda M. O’Brien
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA 02138, United States
- Elizabeth Jiachen Lee
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- Hilary Richardson
- School of Philosophy, Psychology, and Language Sciences, University of Edinburgh, Edinburgh, EH8 9JZ, United Kingdom
- Rachel Romeo
- Department of Human Development and Quantitative Methodology, University of Maryland, College Park, MD 20742, United States
- Benjamin Lipkin
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- Hannah Small
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, 21218, United States
- Jimmy Capella
- Department of Psychology and Neuroscience, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, United States
- Rebecca Saxe
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- John D. E. Gabrieli
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- Evelina Fedorenko
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
5
Shain C, Kean H, Casto C, Lipkin B, Affourtit J, Siegelman M, Mollica F, Fedorenko E. Distributed Sensitivity to Syntax and Semantics throughout the Language Network. J Cogn Neurosci 2024; 36:1427-1471. [PMID: 38683732 DOI: 10.1162/jocn_a_02164]
Abstract
Human language is expressive because it is compositional: The meaning of a sentence (semantics) can be inferred from its structure (syntax). It is commonly believed that language syntax and semantics are processed by distinct brain regions. Here, we revisit this claim using precision fMRI methods to capture separation or overlap of function in the brains of individual participants. Contrary to prior claims, we find distributed sensitivity to both syntax and semantics throughout a broad frontotemporal brain network. Our results join a growing body of evidence for an integrated network for language in the human brain within which internal specialization is primarily a matter of degree rather than kind, in contrast with influential proposals that advocate distinct specialization of different brain areas for different types of linguistic functions.
Affiliation(s)
- Hope Kean
- Massachusetts Institute of Technology
6
Rambelli G, Chersoni E, Testa D, Blache P, Lenci A. Neural Generative Models and the Parallel Architecture of Language: A Critical Review and Outlook. Top Cogn Sci 2024. [PMID: 38635667 DOI: 10.1111/tops.12733]
Abstract
According to the parallel architecture, syntactic and semantic information processing are two separate streams that interact selectively during language comprehension. While considerable effort in psycho- and neurolinguistics has gone into understanding the interplay of these processing mechanisms in human comprehension, the nature of this interaction in recent neural large language models remains elusive. In this article, we revisit influential linguistic and behavioral experiments and evaluate the ability of a large language model, GPT-3, to perform these tasks. The model can solve semantic tasks independently of their syntactic realization in a manner that resembles human behavior. However, the outcomes present a complex and variegated picture, leaving open the question of how language models could learn structured conceptual representations.
Affiliation(s)
- Giulia Rambelli
- Department of Modern Languages, Literatures, and Cultures, University of Bologna
- Emmanuele Chersoni
- Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University
- Alessandro Lenci
- Department of Philology, Literature, and Linguistics, University of Pisa
7
Zhang Y, Taft M, Tang J, Li L. Neural correlates of semantic-driven syntactic parsing in sentence comprehension. Neuroimage 2024; 289:120543. [PMID: 38369168 DOI: 10.1016/j.neuroimage.2024.120543]
Abstract
For sentence comprehension, information carried by the semantic relations between constituents must be combined with other information to decode the constituent structure of a sentence, given the atypical and noisy situations in which language is used. The neural correlates of decoding sentence structure from semantic information have remained largely unexplored. In this functional MRI study, we examine the neural basis of semantic-driven syntactic parsing during sentence reading and compare it with that of other types of syntactic parsing driven by word order and case marking. Chinese transitive sentences of various structures were investigated, differing in word order, case marking, and agent-patient semantic relations (i.e., same vs. different in animacy). For the non-canonical unmarked sentences without usable case marking, a semantic-driven effect triggered by agent-patient ambiguity was found in the left inferior frontal gyrus opercularis (IFGoper) and left inferior parietal lobule, with this activity not modulated by the naturalness of the sentences. The comparison of each type of non-canonical sentence with canonical sentences revealed that the non-canonicity effect engaged the left posterior frontal and temporal regions, in line with previous studies. No additional neural activity responsive to case marking was found within the non-canonical sentences. A word order effect across all sentence types was also found in the left IFGoper, suggesting a common neural substrate for different types of parsing. The semantic-driven effect was also observed for the non-canonical marked sentences but not for the canonical sentences, suggesting that semantic information is used in decoding sentence structure in addition to case marking. The current findings illustrate the neural correlates of syntactic parsing with semantics and provide neural evidence of how semantics, together with other information, facilitates syntax.
Affiliation(s)
- Yun Zhang
- Center for the Cognitive Science and Language, Beijing Language and Culture University, Beijing 100083, PR China
- Marcus Taft
- Center for the Cognitive Science and Language, Beijing Language and Culture University, Beijing 100083, PR China; School of Psychology, UNSW Sydney, Australia
- Jiaman Tang
- Center for the Cognitive Science and Language, Beijing Language and Culture University, Beijing 100083, PR China
- Le Li
- Center for the Cognitive Science and Language, Beijing Language and Culture University, Beijing 100083, PR China
8
Kauf C, Tuckute G, Levy R, Andreas J, Fedorenko E. Lexical-Semantic Content, Not Syntactic Structure, Is the Main Contributor to ANN-Brain Similarity of fMRI Responses in the Language Network. Neurobiology of Language 2024; 5:7-42. [PMID: 38645614 PMCID: PMC11025651 DOI: 10.1162/nol_a_00116]
Abstract
Representations from artificial neural network (ANN) language models have been shown to predict human brain activity in the language network. To understand what aspects of linguistic stimuli contribute to ANN-to-brain similarity, we used an fMRI data set of responses to n = 627 naturalistic English sentences (Pereira et al., 2018) and systematically manipulated the stimuli for which ANN representations were extracted. In particular, we (i) perturbed sentences' word order, (ii) removed different subsets of words, or (iii) replaced sentences with other sentences of varying semantic similarity. We found that the lexical-semantic content of the sentence (largely carried by content words) rather than the sentence's syntactic form (conveyed via word order or function words) is primarily responsible for the ANN-to-brain similarity. In follow-up analyses, we found that perturbation manipulations that adversely affect brain predictivity also lead to more divergent representations in the ANN's embedding space and decrease the ANN's ability to predict upcoming tokens in those stimuli. Further, results are robust to whether the mapping model is trained on intact or perturbed stimuli and to whether the ANN sentence representations are conditioned on the same linguistic context that humans saw. The critical result, that lexical-semantic content is the main contributor to the similarity between ANN representations and neural ones, aligns with the idea that the goal of the human language system is to extract meaning from linguistic strings. Finally, this work highlights the strength of systematic experimental manipulations for evaluating how close we are to accurate and generalizable models of the human language network.
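The pipeline described here, fitting a mapping from ANN sentence representations to voxel responses and measuring cross-validated predictivity under stimulus perturbations, can be sketched compactly. The snippet below is a schematic under the assumption that sentence embeddings and fMRI responses are already available as matrices; the ridge model, fold count, scrambling function, and toy data are illustrative, not the authors' exact procedure.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

def brain_predictivity(X, Y, n_splits=5, seed=0):
    """Cross-validated brain predictivity: fit a ridge mapping from ANN features X
    (n_sentences x n_features) to voxel responses Y (n_sentences x n_voxels), then
    average the per-voxel Pearson r between predicted and held-out responses."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    fold_scores = []
    for train, test in kf.split(X):
        model = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X[train], Y[train])
        pred = model.predict(X[test])
        rs = [np.corrcoef(pred[:, v], Y[test][:, v])[0, 1] for v in range(Y.shape[1])]
        fold_scores.append(np.nanmean(rs))
    return float(np.mean(fold_scores))

def scramble_word_order(sentence, rng):
    """Word-order perturbation: embeddings of scrambled sentences would then be
    compared with embeddings of intact ones via the same predictivity metric."""
    return " ".join(rng.permutation(sentence.split()))

# Toy stand-ins for real ANN embeddings and fMRI responses.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                                              # sentence embeddings
Y = X @ rng.normal(size=(50, 30)) + rng.normal(scale=2.0, size=(200, 30))   # voxel responses
print(round(brain_predictivity(X, Y), 2))
print(scramble_word_order("the dog chased the ball across the yard", rng))
```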
Affiliation(s)
- Carina Kauf
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Greta Tuckute
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Roger Levy
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Jacob Andreas
- Computer Science & Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Evelina Fedorenko
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA, USA
9
Juzek TS. Signal Smoothing and Syntactic Choices: A Critical Reflection on the UID Hypothesis. Open Mind (Camb) 2024; 8:217-234. [PMID: 38476664 PMCID: PMC10932588 DOI: 10.1162/opmi_a_00125]
Abstract
The Smooth Signal Redundancy Hypothesis explains variations in syllable length as a means to more uniformly distribute information throughout the speech signal. The Uniform Information Density hypothesis seeks to generalize this to choices on all linguistic levels, particularly syntactic choices. While there is some evidence for the Uniform Information Density hypothesis, it faces several challenges, four of which are discussed in this paper. First, it is not clear what exactly counts as uniform. Second, there are syntactic alternations that occur systematically but that can cause notable fluctuations in the information signature. Third, there is an increasing body of negative results. Fourth, there is a lack of large-scale evidence. Regarding the fourth point, this paper provides a broader array of data (936 sentence pairs for nine syntactic constructions) and analyzes them in a test setup that treats the hypothesis as a classifier. For our data, the Uniform Information Density hypothesis showed little predictive capacity. We explore ways to reconcile our data with theory.
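One way to make the hypothesis operational, in the spirit of the classifier-style test described above, is to score each member of a syntactic alternation by how uniform its per-word surprisal profile is under a language model and predict that speakers choose the more uniform variant. The sketch below uses GPT-2 via the Hugging Face transformers library (weights are downloaded on first use); the variance-of-surprisal measure is only one of several possible operationalizations of "uniform" (exactly the issue the paper raises), and the example sentence pair is made up.

```python
import math
import statistics
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def token_surprisals(sentence):
    """Surprisal (in bits) of each token given its left context."""
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logprobs = torch.log_softmax(model(ids).logits, dim=-1)
    # Align the prediction at position t-1 with the observed token at position t.
    nats = -logprobs[0, :-1, :].gather(1, ids[0, 1:, None]).squeeze(1)
    return (nats / math.log(2)).tolist()

def uid_score(sentence):
    """Lower variance of per-token surprisal = more uniform information density."""
    return statistics.pvariance(token_surprisals(sentence))

# Classifier-style prediction for a (hypothetical) that-omission alternation:
# the variant with the flatter surprisal profile is predicted to be preferred.
a = "I think that the results will replicate."
b = "I think the results will replicate."
print("UID prefers:", "a" if uid_score(a) < uid_score(b) else "b")
```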
Affiliation(s)
- Tom S. Juzek
- Department of Modern Languages and Linguistics, Florida State University, Tallahassee, FL, USA
10
Regev TI, Kim HS, Chen X, Affourtit J, Schipper AE, Bergen L, Mahowald K, Fedorenko E. High-level language brain regions process sublexical regularities. Cereb Cortex 2024; 34:bhae077. [PMID: 38494886 PMCID: PMC11486690 DOI: 10.1093/cercor/bhae077]
Abstract
A network of left frontal and temporal brain regions supports language processing. This "core" language network stores our knowledge of words and constructions as well as constraints on how those combine to form sentences. However, our linguistic knowledge additionally includes information about phonemes and how they combine to form phonemic clusters, syllables, and words. Are phoneme combinatorics also represented in these language regions? Across five functional magnetic resonance imaging experiments, we investigated the sensitivity of high-level language processing brain regions to sublexical linguistic regularities by examining responses to diverse nonwords: sequences of phonemes that do not constitute real words (e.g., punes, silory, flope). We establish robust responses in the language network to visually (experiment 1a, n = 605) and auditorily (experiments 1b, n = 12, and 1c, n = 13) presented nonwords. In experiment 2 (n = 16), we find stronger responses to nonwords that are more well-formed, i.e., obey the phoneme-combinatorial constraints of English. Finally, in experiment 3 (n = 14), we provide suggestive evidence that the responses in experiments 1 and 2 are not due to the activation of real words that share some phonology with the nonwords. The results suggest that sublexical regularities are stored and processed within the same fronto-temporal network that supports lexical and syntactic processes.
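The notion of nonword well-formedness used in experiment 2, how well a phoneme sequence obeys English phonotactics, is commonly quantified with n-gram probabilities estimated from a lexicon. Below is a minimal letter-bigram sketch of that idea; the tiny lexicon, smoothing constant, and use of letters instead of phonemes are all simplifications for illustration, not the study's actual measure.

```python
import math
from collections import Counter

def train_bigram_scorer(words, alpha=0.1):
    """Letter-bigram model with add-alpha smoothing, trained on a word list.
    Letters stand in for phonemes here, and the lexicon is tiny; a real
    analysis would use a phonemic dictionary of English."""
    bigrams, contexts = Counter(), Counter()
    alphabet = {"#"}
    for w in words:
        padded = f"#{w}#"                      # '#' marks word boundaries
        alphabet.update(padded)
        for a, b in zip(padded, padded[1:]):
            bigrams[(a, b)] += 1
            contexts[a] += 1
    vocab_size = len(alphabet)

    def logprob(w):
        padded = f"#{w}#"
        total = sum(
            math.log((bigrams[(a, b)] + alpha) / (contexts[a] + alpha * vocab_size))
            for a, b in zip(padded, padded[1:])
        )
        return total / (len(padded) - 1)       # length-normalized log probability
    return logprob

score = train_bigram_scorer(["plane", "pine", "lore", "slope", "flop", "salary", "pun", "silo"])
# Higher (less negative) = more English-like; the phonotactically illegal string should score worst.
for nonword in ["punes", "silory", "flope", "ngftk"]:
    print(nonword, round(score(nonword), 2))
```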
Affiliation(s)
- Tamar I Regev
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Hee So Kim
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Xuanyi Chen
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Department of Cognitive Sciences, Rice University, Houston, TX 77005, United States
- Josef Affourtit
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Abigail E Schipper
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- Leon Bergen
- Department of Linguistics, University of California San Diego, San Diego CA 92093, United States
- Kyle Mahowald
- Department of Linguistics, University of Texas at Austin, Austin, TX 78712, United States
- Evelina Fedorenko
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- The Harvard Program in Speech and Hearing Bioscience and Technology, Boston, MA 02115, United States
11
Arvidsson C, Torubarova E, Pereira A, Uddén J. Conversational production and comprehension: fMRI-evidence reminiscent of but deviant from the classical Broca-Wernicke model. Cereb Cortex 2024; 34:bhae073. [PMID: 38501383 PMCID: PMC10949358 DOI: 10.1093/cercor/bhae073]
Abstract
A key question in research on the neurobiology of language is to what extent the language production and comprehension systems share neural infrastructure, but this question has not been addressed in the context of conversation. We utilized a public fMRI dataset where 24 participants engaged in unscripted conversations with a confederate outside the scanner, via an audio-video link. We provide evidence indicating that the two systems share neural infrastructure in the left-lateralized perisylvian language network, but diverge regarding the level of activation in regions within the network. Activity in the left inferior frontal gyrus was stronger in production compared to comprehension, while comprehension showed stronger recruitment of the left anterior middle temporal gyrus and superior temporal sulcus, compared to production. Although our results are reminiscent of the classical Broca-Wernicke model, the anterior (rather than posterior) temporal activation is a notable difference from that model. This is one of the findings that may be a consequence of the conversational setting, another being that conversational production activated what we interpret as higher-level socio-pragmatic processes. In conclusion, we present evidence for partial overlap and functional asymmetry of the neural infrastructure of production and comprehension, in the above-mentioned frontal vs temporal regions during conversation.
Affiliation(s)
- Caroline Arvidsson
- Department of Linguistics, Stockholm University, Universitetsvägen 10 C, 114 18 Stockholm, Sweden
- Ekaterina Torubarova
- Division of Speech, Music, and Hearing, KTH Royal Institute of Technology, Lindstedtsvägen 24, 114 28 Stockholm, Sweden
- André Pereira
- Division of Speech, Music, and Hearing, KTH Royal Institute of Technology, Lindstedtsvägen 24, 114 28 Stockholm, Sweden
- Julia Uddén
- Department of Linguistics, Stockholm University, Universitetsvägen 10 C, 114 18 Stockholm, Sweden
- Department of Psychology, Stockholm University, Albanovägen 12, 114 19 Stockholm, Sweden
12
Pasquiou A, Lakretz Y, Thirion B, Pallier C. Information-Restricted Neural Language Models Reveal Different Brain Regions' Sensitivity to Semantics, Syntax, and Context. Neurobiology of Language 2023; 4:611-636. [PMID: 38144237 PMCID: PMC10745090 DOI: 10.1162/nol_a_00125]
Abstract
A fundamental question in neurolinguistics concerns the brain regions involved in syntactic and semantic processing during speech comprehension, both at the lexical (word processing) and supra-lexical levels (sentence and discourse processing). To what extent are these regions separated or intertwined? To address this question, we introduce a novel approach exploiting neural language models to generate high-dimensional feature sets that separately encode semantic and syntactic information. More precisely, we train a lexical language model, GloVe, and a supra-lexical language model, GPT-2, on a text corpus from which we selectively removed either syntactic or semantic information. We then assess to what extent the features derived from these information-restricted models are still able to predict the fMRI time courses of humans listening to naturalistic text. Furthermore, to determine the windows of integration of brain regions involved in supra-lexical processing, we manipulate the size of contextual information provided to GPT-2. The analyses show that, while most brain regions involved in language comprehension are sensitive to both syntactic and semantic features, the relative magnitudes of these effects vary across these regions. Moreover, regions that are best fitted by semantic or syntactic features are more spatially dissociated in the left hemisphere than in the right one, and the right hemisphere shows sensitivity to longer contexts than the left. The novelty of our approach lies in the ability to control for the information encoded in the models' embeddings by manipulating the training set. These "information-restricted" models complement previous studies that used language models to probe the neural bases of language, and shed new light on its spatial organization.
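The key idea, training language models on corpora from which either syntactic or semantic information has been stripped, can be illustrated with two toy text transformations: one that keeps structure but removes lexical meaning, and one that keeps the words but destroys structure. This is only a schematic of the "information-restricted" logic, with a hypothetical hand-built POS lookup; the paper's actual corpus manipulations and model training (GloVe, GPT-2) are considerably more involved.

```python
import random

# Hypothetical hand-built POS lookup used only for illustration; the actual
# study manipulated large training corpora and retrained GloVe / GPT-2 on them.
POS = {"the": "DET", "a": "DET", "dog": "NOUN", "bone": "NOUN", "cat": "NOUN",
       "chewed": "VERB", "chased": "VERB", "big": "ADJ", "white": "ADJ"}

def remove_semantics(sentence):
    """Keep the syntactic scaffolding, discard lexical meaning: content words
    are replaced by their part-of-speech category, function words are kept."""
    keep = {"DET"}
    return " ".join(w if POS.get(w) in keep else POS.get(w, "X") for w in sentence.split())

def remove_syntax(sentence, seed=0):
    """Keep the words (lexical semantics), destroy the structure: shuffle word order."""
    words = sentence.split()
    random.Random(seed).shuffle(words)
    return " ".join(words)

s = "the dog chewed the big bone"
print(remove_semantics(s))   # -> "the NOUN VERB the ADJ NOUN"
print(remove_syntax(s))      # -> same words in scrambled order
```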
Affiliation(s)
- Alexandre Pasquiou
- Cognitive Neuroimaging Unit (UNICOG), NeuroSpin, National Institute of Health and Medical Research (Inserm) and French Alternative Energies and Atomic Energy Commission (CEA), Frédéric Joliot Life Sciences Institute, Paris-Saclay University, Gif-sur-Yvette, France
- Models and Inference for Neuroimaging Data (MIND), NeuroSpin, French Alternative Energies and Atomic Energy Commission (CEA), Inria Saclay, Frédéric Joliot Life Sciences Institute, Paris-Saclay University, Gif-sur-Yvette, France
- Yair Lakretz
- Cognitive Neuroimaging Unit (UNICOG), NeuroSpin, National Institute of Health and Medical Research (Inserm) and French Alternative Energies and Atomic Energy Commission (CEA), Frédéric Joliot Life Sciences Institute, Paris-Saclay University, Gif-sur-Yvette, France
- Bertrand Thirion
- Models and Inference for Neuroimaging Data (MIND), NeuroSpin, French Alternative Energies and Atomic Energy Commission (CEA), Inria Saclay, Frédéric Joliot Life Sciences Institute, Paris-Saclay University, Gif-sur-Yvette, France
- Christophe Pallier
- Cognitive Neuroimaging Unit (UNICOG), NeuroSpin, National Institute of Health and Medical Research (Inserm) and French Alternative Energies and Atomic Energy Commission (CEA), Frédéric Joliot Life Sciences Institute, Paris-Saclay University, Gif-sur-Yvette, France
13
Mahowald K, Diachek E, Gibson E, Fedorenko E, Futrell R. Grammatical cues to subjecthood are redundant in a majority of simple clauses across languages. Cognition 2023; 241:105543. [PMID: 37713956 DOI: 10.1016/j.cognition.2023.105543]
Abstract
Grammatical cues are sometimes redundant with word meanings in natural language. For instance, English word order rules constrain the word order of a sentence like "The dog chewed the bone" even though the status of "dog" as subject and "bone" as object can be inferred from world knowledge and plausibility. Quantifying how often this redundancy occurs, and how the level of redundancy varies across typologically diverse languages, can shed light on the function and evolution of grammar. To that end, we performed a behavioral experiment in English and Russian and a cross-linguistic computational analysis measuring the redundancy of grammatical cues in transitive clauses extracted from corpus text. English and Russian speakers (n = 484) were presented with subjects, verbs, and objects (in random order and with morphological markings removed) extracted from naturally occurring sentences and were asked to identify which noun is the subject of the action. Accuracy was high in both languages (∼89% in English, ∼87% in Russian). Next, we trained a neural network machine classifier on a similar task: predicting which nominal in a subject-verb-object triad is the subject. Across 30 languages from eight language families, performance was consistently high: a median accuracy of 87%, comparable to the accuracy observed in the human experiments. We conclude that grammatical cues such as word order are necessary to convey subjecthood and objecthood in only a minority of naturally occurring transitive clauses; nevertheless, they (a) provide an important source of redundancy and (b) are crucial for conveying intended meanings that cannot be inferred from the words alone, including descriptions of human interactions, where roles are often reversible (e.g., Ray helped Lu/Lu helped Ray), and non-prototypical meanings (e.g., "The bone chewed the dog.").
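The machine-classifier analysis, predicting which nominal in an unordered subject-verb-object triad is the subject without word-order or morphological cues, can be sketched with a simple feature-based model. The animacy lexicon, features, and toy training data below are illustrative stand-ins; the study trained neural classifiers on corpus-extracted triads across 30 languages.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy animacy lexicon; everything here is a schematic stand-in for the study's
# neural classifier and corpus data.
ANIMATE = {"dog", "cat", "girl", "boy", "teacher", "mouse"}

def featurize(noun1, noun2):
    """Features available without word order or morphology: animacy of each noun."""
    return [int(noun1 in ANIMATE), int(noun2 in ANIMATE)]

# (noun1, verb, noun2, label); label 0 means noun1 is the subject, 1 means noun2.
triads = [("dog", "chewed", "bone", 0), ("girl", "read", "book", 0),
          ("bone", "chewed", "dog", 1), ("teacher", "graded", "essay", 0),
          ("essay", "graded", "teacher", 1), ("cat", "chased", "mouse", 0)]
X = np.array([featurize(n1, n2) for n1, _, n2, _ in triads])
y = np.array([label for *_, label in triads])

clf = LogisticRegression().fit(X, y)
# World knowledge (animacy) alone picks "boy" as the likelier subject of "kicked".
print(clf.predict([featurize("ball", "boy")]))   # -> [1], i.e., noun2 is the subject
```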
Affiliation(s)
- Kyle Mahowald
- The University of Texas at Austin, Linguistics, USA
- Edward Gibson
- Massachusetts Institute of Technology, Brain and Cognitive Sciences, USA
- Evelina Fedorenko
- Massachusetts Institute of Technology, Brain and Cognitive Sciences, USA; Massachusetts Institute of Technology, McGovern Institute for Brain Research, USA
14
Bruera A, Tao Y, Anderson A, Çokal D, Haber J, Poesio M. Modeling Brain Representations of Words' Concreteness in Context Using GPT-2 and Human Ratings. Cogn Sci 2023; 47:e13388. [PMID: 38103208 DOI: 10.1111/cogs.13388]
Abstract
The meaning of most words in language depends on their context. Understanding how the human brain extracts contextualized meaning, and identifying where in the brain this takes place, remain important scientific challenges. But technological and computational advances in neuroscience and artificial intelligence now provide unprecedented opportunities to study the human brain in action as language is read and understood. Recent contextualized language models seem to be able to capture homonymic meaning variation ("bat", in a baseball vs. a vampire context), as well as more nuanced, fine-grained differences of meaning, for example, in polysemous words such as "book", which can be interpreted in distinct but related senses ("explain a book", information, vs. "open a book", object). We study these subtle differences in lexical meaning along the concrete/abstract dimension, as they are triggered by verb-noun semantic composition. We analyze functional magnetic resonance imaging (fMRI) activations elicited by Italian verb phrases containing nouns whose interpretation is affected by the verb to different degrees. By using a contextualized language model and human concreteness ratings, we shed light on where in the brain such fine-grained meaning variation takes place and how it is coded. Our results show that phrase concreteness judgments and the contextualized model can predict BOLD activation associated with semantic composition within the language network. Importantly, representations derived from a complex, nonlinear composition process consistently outperform simpler composition approaches. This is compatible with a holistic view of semantic composition in the brain, where semantic representations are modified by the process of composition itself. When looking at individual brain areas, we find statistically significant encoding performance, though with differing patterns of results that suggest differential involvement, in the posterior superior temporal sulcus, inferior frontal gyrus, and anterior temporal lobe, and in motor areas previously associated with the processing of concreteness/abstractness.
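The first step in this kind of analysis, obtaining a contextualized vector for a target noun as modulated by its verb, can be sketched with a small causal language model. The snippet below uses English GPT-2 as a stand-in (the study analyzed Italian phrases with an appropriate model) and shows only the embedding-extraction step, not the fMRI encoding models; the phrases and layer choice are illustrative.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")            # stand-in for an Italian-capable model
model = AutoModel.from_pretrained("gpt2").eval()

def contextual_embedding(phrase, target_word, layer=-1):
    """Mean hidden state of the subword tokens that overlap target_word,
    computed with the whole phrase as context."""
    enc = tok(phrase, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc, output_hidden_states=True).hidden_states[layer][0]
    start = phrase.index(target_word)
    end = start + len(target_word)
    token_idx = []
    for i in range(enc.input_ids.shape[1]):
        span = enc.token_to_chars(0, i)                # character span of token i
        if span is not None and span.end > start and span.start < end:
            token_idx.append(i)
    return hidden[token_idx].mean(dim=0)

# The same noun gets different vectors depending on the verb, mirroring the
# concrete ("open a book", object) vs. abstract ("explain a book", information) readings.
v_concrete = contextual_embedding("open the book on the table", "book")
v_abstract = contextual_embedding("explain the book to the class", "book")
print(round(torch.cosine_similarity(v_concrete, v_abstract, dim=0).item(), 3))
```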
Affiliation(s)
- Andrea Bruera
- School of Electronic Engineering and Computer Science, Cognitive Science Research Group, Queen Mary University of London
- Lise Meitner Research Group Cognition and Plasticity, Max Planck Institute for Human Cognitive and Brain Sciences
- Yuan Tao
- Department of Cognitive Science, Johns Hopkins University
- Derya Çokal
- Department of German Language and Literature I-Linguistics, University of Cologne
- Janosch Haber
- School of Electronic Engineering and Computer Science, Cognitive Science Research Group, Queen Mary University of London
- Chattermill, London
- Massimo Poesio
- School of Electronic Engineering and Computer Science, Cognitive Science Research Group, Queen Mary University of London
- Department of Information and Computing Sciences, University of Utrecht
15
Mirault J, Vandendaele A, Pegado F, Grainger J. The impact of atypical text presentation on transposed-word effects. Atten Percept Psychophys 2023; 85:2859-2868. [PMID: 37495931 DOI: 10.3758/s13414-023-02760-y]
Abstract
When asked to decide if an ungrammatical sequence of words is grammatically correct or not, readers find it more difficult to do so (longer response times (RTs) and more errors) if the ungrammatical sequence is created by transposing two words from a correct sentence (e.g., the white was cat big) compared with matched ungrammatical sequences where transposing two words does not produce a correct sentence (e.g., the white was cat slowly). Here, we provide a further exploration of transposed-word effects when reading unspaced text in Experiment 1, and when reading from right-to-left ("backwards" reading) in Experiment 2. We found significant transposed-word effects in error rates but not in RTs, a pattern previously found in studies using a one-word-at-a-time sequential presentation. We conclude that the absence of transposed-word effects in RTs in the present study and prior work is due to the atypical nature of the way the text was presented. Under the hypothesis that transposed-word effects at least partly reflect a certain amount of parallel word processing during reading, we further suggest that the ability to process words in parallel would require years of exposure to text in its regular format.
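The transposed-word manipulation itself is simple to generate: swap two adjacent words of a grammatical base sentence. The sketch below shows only that step; the matched control sequences (which additionally substitute a word so that no transposition yields a grammatical sentence) and the matching of items on length and frequency were handled by design in the experiments, so the function and example here are merely illustrative.

```python
import random

def transpose_adjacent_words(sentence, rng):
    """Create an ungrammatical 'transposed-word' sequence by swapping two
    adjacent inner words of a grammatical sentence, e.g.
    "the white cat was big" -> "the white was cat big"."""
    words = sentence.split()
    i = rng.randrange(1, len(words) - 2)   # keep the first and last word in place
    words[i], words[i + 1] = words[i + 1], words[i]
    return " ".join(words)

rng = random.Random(0)
print(transpose_adjacent_words("the white cat was big", rng))
```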
Affiliation(s)
- Jonathan Mirault
- Laboratoire de Psychologie Cognitive, UMR 7290, Aix-Marseille Université & Centre National de la Recherche Scientifique, Aix-Marseille Université, 3, Place Victor Hugo, 13331, Marseille, France
- Pôle pilote Ampiric, Institut National Supérieur du Professorat et de l'Éducation, Aix-Marseille Université, Marseille, France
- Aaron Vandendaele
- Department of Experimental Psychology, Ghent University, Ghent, Belgium
- Felipe Pegado
- Laboratoire de Psychologie du Développement et de l'Éducation de l'Enfant, UMR CNRS 8240, Université Paris Cité, Sorbonne, France
- Jonathan Grainger
- Laboratoire de Psychologie Cognitive, UMR 7290, Aix-Marseille Université & Centre National de la Recherche Scientifique, Aix-Marseille Université, 3, Place Victor Hugo, 13331, Marseille, France
- Institute of Language, Communication and the Brain, Aix-Marseille Université, Aix-en-Provence, France
16
Kauf C, Tuckute G, Levy R, Andreas J, Fedorenko E. Lexical semantic content, not syntactic structure, is the main contributor to ANN-brain similarity of fMRI responses in the language network. bioRxiv 2023:2023.05.05.539646. [PMID: 37205405 PMCID: PMC10187317 DOI: 10.1101/2023.05.05.539646]
Abstract
Representations from artificial neural network (ANN) language models have been shown to predict human brain activity in the language network. To understand what aspects of linguistic stimuli contribute to ANN-to-brain similarity, we used an fMRI dataset of responses to n=627 naturalistic English sentences (Pereira et al., 2018) and systematically manipulated the stimuli for which ANN representations were extracted. In particular, we i) perturbed sentences' word order, ii) removed different subsets of words, or iii) replaced sentences with other sentences of varying semantic similarity. We found that the lexical semantic content of the sentence (largely carried by content words) rather than the sentence's syntactic form (conveyed via word order or function words) is primarily responsible for the ANN-to-brain similarity. In follow-up analyses, we found that perturbation manipulations that adversely affect brain predictivity also lead to more divergent representations in the ANN's embedding space and decrease the ANN's ability to predict upcoming tokens in those stimuli. Further, results are robust to whether the mapping model is trained on intact or perturbed stimuli, and whether the ANN sentence representations are conditioned on the same linguistic context that humans saw. The critical result, that lexical-semantic content is the main contributor to the similarity between ANN representations and neural ones, aligns with the idea that the goal of the human language system is to extract meaning from linguistic strings. Finally, this work highlights the strength of systematic experimental manipulations for evaluating how close we are to accurate and generalizable models of the human language network.
Affiliation(s)
- Carina Kauf
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
- McGovern Institute for Brain Research, Massachusetts Institute of Technology
- Greta Tuckute
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
- McGovern Institute for Brain Research, Massachusetts Institute of Technology
- Roger Levy
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
- Jacob Andreas
- Computer Science & Artificial Intelligence Laboratory, Massachusetts Institute of Technology
- Evelina Fedorenko
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
- McGovern Institute for Brain Research, Massachusetts Institute of Technology
- Program in Speech and Hearing Bioscience and Technology, Harvard University
17
Deniz F, Tseng C, Wehbe L, Dupré la Tour T, Gallant JL. Semantic Representations during Language Comprehension Are Affected by Context. J Neurosci 2023; 43:3144-3158. [PMID: 36973013 PMCID: PMC10146529 DOI: 10.1523/jneurosci.2459-21.2023]
Abstract
The meaning of words in natural language depends crucially on context. However, most neuroimaging studies of word meaning use isolated words and isolated sentences with little context. Because the brain may process natural language differently from how it processes simplified stimuli, there is a pressing need to determine whether prior results on word meaning generalize to natural language. fMRI was used to record human brain activity while four subjects (two female) read words in four conditions that vary in context: narratives, isolated sentences, blocks of semantically similar words, and isolated words. We then compared the signal-to-noise ratio (SNR) of evoked brain responses, and we used a voxelwise encoding modeling approach to compare the representation of semantic information across the four conditions. We find four consistent effects of varying context. First, stimuli with more context evoke brain responses with higher SNR across bilateral visual, temporal, parietal, and prefrontal cortices compared with stimuli with little context. Second, increasing context increases the representation of semantic information across bilateral temporal, parietal, and prefrontal cortices at the group level. In individual subjects, only natural language stimuli consistently evoke widespread representation of semantic information. Third, context affects voxel semantic tuning. Finally, models estimated using stimuli with little context do not generalize well to natural language. These results show that context has large effects on the quality of neuroimaging data and on the representation of meaning in the brain. Thus, neuroimaging studies that use stimuli with little context may not generalize well to the natural regime.
SIGNIFICANCE STATEMENT: Context is an important part of understanding the meaning of natural language, but most neuroimaging studies of meaning use isolated words and isolated sentences with little context. Here, we examined whether the results of neuroimaging studies that use out-of-context stimuli generalize to natural language. We find that increasing context improves the quality of neuroimaging data and changes where and how semantic information is represented in the brain. These results suggest that findings from studies using out-of-context stimuli may not generalize to natural language used in daily life.
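A standard way to quantify the evoked-response SNR compared across the four conditions is to use repeated presentations of the same stimuli: the variance of the repeat-averaged response (signal) relative to the variance of single repeats around that average (noise). The sketch below shows one such per-voxel estimate on toy data; it is a generic operationalization, not necessarily the exact estimator used in the paper.

```python
import numpy as np

def repeat_snr(responses):
    """Per-voxel SNR from repeated presentations of the same stimuli.

    responses: array of shape (n_repeats, n_stimuli, n_voxels). Signal variance
    is the variance across stimuli of the repeat-averaged response; noise
    variance is the variance of single repeats around that average."""
    mean_over_repeats = responses.mean(axis=0)              # (n_stimuli, n_voxels)
    signal_var = mean_over_repeats.var(axis=0)              # (n_voxels,)
    noise_var = ((responses - mean_over_repeats) ** 2).mean(axis=(0, 1))
    return signal_var / (noise_var + 1e-12)

# Toy data: 4 repeats, 50 stimuli, 20 voxels; only the first 10 voxels carry signal.
rng = np.random.default_rng(0)
signal = np.concatenate([rng.normal(size=(1, 50, 10)), np.zeros((1, 50, 10))], axis=2)
data = signal + rng.normal(scale=0.5, size=(4, 50, 20))
print(repeat_snr(data).round(2))
```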
Affiliation(s)
- Fatma Deniz
- Helen Wills Neuroscience Institute, University of California, Berkeley, California 94720
- Institute of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Berlin 10623, Germany
- Christine Tseng
- Helen Wills Neuroscience Institute, University of California, Berkeley, California 94720
- Leila Wehbe
- Machine Learning Department, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213
- Tom Dupré la Tour
- Helen Wills Neuroscience Institute, University of California, Berkeley, California 94720
- Jack L Gallant
- Helen Wills Neuroscience Institute, University of California, Berkeley, California 94720
- Department of Psychology, University of California, Berkeley, California 94720
18
Hu J, Small H, Kean H, Takahashi A, Zekelman L, Kleinman D, Ryan E, Nieto-Castañón A, Ferreira V, Fedorenko E. Precision fMRI reveals that the language-selective network supports both phrase-structure building and lexical access during language production. Cereb Cortex 2023; 33:4384-4404. [PMID: 36130104 PMCID: PMC10110436 DOI: 10.1093/cercor/bhac350]
Abstract
A fronto-temporal brain network has long been implicated in language comprehension. However, this network's role in language production remains debated. In particular, it remains unclear whether all or only some language regions contribute to production, and which aspects of production these regions support. Across 3 functional magnetic resonance imaging experiments that rely on robust individual-subject analyses, we characterize the language network's response to high-level production demands. We report 3 novel results. First, sentence production, spoken or typed, elicits a strong response throughout the language network. Second, the language network responds to both phrase-structure building and lexical access demands, although the response to phrase-structure building is stronger and more spatially extensive, present in every language region. Finally, contra some proposals, we find no evidence of brain regions, within or outside the language network, that selectively support phrase-structure building in production relative to comprehension. Instead, all language regions respond more strongly during production than comprehension, suggesting that production incurs a greater cost for the language network. Together, these results align with the idea that language comprehension and production draw on the same knowledge representations, which are stored in a distributed manner within the language-selective network and are used to both interpret and generate linguistic utterances.
Affiliation(s)
- Jennifer Hu
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- Hannah Small
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD 21218, United States
- Hope Kean
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Atsushi Takahashi
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Leo Zekelman
- Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA 02138, United States
- Elizabeth Ryan
- St. George’s Medical School, St. George’s University, Grenada, West Indies
- Alfonso Nieto-Castañón
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, MA 02215, United States
- Victor Ferreira
- Department of Psychology, UCSD, La Jolla, CA 92093, United States
- Evelina Fedorenko
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA 02138, United States
19
Dufour S, Mirault J, Grainger J. Transposed-word effects in speeded grammatical decisions to sequences of spoken words. Sci Rep 2022; 12:22035. [PMID: 36543850 PMCID: PMC9772206 DOI: 10.1038/s41598-022-26584-2]
Abstract
We used the grammatical decision task (a speeded version of the grammaticality judgment task) with auditorily presented sequences of five words that could either form a grammatically correct sentence or an ungrammatical sequence. The critical ungrammatical sequences were either formed by transposing two adjacent words in a correct sentence (transposed-word sequences: e.g., "The black was dog big") or were matched ungrammatical sequences that could not be resolved into a correct sentence by transposing any two words (control sequences: e.g., "The black was dog slowly"). These were intermixed with an equal number of correct sentences for the purpose of the grammatical decision task. Transposed-word sequences were harder to reject as being ungrammatical (longer response times and more errors) relative to the ungrammatical control sequences, hence demonstrating for the first time that transposed-word effects can be observed in the spoken-language version of the grammatical decision task. Given the relatively unambiguous nature of the speech input in terms of word order, we interpret these transposed-word effects as reflecting the constraints imposed by syntax when processing a sequence of spoken words in order to make a speeded grammatical decision.
Affiliation(s)
- Sophie Dufour
- Laboratoire Parole et Langage, Aix-Marseille Université, CNRS, LPL, UMR 7309, 5, avenue Pasteur, 13100, Aix-en-Provence, France
- Institute for Language, Communication, and the Brain, Aix-Marseille Université, Aix-en-Provence, France
- Jonathan Mirault
- Laboratoire de Psychologie Cognitive, Aix-Marseille Université & CNRS, Marseille, France
- Pôle pilote AMPIRIC, Institut National Supérieur du Professorat et de l'Éducation (INSPÉ), Aix-Marseille Université, Aix-en-Provence, France
- Jonathan Grainger
- Laboratoire de Psychologie Cognitive, Aix-Marseille Université & CNRS, Marseille, France
- Institute for Language, Communication, and the Brain, Aix-Marseille Université, Aix-en-Provence, France
20
Mirault J, Vandendaele A, Pegado F, Grainger J. Transposed-word effects when reading serially. PLoS One 2022; 17:e0277116. [PMID: 36355749 PMCID: PMC9648719 DOI: 10.1371/journal.pone.0277116]
Abstract
When asked to decide whether a sequence of words is grammatically correct, readers find it more difficult to reject an ungrammatical sequence (longer response times (RTs) and more errors) if it was created by transposing two words from a correct sentence (e.g., the white was cat big) than if it is a matched ungrammatical sequence for which transposing any two words could not produce a correct sentence (e.g., the white was cat slowly). Here, we provide a further exploration of transposed-word effects while imposing serial reading by using rapid serial visual presentation (RSVP) in Experiments 1 (respond at the end of the sequence) and 2 (respond as soon as possible, which could be during the sequence). Crucially, in Experiment 3 we compared performance under serial RSVP conditions with parallel presentation of the same stimuli for the same total duration and with the same group of participants. We found robust transposed-word effects in the RSVP conditions tested in all experiments, but only in error rates and not in RTs. This contrasts with the effects found in both errors and RTs in our prior work using parallel presentation, as well as the parallel presentation conditions tested in Experiment 3. We provide a tentative account of why, under conditions that impose a serial word-by-word reading strategy, transposed-word effects are only seen in error rates and not in RTs.
Collapse
Affiliation(s)
- Jonathan Mirault
- Laboratoire de Psychologie Cognitive, Centre National de la Recherche Scientifique & Aix-Marseille University, Marseille, France
- Pôle pilote Ampiric, Institut National Supérieur du Professorat et de l’Éducation, Aix-Marseille Université, Marseille, France
| | - Aaron Vandendaele
- Department of Experimental Psychology, Ghent University, Ghent, Belgium
| | - Felipe Pegado
- Laboratoire de Psychologie Cognitive, Centre National de la Recherche Scientifique & Aix-Marseille University, Marseille, France
- Pôle pilote Ampiric, Institut National Supérieur du Professorat et de l’Éducation, Aix-Marseille Université, Marseille, France
| | - Jonathan Grainger
- Laboratoire de Psychologie Cognitive, Centre National de la Recherche Scientifique & Aix-Marseille University, Marseille, France
- Institute of Language, Communication and the Brain, Aix-Marseille University, Marseille, France
| |
Collapse
|
21
|
Kulmizev A, Nivre J. Schrödinger's tree-On syntax and neural language models. Front Artif Intell 2022; 5:796788. [PMID: 36325030 PMCID: PMC9618648 DOI: 10.3389/frai.2022.796788] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2021] [Accepted: 09/02/2022] [Indexed: 11/05/2022] Open
Abstract
In the last half-decade, the field of natural language processing (NLP) has undergone two major transitions: the switch to neural networks as the primary modeling paradigm and the homogenization of the training regime (pre-train, then fine-tune). Amidst this process, language models have emerged as NLP's workhorse, displaying increasingly fluent generation capabilities and proving to be an indispensable means of knowledge transfer downstream. Due to the otherwise opaque, black-box nature of such models, researchers have employed aspects of linguistic theory in order to characterize their behavior. Questions central to syntax-the study of the hierarchical structure of language-have factored heavily into such work, yielding invaluable insights into models' inherent biases and their ability to make human-like generalizations. In this paper, we attempt to take stock of this growing body of literature. In doing so, we observe a lack of clarity across numerous dimensions, which influences the hypotheses that researchers form, as well as the conclusions they draw from their findings. To remedy this, we urge researchers to exercise care when investigating coding properties, selecting representations, and evaluating via downstream tasks. Furthermore, we outline the implications of the different types of research questions exhibited in studies on syntax, as well as the inherent pitfalls of aggregate metrics. Ultimately, we hope that our discussion adds nuance to the prospect of studying language models and paves the way for a less monolithic perspective on syntax in this context.
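As a deliberately minimal illustration of the probing-style analyses surveyed in this line of work, the sketch below extracts hidden states from a pretrained language model and fits a linear classifier for a simple syntactic property. The model name, sentences, labels, and mean-pooling choice are assumptions made for illustration; this is not code from the paper.

```python
# Minimal probing sketch: a linear classifier over LM hidden states.
# Assumes the `transformers`, `torch`, and `scikit-learn` packages are installed;
# the model, sentences, and labels are illustrative placeholders.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

sentences = ["the dogs bark", "the dog barks", "the cats sleep", "the cat sleeps"]
labels = [1, 0, 1, 0]   # hypothetical property: plural (1) vs. singular (0) subject

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

with torch.no_grad():
    enc = tokenizer(sentences, return_tensors="pt", padding=True)
    out = model(**enc)
    # Mean-pool token representations into one vector per sentence.
    mask = enc["attention_mask"].unsqueeze(-1)
    feats = (out.last_hidden_state * mask).sum(1) / mask.sum(1)

probe = LogisticRegression(max_iter=1000).fit(feats.numpy(), labels)
print("training accuracy of the probe:", probe.score(feats.numpy(), labels))
```

In practice, probe accuracy would be assessed on held-out items and compared against baselines, one of the methodological points the paper urges researchers to consider.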
Collapse
Affiliation(s)
- Artur Kulmizev
- Computational Linguistics Group, Department of Linguistics and Philology, Uppsala University, Uppsala, Sweden
| | - Joakim Nivre
- Computational Linguistics Group, Department of Linguistics and Philology, Uppsala University, Uppsala, Sweden
- RISE Research Institutes of Sweden, Kista, Sweden
| |
Collapse
|
22
|
Broderick MP, Zuk NJ, Anderson AJ, Lalor EC. More than words: Neurophysiological correlates of semantic dissimilarity depend on comprehension of the speech narrative. Eur J Neurosci 2022; 56:5201-5214. [PMID: 35993240 DOI: 10.1111/ejn.15805] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2022] [Revised: 08/15/2022] [Accepted: 08/18/2022] [Indexed: 12/14/2022]
Abstract
Speech comprehension relies on the ability to understand words within a coherent context. Recent studies have attempted to obtain electrophysiological indices of this process by modelling how brain activity is affected by a word's semantic dissimilarity to preceding words. Although the resulting indices appear robust and are strongly modulated by attention, it remains possible that, rather than capturing the contextual understanding of words, they may actually reflect word-to-word changes in semantic content without the need for a narrative-level understanding on the part of the listener. To test this, we recorded electroencephalography from subjects who listened to speech presented in either its original, narrative form, or after scrambling the word order by varying amounts. This manipulation affected the ability of subjects to comprehend the speech narrative but not the ability to recognise individual words. Neural indices of semantic understanding and low-level acoustic processing were derived for each scrambling condition using the temporal response function. Signatures of semantic processing were observed when speech was unscrambled or minimally scrambled and subjects understood the speech. The same markers were absent for higher scrambling levels as speech comprehension dropped. In contrast, word recognition remained high and neural measures related to envelope tracking did not vary significantly across scrambling conditions. This supports the previous claim that electrophysiological indices based on the semantic dissimilarity of words to their context reflect a listener's understanding of those words relative to that context. It also highlights the relative insensitivity of neural measures of low-level speech processing to speech comprehension.
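The temporal response function mentioned above is, at its core, a regularized lagged regression from a stimulus feature to the recorded signal. Below is a minimal numpy sketch on simulated data; the sampling rate, lag window, ridge penalty, and toy signals are assumptions for illustration, not the study's parameters.

```python
import numpy as np

def lagged_design(x, lags):
    """Stack time-lagged copies of a 1-D stimulus feature into a design matrix."""
    X = np.zeros((len(x), len(lags)))
    for j, L in enumerate(lags):
        if L >= 0:
            X[L:, j] = x[:len(x) - L]
        else:
            X[:L, j] = x[-L:]
    return X

def fit_trf(x, y, lags, lam=1.0):
    """Ridge-regularized estimate of a temporal response function."""
    X = lagged_design(x, lags)
    XtX = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ y)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fs = 64                                            # assumed sampling rate (Hz)
    x = (rng.random(fs * 60) < 0.02).astype(float)     # sparse "word onset" feature
    true_trf = np.hanning(fs // 2)                     # toy response over a 500 ms window
    y = np.convolve(x, true_trf)[:len(x)] + 0.5 * rng.standard_normal(len(x))
    lags = range(0, fs // 2)                           # 0-500 ms of post-stimulus lags
    w = fit_trf(x, y, lags, lam=10.0)
    print("correlation of estimated and true TRF:",
          round(float(np.corrcoef(w, true_trf)[0, 1]), 3))
```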
Collapse
Affiliation(s)
- Michael P Broderick
- School of Engineering, Trinity Centre for Biomedical Engineering and Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
| | - Nathaniel J Zuk
- School of Engineering, Trinity Centre for Biomedical Engineering and Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
| | - Andrew J Anderson
- Del Monte Institute for Neuroscience, Department of Neuroscience, Department of Biomedical Engineering, University of Rochester, Rochester, New York, USA
| | - Edmund C Lalor
- School of Engineering, Trinity Centre for Biomedical Engineering and Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland.,Del Monte Institute for Neuroscience, Department of Neuroscience, Department of Biomedical Engineering, University of Rochester, Rochester, New York, USA
| |
Collapse
|
23
|
Lipkin B, Tuckute G, Affourtit J, Small H, Mineroff Z, Kean H, Jouravlev O, Rakocevic L, Pritchett B, Siegelman M, Hoeflin C, Pongos A, Blank IA, Struhl MK, Ivanova A, Shannon S, Sathe A, Hoffmann M, Nieto-Castañón A, Fedorenko E. Probabilistic atlas for the language network based on precision fMRI data from >800 individuals. Sci Data 2022; 9:529. [PMID: 36038572 PMCID: PMC9424256 DOI: 10.1038/s41597-022-01645-3] [Citation(s) in RCA: 45] [Impact Index Per Article: 22.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2022] [Accepted: 08/09/2022] [Indexed: 11/13/2022] Open
Abstract
Two analytic traditions characterize fMRI language research. One relies on averaging activations across individuals. This approach has limitations: because of inter-individual variability in the locations of language areas, any given voxel/vertex in a common brain space is part of the language network in some individuals but, in others, may belong to a distinct network. An alternative approach relies on identifying language areas in each individual using a functional 'localizer'. Because of its greater sensitivity, functional resolution, and interpretability, functional localization is gaining popularity, but it is not always feasible, and cannot be applied retroactively to past studies. To bridge these disjoint approaches, we created a probabilistic functional atlas using fMRI data for an extensively validated language localizer in 806 individuals. This atlas enables estimating the probability that any given location in a common space belongs to the language network, and thus can help interpret group-level activation peaks and lesion locations, or select voxels/electrodes for analysis. More meaningful comparisons of findings across studies should increase robustness and replicability in language research.
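The central computation of such a probabilistic atlas can be sketched as averaging binarized individual-subject maps in a common space. The sketch below uses simulated maps and an arbitrary top-10% threshold per subject; it is not the published atlas pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_voxels = 50, 1000          # toy dimensions; the atlas itself used 806 subjects

# Simulated subject-level contrast maps in a shared space (e.g., sentences > control).
contrast = rng.standard_normal((n_subjects, n_voxels)) + np.linspace(0, 1, n_voxels)

# Binarize each subject's map, here by keeping the top 10% of voxels per subject.
k = n_voxels // 10
thresholds = np.partition(contrast, -k, axis=1)[:, -k][:, None]
binary_maps = (contrast >= thresholds).astype(float)

# The probabilistic atlas value is the proportion of subjects whose map includes each voxel.
atlas = binary_maps.mean(axis=0)
print("probability range across voxels:", atlas.min(), "-", atlas.max())
```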
Collapse
Affiliation(s)
- Benjamin Lipkin
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA.
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA.
| | - Greta Tuckute
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Josef Affourtit
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Hannah Small
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA
| | - Zachary Mineroff
- Human-computer Interaction Institute, Carnegie Mellon University, Pittsburgh, PA, USA
| | - Hope Kean
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Olessia Jouravlev
- Department of Cognitive Science, Carleton University, Ottawa, ON, Canada
| | - Lara Rakocevic
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Brianna Pritchett
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
| | | | - Caitlyn Hoeflin
- Harris School of Public Policy, University of Chicago, Chicago, IL, USA
| | - Alvincé Pongos
- Department of Bioengineering, University of California, Berkeley, CA, USA
| | - Idan A Blank
- Department of Psychology, University of California, Los Angeles, CA, USA
| | - Melissa Kline Struhl
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Anna Ivanova
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Steven Shannon
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Aalok Sathe
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Malte Hoffmann
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Cambridge, MA, USA
| | - Alfonso Nieto-Castañón
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, MA, USA
| | - Evelina Fedorenko
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA.
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA.
- Department of Speech, Hearing, Bioscience, and Technology, Harvard University, Cambridge, MA, USA.
| |
Collapse
|
24
|
Coopmans CW, de Hoop H, Hagoort P, Martin AE. Effects of Structure and Meaning on Cortical Tracking of Linguistic Units in Naturalistic Speech. NEUROBIOLOGY OF LANGUAGE (CAMBRIDGE, MASS.) 2022; 3:386-412. [PMID: 37216060 PMCID: PMC10158633 DOI: 10.1162/nol_a_00070] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/14/2021] [Accepted: 03/02/2022] [Indexed: 05/24/2023]
Abstract
Recent research has established that cortical activity "tracks" the presentation rate of syntactic phrases in continuous speech, even though phrases are abstract units that do not have direct correlates in the acoustic signal. We investigated whether cortical tracking of phrase structures is modulated by the extent to which these structures compositionally determine meaning. To this end, we recorded electroencephalography (EEG) of 38 native speakers who listened to naturally spoken Dutch stimuli in different conditions, which parametrically modulated the degree to which syntactic structure and lexical semantics determine sentence meaning. Tracking was quantified through mutual information between the EEG data and either the speech envelopes or abstract annotations of syntax, all of which were filtered in the frequency band corresponding to the presentation rate of phrases (1.1-2.1 Hz). Overall, these mutual information analyses showed stronger tracking of phrases in regular sentences than in stimuli whose lexical-syntactic content is reduced, but no consistent differences in tracking between sentences and stimuli that contain a combination of syntactic structure and lexical content. While there were no effects of compositional meaning on the degree of phrase-structure tracking, analyses of event-related potentials elicited by sentence-final words did reveal meaning-induced differences between conditions. Our findings suggest that cortical tracking of structure in sentences indexes the internal generation of this structure, a process that is modulated by the properties of its input, but not by the compositional interpretation of its output.
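A minimal sketch of the tracking analysis described above: both signals are band-pass filtered in the assumed phrase-rate band and their mutual information is estimated. The simulated signals, filter order, and the nearest-neighbour MI estimator below are simplifying assumptions rather than the study's exact pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.feature_selection import mutual_info_regression

fs = 120                                   # assumed sampling rate (Hz)
rng = np.random.default_rng(2)
t = np.arange(fs * 60) / fs                # one minute of simulated data

# A simulated "annotation" signal with power at the phrase rate, plus a noisy
# EEG channel that partly tracks it.
annotation = np.sin(2 * np.pi * 1.6 * t)
eeg = 0.6 * annotation + rng.standard_normal(len(t))

# Band-pass both signals in the assumed phrase-rate band (1.1-2.1 Hz).
b, a = butter(3, [1.1, 2.1], btype="bandpass", fs=fs)
annotation_f = filtfilt(b, a, annotation)
eeg_f = filtfilt(b, a, eeg)

# Estimate mutual information between the filtered signals.
mi = mutual_info_regression(annotation_f.reshape(-1, 1), eeg_f, random_state=0)[0]
print("estimated mutual information:", round(float(mi), 3))
```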
Collapse
Affiliation(s)
- Cas W. Coopmans
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Centre for Language Studies, Radboud University, Nijmegen, The Netherlands
| | - Helen de Hoop
- Centre for Language Studies, Radboud University, Nijmegen, The Netherlands
| | - Peter Hagoort
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
| | - Andrea E. Martin
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
| |
Collapse
|
25
|
Martin KC, Seydell-Greenwald A, Berl MM, Gaillard WD, Turkeltaub PE, Newport EL. A Weak Shadow of Early Life Language Processing Persists in the Right Hemisphere of the Mature Brain. NEUROBIOLOGY OF LANGUAGE (CAMBRIDGE, MASS.) 2022; 3:364-385. [PMID: 35686116 PMCID: PMC9169899 DOI: 10.1162/nol_a_00069] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/26/2021] [Accepted: 02/10/2022] [Indexed: 06/15/2023]
Abstract
Studies of language organization show a striking change in cerebral dominance for language over development: We begin life with a left hemisphere (LH) bias for language processing, which is weaker than that in adults and which can be overcome if there is a LH injury. Over development this LH bias becomes stronger and can no longer be reversed. Prior work has shown that this change results from a significant reduction in the magnitude of language activation in right hemisphere (RH) regions in adults compared to children. Here we investigate whether the spatial distribution of language activation, albeit weaker in magnitude, still persists in homotopic RH regions of the mature brain. Children aged 4-13 (n = 39) and young adults (n = 14) completed an auditory sentence comprehension fMRI (functional magnetic resonance imaging) task. To equate neural activity across the hemispheres, we applied fixed cutoffs for the number of active voxels that would be included in each hemisphere for each participant. To evaluate homotopicity, we generated left-right flipped versions of each activation map, calculated spatial overlap between the LH and RH activity in frontal and temporal regions, and tested for mean differences in the spatial overlap values between the age groups. We found that, in children as well as in adults, there was indeed a spatially intact shadow of language activity in the right frontal and temporal regions homotopic to the LH language regions. After a LH stroke in adulthood, recovering early-life activation in these regions might assist in enhancing recovery of language abilities.
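The overlap analysis described above can be sketched as flipping the right-hemisphere map onto the left, thresholding both hemispheres at the same fixed number of voxels, and quantifying their spatial overlap. The array shapes, the cutoff, and the use of a Dice coefficient below are illustrative assumptions.

```python
import numpy as np

def top_k_mask(activation, k):
    """Binary mask keeping the k most active voxels of an activation map."""
    flat = activation.ravel()
    cutoff = np.partition(flat, -k)[-k]
    return activation >= cutoff

def dice(mask_a, mask_b):
    """Dice coefficient between two boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    brain = rng.standard_normal((40, 48, 40))        # toy whole-brain activation map
    lh, rh = brain[:20], brain[20:]                  # assume the first axis splits hemispheres
    rh_flipped = rh[::-1]                            # left-right flip onto the LH grid
    k = 500                                          # fixed voxel cutoff per hemisphere
    overlap = dice(top_k_mask(lh, k), top_k_mask(rh_flipped, k))
    print("LH-RH spatial overlap (Dice):", round(float(overlap), 3))
```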
Collapse
Affiliation(s)
- Kelly C. Martin
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC
| | - Anna Seydell-Greenwald
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC
- MedStar National Rehabilitation Hospital, Washington, DC
| | - Madison M. Berl
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC
- Children’s National Hospital, Washington, DC
| | - William D. Gaillard
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC
- Children’s National Hospital, Washington, DC
| | - Peter E. Turkeltaub
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC
- MedStar National Rehabilitation Hospital, Washington, DC
| | - Elissa L. Newport
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC
- MedStar National Rehabilitation Hospital, Washington, DC
| |
Collapse
|
26
|
Goldberg AE, Ferreira F. Good-enough language production. Trends Cogn Sci 2022; 26:300-311. [PMID: 35241380 PMCID: PMC8956348 DOI: 10.1016/j.tics.2022.01.005] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2021] [Revised: 01/10/2022] [Accepted: 01/18/2022] [Indexed: 11/24/2022]
Abstract
Our ability to comprehend and produce language is one of humans' most impressive skills, but it is not flawless. We must convey and interpret messages via a noisy channel in ever-changing contexts and we sometimes fail to access an optimal combination of words and grammatical constructions. Here, we extend the notion of good-enough (GN) comprehension to GN production, which allows us to unify a wide range of phenomena including overly vague word choices, agreement errors, resumptive pronouns, transfer effects, and children's overextensions and regularizations. We suggest these all involve the accessing and production of a 'GN' option when a more-optimal option is inaccessible. The role of accessibility highlights the need to relate memory encoding and retrieval processes to language comprehension and production.
Collapse
Affiliation(s)
- Adele E Goldberg
- Department of Psychology, Princeton University, Princeton, NJ 08544, USA.
| | - Fernanda Ferreira
- Department of Psychology, University of California, Davis, Davis, CA 95616, USA.
| |
Collapse
|
27
|
Contreras Kallens P, Christiansen MH. Models of Language and Multiword Expressions. Front Artif Intell 2022; 5:781962. [PMID: 35252848 PMCID: PMC8892141 DOI: 10.3389/frai.2022.781962] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2021] [Accepted: 01/25/2022] [Indexed: 11/13/2022] Open
Abstract
Traditional accounts of language postulate two basic components: words stored in a lexicon, and rules that govern how they can be combined into meaningful sentences, a grammar. But, although this words-and-rules framework has proven itself to be useful in natural language processing and cognitive science, it has also shown important shortcomings when faced with actual language use. In this article, we review evidence from language acquisition, sentence processing, and computational modeling that shows how multiword expressions such as idioms, collocations, and other meaningful and common units that comprise more than one word play a key role in the organization of our linguistic knowledge. Importantly, multiword expressions straddle the line between lexicon and grammar, calling into question how useful this distinction is as a foundation for our understanding of language. Nonetheless, finding a replacement for the foundational role the words-and-rules approach has played in our theories is not straightforward. Thus, the second part of our article reviews and synthesizes the diverse approaches that have attempted to account for the central role of multiword expressions in language representation, acquisition, and processing.
Collapse
Affiliation(s)
| | - Morten H. Christiansen
- Department of Psychology, Cornell University, Ithaca, NY, United States
- Interacting Minds Centre and School of Communication and Culture, Aarhus University, Aarhus, Denmark
- Haskins Laboratories, New Haven, CT, United States
| |
Collapse
|
28
|
Matchin W, Basilakos A, Ouden DBD, Stark BC, Hickok G, Fridriksson J. Functional differentiation in the language network revealed by lesion-symptom mapping. Neuroimage 2022; 247:118778. [PMID: 34896587 PMCID: PMC8830186 DOI: 10.1016/j.neuroimage.2021.118778] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2021] [Revised: 11/17/2021] [Accepted: 12/02/2021] [Indexed: 12/18/2022] Open
Abstract
Theories of language organization in the brain commonly posit that different regions underlie distinct linguistic mechanisms. However, such theories have been criticized on the grounds that many neuroimaging studies of language processing find similar effects across regions. Moreover, condition by region interaction effects, which provide the strongest evidence of functional differentiation between regions, have rarely been offered in support of these theories. Here we address this by using lesion-symptom mapping in three large, partially-overlapping groups of aphasia patients with left hemisphere brain damage due to stroke (N = 121, N = 92, N = 218). We identified multiple measure by region interaction effects, associating damage to the posterior middle temporal gyrus with syntactic comprehension deficits, damage to posterior inferior frontal gyrus with expressive agrammatism, and damage to inferior angular gyrus with semantic category word fluency deficits. Our results are inconsistent with recent hypotheses that regions of the language network are undifferentiated with respect to high-level linguistic processing.
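Lesion-symptom mapping of the kind referred to above relates voxel-wise lesion status to behavioral scores across patients. The sketch below shows only the basic univariate step on simulated data (a per-voxel comparison of scores for lesioned versus spared patients); it is not the study's analysis, its interaction tests, or its statistical correction.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_patients, n_voxels = 120, 2000                      # toy dimensions

lesions = rng.random((n_patients, n_voxels)) < 0.15   # binary lesion maps
# Simulated behavioral score that worsens with damage to a particular voxel set.
critical = np.arange(100)
score = 10 - 3 * lesions[:, critical].mean(axis=1) + rng.standard_normal(n_patients)

t_vals = np.full(n_voxels, np.nan)
for v in range(n_voxels):
    damaged, spared = score[lesions[:, v]], score[~lesions[:, v]]
    if damaged.size >= 5 and spared.size >= 5:        # skip rarely lesioned voxels
        t_vals[v] = stats.ttest_ind(spared, damaged, equal_var=False).statistic

print("strongest lesion-deficit association at voxel", int(np.nanargmax(t_vals)))
```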
Collapse
Affiliation(s)
- William Matchin
- Department of Communication Sciences and Disorders, University of South Carolina, Discovery 1, Room 202D, 915 Greene St., Columbia, SC 29208, United States.
| | - Alexandra Basilakos
- Department of Communication Sciences and Disorders, University of South Carolina, Discovery 1, Room 202D, 915 Greene St., Columbia, SC 29208, United States
| | - Dirk-Bart den Ouden
- Department of Communication Sciences and Disorders, University of South Carolina, Discovery 1, Room 202D, 915 Greene St., Columbia, SC 29208, United States
| | - Brielle C Stark
- Department of Speech and Hearing Sciences, Program in Neuroscience, Indiana University Bloomington, Bloomington, Indiana, United States
| | - Gregory Hickok
- Department of Cognitive Sciences, Department of Language Science, University of California, Irvine, California, United States
| | - Julius Fridriksson
- Department of Communication Sciences and Disorders, University of South Carolina, Discovery 1, Room 202D, 915 Greene St., Columbia, SC 29208, United States
| |
Collapse
|
29
|
Huizeling E, Arana S, Hagoort P, Schoffelen JM. Lexical Frequency and Sentence Context Influence the Brain's Response to Single Words. NEUROBIOLOGY OF LANGUAGE (CAMBRIDGE, MASS.) 2022; 3:149-179. [PMID: 37215333 PMCID: PMC10158670 DOI: 10.1162/nol_a_00054] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/07/2020] [Accepted: 09/03/2021] [Indexed: 05/24/2023]
Abstract
Typical adults read remarkably quickly. Such fast reading is facilitated by brain processes that are sensitive to both word frequency and contextual constraints. It is debated whether these attributes have additive or interactive effects on language processing in the brain. We investigated this issue by analysing existing magnetoencephalography data from 99 participants reading intact and scrambled sentences. Using a cross-validated model comparison scheme, we found that lexical frequency predicted the word-by-word elicited MEG signal in a widespread cortical network, irrespective of sentential context. In contrast, index (ordinal word position) was more strongly encoded in sentence words, in left fronto-temporal areas. This confirms that frequency influences word processing independently of predictability, and that contextual constraints affect word-by-word brain responses. With a conservative multiple comparisons correction, only the interaction between lexical frequency and surprisal survived, in anterior temporal and frontal cortex, and not between lexical frequency and entropy, nor between lexical frequency and index. However, interestingly, the uncorrected index × frequency interaction revealed an effect in left frontal and temporal cortex that reversed in time and space for intact compared to scrambled sentences. Finally, we provide evidence to suggest that, in sentences, lexical frequency and predictability may independently influence early (<150 ms) and late stages of word processing, but also interact during late stages of word processing (>150-250 ms), thus helping to reconcile previously contradictory findings in the eye-tracking and electrophysiological literatures. Current neurocognitive models of reading would benefit from accounting for these differing effects of lexical frequency and predictability on different stages of word processing.
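The cross-validated model comparison mentioned above can be sketched as comparing held-out prediction of a neural signal by regression models that do or do not include the predictor of interest (here, lexical frequency). The simulated data, ridge penalty, and fold scheme below are assumptions for illustration, not the study's pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_words = 2000

# Simulated word-level predictors and a neural response that depends on both.
log_frequency = rng.normal(size=n_words)
word_index = rng.integers(1, 15, size=n_words).astype(float)   # ordinal position in sentence
response = 0.8 * log_frequency + 0.3 * word_index + rng.normal(size=n_words)

X_full = np.column_stack([log_frequency, word_index])
X_reduced = word_index[:, None]                                 # model without frequency

full_r2 = cross_val_score(Ridge(alpha=1.0), X_full, response, cv=5).mean()
reduced_r2 = cross_val_score(Ridge(alpha=1.0), X_reduced, response, cv=5).mean()
print("cross-validated R2, full vs. reduced:", round(full_r2, 3), round(reduced_r2, 3))
```

A reliable drop in held-out performance for the reduced model is the evidence that the omitted predictor carries unique information about the neural signal.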
Collapse
Affiliation(s)
- Eleanor Huizeling
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
| | - Sophie Arana
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
| | - Peter Hagoort
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
| | | |
Collapse
|
30
|
Parrish A, Pylkkänen L. Conceptual Combination in the LATL With and Without Syntactic Composition. NEUROBIOLOGY OF LANGUAGE (CAMBRIDGE, MASS.) 2022; 3:46-66. [PMID: 37215334 PMCID: PMC10158584 DOI: 10.1162/nol_a_00048] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/16/2021] [Accepted: 06/15/2021] [Indexed: 05/24/2023]
Abstract
The relationship among syntactic, semantic, and conceptual processes in language comprehension is a central question to the neurobiology of language. Several studies have suggested that conceptual combination in particular can be localized to the left anterior temporal lobe (LATL), while syntactic processes are more often associated with the posterior temporal lobe or inferior frontal gyrus. However, LATL activity can also correlate with syntactic computations, particularly in narrative comprehension. Here we investigated the degree to which LATL conceptual combination is dependent on syntax, specifically asking whether rapid (∼200 ms) magnetoencephalography effects of conceptual combination in the LATL can occur in the absence of licit syntactic phrase closure and in the absence of a semantically plausible output for the composition. We find that such effects do occur: LATL effects of conceptual combination were observed even when there was no syntactic phrase closure or plausible meaning. But syntactic closure did have an additive effect such that LATL signals were the highest for expressions that composed both conceptually and syntactically. Our findings conform to an account in which LATL conceptual composition is influenced by local syntactic composition but is also able to operate without it.
Collapse
Affiliation(s)
- Alicia Parrish
- Department of Linguistics, New York University, New York, USA
| | - Liina Pylkkänen
- Department of Linguistics, New York University, New York, USA
- Department of Psychology, New York University, New York, USA
- NYUAD Institute, New York University Abu Dhabi, Abu Dhabi, UAE
| |
Collapse
|
31
|
Schrimpf M, Blank IA, Tuckute G, Kauf C, Hosseini EA, Kanwisher N, Tenenbaum JB, Fedorenko E. The neural architecture of language: Integrative modeling converges on predictive processing. Proc Natl Acad Sci U S A 2021; 118:e2105646118. [PMID: 34737231 PMCID: PMC8694052 DOI: 10.1073/pnas.2105646118] [Citation(s) in RCA: 126] [Impact Index Per Article: 42.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/03/2021] [Indexed: 01/30/2023] Open
Abstract
The neuroscience of perception has recently been revolutionized with an integrative modeling approach in which computation, brain function, and behavior are linked across many datasets and many computational models. By revealing trends across models, this approach yields novel insights into cognitive and neural mechanisms in the target domain. We here present a systematic study taking this approach to higher-level cognition: human language processing, our species' signature cognitive skill. We find that the most powerful "transformer" models predict nearly 100% of explainable variance in neural responses to sentences and generalize across different datasets and imaging modalities (functional MRI and electrocorticography). Models' neural fits ("brain score") and fits to behavioral responses are both strongly correlated with model accuracy on the next-word prediction task (but not other language tasks). Model architecture appears to substantially contribute to neural fit. These results provide computationally explicit evidence that predictive processing fundamentally shapes the language comprehension mechanisms in the human brain.
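A sketch of a "brain score"-style computation as described above: model representations of a stimulus set are mapped to neural responses with cross-validated ridge regression, and held-out Pearson correlations are averaged over measurement channels. The dimensions, penalty, and scoring details are placeholders; this is not the published benchmarking code.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(6)
n_sentences, n_features, n_channels = 200, 300, 20

model_repr = rng.standard_normal((n_sentences, n_features))      # e.g., LM embeddings
mapping = rng.standard_normal((n_features, n_channels)) * 0.1
neural = model_repr @ mapping + rng.standard_normal((n_sentences, n_channels))

scores = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(model_repr):
    reg = Ridge(alpha=100.0).fit(model_repr[train], neural[train])
    pred = reg.predict(model_repr[test])
    # Pearson correlation per channel between predicted and observed responses.
    r = [np.corrcoef(pred[:, c], neural[test, c])[0, 1] for c in range(n_channels)]
    scores.append(np.mean(r))

print("cross-validated brain-score-style correlation:", round(float(np.mean(scores)), 3))
```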
Collapse
Affiliation(s)
- Martin Schrimpf
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139;
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139
- Center for Brains, Minds and Machines, Massachusetts Institute of Technology, Cambridge, MA 02139
| | - Idan Asher Blank
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139
- Department of Psychology, University of California, Los Angeles, CA 90095
| | - Greta Tuckute
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139
| | - Carina Kauf
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139
| | - Eghbal A Hosseini
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139
| | - Nancy Kanwisher
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139;
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139
- Center for Brains, Minds and Machines, Massachusetts Institute of Technology, Cambridge, MA 02139
| | - Joshua B Tenenbaum
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139
- Center for Brains, Minds and Machines, Massachusetts Institute of Technology, Cambridge, MA 02139
| | - Evelina Fedorenko
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139;
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139
| |
Collapse
|
32
|
Hodgson VJ, Lambon Ralph MA, Jackson RL. Multiple dimensions underlying the functional organization of the language network. Neuroimage 2021; 241:118444. [PMID: 34343627 PMCID: PMC8456749 DOI: 10.1016/j.neuroimage.2021.118444] [Citation(s) in RCA: 32] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2021] [Revised: 07/24/2021] [Accepted: 07/31/2021] [Indexed: 02/08/2023] Open
Abstract
Understanding the different neural networks that support human language is an ongoing challenge for cognitive neuroscience. Which divisions are capable of distinguishing the functional significance of regions across the language network? A key separation between semantic cognition and phonological processing was highlighted in early meta-analyses, yet these seminal works did not formally test this proposition. Moreover, organization by domain is not the only possibility. Regions may be organized by the type of process performed, as in the separation between representation and control processes proposed within the Controlled Semantic Cognition framework. The importance of these factors was assessed in a series of activation likelihood estimation meta-analyses that investigated which regions of the language network are consistently recruited for semantic and phonological domains, and for representation and control processes. Whilst semantic and phonological processing consistently recruit many overlapping regions, they can be dissociated (by differential involvement of bilateral anterior temporal lobes, precentral gyrus and superior temporal gyri) only when using both formal analysis methods and sufficient data. Both semantic and phonological regions are further dissociable into control and representation regions, highlighting this as an additional, distinct dimension on which the language network is functionally organized. Furthermore, some of these control regions overlap with multiple-demand network regions critical for control beyond the language domain, suggesting the relative level of domain-specificity is also informative. Multiple, distinct dimensions are critical to understand the role of language regions. Here we present a proposal as to the core principles underpinning the functional organization of the language network.
Collapse
|
33
|
Asyraff A, Lemarchand R, Tamm A, Hoffman P. Stimulus-independent neural coding of event semantics: Evidence from cross-sentence fMRI decoding. Neuroimage 2021; 236:118073. [PMID: 33878380 PMCID: PMC8270886 DOI: 10.1016/j.neuroimage.2021.118073] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2020] [Revised: 04/06/2021] [Accepted: 04/11/2021] [Indexed: 11/25/2022] Open
Abstract
Multivariate neuroimaging studies indicate that the brain represents word and object concepts in a format that readily generalises across stimuli. Here we investigated whether this was true for neural representations of simple events described using sentences. Participants viewed sentences describing four events in different ways. Multivariate classifiers were trained to discriminate the four events using a subset of sentences, allowing us to test generalisation to novel sentences. We found that neural patterns in a left-lateralised network of frontal, temporal and parietal regions discriminated events in a way that generalised successfully over changes in the syntactic and lexical properties of the sentences used to describe them. In contrast, decoding in visual areas was sentence-specific and failed to generalise to novel sentences. In the reverse analysis, we tested for decoding of syntactic and lexical structure, independent of the event being described. Regions displaying this coding were limited and largely fell outside the canonical semantic network. Our results indicate that a distributed neural network represents the meaning of event sentences in a way that is robust to changes in their structure and form. They suggest that the semantic system disregards the surface properties of stimuli in order to represent their underlying conceptual significance.
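The generalisation test described above can be sketched as training a classifier on patterns evoked by one subset of sentences and testing it on patterns from held-out sentences describing the same events. The simulated patterns and the linear SVM below are illustrative assumptions, not the study's decoding pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(7)
n_events, n_sentences_per_event, n_voxels = 4, 6, 500

# Simulated event-specific patterns plus sentence-specific noise.
event_patterns = rng.standard_normal((n_events, n_voxels))
X, y, sentence_id = [], [], []
for e in range(n_events):
    for s in range(n_sentences_per_event):
        X.append(event_patterns[e] + 1.5 * rng.standard_normal(n_voxels))
        y.append(e)
        sentence_id.append(s)
X, y, sentence_id = np.array(X), np.array(y), np.array(sentence_id)

# Leave one sentence version (per event) out: test generalisation to novel sentences.
accuracies = []
for held_out in range(n_sentences_per_event):
    train, test = sentence_id != held_out, sentence_id == held_out
    clf = LinearSVC().fit(X[train], y[train])
    accuracies.append(clf.score(X[test], y[test]))

print("cross-sentence decoding accuracy:", round(float(np.mean(accuracies)), 3))
```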
Collapse
Affiliation(s)
- Aliff Asyraff
- School of Philosophy, Psychology & Language Sciences, University of Edinburgh, 7 George Square, Edinburgh, EH8 9JZ, UK
| | - Rafael Lemarchand
- School of Philosophy, Psychology & Language Sciences, University of Edinburgh, 7 George Square, Edinburgh, EH8 9JZ, UK
| | - Andres Tamm
- School of Philosophy, Psychology & Language Sciences, University of Edinburgh, 7 George Square, Edinburgh, EH8 9JZ, UK
| | - Paul Hoffman
- School of Philosophy, Psychology & Language Sciences, University of Edinburgh, 7 George Square, Edinburgh, EH8 9JZ, UK.
| |
Collapse
|
35
|
Wehbe L, Blank IA, Shain C, Futrell R, Levy R, von der Malsburg T, Smith N, Gibson E, Fedorenko E. Incremental Language Comprehension Difficulty Predicts Activity in the Language Network but Not the Multiple Demand Network. Cereb Cortex 2021; 31:4006-4023. [PMID: 33895807 PMCID: PMC8328211 DOI: 10.1093/cercor/bhab065] [Citation(s) in RCA: 34] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2020] [Revised: 01/15/2021] [Accepted: 02/21/2021] [Indexed: 12/28/2022] Open
Abstract
What role do domain-general executive functions play in human language comprehension? To address this question, we examine the relationship between behavioral measures of comprehension and neural activity in the domain-general "multiple demand" (MD) network, which has been linked to constructs like attention, working memory, inhibitory control, and selection, and implicated in diverse goal-directed behaviors. Specifically, functional magnetic resonance imaging data collected during naturalistic story listening are compared with theory-neutral measures of online comprehension difficulty and incremental processing load (reading times and eye-fixation durations). Critically, to ensure that variance in these measures is driven by features of the linguistic stimulus rather than reflecting participant- or trial-level variability, the neuroimaging and behavioral datasets were collected in nonoverlapping samples. We find no behavioral-neural link in functionally localized MD regions; instead, this link is found in the domain-specific, fronto-temporal "core language network," in both left-hemispheric areas and their right hemispheric homotopic areas. These results argue against strong involvement of domain-general executive circuits in language comprehension.
Collapse
Affiliation(s)
- Leila Wehbe
- Carnegie Mellon University, Machine Learning Department PA 15213, USA
| | - Idan Asher Blank
- Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences MA 02139, USA
- University of California Los Angeles, Department of Psychology CA 90095, USA
| | - Cory Shain
- Ohio State University, Department of Linguistics OH 43210, USA
| | - Richard Futrell
- Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences MA 02139, USA
- University of California Irvine, Department of Linguistics CA 92697, USA
| | - Roger Levy
- Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences MA 02139, USA
- University of California San Diego, Department of Linguistics CA 92161, USA
| | - Titus von der Malsburg
- Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences MA 02139, USA
- University of Stuttgart, Institute of Linguistics, 70049 Stuttgart, Germany
| | - Nathaniel Smith
- University of California San Diego, Department of Linguistics CA 92161, USA
| | - Edward Gibson
- Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences MA 02139, USA
| | - Evelina Fedorenko
- Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences MA 02139, USA
- Massachusetts Institute of Technology, McGovern Institute for Brain Research, MA 02139, USA
| |
Collapse
|
36
|
Graessner A, Zaccarella E, Hartwigsen G. Differential contributions of left-hemispheric language regions to basic semantic composition. Brain Struct Funct 2021; 226:501-518. [PMID: 33515279 PMCID: PMC7910266 DOI: 10.1007/s00429-020-02196-2] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2020] [Accepted: 12/16/2020] [Indexed: 02/08/2023]
Abstract
Semantic composition, the ability to combine single words to form complex meanings, is a core feature of human language. Despite growing interest in the basis of semantic composition, the neural correlates and the interaction of regions within this network remain a matter of debate. We designed a well-controlled two-word fMRI paradigm in which phrases only differed along the semantic dimension while keeping syntactic information alike. Healthy participants listened to meaningful ("fresh apple"), anomalous ("awake apple") and pseudoword phrases ("awake gufel") while performing an implicit and an explicit semantic task. We identified neural signatures for distinct processes during basic semantic composition. When lexical information is kept constant across conditions and the evaluation of phrasal plausibility is examined (meaningful vs. anomalous phrases), a small set of mostly left-hemispheric semantic regions, including the anterior part of the left angular gyrus, is found active. Conversely, when the load of lexical information-independently of phrasal plausibility-is varied (meaningful or anomalous vs. pseudoword phrases), conceptual combination involves a wide-spread left-hemispheric network comprising executive semantic control regions and general conceptual representation regions. Within this network, the functional coupling between the left anterior inferior frontal gyrus, the bilateral pre-supplementary motor area and the posterior angular gyrus specifically increases for meaningful phrases relative to pseudoword phrases. Stronger effects in the explicit task further suggest task-dependent neural recruitment. Overall, we provide a separation between distinct nodes of the semantic network, whose functional contributions depend on the type of compositional process under analysis.
Collapse
Affiliation(s)
- Astrid Graessner
- Lise-Meitner Research Group Cognition and Plasticity, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstr. 1a, 04103, Leipzig, Germany.
| | - Emiliano Zaccarella
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstr. 1a, 04103, Leipzig, Germany
| | - Gesa Hartwigsen
- Lise-Meitner Research Group Cognition and Plasticity, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstr. 1a, 04103, Leipzig, Germany
| |
Collapse
|
37
|
Quillen IA, Yen M, Wilson SM. Distinct neural correlates of linguistic demand and non-linguistic demand. NEUROBIOLOGY OF LANGUAGE (CAMBRIDGE, MASS.) 2021; 2:202-225. [PMID: 34585141 PMCID: PMC8475781 DOI: 10.1162/nol_a_00031] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/27/2023]
Abstract
In this study, we investigated how the brain responds to task difficulty in linguistic and non-linguistic contexts. This is important for the interpretation of functional imaging studies of neuroplasticity in post-stroke aphasia, because of the inherent difficulty of matching or controlling task difficulty in studies with neurological populations. Twenty neurologically normal individuals were scanned with fMRI as they performed a linguistic task and a non-linguistic task, each of which had two levels of difficulty. Critically, the tasks were matched across domains (linguistic, non-linguistic) for accuracy and reaction time, such that the differences between the easy and difficult conditions were equivalent across domains. We found that non-linguistic demand modulated the same set of multiple demand (MD) regions that have been identified in many prior studies. In contrast, linguistic demand modulated MD regions to a much lesser extent, especially nodes belonging to the dorsal attention network. Linguistic demand modulated a subset of language regions, with the left inferior frontal gyrus most strongly modulated. The right hemisphere region homotopic to Broca's area was also modulated by linguistic but not non-linguistic demand. When linguistic demand was mapped relative to non-linguistic demand, we also observed domain by difficulty interactions in temporal language regions as well as a widespread bilateral semantic network. In sum, linguistic and non-linguistic demand have strikingly different neural correlates. These findings can be used to better interpret studies of patients recovering from aphasia. Some reported activations in these studies may reflect task performance differences, while others can be more confidently attributed to neuroplasticity.
Collapse
Affiliation(s)
- Ian A Quillen
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
| | - Melodie Yen
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
| | - Stephen M Wilson
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
| |
Collapse
|
38
|
Ivanova AA, Srikant S, Sueoka Y, Kean HH, Dhamala R, O'Reilly UM, Bers MU, Fedorenko E. Comprehension of computer code relies primarily on domain-general executive brain regions. eLife 2020; 9:e58906. [PMID: 33319744 PMCID: PMC7738192 DOI: 10.7554/elife.58906] [Citation(s) in RCA: 32] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2020] [Accepted: 11/06/2020] [Indexed: 12/22/2022] Open
Abstract
Computer programming is a novel cognitive tool that has transformed modern society. What cognitive and neural mechanisms support this skill? Here, we used functional magnetic resonance imaging to investigate two candidate brain systems: the multiple demand (MD) system, typically recruited during math, logic, problem solving, and executive tasks, and the language system, typically recruited during linguistic processing. We examined MD and language system responses to code written in Python, a text-based programming language (Experiment 1) and in ScratchJr, a graphical programming language (Experiment 2); for both, we contrasted responses to code problems with responses to content-matched sentence problems. We found that the MD system exhibited strong bilateral responses to code in both experiments, whereas the language system responded strongly to sentence problems, but weakly or not at all to code problems. Thus, the MD system supports the use of novel cognitive tools even when the input is structurally similar to natural language.
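To make the contrast between "code problems" and "content-matched sentence problems" concrete, here is a hypothetical item of the general kind described; it is an invented illustration, not one of the study's stimuli.

```python
# Hypothetical code problem (illustrative only): the participant predicts the output.
prices = [3, 8, 5, 10]
total = 0
for p in prices:
    if p > 4:
        total += p
print(total)   # simulating the loop gives 8 + 5 + 10 = 23

# A content-matched sentence problem would state the same scenario in prose, e.g.:
# "A shopper buys only the items that cost more than 4 euros, at 8, 5, and 10 euros.
#  How much does the shopper spend?"
```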
Collapse
Affiliation(s)
- Anna A Ivanova
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, United States
| | - Shashank Srikant
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, United States
| | - Yotaro Sueoka
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, United States
| | - Hope H Kean
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, United States
| | - Riva Dhamala
- Eliot-Pearson Department of Child Study and Human Development, Tufts University, Medford, United States
| | - Una-May O'Reilly
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, United States
| | - Marina U Bers
- Eliot-Pearson Department of Child Study and Human Development, Tufts University, Medford, United States
| | - Evelina Fedorenko
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, United States
| |
Collapse
|
39
|
Fedorenko E, Blank IA, Siegelman M, Mineroff Z. Lack of selectivity for syntax relative to word meanings throughout the language network. Cognition 2020; 203:104348. [PMID: 32569894 DOI: 10.1101/477851] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2018] [Revised: 05/14/2020] [Accepted: 05/31/2020] [Indexed: 05/25/2023]
Abstract
To understand what you are reading now, your mind retrieves the meanings of words and constructions from a linguistic knowledge store (lexico-semantic processing) and identifies the relationships among them to construct a complex meaning (syntactic or combinatorial processing). Do these two sets of processes rely on distinct, specialized mechanisms or, rather, share a common pool of resources? Linguistic theorizing, empirical evidence from language acquisition and processing, and computational modeling have jointly painted a picture whereby lexico-semantic and syntactic processing are deeply inter-connected and perhaps not separable. In contrast, many current proposals of the neural architecture of language continue to endorse a view whereby certain brain regions selectively support syntactic/combinatorial processing, although the locus of such "syntactic hub", and its nature, vary across proposals. Here, we searched for selectivity for syntactic over lexico-semantic processing using a powerful individual-subjects fMRI approach across three sentence comprehension paradigms that have been used in prior work to argue for such selectivity: responses to lexico-semantic vs. morpho-syntactic violations (Experiment 1); recovery from neural suppression across pairs of sentences differing in only lexical items vs. only syntactic structure (Experiment 2); and same/different meaning judgments on such sentence pairs (Experiment 3). Across experiments, both lexico-semantic and syntactic conditions elicited robust responses throughout the left fronto-temporal language network. Critically, however, no regions were more strongly engaged by syntactic than lexico-semantic processing, although some regions showed the opposite pattern. Thus, contra many current proposals of the neural architecture of language, syntactic/combinatorial processing is not separable from lexico-semantic processing at the level of brain regions-or even voxel subsets-within the language network, in line with strong integration between these two processes that has been consistently observed in behavioral and computational language research. The results further suggest that the language network may be generally more strongly concerned with meaning than syntactic form, in line with the primary function of language-to share meanings across minds.
Collapse
Affiliation(s)
- Evelina Fedorenko
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, USA; McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, USA.
| | - Idan Asher Blank
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, USA; Department of Psychology, UCLA, Los Angeles, CA 90095, USA
| | - Matthew Siegelman
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, USA; Department of Psychology, Columbia University, New York, NY 10027, USA
| | - Zachary Mineroff
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, USA; Eberly Center for Teaching Excellence & Educational Innovation, CMU, Pittsburgh, PA 15213, USA
| |
Collapse
|
40
|
Small SL, Watkins KE. Neurobiology of Language: Editorial. NEUROBIOLOGY OF LANGUAGE (CAMBRIDGE, MASS.) 2020; 1:1-8. [PMID: 37213206 PMCID: PMC10158616 DOI: 10.1162/nol_e_00009] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Affiliation(s)
- Steven L Small
- School of Behavioral and Brain Sciences, University of Texas at Dallas, USA
| | - Kate E Watkins
- Department of Experimental Psychology, University of Oxford, UK
| |
Collapse
|