1
Perron M, Vuong V, Grassi MW, Imran A, Alain C. Engagement of the speech motor system in challenging speech perception: Activation likelihood estimation meta-analyses. Hum Brain Mapp 2024; 45:e70023. [PMID: 39268584] [PMCID: PMC11393483] [DOI: 10.1002/hbm.70023]
Abstract
The relationship between speech production and perception is a topic of ongoing debate. Some argue that there is little interaction between the two, while others claim they share representations and processes. One perspective suggests increased recruitment of the speech motor system in demanding listening situations to facilitate perception. However, uncertainties persist regarding the specific regions involved and the listening conditions influencing its engagement. This study used activation likelihood estimation in coordinate-based meta-analyses to investigate the neural overlap between speech production and three speech perception conditions: speech-in-noise, spectrally degraded speech, and linguistically complex speech. Neural overlap was observed in the left frontal, insular, and temporal regions. Key nodes included the left frontal operculum (FOC), left posterior lateral part of the inferior frontal gyrus (IFG), left planum temporale (PT), and left pre-supplementary motor area (pre-SMA). Left IFG activation was consistently observed during linguistic processing, suggesting sensitivity to the linguistic content of speech. In comparison, left pre-SMA activation was observed when processing degraded and noisy signals, indicating sensitivity to signal quality. Activations of the left PT and FOC were noted in all conditions, with the posterior FOC area overlapping across them. Our meta-analysis reveals context-independent (FOC, PT) and context-dependent (pre-SMA, posterior lateral IFG) regions within the speech motor system during challenging speech perception. These regions could contribute to sensorimotor integration and executive cognitive control for perception and production.
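The core computation behind activation likelihood estimation can be illustrated in a few lines. The sketch below is a deliberately simplified one-dimensional toy (invented coordinates, arbitrary kernel width), not the authors' GingerALE-style pipeline: each experiment's reported foci are blurred with a Gaussian kernel into a modeled activation (MA) map, and the per-experiment maps are combined as a probabilistic union.

```python
import numpy as np

def modeled_activation(grid, focus, sigma):
    """Gaussian modeled-activation (MA) profile for one reported focus."""
    return np.exp(-((grid - focus) ** 2) / (2 * sigma ** 2))

def ale_map(grid, experiments, sigma=8.0):
    """Combine per-experiment MA maps as a probabilistic union:
    ALE = 1 - prod_i (1 - MA_i)."""
    not_active = np.ones_like(grid, dtype=float)
    for foci in experiments:
        # one MA map per experiment: voxel-wise max over that experiment's foci
        ma = np.max([modeled_activation(grid, f, sigma) for f in foci], axis=0)
        not_active *= 1.0 - ma
    return 1.0 - not_active

# toy 1-D "brain" (positions in mm) and foci from three hypothetical experiments
grid = np.arange(0.0, 100.0, 1.0)
experiments = [[30.0, 60.0], [32.0], [58.0, 61.0]]
ale = ale_map(grid, experiments)
```

Voxels where several experiments report nearby foci (here around 60 mm) end up with high ALE values; in the real method these values are then tested against a null distribution of random foci.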
Affiliation(s)
- Maxime Perron
- Rotman Research Institute, Baycrest Academy for Research and Education, Toronto, Ontario, Canada
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Veronica Vuong
- Rotman Research Institute, Baycrest Academy for Research and Education, Toronto, Ontario, Canada
- Institute of Medical Sciences, Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Music and Health Science Research Collaboratory, Faculty of Music, University of Toronto, Toronto, Ontario, Canada
- Madison W Grassi
- Rotman Research Institute, Baycrest Academy for Research and Education, Toronto, Ontario, Canada
- Ashna Imran
- Rotman Research Institute, Baycrest Academy for Research and Education, Toronto, Ontario, Canada
- Claude Alain
- Rotman Research Institute, Baycrest Academy for Research and Education, Toronto, Ontario, Canada
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Institute of Medical Sciences, Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Music and Health Science Research Collaboratory, Faculty of Music, University of Toronto, Toronto, Ontario, Canada
2
Sueoka Y, Paunov A, Tanner A, Blank IA, Ivanova A, Fedorenko E. The Language Network Reliably "Tracks" Naturalistic Meaningful Nonverbal Stimuli. Neurobiology of Language 2024; 5:385-408. [PMID: 38911462] [PMCID: PMC11192443] [DOI: 10.1162/nol_a_00135]
Abstract
The language network, comprised of brain regions in the left frontal and temporal cortex, responds robustly and reliably during language comprehension but shows little or no response during many nonlinguistic cognitive tasks (e.g., Fedorenko & Blank, 2020). However, one domain whose relationship with language remains debated is semantics-our conceptual knowledge of the world. Given that the language network responds strongly to meaningful linguistic stimuli, could some of this response be driven by the presence of rich conceptual representations encoded in linguistic inputs? In this study, we used a naturalistic cognition paradigm to test whether the cognitive and neural resources that are responsible for language processing are also recruited for processing semantically rich nonverbal stimuli. To do so, we measured BOLD responses to a set of ∼5-minute-long video and audio clips that consisted of meaningful event sequences but did not contain any linguistic content. We then used the intersubject correlation (ISC) approach (Hasson et al., 2004) to examine the extent to which the language network "tracks" these stimuli, that is, exhibits stimulus-related variation. Across all the regions of the language network, meaningful nonverbal stimuli elicited reliable ISCs. These ISCs were higher than the ISCs elicited by semantically impoverished nonverbal stimuli (e.g., a music clip), but substantially lower than the ISCs elicited by linguistic stimuli. Our results complement earlier findings from controlled experiments (e.g., Ivanova et al., 2021) in providing further evidence that the language network shows some sensitivity to semantic content in nonverbal stimuli.
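The intersubject correlation (ISC) measure used here (Hasson et al., 2004) is conceptually simple: a region "tracks" a stimulus to the extent that each subject's time course correlates with the average time course of the other subjects. Below is a minimal leave-one-out sketch on simulated data; the array sizes and noise levels are invented for illustration and do not reflect the study's data.

```python
import numpy as np

def leave_one_out_isc(data):
    """Intersubject correlation: correlate each subject's regional time
    course with the mean time course of all other subjects.
    data: (n_subjects, n_timepoints) array for one brain region."""
    n = data.shape[0]
    iscs = []
    for s in range(n):
        others = np.delete(data, s, axis=0).mean(axis=0)
        iscs.append(np.corrcoef(data[s], others)[0, 1])
    return np.array(iscs)

rng = np.random.default_rng(0)
shared = rng.standard_normal(200)               # stimulus-driven signal
noise = rng.standard_normal((10, 200))          # idiosyncratic noise
tracked = shared + 0.5 * noise                  # region that tracks the stimulus
untracked = rng.standard_normal((10, 200))      # region with no shared signal

isc_tracked = leave_one_out_isc(tracked)
isc_untracked = leave_one_out_isc(untracked)
```

Stimulus-locked regions yield high ISCs because the shared component survives averaging over the other subjects, while idiosyncratic activity averages out.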
Affiliation(s)
- Yotaro Sueoka
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Neuroscience, Johns Hopkins University, Baltimore, MD, USA
- Alexander Paunov
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Cognitive Neuroimaging Unit, INSERM, CEA, CNRS, Université Paris-Saclay, NeuroSpin center, Gif/Yvette, France
- Alyx Tanner
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Idan A. Blank
- Department of Psychology and Linguistics, University of California Los Angeles, Los Angeles, CA, USA
- Anna Ivanova
- School of Psychology, Georgia Institute of Technology, Atlanta, GA, USA
- Evelina Fedorenko
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Program in Speech and Hearing Biosciences and Technology, Harvard University, Cambridge, MA, USA
3
Fedorenko E, Piantadosi ST, Gibson EAF. Language is primarily a tool for communication rather than thought. Nature 2024; 630:575-586. [PMID: 38898296] [DOI: 10.1038/s41586-024-07522-w]
Abstract
Language is a defining characteristic of our species, but the function, or functions, that it serves has been debated for centuries. Here we bring recent evidence from neuroscience and allied disciplines to argue that in modern humans, language is a tool for communication, contrary to a prominent view that we use language for thinking. We begin by introducing the brain network that supports linguistic ability in humans. We then review evidence for a double dissociation between language and thought, and discuss several properties of language that suggest that it is optimized for communication. We conclude that although the emergence of language has unquestionably transformed human culture, language does not appear to be a prerequisite for complex thought, including symbolic thought. Instead, language is a powerful tool for the transmission of cultural knowledge; it plausibly co-evolved with our thinking and reasoning capacities, and only reflects, rather than gives rise to, the signature sophistication of human cognition.
Affiliation(s)
- Evelina Fedorenko
- Massachusetts Institute of Technology, Cambridge, MA, USA.
- Speech and Hearing in Bioscience and Technology Program at Harvard University, Boston, MA, USA.
4
Fedorenko E, Ivanova AA, Regev TI. The language network as a natural kind within the broader landscape of the human brain. Nat Rev Neurosci 2024; 25:289-312. [PMID: 38609551] [DOI: 10.1038/s41583-024-00802-4]
Abstract
Language behaviour is complex, but neuroscientific evidence disentangles it into distinct components supported by dedicated brain areas or networks. In this Review, we describe the 'core' language network, which includes left-hemisphere frontal and temporal areas, and show that it is strongly interconnected, independent of input and output modalities, causally important for language and language-selective. We discuss evidence that this language network plausibly stores language knowledge and supports core linguistic computations related to accessing words and constructions from memory and combining them to interpret (decode) or generate (encode) linguistic messages. We emphasize that the language network works closely with, but is distinct from, both lower-level - perceptual and motor - mechanisms and higher-level systems of knowledge and reasoning. The perceptual and motor mechanisms process linguistic signals, but, in contrast to the language network, are sensitive only to these signals' surface properties, not their meanings; the systems of knowledge and reasoning (such as the system that supports social reasoning) are sometimes engaged during language use but are not language-selective. This Review lays a foundation both for in-depth investigations of these different components of the language processing pipeline and for probing inter-component interactions.
Affiliation(s)
- Evelina Fedorenko
- Brain and Cognitive Sciences Department, Massachusetts Institute of Technology, Cambridge, MA, USA.
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA.
- The Program in Speech and Hearing in Bioscience and Technology, Harvard University, Cambridge, MA, USA.
- Anna A Ivanova
- School of Psychology, Georgia Institute of Technology, Atlanta, GA, USA
- Tamar I Regev
- Brain and Cognitive Sciences Department, Massachusetts Institute of Technology, Cambridge, MA, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
5
Blache P. A neuro-cognitive model of comprehension based on prediction and unification. Front Hum Neurosci 2024; 18:1356541. [PMID: 38655372] [PMCID: PMC11035797] [DOI: 10.3389/fnhum.2024.1356541]
Abstract
Most architectures and models of language processing have been built upon a restricted view of language, limited to sentence processing. These approaches fail to capture one primordial characteristic: efficiency. Many facilitation effects are known to be at play in natural situations such as conversation (shallow processing, no real access to the lexicon, etc.) without any impact on comprehension. In this study, we present a new model that integrates these facilitation effects for accessing meaning into the classical compositional architecture. This model relies on two mechanisms, prediction and unification, and provides a unified architecture for describing language processing in its natural environment.
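Unification, one of the model's two mechanisms, is a classical operation on feature structures: two partial descriptions merge when their features are compatible and fail otherwise. The toy implementation below uses nested dicts with invented features; it illustrates the general operation, not Blache's actual formalism.

```python
def unify(a, b):
    """Unify two feature structures (nested dicts).
    Returns the merged structure, or None on a feature clash."""
    out = dict(a)
    for key, bv in b.items():
        if key not in out:
            out[key] = bv
            continue
        av = out[key]
        if isinstance(av, dict) and isinstance(bv, dict):
            sub = unify(av, bv)          # recurse into sub-structures
            if sub is None:
                return None
            out[key] = sub
        elif av != bv:
            return None                  # incompatible atomic values
    return out

# a predicted (expected) structure unifies with the incoming word's structure
predicted = {"cat": "NP", "agr": {"num": "sg"}}
incoming = {"cat": "NP", "agr": {"num": "sg", "per": 3}}
merged = unify(predicted, incoming)

# agreement clash: unification fails
clash = unify({"agr": {"num": "sg"}}, {"agr": {"num": "pl"}})
```

In a prediction-driven comprehender, successful unification of a predicted structure with incoming material is cheap, which is one way facilitation effects can be modeled.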
Affiliation(s)
- Philippe Blache
- Laboratoire Parole et Langage (LPL-CNRS), Aix-en-Provence, France
- Institute of Language, Communication and the Brain (ILCB), Marseille, France
6
Desbordes T, King JR, Dehaene S. Tracking the neural codes for words and phrases during semantic composition, working-memory storage, and retrieval. Cell Rep 2024; 43:113847. [PMID: 38412098] [DOI: 10.1016/j.celrep.2024.113847]
Abstract
The ability to compose successive words into a meaningful phrase is a characteristic feature of human cognition, yet its neural mechanisms remain incompletely understood. Here, we analyze the cortical mechanisms of semantic composition using magnetoencephalography (MEG) while participants read one-word, two-word, and five-word noun phrases and compared them with a subsequent image. Decoding of MEG signals revealed three processing stages. During phrase comprehension, the representation of individual words was sustained for a variable duration depending on phrasal context. During the delay period, the word code was replaced by a working-memory code whose activation increased with semantic complexity. Finally, the speed and accuracy of retrieval depended on semantic complexity and was faster for surface than for deep semantic properties. In conclusion, we propose that the brain initially encodes phrases using factorized dimensions for successive words but later compresses them in working memory and requires a period of decompression to access them.
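The general logic of such time-resolved MEG decoding can be sketched independently of the authors' actual pipeline (which used regularized linear models): train a classifier at each timepoint on the sensor pattern and track when a condition becomes decodable. Below is a minimal leave-one-out nearest-centroid version on simulated data, with a condition-specific pattern injected only in a known time window; all sizes and variable names are invented.

```python
import numpy as np

def decode_over_time(X, y):
    """Time-resolved decoding: at each timepoint, classify trials with a
    leave-one-out nearest-centroid decoder over sensors.
    X: (n_trials, n_sensors, n_times); y: binary labels."""
    n_trials, _, n_times = X.shape
    acc = np.zeros(n_times)
    for t in range(n_times):
        correct = 0
        for i in range(n_trials):
            train = np.delete(np.arange(n_trials), i)
            c0 = X[train][y[train] == 0, :, t].mean(axis=0)
            c1 = X[train][y[train] == 1, :, t].mean(axis=0)
            d0 = np.linalg.norm(X[i, :, t] - c0)
            d1 = np.linalg.norm(X[i, :, t] - c1)
            correct += int((d1 < d0) == bool(y[i]))
        acc[t] = correct / n_trials
    return acc

rng = np.random.default_rng(1)
n_trials, n_sensors, n_times = 40, 20, 30
y = np.repeat([0, 1], n_trials // 2)
X = rng.standard_normal((n_trials, n_sensors, n_times))
pattern = rng.standard_normal(n_sensors)
# inject a condition-specific sensor pattern only in timepoints 10-19
X[y == 1, :, 10:20] += pattern[None, :, None] * 1.5
acc = decode_over_time(X, y)
```

Decoding accuracy rises above chance only while the pattern is present, which is the kind of signature used to argue that a word code is later replaced by a distinct working-memory code.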
Affiliation(s)
- Théo Desbordes
- Meta AI, Paris, France; Cognitive Neuroimaging Unit, NeuroSpin Center, 91191 Gif-sur-Yvette, France.
- Jean-Rémi King
- Meta AI, Paris, France; École Normale Supérieure, PSL University, Paris, France
- Stanislas Dehaene
- Université Paris Saclay, INSERM, CEA, Cognitive Neuroimaging Unit, NeuroSpin Center, 91191 Gif-sur-Yvette, France; Collège de France, PSL University, Paris, France
7
Regev TI, Kim HS, Chen X, Affourtit J, Schipper AE, Bergen L, Mahowald K, Fedorenko E. High-level language brain regions process sublexical regularities. Cereb Cortex 2024; 34:bhae077. [PMID: 38494886] [DOI: 10.1093/cercor/bhae077]
Abstract
A network of left frontal and temporal brain regions supports language processing. This "core" language network stores our knowledge of words and constructions as well as constraints on how those combine to form sentences. However, our linguistic knowledge additionally includes information about phonemes and how they combine to form phonemic clusters, syllables, and words. Are phoneme combinatorics also represented in these language regions? Across five functional magnetic resonance imaging experiments, we investigated the sensitivity of high-level language processing brain regions to sublexical linguistic regularities by examining responses to diverse nonwords-sequences of phonemes that do not constitute real words (e.g. punes, silory, flope). We establish robust responses in the language network to visually (experiment 1a, n = 605) and auditorily (experiments 1b, n = 12, and 1c, n = 13) presented nonwords. In experiment 2 (n = 16), we find stronger responses to nonwords that are more well-formed, i.e. obey the phoneme-combinatorial constraints of English. Finally, in experiment 3 (n = 14), we provide suggestive evidence that the responses in experiments 1 and 2 are not due to the activation of real words that share some phonology with the nonwords. The results suggest that sublexical regularities are stored and processed within the same fronto-temporal network that supports lexical and syntactic processes.
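The notion of phonotactic well-formedness that separates nonwords like "flope" from illegal sequences can be approximated with simple n-gram statistics. Below is a toy letter-bigram scorer over a tiny invented lexicon; real phonotactic models operate over phonemes and large corpora, so this is only a sketch of the idea.

```python
from collections import Counter
import math

def train_bigrams(lexicon):
    """Count letter bigrams (with '#' word-boundary markers) in a lexicon."""
    counts = Counter()
    for word in lexicon:
        padded = f"#{word}#"
        counts.update(padded[i:i + 2] for i in range(len(padded) - 1))
    return counts

def wellformedness(word, counts, alpha=0.1):
    """Average smoothed log-probability-like score of a word's bigrams.
    Higher = more consistent with the lexicon's combinatorial patterns."""
    padded = f"#{word}#"
    total = sum(counts.values())
    scores = [math.log((counts[padded[i:i + 2]] + alpha) / (total + alpha))
              for i in range(len(padded) - 1)]
    return sum(scores) / len(scores)

# tiny invented lexicon standing in for English phonotactic knowledge
lexicon = ["plane", "prune", "spore", "flute", "slope", "grove",
           "pine", "lore", "rope", "fuse"]
counts = train_bigrams(lexicon)
ok = wellformedness("flope", counts)   # legal English-like nonword
bad = wellformedness("ptkfs", counts)  # phonotactically illegal cluster
```

A nonword built from attested clusters scores far higher than one built from unattested ones, mirroring the well-formedness manipulation in experiment 2.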
Affiliation(s)
- Tamar I Regev
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Hee So Kim
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Xuanyi Chen
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Department of Cognitive Sciences, Rice University, Houston, TX 77005, United States
- Josef Affourtit
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Abigail E Schipper
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- Leon Bergen
- Department of Linguistics, University of California San Diego, San Diego, CA 92093, United States
- Kyle Mahowald
- Department of Linguistics, University of Texas at Austin, Austin, TX 78712, United States
- Evelina Fedorenko
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- The Harvard Program in Speech and Hearing Bioscience and Technology, Boston, MA 02115, United States
8
Woolnough O, Donos C, Murphy E, Rollo PS, Roccaforte ZJ, Dehaene S, Tandon N. Spatiotemporally distributed frontotemporal networks for sentence reading. Proc Natl Acad Sci U S A 2023; 120:e2300252120. [PMID: 37068244] [PMCID: PMC10151604] [DOI: 10.1073/pnas.2300252120]
Abstract
Reading a sentence entails integrating the meanings of individual words to infer more complex, higher-order meaning. This highly rapid and complex human behavior is known to engage the inferior frontal gyrus (IFG) and middle temporal gyrus (MTG) in the language-dominant hemisphere, yet whether there are distinct contributions of these regions to sentence reading is still unclear. To probe these neural spatiotemporal dynamics, we used direct intracranial recordings to measure neural activity while reading sentences, meaning-deficient Jabberwocky sentences, and lists of words or pseudowords. We isolated two functionally and spatiotemporally distinct frontotemporal networks, each sensitive to distinct aspects of word and sentence composition. The first distributed network engages the IFG and MTG, with IFG activity preceding MTG. Activity in this network ramps up over the duration of a sentence and is reduced or absent during Jabberwocky and word lists, implying its role in the derivation of sentence-level meaning. The second network engages the superior temporal gyrus and the IFG, with temporal responses leading those in frontal lobe, and shows greater activation for each word in a list than those in sentences, suggesting that sentential context enables greater efficiency in the lexical and/or phonological processing of individual words. These adjacent, yet spatiotemporally dissociable neural mechanisms for word- and sentence-level processes shed light on the richly layered semantic networks that enable us to fluently read. These results imply distributed, dynamic computation across the frontotemporal language network rather than a clear dichotomy between the contributions of frontal and temporal structures.
Affiliation(s)
- Oscar Woolnough
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School at UT Health Houston, Houston, TX 77030
- Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030
- Cristian Donos
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School at UT Health Houston, Houston, TX 77030
- Faculty of Physics, University of Bucharest, 050663 Bucharest, Romania
- Elliot Murphy
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School at UT Health Houston, Houston, TX 77030
- Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030
- Patrick S. Rollo
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School at UT Health Houston, Houston, TX 77030
- Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030
- Zachary J. Roccaforte
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School at UT Health Houston, Houston, TX 77030
- Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030
- Stanislas Dehaene
- Cognitive Neuroimaging Unit, Université Paris-Saclay, INSERM, CEA, NeuroSpin Center, 91191 Gif-sur-Yvette, France
- Collège de France, 75005 Paris, France
- Nitin Tandon
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School at UT Health Houston, Houston, TX 77030
- Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030
- Memorial Hermann Hospital, Texas Medical Center, Houston, TX 77030
9
Hu J, Small H, Kean H, Takahashi A, Zekelman L, Kleinman D, Ryan E, Nieto-Castañón A, Ferreira V, Fedorenko E. Precision fMRI reveals that the language-selective network supports both phrase-structure building and lexical access during language production. Cereb Cortex 2023; 33:4384-4404. [PMID: 36130104] [PMCID: PMC10110436] [DOI: 10.1093/cercor/bhac350]
Abstract
A fronto-temporal brain network has long been implicated in language comprehension. However, this network's role in language production remains debated. In particular, it remains unclear whether all or only some language regions contribute to production, and which aspects of production these regions support. Across 3 functional magnetic resonance imaging experiments that rely on robust individual-subject analyses, we characterize the language network's response to high-level production demands. We report 3 novel results. First, sentence production, spoken or typed, elicits a strong response throughout the language network. Second, the language network responds to both phrase-structure building and lexical access demands, although the response to phrase-structure building is stronger and more spatially extensive, present in every language region. Finally, contra some proposals, we find no evidence of brain regions-within or outside the language network-that selectively support phrase-structure building in production relative to comprehension. Instead, all language regions respond more strongly during production than comprehension, suggesting that production incurs a greater cost for the language network. Together, these results align with the idea that language comprehension and production draw on the same knowledge representations, which are stored in a distributed manner within the language-selective network and are used to both interpret and generate linguistic utterances.
Affiliation(s)
- Jennifer Hu
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- Hannah Small
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD 21218, United States
- Hope Kean
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Atsushi Takahashi
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Leo Zekelman
- Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA 02138, United States
- Elizabeth Ryan
- St. George’s Medical School, St. George’s University, Grenada, West Indies
- Alfonso Nieto-Castañón
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, MA 02215, United States
- Victor Ferreira
- Department of Psychology, UCSD, La Jolla, CA 92093, United States
- Evelina Fedorenko
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA 02138, United States
10
Giroud J, Lerousseau JP, Pellegrino F, Morillon B. The channel capacity of multilevel linguistic features constrains speech comprehension. Cognition 2023; 232:105345. [PMID: 36462227] [DOI: 10.1016/j.cognition.2022.105345]
Abstract
Humans are experts at processing speech, but how this feat is accomplished remains a major question in cognitive neuroscience. Capitalizing on the concept of channel capacity, we developed a unified measurement framework to investigate the respective influence of seven acoustic and linguistic features on speech comprehension, encompassing acoustic, sub-lexical, lexical, and supra-lexical levels of description. We show that comprehension is independently impacted by all these features, but to varying degrees and with a clear dominance of the syllabic rate. Comparing comprehension of French words and sentences further reveals that when supra-lexical contextual information is present, the impact of all other features is dramatically reduced. Finally, we estimated the channel capacity associated with each linguistic feature and compared it with the features' generic distribution in natural speech. Our data reveal that while acoustic modulation, syllabic, and phonemic rates unfold at roughly 5, 5, and 12 Hz in natural speech, respectively, they are associated with independent processing bottlenecks whose channel capacities are 15, 15, and 35 Hz, respectively, as suggested by neurophysiological theories. They moreover point towards supra-lexical contextual information as the feature limiting the flow of natural speech. Overall, this study reveals how multilevel linguistic features constrain speech comprehension.
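The channel-capacity idea can be illustrated with a toy psychometric model: comprehension stays high until a feature's rate exceeds the capacity of its processing bottleneck, then collapses. In the sketch below, the sigmoid shape and all parameters except the 15 Hz syllabic-rate capacity (taken from the abstract) are invented; capacity is recovered as the rate at which comprehension crosses 50%.

```python
import numpy as np

def comprehension_model(rate, capacity, slope=1.0):
    """Toy psychometric function: comprehension drops sigmoidally
    once a feature's rate exceeds its channel capacity."""
    return 1.0 / (1.0 + np.exp(slope * (rate - capacity)))

def estimate_capacity(rates, scores):
    """Estimate capacity as the rate where interpolated comprehension
    crosses 50% (np.interp needs increasing xp, hence the reversal)."""
    return float(np.interp(0.5, scores[::-1], rates[::-1]))

# simulated syllabic-rate channel with a ~15 Hz bottleneck
rates = np.linspace(2.0, 30.0, 50)      # compression rates, in Hz
scores = comprehension_model(rates, capacity=15.0)
cap = estimate_capacity(rates, scores)
```

With real data, one would fit the psychometric curve per feature (acoustic modulation, syllabic rate, phonemic rate, etc.) and compare the fitted capacities against each feature's natural rate.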
Affiliation(s)
- Jérémy Giroud
- Aix Marseille Univ, Inserm, INS, Inst Neurosci Syst, Marseille, France.
- François Pellegrino
- Laboratoire Dynamique du Langage UMR 5596, CNRS, University of Lyon, 14 Avenue Berthelot, 69007 Lyon, France
- Benjamin Morillon
- Aix Marseille Univ, Inserm, INS, Inst Neurosci Syst, Marseille, France
11
Li J, Kean H, Fedorenko E, Saygin Z. Intact reading ability despite lacking a canonical visual word form area in an individual born without the left superior temporal lobe. Cogn Neuropsychol 2023; 39:249-275. [PMID: 36653302] [DOI: 10.1080/02643294.2023.2164923]
Abstract
The visual word form area (VWFA), a region canonically located within left ventral temporal cortex (VTC), is specialized for orthography in literate adults, presumably owing to its connectivity with frontotemporal language regions. But is a typical, left-lateralized language network critical for the VWFA's emergence? We investigated this question in an individual (EG) born without the left superior temporal lobe but who has normal reading ability. EG showed canonical face-selectivity bilaterally but no word-selectivity, either in the right VWFA or in the spared left VWFA. Moreover, in contrast with the idea that the VWFA is simply part of the language network, no part of EG's VTC showed selectivity to higher-level linguistic processing. Interestingly, EG's VWFA showed reliable multivariate patterns that distinguished words from other categories. These results suggest that a typical left-hemisphere language network is necessary for a canonical VWFA, and that orthographic processing can otherwise be supported by a distributed neural code.
Affiliation(s)
- Jin Li
- Department of Psychology, The Ohio State University, Columbus, OH, USA
- Hope Kean
- Department of Brain and Cognitive Sciences / McGovern Institute for Brain Research, MIT, Cambridge, MA, USA
- Evelina Fedorenko
- Department of Brain and Cognitive Sciences / McGovern Institute for Brain Research, MIT, Cambridge, MA, USA
- Zeynep Saygin
- Department of Psychology, The Ohio State University, Columbus, OH, USA
12
Okayasu M, Inukai T, Tanaka D, Tsumura K, Shintaki R, Takeda M, Nakahara K, Jimura K. The Stroop effect involves an excitatory-inhibitory fronto-cerebellar loop. Nat Commun 2023; 14:27. [PMID: 36631460] [PMCID: PMC9834394] [DOI: 10.1038/s41467-022-35397-w]
Abstract
The Stroop effect is a classical, well-known behavioral phenomenon in humans that refers to robust interference between language and color information. It remains unclear, however, when the interference occurs and how it is resolved in the brain. Here we show that the Stroop effect occurs during perception of color-word stimuli and involves a cross-hemispheric, excitatory-inhibitory loop functionally connecting the lateral prefrontal cortex and cerebellum. Participants performed a Stroop task and a non-verbal control task (which we term the Swimmy task), and made a response vocally or manually. The Stroop effect involved the lateral prefrontal cortex in the left hemisphere and the cerebellum in the right hemisphere, independently of the response type; such lateralization was absent during the Swimmy task, however. Moreover, the prefrontal cortex amplified cerebellar activity, whereas the cerebellum suppressed prefrontal activity. This fronto-cerebellar loop may implement language and cognitive systems that enable goal-directed behavior during perceptual conflicts.
Affiliation(s)
- Moe Okayasu
- Department of Biosciences and Informatics, Keio University, Yokohama, Japan
- Tensei Inukai
- Department of Biosciences and Informatics, Keio University, Yokohama, Japan
- Daiki Tanaka
- Department of Biosciences and Informatics, Keio University, Yokohama, Japan
- Kaho Tsumura
- Department of Biosciences and Informatics, Keio University, Yokohama, Japan
- Reiko Shintaki
- Department of Biosciences and Informatics, Keio University, Yokohama, Japan
- Masaki Takeda
- Research Center for Brain Communication, Kochi University of Technology, Kami, Japan
- Kiyoshi Nakahara
- Research Center for Brain Communication, Kochi University of Technology, Kami, Japan
- Koji Jimura
- Department of Biosciences and Informatics, Keio University, Yokohama, Japan
- Research Center for Brain Communication, Kochi University of Technology, Kami, Japan
- Department of Informatics, Gunma University, Maebashi, Japan
Collapse
|
13
|
Wang S, Planton S, Chanoine V, Sein J, Anton JL, Nazarian B, Dubarry AS, Pallier C, Pattamadilok C. Graph theoretical analysis reveals the functional role of the left ventral occipito-temporal cortex in speech processing. Sci Rep 2022; 12:20028. [PMID: 36414688 PMCID: PMC9681757 DOI: 10.1038/s41598-022-24056-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2022] [Accepted: 11/09/2022] [Indexed: 11/23/2022] Open
Abstract
The left ventral occipito-temporal cortex (left-vOT) plays a key role in reading. Interestingly, the area also responds to speech input, suggesting that it may have other functions beyond written word recognition. Here, we adopt graph theoretical analysis to investigate the left-vOT's functional role in the whole-brain network while participants process spoken sentences in different contexts. Overall, different connectivity measures indicate that the left-vOT acts as an interface enabling communication between distributed brain regions and sub-networks. During simple speech perception, the left-vOT is systematically part of the visual network and contributes to the communication between neighboring areas, remote areas, and sub-networks, by acting as a local bridge, a global bridge, and a connector, respectively. However, when speech comprehension is explicitly required, the specific functional role of the area and the sub-network to which the left-vOT belongs change and vary with the quality of the speech signal and task difficulty. These connectivity patterns provide insight into the contribution of the left-vOT in various contexts of language processing beyond its role in reading. They advance our general understanding of the neural mechanisms underlying the flexibility of the language network, which adjusts itself according to the processing context.
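Two of the hub roles described here can be made concrete with standard graph measures: a "global bridge" is a node whose removal disconnects parts of the network, and a "connector" has a high participation coefficient (edges spread across modules). A stdlib-only sketch on a toy two-module graph; the node label "vOT" and the module assignment are illustrative assumptions, not the study's data:

```python
# Toy graph as an adjacency dict; "vOT" is an illustrative stand-in for the left-vOT.
graph = {
    "A": {"B", "C", "vOT"}, "B": {"A", "C"}, "C": {"A", "B"},
    "D": {"E", "F", "vOT"}, "E": {"D", "F"}, "F": {"D", "E"},
    "vOT": {"A", "D"},
}
modules = {"A": 0, "B": 0, "C": 0, "D": 1, "E": 1, "F": 1}

def reachable(adj, start):
    seen, stack = {start}, [start]
    while stack:
        for nb in adj[stack.pop()]:
            if nb not in seen:
                seen.add(nb)
                stack.append(nb)
    return seen

def is_cut_vertex(node):
    # "Global bridge" role: removing the node disconnects the remaining network.
    adj = {n: nbrs - {node} for n, nbrs in graph.items() if n != node}
    rest = next(iter(adj))
    return len(reachable(adj, rest)) < len(adj)

def participation(node):
    # "Connector" role: participation coefficient, 0 if all edges stay in one
    # module, approaching 1 as edges spread evenly across modules.
    nbrs = graph[node]
    k = len(nbrs)
    counts = {}
    for nb in nbrs:
        counts[modules[nb]] = counts.get(modules[nb], 0) + 1
    return 1 - sum((c / k) ** 2 for c in counts.values())

print(is_cut_vertex("vOT"))            # True: vOT bridges the two modules
print(round(participation("vOT"), 2))  # 0.5: edges split evenly across modules
```

In the study these measures would be computed on functional connectivity graphs per condition, which is how the role of the left-vOT can be seen to shift with task demands.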
Collapse
Affiliation(s)
- Shuai Wang
- Aix Marseille Univ, CNRS, LPL, Aix-en-Provence, France; Aix Marseille Univ, Institute of Language, Communication and the Brain, Aix-en-Provence, France
| | - Samuel Planton
- Aix Marseille Univ, CNRS, LPL, Aix-en-Provence, France; Cognitive Neuroimaging Unit, INSERM, CEA, CNRS, Université Paris-Saclay, NeuroSpin Center, Gif/Yvette, France
| | - Valérie Chanoine
- Aix Marseille Univ, CNRS, LPL, Aix-en-Provence, France; Aix Marseille Univ, Institute of Language, Communication and the Brain, Aix-en-Provence, France
| | - Julien Sein
- Aix Marseille Univ, CNRS, Centre IRM-INT@CERIMED, Institut de Neurosciences de la Timone, UMR 7289, Marseille, France
| | - Jean-Luc Anton
- Aix Marseille Univ, CNRS, Centre IRM-INT@CERIMED, Institut de Neurosciences de la Timone, UMR 7289, Marseille, France
| | - Bruno Nazarian
- Aix Marseille Univ, CNRS, Centre IRM-INT@CERIMED, Institut de Neurosciences de la Timone, UMR 7289, Marseille, France
| | - Anne-Sophie Dubarry
- Aix Marseille Univ, CNRS, LPL, Aix-en-Provence, France; Aix Marseille Univ, CNRS, LNC, Marseille, France
| | - Christophe Pallier
- Cognitive Neuroimaging Unit, INSERM, CEA, CNRS, Université Paris-Saclay, NeuroSpin Center, Gif/Yvette, France
| | - Chotiga Pattamadilok
- Aix Marseille Univ, CNRS, LPL, Aix-en-Provence, France
| |
Collapse
|
14
|
Lee YS, Rogers CS, Grossman M, Wingfield A, Peelle JE. Hemispheric dissociations in regions supporting auditory sentence comprehension in older adults. AGING BRAIN 2022; 2:100051. [PMID: 36908889 PMCID: PMC9997128 DOI: 10.1016/j.nbas.2022.100051] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2022] [Revised: 08/10/2022] [Accepted: 08/11/2022] [Indexed: 11/21/2022] Open
Abstract
We investigated how the aging brain copes with acoustic and syntactic challenges during spoken language comprehension. Thirty-eight healthy adults aged 54-80 years (M = 66 years) participated in an fMRI experiment wherein listeners indicated the gender of an agent in short spoken sentences that varied in syntactic complexity (object-relative vs subject-relative center-embedded clause structures) and acoustic richness (high vs low spectral detail, but all intelligible). We found widespread activity throughout a bilateral frontotemporal network during successful sentence comprehension. Consistent with prior reports, bilateral inferior frontal gyrus and left posterior superior temporal gyrus were more active in response to object-relative sentences than to subject-relative sentences. Moreover, several regions were significantly correlated with individual differences in task performance: Activity in right frontoparietal cortex and left cerebellum (Crus I & II) showed a negative correlation with overall comprehension. By contrast, left frontotemporal areas and right cerebellum (Lobule VII) showed a negative correlation with accuracy specifically for syntactically complex sentences. In addition, laterality analyses confirmed a lack of hemispheric lateralization in activity evoked by sentence stimuli in older adults. Importantly, we found different hemispheric roles, with a left-lateralized core language network supporting syntactic operations, and right-hemisphere regions coming into play to aid in general cognitive demands during spoken sentence processing. Together our findings support the view that high levels of language comprehension in older adults are maintained by a close interplay between a core left hemisphere language network and additional neural resources in the contralateral hemisphere.
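The laterality analyses mentioned here are commonly summarized with a laterality index over homologous left/right regions; a minimal sketch, with illustrative activation values rather than the study's data:

```python
# Illustrative activation magnitudes (arbitrary units) for homologous left/right ROIs.
left_activity, right_activity = 3.2, 2.8

# A standard laterality index: +1 fully left-lateralized, -1 fully right-lateralized,
# values near 0 indicate bilateral activity (as reported for older adults here).
li = (left_activity - right_activity) / (left_activity + right_activity)
print(round(li, 3))  # → 0.067
```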
Collapse
Affiliation(s)
- Yune Sang Lee
- Department of Speech, Language, and Hearing, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, USA
| | - Chad S. Rogers
- Department of Psychology, Union College, Schenectady, NY, USA
| | - Murray Grossman
- Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Arthur Wingfield
| | - Jonathan E. Peelle
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, MO, USA
| |
Collapse
|
15
|
Heugel N, Beardsley SA, Liebenthal E. EEG and fMRI coupling and decoupling based on joint independent component analysis (jICA). J Neurosci Methods 2022; 369:109477. [PMID: 34998799 PMCID: PMC8879823 DOI: 10.1016/j.jneumeth.2022.109477] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2021] [Revised: 12/20/2021] [Accepted: 01/04/2022] [Indexed: 11/25/2022]
Abstract
BACKGROUND Meaningful integration of functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) requires knowing whether these measurements reflect the activity of the same neural sources, i.e., estimating the degree of coupling and decoupling between the neuroimaging modalities. NEW METHOD This paper proposes a method to quantify the coupling and decoupling of fMRI and EEG signals based on the mixing matrix produced by joint independent component analysis (jICA). The method is termed fMRI/EEG-jICA. RESULTS fMRI and EEG acquired during a syllable detection task with variable syllable presentation rates (0.25-3 Hz) were separated with jICA into two spatiotemporally distinct components, a primary component that increased nonlinearly in amplitude with syllable presentation rate, putatively reflecting an obligatory auditory response, and a secondary component that declined nonlinearly with syllable presentation rate, putatively reflecting an auditory attention orienting response. The two EEG subcomponents were of similar amplitude, but the secondary fMRI subcomponent was tenfold smaller than the primary one. COMPARISON TO EXISTING METHOD fMRI multiple regression analysis yielded a map more consistent with the primary than the secondary fMRI subcomponent of jICA, as determined by a greater area under the curve (0.5 versus 0.38) in a sensitivity and specificity analysis of spatial overlap. CONCLUSION fMRI/EEG-jICA revealed spatiotemporally distinct brain networks with greater sensitivity than fMRI multiple regression analysis, demonstrating how this method can be used for leveraging EEG signals to inform the detection and functional characterization of fMRI signals. fMRI/EEG-jICA may be useful for studying neurovascular coupling at a macro-level, e.g., in neurovascular disorders.
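The joint decomposition at the heart of jICA can be sketched with a generic ICA implementation: normalize each modality, concatenate along the feature dimension, and decompose once so both modalities share the same trial-wise sources. This toy simulation (random mixing matrices, scikit-learn's FastICA standing in for the paper's ICA) is an assumption-laden illustration, not the authors' pipeline:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Two simulated source time courses shared across modalities (toy stand-ins).
n_trials = 200
s = rng.laplace(size=(n_trials, 2))   # independent, super-Gaussian sources

# Each modality observes the sources through its own mixing matrix.
fmri = s @ rng.normal(size=(2, 50))   # 50 "voxel" features per trial
eeg = s @ rng.normal(size=(2, 30))    # 30 "electrode-time" features per trial

# jICA: z-score each modality, concatenate features, run ONE ICA jointly.
def z(x):
    return (x - x.mean(0)) / x.std(0)

joint = np.hstack([z(fmri), z(eeg)])
ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(joint)    # shared trial-wise source activations

# The joint mixing matrix splits back into per-modality maps; the relative
# weights of the fMRI and EEG parts of each component index their coupling.
fmri_maps, eeg_maps = ica.mixing_[:50], ica.mixing_[50:]
print(sources.shape, fmri_maps.shape, eeg_maps.shape)
```

Comparing the amplitudes of the fMRI vs EEG parts of each component is how a component can appear "coupled" in one modality but much weaker in the other, as with the secondary subcomponent reported here.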
Collapse
Affiliation(s)
- Nicholas Heugel
- Department of Biomedical Engineering, Marquette University and Medical College of Wisconsin, Milwaukee, WI
| | - Scott A Beardsley
- Department of Biomedical Engineering, Marquette University and Medical College of Wisconsin, Milwaukee, WI; Clinical Translational Science Institute, Medical College of Wisconsin, Milwaukee, WI
| | - Einat Liebenthal
- Department of Biomedical Engineering, Marquette University and Medical College of Wisconsin, Milwaukee, WI, USA; McLean Hospital, Department of Psychiatry, Harvard Medical School, Boston, MA, USA.
| |
Collapse
|
17
|
Wehbe L, Blank IA, Shain C, Futrell R, Levy R, von der Malsburg T, Smith N, Gibson E, Fedorenko E. Incremental Language Comprehension Difficulty Predicts Activity in the Language Network but Not the Multiple Demand Network. Cereb Cortex 2021; 31:4006-4023. [PMID: 33895807 PMCID: PMC8328211 DOI: 10.1093/cercor/bhab065] [Citation(s) in RCA: 34] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2020] [Revised: 01/15/2021] [Accepted: 02/21/2021] [Indexed: 12/28/2022] Open
Abstract
What role do domain-general executive functions play in human language comprehension? To address this question, we examine the relationship between behavioral measures of comprehension and neural activity in the domain-general "multiple demand" (MD) network, which has been linked to constructs like attention, working memory, inhibitory control, and selection, and implicated in diverse goal-directed behaviors. Specifically, functional magnetic resonance imaging data collected during naturalistic story listening are compared with theory-neutral measures of online comprehension difficulty and incremental processing load (reading times and eye-fixation durations). Critically, to ensure that variance in these measures is driven by features of the linguistic stimulus rather than reflecting participant- or trial-level variability, the neuroimaging and behavioral datasets were collected in nonoverlapping samples. We find no behavioral-neural link in functionally localized MD regions; instead, this link is found in the domain-specific, fronto-temporal "core language network," in both left-hemispheric areas and their right hemispheric homotopic areas. These results argue against strong involvement of domain-general executive circuits in language comprehension.
Collapse
Affiliation(s)
- Leila Wehbe
- Carnegie Mellon University, Machine Learning Department, PA 15213, USA
| | - Idan Asher Blank
- Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, MA 02139, USA
- University of California Los Angeles, Department of Psychology, CA 90095, USA
| | - Cory Shain
- Ohio State University, Department of Linguistics, OH 43210, USA
| | - Richard Futrell
- Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, MA 02139, USA
- University of California Irvine, Department of Linguistics, CA 92697, USA
| | - Roger Levy
- Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, MA 02139, USA
- University of California San Diego, Department of Linguistics, CA 92161, USA
| | - Titus von der Malsburg
- Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, MA 02139, USA
- University of Stuttgart, Institute of Linguistics, 70049 Stuttgart, Germany
| | - Nathaniel Smith
- University of California San Diego, Department of Linguistics, CA 92161, USA
| | - Edward Gibson
- Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, MA 02139, USA
| | - Evelina Fedorenko
- Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, MA 02139, USA
- Massachusetts Institute of Technology, McGovern Institute for Brain Research, MA 02139, USA
| |
Collapse
|
18
|
Continuous-time deconvolutional regression for psycholinguistic modeling. Cognition 2021; 215:104735. [PMID: 34303182 DOI: 10.1016/j.cognition.2021.104735] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2019] [Revised: 04/01/2021] [Accepted: 04/11/2021] [Indexed: 12/28/2022]
Abstract
The influence of stimuli in psycholinguistic experiments diffuses across time because the human response to language is not instantaneous. The linear models typically used to analyze psycholinguistic data are unable to account for this phenomenon due to strong temporal independence assumptions, while existing deconvolutional methods for estimating diffuse temporal structure model time discretely and therefore cannot be directly applied to natural language stimuli where events (words) have variable duration. In light of evidence that continuous-time deconvolutional regression (CDR) can address these issues (Shain & Schuler, 2018), this article motivates the use of CDR for many experimental settings, exposits some of its mathematical properties, and empirically evaluates the influence of various experimental confounds (noise, multicollinearity, and impulse response misspecification), hyperparameter settings, and response types (behavioral and fMRI). Results show that CDR (1) yields highly consistent estimates across a variety of hyperparameter configurations, (2) faithfully recovers the data-generating model on synthetic data, even under adverse training conditions, and (3) outperforms widely-used statistical approaches when applied to naturalistic reading and fMRI data. In addition, procedures for testing scientific hypotheses using CDR are defined and demonstrated, and empirically-motivated best-practices for CDR modeling are proposed. Results support the use of CDR for analyzing psycholinguistic time series, especially in a naturalistic experimental paradigm.
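The core move of CDR, evaluating a parametric impulse-response kernel at continuous delays from events of variable duration rather than binning time, can be illustrated in a few lines. The gamma-shaped kernel, the word onsets, and the predictor values below are illustrative assumptions, not the article's fitted model:

```python
import math

# Word onsets (s) with variable spacing, and a predictor value per word
# (e.g., surprisal); all values here are illustrative.
onsets = [0.0, 0.4, 1.1, 1.5, 2.3]
surprisal = [2.0, 1.0, 3.0, 0.5, 2.5]

def gamma_irf(d, shape=6.0, rate=1.0):
    # Parametric impulse response evaluated at a continuous delay d (s);
    # in CDR the shape/rate parameters are fitted from data, not fixed.
    if d <= 0:
        return 0.0
    return (rate ** shape) * d ** (shape - 1) * math.exp(-rate * d) / math.gamma(shape)

def predict(t):
    # Convolve past impulses with the kernel at their exact continuous delays;
    # no discretization of time into fixed-width bins is required.
    return sum(x * gamma_irf(t - t0) for t0, x in zip(onsets, surprisal))

print(round(predict(6.0), 4))
```

Because each event contributes through the kernel at its own real-valued delay, stimuli with variable word durations (naturalistic reading, fMRI time series) are handled directly, which is the limitation of discrete-time deconvolution that the article targets.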
Collapse
|
19
|
Baumgarten TJ, Maniscalco B, Lee JL, Flounders MW, Abry P, He BJ. Neural integration underlying naturalistic prediction flexibly adapts to varying sensory input rate. Nat Commun 2021; 12:2643. [PMID: 33976118 PMCID: PMC8113607 DOI: 10.1038/s41467-021-22632-z] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2020] [Accepted: 03/16/2021] [Indexed: 02/03/2023] Open
Abstract
Prediction of future sensory input based on past sensory information is essential for organisms to effectively adapt their behavior in dynamic environments. Humans successfully predict future stimuli in various natural settings. Yet, it remains elusive how the brain achieves effective prediction despite enormous variations in sensory input rate, which directly affect how fast sensory information can accumulate. We presented participants with acoustic sequences capturing temporal statistical regularities prevalent in nature and investigated neural mechanisms underlying predictive computation using MEG. By parametrically manipulating sequence presentation speed, we tested two hypotheses: neural prediction relies on integrating past sensory information over fixed time periods or fixed amounts of information. We demonstrate that across halved and doubled presentation speeds, predictive information in neural activity stems from integration over fixed amounts of information. Our findings reveal the neural mechanisms enabling humans to robustly predict dynamic stimuli in natural environments despite large sensory input rate variations.
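The two integration hypotheses the authors contrast can be made concrete with a toy simulation: integrating over a fixed time window yields a tone count that scales with presentation rate, whereas integrating over a fixed number of tones is rate-invariant. Tone times, window size, and tone count below are illustrative assumptions:

```python
# Toy check of the two hypotheses: integrate the last T seconds (fixed time)
# vs. the last N tones (fixed amount of information) as the rate changes.
def last_fixed_time(times, now, window):
    return [t for t in times if now - window <= t < now]

def last_fixed_count(times, now, n):
    past = [t for t in times if t < now]
    return past[-n:]

fast = [i * 0.1 for i in range(100)]   # 10 tones/s
slow = [i * 0.2 for i in range(100)]   # 5 tones/s

# Fixed-time integration halves its tone count when the rate halves...
print(len(last_fixed_time(fast, 5.05, 1.0)), len(last_fixed_time(slow, 5.05, 1.0)))  # → 10 5
# ...whereas fixed-information integration is rate-invariant by construction.
print(len(last_fixed_count(fast, 5.05, 8)), len(last_fixed_count(slow, 5.05, 8)))    # → 8 8
```

The MEG finding reported here corresponds to the second regime: across halved and doubled speeds, predictive information tracked a fixed amount of accumulated input.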
Collapse
Affiliation(s)
- Thomas J Baumgarten
- Neuroscience Institute, New York University School of Medicine, New York, NY, USA
- Institute of Clinical Neuroscience and Medical Psychology, Medical Faculty, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
| | - Brian Maniscalco
- Neuroscience Institute, New York University School of Medicine, New York, NY, USA
| | - Jennifer L Lee
- Neuroscience Graduate Program, New York University, New York, NY, USA
| | - Matthew W Flounders
- Neuroscience Institute, New York University School of Medicine, New York, NY, USA
| | - Patrice Abry
- CNRS, Laboratoire de Physique, Université de Lyon, ENS Lyon, Lyon, France
| | - Biyu J He
- Neuroscience Institute, New York University School of Medicine, New York, NY, USA.
- Departments of Neurology, Neuroscience and Physiology, and Radiology, New York University School of Medicine, New York, NY, USA.
| |
Collapse
|
20
|
Extracting representations of cognition across neuroimaging studies improves brain decoding. PLoS Comput Biol 2021; 17:e1008795. [PMID: 33939700 PMCID: PMC8118532 DOI: 10.1371/journal.pcbi.1008795] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2020] [Revised: 05/13/2021] [Accepted: 02/15/2021] [Indexed: 11/19/2022] Open
Abstract
Cognitive brain imaging is accumulating datasets about the neural substrate of many different mental processes. Yet, most studies are based on few subjects and have low statistical power. Analyzing data across studies could bring more statistical power; yet the current brain-imaging analytic framework cannot be used at scale, as it requires casting all cognitive tasks in a unified theoretical framework. We introduce a new methodology to analyze brain responses across tasks without a joint model of the psychological processes. The method boosts statistical power in small studies with specific cognitive focus by analyzing them jointly with large studies that probe less focal mental processes. Our approach improves decoding performance for 80% of 35 widely different functional-imaging studies. It finds commonalities across tasks in a data-driven way, via common brain representations that predict mental processes. These are brain networks tuned to psychological manipulations. They outline interpretable and plausible brain structures. The extracted networks have been made available; they can be readily reused in new neuroimaging studies. We provide a multi-study decoding tool to adapt to new data.
Collapse
|
21
|
Kajiura M, Jeong H, Kawata NYS, Yu S, Kinoshita T, Kawashima R, Sugiura M. Brain activity predicts future learning success in intensive second language listening training. BRAIN AND LANGUAGE 2021; 212:104839. [PMID: 33271393 DOI: 10.1016/j.bandl.2020.104839] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/30/2019] [Revised: 06/03/2020] [Accepted: 07/14/2020] [Indexed: 06/12/2023]
Abstract
This study explores the neural mechanisms underlying how prior knowledge gained from pre-listening transcript reading helps comprehension of fast-rate speech in a second language (L2), and how this applies to L2 learning. Top-down predictive processing based on prior knowledge may play an important role in L2 speech comprehension and in improving listening skills. By manipulating the pre-listening transcript condition (pre-listening transcript reading [TR] vs. no transcript reading [NTR]) and language type (first language [L1] vs. L2), we measured brain activity in L2 learners, who performed fast-rate listening comprehension tasks during functional magnetic resonance imaging. Thereafter, we examined whether TR_L2-specific brain activity can predict individual learning success after intensive listening training. The left angular and superior temporal gyri were key areas responsible for integrating prior knowledge with sensory input. Activity in these areas correlated significantly with gain scores on subsequent training, indicating that brain activity related to prior knowledge-sensory input integration predicts future learning success.
Collapse
Affiliation(s)
- Mayumi Kajiura
- Division of Foreign Language Education, Aichi Shukutoku University, Nagoya, Japan.
| | - Hyeonjeong Jeong
- Graduate School of International Cultural Studies, Tohoku University, Sendai, Japan; Institute of Development, Aging and Cancer, Tohoku University, Sendai, Japan.
| | - Natasha Y S Kawata
- Institute of Development, Aging and Cancer, Tohoku University, Sendai, Japan
| | - Shaoyun Yu
- Graduate School of Humanities, Nagoya University, Nagoya, Japan
| | - Toru Kinoshita
- Graduate School of Humanities, Nagoya University, Nagoya, Japan
| | - Ryuta Kawashima
- Institute of Development, Aging and Cancer, Tohoku University, Sendai, Japan
| | - Motoaki Sugiura
- Institute of Development, Aging and Cancer, Tohoku University, Sendai, Japan; International Research Institute for Disaster Science, Tohoku University, Sendai, Japan
| |
Collapse
|
22
|
Blank IA, Fedorenko E. No evidence for differences among language regions in their temporal receptive windows. Neuroimage 2020; 219:116925. [PMID: 32407994 PMCID: PMC9392830 DOI: 10.1016/j.neuroimage.2020.116925] [Citation(s) in RCA: 36] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2019] [Revised: 03/20/2020] [Accepted: 05/06/2020] [Indexed: 10/24/2022] Open
Abstract
The "core language network" consists of left frontal and temporal regions that are selectively engaged in linguistic processing. Whereas functional differences among these regions have long been debated, many accounts propose distinctions in terms of representational grain-size (e.g., words vs. phrases/sentences) or processing time-scale, i.e., operating on local linguistic features vs. larger spans of input. Indeed, the topography of language regions appears to overlap with a cortical hierarchy reported by Lerner et al. (2011), wherein mid-posterior temporal regions are sensitive to low-level features of speech, surrounding areas to word-level information, and inferior frontal areas to sentence-level information and beyond. However, the correspondence between the language network and this hierarchy of "temporal receptive windows" (TRWs) is difficult to establish because the precise anatomical locations of language regions vary across individuals. To directly test this correspondence, we first identified language regions in each participant with a well-validated task-based localizer, which confers high functional resolution to the study of TRWs (traditionally based on stereotactic coordinates); then, we characterized regional TRWs with the naturalistic story listening paradigm of Lerner et al. (2011), which augments task-based characterizations of the language network by more closely resembling comprehension "in the wild". We find no region-by-TRW interactions across temporal and inferior frontal regions, which are all sensitive to both word-level and sentence-level information. Therefore, the language network as a whole constitutes a unique stage of information integration within a broader cortical hierarchy.
Collapse
Affiliation(s)
- Idan A Blank
- Department of Brain and Cognitive Sciences and McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA.
| | - Evelina Fedorenko
- Department of Brain and Cognitive Sciences and McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
| |
Collapse
|
23
|
Matchin W, Wood E. Syntax-Sensitive Regions of the Posterior Inferior Frontal Gyrus and the Posterior Temporal Lobe Are Differentially Recruited by Production and Perception. Cereb Cortex Commun 2020; 1:tgaa029. [PMID: 34296103 PMCID: PMC8152856 DOI: 10.1093/texcom/tgaa029] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2020] [Revised: 06/22/2020] [Accepted: 06/22/2020] [Indexed: 01/27/2023] Open
Abstract
Matchin and Hickok (2020) proposed that the left posterior inferior frontal gyrus (PIFG) and the left posterior temporal lobe (PTL) both play a role in syntactic processing, broadly construed, attributing distinct functions to these regions with respect to production and perception. Consistent with this hypothesis, functional dissociations between these regions have been demonstrated with respect to lesion-symptom mapping in aphasia. However, neuroimaging studies of syntactic comprehension typically show similar activations in these regions. In order to identify whether these regions show distinct activation patterns with respect to syntactic perception and production, we performed an fMRI study contrasting the subvocal articulation and perception of structured jabberwocky phrases (syntactic), sequences of real words (lexical), and sequences of pseudowords (phonological). We defined two sets of language-selective regions of interest (ROIs) in individual subjects for the PIFG and the PTL using the contrasts [syntactic > lexical] and [syntactic > phonological]. We found robust significant interactions of comprehension and production between these two regions at the syntactic level, for both sets of language-selective ROIs. This suggests a core difference in the function of these regions with respect to production and perception, consistent with the lesion literature.
Collapse
Affiliation(s)
- William Matchin
- Communication Sciences and Disorders, University of South Carolina, Columbia, SC 29208, USA
| | - Emily Wood
- Communication Sciences and Disorders, University of South Carolina, Columbia, SC 29208, USA
| |
Collapse
|
24
|
Diachek E, Blank I, Siegelman M, Affourtit J, Fedorenko E. The Domain-General Multiple Demand (MD) Network Does Not Support Core Aspects of Language Comprehension: A Large-Scale fMRI Investigation. J Neurosci 2020; 40:4536-4550. [PMID: 32317387 PMCID: PMC7275862 DOI: 10.1523/jneurosci.2036-19.2020] [Citation(s) in RCA: 83] [Impact Index Per Article: 20.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2019] [Revised: 03/02/2020] [Accepted: 04/05/2020] [Indexed: 11/21/2022] Open
Abstract
Aside from the language-selective left-lateralized frontotemporal network, language comprehension sometimes recruits a domain-general bilateral frontoparietal network implicated in executive functions: the multiple demand (MD) network. However, the nature of the MD network's contributions to language comprehension remains debated. To illuminate the role of this network in language processing in humans, we conducted a large-scale fMRI investigation using data from 30 diverse word and sentence comprehension experiments (481 unique participants [female and male], 678 scanning sessions). In line with prior findings, the MD network was active during many language tasks. Moreover, similar to the language-selective network, which is robustly lateralized to the left hemisphere, these responses were stronger in the left-hemisphere MD regions. However, in contrast with the language-selective network, the MD network responded more strongly (1) to lists of unconnected words than to sentences, and (2) in paradigms with an explicit task compared with passive comprehension paradigms. Indeed, many passive comprehension tasks failed to elicit a response above the fixation baseline in the MD network, in contrast to strong responses in the language-selective network. Together, these results argue against a role for the MD network in core aspects of sentence comprehension, such as inhibiting irrelevant meanings or parses, keeping intermediate representations active in working memory, or predicting upcoming words or structures. 
These results align with recent evidence of relatively poor tracking of the linguistic signal by the MD regions during naturalistic comprehension, and instead suggest that the MD network's engagement during language processing reflects effort associated with extraneous task demands.
SIGNIFICANCE STATEMENT: Domain-general executive processes, such as working memory and cognitive control, have long been implicated in language comprehension, including in neuroimaging studies that have reported activation in domain-general multiple demand (MD) regions for linguistic manipulations. However, much prior evidence has come from paradigms where language interpretation is accompanied by extraneous tasks. Using a large fMRI dataset (30 experiments/481 participants/678 sessions), we demonstrate that MD regions are engaged during language comprehension in the presence of task demands, but not during passive reading/listening, conditions that strongly activate the frontotemporal language network. These results present a fundamental challenge to proposals whereby linguistic computations, such as inhibiting irrelevant meanings, keeping representations active in working memory, or predicting upcoming elements, draw on domain-general executive resources.
Affiliation(s)
- Evgeniia Diachek
- Department of Psychology, Vanderbilt University, Nashville, Tennessee 37203
- Idan Blank
- Department of Psychology, University of California at Los Angeles, Los Angeles, California 90095
- Matthew Siegelman
- Department of Psychology, Columbia University, New York, New York 10027
- Josef Affourtit
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
- Evelina Fedorenko
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
- Department of Psychiatry, Massachusetts General Hospital, Charlestown, Massachusetts 02129

25
Fedorenko E, Blank IA. Broca's Area Is Not a Natural Kind. Trends Cogn Sci 2020; 24:270-284. [PMID: 32160565 PMCID: PMC7211504 DOI: 10.1016/j.tics.2020.01.001] [Citation(s) in RCA: 129] [Impact Index Per Article: 32.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2019] [Revised: 12/21/2019] [Accepted: 01/09/2020] [Indexed: 01/09/2023]
Abstract
Theories of human cognition prominently feature 'Broca's area', which causally contributes to a myriad of mental functions. However, Broca's area is not a monolithic, multipurpose unit - it is structurally and functionally heterogeneous. Some functions engaging (subsets of) this area share neurocognitive resources, whereas others rely on separable circuits. A decade of converging evidence has now illuminated a fundamental distinction between two subregions of Broca's area that likely play computationally distinct roles in cognition: one belongs to the domain-specific 'language network', the other to the domain-general 'multiple-demand (MD) network'. Claims about Broca's area should be (re)cast in terms of these (and other, as yet undetermined) functional components, to establish a cumulative research enterprise where empirical findings can be replicated and theoretical proposals can be meaningfully compared and falsified.
Affiliation(s)
- Evelina Fedorenko
- Brain and Cognitive Sciences Department and McGovern Institute for Brain Research, Massachusetts Institute of Technology (MIT), Cambridge, MA 02139, USA
- Idan A Blank
- Department of Psychology, University of California at Los Angeles (UCLA), Los Angeles, CA 90095, USA

26
Mollica F, Siegelman M, Diachek E, Piantadosi ST, Mineroff Z, Futrell R, Kean H, Qian P, Fedorenko E. Composition is the Core Driver of the Language-selective Network. NEUROBIOLOGY OF LANGUAGE (CAMBRIDGE, MASS.) 2020; 1:104-134. [PMID: 36794007 PMCID: PMC9923699 DOI: 10.1162/nol_a_00005] [Citation(s) in RCA: 39] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/09/2019] [Accepted: 12/19/2019] [Indexed: 05/11/2023]
Abstract
The frontotemporal language network responds robustly and selectively to sentences. But the features of linguistic input that drive this response and the computations that these language areas support remain debated. Two key features of sentences are typically confounded in natural linguistic input: words in sentences (a) are semantically and syntactically combinable into phrase- and clause-level meanings, and (b) occur in an order licensed by the language's grammar. Inspired by recent psycholinguistic work establishing that language processing is robust to word order violations, we hypothesized that the core linguistic computation is composition, and, thus, can take place even when the word order violates the grammatical constraints of the language. This hypothesis predicts that a linguistic string should elicit a sentence-level response in the language network provided that the words in that string can enter into dependency relationships as in typical sentences. We tested this prediction across two fMRI experiments (total N = 47) by introducing a varying number of local word swaps into naturalistic sentences, leading to progressively less syntactically well-formed strings. Critically, local dependency relationships were preserved because combinable words remained close to each other. As predicted, word order degradation did not decrease the magnitude of the blood oxygen level-dependent response in the language network, except when combinable words were so far apart that composition among nearby words was highly unlikely. This finding demonstrates that composition is robust to word order violations, and that the language regions respond as strongly as they do to naturalistic linguistic input provided that composition can take place.
Affiliation(s)
- Hope Kean
- Brain & Cognitive Sciences Department, MIT
- Peng Qian
- Brain & Cognitive Sciences Department, MIT
- Evelina Fedorenko
- Brain & Cognitive Sciences Department, MIT
- McGovern Institute for Brain Research, MIT
- Psychiatry Department, Massachusetts General Hospital

27
Shain C, Blank IA, van Schijndel M, Schuler W, Fedorenko E. fMRI reveals language-specific predictive coding during naturalistic sentence comprehension. Neuropsychologia 2020; 138:107307. [PMID: 31874149 PMCID: PMC7140726 DOI: 10.1016/j.neuropsychologia.2019.107307] [Citation(s) in RCA: 87] [Impact Index Per Article: 21.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2019] [Revised: 12/02/2019] [Accepted: 12/13/2019] [Indexed: 11/19/2022]
Abstract
Much research in cognitive neuroscience supports prediction as a canonical computation of cognition across domains. Is such predictive coding implemented by feedback from higher-order domain-general circuits, or is it locally implemented in domain-specific circuits? What information sources are used to generate these predictions? This study addresses these two questions in the context of language processing. We present fMRI evidence from a naturalistic comprehension paradigm (1) that predictive coding in the brain's response to language is domain-specific, and (2) that these predictions are sensitive both to local word co-occurrence patterns and to hierarchical structure. Using a recently developed continuous-time deconvolutional regression technique that supports data-driven hemodynamic response function discovery from continuous BOLD signal fluctuations in response to naturalistic stimuli, we found effects of prediction measures in the language network but not in the domain-general multiple-demand network, which supports executive control processes and has been previously implicated in language comprehension. Moreover, within the language network, surface-level and structural prediction effects were separable. The predictability effects in the language network were substantial, with the model capturing over 37% of explainable variance on held-out data. These findings indicate that human sentence processing mechanisms generate predictions about upcoming words using cognitive processes that are sensitive to hierarchical structure and specialized for language processing, rather than via feedback from high-level executive control mechanisms.
Affiliation(s)
- Idan Asher Blank
- University of California Los Angeles, 90024, USA; Massachusetts Institute of Technology, 02139, USA
- William Schuler
- The Ohio State University, 43210, USA; Massachusetts General Hospital, Program in Speech and Hearing Bioscience and Technology, 02115, USA
- Evelina Fedorenko
- Massachusetts General Hospital, Program in Speech and Hearing Bioscience and Technology, 02115, USA

28
Discourse management during speech perception: A functional magnetic resonance imaging (fMRI) study. Neuroimage 2019; 202:116047. [DOI: 10.1016/j.neuroimage.2019.116047] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2018] [Revised: 07/09/2019] [Accepted: 07/22/2019] [Indexed: 11/22/2022] Open
29
Planton S, Chanoine V, Sein J, Anton JL, Nazarian B, Pallier C, Pattamadilok C. Top-down activation of the visuo-orthographic system during spoken sentence processing. Neuroimage 2019; 202:116135. [PMID: 31470125 DOI: 10.1016/j.neuroimage.2019.116135] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2019] [Revised: 08/09/2019] [Accepted: 08/26/2019] [Indexed: 11/28/2022] Open
Abstract
The left ventral occipitotemporal cortex (vOT) is considered the key area of the visuo-orthographic system. However, some studies have reported that the area is also involved in speech processing tasks, especially those that require activation of orthographic knowledge. These findings suggest the existence of a top-down activation mechanism allowing such cross-modal activation. Yet, little is known about the involvement of the vOT in more natural speech processing situations such as spoken sentence processing. Here, we addressed this issue in a functional Magnetic Resonance Imaging (fMRI) study while manipulating the impact of two factors, i.e., task demands (semantic vs. low-level perceptual task) and the quality of the speech signal (sentences presented against a clear vs. noisy background). Analyses were performed at the levels of the whole brain and of a region of interest (ROI), focusing on the vOT voxels individually identified through a reading task. Whole-brain analysis showed that processing spoken sentences induced activity in a large network including the regions typically involved in phonological, articulatory, semantic and orthographic processing. ROI analysis further specified that a significant part of the vOT voxels that responded to written words also responded to spoken sentences, suggesting that the same area within the left occipitotemporal pathway contributes to both reading and speech processing. Interestingly, both analyses provided converging evidence that vOT responses to speech were sensitive to both task demands and the quality of the speech signal: compared to the low-level perceptual task, activity of the area increased when comprehension effort was required. The impact of background noise depended on task demands: it led to a decrease of vOT activity in the semantic task but not in the low-level perceptual task.
Our results provide new insights into the function of this key area of the reading network, notably by showing that its speech-induced top-down activation also generalizes to ecological speech processing situations.
Affiliation(s)
- Samuel Planton
- Aix Marseille Univ, CNRS, LPL, Aix-en-Provence, France; INSERM-CEA, Cognitive Neuroimaging Unit, Neurospin Center, Gif-sur-Yvette, France
- Valérie Chanoine
- Aix Marseille Univ, Institute of Language, Communication and the Brain, Brain and Language Research Institute, Aix-en-Provence, France
- Julien Sein
- Aix Marseille Univ, CNRS, Centre IRM-INT, INT UMR 7289, Marseille, France
- Jean-Luc Anton
- Aix Marseille Univ, CNRS, Centre IRM-INT, INT UMR 7289, Marseille, France
- Bruno Nazarian
- Aix Marseille Univ, CNRS, Centre IRM-INT, INT UMR 7289, Marseille, France
- Christophe Pallier
- INSERM-CEA, Cognitive Neuroimaging Unit, Neurospin Center, Gif-sur-Yvette, France

30
Scott TL, Perrachione TK. Common cortical architectures for phonological working memory identified in individual brains. Neuroimage 2019; 202:116096. [PMID: 31415882 DOI: 10.1016/j.neuroimage.2019.116096] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2019] [Revised: 07/10/2019] [Accepted: 08/11/2019] [Indexed: 02/01/2023] Open
Abstract
Phonological working memory is the capacity to briefly maintain and recall representations of sounds important for speech and language and is believed to be critical for language and reading acquisition. Whether phonological working memory is supported by fronto-parietal brain regions associated with short-term memory storage or by perisylvian brain structures implicated in speech perception and production is unclear, perhaps due to variability in stimuli, task demands, and individuals. We used fMRI to assess neurophysiological responses while individuals performed two tasks with closely matched stimuli but divergent task demands (nonword repetition and nonword discrimination) at two levels of phonological working memory load. Using analyses designed to address intersubject variability, we found significant neural responses to the critical contrast of high vs. low phonological working memory load in both tasks in a set of regions closely resembling those involved in speech perception and production. Moreover, within those regions, the voxel-wise patterns of load-related activation were highly correlated between the two tasks. These results suggest that brain regions in the temporal and frontal lobes encapsulate the core neurocomputational components of phonological working memory, an architecture that becomes increasingly evident as neural responses are examined in successively finer-grained detail in individual participants.
Affiliation(s)
- Terri L Scott
- Graduate Program for Neuroscience, Boston University, USA
- Tyler K Perrachione
- Department of Speech, Language, and Hearing Sciences, Boston University, USA

31
Siegelman M, Blank IA, Mineroff Z, Fedorenko E. An Attempt to Conceptually Replicate the Dissociation between Syntax and Semantics during Sentence Comprehension. Neuroscience 2019; 413:219-229. [PMID: 31200104 PMCID: PMC6661197 DOI: 10.1016/j.neuroscience.2019.06.003] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2018] [Revised: 05/31/2019] [Accepted: 06/03/2019] [Indexed: 11/24/2022]
Abstract
Is sentence structure processed by the same neural and cognitive resources that are recruited for processing word meanings, or do structure and meaning rely on distinct resources? Linguistic theorizing and much behavioral evidence suggest tight integration between lexico-semantic and syntactic representations and processing. However, most current proposals of the neural architecture of language continue to postulate a distinction between the two. One of the earlier and most cited pieces of neuroimaging evidence in favor of this dissociation comes from a paper by Dapretto and Bookheimer (1999). Using a sentence-meaning judgment task, Dapretto & Bookheimer observed two distinct peaks within the left inferior frontal gyrus (LIFG): one was more active during a lexico-semantic manipulation, and the other during a syntactic manipulation. Although the paper is highly cited, no attempt has been made, to our knowledge, to replicate the original finding. We report an fMRI study that attempts to do so. Using a combination of whole-brain, group-level ROI, and participant-specific functional ROI approaches, we fail to replicate the original dissociation. In particular, whereas parts of LIFG respond reliably more strongly during lexico-semantic than syntactic processing, no part of LIFG (including in the region defined around the peak reported by Dapretto & Bookheimer) shows the opposite pattern. We speculate that the original result was a false positive, possibly driven by a small subset of participants or items that biased a fixed-effects analysis with low power.
Affiliation(s)
- Matthew Siegelman
- MIT, Department of Brain and Cognitive Sciences; Columbia University, Department of Psychology
- Idan A Blank
- MIT, Department of Brain and Cognitive Sciences; UCLA, Department of Psychology
- Evelina Fedorenko
- MIT, Department of Brain and Cognitive Sciences; MIT, McGovern Institute for Brain Research; MGH, Department of Psychiatry

32
Lizarazu M, Lallier M, Molinaro N. Phase-amplitude coupling between theta and gamma oscillations adapts to speech rate. Ann N Y Acad Sci 2019; 1453:140-152. [PMID: 31020680 PMCID: PMC6850406 DOI: 10.1111/nyas.14099] [Citation(s) in RCA: 31] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2018] [Revised: 02/11/2019] [Accepted: 03/26/2019] [Indexed: 11/30/2022]
Abstract
Low- and high-frequency cortical oscillations play an important role in speech processing. Low-frequency neural oscillations in the delta (<4 Hz) and theta (4-8 Hz) bands entrain to the prosodic and syllabic rates of speech, respectively. Theta band neural oscillations modulate high-frequency neural oscillations in the gamma band (28-40 Hz), which have been hypothesized to be crucial for processing phonemes in natural speech. Since speech rate is known to vary considerably, both between and within talkers, it has yet to be determined whether this nested gamma response reflects an externally induced rhythm sensitive to the rate of the fine-grained structure of the input or a speech rate-independent endogenous response. Here, we recorded magnetoencephalography responses from participants listening to speech delivered at different rates: decelerated, normal, and accelerated. We found that the phase of theta band oscillations in left and right auditory regions adjusts to speech rate variations. Importantly, we showed that the peak of the gamma response, which is coupled to the phase of theta, follows the speech rate. This indicates that gamma activity in auditory regions synchronizes with the fine-grained properties of speech, possibly reflecting detailed acoustic analysis of the input.
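The theta-gamma nesting this abstract describes is commonly quantified with a phase-amplitude coupling (PAC) metric. Below is a minimal illustrative sketch in Python/NumPy using a Canolty-style mean-vector-length modulation index; the FFT-domain filter, band limits, and toy signal are assumptions chosen for illustration, not the authors' actual MEG analysis pipeline.

```python
import numpy as np

def analytic_signal(x):
    # Analytic signal via the FFT (equivalent to a Hilbert transform).
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    return np.fft.ifft(X * h)

def bandpass(x, fs, lo, hi):
    # Crude FFT-domain band-pass filter, for illustration only.
    freqs = np.abs(np.fft.fftfreq(len(x), 1 / fs))
    X = np.fft.fft(x)
    X[(freqs < lo) | (freqs > hi)] = 0
    return np.real(np.fft.ifft(X))

def modulation_index(sig, fs, phase_band=(4, 8), amp_band=(28, 40)):
    # Mean-vector-length PAC: |mean(amplitude * exp(i * phase))|.
    phase = np.angle(analytic_signal(bandpass(sig, fs, *phase_band)))
    amp = np.abs(analytic_signal(bandpass(sig, fs, *amp_band)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))

# Toy signal: 34 Hz gamma whose envelope is locked to 6 Hz theta phase.
fs = 500
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
gamma = (1 + theta) * np.sin(2 * np.pi * 34 * t)
coupled = theta + 0.5 * gamma
print(modulation_index(coupled, fs))  # clearly exceeds the uncoupled case
```

With a constant-envelope gamma component instead, the index collapses toward zero, which is what makes it a usable coupling measure.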
Affiliation(s)
- Mikel Lizarazu
- BCBL, Basque Center on Cognition, Brain and Language, Donostia/San Sebastian, Spain; Laboratoire de Sciences Cognitives et Psycholinguistique, Dept d'Etudes Cognitives, ENS, PSL University, EHESS, CNRS, Paris, France
- Marie Lallier
- BCBL, Basque Center on Cognition, Brain and Language, Donostia/San Sebastian, Spain
- Nicola Molinaro
- BCBL, Basque Center on Cognition, Brain and Language, Donostia/San Sebastian, Spain; Ikerbasque, Basque Foundation for Science, Bilbao, Spain

33
Paunov AM, Blank IA, Fedorenko E. Functionally distinct language and Theory of Mind networks are synchronized at rest and during language comprehension. J Neurophysiol 2019; 121:1244-1265. [PMID: 30601693 PMCID: PMC6485726 DOI: 10.1152/jn.00619.2018] [Citation(s) in RCA: 41] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2018] [Revised: 12/26/2018] [Accepted: 12/30/2018] [Indexed: 12/30/2022] Open
Abstract
Communication requires the abilities to generate and interpret utterances and to infer the beliefs, desires, and goals of others ("Theory of Mind"; ToM). These two abilities have been shown to dissociate: individuals with aphasia retain the ability to think about others' mental states; and individuals with autism are impaired in social reasoning, but their basic language processing is often intact. In line with this evidence from brain disorders, functional MRI (fMRI) studies have shown that linguistic and ToM abilities recruit distinct sets of brain regions. And yet, language is a social tool that allows us to share thoughts with one another. Thus, the language and ToM brain networks must share information despite being implemented in distinct neural circuits. Here, we investigated potential interactions between these networks during naturalistic cognition using functional correlations in fMRI. The networks were functionally defined in individual participants, in terms of preference for sentences over nonwords for language, and for belief inference over physical-event processing for ToM, with both a verbal and a nonverbal paradigm. Although, across experiments, interregion correlations within each network were higher than between-network correlations, we also observed above-baseline synchronization of blood oxygenation level-dependent signal fluctuations between the two networks during rest and story comprehension. This synchronization was functionally specific: neither network was synchronized with the executive control network (functionally defined in terms of preference for a harder over easier version of an executive task). Thus, coordination between the language and ToM networks appears to be an inherent and specific characteristic of their functional architecture. NEW & NOTEWORTHY Humans differ from nonhuman primates in their abilities to communicate linguistically and to infer others' mental states. 
Although linguistic and social abilities appear to be interlinked onto- and phylogenetically, they are dissociated in the adult human brain. Yet successful communication requires language and social reasoning to work in concert. Using functional MRI, we show that language regions are synchronized with social regions during rest and language comprehension, pointing to a possible mechanism for internetwork interaction.
Affiliation(s)
- Alexander M Paunov
- Massachusetts Institute of Technology, Brain & Cognitive Sciences Department, Cambridge, Massachusetts
- Idan A Blank
- Massachusetts Institute of Technology, Brain & Cognitive Sciences Department, Cambridge, Massachusetts
- Evelina Fedorenko
- Massachusetts Institute of Technology, Brain & Cognitive Sciences Department, Cambridge, Massachusetts
- Harvard Medical School, Psychiatry Department, Boston, Massachusetts
- Massachusetts General Hospital, Psychiatry Department, Boston, Massachusetts

34
Weaver MD, Fahrenfort JJ, Belopolsky A, van Gaal S. Independent Neural Activity Patterns for Sensory- and Confidence-Based Information Maintenance during Category-Selective Visual Processing. eNeuro 2019; 6:ENEURO.0268-18.2018. [PMID: 30834301 PMCID: PMC6397950 DOI: 10.1523/eneuro.0268-18.2018] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2018] [Revised: 11/30/2018] [Accepted: 12/16/2018] [Indexed: 11/21/2022] Open
Abstract
Several influential theories of consciousness attempt to explain how, when and where conscious perception arises in the brain. The extent of conscious perception of a stimulus is often probed by asking subjects to provide confidence estimations in their choices in challenging perceptual decision-making tasks. Here, we aimed to dissociate neural patterns of "cognitive" and "sensory" information maintenance by linking category-selective visual processes to decision confidence using multivariate decoding techniques on human EEG data. Participants discriminated at-threshold masked face versus house stimuli and reported confidence in their discrimination performance. Three distinct types of category-selective neural activity patterns were observed, dissociable by their timing, scalp topography, relationship with decision confidence, and generalization profile. An early (∼150-200 ms) decoding profile was unrelated to confidence and was quickly followed by two distinct decodable patterns of late neural activity (350-500 ms). One pattern was on-diagonal, global, and highly related to decision confidence, likely indicating cognitive maintenance of consciously reportable stimulus representations. The other pattern, however, was off-diagonal, restricted to posterior electrode sites (local), and independent of decision confidence; it may therefore reflect sensory maintenance of category-specific information, possibly operating via recurrent processes within visual cortices. These results highlight that two functionally independent neural processes operate in parallel, only one of which is related to decision confidence and conscious access.
Affiliation(s)
- Matthew D. Weaver
- Department of Psychology, University of Amsterdam, Amsterdam 1001 NK, The Netherlands
- Amsterdam Brain and Cognition (ABC), University of Amsterdam, Amsterdam 1001 NK, The Netherlands
- Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam 1081 BT, The Netherlands
- Johannes J. Fahrenfort
- Department of Psychology, University of Amsterdam, Amsterdam 1001 NK, The Netherlands
- Amsterdam Brain and Cognition (ABC), University of Amsterdam, Amsterdam 1001 NK, The Netherlands
- Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam 1081 BT, The Netherlands
- Artem Belopolsky
- Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam 1081 BT, The Netherlands
- Simon van Gaal
- Department of Psychology, University of Amsterdam, Amsterdam 1001 NK, The Netherlands
- Amsterdam Brain and Cognition (ABC), University of Amsterdam, Amsterdam 1001 NK, The Netherlands

35
Hertrich I, Dietrich S, Ackermann H. Cortical phase locking to accelerated speech in blind and sighted listeners prior to and after training. BRAIN AND LANGUAGE 2018; 185:19-29. [PMID: 30025355 DOI: 10.1016/j.bandl.2018.07.002] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/05/2017] [Revised: 07/06/2018] [Accepted: 07/06/2018] [Indexed: 06/08/2023]
Abstract
Cross-correlation of magnetoencephalography (MEG) with time courses derived from the speech signal has shown differences in phase-locking between blind subjects able to comprehend accelerated speech and sighted controls. The present training study helps disentangle the effects of blindness and training. Both subject groups (baseline: n = 16 blind, 13 sighted; trained: 10 blind, 3 sighted) were able to enhance speech comprehension up to ca. 18 syllables per second. MEG responses phase-locked to syllable onsets were captured in five pre-defined source locations comprising left and right auditory cortex (A1), right visual cortex (V1), left inferior frontal gyrus (IFG) and left pre-supplementary motor area. Phase locking in A1 was consistently increased, while V1 showed opposite training effects in blind and sighted subjects. The IFG also showed group differences, indicating enhanced top-down strategies in sighted subjects, whereas blind subjects may have a more fine-grained bottom-up resolution for accelerated speech.
Affiliation(s)
- Ingo Hertrich
- Department of Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Germany
- Susanne Dietrich
- Department of Psychology, Evolutionary Cognition (Cognitive Sciences), University of Tübingen, Germany
- Hermann Ackermann
- Department of Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Germany

36
Penn LR, Ayasse ND, Wingfield A, Ghitza O. The possible role of brain rhythms in perceiving fast speech: Evidence from adult aging. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2018; 144:2088. [PMID: 30404494 PMCID: PMC6181647 DOI: 10.1121/1.5054905] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/11/2018] [Revised: 08/28/2018] [Accepted: 08/31/2018] [Indexed: 06/08/2023]
Abstract
The rhythms of speech and the time scales of linguistic units (e.g., syllables) correspond remarkably to cortical oscillations. Previous research has demonstrated that in young adults, the intelligibility of time-compressed speech can be rescued by "repackaging" the speech signal through the regular insertion of silent gaps to restore correspondence to the theta oscillator. This experiment tested whether this same phenomenon can be demonstrated in older adults, who show age-related changes in cortical oscillations. The results demonstrated a similar phenomenon for older adults, but that the "rescue point" of repackaging is shifted, consistent with a slowing of theta oscillations.
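The "repackaging" manipulation described here, regularly inserting silent gaps into time-compressed speech so the packet rate falls back toward the theta range, can be sketched in a few lines of Python/NumPy. The packet and gap durations below are illustrative assumptions, not the study's exact parameters.

```python
import numpy as np

def repackage(compressed, fs, packet_ms=40.0, gap_ms=80.0):
    """Insert a silent gap after each fixed-duration packet of a
    time-compressed waveform, lowering the effective packet rate."""
    packet_len = int(fs * packet_ms / 1000)
    gap = np.zeros(int(fs * gap_ms / 1000))
    chunks = [compressed[i:i + packet_len]
              for i in range(0, len(compressed), packet_len)]
    # Each packet is followed by a gap, so total duration grows by
    # gap_ms for every packet_ms of compressed signal.
    return np.concatenate([np.concatenate([c, gap]) for c in chunks])

# Example: 1 s of speech compressed 3x -> ~333 ms of signal at 16 kHz.
fs = 16000
compressed = np.random.randn(fs // 3)
repackaged = repackage(compressed, fs)
```

With 40 ms packets and 80 ms gaps the packet rate is about 8.3 per second, near the upper edge of the theta band, which is the point of the manipulation.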
Affiliation(s)
- Lana R Penn
- Volen National Center for Complex Systems, Brandeis University, Waltham, Massachusetts 02454, USA
- Nicole D Ayasse
- Volen National Center for Complex Systems, Brandeis University, Waltham, Massachusetts 02454, USA
- Arthur Wingfield
- Volen National Center for Complex Systems, Brandeis University, Waltham, Massachusetts 02454, USA
- Oded Ghitza
- Department of Biomedical Engineering, Hearing Research Center, Boston University, Boston, Massachusetts 02215, USA

37
Amalric M, Dehaene S. Cortical circuits for mathematical knowledge: evidence for a major subdivision within the brain's semantic networks. Philos Trans R Soc Lond B Biol Sci 2018; 373:rstb.2016.0515. [PMID: 29292362 DOI: 10.1098/rstb.2016.0515] [Citation(s) in RCA: 45] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 06/17/2017] [Indexed: 01/29/2023] Open
Abstract
Is mathematical language similar to natural language? Are language areas used by mathematicians when they do mathematics? And does the brain comprise a generic semantic system that stores mathematical knowledge alongside knowledge of history, geography or famous people? Here, we refute those views by reviewing three functional MRI studies of the representation and manipulation of high-level mathematical knowledge in professional mathematicians. The results reveal that brain activity during professional mathematical reflection spares perisylvian language-related brain regions as well as temporal lobe areas classically involved in general semantic knowledge. Instead, mathematical reflection recycles bilateral intraparietal and ventral temporal regions involved in elementary number sense. Even simple fact retrieval, such as remembering that 'the sine function is periodical' or that 'London buses are red', activates dissociated areas for math versus non-math knowledge. Together with other fMRI and recent intracranial studies, our results indicate a major separation between two brain networks for mathematical and non-mathematical semantics, which goes a long way toward explaining a variety of facts in neuroimaging, neuropsychology and developmental disorders. This article is part of a discussion meeting issue 'The origins of numerical abilities'.
Affiliation(s)
- Marie Amalric
- Cognitive Neuroimaging Unit, CEA DSV/I2BM, INSERM, Université Paris-Sud, Université Paris-Saclay, NeuroSpin center, 91191 Gif/Yvette, France; Collège de France, Paris, France; Sorbonne Universités, UPMC Univ Paris 06, IFD, 4 place Jussieu, Paris, France
- Stanislas Dehaene
- Cognitive Neuroimaging Unit, CEA DSV/I2BM, INSERM, Université Paris-Sud, Université Paris-Saclay, NeuroSpin center, 91191 Gif/Yvette, France; Collège de France, Paris, France
38
Dietrich S, Hertrich I, Müller-Dahlhaus F, Ackermann H, Belardinelli P, Desideri D, Seibold VC, Ziemann U. Reduced Performance During a Sentence Repetition Task by Continuous Theta-Burst Magnetic Stimulation of the Pre-supplementary Motor Area. Front Neurosci 2018; 12:361. [PMID: 29896086 PMCID: PMC5987029 DOI: 10.3389/fnins.2018.00361] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2018] [Accepted: 05/09/2018] [Indexed: 11/23/2022] Open
Abstract
The pre-supplementary motor area (pre-SMA) is engaged in speech comprehension under difficult circumstances such as poor acoustic signal quality or time-critical conditions. Previous studies found that the left pre-SMA is activated when subjects listen to accelerated speech. Here, the functional role of the pre-SMA in accelerated speech comprehension was tested by inducing a transient “virtual lesion” using continuous theta-burst stimulation (cTBS). Participants were tested (1) prior to stimulation (pre-baseline), (2) 10 min after stimulation (test condition for the cTBS effect), and (3) 60 min after stimulation (post-baseline) using a sentence repetition task (formant-synthesized at rates of 8, 10, 12, 14, and 16 syllables/s). Speech comprehension was quantified as the percentage of correctly reproduced speech material. At high speech rates, subjects showed decreased performance after cTBS of the pre-SMA. Regarding the error pattern, the number of incorrect words without any semantic or phonological similarity to the target context increased, while the number of related words decreased. Thus, the transient impairment of the pre-SMA seems to affect its inhibitory function, which normally eliminates erroneous speech material prior to speaking or, in the case of perception, prior to encoding into a semantically/pragmatically meaningful message.
Affiliation(s)
- Susanne Dietrich
- Department of Neurology & Stroke, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany; Department of Psychology, Evolutionary Cognition, University of Tübingen, Tübingen, Germany
- Ingo Hertrich
- Department of Neurology & Stroke, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Florian Müller-Dahlhaus
- Department of Neurology & Stroke, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany; Department of Psychiatry and Psychotherapy, University Medical Center of the Johannes Gutenberg University, University of Mainz, Mainz, Germany
- Hermann Ackermann
- Department of Neurology & Stroke, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Paolo Belardinelli
- Department of Neurology & Stroke, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Debora Desideri
- Department of Neurology & Stroke, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Verena C Seibold
- Department of Psychology, Evolutionary Cognition, University of Tübingen, Tübingen, Germany
- Ulf Ziemann
- Department of Neurology & Stroke, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
39
Wilson SM, Bautista A, McCarron A. Convergence of spoken and written language processing in the superior temporal sulcus. Neuroimage 2018; 171:62-74. [PMID: 29277646 PMCID: PMC5857434 DOI: 10.1016/j.neuroimage.2017.12.068] [Citation(s) in RCA: 50] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2017] [Revised: 12/15/2017] [Accepted: 12/20/2017] [Indexed: 12/22/2022] Open
Abstract
Spoken and written language processing streams converge in the superior temporal sulcus (STS), but the functional and anatomical nature of this convergence is not clear. We used functional MRI to quantify neural responses to spoken and written language, along with unintelligible stimuli in each modality, and employed several strategies to segregate activations on the dorsal and ventral banks of the STS. We found that intelligible and unintelligible inputs in both modalities activated the dorsal bank of the STS. The posterior dorsal bank was able to discriminate between modalities based on distributed patterns of activity, pointing to a role in encoding of phonological and orthographic word forms. The anterior dorsal bank was agnostic to input modality, suggesting that this region represents abstract lexical nodes. In the ventral bank of the STS, responses to unintelligible inputs in both modalities were attenuated, while intelligible inputs continued to drive activation, indicative of higher level semantic and syntactic processing. Our results suggest that the processing of spoken and written language converges on the posterior dorsal bank of the STS, which is the first of a heterogeneous set of language regions within the STS, with distinct functions spanning a broad range of linguistic processes.
Affiliation(s)
- Stephen M Wilson
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA.
- Alexa Bautista
- Department of Speech, Language, and Hearing Sciences, University of Arizona, Tucson, AZ, USA
- Angelica McCarron
- Department of Speech, Language, and Hearing Sciences, University of Arizona, Tucson, AZ, USA
40
Nakamura K, Makuuchi M, Oga T, Mizuochi-Endo T, Iwabuchi T, Nakajima Y, Dehaene S. Neural capacity limits during unconscious semantic processing. Eur J Neurosci 2018. [DOI: 10.1111/ejn.13890] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Affiliation(s)
- Kimihiro Nakamura
- Faculty of Human Sciences; University of Tsukuba; Tsukuba 305-8577 Japan
- Michiru Makuuchi
- National Rehabilitation Center for Persons with Disabilities; Tokorozawa 359-0042 Japan
- Tatsuhide Oga
- Toranomon Hospital Kajigaya; Kawasaki 213-0015 Japan
- Tomomi Mizuochi-Endo
- National Rehabilitation Center for Persons with Disabilities; Tokorozawa 359-0042 Japan
- Toshiki Iwabuchi
- National Rehabilitation Center for Persons with Disabilities; Tokorozawa 359-0042 Japan
- Yasoichi Nakajima
- National Rehabilitation Center for Persons with Disabilities; Tokorozawa 359-0042 Japan
- Stanislas Dehaene
- Cognitive Neuroimaging Unit; CEA DSV/I2BM; INSERM; NeuroSpin Center; Université Paris-Sud; Université Paris-Saclay; 91191 Gif/Yvette France
- Collège de France; 11 Place Marcelin Berthelot 75005 Paris France
41
Moreno A, Limousin F, Dehaene S, Pallier C. Brain correlates of constituent structure in sign language comprehension. Neuroimage 2018; 167:151-161. [PMID: 29175202 PMCID: PMC6044420 DOI: 10.1016/j.neuroimage.2017.11.040] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2017] [Revised: 10/27/2017] [Accepted: 11/19/2017] [Indexed: 01/16/2023] Open
Abstract
During sentence processing, areas of the left superior temporal sulcus, inferior frontal gyrus and left basal ganglia exhibit a systematic increase in brain activity as a function of constituent size, suggesting their involvement in the computation of syntactic and semantic structures. Here, we asked whether these areas play a universal role in language and therefore contribute to the processing of non-spoken sign language. Congenitally deaf adults who acquired French sign language as a first language and written French as a second language were scanned while watching sequences of signs in which the size of syntactic constituents was manipulated. An effect of constituent size was found in the basal ganglia, including the head of the caudate and the putamen. A smaller effect was also detected in temporal and frontal regions previously shown to be sensitive to constituent size in written language in hearing French subjects (Pallier et al., 2011). When the deaf participants read sentences versus word lists, the same network of language areas was observed. While reading and sign language processing yielded identical effects of linguistic structure in the basal ganglia, the effect of structure was stronger in all cortical language areas for written language relative to sign language. Furthermore, cortical activity was partially modulated by age of acquisition and reading proficiency. Our results stress the important role of the basal ganglia, within the language network, in the representation of the constituent structure of language, regardless of the input modality.
Affiliation(s)
- Antonio Moreno
- Cognitive Neuroimaging Unit, CEA, INSERM, Université Paris-Sud, Université Paris-Saclay, NeuroSpin Center, 91191 Gif/Yvette, France.
- Fanny Limousin
- Cognitive Neuroimaging Unit, CEA, INSERM, Université Paris-Sud, Université Paris-Saclay, NeuroSpin Center, 91191 Gif/Yvette, France
- Stanislas Dehaene
- Cognitive Neuroimaging Unit, CEA, INSERM, Université Paris-Sud, Université Paris-Saclay, NeuroSpin Center, 91191 Gif/Yvette, France; Collège de France, 11 Place Marcelin Berthelot, 75005 Paris, France
- Christophe Pallier
- Cognitive Neuroimaging Unit, CEA, INSERM, Université Paris-Sud, Université Paris-Saclay, NeuroSpin Center, 91191 Gif/Yvette, France.
42
Borges AFT, Giraud AL, Mansvelder HD, Linkenkaer-Hansen K. Scale-Free Amplitude Modulation of Neuronal Oscillations Tracks Comprehension of Accelerated Speech. J Neurosci 2018; 38:710-722. [PMID: 29217685 PMCID: PMC6596185 DOI: 10.1523/jneurosci.1515-17.2017] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2017] [Revised: 10/24/2017] [Accepted: 11/20/2017] [Indexed: 01/17/2023] Open
Abstract
Speech comprehension is preserved up to a threefold acceleration, but deteriorates rapidly at higher speeds. Current models posit that perceptual resilience to accelerated speech is limited by the brain's ability to parse speech into syllabic units using δ/θ oscillations. Here, we investigated whether the involvement of neuronal oscillations in processing accelerated speech also relates to their scale-free amplitude modulation as indexed by the strength of long-range temporal correlations (LRTC). We recorded MEG while 24 human subjects (12 females) listened to radio news uttered at different comprehensible rates, at a mostly unintelligible rate and at this same speed interleaved with silence gaps. δ, θ, and low-γ oscillations followed the nonlinear variation of comprehension, with LRTC rising only at the highest speed. In contrast, increasing the rate was associated with a monotonic increase in LRTC in high-γ activity. When intelligibility was restored with the insertion of silence gaps, LRTC in the δ, θ, and low-γ oscillations resumed the low levels observed for intelligible speech. Remarkably, the lower the individual subject scaling exponents of δ/θ oscillations, the greater the comprehension of the fastest speech rate. Moreover, the strength of LRTC of the speech envelope decreased at the maximal rate, suggesting an inverse relationship with the LRTC of brain dynamics when comprehension halts. Our findings show that scale-free amplitude modulation of cortical oscillations and speech signals are tightly coupled to speech uptake capacity. SIGNIFICANCE STATEMENT One may read this statement in 20-30 s, but reading it in less than five leaves us clueless. Our minds limit how much information we grasp in an instant. Understanding the neural constraints on our capacity for sensory uptake is a fundamental question in neuroscience. Here, MEG was used to investigate neuronal activity while subjects listened to radio news played faster and faster until becoming unintelligible. We found that speech comprehension is related to the scale-free dynamics of δ and θ bands, whereas this property in high-γ fluctuations mirrors speech rate. We propose that successful speech processing imposes constraints on the self-organization of synchronous cell assemblies and their scale-free dynamics adjusts to the temporal properties of spoken language.
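The LRTC strength discussed in this entry is conventionally indexed by a scaling exponent, most often estimated with detrended fluctuation analysis (DFA). The abstract does not spell out the estimator, so the following is a minimal, illustrative DFA sketch; the function name and scale choices are assumptions, not the authors' actual pipeline. An exponent near 0.5 indicates an uncorrelated signal, while values above 0.5 indicate LRTC.

```python
import numpy as np

def dfa_exponent(signal, scales):
    """Estimate a DFA scaling exponent from a 1-D amplitude time series.

    Illustrative sketch only, not the pipeline used in the cited study."""
    # Build the cumulative "profile" of the mean-removed signal
    profile = np.cumsum(signal - np.mean(signal))
    flucts = []
    for s in scales:
        n_windows = len(profile) // s
        rms = []
        for i in range(n_windows):
            segment = profile[i * s:(i + 1) * s]
            t = np.arange(s)
            # Detrend each window with a linear fit, then take the RMS residual
            coeffs = np.polyfit(t, segment, 1)
            rms.append(np.sqrt(np.mean((segment - np.polyval(coeffs, t)) ** 2)))
        flucts.append(np.mean(rms))
    # The exponent is the slope of log-fluctuation versus log-scale
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope
```

Applied to white noise this returns roughly 0.5, and to a random walk roughly 1.5, which is the standard sanity check for a DFA implementation.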
Affiliation(s)
- Ana Filipa Teixeira Borges
- Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research, VU University Amsterdam, Amsterdam, Netherlands
- Amsterdam Neuroscience, Amsterdam, Netherlands
- Anne-Lise Giraud
- Department of Neuroscience, University of Geneva, Biotech Campus, Geneva 1211, Switzerland
- Huibert D Mansvelder
- Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research, VU University Amsterdam, Amsterdam, Netherlands
- Amsterdam Neuroscience, Amsterdam, Netherlands
- Klaus Linkenkaer-Hansen
- Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research, VU University Amsterdam, Amsterdam, Netherlands
- Amsterdam Neuroscience, Amsterdam, Netherlands
43
Discrete and continuous mechanisms of temporal selection in rapid visual streams. Nat Commun 2017; 8:1955. [PMID: 29208892 PMCID: PMC5717232 DOI: 10.1038/s41467-017-02079-x] [Citation(s) in RCA: 52] [Impact Index Per Article: 7.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2016] [Accepted: 11/04/2017] [Indexed: 11/08/2022] Open
Abstract
Humans can reliably detect a target picture even when tens of images are flashed every second. Here we use magnetoencephalography to dissect the neural mechanisms underlying the dynamics of temporal selection during a rapid serial visual presentation task. Multivariate decoding algorithms allow us to track the overlapping brain responses induced by each image in a rapid visual stream. The results show that temporal selection involves a sequence of gradual followed by all-or-none stages: (i) all images first undergo the same parallel processing pipeline; (ii) starting around 150 ms, responses to multiple images surrounding the target are continuously amplified in ventral visual areas; (iii) only the images that are subsequently reported elicit late all-or-none activations in visual and parietal areas around 350 ms. Thus, multiple images can cohabit in the brain and undergo efficient parallel processing, but temporal selection also isolates a single one for amplification and report.
44
Kozák LR, van Graan LA, Chaudhary UJ, Szabó ÁG, Lemieux L. ICN_Atlas: Automated description and quantification of functional MRI activation patterns in the framework of intrinsic connectivity networks. Neuroimage 2017; 163:319-341. [PMID: 28899742 PMCID: PMC5725313 DOI: 10.1016/j.neuroimage.2017.09.014] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2017] [Revised: 08/30/2017] [Accepted: 09/06/2017] [Indexed: 12/29/2022] Open
Abstract
Generally, the interpretation of functional MRI (fMRI) activation maps continues to rely on assessing their relationship to anatomical structures, mostly in a qualitative and often subjective way. Recently, the existence of persistent and stable brain networks of a functional nature has been revealed; in particular, these so-called intrinsic connectivity networks (ICNs) appear to link patterns of resting-state and task-related connectivity. These networks provide an opportunity for a functionally derived description and interpretation of fMRI maps, which may be especially important in cases where the maps are predominantly task-unrelated, such as studies of spontaneous brain activity, e.g. seizure-related fMRI maps in epilepsy patients or sleep states. Here we present a new toolbox (ICN_Atlas) aimed at facilitating the interpretation of fMRI data in the context of ICNs. More specifically, the new methodology was designed to describe fMRI maps in a function-oriented, objective and quantitative way using a set of 15 metrics conceived to quantify the degree of 'engagement' of ICNs for any given fMRI-derived statistical map of interest. We demonstrate that the proposed framework provides a highly reliable quantification of fMRI activation maps using a publicly available longitudinal (test-retest) resting-state fMRI dataset. The utility of ICN_Atlas is also illustrated on a parametric task-modulation fMRI dataset, and on a dataset from a patient who had repeated seizures during resting-state fMRI, confirmed on simultaneously recorded EEG. The ICN_Atlas toolbox is freely available for download at http://icnatlas.com and at http://www.nitrc.org for researchers to use in their fMRI investigations.
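The entry describes quantifying the 'engagement' of ICNs by an activation map, but the toolbox's 15 metrics are not defined here. As a hedged illustration of the general idea only, one might compute the fraction of suprathreshold voxels that fall inside a network template; the function name and metric below are illustrative assumptions, not ICN_Atlas's actual API.

```python
import numpy as np

def engagement_fraction(stat_map, network_mask, threshold=0.0):
    """Toy engagement score: share of suprathreshold voxels inside a network.

    Illustrative only -- not one of ICN_Atlas's published metrics."""
    active = stat_map > threshold          # suprathreshold voxels
    if active.sum() == 0:
        return 0.0                         # no activation anywhere
    return float((active & network_mask).sum() / active.sum())
```

For a map with three suprathreshold voxels, two of which lie inside the mask, this returns 2/3; a real toolbox operates on registered 3-D volumes and template atlases rather than toy arrays.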
Affiliation(s)
- Lajos R Kozák
- MR Research Center, Semmelweis University, 1085, Budapest, Hungary.
- Louis André van Graan
- Department of Clinical and Experimental Epilepsy, UCL Institute of Neurology, University College London, WC1N 3BG, London, UK; Epilepsy Society, SL9 0RJ Chalfont St. Peter, Buckinghamshire, UK.
- Umair J Chaudhary
- Department of Clinical and Experimental Epilepsy, UCL Institute of Neurology, University College London, WC1N 3BG, London, UK; Epilepsy Society, SL9 0RJ Chalfont St. Peter, Buckinghamshire, UK.
- Louis Lemieux
- Department of Clinical and Experimental Epilepsy, UCL Institute of Neurology, University College London, WC1N 3BG, London, UK; Epilepsy Society, SL9 0RJ Chalfont St. Peter, Buckinghamshire, UK.
45
Domain-General Brain Regions Do Not Track Linguistic Input as Closely as Language-Selective Regions. J Neurosci 2017; 37:9999-10011. [PMID: 28871034 DOI: 10.1523/jneurosci.3642-16.2017] [Citation(s) in RCA: 50] [Impact Index Per Article: 7.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2016] [Revised: 08/16/2017] [Accepted: 08/18/2017] [Indexed: 01/05/2023] Open
Abstract
Language comprehension engages a cortical network of left frontal and temporal regions. Activity in this network is language-selective, showing virtually no modulation by nonlinguistic tasks. In addition, language comprehension engages a second network consisting of bilateral frontal, parietal, cingulate, and insular regions. Activity in this "multiple demand" (MD) network scales with comprehension difficulty, but also with cognitive effort across a wide range of nonlinguistic tasks in a domain-general fashion. Given the functional dissociation between the language and MD networks, their respective contributions to comprehension are likely distinct, yet such differences remain elusive. Prior neuroimaging studies have suggested that activity in each network covaries with some linguistic features that, behaviorally, influence on-line processing and comprehension. This sensitivity of the language and MD networks to local input characteristics has often been interpreted, implicitly or explicitly, as evidence that both networks track linguistic input closely, and in a manner consistent across individuals. Here, we used fMRI to directly test this assumption by comparing the BOLD signal time courses in each network across different people (n = 45, men and women) listening to the same story. Language network activity showed fewer individual differences, indicative of closer input tracking, whereas MD network activity was more idiosyncratic and, moreover, showed lower reliability within an individual across repetitions of a story. These findings constrain cognitive models of language comprehension by suggesting a novel distinction between the processes implemented in the language and MD networks. SIGNIFICANCE STATEMENT Language comprehension recruits both language-specific mechanisms and domain-general mechanisms that are engaged in many cognitive processes. In the human cortex, language-selective mechanisms are implemented in the left-lateralized "core language network", whereas domain-general mechanisms are implemented in the bilateral "multiple demand" (MD) network. Here, we report the first direct comparison of the respective contributions of these networks to naturalistic story comprehension. Using a novel combination of neuroimaging approaches we find that MD regions track stories less closely than language regions. This finding constrains the possible contributions of the MD network to comprehension, contrasts with accounts positing that this network has continuous access to linguistic input, and suggests a new typology of comprehension processes based on their extent of input tracking.
46
Neurophysiological dynamics of phrase-structure building during sentence processing. Proc Natl Acad Sci U S A 2017; 114:E3669-E3678. [PMID: 28416691 DOI: 10.1073/pnas.1701590114] [Citation(s) in RCA: 129] [Impact Index Per Article: 18.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/29/2023] Open
Abstract
Although sentences unfold sequentially, one word at a time, most linguistic theories propose that their underlying syntactic structure involves a tree of nested phrases rather than a linear sequence of words. Whether and how the brain builds such structures, however, remains largely unknown. Here, we used human intracranial recordings and visual word-by-word presentation of sentences and word lists to investigate how left-hemispheric brain activity varies during the formation of phrase structures. In a broad set of language-related areas, comprising multiple superior temporal and inferior frontal sites, high-gamma power increased with each successive word in a sentence but decreased suddenly whenever words could be merged into a phrase. Regression analyses showed that each additional word or multiword phrase contributed a similar amount of additional brain activity, providing evidence for a merge operation that applies equally to linguistic objects of arbitrary complexity. More superficial models of language, based solely on sequential transition probability over lexical and syntactic categories, only captured activity in the posterior middle temporal gyrus. Formal model comparison indicated that the model of multiword phrase construction provided a better fit than probability-based models at most sites in superior temporal and inferior frontal cortices. Activity in those regions was consistent with a neural implementation of a bottom-up or left-corner parser of the incoming language stream. Our results provide initial intracranial evidence for the neurophysiological reality of the merge operation postulated by linguists and suggest that the brain compresses syntactically well-formed sequences of words into a hierarchy of nested phrases.
47
Pattamadilok C, Chanoine V, Pallier C, Anton JL, Nazarian B, Belin P, Ziegler JC. Automaticity of phonological and semantic processing during visual word recognition. Neuroimage 2017; 149:244-255. [PMID: 28163139 DOI: 10.1016/j.neuroimage.2017.02.003] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2016] [Revised: 01/30/2017] [Accepted: 02/02/2017] [Indexed: 11/25/2022] Open
Abstract
Reading involves the activation of phonological and semantic knowledge. Yet the automaticity of the activation of these representations remains subject to debate. The present study addressed this issue by examining how different brain areas involved in language processing responded to manipulations of bottom-up information (level of visibility) and top-down information (task demands) applied to written words. The analyses showed that the same brain areas were activated in response to written words whether the task was symbol detection, rime detection, or semantic judgment. This network included posterior, temporal and prefrontal regions, which clearly suggests the involvement of orthographic, semantic and phonological/articulatory processing in all tasks. However, we also found interactions between task and stimulus visibility, which reflected the fact that the strength of the neural responses to written words in several high-level language areas varied across tasks. Together, our findings suggest that the involvement of phonological and semantic processing in reading is supported by two complementary mechanisms: first, an automatic mechanism that results from a task-independent spread of activation throughout a network in which orthography is linked to phonology and semantics; second, a mechanism that further fine-tunes the sensitivity of high-level language areas to the sensory input in a task-dependent manner.
Affiliation(s)
- Valérie Chanoine
- Labex Brain and Language Research Institute, Aix-en-Provence, France
- Christophe Pallier
- INSERM-CEA Cognitive Neuroimaging Unit, Neurospin center, Gif-sur-Yvette, France
- Jean-Luc Anton
- Aix Marseille Univ, CNRS, INT Inst Neurosci Timone, UMR 7289, Centre IRM Fonctionnelle Cérébrale, Marseille, France
- Bruno Nazarian
- Aix Marseille Univ, CNRS, INT Inst Neurosci Timone, UMR 7289, Centre IRM Fonctionnelle Cérébrale, Marseille, France
- Pascal Belin
- Aix Marseille Univ, CNRS, INT Inst Neurosci Timone, UMR 7289, Centre IRM Fonctionnelle Cérébrale, Marseille, France
48
Issard C, Gervain J. Adult-like processing of time-compressed speech by newborns: A NIRS study. Dev Cogn Neurosci 2016; 25:176-184. [PMID: 27852514 PMCID: PMC6987815 DOI: 10.1016/j.dcn.2016.10.006] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2016] [Revised: 10/05/2016] [Accepted: 10/06/2016] [Indexed: 11/29/2022] Open
Abstract
- Newborns’ perception of time-compressed speech is similar to that of adults.
- Newborns adapt to moderately compressed speech, but not to highly compressed speech.
- Adaptation to time-compressed speech happens at an auditory level.
- Adaptation to time-compressed speech involves the left temporoparietal regions.
Humans can adapt to a wide range of variations in the speech signal, maintaining an invariant representation of the linguistic information it contains. Among them, adaptation to rapid or time-compressed speech has been well studied in adults, but the developmental origin of this capacity remains unknown. Does this ability depend on experience with speech (if yes, as heard in utero or as heard postnatally), with sounds in general or is it experience-independent? Using near-infrared spectroscopy, we show that the newborn brain can discriminate between three different compression rates: normal, i.e. 100% of the original duration, moderately compressed, i.e. 60% of original duration and highly compressed, i.e. 30% of original duration. Even more interestingly, responses to normal and moderately compressed speech are similar, showing a canonical hemodynamic response in the left temporoparietal, right frontal and right temporal cortex, while responses to highly compressed speech are inverted, showing a decrease in oxyhemoglobin concentration. These results mirror those found in adults, who readily adapt to moderately compressed, but not to highly compressed speech, showing that adaptation to time-compressed speech requires little or no experience with speech, and happens at an auditory, and not at a more abstract linguistic level.
Affiliation(s)
- Cécile Issard
- Laboratoire Psychologie de la Perception, Université Paris Descartes, 75006 Paris, France; Laboratoire Psychologie de la Perception, Centre National de la Recherche Scientifique UMR 8242, 75006 Paris, France
- Judit Gervain
- Laboratoire Psychologie de la Perception, Université Paris Descartes, 75006 Paris, France; Laboratoire Psychologie de la Perception, Centre National de la Recherche Scientifique UMR 8242, 75006 Paris, France.
49
Hertrich I, Dietrich S, Ackermann H. The role of the supplementary motor area for speech and language processing. Neurosci Biobehav Rev 2016; 68:602-610. [PMID: 27343998 DOI: 10.1016/j.neubiorev.2016.06.030] [Citation(s) in RCA: 173] [Impact Index Per Article: 21.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2015] [Revised: 06/17/2016] [Accepted: 06/21/2016] [Indexed: 01/23/2023]
Abstract
Apart from its function in speech motor control, the supplementary motor area (SMA) has largely been neglected in models of speech and language processing in the brain. The aim of this review is to summarize more recent work suggesting that the SMA has various superordinate control functions during speech communication and language reception, which are particularly relevant in cases of increased task demands. The SMA is subdivided into a posterior region serving predominantly motor-related functions (the SMA proper), whereas the anterior part (pre-SMA) is involved in higher-order cognitive control mechanisms. In analogy to the motor triggering functions of the SMA proper, the pre-SMA seems to manage procedural aspects of cognitive processing. These latter functions comprise, among others, attentional switching, ambiguity resolution, context integration, and coordination between procedural and declarative memory structures. Regarding language processing, this refers, for example, to the use of inner speech mechanisms during language encoding, but also to lexical disambiguation, syntax and prosody integration, and context tracking.
Affiliation(s)
- Ingo Hertrich
- Department of Neurology and Stroke, Hertie Institute for Clinical Brain Research, University of Tübingen, Germany.
- Susanne Dietrich
- Department of Neurology and Stroke, Hertie Institute for Clinical Brain Research, University of Tübingen, Germany
- Hermann Ackermann
- Department of Neurology and Stroke, Hertie Institute for Clinical Brain Research, University of Tübingen, Germany
50
Dehaene S, Meyniel F, Wacongne C, Wang L, Pallier C. The Neural Representation of Sequences: From Transition Probabilities to Algebraic Patterns and Linguistic Trees. Neuron 2015; 88:2-19. [DOI: 10.1016/j.neuron.2015.09.019] [Citation(s) in RCA: 243] [Impact Index Per Article: 27.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]