1
Xie Y, Zhou P, Zhan L, Xue Y. Low-frequency neural activity tracks syntactic information through semantic mediation. Brain Lang 2025; 261:105532. [PMID: 39787812] [DOI: 10.1016/j.bandl.2025.105532]
Abstract
How our brain integrates single words into larger linguistic units is a central focus in neurolinguistic studies. Previous studies have mainly explored this topic at the semantic or syntactic level, with few examining how cortical activity tracks word sequences with different levels of semantic correlation. In addition, prior research did not tease apart the semantic factors from the syntactic ones in the word sequences. The current study addressed these issues in a speech perception EEG experiment using the frequency-tagging paradigm. Participants (N = 25, mean age = 23;4, 16 female) listened to different types of sequences while their neural activity was recorded with EEG. We also constructed a model simulation based on surprisal values from GPT-2. Both the EEG results and the model predictions show that low-frequency neural activity tracks syntactic information through semantic mediation. Implications of the findings are discussed in relation to the language processing mechanism.
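The frequency-tagging logic behind this design can be sketched numerically. This is a toy simulation, not the authors' pipeline: the sampling rate, stimulation rates, and noise level are invented. The idea is that if the brain tracks multi-word units, the EEG spectrum shows peaks not only at the fast syllable rate but also at slower linguistic rates.

```python
import numpy as np

# Toy frequency-tagging simulation (illustrative rates and SNR, not the
# authors' data): a 4 Hz syllable response plus a weaker 1 Hz response
# reflecting tracking of larger linguistic units.
fs = 250                       # sampling rate (Hz), assumed
t = np.arange(0, 60, 1 / fs)   # 60 s of simulated EEG
rng = np.random.default_rng(0)

syllable_rate, unit_rate = 4.0, 1.0
eeg = (1.0 * np.sin(2 * np.pi * syllable_rate * t)
       + 0.6 * np.sin(2 * np.pi * unit_rate * t)
       + 0.5 * rng.standard_normal(t.size))

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def peak_snr(f0, half_width=5):
    """Spectral amplitude at f0 relative to the mean of neighboring bins."""
    i = int(np.argmin(np.abs(freqs - f0)))
    neighbors = np.r_[spectrum[i - half_width:i], spectrum[i + 1:i + 1 + half_width]]
    return spectrum[i] / neighbors.mean()

# Tracking shows up as spectral peaks standing out at both tagged rates.
snr_syllable, snr_unit = peak_snr(syllable_rate), peak_snr(unit_rate)
```

With a 60 s window, both tagged frequencies fall exactly on FFT bins, so the peaks are sharp; in real data, longer recordings play the same role.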
Affiliation(s)
- Yuan Xie
- School of Engineering, Westlake University, Hangzhou, Zhejiang 310030, China
- Institute of Advanced Technology, Westlake Institute for Advanced Study, Hangzhou, Zhejiang 310024, China
- Peng Zhou
- Department of Linguistics, School of International Studies, Zhejiang University, Hangzhou 310058, China
- Likan Zhan
- School of Communication Sciences, Beijing Language and Culture University, Beijing 100083, China
- Yanan Xue
- School of Communication Sciences, Beijing Language and Culture University, Beijing 100083, China
2
Coopmans CW, de Hoop H, Tezcan F, Hagoort P, Martin AE. Language-specific neural dynamics extend syntax into the time domain. PLoS Biol 2025; 23:e3002968. [PMID: 39836653] [PMCID: PMC11750093] [DOI: 10.1371/journal.pbio.3002968]
Abstract
Studies of perception have long shown that the brain adds information to its sensory analysis of the physical environment. A touchstone example for humans is language use: to comprehend a physical signal like speech, the brain must add linguistic knowledge, including syntax. Yet, syntactic rules and representations are widely assumed to be atemporal (i.e., abstract and not bound by time), so they must be translated into time-varying signals for speech comprehension and production. Here, we test three different models of the temporal spell-out of syntactic structure against brain activity of people listening to Dutch stories: an integratory bottom-up parser, a predictive top-down parser, and a mildly predictive left-corner parser. These models build exactly the same structure but differ in when syntactic information is added by the brain; this difference is captured in the (temporal distribution of the) complexity metric "incremental node count." Using temporal response function models with both acoustic and information-theoretic control predictors, node counts were regressed against source-reconstructed delta-band activity acquired with magnetoencephalography. Neural dynamics in left frontal and temporal regions most strongly reflect node counts derived by the top-down method, which postulates syntax early in time, suggesting that predictive structure building is an important component of Dutch sentence comprehension. The absence of strong effects of the left-corner model further suggests that its mildly predictive strategy does not represent Dutch language comprehension well, in contrast to what has been found for English. Understanding when the brain projects its knowledge of syntax onto speech, and whether this is done in language-specific ways, will inform and constrain the development of mechanistic models of syntactic structure building in the brain.
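The "incremental node count" metric that separates these parsing strategies can be illustrated on a toy parse tree (hypothetical grammar and sentence; the left-corner variant is omitted for brevity). A bottom-up parser counts a node at its rightmost word, once its content is complete; a top-down parser counts it at its leftmost word, predictively.

```python
# Toy parse of "dogs chase cats": (label, children...) tuples; strings are words.
tree = ("S",
        ("NP", "dogs"),
        ("VP", ("V", "chase"),
               ("NP", "cats")))

def node_counts(tree):
    """Per-word incremental node counts for top-down and bottom-up parsers.

    Top-down: a node is counted at its leftmost word (structure is posited
    before its content arrives). Bottom-up: a node is counted at its
    rightmost word (structure is built once its content is complete).
    """
    words, top_down, bottom_up = [], [], []

    def walk(node):
        if isinstance(node, str):            # a word (leaf)
            words.append(node)
            top_down.append(0)
            bottom_up.append(0)
            return len(words) - 1, len(words) - 1
        first = last = None
        for child in node[1:]:
            lo, hi = walk(child)
            first = lo if first is None else first
            last = hi
        top_down[first] += 1                 # opened at leftmost word
        bottom_up[last] += 1                 # closed at rightmost word
        return first, last

    walk(tree)
    return words, top_down, bottom_up

words, td, bu = node_counts(tree)
# Both strategies build the same five nodes but distribute them differently
# over time: top-down front-loads counts, bottom-up back-loads them.
```

It is exactly this difference in temporal distribution, not in the structure built, that the regression against delta-band activity exploits.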
Affiliation(s)
- Cas W. Coopmans
- Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, the Netherlands
- Centre for Language Studies, Radboud University, Nijmegen, the Netherlands
- Helen de Hoop
- Centre for Language Studies, Radboud University, Nijmegen, the Netherlands
- Filiz Tezcan
- Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, the Netherlands
- Peter Hagoort
- Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, the Netherlands
- Andrea E. Martin
- Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, the Netherlands
3
Weissbart H, Martin AE. The structure and statistics of language jointly shape cross-frequency neural dynamics during spoken language comprehension. Nat Commun 2024; 15:8850. [PMID: 39397036] [PMCID: PMC11471778] [DOI: 10.1038/s41467-024-53128-1]
Abstract
Humans excel at extracting structurally determined meaning from speech despite inherent physical variability. This study explores the brain's ability to predict and understand spoken language robustly. It investigates the relationship between structural and statistical language knowledge in brain dynamics, focusing on phase and amplitude modulation. Using syntactic features from constituent hierarchies and surface statistics from a transformer model as predictors of forward encoding models, we reconstructed cross-frequency neural dynamics from MEG data during audiobook listening. Our findings challenge a strict separation of linguistic structure and statistics in the brain, with both aiding neural signal reconstruction. Syntactic features have a more temporally spread impact, and both word entropy and the number of closing syntactic constituents are linked to the phase-amplitude coupling of neural dynamics, implying a role in temporal prediction and cortical oscillation alignment during speech processing. Our results indicate that structured and statistical information jointly shape neural dynamics during spoken language comprehension and suggest an integration process via a cross-frequency coupling mechanism.
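The forward encoding models mentioned here can be sketched as time-lagged ridge regression on simulated data. This is a minimal illustration, not the paper's pipeline: the feature, the response kernel, and the regularization strength are all invented.

```python
import numpy as np

# Simulated forward encoding (temporal response function) model: the neural
# signal is a lagged linear response to a stimulus feature plus noise, and
# the kernel is recovered with ridge regression. All parameters are invented.
rng = np.random.default_rng(1)
n = 5000
feature = rng.standard_normal(n)             # e.g. a word-level predictor
true_trf = np.exp(-np.arange(30) / 10.0)     # assumed decaying response kernel
neural = np.convolve(feature, true_trf)[:n] + 0.1 * rng.standard_normal(n)

# Time-lagged design matrix (lags 0..29 samples): column l holds the
# feature shifted right by l samples.
lags = np.arange(30)
X = np.stack([np.r_[np.zeros(l), feature[:n - l]] for l in lags], axis=1)

lam = 1.0  # ridge penalty
trf = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ neural)

err = np.max(np.abs(trf - true_trf))   # estimated kernel tracks the true one
```

In the actual studies, multiple feature columns (acoustic, statistical, syntactic) enter the design matrix jointly, and model fit is compared across feature sets.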
Affiliation(s)
- Hugo Weissbart
- Donders Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Andrea E Martin
- Donders Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
4
Slaats S, Meyer AS, Martin AE. Lexical Surprisal Shapes the Time Course of Syntactic Structure Building. Neurobiol Lang 2024; 5:942-980. [PMID: 39534445] [PMCID: PMC11556436] [DOI: 10.1162/nol_a_00155]
Abstract
When we understand language, we recognize words and combine them into sentences. In this article, we explore the hypothesis that listeners use probabilistic information about words to build syntactic structure. Recent work has shown that lexical probability and syntactic structure both modulate the delta-band (<4 Hz) neural signal. Here, we investigated whether the neural encoding of syntactic structure changes as a function of the distributional properties of a word. To this end, we analyzed MEG data of 24 native speakers of Dutch who listened to three fairytales with a total duration of 49 min. Using temporal response functions and a cumulative model-comparison approach, we evaluated the contributions of syntactic and distributional features to the variance in the delta-band neural signal. This revealed that lexical surprisal values (a distributional feature), as well as bottom-up node counts (a syntactic feature), positively contributed to the model of the delta-band neural signal. Subsequently, we compared responses to the syntactic feature between words with high- and low-surprisal values. This revealed a delay in the response to the syntactic feature as a consequence of the surprisal value of the word: high-surprisal values were associated with a delayed response to the syntactic feature by 150-190 ms. The delay was not affected by word duration, and did not have a lexical origin. These findings suggest that the brain uses probabilistic information to infer syntactic structure, and highlight the role of time in this process.
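Lexical surprisal, the distributional feature used as a predictor here, is -log2 P(word | context). A bigram model over a toy corpus (standing in for the large language models used in the actual studies) makes the computation concrete; the corpus and smoothing scheme are illustrative.

```python
import math
from collections import Counter

# Illustrative lexical surprisal from a bigram model over a toy corpus
# (a stand-in for the study's actual language model).
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat saw the dog .").split()

bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def surprisal(context, word):
    """-log2 P(word | context) with add-one smoothing, so unseen
    continuations get a finite (high) surprisal."""
    vocab = len(unigrams)
    p = (bigrams[(context, word)] + 1) / (unigrams[context] + vocab)
    return -math.log2(p)

low = surprisal("the", "cat")    # "the cat" occurs twice: low surprisal
high = surprisal("the", "sat")   # "the sat" never occurs: high surprisal
```

In the paper, it is words like the second case, high-surprisal continuations, whose syntactic response is delayed by 150-190 ms.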
Affiliation(s)
- Sophie Slaats
- Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Department of Basic Neurosciences, University of Geneva, Geneva, Switzerland
- Antje S. Meyer
- Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Andrea E. Martin
- Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
5
Townsend PH, Jones A, Patel AD, Race E. Rhythmic Temporal Cues Coordinate Cross-frequency Phase-amplitude Coupling during Memory Encoding. J Cogn Neurosci 2024; 36:2100-2116. [PMID: 38991125] [DOI: 10.1162/jocn_a_02217]
Abstract
Accumulating evidence suggests that rhythmic temporal cues in the environment influence the encoding of information into long-term memory. Here, we test the hypothesis that these mnemonic effects of rhythm reflect the coupling of high-frequency (gamma) oscillations to entrained lower-frequency oscillations synchronized to the beat of the rhythm. In Study 1, we first test this hypothesis in the context of global effects of rhythm on memory, when memory is superior for visual stimuli presented in rhythmic compared with arrhythmic patterns at encoding [Jones, A., & Ward, E. V. Rhythmic temporal structure at encoding enhances recognition memory, Journal of Cognitive Neuroscience, 31, 1549-1562, 2019]. We found that rhythmic presentation of visual stimuli during encoding was associated with greater phase-amplitude coupling (PAC) between entrained low-frequency (delta) oscillations and higher-frequency (gamma) oscillations. In Study 2, we next investigated cross-frequency PAC in the context of local effects of rhythm on memory encoding, when memory is superior for visual stimuli presented in-synchrony compared with out-of-synchrony with a background auditory beat [Hickey, P., Merseal, H., Patel, A. D., & Race, E. Memory in time: Neural tracking of low-frequency rhythm dynamically modulates memory formation. Neuroimage, 213, 116693, 2020]. We found that the mnemonic effect of rhythm in this context was again associated with increased cross-frequency PAC between entrained low-frequency (delta) oscillations and higher-frequency (gamma) oscillations. Furthermore, the magnitude of gamma power modulations positively scaled with the subsequent memory benefit for in- versus out-of-synchrony stimuli. Together, these results suggest that the influence of rhythm on memory encoding may reflect the temporal coordination of higher-frequency gamma activity by entrained low-frequency oscillations.
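Phase-amplitude coupling of the kind reported here can be quantified with a mean-vector-length index (a Canolty-style sketch on simulated data; the frequencies, coupling strength, and filter settings are illustrative, not the authors' parameters).

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

# Simulated delta-gamma coupling: 40 Hz gamma amplitude is modulated by the
# phase of a 2 Hz delta rhythm. All parameters are invented for illustration.
fs = 500
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(2)

delta_phase = 2 * np.pi * 2 * t                    # 2 Hz delta
gamma_amp = 1 + 0.8 * np.cos(delta_phase)          # amplitude tied to delta phase
signal = (np.cos(delta_phase)
          + gamma_amp * np.cos(2 * np.pi * 40 * t) # 40 Hz gamma
          + 0.3 * rng.standard_normal(t.size))

def bandpass(x, lo, hi):
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

phase = np.angle(hilbert(bandpass(signal, 1, 4)))   # delta phase
amp = np.abs(hilbert(bandpass(signal, 30, 50)))     # gamma amplitude envelope

# Mean vector length: near 0 without coupling, larger when gamma amplitude
# systematically follows delta phase.
mvl = np.abs(np.mean(amp * np.exp(1j * phase)))

# Control: shuffling the amplitude series destroys the coupling.
mvl_shuffled = np.abs(np.mean(rng.permutation(amp) * np.exp(1j * phase)))
```

The shuffle control mirrors the logic of surrogate testing commonly used to establish that an observed PAC value exceeds chance.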
Affiliation(s)
- Paige Hickey Townsend
- Massachusetts General Hospital, Charlestown, MA
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA
- Aniruddh D Patel
- Tufts University, Medford, MA
- Canadian Institute for Advanced Research
6
Chalas N, Meyer L, Lo CW, Park H, Kluger DS, Abbasi O, Kayser C, Nitsch R, Gross J. Dissociating prosodic from syntactic delta activity during natural speech comprehension. Curr Biol 2024; 34:3537-3549.e5. [PMID: 39047734] [DOI: 10.1016/j.cub.2024.06.072]
Abstract
Decoding human speech requires the brain to segment the incoming acoustic signal into meaningful linguistic units, ranging from syllables and words to phrases. Integrating these linguistic constituents into a coherent percept sets the root of compositional meaning and hence understanding. Prosodic cues, such as pauses, are important cues for segmentation in natural speech, but their interplay with higher-level linguistic processing is still unknown. Here, we dissociate the neural tracking of prosodic pauses from the segmentation of multi-word chunks using magnetoencephalography (MEG). We find that manipulating the regularity of pauses disrupts slow speech-brain tracking bilaterally in auditory areas (below 2 Hz) and in turn increases left-lateralized coherence of higher-frequency auditory activity at speech onsets (around 25-45 Hz). Critically, we also find that multi-word chunks, defined as short, coherent bundles of inter-word dependencies, are processed through the rhythmic fluctuations of low-frequency activity (below 2 Hz) bilaterally and independently of prosodic cues. Importantly, low-frequency alignment at chunk onsets increases the accuracy of an encoding model in bilateral auditory and frontal areas while controlling for the effect of acoustics. Our findings provide novel insights into the neural basis of speech perception, demonstrating that both acoustic features (prosodic cues) and abstract linguistic processing at the multi-word timescale are underpinned independently by low-frequency electrophysiological brain activity in the delta frequency range.
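The chunk-onset alignment logic can be illustrated with inter-trial phase coherence (ITC) on simulated data; all rates, times, and noise levels below are invented. If delta activity aligns to chunk onsets, the instantaneous phase sampled at those onsets is consistent across events, while phases at arbitrary control times are not.

```python
import numpy as np
from scipy.signal import hilbert

# Simulated chunk-onset analysis: delta phase sampled at onsets locked to a
# slow rhythm is consistent across events (high ITC); random control times
# yield low ITC. All parameters are illustrative.
fs = 200
t = np.arange(0, 120, 1 / fs)
rng = np.random.default_rng(3)

f_delta = 0.8                                   # chunk rate below 2 Hz
eeg = np.cos(2 * np.pi * f_delta * t) + 0.3 * rng.standard_normal(t.size)
phase = np.angle(hilbert(eeg))                  # instantaneous phase

onsets = np.arange(1.0, 119.0, 1 / f_delta)     # onsets locked to the rhythm
controls = rng.uniform(1.0, 119.0, onsets.size) # arbitrary control times

def itc(times):
    """Length of the mean phase vector across events (0 = no alignment)."""
    idx = (np.asarray(times) * fs).astype(int)
    return float(np.abs(np.mean(np.exp(1j * phase[idx]))))

itc_onsets, itc_controls = itc(onsets), itc(controls)
```

In the paper the corresponding comparison additionally controls for acoustics, so that chunk-onset alignment cannot be reduced to prosodic pauses.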
Affiliation(s)
- Nikos Chalas
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
- Institute for Translational Neuroscience, University of Münster, Münster, Germany
- Lars Meyer
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Chia-Wen Lo
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Hyojin Park
- Centre for Human Brain Health (CHBH), School of Psychology, University of Birmingham, Birmingham, UK
- Daniel S Kluger
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
- Omid Abbasi
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Christoph Kayser
- Department for Cognitive Neuroscience, Faculty of Biology, Bielefeld University, 33615 Bielefeld, Germany
- Robert Nitsch
- Institute for Translational Neuroscience, University of Münster, Münster, Germany
- Joachim Gross
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
7
Cometa A, Battaglini C, Artoni F, Greco M, Frank R, Repetto C, Bottoni F, Cappa SF, Micera S, Ricciardi E, Moro A. Brain and grammar: revealing electrophysiological basic structures with competing statistical models. Cereb Cortex 2024; 34:bhae317. [PMID: 39098819] [DOI: 10.1093/cercor/bhae317]
Abstract
Acoustic, lexical, and syntactic information are simultaneously processed in the brain, requiring complex strategies to distinguish their electrophysiological activity. Capitalizing on previous work that factors out acoustic information, we concentrated on the lexical and syntactic contributions to language processing by testing competing statistical models. We exploited electroencephalographic recordings and compared different surprisal models selectively involving lexical information, part of speech, or syntactic structures in various combinations. Electroencephalographic responses were recorded in 32 participants while they listened to affirmative active declarative sentences. We compared the activation corresponding to basic syntactic structures, such as noun phrases vs. verb phrases. Lexical and syntactic processing activate different frequency bands, partially different time windows, and different networks. Moreover, surprisal models based on the part-of-speech inventory alone do not explain the electrophysiological data well, whereas those including syntactic information do. By disentangling acoustic, lexical, and syntactic information, we demonstrated differential brain sensitivity to syntactic information. These results confirm and extend previous measures obtained with intracranial recordings, supporting our hypothesis that syntactic structures are crucial in neural language processing. This study provides a detailed understanding of how the brain processes syntactic information, highlighting the importance of syntactic surprisal in shaping neural responses during language comprehension.
Affiliation(s)
- Andrea Cometa
- MoMiLab, IMT School for Advanced Studies Lucca, Piazza S.Francesco, 19, Lucca 55100, Italy
- The BioRobotics Institute and Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Viale Rinaldo Piaggio 34, Pontedera 56025, Italy
- Cognitive Neuroscience (ICoN) Center, University School for Advanced Studies IUSS, Piazza Vittoria 15, Pavia 27100, Italy
- Chiara Battaglini
- Neurolinguistics and Experimental Pragmatics (NEP) Lab, University School for Advanced Studies IUSS Pavia, Piazza della Vittoria 15, Pavia 27100, Italy
- Fiorenzo Artoni
- Department of Clinical Neurosciences, Faculty of Medicine, University of Geneva, 1, rue Michel-Servet, Genève 1211, Switzerland
- Matteo Greco
- Cognitive Neuroscience (ICoN) Center, University School for Advanced Studies IUSS, Piazza Vittoria 15, Pavia 27100, Italy
- Robert Frank
- Department of Linguistics, Yale University, 370 Temple St, New Haven, CT 06511, United States
- Claudia Repetto
- Department of Psychology, Università Cattolica del Sacro Cuore, Largo A. Gemelli 1, Milan 20123, Italy
- Franco Bottoni
- Istituto Clinico Humanitas, IRCCS, Via Alessandro Manzoni 56, Rozzano 20089, Italy
- Stefano F Cappa
- Cognitive Neuroscience (ICoN) Center, University School for Advanced Studies IUSS, Piazza Vittoria 15, Pavia 27100, Italy
- Dementia Research Center, IRCCS Mondino Foundation National Institute of Neurology, Via Mondino 2, Pavia 27100, Italy
- Silvestro Micera
- The BioRobotics Institute and Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Viale Rinaldo Piaggio 34, Pontedera 56025, Italy
- Bertarelli Foundation Chair in Translational NeuroEngineering, Center for Neuroprosthetics and School of Engineering, Ecole Polytechnique Federale de Lausanne, Campus Biotech, Chemin des Mines 9, Geneva, GE CH 1202, Switzerland
- Emiliano Ricciardi
- MoMiLab, IMT School for Advanced Studies Lucca, Piazza S. Francesco, 19, Lucca 55100, Italy
- Andrea Moro
- Cognitive Neuroscience (ICoN) Center, University School for Advanced Studies IUSS, Piazza Vittoria 15, Pavia 27100, Italy
8
Ten Oever S, Titone L, te Rietmolen N, Martin AE. Phase-dependent word perception emerges from region-specific sensitivity to the statistics of language. Proc Natl Acad Sci U S A 2024; 121:e2320489121. [PMID: 38805278] [PMCID: PMC11161766] [DOI: 10.1073/pnas.2320489121]
Abstract
Neural oscillations reflect fluctuations in excitability, which biases the percept of ambiguous sensory input. Why this bias occurs is still not fully understood. We hypothesized that neural populations representing likely events are more sensitive, and thereby become active on earlier oscillatory phases, when the ensemble itself is less excitable. Perception of ambiguous input presented during less-excitable phases should therefore be biased toward frequent or predictable stimuli that have lower activation thresholds. Here, we show such a frequency bias in spoken word recognition using psychophysics, magnetoencephalography (MEG), and computational modelling. With MEG, we found a double dissociation, where the phase of oscillations in the superior temporal gyrus and middle temporal gyrus biased word-identification behavior based on phoneme and lexical frequencies, respectively. This finding was reproduced in a computational model. These results demonstrate that oscillations provide a temporal ordering of neural activity based on the sensitivity of separable neural populations.
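The proposed threshold mechanism reduces to a toy computation (the thresholds and oscillation frequency are illustrative, not the authors' model): a population with a lower activation threshold, coding a frequent item, crosses threshold at an earlier phase of the ongoing excitability cycle.

```python
import numpy as np

# Toy version of the hypothesis: populations coding frequent items have lower
# activation thresholds and therefore reach threshold at earlier phases of an
# excitability oscillation. All numbers are invented for illustration.
fs = 10_000
f_osc = 10.0                                  # excitability oscillation (Hz)
t = np.arange(0, 1 / f_osc / 4, 1 / fs)       # rising quarter-cycle
excitability = np.sin(2 * np.pi * f_osc * t)

def crossing_phase(threshold):
    """Oscillatory phase (radians) at which excitability first exceeds threshold."""
    idx = int(np.argmax(excitability > threshold))
    return 2 * np.pi * f_osc * t[idx]

phase_frequent = crossing_phase(0.2)          # low threshold: frequent word
phase_rare = crossing_phase(0.8)              # high threshold: rare word
# Frequent items become active earlier in the cycle: phase_frequent < phase_rare.
```

On a rising sinusoid, the crossing phase is simply arcsin(threshold), so the frequency-to-phase mapping is monotonic, which is what lets phase carry information about stimulus statistics.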
Affiliation(s)
- Sanne Ten Oever
- Language and Computation in Neural Systems group, Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands
- Language and Computation in Neural Systems group, Donders Centre for Cognitive Neuroimaging, Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 EN Nijmegen, The Netherlands
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands
- Lorenzo Titone
- Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, D-04303 Leipzig, Germany
- Noémie te Rietmolen
- Language and Computation in Neural Systems group, Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands
- Language and Computation in Neural Systems group, Donders Centre for Cognitive Neuroimaging, Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 EN Nijmegen, The Netherlands
- Andrea E. Martin
- Language and Computation in Neural Systems group, Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands
- Language and Computation in Neural Systems group, Donders Centre for Cognitive Neuroimaging, Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 EN Nijmegen, The Netherlands
9
Ding R, Ten Oever S, Martin AE. Delta-band Activity Underlies Referential Meaning Representation during Pronoun Resolution. J Cogn Neurosci 2024; 36:1472-1492. [PMID: 38652108] [DOI: 10.1162/jocn_a_02163]
Abstract
Human language offers a variety of ways to create meaning, one of which is referring to entities, objects, or events in the world. One such meaning maker is understanding to whom or to what a pronoun in a discourse refers. To understand a pronoun, the brain must access matching entities or concepts that have been encoded in memory from previous linguistic context. Models of language processing propose that internally stored linguistic concepts, accessed via exogenous cues such as phonological input of a word, are represented as (a)synchronous activities across a population of neurons active at specific frequency bands. Converging evidence suggests that delta band activity (1-3 Hz) is involved in temporal and representational integration during sentence processing. Moreover, recent advances in the neurobiology of memory suggest that recollection engages neural dynamics similar to those that occurred during memory encoding. Integrating these two research lines, we here tested the hypothesis that the neural dynamic patterns, especially in the delta frequency range, underlying referential meaning representation would be reinstated during pronoun resolution. By leveraging neural decoding techniques (i.e., representational similarity analysis) on a magnetoencephalography dataset acquired during a naturalistic story-listening task, we provide evidence that delta-band activity underlies referential meaning representation. Our findings suggest that, during spoken language comprehension, endogenous linguistic representations such as referential concepts may be proactively retrieved and represented via activation of their underlying dynamic neural patterns.
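Representational similarity analysis, the decoding technique used here, compares pattern geometries via their representational dissimilarity matrices (RDMs). A minimal simulated sketch of the reinstatement logic (all patterns, category structure, and noise levels are invented): if concepts are re-represented at retrieval, the pairwise structure of retrieval patterns should match the structure laid down at encoding.

```python
import numpy as np

# Minimal RSA sketch: compare the RDM of simulated "encoding" patterns with
# RDMs of noisy reinstated patterns vs. unrelated patterns. All data invented.
rng = np.random.default_rng(4)
n_items, n_sensors = 8, 100

# Two hypothetical concept categories give the encoding patterns structure.
prototypes = rng.standard_normal((2, n_sensors))
labels = np.repeat([0, 1], n_items // 2)
encoding = prototypes[labels] + 0.5 * rng.standard_normal((n_items, n_sensors))

retrieval = encoding + 0.4 * rng.standard_normal((n_items, n_sensors))  # noisy reinstatement
unrelated = rng.standard_normal((n_items, n_sensors))                   # no reinstatement

def rdm(patterns):
    """Condensed representational dissimilarity matrix (1 - Pearson r per pair)."""
    c = np.corrcoef(patterns)
    iu = np.triu_indices(len(patterns), k=1)
    return 1 - c[iu]

def spearman(a, b):
    """Spearman correlation via rank transform (no ties in continuous data)."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

sim_reinstated = spearman(rdm(encoding), rdm(retrieval))  # high: geometry preserved
sim_unrelated = spearman(rdm(encoding), rdm(unrelated))   # near zero
```

Comparing RDMs rather than raw patterns is what makes RSA robust to the fact that encoding and retrieval need not activate identical sensors, only the same relational structure.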
Affiliation(s)
- Rong Ding
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Sanne Ten Oever
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Radboud University Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Andrea E Martin
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Radboud University Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands
10
Cai J, Hadjinicolaou AE, Paulk AC, Soper DJ, Xia T, Williams ZM, Cash SS. Natural language processing models reveal neural dynamics of human conversation. bioRxiv [Preprint] 2024:2023.03.10.531095. [PMID: 36945468] [PMCID: PMC10028965] [DOI: 10.1101/2023.03.10.531095]
Abstract
Through conversation, humans relay complex information through the alternation of speech production and comprehension. The neural mechanisms that underlie these complementary processes, or through which information is precisely conveyed by language, remain poorly understood. Here, we used pretrained deep learning natural language processing models in combination with intracranial neuronal recordings to discover neural signals that reliably reflect speech production, comprehension, and their transitions during natural conversation between individuals. Our findings indicate that neural activities that encoded linguistic information were broadly distributed throughout frontotemporal areas across multiple frequency bands. We also found that these activities were specific to the words and sentences being conveyed and that they were dependent on the word's specific context and order. Finally, we demonstrate that these neural patterns partially overlapped during language production and comprehension and that listener-speaker transitions were associated with specific, time-aligned changes in neural activity. Collectively, our findings reveal a dynamical organization of neural activities that subserve language production and comprehension during natural conversation, and demonstrate the value of deep learning models for understanding the neural mechanisms underlying human language.
Affiliation(s)
- Jing Cai
- Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Alex E. Hadjinicolaou
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Angelique C. Paulk
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Daniel J. Soper
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Tian Xia
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Ziv M. Williams
- Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Harvard-MIT Division of Health Sciences and Technology, Boston, MA
- Harvard Medical School, Program in Neuroscience, Boston, MA
- These authors contributed equally
- Sydney S. Cash
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Harvard-MIT Division of Health Sciences and Technology, Boston, MA
- These authors contributed equally
11
Zioga I, Zhou YJ, Weissbart H, Martin AE, Haegens S. Alpha and Beta Oscillations Differentially Support Word Production in a Rule-Switching Task. eNeuro 2024; 11:ENEURO.0312-23.2024. [PMID: 38490743] [PMCID: PMC10988358] [DOI: 10.1523/eneuro.0312-23.2024]
Abstract
Research into the role of brain oscillations in basic perceptual and cognitive functions has suggested that the alpha rhythm reflects functional inhibition while the beta rhythm reflects neural ensemble (re)activation. However, little is known regarding the generalization of these proposed fundamental operations to linguistic processes, such as speech comprehension and production. Here, we recorded magnetoencephalography in participants performing a novel rule-switching paradigm. Specifically, Dutch native speakers had to produce an alternative exemplar from the same category or a feature of a given target word embedded in spoken sentences (e.g., for the word "tuna", an exemplar from the same category ("seafood") would be "shrimp", and a feature would be "pink"). A cue indicated the task rule (exemplar or feature) either before (pre-cue) or after (retro-cue) listening to the sentence. Alpha power during the working-memory delay was lower for retro-cues than for pre-cues in left-hemispheric language-related regions. Critically, alpha power negatively correlated with reaction times, suggestive of alpha facilitating task performance by regulating inhibition in regions linked to lexical retrieval. Furthermore, we observed a different spatiotemporal pattern of beta activity for exemplars versus features in right temporoparietal regions, in line with the proposed role of beta in recruiting neural networks for the encoding of distinct categories. Overall, our study provides evidence for the generalizability of the role of alpha and beta oscillations from perceptual to more complex linguistic processes and offers a novel task to investigate links between rule-switching, working memory, and word production.
Affiliation(s)
- Ioanna Zioga: Donders Centre for Cognitive Neuroimaging, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen 6525 EN, The Netherlands; Max Planck Institute for Psycholinguistics, Nijmegen 6525 XD, The Netherlands
- Ying Joey Zhou: Donders Centre for Cognitive Neuroimaging, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen 6525 EN, The Netherlands; Department of Psychiatry, Oxford Centre for Human Brain Activity, Oxford, United Kingdom
- Hugo Weissbart: Donders Centre for Cognitive Neuroimaging, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen 6525 EN, The Netherlands
- Andrea E Martin: Donders Centre for Cognitive Neuroimaging, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen 6525 EN, The Netherlands; Max Planck Institute for Psycholinguistics, Nijmegen 6525 XD, The Netherlands
- Saskia Haegens: Donders Centre for Cognitive Neuroimaging, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen 6525 EN, The Netherlands; Department of Psychiatry, Columbia University, New York, New York 10032; Division of Systems Neuroscience, New York State Psychiatric Institute, New York, New York 10032
12
Inbar M, Genzer S, Perry A, Grossman E, Landau AN. Intonation Units in Spontaneous Speech Evoke a Neural Response. J Neurosci 2023; 43:8189-8200. [PMID: 37793909 PMCID: PMC10697392 DOI: 10.1523/jneurosci.0235-23.2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2023] [Revised: 08/16/2023] [Accepted: 08/29/2023] [Indexed: 10/06/2023] Open
Abstract
Spontaneous speech is produced in chunks called intonation units (IUs). IUs are defined by a set of prosodic cues and presumably occur in all human languages. Recent work has shown that across different grammatical and sociocultural conditions IUs form rhythms of ∼1 unit per second. Linguistic theory suggests that IUs pace the flow of information in the discourse. As a result, IUs provide a promising and hitherto unexplored theoretical framework for studying the neural mechanisms of communication. In this article, we identify a neural response unique to the boundary defined by the IU. We measured the EEG of human participants (of either sex), who listened to different speakers recounting an emotional life event. We analyzed the speech stimuli linguistically and modeled the EEG response at word offset using a GLM approach. We find that the EEG response to IU-final words differs from the response to IU-nonfinal words even when equating acoustic boundary strength. Finally, we relate our findings to the body of research on rhythmic brain mechanisms in speech processing. We study the unique contribution of IUs and acoustic boundary strength in predicting delta-band EEG. This analysis suggests that IU-related neural activity, which is tightly linked to the classic Closure Positive Shift (CPS), could be a time-locked component that captures the previously characterized delta-band neural speech tracking.SIGNIFICANCE STATEMENT Linguistic communication is central to human experience, and its neural underpinnings are a topic of much research in recent years. Neuroscientific research has benefited from studying human behavior in naturalistic settings, an endeavor that requires explicit models of complex behavior. Usage-based linguistic theory suggests that spoken language is prosodically structured in intonation units. We reveal that the neural system is attuned to intonation units by explicitly modeling their impact on the EEG response beyond mere acoustics. 
To our knowledge, this is the first time such a response has been demonstrated in spontaneous speech under naturalistic conditions, within a theoretical framework that connects the prosodic chunking of speech, on the one hand, with the flow of information during communication, on the other.
Affiliation(s)
- Maya Inbar: Department of Linguistics, Hebrew University of Jerusalem, Mount Scopus, Jerusalem 9190501, Israel; Department of Psychology, Hebrew University of Jerusalem, Mount Scopus, Jerusalem 9190501, Israel; Department of Cognitive and Brain Sciences, Hebrew University of Jerusalem, Mount Scopus, Jerusalem 9190501, Israel
- Shir Genzer: Department of Psychology, Hebrew University of Jerusalem, Mount Scopus, Jerusalem 9190501, Israel
- Anat Perry: Department of Psychology, Hebrew University of Jerusalem, Mount Scopus, Jerusalem 9190501, Israel
- Eitan Grossman: Department of Linguistics, Hebrew University of Jerusalem, Mount Scopus, Jerusalem 9190501, Israel
- Ayelet N Landau: Department of Psychology, Hebrew University of Jerusalem, Mount Scopus, Jerusalem 9190501, Israel; Department of Cognitive and Brain Sciences, Hebrew University of Jerusalem, Mount Scopus, Jerusalem 9190501, Israel
13
Li J, Hong B, Nolte G, Engel AK, Zhang D. EEG-based speaker-listener neural coupling reflects speech-selective attentional mechanisms beyond the speech stimulus. Cereb Cortex 2023; 33:11080-11091. [PMID: 37814353 DOI: 10.1093/cercor/bhad347] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2023] [Revised: 09/01/2023] [Accepted: 09/04/2023] [Indexed: 10/11/2023] Open
Abstract
When we pay attention to someone, do we focus only on the sounds they make and the words they use, or do we form a mental space shared with the speaker we want to attend to? Some would argue that human language is nothing more than a simple signal, while others claim that human beings understand each other because speaker and listener form a shared mental ground. Our study aimed to explore the neural mechanisms of speech-selective attention by investigating electroencephalogram-based neural coupling between the speaker and the listener in a cocktail party paradigm. The temporal response function method was employed to reveal how the listener was coupled to the speaker at the neural level. The results showed that the neural coupling between the listener and the attended speaker peaked 5 s before speech onset in the delta band over the left frontal region, and was correlated with speech comprehension performance. In contrast, the attentional processing of speech acoustics and semantics occurred primarily at a later stage after speech onset and was not significantly correlated with comprehension performance. These findings suggest a predictive mechanism through which speaker-listener neural coupling supports successful speech comprehension.
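As background for readers unfamiliar with the temporal response function (TRF) method mentioned in this abstract, the following is a minimal sketch of the core idea on simulated data: a linear filter from a stimulus feature to a neural signal, estimated over a range of time lags with ridge regression. All signals, lag ranges, and the regularization value here are illustrative assumptions, not details from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 64                      # sampling rate (Hz)
n = fs * 120                 # two minutes of simulated data
stim = rng.normal(size=n)    # stand-in for a speech feature (e.g., the envelope)

# Ground-truth TRF: an assumed impulse response from the feature to the signal
lags = np.arange(0, int(0.4 * fs))            # lags spanning 0-400 ms
true_trf = np.exp(-lags / 8) * np.sin(lags / 3)

# Simulated "EEG": stimulus convolved with the TRF, plus noise
eeg = np.convolve(stim, true_trf)[:n] + rng.normal(scale=2.0, size=n)

# Build the lagged design matrix and solve the ridge-regularized normal equations
X = np.stack([np.roll(stim, k) for k in lags], axis=1)
X[:len(lags)] = 0            # discard samples wrapped around by np.roll
lam = 1e2                    # illustrative ridge penalty
trf_hat = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ eeg)

r = np.corrcoef(trf_hat, true_trf)[0, 1]
print(f"correlation between estimated and true TRF: {r:.2f}")
```

With enough data relative to the noise level, the estimated filter closely recovers the true one; real analyses typically cross-validate the ridge penalty and evaluate prediction accuracy on held-out data.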
Affiliation(s)
- Jiawei Li: Department of Psychology, School of Social Sciences, Tsinghua University, Beijing 100084, China; Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing 100084, China; Department of Education and Psychology, Freie Universität Berlin, Habelschwerdter Allee, Berlin 14195, Germany
- Bo Hong: Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing 100084, China; Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Guido Nolte: Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg 20246, Germany
- Andreas K Engel: Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg 20246, Germany
- Dan Zhang: Department of Psychology, School of Social Sciences, Tsinghua University, Beijing 100084, China; Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing 100084, China
14
Xu N, Qin X, Zhou Z, Shan W, Ren J, Yang C, Lu L, Wang Q. Age differentially modulates the cortical tracking of the lower and higher level linguistic structures during speech comprehension. Cereb Cortex 2023; 33:10463-10474. [PMID: 37566910 DOI: 10.1093/cercor/bhad296] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2022] [Revised: 07/23/2023] [Accepted: 07/24/2023] [Indexed: 08/13/2023] Open
Abstract
Speech comprehension requires listeners to rapidly parse continuous speech into hierarchically organized linguistic structures (i.e. syllable, word, phrase, and sentence) and entrain their neural activity to the rhythms of the different linguistic levels. Aging is accompanied by changes in speech processing, but it remains unclear how aging affects different levels of linguistic representation. Here, we recorded magnetoencephalography signals in older and younger groups while subjects actively and passively listened to continuous speech in which the hierarchical linguistic structures of word, phrase, and sentence were tagged at 4, 2, and 1 Hz, respectively. A newly developed parameterization algorithm was applied to separate the periodic linguistic tracking from the aperiodic component. We found enhanced lower-level (word-level) tracking, reduced higher-level (phrasal- and sentential-level) tracking, and a reduced aperiodic offset in older compared with younger adults. Furthermore, the attentional modulation of sentential-level tracking was larger for younger than for older adults. Notably, the neuro-behavioral analyses showed that subjects' behavioral accuracy was positively correlated with higher-level linguistic tracking and negatively correlated with lower-level linguistic tracking. Overall, these results suggest that enhanced lower-level linguistic tracking, reduced higher-level linguistic tracking, and less flexible attentional modulation may underpin the aging-related decline in speech comprehension.
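The frequency-tagging logic described in this abstract can be illustrated with a toy simulation: if responses to words, phrases, and sentences recur at 4, 2, and 1 Hz, each rate appears as a narrow peak in the spectrum of the recorded signal. The sketch below uses simulated sinusoids buried in noise and is purely illustrative; it is not the authors' analysis pipeline.

```python
import numpy as np

fs = 100.0                      # sampling rate (Hz)
dur = 60.0                      # one minute of simulated signal
t = np.arange(0, dur, 1 / fs)

# Simulated neural response "tracking" each tagged linguistic level:
# words at 4 Hz, phrases at 2 Hz, sentences at 1 Hz, plus noise.
signal = (1.0 * np.sin(2 * np.pi * 4 * t)
          + 0.6 * np.sin(2 * np.pi * 2 * t)
          + 0.4 * np.sin(2 * np.pi * 1 * t)
          + np.random.default_rng(0).normal(0, 1.0, t.size))

# A long recording gives fine frequency resolution (1/dur Hz), so each
# tagged rate shows up as a narrow peak standing out from the noise floor.
spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

for f_tag in (1, 2, 4):
    idx = int(np.argmin(np.abs(freqs - f_tag)))
    noise_floor = spectrum[idx + 2: idx + 10].mean()
    print(f"{f_tag} Hz peak: {spectrum[idx]:.3f} vs. neighboring bins: {noise_floor:.3f}")
```

In the real paradigm the peaks arise from the periodic presentation of linguistic units rather than injected sinusoids, and the cited parameterization step further separates such periodic peaks from the aperiodic (1/f-like) component.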
Affiliation(s)
- Na Xu: Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China; National Clinical Research Center for Neurological Diseases, Beijing 100070, China
- Xiaoxiao Qin: Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China; National Clinical Research Center for Neurological Diseases, Beijing 100070, China
- Ziqi Zhou: Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China; National Clinical Research Center for Neurological Diseases, Beijing 100070, China
- Wei Shan: Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China; National Clinical Research Center for Neurological Diseases, Beijing 100070, China
- Jiechuan Ren: Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China; National Clinical Research Center for Neurological Diseases, Beijing 100070, China
- Chunqing Yang: Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China; National Clinical Research Center for Neurological Diseases, Beijing 100070, China
- Lingxi Lu: Center for the Cognitive Science of Language, Beijing Language and Culture University, Beijing 100083, China
- Qun Wang: Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China; National Clinical Research Center for Neurological Diseases, Beijing 100070, China; Beijing Institute of Brain Disorders, Collaborative Innovation Center for Brain Disorders, Capital Medical University, Beijing 100069, China
15
Tezcan F, Weissbart H, Martin AE. A tradeoff between acoustic and linguistic feature encoding in spoken language comprehension. eLife 2023; 12:e82386. [PMID: 37417736 PMCID: PMC10328533 DOI: 10.7554/elife.82386] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2022] [Accepted: 06/18/2023] [Indexed: 07/08/2023] Open
Abstract
When we comprehend language from speech, the phase of the neural response aligns with particular features of the speech input, resulting in a phenomenon referred to as neural tracking. In recent years, a large body of work has demonstrated the tracking of the acoustic envelope and of abstract linguistic units at the phoneme and word levels, and beyond. However, the degree to which speech tracking is driven by acoustic edges of the signal, by internally generated linguistic units, or by the interplay of both remains contentious. In this study, we used naturalistic story-listening to investigate (1) whether phoneme-level features are tracked over and above acoustic edges, (2) whether word entropy, which can reflect sentence- and discourse-level constraints, impacts the encoding of acoustic and phoneme-level features, and (3) whether the tracking of acoustic edges is enhanced or suppressed during comprehension of a first language (Dutch) compared with a statistically familiar but uncomprehended language (French). We first show that encoding models with phoneme-level linguistic features, in addition to acoustic features, uncovered an increased neural tracking response; this signal was further amplified in the comprehended language, putatively reflecting the transformation of acoustic features into internally generated phoneme-level representations. Phonemes were tracked more strongly in the comprehended language, suggesting that language comprehension functions as a neural filter over acoustic edges of the speech signal as it transforms sensory signals into abstract linguistic units. We then show that word entropy enhances neural tracking of both acoustic and phonemic features when sentence- and discourse-level context are less constraining. When language was not comprehended, acoustic features, but not phonemic ones, were more strongly modulated; in contrast, when the native language was comprehended, phoneme features were more strongly modulated. Taken together, our findings highlight the flexible modulation of acoustic and phonemic features by sentence- and discourse-level constraints during language comprehension, and document the neural transformation from speech perception to language comprehension, consistent with an account of language processing as a neural filter from sensory to abstract representations.
Affiliation(s)
- Filiz Tezcan: Language and Computation in Neural Systems Group, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Hugo Weissbart: Donders Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, The Netherlands
- Andrea E Martin: Language and Computation in Neural Systems Group, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Donders Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, The Netherlands
16
Slaats S, Weissbart H, Schoffelen JM, Meyer AS, Martin AE. Delta-Band Neural Responses to Individual Words Are Modulated by Sentence Processing. J Neurosci 2023; 43:4867-4883. [PMID: 37221093 PMCID: PMC10312058 DOI: 10.1523/jneurosci.0964-22.2023] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2022] [Revised: 04/17/2023] [Accepted: 04/27/2023] [Indexed: 05/25/2023] Open
Abstract
To understand language, we need to recognize words and combine them into phrases and sentences. During this process, responses to the words themselves are changed. In a step toward understanding how the brain builds sentence structure, the present study concerns the neural readout of this adaptation. We ask whether low-frequency neural readouts associated with words change as a function of being in a sentence. To this end, we analyzed an MEG dataset by Schoffelen et al. (2019) of 102 human participants (51 women) listening to sentences and word lists, the latter lacking any syntactic structure and combinatorial meaning. Using temporal response functions and a cumulative model-fitting approach, we disentangled delta- and theta-band responses to lexical information (word frequency) from responses to sensory and distributional variables. The results suggest that delta-band responses to words are affected by sentence context in time and space, over and above entropy and surprisal. In both conditions, the word frequency response spanned left temporal and posterior frontal areas; however, the response appeared later in word lists than in sentences. In addition, sentence context determined whether inferior frontal areas were responsive to lexical information. In the theta band, the amplitude was larger in the word list condition at ∼100 milliseconds in right frontal areas. We conclude that low-frequency responses to words are changed by sentential context. The results of this study show how the neural representation of words is affected by structural context and as such provide insight into how the brain instantiates compositionality in language.SIGNIFICANCE STATEMENT Human language is unprecedented in its combinatorial capacity: we are capable of producing and understanding sentences we have never heard before.
Although the mechanisms underlying this capacity have been described in formal linguistics and cognitive science, how they are implemented in the brain remains to a large extent unknown. A large body of earlier work from the cognitive neuroscientific literature implies a role for delta-band neural activity in the representation of linguistic structure and meaning. In this work, we combine these insights and techniques with findings from psycholinguistics to show that meaning is more than the sum of its parts; the delta-band MEG signal differentially reflects lexical information inside and outside sentence structures.
Affiliation(s)
- Sophie Slaats: Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands; The International Max Planck Research School for Language Sciences, 6525 XD Nijmegen, The Netherlands
- Hugo Weissbart: Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 EN Nijmegen, The Netherlands
- Jan-Mathijs Schoffelen: Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 EN Nijmegen, The Netherlands
- Antje S Meyer: Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 EN Nijmegen, The Netherlands
- Andrea E Martin: Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 EN Nijmegen, The Netherlands
17
Zioga I, Weissbart H, Lewis AG, Haegens S, Martin AE. Naturalistic Spoken Language Comprehension Is Supported by Alpha and Beta Oscillations. J Neurosci 2023; 43:3718-3732. [PMID: 37059462 PMCID: PMC10198453 DOI: 10.1523/jneurosci.1500-22.2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2022] [Revised: 03/17/2023] [Accepted: 03/23/2023] [Indexed: 04/16/2023] Open
Abstract
Brain oscillations are prevalent in all species and are involved in numerous perceptual operations. α oscillations are thought to facilitate processing through the inhibition of task-irrelevant networks, while β oscillations are linked to the putative reactivation of content representations. Can the proposed functional roles of α and β oscillations be generalized from low-level operations to higher-level cognitive processes? Here we address this question focusing on naturalistic spoken language comprehension. Twenty-two (18 female) Dutch native speakers listened to stories in Dutch and French while MEG was recorded. We used dependency parsing to identify three dependency states at each word: the number of (1) newly opened dependencies, (2) dependencies that remained open, and (3) resolved dependencies. We then constructed forward models to predict α and β power from these dependency features. Results showed that dependency features predict α and β power in language-related regions beyond low-level linguistic features. In the α band, left temporal regions fundamental to language are involved in comprehension, while in the β band frontal and parietal higher-order language regions and motor regions are involved. Critically, α- and β-band dynamics seem to subserve language comprehension, tapping into syntactic structure building and semantic composition by providing low-level mechanistic operations for inhibition and reactivation processes. Because of the temporal similarity of the α-β responses, their potential functional dissociation remains to be elucidated.
Overall, this study sheds light on the role of α and β oscillations during naturalistic spoken language comprehension, providing evidence for the generalizability of these dynamics from perceptual to complex linguistic processes.SIGNIFICANCE STATEMENT It remains unclear whether the proposed functional role of α and β oscillations in perceptual and motor function is generalizable to higher-level cognitive processes, such as spoken language comprehension. We found that syntactic features predict α and β power in language-related regions beyond low-level linguistic features when listening to naturalistic speech in a known language. We offer experimental findings that integrate a neuroscientific framework on the role of brain oscillations as "building blocks" with spoken language comprehension. This supports the view of a domain-general role of oscillations across the hierarchy of cognitive functions, from low-level sensory operations to abstract linguistic processes.
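The dependency-state features this abstract describes (newly opened, still-open, and resolved dependencies per word) can be made concrete with a toy example. The sentence and head indices below are invented for illustration; real analyses would use a dependency parser, and whether an arc opened at the current word also counts as "open" there is a modeling choice.

```python
# Toy word-by-word dependency-state counts for a hand-annotated sentence.
# heads[i] is the index of word i's syntactic head; the root has no head.
words = ["the", "cat", "chased", "a", "mouse"]
heads = [1, 2, None, 4, 2]     # "chased" is the root

# Each non-root word contributes one undirected arc (dependent, head).
arcs = [(i, h) for i, h in enumerate(heads) if h is not None]

for t, w in enumerate(words):
    opened   = sum(1 for i, h in arcs if min(i, h) == t)          # arcs starting here
    open_now = sum(1 for i, h in arcs if min(i, h) < t < max(i, h))  # arcs spanning this word
    resolved = sum(1 for i, h in arcs if max(i, h) == t)          # arcs closing here
    print(f"{w:7s} opened={opened} open={open_now} resolved={resolved}")
```

For instance, at "a" one arc opens (a→mouse) while the chased↔mouse arc is still open; at "mouse" two arcs resolve at once. Features of this kind, computed per word, are what the forward models above use as predictors of oscillatory power.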
Affiliation(s)
- Ioanna Zioga: Donders Institute for Brain, Cognition and Behaviour, Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, 6525 EN, The Netherlands; Max Planck Institute for Psycholinguistics, Nijmegen, 6525 XD, The Netherlands
- Hugo Weissbart: Donders Institute for Brain, Cognition and Behaviour, Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, 6525 EN, The Netherlands
- Ashley G Lewis: Donders Institute for Brain, Cognition and Behaviour, Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, 6525 EN, The Netherlands; Max Planck Institute for Psycholinguistics, Nijmegen, 6525 XD, The Netherlands
- Saskia Haegens: Donders Institute for Brain, Cognition and Behaviour, Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, 6525 EN, The Netherlands; Department of Psychiatry, Columbia University, New York, New York 10032; Division of Systems Neuroscience, New York State Psychiatric Institute, New York, New York 10032
- Andrea E Martin: Donders Institute for Brain, Cognition and Behaviour, Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, 6525 EN, The Netherlands; Max Planck Institute for Psycholinguistics, Nijmegen, 6525 XD, The Netherlands