1. Weissbart H, Martin AE. The structure and statistics of language jointly shape cross-frequency neural dynamics during spoken language comprehension. Nat Commun 2024; 15:8850. PMID: 39397036; PMCID: PMC11471778; DOI: 10.1038/s41467-024-53128-1.
Abstract
Humans excel at extracting structurally-determined meaning from speech despite inherent physical variability. This study explores the brain's ability to predict and understand spoken language robustly. It investigates the relationship between structural and statistical language knowledge in brain dynamics, focusing on phase and amplitude modulation. Using syntactic features from constituent hierarchies and surface statistics from a transformer model as predictors of forward encoding models, we reconstructed cross-frequency neural dynamics from MEG data during audiobook listening. Our findings challenge a strict separation of linguistic structure and statistics in the brain, with both aiding neural signal reconstruction. Syntactic features have a more temporally spread impact, and both word entropy and the number of closing syntactic constituents are linked to the phase-amplitude coupling of neural dynamics, implying a role in temporal prediction and cortical oscillation alignment during speech processing. Our results indicate that structured and statistical information jointly shape neural dynamics during spoken language comprehension and suggest an integration process via a cross-frequency coupling mechanism.
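The phase-amplitude coupling at the center of these findings can be illustrated with a minimal mean-vector-length estimate, a simplified cousin of common PAC measures. Everything below (the function name, the toy phase and amplitude series) is illustrative and is not the paper's MEG pipeline:

```python
import cmath
import math

def coupling_strength(phase, amplitude):
    """Mean-vector-length estimate of phase-amplitude coupling: the magnitude
    of the amplitude-weighted mean phase vector. Near 0 when high-frequency
    amplitude is independent of low-frequency phase; larger when amplitude
    is concentrated at one preferred phase."""
    n = len(phase)
    vec = sum(a * cmath.exp(1j * p) for p, a in zip(phase, amplitude)) / n
    return abs(vec)

# Toy check: amplitude locked to the low-frequency phase vs. flat amplitude.
phases = [2 * math.pi * k / 100 for k in range(100)]
locked = [1.0 + math.cos(p) for p in phases]  # amplitude peaks at phase 0
flat = [1.0] * 100                            # amplitude ignores phase
```

With the locked series the estimate is clearly above zero; with the flat series it vanishes, which is the qualitative contrast the coupling analyses rely on.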
Affiliation(s)
- Hugo Weissbart
- Donders Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, The Netherlands.
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands.
- Andrea E Martin
- Donders Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
2. Slaats S, Meyer AS, Martin AE. Lexical Surprisal Shapes the Time Course of Syntactic Structure Building. Neurobiology of Language 2024; 5:942-980. PMID: 39534445; PMCID: PMC11556436; DOI: 10.1162/nol_a_00155.
Abstract
When we understand language, we recognize words and combine them into sentences. In this article, we explore the hypothesis that listeners use probabilistic information about words to build syntactic structure. Recent work has shown that lexical probability and syntactic structure both modulate the delta-band (<4 Hz) neural signal. Here, we investigated whether the neural encoding of syntactic structure changes as a function of the distributional properties of a word. To this end, we analyzed MEG data from 24 native speakers of Dutch who listened to three fairytales with a total duration of 49 min. Using temporal response functions and a cumulative model-comparison approach, we evaluated the contributions of syntactic and distributional features to the variance in the delta-band neural signal. This revealed that lexical surprisal values (a distributional feature), as well as bottom-up node counts (a syntactic feature), positively contributed to the model of the delta-band neural signal. Subsequently, we compared responses to the syntactic feature between words with high and low surprisal values. This revealed that the response to the syntactic feature was delayed as a consequence of the surprisal value of the word: high surprisal was associated with a 150-190 ms delay in the response to the syntactic feature. The delay was not affected by word duration and did not have a lexical origin. These findings suggest that the brain uses probabilistic information to infer syntactic structure, and highlight an important role for time in this process.
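Lexical surprisal, the distributional feature behind the delay effect, is simply the negative log probability of a word given its context. A minimal sketch (the probabilities below are made up for illustration; the study derived them from a language model):

```python
import math

def surprisal_bits(prob):
    """Lexical surprisal in bits: -log2 p(word | context).
    Unexpected (low-probability) words carry high surprisal."""
    return -math.log2(prob)

# Illustrative values: a highly predictable word vs. an unexpected one.
predictable = surprisal_bits(0.5)      # a word the context strongly favors
unexpected = surprisal_bits(0.03125)   # a word the context makes unlikely
```

A halving of probability adds one bit of surprisal, which is why rare continuations dominate surprisal-based predictors.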
Affiliation(s)
- Sophie Slaats
- Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Department of Basic Neurosciences, University of Geneva, Geneva, Switzerland
- Antje S. Meyer
- Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Andrea E. Martin
- Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
3. Zhao J, Martin AE, Coopmans CW. Structural and sequential regularities modulate phrase-rate neural tracking. Sci Rep 2024; 14:16603. PMID: 39025957; PMCID: PMC11258220; DOI: 10.1038/s41598-024-67153-z.
Abstract
Electrophysiological brain activity has been shown to synchronize with the quasi-regular repetition of grammatical phrases in connected speech, so-called phrase-rate neural tracking. Current debate centers on whether this phenomenon is best explained in terms of the syntactic properties of phrases or in terms of syntax-external information, such as the sequential repetition of parts of speech. As these two factors were confounded in previous studies, much of the literature is compatible with both accounts. Here, we used electroencephalography (EEG) to determine if and when the brain is sensitive to both types of information. Twenty native speakers of Mandarin Chinese listened to isochronously presented streams of monosyllabic words, which contained either grammatical two-word phrases (e.g., catch fish, sell house) or non-grammatical word combinations (e.g., full lend, bread far). Within the grammatical conditions, we varied two structural factors: the position of the head of each phrase and the type of attachment. Within the non-grammatical conditions, we varied the consistency with which parts of speech were repeated. Tracking was quantified through evoked power and inter-trial phase coherence, both derived from the frequency-domain representation of EEG responses. As expected, neural tracking at the phrase rate was stronger in grammatical sequences than in non-grammatical sequences without syntactic structure. Moreover, it was modulated by both attachment type and head position, revealing the structure-sensitivity of phrase-rate tracking. We additionally found that the brain tracks the repetition of parts of speech in non-grammatical sequences. These data provide an integrative perspective on the current debate about neural tracking effects, revealing that the brain utilizes regularities computed over multiple levels of linguistic representation in guiding rhythmic computation.
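Inter-trial phase coherence, one of the two tracking measures used here, is the length of the mean unit phase vector across trials at a given frequency. A toy sketch under that definition (the phase values are invented; this is not the authors' EEG pipeline):

```python
import cmath
import math

def itpc(phases):
    """Inter-trial phase coherence: |mean of unit phase vectors| across
    trials. 1 = identical phase on every trial; near 0 = random phase."""
    return abs(sum(cmath.exp(1j * p) for p in phases) / len(phases))

consistent = [0.10, -0.05, 0.00, 0.08]               # phases cluster near 0
scattered = [2 * math.pi * k / 8 for k in range(8)]  # phases spread evenly
```

Consistent phase across trials drives the measure toward 1, while evenly scattered phases cancel out, which is why phase-locked tracking shows up as an ITPC peak at the phrase rate.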
Affiliation(s)
- Junyuan Zhao
- Department of Linguistics, University of Michigan, Ann Arbor, MI, USA
- Andrea E Martin
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Cas W Coopmans
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
4. Woolnough O, Donos C, Murphy E, Rollo PS, Roccaforte ZJ, Dehaene S, Tandon N. Spatiotemporally distributed frontotemporal networks for sentence reading. Proc Natl Acad Sci U S A 2023; 120:e2300252120. PMID: 37068244; PMCID: PMC10151604; DOI: 10.1073/pnas.2300252120.
Abstract
Reading a sentence entails integrating the meanings of individual words to infer more complex, higher-order meaning. This highly rapid and complex human behavior is known to engage the inferior frontal gyrus (IFG) and middle temporal gyrus (MTG) in the language-dominant hemisphere, yet whether there are distinct contributions of these regions to sentence reading is still unclear. To probe these neural spatiotemporal dynamics, we used direct intracranial recordings to measure neural activity while reading sentences, meaning-deficient Jabberwocky sentences, and lists of words or pseudowords. We isolated two functionally and spatiotemporally distinct frontotemporal networks, each sensitive to distinct aspects of word and sentence composition. The first distributed network engages the IFG and MTG, with IFG activity preceding MTG. Activity in this network ramps up over the duration of a sentence and is reduced or absent during Jabberwocky and word lists, implying its role in the derivation of sentence-level meaning. The second network engages the superior temporal gyrus and the IFG, with temporal responses leading those in frontal lobe, and shows greater activation for each word in a list than those in sentences, suggesting that sentential context enables greater efficiency in the lexical and/or phonological processing of individual words. These adjacent, yet spatiotemporally dissociable neural mechanisms for word- and sentence-level processes shed light on the richly layered semantic networks that enable us to fluently read. These results imply distributed, dynamic computation across the frontotemporal language network rather than a clear dichotomy between the contributions of frontal and temporal structures.
Affiliation(s)
- Oscar Woolnough
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School at UT Health Houston, Houston, TX 77030
- Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030
- Cristian Donos
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School at UT Health Houston, Houston, TX 77030
- Faculty of Physics, University of Bucharest, 050663 Bucharest, Romania
- Elliot Murphy
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School at UT Health Houston, Houston, TX 77030
- Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030
- Patrick S. Rollo
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School at UT Health Houston, Houston, TX 77030
- Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030
- Zachary J. Roccaforte
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School at UT Health Houston, Houston, TX 77030
- Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030
- Stanislas Dehaene
- Cognitive Neuroimaging Unit, Université Paris-Saclay, INSERM, CEA, NeuroSpin Center, 91191 Gif-sur-Yvette, France
- Collège de France, 75005 Paris, France
- Nitin Tandon
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School at UT Health Houston, Houston, TX 77030
- Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030
- Memorial Hermann Hospital, Texas Medical Center, Houston, TX 77030
5. Lu Y, Jin P, Ding N, Tian X. Delta-band neural tracking primarily reflects rule-based chunking instead of semantic relatedness between words. Cereb Cortex 2023; 33:4448-4458. PMID: 36124831; PMCID: PMC10110438; DOI: 10.1093/cercor/bhac354.
Abstract
It is debated whether cortical responses matching the time scales of phrases and sentences mediate the mental construction of syntactic chunks or are simply caused by the semantic properties of words. Here, we investigate to what extent delta-band neural responses to speech can be explained by semantic relatedness between words. To dissociate the contribution of semantic relatedness from sentential structures, participants listened to sentence sequences and paired-word sequences in which semantically related words repeated at 1 Hz. Semantic relatedness in the two types of sequences was quantified using a word2vec model that captured the semantic relation between words without considering sentential structure. The word2vec model predicted comparable 1-Hz responses for paired-word sequences and sentence sequences. However, empirical neural activity, recorded using magnetoencephalography, showed a weaker 1-Hz response to paired-word sequences than to sentence sequences in a word-level task that did not require sentential processing. Furthermore, when listeners applied a task-related rule to parse paired-word sequences into multi-word chunks, the 1-Hz response was stronger than in the word-level task on the same sequences. Our results suggest that cortical activity tracks multi-word chunks constructed by either syntactic rules or task-related rules, whereas the semantic relatedness between words contributes only in a minor way.
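The word2vec-based relatedness measure reduces to cosine similarity between word vectors. A minimal sketch with made-up 3-dimensional vectors (real word2vec embeddings have hundreds of dimensions, and the word labels here are purely illustrative):

```python
def cosine(u, v):
    """Cosine similarity between two word vectors: the word2vec-style
    measure of semantic relatedness between a pair of words."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sum(a * a for a in u) ** 0.5
    norm_v = sum(b * b for b in v) ** 0.5
    return dot / (norm_u * norm_v)

# Toy vectors: two "related" words point in similar directions,
# an unrelated word points elsewhere.
cat = [0.9, 0.1, 0.0]
dog = [0.8, 0.2, 0.1]
bond = [0.0, 0.1, 0.9]
```

Related word pairs score near 1, unrelated pairs near 0, which is the quantity the study contrasted against the observed 1-Hz neural response.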
Affiliation(s)
- Yuhan Lu
- Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China
- NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai 200062, China
- Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou 310027, China
- Peiqing Jin
- Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou 310027, China
- Nai Ding
- Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou 310027, China
- Research Center for Applied Mathematics and Machine Intelligence, Research Institute of Basic Theories, Zhejiang Lab, Hangzhou 311121, China
- Xing Tian
- Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China
- NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai 200062, China
- Division of Arts and Sciences, New York University Shanghai
6. Dedhe AM, Clatterbuck H, Piantadosi ST, Cantlon JF. Origins of Hierarchical Logical Reasoning. Cogn Sci 2023; 47:e13250. PMID: 36739520; PMCID: PMC11057913; DOI: 10.1111/cogs.13250.
Abstract
Hierarchical cognitive mechanisms underlie sophisticated behaviors, including language, music, mathematics, tool-use, and theory of mind. The origins of hierarchical logical reasoning have long been, and continue to be, an important puzzle for cognitive science. Prior approaches to hierarchical logical reasoning have often failed to distinguish between observable hierarchical behavior and unobservable hierarchical cognitive mechanisms. Furthermore, past research has been largely methodologically restricted to passive recognition tasks as compared to active generation tasks that are stronger tests of hierarchical rules. We argue that it is necessary to implement learning studies in humans, non-human species, and machines that are analyzed with formal models comparing the contribution of different cognitive mechanisms implicated in the generation of hierarchical behavior. These studies are critical to advance theories in the domains of recursion, rule-learning, symbolic reasoning, and the potentially uniquely human cognitive origins of hierarchical logical reasoning.
Affiliation(s)
- Abhishek M. Dedhe
- Department of Psychology, Carnegie Mellon University
- Center for the Neural Basis of Cognition, Carnegie Mellon University
- Jessica F. Cantlon
- Department of Psychology, Carnegie Mellon University
- Center for the Neural Basis of Cognition, Carnegie Mellon University
7. Becker R, Hervais-Adelman A. Individual theta-band cortical entrainment to speech in quiet predicts word-in-noise comprehension. Cereb Cortex Commun 2023; 4:tgad001. PMID: 36726796; PMCID: PMC9883620; DOI: 10.1093/texcom/tgad001.
Abstract
Speech elicits brain activity time-locked to its amplitude envelope. The resulting speech-brain synchrony (SBS) is thought to be crucial to speech parsing and comprehension. It has been shown that higher speech-brain coherence is associated with increased speech intelligibility. However, studies that depend on the experimental manipulation of speech stimuli do not allow conclusions about the causality of the observed tracking. Here, we investigate whether individual differences in the intrinsic propensity to track the speech envelope when listening to speech in quiet are predictive of individual differences in speech recognition in noise, in an independent task. We evaluated the cerebral tracking of speech in source-localized magnetoencephalography, at timescales corresponding to phrases, words, syllables and phonemes. We found that individual differences in syllabic tracking in right superior temporal gyrus and in left middle temporal gyrus (MTG) were positively associated with recognition accuracy in an independent words-in-noise task. Furthermore, directed connectivity analysis showed that this relationship is partially mediated by top-down connectivity from premotor cortex, which is associated with speech processing and active sensing in the auditory domain, to left MTG. Thus, the extent of SBS, even during clear speech, reflects an active mechanism of the speech processing system that may confer resilience to noise.
Affiliation(s)
- Robert Becker
- Neurolinguistics, Department of Psychology, University of Zurich (UZH), Zurich, Switzerland
- Alexis Hervais-Adelman
- Neurolinguistics, Department of Psychology, University of Zurich, Zurich 8050, Switzerland
- Neuroscience Center Zurich, University of Zurich and Eidgenössische Technische Hochschule Zurich, Zurich 8057, Switzerland
8. Lo CW, Tung TY, Ke AH, Brennan JR. Hierarchy, Not Lexical Regularity, Modulates Low-Frequency Neural Synchrony During Language Comprehension. Neurobiology of Language 2022; 3:538-555. PMID: 37215342; PMCID: PMC10158645; DOI: 10.1162/nol_a_00077.
Abstract
Neural responses appear to synchronize with sentence structure. However, researchers have debated whether this response in the delta band (0.5-3 Hz) really reflects hierarchical information or simply lexical regularities. Computational simulations in which sentences are represented simply as sequences of high-dimensional numeric vectors that encode lexical information seem to give rise to power spectra similar to those observed for sentence synchronization, suggesting that sentence-level cortical tracking findings may reflect sequential lexical or part-of-speech information, and not necessarily hierarchical syntactic information. Using electroencephalography (EEG) data and the frequency-tagging paradigm, we develop a novel experimental condition to tease apart the predictions of the lexical and the hierarchical accounts of the attested low-frequency synchronization. Under a lexical model, synchronization should be observed even when words are reversed within their phrases (e.g., "sheep white grass eat" instead of "white sheep eat grass"), because the same lexical items are preserved at the same regular intervals. Critically, such stimuli are not syntactically well-formed; thus a hierarchical model does not predict synchronization of phrase- and sentence-level structure in the reversed phrase condition. Computational simulations confirm these diverging predictions. EEG data from N = 31 native speakers of Mandarin show robust delta synchronization to syntactically well-formed isochronous speech. Importantly, no such pattern is observed for reversed phrases, consistent with the hierarchical, but not the lexical, accounts.
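The frequency-tagging logic behind these simulations can be sketched with a single DFT coefficient: any feature that repeats every two words produces power at the two-word rate. A toy illustration (not the authors' simulation code; the 8-word trial and binary feature are invented):

```python
import cmath
import math

def power_at(series, cycles):
    """Power of a feature time series at a whole number of cycles per
    window, read off one DFT coefficient - the frequency-tagging readout."""
    n = len(series)
    coef = sum(x * cmath.exp(-2j * math.pi * cycles * k / n)
               for k, x in enumerate(series)) / n
    return abs(coef) ** 2

# A feature alternating every 2 words across an 8-word trial peaks at
# 4 cycles (the two-word "phrase" rate) and is absent at a neighboring bin.
alternating = [1.0, 0.0] * 4
```

This is why regularly repeating lexical features alone can produce a phrase-rate peak, and why the reversed-phrase condition is needed to separate lexical from hierarchical accounts.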
Affiliation(s)
- Chia-Wen Lo
- Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Department of Linguistics, University of Michigan, Ann Arbor, MI, USA
- Tzu-Yun Tung
- Department of Linguistics, University of Michigan, Ann Arbor, MI, USA
- Alan Hezao Ke
- Department of Linguistics, University of Michigan, Ann Arbor, MI, USA
- Department of Linguistics, Languages and Cultures, Michigan State University, East Lansing, MI, USA
9. Glushko A, Poeppel D, Steinhauer K. Overt and implicit prosody contribute to neurophysiological responses previously attributed to grammatical processing. Sci Rep 2022; 12:14759. PMID: 36042220; PMCID: PMC9427746; DOI: 10.1038/s41598-022-18162-3.
Abstract
Recent neurophysiological research suggests that slow cortical activity tracks hierarchical syntactic structure during online sentence processing. Here we tested an alternative hypothesis: electrophysiological activity peaks at constituent phrase as well as sentence frequencies reflect cortical tracking of overt or covert (implicit) prosodic grouping. Participants listened to series of sentences presented in three conditions while electroencephalography (EEG) was recorded. First, prosodic cues in the sentence materials were neutralized. We found an EEG spectral power peak elicited at a frequency that only 'tagged' covert, implicit prosodic change, but not any major syntactic constituents. In the second condition, participants listened to a series of sentences with overt prosodic grouping cues that either aligned or misaligned with the syntactic phrasing in the sentences (initial overt prosody trials). Following each overt prosody trial, participants were presented with a second series of sentences lacking overt prosodic cues (instructed prosody trial) and were instructed to imagine the prosodic contour present in the previous, overt prosody trial. The EEG responses reflected an interactive relationship between syntactic processing and prosodic tracking at the frequencies of syntactic constituents (sentences and phrases): alignment of syntax and prosody boosted EEG responses, whereas their misalignment had an opposite effect. This was true for both overt and imagined prosody conditions. We conclude that processing of both overt and covert prosody is reflected in the frequency-tagged neural responses at sentence constituent frequencies. These findings need to be incorporated in any account that aims to identify neural markers reflecting syntactic processing.
Affiliation(s)
- David Poeppel
- Department of Psychology, New York University, New York City, NY, USA
- Ernst Struengmann Institute for Neuroscience, Frankfurt, Germany
- Center for Language, Music, and Emotion (CLaME), New York, USA
- Karsten Steinhauer
- Centre for Research on Brain, Language and Music, Montreal, Canada
- School of Communication Sciences and Disorders, McGill University, 2001 McGill College Avenue, Unit 800, Montreal, QC, H3A 1G1, Canada
10. ten Oever S, Carta S, Kaufeld G, Martin AE. Neural tracking of phrases in spoken language comprehension is automatic and task-dependent. eLife 2022; 11:e77468. PMID: 35833919; PMCID: PMC9282854; DOI: 10.7554/elife.77468.
Abstract
Linguistic phrases are tracked in sentences even though there is no one-to-one acoustic phrase marker in the physical signal. This phenomenon suggests an automatic tracking of abstract linguistic structure that is endogenously generated by the brain. However, all studies investigating linguistic tracking compare conditions where either relevant information at linguistic timescales is available, or where this information is absent altogether (e.g., sentences versus word lists during passive listening). It is therefore unclear whether tracking at phrasal timescales is related to the content of language, or rather, results as a consequence of attending to the timescales that happen to match behaviourally relevant information. To investigate this question, we presented participants with sentences and word lists while recording their brain activity with magnetoencephalography (MEG). Participants performed passive, syllable, word, and word-combination tasks corresponding to attending to four different rates: one they would naturally attend to, syllable-rates, word-rates, and phrasal-rates, respectively. We replicated overall findings of stronger phrasal-rate tracking measured with mutual information for sentences compared to word lists across the classical language network. However, in the inferior frontal gyrus (IFG) we found a task effect suggesting stronger phrasal-rate tracking during the word-combination task independent of the presence of linguistic structure, as well as stronger delta-band connectivity during this task. These results suggest that extracting linguistic information at phrasal rates occurs automatically with or without the presence of an additional task, but also that IFG might be important for temporal integration across various perceptual domains.
Affiliation(s)
- Sanne ten Oever
- Language and Computation in Neural Systems group, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Language and Computation in Neural Systems group, Donders Centre for Cognitive Neuroimaging, Nijmegen, Netherlands
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Sara Carta
- Language and Computation in Neural Systems group, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- ADAPT Centre, School of Computer Science and Statistics, University of Dublin, Trinity College, Dublin, Ireland
- CIMeC - Center for Mind/Brain Sciences, University of Trento, Trento, Italy
- Greta Kaufeld
- Language and Computation in Neural Systems group, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Andrea E Martin
- Language and Computation in Neural Systems group, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Language and Computation in Neural Systems group, Donders Centre for Cognitive Neuroimaging, Nijmegen, Netherlands
11.
Abstract
Sentences contain structure that determines their meaning beyond that of individual words. An influential study by Ding and colleagues (2016) used frequency tagging of phrases and sentences to show that the human brain is sensitive to structure, finding peaks of neural power at the rate at which structures were presented. Since then, there has been a rich debate on how best to explain this pattern of results, with profound impact on the language sciences. Models that use hierarchical structure building, as well as models based on associative sequence processing, can predict the neural response, creating an inferential impasse as to which class of models explains the nature of the linguistic computations reflected in the neural readout. In the current manuscript, we discuss pitfalls and common fallacies seen in the conclusions drawn in the literature, illustrated by various simulations. We conclude that these neural data alone, and any like them, are insufficient for inferring the neural operations of sentence processing. We discuss how best to evaluate models and how to approach the modeling of neural readouts to sentence processing in a manner that remains faithful to cognitive, neural, and linguistic principles.
Affiliation(s)
- Sanne Ten Oever
- Language and Computation in Neural Systems Group, Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- Donders Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, the Netherlands
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands
- Karthikeya Kaushik
- Language and Computation in Neural Systems Group, Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- Donders Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, the Netherlands
- Andrea E. Martin
- Language and Computation in Neural Systems Group, Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- Donders Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, the Netherlands
12. Coopmans CW, de Hoop H, Hagoort P, Martin AE. Effects of Structure and Meaning on Cortical Tracking of Linguistic Units in Naturalistic Speech. Neurobiology of Language 2022; 3:386-412. PMID: 37216060; PMCID: PMC10158633; DOI: 10.1162/nol_a_00070.
Abstract
Recent research has established that cortical activity "tracks" the presentation rate of syntactic phrases in continuous speech, even though phrases are abstract units that do not have direct correlates in the acoustic signal. We investigated whether cortical tracking of phrase structures is modulated by the extent to which these structures compositionally determine meaning. To this end, we recorded electroencephalography (EEG) of 38 native speakers who listened to naturally spoken Dutch stimuli in different conditions, which parametrically modulated the degree to which syntactic structure and lexical semantics determine sentence meaning. Tracking was quantified through mutual information between the EEG data and either the speech envelopes or abstract annotations of syntax, all of which were filtered in the frequency band corresponding to the presentation rate of phrases (1.1-2.1 Hz). Overall, these mutual information analyses showed stronger tracking of phrases in regular sentences than in stimuli whose lexical-syntactic content is reduced, but no consistent differences in tracking between sentences and stimuli that contain a combination of syntactic structure and lexical content. While there were no effects of compositional meaning on the degree of phrase-structure tracking, analyses of event-related potentials elicited by sentence-final words did reveal meaning-induced differences between conditions. Our findings suggest that cortical tracking of structure in sentences indexes the internal generation of this structure, a process that is modulated by the properties of its input, but not by the compositional interpretation of its output.
Collapse
Affiliation(s)
- Cas W. Coopmans
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Centre for Language Studies, Radboud University, Nijmegen, The Netherlands
| | - Helen de Hoop
- Centre for Language Studies, Radboud University, Nijmegen, The Netherlands
| | - Peter Hagoort
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
| | - Andrea E. Martin
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
| |
Collapse
|
13
|
Delta-band neural activity primarily tracks sentences instead of semantic properties of words. Neuroimage 2022; 251:118979. [DOI: 10.1016/j.neuroimage.2022.118979] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2021] [Revised: 01/29/2022] [Accepted: 02/06/2022] [Indexed: 11/21/2022] Open
|
14
|
Hörberg T, Jaeger TF. A Rational Model of Incremental Argument Interpretation: The Comprehension of Swedish Transitive Clauses. Front Psychol 2021; 12:674202. [PMID: 34721134 PMCID: PMC8554243 DOI: 10.3389/fpsyg.2021.674202] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2021] [Accepted: 09/21/2021] [Indexed: 11/16/2022] Open
Abstract
A central component of sentence understanding is verb-argument interpretation, determining how the referents in the sentence are related to the events or states expressed by the verb. Previous work has found that comprehenders change their argument interpretations incrementally as the sentence unfolds, based on morphosyntactic (e.g., case, agreement), lexico-semantic (e.g., animacy, verb-argument fit), and discourse cues (e.g., givenness). However, it is still unknown whether these cues have a privileged role in language processing, or whether their effects on argument interpretation originate in implicit expectations based on the joint distribution of these cues with argument assignments experienced in previous language input. We compare the former, linguistic account against the latter, expectation-based account, using data from production and comprehension of transitive clauses in Swedish. Based on a large corpus of Swedish, we develop a rational (Bayesian) model of incremental argument interpretation. This model predicts the processing difficulty experienced at different points in the sentence as a function of the Bayesian surprise associated with changes in expectations over possible argument interpretations. We then test the model against reading times from a self-paced reading experiment on Swedish. We find Bayesian surprise to be a significant predictor of reading times, complementing effects of word surprisal. Bayesian surprise also captures the qualitative effects of morpho-syntactic and lexico-semantic cues. Additional model comparisons find that it—with a single degree of freedom—captures much, if not all, of the effects associated with these cues. This suggests that the effects of form- and meaning-based cues to argument interpretation are mediated through expectation-based processing.
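The core quantity in the model above — Bayesian surprise, the divergence between the updated and prior distributions over argument interpretations — reduces to a KL divergence for discrete hypotheses. The sketch below is a hypothetical two-interpretation illustration, not the authors' corpus-derived model; the probability values are invented for the example.

```python
import numpy as np

def bayesian_surprise(prior, posterior):
    """KL divergence D(posterior || prior), in bits: how much an incoming
    word shifts the distribution over candidate argument interpretations."""
    prior = np.asarray(prior, dtype=float)
    posterior = np.asarray(posterior, dtype=float)
    nz = posterior > 0                      # 0 * log(0) terms contribute nothing
    return float((posterior[nz] * np.log2(posterior[nz] / prior[nz])).sum())

# Hypothetical example: two candidate readings (agent-first vs. patient-first).
prior = [0.7, 0.3]          # expectation before a disambiguating cue
post_confirm = [0.9, 0.1]   # a case-marking cue confirms the dominant reading
post_revise = [0.1, 0.9]    # the cue forces a reanalysis

# Reanalysis is the larger belief update, hence the larger predicted slowdown.
assert bayesian_surprise(prior, post_revise) > bayesian_surprise(prior, post_confirm)
```

Under the expectation-based account, reading time at a word is predicted to scale with this single quantity, which is how one predictor can absorb the effects of several morphosyntactic and lexico-semantic cues.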
Collapse
Affiliation(s)
- Thomas Hörberg
- Department of Linguistics, Stockholm University, Stockholm, Sweden; Department of Computational Science and Technology, KTH Royal Institute of Technology, Stockholm, Sweden
| | - T Florian Jaeger
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, United States; Department of Computer Science, University of Rochester, Rochester, NY, United States
| |
Collapse
|
15
|
Kalenkovich E, Shestakova A, Kazanina N. Frequency tagging of syntactic structure or lexical properties; a registered MEG study. Cortex 2021; 146:24-38. [PMID: 34814042 DOI: 10.1016/j.cortex.2021.09.012] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2021] [Revised: 09/22/2021] [Accepted: 09/30/2021] [Indexed: 11/17/2022]
Abstract
A traditional view on sentence comprehension holds that the listener parses linguistic input using hierarchical syntactic rules. Recently, physiological evidence for such a claim has been provided by Ding et al.'s (2016) MEG study that demonstrated, using a frequency-tagging paradigm, that regularly occurring syntactic constituents were spontaneously tracked by listeners. Even more recently, this study's results have been challenged as artifactual by Frank and Yang (2018) who successfully re-created Ding's results using a distributional semantic vector model that relied exclusively on lexical information and did not appeal to any hierarchical syntactic representations. The current MEG study was designed to dissociate the two interpretations of Ding et al.'s results. Taking advantage of the morphological richness of Russian, we constructed two types of sentences of different syntactic structure; critically, this was achieved by manipulating a single affix on one of the words while all other lexical roots and affixes in the sentence were kept the same. In Experiment 1, we successfully verified the intuition that due to almost complete lexical overlap the two types of sentences should yield the same activity pattern according to Frank and Yang's (2018) lexico-semantic model. In Experiment 2, we recorded Russian listeners' MEG activity while they listened to the two types of sentences. Contradicting the hierarchical syntactic account and consistent with the lexico-semantic one, we observed no difference across the conditions in the way participants tracked the stimuli properties. Corroborated by other recent evidence, our findings show that peaks interpreted by Ding et al. as reflecting higher-level syntactic constituency may stem from non-syntactic factors.
Collapse
Affiliation(s)
- Evgenii Kalenkovich
- HSE University, Centre for Cognition and Decision Making, Institute for Cognitive Neuroscience, National Research University Higher School of Economics, Russian Federation.
| | - Anna Shestakova
- International Laboratory of Social Neurobiology, Institute of Cognitive Neuroscience, National Research University Higher School of Economics, Moscow, Russia
| | - Nina Kazanina
- University of Bristol, School of Psychological Science, Bristol, UK; International Laboratory of Social Neurobiology, Institute of Cognitive Neuroscience, National Research University Higher School of Economics, Moscow, Russia
| |
Collapse
|
16
|
Burroughs A, Kazanina N, Houghton C. Grammatical category and the neural processing of phrases. Sci Rep 2021; 11:2446. [PMID: 33510230 PMCID: PMC7844293 DOI: 10.1038/s41598-021-81901-5] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2019] [Accepted: 01/03/2021] [Indexed: 11/19/2022] Open
Abstract
The interlocking roles of lexical, syntactic and semantic processing in language comprehension have been the subject of longstanding debate. Recently, the cortical response to a frequency-tagged linguistic stimulus has been shown to track the rate of phrase and sentence, as well as syllable, presentation. This could be interpreted as evidence for the hierarchical processing of speech, or as a response to the repetition of grammatical category. To examine the extent to which hierarchical structure plays a role in language processing, we recorded EEG from human participants as they listened to isochronous streams of monosyllabic words. Comparing responses to sequences in which grammatical category is strictly alternating and chosen such that two-word phrases can be grammatically constructed—cold food loud room—or is absent—rough give ill tell—showed that cortical entrainment at the two-word phrase rate was only present in the grammatical condition. Thus, grammatical category repetition alone does not yield entrainment at a level higher than the word. On the other hand, cortical entrainment was reduced for the mixed-phrase condition that contained two-word phrases but no grammatical category repetition—that word send less—which is not what would be expected if the measured entrainment reflected purely abstract hierarchical syntactic units. Our results support a model in which word-level grammatical category information is required to build larger units.
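The frequency-tagging logic of this design can be illustrated with a toy simulation: if listeners combine isochronous words into two-word phrases, the response spectrum should show a peak at half the word rate. The sketch below uses invented signal parameters on synthetic data, not the study's EEG pipeline.

```python
import numpy as np

fs, dur = 100, 40                          # sampling rate (Hz), duration (s)
t = np.arange(0, dur, 1 / fs)
word_rate, phrase_rate = 2.0, 1.0          # one two-word phrase per two words
rng = np.random.default_rng(1)

def simulated_response(phrase_tracking):
    """A word-rate response, plus an optional phrase-rate component."""
    x = np.sin(2 * np.pi * word_rate * t) + 0.5 * rng.standard_normal(t.size)
    if phrase_tracking:                    # grammatical streams: "cold food loud room"
        x += 0.8 * np.sin(2 * np.pi * phrase_rate * t)
    return x

def power_at(x, freq):
    """Spectral power at a tagged frequency (rectangular-window FFT)."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    return spec[np.argmin(np.abs(freqs - freq))]

grammatical = power_at(simulated_response(True), phrase_rate)
ungrammatical = power_at(simulated_response(False), phrase_rate)
print(grammatical > ungrammatical)
```

A long recording matters here: with 40 s of signal the FFT bins are 0.025 Hz wide, so the tagged 1 Hz phrase rate falls exactly on a bin and the peak is not smeared into its neighbours.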
Collapse
Affiliation(s)
- Amelia Burroughs
- Department of Computer Science, University of Bristol, Bristol, UK
| | - Nina Kazanina
- School of Psychological Science, University of Bristol, Bristol, UK; International Laboratory of Social Neurobiology, Institute for Cognitive Neuroscience, National Research University Higher School of Economics, Moscow, Russian Federation
| | - Conor Houghton
- Department of Computer Science, University of Bristol, Bristol, UK.
| |
Collapse
|
17
|
Abstract
Speech processing in the human brain is grounded in non-specific auditory processing in the general mammalian brain, but relies on human-specific adaptations for processing speech and language. For this reason, many recent neurophysiological investigations of speech processing have turned to the human brain, with an emphasis on continuous speech. Substantial progress has been made using the phenomenon of "neural speech tracking", in which neurophysiological responses time-lock to the rhythm of auditory (and other) features in continuous speech. One broad category of investigations concerns the extent to which speech tracking measures are related to speech intelligibility, which has clinical applications in addition to its scientific importance. Recent investigations have also focused on disentangling different neural processes that contribute to speech tracking. The two lines of research are closely related, since processing stages throughout auditory cortex contribute to speech comprehension, in addition to subcortical processing and higher order and attentional processes.
Collapse
Affiliation(s)
- Christian Brodbeck
- Institute for Systems Research, University of Maryland, College Park, Maryland 20742, U.S.A
| | - Jonathan Z. Simon
- Institute for Systems Research, University of Maryland, College Park, Maryland 20742, U.S.A
- Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland 20742, U.S.A
- Department of Biology, University of Maryland, College Park, Maryland 20742, U.S.A
| |
Collapse
|
18
|
Linguistic Structure and Meaning Organize Neural Oscillations into a Content-Specific Hierarchy. J Neurosci 2020; 40:9467-9475. [PMID: 33097640 DOI: 10.1523/jneurosci.0302-20.2020] [Citation(s) in RCA: 45] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2020] [Revised: 09/25/2020] [Accepted: 10/03/2020] [Indexed: 11/21/2022] Open
Abstract
Neural oscillations track linguistic information during speech comprehension (Ding et al., 2016; Keitel et al., 2018), and are known to be modulated by acoustic landmarks and speech intelligibility (Doelling et al., 2014; Zoefel and VanRullen, 2015). However, studies investigating linguistic tracking have either relied on non-naturalistic isochronous stimuli or failed to fully control for prosody. Therefore, it is still unclear whether low-frequency activity tracks linguistic structure during natural speech, where linguistic structure does not follow such a palpable temporal pattern. Here, we measured electroencephalography (EEG) and manipulated the presence of semantic and syntactic information apart from the timescale of their occurrence, while carefully controlling for the acoustic-prosodic and lexical-semantic information in the signal. EEG was recorded while 29 adult native speakers (22 women, 7 men) listened to naturally spoken Dutch sentences, jabberwocky controls with morphemes and sentential prosody, word lists with lexical content but no phrase structure, and backward acoustically matched controls. Mutual information (MI) analysis revealed sensitivity to linguistic content: MI was highest for sentences at the phrasal (0.8-1.1 Hz) and lexical (1.9-2.8 Hz) timescales, suggesting that the delta-band is modulated by lexically driven combinatorial processing beyond prosody, and that linguistic content (i.e., structure and meaning) organizes neural oscillations beyond the timescale and rhythmicity of the stimulus. This pattern is consistent with neurophysiologically inspired models of language comprehension (Martin, 2016, 2020; Martin and Doumas, 2017) where oscillations encode endogenously generated linguistic content over and above exogenous or stimulus-driven timing and rhythm information.
SIGNIFICANCE STATEMENT Biological systems like the brain encode their environment not only by reacting in a series of stimulus-driven responses, but by combining stimulus-driven information with endogenous, internally generated, inferential knowledge and meaning. Understanding language from speech is the human benchmark for this. Much research focuses on the purely stimulus-driven response, but here, we focus on the goal of language behavior: conveying structure and meaning. To that end, we use naturalistic stimuli that contrast acoustic-prosodic and lexical-semantic information to show that, during spoken language comprehension, oscillatory modulations reflect computations related to inferring structure and meaning from the acoustic signal. Our experiment provides the first evidence to date that compositional structure and meaning organize the oscillatory response, above and beyond prosodic and lexical controls.
Collapse
|
19
|
Gui P, Jiang Y, Zang D, Qi Z, Tan J, Tanigawa H, Jiang J, Wen Y, Xu L, Zhao J, Mao Y, Poo MM, Ding N, Dehaene S, Wu X, Wang L. Assessing the depth of language processing in patients with disorders of consciousness. Nat Neurosci 2020; 23:761-770. [DOI: 10.1038/s41593-020-0639-1] [Citation(s) in RCA: 42] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2019] [Accepted: 04/08/2020] [Indexed: 12/18/2022]
|
20
|
Jin P, Lu Y, Ding N. Low-frequency neural activity reflects rule-based chunking during speech listening. eLife 2020; 9:55613. [PMID: 32310082 PMCID: PMC7213976 DOI: 10.7554/elife.55613] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2020] [Accepted: 04/20/2020] [Indexed: 12/26/2022] Open
Abstract
Chunking is a key mechanism for sequence processing. Studies on speech sequences have suggested that low-frequency cortical activity tracks spoken phrases, that is, chunks of words defined by tacit linguistic knowledge. Here, we investigate whether low-frequency cortical activity reflects a general mechanism for sequence chunking and can track chunks defined by temporarily learned artificial rules. The experiment records magnetoencephalographic (MEG) responses to a sequence of spoken words. To dissociate word properties from the chunk structures, two tasks separately require listeners to group pairs of semantically similar or semantically dissimilar words into chunks. In the MEG spectrum, a clear response is observed at the chunk rate. More importantly, the chunk-rate response is task-dependent. It is phase-locked to chunk boundaries rather than to the semantic relatedness between words. The results strongly suggest that cortical activity can track chunks constructed based on task-related rules and potentially reflects a general mechanism for chunk-level representations.
From digital personal assistants like Siri and Alexa to customer service chatbots, computers are slowly learning to talk to us. But as anyone who has interacted with them will appreciate, the results are often imperfect. Each time we speak or write, we use grammatical rules to combine words in a specific order. These rules enable us to produce new sentences that we have never seen or heard before, and to understand the sentences of others. But computer scientists adopt a different strategy when training computers to use language. Instead of grammar, they provide the computers with vast numbers of example sentences and phrases. The computers then use this input to calculate how likely one word is to follow another in a given context. "The sky is blue" is more common than "the sky is green", for example. But is it possible that the human brain also uses this approach? When we listen to speech, the brain shows patterns of activity that correspond to units such as sentences. But previous research has been unable to tell whether the brain is using grammatical rules to recognise sentences, or whether it relies on a probability-based approach like a computer. Using a simple artificial language, Jin et al. have now managed to tease apart these alternatives. Healthy volunteers listened to lists of words while lying inside a brain scanner. The volunteers had to group the words into pairs, otherwise known as chunks, by following various rules that simulated the grammatical rules present in natural languages. Crucially, the volunteers' brain activity tracked the chunks – which differed depending on which rule had been applied – rather than the individual words. This suggests that the brain processes speech using abstract rules instead of word probabilities. While computers are now much better at processing language, they still perform worse than people. Understanding how the human brain solves this task could ultimately help to improve the performance of personal digital assistants.
Collapse
Affiliation(s)
- Peiqing Jin
- Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou, China
| | - Yuhan Lu
- Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou, China
| | - Nai Ding
- Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou, China; Research Center for Advanced Artificial Intelligence Theory, Zhejiang Lab, Hangzhou, China
| |
Collapse
|
21
|
Abstract
OBJECTIVE: Speech signals have a remarkable ability to entrain brain activity to the rapid fluctuations of speech sounds. For instance, one can readily measure a correlation of the sound amplitude with the evoked responses of the electroencephalogram (EEG), and the strength of this correlation is indicative of whether the listener is attending to the speech. In this study we asked whether this stimulus-response correlation is also predictive of speech intelligibility.
APPROACH: We hypothesized that when a listener fails to understand the speech in adverse hearing conditions, attention wanes and stimulus-response correlation also drops. To test this, we measure a listener's ability to detect words in noisy speech while recording their brain activity using EEG. We alter intelligibility without changing the acoustic stimulus by pairing it with congruent and incongruent visual speech.
MAIN RESULTS: For almost all subjects we found that an improvement in speech detection coincided with an increase in correlation between the noisy speech and the EEG measured over a period of 30 min.
SIGNIFICANCE: We conclude that simultaneous recordings of the perceived sound and the corresponding EEG response may be a practical tool to assess speech intelligibility in the context of hearing aids.
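The stimulus-response correlation at the heart of this approach is just a lagged Pearson correlation between the speech amplitude envelope and an EEG channel, with the lag accounting for neural response delay. The sketch below is an assumed, simplified version on synthetic data (the function name, the 100 ms delay, and the signal parameters are all invented for illustration); real implementations use multi-channel decoders or temporal response functions.

```python
import numpy as np

def stimulus_response_corr(envelope, eeg, max_lag, fs):
    """Peak Pearson correlation between the speech amplitude envelope and
    an EEG channel, searched over candidate neural response lags (s)."""
    lags = range(0, int(max_lag * fs) + 1)

    def corr_at(lag):
        a, b = envelope[: envelope.size - lag or None], eeg[lag:]
        return np.corrcoef(a, b)[0, 1]

    best = max(lags, key=corr_at)
    return corr_at(best), best / fs

# Synthetic check: an "EEG" trace that echoes the envelope after ~100 ms.
fs = 100
rng = np.random.default_rng(2)
envelope = rng.standard_normal(fs * 30)             # 30 s of "speech" envelope
eeg = 0.5 * np.roll(envelope, 10) + rng.standard_normal(envelope.size)

r, lag = stimulus_response_corr(envelope, eeg, max_lag=0.3, fs=fs)
print(round(lag, 2))   # recovers the simulated ~0.10 s response delay
```

In the intelligibility application, this correlation is computed repeatedly over short windows, and its rise and fall over a session is what tracks the listener's comprehension.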
Collapse
Affiliation(s)
- Ivan Iotzov
- Biomedical Engineering, City College of New York, New York City, NY, United States of America
| | | |
Collapse
|
22
|
Hasson U, Egidi G, Marelli M, Willems RM. Grounding the neurobiology of language in first principles: The necessity of non-language-centric explanations for language comprehension. Cognition 2018; 180:135-157. [PMID: 30053570 PMCID: PMC6145924 DOI: 10.1016/j.cognition.2018.06.018] [Citation(s) in RCA: 76] [Impact Index Per Article: 12.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2017] [Revised: 06/05/2018] [Accepted: 06/24/2018] [Indexed: 12/26/2022]
Abstract
Recent decades have ushered in tremendous progress in understanding the neural basis of language. Most of our current knowledge on language and the brain, however, is derived from lab-based experiments that are far removed from everyday language use, and that are inspired by questions originating in linguistic and psycholinguistic contexts. In this paper we argue that in order to make progress, the field needs to shift its focus to understanding the neurobiology of naturalistic language comprehension. We present here a new conceptual framework for understanding the neurobiological organization of language comprehension. This framework is non-language-centered in the computational/neurobiological constructs it identifies, and focuses strongly on context. Our core arguments address three general issues: (i) the difficulty in extending language-centric explanations to discourse; (ii) the necessity of taking context as a serious topic of study, modeling it formally and acknowledging the limitations on external validity when studying language comprehension outside context; and (iii) the tenuous status of the language network as an explanatory construct. We argue that adopting this framework means that neurobiological studies of language will be less focused on identifying correlations between brain activity patterns and mechanisms postulated by psycholinguistic theories. Instead, they will be less self-referential and more inclined towards integration of language with other cognitive systems, ultimately doing more justice to the neurobiological organization of language and how it supports language as it is used in everyday life.
Collapse
Affiliation(s)
- Uri Hasson
- Center for Mind/Brain Sciences, The University of Trento, Trento, Italy; Center for Practical Wisdom, The University of Chicago, Chicago, IL, United States.
| | - Giovanna Egidi
- Center for Mind/Brain Sciences, The University of Trento, Trento, Italy
| | - Marco Marelli
- Department of Psychology, University of Milano-Bicocca, Milano, Italy; NeuroMI - Milan Center for Neuroscience, Milano, Italy
| | - Roel M Willems
- Centre for Language Studies & Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands; Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
| |
Collapse
|
23
|
Lau E. Neural Indices of Structured Sentence Representation. PSYCHOLOGY OF LEARNING AND MOTIVATION 2018. [DOI: 10.1016/bs.plm.2018.08.004] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/28/2023]
|