1
Weissbart H, Martin AE. The structure and statistics of language jointly shape cross-frequency neural dynamics during spoken language comprehension. Nat Commun 2024; 15:8850. PMID: 39397036. DOI: 10.1038/s41467-024-53128-1.
Abstract
Humans excel at extracting structurally determined meaning from speech despite inherent physical variability. This study explores how the brain robustly predicts and understands spoken language. It investigates the relationship between structural and statistical language knowledge in brain dynamics, focusing on phase and amplitude modulation. Using syntactic features from constituent hierarchies and surface statistics from a transformer model as predictors in forward encoding models, we reconstructed cross-frequency neural dynamics from MEG data during audiobook listening. Our findings challenge a strict separation of linguistic structure and statistics in the brain, with both aiding neural signal reconstruction. Syntactic features have a more temporally spread impact, and both word entropy and the number of closing syntactic constituents are linked to the phase-amplitude coupling of neural dynamics, implying a role in temporal prediction and cortical oscillation alignment during speech processing. Our results indicate that structural and statistical information jointly shape neural dynamics during spoken language comprehension and suggest an integration process via a cross-frequency coupling mechanism.
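The phase-amplitude coupling analysis can be illustrated with the mean-vector-length modulation index (Canolty et al., 2006). A minimal sketch on synthetic data, assuming illustrative theta/gamma band edges rather than the authors' actual MEG pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def modulation_index(x, fs, phase_band=(4, 8), amp_band=(30, 50)):
    """Mean vector length |mean(A_t * exp(i * phi_t))| of amplitude by phase."""
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))

# Synthetic signal: gamma amplitude locked to the theta peak, so PAC > 0.
fs = 200
t = np.arange(0, 60, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
gamma = (1 + theta) * np.sin(2 * np.pi * 40 * t)
x = theta + 0.5 * gamma + 0.1 * np.random.default_rng(0).standard_normal(t.size)
print(modulation_index(x, fs))  # clearly above the value for phase-shuffled data
```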
Affiliation(s)
- Hugo Weissbart: Donders Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, The Netherlands; Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands.
- Andrea E Martin: Donders Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, The Netherlands; Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands.
2
Kobzeva A, Kush D. Grammar and Expectation in Active Dependency Resolution: Experimental and Modeling Evidence From Norwegian. Cogn Sci 2024; 48:e13501. PMID: 39401001. DOI: 10.1111/cogs.13501.
Abstract
Filler-gap dependency resolution is often characterized as an active process. We probed the mechanisms that determine where and why comprehenders posit gaps during incremental processing, using Norwegian as our test language. First, we investigated why active filler-gap dependency resolution is suspended inside island domains like embedded questions in some languages. Processing-based accounts hold that resource limitations prevent gap-filling in embedded questions across languages, while grammar-based accounts predict that active gap-filling is blocked only in languages where embedded questions are grammatical islands. In a self-paced reading study, we find that Norwegian participants exhibit filled-gap effects inside embedded questions, which are not islands in the language. The findings are consistent with grammar-based, but not processing-based, accounts. Second, we asked whether active filler-gap processing can be understood as a special case of probabilistic ambiguity resolution within an expectation-based framework. To do so, we tested whether word-by-word surprisal values from a neural language model could predict the location and magnitude of filled-gap effects in our behavioral data. We find that surprisal accurately tracks the location of filled-gap effects but severely underestimates their magnitude. This suggests either that mechanisms above and beyond probabilistic ambiguity resolution are required to fully explain active gap-filling behavior, or that surprisal values derived from a long short-term memory network are not good proxies for humans' incremental expectations during filler-gap resolution.
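The surprisal side of the study can be reproduced in outline with any causal language model. A minimal sketch using GPT-2 as a stand-in (the paper derived surprisal from an LSTM language model of Norwegian, so the model, tokenization, and example sentence here are illustrative assumptions):

```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def token_surprisals(sentence: str):
    """Surprisal (in bits) of each token given its left context."""
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    logprobs = torch.log_softmax(logits, dim=-1)
    return [
        (tok.decode(ids[0, t]), -logprobs[0, t - 1, ids[0, t]].item() / math.log(2))
        for t in range(1, ids.shape[1])
    ]

# A filled-gap configuration: the critical region is the unexpected overt object.
for word, s in token_surprisals("Which book did the author write the essay about?"):
    print(f"{word!r}: {s:5.2f} bits")
```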
Affiliation(s)
- Anastasia Kobzeva: Department of Language and Literature, Norwegian University of Science and Technology.
- Dave Kush: Department of Language Studies, University of Toronto; Department of Linguistics, University of Toronto.
3
de Varda AG, Marelli M, Amenta S. Cloze probability, predictability ratings, and computational estimates for 205 English sentences, aligned with existing EEG and reading time data. Behav Res Methods 2024; 56:5190-5213. PMID: 37880511. PMCID: PMC11289024. DOI: 10.3758/s13428-023-02261-8.
Abstract
We release a database of cloze probability values, predictability ratings, and computational estimates for a sample of 205 English sentences (1726 words), aligned with previously released word-by-word reading time data (both self-paced reading and eye-movement records; Frank et al., 2013, Behavior Research Methods, 45(4), 1182-1190) and EEG responses (Frank et al., 2015, Brain and Language, 140, 1-11). Our analyses show that predictability ratings are the best predictors of the EEG signal (N400, P600, LAN), self-paced reading times, and eye-movement patterns once spillover effects are taken into account. The computational estimates are particularly effective at explaining variance in the eye-tracking data without spillover. Cloze probability estimates have decent overall psychometric accuracy and are the best predictors of early fixation patterns (first fixation duration). Our results indicate that the best measure of word predictability in context depends critically on the processing index being considered.
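Spillover of this kind is typically handled by adding lagged copies of each predictor to the regression. A minimal statsmodels sketch on an invented toy data frame; the column names are not the released database's schema:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical word-by-word data: one row per word in presentation order.
df = pd.DataFrame({
    "rt": [312, 298, 355, 410, 376, 333, 301, 389, 344, 362],
    "surprisal": [1.2, 0.4, 3.1, 4.5, 2.2, 0.9, 0.7, 3.8, 1.5, 2.6],
    "length": [3, 5, 4, 8, 6, 2, 3, 7, 4, 5],
})
# Spillover: word n's predictability also affects reading times at n+1 and n+2.
for lag in (1, 2):
    df[f"surprisal_l{lag}"] = df["surprisal"].shift(lag)

fit = smf.ols(
    "rt ~ surprisal + surprisal_l1 + surprisal_l2 + length",
    data=df.dropna(),
).fit()
print(fit.params)  # lagged coefficients capture the spillover contribution
```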
Affiliation(s)
- Andrea Gregor de Varda: Department of Psychology, University of Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, Milano, MI 20126, Italy.
- Marco Marelli: Department of Psychology, University of Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, Milano, MI 20126, Italy.
- Simona Amenta: Department of Psychology, University of Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, Milano, MI 20126, Italy.
4
Michaelov JA, Bergen BK. On the Mathematical Relationship Between Contextual Probability and N400 Amplitude. Open Mind (Camb) 2024; 8:859-897. PMID: 39077107. PMCID: PMC11285424. DOI: 10.1162/opmi_a_00150.
Abstract
Accounts of human language comprehension propose different mathematical relationships between the contextual probability of a word and how difficult it is to process, including linear, logarithmic, and super-logarithmic ones. However, the empirical evidence favoring any of these over the others is mixed, appearing to vary with the index of processing difficulty used and the approach taken to calculate contextual probability. To help disentangle these results, we focus on the mathematical relationship between corpus-derived contextual probability and the N400, a neural index of processing difficulty. Specifically, we use 37 contemporary transformer language models to calculate the contextual probability of stimuli from 6 experimental studies of the N400, and test whether N400 amplitude is best predicted by a linear, logarithmic, super-logarithmic, or sub-logarithmic transformation of the probabilities calculated using these language models, as well as by combinations of these transformed metrics. We replicate the finding that on some datasets, a combination of linearly and logarithmically transformed probability predicts N400 amplitude better than either metric alone. In addition, we find that overall, the best single predictor of N400 amplitude is sub-logarithmically transformed probability, which for almost all language models and datasets explains all the variance in N400 amplitude otherwise explained by the linear and logarithmic transformations. This is a novel finding that is not predicted by any current theoretical account, and thus one that we argue is likely to play an important role in advancing our understanding of how the statistical regularities of language impact language comprehension.
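The model comparison at the paper's core can be sketched as a regression horse race among transformed probabilities. A minimal sketch on synthetic data; the sub-logarithmic family used here, (-log p)^k with k < 1, is one convenient parametrization and only an assumption about the paper's:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
p = rng.beta(0.3, 3.0, size=500)                      # contextual probabilities
s = -np.log(p)                                        # surprisal
n400 = -(s ** 0.6) + 0.3 * rng.standard_normal(500)  # sub-log ground truth here

transforms = {
    "linear": p,
    "logarithmic": -s,
    "sub-log (k=0.6)": -(s ** 0.6),
    "super-log (k=1.5)": -(s ** 1.5),
}
for name, x in transforms.items():
    r2 = cross_val_score(LinearRegression(), x.reshape(-1, 1), n400, cv=5).mean()
    print(f"{name:>18}: CV R^2 = {r2:.3f}")  # sub-log wins by construction
```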
5
Sugimoto Y, Yoshida R, Jeong H, Koizumi M, Brennan JR, Oseki Y. Localizing Syntactic Composition with Left-Corner Recurrent Neural Network Grammars. Neurobiol Lang (Camb) 2024; 5:201-224. PMID: 38645619. PMCID: PMC11025653. DOI: 10.1162/nol_a_00118.
Abstract
In computational neurolinguistics, hierarchical models such as recurrent neural network grammars (RNNGs), which jointly generate word sequences and their syntactic structures via a syntactic composition operation, have been shown to explain human brain activity better than sequential models such as long short-term memory networks (LSTMs). However, the vanilla RNNG employs a top-down parsing strategy, which the psycholinguistics literature regards as suboptimal, especially for head-final/left-branching languages; the left-corner parsing strategy has instead been proposed as psychologically more plausible. Building on this line of inquiry, we investigate not only whether hierarchical models like RNNGs explain human brain activity better than sequential models like LSTMs, but also which parsing strategy is more neurobiologically plausible. To do so, we developed a novel fMRI corpus in which participants read newspaper articles in a head-final/left-branching language, Japanese, in a naturalistic fMRI experiment. The results revealed that left-corner RNNGs outperformed both LSTMs and top-down RNNGs in the left inferior frontal and temporal-parietal regions, suggesting that certain brain regions localize syntactic composition under a left-corner parsing strategy.
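The contrast between parsing strategies is often operationalized for regression against brain signals as per-word node counts over a constituency tree: a top-down parser opens a node at its leftmost word, a bottom-up parser closes it at its rightmost word, and left-corner behavior falls between the two. A minimal sketch of these counts on a bracketed tree; this is the generic node-count approximation, not the RNNG surprisal and composition measures the paper actually fits:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    children: list = field(default_factory=list)

def parse(s: str) -> Node:
    """Parse a Penn-style bracketed string like '(S (NP ...) (VP ...))'."""
    tokens = s.replace("(", " ( ").replace(")", " ) ").split()
    def walk(i):
        assert tokens[i] == "("
        node = Node(tokens[i + 1]); i += 2
        while tokens[i] != ")":
            if tokens[i] == "(":
                child, i = walk(i)
                node.children.append(child)
            else:
                node.children.append(Node(tokens[i])); i += 1
        return node, i + 1
    return walk(0)[0]

def per_word_counts(tree: Node):
    """(word, nodes opened top-down, nodes closed bottom-up) per terminal."""
    opened, closed, words = {}, {}, []
    def visit(n):
        if not n.children:                       # terminal: record the word
            words.append(n.label)
            return len(words) - 1, len(words) - 1
        first = last = None
        for c in n.children:
            f, last = visit(c)
            first = f if first is None else first
        opened[first] = opened.get(first, 0) + 1  # node opens at leftmost word
        closed[last] = closed.get(last, 0) + 1    # node closes at rightmost word
        return first, last
    visit(tree)
    return [(w, opened.get(i, 0), closed.get(i, 0)) for i, w in enumerate(words)]

tree = parse("(S (NP (DET the) (N dog)) (VP (V chased) (NP (DET a) (N cat))))")
for row in per_word_counts(tree):
    print(row)
```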
Affiliation(s)
- Yushi Sugimoto: Graduate School of Arts and Sciences, University of Tokyo, Tokyo, Japan.
- Ryo Yoshida: Graduate School of Arts and Sciences, University of Tokyo, Tokyo, Japan.
- Hyeonjeong Jeong: Graduate School of International Cultural Studies, Tohoku University, Sendai, Japan.
- Masatoshi Koizumi: Department of Linguistics, Graduate School of Arts and Letters, Tohoku University, Sendai, Japan.
- Yohei Oseki: Graduate School of Arts and Sciences, University of Tokyo, Tokyo, Japan.
6
Shain C, Meister C, Pimentel T, Cotterell R, Levy R. Large-scale evidence for logarithmic effects of word predictability on reading time. Proc Natl Acad Sci U S A 2024; 121:e2307876121. PMID: 38422017. DOI: 10.1073/pnas.2307876121.
Abstract
During real-time language comprehension, our minds rapidly decode complex meanings from sequences of words. The difficulty of doing so is known to be related to words' contextual predictability, but what cognitive processes do these predictability effects reflect? In one view, predictability effects reflect facilitation due to anticipatory processing of words that are predictable from context. This view predicts a linear effect of predictability on processing demand. In another view, predictability effects reflect the costs of probabilistic inference over sentence interpretations. This view predicts either a logarithmic or a superlogarithmic effect of predictability on processing demand, depending on whether it assumes pressures toward a uniform distribution of information over time. The empirical record is currently mixed. Here, we revisit this question at scale: We analyze six reading datasets, estimate next-word probabilities with diverse statistical language models, and model reading times using recent advances in nonlinear regression. Results support a logarithmic effect of word predictability on processing difficulty, which favors probabilistic inference as a key component of human language processing.
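The linearity question can be probed by letting a flexible spline compete with a model that is linear in surprisal; if probability's effect is logarithmic, the linear-in-surprisal fit should not be beaten. A minimal scikit-learn sketch on synthetic data (the paper itself uses continuous-time deconvolutional regression over six corpora, which this does not reproduce):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer

rng = np.random.default_rng(1)
surprisal = rng.gamma(2.0, 2.0, size=2000)            # -log p(word | context)
rt = 250 + 12 * surprisal + 20 * rng.standard_normal(2000)  # logarithmic world

X = surprisal.reshape(-1, 1)
models = {
    "linear-in-surprisal": LinearRegression(),
    "spline-in-surprisal": make_pipeline(
        SplineTransformer(n_knots=6, degree=3), LinearRegression()
    ),
}
for name, m in models.items():
    print(name, round(cross_val_score(m, X, rt, cv=5).mean(), 4))
# Near-identical CV R^2 means the spline finds no curvature beyond linearity,
# the signature of a logarithmic effect of raw probability on reading time.
```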
Affiliation(s)
- Cory Shain: Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139.
- Clara Meister: Department of Computer Science, Institute for Machine Learning, ETH Zürich, Zürich 8092, Switzerland.
- Tiago Pimentel: Department of Computer Science and Technology, University of Cambridge, Cambridge CB3 0FD, United Kingdom.
- Ryan Cotterell: Department of Computer Science, Institute for Machine Learning, ETH Zürich, Zürich 8092, Switzerland.
- Roger Levy: Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139.
7
Bruera A, Tao Y, Anderson A, Çokal D, Haber J, Poesio M. Modeling Brain Representations of Words' Concreteness in Context Using GPT-2 and Human Ratings. Cogn Sci 2023; 47:e13388. PMID: 38103208. DOI: 10.1111/cogs.13388.
Abstract
The meaning of most words depends on their context. Understanding how the human brain extracts contextualized meaning, and identifying where in the brain this takes place, remain important scientific challenges. But technological and computational advances in neuroscience and artificial intelligence now provide unprecedented opportunities to study the human brain in action as language is read and understood. Recent contextualized language models seem able to capture homonymic meaning variation ("bat" in a baseball vs. a vampire context), as well as more nuanced differences of meaning, for example in polysemous words such as "book", which can be interpreted in distinct but related senses ("explain a book", information, vs. "open a book", object) whose differences are fine-grained. We study these subtle differences in lexical meaning along the concrete/abstract dimension, as they are triggered by verb-noun semantic composition. We analyze functional magnetic resonance imaging (fMRI) activations elicited by Italian verb phrases containing nouns whose interpretation is affected by the verb to different degrees. Using a contextualized language model and human concreteness ratings, we shed light on where in the brain such fine-grained meaning variation takes place and how it is coded. Our results show that phrase concreteness judgments and the contextualized model can predict BOLD activation associated with semantic composition within the language network. Importantly, representations derived from a complex, nonlinear composition process consistently outperform simpler composition approaches. This is compatible with a holistic view of semantic composition in the brain, where semantic representations are modified by the process of composition itself. When looking at individual brain areas, we find that encoding performance is statistically significant, although with differing patterns of results that suggest differential involvement, in the posterior superior temporal sulcus, inferior frontal gyrus, anterior temporal lobe, and motor areas previously associated with the processing of concreteness/abstractness.
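Encoding analyses of this kind map contextualized representations onto voxel responses with a regularized linear model. A minimal sketch pairing GPT-2 hidden states with ridge regression; the layer choice, mean pooling, English phrases, and the random stand-in for BOLD data are all assumptions, not the paper's (Italian-language) pipeline:

```python
import numpy as np
import torch
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score
from transformers import GPT2Model, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2Model.from_pretrained("gpt2").eval()

def phrase_embedding(phrase: str, layer: int = 8) -> np.ndarray:
    """Mean-pool one hidden layer over the phrase's tokens."""
    ids = tok(phrase, return_tensors="pt")
    with torch.no_grad():
        hidden = lm(**ids, output_hidden_states=True).hidden_states[layer]
    return hidden[0].mean(dim=0).numpy()

phrases = ["open a book", "explain a book", "break a glass", "raise a doubt"]
X = np.stack([phrase_embedding(p) for p in phrases * 10])  # toy sample size
y = np.random.default_rng(2).standard_normal(len(X))       # fake voxel signal
enc = RidgeCV(alphas=np.logspace(-2, 4, 13))
print(cross_val_score(enc, X, y, cv=5).mean())  # chance-level on random targets
```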
Affiliation(s)
- Andrea Bruera: School of Electronic Engineering and Computer Science, Cognitive Science Research Group, Queen Mary University of London; Lise Meitner Research Group Cognition and Plasticity, Max Planck Institute for Human Cognitive and Brain Sciences.
- Yuan Tao: Department of Cognitive Science, Johns Hopkins University.
- Derya Çokal: Department of German Language and Literature I-Linguistics, University of Cologne.
- Janosch Haber: School of Electronic Engineering and Computer Science, Cognitive Science Research Group, Queen Mary University of London; Chattermill, London.
- Massimo Poesio: School of Electronic Engineering and Computer Science, Cognitive Science Research Group, Queen Mary University of London; Department of Information and Computing Sciences, University of Utrecht.
8
Kauf C, Ivanova AA, Rambelli G, Chersoni E, She JS, Chowdhury Z, Fedorenko E, Lenci A. Event Knowledge in Large Language Models: The Gap Between the Impossible and the Unlikely. Cogn Sci 2023; 47:e13386. PMID: 38009752. DOI: 10.1111/cogs.13386.
Abstract
Word co-occurrence patterns in language corpora contain a surprising amount of conceptual knowledge. Large language models (LLMs), trained to predict words in context, leverage these patterns to achieve impressive performance on diverse semantic tasks requiring world knowledge. An important but understudied question about LLMs' semantic abilities is whether they acquire generalized knowledge of common events. Here, we test whether five pretrained LLMs (from 2018's BERT to 2023's MPT) assign a higher likelihood to plausible descriptions of agent-patient interactions than to minimally different implausible versions of the same event. Using three curated sets of minimal sentence pairs (total n = 1215), we found that pretrained LLMs possess substantial event knowledge, outperforming other distributional language models. In particular, they almost always assign a higher likelihood to possible versus impossible events (The teacher bought the laptop vs. The laptop bought the teacher). However, LLMs show less consistent preferences for likely versus unlikely events (The nanny tutored the boy vs. The boy tutored the nanny). In follow-up analyses, we show that (i) LLM scores are driven by both plausibility and surface-level sentence features, (ii) LLM scores generalize well across syntactic variants (active vs. passive constructions) but less well across semantic variants (synonymous sentences), (iii) some LLM errors mirror human judgment ambiguity, and (iv) sentence plausibility serves as an organizing dimension in internal LLM representations. Overall, our results show that important aspects of event knowledge naturally emerge from distributional linguistic patterns, but also highlight a gap between representations of possible/impossible and likely/unlikely events.
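The core measurement is a within-pair comparison of sentence-level log-probabilities. A minimal sketch with GPT-2 standing in for the five LLMs the paper evaluates; its exact scoring details may differ:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sentence_logprob(sentence: str) -> float:
    """Sum of token log-probabilities under the causal LM."""
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    return logprobs.gather(1, ids[0, 1:].unsqueeze(1)).sum().item()

pairs = [  # plausible vs. minimally different implausible version
    ("The teacher bought the laptop.", "The laptop bought the teacher."),
    ("The nanny tutored the boy.", "The boy tutored the nanny."),
]
correct = sum(sentence_logprob(a) > sentence_logprob(b) for a, b in pairs)
print(f"preferred the plausible sentence in {correct}/{len(pairs)} pairs")
```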
Affiliation(s)
- Carina Kauf: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology; McGovern Institute for Brain Research, Massachusetts Institute of Technology.
- Anna A Ivanova: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology; McGovern Institute for Brain Research, Massachusetts Institute of Technology; Computer Science and Artificial Intelligence Lab, Massachusetts Institute of Technology.
- Giulia Rambelli: Department of Modern Languages, Literatures and Cultures, University of Bologna.
- Emmanuele Chersoni: Department of Chinese and Bilingual Studies, Hong Kong Polytechnic University.
- Jingyuan Selena She: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology; McGovern Institute for Brain Research, Massachusetts Institute of Technology.
- Evelina Fedorenko: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology; McGovern Institute for Brain Research, Massachusetts Institute of Technology.
- Alessandro Lenci: Department of Philology, Literature, and Linguistics, University of Pisa.
9
Shin GH, Mun S. Explainability of neural networks for child language: Agent-First strategy in comprehension of Korean active transitive construction. Dev Sci 2023; 26:e13405. PMID: 37161692. DOI: 10.1111/desc.13405.
Abstract
This study investigates how neural networks address the properties of children's linguistic knowledge, focusing on the Agent-First strategy in the comprehension of an active transitive construction in Korean. We develop various neural-network models and measure their classification performance on the test stimuli used in a behavioural experiment involving scrambling and omission of sentential components at varying degrees. Results show that, despite some compatibility between these models' performance and the children's response patterns, the models do not fully approximate the children's utilisation of this strategy, demonstrating by-model and by-condition asymmetries. The findings suggest that neural networks can utilise information about formal co-occurrences to access the intended message to a certain degree, but the outcome of this process may differ substantially from how a child (as a developing processor) engages in comprehension. This implies some limits on what neural networks can reveal about the developmental trajectories of child language.
Research Highlights:
- This study investigates how neural networks address properties of child language.
- We focus on the Agent-First strategy in comprehension of the Korean active transitive.
- Results show by-model and by-condition asymmetries against children's response patterns.
- This implies some limits of neural networks in revealing properties of child language.
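The by-model and by-condition comparison can be organized as a per-condition agreement table between model predictions and children's responses. A minimal pandas sketch; the condition labels and all numbers are invented for illustration:

```python
import pandas as pd

# Hypothetical item-level results: did the model / the children's modal
# response apply the Agent-First strategy (1) or not (0) per item?
df = pd.DataFrame({
    "condition": ["SOV", "OSV", "SV-omit", "OV-omit"] * 3,
    "model_agent_first": [1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1],
    "child_agent_first": [1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1],
})
table = (
    df.groupby("condition")[["model_agent_first", "child_agent_first"]]
    .mean()
    .rename(columns={"model_agent_first": "model", "child_agent_first": "children"})
)
table["gap"] = (table["model"] - table["children"]).abs()
print(table.round(2))  # by-condition asymmetries show up as large gaps
```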
Affiliation(s)
- Gyu-Ho Shin: Department of Linguistics, University of Illinois Chicago, Chicago, IL, USA; Department of Asian Studies, Palacky University Olomouc, Olomouc, Czech Republic.
- Seongmin Mun: Humanities Research Institute, Ajou University, Suwon-si, Gyeonggi-do, South Korea.
10
Hoover JL, Sonderegger M, Piantadosi ST, O’Donnell TJ. The Plausibility of Sampling as an Algorithmic Theory of Sentence Processing. Open Mind (Camb) 2023; 7:350-391. PMID: 37637302. PMCID: PMC10449406. DOI: 10.1162/opmi_a_00086.
Abstract
Words that are more surprising given context take longer to process. However, no incremental parsing algorithm has been shown to directly predict this phenomenon. In this work, we focus on a class of algorithms whose runtime does naturally scale in surprisal: those that involve repeatedly sampling from the prior. Our first contribution is to show that simple examples of such algorithms predict runtime to increase superlinearly with surprisal, and also predict variance in runtime to increase. These two predictions stand in contrast with the literature on surprisal theory (Hale, 2001; Levy, 2008a), which assumes that expected processing cost increases linearly with surprisal and makes no prediction about variance. In the second part of this paper, we conduct an empirical study of the relationship between surprisal and reading time, using a collection of modern language models to estimate surprisal. We find that with better language models, reading time increases superlinearly in surprisal, and that variance increases as well. These results are consistent with the predictions of sampling-based algorithms.
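The runtime prediction follows from the geometry of guess-from-the-prior algorithms: the number of independent draws needed to produce a word of probability p is geometrically distributed, with mean 1/p = exp(surprisal) and variance (1 - p)/p^2, both superlinear in surprisal. A minimal simulation sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
for p in (0.5, 0.1, 0.02, 0.004):
    surprisal = -np.log(p)
    # Number of i.i.d. draws from the prior until the target word comes up.
    runtimes = rng.geometric(p, size=200_000)
    print(
        f"surprisal={surprisal:5.2f}  mean runtime={runtimes.mean():8.1f} "
        f"(theory {1 / p:8.1f})  var={runtimes.var():.3g} "
        f"(theory {(1 - p) / p**2:.3g})"
    )
# Mean runtime is exp(surprisal): superlinear in surprisal, with variance
# growing even faster, matching the paper's two qualitative predictions.
```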
Affiliation(s)
- Jacob Louis Hoover: McGill University, Montréal, Canada; Mila Québec AI Institute, Montréal, Canada.
- Timothy J. O’Donnell: McGill University, Montréal, Canada; Mila Québec AI Institute, Montréal, Canada; Canada CIFAR AI Chair, Mila.
11
Desbordes T, Lakretz Y, Chanoine V, Oquab M, Badier JM, Trébuchon A, Carron R, Bénar CG, Dehaene S, King JR. Dimensionality and Ramping: Signatures of Sentence Integration in the Dynamics of Brains and Deep Language Models. J Neurosci 2023; 43:5350-5364. PMID: 37217308. PMCID: PMC10359032. DOI: 10.1523/jneurosci.1163-22.2023.
Abstract
A sentence is more than the sum of its words: its meaning depends on how they combine with one another. The brain mechanisms underlying such semantic composition remain poorly understood. To shed light on the neural vector code underlying semantic composition, we introduce two hypotheses: (1) the intrinsic dimensionality of the space of neural representations should increase as a sentence unfolds, paralleling the growing complexity of its semantic representation; and (2) this progressive integration should be reflected in ramping and sentence-final signals. To test these predictions, we designed a dataset of closely matched normal and jabberwocky sentences (composed of meaningless pseudowords) and displayed them to deep language models and to 11 human participants (5 men and 6 women) monitored with simultaneous MEG and intracranial EEG. In both deep language models and electrophysiological data, we found that representational dimensionality was higher for meaningful sentences than for jabberwocky. Furthermore, multivariate decoding of normal versus jabberwocky confirmed three dynamic patterns: (1) a phasic pattern following each word, peaking in temporal and parietal areas; (2) a ramping pattern, characteristic of bilateral inferior and middle frontal gyri; and (3) a sentence-final pattern in left superior frontal gyrus and right orbitofrontal cortex. These results provide a first glimpse into the neural geometry of semantic integration and constrain the search for a neural code of linguistic composition.
Significance Statement: Starting from general linguistic concepts, we make two sets of predictions about neural signals evoked by reading multiword sentences. First, the intrinsic dimensionality of the representation should grow with additional meaningful words. Second, the neural dynamics should exhibit signatures of encoding, maintaining, and resolving semantic composition. We first validated these hypotheses in deep neural language models, artificial neural networks trained on text that perform very well on many natural language processing tasks. Then, using a unique combination of MEG and intracranial electrodes, we recorded high-resolution brain data from human participants while they read a controlled set of sentences. Time-resolved dimensionality analysis showed increasing dimensionality with meaning, and multivariate decoding allowed us to isolate the three dynamical patterns we had hypothesized.
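Intrinsic dimensionality in analyses like this is commonly summarized by the participation ratio of the covariance eigenspectrum, PR = (sum_i lambda_i)^2 / sum_i lambda_i^2. A minimal sketch on synthetic representations whose rank grows across word positions; the paper's actual estimator may differ:

```python
import numpy as np

def participation_ratio(X: np.ndarray) -> float:
    """PR of the covariance eigenspectrum of X (samples x features)."""
    lam = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    lam = np.clip(lam, 0, None)  # guard against tiny negative eigenvalues
    return lam.sum() ** 2 / (lam ** 2).sum()

rng = np.random.default_rng(4)
# Representations that occupy more directions as the sentence unfolds.
for n_words, rank in [(2, 3), (5, 8), (9, 20)]:
    basis = rng.standard_normal((rank, 256))
    X = rng.standard_normal((400, rank)) @ basis  # trials x features
    print(f"after {n_words} words: PR = {participation_ratio(X):.1f}")
```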
Affiliation(s)
- Théo Desbordes: Meta AI Research, Paris 75002, France; Cognitive Neuroimaging Unit, NeuroSpin center, 91191 Gif-sur-Yvette, France.
- Yair Lakretz: Cognitive Neuroimaging Unit, NeuroSpin center, 91191 Gif-sur-Yvette, France.
- Valérie Chanoine: Institute of Language, Communication and the Brain, Aix-en-Provence 13100, France; Aix-Marseille Université, Centre National de la Recherche Scientifique, LPL, Aix-en-Provence 13100, France.
- Jean-Michel Badier: Aix-Marseille Université, Institut National de la Santé et de la Recherche Médicale, CNRS, LPL, Aix-en-Provence 13100, France; Inst Neurosci Syst, Marseille 13005, France.
- Agnès Trébuchon: Aix-Marseille Université, Institut National de la Santé et de la Recherche Médicale, CNRS, LPL, Aix-en-Provence 13100, France; Inst Neurosci Syst, Marseille 13005, France; Assistance Publique Hôpitaux de Marseille, Timone hospital, Epileptology and Cerebral Rhythmology, Marseille 13385, France.
- Romain Carron: Aix-Marseille Université, Institut National de la Santé et de la Recherche Médicale, CNRS, LPL, Aix-en-Provence 13100, France; Inst Neurosci Syst, Marseille 13005, France; Assistance Publique Hôpitaux de Marseille, Timone hospital, Functional and Stereotactic Neurosurgery, Marseille 13385, France.
- Christian-G Bénar: Aix-Marseille Université, Institut National de la Santé et de la Recherche Médicale, CNRS, LPL, Aix-en-Provence 13100, France; Inst Neurosci Syst, Marseille 13005, France.
- Stanislas Dehaene: Université Paris-Saclay, Institut National de la Santé et de la Recherche Médicale, Commissariat à l'Energie Atomique, Cognitive Neuroimaging Unit, NeuroSpin center, Saclay 91191, France; Collège de France, PSL University, Paris 75231, France.
- Jean-Rémi King: Meta AI Research, Paris 75002, France; Cognitive Neuroimaging Unit, NeuroSpin center, 91191 Gif-sur-Yvette, France; LSP, École normale supérieure, PSL (Paris Sciences & Lettres) University, CNRS, 75005 Paris, France.