1. Celma-Miralles A, Seeberg AB, Haumann NT, Vuust P, Petersen B. Experience with the cochlear implant enhances the neural tracking of spectrotemporal patterns in the Alberti bass. Hear Res 2024;452:109105. PMID: 39216335. DOI: 10.1016/j.heares.2024.109105.
Abstract
Cochlear implant (CI) users experience diminished music enjoyment due to the technical limitations of the CI. Nonetheless, behavioral studies have reported that rhythmic features are transmitted well through the CI. Still, the gradual improvement of rhythm perception after CI switch-on has not yet been characterized with neurophysiological measures. To fill this gap, we reanalyzed the electroencephalographic responses of participants from two previous mismatch negativity studies. These studies included eight recently implanted CI users measured twice, within the first six weeks after CI switch-on and approximately three months later; thirteen experienced CI users with a median of 7 years of CI use; and fourteen normally hearing (NH) controls. All participants listened to a repetitive four-tone pattern (known in music as the Alberti bass) for 35 min. Applying frequency tagging, we estimated the neural activity synchronized to the periodicities of the Alberti bass. We hypothesized that longer experience with the CI would be reflected in stronger frequency-tagged neural responses, approaching those of NH controls. We found an increase in the frequency-tagged amplitudes after only three months of CI use; this increase in neural synchronization may reflect an early adaptation to CI stimulation. Moreover, the frequency-tagged amplitudes of experienced CI users were significantly greater than those of recently implanted CI users, but still smaller than those of NH controls. The frequency-tagged neural responses did not merely reflect spectrotemporal changes in the stimuli (i.e., intensity or spectral content fluctuating over time), but also showed non-linear transformations that appeared to enhance relevant periodicities of the Alberti bass. Our findings provide neurophysiological evidence of a gradual adaptation to the CI, noticeable already after three months, resulting in close-to-NH brain processing of the spectrotemporal features of musical rhythms after extended CI use.
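The frequency-tagging logic this abstract relies on can be illustrated with a toy computation: take the FFT amplitude at a stimulation frequency and subtract the mean amplitude of neighboring bins as a noise baseline, a common signal-to-noise correction in frequency-tagging work. This is a minimal sketch on simulated data, not the authors' pipeline; the function name, the simulated signal, and the six-bins-per-side baseline are our own assumptions.

```python
import numpy as np

def frequency_tagged_amplitude(eeg, fs, target_hz, n_neighbors=5):
    """Amplitude at a tagged frequency, baseline-corrected by subtracting
    the mean amplitude of nearby FFT bins (skipping the bins adjacent to
    the target, which may contain spectral leakage)."""
    n = len(eeg)
    amps = np.abs(np.fft.rfft(eeg)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    i = int(np.argmin(np.abs(freqs - target_hz)))
    lo = amps[max(i - n_neighbors - 2, 0): i - 1]   # bins below the target
    hi = amps[i + 2: i + n_neighbors + 3]            # bins above the target
    noise = np.mean(np.concatenate([lo, hi]))
    return amps[i] - noise

# Toy check: a 2 Hz sinusoid buried in noise yields a clear tagged
# response at 2 Hz but not at an untagged control frequency.
fs = 250.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 2.0 * t) + 0.5 * rng.standard_normal(t.size)
snr_2hz = frequency_tagged_amplitude(eeg, fs, 2.0)
snr_7hz = frequency_tagged_amplitude(eeg, fs, 7.3)
```

In real data the same measure would be computed per electrode and per stimulus periodicity (tone rate, pattern rate), then compared across groups.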
Affiliation(s)
- Alexandre Celma-Miralles
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Denmark
- Alberte B Seeberg
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Denmark
- Niels T Haumann
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Denmark
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Denmark
- Bjørn Petersen
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Denmark
2. Chalas N, Meyer L, Lo CW, Park H, Kluger DS, Abbasi O, Kayser C, Nitsch R, Gross J. Dissociating prosodic from syntactic delta activity during natural speech comprehension. Curr Biol 2024;34:3537-3549.e5. PMID: 39047734. DOI: 10.1016/j.cub.2024.06.072.
Abstract
Decoding human speech requires the brain to segment the incoming acoustic signal into meaningful linguistic units, ranging from syllables and words to phrases. Integrating these linguistic constituents into a coherent percept lays the foundation of compositional meaning, and hence understanding. Prosodic cues, such as pauses, are one important aid to segmentation in natural speech, but their interplay with higher-level linguistic processing is still unknown. Here, we dissociate the neural tracking of prosodic pauses from the segmentation of multi-word chunks using magnetoencephalography (MEG). We find that manipulating the regularity of pauses disrupts slow speech-brain tracking bilaterally in auditory areas (below 2 Hz) and in turn increases left-lateralized coherence of higher-frequency auditory activity at speech onsets (around 25-45 Hz). Critically, we also find that multi-word chunks, defined as short, coherent bundles of inter-word dependencies, are processed through the rhythmic fluctuations of low-frequency activity (below 2 Hz), bilaterally and independently of prosodic cues. Importantly, low-frequency alignment at chunk onsets increases the accuracy of an encoding model in bilateral auditory and frontal areas while controlling for the effect of acoustics. Our findings provide novel insights into the neural basis of speech perception, demonstrating that both acoustic features (prosodic cues) and abstract linguistic processing at the multi-word timescale are independently underpinned by low-frequency electrophysiological brain activity in the delta frequency range.
Affiliation(s)
- Nikos Chalas
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany; Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany; Institute for Translational Neuroscience, University of Münster, Münster, Germany
- Lars Meyer
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Chia-Wen Lo
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Hyojin Park
- Centre for Human Brain Health (CHBH), School of Psychology, University of Birmingham, Birmingham, UK
- Daniel S Kluger
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany; Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
- Omid Abbasi
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Christoph Kayser
- Department for Cognitive Neuroscience, Faculty of Biology, Bielefeld University, 33615 Bielefeld, Germany
- Robert Nitsch
- Institute for Translational Neuroscience, University of Münster, Münster, Germany
- Joachim Gross
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany; Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
3. Zhao J, Martin AE, Coopmans CW. Structural and sequential regularities modulate phrase-rate neural tracking. Sci Rep 2024;14:16603. PMID: 39025957. PMCID: PMC11258220. DOI: 10.1038/s41598-024-67153-z.
Abstract
Electrophysiological brain activity has been shown to synchronize with the quasi-regular repetition of grammatical phrases in connected speech, so-called phrase-rate neural tracking. Current debate centers on whether this phenomenon is best explained in terms of the syntactic properties of phrases or in terms of syntax-external information, such as the sequential repetition of parts of speech. As these two factors were confounded in previous studies, much of the literature is compatible with both accounts. Here, we used electroencephalography (EEG) to determine if and when the brain is sensitive to both types of information. Twenty native speakers of Mandarin Chinese listened to isochronously presented streams of monosyllabic words, which contained either grammatical two-word phrases (e.g., catch fish, sell house) or non-grammatical word combinations (e.g., full lend, bread far). Within the grammatical conditions, we varied two structural factors: the position of the head of each phrase and the type of attachment. Within the non-grammatical conditions, we varied the consistency with which parts of speech were repeated. Tracking was quantified through evoked power and inter-trial phase coherence, both derived from the frequency-domain representation of the EEG responses. As expected, neural tracking at the phrase rate was stronger for grammatical sequences than for non-grammatical sequences without syntactic structure. Moreover, it was modulated by both attachment type and head position, revealing the structure-sensitivity of phrase-rate tracking. We additionally found that the brain tracks the repetition of parts of speech in non-grammatical sequences. These data provide an integrative perspective on the current debate about neural tracking effects, revealing that the brain utilizes regularities computed over multiple levels of linguistic representation to guide rhythmic computation.
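The two tracking measures named here, evoked power and inter-trial phase coherence (ITC), can be sketched in a few lines: evoked power is the power of the trial-averaged response at the frequency of interest, while ITC is the length of the mean unit phase vector across trials (near 1 for perfect phase locking, near 0 for random phases). The sketch below is a toy illustration on simulated trials, not the study's analysis code; all names and parameters are our own assumptions.

```python
import numpy as np

def evoked_power_and_itc(trials, fs, target_hz):
    """Frequency-domain tracking measures from a trials x samples array:
    evoked power (power of the trial-averaged response at target_hz) and
    inter-trial phase coherence (resultant length of per-trial phases)."""
    n_trials, n_samples = trials.shape
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    i = int(np.argmin(np.abs(freqs - target_hz)))
    spectra = np.fft.rfft(trials, axis=1)
    evoked = np.abs(np.fft.rfft(trials.mean(axis=0))[i] / n_samples) ** 2
    itc = np.abs(np.mean(spectra[:, i] / np.abs(spectra[:, i])))
    return evoked, itc

# Toy check: a component phase-locked across trials at 1 Hz gives high
# ITC and evoked power; a noise-only frequency gives low values.
fs, n_trials = 100.0, 40
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
trials = np.sin(2 * np.pi * 1.0 * t) + rng.standard_normal((n_trials, t.size))
ev1, itc1 = evoked_power_and_itc(trials, fs, 1.0)
ev4, itc4 = evoked_power_and_itc(trials, fs, 4.1)
```

In a design like the one described, the target frequencies would be the word rate and the phrase rate of the isochronous streams.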
Affiliation(s)
- Junyuan Zhao
- Department of Linguistics, University of Michigan, Ann Arbor, MI, USA
- Andrea E Martin
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Cas W Coopmans
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
4. Degano G, Donhauser PW, Gwilliams L, Merlo P, Golestani N. Speech prosody enhances the neural processing of syntax. Commun Biol 2024;7:748. PMID: 38902370. PMCID: PMC11190187. DOI: 10.1038/s42003-024-06444-7.
Abstract
Human language relies on the correct processing of syntactic information, which is essential for successful communication between speakers. As an abstract level of language, syntax has often been studied separately from the physical form of the speech signal, thus often masking the interactions that can promote better syntactic processing in the human brain. However, behavioral and neural evidence from adults suggests that prosody and syntax interact, and studies in infants support the notion that prosody assists language learning. Here we analyze an MEG dataset to investigate how acoustic cues, specifically prosody, interact with syntactic representations in the brains of native English speakers. More specifically, to examine whether prosody enhances the cortical encoding of syntactic representations, we decode syntactic phrase boundaries directly from brain activity and evaluate possible modulations of this decoding by prosodic boundaries. Our findings demonstrate that the presence of prosodic boundaries improves the neural representation of phrase boundaries, indicating a facilitative role of prosodic cues in the processing of abstract linguistic features. This work has implications for interactive models of how the brain processes different linguistic features. Future research is needed to establish the neural underpinnings of prosody-syntax interactions in languages with different typological characteristics.
Affiliation(s)
- Giulio Degano
- Department of Psychology, Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
- Peter W Donhauser
- Ernst Strüngmann Institute for Neuroscience in Cooperation with Max Planck Society, Frankfurt am Main, Germany
- Laura Gwilliams
- Department of Psychology, Stanford University, Stanford, CA, USA
- Paola Merlo
- Department of Linguistics, University of Geneva, Geneva, Switzerland
- University Centre for Informatics, University of Geneva, Geneva, Switzerland
- Narly Golestani
- Department of Psychology, Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
- Brain and Language Lab, Cognitive Science Hub, University of Vienna, Vienna, Austria
- Department of Behavioral and Cognitive Biology, Faculty of Life Sciences, University of Vienna, Vienna, Austria
5. Lo CW, Meyer L. Chunk boundaries disrupt dependency processing in an AG: Reconciling incremental processing and discrete sampling. PLoS One 2024;19:e0305333. PMID: 38889141. PMCID: PMC11185458. DOI: 10.1371/journal.pone.0305333.
Abstract
Language is rooted in our ability to compose: We link words together, fusing their meanings. Links are not limited to neighboring words but often span intervening words. The ability to process these non-adjacent dependencies (NADs) conflicts with the brain's sampling of speech: We consume speech in chunks that are limited in time, containing only a limited number of words. It is unknown how we link together words that belong to separate chunks. Here, we report that we cannot, at least not so well. In our electroencephalography (EEG) study, 37 human listeners learned chunks and dependencies from an artificial grammar (AG) composed of syllables. The multi-syllable chunks to be learned were equal-sized, allowing us to employ a frequency-tagging approach. On top of the chunks, the syllable streams contained NADs that were either confined to a single chunk or crossed a chunk boundary. Frequency analyses of the EEG revealed a spectral peak at the chunk rate, showing that participants learned the chunks. NADs that crossed boundaries were associated with smaller electrophysiological responses than within-chunk NADs. This shows that NADs are processed readily when they are confined to the same chunk, but not as well when they cross a chunk boundary. Our findings help to reconcile the classical notion that language is processed incrementally with recent evidence for discrete perceptual sampling of speech. This has implications for language acquisition and processing, as well as for the general view of syntax in human language.
Affiliation(s)
- Chia-Wen Lo
- Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Lars Meyer
- Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- University Clinic Münster, Münster, Germany
6. Young MJ, Fecchio M, Bodien YG, Edlow BL. Covert cortical processing: a diagnosis in search of a definition. Neurosci Conscious 2024;2024:niad026. PMID: 38327828. PMCID: PMC10849751. DOI: 10.1093/nc/niad026.
Abstract
Historically, clinical evaluation of unresponsive patients following brain injury has relied principally on serial behavioral examination to search for emerging signs of consciousness and track recovery. Advances in neuroimaging and electrophysiologic techniques now enable clinicians to peer into residual brain functions even in the absence of overt behavioral signs. These advances have expanded clinicians' ability to sub-stratify behaviorally unresponsive and seemingly unaware patients following brain injury by querying and classifying covert brain activity made evident through active or passive neuroimaging or electrophysiologic techniques, including functional MRI, electroencephalography (EEG), transcranial magnetic stimulation-EEG, and positron emission tomography. Clinical research has thus reciprocally influenced clinical practice, giving rise to new diagnostic categories including cognitive-motor dissociation (i.e., 'covert consciousness') and covert cortical processing (CCP). While covert consciousness has received extensive attention and study, CCP is relatively less understood. We describe CCP as an emerging and clinically relevant state of consciousness, marked by the presence of intact association cortex responses to environmental stimuli in the absence of behavioral evidence of stimulus processing. CCP is not a monotonic state but rather encapsulates a spectrum of possible association cortex responses, from rudimentary to complex, and to a range of possible stimuli. In constructing a roadmap for this evolving field, we emphasize that efforts to inform clinicians, philosophers, and researchers of this condition are crucial. Along with strategies to sensitize diagnostic criteria and disorders-of-consciousness nosology to these vital discoveries, democratizing access to the resources necessary for clinical identification of CCP is an emerging clinical and ethical imperative.
Affiliation(s)
- Michael J Young
- Center for Neurotechnology and Neurorecovery, Department of Neurology, Massachusetts General Hospital and Harvard Medical School, 101 Merrimac Street, Suite 310, Boston, MA 02114, USA
- Matteo Fecchio
- Center for Neurotechnology and Neurorecovery, Department of Neurology, Massachusetts General Hospital and Harvard Medical School, 101 Merrimac Street, Suite 310, Boston, MA 02114, USA
- Yelena G Bodien
- Center for Neurotechnology and Neurorecovery, Department of Neurology, Massachusetts General Hospital and Harvard Medical School, 101 Merrimac Street, Suite 310, Boston, MA 02114, USA
- Department of Physical Medicine and Rehabilitation, Spaulding Rehabilitation Hospital, Harvard Medical School, 300 1st Ave, Charlestown, Boston, MA 02129, USA
- Brian L Edlow
- Center for Neurotechnology and Neurorecovery, Department of Neurology, Massachusetts General Hospital and Harvard Medical School, 101 Merrimac Street, Suite 310, Boston, MA 02114, USA
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, 149 13th St, Charlestown, MA 02129, USA
7. Ding N. Low-frequency neural parsing of hierarchical linguistic structures. Nat Rev Neurosci 2023;24:792. PMID: 37770624. DOI: 10.1038/s41583-023-00749-y.
Affiliation(s)
- Nai Ding
- Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou, China
8. Kazanina N, Tavano A. Reply to 'Low-frequency neural parsing of hierarchical linguistic structures'. Nat Rev Neurosci 2023;24:793. PMID: 37770625. DOI: 10.1038/s41583-023-00750-5.
Affiliation(s)
- Nina Kazanina
- University of Bristol, Bristol, UK
- Higher School of Economics, Moscow, Russia
- Alessandro Tavano
- Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Goethe University Frankfurt, Frankfurt am Main, Germany
9. Inbar M, Genzer S, Perry A, Grossman E, Landau AN. Intonation units in spontaneous speech evoke a neural response. J Neurosci 2023;43:8189-8200. PMID: 37793909. PMCID: PMC10697392. DOI: 10.1523/jneurosci.0235-23.2023.
Abstract
Spontaneous speech is produced in chunks called intonation units (IUs). IUs are defined by a set of prosodic cues and presumably occur in all human languages. Recent work has shown that, across different grammatical and sociocultural conditions, IUs form rhythms of ∼1 unit per second. Linguistic theory suggests that IUs pace the flow of information in the discourse. As a result, IUs provide a promising and hitherto unexplored theoretical framework for studying the neural mechanisms of communication. In this article, we identify a neural response unique to the boundary defined by the IU. We measured the EEG of human participants (of either sex) who listened to different speakers recounting an emotional life event. We analyzed the speech stimuli linguistically and modeled the EEG response at word offset using a GLM approach. We find that the EEG response to IU-final words differs from the response to IU-nonfinal words, even when equating acoustic boundary strength. Finally, we relate our findings to the body of research on rhythmic brain mechanisms in speech processing by examining the unique contributions of IUs and acoustic boundary strength in predicting delta-band EEG. This analysis suggests that IU-related neural activity, which is tightly linked to the classic Closure Positive Shift (CPS), could be a time-locked component that captures the previously characterized delta-band neural speech tracking.
Significance Statement: Linguistic communication is central to human experience, and its neural underpinnings have been a topic of much research in recent years. Neuroscientific research has benefited from studying human behavior in naturalistic settings, an endeavor that requires explicit models of complex behavior. Usage-based linguistic theory suggests that spoken language is prosodically structured into intonation units. We reveal that the neural system is attuned to intonation units by explicitly modeling their impact on the EEG response beyond mere acoustics. To our understanding, this is the first time this has been demonstrated in spontaneous speech under naturalistic conditions, and within a theoretical framework that connects the prosodic chunking of speech, on the one hand, with the flow of information during communication, on the other.
Affiliation(s)
- Maya Inbar
- Department of Linguistics, Hebrew University of Jerusalem, Mount Scopus, Jerusalem 9190501, Israel
- Department of Psychology, Hebrew University of Jerusalem, Mount Scopus, Jerusalem 9190501, Israel
- Department of Cognitive and Brain Sciences, Hebrew University of Jerusalem, Mount Scopus, Jerusalem 9190501, Israel
- Shir Genzer
- Department of Psychology, Hebrew University of Jerusalem, Mount Scopus, Jerusalem 9190501, Israel
- Anat Perry
- Department of Psychology, Hebrew University of Jerusalem, Mount Scopus, Jerusalem 9190501, Israel
- Eitan Grossman
- Department of Linguistics, Hebrew University of Jerusalem, Mount Scopus, Jerusalem 9190501, Israel
- Ayelet N Landau
- Department of Psychology, Hebrew University of Jerusalem, Mount Scopus, Jerusalem 9190501, Israel
- Department of Cognitive and Brain Sciences, Hebrew University of Jerusalem, Mount Scopus, Jerusalem 9190501, Israel
10. Lo CW, Henke L, Martorell J, Meyer L. When linguistic dogma rejects a neuroscientific hypothesis. Nat Rev Neurosci 2023;24:725. PMID: 37696995. DOI: 10.1038/s41583-023-00738-1.
Affiliation(s)
- Chia-Wen Lo
- Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Lena Henke
- Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Jordi Martorell
- Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Brain Rhythms and Cognition Group, Basque Center on Cognition, Brain and Language, Donostia-San Sebastián, Spain
- Lars Meyer
- Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- University Clinic Münster, Münster, Germany
11. Lo CW, Anderson M, Henke L, Meyer L. Periodic fluctuations in reading times reflect multi-word-chunking. Sci Rep 2023;13:18522. PMID: 37898645. PMCID: PMC10613263. DOI: 10.1038/s41598-023-45536-y.
Abstract
Memory is fleeting. To avoid information loss, humans need to recode verbal stimuli into chunks of limited duration, each containing multiple words. Chunk duration may also be limited neurally by the wavelength of periodic brain activity, so-called neural oscillations. While both cognitive and neural constraints predict some degree of behavioral regularity in processing, this remains to be shown. Our analysis of self-paced reading data from 181 participants reveals periodic patterns at a frequency of ∼2 Hz. We defined multi-word chunks using a computational formalization based on dependency annotations and part-of-speech tags: candidate chunks were first generated from the formalization, and the final chunks were selected based on normalized pointwise mutual information. We show that behavioral periodicity is time-aligned to multi-word chunks, suggesting that chunks generated from local dependency clusters may minimize memory demands. This is the first evidence that sentence-processing behavior is periodic, consistent with a role of both memory constraints and endogenous electrophysiological rhythms in the formation of chunks during language comprehension.
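The core analysis idea, finding a periodic component in a self-paced reading-time series, can be sketched with a simple amplitude spectrum. This toy version treats words as evenly spaced samples at a fixed presentation rate, which is our simplifying assumption, not the authors' method; function and variable names are likewise our own.

```python
import numpy as np

def rt_spectrum(rts, word_rate_hz):
    """Amplitude spectrum of a demeaned reading-time series, treating
    each word as one sample taken at the word-presentation rate."""
    x = np.asarray(rts, dtype=float)
    x = x - x.mean()                       # remove the overall mean RT
    amps = np.abs(np.fft.rfft(x)) / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / word_rate_hz)
    return freqs, amps

# Toy check: reading times that slow down on every second word (a
# two-word chunking pattern at a 4 Hz word rate) peak at 2 Hz.
word_rate = 4.0
n_words = 400
rng = np.random.default_rng(2)
rts = 300 + 40 * np.cos(np.pi * np.arange(n_words)) + 10 * rng.standard_normal(n_words)
freqs, amps = rt_spectrum(rts, word_rate)
peak_hz = freqs[np.argmax(amps)]
```

A periodicity aligned with chunk boundaries would show up as exactly such a spectral peak near 2 Hz in the participants' reading-time series.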
Affiliation(s)
- Chia-Wen Lo
- Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, 04013 Leipzig, Germany
- Lena Henke
- Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, 04013 Leipzig, Germany
- Lars Meyer
- Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, 04013 Leipzig, Germany
- Clinic for Phoniatrics and Pedaudiology, University Clinic Münster, 48149 Münster, Germany
12. Kösem A, Dai B, McQueen JM, Hagoort P. Neural tracking of speech envelope does not unequivocally reflect intelligibility. Neuroimage 2023;272:120040. PMID: 36935084. DOI: 10.1016/j.neuroimage.2023.120040.
Abstract
During listening, brain activity tracks the rhythmic structure of speech signals. Here, we directly dissociated the contribution of neural envelope tracking to the processing of speech acoustic cues from that related to linguistic processing. We examined the neural changes associated with the comprehension of noise-vocoded (NV) speech using magnetoencephalography (MEG). Participants listened to NV sentences in a three-phase training paradigm: (1) pre-training, where NV stimuli were barely comprehended; (2) training, with exposure to the original clear version of each speech stimulus; and (3) post-training, where the same stimuli gained intelligibility from the training phase. Using this paradigm, we tested whether the neural response to a speech signal was modulated by its intelligibility without any change in its acoustic structure. To test the influence of spectral degradation on neural envelope tracking independently of training, participants listened to two types of NV sentences (4-band and 2-band NV speech) but were only trained to understand 4-band NV speech. Significant changes in neural tracking were observed in the delta range in relation to the acoustic degradation of speech. However, we failed to find a direct effect of intelligibility on the neural tracking of the speech envelope in either the theta or the delta range, in both auditory region-of-interest and whole-brain sensor-space analyses. This suggests that acoustics greatly influence the neural tracking response to the speech envelope, and that caution is needed when choosing control signals for speech-brain tracking analyses, since a slight change in acoustic parameters can have strong effects on the neural tracking response.
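Envelope tracking as described here is typically quantified by relating a slow amplitude envelope of the speech signal to the recorded brain signal, for example via magnitude-squared coherence in the delta band. The sketch below uses a crude rectify-and-smooth envelope and a Welch-style coherence estimate on simulated signals; it is an illustration under those assumptions, not the study's MEG pipeline, and every name and parameter is our own.

```python
import numpy as np

def envelope(signal, fs, cutoff_hz=10.0):
    """Crude amplitude envelope: rectification followed by a
    moving-average low-pass whose window matches the cutoff."""
    win = max(int(fs / cutoff_hz), 1)
    return np.convolve(np.abs(signal), np.ones(win) / win, mode="same")

def band_coherence(x, y, fs, seg_len, band):
    """Magnitude-squared coherence between x and y, estimated over
    non-overlapping segments and averaged across the band (Hz)."""
    n_seg = len(x) // seg_len
    freqs = np.fft.rfftfreq(seg_len, d=1.0 / fs)
    Sxy = np.zeros(freqs.size, dtype=complex)
    Sxx = np.zeros(freqs.size)
    Syy = np.zeros(freqs.size)
    for k in range(n_seg):
        xs = np.fft.rfft(x[k * seg_len:(k + 1) * seg_len])
        ys = np.fft.rfft(y[k * seg_len:(k + 1) * seg_len])
        Sxy += xs * np.conj(ys)
        Sxx += np.abs(xs) ** 2
        Syy += np.abs(ys) ** 2
    coh = np.abs(Sxy) ** 2 / (Sxx * Syy)
    sel = (freqs >= band[0]) & (freqs <= band[1])
    return coh[sel].mean()

# Toy check: a signal that partly follows a 1.5 Hz amplitude modulation
# shows higher delta-band (0.5-2 Hz) coherence with the speech envelope
# than unrelated noise does.
fs = 100.0
t = np.arange(0, 120, 1 / fs)
rng = np.random.default_rng(3)
mod = 1 + 0.5 * np.sin(2 * np.pi * 1.5 * t)     # slow amplitude modulation
speech = mod * rng.standard_normal(t.size)       # modulated noise carrier
env = envelope(speech, fs)
tracked = env + 0.5 * rng.standard_normal(t.size)
untracked = rng.standard_normal(t.size)
c_track = band_coherence(env, tracked, fs, int(4 * fs), (0.5, 2.0))
c_noise = band_coherence(env, untracked, fs, int(4 * fs), (0.5, 2.0))
```

The abstract's caution applies directly to such measures: changing the acoustics (here, the modulation depth or vocoding bands) changes the coherence even when nothing linguistic varies.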
Affiliation(s)
- Anne Kösem
- Max Planck Institute for Psycholinguistics, 6500 AH Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6500 HB Nijmegen, The Netherlands; Lyon Neuroscience Research Center (CRNL), CoPhy Team, INSERM U1028, 69500 Bron, France
- Bohan Dai
- Max Planck Institute for Psycholinguistics, 6500 AH Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6500 HB Nijmegen, The Netherlands
- James M McQueen
- Max Planck Institute for Psycholinguistics, 6500 AH Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6500 HB Nijmegen, The Netherlands
- Peter Hagoort
- Max Planck Institute for Psycholinguistics, 6500 AH Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6500 HB Nijmegen, The Netherlands
13. Murphy E. ROSE: A Neurocomputational Architecture for Syntax. arXiv preprint 2023; arXiv:2303.08877v1. PMID: 36994166. PMCID: PMC10055479.
Abstract
A comprehensive model of natural language processing in the brain must accommodate four components: representations, operations, structures, and encoding. It further requires a principled account of how these different components mechanistically, and causally, relate to each other. While previous models have isolated regions of interest for structure-building and lexical access, and have utilized specific neural recording measures to expose possible signatures of syntax, many gaps remain with respect to bridging the distinct scales of analysis that map onto these four components. By expanding existing accounts of how neural oscillations can index various linguistic processes, this article proposes a neurocomputational architecture for syntax, termed the ROSE model (Representation, Operation, Structure, Encoding). Under ROSE, the basic data structures of syntax are atomic features, types of mental representations (R), coded at the single-unit and ensemble level. Elementary computations (O) that transform these units into manipulable objects accessible to subsequent structure-building levels are coded via high-frequency broadband γ activity. Low-frequency synchronization and cross-frequency coupling code for recursive categorial inferences (S). Distinct forms of low-frequency coupling and phase-amplitude coupling (δ-θ coupling via pSTS-IFG; θ-γ coupling via IFG to conceptual hubs in lateral and ventral temporal cortex) then encode these structures onto distinct workspaces (E). Causally connecting R to O is spike-phase/LFP coupling; connecting O to S is phase-amplitude coupling; connecting S to E is a system of frontotemporal traveling oscillations; connecting E back to lower levels is low-frequency phase resetting of spike-LFP coupling. This compositional neural code has important implications for algorithmic accounts, since it makes concrete predictions about the appropriate level of study for psycholinguistic parsing models. ROSE relies on neurophysiologically plausible mechanisms, is supported at all four levels by a range of recent empirical research, and provides an anatomically precise and falsifiable grounding for the basic property of natural language syntax: hierarchical, recursive structure-building.
Affiliation(s)
- Elliot Murphy
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School, UTHealth, Houston, TX, USA
- Texas Institute for Restorative Neurotechnologies, UTHealth, Houston, TX, USA
14
Kazanina N, Tavano A. What neural oscillations can and cannot do for syntactic structure building. Nat Rev Neurosci 2023; 24:113-128. [PMID: 36460920 DOI: 10.1038/s41583-022-00659-5] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/02/2022] [Indexed: 12/04/2022]
Abstract
Understanding what someone says requires relating words in a sentence to one another as instructed by the grammatical rules of a language. In recent years, the neurophysiological basis for this process has become a prominent topic of discussion in cognitive neuroscience. Current proposals about the neural mechanisms of syntactic structure building converge on a key role for neural oscillations in this process, but they differ in terms of the exact function that is assigned to them. In this Perspective, we discuss two proposed functions for neural oscillations - chunking and multiscale information integration - and evaluate their merits and limitations, taking into account the fundamentally hierarchical nature of syntactic representations in natural languages. We highlight insights that provide a tangible starting point for a neurocognitive model of syntactic structure building.
Affiliation(s)
- Nina Kazanina
- University of Bristol, Bristol, UK.
- Higher School of Economics, Moscow, Russia.
15
Chalas N, Daube C, Kluger DS, Abbasi O, Nitsch R, Gross J. Speech onsets and sustained speech contribute differentially to delta and theta speech tracking in auditory cortex. Cereb Cortex 2023; 33:6273-6281. [PMID: 36627246 DOI: 10.1093/cercor/bhac502] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2022] [Revised: 11/21/2022] [Accepted: 11/22/2022] [Indexed: 01/12/2023] Open
Abstract
When we attentively listen to an individual's speech, our brain activity dynamically aligns to the incoming acoustic input at multiple timescales. Although this systematic alignment between ongoing brain activity and speech in auditory brain areas is well established, the acoustic events that drive this phase-locking are not fully understood. Here, we use magnetoencephalographic recordings of 24 human participants (12 females) while they were listening to a 1 h story. We show that whereas speech-brain coupling is associated with sustained acoustic fluctuations in the speech envelope in the theta-frequency range (4-7 Hz), speech tracking in the low-frequency delta band (below 1 Hz) was strongest around speech onsets, such as the beginning of a sentence. Crucially, delta tracking in bilateral auditory areas was not sustained after onsets, suggesting that delta tracking during continuous speech perception is driven by speech onsets. We conclude that onset and sustained components of speech contribute differentially to speech tracking in the delta- and theta-frequency bands, orchestrating the sampling of continuous speech. Thus, our results suggest a temporal dissociation of acoustically driven oscillatory activity in auditory areas during speech tracking, with valuable implications for the orchestration of speech tracking at multiple timescales.
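The delta/theta dissociation described in this abstract can be illustrated with a toy band-limited tracking measure. This is a minimal sketch, not the authors' MEG pipeline: the sampling rate, band edges, and the synthetic "envelope" and "brain" signals below are all made up for illustration, and a band-pass-then-correlate measure is only a crude stand-in for the phase-locking analyses used in such studies.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def band_tracking(envelope, brain, fs, lo, hi, order=2):
    """Pearson correlation between band-limited envelope and brain signal.

    A crude stand-in for speech-brain coupling: band-pass both
    signals in the same frequency band and correlate them.
    """
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return np.corrcoef(sosfiltfilt(sos, envelope),
                       sosfiltfilt(sos, brain))[0, 1]

# Synthetic demo: a "speech envelope" with a slow (delta) and a fast
# (theta) component, and a "brain" signal that tracks only the slow one.
fs = 100.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
delta_part = np.sin(2 * np.pi * 0.7 * t)
theta_part = np.sin(2 * np.pi * 5.0 * t)
envelope = delta_part + theta_part
brain = delta_part + 0.1 * rng.standard_normal(t.size)

delta_r = band_tracking(envelope, brain, fs, 0.3, 1.0)  # strong tracking
theta_r = band_tracking(envelope, brain, fs, 4.0, 7.0)  # no tracking
```

With these synthetic signals, `delta_r` comes out close to 1 while `theta_r` hovers near 0, mirroring the band-specific dissociation the study reports.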
Affiliation(s)
- Nikos Chalas
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Malmedyweg 15, 48149 Münster, Germany
- Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Fliednerstr. 21, 48149 Münster, Germany
- Institute for Translational Neuroscience, University of Münster, Albert-Schweitzer-Campus 1, Geb. A9a, Münster, Germany
- Christoph Daube
- Centre for Cognitive Neuroimaging, University of Glasgow, 56-64 Hillhead Street, G12 8QB, Glasgow, United Kingdom
- Daniel S Kluger
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Malmedyweg 15, 48149 Münster, Germany
- Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Fliednerstr. 21, 48149 Münster, Germany
- Omid Abbasi
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Malmedyweg 15, 48149 Münster, Germany
- Robert Nitsch
- Institute for Translational Neuroscience, University of Münster, Albert-Schweitzer-Campus 1, Geb. A9a, Münster, Germany
- Joachim Gross
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Malmedyweg 15, 48149 Münster, Germany
- Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Fliednerstr. 21, 48149 Münster, Germany
16
Lo CW, Tung TY, Ke AH, Brennan JR. Hierarchy, Not Lexical Regularity, Modulates Low-Frequency Neural Synchrony During Language Comprehension. Neurobiol Lang 2022; 3:538-555. [PMID: 37215342 PMCID: PMC10158645 DOI: 10.1162/nol_a_00077] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/02/2022] [Accepted: 06/20/2022] [Indexed: 05/24/2023]
Abstract
Neural responses appear to synchronize with sentence structure. However, researchers have debated whether this response in the delta band (0.5-3 Hz) really reflects hierarchical information or simply lexical regularities. Computational simulations in which sentences are represented simply as sequences of high-dimensional numeric vectors that encode lexical information seem to give rise to power spectra similar to those observed for sentence synchronization, suggesting that sentence-level cortical tracking findings may reflect sequential lexical or part-of-speech information, and not necessarily hierarchical syntactic information. Using electroencephalography (EEG) data and the frequency-tagging paradigm, we develop a novel experimental condition to tease apart the predictions of the lexical and the hierarchical accounts of the attested low-frequency synchronization. Under a lexical model, synchronization should be observed even when words are reversed within their phrases (e.g., "sheep white grass eat" instead of "white sheep eat grass"), because the same lexical items are preserved at the same regular intervals. Critically, such stimuli are not syntactically well-formed; thus, a hierarchical model does not predict synchronization of phrase- and sentence-level structure in the reversed phrase condition. Computational simulations confirm these diverging predictions. EEG data from N = 31 native speakers of Mandarin show robust delta synchronization to syntactically well-formed isochronous speech. Importantly, no such pattern is observed for reversed phrases, consistent with the hierarchical, but not the lexical, account.
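The frequency-tagging logic invoked in this abstract can be sketched with synthetic data: signal components locked to the tagged presentation rates produce sharp spectral peaks at exactly those frequencies. The rates below (1, 2, and 4 Hz, loosely corresponding to sentences, phrases, and syllables in typical designs), the amplitudes, and the noise level are hypothetical, not the study's actual stimuli or analysis.

```python
import numpy as np

fs = 250.0                      # hypothetical EEG sampling rate (Hz)
t = np.arange(0, 40, 1 / fs)    # 40 s of signal
rng = np.random.default_rng(1)

# Hypothetical tagged rates: syllables at 4 Hz, phrases at 2 Hz,
# sentences at 1 Hz, buried in broadband noise.
eeg = (0.5 * np.sin(2 * np.pi * 1.0 * t)
       + 0.8 * np.sin(2 * np.pi * 2.0 * t)
       + 1.0 * np.sin(2 * np.pi * 4.0 * t)
       + 0.5 * rng.standard_normal(t.size))

freqs = np.fft.rfftfreq(t.size, 1 / fs)
spectrum = np.abs(np.fft.rfft(eeg)) / t.size

def amp_at(f_target):
    """Spectral amplitude at the frequency bin closest to f_target (Hz)."""
    return spectrum[np.argmin(np.abs(freqs - f_target))]

tagged = {f: amp_at(f) for f in (1.0, 2.0, 4.0)}
```

The tagged bins stand out sharply against neighboring noise bins (e.g., 1.5 Hz), which is the spectral signature the frequency-tagging paradigm looks for.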
Affiliation(s)
- Chia-Wen Lo
- Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Department of Linguistics, University of Michigan, Ann Arbor, MI, USA
- Tzu-Yun Tung
- Department of Linguistics, University of Michigan, Ann Arbor, MI, USA
- Alan Hezao Ke
- Department of Linguistics, University of Michigan, Ann Arbor, MI, USA
- Department of Linguistics, Languages and Cultures, Michigan State University, East Lansing, MI, USA