1
Smit EA, Rathcke TV. The role of native language and beat perception ability in the perception of speech rhythm. Psychon Bull Rev 2024. PMID: 39028394; DOI: 10.3758/s13423-024-02513-4.
Abstract
The perception of rhythm has been studied across a range of auditory signals, with speech presenting a particularly challenging case to capture and explain. Here, we asked whether rhythm perception in speech is guided by perceptual biases arising from native language structures, by the cognitive ability to perceive a regular beat, or by a combination of both. Listeners of two prosodically distinct languages - English and French - heard sentences (spoken in their native and the foreign language, respectively) and compared the rhythm of each sentence to its drummed version (presented at inter-syllabic, inter-vocalic, or isochronous intervals). While English listeners tended to map sentence rhythm onto inter-vocalic and inter-syllabic intervals in this task, French listeners showed a perceptual preference for inter-vocalic intervals only. The native-language tendency was equally apparent in the listeners' foreign language and was enhanced by individual beat perception ability. These findings suggest that rhythm perception in speech is shaped primarily by listeners' native language experience, with a lesser influence of innate cognitive traits.
Affiliation(s)
- Eline A Smit
- Department of Linguistics, University of Konstanz, Konstanz, Germany
- Tamara V Rathcke
- Department of Linguistics, University of Konstanz, Konstanz, Germany.
2
Herff SA, Bonetti L, Cecchetti G, Vuust P, Kringelbach ML, Rohrmeier MA. Hierarchical syntax model of music predicts theta power during music listening. Neuropsychologia 2024; 199:108905. PMID: 38740179; DOI: 10.1016/j.neuropsychologia.2024.108905.
Abstract
Linguistic research has shown that the depth of syntactic embedding is reflected in brain theta power. Here, we test whether this also extends to non-linguistic stimuli, specifically music. We used a hierarchical model of musical syntax to continuously quantify two types of expert-annotated harmonic dependencies throughout a piece of Western classical music: prolongation and preparation. Prolongations can roughly be understood as a musical analogue to linguistic coordination between constituents that share the same function (e.g., 'pizza' and 'pasta' in 'I ate pizza and pasta'). Preparation refers to the dependency between two harmonies whereby the first implies a resolution towards the second (e.g., dominant towards tonic; similar to how an adjective implies the presence of a noun in 'I like spicy … '). Source-reconstructed MEG data from sixty-five participants listening to the piece were then analysed. We used Bayesian mixed-effects models to predict the theta envelope in the brain, using the number of open prolongation and preparation dependencies as predictors whilst controlling for the audio envelope. We observed that prolongation and preparation both carry independent and distinguishable predictive value for theta-band fluctuation in key linguistic areas such as the Angular, Superior Temporal, and Heschl's Gyri, or their right-lateralised homologues, with preparation showing additional predictive value for areas associated with the reward system and prediction. Musical expertise further mediated these effects in language-related brain areas. These results show that the predictions of precisely formalised music-theoretical models are reflected in the brain activity of listeners, furthering our understanding of the perception and cognition of musical structure.
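The model's key predictor, the number of open dependencies at each point in the piece, can be illustrated with a toy stack-style counter (a simplified sketch, not the authors' implementation; the event labels are invented):

```python
def open_dependency_counts(events):
    """Return how many dependencies are open after each event.

    '(' marks a dependency opening (e.g., a preparation awaiting its
    resolution), ')' marks the event that closes it, and any other
    label leaves the count unchanged.
    """
    depth = 0
    counts = []
    for event in events:
        if event == "(":
            depth += 1
        elif event == ")":
            depth -= 1
        counts.append(depth)
    return counts

# e.g., a dominant opens a preparation that a later tonic closes
counts = open_dependency_counts(["(", "chord", "(", ")", "chord", ")"])
# counts == [1, 1, 2, 1, 1, 0]
```

A time series like `counts`, aligned to the audio, is the kind of regressor one would enter into a mixed-effects model of the theta envelope.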
Affiliation(s)
- Steffen A Herff
- Sydney Conservatorium of Music, University of Sydney, Sydney, Australia; The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia; Digital and Cognitive Musicology Lab, College of Humanities, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland.
- Leonardo Bonetti
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Denmark; Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, United Kingdom; Department of Psychiatry, University of Oxford, Oxford, United Kingdom
- Gabriele Cecchetti
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia; Digital and Cognitive Musicology Lab, College of Humanities, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Denmark
- Morten L Kringelbach
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Denmark; Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, United Kingdom; Department of Psychiatry, University of Oxford, Oxford, United Kingdom
- Martin A Rohrmeier
- Digital and Cognitive Musicology Lab, College of Humanities, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
3
Bowers A, Hudock D. Lower nonword syllable sequence repetition accuracy in adults who stutter is related to differences in audio-motor oscillations. Neuropsychologia 2024; 199:108906. PMID: 38740180; DOI: 10.1016/j.neuropsychologia.2024.108906.
Abstract
OBJECTIVE The goal of this study was to use independent component analysis (ICA) of high-density electroencephalography (EEG) to investigate whether differences in audio-motor neural oscillations are related to nonword syllable repetition accuracy in a group of adults who stutter (AWS) compared with typically fluent speakers (TFS). METHODS EEG was recorded using 128 channels from 23 TFS and 23 AWS matched for age, sex, and handedness. EEG was recorded during delayed repetition of two- and four-syllable bilabial nonwords. Scalp topography, dipole source estimates, and power spectral density (PSD) were computed for each independent component (IC) and used to cluster similar ICs across participants. Event-related spectral perturbations (ERSPs) were computed for each IC cluster to examine changes over time in the repetition conditions and to examine how dynamic changes in ERSPs relate to syllable repetition accuracy. RESULTS Findings indicated significantly lower accuracy in the AWS group on a measure of percentage correct trials (%CT) and on a normalized measure of syllable load performance across conditions. Analysis of ERSPs revealed significantly lower alpha/beta event-related desynchronization (ERD) in left and right μ ICs and in left and right posterior temporal lobe α ICs in AWS compared with TFS (cluster-corrected p < 0.05). Pearson correlations between %CT and frequency across time showed strong relationships with accuracy (FWE < 0.05) during maintenance in the TFS group and during execution in the AWS group. CONCLUSIONS Findings implicate lower alpha/beta ERD (8-30 Hz) during syllable encoding over posterior temporal ICs and during execution in left temporal/sensorimotor components. Strong correlations with accuracy and interindividual differences in ∼6-8 Hz ERSPs during execution implicate differences in motor and auditory-sensory monitoring during syllable sequence execution in AWS.
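As a rough illustration of the ICA decomposition step, the following sketch separates two synthetic oscillatory sources from linear mixtures using scikit-learn's FastICA (sources, mixing matrix, and parameters are invented for illustration, not taken from the study):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1000)

# two toy cortical sources: a 10 Hz sinusoid and a 6 Hz square-ish rhythm
s1 = np.sin(2 * np.pi * 10 * t)
s2 = np.sign(np.sin(2 * np.pi * 6 * t))
S = np.c_[s1, s2] + 0.05 * rng.standard_normal((1000, 2))

# linear mixing stands in for volume conduction to two scalp channels
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = S @ A.T

# unmix: each recovered component should match one source up to sign/scale
ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)
```

In a real pipeline, each recovered component's scalp topography and spectrum would then be inspected and clustered across participants, as described in the abstract.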
Affiliation(s)
- Andrew Bowers
- University of Arkansas, 275 Epley Center, 606 North Razorback Rd. Fayetteville AR, 72701, United States.
- Daniel Hudock
- Idaho State University, 921 S. 8th Ave, Mailstop 8116, Pocatello, ID 83209, United States
4
Lydon EA, Panfil HB, Yako S, Mudar RA. Behavioral and neural measures of semantic conflict monitoring: Findings from a novel picture-word interference task. Brain Res 2024; 1834:148900. PMID: 38555981; DOI: 10.1016/j.brainres.2024.148900.
Abstract
Conflict monitoring has been studied extensively using experimental paradigms that manipulate perceptual dimensions of stimuli and responses. The picture-word interference (PWI) task has historically been used to examine semantic conflict, but primarily for the purpose of examining lexical retrieval. In this study, we utilized two novel PWI tasks to assess conflict monitoring in the context of semantic conflict. Participants included nineteen young adults (14 female; age 20.79 ± 3.14 years) who completed two tasks: Animals and Objects. Task and conflict effects were assessed by examining behavioral (reaction time and accuracy) and neurophysiological (oscillations in the theta, alpha, and beta bands) measures. Results revealed conflict effects within both tasks, but the pattern of findings differed across the two semantic categories. Participants were slower to respond to unmatched versus matched trials on the Objects task only and were less accurate responding to matched versus unmatched trials on the Animals task only. We also observed task differences, with participants responding more accurately on conflict trials for Animals compared with Objects. Differences in neural oscillations were observed, including between-task differences in low beta oscillations and within-task differences in theta, alpha, and low beta. We also observed significant correlations between task performance and standard measures of cognitive control. This work provides new insights into conflict monitoring, highlighting the importance of examining conflict across different semantic categories, especially in the context of animacy. The findings serve as a benchmark for assessing conflict monitoring using PWI tasks across populations of varying cognitive ability.
Affiliation(s)
- Elizabeth A Lydon
- Speech and Hearing Science, University of Illinois Urbana-Champaign, Champaign, IL, USA
- Holly B Panfil
- Speech and Hearing Science, University of Illinois Urbana-Champaign, Champaign, IL, USA
- Sharbel Yako
- Molecular and Cellular Biology, University of Illinois Urbana-Champaign, Champaign, IL, USA
- Raksha A Mudar
- Speech and Hearing Science, University of Illinois Urbana-Champaign, Champaign, IL, USA.
5
Issa MF, Khan I, Ruzzoli M, Molinaro N, Lizarazu M. On the speech envelope in the cortical tracking of speech. Neuroimage 2024; 297:120675. PMID: 38885886; DOI: 10.1016/j.neuroimage.2024.120675.
Abstract
The synchronization between the speech envelope and neural activity in auditory regions, referred to as cortical tracking of speech (CTS), plays a key role in speech processing. The method selected for extracting the envelope is a crucial step in CTS measurement, and the absence of a consensus on best practices among the various methods can influence analysis outcomes and interpretation. Here, we systematically compare five standard envelope extraction methods (the absolute value of the Hilbert transform, or absHilbert; gammatone filterbanks; a heuristic approach; the Bark scale; and vocalic energy), analyzing their impact on the CTS. We present performance metrics for each method based on recordings of brain activity from participants listening to speech in clear and noisy conditions, utilizing intracranial EEG, MEG, and EEG data. As expected, we observed significant CTS in temporal brain regions below 10 Hz across all datasets, regardless of the extraction method. In general, the gammatone filterbanks approach consistently demonstrated superior performance compared with the other methods. Results from our study can guide scientists in the field to make informed decisions about the optimal analysis for extracting the CTS, contributing to advancing the understanding of the neuronal mechanisms implicated in CTS.
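Of the five methods, the absHilbert approach is the most compact to sketch. The toy signal, sampling rate, and filter settings below are assumptions for illustration, not the authors' pipeline:

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 1000  # Hz, assumed sampling rate
t = np.arange(0, 2, 1 / fs)

# toy "speech": a 100 Hz carrier amplitude-modulated at a 4 Hz syllable rate
modulator = 1 + 0.8 * np.sin(2 * np.pi * 4 * t)
x = modulator * np.sin(2 * np.pi * 100 * t)

# absHilbert: the envelope is the magnitude of the analytic signal
envelope = np.abs(hilbert(x))

# low-pass below 10 Hz, the range where CTS was observed
b, a = butter(4, 10 / (fs / 2), btype="low")
envelope_lp = filtfilt(b, a, envelope)
```

Correlating or regressing low-frequency neural activity against `envelope_lp` is then one common way to quantify tracking; the other four methods differ mainly in how this envelope is derived.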
Affiliation(s)
- Mohamed F Issa
- BCBL, Basque Center on Cognition, Brain and Language, San Sebastian, Spain; Department of Scientific Computing, Faculty of Computers and Artificial Intelligence, Benha University, Benha, Egypt.
- Izhar Khan
- BCBL, Basque Center on Cognition, Brain and Language, San Sebastian, Spain
- Manuela Ruzzoli
- BCBL, Basque Center on Cognition, Brain and Language, San Sebastian, Spain; Ikerbasque, Basque Foundation for Science, Bilbao, Spain
- Nicola Molinaro
- BCBL, Basque Center on Cognition, Brain and Language, San Sebastian, Spain; Ikerbasque, Basque Foundation for Science, Bilbao, Spain
- Mikel Lizarazu
- BCBL, Basque Center on Cognition, Brain and Language, San Sebastian, Spain
6
Ten Oever S, Titone L, te Rietmolen N, Martin AE. Phase-dependent word perception emerges from region-specific sensitivity to the statistics of language. Proc Natl Acad Sci U S A 2024; 121:e2320489121. PMID: 38805278; PMCID: PMC11161766; DOI: 10.1073/pnas.2320489121.
Abstract
Neural oscillations reflect fluctuations in excitability, which biases the percept of ambiguous sensory input. Why this bias occurs is still not fully understood. We hypothesized that neural populations representing likely events are more sensitive, and thereby become active on earlier oscillatory phases, when the ensemble itself is less excitable. Perception of ambiguous input presented during less-excitable phases should therefore be biased toward frequent or predictable stimuli that have lower activation thresholds. Here, we show such a frequency bias in spoken word recognition using psychophysics, magnetoencephalography (MEG), and computational modelling. With MEG, we found a double dissociation, where the phase of oscillations in the superior temporal gyrus and medial temporal gyrus biased word-identification behavior based on phoneme and lexical frequencies, respectively. This finding was reproduced in a computational model. These results demonstrate that oscillations provide a temporal ordering of neural activity based on the sensitivity of separable neural populations.
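Analyses of oscillatory phase of this kind typically band-pass filter the signal and take the angle of its analytic signal; a minimal sketch on a synthetic 6 Hz theta oscillation (sampling rate and noise level are invented):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500  # Hz, assumed
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(2)

# synthetic recording: a 6 Hz theta oscillation buried in broadband noise
true_phase = 2 * np.pi * 6 * t + 0.5
x = np.sin(true_phase) + rng.standard_normal(t.size)

# band-pass 4-8 Hz (theta), then instantaneous phase via the analytic signal
b, a = butter(4, [4 / (fs / 2), 8 / (fs / 2)], btype="band")
phase = np.angle(hilbert(filtfilt(b, a, x)))
```

Trials can then be binned by the `phase` value at stimulus onset to test whether identification of an ambiguous word varies systematically with phase.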
Affiliation(s)
- Sanne Ten Oever
- Language and Computation in Neural Systems group, Max Planck Institute for Psycholinguistics, Nijmegen 6525 XD, The Netherlands
- Language and Computation in Neural Systems group, Donders Centre for Cognitive Neuroimaging, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen 6525 EN, The Netherlands
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht 6229 EV, The Netherlands
- Lorenzo Titone
- Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig D-04303, Germany
- Noémie te Rietmolen
- Language and Computation in Neural Systems group, Max Planck Institute for Psycholinguistics, Nijmegen 6525 XD, The Netherlands
- Language and Computation in Neural Systems group, Donders Centre for Cognitive Neuroimaging, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen 6525 EN, The Netherlands
- Andrea E. Martin
- Language and Computation in Neural Systems group, Max Planck Institute for Psycholinguistics, Nijmegen 6525 XD, The Netherlands
- Language and Computation in Neural Systems group, Donders Centre for Cognitive Neuroimaging, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen 6525 EN, The Netherlands
7
Allal-Sumoto TK, Şahin D, Mizuhara H. Neural activity related to productive vocabulary knowledge effects during second language comprehension. Neurosci Res 2024; 203:8-17. PMID: 38242177; DOI: 10.1016/j.neures.2024.01.002.
Abstract
Second language learners and educators often believe that improving one's listening ability hinges on acquiring an extensive vocabulary and engaging in thorough listening practice. Our previous study suggested that listening comprehension is also impacted by the ability to produce vocabulary. Nevertheless, it remained uncertain whether quick comprehension could be attributed to a simple acceleration of processing or to changes in neural activity. To identify changes in neural activity during sentence listening comprehension according to different levels of lexical knowledge (productive, comprehension-only, and not comprehended), we measured participants' electrical brain activity via electroencephalography (EEG) and conducted a time-frequency-based EEG power analysis. Additionally, we employed a decoding model to verify whether vocabulary knowledge levels could be predicted from neural activity. The decoding results showed that EEG activity could discriminate between listening to sentences containing phrases that draw on productive knowledge and ones that do not. The positive impact of productive vocabulary knowledge on sentence comprehension, driven by distinctive neural processing during sentence comprehension, was clearly evident. Our study emphasizes the importance of acquiring productive vocabulary knowledge to enhance second language listening comprehension.
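Decoding analyses of this kind are commonly cross-validated classifiers over spectral features; the sketch below uses logistic regression on synthetic band-power features (feature counts and class separation are invented, and this is not the authors' model):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)

# toy spectral features: 100 trials x 20 channels, two knowledge-level classes
X0 = rng.standard_normal((50, 20))          # e.g., phrase not comprehended
X1 = rng.standard_normal((50, 20)) + 0.8    # e.g., productive knowledge
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

# 5-fold cross-validated decoding accuracy
acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
```

Above-chance cross-validated accuracy is what licenses the claim that the neural signal discriminates the conditions.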
Affiliation(s)
- Duygu Şahin
- Graduate School of Informatics, Kyoto University, Yoshida-honmachi, Sakyo, Kyoto 606-8501, Japan
- Hiroaki Mizuhara
- Graduate School of Informatics, Kyoto University, Yoshida-honmachi, Sakyo, Kyoto 606-8501, Japan.
8
Ding R, Ten Oever S, Martin AE. Delta-band Activity Underlies Referential Meaning Representation during Pronoun Resolution. J Cogn Neurosci 2024; 36:1472-1492. PMID: 38652108; DOI: 10.1162/jocn_a_02163.
Abstract
Human language offers a variety of ways to create meaning, one of which is referring to entities, objects, or events in the world. One such meaning maker is understanding to whom or to what a pronoun in a discourse refers. To understand a pronoun, the brain must access matching entities or concepts that have been encoded in memory from previous linguistic context. Models of language processing propose that internally stored linguistic concepts, accessed via exogenous cues such as the phonological input of a word, are represented as (a)synchronous activities across a population of neurons active at specific frequency bands. Converging evidence suggests that delta-band activity (1-3 Hz) is involved in temporal and representational integration during sentence processing. Moreover, recent advances in the neurobiology of memory suggest that recollection engages neural dynamics similar to those that occurred during memory encoding. Integrating these two lines of research, we tested the hypothesis that the neural dynamic patterns underlying referential meaning representation, especially in the delta frequency range, would be reinstated during pronoun resolution. By leveraging neural decoding techniques (i.e., representational similarity analysis) on a magnetoencephalography dataset acquired during a naturalistic story-listening task, we provide evidence that delta-band activity underlies referential meaning representation. Our findings suggest that, during spoken language comprehension, endogenous linguistic representations such as referential concepts may be proactively retrieved and represented via activation of their underlying dynamic neural patterns.
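Representational similarity analysis compares the geometry of neural patterns across conditions; a minimal sketch (toy patterns, with reinstatement simulated as the encoding pattern plus noise) might look like:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(3)

# toy patterns: 8 referents x 50 sensors at encoding, and a noisy
# reinstatement of the same patterns at pronoun resolution
encoding = rng.standard_normal((8, 50))
retrieval = encoding + 0.3 * rng.standard_normal((8, 50))

# representational dissimilarity matrices (condensed vectors of pairwise
# correlation distances between referent patterns)
rdm_enc = pdist(encoding, metric="correlation")
rdm_ret = pdist(retrieval, metric="correlation")

# second-order similarity: a high rank correlation between the two RDMs
# indicates that the representational geometry is reinstated
rho, _ = spearmanr(rdm_enc, rdm_ret)
```

Comparing RDMs rather than raw patterns is what lets the analysis abstract away from which sensors carry the signal at each time point.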
Affiliation(s)
- Rong Ding
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Sanne Ten Oever
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Radboud University Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Andrea E Martin
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Radboud University Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands
9
Nix KC, Oh A, Goad BS, Wu W, Lucas MV, Baumer FM. Detection of Language Lateralization Using Spectral Analysis of EEG. J Clin Neurophysiol 2024; 41:334-343. PMID: 38710040; PMCID: PMC11076005; DOI: 10.1097/wnp.0000000000000988.
Abstract
PURPOSE Language lateralization testing relies on expensive equipment and can be difficult to tolerate. We assessed whether lateralized brain responses to a language task can be detected with spectral analysis of electroencephalography (EEG). METHODS Twenty right-handed, neurotypical adults (28 ± 10 years; five males) performed a verb generation task and two control tasks (word listening and repetition). We measured changes in EEG activity elicited by the tasks (the event-related spectral perturbation [ERSP]) in the theta, alpha, beta, and gamma frequency bands in two language regions (superior temporal and inferior frontal [ST and IF]) and one control region (occipital [Occ]) bilaterally. We tested whether the language tasks elicited (1) changes in spectral power from baseline (significant ERSP) in any region or (2) asymmetric ERSPs between matched left and right regions. RESULTS Left IF beta power (-0.37 ± 0.53, t = -3.12, P = 0.006) and gamma power in all regions decreased during verb generation. Asymmetric ERSPs (right > left) occurred between (1) the IF regions in the beta band (right vs. left difference of 0.23 ± 0.37, t(19) = -2.80, P = 0.0114) and (2) the ST regions in the alpha band (right vs. left difference of 0.48 ± 0.63, t(19) = -3.36, P = 0.003). No changes from baseline or hemispheric asymmetries were noted in language regions during the control tasks. At the individual level, 16 (80%) participants showed decreased left IF beta power from baseline, and 16 showed ST alpha asymmetry; 18 participants (90%) showed at least one of these two findings. CONCLUSIONS Spectral EEG analysis detects lateralized responses during language tasks in frontal and temporal regions and could be developed into a readily available language lateralization modality.
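An ERSP-style task-versus-baseline measure can be sketched as a log ratio of band power; the toy data below (a 20 Hz beta rhythm that weakens during the "task", with an assumed sampling rate) uses Welch's method:

```python
import numpy as np
from scipy.signal import welch

fs = 250  # Hz, assumed sampling rate
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)

def band_power(x, lo, hi):
    """Mean Welch power spectral density within [lo, hi] Hz."""
    f, p = welch(x, fs=fs, nperseg=fs)
    return p[(f >= lo) & (f <= hi)].mean()

# synthetic EEG: a 20 Hz beta rhythm that desynchronizes during the task epoch
baseline = np.sin(2 * np.pi * 20 * t) + 0.5 * rng.standard_normal(t.size)
task = 0.3 * np.sin(2 * np.pi * 20 * t) + 0.5 * rng.standard_normal(t.size)

# ERSP-style measure: task power relative to baseline in the beta band (13-30 Hz)
ersp_db = 10 * np.log10(band_power(task, 13, 30) / band_power(baseline, 13, 30))
# negative values indicate event-related desynchronization (ERD)
```

Comparing such values between homologous left and right regions is the kind of asymmetry measure the study reports.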
Affiliation(s)
- Kerry C Nix
- Department of Neurology, Stanford University School of Medicine, Palo Alto, California, U.S.A
- Wu Tsai Neurosciences Institute, Stanford, California, U.S.A.
- Ahyuda Oh
- Department of Neurology, Stanford University School of Medicine, Palo Alto, California, U.S.A.
- Beattie S Goad
- Department of Neurology, Stanford University School of Medicine, Palo Alto, California, U.S.A.
- Wei Wu
- Wu Tsai Neurosciences Institute, Stanford, California, U.S.A.
- Department of Psychiatry, Stanford University School of Medicine, Palo Alto, California, U.S.A.
- Molly V Lucas
- Wu Tsai Neurosciences Institute, Stanford, California, U.S.A.
- Department of Psychiatry, Stanford University School of Medicine, Palo Alto, California, U.S.A.
- Fiona M Baumer
- Department of Neurology, Stanford University School of Medicine, Palo Alto, California, U.S.A.
- Wu Tsai Neurosciences Institute, Stanford, California, U.S.A.
10
Zioga I, Zhou YJ, Weissbart H, Martin AE, Haegens S. Alpha and Beta Oscillations Differentially Support Word Production in a Rule-Switching Task. eNeuro 2024; 11:ENEURO.0312-23.2024. PMID: 38490743; PMCID: PMC10988358; DOI: 10.1523/eneuro.0312-23.2024.
Abstract
Research into the role of brain oscillations in basic perceptual and cognitive functions has suggested that the alpha rhythm reflects functional inhibition while the beta rhythm reflects neural ensemble (re)activation. However, little is known regarding the generalization of these proposed fundamental operations to linguistic processes, such as speech comprehension and production. Here, we recorded magnetoencephalography in participants performing a novel rule-switching paradigm. Specifically, Dutch native speakers had to produce either an alternative exemplar from the same category or a feature of a given target word embedded in spoken sentences (e.g., for the word "tuna", an exemplar from the same category, "seafood", would be "shrimp", and a feature would be "pink"). A cue indicated the task rule (exemplar or feature) either before (pre-cue) or after (retro-cue) listening to the sentence. Alpha power during the working memory delay was lower for retro-cue than for pre-cue trials in left-hemisphere language-related regions. Critically, alpha power negatively correlated with reaction times, suggesting that alpha facilitates task performance by regulating inhibition in regions linked to lexical retrieval. Furthermore, we observed a different spatiotemporal pattern of beta activity for exemplars versus features in right temporoparietal regions, in line with the proposed role of beta in recruiting neural networks for the encoding of distinct categories. Overall, our study provides evidence for the generalizability of the roles of alpha and beta oscillations from perceptual to more complex linguistic processes and offers a novel task to investigate links between rule-switching, working memory, and word production.
Affiliation(s)
- Ioanna Zioga
- Donders Centre for Cognitive Neuroimaging, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen 6525 EN, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen 6525 XD, The Netherlands
- Ying Joey Zhou
- Donders Centre for Cognitive Neuroimaging, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen 6525 EN, The Netherlands
- Department of Psychiatry, Oxford Centre for Human Brain Activity, Oxford, United Kingdom
- Hugo Weissbart
- Donders Centre for Cognitive Neuroimaging, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen 6525 EN, The Netherlands
- Andrea E Martin
- Donders Centre for Cognitive Neuroimaging, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen 6525 EN, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen 6525 XD, The Netherlands
- Saskia Haegens
- Donders Centre for Cognitive Neuroimaging, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen 6525 EN, The Netherlands
- Department of Psychiatry, Columbia University, New York, New York 10032
- Division of Systems Neuroscience, New York State Psychiatric Institute, New York, New York 10032
11
Corsini A, Tomassini A, Pastore A, Delis I, Fadiga L, D'Ausilio A. Speech perception difficulty modulates theta-band encoding of articulatory synergies. J Neurophysiol 2024; 131:480-491. PMID: 38323331; DOI: 10.1152/jn.00388.2023.
Abstract
The human brain tracks available speech acoustics and extrapolates missing information such as the speaker's articulatory patterns. However, the extent to which articulatory reconstruction supports speech perception remains unclear. This study explores the relationship between articulatory reconstruction and task difficulty. Participants listened to sentences and performed a speech-rhyming task. Real kinematic data of the speaker's vocal tract were recorded via electromagnetic articulography (EMA) and aligned to the corresponding acoustic outputs. We extracted articulatory synergies from the EMA data with principal component analysis (PCA) and employed partial information decomposition (PID) to separate the electroencephalographic (EEG) encoding of acoustic and articulatory features into unique, redundant, and synergistic atoms of information. We median-split sentences into easy and hard based on participants' performance and found that greater task difficulty involved greater encoding of unique articulatory information in the theta band. We conclude that fine-grained articulatory reconstruction plays a complementary role in the encoding of speech acoustics, lending further support to the claim that motor processes support speech perception. NEW & NOTEWORTHY Top-down processes originating from the motor system contribute to speech perception through the reconstruction of the speaker's articulatory movement. This study investigates the role of such articulatory simulation under variable task difficulty. We show that more challenging listening tasks lead to increased encoding of articulatory kinematics in the theta band and suggest that, in such situations, fine-grained articulatory reconstruction complements acoustic encoding.
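Extracting articulatory synergies with PCA amounts to finding a few axes that capture most of the kinematic covariance; a minimal sketch on simulated EMA-like data (channel count and latent structure invented):

```python
import numpy as np

rng = np.random.default_rng(4)

# simulated EMA: 500 time samples x 12 articulator channels driven by
# 2 latent "synergies" plus a little sensor noise
latents = rng.standard_normal((500, 2))
weights = rng.standard_normal((2, 12))
ema = latents @ weights + 0.1 * rng.standard_normal((500, 12))

# PCA via SVD of the mean-centred data
Xc = ema - ema.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
var_explained = s**2 / np.sum(s**2)

synergies = Vt[:2]         # the two dominant articulatory synergies
scores = Xc @ synergies.T  # their activation time courses
```

The low-dimensional `scores`, rather than the raw channels, are the articulatory features one would then enter into an encoding analysis such as PID.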
Affiliation(s)
- Alessandro Corsini
- Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy
- Department of Neuroscience and Rehabilitation, Università di Ferrara, Ferrara, Italy
- Alice Tomassini
- Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy
- Department of Neuroscience and Rehabilitation, Università di Ferrara, Ferrara, Italy
- Aldo Pastore
- Laboratorio NEST, Scuola Normale Superiore, Pisa, Italy
- Ioannis Delis
- School of Biomedical Sciences, University of Leeds, Leeds, United Kingdom
- Luciano Fadiga
- Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy
- Department of Neuroscience and Rehabilitation, Università di Ferrara, Ferrara, Italy
- Alessandro D'Ausilio
- Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy
- Department of Neuroscience and Rehabilitation, Università di Ferrara, Ferrara, Italy
12
|
Rubianes M, Drijvers L, Muñoz F, Jiménez-Ortega L, Almeida-Rivera T, Sánchez-García J, Fondevila S, Casado P, Martín-Loeches M. The Self-reference Effect Can Modulate Language Syntactic Processing Even Without Explicit Awareness: An Electroencephalography Study. J Cogn Neurosci 2024; 36:460-474. [PMID: 38165746 DOI: 10.1162/jocn_a_02104] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2024]
Abstract
Although it is well established that self-related information can rapidly capture our attention and bias cognitive functioning, whether this self-bias can affect language processing remains largely unknown. In addition, there is an ongoing debate as to the functional independence of language processes, notably regarding the syntactic domain. Hence, this study investigated the influence of self-related content on syntactic speech processing. Participants listened to sentences that could contain morphosyntactic anomalies while the masked face identity (self, friend, or unknown faces) was presented for 16 msec preceding the critical word. The language-related ERP components (left anterior negativity [LAN] and P600) appeared for all identity conditions. However, the largest LAN effect followed by a reduced P600 effect was observed for self-faces, whereas a larger LAN with no reduction of the P600 was found for friend faces compared with unknown faces. These data suggest that both early and late syntactic processes can be modulated by self-related content. In addition, alpha power was more suppressed over the left inferior frontal gyrus only when self-faces appeared before the critical word. This may reflect higher semantic demands concomitant to early syntactic operations (around 150-550 msec). Our data also provide further evidence of self-specific response, as reflected by the N250 component. Collectively, our results suggest that identity-related information is rapidly decoded from facial stimuli and may impact core linguistic processes, supporting an interactive view of syntactic processing. This study provides evidence that the self-reference effect can be extended to syntactic processing.
Affiliation(s)
- Miguel Rubianes: Complutense University of Madrid, Spain; UCM-ISCIII Center for Human Evolution and Behavior, Madrid, Spain
- Linda Drijvers: Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
- Francisco Muñoz: Complutense University of Madrid, Spain; UCM-ISCIII Center for Human Evolution and Behavior, Madrid, Spain
- Laura Jiménez-Ortega: Complutense University of Madrid, Spain; UCM-ISCIII Center for Human Evolution and Behavior, Madrid, Spain
- Sabela Fondevila: Complutense University of Madrid, Spain; UCM-ISCIII Center for Human Evolution and Behavior, Madrid, Spain
- Pilar Casado: Complutense University of Madrid, Spain; UCM-ISCIII Center for Human Evolution and Behavior, Madrid, Spain
- Manuel Martín-Loeches: Complutense University of Madrid, Spain; UCM-ISCIII Center for Human Evolution and Behavior, Madrid, Spain
|
13
|
Hong X, Farmer C, Kozhemiako N, Holmes GL, Thompson L, Manwaring S, Thurm A, Buckley A. Differences in Sleep EEG Coherence and Spindle Metrics in Toddlers With and Without Language Delay: A Prospective Observational Study. RESEARCH SQUARE 2024:rs.3.rs-3904113. [PMID: 38410470 PMCID: PMC10896365 DOI: 10.21203/rs.3.rs-3904113/v1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/28/2024]
Abstract
Background: Sleep plays a crucial role in early language development, and sleep disturbances are common in children with neurodevelopmental disorders. Examining sleep microarchitecture in toddlers with and without language delays can offer key insights into neurophysiological abnormalities associated with atypical neurodevelopmental trajectories and potentially aid in early detection and intervention.
Methods: Here, we investigated electroencephalogram (EEG) coherence and sleep spindles in 16 toddlers with language delay (LD) compared with a group of 39 typically developing (TD) toddlers. The sample was majority male (n = 34, 62%). Participants were aged 12-to-22 months at baseline, and 34 (LD, n = 11; TD, n = 23) participants were evaluated again at 36 months of age.
Results: LD toddlers demonstrated increased EEG coherence compared to TD toddlers, with differences most prominent during slow-wave sleep. Within the LD group, lower expressive language skills were associated with higher coherence in REM sleep. Within the TD group, lower expressive language skills were associated with higher coherence in slow-wave sleep. Sleep spindle density, duration, and frequency changed between baseline and follow-up for both groups, with the LD group demonstrating a smaller magnitude of change than the TD group. The direction of change was frequency-dependent for both groups.
Conclusions: These findings indicate that atypical sleep EEG connectivity and sleep spindle development can be detected in toddlers between 12 and 36 months and offer insights into neurophysiological mechanisms underlying the etiology of neurodevelopmental disorders.
Trial registration: https://clinicaltrials.gov/study/NCT01339767; registration date: 4/20/2011.
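The coherence metric at the heart of this study can be made concrete with a toy example. The signals, sampling rate, and window length below are invented for demonstration; only the measure itself (Welch-based magnitude-squared coherence) comes from the abstract.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
fs = 250                          # assumed sampling rate (Hz)
n = 60 * fs                       # 60 s of data
t = np.arange(n) / fs

shared = np.sin(2 * np.pi * 10 * t)        # common 10 Hz rhythm
ch1 = shared + rng.standard_normal(n)      # channel 1: shared rhythm + noise
ch2 = shared + rng.standard_normal(n)      # channel 2: shared rhythm + noise

# Welch-based magnitude-squared coherence: 1 = perfectly coupled, 0 = none.
freqs, coh = signal.coherence(ch1, ch2, fs=fs, nperseg=1024)
peak_freq = freqs[np.argmax(coh)]          # should sit near the shared 10 Hz
```

Higher coherence between two channels at a given frequency, as reported for the LD group, means their activity at that frequency is more tightly coupled.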
Affiliation(s)
- Xinyi Hong: National Institute of Mental Health Intramural Research Program
- Cristan Farmer: National Institute of Mental Health Intramural Research Program
- Lauren Thompson: Washington State University Elson S Floyd College of Medicine
- Stacy Manwaring: University of Utah Department of Communication Sciences and Disorders
- Audrey Thurm: National Institute of Mental Health Intramural Research Program
- Ashura Buckley: National Institute of Mental Health Intramural Research Program
|
14
|
Eisenhauer S, Gonzalez Alam TRDJ, Cornelissen PL, Smallwood J, Jefferies E. Individual word representations dissociate from linguistic context along a cortical unimodal to heteromodal gradient. Hum Brain Mapp 2024; 45:e26607. [PMID: 38339897 PMCID: PMC10836172 DOI: 10.1002/hbm.26607] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2023] [Revised: 11/30/2023] [Accepted: 01/15/2024] [Indexed: 02/12/2024] Open
Abstract
Language comprehension involves multiple hierarchical processing stages across time, space, and levels of representation. When processing a word, the sensory input is transformed into increasingly abstract representations that need to be integrated with the linguistic context. Thus, language comprehension involves both input-driven as well as context-dependent processes. While neuroimaging research has traditionally focused on mapping individual brain regions to the distinct underlying processes, recent studies indicate that whole-brain distributed patterns of cortical activation might be highly relevant for cognitive functions, including language. One such pattern, based on resting-state connectivity, is the 'principal cortical gradient', which dissociates sensory from heteromodal brain regions. The present study investigated the extent to which this gradient provides an organizational principle underlying language function, using a multimodal neuroimaging dataset of functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) recordings from 102 participants during sentence reading. We found that the brain response to individual representations of a word (word length, orthographic distance, and word frequency), which reflect visual, orthographic, and lexical properties, gradually increases towards the sensory end of the gradient. Although these properties showed opposite effect directions in fMRI and MEG, their association with the sensory end of the gradient was consistent across both neuroimaging modalities. In contrast, MEG revealed that properties reflecting a word's relation to its linguistic context (semantic similarity and position within the sentence) involve the heteromodal end of the gradient to a stronger extent.
This dissociation between individual word and contextual properties was stable across earlier and later time windows during word presentation, indicating interactive processing of word representations and linguistic context at opposing ends of the principal gradient. To conclude, our findings indicate that the principal gradient underlies the organization of a range of linguistic representations while supporting a gradual distinction between context-independent and context-dependent representations. Furthermore, the gradient reveals convergent patterns across neuroimaging modalities (similar location along the gradient) in the presence of divergent responses (opposite effect directions).
Affiliation(s)
- Susanne Eisenhauer: Department of Psychology, University of York, York, UK; York Neuroimaging Centre, Innovation Way, York, UK
- Elizabeth Jefferies: Department of Psychology, University of York, York, UK; York Neuroimaging Centre, Innovation Way, York, UK
|
15
|
Zoefel B, Kösem A. Neural tracking of continuous acoustics: properties, speech-specificity and open questions. Eur J Neurosci 2024; 59:394-414. [PMID: 38151889 DOI: 10.1111/ejn.16221] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2023] [Revised: 11/17/2023] [Accepted: 11/22/2023] [Indexed: 12/29/2023]
Abstract
Human speech is a particularly relevant acoustic stimulus for our species, due to its role in information transmission during communication. Speech is inherently a dynamic signal, and a recent line of research has focused on neural activity following the temporal structure of speech. We review findings that characterise neural dynamics in the processing of continuous acoustics and that allow us to compare these dynamics with temporal aspects of human speech. We highlight properties and constraints that neural and speech dynamics share, suggesting that auditory neural systems are optimised to process human speech. We then discuss the speech-specificity of neural dynamics and their potential mechanistic origins and summarise open questions in the field.
Affiliation(s)
- Benedikt Zoefel: Centre de Recherche Cerveau et Cognition (CerCo), CNRS UMR 5549, Toulouse, France; Université de Toulouse III Paul Sabatier, Toulouse, France
- Anne Kösem: Lyon Neuroscience Research Center (CRNL), INSERM U1028, Bron, France
|
16
|
Kim J, Kim HW, Kovar J, Lee YS. Neural consequences of binaural beat stimulation on auditory sentence comprehension: an EEG study. Cereb Cortex 2024; 34:bhad459. [PMID: 38044462 DOI: 10.1093/cercor/bhad459] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2023] [Revised: 11/05/2023] [Accepted: 11/06/2023] [Indexed: 12/05/2023] Open
Abstract
A growing literature has shown that a binaural beat (BB), generated by dichotic presentation of slightly mismatched pure tones, improves cognition. We recently found that BB stimulation of either beta (18 Hz) or gamma (40 Hz) frequencies enhanced auditory sentence comprehension. Here, we used electroencephalography (EEG) to characterize neural oscillations pertaining to the enhanced linguistic operations following BB stimulation. Sixty healthy young adults were randomly assigned to one of three listening groups: 18-Hz BB, 40-Hz BB, or pure-tone baseline, all embedded in music. After listening to the sound for 10 min (stimulation phase), participants underwent an auditory sentence comprehension task involving spoken sentences that contained either an object or subject relative clause (task phase). During the stimulation phase, 18-Hz BB yielded increased EEG power in a beta frequency range, while 40-Hz BB did not. During the task phase, only the 18-Hz BB resulted in significantly higher accuracy and faster response times compared with the baseline, especially on syntactically more complex object-relative sentences. The behavioral improvement by 18-Hz BB was accompanied by attenuated beta power difference between object- and subject-relative sentences. Altogether, our findings demonstrate beta oscillations as a neural correlate of improved syntactic operation following BB stimulation.
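The stimulus construction is easy to make concrete: an 18-Hz binaural beat is just two pure tones 18 Hz apart, one per ear. The 240/258 Hz carriers below are illustrative assumptions; the abstract does not specify the carrier frequencies, and the music bed is omitted here.

```python
import numpy as np

fs = 44100                        # audio sampling rate (Hz)
n = 2 * fs                        # 2 s of audio
t = np.arange(n) / fs

f_left, f_right = 240.0, 258.0    # dichotic carriers, 18 Hz apart (assumed)
left = np.sin(2 * np.pi * f_left * t)
right = np.sin(2 * np.pi * f_right * t)
stereo = np.stack([left, right], axis=1)   # (n, 2) buffer, one tone per ear

beat_frequency = f_right - f_left          # perceived beat: 18 Hz
```

Because each ear receives only one tone, the 18 Hz "beat" is not present in the acoustics of either channel; it arises from binaural integration in the listener's auditory system.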
Affiliation(s)
- Jeahong Kim: School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX 75080, United States; Callier Clinical Research Center, The University of Texas at Dallas, Richardson, TX 75080, United States
- Hyun-Woong Kim: School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX 75080, United States; Callier Clinical Research Center, The University of Texas at Dallas, Richardson, TX 75080, United States; Center for BrainHealth, The University of Texas at Dallas, Dallas, TX 75235, United States; Department of Psychology, The University of Texas at Dallas, Richardson, TX 75080, United States
- Jessica Kovar: School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX 75080, United States; Callier Clinical Research Center, The University of Texas at Dallas, Richardson, TX 75080, United States
- Yune Sang Lee: School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX 75080, United States; Callier Clinical Research Center, The University of Texas at Dallas, Richardson, TX 75080, United States; Center for BrainHealth, The University of Texas at Dallas, Dallas, TX 75235, United States; Department of Speech, Language, and Hearing, The University of Texas at Dallas, 800 W. Campbell Road, Richardson, TX 75080, United States
|
17
|
Krifka M. Performative updates and the modeling of speech acts. SYNTHESE 2024; 203:31. [PMID: 38222044 PMCID: PMC10786985 DOI: 10.1007/s11229-023-04359-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/13/2022] [Accepted: 09/18/2023] [Indexed: 01/16/2024]
Abstract
This paper develops a way to model performative speech acts within a framework of dynamic semantics. It introduces a distinction between performative and informative updates, where informative updates filter out indices of context sets (cf. Stalnaker, Cole (ed), Pragmatics, Academic Press, 1978), whereas performative updates change their indices (cf. Szabolcsi, Kiefer (ed), Hungarian linguistics, John Benjamins, 1982). The notion of index change is investigated in detail, identifying implementations by a function or by a relation. Declarations like the meeting is (hereby) adjourned are purely performative updates that just enforce an index change on a context set. Assertions like the meeting is (already) adjourned are analyzed as combinations of a performative update that introduces a guarantee of the speaker for the truth of the proposition, and an informative update that restricts the context set so that this proposition is true. The first update is the illocutionary act characteristic for assertions; the second is the primary perlocutionary act, and is up for negotiations with the addressee. Several other speech acts will be discussed, in particular commissives, directives, exclamatives, optatives, and definitions, which are all performative, and differ from related assertions. The paper concludes with a discussion of locutionary acts, which are modelled as index changers as well, and proposes a novel analysis for the performative marker hereby.
Affiliation(s)
- Manfred Krifka: Leibniz-Zentrum Allgemeine Sprachwissenschaft (ZAS) and Humboldt-Universität zu Berlin, Pariser Str. 1, 10719 Berlin, Germany
|
18
|
Ten Oever S, Martin AE. Interdependence of "What" and "When" in the Brain. J Cogn Neurosci 2024; 36:167-186. [PMID: 37847823 DOI: 10.1162/jocn_a_02067] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2023]
Abstract
From a brain's-eye-view, when a stimulus occurs and what it is are interrelated aspects of interpreting the perceptual world. Yet in practice, the putative perceptual inferences about sensory content and timing are often dichotomized and not investigated as an integrated process. We here argue that neural temporal dynamics can influence what is perceived, and in turn, stimulus content can influence the time at which perception is achieved. This computational principle results from the highly interdependent relationship of what and when in the environment. Both brain processes and perceptual events display strong temporal variability that is not always modeled; we argue that understanding, and at minimum modeling, this temporal variability is key for theories of how the brain generates unified and consistent neural representations, and that we ignore temporal variability in our analysis practice at the peril of both data interpretation and theory-building. Here, we review what and when interactions in the brain, demonstrate via simulations how temporal variability can result in misguided interpretations and conclusions, and outline how to integrate and synthesize what and when in theories and models of brain computation.
Affiliation(s)
- Sanne Ten Oever: Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands; Maastricht University, The Netherlands
- Andrea E Martin: Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands
|
19
|
Assaneo MF, Orpella J. Rhythms in Speech. ADVANCES IN EXPERIMENTAL MEDICINE AND BIOLOGY 2024; 1455:257-274. [PMID: 38918356 DOI: 10.1007/978-3-031-60183-5_14] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/27/2024]
Abstract
Speech can be defined as the human ability to communicate through a sequence of vocal sounds. Consequently, speech requires an emitter (the speaker) capable of generating the acoustic signal and a receiver (the listener) able to successfully decode the sounds produced by the emitter (i.e., the acoustic signal). Time plays a central role at both ends of this interaction. On the one hand, speech production requires precise and rapid coordination, typically within the order of milliseconds, of the upper vocal tract articulators (i.e., tongue, jaw, lips, and velum), their composite movements, and the activation of the vocal folds. On the other hand, the generated acoustic signal unfolds in time, carrying information at different timescales. This information must be parsed and integrated by the receiver for the correct transmission of meaning. This chapter describes the temporal patterns that characterize the speech signal and reviews research that explores the neural mechanisms underlying the generation of these patterns and the role they play in speech comprehension.
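One standard way to expose the temporal patterns this chapter describes is the envelope modulation spectrum: a signal whose amplitude is modulated at a syllable-like 5 Hz rate shows a 5 Hz peak. The toy signal, rates, and smoothing window below are illustrative assumptions, not material from the chapter.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 16000
n = 10 * fs                                     # 10 s of "speech"
t = np.arange(n) / fs

syllable_rate = 5.0                             # Hz, syllable-like timescale
envelope = 1 + np.sin(2 * np.pi * syllable_rate * t)
sound = envelope * rng.standard_normal(n)       # amplitude-modulated noise

# Crude envelope recovery: rectify, then smooth with a 50 ms moving average.
win = int(0.05 * fs)
recovered = np.convolve(np.abs(sound), np.ones(win) / win, mode="same")
recovered -= recovered.mean()

spectrum = np.abs(np.fft.rfft(recovered))
freqs = np.fft.rfftfreq(n, 1 / fs)
mask = (freqs > 0.5) & (freqs < 20)             # search the modulation range
peak = freqs[mask][np.argmax(spectrum[mask])]   # dominant modulation rate
```

Real speech shows a broad modulation peak in roughly this 2-8 Hz region, which is the syllabic timescale the listener must parse.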
Affiliation(s)
- M Florencia Assaneo: Instituto de Neurobiología, Universidad Autónoma de México, Santiago de Querétaro, Mexico
- Joan Orpella: Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA
|
20
|
Kujala J, Mäkelä S, Ojala P, Hyönä J, Salmelin R. Beta- and gamma-band cortico-cortical interactions support naturalistic reading of continuous text. Eur J Neurosci 2024; 59:238-251. [PMID: 38062542 DOI: 10.1111/ejn.16212] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2023] [Revised: 11/06/2023] [Accepted: 11/15/2023] [Indexed: 01/23/2024]
Abstract
Large-scale integration of information across cortical structures, building on neural connectivity, has been proposed to be a key element in supporting human cognitive processing. In electrophysiological neuroimaging studies of reading, quantification of neural interactions has been limited to the level of isolated words or sentences due to artefacts induced by eye movements. Here, we combined magnetoencephalography recording with advanced artefact rejection tools to investigate both cortico-cortical coherence and directed neural interactions during naturalistic reading of full-page texts. Our results show that reading versus visual scanning of text was associated with wide-spread increases of cortico-cortical coherence in the beta and gamma bands. We further show that the reading task was linked to increased directed neural interactions compared to the scanning task across a sparse set of connections within a wide range of frequencies. Together, the results demonstrate that neural connectivity flexibly builds on different frequency bands to support continuous natural reading.
Affiliation(s)
- Jan Kujala: Department of Psychology, University of Jyväskylä, Jyväskylä, Finland; Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
- Sasu Mäkelä: Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
- Pauliina Ojala: Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland; Aalto NeuroImaging, Aalto University, Espoo, Finland
- Jukka Hyönä: Department of Psychology and Speech-Language Pathology, University of Turku, Turku, Finland
- Riitta Salmelin: Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland; Aalto NeuroImaging, Aalto University, Espoo, Finland
|
21
|
Batterink LJ, Mulgrew J, Gibbings A. Rhythmically Modulating Neural Entrainment during Exposure to Regularities Influences Statistical Learning. J Cogn Neurosci 2024; 36:107-127. [PMID: 37902580 DOI: 10.1162/jocn_a_02079] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/31/2023]
Abstract
The ability to discover regularities in the environment, such as syllable patterns in speech, is known as statistical learning. Previous studies have shown that statistical learning is accompanied by neural entrainment, in which neural activity temporally aligns with repeating patterns over time. However, it is unclear whether these rhythmic neural dynamics play a functional role in statistical learning or whether they largely reflect the downstream consequences of learning, such as the enhanced perception of learned words in speech. To better understand this issue, we manipulated participants' neural entrainment during statistical learning using continuous rhythmic visual stimulation. Participants were exposed to a speech stream of repeating nonsense words while viewing either (1) a visual stimulus with a "congruent" rhythm that aligned with the word structure, (2) a visual stimulus with an incongruent rhythm, or (3) a static visual stimulus. Statistical learning was subsequently measured using both an explicit and implicit test. Participants in the congruent condition showed a significant increase in neural entrainment over auditory regions at the relevant word frequency, over and above effects of passive volume conduction, indicating that visual stimulation successfully altered neural entrainment within relevant neural substrates. Critically, during the subsequent implicit test, participants in the congruent condition showed an enhanced ability to predict upcoming syllables and stronger neural phase synchronization to component words, suggesting that they had gained greater sensitivity to the statistical structure of the speech stream relative to the incongruent and static groups. This learning benefit could not be attributed to strategic processes, as participants were largely unaware of the contingencies between the visual stimulation and embedded words. 
These results indicate that manipulating neural entrainment during exposure to regularities influences statistical learning outcomes, suggesting that neural entrainment may functionally contribute to statistical learning. Our findings encourage future studies using non-invasive brain stimulation methods to further understand the role of entrainment in statistical learning.
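The entrainment measure implied above (a spectral peak at the word rate, over and above the syllable rate) can be sketched on simulated data. Typical statistical-learning streams present syllables at about 3.3 Hz, so trisyllabic words repeat at about 1.1 Hz; those rates and the simulated EEG below are illustrative assumptions, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 100
n = 120 * fs                                      # 2 min at 100 Hz
t = np.arange(n) / fs

syll_rate, word_rate = 3.3, 1.1                   # Hz (assumed rates)
eeg = (np.sin(2 * np.pi * syll_rate * t)          # syllable-rate response
       + 0.5 * np.sin(2 * np.pi * word_rate * t)  # learning-related word rate
       + rng.standard_normal(n))                  # background noise

spectrum = np.abs(np.fft.rfft(eeg)) / n
freqs = np.fft.rfftfreq(n, 1 / fs)

def power_at(f):
    """Spectral amplitude at the bin closest to frequency f."""
    return spectrum[np.argmin(np.abs(freqs - f))]

# A word-rate peak standing out from neighbouring bins indexes
# word-level entrainment; the syllable-rate peak is stimulus-driven.
word_peak = power_at(word_rate)
neighbour = power_at(word_rate + 0.3)
```

In this framing, learning is indexed by the emergence of the 1.1 Hz peak, since nothing in the raw acoustics marks word boundaries at that rate.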
|
22
|
van der Burght CL, Friederici AD, Maran M, Papitto G, Pyatigorskaya E, Schroën JAM, Trettenbrein PC, Zaccarella E. Cleaning up the Brickyard: How Theory and Methodology Shape Experiments in Cognitive Neuroscience of Language. J Cogn Neurosci 2023; 35:2067-2088. [PMID: 37713672 DOI: 10.1162/jocn_a_02058] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/17/2023]
Abstract
The capacity for language is a defining property of our species, yet despite decades of research, evidence on its neural basis is still mixed and a generalized consensus is difficult to achieve. We suggest that this is partly caused by researchers defining "language" in different ways, with focus on a wide range of phenomena, properties, and levels of investigation. Accordingly, there is very little agreement among cognitive neuroscientists of language on the operationalization of fundamental concepts to be investigated in neuroscientific experiments. Here, we review chains of derivation in the cognitive neuroscience of language, focusing on how the hypothesis under consideration is defined by a combination of theoretical and methodological assumptions. We first attempt to disentangle the complex relationship between linguistics, psychology, and neuroscience in the field. Next, we focus on how conclusions that can be drawn from any experiment are inherently constrained by auxiliary assumptions, both theoretical and methodological, on which the validity of conclusions drawn rests. These issues are discussed in the context of classical experimental manipulations as well as study designs that employ novel approaches such as naturalistic stimuli and computational modeling. We conclude by proposing that a highly interdisciplinary field such as the cognitive neuroscience of language requires researchers to form explicit statements concerning the theoretical definitions, methodological choices, and other constraining factors involved in their work.
Affiliation(s)
- Angela D Friederici: Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Matteo Maran: Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; International Max Planck Research School on Neuroscience of Communication, Leipzig, Germany
- Giorgio Papitto: Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; International Max Planck Research School on Neuroscience of Communication, Leipzig, Germany
- Elena Pyatigorskaya: Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; International Max Planck Research School on Neuroscience of Communication, Leipzig, Germany
- Joëlle A M Schroën: Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; International Max Planck Research School on Neuroscience of Communication, Leipzig, Germany
- Patrick C Trettenbrein: Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; International Max Planck Research School on Neuroscience of Communication, Leipzig, Germany; University of Göttingen, Göttingen, Germany
- Emiliano Zaccarella: Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
|
23
|
Mai G, Wang WSY. Distinct roles of delta- and theta-band neural tracking for sharpening and predictive coding of multi-level speech features during spoken language processing. Hum Brain Mapp 2023; 44:6149-6172. [PMID: 37818940 PMCID: PMC10619373 DOI: 10.1002/hbm.26503] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2023] [Revised: 08/17/2023] [Accepted: 09/13/2023] [Indexed: 10/13/2023] Open
Abstract
The brain tracks and encodes multi-level speech features during spoken language processing. It is evident that this speech tracking is dominant at low frequencies (<8 Hz) including delta and theta bands. Recent research has demonstrated distinctions between delta- and theta-band tracking but has not elucidated how they differentially encode speech across linguistic levels. Here, we hypothesised that delta-band tracking encodes prediction errors (enhanced processing of unexpected features) while theta-band tracking encodes neural sharpening (enhanced processing of expected features) when people perceive speech with different linguistic contents. EEG responses were recorded when normal-hearing participants attended to continuous auditory stimuli that contained different phonological/morphological and semantic contents: (1) real-words, (2) pseudo-words and (3) time-reversed speech. We employed multivariate temporal response functions to measure EEG reconstruction accuracies in response to acoustic (spectrogram), phonetic and phonemic features with the partialling procedure that singles out unique contributions of individual features. We found higher delta-band accuracies for pseudo-words than real-words and time-reversed speech, especially during encoding of phonetic features. Notably, individual time-lag analyses showed that significantly higher accuracies for pseudo-words than real-words started at early processing stages for phonetic encoding (<100 ms post-feature) and later stages for acoustic and phonemic encoding (>200 and 400 ms post-feature, respectively). Theta-band accuracies, on the other hand, were higher when stimuli had richer linguistic content (real-words > pseudo-words > time-reversed speech). Such effects also started at early stages (<100 ms post-feature) during encoding of all individual features or when all features were combined. 
We argue these results indicate that delta-band tracking may play a role in predictive coding leading to greater tracking of pseudo-words due to the presence of unexpected/unpredicted semantic information, while theta-band tracking encodes sharpened signals caused by more expected phonological/morphological and semantic contents. Early presence of these effects reflects rapid computations of sharpening and prediction errors. Moreover, by measuring changes in EEG alpha power, we did not find evidence that the observed effects can be solely explained by attentional demands or listening efforts. Finally, we used directed information analyses to illustrate feedforward and feedback information transfers between prediction errors and sharpening across linguistic levels, showcasing how our results fit with the hierarchical Predictive Coding framework. Together, we suggest the distinct roles of delta and theta neural tracking for sharpening and predictive coding of multi-level speech features during spoken language processing.
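The multivariate temporal response function named in the abstract is, at its core, a regularised regression from time-lagged stimulus features to EEG. The sketch below shows a minimal single-feature forward TRF on simulated data; the lag range, ridge parameter, and signals are illustrative assumptions, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 64
n = 60 * fs                                     # 60 s at 64 Hz
stim = rng.standard_normal(n)                   # e.g., an envelope-like feature

# Ground-truth TRF: a response peaking ~100 ms after the feature.
lags = np.arange(int(0.25 * fs))                # lags spanning 0-250 ms
true_trf = np.exp(-((lags / fs - 0.1) ** 2) / (2 * 0.02**2))
eeg = np.convolve(stim, true_trf)[:n] + rng.standard_normal(n)

# Lagged design matrix and ridge solution (the forward-TRF estimate).
X = np.column_stack([np.roll(stim, k) for k in lags])
X[: lags.max()] = 0                             # discard wrapped-around samples
ridge = 1.0
trf = np.linalg.solve(X.T @ X + ridge * np.eye(lags.size), X.T @ eeg)

peak_lag_ms = 1000 * lags[np.argmax(trf)] / fs  # should recover ~100 ms
```

Reconstruction accuracy, as used in the study, is then the correlation between held-out EEG and the model's prediction; the partialling procedure compares such accuracies with and without a given feature in the design matrix.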
Collapse
Affiliation(s)
- Guangting Mai
- Hearing Theme, National Institute for Health Research Nottingham Biomedical Research Centre, Nottingham, UK
- Academic Unit of Mental Health and Clinical Neurosciences, School of Medicine, The University of Nottingham, Nottingham, UK
- Division of Psychology and Language Sciences, Faculty of Brain Sciences, University College London, London, UK
| | - William S-Y Wang
- Department of Chinese and Bilingual Studies, Hong Kong Polytechnic University, Hung Hom, Hong Kong
- Language Engineering Laboratory, The Chinese University of Hong Kong, Hong Kong, China
| |
Collapse
|
24
|
Behboudi MH, Castro S, Chalamalasetty P, Maguire MJ. Development of Gamma Oscillation during Sentence Processing in Early Adolescence: Insights into the Maturation of Semantic Processing. Brain Sci 2023; 13:1639. [PMID: 38137087 PMCID: PMC10741943 DOI: 10.3390/brainsci13121639] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2023] [Revised: 11/16/2023] [Accepted: 11/24/2023] [Indexed: 12/24/2023] Open
Abstract
Children's ability to retrieve word meanings and incorporate them into sentences, along with the neural structures that support these skills, continues to evolve throughout adolescence. Theta (4-8 Hz) activity that corresponds to word retrieval in children decreases in power and becomes more localized with age. This bottom-up word retrieval is often paired with changes in gamma (31-70 Hz), which are thought to reflect semantic unification in adults. Here, we studied gamma engagement during sentence processing using EEG time-frequency analysis in children (ages 8-15) to trace the developmental trajectory of the gamma network. Children rely heavily on semantic integration for sentence comprehension, but as they mature, semantic and syntactic processing become distinct and localized. We observed a corresponding developmental shift in gamma oscillations around age 11: the younger groups (8-9 and 10-11) exhibited broadly distributed gamma activity with higher amplitudes, while the older groups (12-13 and 14-15) exhibited smaller and more localized gamma activity, especially over the left central and posterior regions. We interpret these findings as support for the argument that younger children rely more heavily on semantic processes for sentence comprehension than older children do. As in adults, semantic processing in children is associated with gamma activity.
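Gamma-band amplitude in EEG time-frequency analyses of this kind is commonly estimated by convolving the signal with a complex Morlet wavelet. The following is a minimal sketch on a simulated 40 Hz burst; the sampling rate, cycle count, and burst timing are assumptions for illustration, not this study's parameters.

```python
import numpy as np

fs = 500.0                                  # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(1)
# Simulated trial: broadband noise with a 40 Hz gamma burst in the second half
sig = 0.2 * rng.standard_normal(t.size)
sig[t >= 1.0] += np.sin(2 * np.pi * 40 * t[t >= 1.0])

def morlet_power(sig, fs, freq, n_cycles=7):
    """Instantaneous power at `freq` via convolution with a complex Morlet wavelet."""
    sd = n_cycles / (2 * np.pi * freq)               # Gaussian envelope width (s)
    wt = np.arange(-4 * sd, 4 * sd, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * wt) * np.exp(-wt**2 / (2 * sd**2))
    wavelet /= np.sum(np.abs(wavelet))               # unit gain at the centre frequency
    return np.abs(np.convolve(sig, wavelet, mode="same")) ** 2

power40 = morlet_power(sig, fs, 40.0)
burst = power40[t >= 1.1].mean()                     # steady part of the burst
baseline = power40[(t > 0.1) & (t < 0.9)].mean()     # pre-burst interval
```

Averaging such single-trial power over trials, frequencies, and electrode groups yields the time-frequency maps compared across age groups.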
Collapse
Affiliation(s)
- Mohammad Hossein Behboudi
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX 75080, USA; (M.H.B.)
- Callier Center for Communication Disorders, The University of Texas at Dallas, Dallas, TX 75235, USA
| | - Stephanie Castro
- Department of Human Development and Family Sciences, The University of Texas at Austin, Austin, TX 78705, USA
| | - Prasanth Chalamalasetty
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX 75080, USA; (M.H.B.)
| | - Mandy J. Maguire
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX 75080, USA; (M.H.B.)
- Callier Center for Communication Disorders, The University of Texas at Dallas, Dallas, TX 75235, USA
| |
Collapse
|
25
|
Mariani B, Nicoletti G, Barzon G, Ortiz Barajas MC, Shukla M, Guevara R, Suweis SS, Gervain J. Prenatal experience with language shapes the brain. SCIENCE ADVANCES 2023; 9:eadj3524. [PMID: 37992161 PMCID: PMC10664997 DOI: 10.1126/sciadv.adj3524] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/21/2023] [Accepted: 10/20/2023] [Indexed: 11/24/2023]
Abstract
Human infants acquire language with notable ease compared to adults, but the neural basis of their remarkable brain plasticity for language remains poorly understood. Applying a scaling analysis of neural oscillations to address this question, we show that newborns' electrophysiological activity exhibits increased long-range temporal correlations after stimulation with speech, particularly in the prenatally heard language, indicating the early emergence of brain specialization for the native language.
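Long-range temporal correlations of the kind this abstract describes are commonly quantified with a scaling analysis such as detrended fluctuation analysis (DFA), whose exponent is ~0.5 for uncorrelated noise and larger for long-range-correlated signals. The sketch below illustrates the idea on simulated noise and is not the authors' exact method; the scale range is an assumption.

```python
import numpy as np

def dfa_exponent(x, scales):
    """Detrended fluctuation analysis: slope of log F(n) vs log n (illustrative)."""
    y = np.cumsum(x - np.mean(x))                # integrated signal profile
    flucts = []
    for n in scales:
        n_win = len(y) // n
        F = 0.0
        for i in range(n_win):
            seg = y[i * n:(i + 1) * n]
            tt = np.arange(n)
            coef = np.polyfit(tt, seg, 1)        # remove the local linear trend
            F += np.mean((seg - np.polyval(coef, tt)) ** 2)
        flucts.append(np.sqrt(F / n_win))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

rng = np.random.default_rng(2)
white = rng.standard_normal(2 ** 13)
scales = np.array([16, 32, 64, 128, 256, 512])
alpha_white = dfa_exponent(white, scales)        # ~0.5: no long-range correlations
alpha_brown = dfa_exponent(np.cumsum(white), scales)  # ~1.5: strong correlations
```

In EEG applications the analysis is typically applied to band-limited amplitude envelopes rather than the raw signal.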
Collapse
Affiliation(s)
- Benedetta Mariani
- Department of Physics and Astronomy, University of Padua, Padua, Italy
- Padova Neuroscience Center, University of Padua, Padua, Italy
| | - Giorgio Nicoletti
- Department of Physics and Astronomy, University of Padua, Padua, Italy
- Padova Neuroscience Center, University of Padua, Padua, Italy
- Department of Mathematics, University of Padua, Padua, Italy
| | - Giacomo Barzon
- Department of Physics and Astronomy, University of Padua, Padua, Italy
- Padova Neuroscience Center, University of Padua, Padua, Italy
| | | | - Mohinish Shukla
- Padova Neuroscience Center, University of Padua, Padua, Italy
- Department of Developmental and Social Psychology, University of Padua, Padua, Italy
- Faculty of Social and Behavioural Sciences, University of Amsterdam, Amsterdam, Netherlands
| | - Ramón Guevara
- Department of Physics and Astronomy, University of Padua, Padua, Italy
- Padova Neuroscience Center, University of Padua, Padua, Italy
- Department of Developmental and Social Psychology, University of Padua, Padua, Italy
| | - Samir Simon Suweis
- Department of Physics and Astronomy, University of Padua, Padua, Italy
- Padova Neuroscience Center, University of Padua, Padua, Italy
| | - Judit Gervain
- Padova Neuroscience Center, University of Padua, Padua, Italy
- Integrative Neuroscience and Cognition Center, CNRS and Université Paris Cité, Paris, France
- Department of Developmental and Social Psychology, University of Padua, Padua, Italy
| |
Collapse
|
26
|
Ortiz-Barajas MC, Guevara R, Gervain J. Neural oscillations and speech processing at birth. iScience 2023; 26:108187. [PMID: 37965146 PMCID: PMC10641252 DOI: 10.1016/j.isci.2023.108187] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2022] [Revised: 08/29/2023] [Accepted: 10/09/2023] [Indexed: 11/16/2023] Open
Abstract
Are neural oscillations biologically endowed building blocks of the neural architecture for speech processing from birth, or do they require experience to emerge? In adults, delta, theta, and low-gamma oscillations support the simultaneous processing of phrasal, syllabic, and phonemic units in the speech signal, respectively. Using electroencephalography to investigate neural oscillations in the newborn brain, we reveal that delta and theta oscillations differ for rhythmically different languages, suggesting that these bands underlie newborns' universal ability to discriminate languages on the basis of rhythm. Additionally, higher theta activity during post-stimulus rest as compared to pre-stimulus rest suggests that stimulation after-effects are present from birth.
Collapse
Affiliation(s)
- Maria Clemencia Ortiz-Barajas
- Integrative Neuroscience and Cognition Center, CNRS & Université Paris Cité, 45 rue des Saints-Pères, 75006 Paris, France
| | - Ramón Guevara
- Department of Physics and Astronomy, University of Padua, Via Marzolo 8, 35131 Padua, Italy
| | - Judit Gervain
- Integrative Neuroscience and Cognition Center, CNRS & Université Paris Cité, 45 rue des Saints-Pères, 75006 Paris, France
- Department of Developmental and Social Psychology, University of Padua, Via Venezia 8, 35131 Padua, Italy
| |
Collapse
|
27
|
Menn KH, Männel C, Meyer L. Does Electrophysiological Maturation Shape Language Acquisition? PERSPECTIVES ON PSYCHOLOGICAL SCIENCE 2023; 18:1271-1281. [PMID: 36753616 PMCID: PMC10623610 DOI: 10.1177/17456916231151584] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/10/2023]
Abstract
Infants master temporal patterns of their native language along a developmental trajectory from slow to fast: shortly after birth, they recognize the slow acoustic modulations specific to their native language before tuning into faster language-specific patterns between 6 and 12 months of age. We propose here that this trajectory is constrained by neuronal maturation, in particular the gradual emergence of high-frequency neural oscillations in the infant electroencephalogram. Infants' initial focus on slow prosodic modulations is consistent with the prenatal availability of slow electrophysiological activity (i.e., theta- and delta-band oscillations). Our proposal is consistent with the temporal patterns of infant-directed speech, which initially amplifies slow modulations, approaching the faster modulation range of adult-directed speech only once infants' language has advanced sufficiently. Moreover, our proposal agrees with evidence from premature infants showing that maturational age is a stronger predictor of language development than ex utero exposure to speech, indicating that premature infants cannot exploit their earlier availability of speech because of electrophysiological constraints. In sum, we provide a new perspective on language acquisition, emphasizing neuronal development as a critical driving force of infants' language development.
Collapse
Affiliation(s)
- Katharina H. Menn
- Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- International Max Planck Research School on Neuroscience of Communication: Function, Structure, and Plasticity, Leipzig, Germany
| | - Claudia Männel
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Department of Audiology and Phoniatrics, Charité – Universitätsmedizin Berlin, Berlin, Germany
| | - Lars Meyer
- Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Clinic for Phoniatrics and Pedaudiology, University Hospital Münster, Münster, Germany
| |
Collapse
|
28
|
Zhang Z, Ma F, Guo T. Proactive and reactive language control in bilingual language production revealed by decoding sustained potentials and electroencephalography oscillations. Hum Brain Mapp 2023; 44:5065-5078. [PMID: 37515386 PMCID: PMC10502638 DOI: 10.1002/hbm.26433] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2023] [Revised: 07/04/2023] [Accepted: 07/07/2023] [Indexed: 07/30/2023] Open
Abstract
Adopting highly sensitive multivariate electroencephalography (EEG) and alpha-band decoding analyses, the present study investigated proactive and reactive language control during bilingual language production. In a language-switching task, Chinese-English bilinguals were asked to name pictures based on visually presented cues. EEG and alpha-band decoding accuracy associated with switch and non-switch trials were used as indicators of inhibition of the non-target language. Multivariate EEG decoding analyses showed that decoding accuracy in L1, but not in L2, was above chance level shortly after cue onset. In addition, alpha-band decoding results showed that decoding accuracy in L1 rose above chance level in an early and a late time window locked to the stimulus. Together, these asymmetric patterns of decoding accuracy indicate that both proactive and reactive attentional control over the dominant L1 are exerted during bilingual word production, with a possible overlap between the two control mechanisms. We discuss the theoretical implications of these findings for models of bilingual language control.
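The logic of condition decoding from multivariate EEG features can be sketched with a simple cross-validated classifier: if held-out accuracy exceeds the 50% chance level, the feature pattern carries condition information. The study's actual decoder and features are not reproduced here; this sketch uses a nearest-centroid classifier on simulated per-trial "alpha power" features, with an assumed effect on a subset of channels.

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_channels = 100, 32

# Simulated alpha-power features: "switch" trials get a small shift on 8 channels
switch = rng.standard_normal((n_trials, n_channels))
switch[:, :8] += 0.8                         # purely illustrative effect
nonswitch = rng.standard_normal((n_trials, n_channels))

X = np.vstack([switch, nonswitch])
y = np.array([1] * n_trials + [0] * n_trials)

def decode_accuracy(X, y, n_folds=5):
    """Cross-validated nearest-centroid decoding accuracy (illustrative)."""
    idx = np.arange(len(y))
    np.random.default_rng(0).shuffle(idx)
    correct = 0
    for fold in np.array_split(idx, n_folds):
        train = np.setdiff1d(idx, fold)
        c1 = X[train][y[train] == 1].mean(axis=0)   # class centroids from training data
        c0 = X[train][y[train] == 0].mean(axis=0)
        for i in fold:
            pred = 1 if np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0) else 0
            correct += pred == y[i]
    return correct / len(y)

acc = decode_accuracy(X, y)                  # should exceed the 0.5 chance level
```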
Collapse
Affiliation(s)
- Zhaoqi Zhang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain ResearchBeijing Normal UniversityBeijingChina
| | - Fengyang Ma
- School of EducationUniversity of CincinnatiCincinnatiOhioUSA
| | - Taomei Guo
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain ResearchBeijing Normal UniversityBeijingChina
| |
Collapse
|
29
|
Lewis AG, Schoffelen JM, Bastiaansen M, Schriefers H. Is beta in agreement with the relatives? Using relative clause sentences to investigate MEG beta power dynamics during sentence comprehension. Psychophysiology 2023; 60:e14332. [PMID: 37203219 DOI: 10.1111/psyp.14332] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2022] [Revised: 03/20/2023] [Accepted: 04/27/2023] [Indexed: 05/20/2023]
Abstract
There remains some debate about whether beta power effects observed during sentence comprehension reflect ongoing syntactic unification operations (beta-syntax hypothesis), or instead reflect maintenance or updating of the sentence-level representation (beta-maintenance hypothesis). In this study, we used magnetoencephalography to investigate beta power neural dynamics while participants read relative clause sentences that were initially ambiguous between a subject- or an object-relative reading. An additional condition included a grammatical violation at the disambiguation point in the relative clause sentences. The beta-maintenance hypothesis predicts a decrease in beta power at the disambiguation point for unexpected (and less preferred) object-relative clause sentences and grammatical violations, as both signal a need to update the sentence-level representation. While the beta-syntax hypothesis also predicts a beta power decrease for grammatical violations due to a disruption of syntactic unification operations, it instead predicts an increase in beta power for the object-relative clause condition because syntactic unification at the point of disambiguation becomes more demanding. We observed decreased beta power for both the agreement violation and object-relative clause conditions in typical left hemisphere language regions, which provides compelling support for the beta-maintenance hypothesis. Mid-frontal theta power effects were also present for grammatical violations and object-relative clause sentences, suggesting that violations and unexpected sentence interpretations are registered as conflicts by the brain's domain-general error detection system.
Collapse
Affiliation(s)
- Ashley Glen Lewis
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
| | - Jan-Mathijs Schoffelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
| | - Marcel Bastiaansen
- Academy for Leisure and Events, Breda University of Applied Sciences, Breda, the Netherlands
- Department of Cognitive Neuropsychology, School of Social and Behavioural Sciences, Tilburg University, Tilburg, the Netherlands
| | - Herbert Schriefers
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
| |
Collapse
|
30
|
Kovács P, Szalárdy O, Winkler I, Tóth B. Two effects of perceived speaker similarity in resolving the cocktail party situation - ERPs and functional connectivity. Biol Psychol 2023; 182:108651. [PMID: 37517603 DOI: 10.1016/j.biopsycho.2023.108651] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2023] [Revised: 07/15/2023] [Accepted: 07/24/2023] [Indexed: 08/01/2023]
Abstract
Following a speaker in multi-talker environments requires the listener to separate the speakers' voices and continuously focus attention on one speech stream. While the dissimilarity of voices may make speaker separation easier, it may also affect maintaining the focus of attention. To assess these effects, electrophysiological (EEG) and behavioral data were collected from healthy young adults while they listened to two concurrent speech streams and performed an online lexical detection task and an offline recognition memory task. Perceptual speaker similarity was manipulated on four levels: identical, similar, dissimilar, and opposite-gender speakers. Behavioral and electrophysiological data suggested that, while speaker similarity hinders auditory stream segregation, dissimilarity hinders maintaining the focus of attention by making the to-be-ignored speech stream more distracting. Thus, resolving the cocktail party situation poses different problems at different levels of perceived speaker similarity, resulting in different listening strategies.
Collapse
Affiliation(s)
- Petra Kovács
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary; Department of Cognitive Science, Budapest University of Technology and Economics, Budapest, Hungary
| | - Orsolya Szalárdy
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary; Institute of Behavioural Sciences, Faculty of Medicine, Semmelweis University, Budapest, Hungary
| | - István Winkler
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary
| | - Brigitta Tóth
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary.
| |
Collapse
|
31
|
Sarmukadam K, Behroozmand R. Neural oscillations reveal disrupted functional connectivity associated with impaired speech auditory feedback control in post-stroke aphasia. Cortex 2023; 166:258-274. [PMID: 37437320 PMCID: PMC10527672 DOI: 10.1016/j.cortex.2023.05.015] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2023] [Revised: 05/11/2023] [Accepted: 05/24/2023] [Indexed: 07/14/2023]
Abstract
Oscillatory brain activity reflects neuro-computational processes that are critical for speech production and sensorimotor control. In the present study, we used neural oscillations in left-hemisphere stroke survivors with aphasia as a model to investigate network-level functional connectivity deficits associated with disrupted speech auditory feedback control. Electroencephalography signals were recorded from 40 post-stroke aphasia and 39 neurologically intact control participants while they performed speech vowel production and listening tasks under pitch-shifted altered auditory feedback (AAF) conditions. Using the weighted phase-lag index, we calculated broadband (1-70 Hz) functional neural connectivity between electrode pairs covering the frontal, pre- and post-central, and parietal regions. Results revealed reduced fronto-central delta- and theta-band and centro-parietal low-beta-band connectivity in left-hemisphere electrodes, associated with diminished speech AAF compensation responses, in post-stroke aphasia compared with controls. Lesion-mapping analysis demonstrated that stroke-induced damage to multi-modal brain networks within the inferior frontal gyrus, Rolandic operculum, inferior parietal lobule, angular gyrus, and supramarginal gyrus predicted the reduced functional neural connectivity within the delta and low-beta bands during both tasks in aphasia. These results provide evidence that disrupted neural connectivity due to left-hemisphere brain damage can result in network-wide dysfunctions associated with impaired sensorimotor integration mechanisms for speech auditory feedback control.
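The weighted phase-lag index (WPLI) used in this study weights phase differences by the magnitude of the imaginary part of the cross-spectrum, so that connectivity driven by a consistent non-zero phase lag approaches 1 while zero-lag or inconsistent coupling approaches 0. Below is a minimal single-pair sketch using an FFT-based analytic signal; the simulated frequencies and noise levels are assumptions, and real pipelines compute WPLI per frequency band across trials.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (equivalent to a Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    return np.fft.ifft(X * h)

def wpli(x, y):
    """Weighted phase-lag index from the imaginary part of the cross-spectrum."""
    im = np.imag(analytic_signal(x) * np.conj(analytic_signal(y)))
    return np.abs(np.mean(im)) / np.mean(np.abs(im))

rng = np.random.default_rng(4)
t = np.arange(4000) / 500.0
# Two 6 Hz signals with a consistent quarter-cycle phase lag, plus noise
x = np.sin(2 * np.pi * 6 * t) + 0.2 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 6 * t - np.pi / 2) + 0.2 * rng.standard_normal(t.size)
coupled = wpli(x, y)          # near 1: stable non-zero phase lag
independent = wpli(rng.standard_normal(t.size), rng.standard_normal(t.size))
```

Because it discards zero-lag contributions, WPLI is relatively robust to volume conduction, which is why it is popular for electrode-level connectivity.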
Collapse
Affiliation(s)
- Kimaya Sarmukadam
- Speech Neuroscience Lab, Department of Communication Sciences and Disorders, Arnold School of Public Health, University of South Carolina, Columbia, SC, United States.
| | - Roozbeh Behroozmand
- Speech Neuroscience Lab, Department of Communication Sciences and Disorders, Arnold School of Public Health, University of South Carolina, Columbia, SC, United States.
| |
Collapse
|
32
|
Pei C, Huang X, Qiu Y, Peng Y, Gao S, Biswal B, Yao D, Liu Q, Li F, Xu P. Frequency-specific directed interactions between whole-brain regions during sentence processing using multimodal stimulus. Neurosci Lett 2023; 812:137409. [PMID: 37487970 DOI: 10.1016/j.neulet.2023.137409] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2023] [Revised: 06/26/2023] [Accepted: 07/20/2023] [Indexed: 07/26/2023]
Abstract
Neural oscillations subserve a broad range of speech processing and language comprehension functions. Using electroencephalography (EEG), we investigated the frequency-specific directed interactions between whole-brain regions while participants processed Chinese sentences presented in different modalities (i.e., auditory, visual, and audio-visual). The results indicate that low-frequency responses correspond to the aggregation of information flow in primary sensory cortices across modalities. Information flow dominated by high-frequency responses exhibited a bottom-up pattern from left posterior temporal to left frontal regions. Top-down information flow out of the left frontal lobe was characterized by the joint dominance of low- and high-frequency rhythms. Overall, our results suggest that the brain may be modality-independent when processing higher-order language information.
Collapse
Affiliation(s)
- Changfu Pei
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu 611731, China
| | - Xunan Huang
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu 611731, China; School of Foreign Languages, University of Electronic Science and Technology of China, Sichuan, Chengdu 611731, China
| | - Yuan Qiu
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu 611731, China
| | - Yueheng Peng
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu 611731, China
| | - Shan Gao
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu 611731, China; School of Foreign Languages, University of Electronic Science and Technology of China, Sichuan, Chengdu 611731, China
| | - Bharat Biswal
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu 611731, China; Department of Biomedical Engineering, New Jersey Institute of Technology, Newark, NJ 07102, USA
| | - Dezhong Yao
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu 611731, China
| | - Qiang Liu
- Institute of Brain and Psychological Sciences, Sichuan Normal University, Sichuan, Chengdu 610066, China.
| | - Fali Li
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu 611731, China.
| | - Peng Xu
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu 611731, China.
| |
Collapse
|
33
|
Piazza C, Dondena C, Riboldi EM, Riva V, Cantiani C. Baseline EEG in the first year of life: Preliminary insights into the development of autism spectrum disorder and language impairments. iScience 2023; 26:106987. [PMID: 37534149 PMCID: PMC10391601 DOI: 10.1016/j.isci.2023.106987] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2023] [Revised: 04/19/2023] [Accepted: 05/24/2023] [Indexed: 08/04/2023] Open
Abstract
Early identification of neurodevelopmental disorders is important to ensure prompt and effective intervention, thus improving later outcomes. Autism spectrum disorder (ASD) and language learning impairment (LLI) are among the most common neurodevelopmental disorders, and they share overlapping symptoms. This study aims to characterize baseline electroencephalography (EEG) spectral power in 6- and 12-month-old infants at higher likelihood of developing ASD and LLI, compared to typically developing infants, and to verify, preliminarily, whether the spectral power components associated with risk status are also linked with a later ASD or LLI diagnosis. We found risk status for ASD to be associated with reduced power in the low-frequency bands and risk status for LLI with increased power in the high-frequency bands. Interestingly, later diagnosis showed similar associations, thus supporting the potential role of EEG spectral power as a biomarker useful for understanding pathophysiology and classifying diagnostic outcomes.
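Band-limited spectral power of the kind compared in this study is typically obtained by averaging a power spectral density estimate within canonical frequency bands. A minimal periodogram-based sketch on a simulated theta-dominated signal follows; the sampling rate, band edges, and signal composition are illustrative assumptions (infant studies often use Welch averaging and relative power instead).

```python
import numpy as np

def band_power(sig, fs, f_lo, f_hi):
    """Average periodogram power within [f_lo, f_hi) Hz (illustrative)."""
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    psd = np.abs(np.fft.rfft(sig)) ** 2 / (fs * len(sig))
    band = (freqs >= f_lo) & (freqs < f_hi)
    return psd[band].mean()

fs = 250.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(5)
# Simulated baseline EEG: strong 6 Hz (theta) rhythm over broadband noise
sig = np.sin(2 * np.pi * 6 * t) + 0.5 * rng.standard_normal(t.size)

theta = band_power(sig, fs, 4, 8)     # low-frequency band
gamma = band_power(sig, fs, 30, 45)   # high-frequency band
```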
Collapse
Affiliation(s)
- Caterina Piazza
- Scientific Institute, IRCCS E. Medea, Bioengineering Lab, 23842 Bosisio Parini, Lecco, Italy
| | - Chiara Dondena
- Scientific Institute, IRCCS E. Medea, Child Psychopathology Unit, 23842 Bosisio Parini, Lecco, Italy
| | - Elena Maria Riboldi
- Scientific Institute, IRCCS E. Medea, Child Psychopathology Unit, 23842 Bosisio Parini, Lecco, Italy
| | - Valentina Riva
- Scientific Institute, IRCCS E. Medea, Child Psychopathology Unit, 23842 Bosisio Parini, Lecco, Italy
| | - Chiara Cantiani
- Scientific Institute, IRCCS E. Medea, Child Psychopathology Unit, 23842 Bosisio Parini, Lecco, Italy
| |
Collapse
|
34
|
Slaats S, Weissbart H, Schoffelen JM, Meyer AS, Martin AE. Delta-Band Neural Responses to Individual Words Are Modulated by Sentence Processing. J Neurosci 2023; 43:4867-4883. [PMID: 37221093 PMCID: PMC10312058 DOI: 10.1523/jneurosci.0964-22.2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2022] [Revised: 04/17/2023] [Accepted: 04/27/2023] [Indexed: 05/25/2023] Open
Abstract
To understand language, we need to recognize words and combine them into phrases and sentences. During this process, responses to the words themselves are changed. In a step toward understanding how the brain builds sentence structure, the present study concerns the neural readout of this adaptation. We ask whether low-frequency neural readouts associated with words change as a function of being in a sentence. To this end, we analyzed an MEG dataset by Schoffelen et al. (2019) of 102 human participants (51 women) listening to sentences and word lists, the latter lacking any syntactic structure and combinatorial meaning. Using temporal response functions and a cumulative model-fitting approach, we disentangled delta- and theta-band responses to lexical information (word frequency) from responses to sensory and distributional variables. The results suggest that delta-band responses to words are affected by sentence context in time and space, over and above entropy and surprisal. In both conditions, the word frequency response spanned left temporal and posterior frontal areas; however, the response appeared later in word lists than in sentences. In addition, sentence context determined whether inferior frontal areas were responsive to lexical information. In the theta band, the amplitude was larger in the word list condition at ∼100 milliseconds in right frontal areas. We conclude that low-frequency responses to words are changed by sentential context. The results of this study show how the neural representation of words is affected by structural context and as such provide insight into how the brain instantiates compositionality in language. SIGNIFICANCE STATEMENT: Human language is unprecedented in its combinatorial capacity: we are capable of producing and understanding sentences we have never heard before. 
Although the mechanisms underlying this capacity have been described in formal linguistics and cognitive science, how they are implemented in the brain remains to a large extent unknown. A large body of earlier work from the cognitive neuroscientific literature implies a role for delta-band neural activity in the representation of linguistic structure and meaning. In this work, we combine these insights and techniques with findings from psycholinguistics to show that meaning is more than the sum of its parts; the delta-band MEG signal differentially reflects lexical information inside and outside sentence structures.
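The distributional predictors this study controls for, entropy and surprisal, are defined from a language model's next-word probabilities. The toy sketch below computes both from an add-one-smoothed bigram model over a tiny made-up corpus; the corpus and smoothing choice are assumptions for illustration, not the study's language model.

```python
import math
from collections import Counter

corpus = "the dog chased the cat and the cat chased the mouse".split()

vocab = sorted(set(corpus))
bigrams = Counter(zip(corpus, corpus[1:]))   # adjacent word-pair counts
unigrams = Counter(corpus)

def surprisal(prev, word):
    """Bigram surprisal in bits: -log2 P(word | prev), add-one smoothed."""
    p = (bigrams[(prev, word)] + 1) / (unigrams[prev] + len(vocab))
    return -math.log2(p)

def entropy(prev):
    """Entropy in bits of the smoothed next-word distribution after `prev`."""
    probs = [(bigrams[(prev, w)] + 1) / (unigrams[prev] + len(vocab)) for w in vocab]
    return -sum(p * math.log2(p) for p in probs)

s_expected = surprisal("the", "cat")    # frequent continuation: lower surprisal
s_unexpected = surprisal("the", "and")  # unattested continuation: higher surprisal
```

In TRF analyses, these per-word values are placed as impulses at word onsets and regressed against the neural signal alongside acoustic and lexical predictors.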
Affiliation(s)
- Sophie Slaats: Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands; The International Max Planck Research School for Language Sciences, 6525 XD Nijmegen, The Netherlands
- Hugo Weissbart: Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 EN Nijmegen, The Netherlands
- Jan-Mathijs Schoffelen: Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 EN Nijmegen, The Netherlands
- Antje S Meyer: Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 EN Nijmegen, The Netherlands
- Andrea E Martin: Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 EN Nijmegen, The Netherlands

35
Van Herck S, Economou M, Bempt FV, Ghesquière P, Vandermosten M, Wouters J. Pulsatile modulation greatly enhances neural synchronization at syllable rate in children. Neuroimage 2023:120223. [PMID: 37315772 DOI: 10.1016/j.neuroimage.2023.120223] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2022] [Revised: 05/22/2023] [Accepted: 06/11/2023] [Indexed: 06/16/2023] Open
Abstract
Neural processing of the speech envelope is of crucial importance for speech perception and comprehension. This envelope processing is often investigated by measuring neural synchronization to sinusoidal amplitude-modulated stimuli at different modulation frequencies. However, it has been argued that these stimuli lack ecological validity. Pulsatile amplitude-modulated stimuli, on the other hand, are suggested to be more ecologically valid and efficient, and have increased potential to uncover the neural mechanisms behind some developmental disorders such as dyslexia. Nonetheless, pulsatile stimuli have not yet been investigated in pre-reading and beginning-reading children, a crucial age range for developmental reading research. We performed a longitudinal study to examine the potential of pulsatile stimuli in this age range. Fifty-two typically reading children were tested at three time points from the middle of their last year of kindergarten (5 years old) to the end of first grade (7 years old). Using electroencephalography, we measured neural synchronization to syllable-rate and phoneme-rate sinusoidal and pulsatile amplitude-modulated stimuli. Our results revealed that the pulsatile stimuli significantly enhance neural synchronization at syllable rate, compared to the sinusoidal stimuli. Additionally, the pulsatile stimuli at syllable rate elicited a different hemispheric specialization, more closely resembling natural speech envelope tracking. We postulate that using pulsatile stimuli greatly increases EEG data acquisition efficiency compared to the common sinusoidal amplitude-modulated stimuli in studies of younger children and in developmental reading research.
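To make the stimulus contrast above concrete: a sinusoidal AM envelope varies smoothly at the modulation rate, while a pulsatile envelope concentrates the modulation into brief pulses at that same rate. The construction below is a hypothetical sketch assuming a rectangular pulse train with a fixed duty cycle; the study's actual pulse shape and parameters may differ.

```python
import numpy as np

def am_envelopes(fm, fs, dur, duty=0.25, depth=1.0):
    """Sinusoidal vs. pulsatile amplitude-modulation envelopes at rate fm (Hz)."""
    t = np.arange(int(fs * dur)) / fs
    sinusoidal = (1 + depth * np.sin(2 * np.pi * fm * t)) / 2  # smooth, in [0, 1]
    phase = (t * fm) % 1.0
    pulsatile = (phase < duty).astype(float)                   # rectangular pulse train
    return t, sinusoidal, pulsatile

# 4 Hz (syllable-rate) modulation of a 1 s noise carrier at 16 kHz sampling.
fs, fm = 16000, 4.0
t, env_sin, env_pulse = am_envelopes(fm, fs, 1.0)
carrier = np.random.default_rng(1).standard_normal(len(t))
stim_sin, stim_pulse = carrier * env_sin, carrier * env_pulse
```

Both envelopes repeat at exactly 4 Hz, so neural synchronization can be compared at the same modulation frequency; only the envelope shape differs.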
Affiliation(s)
- Shauni Van Herck: Research Group ExpORL, Department of Neurosciences, KU Leuven, Belgium; Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven, Belgium
- Maria Economou: Research Group ExpORL, Department of Neurosciences, KU Leuven, Belgium; Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven, Belgium
- Femke Vanden Bempt: Research Group ExpORL, Department of Neurosciences, KU Leuven, Belgium; Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven, Belgium
- Pol Ghesquière: Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven, Belgium
- Jan Wouters: Research Group ExpORL, Department of Neurosciences, KU Leuven, Belgium

36
Branzi FM, Martin CD, Biau E. Activating words without language: beta and theta oscillations reflect lexical access and control processes during verbal and non-verbal object recognition tasks. Cereb Cortex 2023; 33:6228-6240. [PMID: 36724048 PMCID: PMC10183750 DOI: 10.1093/cercor/bhac499] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2022] [Revised: 11/27/2022] [Accepted: 11/28/2022] [Indexed: 02/02/2023] Open
Abstract
The intention to name an object modulates neural responses during object recognition tasks. However, the nature of this modulation is still unclear. We established whether a core operation in language, i.e. lexical access, can be observed even when the task does not require language (size-judgment task), and whether response selection in verbal versus non-verbal semantic tasks relies on similar neuronal processes. We measured and compared neuronal oscillatory activities and behavioral responses to the same set of pictures of meaningful objects, while manipulating the type of task participants had to perform (picture-naming versus size-judgment) and the type of stimuli used to measure lexical access (cognate versus non-cognate). Although activation of words was facilitated when the task required explicit word-retrieval (picture-naming task), lexical access occurred even without the intention to name the object (non-verbal size-judgment task). Activation of words and response selection were accompanied by beta (25-35 Hz) desynchronization and theta (3-7 Hz) synchronization, respectively. These effects were observed in both picture-naming and size-judgment tasks, suggesting that words became activated via similar mechanisms, irrespective of whether the task involves language explicitly. This finding has important implications for understanding the link between core linguistic operations and performance in verbal and non-verbal semantic tasks.
Affiliation(s)
- Francesca M Branzi: Department of Psychological Sciences, Institute of Population Health, University of Liverpool, Liverpool L69 7ZA, UK
- Clara D Martin: BCBL, Basque Center on Cognition, Brain and Language, Paseo Mikeletegi 69, San Sebastian 20009, Spain; IKERBASQUE, Basque Foundation for Science, Maria Diaz de Haro 3, Bilbao 48013, Spain
- Emmanuel Biau: Department of Psychological Sciences, Institute of Population Health, University of Liverpool, Liverpool L69 7ZA, UK

37
Champagne-Lavau M, Bolger D, Klein M. Impact of social knowledge about the speaker on irony understanding: Evidence from neural oscillations. Soc Neurosci 2023; 18:28-45. [PMID: 37161361 DOI: 10.1080/17470919.2023.2203948] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 05/11/2023]
Abstract
The aim of the present study was to explore neuronal oscillatory activity during a task of irony understanding. In this task, we manipulated implicit information about the speaker such as occupation stereotypes (i.e., sarcastic versus non-sarcastic). These stereotypes are social knowledge that influence the extent to which the speaker's ironic intent is understood. Time-frequency analyses revealed an early effect of speaker occupation stereotypes, as evidenced by greater synchronization in the upper gamma band (in the 150-250 ms time window) when the speaker had a sarcastic occupation, by a greater desynchronization for ironic context compared to literal context in the alpha1 band and by a greater synchronization in the theta band when the speaker had a non-sarcastic occupation. When the speaker occupation did not constrain the ironic interpretation, the interpretation of the sentence as ironic was revealed as resource-demanding and requiring pragmatic reanalysis, as shown mainly by the synchronization in the theta band and the desynchronization in the alpha1 band (in the 500-800 ms time window). These results support predictions of the constraint satisfaction model suggesting that during irony understanding, extra-linguistic information such as information on the speaker is used as soon as it is available, in the early stage of processing.
Affiliation(s)
- Madelyne Klein: LPL, CNRS, Aix-Marseille University, Aix-en-Provence, France

38
Zhang J, Xia J, Liu X, Olichney J. Machine Learning on Visibility Graph Features Discriminates the Cognitive Event-Related Potentials of Patients with Early Alzheimer's Disease from Healthy Aging. Brain Sci 2023; 13:770. [PMID: 37239242 PMCID: PMC10216358 DOI: 10.3390/brainsci13050770] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2023] [Revised: 05/02/2023] [Accepted: 05/04/2023] [Indexed: 05/28/2023] Open
Abstract
We present a framework for electroencephalography (EEG)-based classification between patients with Alzheimer's Disease (AD) and robust normal elderly (RNE) via a graph theory approach using visibility graphs (VGs). This EEG VG approach is motivated by research that has demonstrated differences between patients with early stage AD and RNE using various features of EEG oscillations or cognitive event-related potentials (ERPs). In the present study, EEG signals recorded during a word repetition experiment were wavelet decomposed into 5 sub-bands (δ,θ,α,β,γ). The raw and band-specific signals were then converted to VGs for analysis. Twelve graph features were tested for differences between the AD and RNE groups, and t-tests employed for feature selection. The selected features were then tested for classification using traditional machine learning and deep learning algorithms, achieving a classification accuracy of 100% with linear and non-linear classifiers. We further demonstrated that the same features can be generalized to the classification of mild cognitive impairment (MCI) converters, i.e., prodromal AD, against RNE with a maximum accuracy of 92.5%. Code is released online to allow others to test and reuse this framework.
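The natural visibility graph used in the framework above maps a time series to a graph: every sample is a node, and two samples are connected when the straight line between them passes strictly above every intermediate sample (the criterion of Lacasa et al.). A small self-contained sketch of that conversion, with a simple graph feature (mean degree) of the kind that could feed a classifier; the toy series is illustrative.

```python
import numpy as np

def visibility_graph(series):
    """Natural visibility graph: edge (a, b) iff every intermediate sample c lies
    strictly below the straight line joining (a, y_a) and (b, y_b)."""
    n = len(series)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            ya, yb = series[a], series[b]
            visible = all(
                series[c] < yb + (ya - yb) * (b - c) / (b - a)
                for c in range(a + 1, b)
            )
            if visible:
                edges.add((a, b))
    return edges

def mean_degree(edges, n):
    """Average node degree, one of many graph features usable for classification."""
    deg = np.zeros(n)
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return deg.mean()

series = [3.0, 1.0, 2.0, 0.5, 4.0]
edges = visibility_graph(series)
print(sorted(edges))  # 7 edges: all adjacent pairs plus the non-adjacent "visible" ones
```

Adjacent samples are always mutually visible, so the graph is always connected; differences between signals show up in the longer-range edges and the resulting degree statistics.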
Affiliation(s)
- Jesse Zhang: Computer Science Department, University of Southern California, Los Angeles, CA 90089, USA
- Jiangyi Xia: UC Davis Center for Mind and Brain, Davis, CA 95618, USA
- Xin Liu: UC Davis Computer Science Department, Davis, CA 95616, USA
- John Olichney: UC Davis Center for Mind and Brain, Davis, CA 95618, USA

39
Schneider JM, Poudel S, Abel AD, Maguire MJ. Age and vocabulary knowledge differentially influence the N400 and theta responses during semantic retrieval. Dev Cogn Neurosci 2023; 61:101251. [PMID: 37141791 DOI: 10.1016/j.dcn.2023.101251] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2022] [Revised: 03/23/2023] [Accepted: 05/01/2023] [Indexed: 05/06/2023] Open
Abstract
Using electroencephalography (EEG) to study the neural oscillations supporting language development is increasingly common; however, a clear understanding of the relationship between neural oscillations and traditional Event Related Potentials (ERPs) is needed to disentangle how maturation of language-related neural networks supports semantic processing throughout grade school. Theta and the N400 are both thought to index semantic retrieval but, in adults, are only weakly correlated with one another, indicating they may measure somewhat unique aspects of retrieval. Here, we studied the relationship between the N400 amplitude and theta power during semantic retrieval with key indicators of language abilities including age, vocabulary, reading comprehension and phonological memory in 226 children ages 8-15 years. The N400 and theta responses were positively correlated over posterior areas, but negatively correlated over frontal areas. When controlling for the N400 amplitude, the amplitude of the theta response was predicted by age, but not by language measures. On the other hand, when controlling for theta amplitude, the amplitude of the N400 was predicted by both vocabulary knowledge and age. These findings indicate that while there is a clear relationship between the N400 and theta responses, they may each index unique aspects of development related to semantic retrieval.
40
Bearden DJ, Ehrenberg A, Selawski R, Ono KE, Drane DL, Pedersen NP, Cernokova I, Marcus DJ, Luongo-Zink C, Chern JJ, Oliver CB, Ganote J, Al-Ramadhani R, Bhalla S, Gedela S, Zhang G, Kheder A. Four-Way Wada: SEEG-based mapping with electrical stimulation, high frequency activity, and phase amplitude coupling to complement traditional Wada and functional MRI prior to epilepsy surgery. Epilepsy Res 2023; 192:107129. [PMID: 36958107 PMCID: PMC11008564 DOI: 10.1016/j.eplepsyres.2023.107129] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2022] [Revised: 01/29/2023] [Accepted: 03/20/2023] [Indexed: 03/25/2023]
Abstract
Presurgical evaluation of refractory epilepsy involves functional investigations to minimize postoperative deficit. Assessing language and memory is conventionally undertaken using the Wada test and fMRI, and occasionally supplemented by data from invasive intracranial electroencephalography, such as electrical stimulation, corticocortical evoked potentials, mapping of high frequency activity, and phase amplitude coupling. We describe the comparative and complementary role of these methods to inform surgical decision-making and functional prognostication. We used the Wada paradigm to standardize testing across all modalities. Postoperative neuropsychological testing confirmed the deficits predicted by these methods.
Affiliation(s)
- D J Bearden: Children's Healthcare of Atlanta, Atlanta, GA, USA; Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA
- R Selawski: Children's Healthcare of Atlanta, Atlanta, GA, USA
- K E Ono: Children's Healthcare of Atlanta, Atlanta, GA, USA; Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA
- D L Drane: Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA; Department of Pediatrics, Emory University School of Medicine, Atlanta, GA, USA; Department of Neurology, University of Washington School of Medicine, Seattle, WA, USA
- N P Pedersen: Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA
- D J Marcus: Children's Healthcare of Atlanta, Atlanta, GA, USA
- C Luongo-Zink: Children's Healthcare of Atlanta, Atlanta, GA, USA; William James College, Newton, MA, USA
- J J Chern: Department of Neurosurgery, Children's Healthcare of Atlanta, USA
- C B Oliver: Children's Healthcare of Atlanta, Atlanta, GA, USA
- J Ganote: Children's Healthcare of Atlanta, Atlanta, GA, USA
- R Al-Ramadhani: University of Pittsburgh Medical Center Children's Hospital, Pittsburgh, PA 15224, USA
- S Bhalla: Children's Healthcare of Atlanta, Atlanta, GA, USA
- S Gedela: Children's Healthcare of Atlanta, Atlanta, GA, USA
- G Zhang: Children's Healthcare of Atlanta, Atlanta, GA, USA
- A Kheder: Children's Healthcare of Atlanta, Atlanta, GA, USA; Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA

41
Stinkeste C, Vincent MA, Delrue L, Brunellière A. Between alpha and gamma oscillations: Neural signatures of linguistic predictions and listener's attention to speaker's communication intention. Biol Psychol 2023; 180:108583. [PMID: 37156325 DOI: 10.1016/j.biopsycho.2023.108583] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2022] [Revised: 05/04/2023] [Accepted: 05/05/2023] [Indexed: 05/10/2023]
Abstract
When listeners hear a message produced by their interlocutor, they can predict upcoming words thanks to the sentential context and their attention can be focused on the speaker's communication intention. In two electroencephalographical (EEG) studies, we investigated the oscillatory correlates of prediction in spoken-language comprehension and how they are modulated by the listener's attention. Sentential contexts which were strongly predictive of a particular word were ended by a possessive adjective either matching the gender of the predicted word or not. Alpha, beta and gamma oscillations were studied as they were considered to play a crucial role in the predictive process. While evidence of word prediction was related to alpha fluctuations when listeners focused their attention on sentence meaning, changes in high-gamma oscillations were triggered by word prediction when listeners focused their attention on the speaker's communication intention. Independently of the endogenous attention to a level of linguistic information, the oscillatory correlates of word predictions in language comprehension were sensitive to the prosodic emphasis produced by the speaker at a late stage. These findings thus bear major implications for understanding the neural mechanisms that support predictive processing in spoken-language comprehension.
Affiliation(s)
- Charlotte Stinkeste: Univ. Lille, CNRS, UMR 9193 - SCALab - Sciences Cognitives et Sciences Affectives, F-59000 Lille, France
- Marion A Vincent: Univ. Lille, CNRS, UMR 9193 - SCALab - Sciences Cognitives et Sciences Affectives, F-59000 Lille, France
- Laurence Delrue: Univ. Lille, CNRS, UMR 8163 - STL - Savoirs Textes Langage, F-59000 Lille, France
- Angèle Brunellière: Univ. Lille, CNRS, UMR 9193 - SCALab - Sciences Cognitives et Sciences Affectives, F-59000 Lille, France

42
Behroozmand R, Sarmukadam K, Fridriksson J. Aberrant modulation of broadband neural oscillations reflects vocal sensorimotor deficits in post-stroke aphasia. Clin Neurophysiol 2023; 149:100-112. [PMID: 36934601 PMCID: PMC10101924 DOI: 10.1016/j.clinph.2023.02.176] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2022] [Revised: 02/17/2023] [Accepted: 02/25/2023] [Indexed: 03/11/2023]
Abstract
OBJECTIVE: The present study investigated the neural oscillatory correlates of impaired vocal sensorimotor control in left-hemisphere stroke. METHODS: Electroencephalography (EEG) signals were recorded from 34 stroke and 46 control subjects during speech vowel vocalization and listening tasks under normal and pitch-shifted auditory feedback. RESULTS: Time-frequency analyses revealed aberrantly decreased theta (4-8 Hz) and increased gamma band (30-80 Hz) power in frontal and posterior parieto-occipital regions as well as reduced alpha (8-13 Hz) and beta (13-30 Hz) desynchronization over sensorimotor areas before speech vowel vocalization in left-hemisphere stroke compared with controls. Subjects with stroke also presented with aberrant modulation of broadband (4-80 Hz) neural oscillations over sensorimotor regions after speech vowel onset during vocalization and listening under normal and altered auditory feedback. We found that the atypical pattern of broadband neural oscillatory modulation was correlated with diminished vocal feedback error compensation behavior and the severity of co-existing language-related aphasia symptoms associated with left-hemisphere stroke. CONCLUSIONS: These findings indicate complex interplays between the underlying mechanisms of speech and language and their deficits in post-stroke aphasia. SIGNIFICANCE: Our data motivate the notion of studying neural oscillatory dynamics as a critical component for the examination of speech and language disorders in post-stroke aphasia.
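Band-limited power of the kind analyzed above (theta, alpha, beta, gamma) is commonly estimated by bandpass filtering followed by the squared Hilbert envelope. The sketch below uses that generic recipe on a synthetic signal; it is not the authors' exact time-frequency method, and the filter order and test signal are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_power(x, fs, low, high):
    """Instantaneous band-limited power: zero-phase bandpass, then squared Hilbert envelope."""
    sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
    envelope = np.abs(hilbert(sosfiltfilt(sos, x)))
    return envelope ** 2

fs = 500
t = np.arange(fs * 2) / fs
# A 6 Hz (theta-range) component twice as large as a 40 Hz (gamma-range) component.
x = 2 * np.sin(2 * np.pi * 6 * t) + np.sin(2 * np.pi * 40 * t)

theta = band_power(x, fs, 4, 8).mean()
gamma = band_power(x, fs, 30, 80).mean()
print(theta > gamma)  # the theta-band component carries more power
```

A sinusoid of amplitude A has envelope power near A squared, so the 6 Hz component yields roughly four times the band power of the 40 Hz component, which the two estimates recover.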
Affiliation(s)
- Roozbeh Behroozmand: Speech Neuroscience Lab, Department of Communication Sciences and Disorders, Arnold School of Public Health, University of South Carolina, 915 Greene Street, Columbia, SC 29208, USA
- Kimaya Sarmukadam: Speech Neuroscience Lab, Department of Communication Sciences and Disorders, Arnold School of Public Health, University of South Carolina, 915 Greene Street, Columbia, SC 29208, USA
- Julius Fridriksson: The Aphasia Lab, Department of Communication Sciences and Disorders, Arnold School of Public Health, University of South Carolina, 915 Greene St, Columbia, SC 29208, USA; Center for the Study of Aphasia Recovery (C-STAR), Arnold School of Public Health, University of South Carolina, 915 Greene St, Columbia, SC 29208, USA

43
Kovács P, Tóth B, Honbolygó F, Szalárdy O, Kohári A, Mády K, Magyari L, Winkler I. Speech prosody supports speaker selection and auditory stream segregation in a multi-talker situation. Brain Res 2023; 1805:148246. [PMID: 36657631 DOI: 10.1016/j.brainres.2023.148246] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2022] [Revised: 01/06/2023] [Accepted: 01/12/2023] [Indexed: 01/19/2023]
Abstract
To process speech in a multi-talker environment, listeners need to segregate the mixture of incoming speech streams and focus their attention on one of them. Potentially, speech prosody could aid the segregation of different speakers, the selection of the desired speech stream, and detecting targets within the attended stream. For testing these issues, we recorded behavioral responses and extracted event-related potentials and functional brain networks from electroencephalographic signals recorded while participants listened to two concurrent speech streams, performing a lexical detection and a recognition memory task in parallel. Prosody manipulation was applied to the attended speech stream in one group of participants and to the ignored speech stream in another group. Naturally recorded speech stimuli were either intact, synthetically F0-flattened, or prosodically suppressed by the speaker. Results show that prosody - especially the parsing cues mediated by speech rate - facilitates stream selection, while playing a smaller role in auditory stream segmentation and target detection.
Affiliation(s)
- Petra Kovács: Department of Cognitive Science, Budapest University of Technology and Economics, Hungary
- Brigitta Tóth: Institute of Cognitive Neuroscience and Psychology, Research Center for Natural Sciences, Hungary
- Ferenc Honbolygó: Brain Imaging Center, Research Center for Natural Sciences, Hungary
- Orsolya Szalárdy: Institute of Cognitive Neuroscience and Psychology, Research Center for Natural Sciences, Hungary; Institute of Behavioural Sciences, Faculty of Medicine, Semmelweis University, Budapest, Hungary
- Anna Kohári: Research Group of Phonetics, Institute for General and Hungarian Linguistics, Hungarian Research Centre for Linguistics, Hungary
- Katalin Mády: Research Group of Phonetics, Institute for General and Hungarian Linguistics, Hungarian Research Centre for Linguistics, Hungary
- Lilla Magyari: Department of Social Studies, Faculty of Social Sciences, University of Stavanger, Stavanger, Norway; Norwegian Centre for Reading Education and Research, Faculty of Arts and Education, University of Stavanger, Stavanger, Norway
- István Winkler: Institute of Cognitive Neuroscience and Psychology, Research Center for Natural Sciences, Hungary

44
Wiesman AI, Donhauser PW, Degroot C, Diab S, Kousaie S, Fon EA, Klein D, Baillet S. Aberrant neurophysiological signaling associated with speech impairments in Parkinson's disease. NPJ Parkinsons Dis 2023; 9:61. [PMID: 37059749 PMCID: PMC10104849 DOI: 10.1038/s41531-023-00495-z] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2022] [Accepted: 03/16/2023] [Indexed: 04/16/2023] Open
Abstract
Difficulty producing intelligible speech is a debilitating symptom of Parkinson's disease (PD). Yet, both the robust evaluation of speech impairments and the identification of the affected brain systems are challenging. Using task-free magnetoencephalography, we examine the spectral and spatial definitions of the functional neuropathology underlying reduced speech quality in patients with PD using a new approach to characterize speech impairments and a novel brain-imaging marker. We found that the interactive scoring of speech impairments in PD (N = 59) is reliable across non-expert raters, and better related to the hallmark motor and cognitive impairments of PD than automatically-extracted acoustical features. By relating these speech impairment ratings to neurophysiological deviations from healthy adults (N = 65), we show that articulation impairments in patients with PD are associated with aberrant activity in the left inferior frontal cortex, and that functional connectivity of this region with somatomotor cortices mediates the influence of cognitive decline on speech deficits.
Affiliation(s)
- Alex I Wiesman: Montreal Neurological Institute, McGill University, 3801 Rue University, Montreal, QC, Canada
- Peter W Donhauser: Montreal Neurological Institute, McGill University, 3801 Rue University, Montreal, QC, Canada; Ernst Strüngmann Institute for Neuroscience, Frankfurt, Germany
- Clotilde Degroot: Montreal Neurological Institute, McGill University, 3801 Rue University, Montreal, QC, Canada
- Sabrina Diab: Department of Psychology, Université du Québec à Montréal, Montréal, QC, Canada
- Shanna Kousaie: School of Psychology, University of Ottawa, Ottawa, ON, Canada
- Edward A Fon: Montreal Neurological Institute, McGill University, 3801 Rue University, Montreal, QC, Canada
- Denise Klein: Montreal Neurological Institute, McGill University, 3801 Rue University, Montreal, QC, Canada; Center for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada
- Sylvain Baillet: Montreal Neurological Institute, McGill University, 3801 Rue University, Montreal, QC, Canada

45
Elmer S, Besson M, Rodriguez-Fornells A, Giroud N. Foreign speech sound discrimination and associative word learning lead to a fast reconfiguration of resting-state networks. Neuroimage 2023; 271:120026. [PMID: 36921678 DOI: 10.1016/j.neuroimage.2023.120026] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2022] [Revised: 03/09/2023] [Accepted: 03/12/2023] [Indexed: 03/18/2023] Open
Abstract
Learning new words in an unfamiliar language is a complex endeavor that requires the orchestration of multiple perceptual and cognitive functions. Although the neural mechanisms governing word learning are becoming better understood, little is known about the predictive value of resting-state (RS) metrics for foreign word discrimination and word learning attainment. In addition, it is still unknown which of the multistep processes involved in word learning have the potential to rapidly reconfigure RS networks. To address these research questions, we used electroencephalography (EEG), measured forty participants, and examined scalp-based power spectra, source-based spectral density maps and functional connectivity metrics before (RS1), in between (RS2) and after (RS3) a series of tasks which are known to facilitate the acquisition of new words in a foreign language, namely word discrimination, word-referent mapping and semantic generalization. Power spectra at the scalp level consistently revealed a reconfiguration of RS networks as a function of foreign word discrimination (RS1 vs. RS2) and word learning (RS1 vs. RS3) tasks in the delta, lower and upper alpha, and upper beta frequency ranges. Otherwise, functional reconfigurations at the source level were restricted to the theta (spectral density maps) and to the lower and upper alpha frequency bands (spectral density maps and functional connectivity). Notably, scalp RS changes related to the word discrimination tasks (difference between RS2 and RS1) correlated with word discrimination abilities (upper alpha band) and semantic generalization performance (theta and upper alpha bands), whereas functional changes related to the word learning tasks (difference between RS3 and RS1) correlated with word discrimination scores (lower alpha band). 
Taken together, these results highlight that foreign speech sound discrimination and word learning have the potential to rapidly reconfigure RS networks at multiple functional scales.
Affiliation(s)
- Stefan Elmer: Department of Computational Linguistics, Computational Neuroscience of Speech & Hearing, University of Zurich, Zurich, Switzerland; Bellvitge Biomedical Research Institute, Barcelona, Spain; Competence center Language & Medicine, University of Zurich, Switzerland
- Mireille Besson: Laboratoire de Neurosciences Cognitives, Université Publique de France, CNRS & Aix-Marseille University, Marseille, France
- Antoni Rodriguez-Fornells: Bellvitge Biomedical Research Institute, Barcelona, Spain; University of Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats, Barcelona, Spain
- Nathalie Giroud: Department of Computational Linguistics, Computational Neuroscience of Speech & Hearing, University of Zurich, Zurich, Switzerland; Center for Neuroscience Zurich, University and ETH of Zurich, Zurich, Switzerland; Competence center Language & Medicine, University of Zurich, Switzerland

46
Colenbier N, Sareen E, Del-Aguila Puntas T, Griffa A, Pellegrino G, Mantini D, Marinazzo D, Arcara G, Amico E. Task matters: Individual MEG signatures from naturalistic and neurophysiological brain states. Neuroimage 2023; 271:120021. [PMID: 36918139 DOI: 10.1016/j.neuroimage.2023.120021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2022] [Revised: 02/21/2023] [Accepted: 03/10/2023] [Indexed: 03/14/2023] Open
Abstract
The discovery that human brain connectivity data can be used as a "fingerprint" to identify a given individual from a population has become a burgeoning research area in the neuroscience field. Recent studies have identified the possibility to extract these brain signatures from the temporally rich dynamics of resting-state magnetoencephalography (MEG) recordings. Nevertheless, it is still uncertain to what extent MEG signatures can serve as an indicator of human identifiability during task-related conduct. Here, using MEG data from naturalistic and neurophysiological tasks, we show that identification improves in tasks relative to resting-state, providing compelling evidence for a task-dependent axis of MEG signatures. Notably, improvements in identifiability were more prominent in strictly controlled tasks. Lastly, the brain regions contributing most towards individual identification were also modified when engaged in task activities. We hope that this investigation advances our understanding of the driving factors behind brain identification from MEG signals.
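The fingerprinting logic described above reduces to a correlation-based identification rule: a subject is identified when their connectivity pattern from one session correlates more strongly with their own pattern from another session than with anyone else's. A toy sketch on synthetic data (subject count, feature dimension, and noise level are illustrative assumptions, not values from the study):

```python
import numpy as np

def identification_accuracy(session1, session2):
    """Each row is one subject's flattened connectivity pattern. A subject is
    'identified' when their session-1 row correlates best with their own session-2 row."""
    n = len(session1)
    corr = np.corrcoef(session1, session2)[:n, n:]  # cross-session correlation matrix
    return float(np.mean(np.argmax(corr, axis=1) == np.arange(n)))

rng = np.random.default_rng(0)
traits = rng.standard_normal((20, 100))             # stable individual "signatures"
s1 = traits + 0.5 * rng.standard_normal((20, 100))  # session 1 = signature + noise
s2 = traits + 0.5 * rng.standard_normal((20, 100))  # session 2 = signature + noise
acc = identification_accuracy(s1, s2)
```

When the stable individual component dominates the session noise, the diagonal of the cross-session correlation matrix wins and accuracy approaches 1; task conditions that stabilize the signature would raise accuracy by the same mechanism.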
Affiliation(s)
- Ekansh Sareen: Medical Image Processing Laboratory, Neuro-X Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
- Tamara Del-Aguila Puntas: Laboratorio de Psicobiologia, Departamento de Psicología Experimental, Facultad de Psicología, Universidad de Sevilla, Spain
- Alessandra Griffa: Medical Image Processing Laboratory, Neuro-X Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland; Department of Radiology and Medical Informatics, University of Geneva, Switzerland; Leenaards Memory Center, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Dante Mantini: Movement Control and Neuroplasticity Research Group, KU Leuven, Belgium
- Daniele Marinazzo: Department of Data Analysis, Faculty of Psychology and Educational Sciences, Ghent University, Ghent, Belgium
- Enrico Amico: Medical Image Processing Laboratory, Neuro-X Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland; Department of Radiology and Medical Informatics, University of Geneva, Switzerland

47
Rimmele JM, Sun Y, Michalareas G, Ghitza O, Poeppel D. Dynamics of Functional Networks for Syllable and Word-Level Processing. NEUROBIOLOGY OF LANGUAGE (CAMBRIDGE, MASS.) 2023; 4:120-144. [PMID: 37229144 PMCID: PMC10205074 DOI: 10.1162/nol_a_00089] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/18/2021] [Accepted: 11/07/2022] [Indexed: 05/27/2023]
Abstract
Speech comprehension requires the ability to temporally segment the acoustic input for higher-level linguistic analysis. Oscillation-based approaches suggest that low-frequency auditory cortex oscillations track syllable-sized acoustic information and therefore emphasize the relevance of syllabic-level acoustic processing for speech segmentation. How syllabic processing interacts with higher levels of speech processing, beyond segmentation, including the anatomical and neurophysiological characteristics of the networks involved, is debated. In two MEG experiments, we investigate lexical and sublexical word-level processing and the interactions with (acoustic) syllable processing using a frequency-tagging paradigm. Participants listened to disyllabic words presented at a rate of 4 syllables/s. Lexical content (native language), sublexical syllable-to-syllable transitions (foreign language), or mere syllabic information (pseudo-words) were presented. Two conjectures were evaluated: (i) syllable-to-syllable transitions contribute to word-level processing; and (ii) processing of words activates brain areas that interact with acoustic syllable processing. We show that syllable-to-syllable transition information, compared to mere syllable information, activated a bilateral superior temporal, middle temporal, and inferior frontal network. Lexical content additionally resulted in increased neural activity. Evidence for an interaction of word- and acoustic syllable-level processing was inconclusive. Decreases in syllable tracking (cerebroacoustic coherence) in auditory cortex and increases in cross-frequency coupling between right superior and middle temporal and frontal areas were found when lexical content was present compared to all other conditions; however, not when conditions were compared separately. The data provide experimental insight into how subtle and sensitive the role of syllable-to-syllable transition information in word-level processing is.
Collapse
Affiliation(s)
- Johanna M. Rimmele
- Departments of Neuroscience and Cognitive Neuropsychology, Max-Planck-Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Max Planck NYU Center for Language, Music and Emotion, Frankfurt am Main, Germany; New York, NY, USA
| | - Yue Sun
- Departments of Neuroscience and Cognitive Neuropsychology, Max-Planck-Institute for Empirical Aesthetics, Frankfurt am Main, Germany
| | - Georgios Michalareas
- Departments of Neuroscience and Cognitive Neuropsychology, Max-Planck-Institute for Empirical Aesthetics, Frankfurt am Main, Germany
| | - Oded Ghitza
- Departments of Neuroscience and Cognitive Neuropsychology, Max-Planck-Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- College of Biomedical Engineering & Hearing Research Center, Boston University, Boston, MA, USA
| | - David Poeppel
- Departments of Neuroscience and Cognitive Neuropsychology, Max-Planck-Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Department of Psychology and Center for Neural Science, New York University, New York, NY, USA
- Max Planck NYU Center for Language, Music and Emotion, Frankfurt am Main, Germany; New York, NY, USA
- Ernst Strüngmann Institute for Neuroscience, Frankfurt am Main, Germany
| |
Collapse
|
48
|
Orłowski P, Bola M. Sensory modality defines the relation between EEG Lempel-Ziv diversity and meaningfulness of a stimulus. Sci Rep 2023; 13:3453. [PMID: 36859725 PMCID: PMC9977735 DOI: 10.1038/s41598-023-30639-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2022] [Accepted: 02/27/2023] [Indexed: 03/03/2023] Open
Abstract
Diversity of brain activity is a robust neural correlate of global states of consciousness. It has been proposed that diversity measures specifically reflect the temporal variability of conscious experience. Previous studies supported this hypothesis by showing that perception of meaningful visual stimuli causes richer, more variable experiences than perception of meaningless stimuli, and this is reflected in greater brain signal diversity. To investigate whether this relation is consistent across sensory modalities, we presented participants with three versions of naturalistic visual and auditory stimuli (videos and audiobooks) that varied in the amount of meaning (original, scrambled, and noise), while recording electroencephalographic signals. We report three main findings. First, greater meaningfulness of visual stimuli was related to higher Lempel-Ziv diversity of EEG signals, but the opposite effect was found in the auditory modality. Second, visual perception was related to generally higher EEG diversity than auditory perception. Third, perception of meaningful visual stimuli and auditory stimuli respectively resulted in higher and lower EEG diversity in comparison to the resting state. In conclusion, the signal diversity of continuous brain signals depends on the stimulated sensory modality and therefore is not a generic index of the variability of conscious experience.
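The Lempel-Ziv diversity measure used in this study is typically computed by median-binarizing each EEG channel and counting the distinct phrases in a Lempel-Ziv (1976-style) parse of the resulting binary string. A minimal sketch of that computation (function names are illustrative; published pipelines usually also normalize by the complexity of a shuffled surrogate, which is omitted here):

```python
import statistics

def lz_complexity(s):
    """Count distinct phrases in a Lempel-Ziv parse of binary string s.

    Each new phrase is grown one symbol at a time until it no longer
    occurs anywhere in the previously seen prefix, then closed off.
    """
    phrases, i, n = 0, 0, len(s)
    while i < n:
        j = i + 1
        # extend the candidate phrase while it still appears earlier
        while j <= n and s[i:j] in s[:j - 1]:
            j += 1
        phrases += 1
        i = min(j, n)
    return phrases

def lz_diversity(signal):
    """Median-binarize a 1-D signal and return its LZ phrase count."""
    m = statistics.median(signal)
    return lz_complexity("".join("1" if x > m else "0" for x in signal))
```

On the classic example string "0001101001000101" this parse yields 6 phrases (0 | 001 | 10 | 100 | 1000 | 101), while a constant-looking string parses into very few.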
Collapse
Affiliation(s)
- Paweł Orłowski
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology of Polish Academy of Sciences, 3 Pasteur Street, 02-093 Warsaw, Poland
| | - Michał Bola
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology of Polish Academy of Sciences, 3 Pasteur Street, 02-093, Warsaw, Poland.
| |
Collapse
|
49
|
Kazanina N, Tavano A. What neural oscillations can and cannot do for syntactic structure building. Nat Rev Neurosci 2023; 24:113-128. [PMID: 36460920 DOI: 10.1038/s41583-022-00659-5] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/02/2022] [Indexed: 12/04/2022]
Abstract
Understanding what someone says requires relating words in a sentence to one another as instructed by the grammatical rules of a language. In recent years, the neurophysiological basis for this process has become a prominent topic of discussion in cognitive neuroscience. Current proposals about the neural mechanisms of syntactic structure building converge on a key role for neural oscillations in this process, but they differ in terms of the exact function that is assigned to them. In this Perspective, we discuss two proposed functions for neural oscillations - chunking and multiscale information integration - and evaluate their merits and limitations, taking into account the fundamentally hierarchical nature of syntactic representations in natural languages. We highlight insights that provide a tangible starting point for a neurocognitive model of syntactic structure building.
Collapse
Affiliation(s)
- Nina Kazanina
- University of Bristol, Bristol, UK.
- Higher School of Economics, Moscow, Russia.
| | | |
Collapse
|
50
|
Ladányi E, Novakovic M, Boorom OA, Aaron AS, Scartozzi AC, Gustavson DE, Nitin R, Bamikole PO, Vaughan C, Fromboluti EK, Schuele CM, Camarata SM, McAuley JD, Gordon RL. Using Motor Tempi to Understand Rhythm and Grammatical Skills in Developmental Language Disorder and Typical Language Development. NEUROBIOLOGY OF LANGUAGE (CAMBRIDGE, MASS.) 2023; 4:1-28. [PMID: 36875176 PMCID: PMC9979588 DOI: 10.1162/nol_a_00082] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/06/2021] [Accepted: 09/19/2022] [Indexed: 04/18/2023]
Abstract
Children with developmental language disorder (DLD) show relative weaknesses on rhythm tasks beyond their characteristic linguistic impairments. The current study compares preferred tempo and the width of an entrainment region for 5- to 7-year-old typically developing (TD) children and children with DLD and considers the associations with rhythm aptitude and expressive grammar skills in the two populations. Preferred tempo was measured with a spontaneous motor tempo task (tapping tempo at a comfortable speed), and the width (range) of an entrainment region was measured by the difference between the upper (slow) and lower (fast) limits of tapping a rhythm normalized by an individual's spontaneous motor tempo. Data from N = 16 children with DLD and N = 114 TD children showed that whereas entrainment-region width did not differ across the two groups, slowest motor tempo, the determinant of the upper (slow) limit of the entrainment region, was at a faster tempo in children with DLD vs. TD. In other words, the DLD group could not pace their slow tapping as slowly as the TD group. Entrainment-region width was positively associated with rhythm aptitude and receptive grammar even after taking into account potential confounding factors, whereas expressive grammar did not show an association with any of the tapping measures. Preferred tempo was not associated with any study variables after including covariates in the analyses. These results motivate future neuroscientific studies of low-frequency neural oscillatory mechanisms as the potential neural correlates of entrainment-region width and their associations with musical rhythm and spoken language processing in children with typical and atypical language development.
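The entrainment-region width defined in this abstract reduces to simple arithmetic on tapping inter-onset intervals (IOIs): the slow-to-fast tapping range normalized by the individual's spontaneous motor tempo. A minimal sketch under that definition (argument names are illustrative, not taken from the study's materials):

```python
def entrainment_region_width(slowest_ioi_ms, fastest_ioi_ms, smt_ioi_ms):
    """Width of the entrainment region.

    Difference between the upper (slow) and lower (fast) tapping limits,
    expressed as inter-onset intervals in ms, normalized by the
    individual's spontaneous motor tempo (SMT) IOI.
    """
    if slowest_ioi_ms <= fastest_ioi_ms:
        raise ValueError("slow limit must be a longer IOI than fast limit")
    return (slowest_ioi_ms - fastest_ioi_ms) / smt_ioi_ms
```

For example, a child whose comfortable tapping IOI is 600 ms and who can entrain between 300 ms and 1200 ms IOIs has a normalized width of (1200 - 300) / 600 = 1.5; the DLD group's faster slowest-tempo limit shrinks the numerator from the slow end.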
Collapse
Affiliation(s)
- Enikő Ladányi
- Department of Otolaryngology—Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN
- Department of Linguistics, University of Potsdam, Potsdam, Germany
| | - Michaela Novakovic
- Department of Pharmacology, Northwestern University Feinberg School of Medicine, Chicago, IL
| | - Olivia A. Boorom
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- Department of Speech-Language-Hearing: Sciences and Disorders, University of Kansas, Lawrence, KS
| | - Allison S. Aaron
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA
| | - Alyssa C. Scartozzi
- Department of Otolaryngology—Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN
- Vanderbilt Genetics Institute, Vanderbilt University, Nashville, TN
| | - Daniel E. Gustavson
- Institute for Behavioral Genetics, University of Colorado Boulder, Boulder, CO
| | - Rachana Nitin
- Department of Otolaryngology—Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN
| | - Peter O. Bamikole
- Department of Anesthesiology and Perioperative Medicine, Oregon Health & Science University, Portland, OR
| | - Chloe Vaughan
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
| | | | - C. Melanie Schuele
- Department of Hearing and Speech Sciences, Vanderbilt University School of Medicine, Nashville, TN
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN
| | - Stephen M. Camarata
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN
| | - J. Devin McAuley
- Department of Psychology, Michigan State University, East Lansing, MI
| | - Reyna L. Gordon
- Department of Otolaryngology—Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN
- Vanderbilt Genetics Institute, Vanderbilt University, Nashville, TN
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN
| |
Collapse
|