51. Hardy SM, Jensen O, Wheeldon L, Mazaheri A, Segaert K. Modulation in alpha band activity reflects syntax composition: an MEG study of minimal syntactic binding. Cereb Cortex 2023;33:497-511. PMID: 35311899; PMCID: PMC9890467; DOI: 10.1093/cercor/bhac080.
Abstract
Successful sentence comprehension requires the binding, or composition, of multiple words into larger structures to establish meaning. Using magnetoencephalography, we investigated the neural mechanisms involved in binding at the syntax level, in a task where contributions from semantics were minimized. Participants were auditorily presented with minimal sentences that required binding (pronoun and pseudo-verb with the corresponding morphological inflection; "she grushes") and pseudo-verb wordlists that did not require binding ("cugged grushes"). Relative to no binding, we found that syntactic binding was associated with a modulation in alpha band (8-12 Hz) activity in left-lateralized language regions. First, we observed a significantly smaller increase in alpha power around the presentation of the target word ("grushes") that required binding (-0.05 to 0.1 s), which we suggest reflects an expectation of binding to occur. Second, during binding of the target word (0.15-0.25 s), we observed significantly decreased alpha phase-locking between the left inferior frontal gyrus and the left middle/inferior temporal cortex, which we suggest reflects alpha-driven cortical disinhibition serving to strengthen communication within the syntax composition neural network. Altogether, our findings highlight the critical role of rapid spatial-temporal alpha band activity in controlling the allocation, transfer, and coordination of the brain's resources during syntax composition.
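The two alpha-band measures reported here, spectral power and inter-regional phase-locking, can be illustrated with standard signal-processing steps. Below is a minimal sketch on synthetic data; the two "region" signals, the phase lag, and all parameters are invented for illustration and do not reproduce the authors' MEG pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250                      # sampling rate in Hz (arbitrary for this sketch)
t = np.arange(0, 2, 1 / fs)   # 2 s of data
rng = np.random.default_rng(0)

# Two synthetic "regions" sharing a 10 Hz (alpha) rhythm with a fixed phase lag
region_a = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
region_b = np.sin(2 * np.pi * 10 * t - 0.8) + 0.5 * rng.standard_normal(t.size)

# Band-pass both signals to the alpha range (8-12 Hz)
b, a = butter(4, [8, 12], btype="bandpass", fs=fs)
h_a = hilbert(filtfilt(b, a, region_a))   # analytic signals give instantaneous
h_b = hilbert(filtfilt(b, a, region_b))   # power and phase

alpha_power = float(np.mean(np.abs(h_a) ** 2))   # mean alpha power of region A

# Phase-locking value: consistency of the phase difference over time
# (1 = perfectly locked, 0 = no consistent phase relation)
phase_diff = np.angle(h_a) - np.angle(h_b)
plv = float(np.abs(np.mean(np.exp(1j * phase_diff))))
```

Decreased phase-locking between regions, like that reported between left IFG and left temporal cortex, would appear as a drop in `plv` across conditions.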
Affiliation(s)
- Sophie M Hardy: Centre for Human Brain Health, University of Birmingham, Birmingham B15 2TT, UK; Department of Psychology, University of Warwick, Coventry CV4 7AL, UK
- Ole Jensen: Centre for Human Brain Health, University of Birmingham, Birmingham B15 2TT, UK
- Linda Wheeldon: Department of Foreign Languages and Translations, University of Agder, Kristiansand 4630, Norway
- Ali Mazaheri: Centre for Human Brain Health, University of Birmingham, Birmingham B15 2TT, UK; School of Psychology, University of Birmingham, Birmingham B15 2TT, UK
- Katrien Segaert: Centre for Human Brain Health, University of Birmingham, Birmingham B15 2TT, UK; School of Psychology, University of Birmingham, Birmingham B15 2TT, UK
52. Pastore A, Tomassini A, Delis I, Dolfini E, Fadiga L, D'Ausilio A. Speech listening entails neural encoding of invisible articulatory features. Neuroimage 2022;264:119724. PMID: 36328272; DOI: 10.1016/j.neuroimage.2022.119724.
Abstract
Speech processing entails a complex interplay between bottom-up and top-down computations. The former is reflected in neural entrainment to the quasi-rhythmic properties of speech acoustics, while the latter is thought to guide the selection of the most relevant input subspace. Top-down signals are believed to originate mainly from motor regions, yet similar activity has been shown to tune attentional cycles also for simpler, non-speech stimuli. Here we examined whether, during speech listening, the brain reconstructs the articulatory patterns associated with speech production. We measured electroencephalographic (EEG) data while participants listened to sentences for which the articulatory kinematics of the lips, jaw, and tongue had also been recorded during production (via electromagnetic articulography, EMA). We captured the patterns of articulatory coordination through principal component analysis (PCA) and used partial information decomposition (PID) to identify whether the speech envelope and each of the kinematic components provided unique, synergistic, and/or redundant information about the EEG signals. Interestingly, tongue movements carry both unique and synergistic information with the envelope that is encoded in the listener's brain activity. This demonstrates that during speech listening the brain retrieves highly specific and unique motor information that is never accessible through vision, thus leveraging audio-motor maps that most likely arise from the acquisition of speech production during development.
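The dimensionality-reduction step described above can be sketched as a plain PCA over the multichannel kinematic recordings. A toy version on synthetic data follows; the channel count and the two-gesture latent structure are invented, and the study's actual EMA preprocessing is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_channels = 1000, 6   # e.g., x/y positions of a few articulatory sensors

# Synthetic kinematics driven by two underlying articulatory "gestures"
latent = rng.standard_normal((n_samples, 2))
mixing = rng.standard_normal((2, n_channels))
kinematics = latent @ mixing + 0.1 * rng.standard_normal((n_samples, n_channels))

# PCA via SVD of the mean-centered data matrix
X = kinematics - kinematics.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
explained = S ** 2 / np.sum(S ** 2)   # variance ratio per principal component
components = X @ Vt.T                 # component time courses ("kinematic components")
```

Each column of `components` is one coordinated movement pattern; time courses like these would then enter an information-theoretic comparison against the EEG.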
Affiliation(s)
- A Pastore: Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy; Department of Neuroscience and Rehabilitation, Università di Ferrara, Ferrara, Italy
- A Tomassini: Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy
- I Delis: School of Biomedical Sciences, University of Leeds, Leeds, UK
- E Dolfini: Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy; Department of Neuroscience and Rehabilitation, Università di Ferrara, Ferrara, Italy
- L Fadiga: Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy; Department of Neuroscience and Rehabilitation, Università di Ferrara, Ferrara, Italy
- A D'Ausilio: Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy; Department of Neuroscience and Rehabilitation, Università di Ferrara, Ferrara, Italy
53. Chen X, Shi X, Wu Y, Zhou Z, Chen S, Han Y, Shan C. Gamma oscillations and application of 40-Hz audiovisual stimulation to improve brain function. Brain Behav 2022;12:e2811. PMID: 36374520; PMCID: PMC9759142; DOI: 10.1002/brb3.2811.
Abstract
BACKGROUND: Audiovisual stimulation (auditory stimulation, light stimulation, and combined audiovisual stimulation) is a non-invasive technique that can induce gamma oscillations. It has received increasing attention in recent years and has seen preliminary clinical application in the rehabilitation of brain dysfunctions, including cognitive, language, motor, mood, and sleep dysfunctions. However, the exact mechanism underlying the therapeutic effect of 40-Hz audiovisual stimulation remains unclear, and its clinical applications in the rehabilitation of brain dysfunction require further research. OBJECTIVE: To provide new insights into brain dysfunction rehabilitation, this review first discusses the mechanism underlying 40-Hz audiovisual stimulation and then briefly evaluates its clinical application in the rehabilitation of brain dysfunctions. RESULTS: In animal experiments and clinical trials, 40-Hz audiovisual stimulation has been demonstrated to affect synaptic plasticity and modify the connectivity of related brain networks. Although promising efficacy has been shown in the treatment of cognitive, mood, and sleep impairment, research into its application in language and motor dysfunctions is still ongoing. CONCLUSIONS: Although 40-Hz audiovisual stimulation appears effective in treating cognitive, mood, and sleep disorders, its role in language and motor dysfunctions has yet to be determined.
Affiliation(s)
- Xixi Chen: Department of Rehabilitation Medicine, Yueyang Hospital of Integrated Traditional Chinese and Western Medicine, Shanghai University of Traditional Chinese Medicine, Shanghai, China; School of Rehabilitation Science, Shanghai University of Traditional Chinese Medicine, Shanghai, China
- Xiaolong Shi: Department of Rehabilitation Medicine, Yueyang Hospital of Integrated Traditional Chinese and Western Medicine, Shanghai University of Traditional Chinese Medicine, Shanghai, China; School of Rehabilitation Science, Shanghai University of Traditional Chinese Medicine, Shanghai, China
- Yuwei Wu: Department of Rehabilitation Medicine, Yueyang Hospital of Integrated Traditional Chinese and Western Medicine, Shanghai University of Traditional Chinese Medicine, Shanghai, China
- Zhiqing Zhou: Department of Rehabilitation Medicine, Yueyang Hospital of Integrated Traditional Chinese and Western Medicine, Shanghai University of Traditional Chinese Medicine, Shanghai, China; School of Rehabilitation Science, Shanghai University of Traditional Chinese Medicine, Shanghai, China
- Songmei Chen: School of Rehabilitation Science, Shanghai University of Traditional Chinese Medicine, Shanghai, China; Department of Rehabilitation Medicine, Shanghai No.3 Rehabilitation Hospital, Shanghai, China
- Yan Han: Department of Neurology, Yueyang Hospital of Integrated Traditional Chinese and Western Medicine, Shanghai University of Traditional Chinese Medicine, Shanghai, China
- Chunlei Shan: Department of Rehabilitation Medicine, Yueyang Hospital of Integrated Traditional Chinese and Western Medicine, Shanghai University of Traditional Chinese Medicine, Shanghai, China; School of Rehabilitation Science, Shanghai University of Traditional Chinese Medicine, Shanghai, China; Engineering Research Center of Traditional Chinese Medicine Intelligent Rehabilitation, Ministry of Education, Shanghai, China
54. Neurodevelopmental oscillatory basis of speech processing in noise. Dev Cogn Neurosci 2022;59:101181. PMID: 36549148; PMCID: PMC9792357; DOI: 10.1016/j.dcn.2022.101181.
Abstract
Humans' extraordinary ability to understand speech in noise relies on multiple processes that develop with age. Using magnetoencephalography (MEG), we characterize the underlying neuromaturational basis by quantifying how cortical oscillations in 144 participants (aged 5-27 years) track phrasal and syllabic structures in connected speech mixed with different types of noise. While the extraction of prosodic cues from clear speech was stable during development, its maintenance in a multi-talker background matured rapidly up to age 9 and was associated with speech comprehension. Furthermore, while the extraction of subtler information provided by syllables matured at age 9, its maintenance in noisy backgrounds progressively matured until adulthood. Altogether, these results highlight distinct behaviorally relevant maturational trajectories for the neuronal signatures of speech perception. In accordance with grain-size proposals, neuromaturational milestones are reached increasingly late for linguistic units of decreasing size, with further delays incurred by noise.
55. Chiang HS, Motes M, Kraut M, Vanneste S, Hart J. High-definition transcranial direct current stimulation modulates theta response during a Go-NoGo task in traumatic brain injury. Clin Neurophysiol 2022;143:36-47. PMID: 36108520; PMCID: PMC10545365; DOI: 10.1016/j.clinph.2022.08.015.
Abstract
OBJECTIVE High Definition transcranial Direct Current Stimulation (HD-tDCS) has been shown to improve cognitive performance in individuals with chronic traumatic brain injury (TBI), although electrophysiological mechanisms remain unclear. METHODS Veterans with TBI underwent active anodal (N = 15) vs sham (N = 10) HD-tDCS targeting the pre-supplementary motor area (pre-SMA). A Go-NoGo task was conducted simultaneously with electroencephalography (EEG) at baseline and after intervention completion. RESULTS We found increased theta event-related spectral perturbation (ERSP) and inter-trial phase coherence (ITPC) during Go in the frontal midline electrodes overlying the pre-SMA after active HD-tDCS intervention, but not after sham. We also found increased theta phase coherence during Go between the frontal midline and left posterior regions after active HD-tDCS. A late increase in alpha-theta ERSP was found in the left central region after active HD-tDCS. Notably, lower baseline theta ERSP/ITPC in the frontal midline region predicted more post-intervention improvement in Go performance only in the active group. CONCLUSIONS There are local and interregional oscillatory changes in response to HD-tDCS modulation in chronic TBI. SIGNIFICANCE These findings may guide future research in utilizing EEG time-frequency metrics not only to measure interventional effects, but also in selecting candidates who may optimally respond to treatment.
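Inter-trial phase coherence, one of the EEG metrics used in this study, can be computed from per-trial instantaneous phases. A self-contained sketch on simulated trials follows; the trial count, frequency, jitter, and noise levels are made up for illustration and this is not the study's analysis pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 200
t = np.arange(0, 1, 1 / fs)
rng = np.random.default_rng(2)

# 40 simulated trials with a partially phase-consistent 6 Hz (theta) response
n_trials = 40
trials = np.array([
    np.cos(2 * np.pi * 6 * t + 0.3 * rng.standard_normal())
    + 0.5 * rng.standard_normal(t.size)
    for _ in range(n_trials)
])

# Theta band-pass (4-8 Hz), then instantaneous phase per trial and time point
b, a = butter(4, [4, 8], btype="bandpass", fs=fs)
phases = np.angle(hilbert(filtfilt(b, a, trials, axis=1), axis=1))

# ITPC: length of the mean unit phase vector across trials, per time point
# (1 = identical phase in every trial, near 0 = random phases)
itpc = np.abs(np.mean(np.exp(1j * phases), axis=0))
```

An ERSP-style measure would instead average the squared magnitude of the analytic signal across trials and normalize it to a pre-stimulus baseline.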
Affiliation(s)
- Hsueh-Sheng Chiang: Department of Neurology, The University of Texas Southwestern Medical Center, 5323 Harry Hines Boulevard, Dallas, TX 75390, USA; School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 W Campbell Rd, Richardson, TX 75080, USA
- Michael Motes: School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 W Campbell Rd, Richardson, TX 75080, USA
- Michael Kraut: Department of Radiology, The Johns Hopkins University School of Medicine, 601 N Caroline St, Baltimore, MD 21205, USA
- Sven Vanneste: School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 W Campbell Rd, Richardson, TX 75080, USA; Trinity College Dublin, The University of Dublin, College Green, Dublin 2, Ireland
- John Hart: Department of Neurology, The University of Texas Southwestern Medical Center, 5323 Harry Hines Boulevard, Dallas, TX 75390, USA; School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 W Campbell Rd, Richardson, TX 75080, USA
56. Boos M, Kobi M, Elmer S, Jäncke L. The influence of experience on cognitive load during simultaneous interpretation. Brain Lang 2022;234:105185. PMID: 36130466; DOI: 10.1016/j.bandl.2022.105185.
Abstract
Simultaneous interpretation is a complex task that is assumed to involve a high workload. To corroborate this association, we measured workload during three tasks of increasing complexity (listening, shadowing, and interpreting) using electroencephalography and self-assessments in four groups of participants with varying experience in simultaneous interpretation. The self-assessment data showed that professional interpreters perceived the most workload-inducing condition, the interpreting task, as less demanding than the less experienced participants did. This higher subjectively perceived workload in non-interpreters was paralleled by increasing frontal theta power from listening to interpreting, whereas this modulation was less pronounced in professional interpreters. On both workload measures, trainee interpreters fell between professional interpreters and non-interpreters. Since the non-interpreters also demonstrated high second-language proficiency and exposure, our findings provide evidence for an influence of interpretation training on the workload experienced during simultaneous interpretation.
Affiliation(s)
- Michael Boos: Division Neuropsychology, Department of Psychology, University of Zurich, Binzmühlestrasse 14/25, 8050 Zurich, Switzerland
- Matthias Kobi: Division Neuropsychology, Department of Psychology, University of Zurich, Binzmühlestrasse 14/25, 8050 Zurich, Switzerland
- Stefan Elmer: Division Neuropsychology, Department of Psychology, University of Zurich, Binzmühlestrasse 14/25, 8050 Zurich, Switzerland; Computational Neuroscience of Speech & Hearing, Department of Computational Linguistics, University of Zurich, Andreasstrasse 15, 8050 Zurich, Switzerland
- Lutz Jäncke: Division Neuropsychology, Department of Psychology, University of Zurich, Binzmühlestrasse 14/25, 8050 Zurich, Switzerland; University Research Priority Program (URPP) "Dynamics of Healthy Aging", University of Zurich, Andreasstrasse 15/2, 8050 Zurich, Switzerland
57. Syntax through the looking glass: A review on two-word linguistic processing across behavioral, neuroimaging and neurostimulation studies. Neurosci Biobehav Rev 2022;142:104881. DOI: 10.1016/j.neubiorev.2022.104881.
58. Suess N, Hauswald A, Reisinger P, Rösch S, Keitel A, Weisz N. Cortical tracking of formant modulations derived from silently presented lip movements and its decline with age. Cereb Cortex 2022;32:4818-4833. PMID: 35062025; PMCID: PMC9627034; DOI: 10.1093/cercor/bhab518.
Abstract
The integration of visual and auditory cues is crucial for successful processing of speech, especially under adverse conditions. Recent reports have shown that when participants watch muted videos of speakers, the phonological information about the acoustic speech envelope, which is associated with but independent from the speakers' lip movements, is tracked by the visual cortex. However, the speech signal also carries richer acoustic details, for example, about the fundamental frequency and the resonant frequencies, whose visuo-phonological transformation could aid speech processing. Here, we investigated the neural basis of the visuo-phonological transformation of these more fine-grained acoustic details and assessed how it changes as a function of age. We recorded whole-head magnetoencephalographic (MEG) data while the participants watched silent normal (i.e., natural) and reversed videos of a speaker and paid attention to their lip movements. We found that the visual cortex is able to track the unheard natural modulations of resonant frequencies (or formants) and the pitch (or fundamental frequency) linked to lip movements. Importantly, only the processing of natural unheard formants decreases significantly with age in the visual and also in the cingulate cortex. This is not the case for the processing of the unheard speech envelope, the fundamental frequency, or the purely visual information carried by lip movements. These results show that unheard spectral fine details (along with the unheard acoustic envelope) are transformed from a mere visual to a phonological representation. Aging especially affects the ability to derive spectral dynamics at formant frequencies. As listening in noisy environments should capitalize on the ability to track spectral fine details, our results provide a novel focus on compensatory processes in such challenging situations.
Affiliation(s)
- Nina Suess: Department of Psychology, Centre for Cognitive Neuroscience, University of Salzburg, Salzburg 5020, Austria
- Anne Hauswald: Department of Psychology, Centre for Cognitive Neuroscience, University of Salzburg, Salzburg 5020, Austria
- Patrick Reisinger: Department of Psychology, Centre for Cognitive Neuroscience, University of Salzburg, Salzburg 5020, Austria
- Sebastian Rösch: Department of Otorhinolaryngology, Head and Neck Surgery, Paracelsus Medical University Salzburg, University Hospital Salzburg, Salzburg 5020, Austria
- Anne Keitel: School of Social Sciences, University of Dundee, Dundee DD1 4HN, UK
- Nathan Weisz: Department of Psychology, Centre for Cognitive Neuroscience, University of Salzburg, Salzburg 5020, Austria; Department of Psychology, Neuroscience Institute, Christian Doppler University Hospital, Paracelsus Medical University, Salzburg 5020, Austria
59. Lo CW, Tung TY, Ke AH, Brennan JR. Hierarchy, not lexical regularity, modulates low-frequency neural synchrony during language comprehension. Neurobiol Lang (Camb) 2022;3:538-555. PMID: 37215342; PMCID: PMC10158645; DOI: 10.1162/nol_a_00077.
Abstract
Neural responses appear to synchronize with sentence structure. However, researchers have debated whether this response in the delta band (0.5-3 Hz) really reflects hierarchical information or simply lexical regularities. Computational simulations in which sentences are represented simply as sequences of high-dimensional numeric vectors that encode lexical information seem to give rise to power spectra similar to those observed for sentence synchronization, suggesting that sentence-level cortical tracking findings may reflect sequential lexical or part-of-speech information, and not necessarily hierarchical syntactic information. Using electroencephalography (EEG) data and the frequency-tagging paradigm, we develop a novel experimental condition to tease apart the predictions of the lexical and the hierarchical accounts of the attested low-frequency synchronization. Under a lexical model, synchronization should be observed even when words are reversed within their phrases (e.g., "sheep white grass eat" instead of "white sheep eat grass"), because the same lexical items are preserved at the same regular intervals. Critically, such stimuli are not syntactically well-formed; thus a hierarchical model does not predict synchronization of phrase- and sentence-level structure in the reversed phrase condition. Computational simulations confirm these diverging predictions. EEG data from N = 31 native speakers of Mandarin show robust delta synchronization to syntactically well-formed isochronous speech. Importantly, no such pattern is observed for reversed phrases, consistent with the hierarchical, but not the lexical, accounts.
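The frequency-tagging logic described above can be demonstrated with a simple power spectrum: with isochronous syllables at, say, 4 Hz, phrase- and sentence-level structure predicts extra spectral peaks at 2 Hz and 1 Hz. A toy simulation follows; all rates and amplitudes are invented for illustration and do not reproduce the study's EEG analysis.

```python
import numpy as np

fs, dur = 100, 60               # 60 s of data gives 1/60 Hz frequency resolution
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(3)

# Simulated response: 4 Hz syllable tracking plus internally generated
# 2 Hz (phrase) and 1 Hz (sentence) components, as the hierarchical account predicts
signal = (np.sin(2 * np.pi * 4 * t)
          + 0.6 * np.sin(2 * np.pi * 2 * t)
          + 0.6 * np.sin(2 * np.pi * 1 * t)
          + rng.standard_normal(t.size))

freqs = np.fft.rfftfreq(t.size, 1 / fs)
power = np.abs(np.fft.rfft(signal)) ** 2 / t.size

def power_at(f_hz):
    """Power in the FFT bin closest to f_hz."""
    return float(power[np.argmin(np.abs(freqs - f_hz))])

# Tagged rates should stand far above the background (median) power level
baseline = float(np.median(power))
```

Under the hierarchical account tested here, the reversed-phrase condition would be expected to abolish the phrase- and sentence-rate peaks while leaving the syllable-rate peak intact.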
Affiliation(s)
- Chia-Wen Lo: Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Department of Linguistics, University of Michigan, Ann Arbor, MI, USA
- Tzu-Yun Tung: Department of Linguistics, University of Michigan, Ann Arbor, MI, USA
- Alan Hezao Ke: Department of Linguistics, University of Michigan, Ann Arbor, MI, USA; Department of Linguistics, Languages and Cultures, Michigan State University, East Lansing, MI, USA
60. Zeller J, Bylund E, Lewis AG. The parser consults the lexicon in spite of transparent gender marking: EEG evidence from noun class agreement processing in Zulu. Cognition 2022;226:105148. DOI: 10.1016/j.cognition.2022.105148.
61. Menn KH, Ward EK, Braukmann R, van den Boomen C, Buitelaar J, Hunnius S, Snijders TM. Neural tracking in infancy predicts language development in children with and without family history of autism. Neurobiol Lang (Camb) 2022;3:495-514. PMID: 37216063; PMCID: PMC10158647; DOI: 10.1162/nol_a_00074.
Abstract
During speech processing, neural activity in non-autistic adults and infants tracks the speech envelope. Recent research in adults indicates that this neural tracking relates to linguistic knowledge and may be reduced in autism. Such reduced tracking, if present already in infancy, could impede language development. In the current study, we focused on children with a family history of autism, who often show a delay in first language acquisition. We investigated whether differences in tracking of sung nursery rhymes during infancy relate to language development and autism symptoms in childhood. We assessed speech-brain coherence at either 10 or 14 months of age in a total of 22 infants with high likelihood of autism due to family history and 19 infants without family history of autism. We analyzed the relationship between speech-brain coherence in these infants and their vocabulary at 24 months as well as autism symptoms at 36 months. Our results showed significant speech-brain coherence in the 10- and 14-month-old infants. We found no evidence for a relationship between speech-brain coherence and later autism symptoms. Importantly, speech-brain coherence in the stressed syllable rate (1-3 Hz) predicted later vocabulary. Follow-up analyses showed evidence for a relationship between tracking and vocabulary only in 10-month-olds but not in 14-month-olds and indicated possible differences between the likelihood groups. Thus, early tracking of sung nursery rhymes is related to language development in childhood.
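Speech-brain coherence of the kind measured here is, in essence, magnitude-squared coherence between the speech envelope and the EEG. A minimal sketch with synthetic signals follows; the 2 Hz "stressed-syllable" rate, the 100 ms lag, and the noise levels are assumptions for illustration, not the study's infant-EEG pipeline.

```python
import numpy as np
from scipy.signal import coherence

fs = 100
t = np.arange(0, 120, 1 / fs)
rng = np.random.default_rng(4)

# Toy "sung speech" envelope with energy at a 2 Hz stressed-syllable rate
envelope = 0.5 * (1 + np.sin(2 * np.pi * 2 * t)) + 0.2 * rng.standard_normal(t.size)

# Toy "EEG": a delayed, noisy copy of the envelope plus background noise
delay = int(0.1 * fs)                         # 100 ms neural lag
eeg = np.roll(envelope, delay) + rng.standard_normal(t.size)

# Magnitude-squared coherence, estimated in 10 s windows
freqs, coh = coherence(envelope, eeg, fs=fs, nperseg=10 * fs)

coh_2hz = float(coh[np.argmin(np.abs(freqs - 2.0))])          # tracked rate
coh_ctrl = float(coh[(freqs >= 20) & (freqs <= 30)].mean())   # control band
```

Averaging `coh` over the 1-3 Hz band would give a single stressed-syllable-rate tracking value per participant, the kind of quantity related to later vocabulary in this study.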
Affiliation(s)
- Katharina H. Menn: Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; International Max Planck Research School on Neuroscience of Communication: Function, Structure, and Plasticity, Leipzig, Germany
- Emma K. Ward: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Ricarda Braukmann: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Carlijn van den Boomen: Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Jan Buitelaar: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands; Department of Cognitive Neuroscience, Radboud University Medical Center, Nijmegen, The Netherlands
- Sabine Hunnius: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Tineke M. Snijders: Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands; Cognitive Neuropsychology Department, Tilburg University
62. Tomić A, Kaan E. Oscillatory brain responses to processing code-switches in the presence of others. Brain Lang 2022;231:105139. PMID: 35687945; DOI: 10.1016/j.bandl.2022.105139.
Abstract
Code-switching, i.e., alternating between languages within a conversation, is a typical yet socially constrained practice in bilingual communities: for instance, code-switching is permissible only when the other conversation partners are fluent in both languages. Studying code-switching provides insight into the cognitive and neural mechanisms underlying language control and their modulation by linguistic and non-linguistic factors. Using time-frequency representations, we analyzed changes in brain oscillations in EEG data recorded in a prior study (Kaan et al., 2020), in which Spanish-English bilinguals read sentences with and without code-switches in the presence of a bilingual or monolingual partner. Consistent with prior studies, code-switches were associated with a power decrease in the lower beta band (15-18 Hz). In addition, code-switches were associated with a power decrease in the upper gamma band (40-50 Hz), but only when a bilingual partner was present, suggesting that the semantic/pragmatic processing of code-switches differs depending on who is present.
Affiliation(s)
- Aleksandra Tomić: University of Florida, Department of Linguistics, Gainesville, FL 32611, USA; UiT The Arctic University of Norway, Department of Language and Culture, 9037 Tromsø, Norway
- Edith Kaan: University of Florida, Department of Linguistics, Gainesville, FL 32611, USA
63. Kegler M, Weissbart H, Reichenbach T. The neural response at the fundamental frequency of speech is modulated by word-level acoustic and linguistic information. Front Neurosci 2022;16:915744. PMID: 35942153; PMCID: PMC9355803; DOI: 10.3389/fnins.2022.915744.
Abstract
Spoken language comprehension requires rapid and continuous integration of information, from lower-level acoustic to higher-level linguistic features. Much of this processing occurs in the cerebral cortex. Its neural activity exhibits, for instance, correlates of predictive processing, emerging at delays of a few hundred milliseconds. However, the auditory pathways are also characterized by extensive feedback loops from higher-level cortical areas to lower-level ones as well as to subcortical structures. Early neural activity can therefore be influenced by higher-level cognitive processes, but it remains unclear whether such feedback contributes to linguistic processing. Here, we investigated early speech-evoked neural activity that emerges at the fundamental frequency. We analyzed EEG recordings obtained when subjects listened to a story read by a single speaker. We identified a response tracking the speaker's fundamental frequency that occurred at a delay of 11 ms, while another response elicited by the high-frequency modulation of the envelope of higher harmonics exhibited a larger magnitude and a longer latency of about 18 ms, with an additional significant component at around 40 ms. Notably, while the earlier components of the response likely originate from subcortical structures, the latter presumably involves contributions from cortical regions. Subsequently, we determined the magnitude of these early neural responses for each individual word in the story. We then quantified the context-independent frequency of each word and used a language model to compute context-dependent word surprisal and precision. The word surprisal represented how predictable a word is, given the previous context, and the word precision reflected the confidence about predicting the next word from the past context. We found that the word-level neural responses at the fundamental frequency were predominantly influenced by the acoustic features: the average fundamental frequency and its variability. Amongst the linguistic features, only context-independent word frequency showed a weak but significant modulation of the neural response to the high-frequency envelope modulation. Our results show that the early neural response at the fundamental frequency is already influenced by acoustic as well as linguistic information, suggesting top-down modulation of this neural response.
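The word surprisal described above is, in information-theoretic terms, the negative log-probability of a word given its preceding context. A minimal sketch, assuming a toy bigram estimator in place of the study's language model (the corpus and words here are invented for illustration):

```python
import math
from collections import Counter

def bigram_surprisal(corpus, context, word):
    """Surprisal in bits: -log2 P(word | context), from bigram counts.
    Assumes the (context, word) pair occurs at least once in the corpus."""
    pairs = Counter(zip(corpus, corpus[1:]))
    context_total = sum(c for (w1, _), c in pairs.items() if w1 == context)
    return -math.log2(pairs[(context, word)] / context_total)

corpus = "the cat sat on the mat the cat ran".split()
# "the" is followed by "cat" twice and "mat" once, so P(cat|the) = 2/3
print(bigram_surprisal(corpus, "the", "cat"))  # ~0.585 bits
print(bigram_surprisal(corpus, "the", "mat"))  # ~1.585 bits
```

A less predictable continuation ("mat") carries higher surprisal, which is the quantity the study relates to word-level neural responses.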
Affiliation(s)
- Mikolaj Kegler
- Department of Bioengineering, Centre for Neurotechnology, Imperial College London, London, United Kingdom
- Hugo Weissbart
- Donders Centre for Cognitive Neuroimaging, Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
- Tobias Reichenbach
- Department of Bioengineering, Centre for Neurotechnology, Imperial College London, London, United Kingdom
- Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-University Erlangen-Nuremberg, Erlangen, Germany
- *Correspondence: Tobias Reichenbach
|
64
|
Ten Oever S, Carta S, Kaufeld G, Martin AE. Neural tracking of phrases in spoken language comprehension is automatic and task-dependent. eLife 2022; 11:77468. [PMID: 35833919 PMCID: PMC9282854 DOI: 10.7554/elife.77468] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2022] [Accepted: 06/25/2022] [Indexed: 12/02/2022] Open
Abstract
Linguistic phrases are tracked in sentences even though there is no one-to-one acoustic phrase marker in the physical signal. This phenomenon suggests an automatic tracking of abstract linguistic structure that is endogenously generated by the brain. However, all studies investigating linguistic tracking compare conditions where relevant information at linguistic timescales is either available or absent altogether (e.g., sentences versus word lists during passive listening). It is therefore unclear whether tracking at phrasal timescales is related to the content of language or, rather, arises as a consequence of attending to the timescales that happen to match behaviourally relevant information. To investigate this question, we presented participants with sentences and word lists while recording their brain activity with magnetoencephalography (MEG). Participants performed passive, syllable, word, and word-combination tasks, corresponding to attending to four different rates: one they would naturally attend to, syllable rates, word rates, and phrasal rates, respectively. We replicated overall findings of stronger phrasal-rate tracking, measured with mutual information, for sentences compared to word lists across the classical language network. However, in the inferior frontal gyrus (IFG) we found a task effect suggesting stronger phrasal-rate tracking during the word-combination task independent of the presence of linguistic structure, as well as stronger delta-band connectivity during this task. These results suggest that extracting linguistic information at phrasal rates occurs automatically with or without an additional task, but also that the IFG might be important for temporal integration across various perceptual domains.
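Mutual information of the kind used above quantifies, in bits, how much knowing one signal reduces uncertainty about the other. A minimal sketch with histogram-binned MI on simulated signals (the binning, signal model, and noise levels are illustrative assumptions, not the study's pipeline):

```python
import numpy as np

def binned_mi(x, y, bins=8):
    """Mutual information (bits) between two signals via a joint histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x (column vector)
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y (row vector)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
stim = rng.standard_normal(20000)                    # stand-in stimulus time course
tracking = stim + 0.5 * rng.standard_normal(20000)   # signal that tracks the stimulus
unrelated = rng.standard_normal(20000)               # signal that does not
```

A tracking signal yields substantially higher MI with the stimulus than an unrelated one, which is the contrast drawn between sentences and word lists.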
Affiliation(s)
- Sanne Ten Oever
- Language and Computation in Neural Systems group, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; Language and Computation in Neural Systems group, Donders Centre for Cognitive Neuroimaging, Nijmegen, Netherlands; Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Sara Carta
- Language and Computation in Neural Systems group, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; ADAPT Centre, School of Computer Science and Statistics, University of Dublin, Trinity College, Dublin, Ireland; CIMeC - Center for Mind/Brain Sciences, University of Trento, Trento, Italy
- Greta Kaufeld
- Language and Computation in Neural Systems group, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Andrea E Martin
- Language and Computation in Neural Systems group, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; Language and Computation in Neural Systems group, Donders Centre for Cognitive Neuroimaging, Nijmegen, Netherlands
|
65
|
Bai F, Meyer AS, Martin AE. Neural dynamics differentially encode phrases and sentences during spoken language comprehension. PLoS Biol 2022; 20:e3001713. [PMID: 35834569 PMCID: PMC9282610 DOI: 10.1371/journal.pbio.3001713] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2022] [Accepted: 06/14/2022] [Indexed: 11/19/2022] Open
Abstract
Human language stands out in the natural world as a biological signal that uses a structured system to combine the meanings of small linguistic units (e.g., words) into larger constituents (e.g., phrases and sentences). However, the physical dynamics of speech (or sign) do not stand in a one-to-one relationship with the meanings listeners perceive. Instead, listeners infer meaning based on their knowledge of the language. The neural readouts of the perceptual and cognitive processes underlying these inferences are still poorly understood. In the present study, we used scalp electroencephalography (EEG) to compare the neural response to phrases (e.g., the red vase) and sentences (e.g., the vase is red), which were close in semantic meaning and had been synthesized to be physically indistinguishable. Differences in structure were well captured in the reorganization of neural phase responses in the delta (approximately <2 Hz) and theta (approximately 2 to 7 Hz) bands, and in power and power connectivity changes in the alpha band (approximately 7.5 to 13.5 Hz). Consistent with predictions from a computational model, sentences showed more power, more power connectivity, and more phase synchronization than phrases did. Theta–gamma phase–amplitude coupling occurred, but did not differ between the syntactic structures. Spectral–temporal response function (STRF) modeling revealed different encoding states for phrases and sentences, over and above the acoustically driven neural response. Our findings provide a comprehensive description of how the brain encodes and separates linguistic structures in the dynamics of neural responses. They imply that phase synchronization and strength of connectivity are readouts for the constituent structure of language. The results provide a novel basis for future neurophysiological research on linguistic structure representation in the brain and, together with our simulations, support time-based binding as a mechanism of structure encoding in neural dynamics.
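Phase synchronization of the kind reported above is commonly quantified with a phase-locking value (PLV): the mean resultant length of the instantaneous phase difference between two signals. A minimal sketch on simulated delta-band signals (the signal model, lag, and noise levels are illustrative assumptions, not the study's data):

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase-locking value: 1 = perfectly constant phase lag, 0 = random phases."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return float(np.abs(np.exp(1j * dphi).mean()))

fs = 200
t = np.arange(fs * 20) / fs
rng = np.random.default_rng(4)
common = np.sin(2 * np.pi * 1.5 * t)                               # shared delta-band rhythm
a = common + 0.5 * rng.standard_normal(t.size)
b = np.roll(common, fs // 10) + 0.5 * rng.standard_normal(t.size)  # same rhythm at a fixed 0.1 s lag
c = rng.standard_normal(t.size)                                    # unrelated signal
```

Signals sharing a rhythm at a fixed lag yield a high PLV; an unrelated signal yields a value near zero.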
Affiliation(s)
- Fan Bai
- Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, the Netherlands
- Antje S. Meyer
- Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, the Netherlands
- Andrea E. Martin
- Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, the Netherlands
|
66
|
Wang P, He Y, Maess B, Yue J, Chen L, Brauer J, Friederici AD, Knösche TR. Alpha power during task performance predicts individual language comprehension. Neuroimage 2022; 260:119449. [PMID: 35835340 DOI: 10.1016/j.neuroimage.2022.119449] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2021] [Revised: 06/15/2022] [Accepted: 07/03/2022] [Indexed: 11/29/2022] Open
Abstract
Alpha power attenuation during cognitive task performance has been suggested to reflect release of inhibition and increased excitability, thereby benefiting performance. Here, we hypothesized that changes in individual alpha power during the execution of a complex language comprehension task may correlate with the individual performance in that task. We tested this using magnetoencephalography (MEG) recorded during comprehension of German sentences of different syntactic complexity. Results showed that neither the frequency nor the power of the spontaneous oscillatory activity at rest was associated with the individual performance. However, during the execution of a sentence-processing task, the individual alpha power attenuation did correlate with individual language comprehension performance. Source reconstruction localized these effects in left temporal-parietal brain regions known to be associated with language processing and their right-hemisphere homologues. Our results support the notion that in-task attenuation of individual alpha power is related to the essential mechanisms of the underlying cognitive processes, rather than merely to general phenomena like attention or vigilance.
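The analysis logic, estimating per-participant alpha-band power at rest and in-task and correlating the attenuation with behavior, can be sketched as follows (the simulated data, suppression model, and all parameter values are assumptions for illustration, not the study's data or pipeline):

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import spearmanr

def alpha_power(eeg, fs, band=(8.0, 12.0)):
    """Mean Welch PSD in the alpha band."""
    f, psd = welch(eeg, fs=fs, nperseg=2 * fs)
    return psd[(f >= band[0]) & (f <= band[1])].mean()

fs, n_subj = 250, 20
rng = np.random.default_rng(1)
t = np.arange(fs * 10) / fs
scores = rng.uniform(0.5, 1.0, n_subj)        # hypothetical comprehension accuracy
attenuation = []
for s in scores:
    rest = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
    # simulate stronger in-task alpha suppression for better comprehenders
    task = (1 - 0.8 * s) * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
    attenuation.append(alpha_power(rest, fs) - alpha_power(task, fs))

rho, pval = spearmanr(attenuation, scores)    # positive rho: more attenuation, better score
```

With the simulated suppression built in, the rank correlation between attenuation and performance comes out positive, mirroring the direction of the reported effect.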
Affiliation(s)
- P Wang
- Max Planck Institute for Human Cognitive and Brain Sciences, Brain Networks Group, Leipzig, Germany
- Y He
- Philipps University Marburg, Department of Psychiatry and Psychotherapy, Marburg, Germany
- B Maess
- Max Planck Institute for Human Cognitive and Brain Sciences, Brain Networks Group, Leipzig, Germany
- J Yue
- Harbin Institute of Technology, Laboratory for Cognitive and Social Neuroscience, School of Management, Harbin, China
- L Chen
- Max Planck Institute for Human Cognitive and Brain Sciences, Department of Neuropsychology, Leipzig, Germany; Beijing Normal University, College of Chinese Language and Culture, Beijing, China
- J Brauer
- Max Planck Institute for Human Cognitive and Brain Sciences, Department of Neuropsychology, Leipzig, Germany; Friedrich Schiller University, Office of the Vice-President for Young Researchers, Jena, Germany
- A D Friederici
- Max Planck Institute for Human Cognitive and Brain Sciences, Department of Neuropsychology, Leipzig, Germany
- T R Knösche
- Max Planck Institute for Human Cognitive and Brain Sciences, Brain Networks Group, Leipzig, Germany
|
67
|
Egurtzegi A, Blasi DE, Bornkessel-Schlesewsky I, Laka I, Meyer M, Bickel B, Sauppe S. Cross-linguistic differences in case marking shape neural power dynamics and gaze behavior during sentence planning. BRAIN AND LANGUAGE 2022; 230:105127. [PMID: 35605312 DOI: 10.1016/j.bandl.2022.105127] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/11/2021] [Revised: 04/07/2022] [Accepted: 04/21/2022] [Indexed: 06/15/2023]
Abstract
Languages differ in how they mark the dependencies between verbs and arguments, e.g., by case. An eye-tracking and EEG picture-description study examined the influence of case marking on the time course of sentence planning in Basque and Swiss German. While German assigns an unmarked (nominative) case to subjects, Basque specifically marks agent arguments through ergative case. Fixations to agents and event-related synchronization (ERS) in the theta and alpha frequency bands, as well as desynchronization (ERD) in the alpha and beta bands, revealed multiple effects of case marking on the time course of early sentence planning. Speakers committed to case marking early when preparing sentences with ergative-marked agents in Basque, whereas sentences with unmarked agents allowed structural commitment to be delayed across languages. These findings support hierarchically incremental accounts of sentence planning and highlight how cross-linguistic differences shape the neural dynamics underpinning language use.
Affiliation(s)
- Aitor Egurtzegi
- Department of Comparative Language Science, University of Zurich, Switzerland; Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Switzerland; English Department, University of Zurich, Switzerland
- Damián E Blasi
- Department of Human Evolutionary Biology, Harvard University, United States; Department of Linguistic and Cultural Evolution, Max Planck Institute for Evolutionary Anthropology, Germany
- Ina Bornkessel-Schlesewsky
- School of Psychology, Social Work and Social Policy, University of South Australia, Australia; Cognitive and Systems Neuroscience Research Hub, University of South Australia, Australia
- Itziar Laka
- Department of Linguistics and Basque Studies, University of the Basque Country (UPV/EHU), Spain
- Martin Meyer
- Department of Comparative Language Science, University of Zurich, Switzerland; Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Switzerland; Cognitive Psychology Unit, Psychological Institute, University of Klagenfurt, Austria
- Balthasar Bickel
- Department of Comparative Language Science, University of Zurich, Switzerland; Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Switzerland
- Sebastian Sauppe
- Department of Comparative Language Science, University of Zurich, Switzerland; Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Switzerland
|
68
|
Zheng Y, Kirk I, Chen T, O'Hagan M, Waldie KE. Task-Modulated Oscillation Differences in Auditory and Spoken Chinese-English Bilingual Processing: An Electroencephalography Study. Front Psychol 2022; 13:823700. [PMID: 35712178 PMCID: PMC9197074 DOI: 10.3389/fpsyg.2022.823700] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2021] [Accepted: 04/26/2022] [Indexed: 11/25/2022] Open
Abstract
Neurophysiological research on the bilingual activity of interpreting has been very fruitful in understanding the bilingual brain and has recently gained increasing popularity. Issues like word interpreting and the directionality of interpreting have been examined by many researchers, mainly with localizing techniques. Brain structures such as the dorsolateral prefrontal cortex have been repeatedly identified during interpreting. However, little is known about the oscillation and synchronization features of interpreting, especially sentence-level overt interpreting. In this study we conducted a Chinese-English sentence-level overt interpreting experiment with electroencephalography on 43 Chinese-English bilinguals and compared the oscillation and synchronization features of interpreting with those of listening, speaking, and shadowing. We found significant time-frequency power differences in the delta-theta (1–7 Hz) and gamma (above 30 Hz) bands between motor and silent tasks. Further theta-gamma coupling analysis revealed different synchronization networks among speaking, shadowing, and interpreting, indicating an idea-formulation-dependent mechanism. Moreover, interpreting incurred a robust right frontotemporal gamma coactivation network compared with speaking and shadowing, which we think may reflect the language conversion process inherent in interpreting.
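Theta-gamma coupling of the kind analyzed above is often quantified with a phase-amplitude modulation index in the style of Tort et al.: how far the gamma amplitude distribution across theta-phase bins departs from uniform. A minimal sketch on simulated signals (signal model, frequencies, and noise level are illustrative assumptions, not the study's pipeline):

```python
import numpy as np
from scipy.signal import hilbert

def tort_mi(phase, amp, n_bins=18):
    """Modulation index: normalized KL divergence of the phase-binned
    mean amplitude distribution from a uniform distribution."""
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    idx = np.clip(np.digitize(phase, edges) - 1, 0, n_bins - 1)
    mean_amp = np.array([amp[idx == b].mean() for b in range(n_bins)])
    p = mean_amp / mean_amp.sum()
    return float(np.sum(p * np.log(p * n_bins)) / np.log(n_bins))

fs = 1000
t = np.arange(fs * 10) / fs
rng = np.random.default_rng(2)
theta = np.sin(2 * np.pi * 6 * t)
phase = np.angle(hilbert(theta))
# gamma whose amplitude follows theta phase (coupled) vs constant amplitude (uncoupled)
coupled = (1 + theta) * np.sin(2 * np.pi * 40 * t) + 0.1 * rng.standard_normal(t.size)
uncoupled = np.sin(2 * np.pi * 40 * t) + 0.1 * rng.standard_normal(t.size)
amp_c = np.abs(hilbert(coupled))
amp_u = np.abs(hilbert(uncoupled))
```

The coupled signal yields a clearly nonzero modulation index, while the uncoupled one stays near zero.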
Affiliation(s)
- Yuxuan Zheng
- School of Psychology, The University of Auckland, Auckland, New Zealand
- Ian Kirk
- School of Psychology, The University of Auckland, Auckland, New Zealand; Centre for Brain Research, The University of Auckland, Auckland, New Zealand
- Tengfei Chen
- School of Physical and Mathematical Sciences, Nanjing Tech University, Nanjing, China
- Minako O'Hagan
- School of Cultures Languages and Linguistics, The University of Auckland, Auckland, New Zealand
- Karen E Waldie
- School of Psychology, The University of Auckland, Auckland, New Zealand; Centre for Brain Research, The University of Auckland, Auckland, New Zealand
|
69
|
Alpha power decreases associated with prediction in written and spoken sentence comprehension. Neuropsychologia 2022; 173:108286. [DOI: 10.1016/j.neuropsychologia.2022.108286] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2022] [Revised: 05/16/2022] [Accepted: 06/01/2022] [Indexed: 11/24/2022]
|
70
|
Oscillatory correlates of linguistic prediction and modality effects during listening to auditory-only and audiovisual sentences. Int J Psychophysiol 2022; 178:9-21. [DOI: 10.1016/j.ijpsycho.2022.06.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2022] [Revised: 04/18/2022] [Accepted: 06/03/2022] [Indexed: 11/22/2022]
|
71
|
Tichko P, Kim JC, Large E, Loui P. Integrating music-based interventions with Gamma-frequency stimulation: Implications for healthy ageing. Eur J Neurosci 2022; 55:3303-3323. [PMID: 33236353 PMCID: PMC9899516 DOI: 10.1111/ejn.15059] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2020] [Revised: 11/18/2020] [Accepted: 11/18/2020] [Indexed: 02/07/2023]
Abstract
In recent years, music-based interventions (MBIs) have risen in popularity as a non-invasive, sustainable form of care for treating dementia-related disorders, such as Mild Cognitive Impairment (MCI) and Alzheimer's disease (AD). Despite their clinical potential, evidence regarding the efficacy of MBIs on patient outcomes is mixed. Recently, a line of related research has begun to investigate the clinical impact of non-invasive Gamma-frequency (e.g., 40 Hz) sensory stimulation on dementia. Current work, using non-human animal models of AD, suggests that non-invasive Gamma-frequency stimulation can remediate multiple pathophysiologies of dementia at the molecular, cellular and neural-systems scales, and, importantly, improve cognitive functioning. These findings suggest that the efficacy of MBIs could, in theory, be enhanced by incorporating Gamma-frequency stimulation into current MBI protocols. In the current review, we propose a novel clinical framework for non-invasively treating dementia-related disorders that combines previous MBIs with current approaches employing Gamma-frequency sensory stimulation. We theorize that combining MBIs with Gamma-frequency stimulation could increase the therapeutic power of MBIs by simultaneously targeting multiple biomarkers of dementia, restoring neural activity that underlies learning and memory (e.g., Gamma-frequency neural activity, Theta-Gamma coupling), and actively engaging auditory and reward networks in the brain to promote behavioural change.
Affiliation(s)
- Parker Tichko
- Department of Music, Northeastern University, Boston, MA, USA
- Ji Chul Kim
- Perception, Action, Cognition (PAC) Division, Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA
- Edward Large
- Perception, Action, Cognition (PAC) Division, Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA; Center for the Ecological Study of Perception & Action (CESPA), Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA; Department of Physics, University of Connecticut, Storrs, CT, USA
- Psyche Loui
- Department of Music, Northeastern University, Boston, MA, USA
|
72
|
Gray R, Sarampalis A, Başkent D, Harding EE. Working-Memory, Alpha-Theta Oscillations and Musical Training in Older Age: Research Perspectives for Speech-on-speech Perception. Front Aging Neurosci 2022; 14:806439. [PMID: 35645774 PMCID: PMC9131017 DOI: 10.3389/fnagi.2022.806439] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2021] [Accepted: 03/24/2022] [Indexed: 12/18/2022] Open
Abstract
During the normal course of aging, perception of speech-on-speech or “cocktail party” speech and use of working memory (WM) abilities change. Musical training, which is a complex activity that integrates multiple sensory modalities and higher-order cognitive functions, reportedly benefits both WM performance and speech-on-speech perception in older adults. This mini-review explores the relationship between musical training, WM and speech-on-speech perception in older age (> 65 years) through the lens of the Ease of Language Understanding (ELU) model. Linking neural-oscillation literature associating speech-on-speech perception and WM with alpha-theta oscillatory activity, we propose that two stages of speech-on-speech processing in the ELU are underpinned by WM-related alpha-theta oscillatory activity, and that effects of musical training on speech-on-speech perception may be reflected in these frequency bands among older adults.
Affiliation(s)
- Ryan Gray
- Department of Experimental Psychology, University of Groningen, Groningen, Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, Netherlands
- Department of Psychology, Centre for Applied Behavioural Sciences, School of Social Sciences, Heriot-Watt University, Edinburgh, United Kingdom
- Anastasios Sarampalis
- Department of Experimental Psychology, University of Groningen, Groningen, Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, Netherlands
- Deniz Başkent
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, Netherlands
- Department of Otorhinolaryngology, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Eleanor E. Harding
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, Netherlands
- Department of Otorhinolaryngology, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- *Correspondence: Eleanor E. Harding
|
73
|
Wei Y, Hancock R, Mozeiko J, Large EW. The relationship between entrainment dynamics and reading fluency assessed by sensorimotor perturbation. Exp Brain Res 2022; 240:1775-1790. [PMID: 35507069 DOI: 10.1007/s00221-022-06369-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2022] [Accepted: 04/06/2022] [Indexed: 11/25/2022]
Abstract
A consistent relationship has been found between rhythmic processing and reading skills. Impairment of the ability to entrain movements to an auditory rhythm in clinical populations with language-related deficits, such as children with developmental dyslexia, has been found in both behavioral and neural studies. In this study, we explored the relationship between rhythmic entrainment, behavioral synchronization, reading fluency, and reading comprehension in neurotypical English- and Mandarin-speaking adults. First, we examined entrainment stability by asking participants to coordinate taps with an auditory metronome in which unpredictable perturbations were introduced to disrupt entrainment. Next, we assessed behavioral synchronization by asking participants to coordinate taps with the syllables they produced while reading sentences as naturally as possible (tap to syllable task). Finally, we measured reading fluency and reading comprehension for native English and native Mandarin speakers. Stability of entrainment correlated strongly with tap to syllable task performance and with reading fluency, and both findings generalized across English and Mandarin speakers.
Affiliation(s)
- Yi Wei
- Department of Psychological Sciences, University of Connecticut, Storrs, USA
- Brain Imaging Research Center, University of Connecticut, Storrs, USA
- The Connecticut Institute for the Brain and Cognitive Sciences of University of Connecticut, Storrs, USA
- Roeland Hancock
- Department of Psychological Sciences, University of Connecticut, Storrs, USA
- Brain Imaging Research Center, University of Connecticut, Storrs, USA
- The Connecticut Institute for the Brain and Cognitive Sciences of University of Connecticut, Storrs, USA
- Jennifer Mozeiko
- Department of Speech, Language and Hearing Sciences, University of Connecticut, Storrs, USA
- Edward W Large
- Department of Psychological Sciences, University of Connecticut, Storrs, USA
- Department of Physics, University of Connecticut, Storrs, USA
- Brain Imaging Research Center, University of Connecticut, Storrs, USA
- The Connecticut Institute for the Brain and Cognitive Sciences of University of Connecticut, Storrs, USA
|
74
|
Danchin A, Fenton AA. From Analog to Digital Computing: Is Homo sapiens’ Brain on Its Way to Become a Turing Machine? Front Ecol Evol 2022. [DOI: 10.3389/fevo.2022.796413] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
The abstract basis of modern computation is the formal description of a finite state machine, the Universal Turing Machine, based on manipulation of integers and logic symbols. In this contribution to the discourse on the computer-brain analogy, we discuss the extent to which analog computing, as performed by the mammalian brain, is like and unlike the digital computing of Universal Turing Machines. We begin by observing that ordinary reality is a permanent dialog between continuous and discontinuous worlds. So it is with computing, which can be analog or digital, and is often mixed. The theory behind computers is essentially digital, but efficient simulations of phenomena can be performed by analog devices; indeed, any physical calculation requires implementation in the physical world and is therefore analog to some extent, despite being based on abstract logic and arithmetic. The mammalian brain, composed of neuronal networks, functions as an analog device and has given rise to artificial neural networks that are implemented as digital algorithms but function as analog models would. Analog constructs compute with the implementation of a variety of feedback and feedforward loops. In contrast, digital algorithms allow the implementation of recursive processes that enable them to generate unparalleled emergent properties. We briefly illustrate how the cortical organization of neurons can integrate signals and make predictions analogically. While we conclude that brains are not digital computers, we speculate on the recent implementation of human writing in the brain as a possible digital path that slowly evolves the brain into a genuine (slow) Turing machine.
|
75
|
Natural Infant-Directed Speech Facilitates Neural Tracking of Prosody. Neuroimage 2022; 251:118991. [PMID: 35158023 DOI: 10.1016/j.neuroimage.2022.118991] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2021] [Revised: 02/02/2022] [Accepted: 02/10/2022] [Indexed: 01/04/2023] Open
Abstract
Infants prefer to be addressed with infant-directed speech (IDS). IDS benefits language acquisition through amplified low-frequency amplitude modulations. It has been reported that this amplification increases electrophysiological tracking of IDS compared to adult-directed speech (ADS). It is still unknown which particular frequency band triggers this effect. Here, we compare tracking at the rates of syllables and prosodic stress, which are both critical to word segmentation and recognition. In mother-infant dyads (n=30), mothers described novel objects to their 9-month-olds while infants' EEG was recorded. For IDS, mothers were instructed to speak to their children as they typically do, while for ADS, mothers described the objects as if speaking with an adult. Phonetic analyses confirmed that pitch features were more prototypically infant-directed in the IDS condition compared to the ADS condition. Neural tracking of speech was assessed by speech-brain coherence, which measures the synchronization between the speech envelope and EEG. Results revealed significant speech-brain coherence at both syllabic and prosodic stress rates, indicating that infants track speech in IDS and ADS at both rates. We found significantly higher speech-brain coherence for IDS compared to ADS at the prosodic stress rate but not the syllabic rate. This indicates that the IDS benefit arises primarily from enhanced prosodic stress. Thus, neural tracking is sensitive to parents' speech adaptations during natural interactions, possibly facilitating higher-level inferential processes such as word segmentation from continuous speech.
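Speech-brain coherence as described, synchronization between the speech envelope and EEG at a given rate, can be sketched with magnitude-squared coherence (the 2 Hz "stress rate", signal models, and noise levels below are illustrative assumptions, not the study's data):

```python
import numpy as np
from scipy.signal import coherence

fs = 100
rng = np.random.default_rng(3)
t = np.arange(fs * 60) / fs
# speech envelope with an illustrative 2 Hz prosodic-stress modulation
envelope = 1 + 0.5 * np.sin(2 * np.pi * 2 * t) + 0.2 * rng.standard_normal(t.size)
eeg_tracking = 0.3 * np.sin(2 * np.pi * 2 * t + 0.8) + rng.standard_normal(t.size)
eeg_flat = rng.standard_normal(t.size)       # EEG that does not track the envelope

f, coh_track = coherence(envelope, eeg_tracking, fs=fs, nperseg=4 * fs)
_, coh_flat = coherence(envelope, eeg_flat, fs=fs, nperseg=4 * fs)
stress_bin = np.argmin(np.abs(f - 2.0))      # frequency bin at the 2 Hz stress rate
```

Coherence at the stress-rate bin is high for the tracking signal and near chance for the non-tracking one, which is the contrast the IDS/ADS comparison rests on.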
|
76
|
Rothermich K, Ahn S, Dannhauer M, Pell MD. Social appropriateness perception of dynamic interactions. Soc Neurosci 2022; 17:37-57. [PMID: 35060435 DOI: 10.1080/17470919.2022.2032326] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
The current study explored the judgement of communicative appropriateness while processing a dialogue between two individuals. All stimuli were presented as audio-visual as well as audio-only vignettes, and 24 young adults reported their social impression (appropriateness) of literal, blunt, sarcastic, and teasing statements. On average, teasing statements were rated as more appropriate when processing audiovisual statements compared to the audio-only version of a stimulus, while sarcastic statements were judged as less appropriate with additional visual information. These results indicate a rejection of the Tinge Hypothesis for audio-visual vignettes while confirming it for the reduced, audio-only counterparts. We also analyzed time-frequency EEG data in four frequency bands that have been related to language processing: alpha, beta, theta, and low gamma. We found desynchronization in the alpha band for literal versus nonliteral items, confirming the assumption that the alpha band reflects stimulus complexity. The analysis also revealed a power increase in the theta, beta, and low gamma bands, especially when comparing blunt and nonliteral statements in the audio-only condition. The time-frequency results corroborate the prominent role of the alpha and theta bands in language processing and offer new insights into the neural correlates of communicative appropriateness and social aspects of speech perception.
Affiliation(s)
- Kathrin Rothermich
- Department of Communication Sciences & Disorders, East Carolina University, Greenville, USA; School of Communication Sciences & Disorders, McGill University, Montréal, Canada
- Sungwoo Ahn
- Department of Mathematics, East Carolina University, Greenville, USA
- Marc D Pell
- School of Communication Sciences & Disorders, McGill University, Montréal, Canada
|
77
|
Batterink LJ, Zhang S. Simple statistical regularities presented during sleep are detected but not retained. Neuropsychologia 2022; 164:108106. [PMID: 34864052 DOI: 10.1016/j.neuropsychologia.2021.108106] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2021] [Revised: 10/06/2021] [Accepted: 11/28/2021] [Indexed: 12/30/2022]
Abstract
In recent years, there has been growing interest and excitement over the newly discovered cognitive capacities of the sleeping brain, including its ability to form novel associations. These recent discoveries raise the possibility that other, more sophisticated forms of learning may also be possible during sleep. In the current study, we tested whether sleeping humans are capable of statistical learning - the process of becoming sensitive to repeating, hidden patterns in environmental input, such as embedded words in a continuous stream of speech. Participants' EEG was recorded while they were presented with one of two artificial languages, composed of either trisyllabic or disyllabic nonsense words, during slow-wave sleep. We used an EEG measure of neural entrainment to assess whether participants became sensitive to the repeating regularities during sleep exposure to the language. We further probed for long-term memory representations by assessing participants' performance on implicit and explicit tests of statistical learning during subsequent wake. In the disyllabic, but not trisyllabic, language condition, participants' neural entrainment to words increased over time, reflecting a gradual gain in sensitivity to the embedded regularities. However, no significant behavioural effects of sleep exposure were observed after the nap, for either language. Overall, our results indicate that the sleeping brain can detect simple, repeating pairs of syllables, but not more complex triplet regularities. However, the online detection of these regularities does not appear to produce any durable long-term memory traces that persist into wake - at least none that were revealed by our current measures and sample size. Although some perceptual aspects of statistical learning are preserved during sleep, the lack of memory benefits during wake indicates that exposure to a novel language during sleep may have limited practical value.
Affiliation(s)
- Laura J Batterink: Department of Psychology, Brain and Mind Institute, Western University, London, ON, N6A 5B7, Canada
- Steven Zhang: Department of Psychology, Brain and Mind Institute, Western University, London, ON, N6A 5B7, Canada

78
Su E, Cai S, Xie L, Li H, Schultz T. STAnet: A Spatiotemporal Attention Network for Decoding Auditory Spatial Attention from EEG. IEEE Trans Biomed Eng 2022; 69:2233-2242. [PMID: 34982671 DOI: 10.1109/tbme.2022.3140246] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
OBJECTIVE Humans are able to localize the source of a sound. This enables them to direct attention to a particular speaker in a cocktail party. Psycho-acoustic studies show that the sensory cortices of the human brain respond differently to the location of sound sources, and that auditory attention itself is a dynamic, temporally based brain activity. In this work, we seek to build a computational model which uses both the spatial and the temporal information manifested in EEG signals for auditory spatial attention detection (ASAD). METHODS We propose an end-to-end spatiotemporal attention network, denoted STAnet, to detect auditory spatial attention from EEG. The STAnet is designed to assign differentiated weights dynamically to EEG channels through a spatial attention mechanism, and to temporal patterns in EEG signals through a temporal attention mechanism. RESULTS We report ASAD experiments on two publicly available datasets. The STAnet outperforms other competitive models by a large margin under various experimental conditions. Its attention decision for a 1-second decision window outperforms that of state-of-the-art techniques for a 10-second decision window. Experimental results also demonstrate that the STAnet achieves competitive performance on EEG signals ranging from 64 down to as few as 16 channels. CONCLUSION This study provides evidence suggesting that efficient low-density EEG online decoding is within reach. SIGNIFICANCE This study also marks an important step towards the practical implementation of ASAD in real-life applications.
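The core idea of the spatial attention mechanism described here is to score each EEG channel, convert the scores to weights with a softmax, and reweight the channels. A toy numpy sketch of that weighting step only; the scoring vector stands in for a learned projection and is an assumption, not STAnet's actual architecture:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())          # subtract max for numerical stability
    return e / e.sum()

def spatial_attention(eeg, score_vec):
    """Toy channel-wise (spatial) attention.
    eeg: (n_channels, n_times); score_vec: (n_times,) stand-in for a learned projection."""
    scores = eeg @ score_vec          # one scalar score per channel
    weights = softmax(scores)         # attention weights, sum to 1
    return weights[:, None] * eeg, weights

rng = np.random.default_rng(3)
score_vec = rng.standard_normal(128)
eeg = 0.1 * rng.standard_normal((16, 128))
eeg[0] += score_vec                   # channel 0 carries the "informative" pattern
weighted, w = spatial_attention(eeg, score_vec)
```

In the full model such weights are produced by trained layers, and an analogous mechanism weights time points; here the informative channel simply receives nearly all the weight.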
79
Gnanateja GN, Devaraju DS, Heyne M, Quique YM, Sitek KR, Tardif MC, Tessmer R, Dial HR. On the Role of Neural Oscillations Across Timescales in Speech and Music Processing. Front Comput Neurosci 2022; 16:872093. [PMID: 35814348 PMCID: PMC9260496 DOI: 10.3389/fncom.2022.872093] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2022] [Accepted: 05/24/2022] [Indexed: 11/25/2022] Open
Abstract
This mini review is aimed at a clinician-scientist seeking to understand the role of oscillations in neural processing and their functional relevance in speech and music perception. We present an overview of neural oscillations, methods used to study them, and their functional relevance with respect to music processing, aging, hearing loss, and disorders affecting speech and language. We first review the oscillatory frequency bands and their associations with speech and music processing. Next we describe commonly used metrics for quantifying neural oscillations, briefly touching upon the still-debated mechanisms underpinning oscillatory alignment. Following this, we highlight key findings from research on neural oscillations in speech and music perception, as well as contributions of this work to our understanding of disordered perception in clinical populations. Finally, we conclude with a look toward the future of oscillatory research in speech and music perception, including promising methods and potential avenues for future work. We note that the intention of this mini review is not to systematically review all literature on cortical tracking of speech and music. Rather, we seek to provide the clinician-scientist with foundational information that can be used to evaluate and design research studies targeting the functional role of oscillations in speech and music processing in typical and clinical populations.
Affiliation(s)
- G Nike Gnanateja: Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, United States
- Dhatri S Devaraju: Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, United States
- Matthias Heyne: Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, United States
- Yina M Quique: Center for Education in Health Sciences, Northwestern University, Chicago, IL, United States
- Kevin R Sitek: Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, United States
- Monique C Tardif: Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, United States
- Rachel Tessmer: Department of Speech, Language, and Hearing Sciences, The University of Texas at Austin, Austin, TX, United States
- Heather R Dial: Department of Speech, Language, and Hearing Sciences, The University of Texas at Austin, Austin, TX, United States; Department of Communication Sciences and Disorders, University of Houston, Houston, TX, United States

80
Momsen JP, Abel AD. Neural oscillations reflect meaning identification for novel words in context. NEUROBIOLOGY OF LANGUAGE (CAMBRIDGE, MASS.) 2022; 3:132-148. [PMID: 36340747 PMCID: PMC9632687 DOI: 10.1162/nol_a_00052] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/15/2021] [Accepted: 07/27/2021] [Indexed: 05/21/2023]
Abstract
During language processing, people make rapid use of contextual information to promote comprehension of upcoming words. When new words are learned implicitly, information contained in the surrounding context can provide constraints on their possible meaning. In the current study, EEG was recorded as participants listened to a series of three sentences, each containing an identical target pseudoword, with the aim of using contextual information in the surrounding language to identify a meaning representation for the novel word. In half of the trials, sentences were semantically coherent so that participants could develop a single representation for the novel word that fit all contexts. Other trials contained unrelated sentence contexts so that meaning associations were not possible. We observed greater theta band enhancement over the left hemisphere across central and posterior electrodes in response to pseudowords processed across semantically related compared to unrelated contexts. Additionally, relative alpha and beta band suppression was increased prior to pseudoword onset in trials where contextual information more readily promoted pseudoword-meaning associations. Under the hypothesis that theta enhancement indexes processing demands during lexical access, the current study provides evidence for selective online memory retrieval in response to novel words learned implicitly in a spoken context.
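Effects like the theta enhancement and alpha/beta suppression reported here are typically quantified as power change relative to a pre-stimulus baseline window. A minimal sketch of that normalization step; the baseline window, time axis, and toy power values are assumptions:

```python
import numpy as np

def relative_power_change(power, times, baseline=(-0.2, 0.0)):
    """Percent change in band power relative to a pre-stimulus baseline window.
    power: (..., n_times) band-power time course; times: (n_times,) in seconds."""
    mask = (times >= baseline[0]) & (times < baseline[1])
    base = power[..., mask].mean(axis=-1, keepdims=True)
    return 100.0 * (power - base) / base

times = np.linspace(-0.2, 0.8, 101)
theta_power = np.where(times < 0.0, 1.0, 2.0)   # toy: theta power doubles at word onset
change = relative_power_change(theta_power, times)   # 0% pre-onset, +100% post-onset
```

Positive values after word onset correspond to "enhancement" (as for theta here), negative values to "suppression" (as for alpha/beta).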
Affiliation(s)
- Jacob Pohaku Momsen (corresponding author): Joint Doctoral Program in Language and Communicative Disorders, San Diego State University and UC San Diego, San Diego, CA, USA
- Alyson D. Abel: School of Speech, Language, and Hearing Sciences, San Diego State University, San Diego, CA, USA

81
Maguire MJ, Schneider JM, Melamed TC, Ralph YK, Poudel S, Raval VM, Mikhail D, Abel AD. Temporal and topographical changes in theta power between middle childhood and adolescence during sentence comprehension. Dev Cogn Neurosci 2021; 53:101056. [PMID: 34979479 PMCID: PMC8728578 DOI: 10.1016/j.dcn.2021.101056] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2021] [Revised: 12/15/2021] [Accepted: 12/29/2021] [Indexed: 11/08/2022] Open
Abstract
Time-frequency analysis of the EEG is increasingly used to study the neural oscillations supporting language comprehension. Although this method holds promise for developmental research, most existing work focuses on adults. Theta power (4–8 Hz) in particular often corresponds to semantic processing of words in isolation and in ongoing text. Here we investigated how the timing and topography of theta engagement to individual words during written sentence processing changes between childhood and adolescence (8–15 years). Results show that topographically, the theta response is broadly distributed in children, occurring over left and right central-posterior and midline frontal areas, and localizes to left central-posterior areas by adolescence. There were two notable developmental shifts. First, in response to each word, early (150–300 msec) theta engagement over frontal areas significantly decreases between 8–9 years and 10–11 years. Second, throughout the sentence, theta engagement over the right parietal areas significantly decreases between 10–11 years and 12–13 years, with younger children's theta response remaining significantly elevated between words compared to adolescents'. We found no significant differences between 12–13 years and 14–15 years. These findings indicate that children's engagement of the language network during sentence processing continues to change through middle childhood but stabilizes into adolescence.
Affiliation(s)
- Mandy J Maguire: University of Texas at Dallas Callier Center for Communication Disorders, 1966 Inwood Rd, Dallas, TX 75235, USA
- Julie M Schneider: Louisiana State University, 217 Thomas Boyd Hall, Baton Rouge, LA 70803, USA
- Tina C Melamed: University of Texas at Dallas Callier Center for Communication Disorders, 1966 Inwood Rd, Dallas, TX 75235, USA
- Yvonne K Ralph: University of Texas at Dallas Callier Center for Communication Disorders, 1966 Inwood Rd, Dallas, TX 75235, USA
- Sonali Poudel: University of Texas at Dallas Callier Center for Communication Disorders, 1966 Inwood Rd, Dallas, TX 75235, USA
- Vyom M Raval: University of Texas at Dallas Callier Center for Communication Disorders, 1966 Inwood Rd, Dallas, TX 75235, USA
- David Mikhail: University of Texas at Dallas Callier Center for Communication Disorders, 1966 Inwood Rd, Dallas, TX 75235, USA
- Alyson D Abel: San Diego State University, 5500 Campanile Dr, San Diego, CA 92182, USA

82
Palana J, Schwartz S, Tager-Flusberg H. Evaluating the Use of Cortical Entrainment to Measure Atypical Speech Processing: A Systematic Review. Neurosci Biobehav Rev 2021; 133:104506. [PMID: 34942267 DOI: 10.1016/j.neubiorev.2021.12.029] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2020] [Revised: 12/12/2021] [Accepted: 12/18/2021] [Indexed: 11/30/2022]
Abstract
BACKGROUND Cortical entrainment has emerged as a promising means for measuring continuous speech processing in young, neurotypical adults. However, its utility for capturing atypical speech processing has not been systematically reviewed. OBJECTIVES Synthesize evidence regarding the merit of measuring cortical entrainment to capture atypical speech processing and recommend avenues for future research. METHOD We systematically reviewed publications investigating entrainment to continuous speech in populations with auditory processing differences. RESULTS Of the 25 publications reviewed, most studies were conducted on older and/or hearing-impaired adults, for whom slow-wave entrainment to speech was often heightened compared to controls. Research conducted on populations with neurodevelopmental disorders, in whom slow-wave entrainment was often reduced, was less common. Across publications, findings highlighted associations between cortical entrainment and differences in speech processing performance. CONCLUSIONS Measures of cortical entrainment offer a useful means of capturing speech processing differences, and future research should leverage them more extensively when studying populations with neurodevelopmental disorders.
Affiliation(s)
- Joseph Palana: Department of Psychological and Brain Sciences, Boston University, 64 Cummington Mall, Boston, MA, 02215, USA; Laboratories of Cognitive Neuroscience, Division of Developmental Medicine, Harvard Medical School, Boston Children's Hospital, 1 Autumn Street, Boston, MA, 02215, USA
- Sophie Schwartz: Department of Psychological and Brain Sciences, Boston University, 64 Cummington Mall, Boston, MA, 02215, USA
- Helen Tager-Flusberg: Department of Psychological and Brain Sciences, Boston University, 64 Cummington Mall, Boston, MA, 02215, USA

83
Segaert K, Poulisse C, Markiewicz R, Wheeldon L, Marchment D, Adler Z, Howett D, Chan D, Mazaheri A. Detecting impaired language processing in patients with mild cognitive impairment using around-the-ear cEEgrid electrodes. Psychophysiology 2021; 59:e13964. [PMID: 34791701 DOI: 10.1111/psyp.13964] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2021] [Revised: 09/27/2021] [Accepted: 10/11/2021] [Indexed: 12/21/2022]
Abstract
Mild cognitive impairment (MCI) is the term used to identify those individuals with subjective and objective cognitive decline but with preserved activities of daily living and an absence of dementia. Although MCI can impact functioning in different cognitive domains, most notably episodic memory, relatively little is known about the comprehension of language in MCI. In this study, we used around-the-ear electrodes (cEEGrids) to identify impairments during language comprehension in patients with MCI. In a group of 23 patients with MCI and 23 age-matched controls, language comprehension was tested in a two-word phrase paradigm. We examined the oscillatory changes following word onset as a function of lexico-semantic single-word retrieval (e.g., swrfeq vs. swift) and multiword binding processes (e.g., horse preceded by swift vs. preceded by swrfeq). Electrophysiological signatures (as measured by the cEEGrids) were significantly different between patients with MCI and controls. In controls, lexical retrieval was associated with a rebound in the alpha/beta range, and binding was associated with a post-word alpha/beta suppression. In contrast, both the single-word retrieval and multiword binding signatures were absent in the MCI group. The signatures observed using cEEGrids in controls were comparable with those signatures obtained with a full-cap EEG setup. Importantly, our findings suggest that patients with MCI have impaired electrophysiological signatures for comprehending single words and multiword phrases. Moreover, cEEGrid setups provide a noninvasive and sensitive clinical tool for detecting early impairments in language comprehension in MCI.
Affiliation(s)
- K Segaert: School of Psychology, University of Birmingham, Birmingham, UK; Centre for Human Brain Health, University of Birmingham, Birmingham, UK
- C Poulisse: School of Psychology, University of Birmingham, Birmingham, UK
- R Markiewicz: School of Psychology, University of Birmingham, Birmingham, UK
- L Wheeldon: Department of Foreign Languages and Translation, University of Agder, Kristiansand, Norway
- D Marchment: Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK
- Z Adler: Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK
- D Howett: Faculty of Medical Sciences, Newcastle University, Newcastle upon Tyne, UK
- D Chan: Institute of Cognitive Neuroscience, University College London, London, UK
- A Mazaheri: School of Psychology, University of Birmingham, Birmingham, UK; Centre for Human Brain Health, University of Birmingham, Birmingham, UK

84
Markiewicz R, Segaert K, Mazaheri A. How the healthy ageing brain supports semantic binding during language comprehension. Eur J Neurosci 2021; 54:7899-7917. [PMID: 34779069 DOI: 10.1111/ejn.15525] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2021] [Revised: 11/01/2021] [Accepted: 11/05/2021] [Indexed: 01/02/2023]
Abstract
Semantic binding refers to constructing complex meaning based on elementary building blocks. Using electroencephalography (EEG), we investigated the age-related changes in modulations of oscillatory brain activity supporting lexical retrieval and semantic binding. Young and older adult participants were visually presented two-word phrases, which for the first word revealed a lexical retrieval signature (e.g., swift vs. swrfeq) and for the second word revealed a semantic binding signature (e.g., horse in a semantic binding "swift horse" vs. no binding "swrfeq horse" context). The oscillatory brain activity associated with lexical retrieval as well as semantic binding significantly differed between healthy older and young adults. Specifically for lexical retrieval, we found that different age groups exhibited opposite patterns of theta and alpha modulation, which as a combined picture suggest that lexical retrieval is associated with different and delayed signatures in older compared with young adults. For semantic binding, in young adults, we found a signature in the low-beta range centred around the target word onset (i.e., a smaller low-beta increase for binding relative to no binding), whereas in healthy older adults, we found an opposite binding signature about ~500 ms later in the low- and high-beta range (i.e., a smaller low- and high-beta decrease for binding relative to no binding). The novel finding of a different and delayed oscillatory signature for semantic binding in healthy older adults reflects that the integration of word meaning into the semantic context takes longer and relies on different mechanisms in healthy older compared with young adults.
Affiliation(s)
- Roksana Markiewicz: School of Psychology, University of Birmingham, Birmingham, UK; Centre for Human Brain Health, University of Birmingham, Birmingham, UK
- Katrien Segaert: School of Psychology, University of Birmingham, Birmingham, UK; Centre for Human Brain Health, University of Birmingham, Birmingham, UK; Centre for Developmental Science, University of Birmingham, Birmingham, UK
- Ali Mazaheri: School of Psychology, University of Birmingham, Birmingham, UK; Centre for Human Brain Health, University of Birmingham, Birmingham, UK

85
Hendriks M, van Ginkel W, Dijkstra T, Piai V. Dropping Beans or Spilling Secrets: How Idiomatic Context Bias Affects Prediction. J Cogn Neurosci 2021; 34:209-223. [PMID: 34813643 DOI: 10.1162/jocn_a_01798] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Idioms can have both a literal interpretation and a figurative interpretation (e.g., to "kick the bucket"). Which interpretation should be activated can be disambiguated by a preceding context (e.g., "The old man was sick. He kicked the bucket."). We investigated whether the idiomatic and literal uses of idioms have different predictive properties when the idiom has been biased toward a literal or figurative sentence interpretation. EEG was recorded as participants performed a lexical decision task on idiom-final words in biased idioms and literal (compositional) sentences. Targets in idioms were identified faster in both figuratively and literally used idioms than in compositional sentences. Time-frequency analysis of a prestimulus interval revealed relatively more alpha-beta power decreases in literally than figuratively used idiomatic sequences and compositional sentences. We argue that lexico-semantic retrieval plays a larger role in literally than figuratively biased idioms, as retrieval of the word meaning is less relevant in the latter and the word form has to be matched to a template. The results are interpreted in terms of context integration and word retrieval and have implications for models of language processing and predictive processing in general.
86
Fen MO, Tokmak Fen F. Unpredictable oscillations of SICNNs with delay. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.08.093] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
87
Kershner JR. Multisensory deficits in dyslexia may result from a locus coeruleus attentional network dysfunction. Neuropsychologia 2021; 161:108023. [PMID: 34530025 DOI: 10.1016/j.neuropsychologia.2021.108023] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2021] [Revised: 08/06/2021] [Accepted: 09/11/2021] [Indexed: 12/13/2022]
Abstract
A fundamental educational requirement of beginning reading is to learn, access, and rapidly process associations between novel visuospatial symbols and their phonological representations in speech. Children with difficulties in such cross-modal integration are often divided into dyslexia subtypes, based on whether their primary problem is with the written or the spoken component of decoding. The present review suggests that starting in infancy, perceptions of audiovisual speech are integrated by mutual oscillatory phase-resetting between sensory cortices, and throughout development visual and auditory experiences are coupled into unified perceptions. Entirely separate subtypes are incompatible with this view. Visual or auditory deficits will invariably affect processing to some degree in both domains. It is suggested that poor auditory/visual integration may be diagnostic for both forms of dyslexia, stemming from an encoding weakness in the early cross-sensory binding of audiovisual speech. The review presents a model of dyslexia as a dysfunction of the large-scale ventral and dorsal attention networks controlling such binding. Excessive glutamatergic neuronal excitability of the attention networks by the locus coeruleus-norepinephrine system may interfere with multisensory integration, with deleterious effects on the acquisition of reading by degrading grapheme/phoneme conversion.
Affiliation(s)
- John R Kershner: Dept. of Applied Psychology and Human Resources, University of Toronto, ON, M5S 1A1, Canada

88
Ríos-López P, Molinaro N, Bourguignon M, Lallier M. Right-hemisphere coherence to speech at pre-reading stages predicts reading performance one year later. JOURNAL OF COGNITIVE PSYCHOLOGY 2021. [DOI: 10.1080/20445911.2021.1986514] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Affiliation(s)
- Paula Ríos-López: BCBL, Basque Center on Cognition, Brain and Language, Donostia/San Sebastian, Spain; Leibniz Institute for Neurobiology, Magdeburg, Germany; Centre for Behavioral and Brain Sciences, Magdeburg, Germany
- Nicola Molinaro: BCBL, Basque Center on Cognition, Brain and Language, Donostia/San Sebastian, Spain; Ikerbasque, Basque Foundation for Science, Bilbao, Spain
- Mathieu Bourguignon: BCBL, Basque Center on Cognition, Brain and Language, Donostia/San Sebastian, Spain; Laboratoire de Cartographie Fonctionnelle du Cerveau, Université Libre de Bruxelles, Bruxelles, Belgium
- Marie Lallier: BCBL, Basque Center on Cognition, Brain and Language, Donostia/San Sebastian, Spain

89
Palaniyappan L. Dissecting the neurobiology of linguistic disorganisation and impoverishment in schizophrenia. Semin Cell Dev Biol 2021; 129:47-60. [PMID: 34507903 DOI: 10.1016/j.semcdb.2021.08.015] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2021] [Revised: 08/13/2021] [Accepted: 05/06/2021] [Indexed: 12/16/2022]
Abstract
Schizophrenia provides a quintessential disease model of how disturbances in the molecular mechanisms of neurodevelopment lead to disruptions in the emergence of cognition. The central and often persistent feature of this illness is the disorganisation and impoverishment of language and related expressive behaviours. Though clinically more prominent, the periodic perceptual distortions characterised as psychosis are non-specific and often episodic. While several insights into psychosis have been gained based on study of the dopaminergic system, the mechanistic basis of linguistic disorganisation and impoverishment is still elusive. Key findings from cellular to systems-level studies highlight the role of ubiquitous, inhibitory processes in language production. Dysregulation of these processes at critical time periods, in key brain areas, provides a surprisingly parsimonious account of linguistic disorganisation and impoverishment in schizophrenia. This review links the notion of excitatory/inhibitory (E/I) imbalance at cortical microcircuits to the expression of language behaviour characteristic of schizophrenia, through the building blocks of neurochemistry, neurophysiology, and neurocognition.
Affiliation(s)
- Lena Palaniyappan: Department of Psychiatry, University of Western Ontario, London, Ontario, Canada; Robarts Research Institute, University of Western Ontario, London, Ontario, Canada; Lawson Health Research Institute, London, Ontario, Canada

90
Li J, Hong B, Nolte G, Engel AK, Zhang D. Preparatory delta phase response is correlated with naturalistic speech comprehension performance. Cogn Neurodyn 2021; 16:337-352. [PMID: 35401861 PMCID: PMC8934811 DOI: 10.1007/s11571-021-09711-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2020] [Revised: 07/09/2021] [Accepted: 08/12/2021] [Indexed: 01/07/2023] Open
Abstract
While human speech comprehension is thought to be an active process that involves top-down predictions, it remains unclear how predictive information is used to prepare for the processing of upcoming speech information. We aimed to identify the neural signatures of the preparatory processing of upcoming speech. Participants selectively attended to one of two competing naturalistic, narrative speech streams, and a temporal response function (TRF) method was applied to derive event-related-like neural responses from electroencephalographic data. The phase responses to the attended speech in the delta band (1-4 Hz) were correlated with the comprehension performance of individual participants, with a latency of −200 to 0 ms relative to the onset of speech amplitude envelope fluctuations, over the fronto-central and left-lateralized parietal electrodes. The phase responses to the attended speech in the alpha band also correlated with comprehension performance, but with a latency of 650-980 ms post-onset over the fronto-central electrodes. Distinct neural signatures were found for the attentional modulation, taking the form of TRF-based amplitude responses at a latency of 240-320 ms post-onset over the left-lateralized fronto-central and occipital electrodes. Our findings reveal how the brain gets prepared to process upcoming speech in a continuous, naturalistic speech context.
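The TRF approach regresses the EEG on time-lagged copies of a stimulus feature (here, the speech amplitude envelope), typically with ridge regularisation. A minimal sketch on synthetic data; the lag range, regularisation strength, and toy kernel are assumptions, not the authors' settings:

```python
import numpy as np

def lagged_design(stim, lags):
    """Time-lagged design matrix: column k holds the stimulus shifted by lags[k] samples."""
    X = np.zeros((len(stim), len(lags)))
    for k, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, k] = stim[:len(stim) - lag]
        else:
            X[:lag, k] = stim[-lag:]
    return X

def estimate_trf(stim, eeg, lags, lam=1.0):
    """Ridge-regularised TRF: solve (X'X + lam*I) w = X'y for the kernel w."""
    X = lagged_design(stim, lags)
    return np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ eeg)

rng = np.random.default_rng(2)
stim = rng.standard_normal(5000)                 # stand-in for a speech envelope
true_trf = np.array([0.0, 0.5, 1.0, 0.5, 0.0])   # toy response kernel peaking at lag 2
lags = np.arange(5)
eeg = lagged_design(stim, lags) @ true_trf + 0.1 * rng.standard_normal(5000)

trf_hat = estimate_trf(stim, eeg, lags, lam=1e-2)  # recovers the kernel from noisy data
```

Latency-specific effects such as the −200 to 0 ms delta-phase correlation reported here correspond to examining the estimated kernel (or its phase) at particular lags.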
Affiliation(s)
- Jiawei Li: Department of Psychology, School of Social Sciences, Tsinghua University, Room 334, Mingzhai Building, Beijing, China; Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing, China
- Bo Hong: Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China; Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing, China
- Guido Nolte: Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg Eppendorf, Hamburg, Germany
- Andreas K. Engel: Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg Eppendorf, Hamburg, Germany
- Dan Zhang: Department of Psychology, School of Social Sciences, Tsinghua University, Room 334, Mingzhai Building, Beijing, China; Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing, China

91
Hustá C, Zheng X, Papoutsi C, Piai V. Electrophysiological Signatures of Conceptual and Lexical Retrieval from Semantic Memory. Neuropsychologia 2021; 161:107988. [PMID: 34389320 DOI: 10.1016/j.neuropsychologia.2021.107988] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2020] [Revised: 04/28/2021] [Accepted: 08/04/2021] [Indexed: 11/24/2022]
Abstract
Retrieval from semantic memory of conceptual and lexical information is essential for producing speech. It is unclear whether there are differences in the neural mechanisms of conceptual and lexical retrieval when spreading activation through semantic memory is initiated by verbal or nonverbal settings. The same twenty participants took part in two EEG experiments. The first experiment examined conceptual and lexical retrieval following nonverbal settings, whereas the second experiment was a replication of previous studies examining conceptual and lexical retrieval following verbal settings. Target pictures were presented after constraining and nonconstraining contexts. In the nonverbal settings, contexts were provided as two priming pictures (e.g., constraining: nest, feather; nonconstraining: anchor, lipstick; target picture: BIRD). In the verbal settings, contexts were provided as sentences (e.g., constraining: "The farmer milked a..."; nonconstraining: "The child drew a..."; target picture: COW). Target pictures were named faster following constraining contexts in both experiments, indicating that conceptual preparation starts before target picture onset in constraining conditions. In the verbal experiment, we replicated the alpha-beta power decreases in constraining relative to nonconstraining conditions before target picture onset. No such power decreases were found in the nonverbal experiment. Power decreases in constraining relative to nonconstraining conditions were significantly different between experiments. Our findings suggest that participants engage in conceptual preparation following verbal and nonverbal settings, albeit differently. The retrieval of a target word, initiated by verbal settings, is associated with alpha-beta power decreases. By contrast, broad conceptual preparation alone, prompted by nonverbal settings, does not seem enough to elicit alpha-beta power decreases. These findings have implications for theories of oscillations and semantic memory.
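The core contrast here (spectral power in the alpha-beta range before picture onset, compared between constraining and nonconstraining contexts) can be illustrated with a toy computation. The sketch below uses plain numpy on simulated epochs; the 8-25 Hz band edges, sampling rate, and signal parameters are illustrative assumptions, and the study itself would have used a proper time-frequency decomposition rather than a single FFT per epoch.

```python
import numpy as np

def band_power(signal, fs, fmin, fmax):
    """Mean FFT power of `signal` within [fmin, fmax] Hz."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= fmin) & (freqs <= fmax)
    return power[band].mean()

rng = np.random.default_rng(0)
fs = 250
t = np.arange(0, 2, 1 / fs)  # one 2-s pre-picture epoch per condition

# Toy epochs: the "constraining" epoch carries less 10 Hz (alpha) energy,
# mimicking an alpha-beta power decrease before the target picture.
nonconstraining = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
constraining = 0.4 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

p_non = band_power(nonconstraining, fs, 8, 25)
p_con = band_power(constraining, fs, 8, 25)
rel_change = (p_con - p_non) / p_non  # negative => power decrease
print(rel_change < 0)
```

In practice such contrasts are computed per trial and per time-frequency bin and tested with cluster-based statistics; the sketch only captures the direction of the effect.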
Affiliation(s)
- Cecília Hustá
- Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands.
| | - Xiaochen Zheng
- Radboud University, Donders Centre for Cognitive Neuroimaging, Nijmegen, the Netherlands
| | - Christina Papoutsi
- Radboud University, Donders Centre for Cognition, Nijmegen, the Netherlands; Utrecht University, RMA Linguistics, Utrecht, the Netherlands
| | - Vitória Piai
- Radboud University, Donders Centre for Cognition, Nijmegen, the Netherlands; Radboudumc, Donders Centre for Medical Neuroscience, Department of Medical Psychology, Nijmegen, the Netherlands
| |
|
92
|
Nowak K, Costa-Faidella J, Dacewicz A, Escera C, Szelag E. Altered event-related potentials and theta oscillations index auditory working memory deficits in healthy aging. Neurobiol Aging 2021; 108:1-15. [PMID: 34464912 DOI: 10.1016/j.neurobiolaging.2021.07.019] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2020] [Revised: 07/25/2021] [Accepted: 07/28/2021] [Indexed: 11/28/2022]
Abstract
Speech comprehension deficits constitute a major issue for an increasingly aged population, as they may lead older individuals to social isolation. Since conversation requires constant monitoring, updating, and selection of information, auditory working memory decline, rather than impoverished hearing acuity, has been suggested as a core factor. However, in stark contrast to the visual domain, the neurophysiological mechanisms underlying auditory working memory deficits in healthy aging remain poorly understood, especially those related to on-the-fly information processing under increasing load. Therefore, we investigated the behavioral costs and electrophysiological differences associated with healthy aging and working memory load during continuous auditory processing. We recorded EEG activity from 27 younger (∼25 years) and 29 older (∼70 years) participants during their performance on an auditory version of the n-back task with speech syllables and 2 workload levels (1-back; 2-back). Behavioral measures were analyzed as indices of function; event-related potentials as proxies for sensory and cognitive processes; and theta oscillatory power as a reflection of memory and central executive function. Our results show age-related differences in auditory information processing within a latency range that is consistent with a series of impaired functions, from sensory gating to cognitive resource allocation during constant information updating, especially under high load.
Affiliation(s)
- Kamila Nowak
- Laboratory of Neuropsychology, Nencki Institute of Experimental Biology of the Polish Academy of Sciences, Warsaw, Poland
| | - Jordi Costa-Faidella
- Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Barcelona, Catalonia, Spain; Institute of Neurosciences, University of Barcelona, Barcelona, Catalonia, Spain; Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Barcelona, Catalonia, Spain.
| | - Anna Dacewicz
- Laboratory of Neuropsychology, Nencki Institute of Experimental Biology of the Polish Academy of Sciences, Warsaw, Poland
| | - Carles Escera
- Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Barcelona, Catalonia, Spain; Institute of Neurosciences, University of Barcelona, Barcelona, Catalonia, Spain; Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Barcelona, Catalonia, Spain
| | - Elzbieta Szelag
- Laboratory of Neuropsychology, Nencki Institute of Experimental Biology of the Polish Academy of Sciences, Warsaw, Poland
| |
|
93
|
Ten Oever S, Martin AE. An oscillating computational model can track pseudo-rhythmic speech by using linguistic predictions. eLife 2021; 10:68066. [PMID: 34338196 PMCID: PMC8328513 DOI: 10.7554/elife.68066] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2021] [Accepted: 07/16/2021] [Indexed: 11/19/2022] Open
Abstract
Neuronal oscillations putatively track speech in order to optimize sensory processing. However, it is unclear how isochronous brain oscillations can track pseudo-rhythmic speech input. Here we propose that oscillations can track pseudo-rhythmic speech when considering that speech time is dependent on content-based predictions flowing from internal language models. We show that temporal dynamics of speech are dependent on the predictability of words in a sentence. A computational model including oscillations, feedback, and inhibition is able to track pseudo-rhythmic speech input. As the model processes, it generates temporal phase codes, which are a candidate mechanism for carrying information forward in time. The model is optimally sensitive to the natural temporal speech dynamics and can explain empirical data on temporal speech illusions. Our results suggest that speech tracking does not have to rely only on the acoustics but could also exploit ongoing interactions between oscillations and constraints flowing from internal language models.
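The paper's central idea, that content-based predictions shift word timing and thereby map onto oscillator phase, can be caricatured in a few lines. The following sketch is a hypothetical illustration, not the authors' model: the oscillator frequency, nominal interval, and shift parameter are invented, and the real model additionally includes feedback and inhibition.

```python
import numpy as np

# A fixed theta-band oscillator is read out at word onset.  More predictable
# words tend to be uttered earlier, so predictability maps onto phase.
f_osc = 5.0               # oscillator frequency in Hz (theta-range assumption)
isochronous_onset = 0.2   # nominal inter-word interval in seconds (assumed)

def onset_time(predictability, shift=0.05):
    """Predictable words arrive earlier than the isochronous grid (assumed linear shift)."""
    return isochronous_onset - shift * predictability

def phase_at(t):
    """Oscillator phase (radians) at time t."""
    return (2 * np.pi * f_osc * t) % (2 * np.pi)

for p in (0.1, 0.5, 0.9):
    print(f"predictability {p:.1f} -> phase {phase_at(onset_time(p)):.2f} rad")
```

Because onset time decreases monotonically with predictability, each predictability level lands at a distinct, earlier phase of the cycle, which is the sense in which phase can carry information forward in time.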
Affiliation(s)
- Sanne Ten Oever
- Language and Computation in Neural Systems group, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; Donders Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, Netherlands; Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
| | - Andrea E Martin
- Language and Computation in Neural Systems group, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; Donders Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, Netherlands
| |
|
94
|
Hollenstein N, Renggli C, Glaus B, Barrett M, Troendle M, Langer N, Zhang C. Decoding EEG Brain Activity for Multi-Modal Natural Language Processing. Front Hum Neurosci 2021; 15:659410. [PMID: 34326723 PMCID: PMC8314009 DOI: 10.3389/fnhum.2021.659410] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Accepted: 06/14/2021] [Indexed: 11/13/2022] Open
Abstract
Until recently, human behavioral data from reading have mainly been of interest to researchers seeking to understand human cognition. However, these human language processing signals can also be beneficial in machine learning-based natural language processing tasks. Using EEG brain activity for this purpose remains largely unexplored. In this paper, we present the first large-scale study systematically analyzing the potential of EEG brain activity data for improving natural language processing tasks, with a special focus on which features of the signal are most beneficial. We present a multi-modal machine learning architecture that learns jointly from textual input as well as from EEG features. We find that filtering the EEG signals into frequency bands is more beneficial than using the broadband signal. Moreover, for a range of word embedding types, EEG data improves binary and ternary sentiment classification and outperforms multiple baselines. For more complex tasks such as relation detection, only the contextualized BERT embeddings outperform the baselines in our experiments, which raises the need for further research. Finally, EEG data prove particularly promising when limited training data are available.
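The finding that band-filtered EEG beats the broadband signal presupposes a way to split EEG into frequency bands and fuse the result with word embeddings. A minimal numpy sketch of that feature construction is below; the band edges, brick-wall FFT filter, mean-power features, and the 300-dimensional embedding are assumptions for illustration, not the paper's architecture (which trains a joint multi-modal model).

```python
import numpy as np

# Illustrative band edges; the paper's exact bands may differ.
BANDS = {"theta": (4, 8), "alpha": (8.5, 13), "beta": (13.5, 30), "gamma": (30.5, 49.5)}

def fft_bandpass(x, fs, lo, hi):
    """Zero out FFT coefficients outside [lo, hi] Hz (crude brick-wall filter)."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, 1.0 / fs)
    spec[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spec, n=x.size)

def band_features(x, fs):
    """One mean-power feature per band, to be concatenated with a word embedding."""
    return np.array([np.mean(fft_bandpass(x, fs, lo, hi) ** 2) for lo, hi in BANDS.values()])

rng = np.random.default_rng(1)
fs = 128
eeg = rng.standard_normal(fs * 2)          # 2 s of toy broadband EEG for one word
word_embedding = rng.standard_normal(300)  # e.g. a GloVe-sized vector (assumed)
multimodal = np.concatenate([word_embedding, band_features(eeg, fs)])
print(multimodal.shape)  # (304,)
```

The concatenated vector would then feed a downstream classifier; in the actual study the EEG features are richer (per-electrode, per-band time courses) and learned jointly with the text representation.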
Affiliation(s)
- Nora Hollenstein
- Department of Nordic Studies and Linguistics, University of Copenhagen, Copenhagen, Denmark
| | - Cedric Renggli
- Department of Computer Science, Swiss Federal Institute of Technology, ETH Zurich, Zurich, Switzerland
| | - Benjamin Glaus
- Department of Computer Science, Swiss Federal Institute of Technology, ETH Zurich, Zurich, Switzerland
| | - Maria Barrett
- Department of Computer Science, IT University of Copenhagen, Copenhagen, Denmark
| | - Marius Troendle
- Department of Psychology, University of Zurich, Zurich, Switzerland
| | - Nicolas Langer
- Department of Psychology, University of Zurich, Zurich, Switzerland
| | - Ce Zhang
- Department of Computer Science, Swiss Federal Institute of Technology, ETH Zurich, Zurich, Switzerland
| |
|
95
|
Li Y, Xing H, Zhang L, Shu H, Zhang Y. How Visual Word Decoding and Context-Driven Auditory Semantic Integration Contribute to Reading Comprehension: A Test of Additive vs. Multiplicative Models. Brain Sci 2021; 11:brainsci11070830. [PMID: 34201695 PMCID: PMC8301993 DOI: 10.3390/brainsci11070830] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2021] [Revised: 06/11/2021] [Accepted: 06/21/2021] [Indexed: 11/21/2022] Open
Abstract
Theories of reading comprehension emphasize decoding and listening comprehension as two essential components. The current study aimed to investigate how Chinese character decoding and context-driven auditory semantic integration contribute to reading comprehension in Chinese middle school students. Seventy-five middle school students were tested. Context-driven auditory semantic integration was assessed with speech-in-noise tests in which the fundamental frequency (F0) contours of spoken sentences were either kept natural or acoustically flattened, with the latter requiring a higher degree of contextual information. Statistical modeling with hierarchical regression was conducted to examine the contributions of Chinese character decoding and context-driven auditory semantic integration to reading comprehension. Performance in Chinese character decoding and auditory semantic integration scores with the flattened (but not natural) F0 sentences significantly predicted reading comprehension. Furthermore, the contributions of these two factors to reading comprehension were better fitted with an additive model than with a multiplicative model. These findings indicate that reading comprehension in middle schoolers is associated with not only character decoding but also the listening ability to make better use of the sentential context for semantic integration in a severely degraded speech-in-noise condition. The results add to our understanding of the multi-faceted nature of reading comprehension in children. Future research could further address the age-dependent development and maturation of reading skills by examining and controlling other important cognitive variables, and apply neuroimaging techniques such as functional magnetic resonance imaging and electrophysiology to reveal the neural substrates and neural oscillatory patterns underlying the contribution of auditory semantic integration and the observed additive model to reading comprehension.
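The additive-versus-multiplicative comparison amounts to asking whether an interaction (product) term improves a hierarchical regression. A sketch on simulated data is shown below; the sample size matches the study but the data and coefficients are invented. Since the multiplicative model nests the additive one, its in-sample R² can never be lower, so the question is whether the interaction term adds meaningful explanatory power.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 75  # matches the study's sample size; data here are simulated, not the study's
decoding = rng.standard_normal(n)      # stand-in for character decoding scores
integration = rng.standard_normal(n)   # stand-in for semantic integration scores
# Simulate reading comprehension from an additive ground truth (assumed weights).
reading = 0.6 * decoding + 0.5 * integration + 0.3 * rng.standard_normal(n)

def r2(predictors, y):
    """In-sample R-squared of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

additive = r2([decoding, integration], reading)
multiplicative = r2([decoding, integration, decoding * integration], reading)
print(f"additive R2={additive:.3f}, interaction gain={multiplicative - additive:.3f}")
```

With an additive ground truth the interaction gain hovers near zero, mirroring the paper's conclusion; in a real analysis the increment would be tested with an F-test or information criteria rather than eyeballed.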
Affiliation(s)
- Yu Li
- Division of Science and Technology, BNU-HKBU United International College, Zhuhai 519087, China;
| | - Hongbing Xing
- Institute on Education Policy and Evaluation of International Students, Beijing Language and Culture University, Beijing 100083, China;
| | - Linjun Zhang
- Beijing Advanced Innovation Center for Language Resources and College of Advanced Chinese Training, Beijing Language and Culture University, Beijing 100083, China
- Correspondence: (L.Z.); (Y.Z.)
| | - Hua Shu
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China;
| | - Yang Zhang
- Department of Speech-Language-Hearing Sciences and Center for Neurobehavioral Development, University of Minnesota, Minneapolis, MN 55455, USA
- Correspondence: (L.Z.); (Y.Z.)
| |
|
96
|
Abbasi O, Steingräber N, Gross J. Correcting MEG Artifacts Caused by Overt Speech. Front Neurosci 2021; 15:682419. [PMID: 34168536 PMCID: PMC8217464 DOI: 10.3389/fnins.2021.682419] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2021] [Accepted: 05/17/2021] [Indexed: 11/13/2022] Open
Abstract
Recording brain activity during speech production using magnetoencephalography (MEG) can help us to understand the dynamics of speech production. However, these measurements are challenging due to artifacts induced by several sources such as facial muscle activity, lower jaw movements, and head movements. Here, we aimed to characterize speech-related artifacts, focusing on head movements, and subsequently present an approach to remove these artifacts from MEG data. We recorded MEG from 11 healthy participants while they pronounced various syllables at different loudness levels. Head positions/orientations were extracted during speech production to investigate their role in MEG distortions. Finally, we present an artifact rejection approach that combines regression analysis and signal space projection (SSP) to remove the induced artifacts from the MEG data. Our results show that louder speech leads to stronger head movements and stronger MEG distortions. Our proposed artifact rejection approach successfully removed the speech-related artifacts and retrieved the underlying neurophysiological signals. As the presented approach removes artifacts arising from head movements induced by overt speech, it will facilitate MEG research addressing the neural basis of speech production.
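The regression half of the proposed artifact rejection can be sketched compactly: the continuous head-position signals serve as reference regressors whose least-squares projection is subtracted from each MEG channel. The code below is a simplified single-channel illustration on simulated signals; it omits the SSP stage that the authors combine with regression, and all signal parameters are invented.

```python
import numpy as np

def regress_out(meg, reference):
    """Remove the least-squares projection of reference signals from one MEG channel.

    meg:        (n_samples,) one MEG channel
    reference:  (n_samples, n_ref) e.g. continuous head position/orientation traces
    """
    X = np.column_stack([np.ones(len(meg)), reference])
    beta, *_ = np.linalg.lstsq(X, meg, rcond=None)
    return meg - X @ beta

rng = np.random.default_rng(3)
t = np.linspace(0, 10, 2000)
head = np.column_stack([np.sin(2 * np.pi * 0.3 * t)])  # slow simulated head movement
brain = 0.1 * rng.standard_normal(t.size)              # toy neural signal
meg = brain + 5.0 * head[:, 0]                         # heavily contaminated channel
cleaned = regress_out(meg, head)
print(np.corrcoef(cleaned, head[:, 0])[0, 1])  # ~0 after correction
```

By construction the residual is orthogonal to the reference regressors, so any MEG variance linearly explained by head movement is removed; SSP then targets the remaining spatially structured artifact components.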
Affiliation(s)
- Omid Abbasi
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
| | - Nadine Steingräber
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
| | - Joachim Gross
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany; Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany; Centre for Cognitive Neuroimaging, University of Glasgow, Glasgow, United Kingdom
| |
|
97
|
Bröhl F, Kayser C. Delta/theta band EEG differentially tracks low and high frequency speech-derived envelopes. Neuroimage 2021; 233:117958. [PMID: 33744458 PMCID: PMC8204264 DOI: 10.1016/j.neuroimage.2021.117958] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2020] [Revised: 03/08/2021] [Accepted: 03/09/2021] [Indexed: 11/01/2022] Open
Abstract
The representation of speech in the brain is often examined by measuring the alignment of rhythmic brain activity to the speech envelope. To conveniently quantify this alignment (termed 'speech tracking') many studies consider the broadband speech envelope, which combines acoustic fluctuations across the spectral range. Using EEG recordings, we show that using this broadband envelope can provide a distorted picture on speech encoding. We systematically investigated the encoding of spectrally-limited speech-derived envelopes presented by individual and multiple noise carriers in the human brain. Tracking in the 1 to 6 Hz EEG bands differentially reflected low (0.2 - 0.83 kHz) and high (2.66 - 8 kHz) frequency speech-derived envelopes. This was independent of the specific carrier frequency but sensitive to attentional manipulations, and may reflect the context-dependent emphasis of information from distinct spectral ranges of the speech envelope in low frequency brain activity. As low and high frequency speech envelopes relate to distinct phonemic features, our results suggest that functionally distinct processes contribute to speech tracking in the same EEG bands, and are easily confounded when considering the broadband speech envelope.
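A spectrally-limited speech-derived envelope of the kind used here is typically obtained by band-limiting the audio and taking the amplitude envelope of the analytic signal. The sketch below implements the FFT-based Hilbert transform in plain numpy on a synthetic amplitude-modulated carrier; the 100 Hz carrier and 4 Hz modulation are illustrative stand-ins for one spectral band of speech and its syllable-rate envelope.

```python
import numpy as np

def envelope(x):
    """Amplitude envelope via the analytic signal (FFT-based Hilbert transform)."""
    n = x.size
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0   # double positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0       # keep Nyquist bin as-is for even n
    return np.abs(np.fft.ifft(spec * h))

fs = 1000
t = np.arange(0, 1, 1 / fs)
# A 100 Hz carrier amplitude-modulated at 4 Hz, mimicking a syllable-rate
# envelope riding on one spectral band of speech.
mod = 1.0 + 0.8 * np.sin(2 * np.pi * 4 * t)
carrier = np.sin(2 * np.pi * 100 * t)
env = envelope(mod * carrier)
print(np.corrcoef(env, mod)[0, 1])  # close to 1
```

Applying this per spectral band (rather than to the broadband waveform) yields the band-specific envelopes whose differential tracking in delta/theta EEG is the paper's main point.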
Affiliation(s)
- Felix Bröhl
- Department for Cognitive Neuroscience, Faculty of Biology, Bielefeld University, Universitätsstr. 25, 33615 Bielefeld, Germany.
| | - Christoph Kayser
- Department for Cognitive Neuroscience, Faculty of Biology, Bielefeld University, Universitätsstr. 25, 33615 Bielefeld, Germany
| |
|
98
|
Wang P, Knösche TR, Chen L, Brauer J, Friederici AD, Maess B. Functional brain plasticity during L1 training on complex sentences: Changes in gamma-band oscillatory activity. Hum Brain Mapp 2021; 42:3858-3870. [PMID: 33942956 PMCID: PMC8288093 DOI: 10.1002/hbm.25470] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2021] [Revised: 04/16/2021] [Accepted: 04/26/2021] [Indexed: 01/12/2023] Open
Abstract
The adult human brain remains plastic even after puberty. However, whether first language (L1) training in adults can alter the language network is still largely unknown. Thus, we conducted a longitudinal training experiment on syntactically complex German sentence comprehension. Sentence complexity was varied by the depth of the center embedded relative clauses (i.e., single or double embedded). Comprehension was tested after each sentence with a question on the thematic role assignment. Thirty adult, native German speakers were recruited for 4 days of training. Magnetoencephalography (MEG) data were recorded and subjected to spectral power analysis covering the classical frequency bands (i.e., theta, alpha, beta, low gamma, and gamma). Normalized spectral power, time-locked to the final closure of the relative clause, was subjected to a two-factor analysis ("sentence complexity" and "training days"). Results showed that for the more complex sentences, the interaction of sentence complexity and training days was observed in Brodmann area 44 (BA 44) as a decrease of gamma power with training. Moreover, in the gamma band (55–95 Hz), functional connectivity between BA 44 and other brain regions, such as the inferior frontal sulcus and the inferior parietal cortex, was correlated with the behavioral performance increase due to training. These results show that even for native speakers, complex L1 sentence training improves language performance and alters neural activities of the left hemispheric language network. Training strengthens the use of the dorsal processing stream with working-memory-related brain regions for syntactically complex sentences, thereby demonstrating the brain's functional plasticity for L1 training.
Affiliation(s)
- Peng Wang
- Brain Networks Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Thomas R. Knösche
- Brain Networks Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Luyao Chen
- College of Chinese Language and Culture, Beijing Normal University, Beijing, China
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Jens Brauer
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Office of the Vice-President for Young Researchers, Friedrich Schiller University, Jena, Germany
| | - Angela D. Friederici
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Burkhard Maess
- Brain Networks Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| |
|
99
|
Chen F, Zhang H, Ding H, Wang S, Peng G, Zhang Y. Neural coding of formant-exaggerated speech and nonspeech in children with and without autism spectrum disorders. Autism Res 2021; 14:1357-1374. [PMID: 33792205 DOI: 10.1002/aur.2509] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2020] [Revised: 03/09/2021] [Accepted: 03/16/2021] [Indexed: 12/15/2022]
Abstract
The presence of vowel exaggeration in infant-directed speech (IDS) may adapt to the age-appropriate demands in speech and language acquisition. Previous studies have provided behavioral evidence of atypical auditory processing of IDS in children with autism spectrum disorders (ASD), while the underlying neurophysiological mechanisms remain unknown. This event-related potential (ERP) study investigated the neural coding of formant-exaggerated speech and nonspeech in 24 4- to 11-year-old children with ASD and 24 typically-developing (TD) peers. The EEG data were recorded using an alternating block design, in which each stimulus type (exaggerated/non-exaggerated sound) was presented with equal probability. ERP waveform analysis revealed an enhanced P1 for vowel formant exaggeration in the TD group but not in the ASD group. This speech-specific atypical processing in ASD was not found for the nonspeech stimuli, which showed similar P1 enhancement in both ASD and TD groups. Moreover, the time-frequency analysis indicated that children with ASD showed differences in neural synchronization in the delta-theta bands for processing acoustic formant changes embedded in nonspeech. Collectively, the results add substantiating neurophysiological evidence (i.e., a lack of neural enhancement effect of vowel exaggeration) for atypical auditory processing of IDS in children with ASD, which may exert a negative effect on phonetic encoding and language learning. LAY SUMMARY: Atypical responses to motherese might act as a potential early marker of risk for children with ASD. This study investigated the neural responses to such socially relevant stimuli in the ASD brain, and the results suggested a lack of neural enhancement in response to motherese, even in individuals without intellectual disability.
Affiliation(s)
- Fei Chen
- School of Foreign Languages, Hunan University, Changsha, China; Research Centre for Language, Cognition, and Neuroscience & Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Hong Kong SAR, China; Department of Speech-Language-Hearing Sciences & Center for Neurobehavioral Development, University of Minnesota, Twin Cities, Minnesota, USA
| | - Hao Zhang
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai, China
| | - Hongwei Ding
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai, China
| | - Suiping Wang
- School of Psychology, South China Normal University, Guangzhou, China
| | - Gang Peng
- Research Centre for Language, Cognition, and Neuroscience & Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Hong Kong SAR, China
| | - Yang Zhang
- Department of Speech-Language-Hearing Sciences & Center for Neurobehavioral Development, University of Minnesota, Twin Cities, Minnesota, USA
| |
|
100
|
de Lange P, Boto E, Holmes N, Hill RM, Bowtell R, Wens V, De Tiège X, Brookes MJ, Bourguignon M. Measuring the cortical tracking of speech with optically-pumped magnetometers. Neuroimage 2021; 233:117969. [PMID: 33744453 DOI: 10.1016/j.neuroimage.2021.117969] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2020] [Revised: 01/08/2021] [Accepted: 03/04/2021] [Indexed: 11/25/2022] Open
Abstract
During continuous speech listening, brain activity tracks speech rhythmicity at frequencies matching the repetition rates of phrases (0.2-1.5 Hz), words (2-4 Hz) and syllables (4-8 Hz). Here, we evaluated the applicability of wearable MEG based on optically-pumped magnetometers (OPMs) to measure such cortical tracking of speech (CTS). Measuring CTS with OPMs is a priori challenging given the complications associated with OPM measurements at frequencies below 4 Hz, due to increased intrinsic interference and head movement artifacts. Still, this represents an important development as OPM-MEG provides lifespan compliance and substantially improved spatial resolution compared with classical MEG. In this study, four healthy right-handed adults listened to continuous speech for 9 min. The radial component of the magnetic field was recorded simultaneously with 45-46 OPMs evenly covering the scalp surface and fixed to an additively manufactured helmet which fitted all 4 participants. We estimated CTS with reconstruction accuracy and coherence, and determined the number of dominant principal components (PCs) to remove from the data (as a preprocessing step) for optimal estimation. We also identified the dominant source of CTS using a minimum norm estimate. CTS estimated with reconstruction accuracy and coherence was significant in all 4 participants at phrasal and word rates, and in 3 participants (reconstruction accuracy) or 2 (coherence) at syllabic rate. Overall, close-to-optimal CTS estimation was obtained when the first 3 (reconstruction accuracy) or 10 (coherence) PCs were removed from the data. Importantly, values of reconstruction accuracy (~0.4 for 0.2-1.5-Hz CTS and ~0.1 for 2-8-Hz CTS) were remarkably close to those previously reported in classical MEG studies. Finally, source reconstruction localized the main sources of CTS to bilateral auditory cortices. In conclusion, this study demonstrates that OPMs can be used for CTS assessment. This finding opens new research avenues to unravel the neural network involved in CTS across the lifespan and potential alterations in, e.g., language developmental disorders. Data also suggest that OPMs are generally suitable for recording neural activity at frequencies below 4 Hz provided PCA is used as a preprocessing step, 0.2-1.5 Hz being the lowest frequency range successfully investigated here.
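The preprocessing step described above, removing the first few dominant principal components to suppress low-frequency interference, can be sketched via an SVD. The simulation below is an invented stand-in (the sensor count loosely matches the 45-46 OPMs; the rank-1 interference pattern is not real movement or interference data) showing how projecting out the top PCs shrinks the interference-dominated recording.

```python
import numpy as np

def remove_top_pcs(data, k):
    """Project out the k dominant principal components of (sensors x samples) data."""
    data = data - data.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(data, full_matrices=False)
    return data - U[:, :k] @ (U[:, :k].T @ data)

rng = np.random.default_rng(4)
n_sensors, n_samples, fs = 46, 5000, 1000
# Rank-1 low-frequency (0.5 Hz) interference shared across sensors,
# plus weak uncorrelated "brain" noise.
topography = rng.standard_normal(n_sensors)
drift = np.sin(2 * np.pi * 0.5 * np.arange(n_samples) / fs)
meg = 0.05 * rng.standard_normal((n_sensors, n_samples)) + 10.0 * np.outer(topography, drift)

cleaned = remove_top_pcs(meg, k=3)
ratio = np.linalg.norm(cleaned) / np.linalg.norm(meg)
print(ratio)  # far below 1: the interference subspace is gone
```

The danger, which motivates the paper's search for an optimal number of PCs, is that removing too many components also deletes genuine low-frequency neural signal that lives in the same subspace.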
Affiliation(s)
- Paul de Lange
- Laboratoire de Cartographie fonctionnelle du Cerveau, UNI - ULB Neuroscience Institute, Université libre de Bruxelles (ULB), 808 Lennik Street, Brussels 1070, Belgium
| | - Elena Boto
- Sir Peter Mansfield Imaging Centre, School of Physics and Astronomy, University of Nottingham, University Park, Nottingham NG7 2RD, United Kingdom
| | - Niall Holmes
- Sir Peter Mansfield Imaging Centre, School of Physics and Astronomy, University of Nottingham, University Park, Nottingham NG7 2RD, United Kingdom
| | - Ryan M Hill
- Sir Peter Mansfield Imaging Centre, School of Physics and Astronomy, University of Nottingham, University Park, Nottingham NG7 2RD, United Kingdom
| | - Richard Bowtell
- Sir Peter Mansfield Imaging Centre, School of Physics and Astronomy, University of Nottingham, University Park, Nottingham NG7 2RD, United Kingdom
| | - Vincent Wens
- Laboratoire de Cartographie fonctionnelle du Cerveau, UNI - ULB Neuroscience Institute, Université libre de Bruxelles (ULB), 808 Lennik Street, Brussels 1070, Belgium; Department of Functional Neuroimaging, Service of Nuclear Medicine, CUB Hôpital Erasme, Université libre de Bruxelles (ULB), Brussels, Belgium
| | - Xavier De Tiège
- Laboratoire de Cartographie fonctionnelle du Cerveau, UNI - ULB Neuroscience Institute, Université libre de Bruxelles (ULB), 808 Lennik Street, Brussels 1070, Belgium; Department of Functional Neuroimaging, Service of Nuclear Medicine, CUB Hôpital Erasme, Université libre de Bruxelles (ULB), Brussels, Belgium
| | - Matthew J Brookes
- Sir Peter Mansfield Imaging Centre, School of Physics and Astronomy, University of Nottingham, University Park, Nottingham NG7 2RD, United Kingdom
| | - Mathieu Bourguignon
- Laboratoire de Cartographie fonctionnelle du Cerveau, UNI - ULB Neuroscience Institute, Université libre de Bruxelles (ULB), 808 Lennik Street, Brussels 1070, Belgium; Laboratory of neurophysiology and movement biomechanics, UNI - ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium; BCBL, Basque Center on Cognition, Brain and Language, San Sebastian 20009, Spain.
| |
|