1
Arutiunian V, Santhosh M, Neuhaus E, Borland H, Tompkins C, Bernier RA, Bookheimer SY, Dapretto M, Gupta AR, Jack A, Jeste S, McPartland JC, Naples A, Van Horn JD, Pelphrey KA, Webb SJ. The relationship between gamma-band neural oscillations and language skills in youth with Autism Spectrum Disorder and their first-degree relatives. Mol Autism 2024; 15:19. PMID: 38711098; PMCID: PMC11075235; DOI: 10.1186/s13229-024-00598-1
Abstract
BACKGROUND Most children with Autism Spectrum Disorder (ASD) have co-occurring language impairments, and some of these autism-specific language difficulties are also present in their non-autistic first-degree relatives. One possible neural mechanism associated with variability in language functioning is an alteration of cortical gamma-band oscillations, hypothesized to reflect the balance of neural excitation and inhibition.
METHODS We used high-density 128-channel electroencephalography (EEG) to record brain responses to speech stimuli in a large, sex-balanced sample: 125 youth with ASD, 121 typically developing (TD) youth, and 40 unaffected siblings (US) of youth with ASD. Language skills were assessed with the Clinical Evaluation of Language Fundamentals.
RESULTS First, during speech processing, gamma power was significantly elevated in ASD participants compared to TD controls. Second, across all youth, higher gamma power was associated with lower language skills. Finally, the US group demonstrated an intermediate profile in both language and gamma power, with nonverbal IQ mediating the relationship between gamma power and language skills.
LIMITATIONS We focused on only one of the possible neural contributors to variability in language functioning. The US group was also smaller than the ASD and TD groups. Finally, due to a timing issue in the EEG system, we provide only non-phase-locked analyses.
CONCLUSIONS Autistic youth showed elevated gamma power, suggesting increased neural excitation in response to speech stimuli, and this elevated gamma power was related to lower language skills. The US group showed an intermediate pattern of gamma activity, suggesting that the broader autism phenotype extends to neural profiles.
Affiliation(s)
- Vardan Arutiunian
  - Center for Child Health, Behavior and Development, Seattle Children's Research Institute, 1920 Terry Ave., Seattle, WA, 98101, USA
- Megha Santhosh
  - Center for Child Health, Behavior and Development, Seattle Children's Research Institute, 1920 Terry Ave., Seattle, WA, 98101, USA
- Emily Neuhaus
  - Center for Child Health, Behavior and Development, Seattle Children's Research Institute, 1920 Terry Ave., Seattle, WA, 98101, USA
  - Department of Psychiatry and Behavioral Science, University of Washington, Seattle, WA, USA
  - Institute of Human Development and Disability, University of Washington, Seattle, WA, USA
- Heather Borland
  - Center for Child Health, Behavior and Development, Seattle Children's Research Institute, 1920 Terry Ave., Seattle, WA, 98101, USA
- Chris Tompkins
  - Department of Psychiatry and Behavioral Science, University of Washington, Seattle, WA, USA
  - Institute of Human Development and Disability, University of Washington, Seattle, WA, USA
- Raphael A Bernier
  - Department of Psychiatry and Behavioral Science, University of Washington, Seattle, WA, USA
- Susan Y Bookheimer
  - Center for Autism Research and Treatment, Semel Institute for Neuroscience and Human Behavior, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, USA
  - Department of Psychiatry and Biobehavioral Sciences, University of California Los Angeles, Los Angeles, CA, USA
- Mirella Dapretto
  - Center for Autism Research and Treatment, Semel Institute for Neuroscience and Human Behavior, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, USA
  - Department of Psychiatry and Biobehavioral Sciences, University of California Los Angeles, Los Angeles, CA, USA
- Abha R Gupta
  - Department of Pediatrics, Yale School of Medicine, New Haven, CT, USA
  - Yale Child Study Center, Yale School of Medicine, New Haven, CT, USA
  - Department of Neuroscience, Yale School of Medicine, New Haven, CT, USA
- Allison Jack
  - Department of Psychology, George Mason University, Fairfax, VA, USA
- Shafali Jeste
  - Department of Neurology, Children's Hospital of Los Angeles, Los Angeles, CA, USA
- Adam Naples
  - Yale Child Study Center, Yale School of Medicine, New Haven, CT, USA
- John D Van Horn
  - School of Data Science, University of Virginia, Charlottesville, VA, USA
- Kevin A Pelphrey
  - Department of Neurology, School of Medicine, University of Virginia, Charlottesville, VA, USA
- Sara Jane Webb
  - Center for Child Health, Behavior and Development, Seattle Children's Research Institute, 1920 Terry Ave., Seattle, WA, 98101, USA
  - Department of Psychiatry and Behavioral Science, University of Washington, Seattle, WA, USA
  - Institute of Human Development and Disability, University of Washington, Seattle, WA, USA
2
Bayat M, Boostani R, Sabeti M, Yadegari F, Pirmoradi M, Rao KS, Nami M. Source Localization and Spectrum Analyzing of EEG in Stuttering State upon Dysfluent Utterances. Clin EEG Neurosci 2024; 55:371-383. PMID: 36627837; DOI: 10.1177/15500594221150638
Abstract
Purpose: The present study investigated power spectral dynamics in the stuttering state of adults who stutter (AWS) using quantitative electroencephalography (qEEG). Method: A 64-channel electroencephalography (EEG) setup was used for data acquisition from 20 AWS. Since speech, and especially stuttering, introduces significant noise into the EEG, two conditions were considered: speech preparation (SP) and imagined speech (IS). EEG signals were decomposed into six frequency bands, and the corresponding sources were localized using the standardized low-resolution electromagnetic tomography (sLORETA) tool in both fluent and dysfluent states. Results: Significant differences were noted after analyzing the time-locked EEG signals in fluent and dysfluent utterances. Consistent with previous studies, poor alpha and beta suppression in the SP and IS conditions was localized to the left frontotemporal areas in the dysfluent state; this was partly true for the right frontal regions as well. In the theta range, dysfluency was concurrent with increased activation in the left and right motor areas. Increased delta power in the left and right motor areas, as well as increased beta2 power over left parietal regions, were notable EEG features during fluent speech. Conclusion: Based on the present findings and those of earlier studies, explaining the neural circuitry involved in stuttering probably requires examination of the entire frequency spectrum involved in speech.
Affiliation(s)
- Masoumeh Bayat
  - Department of Neuroscience, School of Advanced Medical Sciences and Technologies, Shiraz University of Medical Sciences, Shiraz, Iran
- Reza Boostani
  - Department of Computer Sciences and Engineering, School of Engineering, Shiraz University, Shiraz, Iran
- Malihe Sabeti
  - Department of Computer Engineering, Islamic Azad University, North Tehran Branch, Tehran, Iran
- Fariba Yadegari
  - Department of Speech and Language Pathology, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran
- Mohammadreza Pirmoradi
  - Department of Clinical Psychology, School of Behavioral Sciences and Mental Health, Iran University of Medical Sciences, Tehran, Iran
- K S Rao
  - Neuroscience Center, INDICASAT-AIP, Panama City, Republic of Panama
- Mohammad Nami
  - Department of Neuroscience, School of Advanced Medical Sciences and Technologies, Shiraz University of Medical Sciences, Shiraz, Iran
  - Neuroscience Center, INDICASAT-AIP, Panama City, Republic of Panama
  - Dana Brain Health Institute, Iranian Neuroscience Society-Fars Chapter, Shiraz, Iran
  - Academy of Health, Senses Cultural Foundation, Sacramento, CA, USA
3
Wei Q, Lin S, Xu S, Zou J, Chen J, Kang M, Hu J, Liao X, Wei H, Ling Q, Shao Y, Yu Y. Graph theoretical analysis and independent component analysis of diabetic optic neuropathy: A resting-state functional magnetic resonance imaging study. CNS Neurosci Ther 2024; 30:e14579. PMID: 38497532; PMCID: PMC10945884; DOI: 10.1111/cns.14579
Abstract
AIMS This study aimed to investigate resting-state functional connectivity and the topologic characteristics of brain networks in patients with diabetic optic neuropathy (DON). METHODS Resting-state functional magnetic resonance imaging scans were performed on 23 patients with DON and 41 healthy control (HC) subjects. We used independent component analysis and graph theoretical analysis to determine functional network connectivity (FNC) and the topologic properties of brain networks. RESULTS Compared with HCs, patients with DON showed altered global network characteristics. At the nodal level, the DON group had lower nodal degree in the thalamus and insula, and higher nodal degree in the right rolandic operculum, right postcentral gyrus, and right superior temporal gyrus. In the internetwork comparison, DON patients showed significantly increased FNC between the left frontoparietal network (FPN-L) and the ventral attention network (VAN). Additionally, in the intranetwork comparison, connectivity between the left medial superior frontal gyrus (MSFG) of the default mode network (DMN) and the left putamen of the auditory network was decreased in the DON group. CONCLUSION Patients with DON showed altered node properties and connectivity in the DMN, auditory network, FPN-L, and VAN. These results provide evidence of the involvement of specific brain networks in the pathophysiology of DON.
Affiliation(s)
- Qian Wei
  - Department of Endocrine and Metabolic, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Jiangxi Clinical Research Center for Endocrine and Metabolic Disease, Jiangxi Branch of National Clinical Research Center for Metabolic Disease, Nanchang, Jiangxi, China
  - Queen Mary School, Nanchang University, Nanchang, Jiangxi, China
- Si-Min Lin
  - Department of Radiology, Xiamen Cardiovascular Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, Fujian, China
- San-Hua Xu
  - Department of Ophthalmology, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China
- Jie Zou
  - Department of Ophthalmology, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China
- Jun Chen
  - Department of Ophthalmology, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China
- Min Kang
  - Department of Ophthalmology, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China
- Jin-Yu Hu
  - Department of Ophthalmology, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China
- Xu-Lin Liao
  - Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Hong Wei
  - Department of Ophthalmology, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China
- Qian Ling
  - Department of Ophthalmology, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China
- Yi Shao
  - Department of Ophthalmology, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China
  - Department of Ophthalmology, Eye & ENT Hospital of Fudan University, Shanghai, China
- Yao Yu
  - Department of Endocrine and Metabolic, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Jiangxi Clinical Research Center for Endocrine and Metabolic Disease, Jiangxi Branch of National Clinical Research Center for Metabolic Disease, Nanchang, Jiangxi, China
4
Zoefel B, Kösem A. Neural tracking of continuous acoustics: properties, speech-specificity and open questions. Eur J Neurosci 2024; 59:394-414. PMID: 38151889; DOI: 10.1111/ejn.16221
Abstract
Human speech is a particularly relevant acoustic stimulus for our species, due to its role in information transmission during communication. Speech is inherently a dynamic signal, and a recent line of research has focused on neural activity that follows the temporal structure of speech. We review findings that characterise neural dynamics in the processing of continuous acoustics and that allow us to compare these dynamics with temporal aspects of human speech. We highlight properties and constraints shared by neural and speech dynamics, suggesting that auditory neural systems are optimised to process human speech. We then discuss the speech-specificity of neural dynamics and their potential mechanistic origins, and summarise open questions in the field.
Affiliation(s)
- Benedikt Zoefel
  - Centre de Recherche Cerveau et Cognition (CerCo), CNRS UMR 5549, Toulouse, France
  - Université de Toulouse III Paul Sabatier, Toulouse, France
- Anne Kösem
  - Lyon Neuroscience Research Center (CRNL), INSERM U1028, Bron, France
5
Assaneo MF, Orpella J. Rhythms in Speech. Adv Exp Med Biol 2024; 1455:257-274. PMID: 38918356; DOI: 10.1007/978-3-031-60183-5_14
Abstract
Speech can be defined as the human ability to communicate through a sequence of vocal sounds. Consequently, speech requires an emitter (the speaker) capable of generating the acoustic signal and a receiver (the listener) able to successfully decode the sounds produced by the emitter (i.e., the acoustic signal). Time plays a central role at both ends of this interaction. On the one hand, speech production requires precise and rapid coordination, typically within the order of milliseconds, of the upper vocal tract articulators (i.e., tongue, jaw, lips, and velum), their composite movements, and the activation of the vocal folds. On the other hand, the generated acoustic signal unfolds in time, carrying information at different timescales. This information must be parsed and integrated by the receiver for the correct transmission of meaning. This chapter describes the temporal patterns that characterize the speech signal and reviews research that explores the neural mechanisms underlying the generation of these patterns and the role they play in speech comprehension.
Affiliation(s)
- M Florencia Assaneo
  - Instituto de Neurobiología, Universidad Autónoma de México, Santiago de Querétaro, Mexico
- Joan Orpella
  - Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA
6
Lasnick OHM, Hancock R, Hoeft F. Left-dominance for resting-state temporal low-gamma power in children with impaired word-decoding and without comorbid ADHD. PLoS One 2023; 18:e0292330. PMID: 38157354; PMCID: PMC10756518; DOI: 10.1371/journal.pone.0292330
Abstract
One theory of the origins of reading disorders (i.e., dyslexia) posits a language network that cannot effectively 'entrain' to speech, with cascading effects on the development of phonological skills. Low-gamma (low-γ, 30-45 Hz) neural activity, particularly in the left hemisphere, is thought to correspond to tracking at phonemic rates in speech. The main goals of the current study were to investigate temporal low-γ band-power during rest in a sample of children and adolescents with and without reading disorder (RD). Using a Bayesian statistical approach to analyze the power spectral density of EEG data, we examined whether (1) resting-state temporal low-γ power was attenuated in the left temporal region in RD; (2) low-γ power covaried with individual reading performance; and (3) low-γ temporal lateralization was atypical in RD. Contrary to our expectations, results did not support the hypothesized effects of RD status and poor decoding ability on left-hemisphere low-γ power or lateralization; post-hoc tests revealed that the lack of atypicality in the RD group was not due to the inclusion of those with comorbid attentional deficits. However, post-hoc tests also revealed a specific left-dominance for low-γ rhythms in children with reading deficits only, once participants with comorbid attentional deficits were excluded. We also observed an inverse relationship between decoding and left-lateralization in the controls, such that those with better decoding skills were less likely to show left-lateralization. We discuss these unexpected findings in the context of prior theoretical frameworks on temporal sampling. These results may reflect the importance of real-time language processing in evoking gamma rhythms in the phonemic range during childhood and adolescence.
Affiliation(s)
- Oliver H. M. Lasnick
  - Department of Psychological Sciences, University of Connecticut, Storrs, Connecticut, United States of America
- Roeland Hancock
  - Wu Tsai Institute, Yale University, New Haven, Connecticut, United States of America
- Fumiko Hoeft
  - Department of Psychological Sciences, University of Connecticut, Storrs, Connecticut, United States of America
7
Ortiz-Barajas MC, Guevara R, Gervain J. Neural oscillations and speech processing at birth. iScience 2023; 26:108187. PMID: 37965146; PMCID: PMC10641252; DOI: 10.1016/j.isci.2023.108187
Abstract
Are neural oscillations biologically endowed building blocks of the neural architecture for speech processing from birth, or do they require experience to emerge? In adults, delta, theta, and low-gamma oscillations support the simultaneous processing of phrasal, syllabic, and phonemic units in the speech signal, respectively. Using electroencephalography to investigate neural oscillations in the newborn brain, we reveal that delta and theta oscillations differ for rhythmically different languages, suggesting that these bands underlie newborns' universal ability to discriminate languages on the basis of rhythm. Additionally, higher theta activity during post-stimulus rest as compared to pre-stimulus rest suggests that stimulation after-effects are present from birth.
Affiliation(s)
- Maria Clemencia Ortiz-Barajas
  - Integrative Neuroscience and Cognition Center, CNRS & Université Paris Cité, 45 rue des Saints-Pères, 75006 Paris, France
- Ramón Guevara
  - Department of Physics and Astronomy, University of Padua, Via Marzolo 8, 35131 Padua, Italy
- Judit Gervain
  - Integrative Neuroscience and Cognition Center, CNRS & Université Paris Cité, 45 rue des Saints-Pères, 75006 Paris, France
  - Department of Developmental and Social Psychology, University of Padua, Via Venezia 8, 35131 Padua, Italy
8
Zhao L, Wang X. Frontal cortex activity during the production of diverse social communication calls in marmoset monkeys. Nat Commun 2023; 14:6634. PMID: 37857618; PMCID: PMC10587070; DOI: 10.1038/s41467-023-42052-5
Abstract
Vocal communication is essential for social behaviors in humans and non-human primates. While the frontal cortex is crucial to human speech production, its role in vocal production in non-human primates has long been questioned. It is unclear whether activities in the frontal cortex represent diverse vocal signals used in non-human primate communication. Here we studied single neuron activities and local field potentials (LFP) in the frontal cortex of male marmoset monkeys while the animal engaged in vocal exchanges with conspecifics in a social environment. We found that both single neuron activities and LFP were modulated by the production of each of the four major call types. Moreover, neural activities showed distinct patterns for different call types and theta-band LFP oscillations showed phase-locking to the phrases of twitter calls, suggesting a neural representation of vocalization features. Our results suggest important functions of the marmoset frontal cortex in supporting the production of diverse vocalizations in communication.
Affiliation(s)
- Lingyun Zhao
  - Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, The Johns Hopkins University School of Medicine, Baltimore, MD, 21205, USA
  - Department of Neurological Surgery, University of California, San Francisco, CA, 94158, USA
- Xiaoqin Wang
  - Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, The Johns Hopkins University School of Medicine, Baltimore, MD, 21205, USA
9
Moon J, Chau T. Online Ternary Classification of Covert Speech by Leveraging the Passive Perception of Speech. Int J Neural Syst 2023; 33:2350048. PMID: 37522623; DOI: 10.1142/s012906572350048x
Abstract
Brain-computer interfaces (BCIs) provide communicative alternatives to those without functional speech. Covert speech (CS)-based BCIs enable communication simply by thinking of words and thus have intuitive appeal. However, an elusive barrier to their clinical translation is the collection of voluminous examples of high-quality CS signals, as iteratively rehearsing words for long durations is mentally fatiguing. Research on CS and speech perception (SP) identifies common spatiotemporal patterns in their respective electroencephalographic (EEG) signals, pointing towards shared encoding mechanisms. The goal of this study was to investigate whether a model that leverages the signal similarities between SP and CS can differentiate speech-related EEG signals online. Ten participants completed a dyadic protocol in which, on each trial, they listened to a randomly selected word and then mentally rehearsed it. In the offline sessions, eight words were presented to participants. For the subsequent online sessions, the two most distinct words (most separable in terms of their EEG signals) were chosen to form a ternary classification problem (two words and rest). The model comprised a functional mapping derived from SP and CS signals of the same speech token, with features extracted via a Riemannian approach. An average ternary online accuracy of 75.3% (60% chance level) was achieved across participants, with individual accuracies as high as 93%. Moreover, we observed that the signal-to-noise ratio (SNR) of CS signals was enhanced by perception-covert modeling according to the level of high-frequency (γ-band) correspondence between CS and SP. These findings may lead to less burdensome data collection for training speech BCIs, which could eventually enhance the rate at which the vocabulary can grow.
Affiliation(s)
- Jae Moon
  - Institute of Biomedical Engineering, University of Toronto, Holland Bloorview Kid's Rehabilitation Hospital, Toronto, Ontario, Canada
- Tom Chau
  - Institute of Biomedical Engineering, University of Toronto, Holland Bloorview Kid's Rehabilitation Hospital, Toronto, Ontario, Canada
10
Elmer S, Kurthen I, Meyer M, Giroud N. A multidimensional characterization of the neurocognitive architecture underlying age-related temporal speech processing. Neuroimage 2023; 278:120285. PMID: 37481009; DOI: 10.1016/j.neuroimage.2023.120285
Abstract
Healthy aging is often associated with speech comprehension difficulties in everyday life situations despite pure-tone hearing thresholds in the normative range. Against this background, we used a multidimensional approach to assess the functional and structural neural correlates underlying age-related temporal speech processing while controlling for pure-tone hearing acuity. Accordingly, we combined structural magnetic resonance imaging and electroencephalography, and collected behavioral data while younger and older adults completed a phonetic categorization and discrimination task with consonant-vowel syllables varying along a voice-onset time continuum. The behavioral results confirmed age-related temporal speech processing singularities, reflected in a shift of the boundary of the psychometric categorization function: older adults perceived more syllables characterized by a short voice-onset time as /ta/ compared to younger adults. Furthermore, despite the absence of any between-group differences in phonetic discrimination abilities, older adults demonstrated longer N100/P200 latencies as well as increased P200 amplitudes while processing the consonant-vowel syllables varying in voice-onset time. Finally, older adults also exhibited a divergent gray matter infrastructure in bilateral auditory-related and frontal brain regions, manifested in reduced cortical thickness and surface area. Notably, in the younger but not the older adults, cortical surface area in these two gross anatomical clusters correlated with the categorization of consonant-vowel syllables characterized by a short voice-onset time, suggesting the existence of a critical gray matter threshold that is crucial for consistent mapping of phonetic categories varying along the temporal dimension. Taken together, our results highlight the multifaceted dimensions of age-related temporal speech processing characteristics and pave the way toward a better understanding of the relationships between hearing, speech, and the brain in older age.
Affiliation(s)
- Stefan Elmer
  - Department of Computational Linguistics, Computational Neuroscience of Speech & Hearing, University of Zurich, Zurich, Switzerland; Competence center Language & Medicine, University of Zurich, Switzerland
- Ira Kurthen
  - Department of Computational Linguistics, Computational Neuroscience of Speech & Hearing, University of Zurich, Zurich, Switzerland
- Martin Meyer
  - Department of Comparative Language Science, University of Zurich, Zurich, Switzerland; Center for Neuroscience Zurich, University and ETH of Zurich, Zurich, Switzerland; Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland; Cognitive Psychology Unit, Alpen-Adria University, Klagenfurt, Austria
- Nathalie Giroud
  - Department of Computational Linguistics, Computational Neuroscience of Speech & Hearing, University of Zurich, Zurich, Switzerland; Center for Neuroscience Zurich, University and ETH of Zurich, Zurich, Switzerland; Competence center Language & Medicine, University of Zurich, Switzerland
11
He D, Buder EH, Bidelman GM. Effects of Syllable Rate on Neuro-Behavioral Synchronization Across Modalities: Brain Oscillations and Speech Productions. Neurobiol Lang (Camb) 2023; 4:344-360. PMID: 37229510; PMCID: PMC10205147; DOI: 10.1162/nol_a_00102
Abstract
Considerable work suggests that the dominant syllable rhythm of the acoustic envelope is remarkably similar across languages (∼4-5 Hz) and that oscillatory brain activity tracks these quasiperiodic rhythms to facilitate speech processing. However, whether this fundamental periodicity represents a common organizing principle in both the auditory and motor systems involved in speech has not been explicitly tested. To evaluate relations between entrainment in the perceptual and production domains, we measured (i) individuals' neural tracking (EEG) of heard speech trains and (ii) their simultaneous and non-simultaneous productions synchronized to syllable rates between 2.5 and 8.5 Hz. Productions made without concurrent auditory presentation isolated motor speech functions more purely. We show that neural synchronization flexibly adapts to the heard stimuli in a rate-dependent manner, but that phase locking is boosted near ∼4.5 Hz, the purported dominant rate of speech. Cued speech productions (which recruit sensorimotor interaction) were optimal between 2.5 and 4.5 Hz, suggesting a low-frequency constraint on motor output and/or sensorimotor integration. In contrast, "pure" motor productions (without concurrent sound cues) were most precisely generated at rates of 4.5 and 5.5 Hz, paralleling the neuroacoustic data. Correlations further revealed strong links between receptive (EEG) and production synchronization abilities; individuals with stronger auditory-perceptual entrainment matched speech rhythms more precisely in their motor output. Together, our findings support an intimate link between exogenous and endogenous rhythmic processing that is optimized at 4-5 Hz in both the auditory and motor systems. Parallels across modalities could result from dynamics of the speech motor system coupled with experience-dependent tuning of the perceptual system via the sensorimotor interface.
Affiliation(s)
- Deling He
  - School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA
  - Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA
- Eugene H. Buder
  - School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA
  - Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA
- Gavin M. Bidelman
  - Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA
  - Program in Neuroscience, Indiana University, Bloomington, IN, USA
| |
Collapse
|
12
|
Lubinus C, Keitel A, Obleser J, Poeppel D, Rimmele JM. Explaining flexible continuous speech comprehension from individual motor rhythms. Proc Biol Sci 2023; 290:20222410. [PMID: 36855868] [PMCID: PMC9975658] [DOI: 10.1098/rspb.2022.2410]
Abstract
When speech is too fast, tracking of the acoustic signal along the auditory pathway deteriorates, leading to suboptimal speech segmentation and decoding of speech information. Thus, speech comprehension is limited by the temporal constraints of the auditory system. Here we ask whether individual differences in auditory-motor coupling strength in part shape these temporal constraints. In two behavioural experiments, we characterize individual differences in the comprehension of naturalistic speech as a function of the individual synchronization between the auditory and motor systems and the preferred frequencies of those systems. As expected, speech comprehension declined at higher speech rates. Importantly, however, both higher auditory-motor synchronization and higher spontaneous speech motor production rates were predictive of better speech-comprehension performance. Furthermore, performance increased with higher working memory capacity (digit span) and higher linguistic, model-based sentence predictability, particularly so at higher speech rates and for individuals with high auditory-motor synchronization. The data provide evidence for a model of speech comprehension in which individual flexibility of not only the motor system but also of auditory-motor synchronization may play a modulatory role.
Affiliation(s)
- Christina Lubinus
- Department of Neuroscience and Department of Cognitive Neuropsychology, Max-Planck-Institute for Empirical Aesthetics, 60322 Frankfurt am Main, Germany
- Anne Keitel
- Psychology, University of Dundee, Dundee DD1 4HN, UK
- Jonas Obleser
- Department of Psychology, University of Lübeck, Lübeck, Germany
- Center for Brain, Behavior, and Metabolism, University of Lübeck, Lübeck, Germany
- David Poeppel
- Department of Psychology, New York University, New York, NY, USA
- Max Planck NYU Center for Language, Music, and Emotion, New York, NY, USA
- Ernst Strüngmann Institute for Neuroscience (in Cooperation with Max Planck Society), Frankfurt am Main, Germany
- Johanna M. Rimmele
- Department of Neuroscience and Department of Cognitive Neuropsychology, Max-Planck-Institute for Empirical Aesthetics, 60322 Frankfurt am Main, Germany
- Max Planck NYU Center for Language, Music, and Emotion, New York, NY, USA

13
Arutiunian V, Arcara G, Buyanova I, Davydova E, Pereverzeva D, Sorokin A, Tyushkevich S, Mamokhina U, Danilina K, Dragoy O. Neuromagnetic 40 Hz Auditory Steady-State Response in the left auditory cortex is related to language comprehension in children with Autism Spectrum Disorder. Prog Neuropsychopharmacol Biol Psychiatry 2023; 122:110690. [PMID: 36470421] [DOI: 10.1016/j.pnpbp.2022.110690]
Abstract
Language impairment is comorbid in most children with Autism Spectrum Disorder (ASD), but its neural mechanisms are still poorly understood. Some studies hypothesize that atypical low-level sensory perception in the auditory cortex accounts for the abnormal language development in these children. One potential non-invasive measure of such low-level perception is cortical gamma-band oscillations registered with magnetoencephalography (MEG), and the 40 Hz Auditory Steady-State Response (40 Hz ASSR) is a reliable paradigm for eliciting an auditory gamma response. Although there is research using 40 Hz ASSR in children with and without ASD, nothing is known about the relationship between this auditory response in children with ASD and their language abilities measured directly with formal assessment. In the present study, we used MEG and individual brain models to investigate 40 Hz ASSR in primary-school-aged children with and without ASD, and to assess how the strength of the auditory response relates to the language abilities of children with ASD, their non-verbal IQ, and social functioning. A total of 40 children were included in the study. The results demonstrated that 40 Hz ASSR was reduced in the right auditory cortex in children with ASD compared to typically developing controls. Importantly, our study provides the first evidence of an association between 40 Hz ASSR in the language-dominant left auditory cortex and language comprehension in children with ASD. This link was domain-specific, as the other brain-behavior correlations were non-significant.
Affiliation(s)
- Irina Buyanova
- Center for Language and Brain, HSE University, Moscow, Russia
- Elizaveta Davydova
- Federal Resource Center for ASD, Moscow State University of Psychology and Education, Moscow, Russia; Chair of Differential Psychology and Psychophysiology, Moscow State University of Psychology and Education, Moscow, Russia
- Darya Pereverzeva
- Federal Resource Center for ASD, Moscow State University of Psychology and Education, Moscow, Russia
- Alexander Sorokin
- Federal Resource Center for ASD, Moscow State University of Psychology and Education, Moscow, Russia; Haskins Laboratories, New Haven, CT, United States of America
- Svetlana Tyushkevich
- Federal Resource Center for ASD, Moscow State University of Psychology and Education, Moscow, Russia
- Uliana Mamokhina
- Federal Resource Center for ASD, Moscow State University of Psychology and Education, Moscow, Russia
- Kamilla Danilina
- Federal Resource Center for ASD, Moscow State University of Psychology and Education, Moscow, Russia
- Olga Dragoy
- Center for Language and Brain, HSE University, Moscow, Russia; Institute of Linguistics, Russian Academy of Sciences, Moscow, Russia

14
Ladányi E, Novakovic M, Boorom OA, Aaron AS, Scartozzi AC, Gustavson DE, Nitin R, Bamikole PO, Vaughan C, Fromboluti EK, Schuele CM, Camarata SM, McAuley JD, Gordon RL. Using Motor Tempi to Understand Rhythm and Grammatical Skills in Developmental Language Disorder and Typical Language Development. Neurobiol Lang 2023; 4:1-28. [PMID: 36875176] [PMCID: PMC9979588] [DOI: 10.1162/nol_a_00082]
Abstract
Children with developmental language disorder (DLD) show relative weaknesses on rhythm tasks beyond their characteristic linguistic impairments. The current study compares preferred tempo and the width of an entrainment region for 5- to 7-year-old typically developing (TD) children and children with DLD and considers the associations with rhythm aptitude and expressive grammar skills in the two populations. Preferred tempo was measured with a spontaneous motor tempo task (tapping at a comfortable speed), and the width (range) of an entrainment region was measured as the difference between the upper (slow) and lower (fast) limits of tapping a rhythm, normalized by an individual's spontaneous motor tempo. Data from N = 16 children with DLD and N = 114 TD children showed that whereas entrainment-region width did not differ across the two groups, slowest motor tempo, the determinant of the upper (slow) limit of the entrainment region, was at a faster tempo in children with DLD than in TD children. In other words, the DLD group could not pace their slow tapping as slowly as the TD group. Entrainment-region width was positively associated with rhythm aptitude and receptive grammar even after taking into account potential confounding factors, whereas expressive grammar did not show an association with any of the tapping measures. Preferred tempo was not associated with any study variables after including covariates in the analyses. These results motivate future neuroscientific studies of low-frequency neural oscillatory mechanisms as the potential neural correlates of entrainment-region width and their associations with musical rhythm and spoken language processing in children with typical and atypical language development.
Affiliation(s)
- Enikő Ladányi
- Department of Otolaryngology—Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN
- Department of Linguistics, University of Potsdam, Potsdam, Germany
- Michaela Novakovic
- Department of Pharmacology, Northwestern University Feinberg School of Medicine, Chicago, IL
- Olivia A. Boorom
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- Department of Speech-Language-Hearing: Sciences and Disorders, University of Kansas, Lawrence, KS
- Allison S. Aaron
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA
- Alyssa C. Scartozzi
- Department of Otolaryngology—Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN
- Vanderbilt Genetics Institute, Vanderbilt University, Nashville, TN
- Daniel E. Gustavson
- Institute for Behavioral Genetics, University of Colorado Boulder, Boulder, CO
- Rachana Nitin
- Department of Otolaryngology—Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN
- Peter O. Bamikole
- Department of Anesthesiology and Perioperative Medicine, Oregon Health & Science University, Portland, OR
- Chloe Vaughan
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- C. Melanie Schuele
- Department of Hearing and Speech Sciences, Vanderbilt University School of Medicine, Nashville, TN
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN
- Stephen M. Camarata
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN
- J. Devin McAuley
- Department of Psychology, Michigan State University, East Lansing, MI
- Reyna L. Gordon
- Department of Otolaryngology—Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN
- Vanderbilt Genetics Institute, Vanderbilt University, Nashville, TN
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN

15
Erickson MA, Lopez-Calderon J, Robinson B, Gold JM, Luck SJ. Gamma-band entrainment abnormalities in schizophrenia: Modality-specific or cortex-wide impairment? J Psychopathol Clin Sci 2022; 131:895-905. [PMID: 36326630] [PMCID: PMC9641553] [DOI: 10.1037/abn0000778]
Abstract
A growing body of literature suggests that cognitive impairment in people with schizophrenia (PSZ) results from disrupted cortical excitatory/inhibitory (E-I) balance, which may be linked to gamma entrainment and can be measured noninvasively using electroencephalography (EEG). However, the degree to which these entrainment abnormalities covary within subjects across sensory modalities is not yet known. Furthermore, the degree to which cross-modal gamma entrainment reflects variation in biological processes associated with cognitive performance remains unclear. We used EEG to measure entrainment to repetitive auditory and visual stimulation at beta (20 Hz) and gamma (30 and 40 Hz) frequencies in PSZ (n = 78) and healthy control subjects (HCS; n = 80). Three indices were measured for each frequency and modality: event-related spectral perturbation (ERSP), intertrial coherence (ITC), and phase-lag angle (PLA). Cognition and symptom severity were also assessed. We found little evidence that gamma entrainment covaried across sensory modalities. PSZ exhibited a modest correlation between modalities at 40 Hz for ERSP and ITC measures (r = 0.23-0.24); however, no other significant correlations between modalities emerged for either HCS or PSZ. Both univariate and multivariate analyses revealed that (a) the pattern of entrainment abnormalities in PSZ differed across modalities, and (b) modality rather than frequency band was the main source of variance. Finally, we observed a significant association between cognition and gamma entrainment in the auditory domain only in HCS. Gamma-band EEG entrainment therefore does not reflect a unitary transcortical mechanism but is instead modality specific. To the extent that entrainment reflects the integrity of cortical E-I balance, the deficits observed in PSZ appear to be modality specific and not consistently associated with cognitive impairment.
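The intertrial coherence (ITC) and evoked-power measures named in this abstract can be sketched in a few lines. This is a generic illustration, not the authors' analysis code: the epoch length, sampling rate, trial counts, and single-FFT-bin estimate are assumptions for a toy 40 Hz example.

```python
import numpy as np

def itc_and_power(epochs, fs, freq):
    """Intertrial coherence and mean power at one frequency.

    epochs: (n_trials, n_samples) array of single-trial responses.
    ITC is the length of the mean unit phase vector across trials:
    1 = identical phase on every trial, 0 = random phase.
    """
    n = epochs.shape[1]
    k = int(round(freq * n / fs))            # FFT bin closest to freq
    spec = np.fft.rfft(epochs, axis=1)[:, k]
    itc = np.abs(np.mean(spec / np.abs(spec)))
    power = np.mean(np.abs(spec) ** 2)
    return itc, power

rng = np.random.default_rng(1)
fs, n, trials = 250, 500, 60                 # sixty 2-s epochs
t = np.arange(n) / fs
# Entrained trials: 40 Hz response with a consistent phase, plus noise
entrained = np.cos(2 * np.pi * 40 * t) + rng.standard_normal((trials, n))
# Non-entrained trials: same 40 Hz power but a random phase per trial
phases = rng.uniform(0, 2 * np.pi, trials)[:, None]
jittered = np.cos(2 * np.pi * 40 * t + phases) + rng.standard_normal((trials, n))
itc_hi, _ = itc_and_power(entrained, fs, 40)
itc_lo, _ = itc_and_power(jittered, fs, 40)
print(itc_hi, itc_lo)
```

The two conditions have identical spectral power at 40 Hz; only the phase consistency differs, which is why studies such as this one report ERSP and ITC as separate indices.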
Affiliation(s)
- Molly A. Erickson
- University of Chicago Department of Psychiatry & Behavioral Neuroscience
- Ben Robinson
- Maryland Psychiatric Research Center, University of Maryland
- James M. Gold
- Maryland Psychiatric Research Center, University of Maryland
- Steven J. Luck
- Center for Mind & Brain and Department of Psychology, University of California, Davis

16
Bai F, Meyer AS, Martin AE. Neural dynamics differentially encode phrases and sentences during spoken language comprehension. PLoS Biol 2022; 20:e3001713. [PMID: 35834569] [PMCID: PMC9282610] [DOI: 10.1371/journal.pbio.3001713]
Abstract
Human language stands out in the natural world as a biological signal that uses a structured system to combine the meanings of small linguistic units (e.g., words) into larger constituents (e.g., phrases and sentences). However, the physical dynamics of speech (or sign) do not stand in a one-to-one relationship with the meanings listeners perceive. Instead, listeners infer meaning based on their knowledge of the language. The neural readouts of the perceptual and cognitive processes underlying these inferences are still poorly understood. In the present study, we used scalp electroencephalography (EEG) to compare the neural response to phrases (e.g., the red vase) and sentences (e.g., the vase is red), which were close in semantic meaning and had been synthesized to be physically indistinguishable. Differences in structure were well captured in the reorganization of neural phase responses in delta (approximately <2 Hz) and theta bands (approximately 2 to 7 Hz), and in power and power connectivity changes in the alpha band (approximately 7.5 to 13.5 Hz). Consistent with predictions from a computational model, sentences showed more power, more power connectivity, and more phase synchronization than phrases did. Theta-gamma phase-amplitude coupling occurred, but did not differ between the syntactic structures. Spectral-temporal response function (STRF) modeling revealed different encoding states for phrases and sentences, over and above the acoustically driven neural response. Our findings provide a comprehensive description of how the brain encodes and separates linguistic structures in the dynamics of neural responses. They imply that phase synchronization and strength of connectivity are readouts for the constituent structure of language. The results provide a novel basis for future neurophysiological research on linguistic structure representation in the brain, and, together with our simulations, support time-based binding as a mechanism of structure encoding in neural dynamics.
Affiliation(s)
- Fan Bai
- Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, the Netherlands
- Antje S. Meyer
- Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, the Netherlands
- Andrea E. Martin
- Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, the Netherlands

17
Vanden Bosch der Nederlanden CM, Joanisse MF, Grahn JA, Snijders TM, Schoffelen JM. Familiarity modulates neural tracking of sung and spoken utterances. Neuroimage 2022; 252:119049. [PMID: 35248707] [DOI: 10.1016/j.neuroimage.2022.119049]
Abstract
Music is often described, in the laboratory and in the classroom, as a beneficial tool for memory encoding and retention, with a particularly strong effect when words are sung to familiar compared to unfamiliar melodies. However, the neural mechanisms underlying this memory benefit, especially the benefits related to familiar music, are not well understood. The current study examined whether neural tracking of the slow syllable rhythms of speech and song is modulated by melody familiarity. Participants became familiar with twelve novel melodies over four days prior to MEG testing. Neural tracking of the same utterances spoken and sung revealed greater cerebro-acoustic phase coherence for sung compared to spoken utterances, but showed no effect of familiar melody when stimuli were grouped by their assigned (trained) familiarity. However, when participants' subjective ratings of perceived familiarity were used to group stimuli, a large effect of familiarity was observed. This effect was not specific to song, as it was observed in both sung and spoken utterances. Exploratory analyses revealed some in-session learning of unfamiliar and spoken utterances, with increased neural tracking for untrained stimuli by the end of the MEG testing session. Our results indicate that top-down factors like familiarity are strong modulators of neural tracking for music and language. Participants' neural tracking was related to their perception of familiarity, which was likely driven by a combination of effects from repeated listening, stimulus-specific melodic simplicity, and individual differences. Beyond the acoustic features of music alone, top-down factors built into the music listening experience, like repetition and familiarity, play a large role in the way we attend to and encode information presented in a musical context.
Affiliation(s)
- Marc F Joanisse
- The Brain and Mind Institute, The University of Western Ontario, London, Ontario, Canada; Psychology Department, The University of Western Ontario, London, Ontario, Canada
- Jessica A Grahn
- The Brain and Mind Institute, The University of Western Ontario, London, Ontario, Canada; Psychology Department, The University of Western Ontario, London, Ontario, Canada
- Tineke M Snijders
- Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands; Radboud University, Donders Institute for Brain, Cognition and Behaviour, the Netherlands
- Jan-Mathijs Schoffelen
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, the Netherlands

18
Mittag M, Larson E, Taulu S, Clarke M, Kuhl PK. Reduced Theta Sampling in Infants at Risk for Dyslexia across the Sensitive Period of Native Phoneme Learning. Int J Environ Res Public Health 2022; 19:1180. [PMID: 35162202] [PMCID: PMC8835181] [DOI: 10.3390/ijerph19031180]
Abstract
Research on children and adults with developmental dyslexia (a specific difficulty in learning to read and spell) suggests that phonological deficits in dyslexia are linked to basic auditory deficits in temporal sampling. However, it remains undetermined whether such deficits are already present in infancy, especially during the sensitive period when the auditory system specializes in native phoneme perception. Because dyslexia is strongly hereditary, it is possible to examine infants for early predictors of the condition before detectable symptoms emerge. This study examines low-level auditory temporal sampling in infants at risk for dyslexia across the sensitive period of native phoneme learning. Using magnetoencephalography (MEG), we found deficient auditory sampling at theta in at-risk infants at both 6 and 12 months, indicating atypical auditory sampling at the syllabic rate in those infants across the sensitive period for native-language phoneme learning. This interpretation is supported by our additional finding that auditory sampling at theta predicted later vocabulary comprehension, nonlinguistic communication, and the ability to combine words. Our results indicate a possible early marker of risk for dyslexia.
Affiliation(s)
- Maria Mittag
- Institute for Learning & Brain Sciences, University of Washington, Seattle, WA 98195-7988, USA
- Eric Larson
- Institute for Learning & Brain Sciences, University of Washington, Seattle, WA 98195-7988, USA
- Samu Taulu
- Institute for Learning & Brain Sciences, University of Washington, Seattle, WA 98195-7988, USA
- Department of Physics, University of Washington, Seattle, WA 98195-7988, USA
- Maggie Clarke
- Institute for Learning & Brain Sciences, University of Washington, Seattle, WA 98195-7988, USA
- Patricia K. Kuhl
- Institute for Learning & Brain Sciences, University of Washington, Seattle, WA 98195-7988, USA

19
Wagner M, Ortiz-Mantilla S, Rusiniak M, Benasich AA, Shafer VL, Steinschneider M. Acoustic-level and language-specific processing of native and non-native phonological sequence onsets in the low gamma and theta-frequency bands. Sci Rep 2022; 12:314. [PMID: 35013345] [PMCID: PMC8748887] [DOI: 10.1038/s41598-021-03611-2]
Abstract
Acoustic structures associated with native-language phonological sequences are enhanced within auditory pathways for perception, although the underlying mechanisms are not well understood. To elucidate processes that facilitate perception, time-frequency (T-F) analyses of EEGs obtained from native speakers of English and Polish were conducted. Participants listened to same and different nonword pairs within counterbalanced attend and passive conditions. Nonwords contained the onsets /pt/, /pət/, /st/, and /sət/, which occur in both Polish and English, with the exception of /pt/, which never occurs word-initially in English. Measures of spectral power and inter-trial phase locking (ITPL) in the low gamma (LG) and theta-frequency bands were analyzed from two bilateral, auditory source-level channels created through source localization modeling. Results revealed significantly larger spectral power in LG for the English listeners to the unfamiliar /pt/ onsets from the right hemisphere at early cortical stages, during the passive condition. Further, ITPL values revealed distinctive responses in the high- and low-theta bands to acoustic characteristics of the onsets, which were modulated by language exposure. These findings, language-specific processing in LG and acoustic-level and language-specific processing in theta, support the view that multiscale temporal processing in the LG and theta-frequency bands facilitates speech perception.
Affiliation(s)
- Monica Wagner
- St. John's University, St. John's Hall, Room 344 e1, 8000 Utopia Parkway, Queens, NY, 11439, USA
- Valerie L Shafer
- The Graduate Center of the City University of New York, New York, NY, 10016, USA

20
Moon J, Chau T, Orlandi S. A comparison and classification of oscillatory characteristics in speech perception and covert speech. Brain Res 2022; 1781:147778. [PMID: 35007548] [DOI: 10.1016/j.brainres.2022.147778]
Abstract
Covert speech, the mental imagery of speaking, has been studied increasingly to understand and decode thoughts in the context of brain-computer interfaces. In studies of speech comprehension, neural oscillations are thought to play a key role in the temporal encoding of speech. However, little is known about the role of oscillations in covert speech. In this study, we investigated the oscillatory involvements in covert speech and speech perception. Data were collected from 10 participants with 64-channel EEG. Participants heard the words 'blue' and 'orange' and subsequently mentally rehearsed them. First, a continuous wavelet transform was performed on epoched signals, and two-tailed t-tests between the two classes were then conducted to determine statistical differences in frequency and time (t-CWT). Features were also extracted using t-CWT and subsequently classified using a support vector machine. θ-γ phase-amplitude coupling (PAC) was also assessed within and between tasks. All binary classifications produced accuracies significantly greater (80-90%) than chance level, supporting the use of t-CWT in determining relative oscillatory involvements. While the perception task dynamically invoked all frequencies with more prominent θ and α activity, the covert task favoured higher frequencies, with significantly higher γ activity than perception. Moreover, the perception condition produced significant θ-γ PAC, corroborating a reported linkage between syllabic and phonemic sampling. Although this coupling was suppressed in the covert condition, we found significant cross-task coupling between perception θ and covert speech γ. Covert speech processing thus appears to be largely associated with higher frequencies of EEG. Importantly, the significant cross-task coupling between speech perception and covert speech, in the absence of within-task covert speech PAC, supports the notion that the γ- and θ-bands subserve, respectively, shared and unique encoding processes across tasks.
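The θ-γ phase-amplitude coupling assessed in this study is commonly quantified with a mean-vector-length (Canolty-style) index: gamma amplitude weighted by the theta phase at which it occurs. The sketch below is a generic illustration under assumed band limits and synthetic signals, not the study's actual t-CWT/PAC pipeline.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x, fs, lo, hi):
    """Zero-phase band-pass filter (4th-order Butterworth, SOS form)."""
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def pac_mvl(x, fs, phase_band=(4, 8), amp_band=(30, 50)):
    """Mean-vector-length PAC: |mean(A_gamma * exp(i*phi_theta))|,
    normalized by mean gamma amplitude. 0 means no coupling."""
    phi = np.angle(hilbert(bandpass(x, fs, *phase_band)))
    amp = np.abs(hilbert(bandpass(x, fs, *amp_band)))
    return np.abs(np.mean(amp * np.exp(1j * phi))) / np.mean(amp)

rng = np.random.default_rng(2)
fs, dur = 500, 20.0
t = np.arange(0, dur, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
# Coupled: the 40 Hz amplitude waxes and wanes with theta phase
coupled = theta + (1 + theta) * np.sin(2 * np.pi * 40 * t) \
    + 0.3 * rng.standard_normal(t.size)
# Uncoupled: constant-amplitude 40 Hz riding on the same theta
uncoupled = theta + np.sin(2 * np.pi * 40 * t) \
    + 0.3 * rng.standard_normal(t.size)
mi_coupled = pac_mvl(coupled, fs)
mi_uncoupled = pac_mvl(uncoupled, fs)
print(mi_coupled, mi_uncoupled)
```

Because both toy signals contain identical theta and gamma power, a nonzero index for the first signal only reflects the phase-dependent gamma amplitude, which is the quantity PAC analyses are after.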
Affiliation(s)
- Jaewoong Moon
- Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, ON, Canada
- Tom Chau
- Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Toronto, ON, Canada
- Silvia Orlandi
- Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, ON, Canada

21
Palana J, Schwartz S, Tager-Flusberg H. Evaluating the Use of Cortical Entrainment to Measure Atypical Speech Processing: A Systematic Review. Neurosci Biobehav Rev 2021; 133:104506. [PMID: 34942267] [DOI: 10.1016/j.neubiorev.2021.12.029]
Abstract
BACKGROUND Cortical entrainment has emerged as a promising means of measuring continuous speech processing in young, neurotypical adults. However, its utility for capturing atypical speech processing has not been systematically reviewed. OBJECTIVES Synthesize evidence regarding the merit of measuring cortical entrainment to capture atypical speech processing and recommend avenues for future research. METHOD We systematically reviewed publications investigating entrainment to continuous speech in populations with auditory processing differences. RESULTS Of the 25 publications reviewed, most studies were conducted on older and/or hearing-impaired adults, for whom slow-wave entrainment to speech was often heightened compared to controls. Research on populations with neurodevelopmental disorders, in whom slow-wave entrainment was often reduced, was less common. Across publications, findings highlighted associations between cortical entrainment and differences in speech processing performance. CONCLUSIONS Measures of cortical entrainment offer a useful means of capturing speech processing differences, and future research should leverage them more extensively when studying populations with neurodevelopmental disorders.
Affiliation(s)
- Joseph Palana
- Department of Psychological and Brain Sciences, Boston University, 64 Cummington Mall, Boston, MA, 02215, USA; Laboratories of Cognitive Neuroscience, Division of Developmental Medicine, Harvard Medical School, Boston Children's Hospital, 1 Autumn Street, Boston, MA, 02215, USA
- Sophie Schwartz
- Department of Psychological and Brain Sciences, Boston University, 64 Cummington Mall, Boston, MA, 02215, USA
- Helen Tager-Flusberg
- Department of Psychological and Brain Sciences, Boston University, 64 Cummington Mall, Boston, MA, 02215, USA

22
Kern P, Assaneo MF, Endres D, Poeppel D, Rimmele JM. Preferred auditory temporal processing regimes and auditory-motor synchronization. Psychon Bull Rev 2021; 28:1860-1873. [PMID: 34100222] [PMCID: PMC8642338] [DOI: 10.3758/s13423-021-01933-w]
Abstract
Decoding the rich temporal dynamics of complex sounds such as speech is constrained by the underlying neuronal-processing mechanisms. Oscillatory theories suggest the existence of one optimal perceptual performance regime at auditory stimulation rates in the delta to theta range (< 10 Hz), whereas whether performance is reduced in the alpha range (10-14 Hz) remains controversial. Additionally, the widely discussed contribution of the motor system to timing remains unclear. We measured rate discrimination thresholds between 4 and 15 Hz, and auditory-motor coupling strength was estimated through a behavioral auditory-motor synchronization task. In a Bayesian model comparison, high auditory-motor synchronizers showed a larger range of constant optimal temporal judgments than low synchronizers, with performance decreasing in the alpha range. This evidence for optimal processing in the theta range is consistent with preferred oscillatory regimes in auditory cortex that compartmentalize stimulus encoding and processing. The findings suggest, remarkably, that increased auditory-motor synchronization might extend such an optimal range towards faster rates.
Affiliation(s)
- Pius Kern
- Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, Grüneburgweg 14, 60322 Frankfurt am Main, Germany
- M Florencia Assaneo
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Campus UNAM 3001, 76230 Juriquilla, Querétaro, Mexico
- Dominik Endres
- Department of Psychology, Philipps University Marburg, Gutenbergstraße 18, 35032 Marburg, Germany
- David Poeppel
- Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, Grüneburgweg 14, 60322 Frankfurt am Main, Germany
- Department of Psychology, New York University, 6 Washington Place, New York, NY 10003, USA
- Max Planck NYU Center for Language, Music, and Emotion, Frankfurt am Main, Germany, and New York, NY, USA
- Johanna M Rimmele
- Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, Grüneburgweg 14, 60322 Frankfurt am Main, Germany
- Max Planck NYU Center for Language, Music, and Emotion, Frankfurt am Main, Germany, and New York, NY, USA
23
Neural oscillations track natural but not artificial fast speech: novel insights from speech-brain coupling using MEG. Neuroimage 2021; 244:118577. PMID: 34525395; DOI: 10.1016/j.neuroimage.2021.118577
Abstract
Neural oscillations contribute to speech parsing via cortical tracking of hierarchical linguistic structures, including syllable rate. While the properties of neural entrainment have been largely probed with speech stimuli at either normal or artificially accelerated rates, the important case of natural fast speech has been largely overlooked. Using magnetoencephalography, we found that listening to naturally-produced speech was associated with cortico-acoustic coupling, both at normal (∼6 syllables/s) and fast (∼9 syllables/s) rates, with a corresponding shift in peak entrainment frequency. Interestingly, time-compressed sentences did not yield such coupling, despite being generated at the same rate as the natural fast sentences. Additionally, neural activity in right motor cortex exhibited stronger tuning to natural fast rather than to artificially accelerated speech, and showed evidence for stronger phase-coupling with left temporo-parietal and motor areas. These findings are highly relevant for our understanding of the role played by auditory and motor cortex oscillations in the perception of naturally produced speech.
24
Kulkarni A, Kegler M, Reichenbach T. Effect of visual input on syllable parsing in a computational model of a neural microcircuit for speech processing. J Neural Eng 2021; 18. PMID: 34547737; DOI: 10.1088/1741-2552/ac28d3
Abstract
Objective. Seeing a person talking can help us understand them, particularly in a noisy environment. However, how the brain integrates the visual information with the auditory signal to enhance speech comprehension remains poorly understood. Approach. Here we address this question in a computational model of a cortical microcircuit for speech processing. The model consists of an excitatory and an inhibitory neural population that together create oscillations in the theta frequency range. When stimulated with speech, the theta rhythm becomes entrained to the onsets of syllables, such that the onsets can be inferred from the network activity. We investigate how well the obtained syllable parsing performs when different types of visual stimuli are added. In particular, we consider currents related to the rate of syllables as well as currents related to the mouth-opening area of the talking faces. Main results. We find that currents that target the excitatory neuronal population can influence speech comprehension, either boosting or impeding it, depending on the temporal delay and on whether the currents are excitatory or inhibitory. In contrast, currents that act on the inhibitory neurons do not impact speech comprehension significantly. Significance. Our results suggest neural mechanisms for the integration of visual information with the acoustic information in speech and make experimentally testable predictions.
Affiliation(s)
- Anirudh Kulkarni
- Department of Bioengineering and Centre for Neurotechnology, Imperial College London, South Kensington Campus, SW7 2AZ London, United Kingdom
- Mikolaj Kegler
- Department of Bioengineering and Centre for Neurotechnology, Imperial College London, South Kensington Campus, SW7 2AZ London, United Kingdom
- Tobias Reichenbach
- Department of Bioengineering and Centre for Neurotechnology, Imperial College London, South Kensington Campus, SW7 2AZ London, United Kingdom; Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, Konrad-Zuse-Strasse 3/5, 91056 Erlangen, Germany
25
Jenson D. Audiovisual incongruence differentially impacts left and right hemisphere sensorimotor oscillations: potential applications to production. PLoS One 2021; 16:e0258335. PMID: 34618866; PMCID: PMC8496780; DOI: 10.1371/journal.pone.0258335
Abstract
Speech production gives rise to distinct auditory and somatosensory feedback signals which are dynamically integrated to enable online monitoring and error correction, though it remains unclear how the sensorimotor system supports the integration of these multimodal signals. Capitalizing on the parity of sensorimotor processes supporting perception and production, the current study employed the McGurk paradigm to induce multimodal sensory congruence/incongruence. EEG data from a cohort of 39 typical speakers were decomposed with independent component analysis to identify bilateral mu rhythms, indices of sensorimotor activity. Subsequent time-frequency analyses revealed bilateral patterns of event-related desynchronization (ERD) across alpha and beta frequency ranges over the time course of perceptual events. Right mu activity was characterized by reduced ERD during all cases of audiovisual incongruence, while left mu activity was attenuated and protracted in McGurk trials eliciting sensory fusion. Results were interpreted to suggest distinct hemispheric contributions, with right-hemisphere mu activity supporting a coarse incongruence detection process and left-hemisphere mu activity reflecting a more granular level of analysis, including phonological identification and incongruence resolution. Findings are also considered with regard to incongruence detection and resolution processes during production.
Affiliation(s)
- David Jenson
- Department of Speech and Hearing Sciences, Washington State University, Spokane, Washington, United States of America
26
Aberrant static and dynamic functional network connectivity in acute mild traumatic brain injury with cognitive impairment. Clin Neuroradiol 2021; 32:205-214. PMID: 34463779; DOI: 10.1007/s00062-021-01082-6
Abstract
PURPOSE This study aimed to investigate differences in static and dynamic functional network connectivity (FNC) and explore their association with neurocognitive performance in acute mild traumatic brain injury (mTBI). METHODS A total of 76 patients with acute mTBI and 70 age- and sex-matched healthy controls were enrolled (age 43.79 ± 10.22 years vs. 45.63 ± 9.49 years; male/female: 34/42 vs. 38/32; all p > 0.05) and underwent resting-state functional magnetic resonance imaging (fMRI) scanning (repetition time/echo time = 2000/30 ms, 230 volumes). Independent component analysis was conducted to evaluate static and dynamic FNC patterns on the basis of nine resting-state networks, namely, the auditory network (AUDN), dorsal attention network (dAN), ventral attention network (vAN), default mode network (DMN), left frontoparietal network (LFPN), right frontoparietal network (RFPN), somatomotor network (SMN), visual network (VN), and salience network (SN). Spearman's correlation between aberrant FNC values and Montreal Cognitive Assessment (MoCA) scores was further measured in mTBI. RESULTS Compared with controls, patients with mTBI showed widespread aberrances of static FNC, such as reduced FNC in the DMN-vAN and VN-vAN pairs. The mTBI patients exhibited aberrant dynamic FNC in state 2, involving reduced FNC of the vAN with the AUDN, of the VN with the DMN and dAN, and of the SN with the SMN and vAN. Reduced dynamic FNC in the SN-vAN pair was negatively correlated with the MoCA score. CONCLUSION Our findings suggest that aberrant static and dynamic FNC at the acute stage may contribute to cognitive symptoms, expanding knowledge of FNC-cognition relations from both the static and the dynamic perspective.
27
Klimovich-Gray A, Barrena A, Agirre E, Molinaro N. One way or another: cortical language areas flexibly adapt processing strategies to perceptual and contextual properties of speech. Cereb Cortex 2021; 31:4092-4103. PMID: 33825884; DOI: 10.1093/cercor/bhab071
Abstract
Cortical circuits rely on the temporal regularities of speech to optimize signal parsing for sound-to-meaning mapping. Bottom-up speech analysis is accelerated by top-down predictions about upcoming words. In everyday communication, however, listeners are regularly presented with challenging input: fluctuations of speech rate or semantic content. In this study, we asked how reducing speech temporal regularity affects its processing: parsing, phonological analysis, and the ability to generate context-based predictions. To ensure that spoken sentences were natural and approximated the semantic constraints of spontaneous speech, we built a neural network to select stimuli from large corpora. We analyzed brain activity recorded with magnetoencephalography during sentence listening using evoked responses, speech-to-brain synchronization, and representational similarity analysis. For normal speech, theta-band (6.5-8 Hz) speech-to-brain synchronization was increased and the left fronto-temporal areas generated stronger contextual predictions. The reverse was true for temporally irregular speech: weaker theta synchronization and reduced top-down effects. Interestingly, delta-band (0.5 Hz) speech tracking was greater when contextual/semantic predictions were lower or when speech was temporally jittered. We conclude that speech temporal regularity is relevant for (theta) syllabic tracking and robust semantic predictions, while the joint support of temporal and contextual predictability reduces word- and phrase-level cortical tracking (delta).
Affiliation(s)
- Ander Barrena
- Computer Science Faculty, University of the Basque Country, Donostia, 20018 San Sebastian, Spain
- Eneko Agirre
- Computer Science Faculty, University of the Basque Country, Donostia, 20018 San Sebastian, Spain
- Nicola Molinaro
- BCBL, Basque Center on Cognition, Brain and Language, Donostia, 20009 San Sebastian, Spain; Ikerbasque, Basque Foundation for Science, 48009 Bilbao, Spain
28
Auditory cortical micro-networks show differential connectivity during voice and speech processing in humans. Commun Biol 2021; 4:801. PMID: 34172824; PMCID: PMC8233416; DOI: 10.1038/s42003-021-02328-2
Abstract
The temporal voice areas (TVAs) in bilateral auditory cortex (AC) appear specialized for voice processing. Previous research assumed a uniform functional profile for the TVAs, which are broadly spread along the bilateral AC. Alternatively, the TVAs might comprise separate AC nodes controlling differential neural functions for voice and speech decoding, organized as local micro-circuits. To investigate these micro-circuits, we modeled the directional connectivity between TVA nodes during voice processing in humans while acquiring brain activity using neuroimaging. Results show several bilateral AC nodes for general voice decoding (speech and non-speech voices) and for speech decoding in particular. Furthermore, non-hierarchical and differential bilateral AC networks manifest distinct excitatory and inhibitory pathways for voice and speech processing. Finally, while voice and speech processing seem to have distinctive but integrated neural circuits in the left AC, the right AC reveals disintegrated neural circuits for both sounds. Altogether, we demonstrate a functional heterogeneity in the TVAs for voice decoding based on local micro-circuits.
29
Coupled oscillations enable rapid temporal recalibration to audiovisual asynchrony. Commun Biol 2021; 4:559. PMID: 33976360; PMCID: PMC8113519; DOI: 10.1038/s42003-021-02087-0
Abstract
The brain naturally resolves the challenge of integrating auditory and visual signals produced by the same event despite different physical propagation speeds and neural processing latencies. Temporal recalibration manifests in human perception to realign incoming signals across the senses. Recent behavioral studies show it is a fast-acting phenomenon, relying on the most recent exposure to audiovisual asynchrony. Here we show that the physiological mechanism of rapid, context-dependent recalibration builds on interdependent pre-stimulus cortical rhythms in sensory brain regions. Using magnetoencephalography, we demonstrate that individual recalibration behavior is related to subject-specific properties of fast oscillations (>35 Hz) nested within a slower alpha rhythm (8–12 Hz) in auditory cortex. We also show that the asynchrony of a previously presented audiovisual stimulus pair alters the preferred coupling phase of these fast oscillations along the alpha cycle, with a resulting phase-shift amounting to the temporal recalibration observed behaviorally. These findings suggest that cross-frequency coupled oscillations contribute to forming unified percepts across senses. Lennert et al. use magnetoencephalography in human participants to show that individual recalibration behavior in response to audiovisual asynchrony is related to subject-specific properties of fast oscillations in the auditory cortex. Their findings shed light on how brain oscillations contribute to forming unified percepts across senses.
30
Kolozsvári OB, Xu W, Gerike G, Parviainen T, Nieminen L, Noiray A, Hämäläinen JA. Coherence between brain activation and speech envelope at word and sentence levels showed age-related differences in low frequency bands. Neurobiol Lang (Camb) 2021; 2:226-253. PMID: 37216146; PMCID: PMC10158622; DOI: 10.1162/nol_a_00033
Abstract
Speech perception is dynamic and shows changes across development. In parallel, functional differences in brain development over time have been well documented, and these differences may interact with changes in speech perception during infancy and childhood. Further, there is evidence that the two hemispheres contribute unequally to speech segmentation at the sentence and phonemic levels. To disentangle those contributions, we studied the cortical tracking of variously sized units of speech that are crucial for spoken language processing in children (4.7-9.3 years old, N = 34) and adults (N = 19). We measured participants' magnetoencephalography (MEG) responses to syllables, words, and sentences, calculated the coherence between the speech signal and MEG responses at the level of words and sentences, and further examined auditory evoked responses to syllables. Age-related differences were found for coherence values at the delta and theta frequency bands. Both frequency bands showed an effect of stimulus type, although this was attributed to the length of the stimulus and not the linguistic unit size. There was no difference between hemispheres at the source level, either in coherence values for word or sentence processing or in the evoked response to syllables. Results highlight the importance of the lower frequencies for speech tracking in the brain across different lexical units. Further, stimulus length affects the speech-brain associations, suggesting that methodological approaches should be selected carefully when studying speech envelope processing at the neural level. Speech tracking in the brain seems decoupled from more general maturation of the auditory cortex.
Affiliation(s)
- Orsolya B. Kolozsvári
- Department of Psychology, University of Jyväskylä, Finland
- Centre for Interdisciplinary Brain Research (CIBR), University of Jyväskylä, Finland
- Weiyong Xu
- Department of Psychology, University of Jyväskylä, Finland
- Centre for Interdisciplinary Brain Research (CIBR), University of Jyväskylä, Finland
- Georgia Gerike
- Department of Psychology, University of Jyväskylä, Finland
- Centre for Interdisciplinary Brain Research (CIBR), University of Jyväskylä, Finland
- Niilo Mäki Institute, Jyväskylä, Finland
- Tiina Parviainen
- Department of Psychology, University of Jyväskylä, Finland
- Centre for Interdisciplinary Brain Research (CIBR), University of Jyväskylä, Finland
- Lea Nieminen
- Centre for Applied Language Studies, University of Jyväskylä, Finland
- Aude Noiray
- Laboratory for Oral Language Acquisition (LOLA), University of Potsdam, Germany
- Jarmo A. Hämäläinen
- Department of Psychology, University of Jyväskylä, Finland
- Centre for Interdisciplinary Brain Research (CIBR), University of Jyväskylä, Finland
31
Lubinus C, Orpella J, Keitel A, Gudi-Mindermann H, Engel AK, Roeder B, Rimmele JM. Data-driven classification of spectral profiles reveals brain region-specific plasticity in blindness. Cereb Cortex 2021; 31:2505-2522. PMID: 33338212; DOI: 10.1093/cercor/bhaa370
Abstract
Congenital blindness has been shown to result in behavioral adaptation and neuronal reorganization, but the underlying neuronal mechanisms are largely unknown. Brain rhythms are characteristic for anatomically defined brain regions and provide a putative mechanistic link to cognitive processes. In a novel approach, using magnetoencephalography resting state data of congenitally blind and sighted humans, deprivation-related changes in spectral profiles were mapped to the cortex using clustering and classification procedures. Altered spectral profiles in visual areas suggest changes in visual alpha-gamma band inhibitory-excitatory circuits. Remarkably, spectral profiles were also altered in auditory and right frontal areas showing increased power in theta-to-beta frequency bands in blind compared with sighted individuals, possibly related to adaptive auditory and higher cognitive processing. Moreover, occipital alpha correlated with microstructural white matter properties extending bilaterally across posterior parts of the brain. We provide evidence that visual deprivation selectively modulates spectral profiles, possibly reflecting structural and functional adaptation.
Affiliation(s)
- Christina Lubinus
- Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, 60322 Frankfurt am Main, Germany
- Joan Orpella
- Department of Psychology, New York University, New York, NY 10003, USA
- Anne Keitel
- Psychology, University of Dundee, Dundee DD1 4HN, UK
- Helene Gudi-Mindermann
- Biological Psychology and Neuropsychology, University of Hamburg, 20146 Hamburg, Germany; Department of Social Epidemiology, University of Bremen, 28359 Bremen, Germany
- Andreas K Engel
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
- Brigitte Roeder
- Biological Psychology and Neuropsychology, University of Hamburg, 20146 Hamburg, Germany
- Johanna M Rimmele
- Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, 60322 Frankfurt am Main, Germany; Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
32
Deng Z, Wang S. Sex differentiation of brain structures in autism: findings from a gray matter asymmetry study. Autism Res 2021; 14:1115-1126. PMID: 33769688; DOI: 10.1002/aur.2506
Abstract
Autism spectrum disorder (ASD) is diagnosed much more often in males than females. This male predominance has prompted a number of studies to examine how sex differences are related to the neural expression of ASD. Different theories, such as the "extreme male brain" theory, the "female protective effect" (FPE) theory, and the gender incoherence (GI) theory, provide different explanations for the mixed findings of sex-related neural expression of ASD. This study sought to clarify whether any of these theories applies to brain structure in individuals with ASD by analyzing a selective high-quality data subset from an open data resource (Autism Brain Imaging Data Exchange I and II) including 35 males/35 females with ASD and 86 male/86 female typical controls (TCs). We examined the sex-related changes in ASD in gray matter asymmetry measures (i.e., asymmetry index, AI) derived from voxel-based morphometry using a 2 (diagnosis: ASD vs. TC) × 2 (sex: female vs. male) factorial design. A diagnosis-by-sex interaction effect was identified in the planum temporale/Heschl's gyrus: (i) compared to females, males exhibited decreased AI (indicating more leftward brain asymmetry) in the TC group, whereas AI was greater (indicating less leftward brain asymmetry) for males than for females in the ASD group; and (ii) females with ASD showed reduced AI (indicating more leftward brain asymmetry) compared to female TCs, whereas there were no differences between ASDs and TCs in the male group. This interaction pattern supports the FPE theory in showing greater brain structure changes (masculinization) in females with ASD. LAY SUMMARY: To understand the neural mechanisms underlying male predominance in autism spectrum disorder (ASD), we investigated the sex differences in ASD-related alterations in brain asymmetry. We found greater changes in females with ASD compared with males with ASD, revealing a female protective effect. These findings provide novel insights into the neurobiology of sex differences in ASD.
Affiliation(s)
- Zhizhou Deng
- Department of Applied Psychology, Guangdong University of Finance and Economics, Guangzhou, China
- Suiping Wang
- Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education, Guangzhou, China; School of Psychology, South China Normal University, Guangzhou, China; Center for Studies of Psychological Application, South China Normal University, Guangzhou, China; Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, China
33
Mahmud MS, Yeasin M, Bidelman GM. Speech categorization is better described by induced rather than evoked neural activity. J Acoust Soc Am 2021; 149:1644. PMID: 33765780; PMCID: PMC8267855; DOI: 10.1121/10.0003572
Abstract
Categorical perception (CP) describes how the human brain categorizes speech despite inherent acoustic variability. We examined neural correlates of CP in both evoked and induced electroencephalogram (EEG) activity to evaluate which mode best describes the process of speech categorization. Listeners labeled sounds from a vowel gradient while we recorded their EEGs. From the source-reconstructed EEG, we used band-specific evoked and induced neural activity to build parameter-optimized support vector machine models assessing how well listeners' speech categorization could be decoded via whole-brain and hemisphere-specific responses. We found whole-brain evoked β-band activity decoded prototypical from ambiguous speech sounds with ∼70% accuracy. However, induced γ-band oscillations decoded speech categories better (∼95% accuracy) than evoked β-band activity (∼70%). Induced high-frequency (γ-band) oscillations dominated CP decoding in the left hemisphere, whereas lower frequencies (θ-band) dominated the decoding in the right hemisphere. Moreover, feature selection identified 14 brain regions carrying induced activity and 22 regions of evoked activity that were most salient in describing category-level speech representations. Among the areas and neural regimes explored, induced γ-band modulations were most strongly associated with listeners' behavioral CP. The data suggest that the category-level organization of speech is dominated by relatively high-frequency induced brain rhythms.
Affiliation(s)
- Md Sultan Mahmud
- Department of Electrical and Computer Engineering, University of Memphis, 3815 Central Avenue, Memphis, Tennessee 38152, USA
- Mohammed Yeasin
- Department of Electrical and Computer Engineering, University of Memphis, 3815 Central Avenue, Memphis, Tennessee 38152, USA
- Gavin M Bidelman
- School of Communication Sciences and Disorders, University of Memphis, 4055 North Park Loop, Memphis, Tennessee 38152, USA
34
Neural signatures of syntactic variation in speech planning. PLoS Biol 2021; 19:e3001038. PMID: 33497384; PMCID: PMC7837500; DOI: 10.1371/journal.pbio.3001038
Abstract
Planning to speak is a challenge for the brain, and the challenge varies between and within languages. Yet, little is known about how neural processes react to these variable challenges beyond the planning of individual words. Here, we examine how fundamental differences in syntax shape the time course of sentence planning. Most languages treat alike (i.e., align with each other) the two uses of a word like “gardener” in “the gardener crouched” and in “the gardener planted trees.” A minority keeps these formally distinct by adding special marking in one case, and some languages display both aligned and nonaligned expressions. Exploiting such a contrast in Hindi, we used electroencephalography (EEG) and eye tracking to suggest that this difference is associated with distinct patterns of neural processing and gaze behavior during early planning stages, preceding phonological word form preparation. Planning sentences with aligned expressions induces larger synchronization in the theta frequency band, suggesting higher working memory engagement, and more visual attention to agents than planning nonaligned sentences, suggesting delayed commitment to the relational details of the event. Furthermore, plain, unmarked expressions are associated with larger desynchronization in the alpha band than expressions with special markers, suggesting more engagement in information processing to keep overlapping structures distinct during planning. Our findings contrast with the observation that the form of aligned expressions is simpler, and they suggest that the global preference for alignment is driven not by its neurophysiological effect on sentence planning but by other sources, possibly by aspects of production flexibility and fluency or by sentence comprehension. This challenges current theories on how production and comprehension may affect the evolution and distribution of syntactic variants in the world’s languages.
Little is known about the neural processes involved in planning to speak. This study uses eye tracking and EEG to show that speakers prepare sentence structures in different ways and rely on alpha and theta oscillations differently when planning sentences with and without agent case marking, challenging theories on how production and comprehension affect language evolution.
35
Vaquero L, Ramos-Escobar N, Cucurell D, François C, Putkinen V, Segura E, Huotilainen M, Penhune V, Rodríguez-Fornells A. Arcuate fasciculus architecture is associated with individual differences in pre-attentive detection of unpredicted music changes. Neuroimage 2021; 229:117759. PMID: 33454403; DOI: 10.1016/j.neuroimage.2021.117759
Abstract
The mismatch negativity (MMN) is an event-related brain potential (ERP) elicited by unpredicted sounds presented in a sequence of repeated auditory stimuli. The neural sources of the MMN have previously been attributed to a fronto-temporo-parietal network which crucially overlaps with the so-called auditory dorsal stream, involving inferior and middle frontal, inferior parietal, and superior and middle temporal regions. These cortical areas are structurally connected by the arcuate fasciculus (AF), a three-branch pathway supporting the feedback-feedforward loop involved in auditory-motor integration, auditory working memory, storage of acoustic templates, as well as comparison and update of those templates. Here, we characterized the individual differences in the white-matter macrostructural properties of the AF and explored their link to the electrophysiological marker of passive change detection gathered in a melodic multifeature MMN-EEG paradigm in 26 healthy young adults without musical training. Our results show that left fronto-temporal white-matter connectivity plays an important role in the pre-attentive detection of rhythm modulations within a melody. Previous studies have shown that this AF segment is also critical for language processing and learning. This strong coupling between structure and function in auditory change detection might be related to lifetime linguistic (and possibly musical) exposure and experience, as well as to the timing-processing specialization of the left auditory cortex. To the best of our knowledge, this is the first time that neurophysiological (EEG) indices and brain white-matter connectivity indices from DTI tractography have been studied together. Thus, the present results, although still exploratory, add to the existing evidence on the importance of studying the constraints imposed on cognitive functions by the underlying structural connectivity.
Affiliation(s)
- Lucía Vaquero: Laboratory of Cognitive and Computational Neuroscience, Complutense University of Madrid and Polytechnic University of Madrid, Campus Científico y Tecnológico de la UPM, Pozuelo de Alarcón, 28223 Madrid, Spain
- Neus Ramos-Escobar: Department of Cognition, Development and Education Psychology, and Institute of Neurosciences, University of Barcelona, Barcelona, Spain; Cognition and Brain Plasticity Unit, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain
- David Cucurell: Department of Cognition, Development and Education Psychology, and Institute of Neurosciences, University of Barcelona, Barcelona, Spain; Cognition and Brain Plasticity Unit, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain
- Clément François: Cognition and Brain Plasticity Unit, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain; Aix Marseille Univ, CNRS, LPL, Aix-en-Provence, France
- Vesa Putkinen: Turku PET Centre, University of Turku, Turku, Finland
- Emma Segura: Department of Cognition, Development and Education Psychology, and Institute of Neurosciences, University of Barcelona, Barcelona, Spain; Cognition and Brain Plasticity Unit, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain
- Minna Huotilainen: Cicero Learning and Cognitive Brain Research Unit, University of Helsinki, Helsinki, Finland
- Virginia Penhune: Penhune Laboratory for Motor Learning and Neural Plasticity, Concordia University, Montreal, QC, Canada; International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada; Center for Research on Brain, Language and Music (CRBLM), McGill University, Montreal, QC, Canada
- Antoni Rodríguez-Fornells: Department of Cognition, Development and Education Psychology, and Institute of Neurosciences, University of Barcelona, Barcelona, Spain; Cognition and Brain Plasticity Unit, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain

36
Rufener KS, Zaehle T. Dysfunctional auditory gamma oscillations in developmental dyslexia: A potential target for a tACS-based intervention. Prog Brain Res 2021; 264:211-232. PMID: 34167657. DOI: 10.1016/bs.pbr.2021.01.016.
Abstract
Interventions in developmental dyslexia typically consist of orthography-based reading and writing training. However, their efficacy is limited and, consequently, the symptoms persist into adulthood. Critical for this lack of efficacy is the still ongoing debate about the core deficit in dyslexia and its underlying neurobiological causes. There is ample evidence for phonological as well as auditory temporal processing deficits in dyslexia and, on the other hand, for cortical gamma oscillations in the auditory cortex being functionally relevant to the extraction of linguistically meaningful information units from the acoustic signal. The present work aims to shed more light on the link between auditory gamma oscillations, phonological awareness, and literacy skills in dyslexia. By means of EEG, individual gamma frequencies were assessed in a group of children and adolescents diagnosed with dyslexia as well as in an age-matched control group with typical literacy skills. Furthermore, phonological awareness was assessed in both groups, while reading and writing performance was additionally measured in the dyslexic participants. We found significantly lower gamma peak frequencies as well as lower phonological awareness scores in dyslexic participants compared to age-matched controls. Additionally, results showed a positive correlation between the individual gamma frequency and phonological awareness. Our data suggest a hierarchical structure of neural gamma oscillations, phonological awareness, and literacy skills. Thereby, the results emphasize altered gamma oscillations not only as a core deficit in dyslexia but also as a potential target for future causal interventions. We discuss these findings in the context of non-invasive brain stimulation techniques and suggest transcranial alternating current stimulation as a promising approach to normalize dysfunctional oscillations in dyslexia.
Affiliation(s)
- Tino Zaehle: Center for Behavioral Brain Sciences (CBBS), Otto von Guericke University, Magdeburg, Germany

37
Eqlimi E, Bockstael A, De Coensel B, Schönwiesner M, Talsma D, Botteldooren D. EEG Correlates of Learning From Speech Presented in Environmental Noise. Front Psychol 2020; 11:1850. PMID: 33250798. PMCID: PMC7676901. DOI: 10.3389/fpsyg.2020.01850.
Abstract
How the human brain retains relevant vocal information while suppressing irrelevant sounds is one of the ongoing challenges in cognitive neuroscience. Knowledge of the underlying mechanisms of this ability can be used to identify whether a person is distracted while listening to a target speech, especially in a learning context. This paper investigates the neural correlates of learning from speech presented in a noisy environment, using an ecologically valid learning context and electroencephalography (EEG). To this end, the following listening tasks were performed while 64-channel EEG signals were recorded: (1) attentive listening to lectures in background sound, (2) attentive listening to the background sound presented alone, and (3) inattentive listening to the background sound. For the first task, 13 lectures of 5 min in length, embedded in different types of realistic background noise, were presented to participants who were asked to focus on the lectures. As background noise, multi-talker babble, continuous highway, and fluctuating traffic sounds were used. After the second task, a written exam was taken to quantify the amount of information that participants had acquired and retained from the lectures. In addition to various power spectrum-based EEG features in different frequency bands, the peak frequency and long-range temporal correlations (LRTC) of alpha-band activity were estimated. To reduce these dimensions, a principal component analysis (PCA) was applied to the different listening conditions, resulting in the feature combinations that discriminate most between listening conditions and persons. Linear mixed-effect modeling was used to explain the origin of the extracted principal components, showing their dependence on listening condition and type of background sound. Following this unsupervised step, a supervised analysis was performed to explain the link between the exam results and the EEG principal component scores, using both linear fixed- and mixed-effect modeling. Results suggest that the ability to learn from speech presented in environmental noise can be predicted better from several components over specific brain regions than from knowing the background noise type. These components were linked to deterioration in attention, speech envelope following, decreased focus during listening, cognitive prediction error, and specific inhibition mechanisms.
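The dimensionality-reduction step described above can be sketched as follows. The matrix shapes and the injected trend are invented for illustration; the study's full pipeline (multiple feature types, mixed-effect models) is not reproduced.

```python
import numpy as np

# Minimal PCA via SVD on a matrix of EEG band-power features.
# Rows: recordings (participant x listening condition); columns: features.
rng = np.random.default_rng(0)
n_recordings, n_features = 39, 20
X = rng.normal(size=(n_recordings, n_features))
X[:, 0] += np.linspace(0.0, 5.0, n_recordings)   # inject one dominant trend

Xc = X - X.mean(axis=0)              # center each feature
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:3].T               # component scores per recording
explained = S[:3] ** 2 / np.sum(S ** 2)
print(scores.shape)                  # one score triplet per recording
```

The per-recording scores would then serve as the response variables in the mixed-effect models mentioned above.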
Affiliation(s)
- Ehsan Eqlimi: WAVES Research Group, Department of Information Technology, Ghent University, Ghent, Belgium
- Annelies Bockstael: WAVES Research Group, Department of Information Technology, Ghent University, Ghent, Belgium; École d'Orthophonie et d'Audiologie, Université de Montréal, Montreal, QC, Canada; Erasmushogeschool Brussel, Brussels, Belgium
- Bert De Coensel: WAVES Research Group, Department of Information Technology, Ghent University, Ghent, Belgium; ASAsense, Bruges, Belgium
- Marc Schönwiesner: Faculty of Biosciences, Pharmacy and Psychology, Institute of Biology, University of Leipzig, Leipzig, Germany; International Laboratory for Brain, Music and Sound Research (BRAMS), Université de Montréal, Montreal, QC, Canada
- Durk Talsma: Department of Experimental Psychology, Ghent University, Ghent, Belgium
- Dick Botteldooren: WAVES Research Group, Department of Information Technology, Ghent University, Ghent, Belgium

38
Speech-Brain Frequency Entrainment of Dyslexia with and without Phonological Deficits. Brain Sci 2020; 10:920. PMID: 33260681. PMCID: PMC7760068. DOI: 10.3390/brainsci10120920.
Abstract
Developmental dyslexia is a cognitive disorder characterized by difficulties in linguistic processing. Our purpose is to distinguish subtypes of developmental dyslexia by the level of speech–EEG frequency entrainment (δ: 1–4 Hz; β: 12.5–22.5 Hz; γ1: 25–35 Hz; γ2: 35–80 Hz) in word/pseudoword auditory discrimination. Depending on the type of disability, dyslexics can be divided into two subtypes: with less pronounced phonological deficits (NoPhoDys, visual dyslexia) and with more pronounced ones (PhoDys, phonological dyslexia). For correctly recognized stimuli, δ-entrainment is significantly worse in dyslexic children compared to controls at the level of speech prosody and syllabic analysis. Controls and NoPhoDys show a stronger δ-entrainment in the left-hemispheric auditory cortex (AC), anterior temporal lobe (ATL), frontal, and motor cortices than PhoDys. Compared to normolexics, the dyslexic subgroups have a deficit of δ-entrainment in the left ATL, inferior frontal gyrus (IFG), and the right AC. PhoDys show higher δ-entrainment in the posterior part of regions adjacent to the superior temporal sulcus (STS) than NoPhoDys. Insufficient low-frequency β changes over the IFG and the inferior parietal lobe in PhoDys compared to NoPhoDys correspond to their worse phonological short-term memory. The 30 Hz entrainment to phonemic frequencies that is left-dominant in normolexics instead characterizes the right AC and STS-adjacent regions in dyslexics. The more pronounced 40 Hz entrainment in PhoDys than in the other groups suggests a hearing "reassembly" and a poor phonological working memory. A shift toward higher-frequency γ-entrainment in the AC of NoPhoDys can lead to verbal memory deficits. Different patterns of cortical reorganization based on the left or right hemisphere lead to differential dyslexic profiles.
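Speech-EEG entrainment in a given band, as studied above, is often quantified with magnitude-squared coherence between the speech envelope and an EEG channel. The sketch below uses synthetic signals and an invented 2.5 Hz "prosodic" rhythm, so the numbers are illustrative only.

```python
import numpy as np
from scipy.signal import coherence

fs = 128                                   # EEG sampling rate (assumed)
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(1)
envelope = np.sin(2 * np.pi * 2.5 * t)     # 2.5 Hz syllabic/prosodic rhythm
eeg = 0.5 * envelope + rng.normal(size=t.size)   # entrained signal + noise

# Welch-averaged magnitude-squared coherence, 4 s segments.
f, Cxy = coherence(envelope, eeg, fs=fs, nperseg=fs * 4)
delta = (f >= 1) & (f <= 4)                # delta band, as in the study
print(f"peak delta-band coherence: {Cxy[delta].max():.2f}")
```

Coherence peaks at the shared 2.5 Hz rhythm and stays near chance elsewhere; group comparisons would contrast such band-limited values across participants.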
39
Villafaina S, Fuentes-García JP, Cano-Plasencia R, Gusi N. Neurophysiological Differences Between Women With Fibromyalgia and Healthy Controls During Dual Task: A Pilot Study. Front Psychol 2020; 11:558849. PMID: 33250807. PMCID: PMC7672184. DOI: 10.3389/fpsyg.2020.558849.
Abstract
Background Women with fibromyalgia (FM) have a reduced ability to perform two simultaneous tasks. However, the impact of dual task (DT) on the neurophysiological response of women with FM has not been studied. Objective To explore both the neurophysiological response and physical performance of women with FM and healthy controls while performing a DT (motor-cognitive). Design Cross-sectional study. Methods A total of 17 women with FM and 19 age- and sex-matched healthy controls (1:1 ratio) were recruited. Electroencephalographic (EEG) activity was recorded while participants performed two simultaneous tasks: a motor task (30-second arm-curl test) and a cognitive task (remembering three unrelated words). The theta (4–7 Hz), alpha (8–12 Hz), and beta (13–30 Hz) frequency bands were analyzed using EEGLAB. Results Significant differences were obtained in the healthy control group between single task (ST) and DT in the theta, alpha, and beta frequency bands (p-value < 0.05). Neurophysiological differences between ST and DT were not found in women with FM. In addition, between-group differences were found in the alpha and beta frequency bands between the healthy and FM groups, with lower values of beta and alpha in the FM group. Therefore, significant group × condition interactions were detected in the alpha and beta frequency bands. Regarding physical performance, between-group analyses showed that women with FM obtained significantly worse results in the arm-curl test than healthy controls, in both ST and DT. Conclusion Women with FM showed the same electrical brain activity pattern during ST and DT conditions, whereas healthy controls seem to adapt their brain activity to task commitment. This is the first study to investigate the neurophysiological response of women with FM while simultaneously performing a motor and a cognitive task.
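As a rough illustration of the band-power analysis above (assumed parameters and synthetic data; the study used EEGLAB on multichannel recordings), the theta, alpha, and beta power of a single channel can be computed from a Welch power spectral density:

```python
import numpy as np
from scipy.signal import welch

fs = 256                                   # sampling rate (assumed)
rng = np.random.default_rng(2)
t = np.arange(0, 30, 1 / fs)
eeg = rng.normal(size=t.size) + 2.0 * np.sin(2 * np.pi * 10 * t)  # alpha-rich

f, psd = welch(eeg, fs=fs, nperseg=fs * 2)   # 2 s Welch segments

def band_power(f, psd, lo, hi):
    """Integrate the PSD between lo and hi Hz (rectangle rule)."""
    sel = (f >= lo) & (f <= hi)
    return psd[sel].sum() * (f[1] - f[0])

bands = {"theta": (4, 7), "alpha": (8, 12), "beta": (13, 30)}
powers = {name: band_power(f, psd, lo, hi) for name, (lo, hi) in bands.items()}
print(powers)
```

With per-band powers in hand, the ST-vs-DT and group × condition comparisons reported above reduce to standard statistics on these values.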
Affiliation(s)
- Santos Villafaina: Physical Activity and Quality of Life Research Group (AFYCAV), Faculty of Sport Sciences, University of Extremadura, Cáceres, Spain
- Ricardo Cano-Plasencia: Physical Activity and Quality of Life Research Group (AFYCAV), Faculty of Sport Sciences, University of Extremadura, Cáceres, Spain; Clinical Neurophysiology, San Pedro de Alcántara Hospital, Cáceres, Spain
- Narcis Gusi: Physical Activity and Quality of Life Research Group (AFYCAV), Faculty of Sport Sciences, University of Extremadura, Cáceres, Spain

40
Auditory Mapping With MEG: An Update on the Current State of Clinical Research and Practice With Considerations for Clinical Practice Guidelines. J Clin Neurophysiol 2020; 37:574-584. DOI: 10.1097/wnp.0000000000000518.
41
Thézé R, Giraud AL, Mégevand P. The phase of cortical oscillations determines the perceptual fate of visual cues in naturalistic audiovisual speech. Sci Adv 2020; 6(45):eabc6348. PMID: 33148648. PMCID: PMC7673697. DOI: 10.1126/sciadv.abc6348.
Abstract
When we see our interlocutor, our brain seamlessly extracts visual cues from their face and processes them along with the sound of their voice, making speech an intrinsically multimodal signal. Visual cues are especially important in noisy environments, when the auditory signal is less reliable. Neuronal oscillations might be involved in the cortical processing of audiovisual speech by selecting which sensory channel contributes more to perception. To test this, we designed computer-generated naturalistic audiovisual speech stimuli where one mismatched phoneme-viseme pair in a key word of sentences created bistable perception. Neurophysiological recordings (high-density scalp and intracranial electroencephalography) revealed that the precise phase angle of theta-band oscillations in posterior temporal and occipital cortex of the right hemisphere was crucial to select whether the auditory or the visual speech cue drove perception. We demonstrate that the phase of cortical oscillations acts as an instrument for sensory selection in audiovisual speech processing.
Affiliation(s)
- Raphaël Thézé: Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, 1202 Geneva, Switzerland
- Anne-Lise Giraud: Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, 1202 Geneva, Switzerland
- Pierre Mégevand: Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, 1202 Geneva, Switzerland; Division of Neurology, Department of Clinical Neurosciences, Geneva University Hospitals, 1205 Geneva, Switzerland

42
The relation between neurofunctional and neurostructural determinants of phonological processing in pre-readers. Dev Cogn Neurosci 2020; 46:100874. PMID: 33130464. PMCID: PMC7606842. DOI: 10.1016/j.dcn.2020.100874.
Abstract
Phonological processing skills are known as the most robust cognitive predictor of reading ability. Therefore, the neural determinants of phonological processing have been extensively investigated by means of either neurofunctional or neurostructural techniques. However, to fully understand how the brain represents and processes phonological information, there is a need for studies that combine both methods. The present study applies such a multimodal approach with the aim of investigating the pre-reading relation between neural measures of auditory temporal processing, white matter properties of the reading network, and phonological processing skills. We administered auditory steady-state responses, diffusion-weighted MRI scans, and phonological awareness tasks in 59 pre-readers. Our results demonstrate that a stronger rightward lateralization of syllable-rate (4 Hz) processing coheres with higher fractional anisotropy in the left fronto-temporoparietal arcuate fasciculus. Each of these neural features in turn relates to better phonological processing skills. As such, the current study provides novel evidence for the existence of a pre-reading relation between functional measures of syllable-rate processing, the structural organization of the arcuate fasciculus, and cognitive precursors of reading development. Moreover, our findings demonstrate the value of combining different neural techniques to gain insight into the underlying neural systems for reading (dis)ability.
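The "rightward lateralization" referred to above is commonly expressed as a laterality index on response power at the stimulation rate, LI = (R - L) / (R + L), where positive values indicate rightward dominance. The numbers below are made up for illustration.

```python
def laterality_index(right_power: float, left_power: float) -> float:
    """Laterality index: +1 fully rightward, -1 fully leftward, 0 bilateral."""
    return (right_power - left_power) / (right_power + left_power)

# Hypothetical 4 Hz steady-state response power, right vs. left hemisphere.
print(laterality_index(3.0, 1.0))  # -> 0.5, i.e. rightward lateralization
```

The resulting per-child index is the kind of scalar that can then be correlated with fractional anisotropy and phonological awareness scores.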
43
Kegler M, Reichenbach T. Modelling the effects of transcranial alternating current stimulation on the neural encoding of speech in noise. Neuroimage 2020; 224:117427. PMID: 33038540. DOI: 10.1016/j.neuroimage.2020.117427.
Abstract
Transcranial alternating current stimulation (tACS) can non-invasively modulate neuronal activity in the cerebral cortex, in particular at the frequency of the applied stimulation. Such modulation can matter for speech processing, since the latter involves the tracking of slow amplitude fluctuations in speech by cortical activity. tACS with a current signal that follows the envelope of a speech stimulus has indeed been found to influence cortical tracking and to modulate the comprehension of speech in background noise. However, how exactly tACS influences the speech-related cortical activity, and how it causes the observed effects on speech comprehension, remains poorly understood. A computational model of cortical speech processing in a biophysically plausible spiking neural network has recently been proposed. Here we extended the model to investigate the effects of different types of stimulation waveforms, similar to those previously applied in experimental studies, on the processing of speech in noise. We assessed, in particular, how well speech could be decoded from the neural network activity when paired with the exogenous stimulation. We found that, in the absence of current stimulation, the speech-in-noise decoding accuracy was comparable to the comprehension of speech in background noise by human listeners. We further found that current stimulation could alter the speech decoding accuracy by a few percent, comparable to the effects of tACS on speech-in-noise comprehension. Our simulations further allowed us to identify the parameters of the stimulation waveforms that yielded the largest enhancement of speech-in-noise encoding. Our model thereby provides insight into the potential neural mechanisms by which weak alternating current stimulation may influence speech comprehension, and it allows screening a large range of stimulation waveforms for their effect on speech processing.
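A toy, single-neuron stand-in for the mechanism modelled above (the paper used a full spiking network; all parameters here are invented): a leaky integrate-and-fire neuron receiving noisy drive plus a weak sinusoid at the stimulation frequency fires spikes that phase-lock to the stimulation.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, tau, v_th, v_reset = 1e-3, 0.02, 1.0, 0.0   # step, membrane constant, thresholds
f_stim, duration = 4.0, 60.0                    # 4 Hz "tACS" proxy, 60 s run

v, spikes = 0.0, []
for i in range(int(duration / dt)):
    t = i * dt
    drive = 1.05 + 0.10 * np.sin(2 * np.pi * f_stim * t)  # mean drive + weak sinusoid
    v += dt * (-v + drive) / tau + 0.5 * np.sqrt(dt) * rng.normal()
    if v >= v_th:                                # threshold crossing -> spike
        spikes.append(t)
        v = v_reset

# Phase-locking value of spike times relative to the stimulation cycle.
phases = (2 * np.pi * f_stim * np.array(spikes)) % (2 * np.pi)
plv = np.abs(np.exp(1j * phases).mean())
print(f"{len(spikes)} spikes, PLV = {plv:.2f}")
```

The nonzero phase-locking value shows the basic entrainment effect that, in the full model, shapes how well speech can be decoded from the network.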
Affiliation(s)
- Mikolaj Kegler: Department of Bioengineering and Centre for Neurotechnology, Imperial College London, South Kensington Campus, SW7 2BU London, United Kingdom
- Tobias Reichenbach: Department of Bioengineering and Centre for Neurotechnology, Imperial College London, South Kensington Campus, SW7 2BU London, United Kingdom

44
Kuiper JJ, Lin YH, Young IM, Bai MY, Briggs RG, Tanglay O, Fonseka RD, Hormovas J, Dhanaraj V, Conner AK, O'Neal CM, Sughrue ME. A parcellation-based model of the auditory network. Hear Res 2020; 396:108078. PMID: 32961519. DOI: 10.1016/j.heares.2020.108078.
Abstract
INTRODUCTION The auditory network plays an important role in interaction with the environment. Multiple cortical areas, such as the inferior frontal gyrus, superior temporal gyrus and adjacent insula have been implicated in this processing. However, understanding of this network's connectivity has been devoid of tractography specificity. METHODS Using attention task-based functional magnetic resonance imaging (MRI) studies, an activation likelihood estimation (ALE) of the auditory network was generated. Regions of interest corresponding to the cortical parcellation scheme previously published under the Human Connectome Project were co-registered onto the ALE in the Montreal Neurological Institute coordinate space, and visually assessed for inclusion in the network. Diffusion spectrum MRI-based fiber tractography was performed to determine the structural connections between cortical parcellations comprising the network. RESULTS Fifteen cortical regions were found to be part of the auditory network: areas 44 and 8C, auditory area 1, 4, and 5, frontal operculum area 4, the lateral belt, medial belt and parabelt, parietal area F centromedian, perisylvian language area, retroinsular cortex, supplementary and cingulate eye field and the temporoparietal junction area 1. These regions showed consistent interconnections between adjacent parcellations. The frontal aslant tract was found to connect areas within the frontal lobe, while the arcuate fasciculus was found to connect the frontal and temporal lobe, and subcortical U-fibers were found to connect parcellations within the temporal area. Further studies may refine this model with the ultimate goal of clinical application.
Affiliation(s)
- Joseph J Kuiper: Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, United States
- Yueh-Hsin Lin: Centre for Minimally Invasive Neurosurgery, Prince of Wales Private Hospital, Suite 19, Level 7, Randwick, Sydney, NSW 2031, Australia
- Michael Y Bai: Centre for Minimally Invasive Neurosurgery, Prince of Wales Private Hospital, Suite 19, Level 7, Randwick, Sydney, NSW 2031, Australia
- Robert G Briggs: Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, United States
- Onur Tanglay: Centre for Minimally Invasive Neurosurgery, Prince of Wales Private Hospital, Suite 19, Level 7, Randwick, Sydney, NSW 2031, Australia
- R Dineth Fonseka: Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, United States
- Jorge Hormovas: Centre for Minimally Invasive Neurosurgery, Prince of Wales Private Hospital, Suite 19, Level 7, Randwick, Sydney, NSW 2031, Australia
- Vukshitha Dhanaraj: Centre for Minimally Invasive Neurosurgery, Prince of Wales Private Hospital, Suite 19, Level 7, Randwick, Sydney, NSW 2031, Australia
- Andrew K Conner: Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, United States
- Christen M O'Neal: Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, United States
- Michael E Sughrue: Centre for Minimally Invasive Neurosurgery, Prince of Wales Private Hospital, Suite 19, Level 7, Randwick, Sydney, NSW 2031, Australia

45
Risueno-Segovia C, Hage SR. Theta Synchronization of Phonatory and Articulatory Systems in Marmoset Monkey Vocal Production. Curr Biol 2020; 30:4276-4283.e3. PMID: 32888481. DOI: 10.1016/j.cub.2020.08.019.
Abstract
Human speech shares a 3-8-Hz theta rhythm across all languages [1-3]. According to the frame/content theory of speech evolution, this rhythm corresponds to syllabic rates derived from natural mandibular-associated oscillations [4]. The underlying pattern originates from oscillatory movements of articulatory muscles [4, 5] tightly linked to periodic vocal fold vibrations [4, 6, 7]. Such phono-articulatory rhythms have been proposed as one of the crucial preadaptations for human speech evolution [3, 8, 9]. However, the evolutionary link in phono-articulatory rhythmicity between vertebrate vocalization and human speech remains unclear. From the phonatory perspective, theta oscillations might be phylogenetically preserved throughout all vertebrate clades [10-12]. From the articulatory perspective, theta oscillations are present in non-vocal lip smacking [1, 13, 14], teeth chattering [15], vocal lip smacking [16], and clicks and faux-speech [17] in non-human primates, potential evolutionary precursors for speech rhythmicity [1, 13]. Notably, a universal phono-articulatory rhythmicity similar to that in human speech is considered to be absent in non-human primate vocalizations, typically produced with sound modulations lacking concomitant articulatory movements [1, 9, 18]. Here, we challenge this view by investigating the coupling of phonatory and articulatory systems in marmoset vocalizations. Using quantitative measures of acoustic call structure, e.g., amplitude envelope, and call-associated articulatory movements, i.e., inter-lip distance, we show that marmosets display speech-like bi-motor rhythmicity. These oscillations are synchronized and phase locked at theta rhythms. Our findings suggest that oscillatory rhythms underlying speech production evolved early in the primate lineage, identifying marmosets as a suitable animal model to decipher the evolutionary and neural basis of coupled phono-articulatory movements.
Affiliation(s)
- Cristina Risueno-Segovia: Neurobiology of Social Communication, Department of Otolaryngology, Head and Neck Surgery, Hearing Research Centre, University of Tübingen Medical Center, Elfriede-Aulhorn-Str. 5, 72076 Tübingen, Germany; Werner Reichardt Centre for Integrative Neuroscience, University of Tübingen, Otfried-Müller-Str. 25, 72076 Tübingen, Germany; Graduate School of Neural & Behavioural Sciences - International Max Planck Research School, University of Tübingen, Österberg-Str. 3, 72074 Tübingen, Germany
- Steffen R Hage: Neurobiology of Social Communication, Department of Otolaryngology, Head and Neck Surgery, Hearing Research Centre, University of Tübingen Medical Center, Elfriede-Aulhorn-Str. 5, 72076 Tübingen, Germany; Werner Reichardt Centre for Integrative Neuroscience, University of Tübingen, Otfried-Müller-Str. 25, 72076 Tübingen, Germany

46
Abstract
Comparative studies on brain asymmetry date back to the 19th century but then largely disappeared due to the assumption that lateralization is uniquely human. Since the reemergence of this field in the 1970s, we learned that left-right differences of brain and behavior exist throughout the animal kingdom and pay off in terms of sensory, cognitive, and motor efficiency. Ontogenetically, lateralization starts in many species with asymmetrical expression patterns of genes within the Nodal cascade that set up the scene for later complex interactions of genetic, environmental, and epigenetic factors. These take effect during different time points of ontogeny and create asymmetries of neural networks in diverse species. As a result, depending on task demands, left- or right-hemispheric loops of feedforward or feedback projections are then activated and can temporarily dominate a neural process. In addition, asymmetries of commissural transfer can shape lateralized processes in each hemisphere. It is still unclear if interhemispheric interactions depend on an inhibition/excitation dichotomy or instead adjust the contralateral temporal neural structure to delay the other hemisphere or synchronize with it during joint action. As outlined in our review, novel animal models and approaches could be established in the last decades, and they already produced a substantial increase of knowledge. Since there is practically no realm of human perception, cognition, emotion, or action that is not affected by our lateralized neural organization, insights from these comparative studies are crucial to understand the functions and pathologies of our asymmetric brain.
Affiliation(s)
- Onur Güntürkün: Department of Biopsychology, Institute of Cognitive Neuroscience, Ruhr University Bochum, Bochum, Germany
- Felix Ströckens: Department of Biopsychology, Institute of Cognitive Neuroscience, Ruhr University Bochum, Bochum, Germany
- Sebastian Ocklenburg: Department of Biopsychology, Institute of Cognitive Neuroscience, Ruhr University Bochum, Bochum, Germany

47
Moinnereau MA, Rouat J, Whittingstall K, Plourde E. A frequency-band coupling model of EEG signals can capture features from an input audio stimulus. Hear Res 2020; 393:107994. DOI: 10.1016/j.heares.2020.107994.
48
Abstract
Hierarchical structure and compositionality imbue human language with unparalleled expressive power and set it apart from other perception–action systems. However, neither formal nor neurobiological models account for how these defining computational properties might arise in a physiological system. I attempt to reconcile hierarchy and compositionality with principles from cell assembly computation in neuroscience; the result is an emerging theory of how the brain could convert distributed perceptual representations into hierarchical structures across multiple timescales while representing interpretable incremental stages of (de)compositional meaning. The model's architecture—a multidimensional coordinate system based on neurophysiological models of sensory processing—proposes that a manifold of neural trajectories encodes sensory, motor, and abstract linguistic states. Gain modulation, including inhibition, tunes the path in the manifold in accordance with behavior and is how latent structure is inferred. As a consequence, predictive information about upcoming sensory input during production and comprehension is available without a separate operation. The proposed processing mechanism is synthesized from current models of neural entrainment to speech, concepts from systems neuroscience and category theory, and a symbolic-connectionist computational model that uses time and rhythm to structure information. I build on evidence from cognitive neuroscience and computational modeling that suggests a formal and mechanistic alignment between structure building and neural oscillations, and moves toward unifying basic insights from linguistics and psycholinguistics with the currency of neural computation.
Affiliation(s)
- Andrea E. Martin
- Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- Donders Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, The Netherlands
|
49
|
Sheng J, Zheng L, Lyu B, Cen Z, Qin L, Tan LH, Huang MX, Ding N, Gao JH. The Cortical Maps of Hierarchical Linguistic Structures during Speech Perception. Cereb Cortex 2020; 29:3232-3240. [PMID: 30137249 DOI: 10.1093/cercor/bhy191] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2018] [Revised: 06/27/2018] [Accepted: 07/20/2018] [Indexed: 11/14/2022] Open
Abstract
The hierarchical nature of language requires the human brain to internally parse connected speech and incrementally construct abstract linguistic structures. Recent research revealed multiple neural processing timescales underlying grammar-based configuration of linguistic hierarchies. However, little is known about where in the cerebral cortex such temporally scaled neural processes occur. This study used novel magnetoencephalography source imaging techniques combined with a unique language stimulation paradigm to segregate cortical maps synchronized to 3 levels of linguistic units (i.e., words, phrases, and sentences). Notably, distinct ensembles of cortical loci were identified that track structures at different levels. The superior temporal gyrus was found to be involved in processing all 3 linguistic levels, while distinct ensembles of other brain regions were recruited to encode each linguistic level. Neural activities in the right motor cortex only followed the rhythm of monosyllabic words, which have clear acoustic boundaries, whereas the left anterior temporal lobe and the left inferior frontal gyrus were selectively recruited in processing phrases or sentences. Our results ground the multi-timescale hierarchical neural processing of speech in neuroanatomical reality, with specific sets of cortices responsible for different levels of linguistic units.
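The frequency-tagging logic behind this paradigm can be illustrated with a minimal synthetic sketch: when syllables are presented at a fixed rate (e.g., 4 Hz) and grammar groups them into two-word phrases (2 Hz) and four-word sentences (1 Hz), neural tracking of each linguistic level appears as a spectral peak at the corresponding rate. This is not the authors' MEG pipeline; the signal, rates, and amplitudes below are all assumptions for illustration.

```python
import numpy as np

fs = 100.0            # sampling rate (Hz), assumed
dur = 50.0            # trial duration (s), giving 1/dur = 0.02 Hz resolution
t = np.arange(0, dur, 1 / fs)

# Synthetic "response" with components at the sentence (1 Hz),
# phrase (2 Hz), and word (4 Hz) rates, plus white noise.
rng = np.random.default_rng(0)
signal = (1.0 * np.sin(2 * np.pi * 1 * t)
          + 0.7 * np.sin(2 * np.pi * 2 * t)
          + 0.5 * np.sin(2 * np.pi * 4 * t)
          + 0.3 * rng.standard_normal(t.size))

# One-sided amplitude spectrum; a sine of amplitude A yields a peak of A/2.
spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def peak_amp(f_target):
    """Amplitude at the spectral bin closest to a target frequency."""
    return spectrum[np.argmin(np.abs(freqs - f_target))]

# Each linguistic rate shows a clear peak; off-rate bins stay near the
# noise floor, which is how the tagged levels are separated.
for f in (1.0, 2.0, 4.0):
    print(f"{f:.0f} Hz peak amplitude: {peak_amp(f):.3f}")
```

In the actual study this spectral separation is performed on source-localized MEG responses per cortical region, which is what yields the distinct cortical maps for words, phrases, and sentences.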
Affiliation(s)
- Jingwei Sheng
- Beijing City Key Lab for Medical Physics and Engineering, Institution of Heavy Ion Physics, School of Physics, Peking University, Beijing, China.,Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China.,McGovern Institute for Brain Research, Peking University, Beijing, China
- Li Zheng
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China.,McGovern Institute for Brain Research, Peking University, Beijing, China.,Department of Biomedical Engineering, Peking University, Beijing, China
- Bingjiang Lyu
- Centre for Speech, Language and the Brain, Department of Psychology, University of Cambridge, Cambridge, UK
- Zhehang Cen
- Beijing City Key Lab for Medical Physics and Engineering, Institution of Heavy Ion Physics, School of Physics, Peking University, Beijing, China.,Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China.,McGovern Institute for Brain Research, Peking University, Beijing, China
- Lang Qin
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China.,Department of Linguistics, The University of Hong Kong, Hong Kong, China
- Li Hai Tan
- Center for Brain Disorders and Cognitive Science, Shenzhen University, Shenzhen, Guangdong, China.,Center for Language and Brain, Shenzhen Institute of Neuroscience, Shenzhen, Guangdong, China
- Ming-Xiong Huang
- Department of Radiology, University of California, San Diego, CA, USA.,Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, CA, USA
- Nai Ding
- College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou, Zhejiang, China.,Key Laboratory for Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou, Zhejiang, China.,State Key Laboratory of Industrial Control Technology, Zhejiang University, Hangzhou, Zhejiang, China
- Jia-Hong Gao
- Beijing City Key Lab for Medical Physics and Engineering, Institution of Heavy Ion Physics, School of Physics, Peking University, Beijing, China.,Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China.,McGovern Institute for Brain Research, Peking University, Beijing, China.,Center for Language and Brain, Shenzhen Institute of Neuroscience, Shenzhen, Guangdong, China.,Shenzhen Key Laboratory of Affective and Social Cognitive Science, Institute of Affective and Social Neuroscience, Shenzhen University, Shenzhen, Guangdong, China
|
50
|
Music as a scaffold for listening to speech: Better neural phase-locking to song than speech. Neuroimage 2020; 214:116767. [DOI: 10.1016/j.neuroimage.2020.116767] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2019] [Revised: 03/18/2020] [Accepted: 03/19/2020] [Indexed: 11/23/2022] Open
|