1
Lad M, Taylor JP, Griffiths TD. The contribution of short-term memory for sound features to speech-in-noise perception and cognition. Hear Res 2024; 451:109081. [PMID: 39004015 DOI: 10.1016/j.heares.2024.109081]
Abstract
Speech-in-noise (SIN) perception is a fundamental ability that declines with aging, as does general cognition. We assessed whether auditory cognitive ability, in particular short-term memory for sound features, contributes to both. We examined how auditory memory for fundamental sound features, the carrier frequency and amplitude modulation rate of modulated white noise, contributes to SIN perception. We assessed SIN in 153 healthy participants with varying degrees of hearing loss using measures that require single-digit perception (the Digits-in-Noise, DIN) and sentence perception (Speech-in-Babble, SIB). Independent variables were auditory memory and a range of other factors including the Pure Tone Audiogram (PTA), a measure of dichotic pitch-in-noise perception (Huggins pitch), and demographic variables including age and sex. Multiple linear regression models were compared using Bayesian Model Comparison. The best predictor model for DIN included PTA and Huggins pitch (r2 = 0.32, p < 0.001), whereas the model for SIB additionally included auditory memory for sound features (r2 = 0.24, p < 0.001). Further analysis demonstrated that auditory memory also explained a significant portion of the variance (28%) in scores on a screening cognitive test for dementia. Auditory memory for non-speech sounds may therefore provide an important predictor of both SIN perception and cognitive ability.
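For illustration, here is a minimal sketch of the kind of model comparison described above, using a BIC-based approximation to Bayesian model comparison on simulated data; the predictor names and the BIC shortcut are assumptions for the example, not the authors' exact procedure.

```python
# Sketch: compare candidate linear models for a speech-in-noise score using a
# BIC approximation to Bayesian model comparison. Predictors (pta, huggins,
# aud_memory) are simulated placeholders, not the study's data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 153
pta = rng.normal(size=n)          # pure-tone average (simulated)
huggins = rng.normal(size=n)      # dichotic pitch-in-noise score (simulated)
aud_memory = rng.normal(size=n)   # auditory memory score (simulated)
sin_score = 0.4 * pta + 0.3 * huggins + 0.2 * aud_memory + rng.normal(size=n)

models = {
    "PTA only": np.column_stack([pta]),
    "PTA + Huggins": np.column_stack([pta, huggins]),
    "PTA + Huggins + memory": np.column_stack([pta, huggins, aud_memory]),
}

bics = {}
for name, X in models.items():
    fit = sm.OLS(sin_score, sm.add_constant(X)).fit()
    bics[name] = fit.bic

best = min(bics, key=bics.get)
for name, bic in bics.items():
    # Delta-BIC relative to the best model; exp(-0.5 * dBIC) approximates the Bayes factor.
    print(f"{name}: BIC={bic:.1f}, approx BF vs best={np.exp(-0.5 * (bic - bics[best])):.3f}")
```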
Affiliation(s)
- Meher Lad
- Translational and Clinical Research Institute, Newcastle University, Newcastle upon Tyne NE2 4HH, United Kingdom.
- John-Paul Taylor
- Translational and Clinical Research Institute, Newcastle University, Newcastle upon Tyne NE2 4HH, United Kingdom
- Timothy D Griffiths
- Biosciences Institute, Newcastle University, Newcastle upon Tyne NE2 4HH, United Kingdom; Wellcome Centre for Human Neuroimaging, University College London, London WC1N 3AR, United Kingdom
2
Samoylov I, Arcara G, Buyanova I, Davydova E, Pereverzeva D, Sorokin A, Tyushkevich S, Mamokhina U, Danilina K, Dragoy O, Arutiunian V. Altered neural synchronization in response to 2 Hz amplitude-modulated tones in the auditory cortex of children with Autism Spectrum Disorder: An MEG study. Int J Psychophysiol 2024; 203:112405. [PMID: 39053734 DOI: 10.1016/j.ijpsycho.2024.112405]
Abstract
OBJECTIVE: Some studies have hypothesized that atypical neural synchronization in the delta frequency band in the auditory cortex is associated with phonological and language skills in children with Autism Spectrum Disorder (ASD), but this relationship is still poorly understood. This study investigated this neural activity and addressed the relationships between the auditory response and behavioral measures in children with ASD. METHODS: We used magnetoencephalography and individual brain models to investigate the 2 Hz Auditory Steady-State Response (ASSR) in 20 primary-school-aged children with ASD and 20 age-matched typically developing (TD) controls. RESULTS: First, we found a between-group difference in the localization of the auditory response, such that the topography of the 2 Hz ASSR was more superior and posterior in TD children compared to children with ASD. Second, the power of the 2 Hz ASSR was reduced in the ASD group. Finally, we observed a significant association between the amplitude of the neural response and language skills in children with ASD. CONCLUSIONS: The study provides evidence of a reduced neural response in children with ASD and of its relation to language skills. SIGNIFICANCE: These findings may inform future interventions targeting auditory and language impairments in the ASD population.
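For readers unfamiliar with the measure, the following is a minimal sketch of how a 2 Hz auditory steady-state response could be quantified from epoched sensor data with a generic FFT-based estimate; the sampling rate, epoch count, and noise-bin SNR definition are illustrative assumptions, not the authors' MEG source-level pipeline.

```python
# Sketch: estimate 2 Hz ASSR amplitude from epoched responses to amplitude-modulated
# tones. Data are simulated; in practice `epochs` would come from MEG/EEG software.
import numpy as np

fs = 500                            # sampling rate in Hz (illustrative)
n_epochs, n_samples = 60, fs * 4    # 4-second epochs
t = np.arange(n_samples) / fs

rng = np.random.default_rng(1)
# Simulated data: a weak 2 Hz steady-state component buried in noise.
epochs = 0.5 * np.sin(2 * np.pi * 2 * t) + rng.normal(size=(n_epochs, n_samples))

evoked = epochs.mean(axis=0)              # averaging improves the phase-locked SNR
spectrum = np.abs(np.fft.rfft(evoked)) / n_samples
freqs = np.fft.rfftfreq(n_samples, d=1 / fs)

target = np.argmin(np.abs(freqs - 2.0))   # bin closest to the 2 Hz modulation rate
neighbors = np.r_[target - 5:target - 1, target + 2:target + 6]  # surrounding noise bins
snr = spectrum[target] / spectrum[neighbors].mean()
print(f"2 Hz ASSR amplitude: {spectrum[target]:.3f}, SNR vs neighboring bins: {snr:.1f}")
```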
Affiliation(s)
- Ilya Samoylov
- Center for Language and Brain, HSE University, Moscow, Russia.
- Irina Buyanova
- Center for Language and Brain, HSE University, Moscow, Russia; University of Otago, Dunedin, New Zealand
- Elizaveta Davydova
- Federal Resource Center for ASD, Moscow State University of Psychology and Education, Moscow, Russia; Chair of Differential Psychology and Psychophysiology, Moscow State University of Psychology and Education, Moscow, Russia
- Darya Pereverzeva
- Federal Resource Center for ASD, Moscow State University of Psychology and Education, Moscow, Russia
- Alexander Sorokin
- Federal Resource Center for ASD, Moscow State University of Psychology and Education, Moscow, Russia; Haskins Laboratories, New Haven, CT, USA
- Svetlana Tyushkevich
- Federal Resource Center for ASD, Moscow State University of Psychology and Education, Moscow, Russia
- Uliana Mamokhina
- Federal Resource Center for ASD, Moscow State University of Psychology and Education, Moscow, Russia
- Kamilla Danilina
- Federal Resource Center for ASD, Moscow State University of Psychology and Education, Moscow, Russia; Scientific Research and Practical Center for Pediatric Psychoneurology, Moscow, Russia
- Olga Dragoy
- Center for Language and Brain, HSE University, Moscow, Russia; Institute of Linguistics, Russian Academy of Sciences, Moscow, Russia
- Vardan Arutiunian
- Center for Child Health, Behavior and Development, Seattle Children's Research Institute, Seattle, WA, USA
3
Çetinçelik M, Jordan-Barros A, Rowland CF, Snijders TM. The effect of visual speech cues on neural tracking of speech in 10-month-old infants. Eur J Neurosci 2024; 60:5381-5399. [PMID: 39188179 DOI: 10.1111/ejn.16492]
Abstract
While infants' sensitivity to visual speech cues and the benefit of these cues have been well-established by behavioural studies, there is little evidence on the effect of visual speech cues on infants' neural processing of continuous auditory speech. In this study, we investigated whether visual speech cues, such as the movements of the lips, jaw, and larynx, facilitate infants' neural speech tracking. Ten-month-old Dutch-learning infants watched videos of a speaker reciting passages in infant-directed speech while electroencephalography (EEG) was recorded. In the videos, either the full face of the speaker was displayed or the speaker's mouth and jaw were masked with a block, obstructing the visual speech cues. To assess neural tracking, speech-brain coherence (SBC) was calculated, focusing particularly on the stress and syllabic rates (1-1.75 and 2.5-3.5 Hz respectively in our stimuli). First, overall, SBC was compared to surrogate data, and then, differences in SBC in the two conditions were tested at the frequencies of interest. Our results indicated that infants show significant tracking at both stress and syllabic rates. However, no differences were identified between the two conditions, meaning that infants' neural tracking was not modulated further by the presence of visual speech cues. Furthermore, we demonstrated that infants' neural tracking of low-frequency information is related to their subsequent vocabulary development at 18 months. Overall, this study provides evidence that infants' neural tracking of speech is not necessarily impaired when visual speech cues are not fully visible and that neural tracking may be a potential mechanism in successful language acquisition.
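A minimal sketch of the speech-brain coherence computation on simulated signals is shown below, assuming scipy's magnitude-squared coherence on a single channel; the frequency bands follow the stress- and syllable-rate ranges quoted in the abstract, while the signals, sampling rate, and window length are placeholders.

```python
# Sketch: magnitude-squared coherence between a speech amplitude envelope and an
# EEG channel, averaged within stress-rate (1-1.75 Hz) and syllable-rate
# (2.5-3.5 Hz) bands. Signals are simulated stand-ins for real recordings.
import numpy as np
from scipy.signal import coherence

fs = 250                               # common EEG sampling rate (illustrative)
t = np.arange(0, 120, 1 / fs)          # two minutes of "data"
rng = np.random.default_rng(2)

envelope = np.sin(2 * np.pi * 1.4 * t) + np.sin(2 * np.pi * 3.0 * t)   # toy envelope
eeg = 0.3 * envelope + rng.normal(size=t.size)                          # tracking + noise

freqs, coh = coherence(envelope, eeg, fs=fs, nperseg=fs * 8)

def band_mean(lo, hi):
    mask = (freqs >= lo) & (freqs <= hi)
    return coh[mask].mean()

print(f"stress-rate coherence (1-1.75 Hz):    {band_mean(1.0, 1.75):.3f}")
print(f"syllable-rate coherence (2.5-3.5 Hz): {band_mean(2.5, 3.5):.3f}")
```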
Affiliation(s)
- Melis Çetinçelik
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Department of Experimental Psychology, Utrecht University, Utrecht, The Netherlands
- Cognitive Neuropsychology Department, Tilburg University, Tilburg, The Netherlands
- Antonia Jordan-Barros
- Centre for Brain and Cognitive Development, Department of Psychological Science, Birkbeck, University of London, London, UK
- Experimental Psychology, University College London, London, UK
- Caroline F Rowland
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Tineke M Snijders
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Cognitive Neuropsychology Department, Tilburg University, Tilburg, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
4
Buzi G, Eustache F, Droit-Volet S, Desaunay P, Hinault T. Towards a neurodevelopmental cognitive perspective of temporal processing. Commun Biol 2024; 7:987. [PMID: 39143328 PMCID: PMC11324894 DOI: 10.1038/s42003-024-06641-4]
Abstract
The ability to organize and memorize the unfolding of events over time is a fundamental feature of cognition, which develops concurrently with the maturation of the brain. Nonetheless, how temporal processing evolves across the lifetime as well as the links with the underlying neural substrates remains unclear. Here, we intend to retrace the main developmental stages of brain structure, function, and cognition linked to the emergence of timing abilities. This neurodevelopmental perspective aims to untangle the puzzling trajectory of temporal processing aspects across the lifetime, paving the way to novel neuropsychological assessments and cognitive rehabilitation strategies.
Affiliation(s)
- Giulia Buzi
- Inserm, U1077, EPHE, UNICAEN, Normandie Université, PSL Université Paris, CHU de Caen, GIP Cyceron, Neuropsychologie et Imagerie de la Mémoire Humaine (NIMH), Caen, France
- Francis Eustache
- Inserm, U1077, EPHE, UNICAEN, Normandie Université, PSL Université Paris, CHU de Caen, GIP Cyceron, Neuropsychologie et Imagerie de la Mémoire Humaine (NIMH), Caen, France
- Sylvie Droit-Volet
- Université Clermont Auvergne, LAPSCO, CNRS, UMR 6024, Clermont-Ferrand, France
- Pierre Desaunay
- Inserm, U1077, EPHE, UNICAEN, Normandie Université, PSL Université Paris, CHU de Caen, GIP Cyceron, Neuropsychologie et Imagerie de la Mémoire Humaine (NIMH), Caen, France
- Service de Psychiatrie de l'enfant et de l'adolescent, CHU de Caen, Caen, France
- Thomas Hinault
- Inserm, U1077, EPHE, UNICAEN, Normandie Université, PSL Université Paris, CHU de Caen, GIP Cyceron, Neuropsychologie et Imagerie de la Mémoire Humaine (NIMH), Caen, France.
5
Lancia L. Instantaneous phase of rhythmic behaviour under volitional control. Hum Mov Sci 2024; 96:103249. [PMID: 39047306 DOI: 10.1016/j.humov.2024.103249]
Abstract
The phase of signals representing cyclic behavioural patterns provides valuable information for understanding the mechanisms driving the observed behaviours. Methods usually adopted to estimate the phase, which are based on projecting the signal onto the complex plane, have strict requirements on its frequency content, which limits their application. To overcome these limitations, input signals can be processed using band-pass filters or decomposition techniques. In this paper, we briefly review these approaches and propose a new one. Our approach is based on the principles of Empirical Mode Decomposition (EMD), but unlike EMD, it does not aim to decompose the input signal. This avoids the many problems that can occur when extracting a signal's components one by one. The proposed approach estimates the phase of experimental signals that have one main oscillatory component modulated by slower activity and perturbed by weak, sparse, or random activity at faster time scales. We illustrate how our approach works by estimating the phase dynamics of synthetic signals and real-world signals representing knee angles during flexion/extension activity, heel height during gait, and the activity of different organs involved in speech production.
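For reference, the projection-based approach that the paper seeks to improve on can be sketched as follows: band-pass the signal around its main oscillatory component and take the angle of the analytic signal (Hilbert transform). The filter settings and toy signal are illustrative assumptions; this is the standard baseline method, not the estimator proposed in the paper.

```python
# Sketch: instantaneous phase of a slow oscillation via the analytic signal.
# The classic approach requires a narrowband input, which is why broadband
# behavioural signals are usually band-pass filtered (or decomposed) first.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 200
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(3)
# Toy "knee angle": a ~1 Hz cycle with slow drift plus fast random perturbations.
signal = (np.sin(2 * np.pi * 1.0 * t) + 0.3 * np.sin(2 * np.pi * 0.1 * t)
          + 0.2 * rng.normal(size=t.size))

# Band-pass around the main oscillatory component before projecting onto the
# complex plane; a 4th-order zero-phase Butterworth filter is used here.
b, a = butter(4, [0.5, 2.0], btype="bandpass", fs=fs)
narrow = filtfilt(b, a, signal)

analytic = hilbert(narrow)               # analytic signal: narrow + i * Hilbert(narrow)
phase = np.unwrap(np.angle(analytic))    # instantaneous phase in radians
inst_freq = np.diff(phase) * fs / (2 * np.pi)
print(f"median instantaneous frequency: {np.median(inst_freq):.2f} Hz")
```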
Affiliation(s)
- Leonardo Lancia
- Laboratoire Parole et Langage, Aix-Marseille Université / CNRS, 5 av. Pasteur, 13100 Aix-en-Provence, France.
6
Sjuls GS, Harvei NN, Vulchanova MD. The relationship between neural phase entrainment and statistical word-learning: A scoping review. Psychon Bull Rev 2024; 31:1399-1419. [PMID: 38062317 PMCID: PMC11358248 DOI: 10.3758/s13423-023-02425-9]
Abstract
Statistical language-learning, the capacity to extract regularities from a continuous speech stream, arguably involves the ability to segment the stream before the discrete constituents can be stored in memory. According to recent accounts, the segmentation process is reflected in the alignment of neural activity to the statistical structure embedded in the input. However, the degree to which it can predict the subsequent learning outcome is currently unclear. As this is a relatively new avenue of research on statistical learning, a scoping review approach was adopted to identify and explore the current body of evidence on the use of neural phase entrainment as a measure of online neural statistical language-learning and its relation to the learning outcome, as well as the design characteristics of these studies. All included studies (11) observed entrainment to the underlying statistical pattern with exposure to the structured speech stream. A significant association between entrainment and learning outcome was observed in six of the studies. We discuss these findings in light of what neural entrainment in statistical word-learning experiments might represent, and speculate that it might reflect a general auditory processing mechanism, rather than segmentation of the speech stream per se. Lastly, as we find the current selection of studies to provide inconclusive evidence for neural entrainment's role in statistical learning, future research avenues are proposed.
Affiliation(s)
- Guro S Sjuls
- Department of Language and Literature, Norwegian University of Science and Technology, Dragvoll alle 6, 7049, Trondheim, Norway.
- Nora N Harvei
- Department of Language and Literature, Norwegian University of Science and Technology, Dragvoll alle 6, 7049, Trondheim, Norway
- Mila D Vulchanova
- Department of Language and Literature, Norwegian University of Science and Technology, Dragvoll alle 6, 7049, Trondheim, Norway
7
Guo Y, Li Y, Liu F, Lin H, Sun Y, Zhang J, Hong Q, Yao M, Chi X. Association between neural prosody discrimination and language abilities in toddlers: a functional near-infrared spectroscopy study. BMC Pediatr 2024; 24:449. [PMID: 38997661 PMCID: PMC11241962 DOI: 10.1186/s12887-024-04889-7]
Abstract
BACKGROUND: Language delay affects toddlers' near- and long-term social communication and learning, and an increasing number of experts are paying attention to it. The development of prosody discrimination is one of the earliest stages of language development, in which key skills for later stages are mastered. Therefore, analyzing the relationship between brain discrimination of speech prosody and language abilities may provide an objective basis for the diagnosis of and intervention in language delay. METHODS: In this study, all cases (n = 241) were enrolled from a tertiary women's hospital from 2021 to 2022. We used functional near-infrared spectroscopy (fNIRS) to assess children's neural prosody discrimination abilities, and a Chinese communicative development inventory (CCDI) was used to evaluate their language abilities. RESULTS: Ninety-eight full-term and 108 preterm toddlers were included in the final analysis in the phase I and II studies, respectively. The overall CCDI screening abnormality rate was 9.2% for full-term and 34.3% for preterm toddlers. Full-term toddlers showed prosody discrimination ability in all channels except channel 5, while preterm toddlers showed prosody discrimination ability in channel 6 only. Multifactorial logistic regression analyses showed that prosody discrimination of the right angular gyrus (channel 3) had a statistically significant effect on language delay (odds ratio = 0.301, P < 0.05) in full-term toddlers. A random forest (RF) regression model showed that prosody discrimination reflected by channels and brain regions based on fNIRS data was an important parameter for predicting language delay in preterm toddlers, among which prosody discrimination reflected by the right angular gyrus (channel 4) was the most important parameter. The area under the model's receiver operating characteristic (ROC) curve was 0.687. CONCLUSIONS: Neural prosody discrimination ability is positively associated with language development, and assessment of brain prosody discrimination abilities through fNIRS could be used as an objective indicator for early identification of children with language delay in future clinical applications.
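A minimal sketch of the kind of random-forest screening model described above, using scikit-learn on simulated per-channel prosody-discrimination scores; the channel count, labels, and the use of a classifier with ROC AUC are assumptions for the example, not the study's data or exact model.

```python
# Sketch: random-forest prediction of language delay from per-channel prosody
# discrimination scores, with feature importances and ROC AUC. All data are simulated.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n_children, n_channels = 108, 10
X = rng.normal(size=(n_children, n_channels))        # per-channel discrimination scores
logit = 1.5 * X[:, 3] + 0.5 * X[:, 5]                # channel 4 (index 3) made most informative
y = (logit + rng.normal(size=n_children) > 0).astype(int)   # 1 = language delay (simulated)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
top = int(np.argmax(model.feature_importances_)) + 1
print(f"ROC AUC: {auc:.3f}; most important channel: {top}")
```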
Affiliation(s)
- YanRu Guo
- Children's Healthcare Department, Women's Hospital of Nanjing Medical University (Nanjing Women and Children's Healthcare Hospital), Nanjing, China
- YanWei Li
- College of Early Childhood Education, Nanjing Xiaozhuang University, Nanjing, China
- FuLin Liu
- Southeast University, Nanjing, China
- HuanXi Lin
- Children's Healthcare Department, Women's Hospital of Nanjing Medical University (Nanjing Women and Children's Healthcare Hospital), Nanjing, China
- YuYing Sun
- Children's Healthcare Department, Women's Hospital of Nanjing Medical University (Nanjing Women and Children's Healthcare Hospital), Nanjing, China
- JiaLin Zhang
- State Key Laboratory of Reproductive Medicine and Offspring Health, Nanjing, China
- Qin Hong
- Children's Healthcare Department, Women's Hospital of Nanjing Medical University (Nanjing Women and Children's Healthcare Hospital), Nanjing, China
- MengMeng Yao
- Children's Healthcare Department, Women's Hospital of Nanjing Medical University (Nanjing Women and Children's Healthcare Hospital), Nanjing, China.
- Xia Chi
- Children's Healthcare Department, Women's Hospital of Nanjing Medical University (Nanjing Women and Children's Healthcare Hospital), Nanjing, China.
8
Erickson CA, Perez-Cano L, Pedapati EV, Painbeni E, Bonfils G, Schmitt LM, Sachs H, Nelson M, De Stefano L, Westerkamp G, de Souza ALS, Pohl O, Laufer O, Issachar G, Blaettler T, Hyvelin JM, Durham LA. Safety, Tolerability, and EEG-Based Target Engagement of STP1 (PDE3,4 Inhibitor and NKCC1 Antagonist) in a Randomized Clinical Trial in a Subgroup of Patients with ASD. Biomedicines 2024; 12:1430. [PMID: 39062003 PMCID: PMC11274259 DOI: 10.3390/biomedicines12071430]
Abstract
This study aimed to evaluate the safety and tolerability of STP1, a combination of ibudilast and bumetanide, tailored for the treatment of a clinically and biologically defined subgroup of patients with Autism Spectrum Disorder (ASD), namely ASD Phenotype 1 (ASD-Phen1). We conducted a randomized, double-blind, placebo-controlled, parallel-group phase 1b study with two 14-day treatment phases (registered at clinicaltrials.gov as NCT04644003). Nine ASD-Phen1 patients were administered STP1, while three received a placebo. We assessed safety and tolerability, along with electrophysiological markers, such as EEG, Auditory Habituation, and Auditory Chirp Synchronization, to better understand STP1's mechanism of action. Additionally, we used several clinical scales to measure treatment outcomes. The results showed that STP1 was well-tolerated, with electrophysiological markers indicating a significant and dose-related reduction of gamma power in the whole brain and in brain areas associated with executive function and memory. Treatment with STP1 also increased alpha 2 power in frontal and occipital regions and improved habituation and neural synchronization to auditory chirps. Although numerical improvements were observed in several clinical scales, they did not reach statistical significance. Overall, this study suggests that STP1 is well-tolerated in ASD-Phen1 patients and shows indirect target engagement in ASD brain regions of interest.
Affiliation(s)
- Craig A. Erickson
- Division of Child and Adolescent Psychiatry, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH 45229, USA
- Department of Psychiatry and Behavioral Neuroscience, University of Cincinnati, Cincinnati, OH 45229, USA
- Laura Perez-Cano
- Discovery and Data Science (DDS) Unit, STALICLA SL, Moll de Barcelona, s/n, Edif Este, 08039 Barcelona, Spain
- Ernest V. Pedapati
- Division of Child and Adolescent Psychiatry, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH 45229, USA
- Department of Psychiatry and Behavioral Neuroscience, University of Cincinnati, Cincinnati, OH 45229, USA
- Division of Child Neurology, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH 45229, USA
- Eric Painbeni
- Drug Development Unit (DDU), STALICLA SA, Campus Biotech Innovation Park, Avenue de Sécheron 15, 1202 Geneva, Switzerland
- Gregory Bonfils
- Drug Development Unit (DDU), STALICLA SA, Campus Biotech Innovation Park, Avenue de Sécheron 15, 1202 Geneva, Switzerland
- Lauren M. Schmitt
- Division of Behavioral Medicine and Clinical Psychology, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH 45229, USA
- Department of Pediatrics, College of Medicine, University of Cincinnati, Cincinnati, OH 45229, USA
- Hannah Sachs
- Division of Child and Adolescent Psychiatry, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH 45229, USA
- Meredith Nelson
- Division of Behavioral Medicine and Clinical Psychology, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH 45229, USA
- Lisa De Stefano
- Division of Behavioral Medicine and Clinical Psychology, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH 45229, USA
- Grace Westerkamp
- Division of Child and Adolescent Psychiatry, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH 45229, USA
- Adriano L. S. de Souza
- Drug Development Unit (DDU), STALICLA SA, Campus Biotech Innovation Park, Avenue de Sécheron 15, 1202 Geneva, Switzerland
- Oliver Pohl
- Drug Development Unit (DDU), STALICLA SA, Campus Biotech Innovation Park, Avenue de Sécheron 15, 1202 Geneva, Switzerland
- Thomas Blaettler
- Drug Development Unit (DDU), STALICLA SA, Campus Biotech Innovation Park, Avenue de Sécheron 15, 1202 Geneva, Switzerland
- Jean-Marc Hyvelin
- Drug Development Unit (DDU), STALICLA SA, Campus Biotech Innovation Park, Avenue de Sécheron 15, 1202 Geneva, Switzerland
- Lynn A. Durham
- Drug Development Unit (DDU), STALICLA SA, Campus Biotech Innovation Park, Avenue de Sécheron 15, 1202 Geneva, Switzerland
9
Nora A, Rinkinen O, Renvall H, Service E, Arkkila E, Smolander S, Laasonen M, Salmelin R. Impaired Cortical Tracking of Speech in Children with Developmental Language Disorder. J Neurosci 2024; 44:e2048232024. [PMID: 38589232 PMCID: PMC11140678 DOI: 10.1523/jneurosci.2048-23.2024]
Abstract
In developmental language disorder (DLD), learning to comprehend and express oneself with spoken language is impaired, but the reason for this remains unknown. Using millisecond-scale magnetoencephalography recordings combined with machine learning models, we investigated whether the possible neural basis of this disruption lies in poor cortical tracking of speech. The stimuli were common spoken Finnish words (e.g., dog, car, hammer) and sounds with corresponding meanings (e.g., dog bark, car engine, hammering). In both children with DLD (10 boys and 7 girls) and typically developing (TD) control children (14 boys and 3 girls), aged 10-15 years, the cortical activation to spoken words was best modeled as time-locked to the unfolding speech input at ∼100 ms latency between sound and cortical activation. Amplitude envelope (amplitude changes) and spectrogram (detailed time-varying spectral content) of the spoken words, but not other sounds, were very successfully decoded based on time-locked brain responses in bilateral temporal areas; based on the cortical responses, the models could tell at ∼75-85% accuracy which of the two sounds had been presented to the participant. However, the cortical representation of the amplitude envelope information was poorer in children with DLD compared with TD children at longer latencies (at ∼200-300 ms lag). We interpret this effect as reflecting poorer retention of acoustic-phonetic information in short-term memory. This impaired tracking could potentially affect the processing and learning of words as well as continuous speech. The present results offer an explanation for the problems in language comprehension and acquisition in DLD.
Affiliation(s)
- Anni Nora
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo FI-00076, Finland
- Aalto NeuroImaging (ANI), Aalto University, Espoo FI-00076, Finland
- Oona Rinkinen
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo FI-00076, Finland
- Aalto NeuroImaging (ANI), Aalto University, Espoo FI-00076, Finland
- Hanna Renvall
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo FI-00076, Finland
- Aalto NeuroImaging (ANI), Aalto University, Espoo FI-00076, Finland
- BioMag Laboratory, HUS Diagnostic Center, Helsinki University Hospital, Helsinki FI-00029, Finland
- Elisabet Service
- Department of Linguistics and Languages, Centre for Advanced Research in Experimental and Applied Linguistics (ARiEAL), McMaster University, Hamilton, Ontario L8S 4L8, Canada
- Department of Psychology and Logopedics, University of Helsinki, Helsinki FI-00014, Finland
- Eva Arkkila
- Department of Otorhinolaryngology and Phoniatrics, Head and Neck Center, Helsinki University Hospital and University of Helsinki, Helsinki FI-00014, Finland
- Sini Smolander
- Department of Otorhinolaryngology and Phoniatrics, Head and Neck Center, Helsinki University Hospital and University of Helsinki, Helsinki FI-00014, Finland
- Research Unit of Logopedics, University of Oulu, Oulu FI-90014, Finland
- Department of Logopedics, University of Eastern Finland, Joensuu FI-80101, Finland
- Marja Laasonen
- Department of Otorhinolaryngology and Phoniatrics, Head and Neck Center, Helsinki University Hospital and University of Helsinki, Helsinki FI-00014, Finland
- Department of Logopedics, University of Eastern Finland, Joensuu FI-80101, Finland
- Riitta Salmelin
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo FI-00076, Finland
- Aalto NeuroImaging (ANI), Aalto University, Espoo FI-00076, Finland
10
Endevelt-Shapira Y, Bosseler AN, Zhao TC, Mizrahi JC, Meltzoff AN, Kuhl PK. Heart-to-heart: infant heart rate at 3 months is linked to infant-directed speech, mother-infant interaction, and later language outcomes. Front Hum Neurosci 2024; 18:1380075. [PMID: 38756844 PMCID: PMC11096508 DOI: 10.3389/fnhum.2024.1380075]
Abstract
Introduction: Previous studies underscore the importance of speech input, particularly infant-directed speech (IDS) during one-on-one (1:1) parent-infant interaction, for child language development. We hypothesize that infants' attention to speech input, specifically IDS, supports language acquisition. In infants, attention and orienting responses are associated with heart rate deceleration. We examined whether individual differences in infants' heart rate measured during 1:1 mother-infant interaction are related to speech input and later language development scores in a longitudinal study. Methods: Using a sample of 31 3-month-olds, we assessed infant heart rate during mother-infant face-to-face interaction in a laboratory setting. Multiple measures of speech input were gathered at 3 months of age during naturally occurring interactions at home using the Language ENvironment Analysis (LENA) system. Language outcome measures were assessed in the same children at 30 months of age using the MacArthur-Bates Communicative Development Inventory (CDI). Results: Two novel findings emerged. First, we found that higher maternal IDS in a 1:1 context at home, as well as more mother-infant conversational turns at home, are associated with a lower heart rate measured during mother-infant social interaction in the laboratory. Second, we found significant associations between infant heart rate during mother-infant interaction in the laboratory at 3 months and prospective language development (CDI scores) at 30 months of age. Discussion: Considering the current results in conjunction with other converging theoretical and neuroscientific data, we argue that high IDS input in the context of 1:1 social interaction increases infants' attention to speech and that infants' attention to speech in early development fosters their prospective language growth.
Affiliation(s)
- Yaara Endevelt-Shapira
- Institute for Learning & Brain Sciences, University of Washington, Seattle, WA, United States
- Alexis N. Bosseler
- Institute for Learning & Brain Sciences, University of Washington, Seattle, WA, United States
- T. Christina Zhao
- Institute for Learning & Brain Sciences, University of Washington, Seattle, WA, United States
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, United States
- Julia C. Mizrahi
- Institute for Learning & Brain Sciences, University of Washington, Seattle, WA, United States
- Andrew N. Meltzoff
- Institute for Learning & Brain Sciences, University of Washington, Seattle, WA, United States
- Department of Psychology, University of Washington, Seattle, WA, United States
- Patricia K. Kuhl
- Institute for Learning & Brain Sciences, University of Washington, Seattle, WA, United States
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, United States
11
Aldag N, Nogueira W. Psychoacoustic and electroencephalographic responses to changes in amplitude modulation depth and frequency in relation to speech recognition in cochlear implantees. Sci Rep 2024; 14:8181. [PMID: 38589483 PMCID: PMC11002021 DOI: 10.1038/s41598-024-58225-1]
Abstract
Temporal envelope modulations (TEMs) are one of the most important features that cochlear implant (CI) users rely on to understand speech. Electroencephalographic assessment of TEM encoding could help clinicians to predict speech recognition more objectively, even in patients unable to provide active feedback. The acoustic change complex (ACC) and the auditory steady-state response (ASSR) evoked by low-frequency amplitude-modulated pulse trains can be used to assess TEM encoding with electrical stimulation of individual CI electrodes. In this study, we focused on amplitude modulation detection (AMD) and amplitude modulation frequency discrimination (AMFD) with stimulation of a basal versus an apical electrode. In twelve adult CI users, we (a) assessed behavioral AMFD thresholds and (b) recorded cortical auditory evoked potentials (CAEPs), AMD-ACC, AMFD-ACC, and ASSR in a combined 3-stimulus paradigm. We found that the electrophysiological responses were significantly higher for apical than for basal stimulation. Peak amplitudes of AMFD-ACC were small and (therefore) did not correlate with speech-in-noise recognition. We found significant correlations between speech-in-noise recognition and (a) behavioral AMFD thresholds and (b) AMD-ACC peak amplitudes. AMD and AMFD hold potential to develop a clinically applicable tool for assessing TEM encoding to predict speech recognition in CI users.
Affiliation(s)
- Nina Aldag
- Department of Otolaryngology, Hannover Medical School and Cluster of Excellence 'Hearing4all', Hanover, Germany
- Waldo Nogueira
- Department of Otolaryngology, Hannover Medical School and Cluster of Excellence 'Hearing4all', Hanover, Germany.
12
Ershaid H, Lizarazu M, McLaughlin D, Cooke M, Simantiraki O, Koutsogiannaki M, Lallier M. Contributions of listening effort and intelligibility to cortical tracking of speech in adverse listening conditions. Cortex 2024; 172:54-71. [PMID: 38215511 DOI: 10.1016/j.cortex.2023.11.018]
Abstract
Cortical tracking of speech is vital for speech segmentation and is linked to speech intelligibility. However, there is no clear consensus as to whether reduced intelligibility leads to a decrease or an increase in cortical speech tracking, warranting further investigation of the factors influencing this relationship. One such factor is listening effort, defined as the cognitive resources necessary for speech comprehension, and reported to have a strong negative correlation with speech intelligibility. Yet, no studies have examined the relationship between speech intelligibility, listening effort, and cortical tracking of speech. The aim of the present study was thus to examine these factors in quiet and distinct adverse listening conditions. Forty-nine normal hearing adults listened to sentences produced casually, presented in quiet and two adverse listening conditions: cafeteria noise and reverberant speech. Electrophysiological responses were registered with electroencephalogram, and listening effort was estimated subjectively using self-reported scores and objectively using pupillometry. Results indicated varying impacts of adverse conditions on intelligibility, listening effort, and cortical tracking of speech, depending on the preservation of the speech temporal envelope. The more distorted envelope in the reverberant condition led to higher listening effort, as reflected in higher subjective scores, increased pupil diameter, and stronger cortical tracking of speech in the delta band. These findings suggest that using measures of listening effort in addition to those of intelligibility is useful for interpreting cortical tracking of speech results. Moreover, reading and phonological skills of participants were positively correlated with listening effort in the cafeteria condition, suggesting a special role of expert language skills in processing speech in this noisy condition. Implications for future research and theories linking atypical cortical tracking of speech and reading disorders are further discussed.
Affiliation(s)
- Hadeel Ershaid
- Basque Center on Cognition, Brain and Language, San Sebastian, Spain.
- Mikel Lizarazu
- Basque Center on Cognition, Brain and Language, San Sebastian, Spain.
- Drew McLaughlin
- Basque Center on Cognition, Brain and Language, San Sebastian, Spain.
- Martin Cooke
- Ikerbasque, Basque Science Foundation, Bilbao, Spain.
- Marie Lallier
- Basque Center on Cognition, Brain and Language, San Sebastian, Spain; Ikerbasque, Basque Science Foundation, Bilbao, Spain.
13
Naghibi N, Jahangiri N, Khosrowabadi R, Eickhoff CR, Eickhoff SB, Coull JT, Tahmasian M. Embodying Time in the Brain: A Multi-Dimensional Neuroimaging Meta-Analysis of 95 Duration Processing Studies. Neuropsychol Rev 2024; 34:277-298. [PMID: 36857010 PMCID: PMC10920454 DOI: 10.1007/s11065-023-09588-1]
Abstract
Time is an omnipresent aspect of almost everything we experience internally or in the external world. The experience of time occurs through such an extensive set of contextual factors that, after decades of research, a unified understanding of its neural substrates is still elusive. In this study, following the recent best-practice guidelines, we conducted a coordinate-based meta-analysis of 95 carefully-selected neuroimaging papers of duration processing. We categorized the included papers into 14 classes of temporal features according to six categorical dimensions. Then, using the activation likelihood estimation (ALE) technique we investigated the convergent activation patterns of each class with a cluster-level family-wise error correction at p < 0.05. The regions most consistently activated across the various timing contexts were the pre-SMA and bilateral insula, consistent with an embodied theory of timing in which abstract representations of duration are rooted in sensorimotor and interoceptive experience, respectively. Moreover, class-specific patterns of activation could be roughly divided according to whether participants were timing auditory sequential stimuli, which additionally activated the dorsal striatum and SMA-proper, or visual single interval stimuli, which additionally activated the right middle frontal and inferior parietal cortices. We conclude that temporal cognition is so entangled with our everyday experience that timing stereotypically common combinations of stimulus characteristics reactivates the sensorimotor systems with which they were first experienced.
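For orientation, the core of the activation likelihood estimation (ALE) computation can be sketched as follows: each reported focus is blurred with a 3D Gaussian, per-experiment modeled activation maps are combined as one minus the product of their complements, and peaks are then tested against a null distribution (omitted here). The grid size, kernel width, and foci are toy assumptions; this is not the GingerALE implementation used in the paper.

```python
# Sketch: toy ALE map on a small 3D grid. Each experiment contributes a
# "modeled activation" (MA) map built from Gaussian kernels around its foci;
# the ALE value is the union of MA maps: 1 - prod(1 - MA_i).
import numpy as np
from scipy.ndimage import gaussian_filter

grid = (30, 30, 30)                        # toy voxel grid (illustrative)
sigma = 2.0                                # kernel width in voxels (illustrative)
experiments = [                            # fake foci, one list per experiment
    [(10, 12, 15), (20, 18, 14)],
    [(11, 13, 15)],
    [(22, 8, 20), (12, 12, 16)],
]

ma_maps = []
for foci in experiments:
    impulse = np.zeros(grid)
    for x, y, z in foci:
        impulse[x, y, z] = 1.0
    ma = gaussian_filter(impulse, sigma)   # Gaussian "uncertainty" around each focus
    ma_maps.append(np.clip(ma / ma.max(), 0, 1))

ale = 1.0 - np.prod([1.0 - ma for ma in ma_maps], axis=0)
peak = np.unravel_index(np.argmax(ale), grid)
print(f"peak ALE value {ale.max():.3f} at voxel {peak}")
```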
Affiliation(s)
- Narges Naghibi
- Institute for Cognitive and Brain Sciences, Shahid Beheshti University, Tehran, Iran
- Nadia Jahangiri
- Faculty of Psychology & Education, Allameh Tabataba'i University, Tehran, Iran
- Reza Khosrowabadi
- Institute for Cognitive and Brain Sciences, Shahid Beheshti University, Tehran, Iran
- Claudia R Eickhoff
- Institute of Neuroscience and Medicine Research, Structural and functional organisation of the brain (INM-1), Jülich Research Center, Jülich, Germany
- Institute of Clinical Neuroscience and Medical Psychology, Medical Faculty, Heinrich Heine University, Düsseldorf, Germany
- Simon B Eickhoff
- Institute of Neuroscience and Medicine Research, Brain and Behaviour (INM-7), Jülich Research Center, Wilhelm-Johnen-Straße, Jülich, Germany
- Institute for Systems Neuroscience, Medical Faculty, Heinrich-Heine University, Düsseldorf, Germany
- Jennifer T Coull
- Laboratoire de Neurosciences Cognitives (UMR 7291), Aix-Marseille Université & CNRS, Marseille, France
- Masoud Tahmasian
- Institute of Neuroscience and Medicine Research, Brain and Behaviour (INM-7), Jülich Research Center, Wilhelm-Johnen-Straße, Jülich, Germany.
- Institute for Systems Neuroscience, Medical Faculty, Heinrich-Heine University, Düsseldorf, Germany.
14
Yang Y, Zeng FG. Syllable-rate-adjusted-modulation (SRAM) predicts clear and conversational speech intelligibility. Front Hum Neurosci 2024; 18:1324027. [PMID: 38410256 PMCID: PMC10895021 DOI: 10.3389/fnhum.2024.1324027]
Abstract
Introduction: Objectively predicting speech intelligibility is important in both telecommunication and human-machine interaction systems. The classic method relies on signal-to-noise ratios (SNR) to successfully predict speech intelligibility. One exception is clear speech, in which a talker intentionally articulates as if speaking to someone who has hearing loss or is from a different language background. As a result, at the same SNR, clear speech produces higher intelligibility than conversational speech. Despite numerous efforts, no objective metric can successfully predict the clear speech benefit at the sentence level. Methods: We proposed a Syllable-Rate-Adjusted-Modulation (SRAM) index to predict the intelligibility of clear and conversational speech. The SRAM uses as little as 1 s of speech and estimates its modulation power above the syllable rate. We compared SRAM with three reference metrics: envelope-regression-based speech transmission index (ER-STI), hearing-aid speech perception index version 2 (HASPI-v2) and short-time objective intelligibility (STOI), and five automatic speech recognition systems: Amazon Transcribe, Microsoft Azure Speech-To-Text, Google Speech-To-Text, wav2vec2 and Whisper. Results: SRAM outperformed the three reference metrics (ER-STI, HASPI-v2 and STOI) and the five automatic speech recognition systems. Additionally, we demonstrated the important role of syllable rate in predicting speech intelligibility by comparing SRAM with the total modulation power (TMP) that was not adjusted by the syllable rate. Discussion: SRAM can potentially help understand the characteristics of clear speech, screen speech materials with high intelligibility, and convert conversational speech into clear speech.
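A minimal sketch of the general idea behind a syllable-rate-adjusted modulation measure follows: extract the amplitude envelope, compute its modulation spectrum, and sum the power above an estimated syllable rate. This is a rough reconstruction from the abstract, not the published SRAM definition; the envelope filter, syllable-rate estimate, and 32 Hz upper limit are placeholder assumptions.

```python
# Sketch: modulation power above the syllable rate for a short speech excerpt.
# The "speech" here is simulated; in practice a 1 s waveform would be loaded from file.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(5)
speech = rng.normal(size=t.size) * (1 + 0.8 * np.sin(2 * np.pi * 4 * t))  # ~4 Hz "syllables"

# Amplitude envelope: magnitude of the analytic signal, low-pass filtered at 32 Hz.
b, a = butter(4, 32, btype="low", fs=fs)
envelope = filtfilt(b, a, np.abs(hilbert(speech)))

spectrum = np.abs(np.fft.rfft(envelope - envelope.mean())) ** 2
freqs = np.fft.rfftfreq(envelope.size, d=1 / fs)

syllable_rate = freqs[1:][np.argmax(spectrum[1:])]   # crude estimate: dominant modulation
above = spectrum[(freqs > syllable_rate) & (freqs <= 32)].sum()
total = spectrum[(freqs > 0) & (freqs <= 32)].sum()
print(f"estimated syllable rate: {syllable_rate:.1f} Hz; "
      f"fraction of modulation power above it: {above / total:.2f}")
```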
Affiliation(s)
- Ye Yang
- Department of Biomedical Engineering, University of California, Irvine, Irvine, CA, United States
- Fan-Gang Zeng
- Department of Biomedical Engineering, University of California, Irvine, Irvine, CA, United States
- Department of Otolaryngology-Head and Neck Surgery, University of California, Irvine, Irvine, CA, United States
15
Smith TM, Shen Y, Williams CN, Kidd GR, McAuley JD. Contribution of speech rhythm to understanding speech in noisy conditions: Further test of a selective entrainment hypothesis. Atten Percept Psychophys 2024; 86:627-642. [PMID: 38012475 DOI: 10.3758/s13414-023-02815-0]
Abstract
Previous work by McAuley et al. Attention, Perception, & Psychophysics, 82, 3222-3233, (2020), Attention, Perception & Psychophysics, 83, 2229-2240, (2021) showed that disruption of the natural rhythm of target (attended) speech worsens speech recognition in the presence of competing background speech or noise (a target-rhythm effect), while disruption of background speech rhythm improves target recognition (a background-rhythm effect). While these results were interpreted as support for the role of rhythmic regularities in facilitating target-speech recognition amidst competing backgrounds (in line with a selective entrainment hypothesis), questions remain about the factors that contribute to the target-rhythm effect. Experiment 1 ruled out the possibility that the target-rhythm effect relies on a decrease in intelligibility of the rhythm-altered keywords. Sentences from the Coordinate Response Measure (CRM) paradigm were presented with a background of speech-shaped noise, and the rhythm of the initial portion of these target sentences (the target rhythmic context) was altered while critically leaving the target Color and Number keywords intact. Results showed a target-rhythm effect, evidenced by poorer keyword recognition when the target rhythmic context was altered, despite the absence of rhythmic manipulation of the keywords. Experiment 2 examined the influence of the relative onset asynchrony between target and background keywords. Results showed a significant target-rhythm effect that was independent of the effect of target-background keyword onset asynchrony. Experiment 3 provided additional support for the selective entrainment hypothesis by replicating the target-rhythm effect with a set of speech materials that were less rhythmically constrained than the CRM sentences.
Affiliation(s)
- Toni M Smith
- Department of Psychology, Michigan State University, East Lansing, MI, USA.
- Yi Shen
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
- Christina N Williams
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
- Gary R Kidd
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA
- J Devin McAuley
- Department of Psychology, Michigan State University, East Lansing, MI, USA
16
Çetinçelik M, Rowland CF, Snijders TM. Ten-month-old infants' neural tracking of naturalistic speech is not facilitated by the speaker's eye gaze. Dev Cogn Neurosci 2023; 64:101297. [PMID: 37778275 PMCID: PMC10543766 DOI: 10.1016/j.dcn.2023.101297]
Abstract
Eye gaze is a powerful ostensive cue in infant-caregiver interactions, with demonstrable effects on language acquisition. While the link between gaze following and later vocabulary is well-established, the effects of eye gaze on other aspects of language, such as speech processing, are less clear. In this EEG study, we examined the effects of the speaker's eye gaze on ten-month-old infants' neural tracking of naturalistic audiovisual speech, a marker for successful speech processing. Infants watched videos of a speaker telling stories, addressing the infant with direct or averted eye gaze. We assessed infants' speech-brain coherence at stress (1-1.75 Hz) and syllable (2.5-3.5 Hz) rates, tested for differences in attention by comparing looking times and EEG theta power in the two conditions, and investigated whether neural tracking predicts later vocabulary. Our results showed that infants' brains tracked the speech rhythm both at the stress and syllable rates, and that infants' neural tracking at the syllable rate predicted later vocabulary. However, speech-brain coherence did not significantly differ between direct and averted gaze conditions and infants did not show greater attention to direct gaze. Overall, our results suggest significant neural tracking at ten months, related to vocabulary development, but not modulated by speaker's gaze.
Affiliation(s)
- Melis Çetinçelik
- Department of Experimental Psychology, Utrecht University, Utrecht, the Netherlands; Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands.
- Caroline F Rowland
- Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Tineke M Snijders
- Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands; Cognitive Neuropsychology Department, Tilburg University, Tilburg, the Netherlands
17
Ni G, Xu Z, Bai Y, Zheng Q, Zhao R, Wu Y, Ming D. EEG-based assessment of temporal fine structure and envelope effect in mandarin syllable and tone perception. Cereb Cortex 2023; 33:11287-11299. [PMID: 37804238 DOI: 10.1093/cercor/bhad366]
Abstract
In recent years, speech perception research has benefited from low-frequency rhythmic entrainment tracking of the speech envelope. However, the respective roles of the speech envelope and the temporal fine structure in speech perception remain controversial, especially in Mandarin. This study aimed to assess the dependence of Mandarin syllable and tone perception on the speech envelope and the temporal fine structure. We recorded the electroencephalogram (EEG) of subjects under three acoustic conditions constructed with sound chimera analysis: (i) the original speech, (ii) the speech envelope with sinusoidal modulation, and (iii) the speech temporal fine structure with the envelope modulation of a non-speech (white noise) sound. We found that syllable perception mainly depended on the speech envelope, while tone perception depended on the temporal fine structure. Delta-band activity was prominent, and the parietal and prefrontal lobes were the main activated brain areas, regardless of whether syllable or tone perception was involved. Finally, we decoded the spatiotemporal features of Mandarin perception from the microstate sequence. The spatiotemporal feature sequence of the EEG evoked by the speech material was found to be specific, suggesting a new perspective for future auditory brain-computer interfaces. These results provide a new basis for coding strategies in hearing aids for native Mandarin speakers.
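A minimal sketch of how an envelope/fine-structure chimera can be constructed from two sounds via the analytic signal is given below; single-band processing is shown for brevity, whereas published chimeras are typically built per frequency band, so the band structure and toy stimuli are assumptions.

```python
# Sketch: single-band auditory chimera - the amplitude envelope of one sound
# carried on the temporal fine structure of another. Inputs are simulated noise
# and a modulated noise; real stimuli would be loaded from audio files.
import numpy as np
from scipy.signal import hilbert

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(6)
speech_like = rng.normal(size=t.size) * (1 + 0.9 * np.sin(2 * np.pi * 4 * t))  # "speech"
noise = rng.normal(size=t.size)                                                # "white noise"

def envelope_and_tfs(x):
    analytic = hilbert(x)
    env = np.abs(analytic)                 # amplitude envelope
    tfs = np.cos(np.angle(analytic))       # temporal fine structure (unit-amplitude carrier)
    return env, tfs

speech_env, speech_tfs = envelope_and_tfs(speech_like)
noise_env, noise_tfs = envelope_and_tfs(noise)

chimera_env = speech_env * noise_tfs       # keeps the speech envelope, discards speech TFS
chimera_tfs = noise_env * speech_tfs       # keeps the speech TFS, discards the speech envelope
print(chimera_env.shape, chimera_tfs.shape)
```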
Affiliation(s)
- Guangjian Ni
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072 China
- Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin 300072 China
- Haihe Laboratory of Brain-Computer Interaction and Human-Machine Integration, Tianjin 300392 China
- Zihao Xu
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072 China
- Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin 300072 China
- Yanru Bai
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072 China
- Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin 300072 China
- Qi Zheng
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072 China
- Ran Zhao
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072 China
- Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin 300072 China
- Yubo Wu
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072 China
- Dong Ming
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072 China
- Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin 300072 China
- Haihe Laboratory of Brain-Computer Interaction and Human-Machine Integration, Tianjin 300392 China
18
Ortiz-Barajas MC, Guevara R, Gervain J. Neural oscillations and speech processing at birth. iScience 2023; 26:108187. [PMID: 37965146 PMCID: PMC10641252 DOI: 10.1016/j.isci.2023.108187]
Abstract
Are neural oscillations biologically endowed building blocks of the neural architecture for speech processing from birth, or do they require experience to emerge? In adults, delta, theta, and low-gamma oscillations support the simultaneous processing of phrasal, syllabic, and phonemic units in the speech signal, respectively. Using electroencephalography to investigate neural oscillations in the newborn brain we reveal that delta and theta oscillations differ for rhythmically different languages, suggesting that these bands underlie newborns' universal ability to discriminate languages on the basis of rhythm. Additionally, higher theta activity during post-stimulus as compared to pre-stimulus rest suggests that stimulation after-effects are present from birth.
Affiliation(s)
- Maria Clemencia Ortiz-Barajas
- Integrative Neuroscience and Cognition Center, CNRS & Université Paris Cité, 45 rue des Saints-Pères, 75006 Paris, France
- Ramón Guevara
- Department of Physics and Astronomy, University of Padua, Via Marzolo 8, 35131 Padua, Italy
- Judit Gervain
- Integrative Neuroscience and Cognition Center, CNRS & Université Paris Cité, 45 rue des Saints-Pères, 75006 Paris, France
- Department of Developmental and Social Psychology, University of Padua, Via Venezia 8, 35131 Padua, Italy
19
Menn KH, Männel C, Meyer L. Phonological acquisition depends on the timing of speech sounds: Deconvolution EEG modeling across the first five years. Sci Adv 2023; 9:eadh2560. [PMID: 37910625 PMCID: PMC10619930 DOI: 10.1126/sciadv.adh2560]
Abstract
The late development of fast brain activity in infancy restricts initial processing abilities to slow information. Nevertheless, infants acquire the short-lived speech sounds of their native language during their first year of life. Here, we trace the early buildup of the infant phoneme inventory with naturalistic electroencephalogram. We apply the recent method of deconvolution modeling to capture the emergence of the feature-based phoneme representation that is known to govern speech processing in the mature brain. Our cross-sectional analysis uncovers a gradual developmental increase in neural responses to native phonemes. Critically, infants appear to acquire those phoneme features first that extend over longer time intervals-thus meeting infants' slow processing abilities. Shorter-lived phoneme features are added stepwise, with the shortest acquired last. Our study shows that the ontogenetic acceleration of electrophysiology shapes early language acquisition by determining the duration of the acquired units.
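A minimal sketch of deconvolution (temporal response function) modeling on simulated data follows: sparse phoneme-feature onsets are expanded into a lagged design matrix and ridge-regressed onto a continuous EEG channel. The feature set, lag range, and regularization strength are placeholder assumptions, not the authors' pipeline.

```python
# Sketch: estimate temporal response functions (TRFs) for two phoneme features by
# ridge-regressing lagged feature impulses onto a continuous EEG channel. Simulated data.
import numpy as np

fs = 100
n_samples, n_features = 60 * fs, 2           # one minute of data, two phoneme features
rng = np.random.default_rng(7)
stim = (rng.random((n_samples, n_features)) < 0.02).astype(float)  # sparse phoneme onsets
true_kernel = np.exp(-np.arange(30) / 10.0)                         # toy 300 ms response
eeg = sum(np.convolve(stim[:, f], true_kernel)[:n_samples] for f in range(n_features))
eeg = eeg + rng.normal(scale=0.5, size=n_samples)

lags = np.arange(0, 40)                      # 0-400 ms lags
# Design matrix: one column per (feature, lag) pair.
X = np.column_stack([np.roll(stim[:, f], lag) for f in range(n_features) for lag in lags])
X[: lags.max()] = 0                          # drop wrap-around introduced by np.roll

lam = 1.0                                    # ridge regularization strength (assumed)
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ eeg)
trfs = w.reshape(n_features, lags.size)      # one TRF per phoneme feature
print("TRF peak lags (ms):", trfs.argmax(axis=1) * 1000 / fs)
```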
Affiliation(s)
- Katharina H. Menn
- Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstr. 1a, 04103 Leipzig, Germany
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstr. 1a, 04103 Leipzig, Germany
- International Max Planck Research School on Neuroscience of Communication: Function, Structure, and Plasticity, Stephanstr 1a, 04103 Leipzig, Germany
| | - Claudia Männel
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstr. 1a, 04103 Leipzig, Germany
- Department of Audiology and Phoniatrics, Charité – Universitätsmedizin Berlin, Augustenburger Platz 1, 13353 Berlin, Germany
| | - Lars Meyer
- Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstr. 1a, 04103 Leipzig, Germany
- Clinic for Phoniatrics and Pedaudiology, University Hospital Münster, Albert-Schweitzer-Campus 1, 48149 Münster, Germany
| |
Collapse
|
20
|
Menn KH, Männel C, Meyer L. Does Electrophysiological Maturation Shape Language Acquisition? PERSPECTIVES ON PSYCHOLOGICAL SCIENCE 2023; 18:1271-1281. [PMID: 36753616 PMCID: PMC10623610 DOI: 10.1177/17456916231151584] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/10/2023]
Abstract
Infants master temporal patterns of their native language along a developmental trajectory from slow to fast: Shortly after birth, they recognize the slow acoustic modulations specific to their native language before tuning into faster language-specific patterns between 6 and 12 months of age. We propose here that this trajectory is constrained by neuronal maturation, in particular the gradual emergence of high-frequency neural oscillations in the infant electroencephalogram. Infants' initial focus on slow prosodic modulations is consistent with the prenatal availability of slow electrophysiological activity (i.e., theta- and delta-band oscillations). Our proposal is consistent with the temporal patterns of infant-directed speech, which initially amplifies slow modulations, approaching the faster modulation range of adult-directed speech only as infants' language has advanced sufficiently. Moreover, our proposal agrees with evidence from premature infants showing that maturational age is a stronger predictor of language development than ex utero exposure to speech, indicating that premature infants cannot exploit their earlier availability of speech because of electrophysiological constraints. In sum, we provide a new perspective on language acquisition emphasizing neuronal development as a critical driving force of infants' language development.
Collapse
Affiliation(s)
- Katharina H. Menn
- Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- International Max Planck Research School on Neuroscience of Communication: Function, Structure, and Plasticity, Leipzig, Germany
| | - Claudia Männel
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Department of Audiology and Phoniatrics, Charité – Universitätsmedizin Berlin, Berlin, Germany
| | - Lars Meyer
- Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Clinic for Phoniatrics and Pedaudiology, University Hospital Münster, Münster, Germany
| |
Collapse
|
21
|
Wang X, Delgado J, Marchesotti S, Kojovic N, Sperdin HF, Rihs TA, Schaer M, Giraud AL. Speech Reception in Young Children with Autism Is Selectively Indexed by a Neural Oscillation Coupling Anomaly. J Neurosci 2023; 43:6779-6795. [PMID: 37607822 PMCID: PMC10552944 DOI: 10.1523/jneurosci.0112-22.2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2022] [Revised: 07/02/2023] [Accepted: 07/07/2023] [Indexed: 08/24/2023] Open
Abstract
Communication difficulties are one of the core criteria in diagnosing autism spectrum disorder (ASD), and are often characterized by speech reception difficulties, whose biological underpinnings are not yet identified. This deficit could denote atypical neuronal ensemble activity, as reflected by neural oscillations. Atypical cross-frequency oscillation coupling, in particular, could disrupt the joint tracking and prediction of dynamic acoustic stimuli, a dual process that is essential for speech comprehension. Whether such oscillatory anomalies already exist in very young children with ASD, and with what specificity they relate to individual language reception capacity is unknown. We collected neural activity data using electroencephalography (EEG) in 64 very young children with and without ASD (mean age 3; 17 females, 47 males) while they were exposed to naturalistic-continuous speech. EEG power of frequency bands typically associated with phrase-level chunking (δ, 1-3 Hz), phonemic encoding (low-γ, 25-35 Hz), and top-down control (β, 12-20 Hz) was markedly reduced in ASD relative to typically developing (TD) children. Speech neural tracking by δ and θ (4-8 Hz) oscillations was also weaker in ASD compared with TD children. After controlling for gaze-pattern differences, we found that the classical θ/γ coupling was replaced by an atypical β/γ coupling in children with ASD. This anomaly was the single most specific predictor of individual speech reception difficulties in ASD children. These findings suggest that early interventions (e.g., neurostimulation) targeting the disruption of β/γ coupling and the upregulation of θ/γ coupling could improve speech processing coordination in young children with ASD and help them engage in oral interactions. SIGNIFICANCE STATEMENT Very young children already present marked alterations of neural oscillatory activity in response to natural speech at the time of autism spectrum disorder (ASD) diagnosis. Hierarchical processing of phonemic-range and syllabic-range information (θ/γ coupling) is disrupted in ASD children. Abnormal bottom-up (low-γ) and top-down (low-β) coordination specifically predicts speech reception deficits in very young ASD children, and no other cognitive deficit.
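The θ/γ and β/γ coupling described above is a form of phase-amplitude coupling. Below is a minimal sketch of one common PAC estimate (the mean-vector-length index) applied to synthetic data; the band edges follow the abstract, but the filter order, signal construction, and all other choices are assumptions, not the authors' analysis.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, fs, lo, hi, order=4):
    b, a = butter(order, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, x)

def pac_mvl(x, fs, phase_band, amp_band):
    """Mean-vector-length phase-amplitude coupling estimate."""
    phase = np.angle(hilbert(bandpass(x, fs, *phase_band)))  # low-frequency phase
    amp = np.abs(hilbert(bandpass(x, fs, *amp_band)))        # high-frequency amplitude
    return np.abs(np.mean(amp * np.exp(1j * phase)))

fs = 500
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(2)
theta = np.sin(2 * np.pi * 6 * t)                      # 6 Hz "theta" component
gamma = (1 + theta) * np.sin(2 * np.pi * 30 * t)       # 30 Hz gamma whose amplitude follows theta
eeg = theta + 0.5 * gamma + 0.3 * rng.standard_normal(t.size)

print("theta/gamma PAC:", pac_mvl(eeg, fs, (4, 8), (25, 35)))
print("beta/gamma  PAC:", pac_mvl(eeg, fs, (12, 20), (25, 35)))
```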
Collapse
Affiliation(s)
- Xiaoyue Wang
- Auditory Language Group, Department of Basic Neuroscience, University of Geneva, Geneva, Switzerland, 1202
- Institut Pasteur, Université Paris Cité, Hearing Institute, Paris, France, 75012
| | - Jaime Delgado
- Auditory Language Group, Department of Basic Neuroscience, University of Geneva, Geneva, Switzerland, 1202
| | - Silvia Marchesotti
- Auditory Language Group, Department of Basic Neuroscience, University of Geneva, Geneva, Switzerland, 1202
| | - Nada Kojovic
- Autism Brain & Behavior Lab, Department of Psychiatry, University of Geneva, Geneva, Switzerland, 1202
| | - Holger Franz Sperdin
- Autism Brain & Behavior Lab, Department of Psychiatry, University of Geneva, Geneva, Switzerland, 1202
| | - Tonia A Rihs
- Functional Brain Mapping Laboratory, Department of Basic Neuroscience, University of Geneva, Geneva, Switzerland, 1202
| | - Marie Schaer
- Autism Brain & Behavior Lab, Department of Psychiatry, University of Geneva, Geneva, Switzerland, 1202
| | - Anne-Lise Giraud
- Auditory Language Group, Department of Basic Neuroscience, University of Geneva, Geneva, Switzerland, 1202
- Institut Pasteur, Université Paris Cité, Hearing Institute, Paris, France, 75012
| |
Collapse
|
22
|
Daikoku T, Kumagaya S, Ayaya S, Nagai Y. Non-autistic persons modulate their speech rhythm while talking to autistic individuals. PLoS One 2023; 18:e0285591. [PMID: 37768917 PMCID: PMC10538692 DOI: 10.1371/journal.pone.0285591] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2022] [Accepted: 04/27/2023] [Indexed: 09/30/2023] Open
Abstract
How non-autistic persons modulate their speech rhythm while talking to autistic (AUT) individuals remains unclear. We investigated two types of phonological characteristics: (1) the frequency power of each prosodic, syllabic, and phonetic rhythm and (2) the dynamic interaction among these rhythms using speech between AUT and neurotypical (NT) individuals. Eight adults diagnosed with AUT (all men; age range, 24-44 years) and eight age-matched non-autistic NT adults (three women, five men; age range, 23-45 years) participated in this study. Six NT and eight AUT respondents were asked by one of the two NT questioners (both men) to share their recent experiences on 12 topics. We included 87 samples of AUT-directed speech (from an NT questioner to an AUT respondent), 72 of NT-directed speech (from an NT questioner to an NT respondent), 74 of AUT speech (from an AUT respondent to an NT questioner), and 55 of NT speech (from an NT respondent to an NT questioner). We found similarities between AUT speech and AUT-directed speech, and between NT speech and NT-directed speech. Prosody and interactions between prosodic, syllabic, and phonetic rhythms were significantly weaker in AUT-directed and AUT speech than in NT-directed and NT speech, respectively. AUT speech showed weaker dynamic processing from higher to lower phonological bands (e.g. from prosody to syllable) than NT speech. Further, we found that the weaker the frequency power of prosody in NT and AUT respondents, the weaker the frequency power of prosody in NT questioners. This suggests that NT individuals spontaneously imitate speech rhythms of the NT and AUT interlocutor. Although the speech sample of questioners came from just two NT individuals, our findings may suggest the possibility that the phonological characteristics of a speaker influence those of the interlocutor.
Collapse
Affiliation(s)
- Tatsuya Daikoku
- Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
- International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Tokyo, Japan
- Center for Brain, Mind and KANSEI Sciences Research, Hiroshima University, Hiroshima, Japan
| | - Shinichiro Kumagaya
- Research Center for Advanced Science and Technology, The University of Tokyo, Tokyo, Japan
| | - Satsuki Ayaya
- Research Center for Advanced Science and Technology, The University of Tokyo, Tokyo, Japan
| | - Yukie Nagai
- International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Tokyo, Japan
- Institute for AI and Beyond, The University of Tokyo, Tokyo, Japan
| |
Collapse
|
23
|
Martínez-Castilla P, Calet N, Jiménez-Fernández G. Music skills of Spanish-speaking children with developmental language disorder. RESEARCH IN DEVELOPMENTAL DISABILITIES 2023; 140:104575. [PMID: 37515985 DOI: 10.1016/j.ridd.2023.104575] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/15/2021] [Revised: 05/11/2023] [Accepted: 07/19/2023] [Indexed: 07/31/2023]
Abstract
BACKGROUND According to temporal sampling theory, deficits in rhythm processing contribute to both language and music difficulties in children with developmental language disorder (DLD). Evidence for this proposition is derived mainly from studies conducted in stress-timed languages, but the results may differ in languages with different rhythm features (e.g., syllable-timed languages). AIMS This research aimed to study a previously unexamined topic, namely, the music skills of children with DLD who speak Spanish (a syllable-timed language), and to analyze the possible relationships between the language and music skills of these children. METHODS AND PROCEDURES Two groups of 18 Spanish-speaking children with DLD and 19 typically-developing peers matched for chronological age completed a set of language tests. Their rhythm discrimination, melody discrimination and music memory skills were also assessed. OUTCOMES AND RESULTS Children with DLD exhibited significantly lower performance than their typically-developing peers on all three music subtests. Music and language skills were significantly related in both groups. CONCLUSIONS AND IMPLICATIONS The results suggest that similar music difficulties may be found in children with DLD whether they speak stress-timed or syllable-timed languages. The relationships found between music and language skills may pave the way for the design of possible language intervention programs based on music stimuli.
Collapse
Affiliation(s)
- Pastora Martínez-Castilla
- Department of Developmental and Educational Psychology, Faculty of Psychology, Universidad Nacional de Educación a Distancia (UNED), Madrid, Spain.
| | - Nuria Calet
- Department of Developmental and Educational Psychology, Faculty of Educational Sciences, University of Granada, Granada, Spain.
| | - Gracia Jiménez-Fernández
- Department of Developmental and Educational Psychology, Faculty of Educational Sciences, University of Granada, Granada, Spain.
| |
Collapse
|
24
|
Kujala T, Partanen E, Virtala P, Winkler I. Prerequisites of language acquisition in the newborn brain. Trends Neurosci 2023; 46:726-737. [PMID: 37344237 DOI: 10.1016/j.tins.2023.05.011] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2023] [Revised: 04/13/2023] [Accepted: 05/24/2023] [Indexed: 06/23/2023]
Abstract
Learning to decode and produce speech is one of the most demanding tasks faced by infants. Nevertheless, infants typically utter their first words within a year, and phrases soon follow. Here we review cognitive abilities of newborn infants that promote language acquisition, focusing primarily on studies tapping neural activity. The results of these studies indicate that infants possess core adult auditory abilities already at birth, including statistical learning and rule extraction from variable speech input. Thus, the neonatal brain is ready to categorize sounds, detect word boundaries, learn words, and separate speech streams: in short, to acquire language quickly and efficiently from everyday linguistic input.
Collapse
Affiliation(s)
- Teija Kujala
- Cognitive Brain Research Unit, Centre of Excellence in Music, Mind, Body and Brain, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland.
| | - Eino Partanen
- Cognitive Brain Research Unit, Centre of Excellence in Music, Mind, Body and Brain, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland
| | - Paula Virtala
- Cognitive Brain Research Unit, Centre of Excellence in Music, Mind, Body and Brain, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland
| | - István Winkler
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary
| |
Collapse
|
25
|
Ní Choisdealbha Á, Attaheri A, Rocha S, Mead N, Olawole-Scott H, Brusini P, Gibbon S, Boutris P, Grey C, Hines D, Williams I, Flanagan SA, Goswami U. Neural phase angle from two months when tracking speech and non-speech rhythm linked to language performance from 12 to 24 months. BRAIN AND LANGUAGE 2023; 243:105301. [PMID: 37399686 DOI: 10.1016/j.bandl.2023.105301] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/07/2022] [Revised: 06/05/2023] [Accepted: 06/28/2023] [Indexed: 07/05/2023]
Abstract
Atypical phase alignment of low-frequency neural oscillations to speech rhythm has been implicated in phonological deficits in developmental dyslexia. Atypical phase alignment to rhythm could thus also characterize infants at risk for later language difficulties. Here, we investigate phase-language mechanisms in a neurotypical infant sample. 122 two-, six- and nine-month-old infants were played speech and non-speech rhythms while EEG was recorded in a longitudinal design. The phase of infants' neural oscillations aligned consistently to the stimuli, with group-level convergence towards a common phase. Individual low-frequency phase alignment related to subsequent measures of language acquisition up to 24 months of age. Accordingly, individual differences in language acquisition are related to the phase alignment of cortical tracking of auditory and audiovisual rhythms in infancy, an automatic neural mechanism. Automatic rhythmic phase-language mechanisms could eventually serve as biomarkers, identifying at-risk infants and enabling intervention at the earliest stages of development.
Collapse
Affiliation(s)
| | - Adam Attaheri
- Centre for Neuroscience in Education, University of Cambridge, United Kingdom
| | - Sinead Rocha
- Centre for Neuroscience in Education, University of Cambridge, United Kingdom
| | - Natasha Mead
- Centre for Neuroscience in Education, University of Cambridge, United Kingdom
| | - Helen Olawole-Scott
- Centre for Neuroscience in Education, University of Cambridge, United Kingdom
| | - Perrine Brusini
- Centre for Neuroscience in Education, University of Cambridge, United Kingdom
| | - Samuel Gibbon
- Centre for Neuroscience in Education, University of Cambridge, United Kingdom
| | - Panagiotis Boutris
- Centre for Neuroscience in Education, University of Cambridge, United Kingdom
| | - Christina Grey
- Centre for Neuroscience in Education, University of Cambridge, United Kingdom
| | - Declan Hines
- Centre for Neuroscience in Education, University of Cambridge, United Kingdom
| | - Isabel Williams
- Centre for Neuroscience in Education, University of Cambridge, United Kingdom
| | - Sheila A Flanagan
- Centre for Neuroscience in Education, University of Cambridge, United Kingdom
| | - Usha Goswami
- Centre for Neuroscience in Education, University of Cambridge, United Kingdom.
| |
Collapse
|
26
|
Acoustic correlates of the syllabic rhythm of speech: Modulation spectrum or local features of the temporal envelope. Neurosci Biobehav Rev 2023; 147:105111. [PMID: 36822385 DOI: 10.1016/j.neubiorev.2023.105111] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2022] [Revised: 12/04/2022] [Accepted: 02/19/2023] [Indexed: 02/25/2023]
Abstract
The syllable is a perceptually salient unit in speech. Since both the syllable and its acoustic correlate, i.e., the speech envelope, have a preferred range of rhythmicity between 4 and 8 Hz, it is hypothesized that theta-band neural oscillations play a major role in extracting syllables based on the envelope. A literature survey, however, reveals inconsistent evidence about the relationship between the speech envelope and syllables, and the current study revisits this question by analyzing large speech corpora. It is shown that the center frequency of the speech envelope, characterized by the modulation spectrum, reliably correlates with the rate of syllables only when the analysis is pooled over minutes of speech recordings. In contrast, in the time domain, a component of the speech envelope is reliably phase-locked to syllable onsets. Based on a speaker-independent model, the timing of syllable onsets explains about 24% of the variance in the speech envelope. These results indicate that local features in the speech envelope, instead of the modulation spectrum, are a more reliable acoustic correlate of syllables.
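A minimal sketch of the modulation-spectrum side of this comparison, under stated assumptions: extract a broadband amplitude envelope via the Hilbert transform, low-pass it, and locate the peak of its power spectrum to compare against a nominal syllable rate. The synthetic "speech" and all parameter choices are illustrative and do not reproduce the corpus analysis reported above.

```python
import numpy as np
from scipy.signal import hilbert, welch, butter, sosfiltfilt

fs = 16000
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(3)
# Toy "speech": a noise carrier amplitude-modulated at roughly 5 syllables per second.
carrier = rng.standard_normal(t.size)
signal = (1 + np.sin(2 * np.pi * 5 * t)) * carrier

# Broadband amplitude envelope: magnitude of the analytic signal, then low-passed below ~32 Hz.
env = np.abs(hilbert(signal))
sos = butter(2, 32, btype="low", fs=fs, output="sos")
env = sosfiltfilt(sos, env)

# Modulation spectrum: PSD of the mean-removed envelope.
freqs, mod_spec = welch(env - env.mean(), fs=fs, nperseg=8 * fs)
band = (freqs > 0.5) & (freqs < 16)
peak_mod_freq = freqs[band][np.argmax(mod_spec[band])]
print(f"peak modulation frequency: {peak_mod_freq:.2f} Hz (cf. ~5 Hz syllable rate)")
```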
Collapse
|
27
|
Haartsen R, Charman T, Pasco G, Johnson MH, Jones EJH. Modulation of EEG theta by naturalistic social content is not altered in infants with family history of autism. Sci Rep 2022; 12:20758. [PMID: 36456597 PMCID: PMC9715667 DOI: 10.1038/s41598-022-24870-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2022] [Accepted: 11/22/2022] [Indexed: 12/05/2022] Open
Abstract
Theta oscillations (spectral power and connectivity) are sensitive to the social content of an experience in typically developing infants, providing a possible marker of early social brain development. Autism is a neurodevelopmental condition affecting early social behaviour, but links to underlying social brain function remain unclear. We explored whether modulations of theta spectral power and connectivity by naturalistic social content in infancy are related to family history for autism. Fourteen-month-old infants with (family history; FH; N = 75) and without (no family history; NFH; N = 26) a first-degree relative with autism watched social and non-social videos during EEG recording. We calculated theta (4-5 Hz) spectral power and connectivity modulations (social-non-social) and associated them with outcomes at 36 months. We replicated previous findings of increased theta power and connectivity during social compared to non-social videos. Theta modulations with social content were similar between groups, for both power and connectivity. Together, these findings suggest that neural responses to naturalistic social stimuli may not be strongly altered in 14-month-old infants with family history of autism.
Collapse
Affiliation(s)
- Rianne Haartsen
- Centre for Brain and Cognitive Development, Birkbeck College, University of London, London, WC1E 7HX, UK.
- ToddlerLab, Birkbeck, University of London, Malet Street, London, WC1E 7HX, UK.
| | - Tony Charman
- Department of Psychology, Institute of Psychiatry, Psychology and Neuroscience, King's College London, De Crespigny Park, London, SE5 8AF, UK
- South London and Maudsley NHS Foundation Trust, Bethlem Royal Hospital, Monks Orchard Road, Beckenham, Kent, BR3 3BX, UK
| | - Greg Pasco
- Department of Psychology, Institute of Psychiatry, Psychology and Neuroscience, King's College London, De Crespigny Park, London, SE5 8AF, UK
| | - Mark H Johnson
- Centre for Brain and Cognitive Development, Birkbeck College, University of London, London, WC1E 7HX, UK
- Department of Psychology, University of Cambridge, Cambridge, UK
| | - Emily J H Jones
- Centre for Brain and Cognitive Development, Birkbeck College, University of London, London, WC1E 7HX, UK
| |
Collapse
|
28
|
Keshavarzi M, Mandke K, Macfarlane A, Parvez L, Gabrielczyk F, Wilson A, Flanagan S, Goswami U. Decoding of speech information using EEG in children with dyslexia: Less accurate low-frequency representations of speech, not "Noisy" representations. BRAIN AND LANGUAGE 2022; 235:105198. [PMID: 36343509 DOI: 10.1016/j.bandl.2022.105198] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/12/2022] [Revised: 10/03/2022] [Accepted: 10/24/2022] [Indexed: 06/16/2023]
Abstract
The amplitude envelope of speech carries crucial low-frequency acoustic information that assists linguistic decoding. The sensory-neural Temporal Sampling (TS) theory of developmental dyslexia proposes atypical encoding of speech envelope information < 10 Hz, leading to atypical phonological representations. Here, a backward linear temporal response function (TRF) model and story listening were employed to estimate the speech information encoded in the electroencephalogram in the canonical delta, theta and alpha bands by 9-year-old children with and without dyslexia. TRF decoding accuracy provided an estimate of how faithfully the children's brains encoded low-frequency envelope information. Between-group analyses showed that the children with dyslexia exhibited impaired reconstruction of speech information in the delta band. However, when the quality of speech encoding for each child was estimated using child-by-child decoding models, the dyslexic children did not differ from controls. This suggests that children with dyslexia encode neither "noisy" nor "normal" representations of the speech signal, but different representations.
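Backward TRF (stimulus-reconstruction) modeling of the kind mentioned here is typically a time-lagged ridge regression from multichannel EEG onto the speech envelope, with decoding accuracy taken as the correlation between the reconstructed and actual envelopes. A toy sketch under those assumptions (synthetic data, arbitrary lag window and regularization, simple train/test split):

```python
import numpy as np

rng = np.random.default_rng(4)
fs, dur, n_ch, n_lags = 64, 300, 8, 16    # 64 Hz, 300 s, 8 channels, 250-ms lag window (assumed)
n = fs * dur
envelope = np.convolve(rng.standard_normal(n), np.ones(16) / 16, mode="same")  # slow toy envelope

# Synthetic EEG: each channel carries a delayed copy of the envelope plus noise (brain lags stimulus).
eeg = np.stack([np.roll(envelope, 2 + ch) + rng.standard_normal(n) for ch in range(n_ch)], axis=1)

# Backward model: reconstruct the stimulus from EEG at later time points (anti-causal lags).
X = np.hstack([np.roll(eeg, -lag, axis=0) for lag in range(n_lags)])
half = n // 2                             # first half for training, second half for testing
lam = 1e2                                 # ridge penalty (assumed)
w = np.linalg.solve(X[:half].T @ X[:half] + lam * np.eye(X.shape[1]), X[:half].T @ envelope[:half])

recon = X[half:] @ w
print("decoding accuracy (r):", np.corrcoef(recon, envelope[half:])[0, 1])
```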
Collapse
Affiliation(s)
- Mahmoud Keshavarzi
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom.
| | - Kanad Mandke
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom
| | - Annabel Macfarlane
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom
| | - Lyla Parvez
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom
| | - Fiona Gabrielczyk
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom
| | - Angela Wilson
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom
| | - Sheila Flanagan
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom
| | - Usha Goswami
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom
| |
Collapse
|
29
|
Understanding why infant-directed speech supports learning: A dynamic attention perspective. DEVELOPMENTAL REVIEW 2022. [DOI: 10.1016/j.dr.2022.101047] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
|
30
|
Vanden Bempt F, Van Herck S, Economou M, Vanderauwera J, Vandermosten M, Wouters J, Ghesquière P. Speech perception deficits and the effect of envelope-enhanced story listening combined with phonics intervention in pre-readers at risk for dyslexia. Front Psychol 2022; 13:1021767. [PMID: 36389538 PMCID: PMC9650384 DOI: 10.3389/fpsyg.2022.1021767] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2022] [Accepted: 10/12/2022] [Indexed: 11/28/2022] Open
Abstract
Developmental dyslexia is considered to be most effectively addressed with preventive phonics-based interventions, including grapheme-phoneme coupling and blending exercises. These intervention types require intact speech perception abilities, given their large focus on exercises with auditorily presented phonemes. Yet some children with (a risk for) dyslexia experience problems in this domain due to a poorer sensitivity to rise times, i.e., rhythmic acoustic cues present in the speech envelope. As a result, the often subtle speech perception problems could potentially constrain an optimal response to phonics-based interventions in at-risk children. The current study therefore aimed (1) to extend existing research by examining the presence of potential speech perception deficits in pre-readers at cognitive risk for dyslexia when compared to typically developing peers and (2) to explore the added value of a preventive auditory intervention for at-risk pre-readers, targeting rise time sensitivity, on speech perception and other reading-related skills. To address the first research objective, we longitudinally compared speech-in-noise perception between 28 5-year-old pre-readers with and 30 peers without a cognitive risk for dyslexia during the second half of the third year of kindergarten. The second research objective was addressed by exploring growth in speech perception and other reading-related skills in an independent sample of 62 at-risk 5-year-old pre-readers who all combined a 12-week preventive phonics-based intervention (GraphoGame-Flemish) with an auditory story listening intervention. In half of the sample, story recordings contained artificially enhanced rise times (GG-FL_EE group, n = 31), while in the other half, stories remained unprocessed (GG-FL_NE group, n = 31; Clinical Trial Number S60962; https://www.uzleuven.be/nl/clinical-trial-center). Results revealed a slower speech-in-noise perception growth in the at-risk compared to the non-at-risk group, due to a deficit that emerged by the end of kindergarten. Concerning the auditory intervention effects, both intervention groups showed equal growth in speech-in-noise perception and other reading-related skills, suggesting no boost of envelope-enhanced story listening on top of the effect of combining GraphoGame-Flemish with listening to unprocessed stories. These findings thus provide evidence for a link between speech perception problems and dyslexia, yet do not support the potential of the auditory intervention in its current form.
Collapse
Affiliation(s)
- Femke Vanden Bempt
- Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Research Group ExpORL, Department of Neurosciences, KU Leuven, Leuven, Belgium
| | - Shauni Van Herck
- Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Research Group ExpORL, Department of Neurosciences, KU Leuven, Leuven, Belgium
| | - Maria Economou
- Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Research Group ExpORL, Department of Neurosciences, KU Leuven, Leuven, Belgium
| | - Jolijn Vanderauwera
- Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Research Group ExpORL, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Psychological Sciences Research Institute, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
- Institute of Neuroscience, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
| | - Maaike Vandermosten
- Research Group ExpORL, Department of Neurosciences, KU Leuven, Leuven, Belgium
| | - Jan Wouters
- Research Group ExpORL, Department of Neurosciences, KU Leuven, Leuven, Belgium
| | - Pol Ghesquière
- Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
| |
Collapse
|
31
|
Daikoku T, Goswami U. Hierarchical amplitude modulation structures and rhythm patterns: Comparing Western musical genres, song, and nature sounds to Babytalk. PLoS One 2022; 17:e0275631. [PMID: 36240225 PMCID: PMC9565671 DOI: 10.1371/journal.pone.0275631] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2021] [Accepted: 09/20/2022] [Indexed: 11/19/2022] Open
Abstract
Statistical learning of physical stimulus characteristics is important for the development of cognitive systems like language and music. Rhythm patterns are a core component of both systems, and rhythm is key to language acquisition by infants. Accordingly, the physical stimulus characteristics that yield speech rhythm in "Babytalk" may also describe the hierarchical rhythmic relationships that characterize human music and song. Computational modelling of the amplitude envelope of "Babytalk" (infant-directed speech, IDS) using a demodulation approach (Spectral-Amplitude Modulation Phase Hierarchy model, S-AMPH) can describe these characteristics. S-AMPH modelling of Babytalk has shown previously that bands of amplitude modulations (AMs) at different temporal rates and their phase relations help to create its structured inherent rhythms. Additionally, S-AMPH modelling of children's nursery rhymes shows that different rhythm patterns (trochaic, iambic, dactylic) depend on the phase relations between AM bands centred on ~2 Hz and ~5 Hz. The importance of these AM phase relations was confirmed via a second demodulation approach (PAD, Probabilistic Amplitude Demodulation). Here we apply both S-AMPH and PAD to demodulate the amplitude envelopes of Western musical genres and songs. Quasi-rhythmic and non-human sounds found in nature (birdsong, rain, wind) were utilized for control analyses. We expected that the physical stimulus characteristics in human music and song from an AM perspective would match those of IDS. Given prior speech-based analyses, we also expected that AM cycles derived from the modelling may identify musical units like crotchets, quavers and demi-quavers. Both models revealed an hierarchically-nested AM modulation structure for music and song, but not nature sounds. This AM modulation structure for music and song matched IDS. Both models also generated systematic AM cycles yielding musical units like crotchets and quavers. Both music and language are created by humans and shaped by culture. Acoustic rhythm in IDS and music appears to depend on many of the same physical characteristics, facilitating learning.
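The hierarchical nesting of AM bands described above can be illustrated, very roughly, by checking whether the strength of a faster (~5 Hz) AM component of an envelope itself fluctuates at a slower (~2 Hz) rate. The sketch below is not the S-AMPH or PAD models; the synthetic envelope and filter settings are assumptions chosen only to make the nesting visible.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, welch

def bandpass(x, fs, lo, hi, order=2):
    b, a = butter(order, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, x)

fs = 200
t = np.arange(0, 120, 1 / fs)
rng = np.random.default_rng(5)
# Toy envelope with nested AM: a ~2 Hz cycle modulating the depth of a ~5 Hz cycle.
env = ((1 + 0.8 * np.sin(2 * np.pi * 2 * t)) * (1 + 0.5 * np.sin(2 * np.pi * 5 * t))
       + 0.2 * rng.standard_normal(t.size))

# First-order band: the ~5 Hz ("syllable-rate") AM component of the envelope.
syll_band = bandpass(env, fs, 3.5, 7.0)
# Second-order envelope: how the strength of that ~5 Hz component waxes and wanes over time.
syll_strength = np.abs(hilbert(syll_band))

freqs, psd = welch(syll_strength - syll_strength.mean(), fs=fs, nperseg=40 * fs)
band = (freqs > 0.5) & (freqs < 4)
print(f"dominant rate of the ~5 Hz band's own modulation: {freqs[band][np.argmax(psd[band])]:.2f} Hz")
```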
Collapse
Affiliation(s)
- Tatsuya Daikoku
- Centre for Neuroscience in Education, University of Cambridge, Cambridge, United Kingdom
- International Research Center for Neurointelligence, The University of Tokyo, Bunkyo City, Tokyo, Japan
- Center for Brain, Mind and KANSEI Sciences Research, Hiroshima University, Hiroshima, Japan
| | - Usha Goswami
- Centre for Neuroscience in Education, University of Cambridge, Cambridge, United Kingdom
| |
Collapse
|
32
|
Menn KH, Ward EK, Braukmann R, van den Boomen C, Buitelaar J, Hunnius S, Snijders TM. Neural Tracking in Infancy Predicts Language Development in Children With and Without Family History of Autism. NEUROBIOLOGY OF LANGUAGE (CAMBRIDGE, MASS.) 2022; 3:495-514. [PMID: 37216063 PMCID: PMC10158647 DOI: 10.1162/nol_a_00074] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/16/2021] [Accepted: 05/16/2022] [Indexed: 05/24/2023]
Abstract
During speech processing, neural activity in non-autistic adults and infants tracks the speech envelope. Recent research in adults indicates that this neural tracking relates to linguistic knowledge and may be reduced in autism. Such reduced tracking, if present already in infancy, could impede language development. In the current study, we focused on children with a family history of autism, who often show a delay in first language acquisition. We investigated whether differences in tracking of sung nursery rhymes during infancy relate to language development and autism symptoms in childhood. We assessed speech-brain coherence at either 10 or 14 months of age in a total of 22 infants with high likelihood of autism due to family history and 19 infants without family history of autism. We analyzed the relationship between speech-brain coherence in these infants and their vocabulary at 24 months as well as autism symptoms at 36 months. Our results showed significant speech-brain coherence in the 10- and 14-month-old infants. We found no evidence for a relationship between speech-brain coherence and later autism symptoms. Importantly, speech-brain coherence in the stressed syllable rate (1-3 Hz) predicted later vocabulary. Follow-up analyses showed evidence for a relationship between tracking and vocabulary only in 10-month-olds but not in 14-month-olds and indicated possible differences between the likelihood groups. Thus, early tracking of sung nursery rhymes is related to language development in childhood.
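Speech-brain coherence of the sort analyzed above is commonly estimated as magnitude-squared coherence between the speech envelope and an EEG channel, averaged over a frequency band (here 1-3 Hz, following the abstract). A minimal sketch on synthetic signals, with the window length, lag, and noise level as assumptions:

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(6)
fs, dur = 100, 200
n = fs * dur
# Toy speech envelope (slow random fluctuations) and an EEG channel that partly tracks it.
envelope = np.convolve(rng.standard_normal(n), np.ones(25) / 25, mode="same")
eeg = 0.6 * np.roll(envelope, 10) + rng.standard_normal(n)   # 100-ms neural lag plus noise

freqs, coh = coherence(envelope, eeg, fs=fs, nperseg=10 * fs)  # 10-s analysis windows
band = (freqs >= 1) & (freqs <= 3)
print(f"mean speech-brain coherence, 1-3 Hz: {coh[band].mean():.3f}")
```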
Collapse
Affiliation(s)
- Katharina H. Menn
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- International Max Planck Research School on Neuroscience of Communication: Function, Structure, and Plasticity, Leipzig, Germany
| | - Emma K. Ward
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
| | - Ricarda Braukmann
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
| | - Carlijn van den Boomen
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
| | - Jan Buitelaar
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Department of Cognitive Neuroscience, Radboud University Medical Center, Nijmegen, The Netherlands
| | - Sabine Hunnius
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
| | - Tineke M. Snijders
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Cognitive Neuropsychology Department, Tilburg University
| |
Collapse
|
33
|
Goswami U. Language acquisition and speech rhythm patterns: an auditory neuroscience perspective. ROYAL SOCIETY OPEN SCIENCE 2022; 9:211855. [PMID: 35911192 PMCID: PMC9326295 DOI: 10.1098/rsos.211855] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/24/2021] [Accepted: 07/04/2022] [Indexed: 06/15/2023]
Abstract
All human infants acquire language, but their brains do not know which language/s to prepare for. This observation suggests that there are fundamental components of the speech signal that contribute to building a language system, and fundamental neural processing mechanisms that use these components, which are shared across languages. Equally, disorders of language acquisition are found across all languages, with the most prevalent being developmental language disorder (approx. 7% prevalence), where oral language comprehension and production are atypical, and developmental dyslexia (approx. 7% prevalence), where written language acquisition is atypical. Recent advances in auditory neuroscience, along with advances in modelling the speech signal from an amplitude modulation (AM, intensity or energy change) perspective, have increased our understanding of both language acquisition and these developmental disorders. Speech rhythm patterns turn out to be fundamental to both sensory and neural linguistic processing. The rhythmic routines typical of childcare in many cultures, the parental practice of singing lullabies to infants, and the ubiquitous presence of BabyTalk (infant-directed speech) all enhance the fundamental AM components that contribute to building a linguistic brain.
Collapse
Affiliation(s)
- Usha Goswami
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, Cambridge, UK
| |
Collapse
|
34
|
Lau JCY, Fyshe A, Waxman SR. Rhythm May Be Key to Linking Language and Cognition in Young Infants: Evidence From Machine Learning. Front Psychol 2022; 13:894405. [PMID: 35693512 PMCID: PMC9178268 DOI: 10.3389/fpsyg.2022.894405] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2022] [Accepted: 05/03/2022] [Indexed: 11/30/2022] Open
Abstract
Rhythm is key to language acquisition. Across languages, rhythmic features highlight fundamental linguistic elements of the sound stream and structural relations among them. A sensitivity to rhythmic features, which begins in utero, is evident at birth. What is less clear is whether rhythm supports infants' earliest links between language and cognition. Prior evidence has documented that for infants as young as 3 and 4 months, listening to their native language (English) supports the core cognitive capacity of object categorization. This precocious link is initially part of a broader template: listening to a non-native language from the same rhythmic class as their native language (e.g., German, but not Cantonese) and to vocalizations of non-human primates (e.g., lemur, Eulemur macaco flavifrons, but not birds, e.g., zebra-finches, Taeniopygia guttata) provide English-acquiring infants with the same cognitive advantage as does listening to their native language. Here, we implement a machine-learning (ML) approach to ask whether there are acoustic properties, available on the surface of these vocalizations, that permit infants to identify which vocalizations are candidate links to cognition. We provided the model with a robust sample of vocalizations that, from the vantage point of English-acquiring 4-month-olds, either support object categorization (English, German, lemur vocalizations) or fail to do so (Cantonese, zebra-finch vocalizations). We assess (a) whether supervised ML classification models can distinguish those vocalizations that support cognition from those that do not, and (b) which class(es) of acoustic features (including rhythmic, spectral envelope, and pitch features) best support that classification. Our analysis reveals that principal components derived from rhythm-relevant acoustic features were among the most robust in supporting the classification. Classifications performed using temporal envelope components were also robust. These new findings provide in-principle evidence that infants' earliest links between vocalizations and cognition may be subserved by their perceptual sensitivity to rhythmic and spectral elements available on the surface of these vocalizations, and that these may guide infants' identification of candidate links to cognition.
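A toy sketch of the general pipeline shape described here: standardized acoustic features, principal components, and a cross-validated supervised classifier. The random features and labels below merely stand in for the real rhythm, spectral-envelope, and pitch descriptors, so this illustrates the workflow rather than the study's models.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_clips, n_features = 120, 30
X = rng.standard_normal((n_clips, n_features))      # stand-in acoustic features per vocalization clip
y = (X[:, :5].mean(axis=1) > 0).astype(int)         # toy labels: "supports categorization" or not

# Standardize, reduce to principal components, then classify; score with 5-fold cross-validation.
clf = make_pipeline(StandardScaler(), PCA(n_components=10), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```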
Collapse
Affiliation(s)
- Joseph C. Y. Lau
- Department of Psychology, Northwestern University, Evanston, IL, United States
- Institute for Policy Research, Northwestern University, Evanston, IL, United States
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States
| | - Alona Fyshe
- Department of Computing Science and Psychology, University of Alberta, Edmonton, AB, Canada
| | - Sandra R. Waxman
- Department of Psychology, Northwestern University, Evanston, IL, United States
- Institute for Policy Research, Northwestern University, Evanston, IL, United States
| |
Collapse
|
35
|
Keshavarzi M, Mandke K, Macfarlane A, Parvez L, Gabrielczyk F, Wilson A, Goswami U. Atypical delta-band phase consistency and atypical preferred phase in children with dyslexia during neural entrainment to rhythmic audio-visual speech. Neuroimage Clin 2022; 35:103054. [PMID: 35642984 PMCID: PMC9136320 DOI: 10.1016/j.nicl.2022.103054] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2021] [Revised: 04/13/2022] [Accepted: 05/18/2022] [Indexed: 11/18/2022]
Abstract
Children with and without dyslexia showed consistent phase entrainment. Dyslexic children had significantly reduced delta band phase consistency. Dyslexic children had a different preferred phase in delta compared to controls. The dyslexic brain showed faster pre-stimulus delta band angular velocity.
According to the sensory-neural Temporal Sampling theory of developmental dyslexia, neural sampling of auditory information at slow rates (<10 Hz, related to speech rhythm) is atypical in dyslexic individuals, particularly in the delta band (0.5–4 Hz). Here we examine the underlying neural mechanisms related to atypical sampling using a simple repetitive speech paradigm. Fifty-one children (21 control children [15M, 6F] and 30 children with dyslexia [16M, 14F]) aged 9 years with or without developmental dyslexia watched and listened as a ‘talking head’ repeated the syllable “ba” every 500 ms, while EEG was recorded. Occasionally a syllable was “out of time”, with a temporal delay calibrated individually and adaptively for each child so that it was detected around 79.4% of the time by a button press. Phase consistency in the delta (rate of stimulus delivery), theta (speech-related) and alpha (control) bands was evaluated for each child and each group. Significant phase consistency was found for both groups in the delta and theta bands, demonstrating neural entrainment, but not the alpha band. However, the children with dyslexia showed a different preferred phase and significantly reduced phase consistency compared to control children, in the delta band only. Analysis of pre- and post-stimulus angular velocity of group preferred phases revealed that the children in the dyslexic group showed an atypical response in the delta band only. The delta-band pre-stimulus angular velocity (−130 ms to 0 ms) for the dyslexic group appeared to be significantly faster compared to the control group. It is concluded that neural responding to simple beat-based stimuli may provide a unique neural marker of developmental dyslexia. The automatic nature of this neural response may enable new tools for diagnosis, as well as opening new avenues for remediation.
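Phase consistency and preferred phase, the two delta-band measures highlighted above, are typically computed from the resultant vector of band-limited phases taken at a fixed time point across trials. A minimal sketch on synthetic trials follows; the filter band, trial structure, and "onset" sample are assumptions, not the study's parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def delta_phase(trial, fs):
    """Delta-band (0.5-4 Hz) instantaneous phase of one EEG trial via the Hilbert transform."""
    b, a = butter(2, [0.5, 4.0], btype="bandpass", fs=fs)
    return np.angle(hilbert(filtfilt(b, a, trial)))

fs, n_trials, trial_len = 250, 100, 2 * 250            # 2-s trials (assumed)
rng = np.random.default_rng(8)
t = np.arange(trial_len) / fs
# Toy trials: a 2 Hz component with a roughly repeatable phase at "stimulus onset", plus noise.
trials = [np.sin(2 * np.pi * 2 * t + 0.3 + 0.4 * rng.standard_normal())
          + rng.standard_normal(trial_len) for _ in range(n_trials)]

onset_idx = trial_len // 2                              # sample treated as stimulus onset
phases = np.array([delta_phase(tr, fs)[onset_idx] for tr in trials])
resultant = np.mean(np.exp(1j * phases))                # mean resultant vector across trials
print(f"phase consistency (ITC): {np.abs(resultant):.2f}, preferred phase: {np.angle(resultant):.2f} rad")
```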
Collapse
Affiliation(s)
- Mahmoud Keshavarzi
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom.
| | - Kanad Mandke
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom
| | - Annabel Macfarlane
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom
| | - Lyla Parvez
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom
| | - Fiona Gabrielczyk
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom
| | - Angela Wilson
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom
| | - Usha Goswami
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom
| |
Collapse
|
36
|
Wei Y, Hancock R, Mozeiko J, Large EW. The relationship between entrainment dynamics and reading fluency assessed by sensorimotor perturbation. Exp Brain Res 2022; 240:1775-1790. [PMID: 35507069 DOI: 10.1007/s00221-022-06369-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2022] [Accepted: 04/06/2022] [Indexed: 11/25/2022]
Abstract
A consistent relationship has been found between rhythmic processing and reading skills. Impairment of the ability to entrain movements to an auditory rhythm in clinical populations with language-related deficits, such as children with developmental dyslexia, has been found in both behavioral and neural studies. In this study, we explored the relationship between rhythmic entrainment, behavioral synchronization, reading fluency, and reading comprehension in neurotypical English- and Mandarin-speaking adults. First, we examined entrainment stability by asking participants to coordinate taps with an auditory metronome in which unpredictable perturbations were introduced to disrupt entrainment. Next, we assessed behavioral synchronization by asking participants to coordinate taps with the syllables they produced while reading sentences as naturally as possible (tap to syllable task). Finally, we measured reading fluency and reading comprehension for native English and native Mandarin speakers. Stability of entrainment correlated strongly with tap to syllable task performance and with reading fluency, and both findings generalized across English and Mandarin speakers.
Collapse
Affiliation(s)
- Yi Wei
- Department of Psychological Sciences, University of Connecticut, Storrs, USA.
- Brain Imaging Research Center, University of Connecticut, Storrs, USA.
- The Connecticut Institute for the Brain and Cognitive Sciences of University of Connecticut, Storrs, USA.
| | - Roeland Hancock
- Department of Psychological Sciences, University of Connecticut, Storrs, USA
- Brain Imaging Research Center, University of Connecticut, Storrs, USA
- The Connecticut Institute for the Brain and Cognitive Sciences of University of Connecticut, Storrs, USA
| | - Jennifer Mozeiko
- Department of Speech, Language and Hearing Sciences, University of Connecticut, Storrs, USA
| | - Edward W Large
- Department of Psychological Sciences, University of Connecticut, Storrs, USA
- Department of Physics, University of Connecticut, Storrs, USA
- Brain Imaging Research Center, University of Connecticut, Storrs, USA
- The Connecticut Institute for the Brain and Cognitive Sciences of University of Connecticut, Storrs, USA
| |
Collapse
|
37
|
Partanen E, Kivimäki R, Huotilainen M, Ylinen S, Tervaniemi M. Musical perceptual skills, but not neural auditory processing, are associated with better reading ability in childhood. Neuropsychologia 2022; 169:108189. [DOI: 10.1016/j.neuropsychologia.2022.108189] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2021] [Revised: 02/21/2022] [Accepted: 02/21/2022] [Indexed: 10/18/2022]
|
38
|
Ní Choisdealbha Á, Attaheri A, Rocha S, Brusini P, Flanagan SA, Mead N, Gibbon S, Olawole-Scott H, Williams I, Grey C, Boutris P, Ahmed H, Goswami U. Neural detection of changes in amplitude rise time in infancy. Dev Cogn Neurosci 2022; 54:101075. [PMID: 35078120 PMCID: PMC8792064 DOI: 10.1016/j.dcn.2022.101075] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2021] [Revised: 12/21/2021] [Accepted: 01/19/2022] [Indexed: 11/03/2022] Open
Abstract
Amplitude rise times play a crucial role in the perception of rhythm in speech, and reduced perceptual sensitivity to differences in rise time is related to developmental language difficulties. Amplitude rise times also play a mechanistic role in neural entrainment to the speech amplitude envelope. Using an ERP paradigm, here we examined for the first time whether infants at the ages of seven and eleven months exhibit an auditory mismatch response to changes in the rise times of simple repeating auditory stimuli. We found that infants exhibited a mismatch response (MMR) to all of the oddball rise times used for the study. The MMR was more positive at seven than eleven months of age. At eleven months, there was a shift to a mismatch negativity (MMN) that was more pronounced over left fronto-central electrodes. The MMR over right fronto-central electrodes was sensitive to the size of the difference in rise time. The results indicate that neural processing of changes in rise time is present at seven months, supporting the possibility that early speech processing is facilitated by neural sensitivity to these important acoustic cues.
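A mismatch response of the kind reported here is usually quantified as a deviant-minus-standard difference wave averaged over a latency window. A toy sketch with synthetic epochs is given below; the epoch counts, response shape, and 150-300 ms window are assumptions rather than the study's settings.

```python
import numpy as np

rng = np.random.default_rng(9)
fs, epoch_len = 250, int(0.6 * 250)           # 600-ms epochs (assumed)
t = np.arange(epoch_len) / fs

def make_epochs(n, amplitude):
    """Synthetic epochs: a ~200-ms response peak of given amplitude plus trial noise."""
    response = amplitude * np.exp(-((t - 0.2) ** 2) / (2 * 0.03 ** 2))
    return response + rng.standard_normal((n, epoch_len))

standards = make_epochs(400, amplitude=1.0)
deviants = make_epochs(80, amplitude=1.6)      # oddball rise-time change evokes a larger response (toy)

difference_wave = deviants.mean(axis=0) - standards.mean(axis=0)
window = (t >= 0.15) & (t <= 0.30)             # latency window for the mismatch response (assumed)
print(f"mean mismatch amplitude, 150-300 ms: {difference_wave[window].mean():.2f} (a.u.)")
```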
Collapse
Affiliation(s)
- Áine Ní Choisdealbha
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, United Kingdom.
| | - Adam Attaheri
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, United Kingdom
| | - Sinead Rocha
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, United Kingdom
| | - Perrine Brusini
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, United Kingdom
| | - Sheila A Flanagan
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, United Kingdom
| | - Natasha Mead
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, United Kingdom
| | - Samuel Gibbon
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, United Kingdom
| | - Helen Olawole-Scott
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, United Kingdom
| | - Isabel Williams
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, United Kingdom
| | - Christina Grey
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, United Kingdom
| | - Panagiotis Boutris
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, United Kingdom
| | - Henna Ahmed
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, United Kingdom
| | - Usha Goswami
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, United Kingdom
| |
Collapse
|
39
|
Natural Infant-Directed Speech Facilitates Neural Tracking of Prosody. Neuroimage 2022; 251:118991. [PMID: 35158023 DOI: 10.1016/j.neuroimage.2022.118991] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2021] [Revised: 02/02/2022] [Accepted: 02/10/2022] [Indexed: 01/04/2023] Open
Abstract
Infants prefer to be addressed with infant-directed speech (IDS). IDS benefits language acquisition through amplified low-frequency amplitude modulations. It has been reported that this amplification increases electrophysiological tracking of IDS compared to adult-directed speech (ADS). It is still unknown which particular frequency band triggers this effect. Here, we compare tracking at the rates of syllables and prosodic stress, which are both critical to word segmentation and recognition. In mother-infant dyads (n=30), mothers described novel objects to their 9-month-olds while infants' EEG was recorded. For IDS, mothers were instructed to speak to their children as they typically do, while for ADS, mothers described the objects as if speaking with an adult. Phonetic analyses confirmed that pitch features were more prototypically infant-directed in the IDS-condition compared to the ADS-condition. Neural tracking of speech was assessed by speech-brain coherence, which measures the synchronization between speech envelope and EEG. Results revealed significant speech-brain coherence at both syllabic and prosodic stress rates, indicating that infants track speech in IDS and ADS at both rates. We found significantly higher speech-brain coherence for IDS compared to ADS in the prosodic stress rate but not the syllabic rate. This indicates that the IDS benefit arises primarily from enhanced prosodic stress. Thus, neural tracking is sensitive to parents' speech adaptations during natural interactions, possibly facilitating higher-level inferential processes such as word segmentation from continuous speech.
Collapse
|
40
|
Attaheri A, Choisdealbha ÁN, Di Liberto GM, Rocha S, Brusini P, Mead N, Olawole-Scott H, Boutris P, Gibbon S, Williams I, Grey C, Flanagan S, Goswami U. Delta- and theta-band cortical tracking and phase-amplitude coupling to sung speech by infants. Neuroimage 2021; 247:118698. [PMID: 34798233 DOI: 10.1016/j.neuroimage.2021.118698] [Citation(s) in RCA: 43] [Impact Index Per Article: 14.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2021] [Revised: 10/15/2021] [Accepted: 10/30/2021] [Indexed: 01/13/2023] Open
Abstract
The amplitude envelope of speech carries crucial low-frequency acoustic information that assists linguistic decoding at multiple time scales. Neurophysiological signals are known to track the amplitude envelope of adult-directed speech (ADS), particularly in the theta-band. Acoustic analysis of infant-directed speech (IDS) has revealed significantly greater modulation energy than ADS in an amplitude-modulation (AM) band centred on ∼2 Hz. Accordingly, cortical tracking of IDS by delta-band neural signals may be key to language acquisition. Speech also contains acoustic information within its higher-frequency bands (beta, gamma). Adult EEG and MEG studies reveal an oscillatory hierarchy, whereby low-frequency (delta, theta) neural phase dynamics temporally organize the amplitude of high-frequency signals (phase amplitude coupling, PAC). Whilst consensus is growing around the role of PAC in the matured adult brain, its role in the development of speech processing is unexplored. Here, we examined the presence and maturation of low-frequency (<12 Hz) cortical speech tracking in infants by recording EEG longitudinally from 60 participants when aged 4, 7 and 11 months as they listened to nursery rhymes. After establishing stimulus-related neural signals in delta and theta, cortical tracking at each age was assessed in the delta, theta and alpha [control] bands using a multivariate temporal response function (mTRF) method. Delta-beta, delta-gamma, theta-beta and theta-gamma phase-amplitude coupling (PAC) was also assessed. Significant delta and theta but not alpha tracking was found. Significant PAC was present at all ages, with both delta- and theta-driven coupling observed.
Affiliation(s)
- Adam Attaheri
- Department of Psychology, Centre for Neuroscience in Education, University of Cambridge, Downing Street, Cambridge CB2 3EB, United Kingdom.
| | - Áine Ní Choisdealbha
- Department of Psychology, Centre for Neuroscience in Education, University of Cambridge, Downing Street, Cambridge CB2 3EB, United Kingdom.
| | - Giovanni M Di Liberto
- Laboratoire des Systèmes Perceptifs, UMR 8248, CNRS, France; Ecole Normale Supérieure, PSL University, France; Department of Mechanical, Manufacturing and Biomedical Engineering, Trinity Centre for Biomedical Engineering and Trinity College Institute of Neuroscience, Trinity College, The University of Dublin, Ireland; School of Electrical and Electronic Engineering and UCD Centre for Biomedical Engineering, University College Dublin, Ireland.
| | - Sinead Rocha
- Department of Psychology, Centre for Neuroscience in Education, University of Cambridge, Downing Street, Cambridge CB2 3EB, United Kingdom.
| | - Perrine Brusini
- Department of Psychology, Centre for Neuroscience in Education, University of Cambridge, Downing Street, Cambridge CB2 3EB, United Kingdom; Institute of Population Health, Waterhouse Building, Block B, Brownlow Street, Liverpool L69 3GF, United Kingdom.
| | - Natasha Mead
- Department of Psychology, Centre for Neuroscience in Education, University of Cambridge, Downing Street, Cambridge CB2 3EB, United Kingdom.
| | - Helen Olawole-Scott
- Department of Psychology, Centre for Neuroscience in Education, University of Cambridge, Downing Street, Cambridge CB2 3EB, United Kingdom.
| | - Panagiotis Boutris
- Department of Psychology, Centre for Neuroscience in Education, University of Cambridge, Downing Street, Cambridge CB2 3EB, United Kingdom.
| | - Samuel Gibbon
- Department of Psychology, Centre for Neuroscience in Education, University of Cambridge, Downing Street, Cambridge CB2 3EB, United Kingdom.
| | - Isabel Williams
- Department of Psychology, Centre for Neuroscience in Education, University of Cambridge, Downing Street, Cambridge CB2 3EB, United Kingdom.
| | - Christina Grey
- Department of Psychology, Centre for Neuroscience in Education, University of Cambridge, Downing Street, Cambridge CB2 3EB, United Kingdom.
| | - Sheila Flanagan
- Department of Psychology, Centre for Neuroscience in Education, University of Cambridge, Downing Street, Cambridge CB2 3EB, United Kingdom.
| | - Usha Goswami
- Department of Psychology, Centre for Neuroscience in Education, University of Cambridge, Downing Street, Cambridge CB2 3EB, United Kingdom.
| |
|
41
|
Van Herck S, Vanden Bempt F, Economou M, Vanderauwera J, Glatz T, Dieudonné B, Vandermosten M, Ghesquière P, Wouters J. Ahead of maturation: Enhanced speech envelope training boosts rise time discrimination in pre-readers at cognitive risk for dyslexia. Dev Sci 2021; 25:e13186. [PMID: 34743382 DOI: 10.1111/desc.13186] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2021] [Revised: 09/24/2021] [Accepted: 10/22/2021] [Indexed: 12/24/2022]
Abstract
Dyslexia has frequently been related to atypical auditory temporal processing and speech perception. Results of studies emphasizing speech onset cues and reinforcing the temporal structure of the speech envelope, that is, envelope enhancement (EE), demonstrated reduced speech perception deficits in individuals with dyslexia. The use of this strategy as an auditory intervention might thus reduce some of the deficits related to dyslexia. Importantly, reading-skill interventions are most effective when they are provided during kindergarten and first grade. Hence, we provided a tablet-based 12-week auditory and phonics-based intervention to pre-readers at cognitive risk for dyslexia and investigated the effect on auditory temporal processing with a rise time discrimination (RTD) task. Ninety-one pre-readers at cognitive risk for dyslexia (aged 5-6) were assigned to one of three groups: two groups received a phonics-based intervention and played a story-listening game either with (n = 31) or without (n = 31) EE, and a third group played control games and listened to non-enhanced stories (n = 29). RTD was measured directly before, directly after and 1 year after the intervention. While the groups listening to non-enhanced stories mainly improved after the intervention during first grade, the group listening to enhanced stories improved during the intervention in kindergarten and subsequently remained stable during first grade. Hence, an EE intervention improves auditory processing skills important for the development of phonological skills. This occurred before the onset of reading instruction, preceding the maturational improvement of these skills, thereby potentially giving at-risk children a head start when learning to read.
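To make the rise time discrimination (RTD) measure concrete, the sketch below generates the kind of stimuli such tasks typically contrast: tones that differ only in how quickly their amplitude envelope rises. All parameter values are illustrative and not taken from the study.

    import numpy as np

    def rise_time_tone(rise_ms, total_ms=800, freq_hz=500, fs=44100):
        """Pure tone whose amplitude rises linearly over `rise_ms` and then stays
        flat; the rise time is the cue manipulated in RTD tasks."""
        n_total = int(fs * total_ms / 1000)
        n_rise = max(1, int(fs * rise_ms / 1000))
        t = np.arange(n_total) / fs
        env = np.ones(n_total)
        env[:n_rise] = np.linspace(0.0, 1.0, n_rise)
        return env * np.sin(2 * np.pi * freq_hz * t)

    # Example pair for a two-interval trial (illustrative values)
    reference = rise_time_tone(rise_ms=15)    # sharp onset
    comparison = rise_time_tone(rise_ms=300)  # shallow onset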
Affiliation(s)
- Shauni Van Herck
- Research Group ExpORL, Department of Neurosciences, KU Leuven, Leuven, Belgium; Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, Leuven, Belgium
| | - Femke Vanden Bempt
- Research Group ExpORL, Department of Neurosciences, KU Leuven, Leuven, Belgium; Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, Leuven, Belgium
| | - Maria Economou
- Research Group ExpORL, Department of Neurosciences, KU Leuven, Leuven, Belgium; Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, Leuven, Belgium
| | - Jolijn Vanderauwera
- Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, Leuven, Belgium; Université Catholique de Louvain, Psychological Sciences Research Institute, Louvain-la-Neuve, Belgium; Université Catholique de Louvain, Institute of Neuroscience, Louvain-la-Neuve, Belgium
| | - Toivo Glatz
- Research Group ExpORL, Department of Neurosciences, KU Leuven, Leuven, Belgium; Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Institute of Public Health, Charitéplatz 1, Berlin, Germany
| | - Benjamin Dieudonné
- Research Group ExpORL, Department of Neurosciences, KU Leuven, Leuven, Belgium
| | - Maaike Vandermosten
- Research Group ExpORL, Department of Neurosciences, KU Leuven, Leuven, Belgium
| | - Pol Ghesquière
- Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, Leuven, Belgium
| | - Jan Wouters
- Research Group ExpORL, Department of Neurosciences, KU Leuven, Leuven, Belgium
| |
|
42
|
Rhythm discrimination and metronome tapping in 4-year-old children at risk for developmental dyslexia. Cogn Dev 2021. [DOI: 10.1016/j.cogdev.2021.101129] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
|
43
|
Rathcke T, Lin CY. Towards a Comprehensive Account of Rhythm Processing Issues in Developmental Dyslexia. Brain Sci 2021; 11:brainsci11101303. [PMID: 34679368 PMCID: PMC8533826 DOI: 10.3390/brainsci11101303] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2021] [Revised: 09/17/2021] [Accepted: 09/21/2021] [Indexed: 11/16/2022] Open
Abstract
Developmental dyslexia is typically defined as a difficulty with an individual's command of written language, arising from deficits in phonological awareness. However, motor entrainment difficulties in non-linguistic synchronization and time-keeping tasks have also been reported. Such findings gave rise to proposals of an underlying rhythm processing deficit in dyslexia, even though to date, evidence for impaired motor entrainment with the rhythm of natural speech is rather scarce, and the role of speech rhythm in phonological awareness is unclear. The present study aimed to fill these gaps. Dyslexic adults and age-matched control participants with variable levels of previous music training completed a series of experimental tasks assessing phoneme processing, rhythm perception, and motor entrainment abilities. In a rhythm entrainment task, participants tapped along to the perceived beat of natural spoken sentences. In a phoneme processing task, participants monitored for sonorant and obstruent phonemes embedded in nonsense strings. Individual sensorimotor skills were assessed using a number of screening tests. The results lacked evidence for a motor impairment or a general motor entrainment difficulty in dyslexia, at least among adult participants of the study. Instead, the results showed that the participants' performance in the phonemic task was predictive of their performance in the rhythmic task, but not vice versa, suggesting that atypical rhythm processing in dyslexia may be the consequence, but not the cause, of dyslexic difficulties with phoneme-level encoding. No evidence for a deficit in the entrainment to the syllable rate in dyslexic adults was found. Rather, metrically weak syllables were significantly less often at the center of rhythmic attention in dyslexic adults as compared to neurotypical controls, with an increased tendency in musically trained participants. This finding could not be explained by an auditory deficit in the processing of acoustic-prosodic cues to the rhythm structure, but it is likely to be related to the well-documented auditory short-term memory issue in dyslexia.
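Tapping-to-speech entrainment of the kind assessed above is commonly quantified with circular statistics. The sketch below computes the mean resultant vector length of tap phases relative to beat onsets (e.g., stressed-syllable times). It is a generic illustration under stated assumptions (tap and beat times given in seconds), not the authors' analysis.

    import numpy as np

    def tap_entrainment(tap_times, beat_times):
        """Phase of each tap within its surrounding beat interval, summarised by
        the resultant vector length R (0 = no phase locking, 1 = perfect phase
        locking) and the circular mean phase."""
        phases = []
        for tap in tap_times:
            i = np.searchsorted(beat_times, tap) - 1
            if 0 <= i < len(beat_times) - 1:
                cycle = beat_times[i + 1] - beat_times[i]
                phases.append(2 * np.pi * (tap - beat_times[i]) / cycle)
        vectors = np.exp(1j * np.asarray(phases))
        return np.abs(vectors.mean()), np.angle(vectors.mean())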
Affiliation(s)
- Tamara Rathcke
- Department of Linguistics, Faculty of Humanities, University of Konstanz, 78464 Konstanz, Germany
- Modern Languages and Linguistics, School of Cultures and Languages, University of Kent, Canterbury CT2 7NR, UK;
- Correspondence:
| | - Chia-Yuan Lin
- Modern Languages and Linguistics, School of Cultures and Languages, University of Kent, Canterbury CT2 7NR, UK;
- Department of Psychology, School of Humanities and Health Sciences, University of Huddersfield, Huddersfield HD1 3DH, UK
| |
|
44
|
Kostilainen K, Partanen E, Mikkola K, Wikström V, Pakarinen S, Fellman V, Huotilainen M. Repeated Parental Singing During Kangaroo Care Improved Neural Processing of Speech Sound Changes in Preterm Infants at Term Age. Front Neurosci 2021; 15:686027. [PMID: 34539329 PMCID: PMC8446605 DOI: 10.3389/fnins.2021.686027] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2021] [Accepted: 07/20/2021] [Indexed: 11/13/2022] Open
Abstract
Preterm birth carries a risk for adverse neurodevelopment. Cognitive dysfunctions, such as language disorders, may manifest as atypical sound discrimination already in early infancy. As infant-directed singing has been shown to enhance language acquisition in infants, we examined whether parental singing during skin-to-skin care (kangaroo care) improves speech sound discrimination in preterm infants. Forty-five preterm infants born between 26 and 33 gestational weeks (GW) and their parents participated in this cluster-randomized controlled trial (ClinicalTrials ID IRB00003181SK). In both groups, parents conducted kangaroo care during 33-40 GW. In the singing intervention group (n = 24), a certified music therapist guided parents to sing or hum during daily kangaroo care. In the control group (n = 21), parents conducted standard kangaroo care and were not instructed to use their voices. Parents in both groups reported the duration of daily intervention. Auditory event-related potentials were recorded with electroencephalography at term age using a multi-feature paradigm consisting of phonetic and emotional speech sound changes and a one-deviant oddball paradigm with pure tones. In the multi-feature paradigm, prominent mismatch responses (MMRs) were elicited to the emotional sounds and many of the phonetic deviants in the singing intervention group, and in the control group to some of the emotional and phonetic deviants. A group difference was found as the MMRs were larger in the singing intervention group, mainly due to larger MMRs being elicited to the emotional sounds, especially in females. The overall duration of the singing intervention (range 15-63 days) was positively associated with the MMR amplitudes for both phonetic and emotional stimuli in both sexes, unlike the daily singing time (range 8-120 min/day). In the oddball paradigm, MMRs for the non-speech sounds were elicited in both groups, and no group differences or associations between singing time and response amplitudes were found. These results imply that repeated parental singing during kangaroo care improved auditory discrimination of phonetic and emotional speech sounds in preterm infants at term age. Regular singing routines can be recommended for parents to promote the development of the auditory system and auditory processing of speech sounds in preterm infants.
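A mismatch response of the kind reported above is typically quantified as a deviant-minus-standard difference wave. The sketch below shows one way to do this with the MNE-Python library, assuming a preprocessed Epochs object whose conditions are labelled 'standard' and 'deviant'; both the condition names and the amplitude window are illustrative, not taken from the study.

    import mne  # assumes a preprocessed mne.Epochs object with labelled conditions

    def mismatch_response(epochs, tmin=0.1, tmax=0.3):
        """Deviant-minus-standard difference wave and its mean amplitude (volts)
        within a post-stimulus window, averaged over channels and time."""
        ev_standard = epochs["standard"].average()
        ev_deviant = epochs["deviant"].average()
        mmr = mne.combine_evoked([ev_deviant, ev_standard], weights=[1, -1])
        mean_amplitude = mmr.copy().crop(tmin=tmin, tmax=tmax).data.mean()
        return mmr, mean_amplitude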
Affiliation(s)
- Kaisamari Kostilainen
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
| | - Eino Partanen
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
| | - Kaija Mikkola
- New Children's Hospital, Pediatric Research Center, Neonatology, Department of Pediatrics, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
| | - Valtteri Wikström
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
| | - Satu Pakarinen
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
| | - Vineta Fellman
- Pediatrics, Department of Clinical Sciences, Lund University, Lund, Sweden; Children's Hospital, University of Helsinki, Helsinki, Finland
| | - Minna Huotilainen
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland; CICERO Learning Network, Faculty of Educational Sciences, University of Helsinki, Helsinki, Finland
| |
|
45
|
Gibbon S, Attaheri A, Ní Choisdealbha Á, Rocha S, Brusini P, Mead N, Boutris P, Olawole-Scott H, Ahmed H, Flanagan S, Mandke K, Keshavarzi M, Goswami U. Machine learning accurately classifies neural responses to rhythmic speech vs. non-speech from 8-week-old infant EEG. Brain Lang 2021; 220:104968. [PMID: 34111684 PMCID: PMC8358977 DOI: 10.1016/j.bandl.2021.104968] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/21/2020] [Revised: 05/11/2021] [Accepted: 05/13/2021] [Indexed: 05/10/2023]
Abstract
Currently, there are no reliable means of identifying infants at risk for later language disorders. Infant neural responses to rhythmic stimuli may offer a solution, as neural tracking of rhythm is atypical in children with developmental language disorders. However, infant brain recordings are noisy. As a first step towards developing accurate neural biomarkers, we investigate whether infant brain responses to rhythmic stimuli can be classified reliably using EEG from 95 eight-week-old infants listening to natural stimuli (repeated syllables or drumbeats). Both Convolutional Neural Network (CNN) and Support Vector Machine (SVM) approaches were employed. Applied to one infant at a time, the CNN discriminated syllables from drumbeats with a mean AUC of 0.87, against two levels of noise. The SVM classified with AUCs of 0.95 and 0.86 at the two noise levels, respectively, showing reduced performance as noise increased. Our proof-of-concept modelling opens the way to the development of clinical biomarkers for language disorders related to rhythmic entrainment.
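For illustration, the SVM arm of such a per-infant classification analysis can be sketched with scikit-learn as below, using flattened EEG epochs as features and cross-validated ROC AUC as the score. The feature extraction and cross-validation scheme here are simplified placeholders, not the pipeline used in the study.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    def classify_infant_epochs(X, y):
        """X: EEG epochs for one infant, shape (n_epochs, n_channels, n_times);
        y: labels (0 = drumbeat, 1 = syllable). Returns mean cross-validated AUC."""
        X_flat = X.reshape(len(X), -1)          # naive feature vector per epoch
        clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
        cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
        return cross_val_score(clf, X_flat, y, cv=cv, scoring="roc_auc").mean()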
Affiliation(s)
- Samuel Gibbon
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, UK.
| | - Adam Attaheri
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, UK
| | - Áine Ní Choisdealbha
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, UK
| | - Sinead Rocha
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, UK
| | - Perrine Brusini
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, UK
| | - Natasha Mead
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, UK
| | - Panagiotis Boutris
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, UK
| | - Helen Olawole-Scott
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, UK
| | - Henna Ahmed
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, UK
| | - Sheila Flanagan
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, UK
| | - Kanad Mandke
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, UK
| | - Mahmoud Keshavarzi
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, UK; Department of Bioengineering and Centre for Neurotechnology, Imperial College London, UK
| | - Usha Goswami
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, UK
| |
|
46
|
Alexopoulos J, Giordano V, Janda C, Benavides‐Varela S, Seidl R, Doering S, Berger A, Bartha‐Doering L. The duration of intrauterine development influences discrimination of speech prosody in infants. Dev Sci 2021; 24:e13110. [PMID: 33817911 PMCID: PMC11475226 DOI: 10.1111/desc.13110] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2020] [Revised: 02/17/2021] [Accepted: 03/05/2021] [Indexed: 11/26/2022]
Abstract
Auditory speech discrimination is essential for normal language development. Children born preterm are at greater risk of language developmental delays. Using functional near-infrared spectroscopy at term-equivalent age, the present study investigated early discrimination of speech prosody in 62 neonates born between weeks 23 and 41 of gestational age (GA). We found a significant positive correlation between GA at birth and neural discrimination of forward versus backward speech at term-equivalent age. Cluster analysis identified a critical threshold at around week 32 of GA, pointing to the existence of subgroups. Infants born before week 32 of GA exhibited a significantly different pattern of hemodynamic response to speech stimuli compared to infants born at or after week 32 of GA. Thus, children born before 32 weeks of GA are especially vulnerable to early speech discrimination deficits. To support their early language development, we therefore suggest close follow-up and additional speech and language therapy, especially for children born before week 32 of GA.
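The two statistical steps described above, a GA-discrimination correlation and a cluster-based search for subgroups, can be illustrated as follows. Clustering directly on GA with k-means is a simplified stand-in for the study's cluster analysis, and all variable names are hypothetical.

    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.cluster import KMeans

    def ga_discrimination_analysis(ga_weeks, discrimination):
        """Correlate gestational age (weeks) with a neural discrimination score
        and estimate a two-subgroup boundary along GA as the midpoint between
        the two k-means cluster centres."""
        ga = np.asarray(ga_weeks, dtype=float)
        disc = np.asarray(discrimination, dtype=float)
        r, p = pearsonr(ga, disc)
        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(ga.reshape(-1, 1))
        threshold = km.cluster_centers_.ravel().mean()
        return r, p, threshold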
Affiliation(s)
- Johanna Alexopoulos
- Department of Psychoanalysis and Psychotherapy, Medical University of Vienna, Vienna, Austria
- Department of Pediatrics and Adolescent Medicine, Comprehensive Center for Pediatrics, Medical University of Vienna, Vienna, Austria
| | - Vito Giordano
- Department of Pediatrics and Adolescent Medicine, Comprehensive Center for Pediatrics, Medical University of Vienna, Vienna, Austria
| | - Charlotte Janda
- Department of Pediatrics and Adolescent Medicine, Comprehensive Center for Pediatrics, Medical University of Vienna, Vienna, Austria
| | | | - Rainer Seidl
- Department of Pediatrics and Adolescent Medicine, Comprehensive Center for Pediatrics, Medical University of Vienna, Vienna, Austria
| | - Stephan Doering
- Department of Psychoanalysis and Psychotherapy, Medical University of Vienna, Vienna, Austria
| | - Angelika Berger
- Department of Pediatrics and Adolescent Medicine, Comprehensive Center for Pediatrics, Medical University of Vienna, Vienna, Austria
| | - Lisa Bartha‐Doering
- Department of Pediatrics and Adolescent Medicine, Comprehensive Center for Pediatrics, Medical University of Vienna, Vienna, Austria
| |
|
47
|
McAuley JD, Shen Y, Smith T, Kidd GR. Effects of speech-rhythm disruption on selective listening with a single background talker. Atten Percept Psychophys 2021; 83:2229-2240. [PMID: 33782913 PMCID: PMC10612531 DOI: 10.3758/s13414-021-02298-x] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/05/2021] [Indexed: 11/08/2022]
Abstract
Recent work by McAuley et al. (Attention, Perception, & Psychophysics, 82, 3222-3233, 2020) using the Coordinate Response Measure (CRM) paradigm with a multitalker background revealed that altering the natural rhythm of target speech amidst background speech worsens target recognition (a target-rhythm effect), while altering background speech rhythm improves target recognition (a background-rhythm effect). Here, we used a single-talker background to examine the role of specific properties of target and background sound patterns in selective listening without the complexity of multiple background stimuli. Experiment 1 manipulated the sex of the background talker, presented with a male target talker, to assess target- and background-rhythm effects with and without a strong pitch cue to aid perceptual segregation. Experiment 2 used a vocoded single-talker background to examine target- and background-rhythm effects with envelope-based speech rhythms preserved, but without semantic content or temporal fine structure. While a target-rhythm effect was present with all backgrounds, the background-rhythm effect was only observed for the same-sex background condition. Results provide additional support for a selective entrainment hypothesis, while also showing that the background-rhythm effect is not driven by envelope-based speech rhythm alone, and may be reduced or eliminated when pitch or other acoustic differences provide a strong basis for selective listening.
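Rhythm manipulations like those described above alter the timing of speech while leaving its other properties largely intact. As a generic illustration only (not the stimulus processing used by the authors), the sketch below alternately compresses and stretches short segments of a recording with librosa's phase-vocoder time stretching, which disturbs the natural rhythm while roughly preserving overall duration and pitch; the segment length and stretch factor are arbitrary.

    import numpy as np
    import librosa

    def disrupt_rhythm(y, sr, segment_ms=250, rate=1.25):
        """Alternately compress and stretch consecutive segments of a speech
        signal, perturbing its natural rhythm while approximately preserving
        its total duration."""
        seg = int(sr * segment_ms / 1000)
        out = []
        for i, start in enumerate(range(0, len(y), seg)):
            chunk = y[start:start + seg]
            if len(chunk) < 2048:              # leave very short tail chunks alone
                out.append(chunk)
                continue
            factor = rate if i % 2 == 0 else 1.0 / rate
            out.append(librosa.effects.time_stretch(chunk, rate=factor))
        return np.concatenate(out)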
Affiliation(s)
- J Devin McAuley
- Department of Psychology, Michigan State University, East Lansing, MI, 48824, USA.
| | - Yi Shen
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
| | - Toni Smith
- Department of Psychology, Michigan State University, East Lansing, MI, 48824, USA
| | - Gary R Kidd
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA
| |
|
48
|
Woodruff Carr K, Perszyk DR, Norton ES, Voss JL, Poeppel D, Waxman SR. Developmental changes in auditory‐evoked neural activity underlie infants’ links between language and cognition. Dev Sci 2021; 24:e13121. [DOI: 10.1111/desc.13121] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2020] [Revised: 04/21/2021] [Accepted: 04/27/2021] [Indexed: 11/29/2022]
Affiliation(s)
- Kali Woodruff Carr
- Department of Psychology, Northwestern University, Evanston, Illinois, USA
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, Illinois, USA
| | | | - Elizabeth S. Norton
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, Illinois, USA
- Institute for Innovations in Developmental Sciences, Northwestern University, Chicago, Illinois, USA
| | - Joel L. Voss
- Institute for Innovations in Developmental Sciences, Northwestern University, Chicago, Illinois, USA
- Department of Medical Social Sciences, Ken and Ruth Davee Department of Neurology, Department of Psychiatry and Behavioral Sciences, Interdepartmental Neuroscience Program, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA
| | - David Poeppel
- Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Department of Psychology and Center for Neural Science, New York University, New York, New York, USA
| | - Sandra R. Waxman
- Department of Psychology, Northwestern University, Evanston, Illinois, USA
- Institute for Innovations in Developmental Sciences, Northwestern University, Chicago, Illinois, USA
- Institute for Policy Research, Northwestern University, Evanston, Illinois, USA
| |
|
49
|
Van Hirtum T, Ghesquière P, Wouters J. A Bridge over Troubled Listening: Improving Speech-in-Noise Perception by Children with Dyslexia. J Assoc Res Otolaryngol 2021; 22:465-480. [PMID: 33861393 DOI: 10.1007/s10162-021-00793-4] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2020] [Accepted: 02/26/2021] [Indexed: 10/21/2022] Open
Abstract
Developmental dyslexia is most commonly associated with phonological processing difficulties. However, children with dyslexia may experience poor speech-in-noise perception as well. Although there is an ongoing debate about whether a speech perception deficit is inherent to dyslexia or acts as an aggravating risk factor compromising learning to read indirectly, improving speech perception might boost reading-related skills and reading acquisition. In the current study, we evaluated an advanced speech-processing strategy applied in auditory prostheses, envelope enhancement (EE), to promote and eventually normalize speech perception in school-aged children with dyslexia. The EE strategy automatically detects and emphasizes onset cues and consequently reinforces the temporal structure of the speech envelope. Our results confirmed speech-in-noise perception difficulties in children with dyslexia. However, we found that exaggerating temporal "landmarks" of the speech envelope (i.e., amplitude rise times and modulations) by means of EE passively and instantaneously improved speech perception in noise for children with dyslexia. Moreover, the benefit derived from EE was large enough to completely bridge the initial gap between children with dyslexia and their typical reading peers. Taken together, the beneficial outcome of EE suggests an important contribution of the temporal structure of the envelope to speech-in-noise perception difficulties in dyslexia, providing an interesting foundation for future intervention studies based on auditory and speech rhythm training.
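The core idea of envelope enhancement, detecting onset cues and amplifying them to reinforce the temporal structure of the envelope, can be sketched in a single band as below. This is a simplified illustration of the general principle; the EE processing used in auditory prostheses and in this study is multi-band and more sophisticated, and all parameter values here are arbitrary.

    import numpy as np
    from scipy.signal import hilbert, butter, filtfilt

    def envelope_enhance(x, fs, gain=3.0, lp_hz=30.0):
        """Single-band onset emphasis: boost the signal wherever the smoothed
        amplitude envelope is rising steeply (rise-time / onset cues)."""
        env = np.abs(hilbert(x))                          # amplitude envelope
        b, a = butter(2, lp_hz / (fs / 2), btype="low")
        env_smooth = filtfilt(b, a, env)                  # smoothed envelope
        onset = np.clip(np.gradient(env_smooth) * fs, 0, None)
        onset /= onset.max() + 1e-12                      # normalised onset strength
        y = x * (1.0 + gain * onset)                      # emphasise rising portions
        return y / (np.abs(y).max() + 1e-12)              # prevent clipping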
Affiliation(s)
- Tilde Van Hirtum
- Department of Neurosciences, Research Group Experimental ORL, KU Leuven University of Leuven, Leuven, Belgium; Faculty of Psychology and Educational Sciences, Parenting and Special Education Research Unit, KU Leuven University of Leuven, Leuven, Belgium.
| | - Pol Ghesquière
- Faculty of Psychology and Educational Sciences, Parenting and Special Education Research Unit, KU Leuven University of Leuven, Leuven, Belgium
| | - Jan Wouters
- Department of Neurosciences, Research Group Experimental ORL, KU Leuven University of Leuven, Leuven, Belgium
| |
|
50
|
Luo C, Ding N. Cortical encoding of acoustic and linguistic rhythms in spoken narratives. eLife 2020; 9:60433. [PMID: 33345775 PMCID: PMC7775109 DOI: 10.7554/elife.60433] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2020] [Accepted: 12/20/2020] [Indexed: 11/13/2022] Open
Abstract
Speech contains rich acoustic and linguistic information. Using highly controlled speech materials, previous studies have demonstrated that cortical activity is synchronous to the rhythms of perceived linguistic units, for example, words and phrases, on top of basic acoustic features, for example, the speech envelope. When listening to natural speech, it remains unclear, however, how cortical activity jointly encodes acoustic and linguistic information. Here we investigate the neural encoding of words using electroencephalography and observe neural activity synchronous to multi-syllabic words when participants naturally listen to narratives. An amplitude modulation (AM) cue for word rhythm enhances the word-level response, but the effect is only observed during passive listening. Furthermore, words and the AM cue are encoded by spatially separable neural responses that are differentially modulated by attention. These results suggest that bottom-up acoustic cues and top-down linguistic knowledge separately contribute to cortical encoding of linguistic units in spoken narratives.
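When linguistic units are presented at fixed rates (as in classic frequency-tagging designs related to the word-level tracking discussed above), synchrony to syllables and multi-syllabic words shows up as peaks at the corresponding rates in the spectrum of the trial-averaged EEG. The sketch below computes such peaks; it is a generic illustration, not the encoding analysis used in the cited study, and the example rates are arbitrary.

    import numpy as np

    def evoked_spectrum_peaks(epochs, fs, rates_hz):
        """Amplitude of the trial-averaged (phase-locked) EEG spectrum at given
        stimulation rates. epochs: array of shape (n_trials, n_times), one channel."""
        evoked = np.asarray(epochs).mean(axis=0)
        spectrum = np.abs(np.fft.rfft(evoked)) / evoked.size
        freqs = np.fft.rfftfreq(evoked.size, d=1.0 / fs)
        return {rate: spectrum[np.argmin(np.abs(freqs - rate))] for rate in rates_hz}

    # e.g. peaks at syllable (4 Hz) and word (2 Hz, 1 Hz) rates:
    # peaks = evoked_spectrum_peaks(epochs, fs=250, rates_hz=[4.0, 2.0, 1.0])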
Affiliation(s)
- Cheng Luo
- Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou, China
| | - Nai Ding
- Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou, China; Research Center for Advanced Artificial Intelligence Theory, Zhejiang Lab, Hangzhou, China
| |
|