1
Wu M, Liu H, Zhao X, Lu L, Wang Y, Wei C, Liu Y, Zhang YX. Speech-Processing Network Formation of Cochlear-Implanted Toddlers With Early Hearing Experiences. Dev Sci 2025; 28:e13568. PMID: 39412370. DOI: 10.1111/desc.13568.
Abstract
To reveal the formation process of speech processing with early hearing experiences, we tracked the development of functional connectivity in the auditory and language-related cortical areas of 84 (36 female) congenitally deafened toddlers using repeated functional near-infrared spectroscopy for up to 36 months after cochlear implantation (CI). Upon hearing restoration, the CI children lacked the modular organization of the mature speech-processing network and showed a higher degree of immaturity in temporo-parietal than in temporo-frontal connections. The speech-processing network appeared to form rapidly with early CI experience, with two-thirds of the developing connections following nonlinear trajectories possibly reflecting more than one synaptogenesis-pruning cycle. A few key features of the mature speech-processing network emerged within the first year of CI hearing, including a left-hemispheric advantage, differentiation of the dorsal and ventral processing streams, and functional-state (speech listening vs. resting) specific patterns of connectivity development. The developmental changes were predictive of the CI children's future auditory and verbal communication skills, with a prominent contribution from temporo-parietal connections in the dorsal stream, suggesting that speech-processing network formation with early hearing experiences mediates speech acquisition.
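For readers unfamiliar with the network measures used in this abstract, the sketch below illustrates, with placeholder data, how a channel-by-channel functional connectivity matrix can be built from fNIRS time courses and how its modular organization can be quantified. It is a generic illustration, not the authors' pipeline; the array shapes and the positive-correlation threshold are assumptions.

```python
import numpy as np
import networkx as nx
from networkx.algorithms import community

# Placeholder fNIRS HbO time courses: (n_channels, n_timepoints)
rng = np.random.default_rng(0)
signals = rng.standard_normal((20, 600))

# Functional connectivity as pairwise Pearson correlation between channels
fc = np.corrcoef(signals)

# Weighted graph from positive connections (the threshold choice is arbitrary here)
G = nx.Graph()
n = fc.shape[0]
for i in range(n):
    for j in range(i + 1, n):
        if fc[i, j] > 0:
            G.add_edge(i, j, weight=fc[i, j])

# Modular organization: detect communities and score their modularity Q
modules = community.greedy_modularity_communities(G, weight="weight")
Q = community.modularity(G, modules, weight="weight")
print(f"{len(modules)} modules, modularity Q = {Q:.2f}")
```

Higher Q indicates a more clearly modular network; real analyses would use preprocessed hemodynamic signals and a principled threshold rather than the placeholders above.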
Affiliation(s)
- Meiyun Wu: State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Haotian Liu: Department of Otolaryngology Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China; Department of Otolaryngology Head and Neck Surgery, Peking University First Hospital, Beijing, China
- Xue Zhao: State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Li Lu: State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Yuyang Wang: Department of Otolaryngology Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Chaogang Wei: Department of Otolaryngology Head and Neck Surgery, Peking University First Hospital, Beijing, China
- Yuhe Liu: Department of Otolaryngology Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Yu-Xuan Zhang: State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
2
Mugitani R, Kashino M. Eight-Month-Old Infants Are Susceptible to the Auditory Continuity Illusion. Dev Psychobiol 2024; 66:e22551. PMID: 39344404. DOI: 10.1002/dev.22551.
Abstract
The real world is full of noise and constantly overlapping sounds. Our auditory system offers a solution: the continuity illusion. When we hear a sound stream that is partially replaced by high-level noise, we restore the missing information and "fill it in" so that the stream is perceived as smooth and continuous even against the background noise. In the present study, we tested 8-month-old infants' preferences for familiar and novel melodies after a 2-month memory retention interval following 1 week of exposure to a specific melody. A preference for familiarity was seen not only when the melody was presented intact but also when it was periodically replaced by high-level noise, which elicits the continuity illusion in adults (Experiment 1). However, a trend toward preference for the novel melody was observed for stimuli periodically replaced by low-level noise, which does not satisfy the ecological constraints for eliciting the illusion (Experiment 2). This study shows for the first time that infants as young as 8 months of age are susceptible to the auditory continuity illusion. It also reveals that infants can recognize a melody they heard 2 months previously.
Affiliation(s)
- Ryoko Mugitani: Department of Psychology, Faculty of Integrated Arts and Social Sciences, Japan Women's University, Bunkyo-Ku, Tokyo, Japan
- Makio Kashino: NTT Communication Science Laboratories, Atsugi, Kanagawa, Japan
3
Wu M, Wang Y, Zhao X, Xin T, Wu K, Liu H, Wu S, Liu M, Chai X, Li J, Wei C, Zhu C, Liu Y, Zhang YX. Anti-phasic oscillatory development for speech and noise processing in cochlear implanted toddlers. Child Dev 2024. PMID: 38742715. DOI: 10.1111/cdev.14105.
Abstract
The human brain demonstrates remarkable readiness for speech and language learning at birth, but the auditory development preceding such readiness remains unknown. Cochlear-implanted (CI) children (n = 67; mean age 2.77 ± 1.31 years; 28 females) with prelingual deafness provide a unique opportunity to study this stage. Using functional near-infrared spectroscopy, it was revealed that the brain of CI children was unresponsive to sounds at CI hearing onset. With increasing CI experience up to 32 months, the brain demonstrated function-, region-, and hemisphere-specific development. Most strikingly, the left anterior temporal lobe showed an oscillatory trajectory, changing in opposite phases for speech and noise. The study provides the first longitudinal brain imaging evidence for early auditory development preceding speech acquisition.
Affiliation(s)
- Meiyun Wu: State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Yuyang Wang: Department of Otolaryngology Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China; Department of Otolaryngology Head and Neck Surgery, Hunan Provincial People's Hospital (First Affiliated Hospital of Hunan Normal University), Changsha, China
- Xue Zhao: State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Tianyu Xin: Department of Otolaryngology Head and Neck Surgery, Peking University First Hospital, Beijing, China
- Kun Wu: State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Haotian Liu: Department of Otolaryngology Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China; Department of Otolaryngology Head and Neck Surgery, West China Hospital of Sichuan University, Chengdu, China
- Shinan Wu: State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Min Liu: State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Xiaoke Chai: State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Jinhong Li: State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Chaogang Wei: Department of Otolaryngology Head and Neck Surgery, Peking University First Hospital, Beijing, China
- Chaozhe Zhu: State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Yuhe Liu: Department of Otolaryngology Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Yu-Xuan Zhang: State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
4
Mariani B, Nicoletti G, Barzon G, Ortiz Barajas MC, Shukla M, Guevara R, Suweis SS, Gervain J. Prenatal experience with language shapes the brain. Sci Adv 2023; 9:eadj3524. PMID: 37992161. PMCID: PMC10664997. DOI: 10.1126/sciadv.adj3524.
Abstract
Human infants acquire language with notable ease compared to adults, but the neural basis of their remarkable brain plasticity for language remains little understood. Applying a scaling analysis of neural oscillations to address this question, we show that newborns' electrophysiological activity exhibits increased long-range temporal correlations after stimulation with speech, particularly in the prenatally heard language, indicating the early emergence of brain specialization for the native language.
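A common way to quantify the long-range temporal correlations referred to in this abstract is detrended fluctuation analysis (DFA). The sketch below is a generic DFA implementation applied to a placeholder signal; it is not necessarily the exact scaling analysis used by the authors, and the window sizes are illustrative.

```python
import numpy as np

def dfa_exponent(signal, scales):
    """Detrended fluctuation analysis: return the scaling exponent alpha."""
    x = np.cumsum(signal - np.mean(signal))            # integrated profile
    flucts = []
    for s in scales:
        n_seg = len(x) // s
        segs = x[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        # remove a linear trend from each window and take the RMS residual
        resid = [seg - np.polyval(np.polyfit(t, seg, 1), t) for seg in segs]
        flucts.append(np.sqrt(np.mean(np.square(resid))))
    # slope of log F(s) against log s is the scaling exponent
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

rng = np.random.default_rng(0)
white_noise = rng.standard_normal(5000)                # placeholder for an EEG amplitude envelope
scales = np.unique(np.logspace(np.log10(10), np.log10(500), 12).astype(int))
print(round(dfa_exponent(white_noise, scales), 2))     # ~0.5 for uncorrelated noise
```

An exponent near 0.5 indicates an uncorrelated signal, while values approaching 1 indicate stronger long-range temporal correlations of the kind the abstract describes.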
Affiliation(s)
- Benedetta Mariani: Department of Physics and Astronomy, University of Padua, Padua, Italy; Padova Neuroscience Center, University of Padua, Padua, Italy
- Giorgio Nicoletti: Department of Physics and Astronomy, University of Padua, Padua, Italy; Padova Neuroscience Center, University of Padua, Padua, Italy; Department of Mathematics, University of Padua, Padua, Italy
- Giacomo Barzon: Department of Physics and Astronomy, University of Padua, Padua, Italy; Padova Neuroscience Center, University of Padua, Padua, Italy
- Mohinish Shukla: Padova Neuroscience Center, University of Padua, Padua, Italy; Department of Developmental and Social Psychology, University of Padua, Padua, Italy; Faculty of Social and Behavioural Sciences, University of Amsterdam, Amsterdam, Netherlands
- Ramón Guevara: Department of Physics and Astronomy, University of Padua, Padua, Italy; Padova Neuroscience Center, University of Padua, Padua, Italy; Department of Developmental and Social Psychology, University of Padua, Padua, Italy
- Samir Simon Suweis: Department of Physics and Astronomy, University of Padua, Padua, Italy; Padova Neuroscience Center, University of Padua, Padua, Italy
- Judit Gervain: Padova Neuroscience Center, University of Padua, Padua, Italy; Integrative Neuroscience and Cognition Center, CNRS and Université Paris Cité, Paris, France; Department of Developmental and Social Psychology, University of Padua, Padua, Italy
5
Martinez-Alvarez A, Benavides-Varela S, Lapillonne A, Gervain J. Newborns discriminate utterance-level prosodic contours. Dev Sci 2023; 26:e13304. PMID: 35841609. DOI: 10.1111/desc.13304.
Abstract
Prosody is the fundamental organizing principle of spoken language, carrying lexical, morphosyntactic, and pragmatic information. It, therefore, provides highly relevant input for language development. Are infants sensitive to this important aspect of spoken language early on? In this study, we asked whether infants are able to discriminate well-formed utterance-level prosodic contours from ill-formed, backward prosodic contours at birth. This deviant prosodic contour was obtained by time-reversing the original one, and super-imposing it on the otherwise intact segmental information. The resulting backward prosodic contour was thus unfamiliar to the infants and ill-formed in French. We used near-infrared spectroscopy (NIRS) in 1-3-day-old French newborns (n = 25) to measure their brain responses to well-formed contours as standards and their backward prosody counterparts as deviants in the frontal, temporal, and parietal areas bilaterally. A cluster-based permutation test revealed greater responses to the Deviant than to the Standard condition in right temporal areas. These results suggest that newborns are already capable of detecting utterance-level prosodic violations at birth, a key ability for breaking into the native language, and that this ability is supported by brain areas similar to those in adults. RESEARCH HIGHLIGHTS: At birth, infants have sophisticated speech perception abilities. Prosody may be particularly important for early language development. We show that newborns are already capable of discriminating utterance-level prosodic contours. This discrimination can be localized to the right hemisphere of the neonate brain.
Affiliation(s)
- Anna Martinez-Alvarez: Department of Developmental Psychology and Socialization, University of Padua, Padua, Italy; Integrative Neuroscience and Cognition Center, Université Paris Cité & CNRS, Paris, France
- Alexandre Lapillonne: Hôpital Necker - Enfants Malades, Department of Neonatology, Université Paris Cité, Paris, France
- Judit Gervain: Department of Developmental Psychology and Socialization, University of Padua, Padua, Italy; Integrative Neuroscience and Cognition Center, Université Paris Cité & CNRS, Paris, France
6
Bücher S, Bernhofs V, Thieme A, Christiner M, Schneider P. Chronology of auditory processing and related co-activation in the orbitofrontal cortex depends on musical expertise. Front Neurosci 2023; 16:1041397. PMID: 36685231. PMCID: PMC9846135. DOI: 10.3389/fnins.2022.1041397.
Abstract
Introduction: The present study aims to explore the extent to which auditory processing is reflected in the prefrontal cortex. Methods: Using magnetoencephalography (MEG), we investigated the chronology of primary and secondary auditory responses and associated co-activation in the orbitofrontal cortex in a large cohort of 162 participants of various ages. The sample consisted of 38 primary school children, 39 adolescents, 43 younger, and 42 middle-aged adults and was further divided into musically experienced participants and non-musicians by quantifying musical training and aptitude parameters. Results: We observed that the co-activation in the orbitofrontal cortex [Brodmann Area 10 (BA10)] strongly depended on musical expertise but not on age. In the musically experienced groups, a systematic coincidence of peak latencies of the primary auditory P1 response and the co-activated response in the orbitofrontal cortex was observed in childhood at the onset of musical education. In marked contrast, in all non-musicians, the orbitofrontal co-activation occurred 25-40 ms later than the P1 response. Musical practice and musical aptitude contributed equally to the observed activation and co-activation patterns in the auditory and orbitofrontal cortex, confirming the reciprocal, interrelated influence of nature and nurture in the musical brain. Discussion: Based on the observed age-independent differences in the chronology and lateralization of neural responses, we suggest that orbitofrontal functions may contribute to musical learning at an early age.
Affiliation(s)
- Steffen Bücher: Section of Biomagnetism Heidelberg, Department of Neurology, Faculty of Medicine Heidelberg, Heidelberg, Germany
- Andrea Thieme: Section of Biomagnetism Heidelberg, Department of Neurology, Faculty of Medicine Heidelberg, Heidelberg, Germany
- Markus Christiner: Jāzeps Vītols Latvian Academy of Music, Riga, Latvia; Centre of Systematic Musicology, University of Graz, Graz, Austria
- Peter Schneider: Section of Biomagnetism Heidelberg, Department of Neurology, Faculty of Medicine Heidelberg, Heidelberg, Germany; Jāzeps Vītols Latvian Academy of Music, Riga, Latvia; Centre of Systematic Musicology, University of Graz, Graz, Austria; Department of Neuroradiology, Medical School Heidelberg, Heidelberg, Germany
7
|
Llanos F, Zhao TC, Kuhl PK, Chandrasekaran B. The emergence of idiosyncratic patterns in the frequency-following response during the first year of life. JASA EXPRESS LETTERS 2022; 2:054401. [PMID: 35578694 PMCID: PMC9096806 DOI: 10.1121/10.0010493] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/28/2022] [Accepted: 04/24/2022] [Indexed: 06/15/2023]
Abstract
The frequency-following response (FFR) is a scalp-recorded signal that reflects phase-locked activity from neurons across the auditory system. In addition to capturing information about sounds, the FFR conveys biometric information, reflecting individual differences in auditory processing. To investigate the development of FFR biometric patterns, we trained a pattern recognition model to recognize infants (N = 16) from FFRs collected at 7 and 11 months. Model recognition scores were used to index the robustness of FFR biometric patterns at each time. Results showed better recognition scores at 11 months, demonstrating the emergence of robust FFR idiosyncratic patterns during this first year of life.
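To make the pattern-recognition approach described above concrete, here is a minimal, hypothetical sketch of training a classifier to recognize individuals from FFR features and scoring it with cross-validation. The feature representation, classifier, and data shapes are assumptions for illustration, not the authors' actual model.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, StratifiedKFold

# Placeholder data: X holds FFR-derived feature vectors (one row per recording),
# y holds infant identity labels (16 infants x 10 recordings each).
rng = np.random.default_rng(0)
X = rng.standard_normal((160, 200))
y = np.repeat(np.arange(16), 10)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print("mean recognition accuracy:", scores.mean())
```

In the logic of the study, higher recognition accuracy at a given age would index more robust, idiosyncratic FFR patterns; with the random placeholder features above, accuracy stays at chance.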
Affiliation(s)
- Fernando Llanos: Department of Linguistics, University of Texas at Austin, Austin, Texas 78712, USA
- T Christina Zhao: Department of Speech and Hearing Sciences, University of Washington, Seattle, Washington 98195, USA
- Patricia K Kuhl: Institute for Learning and Brain Sciences, University of Washington, Seattle, Washington 98195, USA
- Bharath Chandrasekaran: Department of Communication Sciences and Disorders, University of Pittsburgh, Pittsburgh, Pennsylvania 15260, USA
8
Jahn KN, Arenberg JG, Horn DL. Spectral Resolution Development in Children With Normal Hearing and With Cochlear Implants: A Review of Behavioral Studies. J Speech Lang Hear Res 2022; 65:1646-1658. PMID: 35201848. PMCID: PMC9499384. DOI: 10.1044/2021_jslhr-21-00307.
Abstract
PURPOSE This review article provides a theoretical overview of the development of spectral resolution in children with normal hearing (cNH) and in those who use cochlear implants (CIs), with an emphasis on methodological considerations. The aim was to identify key directions for future research on spectral resolution development in children with CIs. METHOD A comprehensive literature review was conducted to summarize and synthesize previously published behavioral research on spectral resolution development in normal and impaired auditory systems. CONCLUSIONS In cNH, performance on spectral resolution tasks continues to improve through the teenage years and is likely driven by gradual maturation of across-channel intensity resolution. A small but growing body of evidence from children with CIs suggests a more complex relationship between spectral resolution development, patient demographics, and the quality of the CI electrode-neuron interface. Future research should aim to distinguish between the effects of patient-specific variables and the underlying physiology on spectral resolution abilities in children of all ages who are hard of hearing and use auditory prostheses.
Affiliation(s)
- Kelly N. Jahn: Department of Speech, Language, and Hearing, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson; Callier Center for Communication Disorders, The University of Texas at Dallas
- Julie G. Arenberg: Department of Otolaryngology – Head and Neck Surgery, Harvard Medical School, Boston, MA; Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston
- David L. Horn: Virginia Merrill Bloedel Hearing Research Center, Department of Otolaryngology – Head and Neck Surgery, University of Washington, Seattle; Division of Otolaryngology, Seattle Children's Hospital, WA
9
Cruz S, Crego A, Moreira C, Ribeiro E, Gonçalves Ó, Ramos R, Sampaio A. Cortical auditory evoked potentials in 1-month-old infants predict language outcomes at 12 months. Infancy 2022; 27:324-340. PMID: 35037391. DOI: 10.1111/infa.12454.
Abstract
The neurophysiological assessment of infants in their first developmental year can provide important information about the functional changes of the brain and supports the study of behavioral and developmental characteristics. Infants' cortical auditory evoked potentials (CAEPs) reflect cortical maturation and appear to predict subsequent language abilities. This study aimed to identify CAEP components to two auditory stimulus intensities in 1-month-old infants and to understand how these are associated with social interactive and self-regulatory behaviors. In addition, it examined whether CAEPs predicted developmental outcomes when infants were assessed at 12 months of age. At 1 month, P2 and N2 components were present for both auditory stimulus intensities, with an increased P2 amplitude being observed for the higher-intensity stimuli. We also observed that an increased P2 amplitude in the lower intensity predicted receptive and expressive language competencies at 12 months. These results are consistent with previous findings indicating an association between auditory processing and developmental outcomes in infants. This study suggests that specific auditory neurophysiological markers are associated with developmental outcomes in the first developmental year.
Affiliation(s)
- Sara Cruz: The Psychology for Positive Development Research Center (CIPD), Lusíada University North, Porto, Portugal
- Alberto Crego: Psychological Neuroscience Laboratory, Research Center in Psychology (CIPsi), School of Psychology, University of Minho, Braga, Portugal
- Carla Moreira: Centre of Mathematics, School of Sciences, University of Minho, Braga, Portugal
- Eugénia Ribeiro: Research Center in Psychology (CIPsi), School of Psychology, University of Minho, Braga, Portugal
- Óscar Gonçalves: Proaction Lab, CINEICC, Faculdade de Psicologia e de Ciências da Educação, Universidade de Coimbra, Coimbra, Portugal
- Rita Ramos: Psychological Neuroscience Laboratory, Research Center in Psychology (CIPsi), School of Psychology, University of Minho, Braga, Portugal
- Adriana Sampaio: Psychological Neuroscience Laboratory, Research Center in Psychology (CIPsi), School of Psychology, University of Minho, Braga, Portugal
10
Lau BK, Oxenham AJ, Werner LA. Infant Pitch and Timbre Discrimination in the Presence of Variation in the Other Dimension. J Assoc Res Otolaryngol 2021; 22:693-702. PMID: 34519951. DOI: 10.1007/s10162-021-00807-1.
Abstract
Adult listeners perceive pitch with fine precision, with many adults capable of discriminating less than a 1 % change in fundamental frequency (F0). Although there is variability across individuals, this precise pitch perception is an ability ascribed to cortical functions that are also important for speech and music perception. Infants display neural immaturity in the auditory cortex, suggesting that pitch discrimination may improve throughout infancy. In two experiments, we tested the limits of F0 (pitch) and spectral centroid (timbre) perception in 66 infants and 31 adults. Contrary to expectations, we found that infants at both 3 and 7 months were able to reliably detect small changes in F0 in the presence of random variations in spectral content, and vice versa, to the extent that their performance matched that of adults with musical training and exceeded that of adults without musical training. The results indicate high fidelity of F0 and spectral-envelope coding in infants, implying that fully mature cortical processing is not necessary for accurate discrimination of these features. The surprising difference in performance between infants and musically untrained adults may reflect a developmental trajectory for learning natural statistical covariations between pitch and timbre that improves coding efficiency but results in degraded performance in adults without musical training when expectations for such covariations are violated.
Affiliation(s)
- Bonnie K Lau: Institute for Language and Brain Sciences, University of Washington, 1715 NE Columbia Rd, Box 357988, Seattle, WA, 98195, USA; Department of Otolaryngology - Head and Neck Surgery, University of Washington, 1701 NE Columbia Rd, Box 357923, Seattle, WA, 98195, USA
- Andrew J Oxenham: Department of Psychology, University of Minnesota, 75 East River Parkway, Minneapolis, MN, 55455, USA
- Lynne A Werner: Department of Speech and Hearing Sciences, University of Washington, 1417 NE 42nd Street, Box 354875, Seattle, WA, 98105, USA
11
Ortiz Barajas MC, Gervain J. The Role of Prenatal Experience and Basic Auditory Mechanisms in the Development of Language. Minnesota Symposia on Child Psychology 2021. DOI: 10.1002/9781119684527.ch4.
12
Aberrant auditory system and its developmental implications for autism. Sci China Life Sci 2021; 64:861-878. DOI: 10.1007/s11427-020-1863-6.
13
Penhune VB. A gene-maturation-environment model for understanding sensitive period effects in musical training. Curr Opin Behav Sci 2020. DOI: 10.1016/j.cobeha.2020.05.011.
14
Schwartz S, Wang L, Shinn-Cunningham BG, Tager-Flusberg H. Neural Evidence for Speech Processing Deficits During a Cocktail Party Scenario in Minimally and Low Verbal Adolescents and Young Adults with Autism. Autism Res 2020; 13:1828-1842. PMID: 32827357. DOI: 10.1002/aur.2356.
Abstract
As demonstrated by the Cocktail Party Effect, a person's attention is grabbed when they hear their name in a multispeaker setting. However, individuals with autism (ASD) are commonly challenged in multispeaker settings and often do not respond to salient speech, including one's own name (OON). It is unknown whether neural responses during this Cocktail Party scenario differ in those with ASD and whether such differences are associated with expressive language or auditory filtering abilities. We measured neural responses to hearing OON in quiet and multispeaker settings using electroencephalography in 20 minimally or low verbal ASD (ASD-MLV), 27 verbally fluent ASD (ASD-V), and 27 neurotypical (TD) participants, ages 13-22. First, we determined whether TD's neural responses to OON relative to other names could be quantified with early frontal mismatch responses (MMRs) and late, slow shift parietal and frontal responses (LPPs/FNs). Second, we compared the strength of MMRs and LPPs/FNs across the three groups. Third, we tested whether participants with poorer auditory filtering abilities exhibited particularly weak neural responses to OON heard in a multispeaker setting. Our primary finding was that TDs and ASD-Vs, but not ASD-MLVs, had significant MMRs to OON in a multispeaker setting, and strength of LPPs positively correlated with auditory filtering abilities in those with ASD. These findings reveal electrophysiological correlates of auditory filtering disruption within a clinical population that has severe language and communication impairments and offer a novel neuroimaging approach to studying the Cocktail Party effect in neurotypical and clinical populations. LAY SUMMARY: We found that minimally and low verbal adolescents and young adults with autism exhibit decreased neural responses to one's own name when heard in a multispeaker setting. In addition, decreased strength of neural responses in those with autism correlated with decreased auditory filtering abilities. We propose that these neural deficits may reflect the ineffective processing of salient speech in noisy settings and contribute to language and communication deficits observed in autism.
Affiliation(s)
- Sophie Schwartz: Department of Psychological and Brain Sciences, Boston University, Boston, Massachusetts, USA; Graduate Program for Neuroscience, Boston University School of Medicine, Boston, Massachusetts, USA
- Le Wang: Department of Biomedical Engineering, Boston University, Boston, Massachusetts, USA
- Barbara G Shinn-Cunningham: Department of Biomedical Engineering, Boston University, Boston, Massachusetts, USA; Neuroscience Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA
- Helen Tager-Flusberg: Department of Psychological and Brain Sciences, Boston University, Boston, Massachusetts, USA
15
Oster MM, Werner LA. Infants' use of isolated and combined temporal cues in speech sound segregation. J Acoust Soc Am 2020; 148:401. PMID: 32752747. PMCID: PMC7386947. DOI: 10.1121/10.0001582.
Abstract
This paper investigates infants' and adults' use of envelope cues and combined onset asynchrony and envelope cues in the segregation of concurrent vowels. Listeners heard superimposed vowel pairs consisting of two different vowels spoken by a male and a female talker and were trained to respond to one specific target vowel, either the male /u:/ or male /i:/. Vowel detection was measured in three conditions. In the baseline condition the two superimposed vowels had similar amplitude envelopes and synchronous onset. In the envelope cue condition, the amplitude envelopes of the two vowels differed. In the combined cue condition, both the onset time and amplitude envelopes of the two vowels differed. Seven-month-old infants' concurrent vowel segregation improved both with envelope and with combined onset asynchrony and envelope cues to the same extent as adults'. A preliminary investigation with 3-month-old infants suggested that neither envelope cues nor combined asynchrony and envelope cues improved their ability to detect the target vowel. Taken together, these results suggest that envelope and combined onset-asynchrony cues are available to infants as they attempt to process competing speech sounds, at least after 7 months of age.
Affiliation(s)
- Monika-Maria Oster: Listen and Talk, 8610 8th Avenue Northeast, Seattle, Washington 98115, USA
- Lynne A Werner: Department of Speech and Hearing Sciences, University of Washington, 1417 Northeast 42nd Street, Seattle, Washington 98105, USA
16
Wang X, Zhu M, Samuel OW, Wang X, Zhang H, Yao J, Lu Y, Wang M, Mukhopadhyay SC, Wu W, Chen S, Li G. The Effects of Random Stimulation Rate on Measurements of Auditory Brainstem Response. Front Hum Neurosci 2020; 14:78. PMID: 32265673. PMCID: PMC7098959. DOI: 10.3389/fnhum.2020.00078.
Abstract
The electroencephalography (EEG) signal is an electrophysiological recording from electrodes placed on the scalp that reflects the electrical activity of the brain. The auditory brainstem response (ABR) is one type of EEG signal evoked by an auditory stimulus, and it has been widely used to evaluate potential disorders of auditory function within the brain. Clinical ABR measurements usually adopt a fixed stimulation rate (FSR) technique, in which the late evoked response can contaminate the ABR signals and deteriorate waveform differentiation after averaging, thus compromising the overall auditory function assessment. To resolve this issue, this study proposed a random stimulation rate (RSR) method that adds a random interval between two adjacent stimuli. The results showed that the proposed RSR method was consistently repeatable and reliable across multiple trials of repeated measurements, whereas with conventional FSR methods a large-amplitude successive late evoked response contaminated the ABR signals. The ABR waveforms of the RSR method showed better wave I–V morphology across different stimulation rates and stimulus levels, and the improved ABR morphology plays an important role in early diagnosis of auditory pathway abnormalities. The correlation coefficients as functions of averaging time showed that the ABR waveform of the RSR method stabilizes significantly faster, and it could therefore be used to speed up current ABR measurements while yielding more reliable results. The study suggests that the proposed method could aid the reliable reconstruction of ABR signals towards more effective hearing loss screening, brain function diagnosis, and potential brain-computer interfaces.
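To make the contrast between fixed and random stimulation rates concrete, the sketch below simulates jittered stimulus onset times and fixed-length epoch averaging. The sampling rate, base interval, and jitter range are illustrative values, not the parameters used in the study.

```python
import numpy as np

fs = 16000                          # sampling rate in Hz (illustrative)
n_stim = 500
base_isi = 0.070                    # mean inter-stimulus interval in seconds (illustrative)
rng = np.random.default_rng(1)

# Fixed-rate onsets vs. random-rate onsets with a uniform jitter added to every interval
fixed_onsets = np.arange(n_stim) * base_isi
random_onsets = np.cumsum(base_isi + rng.uniform(-0.015, 0.015, size=n_stim))

def average_abr(eeg, onsets, fs, win=0.012):
    """Average fixed-length post-stimulus epochs. With randomized onsets, activity
    that is not time-locked to the current stimulus (e.g., late responses to
    preceding stimuli) no longer adds coherently and is attenuated by averaging."""
    n = int(win * fs)
    starts = (onsets * fs).astype(int)
    epochs = [eeg[i:i + n] for i in starts if i + n <= len(eeg)]
    return np.mean(epochs, axis=0)

# Placeholder recording long enough to cover all onsets
eeg = rng.standard_normal(int((random_onsets[-1] + 1) * fs))
print(average_abr(eeg, fixed_onsets, fs).shape, average_abr(eeg, random_onsets, fs).shape)
```

The benefit claimed for the RSR method comes from the averaging step: jitter decorrelates overlapping late responses from the stimulus-locked brainstem response, so the average converges on the ABR more quickly.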
Affiliation(s)
- Xin Wang: CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences (CAS), Shenzhen, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Mingxing Zhu: CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences (CAS), Shenzhen, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Oluwarotimi Williams Samuel: CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences (CAS), Shenzhen, China
- Xiaochen Wang: CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences (CAS), Shenzhen, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Haoshi Zhang: CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences (CAS), Shenzhen, China
- Junjie Yao: The Duke Institute for Brain Sciences, Duke University, Durham, NC, United States
- Yun Lu: The School of Electronics and Information Engineering, Shenzhen Graduate School, Harbin Institute of Technology, Shenzhen, China
- Mingjiang Wang: The School of Electronics and Information Engineering, Shenzhen Graduate School, Harbin Institute of Technology, Shenzhen, China
- Wanqing Wu: The School of Biomedical Engineering, Sun Yat-Sen University, Guangzhou, China
- Shixiong Chen: CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences (CAS), Shenzhen, China
- Guanglin Li: CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences (CAS), Shenzhen, China
17
Lalonde K, Werner LA. Infants and Adults Use Visual Cues to Improve Detection and Discrimination of Speech in Noise. J Speech Lang Hear Res 2019; 62:3860-3875. PMID: 31618097. PMCID: PMC7201336. DOI: 10.1044/2019_jslhr-h-19-0106.
Abstract
Purpose This study assessed the extent to which 6- to 8.5-month-old infants and 18- to 30-year-old adults detect and discriminate auditory syllables in noise better in the presence of visual speech than in auditory-only conditions. In addition, we examined whether visual cues to the onset and offset of the auditory signal account for this benefit. Method Sixty infants and 24 adults were randomly assigned to speech detection or discrimination tasks and were tested using a modified observer-based psychoacoustic procedure. Each participant completed 1-3 conditions: auditory-only, with visual speech, and with a visual signal that only cued the onset and offset of the auditory syllable. Results Mixed linear modeling indicated that infants and adults benefited from visual speech on both tasks. Adults relied on the onset-offset cue for detection, but the same cue did not improve their discrimination. The onset-offset cue benefited infants for both detection and discrimination. Whereas the onset-offset cue improved detection similarly for infants and adults, the full visual speech signal benefited infants to a lesser extent than adults on the discrimination task. Conclusions These results suggest that infants' use of visual onset-offset cues is mature, but their ability to use more complex visual speech cues is still developing. Additional research is needed to explore differences in audiovisual enhancement (a) of speech discrimination across speech targets and (b) with increasingly complex tasks and stimuli.
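The abstract above reports results from mixed linear modeling. As a generic illustration of how such a model can be specified in Python, the sketch below fits a random-intercept model to simulated placeholder data; the formula, variable names, and effect sizes are assumptions for illustration, not the authors' analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for i in range(40):
    group = "infant" if i < 20 else "adult"
    subj_offset = rng.normal(0, 0.3)                 # per-subject random intercept
    for cond, bump in [("auditory_only", 0.0), ("audiovisual", 0.5)]:
        base = 0.8 if group == "infant" else 1.8
        rows.append({"subject": f"s{i:02d}", "age_group": group, "condition": cond,
                     "dprime": base + bump + subj_offset + rng.normal(0, 0.2)})
df = pd.DataFrame(rows)

# Fixed effects of condition, age group, and their interaction; random intercept per subject
model = smf.mixedlm("dprime ~ condition * age_group", df, groups=df["subject"])
print(model.fit().summary())
```

The interaction term is what would capture the finding that the full visual speech signal benefited infants less than adults on the discrimination task.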
Affiliation(s)
- Kaylah Lalonde: Department of Speech & Hearing Sciences, University of Washington, Seattle
- Lynne A. Werner: Department of Speech & Hearing Sciences, University of Washington, Seattle
18
Leibold LJ, Buss E. Masked Speech Recognition in School-Age Children. Front Psychol 2019; 10:1981. PMID: 31551862. PMCID: PMC6733920. DOI: 10.3389/fpsyg.2019.01981.
Abstract
Children who are typically developing often struggle to hear and understand speech in the presence of competing background sounds, particularly when the background sounds are also speech. For example, in many cases, young school-age children require an additional 5- to 10-dB signal-to-noise ratio relative to adults to achieve the same word or sentence recognition performance in the presence of two streams of competing speech. Moreover, adult-like performance is not observed until adolescence. Despite ample converging evidence that children are more susceptible to auditory masking than adults, the field lacks a comprehensive model that accounts for the development of masked speech recognition. This review provides a synthesis of the literature on the typical development of masked speech recognition. Age-related changes in the ability to recognize phonemes, words, or sentences in the presence of competing background sounds will be discussed by considering (1) how masking sounds influence the sensory encoding of target speech; (2) differences in the time course of development for speech-in-noise versus speech-in-speech recognition; and (3) the central auditory and cognitive processes required to separate and attend to target speech when multiple people are speaking at the same time.
Affiliation(s)
- Lori J Leibold: Human Auditory Development Laboratory, Department of Research, Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE, United States
- Emily Buss: Psychoacoustics Laboratories, Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
19
Musical playschool activities are linked to faster auditory development during preschool-age: a longitudinal ERP study. Sci Rep 2019; 9:11310. PMID: 31383938. PMCID: PMC6683192. DOI: 10.1038/s41598-019-47467-z.
Abstract
The influence of musical experience on brain development has mostly been studied in school-aged children with formal musical training, while little is known about the possible effects of the less formal musical activities typical for preschool-aged children (e.g., before the age of seven). In the current study, we investigated whether the amount of musical group activity is reflected in the maturation of neural sound discrimination from toddler to preschool age. Specifically, we recorded event-related potentials longitudinally (84 recordings from 33 children) in a mismatch negativity (MMN) paradigm to different musically relevant sound changes at ages 2–3, 4–5 and 6–7 years from children who attended a musical playschool throughout the follow-up period and children with shorter attendance at the same playschool. In the first group, we found a gradual positive-to-negative shift in the polarities of the mismatch responses, while the latter group showed little evidence of age-related changes in neural sound discrimination. The current study indicates that the maturation of sound encoding indexed by the MMN may be more protracted than once thought and provides the first longitudinal evidence that even quite informal musical group activities facilitate the development of neural sound discrimination during early childhood.
20
Haider HF, Bojić T, Ribeiro SF, Paço J, Hall DA, Szczepek AJ. Pathophysiology of Subjective Tinnitus: Triggers and Maintenance. Front Neurosci 2018; 12:866. PMID: 30538616. PMCID: PMC6277522. DOI: 10.3389/fnins.2018.00866.
Abstract
Tinnitus is the conscious perception of a sound without a corresponding external acoustic stimulus, usually described as a phantom perception. One of the major challenges for tinnitus research is to understand the pathophysiological mechanisms triggering and maintaining the symptoms, especially for subjective chronic tinnitus. Our objective was to synthesize the published literature in order to provide a comprehensive update on theoretical and experimental advances and to identify further research and clinical directions. We performed literature searches in three electronic databases, complemented by scanning reference lists from relevant reviews in our included records, citation searching of the included articles using Web of Science, and manual searching of the last 6 months of principal otology journals. One hundred and thirty-two records were included in the review, and the information related to peripheral and central mechanisms of tinnitus pathophysiology was collected in order to update theories and models. A narrative synthesis examined the main themes arising from this information. Tinnitus pathophysiology is complex and multifactorial, involving the auditory and non-auditory systems. Recent theories assume the necessary involvement of extra-auditory brain regions for tinnitus to reach consciousness. Tinnitus engages multiple active, dynamic, and overlapping networks. We conclude that advancing knowledge of the mechanisms by which specific tinnitus subtypes originate and are maintained is of paramount importance for identifying adequate treatments.
Affiliation(s)
- Haúla Faruk Haider: ENT Department, Hospital Cuf Infante Santo - NOVA Medical School, Lisbon, Portugal
- Tijana Bojić: Laboratory of Radiobiology and Molecular Genetics, Vinča Institute of Nuclear Sciences, University of Belgrade, Belgrade, Serbia
- Sara F Ribeiro: ENT Department, Hospital Cuf Infante Santo - NOVA Medical School, Lisbon, Portugal
- João Paço: ENT Department, Hospital Cuf Infante Santo - NOVA Medical School, Lisbon, Portugal
- Deborah A Hall: NIHR Nottingham Biomedical Research Centre, Nottingham, United Kingdom; Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, United Kingdom; Queen's Medical Centre, Nottingham University Hospitals NHS Trust, Nottingham, United Kingdom; University of Nottingham Malaysia, Semeniyh, Malaysia
- Agnieszka J Szczepek: Department of Otorhinolaryngology, Head and Neck Surgery, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany
21
Oster MM, Werner LA. Infants use onset asynchrony cues in auditory scene analysis. J Acoust Soc Am 2018; 144:2052. PMID: 30404496. PMCID: PMC6181648. DOI: 10.1121/1.5058397.
Abstract
This experiment investigated the effect of onset asynchrony on the segregation of concurrent vowels in infants and adults. Two vowels, randomly chosen from seven American-English vowels, were superimposed. Each vowel pair contained one vowel by a male and one by a female talker. A train of such vowel pairs was presented to listeners, who were trained to respond to the male target vowel /i:/ or /u:/. The ability to identify the target vowel was compared among three conditions: synchronous onset, 100-, and 200-ms onset asynchrony. Experiment 1 measured performance, in d', in 7-month-old infants and adults. Infants and adults performed better with asynchronous than synchronous vowel onset, regardless of asynchrony duration. Experiment 2 compared the proportion of 3-month-old infants achieving an 80% correct criterion with and without onset asynchrony. Significantly more infants reached criterion with asynchronous than with synchronous vowel onset. Asynchrony duration did not influence performance. These experiments show that infants, as young as 3 months old, benefit from onset asynchrony.
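Performance in Experiment 1 is reported in d'. For reference, a minimal sketch of the standard signal-detection computation is shown below, using a common log-linear correction for extreme rates; the counts are made up for illustration.

```python
from scipy.stats import norm

def dprime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' from response counts, with a standard
    correction so that hit/false-alarm rates of 0 or 1 do not
    produce infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example: 18 hits on 25 target trials, 6 false alarms on 25 non-target trials
print(round(dprime(18, 7, 6, 19), 2))
```

Larger d' values indicate better segregation of the target vowel from the competing vowel, independent of response bias.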
Affiliation(s)
- Monika-Maria Oster: Department of Speech and Hearing Sciences, University of Washington, 1417 Northeast 42nd Street, Seattle, Washington 98105, USA
- Lynne A Werner: Department of Speech and Hearing Sciences, University of Washington, 1417 Northeast 42nd Street, Seattle, Washington 98105, USA
22
Cusack R, Wild CJ, Zubiaurre-Elorza L, Linke AC. Why does language not emerge until the second year? Hear Res 2018; 366:75-81. PMID: 30029804. DOI: 10.1016/j.heares.2018.05.004.
Abstract
From their second year, infants typically begin to show rapid acquisition of receptive and expressive language. Here, we ask why these language skills do not begin to develop earlier. One evolutionary hypothesis is that infants are born when many brain systems are immature and not yet functioning, including those critical to language, because human infants have a large head and the mother's pelvis size is limited, necessitating an early birth. An alternative proposal, inspired by discoveries in machine learning, is that the language systems are mature enough to function but need auditory experience to develop effective representations of speech before the language functions that manifest in behaviour can emerge. Growing evidence, in particular from neuroimaging, supports this latter hypothesis. We have previously shown with magnetic resonance imaging (MRI) that the acoustic radiation, carrying rich information to auditory cortex, is largely mature by 1 month, and using functional MRI (fMRI) that auditory cortex is processing many complex features of natural sounds by 3 months. However, speech perception relies upon a network of regions beyond auditory cortex, and it is not established whether this network is mature. Here we measure the maturity of the speech network using functional connectivity with fMRI in infants at 3 months (N = 6) and 9 months (N = 7), and in an adult comparison group (N = 15). We find that functional connectivity in speech networks is mature at 3 months, suggesting that the delay in the onset of language is not due to brain immaturity but rather to the time needed to develop representations through experience. Future avenues for the study of language development are proposed, and the implications for clinical care and infant education are discussed.
Affiliation(s)
- Rhodri Cusack: Trinity College Institute of Neuroscience, Trinity College Dublin, Ireland; Brain and Mind Institute, Western University, London, Canada
- Conor J Wild: Brain and Mind Institute, Western University, London, Canada
- Leire Zubiaurre-Elorza: Brain and Mind Institute, Western University, London, Canada; Department of Methods and Experimental Psychology, University of Deusto, Bilbao, Spain
- Annika C Linke: Brain and Mind Institute, Western University, London, Canada; San Diego State University, San Diego, CA, USA
23
Zubiaurre-Elorza L, Linke AC, Herzmann C, Wild CJ, Duffy H, Lee DSC, Han VK, Cusack R. Auditory structural connectivity in preterm and healthy term infants during the first postnatal year. Dev Psychobiol 2018; 60:256-264. PMID: 29355936. DOI: 10.1002/dev.21610.
Abstract
Assessing language development in the first postnatal year is difficult, as receptive and expressive skills are rudimentary. Although outward manifestations of change are limited, the auditory language system is thought to undergo critical development at this age, as the foundations are laid for the rapid onset of spoken language in the second and third years. We recruited 11 infants, 7 healthy term controls (gestational age = 40.69 ± 0.56 weeks; range 40 to 41.43) and 4 preterm infants (gestational age = 28.04 ± 0.95 weeks; range 27.43 to 29.43), who underwent a magnetic resonance imaging study during the first postnatal year (age at scan = 194.18 ± 97.98 days). We assessed white matter tracts using diffusion-weighted magnetic resonance imaging with probabilistic tractography. Fractional anisotropy was found to be largely mature even at one month, although there was a small further increase during the first postnatal year in both the acoustic radiation and the direct brainstem-Heschl's pathway.
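Fractional anisotropy, the measure tracked in this study, has a standard closed-form definition in terms of the three diffusion-tensor eigenvalues. The snippet below computes it for an illustrative voxel; the eigenvalues are made up, and real pipelines would obtain them from fitted tensors in each voxel along the tract.

```python
import numpy as np

def fractional_anisotropy(l1, l2, l3):
    """FA from the diffusion-tensor eigenvalues (standard definition):
    FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||."""
    lam = np.array([l1, l2, l3], dtype=float)
    md = lam.mean()                                   # mean diffusivity
    num = np.sqrt(((lam - md) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    return np.sqrt(1.5) * num / den if den > 0 else 0.0

# Example: a fairly anisotropic white-matter voxel (units of mm^2/s)
print(round(fractional_anisotropy(1.7e-3, 0.4e-3, 0.3e-3), 2))
```

FA ranges from 0 (isotropic diffusion) to 1 (diffusion along a single axis), which is why it is used as an index of white-matter maturation in tracts such as the acoustic radiation.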
Affiliation(s)
- Leire Zubiaurre-Elorza: Brain and Mind Institute, Western University, London, Canada; Faculty of Psychology and Education, Department of Methods and Experimental Psychology, University of Deusto, Bilbao, Spain
- Annika C Linke: Brain and Mind Institute, Western University, London, Canada
- Conor J Wild: Brain and Mind Institute, Western University, London, Canada
- Hester Duffy: Brain and Mind Institute, Western University, London, Canada
- David S C Lee: Children's Health Research Institute, London, Canada
- Victor K Han: Children's Health Research Institute, London, Canada
- Rhodri Cusack: Brain and Mind Institute, Western University, London, Canada; Children's Health Research Institute, London, Canada
24
Wang X, Keefe DH, Gan RZ. Predictions of middle-ear and passive cochlear mechanics using a finite element model of the pediatric ear. J Acoust Soc Am 2016; 139:1735. PMID: 27106321. PMCID: PMC4833734. DOI: 10.1121/1.4944949.
Abstract
A finite element (FE) model was developed based on histological sections of a temporal bone of a 4-year-old child to simulate middle-ear and cochlear function in ears with normal hearing and otitis media. This pediatric model of the normal ear, consisting of an ear canal, middle ear, and spiral cochlea, was first validated with published energy absorbance (EA) measurements in young children with normal ears. The model was used to simulate EA in an ear with middle-ear effusion, whose results were compared to clinical EA measurements. The spiral cochlea component of the model was constructed under the assumption that the mechanics were passive. The FE model predicted middle-ear transfer functions between the ear canal and cochlea. Effects of ear structure and mechanical properties of soft tissues were compared in model predictions for the pediatric and adult ears. EA responses are predicted to differ between adult and pediatric ears due to differences in the stiffness and damping of soft tissues within the ear, and any residual geometrical differences between the adult ear and pediatric ear at age 4 years. The results have significance for predicting effects of otitis media in children.
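Energy absorbance (EA), the quantity against which the model was validated, is conventionally defined from the complex pressure reflectance R(f) as EA = 1 - |R|^2 in the wideband acoustic immittance literature. The snippet below shows that computation with an illustrative reflectance value; it is a definitional aside, not part of the authors' finite element model.

```python
import numpy as np

def energy_absorbance(pressure_reflectance):
    """Fraction of incident acoustic power absorbed by the middle ear,
    EA(f) = 1 - |R(f)|^2, from the complex pressure reflectance."""
    R = np.asarray(pressure_reflectance, dtype=complex)
    return 1.0 - np.abs(R) ** 2

# Example: |R| = 0.6 at some frequency -> 64% of incident power absorbed
print(energy_absorbance([0.6 + 0.0j]))
```

Differences in soft-tissue stiffness and damping between pediatric and adult ears change R(f), and hence the predicted EA curves the abstract refers to.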
Affiliation(s)
- Xuelin Wang: School of Aerospace and Mechanical Engineering and Biomedical Engineering Center, University of Oklahoma, Norman, Oklahoma 73019, USA
- Douglas H Keefe: Boys Town National Research Hospital, Omaha, Nebraska 68131, USA
- Rong Z Gan: School of Aerospace and Mechanical Engineering and Biomedical Engineering Center, University of Oklahoma, Norman, Oklahoma 73019, USA
25
Somatic memory and gain increase as preconditions for tinnitus: Insights from congenital deafness. Hear Res 2016; 333:37-48. DOI: 10.1016/j.heares.2015.12.018.
|
26
|
Pundir AS, Singh UA, Ahuja N, Makhija S, Dikshit PC, Radotra B, Kumar P, Shankar SK, Mahadevan A, Roy TS, Iyengar S. Growth and refinement of excitatory synapses in the human auditory cortex. Brain Struct Funct 2015; 221:3641-74. [PMID: 26438332 DOI: 10.1007/s00429-015-1124-6] [Citation(s) in RCA: 33] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2014] [Accepted: 09/25/2015] [Indexed: 02/03/2023]
Abstract
We had earlier demonstrated a neurofilament-rich plexus of axons in the presumptive human auditory cortex during fetal development, which became adult-like during infancy. To elucidate the origin of these axons, we studied the expression of the vesicular glutamate transporters (VGLUT) 1 and 2 in the human auditory cortex at different stages of development. Whereas VGLUT-1 expression predominates in intrinsic and cortico-cortical synapses, VGLUT-2 expression predominates in thalamocortical synapses. Levels of VGLUT-2 mRNA were higher in the auditory cortex before birth than during postnatal development. In contrast, levels of VGLUT-1 mRNA were low before birth and increased during postnatal development, peaking during childhood and then beginning to decrease in adolescence. Both VGLUT-1 and VGLUT-2 proteins were present in the human auditory cortex as early as 15 gestational weeks (GW). Furthermore, immunohistochemistry revealed that at 34 GW the supra- and infragranular layers were more immunoreactive for VGLUT-1 than Layer IV, and this pattern was maintained until adulthood. As with VGLUT-1 mRNA, VGLUT-1 synapses increased in density between prenatal development and childhood in the human auditory cortex, after which they appeared to undergo attrition or pruning. The adult pattern of VGLUT-2 immunoreactivity (a dense band of VGLUT-2-positive terminals in Layer IV) also began to appear in the presumptive Heschl's gyrus at 34 GW. The density of VGLUT-2-positive puncta in Layer IV increased between prenatal development and adolescence, followed by a decrease in adulthood, suggesting that the thalamic axons innervating the human auditory cortex undergo pruning comparatively late in development.
Collapse
Affiliation(s)
- Arvind Singh Pundir
- Division of Systems Neuroscience, National Brain Research Centre (Deemed University), NH-8, Manesar, Gurgaon, Haryana, 122051, India
| | - Utkarsha A Singh
- Division of Systems Neuroscience, National Brain Research Centre (Deemed University), NH-8, Manesar, Gurgaon, Haryana, 122051, India
| | - Nikhil Ahuja
- Division of Systems Neuroscience, National Brain Research Centre (Deemed University), NH-8, Manesar, Gurgaon, Haryana, 122051, India
| | - Sonal Makhija
- Division of Systems Neuroscience, National Brain Research Centre (Deemed University), NH-8, Manesar, Gurgaon, Haryana, 122051, India
| | - P C Dikshit
- Department of Forensic Medicine, Maulana Azad Medical College, Bahadur Shah Zafar Marg, New Delhi, 110002, India
| | - Bishan Radotra
- Department of Histopathology, Post Graduate Institute of Medical Education and Research, Sector-12, Chandigarh, 160012, India
| | - Praveen Kumar
- Department of Obstetrics and Gynecology, Base Hospital, Delhi Cantonment, Delhi, 110010, India
| | - S K Shankar
- Department of Neuropathology, National Institute of Mental Health and Neurosciences, Hosur Road, Bangalore, 560029, India
| | - Anita Mahadevan
- Department of Neuropathology, National Institute of Mental Health and Neurosciences, Hosur Road, Bangalore, 560029, India
| | - T S Roy
- Department of Anatomy, All India Institute of Medical Sciences, New Delhi, 110002, India
| | - Soumya Iyengar
- Division of Systems Neuroscience, National Brain Research Centre (Deemed University), NH-8, Manesar, Gurgaon, Haryana, 122051, India.
| |
Collapse
|
27
|
López-Teijón M, García-Faura Á, Prats-Galino A. Fetal facial expression in response to intravaginal music emission. ULTRASOUND: JOURNAL OF THE BRITISH MEDICAL ULTRASOUND SOCIETY 2015; 23:216-223. [PMID: 26539240 PMCID: PMC4616906 DOI: 10.1177/1742271x15609367] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Abstract
This study compared fetal responses to musical stimuli applied intravaginally (intravaginal music [IVM]) with responses to stimuli applied via emitters placed on the mother's abdomen (abdominal music [ABM]). Responses were quantified by recording facial movements identified on 3D/4D ultrasound. One hundred and six normal pregnancies between 14 and 39 weeks of gestation were randomized to 3D/4D ultrasound with: (a) ABM with standard headphones (flute monody at 98.6 dB); (b) IVM with a specially designed device emitting the same monody at 53.7 dB; or (c) intravaginal vibration (IVV; 125 Hz) at 68 dB with the same device. Facial movements were quantified at baseline, during stimulation, and for 5 minutes after stimulation was discontinued. In fetuses at a gestational age of >16 weeks, IVM elicited mouthing (MT) and tongue expulsion (TE) in 86.7% and 46.6% of fetuses, respectively, with significant differences compared with ABM and IVV (p = 0.002 and p = 0.004, respectively). There were no changes from baseline with ABM or IVV. TE occurred ≥5 times in 5 minutes in 13.3% of fetuses with IVM. IVM was associated with a higher occurrence of MT (odds ratio = 10.980; 95% confidence interval = 3.105–47.546) and TE (odds ratio = 10.943; 95% confidence interval = 2.568–77.037). The frequency of TE with IVM increased significantly with gestational age (p = 0.024). Fetuses at 16–39 weeks of gestation respond to intravaginally emitted music with repetitive MT and TE movements that are not observed with ABM or IVV. Our findings suggest that the neural pathways participating in the auditory-motor system are developed as early as gestational week 16. These findings might contribute to diagnostic methods for prenatal hearing screening and to research into fetal neurological stimulation.
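The odds ratios and confidence intervals reported above follow the standard 2x2-table calculation. Purely as an illustration of that formula (the counts below are hypothetical and are not the study's data), a Wald-type odds ratio and 95% CI can be computed as follows.

import numpy as np

def odds_ratio_wald_ci(a, b, c, d):
    # 2x2 table: a/b = responders/non-responders in the exposed group,
    #            c/d = responders/non-responders in the comparison group
    or_ = (a * d) / (b * c)
    se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    z = 1.96                                     # 95% Wald interval
    lo, hi = np.exp(np.log(or_) + np.array([-z, z]) * se_log_or)
    return or_, (lo, hi)

# Hypothetical counts only
print(odds_ratio_wald_ci(a=26, b=4, c=13, d=17))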
Collapse
Affiliation(s)
| | | | - Alberto Prats-Galino
- Human Anatomy and Embryology Unit, Laboratory of Surgical Neuroanatomy, Facultat de Medicina, Universitat de Barcelona, Barcelona, Spain
| |
Collapse
|
28
|
Putkinen V, Tervaniemi M, Saarikivi K, Huotilainen M. Promises of formal and informal musical activities in advancing neurocognitive development throughout childhood. Ann N Y Acad Sci 2015; 1337:153-62. [PMID: 25773630 DOI: 10.1111/nyas.12656] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
Adult musicians show superior neural sound discrimination when compared to nonmusicians. However, it is unclear whether these group differences reflect the effects of experience or preexisting neural enhancement in individuals who seek out musical training. Tracking how brain function matures over time in musically trained and nontrained children can shed light on this issue. Here, we review our recent longitudinal event-related potential (ERP) studies that examine how formal musical training and less formal musical activities influence the maturation of brain responses related to sound discrimination and auditory attention. These studies found that musically trained school-aged children and preschool-aged children attending a musical playschool show more rapid maturation of neural sound discrimination than their control peers. Importantly, we found no evidence for pretraining group differences. In a related cross-sectional study, we found ERP and behavioral evidence for improved executive functions and control over auditory novelty processing in musically trained school-aged children and adolescents. Taken together, these studies provide evidence for the causal role of formal musical training and less formal musical activities in shaping the development of important neural auditory skills and suggest transfer effects with domain-general implications.
Collapse
Affiliation(s)
- Vesa Putkinen
- Cognitive Brain Research Unit, Cognitive Science, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland; Finnish Centre of Interdisciplinary Music Research, University of Jyväskylä, Jyväskylä, Finland
| | | | | | | |
Collapse
|
29
|
Park MH, Won JH, Horn DL, Rubinstein JT. Acoustic temporal modulation detection in normal-hearing and cochlear implanted listeners: effects of hearing mechanism and development. J Assoc Res Otolaryngol 2015; 16:389-99. [PMID: 25790949 DOI: 10.1007/s10162-014-0499-z] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2014] [Accepted: 11/10/2014] [Indexed: 11/28/2022] Open
Abstract
Temporal modulation detection ability matures over many years after birth and may be particularly sensitive to experience during this period. Profound hearing loss during early childhood might therefore result in greater perceptual deficits than a similar loss beginning in adulthood. We tested this idea by measuring temporal modulation detection performance in profoundly deaf children and adults fitted with cochlear implants (CIs). At least two independent factors could constrain temporal modulation detection performance in children with CIs: altered encoding of modulation information due to the CI-auditory nerve interface, and atypical development of central processing of the sound information provided by CIs. The effect of altered encoding was investigated by testing subjects with one of two different hearing mechanisms (normal hearing vs. CI), and the effect of atypical development was studied by testing two different age groups. All subjects were tested for their ability to detect acoustic temporal modulations of sound amplitude. A comparison of the slope, or cutoff frequency, of the temporal modulation transfer functions (TMTFs) among the four subject groups revealed that temporal resolution was mainly constrained by hearing mechanism: normal-hearing listeners could detect smaller amplitude modulations at high modulation frequencies than CI users. In contrast, a comparison of the height of the TMTFs revealed a significant interaction between hearing mechanism and age group on overall sensitivity to temporal modulation: sensitivity was significantly poorer in children with CIs than in the other three groups. The results suggest an age-specific vulnerability of intensity discrimination, or of non-sensory factors, that subsequently affects sensitivity to temporal modulation in prelingually deaf children who use CIs.
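Temporal modulation detection is typically measured with sinusoidally amplitude-modulated (SAM) stimuli whose modulation depth m is varied adaptively, and the TMTF plots the threshold depth (in dB, 20*log10(m)) against modulation frequency. The sketch below generates a SAM tone at one depth and modulation frequency; the sampling rate, carrier, and parameter values are assumptions for illustration, not the study's stimuli.

import numpy as np

def sam_tone(duration_s, fs, carrier_hz, mod_hz, depth):
    # (1 + m * sin(2*pi*fm*t)) * sin(2*pi*fc*t): sinusoidal amplitude modulation
    t = np.arange(int(duration_s * fs)) / fs
    envelope = 1.0 + depth * np.sin(2 * np.pi * mod_hz * t)
    return envelope * np.sin(2 * np.pi * carrier_hz * t)

fs = 44100                                        # Hz, assumed
stim = sam_tone(duration_s=0.5, fs=fs, carrier_hz=1000.0, mod_hz=50.0, depth=0.25)
print("depth in dB:", 20 * np.log10(0.25))        # ~ -12 dB for 25% modulation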
Collapse
Affiliation(s)
- Min-Hyun Park
- Department of Otorhinolaryngology, Boramae Medical Center, Seoul Metropolitan Government - Seoul National University, Seoul, 156-707, Korea
| | | | | | | |
Collapse
|
30
|
Lau BK, Werner LA. Perception of the pitch of unresolved harmonics by 3- and 7-month-old human infants. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2014; 136:760-767. [PMID: 25096110 PMCID: PMC4144174 DOI: 10.1121/1.4887464] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/06/2013] [Revised: 06/22/2014] [Accepted: 06/25/2014] [Indexed: 06/03/2023]
Abstract
Three-month-olds discriminate resolved harmonic complexes on the basis of missing fundamental (MF) pitch. In view of the reported difficulty in discriminating unresolved complexes at 7 months and the striking changes in the organization of the auditory system during early infancy, infants' ability to discriminate unresolved complexes is of some interest. This study investigated the ability of 3-month-olds, 7-month-olds, and adults to discriminate the pitch of unresolved harmonic complexes using an observer-based method. Stimuli were MF complexes bandpass filtered with a -12 dB/octave slope, with components combined in random phase, presented at 70 dB sound pressure level (SPL) for 650 ms with 50 ms rise/fall times, against a pink noise at 65 dB SPL. The conditions were (1) "LOW" unresolved harmonics (2500-4500 Hz) based on MFs of 160 and 200 Hz and (2) "HIGH" unresolved harmonics (4000-6000 Hz) based on MFs of 190 and 200 Hz. To demonstrate MF discrimination, participants had to ignore spectral changes in complexes with the same fundamental and respond only when the fundamental changed. Nearly all infants tested categorized the complexes by MF pitch, suggesting that pitch is extracted from unresolved harmonics by 3 months of age. Adults also categorized the complexes by MF pitch, although musically trained adults were more successful than musically untrained adults.
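As a concrete, simplified illustration of the kind of stimulus described above, the sketch below sums random-phase harmonics of a missing fundamental restricted to a high-frequency band; hard band edges stand in for the -12 dB/octave filtering, and the sampling rate and omission of onset/offset ramps are assumptions rather than the study's exact parameters.

import numpy as np

def missing_fundamental_complex(f0, band_lo, band_hi, duration_s, fs, seed=0):
    # Sum harmonics n*f0 falling inside [band_lo, band_hi], each in random phase;
    # the fundamental itself is absent, so any pitch at f0 must come from the
    # temporal regularity of the unresolved components.
    rng = np.random.default_rng(seed)
    t = np.arange(int(duration_s * fs)) / fs
    freqs = [n * f0 for n in range(2, int(band_hi // f0) + 1)
             if band_lo <= n * f0 <= band_hi]
    sig = sum(np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi)) for f in freqs)
    return sig / np.max(np.abs(sig))

fs = 44100   # Hz, assumed
# Analogue of the "HIGH" region: harmonics of a 200 Hz fundamental within 4000-6000 Hz
stim = missing_fundamental_complex(200.0, 4000.0, 6000.0, duration_s=0.65, fs=fs)
print(stim.shape)   # (28665,) samples at 44.1 kHz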
Collapse
Affiliation(s)
- Bonnie K Lau
- Department of Speech and Hearing Sciences, University of Washington, 1417 NE 42nd Street, Seattle, Washington 98105
| | - Lynne A Werner
- Department of Speech and Hearing Sciences, University of Washington, 1417 NE 42nd Street, Seattle, Washington 98105
| |
Collapse
|