1
Ellery A. Bio-Inspired Strategies Are Adaptable to Sensors Manufactured on the Moon. Biomimetics (Basel) 2024;9:496. PMID: 39194475. DOI: 10.3390/biomimetics9080496.
Abstract
Bio-inspired strategies for robotic sensing are essential for in situ manufactured sensors on the Moon. Sensors are one crucial component of robots that should be manufactured from lunar resources to industrialize the Moon at low cost. We are concerned with two classes of sensor: (a) position sensors, and derivatives thereof, which are the most elementary of measurements; and (b) light-sensing arrays, which provide distance measurement within the visible waveband. Terrestrial approaches to sensor design cannot be accommodated within the severe limitations imposed by the material resources and expected manufacturing competences on the Moon. Displacement and strain sensors may be constructed as potentiometers with aluminium extracted from anorthite. Anorthite is also a source of silica from which quartz may be manufactured; thus, piezoelectric sensors may be constructed. Silicone plastic (siloxane) is an elastomer that may be derived from lunar volatiles, offering the prospect of tactile sensing arrays. All components of photomultiplier tubes may be constructed from lunar resources. However, the spatial resolution of photomultiplier tubes is limited, so only modest array sizes can be constructed. This requires us to exploit biomimetic strategies: (i) optical flow provides the visual navigation competences of insects implemented through modest circuitry, and (ii) foveated vision trades off visual-resolution deficiencies against the higher resolution of pan-tilt motors enabled by micro-stepping. Thus, basic sensors may be manufactured from lunar resources. They are elementary components of robotic machines that are crucial for constructing a sustainable lunar infrastructure. Constraints imposed by the Moon may be compensated for using biomimetic strategies, which are adaptable to non-Earth environments.
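The insect-style optical-flow competence mentioned in the abstract can be illustrated with a minimal gradient-based (Lucas-Kanade-type) estimator. This is a sketch only: the single global motion model, the Gaussian test frames, and the function name are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def global_flow(prev, curr):
    """Least-squares estimate of a single global (u, v) image motion
    from spatial and temporal brightness gradients (Lucas-Kanade style)."""
    Ix = np.gradient(prev, axis=1)               # horizontal spatial gradient
    Iy = np.gradient(prev, axis=0)               # vertical spatial gradient
    It = curr - prev                             # temporal gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    flow, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return flow                                  # [u, v] in pixels per frame

# Smooth blob translated one pixel to the right between two frames
y, x = np.mgrid[0:32, 0:32].astype(float)
frame1 = np.exp(-((x - 15) ** 2 + (y - 16) ** 2) / 40.0)
frame2 = np.exp(-((x - 16) ** 2 + (y - 16) ** 2) / 40.0)
u, v = global_flow(frame1, frame2)               # u ≈ 1, v ≈ 0
```

Because the brightness-constancy equation is solved by one small least-squares fit, this kind of estimator maps naturally onto the "modest circuitry" the abstract alludes to.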
Affiliation(s)
- Alex Ellery
- Centre for Self-Replication Research (CESER), Department of Mechanical & Aerospace Engineering, Carleton University, 1125 Colonel By Drive, Ottawa, ON K1S 5B6, Canada
2
Ngetich R, Burleigh TL, Czakó A, Vékony T, Németh D, Demetrovics Z. Working memory performance in disordered gambling and gaming: A systematic review. Compr Psychiatry 2023;126:152408. PMID: 37573802. DOI: 10.1016/j.comppsych.2023.152408.
Abstract
BACKGROUND Converging evidence supports that gaming and gambling disorders are associated with executive dysfunction. The involvement of different components of executive functions (EF) in these forms of behavioural addiction is unclear. AIM In this systematic review, we aim to uncover the association between working memory (WM), a crucial component of EF, and disordered gaming and gambling. Note that, in the context of this review, gaming is used synonymously with video gaming. METHODS Adhering to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, we systematically searched for studies published from 2012 onwards. RESULTS The search yielded 6081 records after removing duplicates, from which 17 peer-reviewed journal articles were eligible for inclusion. The associations between WM and problem or disordered gaming and gambling have been categorized separately to observe possible differences. Essentially, problem gaming or gambling presents lower severity and clinical significance than the corresponding disorder. The results demonstrate reduced auditory-verbal WM in individuals with gambling disorder. Decreased WM capacity was also associated with problem gambling, with a correlation between problem-gambling severity and decreased WM capacity. Similarly, gaming disorder was associated with decreased WM: patients with gaming disorder had lower WM capacity than healthy controls. CONCLUSION Working memory seems to be a significant predictor of gambling and gaming disorders. Therefore, holistic treatment approaches that incorporate cognitive techniques to enhance working memory may significantly improve treatment success for gambling and gaming disorders.
Affiliation(s)
- Ronald Ngetich
- Centre of Excellence in Responsible Gaming, University of Gibraltar, Gibraltar, Gibraltar
- Tyrone L Burleigh
- Centre of Excellence in Responsible Gaming, University of Gibraltar, Gibraltar, Gibraltar
- Andrea Czakó
- Centre of Excellence in Responsible Gaming, University of Gibraltar, Gibraltar, Gibraltar; Institute of Psychology, ELTE Eötvös Loránd University, Budapest, Hungary
- Teodóra Vékony
- INSERM, Université Claude Bernard Lyon 1, CNRS, Centre de Recherche en Neurosciences de Lyon CRNL U1028 UMR5292, Bron, France
- Dezso Németh
- Institute of Psychology, ELTE Eötvös Loránd University, Budapest, Hungary; INSERM, Université Claude Bernard Lyon 1, CNRS, Centre de Recherche en Neurosciences de Lyon CRNL U1028 UMR5292, Bron, France; Brain, Memory and Language Research Group, Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary
- Zsolt Demetrovics
- Centre of Excellence in Responsible Gaming, University of Gibraltar, Gibraltar, Gibraltar; Institute of Psychology, ELTE Eötvös Loránd University, Budapest, Hungary
3
Ji H, Yu X, Xiao Z, Zhu H, Liu P, Lin H, Chen R, Hong Q. Features of Cognitive Ability and Central Auditory Processing of Preschool Children With Minimal and Mild Hearing Loss. J Speech Lang Hear Res 2023;66:1867-1888. PMID: 37116308. DOI: 10.1044/2023_jslhr-22-00395.
Abstract
OBJECTIVE This study aimed to investigate the current status of cognitive development and central auditory processing development of preschool children with minimal and mild hearing loss (MMHL) in Nanjing, China. METHOD We recruited 34 children with MMHL and 45 children with normal hearing (NH). They completed a series of tests, including cognitive tests (i.e., Wechsler Preschool and Primary Scale of Intelligence and Continuous Performance Test), behavioral auditory tests (speech-in-noise [SIN] test and frequency pattern test), and objective electrophysiological audiometry (speech-evoked auditory brainstem response and cortical auditory evoked potential). In addition, teacher evaluations, demographic information, and questionnaires completed by parents were collected. RESULTS Regarding cognitive ability, statistical differences in the verbal comprehension index, full-scale intelligence quotient, and abnormal rate of attention test score were found between the MMHL group and the NH group. The children with MMHL performed more poorly on the SIN test than the children with NH. The latency and amplitude of some waves of the speech-evoked auditory brainstem response and the cortical auditory evoked potential also differed statistically between the two groups. We also explored the relationship between key indicators of auditory processing and key indicators of cognitive development. CONCLUSIONS Children with MMHL are already at increased developmental risk as early as preschool. They are more likely to have problems with attention and verbal comprehension than children with NH, and this is not compensated for with increasing age during the preschool years. The results suggest a possible relationship between the risk of cognitive deficit and divergence of auditory processing. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.22670473.
Affiliation(s)
- Hui Ji
- Women's Hospital of Nanjing Medical University, Nanjing Maternity and Child Health Care Hospital, Jiangsu, China
- Xinyue Yu
- School of Pediatrics, Nanjing Medical University, Jiangsu, China
- Zhenglu Xiao
- School of Pediatrics, Nanjing Medical University, Jiangsu, China
- Huiqin Zhu
- School of Pediatrics, Nanjing Medical University, Jiangsu, China
- Panting Liu
- Women's Hospital of Nanjing Medical University, Nanjing Maternity and Child Health Care Hospital, Jiangsu, China
- Huanxi Lin
- School of Nursing, Nanjing Medical University, Jiangsu, China
- Renjie Chen
- The Second Affiliated Hospital of Nanjing Medical University, Jiangsu, China
- Qin Hong
- Women's Hospital of Nanjing Medical University, Nanjing Maternity and Child Health Care Hospital, Jiangsu, China
4
Beguš G, Zhou A, Zhao TC. Encoding of speech in convolutional layers and the brain stem based on language experience. Sci Rep 2023;13:6480. PMID: 37081119. PMCID: PMC10119295. DOI: 10.1038/s41598-023-33384-9.
Abstract
Comparing artificial neural networks with outputs of neuroimaging techniques has recently seen substantial advances in (computer) vision and text-based language models. Here, we propose a framework to compare biological and artificial neural computations of spoken language representations and propose several new challenges to this paradigm. The proposed technique is based on a principle similar to the one underlying electroencephalography (EEG): averaging of neural (artificial or biological) activity across neurons in the time domain. It allows comparison of the encoding of any acoustic property in the brain and in intermediate convolutional layers of an artificial neural network. Our approach allows a direct comparison of responses to a phonetic property in the brain and in deep neural networks that requires no linear transformations between the signals. We argue that the brain stem response (cABR) and the response in intermediate convolutional layers to the exact same stimulus are highly similar without applying any transformations, and we quantify this observation. The proposed technique not only reveals similarities but also allows for analysis of the encoding of actual acoustic properties in the two signals: we compare peak latency (i) in the cABR relative to the stimulus in the brain stem and (ii) in intermediate convolutional layers relative to the input/output in deep convolutional networks. We also examine and compare the effect of prior language exposure on the peak latency in the cABR and in intermediate convolutional layers. Substantial similarities in peak latency encoding between the human brain and intermediate convolutional layers emerge based on results from eight trained networks (including a replication experiment). The proposed technique can be used to compare encoding between the human brain and intermediate convolutional layers for any acoustic property and for other neuroimaging techniques.
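The comparison described above rests on one simple operation: averaging unit activity in the time domain, as EEG implicitly does across neurons. A toy sketch of that averaging step applied to a convolutional layer (the random filters, dimensions, and 10 Hz test tone are illustrative assumptions, not the authors' trained network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "stimulus" and an untrained one-dimensional convolutional layer
# made of 16 random kernels.
t = np.linspace(0.0, 1.0, 500, endpoint=False)
stimulus = np.sin(2.0 * np.pi * 10.0 * t)      # 10 Hz tone as stand-in for speech
kernels = rng.standard_normal((16, 25))

# Per-unit activations, shape (units, time), then the EEG-style step:
# average across units to obtain a single time-domain response waveform
# that can be compared peak-by-peak against a scalp-recorded cABR.
activations = np.stack([np.convolve(stimulus, k, mode="same") for k in kernels])
layer_response = activations.mean(axis=0)
```

The appeal of this design, as the abstract notes, is that the resulting waveform lives in the same domain as the brain stem response, so no fitted linear mapping is needed before comparing the two.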
Affiliation(s)
- Gašper Beguš
- Department of Linguistics, University of California, Berkeley, USA.
- Alan Zhou
- Department of Cognitive Science, Johns Hopkins University, Baltimore, USA
- T Christina Zhao
- Institute for Learning and Brain Sciences, University of Washington, Seattle, USA
- Department of Speech and Hearing Sciences, University of Washington, Seattle, USA
5
Bücher S, Bernhofs V, Thieme A, Christiner M, Schneider P. Chronology of auditory processing and related co-activation in the orbitofrontal cortex depends on musical expertise. Front Neurosci 2023;16:1041397. PMID: 36685231. PMCID: PMC9846135. DOI: 10.3389/fnins.2022.1041397.
Abstract
Introduction The present study aims to explore the extent to which auditory processing is reflected in the prefrontal cortex. Methods Using magnetoencephalography (MEG), we investigated the chronology of primary and secondary auditory responses and associated co-activation in the orbitofrontal cortex in a large cohort of 162 participants of various ages. The sample consisted of 38 primary school children, 39 adolescents, 43 younger adults, and 42 middle-aged adults and was further divided into musically experienced participants and non-musicians by quantifying musical training and aptitude parameters. Results We observed that the co-activation in the orbitofrontal cortex [Brodmann area 10 (BA10)] strongly depended on musical expertise but not on age. In the musically experienced groups, a systematic coincidence of peak latencies of the primary auditory P1 response and the co-activated response in the orbitofrontal cortex was observed in childhood at the onset of musical education. In marked contrast, in all non-musicians, the orbitofrontal co-activation occurred 25-40 ms later than the P1 response. Musical practice and musical aptitude contributed equally to the observed activation and co-activation patterns in the auditory and orbitofrontal cortex, confirming the reciprocal, interrelated influence of nature and nurture in the musical brain. Discussion Based on the observed age-independent differences in the chronology and lateralization of neurological responses, we suggest that orbitofrontal functions may contribute to musical learning at an early age.
Affiliation(s)
- Steffen Bücher
- Section of Biomagnetism Heidelberg, Department of Neurology, Faculty of Medicine Heidelberg, Heidelberg, Germany
- Andrea Thieme
- Section of Biomagnetism Heidelberg, Department of Neurology, Faculty of Medicine Heidelberg, Heidelberg, Germany
- Markus Christiner
- Jāzeps Vītols Latvian Academy of Music, Riga, Latvia
- Centre of Systematic Musicology, University of Graz, Graz, Austria
- Peter Schneider
- Section of Biomagnetism Heidelberg, Department of Neurology, Faculty of Medicine Heidelberg, Heidelberg, Germany
- Jāzeps Vītols Latvian Academy of Music, Riga, Latvia
- Centre of Systematic Musicology, University of Graz, Graz, Austria
- Department of Neuroradiology, Medical School Heidelberg, Heidelberg, Germany
6
Hussain RO, Kumar P, Singh NK. Subcortical and Cortical Electrophysiological Measures in Children With Speech-in-Noise Deficits Associated With Auditory Processing Disorders. J Speech Lang Hear Res 2022;65:4454-4468. PMID: 36279585. DOI: 10.1044/2022_jslhr-22-00094.
Abstract
PURPOSE The aim of this study was to analyze the subcortical and cortical auditory evoked potentials for speech stimuli in children with speech-in-noise (SIN) deficits associated with auditory processing disorder (APD) without any reading or language deficits. METHOD The study included 20 children in the age range of 9-13 years. Ten children were recruited to the APD group; they had below-normal scores on the speech-perception-in-noise test and were diagnosed as having APD. The remaining 10 were typically developing (TD) children and were recruited to the TD group. Speech-evoked subcortical (brainstem) and cortical (auditory late latency) responses were recorded and compared across both groups. RESULTS The results showed a statistically significant reduction in the amplitudes of the subcortical potentials (both for stimulus in quiet and in noise) and the magnitudes of the spectral components (fundamental frequency and the second formant) in children with SIN deficits in the APD group compared to the TD group. In addition, the APD group displayed enhanced amplitudes of the cortical potentials compared to the TD group. CONCLUSION Children with SIN deficits associated with APD exhibited impaired coding/processing of the auditory information at the level of the brainstem and the auditory cortex. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.21357735.
Affiliation(s)
- Prawin Kumar
- Department of Audiology, All India Institute of Speech and Hearing, Mysore
- Niraj Kumar Singh
- Department of Audiology, All India Institute of Speech and Hearing, Mysore
7
Goller B, Baumhardt P, Dominguez-Villegas E, Katzner T, Fernández-Juricic E, Lucas JR. Selecting auditory alerting stimuli for eagles on the basis of auditory evoked potentials. Conserv Physiol 2022;10:coac059. PMID: 36134144. PMCID: PMC9486983. DOI: 10.1093/conphys/coac059.
Abstract
Development of wind energy facilities results in interactions between wildlife and wind turbines. Raptors, including bald and golden eagles, are among the species known to incur mortality from these interactions. Several alerting technologies have been proposed to mitigate this mortality by increasing eagle avoidance of wind energy facilities. However, there has been little attempt to match signals used as alerting stimuli with the sensory capabilities of target species like eagles. One potential approach to tuning signals is to use sensory physiology to determine what stimuli the target eagle species are sensitive to even in the presence of background noise, thereby allowing the development of a maximally stimulating signal. To this end, we measured auditory evoked potentials of bald and golden eagles to determine what types of sounds eagles can process well, especially in noisy conditions. We found that golden eagles are significantly worse than bald eagles at processing rapid frequency changes in sounds, but also that noise effects on hearing in both species are minimal in response to rapidly changing sounds. Our findings therefore suggest that sounds of intermediate complexity may be ideal both for targeting bald and golden eagle hearing and for ensuring high stimulation in noisy field conditions. These results suggest that the sensory physiology of target species is likely an important consideration when selecting auditory alerting sounds and may provide important insight into what sounds have a reasonable probability of success in field applications under variable conditions and background noise.
Affiliation(s)
- Benjamin Goller
- Department of Biological Sciences, Purdue University, West Lafayette, IN 47907, USA
- Patrice Baumhardt
- Department of Biological Sciences, Purdue University, West Lafayette, IN 47907, USA
- Todd Katzner
- U.S. Geological Survey, Forest & Rangeland Ecosystem Science Center, 230 N Collins Rd., Boise, ID 83702, USA
- Jeffrey R Lucas
- Corresponding author: Department of Biological Sciences, Purdue University, West Lafayette, IN 47907, USA. Tel: 765-494-8112.
8
Sanjana M, Nisha KV. Effects of Abacus Training on Auditory Spatial Maturation in Children with Normal Hearing. Int Arch Otorhinolaryngol 2022;27:e56-e66. PMID: 36714899. PMCID: PMC9879648. DOI: 10.1055/s-0041-1741434.
Abstract
Introduction The spatial auditory system, though developed at birth, attains functional maturity in late childhood (12 years). Spatial changes during childhood affect navigation in the environment and source segregation. Accommodation of a new skill through learning, especially during childhood, can expedite this process. Objective To explore the auditory spatial benefits of abacus training on psychoacoustic metrics in children. The study also aimed to identify the metric most sensitive to abacus-training-related changes in spatial processing and to utilize this metric for detailed spatial error profiling. Methods A standard group-comparison design with 90 participants divided into three groups: I, children with abacus training (C-AT); II, children with no training (C-UT); and III, adults with no training (A-UT). The groups underwent a series of psychoacoustic tests, such as interaural time difference (ITD), interaural level difference (ILD), and virtual auditory space identification (VASI), as well as perceptual tests such as the Kannada version of the speech, spatial, and quality questionnaire (K-SSQ). Results Significant group differences were observed in the multivariate analysis of variance (MANOVA) and post-hoc tests, with the C-AT group showing significantly lower ILD scores (p = 0.01) and significantly higher VASI scores (p < 0.001) than the C-UT group, indicative of better spatial processing abilities in the former group. Discriminant function (DF) analyses showed that the VASI was the metric most sensitive to training-related changes, based on which elaborate error analyses were performed. Conclusions Despite the physiological limits of the immature neural framework, the performance of the C-AT group was equivalent to that of untrained adults on psychoacoustic tests, reflecting the positive role of abacus training in expediting auditory spatial maturation.
Affiliation(s)
- M. Sanjana
- Department of Speech and Hearing, Manipal College of Health Professions (MCHP), Manipal, Karnataka, India.
- K. V. Nisha
- Center for Hearing Sciences, Center of Excellence, All India Institute of Speech and Hearing (AIISH), Naimisham Campus, Manasagangothri, Mysore, Karnataka, India. Address for correspondence: K. V. Nisha, PhD, Department of Audiology, All India Institute of Speech and Hearing (AIISH), Mysore 570006, Karnataka, India.
9
Suresh CH, Krishnan A. Frequency-Following Response to Steady-State Vowel in Quiet and Background Noise Among Marching Band Participants With Normal Hearing. Am J Audiol 2022;31:719-736. PMID: 35944059. DOI: 10.1044/2022_aja-21-00226.
Abstract
OBJECTIVE Human studies enrolling individuals at high risk for cochlear synaptopathy (CS) have reported difficulties in speech perception in adverse listening conditions. The aim of this study was to determine whether these individuals show a degradation in the neural encoding of speech in quiet and in the presence of background noise, as reflected in neural phase-locking to both envelope periodicity and temporal fine structure (TFS). To our knowledge, there are no published reports that have specifically examined the neural encoding of both envelope periodicity and TFS of speech stimuli (in quiet and in adverse listening conditions) in a sample with a history of loud-sound exposure who are at risk for CS. METHOD Using the scalp-recorded frequency-following response (FFR), the authors evaluated the neural encoding of envelope periodicity (FFRENV) and TFS (FFRTFS) for a steady-state vowel (English back vowel /u/) in quiet and in the presence of speech-shaped noise presented at +5 and 0 dB SNR. Participants were young individuals with normal hearing who had participated in a marching band for at least 5 years (high-risk group) or had a low noise-exposure history (non-marching-band, low-risk group). RESULTS The results showed no group differences in the neural encoding of either the FFRENV or the first formant (F1) in the FFRTFS in quiet and in noise. Paradoxically, the high-risk group demonstrated enhanced representation of F2 harmonics across all stimulus conditions. CONCLUSIONS These results appear to be in line with a music experience-dependent enhancement of F2 harmonics. However, due to sound overexposure in the high-risk group, the role of homeostatic central compensation cannot be ruled out. A larger-scale data set covering different noise-exposure backgrounds, with longitudinal measurements and an array of behavioral and electrophysiological tests, is needed to disentangle the nature of the complex interaction between central compensatory gain and experience-dependent enhancement.
Affiliation(s)
- Chandan H Suresh
- Department of Communication Disorders, California State University, Los Angeles
10
Macambira YKDS, Menezes PDL, Frizzo ACF, Griz SMS, Menezes DC, Advíncula KP. Cortical auditory evoked potentials using the speech stimulus /ma/. Rev CEFAC 2022. DOI: 10.1590/1982-0216/20222439021.
11
Magimairaj BM, Nagaraj NK, Champlin CA, Thibodeau LK, Loeb DF, Gillam RB. Speech Perception in Noise Predicts Oral Narrative Comprehension in Children With Developmental Language Disorder. Front Psychol 2021;12:735026. PMID: 34744907. PMCID: PMC8566731. DOI: 10.3389/fpsyg.2021.735026.
Abstract
We examined the relative contribution of auditory processing abilities (tone perception and speech perception in noise) to narrative language comprehension in children with developmental language disorder, after controlling for short-term memory capacity and vocabulary. Two hundred and sixteen children with developmental language disorder, ages 6 to 9 years (mean = 7;6), were administered multiple measures. The dependent variable was children's score on the narrative comprehension scale of the Test of Narrative Language. Predictors were auditory processing abilities, phonological short-term memory capacity, and language (vocabulary) factors, with age, speech perception in quiet, and non-verbal IQ as covariates. Results showed that narrative comprehension was positively correlated with the majority of the predictors. Regression analysis suggested that speech perception in noise contributed uniquely to narrative comprehension in children with developmental language disorder, over and above all other predictors; however, tone perception tasks failed to explain unique variance. The relative importance of speech perception in noise over tone-perception measures for language comprehension reinforces the need for the assessment and management of listening-in-noise deficits and makes a compelling case for the functional implications of complex listening situations for children with developmental language disorder.
Affiliation(s)
- Beula M Magimairaj
- Communicative Disorders and Deaf Education, Emma Eccles Jones Early Childhood Education and Research Center, Utah State University, Logan, UT, United States
- Naveen K Nagaraj
- Communicative Disorders and Deaf Education, Emma Eccles Jones Early Childhood Education and Research Center, Utah State University, Logan, UT, United States
- Craig A Champlin
- Speech, Language, and Hearing Sciences, The University of Texas at Austin, Austin, TX, United States
- Linda K Thibodeau
- Callier Center for Communication Disorders, The University of Texas at Dallas, Dallas, TX, United States
- Diane F Loeb
- Communication Sciences and Disorders, Baylor University, Waco, TX, United States
- Ronald B Gillam
- Communicative Disorders and Deaf Education, Emma Eccles Jones Early Childhood Education and Research Center, Utah State University, Logan, UT, United States
12
Jeng FC, Hart BN, Lin CD. Separating the Novel Speech Sound Perception of Lexical Tone Chimeras From Their Auditory Signal Manipulations: Behavioral and Electroencephalographic Evidence. Percept Mot Skills 2021;128:2527-2543. PMID: 34586922. DOI: 10.1177/00315125211049723.
Abstract
Previous research has shown the novelty of lexical-tone chimeras (artificially constructed speech sounds created by combining normal speech sounds of a given language) to native speakers of the language from which the chimera components were drawn. However, the source of such novelty remains unclear. Our goal in this study was to separate the effects of chimeric tonal novelty in Mandarin speech from the effects of auditory signal manipulations. We recruited 20 native speakers of Mandarin and constructed two sets of lexical-tone chimeras by interchanging the envelopes and fine structures of a falling /yi4/ and a rising /yi2/ Mandarin tone through 1, 2, 3, 4, 6, 8, 16, 32, and 64 auditory filter banks. We conducted pitch-perception ability tasks via a two-alternative forced-choice paradigm to produce behavioral (versus physiological) pitch-perception data. We also obtained electroencephalographic measurements through the scalp-recorded frequency-following response (FFR). Analyses of variance and post hoc Greenhouse-Geisser procedures revealed that the differences observed in the participants' reaction times and FFR measurements were attributable primarily to chimeric novelty rather than signal-manipulation effects. These findings can be useful in assessing neuroplasticity and developing speech-processing strategies.
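The chimera construction described above (imposing one sound's envelope on another's temporal fine structure) can be sketched for the single-filter-bank case with a Hilbert decomposition. The test tones below are illustrative stand-ins, not the study's Mandarin /yi/ stimuli, and the function name is an assumption.

```python
import numpy as np
from scipy.signal import hilbert

def chimera(env_source, tfs_source):
    """Impose the Hilbert envelope of one sound on the temporal fine
    structure of another (single-band auditory chimera)."""
    envelope = np.abs(hilbert(env_source))                 # envelope of sound A
    analytic = hilbert(tfs_source)
    fine_structure = np.real(analytic / np.abs(analytic))  # unit-amplitude TFS of B
    return envelope * fine_structure

# Illustrative tones: a 4 Hz amplitude-modulated 50 Hz carrier donates its
# envelope; an unmodulated 80 Hz tone donates its fine structure.
fs = 1000
t = np.arange(fs) / fs
modulated = (1.0 + 0.5 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 50 * t)
plain = np.sin(2 * np.pi * 80 * t)
mixed = chimera(modulated, plain)
```

In the study's multi-band versions, the same swap is applied within each auditory filter band and the bands are summed, so increasing the number of filter banks progressively shifts pitch cues between the envelope and fine-structure channels.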
Affiliation(s)
- Fuh-Cherng Jeng
- Communication Sciences and Disorders, Ohio University, Athens, Ohio, United States; Department of Otolaryngology-HNS, Medical University Hospital, Taichung City
- Breanna N Hart
- Communication Sciences and Disorders, Ohio University, Athens, Ohio, United States
- Chia-Der Lin
- Department of Otolaryngology-HNS, Medical University Hospital, Taichung City
13
Wong PCM, Lai CM, Chan PHY, Leung TF, Lam HS, Feng G, Maggu AR, Novitskiy N. Neural Speech Encoding in Infancy Predicts Future Language and Communication Difficulties. Am J Speech Lang Pathol 2021;30:2241-2250. PMID: 34383568. DOI: 10.1044/2021_ajslp-21-00077.
Abstract
Purpose This study aimed to construct an objective and cost-effective prognostic tool to forecast the future language and communication abilities of individual infants. Method Speech-evoked electroencephalography (EEG) data were collected from 118 infants during the first year of life while they were exposed to speech stimuli that differed principally in fundamental frequency. Language and communication outcomes, namely four subtests of the MacArthur-Bates Communicative Development Inventories (MCDI)-Chinese version, were collected between 3 and 16 months after initial EEG testing. In the two-way classification, children were classified into those with future MCDI scores below the 25th percentile for their age group and those above the same percentile, while the three-way classification classified them into < 25th, 25th-75th, and > 75th percentile groups. Machine learning (support vector machine classification) with cross-validation was used for model construction. Statistical significance was assessed. Results Across the four MCDI measures of early gestures, later gestures, vocabulary comprehension, and vocabulary production, the areas under the receiver-operating characteristic curve of the predictive models were, respectively, .92 ± .031, .91 ± .028, .90 ± .035, and .89 ± .039 for the two-way classification, and .88 ± .041, .89 ± .033, .85 ± .047, and .85 ± .050 for the three-way classification (p < .01 for all models). Conclusions Future language and communication variability can be predicted by an objective EEG method that indexes the function of the auditory neural pathway foundational to spoken language development, with precision sufficient for individual predictions. Longer-term research is needed to assess predictability of categorical diagnostic status. Supplemental Material https://doi.org/10.23641/asha.15138546.
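The modelling pipeline described (support vector machine classification, cross-validation, area under the ROC curve) follows a standard pattern that can be sketched with scikit-learn. The synthetic features below are stand-ins for the study's speech-evoked EEG measures and MCDI percentile labels; only the cohort size (118) is taken from the abstract.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in: 118 "infants" (matching the cohort size), 20 EEG-derived
# features, and a binary label for below- vs above-25th-percentile MCDI outcome.
X, y = make_classification(n_samples=118, n_features=20, n_informative=8,
                           random_state=0)

# Two-way classification: cross-validated ROC AUC from an RBF-kernel SVM,
# with feature standardization fitted inside each fold to avoid leakage.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
auc_scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
mean_auc = auc_scores.mean()
```

Wrapping the scaler in the pipeline matters here: scaling on the full data set before cross-validation would leak test-fold statistics into training and inflate the reported AUC.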
Affiliation(s)
- Patrick C M Wong
- Brain and Mind Institute and Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Hong Kong SAR, China
- Ching Man Lai
- Brain and Mind Institute, The Chinese University of Hong Kong, Hong Kong SAR, China
- Peggy H Y Chan
- Brain and Mind Institute and Department of Paediatrics, The Chinese University of Hong Kong, Hong Kong SAR, China
- Ting Fan Leung
- Department of Paediatrics, The Chinese University of Hong Kong, Hong Kong SAR, China
- Hugh Simon Lam
- Department of Paediatrics, The Chinese University of Hong Kong, Hong Kong SAR, China
- Gangyi Feng
- Brain and Mind Institute and Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Hong Kong SAR, China
- Akshay R Maggu
- Brain and Mind Institute and Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Hong Kong SAR, China
- Department of Psychology & Neuroscience, Duke University, Durham, NC
- Nikolay Novitskiy
- Brain and Mind Institute and Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Hong Kong SAR, China
14
Liu P, Zhu H, Chen M, Hong Q, Chi X. Electrophysiological Screening for Children With Suspected Auditory Processing Disorder: A Systematic Review. Front Neurol 2021; 12:692840. [PMID: 34497576] [PMCID: PMC8419449] [DOI: 10.3389/fneur.2021.692840] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 04/09/2021] [Accepted: 07/07/2021] [Indexed: 11/13/2022]
Abstract
Objective: This research aimed to provide evidence for the early identification of, and intervention for, children at risk of auditory processing disorder (APD). Electrophysiological studies of children with suspected APD were systematically reviewed to characterize their electrophysiological profiles. Methods: The PubMed, Cochrane, MEDLINE, Web of Science, and EMBASE databases were searched for articles published from each database's inception through May 18, 2020. Cohort, case-control, and cross-sectional studies on the electrophysiological assessment of children with suspected APD were independently reviewed by two researchers, who performed literature screening, quality assessment, and data extraction. The Newcastle-Ottawa Scale and the 11 items recommended by the Agency for Healthcare Research and Quality were used to evaluate the quality of the literature. Results: In accordance with the inclusion criteria, 14 articles were included. These articles involved 7 electrophysiological testing techniques: click-evoked auditory brainstem responses, frequency-following responses, the binaural interaction component of the auditory brainstem response, the middle-latency response, cortical auditory evoked potentials, mismatch negativity, and P300. The literature quality was considered moderate. Conclusions: Auditory electrophysiological testing can be used to characterize children with suspected APD; however, the value of the various electrophysiological testing methods for screening such children requires further study.
Affiliation(s)
- Panting Liu
- School of Nursing, Nanjing Medical University, Nanjing, China
- Huiqin Zhu
- Department of Child Health Care, The Affiliated Obstetrics and Gynecology Hospital of Nanjing Medical University, Nanjing Maternity and Child Health Care Hospital, Nanjing, China
- Mingxia Chen
- School of Nursing, Nanjing Medical University, Nanjing, China
- Qin Hong
- Department of Child Health Care, The Affiliated Obstetrics and Gynecology Hospital of Nanjing Medical University, Nanjing Maternity and Child Health Care Hospital, Nanjing, China
- Xia Chi
- Department of Child Health Care, The Affiliated Obstetrics and Gynecology Hospital of Nanjing Medical University, Nanjing Maternity and Child Health Care Hospital, Nanjing, China
15
Ianiszewski A, Fuente A, Gagné JP. Auditory brainstem response asymmetries in older adults: An exploratory study using click and speech stimuli. PLoS One 2021; 16:e0251287. [PMID: 33961673] [PMCID: PMC8104406] [DOI: 10.1371/journal.pone.0251287] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 05/23/2019] [Accepted: 04/25/2021] [Indexed: 11/27/2022]
Abstract
Background Some evidence suggests that young adults exhibit a selective laterality of the auditory brainstem response (ABR) elicited with speech stimuli. Little is known about such auditory laterality in older adults. Objective The aim of this study was to investigate possible asymmetric auditory brainstem processing between right and left ear presentation in older adults. Methods Sixty-two older adults with hearing thresholds normal for their age, all native speakers of Quebec French, participated in this study. ABRs were recorded using a click and a 40-ms /da/ syllable, elicited through monaural right and monaural left stimulation. Latency and amplitude of click- and speech-ABR components were compared between right and left ear presentations. In addition, for the /da/ syllable, a fast Fourier transform analysis of the sustained frequency-following response (FFR) of the vowel was performed, along with stimulus-to-response and right-left ear correlation analyses. Results No significant differences between right and left ear presentation were found for amplitudes and latencies of the click-ABR components. Significantly shorter latencies for right ear presentation than for left ear presentation were observed for the onset and offset transient components (V, A, and O), sustained components (D and E), and voiced transition component (C) of the speech-ABR. In addition, the spectral amplitude of the fundamental frequency (F0) was significantly larger for left ear presentation than for right ear presentation. Conclusions Results of this study show that older adults with normal hearing exhibit symmetric brainstem encoding of click stimuli between right and left ear presentation. However, they present with brainstem asymmetries in the encoding of selective stimulus components of the speech-ABR between right and left ear presentation. The right ear presentation of a /da/ syllable elicited shorter neural timing for both transient and sustained components compared with the left ear, whereas stronger F0 encoding was observed for the left ear. These findings suggest that at a preattentive, sensory stage of auditory processing, older adults lateralize speech stimuli similarly to young adults.
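The fast Fourier transform step above, measuring the spectral amplitude of F0 in the sustained FFR for each ear of presentation, can be sketched as follows. This is a minimal illustration on synthetic waveforms, not the study's data: the sampling rate, the 100-Hz F0, and the amplitude difference between the two "ears" are invented for the example.

```python
import numpy as np

fs = 8000                            # sampling rate in Hz (illustrative)
t = np.arange(int(fs * 0.04)) / fs   # 40-ms analysis window, like the /da/ vowel
f0 = 100.0                           # assumed fundamental frequency

def spectral_amplitude(x, freq, fs):
    """Amplitude of the FFT bin nearest `freq`."""
    spec = 2 * np.abs(np.fft.rfft(x)) / len(x)   # scale so a unit sine reads ~1.0
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return spec[np.argmin(np.abs(freqs - freq))]

# Synthetic "left ear" and "right ear" FFRs with different F0 strength
rng = np.random.default_rng(1)
left = 1.0 * np.sin(2 * np.pi * f0 * t) + 0.1 * rng.standard_normal(t.size)
right = 0.6 * np.sin(2 * np.pi * f0 * t) + 0.1 * rng.standard_normal(t.size)

amp_left = spectral_amplitude(left, f0, fs)
amp_right = spectral_amplitude(right, f0, fs)
```

Comparing `amp_left` with `amp_right` is the single-bin version of the ear comparison the abstract reports for F0.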
Affiliation(s)
- Alejandro Ianiszewski
- École d'orthophonie et d'audiologie, Faculté de médecine, Université de Montréal, Montréal, Québec, Canada
- Centre de recherche de l'Institut universitaire de gériatrie de Montréal, Montréal, Québec, Canada
- Adrian Fuente
- École d'orthophonie et d'audiologie, Faculté de médecine, Université de Montréal, Montréal, Québec, Canada
- Centre de recherche de l'Institut universitaire de gériatrie de Montréal, Montréal, Québec, Canada
- Jean-Pierre Gagné
- École d'orthophonie et d'audiologie, Faculté de médecine, Université de Montréal, Montréal, Québec, Canada
- Centre de recherche de l'Institut universitaire de gériatrie de Montréal, Montréal, Québec, Canada
16
Neural encoding of voice pitch and formant structure at birth as revealed by frequency-following responses. Sci Rep 2021; 11:6660. [PMID: 33758251] [PMCID: PMC7987955] [DOI: 10.1038/s41598-021-85799-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Received: 09/14/2020] [Accepted: 03/04/2021] [Indexed: 11/22/2022]
Abstract
Detailed neural encoding of voice pitch and formant structure plays a crucial role in speech perception and is of key importance for appropriate acquisition of the phonetic repertoire from birth. However, the extent to which newborns are capable of extracting pitch and formant structure information from, respectively, the temporal envelope and the temporal fine structure of speech sounds remains unclear. Here, we recorded the frequency-following response (FFR) elicited by a novel two-vowel, rising-pitch-ending stimulus to simultaneously characterize voice pitch and formant structure encoding accuracy in a sample of neonates and adults. Data revealed that newborns tracked changes in voice pitch reliably and no differently than adults, but exhibited weaker signatures of formant structure encoding, particularly at higher formant frequency ranges. Thus, our results indicate a well-developed encoding of voice pitch at birth, while formant structure representation matures in a frequency-dependent manner. Furthermore, we demonstrate the feasibility of assessing voice pitch and formant structure encoding within clinical evaluation times in a hospital setting, and suggest the possibility of using this novel stimulus as a tool for longitudinal developmental studies of the auditory system.
17
Farahani ED, Wouters J, van Wieringen A. Brain mapping of auditory steady-state responses: A broad view of cortical and subcortical sources. Hum Brain Mapp 2021; 42:780-796. [PMID: 33166050] [PMCID: PMC7814770] [DOI: 10.1002/hbm.25262] [Citation(s) in RCA: 34] [Impact Index Per Article: 11.3] [Received: 07/02/2020] [Revised: 10/13/2020] [Accepted: 10/15/2020] [Indexed: 12/21/2022]
Abstract
Auditory steady-state responses (ASSRs) are evoked brain responses to modulated or repetitive acoustic stimuli. Investigating the underlying neural generators of ASSRs is important to gain in-depth insight into the mechanisms of auditory temporal processing. The aim of this study is to reconstruct an extensive range of neural generators, that is, cortical and subcortical, as well as primary and non-primary ones. This extensive overview of neural generators provides an appropriate basis for studying functional connectivity. To this end, a minimum-norm imaging (MNI) technique is employed. We also present a novel extension to MNI which facilitates source analysis by quantifying the ASSR for each dipole. Results demonstrate that the proposed MNI approach is successful in reconstructing sources located both within (primary) and outside (non-primary) of the auditory cortex (AC). Primary sources are detected in different stimulation conditions (four modulation frequencies and two sides of stimulation), thereby demonstrating the robustness of the approach. This study is one of the first investigations to identify non-primary sources. Moreover, we show that the MNI approach is also capable of reconstructing the subcortical activities of ASSRs. Finally, the results obtained using the MNI approach outperform the group-independent component analysis method on the same data, in terms of detection of sources in the AC, reconstructing the subcortical activities and reducing computational load.
Affiliation(s)
- Ehsan Darestani Farahani
- Research Group Experimental ORL, Department of Neurosciences, Katholieke Universiteit Leuven, Leuven, Belgium
- Jan Wouters
- Research Group Experimental ORL, Department of Neurosciences, Katholieke Universiteit Leuven, Leuven, Belgium
- Astrid van Wieringen
- Research Group Experimental ORL, Department of Neurosciences, Katholieke Universiteit Leuven, Leuven, Belgium
18
Neural generators of the frequency-following response elicited to stimuli of low and high frequency: A magnetoencephalographic (MEG) study. Neuroimage 2021; 231:117866. [PMID: 33592244] [DOI: 10.1016/j.neuroimage.2021.117866] [Citation(s) in RCA: 37] [Impact Index Per Article: 12.3] [Received: 07/07/2020] [Revised: 02/08/2021] [Accepted: 02/09/2021] [Indexed: 01/03/2023]
Abstract
The frequency-following response (FFR) to periodic complex sounds has gained recent interest in auditory cognitive neuroscience as it captures with great fidelity the tracking accuracy of the periodic sound features in the ascending auditory system. Seminal studies suggested the FFR as a correlate of subcortical sound encoding, yet recent studies aiming to locate its sources challenged this assumption, demonstrating that FFR receives some contribution from the auditory cortex. Based on frequency-specific phase-locking capabilities along the auditory hierarchy, we hypothesized that FFRs to higher frequencies would receive less cortical contribution than those to lower frequencies, hence supporting a major subcortical involvement for these high frequency sounds. Here, we used a magnetoencephalographic (MEG) approach to trace the neural sources of the FFR elicited in healthy adults (N = 19) to low (89 Hz) and high (333 Hz) frequency sounds. FFRs elicited to the high and low frequency sounds were clearly observable on MEG and comparable to those obtained in simultaneous electroencephalographic recordings. Distributed source modeling analyses revealed midbrain, thalamic, and cortical contributions to FFR, arranged in frequency-specific configurations. Our results showed that the main contribution to the high-frequency sound FFR originated in the inferior colliculus and the medial geniculate body of the thalamus, with no significant cortical contribution. In contrast, the low-frequency sound FFR had a major contribution located in the auditory cortices, and also received contributions originating in the midbrain and thalamic structures. These findings support the multiple generator hypothesis of the FFR and are relevant for our understanding of the neural encoding of sounds along the auditory hierarchy, suggesting a hierarchical organization of periodicity encoding.
19
Abstract
OBJECTIVES There is increasing interest in using the frequency following response (FFR) to describe the effects of varying different aspects of hearing aid signal processing on brainstem neural representation of speech. To this end, recent studies have examined the effects of filtering on brainstem neural representation of the speech fundamental frequency (f0) in listeners with normal hearing sensitivity by measuring FFRs to low- and high-pass filtered signals. However, the stimuli used in these studies do not reflect the entire range of typical cutoff frequencies used in frequency-specific gain adjustments during hearing aid fitting. Further, there has been limited discussion of the effect of filtering on brainstem neural representation of formant-related harmonics. Here, the effects of filtering on brainstem neural representation of the speech fundamental frequency (f0) and harmonics related to the first formant frequency (F1) were assessed by recording envelope and spectral FFRs to a vowel low-, high-, and band-pass filtered at cutoff frequencies ranging from 0.125 to 8 kHz. DESIGN FFRs were measured to a synthetically generated vowel stimulus /u/ presented in a full-bandwidth condition and in low-pass- (experiment 1), high-pass- (experiment 2), and band-pass-filtered (experiment 3) conditions. In experiment 1, FFRs were measured to the vowel /u/ presented in the full-bandwidth condition as well as 11 low-pass filtered conditions (low-pass cutoff frequencies: 0.125, 0.25, 0.5, 0.75, 1, 1.5, 2, 3, 4, 6, and 8 kHz) in 19 adult listeners with normal hearing sensitivity. In experiment 2, FFRs were measured to the same vowel presented in the full-bandwidth condition as well as 10 high-pass filtered conditions (high-pass cutoff frequencies: 0.125, 0.25, 0.5, 0.75, 1, 1.5, 2, 3, 4, and 6 kHz) in 7 adult listeners with normal hearing sensitivity.
In experiment 3, in addition to the full-bandwidth condition, FFRs were measured to the vowel /u/ low-pass filtered at 2 kHz and band-pass filtered between 2 and 4 kHz and between 4 and 6 kHz in 10 adult listeners with normal hearing sensitivity. A fast Fourier transform analysis was conducted to measure the strength of f0 and the F1-related harmonic relative to the noise floor in the brainstem neural responses obtained in the full-bandwidth and filtered stimulus conditions. RESULTS Brainstem neural representation of f0 was reduced when the low-pass filter cutoff frequency was between 0.25 and 0.5 kHz; no differences in f0 strength were noted between conditions when the low-pass filter cutoff frequency was at or above 0.75 kHz. While envelope FFR f0 strength was reduced when the stimulus was high-pass filtered at 6 kHz, there was no effect of high-pass filtering on brainstem neural representation of f0 when the high-pass filter cutoff frequency ranged from 0.125 to 4 kHz. There was a weakly significant global effect of band-pass filtering on brainstem neural phase-locking to f0. A trends analysis indicated that the mean f0 magnitude in the brainstem neural response was greater when the stimulus was band-pass filtered between 2 and 4 kHz than when it was band-pass filtered between 4 and 6 kHz, low-pass filtered at 2 kHz, or presented in the full-bandwidth condition. Last, neural phase-locking to f0 was reduced or absent in envelope FFRs measured to filtered stimuli that lacked spectral energy above 0.125 kHz or below 6 kHz. Similarly, little to no energy was seen at F1 in spectral FFRs obtained to low-, high-, or band-pass filtered stimuli that did not contain energy in the F1 region. For stimulus conditions that contained energy at F1, the strength of the peak at F1 in the spectral FFR varied little with low-, high-, or band-pass filtering.
CONCLUSIONS Energy at f0 in envelope FFRs may arise due to neural phase-locking to low-, mid-, or high-frequency stimulus components, provided the stimulus envelope is modulated by at least two interacting harmonics. Stronger neural responses at f0 are measured when filtering results in stimulus bandwidths that preserve stimulus energy at F1 and F2. In addition, results suggest that unresolved harmonics may favorably influence f0 strength in the neural response. Lastly, brainstem neural representation of the F1-related harmonic measured in spectral FFRs obtained to filtered stimuli is related to the presence or absence of stimulus energy at F1. These findings add to the existing literature exploring the viability of the FFR as an objective technique to evaluate hearing aid fitting where stimulus bandwidth is altered by design due to frequency-specific gain applied by amplification algorithms.
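The response-strength measure used throughout this study, the magnitude of the f0 component relative to the surrounding noise floor of the FFR spectrum, can be sketched as below. The synthetic response, the flanking-bin noise-floor convention, and all parameter values are illustrative assumptions, not the study's analysis settings.

```python
import numpy as np

fs, n = 10000, 2000   # 200-ms epoch at 10 kHz (illustrative values)
t = np.arange(n) / fs
f0 = 100.0
rng = np.random.default_rng(2)
response = np.sin(2 * np.pi * f0 * t) + 0.3 * rng.standard_normal(n)

spec = np.abs(np.fft.rfft(response))
freqs = np.fft.rfftfreq(n, 1 / fs)
k = int(np.argmin(np.abs(freqs - f0)))   # index of the bin holding f0

# Noise floor: mean magnitude of flanking bins, skipping those adjacent to the peak
flank = np.r_[spec[k - 12:k - 2], spec[k + 3:k + 13]]
f0_snr_db = 20 * np.log10(spec[k] / flank.mean())
```

A response whose f0 bin does not rise above this local floor (an SNR near 0 dB) would count as "reduced or absent" phase-locking in the sense used by the abstract.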
20
BinKhamis G, Elia Forte A, Reichenbach T, O'Driscoll M, Kluk K. Speech Auditory Brainstem Responses in Adult Hearing Aid Users: Effects of Aiding and Background Noise, and Prediction of Behavioral Measures. Trends Hear 2019; 23:2331216519848297. [PMID: 31264513] [PMCID: PMC6607564] [DOI: 10.1177/2331216519848297] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Indexed: 11/15/2022]
Abstract
Evaluation of patients who are unable to provide behavioral responses on standard clinical measures is challenging due to the lack of standard objective (non-behavioral) clinical audiological measures that assess the outcome of an intervention (e.g., hearing aids). Brainstem responses to short consonant-vowel stimuli (speech-auditory brainstem responses [speech-ABRs]) have been proposed as a measure of subcortical encoding of speech, speech detection, and speech-in-noise performance in individuals with normal hearing. Here, we investigated the potential application of speech-ABRs as an objective clinical outcome measure of speech detection, speech-in-noise detection and recognition, and self-reported speech understanding in 98 adults with sensorineural hearing loss. We compared aided and unaided speech-ABRs, and speech-ABRs in quiet and in noise. In addition, we evaluated whether speech-ABR F0 encoding (obtained from the complex cross-correlation with the 40 ms [da] fundamental waveform) predicted aided behavioral speech recognition in noise or aided self-reported speech understanding. Results showed that (a) aided speech-ABRs had earlier peak latencies, larger peak amplitudes, and larger F0 encoding amplitudes compared to unaided speech-ABRs; (b) the addition of background noise resulted in later F0 encoding latencies but did not have an effect on peak latencies and amplitudes or on F0 encoding amplitudes; and (c) speech-ABRs were not a significant predictor of any of the behavioral or self-report measures. These results show that speech-ABR F0 encoding is not a good predictor of speech-in-noise recognition or self-reported speech understanding with hearing aids. However, our results suggest that speech-ABRs may have potential for clinical application as an objective measure of speech detection with hearing aids.
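The F0-encoding measure described above, obtained by cross-correlating the response with the stimulus fundamental waveform, can be sketched as follows. The 100-Hz fundamental, the simulated 7-ms neural lag, the lag search range, and the synthetic waveforms are illustrative assumptions, not the study's stimulus or recording parameters.

```python
import numpy as np

fs = 16000
t = np.arange(int(0.04 * fs)) / fs        # 40-ms window, like the [da] stimulus
f0_wave = np.sin(2 * np.pi * 100.0 * t)   # stimulus fundamental waveform (assumed 100 Hz)

rng = np.random.default_rng(5)
delay = int(0.007 * fs)                   # simulated 7-ms neural lag
resp = np.roll(0.8 * f0_wave, delay) + 0.5 * rng.standard_normal(t.size)

def f0_encoding(resp, ref, fs, max_lag_ms=12.0):
    """Peak normalised cross-correlation between the response and the F0
    waveform over plausible neural lags; returns (amplitude, latency in ms)."""
    max_lag = int(max_lag_ms / 1000 * fs)
    lags = np.arange(max_lag + 1)
    cc = [np.corrcoef(np.roll(ref, lag), resp)[0, 1] for lag in lags]
    best = int(np.argmax(cc))
    return cc[best], lags[best] / fs * 1000

amp, lat_ms = f0_encoding(resp, f0_wave, fs)
```

The peak correlation plays the role of the F0 encoding amplitude, and its lag the F0 encoding latency, the two quantities the abstract compares across aided/unaided and quiet/noise conditions.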
Affiliation(s)
- Ghada BinKhamis
- Manchester Centre for Audiology and Deafness, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, Manchester, UK
- Department of Communication and Swallowing Disorders, King Fahad Medical City, Riyadh, Saudi Arabia
- Antonio Elia Forte
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
- Tobias Reichenbach
- Department of Bioengineering, Centre for Neurotechnology, Imperial College London, London, UK
- Martin O'Driscoll
- Manchester Centre for Audiology and Deafness, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, Manchester, UK
- Manchester Auditory Implant Centre, Manchester University Hospitals NHS Foundation Trust, Manchester, UK
- Karolina Kluk
- Manchester Centre for Audiology and Deafness, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, Manchester, UK
21
Speech Auditory Brainstem Responses: Effects of Background, Stimulus Duration, Consonant-Vowel, and Number of Epochs. Ear Hear 2019; 40:659-670. [PMID: 30124503] [PMCID: PMC6493675] [DOI: 10.1097/aud.0000000000000648] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Indexed: 12/03/2022]
Abstract
Supplemental Digital Content is available in the text. Objectives: The aims of this study were to systematically explore the effects of stimulus duration, background (quiet versus noise), and three consonant–vowels on speech-auditory brainstem responses (ABRs). Additionally, the minimum number of epochs required to record speech-ABRs with clearly identifiable waveform components was assessed. The purpose was to evaluate whether shorter duration stimuli could be reliably used to record speech-ABRs both in quiet and in background noise to the three consonant–vowels, as opposed to longer duration stimuli that are commonly used in the literature. Shorter duration stimuli and a smaller number of epochs would require shorter test sessions and thus encourage the transition of the speech-ABR from research to clinical practice. Design: Speech-ABRs in response to 40 msec [da], 50 msec [ba] [da] [ga], and 170 msec [ba] [da] [ga] stimuli were collected from 12 normal-hearing adults with confirmed normal click-ABRs. Monaural (right-ear) speech-ABRs were recorded to all stimuli in quiet and to 40 msec [da], 50 msec [ba] [da] [ga], and 170 msec [da] in a background of two-talker babble at +10 dB signal to noise ratio using a 2-channel electrode montage (Cz-Active, A1 and A2-reference, Fz-ground). Twelve thousand epochs (6000 per polarity) were collected for each stimulus and background from all participants. Latencies and amplitudes of speech-ABR peaks (V, A, D, E, F, O) were compared across backgrounds (quiet and noise) for all stimulus durations, across stimulus durations (50 and 170 msec) and across consonant–vowels ([ba], [da], and [ga]). Additionally, degree of phase locking to the stimulus fundamental frequency (in quiet versus noise) was evaluated for the frequency following response in speech-ABRs to the 170 msec [da]. Finally, the number of epochs required for a robust response was evaluated using Fsp statistic and bootstrap analysis at different epoch iterations. 
Results: Background effect: the addition of background noise resulted in speech-ABRs with longer peak latencies and smaller peak amplitudes compared with speech-ABRs in quiet, irrespective of stimulus duration. However, there was no effect of background noise on the degree of phase locking of the frequency following response to the stimulus fundamental frequency in speech-ABRs to the 170 msec [da]. Duration effect: speech-ABR peak latencies and amplitudes did not differ in response to the 50 and 170 msec stimuli. Consonant–vowel effect: different consonant–vowels did not have an effect on speech-ABR peak latencies regardless of stimulus duration. Number of epochs: a larger number of epochs was required to record speech-ABRs in noise compared with in quiet, and a smaller number of epochs was required to record speech-ABRs to the 40 msec [da] compared with the 170 msec [da]. Conclusions: This is the first study that systematically investigated the clinical feasibility of speech-ABRs in terms of stimulus duration, background noise, and number of epochs. Speech-ABRs can be reliably recorded to the 40 msec [da] without compromising response quality even when presented in background noise. Because fewer epochs were needed for the 40 msec [da], this would be the optimal stimulus for clinical use. Finally, given that there was no effect of consonant–vowel on speech-ABR peak latencies, there is no evidence that speech-ABRs are suitable for assessing auditory discrimination of the stimuli used.
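The Fsp statistic used in the design above quantifies response quality as the ratio of the averaged response's variance to the residual noise variance estimated from a single time point across epochs. A schematic illustration on synthetic epochs follows; the evoked waveform, noise level, epoch counts, and choice of single point are assumptions, not the study's recording parameters.

```python
import numpy as np

rng = np.random.default_rng(3)
n_epochs, n_samp = 6000, 200
t = np.arange(n_samp)
evoked = 0.2 * np.sin(2 * np.pi * t / 50)          # repeatable evoked waveform
epochs = evoked + rng.standard_normal((n_epochs, n_samp))

def fsp(epochs, single_point=100):
    """Fsp: variance of the grand average over time, divided by the
    across-epoch variance at one time point scaled by the epoch count."""
    avg = epochs.mean(axis=0)
    noise_var = epochs[:, single_point].var() / len(epochs)
    return avg.var() / noise_var

fsp_signal = fsp(epochs)                                   # response present
fsp_noise = fsp(rng.standard_normal((n_epochs, n_samp)))   # no response
```

With a genuine evoked component the ratio grows with the number of averaged epochs, while for pure noise it hovers near 1, which is why running Fsp at increasing epoch iterations indicates how many epochs suffice for a robust response.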
22
Objective Comparison of the Quality and Reliability of Auditory Brainstem Response Features Elicited by Click and Speech Sounds. Ear Hear 2019; 40:447-457. [PMID: 30142101] [DOI: 10.1097/aud.0000000000000639] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Indexed: 11/27/2022]
Abstract
OBJECTIVES Auditory brainstem responses (ABRs) are commonly generated using simple, transient stimuli (e.g., clicks or tone bursts). While resulting waveforms are undeniably valuable clinical tools, they are unlikely to be representative of responses to more complex, behaviorally relevant sounds such as speech. There has been interest in the use of more complex stimuli to elicit the ABR, with considerable work focusing on the use of synthetically generated consonant-vowel (CV) stimuli. Such responses may be sensitive to a range of clinical conditions and to the effects of auditory training. Several ABR features have been documented in response to CV stimuli; however, an important issue is how robust such features are. In the current research, we use time- and frequency-domain objective measures of quality to compare the reliability of Wave V of the click-evoked ABR to that of waves elicited by the CV stimulus /da/. DESIGN Stimuli were presented to 16 subjects at 70 dB nHL in quiet for 6000 epochs. The presence and quality of response features across subjects were examined using Fsp and a Bootstrap analysis method, which was used to assign p values to ABR features for individual recordings in both time and frequency domains. RESULTS All consistent peaks identified within the /da/-evoked response had significantly lower amplitude than Wave V of the ABR. The morphology of speech-evoked waveforms varied across subjects. Mean Fsp values for several waves of the speech-evoked ABR were below 3, suggesting low quality. The most robust response to the /da/ stimulus appeared to be an offset response. Only click-evoked Wave V showed 100% wave presence. Responses to the /da/ stimulus showed lower wave detectability. Frequency-domain analysis showed stronger and more consistent activity evoked by clicks than by /da/. Only the click ABR had consistent time-frequency domain features across all subjects. 
CONCLUSIONS Based on the objective analysis used within this investigation, it appears that the quality of speech-evoked ABR is generally less than that of click-evoked responses, although the quality of responses may be improved by increasing the number of epochs or the stimulation level. This may have implications for the clinical use of speech-evoked ABR.
23
Abstract
OBJECTIVE To investigate how tinnitus affects the processing of speech and non-speech stimuli at the subcortical level. STUDY DESIGN Cross-sectional analytical study. SETTING Academic, tertiary referral center. PATIENTS Eighteen individuals with tinnitus and 20 controls without tinnitus, matched for age and sex. All subjects had normal hearing sensitivity. INTERVENTION Diagnostic. MAIN OUTCOME MEASURES The effect of tinnitus on the parameters of auditory brainstem responses (ABRs) to non-speech (click-ABR) and speech (sABR) stimuli was investigated. RESULTS Latencies of click-ABR waves III, V, and Vn, as well as the I-V inter-peak latency (IPL), were significantly longer in individuals with tinnitus than in controls. Individuals with tinnitus demonstrated significantly longer latencies of all sABR waves than the control group. The tinnitus patients also exhibited a significant decrease in the slope of the V-A complex and reduced encoding of the first and higher formants. A significant difference was observed between the two groups in spectral magnitudes over the first formant frequency range (F1) and a higher frequency region (HF). CONCLUSIONS Our findings suggest that maladaptive neural plasticity resulting from tinnitus can be measured subcortically and affects the temporal processing of both speech and non-speech stimuli. The findings are discussed in relation to models of maladaptive plasticity and the interference of tinnitus as an internal noise in the synthesis of speech auditory stimuli.
24
Human Frequency Following Responses to Vocoded Speech: Amplitude Modulation Versus Amplitude Plus Frequency Modulation. Ear Hear 2019; 41:300-311. [PMID: 31246660] [DOI: 10.1097/aud.0000000000000756] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Indexed: 11/26/2022]
Abstract
OBJECTIVES The most commonly employed speech processing strategies in cochlear implants (CIs) extract and encode only amplitude modulation (AM) in a limited number of frequency channels. A novel speech processing strategy that encodes both frequency modulation (FM) and AM has been proposed to improve CI performance. Using behavioral tests, its developers reported better speech, speaker, and tone recognition with this novel strategy than with the AM-alone strategy. Here, we used the scalp-recorded human frequency following responses (FFRs) to examine the differences in the neural representation of vocoded speech sounds with AM alone and AM + FM as the spectral and temporal cues were varied. Specifically, we were interested in determining whether the addition of FM to AM improved the neural representation of envelope periodicity (FFRENV) and temporal fine structure (FFRTFS), as reflected in the temporal pattern of the phase-locked neural activity generating the FFR. DESIGN FFRs were recorded from 13 normal-hearing, adult listeners in response to the original unprocessed stimulus (a synthetic diphthong /au/ with a 110-Hz fundamental frequency or F0 and a 250-msec duration) and the 2-, 4-, 8- and 16-channel sine vocoded versions of /au/ with AM alone and AM + FM. Temporal waveforms, autocorrelation analyses, fast Fourier transforms, and stimulus-response spectral correlations were used to analyze both the strength and fidelity of the neural representation of envelope periodicity (F0) and TFS (formant structure). RESULTS The periodicity strength in the FFRENV decreased more for the AM stimuli than for the relatively resilient AM + FM stimuli as the number of channels was increased. Regardless of the number of channels, a clear spectral peak of FFRENV was consistently observed at the stimulus F0 for all the AM + FM stimuli but not for the AM stimuli. Neural representation as revealed by the spectral correlation of FFRTFS was better for the AM + FM stimuli when compared to the AM stimuli.
Neural representation of the time-varying formant-related harmonics as revealed by the spectral correlation was also better for the AM + FM stimuli as compared to the AM stimuli. CONCLUSIONS These results are consistent with previously reported behavioral results and suggest that the AM + FM processing strategy elicited brainstem neural activity that better preserved periodicity, temporal fine structure, and time-varying spectral information than the AM processing strategy. The relatively more robust neural representation of AM + FM stimuli observed here likely contributes to the superior performance on speech, speaker, and tone recognition with the AM + FM processing strategy. Taken together, these results suggest that neural information preserved in the FFR may be used to evaluate signal processing strategies considered for CIs.
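The envelope-periodicity measures described above (autocorrelation and FFT-based spectral peaks at the stimulus F0) can be illustrated with a short sketch. This is not the authors' analysis code; it computes one plausible "F0 peak strength" metric on a synthetic 110-Hz response, and all parameter values are illustrative.

```python
import numpy as np

def f0_peak_strength(response, fs, f0, bw=10.0):
    """Spectral amplitude at the stimulus F0, relative to the mean
    amplitude in a surrounding band (a simple SNR-like measure of
    envelope-periodicity strength)."""
    n = len(response)
    spec = np.abs(np.fft.rfft(response * np.hanning(n))) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    at_f0 = spec[np.argmin(np.abs(freqs - f0))]
    band = (np.abs(freqs - f0) > bw) & (np.abs(freqs - f0) < 10 * bw)
    return at_f0 / spec[band].mean()

# Synthetic 250-ms "FFR" phase-locked to a 110-Hz F0, plus noise.
fs, f0, dur = 16000, 110.0, 0.25
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(0)
ffr = np.sin(2 * np.pi * f0 * t) + 0.5 * rng.standard_normal(t.size)

print(f0_peak_strength(ffr, fs, f0))  # substantially > 1 when F0 is well represented
```

A stronger F0 peak relative to the surrounding spectrum corresponds to the clearer FFRENV peaks the study reports for the AM + FM stimuli.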
|
25
|
BinKhamis G, Perugia E, O'Driscoll M, Kluk K. Speech-ABRs in cochlear implant recipients: feasibility study. Int J Audiol 2019; 58:678-684. [PMID: 31132012 DOI: 10.1080/14992027.2019.1619100] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Abstract
Objective: The aim of this study was to assess the feasibility of recording speech-ABRs from cochlear implant (CI) recipients, and to remove the artefact using a clinically applicable single-channel approach. Design: Speech-ABRs were recorded in response to a 40-ms [da] presented via loudspeaker, using a two-channel electrode montage. Additionally, artefacts were recorded using an artificial head incorporating a MED-EL CI, with stimulation parameters as similar as possible to those of three MED-EL participants. A single-channel artefact removal technique was applied to all responses. Study sample: A total of 12 adult CI recipients (6 Cochlear Nucleus and 6 MED-EL CIs). Results: Responses differed according to CI type. Artefact removal yielded responses containing speech-ABR characteristics in two MED-EL CI participants; however, it was not possible to verify whether these were true responses or were modulated by artefacts. Artefact removal was successful for the artificial-head recordings. Conclusions: This is the first study that attempted to record speech-ABRs from CI recipients. Results suggest that there is potential for application of a single-channel approach to artefact removal. However, a more robust and adaptive approach to artefact removal, including a method to verify true responses, is needed.
Affiliation(s)
- Ghada BinKhamis
- Manchester Centre for Audiology and Deafness, Manchester Academic Health Science Centre, University of Manchester, Manchester, UK; King Fahad Medical City, Riyadh, Saudi Arabia
| | - Emanuele Perugia
- Manchester Centre for Audiology and Deafness, Manchester Academic Health Science Centre, University of Manchester, Manchester, UK
| | - Martin O'Driscoll
- Manchester Centre for Audiology and Deafness, Manchester Academic Health Science Centre, University of Manchester, Manchester, UK; Manchester Auditory Implant Centre, Manchester University Hospitals NHS Foundation Trust, Manchester, UK
| | - Karolina Kluk
- Manchester Centre for Audiology and Deafness, Manchester Academic Health Science Centre, University of Manchester, Manchester, UK
|
26
|
Visibility graph analysis of speech evoked auditory brainstem response in persistent developmental stuttering. Neurosci Lett 2019; 696:28-32. [DOI: 10.1016/j.neulet.2018.12.015] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2018] [Revised: 12/09/2018] [Accepted: 12/10/2018] [Indexed: 10/27/2022]
|
27
|
Pinto ESM, Martinelli MC. Brainstem auditory evoked potentials with speech stimulus in neonates. Braz J Otorhinolaryngol 2018; 86:191-200. [PMID: 30683567 PMCID: PMC9422734 DOI: 10.1016/j.bjorl.2018.11.006] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2018] [Revised: 10/17/2018] [Accepted: 11/05/2018] [Indexed: 11/29/2022] Open
Abstract
INTRODUCTION Brainstem auditory evoked potentials in response to complex sounds, such as speech, probe the neural representation of these sounds at subcortical levels and faithfully reflect the stimulus characteristics. However, few studies have used this type of stimulus; for it to be used in clinical practice, standards of normality must be established through studies in different populations. OBJECTIVE To analyze the latencies and amplitudes of the waves obtained from brainstem auditory evoked potentials to speech stimuli in Brazilian neonates with normal hearing and without auditory risk factors. METHODS A total of 21 neonates with a mean age of 9 days, without risk of hearing loss and with normal results on neonatal hearing screening, were evaluated according to the Joint Committee on Infant Hearing protocols. Auditory evoked potentials were recorded with a speech stimulus (the syllable /da/) at an intensity of 80 dB HL, and the latencies and amplitudes of the resulting waves were analyzed. RESULTS In the transient portion, we observed a 100% response rate for all analyzable waves (Waves I, III, V and A), and these waves exhibited latencies <10 ms. In the sustained portion, Wave B was identified in 53.12% of subjects, Wave C in 75%, Wave D in 90.62%, Wave E in 96.87%, Wave F in 87.5%, and Wave O in 87.5%. The observed latencies of these waves ranged from 11.51 ms to 52.16 ms. Latencies were more consistent across subjects, whereas amplitudes showed greater variation in the studied group. CONCLUSIONS Although the wave morphology of brainstem evoked potentials to speech stimulation in neonates is quite similar to that of adults, longer latencies and greater amplitude variation were observed in the waves analyzed.
|
28
|
Musacchia G, Ortiz-Mantilla S, Roesler CP, Rajendran S, Morgan-Byrne J, Benasich AA. Effects of noise and age on the infant brainstem response to speech. Clin Neurophysiol 2018; 129:2623-2634. [DOI: 10.1016/j.clinph.2018.08.005] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2018] [Revised: 08/20/2018] [Accepted: 08/24/2018] [Indexed: 12/23/2022]
|
29
|
Auditory brainstem response to speech in children with high functional autism spectrum disorder. Neurol Sci 2018; 40:121-125. [DOI: 10.1007/s10072-018-3594-9] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2018] [Accepted: 09/28/2018] [Indexed: 11/27/2022]
|
30
|
Lucchetti F, Deltenre P, Avan P, Giraudet F, Fan X, Nonclercq A. Generalization of the primary tone phase variation method: An exclusive way of isolating the frequency-following response components. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2018; 144:2400. [PMID: 30404467 DOI: 10.1121/1.5063821] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/19/2018] [Accepted: 08/28/2018] [Indexed: 06/08/2023]
Abstract
The primary tone phase variation (PTPV) technique combines selective sub-averaging with systematic variation of the phases of multitone stimuli. Each response component that has a known phase relationship with the phases of the stimulus components can be isolated in the time domain. The method was generalized to the frequency-following response (FFR) evoked by a two-tone (f1 and f2) stimulus comprising linear, non-linear, and transient components. The generalized PTPV technique isolated each spectral component present in the FFR, including those sharing the same frequency, allowing comparison of their latencies. After isolation of the envelope component f2 - f1 from its harmonic distortion 2f2 - 2f1 and from the transient auditory brainstem response, a computerized analysis of instantaneous amplitudes and phases was applied in order to objectively determine the onset and offset latencies of the response components. The successive activation of two generators separated by 3.7 ms could be detected in all (N = 12) awake adult normal subjects, but in none (N = 10) of the sleeping/sedated children with normal hearing thresholds. The method offers an unprecedented way of disentangling the various FFR subcomponents. These results open the way for renewed investigations of the FFR components in both human and animal research, as well as for clinical applications.
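The PTPV idea of cancelling unwanted components by stepping the primary phases can be demonstrated on synthetic data. The sketch below is an illustrative reconstruction, not the authors' implementation: stepping each primary's phase by pi across four sub-averages and forming a signed sum retains only the component whose phase follows p2 - p1 (the envelope at f2 - f1), while cancelling f1, f2, their distortion harmonics, and the phase-independent transient. All frequencies and amplitudes are made up for the demonstration.

```python
import numpy as np

def response(t, p1, p2, rng):
    """Toy evoked response to a two-tone stimulus with primary phases
    p1 (at f1) and p2 (at f2): the two linear terms, an envelope
    distortion product at f2 - f1, a phase-independent onset
    transient, and noise."""
    f1, f2 = 500.0, 620.0
    transient = np.exp(-t / 0.005)
    return (np.sin(2 * np.pi * f1 * t + p1)
            + np.sin(2 * np.pi * f2 * t + p2)
            + 0.8 * np.sin(2 * np.pi * (f2 - f1) * t + (p2 - p1))
            + transient
            + 0.3 * rng.standard_normal(t.size))

fs = 8000
t = np.arange(int(0.1 * fs)) / fs
rng = np.random.default_rng(0)

# Four sub-averages with the primary phases stepped by pi; the signed
# sum keeps only components whose phase tracks p2 - p1.
S = {(a, b): np.mean([response(t, a, b, rng) for _ in range(50)], axis=0)
     for a in (0.0, np.pi) for b in (0.0, np.pi)}
env = (S[(0.0, 0.0)] - S[(np.pi, 0.0)] - S[(0.0, np.pi)]
       + S[(np.pi, np.pi)]) / 4

spec = np.abs(np.fft.rfft(env)) / len(env)
freqs = np.fft.rfftfreq(len(env), 1 / fs)
bin_at = lambda f: np.argmin(np.abs(freqs - f))
# The isolated trace is dominated by the f2 - f1 = 120 Hz envelope.
print(spec[bin_at(120.0)], spec[bin_at(500.0)], spec[bin_at(620.0)])
```

The same signed-sum construction with other sign patterns isolates the other components (f1, f2, 2f2 - 2f1), which is the basis of the generalization described in the abstract.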
Affiliation(s)
- Federico Lucchetti
- Laboratoire de Neurophysiologie Sensorielle et Cognitive CP403/22, Brugmann Hospital, Place Van Gehuchten 4, Brussels, B1060, Belgium
| | - Paul Deltenre
- Laboratoire de Neurophysiologie Sensorielle et Cognitive CP403/22, Brugmann Hospital, Place Van Gehuchten 4, Brussels, B1060, Belgium
| | - Paul Avan
- Laboratory of Neurosensory Biophysics Unité mixte de recherche, Institut national de la santé et de la recherche médicale 1107, University Clermont Auvergne, 28 Place Henri Dunant, BP38 Clermont-Ferrand, Cedex 1, F63001, France
| | - Fabrice Giraudet
- Laboratory of Neurosensory Biophysics Unité mixte de recherche, Institut national de la santé et de la recherche médicale 1107, University Clermont Auvergne, 28 Place Henri Dunant, BP38 Clermont-Ferrand, Cedex 1, F63001, France
| | - Xiaoya Fan
- Bio-, Electro- and Mechanical Systems CP165/56, Université Libre de Bruxelles, Avenue F. D. Roosevelt, 50 Brussels, B1050, Belgium
| | - Antoine Nonclercq
- Bio-, Electro- and Mechanical Systems CP165/56, Université Libre de Bruxelles, Avenue F. D. Roosevelt, 50 Brussels, B1050, Belgium
|
31
|
Fu Q, Wang T, Liang Y, Lin Y, Zhao X, Wan J, Fan S. Auditory Deficits in Patients With Mild and Moderate Obstructive Sleep Apnea Syndrome: A Speech Syllable Evoked Auditory Brainstem Response Study. Clin Exp Otorhinolaryngol 2018; 12:58-65. [PMID: 30134647 PMCID: PMC6315215 DOI: 10.21053/ceo.2018.00017] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2018] [Accepted: 06/25/2018] [Indexed: 12/14/2022] Open
Abstract
Objectives The energy consumption of the cochlea and neural signal transduction along the auditory pathway are highly dependent on blood oxygen supply. It remains debated whether obstructive sleep apnea syndrome (OSAS) affects auditory function, since patients suffer from low oxygen saturation. Moreover, the functional state of the auditory system is difficult to assess in the less severe stages of OSAS. Recently, the speech-evoked auditory brainstem response (speech-ABR) has been reported to be a new electrophysiological tool for characterizing auditory dysfunction. The aim of the present study was to evaluate auditory processing in adult patients with mild and moderate OSAS using the speech-ABR. Methods An experimental group of 31 patients with mild to moderate OSAS and a control group without OSAS, diagnosed by the apnea-hypopnea index on polysomnography, were recruited. All participants underwent otologic examinations and tests of pure-tone audiometry, distortion product otoacoustic emissions, click-evoked auditory brainstem response (click-ABR) and speech-ABR. Results Pure-tone audiometry, distortion product otoacoustic emissions, and click-ABR in the OSAS group showed no significant differences compared with the control group (P>0.05). Speech-ABRs for OSAS participants and controls showed similar morphological waveforms and typical peak structures. There were significant group differences for the onset and offset transient peaks (P<0.05): the OSAS group had longer latencies than controls for peak V (6.69±0.33 ms vs. 6.39±0.23 ms), peak C (13.48±0.30 ms vs. 13.31±0.23 ms), and peak O (48.27±0.39 ms vs. 47.60±0.40 ms). The latencies of these peaks showed significant correlations with the apnea-hypopnea index for peak V (r=0.37, P=0.040), peak C (r=0.36, P=0.045), and peak O (r=0.55, P=0.001).
Conclusion These findings indicate that some auditory dysfunction may be present in patients with mild and moderate OSAS, and that the damage worsens with OSAS severity, suggesting that the speech-ABR may be a potential biomarker for early-stage diagnosis and evaluation of OSAS.
Affiliation(s)
- Qiuyang Fu
- Department of Otolaryngology, Guangdong Second Provincial General Hospital, Guangzhou, China
| | - Tao Wang
- Laboratory of Medical Data and Engineering, Shenzhen Technology University, Shenzhen, China
| | - Yong Liang
- Department of Otolaryngology at Nanfang Hospital, Southern Medical University, Guangzhou, China
| | - Yong Lin
- Department of Otolaryngology, The First People's Hospital of Kashi Area, Kashi, China
| | - Xiangdong Zhao
- Department of Otolaryngology, Guangdong Second Provincial General Hospital, Guangzhou, China
| | - Jian Wan
- Department of Otolaryngology, Guangdong Second Provincial General Hospital, Guangzhou, China
| | - Suxiao Fan
- Department of Otolaryngology, Guangdong Second Provincial General Hospital, Guangzhou, China
|
32
|
Schut MJ, Van der Stoep N, Van der Stigchel S. Auditory spatial attention is encoded in a retinotopic reference frame across eye-movements. PLoS One 2018; 13:e0202414. [PMID: 30125311 PMCID: PMC6101386 DOI: 10.1371/journal.pone.0202414] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2018] [Accepted: 08/02/2018] [Indexed: 11/21/2022] Open
Abstract
The retinal location of visual information changes each time we move our eyes. Although it is now known that visual information is remapped in retinotopic coordinates across eye-movements (saccades), it is currently unclear how head-centered auditory information is remapped across saccades. Keeping track of the location of a sound source in retinotopic coordinates requires a rapid multi-modal reference frame transformation when making saccades. To reveal this reference frame transformation, we designed an experiment where participants attended an auditory or visual cue and executed a saccade. After the saccade had landed, an auditory or visual target could be presented either at the prior retinotopic location or at an uncued location. We observed that both auditory and visual targets presented at prior retinotopic locations were reacted to faster than targets at other locations. In a second experiment, we observed that spatial attention pointers obtained via audition are available in retinotopic coordinates immediately after an eye-movement is made. In a third experiment, we found evidence for an asymmetric cross-modal facilitation of information that is presented at the retinotopic location. In line with prior single cell recording studies, this study provides the first behavioral evidence for immediate auditory and cross-modal transsaccadic updating of spatial attention. These results indicate that our brain has efficient solutions for solving the challenges in localizing sensory input that arise in a dynamic context.
Affiliation(s)
- Martijn Jan Schut
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
| | - Nathan Van der Stoep
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
|
33
|
Complexity-Based Analysis of the Difference Between Normal Subjects and Subjects with Stuttering in Speech Evoked Auditory Brainstem Response. J Med Biol Eng 2018. [DOI: 10.1007/s40846-018-0430-x] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
|
34
|
Jafari Z, Malayeri S. Subcortical encoding of speech cues in children with congenital blindness. Restor Neurol Neurosci 2018; 34:757-68. [PMID: 27589504 DOI: 10.3233/rnn-160639] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
BACKGROUND Congenital visual deprivation underlies neural plasticity in different brain areas and provides an outstanding opportunity to study the neuroplastic capabilities of the brain. OBJECTIVES The present study aimed to investigate the effect of congenital blindness on subcortical auditory processing using electrophysiological and behavioral assessments in children. METHODS A total of 47 children aged 8-12 years, including 22 congenitally blind (CB) children and 25 normal-sighted (NS) controls, were studied. All children were tested using an auditory brainstem response (ABR) test with both click and speech stimuli. Speech recognition and musical abilities were tested using standard tools. RESULTS Significant differences were observed between the two groups in speech-ABR wave latencies A, F and O (p≤0.043), wave F amplitude (p = 0.039), V-A slope (p = 0.026), and three spectral magnitudes, F0, F1 and HF (p≤0.002). CB children showed superior performance compared to NS peers in all subtests and in the total score of musical abilities (p≤0.003). Moreover, they had significantly higher scores on the nonsense-syllable test in noise than the NS children (p = 0.034). Significant negative correlations were found, only in CB children, between the total music score and both wave A (p = 0.039) and wave F (p = 0.029) latencies, as well as between the nonsense-syllable test in noise and wave A latency (p = 0.041). CONCLUSION Our results suggest that neuroplasticity resulting from congenital blindness can be measured subcortically and has a heightened effect on temporal, musical and speech processing abilities. The findings are discussed based on models of plasticity and the influence of corticofugal modulation in synthesizing complex auditory stimuli.
Affiliation(s)
- Zahra Jafari
- Rehabilitation Research Center (RRC), Iran University of Medical Sciences (IUMS), Tehran, Iran; Department of Basic Sciences in Rehabilitation, School of Rehabilitation Sciences, Iran University of Medical Sciences (IUMS), Tehran, Iran; Canadian Center for Behavioral Neuroscience (CCBN), University of Lethbridge, Lethbridge, Alberta, Canada
|
35
|
Differences between auditory frequency-following responses and onset responses: Intracranial evidence from rat inferior colliculus. Hear Res 2017; 357:25-32. [PMID: 29156225 DOI: 10.1016/j.heares.2017.10.014] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/18/2017] [Revised: 10/14/2017] [Accepted: 10/30/2017] [Indexed: 11/22/2022]
Abstract
A periodic sound, such as a pure tone, evokes both transient onset field-potential responses and sustained frequency-following responses (FFRs) in the auditory midbrain, the inferior colliculus (IC). It is not clear whether the two types of responses are based on the same or different neural substrates. Although it has been assumed that FFRs are based on phase locking to the periodic sound, direct evidence relating FFR amplitude to phase-locking strength has been lacking. Using intracranial recordings from the rat central nucleus of the inferior colliculus (ICC), this study examined whether FFRs and onset responses differ in their sensitivity to pure-tone frequency and/or in their response-stimulus correlation when a tone is presented either monaurally or binaurally. In particular, we examined whether FFR amplitude is correlated with the strength of phase locking. The results showed that as the tone frequency increased from 1 to 2 kHz, the FFR amplitude decreased but the onset-response amplitude increased. Moreover, the FFR amplitude, but not the onset-response amplitude, was significantly correlated with the phase coherence between tone-evoked potentials and the tone stimulus. Finally, the FFR amplitude was negatively correlated with the onset-response amplitude. These results indicate that periodic-sound-evoked FFRs are based on the phase-locked activity of sustained-response neurons, whereas onset responses are based on the transient activity of onset-response neurons, suggesting that FFRs and onset responses serve different functions.
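The phase coherence between the evoked potential and the stimulus, to which FFR amplitude was found to be correlated, can be computed as an across-trial phase-locking value. A minimal sketch on synthetic trials (illustrative parameters, not the study's recording setup):

```python
import numpy as np

def phase_coherence(trials, fs, freq):
    """Across-trial phase locking at `freq`: the magnitude of the mean
    unit phasor of each trial's Fourier component. 1 = perfectly
    phase-locked to the stimulus; near 0 = random phase."""
    n = trials.shape[1]
    k = int(round(freq * n / fs))             # FFT bin for `freq`
    comps = np.fft.rfft(trials, axis=1)[:, k]
    return np.abs(np.mean(comps / np.abs(comps)))

fs, f = 8000, 1000.0
t = np.arange(800) / fs
rng = np.random.default_rng(0)

# Trials phase-locked to a 1-kHz tone vs. trials with random phase.
locked = np.array([np.sin(2 * np.pi * f * t) + rng.standard_normal(t.size)
                   for _ in range(100)])
jittered = np.array([np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
                     + rng.standard_normal(t.size) for _ in range(100)])

print(phase_coherence(locked, fs, f))    # near 1
print(phase_coherence(jittered, fs, f))  # near 0
```

In the study's terms, a sustained response built from phase-locked activity yields high coherence of this kind, whereas a transient onset response does not.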
|
36
|
Cortical Correlates of the Auditory Frequency-Following and Onset Responses: EEG and fMRI Evidence. J Neurosci 2017; 37:830-838. [PMID: 28123019 DOI: 10.1523/jneurosci.1265-16.2016] [Citation(s) in RCA: 70] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2016] [Revised: 11/01/2016] [Accepted: 11/06/2016] [Indexed: 11/21/2022] Open
Abstract
The frequency-following response (FFR) is a measure of the brain's periodic sound encoding. It is of increasing importance for studying the human auditory nervous system due to numerous associations with auditory cognition and dysfunction. Although the FFR is widely interpreted as originating from brainstem nuclei, a recent study using MEG suggested that there is also a right-lateralized contribution from the auditory cortex at the fundamental frequency (Coffey et al., 2016b). Our objectives in the present work were to validate and better localize this result using a completely different neuroimaging modality and to document the relationships between the FFR, the onset response, and cortical activity. Using a combination of EEG, fMRI, and diffusion-weighted imaging, we show that activity in the right auditory cortex is related to individual differences in FFR-fundamental frequency (f0) strength, a finding that was replicated with two independent stimulus sets, with and without acoustic energy at the fundamental frequency. We demonstrate a dissociation between this FFR-f0-sensitive response in the right and an area in left auditory cortex that is sensitive to individual differences in the timing of initial response to sound onset. Relationships to timing and their lateralization are supported by parallels in the microstructure of the underlying white matter, implicating a mechanism involving neural conduction efficiency. These data confirm that the FFR has a cortical contribution and suggest ways in which auditory neuroscience may be advanced by connecting early sound representation to measures of higher-level sound processing and cognitive function. SIGNIFICANCE STATEMENT The frequency-following response (FFR) is an EEG signal that is used to explore how the auditory system encodes temporal regularities in sound and is related to differences in auditory function between individuals. 
It is known that brainstem nuclei contribute to the FFR, but recent findings of an additional cortical source are more controversial. Here, we use fMRI to validate and extend the prediction from MEG data of a right auditory cortex contribution to the FFR. We also demonstrate a dissociation between FFR-related cortical activity from that related to the latency of the response to sound onset, which is found in left auditory cortex. The findings provide a clearer picture of cortical processes for analysis of sound features.
|
37
|
Yi HG, Xie Z, Reetzke R, Dimakis AG, Chandrasekaran B. Vowel decoding from single-trial speech-evoked electrophysiological responses: A feature-based machine learning approach. Brain Behav 2017; 7:e00665. [PMID: 28638700 PMCID: PMC5474698 DOI: 10.1002/brb3.665] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/11/2022] Open
Abstract
INTRODUCTION Scalp-recorded electrophysiological responses to complex, periodic auditory signals reflect phase-locked activity from neural ensembles within the auditory system. These responses, referred to as frequency-following responses (FFRs), have been widely utilized to index typical and atypical representation of speech signals in the auditory system. One of the major limitations of the FFR is the low signal-to-noise ratio at the level of single trials. For this reason, analysis relies on averaging across thousands of trials. The ability to examine the quality of single-trial FFRs would allow investigation of trial-by-trial dynamics of the FFR, which the averaging approach makes impossible. METHODS In a novel, data-driven approach, we used machine learning principles to decode information related to the speech signal from single-trial FFRs. FFRs were collected from participants while they listened to two vowels produced by two speakers. Scalp-recorded electrophysiological responses were projected onto a low-dimensional spectral feature space independently derived from the same two vowels produced by 40 speakers, which were not presented to the participants. A novel supervised machine learning classifier was trained to discriminate vowel tokens on a subset of FFRs from each participant, and tested on the remaining subset. RESULTS We demonstrate reliable decoding of speech signals at the level of single trials by decomposing the raw FFR based on independently derived, information-bearing spectral features of the speech signal. CONCLUSIONS Taken together, the ability to extract interpretable features at the level of single trials in a data-driven manner offers uncharted possibilities in the noninvasive assessment of human auditory function.
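A toy version of this feature-based decoding pipeline can be sketched with a nearest-centroid classifier standing in for the study's supervised classifier (which the abstract does not fully specify). Everything below is synthetic and illustrative: simulated single-trial responses to two "vowels", band-averaged spectral features, and a train/test split.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n = 8000, 1024
freqs = np.fft.rfftfreq(n, 1 / fs)

def trials_for(f1, f2, count):
    """Toy single-trial 'FFRs' to a vowel with formants f1/f2:
    weak formant-frequency components buried in noise."""
    t = np.arange(n) / fs
    sig = 0.3 * np.sin(2 * np.pi * f1 * t) + 0.3 * np.sin(2 * np.pi * f2 * t)
    return np.array([sig + rng.standard_normal(n) for _ in range(count)])

# Two "vowels" distinguished by formant frequencies (illustrative values).
X_a = trials_for(700.0, 1200.0, 200)
X_i = trials_for(300.0, 2300.0, 200)

def spectral_features(X):
    # Project each trial onto a coarse spectral feature space:
    # mean FFT magnitude in a few formant-centered bands.
    spec = np.abs(np.fft.rfft(X, axis=1))
    bands = [(250, 350), (650, 750), (1150, 1250), (2250, 2350)]
    return np.stack([spec[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
                     for lo, hi in bands], axis=1)

F_a, F_i = spectral_features(X_a), spectral_features(X_i)
# Train a nearest-centroid classifier on half the trials, test on the rest.
c_a, c_i = F_a[:100].mean(axis=0), F_i[:100].mean(axis=0)
test = np.vstack([F_a[100:], F_i[100:]])
labels = np.array([0] * 100 + [1] * 100)
pred = (np.linalg.norm(test - c_i, axis=1)
        < np.linalg.norm(test - c_a, axis=1)).astype(int)
accuracy = (pred == labels).mean()
print(accuracy)  # well above the 0.5 chance level
```

The key idea the sketch mirrors is that projection onto a small set of information-bearing spectral features makes single trials separable even when the raw waveform SNR is poor.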
Affiliation(s)
- Han G Yi
- Department of Communication Sciences & Disorders, Moody College of Communication, The University of Texas at Austin, Austin, TX, USA
| | - Zilong Xie
- Department of Communication Sciences & Disorders, Moody College of Communication, The University of Texas at Austin, Austin, TX, USA
| | - Rachel Reetzke
- Department of Communication Sciences & Disorders, Moody College of Communication, The University of Texas at Austin, Austin, TX, USA
| | - Alexandros G Dimakis
- Department of Electrical and Computer Engineering, Cockrell School of Engineering, The University of Texas at Austin, Austin, TX, USA
| | - Bharath Chandrasekaran
- Department of Communication Sciences & Disorders, Moody College of Communication, The University of Texas at Austin, Austin, TX, USA; Department of Psychology, College of Liberal Arts, The University of Texas at Austin, Austin, TX, USA; Department of Linguistics, College of Liberal Arts, The University of Texas at Austin, Austin, TX, USA; Institute of Mental Health Research, College of Liberal Arts, The University of Texas at Austin, Austin, TX, USA; Institute for Neuroscience, College of Liberal Arts, The University of Texas at Austin, Austin, TX, USA
|
38
|
Slugocki C, Bosnyak D, Trainor LJ. Simultaneously-evoked auditory potentials (SEAP): A new method for concurrent measurement of cortical and subcortical auditory-evoked activity. Hear Res 2017; 345:30-42. [DOI: 10.1016/j.heares.2016.12.014] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/08/2016] [Revised: 12/07/2016] [Accepted: 12/16/2016] [Indexed: 10/20/2022]
|
39
|
The Janus Face of Auditory Learning: How Life in Sound Shapes Everyday Communication. THE FREQUENCY-FOLLOWING RESPONSE 2017. [DOI: 10.1007/978-3-319-47944-6_6] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
|
40
|
Neurophysiological aspects of brainstem processing of speech stimuli in audiometric-normal geriatric population. The Journal of Laryngology & Otology 2016; 131:239-244. [DOI: 10.1017/s0022215116009841] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Objective: Poor auditory speech perception in geriatric populations is attributable to neural desynchronisation due to structural and degenerative changes of the ageing auditory pathways. The speech-evoked auditory brainstem response may be useful for detecting alterations that cause loss of speech discrimination. Therefore, this study aimed to compare the speech-evoked auditory brainstem response in adult and geriatric populations with normal hearing. Methods: The auditory brainstem responses to click sounds and to a 40 ms speech sound (the Hindi phoneme |da|) were compared in 25 young adults and 25 geriatric people with normal hearing. The latencies and amplitudes of transient peaks representing neural responses to the onset, offset and sustained portions of the speech stimulus, in quiet and noisy conditions, were recorded. Results: The older group had significantly smaller amplitudes and longer latencies for the onset and offset responses to |da| in noisy conditions. Stimulus-to-response times were longer, and the spectral amplitude of the sustained portion of the stimulus was reduced. The overall stimulus level caused significant latency shifts across the entire speech-evoked auditory brainstem response in the older group. Conclusion: The reduction in neural speech processing in older adults suggests diminished subcortical responsiveness to acoustically dynamic spectral cues. However, further investigation of how temporal cues are encoded at the brainstem level, and of their relationship to speech perception, is needed before a routine tool for clinical decision-making can be developed.
|
41
|
Kraus N, Thompson EC, Krizman J, Cook K, White-Schwoch T, LaBella CR. Auditory biological marker of concussion in children. Sci Rep 2016; 6:39009. [PMID: 28005070 PMCID: PMC5178332 DOI: 10.1038/srep39009] [Citation(s) in RCA: 54] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2016] [Accepted: 11/14/2016] [Indexed: 01/16/2023] Open
Abstract
Concussions carry devastating potential for cognitive, neurologic, and socio-emotional disease, but no objective test reliably identifies a concussion and its severity. A variety of neurological insults compromise sound processing, particularly in complex listening environments that place high demands on brain processing. The frequency-following response captures the high computational demands of sound processing with extreme granularity and reliably reveals individual differences. We hypothesize that concussions disrupt these auditory processes, and that the frequency-following response indicates concussion occurrence and severity. Specifically, we hypothesize that concussions disrupt the processing of the fundamental frequency, a key acoustic cue for identifying and tracking sounds and talkers, and, consequently, understanding speech in noise. Here we show that children who sustained a concussion exhibit a signature neural profile. They have worse representation of the fundamental frequency, and smaller and more sluggish neural responses. Neurophysiological responses to the fundamental frequency partially recover to control levels as concussion symptoms abate, suggesting a gain in biological processing following partial recovery. Neural processing of sound correctly identifies 90% of concussion cases and clears 95% of control cases, suggesting this approach has practical potential as a scalable biological marker for sports-related concussion and other types of mild traumatic brain injuries.
Affiliation(s)
- Nina Kraus
- Auditory Neuroscience Laboratory, Northwestern University, Evanston, IL, United States; Department of Communication Sciences, Northwestern University, Evanston, IL, United States; Department of Neurobiology & Physiology, Northwestern University, Evanston, IL, United States; Department of Otolaryngology, Northwestern University's Feinberg School of Medicine, Chicago, IL, United States
- Elaine C Thompson
- Auditory Neuroscience Laboratory, Northwestern University, Evanston, IL, United States; Department of Communication Sciences, Northwestern University, Evanston, IL, United States
- Jennifer Krizman
- Auditory Neuroscience Laboratory, Northwestern University, Evanston, IL, United States; Department of Communication Sciences, Northwestern University, Evanston, IL, United States
- Katherine Cook
- Division of Pediatric Orthopaedic Surgery & Sports Medicine, Ann & Robert H. Lurie Children's Hospital of Chicago, Chicago, IL, United States; Department of Pediatrics, Northwestern University's Feinberg School of Medicine, Chicago, IL, United States
- Travis White-Schwoch
- Auditory Neuroscience Laboratory, Northwestern University, Evanston, IL, United States; Department of Communication Sciences, Northwestern University, Evanston, IL, United States
- Cynthia R LaBella
- Division of Pediatric Orthopaedic Surgery & Sports Medicine, Ann & Robert H. Lurie Children's Hospital of Chicago, Chicago, IL, United States; Department of Pediatrics, Northwestern University's Feinberg School of Medicine, Chicago, IL, United States
42
Leite RA, Magliaro FCL, Raimundo JC, Gândara M, Garbi S, Bento RF, Matas CG. Effect of hearing aids use on speech stimulus decoding through speech-evoked ABR. Braz J Otorhinolaryngol 2016; 84:S1808-8694(16)30236-1. [PMID: 28011120 PMCID: PMC9442878 DOI: 10.1016/j.bjorl.2016.11.002]
Abstract
INTRODUCTION The electrophysiological responses obtained with the complex auditory brainstem response (cABR) provide objective measures of subcortical processing of speech and other complex stimuli. The cABR has also been used to verify plasticity in the subcortical auditory pathway. OBJECTIVE To compare the cABR results of children using hearing aids before and after 9 months of adaptation, and to compare these children's results with those of children with normal hearing. METHODS Fourteen children with normal hearing (Control Group, CG) and 18 children with mild to moderate bilateral sensorineural hearing loss (Study Group, SG), aged 7-12 years, were evaluated. The children underwent pure tone and vocal audiometry, acoustic immittance measurements and speech-evoked ABR at three moments: the initial evaluation (M0), 3 months after the initial evaluation (M3) and 9 months after the initial evaluation (M9); at M0, the children in the study group were not yet using hearing aids. RESULTS Compared with the CG, the SG had a lower median V-A amplitude at M0 and M3, a lower median latency of component V at M9, and a higher median latency of component O at M3 and M9. A reduction in the latency of component A at M9 was observed in the SG. CONCLUSION Children with mild to moderate hearing loss showed speech stimulus processing deficits, chiefly in decoding the transient portion of the stimulus spectrum. The use of hearing aids promoted neuronal plasticity of the central auditory nervous system after an extended period of sensory stimulation.
Affiliation(s)
- Jeziela Cristina Raimundo
- Universidade de São Paulo (USP), Fundação Otorrinolaringologia do Hospital das Clínicas, Ambulatório de Saúde Auditiva Reouvir, São Paulo, SP, Brazil
- Mara Gândara
- Universidade de São Paulo (USP), Fundação Otorrinolaringologia do Hospital das Clínicas, Ambulatório de Saúde Auditiva Reouvir, São Paulo, SP, Brazil
- Sergio Garbi
- Universidade de São Paulo (USP), Fundação Otorrinolaringologia do Hospital das Clínicas, Ambulatório de Saúde Auditiva Reouvir, São Paulo, SP, Brazil
- Ricardo Ferreira Bento
- Universidade de São Paulo (USP), Fundação Otorrinolaringologia do Hospital das Clínicas, Ambulatório de Saúde Auditiva Reouvir, São Paulo, SP, Brazil
- Carla Gentile Matas
- Universidade de São Paulo (USP), Curso de Fonoaudiologia, São Paulo, SP, Brazil
43
Encoding of speech sounds at auditory brainstem level in good and poor hearing aid performers. Braz J Otorhinolaryngol 2016; 83:512-522. [PMID: 27516129 PMCID: PMC9444769 DOI: 10.1016/j.bjorl.2016.06.004]
Abstract
Introduction Hearing aids are prescribed to alleviate loss of audibility. About 31% of hearing aid users reportedly reject their own hearing aid because of annoyance towards background noise. The cause of dissatisfaction can lie anywhere from the hearing aid microphone to the neurons along the auditory pathway. Objectives To measure the spectra of hearing aid output at the ear canal and the frequency following response recorded at the auditory brainstem in individuals with hearing impairment. Methods Sixty participants with moderate sensorineural hearing impairment, aged 15 to 65 years, took part. Each participant was classified as either a good or a poor hearing aid performer based on the acceptable noise level measure. The stimuli /da/ and /si/ were presented through a loudspeaker at 65 dB SPL. At the ear canal, spectra were measured in unaided and aided conditions. At the auditory brainstem, frequency following responses were recorded to the same stimuli. Results The spectrum measured in each condition at the ear canal was the same in good and poor hearing aid performers. At the brainstem level, good hearing aid performers showed better F0 encoding, and their F0 and F1 energies were significantly higher than those of poor hearing aid performers. Thus, although the hearing aid spectra were almost the same in the two groups, subtle physiological differences existed at the auditory brainstem. Conclusion The results suggest that neural encoding of speech sounds at the brainstem level may be mediated differently in good hearing aid performers than in poor performers. Subtle physiological differences at the auditory brainstem are thus evident between people who are willing to accept noise and those who are not.
44
Gilles A, Schlee W, Rabau S, Wouters K, Fransen E, Van de Heyning P. Decreased Speech-In-Noise Understanding in Young Adults with Tinnitus. Front Neurosci 2016; 10:288. [PMID: 27445661 PMCID: PMC4923253 DOI: 10.3389/fnins.2016.00288]
Abstract
OBJECTIVES Young people are often exposed to high music levels, which puts them at greater risk of developing noise-induced symptoms such as hearing loss, hyperacusis, and tinnitus, of which tinnitus is the symptom most often perceived by young adults. Although subclinical neural damage has been demonstrated in animal experiments, the human correlate remains under debate. Controversy exists on the underlying condition of young adults with normal hearing thresholds and noise-induced tinnitus (NIT) due to leisure noise. The present study aimed to assess differences in audiological characteristics between noise-exposed adolescents with and without NIT. METHODS A group of 87 young adults with a history of recreational noise exposure was investigated using the following tests: otoscopy, impedance measurements, pure-tone audiometry including high frequencies, transient and distortion product otoacoustic emissions, speech-in-noise testing with continuous and modulated noise (amplitude-modulated at 15 Hz), auditory brainstem responses (ABR), and questionnaires. Nineteen students reported NIT due to recreational noise exposure, and their measures were compared to those of the non-tinnitus subjects. RESULTS No significant differences between tinnitus and non-tinnitus subjects were found for hearing thresholds, otoacoustic emissions, or ABR results. Tinnitus subjects had significantly worse speech reception in noise than non-tinnitus subjects for sentences embedded in steady-state noise (mean speech reception threshold (SRT) scores, respectively -5.77 and -6.90 dB SNR; p = 0.025) as well as for sentences embedded in 15 Hz AM noise (mean SRT scores, respectively -13.04 and -15.17 dB SNR; p = 0.013). In both groups speech reception was significantly better in 15 Hz AM noise than in the steady-state noise condition (p < 0.001). However, the modulation masking release was not affected by the presence of NIT.
CONCLUSIONS Young adults with and without NIT did not differ on audiometry, OAE, or ABR. However, tinnitus patients showed decreased speech-in-noise reception. The results are discussed in the light of previous findings, suggesting that NIT may occur in the absence of measurable peripheral damage, as reflected in the speech-in-noise deficits of tinnitus subjects.
Affiliation(s)
- Annick Gilles
- University Department of Otorhinolaryngology and Head and Neck Surgery, Antwerp University Hospital, Edegem, Belgium; Department of Translational Neurosciences, Faculty of Medicine and Health Sciences, University of Antwerp, Wilrijk, Belgium; Department of Human and Social Welfare, University College Ghent, Ghent, Belgium
- Winny Schlee
- University Department of Psychology, University of Konstanz, Konstanz, Germany
- Sarah Rabau
- University Department of Otorhinolaryngology and Head and Neck Surgery, Antwerp University Hospital, Edegem, Belgium; Department of Translational Neurosciences, Faculty of Medicine and Health Sciences, University of Antwerp, Wilrijk, Belgium
- Kristien Wouters
- Department of Translational Neurosciences, Faculty of Medicine and Health Sciences, University of Antwerp, Wilrijk, Belgium; University Department of Scientific Coordination and Biostatistics, Antwerp University Hospital, Edegem, Belgium
- Erik Fransen
- Department of Medical Genetics, Faculty of Medicine and Health Sciences, University of Antwerp, Wilrijk, Belgium
- Paul Van de Heyning
- University Department of Otorhinolaryngology and Head and Neck Surgery, Antwerp University Hospital, Edegem, Belgium; Department of Translational Neurosciences, Faculty of Medicine and Health Sciences, University of Antwerp, Wilrijk, Belgium
45
Kraus N, White-Schwoch T. Neurobiology of Everyday Communication: What Have We Learned From Music? Neuroscientist 2016; 23:287-298. [PMID: 27284021 DOI: 10.1177/1073858416653593]
Abstract
Sound is an invisible but powerful force that is central to everyday life. Studies in the neurobiology of everyday communication seek to elucidate the neural mechanisms underlying sound processing, their stability, their plasticity, and their links to language abilities and disabilities. This sound processing lies at the nexus of cognitive, sensorimotor, and reward networks. Music provides a powerful experimental model to understand these biological foundations of communication, especially with regard to auditory learning. We review studies of music training that employ a biological approach to reveal the integrity of sound processing in the brain, the bearing these mechanisms have on everyday communication, and how these processes are shaped by experience. Together, these experiments illustrate that music works in synergistic partnerships with language skills and the ability to make sense of speech in complex, everyday listening environments. The active, repeated engagement with sound demanded by music making augments the neural processing of speech, eventually cascading to listening and language. This generalization from music to everyday communication illustrates both that these auditory brain mechanisms have a profound potential for plasticity and that sound processing is biologically intertwined with listening and language skills. A new wave of studies has pushed neuroscience beyond the traditional laboratory by revealing the effects of community music training in underserved populations. These community-based studies reinforce laboratory work and highlight how the auditory system achieves a remarkable balance between stability and flexibility in processing speech. Moreover, these community studies have the potential to inform health care, education, and social policy by lending a neurobiological perspective to their efficacy.
Affiliation(s)
- Nina Kraus
- Auditory Neuroscience Laboratory (www.brainvolts.northwestern.edu) and Department of Communication Sciences, Northwestern University, Evanston, IL, USA; Department of Neurobiology & Physiology and Department of Otolaryngology, Northwestern University, Evanston, IL, USA
- Travis White-Schwoch
- Auditory Neuroscience Laboratory (www.brainvolts.northwestern.edu) and Department of Communication Sciences, Northwestern University, Evanston, IL, USA
46
Coffey EBJ, Colagrosso EMG, Lehmann A, Schönwiesner M, Zatorre RJ. Individual Differences in the Frequency-Following Response: Relation to Pitch Perception. PLoS One 2016; 11:e0152374. [PMID: 27015271 PMCID: PMC4807774 DOI: 10.1371/journal.pone.0152374]
Abstract
The scalp-recorded frequency-following response (FFR) is a measure of the auditory nervous system's representation of periodic sound, and may serve as a marker of training-related enhancements, behavioural deficits, and clinical conditions. However, FFRs of healthy normal subjects show considerable variability that remains unexplained. We investigated whether the FFR representation of the frequency content of a complex tone is related to the perception of the pitch of the fundamental frequency. The strength of the fundamental frequency in the FFR of 39 people with normal hearing was assessed when they listened to complex tones that either included or lacked energy at the fundamental frequency. We found that the strength of the fundamental representation of the missing fundamental tone complex correlated significantly with people's general tendency to perceive the pitch of the tone as either matching the frequency of the spectral components that were present, or that of the missing fundamental. Although at a group level the fundamental representation in the FFR did not appear to be affected by the presence or absence of energy at the same frequency in the stimulus, the two conditions were statistically distinguishable for some subjects individually, indicating that the neural representation is not linearly dependent on the stimulus content. In a second experiment using a within-subjects paradigm, we showed that subjects can learn to reversibly select between either fundamental or spectral perception, and that this is accompanied both by changes to the fundamental representation in the FFR and to cortical-based gamma activity. These results suggest that both fundamental and spectral representations coexist, and are available for later auditory processing stages, the requirements of which may also influence their relative strength and thus modulate FFR variability. The data also highlight voluntary mode perception as a new paradigm with which to study top-down vs bottom-up mechanisms that support the emerging view of the FFR as the outcome of integrated processing in the entire auditory system.
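The "strength of the fundamental representation" studied here is typically quantified as the spectral magnitude of the recorded response at the fundamental frequency (F0). The following is a minimal illustrative sketch of that idea, not the authors' actual analysis pipeline: the synthetic "responses", sampling rate, and the `f0_magnitude` helper are all invented for the example, which contrasts a harmonic complex containing F0 with a missing-fundamental complex.

```python
import numpy as np

def f0_magnitude(signal, fs, f0):
    """Windowed FFT magnitude at the frequency bin nearest the fundamental."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - f0))]

# Synthetic "responses": one containing the 100 Hz fundamental plus
# harmonics 2-4, one with the fundamental missing (harmonics 2-4 only).
fs, f0 = 2000.0, 100.0
t = np.arange(0, 0.5, 1 / fs)
with_f0 = sum(np.sin(2 * np.pi * f0 * k * t) for k in range(1, 5))
missing_f0 = sum(np.sin(2 * np.pi * f0 * k * t) for k in range(2, 5))

# The fundamental representation is far stronger when F0 energy is present.
print(f0_magnitude(with_f0, fs, f0) > f0_magnitude(missing_f0, fs, f0))  # True
```

The study's point is that in the neural response, unlike in this linear sketch, energy at F0 can appear even for the missing-fundamental stimulus, which is why the comparison is informative.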
Affiliation(s)
- Emily B. J. Coffey
- Montreal Neurological Institute, McGill University, Montreal, Canada
- Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Canada
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, Canada
- Alexandre Lehmann
- Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Canada
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, Canada
- Department of Psychology, University of Montreal, Montreal, Canada
- Department of Otolaryngology Head & Neck Surgery, McGill University, Montreal, Canada
- Marc Schönwiesner
- Montreal Neurological Institute, McGill University, Montreal, Canada
- Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Canada
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, Canada
- Department of Psychology, University of Montreal, Montreal, Canada
- Robert J. Zatorre
- Montreal Neurological Institute, McGill University, Montreal, Canada
- Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Canada
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, Canada
47
Cortical contributions to the auditory frequency-following response revealed by MEG. Nat Commun 2016; 7:11070. [PMID: 27009409 PMCID: PMC4820836 DOI: 10.1038/ncomms11070]
Abstract
The auditory frequency-following response (FFR) to complex periodic sounds is used to study the subcortical auditory system, and has been proposed as a biomarker for disorders that feature abnormal sound processing. Despite its value in fundamental and clinical research, the neural origins of the FFR are unclear. Using magnetoencephalography, we observe a strong, right-asymmetric contribution to the FFR from the human auditory cortex at the fundamental frequency of the stimulus, in addition to signal from cochlear nucleus, inferior colliculus and medial geniculate. This finding is highly relevant for our understanding of plasticity and pathology in the auditory system, as well as higher-level cognition such as speech and music processing. It suggests that previous interpretations of the FFR may need re-examination using methods that allow for source separation. Auditory brainstem response (ABR) is used to study temporal encoding of auditory information in music and language. This study utilizes magnetoencephalography to localize both cortical and subcortical origins of the sustained frequency following response (FFR), the ABR component that encodes the periodicity of sound.
48
Young KS, Parsons CE, Jegindoe Elmholdt EM, Woolrich MW, van Hartevelt TJ, Stevner ABA, Stein A, Kringelbach ML. Evidence for a Caregiving Instinct: Rapid Differentiation of Infant from Adult Vocalizations Using Magnetoencephalography. Cereb Cortex 2016; 26:1309-1321. [PMID: 26656998 PMCID: PMC4737615 DOI: 10.1093/cercor/bhv306]
Abstract
Crying is the most salient vocal signal of distress. The cries of a newborn infant alert adult listeners and often elicit caregiving behavior. For the parent, rapid responding to an infant in distress is an adaptive behavior, functioning to ensure offspring survival. The ability to react rapidly requires quick recognition and evaluation of stimuli followed by a co-ordinated motor response. Previous neuroimaging research has demonstrated early specialized activity in response to infant faces. Using magnetoencephalography, we found similarly early (100-200 ms) differences in neural responses to infant and adult cry vocalizations in auditory, emotional, and motor cortical brain regions. We propose that this early differential activity may help to rapidly identify infant cries and engage affective and motor neural circuitry to promote adaptive behavioral responding, before conscious awareness. These differences were observed in adults who were not parents, perhaps indicative of a universal brain-based "caregiving instinct."
Affiliation(s)
- Katherine S Young
- Section of Child and Adolescent Psychiatry, Department of Psychiatry
- Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Department of Psychology
- Christine E Parsons
- Section of Child and Adolescent Psychiatry, Department of Psychiatry
- Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Else-Marie Jegindoe Elmholdt
- Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Mark W Woolrich
- Oxford Centre for Human Brain Activity (OHBA), University of Oxford, Oxford, UK
- Tim J van Hartevelt
- Section of Child and Adolescent Psychiatry, Department of Psychiatry
- Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Angus B A Stevner
- Section of Child and Adolescent Psychiatry, Department of Psychiatry
- Oxford Centre for Human Brain Activity (OHBA), University of Oxford, Oxford, UK
- Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Alan Stein
- Section of Child and Adolescent Psychiatry, Department of Psychiatry
- Wits/MRC Rural Public Health and Health Transitions Research Unit (Agincourt), School of Public Health, University of Witwatersrand, Johannesburg, South Africa
- Morten L Kringelbach
- Section of Child and Adolescent Psychiatry, Department of Psychiatry
- Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Semel Institute for Neuroscience and Human Behavior, University of California, Los Angeles, CA, USA
49
Jeng FC, Lin CD, Chou MS, Hollister GR, Sabol JT, Mayhugh GN, Wang TC, Wang CY. Development of Subcortical Pitch Representation in Three-Month-Old Chinese Infants. Percept Mot Skills 2016; 122:123-135. [PMID: 27420311 DOI: 10.1177/0031512516631054]
Abstract
This study investigated the development of subcortical pitch processing, as reflected by the scalp-recorded frequency-following response, during early infancy. Thirteen Chinese infants who were born and raised in Mandarin-speaking households were recruited to partake in this study. Through a prospective-longitudinal study design, infants were tested twice: at 1-3 days after birth and at three months of age. A set of four contrastive Mandarin pitch contours were used to elicit frequency-following responses. Frequency Error and Pitch Strength were derived to represent the accuracy and magnitude of the elicited responses. Paired-samples t tests were conducted and demonstrated a significant decrease in Frequency Error and a significant increase in Pitch Strength at three months of age compared to 1-3 days after birth. Results indicated the developmental trajectory of subcortical pitch processing during the first three months of life.
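Metrics like the Frequency Error and Pitch Strength used here are commonly derived from short-time autocorrelation of the response: the lag of the autocorrelation peak yields a pitch estimate (its deviation from the stimulus pitch gives a frequency error), and the normalized peak height gives a periodicity strength between 0 and 1. The sketch below illustrates that general idea only; the function name, test signals, and lag-search range are assumptions for the example, not the study's actual procedure.

```python
import numpy as np

def pitch_metrics(signal, fs, f_expected, fmin=50.0, fmax=500.0):
    """Autocorrelation-based pitch estimate. Returns (frequency_error_hz,
    pitch_strength), where strength is the normalized autocorrelation peak
    in the plausible pitch-lag range (near 0 = noise, near 1 = periodic)."""
    sig = signal - signal.mean()
    ac = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    ac /= ac[0]                      # normalize so lag-0 correlation is 1
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + np.argmax(ac[lo:hi])  # best period within the pitch range
    f_est = fs / lag
    return abs(f_est - f_expected), ac[lag]

fs = 8000.0
t = np.arange(0, 0.25, 1 / fs)
tone = np.sin(2 * np.pi * 120.0 * t)  # clean periodic signal at 120 Hz
noisy = tone + 2.0 * np.random.default_rng(0).normal(size=t.size)

err_clean, ps_clean = pitch_metrics(tone, fs, 120.0)
err_noisy, ps_noisy = pitch_metrics(noisy, fs, 120.0)

# A degraded (noisier) response yields lower pitch strength, mirroring the
# weaker pitch representation reported at birth relative to three months.
print(ps_clean > ps_noisy)  # True
```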
Affiliation(s)
- Chia-Der Lin
- Department of Otolaryngology-HNS, China Medical University Hospital, Taiwan; School of Medicine, China Medical University, Taiwan
- Meng-Shih Chou
- Department of Otolaryngology-HNS, China Medical University Hospital, Taiwan; School of Medicine, China Medical University, Taiwan
- John T Sabol
- Communication Sciences and Disorders, Ohio University, USA
- Tang-Chuan Wang
- Department of Otolaryngology-HNS, China Medical University Hospital, Taiwan; School of Medicine, China Medical University, Taiwan
- Ching-Yuan Wang
- Department of Otolaryngology-HNS, China Medical University Hospital, Taiwan; School of Medicine, China Medical University, Taiwan
50
Gabr TA, Darwish ME. Speech auditory brainstem response audiometry in children with specific language impairment. Hearing Balance and Communication 2015. [DOI: 10.3109/21695717.2016.1092715]