1
Borjigin A, Bharadwaj HM. Individual Differences Elucidate the Perceptual Benefits Associated with Robust Temporal Fine-Structure Processing. bioRxiv 2024:2023.09.20.558670. [PMID: 37790457 PMCID: PMC10542537 DOI: 10.1101/2023.09.20.558670]
Abstract
The auditory system is unique among sensory systems in its ability to phase lock to and precisely follow very fast cycle-by-cycle fluctuations in the phase of sound-driven cochlear vibrations. Yet, the perceptual role of this temporal fine structure (TFS) code is debated. This fundamental gap is attributable to our inability to experimentally manipulate TFS cues without altering other perceptually relevant cues. Here, we circumvented this limitation by leveraging individual differences across 200 participants to systematically compare variations in TFS sensitivity to performance in a range of speech perception tasks. TFS sensitivity was assessed through detection of interaural time/phase differences, while speech perception was evaluated by word identification under noise interference. Results suggest that greater TFS sensitivity is not associated with greater masking release from fundamental-frequency or spatial cues, but appears to contribute to resilience against the effects of reverberation. We also found that greater TFS sensitivity is associated with faster response times, indicating reduced listening effort. These findings highlight the perceptual significance of TFS coding for everyday hearing.
Affiliation(s)
- Agudemu Borjigin
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN 47907, USA
- Waisman Center, University of Wisconsin - Madison, Madison, WI 53705, USA
- Hari M. Bharadwaj
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN 47907, USA
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN 47907, USA
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA 15213, USA
2
Fujihira H, Yamagishi S, Furukawa S, Kashino M. Auditory brainstem response to paired clicks as a candidate marker of cochlear synaptopathy in humans. Clin Neurophysiol 2024; 165:44-54. [PMID: 38959535 DOI: 10.1016/j.clinph.2024.06.005]
Abstract
OBJECTIVE This study aimed to evaluate whether the auditory brainstem response (ABR) to a paired-click stimulation paradigm could serve as a tool for detecting cochlear synaptopathy (CS). METHODS ABRs to single clicks and to paired clicks with various inter-click intervals (ICIs), along with scores for word intelligibility in degraded listening conditions, were obtained from 57 adults with normal hearing. The wave I peak amplitude and the root mean square value of the post-wave I response within a window delayed relative to the wave I peak (referred to as RMSpost-w1) were calculated for the single- and second-click responses. RESULTS The wave I peak amplitudes did not correlate with age (except for the second-click responses at an ICI of 7 ms) or with the word intelligibility scores. However, we found that the RMSpost-w1 values for the second-click responses significantly decreased with increasing age. Moreover, the RMSpost-w1 values for the second-click responses at an ICI of 5 ms correlated significantly with the scores for word intelligibility in degraded listening conditions. CONCLUSIONS The magnitude of the post-wave I response to the second click could serve as a tool for detecting CS in humans. SIGNIFICANCE Our findings shed new light on analytical methods of ABR for quantifying CS.
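The two quantities above can be sketched as follows. This is a toy illustration, not the authors' analysis code: the window limits (`search_ms`, `rms_delay_ms`) and the synthetic waveform are illustrative assumptions.

```python
import numpy as np

def wave1_peak_and_rms_post(resp, fs, search_ms=(1.0, 2.5), rms_delay_ms=(1.0, 5.0)):
    """Wave I peak amplitude and RMS of the post-wave I response.

    resp: 1-D ABR waveform, time-locked to click onset.
    fs: sampling rate in Hz.
    search_ms: window (ms after onset) in which to look for the wave I peak.
    rms_delay_ms: window (ms *relative to the wave I peak*) over which the
        RMSpost-w1 value is computed. All window values are placeholders,
        not those used in the study.
    """
    t = np.arange(len(resp)) / fs * 1e3                 # time axis in ms
    sel = (t >= search_ms[0]) & (t <= search_ms[1])
    i_peak = np.flatnonzero(sel)[np.argmax(resp[sel])]  # index of wave I peak
    peak_amp = float(resp[i_peak])
    t_peak = t[i_peak]
    post = (t >= t_peak + rms_delay_ms[0]) & (t <= t_peak + rms_delay_ms[1])
    rms_post = float(np.sqrt(np.mean(resp[post] ** 2)))
    return peak_amp, rms_post

# Toy waveform: a "wave I" bump at 1.5 ms riding on a slow oscillation.
fs = 20000.0
t_ms = np.arange(0, 10e-3, 1 / fs) * 1e3
resp = np.exp(-((t_ms - 1.5) ** 2) / 0.02) + 0.2 * np.sin(2 * np.pi * 0.5 * t_ms)
peak, rms_post = wave1_peak_and_rms_post(resp, fs)
```

The peak search is restricted to an onset window, while the RMS window slides with the detected peak, mirroring the "delayed from the wave I peak" definition in the abstract.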
Affiliation(s)
- Haruna Fujihira
- NTT Communication Science Laboratories, Atsugi, Kanagawa, Japan; Department of Informatics, Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
- Shigeto Furukawa
- NTT Communication Science Laboratories, Atsugi, Kanagawa, Japan; Graduate School of Public Health, Shizuoka Graduate University of Public Health, Shizuoka, Japan; Speech-Language-Hearing Center, Shizuoka General Hospital, Shizuoka, Japan
- Makio Kashino
- NTT Communication Science Laboratories, Atsugi, Kanagawa, Japan
3
Bhatt IS, Garay JAR, Bhagavan SG, Ingalls V, Dias R, Torkamani A. A genome-wide association study reveals a polygenic architecture of speech-in-noise deficits in individuals with self-reported normal hearing. Sci Rep 2024; 14:13089. [PMID: 38849415 PMCID: PMC11161523 DOI: 10.1038/s41598-024-63972-2]
Abstract
Speech-in-noise (SIN) perception is a primary complaint of individuals with audiometric hearing loss. SIN performance varies drastically, even among individuals with normal hearing. The present genome-wide association study (GWAS) investigated the genetic basis of SIN deficits in individuals with self-reported normal hearing in quiet situations. GWAS was performed on 279,911 individuals from the UK Biobank cohort, 58,847 of whom reported SIN deficits despite reporting normal hearing in quiet. GWAS identified 996 single-nucleotide polymorphisms (SNPs) achieving genome-wide significance (p < 5×10⁻⁸) across four genomic loci. 720 SNPs across 21 loci achieved suggestive significance (p < 10⁻⁶). GWAS signals were enriched in brain tissues, such as the anterior cingulate cortex, dorsolateral prefrontal cortex, entorhinal cortex, frontal cortex, hippocampus, and inferior temporal cortex. Cochlear cell types revealed no significant association with SIN deficits. SIN deficits were associated with various health traits, including neuropsychiatric, sensory, cognitive, metabolic, cardiovascular, and inflammatory conditions. A replication analysis was conducted on 242 healthy young adults. Self-reported speech perception, hearing thresholds (0.25-16 kHz), and distortion product otoacoustic emissions (1-16 kHz) were utilized for the replication analysis. 73 SNPs were replicated with the self-reported speech perception measure. 211 SNPs were replicated with at least one audiological measure and 66 with at least two. 12 SNPs near or within MAPT, GRM3, and HLA-DQA1 were replicated for all audiological measures. The present study highlights a polygenic architecture underlying SIN deficits in individuals with self-reported normal hearing.
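The two significance tiers above follow the conventional GWAS thresholds (genome-wide p < 5×10⁻⁸, suggestive p < 10⁻⁶). A minimal sketch of how SNP p-values might be binned into those tiers; the `classify_snps` helper and the toy p-values are hypothetical, not the study's data:

```python
import numpy as np

GENOME_WIDE = 5e-8   # conventional genome-wide significance threshold
SUGGESTIVE = 1e-6    # conventional suggestive threshold

def classify_snps(pvals):
    """Split SNP p-values into genome-wide-significant and
    (non-overlapping) suggestive boolean masks."""
    pvals = np.asarray(pvals, dtype=float)
    significant = pvals < GENOME_WIDE
    suggestive = (pvals >= GENOME_WIDE) & (pvals < SUGGESTIVE)
    return significant, suggestive

# Toy p-values (illustrative only):
p = [3e-9, 4e-8, 2e-7, 5e-6, 0.01]
sig, sug = classify_snps(p)
```

Whether "suggestive" is reported as inclusive or exclusive of the genome-wide hits varies between papers; the abstract is ambiguous on this point, and the sketch uses the exclusive convention.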
Affiliation(s)
- Ishan Sunilkumar Bhatt
- Department of Communication Sciences and Disorders, University of Iowa, 250 Hawkins Dr, Iowa City, IA, 52242, USA.
- Juan Antonio Raygoza Garay
- Department of Communication Sciences and Disorders, University of Iowa, 250 Hawkins Dr, Iowa City, IA, 52242, USA
- Holden Comprehensive Cancer Center, University of Iowa, Iowa City, IA, 52242, USA
- Srividya Grama Bhagavan
- Department of Communication Sciences and Disorders, University of Iowa, 250 Hawkins Dr, Iowa City, IA, 52242, USA
- Valerie Ingalls
- Department of Communication Sciences and Disorders, University of Iowa, 250 Hawkins Dr, Iowa City, IA, 52242, USA
- Raquel Dias
- Department of Microbiology and Cell Science, University of Florida, Gainesville, FL, 32608, USA
- Ali Torkamani
- Department of Integrative Structural and Computational Biology, Scripps Research Institute, La Jolla, CA, 92037, USA
4
Mukhopadhyay M, McHaney JR, Chandrasekaran B, Sarkar A. Bayesian Semiparametric Longitudinal Inverse-Probit Mixed Models for Category Learning. Psychometrika 2024; 89:461-485. [PMID: 38374497 DOI: 10.1007/s11336-024-09947-8]
Abstract
Understanding how the adult human brain learns novel categories is an important problem in neuroscience. Drift-diffusion models are popular in such contexts for their ability to mimic the underlying neural mechanisms. One such model for gradual longitudinal learning was recently developed in Paulon et al. (J Am Stat Assoc 116:1114-1127, 2021). In practice, category response accuracies are often the only reliable measure recorded by behavioral scientists to describe human learning. To our knowledge, however, drift-diffusion models for such scenarios have never been considered in the literature before. To address this gap, in this article, we build carefully on Paulon et al. (2021), but now with latent response times integrated out, to derive a novel biologically interpretable class of 'inverse-probit' categorical probability models for observed categories alone. This new marginal model, however, presents significant identifiability and inferential challenges not encountered originally for the joint model in Paulon et al. (2021). We address these new challenges using a novel projection-based approach with a symmetry-preserving identifiability constraint that allows us to work with conjugate priors in an unconstrained space. We adapt the model for group and individual-level inference in longitudinal settings. Building again on the model's latent variable representation, we design an efficient Markov chain Monte Carlo algorithm for posterior computation. We evaluate the empirical performance of the method through simulation experiments. The practical efficacy of the method is illustrated in applications to longitudinal tone learning studies.
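The racing-accumulator intuition behind drift-diffusion category models can be shown with a toy Monte Carlo simulation: each candidate category has its own noisy evidence accumulator, and the first to reach threshold determines the response. This is not the authors' inverse-probit model (which integrates the response times out analytically); all parameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def race_trial(drifts, threshold=2.0, dt=0.01, sigma=1.0, max_steps=10000):
    """One trial of a racing diffusion decision.

    drifts: evidence accumulation rate per category (illustrative units).
    Returns the index of the first accumulator to cross `threshold`.
    """
    x = np.zeros(len(drifts))
    mu = np.asarray(drifts, dtype=float)
    for _ in range(max_steps):
        # Euler step of a drift-diffusion process for every accumulator.
        x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(len(mu))
        if np.any(x >= threshold):
            return int(np.argmax(x))
    return int(np.argmax(x))  # fall back to the leader if nothing crossed

# Accuracy when category 0 is correct and carries the largest drift:
trials = [race_trial([1.5, 0.2, 0.2]) for _ in range(200)]
accuracy = float(np.mean([c == 0 for c in trials]))
```

Marginalizing over the (latent) crossing times of such a race is what yields a categorical probability model over responses alone, which is the modeling problem the article addresses.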
Affiliation(s)
- Minerva Mukhopadhyay
- Department of Mathematics and Statistics, Indian Institute of Technology, Kanpur, 208016, Uttar Pradesh, India
- Jacie R McHaney
- Department of Communication Sciences and Disorders, Northwestern University, 70 Arts Circle Drive, Evanston, IL, 60208, USA
- Bharath Chandrasekaran
- Department of Communication Sciences and Disorders, Northwestern University, 70 Arts Circle Drive, Evanston, IL, 60208, USA
- Abhra Sarkar
- Department of Statistics and Data Sciences, University of Texas at Austin, 105 East 24th Street D9800, Austin, TX, 78712, USA.
5
McFarlane KA, Sanchez JT. Effects of Temporal Processing on Speech-in-Noise Perception in Middle-Aged Adults. Biology 2024; 13:371. [PMID: 38927251 PMCID: PMC11200514 DOI: 10.3390/biology13060371]
Abstract
Auditory temporal processing is a vital component of auditory stream segregation, the process by which complex sounds are separated and organized into perceptually meaningful objects. Temporal processing can degrade prior to hearing loss and is suggested to be a contributing factor in the speech-in-noise difficulties of normal-hearing listeners. The current study tested this hypothesis in middle-aged adults, an under-investigated cohort despite being the age group in which speech-in-noise difficulties are first reported. In 76 participants, three mechanisms of temporal processing were measured: peripheral auditory nerve function using electrocochleography, subcortical encoding of periodic speech cues (i.e., fundamental frequency; F0) using the frequency-following response, and binaural sensitivity to temporal fine structure (TFS) using a dichotic frequency modulation detection task. Two measures of speech-in-noise perception were administered to explore how contributions of temporal processing may be mediated by the different sensory demands of each speech perception task. This study supported the hypothesis that temporal coding deficits contribute to speech-in-noise difficulties in middle-aged listeners. Poorer speech-in-noise perception was associated with weaker subcortical F0 encoding and with poorer binaural TFS sensitivity, but in different contexts, highlighting that diverse aspects of temporal processing are differentially utilized depending on speech-in-noise task characteristics.
Affiliation(s)
- Kailyn A. McFarlane
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL 60208, USA;
- Jason Tait Sanchez
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL 60208, USA;
- Knowles Hearing Center, Northwestern University, Evanston, IL 60208, USA
- Department of Neurobiology, Northwestern University, Evanston, IL 60208, USA
6
Clayton KK, Stecyk KS, Guo AA, Chambers AR, Chen K, Hancock KE, Polley DB. Sound elicits stereotyped facial movements that provide a sensitive index of hearing abilities in mice. Curr Biol 2024; 34:1605-1620.e5. [PMID: 38492568 PMCID: PMC11043000 DOI: 10.1016/j.cub.2024.02.057]
Abstract
Sound elicits rapid movements of muscles in the face, ears, and eyes that protect the body from injury and trigger brain-wide internal state changes. Here, we performed quantitative facial videography from mice resting atop a piezoelectric force plate and observed that broadband sounds elicited rapid and stereotyped facial twitches. Facial motion energy (FME) adjacent to the whisker array was 30 dB more sensitive than the acoustic startle reflex and offered greater inter-trial and inter-animal reliability than sound-evoked pupil dilations or movement of other facial and body regions. FME tracked the low-frequency envelope of broadband sounds, providing a means to study behavioral discrimination of complex auditory stimuli, such as speech phonemes in noise. Approximately 25% of layer 5-6 units in the auditory cortex (ACtx) exhibited firing rate changes during facial movements. However, FME facilitation during ACtx photoinhibition indicated that sound-evoked facial movements were mediated by a midbrain pathway and modulated by descending corticofugal input. FME and auditory brainstem response (ABR) thresholds were closely aligned after noise-induced sensorineural hearing loss, yet FME growth slopes were disproportionately steep at spared frequencies, reflecting a central plasticity that matched commensurate changes in ABR wave 4. Sound-evoked facial movements were also hypersensitive in Ptchd1 knockout mice, highlighting the use of FME for identifying sensory hyper-reactivity phenotypes after adult-onset hyperacusis and inherited deficiencies in autism risk genes. These findings present a sensitive and integrative measure of hearing while also highlighting that even low-intensity broadband sounds can elicit a complex mixture of auditory, motor, and reafferent somatosensory neural activity.
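Facial motion energy (FME) is, at its core, a measure of frame-to-frame image change in a facial region of interest. A minimal proxy, assuming simple absolute frame differencing; the actual videography pipeline, region selection, and calibration in the study are more involved, and the toy "movie" below is fabricated for illustration:

```python
import numpy as np

def motion_energy(frames):
    """Motion-energy proxy: mean absolute pixel change between
    consecutive video frames.

    frames: array of shape (n_frames, height, width).
    Returns one value per frame transition (length n_frames - 1).
    """
    frames = np.asarray(frames, dtype=float)
    return np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))

# Toy movie: static frames, with one frame where the image shifts
# sideways (a crude stand-in for a facial twitch).
rng = np.random.default_rng(1)
base = rng.random((8, 8))
movie = np.stack([base] * 6)
movie[3] = np.roll(base, 1, axis=1)   # the "twitch" frame
fme = motion_energy(movie)
```

A transient movement shows up as energy on the transitions into and out of the displaced frame, while static segments contribute nothing; sound-evoked FME is then read out time-locked to stimulus onset.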
Affiliation(s)
- Kameron K Clayton
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA 02114, USA; Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, MA 02114, USA.
- Kamryn S Stecyk
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA 02114, USA
- Anna A Guo
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA 02114, USA
- Anna R Chambers
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA 02114, USA; Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, MA 02114, USA
- Ke Chen
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA 02114, USA; Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, MA 02114, USA
- Kenneth E Hancock
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA 02114, USA; Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, MA 02114, USA
- Daniel B Polley
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA 02114, USA; Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, MA 02114, USA
7
Liu J, Stohl J, Overath T. Hidden hearing loss: Fifteen years at a glance. Hear Res 2024; 443:108967. [PMID: 38335624 DOI: 10.1016/j.heares.2024.108967]
Abstract
Hearing loss affects approximately 18% of the population worldwide. Hearing difficulties in noisy environments without accompanying audiometric threshold shifts likely affect an even larger percentage of the global population. One of the potential causes of hidden hearing loss is cochlear synaptopathy, the loss of synapses between inner hair cells (IHC) and auditory nerve fibers (ANF). These synapses are the most vulnerable structures in the cochlea to noise exposure or aging. The loss of synapses causes auditory deafferentation, i.e., the loss of auditory afferent information, whose downstream effect is the loss of information that is sent to higher-order auditory processing stages. Understanding the physiological and perceptual effects of this early auditory deafferentation might inform interventions to prevent later, more severe hearing loss. In the past decade, a large body of work has been devoted to better understand hidden hearing loss, including the causes of hidden hearing loss, their corresponding impact on the auditory pathway, and the use of auditory physiological measures for clinical diagnosis of auditory deafferentation. This review synthesizes the findings from studies in humans and animals to answer some of the key questions in the field, and it points to gaps in knowledge that warrant more investigation. Specifically, recent studies suggest that some electrophysiological measures have the potential to function as indicators of hidden hearing loss in humans, but more research is needed for these measures to be included as part of a clinical test battery.
Affiliation(s)
- Jiayue Liu
- Department of Psychology and Neuroscience, Duke University, Durham, USA.
- Joshua Stohl
- North American Research Laboratory, MED-EL Corporation, Durham, USA
- Tobias Overath
- Department of Psychology and Neuroscience, Duke University, Durham, USA
8
Clonan AC, Zhai X, Stevenson IH, Escabí MA. Low-dimensional interference of mid-level sound statistics predicts human speech recognition in natural environmental noise. bioRxiv 2024:2024.02.13.579526. [PMID: 38405870 PMCID: PMC10888804 DOI: 10.1101/2024.02.13.579526]
Abstract
Recognizing speech in noise, such as in a busy street or restaurant, is an essential listening task whose difficulty varies across acoustic environments and noise levels. Yet, current cognitive models are unable to account for changing real-world hearing sensitivity. Here, using natural and perturbed background sounds, we demonstrate that the spectrum and modulation statistics of environmental backgrounds drastically impact human word recognition accuracy, and that they do so independently of the noise level. These sound statistics can facilitate or hinder recognition: at the same noise level, accuracy can range from 0% to 100%, depending on the background. To explain this perceptual variability, we optimized a biologically grounded hierarchical model consisting of frequency-tuned cochlear filters and subsequent mid-level modulation-tuned filters that account for central auditory tuning. Low-dimensional summary statistics from the mid-level model accurately predict single-trial perceptual judgments, accounting for more than 90% of the perceptual variance across backgrounds and noise levels and substantially outperforming a cochlear model. Furthermore, perceptual transfer functions in the mid-level auditory space identify multi-dimensional natural sound features that impact recognition. Thus, speech recognition in natural backgrounds involves interference of multiple summary statistics that are well described by an interpretable, low-dimensional auditory model. Since this framework relates salient natural sound cues to single-trial perceptual judgments, it may improve outcomes for auditory prosthetics and clinical measurements of real-world hearing sensitivity.
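The idea of reducing a background sound to low-dimensional spectrum and modulation statistics can be sketched crudely. This stand-in omits the cochlear and mid-level modulation filterbanks of the optimized model; the band count, the rectified envelope estimate, and the FFT-band-splitting are all simplifying assumptions.

```python
import numpy as np

def summary_stats(x, fs, n_bands=8):
    """Toy summary statistics of a background sound:
    (1) log energy in frequency bands of the signal spectrum, and
    (2) log energy in modulation bands of the signal envelope.
    Returns a vector of length 2 * n_bands.
    """
    spec = np.abs(np.fft.rfft(x)) ** 2
    band_e = [np.log(np.sum(b) + 1e-12) for b in np.array_split(spec, n_bands)]
    env = np.abs(x)                                  # crude envelope (rectification)
    mod = np.abs(np.fft.rfft(env - env.mean())) ** 2
    mod_e = [np.log(np.sum(b) + 1e-12) for b in np.array_split(mod, n_bands)]
    return np.array(band_e + mod_e)

fs = 8000
t = np.arange(0, 1.0, 1 / fs)
tone = np.sin(2 * np.pi * 1000 * t)                  # flat envelope
am_tone = (1 + np.sin(2 * np.pi * 4 * t)) * tone     # 4 Hz amplitude modulation
stats_tone, stats_am = summary_stats(tone, fs), summary_stats(am_tone, fs)
```

Two backgrounds with identical overall level can still yield very different statistic vectors, which is the sense in which summary statistics, rather than noise level alone, carry the predictive information.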
Affiliation(s)
- Alex C. Clonan
- Electrical and Computer Engineering, University of Connecticut, Storrs, CT 06269
- Biomedical Engineering, University of Connecticut, Storrs, CT 06269
- Institute of Brain and Cognitive Sciences, University of Connecticut, Storrs, CT 06269
- Xiu Zhai
- Biomedical Engineering, Wentworth Institute of Technology, Boston, MA 02115
- Ian H. Stevenson
- Biomedical Engineering, University of Connecticut, Storrs, CT 06269
- Psychological Sciences, University of Connecticut, Storrs, CT 06269
- Institute of Brain and Cognitive Sciences, University of Connecticut, Storrs, CT 06269
- Monty A. Escabí
- Electrical and Computer Engineering, University of Connecticut, Storrs, CT 06269
- Biomedical Engineering, University of Connecticut, Storrs, CT 06269
- Psychological Sciences, University of Connecticut, Storrs, CT 06269
- Institute of Brain and Cognitive Sciences, University of Connecticut, Storrs, CT 06269
9
Parida S, Yurasits K, Cancel VE, Zink ME, Mitchell C, Ziliak MC, Harrison AV, Bartlett EL, Parthasarathy A. Rapid and objective assessment of auditory temporal processing using dynamic amplitude-modulated stimuli. bioRxiv 2024:2024.01.28.577641. [PMID: 38352339 PMCID: PMC10862703 DOI: 10.1101/2024.01.28.577641]
Abstract
Auditory neural coding of speech-relevant temporal cues can be noninvasively probed using envelope following responses (EFRs), neural ensemble responses phase-locked to the stimulus amplitude envelope. EFRs emphasize different neural generators, such as the auditory brainstem or auditory cortex, by altering the temporal modulation rate of the stimulus. EFRs can be an important diagnostic tool for assessing auditory neural coding deficits that go beyond traditional audiometric estimations. Existing approaches to measuring EFRs use discrete amplitude-modulated (AM) tones of varying modulation frequencies, which is time-consuming and inefficient, impeding clinical translation. Here we present a faster and more efficient framework for measuring EFRs across a range of AM frequencies using stimuli that dynamically vary in modulation rate, combined with spectrally specific analyses that offer optimal spectrotemporal resolution. EFRs obtained from several species (humans, Mongolian gerbils, Fischer-344 rats, and CBA/CaJ mice) showed robust, high-SNR tracking of dynamic AM trajectories (up to 800 Hz in humans and 1.4 kHz in rodents), with a fivefold decrease in recording time and a thirtyfold increase in spectrotemporal resolution. EFR amplitudes for dynamic AM stimuli and traditional discrete AM tokens within the same subjects were highly correlated (94% variance explained) across species. Hence, we establish a time-efficient and spectrally specific approach to measuring EFRs. These results could yield novel clinical diagnostics for precision audiology by enabling rapid, objective assessment of temporal processing along the entire auditory neuraxis.
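A dynamically varying AM stimulus of the kind described can be generated by sweeping the instantaneous modulation rate and integrating it to obtain the modulator phase. A sketch assuming an exponential rate sweep; the carrier frequency, rate limits, duration, and depth are illustrative, not the study's parameters.

```python
import numpy as np

def dynamic_am(fs=48000, dur=2.0, carrier=4000.0,
               fm_start=20.0, fm_end=800.0, depth=1.0):
    """Tone whose amplitude-modulation rate sweeps exponentially
    from fm_start to fm_end over `dur` seconds.

    The modulator phase is the integral of the instantaneous
    modulation frequency fm(t) = fm_start * exp(k * t).
    """
    t = np.arange(int(fs * dur)) / fs
    k = np.log(fm_end / fm_start) / dur
    phase = 2 * np.pi * fm_start * (np.exp(k * t) - 1) / k
    env = 1 + depth * np.sin(phase)          # swept-rate envelope
    return env * np.sin(2 * np.pi * carrier * t), t

sig, t = dynamic_am()
```

Because the modulation rate at each moment in time is known, a spectrally specific analysis can track the EFR along the same trajectory, which is what replaces the discrete-tone bookkeeping of the traditional approach.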
Affiliation(s)
- Satyabrata Parida
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA
- Kimberly Yurasits
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA
- Victoria E. Cancel
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA
- Maggie E. Zink
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA
- Claire Mitchell
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA
- Meredith C. Ziliak
- Department of Biological Sciences, Purdue University, West Lafayette, IN, USA
- Audrey V. Harrison
- Department of Biological Sciences, Purdue University, West Lafayette, IN, USA
- Edward L. Bartlett
- Department of Biological Sciences, Purdue University, West Lafayette, IN, USA
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- Purdue Institute for Integrative Neuroscience, Purdue University, West Lafayette, IN, USA
- Aravindakshan Parthasarathy
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA
- Department of BioEngineering, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Otolaryngology, University of Pittsburgh, Pittsburgh, PA, USA
10
Smith SS, Jahn KN, Sugai JA, Hancock KE, Polley DB. The human pupil and face encode sound affect and provide objective signatures of tinnitus and auditory hypersensitivity disorders. bioRxiv 2024:2023.12.22.571929. [PMID: 38187580 PMCID: PMC10769427 DOI: 10.1101/2023.12.22.571929]
Abstract
Sound is jointly processed along acoustic and emotional dimensions. These dimensions can become distorted and entangled in persons with sensory disorders, producing a spectrum of loudness hypersensitivity, phantom percepts, and - in some cases - debilitating sound aversion. Here, we looked for objective signatures of disordered hearing (DH) in the human face. Pupil dilations and micro facial movement amplitudes scaled with sound valence in neurotypical listeners but not DH participants with chronic tinnitus (phantom ringing) and sound sensitivity. In DH participants, emotionally evocative sounds elicited abnormally large pupil dilations but blunted and invariant facial reactions that jointly provided an accurate prediction of individual tinnitus and hyperacusis questionnaire handicap scores. By contrast, EEG measures of central auditory gain identified steeper neural response growth functions but no association with symptom severity. These findings highlight dysregulated affective sound processing in persons with bothersome tinnitus and sound sensitivity disorders and introduce approaches for their objective measurement.
Affiliation(s)
- Samuel S Smith
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston MA, 02114 USA
- Department of Otolaryngology - Head and Neck Surgery, Harvard Medical School, Boston MA 02114 USA
- Lead contact
- Kelly N Jahn
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston MA, 02114 USA
- Department of Otolaryngology - Head and Neck Surgery, Harvard Medical School, Boston MA 02114 USA
- Jenna A Sugai
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston MA, 02114 USA
- Ken E Hancock
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston MA, 02114 USA
- Department of Otolaryngology - Head and Neck Surgery, Harvard Medical School, Boston MA 02114 USA
- Daniel B Polley
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston MA, 02114 USA
- Department of Otolaryngology - Head and Neck Surgery, Harvard Medical School, Boston MA 02114 USA
11
Liu J, Stohl J, Lopez-Poveda EA, Overath T. Quantifying the Impact of Auditory Deafferentation on Speech Perception. Trends Hear 2024; 28:23312165241227818. [PMID: 38291713 PMCID: PMC10832414 DOI: 10.1177/23312165241227818]
Abstract
The past decade has seen a wealth of research dedicated to determining which and how morphological changes in the auditory periphery contribute to people experiencing hearing difficulties in noise despite having clinically normal audiometric thresholds in quiet. Evidence from animal studies suggests that cochlear synaptopathy in the inner ear might lead to auditory nerve deafferentation, resulting in impoverished signal transmission to the brain. Here, we quantify the likely perceptual consequences of auditory deafferentation in humans via a physiologically inspired encoding-decoding model. The encoding stage simulates the processing of an acoustic input stimulus (e.g., speech) at the auditory periphery, while the decoding stage is trained to optimally regenerate the input stimulus from the simulated auditory nerve firing data. This allowed us to quantify the effect of different degrees of auditory deafferentation by measuring the extent to which the decoded signal supported the identification of speech in quiet and in noise. In a series of experiments, speech perception thresholds in quiet and in noise increased (worsened) significantly as a function of the degree of auditory deafferentation for modeled deafferentation greater than 90%. Importantly, this effect was significantly stronger in a noisy than in a quiet background. The encoding-decoding model thus captured the hallmark symptom of degraded speech perception in noise together with normal speech perception in quiet. As such, the model might function as a quantitative guide to evaluating the degree of auditory deafferentation in human listeners.
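The encoding-stage manipulation, degrading the simulated auditory-nerve representation by a given deafferentation percentage, can be caricatured by silencing a random subset of fibers. This is a toy stand-in for the article's physiologically inspired encoding-decoding model; the `deafferent` helper, the Poisson spike counts, and all values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def deafferent(spike_matrix, loss_fraction):
    """Silence a random fraction of auditory-nerve fibers (rows).

    spike_matrix: (n_fibers, n_time_bins) spike counts.
    loss_fraction: proportion of fibers removed, e.g. 0.9 for the
        >90% regime at which the article reports perceptual deficits.
    Returns the degraded matrix and the boolean mask of surviving fibers.
    """
    n_fibers = spike_matrix.shape[0]
    keep = rng.random(n_fibers) >= loss_fraction
    out = spike_matrix.copy()
    out[~keep] = 0                      # deafferented fibers carry no spikes
    return out, keep

# 100 fibers x 50 time bins of Poisson spike counts (fabricated input):
spikes = rng.poisson(1.0, size=(100, 50))
degraded, kept = deafferent(spikes, loss_fraction=0.9)
```

In the article's framework, the decoder is then asked to regenerate the stimulus from `degraded` rather than `spikes`, and the drop in speech intelligibility of the reconstruction quantifies the perceptual cost of the lost afferents.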
Affiliation(s)
- Jiayue Liu
- Department of Psychology and Neuroscience, Duke University, Durham, NC, USA
- Joshua Stohl
- North American Research Laboratory, MED-EL Corporation, Durham, NC, USA
- Enrique A. Lopez-Poveda
- Instituto de Neurociencias de Castilla y Leon, University of Salamanca, Salamanca, Spain
- Departamento de Cirugía, Facultad de Medicina, University of Salamanca, Salamanca, Spain
- Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca, Spain
- Tobias Overath
- Department of Psychology and Neuroscience, Duke University, Durham, NC, USA
12
Bramhall NF, McMillan GP. Perceptual Consequences of Cochlear Deafferentation in Humans. Trends Hear 2024; 28:23312165241239541. [PMID: 38738337 DOI: 10.1177/23312165241239541]
Abstract
Cochlear synaptopathy, a form of cochlear deafferentation, has been demonstrated in a number of animal species, including non-human primates. Both age and noise exposure contribute to synaptopathy in animal models, indicating that it may be a common type of auditory dysfunction in humans. Temporal bone and auditory physiological data suggest that age and occupational/military noise exposure also lead to synaptopathy in humans. The predicted perceptual consequences of synaptopathy include tinnitus, hyperacusis, and difficulty with speech-in-noise perception. However, confirming the perceptual impacts of this form of cochlear deafferentation presents a particular challenge because synaptopathy can only be confirmed through post-mortem temporal bone analysis and auditory perception is difficult to evaluate in animals. Animal data suggest that deafferentation leads to increased central gain, signs of tinnitus and abnormal loudness perception, and deficits in temporal processing and signal-in-noise detection. If equivalent changes occur in humans following deafferentation, this would be expected to increase the likelihood of developing tinnitus, hyperacusis, and difficulty with speech-in-noise perception. Physiological data from humans are consistent with the hypothesis that deafferentation is associated with increased central gain and a greater likelihood of tinnitus perception, while human data on the relationship between deafferentation and hyperacusis are extremely limited. Many human studies have investigated the relationship between physiological correlates of deafferentation and difficulty with speech-in-noise perception, with mixed findings. A non-linear relationship between deafferentation and speech perception may have contributed to the mixed results. When differences in sample characteristics and study measurements are considered, the findings may be more consistent.
Affiliation(s)
- Naomi F Bramhall
- VA National Center for Rehabilitative Auditory Research, Veterans Affairs Portland Health Care System, Portland, OR, USA
- Department of Otolaryngology/Head & Neck Surgery, Oregon Health & Science University, Portland, OR, USA
- Garnett P McMillan
- VA National Center for Rehabilitative Auditory Research, Veterans Affairs Portland Health Care System, Portland, OR, USA
13
Vasilkov V, Caswell-Midwinter B, Zhao Y, de Gruttola V, Jung DH, Liberman MC, Maison SF. Evidence of cochlear neural degeneration in normal-hearing subjects with tinnitus. Sci Rep 2023; 13:19870. [PMID: 38036538 PMCID: PMC10689483 DOI: 10.1038/s41598-023-46741-5] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Received: 07/24/2023] [Accepted: 11/04/2023] [Indexed: 12/02/2023]
Abstract
Tinnitus, reduced sound-level tolerance, and difficulties hearing in noisy environments are the most common complaints associated with sensorineural hearing loss in adult populations. This study aims to clarify whether cochlear neural degeneration, estimated in a large pool of participants with normal audiograms, is associated with self-report of tinnitus, using a test battery probing the different stages of auditory processing from hair-cell responses to the auditory reflexes of the brainstem. Self-report of chronic tinnitus was significantly associated with (1) reduced cochlear nerve responses, (2) weaker middle-ear muscle reflexes, (3) stronger medial olivocochlear efferent reflexes, and (4) hyperactivity in the central auditory pathways. These results support the model of tinnitus generation whereby decreased neural activity from a damaged cochlea can elicit hyperactivity from decreased inhibition in the central nervous system.
Affiliation(s)
- Viacheslav Vasilkov
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, 243 Charles Street, Boston, MA, 02114, USA
- Department of Otolaryngology, Harvard Medical School, Boston, MA, 02114, USA
- Benjamin Caswell-Midwinter
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, 243 Charles Street, Boston, MA, 02114, USA
- Department of Otolaryngology, Harvard Medical School, Boston, MA, 02114, USA
- Yan Zhao
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, 243 Charles Street, Boston, MA, 02114, USA
- Victor de Gruttola
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, 02114, USA
- David H Jung
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, 243 Charles Street, Boston, MA, 02114, USA
- Department of Otolaryngology, Harvard Medical School, Boston, MA, 02114, USA
- M Charles Liberman
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, 243 Charles Street, Boston, MA, 02114, USA
- Department of Otolaryngology, Harvard Medical School, Boston, MA, 02114, USA
- Stéphane F Maison
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, 243 Charles Street, Boston, MA, 02114, USA
- Department of Otolaryngology, Harvard Medical School, Boston, MA, 02114, USA
14
Cancel VE, McHaney JR, Milne V, Palmer C, Parthasarathy A. A data-driven approach to identify a rapid screener for auditory processing disorder testing referrals in adults. Sci Rep 2023; 13:13636. [PMID: 37604867 PMCID: PMC10442397 DOI: 10.1038/s41598-023-40645-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/27/2023] [Accepted: 08/16/2023] [Indexed: 08/23/2023]
Abstract
Hearing thresholds form the gold-standard assessment in audiology clinics. However, ~10% of adult patients seeking audiological care for self-perceived hearing deficits have thresholds that are normal. Currently, a diagnostic assessment for auditory processing disorder (APD) remains one of the few viable avenues of further care for this patient population, yet there are no standard guidelines for referrals. Here, we identified tests within the APD testing battery that could provide a rapid screener to inform APD referrals in adults. We first analyzed records from the University of Pittsburgh Medical Center (UPMC) Audiology database to identify adult patients with self-perceived hearing difficulties despite normal audiometric thresholds. We then examined the subset of patients who were referred for APD testing, analyzing test performance, correlational relationships, and classification accuracies. Patients experienced the most difficulty within the dichotic domain of testing. Additionally, accuracies calculated from sensitivities and specificities revealed that the Words-in-Noise (WIN), Random Dichotic Digits Task (RDDT), and Quick Speech-in-Noise (QuickSIN) tests had the highest classification accuracies. The addition of these tests holds the greatest promise as a rapid screener during routine audiological assessments to help identify adult patients who may be referred for APD assessment and resulting treatment plans.
Affiliation(s)
- Victoria E Cancel
- Department of Communication Science and Disorders, School of Health and Rehabilitation Sciences, University of Pittsburgh, 5060A Forbes Tower, Pittsburgh, PA, 15260, USA
- Jacie R McHaney
- Department of Communication Science and Disorders, School of Health and Rehabilitation Sciences, University of Pittsburgh, 5060A Forbes Tower, Pittsburgh, PA, 15260, USA
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Virginia Milne
- Department of Communication Science and Disorders, School of Health and Rehabilitation Sciences, University of Pittsburgh, 5060A Forbes Tower, Pittsburgh, PA, 15260, USA
- Department of Otolaryngology, School of Medicine, University of Pittsburgh, Pittsburgh, PA, USA
- University of Pittsburgh Medical Center, University of Pittsburgh, Pittsburgh, PA, USA
- Catherine Palmer
- Department of Communication Science and Disorders, School of Health and Rehabilitation Sciences, University of Pittsburgh, 5060A Forbes Tower, Pittsburgh, PA, 15260, USA
- Department of Otolaryngology, School of Medicine, University of Pittsburgh, Pittsburgh, PA, USA
- University of Pittsburgh Medical Center, University of Pittsburgh, Pittsburgh, PA, USA
- Aravindakshan Parthasarathy
- Department of Communication Science and Disorders, School of Health and Rehabilitation Sciences, University of Pittsburgh, 5060A Forbes Tower, Pittsburgh, PA, 15260, USA
- Department of Otolaryngology, School of Medicine, University of Pittsburgh, Pittsburgh, PA, USA
- University of Pittsburgh Medical Center, University of Pittsburgh, Pittsburgh, PA, USA
- Department of BioEngineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA, USA
15
McHaney JR, Hancock KE, Polley DB, Parthasarathy A. Sensory representations and pupil-indexed listening effort provide complementary contributions to multi-talker speech intelligibility. bioRxiv 2023:2023.08.13.553131 [Preprint]. [PMID: 37645975 PMCID: PMC10462058 DOI: 10.1101/2023.08.13.553131] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 09/01/2023]
Abstract
Optimal speech perception in noise requires successful separation of the target speech stream from multiple competing background speech streams. The ability to segregate these competing speech streams depends on the fidelity of bottom-up neural representations of sensory information in the auditory system and top-down influences of effortful listening. Here, we use objective neurophysiological measures of bottom-up temporal processing, envelope-following responses (EFRs) to amplitude-modulated tones, and investigate their interactions with pupil-indexed listening effort, as it relates to performance on the Quick Speech-in-Noise (QuickSIN) test in young adult listeners with clinically normal hearing thresholds. We developed an approach using ear-canal electrodes and adjusting electrode montages for modulation-rate ranges, which extended the range of reliable EFR measurements as high as 1024 Hz. Pupillary responses revealed changes in listening effort at the two most difficult signal-to-noise ratios (SNRs), but behavioral deficits at the hardest SNR only. Neither pupil-indexed listening effort nor the slope of the EFR decay function independently related to QuickSIN performance. However, a linear model using the combination of EFR and pupil metrics significantly explained variance in QuickSIN performance. These results suggest a synergistic interaction between bottom-up sensory coding and top-down measures of listening effort as it relates to speech perception in noise. These findings can inform the development of next-generation tests for hearing deficits in listeners with normal hearing thresholds that incorporate a multi-dimensional approach to understanding speech intelligibility deficits.
Affiliation(s)
- Jacie R. McHaney
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA
- Kenneth E. Hancock
- Department of Otolaryngology – Head and Neck Surgery, Harvard Medical School, Boston, MA
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA
- Daniel B. Polley
- Department of Otolaryngology – Head and Neck Surgery, Harvard Medical School, Boston, MA
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA
- Aravindakshan Parthasarathy
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA
16
Samelli AG, Rocha CH, Kamita MK, Lopes MEP, Andrade CQ, Matas CG. Evaluation of Subtle Auditory Impairments with Multiple Audiological Assessments in Normal Hearing Workers Exposed to Occupational Noise. Brain Sci 2023; 13:968. [PMID: 37371447 DOI: 10.3390/brainsci13060968] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 05/08/2023] [Revised: 06/10/2023] [Accepted: 06/15/2023] [Indexed: 06/29/2023]
Abstract
Recent studies involving guinea pigs have shown that noise can damage the synapses between the inner hair cells and spiral ganglion neurons even when hearing thresholds remain normal, which makes it important to investigate this kind of impairment in humans. The aim was to investigate, with multiple audiological assessments, the auditory function of normal-hearing workers exposed to occupational noise. Altogether, 60 workers were assessed (30 in the noise-exposure group [NEG], who were exposed to occupational noise, and 30 in the control group [CG], who were not); the workers were matched according to age. The following procedures were used: complete audiological assessment; speech recognition threshold in noise (SRTN); speech in noise (SN) in an acoustic field; gaps-in-noise (GIN); transient evoked otoacoustic emissions (TEOAE) with the inhibitory effect of the efferent auditory pathway; auditory brainstem response (ABR); and long-latency auditory evoked potentials (LLAEP). No significant difference was found between the groups in SRTN. In SN, the NEG performed worse than the CG at a signal-to-noise ratio (SNR) of 0 (p = 0.023). In GIN, the NEG had a significantly lower percentage of correct answers (p = 0.042). In TEOAE, the NEG had smaller amplitude values bilaterally (RE p = 0.048; LE p = 0.045) and a smaller inhibitory effect of the efferent pathway (p = 0.009). In ABR, the NEG had greater latencies of wave V (p = 0.017) and of interpeak intervals III-V and I-V in the LE (p = 0.005 and 0.04, respectively). In LLAEP, the NEG had a smaller P3 amplitude bilaterally (RE p = 0.001; LE p = 0.002). The NEG performed worse than the CG in most of the assessments, suggesting that auditory function in individuals exposed to occupational noise is impaired even with normal audiometric thresholds.
Affiliation(s)
- Alessandra Giannella Samelli
- Department of Physical Therapy, Speech-Language-Hearing Sciences, and Occupational Therapy, Medical School (FMUSP), University of São Paulo, São Paulo 05360-160, SP, Brazil
- Clayton Henrique Rocha
- Department of Physical Therapy, Speech-Language-Hearing Sciences, and Occupational Therapy, Medical School (FMUSP), University of São Paulo, São Paulo 05360-160, SP, Brazil
- Mariana Keiko Kamita
- Department of Physical Therapy, Speech-Language-Hearing Sciences, and Occupational Therapy, Medical School (FMUSP), University of São Paulo, São Paulo 05360-160, SP, Brazil
- Maria Elisa Pereira Lopes
- Department of Physical Therapy, Speech-Language-Hearing Sciences, and Occupational Therapy, Medical School (FMUSP), University of São Paulo, São Paulo 05360-160, SP, Brazil
- Camila Quintiliano Andrade
- Department of Physical Therapy, Speech-Language-Hearing Sciences, and Occupational Therapy, Medical School (FMUSP), University of São Paulo, São Paulo 05360-160, SP, Brazil
- Carla Gentile Matas
- Department of Physical Therapy, Speech-Language-Hearing Sciences, and Occupational Therapy, Medical School (FMUSP), University of São Paulo, São Paulo 05360-160, SP, Brazil
17
Karunathilake IMD, Dunlap JL, Perera J, Presacco A, Decruy L, Anderson S, Kuchinsky SE, Simon JZ. Effects of aging on cortical representations of continuous speech. J Neurophysiol 2023; 129:1359-1377. [PMID: 37096924 PMCID: PMC10202479 DOI: 10.1152/jn.00356.2022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/24/2022] [Revised: 04/04/2023] [Accepted: 04/20/2023] [Indexed: 04/26/2023]
Abstract
Understanding speech in a noisy environment is crucial in day-to-day interactions and yet becomes more challenging with age, even for healthy aging. Age-related changes in the neural mechanisms that enable speech-in-noise listening have been investigated previously; however, the extent to which age affects the timing and fidelity of encoding of target and interfering speech streams is not well understood. Using magnetoencephalography (MEG), we investigated how continuous speech is represented in auditory cortex in the presence of interfering speech in younger and older adults. Cortical representations were obtained from neural responses that time-locked to the speech envelopes, using speech envelope reconstruction and temporal response functions (TRFs). TRFs showed three prominent peaks corresponding to auditory cortical processing stages: early (∼50 ms), middle (∼100 ms), and late (∼200 ms). Older adults showed exaggerated speech envelope representations compared with younger adults. Temporal analysis revealed both that the age-related exaggeration starts as early as ∼50 ms and that older adults needed a substantially longer integration time window to achieve their better reconstruction of the speech envelope. As expected, with increased speech masking, envelope reconstruction for the attended talker decreased and all three TRF peaks were delayed, with aging contributing additionally to the reduction. Interestingly, for older adults the late peak was delayed, suggesting that this late peak may receive contributions from multiple sources. Together these results suggest that there are several mechanisms at play compensating for age-related temporal processing deficits at several stages, but which are not able to fully reestablish unimpaired speech perception. NEW & NOTEWORTHY We observed age-related changes in cortical temporal processing of continuous speech that may be related to older adults' difficulty in understanding speech in noise. These changes occur in both timing and strength of the speech representations at different cortical processing stages and depend on both noise condition and selective attention. Critically, their dependence on noise condition changes dramatically among the early, middle, and late cortical processing stages, underscoring how aging differentially affects these stages.
Affiliation(s)
- I M Dushyanthi Karunathilake
- Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland, United States
- Jason L Dunlap
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland, United States
- Janani Perera
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland, United States
- Alessandro Presacco
- Institute for Systems Research, University of Maryland, College Park, Maryland, United States
- Lien Decruy
- Institute for Systems Research, University of Maryland, College Park, Maryland, United States
- Samira Anderson
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland, United States
- Stefanie E Kuchinsky
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland, United States
- Jonathan Z Simon
- Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland, United States
- Institute for Systems Research, University of Maryland, College Park, Maryland, United States
- Department of Biology, University of Maryland, College Park, Maryland, United States
18
Whiteford KL, Oxenham AJ. Sensitivity to Frequency Modulation is Limited Centrally. J Neurosci 2023; 43:3687-3695. [PMID: 37028932 PMCID: PMC10198444 DOI: 10.1523/jneurosci.0995-22.2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 05/24/2022] [Revised: 03/23/2023] [Accepted: 03/31/2023] [Indexed: 04/09/2023]
Abstract
Modulations in both amplitude and frequency are prevalent in natural sounds and are critical in defining their properties. Humans are exquisitely sensitive to frequency modulation (FM) at the slow modulation rates and low carrier frequencies that are common in speech and music. This enhanced sensitivity to slow-rate and low-frequency FM has been widely believed to reflect precise, stimulus-driven phase locking to temporal fine structure in the auditory nerve. At faster modulation rates and/or higher carrier frequencies, FM is instead thought to be coded by coarser frequency-to-place mapping, where FM is converted to amplitude modulation (AM) via cochlear filtering. Here, we show that patterns of human FM perception that have classically been explained by limits in peripheral temporal coding are instead better accounted for by constraints in the central processing of fundamental frequency (F0) or pitch. We measured FM detection in male and female humans using harmonic complex tones with an F0 within the range of musical pitch but with resolved harmonic components that were all above the putative limits of temporal phase locking (>8 kHz). Listeners were more sensitive to slow than fast FM rates, even though all components were beyond the limits of phase locking. In contrast, AM sensitivity remained better at faster than slower rates, regardless of carrier frequency. These findings demonstrate that classic trends in human FM sensitivity, previously attributed to auditory nerve phase locking, may instead reflect the constraints of a unitary code that operates at a more central level of processing. SIGNIFICANCE STATEMENT Natural sounds involve dynamic frequency and amplitude fluctuations. Humans are particularly sensitive to frequency modulation (FM) at slow rates and low carrier frequencies, which are prevalent in speech and music. This sensitivity has been ascribed to encoding of stimulus temporal fine structure (TFS) via phase-locked auditory nerve activity. To test this long-standing theory, we measured FM sensitivity using complex tones with a low F0 but only high-frequency harmonics beyond the limits of phase locking. Dissociating the F0 from TFS showed that FM sensitivity is limited not by peripheral encoding of TFS but rather by central processing of F0, or pitch. The results suggest a unitary code for FM detection limited by more central constraints.
Affiliation(s)
- Kelly L Whiteford
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55455
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55455
19
Narne VK, Jain S, Ravi SK, Almudhi A, Krishna Y, Moore BCJ. The effect of recreational noise exposure on amplitude-modulation detection, hearing sensitivity at frequencies above 8 kHz, and perception of speech in noise. J Acoust Soc Am 2023; 153:2562. [PMID: 37129676 DOI: 10.1121/10.0017973] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/10/2022] [Accepted: 04/08/2023] [Indexed: 05/03/2023]
Abstract
Psychoacoustic and speech perception measures were compared for a group who were exposed to noise regularly through listening to music via personal music players (PMP) and a control group without such exposure. Lifetime noise exposure, quantified using the NESI questionnaire, averaged ten times higher for the exposed group than for the control group. Audiometric thresholds were similar for the two groups over the conventional frequency range up to 8 kHz, but for higher frequencies, the exposed group had higher thresholds than the control group. Amplitude modulation detection (AMD) thresholds were measured using a 4000-Hz sinusoidal carrier presented in threshold-equalizing noise at 30, 60, and 90 dB sound pressure level (SPL) for modulation frequencies of 8, 16, 32, and 64 Hz. At 90 dB SPL but not at the lower levels, AMD thresholds were significantly higher (worse) for the exposed than for the control group, especially for low modulation frequencies. The exposed group required significantly higher signal-to-noise ratios than the control group to understand sentences in noise. Otoacoustic emissions did not differ for the two groups. It is concluded that listening to music via PMP can have subtle deleterious effects on speech perception, AM detection, and hearing sensitivity over the extended high-frequency range.
Affiliation(s)
- Vijaya Kumar Narne
- Department of Medical Rehabilitation Sciences, College of Applied Medical Sciences, King Khalid University, Abha 61481, Saudi Arabia
- Saransh Jain
- All India Institute of Speech and Hearing, University of Mysore, Mysuru, India
- Sunil Kumar Ravi
- Department of Medical Rehabilitation Sciences, College of Applied Medical Sciences, King Khalid University, Abha 61481, Saudi Arabia
- Abdulaziz Almudhi
- Department of Medical Rehabilitation Sciences, College of Applied Medical Sciences, King Khalid University, Abha 61481, Saudi Arabia
- Yerraguntla Krishna
- Department of Medical Rehabilitation Sciences, College of Applied Medical Sciences, King Khalid University, Abha 61481, Saudi Arabia
- All India Institute of Speech and Hearing, University of Mysore, Mysuru, India
- Brian C J Moore
- Cambridge Hearing Group, Department of Psychology, University of Cambridge, Cambridge, United Kingdom
20
Winn MB. Time Scales and Moments of Listening Effort Revealed in Pupillometry. Semin Hear 2023; 44:106-123. [PMID: 37122881 PMCID: PMC10147502 DOI: 10.1055/s-0043-1767741] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 04/07/2023]
Abstract
This article offers a collection of observations that highlight the value of time course data in pupillometry and points out ways in which these observations create deeper understanding of listening effort. The main message is that listening effort should be considered on a moment-to-moment basis rather than as a singular amount. A review of various studies and the reanalysis of data reveal distinct signatures of effort before a stimulus, during a stimulus, in the moments after a stimulus, and changes over whole experimental testing sessions. Collectively these observations motivate questions that extend beyond the "amount" of effort, toward understanding how long the effort lasts, and how precisely someone can allocate effort at specific points in time or reduce effort at other times. Apparent disagreements between studies are reconsidered as informative lessons about stimulus selection and the nature of pupil dilation as a reflection of decision making rather than the difficulty of sensory encoding.
Affiliation(s)
- Matthew B. Winn
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, Minnesota
21
He F, Stevenson IH, Escabí MA. Two stages of bandwidth scaling drives efficient neural coding of natural sounds. PLoS Comput Biol 2023; 19:e1010862. [PMID: 36787338 PMCID: PMC9970106 DOI: 10.1371/journal.pcbi.1010862] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/28/2022] [Revised: 02/27/2023] [Accepted: 01/09/2023] [Indexed: 02/15/2023]
Abstract
Theories of efficient coding propose that the auditory system is optimized for the statistical structure of natural sounds, yet the transformations underlying optimal acoustic representations are not well understood. Using a database of natural sounds including human speech and a physiologically inspired auditory model, we explore the consequences of peripheral (cochlear) and mid-level (auditory midbrain) filter tuning transformations on the representation of natural sound spectra and modulation statistics. Whereas Fourier-based sound decompositions have constant time-frequency resolution at all frequencies, cochlear and auditory midbrain filter bandwidths increase in proportion to the filter center frequency. This form of bandwidth scaling produces a systematic decrease in spectral resolution and increase in temporal resolution with increasing frequency. Here we demonstrate that cochlear bandwidth scaling produces a frequency-dependent gain that counteracts the tendency of natural sound power to decrease with frequency, resulting in a whitened output representation. Similarly, bandwidth scaling in mid-level auditory filters further enhances the representation of natural sounds by producing a whitened modulation power spectrum (MPS) with higher modulation entropy than both the cochlear outputs and the conventional Fourier MPS. These findings suggest that the tuning characteristics of the peripheral and mid-level auditory system together produce a whitened output representation in three dimensions (frequency, temporal and spectral modulation) that reduces redundancies and allows for a more efficient use of neural resources. This hierarchical multi-stage tuning strategy is thus likely optimized to extract available information and may underlie perceptual sensitivity to natural sounds.
Affiliation(s)
- Fengrong He
- Biomedical Engineering, University of Connecticut, Storrs, Connecticut, United States of America
- Ian H. Stevenson
- Biomedical Engineering, University of Connecticut, Storrs, Connecticut, United States of America
- Psychological Sciences, University of Connecticut, Storrs, Connecticut, United States of America
- The Connecticut Institute for Brain and Cognitive Sciences, University of Connecticut, Storrs, Connecticut, United States of America
- Monty A. Escabí
- Biomedical Engineering, University of Connecticut, Storrs, Connecticut, United States of America
- Psychological Sciences, University of Connecticut, Storrs, Connecticut, United States of America
- The Connecticut Institute for Brain and Cognitive Sciences, University of Connecticut, Storrs, Connecticut, United States of America
- Electrical and Computer Engineering, University of Connecticut, Storrs, Connecticut, United States of America
22
Vasilkov V, Liberman MC, Maison SF. Isolating auditory-nerve contributions to electrocochleography by high-pass filtering: A better biomarker for cochlear nerve degeneration? JASA Express Lett 2023; 3:024401. [PMID: 36858988 PMCID: PMC9969351 DOI: 10.1121/10.0017328] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 10/03/2022] [Accepted: 01/26/2023] [Indexed: 05/17/2023]
Abstract
In search of biomarkers for cochlear neural degeneration (CND) in electrocochleography from humans with normal thresholds, we high-pass and low-pass filtered the responses to separate contributions of auditory-nerve action potentials (N1) from hair-cell summating potentials (SP). The new N1 measure is better correlated with performance on difficult word-recognition tasks used as a proxy for CND. Furthermore, the paradoxical correlation between larger SPs and worse word scores, observed with classic electrocochleographic analysis, disappears with the new metric. Classic SP is simultaneous with and opposite in phase to an early neural contribution, and filtering separates the sources to eliminate this interference.
Affiliation(s)
- Viacheslav Vasilkov
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear and Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts 02114, USA
- M Charles Liberman
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear and Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts 02114, USA
- Stéphane F Maison
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear and Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts 02114, USA
23
Le Prell CG, Clavier OH, Bao J. Noise-induced hearing disorders: Clinical and investigational tools. J Acoust Soc Am 2023; 153:711. [PMID: 36732240 PMCID: PMC9889121 DOI: 10.1121/10.0017002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 11/18/2022] [Revised: 01/05/2023] [Accepted: 01/09/2023] [Indexed: 06/18/2023]
Abstract
A series of articles discussing advanced diagnostics that can be used to assess noise injury and associated noise-induced hearing disorders (NIHD) was developed under the umbrella of the United States Department of Defense Hearing Center of Excellence Pharmaceutical Interventions for Hearing Loss working group. The overarching goals of the current series were to provide insight into (1) well-established and more recently developed metrics that are sensitive for detection of cochlear pathology or diagnosis of NIHD, and (2) the tools that are available for characterizing individual noise hazard as personal exposure will vary based on distance to the sound source and placement of hearing protection devices. In addition to discussing the utility of advanced diagnostics in patient care settings, the current articles discuss the selection of outcomes and end points that can be considered for use in clinical trials investigating hearing loss prevention and hearing rehabilitation.
Collapse
Affiliation(s)
- Colleen G Le Prell
- Department of Speech, Language, and Hearing Science, University of Texas at Dallas, Richardson, Texas 75080, USA
| | | | - Jianxin Bao
- Gateway Biotechnology Inc., St. Louis, Missouri 63132, USA
| |
24
Davidson A, Ellis G, Sherlock LP, Schurman J, Brungart D. Rapid Assessment of Subjective Hearing Complaints With a Modified Version of the Tinnitus and Hearing Survey. Trends Hear 2023; 27:23312165231198374. [PMID: 37822285 PMCID: PMC10571680 DOI: 10.1177/23312165231198374]
Abstract
Hearing difficulties are frequently reported by patients in audiology clinics, including patients with normal audiometric thresholds. However, because all individuals experience some difficulty understanding speech in noisy environments, it can be difficult to assess hearing complaints objectively across patients. Normative values help address this issue by providing an objective cutoff score for determining what is or is not clinically significant. The goal of this study was to establish normative values for the four-item hearing subscale of the Tinnitus and Hearing Survey (THS-H). Respondents completing the THS-H rate their difficulty understanding speech in the situations most commonly reported as difficult: in the presence of noise, on TV or in movies, with soft voices, and in group conversations. In this study, 22,583 US Service Members (SMs) completed the THS-H using an 11-point scale ranging from 0 (not a problem) to 10 (a very big problem). Responses to the four items were summed to produce values between 0 and 40. The distribution of the final scores was analyzed by severity of hearing loss, age, and sex. Only 5% of SMs with clinically normal hearing scored above 27, so this score was selected as the cutoff for "clinically significant hearing problems." Because of its ease of administration and interpretation, the THS-H could be a useful tool for identifying patients with subjective hearing difficulty warranting audiological evaluation and management.
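The scoring rule described above is simple to automate. As an illustrative sketch (the function names are ours, not from the survey's materials), assuming four item ratings on the 0-10 scale and the study's cutoff of 27:

```python
def ths_h_score(ratings):
    """Sum the four THS-H hearing-subscale ratings (each 0-10) into a 0-40 total."""
    if len(ratings) != 4 or any(not 0 <= r <= 10 for r in ratings):
        raise ValueError("expected four ratings on the 0-10 scale")
    return sum(ratings)

def clinically_significant(total, cutoff=27):
    """Flag totals above the cutoff; only 5% of normal-hearing SMs scored above 27."""
    return total > cutoff
```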
Affiliation(s)
- Alyssa Davidson
- Audiology and Speech Center, Walter Reed National Military Medical Center, National Military Audiology and Speech Center, Bethesda, MD, USA
| | - Gregory Ellis
- Audiology and Speech Center, Walter Reed National Military Medical Center, National Military Audiology and Speech Center, Bethesda, MD, USA
| | - LaGuinn P. Sherlock
- Audiology and Speech Center, Walter Reed National Military Medical Center, National Military Audiology and Speech Center, Bethesda, MD, USA
- Hearing Conservation and Readiness Branch, Defense Centers for Public Health-Aberdeen, Aberdeen, MD, USA
| | - Jaclyn Schurman
- Audiology and Speech Center, Walter Reed National Military Medical Center, National Military Audiology and Speech Center, Bethesda, MD, USA
| | - Douglas Brungart
- Audiology and Speech Center, Walter Reed National Military Medical Center, National Military Audiology and Speech Center, Bethesda, MD, USA
| |
25
Grinn SK, Le Prell CG. Evaluation of hidden hearing loss in normal-hearing firearm users. Front Neurosci 2022; 16:1005148. [PMID: 36389238 PMCID: PMC9644938 DOI: 10.3389/fnins.2022.1005148]
Abstract
Some noise exposures resulting in temporary threshold shift (TTS) result in cochlear synaptopathy. The purpose of this retrospective study was to evaluate a human population that might be at risk for noise-induced cochlear synaptopathy (i.e., "hidden hearing loss"). Participants were firearm users who were (1) at risk for prior audiometric noise-induced threshold shifts, given their history of firearm use, (2) likely to have experienced complete threshold recovery if any prior TTS had occurred, based on this study's normal-hearing inclusion criteria, and (3) not at risk for significant age-related synaptopathic loss, based on this study's young-adult inclusion criteria. Seventy participants (ages 18-25 yr) were enrolled: 33 firearm users (experimental group, EXP) and 37 non-firearm users (control group, CNTRL). All participants were required to exhibit audiometric thresholds ≤20 dB HL bilaterally, from 0.25 to 8 kHz. The study was designed to test the hypothesis that EXP participants would exhibit a reduced cochlear nerve response compared to CNTRL participants, despite normal hearing sensitivity in both groups. No statistically significant group differences in auditory performance were detected between the CNTRL and EXP participants on standard audiometry, extended high-frequency audiometry, Words-in-Noise performance, distortion product otoacoustic emissions, middle ear muscle reflexes, or auditory brainstem responses. Importantly, 91% of EXP participants reported that they wore hearing protection either "all the time" or "almost all the time" while using firearms. The data suggest that consistent use of hearing protection during firearm use can effectively protect cochlear and neural measures of auditory function, including suprathreshold responses. The current results do not exclude the possibility that neural pathology may be evident in firearm users with less consistent hearing protection use.
However, firearm users with less consistent hearing protection use are also more likely to exhibit threshold elevation, among other cochlear deficits, thereby confounding the isolation of any potentially selective neural deficits. Taken together, it seems most likely that firearm users who consistently and correctly use hearing protection will exhibit preserved measures of cochlear and neural function, while firearm users who inconsistently and incorrectly use hearing protection are most likely to exhibit cochlear injury, rather than evidence of selective neural injury in the absence of cochlear injury.
Affiliation(s)
- Sarah K. Grinn
- Department of Communication Sciences and Disorders, Central Michigan University, Mount Pleasant, MI, United States
| | - Colleen G. Le Prell
- Department of Speech, Language, and Hearing, University of Texas at Dallas, Dallas, TX, United States
| |
26
McGill M, Hight AE, Watanabe YL, Parthasarathy A, Cai D, Clayton K, Hancock KE, Takesian A, Kujawa SG, Polley DB. Neural signatures of auditory hypersensitivity following acoustic trauma. eLife 2022; 11:e80015. [PMID: 36111669 PMCID: PMC9555866 DOI: 10.7554/elife.80015]
Abstract
Neurons in sensory cortex exhibit a remarkable capacity to maintain stable firing rates despite large fluctuations in afferent activity levels. However, sudden peripheral deafferentation in adulthood can trigger an excessive, non-homeostatic cortical compensatory response that may underlie perceptual disorders including sensory hypersensitivity, phantom limb pain, and tinnitus. Here, we show that mice with noise-induced damage of the high-frequency cochlear base were behaviorally hypersensitive to spared mid-frequency tones and to direct optogenetic stimulation of auditory thalamocortical neurons. Chronic two-photon calcium imaging from auditory cortex (ACtx) pyramidal neurons (PyrNs) revealed an initial stage of spatially diffuse hyperactivity, hyper-correlation, and auditory hyperresponsivity that consolidated around deafferented map regions three or more days after acoustic trauma. Deafferented PyrN ensembles also displayed hypersensitive decoding of spared mid-frequency tones that mirrored behavioral hypersensitivity, suggesting that non-homeostatic regulation of cortical sound intensity coding following sensorineural loss may be an underlying source of auditory hypersensitivity. Excess cortical response gain after acoustic trauma was expressed heterogeneously among individual PyrNs, yet 40% of this variability could be accounted for by each cell's baseline response properties prior to acoustic trauma. PyrNs with initially high spontaneous activity and gradual monotonic intensity growth functions were more likely to exhibit non-homeostatic excess gain after acoustic trauma. This suggests that while cortical gain changes are triggered by reduced bottom-up afferent input, their subsequent stabilization is also shaped by their local circuit milieu, where indicators of reduced inhibition can presage pathological hyperactivity following sensorineural hearing loss.
Affiliation(s)
- Matthew McGill
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, United States
- Division of Medical Sciences, Harvard Medical School, Boston, United States
| | - Ariel E Hight
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, United States
- Division of Medical Sciences, Harvard Medical School, Boston, United States
| | - Yurika L Watanabe
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, United States
| | - Aravindakshan Parthasarathy
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, United States
- Department of Otolaryngology - Head and Neck Surgery, Harvard Medical School, Boston, United States
| | - Dongqin Cai
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, United States
- Department of Otolaryngology - Head and Neck Surgery, Harvard Medical School, Boston, United States
| | - Kameron Clayton
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, United States
- Department of Otolaryngology - Head and Neck Surgery, Harvard Medical School, Boston, United States
| | - Kenneth E Hancock
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, United States
- Department of Otolaryngology - Head and Neck Surgery, Harvard Medical School, Boston, United States
| | - Anne Takesian
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, United States
- Department of Otolaryngology - Head and Neck Surgery, Harvard Medical School, Boston, United States
| | - Sharon G Kujawa
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, United States
- Department of Otolaryngology - Head and Neck Surgery, Harvard Medical School, Boston, United States
| | - Daniel B Polley
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, United States
- Department of Otolaryngology - Head and Neck Surgery, Harvard Medical School, Boston, United States
| |
27
Valderrama JT, de la Torre A, McAlpine D. The hunt for hidden hearing loss in humans: From preclinical studies to effective interventions. Front Neurosci 2022; 16:1000304. [PMID: 36188462 PMCID: PMC9519997 DOI: 10.3389/fnins.2022.1000304]
Abstract
Many individuals experience hearing problems that are hidden under a normal audiogram. This impacts not only individual sufferers but also clinicians, who can offer little in the way of support. Animal studies using invasive methodologies have developed solid evidence for a range of pathologies underlying this hidden hearing loss (HHL), including cochlear synaptopathy, auditory nerve demyelination, elevated central gain, and neural mal-adaptation. Despite progress in pre-clinical models, evidence supporting the existence of HHL in humans remains inconclusive, and clinicians lack any non-invasive biomarkers sensitive to HHL, as well as a standardized protocol to manage hearing problems in the absence of elevated hearing thresholds. Here, we review animal models of HHL as well as the ongoing search for tools with which to diagnose and manage hearing difficulties associated with HHL. We also discuss new research opportunities facilitated by recent methodological advances that may overcome a series of barriers that have hampered meaningful progress in diagnosing and treating HHL.
Affiliation(s)
- Joaquin T. Valderrama
- National Acoustic Laboratories, Sydney, NSW, Australia
- Department of Linguistics, Macquarie University Hearing, Macquarie University, Sydney, NSW, Australia
- *Correspondence: Joaquin T. Valderrama
| | - Angel de la Torre
- Department of Signal Theory, Telematics and Communications, University of Granada, Granada, Spain
- Research Centre for Information and Communications Technologies (CITIC-UGR), University of Granada, Granada, Spain
| | - David McAlpine
- Department of Linguistics, Macquarie University Hearing, Macquarie University, Sydney, NSW, Australia
| |
28
Xiao H, Amaerjiang N, Wang W, Li M, Zunong J, En H, Zhao X, Wen C, Yu Y, Huang L, Hu Y. Hearing thresholds elevation and potential association with emotional problems among 1,914 children in Beijing, China. Front Public Health 2022; 10:937301. [PMID: 35991012 PMCID: PMC9386347 DOI: 10.3389/fpubh.2022.937301]
Abstract
Objectives: School-aged children may experience hearing loss and emotional problems. Previous studies have shown a bidirectional relationship between hearing loss and emotional problems in elderly populations; we aimed to analyze the association between hearing thresholds and emotional problems in school-aged children. Methods: As part of the Beijing Child Growth and Health Cohort (PROC) study, hearing screenings were conducted in November 2019 using pure-tone audiometry. A total of 1,877 parents completed the Strengths and Difficulties Questionnaire (SDQ) to assess children's emotional and behavioral status. We used generalized linear regression to assess the potential association of emotional problems with hearing thresholds, based on multiply imputed datasets with a sample size of 1,914. Results: The overall pass rate of the hearing screening was 91.5%. The rate of abnormal SDQ total-difficulties scores was 55.8%. Emotional symptoms were positively associated with left-ear average hearing thresholds (β = 0.24, 95% CI: 0.08–0.40) and right-ear average hearing thresholds (β = 0.18, 95% CI: 0.04–0.32). Conduct problems, hyperactivity/inattention, peer problems, and prosocial behaviors showed no association with the pass rate of the hearing screening. Regarding emotional symptoms, among boys, having many fears and being easily scared was associated with increased right-ear average hearing thresholds (β = 0.67, 95% CI: 0.01–1.33). Among girls, having many worries and frequently feeling unhappy or downhearted were positively associated with left- and right-ear average hearing thresholds, respectively (β = 0.96, 95% CI: 0.20–1.73; β = 0.72, 95% CI: 0.07–1.37). Conclusions: The co-occurrence of hearing problems and emotional problems in children aged 6–8 in Beijing warrants attention. It is important to address undiscovered hearing loss and emotional problems from the perspective of comorbidity driving factors.
Affiliation(s)
- Huidi Xiao
- Department of Child, Adolescent Health and Maternal Care, School of Public Health, Capital Medical University, Beijing, China
| | - Nubiya Amaerjiang
- Department of Child, Adolescent Health and Maternal Care, School of Public Health, Capital Medical University, Beijing, China
| | - Weiwei Wang
- Department of Otorhinolaryngology, Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China
| | - Menglong Li
- Department of Child, Adolescent Health and Maternal Care, School of Public Health, Capital Medical University, Beijing, China
| | - Jiawulan Zunong
- Department of Child, Adolescent Health and Maternal Care, School of Public Health, Capital Medical University, Beijing, China
| | - Hui En
- Department of Otolaryngology-Head and Neck Surgery, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Xuelei Zhao
- Department of Otolaryngology-Head and Neck Surgery, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Beijing Institute of Otolaryngology, Beijing, China
- Key Laboratory of Otolaryngology, Head and Neck Surgery, Ministry of Education, Beijing, China
| | - Cheng Wen
- Department of Otolaryngology-Head and Neck Surgery, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Beijing Institute of Otolaryngology, Beijing, China
- Key Laboratory of Otolaryngology, Head and Neck Surgery, Ministry of Education, Beijing, China
| | - Yiding Yu
- Department of Otolaryngology-Head and Neck Surgery, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Beijing Institute of Otolaryngology, Beijing, China
- Key Laboratory of Otolaryngology, Head and Neck Surgery, Ministry of Education, Beijing, China
| | - Lihui Huang
- Department of Otolaryngology-Head and Neck Surgery, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Beijing Institute of Otolaryngology, Beijing, China
- Key Laboratory of Otolaryngology, Head and Neck Surgery, Ministry of Education, Beijing, China
| | - Yifei Hu
- Department of Child, Adolescent Health and Maternal Care, School of Public Health, Capital Medical University, Beijing, China
- *Correspondence: Yifei Hu
| |
29
Grant KJ, Parthasarathy A, Vasilkov V, Caswell-Midwinter B, Freitas ME, de Gruttola V, Polley DB, Liberman MC, Maison SF. Predicting neural deficits in sensorineural hearing loss from word recognition scores. Sci Rep 2022; 12:8929. [PMID: 35739134 PMCID: PMC9226113 DOI: 10.1038/s41598-022-13023-5]
Abstract
The current gold standard of clinical hearing assessment includes a pure-tone audiogram combined with a word recognition task. This retrospective study tests the hypothesis that deficits in word recognition that cannot be explained by loss in audibility or cognition may reflect underlying cochlear nerve degeneration (CND). We collected the audiological data of nearly 96,000 ears from patients with normal hearing, conductive hearing loss (CHL) and a variety of sensorineural hearing loss (SNHL) etiologies including (1) age-related hearing loss (ARHL); (2) neuropathy related to vestibular schwannoma or neurofibromatosis type 2; (3) Ménière’s disease; (4) sudden sensorineural hearing loss (SSNHL); (5) exposure to ototoxic drugs (carboplatin and/or cisplatin, vancomycin or gentamicin) or (6) noise damage including those with a 4-kHz “noise notch” or reporting occupational or recreational noise exposure. Word recognition was scored using CID W-22 monosyllabic word lists. The Articulation Index was used to predict the speech intelligibility curve using a transfer function for CID W-22. The level at which maximal intelligibility was predicted was used as the presentation level (70 dB HL minimum). Word scores decreased dramatically with age and with increasing thresholds in all groups with SNHL etiologies, but relatively little in the conductive hearing loss group. Discrepancies between measured and predicted word scores were largest in patients with neuropathy, Ménière’s disease and SSNHL, intermediate in the noise-damage and ototoxic-drug groups, and smallest in the ARHL group. In the CHL group, the measured and predicted word scores were very similar. Since word-score predictions assume that audiometric losses can be compensated by increasing stimulus level, their accuracy in predicting word scores for CHL patients is unsurprising. The lack of a strong age effect on word scores in CHL shows that cognitive decline is not a major factor in this test.
Amongst the possible contributions to word score discrepancies, CND is a prime candidate: it should worsen intelligibility without affecting thresholds and has been documented in human temporal bones with SNHL. Comparing the audiological trends observed here with the existing histopathological literature supports the notion that word score discrepancies may be a useful CND metric.
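The presentation-level rule and the discrepancy metric at the heart of the study can be sketched as follows. This is a simplified illustration with hypothetical function names; the authors' actual pipeline derived predicted scores from the Articulation Index transfer function for CID W-22 lists:

```python
def presentation_level(level_of_max_predicted_intelligibility_db_hl, floor_db_hl=70.0):
    """Present words at the level where predicted intelligibility peaks,
    but never below the 70 dB HL minimum described in the study."""
    return max(level_of_max_predicted_intelligibility_db_hl, floor_db_hl)

def word_score_discrepancy(predicted_pct_correct, measured_pct_correct):
    """Predicted-minus-measured word score; larger positive values mean performance
    worse than audibility alone predicts, a candidate marker of CND."""
    return predicted_pct_correct - measured_pct_correct
```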
Affiliation(s)
- Kelsie J Grant
- Eaton-Peabody Laboratories, Massachusetts Eye & Ear, 243 Charles Street, Boston, MA, 02114-3096, USA
| | - Aravindakshan Parthasarathy
- Eaton-Peabody Laboratories, Massachusetts Eye & Ear, 243 Charles Street, Boston, MA, 02114-3096, USA
- Department of Otolaryngology - Head and Neck Surgery, Harvard Medical School, Boston, MA, USA
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA
| | - Viacheslav Vasilkov
- Eaton-Peabody Laboratories, Massachusetts Eye & Ear, 243 Charles Street, Boston, MA, 02114-3096, USA
- Department of Otolaryngology - Head and Neck Surgery, Harvard Medical School, Boston, MA, USA
| | - Benjamin Caswell-Midwinter
- Eaton-Peabody Laboratories, Massachusetts Eye & Ear, 243 Charles Street, Boston, MA, 02114-3096, USA
- Department of Otolaryngology - Head and Neck Surgery, Harvard Medical School, Boston, MA, USA
| | - Maria E Freitas
- Eaton-Peabody Laboratories, Massachusetts Eye & Ear, 243 Charles Street, Boston, MA, 02114-3096, USA
| | - Victor de Gruttola
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, USA
| | - Daniel B Polley
- Eaton-Peabody Laboratories, Massachusetts Eye & Ear, 243 Charles Street, Boston, MA, 02114-3096, USA
- Department of Otolaryngology - Head and Neck Surgery, Harvard Medical School, Boston, MA, USA
| | - M Charles Liberman
- Eaton-Peabody Laboratories, Massachusetts Eye & Ear, 243 Charles Street, Boston, MA, 02114-3096, USA
- Department of Otolaryngology - Head and Neck Surgery, Harvard Medical School, Boston, MA, USA
| | - Stéphane F Maison
- Eaton-Peabody Laboratories, Massachusetts Eye & Ear, 243 Charles Street, Boston, MA, 02114-3096, USA
- Department of Otolaryngology - Head and Neck Surgery, Harvard Medical School, Boston, MA, USA
| |
30
Jahn KN, Hancock KE, Maison SF, Polley DB. Estimated cochlear neural degeneration is associated with loudness hypersensitivity in individuals with normal audiograms. JASA Express Lett 2022; 2:064403. [PMID: 35719240 PMCID: PMC9199082 DOI: 10.1121/10.0011694]
Abstract
In animal models, cochlear neural degeneration (CND) is associated with excess central gain and hyperacusis, but a compelling link between reduced cochlear neural inputs and heightened loudness perception in humans remains elusive. The present study examined whether greater estimated cochlear neural degeneration (eCND) in human participants with normal hearing thresholds is associated with heightened loudness perception and sound aversion. Results demonstrated that loudness perception was heightened in ears with greater eCND and in subjects who self-report loudness aversion via a hyperacusis questionnaire. These findings suggest that CND may be a potential trigger for loudness hypersensitivity.
Collapse
Affiliation(s)
- Kelly N Jahn
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, Massachusetts 02114, USA
| | - Kenneth E Hancock
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, Massachusetts 02114, USA
| | - Stéphane F Maison
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, Massachusetts 02114, USA
| | - Daniel B Polley
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, Massachusetts 02114, USA
| |
31
Naumann LB, Keijser J, Sprekeler H. Invariant neural subspaces maintained by feedback modulation. eLife 2022; 11:e76096. [PMID: 35442191 PMCID: PMC9106332 DOI: 10.7554/elife.76096]
Abstract
Sensory systems reliably process incoming stimuli in spite of changes in context. Most recent models attribute this context invariance to an extraction of increasingly complex sensory features in hierarchical feedforward networks. Here, we study how context-invariant representations can be established by feedback rather than feedforward processing. We show that feedforward neural networks modulated by feedback can dynamically generate invariant sensory representations. The required feedback can be implemented as a slow and spatially diffuse gain modulation. The invariance is not present on the level of individual neurons, but emerges only on the population level. Mechanistically, the feedback modulation dynamically reorients the manifold of neural activity and thereby maintains an invariant neural subspace in spite of contextual variations. Our results highlight the importance of population-level analyses for understanding the role of feedback in flexible sensory processing.
32
Individualized Assays of Temporal Coding in the Ascending Human Auditory System. eNeuro 2022; 9:ENEURO.0378-21.2022. [PMID: 35193890 PMCID: PMC8925652 DOI: 10.1523/eneuro.0378-21.2022]
Abstract
Neural phase-locking to temporal fluctuations is a fundamental and unique mechanism by which acoustic information is encoded by the auditory system. The perceptual role of this metabolically expensive mechanism, the neural phase-locking to temporal fine structure (TFS) in particular, is debated. Although hypothesized, it is unclear whether auditory perceptual deficits in certain clinical populations are attributable to deficits in TFS coding. Efforts to uncover the role of TFS have been impeded by the fact that there are no established assays for quantifying the fidelity of TFS coding at the individual level. While many candidates have been proposed, for an assay to be useful, it should not only intrinsically depend on TFS coding, but should also have the property that individual differences in the assay reflect TFS coding per se over and beyond other sources of variance. Here, we evaluate a range of behavioral and electroencephalogram (EEG)-based measures as candidate individualized measures of TFS sensitivity. Our comparisons of behavioral and EEG-based metrics suggest that extraneous variables dominate both behavioral scores and EEG amplitude metrics, rendering them ineffective. After adjusting behavioral scores using lapse rates, and extracting latency or percent-growth metrics from EEG, interaural timing sensitivity measures exhibit robust behavior-EEG correlations. Together with the fact that unambiguous theoretical links can be made relating binaural measures and phase-locking to TFS, our results suggest that these "adjusted" binaural assays may be well suited for quantifying individual TFS processing.
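The abstract's "adjusting behavioral scores using lapse rates" refers to correcting observed percent correct for trials where attention lapses force a guess. A minimal sketch of one standard form of this correction for a two-alternative task follows; the exact adjustment used in the study may differ:

```python
def lapse_adjusted_p(p_observed, lapse_rate, chance=0.5):
    """Invert p_obs = (1 - lam) * p_true + lam * chance to recover the
    lapse-free probability correct; clip to the valid [chance, 1] range."""
    p_true = (p_observed - lapse_rate * chance) / (1.0 - lapse_rate)
    return min(1.0, max(chance, p_true))
```

Removing this extraneous lapse-driven variance is one way an assay's individual differences can be made to reflect TFS coding per se rather than attentional state.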
33
Yanagi M, Tsuchiya A, Hosomi F, Ozaki S, Shirakawa O. Application of evoked response audiometry for specifying aberrant gamma oscillations in schizophrenia. Sci Rep 2022; 12:287. [PMID: 34997139 PMCID: PMC8741931 DOI: 10.1038/s41598-021-04278-5]
Abstract
Gamma oscillations probed using the auditory steady-state response (ASSR) are promising clinical biomarkers that may give rise to novel therapeutic interventions for schizophrenia. Optimizing clinical settings for these biomarker-driven interventions will require a quick and easy assessment system for gamma oscillations in psychiatry. ASSR has been used in clinical otolaryngology for evoked response audiometry (ERA) to judge hearing loss, by focusing on phase-locked response detectability via an automated analysis system. Herein, a standard ERA system with 40- and 46-Hz ASSRs was applied to evaluate the brain pathophysiology of patients with schizophrenia. Both ASSRs in the ERA system showed excellent detectability of the phase-locked response in healthy subjects and sharply captured the deficits in the phase-locked response caused by aberrant gamma oscillations in individuals with schizophrenia. These findings demonstrate the capability of the ERA system to identify patients who have aberrant gamma oscillations. The ERA system may have the potential to serve as a real-world clinical tool for upcoming biomarker-driven therapeutics in psychiatry.
Affiliation(s)
- Masaya Yanagi
- Department of Neuropsychiatry, Faculty of Medicine, Kindai University, 377-2 Ohnohigashi, Osaka-sayama, Osaka, 589-8511, Japan.
| | - Aki Tsuchiya
- Department of Neuropsychiatry, Faculty of Medicine, Kindai University, 377-2 Ohnohigashi, Osaka-sayama, Osaka, 589-8511, Japan
| | - Fumiharu Hosomi
- Department of Neuropsychiatry, Faculty of Medicine, Kindai University, 377-2 Ohnohigashi, Osaka-sayama, Osaka, 589-8511, Japan
| | | | - Osamu Shirakawa
- Department of Neuropsychiatry, Faculty of Medicine, Kindai University, 377-2 Ohnohigashi, Osaka-sayama, Osaka, 589-8511, Japan
| |
34
Cutting Through the Noise: Noise-Induced Cochlear Synaptopathy and Individual Differences in Speech Understanding Among Listeners With Normal Audiograms. Ear Hear 2022; 43:9-22. [PMID: 34751676 PMCID: PMC8712363 DOI: 10.1097/aud.0000000000001147]
Abstract
Following a conversation in a crowded restaurant or at a lively party poses immense perceptual challenges for some individuals with normal hearing thresholds. A number of studies have investigated whether noise-induced cochlear synaptopathy (CS; damage to the synapses between cochlear hair cells and the auditory nerve following noise exposure that does not permanently elevate hearing thresholds) contributes to this difficulty. A few studies have observed correlations between proxies of noise-induced CS and speech perception in difficult listening conditions, but many have found no evidence of a relationship. To understand these mixed results, we reviewed previous studies that have examined noise-induced CS and performance on speech perception tasks in adverse listening conditions in adults with normal or near-normal hearing thresholds. Our review suggests that superficially similar speech perception paradigms used in previous investigations actually placed very different demands on sensory, perceptual, and cognitive processing. Speech perception tests that use low signal-to-noise ratios and maximize the importance of fine sensory details (specifically, by using test stimuli for which lexical, syntactic, and semantic cues do not contribute to performance) are more likely to show a relationship to estimated CS levels. Thus, the current controversy as to whether or not noise-induced CS contributes to individual differences in speech perception under challenging listening conditions may be due in part to the fact that many of the speech perception tasks used in past studies are relatively insensitive to CS-induced deficits.
35
Petley L, Hunter LL, Zadeh LM, Stewart HJ, Sloat NT, Perdew A, Lin L, Moore DR. Listening Difficulties in Children With Normal Audiograms: Relation to Hearing and Cognition. Ear Hear 2021; 42:1640-1655. [PMID: 34261857 PMCID: PMC8545703 DOI: 10.1097/aud.0000000000001076] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
OBJECTIVES Children presenting at audiology services with caregiver-reported listening difficulties often have normal audiograms. The appropriate approach for the further assessment and clinical management of these children is currently unclear. In this Sensitive Indicators of Childhood Listening Difficulties (SICLiD) study, we assessed listening ability using a reliable and validated caregiver questionnaire (the Evaluation of Children's Listening and Processing Skills [ECLiPS]) in a large (n = 146) and heterogeneous sample of 6- to 13-year-old children with normal audiograms. Scores on the ECLiPS were related to a multifaceted laboratory assessment of the children's audiological, psychoacoustic, physiological, and cognitive abilities. This report is an overview of the SICLiD study and focuses on the children's behavioral performance. The overall goals of SICLiD were to understand the auditory and other neural mechanisms underlying childhood listening difficulties and to translate that understanding into clinical assessment and, ultimately, intervention. DESIGN Cross-sectional behavioral assessment of children with "listening difficulties" and an age-matched "typically developing" control group. Caregivers completed the ECLiPS, and the resulting total standardized composite score formed the basis of further descriptive statistics and univariate and multivariate modeling of experimental data. RESULTS In group comparisons (t-tests and Wilcoxon rank-sum tests), scores on the ECLiPS, the SCAN-3:C (a standardized clinical test suite for auditory processing), and the National Institutes of Health (NIH) Cognition Toolbox were all significantly lower for children with listening difficulties than for their typically developing peers.
A similar effect was observed on the Listening in Spatialized Noise-Sentences (LiSN-S) test of sentence-in-noise intelligibility, but it reached significance only for the Low Cue and High Cue conditions and the Talker Advantage derived score. Stepwise regression on the factors contributing to the ECLiPS Total scaled score (pooled across groups) yielded a model that explained 42% of its variance based on the SCAN-3:C composite, LiSN-S Talker Advantage, and the NIH Toolbox Picture Vocabulary and Dimensional Change Card Sorting scores (F[4, 95] = 17.35, p < 0.001). High correlations were observed between many test scores, including the ECLiPS, SCAN-3:C, and NIH Toolbox composite measures. LiSN-S Advantage measures generally correlated weakly and nonsignificantly with non-LiSN-S measures. However, a significant interaction was found between extended high-frequency threshold and LiSN-S Talker Advantage. CONCLUSIONS Children with listening difficulties but normal audiograms have problems with the cognitive processing of auditory and nonauditory stimuli that include both fluid and crystallized reasoning. Analysis of poor performance on the LiSN-S Talker Advantage measure identified subclinical hearing loss as a minor contributing factor to talker segregation. Beyond auditory tests, evaluations of children with complaints of listening difficulties should include standardized caregiver observations and consideration of broad cognitive abilities.
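The variance-explained and F statistics reported above follow the standard ordinary-least-squares definitions. A minimal sketch on synthetic stand-in scores (the predictors, coefficients, and sample size below are invented for illustration, chosen only so the degrees of freedom match the abstract's F[4, 95]):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100  # hypothetical sample size, giving df = (4, n - 5) = (4, 95)

# Synthetic stand-ins for the four retained predictors (SCAN-3:C composite,
# LiSN-S Talker Advantage, Picture Vocabulary, Card Sorting) -- not real data.
X = rng.standard_normal((n, 4))
y = X @ np.array([0.5, 0.3, 0.4, 0.2]) + rng.standard_normal(n)

Xd = np.column_stack([np.ones(n), X])          # design matrix with intercept
beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)  # least-squares fit
resid = y - Xd @ beta
r2 = 1.0 - resid.var() / y.var()               # proportion of variance explained
# F statistic for k = 4 predictors: F = (R^2 / k) / ((1 - R^2) / (n - k - 1))
f_stat = (r2 / 4) / ((1 - r2) / (n - 4 - 1))
```

With the F value and degrees of freedom in hand, the reported p < 0.001 follows from the F distribution's upper tail.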
Affiliation(s)
- Lauren Petley: Communication Sciences Research Center, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH, USA; Department of Psychology, Clarkson University, Potsdam, NY, USA
- Lisa L. Hunter: Communication Sciences Research Center, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH, USA
- Lina Motlagh Zadeh: Communication Sciences Research Center, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH, USA
- Hannah J. Stewart: Communication Sciences Research Center, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH, USA; Division of Psychology and Language Sciences, University College London, London, UK
- Nicholette T. Sloat: Communication Sciences Research Center, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH, USA
- Audrey Perdew: Communication Sciences Research Center, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH, USA
- Li Lin: Communication Sciences Research Center, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH, USA
- David R. Moore: Communication Sciences Research Center, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH, USA; Department of Otolaryngology, College of Medicine, University of Cincinnati; Manchester Centre for Audiology and Deafness, University of Manchester, Manchester, UK
36
Attia S, King A, Varnet L, Ponsot E, Lorenzi C. Double-pass consistency for amplitude- and frequency-modulation detection in normal-hearing listeners. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2021; 150:3631. [PMID: 34852611 DOI: 10.1121/10.0006811] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/08/2020] [Accepted: 10/05/2021] [Indexed: 06/13/2023]
Abstract
Amplitude modulation (AM) and frequency modulation (FM) provide crucial auditory information. If FM is encoded as AM, it should be possible to give a unified account of AM and FM perception both in terms of response consistency and performance. These two aspects of behavior were estimated for normal-hearing participants using a constant-stimuli, forced-choice detection task repeated twice with the same stimuli (double pass). Sinusoidal AM or FM with rates of 2 or 20 Hz were applied to a 500-Hz pure-tone carrier and presented at detection threshold. All stimuli were masked by a modulation noise. Percent agreement of responses across passes and percent-correct detection for the two passes were used to estimate consistency and performance, respectively. These data were simulated using a model implementing peripheral processes, a central modulation filterbank, an additive internal noise, and a template-matching device. Different levels of internal noise were required to reproduce AM and FM data, but a single level could account for the 2- and 20-Hz AM data. As for FM, two levels of internal noise were needed to account for detection at slow and fast rates. Finally, the level of internal noise yielding best predictions increased with the level of the modulation-noise masker. Overall, these results suggest that different sources of internal variability are involved for AM and FM detection at low audio frequencies.
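The two behavioral quantities in this double-pass paradigm, response consistency across passes and percent-correct performance, can be computed directly from the trial records. A minimal sketch (the function name and toy responses are illustrative, not from the study):

```python
def double_pass_stats(pass1, pass2, correct):
    """Consistency and performance for a double-pass experiment.

    pass1, pass2: listener responses (e.g. 0/1 interval choices) to the
    identical stimulus sequence presented twice; correct: the true answers.
    """
    n = len(correct)
    # Consistency: fraction of trials answered the same way on both
    # passes, regardless of whether the answer was right.
    agreement = sum(a == b for a, b in zip(pass1, pass2)) / n
    # Performance: percent correct, pooled over the two passes.
    pc = (sum(a == c for a, c in zip(pass1, correct)) +
          sum(b == c for b, c in zip(pass2, correct))) / (2 * n)
    return agreement, pc
```

A deterministic observer would score agreement = 1.0 at any performance level; internal noise pushes agreement down toward what percent correct alone would predict, which is why the agreement/performance pair constrains the internal-noise level in the model.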
Affiliation(s)
- Sarah Attia, Andrew King, Léo Varnet, Emmanuel Ponsot, and Christian Lorenzi: Laboratoire des systèmes perceptifs (CNRS 8248), Département d'études cognitives, Ecole normale supérieure, Université Paris Sciences et Lettres, 29 rue d'Ulm, 75005 Paris, France
37
Multiple Cases of Auditory Neuropathy Illuminate the Importance of Subcortical Neural Synchrony for Speech-in-noise Recognition and the Frequency-following Response. Ear Hear 2021; 43:605-619. [PMID: 34619687 DOI: 10.1097/aud.0000000000001122] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES The role of subcortical synchrony in speech-in-noise (SIN) recognition and the frequency-following response (FFR) was examined in multiple listeners with auditory neuropathy. Although an absent FFR has been documented in one listener with idiopathic neuropathy who has severe difficulty recognizing SIN, several etiologies cause the neuropathy phenotype. Consequently, it is necessary to replicate absent FFRs and concomitant SIN difficulties in patients with multiple sources and clinical presentations of neuropathy to elucidate fully the importance of subcortical neural synchrony for the FFR and SIN recognition. DESIGN Case series. Three children with auditory neuropathy (two males with neuropathy attributed to hyperbilirubinemia, one female with a rare missense mutation in the OPA1 gene) were compared to age-matched controls with normal hearing (52 for electrophysiology and 48 for speech recognition testing). Tests included standard audiological evaluations, FFRs, and sentence recognition in noise. The three children with neuropathy had a range of clinical presentations, including moderate sensorineural hearing loss, use of a cochlear implant, and a rapid progressive hearing loss. RESULTS Children with neuropathy generally had good speech recognition in quiet but substantial difficulties in noise. These SIN difficulties were somewhat mitigated by a clear speaking style and presenting words in a high semantic context. In the children with neuropathy, FFRs were absent from all tested stimuli. In contrast, age-matched controls had reliable FFRs. CONCLUSION Subcortical synchrony is subject to multiple forms of disruption but results in a consistent phenotype of an absent FFR and substantial difficulties recognizing SIN. These results support the hypothesis that subcortical synchrony is necessary for the FFR. Thus, in healthy listeners, the FFR may reflect subcortical neural processes important for SIN recognition.
38
Vander Ghinst M, Bourguignon M, Wens V, Naeije G, Ducène C, Niesen M, Hassid S, Choufani G, Goldman S, De Tiège X. Inaccurate cortical tracking of speech in adults with impaired speech perception in noise. Brain Commun 2021; 3:fcab186. [PMID: 34541530 PMCID: PMC8445395 DOI: 10.1093/braincomms/fcab186] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2021] [Revised: 06/05/2021] [Accepted: 06/08/2021] [Indexed: 01/17/2023] Open
Abstract
Impaired speech perception in noise despite normal peripheral auditory function is a common problem in young adults. Despite a growing body of research, the pathophysiology of this impairment remains unknown. This magnetoencephalography study characterizes the cortical tracking of speech in a multi-talker background in a group of highly selected adult subjects with impaired speech perception in noise without peripheral auditory dysfunction. Magnetoencephalographic signals were recorded from 13 subjects with impaired speech perception in noise (six females, mean age: 30 years) and matched healthy subjects while they were listening to 5 different recordings of stories merged with a multi-talker background at different signal-to-noise ratios (No Noise, +10, +5, 0 and −5 dB). The cortical tracking of speech was quantified with coherence between magnetoencephalographic signals and the temporal envelope of (i) the global auditory scene (i.e. the attended speech stream and the multi-talker background noise), (ii) the attended speech stream only and (iii) the multi-talker background noise. Functional connectivity was then estimated between brain areas showing altered cortical tracking of speech in noise in subjects with impaired speech perception in noise and the rest of the brain. All participants demonstrated a selective cortical representation of the attended speech stream in noisy conditions, but subjects with impaired speech perception in noise displayed reduced cortical tracking of speech at the syllable rate (i.e. 4–8 Hz) in all noisy conditions. Increased functional connectivity was observed in subjects with impaired speech perception in noise, in both the No Noise and speech-in-noise conditions, between supratemporal auditory cortices and left-dominant brain areas involved in semantic and attention processes.
The difficulty understanding speech in a multi-talker background in subjects with impaired speech perception in noise thus appears to be related to inaccurate auditory-cortex tracking of speech at the syllable rate. The increased functional connectivity between supratemporal auditory cortices and language/attention-related neocortical areas probably serves to support speech perception and subsequent recognition in adverse auditory scenes. Overall, this study argues for a central origin of impaired speech perception in noise in the absence of any peripheral auditory dysfunction.
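The coherence metric used here for cortical tracking can be illustrated on synthetic signals (the sampling rate, band edges, and signal construction below are illustrative assumptions, not the study's recording parameters):

```python
import numpy as np
from scipy.signal import coherence

fs = 1000.0                      # Hz, illustrative sampling rate
rng = np.random.default_rng(0)
t = np.arange(0, 60.0, 1.0 / fs)

# Toy speech envelope with syllable-rate (~5 Hz) energy plus noise.
envelope = np.sin(2 * np.pi * 5.0 * t) + 0.5 * rng.standard_normal(t.size)
# Toy "MEG" channel: partial tracking of the envelope plus sensor noise.
meg = 0.6 * envelope + rng.standard_normal(t.size)

# Magnitude-squared coherence, averaged over the 4-8 Hz syllable band.
f, coh = coherence(envelope, meg, fs=fs, nperseg=2048)
syllable_tracking = coh[(f >= 4.0) & (f <= 8.0)].mean()
```

Coherence is bounded between 0 (no consistent phase/amplitude relation) and 1 (perfect linear tracking), so a reduced 4-8 Hz value in the impaired group indexes weaker syllable-rate tracking.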
Affiliation(s)
- Marc Vander Ghinst: Laboratoire de Cartographie fonctionnelle du Cerveau, UNI-ULB Neuroscience Institute, Université Libre de Bruxelles (ULB), Brussels 1070, Belgium; Service d'ORL et de chirurgie cervico-faciale, CUB Hôpital Erasme, Université Libre de Bruxelles (ULB), Brussels 1070, Belgium
- Mathieu Bourguignon: Laboratoire de Cartographie fonctionnelle du Cerveau, UNI-ULB Neuroscience Institute, Université Libre de Bruxelles (ULB), Brussels 1070, Belgium; Laboratory of Neurophysiology and Movement Biomechanics, UNI-ULB Neuroscience Institute, Université Libre de Bruxelles (ULB), Brussels 1070, Belgium; Basque Center on Cognition, Brain and Language (BCBL), Donostia/San Sebastian 20009, Spain
- Vincent Wens: Laboratoire de Cartographie fonctionnelle du Cerveau, UNI-ULB Neuroscience Institute, Université Libre de Bruxelles (ULB), Brussels 1070, Belgium; Clinics of Functional Neuroimaging, Service of Nuclear Medicine, CUB Hôpital Erasme, Université Libre de Bruxelles (ULB), Brussels 1070, Belgium
- Gilles Naeije: Laboratoire de Cartographie fonctionnelle du Cerveau, UNI-ULB Neuroscience Institute, Université Libre de Bruxelles (ULB), Brussels 1070, Belgium; Service de Neurologie, ULB-Hôpital Erasme, Université Libre de Bruxelles (ULB), Brussels 1070, Belgium
- Cecile Ducène: Laboratoire de Cartographie fonctionnelle du Cerveau, UNI-ULB Neuroscience Institute, Université Libre de Bruxelles (ULB), Brussels 1070, Belgium; Service d'ORL et de chirurgie cervico-faciale, CUB Hôpital Erasme, Université Libre de Bruxelles (ULB), Brussels 1070, Belgium
- Maxime Niesen: Laboratoire de Cartographie fonctionnelle du Cerveau, UNI-ULB Neuroscience Institute, Université Libre de Bruxelles (ULB), Brussels 1070, Belgium; Service d'ORL et de chirurgie cervico-faciale, CUB Hôpital Erasme, Université Libre de Bruxelles (ULB), Brussels 1070, Belgium
- Sergio Hassid: Service d'ORL et de chirurgie cervico-faciale, CUB Hôpital Erasme, Université Libre de Bruxelles (ULB), Brussels 1070, Belgium
- Georges Choufani: Service d'ORL et de chirurgie cervico-faciale, CUB Hôpital Erasme, Université Libre de Bruxelles (ULB), Brussels 1070, Belgium
- Serge Goldman: Laboratoire de Cartographie fonctionnelle du Cerveau, UNI-ULB Neuroscience Institute, Université Libre de Bruxelles (ULB), Brussels 1070, Belgium; Clinics of Functional Neuroimaging, Service of Nuclear Medicine, CUB Hôpital Erasme, Université Libre de Bruxelles (ULB), Brussels 1070, Belgium
- Xavier De Tiège: Laboratoire de Cartographie fonctionnelle du Cerveau, UNI-ULB Neuroscience Institute, Université Libre de Bruxelles (ULB), Brussels 1070, Belgium; Clinics of Functional Neuroimaging, Service of Nuclear Medicine, CUB Hôpital Erasme, Université Libre de Bruxelles (ULB), Brussels 1070, Belgium
39
Maggu AR, Overath T. An Objective Approach Toward Understanding Auditory Processing Disorder. Am J Audiol 2021; 30:790-795. [PMID: 34153205 DOI: 10.1044/2021_aja-21-00007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022] Open
Abstract
Purpose In the field of audiology, auditory processing disorder (APD) continues to be a topic of ongoing debate for clinicians and scientists alike, both in terms of theory and clinical practice. In the current viewpoint, we first lay out the main issues that are central to the controversy surrounding APD, and then suggest a framework toward their resolution. Method The current viewpoint is informed by reviewing existing studies in the field of APD to better understand the issues contributing to the controversies in APD. Results We found that, within the current definition of APD, the two main issues that make the APD diagnosis controversial are (a) comorbidity with other disorders and (b) the lack of domain specificity. These issues remain unresolved, especially with the use of the existing behavioral APD test batteries. In this viewpoint, we shed light on how they can be mitigated by implementing the administration of an objective, physiological test battery. Conclusions By administering an objective test battery, as proposed in this viewpoint, we believe that it will be possible to achieve a higher degree of specificity to the auditory domain, which will not only inform clinical practice but also strengthen APD as a theoretical construct.
Affiliation(s)
- Akshay R. Maggu: Department of Psychology & Neuroscience, Duke University, Durham, NC
- Tobias Overath: Department of Psychology & Neuroscience, Duke University, Durham, NC; Duke Institute for Brain Sciences, Duke University, Durham, NC; Center for Cognitive Neuroscience, Duke University, Durham, NC
40
Palandrani KN, Hoover EC, Stavropoulos T, Seitz AR, Isarangura S, Gallun FJ, Eddins DA. Temporal integration of monaural and dichotic frequency modulation. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2021; 150:745. [PMID: 34470296 PMCID: PMC8337085 DOI: 10.1121/10.0005729] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/16/2020] [Revised: 06/17/2021] [Accepted: 07/02/2021] [Indexed: 05/06/2023]
Abstract
Frequency modulation (FM) detection at low modulation frequencies is commonly used as an index of temporal fine-structure processing. The present study evaluated the rate of improvement in monaural and dichotic FM across a range of test parameters. In experiment I, dichotic and monaural FM detection was measured as a function of duration and modulator starting phase. Dichotic FM thresholds were lower than monaural FM thresholds and the modulator starting phase had no effect on detection. Experiment II measured monaural FM detection for signals that differed in modulation rate and duration such that the improvement with duration in seconds (carrier) or cycles (modulator) was compared. Monaural FM detection improved monotonically with the number of modulation cycles, suggesting that the modulator is extracted prior to detection. Experiment III measured dichotic FM detection for shorter signal durations to test the hypothesis that dichotic FM relies primarily on the signal onset. The rate of improvement decreased as duration increased, which is consistent with the use of primarily onset cues for the detection of dichotic FM. These results establish that improvement with duration occurs as a function of the modulation cycles at a rate consistent with the independent-samples model for monaural FM, but later cycles contribute less to detection in dichotic FM.
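Under the independent-samples (multiple-looks) account referenced above, each added modulation cycle contributes a statistically independent observation, so sensitivity (d') grows with the square root of the cycle count and threshold falls as its inverse. A sketch of that benchmark prediction (the single-cycle threshold value is arbitrary):

```python
def independent_samples_threshold(single_cycle_threshold, n_cycles):
    """Predicted detection threshold after n_cycles modulation cycles,
    assuming each cycle is an independent look: d' grows as
    sqrt(n_cycles), so threshold improves as n_cycles ** -0.5."""
    return single_cycle_threshold * n_cycles ** -0.5
```

On this prediction, quadrupling the number of cycles halves the threshold; it is the slope against which the monaural data were found consistent, while dichotic FM improved more slowly because later cycles contribute less than the onset.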
Affiliation(s)
- Katherine N Palandrani: Department of Communication Sciences and Disorders, University of Maryland, College Park, Maryland 20742, USA
- Eric C Hoover: Department of Communication Sciences and Disorders, University of Maryland, College Park, Maryland 20742, USA
- Trevor Stavropoulos: Brain Game Center, University of California Riverside, Riverside, California 92521, USA
- Aaron R Seitz: Department of Psychology, University of California Riverside, Riverside, California 92521, USA
- Sittiprapa Isarangura: Department of Communication Sciences and Disorders, Mahidol University, Phaya Thai, Bangkok 10400, Thailand
- Frederick J Gallun: Oregon Hearing Research Center, Oregon Health and Science University, Portland, Oregon 97239, USA
- David A Eddins: Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida 33620, USA
41
Zhang M, Alamatsaz N, Ihlefeld A. Hemodynamic Responses Link Individual Differences in Informational Masking to the Vicinity of Superior Temporal Gyrus. Front Neurosci 2021; 15:675326. [PMID: 34366772 PMCID: PMC8339305 DOI: 10.3389/fnins.2021.675326] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2021] [Accepted: 05/13/2021] [Indexed: 01/20/2023] Open
Abstract
Suppressing unwanted background sound is crucial for aural communication. A particularly disruptive type of background sound, informational masking (IM), often interferes in social settings. However, IM mechanisms are incompletely understood. At present, IM is identified operationally: a target that should be audible based on suprathreshold target/masker energy ratios nevertheless cannot be heard because target-like background sound interferes. Here we confirm that speech identification thresholds differ dramatically between low- and high-IM background sound. However, speech detection thresholds are comparable across the two conditions. Moreover, functional near-infrared spectroscopy recordings show that task-evoked blood oxygenation changes near the superior temporal gyrus (STG) covary with behavioral speech detection performance for high-IM but not low-IM background sound, suggesting that the STG is part of an IM-dependent network. Furthermore, listeners who are more vulnerable to IM show increased hemodynamic recruitment near STG, an effect that cannot be explained based on differences in task difficulty across low- vs. high-IM. In contrast, task-evoked responses near another auditory region of cortex, the caudal inferior frontal sulcus (cIFS), do not predict behavioral sensitivity, suggesting that the cIFS belongs to an IM-independent network. Results are consistent with the idea that cortical gating shapes individual vulnerability to IM.
Affiliation(s)
- Min Zhang: Department of Biomedical Engineering, New Jersey Institute of Technology, Newark, NJ, United States; Rutgers Biomedical and Health Sciences, Rutgers University, Newark, NJ, United States
- Nima Alamatsaz: Department of Biomedical Engineering, New Jersey Institute of Technology, Newark, NJ, United States; Rutgers Biomedical and Health Sciences, Rutgers University, Newark, NJ, United States
- Antje Ihlefeld: Department of Biomedical Engineering, New Jersey Institute of Technology, Newark, NJ, United States
42
Listening to speech with a guinea pig-to-human brain-to-brain interface. Sci Rep 2021; 11:12231. [PMID: 34112826 PMCID: PMC8192924 DOI: 10.1038/s41598-021-90823-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2020] [Accepted: 05/12/2021] [Indexed: 11/30/2022] Open
Abstract
Nicolelis wrote in his 2003 review on brain-machine interfaces (BMIs) that the design of a successful BMI relies on general physiological principles describing how neuronal signals are encoded. Our study explored whether neural information exchanged between brains of different species is possible, similar to the information exchange between computers. We show for the first time that single words processed by the guinea pig auditory system are intelligible to humans who receive the processed information via a cochlear implant. We recorded the neural response patterns to single spoken words with multi-channel electrodes from the guinea pig inferior colliculus. The recordings served as a blueprint for trains of biphasic, charge-balanced electrical pulses, which a cochlear implant delivered to the cochlear implant user’s ear. Study participants completed a four-word forced-choice test and identified the correct word in 34.8% of trials. The participants' recognition, defined by the ability to choose the same word twice, whether right or wrong, was 53.6%. For all sessions, the participants received no training and no feedback. The results show that lexical information can be transmitted from an animal to a human auditory system. In the discussion, we contemplate how learning from the animals might help develop novel coding strategies.
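Whether the 34.8% identification rate beats the 25% chance level of a four-word forced choice can be checked with an exact binomial tail. A sketch (the trial count below is hypothetical, since the abstract does not report it):

```python
from math import comb

def p_at_least(k, n, p_chance):
    """Exact one-sided binomial probability P(X >= k) under pure guessing."""
    return sum(comb(n, i) * p_chance**i * (1 - p_chance)**(n - i)
               for i in range(k, n + 1))

# Hypothetical example: 70 of 200 trials correct (35%) against 25% chance.
p_value = p_at_least(70, 200, 0.25)
```

With a few hundred trials, a rate near 35% against 25% chance falls well below the conventional 0.05 criterion; with very few trials it would not, which is why the trial count matters for interpreting the result.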
43
Herrmann B, Butler BE. Hearing loss and brain plasticity: the hyperactivity phenomenon. Brain Struct Funct 2021; 226:2019-2039. [PMID: 34100151 DOI: 10.1007/s00429-021-02313-9] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2020] [Accepted: 06/03/2021] [Indexed: 12/22/2022]
Abstract
Many aging adults experience some form of hearing problems that may arise from auditory peripheral damage. However, it has been increasingly acknowledged that hearing loss is not only a dysfunction of the auditory periphery but also results from changes within the entire auditory system, from periphery to cortex. Damage to the auditory periphery is associated with an increase in neural activity at various stages throughout the auditory pathway. Here, we review neurophysiological evidence of hyperactivity, auditory perceptual difficulties that may result from hyperactivity, and outline open conceptual and methodological questions related to the study of hyperactivity. We suggest that hyperactivity alters all aspects of hearing (including spectral, temporal, and spatial hearing) and, in turn, impairs speech comprehension when background sound is present. By focusing on the perceptual consequences of hyperactivity and the potential challenges of investigating hyperactivity in humans, we hope to bring animal and human electrophysiologists closer together to better understand hearing problems in older adulthood.
Affiliation(s)
- Björn Herrmann: Rotman Research Institute, Baycrest, Toronto, ON, M6A 2E1, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada
- Blake E Butler: Department of Psychology & The Brain and Mind Institute, University of Western Ontario, London, ON, Canada; National Centre for Audiology, University of Western Ontario, London, ON, Canada
44
Compression and amplification algorithms in hearing aids impair the selectivity of neural responses to speech. Nat Biomed Eng 2021; 6:717-730. [PMID: 33941898 PMCID: PMC7612903 DOI: 10.1038/s41551-021-00707-y] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2020] [Accepted: 02/25/2021] [Indexed: 02/07/2023]
Abstract
In quiet environments, hearing aids improve the perception of low-intensity sounds. However, for high-intensity sounds in background noise, the aids often fail to provide a benefit to the wearer. Here, by using large-scale single-neuron recordings from hearing-impaired gerbils — an established animal model of human hearing — we show that hearing aids restore the sensitivity of neural responses to speech, but not their selectivity. Rather than reflecting a deficit in supra-threshold auditory processing, the low selectivity is a consequence of hearing-aid compression (which decreases the spectral and temporal contrasts of incoming sound) and of amplification (which distorts neural responses, regardless of whether hearing is impaired). Processing strategies that avoid the trade-off between neural sensitivity and selectivity should improve the performance of hearing aids.
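The contrast-reducing effect of compression described here can be seen from the static input-output curve alone. A toy sketch (the knee point and compression ratio are illustrative, not the study's hearing-aid settings):

```python
def compress_db(level_db, knee_db=50.0, ratio=3.0):
    """Static curve of a toy wide-dynamic-range compressor: above the
    knee, `ratio` dB of input growth yields only 1 dB of output growth."""
    if level_db <= knee_db:
        return level_db
    return knee_db + (level_db - knee_db) / ratio

# A 20-dB contrast between two supra-knee speech features shrinks to 20/3 dB.
contrast_in = 80.0 - 60.0
contrast_out = compress_db(80.0) - compress_db(60.0)
```

Any spectral or temporal contrast between two levels above the knee is divided by the compression ratio, which is the mechanism the paper identifies for reduced neural selectivity.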
45
Pienkowski M. Loud Music and Leisure Noise Is a Common Cause of Chronic Hearing Loss, Tinnitus and Hyperacusis. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2021; 18:4236. [PMID: 33923580 PMCID: PMC8073416 DOI: 10.3390/ijerph18084236] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/15/2021] [Revised: 04/12/2021] [Accepted: 04/14/2021] [Indexed: 12/20/2022]
Abstract
High sound levels capable of permanently damaging the ear are experienced not only in factories and war zones but in concert halls, nightclubs, sports stadiums, and many other leisure environments. This review summarizes evidence that loud music and other forms of "leisure noise" are common causes of noise-induced hearing loss, tinnitus, and hyperacusis, even if audiometric thresholds initially remain within clinically normal limits. Given the huge global burden of preventable noise-induced hearing loss, noise limits should be adopted in a much broader range of settings, and education to promote hearing conservation should be a higher public health priority.
Affiliation(s)
- Martin Pienkowski: Osborne College of Audiology, Salus University, Elkins Park, PA 19027, USA
46
Hennessy S, Wood A, Wilcox R, Habibi A. Neurophysiological improvements in speech-in-noise task after short-term choir training in older adults. Aging (Albany NY) 2021; 13:9468-9495. [PMID: 33824226 PMCID: PMC8064162 DOI: 10.18632/aging.202931] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2020] [Accepted: 03/26/2021] [Indexed: 01/24/2023]
Abstract
Perceiving speech in noise (SIN) is important for health and well-being and decreases with age. Musicians show improved speech-in-noise abilities and reduced age-related auditory decline, yet it is unclear whether short-term music engagement has similar effects. In this randomized controlled trial we used a pre-post design to investigate whether a 12-week music intervention in adults aged 50-65 without prior music training and with subjective hearing loss improves well-being, speech-in-noise abilities, and auditory encoding and voluntary attention as indexed by auditory evoked potentials (AEPs) in a syllable-in-noise task, and later AEPs in an oddball task. Age- and gender-matched adults were randomized to a choir or control group. Choir participants sang in a 2-hr ensemble with 1-hr home vocal training weekly; controls listened to a 3-hr playlist weekly, attended concerts, and socialized online with fellow participants. From pre- to post-intervention, no differences between groups were observed on quantitative measures of well-being or behavioral speech-in-noise abilities. In the choir group, but not the control group, changes in the N1 component were observed for the syllable-in-noise task, with increased N1 amplitude in the passive condition and decreased N1 latency in the active condition. During the oddball task, larger N1 amplitudes to the frequent standard stimuli were also observed in the choir but not control group from pre to post intervention. Findings have implications for the potential role of music training to improve sound encoding in individuals who are in the vulnerable age range and at risk of auditory decline.
Collapse
Affiliation(s)
- Sarah Hennessy
- Brain and Creativity Institute, University of Southern California, Los Angeles, CA 90089, USA
| | - Alison Wood
- Brain and Creativity Institute, University of Southern California, Los Angeles, CA 90089, USA
| | - Rand Wilcox
- Department of Psychology, University of Southern California, Los Angeles, CA 90089, USA
| | - Assal Habibi
- Brain and Creativity Institute, University of Southern California, Los Angeles, CA 90089, USA
| |
Collapse
|
47
|
Resnik J, Polley DB. Cochlear neural degeneration disrupts hearing in background noise by increasing auditory cortex internal noise. Neuron 2021; 109:984-996.e4. [PMID: 33561398 PMCID: PMC7979519 DOI: 10.1016/j.neuron.2021.01.015] [Citation(s) in RCA: 59] [Impact Index Per Article: 19.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2020] [Revised: 12/09/2020] [Accepted: 01/14/2021] [Indexed: 12/29/2022]
Abstract
Correlational evidence in humans suggests that selective difficulties hearing in noisy, social settings may reflect premature auditory nerve degeneration. Here, we induced primary cochlear neural degeneration (CND) in adult mice and found direct behavioral evidence for selective detection deficits in background noise. To identify central determinants for this perceptual disorder, we tracked daily changes in ensembles of layer 2/3 auditory cortex parvalbumin-expressing inhibitory neurons and excitatory pyramidal neurons with chronic two-photon calcium imaging. CND induced distinct forms of plasticity in cortical excitatory and inhibitory neurons that culminated in net hyperactivity, increased neural gain, and reduced adaptation to background noise. Ensemble activity measured while mice detected targets in noise could accurately decode whether individual behavioral trials were hits or misses. After CND, random surges of hypercorrelated cortical activity occurring just before target onset reliably predicted impending detection failures, revealing a source of internal cortical noise underlying perceptual difficulties in external noise.
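Decoding hit versus miss trials from ensemble activity, as described above, can be sketched with a plain logistic decoder on synthetic firing rates. All data and parameters below are illustrative stand-ins, not the study's recordings or decoding method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "ensemble activity": n_trials x n_neurons firing rates,
# with a small mean shift on hit trials (purely illustrative).
n_trials, n_neurons = 200, 30
labels = rng.integers(0, 2, n_trials)       # 1 = hit, 0 = miss
rates = rng.normal(size=(n_trials, n_neurons))
rates[labels == 1] += 0.8                   # separable signal

def fit_logistic(X, y, lr=0.1, steps=500):
    """Plain gradient-descent logistic regression (no regularization)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        z = np.clip(X @ w + b, -60.0, 60.0)
        p = 1.0 / (1.0 + np.exp(-z))        # predicted P(hit)
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

w, b = fit_logistic(rates, labels)
pred = (rates @ w + b) > 0
accuracy = np.mean(pred == labels)
```

With a clear mean shift between trial types, the decoder separates hits from misses well above chance; held-out validation would be needed for a real analysis.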
Collapse
Affiliation(s)
- Jennifer Resnik
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, MA 02114, USA; Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, MA 02114, USA
| | - Daniel B Polley
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, MA 02114, USA; Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, MA 02114, USA.
| |
Collapse
|
48
|
Haro S, Smalt CJ, Ciccarelli GA, Quatieri TF. Deep Neural Network Model of Hearing-Impaired Speech-in-Noise Perception. Front Neurosci 2020; 14:588448. [PMID: 33384579 PMCID: PMC7770113 DOI: 10.3389/fnins.2020.588448] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2020] [Accepted: 11/10/2020] [Indexed: 01/15/2023] Open
Abstract
Many individuals struggle to understand speech in listening scenarios that include reverberation and background noise. An individual's ability to understand speech arises from a combination of peripheral auditory function, central auditory function, and general cognitive abilities. The interaction of these factors complicates the prescription of treatment or therapy to improve hearing function. Damage to the auditory periphery can be studied in animals; however, this method alone is not enough to understand the impact of hearing loss on speech perception. Computational auditory models bridge the gap between animal studies and human speech perception. Perturbations to the modeled auditory systems can permit mechanism-based investigations into observed human behavior. In this study, we propose a computational model that accounts for the complex interactions between different hearing damage mechanisms and simulates human speech-in-noise perception. The model performs a digit classification task as a human would, with only acoustic sound pressure as input. Thus, we can use the model's performance as a proxy for human performance. This two-stage model consists of a biophysical cochlear-nerve spike generator followed by a deep neural network (DNN) classifier. We hypothesize that sudden damage to the periphery affects speech perception and that central nervous system adaptation over time may compensate for peripheral hearing damage. Our model achieved human-like performance across signal-to-noise ratios (SNRs) under normal-hearing (NH) cochlear settings, achieving 50% digit recognition accuracy at -20.7 dB SNR. Results were comparable to eight NH participants on the same task who achieved 50% behavioral performance at -22 dB SNR. We also simulated medial olivocochlear reflex (MOCR) and auditory nerve fiber (ANF) loss, which worsened digit-recognition accuracy at lower SNRs compared to higher SNRs. Our simulated performance following ANF loss is consistent with the hypothesis that cochlear synaptopathy impacts communication in background noise more so than in quiet. Following the insult of various cochlear degradations, we implemented extreme and conservative adaptation through the DNN. At the lowest SNRs (<0 dB), both adapted models were unable to fully recover NH performance, even with hundreds of thousands of training samples. This implies a limit on performance recovery following peripheral damage in our human-inspired DNN architecture.
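The 50% digit-recognition thresholds quoted above can be illustrated with a simple interpolation over an accuracy-versus-SNR sweep. The sweep values below are hypothetical, not the model's or the participants' results:

```python
import numpy as np

def snr_at_threshold(snrs, accuracies, threshold=0.5):
    """Linearly interpolate the SNR at which accuracy crosses `threshold`.

    Assumes accuracy increases monotonically with SNR, as in a typical
    psychometric sweep; x for np.interp is accuracy, y is SNR.
    """
    snrs = np.asarray(snrs, dtype=float)
    acc = np.asarray(accuracies, dtype=float)
    order = np.argsort(snrs)
    return float(np.interp(threshold, acc[order], snrs[order]))

# Hypothetical sweep (illustrative values only, not from the study):
snrs = [-30, -25, -20, -15, -10, -5]
acc = [0.10, 0.25, 0.48, 0.80, 0.95, 0.99]
thr = snr_at_threshold(snrs, acc)
```

A fitted psychometric function (e.g. a logistic) would normally replace the linear interpolation; the crossing-point idea is the same.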
Collapse
Affiliation(s)
- Stephanie Haro
- Human Health and Performance Systems, Massachusetts Institute of Technology Lincoln Laboratory, Lexington, MA, United States
- Speech and Hearing Biosciences and Technology, Harvard Medical School, Boston, MA, United States
| | - Christopher J. Smalt
- Human Health and Performance Systems, Massachusetts Institute of Technology Lincoln Laboratory, Lexington, MA, United States
| | - Gregory A. Ciccarelli
- Human Health and Performance Systems, Massachusetts Institute of Technology Lincoln Laboratory, Lexington, MA, United States
| | - Thomas F. Quatieri
- Human Health and Performance Systems, Massachusetts Institute of Technology Lincoln Laboratory, Lexington, MA, United States
- Speech and Hearing Biosciences and Technology, Harvard Medical School, Boston, MA, United States
| |
Collapse
|
49
|
Koerner TK, Papesh MA, Gallun FJ. A Questionnaire Survey of Current Rehabilitation Practices for Adults With Normal Hearing Sensitivity Who Experience Auditory Difficulties. Am J Audiol 2020; 29:738-761. [PMID: 32966118 DOI: 10.1044/2020_aja-20-00027] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/13/2023] Open
Abstract
Purpose A questionnaire survey was conducted to collect information from clinical audiologists about rehabilitation options for adult patients who report significant auditory difficulties despite having normal or near-normal hearing sensitivity. This work aimed to provide more information about what audiologists are currently doing in the clinic to manage auditory difficulties in this patient population and their views on the efficacy of recommended rehabilitation methods. Method A questionnaire survey containing multiple-choice and open-ended questions was developed and disseminated online. Invitations to participate were delivered via e-mail listservs and through business cards provided at annual audiology conferences. All responses were anonymous at the time of data collection. Results Responses were collected from 209 participants. The majority of participants reported seeing at least one normal-hearing patient per month who reported significant communication difficulties. However, few respondents indicated that their location had specific protocols for the treatment of these patients. Counseling was reported as the most frequent rehabilitation method, but results revealed that audiologists across various work settings are also successfully starting to fit patients with mild-gain hearing aids. Responses indicated that patient compliance with computer-based auditory training methods was regarded as low, with patients generally preferring device-based rehabilitation options. Conclusions Results from this questionnaire survey strongly suggest that audiologists frequently see normal-hearing patients who report auditory difficulties, but that few clinicians are equipped with established protocols for diagnosis and management. While many feel that mild-gain hearing aids provide considerable benefit for these patients, very little research has been conducted to date to support the use of hearing aids or other rehabilitation options for this unique patient population. This study reveals the critical need for additional research to establish evidence-based practice guidelines that will empower clinicians to provide a high level of clinical care and effective rehabilitation strategies to these patients.
Collapse
Affiliation(s)
- Tess K. Koerner
- VA RR&D National Center for Rehabilitative Auditory Research, VA Portland Health Care System, OR
| | - Melissa A. Papesh
- VA RR&D National Center for Rehabilitative Auditory Research, VA Portland Health Care System, OR
- Department of Otolaryngology - Head & Neck Surgery, Oregon Health & Science University, Portland
| | - Frederick J. Gallun
- VA RR&D National Center for Rehabilitative Auditory Research, VA Portland Health Care System, OR
- Department of Otolaryngology - Head & Neck Surgery, Oregon Health & Science University, Portland
| |
Collapse
|
50
|
Koerner TK, Muralimanohar RK, Gallun FJ, Billings CJ. Age-Related Deficits in Electrophysiological and Behavioral Measures of Binaural Temporal Processing. Front Neurosci 2020; 14:578566. [PMID: 33192263 PMCID: PMC7654338 DOI: 10.3389/fnins.2020.578566] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2020] [Accepted: 09/25/2020] [Indexed: 01/15/2023] Open
Abstract
Binaural processing, particularly the processing of interaural phase differences, is important for sound localization and speech understanding in background noise. Age has been shown to impact the neural encoding and perception of these binaural temporal cues, even in individuals with clinically normal hearing sensitivity. This work used a new electrophysiological response, called the interaural phase modulation-following response (IPM-FR), to examine the effects of age on the neural encoding of interaural phase difference cues. Relationships between neural recordings and performance on several behavioral measures of binaural processing were used to determine whether the IPM-FR is predictive of interaural phase difference sensitivity and functional speech understanding deficits. Behavioral binaural frequency modulation detection thresholds were measured to assess sensitivity to interaural phase differences, while spatial release-from-masking thresholds were used to assess speech understanding abilities in spatialized noise. Thirty adults between the ages of 35 and 74 years with normal low-frequency hearing thresholds participated in this study. Data showed that older participants had weaker neural responses to the interaural phase difference cue and were less able to take advantage of binaural cues for speech understanding compared to younger participants. Results also showed that the IPM-FR was predictive of performance on the binaural frequency modulation detection task, but not on the spatial release-from-masking task after accounting for the effects of age. These results confirm previous work showing that the IPM-FR reflects age-related declines in binaural temporal processing and provide further evidence that this response may represent a useful objective tool for assessing binaural function. However, further research is needed to understand how the IPM-FR is related to speech understanding abilities.
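The interaural phase modulation stimulus underlying an IPM-FR can be sketched as an amplitude-modulated tone whose interaural phase difference (IPD) flips sign periodically at envelope minima. The carrier, modulation rate, IPD, and switching interval below are illustrative assumptions, not the parameters used in the study:

```python
import numpy as np

def ipm_stimulus(fc=520.0, fm=40.0, ipd_deg=90.0, cycles_per_interval=5,
                 dur=1.0, fs=48000):
    """Sketch of an interaural phase modulation (IPM) stimulus.

    An amplitude-modulated tone whose IPD flips sign every
    `cycles_per_interval` modulation cycles; flips land on envelope
    minima, so no waveform discontinuity is audible. Values are
    illustrative only.
    """
    t = np.arange(int(dur * fs)) / fs
    env = 0.5 * (1.0 - np.cos(2.0 * np.pi * fm * t))   # raised-cosine AM
    interval = np.floor(t * fm / cycles_per_interval).astype(int)
    sign = np.where(interval % 2 == 0, 1.0, -1.0)      # alternating IPD sign
    half_ipd = np.deg2rad(ipd_deg) / 2.0               # split across the ears
    left = env * np.sin(2.0 * np.pi * fc * t + sign * half_ipd)
    right = env * np.sin(2.0 * np.pi * fc * t - sign * half_ipd)
    return np.stack([left, right])                     # shape (2, n_samples)

stim = ipm_stimulus()
```

The EEG response phase-locked to the IPD switching rate is what the IPM-FR measures; a monaural listener receives identical envelopes in both conditions, which is what isolates the binaural cue.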
Collapse
Affiliation(s)
- Tess K. Koerner
- VA RR&D National Center for Rehabilitative Auditory Research, VA Portland Health Care System, Portland, OR, United States
| | - Ramesh Kumar Muralimanohar
- VA RR&D National Center for Rehabilitative Auditory Research, VA Portland Health Care System, Portland, OR, United States
- Department of Otolaryngology/Head and Neck Surgery, Oregon Health & Science University, Portland, OR, United States
| | - Frederick J. Gallun
- VA RR&D National Center for Rehabilitative Auditory Research, VA Portland Health Care System, Portland, OR, United States
- Department of Otolaryngology/Head and Neck Surgery, Oregon Health & Science University, Portland, OR, United States
| | - Curtis J. Billings
- VA RR&D National Center for Rehabilitative Auditory Research, VA Portland Health Care System, Portland, OR, United States
- Department of Otolaryngology/Head and Neck Surgery, Oregon Health & Science University, Portland, OR, United States
| |
Collapse
|