1
Koohi N, Holmes S, Male A, Bamiou DE, Dudziec MM, Ramdharry GM, Pizzamiglio C, Hanna MG, Pitceathly RDS, Kaski D. Beyond the cochlea: exploring the multifaceted nature of hearing loss in primary mitochondrial diseases. Brain Commun 2024; 6:fcae374. PMID: 39584158; PMCID: PMC11583428; DOI: 10.1093/braincomms/fcae374.
Abstract
Primary mitochondrial diseases, with diverse systemic manifestations, often present with auditory impairments due to mitochondrial dysfunction. This study provides an in-depth exploration of auditory deficits in primary mitochondrial diseases, highlighting the impact of various pathogenic variants on both cochlear and neural/central auditory functions. An observational study involving 72 adults with primary mitochondrial diseases was conducted. Participants underwent extensive audiological evaluations including pure-tone audiometry, tympanometry, acoustic reflex thresholds, the Quick Speech-in-Noise test, the Listening in Spatialized Noise-Sentences test, auditory brainstem responses and distortion product otoacoustic emissions. Multivariate analysis of covariance and logistic regression analyses assessed the influence of various pathogenic DNA variants, accounting for age, cognitive status (Montreal Cognitive Assessment) and disease severity (Newcastle Mitochondrial Disease Adult Scale). Participants with the pathogenic m.3243A>G/T variants (m.3243A>G n = 40; m.3243A>T n = 1) exhibited significant elevations in pure-tone audiometry thresholds, especially at high frequencies, suggesting cochlear involvement. Notably, the Listening in Spatialized Noise-Sentences test showed significant spatial processing deficits in the m.3243A>G/T group, possibly indicating a unique mutation-specific impact on central auditory processing. Auditory brainstem response results highlighted a higher likelihood of abnormalities in this group, further substantiating neural/central auditory pathway involvement. This study emphasizes the heterogeneous nature of hearing impairment in primary mitochondrial diseases, with a genotype-phenotype correlation, particularly in the m.3243A>G/T group. These insights advocate for personalized, genotype-specific auditory assessments and targeted management strategies. Conventional hearing aids and cochlear implants are ineffective for those with central auditory dysfunction related to mitochondrial mutations. There is an urgent need for innovative rehabilitation strategies catering for both cochlear and neural/central auditory pathways.
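The covariate-adjusted analysis described in the abstract (MANCOVA plus logistic regression, adjusting for age, cognitive status and disease severity) could be set up roughly as sketched below. This is a minimal illustration with simulated stand-in data: the column names (pta_low, pta_high, abr_abnormal, genotype, age, moca, nmdas) and their coding are hypothetical placeholders, not the study's actual variables.

```python
# Minimal sketch, assuming hypothetical variable names and simulated data;
# it is not the study's analysis code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(42)
n = 72
df = pd.DataFrame({
    "genotype": rng.choice(["m3243", "other"], size=n),   # hypothetical grouping
    "age": rng.uniform(20, 70, size=n),
    "moca": rng.integers(18, 31, size=n),                 # cognitive screening score
    "nmdas": rng.integers(0, 40, size=n),                 # disease-severity score
    "pta_low": rng.normal(25, 10, size=n),                # dB HL, low frequencies
    "pta_high": rng.normal(45, 15, size=n),               # dB HL, high frequencies
    "abr_abnormal": rng.integers(0, 2, size=n),           # 0/1 ABR classification
})

# MANCOVA-style multivariate test of audiometric outcomes by genotype, with covariates
print(MANOVA.from_formula("pta_low + pta_high ~ genotype + age + moca + nmdas",
                          data=df).mv_test())

# Covariate-adjusted logistic regression for the odds of an abnormal ABR
print(smf.logit("abr_abnormal ~ genotype + age + moca + nmdas", data=df).fit().summary())
```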
Affiliation(s)
- Nehzat Koohi
  - Department of Clinical and Movement Neurosciences, University College London Queen Square Institute of Neurology, London WC1N 3BG, UK
  - The Ear Institute, University College London, London WC1X 8EE, UK
- Sarah Holmes
  - NHS Highly Specialised Service for Rare Mitochondrial Disorders, Queen Square Centre for Neuromuscular Diseases, The National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
  - Queen Square Centre for Neuromuscular Diseases, The National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Amanda Male
  - Department of Clinical and Movement Neurosciences, University College London Queen Square Institute of Neurology, London WC1N 3BG, UK
- Doris-Eva Bamiou
  - The Ear Institute, University College London, London WC1X 8EE, UK
  - National Institute for Health Research, University College London Hospitals Biomedical Research Centre (Deafness and Hearing Problems Theme), London WC1X 8EE, UK
- Magdalena M Dudziec
  - Queen Square Centre for Neuromuscular Diseases, The National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
  - Department of Neuromuscular Diseases, University College London Queen Square Institute of Neurology, London WC1N 3BG, UK
- Gita M Ramdharry
  - Queen Square Centre for Neuromuscular Diseases, The National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
  - Department of Neuromuscular Diseases, University College London Queen Square Institute of Neurology, London WC1N 3BG, UK
- Chiara Pizzamiglio
  - NHS Highly Specialised Service for Rare Mitochondrial Disorders, Queen Square Centre for Neuromuscular Diseases, The National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
  - Department of Neuromuscular Diseases, University College London Queen Square Institute of Neurology, London WC1N 3BG, UK
- Michael G Hanna
  - NHS Highly Specialised Service for Rare Mitochondrial Disorders, Queen Square Centre for Neuromuscular Diseases, The National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
  - Department of Neuromuscular Diseases, University College London Queen Square Institute of Neurology, London WC1N 3BG, UK
- Robert D S Pitceathly
  - NHS Highly Specialised Service for Rare Mitochondrial Disorders, Queen Square Centre for Neuromuscular Diseases, The National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
  - Department of Neuromuscular Diseases, University College London Queen Square Institute of Neurology, London WC1N 3BG, UK
- Diego Kaski
  - Department of Clinical and Movement Neurosciences, University College London Queen Square Institute of Neurology, London WC1N 3BG, UK
  - The Ear Institute, University College London, London WC1X 8EE, UK
2
de Hoz L, McAlpine D. Noises on-How the Brain Deals with Acoustic Noise. Biology 2024; 13:501. PMID: 39056695; PMCID: PMC11274191; DOI: 10.3390/biology13070501.
Abstract
What is noise? When does a sound form part of the acoustic background and when might it come to our attention as part of the foreground? Our brain seems to filter out irrelevant sounds in a seemingly effortless process, but how this is achieved remains opaque and, to date, unparalleled by any algorithm. In this review, we discuss how noise can be both background and foreground, depending on what a listener/brain is trying to achieve. We do so by addressing questions concerning the brain's potential bias to interpret certain sounds as part of the background, the extent to which the interpretation of sounds depends on the context in which they are heard, as well as their ethological relevance, task-dependence, and a listener's overall mental state. We explore these questions with specific regard to the implicit, or statistical, learning of sounds and the role of feedback loops between cortical and subcortical auditory structures.
Affiliation(s)
- Livia de Hoz
  - Neuroscience Research Center, Charité—Universitätsmedizin Berlin, 10117 Berlin, Germany
  - Bernstein Center for Computational Neuroscience, 10115 Berlin, Germany
- David McAlpine
  - Neuroscience Research Center, Charité—Universitätsmedizin Berlin, 10117 Berlin, Germany
  - Department of Linguistics, Macquarie University Hearing, Australian Hearing Hub, Sydney, NSW 2109, Australia
3
Kipping D, Nogueira W. A Computational Model of a Single Auditory Nerve Fiber for Electric-Acoustic Stimulation. J Assoc Res Otolaryngol 2022; 23:835-858. PMID: 36333573; PMCID: PMC9789289; DOI: 10.1007/s10162-022-00870-2.
Abstract
Cochlear implant (CI) recipients with preserved acoustic low-frequency hearing in the implanted ear are a growing group among traditional CI users who benefit from hybrid electric-acoustic stimulation (EAS). However, combined ipsilateral electric and acoustic stimulation also introduces interactions between the two modalities that can affect the performance of EAS users. A computational model of a single auditory nerve fiber that is excited by EAS was developed to study the interaction between electric and acoustic stimulation. Two existing models of sole electric or acoustic stimulation were coupled to simulate responses to combined EAS. Different methods of combining both models were implemented. In the coupled model variant, the refractoriness of the simulated fiber leads to suppressive interaction between electrically evoked and acoustically evoked spikes as well as spontaneous activity. The second model variant is an uncoupled EAS model without electric-acoustic interaction. By comparing predictions between the coupled and the noninteracting EAS model, it was possible to infer electric-acoustic interaction at the level of the auditory nerve. The EAS model was used to simulate fiber populations with realistic inter-unit variability, where each unit was represented by the single-fiber model. Predicted thresholds and dynamic ranges, spike rates, latencies, jitter, and vector strengths were compared to empirical data. The presented EAS model provides a framework for future studies of peripheral electric-acoustic interaction.
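The central idea of the coupled variant, in which fiber refractoriness lets spikes evoked by one modality suppress those evoked by the other, can be illustrated with a toy sketch. This is not the published model: the refractory period, pulse rate and spike times below are arbitrary illustrative values.

```python
# Toy illustration (assumptions, not the published EAS model): merge electrically
# and acoustically evoked spike trains under a shared absolute refractory period,
# so a spike from either modality can veto a closely following spike from the other.
import numpy as np

def merge_with_refractoriness(electric_spikes, acoustic_spikes, t_refr=0.7e-3):
    """Combine two spike-time arrays (seconds) into one train, discarding any
    spike that falls within t_refr of the previously accepted spike."""
    merged = np.sort(np.concatenate([electric_spikes, acoustic_spikes]))
    accepted, last = [], -np.inf
    for t in merged:
        if t - last >= t_refr:
            accepted.append(t)
            last = t
    return np.asarray(accepted)

rng = np.random.default_rng(1)
electric = np.arange(0.0, 0.05, 1 / 900.0)        # ~900 pps pulse-locked spikes
acoustic = np.sort(rng.uniform(0.0, 0.05, 20))    # acoustically driven / spontaneous spikes
coupled = merge_with_refractoriness(electric, acoustic)
print(len(electric) + len(acoustic), "spikes without interaction ->",
      len(coupled), "with shared refractoriness")
```

An uncoupled variant, by contrast, would simply concatenate the two trains, which is what makes the comparison between the two model versions informative about electric-acoustic interaction.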
Affiliation(s)
- Daniel Kipping
  - Department of Otolaryngology, Hannover Medical School (MHH), Hannover, Germany
  - Cluster of Excellence Hearing4all, Hannover, Germany
- Waldo Nogueira
  - Department of Otolaryngology, Hannover Medical School (MHH), Hannover, Germany
  - Cluster of Excellence Hearing4all, Hannover, Germany
4
Ivanov AZ, King AJ, Willmore BDB, Walker KMM, Harper NS. Cortical adaptation to sound reverberation. eLife 2022; 11:e75090. PMID: 35617119; PMCID: PMC9213001; DOI: 10.7554/elife.75090.
Abstract
In almost every natural environment, sounds are reflected by nearby objects, producing many delayed and distorted copies of the original sound, known as reverberation. Our brains usually cope well with reverberation, allowing us to recognize sound sources regardless of their environments. In contrast, reverberation can cause severe difficulties for speech recognition algorithms and hearing-impaired people. The present study examines how the auditory system copes with reverberation. We trained a linear model to recover a rich set of natural, anechoic sounds from their simulated reverberant counterparts. The model neurons achieved this by extending the inhibitory component of their receptive filters for more reverberant spaces, and did so in a frequency-dependent manner. These predicted effects were observed in the responses of auditory cortical neurons of ferrets in the same simulated reverberant environments. Together, these results suggest that auditory cortical neurons adapt to reverberation by adjusting their filtering properties in a manner consistent with dereverberation.
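The modelling approach, a linear mapping trained to recover anechoic signals from reverberant input, can be sketched in miniature. The snippet below is an assumption-laden toy: white noise stands in for one cochleagram channel, a single exponential decay stands in for the room, and ridge regression stands in for the authors' actual fitting procedure.

```python
# Minimal dereverberation sketch under stated assumptions; not the paper's pipeline.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_t, n_lags = 5000, 30
anechoic = rng.standard_normal(n_t)           # placeholder anechoic envelope (one channel)
room = np.exp(-np.arange(50) / 15.0)          # crude exponential reverberant tail
reverberant = np.convolve(anechoic, room)[:n_t]

# Lagged design matrix: each row holds the recent reverberant history at one time bin
X = np.stack([np.roll(reverberant, lag) for lag in range(n_lags)], axis=1)[n_lags:]
y = anechoic[n_lags:]

kernel = Ridge(alpha=1.0).fit(X, y).coef_
# The negative coefficient just after the positive peak plays the role of the
# inhibitory component that cancels the reverberant tail; in this toy its
# magnitude grows as the simulated room decay lengthens.
print(np.round(kernel[:6], 3))
```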
Affiliation(s)
- Aleksandar Z Ivanov
  - Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Andrew J King
  - Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Ben DB Willmore
  - Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Kerry MM Walker
  - Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Nicol S Harper
  - Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
5
Hu H, Klug J, Dietz M. Simulation of ITD-Dependent Single-Neuron Responses Under Electrical Stimulation and with Amplitude-Modulated Acoustic Stimuli. J Assoc Res Otolaryngol 2022; 23:535-550. PMID: 35334001; PMCID: PMC9437183; DOI: 10.1007/s10162-021-00823-1.
Abstract
Interaural time difference (ITD) sensitivity with cochlear implant stimulation is remarkably similar to envelope ITD sensitivity using conventional acoustic stimulation. This holds true for human perception, as well as for neural response rates recorded in the inferior colliculus of several mammalian species. We hypothesize that robust excitatory-inhibitory (EI) interaction is the dominant mechanism. Therefore, we connected the same single EI-model neuron to either a model of the normal acoustic auditory periphery or to a model of the electrically stimulated auditory nerve. The model captured most features of the experimentally obtained response properties with electric stimulation, such as the shape of rate-ITD functions, the dependence on stimulation level, and the pulse rate or modulation-frequency dependence. Rate-ITD functions with high-rate, amplitude-modulated electric stimuli were very similar to their acoustic counterparts. Responses obtained with unmodulated electric pulse trains most resembled acoustic filtered clicks. The fairly rapid decline of ITD sensitivity at rates above 300 pulses or cycles per second is correctly simulated by the 3.1-ms time constant of the inhibitory post-synaptic conductance. As the model accounts for these basic properties, it is expected to help in understanding and quantifying binaural hearing abilities with electric stimulation when integrated into larger simulation frameworks.
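A toy version of the EI interaction described above shows how an inhibitory time constant of about 3.1 ms flattens rate-ITD functions at high pulse rates. The unit model, threshold and pulse rates below are illustrative assumptions, not the parameters or structure of the published model.

```python
# Toy EI sketch (assumptions only): an excitatory event produces a spike unless the
# summed, exponentially decaying inhibition from the other ear is still strong.
import numpy as np

def ei_rate(itd_s, pulse_rate=100.0, dur=1.0, tau=3.1e-3, threshold=0.5):
    exc = np.arange(0.0, dur, 1.0 / pulse_rate)    # excitatory event times (s)
    inh = exc + itd_s                              # inhibitory events shifted by the ITD
    spikes = 0
    for t in exc:
        past = inh[inh <= t]
        g_inh = np.sum(np.exp(-(t - past) / tau))  # decayed IPSC conductance at time t
        if g_inh < threshold:
            spikes += 1
    return spikes / dur

# ITD sensitivity is present at 100 pulses/s but collapses at 900 pulses/s, because
# IPSCs with tau = 3.1 ms summate across the short inter-pulse interval.
for rate in (100.0, 900.0):
    rates = [ei_rate(itd * 1e-6, pulse_rate=rate) for itd in (-500, 0, 500)]
    print(f"{rate:.0f} pps, rate at ITD -500/0/+500 us:", rates)
```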
Affiliation(s)
- Hongmei Hu
  - Department of Medical Physics and Acoustics and Cluster of Excellence "Hearing4all", University of Oldenburg, 26129 Oldenburg, Germany
- Jonas Klug
  - Department of Medical Physics and Acoustics and Cluster of Excellence "Hearing4all", University of Oldenburg, 26129 Oldenburg, Germany
- Mathias Dietz
  - Department of Medical Physics and Acoustics and Cluster of Excellence "Hearing4all", University of Oldenburg, 26129 Oldenburg, Germany
6
Deep neural network models of sound localization reveal how perception is adapted to real-world environments. Nat Hum Behav 2022; 6:111-133. PMID: 35087192; PMCID: PMC8830739; DOI: 10.1038/s41562-021-01244-z.
Abstract
Mammals localize sounds using information from their two ears. Localization in real-world conditions is challenging, as echoes provide erroneous information, and noises mask parts of target sounds. To better understand real-world localization, we equipped a deep neural network with human ears and trained it to localize sounds in a virtual environment. The resulting model localized accurately in realistic conditions with noise and reverberation. In simulated experiments, the model exhibited many features of human spatial hearing: sensitivity to monaural spectral cues and interaural time and level differences, integration across frequency, biases for sound onsets, and limits on localization of concurrent sources. But when trained in unnatural environments without reverberation, noise, or natural sounds, these performance characteristics deviated from those of humans. The results show how biological hearing is adapted to the challenges of real-world environments and illustrate how artificial neural networks can reveal the real-world constraints that shape perception.
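As a rough sketch of the general approach, a deep network trained on binaural input to report source location, the snippet below defines a small convolutional classifier over two-channel cochleagram-like inputs. The architecture, input dimensions and 72 azimuth classes are placeholders chosen for illustration; they are not the network used in the paper.

```python
# Minimal sketch under stated assumptions: a tiny CNN mapping a binaural
# (left/right ear) time-frequency input to one of 72 azimuth bins.
import torch
import torch.nn as nn

class BinauralLocalizer(nn.Module):
    def __init__(self, n_azimuths=72):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_azimuths)

    def forward(self, x):                 # x: (batch, 2, freq, time)
        z = self.features(x).flatten(1)
        return self.classifier(z)         # logits over azimuth bins

# Training on simulated noisy, reverberant binaural renderings would use a
# standard cross-entropy objective, as in this dummy forward/backward pass.
model = BinauralLocalizer()
dummy = torch.randn(8, 2, 64, 100)        # batch of binaural cochleagram-like inputs
loss = nn.CrossEntropyLoss()(model(dummy), torch.randint(0, 72, (8,)))
print(loss.item())
```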