1
Khakzand S, Maarefvand M, Ruzbahani M, Tajdini A. Assessment of Peripheral and Central Auditory Processing after Treatment for Idiopathic Sudden Sensorineural Hearing Loss. Int Arch Otorhinolaryngol 2024;28:e415-e423. PMID: 38974630; PMCID: PMC11226256; DOI: 10.1055/s-0043-1776728.
Abstract
Introduction When idiopathic sudden sensorineural hearing loss (SSNHL) is treated successfully, most clinicians assume that auditory processing is normal and symmetric. This assumption rests on the patients' recovered detection ability, but auditory processing involves much more than detection alone. Since some studies have suggested involvement of the central auditory system during the acute phase of sudden hearing loss, the present study hypothesized that auditory processing would be asymmetric in people who have experienced it. Objective To assess the physiological and electrophysiological condition of the cochlea and central auditory system, as well as behavioral discrimination of three primary aspects of sound (intensity, frequency, and time), in subjects with normal ears and ears treated successfully for SSNHL. Methods The study included 19 SSNHL patients whose normal and treated ears were assessed for otoacoustic emissions, speech auditory brainstem response, intensity and pitch discrimination, and temporal resolution in a within-subject design. Results Otoacoustic emissions were poorer in the treated ears than in the normal ears. Ear- and sex-dependent differences were observed for otoacoustic emissions and pitch discrimination. Conclusion The asymmetrical processing observed in the present study was not consistent with the hearing threshold values, which may suggest that the central auditory system is affected regardless of the status of peripheral hearing. Further experiments with larger samples, different recovery scenarios after treatment, and other assessments are required.
Affiliation(s)
- Soheila Khakzand
- Audiology Department, School of Rehabilitation, Iran University of Medical Sciences, Tehran, Iran
- Mohammad Maarefvand
- Audiology Department, School of Rehabilitation, Iran University of Medical Sciences, Tehran, Iran
- Masoumeh Ruzbahani
- Audiology Department, School of Rehabilitation, Iran University of Medical Sciences, Tehran, Iran
- Ardavan Tajdini
- Ear, Nose and Throat Department, Amir-Alam Hospital, Tehran University of Medical Sciences, Tehran, Iran
2
Schilling A, Sedley W, Gerum R, Metzner C, Tziridis K, Maier A, Schulze H, Zeng FG, Friston KJ, Krauss P. Predictive coding and stochastic resonance as fundamental principles of auditory phantom perception. Brain 2023;146:4809-4825. PMID: 37503725; PMCID: PMC10690027; DOI: 10.1093/brain/awad255.
Abstract
Mechanistic insight is achieved only when experiments are employed to test formal or computational models. Furthermore, in analogy to lesion studies, phantom perception may serve as a vehicle to understand the fundamental processing principles underlying healthy auditory perception. With a special focus on tinnitus-as the prime example of auditory phantom perception-we review recent work at the intersection of artificial intelligence, psychology and neuroscience. In particular, we discuss why everyone with tinnitus suffers from (at least hidden) hearing loss, but not everyone with hearing loss suffers from tinnitus. We argue that intrinsic neural noise is generated and amplified along the auditory pathway as a compensatory mechanism to restore normal hearing based on adaptive stochastic resonance. The neural noise increase can then be misinterpreted as auditory input and perceived as tinnitus. This mechanism can be formalized in the Bayesian brain framework, where the percept (posterior) assimilates a prior prediction (brain's expectations) and likelihood (bottom-up neural signal). A higher mean and lower variance (i.e. enhanced precision) of the likelihood shifts the posterior, evincing a misinterpretation of sensory evidence, which may be further confounded by plastic changes in the brain that underwrite prior predictions. Hence, two fundamental processing principles provide the most explanatory power for the emergence of auditory phantom perceptions: predictive coding as a top-down and adaptive stochastic resonance as a complementary bottom-up mechanism. We conclude that both principles also play a crucial role in healthy auditory perception. Finally, in the context of neuroscience-inspired artificial intelligence, both processing principles may serve to improve contemporary machine learning techniques.
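The Bayesian-brain account in this abstract can be made concrete with a minimal sketch: combining a Gaussian prior ("silence expected") with a Gaussian likelihood (bottom-up neural signal) by precision weighting. The numbers below are hypothetical and this is not the authors' model; it only illustrates how a likelihood with higher mean and higher precision drags the posterior percept away from silence.

```python
# Illustrative sketch (not the paper's model): precision-weighted fusion of a
# Gaussian prior and likelihood, as in the predictive-coding account of tinnitus.

def posterior_gaussian(prior_mean, prior_var, like_mean, like_var):
    """Combine Gaussian prior and likelihood; return posterior mean and variance."""
    prior_prec = 1.0 / prior_var          # precision = inverse variance
    like_prec = 1.0 / like_var
    post_var = 1.0 / (prior_prec + like_prec)
    post_mean = post_var * (prior_prec * prior_mean + like_prec * like_mean)
    return post_mean, post_var

# Healthy case: silence expected (prior mean 0), weak noisy spontaneous activity.
m_healthy, _ = posterior_gaussian(0.0, 1.0, 0.2, 4.0)

# Tinnitus case: amplified intrinsic noise raises the likelihood mean and its
# precision (lower variance), shifting the posterior toward a phantom percept.
m_tinnitus, _ = posterior_gaussian(0.0, 1.0, 1.5, 0.5)
```

With these toy values the posterior mean rises from 0.04 (healthy) to 1.0 (tinnitus), mirroring the abstract's claim that enhanced likelihood precision shifts the posterior.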
Affiliation(s)
- Achim Schilling
- Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany
- Cognitive Computational Neuroscience Group, University Erlangen-Nürnberg, 91058 Erlangen, Germany
- William Sedley
- Translational and Clinical Research Institute, Newcastle University Medical School, Newcastle upon Tyne NE2 4HH, UK
- Richard Gerum
- Cognitive Computational Neuroscience Group, University Erlangen-Nürnberg, 91058 Erlangen, Germany
- Department of Physics and Astronomy and Center for Vision Research, York University, Toronto, ON M3J 1P3, Canada
- Claus Metzner
- Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany
- Andreas Maier
- Pattern Recognition Lab, University Erlangen-Nürnberg, 91058 Erlangen, Germany
- Holger Schulze
- Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany
- Fan-Gang Zeng
- Center for Hearing Research, Departments of Anatomy and Neurobiology, Biomedical Engineering, Cognitive Sciences, Otolaryngology–Head and Neck Surgery, University of California Irvine, Irvine, CA 92697, USA
- Karl J Friston
- Wellcome Centre for Human Neuroimaging, Institute of Neurology, University College London, London WC1N 3AR, UK
- Patrick Krauss
- Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany
- Cognitive Computational Neuroscience Group, University Erlangen-Nürnberg, 91058 Erlangen, Germany
- Pattern Recognition Lab, University Erlangen-Nürnberg, 91058 Erlangen, Germany
3
Alamri Y, Jennings SG. Computational modeling of the human compound action potential. J Acoust Soc Am 2023;153:2376. PMID: 37092943; PMCID: PMC10119875; DOI: 10.1121/10.0017863.
Abstract
The auditory nerve (AN) compound action potential (CAP) is an important tool for assessing auditory disorders and monitoring the health of the auditory periphery during surgical procedures. The CAP has been mathematically conceptualized as the convolution of a unit response (UR) waveform with the firing rate of a population of AN fibers. Here, an approach for predicting experimentally recorded CAPs in humans is proposed, which involves the use of human-based computational models to simulate AN activity. CAPs elicited by clicks, chirps, and amplitude-modulated carriers were simulated and compared with empirically recorded CAPs from human subjects. In addition, narrowband CAPs derived from noise-masked clicks and tone bursts were simulated. Many morphological, temporal, and spectral aspects of human CAPs were captured by the simulations for all stimuli tested. These findings support the use of model simulations of the human CAP to refine existing human-based models of the auditory periphery, aid in the design and analysis of auditory experiments, and predict the effects of hearing loss, synaptopathy, and other auditory disorders on the human CAP.
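The convolution conceptualization in this abstract — CAP(t) as the unit response convolved with the population firing rate — can be sketched in a few lines. The waveforms below are toy stand-ins (a damped sinusoid for the unit response, a Gaussian onset burst for the firing rate), not the paper's human-based AN model.

```python
import numpy as np

# Sketch of the CAP convolution model described above: CAP(t) = UR(t) * r(t),
# where r(t) is the summed instantaneous firing rate of the AN fiber population.
# Both waveforms are hypothetical illustrations, not fitted human data.

fs = 100_000                         # sampling rate (Hz)
t = np.arange(0, 0.005, 1 / fs)      # 5 ms time axis

# Toy unit response: damped 1 kHz sinusoid (one fiber's scalp-potential contribution)
ur = np.exp(-t / 0.0005) * np.sin(2 * np.pi * 1000 * t)

# Toy population firing rate: Gaussian burst of synchronized onset spikes at 1 ms
rate = np.exp(-0.5 * ((t - 0.001) / 0.0002) ** 2)

# Discrete convolution, truncated to the analysis window and scaled by dt
cap = np.convolve(rate, ur)[: len(t)] / fs
```

Swapping in narrowband firing-rate profiles would give the derived narrowband CAPs the abstract mentions; the convolution step itself is unchanged.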
Affiliation(s)
- Yousef Alamri
- Department of Biomedical Engineering, The University of Utah, 390 South, 1530 East, BEHS 1201, Salt Lake City, Utah 84112, USA
- Skyler G Jennings
- Department of Communication Sciences and Disorders, The University of Utah, 390 South, 1530 East, BEHS 1201, Salt Lake City, Utah 84112, USA
4
Yin D, Wang X, Ren L, Xie Y, Zhang T, Dai P. The role of medial olivocochlear activity in contralateral suppression of auditory steady-state responses. Auris Nasus Larynx 2023;50:57-61. PMID: 35649956; DOI: 10.1016/j.anl.2022.05.005.
Abstract
OBJECTIVE Auditory steady-state response (ASSR) amplitudes fall in the presence of contralateral noise. However, whether and to what extent medial olivocochlear (MOC) activity is involved in contralateral suppression of the ASSR remains unclear. We therefore assessed the role of MOC activity in contralateral suppression of the ASSR. METHODS Mice were treated with strychnine to completely eliminate MOC activity, and ASSR amplitudes were then measured in the presence of contralateral noise. RESULTS Contralateral noise reduced ASSR amplitudes at some stimulus intensities. After strychnine treatment eliminated MOC activity, the ASSR amplitudes recovered. CONCLUSIONS MOC activity participates in contralateral suppression of the ASSR.
Affiliation(s)
- Dongming Yin
- Department of Otolaryngology, Zhongshan Hospital Fudan University, Shanghai, PR China; ENT Institute, Eye & ENT Hospital of Fudan University, Fenyang Road 83, Shanghai 200031, PR China; NHC Hearing Medicine Key Laboratory (Fudan University), Shanghai, PR China
- Xiaolei Wang
- Department of Cardiology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, PR China
- Liujie Ren
- ENT Institute, Eye & ENT Hospital of Fudan University, Fenyang Road 83, Shanghai 200031, PR China; NHC Hearing Medicine Key Laboratory (Fudan University), Shanghai, PR China; Department of Facial Plastic and Reconstructive Surgery, Eye & ENT Hospital of Fudan University, Fenyang Road 83, Shanghai 200031, PR China
- Youzhou Xie
- ENT Institute, Eye & ENT Hospital of Fudan University, Fenyang Road 83, Shanghai 200031, PR China; NHC Hearing Medicine Key Laboratory (Fudan University), Shanghai, PR China; Department of Facial Plastic and Reconstructive Surgery, Eye & ENT Hospital of Fudan University, Fenyang Road 83, Shanghai 200031, PR China
- Tianyu Zhang
- ENT Institute, Eye & ENT Hospital of Fudan University, Fenyang Road 83, Shanghai 200031, PR China; NHC Hearing Medicine Key Laboratory (Fudan University), Shanghai, PR China; Department of Facial Plastic and Reconstructive Surgery, Eye & ENT Hospital of Fudan University, Fenyang Road 83, Shanghai 200031, PR China
- Peidong Dai
- ENT Institute, Eye & ENT Hospital of Fudan University, Fenyang Road 83, Shanghai 200031, PR China; NHC Hearing Medicine Key Laboratory (Fudan University), Shanghai, PR China
5
Moinuddin KA, Havugimana F, Al-Fahad R, Bidelman GM, Yeasin M. Unraveling Spatial-Spectral Dynamics of Speech Categorization Speed Using Convolutional Neural Networks. Brain Sci 2022;13:75. PMID: 36672055; PMCID: PMC9856675; DOI: 10.3390/brainsci13010075.
Abstract
The process of categorizing sounds into distinct phonetic categories is known as categorical perception (CP). Response times (RTs) provide a measure of perceptual difficulty during labeling decisions (i.e., categorization). RT is quasi-stochastic in nature due to individual differences and variation across perceptual tasks. To identify the sources of RT variation in CP, we built models to decode the brain regions and frequency bands driving fast, medium, and slow response decision speeds. In particular, we implemented a parameter-optimized convolutional neural network (CNN) to classify listeners' behavioral RTs from their neural EEG data. We adopted visual interpretation of the model's responses using Guided-GradCAM to identify spatial-spectral correlates of RT. Our framework includes: (i) a data augmentation technique designed to reduce noise and control the overall variance of the EEG dataset; (ii) band-power topomaps from which the CNN learns a spatial-spectral representation; (iii) large-scale Bayesian hyperparameter optimization to find the best-performing CNN model; and (iv) ANOVA and post hoc analysis of Guided-GradCAM activation values to measure the effect of neural regions and frequency bands on behavioral responses. Using this framework, we observe that α-β (10-20 Hz) activity over left frontal, right prefrontal/frontal, and right cerebellar regions is correlated with RT variation. Our results indicate that attention, template matching, temporal prediction of acoustics, motor control, and decision uncertainty are the most probable factors in RT variation.
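The band-power feature underlying the topomaps in this abstract reduces to a standard computation: power in the α-β (10-20 Hz) band per EEG channel. The sketch below uses a plain FFT periodogram on synthetic data; the paper's full pipeline (topomap rendering, CNN, Guided-GradCAM) is not reproduced, and all signal parameters are invented for illustration.

```python
import numpy as np

# Minimal sketch of a per-channel band-power feature like the one fed into the
# CNN's topomaps above. Synthetic single-channel data; parameters are hypothetical.

def bandpower(x, fs, f_lo, f_hi):
    """Average power of signal x in [f_lo, f_hi] Hz from the FFT periodogram."""
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * len(x))
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].mean()

fs = 250                                   # EEG sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)                # 2-second epoch
rng = np.random.default_rng(0)

# One synthetic channel: a 15 Hz oscillation buried in broadband noise
chan = np.sin(2 * np.pi * 15 * t) + 0.5 * rng.standard_normal(t.size)

alpha_beta = bandpower(chan, fs, 10, 20)   # the α-β feature for this channel
```

Computing this per channel and arranging the values by electrode position gives the spatial-spectral map a CNN can consume.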
Affiliation(s)
- Felix Havugimana
- Department of EECE, University of Memphis, Memphis, TN 38152, USA
- Rakib Al-Fahad
- Department of EECE, University of Memphis, Memphis, TN 38152, USA
- Gavin M. Bidelman
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN 47408, USA
- Mohammed Yeasin
- Department of EECE, University of Memphis, Memphis, TN 38152, USA
6
Rodrigo H, Beukes EW, Andersson G, Manchaiah V. Predicting the Outcomes of Internet-Based Cognitive Behavioral Therapy for Tinnitus: Applications of Artificial Neural Network and Support Vector Machine. Am J Audiol 2022;31:1167-1177. PMID: 36215687; PMCID: PMC9907438; DOI: 10.1044/2022_aja-21-00270.
Abstract
PURPOSE Internet-based cognitive behavioral therapy (ICBT) has been found effective for tinnitus management, although there is limited understanding of who will benefit most from it. Traditional statistical models have largely failed to identify nonlinear associations and hence strong predictors of success with ICBT. This study examined the use of an artificial neural network (ANN) and a support vector machine (SVM) to identify variables associated with treatment success in ICBT for tinnitus. METHOD The study involved a secondary analysis of data from 228 individuals who had completed ICBT in previous intervention studies. A 13-point reduction in the Tinnitus Functional Index (TFI) was defined as a successful outcome. There were 33 predictor variables, including demographic, tinnitus-related, hearing-related, and treatment-related variables, and clinical factors (anxiety, depression, insomnia, hyperacusis, hearing disability, cognitive function, and life satisfaction). Predictive models using the ANN and SVM were developed and evaluated for classification accuracy. SHapley Additive exPlanations (SHAP) analysis was then applied to the best predictive model to rank the predictor variables by their importance for a successful treatment outcome. RESULTS The best predictive model was the ANN, with an average area under the receiver operating characteristic curve of 0.73 ± 0.03. The SHAP analysis revealed that a higher education level and greater baseline tinnitus severity were the factors that most positively influenced treatment outcome. CONCLUSIONS Predictive models such as ANNs and SVMs help predict ICBT treatment outcomes and identify predictors of outcome. However, further work is needed to examine predictors not considered in this study and to improve the predictive power of these models. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.21266487.
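Two pieces of this abstract's pipeline are simple enough to sketch directly: labeling treatment success as a TFI reduction of at least 13 points, and scoring a classifier by the area under the ROC curve. The TFI values and model scores below are invented for illustration, and the rank-sum AUC here stands in for whatever library routine the authors used.

```python
# Hypothetical sketch of the outcome labeling and AUC evaluation described above.
# All TFI values and model probabilities are made up.

def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formulation."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

tfi_pre  = [60, 55, 72, 40, 80, 50]        # baseline TFI per participant
tfi_post = [40, 50, 50, 35, 60, 49]        # TFI after ICBT

# Success = reduction of >= 13 TFI points, as in the study's outcome definition
labels = [int(pre - post >= 13) for pre, post in zip(tfi_pre, tfi_post)]

scores = [0.9, 0.2, 0.8, 0.75, 0.7, 0.1]   # hypothetical model probabilities
model_auc = auc(labels, scores)
```

An AUC near 0.5 means chance-level ranking of successes over non-successes; the study's reported 0.73 sits between chance and perfect separation.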
Affiliation(s)
- Hansapani Rodrigo
- School of Mathematical and Statistical Sciences, University of Texas Rio Grande Valley, Edinburg
- Virtual Hearing Lab, Collaborative initiative between Lamar University, Beaumont, TX, and University of Pretoria, South Africa
- Eldré W. Beukes
- Virtual Hearing Lab, Collaborative initiative between Lamar University, Beaumont, TX, and University of Pretoria, South Africa
- Vision and Hearing Sciences Research Centre, School of Psychology and Sport Science, Anglia Ruskin University, Cambridge, United Kingdom
- Gerhard Andersson
- Department of Behavioral Sciences and Learning, Department of Biomedical and Clinical Sciences, Linköping University, Sweden
- Department of Clinical Neuroscience, Division of Psychiatry, Karolinska Institute, Stockholm, Sweden
- Vinaya Manchaiah
- Virtual Hearing Lab, Collaborative initiative between Lamar University, Beaumont, TX, and University of Pretoria, South Africa
- Department of Otolaryngology–Head and Neck Surgery, University of Colorado School of Medicine, Aurora
- UCHealth Hearing and Balance, University of Colorado Hospital, Aurora
- Department of Speech-Language Pathology and Audiology, University of Pretoria, South Africa
- Department of Speech and Hearing, School of Allied Health Sciences, Manipal, India
7
Kates JM, Arehart KH. An overview of the HASPI and HASQI metrics for predicting speech intelligibility and speech quality for normal hearing, hearing loss, and hearing aids. Hear Res 2022;426:108608. PMID: 36137862; PMCID: PMC10833438; DOI: 10.1016/j.heares.2022.108608.
Abstract
Alterations of the speech signal, including additive noise and nonlinear distortion, can reduce speech intelligibility and quality. Hearing aids present an especially complicated situation since these devices may implement nonlinear processing designed to compensate for the hearing loss. Hearing-aid processing is often realized as time-varying multichannel gain adjustments, and may also include frequency reassignment. The challenge in designing metrics for hearing aids and hearing-impaired listeners is to accurately model the perceptual trade-offs between speech audibility and the nonlinear distortion introduced by hearing-aid processing. This paper focuses on the Hearing Aid Speech Perception Index (HASPI) and the Hearing Aid Speech Quality Index (HASQI) as representative metrics for predicting intelligibility and quality. These indices start with a model of the auditory periphery that can be adjusted to represent hearing loss. The peripheral model, the speech features computed from the model outputs, and the procedures used to fit the features to subject data are described. Examples are then presented for using the metrics to measure the effects of additive noise, evaluate noise-suppression processing, and to measure the differences among commercial hearing aids. Open questions and considerations in using these and related metrics are then discussed.
Affiliation(s)
- James M Kates
- Department of Speech Language and Hearing Sciences, University of Colorado, Boulder, CO 80309, USA
- Kathryn H Arehart
- Department of Speech Language and Hearing Sciences, University of Colorado, Boulder, CO 80309, USA
8
Valderrama JT, de la Torre A, McAlpine D. The hunt for hidden hearing loss in humans: From preclinical studies to effective interventions. Front Neurosci 2022;16:1000304. PMID: 36188462; PMCID: PMC9519997; DOI: 10.3389/fnins.2022.1000304.
Abstract
Many individuals experience hearing problems that are hidden under a normal audiogram. This impacts not only the individual sufferers but also clinicians, who can offer little in the way of support. Animal studies using invasive methodologies have developed solid evidence for a range of pathologies underlying this hidden hearing loss (HHL), including cochlear synaptopathy, auditory nerve demyelination, elevated central gain, and neural mal-adaptation. Despite progress in preclinical models, evidence supporting the existence of HHL in humans remains inconclusive, and clinicians lack non-invasive biomarkers sensitive to HHL as well as a standardized protocol to manage hearing problems in the absence of elevated hearing thresholds. Here, we review animal models of HHL as well as ongoing research into tools with which to diagnose and manage hearing difficulties associated with HHL. We also discuss new research opportunities facilitated by recent methodological tools that may overcome a series of barriers that have hampered meaningful progress in diagnosing and treating HHL.
Affiliation(s)
- Joaquin T. Valderrama
- National Acoustic Laboratories, Sydney, NSW, Australia
- Department of Linguistics, Macquarie University Hearing, Macquarie University, Sydney, NSW, Australia
- Angel de la Torre
- Department of Signal Theory, Telematics and Communications, University of Granada, Granada, Spain
- Research Centre for Information and Communications Technologies (CITIC-UGR), University of Granada, Granada, Spain
- David McAlpine
- Department of Linguistics, Macquarie University Hearing, Macquarie University, Sydney, NSW, Australia
9
Diao T, Duan M, Ma X, Liu J, Yu L, Jing Y, Wang M. The impairment of speech perception in noise following pure tone hearing recovery in patients with sudden sensorineural hearing loss. Sci Rep 2022;12:866. PMID: 35039548; PMCID: PMC8763940; DOI: 10.1038/s41598-021-03847-y.
Abstract
This study explored whether patients with unilateral idiopathic sudden sensorineural hearing loss (uISSNHL) have normal speech-in-noise (SIN) perception under different masking conditions after complete recovery of pure-tone audiometry. Eight completely recovered uISSNHL patients were enrolled in the ISSNHL group, while 8 normal-hearing adults matched for age, gender, and educational experience were selected as the control group. Each group was tested on SIN perception under four masking conditions: noise and speech masking, each with and without spatial separation cues. For both the ISSNHL and control groups, a two-way ANOVA showed a statistically significant effect of masking type (p = 0.007 vs. p = 0.012), a significant effect of perceived spatial separation (p < 0.001 vs. p < 0.001), and a significant interaction between masking type and perceived spatial separation (p < 0.001 vs. p < 0.001). A paired-sample t-test showed that SIN perception in the control group was significantly lower than in the ISSNHL patients only under speech masking without spatial separation cues (p = 0.011). Abnormalities thus remained in the central auditory system shortly after complete recovery in the ISSNHL group (within 2 weeks), whereas the auditory periphery and the higher-level ability to use spatial cues were normal.
Affiliation(s)
- Tongxiang Diao
- Department of Otolaryngology, Head and Neck Surgery, People's Hospital, Peking University, Beijing, China
- Maoli Duan
- Department of Clinical Science, Intervention and Technology, Karolinska Institute, Stockholm, Sweden; Department of Otolaryngology Head and Neck Surgery & Audiology and Neurotology, Karolinska University Hospital, Karolinska Institute, 171 76, Stockholm, Sweden
- Xin Ma
- Department of Otolaryngology, Head and Neck Surgery, People's Hospital, Peking University, Beijing, China
- Jinjun Liu
- School of Psychology, Beijing Normal University, Beijing, 100875, China
- Lisheng Yu
- Department of Otolaryngology, Head and Neck Surgery, People's Hospital, Peking University, Beijing, China
- Yuanyuan Jing
- Department of Otolaryngology, Head and Neck Surgery, People's Hospital, Peking University, Beijing, China
- Mengyuan Wang
- School of Psychology, Beijing Normal University, Beijing, 100875, China