1
Hussain I, Kwon C, Noh TS, Kim HC, Suh MW, Ku Y. An interpretable tinnitus prediction framework using gap-prepulse inhibition in auditory late response and electroencephalogram. Comput Methods Programs Biomed 2024;255:108371. PMID: 39173295. DOI: 10.1016/j.cmpb.2024.108371.
Abstract
BACKGROUND AND OBJECTIVE: Tinnitus is a neuropathological condition that produces a mild buzzing or ringing in the ears without an external sound source. Current tinnitus diagnostic methods often rely on subjective assessment and require intricate medical examinations. This study proposes an interpretable tinnitus diagnostic framework using the auditory late response (ALR) and electroencephalogram (EEG), inspired by the gap-prepulse inhibition (GPI) paradigm.
METHODS: We collected spontaneous EEG and ALR data from 44 patients with tinnitus and 47 hearing-loss-matched controls, using specialized hardware to capture responses to sound stimuli with embedded gaps. In this cohort study of tinnitus and control groups, we examined EEG spectral features and ALR features of the N-P complexes, comparing responses to gap durations of 50 and 20 ms with no-gap conditions. We then developed an interpretable tinnitus diagnostic model combining ALR and EEG metrics, a boosting machine-learning architecture, and explainable feature-attribution methods.
RESULTS: The proposed model achieved 90% accuracy in identifying tinnitus, with an area under the performance curve of 0.89. Explainable artificial intelligence approaches revealed gap-embedded ALR features, such as the GPI ratio of the N1-P2 complex, and the EEG spectral ratio as diagnostic metrics for tinnitus. The method provides personalized prediction explanations for tinnitus diagnosis using gap-embedded auditory and neurological features.
CONCLUSIONS: Deficits in GPI, alongside activity in the EEG alpha-beta ratio, offer a promising screening tool for assessing tinnitus risk, aligning with current clinical insights from hearing research.
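The GPI ratio of the N1-P2 complex highlighted in this abstract can be illustrated with a minimal sketch. The peak windows, the ratio formulation (gap-condition amplitude relative to no-gap amplitude), and the function names below are all assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def n1_p2_amplitude(alr, fs, n1_win=(0.08, 0.15), p2_win=(0.15, 0.25)):
    """Peak-to-peak N1-P2 amplitude of an averaged ALR epoch.
    Window boundaries (seconds post-stimulus) are illustrative choices."""
    t = np.arange(alr.size) / fs
    n1 = alr[(t >= n1_win[0]) & (t < n1_win[1])].min()   # N1: negative deflection
    p2 = alr[(t >= p2_win[0]) & (t < p2_win[1])].max()   # P2: positive deflection
    return p2 - n1

def gpi_ratio(alr_gap, alr_nogap, fs):
    """One common GPI formulation: gap-condition N1-P2 amplitude divided by
    the no-gap amplitude. Values near 1 indicate little inhibition, the kind
    of GPI deficit this paper associates with tinnitus."""
    return n1_p2_amplitude(alr_gap, fs) / n1_p2_amplitude(alr_nogap, fs)
```

With synthetic epochs where the gap condition halves the N1-P2 deflection, `gpi_ratio` returns 0.5, i.e. strong inhibition.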
Affiliation(s)
- Iqram Hussain
- Institute of Medical and Biological Engineering, Medical Research Center, Seoul National University College of Medicine, Seoul 03080, Republic of Korea; Department of Anesthesiology, Weill Cornell Medicine, Cornell University, New York, NY 10065, USA
- Chiheon Kwon
- Medical Device Research Center, Department of Biomedical Research Institute, Chungnam National University Hospital, Daejeon, Republic of Korea
- Tae-Soo Noh
- Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University Hospital, Seoul 03080, Republic of Korea
- Hee Chan Kim
- Institute of Medical and Biological Engineering, Medical Research Center, Seoul National University College of Medicine, Seoul 03080, Republic of Korea; Department of Biomedical Engineering, Seoul National University College of Medicine, Seoul 03080, Republic of Korea; Interdisciplinary Program in Bioengineering, Graduate School, Seoul National University, Seoul 08826, Republic of Korea
- Myung-Whan Suh
- Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University Hospital, Seoul 03080, Republic of Korea
- Yunseo Ku
- Medical Device Research Center, Department of Biomedical Research Institute, Chungnam National University Hospital, Daejeon, Republic of Korea; Department of Biomedical Engineering, College of Medicine, Chungnam National University, Daejeon 35015, Republic of Korea.
2
Wallace MN, Berger JI, Hockley A, Sumner CJ, Akeroyd MA, Palmer AR, McNaughton PA. Identifying tinnitus in mice by tracking the motion of body markers in response to an acoustic startle. Front Neurosci 2024;18:1452450. PMID: 39170684. PMCID: PMC11335616. DOI: 10.3389/fnins.2024.1452450.
Abstract
Rodent models of tinnitus are commonly used to study its mechanisms and potential treatments. Tinnitus can be identified by changes in the gap-induced prepulse inhibition of the acoustic startle (GPIAS), most commonly by using pressure detectors to measure the whole-body startle (WBS). Unfortunately, the WBS habituates quickly, the measuring system can introduce mechanical oscillations, and the response shows considerable variability. We instead used a motion tracking system to measure the localized motion of small reflective markers in response to an acoustic startle reflex in guinea pigs and mice. For guinea pigs, the pinna had the largest responses, both in terms of displacement between pairs of markers and in terms of the speed of the reflex movement. Smaller but still reliable responses were observed with markers on the thorax, abdomen and back. The peak speed of the pinna reflex was the most sensitive measure for calculating GPIAS in the guinea pig. Recording the pinna reflex in mice proved impractical because the mice removed the markers during grooming. However, recordings from their back and tail allowed us to measure the peak speed and the twitch amplitude (area under the curve) of reflex responses, and both analysis methods showed robust GPIAS. When mice were administered high doses of sodium salicylate, which induces tinnitus in humans, there was a significant reduction in GPIAS, consistent with the presence of tinnitus. Thus, measurement of the peak speed or twitch amplitude of pinna, back and tail markers provides a reliable assessment of tinnitus in rodents.
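The two startle measures this abstract describes — peak speed and twitch amplitude (area under the speed curve) of a marker trace — and the resulting GPIAS value can be sketched as follows. The function names, units, and the fractional-inhibition formulation are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def twitch_metrics(positions, fs):
    """Peak speed and twitch amplitude (area under the speed curve) from a
    marker position trace sampled at fs Hz; a simplified sketch of the idea."""
    speed = np.abs(np.gradient(positions)) * fs      # units/s, given positions in units
    peak_speed = speed.max()
    twitch_amplitude = np.trapz(speed, dx=1.0 / fs)  # area under the speed curve
    return peak_speed, twitch_amplitude

def gpias(gap_response, nogap_response):
    """Fractional inhibition of the startle by a preceding gap: values near 1
    mean strong inhibition; values near 0 (reduced GPIAS) are consistent with
    tinnitus, as in the salicylate result above."""
    return 1.0 - gap_response / nogap_response
```

For a marker moving at a constant 1 unit/s for 1 s, both metrics come out to 1.0, and a gap response of 0.3 against a no-gap response of 1.0 gives a GPIAS of 0.7.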
Affiliation(s)
- Mark N. Wallace
- Hearing Sciences, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Joel I. Berger
- Human Brain Research Laboratory, Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, IA, United States
- Adam Hockley
- Cognitive and Auditory Neuroscience Laboratory, Institute of Neuroscience of Castilla y León, University of Salamanca, Salamanca, Spain
- Michael A. Akeroyd
- Hearing Sciences, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Alan R. Palmer
- Hearing Sciences, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Peter A. McNaughton
- Wolfson Sensory, Pain and Regeneration Centre, King’s College London, London, United Kingdom
3
Fawcett TJ, Longenecker RJ, Brunelle DL, Berger JI, Wallace MN, Galazyuk AV, Rosen MJ, Salvi RJ, Walton JP. Universal automated classification of the acoustic startle reflex using machine learning. Hear Res 2023;428:108667. PMID: 36566642. PMCID: PMC10734095. DOI: 10.1016/j.heares.2022.108667.
Abstract
The startle reflex (SR), a robust motor response elicited by an intense auditory, visual, or somatosensory stimulus, has been used to assess psychophysiology in humans and animals for almost a century, in fields as diverse as schizophrenia, bipolar disorder, hearing loss, and tinnitus research. Previously, SR waveforms have been ignored or assessed with basic statistical techniques and/or simple template-matching paradigms, which has led to considerable variability in SR studies across laboratories and species. In an effort to standardize SR assessment methods, we developed a machine learning algorithm and workflow to automatically classify SR waveforms from virtually any animal model, including mice, rats, guinea pigs, and gerbils, obtained with various paradigms and modalities in several laboratories. The universal features common to SR waveforms across species and paradigms are examined and discussed in the context of each animal model. The procedure describes typical results of applying the SR across species and how to use the open-source R implementation. Since the SR is widely used to investigate toxicological or pharmaceutical efficacy, a detailed and universal SR waveform classification protocol should help standardize SR assessment procedures across laboratories and species. This machine learning-based method will improve data reliability and translatability between labs that use the startle reflex paradigm.
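The ensembling step behind this kind of automated classifier can be illustrated generically. The paper's tool is an R implementation with its own workflow; the sketch below is only a stand-in showing how per-model startle probabilities might be combined by soft voting into startle/non-startle labels:

```python
import numpy as np

def soft_vote(probabilities, threshold=0.5):
    """Combine per-model startle probabilities (shape: n_models x n_waveforms)
    by averaging, then threshold into startle (1) / non-startle (0) labels.
    A generic ensembling sketch, not the paper's exact workflow."""
    probs = np.asarray(probabilities, dtype=float)
    mean_prob = probs.mean(axis=0)              # average across models
    labels = (mean_prob >= threshold).astype(int)
    return labels, mean_prob
```

Three models scoring two waveforms at [0.9, 0.2], [0.8, 0.4] and [0.7, 0.3] would average to [0.8, 0.3] and label the first waveform a startle and the second a non-startle.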
Affiliation(s)
- Timothy J Fawcett
- Global Center for Hearing and Speech Research, University of South Florida, Tampa, FL, USA; Research Computing, University of South Florida, Tampa, FL, USA.
- Ryan J Longenecker
- Sound Pharmaceuticals Inc, 4010 Stone Way N., Suite 120, Seattle, WA 98103, USA
- Dimitri L Brunelle
- Global Center for Hearing and Speech Research, University of South Florida, Tampa, FL, USA
- Joel I Berger
- Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Mark N Wallace
- Hearing Sciences, School of Medicine, University of Nottingham, Nottingham, UK
- Alex V Galazyuk
- Hearing Research Group, Department of Anatomy and Neurobiology, Northeast Ohio Medical University, Rootstown, OH, USA
- Merri J Rosen
- Hearing Research Group, Department of Anatomy and Neurobiology, Northeast Ohio Medical University, Rootstown, OH, USA
- Richard J Salvi
- Center for Hearing and Deafness, University at Buffalo, Buffalo, NY, USA
- Joseph P Walton
- Global Center for Hearing and Speech Research, University of South Florida, Tampa, FL, USA; Department of Medical Engineering, University of South Florida, Tampa, FL, USA; Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, USA.
4
Abstract
Congenital hearing loss is the most common birth defect, estimated to affect 2-3 in every 1000 births. Currently there is no cure for hearing loss. Treatment options are limited to hearing aids for mild and moderate cases, and cochlear implants for severe and profound hearing loss. Here we provide a literature overview of the environmental and genetic causes of congenital hearing loss, common animal models and methods used for hearing research, as well as recent advances towards developing therapies to treat congenital deafness. © 2021 The Authors.
Affiliation(s)
- Justine M Renauld
- Department of Otolaryngology, Head & Neck Surgery, Case Western Reserve University School of Medicine, Cleveland, Ohio
- Martin L Basch
- Department of Otolaryngology, Head & Neck Surgery, Case Western Reserve University School of Medicine, Cleveland, Ohio; Department of Genetics and Genome Sciences, Case Western Reserve School of Medicine, Cleveland, Ohio; Department of Biology, Case Western Reserve University, Cleveland, Ohio; Department of Otolaryngology, Head & Neck Surgery, University Hospitals, Cleveland, Ohio
5
Fawcett TJ, Cooper CS, Longenecker RJ, Walton JP. Machine learning, waveform preprocessing and feature extraction methods for classification of acoustic startle waveforms. MethodsX 2020;8:101166. PMID: 33354518. PMCID: PMC7744771. DOI: 10.1016/j.mex.2020.101166.
Abstract
The acoustic startle response (ASR) is an involuntary muscle reflex that occurs in response to a transient loud sound and is a highly utilized method of assessing hearing status in animal models. Currently, a high level of variability exists in the recording and interpretation of ASRs due to the lack of standardization in collecting and analyzing these measures. An ensembled machine learning model was trained to predict whether an ASR waveform is a startle or non-startle using highly predictive features extracted from normalized ASR waveforms collected from young adult CBA/CaJ mice. Features were extracted from the normalized waveform as well as from the power spectral density estimates and continuous wavelet transforms of the normalized waveform. Machine learning models using methods from different families of algorithms were individually trained and then ensembled together, resulting in an extremely robust model.
- ASR waveforms were normalized using the mean and standard deviation computed before the startle elicitor was presented.
- Nine machine learning algorithms from four different families of algorithms were individually trained using features extracted from the normalized ASR waveforms.
- Trained machine learning models were ensembled to produce an extremely robust classifier.
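The preprocessing the highlights describe — z-scoring each waveform by its pre-elicitor baseline, then extracting time- and frequency-domain features — can be sketched as below. The 100 ms baseline window, the specific features, and the use of a plain FFT power spectrum (the paper uses power spectral density estimates and wavelet transforms) are illustrative assumptions:

```python
import numpy as np

def normalize_asr(waveform, fs, pre_stim_s=0.1):
    """Z-score a waveform by the mean/SD of its pre-elicitor baseline,
    as the highlights describe (the 100 ms window is an assumption)."""
    n_pre = int(pre_stim_s * fs)
    baseline = waveform[:n_pre]
    return (waveform - baseline.mean()) / baseline.std()

def extract_features(waveform, fs):
    """A handful of illustrative time- and frequency-domain features; the
    paper's real feature set is larger and also includes wavelet features."""
    norm = normalize_asr(waveform, fs)
    spectrum = np.abs(np.fft.rfft(norm)) ** 2        # crude power spectrum
    freqs = np.fft.rfftfreq(norm.size, d=1.0 / fs)
    dom = 1 + np.argmax(spectrum[1:])                # skip the DC bin
    return {
        "peak_amp": float(np.abs(norm).max()),       # largest excursion
        "rms": float(np.sqrt(np.mean(norm ** 2))),   # overall energy
        "dominant_freq": float(freqs[dom]),          # strongest oscillation
    }
```

Feature dictionaries like this, computed per waveform, would then feed the individually trained models that are ensembled in the final step.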
Affiliation(s)
- Timothy J Fawcett
- Global Center for Hearing and Speech Research, University of South Florida, Tampa, FL, United States; Research Computing, University of South Florida, Tampa, FL, United States; Department of Chemical and Biomedical Engineering, University of South Florida, Tampa, FL, United States
- Chad S Cooper
- Global Center for Hearing and Speech Research, University of South Florida, Tampa, FL, United States
- Ryan J Longenecker
- Global Center for Hearing and Speech Research, University of South Florida, Tampa, FL, United States
- Joseph P Walton
- Global Center for Hearing and Speech Research, University of South Florida, Tampa, FL, United States; Department of Chemical and Biomedical Engineering, University of South Florida, Tampa, FL, United States; Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States.