1.
Varghese JJ, Shew MA, Walia A, Lefler SM, Durakovic N, Wick CC, Ortmann AJ, Herzog JA, Buchman CA. Validating an Evoked Potential Platform for Electrocochleography During Cochlear Implantation. Laryngoscope 2024. PMID: 39189299. DOI: 10.1002/lary.31724.
Abstract
OBJECTIVE: To validate electrocochleography (ECochG) between an auditory evoked potential (AEP) machine and an established cochlear implant (CI) manufacturer ECochG system.
METHODS: Intraoperative validation study at a tertiary referral center. Patients included adults and children undergoing cochlear implantation. Intraoperative ECochG was measured with both the Intelligent Hearing Systems (IHS) Duet AEP machine and the Cochlear Corporation (CC) ECochG platform. Recording electrodes captured extracochlear measurements through a standard facial recess. Tone bursts were presented from 250 Hz to 2 kHz (~110 dB SPL). A fast Fourier transform (FFT) of the ECochG waveforms at key frequencies was summed into a total response (ECochG-TR). Pearson's correlation was used to evaluate the relationship between IHS-ECochG-TR and CC-ECochG-TR after confirming normality.
RESULTS: Thirty patients were enrolled, with an average age of 67 years (SD 18.8). In the implanted ear, the mean preoperative pure-tone average (PTA; 0.5, 1, 2, and 4 kHz) was 87.4 dB HL (SD 19.3) and the mean preoperative word-recognition score (WRS) was 17.0% correct (SD 19.1). There was a strong correlation (r = 0.905, 95% confidence interval: 0.809 to 0.954) between IHS-ECochG-TR (median 2.30 μV, range 0.1-148.26) and CC-ECochG-TR (median 3.00 μV, range 0.1-239.63). Four patients underwent transtympanic ECochG with the IHS system for feasibility evaluation and achieved similar responses.
CONCLUSION: Extracochlear ECochG has been predictive of CI speech perception performance. The IHS Duet system is a valid measure of extracochlear ECochG for the CI population. Future work will use this system for measuring transtympanic ECochG to improve preoperative estimation of CI performance.
LEVEL OF EVIDENCE: 3. Laryngoscope, 2024.
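The ECochG-TR metric described in this abstract, FFT magnitudes at the stimulus frequencies summed into a single value in microvolts, can be sketched as below. The sampling rate, epoch length, and set of component frequencies are illustrative assumptions, not the authors' recording parameters.

```python
# Hedged sketch of an ECochG total response (ECochG-TR) computation:
# take the FFT magnitude at each tone-burst frequency and sum them.
import numpy as np

FS = 20000  # sampling rate in Hz (assumed)
N = 2000    # samples per averaged waveform (assumed)

def fft_magnitude_at(waveform, freq, fs=FS):
    """Magnitude (uV) of the FFT bin nearest `freq` for an averaged waveform."""
    spectrum = np.abs(np.fft.rfft(waveform)) / len(waveform) * 2
    freqs = np.fft.rfftfreq(len(waveform), d=1 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq))]

def ecochg_tr(responses, component_freqs=(250, 500, 1000, 2000)):
    """Sum FFT magnitudes at key frequencies into a total response (uV).

    `responses` maps stimulus frequency -> averaged waveform for that tone burst.
    """
    return sum(fft_magnitude_at(responses[f], f) for f in component_freqs)

# Synthetic demo: build tone responses with known 1-uV components,
# so the total response should sum to about 4 uV.
t = np.arange(N) / FS
responses = {f: 1.0 * np.sin(2 * np.pi * f * t) for f in (250, 500, 1000, 2000)}
total = ecochg_tr(responses)
```

Given ECochG-TR values from two platforms across patients, the reported agreement statistic could then be computed with, for example, `np.corrcoef` for Pearson's r.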
Affiliation(s)
- Jordan J Varghese, Matthew A Shew, Amit Walia, Shannon M Lefler, Nedim Durakovic, Cameron C Wick, Amanda J Ortmann, Jacques A Herzog, Craig A Buchman: Department of Otolaryngology - Head and Neck Surgery, Washington University School of Medicine, St. Louis, Missouri, U.S.A.
2.
Jacxsens L, Biot L, Escera C, Gilles A, Cardon E, Van Rompaey V, De Hertogh W, Lammers MJW. Frequency-Following Responses in Sensorineural Hearing Loss: A Systematic Review. J Assoc Res Otolaryngol 2024; 25:131-147. PMID: 38334887. PMCID: PMC11018579. DOI: 10.1007/s10162-024-00932-7.
Abstract
PURPOSE: This systematic review aims to assess the impact of sensorineural hearing loss (SNHL) on various frequency-following response (FFR) parameters.
METHODS: Following PRISMA guidelines, a systematic review was conducted using the PubMed, Web of Science, and Scopus databases up to January 2023. Studies evaluating FFRs in patients with SNHL and normal-hearing controls were included.
RESULTS: Sixteen case-control studies were included, revealing variability in acquisition parameters. In the time domain, patients with SNHL exhibited prolonged latencies; the specific waves that were prolonged differed across studies. There was no consensus regarding wave amplitude in the time domain. In the frequency domain, focusing on studies that elicited FFRs with stimuli of 170 ms or longer, participants with SNHL displayed a significantly smaller fundamental frequency (F0) response. Results regarding changes in the temporal fine structure (TFS) were inconsistent.
CONCLUSION: Patients with SNHL may require more time for processing (speech) stimuli, reflected in prolonged latencies, although the exact timing of this delay remains unclear. Additionally, when presented with longer stimuli (≥170 ms), patients with SNHL show difficulties tracking the F0 of (speech) stimuli. No definite conclusions could be drawn on changes in wave amplitude in the time domain or the TFS in the frequency domain. Patient characteristics, acquisition parameters, and FFR outcome parameters differed greatly across studies. Future studies should be performed in larger and carefully matched subject groups, using longer stimuli presented at the same intensity in dB HL for both groups, or at a carefully determined maximum comfortable loudness level.
Affiliation(s)
- Laura Jacxsens: Department of Otorhinolaryngology, Head and Neck Surgery, Antwerp University Hospital (UZA), Drie Eikenstraat 655, 2650 Edegem, Belgium; Resonant Labs Antwerp, Department of Translational Neurosciences, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium; Department of Rehabilitation Sciences and Physiotherapy, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
- Lana Biot: Department of Otorhinolaryngology, Head and Neck Surgery, Antwerp University Hospital (UZA), Edegem, Belgium; Resonant Labs Antwerp, Department of Translational Neurosciences, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
- Carles Escera: Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Catalonia, Spain; Institute of Neurosciences, University of Barcelona, Catalonia, Spain; Institut de Recerca Sant Joan de Déu, Santa Rosa 39-57, 08950 Esplugues de Llobregat, Catalonia, Spain
- Annick Gilles: Department of Otorhinolaryngology, Head and Neck Surgery, Antwerp University Hospital (UZA), Edegem, Belgium; Resonant Labs Antwerp, Department of Translational Neurosciences, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium; Department of Education, Health and Social Work, University College Ghent, Ghent, Belgium
- Emilie Cardon: Department of Otorhinolaryngology, Head and Neck Surgery, Antwerp University Hospital (UZA), Edegem, Belgium; Resonant Labs Antwerp, Department of Translational Neurosciences, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
- Vincent Van Rompaey: Department of Otorhinolaryngology, Head and Neck Surgery, Antwerp University Hospital (UZA), Edegem, Belgium; Resonant Labs Antwerp, Department of Translational Neurosciences, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
- Willem De Hertogh: Department of Rehabilitation Sciences and Physiotherapy, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
- Marc J W Lammers: Department of Otorhinolaryngology, Head and Neck Surgery, Antwerp University Hospital (UZA), Edegem, Belgium; Resonant Labs Antwerp, Department of Translational Neurosciences, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
3.
Xu C, Cheng FY, Medina S, Eng E, Gifford R, Smith S. Objective discrimination of bimodal speech using frequency following responses. Hear Res 2023; 437:108853. PMID: 37441879. DOI: 10.1016/j.heares.2023.108853.
Abstract
Bimodal hearing, in which a contralateral hearing aid is combined with a cochlear implant (CI), provides greater speech recognition benefits than using a CI alone. Factors predicting individual bimodal patient success are not fully understood. Previous studies have shown that bimodal benefits may be driven by a patient's ability to extract fundamental frequency (f0) and/or temporal fine structure cues (e.g., F1). Both of these features may be represented in frequency following responses (FFRs) to bimodal speech. Thus, the goals of this study were to: 1) parametrically examine neural encoding of f0 and F1 in simulated bimodal speech conditions; 2) examine objective discrimination of FFRs to bimodal speech conditions using machine learning; and 3) explore whether FFRs are predictive of perceptual bimodal benefit. Three vowels (/ε/, /i/, and /ʊ/) with identical f0 were manipulated by a vocoder (right ear) and low-pass filters (left ear) to create five bimodal simulations for evoking FFRs: Vocoder-only, Vocoder +125 Hz, Vocoder +250 Hz, Vocoder +500 Hz, and Vocoder +750 Hz. Perceptual performance on the BKB-SIN test was also measured using the same five configurations. Results suggested that neural representations of the f0 and F1 FFR components were enhanced with increasing acoustic bandwidth in the simulated "non-implanted" ear. As spectral differences between vowels emerged in the FFRs with increased acoustic bandwidth, FFRs were more accurately classified and discriminated using a machine learning algorithm. Enhanced f0 and F1 neural encoding with increasing bandwidth was collectively predictive of perceptual bimodal benefit on a speech-in-noise task. Given these results, the FFR may be a useful tool to objectively assess individual variability in bimodal hearing.
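The FFR-discrimination step this abstract describes can be sketched as spectral feature extraction (FFT magnitudes at f0 and its harmonics) followed by a classifier. A simple nearest-centroid rule stands in here for the paper's machine-learning algorithm; the f0 value, feature set, and synthetic "vowels" are illustrative assumptions, not the study's stimuli.

```python
# Hedged sketch: classify FFR-like waveforms from their harmonic spectra.
import numpy as np

FS = 10000   # sampling rate in Hz (assumed)
F0 = 100.0   # shared fundamental of the vowel stimuli (assumed)

def harmonic_features(ffr, n_harmonics=8, fs=FS):
    """FFT magnitudes at f0 and its harmonics: a compact FFR spectrum."""
    spec = np.abs(np.fft.rfft(ffr)) / len(ffr)
    freqs = np.fft.rfftfreq(len(ffr), d=1 / fs)
    return np.array([spec[np.argmin(np.abs(freqs - F0 * h))]
                     for h in range(1, n_harmonics + 1)])

def fit_centroids(X, y):
    """Mean feature vector per class label."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def predict(centroids, x):
    """Label of the nearest class centroid (Euclidean distance)."""
    return min(centroids, key=lambda lab: np.linalg.norm(x - centroids[lab]))

# Synthetic demo: two 'vowels' with the same f0 but different harmonic weights.
rng = np.random.default_rng(0)
t = np.arange(2000) / FS

def make_ffr(weights):
    clean = sum(w * np.sin(2 * np.pi * F0 * (h + 1) * t)
                for h, w in enumerate(weights))
    return clean + 0.05 * rng.standard_normal(t.size)

wa, wb = [1.0, 0.8, 0.1, 0.1], [1.0, 0.1, 0.8, 0.1]
X = np.array([harmonic_features(make_ffr(w)) for w in [wa] * 10 + [wb] * 10])
y = np.array(["a"] * 10 + ["b"] * 10)
centroids = fit_centroids(X, y)
accuracy = np.mean([predict(centroids, x) == lab for x, lab in zip(X, y)])
```

As in the study, classification accuracy rises as spectral differences between the classes grow; here the two synthetic vowels differ only in their second and third harmonic weights.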
Affiliation(s)
- Can Xu, Fan-Yin Cheng, Sarah Medina, Erica Eng, Spencer Smith: Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, 2504A Whitis Ave. (A1100), Austin, TX 78712-0114, USA
- René Gifford: Department of Speech, Language, and Hearing Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
4.
Ghosh R, Hansen JHL. Bilateral Cochlear Implant Processing of Coding Strategies With CCi-MOBILE, an Open-Source Research Platform. IEEE/ACM Trans Audio Speech Lang Process 2023; 31:1839-1850. PMID: 38046574. PMCID: PMC10691824. DOI: 10.1109/taslp.2023.3267608.
Abstract
While speech understanding in quiet is relatively effective for cochlear implant (CI) users, listeners still experience difficulty identifying speakers and localizing sound. To better exploit residual hearing and support speech intelligibility, bilateral and bimodal forms of assisted hearing are becoming popular among CI users. Effective bilateral processing calls for testing precise algorithm synchronization and fitting between the left- and right-ear channels in order to capture interaural time and level difference cues (ITDs and ILDs). This work demonstrates bilateral implant algorithm processing using a custom-made CI research platform, CCi-MOBILE, which is capable of capturing precise source localization information and supports researchers in testing bilateral CI processing in real-time naturalistic environments. Simulation-based, objective, and subjective testing was performed to validate the accuracy of the platform. The subjective test results produced an RMS error of ±8.66° for source localization, which is comparable to the performance of commercial CI processors.
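The platform-accuracy figure quoted above is a root-mean-square error over source-localization trials. A minimal sketch of that metric, with made-up trial data (the azimuths and responses below are illustrative, not the paper's measurements):

```python
# Hedged sketch of an RMS localization-error computation.
import numpy as np

def rms_error(presented_deg, perceived_deg):
    """Root-mean-square localization error in degrees."""
    err = np.asarray(perceived_deg, float) - np.asarray(presented_deg, float)
    return float(np.sqrt(np.mean(err ** 2)))

presented = [-60, -30, 0, 30, 60]   # loudspeaker azimuths (illustrative)
perceived = [-52, -35, 4, 27, 69]   # listener responses (illustrative)
error = rms_error(presented, perceived)
```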
Affiliation(s)
- Ria Ghosh: Center for Robust Speech Systems, CILab - Cochlear Implant Processing Lab, Department of Electrical and Computer Engineering, University of Texas at Dallas, Richardson, TX 75080, USA
- John H L Hansen: Center for Robust Speech Systems, CILab - Cochlear Implant Processing Lab, Department of Electrical and Computer Engineering, Erik Jonsson School of Engineering and Computer Science, University of Texas at Dallas, Richardson, TX 75080, USA; School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, TX 75080, USA
5.
Smith S. Translational Applications of Machine Learning in Auditory Electrophysiology. Semin Hear 2022; 43:240-250. PMID: 36313047. PMCID: PMC9605807. DOI: 10.1055/s-0042-1756166.
Abstract
Machine learning (ML) is transforming nearly every aspect of modern life including medicine and its subfields, such as hearing science. This article presents a brief conceptual overview of selected ML approaches and describes how these techniques are being applied to outstanding problems in hearing science, with a particular focus on auditory evoked potentials (AEPs). Two vignettes are presented in which ML is used to analyze subcortical AEP data. The first vignette demonstrates how ML can be used to determine if auditory learning has influenced auditory neurophysiologic function. The second vignette demonstrates how ML analysis of AEPs may be useful in determining whether hearing devices are optimized for discriminating speech sounds.
Affiliation(s)
- Spencer Smith: Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, Austin, Texas
6.
Holder JT, Holcomb MA, Snapp H, Labadie RF, Vroegop J, Rocca C, Elgandy MS, Dunn C, Gifford RH. Guidelines for Best Practice in the Audiological Management of Adults Using Bimodal Hearing Configurations. Otol Neurotol Open 2022; 2:e011. PMID: 36274668. PMCID: PMC9581116. DOI: 10.1097/ono.0000000000000011.
Abstract
Clinics are treating a growing number of patients with greater amounts of residual hearing. These patients often benefit from a bimodal hearing configuration in which acoustic input from a hearing aid on one ear is combined with electrical stimulation from a cochlear implant on the other ear. The current guidelines aim to review the literature and provide best practice recommendations for the evaluation and treatment of individuals with bilateral sensorineural hearing loss who may benefit from bimodal hearing configurations. Specifically, the guidelines review: benefits of bimodal listening, preoperative and postoperative cochlear implant evaluation and programming, bimodal hearing aid fitting, contralateral routing of signal considerations, bimodal treatment for tinnitus, and aural rehabilitation recommendations.
Affiliation(s)
- Christine Rocca: Guy’s and St. Thomas’ Hearing Implant Centre, London, United Kingdom
7.
Inguscio BMS, Mancini P, Greco A, Nicastri M, Giallini I, Leone CA, Grassia R, Di Nardo W, Di Cesare T, Rossi F, Canale A, Albera A, Giorgi A, Malerba P, Babiloni F, Cartocci G. ‘Musical effort’ and ‘musical pleasantness’: a pilot study on the neurophysiological correlates of classical music listening in adults normal hearing and unilateral cochlear implant users. Hear Balance Commun 2022. DOI: 10.1080/21695717.2022.2079325.
Affiliation(s)
- Patrizia Mancini, Antonio Greco, Maria Nicastri, Ilaria Giallini: Department of Sense Organs, Sapienza University of Rome, Rome, Italy
- Carlo Antonio Leone, Rosa Grassia: Department of Otolaryngology/Head and Neck Surgery, Monaldi Hospital, Naples, Italy
- Walter Di Nardo, Tiziana Di Cesare, Federica Rossi: Otorhinolaryngology and Physiology, Catholic University of Rome, Rome, Italy
- Andrea Canale, Andrea Albera: Division of Otorhinolaryngology, Department of Surgical Sciences, University of Turin, Italy
- Fabio Babiloni: BrainSigns Srl, Rome, Italy; Department of Computer Science, Hangzhou Dianzi University, Hangzhou, China; Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy
- Giulia Cartocci: BrainSigns Srl, Rome, Italy; Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy
8.
Cheng FY, Smith S. Objective Detection of the Speech Frequency Following Response (sFFR): A Comparison of Two Methods. Audiol Res 2022; 12:89-94. PMID: 35200259. PMCID: PMC8869319. DOI: 10.3390/audiolres12010010.
Abstract
Speech frequency following responses (sFFRs) are increasingly used in translational auditory research. Statistically-based automated sFFR detection could aid response identification and provide a basis for stopping rules when recording responses in clinical and/or research applications. In this brief report, sFFRs were measured from 18 normal hearing adult listeners in quiet and speech-shaped noise. Two statistically-based automated response detection methods, the F-test and Hotelling’s T2 (HT2) test, were compared based on detection accuracy and test time. Similar detection accuracy across statistical tests and conditions was observed, although the HT2 test time was less variable. These findings suggest that automated sFFR detection is robust for responses recorded in quiet and speech-shaped noise using either the F-test or HT2 test. Future studies evaluating test performance with different stimuli and maskers are warranted to determine if the interchangeability of test performance extends to these conditions.
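The Hotelling's T2 detection approach this abstract compares can be sketched as a one-sample test on the complex Fourier coefficient at the frequency of interest: each epoch contributes a (real, imaginary) pair, and T2 tests whether the mean vector differs from zero (no response). The stimulus frequency, epoch count, and signal level below are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of one-sample Hotelling's T^2 response detection for sFFR-like
# recordings: a significant p-value indicates a phase-locked response is present.
import numpy as np
from scipy.stats import f as f_dist

def hotelling_t2_detect(epochs, freq, fs):
    """Return (T2, p) for the complex FFT coefficient at `freq` across epochs."""
    n, length = epochs.shape
    k = int(round(freq * length / fs))            # FFT bin of interest
    coefs = np.fft.rfft(epochs, axis=1)[:, k]
    X = np.column_stack([coefs.real, coefs.imag])  # n samples, p = 2 dimensions
    mean = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    t2 = n * mean @ np.linalg.solve(cov, mean)
    p_dims = 2
    f_stat = (n - p_dims) / (p_dims * (n - 1)) * t2  # T^2 -> F conversion
    p_value = f_dist.sf(f_stat, p_dims, n - p_dims)
    return float(t2), float(p_value)

# Synthetic demo: 50 epochs containing a weak 100-Hz response buried in noise.
rng = np.random.default_rng(1)
fs, n_epochs, length = 8000, 50, 800
t = np.arange(length) / fs
epochs = (0.3 * np.sin(2 * np.pi * 100 * t)
          + rng.standard_normal((n_epochs, length)))
t2, p = hotelling_t2_detect(epochs, 100, fs)
```

In a stopping-rule application, this test could be re-run as epochs accumulate and averaging halted once p falls below a chosen criterion.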
9.
Zhang H, Zhang J, Peng G, Ding H, Zhang Y. Bimodal Benefits Revealed by Categorical Perception of Lexical Tones in Mandarin-Speaking Kindergarteners With a Cochlear Implant and a Contralateral Hearing Aid. J Speech Lang Hear Res 2020; 63:4238-4251. PMID: 33186505. DOI: 10.1044/2020_jslhr-20-00224.
Abstract
PURPOSE: Pitch reception poses challenges for individuals with cochlear implants (CIs), and adding a hearing aid (HA) in the nonimplanted ear is potentially beneficial. The current study used fine-scale synthetic speech stimuli to investigate the bimodal benefit for lexical tone categorization in Mandarin-speaking kindergarteners using a CI and an HA in opposite ears.
METHOD: Data were collected from 16 participants who completed two classical tasks for speech categorical perception (CP) in the CI + HA condition and the CI-alone condition. Linear mixed-effects models were constructed to evaluate identification and discrimination scores across device conditions.
RESULTS: The bimodal kindergarteners showed CP for the continuum varying from Mandarin Tone 1 to Tone 2. Moreover, the additional acoustic information from the contralateral HA contributed to improved lexical tone categorization, with a steeper slope, a higher discrimination score for between-category stimulus pairs, and an improved peakedness score (i.e., an increased benefit magnitude for discrimination of between-category over within-category pairs) in the CI + HA condition compared with the CI-alone condition. The bimodal kindergarteners with better residual hearing thresholds at 250 Hz in the nonimplanted ear perceived lexical tones more categorically.
CONCLUSION: The enhanced CP results with bimodal listening provide clear evidence for the clinical practice of fitting a contralateral HA in the nonimplanted ear in kindergarteners with unilateral CIs, with direct benefits from low-frequency acoustic hearing.
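The identification-slope analysis reported above can be sketched by fitting a logistic function to proportion-of-Tone-2 responses along the stimulus continuum; a steeper fitted slope indicates more categorical perception. The continuum steps and response proportions below are synthetic illustrations, not the study's data.

```python
# Hedged sketch: fit a logistic identification function and recover its slope.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, boundary, slope):
    """Proportion of 'Tone 2' responses along the continuum."""
    return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

steps = np.arange(1, 10)                 # 9-step tone continuum (assumed)
true_boundary, true_slope = 5.0, 1.5     # idealized listener parameters
proportions = logistic(steps, true_boundary, true_slope)

# Fit the same function back to the (here noiseless) identification data.
(boundary_hat, slope_hat), _ = curve_fit(
    logistic, steps, proportions, p0=(4.0, 1.0))
```

Comparing fitted slopes between CI + HA and CI-alone conditions would quantify the steepening the study reports for bimodal listening.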
Affiliation(s)
- Hao Zhang: Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China; Research Centre for Language, Cognition, and Neuroscience, Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University
- Jing Zhang, Hongwei Ding: Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Gang Peng: Research Centre for Language, Cognition, and Neuroscience, Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University
- Yang Zhang: Department of Speech-Language-Hearing Sciences and Center for Neurobehavioral Development, University of Minnesota, Minneapolis
10.
Zhang H, Zhang J, Ding H, Zhang Y. Bimodal Benefits for Lexical Tone Recognition: An Investigation on Mandarin-speaking Preschoolers with a Cochlear Implant and a Contralateral Hearing Aid. Brain Sci 2020; 10:238. PMID: 32316466. PMCID: PMC7226140. DOI: 10.3390/brainsci10040238.
Abstract
Pitch perception is known to be difficult for individuals with cochlear implants (CIs), and adding a hearing aid (HA) in the non-implanted ear is potentially beneficial. The current study aimed to investigate the bimodal benefit for lexical tone recognition in Mandarin-speaking preschoolers using a CI and an HA in opposite ears. The child participants were required to complete tone identification in quiet and in noise with CI + HA in comparison with CI alone. While the bimodal listeners showed confusion between Tone 2 and Tone 3 in recognition, the additional acoustic information from the contralateral HA alleviated confusion between these two tones in quiet. Moreover, significant improvement was demonstrated in the CI + HA condition over the CI-alone condition in noise. The bimodal benefit for individual subjects could be predicted by the low-frequency hearing threshold of the non-implanted ear and the duration of bimodal use. The findings support the clinical practice of fitting a contralateral HA in the non-implanted ear for the potential benefit in Mandarin tone recognition in children with CIs. The limitations call for further studies on auditory plasticity on an individual basis to gain insights into the factors contributing to the bimodal benefit or its absence.
Affiliation(s)
- Hao Zhang, Jing Zhang, Hongwei Ding: Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai 200240, China
- Yang Zhang: Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, MN 55455, USA
- Correspondence: (H.D.); (Y.Z.); Tel.: +1-612-624-7878 (Y.Z.)
11.
D'Onofrio KL, Caldwell M, Limb C, Smith S, Kessler DM, Gifford RH. Musical Emotion Perception in Bimodal Patients: Relative Weighting of Musical Mode and Tempo Cues. Front Neurosci 2020; 14:114. PMID: 32174809. PMCID: PMC7054459. DOI: 10.3389/fnins.2020.00114.
Abstract
Several cues are used to convey musical emotion, the two primary cues being musical mode and musical tempo. Specifically, major and minor modes tend to be associated with positive and negative valence, respectively, and songs at fast tempi have been associated with more positive valence compared to songs at slow tempi (Balkwill and Thompson, 1999; Webster and Weir, 2005). In Experiment I, we examined the relative weighting of musical tempo and musical mode among adult cochlear implant (CI) users combining electric and contralateral acoustic stimulation, or "bimodal" hearing. Our primary hypothesis was that bimodal listeners would utilize both tempo and mode cues in their musical emotion judgments in a manner similar to normal-hearing listeners. Our secondary hypothesis was that low-frequency (LF) spectral resolution in the non-implanted ear, as quantified via psychophysical tuning curves (PTCs) at 262 and 440 Hz, would be significantly correlated with the degree of bimodal benefit for musical emotion perception. In Experiment II, we investigated across-channel spectral resolution using a spectral modulation detection (SMD) task and neural representation of temporal fine structure via the frequency following response (FFR) for a 170-ms /da/ stimulus. Results indicate that CI-alone performance was driven almost exclusively by tempo cues, whereas bimodal listening demonstrated use of both tempo and mode. Additionally, bimodal benefit for musical emotion perception may be correlated with spectral resolution in the non-implanted ear via SMD, as well as with neural representation of F0 amplitude via the FFR, though further study with a larger sample size is warranted. Thus, contralateral acoustic hearing can offer significant benefit for musical emotion perception, and the degree of benefit may depend on the spectral resolution of the non-implanted ear.
Affiliation(s)
- Kristen L D'Onofrio, David M Kessler, René H Gifford: Cochlear Implant Research Laboratory, Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, United States
- Charles Limb: Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, San Francisco, CA, United States
- Spencer Smith: Department of Communication Sciences and Disorders, The University of Texas at Austin, Austin, TX, United States