1
Yoon YS, Whitaker R, White N. Frequency importance functions in simulated bimodal cochlear-implant users with spectral holes. J Acoust Soc Am. 2024;155:3589-3599. PMID: 38829154; PMCID: PMC11151433; DOI: 10.1121/10.0026220.
Abstract
Frequency importance functions (FIFs) for simulated bimodal hearing were derived from sentence perception scores measured in quiet and in noise. Acoustic hearing was simulated using low-pass filtering. Electric hearing was simulated using a six-channel vocoder with three input frequency ranges, resulting in overlap, meet, and gap maps relative to the acoustic cutoff frequency. Spectral holes in the speech spectra were created within the electric stimulation by setting channel amplitudes to zero. FIFs differed significantly between frequency maps. In quiet, the three FIFs were similar, with weights increasing gradually from the first three channels toward channels 5 and 6, although the most and least heavily weighted channels varied slightly across maps. In noise, the patterns of the three FIFs resembled those in quiet, but with weights increasing more steeply from the first four channels toward channels 5 and 6. Thus, channels 5 and 6 contributed the most to speech perception, and channels 1 and 2 the least, regardless of frequency map. These results suggest that the contribution of cochlear implant frequency bands to bimodal speech perception depends on the degree of frequency overlap between acoustic and electric stimulation and on whether noise is present.
Affiliation(s)
- Yang-Soo Yoon
- Department of Communication Sciences and Disorders, Baylor University, Waco, Texas 76798, USA
- Reagan Whitaker
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, Tennessee 37232, USA
- Naomi White
- Department of Communication Sciences and Disorders, Baylor University, Waco, Texas 76798, USA
2
Yoon YS, Straw S. Interactions Between Slopes of Residual Hearing and Frequency Maps in Simulated Bimodal and Electric-Acoustic Stimulation Hearing. J Speech Lang Hear Res. 2024;67:282-295. PMID: 38092067; PMCID: PMC11000803; DOI: 10.1044/2023_jslhr-22-00629.
Abstract
PURPOSE The aim of this study was to determine the effects of residual hearing slopes and cochlear implant frequency map settings on bimodal and electric-acoustic stimulation (EAS) benefits in speech perception. METHOD Adults with normal hearing were recruited for simulated bimodal and EAS hearing. Sentence perception was measured unilaterally and bilaterally. For the acoustic stimulation, three slopes of high-frequency hearing loss were created using low-pass filters with a cutoff frequency of 500 Hz: steep (96 dB/octave), medium (48 dB/octave), and shallow (24 dB/octave). For the electric stimulation, an eight-channel sinewave vocoder with a fixed output frequency range (1000-7938 Hz) was used with three input frequency ranges, creating overlap (188-7938 Hz), meet (500-7938 Hz), and gap (750-7938 Hz) maps relative to the cutoff frequency of the acoustic stimulation. RESULTS The largest bimodal/EAS benefit occurred with the shallow slope, and the smallest with the steep slope. The effect of slope on bimodal/EAS benefit was greatest with the meet or gap map and smallest with the overlap map. EAS benefit exceeded bimodal benefit at higher signal-to-noise ratios regardless of frequency map. CONCLUSIONS The results indicate that the correlation between bimodal/EAS benefit and residual hearing could improve if slopes were considered. The optimal frequency map differed across slopes, suggesting that the slope of residual hearing should be carefully considered when fitting bimodal and EAS hearing. EAS hearing provided greater benefit than bimodal hearing, suggesting that spectrotemporal integration is better within one ear (i.e., EAS) than across ears (i.e., bimodal).
Affiliation(s)
- Yang-Soo Yoon
- Department of Communication Sciences and Disorders, Baylor University, Waco, TX
- Shea Straw
- Department of Communication Sciences and Disorders, Baylor University, Waco, TX
3
Holder JT, Holcomb MA, Snapp H, Labadie RF, Vroegop J, Rocca C, Elgandy MS, Dunn C, Gifford RH. Guidelines for Best Practice in the Audiological Management of Adults Using Bimodal Hearing Configurations. Otol Neurotol Open. 2022;2:e011. PMID: 36274668; PMCID: PMC9581116; DOI: 10.1097/ono.0000000000000011.
Abstract
Clinics are treating a growing number of patients with greater amounts of residual hearing. These patients often benefit from a bimodal hearing configuration, in which acoustic input from a hearing aid on one ear is combined with electrical stimulation from a cochlear implant on the other ear. The current guidelines review the literature and provide best practice recommendations for the evaluation and treatment of individuals with bilateral sensorineural hearing loss who may benefit from bimodal hearing configurations. Specifically, the guidelines cover: benefits of bimodal listening, preoperative and postoperative cochlear implant evaluation and programming, bimodal hearing aid fitting, contralateral routing of signal considerations, bimodal treatment for tinnitus, and aural rehabilitation recommendations.
Affiliation(s)
- Christine Rocca
- Guy’s and St. Thomas’ Hearing Implant Centre, London, United Kingdom
4
D'Onofrio KL, Gifford RH. Bimodal Benefit for Music Perception: Effect of Acoustic Bandwidth. J Speech Lang Hear Res. 2021;64:1341-1353. PMID: 33784471; PMCID: PMC8608177; DOI: 10.1044/2020_jslhr-20-00390.
Abstract
Purpose The challenges associated with cochlear implant (CI)-mediated listening are well documented; however, they can be mitigated through the provision of aided acoustic hearing in the contralateral ear, a configuration termed bimodal hearing. This study extends previous literature by examining the effect of acoustic bandwidth in the non-CI ear on music perception. The primary aim was to determine the minimum and optimum acoustic bandwidths necessary to obtain bimodal benefit for music perception and speech perception. Method Participants included 12 adult bimodal listeners and 12 adult control listeners with normal hearing. Music perception was assessed via measures of timbre perception and subjective sound quality of real-world music samples. Speech perception was assessed via monosyllabic word recognition in quiet. Acoustic stimuli were presented to the non-CI ear in the following filter conditions: < 125, < 250, < 500, and < 750 Hz, and wideband (full bandwidth). Results Generally, performance for all stimuli improved with increasing acoustic bandwidth; however, the bandwidth that is minimally and optimally beneficial may depend on stimulus type. On average, music sound quality required wideband amplification, whereas speech recognition with a male talker in quiet required only a narrow acoustic bandwidth (< 250 Hz) for significant benefit. Still, average speech recognition performance continued to improve with increasing bandwidth. Conclusion Further research is warranted to examine optimal acoustic bandwidth for additional stimulus types; however, these findings indicate that wideband amplification is most appropriate for speech and music perception in individuals with bimodal hearing.
Affiliation(s)
- Kristen L D'Onofrio
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- René H Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
5
Turton L, Souza P, Thibodeau L, Hickson L, Gifford R, Bird J, Stropahl M, Gailey L, Fulton B, Scarinci N, Ekberg K, Timmer B. Guidelines for Best Practice in the Audiological Management of Adults with Severe and Profound Hearing Loss. Semin Hear. 2020;41:141-246. PMID: 33364673; PMCID: PMC7744249; DOI: 10.1055/s-0040-1714744.
Abstract
Individuals with severe to profound hearing loss are likely to present with complex listening needs that require evidence-based solutions. This document is intended to inform the practice of hearing care professionals who are involved in the audiological management of adults with a severe to profound degree of hearing loss and will highlight the special considerations and practices required to optimize outcomes for these individuals.
Affiliation(s)
- Laura Turton
- Department of Audiology, South Warwickshire NHS Foundation Trust, Warwick, United Kingdom
- Pamela Souza
- Communication Sciences and Disorders and Knowles Hearing Center, Northwestern University, Evanston, Illinois
- Linda Thibodeau
- University of Texas at Dallas, Callier Center for Communication Disorders, Dallas, Texas
- Louise Hickson
- School of Health and Rehabilitation Sciences, The University of Queensland, Australia
- René Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
- Judith Bird
- Cambridge University Hospital NHS Foundation Trust, United Kingdom
- Maren Stropahl
- Department of Science and Technology, Sonova AG, Stäfa, Switzerland
- Nerina Scarinci
- School of Health and Rehabilitation Sciences, The University of Queensland, Australia
- Katie Ekberg
- School of Health and Rehabilitation Sciences, The University of Queensland, Australia
- Barbra Timmer
- School of Health and Rehabilitation Sciences, The University of Queensland, Australia
6
Warren SE, Noelle Dunbar M, Bosworth C, Agrawal S. Evaluation of a novel bimodal fitting formula in Advanced Bionics cochlear implant recipients. Cochlear Implants Int. 2020;21:323-337. PMID: 32664814; DOI: 10.1080/14670100.2020.1787622.
Abstract
Purpose: The study's objectives were to (1) evaluate benefit from a novel bimodal fitting formula (Adaptive Phonak Digital Bimodal Fitting Formula [APDB]) and (2) compare outcomes between APDB and a traditional fitting formula (NAL-NL2). Methods: This prospective study evaluated outcomes in ten adults with unilateral Advanced Bionics (AB) cochlear implants (CIs). Participants were tested bimodally with NAL-NL2 and APDB programmed on Naída Link UP hearing aids (HAs). Measures of speech perception, sound quality, and preference were obtained with two bimodal configurations (CI + HA[NAL-NL2] and CI + HA[APDB]). Participants used the CI + HA[APDB] configuration for an acclimation period, after which measures were repeated. Results: Significant bimodal benefit was measured with both HA fitting formulae for speech perception in noise compared to the CI-only condition. Improved individual outcomes with APDB were observed, but group differences were not statistically significant. Participants reported subjective benefit from APDB in blind comparisons of preference and sound quality. Conclusions: Significant benefit was found with both bimodal conditions compared to the CI-only condition; however, bimodal speech perception results did not differ significantly between formulae. Given the individual improvements in speech perception and the overall subjective preference for APDB, clinicians should consider APDB for AB CI recipients.
Affiliation(s)
- Sarah E Warren
- School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA; Arkansas Children's Hospital, Little Rock, AR, USA
- M Noelle Dunbar
- Columbia University Irving Medical Center, New York, NY, USA
7
Speech Perception Changes in the Acoustically Aided, Nonimplanted Ear after Cochlear Implantation: A Multicenter Study. J Clin Med. 2020;9:1758. PMID: 32517138; PMCID: PMC7356938; DOI: 10.3390/jcm9061758.
Abstract
In recent years there has been an increasing percentage of cochlear implant (CI) users who have usable residual hearing in the contralateral, nonimplanted ear, typically aided by acoustic amplification. This raises the issue of the extent to which the signal presented through the cochlear implant may influence how listeners process information in the acoustically stimulated ear. This multicenter retrospective study examined pre- to postoperative changes in speech perception in the nonimplanted ear, the implanted ear, and both together. Results in the latter two conditions showed the expected increases, but speech perception in the nonimplanted ear showed a modest yet meaningful decrease that could not be completely explained by changes in unaided thresholds, hearing aid malfunction, or several other demographic variables. Decreases in speech perception in the nonimplanted ear were more likely in individuals who had better levels of speech perception in the implanted ear, and in those who had better speech perception in the implanted than in the nonimplanted ear. This raises the possibility that, in some cases, bimodal listeners may rely on the higher quality signal provided by the implant and may disregard or even neglect the input provided by the nonimplanted ear.
8
Music Is More Enjoyable With Two Ears, Even If One of Them Receives a Degraded Signal Provided By a Cochlear Implant. Ear Hear. 2020;41:476-490. DOI: 10.1097/aud.0000000000000771.
9
Dwyer RT, Roberts J, Gifford RH. Effect of Microphone Configuration and Sound Source Location on Speech Recognition for Adult Cochlear Implant Users with Current-Generation Sound Processors. J Am Acad Audiol. 2020;31:578-589. PMID: 32340055; DOI: 10.1055/s-0040-1709449.
Abstract
BACKGROUND Microphone location has been shown to influence speech recognition, with a microphone placed at the entrance to the ear canal yielding higher levels of speech recognition than top-of-the-pinna placement. Although this work currently influences cochlear implant programming practices, prior studies were completed with previous-generation microphone and sound processor technology, so their applicability to current clinical practice is unclear. PURPOSE To investigate how microphone location (e.g., at the entrance to the ear canal, at the top of the pinna), speech-source location, and microphone configuration (e.g., omnidirectional, directional) influence speech recognition for adult CI recipients using the latest sound processor technology. RESEARCH DESIGN Single-center prospective study using a within-subjects, repeated-measures design. STUDY SAMPLE Eleven experienced adult Advanced Bionics cochlear implant recipients (five bilateral, six bimodal) using a Naída CI Q90 sound processor were recruited for this study. DATA COLLECTION AND ANALYSIS Sentences were presented from a single loudspeaker at 65 dBA from source azimuths of 0°, 90°, or 270°, with semidiffuse noise originating from the remaining loudspeakers in the R-SPACE array. Individualized signal-to-noise ratios were determined to obtain 50% correct in the unilateral cochlear implant condition with the signal at 0°. Performance was compared across the following microphone sources: T-Mic 2, integrated processor microphone (formerly the behind-the-ear microphone), processor microphone + T-Mic 2, and two types of beamforming: monaural adaptive beamforming (UltraZoom) and binaural beamforming (StereoZoom). Repeated-measures analyses were completed for both speech recognition and microphone output for each microphone location and configuration as well as sound source location. A two-way analysis of variance, with microphone and azimuth as the independent variables and output for pink noise as the dependent variable, was used to characterize the acoustic output of each microphone source. RESULTS No significant differences in speech recognition across omnidirectional microphone locations were observed at any source azimuth or listening condition. Secondary findings were that (1) omnidirectional microphone configurations afforded significantly higher speech recognition than directional configurations when speech was directed to ±90°, (2) omnidirectional microphone output was significantly greater when the signal was presented off-axis, and (3) processor microphone output was significantly greater than T-Mic 2 output when the sound originated from 0°, which contributed to better aided detection at 2 and 6 kHz with the processor microphone in this group. CONCLUSIONS Unlike with previous-generation microphones, we found no statistically significant effect of microphone location on speech recognition in noise from any source azimuth. Directional microphones significantly improved speech recognition in the most difficult listening environments.
Affiliation(s)
- Robert T Dwyer
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
- Jillian Roberts
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
- René H Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee; Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, Tennessee
10
Digeser FM, Engler M, Hoppe U. Comparison of bimodal benefit for the use of DSL v5.0 and NAL-NL2 in cochlear implant listeners. Int J Audiol. 2019;59:383-391. PMID: 31809219; DOI: 10.1080/14992027.2019.1697902.
Abstract
Objective: For a group of bimodal subjects with moderate to severe hearing loss contralateral to the cochlear implant (CI), the bimodal benefit of the hearing aid (HA) gain prescriptions DSL v5.0 and NAL-NL2 and of the recipients' own gain setting was assessed. Design: Speech perception in quiet and in noise, as well as self-reported ratings of benefit, was determined for all three gain settings. Speech tests were performed in the bimodal, HA-alone, and CI-alone conditions. The bimodal benefit for each prescription was computed as the difference between the score in the bimodal condition and the score of the better ear. Study Sample: Twenty adults with post-lingual hearing loss. Results: Speech perception with DSL v5.0 was significantly better than with NAL-NL2 and the own prescription in both quiet and noise. The median bimodal benefit was highest for DSL v5.0, averaging 15 percentage points for both words in quiet and sentences in noise. Conclusions: DSL v5.0 and NAL-NL2 are both suitable for HA fitting in bimodal users. For subjects with moderate to severe hearing loss and HA experience contralateral to the implanted side, DSL v5.0 may provide better speech perception and bimodal benefit.
Affiliation(s)
- Frank M Digeser
- Audiologie, HNO Klinik, Universitätsklinikum Erlangen, Erlangen, Germany
- Max Engler
- Audiologie, HNO Klinik, Universitätsklinikum Erlangen, Erlangen, Germany
- Ulrich Hoppe
- Audiologie, HNO Klinik, Universitätsklinikum Erlangen, Erlangen, Germany