1. Bernstein JGW, Voelker J, Phatak SA. Headphones over the cochlear-implant sound processor to replace direct audio input. JASA Express Lett 2024;4:094406. PMID: 39315944. DOI: 10.1121/10.0028737.
Abstract
Psychoacoustic stimulus presentation to the cochlear implant via direct audio input (DAI) is no longer possible for many newer sound processors (SPs). This study assessed the feasibility of placing circumaural headphones over the SP. Calibration spectra for loudspeaker, DAI, and headphone modalities were estimated by measuring cochlear-implant electrical output levels for tones presented to SPs on an acoustic manikin. Differences in calibration spectra between modalities arose mainly from microphone-response characteristics (high-frequency differences between DAI and the other modalities) or a proximity effect (low-frequency differences between headphones and loudspeaker). Calibration tables are provided to adjust for differences between the three modalities.
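For readers wiring such corrections into test software, the sketch below shows one plausible way to apply a modality calibration table; the function name and the offset values are placeholders, not the published tables from this paper.

```python
import numpy as np

# Placeholder per-frequency offsets (dB): level correction to apply to a nominal
# loudspeaker-calibrated level when presenting through circumaural headphones
# placed over the sound processor. Substitute the published calibration tables.
HEADPHONE_OFFSET_DB = {250: 4.0, 500: 2.5, 1000: 0.5, 2000: -1.0, 4000: -2.0}

def headphone_presentation_level(freq_hz, loudspeaker_level_db):
    """Interpolate the calibration table and return the level to request under
    headphones so that the CI electrical output matches the loudspeaker case."""
    freqs = np.array(sorted(HEADPHONE_OFFSET_DB))
    offsets = np.array([HEADPHONE_OFFSET_DB[f] for f in freqs])
    return loudspeaker_level_db + np.interp(freq_hz, freqs, offsets)

print(headphone_presentation_level(750, 65.0))  # level for a 750-Hz tone nominally at 65 dB
```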
Affiliation(s)
- Joshua G W Bernstein
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889
- Julianna Voelker
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889
- Sandeep A Phatak
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889
2. Andren KG, Duffin K, Ryan MT, Riley CA, Tolisano AM. Postoperative optimization of cochlear implantation for single sided deafness and asymmetric hearing loss: a systematic review. Cochlear Implants Int 2023;24:342-353. PMID: 37490782. DOI: 10.1080/14670100.2023.2239512.
Abstract
OBJECTIVE Identify and evaluate the effectiveness of methods for improving postoperative cochlear implant (CI) hearing performance in subjects with single-sided deafness (SSD) and asymmetric hearing loss (AHL). DATA SOURCES Embase, PubMed, Scopus. REVIEW METHODS Systematic review and narrative synthesis. English-language studies of adult CI recipients with SSD and AHL reporting a postoperative intervention and comparative audiometric data pertaining to speech in noise, speech in quiet, and sound localization were included. RESULTS 32 studies met criteria for full-text review and 6 (n = 81) met final inclusion criteria. Interventions were categorized as formal auditory training, programming techniques, or hardware optimization. Formal auditory training (n = 10) found no objective improvement in hearing outcomes. Experimental CI maps did not improve audiologic outcomes (n = 9). Programmed CI signal delays to improve synchronization demonstrated improved sound localization (n = 12). Hardware optimization, including multidirectional (n = 29) and remote (n = 11) microphones, improved sound localization and speech in noise, respectively. CONCLUSION Few studies meeting inclusion criteria and small sample sizes highlight the need for further study. Formal auditory training did not appear to improve hearing outcomes. Programming techniques, such as CI signal delay, and hardware optimization, such as multidirectional and remote microphones, show promise to improve outcomes for SSD and AHL CI users.
Affiliation(s)
- Kristofer G Andren
- Department of Otolaryngology - Head & Neck Surgery, San Antonio Uniformed Services Health Education Consortium, San Antonio, TX, USA
- Kevin Duffin
- Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- Matthew T Ryan
- Department of Otolaryngology - Head & Neck Surgery, Walter Reed National Military Medical Center, Bethesda, MD, USA
- Charles A Riley
- Department of Otolaryngology - Head & Neck Surgery, Walter Reed National Military Medical Center, Bethesda, MD, USA
- Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- Anthony M Tolisano
- Department of Otolaryngology - Head & Neck Surgery, Walter Reed National Military Medical Center, Bethesda, MD, USA
- Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
3. Öz O, D'Alessandro HD, Batuk MÖ, Sennaroğlu G, Govaerts PJ. Assessment of Binaural Benefits in Hearing and Hearing-Impaired Listeners. J Speech Lang Hear Res 2023;66:3633-3648. PMID: 37494143. DOI: 10.1044/2023_jslhr-23-00077.
Abstract
PURPOSE The purpose of this study was to (a) investigate which speech material is most appropriate as the stimulus in head shadow effect (HSE) and binaural squelch (SQ) tests, (b) obtain normative values for both tests using the material judged optimal, and (c) explore the results in bilateral cochlear implant (CI) users. METHOD Study participants consisted of 30 normal-hearing (NH) persons and 34 bilateral CI users. This study consisted of three phases. In the first phase, three different speech materials, (1) monosyllabic words, (2) spondee words, and (3) sentences, were compared in terms of (a) effect size, (b) test-retest reliability, and (c) interindividual variability. In the second phase, the speech material selected in the first phase was used to test a further 24 NHs to obtain normative values for both tests. In the third phase, the tests were administered to a further 23 bilateral CI users, together with a localization test and the Speech, Spatial, and Qualities of Hearing scale. RESULTS The results of the first phase indicated that spondees and sentences were more robust materials compared with monosyllables. Although the effect size and interindividual variability were comparable for spondees and sentences, sentences had higher test-retest reliability in this sample of CI users. With sentences, the mean (± standard deviation) HSE and SQ in the NH group were 58 ± 14% and 22 ± 11%, respectively. In the CI group, the mean HSE and SQ were 49 ± 13% and 13 ± 14%, respectively. There were no statistically significant correlations between the test results and the interval between the implantations, the length of binaural listening experience, or the asymmetry between the ears. CONCLUSIONS Sentences are preferred as stimulus material in the binaural HSE and SQ tests. Normative data are given for HSE and SQ with the LiCoS (linguistically controlled sentences) test. HSE is present for all bilateral CI users, whereas SQ is present in approximately seven out of 10 cases.
Affiliation(s)
- Okan Öz
- The Eargroup, Antwerp, Belgium
- Department of Audiology, Faculty of Health Sciences, Hacettepe University, Ankara, Turkey
- Merve Özbal Batuk
- Department of Audiology, Faculty of Health Sciences, Hacettepe University, Ankara, Turkey
- Gonca Sennaroğlu
- Department of Audiology, Faculty of Health Sciences, Hacettepe University, Ankara, Turkey
- Paul J Govaerts
- The Eargroup, Antwerp, Belgium
- Faculty of Medicine and Health Sciences, Translational Neurosciences, Otorhinolaryngology & Head and Neck Surgery, University of Antwerp, Belgium
4. Stronks HC, Briaire J, Frijns J. Beamforming and Single-Microphone Noise Reduction: Effects on Signal-to-Noise Ratio and Speech Recognition of Bimodal Cochlear Implant Users. Trends Hear 2022;26:23312165221112762. PMID: 35862265; PMCID: PMC9310275. DOI: 10.1177/23312165221112762.
Abstract
We have investigated the effectiveness of three noise-reduction algorithms, namely an adaptive monaural beamformer (MB), a fixed binaural beamformer (BB), and a single-microphone stationary-noise reduction algorithm (SNRA), by assessing the speech reception threshold (SRT) in a group of 15 bimodal cochlear implant users. Speech was presented frontally towards the listener and background noise was established as a homogeneous field of long-term speech-spectrum-shaped (LTSS) noise or 8-talker babble. We pursued four research questions: whether the benefits of beamforming on the SRT differ between LTSS noise and 8-talker babble; whether BB is more effective than MB; whether SNRA improves the SRT in LTSS noise; and whether the SRT benefits of MB and BB are comparable to their improvement of the signal-to-noise ratio (SNR). The results showed that MB and BB significantly improved SRTs by an average of 2.6 dB and 2.9 dB, respectively. These benefits did not statistically differ between noise types or between the two beamformers. By contrast, physical SNR improvements obtained with a manikin revealed substantially greater benefits of BB (6.6 dB) than MB (3.3 dB). SNRA did not significantly affect SRTs in omnidirectional microphone settings, nor in combination with MB and BB. We conclude that in the group of bimodal listeners tested, BB had no additional benefit for speech recognition over MB in homogeneous noise, despite the finding that BB had a substantially larger benefit on the SNR than MB. SNRA did not improve speech recognition.
Affiliation(s)
- H Christiaan Stronks
- Department of Otorhinolaryngology - Head & Neck Surgery, Leiden University Medical Center, Leiden, The Netherlands
- Jeroen Briaire
- Department of Otorhinolaryngology - Head & Neck Surgery, Leiden University Medical Center, Leiden, The Netherlands
- Johan Frijns
- Department of Otorhinolaryngology - Head & Neck Surgery, Leiden University Medical Center, Leiden, The Netherlands
- Leiden Institute for Brain and Cognition, Leiden, The Netherlands
5. Imsiecke M, Krüger B, Büchner A, Lenarz T, Nogueira W. Interaction Between Electric and Acoustic Stimulation Influences Speech Perception in Ipsilateral EAS Users. Ear Hear 2021;41:868-882. PMID: 31592902; PMCID: PMC7676483. DOI: 10.1097/aud.0000000000000807.
Abstract
OBJECTIVES The aim of this study was to determine electric-acoustic masking in cochlear implant users with ipsilateral residual hearing and different electrode insertion depths and to investigate the influence on speech reception. The effects of three fitting strategies on speech reception are compared: a meet fitting, an overlap fitting, and a newly developed masking-adjusted fitting (UNMASKfit). If electric-acoustic masking has a detrimental effect on speech reception, the individualized UNMASKfit map might be able to reduce masking and thereby enhance speech reception. DESIGN Fifteen experienced MED-EL Flex electrode recipients with ipsilateral residual hearing participated in a crossover design study using three fitting strategies for 4 weeks each. The following strategies were compared: (1) a meet fitting, dividing the frequency range between electric and acoustic stimulation, (2) an overlap fitting, delivering part of the frequency range both acoustically and electrically, and (3) the UNMASKfit, reducing the electric stimulation according to the individual electric-on-acoustic masking strength. A psychoacoustic masking procedure was used to measure the changes in acoustic thresholds due to the presence of electric maskers. Speech reception was measured in noise with the Oldenburg Matrix Sentence test. RESULTS Behavioral thresholds of acoustic probe tones were significantly elevated in the presence of electric maskers. Maximal masking was observed when the difference in location between the electric and acoustic stimulation was around one octave in place frequency. Speech reception scores and strength of masking showed a dependency on residual hearing, and speech reception was significantly reduced in the overlap fitting strategy. Electric-acoustic stimulation significantly improved speech reception over electric stimulation alone, with a tendency toward a larger benefit with the UNMASKfit map. In addition, masking was significantly inversely correlated to the speech reception performance difference between the overlap and the meet fitting. CONCLUSIONS (1) This study confirmed the interaction between ipsilateral electric and acoustic stimulation in a psychoacoustic masking experiment. (2) The overlap fitting yielded poorer speech reception performance in stationary noise, especially in subjects with strong masking. (3) The newly developed UNMASKfit strategy yielded similar speech reception thresholds with an enhanced acoustic benefit, while at the same time reducing the electric stimulation. This could be beneficial in the long term if applied as a standard fitting, as hair cells are exposed to less potentially adverse electric stimulation. In this study, the UNMASKfit allowed the participants to make better use of their natural hearing even after 1 month of adaptation. It might be feasible to transfer these results to the clinic by fitting patients with the UNMASKfit upon their first fitting appointment, so that longer adaptation times can further improve speech reception.
Affiliation(s)
- Marina Imsiecke
- Department of Otorhinolaryngology, Hanover Medical School, Hannover, Germany
- Benjamin Krüger
- Department of Otorhinolaryngology, Hanover Medical School, Hannover, Germany
- Cluster of Excellence Hearing4all, Hanover, Germany
- Andreas Büchner
- Department of Otorhinolaryngology, Hanover Medical School, Hannover, Germany
- Cluster of Excellence Hearing4all, Hanover, Germany
- Thomas Lenarz
- Department of Otorhinolaryngology, Hanover Medical School, Hannover, Germany
- Cluster of Excellence Hearing4all, Hanover, Germany
- Waldo Nogueira
- Department of Otorhinolaryngology, Hanover Medical School, Hannover, Germany
- Cluster of Excellence Hearing4all, Hanover, Germany
6. Sheffield SW, Goupell MJ, Spencer NJ, Stakhovskaya OA, Bernstein JGW. Binaural Optimization of Cochlear Implants: Discarding Frequency Content Without Sacrificing Head-Shadow Benefit. Ear Hear 2021;41:576-590. PMID: 31436754; PMCID: PMC7028504. DOI: 10.1097/aud.0000000000000784.
Abstract
OBJECTIVES Single-sided deafness cochlear-implant (SSD-CI) listeners and bilateral cochlear-implant (BI-CI) listeners gain near-normal levels of head-shadow benefit but limited binaural benefits. One possible reason for these limited binaural benefits is that cochlear places of stimulation tend to be mismatched between the ears. SSD-CI and BI-CI patients might benefit from a binaural fitting that reallocates frequencies to reduce interaural place mismatch. However, this approach could reduce monaural speech recognition and head-shadow benefit by excluding low- or high-frequency information from one ear. This study examined how much frequency information can be excluded from a CI signal in the poorer-hearing ear without reducing head-shadow benefits and how these outcomes are influenced by interaural asymmetry in monaural speech recognition. DESIGN Speech-recognition thresholds for sentences in speech-shaped noise were measured for 6 adult SSD-CI listeners, 12 BI-CI listeners, and 9 normal-hearing listeners presented with vocoder simulations. Stimuli were presented using nonindividualized in-the-ear or behind-the-ear head-related impulse-response simulations with speech presented from a 70° azimuth (poorer-hearing side) and noise from -70° (better-hearing side), thereby yielding a better signal-to-noise ratio (SNR) at the poorer-hearing ear. Head-shadow benefit was computed as the improvement in bilateral speech-recognition thresholds gained from enabling the CI in the poorer-hearing, better-SNR ear. High- or low-pass filtering was systematically applied to the head-related impulse-response-filtered stimuli presented to the poorer-hearing ear. For the SSD-CI listeners and SSD-vocoder simulations, only high-pass filtering was applied, because the CI frequency allocation would never need to be adjusted downward to frequency-match the ears. For the BI-CI listeners and BI-vocoder simulations, both low- and high-pass filtering were applied. The normal-hearing listeners were tested with two levels of performance to examine the effect of interaural asymmetry in monaural speech recognition (vocoder synthesis-filter slopes: 5 or 20 dB/octave). RESULTS Mean head-shadow benefit was smaller for the SSD-CI listeners (~7 dB) than for the BI-CI listeners (~14 dB). For SSD-CI listeners, frequencies <1236 Hz could be excluded; for BI-CI listeners, frequencies <886 or >3814 Hz could be excluded from the poorer-hearing ear without reducing head-shadow benefit. Bilateral performance showed greater immunity to filtering than monaural performance, with gradual changes in performance as a function of filter cutoff. Real and vocoder-simulated CI users with larger interaural asymmetry in monaural performance had less head-shadow benefit. CONCLUSIONS The "exclusion frequency" ranges that could be removed without diminishing head-shadow benefit are interpreted in terms of low importance in the speech intelligibility index and a small head-shadow magnitude at low frequencies. Although groups and individuals with greater performance asymmetry gained less head-shadow benefit, the magnitudes of these factors did not predict the exclusion frequency range. Overall, these data suggest that for many SSD-CI and BI-CI listeners, the frequency allocation for the poorer-ear CI can be shifted substantially without sacrificing head-shadow benefit, at least for energetic maskers. Considering the two ears together as a single system may allow greater flexibility in discarding redundant frequency content from a CI in one ear when considering bilateral programming solutions aimed at reducing interaural frequency mismatch.
Affiliation(s)
- Sterling W. Sheffield
- Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville, FL, USA
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD, USA
- Matthew J. Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, USA
- Olga A. Stakhovskaya
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD, USA
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, USA
- Joshua G. W. Bernstein
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD, USA
7.
Abstract
OBJECTIVES Currently, bilateral cochlear implants (CIs) are independently programmed in clinics using frequency allocations based on the relative location of a given electrode from the end of each electrode array. By pairing electrodes based on this method, bilateral CI recipients may have decreased sensitivity to interaural time differences (ITD) and/or interaural level differences (ILD), two cues critical for binaural tasks. There are multiple different binaural measures that can potentially be used to determine the optimal way to pair electrodes across the ears. Previous studies suggest that the optimal electrode pairing between the left and right ears may vary depending on the binaural task used. These studies, however, have only used one reference location or a single bilateral CI user. In both instances, it is difficult to determine if the results that were obtained reflect a measurement error or a systematic difference across binaural tasks. It is also difficult to determine from these studies if the differences between the three cues vary across electrode regions, which could result from differences in the availability of binaural cues across frequency regions. The purpose of this study was to determine if, after experience-dependent adaptation, there are systematic differences in the optimal pairing of electrodes at different points along the array for the optimal perception of ITD, ILD, and pitch. DESIGN Data from seven bilateral Nucleus users was collected and analyzed. Participants were tested with ITD, ILD, and pitch-matching tasks using five different reference electrodes in one ear, spaced across the array. Comparisons were conducted to determine if the optimal bilateral electrode pairs systematically differed in different regions depending on whether they were measured based on ITD sensitivity, ILD sensitivity, or pitch matching, and how those pairs differed from the pairing in the participants' clinical programs. RESULTS Results indicate that there was a significant difference in the optimal pairing depending on the cue measured, but only at the basal end of the array. CONCLUSION The results suggest that optimal electrode pairings differ depending on the cue measured to determine optimal pairing, at least for the basal end of the array. This also suggests that the improvements seen when using optimally paired electrodes may be tied to the particular percept being measured both to determine electrode pairing and to assess performance, at least for the basal end of the array.
8. D'Onofrio K, Richards V, Gifford R. Spatial Release From Informational and Energetic Masking in Bimodal and Bilateral Cochlear Implant Users. J Speech Lang Hear Res 2020;63:3816-3833. PMID: 33049147; PMCID: PMC8582905. DOI: 10.1044/2020_jslhr-20-00044.
Abstract
Purpose Spatially separating speech and background noise improves speech understanding in normal-hearing listeners, an effect referred to as spatial release from masking (SRM). In cochlear implant (CI) users, SRM has often been demonstrated using asymmetric noise configurations, which maximize benefit from head shadow and the potential availability of binaural cues. In contrast, SRM in symmetrical configurations has been minimal to absent in CI users. We examined the interaction between two types of maskers (informational and energetic) and SRM in bimodal and bilateral CI users. We hypothesized that SRM would be absent or "negative" using symmetrically separated noise maskers. Second, we hypothesized that bimodal listeners would exhibit greater release from informational masking due to access to acoustic information in the non-CI ear. Method Participants included 10 bimodal and 10 bilateral CI users. Speech understanding in noise was tested in 24 conditions: 3 spatial configurations (S0N0, S0N45&315, S0N90&270) × 2 masker types (speech, signal-correlated noise) × 2 listening configurations (best-aided, CI-alone) × 2 talker gender conditions (different-gender, same-gender). Results In support of our first hypothesis, both groups exhibited negative SRM with increasing spatial separation. In opposition to our second hypothesis, both groups exhibited similar magnitudes of release from informational masking. The magnitude of release was greater for bimodal listeners, though this difference failed to reach statistical significance. Conclusions Both bimodal and bilateral CI recipients exhibited negative SRM. This finding is consistent with CI signal processing limitations, the audiologic factors associated with SRM, and known effects of behind-the-ear microphone technology. Though release from informational masking was not significantly different across groups, the magnitude of release was greater for bimodal listeners. This suggests that bimodal listeners may be at least marginally more susceptible to informational masking than bilateral CI users, though further research is warranted.
Affiliation(s)
- Kristen D'Onofrio
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- René Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
9. Dwyer RT, Roberts J, Gifford RH. Effect of Microphone Configuration and Sound Source Location on Speech Recognition for Adult Cochlear Implant Users with Current-Generation Sound Processors. J Am Acad Audiol 2020;31:578-589. PMID: 32340055. DOI: 10.1055/s-0040-1709449.
Abstract
BACKGROUND Microphone location has been shown to influence speech recognition with a microphone placed at the entrance to the ear canal yielding higher levels of speech recognition than top-of-the-pinna placement. Although this work is currently influencing cochlear implant programming practices, prior studies were completed with previous-generation microphone and sound processor technology. Consequently, the applicability of prior studies to current clinical practice is unclear. PURPOSE To investigate how microphone location (e.g., at the entrance to the ear canal, at the top of the pinna), speech-source location, and configuration (e.g., omnidirectional, directional) influence speech recognition for adult CI recipients with the latest in sound processor technology. RESEARCH DESIGN Single-center prospective study using a within-subjects, repeated-measures design. STUDY SAMPLE Eleven experienced adult Advanced Bionics cochlear implant recipients (five bilateral, six bimodal) using a Naída CI Q90 sound processor were recruited for this study. DATA COLLECTION AND ANALYSIS Sentences were presented from a single loudspeaker at 65 dBA for source azimuths of 0°, 90°, or 270° with semidiffuse noise originating from the remaining loudspeakers in the R-SPACE array. Individualized signal-to-noise ratios were determined to obtain 50% correct in the unilateral cochlear implant condition with the signal at 0°. Performance was compared across the following microphone sources: T-Mic 2, integrated processor microphone (formerly behind-the-ear mic), processor microphone + T-Mic 2, and two types of beamforming: monaural, adaptive beamforming (UltraZoom) and binaural beamforming (StereoZoom). Repeated-measures analyses were completed for both speech recognition and microphone output for each microphone location and configuration as well as sound source location. A two-way analysis of variance using mic and azimuth as the independent variables and output for pink noise as the dependent variable was used to characterize the acoustic output characteristics of each microphone source. RESULTS No significant differences in speech recognition across omnidirectional mic location at any source azimuth or listening condition were observed. Secondary findings were (1) omnidirectional microphone configurations afforded significantly higher speech recognition for conditions in which speech was directed to ± 90° (when compared with directional microphone configurations), (2) omnidirectional microphone output was significantly greater when the signal was presented off-axis, and (3) processor microphone output was significantly greater than T-Mic 2 when the sound originated from 0°, which contributed to better aided detection at 2 and 6 kHz with the processor microphone in this group. CONCLUSIONS Unlike previous-generation microphones, we found no statistically significant effect of microphone location on speech recognition in noise from any source azimuth. Directional microphones significantly improved speech recognition in the most difficult listening environments.
Affiliation(s)
- Robert T Dwyer
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
- Jillian Roberts
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
- René H Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
- Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, Tennessee
10. Sivonen V, Willberg T, Aarnisalo AA, Dietz A. The efficacy of microphone directionality in improving speech recognition in noise for three commercial cochlear-implant systems. Cochlear Implants Int 2020;21:153-159. PMID: 32160829. DOI: 10.1080/14670100.2019.1701236.
Abstract
Objectives: To investigate the effect of fixed and adaptive microphone directionality on speech reception threshold (SRT) in noise when compared to omnidirectional mode in unilateral cochlear-implant (CI) use for three different CI systems. Methods: Twenty-four CI recipients with bilateral severe-to-profound hearing loss participated in the study. Eight recipients of each CI system were enrolled, and their SRT in noise was measured when the speech and noise signals were co-located in the front to serve as a baseline. The acute effect of different microphone directionalities on SRT in noise was measured with the noise emanating from 90° in the horizontal plane on the side of the CI sound processor (S0NCI). Results: When compared to the baseline condition, the individual data revealed fairly similar patterns within each CI system. In the S0NCI condition, the average improvement in SRT in noise for fixed and adaptive directionalities over the omnidirectional mode was statistically significant and ranged from 1.2 to 6.0 dB SNR and from 3.7 to 12.7 dB SNR depending on the CI system, respectively. Discussion: Directional microphones significantly improve SRT in noise for all three CI systems. However, relatively large differences were observed in the directional microphone efficacy between CI systems.
Affiliation(s)
- Ville Sivonen
- Department of Otorhinolaryngology - Head and Neck Surgery, Helsinki University Hospital, Helsinki, Finland
- Tytti Willberg
- Department of Otorhinolaryngology, Turku University Hospital, Turku, Finland
- Institute of Clinical Medicine, University of Eastern Finland, Kuopio, Finland
- Antti A Aarnisalo
- Department of Otorhinolaryngology - Head and Neck Surgery, Helsinki University Hospital, Helsinki, Finland
- Aarno Dietz
- Department of Otorhinolaryngology, Kuopio University Hospital, Kuopio, Finland
11. Holder JT, Taylor AL, Sunderhaus LW, Gifford RH. Effect of Microphone Location and Beamforming Technology on Speech Recognition in Pediatric Cochlear Implant Recipients. J Am Acad Audiol 2020;31:506-512. PMID: 32119817. DOI: 10.3766/jaaa.19025.
Abstract
BACKGROUND Despite improvements in cochlear implant (CI) technology, pediatric CI recipients continue to have more difficulty understanding speech than their typically hearing peers in background noise. A variety of strategies have been evaluated to help mitigate this disparity, such as signal processing, remote microphone technology, and microphone placement. Previous studies regarding microphone placement used speech processors that are now dated, and most studies investigating the improvement of speech recognition in background noise included adult listeners only. PURPOSE The purpose of the present study was to investigate the effects of microphone location and beamforming technology on speech understanding for pediatric CI recipients in noise. RESEARCH DESIGN A prospective, repeated-measures, within-participant design was used to compare performance across listening conditions. STUDY SAMPLE A total of nine children (aged 6.6 to 15.3 years) with at least one Advanced Bionics CI were recruited for this study. DATA COLLECTION AND ANALYSIS The Basic English Lexicon Sentences and AzBio Sentences were presented at 0° azimuth at 65 dB SPL in +5 dB signal-to-noise ratio noise presented from seven speakers using the R-SPACE system (Advanced Bionics, Valencia, CA). Performance was compared across three omnidirectional microphone configurations (processor microphone, T-Mic 2, and processor + T-Mic 2) and two directional microphone configurations (UltraZoom and auto UltraZoom). The two youngest participants were not tested in the directional microphone configurations. RESULTS No significant differences were found between the various omnidirectional microphone configurations. UltraZoom provided significant benefit over all omnidirectional microphone configurations (T-Mic 2, p = 0.004; processor microphone, p < 0.001; and processor microphone + T-Mic 2, p = 0.018) but was not significantly different from auto UltraZoom (p = 0.176). CONCLUSIONS All omnidirectional microphone configurations yielded similar performance, suggesting that a child's listening performance in noise will not be compromised by choosing the microphone configuration best suited for the child. UltraZoom (adaptive beamformer) yielded higher performance than all omnidirectional microphones in moderate background noise for adolescents aged 9 to 15 years. The implications of these data suggest that for older children who are able to reliably use manual controls, UltraZoom will yield significantly higher performance in background noise when the target is in front of the listener.
Affiliation(s)
- Jourdan T Holder
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- Adrian L Taylor
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- Linsey W Sunderhaus
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- René H Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
12. Impact of Microphone Configuration on Speech Perception of Cochlear Implant Users in Traffic Noise. Otol Neurotol 2020;40:e198-e205. PMID: 30741896. DOI: 10.1097/mao.0000000000002135.
Abstract
OBJECTIVE The aim of this study was to investigate the impact of microphone configuration and noise reduction (NR) algorithm on speech perception of cochlear implant (CI) users in a moving noise setup. METHOD Eleven CI users provided with Advanced Bionics implant systems participated in this study. All tests were conducted with three different microphone settings: (a) omnidirectional behind the ear (BTE), (b) inside the pinna (ITP), and (c) adaptive directional microphone (adaptive beamformer, ABF). Speech reception thresholds (SRTs) were measured using the Oldenburg sentence test in a moving noise source condition. Furthermore, the effect of the NR algorithm on speech perception was measured in a condition with an additional static noise source. RESULTS The ABF setting significantly improved SRT by 5.7 dB compared with the BTE microphone, and by 4.7 dB compared with the ITP microphone in the moving noise condition. In the presence of an additional static noise source, there was a significant improvement in SRT of 0.9 dB with the use of NR in addition to ABF. CONCLUSION Adaptive beamforming can significantly improve speech perception in moving noise. Depending on the noise condition, the combination of ABF with NR can provide additional benefit.
13. The Effects of Dynamic-range Automatic Gain Control on Sentence Intelligibility With a Speech Masker in Simulated Cochlear Implant Listening. Ear Hear 2019;40:710-724. PMID: 30204615. DOI: 10.1097/aud.0000000000000653.
Abstract
OBJECTIVES "Channel-linked" and "multi-band" front-end automatic gain control (AGC) were examined as alternatives to single-band, channel-unlinked AGC in simulated bilateral cochlear implant (CI) processing. In channel-linked AGC, the same gain control signal was applied to the input signals to both of the two CIs ("channels"). In multi-band AGC, gain control acted independently on each of a number of narrow frequency regions per channel. DESIGN Speech intelligibility performance was measured with a single target (to the left, at -15 or -30°) and a single, symmetrically-opposed masker (to the right) at a signal-to-noise ratio (SNR) of -2 decibels. Binaural sentence intelligibility was measured as a function of whether channel linking was present and of the number of AGC bands. Analysis of variance was performed to assess condition effects on percent correct across the two spatial arrangements, both at a high and a low AGC threshold. Acoustic analysis was conducted to compare postcompressed better-ear SNR, interaural differences, and monaural within-band envelope levels across processing conditions. RESULTS Analyses of variance indicated significant main effects of both channel linking and number of bands at low threshold, and of channel linking at high threshold. These improvements were accompanied by several acoustic changes. Linked AGC produced a more favorable better-ear SNR and better preserved broadband interaural level difference statistics, but did not reduce dynamic range as much as unlinked AGC. Multi-band AGC sometimes improved better-ear SNR statistics and always improved broadband interaural level difference statistics whenever the AGC channels were unlinked. Multi-band AGC produced output envelope levels that were higher than single-band AGC. CONCLUSIONS These results favor strategies that incorporate channel-linked AGC and multi-band AGC for bilateral CIs. Linked AGC aids speech intelligibility in spatially separated speech, but reduces the degree to which dynamic range is compressed. Combining multi-band and channel-linked AGC offsets the potential impact of diminished dynamic range with linked AGC without sacrificing the intelligibility gains observed with linked AGC.
14. Zaleski-King A, Goupell MJ, Barac-Cikoja D, Bakke M. Bimodal Cochlear Implant Listeners' Ability to Perceive Minimal Audible Angle Differences. J Am Acad Audiol 2019;30:659-671. PMID: 30417825; PMCID: PMC6561832. DOI: 10.3766/jaaa.17012.
Abstract
BACKGROUND Bilateral inputs should ideally improve sound localization and speech understanding in noise. However, for many bimodal listeners [i.e., individuals using a cochlear implant (CI) with a contralateral hearing aid (HA)], such bilateral benefits are at best, inconsistent. The degree to which clinically available HA and CI devices can function together to preserve interaural time and level differences (ITDs and ILDs, respectively) enough to support the localization of sound sources is a question with important ramifications for speech understanding in complex acoustic environments. PURPOSE To determine if bimodal listeners are sensitive to changes in spatial location in a minimum audible angle (MAA) task. RESEARCH DESIGN Repeated-measures design. STUDY SAMPLE Seven adult bimodal CI users (28-62 years). All listeners reported regular use of digital HA technology in the nonimplanted ear. DATA COLLECTION AND ANALYSIS Seven bimodal listeners were asked to balance the loudness of prerecorded single syllable utterances. The loudness-balanced stimuli were then presented via direct audio inputs of the two devices with an ITD applied. The task of the listener was to determine the perceived difference in processing delay (the interdevice delay [IDD]) between the CI and HA devices. Finally, virtual free-field MAA performance was measured for different spatial locations both with and without inclusion of the IDD correction, which was added with the intent to perceptually synchronize the devices. RESULTS During the loudness-balancing task, all listeners required increased acoustic input to the HA relative to the CI most comfortable level to achieve equal interaural loudness. During the ITD task, three listeners could perceive changes in intracranial position by distinguishing sounds coming from the left or from the right hemifield; when the CI was delayed by 0.73, 0.67, or 1.7 msec, the signal lateralized from one side to the other. When MAA localization performance was assessed, only three of the seven listeners consistently achieved above-chance performance, even when an IDD correction was included. It is not clear whether the listeners who were able to consistently complete the MAA task did so via binaural comparison or by extracting monaural loudness cues. Four listeners could not perform the MAA task, even though they could have used a monaural loudness cue strategy. CONCLUSIONS These data suggest that sound localization is extremely difficult for most bimodal listeners. This difficulty does not seem to be caused by large loudness imbalances and IDDs. Sound localization is best when performed via a binaural comparison, where frequency-matched inputs convey ITD and ILD information. Although low-frequency acoustic amplification with a HA when combined with a CI may produce an overlapping region of frequency-matched inputs and thus provide an opportunity for binaural comparisons for some bimodal listeners, our study showed that this may not be beneficial or useful for spatial location discrimination tasks. The inability of our listeners to use monaural-level cues to perform the MAA task highlights the difficulty of using a HA and CI together to glean information on the direction of a sound source.
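The inter-device delay correction described above amounts to delaying the faster processing path by the measured IDD before imposing an ITD; a minimal sketch, with a hypothetical helper and parameter names, is shown below.

```python
import numpy as np

def apply_idd_correction(ha_signal, ci_signal, idd_ms, fs=44100):
    """Time-align HA and CI inputs by delaying the faster path by the measured
    inter-device delay (IDD). Positive idd_ms means the CI path lags the HA path."""
    n = int(round(abs(idd_ms) * fs / 1000.0))
    pad = np.zeros(n)
    if idd_ms > 0:       # CI lags, so delay the HA input
        ha_signal = np.concatenate([pad, ha_signal])
        ci_signal = np.concatenate([ci_signal, pad])
    elif idd_ms < 0:     # HA lags, so delay the CI input
        ci_signal = np.concatenate([pad, ci_signal])
        ha_signal = np.concatenate([ha_signal, pad])
    return ha_signal, ci_signal
```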
Affiliation(s)
- Matthew J. Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD 20742
15. Pitch Matching Adapts Even for Bilateral Cochlear Implant Users with Relatively Small Initial Pitch Differences Across the Ears. J Assoc Res Otolaryngol 2019;20:595-603. PMID: 31385149. DOI: 10.1007/s10162-019-00733-3.
Abstract
There is often a mismatch for bilateral cochlear implant (CI) users between the electrodes in the two ears that receive the same frequency allocation and the electrodes that, when stimulated, yield the same pitch. Studies with CI users who have extreme mismatches between the two ears show that adaptation occurs in terms of pitch matching, reducing the difference between which electrodes receive the same frequency allocation and which ones produce the same pitch. The considerable adaptation that occurs for these extreme cases suggests that adaptation should be sufficient to overcome the relatively minor mismatches seen with typical bilateral CI users. However, even those with many years of bilateral CI use continue to demonstrate a mismatch. This may indicate that adaptation only occurs when there are large mismatches. Alternatively, it may indicate that adaptation occurs regardless of the magnitude of the mismatch, but that adaptation is proportional to the magnitude of the mismatch, and thus never fully counters the original mismatch. To investigate this, six bilateral CI users with initial pitch-matching mismatches of less than 3 mm completed a pitch-matching task near the time of activation, 6 months after activation, and 1 year after activation. Despite relatively small initial mismatches, the results indicated that adaptation still occurred.
16. Yu F, Li H, Zhou X, Tang X, Galvin JJ 3rd, Fu QJ, Yuan W. Effects of Training on Lateralization for Simulations of Cochlear Implants and Single-Sided Deafness. Front Hum Neurosci 2018;12:287. PMID: 30065641; PMCID: PMC6056606. DOI: 10.3389/fnhum.2018.00287.
Abstract
While cochlear implantation has benefitted many patients with single-sided deafness (SSD), there is great variability in cochlear implant (CI) outcomes and binaural performance remains poorer than that of normal-hearing (NH) listeners. Differences in sound quality across ears-temporal fine structure (TFS) information with acoustic hearing vs. coarse spectro-temporal envelope information with electric hearing-may limit integration of acoustic and electric patterns. Binaural performance may also be limited by inter-aural mismatch between the acoustic input frequency and the place of stimulation in the cochlea. SSD CI patients must learn to accommodate these differences between acoustic and electric stimulation to maximize binaural performance. It is possible that training may increase and/or accelerate accommodation and further improve binaural performance. In this study, we evaluated lateralization training in NH subjects listening to broad simulations of SSD CI signal processing. A 16-channel vocoder was used to simulate the coarse spectro-temporal cues available with electric hearing; the degree of inter-aural mismatch was varied by adjusting the simulated insertion depth (SID) to be 25 mm (SID25), 22 mm (SID22) and 19 mm (SID19) from the base of the cochlea. Lateralization was measured using headphones and head-related transfer functions (HRTFs). Baseline lateralization was measured for unprocessed speech (UN) delivered to the left ear to simulate SSD and for binaural performance with the acoustic ear combined with the 16-channel vocoders (UN+SID25, UN+SID22 and UN+SID19). After completing baseline measurements, subjects completed six lateralization training exercises with the UN+SID22 condition, after which performance was re-measured for all baseline conditions. Post-training performance was significantly better than baseline for all conditions (p < 0.05 in all cases), with no significant difference in training benefits among conditions. Given that there was no significant difference between the SSD and the SSD CI conditions before or after training, the results suggest that NH listeners were unable to integrate TFS and coarse spectro-temporal cues across ears for lateralization, and that inter-aural mismatch played a secondary role at best. While lateralization training may benefit SSD CI patients, the training may largely improve spectral analysis with the acoustic ear alone, rather than improve integration of acoustic and electric hearing.
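For orientation, the sketch below converts the three simulated insertion depths into the approximate characteristic frequency at the most apical simulated electrode using Greenwood's standard human place-frequency map; this is an assumed illustration, not the authors' exact vocoder carrier allocation.

```python
def greenwood_frequency(distance_from_apex_mm, cochlea_length_mm=35.0):
    """Greenwood (1990) place-to-frequency map for an average human cochlea."""
    x = distance_from_apex_mm / cochlea_length_mm   # proportional distance from apex
    return 165.4 * (10.0 ** (2.1 * x) - 0.88)

for sid_mm in (25, 22, 19):                          # simulated insertion depths from the base
    f_apex = greenwood_frequency(35.0 - sid_mm)
    print(f"SID {sid_mm} mm -> most apical place ~ {f_apex:.0f} Hz")
```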
Affiliation(s)
- Fei Yu
- Department of Otolaryngology, Southwest Hospital, Third Military Medical University, Chongqing, China
- Hai Li
- Department of Otolaryngology, Southwest Hospital, Third Military Medical University, Chongqing, China
- Xiaoqing Zhou
- Department of Otolaryngology, Southwest Hospital, Third Military Medical University, Chongqing, China
- XiaoLin Tang
- Department of Otolaryngology, Southwest Hospital, Third Military Medical University, Chongqing, China
- Qian-Jie Fu
- Department of Head and Neck Surgery, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States
- Wei Yuan
- Department of Otolaryngology, Southwest Hospital, Third Military Medical University, Chongqing, China
17. Gajęcki T, Nogueira W. Deep learning models to remix music for cochlear implant users. J Acoust Soc Am 2018;143:3602. PMID: 29960485. DOI: 10.1121/1.5042056.
Abstract
The severe hearing loss problems that some people suffer can be treated by providing them with a surgically implanted electrical device called a cochlear implant (CI). CI users struggle to perceive complex audio signals such as music; however, previous studies show that CI recipients find music more enjoyable when the vocals are enhanced with respect to the background music. In this manuscript, source separation (SS) algorithms are used to remix pop songs by applying gain to the lead singing voice. This work evaluates deep convolutional auto-encoders, a deep recurrent neural network, a multilayer perceptron (MLP), and non-negative matrix factorization, objectively and subjectively, through two perceptual experiments involving normal-hearing subjects and CI recipients. The evaluation assesses the relevance of the artifacts introduced by the SS algorithms while also considering their computation time, as this study aims at proposing one of the algorithms for real-time implementation. Results show that the MLP performs in a robust way throughout the tested data while providing levels of distortions and artifacts that are not perceived by CI users. Thus, an MLP is proposed for real-time monaural audio SS to remix music for CI users.
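The remixing step itself is simple once a separation front end (such as the MLP evaluated here) has produced vocal and accompaniment stems; a minimal sketch with an assumed 6-dB vocal boost is shown below.

```python
import numpy as np

def remix_with_vocal_gain(vocals, accompaniment, gain_db=6.0):
    """Remix a song by boosting the separated lead vocal relative to the backing
    track. `vocals` and `accompaniment` are time-aligned mono float arrays
    produced by some source-separation algorithm (MLP, NMF, ...)."""
    gain = 10.0 ** (gain_db / 20.0)
    mix = gain * vocals + accompaniment
    peak = np.max(np.abs(mix))
    return mix / peak if peak > 1.0 else mix   # normalize only if the boost would clip
```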
Affiliation(s)
- Tom Gajęcki
- Department of Otolaryngology, Medical University Hannover and Cluster of Excellence Hearing4all, Hannover, 30625, Germany
- Waldo Nogueira
- Department of Otolaryngology, Medical University Hannover and Cluster of Excellence Hearing4all, Hannover, 30625, Germany
18. Gifford RH, Loiselle L, Natale S, Sheffield SW, Sunderhaus LW, Dietrich MS, Dorman MF. Speech Understanding in Noise for Adults With Cochlear Implants: Effects of Hearing Configuration, Source Location Certainty, and Head Movement. J Speech Lang Hear Res 2018;61:1306-1321. PMID: 29800361; PMCID: PMC6195075. DOI: 10.1044/2018_jslhr-h-16-0444.
Abstract
Purpose The primary purpose of this study was to assess speech understanding in quiet and in diffuse noise for adult cochlear implant (CI) recipients utilizing bimodal hearing or bilateral CIs. Our primary hypothesis was that bilateral CI recipients would demonstrate less effect of source azimuth in the bilateral CI condition due to symmetric interaural head shadow. Method Sentence recognition was assessed for adult bilateral (n = 25) CI users and bimodal listeners (n = 12) in three conditions: (1) source location certainty regarding fixed target azimuth, (2) source location uncertainty regarding roving target azimuth, and (3) Condition 2 repeated, allowing listeners to turn their heads, as needed. Results (a) Bilateral CI users exhibited relatively similar performance regardless of source azimuth in the bilateral CI condition; (b) bimodal listeners exhibited higher performance for speech directed to the better hearing ear even in the bimodal condition; (c) the unilateral, better ear condition yielded higher performance for speech presented to the better ear versus speech to the front or to the poorer ear; (d) source location certainty did not affect speech understanding performance; and (e) head turns did not improve performance. The results confirmed our hypothesis that bilateral CI users exhibited less effect of source azimuth than bimodal listeners. That is, they exhibited similar performance for speech recognition irrespective of source azimuth, whereas bimodal listeners exhibited significantly poorer performance with speech originating from the poorer hearing ear (typically the nonimplanted ear). Conclusions Bilateral CI users overcame ear and source location effects observed for the bimodal listeners. Bilateral CI users have access to head shadow on both sides, whereas bimodal listeners generally have interaural asymmetry in both speech understanding and audible bandwidth limiting the head shadow benefit obtained from the poorer ear (generally the nonimplanted ear). In summary, we found that, in conditions with source location uncertainty and increased ecological validity, bilateral CI performance was superior to bimodal listening.
Affiliation(s)
- Louise Loiselle
- Arizona State University, Tempe, AZ
- MED-EL Corporation, Durham, NC
19. Davis TJ, Gifford RH. Spatial Release From Masking in Adults With Bilateral Cochlear Implants: Effects of Distracter Azimuth and Microphone Location. J Speech Lang Hear Res 2018;61:752-761. PMID: 29450488; PMCID: PMC5963045. DOI: 10.1044/2017_jslhr-h-16-0441.
Abstract
PURPOSE The primary purpose of this study was to derive spatial release from masking (SRM) performance-azimuth functions for bilateral cochlear implant (CI) users to provide a thorough description of SRM as a function of target/distracter spatial configuration. The secondary purpose of this study was to investigate the effect of microphone location on SRM in a within-subject study design. METHOD Speech recognition was measured in 12 adults with bilateral CIs for 11 spatial separations ranging from -90° to +90° in 20° steps using an adaptive block design. Five of the 12 participants were tested with both the behind-the-ear microphones and a T-Mic configuration to further investigate the effect of mic location on SRM. RESULTS SRM can be significantly affected by the hemifield origin of the distracter stimulus, particularly for listeners with interaural asymmetry in speech understanding. The greatest SRM was observed with a distracter positioned 50° away from the target. There was no effect of mic location on SRM for the current experimental design. CONCLUSION Our results demonstrate that the traditional assessment of SRM with a distracter positioned at 90° azimuth may underestimate maximum performance for individuals with bilateral CIs.
Affiliation(s)
- Timothy J. Davis
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN
- René H. Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN
20.
Abstract
OBJECTIVES Cochlear-implant (CI) users with single-sided deafness (SSD), that is, one normal-hearing (NH) ear and one CI ear, can obtain some unmasking benefits when a mixture of target and masking voices is presented to the NH ear and a copy of just the masking voices is presented to the CI ear. NH listeners show similar benefits in a simulation of SSD-CI listening, whereby a mixture of target and masking voices is presented to one ear and a vocoded copy of the masking voices is presented to the opposite ear. However, the magnitude of the benefit for SSD-CI listeners is highly variable across individuals and is on average less than for NH listeners presented with vocoded stimuli. One possible explanation for the limited benefit observed for some SSD-CI users is that temporal and spectral discrepancies between the acoustic and electric ears might interfere with contralateral unmasking. The present study presented vocoder simulations to NH participants to examine the effects of interaural temporal and spectral mismatches on contralateral unmasking. DESIGN Speech-reception performance was measured in a competing-talker paradigm for NH listeners presented with vocoder simulations of SSD-CI listening. In the monaural condition, listeners identified target speech masked by two same-gender interferers, presented to the left ear. In the bilateral condition, the same stimuli were presented to the left ear, but the right ear was presented with a noise-vocoded copy of the interfering voices. This paradigm tested whether listeners could integrate the interfering voices across the ears to better hear the monaural target. Three common distortions inherent in CI processing were introduced to the vocoder processing: spectral shifts, temporal delays, and reduced frequency selectivity. RESULTS In experiment 1, contralateral unmasking (i.e., the benefit from adding the vocoded maskers to the second ear) was impaired by spectral mismatches of four equivalent rectangular bandwidths or greater. This is equivalent to roughly a 3.6-mm mismatch between the cochlear places stimulated in the electric and acoustic ears, which is on the low end of the average expected mismatch for SSD-CI listeners. In experiment 2, performance was negatively affected by a temporal mismatch of 24 ms or greater, but not for mismatches in the 0 to 12 ms range expected for SSD-CI listeners. Experiment 3 showed an interaction between spectral shift and spectral resolution, with less effect of interaural spectral mismatches when the number of vocoder channels was reduced. Experiment 4 applied interaural spectral and temporal mismatches in combination. Performance was best when both frequency and timing were aligned, but in cases where a mismatch was present in one dimension (either frequency or latency), the addition of mismatch in the second dimension did not further disrupt performance. CONCLUSIONS These results emphasize the need for interaural alignment, in timing and especially in frequency, to maximize contralateral unmasking for NH listeners presented with vocoder simulations of SSD-CI listening. Improved processing strategies that reduce mismatch between the electric and acoustic ears of SSD-CI listeners might improve their ability to obtain binaural benefits in multitalker environments.
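The reported 4-ERB shift can be related to cochlear place with the Glasberg and Moore ERB-number scale and an assumed constant of roughly 0.9 mm of basilar membrane per ERB; the short sketch below reproduces the ~3.6-mm figure and shows what a 4-ERB upward shift does to a 1000-Hz band.

```python
import numpy as np

def erb_number(f_hz):
    """Glasberg & Moore (1990) ERB-number (Cam) scale."""
    return 21.4 * np.log10(1.0 + 0.00437 * f_hz)

def erb_to_frequency(erb):
    """Inverse of erb_number."""
    return (10.0 ** (erb / 21.4) - 1.0) / 0.00437

MM_PER_ERB = 0.89   # assumed: roughly 0.9 mm of human basilar membrane per ERB

shift_erb = 4.0
print(f"4-ERB shift ~ {shift_erb * MM_PER_ERB:.1f} mm of cochlear place")            # ~3.6 mm
print(f"1000 Hz shifted up 4 ERB -> {erb_to_frequency(erb_number(1000.0) + shift_erb):.0f} Hz")
```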
21
Landsberger DM, Vermeire K, Claes A, Van Rompaey V, Van de Heyning P. Qualities of Single Electrode Stimulation as a Function of Rate and Place of Stimulation with a Cochlear Implant. Ear Hear 2016; 37:e149-59. [PMID: 26583480 DOI: 10.1097/aud.0000000000000250] [Citation(s) in RCA: 27] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
OBJECTIVES Although it has been shown previously that changes in temporal coding produce changes in pitch in all cochlear regions, research has suggested that temporal coding might be best encoded in relatively apical locations. The authors hypothesized that although temporal coding may provide useable information at any cochlear location, low rates of stimulation might provide better sound quality in apical regions that are more likely to encode temporal information in the normal ear. In the present study, sound qualities of single electrode pulse trains were scaled to provide insight into the combined effects of cochlear location and stimulation rate on sound quality. DESIGN Ten long-term users of MED-EL cochlear implants with 31-mm electrode arrays (Standard or FLEX) were asked to scale the sound quality of single electrode pulse trains in terms of how "Clean," "Noisy," "High," and "Annoying" they sounded. Pulse trains were presented on most electrodes between 1 and 12, representing the entire range of the long electrode array, at stimulation rates of 100, 150, 200, 400, or 1500 pulses per second. RESULTS Although high rates of stimulation were scaled as having a Clean sound quality across the entire array, only the most apical electrodes (typically 1 through 3) were considered Clean at low rates. Low rates on electrodes 6 through 12 were not rated as Clean, whereas the low-rate quality of electrodes 4 and 5 was typically in between. Scaling of Noisy responses showed an approximately inverse pattern to the Clean responses. High responses showed the trade-off between rate and place of stimulation on pitch. Because High responses did not correlate with Clean responses, subjects were not rating sound quality based on pitch. CONCLUSIONS If explicit temporal coding is to be provided in a cochlear implant, it is likely to sound better when provided apically. In addition, the finding that low rates sound clean only at apical places of stimulation is consistent with previous findings that a change in rate of stimulation corresponds to an equivalent change in perceived pitch at apical locations. Collectively, the data strongly suggest that temporal coding with a cochlear implant is optimally provided by electrodes placed well into the second cochlear turn.
Affiliation(s)
- David M Landsberger
- Department of Otolaryngology, New York University School of Medicine, New York, New York, USA; Department of Otorhinolaryngology & Head and Neck Surgery, Antwerp University Hospital, Antwerp, Belgium; Hearing and Speech Center, Long Island Jewish Medical Center, New Hyde Park, New York, USA; and Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
22
Perceptual changes with monopolar and phantom electrode stimulation. Hear Res 2017; 359:64-75. [PMID: 29325874 DOI: 10.1016/j.heares.2017.12.019] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/29/2017] [Revised: 12/17/2017] [Accepted: 12/23/2017] [Indexed: 11/21/2022]
Abstract
Phantom electrode (PE) stimulation is achieved by simultaneously stimulating out of phase on two adjacent intra-cochlear electrodes with different amplitudes. If the basal electrode stimulates with a smaller amplitude than the apical electrode of the pair, the resulting electrical field is pushed away from the basal electrode, producing a lower pitch. There is great interest in using PE stimulation in a processing strategy as it can be used to provide stimulation to regions of the cochlea located more apically than the most apical contact on the electrode array. The result is that even lower pitch sensations can be provided without additional risk of a deeper insertion. However, it is unknown if there are perceptual differences between monopolar (MP) and PE stimulation other than a shift in place pitch. Furthermore, it is unknown if the effect and magnitude of changing from MP to PE stimulation is dependent on electrode location. This study investigates the perceptual differences (including pitch and other sound quality differences) at multiple electrode positions with MP and PE stimulation, using both a multidimensional scaling (MDS) procedure and a traditional scaling procedure. Ten Advanced Bionics users reported the perceptual distances between five single-electrode stimuli (typically electrodes 1, 3, 5, 7, and 9) in either MP or PE (σ = 0.5) mode. Subjects were asked to report how perceptually different each pair of stimuli was, using any perceived differences except loudness. Subsequently, each stimulus was presented in isolation and subjects scaled how "high" or how "clean" each sounded. Results from the MDS task suggest that perceptual differences between MP and PE stimulation can be explained by a single dimension. The traditional scaling suggests that the single dimension is place pitch. PE stimulation elicits lower pitch perceptions in all cochlear regions. Analysis of Cone Beam Computed Tomography (CBCT) data suggests that PE stimulation may be more effective at the apical part of the cochlea. PE stimulation can be used for new sound coding strategies in order to extend the pitch range for cochlear implant (CI) users without perceptual side effects.
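A minimal sketch of the current weighting that defines phantom-electrode stimulation as described above: the apical electrode of the pair carries the primary current and the adjacent basal electrode is driven simultaneously with inverted polarity at a fraction σ of that current (σ = 0.5 matches the condition in the abstract). The function and the 200-µA example level are illustrative only, not research-interface code.

```python
def phantom_electrode_currents(primary_amplitude_ua, sigma=0.5):
    """Current amplitudes (microamps) for one phase of a simultaneous
    phantom-electrode pulse: the basal electrode is driven out of phase at a
    fraction `sigma` of the apical current, pushing the field apically."""
    apical = primary_amplitude_ua           # primary (apical) electrode
    basal = -sigma * primary_amplitude_ua   # compensating (basal) electrode
    return apical, basal

# sigma = 0 reduces to ordinary monopolar stimulation on the apical electrode;
# sigma = 0.5 is the phantom-electrode setting used in the abstract above.
print(phantom_electrode_currents(200.0, sigma=0.5))   # -> (200.0, -100.0)
```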
23
Aronoff JM, Stelmach J, Padilla M, Landsberger DM. Interleaved Processors Improve Cochlear Implant Patients' Spectral Resolution. Ear Hear 2016; 37:e85-90. [PMID: 26656190 DOI: 10.1097/aud.0000000000000249] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
OBJECTIVE Cochlear implant patients have difficulty in noisy environments, in part, because of channel interaction. Interleaving the signal by sending every other channel to the opposite ear has the potential to reduce channel interaction by increasing the space between channels in each ear. Interleaving still potentially provides the same amount of spectral information when the two ears are combined. Although this method has been successful in other populations such as hearing aid users, interleaving with cochlear implant patients has not yielded consistent benefits. This may be because perceptual misalignment between the two ears and the spacing between stimulation locations must be taken into account before interleaving. DESIGN Eight bilateral cochlear implant users were tested. After perceptually aligning the two ears, 12-channel maps were made that spanned the entire aligned portions of the array. Interleaved maps were created by removing every other channel from each ear. Participants' spectral resolution and localization abilities were measured with perceptually aligned processing strategies both with and without interleaving. RESULTS There was a significant improvement in spectral resolution with interleaving. However, there was no significant effect of interleaving on localization abilities. CONCLUSIONS The results indicate that interleaving can improve cochlear implant users' spectral resolution. However, it may be necessary to perceptually align the two ears and/or use relatively large spacing between stimulation locations.
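The channel-assignment step described in the design is easy to make concrete: after perceptual alignment, alternate channels of the full 12-channel map are routed to opposite ears. The sketch below shows only that bookkeeping; the channel numbering and ear assignment are arbitrary stand-ins for the participants' actual maps.

```python
# Full 12-channel map spanning the perceptually aligned portion of each array.
full_map = list(range(1, 13))            # channels 1..12

# Interleaved maps: every other channel goes to the opposite ear, so each ear
# keeps six widely spaced channels while the two ears together cover all twelve.
left_ear = full_map[0::2]                # [1, 3, 5, 7, 9, 11]
right_ear = full_map[1::2]               # [2, 4, 6, 8, 10, 12]

print("left: ", left_ear)
print("right:", right_ear)
```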
Affiliation(s)
- Justin M Aronoff
- Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign, Champaign, Illinois, USA; Department of Otolaryngology - Head and Neck Surgery, University of Illinois at Chicago, Chicago, Illinois, USA; Communication and Neuroscience Division, House Ear Institute, Los Angeles, California, USA; and Department of Otolaryngology, New York University, New York, New York, USA
24
25
Kolberg ER, Sheffield SW, Davis TJ, Sunderhaus LW, Gifford RH. Cochlear implant microphone location affects speech recognition in diffuse noise. J Am Acad Audiol 2015; 26:51-8; quiz 109-10. [PMID: 25597460 DOI: 10.3766/jaaa.26.1.6] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
BACKGROUND Despite improvements in cochlear implants (CIs), CI recipients continue to experience significant communicative difficulty in background noise. Many potential solutions have been proposed to help increase signal-to-noise ratio in noisy environments, including signal processing and external accessories. To date, however, research on the effect of microphone location on speech recognition in noise has focused primarily on hearing aid users. PURPOSE The purpose of this study was (1) to measure physical output for the T-Mic as compared with the integrated behind-the-ear (BTE) processor mic for various source azimuths, and (2) to investigate the effect of CI processor mic location for speech recognition in semi-diffuse noise with speech originating from various source azimuths as encountered in everyday communicative environments. RESEARCH DESIGN A repeated-measures, within-participant design was used to compare performance across listening conditions. STUDY SAMPLE A total of 11 adults with Advanced Bionics CIs were recruited for this study. DATA COLLECTION AND ANALYSIS Physical acoustic output was measured on a Knowles Electronics Manikin for Acoustic Research (KEMAR) for the T-Mic and BTE mic, with broadband noise presented at 0 and 90° (directed toward the implant processor). In addition to physical acoustic measurements, we also assessed recognition of sentences constructed by researchers at Texas Instruments, the Massachusetts Institute of Technology, and the Stanford Research Institute (TIMIT sentences) at 60 dBA for speech source azimuths of 0, 90, and 270°. Sentences were presented in a semi-diffuse restaurant noise originating from the R-SPACE 8-loudspeaker array. Signal-to-noise ratio was determined individually to achieve approximately 50% correct in the unilateral implanted listening condition with speech at 0°. Performance was compared across the T-Mic, a 50/50 T-Mic/BTE mix, and the integrated BTE processor mic. RESULTS The integrated BTE mic provided approximately 5 dB attenuation from 1500-4500 Hz for signals presented at 0° as compared with 90° (directed toward the processor). The T-Mic output was essentially equivalent for sources originating from 0 and 90°. Mic location also significantly affected sentence recognition as a function of source azimuth, with the T-Mic yielding the highest performance for speech originating from 0°. CONCLUSIONS These results have clinical implications for (1) future implant processor design with respect to mic location, (2) mic settings for implant recipients, and (3) execution of advanced speech testing in the clinic.
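A hedged sketch of how a mic-location effect like the 5-dB high-frequency attenuation reported here could be quantified: compute band levels for recordings made with the source at 0° and 90° and difference them. The band edges and the synthetic noise stand-ins are assumptions for illustration; real use would substitute the measured KEMAR recordings.

```python
import numpy as np

def band_level_db(signal, fs, f_lo, f_hi):
    """RMS level (dB, arbitrary reference) of `signal` within [f_lo, f_hi) Hz."""
    spec = np.fft.rfft(signal * np.hanning(len(signal)))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    band = (freqs >= f_lo) & (freqs < f_hi)
    return 10.0 * np.log10(np.sum(np.abs(spec[band]) ** 2) + 1e-20)

fs = 44100
rng = np.random.default_rng(0)
# Stand-ins for KEMAR recordings of broadband noise from 0 and 90 degrees;
# in practice these would be the measured waveforms for each source azimuth.
rec_0deg = rng.standard_normal(fs)
rec_90deg = rng.standard_normal(fs)

for f_lo, f_hi in [(1500, 2500), (2500, 3500), (3500, 4500)]:
    diff = band_level_db(rec_90deg, fs, f_lo, f_hi) - band_level_db(rec_0deg, fs, f_lo, f_hi)
    print(f"{f_lo}-{f_hi} Hz: 90 deg minus 0 deg = {diff:+.1f} dB")
```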
Affiliation(s)
- Elizabeth R Kolberg
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN
- Timothy J Davis
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN
- Linsey W Sunderhaus
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN
- René H Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN
26
Aronoff JM, Padilla M, Fu QJ, Landsberger DM. Contralateral masking in bilateral cochlear implant patients: a model of medial olivocochlear function loss. PLoS One 2015; 10:e0121591. [PMID: 25798581 PMCID: PMC4370517 DOI: 10.1371/journal.pone.0121591] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2014] [Accepted: 02/13/2015] [Indexed: 11/30/2022] Open
Abstract
Contralateral masking is the phenomenon where a masker presented to one ear affects the ability to detect a signal in the opposite ear. For normal hearing listeners, contralateral masking results in masking patterns that are both sharper and dramatically smaller in magnitude than ipsilateral masking. The goal of this study was to investigate whether medial olivocochlear (MOC) efferents are needed for the sharpness and relatively small magnitude of the contralateral masking function. To do this, bilateral cochlear implant patients were tested because, by directly stimulating the auditory nerve, cochlear implants circumvent the effects of the MOC efferents. The results indicated that, as with normal hearing listeners, the contralateral masking function was sharper than the ipsilateral masking function. However, although there was a reduction in the magnitude of the contralateral masking function compared to the ipsilateral masking function, it was relatively modest. This is in sharp contrast to the results of normal hearing listeners where the magnitude of the contralateral masking function is greatly reduced. These results suggest that MOC function may not play a large role in the sharpness of the contralateral masking function but may play a considerable role in the magnitude of the contralateral masking function.
Affiliation(s)
- Justin M. Aronoff
- Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign, Champaign, Illinois, United States of America
- Communication and Neuroscience Division, House Research Institute, Los Angeles, California, United States of America
- Monica Padilla
- Communication and Neuroscience Division, House Research Institute, Los Angeles, California, United States of America
- Department of Otolaryngology, New York University, New York, New York, United States of America
- Qian-Jie Fu
- Communication and Neuroscience Division, House Research Institute, Los Angeles, California, United States of America
- Department of Head and Neck Surgery, University of California Los Angeles, Los Angeles, California, United States of America
- David M. Landsberger
- Communication and Neuroscience Division, House Research Institute, Los Angeles, California, United States of America
- Department of Otolaryngology, New York University, New York, New York, United States of America
27
Aronoff JM, Shayman C, Prasad A, Suneel D, Stelmach J. Unilateral spectral and temporal compression reduces binaural fusion for normal hearing listeners with cochlear implant simulations. Hear Res 2014; 320:24-9. [PMID: 25549574 DOI: 10.1016/j.heares.2014.12.005] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/22/2014] [Revised: 12/10/2014] [Accepted: 12/16/2014] [Indexed: 10/24/2022]
Abstract
Patients with single sided deafness have recently begun receiving cochlear implants in their deaf ear. These patients gain a significant benefit from having a cochlear implant. However, despite this benefit, they are considerably slower to develop binaural abilities such as summation compared to bilateral cochlear implant patients. This suggests that these patients have difficulty fusing electric and acoustic signals. Although this may reflect inherent differences between electric and acoustic stimulation, it may also reflect properties of the processor and fitting system, which result in spectral and temporal compression. To examine the possibility that unilateral spectral and temporal compression can adversely affect binaural fusion, this study tested normal hearing listeners' binaural fusion through the use of vocoded speech with unilateral spectral and temporal compression. The results indicate that unilateral spectral and temporal compression can each hinder binaural fusion and thus may adversely affect binaural abilities in patients with single sided deafness who use a cochlear implant in their deaf ear.
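To make the spectral-compression half of the manipulation concrete, the sketch below maps a set of log-spaced analysis bands in one ear onto a spectrally compressed carrier range in the other ear. The band count and frequency ranges are arbitrary illustrations, not the parameters used in the study, and the temporal-compression manipulation is not shown.

```python
import numpy as np

def log_spaced_edges(f_lo, f_hi, n_bands):
    """Band edges spaced evenly on a logarithmic frequency axis."""
    return np.geomspace(f_lo, f_hi, n_bands + 1)

# Analysis bands for one ear of the vocoder.
analysis_edges = log_spaced_edges(200.0, 7000.0, n_bands=8)

# Spectrally compressed carrier bands for the other ear: the same number of
# bands squeezed into a narrower range, compressing that ear's frequency map.
compressed_edges = log_spaced_edges(400.0, 4000.0, n_bands=8)

for (a_lo, a_hi), (c_lo, c_hi) in zip(
        zip(analysis_edges[:-1], analysis_edges[1:]),
        zip(compressed_edges[:-1], compressed_edges[1:])):
    print(f"analysis {a_lo:7.1f}-{a_hi:7.1f} Hz -> carrier {c_lo:7.1f}-{c_hi:7.1f} Hz")
```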
Affiliation(s)
- Justin M Aronoff
- Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign, 901 S. 6th St., Champaign, IL 61820, USA.
- Corey Shayman
- Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign, 901 S. 6th St., Champaign, IL 61820, USA; Department of Molecular and Cell Biology, University of Illinois at Urbana-Champaign, 393 Morrill Hall, 505 S. Goodwin Ave., Urbana, IL 61801, USA.
- Akila Prasad
- Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign, 901 S. 6th St., Champaign, IL 61820, USA.
- Deepa Suneel
- Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign, 901 S. 6th St., Champaign, IL 61820, USA.
- Julia Stelmach
- Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign, 901 S. 6th St., Champaign, IL 61820, USA.
28
Kan A, Stoelb C, Litovsky RY, Goupell MJ. Effect of mismatched place-of-stimulation on binaural fusion and lateralization in bilateral cochlear-implant users. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2013; 134:2923-36. [PMID: 24116428 PMCID: PMC3799729 DOI: 10.1121/1.4820889] [Citation(s) in RCA: 120] [Impact Index Per Article: 10.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
Bilateral cochlear implants (CIs) have provided some success in improving spatial hearing abilities to patients, but with large variability in performance. One reason for the variability is that there may be a mismatch in the place-of-stimulation arising from electrode arrays being inserted at different depths in each cochlea. Goupell et al. [(2013b). J. Acoust. Soc. Am. 133(4), 2272-2287] showed that increasing interaural mismatch led to non-fused auditory images and poor lateralization of interaural time differences in normal hearing subjects listening to a vocoder. However, a greater bandwidth of activation helped mitigate these effects. In the present study, the same experiments were conducted in post-lingually deafened bilateral CI users with deliberate and controlled interaural mismatch of single electrode pairs. Results show that lateralization was still possible with up to 3 mm of interaural mismatch, even when off-center, or multiple, auditory images were perceived. However, mismatched inputs are not ideal, since they lead to a distorted auditory spatial map. Comparison of CI and normal hearing listeners showed that the CI data were best modeled by a vocoder using Gaussian-pulsed tones with 1.5 mm bandwidth. These results suggest that interaural matching of electrodes is important for binaural cues to be maximally effective.
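The modeling comparison above relies on Gaussian-pulsed tone carriers whose spectral spread stands in for a spread of cochlear excitation. The sketch below shows the underlying envelope-bandwidth trade-off for a single Gaussian-windowed tone pulse; the 1-kHz carrier and 180-Hz spectral width are assumed placeholder values (the study specified bandwidth in millimeters of cochlear place, not in Hz).

```python
import numpy as np

fs = 44100.0
f_c = 1000.0      # carrier frequency of the band (illustrative)
fwhm_hz = 180.0   # assumed spectral full width at half maximum (placeholder)

# For a Gaussian envelope exp(-t^2 / (2 * sigma_t^2)) the magnitude spectrum is
# Gaussian with sigma_f = 1 / (2 * pi * sigma_t), so a target spectral width
# fixes the envelope duration.
sigma_f = fwhm_hz / (2.0 * np.sqrt(2.0 * np.log(2.0)))
sigma_t = 1.0 / (2.0 * np.pi * sigma_f)

t = np.arange(-4.0 * sigma_t, 4.0 * sigma_t, 1.0 / fs)
pulse = np.exp(-t ** 2 / (2.0 * sigma_t ** 2)) * np.cos(2.0 * np.pi * f_c * t)
print(f"envelope sigma_t = {sigma_t * 1000:.2f} ms, pulse length = {len(t)} samples")
```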
Affiliation(s)
- Alan Kan
- Waisman Center, 1500 Highland Avenue, University of Wisconsin, Madison, Wisconsin 53705
29
Aronoff JM, Landsberger DM. The development of a modified spectral ripple test. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2013; 134:EL217-22. [PMID: 23927228 PMCID: PMC3732300 DOI: 10.1121/1.4813802] [Citation(s) in RCA: 103] [Impact Index Per Article: 9.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/06/2013] [Accepted: 06/28/2013] [Indexed: 05/26/2023]
Abstract
Poor spectral resolution can be a limiting factor for hearing impaired listeners, particularly for complex listening tasks such as speech understanding in noise. Spectral ripple tests are commonly used to measure spectral resolution, but these tests contain a number of potential confounds that can make interpretation of the results difficult. To measure spectral resolution while avoiding those confounds, a modified spectral ripple test with dynamically changing ripples was created, referred to as the spectral-temporally modulated ripple test (SMRT). This paper describes the SMRT and provides evidence that it is sensitive to changes in spectral resolution.
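For readers unfamiliar with dynamically changing ripples, the sketch below generates a generic spectro-temporally modulated ripple: many log-spaced tone carriers whose level follows a sinusoid that drifts across log-frequency over time. All parameter values are illustrative; this is not the SMRT implementation itself.

```python
import numpy as np

def dynamic_ripple(dur=0.5, fs=44100, f_lo=100.0, f_hi=6400.0, n_carriers=200,
                   ripples_per_octave=2.0, drift_hz=5.0, depth_db=20.0, seed=0):
    """Generic spectro-temporally modulated ripple: many log-spaced tone
    carriers whose level (in dB) follows a sinusoid that drifts across
    log-frequency over time.  Illustrative only; not the SMRT implementation."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur * fs)) / fs
    freqs = np.geomspace(f_lo, f_hi, n_carriers)
    signal = np.zeros_like(t)
    for f in freqs:
        octaves = np.log2(f / f_lo)
        level_db = 0.5 * depth_db * np.sin(
            2 * np.pi * (ripples_per_octave * octaves + drift_hz * t))
        signal += 10 ** (level_db / 20.0) * np.sin(
            2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
    return signal / np.max(np.abs(signal))

ripple = dynamic_ripple()
print(ripple.shape)   # (22050,)
```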
Affiliation(s)
- Justin M Aronoff
- Communication and Neuroscience Division, House Research Institute, 2100 West 3rd Street, Los Angeles, California 90057, USA.
30
The benefit of bilateral versus unilateral cochlear implantation to speech intelligibility in noise. Ear Hear 2012; 33:673-82. [PMID: 22717687 DOI: 10.1097/aud.0b013e3182587356] [Citation(s) in RCA: 47] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
OBJECTIVES To develop a predictive model of spatial release from masking (SRM) for cochlear implantees, and validate this model against data from the literature. To establish the spatial configurations for which the model predicts a large advantage of bilateral over unilateral implantation. To collect data to support these predictions and generate predictions of more typical advantages of bilateral implantation. DESIGN The model initially assumed that bilateral cochlear implantees had equally effective implants on each side, with which they could perform optimal better-ear listening. Predictions were compared with measurements of SRM, using one and two implants with up to three interfering noises. The effect of relaxing the assumption of equally effective implants was explored. Novel measurements of SRM for eight unilateral implantees were collected, including measurements using speech and noise at azimuths of ± 60 degrees, and compared with prediction. A spatial map of bilateral implant benefit was generated for a situation with one interfering noise in anechoic conditions, and predictions of benefit were generated from binaural room impulse responses in a variety of real rooms. RESULTS The model accurately predicted data from a previous study for multiple interfering noises in a variety of spatial configurations, even when implants were assumed to be equally effective (r = 0.97). It predicted that the maximum benefit of bilateral implantation was 18 dB. Predictions were little affected if the implants were not assumed to be equally effective. The new measurements supported the 18 dB advantage prediction. The spatial map of predicted benefit showed that, for a listener facing the target voice, bilateral implantees could enjoy an advantage of about 10 dB over unilateral implantees in a wide range of situations. Predictions based on real-room measurements with speech and noise at 1 m showed that large benefits can occur even in reverberant spaces. CONCLUSIONS In optimal conditions, the benefit of bilateral implantation to speech intelligibility in noise can be much larger than has previously been reported. This benefit is thus considerably larger than reported benefits of summation or squelch and is robust in reverberation when the interfering source is close.
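The core of a better-ear account of spatial release from masking can be written in a few lines: take the more favorable ear's signal-to-noise ratio in each configuration and compare it with the co-located case. The published model also includes binaural-unmasking and frequency-weighted terms; the toy sketch below, with made-up broadband levels, keeps only the better-ear part.

```python
def better_ear_snr(target_db, noise_db):
    """target_db, noise_db: (left, right) broadband levels in dB at each ear;
    returns the signal-to-noise ratio at the more favorable ear."""
    snr_left = target_db[0] - noise_db[0]
    snr_right = target_db[1] - noise_db[1]
    return max(snr_left, snr_right)

# Co-located target and noise versus noise moved to one side (made-up levels).
colocated = better_ear_snr(target_db=(60, 60), noise_db=(60, 60))   # 0 dB
separated = better_ear_snr(target_db=(60, 60), noise_db=(66, 52))   # +8 dB at the far ear
print("spatial release (better-ear term only):", separated - colocated, "dB")
```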
31
Srinivasan AG, Padilla M, Shannon RV, Landsberger DM. Improving speech perception in noise with current focusing in cochlear implant users. Hear Res 2013; 299:29-36. [PMID: 23467170 DOI: 10.1016/j.heares.2013.02.004] [Citation(s) in RCA: 90] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/19/2012] [Revised: 02/11/2013] [Accepted: 02/15/2013] [Indexed: 10/27/2022]
Abstract
Cochlear implant (CI) users typically have excellent speech recognition in quiet but struggle with understanding speech in noise. It is thought that broad current spread from stimulating electrodes causes adjacent electrodes to activate overlapping populations of neurons, which results in interactions across adjacent channels. Current focusing has been studied as a way to reduce spread of excitation and therefore reduce channel interactions. In particular, partial tripolar stimulation has been shown to reduce spread of excitation relative to monopolar stimulation. However, the crucial question is whether this benefit translates to improvements in speech perception. In this study, we compared speech perception in noise with experimental monopolar and partial tripolar speech processing strategies. The two strategies were matched in terms of number of active electrodes, microphone, filterbanks, stimulation rate and loudness (although both strategies used a lower stimulation rate than typical clinical strategies). The results of this study showed a significant improvement in speech perception in noise with partial tripolar stimulation. All subjects benefited from the current focused speech processing strategy. There was a mean improvement in speech recognition threshold of 2.7 dB in a digits-in-noise task and a mean improvement of 3 dB in a sentences-in-noise task with partial tripolar stimulation relative to monopolar stimulation. Although the experimental monopolar strategy was worse than the clinical strategy, presumably due to different microphones, frequency allocations, and stimulation rates, the experimental partial-tripolar strategy, which had the same changes, showed no acute deficit relative to the clinical strategy.
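A minimal sketch of the partial tripolar current distribution referred to above: the center electrode carries the stimulating current, each flanking electrode returns a fraction σ/2 of it with inverted polarity, and the remainder returns through the extracochlear ground. The σ and current values in the example are arbitrary illustrations, not the levels used with these subjects.

```python
def partial_tripolar_currents(center_ua, sigma):
    """Current distribution for partial tripolar stimulation: each flanking
    electrode returns a fraction sigma/2 of the center current with inverted
    polarity, and the remaining (1 - sigma) returns via the extracochlear
    ground.  sigma = 0 is monopolar; sigma = 1 is full tripolar."""
    flank = -0.5 * sigma * center_ua
    return {"apical_flank": flank,
            "center": center_ua,
            "basal_flank": flank,
            "extracochlear_return": -(1.0 - sigma) * center_ua}

print(partial_tripolar_currents(300.0, sigma=0.75))
```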
Affiliation(s)
- Arthi G Srinivasan
- Department of Communication and Auditory Neuroscience, House Research Institute, 2100 West 3rd Street, Los Angeles, CA 90057, USA.
32
Eskridge EN, Galvin JJ, Aronoff JM, Li T, Fu QJ. Speech perception with music maskers by cochlear implant users and normal-hearing listeners. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2012; 55:800-810. [PMID: 22223890 PMCID: PMC5847337 DOI: 10.1044/1092-4388(2011/11-0124)] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
PURPOSE The goal of this study was to investigate how the spectral and temporal properties of background music may interfere with cochlear implant (CI) and normal-hearing (NH) listeners' speech understanding. METHOD Speech-recognition thresholds (SRTs) were adaptively measured in 11 CI and 9 NH subjects. CI subjects were tested while using their clinical processors; NH subjects were tested while listening to unprocessed audio. Speech was presented with different music maskers (excerpts from musical pieces) and with steady, speech-shaped noise. To estimate the contributions of energetic and informational masking, SRTs were also measured in "music-shaped noise" and in music-shaped noise modulated by the music temporal envelopes. RESULTS NH performance was much better than CI performance. For both subject groups, SRTs were much lower with the music-related maskers than with speech-shaped noise. SRTs were strongly predicted by the amount of energetic masking in the music maskers. Unlike CI users, NH listeners obtained release from masking with envelope and fine structure cues in the modulated noise and music maskers. CONCLUSIONS Although speech understanding was greatly limited by energetic masking in both subject groups, CI performance worsened as more spectrotemporal complexity was added to the maskers, most likely due to poor spectral resolution.
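The control maskers described in the method can be sketched generically: shape white noise to the long-term magnitude spectrum of a music excerpt, then modulate that noise with a smoothed temporal envelope of the same excerpt. The code below is an assumed, simplified construction (random-phase spectral shaping plus a rectified moving-average envelope), not the authors' stimulus-generation code.

```python
import numpy as np

def music_maskers(music, fs, smooth_ms=10.0):
    """Return (steady, modulated): noise with the long-term magnitude spectrum
    of `music`, and the same noise multiplied by a smoothed temporal envelope
    of `music`.  A generic sketch, not the authors' stimulus code."""
    # Spectrally shaped noise: music magnitude spectrum with random phase.
    mag = np.abs(np.fft.rfft(music))
    phase = np.exp(1j * np.random.default_rng(0).uniform(0, 2 * np.pi, mag.size))
    phase[0] = phase[-1] = 1.0            # keep DC and Nyquist bins real
    steady = np.fft.irfft(mag * phase, n=len(music))

    # Temporal envelope: rectification plus a short moving average.
    win = int(smooth_ms * fs / 1000.0)
    env = np.convolve(np.abs(music), np.ones(win) / win, mode="same")
    env /= env.max() + 1e-12
    return steady, steady * env

fs = 44100
music = np.random.default_rng(1).standard_normal(fs)   # stand-in for a music excerpt
steady, modulated = music_maskers(music, fs)
print(steady.shape, modulated.shape)
```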
33
Aronoff JM, Freed DJ, Fisher LM, Pal I, Soli SD. Cochlear implant patients' localization using interaural level differences exceeds that of untrained normal hearing listeners. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2012; 131:EL382-7. [PMID: 22559456 PMCID: PMC3338575 DOI: 10.1121/1.3699017] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/02/2012] [Accepted: 03/12/2012] [Indexed: 05/25/2023]
Abstract
Bilateral cochlear implant patients are unable to localize as well as normal hearing listeners. Although poor sensitivity to interaural time differences clearly contributes to this deficit, it is unclear whether deficits in terms of interaural level differences are also a contributing factor. In this study, localization was tested while manipulating interaural time and level cues using head-related transfer functions. The results indicate that bilateral cochlear implant users' ability to localize based on interaural level differences is actually greater than that of untrained normal hearing listeners.
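One simple way to isolate interaural level cues in an HRTF-based presentation, in the spirit of the manipulation described above, is to replace each head-related impulse response with a zero-phase version that keeps its magnitude response: level differences survive while interaural time differences are removed. The sketch below uses synthetic stand-in impulse responses and is not the cue-manipulation procedure actually used in the study.

```python
import numpy as np

def render_ild_only(source, hrir_left, hrir_right):
    """Render a source with interaural level cues only: each head-related
    impulse response is replaced by a zero-phase (magnitude-only) version,
    which removes interaural time differences but keeps level differences.
    The zero-phase responses wrap around sample 0, which is acceptable for
    this illustration."""
    n = len(hrir_left)
    zp_left = np.fft.irfft(np.abs(np.fft.rfft(hrir_left)), n=n)
    zp_right = np.fft.irfft(np.abs(np.fft.rfft(hrir_right)), n=n)
    return np.convolve(source, zp_left), np.convolve(source, zp_right)

rng = np.random.default_rng(0)
source = rng.standard_normal(4410)
# Synthetic stand-ins for measured HRIRs; real ones would come from a database.
decay = np.exp(-np.arange(256) / 32.0)
hrir_l = rng.standard_normal(256) * decay
hrir_r = 0.5 * rng.standard_normal(256) * decay   # quieter ear -> level difference
left, right = render_ild_only(source, hrir_l, hrir_r)
print(left.shape, right.shape)
```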
Affiliation(s)
- Justin M Aronoff
- Communication and Neuroscience Division, House Research Institute, 2100 West 3rd Street, Los Angeles, California 90057, USA.