1
Hu H, Ewert SD, Kollmeier B, Vickers D. Rate dependent neural responses of interaural-time-difference cues in fine-structure and envelope. PeerJ 2024; 12:e17104. PMID: 38680894; PMCID: PMC11055513; DOI: 10.7717/peerj.17104.
Abstract
Advancements in cochlear implants (CIs) have led to a significant increase in bilateral CI users, especially among children. Yet most bilateral CI users do not fully achieve the intended binaural benefit due to potential limitations in signal processing and/or surgical implant positioning. One crucial auditory cue that normal hearing (NH) listeners can benefit from is the interaural time difference (ITD), i.e., the difference between the arrival times of a sound at the two ears. ITD sensitivity is thought to rely heavily on the effective utilization of temporal fine structure (very rapid oscillations in sound). Unfortunately, most current CIs do not transmit such true fine structure. Nevertheless, bilateral CI users have demonstrated sensitivity to ITD cues delivered through the envelope or through interaural pulse time differences, i.e., the time gap between the pulses delivered to the two implants. However, their ITD sensitivity is significantly poorer than that of NH individuals, and it degrades further at higher CI stimulation rates, especially when the rate exceeds 300 pulses per second. The overall purpose of this research thread is to improve spatial hearing abilities in bilateral CI users. This study aims to develop electroencephalography (EEG) paradigms that can be used in clinical settings to assess and optimize the delivery of ITD cues, which are crucial for spatial hearing in everyday life. The research objective of this article was to determine the effect of CI stimulation pulse rate on ITD sensitivity, and to characterize the rate-dependent degradation in ITD perception using EEG measures. To develop protocols for bilateral CI studies, EEG responses were obtained from NH listeners using sinusoidal-amplitude-modulated (SAM) tones and filtered clicks with changes in either fine-structure ITD (ITDFS) or envelope ITD (ITDENV).
Multiple EEG responses were analyzed, including the subcortical auditory steady-state responses (ASSRs) and the cortical auditory evoked potentials (CAEPs) elicited by stimulus onset, offset, and changes. Results indicated that acoustic change complex (ACC) responses elicited by ITDENV changes were significantly smaller than, or absent compared to, those elicited by ITDFS changes. The ACC morphologies evoked by ITDFS changes were similar to onset and offset CAEPs, although the peak latencies were longest for ACC responses and shortest for offset CAEPs. The high-frequency stimuli clearly elicited subcortical ASSRs, although these were smaller than those evoked by SAM tones with lower carrier frequencies. The 40-Hz ASSRs decreased with increasing carrier frequency. Filtered clicks elicited larger ASSRs than high-frequency SAM tones, with the order 40 > 160 > 80 > 320 Hz ASSR for both stimulus types. Wavelet analysis revealed a clear interaction between detectable transient CAEPs and 40-Hz ASSRs in the time-frequency domain for SAM tones with a low carrier frequency.
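As an illustrative aside (not part of the article): ASSRs such as the 40-Hz response described above are commonly quantified as the spectral magnitude of the EEG epoch at the modulation frequency. A minimal sketch on synthetic data (all values hypothetical):

```python
import numpy as np

fs = 1000                      # sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)  # one 2-s epoch

# Hypothetical "EEG" epoch: a 40-Hz steady-state component buried in noise
rng = np.random.default_rng(0)
eeg = 1.0 * np.sin(2 * np.pi * 40 * t) + 0.5 * rng.standard_normal(t.size)

def assr_amplitude(x, fs, f_mod):
    """Spectral magnitude (in signal units) at the modulation frequency."""
    spec = np.fft.rfft(x) / (x.size / 2)   # scale so a pure tone of amplitude A gives |bin| = A
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    return np.abs(spec[np.argmin(np.abs(freqs - f_mod))])

amp_40 = assr_amplitude(eeg, fs, 40)
```

With real recordings, epochs are typically averaged across trials first, and response significance is assessed against neighboring noise bins.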
Affiliation(s)
- Hongmei Hu
- SOUND Lab, Cambridge Hearing Group, Department of Clinical Neuroscience, Cambridge University, Cambridge, United Kingdom
- Department of Medical Physics and Acoustics, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany
- Stephan D. Ewert
- Department of Medical Physics and Acoustics, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany
- Birger Kollmeier
- Department of Medical Physics and Acoustics, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany
- Deborah Vickers
- SOUND Lab, Cambridge Hearing Group, Department of Clinical Neuroscience, Cambridge University, Cambridge, United Kingdom
2
Chen B, Zhang X, Chen J, Shi Y, Zou X, Liu P, Li Y, Galvin JJ, Fu QJ. Tonal language experience facilitates the use of spatial cues for segregating competing speech in bimodal cochlear implant listeners. JASA Express Lett 2024; 4:034401. PMID: 38426890; PMCID: PMC10926108; DOI: 10.1121/10.0025058.
Abstract
English-speaking bimodal and bilateral cochlear implant (CI) users can segregate competing speech using talker sex cues but not spatial cues. While tonal language experience allows for greater utilization of talker sex cues by listeners with normal hearing, tonal language benefits remain unclear for CI users. The present study assessed the ability of Mandarin-speaking bilateral and bimodal CI users to recognize target sentences amidst speech maskers that varied in terms of spatial cues and/or talker sex cues relative to the target. Unlike English-speaking CI users, Mandarin-speaking CI users exhibited greater utilization of spatial cues, particularly in bimodal listening.
Affiliation(s)
- Biao Chen
- Department of Otolaryngology, Head and Neck Surgery, Beijing TongRen Hospital, Capital Medical University, Ministry of Education of China, Beijing, People's Republic of China
- Xinyi Zhang
- Department of Otolaryngology, Head and Neck Surgery, Beijing TongRen Hospital, Capital Medical University, Ministry of Education of China, Beijing, People's Republic of China
- Jingyuan Chen
- Department of Otolaryngology, Head and Neck Surgery, Beijing TongRen Hospital, Capital Medical University, Ministry of Education of China, Beijing, People's Republic of China
- Ying Shi
- Department of Otolaryngology, Head and Neck Surgery, Beijing TongRen Hospital, Capital Medical University, Ministry of Education of China, Beijing, People's Republic of China
- Xinyue Zou
- Department of Otolaryngology, Head and Neck Surgery, Beijing TongRen Hospital, Capital Medical University, Ministry of Education of China, Beijing, People's Republic of China
- Ping Liu
- Department of Otolaryngology, Head and Neck Surgery, Beijing TongRen Hospital, Capital Medical University, Ministry of Education of China, Beijing, People's Republic of China
- Yongxin Li
- Department of Otolaryngology, Head and Neck Surgery, Beijing TongRen Hospital, Capital Medical University, Ministry of Education of China, Beijing, People's Republic of China
- John J Galvin
- House Institute Foundation, Los Angeles, California 90057, USA
- Qian-Jie Fu
- Department of Head and Neck Surgery, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California 90095, USA
3
Cychosz M, Xu K, Fu QJ. Effects of spectral smearing on speech understanding and masking release in simulated bilateral cochlear implants. PLoS One 2023; 18:e0287728. PMID: 37917727; PMCID: PMC10621938; DOI: 10.1371/journal.pone.0287728.
Abstract
Differences in spectro-temporal degradation may explain some of the variability in cochlear implant users' speech outcomes. The present study employed vocoder simulations with listeners with typical hearing to evaluate how differences in the degree of channel interaction across ears affect spatial speech recognition. Speech recognition thresholds and spatial release from masking were measured in 16 normal-hearing subjects listening to simulated bilateral cochlear implants. Sixteen-channel sine-vocoded speech simulated limited, broad, or mixed channel interaction across ears, in dichotic and diotic target-masker conditions. Thresholds were highest with broad channel interaction in both ears and improved when channel interaction decreased in one ear, and again when it decreased in both ears. Masking release was apparent across conditions. Results from this simulation study with typically hearing listeners show that channel interaction may impact speech recognition more than masking release, and may have implications for the effects of channel interaction on cochlear implant users' speech recognition outcomes.
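As a hedged illustration of the general technique (the study's actual vocoder parameters are not reproduced here): a sine vocoder band-limits the input, extracts the channel envelope, and uses the envelope to modulate a sine carrier at the channel's center frequency. A numpy-only sketch of a single channel:

```python
import numpy as np

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
# Stand-in "speech": a 300-Hz tone with a slow 4-Hz amplitude contour
signal = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 300 * t)

def bandpass_fft(x, fs, lo, hi):
    """Crude brick-wall bandpass via FFT bin masking."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    spec[(freqs < lo) | (freqs > hi)] = 0
    return np.fft.irfft(spec, n=x.size)

def vocode_channel(x, fs, lo, hi):
    """One sine-vocoder channel: bandpass, envelope, re-modulate a sine carrier."""
    band = bandpass_fft(x, fs, lo, hi)
    env = np.abs(band)
    win = int(fs / 50)  # moving average ~ lowpass with ~50-Hz cutoff
    env = np.convolve(env, np.ones(win) / win, mode="same")
    carrier = np.sin(2 * np.pi * np.sqrt(lo * hi) * np.arange(x.size) / fs)
    return env * carrier

out = vocode_channel(signal, fs, 200, 400)
```

Broadening the analysis filters beyond the channel boundaries is one way such simulations mimic channel interaction (spectral smearing); the brick-wall filter above corresponds to the "limited interaction" case.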
Affiliation(s)
- Margaret Cychosz
- Department of Linguistics, University of California, Los Angeles, Los Angeles, CA, United States of America
- Kevin Xu
- Department of Head and Neck Surgery, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States of America
- Qian-Jie Fu
- Department of Head and Neck Surgery, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States of America
4
Dennison SR, Thakkar T, Kan A, Litovsky RY. Lateralization of binaural envelope cues measured with a mobile cochlear-implant research processor. J Acoust Soc Am 2023; 153:3543-3558. PMID: 37390320; PMCID: PMC10314808; DOI: 10.1121/10.0019879.
Abstract
Bilateral cochlear implant (BICI) listeners do not have full access to the binaural cues that normal hearing (NH) listeners use for spatial hearing tasks such as localization. When using their unsynchronized everyday processors, BICI listeners demonstrate sensitivity to interaural level differences (ILDs) in the envelopes of sounds, but interaural time differences (ITDs) are less reliably available. It is unclear how BICI listeners use combinations of ILDs and envelope ITDs, and how much each cue contributes to perceived sound location. The CCi-MOBILE is a bilaterally synchronized research processor with the untested potential to provide spatial cues to BICI listeners. In the present study, the CCi-MOBILE was used to measure the ability of BICI listeners to perceive lateralized sound sources when single pairs of electrodes were presented amplitude-modulated stimuli with combinations of ILDs and envelope ITDs. Young NH listeners were also tested using amplitude-modulated high-frequency tones. A cue weighting analysis with six BICI and ten NH listeners revealed that ILDs contributed more than envelope ITDs to lateralization for both groups. Moreover, envelope ITDs contributed to lateralization for NH listeners but had negligible contribution for BICI listeners. These results suggest that the CCi-MOBILE is suitable for binaural testing and developing bilateral processing strategies.
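As an illustrative aside, a common form of cue-weighting analysis fits perceived lateral position as a weighted linear combination of ILD and envelope ITD; the fitted weights then indicate each cue's contribution. A sketch on synthetic responses (the weights, ranges, and noise level below are assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical trial parameters: ILD in dB, envelope ITD in ms
ilds = rng.uniform(-10, 10, 200)
itds = rng.uniform(-1, 1, 200)

# Simulated lateralization responses, ILD-dominated as reported for BICI listeners
true_w_ild, true_w_itd = 0.8, 0.1
responses = true_w_ild * ilds + true_w_itd * itds + 0.2 * rng.standard_normal(200)

# Least-squares cue weights: response ~ w_ild * ILD + w_itd * ITD + bias
X = np.column_stack([ilds, itds, np.ones_like(ilds)])
w_ild, w_itd, bias = np.linalg.lstsq(X, responses, rcond=None)[0]
```

Normalizing the two weights (e.g., w_ild / (w_ild + w_itd)) gives the relative cue dominance reported in such analyses.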
Affiliation(s)
- Tanvi Thakkar
- University of Wisconsin-La Crosse, La Crosse, Wisconsin 54601, USA
- Alan Kan
- Macquarie University, Macquarie Park, New South Wales, Australia
- Ruth Y Litovsky
- University of Wisconsin-Madison, Madison, Wisconsin 53711, USA
5
Thomas M, Galvin JJ, Fu QJ. Importance of ipsilateral residual hearing for spatial hearing by bimodal cochlear implant users. Sci Rep 2023; 13:4960. PMID: 36973380; PMCID: PMC10042848; DOI: 10.1038/s41598-023-32135-0.
Abstract
Bimodal cochlear implant (CI) listeners have difficulty utilizing spatial cues to segregate competing speech, possibly due to tonotopic mismatch between the acoustic input frequency and the electrode place of stimulation. The present study investigated the effects of tonotopic mismatch in the context of residual acoustic hearing in the non-CI ear or residual hearing in both ears. Speech recognition thresholds (SRTs) were measured with two co-located or spatially separated speech maskers in normal-hearing adults listening to acoustic simulations of CIs; low-frequency acoustic information was available in the non-CI ear (bimodal listening) or in both ears. Bimodal SRTs were significantly better with tonotopically matched than mismatched electric hearing for both co-located and spatially separated speech maskers. When there was no tonotopic mismatch, residual acoustic hearing in both ears provided a significant benefit when maskers were spatially separated, but not when co-located. The simulation data suggest that hearing preservation in the implanted ear may significantly benefit bimodal CI listeners' utilization of spatial cues to segregate competing speech, especially when the residual acoustic hearing is comparable across the two ears. Also, the benefits of bilateral residual acoustic hearing may be best ascertained with spatially separated maskers.
6
Cochlear Implant Facilitates the Use of Talker Sex and Spatial Cues to Segregate Competing Speech in Unilaterally Deaf Listeners. Ear Hear 2023; 44:77-91. PMID: 35733275; DOI: 10.1097/aud.0000000000001254.
Abstract
OBJECTIVES Talker sex and spatial cues can facilitate segregation of competing speech. However, the spectrotemporal degradation associated with cochlear implants (CIs) can limit the benefit of talker sex and spatial cues. Acoustic hearing in the nonimplanted ear can improve access to talker sex cues in CI users. However, it is unclear whether the CI can improve segregation of competing speech when maskers are symmetrically placed around the target (i.e., when spatial cues are available), compared with acoustic hearing alone. The aim of this study was to investigate whether a CI can improve segregation of competing speech by individuals with unilateral hearing loss. DESIGN Speech recognition thresholds (SRTs) for competing speech were measured in 16 normal-hearing (NH) adults and 16 unilaterally deaf CI users. All participants were native speakers of Mandarin Chinese. CI users were divided into two groups according to thresholds in the nonimplanted ear: (1) single-sided deaf (SSD): pure-tone thresholds <25 dB HL at all audiometric frequencies, and (2) asymmetric hearing loss (AHL): one or more thresholds >25 dB HL. SRTs were measured for target sentences produced by a male talker in the presence of two masker talkers (different male or female talkers). The target sentence was always presented via loudspeaker directly in front of the listener (0°), and the maskers were either co-located with the target (0°) or spatially separated from the target at ±90°. Three segregation cue conditions were tested to measure masking release (MR) relative to the baseline condition: (1) Talker sex, (2) Spatial, and (3) Talker sex + Spatial. For CI users, SRTs were measured with the CI on or off. RESULTS Binaural MR was significantly better for the NH group than for the AHL or SSD groups (P < 0.001 in all cases). For the NH group, mean MR was largest with the Talker sex + Spatial cues (18.8 dB) and smallest with the Talker sex cues (10.7 dB).
In contrast, mean MR for the SSD group was largest with the Talker sex + Spatial cues (14.7 dB) and smallest with the Spatial cues (4.8 dB). For the AHL group, mean MR was largest with the Talker sex + Spatial cues (7.8 dB) and smallest with the Talker sex (4.8 dB) and the Spatial cues (4.8 dB). MR was significantly better with the CI on than off for both the AHL (P = 0.014) and SSD (P < 0.001) groups. Across all unilaterally deaf CI users, monaural (acoustic ear alone) and binaural MR were significantly correlated with unaided pure-tone average thresholds in the nonimplanted ear for the Talker sex and Talker sex + Spatial conditions (P < 0.001 in both cases), but not for the Spatial condition. CONCLUSION Although the CI benefitted unilaterally deaf listeners' segregation of competing speech, MR was much poorer than that observed in NH listeners. In contrast to previous findings with steady noise maskers, the CI benefit for segregating competing speech by talker sex was greater in the SSD group than in the AHL group.
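As an illustrative aside, masking release in this paradigm is simply the SRT improvement that a segregation cue provides relative to the baseline condition:

```python
# Masking release (MR): SRT improvement (in dB) that a segregation cue provides
# relative to the baseline (co-located, same-sex-masker) condition.
# Lower SRT = better performance, so MR = baseline SRT minus cue-condition SRT.
def masking_release(srt_baseline_db, srt_cue_db):
    return srt_baseline_db - srt_cue_db

# Illustrative values only (not the study's raw data): a baseline SRT of 2 dB
# that drops to -16.8 dB with combined cues corresponds to 18.8 dB of MR.
mr = masking_release(2.0, -16.8)
```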
7
Jürgens T, Wesarg T, Oetting D, Jung L, Williges B. Spatial speech-in-noise performance in simulated single-sided deaf and bimodal cochlear implant users in comparison with real patients. Int J Audiol 2023; 62:30-43. PMID: 34962428; DOI: 10.1080/14992027.2021.2015633.
Abstract
OBJECTIVE Speech reception thresholds (SRTs) in spatial scenarios were measured in simulated cochlear implant (CI) listeners with either contralateral normal hearing or aided hearing impairment (bimodal), and compared to SRTs of real patients, who had been measured using the exact same paradigm, to assess the goodness of the simulation. DESIGN CI listening was simulated using a vocoder incorporating actual CI signal processing and physiologic details of electric stimulation on one side. Unprocessed signals or a simulation of aided moderate or profound hearing impairment was used contralaterally. Three spatial speech-in-noise scenarios were tested using virtual acoustics to assess spatial release from masking (SRM) and combined benefit. STUDY SAMPLE Eleven normal-hearing listeners participated in the experiment. RESULTS For contralateral normal and aided moderately impaired hearing, bilaterally assessed SRTs were not statistically different from unilateral SRTs of the better ear, indicating "better-ear listening". Combined benefit was only found for contralateral profoundly impaired hearing. As in patients, SRM was highest for contralateral normal hearing and decreased systematically with more severe simulated impairment. Comparison to actual patients showed good reproduction of SRTs, SRM, and better-ear listening. CONCLUSIONS The simulations reproduced better-ear listening as in patients and suggest that combined benefit in spatial scenes predominantly occurs when both ears show poor speech-in-noise performance.
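As an illustrative aside, "better-ear listening" implies that the bilateral SRT is predicted by the better (lower) of the two unilateral SRTs, with no additional combined benefit:

```python
# "Better-ear listening": the bilateral SRT equals the better (lower) of the
# two unilateral SRTs; any SRT below this prediction is "combined benefit".
def predicted_bilateral_srt(srt_left_db, srt_right_db):
    return min(srt_left_db, srt_right_db)

# Hypothetical example: CI ear alone 4 dB SNR, contralateral ear alone -6 dB SNR.
prediction = predicted_bilateral_srt(4.0, -6.0)
```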
Affiliation(s)
- Tim Jürgens
- Institute of Acoustics, University of Applied Sciences Lübeck, Lübeck, Germany
- Medical Physics and Cluster of Excellence "Hearing4all", Carl-von-Ossietzky University, Oldenburg, Germany
- Thomas Wesarg
- Faculty of Medicine, Department of Otorhinolaryngology - Head and Neck Surgery, Medical Center, University of Freiburg, Freiburg, Germany
- Lorenz Jung
- Faculty of Medicine, Department of Otorhinolaryngology - Head and Neck Surgery, Medical Center, University of Freiburg, Freiburg, Germany
- Ben Williges
- Medical Physics and Cluster of Excellence "Hearing4all", Carl-von-Ossietzky University, Oldenburg, Germany
- SOUND Lab, Cambridge Hearing Group, Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK
8
Bestel J, Legris E, Rembaud F, Mom T, Galvin JJ. Speech understanding in diffuse steady noise in typically hearing and hard of hearing listeners. PLoS One 2022; 17:e0274435. PMID: 36103551; PMCID: PMC9473430; DOI: 10.1371/journal.pone.0274435.
Abstract
Spatial cues can facilitate segregation of target speech from maskers. However, in clinical practice, masked speech understanding is most often evaluated using co-located speech and maskers (i.e., without spatial cues). Many hearing aid centers in France are equipped with five-loudspeaker arrays, allowing masked speech understanding to be measured with spatial cues. It is unclear how hearing status may affect utilization of spatial cues to segregate speech and noise. In this study, speech reception thresholds (SRTs) were measured for target speech in “diffuse noise” (target speech from 1 speaker, noise from the remaining 4 speakers) in 297 adult listeners across 9 Audilab hearing centers. Participants were categorized according to pure-tone-average (PTA) thresholds: typically-hearing (TH; ≤20 dB HL), mild hearing loss (Mild; >20 to ≤40 dB HL), moderate hearing loss 1 (Mod-1; >40 to ≤55 dB HL), and moderate hearing loss 2 (Mod-2; >55 to ≤65 dB HL). All participants were tested with unaided hearing. SRTs in diffuse noise were significantly correlated with PTA thresholds, age at testing, and word and phoneme recognition scores in quiet. Stepwise linear regression analysis showed that SRTs in diffuse noise were significantly predicted by a combination of PTA thresholds and word recognition scores in quiet. SRTs were also measured in co-located and diffuse noise in 65 additional participants. SRTs were significantly lower in diffuse noise than in co-located noise only for the TH and Mild groups; masking release with diffuse noise (relative to co-located noise) was significant only for the TH group. The results are consistent with previous studies showing that hard of hearing listeners have greater difficulty using spatial cues to segregate competing speech. The data suggest that speech understanding in diffuse noise provides additional insight into the difficulties that hard of hearing individuals experience in complex listening environments.
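As an illustrative aside, a multiple linear regression predicting SRT from PTA threshold and word score in quiet (the final model form reported above) can be sketched with ordinary least squares; all data below are synthetic, with assumed effect sizes:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 297  # same sample size as the study; the data themselves are synthetic

pta = rng.uniform(0, 65, n)                           # pure-tone average (dB HL)
word = 100 - 0.6 * pta + 5.0 * rng.standard_normal(n)  # word score in quiet (%)

# Synthetic SRTs that worsen (rise) with PTA and improve with word score
srt = -6.0 + 0.08 * pta - 0.08 * word + 0.8 * rng.standard_normal(n)

# Two-predictor linear model: SRT ~ b0 + b1*PTA + b2*word
X = np.column_stack([np.ones(n), pta, word])
b0, b1, b2 = np.linalg.lstsq(X, srt, rcond=None)[0]
```

A true stepwise procedure would additionally add or drop predictors based on fit criteria; the least-squares fit above corresponds to the final retained model.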
Affiliation(s)
- Thierry Mom
- Centre Hospitalier Universitaire de Clermont-Ferrand, Clermont-Ferrand, France
- John J. Galvin
- University Hospital Center of Tours, Tours, France
- House Institute Foundation, Los Angeles, CA, United States of America
9
Hu H, Hartog L, Kollmeier B, Ewert SD. Spectral and binaural loudness summation of equally loud narrowband signals in single-sided-deafness and bilateral cochlear implant users. Front Neurosci 2022; 16:931748. PMID: 36071716; PMCID: PMC9444060; DOI: 10.3389/fnins.2022.931748.
Abstract
Recent studies on loudness perception of binaural broadband signals in hearing impaired listeners found large individual differences, suggesting the use of such signals in hearing aid fitting. Likewise, clinical cochlear implant (CI) fitting with narrowband/single-electrode signals might cause suboptimal loudness perception in bilateral and bimodal CI listeners. Here, spectral and binaural loudness summation in normal hearing (NH) listeners, bilateral CI (biCI) users, and unilateral CI (uCI) users with normal hearing in the unaided ear was investigated to assess the relevance of binaural/bilateral fitting in CI users. To compare the three groups, categorical loudness scaling was performed for an equal categorical loudness noise (ECLN) consisting of the sum of six spectrally separated third-octave noises at equal loudness. The acoustical ECLN procedure was adapted to an equivalent procedure in the electrical domain using direct stimulation. To ensure the same broadband loudness in binaural measurements with simultaneous electrical and acoustical stimulation, a modified binaural ECLN was introduced and cross-validated against self-adjusted loudness in a loudness balancing experiment. Results showed higher (spectral) loudness summation of the six equally loud narrowband signals in the ECLN in CI users than in NH listeners. Binaural loudness summation was found for all three listener groups (NH, uCI, and biCI). No increased binaural loudness summation could be found for the current uCI and biCI listeners compared to the NH group. In uCI users, loudness balancing between narrowband signals and single electrodes did not automatically result in balanced loudness perception across ears, emphasizing the importance of binaural/bilateral fitting.
Affiliation(s)
- Hongmei Hu
- Medizinische Physik and Cluster of Excellence “Hearing4all”, Department of Medical Physics and Acoustics, Universität Oldenburg, Oldenburg, Germany
- Laura Hartog
- Medizinische Physik and Cluster of Excellence “Hearing4all”, Department of Medical Physics and Acoustics, Universität Oldenburg, Oldenburg, Germany
- Hörzentrum Oldenburg gGmbH, Oldenburg, Germany
- Birger Kollmeier
- Medizinische Physik and Cluster of Excellence “Hearing4all”, Department of Medical Physics and Acoustics, Universität Oldenburg, Oldenburg, Germany
- Hörzentrum Oldenburg gGmbH, Oldenburg, Germany
- Stephan D. Ewert
- Medizinische Physik and Cluster of Excellence “Hearing4all”, Department of Medical Physics and Acoustics, Universität Oldenburg, Oldenburg, Germany
10
Gibbs BE, Bernstein JGW, Brungart DS, Goupell MJ. Effects of better-ear glimpsing, binaural unmasking, and spectral resolution on spatial release from masking in cochlear-implant users. J Acoust Soc Am 2022; 152:1230. PMID: 36050186; PMCID: PMC9420049; DOI: 10.1121/10.0013746.
Abstract
Bilateral cochlear-implant (BICI) listeners obtain less spatial release from masking (SRM; speech-recognition improvement for spatially separated vs co-located conditions) than normal-hearing (NH) listeners, especially for symmetrically placed maskers that produce similar long-term target-to-masker ratios at the two ears. Two experiments examined possible causes of this deficit, including limited better-ear glimpsing (using speech information from the more advantageous ear in each time-frequency unit), limited binaural unmasking (using interaural differences to improve signal-in-noise detection), or limited spectral resolution. Listeners had NH (presented with unprocessed or vocoded stimuli) or BICIs. Experiment 1 compared natural symmetric maskers, idealized monaural better-ear masker (IMBM) stimuli that automatically performed better-ear glimpsing, and hybrid stimuli that added worse-ear information, potentially restoring binaural cues. BICI and NH-vocoded SRM was comparable to NH-unprocessed SRM for idealized stimuli but was 14%-22% lower for symmetric stimuli, suggesting limited better-ear glimpsing ability. Hybrid stimuli improved SRM for NH-unprocessed listeners but degraded SRM for BICI and NH-vocoded listeners, suggesting they experienced across-ear interference instead of binaural unmasking. In experiment 2, increasing the number of vocoder channels did not change NH-vocoded SRM. BICI SRM deficits likely reflect a combination of across-ear interference, limited better-ear glimpsing, and poorer binaural unmasking that stems from cochlear-implant-processing limitations other than reduced spectral resolution.
Affiliation(s)
- Bobby E Gibbs
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Joshua G W Bernstein
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889, USA
- Douglas S Brungart
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889, USA
- Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
11
The Impact of Synchronized Cochlear Implant Sampling and Stimulation on Free-Field Spatial Hearing Outcomes: Comparing the ciPDA Research Processor to Clinical Processors. Ear Hear 2022; 43:1262-1272. PMID: 34882619; PMCID: PMC9174346; DOI: 10.1097/aud.0000000000001179.
Abstract
OBJECTIVES Bilateral cochlear implant (BiCI) listeners use independent processors in each ear. This independence and lack of shared hardware prevent control of the timing of sampling and stimulation across ears, which precludes the development of bilaterally coordinated signal processing strategies. As a result, these devices potentially reduce access to binaural cues and introduce disruptive artifacts. For example, measurements from two clinical processors demonstrate that independently running processors introduce interaural incoherence. These issues are typically avoided in the laboratory by using research processors with bilaterally synchronized hardware. However, these research processors do not typically run in real time and are difficult to take out into the real world due to their benchtop nature. Hence, it has been difficult to answer whether hardware synchronization alone, by reducing bilateral stimulation artifacts, can improve functional spatial hearing performance. The CI personal digital assistant (ciPDA) research processor, which uses one clock to drive two processors, presented an opportunity to examine whether synchronization of hardware can have an impact on spatial hearing performance. DESIGN Free-field sound localization and spatial release from masking (SRM) were assessed in 10 BiCI listeners using both their clinical processors and the synchronized ciPDA processor. For sound localization, localization accuracy was compared within subjects for the two processor types. For SRM, speech reception thresholds were compared for spatially separated and co-located configurations, and the amount of unmasking was compared for synchronized and unsynchronized hardware. There were no deliberate changes to the sound processing strategy on the ciPDA to restore or improve binaural cues. RESULTS There was no significant difference in localization accuracy between unsynchronized and synchronized hardware (p = 0.62).
Speech reception thresholds were higher with the ciPDA. In addition, although five of eight participants demonstrated improved SRM with synchronized hardware, there was no significant difference in the amount of unmasking due to spatial separation between synchronized and unsynchronized hardware (p = 0.21). CONCLUSIONS Using processors with synchronized hardware did not yield an improvement in sound localization or SRM for all individuals, suggesting that mere synchronization of hardware is not sufficient for improving spatial hearing outcomes. Further work is needed to improve sound coding strategies to facilitate access to spatial hearing cues. This study provides a benchmark for spatial hearing performance with real-time, bilaterally-synchronized research processors.
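As an illustrative aside, free-field localization accuracy is commonly summarized as the root-mean-square (RMS) error between target and response azimuths; the specific metric used by the study is not restated here, so treat this as a generic sketch:

```python
import math

# RMS localization error (degrees) between target and response azimuths.
def rms_error(targets_deg, responses_deg):
    n = len(targets_deg)
    return math.sqrt(sum((t - r) ** 2 for t, r in zip(targets_deg, responses_deg)) / n)

# Hypothetical trials: targets at -45, 0, +45 degrees with imperfect responses
err = rms_error([-45, 0, 45], [-30, 10, 40])
```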
12
Thomas M, Willis S, Galvin JJ, Fu QJ. Effects of tonotopic matching and spatial cues on segregation of competing speech in simulations of bilateral cochlear implants. PLoS One 2022; 17:e0270759. PMID: 35788202; PMCID: PMC9255761; DOI: 10.1371/journal.pone.0270759.
Abstract
In the clinical fitting of cochlear implants (CIs), the lowest input acoustic frequency is typically much lower than the characteristic frequency associated with the most apical electrode position, due to the limited electrode insertion depth. For bilateral CI users, electrode positions may differ across ears. However, the same acoustic-to-electrode frequency allocation table (FAT) is typically assigned to both ears. As such, bilateral CI users may experience both intra-aural frequency mismatch within each ear and inter-aural mismatch across ears. This inter-aural mismatch may limit the ability of bilateral CI users to take advantage of spatial cues when attempting to segregate competing speech. Adjusting the FAT to tonotopically match the electrode position in each ear (i.e., raising the lowest acoustic input frequency) is theorized to reduce this inter-aural mismatch. Unfortunately, this approach also entails the loss of acoustic information below the modified lowest input frequency. The present study explored the trade-off between reduced inter-aural frequency mismatch and low-frequency information loss for segregation of competing speech. Normal-hearing participants were tested while listening to acoustic simulations of bilateral CIs. Speech reception thresholds (SRTs) were measured for target sentences produced by a male talker in the presence of two different male talkers. Masker speech was either co-located with or spatially separated from the target speech. The bilateral CI simulations were produced by 16-channel sinewave vocoders; the simulated insertion depth was fixed in one ear and varied in the other ear, resulting in an inter-aural mismatch of 0, 2, or 6 mm in terms of cochlear place. Two FAT conditions were compared: 1) clinical (200-8000 Hz in both ears), or 2) matched to the simulated insertion depth in each ear. 
Results showed that SRTs were significantly lower with the matched than with the clinical FAT, regardless of the insertion depth or spatial configuration of the masker speech. The largest improvement in SRTs with the matched FAT was observed when the inter-aural mismatch was largest (6 mm). These results suggest that minimizing inter-aural mismatch with tonotopically matched FATs may benefit bilateral CI users' ability to segregate competing speech despite substantial low-frequency information loss in ears with shallow insertion depths.
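Cochlear place (in mm) is commonly related to characteristic frequency via Greenwood's human place-frequency function, which makes it easy to see why millimeter-scale insertion-depth differences translate into large frequency mismatches. A sketch, assuming the standard 35-mm cochlear duct length and insertion depth measured from the base; the study's exact place-frequency mapping may differ:

```python
def greenwood_cf(depth_mm, length_mm=35.0, A=165.4, a=2.1, k=0.88):
    """Greenwood characteristic frequency (Hz) at a given electrode insertion
    depth, measured in mm from the cochlear base (round window)."""
    x = (length_mm - depth_mm) / length_mm  # proportional distance from the apex
    return A * (10 ** (a * x) - k)

# A 6-mm place difference corresponds to a large frequency mismatch:
cf_shallow = greenwood_cf(20.0)  # shallower insertion -> higher characteristic frequency
cf_deep = greenwood_cf(26.0)     # deeper insertion -> lower characteristic frequency
```

Under these assumptions, electrodes at 20 mm and 26 mm map to characteristic frequencies of roughly 1170 Hz and 430 Hz, respectively, illustrating why a shared FAT leaves one ear tonotopically shifted.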
Affiliation(s)
- Mathew Thomas
- Department of Head and Neck Surgery, David Geffen School of Medicine, UCLA, Los Angeles, CA, United States of America
- Shelby Willis
- Department of Head and Neck Surgery, David Geffen School of Medicine, UCLA, Los Angeles, CA, United States of America
- John J. Galvin
- House Institute Foundation, Los Angeles, California, United States of America
- Qian-Jie Fu
- Department of Head and Neck Surgery, David Geffen School of Medicine, UCLA, Los Angeles, CA, United States of America

13
Xu K, Willis S, Gopen Q, Fu QJ. Effects of Spectral Resolution and Frequency Mismatch on Speech Understanding and Spatial Release From Masking in Simulated Bilateral Cochlear Implants. Ear Hear 2021; 41:1362-1371. [PMID: 32132377 DOI: 10.1097/aud.0000000000000865] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
OBJECTIVES Due to interaural frequency mismatch, bilateral cochlear-implant (CI) users may be less able to take advantage of binaural cues that normal-hearing (NH) listeners use for spatial hearing, such as interaural time differences and interaural level differences. As such, bilateral CI users have difficulty segregating competing speech even when the target and competing talkers are spatially separated. The goal of this study was to evaluate the effects of spectral resolution, tonotopic mismatch (the mismatch between the acoustic center frequency assigned to a CI electrode within an implanted ear and the expected spiral ganglion characteristic frequency), and interaural mismatch (differences in the degree of tonotopic mismatch in each ear) on speech understanding and spatial release from masking (SRM) in the presence of competing talkers in NH subjects listening to bilateral vocoder simulations. DESIGN During testing, both target and masker speech were presented in five-word sentences that had the same syntax but were not necessarily meaningful. The sentences were composed of five categories in fixed order (Name, Verb, Number, Color, and Clothes), each of which had 10 items, such that multiple sentences could be generated by randomly selecting a word from each category. Speech reception thresholds (SRTs) for the target sentence presented in competing speech maskers were measured. The target speech was delivered to both ears and the two speech maskers were delivered to (1) both ears (diotic masker), or (2) different ears (dichotic masker: one delivered to the left ear and the other delivered to the right ear). Stimuli included unprocessed speech and four 16-channel sine-vocoder simulations with different interaural mismatches (0, 1, and 2 mm). SRM was calculated as the difference in SRTs between the diotic and dichotic listening conditions. RESULTS With unprocessed speech, SRTs were 0.3 and -18.0 dB for the diotic and dichotic maskers, respectively. 
For the spectrally degraded speech with mild tonotopic mismatch and no interaural mismatch, SRTs were 5.6 and -2.0 dB for the diotic and dichotic maskers, respectively. When the tonotopic mismatch increased in both ears, SRTs worsened to 8.9 and 2.4 dB for the diotic and dichotic maskers, respectively. When the two ears had different tonotopic mismatch (i.e., when there was interaural mismatch), the performance drop in SRTs was much larger for the dichotic than for the diotic masker. The largest SRM was observed with unprocessed speech (18.3 dB). With the CI simulations, SRM was significantly reduced to 7.6 dB even with mild tonotopic mismatch but no interaural mismatch; SRM was further reduced with increasing interaural mismatch. CONCLUSIONS The results demonstrate that frequency resolution, tonotopic mismatch, and interaural mismatch have differential effects on speech understanding and SRM in simulations of bilateral CIs. Minimizing interaural mismatch may be critical to optimize binaural benefits and improve CI performance for competing speech, a typical listening environment. SRM (the difference in SRTs between diotic and dichotic maskers) may be a useful clinical tool to assess interaural frequency mismatch in bilateral CI users and to evaluate the benefits of optimization methods that minimize interaural mismatch.
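A sine-carrier vocoder of the kind used in these simulations can be sketched with numpy alone; an interaural mismatch can be mimicked by scaling the carrier frequencies for one ear. Brick-wall FFT bands, Hilbert envelopes, and geometric-centre carriers are simplifications of this sketch, not the study's exact analysis filters:

```python
import numpy as np

def sine_vocoder(x, fs, n_channels=16, f_lo=200.0, f_hi=8000.0, carrier_scale=1.0):
    """Sine vocoder: log-spaced analysis bands, Hilbert envelopes,
    sine carriers at (optionally shifted) band centres."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    t = np.arange(n) / fs
    X = np.fft.rfft(x)
    # spectral weights for the analytic-signal (Hilbert) computation
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        # brick-wall analysis band via FFT masking
        band = np.fft.irfft(np.where((freqs >= lo) & (freqs < hi), X, 0), n=n)
        env = np.abs(np.fft.ifft(np.fft.fft(band) * h))  # Hilbert envelope
        fc = np.sqrt(lo * hi) * carrier_scale  # geometric band centre, shifted to mimic mismatch
        out += env * np.sin(2 * np.pi * fc * t)
    return out
```

Processing the same input with `carrier_scale=1.0` for one ear and a scale corresponding to a basal place shift for the other gives a crude simulation of interaural mismatch.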
Affiliation(s)
- Kevin Xu
- Department of Head and Neck Surgery, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California, USA

14
Yun D, Jennings TR, Kidd G, Goupell MJ. Benefits of triple acoustic beamforming during speech-on-speech masking and sound localization for bilateral cochlear-implant users. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2021; 149:3052. [PMID: 34241104 PMCID: PMC8102069 DOI: 10.1121/10.0003933] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/22/2020] [Revised: 03/03/2021] [Accepted: 03/06/2021] [Indexed: 05/30/2023]
Abstract
Bilateral cochlear-implant (CI) users struggle to understand speech in noisy environments despite receiving some spatial-hearing benefits. One potential solution is to provide acoustic beamforming. A headphone-based experiment was conducted to compare speech understanding under natural CI listening conditions and for two non-adaptive beamformers, one single beam and one binaural, called "triple beam," which provides an improved signal-to-noise ratio (beamforming benefit) and usable spatial cues by reintroducing interaural level differences. Speech reception thresholds (SRTs) for speech-on-speech masking were measured with target speech presented in front and two maskers in co-located or narrow/wide separations. Numerosity judgments and sound-localization performance also were measured. Natural spatial cues, single-beam, and triple-beam conditions were compared. For CI listeners, there was a negligible change in SRTs when comparing co-located to separated maskers for natural listening conditions. In contrast, there were 4.9- and 16.9-dB improvements in SRTs for the single beam and 3.5- and 12.3-dB improvements for the triple beam (narrow and wide separations, respectively). Similar results were found for normal-hearing listeners presented with vocoded stimuli. The single beam improved speech-on-speech masking performance but yielded poor sound localization. The triple beam improved both speech-on-speech masking performance (albeit less than the single beam) and sound localization. Thus, the triple beam was the most versatile across multiple spatial-hearing domains.
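Non-adaptive acoustic beamformers are often built on delay-and-sum principles: steering delays align signals from the look direction before summation, so that sound from that direction adds coherently while sound from other directions is attenuated. The sketch below is a generic fixed delay-and-sum beamformer, not the single- or triple-beam design evaluated in the study; the linear array geometry and far-field assumption are mine:

```python
import numpy as np

def delay_and_sum(mic_signals, fs, mic_positions_m, look_direction_deg, c=343.0):
    """Fixed delay-and-sum beamformer for a linear array (far-field model).
    mic_signals: array of shape (n_mics, n_samples);
    mic_positions_m: mic positions along the array axis in metres."""
    mic_signals = np.asarray(mic_signals, dtype=float)
    n = mic_signals.shape[1]
    theta = np.deg2rad(look_direction_deg)
    # per-mic steering delays for a plane wave from the look direction
    delays = np.asarray(mic_positions_m, dtype=float) * np.sin(theta) / c
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    out = np.zeros(n)
    for sig, d in zip(mic_signals, delays):
        # apply the fractional steering delay in the frequency domain
        out += np.fft.irfft(np.fft.rfft(sig) * np.exp(-2j * np.pi * freqs * d), n=n)
    return out / len(mic_signals)
```

For a broadside source (0°) the steering delays are zero and the beamformer simply averages the microphones; off-axis sources arrive misaligned and partially cancel.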
Affiliation(s)
- David Yun
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Todd R Jennings
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
- Gerald Kidd
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
- Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA

15
Willis S, Xu K, Thomas M, Gopen Q, Ishiyama A, Galvin JJ, Fu QJ. Bilateral and bimodal cochlear implant listeners can segregate competing speech using talker sex cues, but not spatial cues. JASA EXPRESS LETTERS 2021; 1:014401. [PMID: 33521793 PMCID: PMC7814501 DOI: 10.1121/10.0003049] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/19/2020] [Accepted: 10/05/2020] [Indexed: 05/30/2023]
Abstract
Cochlear implant (CI) users have greater difficulty perceiving talker sex and spatial cues than do normal-hearing (NH) listeners. The present study measured recognition of target sentences in the presence of two co-located or spatially separated speech maskers in NH, bilateral CI, and bimodal CI listeners; the masker sex was either the same as or different from that of the target. NH listeners demonstrated a large masking release with masker sex and/or spatial cues. For CI listeners, significant masking release was observed with masker sex cues, but not with spatial cues, at least for the symmetrically placed maskers and listening task used in this study.
Affiliation(s)
- Shelby Willis
- Department of Head and Neck Surgery, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California 90095, USA
- Kevin Xu
- Department of Head and Neck Surgery, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California 90095, USA
- Mathew Thomas
- Department of Head and Neck Surgery, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California 90095, USA
- Quinton Gopen
- Department of Head and Neck Surgery, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California 90095, USA
- Akira Ishiyama
- Department of Head and Neck Surgery, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California 90095, USA
- John J Galvin
- House Ear Institute, 2100 West Third Street, Los Angeles, California 90057, USA
- Qian-Jie Fu
- Department of Head and Neck Surgery, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California 90095, USA

16
D'Onofrio K, Richards V, Gifford R. Spatial Release From Informational and Energetic Masking in Bimodal and Bilateral Cochlear Implant Users. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2020; 63:3816-3833. [PMID: 33049147 PMCID: PMC8582905 DOI: 10.1044/2020_jslhr-20-00044] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/04/2020] [Revised: 04/27/2020] [Accepted: 07/24/2020] [Indexed: 06/11/2023]
Abstract
Purpose Spatially separating speech and background noise improves speech understanding in normal-hearing listeners, an effect referred to as spatial release from masking (SRM). In cochlear implant (CI) users, SRM has often been demonstrated using asymmetric noise configurations, which maximize benefit from head shadow and the potential availability of binaural cues. In contrast, SRM in symmetrical configurations has been minimal to absent in CI users. We examined the interaction between two types of maskers (informational and energetic) and SRM in bimodal and bilateral CI users. We hypothesized that SRM would be absent or "negative" with symmetrically separated noise maskers. Second, we hypothesized that bimodal listeners would exhibit greater release from informational masking due to access to acoustic information in the non-CI ear. Method Participants included 10 bimodal and 10 bilateral CI users. Speech understanding in noise was tested in 24 conditions: 3 spatial configurations (S0N0, S0N45&315, S0N90&270) × 2 masker types (speech, signal-correlated noise) × 2 listening configurations (best-aided, CI-alone) × 2 talker gender conditions (different-gender, same-gender). Results In support of our first hypothesis, both groups exhibited negative SRM with increasing spatial separation. Contrary to our second hypothesis, both groups exhibited similar magnitudes of release from informational masking. The magnitude of release was greater for bimodal listeners, though this difference failed to reach statistical significance. 
This suggests that bimodal listeners may be at least marginally more susceptible to informational masking than bilateral CI users, though further research is warranted.
Affiliation(s)
- Kristen D'Onofrio
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- René Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN

17
Misurelli SM, Goupell MJ, Burg EA, Jocewicz R, Kan A, Litovsky RY. Auditory Attention and Spatial Unmasking in Children With Cochlear Implants. Trends Hear 2020; 24:2331216520946983. [PMID: 32812515 PMCID: PMC7446264 DOI: 10.1177/2331216520946983] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/02/2022] Open
Abstract
The ability to attend to target speech in background noise is an important skill, particularly for children who spend many hours in noisy environments. Intelligibility improves as a result of spatial or binaural unmasking in the free-field for normal-hearing children; however, children who use bilateral cochlear implants (BiCIs) demonstrate little benefit in similar situations. It was hypothesized that poor auditory attention abilities might explain the lack of unmasking observed in children with BiCIs. Target and interferer speech stimuli were presented to either or both ears of BiCI participants via their clinical processors. Speech reception thresholds remained low when the target and interferer were in opposite ears, but participants showed no binaural unmasking when the interferer was presented to both ears and the target only to one ear. These results demonstrate that, in the most extreme cases of stimulus separation, children with BiCIs can ignore an interferer and attend to target speech, but binaural unmasking is weak or absent. It appears that children with BiCIs mostly experience poor encoding of binaural cues rather than deficits in the ability to selectively attend to target speech.
Affiliation(s)
- Sara M Misurelli
- Waisman Center, University of Wisconsin-Madison; Department of Surgery, Division of Otolaryngology, University of Wisconsin School of Medicine and Public Health
- Alan Kan
- Waisman Center, University of Wisconsin-Madison; School of Engineering, Macquarie University, Sydney, Australia
- Ruth Y Litovsky
- Waisman Center, University of Wisconsin-Madison; Department of Surgery, Division of Otolaryngology, University of Wisconsin School of Medicine and Public Health

18
Zhang J, Wang X, Wang NY, Fu X, Gan T, Galvin JJ, Willis S, Xu K, Thomas M, Fu QJ. Tonal Language Speakers Are Better Able to Segregate Competing Speech According to Talker Sex Differences. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2020; 63:2801-2810. [PMID: 32692939 PMCID: PMC7872724 DOI: 10.1044/2020_jslhr-19-00421] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/29/2019] [Revised: 04/01/2020] [Accepted: 05/15/2020] [Indexed: 06/01/2023]
Abstract
Purpose The aim of this study was to compare release from masking (RM) between Mandarin-speaking and English-speaking listeners with normal hearing for competing speech when target-masker sex cues, spatial cues, or both were available. Method Speech recognition thresholds (SRTs) for competing speech were measured in 21 Mandarin-speaking and 15 English-speaking adults with normal hearing using a modified coordinate response measure task. SRTs were measured for target sentences produced by a male talker in the presence of two masker talkers (different male talkers or female talkers). The target sentence was always presented directly in front of the listener, and the maskers were either colocated with the target or were spatially separated from the target (+90°, -90°). Stimuli were presented via headphones and were virtually spatialized using head-related transfer functions. Three masker conditions were used to measure RM relative to the baseline condition: (a) talker sex cues, (b) spatial cues, or (c) combined talker sex and spatial cues. Results The results showed large amounts of RM according to talker sex and/or spatial cues. There was no significant difference in SRTs between Chinese and English listeners for the baseline condition, where no talker sex or spatial cues were available. Furthermore, there was no significant difference in RM between Chinese and English listeners when spatial cues were available. However, RM was significantly larger for Chinese listeners when talker sex cues or combined talker sex and spatial cues were available. Conclusion Listeners who speak a tonal language such as Mandarin Chinese may be able to take greater advantage of talker sex cues than listeners who do not speak a tonal language.
Affiliation(s)
- Juan Zhang
- Department of Otolaryngology, Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, China
- Xing Wang
- Department of Otolaryngology, Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, China
- Ning-yu Wang
- Department of Otolaryngology, Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, China
- Xin Fu
- Department of Otolaryngology, Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, China
- Tian Gan
- Department of Otolaryngology, Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, China
- Shelby Willis
- Department of Head and Neck Surgery, David Geffen School of Medicine, University of California, Los Angeles
- Kevin Xu
- Department of Head and Neck Surgery, David Geffen School of Medicine, University of California, Los Angeles
- Mathew Thomas
- Department of Head and Neck Surgery, David Geffen School of Medicine, University of California, Los Angeles
- Qian-Jie Fu
- Department of Head and Neck Surgery, David Geffen School of Medicine, University of California, Los Angeles

19
Biberger T, Ewert SD. The effect of room acoustical parameters on speech reception thresholds and spatial release from masking. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2019; 146:2188. [PMID: 31671969 DOI: 10.1121/1.5126694] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/18/2019] [Accepted: 08/30/2019] [Indexed: 06/10/2023]
Abstract
In daily life, speech intelligibility is affected by masking caused by interferers and by reverberation. For a frontal target speaker and two interfering sources symmetrically placed to either side, spatial release from masking (SRM) is observed in comparison to frontal interferers. In this case, the auditory system can make use of temporally fluctuating interaural time/phase and level differences, promoting binaural unmasking (BU) and better-ear glimpsing (BEG). Reverberation affects the waveforms of the target and maskers, and the interaural differences, depending on the spatial configuration and on the room acoustical properties. In this study, the effect of room acoustics, temporal structure of the interferers, and target-masker positions on speech reception thresholds and SRM was assessed. The results were compared to an optimal better-ear glimpsing strategy to help disentangle energetic masking (including effects of BU and BEG) from informational masking (IM). In anechoic and moderately reverberant conditions, BU and BEG contributed to SRM of fluctuating speech-like maskers, while BU did not contribute in highly reverberant conditions. In highly reverberant rooms an SRM of up to 3 dB was observed for speech maskers, including effects of release from IM based on binaural cues.
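An "optimal better-ear glimpsing" reference strategy of the kind used for comparison selects, in each time-frequency tile, the ear with the better target-to-masker ratio. A minimal sketch of that idea (the window length, hop size, Hann analysis, and broadband-SNR summary are my assumptions; the authors' implementation may differ):

```python
import numpy as np

def tile_power(x, win=512, hop=256):
    """Per-tile power from a Hann-windowed short-time FFT."""
    frames = [x[i:i + win] for i in range(0, len(x) - win + 1, hop)]
    spec = np.fft.rfft(np.array(frames) * np.hanning(win), axis=1)
    return np.abs(spec) ** 2

def better_ear_snr_db(target_l, target_r, masker_l, masker_r, win=512, hop=256):
    """Effective broadband SNR when each time-frequency tile is taken from the
    ear with the better target-to-masker power ratio (ideal better-ear glimpsing)."""
    tl, tr = tile_power(target_l, win, hop), tile_power(target_r, win, hop)
    ml, mr = tile_power(masker_l, win, hop), tile_power(masker_r, win, hop)
    eps = 1e-12
    pick_left = tl / (ml + eps) >= tr / (mr + eps)  # tile-wise better ear
    t_best = np.where(pick_left, tl, tr).sum()
    m_best = np.where(pick_left, ml, mr).sum()
    return 10 * np.log10((t_best + eps) / (m_best + eps))
```

When one ear has a uniformly better target-to-masker ratio, the strategy reduces to listening with that ear; its benefit appears when the better ear changes across time-frequency tiles, as with fluctuating maskers.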
Affiliation(s)
- Thomas Biberger
- Medizinische Physik and Cluster of Excellence Hearing4all, Universität Oldenburg, 26111 Oldenburg, Germany
- Stephan D Ewert
- Medizinische Physik and Cluster of Excellence Hearing4all, Universität Oldenburg, 26111 Oldenburg, Germany

20
Li N, Wang S, Wang X, Xu L. Contributions of lexical tone to Mandarin sentence recognition in hearing-impaired listeners under noisy conditions. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2019; 146:EL99. [PMID: 31472569 PMCID: PMC6909998 DOI: 10.1121/1.5120543] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/29/2019] [Revised: 07/11/2019] [Accepted: 07/16/2019] [Indexed: 06/10/2023]
Abstract
Mandarin sentence recognition using natural-tone and flat-tone sentences was tested in 22 subjects with sensorineural hearing loss (SNHL) and 25 listeners with normal hearing (NH) in quiet, speech-shaped noise, and two-talker-babble conditions. While flat tones had little effect on sentence recognition in the NH listeners when the signal-to-noise ratio (SNR) was ≥0 dB, the SNHL listeners showed decreases in flat-tone-sentence recognition in quiet and at +5-dB SNR. This decline in performance was correlated with their degree of hearing loss. Lexical tone contributes greatly to sentence recognition in hearing-impaired listeners in both quiet and noisy listening conditions.
Affiliation(s)
- Nan Li
- Beijing Tongren Hospital, Beijing Institute of Otolaryngology, Capital Medical University, Beijing
- Shuo Wang
- Beijing Tongren Hospital, Beijing Institute of Otolaryngology, Capital Medical University, Beijing
- Xianhui Wang
- Communication Sciences and Disorders, Ohio University, Athens, Ohio 45701
- Li Xu
- Communication Sciences and Disorders, Ohio University, Athens, Ohio 45701

21
Eurich B, Klenzner T, Oehler M. Impact of room acoustic parameters on speech and music perception among participants with cochlear implants. Hear Res 2019; 377:122-132. [PMID: 30933704 DOI: 10.1016/j.heares.2019.03.012] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/19/2018] [Revised: 03/09/2019] [Accepted: 03/13/2019] [Indexed: 11/30/2022]
Abstract
OBJECTIVES Besides numerous other factors, listening experience with cochlear implants is substantially impaired by room acoustics. Even for persons without hearing impairment, the perception of auditory scenes, for example, concerning speech intelligibility, acoustic quality or audibility, is considerably influenced by room acoustics. For CI users, complex listening environments are usually associated with severe degradation. The aim of the present study was to determine room acoustic criteria that particularly influence speech pleasantness for CI users. DESIGN Accordingly, speech material from the Oldenburg Sentence Test (Oldenburger Satztest, OLSA) as well as basic music material (major and minor triads) were auralized using the software Auratorium, which allows auralization of simulated rooms. The constructed rooms for speech stimuli were based on the standard DIN 18041:2016-03 concerning acoustic quality in rooms, the binding standard referred to by room acoustic consultants in Germany, which also includes specifications for inclusive applications in schools. For the music perception tests, two typical concert halls of different sizes were modelled. The auralized test stimuli were unilaterally presented to 10 CI users via their auxiliary input as well as to 18 participants with typical hearing via headphones (control group). Speech pleasantness was evaluated using modified MUSHRA tests. Concerning music perception, chord discrimination was tested using paired comparisons. RESULTS A strong preference by CI users for small source-to-listener distances was found, but no significant preference for room acoustic attenuation exceeding the level recommended for inclusive applications in schools. Analyses of the energy-time structures suggested that a dense concentration of early reflections has a beneficial impact on CI listeners' pleasantness ratings. 
Chords were discriminated most consistently in the absence of any room acoustic influence, whereas any room acoustic influence led to performance close to chance level, probably due to spectral smearing caused by reverberation. CONCLUSIONS These results suggest that, in terms of speech pleasantness for CI users, source-to-listener distance is a more influential parameter than room attenuation beyond the German standard recommendation. Reflections from which CI users can benefit seem to occur much earlier than those from which NH listeners benefit. Future studies on chord discrimination in relation to room acoustics are needed.
Affiliation(s)
- Bernhard Eurich
- Institute for Sound and Vibration Engineering, University of Applied Sciences Düsseldorf, Düsseldorf, Germany
- Thomas Klenzner
- Hörzentrum, Dept. Otorhinolaryngology, Head & Neck Surgery, University Hospital Düsseldorf, Heinrich-Heine-Universität, Düsseldorf, Germany
- Michael Oehler
- Music & Media Technology Department, Osnabrück University, Osnabrück, Germany

22
Williges B, Jürgens T, Hu H, Dietz M. Coherent Coding of Enhanced Interaural Cues Improves Sound Localization in Noise With Bilateral Cochlear Implants. Trends Hear 2019; 22:2331216518781746. [PMID: 29956589 PMCID: PMC6048749 DOI: 10.1177/2331216518781746] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
Bilateral cochlear implant (BCI) users have only very limited spatial hearing abilities. Speech coding strategies transmit interaural level differences (ILDs), but in a distorted manner. Interaural time difference (ITD) information transmission is even more limited. With these cues, most BCI users can coarsely localize a single source in quiet, but performance quickly declines in the presence of other sounds. This proof-of-concept study presents a novel signal processing algorithm specific to BCIs, with the aim of improving sound localization in noise. The core part of the BCI algorithm duplicates a monophonic electrode pulse pattern and applies quasistationary natural or artificial ITDs or ILDs based on the estimated direction of the dominant source. Three experiments were conducted to evaluate different algorithm variants: Experiment 1 tested whether ITD transmission alone enables BCI subjects to lateralize speech. Results showed that six out of nine BCI subjects were able to lateralize intelligible speech in quiet solely based on ITDs. Experiments 2 and 3 assessed azimuthal angle discrimination in noise with natural or modified ILDs and ITDs. Angle discrimination for frontal locations was possible with all variants, including the pure ITD case, but for lateral reference angles it was only possible with a linearized ILD mapping. Speech intelligibility in noise, limitations, and challenges of this interaural cue transmission approach are discussed alongside suggestions for modifying and further improving the BCI algorithm.
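The core duplication step described above can be illustrated in a few lines: copy the monophonic pulse pattern to both ears and impose the desired interaural differences. The sign conventions and function name below are illustrative, not taken from the published algorithm:

```python
import numpy as np

def apply_interaural_cues(pulse_times_s, pulse_amps, itd_s=0.0, ild_db=0.0):
    """Duplicate a monophonic CI pulse pattern and impose quasistationary
    interaural cues. By convention here, positive itd_s delays the left ear
    and positive ild_db attenuates it, both favouring the right side."""
    times = np.asarray(pulse_times_s, dtype=float)
    amps = np.asarray(pulse_amps, dtype=float)
    left = (times + itd_s, amps * 10.0 ** (-ild_db / 20.0))
    right = (times.copy(), amps.copy())
    return left, right
```

Because both ears receive the same pulse pattern apart from the imposed offset and gain, the interaural cues stay coherent across electrodes, which is the point of the duplication approach.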
Affiliation(s)
- Ben Williges
- Medizinische Physik and Cluster of Excellence "Hearing4all," Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Tim Jürgens
- Medizinische Physik and Cluster of Excellence "Hearing4all," Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; Institute of Acoustics, University of Applied Sciences Lübeck, Lübeck, Germany
- Hongmei Hu
- Medizinische Physik and Cluster of Excellence "Hearing4all," Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Mathias Dietz
- Medizinische Physik and Cluster of Excellence "Hearing4all," Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; National Centre for Audiology, School of Communication Sciences and Disorders, Western University, London, Ontario, Canada