1.
Dolhopiatenko H, Segovia-Martinez M, Nogueira W. The temporal mismatch across listening sides affects cortical auditory evoked responses in normal hearing listeners and cochlear implant users with contralateral acoustic hearing. Hear Res 2024; 451:109088. PMID: 39032483. DOI: 10.1016/j.heares.2024.109088.
Abstract
Combining a cochlear implant (CI) with contralateral acoustic hearing typically enhances speech understanding, although this improvement varies among CI users and can even turn into an interference effect. This variability may be associated with the effectiveness of the integration between electric and acoustic stimulation, which might be affected by the temporal mismatch between the two listening sides. Finding methods to compensate for the temporal mismatch might contribute to the optimal adjustment of bimodal devices and to improved hearing in CI users with contralateral acoustic hearing. The current study investigates cortical auditory evoked potentials (CAEPs) in normal hearing (NH) listeners and CI users with contralateral acoustic hearing. In NH listeners, the amplitude of the N1 peak and the maximum phase locking value (PLV) were analyzed under monaural, binaural, and binaural temporally mismatched conditions. In CI users, CAEPs were measured when listening with the CI only (CIS_only), acoustically only (AS_only), and with both sides together (CIS+AS). When listening with CIS+AS, various interaural delays were introduced between the electric and acoustic stimuli. In NH listeners, interaural temporal mismatch resulted in decreased N1 amplitude and PLV; moreover, the PLV appears to be the more sensitive measure of the integration of information between the two listening sides. CI users showed differing N1 latencies between the AS_only and CIS_only listening conditions, with increased N1 amplitude when the temporal mismatch was compensated. A tendency towards increased PLV was also observed, although to a lesser extent than in NH listeners, suggesting limited integration between electric and acoustic stimulation. This work highlights the potential of CAEP measurements to investigate cortical processing of information from the two listening sides in NH listeners and bimodal CI users.
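The phase locking value used in this study quantifies how consistently two signals hold a fixed phase relationship over time. As an illustration only (the study computes PLV between stimulus and EEG, not between two toy signals as here), a minimal sketch of the standard PLV formula via the Hilbert transform:

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """PLV = |mean(exp(i * (phi_x - phi_y)))| over time, with
    instantaneous phases taken from the analytic (Hilbert) signal.
    1 = perfectly constant phase relation, ~0 = unrelated phases."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

# Toy check: a tone is phase-locked to a phase-shifted copy of itself,
# but not to unrelated noise.
fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
tone = np.sin(2 * np.pi * 10 * t)
shifted = np.sin(2 * np.pi * 10 * t - 0.3)   # constant phase lag
noise = np.random.default_rng(0).standard_normal(t.size)

print(phase_locking_value(tone, shifted))  # close to 1
print(phase_locking_value(tone, noise))    # much smaller
```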
Affiliation(s)
- Hanna Dolhopiatenko
- Medical University Hannover, Cluster of Excellence 'Hearing4all', Hannover, Germany
- Waldo Nogueira
- Medical University Hannover, Cluster of Excellence 'Hearing4all', Hannover, Germany.
2.
Dolhopiatenko H, Nogueira W. Selective attention decoding in bimodal cochlear implant users. Front Neurosci 2023; 16:1057605. PMID: 36711138. PMCID: PMC9874229. DOI: 10.3389/fnins.2022.1057605.
Abstract
The growing group of cochlear implant (CI) users includes subjects with preserved acoustic hearing on the side opposite to the CI. Using both listening sides results in improved speech perception compared to listening with one side alone. However, large variability in the measured benefit is observed. This variability may be associated with how speech is integrated across the electric and acoustic stimulation modalities. However, there is a lack of established methods to assess speech integration between electric and acoustic stimulation, and consequently to adequately program the devices. Moreover, existing methods either provide no information about the underlying physiological mechanisms of this integration or are based on simple stimuli that are difficult to relate to speech integration. Electroencephalography (EEG) recorded to continuous speech is promising as an objective measure of speech perception; however, its application in CIs is challenging because it is influenced by the electrical artifact introduced by these devices. For this reason, the main goal of this work is to investigate a possible electrophysiological measure of speech integration between electric and acoustic stimulation in bimodal CI users. For this purpose, a selective attention decoding paradigm was designed and validated in bimodal CI users. The current study included behavioral and electrophysiological measures. The behavioral measure consisted of a speech understanding test in which subjects repeated words from a target speaker in the presence of a competing voice, listening with the CI side (CIS) only, with the acoustic side (AS) only, or with both listening sides (CIS+AS). Electrophysiological measures included cortical auditory evoked potentials (CAEPs) and selective attention decoding through EEG. CAEPs were recorded to broadband stimuli to confirm the feasibility of recording cortical responses in the CIS only, AS only, and CIS+AS listening modes.
In the selective attention decoding paradigm, a co-located target and a competing speech stream were presented to the subjects using the three listening modes (CIS only, AS only, and CIS+AS). The main hypothesis of the current study is that selective attention can be decoded in CI users despite the presence of the CI electrical artifact. If selective attention decoding improves when electric and acoustic stimulation are combined, relative to electric stimulation alone, the hypothesis can be confirmed. No significant difference in behavioral speech understanding was found between the CIS+AS and AS only listening modes, mainly due to the ceiling effect observed with these two modes. The main finding of the current study is that selective attention can be decoded in CI users even when a continuous artifact is present. Moreover, an amplitude reduction of the forward temporal response function (TRF) obtained from selective attention decoding was observed when listening with CIS+AS compared to AS only. Further studies are required to validate selective attention decoding as an electrophysiological measure of electric-acoustic speech integration.
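Selective attention decoding of this kind is commonly implemented as a backward model: a regularized linear regression that reconstructs the attended speech envelope from time-lagged EEG, with attention then inferred from which stream's envelope is reconstructed best. A hypothetical sketch of that general approach (not the authors' exact pipeline; all names, lags, and parameters here are illustrative):

```python
import numpy as np

def ridge_decoder(eeg, envelope, lags, alpha=1.0):
    """Fit a backward (stimulus-reconstruction) decoder with ridge
    regression: time-lagged EEG channels -> attended speech envelope."""
    # One block of channels per lag in the design matrix.
    X = np.hstack([np.roll(eeg, lag, axis=0) for lag in lags])
    XtX = X.T @ X + alpha * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ envelope)

# Toy usage: one EEG channel linearly encodes the envelope with a
# 5-sample latency; a second channel is pure noise.
rng = np.random.default_rng(1)
env = rng.standard_normal(2000)
eeg = np.column_stack([np.roll(env, 5), rng.standard_normal(2000)])

lags = range(-9, 1)               # EEG samples following the stimulus
w = ridge_decoder(eeg, env, lags, alpha=1e-3)
recon = np.hstack([np.roll(eeg, lag, axis=0) for lag in lags]) @ w
print(np.corrcoef(recon, env)[0, 1])   # close to 1: envelope recovered
```

In practice the decoder is trained on the attended stream and evaluated by comparing reconstruction correlations for attended versus ignored envelopes.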
3.
Jürgens T, Wesarg T, Oetting D, Jung L, Williges B. Spatial speech-in-noise performance in simulated single-sided deaf and bimodal cochlear implant users in comparison with real patients. Int J Audiol 2023; 62:30-43. PMID: 34962428. DOI: 10.1080/14992027.2021.2015633.
Abstract
OBJECTIVE: Speech reception thresholds (SRTs) in spatial scenarios were measured in simulated cochlear implant (CI) listeners with either contralateral normal hearing or aided contralateral hearing impairment (bimodal), and compared to the SRTs of real patients, who had been measured using the exact same paradigm, to assess the goodness of the simulation. DESIGN: CI listening was simulated on one side using a vocoder incorporating actual CI signal processing and physiological details of electric stimulation. Unprocessed signals or a simulation of aided moderate or profound hearing impairment was used contralaterally. Three spatial speech-in-noise scenarios were tested using virtual acoustics to assess spatial release from masking (SRM) and combined benefit. STUDY SAMPLE: Eleven normal-hearing listeners participated in the experiment. RESULTS: For contralateral normal and aided moderately impaired hearing, bilaterally assessed SRTs were not statistically different from the unilateral SRTs of the better ear, indicating "better-ear listening". Combined benefit was only found for profound contralateral hearing impairment. As in patients, SRM was highest for contralateral normal hearing and decreased systematically with more severe simulated impairment. Comparison with actual patients showed good reproduction of SRTs, SRM, and better-ear listening. CONCLUSIONS: The simulations reproduced better-ear listening as in patients and suggest that combined benefit in spatial scenes predominantly occurs when both ears show poor speech-in-noise performance.
Affiliation(s)
- Tim Jürgens
- Institute of Acoustics, University of Applied Sciences Lübeck, Lübeck, Germany
- Medical Physics and Cluster of Excellence "Hearing4all", Carl-von-Ossietzky University, Oldenburg, Germany
- Thomas Wesarg
- Faculty of Medicine, Department of Otorhinolaryngology - Head and Neck Surgery, Medical Center, University of Freiburg, Freiburg, Germany
- Lorenz Jung
- Faculty of Medicine, Department of Otorhinolaryngology - Head and Neck Surgery, Medical Center, University of Freiburg, Freiburg, Germany
- Ben Williges
- Medical Physics and Cluster of Excellence "Hearing4all", Carl-von-Ossietzky University, Oldenburg, Germany
- SOUND Lab, Cambridge Hearing Group, Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK
4.
Modelling speech reception thresholds and their improvements due to spatial noise reduction algorithms in bimodal cochlear implant users. Hear Res 2022; 420:108507. PMID: 35484022. PMCID: PMC9188268. DOI: 10.1016/j.heares.2022.108507.
Abstract
This paper compares two modelling approaches for predicting the speech recognition ability of bimodal CI users and the benefit of using beamformers. The approaches differ in computational complexity and fitting requirements. A complex cafeteria scenario with three localized single-noise-source configurations and a diffuse multi-talker babble noise is used. The automatic-speech-recognizer-based approach is more accurate across the different spatial scenarios and noise types and requires less fitting than the statistical modelling approach.
Spatial noise reduction algorithms ("beamformers") can considerably improve speech reception thresholds (SRTs) for bimodal cochlear implant (CI) users. The goal of this study was to model SRTs and the SRT benefit due to beamformers for bimodal CI users. Two existing model approaches, varying in computational complexity and binaural processing assumptions, were compared: (i) the framework of auditory discrimination experiments (FADE) and (ii) the binaural speech intelligibility model (BSIM), both with CI and aided hearing-impaired front-ends. The exact same acoustic scenarios and open-access beamformers as in the comparison clinical study of Zedan et al. (2021) were used to quantify goodness of prediction. FADE was capable of modeling SRTs ab initio, i.e., no calibration of the model was necessary to achieve high correlations and low root-mean-square errors (RMSE) with both the measured SRTs (r = 0.85, RMSE = 2.8 dB) and the measured SRT benefits (r = 0.96). BSIM achieved somewhat poorer predictions of both the measured SRTs (r = 0.78, RMSE = 6.7 dB) and the measured SRT benefits (r = 0.91), and needs to be calibrated to match the average SRT in one condition. The greatest deviations in BSIM's predictions were observed in diffuse multi-talker babble noise and were not found with FADE. The SRT-benefit predictions of both models were similar to the instrumental signal-to-noise ratio (iSNR) improvements due to the beamformers. This indicates that FADE is preferable for modeling absolute SRTs. However, for predicting the SRT benefit due to spatial noise reduction algorithms in bimodal CI users, the average iSNR is a much simpler approach with similar performance.
5.
Angermeier J, Hemmert W, Zirn S. Measuring and Modeling Cue Dependent Spatial Release from Masking in the Presence of Typical Delays in the Treatment of Hearing Loss. Trends Hear 2022; 26:23312165221094202. PMID: 35473484. PMCID: PMC9052821. DOI: 10.1177/23312165221094202.
Abstract
In asymmetric treatment of hearing loss, the processing latencies of the two modalities typically differ. This often alters the reference interaural time difference (ITD) (i.e., the ITD at 0° azimuth) by several milliseconds. Such changes in reference ITD have been shown to influence sound source localization in bimodal listeners provided with a hearing aid (HA) in one ear and a cochlear implant (CI) in the contralateral ear. In this study, the effect of changes in reference ITD on speech understanding, especially spatial release from masking (SRM), was explored in normal-hearing subjects. Speech reception thresholds (SRTs) were measured in ten normal-hearing subjects for reference ITDs of 0, 1.75, 3.5, 5.25 and 7 ms with spatially collocated (S0N0) and spatially separated (S0N90) sound sources. Further, the cues for separation of target and masker were manipulated to measure the effect of a reference ITD on unmasking by A) ITDs and interaural level differences (ILDs), B) ITDs only, and C) ILDs only. A blind equalization-cancellation (EC) model was applied to simulate all measured conditions. SRM decreased significantly in conditions A) and B) when the reference ITD was increased: in condition A) from 8.8 dB SNR on average at 0 ms reference ITD to 4.6 dB at 7 ms, and in condition B) from 5.5 dB to 1.1 dB. In condition C) no significant effect was found. These results were accurately predicted by the applied EC model. The outcomes show that interaural processing latency differences should be considered in the asymmetric treatment of hearing loss.
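SRM as reported above is simply the SRT difference between the collocated and the spatially separated configuration. A small illustration (the individual SRT values below are invented; only the resulting SRM magnitudes match the averages reported for condition A):

```python
def srm_db(srt_colocated_db, srt_separated_db):
    """Spatial release from masking: the SRT improvement gained by
    moving the masker from the target's position (S0N0) to the
    side (S0N90).  Lower SRT = better performance, so a positive
    SRM means separation helped."""
    return srt_colocated_db - srt_separated_db

# Hypothetical SRT pairs whose differences match the reported
# condition-A averages:
print(srm_db(-2.0, -10.8))   # 8.8 dB SRM at 0 ms reference ITD
print(srm_db(-2.0, -6.6))    # 4.6 dB SRM at 7 ms reference ITD
```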
Affiliation(s)
- Julian Angermeier
- Peter Osypka Institute of Medical Engineering, Faculty of Electrical Engineering, Medical Engineering and Computer Sciences, University of Applied Sciences Offenburg
- Bio-Inspired Information Processing, Munich Institute of Biomedical Engineering, Technical University of Munich
- Werner Hemmert
- Bio-Inspired Information Processing, Munich Institute of Biomedical Engineering, Technical University of Munich
- Stefan Zirn
- Peter Osypka Institute of Medical Engineering, Faculty of Electrical Engineering, Medical Engineering and Computer Sciences, University of Applied Sciences Offenburg
6.
Stronks HC, Briaire J, Frijns J. Beamforming and Single-Microphone Noise Reduction: Effects on Signal-to-Noise Ratio and Speech Recognition of Bimodal Cochlear Implant Users. Trends Hear 2022; 26:23312165221112762. PMID: 35862265. PMCID: PMC9310275. DOI: 10.1177/23312165221112762.
Abstract
We investigated the effectiveness of three noise-reduction algorithms, namely an adaptive monaural beamformer (MB), a fixed binaural beamformer (BB), and a single-microphone stationary-noise reduction algorithm (SNRA), by assessing the speech reception threshold (SRT) in a group of 15 bimodal cochlear implant users. Speech was presented frontally towards the listener, and background noise was established as a homogeneous field of long-term speech-spectrum-shaped (LTSS) noise or 8-talker babble. We pursued four research questions: whether the benefits of beamforming on the SRT differ between LTSS noise and 8-talker babble; whether BB is more effective than MB; whether SNRA improves the SRT in LTSS noise; and whether the SRT benefits of MB and BB are comparable to their improvement of the signal-to-noise ratio (SNR). The results showed that MB and BB significantly improved SRTs by an average of 2.6 dB and 2.9 dB, respectively. These benefits did not differ statistically between noise types or between the two beamformers. By contrast, physical SNR improvements obtained with a manikin revealed a substantially greater benefit of BB (6.6 dB) than of MB (3.3 dB). SNRA did not significantly affect SRTs, either in omnidirectional microphone settings or in combination with MB and BB. We conclude that in the group of bimodal listeners tested, BB had no additional benefit on speech recognition over MB in homogeneous noise, despite BB's substantially larger benefit on the SNR. SNRA did not improve speech recognition.
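The gap between physical SNR benefit and speech-recognition benefit reported above can be framed against a textbook baseline: coherently summing M microphones for a frontal target in uncorrelated noise yields about 10*log10(M) dB of SNR gain. A toy sketch of that baseline for two microphones (idealized signals, not the actual MB/BB algorithms of the study):

```python
import numpy as np

def snr_db(signal, noise):
    """Power SNR in dB."""
    return 10 * np.log10(np.mean(signal**2) / np.mean(noise**2))

# A frontal target arrives at both microphones in phase, while diffuse
# noise is idealized here as uncorrelated between microphones.
# Summing the channels quadruples target power (+6 dB) but only
# doubles noise power (+3 dB): the classic ~3 dB delay-and-sum gain
# for two microphones.
rng = np.random.default_rng(2)
n = 20000
target = rng.standard_normal(n)
noise_left = rng.standard_normal(n)
noise_right = rng.standard_normal(n)

snr_single = snr_db(target, noise_left)
snr_summed = snr_db(target + target, noise_left + noise_right)
print(snr_summed - snr_single)   # close to 3 dB
```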
Affiliation(s)
- H Christiaan Stronks
- Department of Otorhinolaryngology - Head & Neck Surgery, Leiden University Medical Center, Leiden, The Netherlands
- Jeroen Briaire
- Department of Otorhinolaryngology - Head & Neck Surgery, Leiden University Medical Center, Leiden, The Netherlands
- Johan Frijns
- Department of Otorhinolaryngology - Head & Neck Surgery, Leiden University Medical Center, Leiden, The Netherlands
- Leiden Institute for Brain and Cognition, Leiden, The Netherlands
7.
Pieper SH, Hamze N, Brill S, Hochmuth S, Exter M, Polak M, Radeloff A, Buschermöhle M, Dietz M. Considerations for Fitting Cochlear Implants Bimodally and to the Single-Sided Deaf. Trends Hear 2022; 26:23312165221108259. PMID: 35726211. PMCID: PMC9218456. DOI: 10.1177/23312165221108259.
Abstract
When listening with a cochlear implant through one ear and acoustically through the other, binaural benefits and spatial hearing abilities are generally poorer than in other bilaterally stimulated configurations. With the working hypothesis that binaural neurons require interaurally matched inputs, we review causes for mismatch, their perceptual consequences, and experimental methods for mismatch measurements. The focus is on the three primary interaural dimensions of latency, frequency, and level. Often, the mismatch is not constant, but rather highly stimulus-dependent. We report on mismatch compensation strategies, taking into consideration the specific needs of the respective patient groups. Practical challenges typically faced by audiologists in the proposed fitting procedure are discussed. While improvement in certain areas (e.g., speaker localization) is definitely achievable, a more comprehensive mismatch compensation is a very ambitious endeavor. Even in the hypothetical ideal fitting case, performance is not expected to exceed that of a good bilateral cochlear implant user.
Affiliation(s)
- Sabrina H Pieper
- Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
- Cluster of Excellence Hearing4all, University of Oldenburg, Oldenburg, Germany
- Noura Hamze
- MED-EL Medical Electronics GmbH, Innsbruck, Austria
- Stefan Brill
- MED-EL Medical Electronics Germany GmbH, Starnberg, Germany
- Sabine Hochmuth
- Division of Otorhinolaryngology, University of Oldenburg, Oldenburg, Germany
- Mats Exter
- Cluster of Excellence Hearing4all, University of Oldenburg, Oldenburg, Germany
- Hörzentrum Oldenburg gGmbH, Oldenburg, Germany
- Marek Polak
- MED-EL Medical Electronics GmbH, Innsbruck, Austria
- Andreas Radeloff
- Cluster of Excellence Hearing4all, University of Oldenburg, Oldenburg, Germany
- Division of Otorhinolaryngology, University of Oldenburg, Oldenburg, Germany
- Research Center Neurosensory Science, University of Oldenburg, Oldenburg, Germany
- Mathias Dietz
- Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
- Cluster of Excellence Hearing4all, University of Oldenburg, Oldenburg, Germany
- Research Center Neurosensory Science, University of Oldenburg, Oldenburg, Germany
8.
Bakal TA, Milvae KD, Chen C, Goupell MJ. Head Shadow, Summation, and Squelch in Bilateral Cochlear-Implant Users With Linked Automatic Gain Controls. Trends Hear 2021; 25:23312165211018147. PMID: 34057387. PMCID: PMC8182628. DOI: 10.1177/23312165211018147.
Abstract
Speech understanding in noise is poorer in bilateral cochlear-implant (BICI) users compared to normal-hearing counterparts. Independent automatic gain controls (AGCs) may contribute to this because adjusting processor gain independently can reduce interaural level differences that BICI listeners rely on for bilateral benefits. Bilaterally linked AGCs may improve bilateral benefits by increasing the magnitude of interaural level differences. The effects of linked AGCs on bilateral benefits (summation, head shadow, and squelch) were measured in nine BICI users. Speech understanding for a target talker at 0° masked by a single talker at 0°, 90°, or −90° azimuth was assessed under headphones with sentences at five target-to-masker ratios. Research processors were used to manipulate AGC type (independent or linked) and test ear (left, right, or both). Sentence recall was measured in quiet to quantify individual interaural asymmetry in functional performance. The results showed that AGC type did not significantly change performance or bilateral benefits. Interaural functional asymmetries, however, interacted with ear such that greater summation and squelch benefit occurred when there was larger functional asymmetry, and interacted with interferer location such that smaller head shadow benefit occurred when there was larger functional asymmetry. The larger benefits for those with larger asymmetry were driven by improvements from adding a better-performing ear, rather than a true binaural-hearing benefit. In summary, linked AGCs did not significantly change bilateral benefits in cases of speech-on-speech masking with a single-talker masker, but there was also no strong detriment across a range of target-to-masker ratios, within a small and diverse BICI listener population.
Affiliation(s)
- Taylor A Bakal
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland, United States
- Kristina DeRoy Milvae
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland, United States
- Chen Chen
- Advanced Bionics LLC, Research and Technology, Valencia, California, United States
- Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland, United States
9.
Yun D, Jennings TR, Kidd G, Goupell MJ. Benefits of triple acoustic beamforming during speech-on-speech masking and sound localization for bilateral cochlear-implant users. J Acoust Soc Am 2021; 149:3052. PMID: 34241104. PMCID: PMC8102069. DOI: 10.1121/10.0003933.
Abstract
Bilateral cochlear-implant (CI) users struggle to understand speech in noisy environments despite receiving some spatial-hearing benefits. One potential solution is to provide acoustic beamforming. A headphone-based experiment was conducted to compare speech understanding under natural CI listening conditions and with two non-adaptive beamformers: a single beam and a binaural version, called "triple beam," which provides an improved signal-to-noise ratio (beamforming benefit) and usable spatial cues by reintroducing interaural level differences. Speech reception thresholds (SRTs) for speech-on-speech masking were measured with target speech presented in front and two maskers either co-located or at narrow/wide separations. Numerosity judgments and sound-localization performance were also measured. Natural spatial cues, single-beam, and triple-beam conditions were compared. For CI listeners, there was a negligible change in SRTs when comparing co-located to separated maskers under natural listening conditions. In contrast, there were 4.9- and 16.9-dB improvements in SRTs for the single beam and 3.5- and 12.3-dB improvements for the triple beam (narrow and wide separations, respectively). Similar results were found for normal-hearing listeners presented with vocoded stimuli. The single beam improved speech-on-speech masking performance but yielded poor sound localization. The triple beam improved both speech-on-speech masking performance (albeit less than the single beam) and sound localization. Thus, the triple beam was the most versatile across multiple spatial-hearing domains.
Affiliation(s)
- David Yun
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Todd R Jennings
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
- Gerald Kidd
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
- Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
10.
Dieudonné B, Van Wilderode M, Francart T. Temporal quantization deteriorates the discrimination of interaural time differences. J Acoust Soc Am 2020; 148:815. PMID: 32873012. DOI: 10.1121/10.0001759.
Abstract
Cochlear implants (CIs) often replace acoustic temporal fine structure by a fixed-rate pulse train. If the pulse timing is arbitrary (that is, not based on the phase information of the acoustic signal), temporal information is quantized by the pulse period. This temporal quantization is probably imperceptible with current clinical devices. However, it could result in large temporal jitter for strategies that aim to improve bilateral and bimodal CI users' perception of interaural time differences (ITDs), such as envelope enhancement. In an experiment with 16 normal-hearing listeners, it is shown that such jitter could deteriorate ITD perception for temporal quantization that corresponds to the often-used stimulation rate of 900 pulses per second (pps): the just-noticeable difference in ITD with quantization was 177 μs as compared to 129 μs without quantization. For smaller quantization step sizes, no significant deterioration of ITD perception was found. In conclusion, the binaural system can only average out the effect of temporal quantization to some extent, such that pulse timing should be well-considered. As this psychophysical procedure was somewhat unconventional, different procedural parameters were compared by simulating a number of commonly used two-down one-up adaptive procedures in Appendix B.
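The quantization step at the 900-pps rate discussed above is 1/900 s ≈ 1.1 ms, an order of magnitude larger than the reported ITD just-noticeable differences of 129-177 μs. A small illustrative sketch (not the study's psychophysical procedure) of how snapping event times to a pulse grid produces temporal jitter:

```python
import numpy as np

rate_pps = 900
period_us = 1e6 / rate_pps        # quantization step: ~1111 us

# Quantize random event times to the 900-pps pulse grid and measure
# the temporal jitter this introduces.  Uniform quantization error has
# a standard deviation of period / sqrt(12), large relative to ITD
# just-noticeable differences on the order of 100-200 us.
rng = np.random.default_rng(3)
times_us = rng.uniform(0.0, 1e6, 100_000)
quantized_us = np.round(times_us / period_us) * period_us
jitter_std_us = np.std(quantized_us - times_us)

print(period_us)       # ~1111.1 us
print(jitter_std_us)   # ~321 us, i.e. period_us / sqrt(12)
```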
Affiliation(s)
- Benjamin Dieudonné
- Experimental Oto-rhino-laryngology, Department of Neurosciences, Katholieke Universiteit (KU) Leuven-University of Leuven, Herestraat 49 bus 721, Leuven, 3000, Belgium
- Mira Van Wilderode
- Experimental Oto-rhino-laryngology, Department of Neurosciences, Katholieke Universiteit (KU) Leuven-University of Leuven, Herestraat 49 bus 721, Leuven, 3000, Belgium
- Tom Francart
- Experimental Oto-rhino-laryngology, Department of Neurosciences, Katholieke Universiteit (KU) Leuven-University of Leuven, Herestraat 49 bus 721, Leuven, 3000, Belgium
11.
Spirrov D, Kludt E, Verschueren E, Büchner A, Francart T. Effect of (Mis)Matched Compression Speed on Speech Recognition in Bimodal Listeners. Trends Hear 2020; 24:2331216520948974. PMID: 32865486. PMCID: PMC7466877. DOI: 10.1177/2331216520948974.
Abstract
Automatic gain control (AGC) compresses the wide dynamic range of sounds into the narrow dynamic range of hearing-impaired listeners. Setting the AGC parameters (time constants and knee points) is an important part of fitting hearing devices. These parameters not only influence the overall loudness elicited by the hearing devices but can also affect the recognition of speech in noise. We investigated whether matching the knee points and time constants of the AGC between the cochlear implant and the hearing aid of bimodal listeners would improve speech recognition in noise. We recruited 18 bimodal listeners and provided them all with the same cochlear-implant processor and hearing aid. We compared the matched AGCs with the mismatched AGCs of the default device settings. As a baseline, we also included a condition with the mismatched AGCs of the participants' own devices. We tested speech recognition in quiet and in noise presented from different directions. The time constants affected outcomes in the monaural testing condition with the cochlear implant alone. There were no specific binaural performance differences between the two AGC settings. Therefore, performance depended mostly on the monaural cochlear-implant-alone condition.
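The knee points and time constants referred to above can be pictured with a minimal AGC sketch: a static input-output curve with a knee point and compression ratio, plus a one-pole attack/release smoother standing in for the time constants. This is an illustrative toy, not the processing of any actual CI processor or hearing aid; all parameter values are invented:

```python
import numpy as np

def agc_gain_db(level_db, knee_db, ratio):
    """Static compression curve: no gain change below the knee point,
    `ratio`:1 compression above it."""
    over = np.maximum(level_db - knee_db, 0.0)
    return -over * (1.0 - 1.0 / ratio)

def smooth_level(level_db, attack_smp, release_smp):
    """One-pole attack/release smoothing of the level estimate; the
    attack/release lengths (in samples) stand in for the AGC time
    constants."""
    a_att = np.exp(-1.0 / attack_smp)
    a_rel = np.exp(-1.0 / release_smp)
    out = np.empty_like(level_db)
    acc = level_db[0]
    for i, x in enumerate(level_db):
        a = a_att if x > acc else a_rel   # fast attack, slow release
        acc = a * acc + (1.0 - a) * x
        out[i] = acc
    return out

# Static curve: a 60 dB input with a 45 dB knee and 3:1 ratio -> the
# 15 dB above the knee is compressed to 5 dB, i.e. 10 dB gain reduction.
print(agc_gain_db(np.array([60.0]), knee_db=45.0, ratio=3.0))  # [-10.]

# Time constants: the smoothed level tracks a 50 -> 80 dB step.
step = np.concatenate([np.full(100, 50.0), np.full(100, 80.0)])
tracked = smooth_level(step, attack_smp=5, release_smp=50)
print(tracked[-1])   # close to 80 dB
```

Mismatched time constants between two such devices would make the two ears' gains diverge during level changes, which is the interaural distortion the study probes.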
Affiliation(s)
- Tom Francart
- ExpORL, Department of Neurosciences, KU Leuven, Belgium