1
Sharma S, Mens LHM, Snik AFM, van Opstal AJ, van Wanrooij MM. Hearing Asymmetry Biases Spatial Hearing in Bimodal Cochlear-Implant Users Despite Bilateral Low-Frequency Hearing Preservation. Trends Hear 2023; 27:23312165221143907. [PMID: 36605011] [PMCID: PMC9829999] [DOI: 10.1177/23312165221143907]
Abstract
Many cochlear implant users with binaural residual (acoustic) hearing benefit from combining electric and acoustic stimulation (EAS) in the implanted ear with acoustic amplification in the other. These bimodal EAS listeners can potentially use low-frequency binaural cues to localize sounds. However, their hearing is generally asymmetric for mid- and high-frequency sounds, perturbing or even abolishing binaural cues. Here, we investigated the effect of a frequency-dependent binaural asymmetry in hearing thresholds on sound localization by seven bimodal EAS listeners. Frequency dependence was probed by presenting sounds with power in low-, mid-, high-, or mid-to-high-frequency bands. Frequency-dependent hearing asymmetry was present in the bimodal EAS listening condition (when using both devices) but was also induced by independently switching devices on or off. Using both devices, hearing was near-symmetric for low frequencies, asymmetric for mid frequencies with better hearing thresholds in the implanted ear, and monaural for high frequencies with no hearing in the non-implanted ear. Results show that sound-localization performance was generally poor and typically strongly biased toward the better-hearing ear. Hearing asymmetry was a good predictor of these biases. Notably, even when hearing was symmetric, responses were preferentially biased toward the ear with the hearing aid. We discuss how the frequency dependence of any hearing asymmetry may lead to binaural cues that are spatially inconsistent as the spectrum of a sound changes, and we speculate that this inconsistency may prevent accurate sound localization even after long-term exposure to the hearing asymmetry.
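The asymmetry-predicts-bias relation described here is, at heart, a linear regression problem. A minimal sketch of how such a relation could be quantified, with entirely hypothetical numbers and variable names (this is not the authors' data or analysis code):

```python
import numpy as np

# Hypothetical per-condition data: interaural threshold asymmetry
# (dB, positive = better implanted ear) and mean response bias
# (deg, positive = toward the implanted ear).
asymmetry_db = np.array([-2.0, 1.5, 12.0, 25.0, 38.0, 45.0, 50.0])
bias_deg = np.array([3.0, 5.0, 18.0, 30.0, 42.0, 48.0, 55.0])

# Least-squares line: bias ~ slope * asymmetry + offset.
slope, offset = np.polyfit(asymmetry_db, bias_deg, 1)

# Variance explained (R^2) quantifies how well asymmetry predicts bias.
pred = slope * asymmetry_db + offset
r2 = 1 - np.sum((bias_deg - pred) ** 2) / np.sum((bias_deg - bias_deg.mean()) ** 2)
print(f"slope = {slope:.2f} deg/dB, offset = {offset:.1f} deg, R^2 = {r2:.2f}")
```

In such a fit, a nonzero offset at zero asymmetry would capture the residual bias toward the hearing-aid ear that the abstract reports.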
Affiliation(s)
- Snandan Sharma
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Lucas H.M. Mens
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Centre, Nijmegen, The Netherlands
- Ad F.M. Snik
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- A. John van Opstal
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Marc M. van Wanrooij
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Correspondence: Marc van Wanrooij, Department of Biophysics, Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
2
Ausili SA, Backus B, Agterberg MJH, van Opstal AJ, van Wanrooij MM. Sound Localization in Real-Time Vocoded Cochlear-Implant Simulations With Normal-Hearing Listeners. Trends Hear 2019; 23:2331216519847332. [PMID: 31088265] [PMCID: PMC6535744] [DOI: 10.1177/2331216519847332]
Abstract
Bilateral cochlear-implant (CI) users and single-sided deaf listeners with a CI are less effective at localizing sounds than normal-hearing (NH) listeners. This performance gap is due to the degradation of binaural and monaural sound localization cues, caused by a combination of device-related and patient-related issues. In this study, we targeted the device-related issues by measuring sound localization performance of 11 NH listeners, listening to free-field stimuli processed by a real-time CI vocoder. The use of a real-time vocoder is a new approach, which enables testing in a free-field environment. For the NH listening condition, all listeners accurately and precisely localized sounds according to a linear stimulus–response relationship with an optimal gain and a minimal bias both in the azimuth and in the elevation directions. In contrast, when listening with bilateral real-time vocoders, listeners tended to orient either to the left or to the right in azimuth and were unable to determine sound source elevation. When listening with an NH ear and a unilateral vocoder, localization was impoverished on the vocoder side but improved toward the NH side. Localization performance was also reflected by systematic variations in reaction times across listening conditions. We conclude that perturbation of interaural temporal cues, reduction of interaural level cues, and removal of spectral pinna cues by the vocoder impairs sound localization. Listeners seem to ignore cues that were made unreliable by the vocoder, leading to acute reweighting of available localization cues. We discuss how current CI processors prevent CI users from localizing sounds in everyday environments.
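The study's real-time vocoder is research hardware, but the underlying principle, replacing each frequency band's temporal fine structure with an envelope-modulated noise carrier, can be sketched offline. Band count, filter orders, and the envelope cutoff below are illustrative assumptions, not the study's parameters:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, n_bands=8, f_lo=200.0, f_hi=6000.0):
    """Offline noise-band vocoder sketch: per-band envelope extraction,
    then remodulation of band-limited noise carriers."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)  # log-spaced band edges
    rng = np.random.default_rng(0)
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        env = np.abs(hilbert(band))                 # amplitude envelope
        sos_env = butter(2, 50.0, fs=fs, output="sos")
        env = sosfiltfilt(sos_env, env)             # smooth envelope (~50 Hz)
        carrier = sosfiltfilt(sos, rng.standard_normal(len(x)))
        out += env * carrier                        # envelope-modulated noise
    return out / np.max(np.abs(out) + 1e-12)

fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)  # placeholder input signal
y = noise_vocode(x, fs)
```

Processing of this kind discards temporal fine structure and spectral pinna detail, which is consistent with the vocoded listeners' loss of elevation performance described above.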
Affiliation(s)
- Sebastian A Ausili
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Martijn J H Agterberg
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands; Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen Medical Center, The Netherlands
- A John van Opstal
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Marc M van Wanrooij
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
3
van de Rijt LPH, Roye A, Mylanus EAM, van Opstal AJ, van Wanrooij MM. The Principle of Inverse Effectiveness in Audiovisual Speech Perception. Front Hum Neurosci 2019; 13:335. [PMID: 31611780] [PMCID: PMC6775866] [DOI: 10.3389/fnhum.2019.00335]
Abstract
We assessed how synchronous speech listening and lipreading affect speech recognition in acoustic noise. In simple audiovisual perceptual tasks, inverse effectiveness is often observed: the weaker the unimodal stimuli, or the poorer their signal-to-noise ratio, the stronger the audiovisual benefit. So far, however, inverse effectiveness has not been demonstrated for complex audiovisual speech stimuli. Here we assessed whether this multisensory integration effect can also be observed for the recognizability of spoken words. To that end, we presented audiovisual sentences to 18 native-Dutch normal-hearing participants, who had to identify the spoken words from a finite list. Speech-recognition performance was determined for auditory-only, visual-only (lipreading), and auditory-visual conditions. To modulate acoustic task difficulty, we systematically varied the auditory signal-to-noise ratio. In line with the commonly observed multisensory enhancement of speech recognition, audiovisual words were more easily recognized than auditory-only words (recognition thresholds of -15 and -12 dB, respectively). We show that the difficulty of recognizing a particular word, either acoustically or visually, determines the occurrence of inverse effectiveness in audiovisual word integration: words that are better heard, or better recognized through lipreading, benefit less from bimodal presentation. Audiovisual performance at the lowest acoustic signal-to-noise ratios (45%) fell below the visual-only recognition rate (60%), reflecting an actual deterioration of lipreading in the presence of excessive acoustic noise. This suggests that the brain may adopt a strategy in which attention must be divided between listening and lipreading.
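A common way to express inverse effectiveness numerically is a multisensory enhancement index relative to the best unimodal condition. The sketch below is one standard choice of index, not necessarily the paper's measure; the 45% audiovisual floor and 60% lipreading rate echo the abstract, while the remaining proportions are made up:

```python
import numpy as np

# Hypothetical word-recognition proportions per SNR level.
p_a  = np.array([0.10, 0.30, 0.55, 0.80])   # auditory-only, SNR rising
p_v  = 0.60                                  # visual-only (lipreading)
p_av = np.array([0.45, 0.70, 0.85, 0.95])    # audiovisual

# Enhancement relative to the best unimodal condition. Inverse
# effectiveness predicts larger gains when unimodal performance is
# poorer; the negative "gain" at the lowest SNR mirrors the reported
# deterioration of lipreading in heavy noise.
best_uni = np.maximum(p_a, p_v)
enhancement = (p_av - best_uni) / best_uni

for i, (b, e) in enumerate(zip(best_uni, enhancement)):
    print(f"SNR level {i}: best unimodal {b:.2f}, AV gain {e:+.2f}")
```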
Affiliation(s)
- Luuk P. H. van de Rijt
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, The Netherlands
- Anja Roye
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Emmanuel A. M. Mylanus
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, The Netherlands
- A. John van Opstal
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Marc M. van Wanrooij
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
4
Ege R, van Opstal AJ, van Wanrooij MM. Perceived Target Range Shapes Human Sound-Localization Behavior. eNeuro 2019; 6:ENEURO.0111-18.2019. [PMID: 30963103] [PMCID: PMC6451157] [DOI: 10.1523/eneuro.0111-18.2019]
Abstract
The auditory system relies on binaural differences and spectral pinna cues to localize sounds in azimuth and elevation. However, the acoustic input can be unreliable due to uncertainty about the environment and to neural noise. A possible strategy to reduce sound-location uncertainty is to integrate the sensory observations with sensorimotor information from previous experience, to infer where sounds are more likely to occur. We investigated whether and how human sound-localization performance is affected by the spatial distribution of target sounds, and by changes thereof. We tested three open-loop paradigms in which we varied the spatial range of sounds in different ways. For the narrowest ranges, target-response gains were highly idiosyncratic and deviated from the optimal gain predicted by error minimization; in the horizontal plane the deviation typically consisted of a response overshoot. Moreover, participants adjusted their behavior by rapidly adapting their gain to the target range, both in elevation and in azimuth, yielding behavior closer to optimal for larger target ranges. Notably, gain changes occurred without any exogenous feedback about performance. We discuss how the findings can be explained by a sub-optimal model in which the motor-control system reduces its response error across trials to within an acceptable range, rather than strictly minimizing the error.
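The comparison between a measured stimulus-response gain and an error-minimizing optimal gain can be illustrated with a textbook Gaussian model, in which the mean-squared-error-optimal gain shrinks as sensory noise grows relative to the target range. This is a sketch under that assumption, with hypothetical numbers; the paper's actual model may differ in detail:

```python
import numpy as np

def optimal_gain(target_sd, sensory_noise_sd):
    """MSE-minimizing response gain for Gaussian-distributed targets
    observed with additive Gaussian sensory noise (a standard
    shrinkage result, used here purely for illustration)."""
    var_t = target_sd ** 2
    return var_t / (var_t + sensory_noise_sd ** 2)

# Narrow versus wide target ranges with the same sensory noise:
for target_sd in (5.0, 15.0, 35.0):            # deg, hypothetical ranges
    g = optimal_gain(target_sd, sensory_noise_sd=8.0)
    print(f"target SD {target_sd:>4.1f} deg -> optimal gain {g:.2f}")

# Measured gain: slope of the target -> response regression.
targets = np.array([-30, -15, 0, 15, 30], float)
responses = np.array([-38, -20, 2, 21, 37], float)   # overshoot example
gain, bias = np.polyfit(targets, responses, 1)
print(f"measured gain {gain:.2f} (>1 illustrates an azimuth overshoot)")
```

Under this model, wider target ranges push the optimal gain toward 1, consistent with the reported behavior becoming closer to optimal for larger ranges.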
Affiliation(s)
- Rachel Ege
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 AJ Nijmegen, The Netherlands
- A. John van Opstal
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 AJ Nijmegen, The Netherlands
- Marc M. van Wanrooij
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 AJ Nijmegen, The Netherlands
5
Zonooz B, Arani E, van Opstal AJ. Learning to localise weakly-informative sound spectra with and without feedback. Sci Rep 2018; 8:17933. [PMID: 30560940] [PMCID: PMC6298951] [DOI: 10.1038/s41598-018-36422-z]
Abstract
How the human auditory system learns to map complex pinna-induced spectral-shape cues onto veridical estimates of sound-source elevation in the median plane is still unclear. Earlier studies demonstrated considerable sound-localisation plasticity after applying pinna moulds and after exposure to altered vision. Several factors may contribute to auditory spatial learning, such as visual or motor feedback, or updated priors. Here we induced perceptual learning for sounds with degraded spectral content that carried weak but consistent elevation-dependent cues, as demonstrated by low-gain stimulus-response relations. During training, we provided visual feedback for only six targets in the midsagittal plane, to which listeners gradually improved their response accuracy. Interestingly, listeners' performance also improved without visual feedback, albeit less strongly. Post-training results showed generalised improvement in response behaviour, also for non-trained locations and acoustic spectra presented throughout the two-dimensional frontal hemifield. We argue that the auditory system learns to reweigh contributions from low-informative spectral bands to update its prior elevation estimates, and we explain our results with a neuro-computational model.
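The proposed reweighting of spectral bands can be caricatured with a delta-rule learner that gradually shifts weight toward bands whose elevation estimates predict the feedback. Everything below (band informativeness, learning rate, noise levels) is a hypothetical illustration of the idea, not the paper's neuro-computational model:

```python
import numpy as np

rng = np.random.default_rng(1)
n_bands = 6

# Hypothetical band-wise elevation estimates: informative bands track
# the true elevation; degraded bands mostly carry noise.
informative = np.array([1.0, 1.0, 0.2, 0.2, 0.2, 0.2])
weights = np.full(n_bands, 1.0 / n_bands)   # start with uniform weights
lr = 0.05                                    # learning rate

for trial in range(500):
    elevation = rng.uniform(-30, 30)
    band_estimates = informative * elevation + \
        (1 - informative) * rng.normal(0, 15, n_bands)
    response = weights @ band_estimates
    error = elevation - response             # feedback signal
    # Normalized delta rule: credit bands by their contribution.
    weights += lr * error * band_estimates / (band_estimates @ band_estimates + 1e-9)
    weights = np.clip(weights, 0, None)
    weights /= weights.sum()

print(np.round(weights, 2))  # weight drifts toward informative bands
```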
Affiliation(s)
- Bahram Zonooz
- Biophysics Department, Donders Center for Neuroscience, Radboud University, Heyendaalseweg 135, 6525 AJ Nijmegen, The Netherlands
- Elahe Arani
- Biophysics Department, Donders Center for Neuroscience, Radboud University, Heyendaalseweg 135, 6525 AJ Nijmegen, The Netherlands
- A. John van Opstal
- Biophysics Department, Donders Center for Neuroscience, Radboud University, Heyendaalseweg 135, 6525 AJ Nijmegen, The Netherlands
6
Veugen LCE, Chalupper J, Mens LHM, Snik AFM, van Opstal AJ. Effect of extreme adaptive frequency compression in bimodal listeners on sound localization and speech perception. Cochlear Implants Int 2017; 18:266-277. [PMID: 28726592] [DOI: 10.1080/14670100.2017.1353762]
Abstract
OBJECTIVES: This study aimed to improve access to high-frequency interaural level differences (ILD) by applying extreme frequency compression (FC) in the hearing aid (HA) of 13 bimodal listeners, who used a cochlear implant (CI) and a conventional HA in opposite ears.
DESIGN: An experimental signal-adaptive frequency-lowering algorithm was tested, which compressed frequencies above 160 Hz into the individual audible range of residual hearing, but only for consonants (adaptive FC), thus protecting vowel formants to preserve speech perception. In a cross-over design with at least 5 weeks of acclimatization between sessions, bimodal performance with and without adaptive FC was compared for horizontal sound localization, speech understanding in quiet and in noise, and vowel, consonant, and voice-pitch perception.
RESULTS: On average, adaptive FC did not significantly affect any of the test results. However, two subjects who were fitted with a relatively weak compression ratio showed improved horizontal sound localization. After the study, four subjects preferred adaptive FC, four preferred standard frequency mapping, and four had no preference. Notably, the subjects who preferred adaptive FC were those who performed best on all tasks, both with and without adaptive FC.
CONCLUSION: At the group level, extreme adaptive FC did not change sound localization or speech understanding in bimodal listeners. Possible explanations are too-strong compression ratios, insufficient residual hearing, or that the adaptive switching, although it preserved vowel perception, was ineffective at producing consistent ILD cues. Individual results suggested that two subjects were able to integrate the frequency-compressed HA input with that of the CI and benefited from enhanced binaural cues for horizontal sound localization.
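Frequency compression of this kind maps input frequencies above a cutoff into the residual-hearing range, typically by squeezing the log-frequency axis. A generic sketch using the 160 Hz cutoff mentioned above but a hypothetical compression ratio; the study's signal-adaptive algorithm additionally switched compression on only for consonants:

```python
import numpy as np

def compress_frequency(f_in, cutoff_hz, ratio):
    """Log-axis frequency compression above a cutoff: frequencies
    below the cutoff pass through unchanged; frequencies above it
    are squeezed by `ratio`. A generic sketch, not the paper's
    adaptive algorithm."""
    f_in = np.asarray(f_in, dtype=float)
    f_out = f_in.copy()
    above = f_in > cutoff_hz
    f_out[above] = cutoff_hz * (f_in[above] / cutoff_hz) ** (1.0 / ratio)
    return f_out

# With a strong (hypothetical) ratio, high-frequency consonant energy
# lands inside a low-frequency residual-hearing range.
freqs = np.array([250.0, 1000.0, 4000.0, 8000.0])
for f_in_hz, f_out_hz in zip(freqs, compress_frequency(freqs, cutoff_hz=160.0, ratio=4.0)):
    print(f"{f_in_hz:6.0f} Hz -> {f_out_hz:7.1f} Hz")
```

With these illustrative settings, 8 kHz input maps to roughly 425 Hz, i.e., into a region where severe-loss listeners may still have usable acoustic hearing.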
Affiliation(s)
- Lidwien C E Veugen
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Josef Chalupper
- Advanced Bionics European Research Centre (AB ERC), Hannover, Germany
- Lucas H M Mens
- Department of Otorhinolaryngology, Radboud University Nijmegen Medical Centre, The Netherlands
- Ad F M Snik
- Department of Otorhinolaryngology, Radboud University Nijmegen Medical Centre, The Netherlands
- A John van Opstal
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
7
Horizontal sound localization in cochlear implant users with a contralateral hearing aid. Hear Res 2016; 336:72-82. [DOI: 10.1016/j.heares.2016.04.008]
8
van de Rijt LPH, van Opstal AJ, Mylanus EAM, Straatman LV, Hu HY, Snik AFM, van Wanrooij MM. Temporal Cortex Activation to Audiovisual Speech in Normal-Hearing and Cochlear Implant Users Measured with Functional Near-Infrared Spectroscopy. Front Hum Neurosci 2016; 10:48. [PMID: 26903848] [PMCID: PMC4750083] [DOI: 10.3389/fnhum.2016.00048]
Abstract
BACKGROUND: Speech understanding may rely not only on auditory but also on visual information. Non-invasive functional neuroimaging techniques can expose the neural processes underlying the integration of multisensory processes required for speech understanding in humans. However, acoustic noise from functional MRI (fMRI) limits its usefulness in auditory experiments, and electromagnetic artifacts caused by electronic implants worn by subjects can severely distort EEG and fMRI recordings. We therefore assessed audiovisual activation of temporal cortex with a silent, optical neuroimaging technique: functional near-infrared spectroscopy (fNIRS).
METHODS: We studied temporal cortical activation, as represented by concentration changes of oxy- and deoxy-hemoglobin, in four easy-to-apply fNIRS optical channels of 33 normal-hearing adult subjects and five post-lingually deaf cochlear implant (CI) users, in response to supra-threshold unisensory auditory and visual stimuli, as well as to congruent auditory-visual speech stimuli.
RESULTS: Activation effects were not visible in single fNIRS channels. However, after discounting physiological noise through reference channel subtraction (RCS), auditory, visual, and audiovisual (AV) speech stimuli evoked concentration changes for all sensory modalities in both cohorts (p < 0.001). Auditory stimulation evoked larger concentration changes than visual stimulation (p < 0.001), and a saturation effect was observed for the AV condition.
CONCLUSIONS: Physiological, systemic noise can be removed from fNIRS signals by RCS. The observed multisensory enhancement of an auditory cortical channel can be plausibly described by a simple addition of the auditory and visual signals with saturation.
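Reference channel subtraction exploits the fact that a short-separation channel samples mostly systemic physiology (scalp blood flow, heartbeat, respiration) rather than cortex. Below is a minimal regression-based sketch on synthetic signals; the paper's actual pipeline likely differs in filtering and channel geometry:

```python
import numpy as np

def reference_channel_subtraction(deep, shallow):
    """Remove systemic physiology from a deep fNIRS channel by
    regressing out a short-separation (reference) channel.
    A minimal sketch of the general RCS idea."""
    deep = deep - deep.mean()
    shallow = shallow - shallow.mean()
    # Least-squares scaling of the reference onto the deep channel.
    beta = (shallow @ deep) / (shallow @ shallow)
    return deep - beta * shallow

# Synthetic signals: cortical response plus shared systemic noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 60, 600)
systemic = np.sin(2 * np.pi * 0.1 * t) + 0.3 * rng.standard_normal(t.size)
cortical = np.exp(-0.5 * ((t - 30) / 5) ** 2)   # slow hemodynamic bump
deep = cortical + 0.8 * systemic
shallow = 1.1 * systemic                         # no cortical component

cleaned = reference_channel_subtraction(deep, shallow)
print(f"corr with cortical: raw {np.corrcoef(deep, cortical)[0, 1]:.2f}, "
      f"cleaned {np.corrcoef(cleaned, cortical)[0, 1]:.2f}")
```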
Affiliation(s)
- Luuk P H van de Rijt
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen Medical Centre, Nijmegen, The Netherlands; Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- A John van Opstal
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Emmanuel A M Mylanus
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen Medical Centre, Nijmegen, The Netherlands
- Louise V Straatman
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen Medical Centre, Nijmegen, The Netherlands
- Hai Yin Hu
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Ad F M Snik
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen Medical Centre, Nijmegen, The Netherlands
- Marc M van Wanrooij
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen Medical Centre, Nijmegen, The Netherlands; Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
9
Mendonça C, Escher A, van de Par S, Colonius H. Predicting auditory space calibration from recent multisensory experience. Exp Brain Res 2015; 233:1983-91. [PMID: 25795081] [PMCID: PMC4464732] [DOI: 10.1007/s00221-015-4259-z]
Abstract
Multisensory experience can lead to auditory space recalibration. After exposure to spatially discrepant audiovisual stimulation, sound percepts are displaced in space toward the location of the previous visual stimulation. This study focuses on identifying the factors in recent sensory experience that lead to such auditory space shifts. Sequences of five audiovisual pairs were presented, each randomly congruent or discrepant in space. Each sequence was followed by a single auditory trial and two visual trials, and in each trial participants had to identify the perceived stimulus positions. We found that auditory localization was shifted during audiovisual discrepant trials and during subsequent auditory trials, suggesting a recalibration effect. Elapsed time did not lead to greater recalibration effects. The last audiovisual trial affected the subsequent auditory shift the most; the number of discrepant trials in a sequence and the number of consecutive discrepant trials also correlated with the subsequent auditory shift. To estimate the individual contribution of previously presented trials to the recalibration effect, a best-fitting model was developed to predict the shift as a linear weighted combination of stimulus features: (1) whether matching or discrepant trials occurred in the sequence, (2) the total number of discrepant trials, (3) the maximum number of consecutive discrepant trials, and (4) whether the last trial was discrepant or not. The selected model combined two of these properties: the type of stimulus in the last trial of the audiovisual sequence and the overall probability of mismatching trials in the sequence.
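A linear weighted combination of sequence features, as described above, can be fitted by ordinary least squares. The feature matrix and shift values below are fabricated placeholders purely to show the fitting step, not the study's data:

```python
import numpy as np

# Hypothetical per-sequence features, following the abstract's list:
# [last trial discrepant (0/1), number of discrepant trials,
#  max run of consecutive discrepant trials, any discrepant (0/1)].
X = np.array([
    [1, 3, 2, 1],
    [0, 2, 1, 1],
    [1, 5, 4, 1],
    [0, 0, 0, 0],
    [1, 4, 3, 1],
    [0, 1, 1, 1],
], dtype=float)
shift_deg = np.array([2.1, 0.8, 3.4, 0.1, 2.9, 0.5])  # made-up shifts

# Add an intercept and solve the weighted linear model by least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, shift_deg, rcond=None)
print("intercept:", round(coef[0], 2))
print("feature weights:", np.round(coef[1:], 2))
```

Model selection, e.g., comparing feature subsets by cross-validation or an information criterion, would then identify the winning combination of features reported in the paper.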
Affiliation(s)
- Catarina Mendonça
- Department of Signal Processing and Acoustics, Aalto University, Otakaari 5, 02150 Espoo, Finland
10
Looking at the ventriloquist: visual outcome of eye movements calibrates sound localization. PLoS One 2013; 8:e72562. [PMID: 24009691] [PMCID: PMC3757015] [DOI: 10.1371/journal.pone.0072562]
Abstract
A general problem in learning is how the brain determines what lesson to learn (and what lessons not to learn). For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound, which is an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses: (1) the brain guides sound-location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism; or (2) the brain uses a 'guess and check' heuristic in which visual feedback obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain's reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound, but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3–1.7 degrees, or 22–28% of the original 6-degree visual-auditory mismatch. In contrast, when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony or simultaneity.
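The reported adaptation fractions follow directly from the shift magnitudes and the imposed mismatch, as this trivial check (numbers taken from the abstract) confirms:

```python
# Fractional adaptation: how much of an imposed visual-auditory
# mismatch shows up in later auditory-only responses.
mismatch_deg = 6.0
for shift_deg in (1.3, 1.7):
    print(f"{shift_deg} deg shift = "
          f"{100 * shift_deg / mismatch_deg:.0f}% of the mismatch")
```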