1
Hu J, Vetter P. How the eyes respond to sounds. Ann N Y Acad Sci 2024; 1532:18-36. [PMID: 38152040] [DOI: 10.1111/nyas.15093]
Abstract
Eye movements have been extensively studied with respect to visual stimulation. However, we live in a multisensory world, and how the eyes are driven by other senses has been explored much less. Here, we review the evidence on how audition can trigger and drive different eye responses and which cortical and subcortical neural correlates are involved. We provide an overview on how different types of sounds, from simple tones and noise bursts to spatially localized sounds and complex linguistic stimuli, influence saccades, microsaccades, smooth pursuit, pupil dilation, and eye blinks. The reviewed evidence reveals how the auditory system interacts with the oculomotor system, both behaviorally and neurally, and how this differs from visually driven eye responses. Some evidence points to multisensory interaction, and potential multisensory integration, but the underlying computational and neural mechanisms are still unclear. While there are marked differences in how the eyes respond to auditory compared to visual stimuli, many aspects of auditory-evoked eye responses remain underexplored, and we summarize the key open questions for future research.
Affiliation(s)
- Junchao Hu
- Visual and Cognitive Neuroscience Lab, Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Petra Vetter
- Visual and Cognitive Neuroscience Lab, Department of Psychology, University of Fribourg, Fribourg, Switzerland
2
Ausili SA, Snapp HA. Contralateral Routing of Signal Disrupts Monaural Sound Localization. Audiol Res 2023; 13:586-599. [PMID: 37622927] [PMCID: PMC10451350] [DOI: 10.3390/audiolres13040051]
Abstract
OBJECTIVES In the absence of binaural hearing, individuals with single-sided deafness can adapt to use monaural level and spectral cues to improve their spatial hearing abilities. Contralateral routing of signal is the most common form of rehabilitation for individuals with single-sided deafness. However, little is known about how these devices affect the monaural localization cues on which single-sided deafness listeners may become reliant. This study aimed to investigate the effects of contralateral routing of signal hearing aids on localization performance in azimuth and elevation under monaural listening conditions. DESIGN Localization was assessed in 10 normal hearing adults under three listening conditions: (1) normal hearing (NH), (2) unilateral plug (NH-plug), and (3) unilateral plug with a contralateral routing of signal (CROS) hearing aid (NH-plug + CROS). Monaural hearing simulation was achieved by plugging the ear with E-A-Rsoft™ FX™ foam earplugs. Stimuli consisted of 150 ms high-pass noise bursts (3-20 kHz), presented in a random order from fifty locations spanning ±70° in the horizontal and ±30° in the vertical plane at 45, 55, and 65 dBA. RESULTS In the unilateral plugged listening condition, participants demonstrated good localization in elevation and a response bias in azimuth toward the open ear. Performance in elevation decreased significantly with the contralateral routing of signal device on, evidenced by significantly reduced response gain and a low r2 value. Additionally, performance in azimuth was further reduced for contralateral routing of signal aided localization compared to the simulated unilateral hearing loss condition. Use of the contralateral routing of signal device also reduced the promptness of listener responses and increased response variability.
CONCLUSIONS Results suggest contralateral routing of signal hearing aids disrupt monaural spectral and level cues, which leads to detriments in localization performance in both the horizontal and vertical dimensions. Increased reaction time and increased variability in responses suggest localization is more effortful when wearing the contralateral routing of signal device.
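The response gain and r2 statistics reported in this and similar localization studies amount to an ordinary least-squares fit of response azimuth against target azimuth. A minimal sketch in Python (the function name and synthetic data are illustrative, not taken from the study):

```python
import numpy as np

def localization_stats(targets_deg, responses_deg):
    """Fit responses = gain * targets + bias by least squares and
    return (gain, bias, r2). A gain near 1 and a high r2 indicate
    accurate, consistent localization."""
    t = np.asarray(targets_deg, dtype=float)
    r = np.asarray(responses_deg, dtype=float)
    gain, bias = np.polyfit(t, r, 1)          # slope = response gain
    pred = gain * t + bias
    ss_res = np.sum((r - pred) ** 2)          # residual variance
    ss_tot = np.sum((r - r.mean()) ** 2)      # total variance
    r2 = 1.0 - ss_res / ss_tot                # coefficient of determination
    return gain, bias, r2

# Hypothetical plugged-ear data: reduced gain, bias toward the open (left) ear
targets = np.linspace(-70, 70, 15)
responses = 0.4 * targets - 25 + np.random.default_rng(0).normal(0, 5, 15)
gain, bias, r2 = localization_stats(targets, responses)
```

A gain near 1 with a high r2 indicates veridical localization; the plugged condition in the study corresponds to a reduced gain and an azimuth bias toward the open ear, as simulated here.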
Affiliation(s)
- Sebastian A. Ausili
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 Nijmegen, The Netherlands
- Department of Otolaryngology, University of Miami, 1120 NW 14th Street, 5th Floor, Miami, FL 33136, USA
- Hillary A. Snapp
- Department of Otolaryngology, University of Miami, 1120 NW 14th Street, 5th Floor, Miami, FL 33136, USA
3
Lokša P, Kopčo N. Toward a Unified Theory of the Reference Frame of the Ventriloquism Aftereffect. Trends Hear 2023; 27:23312165231201020. [PMID: 37715636] [PMCID: PMC10505348] [DOI: 10.1177/23312165231201020]
Abstract
The ventriloquism aftereffect (VAE), observed as a shift in the perceived locations of sounds after audio-visual stimulation, requires reference frame (RF) alignment, since hearing and vision encode space in different RFs (head-centered vs. eye-centered). Previous experimental studies reported inconsistent results, observing either a mixture of head-centered and eye-centered frames, or a predominantly head-centered frame. Here, a computational model is introduced, examining the neural mechanisms underlying these effects. The basic model version assumes that the auditory spatial map is head-centered and that the visual signals are converted to the head-centered frame prior to inducing the adaptation. Two mechanisms are considered as extended model versions to describe the mixed-frame experimental data: (1) additional presence of visual signals in the eye-centered frame and (2) eye-gaze direction-dependent attenuation of the VAE when the eyes shift away from the training fixation. Simulation results show that the mixed-frame results are mainly due to the second mechanism, suggesting that the RF of the VAE is mainly head-centered. Additionally, a mechanism is proposed to explain a new ventriloquism-aftereffect-like phenomenon in which adaptation is induced by aligned audio-visual signals when saccades are used for responding to auditory targets. A version of the model extended to consider such response-method-related biases accurately predicts the new phenomenon. When attempting to model all the experimentally observed phenomena simultaneously, the model predictions are qualitatively similar but less accurate, suggesting that the proposed neural mechanisms interact in a more complex way than assumed in the model.
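The gaze-dependent attenuation mechanism (mechanism 2) can be illustrated with a toy update rule: the auditory map shifts by a fraction of the audio-visual disparity, scaled down as gaze moves away from the training fixation. This is a hypothetical sketch with made-up parameters (`rate`, Gaussian attenuation width `sigma`), not the authors' actual model:

```python
import numpy as np

def vae_shift(aud_az, vis_az, eye_az, train_fix=0.0, rate=0.3, sigma=20.0):
    """Head-centered adaptation: the perceived auditory azimuth shifts
    toward the visual azimuth by a fraction `rate` of the disparity,
    attenuated (Gaussian, width `sigma` deg) as gaze departs from the
    training fixation direction."""
    attenuation = np.exp(-0.5 * ((eye_az - train_fix) / sigma) ** 2)
    return rate * (vis_az - aud_az) * attenuation

# At the training fixation the full shift applies...
full = vae_shift(aud_az=10.0, vis_az=20.0, eye_az=0.0)
# ...while a 20-deg gaze shift attenuates it, mimicking the mixed-frame data
reduced = vae_shift(aud_az=10.0, vis_az=20.0, eye_az=20.0)
```

Under this rule the aftereffect is head-centered in origin, yet measurements taken at different gaze directions would look partially eye-centered, which is the pattern the paper attributes to mechanism 2.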
Affiliation(s)
- Peter Lokša
- Institute of Computer Science, Faculty of Science, P. J. Šafárik University in Košice, Košice, Slovakia
- Norbert Kopčo
- Institute of Computer Science, Faculty of Science, P. J. Šafárik University in Košice, Košice, Slovakia
4
Yamagishi S, Furukawa S. Factors Influencing Saccadic Reaction Time: Effect of Task Modality, Stimulus Saliency, Spatial Congruency of Stimuli, and Pupil Size. Front Hum Neurosci 2020; 14:571893. [PMID: 33324183] [PMCID: PMC7726206] [DOI: 10.3389/fnhum.2020.571893]
Abstract
It is often assumed that the reaction time of a saccade toward visual and/or auditory stimuli reflects the sensitivities of our oculomotor-orienting system to stimulus saliency. Endogenous factors, as well as stimulus-related factors, would also affect the saccadic reaction time (SRT). However, it was not clear how these factors interact and to what extent visual- and auditory-targeting saccades are accounted for by common mechanisms. The present study examined the effect of, and the interaction between, stimulus saliency and audiovisual spatial congruency on the SRT for visual- and for auditory-target conditions. We also analyzed pre-target pupil size to examine the relationship between saccade preparation and pupil size. Pupil size is considered to reflect arousal states coupled with locus coeruleus (LC) activity during a cognitive task. The main findings were that (1) the pattern of the examined effects on the SRT varied between visual- and auditory-target conditions, (2) the effect of stimulus saliency was significant for the visual-target condition, but not for the auditory-target condition, (3) pupil velocity, not absolute pupil size, was sensitive to task set (i.e., visual-targeting saccade vs. auditory-targeting saccade), and (4) there was a significant correlation between the pre-saccade absolute pupil size and the SRTs for the visual-target condition but not for the auditory-target condition. The discrepancy between target modalities for the effect of pupil velocity, and between the absolute pupil size and pupil velocity for the correlation with SRT, may imply that the pupil effect for the visual-target condition was caused by a modality-specific link between pupil size modulation and the superior colliculus (SC) rather than by the locus coeruleus-norepinephrine (LC-NE) system. These results support the idea that different threshold mechanisms in the SC may be involved in the initiation of saccades toward visual and auditory targets.
Affiliation(s)
- Shimpei Yamagishi
- Human Information Science Laboratory, NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Atsugi, Japan
- Shigeto Furukawa
- Human Information Science Laboratory, NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Atsugi, Japan
5
Eklöf M, Asp F, Berninger E. Sound localization latency in normal hearing and simulated unilateral hearing loss. Hear Res 2020; 395:108011. [PMID: 32792116] [DOI: 10.1016/j.heares.2020.108011]
Abstract
Directing gaze towards auditory events is a natural behavior. In addition to the well-known accuracy of auditory elicited gaze responses for normal binaural listening, their latency is a measure of possible clinical interest and methodological importance. The aim was to develop a clinically feasible method to assess sound localization latency (SLL), and to study SLL as a function of simulated unilateral hearing loss (SUHL) and the relationship with accuracy. Eight healthy and normal-hearing adults (18-40 years) participated in this study. Horizontal gaze responses, recorded by non-invasive corneal reflection eye-tracking, were obtained during azimuthal shifts (24 trials) of a 3-min continuous auditory stimulus. In each trial, a sigmoid function was fitted to gaze samples. Latency was estimated by the abscissa corresponding to 50% of the arctangent amplitude. SLL was defined as the mean latency across trials. SLL was measured in normal-hearing and two SUHL conditions (SUHL30 and SUHL43: mean threshold of 30 dB HL and 43 dB HL across 0.5, 1, 2, and 4 kHz). In the normal-hearing condition, the mean ± SD SLL was 280 ± 40 ms (n = 8) with a test-retest SD = 20 ms. A linear mixed model showed a statistically significant effect of listening condition on SLL. The SUHL30 and SUHL43 conditions revealed a mean SLL of 370 ± 49 ms and 540 ± 120 ms, respectively. Repeated measures correlation analysis showed a clear relationship between SLL and the average sound localization accuracy (R2 = 0.94). This rapid and reliable method of obtaining SLL may be an important clinical tool for evaluation of binaural processing. Future studies in clinical cohorts are needed to assess whether SLL may reveal information about binaural processing abilities beyond that afforded by sound localization accuracy.
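The per-trial latency estimate described above — fit a sigmoid to the gaze samples and take the abscissa at 50% of the amplitude — can be sketched as follows. This is an illustrative reconstruction under stated assumptions (an arctangent sigmoid, a synthetic gaze trace), not the authors' code:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, amp, t50, slope, base):
    """Arctangent sigmoid rising from `base` to `base + amp`;
    `t50` is the time of the 50% point of the amplitude."""
    return base + (amp / np.pi) * (np.arctan(slope * (t - t50)) + np.pi / 2)

def trial_latency(t_ms, gaze_deg):
    """Fit the sigmoid to one trial's gaze samples and return the
    abscissa at 50% of the fitted amplitude (the t50 parameter)."""
    mid = (gaze_deg[0] + gaze_deg[-1]) / 2.0
    p0 = (gaze_deg[-1] - gaze_deg[0],              # amplitude guess
          t_ms[np.argmin(np.abs(gaze_deg - mid))], # crude 50%-crossing guess
          0.05,                                    # slope guess (1/ms)
          gaze_deg[0])                             # baseline guess
    params, _ = curve_fit(sigmoid, t_ms, gaze_deg, p0=p0, maxfev=10000)
    return params[1]

# Synthetic trial: gaze shifts from 0 to 40 deg around t = 300 ms, plus noise
t = np.arange(0, 1000, 10, dtype=float)
gaze = sigmoid(t, 40.0, 300.0, 0.05, 0.0) \
       + np.random.default_rng(1).normal(0, 0.5, t.size)
latency = trial_latency(t, gaze)  # close to 300 ms
```

Averaging such per-trial estimates over the 24 trials then yields the listener's SLL, per the definition in the abstract.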
Affiliation(s)
- Martin Eklöf
- Department of Clinical Science, Intervention and Technology, Karolinska Institutet, Stockholm, Sweden; Department of ENT, Section of Hearing Implants, Karolinska University Hospital, Stockholm, Sweden.
- Filip Asp
- Department of Clinical Science, Intervention and Technology, Karolinska Institutet, Stockholm, Sweden; Department of ENT, Section of Hearing Implants, Karolinska University Hospital, Stockholm, Sweden
- Erik Berninger
- Department of Clinical Science, Intervention and Technology, Karolinska Institutet, Stockholm, Sweden; Department of Audiology and Neurotology, Karolinska University Hospital, Stockholm, Sweden
6
Ausili SA, Backus B, Agterberg MJH, van Opstal AJ, van Wanrooij MM. Sound Localization in Real-Time Vocoded Cochlear-Implant Simulations With Normal-Hearing Listeners. Trends Hear 2019; 23:2331216519847332. [PMID: 31088265] [PMCID: PMC6535744] [DOI: 10.1177/2331216519847332]
Abstract
Bilateral cochlear-implant (CI) users and single-sided deaf listeners with a CI are less effective at localizing sounds than normal-hearing (NH) listeners. This performance gap is due to the degradation of binaural and monaural sound localization cues, caused by a combination of device-related and patient-related issues. In this study, we targeted the device-related issues by measuring sound localization performance of 11 NH listeners, listening to free-field stimuli processed by a real-time CI vocoder. The use of a real-time vocoder is a new approach, which enables testing in a free-field environment. For the NH listening condition, all listeners accurately and precisely localized sounds according to a linear stimulus–response relationship with an optimal gain and a minimal bias both in the azimuth and in the elevation directions. In contrast, when listening with bilateral real-time vocoders, listeners tended to orient either to the left or to the right in azimuth and were unable to determine sound source elevation. When listening with an NH ear and a unilateral vocoder, localization was impoverished on the vocoder side but improved toward the NH side. Localization performance was also reflected by systematic variations in reaction times across listening conditions. We conclude that perturbation of interaural temporal cues, reduction of interaural level cues, and removal of spectral pinna cues by the vocoder impairs sound localization. Listeners seem to ignore cues that were made unreliable by the vocoder, leading to acute reweighting of available localization cues. We discuss how current CI processors prevent CI users from localizing sounds in everyday environments.
Affiliation(s)
- Sebastian A Ausili
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Martijn J H Agterberg
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen Medical Center, the Netherlands
- A John van Opstal
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Marc M van Wanrooij
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
7
Plass J, Ahn E, Towle VL, Stacey WC, Wasade VS, Tao J, Wu S, Issa NP, Brang D. Joint Encoding of Auditory Timing and Location in Visual Cortex. J Cogn Neurosci 2019; 31:1002-1017. [PMID: 30912728] [DOI: 10.1162/jocn_a_01399]
Abstract
Co-occurring sounds can facilitate perception of spatially and temporally correspondent visual events. Separate lines of research have identified two putatively distinct neural mechanisms underlying two types of crossmodal facilitation: Whereas crossmodal phase resetting is thought to underlie enhancements based on temporal correspondences, lateralized occipital event-related potentials (ERPs) are thought to reflect enhancements based on spatial correspondences. Here, we sought to clarify the relationship between these two effects to assess whether they reflect two distinct mechanisms or, rather, two facets of the same underlying process. To identify the neural generators of each effect, we examined crossmodal responses to lateralized sounds in visually responsive cortex of 22 patients using electrocorticographic recordings. Auditory-driven phase reset and ERP responses in visual cortex displayed similar topography, revealing significant activity in pericalcarine, inferior occipital-temporal, and posterior parietal cortex, with maximal activity in lateral occipitotemporal cortex (potentially V5/hMT+). Laterality effects showed similar but less widespread topography. To test whether lateralized and nonlateralized components of crossmodal ERPs emerged from common or distinct neural generators, we compared responses throughout visual cortex. Visual electrodes responded to both contralateral and ipsilateral sounds with a contralateral bias, suggesting that previously observed laterality effects do not emerge from a distinct neural generator but rather reflect laterality-biased responses in the same neural populations that produce phase-resetting responses. These results suggest that crossmodal phase reset and ERP responses previously found to reflect spatial and temporal facilitation in visual cortex may reflect the same underlying mechanism. We propose a new unified model to account for these and previous results.
8
Multisensory integration in orienting behavior: Pupil size, microsaccades, and saccades. Biol Psychol 2017; 129:36-44. [DOI: 10.1016/j.biopsycho.2017.07.024]
9
The auditory dorsal pathway: Orienting vision. Neurosci Biobehav Rev 2011; 35:2162-73. [PMID: 21530585] [DOI: 10.1016/j.neubiorev.2011.04.005]