1. van der Willigen RF, Versnel H, van Opstal AJ. Spectral-temporal processing of naturalistic sounds in monkeys and humans. J Neurophysiol 2024;131:38-63. PMID: 37965933. DOI: 10.1152/jn.00129.2023.
Abstract
Human speech and vocalizations in animals are rich in joint spectrotemporal (S-T) modulations, wherein acoustic changes in both frequency and time are functionally related. In principle, the primate auditory system could process these complex dynamic sounds based on either an inseparable representation of S-T features or, alternatively, a separable representation. The separability hypothesis implies an independent processing of spectral and temporal modulations. We collected comparative data on the S-T hearing sensitivity in humans and macaque monkeys to a wide range of broadband dynamic spectrotemporal ripple stimuli, employing a yes-no signal-detection task. Ripples were systematically varied as a function of density (spectral modulation frequency), velocity (temporal modulation frequency), or modulation depth, to cover a listener's full S-T modulation sensitivity, derived from a total of 87 psychometric ripple detection curves. Audiograms were measured to control for normal hearing. We determined hearing thresholds, reaction time distributions, and S-T modulation transfer functions (MTFs), both at the ripple detection thresholds and at suprathreshold modulation depths. Our psychophysically derived MTFs are consistent with the hypothesis that both monkeys and humans employ analogous perceptual strategies: S-T acoustic information is primarily processed separably. Singular value decomposition (SVD), however, revealed a small, but consistent, inseparable spectral-temporal interaction. Finally, SVD analysis of the known visual spatiotemporal contrast sensitivity function (CSF) highlights that human vision is space-time inseparable to a much larger extent than is the case for S-T sensitivity in hearing. Thus, the specificity with which the primate brain encodes natural sounds appears to be less strict than is required to adequately deal with natural images.
NEW & NOTEWORTHY We provide comparative data on primate audition of naturalistic sounds comprising hearing thresholds, reaction time distributions, and spectral-temporal modulation transfer functions. Our psychophysical experiments demonstrate that auditory information is primarily processed in a spectral-temporal-independent manner by both monkeys and humans. Singular value decomposition of known visual spatiotemporal contrast sensitivity, in comparison to our auditory spectral-temporal sensitivity, revealed a striking contrast in how the brain encodes natural sounds as opposed to natural images, as vision appears to be space-time inseparable.
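To make the separability analysis referred to above concrete, here is a minimal sketch (not the authors' code) of how a spectrotemporal MTF sampled over ripple densities and velocities can be tested for separability with a singular value decomposition; the example matrix, tuning shapes, and the separability index defined as the first squared singular value over the sum of all squared singular values are illustrative assumptions.

```python
import numpy as np

# Hypothetical MTF: sensitivity sampled on a grid of ripple densities (cyc/oct)
# and velocities (Hz). All values below are invented for illustration.
densities  = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])     # spectral modulation (cyc/oct)
velocities = np.array([2.0, 4.0, 8.0, 16.0, 32.0, 64.0])  # temporal modulation (Hz)

rng = np.random.default_rng(1)
spectral_tuning = np.exp(-(np.log2(densities + 0.25) ** 2) / 2.0)
temporal_tuning = np.exp(-((np.log2(velocities) - 5.0) ** 2) / 4.0)
mtf = np.outer(spectral_tuning, temporal_tuning) + 0.05 * rng.standard_normal((6, 6))

# Singular value decomposition of the density-by-velocity sensitivity matrix.
U, s, Vt = np.linalg.svd(mtf)

# Separability index: fraction of variance captured by the best rank-1 (separable)
# approximation; 1.0 would mean the MTF is a pure product of a spectral and a temporal MTF.
alpha = s[0] ** 2 / np.sum(s ** 2)
print(f"separability index alpha = {alpha:.3f}")

# The rank-1 reconstruction yields the 'independent' spectral and temporal MTFs.
spectral_mtf = U[:, 0] * np.sqrt(s[0])
temporal_mtf = Vt[0, :] * np.sqrt(s[0])
```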
Affiliation(s)
- Robert F van der Willigen
- Section Neurophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- School of Communication, Media and Information Technology, Rotterdam University of Applied Sciences, Rotterdam, The Netherlands
- Research Center Creating 010, Rotterdam University of Applied Sciences, Rotterdam, The Netherlands
- Huib Versnel
- Section Neurophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Department of Otorhinolaryngology and Head & Neck Surgery, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- A John van Opstal
- Section Neurophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
2. van Opstal AJ. Neural encoding of instantaneous kinematics of eye-head gaze shifts in monkey superior colliculus. Commun Biol 2023;6:927. PMID: 37689726. PMCID: PMC10492853. DOI: 10.1038/s42003-023-05305-z.
Abstract
The midbrain superior colliculus is a crucial sensorimotor stage for programming and generating saccadic eye-head gaze shifts. Although it is well established that superior colliculus cells encode a neural command that specifies the amplitude and direction of the upcoming gaze-shift vector, there is controversy about the role of the firing-rate dynamics of these neurons during saccades. In our earlier work, we proposed a simple quantitative model that explains how the recruited superior colliculus population may specify the detailed kinematics (trajectories and velocity profiles) of head-restrained saccadic eye movements. We here show that the same principles may apply to a wide range of saccadic eye-head gaze shifts with strongly varying kinematics, despite the substantial nonlinearities and redundancy involved in programming and executing rapid goal-directed eye-head gaze shifts to peripheral targets. Our findings could provide additional evidence for an important role of the superior colliculus in the optimal control of saccades.
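The ensemble-coding scheme summarized in this abstract can be illustrated in a few lines: each spike of each recruited collicular cell is assumed to add a small, site-specific movement contribution, so the desired gaze displacement at any moment is the weighted cumulative spike count of the population. The sketch below is a toy reconstruction under that assumption; the burst shape, cell number, and scaling are invented, and it is not the paper's model code.

```python
import numpy as np

rng = np.random.default_rng(0)

n_neurons, dt, t_max = 200, 0.001, 0.1            # 200 recruited cells, 1-ms bins, 100 ms
t = np.arange(0.0, t_max, dt)

# Invented burst profile per neuron: gamma-shaped rate peaking (~500 spikes/s) near 25 ms.
rate = 500.0 * (t / 0.025) * np.exp(1.0 - t / 0.025)          # spikes/s

# Each spike of cell k adds a fixed site-specific movement vector m_k; here the vectors
# are scaled so that the expected total spike count maps onto a 40-deg horizontal shift.
expected_total_spikes = n_neurons * np.sum(rate * dt)
m = np.tile(np.array([40.0, 0.0]) / expected_total_spikes, (n_neurons, 1))

spikes = rng.poisson(rate * dt, size=(n_neurons, t.size))      # Poisson spike counts

# Dynamic linear summation: the cumulative population spike count, weighted by the
# site vectors, specifies the desired instantaneous gaze trajectory and its kinematics.
gaze_traj = np.cumsum(spikes, axis=1).T @ m                    # time x (azimuth, elevation)
gaze_vel = np.gradient(gaze_traj[:, 0], dt)                    # deg/s

print(f"gaze displacement ~ {gaze_traj[-1, 0]:.1f} deg, peak velocity ~ {gaze_vel.max():.0f} deg/s")
```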
Affiliation(s)
- A John van Opstal
- Section Neurophysics, Donders Centre for Neuroscience, Radboud University, Nijmegen, The Netherlands.
3. Sharma S, Mens LHM, Snik AFM, van Opstal AJ, van Wanrooij MM. Hearing Asymmetry Biases Spatial Hearing in Bimodal Cochlear-Implant Users Despite Bilateral Low-Frequency Hearing Preservation. Trends Hear 2023;27:23312165221143907. PMID: 36605011. PMCID: PMC9829999. DOI: 10.1177/23312165221143907.
Abstract
Many cochlear implant users with binaural residual (acoustic) hearing benefit from combining electric and acoustic stimulation (EAS) in the implanted ear with acoustic amplification in the other. These bimodal EAS listeners can potentially use low-frequency binaural cues to localize sounds. However, their hearing is generally asymmetric for mid- and high-frequency sounds, perturbing or even abolishing binaural cues. Here, we investigated the effect of a frequency-dependent binaural asymmetry in hearing thresholds on sound localization by seven bimodal EAS listeners. Frequency dependence was probed by presenting sounds with power in low-, mid-, high-, or mid-to-high-frequency bands. Frequency-dependent hearing asymmetry was present in the bimodal EAS listening condition (when using both devices) but was also induced by independently switching devices on or off. Using both devices, hearing was near symmetric for low frequencies, asymmetric for mid frequencies with better hearing thresholds in the implanted ear, and monaural for high frequencies with no hearing in the non-implanted ear. Results show that sound-localization performance was poor in general. Typically, localization was strongly biased toward the better hearing ear. We observed that hearing asymmetry was a good predictor of these biases. Notably, even when hearing was symmetric, a preferential bias toward the ear using the hearing aid was revealed. We discuss how frequency dependence of any hearing asymmetry may lead to binaural cues that are spatially inconsistent as the spectrum of a sound changes. We speculate that this inconsistency may prevent accurate sound localization even after long-term exposure to the hearing asymmetry.
Affiliation(s)
- Snandan Sharma
- Department of Biophysics, Radboud University, Donders Institute for Brain, Cognition and Behavior, Nijmegen, The Netherlands
- Lucas H.M. Mens
- Department of Otorhinolaryngology, Radboud University Medical Centre, Donders Institute for Brain, Cognition and Behavior, Nijmegen, The Netherlands
- Ad F.M. Snik
- Department of Biophysics, Radboud University, Donders Institute for Brain, Cognition and Behavior, Nijmegen, The Netherlands
- A. John van Opstal
- Department of Biophysics, Radboud University, Donders Institute for Brain, Cognition and Behavior, Nijmegen, The Netherlands
- Marc M. van Wanrooij
- Department of Biophysics, Radboud University, Donders Institute for Brain, Cognition and Behavior, Nijmegen, The Netherlands
- Marc van Wanrooij, Department of Biophysics, Radboud University, Donders Institute for Brain, Cognition and Behavior, The Netherlands.
4.
Abstract
We tested whether sensitivity to acoustic spectrotemporal modulations can be observed from reaction times for normal-hearing and impaired-hearing conditions. In a manual reaction-time task, normal-hearing listeners had to detect the onset of a ripple (with a density between 0 and 8 cycles/octave and a fixed modulation depth of 50%) that moved up or down the log-frequency axis at constant velocity (0-64 Hz), in an otherwise unmodulated broadband white noise. Spectral and temporal modulations elicited band-pass filtered sensitivity characteristics, with fastest detection rates around 1 cycle/oct and 32 Hz for normal-hearing conditions. These results closely resemble data from other studies that typically used the modulation-depth threshold as a sensitivity criterion. To simulate hearing impairment, stimuli were processed with a 6-channel cochlear-implant vocoder, and a hearing-aid simulation that introduced separate spectral smearing and low-pass filtering. Reaction times were always much slower compared to normal hearing, especially for the highest spectral densities. Binaural performance was predicted well by the benchmark race model of binaural independence, which models statistical facilitation of independent monaural channels. For the impaired-hearing simulations this implied a “best-of-both-worlds” principle in which the listeners relied on the hearing-aid ear to detect spectral modulations, and on the cochlear-implant ear for temporal-modulation detection. Although singular-value decomposition indicated that the joint spectrotemporal sensitivity matrix could be largely reconstructed from independent temporal and spectral sensitivity functions, in line with time-spectrum separability, a substantial inseparable spectral-temporal interaction was present in all hearing conditions. These results suggest that the reaction-time task yields a valid and effective objective measure of acoustic spectrotemporal-modulation sensitivity.
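The race-model benchmark mentioned in this abstract has a compact probabilistic form: if the two monaural channels detect the ripple independently, then F_bin(t) = F_L(t) + F_R(t) - F_L(t)·F_R(t), where F denotes the reaction-time distribution function. The sketch below, with invented reaction-time samples, shows how such a prediction can be compared with observed binaural reaction times; it illustrates statistical facilitation only and is not the study's analysis code.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented monaural reaction-time samples (ms) for the two ears.
rt_left  = rng.gamma(shape=8.0, scale=45.0, size=2000) + 150.0
rt_right = rng.gamma(shape=8.0, scale=55.0, size=2000) + 150.0

t = np.arange(150, 1200)          # time axis (ms)

def cdf(samples, t):
    """Empirical cumulative distribution function of reaction times."""
    return np.searchsorted(np.sort(samples), t, side="right") / samples.size

F_L, F_R = cdf(rt_left, t), cdf(rt_right, t)

# Race model (probability summation) for two independent monaural channels:
# a binaural response is triggered as soon as either channel finishes.
F_race = F_L + F_R - F_L * F_R

# "Observed" binaural reaction times, here simulated as exactly such a race.
rt_bin = np.minimum(rng.gamma(8.0, 45.0, 2000) + 150.0,
                    rng.gamma(8.0, 55.0, 2000) + 150.0)
F_bin = cdf(rt_bin, t)

# A positive excess would mean faster responses than independent channels allow,
# i.e., genuine binaural integration rather than statistical facilitation.
print(f"max violation of the race bound: {np.max(F_bin - F_race):.3f}")
```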
Affiliation(s)
- Lidwien C E Veugen
- Department of Biophysics, Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, Netherlands
- A John van Opstal
- Department of Biophysics, Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, Netherlands
- Marc M van Wanrooij
- Department of Biophysics, Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, Netherlands
5. Sharma S, Nogueira W, van Opstal AJ, Chalupper J, Mens LHM, van Wanrooij MM. Amount of Frequency Compression in Bimodal Cochlear Implant Users Is a Poor Predictor for Audibility and Spatial Hearing. J Speech Lang Hear Res 2021;64:5000-5013. PMID: 34714704. DOI: 10.1044/2021_jslhr-20-00653.
Abstract
PURPOSE Speech understanding in noise and horizontal sound localization are poor in most cochlear implant (CI) users with a hearing aid (bimodal stimulation). This study investigated the effect of static and less-extreme adaptive frequency compression in hearing aids on spatial hearing. By means of frequency compression, we aimed to restore high-frequency audibility, and thus improve sound localization and spatial speech recognition. METHOD Sound-detection thresholds, sound localization, and spatial speech recognition were measured in eight bimodal CI users, with and without frequency compression. We tested two compression algorithms: a static algorithm, which compressed frequencies beyond the compression knee point (160 or 480 Hz), and an adaptive algorithm, which aimed to compress only consonants, leaving vowels unaffected (adaptive knee-point frequencies from 736 to 2946 Hz). RESULTS Compression yielded a strong audibility benefit (high-frequency thresholds improved by 40 and 24 dB for static and adaptive compression, respectively), but no meaningful improvement in localization performance (errors remained >30 deg) or in spatial speech recognition across all participants. Localization biases without compression (toward the hearing-aid and implant side for low- and high-frequency sounds, respectively) disappeared or reversed with compression. The audibility benefits provided to each bimodal user partially explained any individual improvements in localization performance; shifts in bias; and, for six out of eight participants, benefits in spatial speech recognition. CONCLUSIONS We speculate that limiting factors such as a persistent hearing asymmetry and mismatch in spectral overlap prevent compression in bimodal users from improving sound localization. Therefore, the benefit in spatial release from masking by compression is likely due to a shift of attention to the ear with the better signal-to-noise ratio facilitated by compression, rather than an improved spatial selectivity. Supplemental Material: https://doi.org/10.23641/asha.16869485.
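As a rough illustration of the static scheme described above (all frequencies beyond a knee point are squeezed into the region of audible residual hearing), the sketch below maps input to output frequencies with a fixed knee point and compression ratio. The 160-Hz knee point comes from the abstract; the compression ratio and the log-domain mapping are assumptions, not the hearing-aid algorithm itself.

```python
import numpy as np

def static_frequency_compression(f_in, knee_hz=160.0, ratio=3.0):
    """Map input frequencies (Hz) to output frequencies (Hz).

    Frequencies below the knee point are left untouched; frequencies above it are
    compressed logarithmically by the given ratio so that a wide input range fits
    into a narrow region of residual low-frequency hearing. Ratio and mapping are
    illustrative assumptions.
    """
    f_in = np.asarray(f_in, dtype=float)
    f_out = f_in.copy()
    above = f_in > knee_hz
    f_out[above] = knee_hz * (f_in[above] / knee_hz) ** (1.0 / ratio)
    return f_out

# Example: where do speech-relevant frequencies land after compression?
freqs = np.array([125.0, 250.0, 500.0, 1000.0, 2000.0, 4000.0, 8000.0])
for f, fc in zip(freqs, static_frequency_compression(freqs)):
    print(f"{f:7.0f} Hz -> {fc:6.0f} Hz")
```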
Affiliation(s)
- Snandan Sharma
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Waldo Nogueira
- Department of Otolaryngology, Cluster of Excellence Hearing4all, Medical University Hannover, Germany
- A John van Opstal
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Josef Chalupper
- Advanced Bionics, European Research Center, Hannover, Germany
- Lucas H M Mens
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, the Netherlands
- Marc M van Wanrooij
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
6. van de Rijt LPH, van Opstal AJ, van Wanrooij MM. Multisensory Integration-Attention Trade-Off in Cochlear-Implanted Deaf Individuals. Front Neurosci 2021;15:683804. PMID: 34393707. PMCID: PMC8358073. DOI: 10.3389/fnins.2021.683804.
Abstract
The cochlear implant (CI) allows profoundly deaf individuals to partially recover hearing. Still, due to the coarse acoustic information provided by the implant, CI users have considerable difficulties in recognizing speech, especially in noisy environments. CI users therefore rely heavily on visual cues to augment speech recognition, more so than normal-hearing individuals. However, it is unknown how attention to one (focused) or both (divided) modalities plays a role in multisensory speech recognition. Here we show that unisensory speech listening and reading were negatively impacted in divided-attention tasks for CI users—but not for normal-hearing individuals. Our psychophysical experiments revealed that, as expected, listening thresholds were consistently better for the normal-hearing, while lipreading thresholds were largely similar for the two groups. Moreover, audiovisual speech recognition for normal-hearing individuals could be described well by probabilistic summation of auditory and visual speech recognition, while CI users were better integrators than expected from statistical facilitation alone. Our results suggest that this benefit in integration comes at a cost. Unisensory speech recognition is degraded for CI users when attention needs to be divided across modalities. We conjecture that CI users exhibit an integration-attention trade-off. They focus solely on a single modality during focused-attention tasks, but need to divide their limited attentional resources in situations with uncertainty about the upcoming stimulus modality. We argue that in order to determine the benefit of a CI for speech recognition, situational factors need to be discounted by presenting speech in realistic or complex audiovisual environments.
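The probabilistic-summation benchmark used in this study has a simple closed form: if auditory and visual recognition are statistically independent, the chance of recognizing a word from the combined input is P_AV = P_A + P_V - P_A·P_V. A minimal sketch with made-up psychometric values (not the study's data or code):

```python
import numpy as np

# Hypothetical unisensory word-recognition probabilities at several
# acoustic signal-to-noise ratios (all values invented for illustration).
snr_db = np.array([-21.0, -18.0, -15.0, -12.0, -9.0])
p_auditory = np.array([0.05, 0.20, 0.50, 0.80, 0.95])   # listening only
p_visual = 0.30                                          # lipreading only

# Probabilistic summation: the benefit expected from two independent channels.
p_av_predicted = p_auditory + p_visual - p_auditory * p_visual

# Hypothetical observed audiovisual scores; values above the prediction would
# indicate integration beyond statistical facilitation (as reported for CI users).
p_av_observed = np.array([0.45, 0.60, 0.75, 0.90, 0.97])

for snr, pred, obs in zip(snr_db, p_av_predicted, p_av_observed):
    verdict = "super-additive" if obs > pred else "within prediction"
    print(f"SNR {snr:6.1f} dB: predicted {pred:.2f}, observed {obs:.2f} ({verdict})")
```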
Affiliation(s)
- Luuk P H van de Rijt
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboudumc, Nijmegen, Netherlands
- A John van Opstal
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
- Marc M van Wanrooij
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
7. Wang L, Noordanus E, van Opstal AJ. Estimating multiple latencies in the auditory system from auditory steady-state responses on a single EEG channel. Sci Rep 2021;11:2150. PMID: 33495484. PMCID: PMC7835249. DOI: 10.1038/s41598-021-81232-5.
Abstract
The latency of the auditory steady-state response (ASSR) may provide valuable information regarding the integrity of the auditory system, as it could potentially reveal the presence of multiple intracerebral sources. To estimate multiple latencies from high-order ASSRs, we propose a novel two-stage procedure that consists of a nonparametric estimation method, called apparent latency from phase coherence (ALPC), followed by a heuristic sequential forward selection algorithm (SFS). Compared with existing methods, ALPC-SFS requires few prior assumptions, and is straightforward to implement for higher-order nonlinear responses to multi-cosine sound complexes with their initial phases set to zero. It systematically evaluates the nonlinear components of the ASSRs by estimating multiple latencies, automatically identifies involved ASSR components, and reports a latency consistency index. To verify the proposed method, we performed simulations for several scenarios: two nonlinear subsystems with different or overlapping outputs. We compared the results from our method with predictions from existing, parametric methods. We also recorded the EEG from ten normal-hearing adults by bilaterally presenting superimposed tones with four frequencies that evoke a unique set of ASSRs. From these ASSRs, two major latencies were found to be stable across subjects on repeated measurement days. The two latencies are dominated by low-frequency (LF) (near 40 Hz, at around 41-52 ms) and high-frequency (HF) (> 80 Hz, at around 21-27 ms) ASSR components. The frontal-central brain region showed longer latencies on LF components, but shorter latencies on HF components, when compared with temporal-lobe regions. In conclusion, the proposed nonparametric ALPC-SFS method, applied to zero-phase, multi-cosine sound complexes is more suitable for evaluating embedded nonlinear systems underlying ASSRs than existing methods. It may therefore be a promising objective measure for hearing performance and auditory cortex (dys)function.
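The core idea behind an apparent-latency-from-phase-coherence estimate can be stated in one line: a pure delay tau adds a phase of -2*pi*f*tau at every response frequency f, so the latency is the tau that makes the delay-compensated phases of all ASSR components line up. The sketch below illustrates that idea on invented component frequencies and phases; it is not the authors' ALPC-SFS implementation, which additionally selects which (nonlinear) components share a common latency.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical ASSR components: frequencies (Hz) and measured response phases (rad).
# Phases are generated from an assumed 25-ms latency plus an offset and noise,
# then wrapped; in practice they would come from the EEG spectrum.
freqs = np.array([84.0, 88.0, 92.0, 96.0, 172.0, 180.0])
true_latency = 0.025                                             # s (assumed)
phases = -2.0 * np.pi * freqs * true_latency + 0.6 + 0.05 * rng.standard_normal(freqs.size)
phases = np.angle(np.exp(1j * phases))                           # wrap to (-pi, pi]

def phase_coherence(latency, freqs, phases):
    """Coherence (0-1) of latency-compensated phases; 1 means all components agree."""
    compensated = phases + 2.0 * np.pi * freqs * latency         # undo the delay-induced slope
    return np.abs(np.mean(np.exp(1j * compensated)))

# Scan candidate latencies and keep the one with maximal phase coherence.
candidates = np.arange(0.0, 0.060, 0.0001)                       # 0-60 ms in 0.1-ms steps
coherences = np.array([phase_coherence(tau, freqs, phases) for tau in candidates])
best = candidates[np.argmax(coherences)]
print(f"estimated latency ~ {best * 1e3:.1f} ms (coherence {coherences.max():.3f})")
```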
Affiliation(s)
- Lei Wang
- Department of Biophysics, Radboud University, Nijmegen, 6525 AJ, The Netherlands.
- Donders Centre for Neuroscience, Radboud University, Nijmegen, 6525 AJ, The Netherlands.
- Elisabeth Noordanus
- Department of Biophysics, Radboud University, Nijmegen, 6525 AJ, The Netherlands
- Donders Centre for Neuroscience, Radboud University, Nijmegen, 6525 AJ, The Netherlands
- A John van Opstal
- Department of Biophysics, Radboud University, Nijmegen, 6525 AJ, The Netherlands
- Donders Centre for Neuroscience, Radboud University, Nijmegen, 6525 AJ, The Netherlands
8.
Abstract
To program a goal-directed response in the presence of acoustic reflections, the audio-motor system should suppress the detection of time-delayed sources. We examined the effects of spatial separation and interstimulus delay on the ability of human listeners to localize a pair of broadband sounds in the horizontal plane. Participants indicated how many sounds were heard and where these were perceived by making one or two head-orienting localization responses. Results suggest that perceptual fusion of the two sounds depends on delay and spatial separation. Leading and lagging stimuli in close spatial proximity required longer stimulus delays to be perceptually separated than those further apart. Whenever participants heard one sound, their localization responses for synchronous sounds were oriented to a weighted average of both source locations. For short delays, responses were directed toward the leading stimulus location. Increasing spatial separation enhanced this effect. For longer delays, responses were again directed toward a weighted average. When participants perceived two sounds, the first and the second response were directed to either of the leading and lagging source locations. The perceived locations were often interchanged in their temporal order (in ∼40% of trials). We show that the percept of two distinct sounds requires sufficient spatiotemporal separation, after which localization can be performed with high accuracy. We propose that the percept of the temporal order of two concurrent sounds results from a different process than localization and discuss how dynamic lateral excitatory-inhibitory interactions within a spatial sensorimotor map could explain the findings.
NEW & NOTEWORTHY Sound localization requires spectral and temporal processing of implicit acoustic cues, and is seriously challenged when multiple sources coincide closely in space and time. We systematically varied spatial-temporal disparities for two sounds and instructed listeners to generate goal-directed head movements. We found that even when the auditory system has accurate representations of both sources, it still has trouble deciding whether the scene contained one or two sounds, and in which order they appeared.
Affiliation(s)
- Guus C van Bentum
- Department of Biophysics, Donders Center for Neuroscience, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Marc M van Wanrooij
- Department of Biophysics, Donders Center for Neuroscience, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- A John van Opstal
- Department of Biophysics, Donders Center for Neuroscience, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
9. Ausili SA, Backus B, Agterberg MJH, van Opstal AJ, van Wanrooij MM. Sound Localization in Real-Time Vocoded Cochlear-Implant Simulations With Normal-Hearing Listeners. Trends Hear 2019;23:2331216519847332. PMID: 31088265. PMCID: PMC6535744. DOI: 10.1177/2331216519847332.
Abstract
Bilateral cochlear-implant (CI) users and single-sided deaf listeners with a CI are less effective at localizing sounds than normal-hearing (NH) listeners. This performance gap is due to the degradation of binaural and monaural sound localization cues, caused by a combination of device-related and patient-related issues. In this study, we targeted the device-related issues by measuring sound localization performance of 11 NH listeners, listening to free-field stimuli processed by a real-time CI vocoder. The use of a real-time vocoder is a new approach, which enables testing in a free-field environment. For the NH listening condition, all listeners accurately and precisely localized sounds according to a linear stimulus–response relationship with an optimal gain and a minimal bias both in the azimuth and in the elevation directions. In contrast, when listening with bilateral real-time vocoders, listeners tended to orient either to the left or to the right in azimuth and were unable to determine sound source elevation. When listening with an NH ear and a unilateral vocoder, localization was impoverished on the vocoder side but improved toward the NH side. Localization performance was also reflected by systematic variations in reaction times across listening conditions. We conclude that perturbation of interaural temporal cues, reduction of interaural level cues, and removal of spectral pinna cues by the vocoder impairs sound localization. Listeners seem to ignore cues that were made unreliable by the vocoder, leading to acute reweighting of available localization cues. We discuss how current CI processors prevent CI users from localizing sounds in everyday environments.
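The 'linear stimulus-response relationship with an optimal gain and a minimal bias' mentioned above is usually quantified by regressing response locations on target locations: a gain near 1 and a bias near 0 indicate accurate localization, whereas a collapsed gain and a large bias signal the orienting-to-one-side behaviour described for the vocoder conditions. A small sketch with invented azimuth data (not the study's analysis code):

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented target azimuths (deg) and head-orienting responses for two conditions.
targets = rng.uniform(-75.0, 75.0, size=200)

# Normal-hearing-like behaviour: gain ~1, small bias, little scatter.
resp_nh = 0.95 * targets + 1.0 + 5.0 * rng.standard_normal(targets.size)

# Bilateral-vocoder-like behaviour: responses collapse toward one side.
resp_voc = 0.15 * targets - 25.0 + 15.0 * rng.standard_normal(targets.size)

def gain_and_bias(targets, responses):
    """Least-squares fit of responses = gain * targets + bias."""
    gain, bias = np.polyfit(targets, responses, deg=1)
    return gain, bias

for label, resp in [("normal hearing", resp_nh), ("bilateral vocoder", resp_voc)]:
    g, b = gain_and_bias(targets, resp)
    print(f"{label:18s}: gain = {g:5.2f}, bias = {b:6.1f} deg")
```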
Affiliation(s)
- Sebastian A Ausili
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Martijn J H Agterberg
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen Medical Center, the Netherlands
- A John van Opstal
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Marc M van Wanrooij
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
10. van de Rijt LPH, Roye A, Mylanus EAM, van Opstal AJ, van Wanrooij MM. The Principle of Inverse Effectiveness in Audiovisual Speech Perception. Front Hum Neurosci 2019;13:335. PMID: 31611780. PMCID: PMC6775866. DOI: 10.3389/fnhum.2019.00335.
Abstract
We assessed how synchronous speech listening and lipreading affect speech recognition in acoustic noise. In simple audiovisual perceptual tasks, inverse effectiveness is often observed, which holds that the weaker the unimodal stimuli, or the poorer their signal-to-noise ratio, the stronger the audiovisual benefit. So far, however, inverse effectiveness has not been demonstrated for complex audiovisual speech stimuli. Here we assess whether this multisensory integration effect can also be observed for the recognizability of spoken words. To that end, we presented audiovisual sentences to 18 native-Dutch normal-hearing participants, who had to identify the spoken words from a finite list. Speech-recognition performance was determined for auditory-only, visual-only (lipreading), and auditory-visual conditions. To modulate acoustic task difficulty, we systematically varied the auditory signal-to-noise ratio. In line with a commonly observed multisensory enhancement of speech recognition, audiovisual words were more easily recognized than auditory-only words (recognition thresholds of -15 and -12 dB, respectively). We here show that the difficulty of recognizing a particular word, either acoustically or visually, determines the occurrence of inverse effectiveness in audiovisual word integration. Thus, words that are better heard or recognized through lipreading benefit less from bimodal presentation. Audiovisual performance at the lowest acoustic signal-to-noise ratios (45%) fell below the visual recognition rates (60%), reflecting an actual deterioration of lipreading in the presence of excessive acoustic noise. This suggests that the brain may adopt a strategy in which attention has to be divided between listening and lipreading.
Affiliation(s)
- Luuk P. H. van de Rijt
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Medical Center, Nijmegen, Netherlands
- Anja Roye
- Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, Netherlands
- Emmanuel A. M. Mylanus
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Medical Center, Nijmegen, Netherlands
- A. John van Opstal
- Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, Netherlands
- Marc M. van Wanrooij
- Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, Netherlands
11. Sharma S, Mens LHM, Snik AFM, van Opstal AJ, van Wanrooij MM. An Individual With Hearing Preservation and Bimodal Hearing Using a Cochlear Implant and Hearing Aids Has Perturbed Sound Localization but Preserved Speech Perception. Front Neurol 2019;10:637. PMID: 31293495. PMCID: PMC6598447. DOI: 10.3389/fneur.2019.00637.
Abstract
This study describes sound localization and speech-recognition-in-noise abilities of a cochlear-implant user with electro-acoustic stimulation (EAS) in one ear, and a hearing aid in the contralateral ear. This listener had low-frequency residual hearing (up to 250 Hz) within the normal range in both ears. The objective was to determine how hearing devices affect spatial hearing for an individual with substantial unaided low-frequency residual hearing. Sound-localization performance was assessed for three sounds with different bandpass characteristics: low center frequency (100-400 Hz), mid center frequency (500-1,500 Hz) and high-frequency broadband (500-20,000 Hz) noise. Speech recognition was assessed with the Dutch Matrix sentence test presented in noise. Tests were performed while the listener used several on-off combinations of the devices. The listener localized low-center frequency sounds well in all hearing conditions, but mid-center frequency and high-frequency broadband sounds were localized well almost exclusively in the completely unaided condition (mid-center frequency sounds were also localized well with the EAS device alone). Speech recognition was best in the fully aided condition with speech presented in the front and noise presented at either side. Furthermore, there was no significant improvement in speech recognition with all devices on, compared to when the listener used her cochlear implant only. Hearing aids and cochlear implant impair high-frequency spatial hearing due to improper weighting of interaural time and level difference cues. The results reinforce the notion that hearing symmetry is important for sound localization. The symmetry is perturbed by the hearing devices for higher frequencies. Speech recognition depends mainly on hearing through the cochlear implant and is not significantly improved with the added information from hearing aids. A contralateral hearing aid provides benefit when the noise is spatially separated from the speech. However, this benefit is explained by the head shadow in that ear, rather than by an ability to spatially segregate noise from speech, as sound localization was perturbed with all devices in use.
Affiliation(s)
- Snandan Sharma
- Department of Biophysics, Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, Netherlands
- Lucas H M Mens
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behavior, Radboud University Medical Center, Nijmegen, Netherlands
- Ad F M Snik
- Department of Biophysics, Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, Netherlands
- A John van Opstal
- Department of Biophysics, Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, Netherlands
- Marc M van Wanrooij
- Department of Biophysics, Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, Netherlands
12. van Opstal AJ, Kasap B. Electrical stimulation in a spiking neural network model of monkey superior colliculus. Prog Brain Res 2019;249:153-166. PMID: 31325975. PMCID: PMC6744279. DOI: 10.1016/bs.pbr.2019.04.008.
Abstract
The superior colliculus (SC) generates saccades by recruiting a population of cells in its topographically organized motor map. Supra-threshold electrical stimulation in the SC produces a normometric saccade with little effect of the stimulation parameters. Moreover, the kinematics of electrically evoked saccades strongly resemble natural, visual-evoked saccades. These findings support models in which the saccade vector is determined by a center-of-gravity computation of activated neurons, while trajectory and kinematics arise in brainstem-cerebellar feedback circuits. Recent single-unit recordings, however, have indicated that the SC population also specifies the instantaneous saccade kinematics, supporting an alternative model, in which the saccade trajectory results from dynamic summation of movement effects of all SC spike trains. Here we reconcile the linear summation model with stimulation results, by assuming that the electric field directly activates a relatively small set of neurons around the electrode tip, which subsequently sets up a large population response through lateral synaptic interactions.
Affiliation(s)
- A John van Opstal
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands.
- Bahadir Kasap
- Department of Biophysics, Radboud University, Donders Centre for Neuroscience, Nijmegen, The Netherlands
13. Kasap B, van Opstal AJ. Microstimulation in a spiking neural network model of the midbrain superior colliculus. PLoS Comput Biol 2019;15:e1006522. PMID: 30978180. PMCID: PMC6481873. DOI: 10.1371/journal.pcbi.1006522.
Abstract
The midbrain superior colliculus (SC) generates a rapid saccadic eye movement to a sensory stimulus by recruiting a population of cells in its topographically organized motor map. Supra-threshold electrical microstimulation in the SC reveals that the site of stimulation produces a normometric saccade vector with little effect of the stimulation parameters. Moreover, electrically evoked saccades (E-saccades) have kinematic properties that strongly resemble natural, visual-evoked saccades (V-saccades). These findings support models in which the saccade vector is determined by a center-of-gravity computation of activated neurons, while its trajectory and kinematics arise from downstream feedback circuits in the brainstem. Recent single-unit recordings, however, have indicated that the SC population also specifies instantaneous kinematics. These results support an alternative model, in which the desired saccade trajectory, including its kinematics, follows from instantaneous summation of movement effects of all SC spike trains. But how to reconcile this model with microstimulation results? Although it is thought that microstimulation activates a large population of SC neurons, the mechanism through which it arises is unknown. We developed a spiking neural network model of the SC, in which microstimulation directly activates a relatively small set of neurons around the electrode tip, which subsequently sets up a large population response through lateral synaptic interactions. We show that through this mechanism the population drives an E-saccade with near-normal kinematics that are largely independent of the stimulation parameters. Only at very low stimulus intensities does the network recruit a population with low firing rates, resulting in abnormally slow saccades.
Affiliation(s)
- Bahadir Kasap
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- A. John van Opstal
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
14. van Opstal AJ, Kasap B. Maps and sensorimotor transformations for eye-head gaze shifts: Role of the midbrain superior colliculus. Prog Brain Res 2019;249:19-33. PMID: 31325979. PMCID: PMC6745020. DOI: 10.1016/bs.pbr.2019.01.006.
Abstract
Single-unit recordings in head-restrained monkeys indicated that the population of saccade-related cells in the midbrain Superior Colliculus (SC) encodes the kinematics of desired straight saccade trajectories by the cumulative number of spikes. In addition, the nonlinear main sequence of saccades (their amplitude-peak velocity saturation) emerges from a spatial gradient of peak-firing rates of collicular neurons, rather than from neural saturation at brainstem burst generators. We here extend this idea to eye-head gaze shifts and illustrate how the cumulative spike-count in head-unrestrained monkeys relates to the desired gaze trajectory and its kinematics. We argue that the output of the motor SC is an abstract desired gaze-motor signal, which drives in a feedforward way the instantaneous kinematics of ongoing gaze shifts, including the strong influence of initial eye position on gaze kinematics. We propose that the neural population acts as a vectorial gaze pulse-generator for eye-head saccades, which is subsequently decomposed into signals that drive both motor systems in appropriate craniocentric reference frames within a dynamic gaze-velocity feedback loop.
Affiliation(s)
- A John van Opstal
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands.
- Bahadir Kasap
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
15. van de Rijt LPH, van Wanrooij MM, Snik AFM, Mylanus EAM, van Opstal AJ, Roye A. Measuring Cortical Activity During Auditory Processing with Functional Near-Infrared Spectroscopy. ACTA ACUST UNITED AC 2018;8:9-18. PMID: 31534793. PMCID: PMC6751080. DOI: 10.17430/1003278.
Abstract
Functional near-infrared spectroscopy (fNIRS) is an optical, non-invasive neuroimaging technique that investigates human brain activity by calculating concentrations of oxy- and deoxyhemoglobin. The aim of this publication is to review the current state of the art of how fNIRS has been used to study auditory function. We address temporal and spatial characteristics of the hemodynamic response to auditory stimulation as well as experimental factors that affect fNIRS data, such as acoustic and stimulus-driven effects. The rising importance of fNIRS in auditory neuroscience underlines the strong potential of the technology, and it seems likely that fNIRS will become a useful clinical tool.
Affiliation(s)
- Luuk P H van de Rijt
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Medical Center, Nijmegen, The Netherlands
- Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands
- Marc M van Wanrooij
- Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands
- Ad F M Snik
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Medical Center, Nijmegen, The Netherlands
- Emmanuel A M Mylanus
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Medical Center, Nijmegen, The Netherlands
- A John van Opstal
- Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands
- Anja Roye
- Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands
16.
Abstract
The midbrain superior colliculus (SC) is a crucial sensorimotor interface in the generation of rapid saccadic gaze shifts. For every saccade it recruits a large population of cells in its vectorial motor map. Supra-threshold electrical microstimulation in the SC reveals that the stimulated site produces the saccade vector specified by the motor map. Electrically evoked saccades (E-saccades) have kinematic properties that strongly resemble natural, visual-evoked saccades (V-saccades), with little influence of the stimulation parameters. Moreover, synchronous stimulation at two sites yields eye movements that resemble a weighted vector average of the individual stimulation effects. Single-unit recordings have indicated that the SC population acts as a vectorial pulse generator by specifying the instantaneous gaze kinematics through dynamic summation of the movement effects of all SC spike trains. But how to reconcile the nonspecific stimulation pulses with these intricate saccade properties? We recently developed a spiking neural network model of the SC, in which microstimulation initially activates a relatively small set of (~50) neurons around the electrode tip, which subsequently sets up a large population response (~5,000 neurons) through lateral synaptic interactions. Single-site microstimulation in this network thus produces the saccade properties and firing rate profiles as seen in single-unit recording experiments. We here show that this mechanism also accounts for many results of simultaneous double stimulation at different SC sites. The resulting E-saccade trajectories resemble a weighted average of the single-site effects, in which the stimulus current strengths of the electrode pulses serve as weighting factors. We discuss under which conditions the network produces effects that deviate from experimental results.
17. Ege R, van Opstal AJ, Bremen P, van Wanrooij MM. Testing the Precedence Effect in the Median Plane Reveals Backward Spatial Masking of Sound. Sci Rep 2018;8:8670. PMID: 29875363. PMCID: PMC5989261. DOI: 10.1038/s41598-018-26834-2.
Abstract
Two synchronous sounds at different locations in the midsagittal plane induce a fused percept at a weighted-average position, with weights depending on relative sound intensities. In the horizontal plane, sound fusion (stereophony) disappears with a small onset asynchrony of 1-4 ms. The leading sound then fully determines the spatial percept (the precedence effect). Given that accurate localisation in the median plane requires an analysis of pinna-related spectral-shape cues, which takes ~25-30 ms of sound input to complete, we wondered at what time scale a precedence effect for elevation would manifest. Listeners localised the first of two sounds, with spatial disparities of 10-80 deg and inter-stimulus delays of 0-320 ms. We demonstrate full fusion (averaging), and largest response variability, for onset asynchronies up to at least 40 ms for all spatial disparities. Weighted averaging persisted, and gradually decayed, for delays >160 ms, suggesting considerable backward masking. Moreover, response variability decreased with increasing delays. These results demonstrate that localisation undergoes substantial spatial blurring in the median plane by lagging sounds. Thus, the human auditory system, despite its high temporal resolution, is unable to spatially dissociate sounds in the midsagittal plane that co-occur within a time window of at least 160 ms.
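The weighted-averaging account of fusion described here can be written compactly: the perceived elevation of a fused percept is a weighted sum of the two source elevations, with the lead weight starting at 0.5 for synchronous, equally loud sounds and growing toward 1 as the lag is delayed or attenuated. The toy model below only illustrates that bookkeeping; the logistic form and all parameter values are assumptions, not the paper's fitted model.

```python
import numpy as np

def perceived_elevation(elev_lead, elev_lag, delay_ms, level_diff_db=0.0,
                        tau_ms=80.0, db_scale=10.0):
    """Toy weighted-averaging model for two sounds in the median plane.

    The lead weight is 0.5 for synchronous, equally loud sounds and increases
    with onset delay and with a level advantage of the lead. The functional
    form and constants are illustrative assumptions.
    """
    x = delay_ms / tau_ms + level_diff_db / db_scale
    w_lead = 1.0 / (1.0 + np.exp(-x))          # 0.5 at x = 0, saturating toward 1
    return w_lead * elev_lead + (1.0 - w_lead) * elev_lag

# Example: lead at +30 deg, lag at -10 deg elevation, for several onset delays.
for delay in [0.0, 40.0, 80.0, 160.0, 320.0]:
    e = perceived_elevation(30.0, -10.0, delay)
    print(f"delay {delay:5.0f} ms -> perceived elevation {e:5.1f} deg")
```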
Affiliation(s)
- Rachel Ege
- Biophysics Department, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, 6525 AJ, Nijmegen, The Netherlands
- A John van Opstal
- Biophysics Department, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, 6525 AJ, Nijmegen, The Netherlands.
- Peter Bremen
- Biophysics Department, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, 6525 AJ, Nijmegen, The Netherlands
- Department of Neuroscience, Erasmus Medical Center, P.O. Box 2040, 3000 CA, Rotterdam, The Netherlands
- Marc M van Wanrooij
- Biophysics Department, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, 6525 AJ, Nijmegen, The Netherlands.
18.
Abstract
In dynamic visual or auditory gaze double-steps, a brief target flash or sound burst is presented in midflight of an ongoing eye-head gaze shift. Behavioral experiments in humans and monkeys have indicated that the subsequent eye and head movements to the target are goal-directed, regardless of stimulus timing, first gaze shift characteristics, and initial conditions. This remarkable behavior requires that the gaze-control system 1) has continuous access to accurate signals about eye-in-head position and ongoing eye-head movements, 2) accounts for different internal signal delays, and 3) is able to update the retinal (TE) and head-centric (TH) target coordinates into appropriate eye-centered and head-centered motor commands on millisecond time scales. As predictive, feedforward remapping of targets cannot account for this behavior, we propose that targets are transformed and stored into a stable reference frame as soon as their sensory information becomes available. We present a computational model, in which recruited cells in the midbrain superior colliculus drive eyes and head to the stored target location through a common dynamic oculocentric gaze-velocity command, which is continuously updated from the stable goal and transformed into appropriate oculocentric and craniocentric motor commands. We describe two equivalent, yet conceptually different, implementations that both account for the complex, but accurate, kinematic behaviors and trajectories of eye-head gaze shifts under a variety of challenging multisensory conditions, such as in dynamic visual-auditory multisteps.
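The updating step this abstract requires (keeping a remembered target accurate across an intervening eye-head gaze shift) reduces, in its simplest vector form, to subtracting the gaze displacement made since the target was seen from the stored eye-centered target vector. The sketch below is a bare-bones 2-D illustration of that bookkeeping; it ignores the rotational 3-D geometry, signal delays, and the collicular dynamics treated in the paper.

```python
import numpy as np

def update_eye_centered_target(target_retinal, gaze_displacement):
    """Remap a remembered eye-centered target across an intervening gaze shift.

    target_retinal:    target re. the eye at the time of the flash (deg, az/el)
    gaze_displacement: gaze shift executed since the flash (deg, az/el)
    Returns the remaining motor error toward the (extinguished) target.
    Simple vector subtraction, i.e., a linear approximation of the full geometry.
    """
    return np.asarray(target_retinal, float) - np.asarray(gaze_displacement, float)

# Example double-step: a flash at 20 deg right, 10 deg up is presented in midflight
# of a gaze shift; by the time that gaze shift ends, gaze has moved 30 deg rightward.
target_flash = np.array([20.0, 10.0])
gaze_so_far = np.array([30.0, 0.0])

motor_error = update_eye_centered_target(target_flash, gaze_so_far)
print(f"second gaze shift required: {motor_error[0]:.0f} deg azimuth, {motor_error[1]:.0f} deg elevation")
# -> -10 deg (leftward) and +10 deg up, even though the flash appeared on the right.
```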
Affiliation(s)
- Bahadir Kasap
- Radboud University, Donders Institute for Brain, Cognition and Behavior, Department of Biophysics, Nijmegen, the Netherlands
- A John van Opstal
- Radboud University, Donders Institute for Brain, Cognition and Behavior, Department of Biophysics, Nijmegen, the Netherlands
19. Kasap B, van Opstal AJ. A spiking neural network model of the midbrain superior colliculus that generates saccadic motor commands. Biol Cybern 2017;111:249-268. PMID: 28528360. PMCID: PMC5506246. DOI: 10.1007/s00422-017-0719-9.
Abstract
Single-unit recordings suggest that the midbrain superior colliculus (SC) acts as an optimal controller for saccadic gaze shifts. The SC is proposed to be the site within the visuomotor system where the nonlinear spatial-to-temporal transformation is carried out: the population encodes the intended saccade vector by its location in the motor map (spatial), and its trajectory and velocity by the distribution of firing rates (temporal). The neurons' burst profiles vary systematically with their anatomical positions and intended saccade vectors, to account for the nonlinear main-sequence kinematics of saccades. Yet, the underlying collicular mechanisms that could result in these firing patterns are inaccessible to current neurobiological techniques. Here, we propose a simple spiking neural network model that reproduces the spike trains of saccade-related cells in the intermediate and deep SC layers during saccades. The model assumes that SC neurons have distinct biophysical properties for spike generation that depend on their anatomical position in combination with a center-surround lateral connectivity. Both factors are needed to account for the observed firing patterns. Our model offers a basis for neuronal algorithms for spatiotemporal transformations and bio-inspired optimal controllers.
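One ingredient named in this abstract, the center-surround lateral connectivity, is easy to sketch: each cell excites its close neighbours in the motor map and inhibits cells farther away, which helps shape a localized population burst. Below is a minimal Mexican-hat-style kernel on a one-dimensional strip of the motor map; the widths and weights are invented for illustration, and the spiking dynamics themselves are omitted.

```python
import numpy as np

# Positions of model neurons along a 1-D slice of the SC motor map, in mm.
positions = np.linspace(0.0, 5.0, 201)

def lateral_weight(u_pre, u_post, w_exc=1.0, sigma_exc=0.2, w_inh=0.6, sigma_inh=0.8):
    """Center-surround ('Mexican hat') connection strength between two map sites.

    Nearby cells excite each other (narrow Gaussian); distant cells inhibit
    (broad Gaussian). All constants are illustrative, not fitted model values.
    """
    d2 = (u_pre - u_post) ** 2
    return (w_exc * np.exp(-d2 / (2.0 * sigma_exc ** 2))
            - w_inh * np.exp(-d2 / (2.0 * sigma_inh ** 2)))

# Full connectivity matrix of the strip, and the net lateral input produced by a
# small cluster of active cells around u = 2.5 mm (e.g., near a stimulation site).
W = lateral_weight(positions[:, None], positions[None, :])
activity = np.exp(-((positions - 2.5) ** 2) / (2.0 * 0.1 ** 2))   # narrow seed of activity
net_input = W @ activity

print(f"strongest lateral excitation at {positions[np.argmax(net_input)]:.2f} mm, "
      f"strongest suppression at {positions[np.argmin(net_input)]:.2f} mm")
```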
Affiliation(s)
- Bahadir Kasap
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, HG00.800, Heyendaalseweg 135, 6525 AJ, Nijmegen, The Netherlands.
- A John van Opstal
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, HG00.800, Heyendaalseweg 135, 6525 AJ, Nijmegen, The Netherlands
20. Veugen LCE, Chalupper J, Mens LHM, Snik AFM, van Opstal AJ. Effect of extreme adaptive frequency compression in bimodal listeners on sound localization and speech perception. Cochlear Implants Int 2017;18:266-277. PMID: 28726592. DOI: 10.1080/14670100.2017.1353762.
Abstract
OBJECTIVES This study aimed to improve access to high-frequency interaural level differences (ILD) by applying extreme frequency compression (FC) in the hearing aid (HA) of 13 bimodal listeners, using a cochlear implant (CI) and conventional HA in opposite ears. DESIGN An experimental signal-adaptive frequency-lowering algorithm was tested, compressing frequencies above 160 Hz into the individual audible range of residual hearing, but only for consonants (adaptive FC), thus protecting vowel formants, with the aim of preserving speech perception. In a cross-over design with at least 5 weeks of acclimatization between sessions, bimodal performance with and without adaptive FC was compared for horizontal sound localization, speech understanding in quiet and in noise, and vowel, consonant and voice-pitch perception. RESULTS On average, adaptive FC did not significantly affect any of the test results. Yet, two subjects who were fitted with a relatively weak frequency compression ratio showed improved horizontal sound localization. After the study, four subjects preferred adaptive FC, four preferred standard frequency mapping, and four had no preference. Notably, the subjects preferring adaptive FC were those with the best performance on all tasks, both with and without adaptive FC. CONCLUSION On a group level, extreme adaptive FC did not change sound localization and speech understanding in bimodal listeners. Possible reasons are overly strong compression ratios, insufficient residual hearing, or that the adaptive switching, although preserving vowel perception, may have been ineffective in producing consistent ILD cues. Individual results suggested that two subjects were able to integrate the frequency-compressed HA input with that of the CI, and benefitted from enhanced binaural cues for horizontal sound localization.
Affiliation(s)
- Lidwien C E Veugen
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Josef Chalupper
- Advanced Bionics European Research Centre (AB ERC), Hannover, Germany
- Lucas H M Mens
- Department of Otorhinolaryngology, Radboud University Nijmegen Medical Centre, The Netherlands
- Ad F M Snik
- Department of Otorhinolaryngology, Radboud University Nijmegen Medical Centre, The Netherlands
- A John van Opstal
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
21.
Abstract
Conclusion In users of a cochlear implant (CI) and a hearing aid (HA) in contralateral ears, frequency-dependent loudness balancing between devices did not, on average, lead to improved speech understanding as compared to broadband balancing. However, nine out of 15 bimodal subjects showed significantly better speech understanding with either one of the fittings. Objectives Sub-optimal fittings and mismatches in loudness are possible explanations for the large individual differences seen in listeners using bimodal stimulation. Methods HA gain was adjusted for soft and loud input sounds in three frequency bands (0-548, 548-1000, and >1000 Hz) to match loudness with the CI. This procedure was compared to a simple broadband balancing procedure that reflected current clinical practice. In a three-visit cross-over design with 4 weeks between sessions, speech understanding was tested in quiet and in noise, and questionnaires were administered to assess benefit in the real world. Results Both procedures resulted in comparable HA gains. For speech in noise, a marginal bimodal benefit of 0.3 ± 4 dB was found, with large differences between subjects and spatial configurations. Speech understanding in quiet and in noise did not differ between the two loudness balancing procedures.
Affiliation(s)
- Lidwien C. E. Veugen
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, the Netherlands
- Josef Chalupper
- Advanced Bionics European Research Centre (AB ERC), Hannover, Germany
- Ad F. M. Snik
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen Medical Centre, the Netherlands
- A. John van Opstal
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, the Netherlands
| | - Lucas H. M. Mens
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen Medical Centre, the Netherlands
22
van de Rijt LPH, van Opstal AJ, Mylanus EAM, Straatman LV, Hu HY, Snik AFM, van Wanrooij MM. Temporal Cortex Activation to Audiovisual Speech in Normal-Hearing and Cochlear Implant Users Measured with Functional Near-Infrared Spectroscopy. Front Hum Neurosci 2016; 10:48. [PMID: 26903848 PMCID: PMC4750083 DOI: 10.3389/fnhum.2016.00048] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2015] [Accepted: 01/29/2016] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND Speech understanding may rely not only on auditory, but also on visual information. Non-invasive functional neuroimaging techniques can expose the neural processes underlying the integration of multisensory information required for speech understanding in humans. However, the acoustic noise produced by functional MRI (fMRI) limits its usefulness in auditory experiments, and electromagnetic artifacts caused by electronic implants worn by subjects can severely distort EEG and fMRI recordings. Therefore, we assessed audiovisual activation of temporal cortex with a silent, optical neuroimaging technique: functional near-infrared spectroscopy (fNIRS). METHODS We studied temporal cortical activation, as represented by concentration changes of oxy- and deoxy-hemoglobin, in four easy-to-apply fNIRS optical channels of 33 normal-hearing adult subjects and five post-lingually deaf cochlear implant (CI) users in response to supra-threshold unisensory auditory and visual stimuli, as well as to congruent auditory-visual speech stimuli. RESULTS Activation effects were not visible in single fNIRS channels. However, after discounting physiological noise through reference channel subtraction (RCS), auditory, visual and audiovisual (AV) speech stimuli evoked concentration changes for all sensory modalities in both cohorts (p < 0.001). Auditory stimulation evoked larger concentration changes than visual stimulation (p < 0.001). A saturation effect was observed for the AV condition. CONCLUSIONS Physiological, systemic noise can be removed from fNIRS signals by RCS. The observed multisensory enhancement of an auditory cortical channel can be plausibly described by a simple addition of the auditory and visual signals with saturation.
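The sketch below illustrates one common formulation of reference channel subtraction: a shallow reference channel is scaled by a least-squares fit and subtracted from the deep channel of interest. The exact RCS implementation in the study may differ, and the toy signals are invented for the example.

```python
import numpy as np

def reference_channel_subtraction(deep, shallow):
    """Remove shared systemic noise from a deep fNIRS channel by regressing out
    a shallow reference channel (least-squares scaling), a common formulation
    of reference-channel subtraction; the study's implementation may differ."""
    deep = np.asarray(deep, dtype=float)
    shallow = np.asarray(shallow, dtype=float)
    beta = np.dot(shallow, deep) / np.dot(shallow, shallow)   # LS scaling factor
    return deep - beta * shallow

# Toy example: a slow 0.1-Hz "haemodynamic" response buried in shared 1-Hz cardiac noise.
t = np.arange(0, 60, 0.1)
noise = 0.8 * np.sin(2 * np.pi * 1.0 * t)
deep = 0.3 * np.sin(2 * np.pi * 0.1 * t) + noise
shallow = noise + 0.05 * np.random.randn(t.size)
cleaned = reference_channel_subtraction(deep, shallow)
```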
Affiliation(s)
- Luuk P H van de Rijt
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen Medical Centre, Nijmegen, Netherlands; Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- A John van Opstal
- Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Emmanuel A M Mylanus
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen Medical Centre, Nijmegen, Netherlands
- Louise V Straatman
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen Medical Centre, Nijmegen, Netherlands
- Hai Yin Hu
- Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Ad F M Snik
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen Medical Centre, Nijmegen, Netherlands
- Marc M van Wanrooij
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen Medical Centre, Nijmegen, Netherlands; Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
23
Bezgin G, Rybacki K, van Opstal AJ, Bakker R, Shen K, Vakorin VA, McIntosh AR, Kötter R. Auditory-prefrontal axonal connectivity in the macaque cortex: quantitative assessment of processing streams. Brain Lang 2014; 135:73-84. [PMID: 24980416 DOI: 10.1016/j.bandl.2014.05.006] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/10/2013] [Revised: 04/26/2014] [Accepted: 05/26/2014] [Indexed: 06/03/2023]
Abstract
Primate sensory systems subserve complex neurocomputational functions. Consequently, these systems are organised anatomically in a distributed fashion, commonly linking areas to form specialised processing streams. Each stream is related to a specific function, as evidenced by studies of the visual cortex, which features rather prominent segregation into spatial and non-spatial domains. It has been hypothesised that other sensory systems, including the auditory system, are organised in a similar way at the cortical level. Recent studies offer rich qualitative evidence for the dual-stream hypothesis. Here we provide a new paradigm to quantitatively uncover these patterns in the auditory system, based on an analysis of multiple anatomical studies using multivariate techniques. As a test case, we also apply our assessment techniques to the more ubiquitously explored visual system. Importantly, the introduced framework opens the possibility for these techniques to be applied to other neural systems featuring a dichotomised organisation, such as those underlying language or music perception.
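As a purely illustrative example of the kind of multivariate analysis meant here, the sketch below clusters a small, invented connection-strength matrix and cuts the dendrogram into two groups; a genuine analysis would use the actual tracing data and the specific techniques reported in the paper.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical symmetrised connection-strength matrix (0-3 ordinal scale) among
# six auditory/prefrontal areas; values are invented for illustration only.
areas = ["A1", "belt", "parabelt", "vlPFC", "dlPFC", "FEF"]
C = np.array([[0, 3, 2, 0, 0, 0],
              [3, 0, 3, 1, 0, 0],
              [2, 3, 0, 3, 1, 0],
              [0, 1, 3, 0, 2, 1],
              [0, 0, 1, 2, 0, 3],
              [0, 0, 0, 1, 3, 0]], dtype=float)

# Convert connection strength to a distance and cluster; a two-cluster cut
# would correspond to a dichotomised ("dual-stream") organisation.
D = C.max() - C
np.fill_diagonal(D, 0.0)
Z = linkage(squareform(D), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
for area, label in zip(areas, labels):
    print(area, label)
```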
Affiliation(s)
- Gleb Bezgin
- Rotman Research Institute of Baycrest Centre, University of Toronto, Toronto, Ontario M6A 2E1, Canada; Department of Neuroinformatics, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, 6525 AJ Nijmegen, The Netherlands; C. & O. Vogt Brain Research Institute, Heinrich Heine University, D-40225 Düsseldorf, Germany; Institute of Computer Science, Heinrich Heine University, D-40225 Düsseldorf, Germany
- Konrad Rybacki
- C. & O. Vogt Brain Research Institute, Heinrich Heine University, D-40225 Düsseldorf, Germany; Department of Diagnostic and Interventional Neuroradiology, HELIOS Medical Center Wuppertal, University Hospital Witten/Herdecke, Wuppertal, Germany
- A John van Opstal
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, 6525 AJ Nijmegen, The Netherlands
- Rembrandt Bakker
- Department of Neuroinformatics, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, 6525 AJ Nijmegen, The Netherlands; Institute of Neuroscience and Medicine (INM-6), Research Center Jülich, Germany; Department of Biology II, Ludwig-Maximilians-Universität München, Germany
- Kelly Shen
- Rotman Research Institute of Baycrest Centre, University of Toronto, Toronto, Ontario M6A 2E1, Canada
- Vasily A Vakorin
- Rotman Research Institute of Baycrest Centre, University of Toronto, Toronto, Ontario M6A 2E1, Canada; The Hospital for Sick Children, University of Toronto, Toronto, Ontario, Canada
- Anthony R McIntosh
- Rotman Research Institute of Baycrest Centre, University of Toronto, Toronto, Ontario M6A 2E1, Canada; Department of Psychology, University of Toronto, Toronto, Ontario M5S 3G3, Canada
- Rolf Kötter
- Department of Neuroinformatics, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, 6525 AJ Nijmegen, The Netherlands; C. & O. Vogt Brain Research Institute, Heinrich Heine University, D-40225 Düsseldorf, Germany
24
Visser E, Zwiers MP, Kan CC, Hoekstra L, van Opstal AJ, Buitelaar JK. Atypical vertical sound localization and sound-onset sensitivity in people with autism spectrum disorders. J Psychiatry Neurosci 2013; 38:398-406. [PMID: 24148845 PMCID: PMC3819154 DOI: 10.1503/jpn.120177] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/05/2012] [Revised: 01/14/2013] [Accepted: 03/20/2013] [Indexed: 12/29/2022] Open
Abstract
BACKGROUND Autism spectrum disorders (ASDs) are associated with auditory hyper- or hyposensitivity; atypicalities in central auditory processes, such as speech-processing and selective auditory attention; and neural connectivity deficits. We sought to investigate whether the low-level integrative processes underlying sound localization and spatial discrimination are affected in ASDs. METHODS We performed 3 behavioural experiments to probe different connecting neural pathways: 1) horizontal and vertical localization of auditory stimuli in a noisy background, 2) vertical localization of repetitive frequency sweeps and 3) discrimination of horizontally separated sound stimuli with a short onset difference (precedence effect). RESULTS Ten adult participants with ASDs and 10 healthy control listeners participated in experiments 1 and 3; sample sizes for experiment 2 were 18 adults with ASDs and 19 controls. Horizontal localization was unaffected, but vertical localization performance was significantly worse in participants with ASDs. The temporal window for the precedence effect was shorter in participants with ASDs than in controls. LIMITATIONS The study was performed with adult participants and hence does not provide insight into the developmental aspects of auditory processing in individuals with ASDs. CONCLUSION Changes in low-level auditory processing could underlie degraded performance in vertical localization, which would be in agreement with recently reported changes in the neuroanatomy of the auditory brainstem in individuals with ASDs. The results are further discussed in the context of theories about abnormal brain connectivity in individuals with ASDs.
Affiliation(s)
- Eelke Visser
- Marcel P. Zwiers
- Cornelis C. Kan
- Liesbeth Hoekstra
- A. John van Opstal
- Jan K. Buitelaar
- Visser, Zwiers, Buitelaar — Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Centre for Cognitive Neuroimaging, Nijmegen, the Netherlands; Visser, Zwiers, Hoekstra, Buitelaar — Radboud University Nijmegen Medical Centre, Department of Cognitive Neuroscience, Nijmegen, the Netherlands; Kan, Hoekstra — Radboud University Nijmegen Medical Centre, Department of Psychiatry, Nijmegen, the Netherlands; van Opstal — Radboud University Nijmegen, Department of Biophysics, Nijmegen and Donders Institute for Brain, Cognition and Behaviour, Centre for Neuroscience, Nijmegen, the Netherlands; Hoekstra, Buitelaar — Karakter, Child and Adolescent Psychiatry University Centre, Nijmegen, the Netherlands
25
Bezgin G, Vakorin VA, van Opstal AJ, McIntosh AR, Bakker R. Hundreds of brain maps in one atlas: registering coordinate-independent primate neuro-anatomical data to a standard brain. Neuroimage 2012; 62:67-76. [PMID: 22521477 DOI: 10.1016/j.neuroimage.2012.04.013] [Citation(s) in RCA: 39] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2011] [Revised: 03/24/2012] [Accepted: 04/07/2012] [Indexed: 01/06/2023] Open
Abstract
Non-invasive measuring methods such as EEG/MEG, fMRI and DTI are increasingly utilised to extract quantitative information on functional and anatomical connectivity in the human brain. These methods typically register their data in Euclidean space, so that one can refer to a particular activity pattern by specifying its spatial coordinates. Since each of these methods has limited resolution in either the temporal or the spatial domain, incorporating additional data, such as those obtained from invasive animal studies, would be highly beneficial for linking structure and function. Here we describe an approach to spatially register all cortical brain regions from the macaque structural connectivity database CoCoMac, which contains the combined tracing-study results from 459 publications (http://cocomac.g-node.org). Brain regions from 9 different brain maps were directly mapped to a standard macaque cortex using the tool Caret (Van Essen and Dierker, 2007). The remaining regions in the CoCoMac database were semantically linked to these 9 maps using previously developed algebraic and machine-learning techniques (Bezgin et al., 2008; Stephan et al., 2000). We analysed neural connectivity using several graph-theoretical measures to capture global properties of the derived network, and found that Markov centrality provides the most direct link between structure and function. With this registration approach, users can query the CoCoMac database by specifying spatial coordinates. The availability of deformation tools and homology evidence then allows one to directly attribute detailed anatomical animal data to human experimental results.
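Markov centrality can be computed from the mean first-passage times of the random walk defined by the connectivity matrix. The sketch below follows a standard formulation (White and Smyth's centrality, using the Kemeny-Snell fundamental matrix); the 4-node adjacency matrix is a toy example, not CoCoMac data.

```python
import numpy as np

def markov_centrality(A):
    """Markov centrality for a strongly connected, weighted directed graph with
    adjacency matrix A: centrality of node v = n / sum over sources s of the
    mean first-passage time from s to v in the row-normalised random walk."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    P = A / A.sum(axis=1, keepdims=True)                   # row-stochastic transition matrix
    # Stationary distribution: left eigenvector of P for eigenvalue 1.
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    pi = pi / pi.sum()
    # Fundamental matrix Z = (I - P + 1*pi^T)^(-1) (Kemeny & Snell).
    Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
    # Mean first-passage times M[s, v] = (Z[v, v] - Z[s, v]) / pi[v].
    M = (np.diag(Z)[None, :] - Z) / pi[None, :]
    return n / M.sum(axis=0)                               # higher = more central

# Toy 4-node directed network (purely illustrative).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]])
print(markov_centrality(A))
```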
Affiliation(s)
- Gleb Bezgin
- Rotman Research Institute of Baycrest Centre, University of Toronto, Toronto, Ontario, Canada.
26
Agterberg MJH, Hol MKS, Cremers CWRJ, Mylanus EAM, van Opstal AJ, Snik AFM. Conductive hearing loss and bone conduction devices: restored binaural hearing? Adv Otorhinolaryngol 2011; 71:84-91. [PMID: 21389708 DOI: 10.1159/000323587] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/07/2023]
Abstract
An important aspect of binaural hearing is the proper detection of interaural sound level differences and interaural timing differences. Binaural hearing was assessed in patients with acquired unilateral conductive hearing loss (UCHL, n = 11) or congenital UCHL (n = 10) after unilateral application of a bone conduction device (BCD), and in patients with bilateral conductive or mixed hearing loss after bilateral BCD application. Benefit (bilateral versus unilateral listening) was assessed by measuring directional hearing, compensation of the acoustic head shadow, binaural summation and binaural squelch. Measurements were performed after an acclimatization time of at least 10 weeks. Unilateral BCD application was beneficial, but the benefit was smaller in patients with congenital UCHL than in patients with acquired UCHL. In adults with bilateral hearing loss, bilateral BCD application was clearly beneficial compared with unilateral BCD application. Binaural summation was present, but binaural squelch could not be demonstrated. Two factors seem important in explaining the poor results in the patients with congenital UCHL. First, a critical period in the development of binaural hearing might limit binaural hearing abilities. Second, crossover stimulation, that is, additional stimulation of the cochlea contralateral to the BCD side, might degrade binaural hearing in patients with UCHL.
27
Bremen P, Van der Willigen RF, Van Wanrooij MM, Schaling DF, Martens MB, Van Grootel TJ, van Opstal AJ. Applying double-magnetic induction to measure head-unrestrained gaze shifts: calibration and validation in monkey. Biol Cybern 2010; 103:415-432. [PMID: 21082199 DOI: 10.1007/s00422-010-0408-4] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/09/2010] [Accepted: 10/28/2010] [Indexed: 05/30/2023]
Abstract
The double magnetic induction (DMI) method has been used successfully to record head-unrestrained gaze shifts in human subjects (Bremen et al., J Neurosci Methods 160:75-84, 2007a; J Neurophysiol 98:3759-3769, 2007b). This method employs a small golden ring placed on the eye that, when positioned within oscillating magnetic fields, induces orientation-dependent voltages in a pickup coil in front of the eye. Here we develop and test a streamlined calibration routine for use with experimental animals, in particular monkeys. The calibration routine only requires the animal to accurately follow visual targets presented at random locations in the visual field, a task that animals can readily learn. In addition, we exploit the fact that the pickup coil can be fixed rigidly and reproducibly on implants on the animal's skull, so that accumulation of calibration data across sessions leads to increasing accuracy. As a first step, we simulated gaze shifts and the resulting DMI signals. Our simulations showed that the complex DMI signals can be effectively calibrated with random target sequences, which elicit substantial decoupling of eye and head orientations in a natural way. Subsequently, we tested our paradigm on three macaque monkeys. Our results show that the data for a successful calibration can be collected in a single recording session, in which the monkey makes about 1,500-2,000 goal-directed saccades. We obtained a resolution of 30 arc minutes (measurement range [-60, +60]°), which is comparable to the fixation resolution of the monkey's oculomotor system and to that of the standard scleral search-coil method.
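The calibration idea can be illustrated with a toy fit: during fixations on targets at known random locations, a mapping from the raw pickup-coil signal and head orientation to gaze direction is estimated. The original work trained feedforward neural networks for this mapping; the polynomial least-squares fit and the simulated "DMI-like" signal below are stand-ins chosen only to keep the example short.

```python
import numpy as np

def fit_dmi_calibration(signal, head, gaze_target, degree=3):
    """Fit a polynomial mapping from (pickup-coil signal, head orientation) to
    gaze direction using fixation epochs on targets at known random locations.
    Illustrative stand-in for the neural-network calibration used in the study."""
    def features(sig, hd):
        x = np.column_stack([np.atleast_1d(sig), np.atleast_1d(hd)])
        cols = [np.ones(len(x))]
        for d in range(1, degree + 1):
            for k in range(d + 1):
                cols.append(x[:, 0] ** (d - k) * x[:, 1] ** k)
        return np.column_stack(cols)
    W, *_ = np.linalg.lstsq(features(signal, head), gaze_target, rcond=None)
    return lambda sig, hd: features(sig, hd) @ W

# Simulated session: ~1,500 fixations with eye and head decoupled by random targets.
rng = np.random.default_rng(0)
head = rng.uniform(-40, 40, 1500)
gaze = rng.uniform(-60, 60, 1500)                               # target = gaze during fixation
eye = gaze - head                                               # eye-in-head orientation
signal = np.sin(np.radians(eye)) * np.cos(np.radians(head))     # toy, nonlinear DMI-like signal
calib = fit_dmi_calibration(signal, head, gaze)
print(np.abs(calib(signal, head) - gaze).mean())                # mean calibration error (deg)
```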
Affiliation(s)
- Peter Bremen
- Donders Institute for Brain, Cognition and Behaviour, Department of Biophysics, Radboud University Nijmegen, Geert Grooteplein 21, 6525 EZ, Nijmegen, The Netherlands
28
Abstract
Visual stimuli that are flashed briefly around the onset of a saccade are systematically mislocalized. Such perisaccadic mislocalization is maximal in the direction of the saccade and varies systematically with the target-saccade onset delay. We have recently shown that under head-fixed conditions perisaccadic errors do not follow the quantitative predictions of current visuomotor models that explain these mislocalizations in terms of spatial updating. These models all assume sluggish eye-movement feedback and therefore predict that errors should vary systematically with the amplitude and kinematics of the intervening saccade. Instead, we reported that errors depend only weakly on the saccade amplitude. An alternative explanation for the data is that around the saccade the perceived target location undergoes a uniform transient shift in the saccade direction, while the oculomotor feedback is, on average, accurate. This "visual shift" hypothesis predicts that errors will also remain insensitive to kinematic variability within much larger head-free gaze shifts. Here we test this prediction by presenting a brief visual probe near the onset of gaze saccades with amplitudes between 40 and 70 degrees. According to models with inaccurate gaze-motor feedback, the expected perisaccadic errors for such gaze shifts should be as large as 30 degrees and should depend heavily on the kinematics of the gaze shift. In contrast, we found that the actual peak errors were similar to those reported for much smaller saccadic eye movements, i.e., on average about 10 degrees, and that neither gaze-shift amplitude nor kinematics plays a systematic role. Our data further corroborate the visual origin of perisaccadic mislocalization under open-loop conditions and strengthen the idea that efferent feedback signals in the gaze-control system are fast and accurate.
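To make the contrast concrete, the toy model below low-pass filters a constant-velocity gaze trajectory to mimic sluggish feedback and reports the resulting peak mislocalization. It is an illustrative stand-in, not the authors' model; the time constant, trajectories and durations are assumptions made only for the example.

```python
import numpy as np

def peak_sluggish_feedback_error(amplitude_deg, duration_s, tau_s=0.05, dt=0.001):
    """Peak flash-localization error predicted if the gaze-feedback signal lags
    the true gaze displacement through a first-order low-pass filter (time
    constant tau): the error is the displacement not yet accounted for at the
    time of the flash. Uses a simple constant-velocity gaze trajectory."""
    t = np.arange(0.0, duration_s, dt)
    gaze = amplitude_deg * t / duration_s              # actual gaze displacement
    feedback = np.zeros_like(gaze)
    for i in range(1, len(t)):
        feedback[i] = feedback[i - 1] + (dt / tau_s) * (gaze[i - 1] - feedback[i - 1])
    return np.max(gaze - feedback)                     # worst-case mislocalization

# Sluggish feedback predicts peak errors that grow with gaze-shift amplitude,
# whereas the "visual shift" account predicts a roughly constant peak error
# (about 10 degrees in the reported data), independent of amplitude.
for amp, dur in [(20, 0.08), (40, 0.14), (70, 0.22)]:
    print(amp, "deg ->", round(peak_sluggish_feedback_error(amp, dur), 1), "deg error")
```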
Affiliation(s)
- Sigrid M C I van Wetter
- Faculty of Science, Radboud University Nijmegen, Donders Centre for Neuroscience, Department of Biophysics, Geert Grooteplein 21, 6525 EZ Nijmegen, The Netherlands