1. O'Donohue M, Lacherez P, Yamamoto N. Audiovisual spatial ventriloquism is reduced in musicians. Hear Res 2023; 440:108918. [PMID: 37992516] [DOI: 10.1016/j.heares.2023.108918]
Abstract
There is great scientific and public interest in claims that musical training improves general cognitive and perceptual abilities. While this is controversial, recent and rather convincing evidence suggests that musical training refines the temporal integration of auditory and visual stimuli at a general level. We investigated whether musical training also affects integration in the spatial domain, via an auditory localisation experiment that measured ventriloquism (where localisation is biased towards visual stimuli on audiovisual trials) and recalibration (a unimodal localisation aftereffect). While musicians (n = 22) and non-musicians (n = 22) did not have significantly different unimodal precision or accuracy, musicians were significantly less susceptible than non-musicians to ventriloquism, with large effect sizes. We replicated these results in another experiment with an independent sample of 24 musicians and 21 non-musicians. Across both experiments, spatial recalibration did not significantly differ between the groups even though musicians resisted ventriloquism. Our results suggest that the multisensory expertise afforded by musical training refines spatial integration, a process that underpins multisensory perception.
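The ventriloquism bias reported here is commonly quantified as the shift of auditory localization toward the visual stimulus, normalized by the audio-visual offset. The following Python sketch illustrates that standard quantification; the function name, the normalization convention, and the simulated data are illustrative assumptions, not material from the study.

```python
import numpy as np

def ventriloquism_bias(responses, aud_pos, vis_pos):
    """Mean localization shift toward the visual stimulus, normalized by
    the audio-visual offset (0 = no bias, 1 = complete visual capture).
    All arguments are arrays of per-trial positions in degrees."""
    offset = vis_pos - aud_pos      # audio-visual spatial discrepancy
    shift = responses - aud_pos     # localization error relative to the sound
    valid = offset != 0             # only discrepant trials are informative
    return np.mean(shift[valid] / offset[valid])

# Hypothetical example: sound at 0 deg, flash at +10 deg, responses pulled toward the flash
rng = np.random.default_rng(0)
aud = np.zeros(100)
vis = np.full(100, 10.0)
resp = 0.4 * vis + rng.normal(0, 2.0, 100)   # ~40% visual capture plus noise
print(ventriloquism_bias(resp, aud, vis))    # approximately 0.4
```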
Affiliation(s)
- Matthew O'Donohue
- Queensland University of Technology (QUT), School of Psychology and Counselling, Kelvin Grove, QLD 4059, Australia
- Philippe Lacherez
- Queensland University of Technology (QUT), School of Psychology and Counselling, Kelvin Grove, QLD 4059, Australia
- Naohide Yamamoto
- Queensland University of Technology (QUT), School of Psychology and Counselling, Kelvin Grove, QLD 4059, Australia; Queensland University of Technology (QUT), Centre for Vision and Eye Research, Kelvin Grove, QLD 4059, Australia
2. Lin R, Zeng F, Wang Q, Chen A. Cross-Modal Plasticity during Self-Motion Perception. Brain Sci 2023; 13:1504. [PMID: 38002465] [PMCID: PMC10669852] [DOI: 10.3390/brainsci13111504]
Abstract
To maintain stable and coherent perception in an ever-changing environment, the brain needs to continuously and dynamically calibrate information from multiple sensory sources, using sensory and non-sensory information in a flexible manner. Here, we review how the vestibular and visual signals are recalibrated during self-motion perception. We illustrate two different types of recalibration: one long-term cross-modal (visual-vestibular) recalibration concerning how multisensory cues recalibrate over time in response to a constant cue discrepancy, and one rapid-term cross-modal (visual-vestibular) recalibration concerning how recent prior stimuli and choices differentially affect subsequent self-motion decisions. In addition, we highlight the neural substrates of long-term visual-vestibular recalibration, with profound differences observed in neuronal recalibration across multisensory cortical areas. We suggest that multisensory recalibration is a complex process in the brain, is modulated by many factors, and requires the coordination of many distinct cortical areas. We hope this review will shed some light on research into the neural circuits of visual-vestibular recalibration and help develop a more generalized theory for cross-modal plasticity.
Affiliation(s)
- Rushi Lin
- Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, 3663 Zhongshan Road N., Shanghai 200062, China
- Fu Zeng
- Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, 3663 Zhongshan Road N., Shanghai 200062, China
- Qingjun Wang
- Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, 3663 Zhongshan Road N., Shanghai 200062, China
- Aihua Chen
- Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, 3663 Zhongshan Road N., Shanghai 200062, China
- NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai 200122, China
3. Bruns P, Röder B. Development and experience-dependence of multisensory spatial processing. Trends Cogn Sci 2023; 27:961-973. [PMID: 37208286] [DOI: 10.1016/j.tics.2023.04.012]
Abstract
Multisensory spatial processes are fundamental for efficient interaction with the world. They include not only the integration of spatial cues across sensory modalities, but also the adjustment or recalibration of spatial representations to changing cue reliabilities, crossmodal correspondences, and causal structures. Yet how multisensory spatial functions emerge during ontogeny is poorly understood. New results suggest that temporal synchrony and enhanced multisensory associative learning capabilities first guide causal inference and initiate early coarse multisensory integration capabilities. These multisensory percepts are crucial for the alignment of spatial maps across sensory systems, and are used to derive more stable biases for adult crossmodal recalibration. The refinement of multisensory spatial integration with increasing age is further promoted by the inclusion of higher-order knowledge.
Affiliation(s)
- Patrick Bruns
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
- Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
4. Kayser C, Park H, Heuer H. Cumulative multisensory discrepancies shape the ventriloquism aftereffect but not the ventriloquism bias. PLoS One 2023; 18:e0290461. [PMID: 37607201] [PMCID: PMC10443876] [DOI: 10.1371/journal.pone.0290461]
Abstract
Multisensory integration and recalibration are two processes by which perception deals with discrepant signals. Both are often studied in the spatial ventriloquism paradigm. There, integration is probed by the presentation of discrepant audio-visual stimuli, while recalibration manifests as an aftereffect in subsequent judgements of unisensory sounds. Both biases are typically quantified against the degree of audio-visual discrepancy, reflecting the possibility that both may arise from common underlying multisensory principles. We tested a specific prediction of this: that both processes should also scale similarly with the history of multisensory discrepancies, i.e., the sequence of discrepancies in several preceding audio-visual trials. Analyzing data from ten experiments with randomly varying spatial discrepancies, we confirmed the expected dependency of each bias on the immediately presented discrepancy. In line with the aftereffect being a cumulative process, it also scaled with the discrepancies presented in at least the three preceding audio-visual trials. However, the ventriloquism bias did not depend on this three-trial history of multisensory discrepancies, nor on the aftereffect biases in previous trials, making these two multisensory processes experimentally dissociable. These findings support the notion that the ventriloquism bias and the aftereffect reflect distinct functions, with integration maintaining a stable percept by reducing immediate sensory discrepancies and recalibration maintaining an accurate percept by accounting for consistent discrepancies.
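A simple way to probe the history dependence described in this abstract is to regress each trial's bias on the audio-visual discrepancies of the current and the preceding trials; in that framing, the reported dissociation corresponds to non-zero history weights for the aftereffect but not for the ventriloquism bias. The Python sketch below illustrates this analysis logic on simulated data; the variable names and the three-trial window are illustrative assumptions, not the authors' code.

```python
import numpy as np

def history_regression(bias, discrepancy, n_back=3):
    """Regress per-trial bias on the discrepancies of the current and the
    n_back preceding audio-visual trials (ordinary least squares).
    Returns one weight per lag (lag 0 = current trial)."""
    T = len(bias)
    X = np.column_stack([discrepancy[n_back - k : T - k] for k in range(n_back + 1)])
    y = bias[n_back:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef  # coef[0]: current discrepancy; coef[1:]: history terms

# Hypothetical data: an aftereffect driven by a decaying influence of past discrepancies
rng = np.random.default_rng(1)
disc = rng.choice([-20.0, -10.0, 10.0, 20.0], size=2000)
true_w = np.array([0.15, 0.10, 0.06, 0.03])            # lag-0 ... lag-3 weights
bias = np.convolve(disc, true_w)[: len(disc)] + rng.normal(0, 1.0, len(disc))

print(history_regression(bias, disc))  # recovers weights close to true_w
```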
Affiliation(s)
- Christoph Kayser
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Hame Park
- Department of Neurophysiology & Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Herbert Heuer
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
5. Kirsch W, Kunde W. Changes in body perception following virtual object manipulation are accompanied by changes of the internal reference scale. Sci Rep 2023; 13:7137. [PMID: 37130888] [PMCID: PMC10154308] [DOI: 10.1038/s41598-023-34311-8]
Abstract
Changes in body perception often arise when observers are confronted with related yet discrepant multisensory signals. Some of these effects are interpreted as outcomes of sensory integration of various signals, whereas related biases are ascribed to learning-dependent recalibration of the coding of individual signals. The present study explored whether the same sensorimotor experience entails changes in body perception that are indicative of multisensory integration as well as changes that indicate recalibration. Participants enclosed visual objects with a pair of visual cursors controlled by finger movements. They then either judged their perceived finger posture (indicating multisensory integration) or produced a certain finger posture (indicating recalibration). An experimental variation of the size of the visual object resulted in systematic and opposite biases of the perceived and produced finger distances. This pattern of results is consistent with the assumption that multisensory integration and recalibration had a common origin in the task we used.
Affiliation(s)
- Wladimir Kirsch
- Department of Psychology, University of Würzburg, Röntgenring 11, 97070 Würzburg, Germany
- Wilfried Kunde
- Department of Psychology, University of Würzburg, Röntgenring 11, 97070 Würzburg, Germany
6. Zhu H, Tang X, Chen T, Yang J, Wang A, Zhang M. Audiovisual illusion training improves multisensory temporal integration. Conscious Cogn 2023; 109:103478. [PMID: 36753896] [DOI: 10.1016/j.concog.2023.103478]
Abstract
When we perceive external physical stimuli from the environment, the brain must remain somewhat flexible to unaligned stimuli within a specific range, as multisensory signals are subject to different transmission and processing delays. Recent studies have shown that the width of the 'temporal binding window' (TBW) can be reduced by perceptual learning. However, to date, the vast majority of studies examining the mechanisms of perceptual learning have focused on experience-dependent effects, without reaching a consensus on how such learning relates to the underlying perception of audiovisual illusions. Training on the sound-induced flash illusion (SiFI) reliably improves perceptual sensitivity. The present study utilized the classic auditory-dominated SiFI paradigm with feedback training to investigate the effect of 5-day SiFI training on multisensory temporal integration, as evaluated by a simultaneity judgment (SJ) task and a temporal order judgment (TOJ) task. We demonstrate that audiovisual illusion training enhances the precision of multisensory temporal integration in the form of (i) a shift of the point of subjective simultaneity (PSS) toward true simultaneity (0 ms) and (ii) a narrowing of the TBW. The results are consistent with a Bayesian model of causal inference, suggesting that perceptual learning reduces susceptibility to the SiFI while improving the precision of audiovisual temporal estimation.
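The PSS and TBW reported here are standard summary measures from simultaneity-judgment data: the proportion of 'simultaneous' responses is modeled as a bell-shaped function of the stimulus onset asynchrony (SOA), whose peak location gives the PSS and whose width indexes the TBW. The following Python sketch fits one common variant, a Gaussian; this choice of function and the simulated data are illustrative assumptions, not the authors' analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def sj_gaussian(soa, pss, sigma, amp):
    """Proportion of 'simultaneous' responses as a Gaussian bump over SOA:
    the peak location is the PSS; the width (sigma) indexes the TBW."""
    return amp * np.exp(-0.5 * ((soa - pss) / sigma) ** 2)

# Hypothetical simultaneity-judgment data (SOA in ms, audio-leading negative)
soas = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300], float)
p_simul = np.array([0.05, 0.20, 0.65, 0.85, 0.90, 0.80, 0.55, 0.15, 0.05])

(pss, sigma, amp), _ = curve_fit(sj_gaussian, soas, p_simul, p0=(0, 100, 1))
tbw = 2 * sigma  # one simple convention: full width of the binding window
print(f"PSS = {pss:.1f} ms, TBW = {tbw:.1f} ms")
```

In this framing, the training effect reported above corresponds to the fitted PSS moving toward 0 ms and the fitted width shrinking from pre- to post-test.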
Affiliation(s)
- Haocheng Zhu
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
- Xiaoyu Tang
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
- Tingji Chen
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
- Jiajia Yang
- Applied Brain Science Lab, Faculty of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Aijun Wang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
- Ming Zhang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China; Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
7. Debats NB, Heuer H, Kayser C. Short-term effects of visuomotor discrepancies on multisensory integration, proprioceptive recalibration, and motor adaptation. J Neurophysiol 2023; 129:465-478. [PMID: 36651909] [DOI: 10.1152/jn.00478.2022]
Abstract
Information about the position of our hand is provided by multisensory signals that are often not perfectly aligned. Discrepancies between the seen and felt hand position or its movement trajectory engage the processes of 1) multisensory integration, 2) sensory recalibration, and 3) motor adaptation, which adjust perception and behavioral responses to apparently discrepant signals. To foster our understanding of the coemergence of these three processes, we probed their short-term dependence on multisensory discrepancies in a visuomotor task that has previously served as a model for multisensory perception and motor control. We found that the well-established integration of discrepant visual and proprioceptive signals is tied to the immediate discrepancy and independent of the outcome of the integration of discrepant signals in immediately preceding trials. However, the strength of integration was context dependent, being stronger in an experiment featuring stimuli that covered a smaller range of visuomotor discrepancies (±15°) compared with one covering a larger range (±30°). Both sensory recalibration and motor adaptation for nonrepeated movement directions were absent after two bimodal trials with same or opposite visuomotor discrepancies. Hence our results suggest that short-term sensory recalibration and motor adaptation are not an obligatory consequence of the integration of preceding discrepant multisensory signals. NEW & NOTEWORTHY: The functional relation between multisensory integration and recalibration remains debated. We here refute the notion that they coemerge in an obligatory manner and support the hypothesis that they serve distinct goals of perception.
Affiliation(s)
- Nienke B Debats
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Herbert Heuer
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany; Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Christoph Kayser
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
8. The development of audio-visual temporal precision precedes its rapid recalibration. Sci Rep 2022; 12:21591. [PMID: 36517503] [PMCID: PMC9751280] [DOI: 10.1038/s41598-022-25392-y]
Abstract
Through development, multisensory systems reach a balance between stability and flexibility: the systems optimally integrate cross-modal signals from the same events while remaining adaptive to environmental changes. Is continuous intersensory recalibration required to shape optimal integration mechanisms, or does multisensory integration develop prior to recalibration? Here, we examined the development of multisensory integration and rapid recalibration in the temporal domain by re-analyzing published datasets for audio-visual, audio-tactile, and visual-tactile combinations. Results showed that children reach an adult level of precision in audio-visual simultaneity perception and show the first sign of rapid recalibration at 9 years of age. In contrast, there was very weak rapid recalibration for other cross-modal combinations at all ages, even when adult levels of temporal precision had developed. Thus, the development of audio-visual rapid recalibration appears to require the maturation of temporal precision. It may serve to accommodate distance-dependent travel-time differences between light and sound.
9. Musical training refines audiovisual integration but does not influence temporal recalibration. Sci Rep 2022; 12:15292. [PMID: 36097277] [PMCID: PMC9468170] [DOI: 10.1038/s41598-022-19665-9]
Abstract
When the brain is exposed to a temporal asynchrony between the senses, it will shift its perception of simultaneity towards the previously experienced asynchrony (temporal recalibration). It is unknown whether recalibration depends on how accurately an individual integrates multisensory cues or on experiences they have had over their lifespan. Hence, we assessed whether musical training modulated audiovisual temporal recalibration. Musicians (n = 20) and non-musicians (n = 18) made simultaneity judgements to flash-tone stimuli before and after adaptation to asynchronous (± 200 ms) flash-tone stimuli. We analysed these judgements via an observer model that described the left and right boundaries of the temporal integration window (decisional criteria) and the amount of sensory noise that affected these judgements. Musicians’ boundaries were narrower (closer to true simultaneity) than non-musicians’, indicating stricter criteria for temporal integration, and they also exhibited enhanced sensory precision. However, while both musicians and non-musicians experienced cumulative and rapid recalibration, these recalibration effects did not differ between the groups. Unexpectedly, cumulative recalibration was caused by auditory-leading but not visual-leading adaptation. Overall, these findings suggest that the precision with which observers perceptually integrate audiovisual temporal cues does not predict their susceptibility to recalibration.
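The observer model sketched in this abstract treats a 'simultaneous' response as arising whenever the noisy internal estimate of the audio-visual asynchrony falls between two decisional criteria. Below is a minimal Python illustration of that model class; the function name and all parameter values are illustrative assumptions rather than the authors' fitted model.

```python
import numpy as np
from scipy.stats import norm

def p_simultaneous(soa, left, right, sigma):
    """Probability of judging an audio-visual pair simultaneous when the
    internal SOA estimate (true SOA plus Gaussian sensory noise, sd = sigma)
    falls between the left and right decisional criteria (all in ms)."""
    return norm.cdf((right - soa) / sigma) - norm.cdf((left - soa) / sigma)

soas = np.linspace(-400, 400, 9)
# Hypothetical parameters: narrower criteria and less noise for a stricter observer
print(p_simultaneous(soas, left=-80, right=120, sigma=60))   # strict criteria, precise
print(p_simultaneous(soas, left=-180, right=220, sigma=90))  # lax criteria, noisier
```

In this framing, the musicians' result corresponds to criteria closer to 0 ms and a smaller sigma, while recalibration would shift the criteria after asynchronous adaptation.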
10. Park H, Kayser C. The context of experienced sensory discrepancies shapes multisensory integration and recalibration differently. Cognition 2022; 225:105092. [DOI: 10.1016/j.cognition.2022.105092]
11. Aller M, Mihalik A, Noppeney U. Audiovisual adaptation is expressed in spatial and decisional codes. Nat Commun 2022; 13:3924. [PMID: 35798733] [PMCID: PMC9262908] [DOI: 10.1038/s41467-022-31549-0]
Abstract
The brain adapts dynamically to the changing sensory statistics of its environment. Recent research has started to delineate the neural circuitries and representations that support this cross-sensory plasticity. Combining psychophysics and model-based representational fMRI and EEG we characterized how the adult human brain adapts to misaligned audiovisual signals. We show that audiovisual adaptation is associated with changes in regional BOLD-responses and fine-scale activity patterns in a widespread network from Heschl's gyrus to dorsolateral prefrontal cortices. Audiovisual recalibration relies on distinct spatial and decisional codes that are expressed with opposite gradients and time courses across the auditory processing hierarchy. Early activity patterns in auditory cortices encode sounds in a continuous space that flexibly adapts to misaligned visual inputs. Later activity patterns in frontoparietal cortices code decisional uncertainty consistent with these spatial transformations. Our findings suggest that regions within the auditory processing hierarchy multiplex spatial and decisional codes to adapt flexibly to the changing sensory statistics in the environment.
Affiliation(s)
- Máté Aller
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
- Agoston Mihalik
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK
- Department of Psychiatry, University of Cambridge, Cambridge, UK
- Uta Noppeney
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
12. Bruns P, Li L, Guerreiro MJ, Shareef I, Rajendran SS, Pitchaimuthu K, Kekunnaya R, Röder B. Audiovisual spatial recalibration but not integration is shaped by early sensory experience. iScience 2022; 25:104439. [PMID: 35874923] [PMCID: PMC9301879] [DOI: 10.1016/j.isci.2022.104439]
Affiliation(s)
- Patrick Bruns
- Biological Psychology and Neuropsychology, University of Hamburg, 20146 Hamburg, Germany
- Lux Li
- Biological Psychology and Neuropsychology, University of Hamburg, 20146 Hamburg, Germany
- Department of Epidemiology and Biostatistics, Schulich School of Medicine & Dentistry, Western University, London, ON N6G 2M1, Canada
- Maria J.S. Guerreiro
- Biological Psychology and Neuropsychology, University of Hamburg, 20146 Hamburg, Germany
- Biological Psychology, Department of Psychology, School of Medicine and Health Sciences, University of Oldenburg, 26111 Oldenburg, Germany
- Idris Shareef
- Jasti V Ramanamma Children’s Eye Care Centre, LV Prasad Eye Institute, Hyderabad, Telangana 500034, India
- Siddhart S. Rajendran
- Biological Psychology and Neuropsychology, University of Hamburg, 20146 Hamburg, Germany
- Jasti V Ramanamma Children’s Eye Care Centre, LV Prasad Eye Institute, Hyderabad, Telangana 500034, India
- Kabilan Pitchaimuthu
- Biological Psychology and Neuropsychology, University of Hamburg, 20146 Hamburg, Germany
- Jasti V Ramanamma Children’s Eye Care Centre, LV Prasad Eye Institute, Hyderabad, Telangana 500034, India
- Ramesh Kekunnaya
- Jasti V Ramanamma Children’s Eye Care Centre, LV Prasad Eye Institute, Hyderabad, Telangana 500034, India
- Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, 20146 Hamburg, Germany
13. Watson DM, Akeroyd MA, Roach NW, Webb BS. Multiple spatial reference frames underpin perceptual recalibration to audio-visual discrepancies. PLoS One 2021; 16:e0251827. [PMID: 33999940] [PMCID: PMC8128243] [DOI: 10.1371/journal.pone.0251827]
Abstract
In dynamic multisensory environments, the perceptual system corrects for discrepancies arising between modalities. For instance, in the ventriloquism aftereffect (VAE), spatial disparities introduced between visual and auditory stimuli lead to a perceptual recalibration of auditory space. Previous research has shown that the VAE is underpinned by multiple recalibration mechanisms tuned to different timescales; however, it remains unclear whether these mechanisms use common or distinct spatial reference frames. Here we asked whether the VAE operates in eye- or head-centred reference frames across a range of adaptation timescales, from a few seconds to a few minutes. We developed a novel paradigm for selectively manipulating the contribution of eye- versus head-centred visual signals to the VAE by manipulating auditory locations relative to either the head orientation or the point of fixation. Consistent with previous research, we found that both eye- and head-centred frames contributed to the VAE across all timescales. However, we found no evidence for an interaction between spatial reference frames and adaptation duration. Our results indicate that the VAE is underpinned by multiple spatial reference frames that are similarly leveraged by the underlying time-sensitive mechanisms.
Affiliation(s)
- David Mark Watson
- School of Psychology, University of Nottingham, Nottingham, United Kingdom; Department of Psychology, University of York, York, United Kingdom
- Michael A Akeroyd
- Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Neil W Roach
- School of Psychology, University of Nottingham, Nottingham, United Kingdom
- Ben S Webb
- School of Psychology, University of Nottingham, Nottingham, United Kingdom
14.
Abstract
Adaptive behavior in a complex, dynamic, and multisensory world poses some of the most fundamental computational challenges for the brain, notably inference, decision-making, learning, binding, and attention. We first discuss how the brain integrates sensory signals from the same source to support perceptual inference and decision-making by weighting them according to their momentary sensory uncertainties. We then show how observers solve the binding or causal inference problem-deciding whether signals come from common causes and should hence be integrated or else be treated independently. Next, we describe the multifarious interplay between multisensory processing and attention. We argue that attentional mechanisms are crucial to compute approximate solutions to the binding problem in naturalistic environments when complex time-varying signals arise from myriad causes. Finally, we review how the brain dynamically adapts multisensory processing to a changing world across multiple timescales.
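The uncertainty-weighted integration described at the start of this abstract is standardly formalized as reliability-weighted averaging, where each cue is weighted by its inverse variance. For reference, the textbook form for auditory (A) and visual (V) location estimates (not an equation quoted from this article):

```latex
% Reliability-weighted (inverse-variance) integration of auditory and visual cues
\hat{s}_{AV} = w_A\,\hat{s}_A + w_V\,\hat{s}_V, \qquad
w_A = \frac{1/\sigma_A^{2}}{1/\sigma_A^{2} + 1/\sigma_V^{2}}, \qquad
w_V = \frac{1/\sigma_V^{2}}{1/\sigma_A^{2} + 1/\sigma_V^{2}}, \qquad
\sigma_{AV}^{2} = \frac{\sigma_A^{2}\,\sigma_V^{2}}{\sigma_A^{2} + \sigma_V^{2}}
```

The last identity shows why integration pays: the fused estimate has lower variance than either unisensory estimate alone.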
Affiliation(s)
- Uta Noppeney
- Donders Institute for Brain, Cognition and Behavior, Radboud University, 6525 AJ Nijmegen, The Netherlands
15. The Neurophysiological Basis of the Trial-Wise and Cumulative Ventriloquism Aftereffects. J Neurosci 2021; 41:1068-1079. [PMID: 33273069] [PMCID: PMC7880291] [DOI: 10.1523/jneurosci.2091-20.2020]
Abstract
Our senses often receive conflicting multisensory information, which our brain reconciles by adaptive recalibration. A classic example is the ventriloquism aftereffect, which emerges following both cumulative (long-term) and trial-wise exposure to spatially discrepant multisensory stimuli. Despite the importance of such adaptive mechanisms for interacting with environments that change over multiple timescales, it remains debated whether the ventriloquism aftereffects observed following trial-wise and cumulative exposure arise from the same neurophysiological substrate. We address this question by probing electroencephalography recordings from healthy humans (both sexes) for processes predictive of the aftereffect biases following the exposure to spatially offset audiovisual stimuli. Our results support the hypothesis that discrepant multisensory evidence shapes aftereffects on distinct timescales via common neurophysiological processes reflecting sensory inference and memory in parietal-occipital regions, while the cumulative exposure to consistent discrepancies additionally recruits prefrontal processes. During the subsequent unisensory trial, both trial-wise and cumulative exposure bias the encoding of the acoustic information, but do so distinctly. Our results posit a central role of parietal regions in shaping multisensory spatial recalibration, suggest that frontal regions consolidate the behavioral bias for persistent multisensory discrepancies, but also show that the trial-wise and cumulative exposure bias sound position encoding via distinct neurophysiological processes. SIGNIFICANCE STATEMENT: Our brain easily reconciles conflicting multisensory information, such as seeing an actress on screen while hearing her voice over headphones. These adaptive mechanisms exert a persistent influence on the perception of subsequent unisensory stimuli, known as the ventriloquism aftereffect. While this aftereffect emerges following trial-wise or cumulative exposure to multisensory discrepancies, it remained unclear whether both arise from a common neural substrate. We here test this hypothesis using human electroencephalography recordings. Our data suggest that parietal regions involved in multisensory and spatial memory mediate the aftereffect following both trial-wise and cumulative adaptation, but also show that additional and distinct processes are involved in consolidating and implementing the aftereffect following prolonged exposure.
16. Park H, Nannt J, Kayser C. Sensory- and memory-related drivers for altered ventriloquism effects and aftereffects in older adults. Cortex 2021; 135:298-310. [PMID: 33422888] [PMCID: PMC7856550] [DOI: 10.1016/j.cortex.2020.12.001]
Abstract
The manner in which humans exploit multisensory information for subsequent decisions changes with age. Multiple causes for such age effects are being discussed, including a reduced precision in peripheral sensory representations, changes in cognitive inference about causal relations between sensory cues, and a decline in memory contributing to altered sequential patterns of multisensory behaviour. To dissociate these putative contributions, we investigated how healthy young and older adults integrate audio-visual spatial information within trials (the ventriloquism effect) and between trials (the ventriloquism aftereffect). Using both model-free and (Bayesian) model-based analyses, we found that both biases differed between groups. Our results attribute the age-related change in the ventriloquism bias to a decline in spatial hearing rather than a change in cognitive processes. This decline in peripheral function, combined with a more prominent influence from preceding responses rather than preceding stimuli in the elderly, can also explain the observed age effect in the ventriloquism aftereffect. Our results suggest a transition from a sensory- to a behavior-driven influence of past multisensory experience on perceptual decisions with age, due to reduced sensory precision and changes in memory capacity.
Affiliation(s)
- Hame Park
- Department for Cognitive Neuroscience, Faculty of Biology, Bielefeld University, Bielefeld, Germany; Center for Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
- Julia Nannt
- Department for Cognitive Neuroscience, Faculty of Biology, Bielefeld University, Bielefeld, Germany; Center for Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
- Christoph Kayser
- Department for Cognitive Neuroscience, Faculty of Biology, Bielefeld University, Bielefeld, Germany; Center for Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
17. Park H, Kayser C. Robust spatial ventriloquism effect and trial-by-trial aftereffect under memory interference. Sci Rep 2020; 10:20826. [PMID: 33257687] [PMCID: PMC7705722] [DOI: 10.1038/s41598-020-77730-7]
Abstract
Our brain adapts to discrepancies in the sensory inputs. One example is provided by the ventriloquism effect, experienced when the sight and sound of an object are displaced. Here the discrepant multisensory stimuli not only result in a biased localization of the sound, but also recalibrate the perception of subsequent unisensory acoustic information in the so-called ventriloquism aftereffect. This aftereffect has been linked to memory-related processes based on its parallels to general sequential effects in perceptual decision making experiments and insights obtained in neuroimaging studies. For example, we have recently implied memory-related medial parietal regions in the trial-by-trial ventriloquism aftereffect. Here, we tested the hypothesis that the trial-by-trial (or immediate) ventriloquism aftereffect is indeed susceptible to manipulations interfering with working memory. Across three experiments we systematically manipulated the temporal delays between stimuli and response for either the ventriloquism or the aftereffect trials, or added a sensory-motor masking trial in between. Our data reveal no significant impact of either of these manipulations on the aftereffect, suggesting that the recalibration reflected by the trial-by-trial ventriloquism aftereffect is surprisingly resilient to manipulations interfering with memory-related processes.
Affiliation(s)
- Hame Park
- Department for Cognitive Neuroscience, Faculty of Biology, Bielefeld University, Universitätsstr. 25, 33615 Bielefeld, Germany
- Center for Cognitive Interaction Technology (CITEC), Bielefeld University, Inspiration 1, 33615 Bielefeld, Germany
- Christoph Kayser
- Department for Cognitive Neuroscience, Faculty of Biology, Bielefeld University, Universitätsstr. 25, 33615 Bielefeld, Germany
- Center for Cognitive Interaction Technology (CITEC), Bielefeld University, Inspiration 1, 33615 Bielefeld, Germany
18. Bruns P, Dinse HR, Röder B. Differential effects of the temporal and spatial distribution of audiovisual stimuli on cross-modal spatial recalibration. Eur J Neurosci 2020; 52:3763-3775. [PMID: 32403183] [DOI: 10.1111/ejn.14779]
Abstract
Visual input constantly recalibrates auditory spatial representations. Exposure to isochronous audiovisual stimuli with a fixed spatial disparity typically results in a subsequent auditory localization bias (ventriloquism aftereffect, VAE), whereas exposure to spatially congruent audiovisual stimuli improves subsequent auditory localization (multisensory enhancement, ME). Here, we tested whether cross-modal recalibration is affected by the stimulation rate and/or the distribution of audiovisual spatial disparities during training. Auditory localization was tested before and after participants were exposed either to audiovisual stimuli with a constant spatial disparity of 13.5° (VAE) or to spatially congruent audiovisual stimulation (ME). In a between-subjects design, audiovisual stimuli were presented either at a low frequency of 2 Hz, as used in previous studies of VAE and ME, or intermittently at a high frequency of 10 Hz, which mimics long-term potentiation (LTP) protocols and which was found superior in eliciting unisensory perceptual learning. Compared to low-frequency stimulation, VAE was reduced after high-frequency stimulation, whereas ME occurred regardless of the stimulation protocol. In two additional groups, we manipulated the spatial distribution of audiovisual stimuli in the low-frequency condition. Stimuli were presented with varying audiovisual disparities centered around 13.5° (VAE) or 0° (ME). Both VAE and ME were equally strong compared to a fixed spatial relationship of 13.5° or 0°, respectively. Taken together, our results suggest (a) that VAE and ME represent partly dissociable forms of learning and (b) that auditory representations adjust to the overall stimulus statistics rather than to a specific audiovisual spatial relationship.
Affiliation(s)
- Patrick Bruns
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
- Hubert R Dinse
- Neural Plasticity Lab, Institute of Neuroinformatics, Ruhr University Bochum, Bochum, Germany
- Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
19. Rohlf S, Li L, Bruns P, Röder B. Multisensory Integration Develops Prior to Crossmodal Recalibration. Curr Biol 2020; 30:1726-1732.e7. [DOI: 10.1016/j.cub.2020.02.048]
20. Alpha Activity Reflects the Magnitude of an Individual Bias in Human Perception. J Neurosci 2020; 40:3443-3454. [PMID: 32179571] [DOI: 10.1523/jneurosci.2359-19.2020]
Abstract
Biases in sensory perception can arise from both experimental manipulations and personal trait-like features. These idiosyncratic biases and their neural underpinnings are often overlooked in studies on the physiology underlying perception. A potential candidate mechanism reflecting such idiosyncratic biases could be spontaneous alpha band activity, a prominent brain rhythm known to influence perceptual reports in general. Using a temporal order judgment task, we here tested the hypothesis that alpha power reflects the overcoming of an idiosyncratic bias. Importantly, to understand the interplay between idiosyncratic biases and contextual (temporary) biases induced by experimental manipulations, we quantified this relation before and after temporal recalibration. Using EEG recordings in human participants (male and female), we find that prestimulus frontal alpha power correlates with the tendency to respond relative to an individual's own idiosyncratic bias, with stronger alpha leading to responses matching the bias. In contrast, alpha power does not predict response correctness. These results also hold after temporal recalibration and are specific to the alpha band, suggesting that alpha band activity reflects, directly or indirectly, processes that help to overcome an individual's momentary bias in perception. We propose that, combined with established roles of parietal alpha in the encoding of sensory information, frontal alpha reflects complementary mechanisms influencing perceptual decisions. SIGNIFICANCE STATEMENT: The brain is a biased organ, frequently generating systematically distorted percepts of the world, leading each of us to evolve in our own subjective reality. However, such biases are often overlooked or considered noise when studying the neural mechanisms underlying perception. We show that spontaneous alpha band activity predicts the degree of biasedness of human choices in a time perception task, suggesting that alpha activity indexes processes needed to overcome an individual's idiosyncratic bias. This result provides a window onto the neural underpinnings of subjective perception, and offers the possibility to quantify or manipulate such priors in future studies.
21. Badde S, Navarro KT, Landy MS. Modality-specific attention attenuates visual-tactile integration and recalibration effects by reducing prior expectations of a common source for vision and touch. Cognition 2020; 197:104170. [PMID: 32036027] [DOI: 10.1016/j.cognition.2019.104170]
Abstract
At any moment in time, streams of information reach the brain through the different senses. Given this wealth of noisy information, it is essential that we select information of relevance (a function fulfilled by attention) and infer its causal structure to eventually take advantage of redundancies across the senses. Yet, the role of selective attention during causal inference in cross-modal perception is unknown. We tested experimentally whether the distribution of attention across vision and touch enhances cross-modal spatial integration (visual-tactile ventriloquism effect, Expt. 1) and recalibration (visual-tactile ventriloquism aftereffect, Expt. 2) compared to modality-specific attention, and then used causal-inference modeling to isolate the mechanisms behind the attentional modulation. In both experiments, we found stronger effects of vision on touch under distributed than under modality-specific attention. Model comparison confirmed that participants used Bayes-optimal causal inference to localize visual and tactile stimuli presented as part of a visual-tactile stimulus pair, whereas simultaneously collected unity judgments (indicating whether the visual-tactile pair was perceived as spatially aligned) relied on a sub-optimal heuristic. The best-fitting model revealed that attention modulated both sensory and cognitive components of causal inference. First, distributed attention led to an increase of sensory noise compared to selective attention toward one modality. Second, attending to both modalities strengthened the stimulus-independent expectation that the two signals belong together, the prior probability of a common source for vision and touch. Yet, only the increase in the expectation of vision and touch sharing a common source was able to explain the observed enhancement of visual-tactile integration and recalibration effects with distributed attention. In contrast, the change in sensory noise explained only a fraction of the observed enhancements, as its consequences vary with the overall level of noise and stimulus congruency: increased sensory noise leads to enhanced integration effects for visual-tactile pairs with a large spatial discrepancy, but reduced integration effects for stimuli with a small or no cross-modal discrepancy. In sum, our study indicates a weak a priori association between visual and tactile spatial signals that can be strengthened by distributing attention across both modalities.
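The causal-inference model class referred to here computes the posterior probability that vision and touch share a common source and weights the corresponding location estimates accordingly. Below is a compact Python sketch of the standard Gaussian causal-inference observer with model averaging; the parameter values, the zero-mean spatial prior, and the function name are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np

def tactile_estimate(x_v, x_t, sv, st, sp, p_common):
    """Causal-inference (model-averaging) estimate of tactile location.
    x_v, x_t: noisy visual and tactile measurements; sv, st: sensory sds;
    sp: sd of a zero-mean spatial prior; p_common: prior P(common source)."""
    # Likelihood of the measurement pair under a common source (C = 1)
    v1 = sv**2 * st**2 + sv**2 * sp**2 + st**2 * sp**2
    like_c1 = np.exp(-0.5 * ((x_v - x_t)**2 * sp**2 + x_v**2 * st**2
                             + x_t**2 * sv**2) / v1) / (2 * np.pi * np.sqrt(v1))
    # Likelihood under independent sources (C = 2)
    v2 = (sv**2 + sp**2) * (st**2 + sp**2)
    like_c2 = np.exp(-0.5 * (x_v**2 / (sv**2 + sp**2)
                             + x_t**2 / (st**2 + sp**2))) / (2 * np.pi * np.sqrt(v2))
    # Posterior probability of a common source
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))
    # Optimal location estimates under each structure, combined by model averaging
    s_c1 = (x_v / sv**2 + x_t / st**2) / (1 / sv**2 + 1 / st**2 + 1 / sp**2)
    s_c2 = (x_t / st**2) / (1 / st**2 + 1 / sp**2)
    return post_c1 * s_c1 + (1 - post_c1) * s_c2

# A larger p_common (e.g., under distributed attention) pulls touch toward vision
print(tactile_estimate(x_v=10.0, x_t=0.0, sv=2.0, st=4.0, sp=30.0, p_common=0.8))
print(tactile_estimate(x_v=10.0, x_t=0.0, sv=2.0, st=4.0, sp=30.0, p_common=0.3))
```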
Affiliation(s)
- Stephanie Badde
- Department of Psychology and Center of Neural Science, New York University, 6 Washington Place, New York, NY 10003, USA
- Karen T Navarro
- Department of Psychology, University of Minnesota, 75 E River Rd., Minneapolis, MN 55455, USA
- Michael S Landy
- Department of Psychology and Center of Neural Science, New York University, 6 Washington Place, New York, NY 10003, USA
22. Kramer A, Röder B, Bruns P. Feedback Modulates Audio-Visual Spatial Recalibration. Front Integr Neurosci 2020; 13:74. [PMID: 32009913] [PMCID: PMC6979315] [DOI: 10.3389/fnint.2019.00074]
Abstract
In an ever-changing environment, crossmodal recalibration is crucial to maintain precise and coherent spatial estimates across different sensory modalities. Accordingly, it has been found that perceived auditory space is recalibrated toward vision after consistent exposure to spatially misaligned audio-visual stimuli. While this so-called ventriloquism aftereffect (VAE) yields internal consistency between vision and audition, it does not necessarily lead to consistency between the perceptual representation of space and the actual environment. For this purpose, feedback about the true state of the external world might be necessary. Here, we tested whether the size of the VAE is modulated by external feedback and reward. During adaptation, audio-visual stimuli with a fixed spatial discrepancy were presented. Participants had to localize the sound and received feedback about the magnitude of their localization error. In half of the sessions the feedback was based on the position of the visual stimulus and in the other half it was based on the position of the auditory stimulus. An additional monetary reward was given if the localization error fell below a certain threshold that was based on participants’ performance in the pretest. As expected, when error feedback was based on the position of the visual stimulus, auditory localization during adaptation trials shifted toward the position of the visual stimulus. Conversely, feedback based on the position of the auditory stimuli reduced the visual influence on auditory localization (i.e., the ventriloquism effect) and improved sound localization accuracy. After adaptation with error feedback based on the visual stimulus position, a typical auditory VAE (but no visual aftereffect) was observed in subsequent unimodal localization tests. By contrast, when feedback was based on the position of the auditory stimuli during adaptation, no auditory VAE was observed in subsequent unimodal auditory trials. Importantly, in this situation no visual aftereffect was found either. As feedback did not change the physical attributes of the audio-visual stimulation during adaptation, the present findings suggest that crossmodal recalibration is subject to top-down influences. Such top-down influences might help prevent miscalibration of audition toward conflicting visual stimulation in situations in which external feedback indicates that visual information is inaccurate.
Affiliation(s)
- Alexander Kramer
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
- Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
- Patrick Bruns
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
23. Kumpik DP, Campbell C, Schnupp JWH, King AJ. Re-weighting of Sound Localization Cues by Audiovisual Training. Front Neurosci 2019; 13:1164. [PMID: 31802997] [PMCID: PMC6873890] [DOI: 10.3389/fnins.2019.01164]
Abstract
Sound localization requires the integration in the brain of auditory spatial cues generated by interactions with the external ears, head and body. Perceptual learning studies have shown that the relative weighting of these cues can change in a context-dependent fashion if their relative reliability is altered. One factor that may influence this process is vision, which tends to dominate localization judgments when both modalities are present and induces a recalibration of auditory space if they become misaligned. It is not known, however, whether vision can alter the weighting of individual auditory localization cues. Using virtual acoustic space stimuli, we measured changes in subjects’ sound localization biases and binaural localization cue weights after ∼50 min of training on audiovisual tasks in which visual stimuli were either informative or not about the location of broadband sounds. Four different spatial configurations were used in which we varied the relative reliability of the binaural cues: interaural time differences (ITDs) and frequency-dependent interaural level differences (ILDs). In most subjects and experiments, ILDs were weighted more highly than ITDs before training. When visual cues were spatially uninformative, some subjects showed a reduction in auditory localization bias and the relative weighting of ILDs increased after training with congruent binaural cues. ILDs were also upweighted if they were paired with spatially-congruent visual cues, and the largest group-level improvements in sound localization accuracy occurred when both binaural cues were matched to visual stimuli. These data suggest that binaural cue reweighting reflects baseline differences in the relative weights of ILDs and ITDs, but is also shaped by the availability of congruent visual stimuli. Training subjects with consistently misaligned binaural and visual cues produced the ventriloquism aftereffect, i.e., a corresponding shift in auditory localization bias, without affecting the inter-subject variability in sound localization judgments or their binaural cue weights. Our results show that the relative weighting of different auditory localization cues can be changed by training in ways that depend on their reliability as well as the availability of visual spatial information, with the largest improvements in sound localization likely to result from training with fully congruent audiovisual information.
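Binaural cue weights of the kind measured here are commonly estimated by regressing single-trial localization responses on the azimuths signalled by each cue, after the cues have been decorrelated. The Python sketch below illustrates that analysis logic on simulated data; the regression formulation and all values are assumptions for illustration, not the study's code.

```python
import numpy as np

# Simulated trials: each binaural cue signals a (possibly conflicting) azimuth in degrees
rng = np.random.default_rng(7)
itd_az = rng.uniform(-30, 30, 500)                  # azimuth implied by the ITD
ild_az = itd_az + rng.normal(0, 10, 500)            # ILD perturbed to decorrelate the cues
resp = 0.3 * itd_az + 0.7 * ild_az + rng.normal(0, 3, 500)  # an ILD-dominated listener

# Regress responses on both cue azimuths; the coefficients are the cue weights
X = np.column_stack([itd_az, ild_az])
w_itd, w_ild = np.linalg.lstsq(X, resp, rcond=None)[0]
print(f"ITD weight = {w_itd:.2f}, ILD weight = {w_ild:.2f}")  # close to 0.3 and 0.7
```

In this framing, the reported upweighting of ILDs after training corresponds to an increase in the fitted ILD coefficient relative to the ITD coefficient.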
Affiliation(s)
- Daniel P Kumpik
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Connor Campbell
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Jan W H Schnupp
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Andrew J King
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
24. Bruns P. The Ventriloquist Illusion as a Tool to Study Multisensory Processing: An Update. Front Integr Neurosci 2019; 13:51. [PMID: 31572136] [PMCID: PMC6751356] [DOI: 10.3389/fnint.2019.00051]
Abstract
Ventriloquism, the illusion that a voice appears to come from the moving mouth of a puppet rather than from the actual speaker, is one of the classic examples of multisensory processing. In the laboratory, this illusion can be reliably induced by presenting simple meaningless audiovisual stimuli with a spatial discrepancy between the auditory and visual components. Typically, the perceived location of the sound source is biased toward the location of the visual stimulus (the ventriloquism effect). The strength of the visual bias reflects the relative reliability of the visual and auditory inputs as well as prior expectations that the two stimuli originated from the same source. In addition to the ventriloquist illusion, exposure to spatially discrepant audiovisual stimuli results in a subsequent recalibration of unisensory auditory localization (the ventriloquism aftereffect). In the past years, the ventriloquism effect and aftereffect have seen a resurgence as an experimental tool to elucidate basic mechanisms of multisensory integration and learning. For example, recent studies have: (a) revealed top-down influences from the reward and motor systems on cross-modal binding; (b) dissociated recalibration processes operating at different time scales; and (c) identified brain networks involved in the neuronal computations underlying multisensory integration and learning. This mini review article provides a brief overview of established experimental paradigms to measure the ventriloquism effect and aftereffect before summarizing these pathbreaking new advancements. Finally, it is pointed out how the ventriloquism effect and aftereffect could be utilized to address some of the current open questions in the field of multisensory research.
Affiliation(s)
- Patrick Bruns
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
25. Park H, Kayser C. Shared neural underpinnings of multisensory integration and trial-by-trial perceptual recalibration in humans. eLife 2019; 8:47001. [PMID: 31246172] [PMCID: PMC6660215] [DOI: 10.7554/elife.47001]
Abstract
Perception adapts to mismatching multisensory information, both when different cues appear simultaneously and when they appear sequentially. While both multisensory integration and adaptive trial-by-trial recalibration are central for behavior, it remains unknown whether they are mechanistically linked and arise from a common neural substrate. To relate the neural underpinnings of sensory integration and recalibration, we measured whole-brain magnetoencephalography while human participants performed an audio-visual ventriloquist task. Using single-trial multivariate analysis, we localized the perceptually-relevant encoding of multisensory information within and between trials. While we found neural signatures of multisensory integration within temporal and parietal regions, only medial superior parietal activity encoded past and current sensory information and mediated the perceptual recalibration within and between trials. These results highlight a common neural substrate of sensory integration and perceptual recalibration, and reveal a role of medial parietal regions in linking present and previous multisensory evidence to guide adaptive behavior.

A good ventriloquist will make their audience experience an illusion. The speech the spectators hear appears to come from the mouth of the puppet and not from the puppeteer. Moviegoers experience the same illusion: they perceive dialogue as coming from the mouths of the actors on screen, rather than from the loudspeakers mounted on the walls. Known as the ventriloquist effect, this ‘trick’ exists because the brain assumes that sights and sounds which occur at the same time have the same origin, and it therefore combines the two sets of sensory stimuli. A version of the ventriloquist effect can be induced in the laboratory. Participants hear a sound while watching a simple visual stimulus (for instance, a circle) appear on a screen. When asked to pinpoint the origin of the noise, volunteers choose a location shifted towards the circle, even if this was not where the sound came from. In addition, this error persists when the visual stimulus is no longer present: if a standard trial is followed by a trial that features a sound but no circle, participants perceive the sound in the second test as ‘drawn’ towards the direction of the former shift. This is known as the ventriloquist aftereffect. By scanning the brains of healthy volunteers performing this task, Park and Kayser show that a number of brain areas contribute to the ventriloquist effect. All of these regions help to combine what we see with what we hear, but only one maintains representations of the combined sensory inputs over time. Called the medial superior parietal cortex, this area is unique in contributing to both the ventriloquist effect and its aftereffect. We must constantly use past and current sensory information to adapt our behavior to the environment. The results by Park and Kayser shed light on the brain structures that underpin our capacity to combine information from several senses, as well as our ability to encode memories. Such knowledge should be useful to explore how we can make flexible decisions.
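One common way to summarize the trial-by-trial recalibration studied here is a leaky update rule in which an internal audio-visual offset is nudged by a fraction of each trial's discrepancy and biases the next unisensory judgment. The toy Python sketch below illustrates that descriptive model; the learning-rate and decay parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

def simulate_aftereffect(discrepancies, alpha=0.1, decay=0.9):
    """Leaky trial-by-trial recalibration: an internal audio-visual offset is
    nudged by a fraction (alpha) of each trial's discrepancy and decays
    between trials; the offset biases the next unisensory sound judgment."""
    offset = 0.0
    offsets = []
    for d in discrepancies:
        offset = decay * offset + alpha * d   # update after each audio-visual trial
        offsets.append(offset)                # bias applied to the following sound
    return np.array(offsets)

disc = np.tile([15.0, -15.0], 10)             # alternating +/-15 deg discrepancies
print(simulate_aftereffect(disc)[:6])
```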
Affiliation(s)
- Hame Park: Department for Cognitive Neuroscience, Faculty of Biology, Bielefeld University, Bielefeld, Germany; Center of Excellence Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany; Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Christoph Kayser: Department for Cognitive Neuroscience, Faculty of Biology, Bielefeld University, Bielefeld, Germany; Center of Excellence Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany
26
Distinct mechanisms govern recalibration to audio-visual discrepancies in remote and recent history. Sci Rep 2019; 9:8513. [PMID: 31186503 PMCID: PMC6559981 DOI: 10.1038/s41598-019-44984-9] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2018] [Accepted: 05/28/2019] [Indexed: 11/08/2022] Open
Abstract
To maintain perceptual coherence, the brain corrects for discrepancies between the senses. If, for example, lights are consistently offset from sounds, representations of auditory space are remapped to reduce this error (spatial recalibration). While recalibration effects have been observed following both brief and prolonged periods of adaptation, the relative contribution of discrepancies occurring over these timescales is unknown. Here we show that distinct multisensory recalibration mechanisms operate in remote and recent history. To characterise the dynamics of this spatial recalibration, we adapted human participants to audio-visual discrepancies for different durations, from 32 to 256 seconds, and measured the aftereffects on perceived auditory location. Recalibration effects saturated rapidly but decayed slowly, suggesting a combination of transient and sustained adaptation mechanisms. When long-term adaptation to an audio-visual discrepancy was immediately followed by a brief period of de-adaptation to an opposing discrepancy, recalibration was initially cancelled but subsequently reappeared with further testing. These dynamics were best fit by a multiple-exponential model that monitored audio-visual discrepancies over distinct timescales. Recent and remote recalibration mechanisms enable the brain to balance rapid adaptive changes to transient discrepancies that should be quickly forgotten against slower adaptive changes to persistent discrepancies likely to be more permanent.
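The multiple-exponential account sketched in this abstract can be written as a small bank of leaky integrators driven by the same audio-visual discrepancy but leaking at different rates. The time constants, gains, and Euler update below are assumptions for illustration, not the model fitted in the paper; the sketch merely reproduces the qualitative cancellation-then-reappearance dynamics.

```python
import numpy as np

def multi_exponential_recalibration(discrepancy, dt=1.0,
                                    taus=(8.0, 200.0), gains=(0.4, 0.6)):
    """Bank of leaky integrators with distinct time constants (seconds).
    `discrepancy` holds the AV offset (deg) presented at each time step;
    zeros model test periods without audiovisual exposure."""
    states = np.zeros(len(taus))
    aftereffect = []
    for d in discrepancy:
        for i, tau in enumerate(taus):
            states[i] += dt / tau * (d - states[i])  # decay toward the current input
        aftereffect.append(float(np.dot(gains, states)))
    return np.array(aftereffect)

# 256 s of adaptation to a +8 deg offset, 13 s of opposed de-adaptation,
# then unimodal testing: the aftereffect is largely cancelled, then reappears.
stimulus = np.concatenate([np.full(256, 8.0), np.full(13, -8.0), np.zeros(120)])
ae = multi_exponential_recalibration(stimulus)
print(f"end of adaptation:    {ae[255]:+.2f} deg")
print(f"end of de-adaptation: {ae[268]:+.2f} deg")
print(f"60 s into testing:    {ae[328]:+.2f} deg")
```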
27
Rapid recalibration to audiovisual asynchrony follows the physical-not the perceived-temporal order. Atten Percept Psychophys 2019; 80:2060-2068. [PMID: 29968078 DOI: 10.3758/s13414-018-1540-9] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
In natural scenes, audiovisual events deriving from the same source are synchronized at their origin. However, from the perspective of the observer, there are likely to be significant multisensory delays due to physical and neural latencies. Fortunately, our brain appears to compensate for the resulting latency differences by rapidly adapting to asynchronous audiovisual events, shifting the point of subjective synchrony (PSS) toward the leading modality of the most recent event. Here we examined whether it is the perceived modality order of this prior lag or its physical order that determines the direction of the subsequent rapid recalibration. On each experimental trial, a brief tone pip and flash were presented across a range of stimulus onset asynchronies (SOAs). The participants' task alternated over trials: on adaptor trials, audition either led or lagged vision with fixed SOAs, and participants judged the order of the audiovisual event; on test trials, the SOA as well as the modality order varied randomly, and participants judged whether or not the event was synchronized. For test trials, we showed that the PSS shifted in the direction of the physical rather than the perceived (reported) modality order of the preceding adaptor trial. These results suggest that rapid temporal recalibration is determined by the physical timing of the preceding events, not by one's prior perceptual decisions.
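The paper's conclusion can be phrased as an observer model in which the point of subjective synchrony on each test trial is shifted by the physical sign of the preceding adaptor's SOA, regardless of how that adaptor was judged. In the sketch below, the shift size, the Gaussian synchrony window, and the weighted-mean PSS readout are all illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(7)

def p_synchronous(test_soa, pss, width=120.0):
    """Probability of a 'synchronous' judgment: Gaussian window around the PSS (ms)."""
    return np.exp(-0.5 * ((test_soa - pss) / width) ** 2)

def simulate(n_pairs=2000, beta=15.0):
    """Each test trial follows an adaptor whose *physical* order (sign of its
    SOA; +1 = audio led) shifts the PSS by beta ms, whatever the observer
    reported about that adaptor."""
    results = {1.0: [], -1.0: []}
    for _ in range(n_pairs):
        lead = rng.choice([1.0, -1.0])       # physical order on the adaptor trial
        pss = beta * lead                    # rapid recalibration of the PSS
        soa = rng.choice([-200.0, -100.0, 0.0, 100.0, 200.0])
        results[lead].append((soa, p_synchronous(soa, pss)))
    return results

for lead, data in simulate().items():
    soas, ps = np.array(data).T
    est_pss = np.average(soas, weights=ps)   # crude PSS readout from the window
    order = "audio" if lead > 0 else "vision"
    print(f"adaptor {order}-led: estimated test-trial PSS = {est_pss:+.1f} ms")
```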
28
Hanada GM, Ahveninen J, Calabro FJ, Yengo-Kahn A, Vaina LM. Cross-Modal Cue Effects in Motion Processing. Multisens Res 2018; 32:45-65. [PMID: 30613468 PMCID: PMC6317375 DOI: 10.1163/22134808-20181313] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
The everyday environment brings to our sensory systems competing inputs from different modalities. The ability to filter these multisensory inputs in order to identify and efficiently exploit useful spatial cues is necessary to detect and process the relevant information. In the present study, we investigated how feature-based attention affects the detection of motion across sensory modalities. We were interested in determining how subjects use intramodal, cross-modal auditory, and combined audiovisual motion cues to attend to specific visual motion signals. The results showed that in most cases, both the visual and the auditory cues enhance feature-based orienting to a transparent visual motion pattern presented among distractor motion patterns. Whereas previous studies have shown cross-modal effects of spatial attention, our results demonstrate a spread of cross-modal feature-based attention cues, which have been matched for the detection threshold of the visual target. These effects were very robust in comparisons of the effects of valid vs. invalid cues, as well as in comparisons between cued and uncued valid trials. The effect of intramodal visual, cross-modal auditory, and bimodal cues also increased as a function of motion-cue salience. Our results suggest that orienting to visual motion patterns among distractors can be facilitated not only by intramodal priors, but also by feature-based cross-modal information from the auditory system.
Affiliation(s)
- G. M. Hanada: Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA
- J. Ahveninen: Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- F. J. Calabro: Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA; Department of Psychiatry and Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- A. Yengo-Kahn: Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA
- L. M. Vaina: Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA; Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Department of Neurology, Massachusetts General Hospital and Brigham and Women’s Hospital, MA, USA
29
Xu J, Bi T, Wu J, Meng F, Wang K, Hu J, Han X, Zhang J, Zhou X, Keniston L, Yu L. Spatial receptive field shift by preceding cross-modal stimulation in the cat superior colliculus. J Physiol 2018; 596:5033-5050. [PMID: 30144059 DOI: 10.1113/jp275427] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2018] [Accepted: 08/21/2018] [Indexed: 12/11/2022] Open
Abstract
KEY POINTS: It has been known for some time that sensory information of one type can bias the spatial perception of another modality. However, there is a lack of evidence of this occurring in individual neurons. In the present study, we found that the spatial receptive field of superior colliculus multisensory neurons could be dynamically shifted by a preceding stimulus in a different modality. The extent to which the receptive field shifted was dependent on both the temporal and spatial gaps between the preceding and following stimuli, as well as the salience of the preceding stimulus. This result provides a neural mechanism that could underlie the process of cross-modal spatial calibration.

ABSTRACT: Psychophysical studies have shown that the different senses can be spatially entrained by each other. This can be observed in certain phenomena, such as ventriloquism, in which a visual stimulus can attract the perceived location of a spatially discordant sound. However, the neural mechanism underlying this cross-modal spatial recalibration has remained unclear, as has whether it takes place dynamically. We explored these issues in multisensory neurons of the cat superior colliculus (SC), a midbrain structure involved in both cross-modal and sensorimotor integration. Sequential cross-modal stimulation showed that the preceding stimulus can shift the receptive field (RF) of the lagging response. This cross-modal spatial calibration took place in both auditory and visual RFs, although auditory RFs shifted slightly more. By contrast, if a preceding stimulus was from the same modality, it failed to induce a similarly substantial RF shift. The extent of the RF shift was dependent on both the temporal and spatial gaps between the preceding and following stimuli, as well as the salience of the preceding stimulus. A narrow time gap and high stimulus salience were able to induce larger RF shifts. In addition, when both visual and auditory stimuli were presented simultaneously, a substantial RF shift toward the location-fixed stimulus was also induced. These results, taken together, reveal an online cross-modal process and reflect the details of the organization of SC inter-sensory spatial calibration.
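The reported dependence of the receptive-field shift on the spatial gap, the temporal gap, and stimulus salience can be summarized in a phenomenological form. The shape chosen below (Gaussian spatial falloff, exponential temporal decay, linear salience scaling) and every constant in it are guesses for illustration, not tuning functions measured in the study.

```python
import numpy as np

def rf_shift(spatial_gap_deg, temporal_gap_ms, salience,
             k=0.35, tau_ms=300.0, sigma_deg=25.0):
    """Shift of the lagging response's RF centre toward the preceding
    cross-modal stimulus: larger for salient cues, smaller for long
    temporal gaps, and falling off with the spatial gap."""
    spatial_weight = np.exp(-0.5 * (spatial_gap_deg / sigma_deg) ** 2)
    temporal_weight = np.exp(-temporal_gap_ms / tau_ms)
    return k * salience * temporal_weight * spatial_weight * spatial_gap_deg

for gap_ms in (100, 400, 800):
    shift = rf_shift(spatial_gap_deg=20.0, temporal_gap_ms=gap_ms, salience=1.0)
    print(f"temporal gap {gap_ms:>3} ms -> RF shift {shift:.1f} deg toward the cue")
```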
Affiliation(s)
- Jinghong Xu, Tingting Bi, Jing Wu, Fanzhu Meng, Kun Wang, Jiawei Hu, Xiao Han, Jiping Zhang, Xiaoming Zhou, Liping Yu: Key Laboratory of Brain Functional Genomics (East China Normal University), Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics (East China Normal University), School of Life Science, East China Normal University, Shanghai, China
- Les Keniston: Department of Physical Therapy, University of Maryland Eastern Shore, Princess Anne, MD, USA
30
Bosen AK, Fleming JT, Allen PD, O’Neill WE, Paige GD. Multiple time scales of the ventriloquism aftereffect. PLoS One 2018; 13:e0200930. [PMID: 30067790 PMCID: PMC6070234 DOI: 10.1371/journal.pone.0200930] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2017] [Accepted: 07/05/2018] [Indexed: 11/18/2022] Open
Abstract
The ventriloquism aftereffect (VAE) refers to a shift in auditory spatial perception following exposure to a spatial disparity between auditory and visual stimuli. The VAE has previously been measured on two distinct time scales. Hundreds or thousands of exposures to an audio-visual spatial disparity produce an enduring VAE that persists after exposure ceases. Exposure to a single audio-visual spatial disparity produces an immediate VAE that decays over seconds. To determine whether these phenomena are two extremes of a continuum or represent distinct processes, we conducted an experiment with normal-hearing listeners that measured the VAE in response to a repeated, constant audio-visual disparity sequence, both immediately after exposure to each audio-visual disparity and after the end of the sequence. In each experimental session, subjects were exposed to sequences of auditory and visual targets that were constantly offset by +8° or −8° in azimuth from one another, then localized auditory targets presented in isolation following each sequence. Eye position was controlled throughout the experiment to avoid the effects of gaze on auditory localization. In contrast to other studies that did not control eye position, we found both a large shift in auditory perception that decayed rapidly after each AV disparity exposure and a gradual shift in auditory perception that grew over time and persisted after exposure to the AV disparity ceased. We modeled the temporal and spatial properties of the measured auditory shifts using grey-box nonlinear system identification and found that two models could explain the data equally well. In the power model, the temporal decay of the ventriloquism aftereffect was modeled with a power-law relationship. This causes an initial rapid drop in auditory shift, followed by a long tail that accumulates with repeated exposure to audio-visual disparity. In the double exponential model, two separate processes were required to explain the data: one that accumulated and decayed exponentially and another that slowly integrated over time. Both models fit the data best when the spatial spread of the ventriloquism aftereffect was limited to a window around the location of the audio-visual disparity. We directly compare the predictions made by each model and suggest additional measurements that could help distinguish which model best describes the mechanisms underlying the VAE.
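The two model classes named in this abstract, a power-law decay and a double-exponential (fast decaying plus slowly integrating) process, can be compared schematically. The sketch below uses assumed gains and time constants and a simplified exposure schedule; it is not the grey-box system identification performed in the paper.

```python
import numpy as np

def power_model(disparities, gaps_s, a=0.05, p=0.7):
    """Each AV exposure adds a shift that decays as a power law of elapsed
    time; contributions from repeated exposures accumulate."""
    t = np.cumsum(gaps_s)
    shift = np.zeros_like(t)
    for i, t_i in enumerate(t):
        age = t[i:] - t_i + 1.0        # seconds since exposure i (+1 avoids 0**-p)
        shift[i:] += a * disparities[i] * age ** (-p)
    return shift

def double_exponential_model(disparities, gaps_s,
                             a_fast=0.05, tau_fast=4.0, a_slow=0.005):
    """A fast process that accumulates and decays exponentially plus a slow
    process that simply integrates the disparity."""
    fast, slow, out = 0.0, 0.0, []
    for d, dt in zip(disparities, gaps_s):
        fast = fast * np.exp(-dt / tau_fast) + a_fast * d
        slow += a_slow * d             # negligible decay on this timescale
        out.append(fast + slow)
    return np.array(out)

disparities = np.full(50, 8.0)         # fifty exposures to an 8 deg AV offset
gaps_s = np.full(50, 2.0)              # one exposure every 2 s
print(f"power-law model, final shift:          {power_model(disparities, gaps_s)[-1]:.2f} deg")
print(f"double-exponential model, final shift: {double_exponential_model(disparities, gaps_s)[-1]:.2f} deg")
```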
Affiliation(s)
- Adam K. Bosen: Department of Biomedical Engineering, University of Rochester, Rochester, NY, United States of America
- Justin T. Fleming, Paul D. Allen, William E. O’Neill, Gary D. Paige: Department of Neurobiology and Anatomy, University of Rochester, Rochester, NY, United States of America
31
Ball F, Fuehrmann F, Stratil F, Noesselt T. Phasic and sustained interactions of multisensory interplay and temporal expectation. Sci Rep 2018; 8:10208. [PMID: 29976998 PMCID: PMC6033875 DOI: 10.1038/s41598-018-28495-7] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2018] [Accepted: 06/25/2018] [Indexed: 12/18/2022] Open
Abstract
Every moment, organisms are confronted with complex streams of information which they use to generate a reliable mental model of the world. There is converging evidence for several optimization mechanisms instrumental in integrating (or segregating) incoming information; among them are multisensory interplay (MSI) and temporal expectation (TE). Both mechanisms can account for enhanced perceptual sensitivity and are well studied in isolation; how these two mechanisms interact is currently less well known. Here, we tested for TE effects in a series of four psychophysical experiments in uni- and multisensory contexts with different levels of modality-related and spatial uncertainty. We found that TE enhanced perceptual sensitivity for the multisensory relative to the best unisensory condition (i.e. multisensory facilitation according to the max-criterion). In the latter condition, TE effects even vanished if stimulus-related spatial uncertainty was increased. Accordingly, computational modelling indicated that TE, modality-related uncertainty, and spatial uncertainty predict multisensory facilitation. Finally, the analysis of stimulus history revealed that matching expectation at trial n-1 selectively improves multisensory performance irrespective of stimulus-related uncertainty. Together, our results indicate that the benefits of multisensory stimulation are enhanced by TE especially in noisy environments, which allows for more robust information extraction to boost performance over both short and sustained time ranges.
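The max-criterion used here is easy to state in code: multisensory facilitation requires the audiovisual condition to beat the best unisensory condition. The d' values below are invented solely to illustrate the comparison.

```python
def max_criterion_gain(dprime_a, dprime_v, dprime_av):
    """Multisensory gain under the max-criterion: audiovisual sensitivity
    minus the best unisensory sensitivity."""
    return dprime_av - max(dprime_a, dprime_v)

# illustrative sensitivities with and without temporal expectation (TE)
conditions = {
    "without TE": dict(dprime_a=1.0, dprime_v=1.2, dprime_av=1.3),
    "with TE":    dict(dprime_a=1.1, dprime_v=1.3, dprime_av=1.8),
}
for name, d in conditions.items():
    print(f"{name:10s}: gain over best unisensory = {max_criterion_gain(**d):+.2f}")
```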
Affiliation(s)
- Felix Ball: Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany; Center for Behavioural Brain Sciences, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
- Fabienne Fuehrmann: Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
- Fenja Stratil: Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
- Toemme Noesselt: Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany; Center for Behavioural Brain Sciences, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
32
Wan Y, Chen L. Temporal Reference, Attentional Modulation, and Crossmodal Assimilation. Front Comput Neurosci 2018; 12:39. [PMID: 29922143 PMCID: PMC5996128 DOI: 10.3389/fncom.2018.00039] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2018] [Accepted: 05/16/2018] [Indexed: 11/18/2022] Open
Abstract
The crossmodal assimilation effect refers to the prominent phenomenon by which the ensemble mean extracted from a sequence of task-irrelevant distractor events, such as auditory intervals, assimilates/biases the perception of subsequent task-relevant target events in another sensory modality, such as visual intervals. In the current experiments, using the visual Ternus display, we examined the roles of the temporal reference, materialized as the time information accumulated before the onset of the target event, as well as attentional modulation, in crossmodal temporal interaction. Specifically, we examined how the global time interval, the mean auditory inter-interval, and the last interval in the auditory sequence assimilate and bias the subsequent percept of visual Ternus motion (element motion vs. group motion). We demonstrated that both the ensemble (geometric) mean and the last interval in the auditory sequence contribute to biasing the percept of visual motion. A longer mean (or last) interval elicited more reports of group motion, whereas shorter mean (or last) auditory intervals gave rise to a more dominant percept of element motion. Importantly, observers showed dynamic adaptation to the temporal reference of crossmodal assimilation: when the target visual Ternus stimuli were separated by a long gap interval from the preceding sound sequence, the assimilation effect of the ensemble mean was reduced. Our findings suggest that crossmodal assimilation relies on a suitable temporal reference at the adaptation level, and they reveal a general temporal perceptual grouping principle underlying complex audio-visual interactions in everyday dynamic situations.
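The ensemble statistic at the heart of this account is the geometric mean of the preceding auditory intervals. The sketch below combines that mean with the last interval to predict the Ternus report; the weights and the group/element threshold are illustrative assumptions rather than fitted values.

```python
import numpy as np

def predicted_report(intervals_ms, threshold_ms=200.0, w_mean=0.6, w_last=0.4):
    """The upcoming Ternus percept is pulled both by the geometric mean of
    the preceding auditory intervals and by the last interval; longer
    effective intervals favour 'group motion' reports."""
    geo_mean = np.exp(np.mean(np.log(intervals_ms)))   # ensemble (geometric) mean
    effective = w_mean * geo_mean + w_last * intervals_ms[-1]
    return effective, ("group motion" if effective > threshold_ms else "element motion")

for seq in ([120, 140, 110, 130, 125], [260, 240, 280, 250, 270]):
    eff, report = predicted_report(np.asarray(seq, dtype=float))
    print(f"auditory intervals {seq} -> effective {eff:.0f} ms -> {report}")
```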
Affiliation(s)
- Lihan Chen: School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
33
Jicol C, Proulx MJ, Pollick FE, Petrini K. Long-term music training modulates the recalibration of audiovisual simultaneity. Exp Brain Res 2018; 236:1869-1880. [PMID: 29687204 DOI: 10.1007/s00221-018-5269-4] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2018] [Accepted: 04/17/2018] [Indexed: 11/27/2022]
Abstract
To overcome differences in physical transmission time and neural processing, the brain adaptively recalibrates the point of simultaneity between auditory and visual signals by adapting to audiovisual asynchronies. Here, we examine whether the prolonged recalibration process of passively sensed visual and auditory signals is affected by naturally occurring multisensory training known to enhance audiovisual perceptual accuracy. Hence, we asked a group of drummers, a group of non-drummer musicians, and a group of non-musicians to judge the audiovisual simultaneity of musical and non-musical audiovisual events, before and after adaptation with two fixed audiovisual asynchronies. We found that the recalibration for the musicians and drummers was in the opposite direction (sound leading vision) to that of non-musicians (vision leading sound), and that it changed with both increased music training and increased perceptual accuracy (i.e. the ability to detect asynchrony). Our findings demonstrate that long-term musical training reshapes the way humans adaptively recalibrate simultaneity between auditory and visual signals.
Affiliation(s)
- Crescent Jicol: Department of Psychology, University of Bath, Bath, UK; Department of Computer Science, University of Bath, Claverton Down, Bath, BA2 7AY, UK
- Karin Petrini: Department of Psychology, University of Bath, Bath, UK
34
Bruns P, Röder B. Spatial and frequency specificity of the ventriloquism aftereffect revisited. Psychol Res 2017; 83:1400-1415. [PMID: 29285647 DOI: 10.1007/s00426-017-0965-4] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2017] [Accepted: 12/18/2017] [Indexed: 11/28/2022]
Abstract
Exposure to audiovisual stimuli with a consistent spatial misalignment seems to result in a recalibration of unisensory auditory spatial representations. Previous studies have suggested that this so-called ventriloquism aftereffect is confined to the trained region of space, but they have yielded inconsistent results as to whether or not recalibration generalizes to untrained sound frequencies. Here, we reassessed the spatial and frequency specificity of the ventriloquism aftereffect by testing whether auditory spatial perception can be independently recalibrated for two different sound frequencies and/or at two different spatial locations. Recalibration was confined to locations within the trained hemifield, suggesting that spatial representations were independently adjusted for the two hemifields. The frequency specificity of the ventriloquism aftereffect depended on the presence or absence of conflicting audiovisual adaptation stimuli within the same hemifield. Moreover, adaptation to two different sound frequencies in opposite directions (leftward vs. rightward) resulted in a selective suppression of leftward recalibration, even when the adapting stimuli were presented in different hemifields. Thus, instead of representing a fixed stimulus-driven process, cross-modal recalibration seems to critically depend on the sensory context and takes into account inconsistencies in the cross-modal input.
Affiliation(s)
- Patrick Bruns: Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146, Hamburg, Germany; Department of Cognitive, Linguistic and Psychological Sciences, Brown University, 190 Thayer Street, Providence, RI, 02912, USA
- Brigitte Röder: Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146, Hamburg, Germany
35
The role of auditory cortex in the spatial ventriloquism aftereffect. Neuroimage 2017; 162:257-268. [PMID: 28889003 DOI: 10.1016/j.neuroimage.2017.09.002] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2017] [Revised: 08/15/2017] [Accepted: 09/01/2017] [Indexed: 11/21/2022] Open
Abstract
Cross-modal recalibration allows the brain to maintain coherent sensory representations of the world. Using functional magnetic resonance imaging (fMRI), the present study aimed at identifying the neural mechanisms underlying recalibration in an audiovisual ventriloquism aftereffect paradigm. Participants performed a unimodal sound localization task before and after they were exposed to adaptation blocks, in which sounds were paired with spatially disparate visual stimuli offset by 14° to the right. Behavioral results showed a significant rightward shift in sound localization following adaptation, indicating a ventriloquism aftereffect. Regarding the fMRI results, the left and right planum temporale (lPT/rPT) were found to respond more to contralateral sounds than to central sounds at pretest. Contrasting posttest with pretest blocks revealed significantly enhanced fMRI signals in the space-sensitive lPT after adaptation, matching the behavioral rightward shift in sound localization. Moreover, a region-of-interest analysis in lPT/rPT revealed that lPT activity correlated positively with the localization shift for right-side sounds, whereas rPT activity correlated negatively with the localization shift for left-side and central sounds. Finally, using functional connectivity analysis, we observed enhanced coupling of the lPT with left and right inferior parietal areas as well as left motor regions following adaptation, and a decoupling of lPT/rPT from the contralateral auditory cortex, which scaled with participants' degree of adaptation. Together, the fMRI results suggest that cross-modal spatial recalibration is accomplished by an adjustment of unisensory representations in low-level auditory cortex. Such persistent adjustments of low-level sensory representations seem to be mediated by the interplay with higher-level spatial representations in parietal cortex.
36
Multisensory Perception of Contradictory Information in an Environment of Varying Reliability: Evidence for Conscious Perception and Optimal Causal Inference. Sci Rep 2017; 7:3167. [PMID: 28600573 PMCID: PMC5466670 DOI: 10.1038/s41598-017-03521-2] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2016] [Accepted: 05/01/2017] [Indexed: 11/23/2022] Open
Abstract
Two psychophysical experiments examined multisensory integration of visual-auditory (Experiment 1) and visual-tactile-auditory (Experiment 2) signals. Participants judged the location of these multimodal signals relative to a standard presented at the median plane of the body. A cue conflict was induced by presenting the visual signals with a constant spatial discrepancy to the other modalities. Extending previous studies, the reliability of certain modalities (visual in Experiment 1, visual and tactile in Experiment 2) was varied from trial to trial by presenting signals with either strong or weak location information (e.g., a relatively dense or dispersed dot cloud as the visual stimulus). We investigated how participants would adapt to the cue conflict from the contradictory information under these varying reliability conditions and whether participants had insight into their performance. During the course of both experiments, participants switched from an integration strategy to a selection strategy in Experiment 1 and to a calibration strategy in Experiment 2. Simulations of various multisensory perception strategies suggested that optimal causal inference in a varying-reliability environment depends not only on the amount of multimodal discrepancy, but also on the relative reliability of stimuli across the reliability conditions.
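The 'optimal causal inference' that the simulations point to is usually formalized as Bayesian inference over one versus two causes, in the spirit of Koerding et al. (2007). The sketch below implements that generic model for the audiovisual case, assuming a zero-mean spatial prior and model averaging; it is not the specific simulation set reported in the paper.

```python
import numpy as np

def auditory_estimate(x_v, x_a, sigma_v, sigma_a, p_common=0.5, sigma_p=30.0):
    """Infer whether visual (x_v) and auditory (x_a) measurements share a
    cause, then average the fused and segregated auditory estimates by the
    posterior probability of a common cause."""
    var_v, var_a, var_p = sigma_v**2, sigma_a**2, sigma_p**2
    # likelihood of the measurement pair under one common cause
    d1 = var_v * var_a + var_v * var_p + var_a * var_p
    like_c1 = np.exp(-0.5 * ((x_v - x_a)**2 * var_p + x_v**2 * var_a
                             + x_a**2 * var_v) / d1) / (2 * np.pi * np.sqrt(d1))
    # likelihood under two independent causes
    d2 = (var_v + var_p) * (var_a + var_p)
    like_c2 = np.exp(-0.5 * (x_v**2 / (var_v + var_p)
                             + x_a**2 / (var_a + var_p))) / (2 * np.pi * np.sqrt(d2))
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))
    # conditional estimates: precision-weighted fusion vs. audition alone
    s_fused = (x_v / var_v + x_a / var_a) / (1 / var_v + 1 / var_a + 1 / var_p)
    s_alone = (x_a / var_a) / (1 / var_a + 1 / var_p)
    return post_c1 * s_fused + (1 - post_c1) * s_alone

# strong vs. weak location information in the visual cue, offset 10 deg from the sound
for sigma_v in (2.0, 15.0):
    est = auditory_estimate(x_v=10.0, x_a=0.0, sigma_v=sigma_v, sigma_a=6.0)
    print(f"visual sd {sigma_v:>4.1f} deg -> auditory estimate {est:+.2f} deg")
```

The reliable visual cue pulls the auditory estimate strongly toward it, while the dispersed cue pulls far less, mirroring the reliability manipulation in the experiments.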
37
The Impact of Feedback on the Different Time Courses of Multisensory Temporal Recalibration. Neural Plast 2017; 2017:3478742. [PMID: 28316841 PMCID: PMC5339631 DOI: 10.1155/2017/3478742] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2016] [Revised: 01/14/2017] [Accepted: 01/26/2017] [Indexed: 11/18/2022] Open
Abstract
The capacity to rapidly adjust perceptual representations confers a fundamental advantage when confronted with a constantly changing world. Unexplored is how feedback regarding sensory judgments (top-down factors) interacts with sensory statistics (bottom-up factors) to drive long- and short-term recalibration of multisensory perceptual representations. Here, we examined the time course of both cumulative and rapid temporal perceptual recalibration for individuals completing an audiovisual simultaneity judgment task in which they were provided with varying degrees of feedback. We find that in the presence of feedback (as opposed to simple sensory exposure) temporal recalibration is more robust. Additionally, differential time courses are seen for cumulative and rapid recalibration dependent upon the nature of the feedback provided. Whereas cumulative recalibration effects relied more heavily on feedback that informs (i.e., negative feedback) rather than confirms (i.e., positive feedback) the judgment, rapid recalibration shows the opposite tendency. Furthermore, differential effects on rapid and cumulative recalibration were seen when the reliability of feedback was altered. Collectively, our findings illustrate that feedback signals promote and sustain audiovisual recalibration over the course of cumulative learning and enhance rapid trial-to-trial learning. Furthermore, given the differential effects seen for cumulative and rapid recalibration, these processes may function via distinct mechanisms.
38
Accumulation and decay of visual capture and the ventriloquism aftereffect caused by brief audio-visual disparities. Exp Brain Res 2016; 235:585-595. [PMID: 27837258 DOI: 10.1007/s00221-016-4820-4] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2016] [Accepted: 11/03/2016] [Indexed: 10/20/2022]
Abstract
Visual capture and the ventriloquism aftereffect resolve spatial disparities of incongruent auditory-visual (AV) objects by shifting auditory spatial perception to align with vision. Here, we demonstrated the distinct temporal characteristics of visual capture and the ventriloquism aftereffect in response to brief AV disparities. In a set of experiments, subjects localized either the auditory component of AV targets (A within AV) or a second sound presented at varying delays (1-20 s) after AV exposure (A2 after AV). AV targets were trains of brief presentations (1 or 20) covering a ±30° azimuthal range, with a ±8° (right or left) disparity. We found that the magnitude of visual capture generally reached its peak within a single AV pair and did not dissipate with time, while the ventriloquism aftereffect accumulated with repetitions of AV pairs and dissipated with time. Additionally, the magnitude of the auditory shift induced by each phenomenon was uncorrelated across listeners, and visual capture was unaffected by subsequent auditory targets, indicating that visual capture and the ventriloquism aftereffect are separate mechanisms with distinct effects on auditory spatial perception. Our results indicate that visual capture is a 'sample-and-hold' process that binds related objects and stores the combined percept in memory, whereas the ventriloquism aftereffect is a 'leaky integrator' process that accumulates with experience and decays with time to compensate for cross-modal disparities.
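The contrast drawn here between a 'sample-and-hold' capture process and a 'leaky integrator' aftereffect can be made concrete with two update rules, sketched below with invented constants rather than values estimated from the experiments.

```python
import numpy as np

def simulate(disparities, gaps_s, capture_gain=0.9, alpha=0.12, tau_s=15.0):
    """Visual capture is immediate, full-sized, and memory-less on every AV
    pair; the aftereffect accumulates a fraction of each disparity and
    leaks between pairs with time constant tau_s."""
    aftereffect, trace = 0.0, []
    for d, gap in zip(disparities, gaps_s):
        capture = capture_gain * d                                    # sample-and-hold
        aftereffect = aftereffect * np.exp(-gap / tau_s) + alpha * d  # leaky integrator
        trace.append((capture, aftereffect))
    return trace

trace = simulate(np.full(20, 8.0), np.full(20, 3.0))
print("AV pair  1: capture %.1f deg, aftereffect %.1f deg" % trace[0])
print("AV pair 20: capture %.1f deg, aftereffect %.1f deg" % trace[-1])
```

With these settings, capture is full-sized from the first pair while the aftereffect builds gradually toward an asymptote, matching the qualitative dissociation the abstract reports.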
39
Lüttke CS, Ekman M, van Gerven MAJ, de Lange FP. McGurk illusion recalibrates subsequent auditory perception. Sci Rep 2016; 6:32891. [PMID: 27611960 PMCID: PMC5017187 DOI: 10.1038/srep32891] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2016] [Accepted: 08/08/2016] [Indexed: 11/09/2022] Open
Abstract
Visual information can alter auditory perception. This is clearly illustrated by the well-known McGurk illusion, in which an auditory /aba/ and a visual /aga/ are merged into the percept ‘ada’. It is less clear, however, whether such a change in perception may recalibrate subsequent perception. Here we asked whether the altered auditory perception due to the McGurk illusion affects subsequent auditory perception, i.e. whether this process of fusion may cause a recalibration of the auditory boundaries between phonemes. Participants categorized auditory and audiovisual speech stimuli as /aba/, /ada/ or /aga/ while activity patterns in their auditory cortices were recorded using fMRI. Interestingly, following a McGurk illusion, an auditory /aba/ was more often misperceived as ‘ada’. Furthermore, we observed a neural counterpart of this recalibration in the early auditory cortex. When the auditory input /aba/ was perceived as ‘ada’, activity patterns bore stronger resemblance to activity patterns elicited by /ada/ sounds than when they were correctly perceived as /aba/. Our results suggest that upon experiencing the McGurk illusion, the brain shifts the neural representation of an /aba/ sound towards /ada/, culminating in a recalibration of the perception of subsequent auditory input.
Affiliation(s)
- Claudia S Lüttke, Matthias Ekman, Marcel A J van Gerven, Floris P de Lange: Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, the Netherlands
40
Noel JP, De Niear M, Van der Burg E, Wallace MT. Audiovisual Simultaneity Judgment and Rapid Recalibration throughout the Lifespan. PLoS One 2016; 11:e0161698. [PMID: 27551918 PMCID: PMC4994953 DOI: 10.1371/journal.pone.0161698] [Citation(s) in RCA: 48] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2016] [Accepted: 08/10/2016] [Indexed: 11/18/2022] Open
Abstract
Multisensory interactions are well established to convey an array of perceptual and behavioral benefits. One of the key features of multisensory interactions is the temporal structure of the stimuli combined. In an effort to better characterize how temporal factors influence multisensory interactions across the lifespan, we examined audiovisual simultaneity judgment and the degree of rapid recalibration to paired audiovisual stimuli (Flash-Beep and Speech) in a sample of 220 participants ranging from 7 to 86 years of age. Results demonstrate a surprisingly protracted developmental time-course for both audiovisual simultaneity judgment and rapid recalibration, with neither reaching maturity until well into adolescence. Interestingly, correlational analyses revealed that audiovisual simultaneity judgments (i.e., the size of the audiovisual temporal window of simultaneity) and rapid recalibration significantly co-varied as a function of age. Together, our results represent the most complete description of age-related changes in audiovisual simultaneity judgments to date, as well as being the first to describe changes in the degree of rapid recalibration as a function of age. We propose that the developmental time-course of rapid recalibration scaffolds the maturation of more durable audiovisual temporal representations.
Affiliation(s)
- Jean-Paul Noel: Neuroscience Graduate Program, Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN, 37235, United States of America; Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN, 37235, United States of America
- Matthew De Niear: Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN, 37235, United States of America; Medical Scientist Training Program, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN, 37235, United States of America
- Erik Van der Burg: Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands; School of Psychology, University of Sydney, Sydney, Australia
- Mark T. Wallace: Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN, 37235, United States of America; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, 37235, United States of America; Department of Psychology, Vanderbilt University, Nashville, TN, 37235, United States of America