1. Snir A, Cieśla K, Vekslar R, Amedi A. Highly compromised auditory spatial perception in aided congenitally hearing-impaired and rapid improvement with tactile technology. iScience 2024;27:110808. PMID: 39290844; PMCID: PMC11407022; DOI: 10.1016/j.isci.2024.110808.
Abstract
Spatial understanding is a multisensory construct, yet hearing is the only natural sense that enables simultaneous perception of the entire 3D space. To test whether such spatial understanding depends on auditory experience, we study congenitally hearing-impaired users of assistive devices. We apply an in-house technology which, inspired by the auditory system, performs intensity-weighting to represent external spatial positions and motion on the fingertips. We find highly impaired auditory spatial capabilities for tracking moving sources, which, in line with the "critical periods" theory, emphasizes the role of nature in sensory development. Meanwhile, for tactile and audio-tactile spatial motion perception, the hearing-impaired show performance similar to typically hearing individuals. The immediate availability of a 360° representation of external space through touch, despite the lack of such experience during the lifetime, points to the significant role of nurture in spatial perception development, and to its amodal character. The findings show promise toward advancing multisensory solutions for rehabilitation.
Affiliation(s)
- Adi Snir
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- Katarzyna Cieśla
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- World Hearing Centre, Institute of Physiology and Pathology of Hearing, Mokra 17, 05-830 Kajetany, Nadarzyn, Poland
- Rotem Vekslar
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- Amir Amedi
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
2. Zhu H, Beierholm U, Shams L. BCI Toolbox: an open-source Python package for the Bayesian causal inference model. PLoS Comput Biol 2024;20:e1011791. PMID: 38976678; PMCID: PMC11257388; DOI: 10.1371/journal.pcbi.1011791.
Abstract
Psychological and neuroscientific research over the past two decades has shown that Bayesian causal inference (BCI) is a potential unifying theory that can account for a wide range of perceptual and sensorimotor processes in humans. We therefore introduce the BCI Toolbox, a statistical and analytical tool in Python that enables researchers to conveniently perform quantitative modeling and analysis of behavioral data. Additionally, we describe the algorithm of the BCI model and test its stability and reliability via parameter recovery. The BCI Toolbox offers a robust platform for BCI model implementation as well as a hands-on tool for learning and understanding the model, facilitating its widespread use and enabling researchers to delve into the data to uncover underlying cognitive mechanisms.
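For orientation, the computation such a toolbox implements can be sketched in a few lines. The Python below follows the standard BCI formulation of Körding et al. (2007) for a single audiovisual spatial trial; it is a minimal illustrative sketch with invented parameter names, not the BCI Toolbox's actual API.

```python
import numpy as np
from scipy.stats import norm

def bci_estimate(x_a, x_v, sigma_a, sigma_v, sigma_p=20.0, mu_p=0.0, p_common=0.5):
    """Model-averaged auditory location estimate for one audiovisual trial.

    Standard Bayesian causal inference (Kording et al., 2007): x_a, x_v are
    the noisy auditory/visual measurements; sigma_a, sigma_v their noise SDs;
    (mu_p, sigma_p) the spatial prior; p_common the prior probability that
    both signals share a single cause.
    """
    va, vv, vp = sigma_a**2, sigma_v**2, sigma_p**2
    # Likelihood of both measurements under a common cause (C = 1).
    denom_c1 = va * vv + va * vp + vv * vp
    like_c1 = (np.exp(-0.5 * ((x_a - x_v)**2 * vp
                              + (x_a - mu_p)**2 * vv
                              + (x_v - mu_p)**2 * va) / denom_c1)
               / (2 * np.pi * np.sqrt(denom_c1)))
    # Likelihood under two independent causes (C = 2).
    like_c2 = (norm.pdf(x_a, mu_p, np.sqrt(va + vp))
               * norm.pdf(x_v, mu_p, np.sqrt(vv + vp)))
    # Posterior probability of a common cause.
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))
    # Reliability-weighted estimates under each causal structure.
    s_c1 = (x_a / va + x_v / vv + mu_p / vp) / (1 / va + 1 / vv + 1 / vp)
    s_c2 = (x_a / va + mu_p / vp) / (1 / va + 1 / vp)
    # Model averaging: weight the estimates by the causal posterior.
    return post_c1 * s_c1 + (1 - post_c1) * s_c2
```

For instance, bci_estimate(x_a=10.0, x_v=-5.0, sigma_a=4.0, sigma_v=1.0) yields an auditory estimate shifted toward the more reliable visual measurement in proportion to the inferred probability of a common cause.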
Affiliation(s)
- Haocheng Zhu
- Department of Psychology, University of California, Los Angeles, California, United States of America
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
- Ulrik Beierholm
- Department of Psychology, University of Durham, Durham, United Kingdom
- Ladan Shams
- Department of Psychology, University of California, Los Angeles, California, United States of America
- Department of Bioengineering, and Neuroscience Interdepartmental Program, University of California, Los Angeles, California, United States of America
3. Bernal-Berdun E, Vallejo M, Sun Q, Serrano A, Gutierrez D. Modeling the impact of head-body rotations on audio-visual spatial perception for virtual reality applications. IEEE Trans Vis Comput Graph 2024;30:2624-2632. PMID: 38446650; DOI: 10.1109/tvcg.2024.3372112.
Abstract
Humans perceive the world by integrating multimodal sensory feedback, including visual and auditory stimuli, which holds true in virtual reality (VR) environments. Proper synchronization of these stimuli is crucial for perceiving a coherent and immersive VR experience. In this work, we focus on the interplay between audio and vision during localization tasks involving natural head-body rotations. We explore the impact of audio-visual offsets and rotation velocities on users' directional localization acuity for various viewing modes. Using psychometric functions, we model perceptual disparities between visual and auditory cues and determine offset detection thresholds. Our findings reveal that target localization accuracy is affected by perceptual audio-visual disparities during head-body rotations, but remains consistent in the absence of stimuli-head relative motion. We then showcase the effectiveness of our approach in predicting and enhancing users' localization accuracy within realistic VR gaming applications. To provide additional support for our findings, we implement a natural VR game wherein we apply a compensatory audio-visual offset derived from our measured psychometric functions. As a result, we demonstrate a substantial improvement of up to 40% in participants' target localization accuracy. We additionally provide guidelines for content creation to ensure coherent and seamless VR experiences.
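To make the psychometric-modeling step concrete, the sketch below fits a cumulative-Gaussian psychometric function to hypothetical offset-detection data and reads off a detection threshold. The data values, variable names, and the 50%-detection threshold convention are illustrative assumptions, not values or procedures taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical data: audio-visual offsets (ms) and the proportion of trials
# on which an offset was detected at each level (values invented).
offsets = np.array([-200.0, -100.0, -50.0, 0.0, 50.0, 100.0, 200.0])
p_detect = np.array([0.95, 0.70, 0.40, 0.10, 0.35, 0.65, 0.90])

def psychometric(x, mu, sigma):
    # Cumulative Gaussian over the offset magnitude: detection becomes
    # more likely as the audio-visual asynchrony grows.
    return norm.cdf(np.abs(x), loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, offsets, p_detect, p0=[50.0, 50.0])

# Conventionally, the threshold is the offset detected on 50% of trials,
# which for this parameterization is simply mu.
print(f"detection threshold ~ {mu:.0f} ms (slope sigma ~ {sigma:.0f} ms)")
```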
4. Kayser C, Debats N, Heuer H. Both stimulus-specific and configurational features of multiple visual stimuli shape the spatial ventriloquism effect. Eur J Neurosci 2024;59:1770-1788. PMID: 38230578; DOI: 10.1111/ejn.16251.
Abstract
Studies of multisensory perception often focus on simplistic conditions in which a single stimulus is presented per modality. Yet in everyday life we usually encounter multiple signals per modality. To understand how multiple signals within and across the senses are combined, we extended the classical audio-visual spatial ventriloquism paradigm to combine two visual stimuli with one sound. The individual visual stimuli presented in the same trial differed in their relative timing and spatial offsets to the sound, allowing us to contrast their individual and combined influence on sound localization judgements. We find that the ventriloquism bias is not dominated by a single visual stimulus but rather is shaped by the collective multisensory evidence. In particular, the contribution of an individual visual stimulus to the ventriloquism bias depends not only on its own relative spatio-temporal alignment to the sound but also on that of the other visual stimulus. We propose that this pattern of multi-stimulus multisensory integration reflects the evolution of evidence for sensory causal relations during individual trials, highlighting the need to extend established models of multisensory causal inference to more naturalistic conditions. Our data also suggest that this pattern of multisensory interactions extends to the ventriloquism aftereffect, a bias in sound localization observed in unisensory judgements following a multisensory stimulus.
Affiliation(s)
- Christoph Kayser
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Nienke Debats
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Herbert Heuer
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
5. Kayser C, Heuer H. Multisensory perception depends on the reliability of the type of judgment. J Neurophysiol 2024;131:723-737. PMID: 38416720; DOI: 10.1152/jn.00451.2023.
Abstract
The brain engages the processes of multisensory integration and recalibration to deal with discrepant multisensory signals. These processes consider the reliability of each sensory input, with the more reliable modality receiving the stronger weight. Sensory reliability is typically assessed via the variability of participants' judgments, yet these can be shaped by factors both external and internal to the nervous system. For example, motor noise and participants' dexterity with the specific response method contribute to judgment variability, and different response methods applied to the same stimuli can result in different estimates of sensory reliabilities. Here we ask how such variations in reliability induced by variations in the response method affect multisensory integration and sensory recalibration, as well as motor adaptation, in a visuomotor paradigm. Participants performed center-out hand movements and were asked to judge the position of the hand or rotated visual feedback at the movement end points. We manipulated the variability, and thus the reliability, of repeated judgments by asking participants to respond using either a visual or a proprioceptive matching procedure. We find that the relative weights of visual and proprioceptive signals, and thus the asymmetry of multisensory integration and recalibration, depend on the reliability modulated by the judgment method. Motor adaptation, in contrast, was insensitive to this manipulation. Hence, the outcome of multisensory binding is shaped by the noise introduced by sensorimotor processing, in line with perception and action being intertwined.

NEW & NOTEWORTHY Our brain tends to combine multisensory signals based on their respective reliability. This reliability depends on sensory noise in the environment, noise in the nervous system, and, as we show here, variability induced by the specific judgment procedure.
Affiliation(s)
- Christoph Kayser
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Herbert Heuer
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
6. Debats NB, Heuer H, Kayser C. Different time scales of common-cause evidence shape multisensory integration, recalibration and motor adaptation. Eur J Neurosci 2023;58:3253-3269. PMID: 37461244; DOI: 10.1111/ejn.16095.
Abstract
Perceptual coherence in the face of discrepant multisensory signals is achieved via the processes of multisensory integration, recalibration and sometimes motor adaptation. These supposedly operate on different time scales, with integration reducing immediate sensory discrepancies and recalibration and motor adaptation reflecting the cumulative influence of their recent history. Importantly, whether discrepant signals are bound during perception is guided by the brain's inference of whether they originate from a common cause. When combined, these two notions lead to the hypothesis that the time scales on which integration and recalibration (or motor adaptation) operate are associated with different time scales of evidence about a common cause underlying two signals. We tested this prediction in a well-established visuo-motor paradigm, in which human participants performed visually guided hand movements. The kinematic correlation between hand and cursor movements indicates their common origin, which allowed us to manipulate the common-cause evidence by titrating this correlation. Specifically, we dissociated hand and cursor signals during individual movements while preserving their correlation across the series of movement endpoints. Following our hypothesis, this manipulation reduced integration compared with a condition in which visual and proprioceptive signals were perfectly correlated. In contrast, recalibration and motor adaptation were not affected by this manipulation. This supports the notion that multisensory integration and recalibration deal with sensory discrepancies on different time scales guided by common-cause evidence: Integration is prompted by local common-cause evidence and reduces immediate discrepancies, whereas recalibration and motor adaptation are prompted by global common-cause evidence and reduce persistent discrepancies.
Affiliation(s)
- Nienke B Debats
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Herbert Heuer
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Christoph Kayser
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
7. Kayser C, Park H, Heuer H. Cumulative multisensory discrepancies shape the ventriloquism aftereffect but not the ventriloquism bias. PLoS One 2023;18:e0290461. PMID: 37607201; PMCID: PMC10443876; DOI: 10.1371/journal.pone.0290461.
Abstract
Multisensory integration and recalibration are two processes by which perception deals with discrepant signals. Both are often studied in the spatial ventriloquism paradigm. There, integration is probed by the presentation of discrepant audio-visual stimuli, while recalibration manifests as an aftereffect in subsequent judgements of unisensory sounds. Both biases are typically quantified against the degree of audio-visual discrepancy, reflecting the possibility that both may arise from common underlying multisensory principles. We tested a specific prediction of this: that both processes should also scale similarly with the history of multisensory discrepancies, i.e., the sequence of discrepancies in several preceding audio-visual trials. Analyzing data from ten experiments with randomly varying spatial discrepancies, we confirmed the expected dependency of each bias on the immediately presented discrepancy. In line with the aftereffect being a cumulative process, it also scaled with the discrepancies presented in at least three preceding audio-visual trials. However, the ventriloquism bias did not depend on this three-trial history of multisensory discrepancies and also did not depend on the aftereffect biases in previous trials, making these two multisensory processes experimentally dissociable. These findings support the notion that the ventriloquism bias and the aftereffect reflect distinct functions, with integration maintaining a stable percept by reducing immediate sensory discrepancies and recalibration maintaining an accurate percept by accounting for consistent discrepancies.
Affiliation(s)
- Christoph Kayser
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Hame Park
- Department of Neurophysiology & Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Herbert Heuer
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
8. Otsuka T, Yotsumoto Y. Near-optimal integration of the magnitude information of time and numerosity. R Soc Open Sci 2023;10:230153. PMID: 37564065; PMCID: PMC10410204; DOI: 10.1098/rsos.230153.
Abstract
Magnitude information is often correlated in the external world, providing complementary information about the environment. As if to reflect this relationship, the perceptions of different magnitudes (e.g. time and numerosity) are known to influence one another. Recent studies suggest that such magnitude interaction is similar to cue integration, such as multisensory integration. Here, we tested whether human observers could integrate the magnitudes of two quantities with distinct physical units (i.e. time and numerosity) as abstract magnitude information. The participants compared the magnitudes of two visual stimuli based on time, numerosity, or both. Consistent with the predictions of the maximum-likelihood estimation model, the participants integrated time and numerosity in a near-optimal manner; the weight of each dimension was proportional to their relative reliability, and the integrated estimate was more reliable than either the time or numerosity estimate. Furthermore, the integration approached a statistical optimum as the temporal discrepancy of the acquisition of each piece of information became smaller. These results suggest that magnitude interaction arises through a similar computational mechanism to cue integration. They are also consistent with the idea that different magnitudes are processed by a generalized magnitude system.
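The maximum-likelihood prediction tested here is the standard reliability-weighted combination rule; a sketch in the usual notation, with $\hat{s}_T$ and $\hat{s}_N$ the unisensory time and numerosity estimates:

```latex
\hat{s} = w_T\,\hat{s}_T + w_N\,\hat{s}_N,
\qquad
w_T = \frac{1/\sigma_T^{2}}{1/\sigma_T^{2} + 1/\sigma_N^{2}},
\qquad
w_N = 1 - w_T,
\qquad
\sigma_{TN}^{2} = \frac{\sigma_T^{2}\,\sigma_N^{2}}{\sigma_T^{2} + \sigma_N^{2}} \leq \min\!\left(\sigma_T^{2}, \sigma_N^{2}\right)
```

Each weight is proportional to the relative reliability of its cue, and the integrated variance is lower than either unisensory variance, which is the pair of predictions the abstract reports as confirmed.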
Affiliation(s)
- Taku Otsuka
- Department of Life Sciences, University of Tokyo, Tokyo, Japan
- Yuko Yotsumoto
- Department of Life Sciences, University of Tokyo, Tokyo, Japan
9. Macklin AS, Yau JM, Fischer-Baum S, O'Malley MK. Representational similarity analysis for tracking neural correlates of haptic learning on a multimodal device. IEEE Trans Haptics 2023;16:424-435. PMID: 37556331; PMCID: PMC10605963; DOI: 10.1109/toh.2023.3303838.
Abstract
A goal of wearable haptic devices has been to enable haptic communication, where individuals learn to map information typically processed visually or aurally to haptic cues via a process of cross-modal associative learning. Neural correlates have been used to evaluate haptic perception and may provide a more objective approach to assess association performance than more commonly used behavioral measures of performance. In this article, we examine Representational Similarity Analysis (RSA) of electroencephalography (EEG) as a framework to evaluate how the neural representation of multifeatured haptic cues changes with association training. We focus on the first phase of cross-modal associative learning, perception of multimodal cues. A participant learned to map phonemes to multimodal haptic cues, and EEG data were acquired before and after training to create neural representational spaces that were compared to theoretical models. Our perceptual model showed better correlations to the neural representational space before training, while the feature-based model showed better correlations with the post-training data. These results suggest that training may lead to a sharpening of the sensory response to haptic cues. Our results show promise that an EEG-RSA approach can capture a shift in the representational space of cues, as a means to track haptic learning.
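The RSA comparison logic described here can be sketched compactly: build a neural representational dissimilarity matrix (RDM) from per-cue EEG patterns and rank-correlate it with a model RDM. In the sketch below, all data and dimensions are invented placeholders, not the study's recordings or theoretical models.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Hypothetical data: one EEG pattern (channels x time, flattened) per haptic cue.
rng = np.random.default_rng(0)
n_cues, n_features = 8, 640
eeg_patterns = rng.standard_normal((n_cues, n_features))

# Neural RDM: pairwise correlation distance between cue-evoked patterns
# (pdist returns the condensed upper triangle of the distance matrix).
neural_rdm = pdist(eeg_patterns, metric="correlation")

# Model RDM, e.g. from a feature-based description of the cues
# (random here, standing in for the paper's theoretical models).
model_features = rng.standard_normal((n_cues, 5))
model_rdm = pdist(model_features, metric="euclidean")

# RSA: rank-correlate the two condensed RDMs.
rho, p = spearmanr(neural_rdm, model_rdm)
print(f"model-neural similarity: Spearman rho = {rho:.2f} (p = {p:.2f})")
```

Comparing such correlations before and after training is what lets the authors ask whether the neural space shifts from a perceptual toward a feature-based organization.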
10. Debats NB, Heuer H, Kayser C. Short-term effects of visuomotor discrepancies on multisensory integration, proprioceptive recalibration, and motor adaptation. J Neurophysiol 2023;129:465-478. PMID: 36651909; DOI: 10.1152/jn.00478.2022.
Abstract
Information about the position of our hand is provided by multisensory signals that are often not perfectly aligned. Discrepancies between the seen and felt hand position or its movement trajectory engage the processes of 1) multisensory integration, 2) sensory recalibration, and 3) motor adaptation, which adjust perception and behavioral responses to apparently discrepant signals. To foster our understanding of the coemergence of these three processes, we probed their short-term dependence on multisensory discrepancies in a visuomotor task that has previously served as a model for multisensory perception and motor control. We found that the well-established integration of discrepant visual and proprioceptive signals is tied to the immediate discrepancy and independent of the outcome of the integration of discrepant signals in immediately preceding trials. However, the strength of integration was context dependent, being stronger in an experiment featuring stimuli that covered a smaller range of visuomotor discrepancies (±15°) compared with one covering a larger range (±30°). Both sensory recalibration and motor adaptation for nonrepeated movement directions were absent after two bimodal trials with same or opposite visuomotor discrepancies. Hence our results suggest that short-term sensory recalibration and motor adaptation are not an obligatory consequence of the integration of preceding discrepant multisensory signals.

NEW & NOTEWORTHY The functional relation between multisensory integration and recalibration remains debated. We here refute the notion that they coemerge in an obligatory manner and support the hypothesis that they serve distinct goals of perception.
Affiliation(s)
- Nienke B Debats
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Herbert Heuer
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Christoph Kayser
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
11. Peters MA. Towards characterizing the canonical computations generating phenomenal experience. Neurosci Biobehav Rev 2022;142:104903. DOI: 10.1016/j.neubiorev.2022.104903.
12. Quintero SI, Shams L, Kamal K. Changing the tendency to integrate the senses. Brain Sci 2022;12:1384. PMID: 36291318; PMCID: PMC9599885; DOI: 10.3390/brainsci12101384.
Abstract
Integration of sensory signals that emanate from the same source, such as the sight of a speaker's lip articulations and the sound of their voice, can improve perception of the source signal (e.g., speech). Because momentary sensory inputs are typically corrupted with internal and external noise, there is almost always a discrepancy between the inputs, facing the perceptual system with the problem of determining whether the two signals were caused by the same source or different sources. Thus, whether or not multisensory stimuli are integrated, and the degree to which they are bound, is influenced by factors such as the prior expectation of a common source. We refer to this factor as the tendency to bind stimuli, or for short, binding tendency. In theory, the tendency to bind sensory stimuli can be learned by experience through the acquisition of the probabilities of the co-occurrence of the stimuli. It can also be influenced by cognitive knowledge of the environment. The binding tendency varies across individuals and can also vary within an individual over time. Here, we review the studies that have investigated the plasticity of binding tendency. We discuss the protocols that have been reported to produce changes in binding tendency, the candidate learning mechanisms involved in this process, the possible neural correlates of binding tendency, and outstanding questions pertaining to binding tendency and its plasticity. We conclude by proposing directions for future research and argue that understanding mechanisms and recipes for increasing binding tendency can have important clinical and translational applications for populations or individuals with a deficiency in multisensory integration.
Affiliation(s)
- Saul I. Quintero
- Department of Psychology, University of California, Los Angeles, CA 90095, USA
- Ladan Shams
- Department of Psychology, University of California, Los Angeles, CA 90095, USA
- Department of Bioengineering, University of California, Los Angeles, CA 90089, USA
- Neuroscience Interdepartmental Program, University of California, Los Angeles, CA 90089, USA
- Kimia Kamal
- Department of Psychology, University of California, Los Angeles, CA 90095, USA
13. Hong F, Badde S, Landy MS. Repeated exposure to either consistently spatiotemporally congruent or consistently incongruent audiovisual stimuli modulates the audiovisual common-cause prior. Sci Rep 2022;12:15532. PMID: 36109544; PMCID: PMC9478143; DOI: 10.1038/s41598-022-19041-7.
Abstract
To estimate an environmental property such as object location from multiple sensory signals, the brain must infer their causal relationship. Only information originating from the same source should be integrated. This inference relies on the characteristics of the measurements, the information the sensory modalities provide on a given trial, as well as on a cross-modal common-cause prior: accumulated knowledge about the probability that cross-modal measurements originate from the same source. We examined the plasticity of this cross-modal common-cause prior. In a learning phase, participants were exposed to a series of audiovisual stimuli that were either consistently spatiotemporally congruent or consistently incongruent; participants' audiovisual spatial integration was measured before and after this exposure. We fitted several Bayesian causal-inference models to the data; the models differed in the plasticity of the common-source prior. Model comparison revealed that, for the majority of the participants, the common-cause prior changed during the learning phase. Our findings reveal that short periods of exposure to audiovisual stimuli with a consistent causal relationship can modify the common-cause prior. In accordance with previous studies, both exposure conditions could either strengthen or weaken the common-cause prior at the participant level. Simulations imply that the direction of the prior update might be mediated by the degree of sensory noise, that is, the variability of the measurements of the same signal across trials during the learning phase.
14. Park H, Kayser C. The context of experienced sensory discrepancies shapes multisensory integration and recalibration differently. Cognition 2022;225:105092. DOI: 10.1016/j.cognition.2022.105092.
15. Tsutsuse KS, Vibell J, Sinnett S. Multisensory perception of natural versus unnatural motion. Q J Exp Psychol (Hove) 2022;76:1233-1244. PMID: 35658653; DOI: 10.1177/17470218221108251.
Abstract
Previous research has shown that visual perception is influenced by Newtonian constraints. Kominsky et al. (2017) showed that humans detect unnatural motion, in which objects break Newtonian constraints by moving at a faster speed after colliding with another object, more quickly than they detect collisions that do not violate Newtonian constraints. These findings show that the perceptual system distinguishes between realistic and unrealistic causal events. However, real-world collisions are rarely silent. The present study extends this research by including sound at the collision point between two objects to evaluate how multisensory integration influences the perception of natural versus unnatural colliding events. Participants viewed an array of three simultaneous videos, each depicting two objects moving in a horizontal back-and-forth motion. Two of the videos showed the objects moving at the same speed, while the third video was an oddball that either moved faster before the collision and slower after (natural target), or slower before the collision and faster after (unnatural target). A brief click was presented at the collision point of one or none of the videos. Participants were asked to indicate the oddball video via keypress. Replicating Kominsky et al. (2017), participants were faster at identifying unnatural target motion events compared with natural target motion events, both with and without sound. The findings also demonstrated lower accuracy rates for unnatural events compared with natural events, especially when a sound was added. These findings suggest that the addition of a sound could be distracting to participants, possibly owing to limitations in attentional resources.
Affiliation(s)
- Kayla Soma Tsutsuse
- Department of Psychology, University of Hawaii at Manoa, 2530 Dole Street, Sakamaki D412, Honolulu, HI 96822-3949
- Jonas Vibell
- Department of Psychology, University of Hawaii at Manoa, 2530 Dole Street, Sakamaki D412, Honolulu, HI 96822-3949
- Scott Sinnett
- Department of Psychology, University of Hawaii at Manoa, 2530 Dole Street, Sakamaki D412, Honolulu, HI 96822-3949
16. Bernstein LE, Jordan N, Auer ET, Eberhardt SP. Lipreading: a review of its continuing importance for speech recognition with an acquired hearing loss and possibilities for effective training. Am J Audiol 2022;31:453-469. PMID: 35316072; PMCID: PMC9524756; DOI: 10.1044/2021_aja-21-00112.
Abstract
PURPOSE: The goal of this review article is to reinvigorate interest in lipreading and lipreading training for adults with acquired hearing loss. Most adults benefit from being able to see the talker when speech is degraded; however, the effect size is related to their lipreading ability, which is typically poor in adults who have experienced normal hearing through most of their lives. Lipreading training has been viewed as a possible avenue for rehabilitation of adults with an acquired hearing loss, but most training approaches have not been particularly successful. Here, we describe lipreading and theoretically motivated approaches to its training, as well as examples of successful training paradigms. We discuss some extensions to auditory-only (AO) and audiovisual (AV) speech recognition.
METHOD: Visual speech perception and word recognition are described. Traditional and contemporary views of training and perceptual learning are outlined. We focus on the roles of external and internal feedback and the training task in perceptual learning, and we describe results of lipreading training experiments.
RESULTS: Lipreading is commonly characterized as limited to viseme perception. However, evidence demonstrates subvisemic perception of visual phonetic information. Lipreading words also relies on lexical constraints, not unlike auditory spoken word recognition. Lipreading has been shown to be difficult to improve through training, but under specific feedback and task conditions, training can be successful, and learning can generalize to untrained materials, including AV sentence stimuli in noise. The results on lipreading have implications for AO and AV training and for use of acoustically processed speech in face-to-face communication.
CONCLUSION: Given its importance for speech recognition with a hearing loss, we suggest that the research and clinical communities integrate lipreading in their efforts to improve speech recognition in adults with acquired hearing loss.
Affiliation(s)
- Lynne E. Bernstein
- Department of Speech, Language & Hearing Sciences, George Washington University, Washington, DC
- Nicole Jordan
- Department of Speech, Language & Hearing Sciences, George Washington University, Washington, DC
- Edward T. Auer
- Department of Speech, Language & Hearing Sciences, George Washington University, Washington, DC
- Silvio P. Eberhardt
- Department of Speech, Language & Hearing Sciences, George Washington University, Washington, DC
17. Shams L, Beierholm U. Bayesian causal inference: a unifying neuroscience theory. Neurosci Biobehav Rev 2022;137:104619. PMID: 35331819; DOI: 10.1016/j.neubiorev.2022.104619.
Abstract
Understanding the brain and the principles governing neural processing requires theories that are parsimonious, can account for a diverse set of phenomena, and can make testable predictions. Here, we review the theory of Bayesian causal inference, which has been tested, refined, and extended in a variety of tasks in humans and other primates by several research groups. Bayesian causal inference is normative and has explained human behavior in a vast number of tasks, including unisensory and multisensory perceptual tasks as well as sensorimotor and motor tasks, and has accounted for counter-intuitive findings. The theory has made novel predictions that have been tested and confirmed empirically, and recent studies have started to map its algorithms and neural implementation in the human brain. Its parsimony, the diversity of the phenomena it has explained, and its illumination of brain function at all three of Marr's levels of analysis make Bayesian causal inference a strong neuroscience theory. This also highlights the importance of collaborative and multi-disciplinary research for the development of new theories in neuroscience.
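The core inference step of the theory, written for two sensory measurements $x_1$ and $x_2$ (a sketch of the standard formulation; the notation is ours):

```latex
P(C{=}1 \mid x_1, x_2)
= \frac{P(x_1, x_2 \mid C{=}1)\,P(C{=}1)}
       {P(x_1, x_2 \mid C{=}1)\,P(C{=}1) + P(x_1, x_2 \mid C{=}2)\,\bigl(1 - P(C{=}1)\bigr)}
```

The final percept then combines the fused (common-cause) and segregated (independent-cause) estimates according to this posterior, for instance by model averaging.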
Affiliation(s)
- Ladan Shams
- Departments of Psychology, BioEngineering, and Neuroscience Interdepartmental Program, University of California, Los Angeles, USA
18. Kvamme TL, Sarmanlu M, Bailey C, Overgaard M. Neurofeedback modulation of the sound-induced flash illusion using parietal cortex alpha oscillations reveals dependency on prior multisensory congruency. Neuroscience 2021;482:1-17. PMID: 34838934; DOI: 10.1016/j.neuroscience.2021.11.028.
Abstract
Spontaneous neural oscillations are key predictors of perceptual decisions to bind multisensory signals into a unified percept. Research links decreased alpha power in the posterior cortices to attention and audiovisual binding in the sound-induced flash illusion (SIFI) paradigm. This suggests that controlling alpha oscillations would be a way of controlling audiovisual binding. In the present feasibility study we used MEG neurofeedback to train one group of subjects to increase the left/right alpha power ratio in the parietal cortex and another group to increase the right/left ratio. We tested for changes in audiovisual binding in a SIFI paradigm in which flashes appeared in both hemifields. Results showed that the neurofeedback induced a significant asymmetry in alpha power for the left/right group that was not seen for the right/left group. Corresponding asymmetry changes in audiovisual binding in illusion trials (with 2, 3, and 4 beeps paired with 1 flash) were not apparent. Exploratory analyses showed that neurofeedback training effects were present for illusion trials with the lowest numeric disparity (i.e., 2 beeps and 1 flash) only if the previous trial had high congruency (2 beeps and 2 flashes). Our data suggest that the relation between parietal alpha power (an index of attention) and audiovisual binding depends on the causal structure learned from the previous stimulus. The present results suggest that low alpha power biases observers towards audiovisual binding when they have learned that audiovisual signals originate from a common origin, consistent with a Bayesian causal inference account of multisensory perception.
Affiliation(s)
- Timo L Kvamme
- Cognitive Neuroscience Research Unit, CFIN/MINDLab, Aarhus University, Aarhus, Denmark
- Mesud Sarmanlu
- Cognitive Neuroscience Research Unit, CFIN/MINDLab, Aarhus University, Aarhus, Denmark
- Christopher Bailey
- Cognitive Neuroscience Research Unit, CFIN/MINDLab, Aarhus University, Aarhus, Denmark
- Morten Overgaard
- Cognitive Neuroscience Research Unit, CFIN/MINDLab, Aarhus University, Aarhus, Denmark
19. Debats NB, Heuer H, Kayser C. Visuo-proprioceptive integration and recalibration with multiple visual stimuli. Sci Rep 2021;11:21640. PMID: 34737371; PMCID: PMC8569193; DOI: 10.1038/s41598-021-00992-2.
Abstract
To organize the plethora of sensory signals from our environment into a coherent percept, our brain relies on the processes of multisensory integration and sensory recalibration. We here asked how visuo-proprioceptive integration and recalibration are shaped by the presence of more than one visual stimulus, hence paving the way to study multisensory perception under more naturalistic settings with multiple signals per sensory modality. We used a cursor-control task in which proprioceptive information on the endpoint of a reaching movement was complemented by two visual stimuli providing additional information on the movement endpoint. The visual stimuli were briefly shown, one synchronously with the hand reaching the movement endpoint, the other delayed. In Experiment 1, the judgments of hand movement endpoint revealed integration and recalibration biases oriented towards the position of the synchronous stimulus and away from the delayed one. In Experiment 2 we contrasted two alternative accounts: that only the temporally more proximal visual stimulus enters integration similar to a winner-takes-all process, or that the influences of both stimuli superpose. The proprioceptive biases revealed that integration—and likely also recalibration—are shaped by the superposed contributions of multiple stimuli rather than by only the most powerful individual one.
Affiliation(s)
- Nienke B Debats
- Department of Cognitive Neuroscience, Universität Bielefeld, Universitätsstrasse 25, 33615 Bielefeld, Germany
- Center for Cognitive Interaction Technology (CITEC), Universität Bielefeld, Bielefeld, Germany
- Herbert Heuer
- Department of Cognitive Neuroscience, Universität Bielefeld, Universitätsstrasse 25, 33615 Bielefeld, Germany
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Christoph Kayser
- Department of Cognitive Neuroscience, Universität Bielefeld, Universitätsstrasse 25, 33615 Bielefeld, Germany
- Center for Cognitive Interaction Technology (CITEC), Universität Bielefeld, Bielefeld, Germany
20. Variance misperception under skewed empirical noise statistics explains overconfidence in the visual periphery. Atten Percept Psychophys 2021;84:161-178. PMID: 34426932; DOI: 10.3758/s13414-021-02358-2.
Abstract
Perceptual confidence typically corresponds to accuracy. However, observers can be overconfident relative to accuracy, termed "subjective inflation." Inflation is stronger in the visual periphery relative to central vision, especially under conditions of peripheral inattention. Previous literature suggests inflation stems from errors in estimating noise (i.e., "variance misperception"). However, despite previous Bayesian hypotheses about metacognitive noise estimation, no work has systematically explored how noise estimation may critically depend on empirical noise statistics, which may differ across the visual field, with central noise distributed symmetrically but peripheral noise positively skewed. Here, we examined central and peripheral vision predictions from five Bayesian-inspired noise-estimation algorithms under varying usage of noise priors, including effects of attention. Models that failed to optimally estimate noise exhibited peripheral inflation, but only models that explicitly used peripheral noise priors-but used them incorrectly-showed increasing peripheral inflation under increasing peripheral inattention. Further, only one model successfully captured previous empirical results, which showed a selective increase in confidence in incorrect responses under performance reductions due to inattention accompanied by no change in confidence in correct responses; this was the model that implemented Bayesian estimation of peripheral noise, but using an (incorrect) symmetric rather than the correct positively skewed peripheral noise prior. Our findings explain peripheral inflation, especially under inattention, and suggest future experiments that might reveal the noise expectations used by the visual metacognitive system.
21. Opoku-Baah C, Schoenhaut AM, Vassall SG, Tovar DA, Ramachandran R, Wallace MT. Visual influences on auditory behavioral, neural, and perceptual processes: a review. J Assoc Res Otolaryngol 2021;22:365-386. PMID: 34014416; PMCID: PMC8329114; DOI: 10.1007/s10162-021-00789-0.
Abstract
In a naturalistic environment, auditory cues are often accompanied by information from other senses, which can be redundant with or complementary to the auditory information. Although the multisensory interactions that derive from this combination of information and shape auditory function are seen across all sensory modalities, our greatest body of knowledge to date centers on how vision influences audition. In this review, we attempt to capture the state of our understanding of this topic at this point in time. Following a general introduction, the review is divided into five sections. In the first section, we review the psychophysical evidence in humans regarding vision's influence on audition, making the distinction between vision's ability to enhance versus alter auditory performance and perception. Three examples are then described that serve to highlight vision's ability to modulate auditory processes: spatial ventriloquism, cross-modal dynamic capture, and the McGurk effect. The final part of this section discusses models that have been built based on available psychophysical data and that seek to provide greater mechanistic insights into how vision can impact audition. The second section reviews the extant neuroimaging and far-field imaging work on this topic, with a strong emphasis on the roles of feedforward and feedback processes, on imaging insights into the causal nature of audiovisual interactions, and on the limitations of current imaging-based approaches. These limitations point to a greater need for machine-learning-based decoding approaches toward understanding how auditory representations are shaped by vision. The third section reviews the wealth of neuroanatomical and neurophysiological data from animal models that highlights audiovisual interactions at the neuronal and circuit level in both subcortical and cortical structures. It also speaks to the functional significance of audiovisual interactions for two critically important facets of auditory perception: scene analysis and communication. The fourth section presents current evidence for alterations in audiovisual processes in three clinical conditions: autism, schizophrenia, and sensorineural hearing loss. These changes in audiovisual interactions are postulated to have cascading effects on higher-order domains of dysfunction in these conditions. The final section highlights ongoing work seeking to leverage our knowledge of audiovisual interactions to develop better remediation approaches to these sensory-based disorders, founded in concepts of perceptual plasticity in which vision has been shown to have the capacity to facilitate auditory learning.
Affiliation(s)
- Collins Opoku-Baah
- Neuroscience Graduate Program, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Adriana M Schoenhaut
- Neuroscience Graduate Program, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Sarah G Vassall
- Neuroscience Graduate Program, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- David A Tovar
- Neuroscience Graduate Program, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Ramnarayan Ramachandran
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Department of Psychology, Vanderbilt University, Nashville, TN, USA
- Department of Hearing and Speech, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt Vision Research Center, Nashville, TN, USA
- Mark T Wallace
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Department of Psychology, Vanderbilt University, Nashville, TN, USA
- Department of Hearing and Speech, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt Vision Research Center, Nashville, TN, USA
- Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Pharmacology, Vanderbilt University, Nashville, TN, USA
22. Opoku-Baah C, Wallace MT. Brief period of monocular deprivation drives changes in audiovisual temporal perception. J Vis 2020;20:8. PMID: 32761108; PMCID: PMC7438662; DOI: 10.1167/jov.20.8.8.
Abstract
The human brain retains a striking degree of plasticity into adulthood. Recent studies have demonstrated that a short period of altered visual experience (via monocular deprivation) can change the dynamics of binocular rivalry in favor of the deprived eye, a compensatory action thought to be mediated by an upregulation of cortical gain control mechanisms. Here, we sought to better understand the impact of monocular deprivation on multisensory abilities, specifically examining audiovisual temporal perception. Using an audiovisual simultaneity judgment task, we discovered that 90 minutes of monocular deprivation produced opposing effects on the temporal binding window depending on the eye used in the task. Thus, in those who performed the task with their deprived eye there was a narrowing of the temporal binding window, whereas in those performing the task with their nondeprived eye there was a widening of the temporal binding window. The effect was short lived, being observed only in the first 10 minutes of postdeprivation testing. These findings indicate that changes in visual experience in the adult can rapidly impact multisensory perceptual processes, a finding that has important clinical implications for those patients with adult-onset visual deprivation and for therapies founded on monocular deprivation.
23. Tong J, Li L, Bruns P, Röder B.
Abstract
According to the Bayesian framework of multisensory integration, audiovisual stimuli associated with a stronger prior belief that they share a common cause (i.e., causal prior) are predicted to result in a greater degree of perceptual binding and therefore greater audiovisual integration. In the present psychophysical study, we systematically manipulated the causal prior while keeping sensory evidence constant. We paired auditory and visual stimuli during an association phase to be spatiotemporally either congruent or incongruent, with the goal of driving the causal prior in opposite directions for different audiovisual pairs. Following this association phase, every pairwise combination of the auditory and visual stimuli was tested in a typical ventriloquism-effect (VE) paradigm. The size of the VE (i.e., the shift of auditory localization towards the spatially discrepant visual stimulus) indicated the degree of multisensory integration. Results showed that exposure to an audiovisual pairing as spatiotemporally congruent compared to incongruent resulted in a larger subsequent VE (Experiment 1). This effect was further confirmed in a second VE paradigm, where the congruent and the incongruent visual stimuli flanked the auditory stimulus, and a VE in the direction of the congruent visual stimulus was shown (Experiment 2). Since the unisensory reliabilities for the auditory or visual components did not change after the association phase, the observed effects are likely due to changes in multisensory binding by association learning. As suggested by Bayesian theories of multisensory processing, our findings support the existence of crossmodal causal priors that are flexibly shaped by experience in a changing world.
Affiliation(s)
- Jonathan Tong
- Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146 Hamburg, Germany
- Centre for Vision Research, Department of Psychology, York University, Toronto, Ontario, Canada
- Lux Li
- Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146 Hamburg, Germany
- Patrick Bruns
- Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146 Hamburg, Germany
- Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146 Hamburg, Germany
24. Bruns P. The ventriloquist illusion as a tool to study multisensory processing: an update. Front Integr Neurosci 2019;13:51. PMID: 31572136; PMCID: PMC6751356; DOI: 10.3389/fnint.2019.00051.
Abstract
Ventriloquism, the illusion that a voice appears to come from the moving mouth of a puppet rather than from the actual speaker, is one of the classic examples of multisensory processing. In the laboratory, this illusion can be reliably induced by presenting simple meaningless audiovisual stimuli with a spatial discrepancy between the auditory and visual components. Typically, the perceived location of the sound source is biased toward the location of the visual stimulus (the ventriloquism effect). The strength of the visual bias reflects the relative reliability of the visual and auditory inputs as well as prior expectations that the two stimuli originated from the same source. In addition to the ventriloquist illusion, exposure to spatially discrepant audiovisual stimuli results in a subsequent recalibration of unisensory auditory localization (the ventriloquism aftereffect). In the past years, the ventriloquism effect and aftereffect have seen a resurgence as an experimental tool to elucidate basic mechanisms of multisensory integration and learning. For example, recent studies have: (a) revealed top-down influences from the reward and motor systems on cross-modal binding; (b) dissociated recalibration processes operating at different time scales; and (c) identified brain networks involved in the neuronal computations underlying multisensory integration and learning. This mini review article provides a brief overview of established experimental paradigms to measure the ventriloquism effect and aftereffect before summarizing these pathbreaking new advancements. Finally, it is pointed out how the ventriloquism effect and aftereffect could be utilized to address some of the current open questions in the field of multisensory research.
Affiliation(s)
- Patrick Bruns
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
25. Developmental changes in the perception of audiotactile simultaneity. J Exp Child Psychol 2019;183:208-221. DOI: 10.1016/j.jecp.2019.02.006.
26. Rohe T, Ehlis AC, Noppeney U. The neural dynamics of hierarchical Bayesian causal inference in multisensory perception. Nat Commun 2019;10:1907. PMID: 31015423; PMCID: PMC6478901; DOI: 10.1038/s41467-019-09664-2.
Abstract
Transforming the barrage of sensory signals into a coherent multisensory percept relies on solving the binding problem - deciding whether signals come from a common cause and should be integrated or, instead, segregated. Human observers typically arbitrate between integration and segregation consistent with Bayesian Causal Inference, but the neural mechanisms remain poorly understood. Here, we presented people with audiovisual sequences that varied in the number of flashes and beeps, then combined Bayesian modelling and EEG representational similarity analyses. Our data suggest that the brain initially represents the number of flashes and beeps independently. Later, it computes their numbers by averaging the forced-fusion and segregation estimates weighted by the probabilities of common and independent cause models (i.e. model averaging). Crucially, prestimulus oscillatory alpha power and phase correlate with observers' prior beliefs about the world's causal structure that guide their arbitration between sensory integration and segregation.
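The model-averaging read-out described here can be written compactly (our notation): the reported number $\hat{N}$ weights the forced-fusion and segregation estimates by the posterior probabilities of the two causal structures, given the auditory and visual measurements $x_A, x_V$.

```latex
\hat{N} = P(C{=}1 \mid x_A, x_V)\,\hat{N}_{C=1} + P(C{=}2 \mid x_A, x_V)\,\hat{N}_{C=2}
```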
Affiliation(s)
- Tim Rohe
- Department of Psychiatry and Psychotherapy, University of Tuebingen, Calwerstr. 14, 72076 Tuebingen, Germany
- Ann-Christine Ehlis
- Department of Psychiatry and Psychotherapy, University of Tuebingen, Calwerstr. 14, 72076 Tuebingen, Germany
- LEAD Graduate School & Research Network, University of Tuebingen, Walter-Simon-Straße 12, 72074 Tuebingen, Germany
- Uta Noppeney
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK
27. Prior expectation of objects in space is dependent on the direction of gaze. Cognition 2019;182:220-226. DOI: 10.1016/j.cognition.2018.10.011.
28. Acerbi L, Dokka K, Angelaki DE, Ma WJ. Bayesian comparison of explicit and implicit causal inference strategies in multisensory heading perception. PLoS Comput Biol 2018;14:e1006110. PMID: 30052625; PMCID: PMC6063401; DOI: 10.1371/journal.pcbi.1006110.
Abstract
The precision of multisensory perception improves when cues arising from the same cause are integrated, such as visual and vestibular heading cues for an observer moving through a stationary environment. In order to determine how the cues should be processed, the brain must infer the causal relationship underlying the multisensory cues. In heading perception, however, it is unclear whether observers follow the Bayesian strategy, a simpler non-Bayesian heuristic, or even perform causal inference at all. We developed an efficient and robust computational framework to perform Bayesian model comparison of causal inference strategies, which incorporates a number of alternative assumptions about the observers. With this framework, we investigated whether human observers' performance in an explicit cause attribution task and an implicit heading discrimination task can be modeled as a causal inference process. In the explicit causal inference task, all subjects accounted for cue disparity when reporting judgments of common cause, although not necessarily all in a Bayesian fashion. By contrast, but in agreement with previous findings, data from the heading discrimination task alone could not rule out that several of the same observers were adopting a forced-fusion strategy, whereby cues are integrated regardless of disparity. Only when we combined evidence from both tasks were we able to rule out forced fusion in the heading discrimination task. Crucially, findings were robust across a number of variants of models and analyses. Our results demonstrate that our proposed computational framework allows researchers to ask complex questions within a rigorous Bayesian framework that accounts for parameter and model uncertainty.
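The comparison logic of such a framework rests on each model's marginal likelihood, which integrates out parameter uncertainty; a generic sketch in our notation, not the authors' specific estimator:

```latex
p(\mathcal{D} \mid M) = \int p(\mathcal{D} \mid \theta, M)\, p(\theta \mid M)\, d\theta,
\qquad
\mathrm{BF}_{12} = \frac{p(\mathcal{D} \mid M_1)}{p(\mathcal{D} \mid M_2)}
```

Models with higher marginal likelihood are preferred, and, under independence assumptions, evidence from multiple tasks can be combined by multiplying the per-task marginal likelihoods for each observer.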
Affiliation(s)
- Luigi Acerbi
- Center for Neural Science, New York University, New York, NY, United States of America
- Kalpana Dokka
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, United States of America
- Dora E. Angelaki
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, United States of America
- Wei Ji Ma
- Center for Neural Science, New York University, New York, NY, United States of America
- Department of Psychology, New York University, New York, NY, United States of America