1. Scheller M, Nardini M. Correctly establishing evidence for cue combination via gains in sensory precision: Why the choice of comparator matters. Behav Res Methods 2024;56:2842-2858. [PMID: 37730934] [PMCID: PMC11133123] [DOI: 10.3758/s13428-023-02227-w]
Abstract
Studying how sensory signals from different sources (sensory cues) are integrated within or across multiple senses allows us to better understand the perceptual computations that lie at the foundation of adaptive behaviour. As such, determining the presence of precision gains - the classic hallmark of cue combination - is important for characterising perceptual systems, their development and functioning in clinical conditions. However, empirically measuring precision gains to distinguish cue combination from alternative perceptual strategies requires careful methodological considerations. Here, we note that the majority of existing studies that tested for cue combination either omitted this important contrast, or used an analysis approach that, unknowingly, strongly inflated false positives. Using simulations, we demonstrate that this approach enhances the chances of finding significant cue combination effects in up to 100% of cases, even when cues are not combined. We establish how this error arises when the wrong cue comparator is chosen and recommend an alternative analysis that is easy to implement but has only been adopted by relatively few studies. By comparing combined-cue perceptual precision with the best single-cue precision, determined for each observer individually rather than at the group level, researchers can enhance the credibility of their reported effects. We also note that testing for deviations from optimal predictions alone is not sufficient to ascertain whether cues are combined. Taken together, to correctly test for perceptual precision gains, we advocate for a careful comparator selection and task design to ensure that cue combination is tested with maximum power, while reducing the inflation of false positives.
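The comparator pitfall described here is easy to reproduce in simulation. The sketch below (hypothetical noise values, not the authors' code) shows how observers who merely use their individually better cue, without combining, appear to show a "bimodal gain" when tested against the group-level best cue, while the per-observer comparator correctly shows none:

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs = 30

# Hypothetical single-cue noise SDs; which cue is better varies by observer.
sigma_A = rng.uniform(0.5, 2.0, n_obs)
sigma_B = rng.uniform(0.5, 2.0, n_obs)

# Non-combining observers: bimodal precision equals each observer's own
# best single cue (cue switching, no integration, hence no true gain).
sigma_bi = np.minimum(sigma_A, sigma_B)

# Group-level comparator: whichever cue is best on average in the sample.
group_best = sigma_A if sigma_A.mean() < sigma_B.mean() else sigma_B
print("spurious group-level 'gain':", group_best.mean() - sigma_bi.mean())

# Per-observer comparator: each observer's own best cue -- gain is zero.
own_best = np.minimum(sigma_A, sigma_B)
print("per-observer gain:", (own_best - sigma_bi).mean())
```

The spurious gain arises because, for many observers, the group-level best cue is not their own best cue, so even pure cue-switching beats it.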
Affiliation(s)
- Meike Scheller
- Department of Psychology, Durham University, Durham, UK
- Marko Nardini
- Department of Psychology, Durham University, Durham, UK
2. Kirsch W, Kunde W. On the Role of Interoception in Body and Object Perception: A Multisensory-Integration Account. Perspect Psychol Sci 2023;18:321-339. [PMID: 35994810] [PMCID: PMC10018064] [DOI: 10.1177/17456916221096138]
Abstract
Various "embodied perception" phenomena suggest that what people sense of their body shapes what they perceive of the environment, and that what they perceive of the environment shapes what they perceive of their bodies. For example, an observer's own hand can be felt where a fake hand is seen, events produced by one's own body movements seem to occur earlier than they did, and feeling a heavy weight on one's back may prompt hills to look steeper. Here we argue that these and various other phenomena are instances of multisensory integration of interoceptive signals from the body and exteroceptive signals from the environment. This overarching view provides a mechanistic description of what embodiment in perception means and how it works. It suggests new research questions while questioning a special role of the body itself and various phenomenon-specific explanations in terms of ownership, agency, or action-related scaling of visual information.
Affiliation(s)
- Wladimir Kirsch
- Department of Psychology, University of Würzburg
3. Perceptual changes after learning of an arbitrary mapping between vision and hand movements. Sci Rep 2022;12:11427. [PMID: 35794174] [PMCID: PMC9259624] [DOI: 10.1038/s41598-022-15579-8]
Abstract
The present study examined the perceptual consequences of learning arbitrary mappings between visual stimuli and hand movements. Participants moved a small cursor with their unseen hand twice to a large visual target object and then judged either the relative distance of the hand movements (Exp. 1) or the relative number of dots that appeared in the two consecutive target objects (Exp. 2) using a two-alternative forced-choice method. During a learning phase, the number of dots that appeared in the target object was correlated with the hand movement distance. In Exp. 1, we observed that after the participants were trained to expect many dots with larger hand movements, they judged movements made to targets with many dots as being longer than the same movements made to targets with few dots. In Exp. 2, another group of participants who received the same training judged the same number of dots as smaller when larger rather than smaller hand movements were executed. When many dots were paired with smaller hand movements during the learning phase of both experiments, no significant changes in the perception of movements or of visual stimuli were observed. These results suggest that changes in the perception of body states and of external objects can arise when certain body characteristics co-occur with certain characteristics of the environment. They also indicate that the (dis)integration of multimodal perceptual signals depends not only on the physical or statistical relation between these signals, but also on which signal is currently attended.
4. Kim H, Lee IK. Studying the Effects of Congruence of Auditory and Visual Stimuli on Virtual Reality Experiences. IEEE Trans Vis Comput Graph 2022;28:2080-2090. [PMID: 35167477] [DOI: 10.1109/tvcg.2022.3150514]
Abstract
Studies in virtual reality (VR) have introduced numerous multisensory simulation techniques for more immersive VR experiences. However, although they primarily focus on expanding sensory types or increasing individual sensory quality, they lack consensus in designing appropriate interactions between different sensory stimuli. This paper explores how the congruence between auditory and visual (AV) stimuli, which are the sensory stimuli typically provided by VR devices, affects the cognition and experience of VR users as a critical interaction factor in promoting multisensory integration. We defined the types of (in)congruence between AV stimuli, and then designed 12 virtual spaces with different types or degrees of congruence between AV stimuli. We then evaluated the presence, immersion, motion sickness, and cognition changes in each space. We observed the following key findings: 1) there is a limit to the degree of temporal or spatial incongruence that can be tolerated, with few negative effects on user experience until that point is exceeded; 2) users are tolerant of semantic incongruence; 3) a simulation that considers synesthetic congruence contributes to the user's sense of immersion and presence. Based on these insights, we identified the essential considerations for designing sensory simulations in VR and proposed future research directions.
5.
Abstract
Adaptive behavior in a complex, dynamic, and multisensory world poses some of the most fundamental computational challenges for the brain, notably inference, decision-making, learning, binding, and attention. We first discuss how the brain integrates sensory signals from the same source to support perceptual inference and decision-making by weighting them according to their momentary sensory uncertainties. We then show how observers solve the binding or causal inference problem-deciding whether signals come from common causes and should hence be integrated or else be treated independently. Next, we describe the multifarious interplay between multisensory processing and attention. We argue that attentional mechanisms are crucial to compute approximate solutions to the binding problem in naturalistic environments when complex time-varying signals arise from myriad causes. Finally, we review how the brain dynamically adapts multisensory processing to a changing world across multiple timescales.
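The uncertainty-weighted integration referred to above is the standard maximum-likelihood estimation (MLE) scheme; as a reminder (textbook formulation, not specific to this review), for two cues with estimates s_1, s_2 and variances sigma_1^2, sigma_2^2:

```latex
\hat{s} = w_1 \hat{s}_1 + w_2 \hat{s}_2, \qquad
w_i = \frac{1/\sigma_i^2}{1/\sigma_1^2 + 1/\sigma_2^2}, \qquad
\sigma_{\mathrm{comb}}^2 = \frac{\sigma_1^2 \sigma_2^2}{\sigma_1^2 + \sigma_2^2}
\le \min(\sigma_1^2, \sigma_2^2)
```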
Affiliation(s)
- Uta Noppeney
- Donders Institute for Brain, Cognition and Behavior, Radboud University, 6525 AJ Nijmegen, The Netherlands
6. Impact of proprioception on the perceived size and distance of external objects in a virtual action task. Psychon Bull Rev 2021;28:1191-1201. [PMID: 33782919] [PMCID: PMC8367880] [DOI: 10.3758/s13423-021-01915-y]
Abstract
Previous research has revealed changes in the perception of objects due to changes in object-oriented actions. In the present study, we varied arm and finger postures in the context of a virtual reaching and grasping task and tested whether this manipulation can simultaneously affect the perceived size and distance of external objects. Participants manually controlled visual cursors, aiming to reach and enclose a distant target object, and judged the size and distance of this object. We observed that a visual-proprioceptive discrepancy introduced during the reaching part of the action simultaneously affected judgments of target distance and of target size (Experiment 1). A related variation applied to the grasping part of the action affected judgments of size, but not of distance, of the target (Experiment 2). These results indicate that perceptual effects observed in the context of actions can arise directly through sensory integration of multimodal redundant signals and indirectly through perceptual constancy mechanisms.
7.
Abstract
According to the Bayesian framework of multisensory integration, audiovisual stimuli associated with a stronger prior belief that they share a common cause (i.e., causal prior) are predicted to result in a greater degree of perceptual binding and therefore greater audiovisual integration. In the present psychophysical study, we systematically manipulated the causal prior while keeping sensory evidence constant. We paired auditory and visual stimuli during an association phase to be spatiotemporally either congruent or incongruent, with the goal of driving the causal prior in opposite directions for different audiovisual pairs. Following this association phase, every pairwise combination of the auditory and visual stimuli was tested in a typical ventriloquism-effect (VE) paradigm. The size of the VE (i.e., the shift of auditory localization towards the spatially discrepant visual stimulus) indicated the degree of multisensory integration. Results showed that exposure to an audiovisual pairing as spatiotemporally congruent compared to incongruent resulted in a larger subsequent VE (Experiment 1). This effect was further confirmed in a second VE paradigm, where the congruent and the incongruent visual stimuli flanked the auditory stimulus, and a VE in the direction of the congruent visual stimulus was shown (Experiment 2). Since the unisensory reliabilities for the auditory or visual components did not change after the association phase, the observed effects are likely due to changes in multisensory binding by association learning. As suggested by Bayesian theories of multisensory processing, our findings support the existence of crossmodal causal priors that are flexibly shaped by experience in a changing world.
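How a stronger causal prior produces a larger ventriloquism effect can be illustrated with a standard model-averaging causal-inference observer (in the style of Körding et al., 2007). This is a sketch with hypothetical parameters, not the authors' model; for brevity the disparity likelihood under independent causes is treated as flat:

```python
import numpy as np

def auditory_estimate(x_a, x_v, sigma_a=6.0, sigma_v=2.0,
                      p_common=0.5, like_c2=1 / 60.0):
    """Model-averaging causal-inference estimate of auditory location.
    All parameter values are hypothetical; under independent causes (C=2)
    the disparity likelihood is approximated as flat (1/range)."""
    # Reliability-weighted fusion estimate, assuming a common cause.
    w_a = sigma_v**2 / (sigma_a**2 + sigma_v**2)
    s_fused = w_a * x_a + (1 - w_a) * x_v
    # Likelihood of the audio-visual disparity given a common cause.
    var1 = sigma_a**2 + sigma_v**2
    like_c1 = np.exp(-(x_a - x_v)**2 / (2 * var1)) / np.sqrt(2 * np.pi * var1)
    # Posterior probability of a common cause; p_common is the causal
    # prior that the association phase in this study is assumed to shift.
    p_c1 = p_common * like_c1 / (p_common * like_c1 + (1 - p_common) * like_c2)
    # Model averaging: mix the fused and the auditory-only estimates.
    return p_c1 * s_fused + (1 - p_c1) * x_a

for p in (0.2, 0.8):  # weaker vs. stronger causal prior
    shift = auditory_estimate(x_a=0.0, x_v=10.0, p_common=p)
    print(f"p_common={p}: shift of {shift:.1f} deg toward the visual stimulus")
```

In this toy example, raising p_common from 0.2 to 0.8 roughly quadruples the shift toward the visual stimulus, mirroring the larger VE after congruent association.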
Affiliation(s)
- Jonathan Tong
- Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146 Hamburg, Germany
- Centre for Vision Research, Department of Psychology, York University, Toronto, Ontario, Canada
- Lux Li
- Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146 Hamburg, Germany
- Patrick Bruns
- Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146 Hamburg, Germany
- Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146 Hamburg, Germany
8. Shayman CS, Peterka RJ, Gallun FJ, Oh Y, Chang NYN, Hullar TE. Frequency-dependent integration of auditory and vestibular cues for self-motion perception. J Neurophysiol 2020;123:936-944. [PMID: 31940239] [DOI: 10.1152/jn.00307.2019]
Abstract
Recent evidence has shown that auditory information may be used to improve postural stability, spatial orientation, navigation, and gait, suggesting an auditory component of self-motion perception. To determine how auditory and other sensory cues integrate for self-motion perception, we measured motion perception during yaw rotations of the body and of the auditory environment. Psychophysical thresholds in humans were measured over a range of frequencies (0.1-1.0 Hz) during self-rotation without spatial auditory stimuli, rotation of a sound source around a stationary listener, and self-rotation in the presence of an earth-fixed sound source. Unisensory perceptual thresholds and the combined multisensory thresholds were found to be frequency dependent. Auditory thresholds were better at lower frequencies, and vestibular thresholds were better at higher frequencies. Expressed in terms of peak angular velocity, multisensory vestibular and auditory thresholds ranged from 0.39°/s at 0.1 Hz to 0.95°/s at 1.0 Hz and were significantly better over low frequencies than either the auditory-only (0.54°/s and 2.42°/s at 0.1 and 1.0 Hz, respectively) or vestibular-only (2.00°/s and 0.75°/s at 0.1 and 1.0 Hz, respectively) unisensory conditions. Monaurally presented auditory cues were less effective than binaural cues in lowering multisensory thresholds. Frequency-independent thresholds were derived, assuming that vestibular thresholds depend on a weighted combination of velocity and acceleration cues, whereas auditory thresholds depend on displacement and velocity cues. These results elucidate fundamental mechanisms for the contribution of audition to balance and help explain previous findings indicating its significance in tasks requiring self-orientation.

NEW & NOTEWORTHY: Auditory information can be integrated with visual, proprioceptive, and vestibular signals to improve balance, orientation, and gait, but this process is poorly understood. Here, we show that auditory cues significantly improve sensitivity to self-motion perception below 0.5 Hz, whereas vestibular cues contribute more at higher frequencies. Motion thresholds are determined by a weighted combination of displacement, velocity, and acceleration information. These findings may help understand and treat imbalance, particularly in people with sensory deficits.
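As a point of reference, the standard optimal-integration benchmark for two independent cues predicts a combined threshold of T1*T2/sqrt(T1^2 + T2^2). A quick check against the unisensory values quoted above (a sketch only; the paper's own modeling additionally weights displacement, velocity, and acceleration cues):

```python
import numpy as np

def mle_threshold(t1, t2):
    """Combined threshold predicted by optimal (inverse-variance)
    integration of two independent cues."""
    return t1 * t2 / np.sqrt(t1**2 + t2**2)

# Unisensory thresholds quoted above (peak angular velocity, deg/s).
print(mle_threshold(0.54, 2.00))  # ~0.52 at 0.1 Hz; observed bimodal: 0.39
print(mle_threshold(2.42, 0.75))  # ~0.72 at 1.0 Hz; observed bimodal: 0.95
```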
Affiliation(s)
- Corey S Shayman
- Department of Otolaryngology-Head and Neck Surgery, Oregon Health and Science University, Portland, Oregon; School of Medicine, University of Utah, Salt Lake City, Utah
- Robert J Peterka
- Department of Neurology, Oregon Health and Science University, Portland, Oregon; National Center for Rehabilitative Auditory Research, VA Portland Health Care System, Portland, Oregon
- Frederick J Gallun
- National Center for Rehabilitative Auditory Research, VA Portland Health Care System, Portland, Oregon; Oregon Hearing Research Center, Department of Otolaryngology-Head and Neck Surgery, Oregon Health and Science University, Portland, Oregon
- Yonghee Oh
- Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville, Florida
- Nai-Yuan N Chang
- Department of Preventive and Restorative Dental Sciences, Division of Bioengineering and Biomaterials, University of California, San Francisco, San Francisco, California
- Timothy E Hullar
- Department of Otolaryngology-Head and Neck Surgery, Oregon Health and Science University, Portland, Oregon; Department of Neurology, Oregon Health and Science University, Portland, Oregon; National Center for Rehabilitative Auditory Research, VA Portland Health Care System, Portland, Oregon
9. Kuang S, Deng H, Zhang T. Adaptive heading performance during self-motion perception. Psych J 2019;9:295-305. [PMID: 31814320] [DOI: 10.1002/pchj.330]
Abstract
Previous studies have documented that the perception of self-motion direction can be extracted from the patterns of image motion on the retina (also termed optic flow). Self-motion perception remains stable even when the optic-flow information is distorted by concurrent gaze shifts from body/eye rotations. This has been interpreted as evidence that extraretinal signals - efference copies of eye/body movements - are involved in compensating for retinal distortions. Here, we tested an alternative hypothesis to this extraretinal interpretation. We hypothesized that accurate self-motion perception can be achieved with a purely optic-flow-based visual strategy acquired through experience, independent of extraretinal mechanisms. To test this, we asked human subjects to perform a self-motion direction discrimination task under normal optic flow (fixation condition) or optic flow distorted by either real (pursuit condition) or simulated (simulated condition) eye movements. The task was performed either without (pre- and post-training) or with (during training) feedback about the correct answer. We first replicated the previous observation that, before training, direction perception was greatly impaired in the simulated condition, where the optic flow was distorted and extraretinal eye movement signals were absent. We further showed that after a few training sessions the initial impairment in direction perception gradually improved. These results reveal that behavioral training can promote the exploitation of retinal cues to compensate for the distortion, without any contribution from extraretinal signals. Our results suggest that self-motion perception is a flexible and adaptive process that may depend on neural plasticity in the relevant cortical areas.
Affiliation(s)
- Shenbing Kuang
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Hu Deng
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Tao Zhang
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
10. Cuturi LF, Gori M. Biases in the Visual and Haptic Subjective Vertical Reveal the Role of Proprioceptive/Vestibular Priors in Child Development. Front Neurol 2019;9:1151. [PMID: 30666230] [PMCID: PMC6330314] [DOI: 10.3389/fneur.2018.01151]
Abstract
Investigating the perception of verticality allows us to disclose the perceptual mechanisms that underlie balance control and spatial navigation. Estimating verticality in unusual body orientations with respect to gravity (e.g., laterally tilted in the roll plane) leads to biases that change depending on the encoding sensory modality and the amount of tilt. A well-known phenomenon is the A-effect, a bias toward the body tilt that is often interpreted in a Bayesian framework as the byproduct of a prior peaked at the most common head and body orientation, i.e., upright. In this study, we took advantage of this phenomenon to study the interaction of visual and haptic sensory information with vestibular/proprioceptive priors across development. We tested children (5-13 y.o.) and adults (>22 y.o.) in an orientation discrimination task while they were laterally tilted 90° to their left-ear side. Experimental conditions differed in the tested sensory modality: visual-only, haptic-only, or both modalities. The resulting accuracy depended on the developmental stage and the encoding sensory modality, showing A-effects in vision across all ages, and in the haptic modality only for the youngest children, whereas bimodal judgments showed a lack of multisensory integration in children. A Bayesian prior model nicely predicts the behavioral data when the peak of the prior distribution shifts across age groups. Our results suggest that vision is pivotal for acquiring an idiotropic vector that is useful for improving precision when upright. The acquisition of such a prior might be related to the development of head and trunk coordination, a process that is fundamental for successful spatial navigation.
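For Gaussian likelihood and prior, the Bayesian prior account of the A-effect reduces to a precision-weighted average, so the bias toward the body grows as the graviceptive input becomes noisier. A minimal sketch with hypothetical numbers (the paper's actual model also shifts the prior peak across age groups):

```python
def posterior_vertical(obs_deg, sigma_obs, prior_deg=0.0, sigma_prior=40.0):
    """Posterior mean for head-relative gravity direction with a Gaussian
    prior peaked at the head/body axis (illustrative parameters only)."""
    w_obs = (1 / sigma_obs**2) / (1 / sigma_obs**2 + 1 / sigma_prior**2)
    return w_obs * obs_deg + (1 - w_obs) * prior_deg

# Lying 90 deg on the left side: gravity is at 90 deg in head coordinates.
true_vertical = 90.0
for sigma_obs in (10.0, 30.0):  # more vs. less reliable graviceptive input
    est = posterior_vertical(true_vertical, sigma_obs)
    print(f"sigma_obs={sigma_obs}: estimate {est:.1f} deg "
          f"(A-effect bias {true_vertical - est:.1f} deg toward the body)")
```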
Affiliation(s)
- Luigi F Cuturi
- Unit for Visually Impaired People, Science and Technology for Children and Adults, Istituto Italiano di Tecnologia, Genoa, Italy
- Monica Gori
- Unit for Visually Impaired People, Science and Technology for Children and Adults, Istituto Italiano di Tecnologia, Genoa, Italy
11. A virtual reality approach identifies flexible inhibition of motion aftereffects induced by head rotation. Behav Res Methods 2018;51:96-107. [PMID: 30187432] [DOI: 10.3758/s13428-018-1116-6]
Abstract
As we move in space, our retinae receive motion signals from two causes: those resulting from motion in the world and those resulting from self-motion. Mounting evidence has shown that vestibular self-motion signals interact profoundly with visual motion processing. However, most contemporary methods arguably lack portability and generality and are incapable of providing measurements during locomotion. Here we developed a virtual reality approach, combining a three-space sensor with a head-mounted display, to quantitatively manipulate the causality between retinal motion and head rotations in the yaw plane. Using this system, we explored how self-motion affects visual motion perception, particularly the motion aftereffect (MAE). Subjects watched gratings presented on a head-mounted display. The gratings drifted at the same velocity as head rotations, with the drifting direction being identical, opposite, or perpendicular to the direction of head rotation. We found that the MAE lasted a significantly shorter time when subjects' heads rotated than when their heads were kept still. This effect was present regardless of the drifting direction of the gratings and was also observed during passive head rotations. These findings suggest that adaptation to retinal motion is suppressed by head rotations. Because the suppression was also found during passive head movements, it should result from visual-vestibular interaction rather than from efference copy signals. Such visual-vestibular interaction is more flexible than previously thought, since the suppression was observed even when the retinal motion direction was perpendicular to the head rotation. Our work suggests that a virtual reality approach can be applied to various studies of multisensory integration and interaction.
12. Acerbi L, Dokka K, Angelaki DE, Ma WJ. Bayesian comparison of explicit and implicit causal inference strategies in multisensory heading perception. PLoS Comput Biol 2018;14:e1006110. [PMID: 30052625] [PMCID: PMC6063401] [DOI: 10.1371/journal.pcbi.1006110]
Abstract
The precision of multisensory perception improves when cues arising from the same cause are integrated, such as visual and vestibular heading cues for an observer moving through a stationary environment. In order to determine how the cues should be processed, the brain must infer the causal relationship underlying the multisensory cues. In heading perception, however, it is unclear whether observers follow the Bayesian strategy, a simpler non-Bayesian heuristic, or even perform causal inference at all. We developed an efficient and robust computational framework to perform Bayesian model comparison of causal inference strategies, which incorporates a number of alternative assumptions about the observers. With this framework, we investigated whether human observers' performance in an explicit cause-attribution task and an implicit heading discrimination task can be modeled as a causal inference process. In the explicit causal inference task, all subjects accounted for cue disparity when reporting judgments of common cause, although not necessarily all in a Bayesian fashion. By contrast, but in agreement with previous findings, data from the heading discrimination task alone could not rule out that several of the same observers were adopting a forced-fusion strategy, whereby cues are integrated regardless of disparity. Only when we combined evidence from both tasks were we able to rule out forced fusion in the heading discrimination task. Crucially, the findings were robust across a number of variants of the models and analyses. Our results demonstrate that our proposed computational framework allows researchers to ask complex questions within a rigorous Bayesian framework that accounts for parameter and model uncertainty.
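The key methodological move - combining evidence across the explicit and implicit tasks - amounts to summing each model's log evidence over tasks before comparing. A schematic illustration with made-up numbers (the paper's actual comparison accounts for parameter and model uncertainty via full Bayesian model comparison):

```python
# Hypothetical per-task log evidence for two observer models.
log_evidence = {
    "forced_fusion":    {"explicit": -512.3, "implicit": -430.1},
    "causal_inference": {"explicit": -497.8, "implicit": -429.5},
}

for model, le in log_evidence.items():
    print(f"{model:>16}: implicit only {le['implicit']:.1f}, "
          f"both tasks {le['explicit'] + le['implicit']:.1f}")
# The implicit task alone barely separates the models (0.6 log units);
# jointly, causal inference wins decisively (~15 log units), mirroring
# the conclusion that only combined evidence rules out forced fusion.
```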
Affiliation(s)
- Luigi Acerbi
- Center for Neural Science, New York University, New York, NY, United States of America
- Kalpana Dokka
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, United States of America
- Dora E. Angelaki
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, United States of America
- Wei Ji Ma
- Center for Neural Science, New York University, New York, NY, United States of America
- Department of Psychology, New York University, New York, NY, United States of America
13. Kaliuzhna M, Gale S, Prsa M, Maire R, Blanke O. Optimal visuo-vestibular integration for self-motion perception in patients with unilateral vestibular loss. Neuropsychologia 2018;111:112-116. [DOI: 10.1016/j.neuropsychologia.2018.01.033]
14. Goeke CM, Planera S, Finger H, König P. Bayesian Alternation during Tactile Augmentation. Front Behav Neurosci 2016;10:187. [PMID: 27774057] [PMCID: PMC5054009] [DOI: 10.3389/fnbeh.2016.00187]
Abstract
A large number of studies suggest that the integration of multisensory signals by humans is well described by Bayesian principles. However, there are very few reports about cue combination between a native and an augmented sense. In particular, we asked whether adult participants are able to integrate an augmented sensory cue with existing native sensory information. For the purpose of this study, we built a tactile augmentation device and compared different hypotheses of how untrained adult participants combine information from a native and an augmented sense. In a two-interval forced-choice (2IFC) task, while subjects were blindfolded and seated on a rotating platform, our sensory augmentation device translated information about whole-body yaw rotation into tactile stimulation. Three conditions were realized: tactile stimulation only (augmented condition), rotation only (native condition), and both augmented and native information (bimodal condition). Participants had to choose which of two consecutive rotations was larger. For the analysis, we fitted the participants' responses with a probit model and calculated the just-noticeable difference (JND). We then compared several models for predicting bimodal from unimodal responses. An objective Bayesian alternation model yielded a better prediction (χ²_red = 1.67) than the Bayesian integration model (χ²_red = 4.34). A non-Bayesian winner-takes-all (WTA) model, which used either only native or only augmented values per subject for prediction, showed slightly higher accuracy (χ²_red = 1.64). However, the performance of the Bayesian alternation model could be substantially improved (χ²_red = 1.09) by utilizing subjective weights obtained from a questionnaire. As a result, the subjective Bayesian alternation model predicted bimodal performance most accurately among all tested models. These results suggest that information from augmented and existing sensory modalities in untrained humans is combined via a subjective Bayesian alternation process. We therefore conclude that behavior in our bimodal condition is explained better by top-down subjective weighting than by bottom-up weighting based on objective cue reliability.
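The analysis pipeline described above - a probit fit per condition yielding a JND, then comparison of bimodal JNDs against model predictions - can be sketched as follows (hypothetical response data; the reduced-χ² model scoring is omitted for brevity):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def probit(delta, mu, sigma):
    """Cumulative-Gaussian psychometric function."""
    return norm.cdf(delta, loc=mu, scale=sigma)

# Hypothetical 2IFC data: signed rotation difference (deg) vs. proportion
# of "second rotation larger" responses.
deltas = np.array([-20, -10, -5, 0, 5, 10, 20], dtype=float)
p_resp = np.array([0.05, 0.20, 0.35, 0.50, 0.70, 0.85, 0.97])

(mu, sigma), _ = curve_fit(probit, deltas, p_resp, p0=(0.0, 10.0))
jnd = sigma  # distance from the 50% to the 84% point of the fitted curve
print(f"PSE = {mu:.2f} deg, JND = {jnd:.2f} deg")

# Bayesian-integration prediction for the bimodal JND from unimodal JNDs:
jnd_native, jnd_augmented = 8.0, 12.0  # hypothetical unimodal values
jnd_pred = np.sqrt(jnd_native**2 * jnd_augmented**2
                   / (jnd_native**2 + jnd_augmented**2))
print(f"integration prediction: {jnd_pred:.2f} deg")  # ~6.66, below both
```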
Affiliation(s)
- Caspar M. Goeke
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Serena Planera
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Holger Finger
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Peter König
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
15. Nash CJ, Cole DJ, Bigler RS. A review of human sensory dynamics for application to models of driver steering and speed control. Biol Cybern 2016;110:91-116. [PMID: 27086133] [PMCID: PMC4903114] [DOI: 10.1007/s00422-016-0682-x]
Abstract
In comparison with the high level of knowledge about vehicle dynamics that now exists, the role of the driver in the driver-vehicle system is still relatively poorly understood. A large variety of driver models exist for various applications; however, few of them take account of the driver's sensory dynamics, and those that do are limited in their scope and accuracy. A review of the literature has been carried out to consolidate information from previous studies that may be useful when incorporating human sensory systems into the design of a driver model. This includes information on sensory dynamics, delays, thresholds, and the integration of multiple sensory stimuli. This review should provide a basis for further study into sensory perception during driving.
Affiliation(s)
- Christopher J. Nash
- Cambridge University Engineering Department, Trumpington Street, Cambridge CB2 1PZ, UK
- David J. Cole
- Cambridge University Engineering Department, Trumpington Street, Cambridge CB2 1PZ, UK
- Robert S. Bigler
- Cambridge University Engineering Department, Trumpington Street, Cambridge CB2 1PZ, UK
16. Multisensory effects on somatosensation: a trimodal visuo-vestibular-tactile interaction. Sci Rep 2016;6:26301. [PMID: 27198907] [PMCID: PMC4873743] [DOI: 10.1038/srep26301]
Abstract
Vestibular information about self-motion is combined with other sensory signals. Previous research described both visuo-vestibular and vestibular-tactile bilateral interactions, but the simultaneous interaction between all three sensory modalities has not been explored. Here we exploit a previously reported visuo-vestibular integration to investigate multisensory effects on tactile sensitivity in humans. Tactile sensitivity was measured during passive whole body rotations alone or in conjunction with optic flow, creating either purely vestibular or visuo-vestibular sensations of self-motion. Our results demonstrate that tactile sensitivity is modulated by perceived self-motion, as provided by a combined visuo-vestibular percept, and not by the visual and vestibular cues independently. We propose a hierarchical multisensory interaction that underpins somatosensory modulation: visual and vestibular cues are first combined to produce a multisensory self-motion percept. Somatosensory processing is then enhanced according to the degree of perceived self-motion.
17. Salomon R, Kaliuzhna M, Herbelin B, Blanke O. Balancing awareness: Vestibular signals modulate visual consciousness in the absence of awareness. Conscious Cogn 2015. [DOI: 10.1016/j.concog.2015.07.009]
18. de Winkel KN, Katliar M, Bülthoff HH. Forced fusion in multisensory heading estimation. PLoS One 2015;10:e0127104. [PMID: 25938235] [PMCID: PMC4418840] [DOI: 10.1371/journal.pone.0127104]
Abstract
It has been shown that the central nervous system (CNS) integrates visual and inertial information in heading estimation for congruent multisensory stimuli and for stimuli with small discrepancies. Multisensory information should, however, only be integrated when the cues are redundant. Here, we investigated how the CNS constructs an estimate of heading for combinations of visual and inertial heading stimuli with a wide range of discrepancies. Participants were presented with 2-s visual-only and inertial-only motion stimuli, and combinations thereof. Discrepancies between visual and inertial heading ranging from 0° to 90° were introduced for the combined stimuli. In the unisensory conditions, visual heading was generally biased towards the fore-aft axis, while inertial heading was biased away from it. For multisensory stimuli, five out of nine participants integrated visual and inertial heading information regardless of the size of the discrepancy; for one participant, the data were best described by a model that explicitly performs causal inference. For the remaining three participants, the evidence could not readily distinguish between these models. The finding that multisensory information is integrated is in line with earlier findings, but the finding that even large discrepancies are generally disregarded is surprising. Possibly, people are insensitive to discrepancies in visual-inertial heading angle because such discrepancies are only encountered in artificial environments, making a neural mechanism to account for them otiose. An alternative explanation is that detection of a discrepancy may depend on stimulus duration, and that sensitivity to detect discrepancies differs between people.
Affiliation(s)
- Ksander N. de Winkel
- Department of Human Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Spemanstrasse 38, 72076 Tübingen, Germany
- Mikhail Katliar
- Department of Human Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Spemanstrasse 38, 72076 Tübingen, Germany
- Heinrich H. Bülthoff
- Department of Human Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Spemanstrasse 38, 72076 Tübingen, Germany
- Department of Brain and Cognitive Engineering, Korea University, Anam-dong, Seongbuk-gu, Seoul 136-713, Korea