1. Kirollos R, Herdman CM. Caloric vestibular stimulation induces vestibular circular vection even with a conflicting visual display presented in a virtual reality headset. Iperception 2023; 14:20416695231168093. PMID: 37113619; PMCID: PMC10126621; DOI: 10.1177/20416695231168093.
Abstract
This study explored visual-vestibular sensory integration when the vestibular system receives self-motion information via caloric irrigation. The objectives were to (1) determine whether measurable vestibular circular vection can be induced in healthy participants using caloric vestibular stimulation and (2) determine whether a conflicting visual display can impact vestibular vection. In Experiment 1 (E1), participants had their eyes closed. Air caloric vestibular stimulation cooled the endolymph fluid of the horizontal semicircular canal, inducing vestibular circular vection. Participants reported vestibular circular vection with a potentiometer knob that measured vection direction, speed, and duration. In Experiment 2 (E2), participants received caloric vestibular stimulation while viewing a stationary display in a virtual reality headset that did not signal self-motion, producing a visual-vestibular conflict. Participants indicated clockwise vection for left-ear stimulation and counter-clockwise vection for right-ear stimulation in a significant proportion of trials in both E1 and E2. Vection was significantly slower and shorter in E2 than in E1. The E2 results demonstrate that during visual-vestibular conflict, visual and vestibular cues are jointly used to determine self-motion rather than one system overriding the other. These results are consistent with the optimal cue integration hypothesis.
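The optimal cue integration hypothesis invoked in this abstract is commonly formalized as maximum-likelihood (reliability-weighted) averaging of the unisensory estimates. A minimal sketch of that standard model follows; the function name and numeric values are illustrative, not taken from the study:

```python
# Reliability-weighted (maximum-likelihood) cue integration:
# each cue's weight is its inverse variance, normalized across cues.

def integrate(mu_vis, sigma_vis, mu_vest, sigma_vest):
    """Combine visual and vestibular estimates of the same quantity."""
    r_vis, r_vest = 1 / sigma_vis**2, 1 / sigma_vest**2  # reliabilities
    w_vis = r_vis / (r_vis + r_vest)                     # normalized visual weight
    mu = w_vis * mu_vis + (1 - w_vis) * mu_vest          # fused estimate
    sigma = (1 / (r_vis + r_vest)) ** 0.5                # fused SD, never worse than either cue
    return mu, sigma

# Illustrative numbers: a vestibular vection signal (10 deg/s) conflicting with a
# stationary visual scene (0 deg/s), with equal noise in both modalities.
mu, sigma = integrate(0.0, 4.0, 10.0, 4.0)
print(mu, sigma)  # 5.0, ~2.83: the fused percept lies between the cues and is more precise
```

Under this model neither modality "wins"; the conflict is resolved by a compromise whose position depends only on the relative reliabilities, which is the pattern the E2 results describe.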
Affiliation(s)
- Ramy Kirollos: Defence Research and Development Canada, Toronto Research Center, 1133 Sheppard Ave. W., Toronto, Ontario, M3K 2C9, Canada; Visualization and Simulation Center, Carleton University, 1125 Colonel By Drive, Ottawa, Ontario, K1S 5B6, Canada.
2. Bruns P, Li L, Guerreiro MJ, Shareef I, Rajendran SS, Pitchaimuthu K, Kekunnaya R, Röder B. Audiovisual spatial recalibration but not integration is shaped by early sensory experience. iScience 2022; 25:104439. PMID: 35874923; PMCID: PMC9301879; DOI: 10.1016/j.isci.2022.104439.
Affiliation(s)
- Patrick Bruns (corresponding author): Biological Psychology and Neuropsychology, University of Hamburg, 20146 Hamburg, Germany.
- Lux Li: Biological Psychology and Neuropsychology, University of Hamburg, 20146 Hamburg, Germany; Department of Epidemiology and Biostatistics, Schulich School of Medicine & Dentistry, Western University, London, ON N6G 2M1, Canada.
- Maria J.S. Guerreiro: Biological Psychology and Neuropsychology, University of Hamburg, 20146 Hamburg, Germany; Biological Psychology, Department of Psychology, School of Medicine and Health Sciences, University of Oldenburg, 26111 Oldenburg, Germany.
- Idris Shareef: Jasti V Ramanamma Children’s Eye Care Centre, LV Prasad Eye Institute, Hyderabad, Telangana 500034, India.
- Siddhart S. Rajendran: Biological Psychology and Neuropsychology, University of Hamburg, 20146 Hamburg, Germany; Jasti V Ramanamma Children’s Eye Care Centre, LV Prasad Eye Institute, Hyderabad, Telangana 500034, India.
- Kabilan Pitchaimuthu: Biological Psychology and Neuropsychology, University of Hamburg, 20146 Hamburg, Germany; Jasti V Ramanamma Children’s Eye Care Centre, LV Prasad Eye Institute, Hyderabad, Telangana 500034, India.
- Ramesh Kekunnaya: Jasti V Ramanamma Children’s Eye Care Centre, LV Prasad Eye Institute, Hyderabad, Telangana 500034, India.
- Brigitte Röder: Biological Psychology and Neuropsychology, University of Hamburg, 20146 Hamburg, Germany.
3. Bruschetta M, de Winkel KN, Mion E, Pretto P, Beghi A, Bülthoff HH. Assessing the contribution of active somatosensory stimulation to self-acceleration perception in dynamic driving simulators. PLoS One 2021; 16:e0259015. PMID: 34793458; PMCID: PMC8601569; DOI: 10.1371/journal.pone.0259015.
Abstract
In dynamic driving simulators, the experience of operating a vehicle is reproduced by combining visual stimuli generated by graphical rendering with inertial stimuli generated by platform motion. Due to inherent limitations of the platform workspace, inertial stimulation is subject to shortcomings in the form of missing cues, false cues, and/or scaling errors, which negatively affect simulation fidelity. In the present study, we aim to quantify the contribution of active somatosensory stimulation, relative to that of the other sensory systems, to the perceived intensity of self-motion. Participants judged the intensity of longitudinal and lateral driving maneuvers in a dynamic driving simulator in passive driving conditions, with and without additional active somatosensory stimulation, as provided by an Active Seat (AS) and Active Belts (AB) integrated system (ASB). The results show that ASB enhances the perceived intensity of sustained decelerations and increases the precision of acceleration perception overall. Our findings are consistent with models of perception and indicate that active somatosensory stimulation can indeed be used to improve simulation fidelity.
Affiliation(s)
- Mattia Bruschetta: Department of Information Engineering, University of Padova, Padova, Italy.
- Ksander N. de Winkel: Cognitive Robotics, TU Delft, Delft, Netherlands; Department of Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany.
- Enrico Mion: Department of Information Engineering, University of Padova, Padova, Italy; Department of Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany.
- Alessandro Beghi: Department of Information Engineering, University of Padova, Padova, Italy.
- Heinrich H. Bülthoff: Department of Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany.
4. Rodriguez R, Crane BT. Effect of timing delay between visual and vestibular stimuli on heading perception. J Neurophysiol 2021; 126:304-312. PMID: 34191637; DOI: 10.1152/jn.00351.2020.
Abstract
Heading direction is perceived based on visual and inertial cues. The current study examined the effect of their relative timing on the ability of offset visual headings to influence inertial perception. Seven healthy human subjects experienced 2 s of translation along a heading of 0°, ±35°, ±70°, ±105°, or ±140°. These inertial headings were paired with 2-s duration visual headings presented at relative offsets of 0°, ±30°, ±60°, ±90°, or ±120°. The visual stimuli were also presented at 17 temporal delays ranging from -500 ms (visual lead) to 2,000 ms (visual delay) relative to the inertial stimulus. After each stimulus, subjects reported the direction of the inertial stimulus using a dial. The bias of the inertial heading toward the visual heading was robust for temporal offsets within ±250 ms when examined across subjects: 8.0° ± 0.5° with a 30° offset, 12.2° ± 0.5° with a 60° offset, 11.7° ± 0.6° with a 90° offset, and 9.8° ± 0.7° with a 120° offset (mean bias toward visual ± SE). The mean bias was much diminished with temporal misalignments of ±500 ms, and there was no longer any visual influence on the inertial heading when the visual stimulus was delayed by 1,000 ms or more. Although the amount of bias varied between subjects, the effect of delay was similar.
NEW & NOTEWORTHY The effect of timing on visual-inertial integration in heading perception had not been previously examined. This study finds that visual direction influences inertial heading perception when timing differences are within 250 ms. This suggests visual-inertial stimuli can be integrated over a wider temporal range than reported for visual-auditory integration, which may be due to the unique nature of inertial sensation: the inertial system can only sense acceleration, while the visual system senses position but encodes velocity.
Affiliation(s)
- Raul Rodriguez: Department of Biomedical Engineering, University of Rochester, Rochester, New York.
- Benjamin T Crane: Department of Biomedical Engineering, Department of Otolaryngology, and Department of Neuroscience, University of Rochester, Rochester, New York.
5. Keshner EA, Lamontagne A. The Untapped Potential of Virtual Reality in Rehabilitation of Balance and Gait in Neurological Disorders. Front Virtual Real 2021; 2:641650. PMID: 33860281; PMCID: PMC8046008; DOI: 10.3389/frvir.2021.641650.
Abstract
Dynamic systems theory transformed our understanding of motor control by recognizing the continual interaction between the organism and the environment. Movement could no longer be visualized simply as a response to a pattern of stimuli or as a demonstration of prior intent; movement is context dependent and is continuously reshaped by the ongoing dynamics of the world around us. Virtual reality is one methodological variable that allows us to control and manipulate that environmental context. A large body of literature exists to support the impact of visual flow, visual conditions, and visual perception on the planning and execution of movement. In rehabilitative practice, however, this technology has been employed mostly as a tool for motivation and enjoyment of physical exercise. The opportunity to modulate motor behavior through the parameters of the virtual world is often ignored in practice. In this article we present the results of experiments from our laboratories and from others demonstrating that presenting particular characteristics of the virtual world through different sensory modalities will modify balance and locomotor behavior. We will discuss how movement in the virtual world opens a window into the motor planning processes and informs us about the relative weighting of visual and somatosensory signals. Finally, we discuss how these findings should influence future treatment design.
Affiliation(s)
- Emily A. Keshner (correspondence): Department of Health and Rehabilitation Sciences, Temple University, Philadelphia, PA, United States.
- Anouk Lamontagne: School of Physical and Occupational Therapy, McGill University, Montreal, QC, Canada; Virtual Reality and Mobility Laboratory, CISSS Laval—Jewish Rehabilitation Hospital Site of the Centre for Interdisciplinary Research in Rehabilitation of Greater Montreal, Laval, QC, Canada.
6. De Winkel KN, Edel E, Happee R, Bülthoff HH. Multisensory Interactions in Head and Body Centered Perception of Verticality. Front Neurosci 2021; 14:599226. PMID: 33510611; PMCID: PMC7835726; DOI: 10.3389/fnins.2020.599226.
Abstract
Percepts of verticality are thought to be constructed as a weighted average of multisensory inputs, but the observed weights differ considerably between studies. In the present study, we evaluate whether this can be explained by differences in how visual, somatosensory, and proprioceptive cues contribute to representations of the Head In Space (HIS) and Body In Space (BIS). Ten participants stood on a force plate on top of a motion platform while wearing a visualization device that allowed us to artificially tilt their visual surroundings. They were presented with (in)congruent combinations of visual, platform, and head tilt, and performed Rod & Frame Test (RFT) and Subjective Postural Vertical (SPV) tasks. We also recorded postural responses to evaluate the relation between perception and balance. The perception data show that body tilt, head tilt, and visual tilt affect the HIS and BIS in both experimental tasks. For the RFT task, visual tilt induced considerable biases (≈ 10° for 36° visual tilt) in the direction of the vertical expressed in the visual scene; for the SPV task, participants also adjusted platform tilt to correct for illusory body tilt induced by the visual stimuli, but effects were much smaller (≈ 0.25°). Likewise, postural data from the SPV task indicate participants slightly shifted their weight to counteract visual tilt (0.3° for 36° visual tilt). The data reveal a striking dissociation of visual effects between the two tasks. We find that the data can be explained well using a model where percepts of the HIS and BIS are constructed from direct signals from head and body sensors, respectively, and indirect signals based on body and head signals but corrected for perceived neck tilt. These findings show that perception of the HIS and BIS derives from the same sensory signals but is subject to profoundly different weighting factors. We conclude that the different weightings observed between studies likely result from querying of distinct latent constructs referenced to the body or the head in space.
Affiliation(s)
- Ksander N. De Winkel: Intelligent Vehicles Research Group, Cognitive Robotics Department, Faculty 3mE, Delft University of Technology, Delft, Netherlands; Department of Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany.
- Ellen Edel: Department of Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany.
- Riender Happee: Intelligent Vehicles Research Group, Cognitive Robotics Department, Faculty 3mE, Delft University of Technology, Delft, Netherlands.
- Heinrich H. Bülthoff: Department of Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany.
7. French RL, DeAngelis GC. Multisensory neural processing: from cue integration to causal inference. Curr Opin Physiol 2020; 16:8-13. PMID: 32968701; DOI: 10.1016/j.cophys.2020.04.004.
Abstract
Neurophysiological studies of multisensory processing have largely focused on how the brain integrates information from different sensory modalities to form a coherent percept. However, in the natural environment, an important extra step is needed: the brain faces the problem of causal inference, which involves determining whether different sources of sensory information arise from the same environmental cause, such that integrating them is advantageous. Behavioral and computational studies have provided a strong foundation for studying causal inference, but studies of its neural basis have only recently been undertaken. This review focuses on recent advances regarding how the brain infers the causes of sensory inputs and uses this information to make robust perceptual estimates.
Affiliation(s)
- Ranran L French: Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY.
- Gregory C DeAngelis: Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY.
8. Rodriguez R, Crane BT. Common causation and offset effects in human visual-inertial heading direction integration. J Neurophysiol 2020; 123:1369-1379. PMID: 32130052; DOI: 10.1152/jn.00019.2020.
Abstract
Movement direction can be determined from a combination of visual and inertial cues. Visual motion (optic flow) can represent self-motion through a fixed environment or environmental motion relative to an observer. Simultaneous visual and inertial heading cues raise the question of whether the cues have a common cause (i.e., should be integrated) or should be considered independent. This was studied in eight healthy human subjects who experienced 12 visual and inertial headings in the horizontal plane, divided into 30° increments. The headings were estimated in two unisensory and six multisensory trial blocks. Each unisensory block included 72 stimulus presentations, while each multisensory block included 144 stimulus presentations, including every possible combination of visual and inertial headings in random order. After each multisensory stimulus, subjects reported their perception of visual and inertial headings as congruous (i.e., having common causation) or not. In the multisensory trial blocks, subjects also reported visual or inertial heading direction (3 trial blocks for each). For aligned visual-inertial headings, the rate of reported common causation was higher for alignment in cardinal than in noncardinal directions. When visual and inertial stimuli were separated by 30°, the rate of reported common causation remained >50%, but it decreased to 15% or less for separations of ≥90°. The inertial heading was biased toward the visual heading by 11-20° for separations of 30-120°. Thus there was sensory integration even in conditions without reported common causation. The visual heading was minimally influenced by inertial direction. When trials with common causation perception were compared with those without, inertial heading perception had a stronger bias toward the visual stimulus direction.
NEW & NOTEWORTHY Optic flow ambiguously represents self-motion or environmental motion. When these are in different directions, it is uncertain whether they are integrated into a common percept. This study addresses that issue by determining whether the two modalities are perceived as consistent and by measuring their perceived directions to quantify the degree of influence. The visual stimulus can significantly influence the inertial percept even when the two are perceived as inconsistent.
Affiliation(s)
- Raul Rodriguez: Department of Biomedical Engineering, University of Rochester, Rochester, New York.
- Benjamin T Crane: Department of Biomedical Engineering, Department of Otolaryngology, and Department of Neuroscience, University of Rochester, Rochester, New York.
9. Colonius H, Diederich A. Formal models and quantitative measures of multisensory integration: a selective overview. Eur J Neurosci 2020; 51:1161-1178. DOI: 10.1111/ejn.13813.
Affiliation(s)
- Hans Colonius: Department of Psychology, Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany; Department of Psychological Sciences, Purdue University, West Lafayette, IN, USA.
- Adele Diederich: Department of Psychological Sciences, Purdue University, West Lafayette, IN, USA; Life Sciences and Chemistry, Jacobs University Bremen, Bremen, Germany.
10. de Winkel KN, Kurtz M, Bülthoff HH. Effects of visual stimulus characteristics and individual differences in heading estimation. J Vis 2019; 18:9. PMID: 30347100; DOI: 10.1167/18.11.9.
Abstract
Visual heading estimation is subject to periodic patterns of constant (bias) and variable (noise) error. The nature of the errors, however, appears to differ between studies, showing underestimation in some but overestimation in others. We investigated whether field of view (FOV), the availability of binocular disparity cues, motion profile, and visual scene layout can account for error characteristics, with a potential mediating effect of vection. Twenty participants (12 females) reported heading and rated vection for visual horizontal motion stimuli with headings spanning the full circle, while we systematically varied the above factors. Overall, the results show constant errors away from the fore-aft axis. Error magnitude was affected by FOV, disparity, and scene layout. Variable errors varied with heading angle and depended on scene layout. Higher vection ratings were associated with smaller variable errors. Vection ratings depended on FOV, motion profile, and scene layout, with the highest ratings for a large FOV, a cosine-bell velocity profile, and a ground plane scene rather than a dot cloud scene. Although the factors did affect error magnitude, differences in its direction were observed only between participants. We show that the observations are consistent with prior beliefs that headings align with the cardinal axes, where the attraction of each axis is an idiosyncratic property.
Affiliation(s)
- Ksander N de Winkel: Department of Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany.
- Max Kurtz: Department of Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Department of Human Factors and Engineering Psychology, University of Twente, Enschede, The Netherlands.
- Heinrich H Bülthoff: Department of Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany.
11. Causal inference accounts for heading perception in the presence of object motion. Proc Natl Acad Sci U S A 2019; 116:9060-9065. PMID: 30996126; DOI: 10.1073/pnas.1820373116.
Abstract
The brain infers our spatial orientation and properties of the world from ambiguous and noisy sensory cues. Judging self-motion (heading) in the presence of independently moving objects poses a challenging inference problem because the image motion of an object could be attributed to movement of the object, self-motion, or some combination of the two. We test whether perception of heading and object motion follows predictions of a normative causal inference framework. In a dual-report task, subjects indicated whether an object appeared stationary or moving in the virtual world, while simultaneously judging their heading. Consistent with causal inference predictions, the proportion of object stationarity reports, as well as the accuracy and precision of heading judgments, depended on the speed of object motion. Critically, biases in perceived heading declined when the object was perceived to be moving in the world. Our findings suggest that the brain interprets object motion and self-motion using a causal inference framework.
12. Rodriguez R, Crane BT. Effect of range of heading differences on human visual-inertial heading estimation. Exp Brain Res 2019; 237:1227-1237. PMID: 30847539; DOI: 10.1007/s00221-019-05506-1.
Abstract
Both visual and inertial cues are salient in heading determination. However, optic flow can ambiguously represent self-motion or environmental motion. It is unclear when visual and inertial heading cues are determined to have a common cause and integrated, versus perceived independently. In four experiments, visual and inertial headings were presented simultaneously, with ten subjects reporting visual or inertial headings in separate trial blocks. Experiment 1 examined inertial headings within 30° of straight ahead and visual headings offset by up to 60°. Perception of the inertial heading was shifted in the direction of the visual stimulus by as much as 35° at the 60° offset, while perception of the visual stimulus remained largely uninfluenced. Experiment 2 used a ±140° range of inertial headings with up to 120° of visual offset. This experiment found variable behavior between subjects, with most perceiving the sensory stimuli as shifted towards an intermediate heading but a few perceiving the headings independently. The visual and inertial headings influenced each other even at the largest offsets. Experiments 3 and 4 used inertial headings similar to experiments 1 and 2, respectively, except that subjects reported the direction of environmental motion. Experiment 4 displayed perceptual influences similar to experiment 2, but in experiment 3 percepts were independent. The results suggest that visual and inertial stimuli tend to be perceived as having common causation in most subjects at offsets up to 90°, although with significant variation between individuals. Limiting the range of inertial headings caused the visual heading to dominate perception.
Affiliation(s)
- Raul Rodriguez: Department of Bioengineering, University of Rochester, 601 Elmwood Avenue, Box 629, Rochester, NY, 14642, USA.
- Benjamin T Crane: Department of Bioengineering, Department of Otolaryngology, and Department of Neuroscience, University of Rochester, 601 Elmwood Avenue, Box 629, Rochester, NY, 14642, USA.
13. Hinterecker T, Pretto P, de Winkel KN, Karnath HO, Bülthoff HH, Meilinger T. Body-relative horizontal-vertical anisotropy in human representations of traveled distances. Exp Brain Res 2018; 236:2811-2827. PMID: 30030590; PMCID: PMC6153888; DOI: 10.1007/s00221-018-5337-9.
Abstract
A growing number of studies have investigated anisotropies in representations of horizontal and vertical spaces. In humans, compelling evidence for such anisotropies exists for representations of multi-floor buildings. In contrast, evidence regarding open spaces is indecisive. Our study aimed at further enhancing the understanding of horizontal and vertical spatial representations in open spaces using a simple traveled distance estimation paradigm. Blindfolded participants were moved along various directions in the sagittal plane. Subsequently, participants passively reproduced the traveled distance from memory. Participants performed this task in an upright and in a 30° backward-pitch orientation. The accuracy of distance estimates in the upright orientation showed a horizontal-vertical anisotropy, with higher accuracy along the horizontal axis compared with the vertical axis. The backward-pitch orientation enabled us to investigate whether this anisotropy was body-centered or earth-centered. The accuracy patterns of the upright condition were positively correlated with the body-relative (not the earth-relative) coordinate mapping of the backward-pitch condition, suggesting a body-centered anisotropy. Overall, this is consistent with findings on motion perception. It suggests that the distance estimation sub-process of path integration is subject to horizontal-vertical anisotropy. Based on the previous studies that showed isotropy in open spaces, we speculate that real physical self-movements or categorical versus isometric encoding are crucial factors for (an)isotropies in spatial representations.
Affiliation(s)
- Thomas Hinterecker: Max Planck Institute for Biological Cybernetics, Max-Planck-Ring 8, 72076 Tübingen, Germany; Graduate Training Centre of Neuroscience, Tübingen University, Tübingen, Germany.
- Paolo Pretto: Max Planck Institute for Biological Cybernetics, Max-Planck-Ring 8, 72076 Tübingen, Germany.
- Ksander N de Winkel: Max Planck Institute for Biological Cybernetics, Max-Planck-Ring 8, 72076 Tübingen, Germany.
- Hans-Otto Karnath: Division of Neuropsychology, Center of Neurology, Tübingen University, Tübingen, Germany.
- Heinrich H Bülthoff: Max Planck Institute for Biological Cybernetics, Max-Planck-Ring 8, 72076 Tübingen, Germany.
- Tobias Meilinger: Max Planck Institute for Biological Cybernetics, Max-Planck-Ring 8, 72076 Tübingen, Germany.
14. Acerbi L, Dokka K, Angelaki DE, Ma WJ. Bayesian comparison of explicit and implicit causal inference strategies in multisensory heading perception. PLoS Comput Biol 2018; 14:e1006110. PMID: 30052625; PMCID: PMC6063401; DOI: 10.1371/journal.pcbi.1006110.
Abstract
The precision of multisensory perception improves when cues arising from the same cause are integrated, such as visual and vestibular heading cues for an observer moving through a stationary environment. In order to determine how the cues should be processed, the brain must infer the causal relationship underlying the multisensory cues. In heading perception, however, it is unclear whether observers follow the Bayesian strategy, a simpler non-Bayesian heuristic, or even perform causal inference at all. We developed an efficient and robust computational framework to perform Bayesian model comparison of causal inference strategies, which incorporates a number of alternative assumptions about the observers. With this framework, we investigated whether human observers' performance in an explicit cause attribution task and an implicit heading discrimination task can be modeled as a causal inference process. In the explicit causal inference task, all subjects accounted for cue disparity when reporting judgments of common cause, although not necessarily all in a Bayesian fashion. By contrast, and in agreement with previous findings, data from the heading discrimination task alone could not rule out that several of the same observers were adopting a forced-fusion strategy, whereby cues are integrated regardless of disparity. Only when we combined evidence from both tasks were we able to rule out forced fusion in the heading discrimination task. Crucially, findings were robust across a number of variants of the models and analyses. Our results demonstrate that our proposed computational framework allows researchers to ask complex questions within a rigorous Bayesian framework that accounts for parameter and model uncertainty.
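The Bayesian causal inference strategy compared in this entry can be sketched with the standard formulation: the observer computes the posterior probability that both cues share one cause, weighing how well a single heading explains both measurements against two independent causes. The sketch below uses numerical grid integration over candidate headings; the noise and prior parameters are illustrative assumptions, not the paper's fitted values:

```python
import math

def gauss(x, mu, sigma):
    """Gaussian probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def p_common(x_vis, x_vest, sigma_vis, sigma_vest, sigma_prior=30.0, prior_c1=0.5):
    """Posterior probability that visual and vestibular headings share one cause.
    Marginal likelihoods are computed by grid integration over candidate headings."""
    ds = 0.5
    grid = [s * ds for s in range(-360, 361)]  # headings from -180 to 180 deg
    # C=1: one true heading s generates both measurements
    like_c1 = sum(gauss(x_vis, s, sigma_vis) * gauss(x_vest, s, sigma_vest)
                  * gauss(s, 0, sigma_prior) for s in grid) * ds
    # C=2: two independent causes, each with its own heading
    like_vis = sum(gauss(x_vis, s, sigma_vis) * gauss(s, 0, sigma_prior) for s in grid) * ds
    like_vest = sum(gauss(x_vest, s, sigma_vest) * gauss(s, 0, sigma_prior) for s in grid) * ds
    like_c2 = like_vis * like_vest
    return like_c1 * prior_c1 / (like_c1 * prior_c1 + like_c2 * (1 - prior_c1))

# Small cue disparity -> common cause is likely; large disparity -> unlikely.
print(p_common(5.0, 0.0, 10.0, 10.0))   # high (> 0.5)
print(p_common(90.0, 0.0, 10.0, 10.0))  # low  (< 0.5)
```

A Bayesian observer integrates the cues in proportion to this posterior, whereas the forced-fusion strategy discussed above behaves as if the posterior were always 1, regardless of disparity.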
Collapse
Affiliation(s)
- Luigi Acerbi
- Center for Neural Science, New York University, New York, NY, United States of America
| | - Kalpana Dokka
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, United States of America
| | - Dora E. Angelaki
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, United States of America
| | - Wei Ji Ma
- Center for Neural Science, New York University, New York, NY, United States of America
- Department of Psychology, New York University, New York, NY, United States of America
| |
Collapse
|
15
|
Effect of vibration during visual-inertial integration on human heading perception during eccentric gaze. PLoS One 2018; 13:e0199097. [PMID: 29902253 PMCID: PMC6002115 DOI: 10.1371/journal.pone.0199097] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2017] [Accepted: 05/31/2018] [Indexed: 11/21/2022] Open
Abstract
Heading direction is determined from visual and inertial cues. Visual headings use retinal coordinates while inertial headings use body coordinates. Thus, during eccentric gaze the same heading may be perceived differently by the visual and inertial modalities. Stimulus weights depend on the relative reliability of these stimuli, but previous work suggests that the inertial heading may be given more weight than predicted. Those experiments varied only the visual stimulus reliability, and it is unclear what occurs when inertial reliability is varied. Five human subjects completed a heading discrimination task using 2 s of translation with a peak velocity of 16 cm/s. Eye position was ±25° left/right with visual, inertial, or combined motion. The visual motion coherence was 50%. Inertial stimuli included 6 Hz vertical vibration with 0, 0.10, 0.15, or 0.20 cm amplitude. Subjects reported perceived heading relative to the midline. With an inertial heading, perception was biased 3.6° towards the gaze direction. Visual headings biased perception 9.6° opposite gaze. The inertial threshold without vibration was 4.8°, which increased significantly to 8.8° with vibration, but the amplitude of vibration did not influence reliability. With visual-inertial headings, empirical stimulus weights were calculated from the bias and compared with the optimal weights calculated from the thresholds. In 2 subjects empirical weights were near optimal, while in the remaining 3 subjects the inertial stimuli were weighted more heavily than optimal predictions. On average, the inertial stimulus was weighted more heavily than predicted. These results indicate that multisensory integration may not be a function of stimulus reliability when inertial stimulus reliability is varied.
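The optimal and empirical weights compared in this study follow from standard reliability-weighted averaging. The sketch below is illustrative only (function names and example numbers are ours, not the paper's analysis code); it treats unimodal discrimination thresholds as proxies for cue noise.

```python
def optimal_inertial_weight(threshold_visual, threshold_inertial):
    """Maximum-likelihood (reliability-based) weight for the inertial cue,
    using discrimination thresholds as proxies for cue noise sigma."""
    rel_visual = 1.0 / threshold_visual**2     # reliability = 1 / variance
    rel_inertial = 1.0 / threshold_inertial**2
    return rel_inertial / (rel_visual + rel_inertial)

def empirical_inertial_weight(bias_visual, bias_inertial, bias_combined):
    """Weight implied by the bimodal bias, assuming the combined bias is a
    linear mixture of the two unimodal biases."""
    return (bias_combined - bias_visual) / (bias_inertial - bias_visual)
```

For example, with the gaze-direction biases reported above (inertial +3.6°, visual −9.6°), a bimodal bias near 0° would imply an empirical inertial weight of roughly 0.73; comparing such empirical weights against the threshold-derived optimum is the test of reliability-based integration.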
Collapse
|
16
|
Noel JP, Blanke O, Serino A. From multisensory integration in peripersonal space to bodily self-consciousness: from statistical regularities to statistical inference. Ann N Y Acad Sci 2018; 1426:146-165. [PMID: 29876922 DOI: 10.1111/nyas.13867] [Citation(s) in RCA: 41] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2017] [Revised: 04/24/2018] [Accepted: 05/02/2018] [Indexed: 01/09/2023]
Abstract
Integrating information across sensory systems is a critical step toward building a cohesive representation of the environment and one's body, and, as illustrated by numerous illusions, scaffolds subjective experience of the world and self. In recent years, classic principles of multisensory integration elucidated in the subcortex have been translated into the language of statistical inference understood by the neocortical mantle. Most importantly, a mechanistic systems-level description of multisensory computations via probabilistic population coding and divisive normalization is actively being put forward. In parallel, by describing and understanding bodily illusions, researchers have suggested multisensory integration of bodily inputs within the peripersonal space as a key mechanism in bodily self-consciousness. Importantly, certain aspects of bodily self-consciousness, although still a minority, have recently been cast in the light of modern computational understandings of multisensory integration. In doing so, we argue, the field of bodily self-consciousness may borrow mechanistic descriptions regarding the neural implementation of inference computations outlined by the multisensory field. This computational approach, leveraging the general understanding of multisensory processes, promises to advance scientific comprehension regarding one of the most mysterious questions puzzling humankind, that is, how our brain creates the experience of a self in interaction with the environment.
Collapse
Affiliation(s)
- Jean-Paul Noel
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee
| | - Olaf Blanke
- Laboratory of Cognitive Neuroscience (LNCO), Center for Neuroprosthetics (CNP), Ecole Polytechnique Federale de Lausanne (EPFL), Lausanne, Switzerland
- Department of Neurology, University of Geneva, Geneva, Switzerland
| | - Andrea Serino
- MySpace Lab, Department of Clinical Neuroscience, Centre Hospitalier Universitaire Vaudois (CHUV), University of Lausanne, Lausanne, Switzerland
| |
Collapse
|
17
|
de Winkel KN, Katliar M, Diers D, Bülthoff HH. Causal Inference in the Perception of Verticality. Sci Rep 2018; 8:5483. [PMID: 29615728 PMCID: PMC5882842 DOI: 10.1038/s41598-018-23838-w] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2017] [Accepted: 03/20/2018] [Indexed: 12/01/2022] Open
Abstract
The perceptual upright is thought to be constructed by the central nervous system (CNS) as a vector sum, combining estimates of the upright provided by the visual system and the body's inertial sensors with prior knowledge that upright is usually above the head. Recent findings furthermore show that the weighting of the respective sensory signals is proportional to their reliability, consistent with a Bayesian interpretation of a vector sum (Forced Fusion, FF). However, violations of FF have also been reported, suggesting that the CNS may rely on a single sensory system (Cue Capture, CC), or choose to process sensory signals based on inferred signal causality (Causal Inference, CI). We developed a novel alternative-reality system to manipulate visual and physical tilt independently. We tasked participants (n = 36) to indicate the perceived upright for various (in-)congruent combinations of visual-inertial stimuli, and compared models based on their agreement with the data. The results favor the CI model over FF, although this effect became unambiguous only for large discrepancies (±60°). We conclude that the notion of a vector sum does not provide a comprehensive explanation of the perception of the upright, and that CI offers a better alternative.
Collapse
Affiliation(s)
- Ksander N de Winkel
- Department of Human Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Max-Planck-Ring 8, 72076, Tübingen, Germany.
| | - Mikhail Katliar
- Department of Human Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Max-Planck-Ring 8, 72076, Tübingen, Germany
| | - Daniel Diers
- Department of Human Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Max-Planck-Ring 8, 72076, Tübingen, Germany
| | - Heinrich H Bülthoff
- Department of Human Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Max-Planck-Ring 8, 72076, Tübingen, Germany
| |
Collapse
|
18
|
Hanna M, Fung J, Lamontagne A. Multisensory control of a straight locomotor trajectory. J Vestib Res 2018; 27:17-25. [PMID: 28387689 DOI: 10.3233/ves-170603] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Locomotor steering is contingent upon orienting oneself spatially in the environment. When the head is turned while walking, the optic flow projected onto the retina is a complex pattern comprising a translational and a rotational component. We have created a unique paradigm to simulate different optic flows in a virtual environment. We hypothesized that non-visual (vestibular and somatosensory) cues are required for proper control of a straight trajectory while walking. This research study included 9 healthy young subjects walking in a large physical space (40 × 25 m²) while the virtual environment was viewed in a helmet-mounted display. They were instructed to walk straight in the physical world while being exposed to three conditions: (1) self-initiated active head turns (AHT: 40° right, left, or none); (2) visually simulated head turns (SHT); and (3) visually simulated head turns with no target element (SHT_NT). Conditions 1 and 2 involved an eye-level target which subjects were instructed to fixate, whereas condition 3 was similar to condition 2 but with no target. Identical retinal flow patterns were present in the AHT and SHT conditions, whereas non-visual cues differed in that a head rotation was sensed only in AHT but not in SHT. Body motions were captured by a 12-camera Vicon system. Horizontal orientations of the head and body segments, as well as the trajectory of the body's centre of mass, were analyzed. SHT and SHT_NT yielded similar results. Heading and body segment orientations changed in the direction opposite to the head turns in SHT conditions. Heading remained unchanged across head turn directions in AHT. Results suggest that non-visual information is used in the control of heading while being exposed to changing rotational optic flows. The small magnitude of the changes in SHT conditions suggests that the CNS can re-weight relevant sources of information to minimize heading errors in the presence of sensory conflicts.
Collapse
Affiliation(s)
- Maxim Hanna
- School of Physical and Occupational Therapy, McGill University, Montreal, QC, Canada.,Feil and Oberfeld /CRIR Research Centre, Jewish Rehabilitation Hospital, CISSS-Laval, QC, Canada
| | - Joyce Fung
- School of Physical and Occupational Therapy, McGill University, Montreal, QC, Canada.,Feil and Oberfeld /CRIR Research Centre, Jewish Rehabilitation Hospital, CISSS-Laval, QC, Canada
| | - Anouk Lamontagne
- School of Physical and Occupational Therapy, McGill University, Montreal, QC, Canada.,Feil and Oberfeld /CRIR Research Centre, Jewish Rehabilitation Hospital, CISSS-Laval, QC, Canada
| |
Collapse
|
19
|
Combining random forest with multi-block local binary pattern feature selection for multiclass head pose estimation. PLoS One 2017; 12:e0180792. [PMID: 28715442 PMCID: PMC5513428 DOI: 10.1371/journal.pone.0180792] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2016] [Accepted: 06/21/2017] [Indexed: 12/02/2022] Open
Abstract
A new head pose estimation technique based on Random Forest (RF) and texture features for facial image analysis using a monocular camera is proposed in this paper, with particular attention to how to efficiently combine the random forest and the features. In the proposed technique a randomized tree with useful attributes is trained to improve estimation accuracy and tolerance to occlusions and illumination. Specifically, a number of features including Multi-scale Block Local Binary Pattern (MB-LBP) features are extracted from an image, and random features such as the MB-LBP scale parameters, a block coordinate, and a layer of an image pyramid in the feature pool are used for training the tree. The randomized tree aims to maximize the information gain at each node while random samples traverse the nodes in the tree. To this end, a split function considering the uniform property of the LBP feature is developed to send sample blocks to the left or right child node. The trees are independently trained with random inputs, yet they are grouped to form a random forest so that the results collected from the trees are used to make the final decision. Precisely, we use a Maximum-A-Posteriori criterion in the decision. It is demonstrated with experimental results that the proposed technique provides significantly enhanced classification performance for head pose estimation under various conditions of illumination, pose, expression, and facial occlusion.
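The MB-LBP feature underlying this approach compares the mean intensity of a centre block with its eight neighbouring blocks, producing an 8-bit code. The sketch below is a minimal illustrative version (function name and parameters are our own), not the paper's implementation.

```python
import numpy as np

def mb_lbp_code(image, x, y, block, scale=1):
    """Multi-scale Block LBP at top-left corner (x, y): compare the mean
    intensity of the centre block with its 8 neighbours (8-bit code)."""
    s = block * scale  # effective block size at this scale

    def mean_block(bx, by):
        return image[by:by + s, bx:bx + s].mean()

    center = mean_block(x + s, y + s)
    # Neighbouring blocks, clockwise from the top-left corner
    offsets = [(0, 0), (s, 0), (2 * s, 0), (2 * s, s),
               (2 * s, 2 * s), (s, 2 * s), (0, 2 * s), (0, s)]
    code = 0
    for bit, (dx, dy) in enumerate(offsets):
        if mean_block(x + dx, y + dy) >= center:
            code |= 1 << bit
    return code
```

Randomizing `block`, `scale`, and `(x, y)` per tree node is what yields the random feature pool described above.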
Collapse
|
20
|
Crane BT. Effect of eye position during human visual-vestibular integration of heading perception. J Neurophysiol 2017; 118:1609-1621. [PMID: 28615328 DOI: 10.1152/jn.00037.2017] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2017] [Revised: 06/13/2017] [Accepted: 06/13/2017] [Indexed: 11/22/2022] Open
Abstract
Visual and inertial stimuli provide heading discrimination cues. Integration of these multisensory stimuli has been demonstrated to depend on their relative reliability. However, the reference frame of visual stimuli is eye centered while inertia is head centered, and it remains unclear how these are reconciled with combined stimuli. Seven human subjects completed a heading discrimination task consisting of a 2-s translation with a peak velocity of 16 cm/s. Eye position was varied between 0° and ±25° left/right. Experiments were done with inertial motion, visual motion, or a combined visual-inertial motion. Visual motion coherence varied between 35% and 100%. Subjects reported whether their perceived heading was left or right of the midline in a forced-choice task. With the inertial stimulus the eye position had an effect such that the point of subjective equality (PSE) shifted 4.6 ± 2.4° in the gaze direction. With the visual stimulus the PSE shift was 10.2 ± 2.2° opposite the gaze direction, consistent with retinotopic coordinates. Thus with eccentric eye positions the perceived inertial and visual headings were offset ~15°. During the visual-inertial conditions the PSE varied consistently with the relative reliability of these stimuli such that at low visual coherence the PSE was similar to that of the inertial stimulus and at high coherence it was closer to the visual stimulus. On average, the inertial stimulus was weighted near Bayesian ideal predictions, but there was significant deviation from ideal in individual subjects. These findings support visual and inertial cue integration occurring in independent coordinate systems.NEW & NOTEWORTHY In multiple cortical areas visual heading is represented in retinotopic coordinates while inertial heading is in body coordinates. It remains unclear whether multisensory integration occurs in a common coordinate system. 
The experiments address this using a multisensory integration task with eccentric gaze positions, making the effect of coordinate systems clear. The results indicate that the coordinate systems remain separate up to the perceptual level and that during the multisensory task the perception depends on relative stimulus reliability.
Collapse
Affiliation(s)
- Benjamin T Crane
- Department of Otolaryngology, University of Rochester, Rochester, New York
| |
Collapse
|
21
|
Nesti A, de Winkel K, Bülthoff HH. Accumulation of Inertial Sensory Information in the Perception of Whole Body Yaw Rotation. PLoS One 2017; 12:e0170497. [PMID: 28125681 PMCID: PMC5268484 DOI: 10.1371/journal.pone.0170497] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2016] [Accepted: 12/15/2016] [Indexed: 11/26/2022] Open
Abstract
While moving through the environment, our central nervous system accumulates sensory information over time to provide an estimate of our self-motion, allowing us to complete crucial tasks such as maintaining balance. However, little is known about how the duration of the motion stimulus influences performance in a self-motion discrimination task. Here we study the human ability to discriminate intensities of sinusoidal (0.5 Hz) self-rotations around the vertical axis (yaw) for four different stimulus durations (1, 2, 3 and 5 s) in darkness. In a typical trial, participants experienced two consecutive rotations of equal duration and different peak amplitude, and reported the one perceived as stronger. For each stimulus duration, we determined the smallest detectable change in stimulus intensity (differential threshold) for a reference velocity of 15 deg/s. Results indicate that differential thresholds decrease with stimulus duration and asymptotically converge to a constant, positive value. This suggests that the central nervous system accumulates sensory information on self-motion over time, resulting in improved discrimination performance. Observed trends in differential thresholds are consistent with predictions based on a drift diffusion model with leaky integration of sensory evidence.
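The leaky-accumulation account can be illustrated with the closed-form mean and variance of a leaky (Ornstein-Uhlenbeck) integrator driven by constant drift plus white noise: the accumulated signal-to-noise ratio rises with duration and saturates, so a threshold proportional to its inverse decreases toward a positive asymptote. This is our own sketch, not the authors' model; the time constant and scale factor are made-up illustrative values.

```python
import math

def leaky_accumulated_snr(duration, tau=1.5):
    """Signal-to-noise ratio after `duration` seconds of leaky integration
    of a unit-drift, unit-noise stimulus.
    Mean:     tau * (1 - exp(-t/tau))        (saturates at tau)
    Variance: (tau/2) * (1 - exp(-2t/tau))   (saturates at tau/2)"""
    mean = tau * (1.0 - math.exp(-duration / tau))
    var = 0.5 * tau * (1.0 - math.exp(-2.0 * duration / tau))
    return mean / math.sqrt(var)

def predicted_threshold(duration, k=1.0, tau=1.5):
    """Differential threshold proportional to 1/SNR: decreases with
    stimulus duration and converges to a positive asymptote."""
    return k / leaky_accumulated_snr(duration, tau)
```

Evaluating `predicted_threshold` at the four durations used in the study (1, 2, 3, 5 s) reproduces the qualitative pattern reported: a decreasing threshold that flattens out rather than falling to zero.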
Collapse
Affiliation(s)
- Alessandro Nesti
- Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
| | - Ksander de Winkel
- Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
| | - Heinrich H. Bülthoff
- Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
| |
Collapse
|
22
|
de Winkel KN, Katliar M, Bülthoff HH. Causal Inference in Multisensory Heading Estimation. PLoS One 2017; 12:e0169676. [PMID: 28060957 PMCID: PMC5218471 DOI: 10.1371/journal.pone.0169676] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2016] [Accepted: 12/20/2016] [Indexed: 11/30/2022] Open
Abstract
A large body of research shows that the Central Nervous System (CNS) integrates multisensory information. However, this strategy should only apply to multisensory signals that have a common cause; independent signals should be segregated. Causal Inference (CI) models account for this notion. Surprisingly, previous findings suggested that visual and inertial cues on heading of self-motion are integrated regardless of discrepancy. We hypothesized that CI does occur, but that characteristics of the motion profiles affect multisensory processing. Participants estimated heading of visual-inertial motion stimuli with several different motion profiles and a range of intersensory discrepancies. The results support the hypothesis that judgments of signal causality are included in the heading estimation process. Moreover, the data suggest a decreasing tolerance for discrepancies and an increasing reliance on visual cues for longer duration motions.
Collapse
Affiliation(s)
- Ksander N. de Winkel
- Department of Human Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Baden-Württemberg, Germany
| | - Mikhail Katliar
- Department of Human Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Baden-Württemberg, Germany
| | - Heinrich H. Bülthoff
- Department of Human Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Baden-Württemberg, Germany
| |
Collapse
|
23
|
Goeke CM, Planera S, Finger H, König P. Bayesian Alternation during Tactile Augmentation. Front Behav Neurosci 2016; 10:187. [PMID: 27774057 PMCID: PMC5054009 DOI: 10.3389/fnbeh.2016.00187] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2016] [Accepted: 09/22/2016] [Indexed: 11/25/2022] Open
Abstract
A large number of studies suggest that the integration of multisensory signals by humans is well described by Bayesian principles. However, there are very few reports about cue combination between a native and an augmented sense. In particular, we asked whether adult participants are able to integrate an augmented sensory cue with existing native sensory information. For the purpose of this study, we built a tactile augmentation device and compared different hypotheses of how untrained adult participants combine information from a native and an augmented sense. In a two-interval forced-choice (2IFC) task, while subjects were blindfolded and seated on a rotating platform, our sensory augmentation device translated information on whole-body yaw rotation into tactile stimulation. Three conditions were realized: tactile stimulation only (augmented condition), rotation only (native condition), and both augmented and native information (bimodal condition). Participants had to choose the one of two consecutive rotations with the higher angular rotation. For the analysis, we fitted the participants' responses with a probit model and calculated the just noticeable difference (JND). Then, we compared several models for predicting bimodal from unimodal responses. An objective Bayesian alternation model yielded a better prediction (χ²_red = 1.67) than the Bayesian integration model (χ²_red = 4.34). A non-Bayesian winner-takes-all (WTA) model, which used either only native or only augmented values per subject for prediction, showed slightly higher accuracy (χ²_red = 1.64). However, the performance of the Bayesian alternation model could be substantially improved (χ²_red = 1.09) by utilizing subjective weights obtained from a questionnaire. As a result, the subjective Bayesian alternation model predicted bimodal performance most accurately among all tested models.
These results suggest that information from augmented and existing sensory modalities in untrained humans is combined via a subjective Bayesian alternation process. We therefore conclude that behavior in our bimodal condition is explained better by top-down subjective weighting than by bottom-up weighting based upon objective cue reliability.
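The model comparison above rests on predicting the bimodal psychometric function from the unimodal probit fits. A minimal sketch of the integration and alternation predictions (our own illustrative functions, not the study's analysis code):

```python
import math

def phi(x):
    """Standard normal CDF (probit link)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def integration_sigma(sigma_native, sigma_augmented):
    """Bimodal noise predicted by mandatory Bayesian integration:
    always smaller than either unimodal sigma."""
    return math.sqrt((sigma_native**2 * sigma_augmented**2)
                     / (sigma_native**2 + sigma_augmented**2))

def alternation_prob_correct(delta, sigma_native, sigma_augmented, w=0.5):
    """Proportion 'stronger' responses under cue alternation: on each
    trial the observer consults one cue, native with probability w."""
    return (w * phi(delta / sigma_native)
            + (1 - w) * phi(delta / sigma_augmented))
```

Under integration the predicted bimodal JND shrinks below both unimodal JNDs, whereas under alternation the predicted psychometric function is a mixture of the two unimodal ones; fitting the observed bimodal responses against each prediction yields the reduced-χ² comparison reported above.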
Collapse
Affiliation(s)
- Caspar M. Goeke
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
| | - Serena Planera
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
| | - Holger Finger
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
| | - Peter König
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
| |
Collapse
|
24
|
Genzel D, Firzlaff U, Wiegrebe L, MacNeilage PR. Dependence of auditory spatial updating on vestibular, proprioceptive, and efference copy signals. J Neurophysiol 2016; 116:765-75. [PMID: 27169504 DOI: 10.1152/jn.00052.2016] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2016] [Accepted: 05/09/2016] [Indexed: 11/22/2022] Open
Abstract
Humans localize sounds by comparing inputs across the two ears, resulting in a head-centered representation of sound-source position. When the head moves, information about head movement must be combined with the head-centered estimate to correctly update the world-centered sound-source position. Spatial updating has been extensively studied in the visual system, but less is known about how head movement signals interact with binaural information during auditory spatial updating. In the current experiments, listeners compared the world-centered azimuthal position of two sound sources presented before and after a head rotation that depended on condition. In the active condition, subjects rotated their head by ∼35° to the left or right, following a pretrained trajectory. In the passive condition, subjects were rotated along the same trajectory in a rotating chair. In the cancellation condition, subjects rotated their head as in the active condition, but the chair was counter-rotated on the basis of head-tracking data such that the head effectively remained fixed in space while the body rotated beneath it. Subjects updated most accurately in the passive condition but erred in the active and cancellation conditions. Performance is interpreted as reflecting the accuracy of perceived head rotation across conditions, which is modeled as a linear combination of proprioceptive/efference copy signals and vestibular signals. Resulting weights suggest that auditory updating is dominated by vestibular signals but with significant contributions from proprioception/efference copy. Overall, results shed light on the interplay of sensory and motor signals that determine the accuracy of auditory spatial updating.
Collapse
Affiliation(s)
- Daria Genzel
- Department Biology II, Ludwig-Maximilian University of Munich, Planegg-Martinsried, Germany; Bernstein Center for Computational Neuroscience Munich, Planegg-Martinsried, Germany
| | - Uwe Firzlaff
- Bernstein Center for Computational Neuroscience Munich, Planegg-Martinsried, Germany; Chair of Zoology, Technische Universität München, Freising-Weihenstephan, Germany; and
| | - Lutz Wiegrebe
- Department Biology II, Ludwig-Maximilian University of Munich, Planegg-Martinsried, Germany; Bernstein Center for Computational Neuroscience Munich, Planegg-Martinsried, Germany
| | - Paul R MacNeilage
- Bernstein Center for Computational Neuroscience Munich, Planegg-Martinsried, Germany; Deutsches Schwindel- und Gleichgewichtszentrum, University Hospital of Munich, Munich, Germany
| |
Collapse
|
25
|
Nash CJ, Cole DJ, Bigler RS. A review of human sensory dynamics for application to models of driver steering and speed control. BIOLOGICAL CYBERNETICS 2016; 110:91-116. [PMID: 27086133 PMCID: PMC4903114 DOI: 10.1007/s00422-016-0682-x] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/13/2015] [Accepted: 02/22/2016] [Indexed: 06/05/2023]
Abstract
In comparison with today's extensive knowledge of vehicle dynamics, the role of the driver in the driver-vehicle system is still relatively poorly understood. A large variety of driver models exist for various applications; however, few of them take account of the driver's sensory dynamics, and those that do are limited in their scope and accuracy. A review of the literature has been carried out to consolidate information from previous studies that may be useful when incorporating human sensory systems into the design of a driver model. This includes information on sensory dynamics, delays, thresholds, and the integration of multiple sensory stimuli. This review should provide a basis for further study into sensory perception during driving.
Collapse
Affiliation(s)
- Christopher J. Nash
- Cambridge University Engineering Department, Trumpington Street, Cambridge, CB2 1PZ UK
| | - David J. Cole
- Cambridge University Engineering Department, Trumpington Street, Cambridge, CB2 1PZ UK
| | - Robert S. Bigler
- Cambridge University Engineering Department, Trumpington Street, Cambridge, CB2 1PZ UK
| |
Collapse
|
26
|
Greenlee M, Frank S, Kaliuzhna M, Blanke O, Bremmer F, Churan J, Cuturi LF, MacNeilage P, Smith A. Multisensory Integration in Self Motion Perception. Multisens Res 2016. [DOI: 10.1163/22134808-00002527] [Citation(s) in RCA: 43] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
Self motion perception involves the integration of visual, vestibular, somatosensory and motor signals. This article reviews the findings from single unit electrophysiology, functional and structural magnetic resonance imaging and psychophysics to present an update on how the human and non-human primate brain integrates multisensory information to estimate one’s position and motion in space. The results indicate that there is a network of regions in the non-human primate and human brain that processes self motion cues from the different sense modalities.
Collapse
Affiliation(s)
- Mark W. Greenlee
- Institute of Experimental Psychology, University of Regensburg, Regensburg, Germany
| | - Sebastian M. Frank
- Institute of Experimental Psychology, University of Regensburg, Regensburg, Germany
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
| | - Mariia Kaliuzhna
- Center for Neuroprosthetics, Laboratory of Cognitive Neuroscience, Ecole Polytechnique Fédérale de Lausanne, EPFL, Switzerland
| | - Olaf Blanke
- Center for Neuroprosthetics, Laboratory of Cognitive Neuroscience, Ecole Polytechnique Fédérale de Lausanne, EPFL, Switzerland
| | - Frank Bremmer
- Department of Neurophysics, University of Marburg, Marburg, Germany
| | - Jan Churan
- Department of Neurophysics, University of Marburg, Marburg, Germany
| | - Luigi F. Cuturi
- German Center for Vertigo, University Hospital of Munich, LMU, Munich, Germany
| | - Paul R. MacNeilage
- German Center for Vertigo, University Hospital of Munich, LMU, Munich, Germany
| | - Andrew T. Smith
- Department of Psychology, Royal Holloway, University of London, UK
| |
Collapse
|