1. Xu SZ, Chen FXY, Gong R, Zhang FL, Zhang SH. BiRD: Using Bidirectional Rotation Gain Differences to Redirect Users during Back-and-forth Head Turns in Walking. IEEE Trans Vis Comput Graph 2024;30:2693-2702. PMID: 38437103. DOI: 10.1109/tvcg.2024.3372094.
Abstract
Redirected walking (RDW) lets users navigate expansive virtual spaces despite the constraints of a limited physical space. It exploits discrepancies between human visual and proprioceptive sensations, known as gains, to remap the virtual environment onto the physical one. In this paper, we explore how to apply rotation gain while the user is walking. We propose applying a rotation gain so that the user rotates through a different angle when reciprocating a previous head rotation, thereby steering the user toward a desired direction. To apply such gains imperceptibly based on this Bidirectional Rotation gain Difference (BiRD), we conduct both measurement and verification experiments on the detection thresholds of rotation gain for reciprocating head rotations during walking. Unlike previous rotation gains, which are measured while users turn in place (standing or sitting), BiRD is measured while users walk. Our study offers a critical assessment of the acceptable range of rotational mapping differences across rotational orientations during walking, contributing an effective tool for redirecting users in virtual environments.
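The bidirectional-gain idea can be sketched in a few lines: applying a different rotation gain on the outbound and return phases of a back-and-forth head turn leaves the user's virtual heading offset from the physical one. The gain values below are illustrative placeholders, not the detection thresholds measured in the paper.

```python
def redirect_yaw(delta_yaw_deg, gain_outbound=1.0, gain_return=1.15):
    """Map a physical yaw increment (degrees) to a virtual one.

    A different gain is applied depending on rotation direction, so a
    reciprocating head turn ends at a shifted virtual heading even
    though the physical head is back where it started. The gain values
    are illustrative assumptions, not measured thresholds.
    """
    gain = gain_outbound if delta_yaw_deg >= 0 else gain_return
    return gain * delta_yaw_deg

# A 30-degree rightward turn followed by a 30-degree leftward return:
net_virtual = redirect_yaw(30.0) + redirect_yaw(-30.0)  # -4.5 degrees
```

Accumulating such residuals over repeated back-and-forth head turns is what steers the user.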

2. Halow SJ, Hamilton A, Folmer E, MacNeilage PR. Impaired stationarity perception is associated with increased virtual reality sickness. J Vis 2023;23:7. PMID: 38127329. PMCID: PMC10750839. DOI: 10.1167/jov.23.14.7.
Abstract
Stationarity perception refers to the ability to accurately perceive the surrounding visual environment as world-fixed during self-motion. Perception of stationarity depends on mechanisms that evaluate the congruence between retinal/oculomotor signals and head movement signals. In a series of psychophysical experiments, we systematically varied the congruence between retinal/oculomotor and head movement signals to find the range of visual gains that is compatible with perception of a stationary environment. On each trial, human subjects wearing a head-mounted display execute a yaw head movement and report whether the visual gain was perceived to be too slow or fast. A psychometric fit to the data across trials reveals the visual gain most compatible with stationarity (a measure of accuracy) and the sensitivity to visual gain manipulation (a measure of precision). Across experiments, we varied 1) the spatial frequency of the visual stimulus, 2) the retinal location of the visual stimulus (central vs. peripheral), and 3) fixation behavior (scene-fixed vs. head-fixed). Stationarity perception is most precise and accurate during scene-fixed fixation. Effects of spatial frequency and retinal stimulus location become evident during head-fixed fixation, when retinal image motion is increased. Virtual Reality sickness assessed using the Simulator Sickness Questionnaire covaries with perceptual performance. Decreased accuracy is associated with an increase in the nausea subscore, while decreased precision is associated with an increase in the oculomotor and disorientation subscores.
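The psychometric procedure described here can be sketched as follows: binary too-fast/too-slow reports are fitted with a cumulative (here logistic) function whose midpoint estimates the gain most compatible with stationarity (accuracy) and whose slope reflects precision. This is a toy grid-search fit under assumed parameter ranges, not the authors' analysis code.

```python
import math
import random

def p_too_fast(gain, pse, slope):
    """Cumulative logistic: probability of reporting the scene 'too fast'."""
    return 1.0 / (1.0 + math.exp(-(gain - pse) / slope))

def fit_psychometric(gains, responses):
    """Maximum-likelihood grid search over (pse, slope).

    pse   = visual gain most compatible with stationarity (accuracy)
    slope = shallowness of the curve, i.e. inverse precision
    A toy fit for illustration; real studies use dedicated fitting tools.
    """
    best, best_ll = None, -math.inf
    for pse in (0.5 + 0.01 * i for i in range(101)):        # 0.50 .. 1.50
        for slope in (0.02 + 0.01 * j for j in range(20)):  # 0.02 .. 0.21
            ll = 0.0
            for g, r in zip(gains, responses):
                p = min(max(p_too_fast(g, pse, slope), 1e-9), 1 - 1e-9)
                ll += math.log(p if r else 1.0 - p)
            if ll > best_ll:
                best, best_ll = (pse, slope), ll
    return best

# Simulated observer whose true point of subjective stationarity is 1.0:
random.seed(1)
gains = [0.6 + 0.08 * (i % 11) for i in range(220)]
responses = [random.random() < p_too_fast(g, 1.0, 0.08) for g in gains]
pse_hat, slope_hat = fit_psychometric(gains, responses)  # pse_hat near 1.0
```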
Affiliation(s)
- Allie Hamilton
- University of Nevada, Reno, Psychology, Reno, Nevada, USA
- Eelke Folmer
- University of Nevada, Reno, Computer Science, Reno, Nevada, USA

3. Bayer M, Zimmermann E. Serial dependencies in visual stability during self-motion. J Neurophysiol 2023;130:447-457. PMID: 37465870. DOI: 10.1152/jn.00157.2023.
Abstract
Every time we move our head, the brain must decide whether the displacement of the visual scene results from external or self-produced motion. Gaze shifts generate the biggest and most frequent disturbances of vision. Visual stability during gaze shifts is necessary both for dissociating self-produced from external motion and for retaining bodily balance. Here, we asked participants to perform an eye-head gaze shift to a target that was briefly presented in a head-mounted display. We manipulated the velocity of the scene displacement across trials such that the background moved either too fast or too slow relative to the head movement speed. Participants were required to report whether they perceived the gaze-contingent visual motion as faster or slower than they would expect from their head movement velocity. We found that the point of visual stability was attracted to the velocity presented in the previous trial. Our data reveal that serial dependencies in visual stability calibrate the mapping between motor-related signals coding head movement velocity and visual motion velocity. This process likely aids visual stability, as the accuracy of this mapping is crucial during self-motion. NEW & NOTEWORTHY: We report that visual stability during self-motion is maintained by serial dependencies between the current and the previously experienced gaze-contingent visual velocity during a head movement. The gaze-contingent scene-displacement velocity that appears normal to us thus depends on what we have registered in the recent history of gaze shifts. Serial dependencies provide an efficient means of maintaining visual stability during self-motion.
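The serial dependence reported here amounts to a trial-by-trial recalibration: the point of subjective stability (PSS) is pulled toward the gaze-contingent velocity experienced on the previous trial. A minimal sketch, with an attraction weight that is an illustrative assumption rather than an estimate from the paper:

```python
def update_pss(pss, previous_gain, attraction=0.2):
    """Pull the point of subjective stability toward the gain just seen.

    `attraction` (0..1) sets how strongly a single trial recalibrates
    the mapping; the value here is an illustrative assumption.
    """
    return pss + attraction * (previous_gain - pss)

pss = 1.0                      # veridical mapping initially feels stable
for gain in (1.3, 1.3, 1.3):   # a run of too-fast scene displacements
    pss = update_pss(pss, gain)
# pss has drifted toward 1.3, so a veridical scene now appears too slow
```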
Affiliation(s)
- Manuel Bayer
- Institute for Experimental Psychology, Heinrich-Heine-University Düsseldorf, Germany
- Eckart Zimmermann
- Institute for Experimental Psychology, Heinrich-Heine-University Düsseldorf, Germany

4. Gabriel GA, Harris LR, Gnanasegaram JJ, Cushing SL, Gordon KA, Haycock BC, Campos JL. Age-related changes to vestibular heave and pitch perception and associations with postural control. Sci Rep 2022;12:6426. PMID: 35440744. PMCID: PMC9018785. DOI: 10.1038/s41598-022-09807-4.
Abstract
Falls are a common cause of injury in older adults (OAs), and age-related declines across the sensory systems are associated with increased falls risk. The vestibular system is particularly important for maintaining balance and supporting safe mobility, and aging has been associated with declines in vestibular end-organ functioning. However, few studies have examined potential age-related differences in vestibular perceptual sensitivities or their association with postural stability. Here we used an adaptive-staircase procedure to measure detection and discrimination thresholds in 19 healthy OAs and 18 healthy younger adults (YAs), by presenting participants with passive heave (linear up-and-down translations) and pitch (forward-backward tilt rotations) movements on a motion-platform in the dark. We also examined participants' postural stability under various standing-balance conditions. Associations among these postural measures and vestibular perceptual thresholds were further examined. Ultimately, OAs showed larger heave and pitch detection thresholds compared to YAs, and larger perceptual thresholds were associated with greater postural sway, but only in OAs. Overall, these results suggest that vestibular perceptual sensitivity declines with older age and that such declines are associated with poorer postural stability. Future studies could consider the potential applicability of these results in the development of screening tools for falls prevention in OAs.
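An adaptive staircase of the kind used here can be sketched as a 2-down/1-up rule, which converges on the ~70.7% detection point. The simulated observer and all parameter values below are illustrative stand-ins, not the study's exact procedure.

```python
import math
import random

def staircase_threshold(p_detect, start=4.0, step=0.5, n_reversals=8):
    """2-down/1-up adaptive staircase; returns the mean reversal intensity.

    `p_detect(intensity)` simulates a participant reporting whether a
    passive heave or pitch movement was felt. Two consecutive detections
    lower the intensity; a single miss raises it.
    """
    intensity, streak, direction = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if random.random() < p_detect(intensity):   # movement detected
            streak += 1
            if streak == 2:                         # harder after 2 hits
                streak = 0
                if direction == +1:
                    reversals.append(intensity)
                direction = -1
                intensity = max(step, intensity - step)
        else:                                       # easier after a miss
            streak = 0
            if direction == -1:
                reversals.append(intensity)
            direction = +1
            intensity += step
    return sum(reversals) / len(reversals)

random.seed(0)
# Toy observer: logistic detection function with a true threshold near 2.0
def observer(x):
    return 1.0 / (1.0 + math.exp(-(x - 2.0) / 0.3))

estimate = staircase_threshold(observer)   # lands in the vicinity of 2
```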
Affiliation(s)
- Grace A Gabriel
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada; Department of Psychology, University of Toronto, 500 University Avenue, Toronto, ON, M5G 2A2, Canada
- Laurence R Harris
- Department of Psychology and Centre for Vision Research, York University, Toronto, ON, Canada
- Joshua J Gnanasegaram
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Sharon L Cushing
- Department of Otolaryngology-Head and Neck Surgery, Hospital for Sick Children, Toronto, ON, Canada; Department of Otolaryngology-Head and Neck Surgery, University of Toronto, Toronto, ON, Canada; Archie's Cochlear Implant Laboratory, Hospital for Sick Children, Toronto, ON, Canada
- Karen A Gordon
- Department of Otolaryngology-Head and Neck Surgery, University of Toronto, Toronto, ON, Canada; Archie's Cochlear Implant Laboratory, Hospital for Sick Children, Toronto, ON, Canada
- Bruce C Haycock
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada; University of Toronto Institute for Aerospace Studies, Toronto, ON, Canada
- Jennifer L Campos
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada; Department of Psychology, University of Toronto, 500 University Avenue, Toronto, ON, M5G 2A2, Canada

5. Chung W, Barnett-Cowan M. Influence of Sensory Conflict on Perceived Timing of Passive Rotation in Virtual Reality. Multisens Res 2022;35:1-23. PMID: 35477696. DOI: 10.1163/22134808-bja10074.
Abstract
Integration of incoming sensory signals from multiple modalities is central to self-motion perception. With the emergence of consumer virtual reality (VR), it is becoming increasingly common to experience a mismatch in sensory feedback about motion when using immersive displays. In this study, we explored whether introducing discrepancies between vestibular and visual motion influences the perceived timing of self-motion. Participants performed a series of temporal-order judgements between an auditory tone and a passive whole-body rotation on a motion platform, accompanied by visual feedback from a virtual environment presented through a head-mounted display. Sensory conflict was induced by altering the speed and direction with which the visual scene updated relative to the observer's physical rotation. There were no differences in the perceived timing of the rotation without vision, with congruent visual feedback, or when the visual motion updated more slowly. However, the perceived timing was significantly further from zero when the direction of the visual motion was incongruent with the rotation. These findings demonstrate a potential interaction between visual and vestibular signals in the temporal perception of self-motion. Additionally, we recorded cybersickness ratings and found that sickness severity was significantly greater when visual motion was present and incongruent with the physical motion. This supports previous research on cybersickness and sensory conflict theory, in which a mismatch between visual and vestibular signals increases the likelihood of sickness symptoms.
Affiliation(s)
- William Chung
- Department of Kinesiology, University of Waterloo, Waterloo, Ontario, Canada

6. Niehof N, Perdreau F, Koppen M, Medendorp WP. Time course of the subjective visual vertical during sustained optokinetic and galvanic vestibular stimulation. J Neurophysiol 2019;122:788-796. DOI: 10.1152/jn.00083.2019.
Abstract
The brain is thought to use rotation cues from both the vestibular and optokinetic systems to disambiguate the gravito-inertial force, as measured by the otoliths, into components of linear acceleration and gravity direction relative to the head. Hence, when the head is stationary and upright, an erroneous percept of tilt arises during optokinetic roll stimulation (OKS) or when an artificial canal-like signal is delivered by means of galvanic vestibular stimulation (GVS). It is still unknown how this percept is affected by the combined presence of both cues or how it develops over time. Here, we measured the time course of the subjective visual vertical (SVV), as a proxy of perceived head tilt, in human participants (n = 16) exposed to constant-current GVS (1 and 2 mA, cathodal and anodal), constant-velocity OKS (30°/s clockwise and counterclockwise), or their combination. In each trial, participants continuously adjusted the orientation of a visual line, which drifted randomly, to Earth vertical. We found that both GVS and OKS evoke an exponential time course of the SVV. These time courses have different amplitudes and different time constants, 4 and 7 s respectively, and combine linearly when the two stimulations are presented together. We discuss these results in the framework of observer theory and Bayesian state estimation. NEW & NOTEWORTHY: While it is known that both roll optokinetic stimuli and galvanic vestibular stimulation affect the percept of vertical, how their effects combine and develop over time has remained unclear. Here we show that the two effects combine linearly but are characterized by different time constants, which we discuss from a probabilistic perspective.
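The reported dynamics can be written as a linear combination of two saturating exponentials with the ~4 s (GVS) and ~7 s (OKS) time constants; the amplitudes below are illustrative placeholders, not the measured values.

```python
import math

def svv_tilt(t, a_gvs=6.0, tau_gvs=4.0, a_oks=9.0, tau_oks=7.0):
    """SVV error (degrees) at time t (seconds) under combined GVS + OKS.

    Each stimulus contributes a saturating exponential time course;
    under the linearity found in the study, the combined response is
    their sum. Amplitudes are illustrative assumptions; time constants
    follow the reported ~4 s (GVS) and ~7 s (OKS).
    """
    gvs = a_gvs * (1.0 - math.exp(-t / tau_gvs))
    oks = a_oks * (1.0 - math.exp(-t / tau_oks))
    return gvs + oks
```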
Affiliation(s)
- Nynke Niehof
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands
- Florian Perdreau
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands
- Mathieu Koppen
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands
- W. Pieter Medendorp
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands

7. Gallagher M, Dowsett R, Ferrè ER. Vection in virtual reality modulates vestibular-evoked myogenic potentials. Eur J Neurosci 2019;50:3557-3565. PMID: 31233640. DOI: 10.1111/ejn.14499.
Abstract
The popularity of virtual reality (VR) has increased rapidly in recent years. While significant technological advancements are apparent, a troublesome problem with VR is that between 20% and 80% of users will experience unpleasant side effects such as nausea, disorientation, blurred vision and headaches, a malady known as cybersickness. Cybersickness may be caused by a conflict between sensory signals for self-motion: while vision signals that the user is moving in a certain direction with a certain acceleration, the vestibular organs provide no corroborating information. To resolve the sensory conflict, vestibular cues may be down-weighted, leading to an alteration of how the brain interprets actual vestibular information. This may account for the frequently reported after-effects of VR exposure. Here, we investigated whether exposure to vection in VR modulates vestibular processing. We measured vestibular-evoked myogenic potentials (VEMPs) during brief immersion in a vection-inducing VR environment presented via a head-mounted display. We found changes in the VEMP asymmetry ratio, with a substantial increase in VEMP amplitude recorded on the left sternocleidomastoid muscle following just one minute of exposure to vection in VR. Our results suggest that exposure to vection in VR modulates vestibular processing, which may explain common after-effects of VR.
Affiliation(s)
- Maria Gallagher
- Department of Psychology, Royal Holloway University of London, Egham, UK
- Ross Dowsett
- Department of Psychology, Royal Holloway University of London, Egham, UK

8. Moroz M, Garzorz I, Folmer E, MacNeilage P. Sensitivity to Visual Speed Modulation in Head-Mounted Displays Depends on Fixation. Displays 2019;58:12-19. PMID: 32863474. PMCID: PMC7454227. DOI: 10.1016/j.displa.2018.09.001.
Abstract
A primary cause of simulator sickness in head-mounted displays (HMDs) is conflict between the visual scene displayed to the user and the visual scene expected by the brain when the user's head is in motion. It is useful to measure perceptual sensitivity to visual speed modulation in HMDs because conditions that minimize this sensitivity may prove less likely to elicit simulator sickness. In prior research, we measured sensitivity to visual gain modulation during slow, passive, full-body yaw rotations and observed that sensitivity was reduced when subjects fixated a head-fixed target compared with when they fixated a scene-fixed target. In the current study, we investigated whether this pattern of results persists when (1) movements are faster, active head turns, and (2) visual stimuli are presented on an HMD rather than on a monitor. Subjects wore an Oculus Rift CV1 HMD and viewed a 3D scene of white points on a black background. On each trial, subjects moved their head from a central position to face a 15° eccentric target. During the head movement they fixated a point that was either head-fixed or scene-fixed, depending on condition. They then reported whether the visual scene motion was too fast or too slow. Visual speed on subsequent trials was modulated according to a staircase procedure to find the speed increment that was just noticeable. Sensitivity to speed modulation during active head movement was reduced during head-fixed fixation, similar to what we observed during passive whole-body rotation. We conclude that fixation of a head-fixed target is an effective way to reduce sensitivity to visual speed modulation in HMDs, and may also be an effective strategy to reduce susceptibility to simulator sickness.
Affiliation(s)
- Matthew Moroz
- Department of Psychology, University of Nevada, Reno
- Isabelle Garzorz
- Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München
- Eelke Folmer
- Department of Computer Science, University of Nevada, Reno

9. Garzorz IT, MacNeilage PR. Towards dynamic modeling of visual-vestibular conflict detection. Prog Brain Res 2019;248:277-284. PMID: 31239138. PMCID: PMC7162554. DOI: 10.1016/bs.pbr.2019.03.018.
Abstract
Visual-vestibular mismatch is a common occurrence, with causes ranging from vehicular travel, to vestibular dysfunction, to virtual reality displays. Behavioral and physiological consequences of this mismatch include adaptation of reflexive eye movements, oscillopsia, vertigo, and nausea. Despite this significance, we still do not have a good understanding of how the nervous system evaluates visual-vestibular conflict. Here we review research that quantifies perceptual sensitivity to visual-vestibular conflict and factors that mediate this sensitivity, such as noise on visual and vestibular sensory estimates. We emphasize that dynamic modeling methods are necessary to investigate how the nervous system monitors conflict between time-varying visual and vestibular signals, and we present a simple example of a drift-diffusion model for visual-vestibular conflict detection. The model makes predictions for detection of conflict arising from changes in both visual gain and latency. We conclude with discussion of topics for future research.
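The chapter's drift-diffusion idea can be sketched as evidence accumulation on the momentary visual-vestibular mismatch, with detection declared when the accumulated evidence crosses a bound. Everything below (velocity profile, noise level, bound) is an illustrative assumption, not a fitted model.

```python
import math
import random

def conflict_detected(visual_gain, threshold=40.0, noise=1.0,
                      n_steps=200, seed=0):
    """Toy drift-diffusion detector for visual-vestibular conflict.

    Momentary evidence is the unsigned mismatch between the vestibular
    head-velocity estimate and the visually signalled velocity (head
    velocity scaled by `visual_gain`), corrected for the mismatch
    expected from sensory noise alone. All parameters are illustrative.
    """
    rng = random.Random(seed)
    baseline = noise * math.sqrt(2.0 / math.pi)   # E|N(0, noise)|
    evidence = 0.0
    for step in range(n_steps):
        # One sinusoidal head movement, peak velocity 30 deg/s
        head_vel = 30.0 * math.sin(2.0 * math.pi * step / n_steps)
        mismatch = (visual_gain - 1.0) * head_vel
        evidence += abs(mismatch + rng.gauss(0.0, noise)) - baseline
        if evidence > threshold:
            return True       # conflict declared before the movement ends
    return False
```

With a veridical gain of 1.0 the corrected evidence hovers near zero and the bound is rarely reached; a gain discrepancy adds steady drift and triggers detection.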
Affiliation(s)
- Isabelle T Garzorz
- German Center for Vertigo and Balance Disorders, University Hospital of Munich, Munich, Germany; Graduate School of Systemic Neurosciences, Ludwig-Maximilian University, Munich, Germany.
- Paul R MacNeilage
- Department of Psychology, Cognitive and Brain Sciences, University of Nevada, Reno, NV, United States

10. Garzorz IT, Freeman TCA, Ernst MO, MacNeilage PR. Insufficient compensation for self-motion during perception of object speed: The vestibular Aubert-Fleischl phenomenon. J Vis 2018;18:9. DOI: 10.1167/18.13.9.
Affiliation(s)
- Isabelle T. Garzorz
- German Center for Vertigo and Balance Disorders (DSGZ), University Hospital of Munich, Ludwig Maximilian University, Munich, Germany
- Graduate School of Systemic Neurosciences (GSN), Ludwig Maximilian University, Planegg-Martinsried, Germany
- Marc O. Ernst
- Applied Cognitive Psychology, Faculty for Computer Science, Engineering, and Psychology, Ulm University, Ulm, Germany
- Paul R. MacNeilage
- German Center for Vertigo and Balance Disorders (DSGZ), University Hospital of Munich, Ludwig Maximilian University, Munich, Germany
- Present address: Department of Psychology, Cognitive and Brain Sciences, University of Nevada, Reno, NV, USA

11. Rigutti S, Stragà M, Jez M, Baldassi G, Carnaghi A, Miceu P, Fantoni C. Don't worry, be active: how to facilitate the detection of errors in immersive virtual environments. PeerJ 2018;6:e5844. PMID: 30397547. PMCID: PMC6211266. DOI: 10.7717/peerj.5844.
Abstract
The current research studies the link between the type of vision experienced in a collaborative immersive virtual environment (active vs. multiple passive), the type of error one looks for during a cooperative multi-user exploration of a design project (affordance vs. perceptual violations), and the type of setting in which the users perform (field in Experiment 1 vs. laboratory in Experiment 2). The relevance of this link is backed by the lack of conclusive evidence for an active vs. passive vision advantage in cooperative search tasks within software based on immersive virtual reality (IVR). Using a yoking paradigm based on the mixed usage of simultaneous active and multiple passive viewings, we found that the likelihood of error detection in a complex 3D environment showed an active vs. multi-passive viewing advantage depending on: (1) the degree of knowledge dependence of the type of error the passive/active observers were looking for (low for perceptual violations vs. high for affordance violations), as the advantage tended to manifest itself irrespective of the setting for affordance violations but not for perceptual violations; and (2) the degree of social desirability possibly induced by the setting in which the task was performed, as the advantage occurred irrespective of the type of error in the laboratory (Experiment 2) but not in the field (Experiment 1). The results are relevant to the future development of cooperative IVR software used to support design review. A multi-user design review experience in which designers, engineers and end-users all cooperate actively within the IVR, each wearing their own head-mounted display, seems more suitable for detecting relevant errors than standard systems characterized by a mixed usage of active and passive viewing.
Affiliation(s)
- Sara Rigutti
- Department of Life Sciences, Psychology Unit "Gaetano Kanizsa", University of Trieste, Trieste, Italy
- Marta Stragà
- Department of Life Sciences, Psychology Unit "Gaetano Kanizsa", University of Trieste, Trieste, Italy
- Marco Jez
- Area Science Park, Arsenal S.r.L, Trieste, Italy
- Giulio Baldassi
- Department of Life Sciences, Psychology Unit "Gaetano Kanizsa", University of Trieste, Trieste, Italy
- Andrea Carnaghi
- Department of Life Sciences, Psychology Unit "Gaetano Kanizsa", University of Trieste, Trieste, Italy
- Piero Miceu
- Area Science Park, Arsenal S.r.L, Trieste, Italy
- Carlo Fantoni
- Department of Life Sciences, Psychology Unit "Gaetano Kanizsa", University of Trieste, Trieste, Italy

12. A virtual reality approach identifies flexible inhibition of motion aftereffects induced by head rotation. Behav Res Methods 2018;51:96-107. PMID: 30187432. DOI: 10.3758/s13428-018-1116-6.
Abstract
As we move in space, our retinae receive motion signals from two causes: those resulting from motion in the world and those resulting from self-motion. Mounting evidence has shown that vestibular self-motion signals interact with visual motion processing profoundly. However, most contemporary methods arguably lack portability and generality and are incapable of providing measurements during locomotion. Here we developed a virtual reality approach, combining a three-space sensor with a head-mounted display, to quantitatively manipulate the causality between retinal motion and head rotations in the yaw plane. Using this system, we explored how self-motion affected visual motion perception, particularly the motion aftereffect (MAE). Subjects watched gratings presented on a head-mounted display. The gratings drifted at the same velocity as head rotations, with the drifting direction being identical, opposite, or perpendicular to the direction of head rotations. We found that MAE lasted a significantly shorter time when subjects' heads rotated than when their heads were kept still. This effect was present regardless of the drifting direction of the gratings, and was also observed during passive head rotations. These findings suggest that the adaptation to retinal motion is suppressed by head rotations. Because the suppression was also found during passive head movements, it should result from visual-vestibular interaction rather than from efference copy signals. Such visual-vestibular interaction is more flexible than has previously been thought, since the suppression could be observed even when the retinal motion direction was perpendicular to head rotations. Our work suggests that a virtual reality approach can be applied to various studies of multisensory integration and interaction.

13. Garzorz IT, MacNeilage PR. Visual-Vestibular Conflict Detection Depends on Fixation. Curr Biol 2017;27:2856-2861.e4. DOI: 10.1016/j.cub.2017.08.011.

14. Freitag S, Weyers B, Kuhlen TW. Examining Rotation Gain in CAVE-like Virtual Environments. IEEE Trans Vis Comput Graph 2016;22:1462-1471. PMID: 26780809. DOI: 10.1109/tvcg.2016.2518298.
Abstract
When moving through a tracked immersive virtual environment, it is sometimes useful to deviate from the normal one-to-one mapping of real to virtual motion. One option is the application of rotation gain, where the virtual rotation of a user around the vertical axis is amplified or reduced by a factor. Previous research in head-mounted display environments has shown that rotation gain can go unnoticed to a certain extent, which is exploited in redirected walking techniques. Furthermore, it can be used to increase the effective field of regard in projection systems. However, rotation gain has not yet been studied in CAVE systems. In this work, we present an experiment with 87 participants examining the effects of rotation gain in a CAVE-like virtual environment. The results show no significant effects of rotation gain on simulator sickness, presence, or user performance in a cognitive task, but indicate a negative influence on spatial knowledge, especially for inexperienced users. In secondary results, we confirm findings of previous work and demonstrate that they also hold for CAVE environments: a negative correlation of simulator sickness with presence, cognitive performance, and spatial knowledge; a positive correlation between presence and spatial knowledge; a mitigating influence of experience with 3D applications and previous CAVE exposure on simulator sickness; and a higher incidence of simulator sickness in women.

15.
Affiliation(s)
- Andrew Glennerster
- Department of Psychology, School of Psychology and Clinical Language Sciences, University of Reading Reading, UK

16. Hodgson E, Bachmann E, Thrash T. Performance of redirected walking algorithms in a constrained virtual world. IEEE Trans Vis Comput Graph 2014;20:579-587. PMID: 24650985. DOI: 10.1109/tvcg.2014.34.
Abstract
Redirected walking algorithms imperceptibly rotate a virtual scene about users of immersive virtual environment systems in order to guide them away from tracking-area boundaries. Ideally, these distortions permit users to explore large unbounded virtual worlds while walking naturally within a physically limited space. Many potential virtual worlds are composed of corridors, passageways, or aisles. Assuming users are not expected to walk through walls or other objects within the virtual world, these constrained worlds limit the directions of travel as well as the number of opportunities to change direction. The resulting differences in user movement characteristics within the physical world have an impact on redirected walking algorithm performance. This work presents a comparison of generalized RDW algorithm performance within a constrained virtual world. In contrast to previous studies involving unconstrained virtual worlds, experimental results indicate that the steer-to-orbit algorithm keeps users in a smaller area than the steer-to-center algorithm. Moreover, in comparison to steer-to-center, steer-to-orbit is shown to reduce potential wall contacts by over 29%.
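The steering policies compared here share a common core: inject a capped extra rotation that turns the user's physical heading toward a steering target (the room center for steer-to-center, a tangent to an orbit for steer-to-orbit). Below is a simplified steer-to-center rule; the proportional law and the rate cap are hypothetical choices for illustration, not a specific published controller.

```python
import math

def steer_to_center_rate(user_pos, user_heading, max_rate_deg_s=15.0):
    """Extra rotation rate (deg/s) injected to steer the user to (0, 0).

    `user_pos` is the physical (x, y) position in meters and
    `user_heading` the physical walking direction in radians. The
    proportional rule and the 15 deg/s cap are illustrative assumptions.
    """
    x, y = user_pos
    to_center = math.atan2(-y, -x)                # bearing toward center
    # Wrap the heading error into (-pi, pi]
    error = (to_center - user_heading + math.pi) % (2.0 * math.pi) - math.pi
    rate = math.degrees(error)                    # proportional steering
    return max(-max_rate_deg_s, min(max_rate_deg_s, rate))
```

A user already walking toward the center receives no injected rotation; a user walking tangentially receives the capped maximum.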

17. Reduction in sensitivity to radial optic-flow congruent with ego-motion. Vision Res 2012;62:201-208. PMID: 22543249. DOI: 10.1016/j.visres.2012.04.008.
|
18
|
Fantoni C, Caudek C, Domini F. Perceived surface slant is systematically biased in the actively-generated optic flow. PLoS One 2012; 7:e33911. [PMID: 22479473 PMCID: PMC3316515 DOI: 10.1371/journal.pone.0033911] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2011] [Accepted: 02/19/2012] [Indexed: 12/04/2022] Open
Abstract
Humans make systematic errors in the 3D interpretation of the optic flow in both passive and active vision. These systematic distortions can be predicted by a biologically-inspired model which disregards self-motion information resulting from head movements (Caudek, Fantoni, & Domini 2011). Here, we tested two predictions of this model: (1) A plane that is stationary in an earth-fixed reference frame will be perceived as changing its slant if the movement of the observer's head causes a variation of the optic flow; (2) a surface that rotates in an earth-fixed reference frame will be perceived to be stationary, if the surface rotation is appropriately yoked to the head movement so as to generate a variation of the surface slant but not of the optic flow. Both predictions were corroborated by two experiments in which observers judged the perceived slant of a random-dot planar surface during egomotion. We found qualitatively similar biases for monocular and binocular viewing of the simulated surfaces, although, in principle, the simultaneous presence of disparity and motion cues allows for a veridical recovery of surface slant.
Collapse
Affiliation(s)
- Carlo Fantoni
- Center for Neuroscience and Cognitive Systems@UniTn, Istituto Italiano di Tecnologia, Rovereto, Italy.
Collapse
|
19
|
Jerald J, Whitton M, Brooks FP. Scene-Motion Thresholds During Head Yaw for Immersive Virtual Environments. ACM TRANSACTIONS ON APPLIED PERCEPTION 2012; 9:4. [PMID: 25705137 PMCID: PMC4334481 DOI: 10.1145/2134203.2134207] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/01/2009] [Accepted: 04/01/2011] [Indexed: 06/04/2023]
Abstract
In order to better understand how scene motion is perceived in immersive virtual environments, we measured scene-motion thresholds under different conditions across three experiments. Thresholds were measured during quasi-sinusoidal head yaw, single left-to-right or right-to-left head yaw, different phases of head yaw, slow to fast head yaw, scene motion relative to head yaw, and two scene illumination levels. We found that across various conditions 1) thresholds are greater when the scene moves with head yaw (corresponding to gain < 1.0) than when the scene moves against head yaw (corresponding to gain > 1.0), and 2) thresholds increase as head motion increases.
Collapse
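The gain convention used in the abstract above (scene moving with head yaw for gain < 1.0, against it for gain > 1.0) can be made concrete with a small sketch. The threshold values below are placeholders chosen only to illustrate the reported asymmetry, not the thresholds measured by Jerald et al.

```python
def virtual_yaw(physical_yaw_delta, gain):
    """Apply a rotation gain: the virtual scene rotates `gain` times the
    physical head yaw. gain < 1.0 means the scene lags (moves with the
    head); gain > 1.0 means it leads (moves against the head)."""
    return gain * physical_yaw_delta

def is_detectable(gain, with_threshold=0.80, against_threshold=1.20):
    """Illustrative asymmetric detection check: the two thresholds are
    hypothetical placeholders, asymmetric as the study's finding suggests."""
    return gain < with_threshold or gain > against_threshold
```

For example, a 10-degree physical yaw under gain 0.9 yields a 9-degree virtual yaw, which the placeholder thresholds classify as undetectable.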
|
20
|
Hanes DA. Mathematical requirements of visual–vestibular integration. J Math Biol 2011; 65:1245-66. [DOI: 10.1007/s00285-011-0494-5] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2011] [Revised: 11/16/2011] [Indexed: 10/15/2022]
|
21
|
Jürgens R, Becker W. Human spatial orientation in non-stationary environments: relation between self-turning perception and detection of surround motion. Exp Brain Res 2011; 215:327-44. [DOI: 10.1007/s00221-011-2900-z] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2011] [Accepted: 09/30/2011] [Indexed: 11/25/2022]
|
22
|
Souman JL, Freeman TCA, Eikmeier V, Ernst MO. Humans do not have direct access to retinal flow during walking. J Vis 2010; 10:14. [PMID: 20884509 DOI: 10.1167/10.11.14] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Perceived visual speed has been reported to be reduced during walking. This reduction has been attributed to a partial subtraction of walking speed from visual speed (F. H. Durgin & K. Gigone, 2007; F. H. Durgin, K. Gigone, & R. Scott, 2005). We tested whether observers still have access to the retinal flow before subtraction takes place. Observers performed a 2IFC visual speed discrimination task while walking on a treadmill. In one condition, walking speed was identical in the two intervals, while in a second condition walking speed differed between intervals. If observers have access to the retinal flow before subtraction, any changes in walking speed across intervals should not affect their ability to discriminate retinal flow speed. Contrary to this "direct access hypothesis," we found that observers were worse at discrimination when walking speed differed between intervals. The results therefore suggest that observers do not have access to retinal flow before subtraction. We also found that the amount of subtraction depended on the visual speed presented, suggesting that the interaction between the processing of visual input and of self-motion is more complex than previously proposed.
Collapse
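The subtraction account discussed above (part of walking speed subtracted from visual speed) can be written as a one-line model. Note the study's own conclusion is that a fixed subtraction gain is too simple, since the subtracted amount varied with visual speed; the constant-gain sketch below, with a placeholder gain, only illustrates the baseline hypothesis being tested.

```python
def perceived_visual_speed(retinal_speed, walking_speed, subtraction_gain=0.5):
    """Linear-subtraction model of visual speed perception during walking:
    a fraction of self-motion speed is subtracted from retinal flow speed.
    The gain of 0.5 is a placeholder, not a fitted parameter."""
    return max(0.0, retinal_speed - subtraction_gain * walking_speed)
```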
Affiliation(s)
- Jan L Souman
- Multisensory Perception and Action Group, Max Planck Institute for Biological Cybernetics, Tübingen, Germany.
Collapse
|
23
|
The role of attention on the integration of visual and inertial cues. Exp Brain Res 2009; 198:287-300. [PMID: 19350230 PMCID: PMC2733186 DOI: 10.1007/s00221-009-1767-8] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2008] [Accepted: 03/03/2009] [Indexed: 11/05/2022]
Abstract
The extent to which attending to one stimulus while ignoring another influences the integration of visual and inertial (vestibular, somatosensory, proprioceptive) stimuli is currently unknown. It is also unclear how cue integration is affected by an awareness of cue conflicts. We investigated these questions using a turn-reproduction paradigm, where participants were seated on a motion platform equipped with a projection screen and were asked to actively return a combined visual and inertial whole-body rotation around an earth-vertical axis. By introducing cue conflicts during the active return and asking the participants whether they had noticed a cue conflict, we measured the influence of each cue on the response. We found that the task instruction had a significant effect on cue weighting in the response, with a higher weight assigned to the attended modality, only when participants noticed the cue conflict. This suggests that participants used task-induced attention to reduce the influence of stimuli that conflict with the task instructions.
Collapse
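The cue weighting measured in the study above is usually framed against the standard reliability-based (maximum-likelihood) combination rule, in which each cue is weighted by its inverse variance. A minimal sketch of that baseline rule, against which attention-induced re-weighting can be compared:

```python
def cue_weights(sigma_visual, sigma_inertial):
    """Reliability-based cue weights: each cue weighted by inverse variance.
    Standard MLE combination model, shown here only as the baseline against
    which attentional re-weighting would be measured."""
    rv, ri = 1.0 / sigma_visual**2, 1.0 / sigma_inertial**2
    wv = rv / (rv + ri)
    return wv, 1.0 - wv

def combined_estimate(visual, inertial, sigma_visual, sigma_inertial):
    """Weighted average of the two single-cue rotation estimates."""
    wv, wi = cue_weights(sigma_visual, sigma_inertial)
    return wv * visual + wi * inertial
```

With equally reliable cues the weights are 0.5 each; attending to one modality (per the finding above) would shift weight toward it beyond what reliability alone predicts.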
|
24
|
Kaptein RG, Van Gisbergen JAM. Canal and Otolith Contributions to Visual Orientation Constancy During Sinusoidal Roll Rotation. J Neurophysiol 2006; 95:1936-48. [PMID: 16319209 DOI: 10.1152/jn.00856.2005] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Using vestibular sensors to maintain visual stability during changes in head tilt, crucial when panoramic cues are not available, presents a computational challenge. Reliance on the otoliths requires a neural strategy for resolving their tilt/translation ambiguity, such as canal–otolith interaction or frequency segregation. The canal signal is subject to bandwidth limitations. In this study, we assessed the relative contribution of canal and otolith signals and investigated how they might be processed and combined. The experimental approach was to explore conditions with and without otolith contributions in a frequency range with various degrees of canal activation. We tested the perceptual stability of visual line orientation in six human subjects during passive sinusoidal roll tilt in the dark at frequencies from 0.05 to 0.4 Hz (30° peak to peak). Because subjects were constantly monitoring spatial motion of a visual line in the frontal plane, the paradigm required moment-to-moment updating for ongoing ego motion. Their task was to judge the total spatial sway of the line when it rotated sinusoidally at various amplitudes. From the responses we determined how the line had to be rotated to be perceived as stable in space. Tests were taken both with (subject upright) and without (subject supine) gravity cues. Analysis of these data showed that the compensation for body rotation in the computation of line orientation in space, although always incomplete, depended on vestibular rotation frequency and on the availability of gravity cues. In the supine condition, the compensation for ego motion showed a steep increase with frequency, compatible with an integrated canal signal. The improvement of performance in the upright condition, afforded by graviceptive cues from the otoliths, showed low-pass characteristics. Simulations showed that a linear combination of an integrated canal signal and a gravity-based signal can account for these results.
Collapse
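The linear combination the simulations above describe, an integrated (high-frequency) canal signal plus a low-pass gravity-based otolith signal, has the structure of a complementary filter. A minimal discrete-time sketch, with an assumed blend factor rather than the authors' fitted parameters:

```python
def complementary_tilt(canal_rate, otolith_tilt, dt=0.01, k=0.98):
    """Estimate roll tilt by blending integrated canal angular velocity
    (reliable at high frequencies) with otolith gravity tilt (reliable at
    low frequencies). `k` is an illustrative blend factor, not fitted data.
    `canal_rate` is angular velocity per sample; `otolith_tilt` is the
    graviceptive tilt estimate per sample; both lists are equal length."""
    tilt = otolith_tilt[0]
    estimates = [tilt]
    for w, g in zip(canal_rate[1:], otolith_tilt[1:]):
        tilt = k * (tilt + w * dt) + (1.0 - k) * g
        estimates.append(tilt)
    return estimates
```

With zero rotation and a constant otolith tilt the estimate stays at that tilt, which matches the steady-state behavior expected of such a blend.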
Affiliation(s)
- Ronald G Kaptein
- Department of Biophysics, Radboud University Nijmegen, Geert Grooteplein 21, 6525 EZ Nijmegen, The Netherlands
Collapse
|