1
Troje NF. Depth from motion parallax: Deictic consistency, eye contact, and a serious problem with Zoom. J Vis 2023; 23:1. [PMID: 37656465] [PMCID: PMC10479236] [DOI: 10.1167/jov.23.10.1]
Abstract
The dynamics of head and eye gaze between two or more individuals displayed during verbal and nonverbal face-to-face communication contain a wealth of information and are used for both volitional and unconscious signaling. Current video communication systems convey visual signals about gaze behavior and other directional cues, but the information they carry is often spurious and potentially misleading. I discuss the consequences of this situation, identify the source of the problem as a more general lack of deictic consistency, and demonstrate that display technologies that simulate motion parallax are both necessary and sufficient to alleviate it. I then devise an avatar-based remote communication solution that achieves deictic consistency and provides natural, dynamic eye contact for computer-mediated audiovisual communication.
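The standard way to simulate motion parallax on a flat display is head-coupled ("fish-tank VR") rendering: the virtual camera follows the tracked head while an asymmetric (off-axis) frustum keeps the screen plane fixed in the scene. The Python sketch below only illustrates that frustum computation under assumed coordinate conventions and screen dimensions; it is not the implementation described in the article.

# A minimal sketch of head-coupled rendering: the virtual camera follows the
# tracked head, and an asymmetric (off-axis) frustum keeps the physical screen
# fixed in the scene, which reproduces motion parallax. Units, coordinate
# convention, and the example screen geometry are illustrative assumptions.

def off_axis_frustum(head_xyz, screen_w, screen_h, near, far):
    """Near-plane frustum bounds for a viewer at head_xyz (metres), in
    screen-centred coordinates: +x right, +y up, +z toward the viewer.
    The screen spans [-w/2, w/2] x [-h/2, h/2] at z = 0."""
    hx, hy, hz = head_xyz
    if hz <= 0:
        raise ValueError("viewer must be in front of the screen (z > 0)")
    scale = near / hz                      # project screen edges onto the near plane
    left   = (-screen_w / 2 - hx) * scale
    right  = ( screen_w / 2 - hx) * scale
    bottom = (-screen_h / 2 - hy) * scale
    top    = ( screen_h / 2 - hy) * scale
    return left, right, bottom, top, near, far

# Example: a 0.60 m x 0.34 m display, viewer 0.10 m right of centre, 0.50 m away.
print(off_axis_frustum((0.10, 0.0, 0.50), 0.60, 0.34, 0.1, 100.0))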
Affiliation(s)
- Nikolaus F Troje
- Centre for Vision Research and Department of Biology, York University, Toronto, Ontario, Canada
2
Schuetz I, Karimpur H, Fiehler K. vexptoolbox: A software toolbox for human behavior studies using the Vizard virtual reality platform. Behav Res Methods 2023; 55:570-582. [PMID: 35322350] [PMCID: PMC10027796] [DOI: 10.3758/s13428-022-01831-6]
Abstract
Virtual reality (VR) is a powerful tool for researchers due to its potential to study dynamic human behavior in highly naturalistic environments while retaining full control over the presented stimuli. Due to advancements in consumer hardware, VR devices are now very affordable and have also started to include technologies such as eye tracking, further extending potential research applications. Rendering engines such as Unity, Unreal, or Vizard now enable researchers to easily create complex VR environments. However, implementing the experimental design can still pose a challenge, and these packages do not provide out-of-the-box support for trial-based behavioral experiments. Here, we present a Python toolbox, designed to facilitate common tasks when developing experiments using the Vizard VR platform. It includes functionality for common tasks like creating, randomizing, and presenting trial-based experimental designs or saving results to standardized file formats. Moreover, the toolbox greatly simplifies continuous recording of eye and body movements using any hardware supported in Vizard. We further implement and describe a simple goal-directed reaching task in VR and show sample data recorded from five volunteers. The toolbox, example code, and data are all available on GitHub under an open-source license. We hope that our toolbox can simplify VR experiment development, reduce code duplication, and aid reproducibility and open-science efforts.
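The abstract does not document the toolbox's API, so the Python sketch below only illustrates, under assumed factor names and file layout, the kind of bookkeeping such a toolbox automates: fully crossing factor levels, repeating and shuffling the trial list, and writing one result row per trial to a standardized CSV file. It is not vexptoolbox code.

# Generic trial-list bookkeeping: full crossing of factors, repetition,
# shuffling, and one CSV row of results per trial. Factor names, the results
# layout, and the run_trial stub are illustrative assumptions.
import csv
import random
from itertools import product

factors = {"target_distance_m": [0.3, 0.45, 0.6],
           "target_side": ["left", "right"]}
repetitions = 10

def run_trial(trial):
    """Placeholder for the per-trial stimulus/response code (here simulated)."""
    return round(random.uniform(0.4, 0.9), 3)

trials = [dict(zip(factors, levels)) for levels in product(*factors.values())]
trials = trials * repetitions
random.shuffle(trials)

with open("results.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["trial"] + list(factors) + ["rt_s"])
    writer.writeheader()
    for i, trial in enumerate(trials):
        writer.writerow({"trial": i, **trial, "rt_s": run_trial(trial)})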
Affiliation(s)
- Immo Schuetz
- Experimental Psychology, Justus Liebig University, Otto-Behaghel-Str. 10 F, 35394, Giessen, Germany.
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Giessen, Germany.
- Harun Karimpur
- Experimental Psychology, Justus Liebig University, Otto-Behaghel-Str. 10 F, 35394, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Giessen, Germany
- Katja Fiehler
- Experimental Psychology, Justus Liebig University, Otto-Behaghel-Str. 10 F, 35394, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Giessen, Germany
3
Rzepka AM, Hussey KJ, Maltz MV, Babin K, Wilcox LM, Culham JC. Familiar size affects perception differently in virtual reality and the real world. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210464. [PMID: 36511414] [PMCID: PMC9745877] [DOI: 10.1098/rstb.2021.0464]
Abstract
The promise of virtual reality (VR) as a tool for perceptual and cognitive research rests on the assumption that perception in virtual environments generalizes to the real world. Here, we conducted two experiments to compare size and distance perception between VR and physical reality (Maltz et al. 2021 J. Vis. 21, 1-18). In experiment 1, we used VR to present dice and Rubik's cubes at their typical sizes or reversed sizes at distances that maintained a constant visual angle. After viewing the stimuli binocularly (to provide vergence and disparity information) or monocularly, participants manually estimated perceived size and distance. Unlike physical reality, where participants relied less on familiar size and more on presented size during binocular versus monocular viewing, in VR participants relied heavily on familiar size regardless of the availability of binocular cues. In experiment 2, we demonstrated that the effects in VR generalized to other stimuli and to a higher quality VR headset. These results suggest that the use of binocular cues and familiar size differs substantially between virtual and physical reality. A deeper understanding of perceptual differences is necessary before assuming that research outcomes from VR will generalize to the real world. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
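Holding visual angle constant while varying object size fixes the size-to-distance ratio: an object of size s subtends theta = 2*atan(s / (2*d)), so it must be placed at d = s / (2*tan(theta/2)). The short sketch below works through that relation; the object sizes and the reference distance are assumed typical values, not the stimulus parameters used in the study.

# Distances that keep visual angle constant across object sizes. The die and
# cube sizes and the 0.40 m reference distance are illustrative assumptions.
import math

def distance_for_visual_angle(size_m, angle_deg):
    """Viewing distance at which an object of size_m subtends angle_deg."""
    return size_m / (2 * math.tan(math.radians(angle_deg) / 2))

die_size, cube_size = 0.016, 0.057                           # assumed sizes (m)
angle = math.degrees(2 * math.atan(die_size / (2 * 0.40)))   # die viewed at 40 cm

print(f"visual angle:  {angle:.2f} deg")
print(f"die distance:  {distance_for_visual_angle(die_size, angle):.3f} m")
print(f"cube distance: {distance_for_visual_angle(cube_size, angle):.3f} m")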
Affiliation(s)
- Anna M. Rzepka
- Neuroscience Program, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada N6A 3K7
- Kieran J. Hussey
- Neuroscience Program, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada N6A 3K7
- Margaret V. Maltz
- Department of Psychology, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada N6A 3K7
- Karsten Babin
- Department of Psychology, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada N6A 3K7
- Laurie M. Wilcox
- Department of Psychology, York University, Toronto, ON, Canada M3J 1P3
- Jody C. Culham
- Neuroscience Program, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada N6A 3K7
- Department of Psychology, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada N6A 3K7
4
Eftekharifar S, Thaler A, Bebko AO, Troje NF. The role of binocular disparity and active motion parallax in cybersickness. Exp Brain Res 2021; 239:2649-2660. [PMID: 34216232] [DOI: 10.1007/s00221-021-06124-6]
Abstract
Cybersickness is an enduring problem for users of virtual environments. While it is generally assumed that cybersickness is caused by discrepancies in perceived self-motion between the visual and vestibular systems, little is known about the relative contribution of active motion parallax and binocular disparity to the occurrence of cybersickness. We investigated the role of these two depth cues in cybersickness by simulating a roller-coaster ride using a head-mounted display. Participants could see the tracks via a virtual frame placed at the front of the roller-coaster cart. We manipulated the state of the frame so that it behaved like (1) a window into the virtual scene, (2) a 2D screen, or (3, 4) a window for one of the two depth cues and a 2D screen for the other. Participants completed the Simulator Sickness Questionnaire before and after the experiment, and verbally reported their level of discomfort at repeated intervals during the ride. Additionally, participants' electrodermal activity (EDA) was recorded. The results of the questionnaire and the continuous ratings revealed the largest increase in cybersickness when the frame behaved like a window, and the smallest increase when it behaved like a 2D screen. Cybersickness scores were at an intermediate level for the conditions in which the frame simulated only one depth cue. This suggests that neither active motion parallax nor binocular disparity had a more prominent effect on the severity of cybersickness. The EDA responses increased at about the same rate in all conditions, suggesting that EDA is not necessarily coupled with subjectively experienced cybersickness.
Affiliation(s)
- Anne Thaler
- Centre for Vision Research & Department of Biology, York University, Toronto, ON, Canada
- Adam O Bebko
- Centre for Vision Research & Department of Biology, York University, Toronto, ON, Canada
- Nikolaus F Troje
- Centre for Vision Research & Department of Biology, York University, Toronto, ON, Canada
5
Snow JC, Culham JC. The Treachery of Images: How Realism Influences Brain and Behavior. Trends Cogn Sci 2021; 25:506-519. [PMID: 33775583] [PMCID: PMC10149139] [DOI: 10.1016/j.tics.2021.02.008]
Abstract
Although the cognitive sciences aim to ultimately understand behavior and brain function in the real world, for historical and practical reasons, the field has relied heavily on artificial stimuli, typically pictures. We review a growing body of evidence that both behavior and brain function differ between image proxies and real, tangible objects. We also propose a new framework for immersive neuroscience to combine two approaches: (i) the traditional build-up approach of gradually combining simplified stimuli, tasks, and processes; and (ii) a newer tear-down approach that begins with reality and compelling simulations such as virtual reality to determine which elements critically affect behavior and brain processing.
Affiliation(s)
- Jacqueline C Snow
- Department of Psychology, University of Nevada Reno, Reno, NV 89557, USA
- Jody C Culham
- Department of Psychology, University of Western Ontario, London, Ontario, N6A 5C2, Canada; Brain and Mind Institute, Western Interdisciplinary Research Building, University of Western Ontario, London, Ontario, N6A 3K7, Canada.
6
Abstract
The aim of the current study was to develop a novel task that allows for the quick assessment of spatial memory precision with minimal technical and training requirements. In this task, participants memorized the position of an object in a virtual room and then judged, from a different perspective, whether the object had moved to the left or to the right. Results revealed that participants exhibited a systematic bias in their responses, which we termed the reversed congruency effect. Specifically, they performed worse when the camera and the object moved in the same direction than when they moved in opposite directions. Notably, participants responded correctly in almost 100% of the incongruent trials, regardless of the distance by which the object was displaced. In Experiment 2, we showed that this effect cannot be explained by the movement of the object on the screen, but that it relates to the perspective shift and to the movement of the object in the virtual world. We also showed that the presence of additional objects in the environment reduces the reversed congruency effect such that it no longer predicts performance. In Experiment 3, we showed that the reversed congruency effect is greater in older adults, suggesting that the quality of spatial memory and perspective-taking abilities are critical. Overall, our results suggest that this effect is driven by difficulties in the precise encoding of object locations in the environment and in understanding how perspective shifts affect the projected positions of the objects in the two-dimensional image.
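The effect hinges on the fact that an object's on-screen direction of displacement depends jointly on its movement in the virtual room and on the shifted viewpoint. The pinhole-projection sketch below, with an assumed camera model (pure lateral translation, no rotation) and made-up coordinates rather than the actual task geometry, shows one geometric reason why the congruent case can mislead an observer who relies on the image alone.

# Illustrative pinhole projection: an object that moves right in the world can
# shift left in the image when the camera moves farther in the same direction.
# Camera model and coordinates are simplifying assumptions.

def image_x(object_x, object_depth, camera_x, focal=1.0):
    """Horizontal image coordinate of a point for a camera at camera_x
    looking straight ahead (pinhole model)."""
    return focal * (object_x - camera_x) / object_depth

depth = 2.0                                 # object 2 m in front of the camera
before = image_x(object_x=0.00, object_depth=depth, camera_x=0.00)
# Congruent trial: object and camera both move to the right.
after = image_x(object_x=0.05, object_depth=depth, camera_x=0.30)

print("world shift:  right (+5 cm)")
print("image shift:", "right" if after > before else "left")
# The object moved right in the world, yet its image moved left, so a
# judgement based on the image alone gives the wrong answer.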
7
Bachmann J, Zabicki A, Gradl S, Kurz J, Munzert J, Troje NF, Krueger B. Does co-presence affect the way we perceive and respond to emotional interactions? Exp Brain Res 2021; 239:923-936. [PMID: 33427949] [PMCID: PMC7943523] [DOI: 10.1007/s00221-020-06020-5]
Abstract
This study compared how two virtual display conditions of human body expressions influenced explicit and implicit dimensions of emotion perception and response behavior in women and men. Two avatars displayed emotional interactions (angry, sad, affectionate, happy) in a "pictorial" condition, which depicted the emotional interaction partners on a screen within a virtual environment, and a "visual" condition, which allowed participants to share space with the avatars, thereby enhancing co-presence and agency. Following stimulus presentation, explicit valence perception and response tendency (i.e., the explicit tendency to avoid or approach the situation) were assessed on rating scales. Implicit responses, i.e., postural and autonomic responses toward the observed interactions, were measured by means of postural displacement and changes in skin conductance. Results showed that self-reported presence differed between the pictorial and visual conditions; however, it was not correlated with skin conductance responses. Valence perception was only marginally influenced by the virtual condition and not at all by explicit response behavior. There were gender-mediated effects on postural response tendencies as well as gender differences in explicit response behavior but not in valence perception. Exploratory analyses revealed a link between valence perception and preferred behavioral response in women but not in men. We conclude that the display condition seems to influence automatic motivational tendencies but not higher-level cognitive evaluations. Moreover, intragroup differences in explicit and implicit response behavior highlight the importance of individual factors beyond gender.
Affiliation(s)
- Julia Bachmann
- NeuroMotor Behavior Lab, Department of Psychology and Sport Science, Justus-Liebig-University, Giessen, Germany.
- Adam Zabicki
- NeuroMotor Behavior Lab, Department of Psychology and Sport Science, Justus-Liebig-University, Giessen, Germany
- Stefan Gradl
- Machine Learning and Data Analysis Lab, Faculty of Engineering, Friedrich-Alexander-University Erlangen-Nuremberg, Erlangen, Germany
- Johannes Kurz
- NeuroMotor Behavior Lab, Department of Psychology and Sport Science, Justus-Liebig-University, Giessen, Germany
- Jörn Munzert
- NeuroMotor Behavior Lab, Department of Psychology and Sport Science, Justus-Liebig-University, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), Philipps University of Marburg and Justus Liebig University, Giessen, Germany
- Nikolaus F Troje
- BioMotionLab, Department of Biology and Centre for Vision Research, York University Toronto, Toronto, Canada
- Britta Krueger
- NeuroMotor Behavior Lab, Department of Psychology and Sport Science, Justus-Liebig-University, Giessen, Germany
8
Bebko AO, Troje NF. bmlTUX: Design and Control of Experiments in Virtual Reality and Beyond. Iperception 2020; 11:2041669520938400. [PMID: 32733664] [PMCID: PMC7370570] [DOI: 10.1177/2041669520938400]
Abstract
Advances in virtual reality technology have made it a valuable new tool for vision and perception researchers. Coding virtual reality experiments from scratch can be difficult and time-consuming, so researchers rely on software such as the Unity game engine to create and edit virtual scenes. However, Unity lacks built-in tools for controlling experiments, and existing third-party add-ins require complicated scripts to define experiments. This can be difficult and requires advanced coding knowledge, especially for multifactorial experimental designs. In this article, we describe a new free and open-source tool called the BiomotionLab Toolkit for Unity Experiments (bmlTUX) that provides a simple interface for controlling experiments in Unity. In contrast to existing tools, bmlTUX provides a graphical interface to automatically handle combinatorics, counterbalancing, randomization, mixed designs, and blocking of trial order. The toolbox works out of the box, since simple experiments can be created with almost no coding. Furthermore, multiple design configurations can be swapped with a drag-and-drop interface, allowing researchers to test new configurations iteratively while maintaining the ability to easily revert to previous configurations. Despite its simplicity, bmlTUX remains highly flexible and customizable, catering to coding novices and experts alike.
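bmlTUX itself is configured through a graphical interface in Unity (C#), and the abstract does not document its scripting API. To keep a single language across these examples, the Python sketch below merely illustrates two of the chores it is said to automate, blocking trials by one factor and counterbalancing block order across participants, under assumed factor names and a simple Latin-square rotation.

# Toolkit-agnostic sketch of blocking and counterbalancing. Factor names and
# the rotation scheme are illustrative assumptions, not bmlTUX internals.
import random

viewing = ["monocular", "binocular"]          # blocked factor
target = ["near", "mid", "far"]               # within-block factor
reps = 5

def block_for(level):
    """Build and shuffle all trials for one level of the blocked factor."""
    trials = [{"viewing": level, "target": t} for t in target] * reps
    random.shuffle(trials)                    # randomize order within a block
    return trials

def block_order(participant_id):
    """Latin-square-style rotation of block order by participant number."""
    shift = participant_id % len(viewing)
    return viewing[shift:] + viewing[:shift]

session = [trial for level in block_order(participant_id=3)
           for trial in block_for(level)]
print(session[:4])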
Affiliation(s)
- Adam O. Bebko
- Department of Biology and Centre for Vision Research, York University, Toronto, Ontario, Canada
- Nikolaus F. Troje
- Department of Biology and Centre for Vision Research, York University, Toronto, Ontario, Canada