1. Castet E, Termoz-Masson J, Vizcay S, Delachambre J, Myrodia V, Aguilar C, Matonti F, Kornprobst P. PTVR - A software in Python to make virtual reality experiments easier to build and more reproducible. J Vis 2024;24(4):19. PMID: 38652657; PMCID: PMC11044846; DOI: 10.1167/jov.24.4.19. Open access.
Abstract
Researchers increasingly use virtual reality (VR) to perform behavioral experiments, especially in vision science. These experiments are usually programmed directly in game engines, which are extremely powerful but tricky and time-consuming to master, as they require solid knowledge of their inner workings. Consequently, the anticipated prohibitive effort discourages many researchers who want to engage in VR. This paper introduces the Perception Toolbox for Virtual Reality (PTVR) library, which allows visual perception studies in VR to be created with high-level Python scripts. A crucial consequence of using a script is that an experiment can be described by a single, easy-to-read piece of code, thus improving the transparency, reproducibility, and reusability of VR studies. We built our library upon a seminal open-source library released in 2018 and have considerably developed it since then. This paper provides the first comprehensive overview of the PTVR software. We introduce the main objects and features of PTVR, along with some general concepts related to the three-dimensional (3D) world. This new library should dramatically reduce the difficulty of programming experiments in VR and elicit a whole new set of visual perception studies with high ecological validity.
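To give a flavor of the script-as-specification idea, here is a minimal, self-contained Python sketch: the whole experimental design lives in one readable, serializable object that a VR runtime could consume. This mimics the spirit of the approach described above but deliberately uses none of PTVR's actual API; all names and condition values are illustrative assumptions.

    # Self-contained illustration of a script-defined experiment: the full design
    # is one readable Python structure, hence shareable and reproducible. This is
    # NOT PTVR's API; all names and values here are illustrative assumptions.
    from dataclasses import dataclass, asdict
    import itertools, json, random

    @dataclass
    class Trial:
        eccentricity_deg: float   # target eccentricity in visual degrees
        depth_m: float            # target distance from the observer in meters

    conditions = [Trial(e, d) for e, d in itertools.product([2.0, 8.0], [0.5, 2.0])]
    trials = random.sample(conditions * 10, k=len(conditions) * 10)  # shuffled repeats

    # The design can be dumped to JSON, so the exact experiment is reproducible.
    print(json.dumps([asdict(t) for t in trials[:3]], indent=2))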
Affiliation(s)
- Eric Castet
- Aix Marseille Univ, CNRS, CRPN, Marseille, France
2. Langenberg M, Bayer M, Zimmermann E. Active production and passive observation of hand movements shift visual hand location. Sci Rep 2023;13:20645. PMID: 38001114; PMCID: PMC10673826; DOI: 10.1038/s41598-023-47557-z. Open access.
Abstract
Which factors influence the perception of our hand location is a matter of current debate. Here, we test whether sensorimotor processing contributes to the perception of hand location. We developed a novel visuomotor adaptation procedure to measure whether actively performing hand movements, or passively observing them, influences the visual perception of hand location. Participants had to point with a handheld controller to a briefly presented visual target. When they reached the remembered position of the target, the controller delivered a tactile buzz. In adaptation trials, the tactile buzz was presented before the hand had reached the target. Over the course of trials, participants adapted to the manipulation and pointed to a location between the visual target and the tactile buzz. We measured the perceived location of the hand by flashing a virtual pair of left and right hands before and after adaptation. Participants had to judge which hand they perceived as closer to their body on the fronto-parallel plane. After adaptation, they judged the right hand, which corresponded to the hand used during adaptation, to be located farther from the body. We conclude that sensorimotor prediction of the consequences of hand movements shapes sensory processing of hand location.
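To make the adaptation manipulation concrete, here is a minimal, self-contained sketch of the trial logic as we read it from the abstract: in adaptation trials the buzz fires a fixed distance short of the target. The function names, the 5 cm offset, and the simulated hand trajectory are our illustrative assumptions, not the authors' implementation.

    # Sketch of the adaptation-trial logic: the buzz is triggered a fixed
    # distance short of the target in adaptation trials. Offsets, thresholds,
    # and the simulated hand below are illustrative assumptions.
    import numpy as np

    def run_pointing_trial(target, read_hand, buzz, adaptation=False, offset_m=0.05):
        trigger_dist = offset_m if adaptation else 0.0
        while True:
            hand = read_hand()
            if np.linalg.norm(np.asarray(target) - hand) <= trigger_dist + 1e-3:
                buzz()
                return hand  # hand position at the moment of the buzz

    # Toy stand-in for a VR controller: a hand moving straight at the target.
    target = np.array([0.0, 0.0, 0.4])
    state = {"pos": np.zeros(3)}
    def read_hand():
        direction = target - state["pos"]
        state["pos"] += 0.001 * direction / np.linalg.norm(direction)
        return state["pos"].copy()

    endpoint = run_pointing_trial(target, read_hand, buzz=lambda: print("buzz"),
                                  adaptation=True)
    print(endpoint)  # stops roughly 5 cm short of the target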
Affiliation(s)
- Maryvonne Langenberg
- Institute for Experimental Psychology, Heinrich Heine University Düsseldorf, Universitätsstr. 1, 40225, Düsseldorf, Germany
- Manuel Bayer
- Institute for Experimental Psychology, Heinrich Heine University Düsseldorf, Universitätsstr. 1, 40225, Düsseldorf, Germany
- Eckart Zimmermann
- Institute for Experimental Psychology, Heinrich Heine University Düsseldorf, Universitätsstr. 1, 40225, Düsseldorf, Germany
3. Bayer M, Betka S, Herbelin B, Blanke O, Zimmermann E. The full-body illusion changes visual depth perception. Sci Rep 2023;13:10569. PMID: 37386091; PMCID: PMC10310716; DOI: 10.1038/s41598-023-37715-8. Open access.
Abstract
Knowing where objects are relative to us implies knowing where we are relative to the external world. Here, we investigated whether space perception can be influenced by an experimentally induced change in perceived self-location. To dissociate real and apparent body positions, we used the full-body illusion: participants see a distant avatar being stroked in virtual reality while their own physical back is stroked simultaneously. After experiencing the discrepancy between the seen and felt locations of the stroking, participants report a forward drift in self-location toward the avatar. We asked whether this illusion-induced forward drift in self-location would affect where objects are perceived in depth. In a psychometric procedure, participants compared the position in depth of a probe sphere against a reference sphere in a two-alternative forced-choice task. We found a significant improvement in task performance for the right visual field, indicated by lower just-noticeable differences; that is, participants were better at judging the depth difference between the two spheres. Our results suggest that the full-body illusion can facilitate depth perception, at least unilaterally, implying that depth perception is influenced by perceived self-location.
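For readers unfamiliar with the just-noticeable difference (JND) measure, here is a minimal, self-contained sketch of the standard 2AFC analysis: fit a cumulative Gaussian to the response proportions and take its spread as the JND. The data values below are made up for illustration, and this is the textbook recipe, not necessarily the authors' exact pipeline.

    # Fit a cumulative Gaussian to 2AFC responses; one common convention takes
    # the sigma of the fit as the JND (lower sigma = finer discrimination).
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    def psychometric(x, pse, sigma):
        # Probability of judging the probe "farther" than the reference.
        return norm.cdf(x, loc=pse, scale=sigma)

    # Probe-minus-reference depth offsets (cm) and made-up "farther" proportions.
    offsets = np.array([-4, -2, -1, 0, 1, 2, 4], dtype=float)
    p_farther = np.array([0.05, 0.20, 0.35, 0.50, 0.70, 0.85, 0.97])

    (pse, sigma), _ = curve_fit(psychometric, offsets, p_farther, p0=[0.0, 2.0])
    print(f"PSE = {pse:.2f} cm, JND (sigma) = {sigma:.2f} cm")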
Affiliation(s)
- Manuel Bayer
- Department of Experimental Psychology, Heinrich-Heine-University, Düsseldorf, Germany
- Sophie Betka
- Laboratory of Cognitive Neuroscience, NeuroX Institute & Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
- Bruno Herbelin
- Laboratory of Cognitive Neuroscience, NeuroX Institute & Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
- Olaf Blanke
- Laboratory of Cognitive Neuroscience, NeuroX Institute & Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
- Department of Clinical Neuroscience, Geneva University Hospital, Geneva, Switzerland
- Eckart Zimmermann
- Department of Experimental Psychology, Heinrich-Heine-University, Düsseldorf, Germany
4. Camponogara I, Volcic R. Visual uncertainty unveils the distinct role of haptic cues in multisensory grasping. eNeuro 2022;9:ENEURO.0079-22.2022. PMID: 35641223; PMCID: PMC9215692; DOI: 10.1523/eneuro.0079-22.2022. Open access.
Abstract
Human multisensory grasping movements (i.e., seeing and feeling a handheld object while grasping it with the contralateral hand) are superior to movements guided by each separate modality. This multisensory advantage might be driven by the integration of vision with either the haptic position cue only or with both position and size cues. To contrast these two hypotheses, we manipulated visual uncertainty (central vs. peripheral vision) and the availability of haptic cues during multisensory grasping. We found a multisensory benefit irrespective of the degree of visual uncertainty, suggesting that the integration process involved in multisensory grasping can be flexibly modulated by the contribution of each modality. Increasing visual uncertainty revealed the distinct roles of the haptic cues: the haptic position cue was sufficient to promote multisensory benefits, evidenced by faster actions with smaller grip apertures, whereas the haptic size cue was fundamental for fine-tuning grip aperture scaling. These results support the hypothesis that, in multisensory grasping, vision is integrated with all haptic cues, with the haptic position cue playing the key part. Our findings highlight the important role of non-visual sensory inputs in sensorimotor control and hint at potential contributions of the haptic modality to developing and maintaining visuomotor functions.

Significance statement: The longstanding view of vision as the primary sense guiding grasping movements relegates the equally important haptic inputs, such as touch and proprioception, to a secondary role. Here we show that, by increasing visual uncertainty during visuo-haptic grasping, the central nervous system can be made to exploit distinct haptic inputs about object position and size to optimize grasping performance. Specifically, we demonstrate that haptic inputs about object position are fundamental in supporting vision to enhance grasping performance, whereas haptic size inputs can further refine hand shaping. Our results provide strong evidence that non-visual inputs serve an important, previously under-appreciated, functional role in grasping.
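The kind of integration discussed here is commonly modeled as variance-weighted (maximum-likelihood) cue combination, where noisier cues, such as peripheral vision, receive less weight. The sketch below illustrates that standard textbook model with made-up numbers; it is not the authors' analysis.

    # Standard maximum-likelihood cue combination: each cue's weight is inversely
    # proportional to its variance, and the combined estimate is less variable
    # than either cue alone. A textbook illustration, not the authors' model.
    import numpy as np

    def mle_combine(estimates, variances):
        w = 1.0 / np.asarray(variances, dtype=float)
        w /= w.sum()
        combined = np.dot(w, estimates)
        combined_var = 1.0 / np.sum(1.0 / np.asarray(variances, dtype=float))
        return combined, combined_var

    # Toy numbers: the visual size estimate gets noisier in peripheral vision,
    # so the haptic size cue gains weight.
    size_cm, var = mle_combine(estimates=[6.0, 5.2],    # visual, haptic size (cm)
                               variances=[1.5, 0.8])    # peripheral vision is noisy
    print(f"combined size = {size_cm:.2f} cm, variance = {var:.2f}")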
Affiliation(s)
- Ivan Camponogara
- Division of Science, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Robert Volcic
- Division of Science, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Center for Artificial Intelligence and Robotics, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
5. Surber T, Overstreet T, Masoner H, Dowell C, Hajnal A. Functional Specificity of the Affordance of Reaching. Exp Psychol 2022;69:23-39. PMID: 35579538; DOI: 10.1027/1618-3169/a000544.
Abstract
The information that specifies whether an object is within reach is a complex pattern that depends on body-scaled parameters measured from an egocentric reference point. The pattern is a function of relevant body proportions (eye height, shoulder height [SH], arm length) with respect to the spatial location of the target object. Beyond the question of how these factors map onto perception, it is also unknown whether the egocentric viewpoint is centered at the eye or the shoulder. In three experiments, we systematically tested whether observers can perceive their eye height and SH (Experiment 1), whether they can point accurately in the direction of a target object (Experiment 2), and whether they can accurately judge if a target object is within reach (Experiment 3). Experiment 1 demonstrated that participants are more accurate at judging their own eye height than their SH. Experiment 2 revealed that participants point to a target object's location more accurately when pointing is measured from the shoulder as a reference point than from the eye. In Experiment 3, we showed that a higher-order variable that includes arm length, body height, and the angle of declination to the target successfully predicted affordance judgments, regardless of reference point. We consider this evidence that the invariant is functionally specific, not tied to any one particular anatomical body part.
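The abstract does not spell out the higher-order variable, so we will not guess it; the sketch below only illustrates the elementary body-scaled geometry such a variable builds on: a shoulder-centered reachability check against arm length. All numbers are made up.

    # Elementary body-scaled geometry underlying reachability: in this crude
    # model an object affords reaching when its distance from the shoulder does
    # not exceed arm length. NOT the authors' higher-order variable.
    import numpy as np

    def within_reach(target_xyz, shoulder_xyz, arm_length_m):
        d = np.linalg.norm(np.asarray(target_xyz) - np.asarray(shoulder_xyz))
        return d <= arm_length_m

    shoulder = (0.0, 1.40, 0.0)   # shoulder height 1.40 m, body-centered x/z
    # Prints False: the target is ~0.77 m from the shoulder, beyond a 0.72 m arm.
    print(within_reach((0.3, 0.9, 0.5), shoulder, arm_length_m=0.72))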
Affiliation(s)
- Tyler Surber
- Department of Humanities and Social Sciences, Pearl River Community College, Poplarville, MS, USA
- Hannah Masoner
- School of Psychology, University of Southern Mississippi, USA
- Alen Hajnal
- School of Psychology, University of Southern Mississippi, USA
6. Computer Vision Positioning and Local Obstacle Avoidance Optimization Based on Neural Network Algorithm. Comput Intell Neurosci 2022;2022:3061910. PMID: 35401716; PMCID: PMC8993561; DOI: 10.1155/2022/3061910. Open access.
Abstract
With the rapid spread of computerization and smart devices, demand for indoor positioning of mobile robots is growing, making autonomous navigation an important goal. Indoors, however, the global positioning system cannot localize effectively (walls attenuate the signal), and indoor broadband and wired positioning technologies suffer from base-station deployment costs and latency. Computer-vision positioning requires only simple, low-cost camera hardware and is less affected by environmental changes than other sensors, so visual positioning has received extensive attention. Image matching is the most critical step in visual positioning: its accuracy, speed, and robustness directly determine the positioning result, and it is therefore the main topic of this study. The study systematically optimizes neural network algorithms, especially for the robot's local obstacle avoidance, and proposes an obstacle data acquisition method based on VGG16 and Fast R-CNN. Because semantic image segmentation based on AlexNet and ResNet struggles to accurately capture information about multiple objects, an image semantic segmentation algorithm built on VGG16 is designed to classify background and road in the image at the pixel level and to extract the path boundary line. Collecting obstacle and path information in this way improves the speed and accuracy of highly automated local obstacle avoidance. Overall, the study uses neural network algorithms to optimize computer-vision positioning and the accuracy of local obstacle avoidance.
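As a concrete reference point for the VGG16-based pixel-level classification mentioned above, here is a minimal FCN-style segmentation sketch in PyTorch. It is a generic two-class (background vs. road) illustration, not the paper's architecture; the head layer sizes are our choices.

    # Minimal VGG16-backed, FCN-style segmentation head: pixel-level two-class
    # (background vs. road) logits. A generic sketch, not the paper's network.
    import torch
    import torch.nn as nn
    from torchvision.models import vgg16

    class VGG16Seg(nn.Module):
        def __init__(self, num_classes=2):
            super().__init__()
            self.backbone = vgg16(weights=None).features   # conv layers only
            self.head = nn.Sequential(
                nn.Conv2d(512, 256, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(256, num_classes, kernel_size=1),
            )
            # VGG16 downsamples 32x; upsample logits back to input resolution.
            self.up = nn.Upsample(scale_factor=32, mode="bilinear",
                                  align_corners=False)

        def forward(self, x):
            return self.up(self.head(self.backbone(x)))

    logits = VGG16Seg()(torch.randn(1, 3, 224, 224))
    print(logits.shape)   # -> torch.Size([1, 2, 224, 224])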