1
Bai J, Warren WH. Relative rate of expansion controls speed in one-dimensional pedestrian following. J Vis 2023; 23:3. PMID: 37676673; PMCID: PMC10494987; DOI: 10.1167/jov.23.10.3.
Abstract
Patterns of crowd behavior are believed to result from local interactions between pedestrians. Many studies have investigated the local rules of interaction, such as steering, avoidance, and alignment, but how pedestrians control their walking speed when following another remains unsettled. Most pedestrian models take the physical speed and distance of others as input. The present study compares such "omniscient" models with "visual" models based on optical variables. We experimentally tested eight speed control models from the pedestrian- and car-following literature. Walking participants were asked to follow a leader (a moving pole) in a virtual environment while the leader's speed was perturbed during the trial. In Experiment 1, the leader's initial distance was varied. Each model was fit to the data and compared. The results showed that visual models based on optical expansion (\(\dot{\theta}\)) had the smallest root mean square error in speed across conditions, whereas other models exhibited increased error at longer distances. In Experiment 2, the leader's size (pole diameter) was varied. A model based on the relative rate of expansion (\(\dot{\theta}/\theta\)) performed better than the expansion-rate model (\(\dot{\theta}\)) because it is less sensitive to leader size. Together, the results imply that pedestrians directly control their walking speed in one-dimensional following using the relative rate of expansion, rather than the distal speed and distance of the leader.
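The control law favored by these results can be illustrated with a toy one-dimensional following simulation: the follower accelerates in proportion to the negative relative expansion rate \(\dot{\theta}/\theta\) of the leader's image. This is a minimal sketch, not the authors' fitted model; the gain `k`, time step, leader width, and initial conditions are all assumed values.

```python
import math

def visual_angle(width, distance):
    # Optical angle subtended by a leader of the given physical width.
    return 2.0 * math.atan(width / (2.0 * distance))

def follow(leader_speed, follower_speed, gap, width=0.2, k=5.0, dt=0.05, steps=200):
    """Simulate 1-D following with acceleration = -k * (theta_dot / theta).

    A shrinking image (leader pulling away) gives theta_dot < 0 and thus
    positive acceleration; an expanding image gives deceleration.
    """
    theta = visual_angle(width, gap)
    speeds = []
    for _ in range(steps):
        gap += (leader_speed - follower_speed) * dt
        new_theta = visual_angle(width, gap)
        theta_dot = (new_theta - theta) / dt
        follower_speed += -k * (theta_dot / new_theta) * dt  # the control law
        theta = new_theta
        speeds.append(follower_speed)
    return speeds, gap

speeds, final_gap = follow(leader_speed=1.2, follower_speed=0.8, gap=3.0)
```

Started 0.4 m/s slower than the leader, the follower's speed converges toward the leader's without ever accessing the leader's distal speed or distance, which is the sense in which the optical variable alone suffices for one-dimensional following.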
Affiliation(s)
- Jiuyang Bai
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA
- William H Warren
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA
2
Bury NA, Jenkin M, Allison RS, Herpers R, Harris LR. Vection underwater illustrates the limitations of neutral buoyancy as a microgravity analog. NPJ Microgravity 2023; 9:42. PMID: 37301926; DOI: 10.1038/s41526-023-00282-3.
Abstract
Neutral buoyancy has been used as an analog for microgravity from the earliest days of human spaceflight. Compared to other options on Earth, neutral buoyancy is relatively inexpensive and presents little danger to astronauts while simulating some aspects of microgravity. Neutral buoyancy removes somatosensory cues to the direction of gravity but leaves vestibular cues intact. Removing both somatosensory and vestibular cues to the direction of gravity while floating in microgravity, or using virtual reality to place them in conflict, has been shown to affect the perception of distance traveled in response to visual motion (vection) and the perception of distance. Does removing somatosensory cues alone by neutral buoyancy similarly affect these perceptions? During neutral buoyancy we found no significant difference in either perceived distance traveled or perceived size relative to Earth-normal conditions. This contrasts with the differences in linear vection reported between short- and long-duration microgravity and Earth-normal conditions. These results indicate that neutral buoyancy is not an effective analog for microgravity with respect to these perceptual effects.
Affiliation(s)
- Nils-Alexander Bury
- Institute of Visual Computing, Hochschule Bonn-Rhein-Sieg, Grantham-Allee 20, 53757, St. Augustin, Germany.
- Centre for Vision Research, York University, 4700 Keele St., Toronto, ON, M3J 1P3, Canada.
- Dept. of Psychology, York University, 4700 Keele St., Toronto, ON, M3J 1P3, Canada.
- Michael Jenkin
- Centre for Vision Research, York University, 4700 Keele St., Toronto, ON, M3J 1P3, Canada
- Department of Electrical Engineering & Computer Science, York University, 4700 Keele St., Toronto, ON, M3J 1P3, Canada
- Robert S Allison
- Centre for Vision Research, York University, 4700 Keele St., Toronto, ON, M3J 1P3, Canada
- Department of Electrical Engineering & Computer Science, York University, 4700 Keele St., Toronto, ON, M3J 1P3, Canada
- Rainer Herpers
- Institute of Visual Computing, Hochschule Bonn-Rhein-Sieg, Grantham-Allee 20, 53757, St. Augustin, Germany
- Department of Electrical Engineering & Computer Science, York University, 4700 Keele St., Toronto, ON, M3J 1P3, Canada
- Faculty of Computer Science, University of New Brunswick, Fredericton, Canada
- Laurence R Harris
- Centre for Vision Research, York University, 4700 Keele St., Toronto, ON, M3J 1P3, Canada
- Dept. of Psychology, York University, 4700 Keele St., Toronto, ON, M3J 1P3, Canada
3
Creem-Regehr SH, Stefanucci JK, Bodenheimer B. Perceiving distance in virtual reality: theoretical insights from contemporary technologies. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210456. PMID: 36511405; PMCID: PMC9745869; DOI: 10.1098/rstb.2021.0456.
Abstract
Decades of research have shown that absolute egocentric distance is underestimated in virtual environments (VEs) when compared with the real world. This finding has implications for the use of VEs in applications that require an accurate sense of absolute scale. Fortunately, this underperception of scale can be attenuated by several factors, making perception more similar to (but still not the same as) that of the real world. Here, we examine these factors in two categories: (i) experience inherent to the observer, and (ii) characteristics inherent to the display technology. We analyse how these factors influence the sources of information for absolute distance perception, with the goal of understanding how the scale of virtual spaces is calibrated. We identify six types of cues that change with these approaches, contributing both to a theoretical understanding of depth perception in VEs and a call for future research that can benefit from changing technologies. This article is part of the theme issue 'New approaches to 3D vision'.
Affiliation(s)
- Bobby Bodenheimer
- Department of Computer Science, Vanderbilt University, Nashville, TN 37235, USA
4
Kim E, Shin G. User discomfort while using a virtual reality headset as a personal viewing system for text-intensive office tasks. Ergonomics 2021; 64:891-899. PMID: 33357004; DOI: 10.1080/00140139.2020.1869320.
Abstract
Ergonomics issues arising from the use of virtual reality (VR) headsets for text-intensive applications have not been studied. Measures of neck and shoulder discomfort and simulator sickness symptoms were quantified while participants performed a document creation task for 60 min using a VR headset and a desktop monitor. During the task with the headset, participants rotated the head 2.7 times more frequently and used the neck extensor muscles 25.9% more, on average. They also rated neck and shoulder discomfort 60% and 17.5% higher after the task. Simulator sickness symptoms were likewise rated significantly higher (p < .05) for the headset condition, with more pronounced differences in the symptoms related to visual discomfort. Results indicate that physical discomfort due to frequent head rotations and the headset weight, and visual discomfort due to difficulty in reading text, were the main issues with the VR headset for common office tasks. Practitioner summary: Ergonomics issues associated with the use of a VR headset for office productivity tasks were evaluated in an experiment. Study results indicate that the development of neck physical discomfort and visual discomfort may be the main barriers to the use of current VR headsets for office work. Abbreviations: VR: virtual reality; VDT: video display terminal; EMG: electromyography; MVC: maximum voluntary contraction; SSQ: simulator sickness questionnaire; ECG: electrocardiogram; NEMG: normalised electromyography.
Affiliation(s)
- Eunjee Kim
- Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, Ulsan, Korea
- Gwanseob Shin
- Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, Ulsan, Korea
5
Foley JM. Visually directed action. J Vis 2021; 21:25. PMID: 34019620; PMCID: PMC8142698; DOI: 10.1167/jov.21.5.25.
Abstract
When people throw or walk to targets in front of them without visual feedback, they often respond short. With feedback, responses rapidly become approximately accurate. To understand this, an experiment is performed in four stages. 1) The errors in blind walking and blind throwing are measured in a virtual environment under light and dark cue conditions. 2) Error feedback is introduced and the resulting learning measured. 3) Transfer to the other response is then measured. 4) Finally, responses to the perceived distances of the targets are measured. There is large initial under-responding. Feedback rapidly makes responses almost accurate. Throw training transfers completely to walking; walk training produces only a small effect on throwing. Under instructions to respond to perceived distances, under-responding recurs. The phenomena are well described by a model in which the relation between target distance and response distance is determined by a sequence of a perceptual, a cognitive, and a motor transform. Walk learning is primarily motor; throw learning is cognitive.
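The transform-sequence model can be written as a simple composition; the linear gains below are hypothetical placeholders (not Foley's fitted parameters), chosen only to show how under-responding arises and where each kind of learning would act.

```python
def response_distance(target_m, perceptual_gain=0.9, cognitive_gain=1.0, motor_gain=0.85):
    """Chain a perceptual, a cognitive, and a motor transform (toy linear gains).

    Feedback-driven learning would adjust one stage: per the paper's
    conclusion, chiefly the motor gain for walking and the cognitive
    gain for throwing.
    """
    perceived = perceptual_gain * target_m
    intended = cognitive_gain * perceived
    executed = motor_gain * intended
    return executed
```

With any overall gain below 1, the response falls short of the target (here a 10 m target yields a 7.65 m response), matching the initial under-responding observed before feedback.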
Affiliation(s)
- John M Foley
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, CA, USA
6
Baqapuri HI, Roes LD, Zvyagintsev M, Ramadan S, Keller M, Roecher E, Zweerings J, Klasen M, Gur RC, Mathiak K. A Novel Brain-Computer Interface Virtual Environment for Neurofeedback During Functional MRI. Front Neurosci 2021; 14:593854. PMID: 33505237; PMCID: PMC7830095; DOI: 10.3389/fnins.2020.593854.
Abstract
Virtual environments (VEs) have, in recent years, become more prevalent in neuroscience. VEs offer great flexibility, replicability, and control over the presented stimuli in an immersive setting. With recent developments, it has become feasible to achieve high-quality visuals and VEs at a reasonable investment. Our aim in this project was to develop and implement a novel real-time functional magnetic resonance imaging (rt-fMRI)-based neurofeedback (NF) training paradigm, taking into account new technological advances that allow us to integrate complex stimuli into a visually updated and engaging VE. We built upon and developed a first-person shooter in which dynamic change of the VE was the feedback variable in the brain-computer interface (BCI). We designed a study to assess the feasibility of the BCI in creating an immersive VE for NF training. In a randomized, single-blinded fMRI-based NF training session, 24 participants were randomly allocated to one of two groups: active NF and reduced-contingency NF. All participants completed three runs of the shooter-game VE lasting 10 min each. Brain activity in a supplementary motor area region of interest regulated the possible movement speed of the player's avatar and thus increased the reward probability. Gaming performance revealed that participants were able to actively engage in game tasks and improve across sessions. All 24 participants reported being able to successfully employ NF strategies during the training while performing in-game tasks, with significantly higher perceived NF control ratings in the active NF group. Spectral analysis showed significant differential effects on brain activity between the groups. Connectivity analysis revealed significant differences, with lower connectivity in the active NF group compared to the reduced-contingency NF group. The self-assessment manikin ratings showed an increase in arousal in both groups but did not reach significance. Arousal has been linked to presence, or feelings of immersion, supporting the VE's objective. Long paradigms, such as NF in MRI settings, can lead to mental fatigue; VEs can help overcome such limitations. The rewarding achievements from gaming targets can lead to implicit learning of self-regulation and may broaden the scope of NF applications.
Affiliation(s)
- Halim I. Baqapuri
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany
- Jülich Aachen Research Alliance-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany
- Linda D. Roes
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany
- Jülich Aachen Research Alliance-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany
- Mikhail Zvyagintsev
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany
- Jülich Aachen Research Alliance-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany
- Souad Ramadan
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany
- Jülich Aachen Research Alliance-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany
- Micha Keller
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany
- Jülich Aachen Research Alliance-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany
- Erik Roecher
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany
- Jülich Aachen Research Alliance-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany
- Jana Zweerings
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany
- Jülich Aachen Research Alliance-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany
- Martin Klasen
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany
- Jülich Aachen Research Alliance-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany
- Ruben C. Gur
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Klaus Mathiak
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany
- Jülich Aachen Research Alliance-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany
7
Feldstein IT, Kölsch FM, Konrad R. Egocentric Distance Perception: A Comparative Study Investigating Differences Between Real and Virtual Environments. Perception 2020; 49:940-967. PMID: 33002392; DOI: 10.1177/0301006620951997.
Abstract
Virtual reality systems are a popular tool in the behavioral sciences. Participants' behavior is, however, a response to cognitively processed stimuli. Consequently, researchers must ensure that virtually perceived stimuli resemble those present in the real world to ensure the ecological validity of collected findings. Our article provides a literature review relating to distance perception in virtual reality. Furthermore, we present a new study that compares verbal distance estimates within real and virtual environments. The virtual space, a replica of a real outdoor area, was displayed using a state-of-the-art head-mounted display. Investigated distances ranged from 8 to 13 m. Overall, the results show no significant difference between egocentric distance estimates in real and virtual environments. However, a more in-depth analysis suggests that the order in which participants were exposed to the two environments may affect the outcome. Furthermore, the study suggests that an increasing sense of immersion brings the estimated virtual distances into line with the real ones. The results also show that the discrepancy between estimates of real and virtual distances increases with the incongruity between virtual and actual eye heights, demonstrating the importance of an accurately set virtual eye height.
Affiliation(s)
- Ilja T Feldstein
- Harvard Medical School, Department of Ophthalmology, United States
- Felix M Kölsch
- Technical University of Munich, Department of Mechanical Engineering, Germany
- Robert Konrad
- Stanford University, Department of Electrical Engineering, United States
8
Rolin RA, Fooken J, Spering M, Pai DK. Perception of Looming Motion in Virtual Reality Egocentric Interception Tasks. IEEE Transactions on Visualization and Computer Graphics 2019; 25:3042-3048. PMID: 30072330; DOI: 10.1109/tvcg.2018.2859987.
Abstract
Motion in depth is commonly misperceived in Virtual Reality (VR), making it difficult to intercept moving objects, for example, in games. We investigate whether motion cues could be modified to improve these interactions in VR. We developed a time-to-contact estimation task, in which observers (n = 18) had to indicate by button press when a looming virtual object would collide with their head. We show that users consistently underestimate speed. We construct a user-specific model of motion-in-depth perception and use this model to propose a novel method to modify monocular depth cues tailored to the specific user, correcting individual response errors in speed estimation. A user study was conducted in a simulated baseball environment in which observers were asked to hit a looming baseball back in the direction of the pitcher. The study was conducted with and without intervention and demonstrates the effectiveness of the method in reducing interception errors following cue modifications. The intervention was particularly effective at fast ball speeds, where performance is most limited by the user's sensorimotor constraints. The proposed approach is easy to implement and could improve the user experience of interacting with dynamic virtual environments.
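The looming signal in such a task is often summarized by the optical variable tau: the object's angular size divided by its rate of expansion, which to first order equals distance over approach speed. A small sketch of that relation, with an assumed object size and pitch speed (not values from the study):

```python
import math

def time_to_contact(size, distance, speed):
    """First-order time-to-contact (tau) for a directly approaching object.

    tau = theta / theta_dot, where theta = 2*atan(size / (2*distance)) and
    theta_dot follows from the chain rule with d(distance)/dt = -speed.
    """
    theta = 2.0 * math.atan(size / (2.0 * distance))
    theta_dot = speed * size / (distance ** 2 + (size / 2.0) ** 2)
    return theta / theta_dot

# Assumed values: a 7 cm ball, 18 m away, approaching at 35 m/s.
tau = time_to_contact(0.07, 18.0, 35.0)
```

For small angular sizes, tau is nearly identical to distance divided by speed, which is why a consistent underestimate of approach speed shows up as a late time-to-contact response.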
9
Stafford J, Whyatt C, Craig CM. Age-related differences in the perception of gap affordances: Impact of standardized action capabilities on road-crossing judgements. Accident Analysis & Prevention 2019; 129:21-29. PMID: 31100685; DOI: 10.1016/j.aap.2019.05.001.
Abstract
Recent road-crossing literature has found that older adults show performance differences between estimation and perception-action tasks, suggesting an age-related difficulty in accurately calibrating the information picked up from the surrounding environment to their action capabilities (Lobjois and Cavallo, 2009). The present study investigated whether participants could accurately perceive gap affordances via information that specifies the time-to-arrival of approaching cars. To ensure that the opportunities for action were the same across age groups, independent of the actor's action capabilities, the action of crossing the road was standardised. A total of 45 participants (15 children aged 10-12, 15 adults aged 19-39, and 15 older adults aged 65+) were asked to judge, by pressing a button in a head-mounted display, whether the gap between oncoming cars afforded crossing. When the participant pressed the button, they moved across the road at a fixed speed. Adherence to a time-based variable (namely tau) explained 85% and 84% of the variance in the children's and adults' choices, respectively. Older adults were less attuned to the time-based variable, with tau accounting for only 59% of the variance in their road-crossing decisions. These findings suggest that the ability to use tau information, which specifies whether a gap affords crossing or not, deteriorates with age.
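The standardised crossing action reduces the affordance judgment to a single time comparison; a minimal sketch under assumed numbers (the crossing time and car kinematics here are illustrative, not the study's parameters):

```python
def gap_affords_crossing(car_distance_m, car_speed_ms, crossing_time_s):
    """Tau-style road-crossing judgment.

    The gap affords crossing when the oncoming car's time-to-arrival
    exceeds the time the standardised (fixed-speed) crossing takes.
    """
    time_to_arrival = car_distance_m / car_speed_ms
    return time_to_arrival > crossing_time_s
```

Because crossing speed was fixed across age groups, a participant perfectly attuned to time-to-arrival would make exactly these choices; the 85%/84% versus 59% variance figures index how far each group's decisions depart from this rule.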
Affiliation(s)
- James Stafford
- School of Psychology, Queens University Belfast, David Kier Building, 18-30 Malone Road, Belfast, N. Ireland, BT7 1NN, UK.
- Caroline Whyatt
- Department of Psychology and Sport Science, University of Hertfordshire, CP Snow Building, Hatfield, UK
- Cathy M Craig
- INCISIV Ltd., Belfast, UK; School of Psychology, Ulster University, Coleraine Campus, Cromore Road, Coleraine, Co. Londonderry, BT52, 1SA, UK
10
Aseeri S, Paraiso K, Interrante V. Investigating the Influence of Virtual Human Entourage Elements on Distance Judgments in Virtual Architectural Interiors. Front Robot AI 2019; 6:44. PMID: 33501060; PMCID: PMC7805819; DOI: 10.3389/frobt.2019.00044.
Abstract
Architectural design drawings commonly include entourage elements: accessory objects such as people, plants, and furniture that help to convey the scale of the depicted structure and “bring the drawings to life” by illustrating typical usage scenarios. In this paper, we describe two experiments that explore the extent to which adding a photo-realistic, three-dimensional model of a familiar person as an entourage element in a virtual architectural model might help to address the classical problem of distance underestimation in these environments. In our first experiment, we found no significant differences in participants' distance perception accuracy in a semi-realistic virtual hallway model in the presence of a static or animated figure of a familiar virtual human, compared to their perception of distances in a hallway model in which no virtual human appeared. In our second experiment, we found no significant differences in distance estimation accuracy in a virtual environment in the presence of a moderately larger-than-life or smaller-than-life virtual human entourage model than when a right-sized virtual human model was used. The results of these two experiments suggest that virtual human entourage has limited potential to influence people's sense of the scale of an indoor space, and that simply adding entourage, even an exact-scale model of a familiar person, will not on its own evoke more accurate egocentric distance judgments in VR.
Affiliation(s)
- Sahar Aseeri
- Department of Computer Science and Engineering, University of Minnesota Twin Cities, Minneapolis, MN, United States
- Karla Paraiso
- Department of Computer Science and Engineering, Arizona State University, Tempe, AZ, United States
- Victoria Interrante
- Department of Computer Science and Engineering, University of Minnesota Twin Cities, Minneapolis, MN, United States
11
Jones JA, Hopper JE, Bolas MT, Krum DM. Orientation Perception in Real and Virtual Environments. IEEE Transactions on Visualization and Computer Graphics 2019; 25:2050-2060. PMID: 30762557; DOI: 10.1109/tvcg.2019.2898798.
Abstract
Spatial perception in virtual environments has been a topic of intense research. Arguably, the majority of this work has focused on distance perception. However, orientation perception is also an important factor. In this paper, we systematically investigate allocentric orientation judgments in both real and virtual contexts over the course of four experiments. A pattern of sinusoidal judgment errors known to exist in 2D perspective displays is found to persist in immersive virtual environments. This pattern also manifests itself in a real world setting using two differing judgment methods. The findings suggest the presence of a radial anisotropy that persists across viewing contexts. Additionally, there is some evidence to suggest that observers have multiple strategies for processing orientations but further investigation is needed to fully describe this phenomenon. We also offer design suggestions for 3D user interfaces where users may perform orientation judgments.
12
Lassagne A, Kemeny A, Posselt J, Merienne F. Evaluation of Spatial Filtering Algorithms for Visual Interactions in CAVEs. IEEE Computer Graphics and Applications 2019; 39:53-63. PMID: 30869598; DOI: 10.1109/mcg.2018.2877111.
Abstract
We present an approach to solving the problem of haptic and visual misalignment in CAVEs. The approach moves the collision box for the virtual screen's buttons to coincide with where the user perceives their virtual location. Different filtering strategies were used. We evaluated the algorithms with simulations and with real subjects.
13
Janeh O, Bruder G, Steinicke F, Gulberti A, Poetter-Nerger M. Analyses of Gait Parameters of Younger and Older Adults During (Non-)Isometric Virtual Walking. IEEE Transactions on Visualization and Computer Graphics 2018; 24:2663-2674. PMID: 29990158; DOI: 10.1109/tvcg.2017.2771520.
Abstract
Understanding real walking in virtual environments (VEs) is important for immersive experiences, allowing users to move through VEs in the most natural way. Previous studies have shown that basic implementations of real walking in virtual spaces, in which head-tracked movements are mapped isometrically to a VE, are not judged as entirely natural. Instead, users judge a virtual walking velocity as more natural when it is slightly increased relative to their physical locomotion. However, these findings have been reported in most cases only for young persons, e.g., students, whereas older adults are clearly underrepresented in such studies. Recently, virtual reality (VR) has received significant public and media attention. It therefore appears reasonable to assume that people of different ages will have access to VR and might use this technology more and more in application scenarios such as rehabilitation or training. To better understand how people of different ages walk and perceive locomotion in VR, we performed a study investigating the effects of (non-)isometric mappings between physical movements and virtual motions on walking biomechanics across generations, i.e., younger and older adults. Three primary domains of spatio-temporal gait parameters (pace, base of support, and phase) were used to evaluate gait performance. The results show that the older adults walked very similarly in the real and virtual environments in the pace and phasic domains, which differs from results found in younger adults. In contrast, the results indicate differences in base-of-support parameters for both groups between walking in a VE and in the real world. For non-isometric mappings, we found in both younger and older adults an increasing divergence of gait parameters in all domains, correlating with the up- or down-scaled velocity of visual self-motion feedback. The results provide important insights into the design of future VR applications for older adults in domains ranging from medicine and psychology to rehabilitation.
14
Day B, Ebrahimi E, Hartman LS, Pagano CC, Babu SV. Calibration to tool use during visually-guided reaching. Acta Psychol (Amst) 2017; 181:27-39. PMID: 29040934; DOI: 10.1016/j.actpsy.2017.09.014.
Abstract
In studying human perception and performance, researchers must understand how the body schema is modified to accurately represent one's capabilities when tools are used, as humans frequently use tools that alter their capabilities. The present work tested the idea that calibration is responsible for modifying an embodied action schema during tool use. We investigated calibration in the context of manual activity in near space through a behavioral measure. Participants made blind reaches to various visual distances in pre- and post-test phases using a short tool that did not extend their reach. During an intervening calibration phase they received visual feedback about the accuracy of their reaches, with half of the participants reaching with a tool that extended their reach by 30 cm. Results indicated that both groups showed calibration appropriate to the type of tool they used during the calibration phase, and that this calibration carried over to reaches made in the post-test. These results inform discussions of the proposed embodied action schema and have applications to virtual reality, specifically the development of self-avatars.
Affiliation(s)
- Brian Day
- Department of Psychology, Butler University, United States.
- Leah S Hartman
- Department of Psychology, Clemson University, United States
15
Walking through a virtual environment improves perceived size within and beyond the walked space. Atten Percept Psychophys 2017; 79:39-44. PMID: 27914094; DOI: 10.3758/s13414-016-1243-z.
Abstract
Distances tend to be underperceived in virtual environments (VEs) by up to 50%, whereas distances tend to be perceived accurately in the real world. Previous work has shown that allowing participants to interact with the VE while receiving continual visual feedback can reduce this underperception. Judgments of virtual object size have been used to measure whether this improvement is due to the rescaling of perceived space, but there is disagreement within the literature as to whether judgments of object size benefit from interaction with feedback. This study contributes to that discussion by employing a more natural measure of object size. We also examined whether any improvement in virtual distance perception was limited to the space used for interaction (1-5 m) or extended beyond (7-11 m). The results indicated that object size judgments do benefit from interaction with the VE, and that this benefit extends to distances beyond the explored space.
|
16
|
Kunz BR, Creem-Regehr SH. Testing the mechanisms underlying improved distance judgments in virtual environments. Perception 2015; 44:446-53. [PMID: 26492729 DOI: 10.1068/p7929] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
Virtual environments (VEs) presented via head-mounted displays are typically perceived as smaller in scale than intended. Visual-motor experience in VEs can reduce this underestimation of distance, though the mechanisms underlying this improved accuracy of distance estimates are unknown. To address this question, we created a mismatch between biomechanical and visual indicators of self-movement within the VE, and assessed the effect on distance and size judgments. Our results suggest that visual-motor feedback influences subsequent distance judgments by recalibrating perceptual-motor relationships, but we found no evidence that perceived size, which was substantially underestimated, changed as a function of this feedback. In contrast to recent studies that suggest that feedback in VEs causes a broad rescaling of virtual space, our results are consistent with a visual-motor recalibration account for much of the improvement in distance judgments following VE experience.
|
17
|
Geuss MN, Stefanucci JK, Creem-Regehr SH, Thompson WB, Mohler BJ. Effect of Display Technology on Perceived Scale of Space. HUMAN FACTORS 2015; 57:1235-1247. [PMID: 26060237 DOI: 10.1177/0018720815590300] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/13/2013] [Accepted: 05/12/2015] [Indexed: 06/04/2023]
Abstract
OBJECTIVE Our goal was to evaluate the degree to which display technologies influence the perception of size in an image.
BACKGROUND Research suggests that factors such as whether an image is displayed stereoscopically, whether a user's viewpoint is tracked, and the field of view of a given display can affect users' perception of scale in the displayed image.
METHOD Participants directly estimated the size of a gap by matching the distance between their hands to the gap width and judged their ability to pass unimpeded through the gap in one of five common implementations of three display technologies (two head-mounted displays [HMD] and a back-projection screen).
RESULTS Both measures of gap width were similar for the two HMD conditions and the back projection with stereo and tracking. For the displays without tracking, stereo and monocular conditions differed from each other, with monocular viewing showing underestimation of size.
CONCLUSIONS Display technologies that are capable of stereoscopic display and tracking of the user's viewpoint are beneficial, as perceived size does not differ from real-world estimates. Evaluations of different display technologies are necessary as display conditions vary and the availability of different display technologies continues to grow.
APPLICATIONS The findings are important to those using display technologies for research, commercial, and training purposes when it is important for the displayed image to be perceived at an intended scale.
Affiliation(s)
- Michael N Geuss
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Jeanine K Stefanucci
- University of Utah, Salt Lake City, Utah
- Sarah H Creem-Regehr
- University of Utah, Salt Lake City, Utah
- Betty J Mohler
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
|
18
|
Leyrer M, Linkenauger SA, Bülthoff HH, Mohler BJ. The importance of postural cues for determining eye height in immersive virtual reality. PLoS One 2015; 10:e0127000. [PMID: 25993274 PMCID: PMC4436369 DOI: 10.1371/journal.pone.0127000] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2014] [Accepted: 04/10/2015] [Indexed: 12/04/2022] Open
Abstract
In human perception, the ability to determine eye height is essential, because eye height is used to scale the heights of objects, velocities, affordances, and distances, all of which allow for successful environmental interaction. It is well understood that eye height is fundamental to determining many of these percepts. Yet how eye height itself is determined is still largely unknown. While the information potentially specifying eye height in the real world is naturally coincident in an environment with a regular ground surface, these sources of information can easily diverge in similar and common virtual reality scenarios. Thus, we conducted virtual reality experiments in which we manipulated the virtual eye height in a distance perception task to investigate how eye height might be determined in such a scenario. We found that humans rely more on their postural cues for determining their eye height when there is a conflict between visual and postural information and little opportunity for perceptual-motor calibration is provided, as demonstrated by the predictable variations in their distance estimates. Our results suggest that eye height in such circumstances is informed by postural cues when estimating egocentric distances in virtual reality and, consequently, does not depend on an internalized value for eye height.
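The eye-height scaling described in this abstract follows the standard angle-of-declination geometry for targets on a ground plane. A minimal sketch of that relation (the 1.6 m eye height, 30% gain, and 4 m target distance below are illustrative assumptions, not values from the study):

```python
import math

def perceived_distance(eye_height_m, declination_rad):
    # Eye-height scaling: a target on the ground seen at declination
    # angle delta below the horizon lies at d = h / tan(delta) for an
    # observer who assumes eye height h.
    return eye_height_m / math.tan(declination_rad)

# A target 4 m away viewed from a 1.6 m eye height subtends
# delta = atan(h / d) below the horizon.
delta = math.atan2(1.6, 4.0)
print(round(perceived_distance(1.6, delta), 2))  # 4.0

# If the virtual eye height is raised 30% but the observer still scales
# by the postural eye height of 1.6 m, the same target's declination
# angle grows and the perceived distance shrinks.
delta_raised = math.atan2(1.6 * 1.3, 4.0)
print(round(perceived_distance(1.6, delta_raised), 2))  # 3.08
```

This captures the direction of effect the paper reports: when postural cues dominate a conflicting visual eye height, distance estimates vary predictably with the eye-height manipulation.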
Affiliation(s)
- Markus Leyrer
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- * E-mail: (ML); (BJM)
- Heinrich H. Bülthoff
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Betty J. Mohler
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- * E-mail: (ML); (BJM)
|
19
|
Creem-Regehr SH, Stefanucci JK, Thompson WB. Perceiving Absolute Scale in Virtual Environments: How Theory and Application Have Mutually Informed the Role of Body-Based Perception. PSYCHOLOGY OF LEARNING AND MOTIVATION 2015. [DOI: 10.1016/bs.plm.2014.09.006] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
|
20
|
Abstract
We investigate the structure of spatial knowledge that spontaneously develops during free exploration of a novel environment. We present evidence that this structure is similar to a labeled graph: a network of topological connections between places, labeled with local metric information. In contrast to route knowledge, we find that the most frequent routes and detours to target locations had not been traveled during learning. Contrary to purely topological knowledge, participants typically traveled the shortest metric distance to a target, rather than topologically equivalent but longer paths. The results are consistent with the proposal that people learn a labeled graph of their environment.
Affiliation(s)
- Elizabeth R. Chrastil
- Brown University, Cognitive, Linguistic, & Psychological Sciences, Providence, Rhode Island, United States of America
- Boston University, Department of Psychological and Brain Sciences, Center for Memory and Brain, Boston, Massachusetts, United States of America
- * E-mail:
- William H. Warren
- Brown University, Cognitive, Linguistic, & Psychological Sciences, Providence, Rhode Island, United States of America
|
21
|
Kelly JW, Hammel WW, Siegel ZD, Sjolund LA. Recalibration of perceived distance in virtual environments occurs rapidly and transfers asymmetrically across scale. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2014; 20:588-595. [PMID: 24650986 DOI: 10.1109/tvcg.2014.36] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
Distance in immersive virtual reality is commonly underperceived relative to intended distance, causing virtual environments to appear smaller than they actually are. However, a brief period of interaction by walking through the virtual environment with visual feedback can cause dramatic improvement in perceived distance. The goal of the current project was to determine how quickly improvement occurs as a result of walking interaction (Experiment 1) and whether improvement is specific to the distances experienced during interaction, or whether improvement transfers across scales of space (Experiment 2). The results show that five interaction trials resulted in a large improvement in perceived distance, and that subsequent walking interactions showed continued but diminished improvement. Furthermore, interaction with near objects (1-2 m) improved distance perception for near but not far (4-5 m) objects, whereas interaction with far objects broadly improved distance perception for both near and far objects. These results have practical implications for ameliorating distance underperception in immersive virtual reality, as well as theoretical implications for distinguishing between theories of how walking interaction influences perceived distance.
|
22
|
Rio KW, Rhea CK, Warren WH. Follow the leader: visual control of speed in pedestrian following. J Vis 2014; 14:4. [PMID: 24511143 PMCID: PMC3919103 DOI: 10.1167/14.2.4] [Citation(s) in RCA: 32] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2013] [Accepted: 12/16/2013] [Indexed: 11/24/2022] Open
Abstract
When people walk together in groups or crowds, they must coordinate their walking speed and direction with their neighbors. This paper investigates how a pedestrian visually controls speed when following a leader on a straight path (one-dimensional following). To model the behavioral dynamics of following, participants in Experiment 1 walked behind a confederate who randomly increased or decreased his walking speed. The data were used to test six models of speed control that used the leader's speed, distance, or combinations of both to regulate the follower's acceleration. To test the optical information used to control speed, participants in Experiment 2 walked behind a virtual moving pole, whose visual angle and binocular disparity were independently manipulated. The results indicate that followers match the speed of the leader, and do so using a visual control law that primarily nulls the leader's optical expansion (change in visual angle), with little influence of change in disparity. This finding has direct applications to understanding the coordination among neighbors in human crowds.
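The expansion-nulling control law summarized here (follower acceleration proportional to the negated rate of change of the leader's visual angle) can be sketched as a one-dimensional simulation. The gain `c`, the leader width, and the initial conditions below are arbitrary illustrative choices, not fitted values from the paper:

```python
import math

def visual_angle(width, distance):
    # Visual angle subtended by a leader of the given width at the
    # given distance: theta = 2 * atan(width / (2 * distance)).
    return 2.0 * math.atan2(width / 2.0, distance)

def simulate_following(leader_speed_fn, c=8.0, width=0.4, dt=0.01, t_end=10.0):
    # Euler simulation of an expansion-nulling follower: acceleration is
    # proportional to -theta_dot, cancelling optical expansion or
    # contraction of the leader. Returns the follower's final speed.
    x_leader, x_follower, v_follower = 4.0, 0.0, 1.0
    theta_prev = visual_angle(width, x_leader - x_follower)
    t = 0.0
    while t < t_end:
        x_leader += leader_speed_fn(t) * dt
        x_follower += v_follower * dt
        theta = visual_angle(width, x_leader - x_follower)
        theta_dot = (theta - theta_prev) / dt
        v_follower += -c * theta_dot * dt  # null the expansion rate
        theta_prev = theta
        t += dt
    return v_follower

# Leader walks at 1 m/s, then speeds up to 1.3 m/s at t = 3 s;
# the follower's speed relaxes toward the leader's new speed.
print(round(simulate_following(lambda t: 1.0 if t < 3.0 else 1.3), 2))
```

Because theta_dot is approximately proportional to closing speed divided by squared distance, this controller behaves like first-order speed matching whose gain falls off with distance, which is why distance and leader size matter for expansion-based models in this literature.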
Affiliation(s)
- Kevin W. Rio
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA
- Christopher K. Rhea
- Department of Kinesiology, University of North Carolina at Greensboro, Greensboro, NC, USA
- William H. Warren
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA
|
23
|
More than just perception–action recalibration: Walking through a virtual environment causes rescaling of perceived space. Atten Percept Psychophys 2013; 75:1473-85. [DOI: 10.3758/s13414-013-0503-4] [Citation(s) in RCA: 40] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
24
|
25
|
Jones JA, Swan JE, Bolas M. Peripheral stimulation and its effect on perceived spatial scale in virtual environments. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2013; 19:701-710. [PMID: 23428455 DOI: 10.1109/tvcg.2013.37] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
The following series of experiments explore the effect of static peripheral stimulation on the perception of distance and spatial scale in a typical head-mounted virtual environment. It was found that applying constant white light in an observer's far periphery enabled the observer to more accurately judge distances using blind walking. An effect of similar magnitude was also found when observers estimated the size of a virtual space using a visual scale task. The presence of the effect across multiple psychophysical tasks provided confidence that a perceptual change was, in fact, being invoked by the addition of the peripheral stimulation. These results were also compared to observer performance in a very large field of view virtual environment and in the real world. The subsequent findings raise the possibility that distance judgments in virtual environments might be considerably more similar to those in the real world than previous work has suggested.
Affiliation(s)
- J Adam Jones
- University of Southern California, Institute for Creative Technologies, CA, USA.
|
26
|
Does perceptual-motor calibration generalize across two different forms of locomotion? Investigations of walking and wheelchairs. PLoS One 2013; 8:e54446. [PMID: 23424615 PMCID: PMC3570558 DOI: 10.1371/journal.pone.0054446] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2012] [Accepted: 12/11/2012] [Indexed: 11/19/2022] Open
Abstract
The relationship between biomechanical action and perception of self-motion during walking is typically consistent and well-learned, but also adaptable. This perceptual-motor coupling can be recalibrated by creating a mismatch between the visual information for self-motion and walking speed. Perceptual-motor recalibration of locomotion has been demonstrated through effects on subsequent walking without vision, showing that learned perceptual-motor coupling influences a dynamic representation of one's spatial position during walking. Our present studies test whether recalibration of wheelchair locomotion, a novel form of locomotion for typically walking individuals, similarly influences subsequent wheelchair locomotion. Furthermore, we test whether adaptation to the pairing of visual information for self-motion during one form of locomotion transfers to a different locomotion modality. We find strong effects of perceptual-motor recalibration for matched locomotion modalities (walking/walking and wheeling/wheeling). Transfer across incongruent locomotion modalities showed weak recalibration effects. The results have implications both for theories of perceptual-motor calibration mechanisms and their effects on spatial orientation, as well as for practical applications in training and rehabilitation.
|
27
|
Ambrosini E, Scorolli C, Borghi AM, Costantini M. Which body for embodied cognition? Affordance and language within actual and perceived reaching space. Conscious Cogn 2012; 21:1551-7. [DOI: 10.1016/j.concog.2012.06.010] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2011] [Revised: 06/19/2012] [Accepted: 06/26/2012] [Indexed: 11/26/2022]
|
28
|
Visual influence on path integration in darkness indicates a multimodal representation of large-scale space. Proc Natl Acad Sci U S A 2011; 108:1152-7. [PMID: 21199934 DOI: 10.1073/pnas.1011843108] [Citation(s) in RCA: 69] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Our ability to return to the start of a route recently performed in darkness is thought to reflect path integration of motion-related information. Here we provide evidence that motion-related interoceptive representations (proprioceptive, vestibular, and motor efference copy) combine with visual representations to form a single multimodal representation guiding navigation. We used immersive virtual reality to decouple visual input from motion-related interoception by manipulating the rotation or translation gain of the visual projection. First, participants walked an outbound path with both visual and interoceptive input, and returned to the start in darkness, demonstrating the influences of both visual and interoceptive information in a virtual reality environment. Next, participants adapted to visual rotation gains in the virtual environment, and then performed the path integration task entirely in darkness. Our findings were accurately predicted by a quantitative model in which visual and interoceptive inputs combine into a single multimodal representation guiding navigation, and are incompatible with a model of separate visual and interoceptive influences on action (in which path integration in darkness must rely solely on interoceptive representations). Overall, our findings suggest that a combined multimodal representation guides large-scale navigation, consistent with a role for visual imagery or a cognitive map.
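The model comparison in this abstract (one combined multimodal estimate versus separate visual and interoceptive influences on action) can be sketched for the rotation-gain case as a weighted average of the two signals. The 0.5 visual weight below is an illustrative assumption, not the fitted value from the paper:

```python
def combined_rotation_estimate(physical_turn_deg, visual_gain, w_visual=0.5):
    # Single multimodal representation: the registered rotation is a
    # weighted average of the visual signal (physical turn scaled by the
    # display's rotation gain) and the interoceptive signal (the
    # physical turn itself).
    visual = visual_gain * physical_turn_deg
    interoceptive = physical_turn_deg
    return w_visual * visual + (1.0 - w_visual) * interoceptive

# With a 1.5x visual rotation gain and equal weighting, a 90-degree
# physical turn is registered as 112.5 degrees; a separate-influences
# model would instead predict a purely interoceptive 90 degrees once
# the participant navigates in darkness.
print(combined_rotation_estimate(90.0, 1.5))  # 112.5
```

The paper's key prediction follows from this structure: homing in darkness after adaptation reflects the combined estimate, not the interoceptive signal alone.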
|
29
|
Revisiting the effect of quality of graphics on distance judgments in virtual environments: a comparison of verbal reports and blind walking. Atten Percept Psychophys 2009; 71:1284-93. [PMID: 19633344 DOI: 10.3758/app.71.6.1284] [Citation(s) in RCA: 63] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
In immersive virtual environments, judgments of perceived egocentric distance are significantly underestimated, as compared with accurate performance in the real world. Two experiments assessed the influence of graphics quality on two distinct estimates of distance, a visually directed walking task and verbal reports. Experiment 1 demonstrated a similar underestimation of distances walked to previously viewed targets in both low- and high-quality virtual classrooms. In Experiment 2, participants' verbal judgments underestimated target distances in both graphics quality environments but were more accurate in the high-quality environment, consistent with the subjective impression that high-quality environments seem larger. Contrary to previous results, we suggest that quality of graphics does influence judgments of distance, but only for verbal reports. This behavioral dissociation has implications beyond the context of virtual environments and may reflect a differential use of cues and context for verbal reports and visually directed walking.
|
30
|
Estimating distance in real and virtual environments: Does order make a difference? Atten Percept Psychophys 2009; 71:1095-106. [PMID: 19525540 DOI: 10.3758/app.71.5.1096] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
In this investigation, we examined how the order in which people experience real and virtual environments influences their distance estimates. Participants made two sets of distance estimates in one of the following conditions: (1) real environment first, virtual environment second; (2) virtual environment first, real environment second; (3) real environment first, real environment second; or (4) virtual environment first, virtual environment second. In Experiment 1, the participants imagined how long it would take to walk to targets in real and virtual environments. The participants' first estimates were significantly more accurate in the real than in the virtual environment. When the second environment was the same as the first environment (real-real and virtual-virtual), the participants' second estimates were also more accurate in the real than in the virtual environment. When the second environment differed from the first environment (real-virtual and virtual-real), however, the participants' second estimates did not differ significantly across the two environments. A second experiment, in which the participants walked blindfolded to targets in the real environment and imagined how long it would take to walk to targets in the virtual environment, replicated these results. These subtle yet persistent order effects suggest that memory can play an important role in distance perception.
|
31
|
Saracini C, Franke R, Blümel E, Belardinelli MO. Comparing distance perception in different virtual environments. Cogn Process 2009; 10 Suppl 2:S294-6. [DOI: 10.1007/s10339-009-0314-7] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
|
32
|
Jones KS, DeLucia PR, Hall AR, Johnson BR. Can metric feedback training hinder actions involving distance? HUMAN FACTORS 2009; 51:419-432. [PMID: 19750802 DOI: 10.1177/0018720809340341] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/28/2023]
Abstract
OBJECTIVE The present studies tested whether distance estimation training with metric feedback can degrade the performance of untrained primarily perceptual-motor tasks.
BACKGROUND Training with metric feedback can improve distance estimations. However, previous research led to the conclusion that those improvements stemmed from changes in cognitive processing rather than in perception. If trainees applied their new cognitive strategies to primarily perceptual-motor tasks, then the performance of those tasks should degrade. The present studies tested that possibility.
METHOD Experiment 1 sought to replicate that training with metric feedback would improve metric distance estimations. Experiments 2 and 3 investigated whether such training would degrade the performance of a primarily perceptual-motor task. Experiment 4 investigated whether such training would affect a perceptual-motor task that required cognition.
RESULTS Metric feedback improved metric distance estimation (Experiments 1-4) and throwing to a specified distance (Experiment 4). Metric feedback degraded throwing to a target (Experiments 2 and 3), although that effect was not evident when pretesting was omitted (Experiment 3).
CONCLUSION If distance estimation trainees apply what they learned from metric feedback to untrained primarily perceptual-motor tasks, then the performance of those tasks will suffer. However, if trainees apply what they learned to untrained tasks that require metric estimation, then the performance of those tasks will improve.
APPLICATION Distance estimation training with metric feedback may not generalize to other tasks and may even degrade performance on certain tasks. Future research must specify the conditions under which distance estimation training with metric feedback leads to performance improvements and decrements.
Affiliation(s)
- Keith S Jones
- Department of Psychology, Texas Tech University, Lubbock, TX 79409-2051, USA.
|
33
|
Klein E, Swan J, Schmidt G, Livingston M, Staadt O. Measurement Protocols for Medium-Field Distance Perception in Large-Screen Immersive Displays. IEEE Virtual Reality Conference 2009. [DOI: 10.1109/vr.2009.4811007] [Citation(s) in RCA: 35] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
34
|
Ellard CG, Wagar LS. Plasticity of the Association between Visual Space and Action Space in a Blind-Walking Task. Perception 2008; 37:1044-53. [DOI: 10.1068/p5798] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Abstract
Many experiments have shown that a brief visual preview provides sufficient information to complete certain kinds of movements (reaching, grasping, and walking) with high precision. This suggests that participants must possess a calibration between visual target location and the kinaesthetic, proprioceptive, and/or vestibular stimulation generated during movement towards the target. We investigated the properties of this calibration using a cue-conflict paradigm in which participants were trained with mismatched locomotor and visual input. After training, participants were presented with visual targets and were asked to either walk to them or locate them in a spatial updating task. Our results showed that the training was sufficient to produce significant, systematic miscalibrations of the association between visual space and action space. These findings suggest that the association between action space and visual space is modifiable by experience. This plasticity could be either due to modification of a simple, task-specific sensory motor association or it could reflect a change in the gain of a path integration signal or a reorganisation of the relationship between perceived space and action space. We suggest further experiments that might help to distinguish between these possibilities.
Affiliation(s)
- Colin G Ellard
- Department of Psychology, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Lori S Wagar
- Department of Psychology, University of Waterloo, Waterloo, ON N2L 3G1, Canada
|
35
|
The HIVE: A huge immersive virtual environment for research in spatial cognition. Behav Res Methods 2007; 39:835-43. [DOI: 10.3758/bf03192976] [Citation(s) in RCA: 42] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|