1. Haskins AJ, Mentch J, Botch TL, Robertson CE. Active vision in immersive, 360° real-world environments. Sci Rep 2020;10:14304. PMID: 32868788; PMCID: PMC7459302. DOI: 10.1038/s41598-020-71125-4.
Abstract
How do we construct a sense of place in a real-world environment? Real-world environments are actively explored via saccades, head turns, and body movements. Yet, little is known about how humans process real-world scene information during active viewing conditions. Here, we exploited recent developments in virtual reality (VR) and in-headset eye-tracking to test the impact of active vs. passive viewing conditions on gaze behavior while participants explored novel, real-world, 360° scenes. In one condition, participants actively explored 360° photospheres from a first-person perspective via self-directed motion (saccades and head turns). In another condition, photospheres were passively displayed to participants while they were head-restricted. We found that, relative to passive viewers, active viewers displayed increased attention to semantically meaningful scene regions, suggesting more exploratory, information-seeking gaze behavior. We also observed signatures of exploratory behavior in eye movements, such as quicker, more entropic fixations during active as compared with passive viewing conditions. These results show that active viewing influences every aspect of gaze behavior, from the way we move our eyes to what we choose to attend to. Moreover, these results offer key benchmark measurements of gaze behavior in 360°, naturalistic environments.
Affiliation(s)
- Amanda J Haskins
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, 03755, USA.
- Jeff Mentch
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, 03755, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Thomas L Botch
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, 03755, USA
- Caroline E Robertson
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, 03755, USA
2.
Abstract
Distance is important: From an ecological perspective, knowledge about the distance to either prey or predator is vital. However, the distance of an unknown sound source is particularly difficult to assess, especially in anechoic environments. In vision, changes in perspective resulting from observer motion produce a reliable, consistent, and unambiguous impression of depth known as motion parallax. Here we demonstrate with formal psychophysics that humans can exploit auditory motion parallax, i.e., the change in the dynamic binaural cues elicited by self-motion, to assess the relative depths of two sound sources. Our data show that sensitivity to relative depth is best when subjects move actively; performance deteriorates when subjects are moved by a motion platform or when the sound sources themselves move. This is true even though the dynamic binaural cues elicited by these three types of motion are identical. Our data demonstrate a perceptual strategy to segregate intermittent sound sources in depth and highlight the tight interaction between self-motion and binaural processing that allows assessment of the spatial layout of complex acoustic scenes.
3. Mantel B, Stoffregen TA, Campbell A, Bardy BG. Exploratory movement generates higher-order information that is sufficient for accurate perception of scaled egocentric distance. PLoS One 2015;10:e0120025. PMID: 25856410; PMCID: PMC4391914. DOI: 10.1371/journal.pone.0120025.
Abstract
Body movement influences the structure of multiple forms of ambient energy, including optics and gravito-inertial force. Some researchers have argued that egocentric distance is derived from inferential integration of visual and non-visual stimulation. We suggest that accurate information about egocentric distance exists in perceptual stimulation as higher-order patterns that extend across optics and inertia. We formalize a pattern that specifies the egocentric distance of a stationary object across higher-order relations between optics and inertia. This higher-order parameter is created by self-generated movement of the perceiver in inertial space relative to the illuminated environment. For this reason, we placed minimal restrictions on the exploratory movements of our participants. We asked whether humans can detect and use the information available in this higher-order pattern. Participants judged whether a virtual object was within reach. We manipulated relations between body movement and the ambient structure of optics and inertia. Judgments were precise and accurate when the higher-order optical-inertial parameter was available. When only optic flow was available, judgments were poor. Our results reveal that participants perceived egocentric distance from the higher-order, optical-inertial consequences of their own exploratory activity. Analysis of participants’ movement trajectories revealed that self-selected movements were complex, and tended to optimize availability of the optical-inertial pattern that specifies egocentric distance. We argue that accurate information about egocentric distance exists in higher-order patterns of ambient energy, that self-generated movement can generate these higher-order patterns, and that these patterns can be detected and used to support perception of egocentric distance that is precise and accurate.
Affiliation(s)
- Bruno Mantel
- Movement-to-Health Laboratory, EuroMov, Montpellier-1 University, Montpellier, France
- Normandie Université, Caen, France
- Centre d’Etudes Sport et Actions Motrices, Université de Caen Basse-Normandie, Caen, France
- Thomas A. Stoffregen
- Affordance Perception-Action Laboratory, University of Minnesota, Minneapolis, United States of America
- Alain Campbell
- Normandie Université, Caen, France
- UMR 6139 Laboratoire de Mathématiques Nicolas Oresme, Université de Caen-Basse Normandie & CNRS, Caen, France
- Benoît G. Bardy
- Movement-to-Health Laboratory, EuroMov, Montpellier-1 University, Montpellier, France
- Institut Universitaire de France, Paris, France
4. Fath AJ, Fajen BR. Static and dynamic visual information about the size and passability of an aperture. Perception 2012;40:887-904. PMID: 22132505. DOI: 10.1068/p6917.
Abstract
The role of static eyeheight-scaled information in perceiving the passability of and guiding locomotion through apertures is well established. However, eyeheight-scaled information is not the only source of visual information about size and passability. In this study we tested the sufficiency of two other sources of information, both of which are available only to moving observers (i.e., are dynamic) and specify aperture size in intrinsic body-scaled units. The experiment was conducted in an immersive virtual environment that was monocularly viewed through a head-mounted display. Subjects walked through narrow openings between obstacles, rotating their shoulders as necessary, while head and shoulder position were tracked. The task was performed in three virtual environments that differed in terms of the availability of eyeheight-scaled information and the two dynamic sources of information. Analyses focused on the timing and amplitude of shoulder rotation as subjects walked through apertures, as well as walking speed and the number of collisions. Subjects successfully timed and appropriately scaled the amplitude of shoulder rotation to fit through apertures in all three conditions. These findings suggest that visual information other than eyeheight-scaled information can be used to guide locomotion through apertures.
Affiliation(s)
- Aaron J Fath
- Department of Cognitive Science, Carnegie Building 308, Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, NY 12180, USA
5. Dokka K, MacNeilage PR, DeAngelis GC, Angelaki DE. Estimating distance during self-motion: a role for visual-vestibular interactions. J Vis 2011;11(13):2. PMID: 22045777. DOI: 10.1167/11.13.2.
Abstract
A fundamental challenge for the visual system is to extract the 3D spatial structure of the environment. When an observer translates without moving the eyes, the retinal speed of a stationary object is related to its distance by a scale factor that depends on the velocity of the observer's self-motion. Here, we aim to test whether the brain uses vestibular cues to self-motion to estimate distance to stationary surfaces in the environment. This relationship was systematically probed using a two-alternative forced-choice task in which distance perceived from monocular image motion during passive body translation was compared to distance perceived from binocular disparity while subjects were stationary. We show that perceived distance from motion depended on both observer velocity and retinal speed. For a given head speed, slower retinal speeds led to the perception of farther distances. Likewise, for a given retinal speed, slower head speeds led to the perception of nearer distances. However, these relationships were weak in some subjects and absent in others, and distance estimated from self-motion and retinal image motion was substantially compressed relative to distance estimated from binocular disparity. Overall, our findings suggest that the combination of retinal image motion and vestibular signals related to head velocity can provide a rudimentary capacity for distance estimation.
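The scale-factor relation this abstract describes (for lateral translation at head speed v, a stationary point straight ahead sweeps the retina at angular rate ω ≈ v/d, so d ≈ v/ω) can be sketched numerically. This is a minimal small-angle illustration of the geometry only, not the authors' analysis; the function name and the example speeds are hypothetical.

```python
import math

def distance_from_parallax(head_speed_m_s: float, retinal_speed_deg_s: float) -> float:
    """Distance (m) to a stationary point straight ahead, recovered from
    the small-angle motion-parallax relation omega = v / d."""
    omega_rad_s = math.radians(retinal_speed_deg_s)
    return head_speed_m_s / omega_rad_s

# Same retinal speed, slower head speed -> nearer recovered distance,
# matching the pattern reported above.
d_fast = distance_from_parallax(0.30, 5.0)  # ~3.44 m
d_slow = distance_from_parallax(0.15, 5.0)  # ~1.72 m
```

Halving head speed at a fixed retinal speed halves the recovered distance, which is the direction of the effect the abstract reports ("slower head speeds led to the perception of nearer distances").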
Affiliation(s)
- Kalpana Dokka
- Department of Anatomy and Neurobiology, Washington University in St. Louis, USA
6. Mantel B, Bardy BG, Stoffregen TA. Multimodal perception of reachability expressed through locomotion. Ecological Psychology 2010. DOI: 10.1080/10407413.2010.496665.
7. Umemura H, Watanabe H. Interpretation of optic flows synchronized with observer's hand movements. Vision Res 2009;49:834-42. DOI: 10.1016/j.visres.2009.02.020.
8. Stoffregen TA, Yang CM, Giveans MR, Flanagan M, Bardy BG. Movement in the perception of an affordance for wheelchair locomotion. Ecological Psychology 2009. DOI: 10.1080/10407410802626001.
9. Wexler M, van Boxtel JJA. Depth perception by the active observer. Trends Cogn Sci 2005;9:431-8. PMID: 16099197. DOI: 10.1016/j.tics.2005.06.018.
Abstract
The connection between perception and action has classically been studied in one direction only: the effect of perception on subsequent action. Although our actions can modify our perceptions externally, by modifying the world or our view of it, it has recently become clear that even without this external feedback the preparation and execution of a variety of motor actions can have an effect on three-dimensional perceptual processes. Here, we review the ways in which an observer's motor actions (locomotion, head and eye movements, and object manipulation) affect his or her perception and representation of three-dimensional objects and space. Allowing observers to act can drastically change the way they perceive the third dimension, as well as how scientists view depth perception.
Affiliation(s)
- Mark Wexler
- CNRS, 11 Pl. Marcelin Berthelot, 75005 Paris, France.
10. Admiraal MA, Keijsers NLW, Gielen CCAM. Gaze affects pointing toward remembered visual targets after a self-initiated step. J Neurophysiol 2004;92:2380-93. PMID: 15190097. DOI: 10.1152/jn.01046.2003.
Abstract
We have investigated pointing movements toward remembered targets after an intervening self-generated body movement. We tested to what extent visual information about the environment or finger position is used in updating target position relative to the body after a step and whether gaze plays a role in the accuracy of the pointing movement. Subjects were tested in three visual conditions: complete darkness (DARK), complete darkness with visual feedback of the finger (FINGER), and with vision of a well-defined environment and with feedback of the finger (FRAME). Pointing accuracy was rather poor in the FINGER and DARK conditions, which did not provide vision of the environment. Constant pointing errors were mainly in the direction of the step and ranged from about 10 to 20 cm. Differences between binocular fixation and target position were often related to the step size and direction. At the beginning of the trial, when the target was visible, fixation was on target. After target extinction, fixation moved away from the target relative to the subject. The variability in the pointing positions appeared to be related to the variable errors in fixation, and the co-variance increased during the delay period after the step, reaching a highly significant value at the time of pointing. The significant co-variance between fixation position and pointing was not the result of a mutual dependence on the step, since we corrected for any direct contributions of the step in both signals. We conclude that the co-variance between fixation and pointing position reflects 1) a common command signal for gaze and arm movements and 2) an effect of fixation on pointing accuracy at the time of pointing.
Affiliation(s)
- M A Admiraal
- Dept. Biophysics, Univ. of Nijmegen, PO Box 9101, 6500 HB Nijmegen, The Netherlands.
11.
Abstract
The use of driving simulation for vehicle design and driver perception studies is expanding rapidly. This is largely because simulation saves engineering time and costs, and can be used for studies of road and traffic safety. How applicable driving simulation is to the real world remains unclear, however, because analyses of perceptual criteria carried out in driving simulation experiments are controversial. On the one hand, recent data suggest that, in driving simulators with a large field of view, longitudinal speed can be estimated correctly from visual information. On the other hand, recent psychophysical studies have revealed an unexpectedly important contribution of vestibular cues in distance perception and steering, prompting a re-evaluation of the role of visuo-vestibular interaction in driving simulation studies.
Affiliation(s)
- Andras Kemeny
- Laboratoire de Physiologie de la Perception et de l'Action, CNRS-Collège de France, 11, Place M. Berthelot, 75005, Paris, France
12. Peh CH, Panerai F, Droulez J, Cornilleau-Pérès V, Cheong LF. Absolute distance perception during in-depth head movement: calibrating optic flow with extra-retinal information. Vision Res 2002;42:1991-2003. PMID: 12160571. DOI: 10.1016/s0042-6989(02)00120-7.
Abstract
We investigated the ability of monocular human observers to scale absolute distance during sagittal head motion in the presence of pure optic flow information. Subjects were presented, at eye level, with computer-generated spheres (covered with randomly distributed dots) placed at several distances. We compared the condition of self-motion (SM) versus object-motion (OM) using equivalent optic flow fields. When the amplitude of head movement was relatively constant, subjects estimated absolute distance rather accurately in both the SM and OM conditions. However, when the amplitude changed on a trial-to-trial basis, subjects' performance deteriorated only in the OM condition. We found that distance judgment in the OM condition correlated strongly with optic flow divergence, and that non-visual cues served as important factors for scaling distances in the SM condition. Absolute distance also seemed to be better scaled with sagittal head movement than with lateral head translation.
Affiliation(s)
- Chin-Hwee Peh
- Department of Electrical and Computer Engineering, National University of Singapore, 10 Kent Ridge Crescent, Singapore 119260, Singapore.