1. Jörges B, Harris LR. The impact of visually simulated self-motion on predicting object motion. PLoS One 2024; 19:e0295110. PMID: 38483949; PMCID: PMC10939277; DOI: 10.1371/journal.pone.0295110.
Abstract
To interact successfully with moving objects in our environment, we need to be able to predict their behavior. Predicting the position of a moving object requires an estimate of its velocity. When flow parsing during self-motion is incomplete, that is, when some of the retinal motion created by self-motion is incorrectly attributed to object motion, object velocity estimates become biased. Further, the process of flow parsing should add noise and lead to object velocity judgements being more variable during self-motion. Biases and lowered precision in velocity estimation should then translate into biases and lowered precision in motion extrapolation. We investigated this relationship between self-motion, velocity estimation and motion extrapolation with two tasks performed in a realistic virtual reality (VR) environment. First, participants were shown a ball moving laterally that disappeared after a certain time; they then indicated by button press when they thought the ball would have hit a target rectangle positioned in the environment. While the ball was visible, participants sometimes experienced simultaneous visual lateral self-motion in either the same direction as the ball or in the opposite direction. The second task was a two-interval forced-choice task in which participants judged which of two motions was faster: in one interval they saw the same ball they had observed in the first task, while in the other they saw a ball cloud whose speed was controlled by a PEST staircase. While observing the single ball, they were again moved visually either in the same or the opposite direction as the ball, or they remained static. We found the expected biases in estimated time-to-contact, whereas for the speed estimation task biases appeared only when the ball and observer moved in opposite directions. Our hypotheses regarding precision were largely unsupported by the data. Overall, we draw several conclusions from this experiment: first, incomplete flow parsing can affect motion prediction. Further, the results suggest that time-to-contact estimation and speed judgements are determined by partially different mechanisms. Finally, and perhaps most strikingly, there appear to be compensatory mechanisms at play that allow for much higher-than-expected precision when observers experience self-motion, even when that self-motion is simulated only visually.
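A minimal way to formalize the predicted link between incomplete flow parsing and biased time-to-contact, assuming that observers subtract only a fraction g of the self-motion-induced retinal component and then extrapolate at the perceived speed (g, v and d are illustrative symbols, not notation from the paper):

```latex
% Perceived object velocity when only a fraction g (the flow-parsing gain) of the
% self-motion-induced retinal component v_self is subtracted from the retinal motion:
\hat{v}_{\mathrm{obj}} = v_{\mathrm{obj}} + (1 - g)\, v_{\mathrm{self}}
% Extrapolating over the remaining distance d at this perceived speed gives a biased
% time-to-contact estimate:
\widehat{\mathrm{TTC}} = \frac{d}{\hat{v}_{\mathrm{obj}}}
                       = \frac{d}{v_{\mathrm{obj}} + (1 - g)\, v_{\mathrm{self}}}
```

When the observer moves opposite to the ball, the self-motion-induced component at the ball's location has the same sign as the ball's motion, so with g < 1 the perceived speed is inflated and TTC is underestimated; with same-direction self-motion the signs oppose and TTC is overestimated.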
Affiliation(s)
- Björn Jörges
- Center for Vision Research, York University, Toronto, Ontario, Canada
2. Jörges B, Harris LR. The impact of visually simulated self-motion on predicting object motion - A registered report protocol. PLoS One 2023; 18:e0267983. PMID: 36716328; PMCID: PMC9886253; DOI: 10.1371/journal.pone.0267983.
Abstract
To interact successfully with moving objects in our environment, we need to be able to predict their behavior. Predicting the position of a moving object requires an estimate of its velocity. When flow parsing during self-motion is incomplete, that is, when some of the retinal motion created by self-motion is incorrectly attributed to object motion, object velocity estimates become biased. Further, the process of flow parsing should add noise and lead to object velocity judgements being more variable during self-motion. Biases and lowered precision in velocity estimation should then translate into biases and lowered precision in motion extrapolation. We investigate this relationship between self-motion, velocity estimation and motion extrapolation with two tasks performed in a realistic virtual reality (VR) environment. First, participants are shown a ball moving laterally that disappears after a certain time; they then indicate by button press when they think the ball would have hit a target rectangle positioned in the environment. While the ball is visible, participants sometimes experience simultaneous visual lateral self-motion in either the same direction as the ball or in the opposite direction. The second task is a two-interval forced-choice task in which participants judge which of two motions is faster: in one interval they see the same ball they observed in the first task, while in the other they see a ball cloud whose speed is controlled by a PEST staircase. While observing the single ball, they are again moved visually either in the same or the opposite direction as the ball, or they remain static. We expect participants to overestimate the speed of a ball that moves opposite to their simulated self-motion (speed estimation task), which should then lead them to underestimate the time it takes the ball to reach the target rectangle (prediction task). Seeing the ball during visually simulated self-motion should increase variability in both tasks. We expect performance in the two tasks to be correlated, both in accuracy and in precision.
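The comparison speed in the two-interval task is driven by a PEST staircase. The sketch below illustrates the general logic of such an adaptive procedure, namely converging the comparison stimulus onto the point of subjective equality (PSE); it uses a deliberately simplified rule (halve the step after every reversal) rather than the full PEST rules of Taylor and Creelman, and all names and parameter values are invented for illustration:

```python
import math
import random

def simulate_staircase(true_pse, slope=0.5, start=2.0, step=0.5,
                       min_step=0.02, n_trials=60, seed=0):
    """Simulate an observer and a simplified adaptive staircase on comparison speed."""
    random.seed(seed)
    comparison = start          # current speed of the comparison stimulus (e.g. m/s)
    last_direction = 0          # +1 = last adjustment was upward, -1 = downward
    for _ in range(n_trials):
        # Probability that the comparison is judged faster than the standard,
        # modelled with a logistic psychometric function centred on the true PSE.
        p_faster = 1.0 / (1.0 + math.exp(-(comparison - true_pse) / slope))
        judged_faster = random.random() < p_faster
        # 1-up/1-down rule: lower the comparison when it is judged faster,
        # raise it otherwise, so the staircase converges toward the PSE.
        direction = -1 if judged_faster else +1
        if last_direction != 0 and direction != last_direction:
            step = max(step / 2.0, min_step)    # halve the step size at each reversal
        comparison += direction * step
        last_direction = direction
    return comparison           # final level approximates the PSE

print(simulate_staircase(true_pse=1.3))
```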
Affiliation(s)
- Björn Jörges
- Center for Vision Research, York University, Toronto, Canada
3. Jörges B, Harris LR. Object speed perception during lateral visual self-motion. Atten Percept Psychophys 2022; 84:25-46. PMID: 34704212; PMCID: PMC8547725; DOI: 10.3758/s13414-021-02372-4.
Abstract
Judging object speed during observer self-motion requires disambiguating retinal stimulation from two sources: self-motion and object motion. According to the flow parsing hypothesis, observers estimate their own motion, subtract the corresponding retinal motion from the total retinal stimulation, and interpret the remaining stimulation as pertaining to object motion. Subtracting noisy self-motion information from the retinal input should lead to a decrease in precision. Furthermore, when self-motion is only simulated visually, it is likely to be underestimated, yielding an overestimation of target speed when target and observer move in opposite directions and an underestimation when they move in the same direction. We tested this hypothesis with a two-alternative forced-choice task in which participants judged which of two motions, presented in an immersive 3D environment, was faster. One motion interval contained a ball cloud whose speed was selected dynamically according to a PEST staircase, while the other contained one big target travelling laterally at a fixed speed. While viewing the big target, participants were either static or experienced visually simulated lateral self-motion in the same or the opposite direction to the target. Participants were not significantly biased under either motion profile, and precision was significantly lower only when participants moved visually in the direction opposite to the target. We conclude that, when immersed in an ecologically valid 3D environment with rich self-motion cues, participants perceive an object's speed accurately at a small precision cost, even when self-motion is simulated only visually.
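A toy numerical version of the subtraction logic described above, showing how an underestimated (here, fractionally subtracted) self-motion component would bias perceived object speed in opposite directions for with- and against-observer motion; all names and values are illustrative, not taken from the study:

```python
# The observer translates laterally, so a stationary scene point at the target's depth
# moves on the retina opposite to the self-motion. If only a fraction `gain` of that
# self-motion component is subtracted (incomplete flow parsing / underestimated
# self-motion), the recovered object speed is biased in a direction that depends on
# whether target and observer move in the same or in opposite directions.

def perceived_object_speed(object_speed, observer_speed, gain):
    """Speeds in deg/s of equivalent retinal motion; positive = rightward."""
    self_component = -observer_speed          # retinal motion induced by self-motion
    retinal_motion = object_speed + self_component
    return retinal_motion - gain * self_component   # subtract the estimated component

gain = 0.8                                    # incomplete flow parsing
same_dir = perceived_object_speed(object_speed=5.0, observer_speed=5.0, gain=gain)
opposite = perceived_object_speed(object_speed=5.0, observer_speed=-5.0, gain=gain)
print(same_dir)   # 4.0 -> underestimated when moving with the target
print(opposite)   # 6.0 -> overestimated when moving against the target
```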
Affiliation(s)
- Björn Jörges
- Center for Vision Research, York University, 4700 Keele Street, Toronto, ON M3J 1P3 Canada
- Laurence R. Harris
- Center for Vision Research, York University, 4700 Keele Street, Toronto, ON M3J 1P3 Canada
4. Niehorster DC, Li L. Accuracy and tuning of flow parsing for visual perception of object motion during self-motion. Iperception 2017; 8:2041669517708206. PMID: 28567272; PMCID: PMC5439648; DOI: 10.1177/2041669517708206.
Abstract
How do we perceive object motion during self-motion using visual information alone? Previous studies have reported that the visual system can use optic flow to identify and globally subtract the retinal motion component resulting from self-motion to recover scene-relative object motion, a process called flow parsing. In this article, we developed a retinal motion nulling method to directly measure and quantify the magnitude of flow parsing (i.e., the flow parsing gain) in various scenarios, in order to examine the accuracy and tuning of flow parsing for the visual perception of object motion during self-motion. We found that flow parsing gains were below unity for all displays in all experiments, and that increasing self-motion or object motion speed did not alter the flow parsing gain. We conclude that visual information alone is not sufficient for accurate perception of scene-relative motion during self-motion. Although flow parsing performs global subtraction, its accuracy also depends on local motion information in the retinal vicinity of the moving object. Furthermore, the flow parsing gain was constant across common self-motion and object motion speeds. These results can be used to inform and validate computational models of flow parsing.
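A rough way to read the gain measure in a nulling paradigm of this kind, using my own notation rather than the article's: if self-motion produces a flow component v_flow at the probe's location, and the probe is perceived as stationary in the scene when its retinal velocity equals v_null, then the flow parsing gain is

```latex
g = \frac{v_{\mathrm{null}}}{v_{\mathrm{flow}}}, \qquad 0 \le g \le 1 .
```

A gain of 1 would mean the self-motion component is subtracted completely (a world-stationary object must move exactly with the flow to appear stationary), whereas the reported g < 1 implies that a truly stationary object would be perceived as drifting slightly against the direction of simulated self-motion.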
Affiliation(s)
- Li Li
- Department of Psychology, The University of Hong Kong, Pokfulam, Hong Kong; Neural Science Program, NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, China
5. Levulis SJ, DeLucia PR, Oberfeld D. Effects of adjacent vehicles on judgments of a lead car during car following. Hum Factors 2016; 58:1096-1111. PMID: 27280300; DOI: 10.1177/0018720816652270.
Abstract
OBJECTIVE: Two experiments were conducted to determine whether detection of the onset of a lead car's deceleration and judgments of its time to contact (TTC) were affected by the presence of vehicles in lanes adjacent to the lead car.
BACKGROUND: In a previous study, TTC judgments of an approaching object by a stationary observer were influenced by an adjacent task-irrelevant approaching object. The implication is that vehicles in lanes adjacent to a lead car could influence a driver's ability to detect the lead car's deceleration and to make judgments of its TTC.
METHOD: Displays simulated car-following scenes in which two vehicles in adjacent lanes were either present or absent. Participants were instructed to respond as soon as the lead car decelerated (Experiment 1) or when they thought their car would hit the decelerating lead car (Experiment 2).
RESULTS: The presence of adjacent vehicles did not affect response time to detect deceleration of a lead car but did affect the signal detection theory measure of sensitivity d' and the number of missed deceleration events. Judgments of the lead car's TTC were shorter when adjacent vehicles were present and decelerated early than when adjacent vehicles were absent.
CONCLUSION: The presence of vehicles in nearby lanes can affect a driver's ability to detect a lead car's deceleration and to make subsequent judgments of its TTC.
APPLICATION: Results suggest that nearby traffic can affect a driver's ability to accurately judge a lead car's motion in situations that pose risk for rear-end collisions.
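The sensitivity measure d' cited in the Results is the standard signal detection theory index, the difference between the z-transformed hit and false-alarm rates. A generic computation is sketched below with a common correction for extreme rates; the trial counts are invented for illustration and are not data from the study:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a loglinear-style correction."""
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    # Adding 0.5 to each count (and 1 to each total) avoids infinite z-scores
    # when a rate would otherwise be exactly 0 or 1.
    hit_rate = (hits + 0.5) / (n_signal + 1)
    fa_rate = (false_alarms + 0.5) / (n_noise + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

print(d_prime(hits=45, misses=5, false_alarms=8, correct_rejections=42))
```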
Affiliation(s)
- Samuel J Levulis
- Texas Tech University, Lubbock
- Johannes Gutenberg University of Mainz, Germany
6. Royden CS, Parsons D, Travatello J. The effect of monocular depth cues on the detection of moving objects by moving observers. Vision Res 2016; 124:7-14. PMID: 27264029; DOI: 10.1016/j.visres.2016.05.002.
Abstract
An observer moving through the world must be able to identify and locate moving objects in the scene. In principle, one could accomplish this task by detecting object images moving at a different angle or speed than the images of other items in the optic flow field. While angle of motion provides an unambiguous cue that an object is moving relative to other items in the scene, a difference in speed could be due to a difference in the depth of the objects and thus is an ambiguous cue. We tested whether the addition of information about the distance of objects from the observer, in the form of monocular depth cues, aided detection of moving objects. We found that thresholds for detection of object motion decreased as we increased the number of depth cues available to the observer.
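The speed ambiguity noted above can be made concrete with the standard small-angle approximation for a laterally translating observer (my notation, not the paper's): a stationary point at depth Z near the line of sight moves on the retina at roughly

```latex
\dot\gamma \;\approx\; \frac{T}{Z},
```

where T is the observer's translation speed. An image that moves slower than its neighbours is therefore equally consistent with a stationary point that is simply farther away (larger Z) or with an object moving in the world, whereas a deviation in motion direction from the flow field cannot be produced by a depth difference; this is why added depth cues should specifically help the speed-based judgement.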
Affiliation(s)
- Constance S Royden
- Department of Mathematics and Computer Science, College of the Holy Cross, United States.
- Daniel Parsons
- Department of Mathematics and Computer Science, College of the Holy Cross, United States
- Joshua Travatello
- Department of Mathematics and Computer Science, College of the Holy Cross, United States
7. Dokka K, MacNeilage PR, DeAngelis GC, Angelaki DE. Multisensory self-motion compensation during object trajectory judgments. Cereb Cortex 2015; 25:619-630. PMID: 24062317; DOI: 10.1093/cercor/bht247.
Abstract
Judging object trajectory during self-motion is a fundamental ability for mobile organisms interacting with their environment. This ability requires the nervous system to compensate for the visual consequences of self-motion in order to make accurate judgments, but the mechanisms of this compensation are poorly understood. We comprehensively examined both the accuracy and the precision of observers' ability to judge object trajectory in the world when self-motion was defined by vestibular, visual, or combined visual-vestibular cues. Without decision feedback, subjects demonstrated no compensation for self-motion that was defined solely by vestibular cues, partial compensation (47%) for visually defined self-motion, and significantly greater compensation (58%) during combined visual-vestibular self-motion. With decision feedback, subjects learned to accurately judge object trajectory in the world, and this learning generalized to novel self-motion speeds. Across conditions, greater compensation for self-motion was associated with decreased precision of object trajectory judgments, indicating that self-motion compensation comes at the cost of reduced discriminability. Our findings suggest that the brain can flexibly represent object trajectory relative to either the observer or the world, but that a world-centered representation comes at the cost of decreased precision due to the inclusion of noisy self-motion signals.
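The compensation percentages above can be read with a generic formulation (my notation; not necessarily the exact estimator used in the paper): if ignoring self-motion entirely would bias the judged trajectory by b_max and the observed bias is b_obs, then

```latex
\text{compensation} \;=\; \left(1 - \frac{b_{\mathrm{obs}}}{b_{\max}}\right)\times 100\%,
```

so 0% corresponds to judging the trajectory purely in retinal coordinates and 100% to fully world-centered judgments.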
Affiliation(s)
- Kalpana Dokka
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA
- Paul R MacNeilage
- German Center for Vertigo and Balance Disorders, University Hospital of Munich, Munich, Germany
- Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA
- Dora E Angelaki
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA
8. MacNeilage PR, Zhang Z, DeAngelis GC, Angelaki DE. Vestibular facilitation of optic flow parsing. PLoS One 2012; 7:e40264. PMID: 22768345; PMCID: PMC3388053; DOI: 10.1371/journal.pone.0040264.
Abstract
Simultaneous object motion and self-motion give rise to complex patterns of retinal image motion. In order to estimate object motion accurately, the brain must parse this complex retinal motion into self-motion and object motion components. Although this computational problem can be solved, in principle, through purely visual mechanisms, extra-retinal information that arises from the vestibular system during self-motion may also play an important role. Here we investigate whether combining vestibular and visual self-motion information improves the precision of object motion estimates. Subjects were asked to discriminate the direction of object motion in the presence of simultaneous self-motion, depicted either by visual cues alone (i.e. optic flow) or by combined visual/vestibular stimuli. We report a small but significant improvement in object motion discrimination thresholds with the addition of vestibular cues. This improvement was greatest for eccentric heading directions and negligible for forward movement, a finding that could reflect increased relative reliability of vestibular versus visual cues for eccentric heading directions. Overall, these results are consistent with the hypothesis that vestibular inputs can help parse retinal image motion into self-motion and object motion components.
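The reliability argument in the closing sentences follows the standard model of statistically optimal cue combination; as a sketch in my notation (the paper does not necessarily commit to this exact model), combining independent visual and vestibular self-motion estimates with variances σ²_vis and σ²_vest yields

```latex
\sigma^{2}_{\mathrm{vis+vest}}
  \;=\; \frac{\sigma^{2}_{\mathrm{vis}}\,\sigma^{2}_{\mathrm{vest}}}
             {\sigma^{2}_{\mathrm{vis}} + \sigma^{2}_{\mathrm{vest}}}
  \;\le\; \min\!\bigl(\sigma^{2}_{\mathrm{vis}},\, \sigma^{2}_{\mathrm{vest}}\bigr),
```

so adding vestibular information can only reduce the variance of the self-motion estimate, and the benefit is largest where the visual estimate is relatively unreliable, consistent with the greater improvement reported for eccentric headings.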
Affiliation(s)
- Paul R MacNeilage
- Vertigo, Balance, and Oculomotor Research Center, University Hospital of Munich, Munich, Germany.
9. Use of speed cues in the detection of moving objects by moving observers. Vision Res 2012; 59:17-24. DOI: 10.1016/j.visres.2012.02.006.
Abstract
When an observer moves through an environment containing stationary and moving objects, he or she must be able to determine which objects are moving relative to the others in order to navigate successfully and avoid collisions. We investigated whether image speed can be used as a cue to detect a moving object in the scene. Our results show that image speed can be used to detect moving objects as long as the object is moving sufficiently faster or slower than it would if it were part of the stationary scene.
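A toy version of the detection rule implied above, assuming the observer can estimate their own speed and the point's depth; the threshold and all values are illustrative, not fitted to the study:

```python
import math

def is_moving(image_speed_deg, depth_m, observer_speed_mps, tolerance=0.25):
    """Flag an object as moving if its image speed departs from the speed expected
    for a stationary point at that depth (small-angle approximation, lateral motion)."""
    expected_rad = observer_speed_mps / depth_m        # expected speed if stationary
    expected_deg = math.degrees(expected_rad)
    # Relative deviation from the stationary-scene prediction.
    return abs(image_speed_deg - expected_deg) / expected_deg > tolerance

# A point 4 m away while translating at 1.4 m/s should move at ~20 deg/s if stationary:
print(is_moving(image_speed_deg=20.0, depth_m=4.0, observer_speed_mps=1.4))  # False
print(is_moving(image_speed_deg=30.0, depth_m=4.0, observer_speed_mps=1.4))  # True
```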
10. Beardsley SA, Sikoglu EM, Hecht H, Vaina LM. Global flow impacts time-to-passage judgments based on local motion cues. Vision Res 2011; 51:1880-7. PMID: 21763711; DOI: 10.1016/j.visres.2011.07.003.
Abstract
We assessed the effect of the coherence of optic flow on time-to-passage judgments in order to investigate the strategies that observers use when local expansion information is reduced or lacking. In the standard display, we presented a cloud of dots whose image expanded consistent with constant observer motion. The dots themselves, however, did not expand and were thus devoid of object expansion cues; only the separations between the dots expanded. Subjects had to judge which of two colored target dots, presented at different simulated depths and lateral displacements, would pass them first. Image velocities of the target dots were chosen so as to correlate with time-to-passage only some of the time. When optic flow was mainly incoherent, subjects' responses were biased and relied on image velocities rather than on global flow analysis. However, the bias induced by misleading image velocity cues diminished as a function of the coherence of the optic flow. We discuss the results in the context of a global tau mechanism and help settle the debate over whether local expansion cues or global optic flow analysis form the basis of time-to-passage estimation.
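For reference, the local expansion ("tau") strategy that global flow analysis is contrasted with is usually written as follows (standard formulation, not specific to this paper):

```latex
\mathrm{TTC} \;\approx\; \tau \;=\; \frac{\theta}{\dot\theta},
```

where θ is a target's angular size and θ̇ its rate of expansion. Because the dots in these displays did not themselves expand, observers had to rely either on the expansion of inter-dot separations (a global flow analysis) or on raw image velocity, which correlated with time-to-passage only some of the time.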
Affiliation(s)
- Scott A Beardsley
- Department of Biomedical Engineering, Marquette University, P.O. Box 1881, Milwaukee, WI 53201, USA.
11. Royden CS, Connors EM. The detection of moving objects by moving observers. Vision Res 2010; 50:1014-24. DOI: 10.1016/j.visres.2010.03.008.
12. Khuu SK, Lee TC, Hayes A. Object speed derived from the integration of motion in the image plane and motion-in-depth signaled by stereomotion and looming. Vision Res 2010; 50:904-13. DOI: 10.1016/j.visres.2010.02.005.
13. Zago M, McIntyre J, Senot P, Lacquaniti F. Visuo-motor coordination and internal models for object interception. Exp Brain Res 2009; 192:571-604. DOI: 10.1007/s00221-008-1691-3.
14.
Abstract
The authors examined age-related differences in the detection of collision events. Older and younger observers were presented with displays simulating approaching objects that would either collide with or pass by the observer. In four experiments, the authors found that older observers, as compared with younger observers, were less sensitive at detecting collisions as speed increased, at shorter display durations, and at longer times to contact. Older observers also had greater difficulty when the scenario simulated observer motion, suggesting that older observers have difficulty discriminating the expansion of a moving object from the background expansion produced by observer motion. The results of these studies support the expansion sensitivity hypothesis: that age-related decrements in detecting collision events involving moving objects result from a decreased sensitivity to expansion information.
Affiliation(s)
- George J Andersen
- Department of Psychology, University of California-Riverside, Riverside, CA 95251, USA.
15. Gray R, Regan DM. Unconfounding the direction of motion in depth, time to passage and rotation rate of an approaching object. Vision Res 2006; 46:2388-402. PMID: 16542703; DOI: 10.1016/j.visres.2006.02.005.
Abstract
Observers were presented with a set of 216 simulated approaching textured baseballs in random order. In Experiment 1, each had a different combination of time to passage (TTP), direction of motion in depth (dMID) in the vertical plane, and total change in angular size (Δθ). In Experiments 2 and 3, each had a different combination of TTP, dMID and rate of ball rotation (RR). When required to discriminate TTP and dMID in separate experimental blocks for a non-rotating baseball (Experiment 1), observers could not discriminate TTP independently of variations in dMID, but instead showed a bias towards perceiving objects approaching on a trajectory close to the nose as having a shorter TTP than objects approaching on a trajectory that would miss the face. When required to discriminate TTP, dMID and RR in separate experimental blocks (Experiment 2), TTP judgments were again influenced by dMID but could be made independently of RR. Judgments of relative dMID were affected by variations in RR and rotation direction: for simulated overspin (i.e., the top of the ball spins towards the observer) the perceived ball trajectory was biased towards the ground, whereas for simulated underspin the perceived ball trajectory was biased towards the sky. RR could be discriminated independently of both TTP and dMID. When required to make all three of these judgments simultaneously on each trial (Experiment 3), discrimination thresholds were not appreciably different from those found in Experiment 2. We conclude that TTP, dMID and RR can be estimated in parallel, but not completely independently, within the human visual system.
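For context, the monocular information that in principle allows dMID and TTP to be judged separately is often formalized as follows (a standard approximation in my notation; the paper tests discrimination rather than committing to this estimator): for an approaching object whose image has angular size θ and whose image centre drifts at rate γ̇, the trajectory angle β relative to the line of sight and the time to passage satisfy approximately

```latex
\tan\beta \;\approx\; \frac{\dot\gamma}{\dot\theta/\theta},
\qquad
\mathrm{TTP} \;\approx\; \frac{\theta}{\dot\theta},
```

so dMID depends on the ratio of image drift to the relative rate of expansion, while TTP depends on the rate of expansion alone; the biases reported above indicate that human observers do not exploit these quantities fully independently.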
Affiliation(s)
- Rob Gray
- Department of Applied Psychology, Arizona State University, USA.