1
Hülemeier AG, Lappe M. Illusory percepts of curvilinear self-motion when moving through crowds. J Vis 2023; 23:6. PMID: 38112491; PMCID: PMC10732088; DOI: 10.1167/jov.23.14.6.
Abstract
Self-motion generates optic flow, a pattern of expanding visual motion. Heading estimation from optic flow is accurate in rigid environments, but it becomes challenging when other human walkers introduce independent motion into the scene. Previous studies showed that heading perception is surprisingly accurate when moving through a crowd of walkers but revealed strong heading biases when either the articulation or the translation of biological motion was presented in isolation. We hypothesized that these biases result from misperceiving the self-motion as curvilinear. Such errors might manifest as opposite biases depending on whether observers attribute the crowd motion to their own translation or to their own rotation. Our study investigated the link between heading biases and illusory path perception. Participants reported heading and path perception while observing optic flow stimuli with varying walker movements. Self-motion perception was accurate during natural locomotion (articulation plus translation), but significant heading biases occurred when walkers only articulated or only translated. In those cases, participants often reported a curved path of travel, with heading error and path curvature pointing in opposite directions. On average, participants interpreted the walker motion as evidence of viewpoint rotation, leading to curvilinear path percepts.
Affiliation(s)
- Markus Lappe
- Department of Psychology, University of Münster, Münster, Germany
2
Causal inference accounts for heading perception in the presence of object motion. Proc Natl Acad Sci U S A 2019; 116:9060-9065. PMID: 30996126; DOI: 10.1073/pnas.1820373116.
Abstract
The brain infers our spatial orientation and properties of the world from ambiguous and noisy sensory cues. Judging self-motion (heading) in the presence of independently moving objects poses a challenging inference problem because the image motion of an object could be attributed to movement of the object, self-motion, or some combination of the two. We test whether perception of heading and object motion follows predictions of a normative causal inference framework. In a dual-report task, subjects indicated whether an object appeared stationary or moving in the virtual world, while simultaneously judging their heading. Consistent with causal inference predictions, the proportion of object stationarity reports, as well as the accuracy and precision of heading judgments, depended on the speed of object motion. Critically, biases in perceived heading declined when the object was perceived to be moving in the world. Our findings suggest that the brain interprets object motion and self-motion using a causal inference framework.
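The causal-inference computation described in this abstract can be illustrated with a toy one-dimensional sketch. This is a hypothetical illustration with invented names and Gaussian-noise assumptions, not the authors' fitted model: the observer compares the likelihood that the object's retinal motion arises entirely from self-motion (object stationary in the world) against the likelihood under an independently moving object, marginalizing over unknown object speeds.

```python
import math

def p_stationary(retinal_speed, heading_speed, sigma=1.0,
                 prior_stationary=0.7, v_range=10.0):
    """Posterior probability that an object is stationary in the world."""
    norm = sigma * math.sqrt(2.0 * math.pi)
    # Hypothesis 1: object stationary, so its retinal motion should equal
    # the component induced by self-motion (heading_speed here).
    like_stationary = math.exp(-0.5 * ((retinal_speed - heading_speed) / sigma) ** 2) / norm
    # Hypothesis 2: object moves independently; its world speed is unknown,
    # so marginalize over a broad uniform range of possible object speeds.
    n = 200
    speeds = (-v_range + 2.0 * v_range * i / (n - 1) for i in range(n))
    like_moving = sum(
        math.exp(-0.5 * ((retinal_speed - heading_speed - v) / sigma) ** 2) / norm
        for v in speeds
    ) / n  # Riemann sum times the uniform prior density 1/(2*v_range)
    # Combine with the prior over the two causal structures.
    num = prior_stationary * like_stationary
    return num / (num + (1.0 - prior_stationary) * like_moving)
```

As in the reported data, slow object motion (retinal speed close to the self-motion prediction) yields a high probability of stationarity, while fast object motion favors the moving-object interpretation, which in the full framework would release the object's motion from the heading estimate.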
3
Kuang S, Shi J, Wang Y, Zhang T. Where are you heading? Flexible integration of retinal and extra-retinal cues during self-motion perception. Psych J 2017; 6:141-152. PMID: 28514063; DOI: 10.1002/pchj.165.
Abstract
As we move forward in the environment, we experience a radial expansion of the retinal image whose center corresponds to the instantaneous direction of self-motion. Humans can precisely perceive their heading even when the retinal motion is distorted by gaze shifts due to eye or body rotations. Previous studies have suggested that both retinal and extra-retinal strategies can compensate for this retinal image distortion, but the relative contribution of each strategy remains unclear. To address this issue, we devised a two-alternative heading-discrimination task in which participants made either real or simulated pursuit eye movements. The two conditions had the same retinal input but differed in the presence of extra-retinal eye-movement signals, so the behavioral difference between conditions served as a metric of the extra-retinal contribution. We systematically and independently manipulated pursuit speed, heading speed, and the reliability of the retinal signals. We found that the extra-retinal contribution increased with increasing pursuit speed (a stronger extra-retinal signal) and with decreasing heading speed (a weaker retinal signal). The extra-retinal contribution also increased when we corrupted the retinal signals with noise. Our results reveal that the relative magnitudes of the retinal and extra-retinal contributions are not fixed but are flexibly adjusted to each specific task condition. This task-dependent, flexible integration appears to take the form of a reliability-based weighting scheme that maximizes heading performance.
Affiliation(s)
- Shenbing Kuang, Jinfu Shi, Yang Wang, Tao Zhang
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
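The reliability-based weighting scheme inferred in this study is commonly formalized as inverse-variance cue combination. The sketch below is a generic statement of that scheme, not the authors' fitted model; function and parameter names are illustrative.

```python
def combine_cues(est_retinal, var_retinal, est_extra, var_extra):
    """Reliability-weighted combination of a retinal and an extra-retinal
    heading estimate: each cue is weighted by its inverse variance."""
    w_retinal = (1.0 / var_retinal) / (1.0 / var_retinal + 1.0 / var_extra)
    w_extra = 1.0 - w_retinal
    estimate = w_retinal * est_retinal + w_extra * est_extra
    # The combined estimate is more precise than either cue alone.
    var_combined = 1.0 / (1.0 / var_retinal + 1.0 / var_extra)
    return estimate, var_combined
```

Raising `var_retinal` (e.g., by corrupting the display with noise) shifts weight toward the extra-retinal estimate, qualitatively matching the increase in extra-retinal contribution reported above.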
4
Royden CS, Holloway MA. Detecting moving objects in an optic flow field using direction- and speed-tuned operators. Vision Res 2014; 98:14-25. PMID: 24607912; DOI: 10.1016/j.visres.2014.02.009.
Abstract
An observer moving through a scene must be able to identify moving objects. Psychophysical results have shown that people can identify moving objects based on the speed or direction of their movement relative to the optic flow field generated by the observer's motion. Here we show that a model that uses speed- and direction-tuned units, whose responses are based on the response properties of cells in the primate visual cortex, can successfully identify the borders of moving objects in a scene through which an observer is moving.
Affiliation(s)
- Constance S Royden
- Department of Mathematics and Computer Science, College of the Holy Cross, United States
- Michael A Holloway
- Department of Mathematics and Computer Science, College of the Holy Cross, United States
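The model in this entry is built from direction- and speed-tuned motion-opponent units modeled on primate cortical cells. The toy sketch below captures only the core intuition that an object border shows up as a sharp local difference in image speed or direction; it is a simplified stand-in, not the published operator model.

```python
import math

def detect_borders(flow_field, speed_thresh, angle_thresh):
    """Flag grid positions where the local motion vector differs sharply
    from its right-hand neighbor in speed or direction.

    flow_field: 2D list of (u, v) image velocities.
    angle_thresh: direction difference threshold in radians.
    """
    borders = set()
    for i, row in enumerate(flow_field):
        for j in range(len(row) - 1):
            u1, v1 = row[j]
            u2, v2 = row[j + 1]
            s1, s2 = math.hypot(u1, v1), math.hypot(u2, v2)
            # Speed-tuned comparison: large speed discontinuity -> border.
            if abs(s1 - s2) > speed_thresh:
                borders.add((i, j))
                continue
            # Direction-tuned comparison: large angular difference -> border.
            if s1 > 0.0 and s2 > 0.0:
                cos_ang = (u1 * u2 + v1 * v2) / (s1 * s2)
                if cos_ang < math.cos(angle_thresh):
                    borders.add((i, j))
    return borders
```

In a rigid scene the flow varies smoothly, so neighboring vectors rarely cross either threshold; a moving object's edge produces the abrupt speed or direction change that the operators respond to.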
5
Kishore S, Hornick N, Sato N, Page WK, Duffy CJ. Driving strategy alters neuronal responses to self-movement: cortical mechanisms of distracted driving. Cereb Cortex 2011; 22:201-8. PMID: 21653287; DOI: 10.1093/cercor/bhr115.
Abstract
We presented naturalistic combinations of virtual self-movement stimuli while recording neuronal activity in monkey cerebral cortex. Monkeys used a joystick to steer toward a straight-ahead heading, guided by either object motion or optic flow. The cue selected by the driving strategy dominated neuronal responses, often mimicking the responses evoked when that stimulus was presented alone. In some neurons, driving strategy created selective response additivities; in others, it created vulnerabilities to the disruptive effects of independently moving objects. Such cue interactions may be related to the disruptive effects of independently moving objects in Alzheimer's disease patients with navigational deficits.
Affiliation(s)
- Sarita Kishore
- Department of Neurology, University of Rochester Medical Center, Rochester, NY 14642, USA
6
Cortical neurons combine visual cues about self-movement. Exp Brain Res 2010; 206:283-97. PMID: 20852992; DOI: 10.1007/s00221-010-2406-0.
Abstract
Visual cues about self-movement are derived from the patterns of optic flow and the relative motion of discrete objects. We recorded dorsal medial superior temporal (MSTd) cortical neurons in monkeys that held centered visual fixation while viewing optic flow and object motion stimuli simulating the self-movement cues seen during translation on a circular path. Twenty stimulus configurations presented naturalistic combinations of optic flow with superimposed objects that simulated either earth-fixed landmarks or independently moving animate objects. Landmarks and animate objects yielded the same response interactions with optic flow: mainly additive effects, with a substantial number of sub- and super-additive responses. These sub- and super-additive interactions reflect each neuron's local and global motion sensitivities. Local motion sensitivity is based on the spatial arrangement of directions created by object motion and the surrounding optic flow; global motion sensitivity is based on the temporal sequence of self-movement headings that defines a simulated path through the environment. We conclude that the spatio-temporal response properties of MSTd neurons combine object motion and optic flow cues to represent self-movement in diverse, naturalistic circumstances.
7
Evidence for flow-parsing in radial flow displays. Vision Res 2008; 48:655-63. PMID: 18243274; DOI: 10.1016/j.visres.2007.10.023.
Abstract
Retinal motion of objects is not in itself enough to signal whether or how objects are moving in the world; the same pattern of retinal motion can result from movement of the object, the observer or both. Estimation of scene-relative movement of an object is vital for successful completion of many simple everyday tasks. Recent research has provided evidence for a neural flow-parsing mechanism which uses the brain's sensitivity to optic flow to separate retinal motion signals into those components due to observer movement and those due to the movement of objects in the scene. In this study we provide further evidence that flow-parsing is implicated in the assessment of object trajectory during observer movement. Furthermore, it is shown that flow-parsing involves a global analysis of retinal motion, as might be expected if optic flow processing underpinned this mechanism.
8
A model for simultaneous computation of heading and depth in the presence of rotations. Vision Res 2007; 47:3025-40. DOI: 10.1016/j.visres.2007.08.008.
9
Duijnhouwer J, Beintema JA, van den Berg AV, van Wezel RJA. An illusory transformation of optic flow fields without local motion interactions. Vision Res 2006; 46:439-43. PMID: 16009393; DOI: 10.1016/j.visres.2005.05.005.
Abstract
The focus of expansion (FOE) of a radially expanding optic flow pattern that is overlapped by unidirectional laminar flow is perceptually displaced in the direction of that laminar flow. There is continuing debate on whether this effect is due to local or global motion interactions. Here, we show psychophysically that under conditions without local motion transparency the illusion becomes weaker but can still be observed. In our experiments, the radial and laminar flow fields were not presented with overlap but separately, to the left and right halves of the visual field, with a blank vertical strip 15 degrees wide in between. The illusory shift observed in this condition cannot be explained by local motion interactions because (a) no transparent motion was present in the stimulus, and (b) the receptive fields of cortical cells involved in the analysis of local motion cross the vertical midline of the visual field only to a limited extent. We conclude that global motion detectors that integrate motion from both halves of the visual field play a role in shifting the perceived position of the FOE, and that local motion interactions may be sufficient, but are not necessary, for the optic flow illusion to occur.
Affiliation(s)
- Jacob Duijnhouwer
- Functional Neurobiology, Helmholtz Research Institute, Utrecht, The Netherlands
10
Poljac E, Neggers B, van den Berg AV. Collision judgment of objects approaching the head. Exp Brain Res 2005; 171:35-46. PMID: 16328256; DOI: 10.1007/s00221-005-0257-x.
Abstract
Recent investigations have indicated that human perception of the trajectory of objects approaching in the horizontal plane is precise but biased away from straight ahead. This is remarkable because it could mean that subjects perceive objects that approach on a collision course as missing the head. Approach within the horizontal plane through the eyes and the fixation point (the plane of regard) is special, as general motions will also have a component of motion perpendicular to the plane of regard. We therefore investigated three-dimensional motion perception in the vicinity of the head, including vertical components. Subjects judged whether an object moving in the mid-sagittal plane was going to hit below or above a well-known reference point on the face, such as the center of the chin or the forehead (perceptual task). Tactile and proprioceptive information about the reference point significantly improved precision. Precision did not change with the distance of the approaching target or with fixation direction, and bias was virtually absent for these vertical motions. When subjects pointed with their index finger to the perceived location of impact on their face (visuo-motor task), they overestimated the horizontal eccentricity of the point of impact by 1.7 cm, whereas vertical bias was again virtually absent. Interestingly, when trajectories intersected the plane of regard, higher precision was observed in the perceptual task regardless of the other conditions. In contrast, neither the bias nor the precision of the pointing task changed significantly when the trajectories intersected the plane of regard. When asked to point to the location where a trajectory intersected the plane of regard, subjects overestimated the depth component of this intersection location by about 3 cm. The absence of perceptual and pointing bias in the vertical direction, in contrast to the clear horizontal bias, suggests that different (combinations of) cues are used to judge these components of the trajectory of an approaching object. The results of our perceptual task suggest a role for somatosensory signals in the visual judgment of impending impact.
Affiliation(s)
- E Poljac
- Functional Neurobiology, Helmholtz Institute, Padualaan 8, 3584 Utrecht, The Netherlands
11
Hanada M. Computational analyses for illusory transformations in the optic flow field and heading perception in the presence of moving objects. Vision Res 2005; 45:749-58. PMID: 15639501; DOI: 10.1016/j.visres.2004.09.037.
Abstract
When we see a radial flow field (the target flow) overlapped with a lateral flow field or another radial flow field, the focus of expansion (FOE) of the target radial flow appears to be shifted. Royden and Conti [(2003). A model using MT-like motion-opponent operators explains an illusory transformation in the optic flow field. Vision Research, 43, 2811-2826] argued that local motion subtraction is crucial for explaining this phenomenon. Here, the flow field that causes the illusory displacement of the FOE was analyzed computationally. It was shown that this flow field is approximately a rigid-motion flow: it can be generated by simulating an observer moving toward a stationary scene. The heading direction of that simulated observer corresponds to the perceived position of the FOE of the radial flow pattern. This implies that any algorithm that assumes rigidity of the scene and recovers veridical heading explains the bias in the perceived FOE; local motion subtraction is not needed to explain the phenomenon. Furthermore, the flow for an observer's translation in the presence of objects moving laterally or in depth was analyzed computationally. Algorithms that minimize standard error functions while assigning lower weights to the independently moving objects show biases in recovered heading similar to those of human observers. This implies that local motion subtraction is also not necessary to explain the heading bias caused by an object moving laterally or in depth, contrary to the argument of Royden [(2002). Computing heading in the presence of moving objects: a model that uses motion-opponent operators. Vision Research, 42, 3043-3058].
Affiliation(s)
- Mitsuhiko Hanada
- Department of Media Architecture, Future University-Hakodate, 116-2 Kamedanakano-cho, Hakodate, Hokkaido 041-8655, Japan
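The rigid-motion flow fields analyzed in this entry follow the standard instantaneous optic-flow equations for a pinhole camera (Longuet-Higgins and Prazdny). The sketch below uses one common sign convention and a focal length of f = 1; these are assumptions of this illustration, not details taken from the paper. The translational component scales with inverse depth and vanishes at the heading point (the FOE), while the rotational component is depth-independent and, near the image center, resembles a superimposed laminar flow.

```python
def flow(x, y, Z, T, omega, f=1.0):
    """Image velocity (u, v) at image point (x, y) for a point at depth Z,
    given observer translation T = (Tx, Ty, Tz) and rotation
    omega = (wx, wy, wz), under one standard pinhole-camera convention."""
    Tx, Ty, Tz = T
    wx, wy, wz = omega
    # Translational part: scales with 1/Z, zero at the FOE (f*Tx/Tz, f*Ty/Tz).
    u_t = (-f * Tx + x * Tz) / Z
    v_t = (-f * Ty + y * Tz) / Z
    # Rotational part: independent of depth.
    u_r = (x * y / f) * wx - (f + x * x / f) * wy + y * wz
    v_r = (f + y * y / f) * wx - (x * y / f) * wy - x * wz
    return u_t + u_r, v_t + v_r
```

For pure translation along the optical axis the field is radial, with a singularity at the true heading; adding a small rotation about the vertical axis adds a nearly uniform horizontal component near the image center, displacing the singularity without changing the true heading. This is the configuration that makes an overlapped radial-plus-laminar display consistent with a rigid self-motion interpretation.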