1. Warren PA, Bell G, Li Y. Investigating distortions in perceptual stability during different self-movements using virtual reality. Perception 2022; 51:3010066221116480. [PMID: 35946126] [PMCID: PMC9478599] [DOI: 10.1177/03010066221116480]
Abstract
Using immersive virtual reality (the HTC Vive Head Mounted Display), we measured both bias and sensitivity when making judgements about the scene stability of a target object during both active (self-propelled) and passive (experimenter-propelled) observer movements. This was repeated in the same group of 16 participants for three different observer-target movement conditions in which the instability of a target was yoked to the movement of the observer. We found that in all movement conditions the target needed to move with (in the same direction as) the participant to be perceived as scene-stable. Consistent with the presence of additional available information (efference copy) about self-movement during active conditions, biases were smaller and sensitivities to instability were higher in active relative to passive conditions. However, the presence of efference copy was clearly not sufficient to completely eliminate the bias, and we suggest that the presence of additional visual information about self-movement is also critical. We found some (albeit limited) evidence for correlation between appropriate metrics across different movement conditions. These results extend previous findings, providing evidence for consistency of biases across different movement types, suggestive of common processing underpinning perceptual stability judgements.
Affiliation(s)
- Paul A. Warren: Virtual Reality Research (VR2) Facility, Division of Neuroscience and Experimental Psychology, University of Manchester, Manchester, UK
- Graham Bell: Virtual Reality Research (VR2) Facility, Division of Neuroscience and Experimental Psychology, University of Manchester, Manchester, UK
- Yu Li: Virtual Reality Research (VR2) Facility, Division of Neuroscience and Experimental Psychology, University of Manchester, Manchester, UK
2. Kim HR, Angelaki DE, DeAngelis GC. A neural mechanism for detecting object motion during self-motion. eLife 2022; 11:e74971. [PMID: 35642599] [PMCID: PMC9159750] [DOI: 10.7554/elife.74971]
Abstract
Detection of objects that move in a scene is a fundamental computation performed by the visual system. This computation is greatly complicated by observer motion, which causes most objects to move across the retinal image. How the visual system detects scene-relative object motion during self-motion is poorly understood. Human behavioral studies suggest that the visual system may identify local conflicts between motion parallax and binocular disparity cues to depth and may use these signals to detect moving objects. We describe a novel mechanism for performing this computation based on neurons in macaque middle temporal (MT) area with incongruent depth tuning for binocular disparity and motion parallax cues. Neurons with incongruent tuning respond selectively to scene-relative object motion, and their responses are predictive of perceptual decisions when animals are trained to detect a moving object during self-motion. This finding establishes a novel functional role for neurons with incongruent tuning for multiple depth cues.
Affiliation(s)
- HyungGoo R Kim: Department of Biomedical Engineering, Sungkyunkwan University, Suwon, Republic of Korea; Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, United States; Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Republic of Korea
- Dora E Angelaki: Center for Neural Science, New York University, New York, United States
- Gregory C DeAngelis: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, United States
3. Chaudhary S, Saywell N, Taylor D. The Differentiation of Self-Motion From External Motion Is a Prerequisite for Postural Control: A Narrative Review of Visual-Vestibular Interaction. Front Hum Neurosci 2022; 16:697739. [PMID: 35210998] [PMCID: PMC8860980] [DOI: 10.3389/fnhum.2022.697739]
Abstract
The visual system is a source of sensory information that perceives environmental stimuli and interacts with other sensory systems to generate visual and postural responses to maintain postural stability. Although the three sensory systems (visual, vestibular, and somatosensory) work concurrently to maintain postural control, the interaction between the visual and vestibular systems is vital for differentiating self-motion from external motion to maintain postural stability. The visual system influences postural control, playing a key role in perceiving the information required for this differentiation. The visual system's main afferent information consists of optic flow and retinal slip, which lead to the generation of visual and postural responses. Visual fixations generated by the visual system interact with this afferent information and the vestibular system to maintain visual and postural stability. This review synthesizes the roles of the visual system and its interaction with the vestibular system in maintaining postural stability.
4.
Abstract
During self-motion, an independently moving object generates retinal motion that is the vector sum of its world-relative motion and the optic flow caused by the observer's self-motion. A hypothesized mechanism for the computation of an object's world-relative motion is flow parsing, in which the optic flow field due to self-motion is globally subtracted from the retinal flow field. This subtraction generates a bias in perceived object direction (in retinal coordinates) away from the optic flow vector at the object's location. Despite psychophysical evidence for flow parsing in humans, the neural mechanisms underlying the process are unknown. To build the framework for investigation of the neural basis of flow parsing, we trained macaque monkeys to discriminate the direction of a moving object in the presence of optic flow simulating self-motion. Like humans, monkeys showed biases in object direction perception consistent with subtraction of background optic flow attributable to self-motion. The size of perceptual biases generally depended on the magnitude of the expected optic flow vector at the location of the object, which was contingent on object position and self-motion velocity. There was a modest effect of an object's depth on flow-parsing biases, which reached significance in only one of two subjects. Adding vestibular self-motion signals to optic flow facilitated flow parsing, increasing biases in direction perception. Our findings indicate that monkeys exhibit perceptual hallmarks of flow parsing, setting the stage for the examination of the neural mechanisms underlying this phenomenon.
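The vector relationship in this abstract can be sketched numerically. Below is a minimal, hypothetical illustration (values and variable names are mine, not the paper's) of flow parsing as global subtraction: an object's retinal motion is the vector sum of its world-relative motion and the optic-flow vector at its location due to self-motion, so subtracting the expected flow recovers the world-relative component.

```python
import numpy as np

# Hypothetical 2D image-plane vectors (deg/s); values are illustrative only.
object_world_motion = np.array([1.0, 0.0])   # object's scene-relative motion
self_motion_flow    = np.array([0.0, -2.0])  # optic-flow vector at the object's
                                             # location caused by observer translation

# Retinal motion is the vector sum of the two components.
retinal_motion = object_world_motion + self_motion_flow

# Flow parsing: globally subtract the flow attributed to self-motion.
recovered_world_motion = retinal_motion - self_motion_flow

assert np.allclose(recovered_world_motion, object_world_motion)
```

The perceptual bias the study reports corresponds to this subtraction shifting the perceived direction (in retinal coordinates) away from the local flow vector.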
Affiliation(s)
- Nicole E Peltier: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY, USA
- Dora E Angelaki: Center for Neural Science, New York University, New York, NY, USA
- Gregory C DeAngelis: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY, USA
5. Flexible coding of object motion in multiple reference frames by parietal cortex neurons. Nat Neurosci 2020; 23:1004-1015. [PMID: 32541964] [PMCID: PMC7474851] [DOI: 10.1038/s41593-020-0656-0]
Abstract
Neurons represent spatial information in diverse reference frames, but it remains unclear whether neural reference frames change with task demands and whether these changes can account for behavior. We examined how neurons represent the direction of a moving object during self-motion, while monkeys switched, from trial to trial, between reporting object direction in head- and world-centered reference frames. Self-motion information is needed to compute object motion in world coordinates, but should be ignored when judging object motion in head coordinates. Neural responses in the ventral intraparietal area are modulated by the task reference frame, such that population activity represents object direction in either reference frame. In contrast, responses in the lateral portion of the medial superior temporal area primarily represent object motion in head coordinates. Our findings demonstrate a neural representation of object motion that changes with task requirements.
6
|
Kozhemiako N, Nunes AS, Samal A, Rana KD, Calabro FJ, Hämäläinen MS, Khan S, Vaina LM. Neural activity underlying the detection of an object movement by an observer during forward self-motion: Dynamic decoding and temporal evolution of directional cortical connectivity. Prog Neurobiol 2020; 195:101824. [PMID: 32446882 DOI: 10.1016/j.pneurobio.2020.101824] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2020] [Revised: 05/09/2020] [Accepted: 05/18/2020] [Indexed: 01/13/2023]
Abstract
Relatively little is known about how the human brain identifies movement of objects while the observer is also moving in the environment. This is, ecologically, one of the most fundamental motion processing problems, critical for survival. To study this problem, we used a task which involved nine textured spheres moving in depth, eight simulating the observer's forward motion while the ninth, the target, moved independently with a different speed towards or away from the observer. Capitalizing on the high temporal resolution of magnetoencephalography (MEG) we trained a Support Vector Classifier (SVC) using the sensor-level data to identify correct and incorrect responses. Using the same MEG data, we addressed the dynamics of cortical processes involved in the detection of the independently moving object and investigated whether we could obtain confirmatory evidence for the brain activity patterns used by the classifier. Our findings indicate that response correctness could be reliably predicted by the SVC, with the highest accuracy during the blank period after motion and preceding the response. The spatial distribution of the areas critical for the correct prediction was similar but not exclusive to areas underlying the evoked activity. Importantly, SVC identified frontal areas otherwise not detected with evoked activity that seem to be important for the successful performance in the task. Dynamic connectivity further supported the involvement of frontal and occipital-temporal areas during the task periods. This is the first study to dynamically map cortical areas using a fully data-driven approach in order to investigate the neural mechanisms involved in the detection of moving objects during observer's self-motion.
Affiliation(s)
- N Kozhemiako: Department of Biomedical Physiology and Kinesiology, Simon Fraser University, Burnaby, BC, Canada; Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- A S Nunes: Department of Biomedical Physiology and Kinesiology, Simon Fraser University, Burnaby, BC, Canada; Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- A Samal: Departments of Biomedical Engineering, Neurology and the Graduate Program for Neuroscience, Boston University, Boston, MA, USA
- K D Rana: Departments of Biomedical Engineering, Neurology and the Graduate Program for Neuroscience, Boston University, Boston, MA, USA; National Institute of Mental Health, Bethesda, MD, USA
- F J Calabro: Department of Psychiatry and Biomedical Engineering, University of Pittsburgh, PA, USA
- M S Hämäläinen: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Boston, MA, USA
- S Khan: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Boston, MA, USA
- L M Vaina: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Departments of Biomedical Engineering, Neurology and the Graduate Program for Neuroscience, Boston University, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
7. Computational Mechanisms for Perceptual Stability using Disparity and Motion Parallax. J Neurosci 2020; 40:996-1014. [PMID: 31699889] [DOI: 10.1523/jneurosci.0036-19.2019]
Abstract
Walking and other forms of self-motion create global motion patterns across our eyes. With the resulting stream of visual signals, how do we perceive ourselves as moving through a stable world? Although the neural mechanisms are largely unknown, human studies (Warren and Rushton, 2009) provide strong evidence that the visual system is capable of parsing the global motion into two components: one due to self-motion and the other due to independently moving objects. In the present study, we use computational modeling to investigate potential neural mechanisms for stabilizing visual perception during self-motion that build on the neurophysiology of the middle temporal (MT) and medial superior temporal (MST) areas. One such mechanism leverages the direction, speed, and disparity tuning of cells in dorsal MST (MSTd) to estimate the combined motion parallax and disparity signals attributed to the observer's self-motion. Feedback from the most active MSTd cell subpopulations suppresses motion signals in MT that locally match the preference of the MSTd cell in both parallax and disparity. This mechanism, combined with local surround inhibition in MT, allows the model to estimate self-motion while maintaining a sparse motion representation that is compatible with perceptual stability. A key consequence is that after signals compatible with the observer's self-motion are suppressed, the direction of independently moving objects is represented in a world-relative rather than observer-relative reference frame. Our analysis explicates how temporal dynamics and joint motion parallax-disparity tuning resolve the world-relative motion of moving objects and establish perceptual stability. Together, these mechanisms capture findings on the perception of object motion during self-motion.
SIGNIFICANCE STATEMENT The image integrated by our eyes as we move through our environment undergoes constant flux as trees, buildings, and other surroundings stream by us. If our view can change so radically from one moment to the next, how do we perceive a stable world? Although progress has been made in understanding how this works, little is known about the underlying brain mechanisms. We propose a computational solution whereby multiple brain areas communicate to suppress the motion attributed to our movement relative to the stationary world, which is often responsible for a large proportion of the flux across the visual field. We simulated the proposed neural mechanisms and tested model estimates using data from human perceptual studies.
8. Anthwal S, Ganotra D. An overview of optical flow-based approaches for motion segmentation. The Imaging Science Journal 2019. [DOI: 10.1080/13682199.2019.1641316]
Affiliation(s)
- Shivangi Anthwal: Department of Applied Science and Humanities, Indira Gandhi Delhi Technical University for Women, Delhi, India
- Dinesh Ganotra: Department of Applied Science and Humanities, Indira Gandhi Delhi Technical University for Women, Delhi, India
9. Causal inference accounts for heading perception in the presence of object motion. Proc Natl Acad Sci U S A 2019; 116:9060-9065. [PMID: 30996126] [DOI: 10.1073/pnas.1820373116]
Abstract
The brain infers our spatial orientation and properties of the world from ambiguous and noisy sensory cues. Judging self-motion (heading) in the presence of independently moving objects poses a challenging inference problem because the image motion of an object could be attributed to movement of the object, self-motion, or some combination of the two. We test whether perception of heading and object motion follows predictions of a normative causal inference framework. In a dual-report task, subjects indicated whether an object appeared stationary or moving in the virtual world, while simultaneously judging their heading. Consistent with causal inference predictions, the proportion of object stationarity reports, as well as the accuracy and precision of heading judgments, depended on the speed of object motion. Critically, biases in perceived heading declined when the object was perceived to be moving in the world. Our findings suggest that the brain interprets object motion and self-motion using a causal inference framework.
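The causal inference logic described here can be sketched with a toy model (my assumptions, not the authors' implementation): the residual retinal motion left after accounting for self-motion is compared under two hypotheses, "object stationary" (residual is pure sensory noise, a narrow distribution) versus "object moving" (residual includes independent object motion, a broad distribution), and Bayes' rule gives the probability of stationarity. This reproduces the qualitative prediction that stationarity reports fall as object speed grows.

```python
import math

def gauss(x, sigma):
    """Gaussian density with zero mean (illustrative noise model)."""
    return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

def p_stationary(residual, sigma_noise=0.5, sigma_moving=3.0, prior=0.5):
    """Posterior probability that the object is stationary in the world.

    residual: retinal motion (deg/s) left over after subtracting the flow
    attributed to self-motion. All parameters are hypothetical.
    """
    like_stat = gauss(residual, sigma_noise)    # stationary cause: noise only
    like_move = gauss(residual, sigma_moving)   # moving cause: broader spread
    return prior * like_stat / (prior * like_stat + (1 - prior) * like_move)

assert p_stationary(0.1) > 0.5   # small residual: object judged stationary
assert p_stationary(4.0) < 0.5   # large residual: object judged moving
```

The study's finding that heading biases decline when the object is perceived as moving corresponds, in this framing, to discounting the object's motion vector from the heading estimate once the "moving" cause wins.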
10. Rushton SK, Chen R, Li L. Ability to identify scene-relative object movement is not limited by, or yoked to, ability to perceive heading. J Vis 2018; 18:11. [PMID: 30029224] [DOI: 10.1167/18.6.11]
Abstract
During locomotion, humans can judge where they are heading relative to the scene and the movement of objects within the scene. Both judgments rely on identifying global components of optic flow. What is the relationship between the perception of heading and the identification of object movement during self-movement? Do they rely on a shared mechanism? One way to address these questions is to compare performance on the two tasks. We designed stimuli that allowed direct comparison of the precision of heading and object movement judgments. Across a series of experiments, we found the precision was typically higher when judging scene-relative object movement than when judging heading. We also found that manipulations of the content of the visual scene can change the relative precision of the two judgments. These results demonstrate that the ability to judge scene-relative object movement during self-movement is not limited by, or yoked to, the ability to judge the direction of self-movement.
Affiliation(s)
- Simon K Rushton: School of Psychology, Cardiff University, Cardiff, Wales, UK
- Rongrong Chen: Department of Psychology, The University of Hong Kong, Hong Kong SAR
- Li Li: Department of Psychology, The University of Hong Kong, Hong Kong SAR; Neural Science Program, NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, PRC
11. Roudaia E, Calabro F, Vaina L, Newell F. Aging Impairs Audiovisual Facilitation of Object Motion Within Self-Motion. Multisens Res 2018; 31:251-272. [DOI: 10.1163/22134808-00002600]
Abstract
The presence of a moving sound has been shown to facilitate the detection of an independently moving visual target embedded among an array of identical moving objects simulating forward self-motion (Calabro et al., Proc. R. Soc. B, 2011). Given that the perception of object motion within self-motion declines with aging, we investigated whether older adults can also benefit from the presence of a congruent dynamic sound when detecting object motion within self-motion. Visual stimuli consisted of nine identical spheres randomly distributed inside a virtual rectangular prism. For 1 s, all the spheres expanded outward simulating forward observer translation at a constant speed. One of the spheres (the target) had independent motion either approaching or moving away from the observer at three different speeds. In the visual condition, stimuli contained no sound. In the audiovisual condition, the visual stimulus was accompanied by a broadband noise sound co-localized with the target, whose loudness increased or decreased congruent with the target's direction. Participants reported which of the spheres had independent motion. Younger participants showed higher target detection accuracy in the audiovisual compared to the visual condition at the slowest speed level. Older participants showed overall poorer target detection accuracy than the younger participants, but the presence of the sound had no effect on older participants' target detection accuracy at any speed level. These results indicate that aging may impair cross-modal integration in some contexts. Potential reasons for the absence of auditory facilitation in older adults are discussed.
Affiliation(s)
- Eugenie Roudaia: School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- Finnegan J. Calabro: Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA; Department of Psychiatry and Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Lucia M. Vaina: Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA; Department of Neurology, Harvard Medical School, Boston, MA, USA
- Fiona N. Newell: School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
12. Rogers C, Rushton SK, Warren PA. Peripheral Visual Cues Contribute to the Perception of Object Movement During Self-Movement. Iperception 2017; 8:2041669517736072. [PMID: 29201335] [PMCID: PMC5700793] [DOI: 10.1177/2041669517736072]
Abstract
Safe movement through the environment requires us to monitor our surroundings for moving objects or people. However, identification of moving objects in the scene is complicated by self-movement, which adds motion across the retina. To identify world-relative object movement, the brain thus has to ‘compensate for’ or ‘parse out’ the components of retinal motion that are due to self-movement. We have previously demonstrated that retinal cues arising from central vision contribute to solving this problem. Here, we investigate the contribution of peripheral vision, commonly thought to provide strong cues to self-movement. Stationary participants viewed a large field of view display, with radial flow patterns presented in the periphery, and judged the trajectory of a centrally presented probe. Across two experiments, we demonstrate and quantify the contribution of peripheral optic flow to flow parsing during forward and backward movement.
Affiliation(s)
- Paul A Warren: Division of Neuroscience and Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, Manchester, UK
13. Niehorster DC, Li L. Accuracy and Tuning of Flow Parsing for Visual Perception of Object Motion During Self-Motion. Iperception 2017; 8:2041669517708206. [PMID: 28567272] [PMCID: PMC5439648] [DOI: 10.1177/2041669517708206]
Abstract
How do we perceive object motion during self-motion using visual information alone? Previous studies have reported that the visual system can use optic flow to identify and globally subtract the retinal motion component resulting from self-motion to recover scene-relative object motion, a process called flow parsing. In this article, we developed a retinal motion nulling method to directly measure and quantify the magnitude of flow parsing (i.e., flow parsing gain) in various scenarios to examine the accuracy and tuning of flow parsing for the visual perception of object motion during self-motion. We found that flow parsing gains were below unity for all displays in all experiments, and that increasing self-motion and object motion speed did not alter flow parsing gain. We conclude that visual information alone is not sufficient for the accurate perception of scene-relative motion during self-motion. Although flow parsing performs global subtraction, its accuracy also depends on local motion information in the retinal vicinity of the moving object. Furthermore, the flow parsing gain was constant across common self-motion or object motion speeds. These results can be used to inform and validate computational models of flow parsing.
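A sub-unity flow parsing gain can be illustrated with a rough numerical sketch (the gain model and all values below are my assumptions, not the authors' computational model): if only a fraction g < 1 of the local self-motion flow is subtracted, the recovered object motion retains a residual of the background flow, biasing the percept.

```python
import numpy as np

def perceived_object_motion(retinal_motion, flow_at_object, gain):
    """Subtract a fraction `gain` of the local self-motion flow (illustrative model)."""
    return np.asarray(retinal_motion) - gain * np.asarray(flow_at_object)

flow = np.array([0.0, -3.0])    # flow due to self-motion at the object's location
world = np.array([2.0, 0.0])    # true scene-relative object motion
retinal = world + flow          # what actually lands on the retina

full = perceived_object_motion(retinal, flow, gain=1.0)     # complete parsing
partial = perceived_object_motion(retinal, flow, gain=0.7)  # sub-unity gain

# With gain = 1 the world-relative motion is recovered exactly;
# with gain < 1 a residual of (1 - gain) * flow biases the percept.
assert np.allclose(full, world)
assert np.allclose(partial - world, 0.3 * flow)
```

The paper's constant-gain finding corresponds here to g being independent of the magnitudes of `flow` and `world`.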
Affiliation(s)
- Li Li: Department of Psychology, The University of Hong Kong, Pokfulam, Hong Kong; Neural Science Program, NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, China
14. Royden CS, Parsons D, Travatello J. The effect of monocular depth cues on the detection of moving objects by moving observers. Vision Res 2016; 124:7-14. [PMID: 27264029] [DOI: 10.1016/j.visres.2016.05.002]
Abstract
An observer moving through the world must be able to identify and locate moving objects in the scene. In principle, one could accomplish this task by detecting object images moving at a different angle or speed than the images of other items in the optic flow field. While angle of motion provides an unambiguous cue that an object is moving relative to other items in the scene, a difference in speed could be due to a difference in the depth of the objects and thus is an ambiguous cue. We tested whether the addition of information about the distance of objects from the observer, in the form of monocular depth cues, aided detection of moving objects. We found that thresholds for detection of object motion decreased as we increased the number of depth cues available to the observer.
Affiliation(s)
- Constance S Royden: Department of Mathematics and Computer Science, College of the Holy Cross, United States
- Daniel Parsons: Department of Mathematics and Computer Science, College of the Holy Cross, United States
- Joshua Travatello: Department of Mathematics and Computer Science, College of the Holy Cross, United States
15. Vaina LM, Buonanno F, Rushton SK. Spared ability to perceive direction of locomotor heading and scene-relative object movement despite inability to perceive relative motion. Med Sci Monit 2014; 20:1563-71. [PMID: 25183375] [PMCID: PMC4161606] [DOI: 10.12659/msm.892199]
Abstract
BACKGROUND All contemporary models of perception of locomotor heading from optic flow (the characteristic patterns of retinal motion that result from self-movement) begin with relative motion. Therefore, it would be expected that an impairment in the perception of relative motion should impact the ability to judge heading and other 3D motion tasks. MATERIAL AND METHODS We report two patients with occipital lobe lesions whom we tested on a battery of motion tasks. Patients were impaired on all tests that involved relative motion in the plane (motion discontinuity, form from differences in motion direction or speed). Despite this, they retained the ability to judge their direction of heading relative to a target. A potential confound is that observers can derive information about heading from scale changes, bypassing the need to use optic flow. Therefore we ran further experiments in which we isolated optic flow and scale change. RESULTS Patients' performance was within normal ranges on both tests. The finding that the ability to perceive heading can be retained despite an impairment in the ability to judge relative motion questions the assumption that heading perception proceeds from initial processing of relative motion. Furthermore, on a collision detection task, SS and SR's performance was significantly better for simulated forward movement of the observer in the 3D scene than for the static observer. This suggests that in spite of severe deficits on relative motion in the frontoparallel (xy) plane, information from self-motion helped the identification of objects moving along an intercepting 3D relative-motion trajectory. CONCLUSIONS This result suggests a potential use of a flow parsing strategy to detect the trajectory of moving objects in a 3D world when the observer is moving forward. These results have implications for developing rehabilitation strategies for deficits in visually guided navigation.
Affiliation(s)
- Lucia Maria Vaina: Brain and Vision Research Laboratory, Boston University, Boston, USA
- Ferdinando Buonanno: Department of Neurology, Harvard Medical School, Massachusetts General Hospital, Neurology of Vision Laboratory, Boston, USA
- Simon K Rushton: School of Psychology, Cardiff University, Cardiff, United Kingdom
16. Royden CS, Holloway MA. Detecting moving objects in an optic flow field using direction- and speed-tuned operators. Vision Res 2014; 98:14-25. [PMID: 24607912] [DOI: 10.1016/j.visres.2014.02.009]
Abstract
An observer moving through a scene must be able to identify moving objects. Psychophysical results have shown that people can identify moving objects based on the speed or direction of their movement relative to the optic flow field generated by the observer's motion. Here we show that a model that uses speed- and direction-tuned units, whose responses are based on the response properties of cells in the primate visual cortex, can successfully identify the borders of moving objects in a scene through which an observer is moving.
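The core idea of such direction- and speed-tuned operators can be caricatured as a local relative-motion detector: responses are large wherever a flow vector differs from its neighbourhood, as at the border of an independently moving object. A toy sketch under assumed parameters (the grid size, object patch, and threshold are all invented; this is not the Royden and Holloway model):

```python
import numpy as np

def radial_flow(h, w, speed=1.0):
    # Flow field for pure forward translation: vectors radiate from a
    # focus of expansion at the image centre, growing with eccentricity.
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    return np.stack([speed * (xs - cx), speed * (ys - cy)], axis=-1)

def border_response(flow):
    # Crude relative-motion operator: magnitude of the difference between
    # each flow vector and the mean of its four neighbours.  A smoothly
    # varying radial field gives near-zero responses; the edge of an
    # independently moving patch is a discontinuity and responds strongly.
    pad = np.pad(flow, ((1, 1), (1, 1), (0, 0)), mode="edge")
    neigh = (pad[:-2, 1:-1] + pad[2:, 1:-1] +
             pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
    return np.linalg.norm(flow - neigh, axis=-1)

# Background self-motion flow plus a patch with its own rightward motion.
flow = radial_flow(40, 40)
flow[10:20, 10:20] += np.array([5.0, 0.0])

resp = border_response(flow)
borders = resp > 1.0   # threshold marks only the object's border
```

On this synthetic field the operator is silent over the smoothly varying radial background and over the object's interior, and responds only along the patch border, mirroring the border-identification behaviour the abstract describes.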
Affiliation(s)
- Constance S Royden
- Department of Mathematics and Computer Science, College of the Holy Cross, United States.
- Michael A Holloway
- Department of Mathematics and Computer Science, College of the Holy Cross, United States.
17
Alberti CF, Peli E, Bowers AR. Driving with hemianopia: III. Detection of stationary and approaching pedestrians in a simulator. Invest Ophthalmol Vis Sci 2014; 55:368-74. [PMID: 24346175 DOI: 10.1167/iovs.13-12737] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
PURPOSE To compare blind-side detection performance of drivers with homonymous hemianopia (HH) for stationary and approaching pedestrians, initially appearing at small (4°) or large (14°) eccentricities in a driving simulator. The stationary pedestrians did not represent an imminent threat, because their eccentricity increased rapidly as the vehicle advanced; the approaching pedestrians maintained a collision course with approximately constant eccentricity, walking or running toward the travel lane as if to cross. METHODS Twelve participants with complete HH and without spatial neglect pressed the horn whenever they detected a pedestrian while driving along predetermined routes in two driving simulator sessions. Miss rates and reaction times were analyzed for 52 stationary and 52 approaching pedestrians. RESULTS Miss rates were higher and reaction times longer on the blind side than the seeing side (P < 0.01). On the blind side, miss rates were lower for approaching than stationary pedestrians (16% vs. 29%, P = 0.01), especially at larger eccentricities (20% vs. 54%, P = 0.005), but reaction times for approaching pedestrians were longer (1.72 vs. 1.41 seconds; P = 0.03). Overall, the proportion of potential blind-side collisions (missed and late responses) did not differ between the two paradigms (41% vs. 35%, P = 0.48) and was significantly higher than for the seeing side (3%, P = 0.002). CONCLUSIONS In a realistic pedestrian detection task, drivers with HH exhibited significant blind-side detection deficits. Even when approaching pedestrians were detected, responses were often too late to avoid a potential collision.
Affiliation(s)
- Concetta F Alberti
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts
18
Fajen BR, Parade MS, Matthis JS. Humans perceive object motion in world coordinates during obstacle avoidance. J Vis 2013; 13:25. [PMID: 23887048 PMCID: PMC3726133 DOI: 10.1167/13.8.25] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
A fundamental question about locomotion in the presence of moving objects is whether movements are guided based upon perceived object motion in an observer-centered or world-centered reference frame. The former captures object motion relative to the moving observer and depends on both observer and object motion. The latter captures object motion relative to the stationary environment and is independent of observer motion. Subjects walked through a virtual environment (VE) viewed through a head-mounted display and indicated whether they would pass in front of or behind a moving obstacle that was on course to cross their future path. Subjects' movement through the VE was manipulated such that object motion in observer coordinates was affected while object motion in world coordinates was the same. We found that when moving observers choose routes around moving obstacles, they rely on object motion perceived in world coordinates. This entails a process, which has been called flow parsing (Rushton & Warren, 2005; Warren & Rushton, 2009a), that recovers the component of optic flow due to object motion independent of self-motion. We found that when self-motion is real and actively generated, the process by which object motion is recovered relies on both visual and nonvisual information to factor out the influence of self-motion. The remaining component contains information about object motion in world coordinates that is needed to guide locomotion.
Affiliation(s)
- Brett R Fajen
- Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, NY, USA.
19
Foulkes AJ, Rushton SK, Warren PA. Flow parsing and heading perception show similar dependence on quality and quantity of optic flow. Front Behav Neurosci 2013; 7:49. [PMID: 23801945 PMCID: PMC3685810 DOI: 10.3389/fnbeh.2013.00049] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2012] [Accepted: 05/06/2013] [Indexed: 11/13/2022] Open
Abstract
Here we examine the relationship between the perception of heading and flow parsing. In a companion study we investigated how human heading estimation depends on the quantity (number of dots per frame) and quality (level of directional noise) of motion information in an optic flow field. In the present study we investigated whether the flow parsing mechanism, which is thought to aid the assessment of scene-relative object movement during observer movement, exhibits a similar pattern of dependence on these stimulus manipulations. Finding that the pattern of flow parsing effects is similar to that observed for heading thresholds would provide some evidence that these two complementary roles for optic flow processing rely on the same, or similar, neural computation. We found that the pattern of flow parsing effects does indeed display a striking similarity to the heading thresholds. As with judgements of heading, there is a critical value of around 25 dots per frame; below this value flow parsing effects deteriorate rapidly, and above it they are stable [see Warren et al. (1988) for similar results for heading]. Also as with judgements of heading, when there were 50 or more dots there was a systematic effect of noise on the magnitude of the flow parsing effect. These results are discussed in the context of different possible schemes of flow processing to support both heading and flow parsing mechanisms.
Affiliation(s)
- Andrew J Foulkes
- School of Psychological Sciences, The University of Manchester, Manchester, UK
20
Visual and non-visual contributions to the perception of object motion during self-motion. PLoS One 2013; 8:e55446. [PMID: 23408983 PMCID: PMC3567075 DOI: 10.1371/journal.pone.0055446] [Citation(s) in RCA: 33] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2012] [Accepted: 12/29/2012] [Indexed: 11/19/2022] Open
Abstract
Many locomotor tasks involve interactions with moving objects. When observer (i.e., self-)motion is accompanied by object motion, the optic flow field includes a component due to self-motion and a component due to object motion. For moving observers to perceive the movement of other objects relative to the stationary environment, the visual system could recover the object-motion component - that is, it could factor out the influence of self-motion. In principle, this could be achieved using visual self-motion information, non-visual self-motion information, or a combination of both. In this study, we report evidence that visual information about the speed (experiment 1) and direction (experiment 2) of self-motion plays a role in recovering the object-motion component even when non-visual self-motion information is also available. However, the magnitude of the effect was less than one would expect if subjects relied entirely on visual self-motion information. Taken together with previous studies, we conclude that when self-motion is real and actively generated, both visual and non-visual self-motion information contribute to the perception of object motion. We also consider the possible role of this process in visually guided interception and avoidance of moving objects.
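The "factoring out" described here is, at its simplest, vector subtraction: retinal motion minus an estimate of the self-motion component leaves the object-motion component. A minimal sketch (the numbers and the gain parameter are illustrative; a gain below 1 mimics the finding that the visual self-motion effect was smaller than full compensation would predict):

```python
# Retinal motion of a point = self-motion component + object-motion
# component.  Recovering object motion is then a subtraction; gain < 1
# models incomplete use of visual self-motion information.

def parse_flow(retinal, predicted_self, gain=1.0):
    # Subtract the (possibly down-weighted) predicted self-motion flow.
    return tuple(r - gain * p for r, p in zip(retinal, predicted_self))

self_component = (-2.0, 0.0)   # image motion a stationary point would have
object_component = (1.0, 0.5)  # world-relative object motion (to recover)
retinal = (self_component[0] + object_component[0],
           self_component[1] + object_component[1])

full = parse_flow(retinal, self_component)          # exact recovery
partial = parse_flow(retinal, self_component, 0.7)  # residual self-motion left
```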
21
Abstract
We have recently suggested that neural flow parsing mechanisms act to subtract global optic flow consistent with observer movement, to aid the detection and assessment of scene-relative object movement. Here, we examine whether flow parsing can occur independently of heading estimation. To address this question we used stimuli comprising two superimposed optic flow fields of limited-lifetime dots (one planar and one radial). This stimulus gives rise to the so-called optic flow illusion (OFI), in which perceived heading is biased in the direction of the planar flow field. Observers were asked to report the perceived direction of motion of a probe object placed in the OFI stimulus. If flow parsing depends upon a prior estimate of heading, then the perceived trajectory should reflect global subtraction of a field consistent with the heading experienced under the OFI. In Experiment 1 we tested this prediction directly, finding instead that the perceived trajectory was biased markedly in the direction opposite to that predicted under the OFI. In Experiment 2 we demonstrate that the results of Experiment 1 are consistent with a positively weighted vector sum of the effects seen when viewing the probe together with the individual radial and planar flow fields. These results suggest that flow parsing is not necessarily dependent on prior estimation of heading direction. We discuss the implications of this finding for our understanding of the mechanisms of flow parsing.
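Experiment 2's positively weighted vector-sum account reduces to simple arithmetic on 2D effect vectors. A minimal illustration with invented effect vectors and weights (not values from the study):

```python
# Combine the per-field effects on the probe's perceived trajectory as a
# positively weighted vector sum.

def weighted_vector_sum(effects, weights):
    # Sum 2D effect vectors, each scaled by a positive weight.
    return tuple(sum(w * e[i] for e, w in zip(effects, weights))
                 for i in (0, 1))

radial_effect = (0.0, -1.0)  # hypothetical effect of the radial field alone
planar_effect = (0.5, 0.0)   # hypothetical effect of the planar field alone
combined = weighted_vector_sum([radial_effect, planar_effect], [0.8, 0.6])
```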
Affiliation(s)
- Paul A. Warren
- School of Psychological Sciences, The University of Manchester, Manchester, UK
- Andrew J. Foulkes
- School of Psychological Sciences, The University of Manchester, Manchester, UK
22
Interaction of cortical networks mediating object motion detection by moving observers. Exp Brain Res 2012; 221:177-89. [PMID: 22811215 DOI: 10.1007/s00221-012-3159-8] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2011] [Accepted: 06/21/2012] [Indexed: 10/28/2022]
Abstract
The task of parceling perceived visual motion into self- and object motion components is critical to safe and accurate visually guided navigation. In this paper, we used functional magnetic resonance imaging to determine the cortical areas functionally active in this task and the pattern connectivity among them to investigate the cortical regions of interest and networks that allow subjects to detect object motion separately from induced self-motion. Subjects were presented with nine textured objects during simulated forward self-motion and were asked to identify the target object, which had an additional, independent motion component toward or away from the observer. Cortical activation was distributed among occipital, intra-parietal and fronto-parietal areas. We performed a network analysis of connectivity data derived from partial correlation and multivariate Granger causality analyses among functionally active areas. This revealed four coarsely separated network clusters: bilateral V1 and V2; visually responsive occipito-temporal areas, including bilateral LO, V3A, KO (V3B) and hMT; bilateral VIP, DIPSM and right precuneus; and a cluster of higher, primarily left hemispheric regions, including the central sulcus, post-, pre- and sub-central sulci, pre-central gyrus, and FEF. We suggest that the visually responsive networks are involved in forming the representation of the visual stimulus, while the higher, left hemisphere cluster is involved in mediating the interpretation of the stimulus for action. Our main focus was on the relationships of activations during our task among the visually responsive areas. To determine the properties of the mechanism corresponding to the visual processing networks, we compared subjects' psychophysical performance to a model of object motion detection based solely on relative motion among objects and found that it was inconsistent with observer performance. Our results support the use of scene context (e.g., eccentricity, depth) in the detection of object motion. We suggest that the cortical activation and visually responsive networks provide a potential substrate for this computation.
23
MacNeilage PR, Zhang Z, DeAngelis GC, Angelaki DE. Vestibular facilitation of optic flow parsing. PLoS One 2012; 7:e40264. [PMID: 22768345 PMCID: PMC3388053 DOI: 10.1371/journal.pone.0040264] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2011] [Accepted: 06/04/2012] [Indexed: 11/18/2022] Open
Abstract
Simultaneous object motion and self-motion give rise to complex patterns of retinal image motion. In order to estimate object motion accurately, the brain must parse this complex retinal motion into self-motion and object motion components. Although this computational problem can be solved, in principle, through purely visual mechanisms, extra-retinal information that arises from the vestibular system during self-motion may also play an important role. Here we investigate whether combining vestibular and visual self-motion information improves the precision of object motion estimates. Subjects were asked to discriminate the direction of object motion in the presence of simultaneous self-motion, depicted either by visual cues alone (i.e. optic flow) or by combined visual/vestibular stimuli. We report a small but significant improvement in object motion discrimination thresholds with the addition of vestibular cues. This improvement was greatest for eccentric heading directions and negligible for forward movement, a finding that could reflect increased relative reliability of vestibular versus visual cues for eccentric heading directions. Overall, these results are consistent with the hypothesis that vestibular inputs can help parse retinal image motion into self-motion and object motion components.
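The reliability-based interpretation offered here follows the textbook rule for optimal cue combination, in which inverse variances add, so the benefit of a vestibular cue is largest when its reliability approaches that of the visual cue. A sketch with illustrative numbers (not the study's data):

```python
# Optimal (reliability-weighted) cue combination: inverse variances add,
# so the combined estimate is never less precise than the better cue.

def combined_sigma(sigma_visual, sigma_vestibular):
    var = 1.0 / (1.0 / sigma_visual ** 2 + 1.0 / sigma_vestibular ** 2)
    return var ** 0.5

# Forward heading: vestibular cue relatively unreliable -> small benefit.
forward = combined_sigma(2.0, 10.0)
# Eccentric heading: vestibular reliability nearer visual -> larger benefit.
eccentric = combined_sigma(2.0, 2.5)
```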
Affiliation(s)
- Paul R MacNeilage
- Vertigo, Balance, and Oculomotor Research Center, University Hospital of Munich, Munich, Germany.
24
Use of speed cues in the detection of moving objects by moving observers. Vision Res 2012; 59:17-24. [PMID: 22406544 DOI: 10.1016/j.visres.2012.02.006] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2011] [Revised: 01/12/2012] [Accepted: 02/21/2012] [Indexed: 11/20/2022]
Abstract
When an observer moves through an environment containing stationary and moving objects, he or she must be able to determine which objects are moving relative to the others in order to navigate successfully and avoid collisions. We investigated whether image speed can be used as a cue to detect a moving object in the scene. Our results show that image speed can be used to detect moving objects as long as the object is moving sufficiently faster or slower than it would if it were part of the stationary scene.
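The detection rule described here can be sketched as a comparison between an object's observed image speed and the speed a stationary point would produce at the same location. Everything below (the small-angle speed model, the proportional threshold, all numbers) is an illustrative assumption, not the authors' stimulus model:

```python
# Flag a point as an independently moving object when its image speed
# deviates sufficiently from the speed expected for a stationary point.

def expected_image_speed(eccentricity, observer_speed, depth):
    # Simplified small-angle model: the image speed of a stationary point
    # grows with eccentricity and observer speed, and falls with depth.
    return observer_speed * eccentricity / depth

def is_moving(observed_speed, eccentricity, observer_speed, depth, tol=0.25):
    # Proportional threshold: "sufficiently faster or slower" than the
    # stationary prediction.
    expected = expected_image_speed(eccentricity, observer_speed, depth)
    return abs(observed_speed - expected) > tol * expected

stationary_speed = expected_image_speed(0.2, 1.5, 3.0)  # matches a stationary point
```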
25
Kishore S, Hornick N, Sato N, Page WK, Duffy CJ. Driving strategy alters neuronal responses to self-movement: cortical mechanisms of distracted driving. Cereb Cortex 2011; 22:201-8. [PMID: 21653287 DOI: 10.1093/cercor/bhr115] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
We presented naturalistic combinations of virtual self-movement stimuli while recording neuronal activity in monkey cerebral cortex. Monkeys used a joystick to drive to a straight ahead heading direction guided by either object motion or optic flow. The selected cue dominates neuronal responses, often mimicking responses evoked when that stimulus is presented alone. In some neurons, driving strategy creates selective response additivities. In others, it creates vulnerabilities to the disruptive effects of independently moving objects. Such cue interactions may be related to the disruptive effects of independently moving objects in Alzheimer's disease patients with navigational deficits.
Affiliation(s)
- Sarita Kishore
- Department of Neurology, University of Rochester Medical Center, Rochester, NY 14642, USA
26
Cortical neurons combine visual cues about self-movement. Exp Brain Res 2010; 206:283-97. [PMID: 20852992 DOI: 10.1007/s00221-010-2406-0] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2010] [Accepted: 08/25/2010] [Indexed: 10/19/2022]
Abstract
Visual cues about self-movement are derived from the patterns of optic flow and the relative motion of discrete objects. We recorded dorsal medial superior temporal (MSTd) cortical neurons in monkeys that held centered visual fixation while viewing optic flow and object motion stimuli simulating the self-movement cues seen during translation on a circular path. Twenty stimulus configurations presented naturalistic combinations of optic flow with superimposed objects that simulated either earth-fixed landmark objects or independently moving animate objects. Landmarks and animate objects yielded the same response interactions with optic flow: mainly additive effects, with a substantial number of sub- and super-additive responses. Sub- and super-additive interactions reflect each neuron's local and global motion sensitivities: Local motion sensitivity is based on the spatial arrangement of directions created by object motion and the surrounding optic flow. Global motion sensitivity is based on the temporal sequence of self-movement headings that define a simulated path through the environment. We conclude that MST neurons' spatio-temporal response properties combine object motion and optic flow cues to represent self-movement in diverse, naturalistic circumstances.