1. Peltier NE, Anzai A, Moreno-Bote R, DeAngelis GC. A neural mechanism for optic flow parsing in macaque visual cortex. Curr Biol 2024:S0960-9822(24)01241-7. PMID: 39389059. DOI: 10.1016/j.cub.2024.09.030.
Abstract
For the brain to compute object motion in the world during self-motion, it must discount the global patterns of image motion (optic flow) caused by self-motion. Optic flow parsing is a proposed visual mechanism for computing object motion in the world, and studies in both humans and monkeys have demonstrated perceptual biases consistent with the operation of a flow-parsing mechanism. However, the neural basis of flow parsing remains unknown. We demonstrate, at both the individual unit and population levels, that neural activity in macaque middle temporal (MT) area is biased by peripheral optic flow in a manner that can at least partially account for perceptual biases induced by flow parsing. These effects cannot be explained by conventional surround suppression mechanisms or choice-related activity and have substantial neural latency. Together, our findings establish the first neural basis for the computation of scene-relative object motion based on flow parsing.
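The flow-parsing computation summarized above can be illustrated with a small conceptual sketch: the optic-flow component predicted from self-motion is subtracted from an object's retinal motion to recover its scene-relative motion. This is only an illustration of the computation being probed, not the authors' neural model; the simple radial-flow model and the numbers below are assumptions.

```python
# Conceptual sketch of flow parsing: scene-relative object motion is recovered by
# subtracting the optic-flow component attributable to self-motion from the
# object's measured retinal motion. Values are illustrative only.
import numpy as np

def radial_flow(position, focus_of_expansion, gain=1.0):
    """Optic flow at an image position for pure forward translation:
    vectors radiate from the focus of expansion (FOE)."""
    return gain * (np.asarray(position) - np.asarray(focus_of_expansion))

foe = np.array([0.0, 0.0])                 # heading straight ahead
obj_pos = np.array([4.0, 2.0])             # object location in the image (deg)
obj_retinal_motion = np.array([5.5, 2.0])  # measured retinal motion (deg/s)

self_motion_component = radial_flow(obj_pos, foe, gain=1.0)
object_motion_in_world = obj_retinal_motion - self_motion_component

print(object_motion_in_world)              # -> [1.5, 0.0]: rightward motion in the scene
```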
Affiliation(s)
- Nicole E Peltier
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
- Akiyuki Anzai
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
- Rubén Moreno-Bote
- Center for Brain and Cognition & Department of Engineering, Universitat Pompeu Fabra, Barcelona 08002, Spain; Serra Húnter Fellow Programme, Universitat Pompeu Fabra, Barcelona 08002, Spain
- Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
2. Hülemeier AG, Lappe M. Illusory percepts of curvilinear self-motion when moving through crowds. J Vis 2023; 23:6. PMID: 38112491. PMCID: PMC10732088. DOI: 10.1167/jov.23.14.6.
Abstract
Self-motion generates optic flow, a pattern of expanding visual motion. Heading estimation from optic flow analysis is accurate in rigid environments, but it becomes challenging when other human walkers introduce independent motion to the scene. Previous studies showed that heading perception is surprisingly accurate when moving through a crowd of walkers but revealed strong heading biases when either articulation or translation of biological motion was presented in isolation. We hypothesized that these biases resulted from misperceiving the self-motion as curvilinear. Such errors might manifest as opposite biases depending on whether the observer perceived the crowd motion as an indication of their self-translation or self-rotation. Our study investigated the link between heading biases and illusory path perception. Participants assessed heading and path perception while observing optic flow stimuli with varying walker movements. Self-motion perception was accurate during natural locomotion (articulation and translation), but significant heading biases occurred when walkers only articulated or translated. In this case, participants often reported a curved path of travel. Heading error and curvature pointed in opposite directions. On average, participants perceived the walker motion as evidence for viewpoint rotation, leading to curvilinear path percepts.
Affiliation(s)
- Markus Lappe
- Department of Psychology, University of Münster, Münster, Germany
3. Zhao G, Orlosky J, Feiner S, Ratsamee P, Uranishi Y. Mitigation of VR Sickness During Locomotion With a Motion-Based Dynamic Vision Modulator. IEEE Trans Vis Comput Graph 2023; 29:4089-4103. PMID: 35687624. DOI: 10.1109/tvcg.2022.3181262.
Abstract
In virtual reality, VR sickness resulting from continuous locomotion via controllers or joysticks is still a significant problem. In this article, we present a set of algorithms to mitigate VR sickness that dynamically modulate the user's field of view by modifying the contrast of the periphery based on movement, color, and depth. In contrast with previous work, this vision modulator is a shader that is triggered by specific motions known to cause VR sickness, such as acceleration, strafing, and linear velocity. Moreover, the algorithm is governed by delta velocity, delta angle, and average color of the view. We ran two experiments with different washout periods to investigate the effectiveness of dynamic modulation on the symptoms of VR sickness, in which we compared this approach against a baseline and pitch-black field-of-view restrictors. Our first experiment made use of a just-noticeable-sickness design, which can be useful for building experiments with a short washout period.
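As a rough illustration of the kind of motion-triggered field-of-view modulation described above, the sketch below maps per-frame changes in velocity and angle (plus average luminance) to a peripheral contrast-reduction factor. The weights, thresholds, and function names are illustrative assumptions, not the published shader.

```python
# Hedged sketch of a motion-based peripheral contrast modulator: a modulation
# strength is derived from delta velocity, delta angle, and average scene
# luminance, then applied outside a central window. All parameters are assumptions.
import numpy as np

def modulation_strength(delta_velocity, delta_angle, avg_luminance,
                        w_vel=0.5, w_ang=0.3, w_lum=0.2):
    """Map per-frame motion and scene statistics to a contrast-reduction factor in [0, 1]."""
    raw = w_vel * abs(delta_velocity) + w_ang * abs(delta_angle) + w_lum * avg_luminance
    return float(np.clip(raw, 0.0, 1.0))

def modulate_periphery(frame, strength, fovea_radius_frac=0.35):
    """Reduce contrast outside a central window; stronger modulation -> flatter periphery."""
    h, w = frame.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    center = np.array([h / 2, w / 2])
    dist = np.hypot(yy - center[0], xx - center[1]) / (0.5 * min(h, w))
    periphery = np.clip((dist - fovea_radius_frac) / (1 - fovea_radius_frac), 0, 1)
    mean = frame.mean()
    blend = strength * periphery[..., None] if frame.ndim == 3 else strength * periphery
    return frame * (1 - blend) + mean * blend  # pull peripheral pixels toward the mean

# Example: a fast turn (large delta_angle) triggers strong peripheral contrast reduction.
frame = np.random.rand(240, 320, 3)
out = modulate_periphery(frame, modulation_strength(0.2, 0.8, 0.5))
```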
4. Gundavarapu A, Chakravarthy VS. Modeling the development of cortical responses in primate dorsal ("where") pathway to optic flow using hierarchical neural field models. Front Neurosci 2023; 17:1154252. PMID: 37284658. PMCID: PMC10239834. DOI: 10.3389/fnins.2023.1154252.
Abstract
Although there is a plethora of modeling literature dedicated to the object recognition processes of the ventral ("what") pathway of primate visual systems, modeling studies on motion-sensitive regions like the medial superior temporal area (MST) of the dorsal ("where") pathway are relatively scarce. Neurons in the MST area of the macaque monkey respond selectively to different types of optic flow sequences such as radial and rotational flows. We present three models that are designed to simulate the computation of optic flow performed by the MST neurons. Model-1 and model-2 are each composed of three stages: Direction Selective Mosaic Network (DSMN), Cell Plane Network (CPNW) or the Hebbian Network (HBNW), and the Optic flow network (OF). The three stages roughly correspond to V1-MT-MST areas, respectively, in the primate motion pathway. Both these models are trained stage by stage using a biologically plausible variation of the Hebbian rule. The simulation results show that neurons in model-1 and model-2 (that are trained on translational, radial, and rotational sequences) develop responses that could account for MSTd cell properties found neurobiologically. On the other hand, model-3 consists of the Velocity Selective Mosaic Network (VSMN) followed by a convolutional neural network (CNN) which is trained on radial and rotational sequences using a supervised backpropagation algorithm. The quantitative comparison of response similarity matrices (RSMs), made out of convolution layer and last hidden layer responses, shows that model-3 neuron responses are consistent with the idea of functional hierarchy in the macaque motion pathway. These results also suggest that deep learning models could offer a computationally elegant and biologically plausible solution to simulate the development of cortical responses of the primate motion pathway.
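Two generic ingredients named in this abstract, a Hebbian-style learning rule and response similarity matrix (RSM) comparison, can be sketched briefly. The snippet below uses Oja's normalized Hebbian variant and a simple RSM correlation as stand-ins; it does not reproduce the paper's specific networks, and all data are synthetic.

```python
# Minimal sketches of (a) a Hebbian-style update (Oja's normalized variant, one
# biologically plausible choice) and (b) a response similarity matrix (RSM)
# comparison between two layers. Illustrative only; not the paper's models.
import numpy as np

def oja_update(w, x, lr=0.01):
    """One Hebbian/Oja step: strengthen weights for co-active input, with implicit normalization."""
    y = w @ x
    return w + lr * y * (x - y * w)

def rsm(responses):
    """Response similarity matrix: pairwise correlations across stimuli (rows = stimuli)."""
    return np.corrcoef(responses)

def rsm_similarity(resp_a, resp_b):
    """Correlate the upper triangles of two RSMs to compare representational geometry."""
    a, b = rsm(resp_a), rsm(resp_b)
    iu = np.triu_indices_from(a, k=1)
    return np.corrcoef(a[iu], b[iu])[0, 1]

w = oja_update(np.array([0.3, 0.7]), np.array([1.0, 0.5]))  # single Hebbian step

rng = np.random.default_rng(0)
layer1 = rng.normal(size=(20, 50))                           # 20 stimuli x 50 units
layer2 = layer1 @ rng.normal(size=(50, 30)) + 0.1 * rng.normal(size=(20, 30))
print(rsm_similarity(layer1, layer2))                        # high value -> similar geometry
```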
Affiliation(s)
- Anila Gundavarapu
- Computational Neuroscience Lab, Indian Institute of Technology Madras, Chennai, India
- V. Srinivasa Chakravarthy
- Computational Neuroscience Lab, Indian Institute of Technology Madras, Chennai, India
- Center for Complex Systems and Dynamics, Indian Institute of Technology Madras, Chennai, India
5. Ali M, Decker E, Layton OW. Temporal stability of human heading perception. J Vis 2023; 23:8. PMID: 36786748. PMCID: PMC9932552. DOI: 10.1167/jov.23.2.8.
Abstract
Humans are capable of accurately judging their heading from optic flow during straight forward self-motion. Despite the global coherence in the optic flow field, however, visual clutter and other naturalistic conditions create constant flux on the eye. This presents a problem that must be overcome to accurately perceive heading from optic flow: the visual system must maintain sensitivity to optic flow variations that correspond with actual changes in self-motion and disregard those that do not. One solution could involve integrating optic flow over time to stabilize heading signals while suppressing transient fluctuations. Stability, however, may come at the cost of sluggishness. Here, we investigate the stability of human heading perception when subjects judge their heading after the simulated direction of self-motion changes. We found that the initial heading exerted an attractive influence on judgments of the final heading. Consistent with an evolving heading representation, bias toward the initial heading increased with the size of the heading change and as the viewing duration of the optic flow consistent with the final heading decreased. Introducing a period of sensory dropout (blackout) late in the trial increased bias, whereas an earlier one did not. Simulations of a neural model, the Competitive Dynamics Model, demonstrate that a mechanism that produces an evolving heading signal through recurrent competitive interactions largely captures the human data. Our findings characterize how the visual system balances stability in heading perception with sensitivity to change and support the hypothesis that heading perception evolves over time.
Affiliation(s)
- Mufaddal Ali
- Department of Computer Science, Colby College, Waterville, ME, USA
- Eli Decker
- Department of Computer Science, Colby College, Waterville, ME, USA
- Oliver W. Layton
- Department of Computer Science, Colby College, Waterville, ME, USA; https://sites.google.com/colby.edu/owlab
6. Matthis JS, Muller KS, Bonnen KL, Hayhoe MM. Retinal optic flow during natural locomotion. PLoS Comput Biol 2022; 18:e1009575. PMID: 35192614. PMCID: PMC8896712. DOI: 10.1371/journal.pcbi.1009575.
Abstract
We examine the structure of the visual motion projected on the retina during natural locomotion in real world environments. Bipedal gait generates a complex, rhythmic pattern of head translation and rotation in space, so without gaze stabilization mechanisms such as the vestibulo-ocular reflex (VOR) a walker's visually specified heading would vary dramatically throughout the gait cycle. The act of fixation on stable points in the environment nulls image motion at the fovea, resulting in stable patterns of outflow on the retinae centered on the point of fixation. These outflowing patterns retain a higher order structure that is informative about the stabilized trajectory of the eye through space. We measured this structure by applying the curl and divergence operations to the retinal flow velocity vector fields and found features that may be valuable for the control of locomotion. In particular, the sign and magnitude of foveal curl in retinal flow specifies the body's trajectory relative to the gaze point, while the point of maximum divergence in the retinal flow field specifies the walker's instantaneous overground velocity/momentum vector in retinotopic coordinates. Assuming that walkers can determine the body position relative to gaze direction, these time-varying retinotopic cues for the body's momentum could provide a visual control signal for locomotion over complex terrain. In contrast, the temporal variation of the eye-movement-free, head-centered flow fields is large enough to be problematic for use in steering towards a goal. Consideration of optic flow in the context of real-world locomotion therefore suggests a re-evaluation of the role of optic flow in the control of action during natural behavior.
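The curl/divergence analysis mentioned above can be sketched with finite differences on a synthetic flow field; the paper's fields are instead derived from measured gaze and body motion during walking.

```python
# Sketch of curl and divergence of a 2D flow field via finite differences,
# applied to a synthetic (affine) retinal-flow example. Illustrative only.
import numpy as np

def flow_curl_div(u, v, spacing=1.0):
    """Finite-difference curl and divergence of a 2D flow field (u, v)."""
    du_dy, du_dx = np.gradient(u, spacing)
    dv_dy, dv_dx = np.gradient(v, spacing)
    divergence = du_dx + dv_dy
    curl = dv_dx - du_dy
    return curl, divergence

# Synthetic example: expansion centred at the fixation point plus a rotational component.
y, x = np.mgrid[-10:11, -10:11].astype(float)
u = 0.2 * x - 0.05 * y        # expansion (0.2) + counter-clockwise rotation (0.05)
v = 0.2 * y + 0.05 * x
curl, divergence = flow_curl_div(u, v)
print(curl.mean(), divergence.mean())   # 0.1 and 0.4 everywhere for this affine field
```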
Affiliation(s)
- Jonathan Samir Matthis
- Department of Biology, Northeastern University, Boston, Massachusetts, United States of America
- Karl S. Muller
- Center for Perceptual Systems, University of Texas at Austin, Austin, Texas, United States of America
- Kathryn L. Bonnen
- School of Optometry, Indiana University Bloomington, Bloomington, Indiana, United States of America
- Mary M. Hayhoe
- Center for Perceptual Systems, University of Texas at Austin, Austin, Texas, United States of America
7. ARTFLOW: A Fast, Biologically Inspired Neural Network that Learns Optic Flow Templates for Self-Motion Estimation. Sensors 2021; 21:8217. PMID: 34960310. PMCID: PMC8708706. DOI: 10.3390/s21248217.
Abstract
Most algorithms for steering, obstacle avoidance, and moving object detection rely on accurate self-motion estimation, a problem animals solve in real time as they navigate through diverse environments. One biological solution leverages optic flow, the changing pattern of motion experienced on the eye during self-motion. Here I present ARTFLOW, a biologically inspired neural network that learns patterns in optic flow to encode the observer’s self-motion. The network combines the fuzzy ART unsupervised learning algorithm with a hierarchical architecture based on the primate visual system. This design affords fast, local feature learning across parallel modules in each network layer. Simulations show that the network is capable of learning stable patterns from optic flow simulating self-motion through environments of varying complexity with only one epoch of training. ARTFLOW trains substantially faster and yields self-motion estimates that are far more accurate than a comparable network that relies on Hebbian learning. I show how ARTFLOW serves as a generative model to predict the optic flow that corresponds to neural activations distributed across the network.
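The unsupervised learning step named above can be sketched with a minimal fuzzy ART module (complement coding, choice function, vigilance test, fast learning). This is the generic fuzzy ART algorithm only, not the full ARTFLOW hierarchy, and the training patterns below are random stand-ins for optic-flow inputs.

```python
# Minimal fuzzy ART sketch: complement-coded inputs, choice function,
# vigilance (match) test, and fast learning. Generic algorithm, not ARTFLOW itself.
import numpy as np

class FuzzyART:
    def __init__(self, rho=0.7, alpha=0.001, beta=1.0):
        self.rho, self.alpha, self.beta = rho, alpha, beta
        self.w = []                               # one weight vector per learned category

    def _complement_code(self, x):
        x = np.clip(np.asarray(x, float), 0, 1)
        return np.concatenate([x, 1 - x])

    def train_step(self, x):
        i = self._complement_code(x)
        # Rank existing categories by the choice function T_j = |i ^ w_j| / (alpha + |w_j|).
        scores = [np.minimum(i, w).sum() / (self.alpha + w.sum()) for w in self.w]
        for j in np.argsort(scores)[::-1]:
            match = np.minimum(i, self.w[j]).sum() / i.sum()
            if match >= self.rho:                 # vigilance test passed: resonate and learn
                self.w[j] = self.beta * np.minimum(i, self.w[j]) + (1 - self.beta) * self.w[j]
                return j
        self.w.append(i.copy())                   # otherwise recruit a new category
        return len(self.w) - 1

# Example: cluster flattened optic-flow "templates" (here random stand-ins).
rng = np.random.default_rng(1)
net = FuzzyART(rho=0.75)
for pattern in rng.random((50, 16)):
    net.train_step(pattern)
print(len(net.w), "categories learned")
```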
8. Neilson PD, Neilson MD, Bye RT. A Riemannian Geometry Theory of Synergy Selection for Visually-Guided Movement. Vision (Basel) 2021; 5:26. PMID: 34070234. PMCID: PMC8163178. DOI: 10.3390/vision5020026.
Abstract
Bringing together a Riemannian geometry account of visual space with a complementary account of human movement synergies we present a neurally-feasible computational formulation of visuomotor task performance. This cohesive geometric theory addresses inherent nonlinear complications underlying the match between a visual goal and an optimal action to achieve that goal: (i) the warped geometry of visual space causes the position, size, outline, curvature, velocity and acceleration of images to change with changes in the place and orientation of the head, (ii) the relationship between head place and body posture is ill-defined, and (iii) mass-inertia loads on muscles vary with body configuration and affect the planning of minimum-effort movement. We describe a partitioned visuospatial memory consisting of the warped posture-and-place-encoded images of the environment, including images of visible body parts. We depict synergies as low-dimensional submanifolds embedded in the warped posture-and-place manifold of the body. A task-appropriate synergy corresponds to a submanifold containing those postures and places that match the posture-and-place-encoded visual images that encompass the required visual goal. We set out a reinforcement learning process that tunes an error-reducing association memory network to minimize any mismatch, thereby coupling visual goals with compatible movement synergies. A simulation of a two-degrees-of-freedom arm illustrates that, despite warping of both visual space and posture space, there exists a smooth one-to-one and onto invertible mapping between vision and proprioception.
Affiliation(s)
- Peter D. Neilson
- School of Electrical Engineering and Telecommunications, University of New South Wales, Sydney, NSW 2052, Australia
- Megan D. Neilson
- Independent Researcher, late School of Electrical Engineering and Telecommunications, University of New South Wales, Sydney, NSW 2052, Australia
- Robin T. Bye
- Cyber-Physical Systems Laboratory, Department of ICT and Natural Sciences, NTNU—Norwegian University of Science and Technology, Postboks 1517, NO-6009 Ålesund, Norway
9. Assessing the relative contribution of vision to odometry via manipulations of gait in an over-ground homing task. Exp Brain Res 2021; 239:1305-1316. PMID: 33630131. DOI: 10.1007/s00221-021-06066-z.
Abstract
The visual, vestibular, and haptic perceptual systems are each able to detect self-motion. Such information can be integrated during locomotion to perceive traversed distances. The process of distance integration is referred to as odometry. Visual odometry relies on information in optic flow patterns. For haptic odometry, such information is associated with leg movement patterns. Recently, it has been shown that haptic odometry is differently calibrated for different types of gaits. Here, we use this fact to examine the relative contributions of the perceptual systems to odometry. We studied a simple homing task in which participants travelled set distances away from an initial starting location (outbound phase), before turning and attempting to walk back to that location (inbound phase). We manipulated whether outbound gait was a walk or a gallop-walk. We also manipulated the outbound availability of optic flow. Inbound reports were performed via walking with eyes closed. Consistent with previous studies of haptic odometry, inbound reports were shorter when the outbound gait was a gallop-walk. We showed that the availability of optic flow decreased this effect. In contrast, the availability of optic flow did not have an observable effect when the outbound gait was walking. We interpreted this to suggest that visual odometry and haptic odometry via walking are similarly calibrated. By measuring the decrease in shortening in the gallop-walk condition, and scaling it relative to the walk condition, we estimated a relative contribution of optic flow to odometry of 41%. Our results present a proof of concept for a new, potentially more generalizable, method for examining the contributions of different perceptual systems to odometry, and by extension, path integration. We discuss implications for understanding human wayfinding.
10. Di Marco S, Fattori P, Galati G, Galletti C, Lappe M, Maltempo T, Serra C, Sulpizio V, Pitzalis S. Preference for locomotion-compatible curved paths and forward direction of self-motion in somatomotor and visual areas. Cortex 2021; 137:74-92. PMID: 33607346. DOI: 10.1016/j.cortex.2020.12.021.
Abstract
During locomotion, leg movements define the direction of walking (forward or backward) and the path one is taking (straight or curved). These aspects of locomotion produce characteristic visual motion patterns during movement. Here, we tested whether cortical regions responding to either egomotion-compatible visual motion, or leg movements, or both, are sensitive to these locomotion-relevant aspects of visual motion. We compared a curved path (typically the visual feedback of a changing direction of movement in the environment) to a linear path for simulated forward and backward motion in an event-related fMRI experiment. We used an individual surface-based approach and two functional localizers to define (1) six egomotion-related areas (V6+, V3A, intraparietal motion area [IPSmot], cingulate sulcus visual area [CSv], posterior cingulate area [pCi], posterior insular cortex [PIC]) using the flow field stimulus and (2) three leg-related cortical regions (human PEc [hPEc], human PE [hPE] and primary somatosensory cortex [S-I]) using a somatomotor task. Then, we extracted the response from all these regions with respect to the main event-related fMRI experiment, consisting of passive viewing of an optic flow stimulus, simulating a forward or backward direction of self-motion in either linear or curved path. Results showed that some regions have a significant preference for the curved path motion (hPEc, hPE, S-I, IPSmot) or a preference for the forward motion (V3A), while other regions have both a significant preference for the curved path motion and for the forward compared to backward motion (V6+, CSv, pCi). We did not find any significant effects of the present stimuli in PIC. Since controlling locomotion mainly means controlling changes of walking direction in the environment during forward self-motion, such a differential functional profile among these cortical regions suggests that they play a differentiated role in the visual guidance of locomotion.
Affiliation(s)
- Sara Di Marco
- Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy; Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Patrizia Fattori
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Gaspare Galati
- Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy; Brain Imaging Laboratory, Department of Psychology, Sapienza University, Rome, Italy
- Claudio Galletti
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Markus Lappe
- Institute for Psychology, University of Muenster, Muenster, Germany; Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Muenster, Germany
- Teresa Maltempo
- Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy; Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Chiara Serra
- Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy; Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Valentina Sulpizio
- Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy; Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Sabrina Pitzalis
- Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy; Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
11. Burlingham CS, Heeger DJ. Heading perception depends on time-varying evolution of optic flow. Proc Natl Acad Sci U S A 2020; 117:33161-33169. PMID: 33328275. PMCID: PMC7776640. DOI: 10.1073/pnas.2022984117.
Abstract
There is considerable support for the hypothesis that perception of heading in the presence of rotation is mediated by instantaneous optic flow. This hypothesis, however, has never been tested. We introduce a method, termed "nonvarying phase motion," for generating a stimulus that conveys a single instantaneous optic flow field, even though the stimulus is presented for an extended period of time. In this experiment, observers viewed stimulus videos and performed a forced-choice heading discrimination task. For nonvarying phase motion, observers made large errors in heading judgments. This suggests that instantaneous optic flow is insufficient for heading perception in the presence of rotation. These errors were mostly eliminated when the velocity of phase motion was varied over time to convey the evolving sequence of optic flow fields corresponding to a particular heading. This demonstrates that heading perception in the presence of rotation relies on the time-varying evolution of optic flow. We hypothesize that the visual system accurately computes heading, despite rotation, based on optic acceleration, the temporal derivative of optic flow.
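The quantity hypothesized in the final sentence, optic acceleration as the temporal derivative of optic flow, can be written down directly; the sketch below is purely illustrative and is not the authors' stimulus or analysis code.

```python
# Sketch of "optic acceleration" as the temporal derivative of optic flow:
# difference successive flow fields over time. Synthetic example only.
import numpy as np

def optic_acceleration(flow_sequence, dt):
    """flow_sequence: array of shape (T, H, W, 2) holding (u, v) per frame.
    Returns the frame-to-frame temporal derivative, shape (T-1, H, W, 2)."""
    return np.diff(flow_sequence, axis=0) / dt

# Frames of a slowly rotating expansion pattern yield a nonzero acceleration field.
T, H, W = 3, 32, 32
flows = np.zeros((T, H, W, 2))
y, x = np.mgrid[-16:16, -16:16].astype(float)
for t in range(T):
    angle = 0.02 * t                              # flow field evolving over time
    flows[t, ..., 0] = 0.1 * x - angle * y
    flows[t, ..., 1] = 0.1 * y + angle * x
accel = optic_acceleration(flows, dt=1 / 60)
print(accel.shape)                                # (2, 32, 32, 2)
```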
Affiliation(s)
- David J Heeger
- Department of Psychology, New York University, New York, NY 10003
- Center for Neural Science, New York University, New York, NY 10003
12. Hülemeier AG, Lappe M. Combining biological motion perception with optic flow analysis for self-motion in crowds. J Vis 2020; 20:7. PMID: 32902593. PMCID: PMC7488621. DOI: 10.1167/jov.20.9.7.
Abstract
Heading estimation from optic flow relies on the assumption that the visual world is rigid. This assumption is violated when one moves through a crowd of people, a common and socially important situation. The motion of people in the crowd contains cues to their translation in the form of the articulation of their limbs, known as biological motion. We investigated how translation and articulation of biological motion influence heading estimation from optic flow for self-motion in a crowd. Participants had to estimate their heading during simulated self-motion toward a group of walkers who collectively walked in a single direction. We found that the natural combination of translation and articulation produces surprisingly small heading errors. In contrast, experimental conditions that either present only translation or only articulation produced strong idiosyncratic biases. The individual biases explained well the variance in the natural combination. A second experiment showed that the benefit of articulation and the bias produced by articulation were specific to biological motion. An analysis of the differences in biases between conditions and participants showed that different perceptual mechanisms contribute to heading perception in crowds. We suggest that coherent group motion affects the reference frame of heading perception from optic flow.
Affiliation(s)
- Markus Lappe
- Department of Psychology, University of Münster, Münster, Germany
13. Lakshminarasimhan KJ, Avila E, Neyhart E, DeAngelis GC, Pitkow X, Angelaki DE. Tracking the Mind's Eye: Primate Gaze Behavior during Virtual Visuomotor Navigation Reflects Belief Dynamics. Neuron 2020; 106:662-674.e5. PMID: 32171388. PMCID: PMC7323886. DOI: 10.1016/j.neuron.2020.02.023.
Abstract
To take the best actions, we often need to maintain and update beliefs about variables that cannot be directly observed. To understand the principles underlying such belief updates, we need tools to uncover subjects' belief dynamics from natural behavior. We tested whether eye movements could be used to infer subjects' beliefs about latent variables using a naturalistic navigation task. Humans and monkeys navigated to a remembered goal location in a virtual environment that provided optic flow but lacked explicit position cues. We observed eye movements that appeared to continuously track the goal location even when no visible target was present there. Accurate goal tracking was associated with improved task performance, and inhibiting eye movements in humans impaired navigation precision. These results suggest that gaze dynamics play a key role in action selection during challenging visuomotor behaviors and may possibly serve as a window into the subject's dynamically evolving internal beliefs.
Affiliation(s)
- Kaushik J Lakshminarasimhan
- Center for Neural Science, New York University, New York, NY, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Eric Avila
- Center for Neural Science, New York University, New York, NY, USA
- Erin Neyhart
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Xaq Pitkow
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA; Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA
- Dora E Angelaki
- Center for Neural Science, New York University, New York, NY, USA; Tandon School of Engineering, New York University, New York, NY, USA
14. Riddell H, Li L, Lappe M. Heading perception from optic flow in the presence of biological motion. J Vis 2019; 19:25. PMID: 31868898. DOI: 10.1167/19.14.25.
Abstract
We investigated whether biological motion biases heading estimation from optic flow in a similar manner to nonbiological moving objects. In two experiments, observers judged their heading from displays depicting linear translation over a random-dot ground with normal point light walkers, spatially scrambled point light walkers, or laterally moving objects composed of random dots. In Experiment 1, we found that both types of walkers biased heading estimates similarly to moving objects when they obscured the focus of expansion of the background flow. In Experiment 2, we also found that walkers biased heading estimates when they did not obscure the focus of expansion. These results show that both regular and scrambled biological motion affect heading estimation in a similar manner to simple moving objects, and suggest that biological motion is not preferentially processed for the perception of self-motion.
Affiliation(s)
- Hugh Riddell
- Institute for Psychology and Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Germany
- Li Li
- Faculty of Arts and Science, NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, China
- Markus Lappe
- Institute for Psychology and Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Germany
15. Solari F, Caramenti M, Chessa M, Pretto P, Bülthoff HH, Bresciani JP. A Biologically-Inspired Model to Predict Perceived Visual Speed as a Function of the Stimulated Portion of the Visual Field. Front Neural Circuits 2019; 13:68. PMID: 31736715. PMCID: PMC6831620. DOI: 10.3389/fncir.2019.00068.
Abstract
Spatial orientation relies on a representation of the position and orientation of the body relative to the surrounding environment. When navigating in the environment, this representation must be constantly updated taking into account the direction, speed, and amplitude of body motion. Visual information plays an important role in this updating process, notably via optical flow. Here, we systematically investigated how the size and the simulated portion of the field of view (FoV) affect the perceived visual speed of human observers. We propose a computational model to account for the patterns of human data. This model is composed of hierarchical layers of cells that model the neural processing stages of the dorsal visual pathway. Specifically, we consider that the activity of the MT area is processed by populations of modeled MST cells that are sensitive to the differential components of the optical flow, thus producing selectivity for specific patterns of optical flow. Our results indicate that the proposed computational model is able to describe the experimental evidence and could be used to predict expected biases of speed perception for conditions in which only some portions of the visual field are visible.
Affiliation(s)
- Fabio Solari
- Department of Informatics, Bioengineering, Robotics, and Systems Engineering, University of Genova, Genoa, Italy
- Martina Caramenti
- Department of Neuroscience and Movement Science, University of Fribourg, Fribourg, Switzerland
- Istituto di Bioimmagini e Fisiologia Molecolare, Consiglio Nazionale delle Ricerche, Segrate, Italy
- Manuela Chessa
- Department of Informatics, Bioengineering, Robotics, and Systems Engineering, University of Genova, Genoa, Italy
- Heinrich H. Bülthoff
- Department of Cognitive and Computational Psychophysics, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Jean-Pierre Bresciani
- Department of Neuroscience and Movement Science, University of Fribourg, Fribourg, Switzerland
- University Grenoble Alpes, LPNC, Grenoble, France
16. Retinal Stabilization Reveals Limited Influence of Extraretinal Signals on Heading Tuning in the Medial Superior Temporal Area. J Neurosci 2019; 39:8064-8078. PMID: 31488610. DOI: 10.1523/jneurosci.0388-19.2019.
Abstract
Heading perception in primates depends heavily on visual optic-flow cues. Yet during self-motion, heading percepts remain stable, even though smooth-pursuit eye movements often distort optic flow. According to theoretical work, self-motion can be represented accurately by compensating for these distortions in two ways: via retinal mechanisms or via extraretinal efference-copy signals, which predict the sensory consequences of movement. Psychophysical evidence strongly supports the efference-copy hypothesis, but physiological evidence remains inconclusive. Neurons that signal the true heading direction during pursuit are found in visual areas of monkey cortex, including the dorsal medial superior temporal area (MSTd). Here we measured heading tuning in MSTd using a novel stimulus paradigm, in which we stabilize the optic-flow stimulus on the retina during pursuit. This approach isolates the effects on neuronal heading preferences of extraretinal signals, which remain active while the retinal stimulus is prevented from changing. Our results from 3 female monkeys demonstrate a significant but small influence of extraretinal signals on the preferred heading directions of MSTd neurons. Under our stimulus conditions, which are rich in retinal cues, we find that retinal mechanisms dominate physiological corrections for pursuit eye movements, suggesting that extraretinal cues, such as predictive efference-copy mechanisms, have a limited role under naturalistic conditions.
SIGNIFICANCE STATEMENT Sensory systems discount stimulation caused by an animal's own behavior. For example, eye movements cause irrelevant retinal signals that could interfere with motion perception. The visual system compensates for such self-generated motion, but how this happens is unclear. Two theoretical possibilities are a purely visual calculation or one using an internal signal of eye movements to compensate for their effects. The latter can be isolated by experimentally stabilizing the image on a moving retina, but this approach has never been adopted to study motion physiology. Using this method, we find that extraretinal signals have little influence on activity in visual cortex, whereas visually based corrections for ongoing eye movements have stronger effects and are likely most important under real-world conditions.
17. Leeder T, Fallahtafti F, Schieber M, Myers SA, Blaskewicz Boron J, Yentes JM. Optic flow improves step width and length in older adults while performing dual task. Aging Clin Exp Res 2019; 31:1077-1086. PMID: 30367447. DOI: 10.1007/s40520-018-1059-x.
Abstract
BACKGROUND: Dual-task paradigms are used to investigate gait and cognitive declines in older adults (OA). Optic-flow here refers to a virtual reality environment in which the scene flows past the subject while walking on a treadmill, mimicking real-life locomotion.
AIMS: To investigate the cost of environment (no optic-flow vs. optic-flow) while completing single- and dual-task walking, and dual-task costs (DTC; single- vs. dual-task) in optic-flow and no optic-flow environments.
METHODS: Twenty OA and seven younger adults (YA) walked on a self-paced treadmill in 3-min segments per task in both environments. The five task conditions were: no task, semantic fluency (category), phonemic fluency (letters), word reading, and serial subtraction.
RESULTS: OA had a benefit of optic-flow compared to no optic-flow for step width (p = 0.015) and step length (p = 0.045) during letters, relative to YA. During letters, OA showed an improvement in step width DTC, whereas YA had a decrement in step width DTC from no optic-flow to optic-flow (p = 0.038). During serial subtraction, OA had less step width DTC than YA in both environments (p = 0.02).
DISCUSSION: During letters, step width and step length improved in OA while walking in optic-flow, and step width DTC differed between the two groups. Sensory information from optic-flow appears to benefit OA. The letters task relies more on verbal ability and word knowledge, which are preserved in aging, whereas YA use a more complex speech style during dual tasking, searching for complex words and speaking faster.
CONCLUSIONS: OA can benefit from optic-flow, improving spatial gait parameters, specifically step width, during dual-task walking.
18.
Abstract
The ability to navigate through crowds of moving people accurately, efficiently, and without causing collisions is essential for our day-to-day lives. Vision provides key information about one's own self-motion as well as the motions of other people in the crowd. These two types of information (optic flow and biological motion) have each been investigated extensively; however, surprisingly little research has been dedicated to investigating how they are processed when presented concurrently. Here, we showed that patterns of biological motion have a negative impact on visual-heading estimation when people within the crowd move their limbs but do not move through the scene. Conversely, limb motion facilitates heading estimation when walkers move independently through the scene. Interestingly, this facilitation occurs for crowds containing both regular and perturbed depictions of humans, suggesting that it is likely caused by low-level motion cues inherent in the biological motion of other people.
Affiliation(s)
- Hugh Riddell
- Department of Psychology, Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Münster
- Markus Lappe
- Department of Psychology, Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Münster
19. Cottereau BR, Smith AT, Rima S, Fize D, Héjja-Brichard Y, Renaud L, Lejards C, Vayssière N, Trotter Y, Durand JB. Processing of Egomotion-Consistent Optic Flow in the Rhesus Macaque Cortex. Cereb Cortex 2018; 27:330-343. PMID: 28108489. PMCID: PMC5939222. DOI: 10.1093/cercor/bhw412.
Abstract
The cortical network that processes visual cues to self-motion was characterized with functional magnetic resonance imaging in 3 awake behaving macaques. The experimental protocol was similar to previous human studies in which the responses to a single large optic flow patch were contrasted with responses to an array of 9 similar flow patches. This distinguishes cortical regions where neurons respond to flow in their receptive fields regardless of surrounding motion from those that are sensitive to whether the overall image arises from self-motion. In all 3 animals, significant selectivity for egomotion-consistent flow was found in several areas previously associated with optic flow processing, notably the dorsal medial superior temporal area, the ventral intraparietal area, and VPS. It was also seen in areas 7a (Opt), STPm, FEFsem, FEFsac, and in a region of the cingulate sulcus that may be homologous with human area CSv. Selectivity for egomotion-compatible flow was never total but was particularly strong in VPS and putative macaque CSv. Direct comparison of results with the equivalent human studies reveals several commonalities but also some differences.
Affiliation(s)
- Benoit R Cottereau
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France; Centre National de la Recherche Scientifique, Toulouse, France
- Andrew T Smith
- Department of Psychology, Royal Holloway, University of London, Egham, UK
- Samy Rima
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France; Centre National de la Recherche Scientifique, Toulouse, France
- Denis Fize
- Laboratoire d'Anthropologie Moléculaire et Imagerie de Synthèse, CNRS-Université de Toulouse, Toulouse, France
- Yseult Héjja-Brichard
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France; Centre National de la Recherche Scientifique, Toulouse, France
- Luc Renaud
- CNRS, CE2F PRIM UMS3537, Marseille, France; Aix Marseille Université, Centre d'Exploration Fonctionnelle et de Formation, Marseille, France
- Camille Lejards
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France; Centre National de la Recherche Scientifique, Toulouse, France
- Nathalie Vayssière
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France; Centre National de la Recherche Scientifique, Toulouse, France
- Yves Trotter
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France; Centre National de la Recherche Scientifique, Toulouse, France
- Jean-Baptiste Durand
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France; Centre National de la Recherche Scientifique, Toulouse, France
20. Kuang S, Shi J, Wang Y, Zhang T. Where are you heading? Flexible integration of retinal and extra-retinal cues during self-motion perception. Psych J 2017; 6:141-152. PMID: 28514063. DOI: 10.1002/pchj.165.
Abstract
As we move forward in the environment, we experience a radial expansion of the retinal image, wherein the center corresponds to the instantaneous direction of self-motion. Humans can precisely perceive their heading direction even when the retinal motion is distorted by gaze shifts due to eye/body rotations. Previous studies have suggested that both retinal and extra-retinal strategies can compensate for the retinal image distortion. However, the relative contributions of each strategy remain unclear. To address this issue, we devised a two-alternative-headings discrimination task, in which participants had either real or simulated pursuit eye movements. The two conditions had the same retinal input but either with or without extra-retinal eye movement signals. Thus, the behavioral difference between conditions served as a metric of extra-retinal contribution. We systematically and independently manipulated pursuit speed, heading speed, and the reliability of retinal signals. We found that the levels of extra-retinal contributions increased with increasing pursuit speed (stronger extra-retinal signal), and with decreasing heading speed (weaker retinal signal). In addition, extra-retinal contributions also increased as we corrupted retinal signals with noise. Our results revealed that the relative magnitude of retinal and extra-retinal contributions was not fixed but rather flexibly adjusted to each specific task condition. This task-dependent, flexible integration appears to take the form of a reliability-based weighting scheme that maximizes heading performance.
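The reliability-based weighting scheme proposed above corresponds to the standard inverse-variance cue-combination rule; a minimal sketch with purely illustrative numbers follows (the cue values and noise levels are assumptions, not data from the study).

```python
# Sketch of reliability-weighted cue combination: each cue is weighted in
# proportion to its inverse variance, w_i = (1/sigma_i^2) / sum_j (1/sigma_j^2).
import numpy as np

def reliability_weighted_estimate(estimates, sigmas):
    """Combine cue estimates with inverse-variance weights; returns estimate, weights, combined sigma."""
    estimates, sigmas = np.asarray(estimates, float), np.asarray(sigmas, float)
    reliabilities = 1.0 / sigmas**2
    weights = reliabilities / reliabilities.sum()
    combined = np.sum(weights * estimates)
    combined_sigma = np.sqrt(1.0 / reliabilities.sum())
    return combined, weights, combined_sigma

# Heading from retinal flow (noisier here) vs. an extra-retinal pursuit signal (more reliable here):
heading, w, sigma = reliability_weighted_estimate(estimates=[8.0, 4.0], sigmas=[4.0, 2.0])
print(heading, w)   # 4.8 deg; weights [0.2, 0.8] -> the more reliable cue dominates
```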
Affiliation(s)
- Shenbing Kuang
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Jinfu Shi
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Yang Wang
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Tao Zhang
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
21.
Abstract
In the current study, we explored observers' use of two distinct analyses for determining their direction of motion, or heading: a scene-based analysis and a motion-based analysis. In two experiments, subjects viewed sequentially presented, paired digitized images of real-world scenes and judged the direction of heading; the pairs were presented with various interstimulus intervals (ISIs). In Experiment 1, subjects could determine heading when the two frames were separated with a 1,000-ms ISI, long enough to eliminate apparent motion. In Experiment 2, subjects performed two tasks, a path-of-motion task and a memory-load task, under three different ISIs, 50 ms, 500 ms, and 1,000 ms. Heading accuracy decreased with an increase in ISI. Increasing memory load influenced heading judgments only for the longer ISI when motion-based information was not available. These results are consistent with the hypothesis that the scene-based analysis has a coarse spatial representation, is a sustained temporal process, and is capacity limited, whereas the motion-based analysis has a fine spatial resolution, is a transient temporal process, and is capacity unlimited.
Affiliation(s)
- Sowon Hahn
- University of California at Riverside, USA
22. Ratzlaff M, Nawrot M. A Pursuit Theory Account for the Perception of Common Motion in Motion Parallax. Perception 2016; 45:991-1007. PMID: 27060180. PMCID: PMC4990516. DOI: 10.1177/0301006616643679.
Abstract
The visual system uses an extraretinal pursuit eye movement signal to disambiguate the perception of depth from motion parallax. Visual motion in the same direction as the pursuit is perceived nearer in depth, while visual motion in the direction opposite to pursuit is perceived farther in depth. This explanation of depth sign applies to either an allocentric frame of reference centered on the fixation point or an egocentric frame of reference centered on the observer. A related problem is that of depth order when two stimuli have a common direction of motion. The first psychophysical study determined whether perception of egocentric depth order is adequately explained by a model employing an allocentric framework, especially when the motion parallax stimuli have common rather than divergent motion. A second study determined whether a reversal in perceived depth order, produced by a reduction in pursuit velocity, is also explained by this model employing this allocentric framework. The results show that an allocentric model can explain both the egocentric perception of depth order with common motion and the perceptual depth order reversal created by a reduction in pursuit velocity. We conclude that an egocentric model is not the only explanation for perceived depth order in these common motion conditions.
Affiliation(s)
- Michael Ratzlaff
- Center for Visual and Cognitive Neuroscience, Department of Psychology, North Dakota State University, Fargo, ND, USA
- Mark Nawrot
- Center for Visual and Cognitive Neuroscience, Department of Psychology, North Dakota State University, Fargo, ND, USA
23. 3D Visual Response Properties of MSTd Emerge from an Efficient, Sparse Population Code. J Neurosci 2016; 36:8399-8415. PMID: 27511012. DOI: 10.1523/jneurosci.0396-16.2016.
Abstract
Neurons in the dorsal subregion of the medial superior temporal (MSTd) area of the macaque respond to large, complex patterns of retinal flow, implying a role in the analysis of self-motion. Some neurons are selective for the expanding radial motion that occurs as an observer moves through the environment ("heading"), and computational models can account for this finding. However, ample evidence suggests that MSTd neurons exhibit a continuum of visual response selectivity to large-field motion stimuli. Furthermore, the underlying computational principles by which these response properties are derived remain poorly understood. Here we describe a computational model of macaque MSTd based on the hypothesis that neurons in MSTd efficiently encode the continuum of large-field retinal flow patterns on the basis of inputs received from neurons in MT with receptive fields that resemble basis vectors recovered with non-negative matrix factorization. These assumptions are sufficient to quantitatively simulate neurophysiological response properties of MSTd cells, such as 3D translation and rotation selectivity, suggesting that these properties might simply be a byproduct of MSTd neurons performing dimensionality reduction on their inputs. At the population level, model MSTd accurately predicts eye velocity and heading using a sparse distributed code, consistent with the idea that biological MSTd might be well equipped to efficiently encode various self-motion variables. The present work aims to add some structure to the often contradictory findings about macaque MSTd, and offers a biologically plausible account of a wide range of visual response properties ranging from single-unit selectivity to population statistics.
SIGNIFICANCE STATEMENT Using a dimensionality reduction technique known as non-negative matrix factorization, we found that a variety of medial superior temporal (MSTd) neural response properties could be derived from MT-like input features. The responses that emerge from this technique, such as 3D translation and rotation selectivity, spiral tuning, and heading selectivity, can account for a number of empirical results. These findings (1) provide a further step toward a scientific understanding of the often nonintuitive response properties of MSTd neurons; (2) suggest that response properties, such as complex motion tuning and heading selectivity, might simply be a byproduct of MSTd neurons performing dimensionality reduction on their inputs; and (3) imply that motion perception in the cortex is consistent with ideas from the efficient-coding and free-energy principles.
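The dimensionality-reduction step at the heart of this account, non-negative matrix factorization over MT-like inputs, can be sketched with an off-the-shelf implementation. The data below are synthetic non-negative stand-ins, not the model's MT front end, and the layer sizes are arbitrary assumptions.

```python
# Sketch of non-negative matrix factorization (NMF) over non-negative, MT-like
# responses, with the learned basis playing the role of model MSTd receptive fields.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_stimuli, n_mt_units, n_mstd_units = 200, 400, 16

# Stand-in for MT population responses to large-field flow stimuli (non-negative).
mt_responses = rng.gamma(shape=2.0, scale=1.0, size=(n_stimuli, n_mt_units))

model = NMF(n_components=n_mstd_units, init="nndsvda", max_iter=500, random_state=0)
mstd_activations = model.fit_transform(mt_responses)   # (stimuli x model MSTd units)
mstd_basis = model.components_                          # (model MSTd units x MT inputs)

print(mstd_activations.shape, mstd_basis.shape)         # (200, 16) (16, 400)
print((mstd_activations < 1e-3 * mstd_activations.max()).mean())  # crude sparsity index of the code
```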
24. Layton OW, Fajen BR. Competitive Dynamics in MSTd: A Mechanism for Robust Heading Perception Based on Optic Flow. PLoS Comput Biol 2016; 12:e1004942. PMID: 27341686. PMCID: PMC4920404. DOI: 10.1371/journal.pcbi.1004942.
Abstract
Human heading perception based on optic flow is not only accurate, it is also remarkably robust and stable. These qualities are especially apparent when observers move through environments containing other moving objects, which introduce optic flow that is inconsistent with observer self-motion and therefore uninformative about heading direction. Moving objects may also occupy large portions of the visual field and occlude regions of the background optic flow that are most informative about heading perception. The fact that heading perception is biased by no more than a few degrees under such conditions attests to the robustness of the visual system and warrants further investigation. The aim of the present study was to investigate whether recurrent, competitive dynamics among MSTd neurons that serve to reduce uncertainty about heading over time offer a plausible mechanism for capturing the robustness of human heading perception. Simulations of existing heading models that do not contain competitive dynamics yield heading estimates that are far more erratic and unstable than human judgments. We present a dynamical model of primate visual areas V1, MT, and MSTd based on that of Layton, Mingolla, and Browning that is similar to the other models, except that the model includes recurrent interactions among model MSTd neurons. Competitive dynamics stabilize the model's heading estimate over time, even when a moving object crosses the future path. Soft winner-take-all dynamics enhance units that code a heading direction consistent with the time history and suppress responses to transient changes to the optic flow field. Our findings support recurrent competitive temporal dynamics as a crucial mechanism underlying the robustness and stability of perception of heading.
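The recurrent soft winner-take-all mechanism described above can be sketched with a simple rate model of heading-tuned units; the parameters and stimulus schedule below are illustrative assumptions and do not come from the published model.

```python
# Sketch of recurrent soft winner-take-all dynamics over heading-tuned units:
# leaky integration with self-excitation and pooled lateral inhibition, so the
# population estimate resists a brief, object-like perturbation of the input.
import numpy as np

def soft_wta_step(activity, feedforward, dt=0.05, tau=0.2,
                  self_excitation=1.2, inhibition=0.8):
    """One Euler step of leaky integration with self-excitation and pooled inhibition."""
    pooled = inhibition * activity.sum()
    drive = feedforward + self_excitation * activity - pooled
    rate_change = (-activity + np.maximum(drive, 0.0)) / tau
    return activity + dt * rate_change

headings = np.linspace(-40, 40, 81)                    # preferred headings (deg)
activity = np.zeros_like(headings)
true_heading, transient_heading = 0.0, 20.0

for t in range(200):
    # Brief transient (e.g., a moving object) pulls feedforward input toward +20 deg.
    center = transient_heading if 80 <= t < 100 else true_heading
    feedforward = np.exp(-0.5 * ((headings - center) / 8.0) ** 2)
    activity = soft_wta_step(activity, feedforward)

print(headings[np.argmax(activity)])   # after the transient, the estimate settles back near 0 deg
```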
Collapse
Affiliation(s)
- Oliver W. Layton
- Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, New York, United States of America
| | - Brett R. Fajen
- Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, New York, United States of America
| |
Collapse
|
25
|
Greenlee M, Frank S, Kaliuzhna M, Blanke O, Bremmer F, Churan J, Cuturi LF, MacNeilage P, Smith A. Multisensory Integration in Self Motion Perception. Multisens Res 2016. [DOI: 10.1163/22134808-00002527] [Citation(s) in RCA: 43] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
Self-motion perception involves the integration of visual, vestibular, somatosensory, and motor signals. This article reviews findings from single-unit electrophysiology, functional and structural magnetic resonance imaging, and psychophysics to present an update on how the human and non-human primate brain integrates multisensory information to estimate one's position and motion in space. The results indicate that a network of regions in the non-human primate and human brain processes self-motion cues from the different sense modalities.
Collapse
Affiliation(s)
- Mark W. Greenlee
- Institute of Experimental Psychology, University of Regensburg, Regensburg, Germany
| | - Sebastian M. Frank
- Institute of Experimental Psychology, University of Regensburg, Regensburg, Germany
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
| | - Mariia Kaliuzhna
- Center for Neuroprosthetics, Laboratory of Cognitive Neuroscience, Ecole Polytechnique Fédérale de Lausanne, EPFL, Switzerland
| | - Olaf Blanke
- Center for Neuroprosthetics, Laboratory of Cognitive Neuroscience, Ecole Polytechnique Fédérale de Lausanne, EPFL, Switzerland
| | - Frank Bremmer
- Department of Neurophysics, University of Marburg, Marburg, Germany
| | - Jan Churan
- Department of Neurophysics, University of Marburg, Marburg, Germany
| | - Luigi F. Cuturi
- German Center for Vertigo, University Hospital of Munich, LMU, Munich, Germany
| | - Paul R. MacNeilage
- German Center for Vertigo, University Hospital of Munich, LMU, Munich, Germany
| | - Andrew T. Smith
- Department of Psychology, Royal Holloway, University of London, UK
| |
Collapse
|
26
|
Layton OW, Fajen BR. The temporal dynamics of heading perception in the presence of moving objects. J Neurophysiol 2015; 115:286-300. [PMID: 26510765 DOI: 10.1152/jn.00866.2015] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2015] [Accepted: 10/26/2015] [Indexed: 11/22/2022] Open
Abstract
Many forms of locomotion rely on the ability to accurately perceive one's direction of locomotion (i.e., heading) based on optic flow. Although accurate in rigid environments, heading judgments may be biased when independently moving objects are present. The aim of this study was to systematically investigate the conditions in which moving objects influence heading perception, with a focus on the temporal dynamics and the mechanisms underlying this bias. Subjects viewed stimuli simulating linear self-motion in the presence of a moving object and judged their direction of heading. Experiments 1 and 2 revealed that heading perception is biased when the object crosses or almost crosses the observer's future path toward the end of the trial, but not when the object crosses earlier in the trial. Nonetheless, heading perception is not based entirely on the instantaneous optic flow toward the end of the trial. This was demonstrated in Experiment 3 by varying the portion of the earlier part of the trial leading up to the last frame that was presented to subjects. When the stimulus duration was long enough to include the part of the trial before the moving object crossed the observer's path, heading judgments were less biased. The findings suggest that heading perception is affected by the temporal evolution of optic flow. The time course of dorsal medial superior temporal area (MSTd) neuron responses may play a crucial role in perceiving heading in the presence of moving objects, a property not captured by many existing models.
Collapse
Affiliation(s)
- Oliver W Layton
- Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, New York
| | - Brett R Fajen
- Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, New York
| |
Collapse
|
27
|
Perrone JA, Liston DB. Redundancy reduction explains the expansion of visual direction space around the cardinal axes. Vision Res 2015; 111:31-42. [PMID: 25888929 DOI: 10.1016/j.visres.2015.03.020] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2014] [Revised: 02/21/2015] [Accepted: 03/27/2015] [Indexed: 11/30/2022]
Abstract
Motion direction discrimination in humans is worse for oblique directions than for the cardinal directions (the oblique effect). For some unknown reason, the human visual system makes systematic errors in the estimation of particular motion directions; a direction displacement near a cardinal axis appears larger than it really is, whereas the same displacement near an oblique axis appears smaller. Although the perceptual effects are robust and clearly measurable in smooth pursuit eye movements, all attempts to identify the neural underpinnings of the oblique effect have failed. Here we show that a model of image velocity estimation based on the known properties of neurons in primary visual cortex (V1) and the middle temporal (MT) visual area of the primate brain produces the oblique effect. We also provide an explanation for the unusual asymmetric patterns of inhibition that have been found surrounding MT neurons. These patterns are consistent with a mechanism within the visual system that prevents redundant velocity signals from being passed on to the next motion-integration stage (the dorsal medial superior temporal area, MSTd). We show that model redundancy-reduction mechanisms within the MT-MSTd pathway produce the oblique effect.
Collapse
Affiliation(s)
- John A Perrone
- The School of Psychology, University of Waikato, Hamilton, New Zealand.
| | - Dorion B Liston
- San Jose State University, San Jose, CA, USA; NASA Ames Research Center, Moffett Field, CA, USA
| |
Collapse
|
28
|
Lich M, Bremmer F. Self-motion perception in the elderly. Front Hum Neurosci 2014; 8:681. [PMID: 25309379 PMCID: PMC4163979 DOI: 10.3389/fnhum.2014.00681] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2014] [Accepted: 08/14/2014] [Indexed: 11/18/2022] Open
Abstract
Self-motion through space generates a visual pattern called optic flow. It can be used to determine one's direction of self-motion (heading). Previous studies have already shown that this perceptual ability, which is of critical importance during everyday life, changes with age. In most of these studies, subjects were asked to judge whether they appeared to be heading to the left or right of a target. Thresholds were found to increase continuously with age. In our current study, we were interested in absolute rather than relative heading judgments, and in a potential neural correlate of an age-related deterioration of heading perception. Two groups, older test subjects and younger controls, were shown optic flow stimuli in a virtual-reality setup. Visual stimuli simulated self-motion through a 3-D cloud of dots, and subjects had to indicate their perceived heading direction after each trial. In different subsets of experiments we varied individually relevant stimulus parameters: presentation time, number of dots in the display, stereoscopic vs. non-stereoscopic stimulation, and motion coherence. We found decrements in heading performance with age for each stimulus parameter. In a final step, we aimed to determine a putative neural basis of this behavioral decline. To this end, we modified a neural network model that had previously been shown to reproduce and predict certain aspects of heading perception. We show that the observed data can be modeled by implementing an age-related loss of neurons in this neural network. We conclude that a continuous decline of certain aspects of motion perception, among them heading, might be based on an age-related progressive loss of groups of neurons activated by visual motion.
Collapse
Affiliation(s)
- Matthias Lich
- Department Neurophysics, Philipps-Universität Marburg Marburg, Germany
| | - Frank Bremmer
- Department Neurophysics, Philipps-Universität Marburg Marburg, Germany
| |
Collapse
|
29
|
Learning to navigate in a virtual world using optic flow and stereo disparity signals. Artificial Life and Robotics 2014. [DOI: 10.1007/s10015-014-0153-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
30
|
Kaminiarz A, Schlack A, Hoffmann KP, Lappe M, Bremmer F. Visual selectivity for heading in the macaque ventral intraparietal area. J Neurophysiol 2014; 112:2470-80. [PMID: 25122709 DOI: 10.1152/jn.00410.2014] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The patterns of optic flow seen during self-motion can be used to determine the direction of one's own heading. Tracking eye movements, which typically occur during everyday life, alter this task since they add further retinal image motion and (predictably) distort the retinal flow pattern. Humans employ both visual and nonvisual (extraretinal) information to solve a heading task in such cases. Likewise, it has been shown that neurons in the monkey medial superior temporal area (area MST) use both signals during the processing of self-motion information. In this article we report that neurons in the macaque ventral intraparietal area (area VIP) use visual information derived from the distorted flow patterns to encode heading during (simulated) eye movements. We recorded responses of VIP neurons to simple radial flow fields and to distorted flow fields that simulated self-motion plus eye movements. In 59% of the cases, cell responses compensated for the distortion and kept the same heading selectivity irrespective of different simulated eye movements. In addition, response modulations during real compared with simulated eye movements were smaller, consistent with reafferent signaling involved in the processing of the visual consequences of eye movements in area VIP. We conclude that the motion selectivities found in area VIP, like those in area MST, provide a way to successfully analyze and use flow fields during self-motion and simultaneous tracking movements.
Collapse
Affiliation(s)
| | - Anja Schlack
- Allgemeine Zoologie und Neurobiologie, University of Bochum, Bochum, Germany
| | - Klaus-Peter Hoffmann
- AG Neurophysik, University of Marburg, Marburg, Germany; Allgemeine Zoologie und Neurobiologie, University of Bochum, Bochum, Germany
| | - Markus Lappe
- Institut für Psychologie, University of Münster, Münster, Germany
| | - Frank Bremmer
- AG Neurophysik, University of Marburg, Marburg, Germany;
| |
Collapse
|
31
|
Royden CS, Holloway MA. Detecting moving objects in an optic flow field using direction- and speed-tuned operators. Vision Res 2014; 98:14-25. [PMID: 24607912 DOI: 10.1016/j.visres.2014.02.009] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2013] [Revised: 01/25/2014] [Accepted: 02/21/2014] [Indexed: 11/20/2022]
Abstract
An observer moving through a scene must be able to identify moving objects. Psychophysical results have shown that people can identify moving objects based on the speed or direction of their movement relative to the optic flow field generated by the observer's motion. Here we show that a model that uses speed- and direction-tuned units, whose responses are based on the response properties of cells in the primate visual cortex, can successfully identify the borders of moving objects in a scene through which an observer is moving.
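A minimal sketch of the general strategy (not the paper's operator model): compare each local image velocity with the radial flow expected from pure observer translation toward a known focus of expansion, and flag locations whose flow direction deviates beyond a threshold. The scene, depths, and threshold are illustrative assumptions.

```python
# Sketch: flag locations whose local image velocity is inconsistent with the radial flow
# expected from pure observer translation toward a known focus of expansion (FOE).
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, (400, 2))               # image locations
foe = np.array([0.0, 0.0])                       # focus of expansion (heading direction)
depth = rng.uniform(1.0, 5.0, 400)

flow = (pts - foe) / depth[:, None]              # self-motion flow: radial, scaled by inverse depth

# An independently moving object occupies a patch and adds its own velocity.
in_object = np.linalg.norm(pts - np.array([0.5, 0.2]), axis=1) < 0.2
flow[in_object] += np.array([-0.6, 0.3])

# Consistency test: angle between the observed flow and the radial direction from the FOE.
radial_dir = (pts - foe) / (np.linalg.norm(pts - foe, axis=1, keepdims=True) + 1e-9)
flow_dir = flow / (np.linalg.norm(flow, axis=1, keepdims=True) + 1e-9)
angle_dev = np.degrees(np.arccos(np.clip(np.sum(radial_dir * flow_dir, axis=1), -1, 1)))

flagged = angle_dev > 15.0                       # direction-inconsistent -> candidate object motion
print("hit rate:", np.mean(flagged[in_object]), "false alarms:", np.mean(flagged[~in_object]))
```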
Collapse
Affiliation(s)
- Constance S Royden
- Department of Mathematics and Computer Science, College of the Holy Cross, United States.
| | - Michael A Holloway
- Department of Mathematics and Computer Science, College of the Holy Cross, United States
| |
Collapse
|
32
|
Finley JM, Statton MA, Bastian AJ. A novel optic flow pattern speeds split-belt locomotor adaptation. J Neurophysiol 2013; 111:969-76. [PMID: 24335220 DOI: 10.1152/jn.00513.2013] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Visual input provides vital information for helping us modify our walking pattern. For example, artificial optic flow can drive changes in step length during locomotion and may also be useful for augmenting locomotor training for individuals with gait asymmetries. Here we asked whether optic flow could modify the acquisition of a symmetric walking pattern during split-belt treadmill adaptation. Participants walked on a split-belt treadmill while watching a virtual scene that produced artificial optic flow. For the Stance Congruent group, the scene moved at the slow belt speed at foot strike on the slow belt and then moved at the fast belt speed at foot strike on the fast belt. This approximates what participants would see if they moved over ground with the same walking pattern. For the Stance Incongruent group, the scene moved fast during slow stance and vice versa. In this case, flow speed does not match what the foot is experiencing, but predicts the belt speed for the next foot strike. Results showed that the Stance Incongruent group learned more quickly than the Stance Congruent group even though each group learned the same amount during adaptation. The increase in learning rate was primarily driven by changes in spatial control of each limb, rather than temporal control. Interestingly, when this alternating optic flow pattern was presented alone, no adaptation occurred. Our results demonstrate that an unnatural pattern of optic flow, one that predicts the belt speed on the next foot strike, can be used to enhance learning rate during split-belt locomotor adaptation.
Collapse
Affiliation(s)
- James M Finley
- Motion Analysis Laboratory, Kennedy Krieger Institute, Baltimore, Maryland
| | | | | |
Collapse
|
33
|
Abstract
Neuronal selectivity results from both excitatory and suppressive inputs to a given neuron. Suppressive influences can often significantly modulate neuronal responses and impart novel selectivity in the context of behaviorally relevant stimuli. In this work, we use a naturalistic optic flow stimulus to explore the responses of neurons in the middle temporal area (MT) of the alert macaque monkey; these responses are interpreted using a hierarchical model that incorporates relevant nonlinear properties of upstream processing in the primary visual cortex (V1). In this stimulus context, MT neuron responses can be predicted from distinct excitatory and suppressive components. Excitation is spatially localized and matches the measured preferred direction of each neuron. Suppression is typically composed of two distinct components: (1) a directionally untuned component, which appears to play the role of surround suppression and normalization; and (2) a direction-selective component, with a tuning width comparable to that of excitation and a distinct spatial footprint that usually partially overlaps with excitation. The direction preference of this direction-tuned suppression varies widely across MT neurons: approximately one-third have overlapping suppression tuned to the direction opposite that of excitation, and many other neurons have suppression with direction preferences similar to those of excitation. There is also a population of MT neurons with orthogonally oriented suppression. We demonstrate that direction-selective suppression can impart selectivity of MT neurons to more complex velocity fields and that it can be used for improved estimation of the three-dimensional velocity of moving objects. Thus, considering MT neurons in a complex stimulus context reveals a diverse set of computations likely relevant for visual processing in natural visual contexts.
Collapse
|
34
|
Maloney RT, Watson TL, Clifford CWG. Human cortical and behavioral sensitivity to patterns of complex motion at eccentricity. J Neurophysiol 2013; 110:2545-56. [DOI: 10.1152/jn.00445.2013] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Complex patterns of image motion (contracting, expanding, rotating, and spiraling fields) are important in the coordination of visually guided behaviors. Whereas specialized detectors in monkey visual cortex show selectivity for particular patterns of complex motion, their representation in human visual cortex remains unclear. In the present study, functional magnetic resonance imaging (fMRI) was used to investigate the sensitivity of functionally defined regions of human visual cortex to parametrically modulated complex motion trajectories, coupled with complementary psychophysical testing. A unique stimulus design made it possible to disambiguate the neural responses and psychophysical sensitivity to complex motions per se from the distribution of local motions relative to the fovea, which are known to enhance cortical activity when presented radial to fixation. This involved presenting several small, separate motion fields in the periphery in a manner that distinguished them from global optic flow patterns. The patterns were morphed through complex motion space in a systematic time-locked fashion when presented in the scanner. Anisotropies were observed in the fMRI signal, marked by an enhanced response to expanding vs. contracting fields, even in early visual cortex. Anisotropies in the psychophysical sensitivity measures followed a similar pattern that was correlated with activity in areas hV4, V5/MT, and MST. This represents the first systematic examination of complex motion perception at both a behavioral and neural level in human observers. The characteristic processing anisotropy revealed in both data sets can inform models of complex motion processing, particularly with respect to computations performed in early visual cortex.
Collapse
Affiliation(s)
- Ryan T. Maloney
- Colour, Form and Motion Laboratory, School of Psychology, Faculty of Science, The University of Sydney, Sydney, New South Wales, Australia
- Australian Research Council Centre of Excellence in Vision Science, The University of Sydney, Sydney, New South Wales, Australia
| | - Tamara L. Watson
- School of Social Sciences and Psychology, The University of Western Sydney, Bankstown, New South Wales, Australia
| | - Colin W. G. Clifford
- Colour, Form and Motion Laboratory, School of Psychology, Faculty of Science, The University of Sydney, Sydney, New South Wales, Australia
- Australian Research Council Centre of Excellence in Vision Science, The University of Sydney, Sydney, New South Wales, Australia
| |
Collapse
|
35
|
Raudies F, Ringbauer S, Neumann H. A bio-inspired, computational model suggests velocity gradients of optic flow locally encode ordinal depth at surface borders and globally they encode self-motion. Neural Comput 2013; 25:2421-49. [PMID: 23663150 DOI: 10.1162/neco_a_00479] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Visual navigation requires the estimation of self-motion as well as the segmentation of objects from the background. We suggest a definition of local velocity gradients to compute types of self-motion, segment objects, and compute local properties of optical flow fields, such as divergence, curl, and shear. Such velocity gradients are computed as velocity differences measured locally tangent and normal to the direction of flow. These differences are then rotated according to the local direction of flow to achieve independence of that direction. We propose a bio-inspired model for the computation of these velocity gradients for video sequences. Simulation results show that local gradients encode ordinal surface depth, assuming self-motion in a rigid scene or object motions in a nonrigid scene. For translational self-motion, velocity gradients can be used to distinguish between static and moving objects. The information about ordinal surface depth and self-motion can aid steering control for visual navigation.
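A concrete illustration of the flow-field quantities the model is built on: divergence, curl, and shear computed from finite-difference velocity gradients of a sampled flow field. This is standard vector-calculus bookkeeping, not the bio-inspired gradient model itself.

```python
# Divergence, curl, and shear of a sampled flow field from finite-difference gradients.
import numpy as np

y, x = np.mgrid[-1:1:64j, -1:1:64j]
# Example field: expansion (divergence) plus a small rotation (curl).
u = 0.8 * x - 0.3 * y
v = 0.8 * y + 0.3 * x

du_dy, du_dx = np.gradient(u, y[:, 0], x[0, :])
dv_dy, dv_dx = np.gradient(v, y[:, 0], x[0, :])

divergence = du_dx + dv_dy          # expansion / contraction
curl       = dv_dx - du_dy          # rotation
shear1     = du_dx - dv_dy          # deformation components
shear2     = du_dy + dv_dx

print(divergence.mean(), curl.mean(), shear1.mean(), shear2.mean())  # ~1.6, ~0.6, ~0, ~0
```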
Collapse
Affiliation(s)
- Florian Raudies
- Center for Computational Neuroscience and Neural Technology and Center of Excellence for Learning in Education, Science, and Technology, Boston University, Boston, MA 02215, USA.
| | | | | |
Collapse
|
36
|
Duijnhouwer J, Noest AJ, Lankheet MJM, van den Berg AV, van Wezel RJA. Speed and direction response profiles of neurons in macaque MT and MST show modest constraint line tuning. Front Behav Neurosci 2013; 7:22. [PMID: 23576963 PMCID: PMC3616296 DOI: 10.3389/fnbeh.2013.00022] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2012] [Accepted: 03/05/2013] [Indexed: 11/13/2022] Open
Abstract
Several models of heading detection during smooth pursuit rely on the assumption that local constraint-line tuning exists in large-scale motion detection templates. A motion detector that exhibits pure constraint-line tuning responds maximally to any 2D velocity in the set of vectors that can be decomposed into the central, or classic, preferred velocity (the shortest vector that still yields the maximum response) and any vector orthogonal to that. To test this assumption, we measured the firing rates of isolated middle temporal (MT) and medial superior temporal (MST) neurons to random dot stimuli moving in a range of directions and speeds. We found that, as a function of 2D velocity, the pooled responses were best fit with a 2D Gaussian profile with a factor of elongation, orthogonal to the central preferred velocity, of roughly 1.5 for MST and 1.7 for MT. This means that MT and MST cells are more sharply tuned for speed than they are for direction, and that they indeed show some level of constraint-line tuning. However, we argue that the observed elongation is insufficient to achieve the behavioral heading discrimination accuracy on the order of 1-2 degrees reported previously.
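A descriptive sketch of the reported tuning shape: a 2D Gaussian over velocity space, elongated orthogonal to the central preferred velocity by a chosen factor, so that responses fall off more slowly along the constraint line than along the preferred velocity. The sigma and elongation values are placeholders, not fitted values.

```python
# Elongated 2D Gaussian velocity tuning; sigma and elongation are placeholder values.
import numpy as np

def velocity_tuning(v, v_pref, sigma=1.0, elongation=1.7):
    """Response to 2D velocity v given preferred velocity v_pref (both in deg/s)."""
    v_pref = np.asarray(v_pref, float)
    along = v_pref / np.linalg.norm(v_pref)           # unit vector along preferred velocity
    ortho = np.array([-along[1], along[0]])           # constraint-line direction
    d = np.asarray(v, float) - v_pref
    a = d @ along                                     # deviation along preferred velocity
    o = d @ ortho                                     # deviation orthogonal to it
    return np.exp(-0.5 * (a ** 2 / sigma ** 2 + o ** 2 / (elongation * sigma) ** 2))

v_pref = [4.0, 0.0]
print(velocity_tuning([4.0, 0.0], v_pref))   # 1.0 at the preferred velocity
print(velocity_tuning([6.0, 0.0], v_pref))   # falls off faster along the preferred direction...
print(velocity_tuning([4.0, 2.0], v_pref))   # ...than orthogonal to it (constraint-line-like)
```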
Collapse
Affiliation(s)
- Jacob Duijnhouwer
- Center for Molecular and Behavioral Neuroscience, Rutgers University Newark, NJ, USA
| | | | | | | | | |
Collapse
|
37
|
Browning NA. A neural circuit for robust time-to-contact estimation based on primate MST. Neural Comput 2012; 24:2946-63. [PMID: 22845825 DOI: 10.1162/neco_a_00347] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Time-to-contact (TTC) estimation is beneficial for visual navigation. It can be estimated from an image projection, either in a camera or on the retina, by looking at the rate of expansion of an object. When expansion rate (E) is properly defined, TTC = 1/E. Primate dorsal MST cells have receptive field structures suited to the estimation of expansion and TTC. However, the role of MST cells in TTC estimation has been discounted because of large receptive fields, the fact that neither they nor preceding brain areas appear to decompose the motion field to estimate divergence, and a lack of experimental data. This letter demonstrates mathematically that template models of dorsal MST cells can be constructed such that the output of the template match provides an accurate and robust estimate of TTC. The template match extracts the relevant components of the motion field and scales them such that the output of each component of the template match is an estimate of expansion. It then combines these component estimates to provide a mean estimate of expansion across the object. The output of model MST provides a direct measure of TTC. The ViSTARS model of primate visual navigation was updated to incorporate the modified templates. In ViSTARS and in primates, speed is represented as a population code in V1 and MT. A population code for speed complicates TTC estimation from a template match. Results presented in this letter demonstrate that the updated template model of MST accurately codes TTC across a population of model MST cells. We conclude that the updated template model of dorsal MST simultaneously and accurately codes TTC and heading regardless of receptive field size, object size, or motion representation. It is possible that a subpopulation of MST cells in primates represents expansion in this way.
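A minimal numerical sketch of the TTC = 1/E relation: estimate the expansion rate E of a patch of looming flow by projecting it onto a radial template and normalizing by eccentricity, then invert. The patch geometry and weighting are simplifications of the letter's MST template model.

```python
# Sketch of TTC = 1/E: estimate the expansion rate E of a looming patch, then invert.
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, (200, 2))                       # points on an approaching surface
ttc_true = 2.5                                           # seconds to contact
flow = pts / ttc_true                                    # pure looming: v = x / TTC

radius = np.linalg.norm(pts, axis=1) + 1e-9
radial_unit = pts / radius[:, None]                      # template: unit radial directions
# Expansion rate: radial flow component divided by eccentricity, averaged over the patch.
E = np.mean(np.sum(flow * radial_unit, axis=1) / radius)
print("estimated TTC:", 1.0 / E, "true TTC:", ttc_true)  # ~2.5 s
```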
Collapse
Affiliation(s)
- N Andrew Browning
- Center for Computational Neuroscience and Neural Technology, Boston University, Boston, MA 02215, USA.
| |
Collapse
|
38
|
Interaction of first- and second-order signals in the extraction of global-motion and optic-flow. Vision Res 2012; 68:28-39. [PMID: 22819730 DOI: 10.1016/j.visres.2012.07.004] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2012] [Accepted: 07/09/2012] [Indexed: 11/22/2022]
Abstract
The intention of this series of experiments was to determine the extent to which the pathways sensitive to first-order and second-order motion are independent of one another at, and above, the level of global motion integration. We used translational, radial and rotational motion stimuli containing luminance-modulated dots, contrast-modulated dots, or a mixture of both. Our results show that the two classes of motion stimuli interact perceptually in a global motion coherence task, and the extent of this interaction is governed by whether the two varieties of local motion signal produce an equivalent response in the pathways that encode each type of motion. This provides strong psychophysical evidence that global motion and optic flow processing are cue-invariant. The fidelity of the first-order motion signal was moderated by either reducing the luminance of the dots or by increasing the displacement of the dots on each positional update. The experiments were carried out with two different types of second-order elements (contrast-modulated dots and flicker-modulated dots) and the results were comparable, suggesting that these findings are generalisable to a variety of second-order stimuli. In addition, the interaction between the two different types of second-order stimuli was investigated and we found that the relative modulation depth was also crucial to whether the two populations interacted. We conclude that the relative output of local motion sensors sensitive to either first-order or second-order motion dictates their weight in subsequent cue-invariant global motion computations.
Collapse
|
39
|
Raudies F, Hasselmo ME. Modeling boundary vector cell firing given optic flow as a cue. PLoS Comput Biol 2012; 8:e1002553. [PMID: 22761557 PMCID: PMC3386186 DOI: 10.1371/journal.pcbi.1002553] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2011] [Accepted: 04/25/2012] [Indexed: 11/24/2022] Open
Abstract
Boundary vector cells in entorhinal cortex fire when a rat is in locations at a specific distance from walls of an environment. This firing may originate from memory of the barrier location combined with path integration, or it may depend upon the visual input stream. The modeling work presented here investigates the role of optic flow, the apparent change of patterns of light on the retina, as input for boundary vector cell firing. Analytical spherical flow is used by a template model to segment walls from the ground, to estimate self-motion and the distance and allocentric direction of walls, and to detect drop-offs. Distance estimates of walls in an empty circular or rectangular box have a mean error of less than or equal to two centimeters. Integrating these estimates into a visually driven boundary vector cell model leads to the firing patterns characteristic of boundary vector cells. This suggests that optic flow can influence the firing of boundary vector cells. Over the past few decades a variety of cells in hippocampal structures have been analyzed and their functions have been identified. Head direction cells indicate the world-centered direction of the animal's head, like a compass. Place cells fire in locations associated with visual, auditory, or olfactory cues. Grid cells fill open space like a carpet with their mosaic of firing. Boundary vector cells fire if a boundary that cannot be passed by the animal appears at a certain distance and world-centered direction. All these cells are players in the navigation game; however, their interactions and their linkage to sensory systems such as vision and to memory are not fully understood. Our model analyzes a potential link between the visual system and boundary vector cells. As part of the visual system, we model the optic flow that is available to rats. Optic flow is defined as the change of light patterns on the retina and contains information about self-motion and the environment. This optic flow is used in our model to estimate the distance and direction of boundaries. Our model simulations suggest a link between optic flow and the firing of boundary vector cells.
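A sketch of the downstream boundary vector cell firing model only: Gaussian tuning over the distance and allocentric direction of a boundary, evaluated on a rate map for a wall to the east. Here the distance and direction estimates are given directly; in the paper they are derived from optic flow templates. All parameters are placeholders.

```python
# Boundary vector cell sketch: Gaussian tuning over boundary distance and allocentric
# direction. Distance/direction estimates are supplied directly; parameters are placeholders.
import numpy as np

def bvc_rate(boundary_dist, boundary_dir, pref_dist=0.2, pref_dir=0.0,
             sigma_dist=0.08, sigma_dir=np.radians(20)):
    """Firing rate for a boundary at distance (m) and allocentric direction (rad)."""
    ang = np.angle(np.exp(1j * (boundary_dir - pref_dir)))          # wrapped angular difference
    return np.exp(-0.5 * ((boundary_dist - pref_dist) / sigma_dist) ** 2) \
         * np.exp(-0.5 * (ang / sigma_dir) ** 2)

# Rate map in a 1 m x 1 m box for a cell tuned to a wall ~0.2 m to the "east" (direction 0).
xs = np.linspace(0.02, 0.98, 49)
rate_map = np.zeros((xs.size, xs.size))
for i, y in enumerate(xs):
    for j, x in enumerate(xs):
        # Distance and direction to the east wall (x = 1); independent of y for a full wall.
        rate_map[i, j] = bvc_rate(1.0 - x, 0.0)
print(rate_map.max(), xs[np.argmax(rate_map[0])])   # peaks where the east wall is ~0.2 m away
```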
Collapse
Affiliation(s)
- Florian Raudies
- Center for Computational Neuroscience and Neural Technology-CompNet, Boston University, Boston, Massachusetts, United States of America.
| | | |
Collapse
|
40
|
Raudies F, Mingolla E, Neumann H. Active gaze control improves optic flow-based segmentation and steering. PLoS One 2012; 7:e38446. [PMID: 22719889 PMCID: PMC3375264 DOI: 10.1371/journal.pone.0038446] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2012] [Accepted: 05/07/2012] [Indexed: 11/30/2022] Open
Abstract
An observer traversing an environment actively relocates gaze to fixate objects. Evidence suggests that gaze is frequently directed toward the center of an object that is a target, but toward the edges of an object that appears as an obstacle. We suggest that this difference in gaze might be motivated by the specific patterns of optic flow that are generated by fixating either the center or the edge of an object. To support our suggestion we derive an analytical model that shows the following: tangentially fixating the outer surface of an obstacle leads to strong flow discontinuities that can be used for flow-based segmentation, whereas fixating the target center while gaze and heading are locked, without head, body, or eye rotations, gives rise to a symmetric expansion flow with its center at the point being approached, which facilitates steering toward the target. We conclude that gaze control incorporates ecological constraints to improve the robustness of steering and collision avoidance by actively generating flows appropriate for solving the task.
Collapse
Affiliation(s)
- Florian Raudies
- Center of Excellence for Learning in Education, Science, and Technology, Boston University, Boston, Massachusetts, United States of America.
| | | | | |
Collapse
|
41
|
Stroyan K, Nawrot M. Visual depth from motion parallax and eye pursuit. J Math Biol 2012; 64:1157-88. [PMID: 21695531 PMCID: PMC3348271 DOI: 10.1007/s00285-011-0445-1] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2011] [Revised: 05/26/2011] [Indexed: 10/18/2022]
Abstract
A translating observer viewing a rigid environment experiences "motion parallax", the relative movement upon the observer's retina of variously positioned objects in the scene. This retinal movement of images provides a cue to the relative depth of objects in the environment; however, retinal motion alone cannot mathematically determine the relative depth of the objects. Visual perception of depth from lateral observer translation uses both retinal image motion and eye movement. In Nawrot and Stroyan (Vision Res 49:1969-1978, 2009) we showed that the ratio of the rate of retinal motion to the rate of smooth eye pursuit mathematically determines depth relative to the fixation point in central vision. We also reported psychophysical experiments indicating that this ratio is the important quantity for perception. Here we analyze the motion/pursuit cue for the more general, and more complicated, case in which objects are distributed across the horizontal viewing plane beyond central vision. We show how the mathematical motion/pursuit cue varies with different points across the plane and with time as an observer translates. If the time-varying retinal motion and smooth eye pursuit are the only signals used for this visual process, it is important to know what can mathematically be derived about depth and structure. Our analysis shows that the motion/pursuit ratio determines an excellent description of depth and structure in these broader stimulus conditions, provides a detailed quantitative hypothesis of these visual processes for the perception of depth and structure from motion parallax, and provides a computational foundation for analyzing the dynamic geometry of future experiments.
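A numerical sketch of the central relation d/f ≈ (dθ/dt)/(dα/dt), using a simple lateral-translation geometry to compare the motion/pursuit ratio with the true relative depth. The specific distances and speeds are illustrative; in this geometry the exact ratio equals d/(f + d), which approximates d/f when d is small relative to f.

```python
# Motion/pursuit ratio: relative depth from retinal motion rate over pursuit rate.
import numpy as np

f = 0.5          # fixation distance (m)
d = 0.05         # depth of a probe point beyond fixation (m)
v = 0.1          # lateral observer translation speed (m/s)

# Small-angle lateral-translation geometry:
# pursuit rate ~ v / f; retinal motion of the probe ~ v * d / (f * (f + d)).
dalpha_dt = v / f
dtheta_dt = v * d / (f * (f + d))

d_over_f_est = dtheta_dt / dalpha_dt
print("motion/pursuit estimate of d/f:", d_over_f_est)   # = d/(f+d) ~ 0.0909
print("true d/f:", d / f)                                 # 0.1
```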
Collapse
Affiliation(s)
- Keith Stroyan
- Mathematics Department, University of Iowa, Iowa City, IA, 52242, USA.
| | | |
Collapse
|
42
|
Modeling the influence of optic flow on grid cell firing in the absence of other cues. J Comput Neurosci 2012; 33:475-93. [PMID: 22555390 PMCID: PMC3484285 DOI: 10.1007/s10827-012-0396-6] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2011] [Revised: 03/30/2012] [Accepted: 04/03/2012] [Indexed: 11/17/2022]
Abstract
Information from the vestibular, sensorimotor, or visual systems can affect the firing of grid cells recorded in the entorhinal cortex of rats. Optic flow provides information about the rat's linear and rotational velocity and, thus, could influence the firing pattern of grid cells. To investigate this possible link, we model parts of the rat's visual system and analyze their capability of estimating linear and rotational velocity. In our model, a rat is simulated moving along trajectories recorded from rats foraging on a circular ground platform; thus, we preserve the intrinsic statistics of real rats' movements. Visual image motion is analytically computed for a spherical camera model and superimposed with noise in order to model the optic flow that would be available to the rat. This optic flow is fed into a template model to estimate the rat's linear and rotational velocities, which in turn are fed into an oscillatory interference model of grid cell firing. Grid scores are reported while altering the flow noise, the tilt angle of the optical axis with respect to the ground, the number of flow templates, and the frequency used in the oscillatory interference model. Activity patterns are compatible with those of grid cells, suggesting that optic flow can contribute to their firing.
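A sketch of the oscillatory-interference stage alone: velocity-controlled oscillators whose frequencies are modulated by running velocity projected onto preferred directions 60 degrees apart, multiplied against a baseline oscillation. The optic-flow-based velocity estimation of the paper is replaced here by a directly supplied velocity signal, and the base frequency and gain are illustrative.

```python
# Oscillatory-interference sketch: velocity-controlled oscillators interfere with a baseline
# oscillator; the running velocity is supplied directly instead of being estimated from flow.
import numpy as np

rng = np.random.default_rng(0)
dt, T = 0.02, 200.0                       # time step and duration (s)
steps = int(T / dt)
f0, beta = 8.0, 0.05                      # baseline frequency (Hz), gain (cycles per cm of displacement)

# Smooth random-walk velocity trace (cm/s), capped at 10 cm/s in magnitude per component.
vel = np.cumsum(rng.standard_normal((steps, 2)), axis=0)
vel = 10.0 * vel / (np.abs(vel).max() + 1e-9)
pos = np.cumsum(vel * dt, axis=0)         # trajectory (cm); could be binned into a spatial map

pref = np.stack([[np.cos(a), np.sin(a)] for a in np.radians([0, 60, 120])])
phase_base = 2 * np.pi * np.cumsum(np.full(steps, f0 * dt))
rate = np.ones(steps)
for d in pref:
    phase_vco = 2 * np.pi * np.cumsum((f0 + beta * vel @ d) * dt)
    # Interference envelope depends only on displacement along d, giving grid-like bands.
    rate *= (np.cos(phase_vco - phase_base) + 1) / 2
print("firing-rate range:", rate.min(), rate.max())
```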
Collapse
|
43
|
Nawrot M, Stroyan K. Integration time for the perception of depth from motion parallax. Vision Res 2012; 59:64-71. [PMID: 22406543 DOI: 10.1016/j.visres.2012.02.007] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2011] [Revised: 01/26/2012] [Accepted: 02/21/2012] [Indexed: 10/28/2022]
Abstract
The perception of depth from relative motion is believed to be a slow process that "builds-up" over a period of observation. However, in the case of motion parallax, the potential accuracy of the depth estimate suffers as the observer translates during the viewing period. Our recent quantitative model for the perception of depth from motion parallax proposes that relative object depth (d) can be determined from retinal image motion (dθ/dt), pursuit eye movement (dα/dt), and fixation distance (f) by the formula: d/f≈dθ/dα. Given the model's dynamics, it is important to know the integration time required by the visual system to recover dα and dθ, and then estimate d. Knowing the minimum integration time reveals the incumbent error in this process. A depth-phase discrimination task was used to determine the time necessary to perceive depth-sign from motion parallax. Observers remained stationary and viewed a briefly translating random-dot motion parallax stimulus. Stimulus duration varied between trials. Fixation on the translating stimulus was monitored and enforced with an eye-tracker. The study found that relative depth discrimination can be performed with presentations as brief as 16.6 ms, with only two stimulus frames providing both retinal image motion and the stimulus window motion for pursuit (mean range=16.6-33.2 ms). This was found for conditions in which, prior to stimulus presentation, the eye was engaged in ongoing pursuit or the eye was stationary. A large high-contrast masking stimulus disrupted depth-discrimination for stimulus presentations less than 70-75 ms in both pursuit and stationary conditions. This interval might be linked to ocular-following response eye-movement latencies. We conclude that neural mechanisms serving depth from motion parallax generate a depth estimate much more quickly than previously believed. We propose that additional sluggishness might be due to the visual system's attempt to determine the maximum dθ/dα ratio for a selection of points on a complicated stimulus.
Collapse
Affiliation(s)
- Mark Nawrot
- Center for Visual Neuroscience, Department of Psychology, North Dakota State University, Fargo, ND 58108, USA.
| | | |
Collapse
|
44
|
Hierarchical processing of complex motion along the primate dorsal visual pathway. Proc Natl Acad Sci U S A 2012; 109:E972-80. [PMID: 22308392 DOI: 10.1073/pnas.1115685109] [Citation(s) in RCA: 73] [Impact Index Per Article: 6.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/22/2023] Open
Abstract
Neurons in the medial superior temporal (MST) area of the primate visual cortex respond selectively to complex motion patterns defined by expansion, rotation, and deformation. Consequently they are often hypothesized to be involved in important behavioral functions, such as encoding the velocities of moving objects and surfaces relative to the observer. However, the computations underlying such selectivity are unknown. In this work we have developed a unique, naturalistic motion stimulus and used it to probe the complex selectivity of MST neurons. The resulting data were then used to estimate the properties of the feed-forward inputs to each neuron. This analysis yielded models that successfully accounted for much of the observed stimulus selectivity, provided that the inputs were combined via a nonlinear integration mechanism that approximates a multiplicative interaction among MST inputs. In simulations we found that this type of integration has the functional role of improving estimates of the 3D velocity of moving objects. As this computation is of general utility for detecting complex stimulus features, we suggest that it may represent a fundamental aspect of hierarchical sensory processing.
Collapse
|
45
|
Hanes DA. Mathematical requirements of visual–vestibular integration. J Math Biol 2011; 65:1245-66. [DOI: 10.1007/s00285-011-0494-5] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2011] [Revised: 11/16/2011] [Indexed: 10/15/2022]
|
46
|
Snyder JJ, Bischof WF. Knowing where we're heading--when nothing moves. Brain Res 2010; 1323:127-38. [PMID: 20132801 DOI: 10.1016/j.brainres.2010.01.061] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2009] [Revised: 01/18/2010] [Accepted: 01/24/2010] [Indexed: 10/19/2022]
Abstract
Past research indicates that observers rely strongly on flow-based and object-based motion information for determining egomotion or direction of heading. More recently, it has been shown that they also rely on displacement information that does not induce motion perception. As yet, little is known regarding the specific displacement cues that are used for heading estimation. In Experiment 1a, we show that the accuracy of heading estimates increases, as more displacement cues are available. In Experiments 1b and 2, we show that observers rely mostly on the displacement of objects and geometric cues for estimating heading. In Experiment 3, we show that the accuracy of detecting changes in heading when displacement cues are used is low. The results are interpreted in terms of two systems that may be available for estimating heading, one relying on movement information and providing navigational mechanisms, the other relying on displacement information and providing navigational planning and orienting mechanisms.
Collapse
Affiliation(s)
- Janice J Snyder
- Psychology Department, University of British Columbia Okanagan, 3333 University Way, Kelowna, BC, Canada V1V 1V7.
| | | |
Collapse
|
47
|
Yu CP, Page WK, Gaborski R, Duffy CJ. Receptive field dynamics underlying MST neuronal optic flow selectivity. J Neurophysiol 2010; 103:2794-807. [PMID: 20457855 DOI: 10.1152/jn.01085.2009] [Citation(s) in RCA: 26] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Optic flow informs moving observers about their heading direction. Neurons in monkey medial superior temporal (MST) cortex show heading-selective responses to optic flow and planar direction-selective responses to patches of local motion. We recorded MST neuronal responses to a 90 x 90 degree optic flow display and to a 3 x 3 array of local motion patches covering the same area. Our goal was to test the hypothesis that the optic flow responses reflect the sum of the local motion responses. The local motion responses of each neuron were modeled as mixtures of Gaussians, combining the effects of two Gaussian response functions derived using a genetic algorithm, and then used to predict that neuron's optic flow responses. Some neurons showed good correspondence between local motion models and optic flow responses, whereas others showed substantial differences. We used the genetic algorithm to modulate the relative strength of each local motion segment's responses to accommodate interactions between segments that might modulate their relative efficacy during co-activation by global patterns of optic flow. These gain-modulated models showed uniformly better fits to the optic flow responses, suggesting that coactivation of receptive field segments alters neuronal response properties. We tested this hypothesis by simultaneously presenting local motion stimuli at two different sites. These two-segment stimuli revealed that interactions between response segments have direction- and location-specific effects that can account for aspects of optic flow selectivity. We conclude that MST's optic flow selectivity reflects dynamic interactions between spatially distributed local planar motion response mechanisms.
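A toy version of the prediction scheme outlined above: per-segment direction tuning modeled as a mixture of two wrapped Gaussians, with the optic flow response predicted as a gain-weighted sum of segment responses to the local flow direction in each patch. All tuning parameters and gains are invented for illustration, not fitted to data.

```python
# Toy prediction of an optic flow response from per-segment mixture-of-Gaussians tuning.
import numpy as np

def mog_tuning(direction, mu1, mu2, w1=1.0, w2=0.5, sigma=40.0):
    """Direction tuning (deg) as a mixture of two wrapped Gaussians."""
    def g(mu):
        d = np.angle(np.exp(1j * np.radians(direction - mu)))   # wrapped angular difference
        return np.exp(-0.5 * (np.degrees(d) / sigma) ** 2)
    return w1 * g(mu1) + w2 * g(mu2)

# 3 x 3 array of receptive-field segments, each with its own tuning peak and a gain term.
segment_mu = np.array([[0, 30, 60], [90, 45, 0], [180, 200, 220]], float)
gains = np.ones((3, 3))

def predict_flow_response(local_dirs, gains):
    """local_dirs: 3 x 3 array of flow directions (deg) falling on each segment."""
    resp = 0.0
    for i in range(3):
        for j in range(3):
            resp += gains[i, j] * mog_tuning(local_dirs[i, j],
                                             segment_mu[i, j], segment_mu[i, j] + 180)
    return resp

# Expansion flow centered on the middle segment: local directions point away from the center.
yy, xx = np.mgrid[-1:1:3j, -1:1:3j]
expansion_dirs = np.degrees(np.arctan2(yy, xx))
print("predicted response to expansion:", predict_flow_response(expansion_dirs, gains))
```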
Collapse
Affiliation(s)
- Chen Ping Yu
- Department of Computer Sciences, Rochester Institute of Technology Rochester, Rochester, New York, USA
| | | | | | | |
Collapse
|
48
|
Bremmer F, Kubischik M, Pekel M, Hoffmann KP, Lappe M. Visual selectivity for heading in monkey area MST. Exp Brain Res 2010; 200:51-60. [PMID: 19727690 DOI: 10.1007/s00221-009-1990-3] [Citation(s) in RCA: 39] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2009] [Accepted: 08/08/2009] [Indexed: 12/01/2022]
Abstract
The control of self-motion is supported by visual, vestibular, and proprioceptive signals. Recent research has shown how these signals interact in the monkey medial superior temporal area (area MST) to enhance and disambiguate the perception of heading during self-motion. Area MST is a central stage for self-motion processing from optic flow, and it integrates flow field information with vestibular self-motion signals and extraretinal eye movement information. Such multimodal cue integration is clearly important for solidifying perception. However, to understand the information processing capabilities of the brain, one must also ask how much information can be deduced from a single cue alone. This is particularly pertinent for optic flow, where controversies over its usefulness for self-motion control have existed ever since Gibson proposed his direct approach to ecological perception. In our study, we therefore tested macaque MST neurons for their heading selectivity in highly complex flow fields based on purely visual mechanisms. We recorded responses of MST neurons to simple radial flow fields and to distorted flow fields that simulated self-motion plus an eye movement. About half of the cells compensated for such distortion and kept the same heading selectivity in both cases. Our results strongly support the notion of an involvement of area MST in the computation of heading.
Collapse
Affiliation(s)
- Frank Bremmer
- Allg. Zoologie und Neurobiologie, Ruhr Universität Bochum, 44780 Bochum, Germany.
| | | | | | | | | |
Collapse
|
49
|
Warren PA, Rushton SK. Perception of scene-relative object movement: Optic flow parsing and the contribution of monocular depth cues. Vision Res 2009; 49:1406-19. [PMID: 19480063 DOI: 10.1016/j.visres.2009.01.016] [Citation(s) in RCA: 47] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Abstract
We have recently suggested that the brain uses its sensitivity to optic flow in order to parse retinal motion into components arising due to self and object movement (e.g. Rushton, S. K., & Warren, P. A. (2005). Moving observers, 3D relative motion and the detection of object movement. Current Biology, 15, R542-R543). Here, we explore whether stereo disparity is necessary for flow parsing or whether other sources of depth information, which could theoretically constrain flow-field interpretation, are sufficient. Stationary observers viewed large field of view stimuli containing textured cubes, moving in a manner that was consistent with a complex observer movement through a stationary scene. Observers made speeded responses to report the perceived direction of movement of a probe object presented at different depths in the scene. Across conditions we varied the presence or absence of different binocular and monocular cues to depth order. In line with previous studies, results consistent with flow parsing (in terms of both perceived direction and response time) were found in the condition in which motion parallax and stereoscopic disparity were present. Observers were poorer at judging object movement when depth order was specified by parallax alone. However, as more monocular depth cues were added to the stimulus the results approached those found when the scene contained stereoscopic cues. We conclude that both monocular and binocular static depth information contribute to flow parsing. These findings are discussed in the context of potential architectures for a model of the flow parsing mechanism.
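The core flow-parsing computation can be sketched in a few lines: subtract the flow attributable to self-motion (here, a simple expansion about a known focus of expansion, scaled by the probe's depth) from the probe's retinal motion to recover its scene-relative motion. The values are illustrative.

```python
# Flow-parsing sketch: scene-relative motion = retinal motion minus self-motion flow.
import numpy as np

foe = np.array([0.0, 0.0])                 # focus of expansion of the self-motion flow
probe_pos = np.array([0.3, 0.1])           # probe location in the image
probe_depth = 2.0                          # relative depth (e.g., from disparity or parallax)

self_motion_flow = (probe_pos - foe) / probe_depth          # expansion component at the probe
retinal_motion = self_motion_flow + np.array([0.0, 0.12])   # probe also moves upward in the scene

scene_relative = retinal_motion - self_motion_flow
print("retinal motion:       ", retinal_motion)
print("scene-relative motion:", scene_relative)             # recovers the upward object motion
```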
Collapse
Affiliation(s)
- Paul A Warren
- School of Psychology and Communications Research Centre, Cardiff University, Cardiff, CF10 3AT Wales, UK.
| | | |
Collapse
|
50
|
Yang Y, Zhang J, Liang Z, Li G, Wang Y, Ma Y, Zhou Y, Leventhal AG. Aging affects the neural representation of speed in Macaque area MT. Cereb Cortex 2009; 19:1957-67. [PMID: 19037080 PMCID: PMC2733681 DOI: 10.1093/cercor/bhn221] [Citation(s) in RCA: 48] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
Human perception of speed declines with age. Much of the decline is probably mediated by changes in the middle temporal (MT) area, an extrastriate area whose neural activity is linked to the perception of speed. In the present study, we used random-dot patterns to study the effects of aging on speed-tuning curves in cortical area MT of macaque visual cortex. Our results provide evidence for a significant degradation of speed selectivity in MT. Cells in old animals preferred lower speeds than did those in young animals. Response modulation and discriminative capacity for speed in old monkeys were also significantly weaker than those in young ones. Concurrently, MT cells in old monkeys showed increased baseline responses, peak responses and response variability, and these changes were accompanied by decreased signal-to-noise ratios. We also found that speed discrimination thresholds in old animals were higher than in young ones. The foregoing neural changes may mediate the declines in visual motion perception that occur during senescence.
Collapse
Affiliation(s)
- Yun Yang
- Vision Research Laboratory, School of Life Science, University of Science and Technology of China, Hefei, Anhui 230027, China
| | - Jie Zhang
- Laboratory of Primate Cognitive Neuroscience, Kunming Institute of Zoology, Chinese Academy of Science, Kunming, Yunnan 650223, China
| | - Zhen Liang
- Vision Research Laboratory, School of Life Science, University of Science and Technology of China, Hefei, Anhui 230027, China
| | - Guangxing Li
- Vision Research Laboratory, School of Life Science, University of Science and Technology of China, Hefei, Anhui 230027, China
| | - Yongchang Wang
- Vision Research Laboratory, School of Life Science, University of Science and Technology of China, Hefei, Anhui 230027, China
- Department of Neurobiology and Anatomy, School of Medicine, University of Utah, Salt Lake City, UT 84132, USA
| | - Yuanye Ma
- Laboratory of Primate Cognitive Neuroscience, Kunming Institute of Zoology, Chinese Academy of Science, Kunming, Yunnan 650223, China
| | - Yifeng Zhou
- Vision Research Laboratory, School of Life Science, University of Science and Technology of China, Hefei, Anhui 230027, China
- State key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Science, Beijing 100101, China
| | - Audie G. Leventhal
- Vision Research Laboratory, School of Life Science, University of Science and Technology of China, Hefei, Anhui 230027, China
- Department of Neurobiology and Anatomy, School of Medicine, University of Utah, Salt Lake City, UT 84132, USA
| |
Collapse
|