1
Hülemeier AG, Lappe M. Illusory percepts of curvilinear self-motion when moving through crowds. J Vis 2023;23(14):6. PMID: 38112491; PMCID: PMC10732088; DOI: 10.1167/jov.23.14.6.
Abstract
Self-motion generates optic flow, a pattern of expanding visual motion. Heading estimation from optic flow analysis is accurate in rigid environments, but it becomes challenging when other human walkers introduce independent motion to the scene. Previous studies showed that heading perception is surprisingly accurate when moving through a crowd of walkers but revealed strong heading biases when either articulation or translation of biological motion was presented in isolation. We hypothesized that these biases resulted from misperceiving the self-motion as curvilinear. Such errors might manifest as opposite biases depending on whether the observer perceived the crowd motion as an indication of their self-translation or self-rotation. Our study investigated the link between heading biases and illusory path perception. Participants assessed heading and path perception while observing optic flow stimuli with varying walker movements. Self-motion perception was accurate during natural locomotion (articulation and translation), but significant heading biases occurred when walkers only articulated or only translated. In these cases, participants often reported a curved path of travel. Heading error and curvature pointed in opposite directions. On average, participants perceived the walker motion as evidence for viewpoint rotation, leading to curvilinear path percepts.
Affiliation(s)
- Markus Lappe
- Department of Psychology, University of Münster, Münster, Germany
2
Gundavarapu A, Chakravarthy VS. Modeling the development of cortical responses in primate dorsal ("where") pathway to optic flow using hierarchical neural field models. Front Neurosci 2023;17:1154252. PMID: 37284658; PMCID: PMC10239834; DOI: 10.3389/fnins.2023.1154252.
Abstract
Although there is a plethora of modeling literature dedicated to the object recognition processes of the ventral ("what") pathway of primate visual systems, modeling studies on motion-sensitive regions such as the medial superior temporal area (MST) of the dorsal ("where") pathway are relatively scarce. Neurons in the MST area of the macaque monkey respond selectively to different types of optic flow sequences, such as radial and rotational flows. We present three models designed to simulate the computation of optic flow performed by MST neurons. Model-1 and model-2 are each composed of three stages: a Direction Selective Mosaic Network (DSMN); a Cell Plane Network (CPNW) or a Hebbian Network (HBNW); and an Optic Flow network (OF). The three stages roughly correspond to the V1-MT-MST areas, respectively, in the primate motion pathway. Both models are trained stage by stage using a biologically plausible variation of the Hebbian rule. The simulation results show that neurons in model-1 and model-2 (trained on translational, radial, and rotational sequences) develop responses that could account for MSTd cell properties found neurobiologically. Model-3, in contrast, consists of a Velocity Selective Mosaic Network (VSMN) followed by a convolutional neural network (CNN), trained on radial and rotational sequences using a supervised backpropagation algorithm. A quantitative comparison of response similarity matrices (RSMs), computed from the convolution-layer and last-hidden-layer responses, shows that model-3 neuron responses are consistent with the idea of a functional hierarchy in the macaque motion pathway. These results also suggest that deep learning models could offer a computationally elegant and biologically plausible way to simulate the development of cortical responses in the primate motion pathway.
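The stage-by-stage training with a biologically plausible variation of the Hebbian rule can be illustrated with a minimal sketch. The snippet below uses Oja's rule, a standard stability-normalized Hebbian variant; the learning rule, input scaling, and parameters here are illustrative assumptions, not a reproduction of the paper's models.

```python
import numpy as np

def oja_update(w, x, eta=0.005):
    """One Oja-rule step: Hebbian growth with an implicit decay term
    that keeps the weight vector bounded (|w| tends toward 1)."""
    y = float(w @ x)                    # postsynaptic response
    return w + eta * y * (x - y * w)    # Hebb term minus normalizing decay

# Toy demo: driven by inputs whose first component has the largest
# variance, the weights align with that dominant input direction.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4)) * np.array([2.0, 1.0, 0.5, 0.2])
w = rng.normal(size=4)
w /= np.linalg.norm(w)
for x in X:
    w = oja_update(w, x)
```

Under Oja's rule the weights converge toward the principal component of the input ensemble, which is one reason it is a popular biologically plausible stand-in for unsupervised receptive-field development.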
Affiliation(s)
- Anila Gundavarapu
- Computational Neuroscience Lab, Indian Institute of Technology Madras, Chennai, India
- V. Srinivasa Chakravarthy
- Computational Neuroscience Lab, Indian Institute of Technology Madras, Chennai, India
- Center for Complex Systems and Dynamics, Indian Institute of Technology Madras, Chennai, India
3
Nakamura D, Gomi H. Decoding self-motion from visual image sequence predicts distinctive features of reflexive motor responses to visual motion. Neural Netw 2023;162:516-530. PMID: 36990001; DOI: 10.1016/j.neunet.2023.03.020.
Abstract
Visual motion analysis is crucial for humans to detect external moving objects and self-motion, both of which inform the planning and execution of actions in various interactions with the environment. Here we show that image motion analysis, trained by a convolutional neural network to decode self-motion during natural human movements, exhibits specificities similar to those of the reflexive ocular and manual responses induced by large-field visual motion, in terms of stimulus spatiotemporal frequency tuning. The spatiotemporal frequency tuning of the decoder peaked at high temporal and low spatial frequencies, as observed in the reflexive ocular and manual responses, but differed significantly from the frequency power of the visual image itself and from the density distribution of self-motion. Further, artificial manipulations of the learning data sets produced large changes in the specificity of the spatiotemporal tuning. Interestingly, despite similar spatiotemporal frequency tunings for full-field visual stimuli in the vertical-axis rotational direction and in the transversal direction, the tunings for center-masked stimuli differed between those directions, and this specificity difference is qualitatively similar to the discrepancy between ocular and manual responses. In addition, representational analysis demonstrated that head-axis rotation was decoded by relatively simple spatial accumulation over the visual field, while transversal motion was decoded by more complex spatial interactions of visual information. These synthetic model examinations support the idea that the visual motion analyses eliciting reflexive motor responses, which are critical in interacting with the external world, are acquired for decoding self-motion.
4
Ali M, Decker E, Layton OW. Temporal stability of human heading perception. J Vis 2023;23(2):8. PMID: 36786748; PMCID: PMC9932552; DOI: 10.1167/jov.23.2.8.
Abstract
Humans are capable of accurately judging their heading from optic flow during straight forward self-motion. Despite the global coherence in the optic flow field, however, visual clutter and other naturalistic conditions create constant flux on the eye. This presents a problem that must be overcome to accurately perceive heading from optic flow: the visual system must maintain sensitivity to optic flow variations that correspond to actual changes in self-motion and disregard those that do not. One solution could involve integrating optic flow over time to stabilize heading signals while suppressing transient fluctuations. Stability, however, may come at the cost of sluggishness. Here, we investigate the stability of human heading perception when subjects judge their heading after the simulated direction of self-motion changes. We found that the initial heading exerted an attractive influence on judgments of the final heading. Consistent with an evolving heading representation, bias toward the initial heading increased with the size of the heading change and as the viewing duration of the optic flow consistent with the final heading decreased. Introducing periods of sensory dropout (blackouts) late in the trial increased bias, whereas an earlier one did not. Simulations of a neural model, the Competitive Dynamics Model, demonstrate that a mechanism producing an evolving heading signal through recurrent competitive interactions largely captures the human data. Our findings characterize how the visual system balances stability in heading perception with sensitivity to change and support the hypothesis that heading perception evolves over time.
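The attractive influence of the initial heading, and its growth as the viewing time of the final heading shrinks, can be illustrated with a drastically simplified stand-in for the recurrent model: a leaky temporal integrator. This sketch is not the Competitive Dynamics Model itself; the time constant and heading values are illustrative assumptions.

```python
def leaky_update(est, obs, dt, tau):
    """Leaky integration: the heading estimate drifts toward the current
    observation with time constant tau, retaining a trace of its history."""
    return est + (dt / tau) * (obs - est)

def final_estimate(h_init, h_final, view_time, dt=0.01, tau=0.4):
    """Heading estimate (degrees) after full adaptation to h_init,
    then viewing h_final for view_time seconds."""
    est = h_init                       # converged to the initial heading
    for _ in range(int(view_time / dt)):
        est = leaky_update(est, h_final, dt, tau)
    return est

# Shorter viewing of the final heading leaves a larger bias toward the
# initial heading, qualitatively matching the psychophysics above.
estimates = [final_estimate(0.0, 10.0, t) for t in (0.2, 0.4, 0.8)]
```

The longer the final heading is viewed, the closer the estimate approaches it, so the residual bias toward the initial heading shrinks monotonically with viewing duration.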
Affiliation(s)
- Mufaddal Ali
- Department of Computer Science, Colby College, Waterville, ME, USA
- Eli Decker
- Department of Computer Science, Colby College, Waterville, ME, USA
- Oliver W. Layton
- Department of Computer Science, Colby College, Waterville, ME, USA; https://sites.google.com/colby.edu/owlab
5
Skelton PSM, Finn A, Brinkworth RSA. Contrast independent biologically inspired translational optic flow estimation. Biol Cybern 2022;116:635-660. PMID: 36303043; PMCID: PMC9691503; DOI: 10.1007/s00422-022-00948-3.
Abstract
The visual systems of insects are relatively simple compared to those of humans, yet they enable navigation through complex environments in which insects perform exceptional levels of obstacle avoidance. Biology uses two separable modes of optic flow to achieve this: rapid gaze fixation (rotational motion, known as saccades) and inter-saccadic translational motion. While the fundamental process of insect optic flow has been known since the 1950s, so too has its dependence on contrast. The surrounding visual pathways used to overcome environmental dependencies are less well known. Previous work has shown promise for low-speed rotational motion estimation, but a gap remained in the estimation of translational motion, in particular the estimation of time to impact. To consistently estimate time to impact during inter-saccadic translatory motion, the fundamental limitation of contrast dependence must be overcome. By adapting an elaborated rotational velocity estimator from the literature to work for translational motion, this paper proposes a novel algorithm for overcoming the contrast dependence of time-to-impact estimation using nonlinear spatio-temporal feedforward filtering. By applying bioinspired processes, approximately 15 points per decade of statistical discrimination were achieved when estimating time to impact to a target across 360 background, distance, and velocity combinations: a 17-fold increase over the fundamental process. These results show that the contrast dependence of time-to-impact estimation can be overcome in a biologically plausible manner. This, combined with previous results for low-speed rotational motion estimation, allows for contrast-invariant computational models designed on the principles found in the biological visual system, paving the way for future visually guided systems.
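A classic optical route to time to impact, which the contrast dependence discussed above complicates in real imagery, is Lee's tau: the ratio of an object's angular size to its rate of expansion. The sketch below shows only this geometry; it is not the paper's bioinspired filtering algorithm, and all numbers are illustrative.

```python
def time_to_impact(theta, theta_dot):
    """Lee's tau: time to contact = angular size / expansion rate.
    Exact under constant approach speed; theta in rad, theta_dot in rad/s."""
    return theta / theta_dot

# Geometry check (small-angle): an object of radius r at distance d
# subtends theta ~ 2r/d; approaching at speed v, theta_dot ~ 2rv/d^2,
# so tau ~ d/v, the true time to impact.
r, d, v = 0.5, 20.0, 4.0
theta = 2 * r / d              # 0.05 rad
theta_dot = 2 * r * v / d**2   # 0.01 rad/s
tti = time_to_impact(theta, theta_dot)
print(tti)  # -> 5.0, matching d / v
```

Note that tau requires no knowledge of the object's physical size or distance, which is what makes it attractive for insect-scale vision, but in practice both theta and theta_dot must be estimated from contrast-dependent image measurements.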
Affiliation(s)
- Phillip S. M. Skelton
- Centre for Defence Engineering Research and Training, College of Science and Engineering, Flinders University, 1284 South Road, Tonsley, South Australia 5042 Australia
- Anthony Finn
- Science, Technology, Engineering, and Mathematics, University of South Australia, 1 Mawson Lakes Boulevard, Mawson Lakes, South Australia 5095 Australia
- Russell S. A. Brinkworth
- Centre for Defence Engineering Research and Training, College of Science and Engineering, Flinders University, 1284 South Road, Tonsley, South Australia 5042 Australia
6
Layton OW, Fajen BR. Distributed encoding of curvilinear self-motion across spiral optic flow patterns. Sci Rep 2022;12:13393. PMID: 35927277; PMCID: PMC9352735; DOI: 10.1038/s41598-022-16371-4.
Abstract
Self-motion along linear paths without eye movements creates optic flow that radiates from the direction of travel (heading). Optic flow-sensitive neurons in primate brain area MSTd have been linked to linear heading perception, but the neural basis of more general curvilinear self-motion perception is unknown. The optic flow in this case is more complex and depends on the gaze direction and curvature of the path. We investigated the extent to which signals decoded from a neural model of MSTd predict the observer's curvilinear self-motion. Specifically, we considered the contributions of MSTd-like units that were tuned to radial, spiral, and concentric optic flow patterns in "spiral space". Self-motion estimates decoded from units tuned to the full set of spiral space patterns were substantially more accurate and precise than those decoded from units tuned to radial expansion. Decoding only from units tuned to spiral subtypes closely approximated the performance of the full model. Only the full decoding model could account for human judgments when path curvature and gaze covaried in self-motion stimuli. The most predictive units exhibited bias in center-of-motion tuning toward the periphery, consistent with neurophysiology and prior modeling. Together, findings support a distributed encoding of curvilinear self-motion across spiral space.
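The "spiral space" of flow patterns referred to above spans radial, concentric, and intermediate spiral motions; a spiral pattern can be described as the local radial flow direction rotated by a fixed angle. A minimal sketch of generating such template flow fields (the grid size and parameterization are illustrative assumptions, not the model's actual MSTd tuning):

```python
import numpy as np

def spiral_flow(alpha, n=16, extent=1.0):
    """Unit flow vectors for a spiral-space pattern on an n x n grid.
    alpha = 0     -> radial expansion
    alpha = pi/2  -> counterclockwise concentric rotation
    in between    -> expanding spirals (alpha near pi: contraction)."""
    xs = np.linspace(-extent, extent, n)
    X, Y = np.meshgrid(xs, xs)
    U = np.cos(alpha) * X - np.sin(alpha) * Y   # radial direction rotated
    V = np.sin(alpha) * X + np.cos(alpha) * Y   # by alpha at every point
    norm = np.hypot(U, V)
    norm[norm == 0] = 1.0                        # guard the singular center
    return U / norm, V / norm

U_rad, V_rad = spiral_flow(0.0)          # pure expansion
U_rot, V_rot = spiral_flow(np.pi / 2)    # pure rotation
U_spi, V_spi = spiral_flow(np.pi / 4)    # 45-degree expanding spiral
```

Radial flow is everywhere parallel to the position vector, concentric flow everywhere perpendicular to it, and spirals interpolate continuously between the two, which is what lets a bank of such templates tile the space of self-motion patterns.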
Affiliation(s)
- Oliver W Layton
- Department of Computer Science, Colby College, Waterville, ME, USA; Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, NY, USA
- Brett R Fajen
- Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, NY, USA
7
Maus N, Layton OW. Estimating heading from optic flow: Comparing deep learning network and human performance. Neural Netw 2022;154:383-396. DOI: 10.1016/j.neunet.2022.07.007.
8
Rosenblum L, Grewe E, Churan J, Bremmer F. Influence of Tactile Flow on Visual Heading Perception. Multisens Res 2022;35:291-308. PMID: 35263712; DOI: 10.1163/22134808-bja10071.
Abstract
The integration of information from different sensory modalities is crucial for successful navigation through an environment. Among other signals, self-motion induces distinct optic flow patterns on the retina, vestibular signals, and tactile flow, all of which contribute to determining traveled distance (path integration) or movement direction (heading). While the processing of combined visual-vestibular information is the subject of a growing body of literature, the processing of visuo-tactile signals in the context of self-motion has received comparatively little attention. Here, we investigated whether visual heading perception is influenced by behaviorally irrelevant tactile flow. In the visual modality, we simulated an observer's self-motion across a horizontal ground plane (optic flow). Tactile self-motion stimuli were delivered by air flow from head-mounted nozzles (tactile flow). In blocks of trials, we presented only visual or only tactile stimuli and subjects had to report their perceived heading. In another block of trials, tactile and visual stimuli were presented simultaneously, with the tactile flow within ±40° of the visual heading (bimodal condition). Here, importantly, participants had to report their perceived visual heading. Perceived self-motion direction in all conditions revealed a centripetal bias; that is, heading directions were perceived as compressed toward straight ahead. In the bimodal condition, we found a small but systematic influence of task-irrelevant tactile flow on visually perceived heading as a function of their directional offset. We conclude that tactile flow is more tightly linked to self-motion perception than previously thought.
Affiliation(s)
- Lisa Rosenblum
- Department of Neurophysics, Philipps-Universität Marburg, Karl-von-Frisch-Straße 8a, 35043 Marburg, Germany; Center for Mind, Brain and Behavior, Philipps-Universität Marburg and Justus-Liebig-Universität Giessen, 35032 Marburg, Germany
- Elisa Grewe
- Department of Neurophysics, Philipps-Universität Marburg, Karl-von-Frisch-Straße 8a, 35043 Marburg, Germany
- Jan Churan
- Department of Neurophysics, Philipps-Universität Marburg, Karl-von-Frisch-Straße 8a, 35043 Marburg, Germany; Center for Mind, Brain and Behavior, Philipps-Universität Marburg and Justus-Liebig-Universität Giessen, 35032 Marburg, Germany
- Frank Bremmer
- Department of Neurophysics, Philipps-Universität Marburg, Karl-von-Frisch-Straße 8a, 35043 Marburg, Germany; Center for Mind, Brain and Behavior, Philipps-Universität Marburg and Justus-Liebig-Universität Giessen, 35032 Marburg, Germany
9
Matthis JS, Muller KS, Bonnen KL, Hayhoe MM. Retinal optic flow during natural locomotion. PLoS Comput Biol 2022;18:e1009575. PMID: 35192614; PMCID: PMC8896712; DOI: 10.1371/journal.pcbi.1009575.
Abstract
We examine the structure of the visual motion projected on the retina during natural locomotion in real-world environments. Bipedal gait generates a complex, rhythmic pattern of head translation and rotation in space, so without gaze stabilization mechanisms such as the vestibulo-ocular reflex (VOR), a walker's visually specified heading would vary dramatically throughout the gait cycle. The act of fixating stable points in the environment nulls image motion at the fovea, resulting in stable patterns of outflow on the retinae centered on the point of fixation. These outflowing patterns retain a higher-order structure that is informative about the stabilized trajectory of the eye through space. We measured this structure by applying the curl and divergence operations to the retinal flow velocity vector fields and found features that may be valuable for the control of locomotion. In particular, the sign and magnitude of foveal curl in retinal flow specify the body's trajectory relative to the gaze point, while the point of maximum divergence in the retinal flow field specifies the walker's instantaneous overground velocity/momentum vector in retinotopic coordinates. Assuming that walkers can determine body position relative to gaze direction, these time-varying retinotopic cues for the body's momentum could provide a visual control signal for locomotion over complex terrain. In contrast, the temporal variation of the eye-movement-free, head-centered flow fields is large enough to be problematic for use in steering toward a goal. Consideration of optic flow in the context of real-world locomotion therefore suggests a re-evaluation of the role of optic flow in the control of action during natural behavior.
We recorded the full-body kinematics and binocular gaze of humans walking through real-world natural environments and estimated visual motion (optic flow) using both computational video analysis and geometric simulation. Contrary to established theories of the role of optic flow in the control of locomotion, we found that eye-movement-free, head-centric optic flow is highly unstable due to the complex phasic trajectory of the head during natural locomotion, rendering it an unlikely candidate for heading perception. In contrast, retina-centered optic flow consisted of a regular pattern of outflowing motion centered on the fovea. Retinal optic flow contained highly consistent patterns that specified the walker's trajectory relative to the point of fixation, which may provide powerful retinotopic cues for the visual control of locomotion in natural environments. This examination of optic flow in real-world contexts suggests a need to re-evaluate existing theories of the role of optic flow in the visual control of action during natural behavior.
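The curl and divergence operations the authors apply to retinal flow fields are straightforward to compute on a sampled vector field. A minimal numpy sketch (the grid, spacing, and example field are illustrative; the actual pipeline operates on flow estimated from video and simulation):

```python
import numpy as np

def div_curl(U, V, dx=1.0):
    """Divergence and scalar curl of a 2-D flow field on a regular grid.
    U, V are the x- and y-components; rows index y, columns index x."""
    dU_dy, dU_dx = np.gradient(U, dx)
    dV_dy, dV_dx = np.gradient(V, dx)
    div = dU_dx + dV_dy     # expansion rate (positive = outflow)
    curl = dV_dx - dU_dy    # rotation rate (positive = counterclockwise)
    return div, curl

# Pure expansion (U, V) = (x, y) has divergence 2 and zero curl everywhere.
xs = np.linspace(-1.0, 1.0, 9)
X, Y = np.meshgrid(xs, xs)
div, curl = div_curl(X, Y, dx=xs[1] - xs[0])
print(div[4, 4], curl[4, 4])  # -> 2.0 0.0
```

In the paper's framing, the curl map carries the rotational component of the eye's trajectory relative to the fixation point, while the location of peak divergence marks the retinotopic direction of the body's momentum.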
Affiliation(s)
- Jonathan Samir Matthis
- Department of Biology, Northeastern University, Boston, Massachusetts, United States of America
- Karl S. Muller
- Center for Perceptual Systems, University of Texas at Austin, Austin, Texas, United States of America
- Kathryn L. Bonnen
- School of Optometry, Indiana University Bloomington, Bloomington, Indiana, United States of America
- Mary M. Hayhoe
- Center for Perceptual Systems, University of Texas at Austin, Austin, Texas, United States of America
10
Self-motion illusions from distorted optic flow in multifocal glasses. iScience 2022;25:103567. PMID: 34988405; PMCID: PMC8693457; DOI: 10.1016/j.isci.2021.103567.
Abstract
Progressive addition lenses (PALs) are ophthalmic lenses that correct presbyopia by providing improved near and far vision in different areas of the lens, but they distort the periphery of the wearer's field of view. Distortion-related difficulties reported by PAL wearers include unnatural self-motion perception. Visual self-motion perception is guided by optic flow, the pattern of retinal motion produced by self-motion. We tested the influence of PAL distortions on optic flow-based heading estimation using a model of heading perception and a virtual reality-based psychophysical experiment. The model predicted changes of heading estimation along a vertical axis, depending on visual field size and gaze direction. Consistent with this prediction, participants experienced upward deviations of self-motion when gaze through the periphery of the lens was simulated, but not for gaze through the center. We conclude that PALs may lead to illusions of self-motion which could be remedied by a careful gaze strategy.
- Multifocal lenses impair vision of spectacle wearers with gaze-dependent distortions
- A model of heading perception from distorted optic flow suggests a misperception
- Heading perception was tested with a virtual reality-based simulation of distortions
- Distortions lead to gaze direction-dependent illusions in perceived vertical heading
11
Wild B, Treue S. Primate extrastriate cortical area MST: a gateway between sensation and cognition. J Neurophysiol 2021;125:1851-1882. PMID: 33656951; DOI: 10.1152/jn.00384.2020.
Abstract
Primate visual cortex consists of dozens of distinct brain areas, each providing a highly specialized component to the sophisticated task of encoding the incoming sensory information and creating a representation of our visual environment that underlies our perception and action. One such area is the medial superior temporal cortex (MST), a motion-sensitive, direction-selective part of the primate visual cortex. It receives most of its input from the middle temporal (MT) area, but MST cells have larger receptive fields and respond to more complex motion patterns. The finding that MST cells are tuned for optic flow patterns has led to the suggestion that the area plays an important role in the perception of self-motion. This hypothesis has received further support from studies showing that some MST cells also respond selectively to vestibular cues. Furthermore, the area is part of a network that controls the planning and execution of smooth pursuit eye movements, and its activity is modulated by cognitive factors such as attention and working memory. This review of more than 90 studies focuses on clarifying the heterogeneous findings on MST in the macaque cortex and its putative homolog in the human cortex. From this analysis of MST's unique anatomical and functional position in the hierarchy of areas and processing steps in primate visual cortex, it emerges as a gateway between perception, cognition, and action planning. Given this pivotal role, the area represents an ideal model system for studying the transition from sensation to cognition.
Affiliation(s)
- Benedict Wild
- Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany; Goettingen Graduate Center for Neurosciences, Biophysics, and Molecular Biosciences (GGNB), University of Goettingen, Goettingen, Germany
- Stefan Treue
- Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany; Faculty of Biology and Psychology, University of Goettingen, Goettingen, Germany; Leibniz-ScienceCampus Primate Cognition, Goettingen, Germany; Bernstein Center for Computational Neuroscience, Goettingen, Germany
12
Di Marco S, Fattori P, Galati G, Galletti C, Lappe M, Maltempo T, Serra C, Sulpizio V, Pitzalis S. Preference for locomotion-compatible curved paths and forward direction of self-motion in somatomotor and visual areas. Cortex 2021;137:74-92. PMID: 33607346; DOI: 10.1016/j.cortex.2020.12.021.
Abstract
During locomotion, leg movements define the direction of walking (forward or backward) and the path one is taking (straight or curved). These aspects of locomotion produce characteristic visual motion patterns during movement. Here, we tested whether cortical regions responding to egomotion-compatible visual motion, to leg movements, or to both are sensitive to these locomotion-relevant aspects of visual motion. We compared a curved path (typically the visual feedback of a changing direction of movement in the environment) to a linear path for simulated forward and backward motion in an event-related fMRI experiment. We used an individual surface-based approach and two functional localizers to define (1) six egomotion-related areas (V6+, V3A, intraparietal motion area [IPSmot], cingulate sulcus visual area [CSv], posterior cingulate area [pCi], posterior insular cortex [PIC]) using the flow field stimulus and (2) three leg-related cortical regions (human PEc [hPEc], human PE [hPE], and primary somatosensory cortex [S-I]) using a somatomotor task. We then extracted the response of all these regions in the main event-related fMRI experiment, which consisted of passive viewing of an optic flow stimulus simulating a forward or backward direction of self-motion on either a linear or a curved path. Results showed that some regions have a significant preference for curved path motion (hPEc, hPE, S-I, IPSmot) or for forward motion (V3A), while other regions have a significant preference both for curved path motion and for forward compared to backward motion (V6+, CSv, pCi). We did not find any significant effects of the present stimuli in PIC. Since controlling locomotion mainly means controlling changes of walking direction in the environment during forward self-motion, such a differential functional profile among these cortical regions suggests that they play differentiated roles in the visual guidance of locomotion.
Affiliation(s)
- Sara Di Marco
- Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy; Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Patrizia Fattori
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Gaspare Galati
- Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy; Brain Imaging Laboratory, Department of Psychology, Sapienza University, Rome, Italy
- Claudio Galletti
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Markus Lappe
- Institute for Psychology, University of Muenster, Muenster, Germany; Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Muenster, Germany
- Teresa Maltempo
- Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy; Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Chiara Serra
- Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy; Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Valentina Sulpizio
- Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy; Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Sabrina Pitzalis
- Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy; Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
13
Burlingham CS, Heeger DJ. Heading perception depends on time-varying evolution of optic flow. Proc Natl Acad Sci U S A 2020;117:33161-33169. PMID: 33328275; PMCID: PMC7776640; DOI: 10.1073/pnas.2022984117.
Abstract
There is considerable support for the hypothesis that perception of heading in the presence of rotation is mediated by instantaneous optic flow. This hypothesis, however, has never been tested. We introduce a method, termed "nonvarying phase motion," for generating a stimulus that conveys a single instantaneous optic flow field, even though the stimulus is presented for an extended period of time. In this experiment, observers viewed stimulus videos and performed a forced-choice heading discrimination task. For nonvarying phase motion, observers made large errors in heading judgments. This suggests that instantaneous optic flow is insufficient for heading perception in the presence of rotation. These errors were mostly eliminated when the velocity of phase motion was varied over time to convey the evolving sequence of optic flow fields corresponding to a particular heading. This demonstrates that heading perception in the presence of rotation relies on the time-varying evolution of optic flow. We hypothesize that the visual system accurately computes heading, despite rotation, based on optic acceleration, the temporal derivative of optic flow.
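The proposed cue, optic acceleration, is the temporal derivative of the optic flow field. A minimal finite-difference sketch (the array layout and the toy flow sequence are illustrative assumptions):

```python
import numpy as np

def optic_acceleration(flow, dt):
    """Temporal derivative of a flow-field sequence via central differences.
    flow: array of shape (T, H, W, 2), one 2-D flow vector per pixel per
    frame. Returns an array of the same shape: d(flow)/dt."""
    return np.gradient(flow, dt, axis=0)

# A flow field whose magnitude grows linearly over frames has constant
# optic acceleration, even though each instantaneous flow field differs.
T, H, W = 5, 4, 4
base = np.ones((H, W, 2))
flow = np.stack([0.1 * t * base for t in range(T)])
acc = optic_acceleration(flow, dt=0.1)
```

This is the sense in which a single instantaneous flow field is insufficient: the nonvarying phase motion stimulus above holds the flow fixed, so its temporal derivative carries no information about the evolving heading.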
Collapse
Affiliation(s)
| | - David J Heeger
- Department of Psychology, New York University, New York, NY 10003;
- Center for Neural Science, New York University, New York, NY 10003
| |
Collapse
|
14
|
Hülemeier AG, Lappe M. Combining biological motion perception with optic flow analysis for self-motion in crowds. J Vis 2020; 20:7. [PMID: 32902593 PMCID: PMC7488621 DOI: 10.1167/jov.20.9.7] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Heading estimation from optic flow relies on the assumption that the visual world is rigid. This assumption is violated when one moves through a crowd of people, a common and socially important situation. The motion of people in the crowd contains cues to their translation in the form of the articulation of their limbs, known as biological motion. We investigated how translation and articulation of biological motion influence heading estimation from optic flow for self-motion in a crowd. Participants had to estimate their heading during simulated self-motion toward a group of walkers who collectively walked in a single direction. We found that the natural combination of translation and articulation produces surprisingly small heading errors. In contrast, experimental conditions that either present only translation or only articulation produced strong idiosyncratic biases. The individual biases explained well the variance in the natural combination. A second experiment showed that the benefit of articulation and the bias produced by articulation were specific to biological motion. An analysis of the differences in biases between conditions and participants showed that different perceptual mechanisms contribute to heading perception in crowds. We suggest that coherent group motion affects the reference frame of heading perception from optic flow.
Collapse
Affiliation(s)
| | - Markus Lappe
- Department of Psychology, University of Münster, Münster, Germany
| |
Collapse
|
15
|
Abstract
Heading estimation from optic flow is crucial for safe locomotion but becomes inaccurate if independent object motion is present. In ecological settings, such motion typically involves other animals or humans walking across the scene. An independently walking person presents a local disturbance of the flow field, which moves across the flow field as the walker traverses the scene. Is the bias in heading estimation produced by the local disturbance of the flow field or by the movement of the walker through the scene? We present a novel flow field stimulus in which the local flow disturbance and the movement of the walker can be pitted against each other. Each frame of this stimulus consists of a structureless random dot distribution. Across frames, the body shape of a walker is molded by presenting different flow field dynamics within and outside the body shape. In different experimental conditions, the flow within the body shape can be congruent with the walker's movement, incongruent with it, or congruent with the background flow. We show that heading inaccuracy results from the local flow disturbance rather than the movement through the scene. Moreover, we show that the local disturbances of the optic flow can be used to segment the walker and support biological motion perception to some degree. The dichotomous result that the walker can be segmented from the scene but that heading perception is nonetheless influenced by the flow produced by the walker confirms separate visual pathways for heading estimation, object segmentation, and biological motion perception.
Collapse
Affiliation(s)
- Krischan Koerfer
- Institute for Psychology and Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Muenster, Germany
| | - Markus Lappe
- Institute for Psychology and Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Muenster, Germany
| |
Collapse
|
16
|
Durant S, Zanker JM. The combined effect of eye movements improve head centred local motion information during walking. PLoS One 2020; 15:e0228345. [PMID: 31999777 PMCID: PMC6992003 DOI: 10.1371/journal.pone.0228345] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2019] [Accepted: 01/13/2020] [Indexed: 11/18/2022] Open
Abstract
Eye movements play multiple roles in human behaviour—small stabilizing movements are important for keeping the image of the scene steady during locomotion, whilst large scanning movements search for relevant information. It has been proposed that eye movement induced retinal motion interferes with the estimation of self-motion based on optic flow. We investigated the effect of eye movements on retinal motion information during walking. Observers walked towards a target, wearing eye tracking glasses that simultaneously recorded the scene ahead and tracked the movements of both eyes. By realigning the frames of the recording from the scene ahead relative to the centre of gaze, we could mimic the input received by the retina (retinocentric coordinates) and compare this to the input received by the scene camera (head-centred coordinates). We asked which of these coordinate frames resulted in the least noisy motion information. Motion noise was calculated by finding the error between the optic flow signal and a noise-free motion expansion pattern. We found that eye movements improved the optic flow information available, even when large diversions away from the target were made.
Collapse
Affiliation(s)
- Szonya Durant
- Department of Psychology, University of London, Egham, England, United Kingdom
| | - Johannes M. Zanker
- Department of Psychology, University of London, Egham, England, United Kingdom
| |
Collapse
|
17
|
Riddell H, Li L, Lappe M. Heading perception from optic flow in the presence of biological motion. J Vis 2019; 19:25. [PMID: 31868898 DOI: 10.1167/19.14.25] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
We investigated whether biological motion biases heading estimation from optic flow in a similar manner to nonbiological moving objects. In two experiments, observers judged their heading from displays depicting linear translation over a random-dot ground with normal point light walkers, spatially scrambled point light walkers, or laterally moving objects composed of random dots. In Experiment 1, we found that both types of walkers biased heading estimates similarly to moving objects when they obscured the focus of expansion of the background flow. In Experiment 2, we also found that walkers biased heading estimates when they did not obscure the focus of expansion. These results show that both regular and scrambled biological motion affect heading estimation in a similar manner to simple moving objects, and suggest that biological motion is not preferentially processed for the perception of self-motion.
Collapse
Affiliation(s)
- Hugh Riddell
- Institute for Psychology and Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Germany
| | - Li Li
- Faculty of Arts and Science, NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, China
| | - Markus Lappe
- Institute for Psychology and Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Germany
| |
Collapse
|
18
|
Abstract
The ability to navigate through crowds of moving people accurately, efficiently, and without causing collisions is essential for our day-to-day lives. Vision provides key information about one's own self-motion as well as the motions of other people in the crowd. These two types of information (optic flow and biological motion) have each been investigated extensively; however, surprisingly little research has been dedicated to investigating how they are processed when presented concurrently. Here, we showed that patterns of biological motion have a negative impact on visual-heading estimation when people within the crowd move their limbs but do not move through the scene. Conversely, limb motion facilitates heading estimation when walkers move independently through the scene. Interestingly, this facilitation occurs for crowds containing both regular and perturbed depictions of humans, suggesting that it is likely caused by low-level motion cues inherent in the biological motion of other people.
Collapse
Affiliation(s)
- Hugh Riddell
- Department of Psychology, Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Münster
| | - Markus Lappe
- Department of Psychology, Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Münster
| |
Collapse
|
19
|
Layton OW, Fajen BR. Competitive Dynamics in MSTd: A Mechanism for Robust Heading Perception Based on Optic Flow. PLoS Comput Biol 2016; 12:e1004942. [PMID: 27341686 PMCID: PMC4920404 DOI: 10.1371/journal.pcbi.1004942] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2015] [Accepted: 04/22/2016] [Indexed: 11/18/2022] Open
Abstract
Human heading perception based on optic flow is not only accurate, it is also remarkably robust and stable. These qualities are especially apparent when observers move through environments containing other moving objects, which introduce optic flow that is inconsistent with observer self-motion and therefore uninformative about heading direction. Moving objects may also occupy large portions of the visual field and occlude regions of the background optic flow that are most informative about heading perception. The fact that heading perception is biased by no more than a few degrees under such conditions attests to the robustness of the visual system and warrants further investigation. The aim of the present study was to investigate whether recurrent, competitive dynamics among MSTd neurons that serve to reduce uncertainty about heading over time offer a plausible mechanism for capturing the robustness of human heading perception. Simulations of existing heading models that do not contain competitive dynamics yield heading estimates that are far more erratic and unstable than human judgments. We present a dynamical model of primate visual areas V1, MT, and MSTd based on that of Layton, Mingolla, and Browning that is similar to the other models, except that the model includes recurrent interactions among model MSTd neurons. Competitive dynamics stabilize the model's heading estimate over time, even when a moving object crosses the future path. Soft winner-take-all dynamics enhance units that code a heading direction consistent with the time history and suppress responses to transient changes to the optic flow field. Our findings support recurrent competitive temporal dynamics as a crucial mechanism underlying the robustness and stability of perception of heading.
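The stabilizing role of recurrent competition described in the abstract can be captured in a few lines. The toy model below is not the Layton-Fajen MSTd implementation; it is a generic leaky-integrator population with self-excitation and pooled inhibition (a soft winner-take-all), with illustrative parameters. A heading unit supported by the stimulus history keeps winning even when a transient, such as an object crossing the future path, briefly favors another unit.

```python
import math

def bump(center, amp, n=36, width=2.0):
    """Gaussian bump of feedforward evidence over n heading-tuned units."""
    return [amp * math.exp(-min(abs(i - center), n - abs(i - center)) ** 2
                           / (2 * width ** 2)) for i in range(n)]

def soft_wta(evidence_stream, n=36, dt=0.1, inhibition=0.9):
    """Leaky heading units with self-excitation and a shared inhibitory
    pool; returns the index of the winning unit after the stream ends."""
    a = [0.0] * n
    for e in evidence_stream:
        pooled = sum(max(x, 0.0) for x in a) / n   # shared inhibition
        a = [ai + dt * (-ai + ei + max(ai, 0.0) - inhibition * pooled)
             for ai, ei in zip(a, e)]
    return max(range(n), key=lambda i: a[i])

# Sustained evidence for heading unit 10; a brief, stronger transient
# (a "crossing object") favors unit 30 partway through the stream.
stream = []
for t in range(200):
    e = bump(10, 1.0)
    if 100 <= t < 106:
        e = [x + y for x, y in zip(e, bump(30, 1.5))]
    stream.append(e)

winner = soft_wta(stream)   # the accumulated history wins out
```
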
Collapse
Affiliation(s)
- Oliver W. Layton
- Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, New York, United States of America
| | - Brett R. Fajen
- Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, New York, United States of America
| |
Collapse
|
20
|
Greenlee M, Frank S, Kaliuzhna M, Blanke O, Bremmer F, Churan J, Cuturi LF, MacNeilage P, Smith A. Multisensory Integration in Self Motion Perception. Multisens Res 2016. [DOI: 10.1163/22134808-00002527] [Citation(s) in RCA: 43] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
Self motion perception involves the integration of visual, vestibular, somatosensory and motor signals. This article reviews the findings from single unit electrophysiology, functional and structural magnetic resonance imaging and psychophysics to present an update on how the human and non-human primate brain integrates multisensory information to estimate one’s position and motion in space. The results indicate that there is a network of regions in the non-human primate and human brain that processes self motion cues from the different sense modalities.
Collapse
Affiliation(s)
- Mark W. Greenlee
- Institute of Experimental Psychology, University of Regensburg, Regensburg, Germany
| | - Sebastian M. Frank
- Institute of Experimental Psychology, University of Regensburg, Regensburg, Germany
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
| | - Mariia Kaliuzhna
- Center for Neuroprosthetics, Laboratory of Cognitive Neuroscience, Ecole Polytechnique Fédérale de Lausanne, EPFL, Switzerland
| | - Olaf Blanke
- Center for Neuroprosthetics, Laboratory of Cognitive Neuroscience, Ecole Polytechnique Fédérale de Lausanne, EPFL, Switzerland
| | - Frank Bremmer
- Department of Neurophysics, University of Marburg, Marburg, Germany
| | - Jan Churan
- Department of Neurophysics, University of Marburg, Marburg, Germany
| | - Luigi F. Cuturi
- German Center for Vertigo, University Hospital of Munich, LMU, Munich, Germany
| | - Paul R. MacNeilage
- German Center for Vertigo, University Hospital of Munich, LMU, Munich, Germany
| | - Andrew T. Smith
- Department of Psychology, Royal Holloway, University of London, UK
| |
Collapse
|
21
|
Lich M, Bremmer F. Self-motion perception in the elderly. Front Hum Neurosci 2014; 8:681. [PMID: 25309379 PMCID: PMC4163979 DOI: 10.3389/fnhum.2014.00681] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2014] [Accepted: 08/14/2014] [Indexed: 11/18/2022] Open

Abstract
Self-motion through space generates a visual pattern called optic flow. It can be used to determine one's direction of self-motion (heading). Previous studies have already shown that this perceptual ability, which is of critical importance during everyday life, changes with age. In most of these studies subjects were asked to judge whether they appeared to be heading to the left or right of a target. Thresholds were found to increase continuously with age. In our current study, we were interested in absolute rather than relative heading judgments and in the question of a potential neural correlate of an age-related deterioration of heading perception. Two groups, older test subjects and younger controls, were shown optic flow stimuli in a virtual-reality setup. Visual stimuli simulated self-motion through a 3-D cloud of dots and subjects had to indicate their perceived heading direction after each trial. In different subsets of experiments we varied individually relevant stimulus parameters: presentation time, number of dots in the display, stereoscopic vs. non-stereoscopic stimulation, and motion coherence. We found decrements in heading performance with age for each stimulus parameter. In a final step we aimed to determine a putative neural basis of this behavioral decline. To this end we modified a neural network model which has previously proven capable of reproducing and predicting certain aspects of heading perception. We show that the observed data can be modeled by implementing an age-related neuronal cell loss in this neural network. We conclude that a continuous decline of certain aspects of motion perception, among them heading, might be based on an age-related progressive loss of groups of neurons activated by visual motion.
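The modeling step, degrading heading readout by deleting neurons, can be sketched generically. Below is an editorial toy example, not the authors' network: a population-vector readout over Gaussian-tuned heading units. With the full, evenly tiled population the readout is nearly exact; deleting a contiguous block of units biases the estimate, mimicking an age-related cell loss.

```python
import math

def population_readout(heading, units, sigma=0.5):
    """Population-vector estimate of heading (radians) from units with
    the given preferred directions and Gaussian direction tuning."""
    sx = sy = 0.0
    for pref in units:
        d = math.atan2(math.sin(heading - pref), math.cos(heading - pref))
        r = math.exp(-d * d / (2 * sigma ** 2))     # tuned firing rate
        sx += r * math.cos(pref)
        sy += r * math.sin(pref)
    return math.atan2(sy, sx)

n = 72
full = [2 * math.pi * i / n for i in range(n)]    # evenly tiled population
lesioned = full[: n // 4] + full[n // 2 :]        # delete a block of cells

true_heading = 1.0  # radians
err_full = abs(population_readout(true_heading, full) - true_heading)
err_lesioned = abs(population_readout(true_heading, lesioned) - true_heading)
# err_full is tiny; the lesioned population shows a systematic bias.
```
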
Collapse
Affiliation(s)
- Matthias Lich
- Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany
| | - Frank Bremmer
- Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany
| |
Collapse
|
22
|
Kaminiarz A, Schlack A, Hoffmann KP, Lappe M, Bremmer F. Visual selectivity for heading in the macaque ventral intraparietal area. J Neurophysiol 2014; 112:2470-80. [PMID: 25122709 DOI: 10.1152/jn.00410.2014] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The patterns of optic flow seen during self-motion can be used to determine the direction of one's own heading. Tracking eye movements, which typically occur during everyday life, alter this task since they add further retinal image motion and (predictably) distort the retinal flow pattern. Humans employ both visual and nonvisual (extraretinal) information to solve a heading task in such cases. Likewise, it has been shown that neurons in the monkey medial superior temporal area (area MST) use both signals during the processing of self-motion information. In this article we report that neurons in the macaque ventral intraparietal area (area VIP) use visual information derived from the distorted flow patterns to encode heading during (simulated) eye movements. We recorded responses of VIP neurons to simple radial flow fields and to distorted flow fields that simulated self-motion plus eye movements. In 59% of the cases, cell responses compensated for the distortion and kept the same heading selectivity irrespective of different simulated eye movements. In addition, response modulations during real compared with simulated eye movements were smaller, consistent with reafferent signaling involved in the processing of the visual consequences of eye movements in area VIP. We conclude that the motion selectivities found in area VIP, like those in area MST, provide a way to successfully analyze and use flow fields during self-motion and simultaneous tracking movements.
Collapse
Affiliation(s)
| | - Anja Schlack
- Allgemeine Zoologie und Neurobiologie, University of Bochum, Bochum, Germany; and
| | - Klaus-Peter Hoffmann
- AG Neurophysik, University of Marburg, Marburg, Germany; Allgemeine Zoologie und Neurobiologie, University of Bochum, Bochum, Germany; and
| | - Markus Lappe
- Institut für Psychologie, University of Münster, Münster, Germany
| | - Frank Bremmer
- AG Neurophysik, University of Marburg, Marburg, Germany;
| |
Collapse
|
23
|
Raudies F, Ringbauer S, Neumann H. A bio-inspired, computational model suggests velocity gradients of optic flow locally encode ordinal depth at surface borders and globally they encode self-motion. Neural Comput 2013; 25:2421-49. [PMID: 23663150 DOI: 10.1162/neco_a_00479] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Visual navigation requires the estimation of self-motion as well as the segmentation of objects from the background. We suggest a definition of local velocity gradients to compute types of self-motion, segment objects, and compute local properties of optical flow fields, such as divergence, curl, and shear. Such velocity gradients are computed as velocity differences measured locally tangent and normal to the direction of flow. Then these differences are rotated according to the local direction of flow to achieve independence of that direction. We propose a bio-inspired model for the computation of these velocity gradients for video sequences. Simulation results show that local gradients encode ordinal surface depth, assuming self-motion in a rigid scene or object motions in a nonrigid scene. For translational self-motion, velocity gradients can be used to distinguish between static and moving objects. The information about ordinal surface depth and self-motion can help steering control for visual navigation.
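The local flow-field properties this model builds on (divergence, curl, and shear) reduce to combinations of first-order partial derivatives of the flow. The snippet below is a plain Cartesian finite-difference version; the paper's gradients are additionally measured tangent and normal to the local flow direction and rotated into that frame, which this sketch omits.

```python
def flow_gradients(u, v, x, y, h=1e-5):
    """First-order properties of a 2-D flow field (u(x, y), v(x, y)) via
    central finite differences: divergence, curl, and the two shear
    (deformation) components."""
    du_dx = (u(x + h, y) - u(x - h, y)) / (2 * h)
    du_dy = (u(x, y + h) - u(x, y - h)) / (2 * h)
    dv_dx = (v(x + h, y) - v(x - h, y)) / (2 * h)
    dv_dy = (v(x, y + h) - v(x, y - h)) / (2 * h)
    return {
        "div":    du_dx + dv_dy,   # expansion/contraction
        "curl":   dv_dx - du_dy,   # rotation
        "shear1": du_dx - dv_dy,   # stretching deformation
        "shear2": du_dy + dv_dx,   # shearing deformation
    }

# Radial expansion (u, v) = (x, y): pure divergence of 2, zero curl.
expansion = flow_gradients(lambda x, y: x, lambda x, y: y, 0.3, -0.2)
# Solid-body rotation (u, v) = (-y, x): pure curl of 2, zero divergence.
rotation = flow_gradients(lambda x, y: -y, lambda x, y: x, 0.3, -0.2)
```
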
Collapse
Affiliation(s)
- Florian Raudies
- Center for Computational Neuroscience and Neural Technology and Center of Excellence for Learning in Education, Science, and Technology, Boston University, Boston, MA 02215, USA.
| | | | | |
Collapse
|
24
|
Raudies F, Hasselmo ME. Modeling boundary vector cell firing given optic flow as a cue. PLoS Comput Biol 2012; 8:e1002553. [PMID: 22761557 PMCID: PMC3386186 DOI: 10.1371/journal.pcbi.1002553] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2011] [Accepted: 04/25/2012] [Indexed: 11/24/2022] Open
Abstract
Boundary vector cells in entorhinal cortex fire when a rat is in locations at a specific distance from walls of an environment. This firing may originate from memory of the barrier location combined with path integration, or the firing may depend upon the apparent visual input image stream. The modeling work presented here investigates the role of optic flow, the apparent change of patterns of light on the retina, as input for boundary vector cell firing. Analytical spherical flow is used by a template model to segment walls from the ground, to estimate self-motion and the distance and allocentric direction of walls, and to detect drop-offs. Distance estimates of walls in an empty circular or rectangular box have a mean error of less than or equal to two centimeters. Integrating these estimates into a visually driven boundary vector cell model leads to the firing patterns characteristic for boundary vector cells. This suggests that optic flow can influence the firing of boundary vector cells. Over the past few decades a variety of cells in hippocampal structures have been analyzed and their function has been identified. Head direction cells indicate the world-centered direction of the animal's head like a compass. Place cells fire in locations associated with visual, auditory, or olfactory cues. Grid cells fill open space like a carpet with their mosaic of firing. Boundary vector cells fire if a boundary that cannot be passed by the animal appears at a certain distance and world-centered direction. All these cells are players in the navigation game; however, their interaction and linkage to sensory systems like vision and memory is not fully understood. Our model analyzes a potential link between the visual system and boundary vector cells. As part of the visual system, we model optic flow that is available to rats. Optic flow is defined as change of lightness patterns on the retina and contains information about self-motion and environment. This optic flow is used in our model to estimate the distance and direction of boundaries. Our model simulations suggest a link between optic flow and the firing of boundary vector cells.
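The receptive-field stage of such a model is commonly written as a product of Gaussian tunings over distance and allocentric direction to a boundary, with the distance and direction here assumed to come from an optic-flow front end as in the paper. A minimal sketch with illustrative tuning widths (not the paper's fitted parameters):

```python
import math

def bvc_rate(d, phi, d_pref, phi_pref, sigma_d=0.06, sigma_phi=0.2):
    """Boundary vector cell firing: Gaussian tuning to the distance d (m)
    and allocentric direction phi (rad) of a boundary, peaking at the
    cell's preferred distance and direction. The angular difference is
    wrapped so tuning behaves correctly across the +/- pi boundary."""
    dphi = math.atan2(math.sin(phi - phi_pref), math.cos(phi - phi_pref))
    return (math.exp(-(d - d_pref) ** 2 / (2 * sigma_d ** 2))
            * math.exp(-dphi ** 2 / (2 * sigma_phi ** 2)))

peak = bvc_rate(0.25, math.pi / 2, 0.25, math.pi / 2)   # at preference
off = bvc_rate(0.40, math.pi / 2, 0.25, math.pi / 2)    # wrong distance
```
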
Collapse
Affiliation(s)
- Florian Raudies
- Center for Computational Neuroscience and Neural Technology-CompNet, Boston University, Boston, Massachusetts, United States of America.
| | | |
Collapse
|
25
|
Raudies F, Mingolla E, Neumann H. Active gaze control improves optic flow-based segmentation and steering. PLoS One 2012; 7:e38446. [PMID: 22719889 PMCID: PMC3375264 DOI: 10.1371/journal.pone.0038446] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2012] [Accepted: 05/07/2012] [Indexed: 11/30/2022] Open
Abstract
An observer traversing an environment actively relocates gaze to fixate objects. Evidence suggests that gaze is frequently directed toward the center of an object considered as target but more likely toward the edges of an object that appears as an obstacle. We suggest that this difference in gaze might be motivated by specific patterns of optic flow that are generated by either fixating the center or edge of an object. To support our suggestion we derive an analytical model that shows: Tangentially fixating the outer surface of an obstacle leads to strong flow discontinuities that can be used for flow-based segmentation. Fixation of the target center while gaze and heading are locked without head-, body-, or eye-rotations gives rise to a symmetric expansion flow with its center at the point being approached, which facilitates steering toward a target. We conclude that gaze control incorporates ecological constraints to improve the robustness of steering and collision avoidance by actively generating flows appropriate to solve the task.
Collapse
Affiliation(s)
- Florian Raudies
- Center of Excellence for Learning in Education, Science, and Technology, Boston University, Boston, Massachusetts, United States of America.
| | | | | |
Collapse
|
26
|
Modeling the influence of optic flow on grid cell firing in the absence of other cues. J Comput Neurosci 2012; 33:475-93. [PMID: 22555390 PMCID: PMC3484285 DOI: 10.1007/s10827-012-0396-6] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2011] [Revised: 03/30/2012] [Accepted: 04/03/2012] [Indexed: 11/17/2022]
Abstract
Information from the vestibular, sensorimotor, or visual systems can affect the firing of grid cells recorded in entorhinal cortex of rats. Optic flow provides information about the rat’s linear and rotational velocity and, thus, could influence the firing pattern of grid cells. To investigate this possible link, we model parts of the rat’s visual system and analyze their capability in estimating linear and rotational velocity. In our model a rat is simulated to move along trajectories recorded from rat’s foraging on a circular ground platform. Thus, we preserve the intrinsic statistics of real rats’ movements. Visual image motion is analytically computed for a spherical camera model and superimposed with noise in order to model the optic flow that would be available to the rat. This optic flow is fed into a template model to estimate the rat’s linear and rotational velocities, which in turn are fed into an oscillatory interference model of grid cell firing. Grid scores are reported while altering the flow noise, tilt angle of the optical axis with respect to the ground, the number of flow templates, and the frequency used in the oscillatory interference model. Activity patterns are compatible with those of grid cells, suggesting that optic flow can contribute to their firing.
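The final stage, feeding velocity estimates into an oscillatory interference model, can be sketched with velocity-controlled oscillators whose phases integrate the velocity projected onto preferred directions 60 degrees apart; their interference yields a spatially periodic, grid-like rate. This is a generic Burgess-style sketch with an illustrative gain `beta`, not the paper's parameterization:

```python
import math

def grid_rate(velocities, dt, beta=1.0):
    """Integrate three velocity-controlled oscillators (preferred
    directions 60 degrees apart) along a stream of (vx, vy) samples; the
    thresholded mean of their phase cosines gives a hexagonally periodic
    firing rate as a function of the integrated position."""
    angles = [0.0, math.pi / 3, 2 * math.pi / 3]
    phases = [0.0, 0.0, 0.0]
    for vx, vy in velocities:
        for i, th in enumerate(angles):
            phases[i] += 2 * math.pi * beta * (vx * math.cos(th)
                                               + vy * math.sin(th)) * dt
    return max(0.0, sum(math.cos(p) for p in phases) / 3)

path = [(1.0, 0.0)] * 100           # straight run, dt = 0.01: 1 unit far
r_half = grid_rate(path, 0.01)      # between firing fields: silent
r_field = grid_rate(path * 2, 0.01) # one spatial period along x: peak
```
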
Collapse
|
27
|
Harvey BM, Braddick OJ. Similar adaptation effects on motion pattern detection and position discrimination tasks: unusual properties of global and local level motion adaptation. Vision Res 2011; 51:479-88. [PMID: 21223977 DOI: 10.1016/j.visres.2011.01.002] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2010] [Revised: 11/28/2010] [Accepted: 01/04/2011] [Indexed: 11/19/2022]
Abstract
Here we examine adaptation effects on pattern detection and position discrimination tasks in radial and rotational motion patterns, induced by adapting stimuli moving in the same or opposite directions to the test stimuli. Adaptation effects on the two tasks were similar, suggesting these tasks are performed by the same population of neurons. Global motion specific adaptation was then induced by presenting adaptation stimuli and test stimuli in different parts of the visual field. Again, adaptation effects on the two tasks were similar, but neither same-direction nor opposite-direction motion produced any adaptation effect on contracting motion patterns. Finally, adaptation stimuli were compared that should have similar effects on local motion processing neurons, but different effects on global motion processing neurons. Again, adaptation effects on the two tasks were similar. However, when global-level adaptation was avoided, no adaptation effects were seen with adaptation patterns moving in the opposite direction to the test pattern. Together, these last two experiments suggest that adaptation to opposite directions of motion from the test motion affects global motion processing but not local motion processing neurons.
Collapse
Affiliation(s)
- Benjamin M Harvey
- Department of Experimental Psychology, Utrecht University, The Netherlands.
| | | |
Collapse
|
28
|
Yu CP, Page WK, Gaborski R, Duffy CJ. Receptive field dynamics underlying MST neuronal optic flow selectivity. J Neurophysiol 2010; 103:2794-807. [PMID: 20457855 DOI: 10.1152/jn.01085.2009] [Citation(s) in RCA: 26] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Optic flow informs moving observers about their heading direction. Neurons in monkey medial superior temporal (MST) cortex show heading selective responses to optic flow and planar direction selective responses to patches of local motion. We recorded MST neuronal responses to a 90 x 90 degrees optic flow display and to a 3 x 3 array of local motion patches covering the same area. Our goal was to test the hypothesis that the optic flow responses reflect the sum of the local motion responses. The local motion responses of each neuron were modeled as mixtures of Gaussians, combining the effects of two Gaussian response functions derived using a genetic algorithm, and then used to predict that neuron's optic flow responses. Some neurons showed good correspondence between local motion models and optic flow responses, others showed substantial differences. We used the genetic algorithm to modulate the relative strength of each local motion segment's responses to accommodate interactions between segments that might modulate their relative efficacy during co-activation by global patterns of optic flow. These gain modulated models showed uniformly better fits to the optic flow responses, suggesting that coactivation of receptive field segments alters neuronal response properties. We tested this hypothesis by simultaneously presenting local motion stimuli at two different sites. These two-segment stimuli revealed that interactions between response segments have direction and location specific effects that can account for aspects of optic flow selectivity. We conclude that MST's optic flow selectivity reflects dynamic interactions between spatially distributed local planar motion response mechanisms.
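The modeling pipeline in the study (per-segment direction tuning fit as a mixture of Gaussians, then combined with per-segment gains to predict global optic-flow responses) has roughly this shape. This is a schematic reimplementation with made-up parameters, not the fitted model or the genetic-algorithm search:

```python
import math

def gauss_dir(theta, mu, sigma):
    """Wrapped Gaussian tuning to motion direction theta (radians)."""
    d = math.atan2(math.sin(theta - mu), math.cos(theta - mu))
    return math.exp(-d * d / (2 * sigma ** 2))

def segment_response(theta, params):
    """Mixture-of-two-Gaussians direction tuning for one RF segment."""
    (w1, mu1, s1), (w2, mu2, s2) = params
    return w1 * gauss_dir(theta, mu1, s1) + w2 * gauss_dir(theta, mu2, s2)

def predicted_flow_response(local_dirs, seg_params, gains):
    """Gain-weighted sum of segment responses to the local motion
    directions an optic-flow pattern produces in each RF segment."""
    return sum(g * segment_response(th, p)
               for g, th, p in zip(gains, local_dirs, seg_params))

# Nine segments (a 3 x 3 array) with identical illustrative tuning; the
# flow pattern presents a different local direction to each segment.
params = [((1.0, 0.0, 0.5), (0.5, math.pi, 0.8))] * 9
dirs = [i * 2 * math.pi / 9 for i in range(9)]
r_uniform = predicted_flow_response(dirs, params, [1.0] * 9)
r_gated = predicted_flow_response(dirs, params, [1.0] * 8 + [0.0])
# Zeroing one segment's gain reduces the predicted global response,
# the kind of modulation the gain-fitted models capture.
```
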
Collapse
Affiliation(s)
- Chen Ping Yu
- Department of Computer Sciences, Rochester Institute of Technology, Rochester, New York, USA
| | | | | | | |
Collapse
|
29
|
Bremmer F, Kubischik M, Pekel M, Hoffmann KP, Lappe M. Visual selectivity for heading in monkey area MST. Exp Brain Res 2010; 200:51-60. [PMID: 19727690 DOI: 10.1007/s00221-009-1990-3] [Citation(s) in RCA: 39] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2009] [Accepted: 08/08/2009] [Indexed: 12/01/2022]
Abstract
The control of self-motion is supported by visual, vestibular, and proprioceptive signals. Recent research has shown how these signals interact in the monkey medio-superior temporal area (area MST) to enhance and disambiguate the perception of heading during self-motion. Area MST is a central stage for self-motion processing from optic flow, and integrates flow field information with vestibular self-motion and extraretinal eye movement information. Such multimodal cue integration is clearly important to solidify perception. However, to understand the information processing capabilities of the brain, one must also ask how much information can be deduced from a single cue alone. This is particularly pertinent for optic flow, where controversies over its usefulness for self-motion control have existed ever since Gibson proposed his direct approach to ecological perception. In our study, we therefore tested macaque MST neurons for their heading selectivity in highly complex flow fields based on purely visual mechanisms. We recorded responses of MST neurons to simple radial flow fields and to distorted flow fields that simulated a self-motion plus an eye movement. About half of the cells compensated for such distortion and kept the same heading selectivity in both cases. Our results strongly support the notion of an involvement of area MST in the computation of heading.
Affiliation(s)
- Frank Bremmer
- Allg. Zoologie und Neurobiologie, Ruhr Universität Bochum, 44780 Bochum, Germany.
30
Browning NA, Grossberg S, Mingolla E. A neural model of how the brain computes heading from optic flow in realistic scenes. Cogn Psychol 2009; 59:320-56. [PMID: 19716125 DOI: 10.1016/j.cogpsych.2009.07.002] [Citation(s) in RCA: 28] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2008] [Accepted: 07/20/2009] [Indexed: 11/15/2022]
Abstract
Visually-based navigation is a key competence during spatial cognition. Animals avoid obstacles and approach goals in novel cluttered environments using optic flow to compute heading with respect to the environment. Most navigation models try either to explain data or to demonstrate navigational competence in real-world environments, without regard to behavioral and neural substrates. The current article develops a model that does both. The ViSTARS neural model describes interactions among neurons in the primate magnocellular pathway, including V1, MT(+), and MSTd. Model outputs are quantitatively similar to human heading data in response to complex natural scenes. The model estimates heading to within 1.5 degrees in random dot or photo-realistically rendered scenes, and within 3 degrees in video streams from driving in real-world environments. Simulated rotations of less than 1 degree/s do not affect heading estimates, but faster simulated rotation rates do, as in humans. The model is part of a larger navigational system that identifies and tracks objects while navigating in cluttered environments.
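A heading estimate of the broad kind computed by MSTd-stage template models can be sketched as a bank of radial templates matched against the flow field. This is a hedged toy illustration, not the ViSTARS implementation; the grid, dot counts, and function names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(-1.0, 1.0, size=(300, 2))
depths = rng.uniform(2.0, 10.0, size=300)
true_heading = np.array([0.2, -0.1])
flow = (pts - true_heading) / depths[:, None]      # noise-free radial flow

def template_response(flow, pts, candidate):
    """MSTd-like radial template: match between observed flow directions
    and the unit expansion field centred on a candidate heading."""
    radial = pts - candidate
    radial = radial / (np.linalg.norm(radial, axis=1, keepdims=True) + 1e-9)
    unit = flow / (np.linalg.norm(flow, axis=1, keepdims=True) + 1e-9)
    return float(np.sum(radial * unit))

# Exhaustive template bank over a grid of candidate headings; the winner
# is taken as the heading estimate.
grid = np.linspace(-0.5, 0.5, 21)
candidates = np.array([[gx, gy] for gx in grid for gy in grid])
scores = [template_response(flow, pts, c) for c in candidates]
estimated_heading = candidates[int(np.argmax(scores))]
```

Using flow directions only (unit vectors) makes the match insensitive to the unknown depths, which is one reason direction-template schemes are attractive for heading.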
Affiliation(s)
- N Andrew Browning
- Department of Cognitive and Neural Systems, Center for Adaptive Systems, Boston University, 677 Beacon Street, Boston, MA 02215, USA
31
Hanes DA, Keller J, McCollum G. Motion parallax contribution to perception of self-motion and depth. Biol Cybern 2008; 98:273-293. [PMID: 18365242 DOI: 10.1007/s00422-008-0224-2] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/15/2007] [Accepted: 09/19/2007] [Indexed: 05/26/2023]
Abstract
The object of this study is to mathematically specify important characteristics of visual flow during translation of the eye for the perception of depth and self-motion. We address various strategies by which the central nervous system may estimate self-motion and depth from motion parallax, using equations for the visual velocity field generated by translation of the eye through space. Our results focus on information provided by the movement and deformation of three-dimensional objects and on local flow behavior around a fixated point. All of these issues are addressed mathematically in terms of definite equations for the optic flow. This formal characterization of the visual information presented to the observer is then considered in parallel with other sensory cues to self-motion in order to see how these contribute to the effective use of visual motion parallax, and how parallactic flow can, conversely, contribute to the sense of self-motion. This article will focus on a central case, for understanding of motion parallax in spacious real-world environments, of monocular visual cues observable during pure horizontal translation of the eye through a stationary environment. We suggest that the global optokinetic stimulus associated with visual motion parallax must converge in significant fashion with vestibular and proprioceptive pathways that carry signals related to self-motion. Suggestions of experiments to test some of the predictions of this study are made.
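The central 1/Z dependence of parallactic flow can be made concrete. A minimal sketch, assuming a pinhole model with focal length `f` and pure horizontal translation `Tx` (names are illustrative, not from the article):

```python
import numpy as np

def image_velocity(Z, Tx=1.0, f=1.0):
    """Flow of a point at depth Z under pure horizontal eye translation
    Tx with no rotation: v = (-f * Tx / Z, 0).  Speed falls off as 1/Z,
    which is the motion-parallax depth cue."""
    return np.array([-f * Tx / Z, 0.0])

v_near = image_velocity(Z=2.0)
v_far = image_velocity(Z=8.0)

# Relative depth is recoverable from the speed ratio of two features:
depth_ratio = np.linalg.norm(v_near) / np.linalg.norm(v_far)   # = Z_far / Z_near
```

Only relative depth is available from this cue alone; recovering absolute depth requires knowing the translation speed, which is one place the vestibular and proprioceptive convergence discussed above could enter.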
Affiliation(s)
- Douglas A Hanes
- Neuro-otology Department, Legacy Research Center, 1225 NE 2nd Avenue, Portland, OR 97232, USA.
32
A model for simultaneous computation of heading and depth in the presence of rotations. Vision Res 2007; 47:3025-40. [DOI: 10.1016/j.visres.2007.08.008] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2007] [Revised: 08/15/2007] [Accepted: 08/17/2007] [Indexed: 11/22/2022]
33
Abstract
The medial superior temporal (MST) area contains neurons with tuning for complex motion patterns, but very little is known about the generation of such responses. To explore how neuronal responses varied across complex motion pattern coherence, we recorded from single units while varying the strength of the global motion pattern in random dot stimuli. Stimuli were a family of optic flow patterns, consisting of radial motion, rotary motion, or combinations thereof ("spiral space"). We controlled the strength of the motion in the stimuli by varying the coherence--the proportion of dots carrying the signal. This allows motion strength to be varied independently of stimulus size, speed, or contrast. Most neurons' responses were well described as a linear function of stimulus coherence. Although more than half the cells possessed significant nonlinearities, these typically accounted for little additional variance. Nonlinear coherence response functions could either be compressive (e.g., saturating) or expansive and occurred in both the preferred and null direction responses. The presence of nonlinearities was not related to neuronal response properties such as preferred spiral-space direction or tuning bandwidth; however, cells with compressive nonlinearities in both the preferred and null directions tended to have higher response amplitudes and were more sensitive to weak motion signals. These cells did not appear to form a distinct subpopulation within MST. Our results suggest that MST neurons predominantly linearly encode increasing pattern motion energy within their RFs.
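The coherence manipulation (varying the proportion of signal dots in a spiral-space pattern) and a linear read-out can be sketched as follows. This is a toy illustration with invented names, not the stimulus or analysis code used in the study.

```python
import numpy as np

rng = np.random.default_rng(2)

def spiral_flow(pts, angle):
    """Unit flow in 'spiral space': angle 0 = radial expansion,
    pi/2 = rotation; intermediate angles mix the two."""
    r = pts / (np.linalg.norm(pts, axis=1, keepdims=True) + 1e-9)
    tangential = np.stack([-r[:, 1], r[:, 0]], axis=1)
    return np.cos(angle) * r + np.sin(angle) * tangential

def coherence_stimulus(n, coherence, angle):
    """Fraction `coherence` of dots carry the spiral signal; the rest move
    in random directions, so motion strength varies independently of
    stimulus size, speed, or contrast."""
    pts = rng.uniform(-1.0, 1.0, size=(n, 2))
    flow = spiral_flow(pts, angle)
    noise = rng.permutation(n) >= int(coherence * n)
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
    flow[noise] = np.stack([np.cos(theta[noise]), np.sin(theta[noise])], axis=1)
    return pts, flow

def readout(coherence, angle=0.0, n=2000):
    """Toy linear read-out: mean projection of each dot's motion onto the
    signal field.  In expectation this grows linearly with coherence."""
    pts, flow = coherence_stimulus(n, coherence, angle)
    return float(np.mean(np.sum(flow * spiral_flow(pts, angle), axis=1)))
```

The noise dots contribute zero on average, so the read-out is linear in coherence by construction; the nonlinearities reported in the abstract would appear as deviations from this baseline.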
Affiliation(s)
- Hilary W Heuer
- Howard Hughes Medical Institute, Department of Physiology and W.M. Keck Foundation Center for Integrative Neuroscience University of California, San Francisco, USA
34
Royden CS, Cahill JM, Conti DM. Factors affecting curved versus straight path heading perception. ACTA ACUST UNITED AC 2006; 68:184-93. [PMID: 16773892 DOI: 10.3758/bf03193668] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Displays commonly used for testing heading judgments in the presence of rotations are ambiguous to observers. They can be interpreted equally well as motion in a straight line while rotating the eyes or as motion on a curved path. This has led to conflicting results from studies that use these displays. In this study, we tested several factors that might influence which of these two interpretations observers see. These factors included the size of the field of view, the duration of the stimulus, textured scenes versus random-dot displays, and whether or not observers were given a description of their path. The only factor that had a significant effect on path perception was whether or not observers were given instructions describing their path of motion. Under all conditions without instructions, we found that observers responded in a way that was consistent with the perception of motion on a curved path.
Affiliation(s)
- Constance S Royden
- Department of Mathematics and Computer Sciences, College of the Holy Cross, Worcester, MA 01610, USA.
35
Bocheva N. Detection of motion discontinuities between complex motions. Vision Res 2006; 46:129-40. [PMID: 16139859 DOI: 10.1016/j.visres.2005.06.037] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2005] [Revised: 06/29/2005] [Accepted: 06/30/2005] [Indexed: 11/28/2022]
Abstract
Three main experiments were performed to evaluate the ability of human observers to detect non-homogeneity in a motion field caused by the presence of two adjacent complex motions, having a common motion component. The detection performance varied significantly depending on the common motion component in the motion field. The highest detection rate was observed when the common motion component was radial or rotational flow. The results imply that the selectivity to the presence of a complex motion in the optic flow depends both on the sensitivity of specialized mechanisms tuned to different complex motions and on inhibition of the units tuned to similar motions.
Affiliation(s)
- Nadejda Bocheva
- Institute of Physiology, Bulgarian Academy of Sciences, Acad. G. Bonchev str. bl. 23, Sofia 1113, Bulgaria.
36
Turano KA, Yu D, Hao L, Hicks JC. Optic-flow and egocentric-direction strategies in walking: Central vs peripheral visual field. Vision Res 2005; 45:3117-32. [PMID: 16084556 DOI: 10.1016/j.visres.2005.06.017] [Citation(s) in RCA: 59] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2004] [Revised: 06/09/2005] [Accepted: 06/14/2005] [Indexed: 10/25/2022]
Abstract
The impact of a central or peripheral visual field loss on the vision strategy used to guide walking was determined by measuring walking paths of visually impaired participants. An immersive virtual environment was used to dissociate the expected paths of the optic-flow and egocentric-direction strategies by offsetting the walker's point of view from the actual direction of walking. Environments consisted of a goal within a forest, the goal alone, or the forest alone following a brief presentation of the goal. The first two environments allowed an evaluation of the visual information used in a goal-directed task whereas the third environment investigated the information used in a memory-guided task. Participants had either a central (CFL) or peripheral visual field loss (PFL) or were fully sighted (FS). Results showed that, for the goal-directed task, the CFL group was less influenced by optic flow than was an age-matched FS group. Optic flow decreased heading error by only 1.3 degrees (16%) in the CFL group compared to 3.6 degrees (42%) in the FS group. The PFL group showed an optic-flow influence (2.4 degrees or 26%) comparable to an older, age-matched FS group (2.9 degrees or 31%). For the memory-guided task, all but the PFL group had heading errors comparable to those obtained in the goal-alone scene, demonstrating the ability to use an egocentric-direction strategy with a stored representation of either the goal's position or an offset relative to a landmark instead of a visible goal. The paths of the PFL group veered significantly from the predicted paths of both the optic-flow and egocentric-direction strategies. The findings of this study suggest that central vision is important for using optic flow to guide walking, whereas peripheral vision is important for establishing and/or updating an accurate representation of spatial structure for navigation.
Affiliation(s)
- Kathleen A Turano
- The Johns Hopkins University School of Medicine, Wilmer Eye Institute, Baltimore, MD, USA.
37
Mann R, Langer MS. Spectrum analysis of motion parallax in a 3D cluttered scene and application to egomotion. J Opt Soc Am A Opt Image Sci Vis 2005; 22:1717-31. [PMID: 16211798 DOI: 10.1364/josaa.22.001717] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Previous methods for estimating observer motion in a rigid 3D scene assume that image velocities can be measured at isolated points. When the observer is moving through a cluttered 3D scene such as a forest, however, pointwise measurements of image velocity are more challenging to obtain because multiple depths, and hence multiple velocities, are present in most local image regions. We introduce a method for estimating egomotion that avoids pointwise image velocity estimation as a first step. In its place, the direction of motion parallax in local image regions is estimated, using a spectrum-based method, and these directions are then combined to directly estimate 3D observer motion. There are two advantages to this approach. First, the method can be applied to a wide range of 3D cluttered scenes, including those for which pointwise image velocities cannot be measured because only normal velocity information is available. Second, the egomotion estimates can be used as a posterior constraint on estimating pointwise image velocities, since known egomotion parameters constrain the candidate image velocities at each point to a one-dimensional rather than a two-dimensional space.
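The combination step (turning local parallax directions into an egomotion estimate) can be illustrated without the spectrum-based front end: for pure translation, the parallax lines through each region radiate from the epipole, which a least-squares solve recovers. A sketch under that assumption, with invented names:

```python
import numpy as np

def epipole_from_parallax(centers, directions):
    """Least-squares point closest to all parallax lines: each image region
    contributes a line through its centre along its local parallax
    direction.  Minimising sum_i ||P_i (e - c_i)||^2, with P_i the
    projector perpendicular to direction d_i, gives (sum P_i) e = sum P_i c_i."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for c, d in zip(centers, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(2) - np.outer(d, d)       # projector perpendicular to the line
        A += P
        b += P @ c
    return np.linalg.solve(A, b)

rng = np.random.default_rng(3)
true_epipole = np.array([0.3, -0.2])
centers = rng.uniform(-1.0, 1.0, size=(50, 2))
directions = centers - true_epipole          # noise-free parallax directions
est = epipole_from_parallax(centers, directions)
```

Note that only the direction of each local parallax vector is used, matching the paper's point that normal-velocity information in cluttered scenes can suffice for egomotion.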
Affiliation(s)
- Richard Mann
- School of Computer Science, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada.
38
Hanada M. Computational analyses for illusory transformations in the optic flow field and heading perception in the presence of moving objects. Vision Res 2005; 45:749-58. [PMID: 15639501 DOI: 10.1016/j.visres.2004.09.037] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2004] [Revised: 09/23/2004] [Indexed: 11/24/2022]
Abstract
When we see a stimulus of a radial flow field (the target flow) overlapped with a lateral flow field or another radial flow field, the focus of expansion (FOE) of the target radial flow appears to be shifted. Royden and Conti [(2003). A model using MT-like motion-opponent operators explains an illusory transformation in the optic flow field. Vision Research, 43, 2811-2826] argued that local motion subtraction is crucial for explaining this phenomenon. The flow field that causes the illusory displacement of the FOE was computationally analyzed. It was shown that the flow field is approximately a rigid-motion flow; the flow can be generated by simulating a situation in which an observer moves toward a stationary scene. The heading direction for the observer corresponds to the perceived position of the FOE of the radial flow pattern. This implies that any algorithm that assumes rigidity of the scene and recovers veridical heading explains the bias in perceived FOE. There is no need for local motion subtraction to explain the phenomenon. Furthermore, the flow for an observer's translation in the presence of objects moving laterally or in depth was computationally analyzed. It was found that algorithms that minimize standard error functions, giving less weight to the independently moving objects, show biases in recovered heading similar to those of human observers. This implies that local motion subtraction is not necessary to explain the bias in perceived heading due to an object moving laterally or in depth, contrary to the argument of Royden [(2002). Computing heading in the presence of moving objects: a model that uses motion-opponent operators. Vision Research, 42, 3043-3058].
Affiliation(s)
- Mitsuhiko Hanada
- Department of Media Architecture, Future University-Hakodate, 116-2 Kamedanakano-cho, Hakodate, Hokkaido 041-8655, Japan.
39
Hanada M. An algorithmic model of heading perception. Biol Cybern 2005; 92:8-20. [PMID: 15592681 DOI: 10.1007/s00422-004-0529-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/05/2003] [Accepted: 10/07/2004] [Indexed: 05/24/2023]
Abstract
On the basis of Hanada and Ejima's (2000) model, an algorithmic model was presented to explain psychophysical data of van den Berg and Beintema (2000) that are inconsistent with vector-subtractive compensation for the rotational flow. The earlier model was modified so as not to use vector-subtractive compensation for the rotational flow. The proposed model computes the center of flow first and then estimates self-rotation; finally, heading is recovered from the center of flow and the estimate of self-rotation. The model explains the data of van den Berg and Beintema (2000). A fusion model of rotation estimates from different sources (efferent signals, proprioceptive feedback, vestibular signals about eye and head rotation, and visual motion) was also presented.
Affiliation(s)
- Mitsuhiko Hanada
- Department of Cognitive and Information Sciences, Faculty of Letters, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba, 263-8522, Japan.
40
Perrone JA. A visual motion sensor based on the properties of V1 and MT neurons. Vision Res 2004; 44:1733-55. [PMID: 15135991 DOI: 10.1016/j.visres.2004.03.003] [Citation(s) in RCA: 41] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2003] [Revised: 02/23/2004] [Indexed: 11/20/2022]
Abstract
The motion response properties of neurons increase in complexity as one moves from primary visual cortex (V1), up to higher cortical areas such as the middle temporal (MT) and the medial superior temporal area (MST). Many of the features of V1 neurons can now be replicated using computational models based on spatiotemporal filters. However until recently, relatively little was known about how the motion analysing properties of MT neurons could originate from the V1 neurons that provide their inputs. This has constrained the development of models of the MT-MST stages which have been linked to higher level motion processing tasks such as self-motion perception and depth estimation. I describe the construction of a motion sensor built up in stages from two spatiotemporal filters with properties based on V1 neurons. The resulting composite sensor is shown to have spatiotemporal frequency response profiles, speed and direction tuning responses that are comparable to MT neurons. The sensor is designed to work with digital images and can therefore be used as a realistic front-end to models of MT and MST neuron processing; it can be probed with the same two-dimensional motion stimuli used to test the neurons and has the potential to act as a building block for more complex models of motion processing.
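The quadrature-pair, energy-model idea behind such V1-based spatiotemporal filters can be sketched in a few lines. This toy 1D space-time example is not Perrone's sensor; the frequencies, envelope width, and grid are arbitrary choices for illustration.

```python
import numpy as np

def gabor_st(X, T, fx, ft, phase):
    """Space-time Gabor oriented in (x, t): responds best to gratings of
    the form cos(2*pi*(fx*x + ft*t)), i.e. motion at speed -ft/fx."""
    envelope = np.exp(-(X ** 2) / 0.5) * np.exp(-(T ** 2) / 0.5)
    return envelope * np.cos(2 * np.pi * (fx * X + ft * T) + phase)

def motion_energy(stimulus, X, T, fx=2.0, ft=2.0):
    """Quadrature pair -> phase-invariant motion energy, as in energy
    models of direction-selective complex cells."""
    even = np.sum(stimulus * gabor_st(X, T, fx, ft, 0.0))
    odd = np.sum(stimulus * gabor_st(X, T, fx, ft, np.pi / 2))
    return even ** 2 + odd ** 2

xs = np.linspace(-1.0, 1.0, 64)
ts = np.linspace(-1.0, 1.0, 64)
X, T = np.meshgrid(xs, ts)

preferred = np.cos(2 * np.pi * (2.0 * X + 2.0 * T))    # matches the filter
opposite = np.cos(2 * np.pi * (2.0 * X - 2.0 * T))     # reversed direction
```

Squaring and summing the quadrature pair removes dependence on stimulus phase, leaving a response that depends on motion direction and speed, which is the property MT-stage models build on.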
Affiliation(s)
- John A Perrone
- Department of Psychology, The University of Waikato, Private Bag 3105, Hamilton, New Zealand.
41
Abstract
Normal observers judge heading well both when moving in a straight line and when moving along a curved path. Judgments of curved path motion require depth variations in the scene while judgments of straight line heading (pure translation) do not. Here we show that a stroke patient who is impaired in low level 2D motion discrimination tasks and cannot accurately judge 3D structure from motion can accurately judge heading for straight line self-motion. This patient is impaired in judgments of curved path self-motion. This suggests that accurate heading judgments for observer translation do not require accurate 2D motion perception or 3D reconstruction of the scene. Judgments of curved path motion appear more dependent on accurate 2D motion perception.
Affiliation(s)
- Constance S Royden
- Department of Mathematics and Computer Science, College of the Holy Cross, MA, USA
42
Royden CS, Conti DM. A model using MT-like motion-opponent operators explains an illusory transformation in the optic flow field. Vision Res 2003; 43:2811-26. [PMID: 14568097 DOI: 10.1016/s0042-6989(03)00481-4] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Abstract
Previous studies have shown that a physiologically based model using motion-opponent operators to compute heading performs accurately for simulated observer translations. Here we show how this model can explain an illusory shift in the perceived focus of expansion of a radial flow field that occurs when a field of laterally moving dots is superimposed on a field of radially moving dots. Furthermore, we can use the model to predict the perceptual shift of the focus of expansion for novel visual stimuli. These results support the hypothesis that this illusion results from motion subtraction during the processing of optic flow fields.
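The core motion-subtraction account is easy to verify symbolically: subtracting a superimposed uniform lateral field from a radial field yields a radial field whose singularity is displaced by the lateral velocity. A minimal sketch (illustrative names, not the model's actual operators):

```python
import numpy as np

rng = np.random.default_rng(4)
pts = rng.uniform(-1.0, 1.0, size=(400, 2))
foe = np.array([0.0, 0.0])
lateral = np.array([0.3, 0.0])            # superimposed uniform lateral field

radial_flow = pts - foe                   # expanding field, FOE at the origin
lateral_flow = np.tile(lateral, (400, 1))

# Motion-opponent step: subtract the two motions present in each local
# region (here, transparently superimposed at the same locations).
difference = radial_flow - lateral_flow

# The difference field is itself radial about a shifted singularity:
# (p - foe) - u = p - (foe + u), i.e. the FOE appears displaced by u.
shifted_foe = foe + lateral
```

The shifted singularity lies in the direction of the lateral motion, which is the direction of the illusory FOE shift the model is meant to explain.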
Affiliation(s)
- Constance S Royden
- Department of Mathematics and Computer Science, College of the Holy Cross, P.O. Box 116A, Worcester, MA 01610, USA.
43
Abstract
We propose a two-layer neuromorphic architecture by which motion field patterns, generated during locomotion, are processed by template detectors specialized for gaze-directed self-motion (expansion and rotation). The templates provide a gaze-centered computation for analyzing the motion field in terms of how it is related to the fixation point (i.e., the fovea). The analysis is performed by relating the vectorial components of the act of motion to variations (i.e., asymmetries) of the local structure of the motion field. Notwithstanding their limited extension in space, such centric-minded templates extract, as a whole, global information from the input flow field, being sensitive to different local instances of the same global property of the vector field with respect to the fixation point; a quantitative analysis, in terms of vectorial operators, evidences this property as tuning curves for heading direction. Model performances, evaluated in several situations characterized by conditions of absence and presence of pursuit eye movements, validate the approach. We observe that the gaze-centered model provides an explicit testable hypothesis that can guide further explorations of visual motion processing in extrastriate cortical areas.
Affiliation(s)
- Paolo Cavalleri
- Department of Biophysical and Electronic Engineering, University of Genoa, Via all'Opera Pia 11/A, 16145, Genova, Italy
44
Page WK, Duffy CJ. Heading representation in MST: sensory interactions and population encoding. J Neurophysiol 2003; 89:1994-2013. [PMID: 12686576 DOI: 10.1152/jn.00493.2002] [Citation(s) in RCA: 108] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The population response of the dorsal medial superior temporal cortex (MSTd) encodes heading direction from optic flow seen during fixation or pursuit. Vestibular responses in these neurons might enhance heading representation during self-movement in light or provide an alternative basis for heading representation during self-movement in darkness. We have compared these hypotheses by recording MSTd neuronal responses to translational self-movement in light and darkness, during fixation and pursuit. Translational movement in darkness, with gaze fixed, evokes transient vestibular responses during acceleration that reverse directionality during deceleration and persist without a fixation target. Movement in light increases the amplitude and duration of these responses so that they mimic responses to simulated optic flow presented without translational movement. Pursuit of a stationary landmark during translational movement combines vestibular and visual effects with pursuit responses. Vestibular, visual, and pursuit effects interact so that single neuron heading responses vary across the stimulus period and between stimulus conditions. Combining single neuron responses by population vector summation yields stronger heading estimates in light than in darkness, with gaze fixed or during landmark pursuit. Adding translational movement to robust optic flow stimuli does not augment the population response. Vestibular signals enhance single neuron responses in light and maintain population heading estimation in darkness, potentially extending MSTd's heading representation across the continuum of naturalistic self-movement conditions.
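Population vector summation itself can be sketched with a toy cosine-tuned population; this is a generic illustration of the decoding scheme, not the recorded data or the authors' analysis code.

```python
import numpy as np

n = 180
preferred = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)  # preferred headings

def rates(heading, gain=10.0):
    """Rectified cosine tuning of each unit to the heading azimuth."""
    return np.maximum(gain * np.cos(preferred - heading), 0.0)

def population_vector(r):
    """Sum each unit's preferred-direction unit vector weighted by its
    firing rate, then read out the angle of the resultant."""
    x = np.sum(r * np.cos(preferred))
    y = np.sum(r * np.sin(preferred))
    return np.arctan2(y, x)

estimated = population_vector(rates(0.7))   # decode a heading of 0.7 rad
```

Because the read-out is a weighted vector sum, weaker or noisier responses (as in darkness) shrink the resultant without necessarily changing its direction, consistent with weaker but still usable heading estimates.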
Affiliation(s)
- William K Page
- Departments of Neurology, Neurobiology, and Anatomy, Ophthalmology, Brain and Cognitive Sciences, and The Center for Visual Science, The University of Rochester Medical Center, Rochester, New York 14642, USA
45
Abstract
Neurophysiological studies in MSTd report the existence of motion pattern selective cells whose visual motion properties span a continuum of values, suggesting a role in estimates of self-motion from optic flow. Biologically motivated models of heading estimation support this view, having identified similar visual motion properties within their "neural" structures. While such models have addressed the computational sufficiency of their respective feed-forward designs they have not explicitly examined the underlying computational structures, particularly as they relate to the interaction between planar and spiral motion responses within MSTd. Here we use an expanded stimulus training set that includes planar motions to extend the range of neurophysiological properties identified within an existing network structure [Network: Comput. Neural Syst. 9 (1998) 467]. In doing so, we quantify the emergent planar motion properties within the network hidden layer and examine how they interact, functionally and computationally, with cardinal/spiral motion pattern responses. Throughout the hidden layer we demonstrate that the input activation associated with a unit's preferred planar motion is consistent with an overlapping gradient hypothesis [J. Neurophysiol. 65(6) (1991) 1346]. Together with the change to a peripheral excitation profile in the presence of a unit's preferred spiral motion these results suggest a more complex computational architecture in which the cell's 'classical' receptive field properties are dependent on the type of stimulus used to map them. Based on the computational model we propose an experimental paradigm to investigate the existence of equivalent computational structures in MSTd.
Affiliation(s)
- Scott A Beardsley
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, 44 Cummington Street, Boston, MA 02215, USA
46
Royden CS. Computing heading in the presence of moving objects: a model that uses motion-opponent operators. Vision Res 2002; 42:3043-58. [PMID: 12480074 DOI: 10.1016/s0042-6989(02)00394-2] [Citation(s) in RCA: 49] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
Abstract
Psychophysical experiments have shown that human heading judgments can be biased by the presence of moving objects. Here we present a theoretical argument that motion differences can account for the direction of bias seen in humans. We further examine the responses of a computer simulation of a model for computing heading that uses motion-opponent operators similar to cells in the primate middle temporal visual area. When moving objects are present, this model shows similar biases to those seen with humans, suggesting that such a model may underlie human heading computations.
Affiliation(s)
- Constance S Royden
- Department of Mathematics and Computer Science, College of the Holy Cross, P.O. Box 116A, Worcester, MA 01610, USA
47
van den Berg AV, Beintema JA, Frens MA. Heading and path percepts from visual flow and eye pursuit signals. Vision Res 2002; 41:3467-86. [PMID: 11718788 DOI: 10.1016/s0042-6989(01)00023-2] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
Abstract
The percept of self-motion through the environment is supported by visual motion signals and eye movement signals. The interaction between these signals was studied by decoupling the eye movement from the pattern of retinal motion during brief simulated ego-movement on straight or circular trajectories. A new response method enabled subjects to report perceived destination and perceived curvature of their future path simultaneously. Various combinations of simulated gaze rotation in the retinal flow and eye pursuit were investigated. Simulated gaze rotation ranged from consistent with and larger than eye pursuit, to opponent to and larger than eye pursuit. It was found that the perceived destination shifts non-linearly with the mismatch between simulated gaze rotation and eye pursuit. The non-linearity is also revealed in the perceived tangent heading direction and perceived path curvature, although to different extents in different subjects. For the same retinal flow, eye pursuit that is consistent with the simulated gaze rotation reduces heading error, and the perceived path straightens out. In contrast, perceived path and/or heading do not become more curved or more biased in the direction opposite to pursuit when the eye-in-head rotation is opposite to the simulated gaze rotation. These observations point to modulation of the effect of the extra-retinal pursuit signal by the visual evidence for eye rotation. In a second experiment, we presented to a stationary eye the sum of a component of simulated gaze rotation and radial flow. It was found that the bi-circular flow component, which characterizes the change in the pattern of flow directions caused by the gaze rotation, induces a shift of perceived heading without appreciable perceived path curvature. Conversely, the complementary component of simulated gaze rotation (bi-radial flow) evokes a percept of motion on a curved path with a small tangent heading error. It was suggested that bi-circular and bi-radial flow components contribute primarily to percepts of heading and path curvature, respectively.
Affiliation(s)
- A V van den Berg
- Department of Physiology, Helmholtz School for Autonomous Systems Research, Faculty of Medicine, Erasmus University Rotterdam, PO Box 1738, 3000 DR, Rotterdam, The Netherlands.
48
Beintema JA, van den Berg AV. Pursuit affects precision of perceived heading for small viewing apertures. Vision Res 2001; 41:2375-91. [PMID: 11459594 DOI: 10.1016/s0042-6989(01)00077-3] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
We investigated the interaction between extra-retinal rotation signals and retinal motion signals in heading perception during pursuit eye movement. For a limited viewing aperture, the variability in perceived heading strongly depends on the pattern of motion directions. Heading towards a point outside the aperture generates nearly parallel aperture flow. This results in lower precision of perceived heading than heading that renders the radial pattern of flow visible. We ask whether the precision is limited by the pattern of flow visible on the retina or by that on the screen. During fixation, the two patterns are identical. They are decoupled during pursuit, since pursuit changes radial flow within the aperture on the screen into nearly parallel flow on the retina, and vice versa. The extra-retinal signal is known to reduce systematic errors in the direction of pursuit, thus compensating for the rotational flow during pursuit. We now ask whether the extra-retinal signal also affects the precision of heading percepts. It might if at the spatial integration stage the rotational flow has already been subtracted out. A compensation beyond the integration stage, however, cannot undo the change in retinal motion directions, so that an effect of pursuit on precision cannot be avoided. We measured the variable and systematic errors in perceived heading during fixation and pursuit for a frontal plane approach, while varying duration, dot lifetime and aperture size. We found that precision is affected by pursuit as much as predicted from the pattern of retinal flow, while compensation is significantly greater than zero. This means that the interaction between the extra-retinal signal and visual motion signals takes place after spatial integration of local motion signals. Furthermore, compensation increased significantly with longer duration (0.5-3.0 s), but not with larger aperture size (10-50 degrees). A larger aperture size did increase the eccentricity of perceived heading.
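The decoupling of screen flow and retinal flow by pursuit can be illustrated directly: adding a roughly uniform pursuit component to a radial pattern inside a small aperture pushes the singularity off-screen, turning radial flow into nearly parallel flow. A hedged sketch with invented names and values:

```python
import numpy as np

rng = np.random.default_rng(6)
pts = rng.uniform(-0.1, 0.1, size=(100, 2))        # small viewing aperture

def direction_spread(flow):
    """Dispersion of flow directions: near 0 for parallel flow, near 1
    when directions cover the full circle (as in a radial pattern)."""
    unit = flow / (np.linalg.norm(flow, axis=1, keepdims=True) + 1e-12)
    return 1.0 - np.linalg.norm(unit.mean(axis=0))

screen_flow = pts.copy()                   # FOE inside the aperture: radial pattern
pursuit = np.array([-0.3, 0.0])            # first-order pursuit flow (uniform)
retinal_flow = screen_flow + pursuit       # FOE pushed outside the aperture
```

Since nearly parallel flow constrains heading less well than a visible radial pattern, the sketch shows why precision should track the retinal pattern if rotational flow has not been subtracted before spatial integration.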
Affiliation(s)
- J A Beintema
- Department of Zoology and Neurobiology, Ruhr University Bochum, 44780, Bochum, Germany.

49
Takemura A, Inoue Y, Kawano K, Quaia C, Miles FA. Single-Unit Activity in Cortical Area MST Associated With Disparity-Vergence Eye Movements: Evidence for Population Coding. J Neurophysiol 2001; 85:2245-66. [PMID: 11353039 DOI: 10.1152/jn.2001.85.5.2245] [Citation(s) in RCA: 96] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Single-unit discharges were recorded in the medial superior temporal area (MST) of five behaving monkeys. Brief (230-ms) horizontal disparity steps were applied to large correlated or anticorrelated random-dot patterns (in which the dots had the same or opposite contrast, respectively, at the two eyes), eliciting vergence eye movements at short latencies [65.8 ± 4.5 (SD) ms]. Disparity tuning curves, describing the dependence of the initial vergence responses (measured over the period 50–110 ms after the step) on the magnitude of the steps, resembled the derivative of a Gaussian, the curves obtained with correlated and anticorrelated patterns having opposite sign. Cells with disparity-related activity were isolated using correlated stimuli, and disparity tuning curves describing the dependence of these initial neuronal responses (measured over the period 40–100 ms) on the magnitude of the disparity step were constructed (n = 102 cells). Using objective criteria and the fuzzy c-means clustering algorithm, disparity tuning curves were sorted into four groups based on their shapes. A post hoc comparison indicated that these four groups had features in common with four of the classes of disparity-selective neurons in striate cortex, but three of the four groups appeared to be part of a continuum. Most of the data were obtained from two monkeys, and when the disparity tuning curves of all the individual neurons recorded from either monkey were summed together, they fitted the disparity tuning curve for that same animal's vergence responses remarkably well (r² = 0.93, 0.98). Fifty-six of the neurons recorded from these two monkeys were also tested with anticorrelated patterns, and all showed significant modulation of their activity (P < 0.005, one-way ANOVA). Further, when all of the disparity tuning curves obtained with these patterns from either monkey were summed together, they too fitted the disparity tuning curve for that same animal's vergence responses very well (r² = 0.95, 0.96). Indeed, the summed activity even reproduced idiosyncratic differences in the vergence responses of the two monkeys. Based on these and other observations on the temporal coding of events, we hypothesize that the magnitude, direction, and time course of the initial vergence velocity responses associated with disparity steps applied to large patterns are all encoded in the summed activity of the disparity-sensitive cells in MST. Latency data suggest that this activity in MST occurs early enough to play an active role in the generation of vergence eye movements at short latencies.
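The population-coding hypothesis in this abstract — that summed derivative-of-Gaussian tuning curves predict the vergence response — can be illustrated with a toy model. Every parameter, the cell count, and the parametric tuning form below are assumptions for illustration, not the recorded data:

```python
import numpy as np

def dog(d, center, sigma, gain):
    """Derivative-of-Gaussian disparity tuning (assumed parametric form)."""
    z = (d - center) / sigma
    return gain * -z * np.exp(-0.5 * z**2)

rng = np.random.default_rng(1)
params = list(zip(rng.normal(0.0, 0.3, 40),    # tuning-curve centers
                  rng.uniform(0.6, 1.4, 40),   # widths
                  rng.uniform(0.5, 1.5, 40)))  # gains

def population_response(d, sign=1.0):
    """Summed activity of the toy population; sign=-1 mimics the inverted
    tuning reported for anticorrelated patterns."""
    return sign * sum(dog(d, c, s, g) for c, s, g in params)

# Small steps of opposite sign drive summed responses of opposite sign,
# large steps drive little response, and anticorrelation flips the sign.
print(population_response(0.5), population_response(-0.5),
      population_response(4.0), population_response(0.5, sign=-1.0))
```

The point of the sketch is that no decoding stage is needed beyond summation: the signed, graded vergence drive falls directly out of the summed activity, as the authors propose.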
Affiliation(s)
- A Takemura
- Neuroscience Section, Electrotechnical Laboratory, Ibaraki 305, Japan.

50
Glennerster A, Hansard ME, Fitzgibbon AW. Fixation could simplify, not complicate, the interpretation of retinal flow. Vision Res 2001; 41:815-34. [PMID: 11248268 DOI: 10.1016/s0042-6989(00)00300-x] [Citation(s) in RCA: 16] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Abstract
The visual system must generate a reference frame to relate retinal images in spite of head and eye movements. We show how a reference frame for storing the visual direction and depth of points can be composed from the angles, and changes in angles, between pairs and triples of points. The representation has no unique origin in 3-D space nor a unique set of cardinal directions (basis vectors). We show how this relative representation could be built up over a series of fixations and for different directions of translation of the observer. Maintaining gaze on a point as the observer translates helps in building up this representation. In our model, retinal flow is divided into changes in eccentricity and changes in meridional angle. The latter, called 'polar angle disparities' for binocular viewing (Weinshall, 1990, Computer Vision Graphics and Image Processing, 49, 222–241), can be used to recover the relief structure of the scene in a series of stages up to full Euclidean structure. We show how the direction of heading can be recovered by a similar series of stages.
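The decomposition of retinal flow into eccentricity and meridional-angle changes can be written down directly. Below is a minimal sketch (the function name and the retinal coordinate convention, fovea at the origin, are assumptions); pure expansion about the fixation point changes only eccentricity, while pure rotation about it changes only the meridional angle:

```python
import numpy as np

def polar_flow(p0, p1):
    """Split the motion of an image point from p0 to p1 (retinal
    coordinates, fovea at the origin) into a change in eccentricity
    and a change in meridional (polar) angle."""
    r0, r1 = np.hypot(*p0), np.hypot(*p1)
    a0 = np.arctan2(p0[1], p0[0])
    a1 = np.arctan2(p1[1], p1[0])
    dangle = (a1 - a0 + np.pi) % (2.0 * np.pi) - np.pi  # wrap to (-pi, pi]
    return r1 - r0, dangle

# Expansion about fixation: eccentricity grows, meridional angle is fixed.
print(polar_flow((0.3, 0.4), (0.33, 0.44)))
# Rotation about fixation by 0.1 rad: eccentricity fixed, angle shifts.
c, s = np.cos(0.1), np.sin(0.1)
print(polar_flow((0.3, 0.4), (0.3 * c - 0.4 * s, 0.3 * s + 0.4 * c)))
```

Because gaze stabilization keeps the fixated point at the origin, this polar split cleanly separates the two flow components the paper builds its stages on, which is one way to read the claim that fixation simplifies the interpretation of retinal flow.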
Affiliation(s)
- A Glennerster
- University Laboratory of Physiology, Parks Road, Oxford, OX1 3PT UK.