1. DiRisio GF, Ra Y, Qiu Y, Anzai A, DeAngelis GC. Neurons in Primate Area MSTd Signal Eye Movement Direction Inferred from Dynamic Perspective Cues in Optic Flow. J Neurosci 2023;43:1888-1904. PMID: 36725323; PMCID: PMC10027048; DOI: 10.1523/jneurosci.1885-22.2023.
Abstract
Smooth eye movements are common during natural viewing; we frequently rotate our eyes to track moving objects or to maintain fixation on an object during self-movement. Reliable information about smooth eye movements is crucial to various neural computations, such as estimating heading from optic flow or judging depth from motion parallax. While it is well established that extraretinal signals (e.g., efference copies of motor commands) carry critical information about eye velocity, the rotational optic flow field produced by eye rotations also carries valuable information. Although previous work has shown that dynamic perspective cues in optic flow can be used in computations that require estimates of eye velocity, it has remained unclear where and how the brain processes these visual cues and how they are integrated with extraretinal signals regarding eye rotation. We examined how neurons in the dorsal region of the medial superior temporal area (MSTd) of two male rhesus monkeys represent the direction of smooth pursuit eye movements based on both visual cues (dynamic perspective) and extraretinal signals. We find that most MSTd neurons have matched preferences for the direction of eye rotation based on visual and extraretinal signals. Moreover, neural responses to combinations of these signals are well predicted by a weighted linear summation model. These findings demonstrate a neural substrate for representing the velocity of smooth eye movements based on rotational optic flow and establish area MSTd as a key node for integrating visual and extraretinal signals into a more generalized representation of smooth eye movements.

Significance Statement: We frequently rotate our eyes to smoothly track objects of interest during self-motion. Information about eye velocity is crucial for a variety of computations performed by the brain, including depth perception and heading perception. Traditionally, information about eye rotation has been thought to arise mainly from extraretinal signals, such as efference copies of motor commands. Previous work shows that eye velocity can also be inferred from rotational optic flow that accompanies smooth eye movements, but the neural origins of these visual signals about eye rotation have remained unknown. We demonstrate that macaque neurons signal the direction of smooth eye rotation based on visual signals, and that they integrate both visual and extraretinal signals regarding eye rotation in a congruent fashion.
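The weighted linear summation of visual and extraretinal responses described above can be sketched as follows; the tuning curve shape, weights, and baseline here are illustrative assumptions, not the fitted values from the study.

```python
import numpy as np

def tuning(dirs_deg, pref_deg, kappa=2.0):
    """Circular (von Mises-like) tuning over eye-rotation direction; kappa assumed."""
    return np.exp(kappa * (np.cos(np.deg2rad(dirs_deg - pref_deg)) - 1.0))

def combined_response(dirs_deg, w_vis=0.6, w_extra=0.5, baseline=5.0):
    """Weighted linear sum of a visual (dynamic perspective) response and an
    extraretinal (pursuit) response with matched direction preferences at 90 deg."""
    r_visual = tuning(dirs_deg, pref_deg=90.0)        # simulated rotational flow
    r_extraretinal = tuning(dirs_deg, pref_deg=90.0)  # actual pursuit (efference copy)
    return baseline + w_vis * r_visual + w_extra * r_extraretinal

dirs = np.arange(0, 360, 45)   # probe directions in degrees
print(combined_response(dirs).round(2))
```

Because the two tuning curves share a preferred direction, the summed response peaks at the common preference; mismatched preferences would broaden or split the combined tuning.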
Affiliation(s)
- Grace F DiRisio
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York 14627
- Department of Neurobiology, University of Chicago, Chicago, Illinois 60637
- Yongsoo Ra
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York 14627
- Department of Neurobiology, Harvard Medical School, Boston, Massachusetts 02115
- Yinghui Qiu
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York 14627
- College of Veterinary Medicine, Cornell University, Ithaca, New York 14853-6401
- Akiyuki Anzai
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York 14627
- Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York 14627
2. Ali M, Decker E, Layton OW. Temporal stability of human heading perception. J Vis 2023;23:8. PMID: 36786748; PMCID: PMC9932552; DOI: 10.1167/jov.23.2.8.
Abstract
Humans are capable of accurately judging their heading from optic flow during straight forward self-motion. Despite the global coherence in the optic flow field, however, visual clutter and other naturalistic conditions create constant flux on the eye. This presents a problem that must be overcome to accurately perceive heading from optic flow: the visual system must maintain sensitivity to optic flow variations that correspond with actual changes in self-motion and disregard those that do not. One solution could involve integrating optic flow over time to stabilize heading signals while suppressing transient fluctuations. Stability, however, may come at the cost of sluggishness. Here, we investigate the stability of human heading perception when subjects judge their heading after the simulated direction of self-motion changes. We found that the initial heading exerted an attractive influence on judgments of the final heading. Consistent with an evolving heading representation, bias toward the initial heading increased with the size of the heading change and as the viewing duration of the optic flow consistent with the final heading decreased. Introducing periods of sensory dropout (blackouts) late in the trial increased bias, whereas an earlier blackout did not. Simulations of a neural model, the Competitive Dynamics Model, demonstrate that a mechanism that produces an evolving heading signal through recurrent competitive interactions largely captures the human data. Our findings characterize how the visual system balances stability in heading perception with sensitivity to change, and they support the hypothesis that heading perception evolves over time.
Affiliation(s)
- Mufaddal Ali
- Department of Computer Science, Colby College, Waterville, ME, USA
- Eli Decker
- Department of Computer Science, Colby College, Waterville, ME, USA
- Oliver W. Layton
- Department of Computer Science, Colby College, Waterville, ME, USA. https://sites.google.com/colby.edu/owlab
3. Matthis JS, Muller KS, Bonnen KL, Hayhoe MM. Retinal optic flow during natural locomotion. PLoS Comput Biol 2022;18:e1009575. PMID: 35192614; PMCID: PMC8896712; DOI: 10.1371/journal.pcbi.1009575.
Abstract
We examine the structure of the visual motion projected on the retina during natural locomotion in real-world environments. Bipedal gait generates a complex, rhythmic pattern of head translation and rotation in space, so without gaze stabilization mechanisms such as the vestibulo-ocular reflex (VOR), a walker's visually specified heading would vary dramatically throughout the gait cycle. The act of fixation on stable points in the environment nulls image motion at the fovea, resulting in stable patterns of outflow on the retinae centered on the point of fixation. These outflowing patterns retain a higher-order structure that is informative about the stabilized trajectory of the eye through space. We measured this structure by applying the curl and divergence operations to the retinal flow velocity vector fields and found features that may be valuable for the control of locomotion. In particular, the sign and magnitude of foveal curl in retinal flow specifies the body's trajectory relative to the gaze point, while the point of maximum divergence in the retinal flow field specifies the walker's instantaneous overground velocity/momentum vector in retinotopic coordinates. Assuming that walkers can determine the body position relative to gaze direction, these time-varying retinotopic cues for the body's momentum could provide a visual control signal for locomotion over complex terrain. In contrast, the temporal variation of the eye-movement-free, head-centered flow fields is large enough to be problematic for use in steering towards a goal. Consideration of optic flow in the context of real-world locomotion therefore suggests a re-evaluation of the role of optic flow in the control of action during natural behavior.

We recorded the full-body kinematics and binocular gaze of humans walking through real-world natural environments and estimated visual motion (optic flow) using both computational video analysis and geometric simulation. Contrary to established theories of the role of optic flow in the control of locomotion, we found that eye-movement-free, head-centric optic flow is highly unstable due to the complex phasic trajectory of the head during natural locomotion, rendering it an unlikely candidate for heading perception. In contrast, retina-centered optic flow consisted of a regular pattern of outflowing motion centered on the fovea. Retinal optic flow contained highly consistent patterns that specified the walker's trajectory relative to the point of fixation, providing powerful retinotopic cues that may be used for the visual control of locomotion in natural environments. This examination of optic flow in real-world contexts suggests a need to re-evaluate existing theories of the role of optic flow in the visual control of action during natural behavior.
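The curl and divergence measurements can be reproduced on a sampled flow field with finite differences. The flow field below, an expansion plus a rotational component, is a synthetic stand-in for real retinal flow, so the specific values are illustrative only.

```python
import numpy as np

# Sample a synthetic retinal flow field on a grid centered on the fovea (0, 0):
# an expansion component (outflow) plus a rotational component.
y, x = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64), indexing="ij")
u = x - 0.5 * y   # horizontal velocity component
v = y + 0.5 * x   # vertical velocity component

# Finite-difference partial derivatives (np.gradient differentiates axis 0, then axis 1).
du_dy, du_dx = np.gradient(u, y[:, 0], x[0, :])
dv_dy, dv_dx = np.gradient(v, y[:, 0], x[0, :])

divergence = du_dx + dv_dy   # peaks at the point of maximum outflow
curl = dv_dx - du_dy         # sign/magnitude reflect rotation about the gaze point

print(divergence.mean(), curl.mean())
```

With pure expansion the curl term vanishes; the rotational term added here yields a nonzero curl of the kind the paper links to the body's trajectory relative to the gaze point.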
Affiliation(s)
- Jonathan Samir Matthis
- Department of Biology, Northeastern University, Boston, Massachusetts, United States of America
- Karl S. Muller
- Center for Perceptual Systems, University of Texas at Austin, Austin, Texas, United States of America
- Kathryn L. Bonnen
- School of Optometry, Indiana University Bloomington, Bloomington, Indiana, United States of America
- Mary M. Hayhoe
- Center for Perceptual Systems, University of Texas at Austin, Austin, Texas, United States of America
4. ARTFLOW: A Fast, Biologically Inspired Neural Network that Learns Optic Flow Templates for Self-Motion Estimation. Sensors 2021;21:8217. PMID: 34960310; PMCID: PMC8708706; DOI: 10.3390/s21248217.
Abstract
Most algorithms for steering, obstacle avoidance, and moving object detection rely on accurate self-motion estimation, a problem animals solve in real time as they navigate through diverse environments. One biological solution leverages optic flow, the changing pattern of motion experienced on the eye during self-motion. Here I present ARTFLOW, a biologically inspired neural network that learns patterns in optic flow to encode the observer’s self-motion. The network combines the fuzzy ART unsupervised learning algorithm with a hierarchical architecture based on the primate visual system. This design affords fast, local feature learning across parallel modules in each network layer. Simulations show that the network is capable of learning stable patterns from optic flow simulating self-motion through environments of varying complexity with only one epoch of training. ARTFLOW trains substantially faster and yields self-motion estimates that are far more accurate than a comparable network that relies on Hebbian learning. I show how ARTFLOW serves as a generative model to predict the optic flow that corresponds to neural activations distributed across the network.
5. Burlingham CS, Heeger DJ. Heading perception depends on time-varying evolution of optic flow. Proc Natl Acad Sci U S A 2020;117:33161-33169. PMID: 33328275; PMCID: PMC7776640; DOI: 10.1073/pnas.2022984117.
Abstract
There is considerable support for the hypothesis that perception of heading in the presence of rotation is mediated by instantaneous optic flow. This hypothesis, however, has never been tested. We introduce a method, termed "nonvarying phase motion," for generating a stimulus that conveys a single instantaneous optic flow field, even though the stimulus is presented for an extended period of time. In this experiment, observers viewed stimulus videos and performed a forced-choice heading discrimination task. For nonvarying phase motion, observers made large errors in heading judgments. This suggests that instantaneous optic flow is insufficient for heading perception in the presence of rotation. These errors were mostly eliminated when the velocity of phase motion was varied over time to convey the evolving sequence of optic flow fields corresponding to a particular heading. This demonstrates that heading perception in the presence of rotation relies on the time-varying evolution of optic flow. We hypothesize that the visual system accurately computes heading, despite rotation, based on optic acceleration, the temporal derivative of optic flow.
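The proposed cue, optic acceleration, is the temporal derivative of the optic flow field, which can be approximated by frame-to-frame finite differences. The flow sequence below, whose speed grows linearly over time, is an assumed toy input, not the study's stimulus.

```python
import numpy as np

def optic_acceleration(flow_sequence, dt):
    """Temporal derivative of a flow sequence of shape (T, H, W, 2),
    where the last axis holds the (vx, vy) velocity components."""
    return np.gradient(flow_sequence, dt, axis=0)

# Toy sequence: flow magnitude grows linearly over T frames -> constant acceleration.
T, H, W = 10, 8, 8
base = np.ones((H, W, 2))
flow = np.stack([(1.0 + 0.2 * t) * base for t in range(T)])
accel = optic_acceleration(flow, dt=1.0)
print(accel.shape)   # one acceleration field per frame
```

For self-motion with rotation, the flow field evolves over time even when any single instantaneous field is ambiguous, so its temporal derivative carries heading information that a frozen field lacks.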
Affiliation(s)
- David J Heeger
- Department of Psychology, New York University, New York, NY 10003
- Center for Neural Science, New York University, New York, NY 10003
6. The Effects of Depth Cues and Vestibular Translation Signals on the Rotation Tolerance of Heading Tuning in Macaque Area MSTd. eNeuro 2020;7:ENEURO.0259-20.2020. PMID: 33127626; PMCID: PMC7688306; DOI: 10.1523/eneuro.0259-20.2020.
Abstract
When the eyes rotate during translational self-motion, the focus of expansion (FOE) in optic flow no longer indicates heading, yet heading judgments are largely unbiased. Much emphasis has been placed on the role of extraretinal signals in compensating for the visual consequences of eye rotation. However, recent studies also support a purely visual mechanism of rotation compensation in heading-selective neurons. Computational theories support a visual compensatory strategy but require different visual depth cues. We examined the rotation tolerance of heading tuning in macaque area MSTd using two different virtual environments, a frontoparallel (2D) wall and a 3D cloud of random dots. Both environments contained rotational optic flow cues (i.e., dynamic perspective), but only the 3D cloud stimulus contained local motion parallax cues, which are required by some models. The 3D cloud environment did not enhance the rotation tolerance of heading tuning for individual MSTd neurons, nor the accuracy of heading estimates decoded from population activity, suggesting a key role for dynamic perspective cues. We also added vestibular translation signals to optic flow to test whether rotation tolerance is enhanced by non-visual cues to heading. We found no benefit of vestibular signals overall, but a modest effect for some neurons with significant vestibular heading tuning. We also found that neurons with more rotation-tolerant heading tuning are typically less selective to pure visual rotation cues. Together, our findings help to clarify the types of information that are used to construct heading representations that are tolerant to eye rotations.
7. Retinal Stabilization Reveals Limited Influence of Extraretinal Signals on Heading Tuning in the Medial Superior Temporal Area. J Neurosci 2019;39:8064-8078. PMID: 31488610; DOI: 10.1523/jneurosci.0388-19.2019.
Abstract
Heading perception in primates depends heavily on visual optic-flow cues. Yet during self-motion, heading percepts remain stable, even though smooth-pursuit eye movements often distort optic flow. According to theoretical work, self-motion can be represented accurately by compensating for these distortions in two ways: via retinal mechanisms or via extraretinal efference-copy signals, which predict the sensory consequences of movement. Psychophysical evidence strongly supports the efference-copy hypothesis, but physiological evidence remains inconclusive. Neurons that signal the true heading direction during pursuit are found in visual areas of monkey cortex, including the dorsal medial superior temporal area (MSTd). Here we measured heading tuning in MSTd using a novel stimulus paradigm, in which we stabilize the optic-flow stimulus on the retina during pursuit. This approach isolates the effects on neuronal heading preferences of extraretinal signals, which remain active while the retinal stimulus is prevented from changing. Our results from 3 female monkeys demonstrate a significant but small influence of extraretinal signals on the preferred heading directions of MSTd neurons. Under our stimulus conditions, which are rich in retinal cues, we find that retinal mechanisms dominate physiological corrections for pursuit eye movements, suggesting that extraretinal cues, such as predictive efference-copy mechanisms, have a limited role under naturalistic conditions.

Significance Statement: Sensory systems discount stimulation caused by an animal's own behavior. For example, eye movements cause irrelevant retinal signals that could interfere with motion perception. The visual system compensates for such self-generated motion, but how this happens is unclear. Two theoretical possibilities are a purely visual calculation or one using an internal signal of eye movements to compensate for their effects. The latter can be isolated by experimentally stabilizing the image on a moving retina, but this approach has never been adopted to study motion physiology. Using this method, we find that extraretinal signals have little influence on activity in visual cortex, whereas visually based corrections for ongoing eye movements have stronger effects and are likely most important under real-world conditions.
8.
Abstract
In the current study, we explored observers' use of two distinct analyses for determining their direction of motion, or heading: a scene-based analysis and a motion-based analysis. In two experiments, subjects viewed sequentially presented, paired digitized images of real-world scenes and judged the direction of heading; the pairs were presented with various interstimulus intervals (ISIs). In Experiment 1, subjects could determine heading when the two frames were separated with a 1,000-ms ISI, long enough to eliminate apparent motion. In Experiment 2, subjects performed two tasks, a path-of-motion task and a memory-load task, under three different ISIs, 50 ms, 500 ms, and 1,000 ms. Heading accuracy decreased with an increase in ISI. Increasing memory load influenced heading judgments only for the longer ISI when motion-based information was not available. These results are consistent with the hypothesis that the scene-based analysis has a coarse spatial representation, is a sustained temporal process, and is capacity limited, whereas the motion-based analysis has a fine spatial resolution, is a transient temporal process, and is capacity unlimited.
Affiliation(s)
- Sowon Hahn
- University of California at Riverside, USA
9. Layton OW, Fajen BR. Competitive Dynamics in MSTd: A Mechanism for Robust Heading Perception Based on Optic Flow. PLoS Comput Biol 2016;12:e1004942. PMID: 27341686; PMCID: PMC4920404; DOI: 10.1371/journal.pcbi.1004942.
Abstract
Human heading perception based on optic flow is not only accurate but also remarkably robust and stable. These qualities are especially apparent when observers move through environments containing other moving objects, which introduce optic flow that is inconsistent with observer self-motion and therefore uninformative about heading direction. Moving objects may also occupy large portions of the visual field and occlude regions of the background optic flow that are most informative about heading perception. The fact that heading perception is biased by no more than a few degrees under such conditions attests to the robustness of the visual system and warrants further investigation. The aim of the present study was to investigate whether recurrent, competitive dynamics among MSTd neurons that serve to reduce uncertainty about heading over time offer a plausible mechanism for capturing the robustness of human heading perception. Simulations of existing heading models that do not contain competitive dynamics yield heading estimates that are far more erratic and unstable than human judgments. We present a dynamical model of primate visual areas V1, MT, and MSTd based on that of Layton, Mingolla, and Browning; it is similar to the other models, except that it includes recurrent interactions among model MSTd neurons. Competitive dynamics stabilize the model's heading estimate over time, even when a moving object crosses the future path. Soft winner-take-all dynamics enhance units that code a heading direction consistent with the time history and suppress responses to transient changes to the optic flow field. Our findings support recurrent competitive temporal dynamics as a crucial mechanism underlying the robustness and stability of heading perception.
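The soft winner-take-all sharpening that recurrent competition produces can be sketched with a faster-than-linear signal function plus normalization, a standard reduction of shunting competitive dynamics; the heading grid, input bump, exponent, and iteration count below are illustrative assumptions, not the paper's full V1-MT-MSTd model.

```python
import numpy as np

def soft_wta(inputs, steps=10, power=1.5):
    """Recurrent soft winner-take-all: each iteration applies a faster-than-linear
    signal function, then normalizes so units compete for a fixed total activity.
    Strong units gain share at the expense of weak ones; repeated over time this
    sharpens (but does not instantly binarize) the activity profile."""
    r = np.asarray(inputs, dtype=float)
    r = r / r.sum()
    for _ in range(steps):
        s = r ** power      # faster-than-linear signal function
        r = s / s.sum()     # divisive normalization (competition for activity)
    return r

# Heading-tuned units on a 5-degree grid; Gaussian input bump centered at 90 deg.
headings = np.arange(0, 360, 5)
inputs = np.exp(-0.5 * ((headings - 90.0) / 20.0) ** 2)
r = soft_wta(inputs)
print(headings[np.argmax(r)])   # estimate remains centered on 90
```

Because each unit's final share is a monotonic function of its input, the winning heading is preserved while off-peak responses are progressively suppressed, which is one way a transient perturbation of the flow field fails to capture the estimate.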
Affiliation(s)
- Oliver W. Layton
- Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, New York, United States of America
- * E-mail:
| | - Brett R. Fajen
- Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, New York, United States of America
| |
10. Layton OW, Fajen BR. The temporal dynamics of heading perception in the presence of moving objects. J Neurophysiol 2015;115:286-300. PMID: 26510765; DOI: 10.1152/jn.00866.2015.
Abstract
Many forms of locomotion rely on the ability to accurately perceive one's direction of locomotion (i.e., heading) based on optic flow. Although accurate in rigid environments, heading judgments may be biased when independently moving objects are present. The aim of this study was to systematically investigate the conditions in which moving objects influence heading perception, with a focus on the temporal dynamics and the mechanisms underlying this bias. Subjects viewed stimuli simulating linear self-motion in the presence of a moving object and judged their direction of heading. Experiments 1 and 2 revealed that heading perception is biased when the object crosses or almost crosses the observer's future path toward the end of the trial, but not when the object crosses earlier in the trial. Nonetheless, heading perception is not based entirely on the instantaneous optic flow toward the end of the trial. Experiment 3 demonstrated this by varying how much of the earlier part of the trial, leading up to the final frame, was presented to subjects. When the stimulus duration was long enough to include the part of the trial before the moving object crossed the observer's path, heading judgments were less biased. The findings suggest that heading perception is affected by the temporal evolution of optic flow. The time course of dorsal medial superior temporal area (MSTd) neuron responses may play a crucial role in perceiving heading in the presence of moving objects, a property not captured by many existing models.
Affiliation(s)
- Oliver W Layton
- Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, New York
| | - Brett R Fajen
- Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, New York
| |
11. Perrone JA, Liston DB. Redundancy reduction explains the expansion of visual direction space around the cardinal axes. Vision Res 2015;111:31-42. PMID: 25888929; DOI: 10.1016/j.visres.2015.03.020.
Abstract
Motion direction discrimination in humans is worse for oblique directions than for the cardinal directions (the oblique effect). For some unknown reason, the human visual system makes systematic errors in the estimation of particular motion directions; a direction displacement near a cardinal axis appears larger than it really is, whereas the same displacement near an oblique axis appears smaller. Although the perceptual effects are robust and are clearly measurable in smooth pursuit eye movements, all attempts to identify the neural underpinnings of the oblique effect have failed. Here we show that a model of image velocity estimation based on the known properties of neurons in primary visual cortex (V1) and the middle temporal (MT) visual area of the primate brain produces the oblique effect. We also provide an explanation for the unusual asymmetric patterns of inhibition that have been found surrounding MT neurons. These patterns are consistent with a mechanism within the visual system that prevents redundant velocity signals from being passed on to the next motion-integration stage (the dorsal medial superior temporal area, MSTd). We show that model redundancy-reduction mechanisms within the MT-MSTd pathway produce the oblique effect.
Affiliation(s)
- John A Perrone
- The School of Psychology, University of Waikato, Hamilton, New Zealand.
| | - Dorion B Liston
- San Jose State University, San Jose, CA, USA; NASA Ames Research Center, Moffett Field, CA, USA
| |
12. Royden CS, Holloway MA. Detecting moving objects in an optic flow field using direction- and speed-tuned operators. Vision Res 2014;98:14-25. PMID: 24607912; DOI: 10.1016/j.visres.2014.02.009.
Abstract
An observer moving through a scene must be able to identify moving objects. Psychophysical results have shown that people can identify moving objects based on the speed or direction of their movement relative to the optic flow field generated by the observer's motion. Here we show that a model that uses speed- and direction-tuned units, whose responses are based on the response properties of cells in the primate visual cortex, can successfully identify the borders of moving objects in a scene through which an observer is moving.
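A crude version of this idea flags any local motion whose direction deviates from the radial direction expected for a static scene given the observer's focus of expansion (FOE). The sketch below uses a bare direction-difference test with an assumed threshold and a synthetic flow field; it is not the paper's physiologically based speed- and direction-tuned operator model.

```python
import numpy as np

def flag_moving(flow, foe, dir_thresh_deg=20.0):
    """Mark pixels whose flow direction deviates from the expected radial
    direction (away from the FOE) by more than dir_thresh_deg."""
    h, w, _ = flow.shape
    ys, xs = np.mgrid[0:h, 0:w]
    radial = np.stack([xs - foe[0], ys - foe[1]], axis=-1).astype(float)
    dot = (flow * radial).sum(axis=-1)
    norms = np.linalg.norm(flow, axis=-1) * np.linalg.norm(radial, axis=-1)
    cos_ang = np.where(norms > 0, dot / np.maximum(norms, 1e-12), 1.0)
    return np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0))) > dir_thresh_deg

# Synthetic scene: radial outflow from an FOE at (16, 16), plus one patch that
# moves leftward, inconsistent with the observer's self-motion.
h = w = 32
ys, xs = np.mgrid[0:h, 0:w]
flow = np.stack([xs - 16.0, ys - 16.0], axis=-1) * 0.1
flow[4:8, 4:8] = [-1.0, 0.0]   # independently moving object
mask = flag_moving(flow, foe=(16, 16))
print(mask[5, 5], mask[24, 24])   # object pixel flagged, background pixel not
```

Adding a speed comparison (expected speed grows with distance from the FOE) would also catch objects that move radially but at the wrong speed, closer to the dual direction-and-speed tuning the model uses.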
Affiliation(s)
- Constance S Royden
- Department of Mathematics and Computer Science, College of the Holy Cross, United States.
| | - Michael A Holloway
- Department of Mathematics and Computer Science, College of the Holy Cross, United States
| |
13. Foulkes AJ, Rushton SK, Warren PA. Heading recovery from optic flow: comparing performance of humans and computational models. Front Behav Neurosci 2013;7:53. PMID: 23801946; PMCID: PMC3689323; DOI: 10.3389/fnbeh.2013.00053.
Abstract
Human observers can perceive their direction of heading with a precision of about a degree. Several computational models of the processes underpinning the perception of heading have been proposed. In the present study, we set out to assess which of four candidate models best captured human performance; the four models were selected to reflect key differences in approach and method for modelling optic flow processing to recover movement parameters. We first generated a performance profile for human observers by measuring how performance changed as we systematically manipulated both the quantity (number of dots in the stimulus per frame) and quality (amount of 2D directional noise) of the flow field information. We then generated comparable performance profiles for the four candidate models. Models varied markedly in terms of both their performance and their similarity to human data. To formally assess the match between the models and human performance, we regressed the output of each of the four models against the human performance data. We were able to rule out two models that produced very different performance profiles to human observers. The remaining two shared some similarities with human performance profiles in terms of the magnitude and pattern of thresholds. However, none of the models tested could capture all aspects of the human data.
Affiliation(s)
- Andrew J. Foulkes
- School of Psychological Sciences, The University of ManchesterManchester, UK
| | | | - Paul A. Warren
- School of Psychological Sciences, The University of ManchesterManchester, UK
| |
14. Richert M, Albright TD, Krekelberg B. The complex structure of receptive fields in the middle temporal area. Front Syst Neurosci 2013;7:2. PMID: 23508640; PMCID: PMC3589601; DOI: 10.3389/fnsys.2013.00002.
Abstract
Neurons in the middle temporal area (MT) are often viewed as motion detectors that prefer a single direction of motion in a single region of space. This assumption plays an important role in our understanding of visual processing, and in models of motion processing in particular. We used extracellular recordings in area MT of awake, behaving monkeys (M. mulatta) to test this assumption with a novel reverse correlation approach. Nearly half of the MT neurons in our sample deviated significantly from the classical view. First, in many cells, direction preference changed with the location of the stimulus within the receptive field. Second, the spatial response profile often had multiple peaks with apparent gaps in between. This shows that visual motion analysis in MT has access to motion detectors that are more complex than commonly thought. This complexity could be a mere byproduct of imperfect development, but it can also be understood as the natural consequence of the non-linear, recurrent interactions among laterally connected MT neurons. An important direction for future research is to investigate whether these inhomogeneities are advantageous, how they can be incorporated into models of motion detection, and whether they can provide quantitative insight into the underlying effective connectivity.
Affiliation(s)
- Micah Richert
- The Salk Institute for Biological Studies, La Jolla, CA, USA
15
Layton OW, Browning NA. Recurrent competition explains temporal effects of attention in MSTd. Front Comput Neurosci 2012; 6:80. [PMID: 23060788] [PMCID: PMC3464456] [DOI: 10.3389/fncom.2012.00080]
Abstract
Navigation in a static environment along straight paths without eye movements produces radial optic flow fields. A singularity called the focus of expansion (FoE) specifies the direction of travel (heading) of the observer. Cells in primate dorsal medial superior temporal area (MSTd) respond to radial fields and are therefore thought to be heading-sensitive. Humans frequently shift their focus of attention while navigating, for example, depending on the favorable or threatening context of approaching independently moving objects. Recent neurophysiological studies show that the spatial tuning curves of primate MSTd neurons change based on the difference in visual angle between an attentional prime and the FoE. Moreover, the peak mean population activity in MSTd retreats linearly in time as the distance between the attentional prime and FoE increases. We present a dynamical neural circuit model that demonstrates the same linear temporal peak shift observed electrophysiologically. The model qualitatively matches the neuron tuning curves and population activation profiles. After model MT dynamically pools short-range motion, model MSTd incorporates recurrent competition between units tuned to different radial optic flow templates, and integrates attentional signals from model area frontal eye fields (FEF). In the model, population activity peaks occur when the recurrent competition is most active and uncertainty is greatest about the relative position of the FoE. The nature of attention, multiplicative or non-multiplicative, is largely irrelevant, so long as attention has a Gaussian-like profile. Using an appropriately tuned sigmoidal signal function to modulate recurrent feedback affords qualitative fits of deflections in the population activity that otherwise appear to be low-frequency noise. We predict that these deflections mark changes in the balance of attention between the priming and FoE locations.
Affiliation(s)
- Oliver W Layton
- Center for Computational Neuroscience and Neural Technology, Boston University, Boston, MA, USA
16
16
|
Raudies F, Hasselmo ME. Modeling boundary vector cell firing given optic flow as a cue. PLoS Comput Biol 2012; 8:e1002553. [PMID: 22761557] [PMCID: PMC3386186] [DOI: 10.1371/journal.pcbi.1002553]
Abstract
Boundary vector cells in entorhinal cortex fire when a rat is in locations at a specific distance from walls of an environment. This firing may originate from memory of the barrier location combined with path integration, or it may depend upon the visual input image stream. The modeling work presented here investigates the role of optic flow, the apparent change of patterns of light on the retina, as input for boundary vector cell firing. Analytical spherical flow is used by a template model to segment walls from the ground, to estimate self-motion and the distance and allocentric direction of walls, and to detect drop-offs. Distance estimates of walls in an empty circular or rectangular box have a mean error of less than or equal to two centimeters. Integrating these estimates into a visually driven boundary vector cell model leads to the firing patterns characteristic of boundary vector cells. This suggests that optic flow can influence the firing of boundary vector cells. Over the past few decades a variety of cells in hippocampal structures have been analyzed and their functions have been identified. Head direction cells indicate the world-centered direction of the animal's head, like a compass. Place cells fire in locations associated with visual, auditory, or olfactory cues. Grid cells fill open space like a carpet with their mosaic of firing. Boundary vector cells fire if a boundary that cannot be passed by the animal appears at a certain distance and world-centered direction. All these cells are players in the navigation game; however, their interaction and linkage to sensory systems like vision and memory are not fully understood. Our model analyzes a potential link between the visual system and boundary vector cells. As part of the visual system, we model the optic flow that is available to rats. Optic flow is defined as the change of lightness patterns on the retina and contains information about self-motion and the environment.
This optic flow is used in our model to estimate the distance and direction of boundaries. Our model simulations suggest a link between optic flow and the firing of boundary vector cells.
Affiliation(s)
- Florian Raudies
- Center for Computational Neuroscience and Neural Technology-CompNet, Boston University, Boston, Massachusetts, United States of America.
17
|
Raudies F, Mingolla E, Neumann H. Active gaze control improves optic flow-based segmentation and steering. PLoS One 2012; 7:e38446. [PMID: 22719889] [PMCID: PMC3375264] [DOI: 10.1371/journal.pone.0038446]
Abstract
An observer traversing an environment actively relocates gaze to fixate objects. Evidence suggests that gaze is frequently directed toward the center of an object considered as a target but more likely toward the edges of an object that appears as an obstacle. We suggest that this difference in gaze might be motivated by the specific patterns of optic flow generated by fixating either the center or the edge of an object. To support this suggestion we derive an analytical model that shows two things: tangentially fixating the outer surface of an obstacle leads to strong flow discontinuities that can be used for flow-based segmentation, whereas fixating the center of a target, with gaze and heading locked and no head, body, or eye rotations, gives rise to a symmetric expansion flow with its center at the point being approached, which facilitates steering toward the target. We conclude that gaze control incorporates ecological constraints to improve the robustness of steering and collision avoidance by actively generating flows appropriate to the task.
Affiliation(s)
- Florian Raudies
- Center of Excellence for Learning in Education, Science, and Technology, Boston University, Boston, Massachusetts, United States of America.
18
|
Use of speed cues in the detection of moving objects by moving observers. Vision Res 2012; 59:17-24. [PMID: 22406544] [DOI: 10.1016/j.visres.2012.02.006]
Abstract
When an observer moves through an environment containing stationary and moving objects, he or she must be able to determine which objects are moving relative to the others in order to navigate successfully and avoid collisions. We investigated whether image speed can be used as a cue to detect a moving object in the scene. Our results show that image speed can be used to detect moving objects as long as the object is moving sufficiently faster or slower than it would if it were part of the stationary scene.
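The speed-cue logic of this study can be sketched numerically: under forward translation with a pinhole camera (focal length 1), a stationary point at image position (x, y) and depth Z moves at image speed v·√(x²+y²)/Z, so a point whose measured speed deviates sufficiently from that prediction can be flagged as independently moving. The function names, tolerance, and toy data below are illustrative assumptions, not part of the study.

```python
import numpy as np

def expected_flow_speed(x, y, depth, v=1.0):
    """Image speed of a stationary point during forward translation at
    speed v (pinhole camera, focal length 1): (u, w) = (x, y) * v / Z."""
    return np.hypot(x, y) * v / depth

def flag_moving(x, y, depth, observed_speed, v=1.0, tol=0.25):
    """Flag points whose image speed deviates from the stationary-scene
    prediction by more than a relative tolerance."""
    exp = expected_flow_speed(x, y, depth, v)
    return np.abs(observed_speed - exp) > tol * exp

# three stationary points at depth 5 plus one object moving too fast
x = np.array([0.2, -0.3, 0.5, 0.1])
y = np.array([0.1, 0.4, -0.2, 0.3])
depth = np.full(4, 5.0)
speed = expected_flow_speed(x, y, depth)
speed[3] *= 2.0            # fourth point moves twice as fast as predicted
print(flag_moving(x, y, depth, speed))   # -> [False False False  True]
```

The threshold plays the role of the "sufficiently faster or slower" criterion in the abstract; in practice it would be set by the observer's speed-discrimination limits.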
19
Sikoglu EM, Calabro FJ, Beardsley SA, Vaina LM. Integration mechanisms for heading perception. ACTA ACUST UNITED AC 2010; 23:197-221. [PMID: 20529443] [DOI: 10.1163/187847510x503605]
Abstract
Previous studies of heading perception suggest that human observers employ spatiotemporal pooling to accommodate noise in optic flow stimuli. Here, we investigated how spatial and temporal integration mechanisms are used for judgments of heading through a psychophysical experiment involving three different types of noise. Furthermore, we developed two ideal observer models to study the components of the spatial information used by observers when performing the heading task. In the psychophysical experiment, we applied three types of direction noise to optic flow stimuli to differentiate the involvement of spatial and temporal integration mechanisms. The results indicate that temporal integration mechanisms play a role in heading perception, though their contribution is weaker than that of the spatial integration mechanisms. To elucidate how observers process spatial information to extract heading from a noisy optic flow field, we compared psychophysical performance in response to random-walk direction noise with that of two ideal observer models (IOMs). One model relied on 2D screen-projected flow information (2D-IOM), while the other used environmental, i.e., 3D, flow information (3D-IOM). The results suggest that human observers compensate for the loss of information during the 2D retinal projection of the visual scene for modest amounts of noise. This suggests the likelihood of a 3D reconstruction during heading perception, which breaks down under extreme levels of noise.
Affiliation(s)
- Elif M Sikoglu
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA 02215, USA
20
Royden CS, Connors EM. The detection of moving objects by moving observers. Vision Res 2010; 50:1014-24. [DOI: 10.1016/j.visres.2010.03.008]
21
Snyder JJ, Bischof WF. Knowing where we're heading--when nothing moves. Brain Res 2010; 1323:127-38. [PMID: 20132801] [DOI: 10.1016/j.brainres.2010.01.061]
Abstract
Past research indicates that observers rely strongly on flow-based and object-based motion information for determining egomotion or direction of heading. More recently, it has been shown that they also rely on displacement information that does not induce motion perception. As yet, little is known regarding the specific displacement cues that are used for heading estimation. In Experiment 1a, we show that the accuracy of heading estimates increases, as more displacement cues are available. In Experiments 1b and 2, we show that observers rely mostly on the displacement of objects and geometric cues for estimating heading. In Experiment 3, we show that the accuracy of detecting changes in heading when displacement cues are used is low. The results are interpreted in terms of two systems that may be available for estimating heading, one relying on movement information and providing navigational mechanisms, the other relying on displacement information and providing navigational planning and orienting mechanisms.
Affiliation(s)
- Janice J Snyder
- Psychology Department, University of British Columbia Okanagan, 3333 University Way, Kelowna, BC, Canada V1V 1V7.
22
Warren PA, Rushton SK. Perception of scene-relative object movement: Optic flow parsing and the contribution of monocular depth cues. Vision Res 2009; 49:1406-19. [PMID: 19480063] [DOI: 10.1016/j.visres.2009.01.016]
Abstract
We have recently suggested that the brain uses its sensitivity to optic flow in order to parse retinal motion into components arising due to self and object movement (e.g. Rushton, S. K., & Warren, P. A. (2005). Moving observers, 3D relative motion and the detection of object movement. Current Biology, 15, R542-R543). Here, we explore whether stereo disparity is necessary for flow parsing or whether other sources of depth information, which could theoretically constrain flow-field interpretation, are sufficient. Stationary observers viewed large field of view stimuli containing textured cubes, moving in a manner that was consistent with a complex observer movement through a stationary scene. Observers made speeded responses to report the perceived direction of movement of a probe object presented at different depths in the scene. Across conditions we varied the presence or absence of different binocular and monocular cues to depth order. In line with previous studies, results consistent with flow parsing (in terms of both perceived direction and response time) were found in the condition in which motion parallax and stereoscopic disparity were present. Observers were poorer at judging object movement when depth order was specified by parallax alone. However, as more monocular depth cues were added to the stimulus the results approached those found when the scene contained stereoscopic cues. We conclude that both monocular and binocular static depth information contribute to flow parsing. These findings are discussed in the context of potential architectures for a model of the flow parsing mechanism.
Affiliation(s)
- Paul A Warren
- School of Psychology and Communications Research Centre, Cardiff University, Cardiff, CF10 3AT Wales, UK.
23
Browning NA, Grossberg S, Mingolla E. A neural model of how the brain computes heading from optic flow in realistic scenes. Cogn Psychol 2009; 59:320-56. [PMID: 19716125] [DOI: 10.1016/j.cogpsych.2009.07.002]
Abstract
Visually-based navigation is a key competence during spatial cognition. Animals avoid obstacles and approach goals in novel cluttered environments using optic flow to compute heading with respect to the environment. Most navigation models try either to explain data or to demonstrate navigational competence in real-world environments without regard to behavioral and neural substrates. The current article develops a model that does both. The ViSTARS neural model describes interactions among neurons in the primate magnocellular pathway, including V1, MT(+), and MSTd. Model outputs are quantitatively similar to human heading data in response to complex natural scenes. The model estimates heading to within 1.5 degrees in random dot or photo-realistically rendered scenes, and within 3 degrees in video streams from driving in real-world environments. Simulated rotations of less than 1 degree/s do not affect heading estimates, but faster simulated rotation rates do, as in humans. The model is part of a larger navigational system that identifies and tracks objects while navigating in cluttered environments.
Affiliation(s)
- N Andrew Browning
- Department of Cognitive and Neural Systems, Center for Adaptive Systems, Boston University, 677 Beacon Street, Boston, MA 02215, USA
24
Affiliation(s)
- Kenneth H. Britten
- Center for Neuroscience and Department of Neurobiology, Physiology, and Behavior, University of California, Davis, California 95616
25
Evidence for flow-parsing in radial flow displays. Vision Res 2008; 48:655-63. [PMID: 18243274] [DOI: 10.1016/j.visres.2007.10.023]
Abstract
Retinal motion of objects is not in itself enough to signal whether or how objects are moving in the world; the same pattern of retinal motion can result from movement of the object, the observer or both. Estimation of scene-relative movement of an object is vital for successful completion of many simple everyday tasks. Recent research has provided evidence for a neural flow-parsing mechanism which uses the brain's sensitivity to optic flow to separate retinal motion signals into those components due to observer movement and those due to the movement of objects in the scene. In this study we provide further evidence that flow-parsing is implicated in the assessment of object trajectory during observer movement. Furthermore, it is shown that flow-parsing involves a global analysis of retinal motion, as might be expected if optic flow processing underpinned this mechanism.
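The flow-parsing idea can be illustrated with a minimal vector-subtraction sketch: predict the retinal flow due to observer translation (here, forward motion with a pinhole camera of focal length 1 and depth assumed known) and subtract it from measured retinal motion, leaving the component attributable to object movement. This is only a toy version of the hypothesized mechanism; function names and scene values are assumptions.

```python
import numpy as np

def parse_flow(retinal_flow, positions, depths, v=1.0):
    """Subtract the flow predicted by forward observer translation (speed v,
    pinhole camera, focal length 1) from retinal motion; the residual is
    attributed to object movement in the scene."""
    x, y = positions[:, 0], positions[:, 1]
    self_flow = np.stack([x, y], axis=1) * (v / depths)[:, None]
    return retinal_flow - self_flow

# a stationary scene plus one object drifting leftward in the image
pos = np.array([[0.2, 0.0], [-0.1, 0.3], [0.4, -0.2]])
depth = np.array([4.0, 4.0, 4.0])
scene_flow = pos * (1.0 / depth)[:, None]      # flow of a rigid scene
scene_flow[2] += np.array([-0.05, 0.0])        # extra object component
residual = parse_flow(scene_flow, pos, depth)
print(residual)   # stationary rows are ~0; the object's row shows its motion
```

In the real mechanism depth is not given but constrained by disparity and monocular cues, which is exactly the question this study probes.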
26
A model for simultaneous computation of heading and depth in the presence of rotations. Vision Res 2007; 47:3025-40. [DOI: 10.1016/j.visres.2007.08.008]
27
Royden CS, Cahill JM, Conti DM. Factors affecting curved versus straight path heading perception. ACTA ACUST UNITED AC 2006; 68:184-93. [PMID: 16773892] [DOI: 10.3758/bf03193668]
Abstract
Displays commonly used for testing heading judgments in the presence of rotations are ambiguous to observers. They can be interpreted equally well as motion in a straight line while rotating the eyes or as motion on a curved path. This has led to conflicting results from studies that use these displays. In this study, we tested several factors that might influence which of these two interpretations observers see. These factors included the size of the field of view, the duration of the stimulus, textured scenes versus random-dot displays, and whether or not observers were given a description of their path. The only factor that had a significant effect on path perception was whether or not observers were given instructions describing their path of motion. Under all conditions without instructions, we found that observers responded in a way that was consistent with the perception of motion on a curved path.
Affiliation(s)
- Constance S Royden
- Department of Mathematics and Computer Sciences, College of the Holy Cross, Worcester, MA 01610, USA.
28
Bex PJ, Falkenberg HK. Resolution of complex motion detectors in the central and peripheral visual field. J Opt Soc Am A Opt Image Sci Vis 2006; 23:1598-607. [PMID: 16783422] [DOI: 10.1364/josaa.23.001598]
Abstract
We examine how local direction signals are combined to compute the focus of radial motion (FRM) in random dot patterns and examine how this process changes across the visual field. Equivalent noise analysis showed that a loss in FRM accuracy was largely attributable to an increase in local motion detector noise with little or no change in efficiency across the visual field. The minimum separation for discriminating the foci of two overlapping optic flow patterns increased in the periphery faster than predicted from the resolution for a single FRM. This behavior requires that observers average numerous local velocities to estimate the FRM, which enables resistance to internal and external noise and endows the system with the property of position invariance. However, such pooling limits the precision with which multiple looming objects can be discriminated, especially in the peripheral visual field.
Affiliation(s)
- Peter J Bex
- Institute of Ophthalmology, University College London, London EC1V 9EL, UK.
29
Barraza JF, Grzywacz NM. Parametric decomposition of optic flow by humans. Vision Res 2005; 45:2481-91. [PMID: 15963549] [DOI: 10.1016/j.visres.2005.04.011]
Abstract
Ego motion and natural motions in the world generate complex optic flows in the retina. These optic flows, if produced by rigid surface patches, can be decomposed into four components, including rotation and expansion. We showed previously that humans can precisely estimate parameters of these components, such as the angular velocity of a rotational motion and the rate of expansion of a radial motion. However, natural optic flows mostly display motions containing a combination of more than one of these components. Here, we report that when a pure motion (e.g., rotation) is combined with its orthogonal component (e.g., expansion), no bias is found in the estimate of the component parameters. This suggests that the visual system can decompose complex motions. However, this decomposition is such that the presence of the orthogonal component increases the discrimination threshold for the original component. We propose a model for how the brain decomposes the optic flow into its elementary components. The model accounts for how errors in the estimate of local-velocity vectors affect the decomposition, producing the increase of discrimination thresholds.
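The decomposition described here can be sketched with a least-squares affine fit to a flow field: the divergence and curl of the fitted field give the rate of expansion and the angular velocity, two of the elementary components. The code below is a generic illustration of this kind of decomposition, not the authors' model; all names and values are assumptions.

```python
import numpy as np

def decompose_flow(x, y, u, w):
    """Least-squares affine fit of a flow field (u, w) sampled at (x, y);
    returns the expansion (half the divergence) and rotation (half the
    curl) of the fitted field."""
    A = np.stack([np.ones_like(x), x, y], axis=1)
    cu, _, _, _ = np.linalg.lstsq(A, u, rcond=None)  # u ~ cu0 + cu1*x + cu2*y
    cw, _, _, _ = np.linalg.lstsq(A, w, rcond=None)
    expansion = 0.5 * (cu[1] + cw[2])
    rotation = 0.5 * (cw[1] - cu[2])
    return expansion, rotation

# synthetic flow combining expansion (rate d) with rotation (omega)
rng = np.random.default_rng(0)
x, y = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
d, omega = 0.3, 0.8
u = d * x - omega * y
w = omega * x + d * y
print(decompose_flow(x, y, u, w))    # recovers approximately (0.3, 0.8)
```

Because the components are orthogonal in this fit, the recovered expansion is unbiased by the added rotation, mirroring the unbiased estimates the abstract reports.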
Affiliation(s)
- José F Barraza
- Departamento de Luminotecnia, Luz y Visión, Universidad Nacional de Tucumán, Consejo Nacional de Investigaciones Científicas y Técnicas, Tucumán, Argentina.
30
Wurfel JD, Barraza JF, Grzywacz NM. Measurement of rate of expansion in the perception of radial motion. Vision Res 2005; 45:2740-51. [PMID: 16023697] [DOI: 10.1016/j.visres.2005.03.022]
Abstract
Optic flow generated by rigid surface patches can be decomposed into a small number of elementary motion types. In these experiments, we show that the human visual system can evaluate expansion, one of these motion types, metrically. Moreover, we show that the discrimination of rates of expansion is spatially local. Because the estimation of the focus of expansion is somewhat imprecise, this locality sometimes produces predictable errors in the estimation of the rate of expansion. One can make such predictions with a model adapted from one previously developed for angular-velocity discrimination.
Affiliation(s)
- Jeff D Wurfel
- Neuroscience Graduate Program, University of Southern California, Hedco Neuroscience Building, MC 2520, Los Angeles, CA 90089-2520, USA.
31
Bex PJ, Dakin SC. Spatial interference among moving targets. Vision Res 2005; 45:1385-98. [PMID: 15743609] [DOI: 10.1016/j.visres.2004.12.001]
Abstract
Peripheral vision for static form is limited both by reduced spatial acuity and by interference among adjacent features ('crowding'). However, the visibility of acuity-corrected image motion is relatively constant across the visual field. We measured whether spatial interference among nearby moving elements is similarly invariant of retinal eccentricity and assessed if motion integration could account for any observed sensitivity loss. We report that sensitivity to the direction of motion of a central target (highly visible in isolation) was strongly impaired by four drifting flanking elements. The extent of spatial interference increased with eccentricity. Random-direction flanks and flanks whose directions formed global patterns of rotation or expansion were more disruptive than flanks forming global patterns of translation, regardless of the relative direction of the target element. Spatial interference was low-pass tuned for spatial frequency and broadly tuned for temporal frequency. We show that these results challenge the generality of models of spatial interference that are based on retinal image quality, masking, confusions between target and flanks, attentional resolution limits or (simple) "averaging" of element parameters. Instead, the results suggest that spatial interference is a consequence of the integration of meaningful image structure within large receptive fields. The underlying connectivity of this integration favours low spatial frequency structure but is broadly tuned for speed.
Affiliation(s)
- Peter J Bex
- Institute of Ophthalmology, University College London, 11-43 Bath Street, London EC1V 9EL, UK.
32
Hanada M. Computational analyses for illusory transformations in the optic flow field and heading perception in the presence of moving objects. Vision Res 2005; 45:749-58. [PMID: 15639501] [DOI: 10.1016/j.visres.2004.09.037]
Abstract
When a radial flow field (the target flow) is overlapped with a lateral flow field or another radial flow field, the focus of expansion (FOE) of the target radial flow appears shifted. Royden and Conti [(2003). A model using MT-like motion-opponent operators explains an illusory transformation in the optic flow field. Vision Research, 43, 2811-2826] argued that local motion subtraction is crucial to explaining this phenomenon. Here, the flow field that causes the illusory displacement of the FOE was analyzed computationally. It was shown that this flow field is approximately a rigid-motion flow: it can be generated by simulating an observer moving toward a stationary scene, and the observer's heading direction corresponds to the perceived position of the FOE of the radial flow pattern. This implies that any algorithm that assumes rigidity of the scene and recovers veridical heading explains the bias in perceived FOE; local motion subtraction is not needed to explain the phenomenon. Furthermore, the flow produced by an observer's translation in the presence of objects moving laterally or in depth was analyzed computationally. Algorithms that minimize standard error functions while giving less weight to independently moving objects show biases in recovered heading similar to those of human observers. This implies that local motion subtraction is not necessary to explain the bias in perceived heading caused by an object moving laterally or in depth, contrary to the argument of Royden [(2002). Computing heading in the presence of moving objects: a model that uses motion-opponent operators. Vision Research, 42, 3043-3058].
Affiliation(s)
- Mitsuhiko Hanada
- Department of Media Architecture, Future University-Hakodate, 116-2 Kamedanakano-cho, Hakodate, Hokkaido 041-8655, Japan.
33
Hanada M. An algorithmic model of heading perception. Biol Cybern 2005; 92:8-20. [PMID: 15592681] [DOI: 10.1007/s00422-004-0529-8]
Abstract
On the basis of Hanada and Ejima's (2000) model, an algorithmic model is presented to explain psychophysical data of van den Berg and Beintema (2000) that are inconsistent with vector-subtractive compensation for the rotational flow. The earlier model was modified so as not to use vector-subtractive compensation for the rotational flow. The proposed model first computes the center of flow, then estimates self-rotation, and finally recovers heading from the center of flow and the estimate of self-rotation. The model explains the data of van den Berg and Beintema (2000). A fusion model that combines rotation estimates from different sources (efferent signals, proprioceptive feedback, vestibular signals about eye and head rotation, and visual motion) is also presented.
Affiliation(s)
- Mitsuhiko Hanada
- Department of Cognitive and Information Sciences, Faculty of Letters, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba, 263-8522, Japan.
34
Abstract
Normal observers judge heading well both when moving in a straight line and when moving along a curved path. Judgments of curved path motion require depth variations in the scene while judgments of straight line heading (pure translation) do not. Here we show that a stroke patient who is impaired in low level 2D motion discrimination tasks and cannot accurately judge 3D structure from motion can accurately judge heading for straight line self-motion. This patient is impaired in judgments of curved path self-motion. This suggests that accurate heading judgments for observer translation do not require accurate 2D motion perception or 3D reconstruction of the scene. Judgments of curved path motion appear more dependent on accurate 2D motion perception.
Affiliation(s)
- Constance S Royden
- Department of Mathematics and Computer Science, College of the Holy Cross, MA, USA
35
Chapter 3: Building blocks for time-to-contact estimation by the brain. ACTA ACUST UNITED AC 2004. [DOI: 10.1016/s0166-4115(04)80005-0]
36
Royden CS, Conti DM. A model using MT-like motion-opponent operators explains an illusory transformation in the optic flow field. Vision Res 2003; 43:2811-26. [PMID: 14568097] [DOI: 10.1016/s0042-6989(03)00481-4]
Abstract
Previous studies have shown that a physiologically based model using motion-opponent operators to compute heading performs accurately for simulated observer translations. Here we show how this model can explain an illusory shift in the perceived focus of expansion of a radial flow field that occurs when a field of laterally moving dots is superimposed on a field of radially moving dots. Furthermore, we can use the model to predict the perceptual shift of the focus of expansion for novel visual stimuli. These results support the hypothesis that this illusion results from motion subtraction during the processing of optic flow fields.
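The geometry behind the illusory focus shift can be reproduced with a simple least-squares FOE estimator. This is not the motion-opponent model itself, just an illustration: when a uniform lateral flow is vector-added to every element of a radial field (a simplified, non-transparent stand-in for the superimposed dot fields), the recovered FOE is displaced by exactly the lateral component. All names and values below are assumptions.

```python
import numpy as np

def estimate_foe(points, flow):
    """Least-squares focus of expansion: the point minimizing perpendicular
    distance to the line through each dot along its flow vector."""
    d = flow / np.linalg.norm(flow, axis=1, keepdims=True)
    n = np.stack([-d[:, 1], d[:, 0]], axis=1)       # unit normals to flow
    M = np.einsum('ni,nj->ij', n, n)                # normal-equation matrix
    b = np.einsum('ni,nj,nj->i', n, n, points)
    return np.linalg.solve(M, b)

rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, (300, 2))
radial = pts.copy()                  # expansion about the origin
lateral = np.array([0.4, 0.0])       # uniform lateral motion component
print(estimate_foe(pts, radial))             # near the true focus (0, 0)
print(estimate_foe(pts, radial + lateral))   # displaced by the lateral flow
```

Whether the displacement matches the perceived shift is exactly what distinguishes competing accounts (motion subtraction vs. rigid-scene heading recovery) in this literature.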
Affiliation(s)
- Constance S Royden
- Department of Mathematics and Computer Science, College of the Holy Cross, P.O. Box 116A, Worcester, MA 01610, USA.
37
Ben Hamed S, Page W, Duffy C, Pouget A. MSTd neuronal basis functions for the population encoding of heading direction. J Neurophysiol 2003; 90:549-58. [PMID: 12750416 DOI: 10.1152/jn.00639.2002]
Abstract
Basis functions have been extensively used in models of neural computation because they can be combined linearly to approximate any nonlinear functions of the encoded variables. We investigated whether dorsal medial superior temporal (MSTd) area neurons use basis functions to simultaneously encode heading direction, eye position, and the velocity of ocular pursuit. Using optimal linear estimators, we first show that the head-centered and eye-centered position of a focus of expansion (FOE) in optic flow, pursuit direction, and eye position can all be estimated from the single-trial responses of 144 MSTd neurons with an average accuracy of 2-3 degrees, a value consistent with the discrimination thresholds measured in humans and monkeys. We then examined the format of the neural code for the head-centered position of the FOE, eye position, and pursuit direction. The basis function hypothesis predicts that a large majority of cells in MSTd should encode two or more signals simultaneously and combine these signals nonlinearly. Our analysis shows that 95% of the neurons encode two or more signals, whereas 76% code all three signals. Of the 95% of cells encoding two or more signals, 90% show nonlinear interactions between the encoded variables. These findings support the notion that MSTd may use basis functions to represent the FOE in optic flow, eye position, and pursuit.
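An optimal linear estimator of the kind used here can be sketched with a toy population (the tuning widths, noise level, and ridge penalty below are illustrative assumptions, not values from the recorded data): regress the cosine and sine of heading on the population responses, then read the angle back with arctan2 so the decode respects circularity.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_trials = 144, 2000
prefs = rng.uniform(0, 2 * np.pi, n_cells)            # preferred headings (azimuth)
headings = rng.uniform(0, 2 * np.pi, n_trials)

# von Mises-like tuning plus additive noise (assumed parameters)
R = np.exp(2.0 * (np.cos(headings[:, None] - prefs[None, :]) - 1.0))
R += 0.1 * rng.standard_normal(R.shape)

# optimal linear estimator: ridge-regress (cos h, sin h) on the responses
X = np.hstack([R, np.ones((n_trials, 1))])            # responses plus a bias column
Y = np.stack([np.cos(headings), np.sin(headings)], axis=1)
W = np.linalg.solve(X.T @ X + 1e-3 * np.eye(X.shape[1]), X.T @ Y)

pred = X @ W
decoded = np.arctan2(pred[:, 1], pred[:, 0])
err_deg = np.degrees(np.abs(np.angle(np.exp(1j * (decoded - headings)))))
print(f"mean absolute decoding error: {err_deg.mean():.2f} deg")
```

With a population of this size and modest noise, the linear readout lands in the few-degree range the abstract reports, though the exact figure depends on the assumed tuning and noise.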
Affiliation(s)
- S Ben Hamed
- Department of Brain and Cognitive Science and the Center for Visual Science, University of Rochester, NY 14627, USA
38
Abstract
We propose a two-layer neuromorphic architecture by which motion field patterns, generated during locomotion, are processed by template detectors specialized for gaze-directed self-motion (expansion and rotation). The templates provide a gaze-centered computation that analyzes the motion field in terms of how it relates to the fixation point (i.e., the fovea). The analysis is performed by relating the vectorial components of the act of motion to variations (i.e., asymmetries) in the local structure of the motion field. Notwithstanding their limited spatial extent, such centric-minded templates extract, as a whole, global information from the input flow field, being sensitive to different local instances of the same global property of the vector field with respect to the fixation point; a quantitative analysis in terms of vectorial operators evidences this property as tuning curves for heading direction. Model performance, evaluated in several situations characterized by the absence or presence of pursuit eye movements, validates the approach. We observe that the gaze-centered model provides an explicit, testable hypothesis that can guide further explorations of visual motion processing in extrastriate cortical areas.
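The template idea can be sketched as an inner product between the observed flow and a bank of expansion templates centred at candidate headings (the grid of candidate centres and the unit-speed flow below are illustrative assumptions, not the paper's detector design); the responses form a tuning curve over heading whose peak sits at the true focus.

```python
import numpy as np

rng = np.random.default_rng(2)
pts = rng.uniform(-1, 1, size=(300, 2))               # sample points on the image

def expansion_template(center):
    """Unit-speed radial field expanding away from `center`."""
    d = pts - center
    return d / (np.linalg.norm(d, axis=1, keepdims=True) + 1e-9)

true_foe = np.array([0.3, 0.0])
flow = expansion_template(true_foe)                   # observed gaze-directed expansion

# template response = inner product of the flow with each candidate template
candidates = np.linspace(-0.8, 0.8, 17)
tuning = [np.sum(flow * expansion_template(np.array([c, 0.0]))) for c in candidates]
best = candidates[int(np.argmax(tuning))]
print(f"peak of heading tuning curve at x = {best:.2f}")
```

Because every term of the inner product is the cosine of a local angle difference, the response is maximal when the template centre coincides with the true focus, which is what makes the detector bank read out as a heading tuning curve.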
Affiliation(s)
- Paolo Cavalleri
- Department of Biophysical and Electronic Engineering, University of Genoa, Via all'Opera Pia 11/A, 16145, Genova, Italy
39
Royden CS. Computing heading in the presence of moving objects: a model that uses motion-opponent operators. Vision Res 2002; 42:3043-58. [PMID: 12480074 DOI: 10.1016/s0042-6989(02)00394-2]
Abstract
Psychophysical experiments have shown that human heading judgments can be biased by the presence of moving objects. Here we present a theoretical argument that motion differences can account for the direction of bias seen in humans. We further examine the responses of a computer simulation of a model for computing heading that uses motion-opponent operators similar to cells in the primate middle temporal visual area. When moving objects are present, this model shows similar biases to those seen with humans, suggesting that such a model may underlie human heading computations.
Affiliation(s)
- Constance S Royden
- Department of Mathematics and Computer Science, College of the Holy Cross, P.O. Box 116A, Worcester, MA 01610, USA
40
van den Berg AV, Beintema JA, Frens MA. Heading and path percepts from visual flow and eye pursuit signals. Vision Res 2002; 41:3467-86. [PMID: 11718788 DOI: 10.1016/s0042-6989(01)00023-2]
Abstract
The percept of self-motion through the environment is supported by visual motion signals and eye movement signals. We studied the interaction between these signals by decoupling the eye movement from the pattern of retinal motion during brief simulated ego-movement on straight or circular trajectories. A new response method enabled subjects to report perceived destination and perceived curvature of their future path simultaneously. Various combinations of simulated gaze rotation in the retinal flow and eye pursuit were investigated; simulated gaze rotation ranged from consistent with and larger than eye pursuit to opponent to and larger than eye pursuit. We found that the perceived destination shifts nonlinearly with the mismatch between simulated gaze rotation and eye pursuit. The nonlinearity is also revealed in the perceived tangent heading direction and perceived path curvature, although to different extents in different subjects. For the same retinal flow, eye pursuit that is consistent with the simulated gaze rotation reduces heading error and the perceived path straightens out. In contrast, perceived path and/or heading do not become more curved or more biased in the direction opposite to pursuit when the eye-in-head rotation is opposite to the simulated gaze rotation. These observations point to modulation of the effect of the extra-retinal pursuit signal by the visual evidence for eye rotation. In a second experiment, we presented to a stationary eye the sum of a component of simulated gaze rotation and radial flow. We found that the bi-circular flow component, which characterizes the change in the pattern of flow directions caused by the gaze rotation, induces a shift of perceived heading without appreciable perceived path curvature. Conversely, the complementary component of simulated gaze rotation (bi-radial flow) evokes a percept of motion on a curved path with a small tangent heading error. We suggest that bi-circular and bi-radial flow components contribute primarily to percepts of heading and path curvature, respectively.
Affiliation(s)
- A V van den Berg
- Department of Physiology, Helmholtz School for Autonomous Systems Research, Faculty of Medicine, Erasmus University Rotterdam, PO Box 1738, 3000 DR, Rotterdam, The Netherlands.
41
Necessity of spatial pooling for the perception of heading in nonrigid environments. J Exp Psychol Hum Percept Perform 2002. [DOI: 10.1037/0096-1523.28.5.1192]
42
Li L, Warren WH. Perception of heading during rotation: sufficiency of dense motion parallax and reference objects. Vision Res 2001; 40:3873-94. [PMID: 11090678 DOI: 10.1016/s0042-6989(00)00196-6]
Abstract
How do observers perceive the path of self-motion during rotation? Previous research suggests that extra-retinal information about eye movements is necessary at high rotation rates (2-5 degrees/s), but those experiments used sparse random-dot displays. With dense texture-mapped displays, we find the path can be perceived from retinal flow alone at high simulated rotation rates if (a) dense motion parallax and (b) at least one reference object are available. We propose that the visual system determines instantaneous heading from the first-order motion parallax field, and recovers the path of self-motion by updating heading over time with respect to reference objects in the scene.
Affiliation(s)
- L Li
- Department of Cognitive and Linguistic Sciences, Brown University, Box 1978, 02912, Providence, RI, USA
43
Abstract
We examined human heading judgement from second-order motion, which was generated by random dots whose contrast polarity was determined randomly on each frame. We found that human observers can judge heading fairly accurately from second-order motion when pure translation is simulated, or when self-motion toward a ground plane with gaze rotation is simulated, but not when self-motion toward cloud-like random dots with gaze rotation is simulated. This suggests that the human visual system cannot decompose the flow field into rotational and translational components using second-order motion information alone, but it can do so in some way for the flow field of a ground plane.
Affiliation(s)
- M Hanada
- Graduate School of Human and Environmental Studies, Kyoto University, Yoshida-nihonmatsu-cho, Sakyo-ku, 606-8501, Kyoto, Japan.
44
Abstract
We investigated the effects of the roll (rotation around the line of sight) and pitch (rotation around the horizontal axis) components of retinal flow on heading judgement from visual motion information. We found that observers' performance for yaw (rotation around the vertical axis) plus pitch differs little from that for yaw alone, although perceived heading is biased toward the fixation point, and that heading judgement is fairly robust with respect to roll. We also found that some observers can perceive heading under combined pitch, yaw, and roll at a roll rate of 11.5 degrees/s without extra-retinal information. This suggests that compensation mechanisms for roll exist in the human visual system.
Affiliation(s)
- M Hanada
- Graduate School of Human and Environmental Studies, Kyoto University, Yoshida-nihonmatsu-cho, Sakyo-ku, 606-8501, Kyoto, Japan.
45
Abstract
Observer translation through the environment can be accompanied by rotation of the eye about any axis. For rotation about the vertical axis (horizontal rotation) during translation in the horizontal plane, it is known that the absence of depth in the scene and of an extraretinal signal leads to a systematic error in the observer's perceived direction of heading. This heading error is related in magnitude and direction to the shift of the centre of retinal flow (CF) that occurs because of the rotation. Rotation about any axis that deviates from the heading direction results in a CF shift. So far, however, the effect of rotation about the line of sight (torsion) on perceived heading has not been investigated. We simulated observer translation towards a wall or cloud while simultaneously simulating eye rotation about the vertical axis, the torsional axis, or combinations thereof. We find only small systematic effects of torsion on the set of 2D perceived headings, regardless of the simulated horizontal rotation. In proportion to the CF shift, the systematic errors are significantly smaller for pure torsion than for pure horizontal rotation. In contrast to errors caused by horizontal rotation, the torsional errors are hardly reduced by the addition of depth to the scene. We suggest this difference in behaviour reflects the difference in symmetry of the field of view relative to the axis of rotation: the higher symmetry in the case of torsion may allow a more accurate estimation of the rotational flow. Moreover, we report a new phenomenon: simulated horizontal rotation during simulated wall approach increases the heading dependency of the errors, causing a larger compression of perceived heading in the horizontal direction than in the vertical direction.
Affiliation(s)
- J A Beintema
- Medical Faculty, Erasmus Universiteit Rotterdam, The Netherlands.
46
Abstract
We developed a new computational model of human heading judgement from retinal flow. The model rests on two assumptions: a large number of sampling points in the flow field, and a sampling region symmetric around the origin. The algorithm estimates self-rotation parameters by calculating statistics whose expectations correspond to the rotation parameters. After the rotational components are removed from the retinal flow, the heading direction is recovered from the remaining flow field. The model's performance was compared with human data in three psychophysical experiments. In the first experiment, we generated stimuli simulating self-motion toward the ground, a cloud, or a frontoparallel plane and found that the model's simulation results were consistent with human performance. In the second and third experiments, we measured the slope of the perceived-versus-simulated heading function when a perturbation velocity, weighted according to distance relative to the fixation distance, was added to the vertical velocity component under the cloud condition. We found that as the magnitude of the perturbation increased, the slope of the function increased. The characteristics observed in the experiments are well explained by the proposed model.
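The remove-rotation-then-read-heading pipeline that this model shares with many flow algorithms can be sketched under simplifying assumptions (pinhole projection with unit focal length, yaw-only rotation, and a handful of effectively infinitely distant dots whose flow is purely rotational; this is an illustration, not the paper's statistical estimator):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
x, y = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
inv_depth = rng.uniform(0.2, 1.0, n)
inv_depth[:50] = 0.0                       # distant dots: their flow is rotation only
T = np.array([0.2, 0.0, 1.0])              # translation; FOE at (Tx/Tz, Ty/Tz) = (0.2, 0)
omega = 0.05                               # yaw rate (rad/s)

# image flow (Longuet-Higgins & Prazdny equations, focal length 1, yaw only)
bu, bv = -(1 + x**2), -(x * y)             # yaw-rotation flow basis
u = inv_depth * (x * T[2] - T[0]) + omega * bu
v = inv_depth * (y * T[2] - T[1]) + omega * bv

# 1) estimate the rotation rate from the distant dots (depth-independent flow)
far = inv_depth == 0
omega_hat = ((u[far] * bu[far] + v[far] * bv[far]).sum()
             / (bu[far] ** 2 + bv[far] ** 2).sum())

# 2) subtract the rotational component; the residual field is radial about the FOE
ut, vt = u - omega_hat * bu, v - omega_hat * bv
speed = np.hypot(ut, vt)
keep = speed > 0.05                        # drop unstable near-zero vectors
d = np.stack([ut[keep], vt[keep]], axis=1) / speed[keep, None]
nvec = np.stack([-d[:, 1], d[:, 0]], axis=1)
p = np.stack([x[keep], y[keep]], axis=1)
A = np.einsum('ki,kj->ij', nvec, nvec)     # least-squares intersection of flow lines
b = np.einsum('ki,kj,kj->i', nvec, nvec, p)
foe = np.linalg.solve(A, b)
print(omega_hat, foe)
```

With noiseless flow the rotation estimate is exact and the residual field converges on the true heading; the paper's contribution is making step 1 work statistically without singling out distant dots.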
Affiliation(s)
- M Hanada
- Graduate School of Human and Environmental Studies, Kyoto University, Japan.
47
Lappe M. Computational mechanisms for optic flow analysis in primate cortex. Int Rev Neurobiol 2000; 44:235-68. [PMID: 10605649 DOI: 10.1016/s0074-7742(08)60745-x]
Affiliation(s)
- M Lappe
- Department of Zoology and Neurobiology, Ruhr University Bochum, Germany
48
Abstract
Humans perceive heading accurately when they rotate their eyes. This is remarkable, because (1) the pursuit eye movement makes the retinal flow more complicated; and (2) the eye rotation causes a continuous change of the heading direction on the retina. The first problem prevents a simple association of the centre of flow on the retina with the heading direction. To solve it, the brain needs to take into account the flow associated with the eye's rotation. But even if this is done correctly, the resulting estimate of the heading is retino-centric and changes over time. Thus, the processing time needed to retrieve the heading from the flow field will cause a lag with respect to the actual heading direction. We investigated the latency of heading perception by presenting stepwise changes of the centre of the expanding flow to stationary and moving eyes. This mimics the movement of the heading direction across the retina but avoids the complicating effects of rotational flow. For a stationary eye, we found a bias in perceived heading that corresponds to a latency of 300 ms or more. Yet errors in heading perception are normally marginal, because we found an opposite bias for the moving eye, which counters the errors due to latency and a changing retino-centric heading direction. This suggests that the current heading direction is predicted from the extra-retinal signal and the delayed visual signals.
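The arithmetic behind this account is simple and worth making explicit (the 10 deg/s pursuit rate below is an assumed value for illustration; only the ~300 ms latency comes from the abstract): a delayed visual estimate lags the true retino-centric heading by rate × latency, and an extra-retinal prediction of the rotation accumulated over that latency cancels the lag.

```python
import numpy as np

rate = 10.0                                 # deg/s pursuit rate (assumed value)
tau = 0.3                                   # visual processing latency (s), per the abstract
t = np.arange(0.0, 2.0, 0.01)               # time (s)

true_heading = rate * t                     # retino-centric heading drifts during pursuit
delayed = rate * np.maximum(t - tau, 0.0)   # what delayed vision alone would report
bias = true_heading - delayed               # settles at rate * tau = 3 deg

predicted = delayed + rate * tau            # extra-retinal prediction bridges the lag
print(f"steady-state visual lag bias: {bias[-1]:.1f} deg")
```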
49
Abstract
Accurate and efficient control of self-motion is an important requirement for our daily behavior. Visual feedback about self-motion is provided by optic flow. Optic flow can be used to estimate the direction of self-motion ('heading') rapidly and efficiently. Analysis of oculomotor behavior reveals that eye movements usually accompany self-motion. Such eye movements introduce additional retinal image motion so that the flow pattern on the retina usually consists of a combination of self-movement and eye movement components. The question of whether this 'retinal flow' alone allows the brain to estimate heading, or whether an additional 'extraretinal' eye movement signal is needed, has been controversial. This article reviews recent studies that suggest that heading can be estimated visually but extraretinal signals are used to disambiguate problematic situations. The dorsal stream of primate cortex contains motion processing areas that are selective for optic flow and self-motion. Models that link the properties of neurons in these areas to the properties of heading perception suggest possible underlying mechanisms of the visual perception of self-motion.
50
Rushton SK, Harris JM, Lloyd MR, Wann JP. Guidance of locomotion on foot uses perceived target location rather than optic flow. Curr Biol 1998; 8:1191-4. [PMID: 9799736 DOI: 10.1016/s0960-9822(07)00492-7]
Abstract
What visual information do we use to guide movement through our environment? Self-movement produces a pattern of motion on the retina, called optic flow. During translation, the direction of movement (locomotor direction) is specified by the point in the flow field from which the motion vectors radiate - the focus of expansion (FoE) [1-3]. If an eye movement is made, however, the FoE no longer specifies locomotor direction [4], but the 'heading' direction can still be judged accurately [5]. Models have been proposed that remove confounding rotational motion due to eye movements by decomposing the retinal flow into its separable translational and rotational components ([6-7] are early examples). An alternative theory is based upon the use of invariants in the retinal flow field [8]. The assumption underpinning all these models (see also [9-11]), and associated psychophysical [5,12,13] and neurophysiological studies [14-16], is that locomotive heading is guided by optic flow. In this paper we challenge that assumption for the control of direction of locomotion on foot. Here we have explored the role of perceived location by recording the walking trajectories of people wearing displacing prism glasses. The results suggest that perceived location, rather than optic or retinal flow, is the predominant cue that guides locomotion on foot.
Affiliation(s)
- S K Rushton
- Department of Psychology University of Edinburgh 7 George Square, Edinburgh, EH8 9JZ, UK.