101
Upadhyay UD, Page WK, Duffy CJ. MST responses to pursuit across optic flow with motion parallax. J Neurophysiol 2000; 84:818-26. PMID: 10938308. DOI: 10.1152/jn.2000.84.2.818.
Abstract
Self-movement creates the patterned visual motion of optic flow with a focus of expansion (FOE) that indicates heading direction. During pursuit eye movements, depth cues create a retinal flow field that contains multiple FOEs, potentially complicating heading perception. Paradoxically, human heading perception during pursuit is improved by depth cues. We have studied medial superior temporal (MST) neurons to see whether their heading selectivity is also improved under these conditions. The responses of 134 MST neurons were recorded during the presentation of optic flow stimuli containing one or three speed-defined depth planes. During pursuit, multiple depth-plane stimuli evoked larger responses (71% of neurons) and stronger heading selectivity (70% of neurons). Responses to the three speed-defined depth-planes presented separately showed that most neurons (54%) preferred one of the planes. Responses to multiple depth-plane stimuli were larger than the averaged responses to the three component planes, suggesting enhancing interactions between depth-planes. Thus speed preferences create selective responses to one of many depth-planes in the retinal flow field. The presence of multiple depth-planes enhances those responses. These properties might improve heading perception during pursuit and contribute to relative depth perception.
Affiliation(s)
- U D Upadhyay
- Departments of Neurology, Brain and Cognitive Sciences, Neurobiology and Anatomy, and Ophthalmology and the Center for Visual Science, The University of Rochester Medical Center, Rochester, New York 14642, USA
102
Cutting JE, Wang RF. Heading judgments in minimal environments: the value of a heuristic when invariants are rare. Percept Psychophys 2000; 62:1146-59. PMID: 11019613. DOI: 10.3758/bf03212119.
Abstract
Observers made systematic heading judgments in two experiments simulating their translation through an environment with only two trees. When those trees converged or decelerated apart, observers tended to follow the invariant information and make heading judgments outside the near member of the pair. When those trees accelerated apart, however, observers tended to follow the heuristic information and make judgments outside the far member, although this result was tempered by the angular separation between the trees and their relative acceleration. The simultaneous existence and use of invariants and heuristics are discussed in terms of different metatheoretical approaches to perception.
Affiliation(s)
- J E Cutting
- Department of Psychology, Cornell University, Ithaca, NY 14853-7601, USA.
103
Wann J, Land M. Steering with or without the flow: is the retrieval of heading necessary? Trends Cogn Sci 2000; 4:319-324. PMID: 10904256. DOI: 10.1016/s1364-6613(00)01513-8.
Affiliation(s)
- J Wann
- Department of Psychology, University of Reading, 6 Earley Gate, Reading RG6 6AL, UK.
104
Freeman TC, Banks MS, Crowell JA. Extraretinal and retinal amplitude and phase errors during Filehne illusion and path perception. Percept Psychophys 2000; 62:900-9. PMID: 10997037. DOI: 10.3758/bf03212076.
Abstract
Pursuit eye movements give rise to retinal motion. To judge stimulus motion relative to the head, the visual system must correct for the eye movement by using an extraretinal, eye-velocity signal. Such correction is important in a variety of motion estimation tasks including judgments of object motion relative to the head and judgments of self-motion direction from optic flow. The Filehne illusion (where a stationary object appears to move opposite to the pursuit) results from a mismatch between retinal and extraretinal speed estimates. A mismatch in timing could also exist. Speed and timing errors were investigated using sinusoidal pursuit eye movements. We describe a new illusion--the slalom illusion--in which the perceived direction of self-motion oscillates left and right when the eyes move sinusoidally. A linear model is presented that determines the gain ratio and phase difference of extraretinal and retinal signals accompanying the Filehne and slalom illusions. The speed mismatch and timing differences were measured in the Filehne and self-motion situations using a motion-nulling procedure. Timing errors were very small for the Filehne and slalom illusions. However, the ratios of extraretinal to retinal gain were consistently less than 1, so both illusions are the consequence of a mismatch between estimates of retinal and extraretinal speed. The relevance of the results for recovering the direction of self-motion during pursuit eye movements is discussed.
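The linear model described in this abstract can be illustrated with a toy calculation. This is only a sketch under assumed gain values (`g_ret` and `g_ext` are illustrative, not the paper's fitted parameters): when the ratio of extraretinal to retinal gain is below 1, a stationary object acquires a nonzero perceived head-centric velocity opposite to the pursuit, i.e. the Filehne illusion.

```python
# Linear model: perceived head-centric velocity is a weighted sum of retinal
# and extraretinal (eye-velocity) signals.  Gain values are illustrative.
g_ret, g_ext = 1.0, 0.8        # extraretinal-to-retinal gain ratio < 1
pursuit = 10.0                 # eye velocity (deg/s)

retinal = -pursuit             # a stationary object sweeps opposite to pursuit
perceived = g_ret * retinal + g_ext * pursuit
print(perceived)               # negative: apparent motion against the pursuit
```

With a gain ratio of 1 the two signals would cancel exactly and the stationary object would appear stationary; the residual term is what the nulling procedure measures.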
105
Affiliation(s)
- J P Wann
- Department of Psychology, University of Reading, 3 Earley Gate, Reading RG6 6AL, UK.
106
Harris MG, Giachritsis CD. Coarse-grained information dominates fine-grained information in judgments of time-to-contact from retinal flow. Vision Res 2000; 40:601-11. PMID: 10824264. DOI: 10.1016/s0042-6989(99)00209-6.
Abstract
To investigate the relative importance of fine- and coarse-grained structure in the analysis of retinal flow, subjects made estimates of time-to-contact from random dot kinematograms depicting movement towards a flat, sparsely textured surface. Individual display elements moved smoothly away from each other while expanding smoothly in size. By artificially manipulating the rate at which the individual elements expanded we showed that this cue has only a small effect upon performance. When individual elements were replaced by small clusters of dots, expansion of the clusters had a similarly small effect upon performance. However, estimates of time-to-contact were possible when a single expanding cluster was presented in isolation. We conclude that both types of information are available to the subject but that estimates of time-to-contact are based primarily on coarse-grained changes in the position of image elements and that fine-grained changes in element size or position play only a minor role.
Affiliation(s)
- M G Harris
- Cognitive Science Research Centre, School of Psychology, University of Birmingham, UK.
107
Abstract
Observer translation through the environment can be accompanied by rotation of the eye about any axis. For rotation about the vertical axis (horizontal rotation) during translation in the horizontal plane, it is known that the absence of both depth in the scene and an extraretinal signal leads to a systematic error in the observer's perceived direction of heading. This heading error is related in magnitude and direction to the shift of the centre of retinal flow (CF) that occurs because of the rotation. Rotation about any axis that deviates from the heading direction results in a CF shift. So far, however, the effect of rotation about the line of sight (torsion) on perceived heading has not been investigated. We simulated observer translation towards a wall or cloud, while simultaneously simulating eye rotation about the vertical axis, the torsional axis or combinations thereof. We find only small systematic effects of torsion on the set of 2D perceived headings, regardless of the simulated horizontal rotation. In proportion to the CF shift, the systematic errors are significantly smaller for pure torsion than for pure horizontal rotation. In contrast to errors caused by horizontal rotation, the torsional errors are hardly reduced by addition of depth to the scene. We suggest that the difference in behaviour reflects the difference in symmetry of the field of view relative to the axis of rotation: the higher symmetry in the case of torsion may allow for a more accurate estimation of the rotational flow. Moreover, we report a new phenomenon: simulated horizontal rotation during simulated wall approach increases the heading-dependency of errors, causing a larger compression of perceived heading in the horizontal direction than in the vertical direction.
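The CF shift described above follows directly from the flow geometry, and a minimal numerical sketch makes it concrete (all variable names and values are illustrative, not from the paper): for simulated approach to a fronto-parallel wall at depth Z, translation produces radial flow about the heading point, while eye rotation adds an approximately uniform flow, so the point of zero velocity moves away from the heading.

```python
import numpy as np

# Approach to a fronto-parallel wall at depth Z gives radial flow (p - f)/Z
# about the heading point f; horizontal eye rotation adds a roughly uniform
# flow -R.  The centre of flow (where velocity vanishes) therefore shifts
# from f to f + Z*R.
Z = 2.0
f = np.array([0.0, 0.0])       # true heading point (image coordinates)
R = np.array([0.05, 0.0])      # uniform flow from simulated horizontal rotation

def flow(p):
    return (p - f) / Z - R

cf = f + Z * R                 # predicted (shifted) centre of flow
print(cf, flow(cf))            # the flow is zero at the shifted CF, not at f
```

The same construction shows why depth matters: with multiple depths Z, each depth plane has its own shifted zero point, so the rotational component becomes separable from the translational one.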
Affiliation(s)
- J A Beintema
- Medical Faculty, Erasmus Universiteit Rotterdam, The Netherlands.
108
Abstract
We developed a new computational model of human heading judgement from retinal flow. The model uses two assumptions: a large number of sampling points in the flow field and a symmetric sampling region around the origin. The algorithm estimates self-rotation parameters by calculating statistics whose expectations correspond to the rotation parameters. After the rotational components are removed from the retinal flow, the heading direction is recovered from the flow field. Performance of the model was compared with human data in three psychophysical experiments. In the first experiment, we generated stimuli which simulated self-motion toward the ground, a cloud or a frontoparallel plane and found that the simulation results of the model were consistent with human performance. In the second and third experiments, we measured the slope of the perceived versus simulated heading function when a perturbation velocity weighted according to the distance relative to the fixation distance was added to the vertical velocity component under the cloud condition. It was found that as the magnitude of the perturbation was increased, the slope of the function increased. The characteristics observed in the experiments can be explained well by the proposed model.
Affiliation(s)
- M Hanada
- Graduate School of Human and Environmental Studies, Kyoto University, Japan.
109
van den Berg AV, Beintema JA. The mechanism of interaction between visual flow and eye velocity signals for heading perception. Neuron 2000; 26:747-52. PMID: 10896169. DOI: 10.1016/s0896-6273(00)81210-6.
Abstract
A translating eye receives a radial pattern of motion that is centered on the direction of heading. If the eye is rotating and translating, visual and extraretinal signals help to cancel the rotation and to perceive heading correctly. This involves (1) an interaction between visual and eye movement signals and (2) a motion template stage that analyzes the pattern of visual motion. Early interaction leads to motion templates that integrate head-centered motion signals in the visual field. Integration of retinal motion signals leads to late interaction. Here, we show that retinal flow limits precision of heading. This result argues against an early, vector subtraction type of interaction, but is consistent with a late, gain field type of interaction with eye velocity signals and neurophysiological findings in area MST of the monkey.
Affiliation(s)
- A V van den Berg
- Helmholtz School for Autonomous Systems Research, Department of Physiology, Faculty of Medicine, Erasmus University Rotterdam, The Netherlands.
110
Hanada M, Ejima Y. Method for recovery of heading from motion. J Opt Soc Am A 2000; 17:966-973. PMID: 10850466. DOI: 10.1364/josaa.17.000966.
Abstract
A new method for recovery of heading from motion is developed on the basis of Longuet-Higgins and Prazdny's algorithm [Proc. R. Soc. London Ser. B 208, 385 (1980)]. In the algorithm a radial virtual flow field is generated and the difference between the original velocity field and the virtual radial field is computed. The difference vectors, which are directed to the heading point in the projected plane, allow us to estimate the direction of heading. The simulations of the algorithm were performed, and it was shown that the method estimates the direction of heading accurately.
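The geometric idea behind heading recovery from flow can be sketched numerically. The code below is not the paper's virtual-field algorithm; it is a simplified least-squares version for the pure-translation case (all names and the synthetic flow field are illustrative), using the constraint that each translational flow vector is parallel to the line from the FOE to its image point.

```python
import numpy as np

def estimate_foe(points, flows):
    """Estimate the focus of expansion (FOE) from a translational flow field.

    For pure translation, the flow vector v at image point p is parallel to
    (p - f), where f is the FOE.  The 2D cross product (p - f) x v = 0 gives
    one linear equation per vector:  vy*fx - vx*fy = vy*px - vx*py.
    """
    vx, vy = flows[:, 0], flows[:, 1]
    A = np.stack([vy, -vx], axis=1)               # coefficients of (fx, fy)
    b = vy * points[:, 0] - vx * points[:, 1]
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f

# Simulate translation towards a fronto-parallel surface: radial flow about the FOE.
rng = np.random.default_rng(0)
true_foe = np.array([0.1, -0.05])                 # heading point in image coordinates
pts = rng.uniform(-1.0, 1.0, size=(200, 2))
vel = (pts - true_foe) * 0.5                      # radial expansion, speed grows with eccentricity
vel += rng.normal(scale=1e-3, size=vel.shape)     # small measurement noise

print(estimate_foe(pts, vel))                     # close to [0.1, -0.05]
```

The paper's contribution is the step before this one: constructing a virtual radial field so that the difference vectors point at the heading even when rotation is present.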
Affiliation(s)
- M Hanada
- Graduate School of Human and Environmental Studies, Kyoto University, Japan.
111
A computational model for the detection of object motion by moving observer using self-motion signals. Inf Sci (N Y) 2000. DOI: 10.1016/s0020-0255(99)00110-3.
112
Andersen RA, Shenoy KV, Crowell JA, Bradley DC. Neural mechanisms for self-motion perception in area MST. Int Rev Neurobiol 2000; 44:219-33. PMID: 10605648. DOI: 10.1016/s0074-7742(08)60744-8.
Affiliation(s)
- R A Andersen
- Division of Biology, California Institute of Technology, Pasadena, USA
113
Ivins J, Porrill J, Frisby J, Orban G. The 'ecological' probability density function for linear optic flow: implications for neurophysiology. Perception 2000; 28:17-32. PMID: 10627850. DOI: 10.1068/p2807.
Abstract
A theoretical analysis of the recovery of shape from optic flow highlights the importance of the deformation components; however, pure deforming stimuli elicit few responses from flow-sensitive neurons in the medial superior temporal (MST) area of the cerebral cortex. This finding has prompted the conclusion that MST cells are not involved in shape recovery. However, this conclusion may be unjustified in view of the emerging consensus that MST cells perform nonlinear pattern matching, rather than linear projection as implicitly assumed in many neurophysiological studies. Artificial neural models suggest that the input probability density function (PDF) is crucial in determining the distribution of responses shown by pattern-matching cells. This paper therefore describes a Monte-Carlo study of the joint PDF for linear optic-flow components produced by ego-motion in a simulated planar environment. The recent search for deformation-selective cells in MST is then used to illustrate the importance of the input PDF in determining cell characteristics. The results are consistent with the finding that MST cells exhibit a continuum of responses to translation, rotation, and divergence. In addition, there are negative correlations between the deformation and conformal components of optic flow. Consequently, if cells responsible for shape analysis are present in the MST area, they should respond best to combinations of deformation with other first-order flow components, rather than to the pure stimuli used in previous neurophysiological studies.
Affiliation(s)
- J Ivins
- Department of Computer Science, Curtin University of Technology, Perth, Western Australia.
114
Lappe M. Computational mechanisms for optic flow analysis in primate cortex. Int Rev Neurobiol 2000; 44:235-68. PMID: 10605649. DOI: 10.1016/s0074-7742(08)60745-x.
Affiliation(s)
- M Lappe
- Department of Zoology and Neurobiology, Ruhr University Bochum, Germany
115
Sherk H, Fowler GA. Optic flow and the visual guidance of locomotion in the cat. Int Rev Neurobiol 2000; 44:141-70. PMID: 10605645. DOI: 10.1016/s0074-7742(08)60741-2.
Affiliation(s)
- H Sherk
- Department of Biological Structure, University of Washington, Seattle, USA
116
Bremmer F, Duhamel JR, Ben Hamed S, Graf W. Stages of self-motion processing in primate posterior parietal cortex. Int Rev Neurobiol 1999; 44:173-98. PMID: 10605646. DOI: 10.1016/s0074-7742(08)60742-4.
Affiliation(s)
- F Bremmer
- Department of Zoology and Neurobiology, Ruhr University Bochum, Germany
117
Abstract
Humans perceive heading accurately when they rotate their eyes. This is remarkable, because (1) the pursuit eye movement makes the retinal flow more complicated; and (2) the eye rotation causes a continuous change of the heading direction on the retina. The first problem prevents a simple association of the centre of flow on the retina with the heading direction. To solve it, the brain needs to take into account the flow associated with the eye's rotation. But even if this is done correctly, the resulting estimate of the heading is retino-centric and changing over time. Thus, the processing time to retrieve the heading from the flow field will cause a lag with respect to the actual heading direction. We investigated the latency for heading perception. We presented stepwise changes of the centre of expanding flow to stationary and moving eyes. This mimics the movement of the heading direction across the retina but avoids the complicating effects of rotational flow. For a stationary eye, we found a bias in perceived heading that corresponds to a latency of 300 ms or more. Yet errors in heading perception are normally marginal, because we found an opposite bias for the moving eye, which counters the errors due to latency and a changing retino-centric heading direction. This suggests that the current heading direction is predicted from the extra-retinal signal and the delayed visual signals.
118
Abstract
Accurate and efficient control of self-motion is an important requirement for our daily behavior. Visual feedback about self-motion is provided by optic flow. Optic flow can be used to estimate the direction of self-motion ('heading') rapidly and efficiently. Analysis of oculomotor behavior reveals that eye movements usually accompany self-motion. Such eye movements introduce additional retinal image motion so that the flow pattern on the retina usually consists of a combination of self-movement and eye movement components. The question of whether this 'retinal flow' alone allows the brain to estimate heading, or whether an additional 'extraretinal' eye movement signal is needed, has been controversial. This article reviews recent studies that suggest that heading can be estimated visually but extraretinal signals are used to disambiguate problematic situations. The dorsal stream of primate cortex contains motion processing areas that are selective for optic flow and self-motion. Models that link the properties of neurons in these areas to the properties of heading perception suggest possible underlying mechanisms of the visual perception of self-motion.
119
Kim J, Turano KA. Optimal spatial frequencies for discrimination of motion direction in optic flow patterns. Vision Res 1999; 39:3175-85. PMID: 10615489. DOI: 10.1016/s0042-6989(99)00024-3.
Abstract
Spatial frequency tuning functions were measured for direction discrimination of optic flow patterns. Three subjects discriminated the direction of a curved motion path using computer generated optic flow patterns composed of randomly positioned dots. Performance was measured with unfiltered patterns and with patterns that were spatially filtered across a range of spatial frequencies (center spatial frequencies of 0.4, 0.8, 1.6, 3.2, 6.4, and 9.6 c/deg). The same subjects discriminated the direction of uniform, translational motion on the fronto-parallel plane. The uniform motion patterns were also composed of randomly positioned dots that were either unfiltered or filtered with the same spatial filters used for the optic flow patterns. The peak spatial frequency was the same for both the optic flow and uniform motion patterns. For both types of motion, a narrow band (1.5 octaves) of optimal spatial frequencies was sufficient to support the same level of performance as found with unfiltered, broadband patterns. Additional experiments demonstrated that the peak spatial frequency for the optic flow patterns varies with mean image speed in the same manner as has been reported for moving sinusoidal gratings. These findings confirm the hypothesis that the outputs of the local motion mechanisms thought to underlie the perception of uniform motion provide the inputs to, and constrain the operation of, the mechanism that processes self-motion from optic flow patterns.
Affiliation(s)
- J Kim
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
120
Abstract
Pursuit eye movements introduce retinal motion that complicates the recovery of self-motion from retinal flow. An extra-retinal, eye-velocity signal could be used to aid estimation of the observer's path, perhaps by converting retino-centric into head-centric motion. This conversion is apparently not precise because we often misperceive head-centric object velocity: in the Filehne illusion, for example, a stationary object appears to move in the opposite direction to the eye movement. Similar errors should be expected when extra-retinal, eye-velocity signals are used in self-motion tasks. However, most self-motion studies conclude that path direction is recovered quite accurately. Path perception and the Filehne illusion were therefore compared directly in order to examine the apparent discrepancy. A nulling technique determined the velocity of simulated eye rotation that cancelled the perceived curvature of the path or, in a Filehne condition, the perceived rotation of the ground-plane stimulus. In either case, observers typically set the simulated eye rotation to be a fixed proportion of the actual eye pursuit made. No differences were found between path perception and Filehne illusion. The apparent inaccuracy of path perception during a real eye movement was confirmed in a second experiment, using a standard 'mouse-pointing' technique. The experiments provide support for a model of head-centric motion perception based on extra-retinal and retinal signals that are linearly related to pursuit and retinal speed, respectively.
Affiliation(s)
- T C Freeman
- School of Psychology, Cardiff University, UK.
121
Abstract
Radial patterns of optic flow contain a centre of expansion that indicates the observer's direction of self-movement. When the radial pattern is viewed with transparently overlapping unidirectional motion, the centre of expansion appears to shift in the direction of the unidirectional motion [Duffy, C.J. & Wurtz, R.H. (1993) Vision Res., 33, 1481-1490]. Neurons in the medial superior temporal (MST) area of monkey cerebral cortex are thought to mediate optic flow analysis, but they do not shift their responses to parallel the illusion created by transparent overlap. The population-based model of optic flow analysis proposed by Lappe and Rauschecker replicates the illusory shift observed in perceptual studies [Lappe, M. & Rauschecker, J.P. (1995) Vision Res., 35, 1619-1631]. We analysed the behaviour of constituent neurons in the model, to gain insight into neuronal mechanisms underlying the illusion. Single model neurons did not show the illusory shift but rather graded variations of their response specificity. The shift required the aggregate response of the population. We compared the model's predictions about the behaviour of single neurons with the responses recorded from area MST. The predicted distribution of overlap effects agreed with that observed in area MST. The success of the population-based model in predicting the illusion and the neuronal behaviour suggests that area MST uses the graded responses of single neurons to create a population response that supports optic flow perception.
Affiliation(s)
- M Lappe
- Department of Zoology, Ruhr University Bochum, D-44780 Bochum, Germany.
122
Shenoy KV, Bradley DC, Andersen RA. Influence of gaze rotation on the visual response of primate MSTd neurons. J Neurophysiol 1999; 81:2764-86. PMID: 10368396. DOI: 10.1152/jn.1999.81.6.2764.
Abstract
When we move forward, the visual image on our retina expands. Humans rely on the focus, or center, of this expansion to estimate their direction of heading and, as long as the eyes are still, the retinal focus corresponds to the heading. However, smooth rotation of the eyes adds nearly uniform visual motion to the expanding retinal image and causes a displacement of the retinal focus. In spite of this, humans accurately judge their heading during pursuit eye movements and during active, smooth head rotations even though the retinal focus no longer corresponds to the heading. Recent studies in macaque suggest that correction for pursuit may occur in the dorsal aspect of the medial superior temporal area (MSTd) because these neurons are tuned to the retinal position of the focus and they modify their tuning during pursuit to compensate partially for the focus shift. However, the question remains whether these neurons also shift focus tuning to compensate for smooth head rotations that commonly occur during gaze tracking. To investigate this question, we recorded from 80 MSTd neurons while monkeys tracked a visual target either by pursuing with their eyes or by vestibulo-ocular reflex cancellation (VORC; whole-body rotation with eyes fixed in head and head fixed on body). VORC is a passive, smooth head rotation condition that selectively activates the vestibular canals. We found that neurons shift their focus tuning in a similar way whether focus displacement is caused by pursuit or by VORC. Across the population, compensation averaged 88 and 77% during pursuit and VORC, respectively (tuning shift divided by the retinal focus to true heading difference). Moreover the degree of compensation during pursuit and VORC was correlated in individual cells (P < 0.001). Finally neurons that did not compensate appreciably tended to be gain-modulated during pursuit and VORC and may constitute an intermediate stage in the compensation process. 
These results indicate that many MSTd cells compensate for general gaze rotation, whether produced by eye-in-head or head-in-world rotation, and further implicate MSTd as a critical stage in the computation of heading. Interestingly vestibular cues present during VORC allow many cells to compensate even though humans do not accurately judge their heading in this condition. This suggests that MSTd may use vestibular information to create a compensated heading representation within at least a subpopulation of cells, which is accessed perceptually only when additional cues related to active head rotations are also present.
Affiliation(s)
- K V Shenoy
- Division of Biology, California Institute of Technology, Pasadena, California 91125, USA
123
Wright MJ, Gurney KN. Visual discrimination of direction changes based upon two types of angular motion. Vision Res 1999; 39:1927-41. PMID: 10343781. DOI: 10.1016/s0042-6989(98)00246-6.
Abstract
We address the question of how the visual system analyses changes in direction. Using plaid stimuli, we define type O direction changes, which entail a change in the orientations of the plaid components, and type V direction changes, in which the orientations of the components remain constant relative to the observer but their relative speeds change. Lower thresholds for discriminating type O and type V direction changes were compared. Type O thresholds for clockwise/anticlockwise direction change were very low (0.2-0.5 degrees), were resistant to directional noise, and showed a low-pass relationship with drift velocity. Type V thresholds, on the other hand, were higher (1-5 degrees) and exhibited a bandpass relationship with drift velocity. Type O direction changes gave low thresholds at short inter-stimulus intervals (ISI) (< 160 ms) and higher thresholds (successive orientation discrimination) at long ISI (240 ms-12.8 s). Type V thresholds, in contrast, exhibited no short-range process, and performance at short ISI was no better than for successive direction discrimination at long ISI. A two-stage rotary motion model is sufficient to explain the discrimination of type O direction changes, and the results rule out a model based on velocity discrimination. For type V direction changes, a two-stage mechanism is insufficient and the results are consistent with a minimum of three computational stages.
Affiliation(s)
- M J Wright
- Department of Human Sciences, Brunel University, Uxbridge, UK.
124
Abstract
Although the orientation of an arm in space or the static view of an object may be represented by a population of neurons in complex ways, how these variables change with movement often follows simple linear rules, reflecting the underlying geometric constraints in the physical world. A theoretical analysis is presented for how such constraints affect the average firing rates of sensory and motor neurons during natural movements with low degrees of freedom, such as a limb movement and rigid object motion. When applied to nonrigid reaching arm movements, the linear theory accounts for cosine directional tuning with linear speed modulation, predicts a curl-free spatial distribution of preferred directions, and also explains why the instantaneous motion of the hand can be recovered from the neural population activity. For three-dimensional motion of a rigid object, the theory predicts that, to a first approximation, the response of a sensory neuron should have a preferred translational direction and a preferred rotation axis in space, both with cosine tuning functions modulated multiplicatively by speed and angular speed, respectively. Some known tuning properties of motion-sensitive neurons follow as special cases. Acceleration tuning and nonlinear speed modulation are considered in an extension of the linear theory. This general approach provides a principled method to derive mechanism-insensitive neuronal properties by exploiting the inherently low dimensionality of natural movements.
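The tuning rule described above can be made concrete with a toy population. This is a minimal sketch under assumptions not in the abstract (evenly spaced preferred directions; illustrative baseline and gain values): each neuron's rate follows baseline + gain · speed · cos(θ − θ_pref), and the movement direction is recovered from the population by a vector sum.

```python
import numpy as np

n = 64
pref = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)   # preferred directions
baseline, gain = 20.0, 15.0                                # illustrative values

def rates(theta, speed):
    # cosine directional tuning with linear speed modulation
    return baseline + gain * speed * np.cos(theta - pref)

def decode(r):
    # population vector: preferred-direction unit vectors weighted by
    # baseline-subtracted firing rates
    w = r - baseline
    return np.arctan2(np.sum(w * np.sin(pref)), np.sum(w * np.cos(pref)))

theta_true = 1.2
print(decode(rates(theta_true, speed=0.8)))                # recovers theta_true
```

With preferred directions distributed uniformly (the "curl-free" condition the theory predicts), the population vector recovers the movement direction exactly; speed scales the vector's length, not its angle.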
125
Cutting JE, Wang RF, Flückiger M, Baumberger B. Human heading judgments and object-based motion information. Vision Res 1999; 39:1079-105. [PMID: 10343828 DOI: 10.1016/s0042-6989(98)00175-8] [Citation(s) in RCA: 21] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Abstract
In four experiments, we explored observers' ability to make heading judgments from simulated linear and circular translations through sparse forests and with pursuit fixation on one tree. We assessed observers' performance and information use in both regression and factorial designs. In all experiments we found that observers used three sources of object-based information to make their judgments: the displacement direction of the nearest object seen (a heuristic), inward displacement towards the fovea (an invariant), and outward deceleration (a second invariant). We found no support for the idea that observers use motion information pooled over regions of the visual field.
Affiliation(s)
- J E Cutting
- Department of Psychology, Cornell University, Ithaca, NY 14853-7601, USA.
126
Page WK, Duffy CJ. MST neuronal responses to heading direction during pursuit eye movements. J Neurophysiol 1999; 81:596-610. [PMID: 10036263 DOI: 10.1152/jn.1999.81.2.596] [Citation(s) in RCA: 76] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
As you move through the environment, you see a radial pattern of visual motion with a focus of expansion (FOE) that indicates your heading direction. When self-movement is combined with smooth pursuit eye movements, the turning of the eye distorts the retinal image of the FOE but somehow you still can perceive heading. We studied neurons in the medial superior temporal area (MST) of monkey visual cortex, recording responses to FOE stimuli presented during fixation and smooth pursuit eye movements. Almost all neurons showed significant changes in their FOE selective responses during pursuit eye movements. However, the vector average of all the neuronal responses indicated the direction of the FOE during both fixation and pursuit. Furthermore, the amplitude of the net vector increased with increasing FOE eccentricity. We conclude that neuronal population encoding in MST might contribute to pursuit-tolerant heading perception.
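The vector-average readout described above can be sketched with a toy population. The preferred FOE directions and firing rates below are illustrative assumptions, not the recorded MST data; the point is only that the resultant of response-weighted preference vectors recovers the FOE direction.

```python
import math

def vector_average(pref_angles, responses):
    """Population vector: sum unit vectors at each neuron's preferred FOE
    direction, weighted by its response. The resultant angle is the decoded
    FOE direction; its length grows with the anisotropy of the responses."""
    x = sum(r * math.cos(a) for a, r in zip(pref_angles, responses))
    y = sum(r * math.sin(a) for a, r in zip(pref_angles, responses))
    return math.atan2(y, x), math.hypot(x, y)

# Eight model neurons with preferred FOE directions spaced 45 deg apart;
# responses peak for the neuron preferring 90 deg (straight up).
prefs = [i * math.pi / 4 for i in range(8)]
rates = [10 + 8 * math.cos(a - math.pi / 2) for a in prefs]
heading, magnitude = vector_average(prefs, rates)
```

In this toy case the decoded angle is 90 deg; a deeper response modulation (as with a more eccentric FOE) lengthens the net vector without changing the uniform baseline's contribution, mirroring the amplitude effect reported above.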
Affiliation(s)
- W K Page
- Department of Neurology, The Center for Visual Science, The University of Rochester Medical Center, Rochester, New York 14642, USA
127
Royden CS, Hildreth EC. Differential effects of shared attention on perception of heading and 3-D object motion. PERCEPTION & PSYCHOPHYSICS 1999; 61:120-33. [PMID: 10070204 DOI: 10.3758/bf03211953] [Citation(s) in RCA: 21] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
When a person moves in a straight line through a stationary environment, the images of object surfaces move in a radial pattern away from a single point. This point, known as the focus of expansion (FOE), corresponds to the person's direction of motion. People judge their heading from image motion quite well in this situation. They perform most accurately when they can see the region around the FOE, which contains the most useful information for this task. Furthermore, a large moving object in the scene has no effect on observer heading judgments unless it obscures the FOE. Therefore, observers may obtain the most accurate heading judgments by focusing their attention on the region around the FOE. However, in many situations (e.g., driving), the observer must pay attention to other moving objects in the scene (e.g., cars and pedestrians) to avoid collisions. These objects may be located far from the FOE in the visual field. We tested whether people can accurately judge their heading and the three-dimensional (3-D) motion of objects while paying attention to one or the other task. The results show that differential allocation of attention affects people's ability to judge 3-D object motion much more than it affects their ability to judge heading. This suggests that heading judgments are computed globally, whereas judgments about object motion may require more focused attention.
Affiliation(s)
- C S Royden
- Department of Computer Science, Wellesley College, MA 02481, USA.
128
Rushton SK, Harris JM, Lloyd MR, Wann JP. Guidance of locomotion on foot uses perceived target location rather than optic flow. Curr Biol 1998; 8:1191-4. [PMID: 9799736 DOI: 10.1016/s0960-9822(07)00492-7] [Citation(s) in RCA: 181] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
What visual information do we use to guide movement through our environment? Self-movement produces a pattern of motion on the retina, called optic flow. During translation, the direction of movement (locomotor direction) is specified by the point in the flow field from which the motion vectors radiate - the focus of expansion (FoE) [1-3]. If an eye movement is made, however, the FoE no longer specifies locomotor direction [4], but the 'heading' direction can still be judged accurately [5]. Models have been proposed that remove confounding rotational motion due to eye movements by decomposing the retinal flow into its separable translational and rotational components ([6-7] are early examples). An alternative theory is based upon the use of invariants in the retinal flow field [8]. The assumption underpinning all these models (see also [9-11]), and associated psychophysical [5,12,13] and neurophysiological studies [14-16], is that locomotive heading is guided by optic flow. In this paper we challenge that assumption for the control of direction of locomotion on foot. Here we have explored the role of perceived location by recording the walking trajectories of people wearing displacing prism glasses. The results suggest that perceived location, rather than optic or retinal flow, is the predominant cue that guides locomotion on foot.
Affiliation(s)
- S K Rushton
- Department of Psychology University of Edinburgh 7 George Square, Edinburgh, EH8 9JZ, UK.
129
Ehrlich SM, Beck DM, Crowell JA, Freeman TC, Banks MS. Depth information and perceived self-motion during simulated gaze rotations. Vision Res 1998; 38:3129-45. [PMID: 9893821 DOI: 10.1016/s0042-6989(97)00427-6] [Citation(s) in RCA: 38] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
When presented with random-dot displays with little depth information, observers cannot determine their direction of self-motion accurately in the presence of rotational flow without appropriate extra-retinal information (Royden CS et al. Vis Res 1994;34:3197-3214). On theoretical grounds, one might expect improved performance when depth information is added to the display (van den Berg AV and Brenner E. Nature 1994;371:700-2). We examined this possibility by having observers indicate perceived self-motion paths when the amount of depth information was varied. When stereoscopic cues and a variety of monocular depth cues were added, observers still misperceived the depicted self-motion when the rotational flow in the display was not accompanied by an appropriate extra-retinal, eye-velocity signal. Specifically, they perceived curved self-motion paths with the curvature in the direction of the simulated eye rotation. The distance to the response marker was crucial to the objective measurement of this misperception. When the marker distance was small, the observers' settings were reasonably accurate despite the misperception of the depicted self-motion. When the marker distance was large, the settings exhibited the errors reported previously by Royden CS et al. (Vis Res 1994;34:3197-3214). The path judgement errors observers make during simulated gaze rotations appear to be the result of misattributing path-independent rotation to self-motion along a circular path with path-dependent rotation. An analysis of the information an observer could use to avoid such errors reveals that the addition of depth information is of little use.
Affiliation(s)
- S M Ehrlich
- Department of Psychology, School of Optometry, University of California, Berkeley 94720-2020, USA
130
Abstract
We have proposed previously a computational neural-network model by which the complex patterns of retinal image motion generated during locomotion (optic flow) can be processed by specialized detectors acting as templates for specific instances of self-motion. The detectors in this template model respond to global optic flow by sampling image motion over a large portion of the visual field through networks of local motion sensors with properties similar to those of neurons found in the middle temporal (MT) area of primate extrastriate visual cortex. These detectors, arranged within cortical-like maps, were designed to extract self-translation (heading) and self-rotation, as well as the scene layout (relative distances) ahead of a moving observer. We then postulated that heading from optic flow is directly encoded by individual neurons acting as heading detectors within the medial superior temporal (MST) area. Others have questioned whether individual MST neurons can perform this function because some of their receptive-field properties seem inconsistent with this role. To resolve this issue, we systematically compared MST responses with those of detectors from two different configurations of the model under matched stimulus conditions. We found that the characteristic physiological properties of MST neurons can be explained by the template model. We conclude that MST neurons are well suited to support self-motion estimation via a direct encoding of heading and that the template model provides an explicit set of testable hypotheses that can guide future exploration of MST and adjacent areas within the superior temporal sulcus.
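The template idea can be caricatured in a few lines: each heading detector stores a global radial flow field and responds in proportion to its match with the input flow, and the best-matching detector signals heading. The grid of sample points and the dot-product matching rule are simplifying assumptions for illustration, not the published network.

```python
import math

def radial_template(focus, points):
    """Unit flow vectors radiating from a focus of expansion (heading)."""
    field = []
    for (px, py) in points:
        dx, dy = px - focus[0], py - focus[1]
        n = math.hypot(dx, dy) or 1.0  # guard against a point at the focus
        field.append((dx / n, dy / n))
    return field

def match(flow, template):
    """Detector activation: dot product between input flow and template."""
    return sum(u * tu + v * tv for (u, v), (tu, tv) in zip(flow, template))

# Sample positions on a grid; simulate flow for self-motion toward (0, 0).
pts = [(x, y) for x in (-2, -1, 1, 2) for y in (-2, -1, 1, 2)]
flow = radial_template((0.0, 0.0), pts)

# A small bank of heading detectors; the best-matching template wins.
foci = [(-1.0, 0.0), (0.0, 0.0), (1.0, 0.0)]
acts = [match(flow, radial_template(f, pts)) for f in foci]
best = foci[acts.index(max(acts))]
```

Here the detector whose stored focus coincides with the true heading responds most strongly, which is the sense in which heading is "directly encoded" by individual detectors.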
131
Abstract
The visual motion - or optic flow - that results from an observer's own movement can indicate the direction of heading through the environment. Recent experiments have strengthened the argument that neurons in a specialized region of the cerebral cortex are critical for the analysis of this important class of visual stimuli.
Affiliation(s)
- R H Wurtz
- Laboratory of Sensorimotor Research National Eye Institute National Institutes of Health Bethesda, Maryland, 20892-4435, USA
132
Abstract
Eye or head rotation would influence perceived heading direction if it were coded by cells tuned only to retinal flow patterns that correspond to linear self-movement. We propose a model for heading detection based on motion templates that are also Gaussian-tuned to the amount of rotational flow. Such retinal flow templates allow explicit use of extra-retinal signals to create templates tuned to head-centric flow as seen by the stationary eye. Our model predicts an intermediate layer of 'eye velocity gain fields' in which 'rate-coded' eye velocity is multiplied with the responses of templates sensitive to specific retinal flow patterns. By combining the activities of one retinal flow template and many units with an eye velocity gain field, a new type of unit appears: its preferred retinal flow changes dynamically in accordance with the eye rotation velocity. This unit's activity thereby becomes approximately invariant to the amount of eye rotation. The units with eye velocity gain fields form the motion-analogue of the units with eye position gain fields found in area 7a, which, according to our general approach, are needed to transform position from retino-centric to head-centric coordinates. The rotation-tuned templates can also provide rate-coded visual estimates of eye rotation to allow a purely visual compensation for rotational flow. Our model is consistent with psychophysical data that indicate a role for extra-retinal as well as visual rotation signals in the correct perception of heading.
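A minimal sketch of the proposed multiplication, assuming a Gaussian eye-velocity tuning; the tuning width and preferred velocities below are hypothetical, chosen only to show how the unit matching the current rotation comes to dominate.

```python
import math

def gain_field_unit(template_resp, eye_vel, pref_vel, sigma=4.0):
    """Eye velocity gain field: a retinal-flow template response is
    multiplied by a Gaussian tuning to the rate-coded eye velocity."""
    gain = math.exp(-((eye_vel - pref_vel) ** 2) / (2 * sigma ** 2))
    return template_resp * gain

# One retinal-flow template combined with units preferring different
# eye velocities (deg/s): the unit whose preferred velocity matches the
# ongoing rotation responds maximally.
resp = 1.0
eye_vel = 6.0
pref_vels = [0.0, 6.0, 12.0]
outs = [gain_field_unit(resp, eye_vel, pv) for pv in pref_vels]
```

Summing such units over many preferred velocities yields an output whose preferred retinal flow shifts with eye rotation, giving the approximate rotation invariance described above.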
Affiliation(s)
- J A Beintema
- Helmholtz School for Autonomous Systems Research, Department of Physiology, Erasmus University Rotterdam, The Netherlands
133
Britten KH, van Wezel RJ. Electrical microstimulation of cortical area MST biases heading perception in monkeys. Nat Neurosci 1998; 1:59-63. [PMID: 10195110 DOI: 10.1038/259] [Citation(s) in RCA: 226] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
As we move through the environment, the pattern of visual motion on the retina provides rich information about our movement through the scene. Human subjects can use this information, often termed "optic flow", to accurately estimate their direction of self movement (heading) from relatively sparse displays. Physiological observations on the motion-sensitive areas of monkey visual cortex suggest that the medial superior temporal area (MST) is well suited for the analysis of optic flow information. To test whether MST is involved in extracting heading from optic flow, we perturbed its activity in monkeys trained on a heading discrimination task. Electrical microstimulation of MST frequently biased the monkeys' decisions about their heading, and these induced biases were often quite large. This result suggests that MST has a direct role in the perception of heading from optic flow.
Affiliation(s)
- K H Britten
- UC Davis Center for Neuroscience, California, USA.
134
Lappe M, Pekel M, Hoffmann KP. Optokinetic eye movements elicited by radial optic flow in the macaque monkey. J Neurophysiol 1998; 79:1461-80. [PMID: 9497425 DOI: 10.1152/jn.1998.79.3.1461] [Citation(s) in RCA: 53] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023] Open
Abstract
We recorded spontaneous eye movements elicited by radial optic flow in three macaque monkeys using the scleral search coil technique. Computer-generated stimuli simulated forward or backward motion of the monkey with respect to a number of small illuminated dots arranged on a virtual ground plane. We wanted to see whether optokinetic eye movements are induced by radial optic flow stimuli that simulate self-movement, quantify their parameters, and consider their effects on the processing of optic flow. A regular pattern of interchanging fast and slow eye movements with a frequency of 2 Hz was observed. When we shifted the horizontal position of the focus of expansion (FOE) during simulated forward motion (expansional optic flow), median horizontal eye position also shifted in the same direction but only by a smaller amount; for simulated backward motion (contractional optic flow), median eye position shifted in the opposite direction. We relate this to a change in Schlagfeld typically observed in optokinetic nystagmus. Direction and speed of slow phase eye movements were compared with the local flow field motion in gaze direction (the foveal flow). Eye movement direction matched well the foveal motion. Small systematic deviations could be attributed to an integration of the global motion pattern. Eye speed on average did not match foveal stimulus speed, as the median gain was only approximately 0.5-0.6. The gain was always lower for expanding than for contracting stimuli. We analyzed the time course of the eye movement immediately after each saccade. We found remarkable differences in the initial development of gain and directional following for expansion and contraction. For expansion, directional following and gain were initially poor and strongly influenced by the ongoing eye movement before the saccade. This was not the case for contraction. These differences also can be linked to properties of the optokinetic system. 
We conclude that optokinetic eye movements can be elicited by radial optic flow fields simulating self-motion. These eye movements are linked to the parafoveal flow field, i.e., the motion in the direction of gaze. In the retinal projection of the optic flow, such eye movements superimpose retinal slip. This results in complex retinal motion patterns, especially because the gain of the eye movement is small and variable. This observation has special relevance for mechanisms that determine self-motion from retinal flow fields. It is necessary to consider the influence of eye movements in optic flow analysis, but our results suggest that direction and speed of an eye movement should be treated differently.
Affiliation(s)
- M Lappe
- Department of Zoology and Neurobiology, Ruhr University Bochum, D-44780 Bochum, Germany
135
Abstract
Many cells in the dorsal part of the medial superior temporal (MST) region of visual cortex respond selectively to specific combinations of expansion/contraction, translation, and rotation motions. Previous investigators have suggested that these cells may respond selectively to the flow fields generated by self-motion of an observer. These patterns can also be generated by the relative motion between an observer and a particular object. We explored a neurally constrained model based on the hypothesis that neurons in MST partially segment the motion fields generated by several independently moving objects. Inputs to the model were generated from sequences of ray-traced images that simulated realistic motion situations, combining observer motion, eye movements, and independent object motions. The input representation was based on the response properties of neurons in the middle temporal area (MT), which provides the primary input to area MST. After applying an unsupervised optimization technique, the units became tuned to patterns signaling coherent motion, matching many of the known properties of MST cells. The results of this model are consistent with recent studies indicating that MST cells primarily encode information concerning the relative three-dimensional motion between objects and the observer.
136
Heading backward: Perceived direction of movement in contracting and expanding optical flow fields. Psychon Bull Rev 1997. [DOI: 10.3758/bf03214342] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
137
Royden CS. Mathematical analysis of motion-opponent mechanisms used in the determination of heading and depth. JOURNAL OF THE OPTICAL SOCIETY OF AMERICA. A, OPTICS, IMAGE SCIENCE, AND VISION 1997; 14:2128-2143. [PMID: 9291603 DOI: 10.1364/josaa.14.002128] [Citation(s) in RCA: 50] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/22/2023]
Abstract
A mathematical analysis is presented of a model that uses motion-opponent operators, similar to neurons found in the primate middle temporal visual area, to determine observer heading and depth from optical flow information. The response of these operators to depth changes in the form of a slanted plane or a step edge is analyzed, and the outputs of odd-symmetric operators are compared with those of circularly symmetric operators. The analysis shows sources of error from these operators in determining heading and depth and suggests how some of these errors can be mitigated. Simulations are presented that show that the model performs well for a variety of situations.
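A minimal caricature of a motion-opponent operator, assuming two direction-selective subunits at adjacent locations whose projections onto a preferred axis are subtracted; this is a deliberate simplification of the operators analyzed in the paper, meant only to show why uniform (rotation-like) flow cancels while a depth-induced speed difference survives.

```python
def opponent_response(flow_a, flow_b, axis=(1.0, 0.0)):
    """Motion-opponent operator: the projection of the flow on the
    preferred axis at one location minus the same projection at an
    adjacent location. Flow that is identical at both locations
    cancels; a speed step across the boundary does not."""
    proj = lambda v: v[0] * axis[0] + v[1] * axis[1]
    return proj(flow_a) - proj(flow_b)

uniform = opponent_response((3.0, 0.0), (3.0, 0.0))  # rotation-like field
edge = opponent_response((3.0, 0.0), (1.0, 0.0))     # step edge in depth
```

The surviving signal at depth discontinuities is what such operators exploit for heading and relative-depth estimation.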
Affiliation(s)
- C S Royden
- Department of Computer Science, Wellesley College, Massachusetts 02181, USA
138
Abstract
Human observers cannot judge heading accurately in the presence of simulated gaze rotations under many conditions [Royden et al. (1994). Vision Research, 34, 3197-3214]. They make errors in the direction of rotation with magnitudes proportional to the rotation rate. Two hypotheses have been advanced to explain this phenomenon. The extra-retinal-signal hypothesis states that the observer's estimate of gaze rotation is always based on an extra-retinal signal such as an efference copy. In the absence of such a signal, the observer assumes that no rotation has taken place and responds accordingly. The retinal-image hypothesis states that visual input dominates when the extra-retinal signal is small or absent; under this hypothesis, errors with simulated rotations are the consequence of faulty visual mechanisms. Perrone and Stone [(1994). Vision Research, 34, 2917-2938] proposed a model that purports to account for these errors using retinal-image information (optic flow) alone; its assumptions make it inefficient under some conditions. The most important of these assumptions is that the fixated target is stationary with respect to the world (the gaze-stabilization constraint). I compared the model's performance to human data from two experiments of Royden et al. [(1994). Vision Research, 34, 3197-3214]. One experiment simulated translation while tracking a target attached to the scene (gaze-stabilized), while the other simulated translation while tracking a target that was not attached (gaze-unstabilized). The incorporation of the gaze-stabilization constraint leads to a predicted asymmetry for the errors in the gaze-unstabilized experiment that is not observed in human data. I conclude that the model as it stands is not consistent with human behavior. 
It is possible, however, that the predicted asymmetry is masked in human data by a counteracting asymmetry in a hypothetical processing stage subsequent to the heading estimation that extrapolates the observer's future path of self-motion.
Affiliation(s)
- J A Crowell
- School of Optometry, University of California at Berkeley 94720, USA
139
Cutting JE, Vishton PM, Flückiger M, Baumberger B, Gerndt JD. Heading and path information from retinal flow in naturalistic environments. PERCEPTION & PSYCHOPHYSICS 1997; 59:426-41. [PMID: 9136272 DOI: 10.3758/bf03211909] [Citation(s) in RCA: 43] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
In four experiments, we explored the heading and path information available to observers as we simulated their locomotion through a cluttered environment while they fixated an object off to the side. Previously, we presented a theory about the information available and used in such situations. For such a theory to be valid, one must be sure of eye position, but we had been unable to monitor gaze systematically; in Experiment 1, we monitored eye position and found performance best when observers fixated the designated object at the center of the display. In Experiment 2, when we masked portions of the display, we found that performance generally matched the amount of display visible when scaled to retinal sensitivity. In Experiments 3 and 4, we then explored the metric of information about heading (nominal vs. absolute) available and found good nominal information but increasingly poor and biased absolute information as observers looked farther from the aimpoint. Part of the cause for this appears to be that some observers perceive that they have traversed a curved path even when taking a linear one. In all cases, we compared our results with those in the literature.
Affiliation(s)
- J E Cutting
- Department of Psychology, Cornell University, Ithaca, NY 14853-7601, USA.
140
van den Berg AV, Beintema JA. Motion templates with eye velocity gain fields for transformation of retinal to head centric flow. Neuroreport 1997; 8:835-40. [PMID: 9141048 DOI: 10.1097/00001756-199703030-00006] [Citation(s) in RCA: 16] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
Heading perception from the optic flow is more difficult during eye rotations than when the eye is stationary, because the centre of the retinal motion identifies the fixation direction rather than the direction of heading. Eye movement signals help when motion parallax is absent. This paper distinguishes two different possibilities for interactions between eye movement and visual motion signals to perceive heading with a rotating eye. A pre-motion-template transformation changes local retinal velocity into head centric velocity; these velocities then feed head centric motion templates. A post-motion-template model combines oculomotor signals with retinal motion templates to arrive at head centric flow templates. The latter scheme involves eye velocity gain fields similar to the eye position gain fields found in area 7a. We propose that the parietal cortex transforms retinal to head centric direction and retinal to head centric flow on the same principle.
Affiliation(s)
- A V van den Berg
- Helmholtz School for Autonomous Systems Research and Department of Physiology, Medical Faculty, Erasmus University Rotterdam, The Netherlands
141
Abstract
Recent studies have suggested that humans cannot estimate their direction of forward translation (heading) from the resulting retinal motion (flow field) alone when rotation rates are higher than approximately 1 deg/sec. It has been argued that either oculomotor or static depth cues are necessary to disambiguate the rotational and translational components of the flow field and, thus, to support accurate heading estimation. We have re-examined this issue using visually simulated motion along a curved path towards a layout of random points as the stimulus. Our data show that, in this curvilinear motion paradigm, five of six observers could estimate their heading relatively accurately and precisely (error and uncertainty < approximately 4 deg), even for rotation rates as high as 16 deg/sec, without the benefit of either oculomotor or static depth cues signaling rotation rate. Such performance is inconsistent with models of human self-motion estimation that require rotation information from sources other than the flow field to cancel the rotational flow.
Affiliation(s)
- L S Stone
- Flight Management and Human Factors Division, NASA Ames Research Center, Moffett Field, CA 94035-1000, USA.
142
Duffy CJ, Wurtz RH. Planar directional contributions to optic flow responses in MST neurons. J Neurophysiol 1997; 77:782-96. [PMID: 9065850 DOI: 10.1152/jn.1997.77.2.782] [Citation(s) in RCA: 48] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/03/2023] Open
Abstract
Many neurons in the dorsal region of the medial superior temporal area (MSTd) of monkey cerebral cortex respond to optic flow stimuli in which the center of motion is shifted off the center of the visual field. Each shifted-center-of-motion stimulus presents both different directions of planar motion throughout the visual field and a unique pattern of global motion across the visual field. We investigated the contribution of planar motion to the responses of these neurons in two experiments. In the first, we compared the responses of 243 neurons to planar motion and to shifted-center-of-motion stimuli created by vector summation of planar motion and radial or circular motion. We found that many neurons preferred the same directions of motion in the combined stimuli as in the planar stimuli, but other neurons did not. When we divided our sample into one group with stronger directionality to both planar and vector combination stimuli and one group with weaker directionality, we found that the neurons with the stronger directionality were those that showed the greatest similarity in the preferred direction of motion for both the planar and combined stimuli. In a second set of experiments, we overlapped planar motion and radial or circular motion to create transparent stimuli with the same motion components as the vector combination stimuli, but without the shifted centers of motion. We found that the neurons that responded most strongly to the planar motion when it was combined with radial or circular motion also responded best when the planar motion was overlapped by a transparent motion stimulus. We conclude that the responses of those neurons with stronger directional responses to both the motion of planar and vector combination stimuli are most readily understood as responding to the total planar motion in the stimulus, a planar motion mechanism. 
Other neurons that had weaker directional responses showed no such similarity in the preferred directions of planar motion in the vector combination and the transparent overlap stimuli and fit best with a mechanism dependent on the global motion pattern. We also found that neurons having significant responses to both radial and circular motion also responded to the spiral stimuli that result from a vector combination of radial and circular motion. The preferred planar-spiral vector combination stimulus was frequently the one containing that neuron's preferred direction of planar motion, which makes them similar to other MSTd neurons.
Affiliation(s)
- C J Duffy
- Laboratory of Sensorimotor Research, National Institutes of Health, National Eye Institute, Bethesda, Maryland 20892, USA
143
Andersen RA, Snyder LH, Bradley DC, Xing J. Multimodal representation of space in the posterior parietal cortex and its use in planning movements. Annu Rev Neurosci 1997; 20:303-30. [PMID: 9056716 DOI: 10.1146/annurev.neuro.20.1.303] [Citation(s) in RCA: 875] [Impact Index Per Article: 32.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/03/2023]
Abstract
Recent experiments are reviewed that indicate that sensory signals from many modalities, as well as efference copy signals from motor structures, converge in the posterior parietal cortex in order to code the spatial locations of goals for movement. These signals are combined using a specific gain mechanism that enables the different coordinate frames of the various input signals to be combined into common, distributed spatial representations. These distributed representations can be used to convert the sensory locations of stimuli into the appropriate motor coordinates required for making directed movements. Within these spatial representations of the posterior parietal cortex are neural activities related to higher cognitive functions, including attention. We review recent studies showing that the encoding of intentions to make movements is also among the cognitive functions of this area.
Affiliation(s)
- R A Andersen
- Division of Biology, California Institute of Technology, Pasadena 91125, USA
144
Abstract
How does the brain process visual information about self-motion? In monkey cortex, the analysis of visual motion is performed by successive areas specialized in different aspects of motion processing. Whereas neurons in the middle temporal (MT) area are direction-selective for local motion, neurons in the medial superior temporal (MST) area respond to motion patterns. A neural network model attempts to link these properties to the psychophysics of human heading detection from optic flow. It proposes that populations of neurons represent specific directions of heading. We quantitatively compared single-unit recordings in area MST with single-neuron simulations in this model. Predictions were derived from simulations and subsequently tested in recorded neurons. Neuronal activities depended on the position of the singular point in the optic flow. Best responses to opposing motions occurred for opposite locations of the singular point in the visual field. Excitation by one type of motion is paired with inhibition by the opposite motion. Activity maxima often occur for peripheral singular points. The averaged recorded shape of the response modulations is sigmoidal, which is in agreement with model predictions. We also tested whether the activity of the neuronal population in MST can represent the directions of heading in our stimuli. A simple least-mean-square minimization could retrieve the direction of heading from the neuronal activities with a precision of 4.3 degrees. Our results show good agreement between the proposed model and the neuronal responses in area MST and further support the hypothesis that area MST is involved in visual navigation.
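The least-mean-square readout can be illustrated with a toy population of sigmoidally tuned model neurons, consistent with the sigmoidal response modulations described above. The tuning curves and candidate heading grid are synthetic assumptions for illustration, not the recorded MST data or the published network.

```python
import math

def population(heading, prefs):
    """Sigmoidal response of each model neuron to a heading (deg)."""
    return [1 / (1 + math.exp(-(heading - p))) for p in prefs]

def decode(activity, prefs, candidates):
    """Least-mean-square readout: pick the candidate heading whose
    predicted population activity minimizes the squared error against
    the observed activity."""
    def mse(h):
        pred = population(h, prefs)
        return sum((a - b) ** 2 for a, b in zip(activity, pred))
    return min(candidates, key=mse)

prefs = [-20, -10, 0, 10, 20]                   # preferred headings (deg)
observed = population(5.0, prefs)               # activity for a 5 deg heading
candidates = [i * 0.5 for i in range(-60, 61)]  # -30..30 deg in 0.5 deg steps
estimate = decode(observed, prefs, candidates)
```

With noiseless activity the minimization recovers the stimulus heading exactly; adding response noise would produce a residual error of the kind quantified in the recordings.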
145
Abstract
Several groups have proposed that area MSTd of the macaque monkey has a role in processing optical flow information used in the analysis of self-motion, based on its neurons' selectivity for large-field motion patterns such as expansion, contraction, and rotation. It has also been suggested that this cortical region may be important in analyzing the complex motions of objects. More generally, MSTd could be involved in the generic function of complex motion pattern representation, with its cells responsible for integrating local motion signals sent forward from area MT into a more unified representation. If MSTd is extracting generic motion pattern signals, it would be important that the preferred tuning of MSTd neurons not depend on the particular features and cues that allow these motions to be represented. To test this idea, we examined the diversity of stimulus features and cues over which MSTd cells can extract information about motion patterns such as expansion, contraction, rotation, and spirals. The different classes of stimuli included: coherently moving random dot patterns, solid squares, outlines of squares, a square aperture moving in front of an underlying stationary pattern of random dots, a square composed entirely of flicker, and a square of non-Fourier motion. When a unit was tuned with respect to motion pattern, the pattern producing the most vigorous response was nearly the same for each class. Although preferred tuning was invariant, the magnitude and width of the tuning curves often varied between classes. Thus, MSTd is form/cue invariant for complex motions, making it an appropriate candidate for analysis of object motion as well as motion introduced by observer translation.
146
Abstract
To study the contribution of vision to the perception of ego-motion, one often dissociates the retinal flow from the corresponding extra-retinal information on eye, head and body movement. This puts the observer in a conflict concerning the experienced ego-motion. When the retinal flow of a translating and rotating eye is shown to a stationary eye, observers often perceive ego-motion on a curved path. In contrast, when they receive the same retinal flow with a rotating eye, subjects correctly perceive the simulated rectilinear ego-motion. Thus, different visual representations of ego-motion gain precedence when using the conflict stimulus and when using conditions in which the visual and extra-retinal information accord. Because the flow-pattern can be decomposed in many different ways, the brain could represent the same flow-pattern as a rotation about an axis through the eye plus rectilinear ego-motion, or as a rotation about an axis outside the eye (corresponding to circular ego-motion) plus motion towards the axis of rotation. The circular motion path percept minimizes the conflict with extra-retinal eye movement information if the axis of rotation is placed at the fixation point. However, in simulated eye rotation displays subjects also perceive illusory motion in depth of the stationary fixation point. This illusory motion is argued to reflect the ego-centric decomposition. Errors are small when subjects judge their heading on the basis of this illusory motion. For the same display much larger errors are made, however, when subjects judge heading from the entire motion pattern, which often results in perceived ego-motion on a curved path. This indicates that subjects can choose between two different representations of ego-motion resulting in different perceived headings.
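The decomposition ambiguity at the heart of this abstract can be made concrete with the standard first-order motion-field equations (the textbook Longuet-Higgins and Prazdny formulation, used here as an illustration, not the author's own model). At a single image point and depth, the same retinal flow vector can be produced by different combinations of translation and rotation:

```python
import numpy as np

def motion_field(x, y, Z, T, Omega):
    """Instantaneous retinal flow at image point (x, y) for a pinhole
    eye with focal length 1, split into a depth-dependent translational
    part and a depth-independent rotational part."""
    Tx, Ty, Tz = T
    wx, wy, wz = Omega
    # Translational part: scales with inverse depth 1/Z.
    u_t = (-Tx + x * Tz) / Z
    v_t = (-Ty + y * Tz) / Z
    # Rotational part: independent of depth.
    u_r = x * y * wx - (1 + x ** 2) * wy + y * wz
    v_r = (1 + y ** 2) * wx - x * y * wy - x * wz
    return np.array([u_t + u_r, v_t + v_r])

# Forward translation plus a small eye rotation...
f1 = motion_field(0.1, 0.0, 2.0, T=(0.0, 0.0, 1.0),
                  Omega=(0.0, 0.02, 0.0))
# ...produces the same flow vector at this point and depth as an
# oblique translation with no rotation at all (values chosen to match):
f2 = motion_field(0.1, 0.0, 2.0, T=(0.0404, 0.0, 1.0),
                  Omega=(0.0, 0.0, 0.0))
```

Because only the translational part varies with depth, depth structure in the scene (or extra-retinal signals) is needed to settle which decomposition the flow reflects, which is exactly the choice between representations the experiments probe.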
Affiliation(s)
- A V van den Berg
- Helmholtz School for Autonomous Systems Research, Medical Faculty, Erasmus University Rotterdam, The Netherlands.
147
Royden CS, Hildreth EC. Human heading judgments in the presence of moving objects. PERCEPTION & PSYCHOPHYSICS 1996; 58:836-56. [PMID: 8768180 DOI: 10.3758/bf03205487] [Citation(s) in RCA: 71] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/02/2023]
Abstract
When moving toward a stationary scene, people judge their heading quite well from visual information alone. Much experimental and modeling work has been presented to analyze how people judge their heading for stationary scenes. However, in everyday life, we often move through scenes that contain moving objects. Most models have difficulty computing heading when moving objects are in the scene, and few studies have examined how well humans perform in the presence of moving objects. In this study, we tested how well people judge their heading in the presence of moving objects. We found that people perform remarkably well under a variety of conditions. The only condition that affects an observer's ability to judge heading accurately consists of a large moving object crossing the observer's path. In this case, the presence of the object causes a small bias in the heading judgments. For objects moving horizontally with respect to the observer, this bias is in the object's direction of motion. These results present a challenge for computational models.
Affiliation(s)
- C S Royden
- Department of Computer Science, Wellesley College, MA 02181, USA.
148
Abstract
The ability to judge heading during tracking eye movements has recently been examined by several investigators. To assess the use of retinal-image and extra-retinal information in this task, the previous work has compared heading judgments with executed as opposed to simulated eye movements. For eye movement velocities greater than 1 deg/sec, observers seem to require the eye-velocity information provided by extra-retinal signals that accompany tracking eye movements. When those signals are not provided, such as with simulated eye movements, observers perceive their self-motion as curvilinear translation rather than the linear translation plus eye rotation being presented. The interpretation of the previous results is complicated, however, by the fact that the simulated eye movement condition may have created a conflict between two possible estimates of the heading: one based on extra-retinal solutions and the other based on retinal-image solutions. In four experiments, we minimized this potential conflict by having observers judge heading in the presence of rotations consisting of mixtures of executed and simulated eye movements. The results showed that the heading is estimated more accurately when rotational flow is created by executed eye movements alone. In addition, the magnitude of errors in heading estimates is essentially proportional to the amount of rotational flow created by a simulated eye rotation (independent of the total magnitude of the rotational flow). The fact that error magnitude is proportional to the amount of simulated rotation suggests that the visual system attributes rotational flow unaccompanied by an eye movement to a displacement of the direction of translation in the direction of the simulated eye rotation.
Affiliation(s)
- M S Banks
- Department of Psychology, University of California, Berkeley 94720, USA
149
Sherk H, Kim JN, Mulligan K. Are the preferred directions of neurons in cat extrastriate cortex related to optic flow? Vis Neurosci 1995; 12:887-94. [PMID: 8924412 DOI: 10.1017/s0952523800009445] [Citation(s) in RCA: 15] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/03/2023]
Abstract
It has been proposed that one area of extrastriate cortex in the cat, the lateral suprasylvian area (LS), plays an important role in visual analysis during locomotion (Rauschecker et al., 1987). Cells in LS reportedly tend to prefer directions along a trajectory originating at the center of gaze, and passing outward through the receptive-field center. Such directions coincide with the directions of image motion in an optic flow field, the pattern seen by locomoting observers when they fixate the point towards which they are heading (Gibson, 1950). We re-examined this issue for cells in LS with receptive fields in the lower visual field. Cells recorded posterior to Horsley-Clarke A2 showed a clear correlation between preferred direction and receptive-field location, but not that predicted: preferred directions were generally orthogonal to "optic flow" directions. Since these cells were all located posterior to those in studies showing a bias for "optic flow" directions, we hypothesized that there are two cell populations within LS, an anterior population that tends to prefer radial-outward directions, and a posterior population that tends to prefer directions orthogonal to radial. Data from earlier mapping experiments (Sherk & Mulligan, 1993) supported this idea.
Affiliation(s)
- H Sherk
- Department of Biological Structure, University of Washington, Seattle 98195-7420, USA
150
Kaiser MK, Hecht H. Time-to-passage judgments in nonconstant optical flow fields. PERCEPTION & PSYCHOPHYSICS 1995; 57:817-25. [PMID: 7651806 DOI: 10.3758/bf03206797] [Citation(s) in RCA: 34] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/26/2023]
Abstract
The time until an approaching object will pass an observer (time to passage, or TTP) is optically specified by a global flow field even in the absence of local expansion or size cues. Kaiser and Mowafy (1993) have demonstrated that observers are in fact sensitive to this global flow information. The present studies investigate two factors that are usually ignored in work related to TTP: (1) non-constant motion functions and (2) concomitant eye rotation. Non-constant velocities violate an assumption of some TTP derivations, and eye rotations may complicate heading extraction. Such factors have practical significance, for example, in the case of a pilot accelerating an aircraft or executing a roll. In our studies, a flow field of constant-sized stars was presented monocularly on a large screen. TTP judgments had to be made on the basis of one target star. The flow field varied in its acceleration pattern and its roll component. Observers did not appear to utilize acceleration information. In particular, TTPs with decelerating motion were consistently underestimated. TTP judgments were fairly robust with respect to roll, even when roll axis and track vector were decoupled. However, substantial decoupling between heading and track vector led to a decrement in performance, in both the presence and the absence of roll.
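The underestimation under deceleration has a simple kinematic reading: a first-order (tau-style) estimate assumes the current approach speed persists, while the true passage time under constant acceleration follows from the quadratic equation of motion. A minimal sketch (illustrative numbers, not the experimental stimuli):

```python
import math

def first_order_ttp(distance, speed):
    """tau-style TTP estimate: assumes the approach speed stays constant."""
    return distance / speed

def actual_ttp(distance, speed, accel):
    """Exact passage time under constant acceleration: the positive root
    of distance = speed * t + 0.5 * accel * t**2."""
    if accel == 0:
        return distance / speed
    disc = speed ** 2 + 2 * accel * distance
    return (math.sqrt(disc) - speed) / accel

tau = first_order_ttp(10.0, 5.0)          # constant-velocity estimate
true_ttp = actual_ttp(10.0, 5.0, -1.0)    # decelerating approach
```

For a decelerating approach (accel < 0) the constant-velocity estimate comes out smaller than the true passage time, so an observer who ignores acceleration, as the subjects here appeared to, will consistently underestimate TTP.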
Affiliation(s)
- M K Kaiser
- NASA Ames Research Center, Moffett Field, CA 94035-1000, USA