1
Sun Q, Zhan LZ, You FH, Dong XF. Attention affects the perception of self-motion direction from optic flow. iScience 2024; 27:109373. [PMID: 38500831] [PMCID: PMC10946324] [DOI: 10.1016/j.isci.2024.109373]
Abstract
Many studies have demonstrated that attention affects the perception of many visual features. However, previous studies show conflicting results regarding the effect of attention on the perception of self-motion direction (i.e., heading) from optic flow. To address this question, we conducted three behavioral experiments and found that estimation accuracy for large headings (>14°) decreased with attentional load, that discrimination thresholds for these headings increased with attentional load, and that heading estimates were systematically compressed toward the focus of attention. The current study therefore demonstrates that attention affects heading perception from optic flow, showing that this perception is both information-driven and cognitively modulated.
Affiliation(s)
- Qi Sun
- School of Psychology, Zhejiang Normal University, Jinhua, P.R. China
- Zhejiang Philosophy and Social Science Laboratory for the Mental Health and Crisis Intervention of Children and Adolescents, Zhejiang Normal University, Jinhua, P.R. China
- Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, P.R. China
- Lin-Zhe Zhan
- School of Psychology, Zhejiang Normal University, Jinhua, P.R. China
- Fan-Huan You
- School of Psychology, Zhejiang Normal University, Jinhua, P.R. China
- Xiao-Fei Dong
- School of Psychology, Zhejiang Normal University, Jinhua, P.R. China
2
A model of how depth facilitates scene-relative object motion perception. PLoS Comput Biol 2019; 15:e1007397. [PMID: 31725723] [PMCID: PMC6879150] [DOI: 10.1371/journal.pcbi.1007397]
Abstract
Many everyday interactions with moving objects benefit from an accurate perception of their movement. Self-motion, however, complicates object motion perception because it generates a global pattern of motion on the observer’s retina and radically influences an object’s retinal motion. There is strong evidence that the brain compensates by suppressing the retinal motion due to self-motion; however, this requires estimates of depth relative to the object—otherwise the appropriate self-motion component to remove cannot be determined. The underlying neural mechanisms are unknown, but neurons in brain areas MT and MST may contribute given their sensitivity to motion parallax and depth through joint direction, speed, and disparity tuning. We developed a neural model to investigate whether cells in areas MT and MST with well-established neurophysiological properties can account for human object motion judgments during self-motion. We tested the model by comparing simulated object motion signals to human object motion judgments in environments with monocular, binocular, and ambiguous depth. Our simulations show how precise depth information, such as that from binocular disparity, may improve estimates of the retinal motion pattern due to the self-motion through increased selectivity among units that respond to the global self-motion pattern. The enhanced self-motion estimates emerged from recurrent feedback connections in MST and allowed the model to better suppress the appropriate direction, speed, and disparity signals from the object’s retinal motion, improving the accuracy of the object’s movement direction represented by motion signals. Research has shown that the accuracy with which humans perceive object motion during self-motion improves in the presence of stereo cues. Using a neural modelling approach, we explore whether this finding can be explained through improved estimation of the retinal motion induced by self-motion.
Our results show that depth cues that provide information about scene structure may have a large effect on the specificity with which the neural mechanisms for motion perception represent the visual self-motion signal. This in turn enables effective removal of the retinal motion due to self-motion when the goal is to perceive object motion relative to the stationary world. These results reveal a hitherto unknown critical function of stereo tuning in the MT-MST complex, and shed important light on how the brain may recruit signals from upstream and downstream brain areas to simultaneously perceive self-motion and object motion.
3
Yu CP, Page WK, Gaborski R, Duffy CJ. Receptive field dynamics underlying MST neuronal optic flow selectivity. J Neurophysiol 2010; 103:2794-807. [PMID: 20457855] [DOI: 10.1152/jn.01085.2009]
Abstract
Optic flow informs moving observers about their heading direction. Neurons in monkey medial superior temporal (MST) cortex show heading selective responses to optic flow and planar direction selective responses to patches of local motion. We recorded MST neuronal responses to a 90 x 90 degrees optic flow display and to a 3 x 3 array of local motion patches covering the same area. Our goal was to test the hypothesis that the optic flow responses reflect the sum of the local motion responses. The local motion responses of each neuron were modeled as mixtures of Gaussians, combining the effects of two Gaussian response functions derived using a genetic algorithm, and then used to predict that neuron's optic flow responses. Some neurons showed good correspondence between local motion models and optic flow responses; others showed substantial differences. We used the genetic algorithm to modulate the relative strength of each local motion segment's responses to accommodate interactions between segments that might modulate their relative efficacy during co-activation by global patterns of optic flow. These gain modulated models showed uniformly better fits to the optic flow responses, suggesting that coactivation of receptive field segments alters neuronal response properties. We tested this hypothesis by simultaneously presenting local motion stimuli at two different sites. These two-segment stimuli revealed that interactions between response segments have direction and location specific effects that can account for aspects of optic flow selectivity. We conclude that MST's optic flow selectivity reflects dynamic interactions between spatially distributed local planar motion response mechanisms.
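The modeling pipeline this abstract describes — local direction tuning fit as a mixture of Gaussians, then a gain-weighted sum across receptive-field segments to predict the optic-flow response — can be sketched as follows. All tuning parameters and the 3 x 3 direction layout are illustrative, not the fitted values from the paper:

```python
import numpy as np

def circ_gauss(direction, pref, width, gain):
    # circular Gaussian tuning to local motion direction (degrees),
    # with the angular difference wrapped into [-180, 180]
    d = np.degrees(np.angle(np.exp(1j * np.radians(direction - pref))))
    return gain * np.exp(-0.5 * (d / width) ** 2)

def segment_response(direction):
    # one receptive-field segment: mixture of two Gaussian response functions
    return circ_gauss(direction, 30.0, 40.0, 1.0) + circ_gauss(direction, 210.0, 60.0, 0.4)

# local directions of an expanding flow pattern sampled by a 3 x 3 patch array
patch_dirs = np.array([[135.0,  90.0,  45.0],
                       [180.0,   0.0,   0.0],
                       [225.0, 270.0, 315.0]])
gains = np.ones((3, 3))  # per-segment gain modulation (fit by the GA in the paper)
predicted_flow_response = float(np.sum(gains * segment_response(patch_dirs)))
```

In this simplified form, the "gain modulated model" of the abstract corresponds to fitting `gains` rather than holding it at one.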
Affiliation(s)
- Chen Ping Yu
- Department of Computer Sciences, Rochester Institute of Technology Rochester, Rochester, New York, USA
4
Browning NA, Grossberg S, Mingolla E. A neural model of how the brain computes heading from optic flow in realistic scenes. Cogn Psychol 2009; 59:320-56. [PMID: 19716125] [DOI: 10.1016/j.cogpsych.2009.07.002]
Abstract
Visually-based navigation is a key competence during spatial cognition. Animals avoid obstacles and approach goals in novel cluttered environments using optic flow to compute heading with respect to the environment. Most navigation models try either to explain data or to demonstrate navigational competence in real-world environments without regard to behavioral and neural substrates. The current article develops a model that does both. The ViSTARS neural model describes interactions among neurons in the primate magnocellular pathway, including V1, MT(+), and MSTd. Model outputs are quantitatively similar to human heading data in response to complex natural scenes. The model estimates heading to within 1.5 degrees in random-dot or photo-realistically rendered scenes, and to within 3 degrees in video streams from driving in real-world environments. Simulated rotations of less than 1 degree/s do not affect heading estimates, but faster simulated rotation rates do, as in humans. The model is part of a larger navigational system that identifies and tracks objects while navigating in cluttered environments.
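As a minimal illustration of the estimation problem ViSTARS solves (not the model itself), heading under pure forward translation can be recovered from noisy radial flow by linear regression, since each velocity component is linear in image position. All numbers below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
true_foe = np.array([1.5, 0.0])                  # heading direction, deg from center
pts = rng.uniform(-20.0, 20.0, size=(200, 2))    # dot positions (deg)
# pure-translation radial flow v = k * (p - foe), plus measurement noise
vel = 0.3 * (pts - true_foe) + rng.normal(0.0, 0.1, size=(200, 2))

# per-axis regression: v_x = k * p_x + c_x, so foe_x = -c_x / k
foe_est = []
for axis in (0, 1):
    k, c = np.polyfit(pts[:, axis], vel[:, axis], 1)
    foe_est.append(-c / k)
```

With 200 dots and modest noise, the recovered focus of expansion lands well inside the ~1.5-degree accuracy the abstract reports for random-dot scenes.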
Affiliation(s)
- N Andrew Browning
- Department of Cognitive and Neural Systems, Center for Adaptive Systems, Boston University, 677 Beacon Street, Boston, MA 02215, USA
5
Evidence for flow-parsing in radial flow displays. Vision Res 2008; 48:655-63. [PMID: 18243274] [DOI: 10.1016/j.visres.2007.10.023]
Abstract
Retinal motion of objects is not in itself enough to signal whether or how objects are moving in the world; the same pattern of retinal motion can result from movement of the object, the observer or both. Estimation of scene-relative movement of an object is vital for successful completion of many simple everyday tasks. Recent research has provided evidence for a neural flow-parsing mechanism which uses the brain's sensitivity to optic flow to separate retinal motion signals into those components due to observer movement and those due to the movement of objects in the scene. In this study we provide further evidence that flow-parsing is implicated in the assessment of object trajectory during observer movement. Furthermore, it is shown that flow-parsing involves a global analysis of retinal motion, as might be expected if optic flow processing underpinned this mechanism.
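At its core, the flow-parsing idea is a subtraction: estimate the global self-motion component of retinal motion and remove it at the object's location. A toy version with a purely radial self-motion field (all values illustrative):

```python
import numpy as np

def self_motion_flow(p, foe, k=0.3):
    # radial flow that forward observer translation produces at retinal position p
    return k * (np.asarray(p, float) - foe)

foe = np.array([0.0, 0.0])
obj_pos = np.array([5.0, 2.0])
obj_scene_motion = np.array([0.0, 1.0])   # the object really moves upward

# what lands on the retina: object motion plus the self-motion component
retinal = obj_scene_motion + self_motion_flow(obj_pos, foe)

# flow parsing: subtract the estimated global component to recover scene motion
recovered = retinal - self_motion_flow(obj_pos, foe)
```

The abstract's point about global analysis corresponds to the fact that `foe` and `k` must be estimated from flow across the whole field, not from motion near the object alone.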
6
Bex PJ, Falkenberg HK. Resolution of complex motion detectors in the central and peripheral visual field. J Opt Soc Am A Opt Image Sci Vis 2006; 23:1598-607. [PMID: 16783422] [DOI: 10.1364/josaa.23.001598]
Abstract
We examine how local direction signals are combined to compute the focus of radial motion (FRM) in random dot patterns and examine how this process changes across the visual field. Equivalent noise analysis showed that a loss in FRM accuracy was largely attributable to an increase in local motion detector noise with little or no change in efficiency across the visual field. The minimum separation for discriminating the foci of two overlapping optic flow patterns increased in the periphery faster than predicted from the resolution for a single FRM. This behavior requires that observers average numerous local velocities to estimate the FRM, which enables resistance to internal and external noise and endows the system with the property of position invariance. However, such pooling limits the precision with which multiple looming objects can be discriminated, especially in the peripheral visual field.
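Equivalent-noise analysis, as used in this abstract, models the squared estimation threshold as internal plus external noise variance divided by the number of samples effectively pooled; the reported result then corresponds to higher internal noise, but similar efficiency, in the periphery. The numbers below are illustrative:

```python
import numpy as np

def en_threshold(sigma_ext, sigma_int, n_eff):
    # equivalent-noise model: threshold^2 = (sigma_int^2 + sigma_ext^2) / n_eff
    return np.sqrt((sigma_int ** 2 + sigma_ext ** 2) / n_eff)

sigma_ext = 2.0                                                # stimulus noise
fovea = en_threshold(sigma_ext, sigma_int=1.0, n_eff=20)
periphery = en_threshold(sigma_ext, sigma_int=4.0, n_eff=20)   # more internal noise
```

Holding `n_eff` fixed while raising `sigma_int` reproduces the qualitative pattern: FRM thresholds rise in the periphery without a loss of pooling efficiency.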
Affiliation(s)
- Peter J Bex
- Institute of Ophthalmology, University College London, London EC1V 9EL, UK.
7
Duijnhouwer J, Beintema JA, van den Berg AV, van Wezel RJA. An illusory transformation of optic flow fields without local motion interactions. Vision Res 2006; 46:439-43. [PMID: 16009393] [DOI: 10.1016/j.visres.2005.05.005]
Abstract
The focus of expansion (FOE) of a radially expanding optic flow pattern that is overlapped by unidirectional laminar flow is perceptually displaced in the direction of that laminar flow. There is continuing debate on whether this effect is due to local or global motion interactions. Here, we show psychophysically that under conditions without local motion transparency the illusion becomes weaker but can still be observed. In our experiments, the radial and laminar-flow fields were not presented with overlap but separately to the left and right halves of the visual field with a blank vertical strip of 15 degrees horizontal width in between. The illusory shift observed in this condition cannot be explained by local motion interactions because (a) no transparent motion was present in the stimulus, and (b) the receptive fields of cortical cells involved in the analysis of local motion cross the vertical midline of the visual field only to a limited extent. We conclude that global motion detectors that integrate motion from both halves of the visual field play a role in shifting the perceived position of the FOE and that local motion interactions may be sufficient, but are not necessary for the optic flow illusion to occur.
Affiliation(s)
- Jacob Duijnhouwer
- Functional Neurobiology, Helmholtz Research Institute, Utrecht, The Netherlands.
8
Bex PJ, Dakin SC. Spatial interference among moving targets. Vision Res 2005; 45:1385-98. [PMID: 15743609] [DOI: 10.1016/j.visres.2004.12.001]
Abstract
Peripheral vision for static form is limited both by reduced spatial acuity and by interference among adjacent features ('crowding'). However, the visibility of acuity-corrected image motion is relatively constant across the visual field. We measured whether spatial interference among nearby moving elements is similarly invariant of retinal eccentricity and assessed if motion integration could account for any observed sensitivity loss. We report that sensitivity to the direction of motion of a central target-highly visible in isolation-was strongly impaired by four drifting flanking elements. The extent of spatial interference increased with eccentricity. Random-direction flanks and flanks whose directions formed global patterns of rotation or expansion were more disruptive than flanks forming global patterns of translation, regardless of the relative direction of the target element. Spatial interference was low-pass tuned for spatial frequency and broadly tuned for temporal frequency. We show that these results challenge the generality of models of spatial interference that are based on retinal image quality, masking, confusions between target and flanks, attentional resolution limits or (simple) "averaging" of element parameters. Instead, the results suggest that spatial interference is a consequence of the integration of meaningful image structure within large receptive fields. The underlying connectivity of this integration favours low spatial frequency structure but is broadly tuned for speed.
Affiliation(s)
- Peter J Bex
- Institute of Ophthalmology, University College London, 11-43 Bath Street, London EC1V 9EL, UK.
9
Hanada M. Computational analyses for illusory transformations in the optic flow field and heading perception in the presence of moving objects. Vision Res 2005; 45:749-58. [PMID: 15639501] [DOI: 10.1016/j.visres.2004.09.037]
Abstract
When we see a stimulus of a radial flow field (the target flow) overlapped with a lateral flow field or another radial flow field, the focus of expansion (FOE) of the target radial flow appears to be shifted. Royden and Conti [(2003). A model using MT-like motion-opponent operators explains an illusory transformation in the optic flow field. Vision Research, 43, 2811-2826] argued that local motion subtraction is crucial for explaining this phenomenon. The flow field that causes the illusory displacement of the FOE was computationally analyzed. It was shown that the flow field is approximately a rigid-motion flow; the flow can be generated by simulating a situation where an observer moves toward a stationary scene. The heading direction for the observer corresponds to the perceived position of the FOE of the radial flow pattern. This implies that any algorithm that assumes rigidity of the scene and recovers veridical heading can explain the bias in the perceived FOE; there is no need for local motion subtraction to explain the phenomenon. Furthermore, the flow for an observer's translation in the presence of objects moving laterally or in depth was computationally analyzed. It was found that algorithms that minimize standard error functions, with lower weights on the independently moving objects, show biases in recovered heading similar to those of human observers. This implies that local motion subtraction is not necessary to explain the bias in perceived heading due to an object moving laterally or in depth, contrary to the argument of Royden [(2002). Computing heading in the presence of moving objects: a model that uses motion-opponent operators. Vision Research, 42, 3043-3058].
Affiliation(s)
- Mitsuhiko Hanada
- Department of Media Architecture, Future University-Hakodate, 116-2 Kamedanakano-cho, Hakodate, Hokkaido 041-8655, Japan.
10
Nakamura S. Effects of spatial arrangement of visual stimulus on inverted self-motion perception induced by the foreground motion: examination of OKN-suppression hypothesis. Vision Res 2004; 44:1951-60. [PMID: 15145688] [DOI: 10.1016/j.visres.2004.03.004]
Abstract
Our previous study revealed that a slowly moving foreground, which is presented in front of a fast-moving orthogonal background, can induce self-motion perception in the same direction as its motion (inverted vection; Vis. Res. 40 (2000) 2915). The present study shows that inverted vection becomes stronger under conditions in which the foreground stimulus is presented in the central area of the observer's visual field and the observer's eyes converge on the same depth plane. These stimulus conditions are the ones under which the foreground can induce the observer's optokinetic nystagmus more effectively; the results of this study therefore support our hypothesis that mis-registered eye-movement information, caused by the foreground-induced suppression of optokinetic nystagmus, is a critical factor in perceiving inverted vection.
Affiliation(s)
- Shinji Nakamura
- Faculty of Social and Information Sciences, Nihon Fukushi University, 26-2 Higashihaemicho Handa, Aichi 475-0012, Japan.
11
Royden CS, Conti DM. A model using MT-like motion-opponent operators explains an illusory transformation in the optic flow field. Vision Res 2003; 43:2811-26. [PMID: 14568097] [DOI: 10.1016/s0042-6989(03)00481-4]
Abstract
Previous studies have shown that a physiologically based model using motion-opponent operators to compute heading performs accurately for simulated observer translations. Here we show how this model can explain an illusory shift in the perceived focus of expansion of a radial flow field that occurs when a field of laterally moving dots is superimposed on a field of radially moving dots. Furthermore, we can use the model to predict the perceptual shift of the focus of expansion for novel visual stimuli. These results support the hypothesis that this illusion results from motion subtraction during the processing of optic flow fields.
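The core of a motion-opponent operator is a subtraction of motion signals, projected onto a preferred direction, between adjacent regions; a shared flow component cancels in the subtraction. This is a sketch under those assumptions, not Royden and Conti's exact operator:

```python
import numpy as np

def opponent_response(center_vel, surround_vel, pref):
    # MT-like operator: excitatory center minus inhibitory surround, both
    # projected onto the operator's preferred direction; half-wave rectified
    pref = np.asarray(pref, float) / np.linalg.norm(pref)
    return max(0.0, float((np.asarray(center_vel) - np.asarray(surround_vel)) @ pref))

# a laminar component shared by center and surround (as in the superimposed
# dot field) cancels, leaving only the local radial difference
radial = np.array([0.8, 0.0])
laminar = np.array([0.5, 0.0])
r = opponent_response(radial + laminar, laminar, pref=[1.0, 0.0])
```

Because the operator responds to motion differences rather than absolute motion, a field of such units re-weights the combined flow pattern, which is the mechanism the model uses to predict the perceived FOE shift.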
Affiliation(s)
- Constance S Royden
- Department of Mathematics and Computer Science, College of the Holy Cross, P.O. Box 116A, Worcester, MA 01610, USA.
12
Ito H, Fujimoto C. Compound self-motion perception induced by two kinds of optical motion. Percept Psychophys 2003; 65:874-87. [PMID: 14528897] [DOI: 10.3758/bf03194821]
Abstract
Two kinds of flow patterns consisting of random dots were presented simultaneously to subjects to investigate whether or not two kinds of vection occur simultaneously. One pattern induces vertical linear self-translation, whereas the other induces self-rotation around a vertical axis (when either pattern is presented alone). Three sets of conditions were tested. The first condition was one in which random dots moved in a summed direction of both flow vectors. In the second condition, both flow patterns were simply overlaid, whereas in the third condition, the two kinds of flow patterns were overlaid with a depth separation produced by binocular disparity. The subjects perceived both kinds of vection simultaneously in directions opposite to those of the corresponding flow components under the first condition, whereas under the second condition mainly one or the other kind of vection occurred. Under the third condition, both of the flows induced each kind of vection simultaneously, despite there being no physical vector summation of dot motion. The background flow induced vection in a direction opposite to the flow direction, whereas the foreground flow induced vection in the same direction as the flow direction. These results show that induced self-translation and induced self-rotation can occur simultaneously in two ways.
Affiliation(s)
- Hiroyuki Ito
- Department of Visual Communication Design, Kyushu Institute of Design, Fukuoka, Japan.
13
Royden CS. Computing heading in the presence of moving objects: a model that uses motion-opponent operators. Vision Res 2002; 42:3043-58. [PMID: 12480074] [DOI: 10.1016/s0042-6989(02)00394-2]
Abstract
Psychophysical experiments have shown that human heading judgments can be biased by the presence of moving objects. Here we present a theoretical argument that motion differences can account for the direction of bias seen in humans. We further examine the responses of a computer simulation of a model for computing heading that uses motion-opponent operators similar to cells in the primate middle temporal visual area. When moving objects are present, this model shows similar biases to those seen with humans, suggesting that such a model may underlie human heading computations.
Affiliation(s)
- Constance S Royden
- Department of Mathematics and Computer Science, College of the Holy Cross, P.O. Box 116A, Worcester, MA 01610, USA
14
Bex PJ, Dakin SC. Comparison of the spatial-frequency selectivity of local and global motion detectors. J Opt Soc Am A Opt Image Sci Vis 2002; 19:670-7. [PMID: 11934159] [DOI: 10.1364/josaa.19.000670]
Abstract
Convergent physiological and behavioral evidence indicates that the initial receptive fields responsible for motion detection are spatially localized. Consequently, the perception of global patterns of movement (such as expansion) requires that the output of these local mechanisms be integrated across visual space. We have differentiated local and global motion processes, with mixtures of coherent and incoherent moving patterns composed of bandpass filtered dots, and have measured their spatial-frequency selectivity. We report that local motion detectors show narrow-band spatial-frequency tuning (i.e., they respond only to a narrow range of spatial frequencies) but that global motion detectors show broadband spatial-frequency tuning (i.e., they integrate across a broad range of spatial frequencies), with a preference for low spatial frequencies.
15
Beardsley SA, Vaina LM. A laterally interconnected neural architecture in MST accounts for psychophysical discrimination of complex motion patterns. J Comput Neurosci 2001; 10:255-80. [PMID: 11443285] [DOI: 10.1023/a:1011264014799]
Abstract
The complex patterns of visual motion formed across the retina during self-motion, often referred to as optic flow, provide a rich source of information describing our dynamic relationship within the environment. Psychophysical studies indicate the existence of specialized detectors for component motion patterns (radial, circular, planar) that are consistent with the visual motion properties of cells in the medial superior temporal area (MST) of nonhuman primates. Here we use computational modeling and psychophysics to investigate the structural and functional role of these specialized detectors in performing a graded motion pattern (GMP) discrimination task. In the psychophysical task perceptual discrimination varied significantly with the type of motion pattern presented, suggesting perceptual correlates to the preferred motion bias reported in MST. Simulated perceptual discrimination in a population of independent MST-like neural responses showed inconsistent psychophysical performance that varied as a function of the visual motion properties within the population code. Robust psychophysical performance was achieved by fully interconnecting neural populations such that they inhibited nonpreferred units. Taken together, these results suggest that robust processing of the complex motion patterns associated with self-motion and optic flow may be mediated by an inhibitory structure of neural interactions in MST.
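The key architectural claim — fully interconnected pattern detectors that inhibit non-preferred units — can be sketched with a uniform cross-inhibition matrix. The weights and feedforward responses below are illustrative, not the paper's fitted values:

```python
import numpy as np

# feedforward responses of four MST-like pattern units to a radial stimulus
labels = ["radial", "circular", "planar-left", "planar-right"]
feedforward = np.array([1.0, 0.4, 0.3, 0.3])

# full lateral interconnection: every unit inhibits every other unit
W = 0.3 * (np.ones((4, 4)) - np.eye(4))
response = np.maximum(0.0, feedforward - W @ feedforward)
by_unit = dict(zip(labels, response))
```

Cross-inhibition suppresses the weakly driven non-preferred units while sparing the preferred one, which is how the interconnected population achieves the more robust discrimination reported in the abstract.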
Affiliation(s)
- S A Beardsley
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA 02215, USA
16
Lappe M. Computational mechanisms for optic flow analysis in primate cortex. Int Rev Neurobiol 2000; 44:235-68. [PMID: 10605649] [DOI: 10.1016/s0074-7742(08)60745-x]
Affiliation(s)
- M Lappe
- Department of Zoology and Neurobiology, Ruhr University Bochum, Germany
17
Lappe M, Grigo A. How stereovision interacts with optic flow perception: neural mechanisms. Neural Netw 1999; 12:1325-9. [PMID: 12662636] [DOI: 10.1016/s0893-6080(99)00061-1]
Abstract
Optic flow, the global visual motion experienced during self-movement, supplies important navigational information. Optic flow analysis in the visual system is aided by several other visual and non-visual signals. Recent psychophysical findings demonstrate an interaction of optic flow perception and stereoscopic depth vision. Retinal disparity strongly affects an optic flow illusion, which can be related to the mechanisms of visual self-motion detection. To investigate the neuronal basis of this interaction, we tested several hypotheses by introducing different disparity contributions in a detailed neurobiological model of optic flow processing in monkey cortex. The disparity-dependent modification, which accounted best for the data suggests a specific contribution of a subset of stereoscopically modulated cortical neurons present in areas MT and MST.
Affiliation(s)
- M Lappe
- Department of Zoology and Neurobiology, Ruhr University Bochum, D-44780, Bochum, Germany
18
Abstract
Accurate and efficient control of self-motion is an important requirement for our daily behavior. Visual feedback about self-motion is provided by optic flow. Optic flow can be used to estimate the direction of self-motion ('heading') rapidly and efficiently. Analysis of oculomotor behavior reveals that eye movements usually accompany self-motion. Such eye movements introduce additional retinal image motion so that the flow pattern on the retina usually consists of a combination of self-movement and eye movement components. The question of whether this 'retinal flow' alone allows the brain to estimate heading, or whether an additional 'extraretinal' eye movement signal is needed, has been controversial. This article reviews recent studies that suggest that heading can be estimated visually but extraretinal signals are used to disambiguate problematic situations. The dorsal stream of primate cortex contains motion processing areas that are selective for optic flow and self-motion. Models that link the properties of neurons in these areas to the properties of heading perception suggest possible underlying mechanisms of the visual perception of self-motion.
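The decomposition this review describes — retinal flow as a depth-dependent translational part plus a depth-independent rotational (eye-movement) part — follows the standard instantaneous flow equations (Longuet-Higgins and Prazdny form, focal length 1):

```python
import numpy as np

def retinal_flow(x, y, Z, T, omega):
    # image velocity at (x, y) for a point at depth Z, given observer
    # translation T = (Tx, Ty, Tz) and eye rotation omega = (wx, wy, wz)
    Tx, Ty, Tz = T
    wx, wy, wz = omega
    trans = np.array([(-Tx + x * Tz) / Z,          # scales with 1/Z
                      (-Ty + y * Tz) / Z])
    rot = np.array([x * y * wx - (1 + x ** 2) * wy + y * wz,   # depth-independent
                    (1 + y ** 2) * wx - x * y * wy - x * wz])
    return trans + rot

# pure forward translation: the flow is radial about the heading direction
flow = retinal_flow(0.1, 0.0, Z=2.0, T=(0.0, 0.0, 1.0), omega=(0.0, 0.0, 0.0))
```

The heading-ambiguity debate the article reviews arises because the rotational term carries no depth signature, so a visual system must separate it from the translational term either from the flow structure itself or with an extraretinal eye-movement signal.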
19
Abstract
Radial patterns of optic flow contain a centre of expansion that indicates the observer's direction of self-movement. When the radial pattern is viewed with transparently overlapping unidirectional motion, the centre of expansion appears to shift in the direction of the unidirectional motion [Duffy, C.J. & Wurtz, R.H. (1993) Vision Res., 33, 1481-1490]. Neurons in the medial superior temporal (MST) area of monkey cerebral cortex are thought to mediate optic flow analysis, but they do not shift their responses to parallel the illusion created by transparent overlap. The population-based model of optic flow analysis proposed by Lappe and Rauschecker replicates the illusory shift observed in perceptual studies [Lappe, M. & Rauschecker, J.P. (1995) Vision Res., 35, 1619-1631]. We analysed the behaviour of constituent neurons in the model, to gain insight into neuronal mechanisms underlying the illusion. Single model neurons did not show the illusory shift but rather graded variations of their response specificity. The shift required the aggregate response of the population. We compared the model's predictions about the behaviour of single neurons with the responses recorded from area MST. The predicted distribution of overlap effects agreed with that observed in area MST. The success of the population-based model in predicting the illusion and the neuronal behaviour suggests that area MST uses the graded responses of single neurons to create a population response that supports optic flow perception.
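The paper's central point — no single unit shifts its tuning, but graded gain changes across the population move the read-out — can be sketched with a centroid decoder over FOE-tuned units. The tuning width and the gain ramp below are illustrative:

```python
import numpy as np

prefs = np.linspace(-40.0, 40.0, 81)          # preferred FOE azimuths (deg)

def population(stim_foe, gain):
    # Gaussian FOE tuning, with per-unit gain modulated by the overlapping flow
    return np.exp(-0.5 * ((prefs - stim_foe) / 15.0) ** 2) * gain(prefs)

graded_gain = lambda p: 1.0 + 0.01 * p        # graded; no unit's tuning "shifts"
r = population(0.0, graded_gain)
decoded_foe = float(np.sum(prefs * r) / np.sum(r))   # centroid read-out
```

Although every unit keeps its preferred FOE, the asymmetric gain skews the population profile, so the decoded FOE moves away from the stimulus FOE, mirroring how the Lappe-Rauschecker model produces the illusion at the population level.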
Affiliation(s)
- M Lappe
- Department of Zoology, Ruhr University Bochum, D-44780 Bochum, Germany.
20
Abstract
We have proposed previously a computational neural-network model by which the complex patterns of retinal image motion generated during locomotion (optic flow) can be processed by specialized detectors acting as templates for specific instances of self-motion. The detectors in this template model respond to global optic flow by sampling image motion over a large portion of the visual field through networks of local motion sensors with properties similar to those of neurons found in the middle temporal (MT) area of primate extrastriate visual cortex. These detectors, arranged within cortical-like maps, were designed to extract self-translation (heading) and self-rotation, as well as the scene layout (relative distances) ahead of a moving observer. We then postulated that heading from optic flow is directly encoded by individual neurons acting as heading detectors within the medial superior temporal (MST) area. Others have questioned whether individual MST neurons can perform this function because some of their receptive-field properties seem inconsistent with this role. To resolve this issue, we systematically compared MST responses with those of detectors from two different configurations of the model under matched stimulus conditions. We found that the characteristic physiological properties of MST neurons can be explained by the template model. We conclude that MST neurons are well suited to support self-motion estimation via a direct encoding of heading and that the template model provides an explicit set of testable hypotheses that can guide future exploration of MST and adjacent areas within the superior temporal sulcus.
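The core template operation, pooling many local motion measurements into detectors for candidate headings, can be sketched under strong simplifications (pure translation, equal depths, matched-filter responses; none of the cortical-map or rotation machinery described above):

```python
import numpy as np

# Matched-filter sketch of heading templates (simplified; assumes pure
# translation and equal depths, so flow is radial from the heading point).
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(200, 2))   # sampled image positions

def radial_flow(heading):
    """Unit flow vectors expanding away from the heading point."""
    v = pts - heading
    return v / (np.linalg.norm(v, axis=1, keepdims=True) + 1e-9)

# One template per candidate heading, each pooling all local sensors.
candidates = [np.array([x, 0.0]) for x in np.linspace(-0.5, 0.5, 21)]
templates = [radial_flow(h) for h in candidates]

observed = radial_flow(np.array([0.2, 0.0]))      # true heading at 0.2
responses = [float(np.sum(t * observed)) for t in templates]
best = candidates[int(np.argmax(responses))]
print(best)   # the candidate nearest the true heading (0.2, 0.0)
```

The best-matching template wins because its expected flow aligns with the observed flow at every sample point, which is the sense in which such detectors "directly encode" heading.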
21
Abstract
Eye or head rotation would influence perceived heading direction if it were coded by cells tuned only to retinal flow patterns that correspond to linear self-movement. We propose a model for heading detection based on motion templates that are also Gaussian-tuned to the amount of rotational flow. Such retinal flow templates allow explicit use of extra-retinal signals to create templates tuned to head-centric flow as seen by the stationary eye. Our model predicts an intermediate layer of 'eye velocity gain fields' in which 'rate-coded' eye velocity is multiplied with the responses of templates sensitive to specific retinal flow patterns. By combining the activities of one retinal flow template and many units with an eye velocity gain field, a new type of unit appears: its preferred retinal flow changes dynamically in accordance with the eye rotation velocity. This unit's activity thereby becomes approximately invariant to the amount of eye rotation. The units with eye velocity gain fields form the motion-analogue of the units with eye position gain fields found in area 7a, which, according to our general approach, are needed to transform position from retino-centric to head-centric coordinates. The rotation-tuned templates can also provide rate-coded visual estimates of eye rotation to allow a purely visual compensation for rotational flow. Our model is consistent with psychophysical data that indicate a role for extra-retinal as well as visual rotation signals in the correct perception of heading.
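A minimal numerical sketch of the multiplicative gain-field combination (the linear FOE-shift constant k, the Gaussian tuning widths, and the one-dimensional geometry are all assumptions made for illustration):

```python
import numpy as np

# Retinal-flow templates Gaussian-tuned to the retinal FOE position.
k = 1.0                                 # assumed FOE shift (deg per deg/s)
retinal_prefs = np.linspace(-40.0, 40.0, 81)
sigma = 5.0                             # tuning width (deg), assumed

def template_responses(retinal_foe):
    return np.exp(-(retinal_prefs - retinal_foe) ** 2 / (2.0 * sigma ** 2))

def head_centric_unit(head_pref, retinal_foe, eye_vel):
    """Multiply each template by an eye-velocity gain field that selects
    the retinal FOE this head-centric heading predicts, then pool."""
    expected = head_pref + k * eye_vel  # predicted retinal FOE
    gains = np.exp(-(retinal_prefs - expected) ** 2 / (2.0 * sigma ** 2))
    return float(np.sum(gains * template_responses(retinal_foe)))

# Heading fixed at 10 deg in head-centric terms; eye rotation shifts the
# retinal FOE, but the pooled response stays approximately constant:
for eye_vel in (0.0, 5.0, -5.0):
    retinal_foe = 10.0 + k * eye_vel
    print(head_centric_unit(10.0, retinal_foe, eye_vel))
```

The pooled unit responds to head-centric heading while its effective retinal tuning slides with eye velocity, which is the approximate rotation invariance described above.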
Affiliation(s)
- J A Beintema
- Helmholtz School for Autonomous Systems Research, Department of Physiology, Erasmus University Rotterdam, The Netherlands
22
Lappe M. A model of the combination of optic flow and extraretinal eye movement signals in primate extrastriate visual cortex. Neural model of self-motion from optic flow and extraretinal cues. Neural Netw 1998; 11:397-414. [PMID: 12662818 DOI: 10.1016/s0893-6080(98)00013-6] [Citation(s) in RCA: 31] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
The determination of the direction of heading from optic flow is a complicated task. To solve it, the visual system complements the optic flow by non-visual information about the occurrence of eye movements. Psychophysical studies have shown that the need for this combination depends on the structure of the visual scene. In a depth-rich visual environment motion parallax can be exploited to differentiate self-translation from eye rotation. In the absence of motion parallax, i.e. in the case of movement towards a frontoparallel plane, extraretinal signals are necessary for correct heading perception (Warren and Hannon, 1990). Lappe and Rauschecker (1993b) have proposed a model of visual heading detection that reproduces many of the psychophysical findings in the absence of extraretinal input and links them to properties of single neurons in the primate visual cortex. The present work proposes a neural network model that integrates extraretinal signals into this network. The model is compared with psychophysical and neurophysiological data from experiments in human and non-human primates. The combined visual/extraretinal model reproduces human behavior in the case of movement towards a frontoparallel plane. Single model neurons exhibit several similarities to neurons from the medial superior temporal (MST) area of the macaque monkey. Similar to MST cells (Erickson and Thier, 1991), they differentiate between self-induced visual motion that results from eye movements in a stationary environment, and real motion in the environment. The model predicts that this differentiation can also be achieved visually, i.e. without extraretinal input. Other simulations followed experiments by Bradley et al. (1996), in which flow fields were presented that simulated observer translation towards a frontoparallel plane plus an eye rotation. Similar to MST cells, model neurons shift their preference for the focus of expansion along the direction of the eye movement when extraretinal input is not available. They respond to the retinal location of the focus of expansion, which is shifted by the eye movement. In the presence of extraretinal input the preference for the focus of expansion is largely invariant to eye movements and tied to the location of the focus of expansion with regard to the visual scene. The model proposes that extraretinal compensation for eye movements need not be perfect in single neurons to achieve accurate heading detection. It thereby shows that the incomplete compensation found in most MST neurons is sufficient to explain the psychophysical data.
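The retinal shift of the focus of expansion that these simulations probe follows from simple geometry: translating at speed T toward a frontoparallel plane at distance Z, the translational image speed at small eccentricity theta is roughly (T/Z)*theta, so an eye rotation omega moves the flow singularity to where the two components cancel. The numbers below are arbitrary illustrative values.

```python
import math

T = 2.0                     # translation speed toward the plane (m/s)
Z = 4.0                     # distance to the frontoparallel plane (m)
omega = math.radians(2.0)   # eye rotation velocity (rad/s)

# Singularity of the combined flow: (T / Z) * theta = omega
theta = omega * Z / T       # retinal FOE shift (rad), small-angle
print(math.degrees(theta))  # ~4.0 degrees
```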
Affiliation(s)
- Markus Lappe
- Department of Zoology and Neurobiology, Ruhr University Bochum, Bochum, Germany
23
Grigo A, Lappe M. Interaction of stereo vision and optic flow processing revealed by an illusory stimulus. Vision Res 1998; 38:281-90. [PMID: 9536354 DOI: 10.1016/s0042-6989(97)00123-5] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
The influence of stereoscopic vision on the perception of optic flow fields was investigated in experiments based on a recently described illusion. In this illusion, subjects perceive a shift of the center of an expanding optic flow field when it is transparently superimposed by a unidirectional motion pattern. This illusory shift can be explained by the visual system taking the presented flow pattern as a certain self-motion flow field. Here we examined the dependence of the illusory transformation on differences in depth between the two superimposed motion patterns. Presenting them with different relative binocular disparities, we found a strong variation in the magnitude of the illusory shift. Especially when translation was in front of expansion, a highly significant decrease of the illusory shift occurred, down to 25% of its magnitude at zero disparity. These findings confirm the assumption that the motion pattern is interpreted as a self-motion flow field. In a further experiment we presented monocular depth cues by changing dot size and dot density. This caused a reduction of the illusory shift that was distinctly smaller than under stereoscopic presentation. We conclude that the illusory optic flow transformation is modified by depth information, especially by binocular disparity. The findings are linked to the phenomenon of induced motion and are related to neurophysiology.
Affiliation(s)
- A Grigo
- Department of Zoology and Neurobiology, Ruhr University Bochum, Germany
24
Abstract
Current models of motion perception depend on unidirectional motion-sensitive mechanisms that provide local inputs for complex pattern motion, such as optic flow. To test the generality of such models, we asked observers to compare the speed of radial gratings with the translational speed of vertical gratings. The speed of the radial gratings was consistently overestimated by 20-60% relative to that of translating gratings that were identical in all other respects. The speed bias was not associated with a general spatial or temporal processing bias, nor with the high relative speed of points about the center of expansion/contraction. The bias increased non-linearly with the size of sectors of the radiating pattern exposed. As the motion of the two patterns was locally identical but judged differently, the apparent speed of both kinds of motion cannot be served by any mechanism, nor described by any model, that is based entirely on local motion signals. We speculate that the greater apparent speed of the radial motion has to do with apparent motion in depth.
Affiliation(s)
- P J Bex
- Center for Visual Science, University of Rochester, NY 14627-0268, USA.
25
Royden CS, Hildreth EC. Human heading judgments in the presence of moving objects. PERCEPTION & PSYCHOPHYSICS 1996; 58:836-56. [PMID: 8768180 DOI: 10.3758/bf03205487] [Citation(s) in RCA: 71] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/02/2023]
Abstract
When moving toward a stationary scene, people judge their heading quite well from visual information alone. Much experimental and modeling work has been presented to analyze how people judge their heading for stationary scenes. However, in everyday life, we often move through scenes that contain moving objects. Most models have difficulty computing heading when moving objects are in the scene, and few studies have examined how well humans perform in the presence of moving objects. In this study, we tested how well people judge their heading in the presence of moving objects. We found that people perform remarkably well under a variety of conditions. The only condition that affects an observer's ability to judge heading accurately consists of a large moving object crossing the observer's path. In this case, the presence of the object causes a small bias in the heading judgments. For objects moving horizontally with respect to the observer, this bias is in the object's direction of motion. These results present a challenge for computational models.
Affiliation(s)
- C S Royden
- Department of Computer Science, Wellesley College, MA 02181, USA.