1.
Sun Q, Zhan LZ, You FH, Dong XF. Attention affects the perception of self-motion direction from optic flow. iScience 2024; 27:109373. PMID: 38500831; PMCID: PMC10946324; DOI: 10.1016/j.isci.2024.109373.
Abstract
Many studies have demonstrated that attention affects the perception of a wide range of visual features. However, previous studies report conflicting results regarding the effect of attention on the perception of self-motion direction (i.e., heading) from optic flow. To address this question, we conducted three behavioral experiments and found that estimation accuracy for large headings (>14°) decreased with attention load, discrimination thresholds for these headings increased with attention load, and heading estimates were systematically compressed toward the focus of attention. The current study therefore demonstrates that attention affects heading perception from optic flow, showing that heading perception is both information-driven and cognitively modulated.
Affiliations
- Qi Sun
- School of Psychology, Zhejiang Normal University, Jinhua, P.R. China
- Zhejiang Philosophy and Social Science Laboratory for the Mental Health and Crisis Intervention of Children and Adolescents, Zhejiang Normal University, Jinhua, P.R. China
- Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, P.R. China
- Lin-Zhe Zhan
- School of Psychology, Zhejiang Normal University, Jinhua, P.R. China
- Fan-Huan You
- School of Psychology, Zhejiang Normal University, Jinhua, P.R. China
- Xiao-Fei Dong
- School of Psychology, Zhejiang Normal University, Jinhua, P.R. China
2.
Ali M, Decker E, Layton OW. Temporal stability of human heading perception. J Vis 2023; 23:8. PMID: 36786748; PMCID: PMC9932552; DOI: 10.1167/jov.23.2.8.
Abstract
Humans are capable of accurately judging their heading from optic flow during straight forward self-motion. Despite the global coherence in the optic flow field, however, visual clutter and other naturalistic conditions create constant flux on the eye. This presents a problem that must be overcome to accurately perceive heading from optic flow: the visual system must maintain sensitivity to optic flow variations that correspond with actual changes in self-motion and disregard those that do not. One solution could involve integrating optic flow over time to stabilize heading signals while suppressing transient fluctuations. Stability, however, may come at the cost of sluggishness. Here, we investigate the stability of human heading perception when subjects judge their heading after the simulated direction of self-motion changes. We found that the initial heading exerted an attractive influence on judgments of the final heading. Consistent with an evolving heading representation, bias toward the initial heading increased with the size of the heading change and as the viewing duration of the optic flow consistent with the final heading decreased. Introducing periods of sensory dropout (blackouts) later in the trial increased bias, whereas an earlier one did not. Simulations of a neural model, the Competitive Dynamics Model, demonstrate that a mechanism that produces an evolving heading signal through recurrent competitive interactions largely captures the human data. Our findings characterize how the visual system balances stability in heading perception with sensitivity to change and support the hypothesis that heading perception evolves over time.
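The qualitative pattern reported here can be illustrated with a toy leaky integrator standing in for the recurrent competition of the Competitive Dynamics Model (a hedged sketch under assumed parameters, not the authors' implementation):

```python
import numpy as np

def heading_report(initial, final, t_change, t_end, tau=0.5, dt=0.001):
    """Toy leaky-integrator stand-in for an evolving heading representation:
    the internal estimate relaxes toward the currently displayed heading
    with (illustrative) time constant tau, so a report made at t_end is
    biased toward the initial heading."""
    est = float(initial)
    for t in np.arange(0.0, t_end, dt):
        shown = initial if t < t_change else final
        est += (dt / tau) * (shown - est)  # leaky drift toward the shown heading
    return est
```

With these assumed parameters, the residual attraction toward the initial heading grows with the size of the heading change and shrinks as viewing time of the final heading increases, qualitatively matching the reported bias pattern.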
Affiliations
- Mufaddal Ali
- Department of Computer Science, Colby College, Waterville, ME, USA
- Eli Decker
- Department of Computer Science, Colby College, Waterville, ME, USA
- Oliver W. Layton
- Department of Computer Science, Colby College, Waterville, ME, USA. https://sites.google.com/colby.edu/owlab
3.
Layton OW, Fajen BR. Distributed encoding of curvilinear self-motion across spiral optic flow patterns. Sci Rep 2022; 12:13393. PMID: 35927277; PMCID: PMC9352735; DOI: 10.1038/s41598-022-16371-4.
Abstract
Self-motion along linear paths without eye movements creates optic flow that radiates from the direction of travel (heading). Optic flow-sensitive neurons in primate brain area MSTd have been linked to linear heading perception, but the neural basis of more general curvilinear self-motion perception is unknown. The optic flow in this case is more complex and depends on the gaze direction and curvature of the path. We investigated the extent to which signals decoded from a neural model of MSTd predict the observer's curvilinear self-motion. Specifically, we considered the contributions of MSTd-like units that were tuned to radial, spiral, and concentric optic flow patterns in "spiral space". Self-motion estimates decoded from units tuned to the full set of spiral space patterns were substantially more accurate and precise than those decoded from units tuned to radial expansion. Decoding only from units tuned to spiral subtypes closely approximated the performance of the full model. Only the full decoding model could account for human judgments when path curvature and gaze covaried in self-motion stimuli. The most predictive units exhibited bias in center-of-motion tuning toward the periphery, consistent with neurophysiology and prior modeling. Together, these findings support a distributed encoding of curvilinear self-motion across spiral space.
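The "spiral space" of flow patterns referred to above can be made concrete with a short sketch that generates a unit flow template for any spiral angle (a hypothetical helper for illustration, not the paper's code):

```python
import numpy as np

def spiral_template(xs, ys, center=(0.0, 0.0), angle=0.0):
    """Unit flow vectors for one point in 'spiral space': angle 0 gives
    radial expansion, +/- pi/2 the two rotations (concentric flow),
    pi contraction, and intermediate angles spirals."""
    dx, dy = xs - center[0], ys - center[1]
    r = np.hypot(dx, dy)
    r = np.where(r == 0, 1.0, r)       # avoid division by zero at the center
    ux, uy = dx / r, dy / r            # radial unit vectors
    c, s = np.cos(angle), np.sin(angle)
    # rotate every radial vector by the spiral angle
    return c * ux - s * uy, s * ux + c * uy
```

Sweeping `angle` traces out the continuum of radial, spiral, and concentric patterns that the model's MSTd-like units are tuned to.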
Affiliations
- Oliver W Layton
- Department of Computer Science, Colby College, Waterville, ME, USA
- Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, NY, USA
- Brett R Fajen
- Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, NY, USA
4.
Maus N, Layton OW. Estimating heading from optic flow: Comparing deep learning network and human performance. Neural Netw 2022; 154:383-396. DOI: 10.1016/j.neunet.2022.07.007.
5.
Layton OW, Powell N, Steinmetz ST, Fajen BR. Estimating curvilinear self-motion from optic flow with a biologically inspired neural system. Bioinspir Biomim 2022; 17:046013. PMID: 35580573; DOI: 10.1088/1748-3190/ac709b.
Abstract
Optic flow provides rich information about world-relative self-motion and is used by many animals to guide movement. For example, self-motion along linear, straight paths without eye movements generates optic flow that radiates from a singularity that specifies the direction of travel (heading). Many neural models of optic flow processing contain heading detectors that are tuned to the position of the singularity, the design of which is informed by brain area MSTd of primate visual cortex that has been linked to heading perception. Such biologically inspired models could be useful for efficient self-motion estimation in robots, but existing systems are tailored to the limited scenario of linear self-motion and neglect sensitivity to self-motion along more natural curvilinear paths. The observer in this case experiences more complex motion patterns, the appearance of which depends on the radius of the curved path (path curvature) and the direction of gaze. Indeed, MSTd neurons have been shown to exhibit tuning to optic flow patterns other than radial expansion, a property that is rarely captured in neural models. We investigated in a computational model whether a population of MSTd-like sensors tuned to radial, spiral, ground, and other optic flow patterns could support the accurate estimation of parameters describing both linear and curvilinear self-motion. We used deep learning to decode self-motion parameters from the signals produced by the diverse population of MSTd-like units. We demonstrate that this system is capable of accurately estimating curvilinear path curvature, clockwise/counterclockwise sign, and gaze direction relative to the path tangent in both synthetic and naturalistic videos of simulated self-motion. Estimates remained stable over time while rapidly adapting to dynamic changes in the observer's curvilinear self-motion. Our results show that coupled biologically inspired and artificial neural network systems hold promise as a solution for robust vision-based self-motion estimation in robots.
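The optic flow geometry underlying these abstracts is captured by the standard Longuet-Higgins/Prazdny motion-field equations; the sketch below (an illustrative helper, not part of the paper's model) computes the flow at image points for a given translation and rotation:

```python
import numpy as np

def motion_field(x, y, Z, T, omega, f=1.0):
    """Instantaneous optic flow at image points (x, y) with scene depth Z,
    for observer translation T = (Tx, Ty, Tz) and rotation
    omega = (wx, wy, wz), under a pinhole model with focal length f."""
    Tx, Ty, Tz = T
    wx, wy, wz = omega
    # translational component: scales with inverse depth and radiates
    # from the heading point (f*Tx/Tz, f*Ty/Tz) when Tz != 0
    u_t = (x * Tz - f * Tx) / Z
    v_t = (y * Tz - f * Ty) / Z
    # rotational component: independent of depth
    u_r = wx * x * y / f - wy * (f + x ** 2 / f) + wz * y
    v_r = wx * (f + y ** 2 / f) - wy * x * y / f - wz * x
    return u_t + u_r, v_t + v_r
```

Pure forward translation yields radial expansion from the heading singularity; curvilinear self-motion corresponds to a nonzero `omega`, producing the more complex patterns whose shape depends on path curvature and gaze.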
Affiliations
- Oliver W Layton
- Department of Computer Science, Colby College, Waterville, ME, United States of America
- Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, NY, United States of America
- Nathaniel Powell
- Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, NY, United States of America
- Scott T Steinmetz
- Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, NY, United States of America
- Brett R Fajen
- Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, NY, United States of America
6.
Steinmetz ST, Layton OW, Powell NV, Fajen BR. A Dynamic Efficient Sensory Encoding Approach to Adaptive Tuning in Neural Models of Optic Flow Processing. Front Comput Neurosci 2022; 16:844289. PMID: 35431848; PMCID: PMC9011806; DOI: 10.3389/fncom.2022.844289.
Abstract
This paper introduces a self-tuning mechanism for capturing rapid adaptation to changing visual stimuli by a population of neurons. Building upon the principles of efficient sensory encoding, we show how neural tuning curve parameters can be continually updated to optimally encode a time-varying distribution of recently detected stimulus values. We implemented this mechanism in a neural model that produces human-like estimates of self-motion direction (i.e., heading) based on optic flow. The parameters of speed-sensitive units were dynamically tuned in accordance with efficient sensory encoding such that the network remained sensitive as the distribution of optic flow speeds varied. In two simulation experiments, we found that model performance with dynamic tuning yielded more accurate, shorter-latency heading estimates compared to the model with static tuning. We conclude that dynamic efficient sensory encoding offers a plausible approach for capturing adaptation to varying visual environments in biological visual systems and neural models alike.
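One simple way to realize efficient sensory encoding of this kind is to place unit preferences at quantiles of the recent stimulus distribution; the sketch below is an illustrative rule under that assumption, not the paper's exact update:

```python
import numpy as np

def retune_preferred_speeds(recent_speeds, n_units):
    """Efficient-coding style retuning (illustrative): place the preferred
    speeds of n_units speed-sensitive units at evenly spaced quantiles of
    the recently observed optic flow speeds, so unit density follows
    stimulus density and sensitivity is maintained as the distribution
    drifts."""
    qs = (np.arange(n_units) + 0.5) / n_units  # quantile levels, e.g. .125, .375, ...
    return np.quantile(recent_speeds, qs)
```

If recent flow speeds skew slow, the retuned preferred speeds crowd toward the slow end, which is the intuition behind keeping the population sensitive under a changing input distribution.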
Affiliations
- Scott T. Steinmetz
- Cognitive Science Department, Rensselaer Polytechnic Institute, Troy, NY, United States
- Oliver W. Layton
- Computer Science Department, Colby College, Waterville, ME, United States
- Nathaniel V. Powell
- Cognitive Science Department, Rensselaer Polytechnic Institute, Troy, NY, United States
- Brett R. Fajen
- Cognitive Science Department, Rensselaer Polytechnic Institute, Troy, NY, United States
7.
Neural correlates associated with impaired global motion perception in cerebral visual impairment (CVI). Neuroimage Clin 2022; 32:102821. PMID: 34628303; PMCID: PMC8501506; DOI: 10.1016/j.nicl.2021.102821.
Abstract
Highlights: Cerebral visual impairment (CVI) is associated with impaired global motion processing. Mean motion coherence thresholds were higher in individuals with CVI. fMRI responses in area hMT+ showed an aberrant response profile in CVI. White matter tract reconstruction revealed cortico-cortical dysmyelination in CVI.
Cerebral visual impairment (CVI) is associated with a wide range of visual perceptual deficits including global motion processing. However, the underlying neurophysiological basis for these impairments remains poorly understood. We investigated global motion processing abilities in individuals with CVI compared to neurotypical controls using a combined behavioral and multi-modal neuroimaging approach. We found that CVI participants had a significantly higher mean motion coherence threshold (determined using a random dot kinematogram pattern simulating optic flow motion) compared to controls. Using functional magnetic resonance imaging (fMRI), we investigated activation response profiles in functionally defined early (i.e. primary visual cortex; area V1) and higher order (i.e. middle temporal cortex; area hMT+) stages of motion processing. In area V1, responses to increasing motion coherence were similar in both groups. However, in the CVI group, activation in area hMT+ was significantly reduced compared to controls, and consistent with a surround facilitation (rather than suppression) response profile. White matter tract reconstruction obtained from high angular resolution diffusion imaging (HARDI) revealed evidence of increased mean, axial, and radial diffusivities within cortico-cortical (i.e. V1-hMT+), but not thalamo-hMT+ connections. Overall, our results suggest that global motion processing deficits in CVI may be associated with impaired signal integration and segregation mechanisms, as well as white matter integrity at the level of area hMT+.
8.
ARTFLOW: A Fast, Biologically Inspired Neural Network that Learns Optic Flow Templates for Self-Motion Estimation. Sensors 2021; 21:8217. PMID: 34960310; PMCID: PMC8708706; DOI: 10.3390/s21248217.
Abstract
Most algorithms for steering, obstacle avoidance, and moving object detection rely on accurate self-motion estimation, a problem animals solve in real time as they navigate through diverse environments. One biological solution leverages optic flow, the changing pattern of motion experienced on the eye during self-motion. Here I present ARTFLOW, a biologically inspired neural network that learns patterns in optic flow to encode the observer’s self-motion. The network combines the fuzzy ART unsupervised learning algorithm with a hierarchical architecture based on the primate visual system. This design affords fast, local feature learning across parallel modules in each network layer. Simulations show that the network is capable of learning stable patterns from optic flow simulating self-motion through environments of varying complexity with only one epoch of training. ARTFLOW trains substantially faster and yields self-motion estimates that are far more accurate than a comparable network that relies on Hebbian learning. I show how ARTFLOW serves as a generative model to predict the optic flow that corresponds to neural activations distributed across the network.
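The fuzzy ART algorithm at ARTFLOW's core can be summarized in a minimal sketch (complement coding, choice function, vigilance test, fast learning); class name, parameters, and defaults here are illustrative, not ARTFLOW's actual configuration:

```python
import numpy as np

class FuzzyART:
    """Minimal fuzzy ART category learner (illustrative sketch)."""
    def __init__(self, rho=0.7, alpha=0.001, beta=1.0):
        self.rho, self.alpha, self.beta = rho, alpha, beta  # vigilance, choice, learning rate
        self.w = []  # category weight vectors

    def _complement_code(self, x):
        return np.concatenate([x, 1.0 - x])

    def learn(self, x):
        i = self._complement_code(np.asarray(x, float))
        # rank existing categories by the choice function T_j = |i ^ w_j| / (alpha + |w_j|)
        order = sorted(range(len(self.w)),
                       key=lambda j: -np.minimum(i, self.w[j]).sum()
                                      / (self.alpha + self.w[j].sum()))
        for j in order:
            match = np.minimum(i, self.w[j]).sum() / i.sum()
            if match >= self.rho:  # vigilance test passed: resonate and update
                self.w[j] = (self.beta * np.minimum(i, self.w[j])
                             + (1 - self.beta) * self.w[j])
                return j
        self.w.append(i.copy())   # no category matches: recruit a new one
        return len(self.w) - 1
```

With beta = 1 (fast learning), a stable category can form from a single presentation, consistent with the one-epoch training emphasized in the abstract.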
9.
Modeling Physiological Sources of Heading Bias from Optic Flow. eNeuro 2021; 8:ENEURO.0307-21.2021. PMID: 34642226; PMCID: PMC8607907; DOI: 10.1523/eneuro.0307-21.2021.
Abstract
Human heading perception from optic flow is accurate for directions close to the straight-ahead, and systematic biases emerge in the periphery (Cuturi and Macneilage, 2013; Sun et al., 2020). In pursuit of the underlying neural mechanisms, the dorsal medial superior temporal (MSTd) area of the primate brain has been a focus because of its causal link with heading perception (Gu et al., 2012). Computational models generally explain heading sensitivity in individual MSTd neurons as a feedforward integration of motion signals from medial temporal (MT) area that resemble full-field optic flow patterns consistent with the preferred heading direction (Britten, 2008; Mineault et al., 2012). In the present simulation study, we quantified within the structure of this feedforward model how physiological properties of MT and MSTd shape heading signals. We found that known physiological tuning characteristics generally supported the accuracy of heading estimation, but not always. A weak-to-moderate overrepresentation of peripheral headings in MSTd garnered the highest accuracy and precision out of the models that we tested. The model also performed well when noise corrupted high proportions of the optic flow vectors. Such a peripheral MSTd model performed well when units possessed a range of receptive field (RF) sizes and were strongly direction tuned. Physiological biases in MT direction tuning toward the radial direction also supported heading estimation, but the tendency for MT preferred speed and RF size to scale with eccentricity did not. Our findings help elucidate the extent to which different physiological tuning properties influence the accuracy and precision of neural heading signals.
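The feedforward template-matching scheme referenced above reduces, in its simplest form, to comparing the observed flow against each unit's preferred full-field pattern; the sketch below is illustrative only (the paper's model adds MT tuning properties, receptive fields, and noise):

```python
import numpy as np

def radial_flow(xs, ys, foe_x):
    """Unit radial flow with focus of expansion at (foe_x, 0)."""
    dx, dy = xs - foe_x, ys
    r = np.hypot(dx, dy)
    r = np.where(r == 0, 1.0, r)
    return np.stack([dx / r, dy / r])

def decode_heading(flow, templates, headings):
    """MSTd-like units respond with the inner product between the observed
    flow field and their preferred template; heading is read out from the
    best-matching unit."""
    responses = np.array([np.vdot(flow, t) for t in templates])
    return headings[int(np.argmax(responses))]
```

Biases like those studied in the paper arise when the template population over- or under-represents parts of heading space, shifting which unit wins for a given flow field.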
10.
Hülemeier AG, Lappe M. Combining biological motion perception with optic flow analysis for self-motion in crowds. J Vis 2020; 20:7. PMID: 32902593; PMCID: PMC7488621; DOI: 10.1167/jov.20.9.7.
Abstract
Heading estimation from optic flow relies on the assumption that the visual world is rigid. This assumption is violated when one moves through a crowd of people, a common and socially important situation. The motion of people in the crowd contains cues to their translation in the form of the articulation of their limbs, known as biological motion. We investigated how translation and articulation of biological motion influence heading estimation from optic flow for self-motion in a crowd. Participants had to estimate their heading during simulated self-motion toward a group of walkers who collectively walked in a single direction. We found that the natural combination of translation and articulation produces surprisingly small heading errors. In contrast, experimental conditions that either present only translation or only articulation produced strong idiosyncratic biases. The individual biases accounted well for the variance in the natural combination. A second experiment showed that the benefit of articulation and the bias produced by articulation were specific to biological motion. An analysis of the differences in biases between conditions and participants showed that different perceptual mechanisms contribute to heading perception in crowds. We suggest that coherent group motion affects the reference frame of heading perception from optic flow.
Affiliations
- Markus Lappe
- Department of Psychology, University of Münster, Münster, Germany
11.
Riddell H, Li L, Lappe M. Heading perception from optic flow in the presence of biological motion. J Vis 2019; 19:25. PMID: 31868898; DOI: 10.1167/19.14.25.
Abstract
We investigated whether biological motion biases heading estimation from optic flow in a similar manner to nonbiological moving objects. In two experiments, observers judged their heading from displays depicting linear translation over a random-dot ground with normal point light walkers, spatially scrambled point light walkers, or laterally moving objects composed of random dots. In Experiment 1, we found that both types of walkers biased heading estimates similarly to moving objects when they obscured the focus of expansion of the background flow. In Experiment 2, we also found that walkers biased heading estimates when they did not obscure the focus of expansion. These results show that both regular and scrambled biological motion affect heading estimation in a similar manner to simple moving objects, and suggest that biological motion is not preferentially processed for the perception of self-motion.
Affiliations
- Hugh Riddell
- Institute for Psychology and Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Germany
- Li Li
- Faculty of Arts and Science, NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, China
- Markus Lappe
- Institute for Psychology and Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Germany
12.
A model of how depth facilitates scene-relative object motion perception. PLoS Comput Biol 2019; 15:e1007397. PMID: 31725723; PMCID: PMC6879150; DOI: 10.1371/journal.pcbi.1007397.
Abstract
Many everyday interactions with moving objects benefit from an accurate perception of their movement. Self-motion, however, complicates object motion perception because it generates a global pattern of motion on the observer's retina and radically influences an object's retinal motion. There is strong evidence that the brain compensates by suppressing the retinal motion due to self-motion; however, this requires estimates of depth relative to the object, since otherwise the appropriate self-motion component to remove cannot be determined. The underlying neural mechanisms are unknown, but neurons in brain areas MT and MST may contribute given their sensitivity to motion parallax and depth through joint direction, speed, and disparity tuning. We developed a neural model to investigate whether cells in areas MT and MST with well-established neurophysiological properties can account for human object motion judgments during self-motion. We tested the model by comparing simulated object motion signals to human object motion judgments in environments with monocular, binocular, and ambiguous depth. Our simulations show how precise depth information, such as that from binocular disparity, may improve estimates of the retinal motion pattern due to the self-motion through increased selectivity among units that respond to the global self-motion pattern. The enhanced self-motion estimates emerged from recurrent feedback connections in MST and allowed the model to better suppress the appropriate direction, speed, and disparity signals from the object's retinal motion, improving the accuracy of the object's movement direction represented by motion signals. Research has shown that the accuracy with which humans perceive object motion during self-motion improves in the presence of stereo cues. Using a neural modelling approach, we explore whether this finding can be explained through improved estimation of the retinal motion induced by self-motion. Our results show that depth cues that provide information about scene structure may have a large effect on the specificity with which the neural mechanisms for motion perception represent the visual self-motion signal. This in turn enables effective removal of the retinal motion due to self-motion when the goal is to perceive object motion relative to the stationary world. These results reveal a hitherto unknown critical function of stereo tuning in the MT-MST complex, and shed important light on how the brain may recruit signals from upstream and downstream brain areas to simultaneously perceive self-motion and object motion.
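The compensation described above ("flow parsing") amounts to subtracting the self-motion component of retinal flow, which requires depth; a minimal sketch, assuming pure observer translation and known depth (illustrative helper, not the MT/MST model itself):

```python
import numpy as np

def parse_object_motion(retinal_u, retinal_v, x, y, Z, T, f=1.0):
    """Predict the retinal motion component due to observer translation
    T = (Tx, Ty, Tz) at image position (x, y) and depth Z, then subtract
    it from the measured retinal flow, leaving scene-relative object
    motion. Precise depth (e.g. from binocular disparity) is what makes
    the subtraction well-posed."""
    Tx, Ty, Tz = T
    u_self = (x * Tz - f * Tx) / Z
    v_self = (y * Tz - f * Ty) / Z
    return retinal_u - u_self, retinal_v - v_self
```

If the depth estimate Z is wrong, the wrong self-motion component is removed and residual flow is misattributed to the object, which is one way to read the model's account of the stereo benefit.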
13.
Spatial suppression promotes rapid figure-ground segmentation of moving objects. Nat Commun 2019; 10:2732. PMID: 31266956; PMCID: PMC6606582; DOI: 10.1038/s41467-019-10653-8.
Abstract
Segregation of objects from their backgrounds is a fundamental visual function and one that is particularly effective when objects are in motion. Theoretically, suppressive center-surround mechanisms are well suited for accomplishing motion segregation. This longstanding hypothesis, however, has received limited empirical support. We report converging correlational and causal evidence that spatial suppression of background motion signals is critical for rapid segmentation of moving objects. Motion segregation ability is strongly predicted by both individual and stimulus-driven variations in spatial suppression strength. Moreover, aging-related superiority in perceiving background motion is associated with profound impairments in motion segregation. This segregation deficit is alleviated via perceptual learning, but only when motion segregation training also causes decreased sensitivity to background motion. We argue that perceptual insensitivity to large moving stimuli effectively implements background subtraction, which, in turn, enhances the visibility of moving objects and accounts for the observed link between spatial suppression and motion segregation.
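The background-subtraction idea maps onto a very small piece of arithmetic; the sketch below is a hedged caricature of suppressive center-surround pooling (the weight k and the rectification are illustrative assumptions):

```python
import numpy as np

def suppressed_response(center, surround, k=0.8):
    """Local (center) motion responses are reduced by wide-field (surround)
    motion responses, so coherent background motion is attenuated while an
    object moving differently from its background stands out."""
    return np.maximum(center - k * surround, 0.0)
```

When center and surround carry the same (background) motion the response is largely cancelled; a locally discrepant object escapes the suppression, which is the segmentation benefit argued for in the abstract.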
14.
Causal inference accounts for heading perception in the presence of object motion. Proc Natl Acad Sci U S A 2019; 116:9060-9065. PMID: 30996126; DOI: 10.1073/pnas.1820373116.
Abstract
The brain infers our spatial orientation and properties of the world from ambiguous and noisy sensory cues. Judging self-motion (heading) in the presence of independently moving objects poses a challenging inference problem because the image motion of an object could be attributed to movement of the object, self-motion, or some combination of the two. We test whether perception of heading and object motion follows predictions of a normative causal inference framework. In a dual-report task, subjects indicated whether an object appeared stationary or moving in the virtual world, while simultaneously judging their heading. Consistent with causal inference predictions, the proportion of object stationarity reports, as well as the accuracy and precision of heading judgments, depended on the speed of object motion. Critically, biases in perceived heading declined when the object was perceived to be moving in the world. Our findings suggest that the brain interprets object motion and self-motion using a causal inference framework.
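The normative framework tested here can be sketched as a two-hypothesis Bayesian comparison; all parameters below are illustrative placeholders, not the paper's fitted model:

```python
import numpy as np

def p_object_stationary(obj_speed, sigma=1.0, prior_stationary=0.5,
                        sigma_moving=5.0):
    """Causal-inference sketch: compare the likelihood of the object's
    residual image motion under 'stationary in the world' (motion fully
    explained by self-motion; residual ~ N(0, sigma)) versus
    'independently moving' (a broader distribution, sigma_moving),
    then combine with the prior on stationarity."""
    like_stat = np.exp(-0.5 * (obj_speed / sigma) ** 2) / sigma
    like_move = np.exp(-0.5 * (obj_speed / sigma_moving) ** 2) / sigma_moving
    return (prior_stationary * like_stat
            / (prior_stationary * like_stat
               + (1 - prior_stationary) * like_move))
```

Slow objects are attributed to self-motion (and so bias heading), while fast objects are judged as independently moving and discounted, matching the speed dependence reported in the abstract.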
15.
Going with the Flow: The Neural Mechanisms Underlying Illusions of Complex-Flow Motion. J Neurosci 2019; 39:2664-2685. PMID: 30777886; DOI: 10.1523/jneurosci.2112-18.2019.
Abstract
Studying the mismatch between perception and reality helps us better understand the constructive nature of the visual brain. The Pinna-Brelstaff motion illusion is a compelling example illustrating how a complex moving pattern can generate an illusory motion perception. When an observer moves toward (expansion) or away (contraction) from the Pinna-Brelstaff figure, the figure appears to rotate. The neural mechanisms underlying the illusory complex-flow motion of rotation, expansion, and contraction remain unknown. We studied this question at both perceptual and neuronal levels in behaving male macaques by using carefully parametrized Pinna-Brelstaff figures that induce the above motion illusions. We first demonstrate that macaques perceive illusory motion in a manner similar to that of human observers. Neurophysiological recordings were subsequently performed in the middle temporal area (MT) and the dorsal portion of the medial superior temporal area (MSTd). We find that subgroups of MSTd neurons encoding a particular global pattern of real complex-flow motion (rotation, expansion, contraction) also represent illusory motion patterns of the same class. They require an extra 15 ms to reliably discriminate the illusion. In contrast, MT neurons encode both real and illusory local motions with similar temporal delays. These findings reveal that illusory complex-flow motion is first represented in MSTd by the same neurons that normally encode real complex-flow motion. However, the extraction of global illusory motion in MSTd from other classes of real complex-flow motion requires extra processing time. Our study illustrates a cascaded integration mechanism from MT to MSTd underlying the transformation from external physical to internal nonveridical flow-motion perception.
SIGNIFICANCE STATEMENT: The neural basis of the transformation from objective reality to illusory percepts of rotation, expansion, and contraction remains unknown. We demonstrate psychophysically that macaques perceive these illusory complex-flow motions in a manner similar to that of human observers. At the neural level, we show that medial superior temporal (MSTd) neurons represent illusory flow motions as if they were real by globally integrating middle temporal area (MT) local motion signals. Furthermore, while MT neurons reliably encode real and illusory local motions with similar temporal delays, MSTd neurons take a significantly longer time to process the signals associated with illusory percepts. Our work extends previous complex-flow motion studies by providing the first detailed analysis of the neuron-specific mechanisms underlying complex forms of illusory motion integration from MT to MSTd.
16.
Yu X, Hou H, Spillmann L, Gu Y. Causal Evidence of Motion Signals in Macaque Middle Temporal Area Weighted-Pooled for Global Heading Perception. Cereb Cortex 2018; 28:612-624. PMID: 28057722; DOI: 10.1093/cercor/bhw402.
Abstract
Accurate heading perception relies on visual information integrated across a wide field, that is, optic flow. Numerous computational studies have speculated about how local visual information might be pooled by the brain to compute heading, but these hypotheses lack direct neurophysiological support. In the current study, we instructed human and monkey subjects to judge heading directions based on global optic flow. We showed that a local perturbation cue applied within only a small part of the visual field could bias the subjects' heading judgments and, at the same time, shift neuronal tuning in the macaque middle temporal (MT) area. Electrical microstimulation in MT significantly biased the animals' heading judgments in a manner predictable from the tuning of the stimulated neurons. Masking the visual stimuli within these neurons' receptive fields did not remove the stimulation effect, indicating that MT signals pooled by downstream neurons are sufficient for global heading estimation. Interestingly, this pooling is not homogeneous: stimulating neurons with excitatory surrounds produced relatively larger effects than stimulating neurons with inhibitory surrounds. Thus our data not only provide direct causal evidence but also offer new insights into the neural mechanisms of pooling local motion information for global heading estimation.
Affiliation(s)
- Xuefei Yu: Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Han Hou: Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Lothar Spillmann: On leave of absence from Department of Neurology, University of Freiburg, Freiburg 79110, Germany
- Yong Gu: Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China

17
Dissociation of Self-Motion and Object Motion by Linear Population Decoding That Approximates Marginalization. J Neurosci 2017; 37:11204-11219. [PMID: 29030435 DOI: 10.1523/jneurosci.1177-17.2017]
Abstract
We use visual image motion to judge the movement of objects, as well as our own movements through the environment. Generally, image motion components caused by object motion and self-motion are confounded in the retinal image. Thus, to estimate heading, the brain would ideally marginalize out the effects of object motion (or vice versa), but little is known about how this is accomplished neurally. Behavioral studies suggest that vestibular signals play a role in dissociating object motion and self-motion, and recent computational work suggests that a linear decoder can approximate marginalization by taking advantage of diverse multisensory representations. By measuring responses of MSTd neurons in two male rhesus monkeys and by applying a recently developed method to approximate marginalization by linear population decoding, we tested the hypothesis that vestibular signals help to dissociate self-motion and object motion. We show that vestibular signals stabilize tuning for heading in neurons with congruent visual and vestibular heading preferences, whereas they stabilize tuning for object motion in neurons with discrepant preferences. Thus, vestibular signals enhance the separability of joint tuning for object motion and self-motion. We further show that a linear decoder, designed to approximate marginalization, allows the population to represent either self-motion or object motion with good accuracy. Decoder weights are broadly consistent with a readout strategy, suggested by recent computational work, in which responses are decoded according to the vestibular preferences of multisensory neurons. These results demonstrate, at both the single-neuron and population levels, that vestibular signals help to dissociate self-motion and object motion.

SIGNIFICANCE STATEMENT The brain often needs to estimate one property of a changing environment while ignoring others. This can be difficult because multiple properties of the environment may be confounded in sensory signals. The brain can solve this problem by marginalizing over irrelevant properties to estimate the property of interest. We explore this problem in the context of self-motion and object motion, which are inherently confounded in the retinal image. We examine how diversity in a population of multisensory neurons may be exploited to decode self-motion and object motion from the population activity of neurons in macaque area MSTd.
18
Layton OW, Fajen BR. Competitive Dynamics in MSTd: A Mechanism for Robust Heading Perception Based on Optic Flow. PLoS Comput Biol 2016; 12:e1004942. [PMID: 27341686 PMCID: PMC4920404 DOI: 10.1371/journal.pcbi.1004942]
Abstract
Human heading perception based on optic flow is not only accurate but also remarkably robust and stable. These qualities are especially apparent when observers move through environments containing other moving objects, which introduce optic flow that is inconsistent with observer self-motion and therefore uninformative about heading direction. Moving objects may also occupy large portions of the visual field and occlude regions of the background optic flow that are most informative about heading. The fact that heading perception is biased by no more than a few degrees under such conditions attests to the robustness of the visual system and warrants further investigation. The aim of the present study was to investigate whether recurrent, competitive dynamics among MSTd neurons that serve to reduce uncertainty about heading over time offer a plausible mechanism for capturing the robustness of human heading perception. Simulations of existing heading models that do not contain competitive dynamics yield heading estimates that are far more erratic and unstable than human judgments. We present a dynamical model of primate visual areas V1, MT, and MSTd based on that of Layton, Mingolla, and Browning; it is similar to the other models except that it includes recurrent interactions among model MSTd neurons. Competitive dynamics stabilize the model's heading estimate over time, even when a moving object crosses the future path. Soft winner-take-all dynamics enhance units that code a heading direction consistent with the time history and suppress responses to transient changes in the optic flow field. Our findings support recurrent competitive temporal dynamics as a crucial mechanism underlying the robustness and stability of heading perception.
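The soft winner-take-all idea in this abstract can be illustrated with a minimal numerical sketch. This is a toy model with invented parameters (the `self_exc`/`inhib` constants and the Gaussian tuning width), not the published V1-MT-MSTd implementation: a ring of heading-tuned units integrates sustained input, and the pooled inhibition from established activity suppresses a brief transient that simulates a moving object crossing the path.

```python
import numpy as np

def soft_wta_step(a, inp, dt=0.1, self_exc=1.2, inhib=0.8):
    """One Euler step of soft winner-take-all recurrent dynamics.

    a   : current activity of heading-tuned units (nonnegative vector)
    inp : feedforward drive (e.g., template match to the optic flow)
    Units excite themselves and inhibit the whole pool, so the peak
    reflects the input's time history rather than its current frame.
    """
    da = -a + np.maximum(0.0, inp + self_exc * a - inhib * a.sum())
    return np.maximum(0.0, a + dt * da)

headings = np.linspace(-40.0, 40.0, 81)            # deg, 1-deg spacing
tuning = lambda h: np.exp(-0.5 * ((headings - h) / 5.0) ** 2)

a = np.zeros_like(headings)
for _ in range(100):        # sustained flow consistent with 0 deg heading
    a = soft_wta_step(a, tuning(0.0))
for _ in range(5):          # brief transient pulling toward +20 deg
    a = soft_wta_step(a, tuning(20.0))

estimate = headings[np.argmax(a)]   # stays near 0 deg despite the transient
```

The estimate survives the transient because units newly driven toward +20 deg face pooled inhibition from the already-established activity, which is the stabilizing role the abstract attributes to the recurrent competition.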
Affiliation(s)
- Oliver W. Layton: Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, New York, United States of America
- Brett R. Fajen: Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, New York, United States of America

19
Kim HR, Pitkow X, Angelaki DE, DeAngelis GC. A simple approach to ignoring irrelevant variables by population decoding based on multisensory neurons. J Neurophysiol 2016; 116:1449-67. [PMID: 27334948 DOI: 10.1152/jn.00005.2016]
Abstract
Sensory input reflects events that occur in the environment, but multiple events may be confounded in sensory signals. For example, under many natural viewing conditions, retinal image motion reflects some combination of self-motion and movement of objects in the world. To estimate one stimulus event and ignore others, the brain can perform marginalization operations, but the neural bases of these operations are poorly understood. Using computational modeling, we examine how multisensory signals may be processed to estimate the direction of self-motion (i.e., heading) and to marginalize out effects of object motion. Multisensory neurons represent heading based on both visual and vestibular inputs and come in two basic types: "congruent" and "opposite" cells. Congruent cells have matched heading tuning for visual and vestibular cues and have been linked to perceptual benefits of cue integration during heading discrimination. Opposite cells have mismatched visual and vestibular heading preferences and are ill-suited for cue integration. We show that decoding a mixed population of congruent and opposite cells substantially reduces errors in heading estimation caused by object motion. In addition, we present a general formulation of an optimal linear decoding scheme that approximates marginalization and can be implemented biologically by simple reinforcement learning mechanisms. We also show that neural response correlations induced by task-irrelevant variables may greatly exceed intrinsic noise correlations. Overall, our findings suggest a general computational strategy by which neurons with mismatched tuning for two different sensory cues may be decoded to perform marginalization operations that dissociate possible causes of sensory inputs.
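The optimal linear decoding scheme described above can be illustrated with a deliberately simplified linear-population sketch. The toy sensitivities here are invented, not the paper's model of multisensory responses: the point is only that when neurons mix heading and object motion with diverse weights (loosely, "congruent" vs. "opposite" cells), a linear readout can be chosen with unit gain on heading and zero gain on object motion, which approximates marginalization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear population: each neuron's rate mixes heading and object
# motion with its own pair of sensitivities. Diversity in this mixing
# is what lets a linear readout separate the two variables.
n = 40
A = rng.normal(size=n)   # per-neuron heading sensitivity
B = rng.normal(size=n)   # per-neuron object-motion sensitivity

def rates(heading, obj):
    return A * heading + B * obj

# Readout approximating marginalization: weights w with w @ A == 1
# (unit gain on heading) and w @ B == 0 (object motion nulled).
M = np.vstack([A, B])                        # 2 x n constraint matrix
w = np.linalg.pinv(M) @ np.array([1.0, 0.0])

ests = [w @ rates(12.0, obj) for obj in (-5.0, 0.0, 5.0)]
# all three decoded headings equal 12.0: object motion is marginalized out
```

Because `M` has full row rank, `M @ pinv(M)` is the identity, so the decoded heading is exactly insensitive to the object term in this linear toy; the paper's contribution is showing how such weights can be found and implemented biologically.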
Affiliation(s)
- HyungGoo R Kim: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York
- Xaq Pitkow: Department of Neuroscience, Baylor College of Medicine, Houston, Texas; Department of Electrical and Computer Engineering, Rice University, Houston, Texas
- Dora E Angelaki: Department of Neuroscience, Baylor College of Medicine, Houston, Texas; Department of Electrical and Computer Engineering, Rice University, Houston, Texas
- Gregory C DeAngelis: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York

20
Multisensory Integration of Visual and Vestibular Signals Improves Heading Discrimination in the Presence of a Moving Object. J Neurosci 2015; 35:13599-607. [PMID: 26446214 DOI: 10.1523/jneurosci.2267-15.2015]
Abstract
Humans and animals are fairly accurate in judging their direction of self-motion (i.e., heading) from optic flow when moving through a stationary environment. However, an object moving independently in the world alters the optic flow field and may bias heading perception if the visual system cannot dissociate object motion from self-motion. We investigated whether adding vestibular self-motion signals to optic flow enhances the accuracy of heading judgments in the presence of a moving object. Macaque monkeys were trained to report their heading (leftward or rightward relative to straight ahead) when self-motion was specified by vestibular, visual, or combined visual-vestibular signals, while viewing a display in which an object moved independently in the (virtual) world. The moving object induced significant biases in perceived heading when self-motion was signaled by either visual or vestibular cues alone. However, this bias was greatly reduced when visual and vestibular cues together signaled self-motion. In addition, multisensory heading discrimination thresholds measured in the presence of a moving object were largely consistent with the predictions of an optimal cue integration strategy. These findings demonstrate that multisensory cues facilitate the perceptual dissociation of self-motion and object motion, consistent with computational work suggesting that an appropriate decoding of multisensory visual-vestibular neurons can estimate heading while discounting the effects of object motion.

SIGNIFICANCE STATEMENT Objects that move independently in the world alter the optic flow field and can induce errors in perceiving the direction of self-motion (heading). We show that adding vestibular (inertial) self-motion signals to optic flow almost completely eliminates the errors in perceived heading induced by an independently moving object. Furthermore, this increased accuracy occurs without a substantial loss in precision. Our results thus demonstrate that vestibular signals play a critical role in dissociating self-motion from object motion.
21
Layton OW, Fajen BR. The temporal dynamics of heading perception in the presence of moving objects. J Neurophysiol 2016; 115:286-300. [PMID: 26510765 DOI: 10.1152/jn.00866.2015]
Abstract
Many forms of locomotion rely on the ability to accurately perceive one's direction of locomotion (i.e., heading) based on optic flow. Although accurate in rigid environments, heading judgments may be biased when independently moving objects are present. The aim of this study was to systematically investigate the conditions in which moving objects influence heading perception, with a focus on the temporal dynamics and the mechanisms underlying this bias. Subjects viewed stimuli simulating linear self-motion in the presence of a moving object and judged their direction of heading. Experiments 1 and 2 revealed that heading perception is biased when the object crosses or almost crosses the observer's future path toward the end of the trial, but not when the object crosses earlier in the trial. Nonetheless, heading perception is not based entirely on the instantaneous optic flow toward the end of the trial. Experiment 3 demonstrated this by varying how much of the trial leading up to the last frame was presented to subjects: when the stimulus duration was long enough to include the part of the trial before the moving object crossed the observer's path, heading judgments were less biased. The findings suggest that heading perception is affected by the temporal evolution of optic flow. The time course of dorsal medial superior temporal area (MSTd) neuron responses may play a crucial role in perceiving heading in the presence of moving objects, a property not captured by many existing models.
Affiliation(s)
- Oliver W Layton: Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, New York
- Brett R Fajen: Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, New York

22
Tadin D. Suppressive mechanisms in visual motion processing: From perception to intelligence. Vision Res 2015; 115:58-70. [PMID: 26299386 DOI: 10.1016/j.visres.2015.08.005]
Abstract
Perception operates on an immense amount of incoming information that greatly exceeds the brain's processing capacity. Because of this fundamental limitation, the ability to suppress irrelevant information is a key determinant of perceptual efficiency. Here, I will review a series of studies investigating suppressive mechanisms in visual motion processing, namely perceptual suppression of large, background-like motions. These spatial suppression mechanisms are adaptive, operating only when sensory inputs are sufficiently robust to guarantee visibility. Converging correlational and causal evidence links these behavioral results with inhibitory center-surround mechanisms, namely those in cortical area MT. Spatial suppression is abnormally weak in several special populations, including the elderly and individuals with schizophrenia, a deficit that is evidenced by better-than-normal direction discrimination of large moving stimuli. Theoretical work shows that this abnormal weakening of spatial suppression should result in motion segregation deficits, but direct behavioral support of this hypothesis is lacking. Finally, I will argue that the ability to suppress information is a fundamental neural process that applies not only to perception but also to cognition in general. Supporting this argument, I will discuss recent research showing that individual differences in spatial suppression of motion signals strongly predict individual variations in IQ scores.
Affiliation(s)
- Duje Tadin: Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA; Center for Visual Science, University of Rochester, Rochester, NY 14627, USA; Department of Ophthalmology, University of Rochester School of Medicine, Rochester, NY 14642, USA

23
Royden CS, Holloway MA. Detecting moving objects in an optic flow field using direction- and speed-tuned operators. Vision Res 2014; 98:14-25. [PMID: 24607912 DOI: 10.1016/j.visres.2014.02.009]
Abstract
An observer moving through a scene must be able to identify moving objects. Psychophysical results have shown that people can identify moving objects based on the speed or direction of their movement relative to the optic flow field generated by the observer's motion. Here we show that a model that uses speed- and direction-tuned units, whose responses are based on the response properties of cells in the primate visual cortex, can successfully identify the borders of moving objects in a scene through which an observer is moving.
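A toy version of the relative-motion principle in this abstract can be sketched as a simple geometric test. This is illustrative only, not the paper's model (which uses populations of speed- and direction-tuned operators modeled on cortical cells); the point layout, focus of expansion, and threshold are invented: flow vectors whose direction is inconsistent with the radial pattern produced by observer translation are flagged as belonging to a moving object.

```python
import numpy as np

def flag_moving_points(points, flow, foe, angle_thresh_deg=20.0):
    """Flag flow vectors inconsistent with pure observer translation.

    Under self-motion toward `foe`, background flow at image point p
    points along (p - foe). A vector deviating from that radial
    template by more than `angle_thresh_deg` likely belongs to an
    independently moving object.
    """
    radial = points - foe
    cosang = np.sum(flow * radial, axis=1) / (
        np.linalg.norm(flow, axis=1) * np.linalg.norm(radial, axis=1))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))) > angle_thresh_deg

rng = np.random.default_rng(2)
pts = rng.uniform(-1, 1, size=(100, 2))
pts[:10] = np.column_stack([np.zeros(10), np.linspace(0.3, 1.0, 10)])

foe = np.zeros(2)
flow = pts - foe                    # background: pure expansion from FOE
flow[:10] += np.array([0.8, 0.0])   # first 10 points: moving object

mask = flag_moving_points(pts, flow, foe)
# mask is True only for the 10 object points; background is unflagged
```

A direction-only test like this misses objects moving along the radial direction, which is why the paper's operators also use speed relative to the flow field.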
Affiliation(s)
- Constance S Royden: Department of Mathematics and Computer Science, College of the Holy Cross, United States
- Michael A Holloway: Department of Mathematics and Computer Science, College of the Holy Cross, United States

24
A unified model of heading and path perception in primate MSTd. PLoS Comput Biol 2014; 10:e1003476. [PMID: 24586130 PMCID: PMC3930491 DOI: 10.1371/journal.pcbi.1003476]
Abstract
Self-motion, steering, and obstacle avoidance during navigation in the real world require humans to travel along curved paths. Many perceptual models have been proposed that focus on heading, which specifies the direction of travel along straight paths, but not on path curvature, which humans perceive accurately and which is critical to everyday locomotion. In primates, including humans, the dorsal medial superior temporal area (MSTd) has been implicated in heading perception. However, the majority of MSTd neurons respond optimally to spiral patterns rather than to the radial expansion patterns associated with heading. No existing theory of curved path perception explains the neural mechanisms by which humans accurately assess path, and no functional role for spiral-tuned cells has yet been proposed. Here we present a computational model that demonstrates how the continuum of observed cells (radial to circular) in MSTd can simultaneously code curvature and heading across the neural population. Curvature is encoded through the spirality of the most active cell, and heading is encoded through the visuotopic location of the center of the most active cell's receptive field. Model curvature and heading errors fit those made by humans. Our model challenges the view that the function of MSTd is heading estimation; based on our analysis, we claim that it is primarily concerned with trajectory estimation and the simultaneous representation of both curvature and heading. In our model, temporal dynamics afford time-history in the neural representation of optic flow, which may modulate its structure. This has far-reaching implications for the interpretation of studies that assume that optic flow is, and should be, represented as an instantaneous vector field. Our results suggest that the spiral motion patterns that emerge in spatio-temporal optic flow are essential for guiding self-motion along complex trajectories, and that cells in MSTd are specifically tuned to extract complex trajectory estimates from flow.
25
Raudies F, Neumann H. Modeling heading and path perception from optic flow in the case of independently moving objects. Front Behav Neurosci 2013; 7:23. [PMID: 23554589 PMCID: PMC3612589 DOI: 10.3389/fnbeh.2013.00023]
Abstract
Humans are usually accurate when estimating heading or path from optic flow, even in the presence of independently moving objects (IMOs) in an otherwise rigid scene. To invoke significant biases in perceived heading, IMOs have to be large and obscure the focus of expansion (FOE) in the image plane, which is the point of approach. For path estimation during curvilinear self-motion, no significant biases were found in the presence of IMOs. What makes humans so robust when estimating heading or path from optic flow? We derive analytical models of optic flow for linear and curvilinear self-motion using geometric scene models. Heading biases of a linear least squares method built upon these analytical models are large, larger than those reported for humans. This motivated us to study segmentation cues that are available from optic flow. We derive models of accretion/deletion, expansion/contraction, acceleration/deceleration, local spatial curvature, and local temporal curvature to be used as cues for segmenting an IMO from the background. Integrating these segmentation cues into our method of estimating heading or path now explains human psychophysical data and extends, as well as unifies, previous investigations. Our analysis suggests that various cues available from optic flow help to segment IMOs and thus make humans' heading and path perception robust in the presence of such IMOs.
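The linear least squares approach the authors analyze rests on the classical constraint that, for pure translation through a rigid scene, every flow vector points away from the focus of expansion (FOE). A minimal sketch of that constraint follows (illustrative only; the paper's analytical models also cover curvilinear self-motion and the segmentation cues that handle IMOs). Each flow vector v at image point p must satisfy cross(v, p - e) = 0, which stacks into a linear system in the FOE e.

```python
import numpy as np

def estimate_foe(points, flow):
    """Least-squares focus-of-expansion from a radial flow field.

    For translation toward a rigid scene, flow at image point p points
    along (p - e), so cross(flow, p - e) = 0. Expanding the 2D cross
    product gives one linear equation per point in e = (ex, ey):
        vy * ex - vx * ey = vy * px - vx * py
    """
    vx, vy = flow[:, 0], flow[:, 1]
    px, py = points[:, 0], points[:, 1]
    A = np.column_stack([vy, -vx])
    b = vy * px - vx * py
    e, *_ = np.linalg.lstsq(A, b, rcond=None)
    return e

rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(200, 2))
true_foe = np.array([0.2, -0.1])
depths = rng.uniform(1, 5, size=200)
flow = (pts - true_foe) / depths[:, None]   # expansion flow, depth-scaled

e = estimate_foe(pts, flow)   # recovers (0.2, -0.1)
```

Note that the constraint is independent of the unknown depths, which is why heading is recoverable without scene structure; an IMO violates the constraint at its image points, which is exactly why unsegmented least squares becomes biased.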
Affiliation(s)
- Florian Raudies: Center for Computational Neuroscience and Neural Technology, Boston University, Boston, MA, USA; Center of Excellence for Learning in Education, Science, and Technology, Boston University, Boston, MA, USA
- Heiko Neumann: Center of Excellence for Learning in Education, Science, and Technology, Boston University, Boston, MA, USA; Institute for Neural Information Processing, University of Ulm, Ulm, Germany

26
Use of speed cues in the detection of moving objects by moving observers. Vision Res 2012; 59:17-24. [PMID: 22406544 DOI: 10.1016/j.visres.2012.02.006]
Abstract
When an observer moves through an environment containing stationary and moving objects, he or she must be able to determine which objects are moving relative to the others in order to navigate successfully and avoid collisions. We investigated whether image speed can be used as a cue to detect a moving object in the scene. Our results show that image speed can be used to detect moving objects as long as the object is moving sufficiently faster or slower than it would if it were part of the stationary scene.
27
Tsui JMG, Pack CC. Contrast sensitivity of MT receptive field centers and surrounds. J Neurophysiol 2011; 106:1888-900. [DOI: 10.1152/jn.00165.2011]
Abstract
Neurons throughout the visual system have receptive fields with both excitatory and suppressive components. The latter are responsible for a phenomenon known as surround suppression, in which responses decrease as a stimulus is extended beyond a certain size. Previous work has shown that surround suppression in the primary visual cortex depends strongly on stimulus contrast. Such complex center-surround interactions are thought to relate to a variety of functions, although little is known about how they affect responses in the extrastriate visual cortex. We have therefore examined the interaction of center and surround in the middle temporal (MT) area of the macaque (Macaca mulatta) extrastriate cortex by recording neuronal responses to stimuli of different sizes and contrasts. Our findings indicate that surround suppression in MT is highly contrast dependent, with the strongest suppression emerging unexpectedly at intermediate stimulus contrasts. These results can be explained by a simple model that takes into account the nonlinear contrast sensitivity of the neurons that provide input to MT. The model also provides a qualitative link to previous reports of a topographic organization of area MT based on clusters of neurons with differing surround suppression strength. We show that this organization can be detected in the gamma-band local field potentials (LFPs) and that the model parameters can predict the contrast sensitivity of these LFP responses. Overall, our results show that surround suppression in area MT is far more common than previously suspected, highlighting the potential functional importance of the accumulation of nonlinearities along the dorsal visual pathway.
Affiliation(s)
- James M. G. Tsui: Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada
- Christopher C. Pack: Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada

28
Kishore S, Hornick N, Sato N, Page WK, Duffy CJ. Driving strategy alters neuronal responses to self-movement: cortical mechanisms of distracted driving. Cereb Cortex 2012; 22:201-8. [PMID: 21653287 DOI: 10.1093/cercor/bhr115]
Abstract
We presented naturalistic combinations of virtual self-movement stimuli while recording neuronal activity in monkey cerebral cortex. Monkeys used a joystick to drive to a straight ahead heading direction guided by either object motion or optic flow. The selected cue dominates neuronal responses, often mimicking responses evoked when that stimulus is presented alone. In some neurons, driving strategy creates selective response additivities. In others, it creates vulnerabilities to the disruptive effects of independently moving objects. Such cue interactions may be related to the disruptive effects of independently moving objects in Alzheimer's disease patients with navigational deficits.
Affiliation(s)
- Sarita Kishore: Department of Neurology, University of Rochester Medical Center, Rochester, NY 14642, USA

29
Cortical neurons combine visual cues about self-movement. Exp Brain Res 2010; 206:283-97. [PMID: 20852992 DOI: 10.1007/s00221-010-2406-0]
Abstract
Visual cues about self-movement are derived from the patterns of optic flow and the relative motion of discrete objects. We recorded dorsal medial superior temporal (MSTd) cortical neurons in monkeys that held centered visual fixation while viewing optic flow and object motion stimuli simulating the self-movement cues seen during translation on a circular path. Twenty stimulus configurations presented naturalistic combinations of optic flow with superimposed objects that simulated either earth-fixed landmark objects or independently moving animate objects. Landmarks and animate objects yielded the same response interactions with optic flow: mainly additive effects, with a substantial number of sub- and super-additive responses. Sub- and super-additive interactions reflect each neuron's local and global motion sensitivities: local motion sensitivity is based on the spatial arrangement of directions created by object motion and the surrounding optic flow; global motion sensitivity is based on the temporal sequence of self-movement headings that define a simulated path through the environment. We conclude that MST neurons' spatio-temporal response properties combine object motion and optic flow cues to represent self-movement in diverse, naturalistic circumstances.
30
Mapstone M, Duffy CJ. Approaching objects cause confusion in patients with Alzheimer's disease regarding their direction of self-movement. Brain 2010; 133:2690-701. [PMID: 20647265 DOI: 10.1093/brain/awq140]
Abstract
Navigation requires real-time heading estimation based on self-movement cues from optic flow and object motion. We presented a simulated heading discrimination task to young, middle-aged, and older normal control subjects and to patients with mild cognitive impairment or Alzheimer's disease. Age-related decline and neurodegenerative disease effects were evident on a battery of neuropsychological and visual motion psychophysical measures. All subject groups made more accurate heading judgements when using optic flow patterns than when using simulated movement past earth-fixed objects. When optic flow and a congruent object were presented together, heading judgements showed intermediate accuracy. In separate trials, we combined optic flow with non-congruent object motion, simulating an independently moving object. In the case of non-congruent objects, almost all of our subjects shifted their perceived self-movement toward heading in the direction of the moving object. However, patients with Alzheimer's disease uniquely indicated that perceived self-movement was straight ahead, in the direction of visual fixation. The tendency to be confused by objects that appear to move independently in the simulated visual scene corresponded to the difficulty patients with Alzheimer's disease encountered in real-world navigation through the hospital lobby (R² = 0.87); this was not the case in older normal controls (R² = 0.09). We conclude that perceptual factors limit safe, autonomous navigation in early Alzheimer's disease. In particular, the presence of independently moving objects in naturalistic environments limits the capacity of patients with Alzheimer's disease to judge their heading of self-movement.
Collapse
Affiliation(s)
- Mark Mapstone
- Department of Neurology, University of Rochester Medical Centre, 601 Elmwood Avenue, Rochester, NY 14642-0673, USA
| | | |
Collapse
|
31
|
Sikoglu EM, Calabro FJ, Beardsley SA, Vaina LM. Integration mechanisms for heading perception. Seeing Perceiving 2010; 23:197-221. [PMID: 20529443 DOI: 10.1163/187847510x503605] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
Previous studies of heading perception suggest that human observers employ spatiotemporal pooling to accommodate noise in optic flow stimuli. Here, we investigated how spatial and temporal integration mechanisms are used for judgments of heading through a psychophysical experiment involving three different types of noise. Furthermore, we developed two ideal observer models to study the components of the spatial information used by observers when performing the heading task. In the psychophysical experiment, we applied three types of direction noise to optic flow stimuli to differentiate the involvement of spatial and temporal integration mechanisms. The results indicate that temporal integration mechanisms play a role in heading perception, though their contribution is weaker than that of the spatial integration mechanisms. To elucidate how observers process spatial information to extract heading from a noisy optic flow field, we compared psychophysical performance in response to random-walk direction noise with that of two ideal observer models (IOMs). One model relied on 2D screen-projected flow information (2D-IOM), while the other used environmental, i.e., 3D, flow information (3D-IOM). The results suggest that human observers compensate for the loss of information during the 2D retinal projection of the visual scene for modest amounts of noise. This suggests the likelihood of a 3D reconstruction during heading perception, which breaks down under extreme levels of noise.
Collapse
Affiliation(s)
- Elif M Sikoglu
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA 02215, USA
| | | | | | | |
Collapse
|
32
|
Royden CS, Connors EM. The detection of moving objects by moving observers. Vision Res 2010; 50:1014-24. [DOI: 10.1016/j.visres.2010.03.008] [Citation(s) in RCA: 32] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2009] [Revised: 01/29/2010] [Accepted: 03/16/2010] [Indexed: 11/24/2022]
|
33
|
Browning NA, Grossberg S, Mingolla E. A neural model of how the brain computes heading from optic flow in realistic scenes. Cogn Psychol 2009; 59:320-56. [PMID: 19716125 DOI: 10.1016/j.cogpsych.2009.07.002] [Citation(s) in RCA: 28] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2008] [Accepted: 07/20/2009] [Indexed: 11/15/2022]
Abstract
Visually based navigation is a key competence during spatial cognition. Animals avoid obstacles and approach goals in novel cluttered environments using optic flow to compute heading with respect to the environment. Most navigation models try either to explain data or to demonstrate navigational competence in real-world environments without regard to behavioral and neural substrates. The current article develops a model that does both. The ViSTARS neural model describes interactions among neurons in the primate magnocellular pathway, including V1, MT(+), and MSTd. Model outputs are quantitatively similar to human heading data in response to complex natural scenes. The model estimates heading to within 1.5 degrees in random dot or photo-realistically rendered scenes, and within 3 degrees in video streams from driving in real-world environments. Simulated rotations of less than 1 degree/s do not affect heading estimates, but faster simulated rotation rates do, as in humans. The model is part of a larger navigational system that identifies and tracks objects while navigating in cluttered environments.
Collapse
Affiliation(s)
- N Andrew Browning
- Department of Cognitive and Neural Systems, Center for Adaptive Systems, Boston University, 677 Beacon Street, Boston, MA 02215, USA
| | | | | |
Collapse
|
34
|
Kim NG. Dynamic Occlusion and Optical Flow From Corrugated Surfaces. ECOLOGICAL PSYCHOLOGY 2008. [DOI: 10.1080/10407410802189166] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
|
35
|
A model for simultaneous computation of heading and depth in the presence of rotations. Vision Res 2007; 47:3025-40. [DOI: 10.1016/j.visres.2007.08.008] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2007] [Revised: 08/15/2007] [Accepted: 08/17/2007] [Indexed: 11/22/2022]
|
36
|
Bex PJ, Falkenberg HK. Resolution of complex motion detectors in the central and peripheral visual field. JOURNAL OF THE OPTICAL SOCIETY OF AMERICA. A, OPTICS, IMAGE SCIENCE, AND VISION 2006; 23:1598-607. [PMID: 16783422 DOI: 10.1364/josaa.23.001598] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
We examine how local direction signals are combined to compute the focus of radial motion (FRM) in random dot patterns, and how this process changes across the visual field. Equivalent noise analysis showed that a loss in FRM accuracy was largely attributable to an increase in local motion detector noise, with little or no change in efficiency across the visual field. The minimum separation for discriminating the foci of two overlapping optic flow patterns increased in the periphery faster than predicted from the resolution for a single FRM. This behavior requires that observers average numerous local velocities to estimate the FRM, which confers resistance to internal and external noise and endows the system with position invariance. However, such pooling limits the precision with which multiple looming objects can be discriminated, especially in the peripheral visual field.
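The equivalent noise analysis mentioned above decomposes thresholds into internal noise and sampling efficiency. As a minimal illustrative sketch (my own construction, not code from the cited study; all parameter names and values are assumptions), the standard model T^2 = (sigma_int^2 + sigma_ext^2) / n can be fit by linear regression on squared thresholds:

```python
import numpy as np

# Standard equivalent-noise model: threshold^2 = (sig_int^2 + sig_ext^2) / n,
# where sig_int is internal (detector) noise and n is sampling efficiency.
sig_int_true, n_true = 2.0, 10.0
sig_ext = np.array([0.0, 1.0, 2.0, 4.0, 8.0])      # external direction noise levels
thresholds = np.sqrt((sig_int_true**2 + sig_ext**2) / n_true)

# T^2 is linear in sig_ext^2: slope = 1/n, intercept = sig_int^2 / n.
slope, intercept = np.polyfit(sig_ext**2, thresholds**2, 1)
n_hat = 1.0 / slope
sig_int_hat = np.sqrt(intercept * n_hat)
```

A rise in `sig_int_hat` with eccentricity, at constant `n_hat`, is the pattern the abstract describes: more local detector noise, unchanged efficiency.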
Collapse
Affiliation(s)
- Peter J Bex
- Institute of Ophthalmology, University College London, London EC1V 9EL, UK.
| | | |
Collapse
|
37
|
Logan DJ, Duffy CJ. Cortical area MSTd combines visual cues to represent 3-D self-movement. Cereb Cortex 2005; 16:1494-507. [PMID: 16339087 DOI: 10.1093/cercor/bhj082] [Citation(s) in RCA: 35] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
As arboreal primates move through the jungle, they are immersed in visual motion that they must distinguish from the movement of predators and prey. We recorded dorsal medial superior temporal (MSTd) cortical neuronal responses to visual motion stimuli simulating self-movement and object motion. MSTd neurons encode the heading of simulated self-movement in three-dimensional (3-D) space. 3-D heading responses can be evoked either by the large patterns of visual motion in optic flow or by the visual object motion seen when an observer passes an earth-fixed landmark. Responses to naturalistically combined optic flow and object motion depend on their relative directions: an object moving as part of the optic flow field has little effect on neuronal responses. In contrast, an object moving separately from the optic flow field has large effects, decreasing the amplitude of the population response and shifting the population's heading estimate to match the direction of object motion as the object moves toward central vision. These effects parallel those seen in human heading perception with minimal effects of objects moving with the optic flow and substantial effects of objects violating the optic flow. We conclude that MSTd can contribute to navigation by supporting 3-D heading estimation, potentially switching from optic flow to object cues when a moving object passes in front of the observer.
Collapse
Affiliation(s)
- David J Logan
- Department of Neurology, and the Center for Visual Science, The University of Rochester Medical Center, Rochester, NY 14642, USA
| | | |
Collapse
|
38
|
Wurfel JD, Barraza JF, Grzywacz NM. Measurement of rate of expansion in the perception of radial motion. Vision Res 2005; 45:2740-51. [PMID: 16023697 DOI: 10.1016/j.visres.2005.03.022] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2004] [Revised: 03/08/2005] [Accepted: 03/29/2005] [Indexed: 11/29/2022]
Abstract
Optic flow generated by rigid surface patches can be decomposed into a small number of elementary motion types. In these experiments, we show that the human visual system can evaluate expansion, one of these motion types, metrically. Moreover, we show that the discrimination of rates of expansion is spatially local. Because the estimation of the focus of expansion is somewhat imprecise, this locality sometimes produces predictable errors in the estimation of the rate of expansion. One can make such predictions with a model adapted from one previously developed for angular-velocity discrimination.
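The link between an imprecise focus-of-expansion estimate and a predictable rate error can be sketched numerically (an illustration of the geometry only, not the cited model; the values are arbitrary): the least-squares expansion rate is the projection of the flow onto radial directions from the assumed focus, and a mislocalized focus systematically lowers that projection.

```python
import numpy as np

rng = np.random.default_rng(1)
k_true = 0.2                         # true rate of expansion (1/s)
foe_true = np.zeros(2)               # true focus of expansion

pts = rng.uniform(-10.0, 10.0, size=(200, 2))
flow = k_true * (pts - foe_true)     # pure expansion about the focus

def expansion_rate(pts, flow, foe):
    # Least-squares rate: project flow onto radial directions
    # from the assumed focus of expansion.
    r = pts - foe
    return np.sum(flow * r) / np.sum(r * r)

k_exact = expansion_rate(pts, flow, foe_true)
# Mislocalizing the focus by 5 deg shrinks the radial projection,
# so the recovered rate is underestimated.
k_biased = expansion_rate(pts, flow, foe_true + np.array([5.0, 0.0]))
```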
Collapse
Affiliation(s)
- Jeff D Wurfel
- Neuroscience Graduate Program, University of Southern California, Hedco Neuroscience Building, MC 2520, Los Angeles, CA 90089-2520, USA.
| | | | | |
Collapse
|
39
|
Bex PJ, Dakin SC. Spatial interference among moving targets. Vision Res 2005; 45:1385-98. [PMID: 15743609 DOI: 10.1016/j.visres.2004.12.001] [Citation(s) in RCA: 26] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2004] [Revised: 11/18/2004] [Accepted: 12/08/2004] [Indexed: 11/16/2022]
Abstract
Peripheral vision for static form is limited both by reduced spatial acuity and by interference among adjacent features ('crowding'). However, the visibility of acuity-corrected image motion is relatively constant across the visual field. We measured whether spatial interference among nearby moving elements is similarly invariant of retinal eccentricity and assessed if motion integration could account for any observed sensitivity loss. We report that sensitivity to the direction of motion of a central target-highly visible in isolation-was strongly impaired by four drifting flanking elements. The extent of spatial interference increased with eccentricity. Random-direction flanks and flanks whose directions formed global patterns of rotation or expansion were more disruptive than flanks forming global patterns of translation, regardless of the relative direction of the target element. Spatial interference was low-pass tuned for spatial frequency and broadly tuned for temporal frequency. We show that these results challenge the generality of models of spatial interference that are based on retinal image quality, masking, confusions between target and flanks, attentional resolution limits or (simple) "averaging" of element parameters. Instead, the results suggest that spatial interference is a consequence of the integration of meaningful image structure within large receptive fields. The underlying connectivity of this integration favours low spatial frequency structure but is broadly tuned for speed.
Collapse
Affiliation(s)
- Peter J Bex
- Institute of Ophthalmology, University College London, 11-43 Bath Street, London EC1V 9EL, UK.
| | | |
Collapse
|
40
|
Hanada M. Computational analyses for illusory transformations in the optic flow field and heading perception in the presence of moving objects. Vision Res 2005; 45:749-58. [PMID: 15639501 DOI: 10.1016/j.visres.2004.09.037] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2004] [Revised: 09/23/2004] [Indexed: 11/24/2022]
Abstract
When we see a stimulus of a radial flow field (the target flow) overlapped with a lateral flow field or another radial flow field, the focus of expansion (FOE) of the target radial flow appears to be shifted. Royden and Conti [(2003). A model using MT-like motion-opponent operators explains an illusory transformation in the optic flow field. Vision Research, 43, 2811-2826] argued that local motion subtraction is crucial for explaining this phenomenon. Here, the flow field that causes the illusory displacement of the FOE was computationally analyzed. It was shown that this flow field is approximately a rigid-motion flow; it can be generated by simulating a situation in which an observer moves toward a stationary scene. The heading direction for that observer corresponds to the perceived position of the FOE of the radial flow pattern. This implies that any algorithm that assumes rigidity of the scene and recovers veridical heading can explain the bias in the perceived FOE; local motion subtraction is not needed to explain the phenomenon. Furthermore, the flow for an observer's translation in the presence of objects moving laterally or in depth was computationally analyzed. Algorithms that minimize standard error functions while assigning lower weights to the independently moving objects showed biases in recovered heading similar to those of human observers. This implies that local motion subtraction is also not necessary to explain the bias in perceived heading due to an object moving laterally or in depth, contrary to the argument of Royden [(2002). Computing heading in the presence of moving objects: a model that uses motion-opponent operators. Vision Research, 42, 3043-3058].
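The down-weighting idea in this abstract can be illustrated with a small sketch (my own construction under simplifying assumptions, not the author's code): heading, i.e. the FOE, is recovered by weighted least squares over the constraint that each flow vector is radial from the FOE, with low weights on vectors belonging to an independently moving object.

```python
import numpy as np

rng = np.random.default_rng(0)
true_foe = np.array([2.0, -1.0])

# Radial flow for pure observer translation: every image point
# streams away from the focus of expansion (FOE), i.e. the heading.
pts = rng.uniform(-10.0, 10.0, size=(200, 2))
flow = 0.1 * (pts - true_foe)

# An independently moving object: a patch whose uniform lateral
# motion violates the radial pattern.
obj_pts = rng.uniform(-2.0, 2.0, size=(30, 2))
obj_flow = np.tile([0.5, 0.0], (30, 1))
P = np.vstack([pts, obj_pts])
V = np.vstack([flow, obj_flow])

def estimate_foe(P, V, w):
    # Each flow vector (u, v) must be parallel to (p - foe):
    #   u * (y - fy) - v * (x - fx) = 0  =>  v * fx - u * fy = v * x - u * y
    # Solve this weighted linear system for the FOE (fx, fy).
    A = np.column_stack([V[:, 1], -V[:, 0]]) * w[:, None]
    b = (V[:, 1] * P[:, 0] - V[:, 0] * P[:, 1]) * w
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

w_uniform = np.ones(len(P))
w_robust = np.concatenate([np.ones(200), np.full(30, 0.05)])
foe_biased = estimate_foe(P, V, w_uniform)   # object pulls the estimate
foe_robust = estimate_foe(P, V, w_robust)    # down-weighted object
```

With uniform weights the recovered heading is biased by the object, mirroring the human bias the abstract discusses; down-weighting the object's vectors restores a near-veridical estimate.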
Collapse
Affiliation(s)
- Mitsuhiko Hanada
- Department of Media Architecture, Future University-Hakodate, 116-2 Kamedanakano-cho, Hakodate, Hokkaido 041-8655, Japan.
| |
Collapse
|
41
|
Perrone JA. A visual motion sensor based on the properties of V1 and MT neurons. Vision Res 2004; 44:1733-55. [PMID: 15135991 DOI: 10.1016/j.visres.2004.03.003] [Citation(s) in RCA: 41] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2003] [Revised: 02/23/2004] [Indexed: 11/20/2022]
Abstract
The motion response properties of neurons increase in complexity as one moves from primary visual cortex (V1) up to higher cortical areas such as the middle temporal (MT) and medial superior temporal (MST) areas. Many of the features of V1 neurons can now be replicated using computational models based on spatiotemporal filters. However, until recently, relatively little was known about how the motion-analysing properties of MT neurons could originate from the V1 neurons that provide their inputs. This has constrained the development of models of the MT-MST stages, which have been linked to higher-level motion processing tasks such as self-motion perception and depth estimation. I describe the construction of a motion sensor built up in stages from two spatiotemporal filters with properties based on V1 neurons. The resulting composite sensor is shown to have spatiotemporal frequency response profiles and speed and direction tuning responses that are comparable to those of MT neurons. The sensor is designed to work with digital images and can therefore be used as a realistic front end to models of MT and MST neuron processing; it can be probed with the same two-dimensional motion stimuli used to test the neurons and has the potential to act as a building block for more complex models of motion processing.
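A spatiotemporal-filter motion sensor of the general kind described here can be sketched with quadrature x-t filters (this is a textbook motion-energy construction under assumed parameters, not Perrone's actual model): squaring and summing the responses of an even and an odd filter tuned to one drift direction yields a phase-independent, direction-selective energy signal.

```python
import numpy as np

# x-t stimulus: a sinusoidal grating drifting rightward.
fx, ft = 0.1, 0.2                           # cycles/pixel, cycles/frame
x = np.arange(64)
t = np.arange(64)
T, X = np.meshgrid(t, x, indexing="ij")
stim = np.sin(2 * np.pi * (fx * X - ft * T))

# Gaussian window limits the filters' spatiotemporal extent.
win = np.exp(-((X - 32.0) ** 2 + (T - 32.0) ** 2) / (2 * 12.0**2))

def motion_energy(pref_ft):
    # Quadrature pair of x-t filters tuned to one drift direction;
    # energy = squared even response + squared odd response.
    phase = 2 * np.pi * (fx * X - pref_ft * T)
    even = np.sum(stim * np.cos(phase) * win)
    odd = np.sum(stim * np.sin(phase) * win)
    return even**2 + odd**2

e_right = motion_energy(ft)     # filter matched to the stimulus drift
e_left = motion_energy(-ft)     # opposite-direction filter
opponent = e_right - e_left     # > 0 signals rightward motion
```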
Collapse
Affiliation(s)
- John A Perrone
- Department of Psychology, The University of Waikato, Private Bag 3105, Hamilton, New Zealand.
| |
Collapse
|
42
|
Royden CS, Conti DM. A model using MT-like motion-opponent operators explains an illusory transformation in the optic flow field. Vision Res 2003; 43:2811-26. [PMID: 14568097 DOI: 10.1016/s0042-6989(03)00481-4] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Abstract
Previous studies have shown that a physiologically based model using motion-opponent operators to compute heading performs accurately for simulated observer translations. Here we show how this model can explain an illusory shift in the perceived focus of expansion of a radial flow field that occurs when a field of laterally moving dots is superimposed on a field of radially moving dots. Furthermore, we can use the model to predict the perceptual shift of the focus of expansion for novel visual stimuli. These results support the hypothesis that this illusion results from motion subtraction during the processing of optic flow fields.
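The shift itself has a simple stimulus-level geometry that can be sketched for a vector-summed field (an illustration only, with arbitrary values; the transparent displays used in the cited studies differ, and this is not an implementation of the motion-opponent model): adding a uniform lateral component to a radial field relocates the combined field's singular point.

```python
import numpy as np

k = 0.1                                  # expansion rate of the radial field
foe = np.array([0.0, 0.0])               # focus of expansion of the radial field
lateral = np.array([0.5, 0.0])           # superimposed uniform lateral motion

# Dense grid of image positions and the vector-summed flow field.
xs = np.linspace(-10.0, 10.0, 201)
X, Y = np.meshgrid(xs, xs)
U = k * (X - foe[0]) + lateral[0]
V = k * (Y - foe[1]) + lateral[1]

# The combined field's singular point (zero flow) sits where
# k * (p - foe) = -lateral, i.e. displaced opposite the lateral motion.
speed = np.hypot(U, V)
iy, ix = np.unravel_index(np.argmin(speed), speed.shape)
shifted_foe = np.array([X[iy, ix], Y[iy, ix]])
predicted = foe - lateral / k            # analytic singular point
```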
Collapse
Affiliation(s)
- Constance S Royden
- Department of Mathematics and Computer Science, College of the Holy Cross, P.O. Box 116A, Worcester, MA 01610, USA.
| | | |
Collapse
|