1. Hacohen-Brown S, Gilboa-Schechtman E, Zaidel A. Modality-specific effects of threat on self-motion perception. BMC Biol 2024; 22:120. [PMID: 38783286] [PMCID: PMC11119305] [DOI: 10.1186/s12915-024-01911-3]
Abstract
BACKGROUND: Threat and individual differences in threat-processing bias perception of stimuli in the environment. Yet, their effect on perception of one's own (body-based) self-motion in space is unknown. Here, we tested the effects of threat on self-motion perception using a multisensory motion simulator with concurrent threatening or neutral auditory stimuli. RESULTS: Strikingly, threat had opposite effects on vestibular and visual self-motion perception, leading to overestimation of vestibular, but underestimation of visual self-motions. Trait anxiety tended to be associated with an enhanced effect of threat on estimates of self-motion for both modalities. CONCLUSIONS: Enhanced vestibular perception under threat might stem from shared neural substrates with emotional processing, whereas diminished visual self-motion perception may indicate that a threatening stimulus diverts attention away from optic flow integration. Thus, threat induces modality-specific biases in everyday experiences of self-motion.
Affiliation(s)
- Shira Hacohen-Brown: Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, 5290002, Ramat Gan, Israel
- Eva Gilboa-Schechtman: Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, 5290002, Ramat Gan, Israel; Department of Psychology, Bar-Ilan University, 5290002, Ramat Gan, Israel
- Adam Zaidel: Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, 5290002, Ramat Gan, Israel

2. Hülemeier AG, Lappe M. Limb articulation of biological motion can induce illusory motion perception during self-motion. Iperception 2024; 15:20416695241246755. [PMID: 38903983] [PMCID: PMC11188058] [DOI: 10.1177/20416695241246755]
Abstract
When one walks toward a crowd of pedestrians, dealing with their biological motion while controlling one's own self-motion is a difficult perceptual task. Limb articulation of a walker is naturally coupled to the walker's translation through the scene and allows the separation of optic flow generated by self-motion from the biological motion of other pedestrians. Recent research has shown that if limb articulation and translation mismatch, such as for walking in place, self-motion perception becomes biased. This bias may reflect an illusory motion attributed to the pedestrian crowd from the articulation of their limbs. To investigate this hypothesis, we presented observers with a simulation of forward self-motion toward a laterally moving crowd of point-light walkers and asked them to report the perceived lateral speed of the crowd. To investigate the dependence of the crowd speed percept on biological motion, we also included conditions in which the points of the walker were spatially scrambled to destroy body form and limb articulation. We observed illusory crowd speed percepts that were related to the articulation rate of the biological motion. Scrambled walkers also produced illusory motion but it was not related to articulation rate. We conclude that limb articulation induces percepts of crowd motion that can be used for interpreting self-motion toward crowds.
Affiliation(s)
- Anna-Gesina Hülemeier: Institute for Psychology, University of Münster, Münster, North-Rhine Westphalia, Germany
- Markus Lappe: Institute for Psychology, University of Münster, Münster, North-Rhine Westphalia, Germany

3. Hülemeier AG, Lappe M. Illusory percepts of curvilinear self-motion when moving through crowds. J Vis 2023; 23:6. [PMID: 38112491] [PMCID: PMC10732088] [DOI: 10.1167/jov.23.14.6]
Abstract
Self-motion generates optic flow, a pattern of expanding visual motion. Heading estimation from optic flow analysis is accurate in rigid environments, but it becomes challenging when other human walkers introduce independent motion to the scene. Previous studies showed that heading perception is surprisingly accurate when moving through a crowd of walkers but revealed strong heading biases when either articulation or translation of biological motion was presented in isolation. We hypothesized that these biases resulted from misperceiving the self-motion as curvilinear. Such errors might manifest as opposite biases depending on whether the observer perceived the crowd motion as an indication of their self-translation or self-rotation. Our study investigated the link between heading biases and illusory path perception. Participants assessed heading and path perception while observing optic flow stimuli with varying walker movements. Self-motion perception was accurate during natural locomotion (articulation and translation), but significant heading biases occurred when walkers only articulated or translated. In this case, participants often reported a curved path of travel. Heading error and curvature pointed in opposite directions. On average, participants perceived the walker motion as evidence for viewpoint rotation, leading to curvilinear path percepts.
Affiliation(s)
- Markus Lappe: Department of Psychology, University of Münster, Münster, Germany

4. Vafaii H, Yates JL, Butts DA. Hierarchical VAEs provide a normative account of motion processing in the primate brain. bioRxiv 2023:2023.09.27.559646. [PMID: 37808629] [PMCID: PMC10557690] [DOI: 10.1101/2023.09.27.559646]
Abstract
The relationship between perception and inference, as postulated by Helmholtz in the 19th century, is paralleled in modern machine learning by generative models like Variational Autoencoders (VAEs) and their hierarchical variants. Here, we evaluate the role of hierarchical inference and its alignment with brain function in the domain of motion perception. We first introduce a novel synthetic data framework, Retinal Optic Flow Learning (ROFL), which enables control over motion statistics and their causes. We then present a new hierarchical VAE and test it against alternative models on two downstream tasks: (i) predicting ground truth causes of retinal optic flow (e.g., self-motion); and (ii) predicting the responses of neurons in the motion processing pathway of primates. We manipulate the model architectures (hierarchical versus non-hierarchical), loss functions, and the causal structure of the motion stimuli. We find that hierarchical latent structure in the model leads to several improvements. First, it improves the linear decodability of ground truth factors and does so in a sparse and disentangled manner. Second, our hierarchical VAE outperforms previous state-of-the-art models in predicting neuronal responses and exhibits sparse latent-to-neuron relationships. These results depend on the causal structure of the world, indicating that alignment between brains and artificial neural networks depends not only on architecture but also on matching ecologically relevant stimulus statistics. Taken together, our results suggest that hierarchical Bayesian inference underlies the brain's understanding of the world, and hierarchical VAEs can effectively model this understanding.
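The "linear decodability" analyses mentioned above are typically run by regressing ground-truth factors onto a model's latent variables. The sketch below is an illustrative stand-in (random synthetic latents and factors, ridge regression), not the authors' ROFL pipeline or code:

```python
# Illustrative sketch (not the authors' code): measuring how well ground-truth
# flow causes (e.g., self-motion velocity) can be read out linearly from a
# model's latent representation. Latents and factors here are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_latents, n_factors = 2000, 32, 3

# Stand-in data: ground-truth causes and latents that partially encode them.
factors = rng.normal(size=(n_samples, n_factors))      # e.g., 3D self-motion velocity
mixing = rng.normal(size=(n_factors, n_latents))
latents = factors @ mixing + 0.5 * rng.normal(size=(n_samples, n_latents))

# Split, fit a linear (ridge) decoder from latents to factors, report R^2.
n_train = n_samples // 2
X_tr, X_te = latents[:n_train], latents[n_train:]
Y_tr, Y_te = factors[:n_train], factors[n_train:]
lam = 1e-2
W = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(n_latents), X_tr.T @ Y_tr)
Y_hat = X_te @ W
r2 = 1 - ((Y_te - Y_hat) ** 2).sum(0) / ((Y_te - Y_te.mean(0)) ** 2).sum(0)
print("per-factor R^2:", np.round(r2, 3))
```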

5. Ali M, Decker E, Layton OW. Temporal stability of human heading perception. J Vis 2023; 23:8. [PMID: 36786748] [PMCID: PMC9932552] [DOI: 10.1167/jov.23.2.8]
Abstract
Humans are capable of accurately judging their heading from optic flow during straight forward self-motion. Despite the global coherence in the optic flow field, however, visual clutter and other naturalistic conditions create constant flux on the eye. This presents a problem that must be overcome to accurately perceive heading from optic flow: the visual system must maintain sensitivity to optic flow variations that correspond with actual changes in self-motion and disregard those that do not. One solution could involve integrating optic flow over time to stabilize heading signals while suppressing transient fluctuations. Stability, however, may come at the cost of sluggishness. Here, we investigate the stability of human heading perception when subjects judge their heading after the simulated direction of self-motion changes. We found that the initial heading exerted an attractive influence on judgments of the final heading. Consistent with an evolving heading representation, bias toward the initial heading increased with the size of the heading change and as the viewing duration of the optic flow consistent with the final heading decreased. Introducing periods of sensory dropout (blackouts) later in the trial increased bias, whereas an earlier one did not. Simulations of a neural model, the Competitive Dynamics Model, demonstrate that a mechanism that produces an evolving heading signal through recurrent competitive interactions largely captures the human data. Our findings characterize how the visual system balances stability in heading perception with sensitivity to change and support the hypothesis that heading perception evolves over time.
Affiliation(s)
- Mufaddal Ali: Department of Computer Science, Colby College, Waterville, ME, USA
- Eli Decker: Department of Computer Science, Colby College, Waterville, ME, USA
- Oliver W. Layton: Department of Computer Science, Colby College, Waterville, ME, USA, https://sites.google.com/colby.edu/owlab

6. Horrocks EAB, Mareschal I, Saleem AB. Walking humans and running mice: perception and neural encoding of optic flow during self-motion. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210450. [PMID: 36511417] [PMCID: PMC9745880] [DOI: 10.1098/rstb.2021.0450]
Abstract
Locomotion produces full-field optic flow that often dominates the visual motion inputs to an observer. The perception of optic flow is in turn important for animals to guide their heading and interact with moving objects. Understanding how locomotion influences optic flow processing and perception is therefore essential to understand how animals successfully interact with their environment. Here, we review research investigating how perception and neural encoding of optic flow are altered during self-motion, focusing on locomotion. Self-motion has been found to influence estimation and sensitivity for optic flow speed and direction. Nonvisual self-motion signals also increase compensation for self-driven optic flow when parsing the visual motion of moving objects. The integration of visual and nonvisual self-motion signals largely follows principles of Bayesian inference and can improve the precision and accuracy of self-motion perception. The calibration of visual and nonvisual self-motion signals is dynamic, reflecting the changing visuomotor contingencies across different environmental contexts. Throughout this review, we consider experimental research using humans, non-human primates and mice. We highlight experimental challenges and opportunities afforded by each of these species and draw parallels between experimental findings. These findings reveal a profound influence of locomotion on optic flow processing and perception across species. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
Affiliation(s)
- Edward A. B. Horrocks: Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London WC1H 0AP, UK
- Isabelle Mareschal: School of Biological and Behavioural Sciences, Queen Mary, University of London, London E1 4NS, UK
- Aman B. Saleem: Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London WC1H 0AP, UK

7. Bill J, Gershman SJ, Drugowitsch J. Visual motion perception as online hierarchical inference. Nat Commun 2022; 13:7403. [PMID: 36456546] [PMCID: PMC9715570] [DOI: 10.1038/s41467-022-34805-5]
Abstract
Identifying the structure of motion relations in the environment is critical for navigation, tracking, prediction, and pursuit. Yet, little is known about the mental and neural computations that allow the visual system to infer this structure online from a volatile stream of visual information. We propose online hierarchical Bayesian inference as a principled solution for how the brain might solve this complex perceptual task. We derive an online Expectation-Maximization algorithm that explains human percepts qualitatively and quantitatively for a diverse set of stimuli, covering classical psychophysics experiments, ambiguous motion scenes, and illusory motion displays. We thereby identify normative explanations for the origin of human motion structure perception and make testable predictions for future psychophysics experiments. The proposed online hierarchical inference model furthermore affords a neural network implementation which shares properties with motion-sensitive cortical areas and motivates targeted experiments to reveal the neural representations of latent structure.
Affiliation(s)
- Johannes Bill: Department of Neurobiology, Harvard Medical School, Boston, MA, USA; Department of Psychology, Harvard University, Cambridge, MA, USA
- Samuel J Gershman: Department of Psychology, Harvard University, Cambridge, MA, USA; Center for Brain Science, Harvard University, Cambridge, MA, USA; Center for Brains, Minds, and Machines, MIT, Cambridge, MA, USA
- Jan Drugowitsch: Department of Neurobiology, Harvard Medical School, Boston, MA, USA; Center for Brain Science, Harvard University, Cambridge, MA, USA

8. Cortical Mechanisms of Multisensory Linear Self-motion Perception. Neurosci Bull 2022; 39:125-137. [PMID: 35821337] [PMCID: PMC9849545] [DOI: 10.1007/s12264-022-00916-8]
Abstract
Accurate self-motion perception, which is critical for organisms to survive, is a process involving multiple sensory cues. The two most powerful cues are visual (optic flow) and vestibular (inertial motion). Psychophysical studies have indicated that humans and nonhuman primates integrate the two cues to improve the estimation of self-motion direction, often in a statistically Bayesian-optimal way. In the last decade, single-unit recordings in awake, behaving animals have provided valuable neurophysiological data with a high spatial and temporal resolution, giving insight into possible neural mechanisms underlying multisensory self-motion perception. Here, we review these findings, along with new evidence from the most recent studies focusing on the temporal dynamics of signals in different modalities. We show that, in light of new data, conventional thoughts about the cortical mechanisms underlying visuo-vestibular integration for linear self-motion are challenged. We propose that different temporal component signals may mediate different functions, a possibility that requires future studies.
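The statistically Bayesian-optimal integration referred to here is usually formalized as reliability-weighted averaging of the single-cue heading estimates. A minimal sketch with made-up numbers (not from this review):

```python
# Minimal sketch of reliability-weighted (Bayesian-optimal) cue integration for
# heading, assuming independent Gaussian noise on each cue. Values are made up.
def integrate(h_vis, sigma_vis, h_vest, sigma_vest):
    # Weight each cue by its relative reliability (inverse variance).
    w_vis = (1 / sigma_vis**2) / (1 / sigma_vis**2 + 1 / sigma_vest**2)
    h_combined = w_vis * h_vis + (1 - w_vis) * h_vest
    # Predicted combined uncertainty is lower than either single-cue uncertainty.
    sigma_combined = (1 / (1 / sigma_vis**2 + 1 / sigma_vest**2)) ** 0.5
    return h_combined, sigma_combined

print(integrate(h_vis=2.0, sigma_vis=3.0, h_vest=-1.0, sigma_vest=4.0))
# -> the combined heading lies between the two cues, closer to the more reliable one
```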

9. Maus N, Layton OW. Estimating heading from optic flow: Comparing deep learning network and human performance. Neural Netw 2022; 154:383-396. [DOI: 10.1016/j.neunet.2022.07.007]

10. Modeling Physiological Sources of Heading Bias from Optic Flow. eNeuro 2021; 8:ENEURO.0307-21.2021. [PMID: 34642226] [PMCID: PMC8607907] [DOI: 10.1523/eneuro.0307-21.2021]
Abstract
Human heading perception from optic flow is accurate for directions close to straight ahead, and systematic biases emerge in the periphery (Cuturi and Macneilage, 2013; Sun et al., 2020). In pursuit of the underlying neural mechanisms, the dorsal medial superior temporal (MSTd) area of the primate brain has been a focus because of its causal link with heading perception (Gu et al., 2012). Computational models generally explain heading sensitivity in individual MSTd neurons as a feedforward integration of motion signals from the medial temporal (MT) area that resemble full-field optic flow patterns consistent with the preferred heading direction (Britten, 2008; Mineault et al., 2012). In the present simulation study, we quantified, within the structure of this feedforward model, how physiological properties of MT and MSTd shape heading signals. We found that known physiological tuning characteristics generally supported the accuracy of heading estimation, but not always. A weak-to-moderate overrepresentation of peripheral headings in MSTd garnered the highest accuracy and precision out of the models that we tested. The model also performed well when noise corrupted high proportions of the optic flow vectors. Such a peripheral MSTd model performed well when units possessed a range of receptive field (RF) sizes and were strongly direction tuned. Physiological biases in MT direction tuning toward the radial direction also supported heading estimation, but the tendency for MT preferred speed and RF size to scale with eccentricity did not. Our findings help elucidate the extent to which different physiological tuning properties influence the accuracy and precision of neural heading signals.
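The feedforward MT-to-MSTd scheme described above is often illustrated as template matching: each MSTd-like unit pools local motion signals according to a preferred full-field flow template, and heading is read out from the most active unit. The sketch below is a schematic of that idea with idealized radial flow and arbitrary parameters, not the study's actual model:

```python
# Schematic template-matching sketch of MT -> MSTd pooling for heading
# (illustrative only; the study's model has much richer MT/MSTd physiology).
import numpy as np

# Sample image locations on a grid (arbitrary units).
xs, ys = np.meshgrid(np.linspace(-1, 1, 21), np.linspace(-1, 1, 21))
pts = np.stack([xs.ravel(), ys.ravel()], axis=1)

def radial_flow(foe):
    """Idealized translational flow: unit vectors pointing away from the focus of expansion."""
    v = pts - foe
    return v / (np.linalg.norm(v, axis=1, keepdims=True) + 1e-6)

# MSTd-like units, each preferring a different heading (FOE position on the horizon).
preferred_foes = [np.array([x, 0.0]) for x in np.linspace(-0.8, 0.8, 17)]
templates = [radial_flow(foe) for foe in preferred_foes]

# Observed flow for a true heading, corrupted by noise on the flow vectors.
true_foe = np.array([0.3, 0.0])
observed = radial_flow(true_foe) + 0.4 * np.random.default_rng(1).normal(size=(len(pts), 2))

# Each unit's response = match between observed flow and its template;
# the population peak gives the heading estimate.
responses = np.array([(observed * t).sum() for t in templates])
estimate = preferred_foes[int(np.argmax(responses))]
print("true FOE x:", true_foe[0], "estimated FOE x:", estimate[0])
```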

11.
Abstract
During self-motion, an independently moving object generates retinal motion that is the vector sum of its world-relative motion and the optic flow caused by the observer's self-motion. A hypothesized mechanism for the computation of an object's world-relative motion is flow parsing, in which the optic flow field due to self-motion is globally subtracted from the retinal flow field. This subtraction generates a bias in perceived object direction (in retinal coordinates) away from the optic flow vector at the object's location. Despite psychophysical evidence for flow parsing in humans, the neural mechanisms underlying the process are unknown. To build the framework for investigation of the neural basis of flow parsing, we trained macaque monkeys to discriminate the direction of a moving object in the presence of optic flow simulating self-motion. Like humans, monkeys showed biases in object direction perception consistent with subtraction of background optic flow attributable to self-motion. The size of perceptual biases generally depended on the magnitude of the expected optic flow vector at the location of the object, which was contingent on object position and self-motion velocity. There was a modest effect of an object's depth on flow-parsing biases, which reached significance in only one of two subjects. Adding vestibular self-motion signals to optic flow facilitated flow parsing, increasing biases in direction perception. Our findings indicate that monkeys exhibit perceptual hallmarks of flow parsing, setting the stage for the examination of the neural mechanisms underlying this phenomenon.
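Flow parsing as described here is, at its core, vector subtraction: the object's world-relative motion is recovered by subtracting the optic flow expected from self-motion at the object's location from its retinal motion. A toy sketch with illustrative values (the positions, speeds, and the subtraction gain are assumptions, not the study's stimuli):

```python
# Toy flow-parsing sketch: retinal motion of an object = world-relative object
# motion + self-motion flow at the object's location; parsing subtracts an
# estimate of the latter. All values are illustrative.
import numpy as np

def self_motion_flow(p, foe, speed=1.0):
    """Idealized radial flow at image location p for forward self-motion with a given FOE."""
    d = p - foe
    return speed * d / (np.linalg.norm(d) + 1e-6)

object_pos = np.array([0.4, 0.1])
foe = np.array([0.0, 0.0])

object_world_motion = np.array([0.0, 0.2])                       # object drifts upward in the world
retinal_motion = object_world_motion + self_motion_flow(object_pos, foe)

# Flow parsing: subtract the (possibly imperfect) estimate of self-motion flow.
estimated_flow = self_motion_flow(object_pos, foe, speed=0.8)    # e.g., an underestimated gain
parsed_motion = retinal_motion - estimated_flow

print("retinal:", retinal_motion, "parsed (world-relative estimate):", parsed_motion)
# Relative to the retinal direction, the parsed direction is rotated away from the
# local flow vector, i.e. the perceptual bias described in the abstract above.
```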
Affiliation(s)
- Nicole E Peltier: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY, USA
- Dora E Angelaki: Center for Neural Science, New York University, New York, NY, USA
- Gregory C DeAngelis: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY, USA

12. Hülemeier AG, Lappe M. Combining biological motion perception with optic flow analysis for self-motion in crowds. J Vis 2020; 20:7. [PMID: 32902593] [PMCID: PMC7488621] [DOI: 10.1167/jov.20.9.7]
Abstract
Heading estimation from optic flow relies on the assumption that the visual world is rigid. This assumption is violated when one moves through a crowd of people, a common and socially important situation. The motion of people in the crowd contains cues to their translation in the form of the articulation of their limbs, known as biological motion. We investigated how translation and articulation of biological motion influence heading estimation from optic flow for self-motion in a crowd. Participants had to estimate their heading during simulated self-motion toward a group of walkers who collectively walked in a single direction. We found that the natural combination of translation and articulation produces surprisingly small heading errors. In contrast, experimental conditions that either present only translation or only articulation produced strong idiosyncratic biases. The individual biases explained well the variance in the natural combination. A second experiment showed that the benefit of articulation and the bias produced by articulation were specific to biological motion. An analysis of the differences in biases between conditions and participants showed that different perceptual mechanisms contribute to heading perception in crowds. We suggest that coherent group motion affects the reference frame of heading perception from optic flow.
Affiliation(s)
- Markus Lappe: Department of Psychology, University of Münster, Münster, Germany

13.
Abstract
Heading estimation from optic flow is crucial for safe locomotion but becomes inaccurate if independent object motion is present. In ecological settings, such motion typically involves other animals or humans walking across the scene. An independently walking person presents a local disturbance of the flow field, which moves across the flow field as the walker traverses the scene. Is the bias in heading estimation produced by the local disturbance of the flow field or by the movement of the walker through the scene? We present a novel flow field stimulus in which the local flow disturbance and the movement of the walker can be pitted against each other. Each frame of this stimulus consists of a structureless random dot distribution. Across frames, the body shape of a walker is molded by presenting different flow field dynamics within and outside the body shape. In different experimental conditions, the flow within the body shape can be congruent with the walker's movement, incongruent with it, or congruent with the background flow. We show that heading inaccuracy results from the local flow disturbance rather than the movement through the scene. Moreover, we show that the local disturbances of the optic flow can be used to segment the walker and support biological motion perception to some degree. The dichotomous result that the walker can be segmented from the scene but that heading perception is nonetheless influenced by the flow produced by the walker confirms separate visual pathways for heading estimation, object segmentation, and biological motion perception.
Affiliation(s)
- Krischan Koerfer: Institute for Psychology and Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Muenster, Germany
- Markus Lappe: Institute for Psychology and Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Muenster, Germany

14. Riddell H, Li L, Lappe M. Heading perception from optic flow in the presence of biological motion. J Vis 2019; 19:25. [PMID: 31868898] [DOI: 10.1167/19.14.25]
Abstract
We investigated whether biological motion biases heading estimation from optic flow in a similar manner to nonbiological moving objects. In two experiments, observers judged their heading from displays depicting linear translation over a random-dot ground with normal point light walkers, spatially scrambled point light walkers, or laterally moving objects composed of random dots. In Experiment 1, we found that both types of walkers biased heading estimates similarly to moving objects when they obscured the focus of expansion of the background flow. In Experiment 2, we also found that walkers biased heading estimates when they did not obscure the focus of expansion. These results show that both regular and scrambled biological motion affect heading estimation in a similar manner to simple moving objects, and suggest that biological motion is not preferentially processed for the perception of self-motion.
Affiliation(s)
- Hugh Riddell: Institute for Psychology and Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Germany
- Li Li: Faculty of Arts and Science, NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, China
- Markus Lappe: Institute for Psychology and Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Germany

15. Cross-modal size-contrast illusion: Acoustic increases in intensity and bandwidth modulate haptic representation of object size. Sci Rep 2019; 9:14440. [PMID: 31595003] [PMCID: PMC6783429] [DOI: 10.1038/s41598-019-50912-8]
Abstract
Changes in the retinal size of stationary objects provide a cue to the observer's motion in the environment: Increases indicate the observer's forward motion, and decreases backward motion. In this study, a series of images each comprising a pair of pine-tree figures were translated into auditory modality using sensory substitution software. Resulting auditory stimuli were presented in an ascending sequence (i.e. increasing in intensity and bandwidth compatible with forward motion), descending sequence (i.e. decreasing in intensity and bandwidth compatible with backward motion), or in a scrambled order. During the presentation of stimuli, blindfolded participants estimated the lengths of wooden sticks by haptics. Results showed that those exposed to the stimuli compatible with forward motion underestimated the lengths of the sticks. This consistent underestimation may share some aspects with visual size-contrast effects such as the Ebbinghaus illusion. In contrast, participants in the other two conditions did not show such magnitude of error in size estimation; which is consistent with the "adaptive perceptual bias" towards acoustic increases in intensity and bandwidth. In sum, we report a novel cross-modal size-contrast illusion, which reveals that auditory motion cues compatible with listeners' forward motion modulate haptic representations of object size.

16. Causal inference accounts for heading perception in the presence of object motion. Proc Natl Acad Sci U S A 2019; 116:9060-9065. [PMID: 30996126] [DOI: 10.1073/pnas.1820373116]
Abstract
The brain infers our spatial orientation and properties of the world from ambiguous and noisy sensory cues. Judging self-motion (heading) in the presence of independently moving objects poses a challenging inference problem because the image motion of an object could be attributed to movement of the object, self-motion, or some combination of the two. We test whether perception of heading and object motion follows predictions of a normative causal inference framework. In a dual-report task, subjects indicated whether an object appeared stationary or moving in the virtual world, while simultaneously judging their heading. Consistent with causal inference predictions, the proportion of object stationarity reports, as well as the accuracy and precision of heading judgments, depended on the speed of object motion. Critically, biases in perceived heading declined when the object was perceived to be moving in the world. Our findings suggest that the brain interprets object motion and self-motion using a causal inference framework.
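The causal inference account weighs two hypotheses, that the object is stationary in the world (its image motion is fully explained by self-motion) or that it moves independently, and combines the corresponding heading estimates by their posterior probabilities. The sketch below is a deliberately simplified Gaussian version with made-up parameters, not the paper's fitted model:

```python
# Simplified causal-inference sketch for heading with a possibly-moving object.
# Gaussian assumptions throughout; parameters are illustrative, not fitted.
import math

def p_stationary(discrepancy, sigma_meas=1.0, sigma_obj=3.0, prior_stat=0.5):
    """Posterior probability that the object is stationary in the world, given the
    mismatch between its image motion and the motion predicted from self-motion alone."""
    def gauss(x, s):
        return math.exp(-0.5 * (x / s) ** 2) / (s * math.sqrt(2 * math.pi))
    like_stat = gauss(discrepancy, sigma_meas)                         # C = stationary
    like_move = gauss(discrepancy, math.hypot(sigma_meas, sigma_obj))  # C = moving
    return prior_stat * like_stat / (prior_stat * like_stat + (1 - prior_stat) * like_move)

def heading_estimate(h_with_object, h_background_only, discrepancy):
    """Model averaging: trust the object's flow only to the extent it is judged stationary."""
    p = p_stationary(discrepancy)
    return p * h_with_object + (1 - p) * h_background_only, p

# Small mismatch -> object likely stationary, its flow pulls the heading estimate;
# large mismatch -> object judged moving and is discounted (heading bias declines).
print(heading_estimate(h_with_object=4.0, h_background_only=0.0, discrepancy=0.5))
print(heading_estimate(h_with_object=4.0, h_background_only=0.0, discrepancy=5.0))
```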

17. Cheng X, Lou C, Ding X, Liu W, Zhang X, Fan Z, Harris J. Perceived shift of the centres of contracting and expanding optic flow fields: Different biases in the lower-right and upper-right visual quadrants. PLoS One 2019; 14:e0211912. [PMID: 30845166] [PMCID: PMC6405070] [DOI: 10.1371/journal.pone.0211912]
Abstract
We studied differences in localizing the centres of flow in radially expanding and contracting patterns in different regions of the visual field. Our results suggest that the perceived centre of a peripherally viewed expanding pattern is shifted towards the fovea relative to that of a contracting pattern, but only in the lower right and upper right visual quadrants and when a single speed gradient with appropriate overall speeds of the trajectories of the moving dots was used. The biases were not systematically related to differences of sensitivity to optic flow in different quadrants. Further experiments demonstrated that the biases were likely due to a combination of two effects: an advantage of global processing in favor of the lower visual hemifield and a hemispheric asymmetry in attentional allocation in favor of motion-induced spatial displacement in the right visual hemifield. The bias in the lower right visual quadrant was speed gradient-sensitive and could be reduced to a non-significant level with the usage of multiple speed gradients, possibly due to a special role of the lower visual hemifield in extracting global information from the multiple speed gradients. A holistic processing on multiple speed gradients, rather than a predominant processing on a single speed gradient, was likely adopted. In contrast, the perceived bias in the upper right visual quadrant was overall speed-sensitive and could be reduced to a non-significant level with the reduction of the overall speeds of the trajectories. The implications of these results for understanding motion-induced spatial illusions are discussed.
Affiliation(s)
- Xiaorong Cheng, Chunmiao Lou, Xianfeng Ding, Wei Liu, Xueling Zhang, Zhao Fan: School of Psychology, Central China Normal University, Wuhan, China; Key Laboratory of Adolescent Cyberpsychology and Behavior (CCNU), Ministry of Education, Wuhan, China; Key Laboratory of Human Development and Mental Health of Hubei Province, Wuhan, China
- John Harris: School of Psychology and Clinical Language Sciences, The University of Reading, Whiteknights, Reading, United Kingdom

18.
Abstract
The ability to navigate through crowds of moving people accurately, efficiently, and without causing collisions is essential for our day-to-day lives. Vision provides key information about one's own self-motion as well as the motions of other people in the crowd. These two types of information (optic flow and biological motion) have each been investigated extensively; however, surprisingly little research has been dedicated to investigating how they are processed when presented concurrently. Here, we showed that patterns of biological motion have a negative impact on visual-heading estimation when people within the crowd move their limbs but do not move through the scene. Conversely, limb motion facilitates heading estimation when walkers move independently through the scene. Interestingly, this facilitation occurs for crowds containing both regular and perturbed depictions of humans, suggesting that it is likely caused by low-level motion cues inherent in the biological motion of other people.
Affiliation(s)
- Hugh Riddell: Department of Psychology, Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Münster
- Markus Lappe: Department of Psychology, Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Münster

19. Yu X, Hou H, Spillmann L, Gu Y. Causal Evidence of Motion Signals in Macaque Middle Temporal Area Weighted-Pooled for Global Heading Perception. Cereb Cortex 2018; 28:612-624. [PMID: 28057722] [DOI: 10.1093/cercor/bhw402]
Abstract
Accurate heading perception relies on visual information integrated across a wide field, that is, optic flow. Numerous computational studies have speculated how local visual information might be pooled by the brain to compute heading, but these hypotheses lack direct neurophysiological support. In the current study, we instructed human and monkey subjects to judge heading directions based on global optic flow. We showed that a local perturbation cue applied within only a small part of the visual field could bias the subjects' heading judgments, and shift the neuronal tuning in the macaque middle temporal (MT) area at the same time. Electrical microstimulation in MT significantly biased the animals' heading judgments predictable from the tuning of the stimulated neurons. Masking the visual stimuli within these neurons' receptive fields could not remove the stimulation effect, indicating a sufficient role of the MT signals pooled by downstream neurons for global heading estimation. Interestingly, this pooling is not homogeneous because stimulating neurons with excitatory surrounds produced relatively larger effects than stimulating neurons with inhibitory surrounds. Thus our data not only provide direct causal evidence, but also new insights into the neural mechanisms of pooling local motion information for global heading estimation.
Affiliation(s)
- Xuefei Yu: Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Han Hou: Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Lothar Spillmann: On leave of absence from Department of Neurology, University of Freiburg, Freiburg 79110, Germany
- Yong Gu: Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China

20. Yang L, Gu Y. Distinct spatial coordinate of visual and vestibular heading signals in macaque FEFsem and MSTd. eLife 2017; 6:e29809. [PMID: 29134944] [PMCID: PMC5685470] [DOI: 10.7554/elife.29809]
Abstract
Precise heading estimation requires integration of visual optic flow and vestibular inertial motion originating from distinct spatial coordinates (eye- and head-centered, respectively). To explore whether the two heading signals may share a common reference frame along the hierarchy of cortical stages, we examined two multisensory areas in macaques: the smooth pursuit area of the frontal eye field (FEFsem), closer to the motor side, and the dorsal portion of the medial superior temporal area (MSTd), closer to the sensory side. In both areas, vestibular signals are head-centered, whereas visual signals are mainly eye-centered. However, visual signals in FEFsem are shifted more towards the head-centered coordinate than in MSTd. These results are robust, being largely independent of (1) smooth pursuit eye movements, (2) motion parallax cues, and (3) the behavioral context of active heading estimation, indicating that visual and vestibular heading signals may be represented in distinct spatial coordinates in sensory cortices.
Affiliation(s)
- Lihua Yang: Key Laboratory of Primate Neurobiology, Institute of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China; University of Chinese Academy of Sciences, Beijing, China
- Yong Gu: Key Laboratory of Primate Neurobiology, Institute of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China

21. Dissociation of Self-Motion and Object Motion by Linear Population Decoding That Approximates Marginalization. J Neurosci 2017; 37:11204-11219. [PMID: 29030435] [DOI: 10.1523/jneurosci.1177-17.2017]
Abstract
We use visual image motion to judge the movement of objects, as well as our own movements through the environment. Generally, image motion components caused by object motion and self-motion are confounded in the retinal image. Thus, to estimate heading, the brain would ideally marginalize out the effects of object motion (or vice versa), but little is known about how this is accomplished neurally. Behavioral studies suggest that vestibular signals play a role in dissociating object motion and self-motion, and recent computational work suggests that a linear decoder can approximate marginalization by taking advantage of diverse multisensory representations. By measuring responses of MSTd neurons in two male rhesus monkeys and by applying a recently-developed method to approximate marginalization by linear population decoding, we tested the hypothesis that vestibular signals help to dissociate self-motion and object motion. We show that vestibular signals stabilize tuning for heading in neurons with congruent visual and vestibular heading preferences, whereas they stabilize tuning for object motion in neurons with discrepant preferences. Thus, vestibular signals enhance the separability of joint tuning for object motion and self-motion. We further show that a linear decoder, designed to approximate marginalization, allows the population to represent either self-motion or object motion with good accuracy. Decoder weights are broadly consistent with a readout strategy, suggested by recent computational work, in which responses are decoded according to the vestibular preferences of multisensory neurons. These results demonstrate, at both single neuron and population levels, that vestibular signals help to dissociate self-motion and object motion. SIGNIFICANCE STATEMENT: The brain often needs to estimate one property of a changing environment while ignoring others. This can be difficult because multiple properties of the environment may be confounded in sensory signals. The brain can solve this problem by marginalizing over irrelevant properties to estimate the property-of-interest. We explore this problem in the context of self-motion and object motion, which are inherently confounded in the retinal image. We examine how diversity in a population of multisensory neurons may be exploited to decode self-motion and object motion from the population activity of neurons in macaque area MSTd.

22. Kuang S, Shi J, Wang Y, Zhang T. Where are you heading? Flexible integration of retinal and extra-retinal cues during self-motion perception. Psych J 2017; 6:141-152. [PMID: 28514063] [DOI: 10.1002/pchj.165]
Abstract
As we move forward in the environment, we experience a radial expansion of the retinal image, wherein the center corresponds to the instantaneous direction of self-motion. Humans can precisely perceive their heading direction even when the retinal motion is distorted by gaze shifts due to eye/body rotations. Previous studies have suggested that both retinal and extra-retinal strategies can compensate for the retinal image distortion. However, the relative contributions of each strategy remain unclear. To address this issue, we devised a two-alternative-headings discrimination task, in which participants had either real or simulated pursuit eye movements. The two conditions had the same retinal input but either with or without extra-retinal eye movement signals. Thus, the behavioral difference between conditions served as a metric of extra-retinal contribution. We systematically and independently manipulated pursuit speed, heading speed, and the reliability of retinal signals. We found that the levels of extra-retinal contributions increased with increasing pursuit speed (stronger extra-retinal signal), and with decreasing heading speed (weaker retinal signal). In addition, extra-retinal contributions also increased as we corrupted retinal signals with noise. Our results revealed that the relative magnitude of retinal and extra-retinal contributions was not fixed but rather flexibly adjusted to each specific task condition. This task-dependent, flexible integration appears to take the form of a reliability-based weighting scheme that maximizes heading performance.
Affiliation(s)
- Shenbing Kuang: State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Jinfu Shi: State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Yang Wang: State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Tao Zhang: State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China

23. Smith AT, Greenlee MW, DeAngelis GC, Angelaki DE. Distributed Visual–Vestibular Processing in the Cerebral Cortex of Man and Macaque. Multisens Res 2017. [DOI: 10.1163/22134808-00002568]
Abstract
Recent advances in understanding the neurobiological underpinnings of visual–vestibular interactions underlying self-motion perception are reviewed with an emphasis on comparisons between the macaque and human brains. In both species, several distinct cortical regions have been identified that are active during both visual and vestibular stimulation and in some of these there is clear evidence for sensory integration. Several possible cross-species homologies between cortical regions are identified. A key feature of cortical organization is that the same information is apparently represented in multiple, anatomically diverse cortical regions, suggesting that information about self-motion is used for different purposes in different brain regions.
Affiliation(s)
- Andrew T. Smith: Department of Psychology, Royal Holloway, University of London, Egham TW20 0EX, UK
- Mark W. Greenlee: Institute of Experimental Psychology, University of Regensburg, 93053 Regensburg, Germany
- Gregory C. DeAngelis: Department of Brain and Cognitive Sciences, University of Rochester, Rochester, New York 14627, USA
- Dora E. Angelaki: Department of Neuroscience, Baylor College of Medicine, Houston, Texas 77030, USA

24.
Abstract
In the current study, we explored observers' use of two distinct analyses for determining their direction of motion, or heading: a scene-based analysis and a motion-based analysis. In two experiments, subjects viewed sequentially presented, paired digitized images of real-world scenes and judged the direction of heading; the pairs were presented with various interstimulus intervals (ISIs). In Experiment 1, subjects could determine heading when the two frames were separated with a 1,000-ms ISI, long enough to eliminate apparent motion. In Experiment 2, subjects performed two tasks, a path-of-motion task and a memory-load task, under three different ISIs, 50 ms, 500 ms, and 1,000 ms. Heading accuracy decreased with an increase in ISI. Increasing memory load influenced heading judgments only for the longer ISI when motion-based information was not available. These results are consistent with the hypothesis that the scene-based analysis has a coarse spatial representation, is a sustained temporal process, and is capacity limited, whereas the motion-based analysis has a fine spatial resolution, is a transient temporal process, and is capacity unlimited.
Affiliation(s)
- Sowon Hahn: University of California at Riverside, USA

25. Bertin RJV, Israël I. Optic-Flow-Based Perception of Two-Dimensional Trajectories and the Effects of a Single Landmark. Perception 2005; 34:453-475. [PMID: 15943053] [DOI: 10.1068/p5292]
Abstract
Human observers can detect their heading direction on a short time scale on the basis of optic flow. We investigated the visual perception and reconstruction of visually travelled two-dimensional (2-D) trajectories from optic flow, with and without a landmark. As in our previous study, seated, stationary subjects wore a head-mounted display in which optic-flow stimuli were shown that simulated various manoeuvres: linear or curvilinear 2-D trajectories over a horizontal plane, with observer orientation either fixed in space, fixed relative to the path, or changing relative to both. Afterwards, they reproduced the perceived manoeuvre with a model vehicle, whose position and orientation were recorded. Previous results had suggested that our stimuli can induce illusory percepts when translation and yaw are unyoked. We tested that hypothesis and investigated how perception of the travelled trajectory depends on the amount of yaw and the average path-relative orientation. Using a structured visual environment instead of only dots, or making available additional extra-retinal information, can improve perception of ambiguous optic-flow stimuli. We investigated the amount of necessary structuring, specifically the effect of additional visual and/or extra-retinal information provided by a single landmark in conditions where illusory percepts occur. While yaw was perceived correctly, the travelled path was less accurately perceived, but still adequately when the simulated orientation was fixed in space or relative to the trajectory. When the amount of yaw was not equal to the rotation of the path, or in the opposite direction, subjects still perceived orientation as fixed relative to the trajectory. This caused trajectory misperception because yaw was wrongly attributed to a rotation of the path: path perception is governed by the amount of yaw in the manoeuvre. Trajectory misperception also occurs when orientation is fixed relative to a curvilinear path, but not tangential to it. A single landmark could improve perception. Our results confirm and extend previous findings that, for unambiguous perception of ego-motion from optic flow, additional information is required in many cases, which can take the form of fairly minimal, visual information.
Affiliation(s)
- René J V Bertin: Collège de France/LPPA, 11 place Marcelin Berthelot, 75005 Paris, France

26. Layton OW, Fajen BR. Competitive Dynamics in MSTd: A Mechanism for Robust Heading Perception Based on Optic Flow. PLoS Comput Biol 2016; 12:e1004942. [PMID: 27341686] [PMCID: PMC4920404] [DOI: 10.1371/journal.pcbi.1004942]
Abstract
Human heading perception based on optic flow is not only accurate but also remarkably robust and stable. These qualities are especially apparent when observers move through environments containing other moving objects, which introduce optic flow that is inconsistent with observer self-motion and therefore uninformative about heading direction. Moving objects may also occupy large portions of the visual field and occlude regions of the background optic flow that are most informative about heading perception. The fact that heading perception is biased by no more than a few degrees under such conditions attests to the robustness of the visual system and warrants further investigation. The aim of the present study was to investigate whether recurrent, competitive dynamics among MSTd neurons that serve to reduce uncertainty about heading over time offer a plausible mechanism for capturing the robustness of human heading perception. Simulations of existing heading models that do not contain competitive dynamics yield heading estimates that are far more erratic and unstable than human judgments. We present a dynamical model of primate visual areas V1, MT, and MSTd, based on that of Layton, Mingolla, and Browning, which is similar to the other models except that it includes recurrent interactions among model MSTd neurons. Competitive dynamics stabilize the model's heading estimate over time, even when a moving object crosses the future path. Soft winner-take-all dynamics enhance units that code a heading direction consistent with the time history and suppress responses to transient changes to the optic flow field. Our findings support recurrent competitive temporal dynamics as a crucial mechanism underlying the robustness and stability of perception of heading.
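The recurrent competition invoked here can be caricatured with a generic soft winner-take-all rate model: each heading-tuned unit is driven by feedforward input, excites itself, and is suppressed by pooled population activity, so the population estimate changes sluggishly when the input is briefly perturbed. A generic sketch with arbitrary parameters, not the published model's equations:

```python
# Generic soft winner-take-all sketch of recurrent competition among
# heading-tuned units (illustrative dynamics, not the published model).
import numpy as np

n_units, dt, tau = 21, 0.01, 0.15
preferred = np.linspace(-40, 40, n_units)          # preferred headings (deg)

def feedforward(heading, noise=0.2, rng=np.random.default_rng(2)):
    """Noisy bell-shaped input centered on the current heading signal."""
    drive = np.exp(-0.5 * ((preferred - heading) / 10.0) ** 2)
    return np.clip(drive + noise * rng.normal(size=n_units), 0, None)

r = np.zeros(n_units)
estimates = []
for t in range(300):
    # Transient distortion of the flow signal (e.g., an object crossing the path).
    heading_signal = 25.0 if 100 <= t < 130 else 0.0
    inp = feedforward(heading_signal)
    # Leaky integration with self-excitation and pooled (global) inhibition.
    recurrent = 0.9 * r - 1.0 * r.mean()
    r += dt / tau * (-r + np.clip(inp + recurrent, 0, None))
    estimates.append(preferred[np.argmax(r)])

# Accumulated activity at the established heading tends to resist the brief
# perturbation instead of jumping immediately to 25 deg.
print("estimate before/during/after perturbation:",
      estimates[90], estimates[115], estimates[200])
```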
Affiliation(s)
- Oliver W. Layton: Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, New York, United States of America
- Brett R. Fajen: Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, New York, United States of America

27. Kim HR, Pitkow X, Angelaki DE, DeAngelis GC. A simple approach to ignoring irrelevant variables by population decoding based on multisensory neurons. J Neurophysiol 2016; 116:1449-1467. [PMID: 27334948] [DOI: 10.1152/jn.00005.2016]
Abstract
Sensory input reflects events that occur in the environment, but multiple events may be confounded in sensory signals. For example, under many natural viewing conditions, retinal image motion reflects some combination of self-motion and movement of objects in the world. To estimate one stimulus event and ignore others, the brain can perform marginalization operations, but the neural bases of these operations are poorly understood. Using computational modeling, we examine how multisensory signals may be processed to estimate the direction of self-motion (i.e., heading) and to marginalize out effects of object motion. Multisensory neurons represent heading based on both visual and vestibular inputs and come in two basic types: "congruent" and "opposite" cells. Congruent cells have matched heading tuning for visual and vestibular cues and have been linked to perceptual benefits of cue integration during heading discrimination. Opposite cells have mismatched visual and vestibular heading preferences and are ill-suited for cue integration. We show that decoding a mixed population of congruent and opposite cells substantially reduces errors in heading estimation caused by object motion. In addition, we present a general formulation of an optimal linear decoding scheme that approximates marginalization and can be implemented biologically by simple reinforcement learning mechanisms. We also show that neural response correlations induced by task-irrelevant variables may greatly exceed intrinsic noise correlations. Overall, our findings suggest a general computational strategy by which neurons with mismatched tuning for two different sensory cues may be decoded to perform marginalization operations that dissociate possible causes of sensory inputs.
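The linear read-out described above can be illustrated by training a least-squares decoder on a mixed population whose responses depend on both heading and a nuisance variable (object motion); because the nuisance varies across training trials, the learned weights approximately marginalize it out. A toy sketch with synthetic tuning, not the study's data or decoder:

```python
# Toy sketch: a linear decoder trained on a mixed population whose responses
# depend on both heading and object motion learns weights that recover heading
# while averaging out ("marginalizing") the object motion. Synthetic tuning only.
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_neurons = 4000, 60

heading = rng.uniform(-20, 20, n_trials)          # variable of interest (deg)
obj_motion = rng.uniform(-10, 10, n_trials)       # nuisance variable (deg/s)

# Each neuron mixes the two variables with its own gains; diversity in the sign
# and size of these gains stands in for "congruent"- and "opposite"-like cells.
gain_heading = rng.normal(1.0, 0.5, n_neurons)
gain_object = rng.normal(0.0, 1.0, n_neurons)
responses = (np.outer(heading, gain_heading)
             + np.outer(obj_motion, gain_object)
             + rng.normal(0, 1.0, (n_trials, n_neurons)))

# Least-squares linear decoder for heading (the nuisance varies across trials).
w, *_ = np.linalg.lstsq(responses, heading, rcond=None)
heading_hat = responses @ w

err = heading_hat - heading
print("decoder RMSE (deg):", np.round(np.sqrt((err ** 2).mean()), 2))
# The correlation between errors and object motion should be near zero,
# i.e. the read-out is largely insensitive to the nuisance variable.
print("corr(error, object motion):", np.round(np.corrcoef(err, obj_motion)[0, 1], 3))
```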
Affiliation(s)
- HyungGoo R Kim
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York
- Xaq Pitkow
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas; Department of Electrical and Computer Engineering, Rice University, Houston, Texas
- Dora E Angelaki
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas; Department of Electrical and Computer Engineering, Rice University, Houston, Texas
- Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York
|
28
|
Royden CS, Parsons D, Travatello J. The effect of monocular depth cues on the detection of moving objects by moving observers. Vision Res 2016; 124:7-14. [PMID: 27264029 DOI: 10.1016/j.visres.2016.05.002] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2015] [Revised: 05/19/2016] [Accepted: 05/23/2016] [Indexed: 11/26/2022]
Abstract
An observer moving through the world must be able to identify and locate moving objects in the scene. In principle, one could accomplish this task by detecting object images moving at a different angle or speed than the images of other items in the optic flow field. While angle of motion provides an unambiguous cue that an object is moving relative to other items in the scene, a difference in speed could be due to a difference in the depth of the objects and thus is an ambiguous cue. We tested whether the addition of information about the distance of objects from the observer, in the form of monocular depth cues, aided detection of moving objects. We found that thresholds for detection of object motion decreased as we increased the number of depth cues available to the observer.
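The logic can be sketched as follows (a hedged illustration with made-up numbers, not the authors' stimuli or analysis): given a depth estimate from monocular cues and the observer's translation speed, the image speed expected for a stationary item can be predicted, and items whose measured speed deviates from that prediction are flagged as moving.

```python
# Hedged sketch of the logic described above, not the authors' stimuli or
# analysis. For pure lateral observer translation, a stationary point at depth Z
# produces image speed of roughly f * T / Z, so an item whose measured speed
# deviates from that prediction by more than a tolerance is flagged as an
# independently moving object. All numbers below are invented.

f = 0.02            # focal length (m)
T = 1.5             # observer lateral translation speed (m/s)
tolerance = 0.2     # tolerated fractional deviation

# item name -> (depth estimate from monocular cues in m, measured image speed in m/s)
items = {"tree": (10.0, 0.0031), "wall": (5.0, 0.0060), "cyclist": (8.0, 0.0090)}

for name, (Z, measured) in items.items():
    expected = f * T / Z                       # speed if the item were stationary
    deviation = abs(measured - expected) / expected
    label = "MOVING" if deviation > tolerance else "stationary"
    print(f"{name:8s} expected {expected:.4f}  measured {measured:.4f}  -> {label}")
```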
Affiliation(s)
- Constance S Royden
- Department of Mathematics and Computer Science, College of the Holy Cross, United States
- Daniel Parsons
- Department of Mathematics and Computer Science, College of the Holy Cross, United States
- Joshua Travatello
- Department of Mathematics and Computer Science, College of the Holy Cross, United States
|
29
|
Abstract
Using micro-video cameras attached to the heads of 2 dogs, we examined their optical behavior while catching Frisbees. Our findings reveal that dogs use the same viewer-based navigational heuristics previously found with baseball players (i.e., maintaining the target along a linear optical trajectory, LOT, with optical speed constancy). On trials in which the Frisbee dramatically changed direction, the dog maintained an LOT with speed constancy until it apparently could no longer do so and then simply established a new LOT and optical speed until interception. This work demonstrates the use of simple control mechanisms that utilize invariant geometric properties to accomplish interceptive tasks. It confirms a common interception strategy that extends both across species and to complex target trajectories.
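The two optical invariants named above can be checked directly from a head-camera track of the target. The sketch below is our own toy check on invented numbers, not the authors' analysis: it tests whether the optical positions stay near a straight line in the image (linear optical trajectory) and whether optical speed along that line stays roughly constant.

```python
import numpy as np

# Toy check of the LOT and optical-speed-constancy invariants on an invented
# optical track of the target (azimuth, elevation in deg), sampled at 10 Hz.
track = np.array([[0.0, 5.0], [2.0, 9.0], [4.1, 13.1], [6.0, 16.9], [8.0, 21.0]])

# Linearity: fit elevation as a linear function of azimuth and inspect residuals.
slope, intercept = np.polyfit(track[:, 0], track[:, 1], 1)
residuals = track[:, 1] - (slope * track[:, 0] + intercept)

# Speed constancy: optical displacement per frame should be roughly constant.
speeds = np.linalg.norm(np.diff(track, axis=0), axis=1)

print("max deviation from a linear optical trajectory:",
      round(float(np.abs(residuals).max()), 2), "deg")
print("optical speed per frame (deg):", speeds.round(2))
```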
|
30
|
Multisensory Integration of Visual and Vestibular Signals Improves Heading Discrimination in the Presence of a Moving Object. J Neurosci 2016; 35:13599-607. [PMID: 26446214 DOI: 10.1523/jneurosci.2267-15.2015] [Citation(s) in RCA: 36] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Humans and animals are fairly accurate in judging their direction of self-motion (i.e., heading) from optic flow when moving through a stationary environment. However, an object moving independently in the world alters the optic flow field and may bias heading perception if the visual system cannot dissociate object motion from self-motion. We investigated whether adding vestibular self-motion signals to optic flow enhances the accuracy of heading judgments in the presence of a moving object. Macaque monkeys were trained to report their heading (leftward or rightward relative to straight-forward) when self-motion was specified by vestibular, visual, or combined visual-vestibular signals, while viewing a display in which an object moved independently in the (virtual) world. The moving object induced significant biases in perceived heading when self-motion was signaled by either visual or vestibular cues alone. However, this bias was greatly reduced when visual and vestibular cues together signaled self-motion. In addition, multisensory heading discrimination thresholds measured in the presence of a moving object were largely consistent with the predictions of an optimal cue integration strategy. These findings demonstrate that multisensory cues facilitate the perceptual dissociation of self-motion and object motion, consistent with computational work that suggests that an appropriate decoding of multisensory visual-vestibular neurons can estimate heading while discounting the effects of object motion. SIGNIFICANCE STATEMENT Objects that move independently in the world alter the optic flow field and can induce errors in perceiving the direction of self-motion (heading). We show that adding vestibular (inertial) self-motion signals to optic flow almost completely eliminates the errors in perceived heading induced by an independently moving object. Furthermore, this increased accuracy occurs without a substantial loss in precision. Our results thus demonstrate that vestibular signals play a critical role in dissociating self-motion from object motion.
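For reference, the standard optimal cue-integration predictions mentioned above take the following form; the numbers below are purely illustrative and are not the thresholds or biases measured in the study.

```python
import numpy as np

# Standard optimal (reliability-weighted) cue-integration predictions with
# hypothetical numbers. The predicted combined threshold is
# sigma_comb = sqrt(sigma_vis^2 * sigma_vest^2 / (sigma_vis^2 + sigma_vest^2)),
# and each cue's bias enters the combined estimate weighted by its relative
# reliability.

sigma_vis, sigma_vest = 3.0, 4.0     # hypothetical single-cue thresholds (deg)
bias_vis, bias_vest = 5.0, -4.0      # hypothetical object-induced biases (deg)

w_vis = sigma_vest**2 / (sigma_vis**2 + sigma_vest**2)
w_vest = 1.0 - w_vis

sigma_comb = np.sqrt(sigma_vis**2 * sigma_vest**2 / (sigma_vis**2 + sigma_vest**2))
bias_comb = w_vis * bias_vis + w_vest * bias_vest

print(f"predicted combined threshold: {sigma_comb:.2f} deg "
      f"(vs. {min(sigma_vis, sigma_vest):.2f} deg for the better single cue)")
print(f"predicted combined bias: {bias_comb:.2f} deg")
```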
|
31
|
Layton OW, Fajen BR. The temporal dynamics of heading perception in the presence of moving objects. J Neurophysiol 2015; 115:286-300. [PMID: 26510765 DOI: 10.1152/jn.00866.2015] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2015] [Accepted: 10/26/2015] [Indexed: 11/22/2022] Open
Abstract
Many forms of locomotion rely on the ability to accurately perceive one's direction of locomotion (i.e., heading) based on optic flow. Although accurate in rigid environments, heading judgments may be biased when independently moving objects are present. The aim of this study was to systematically investigate the conditions in which moving objects influence heading perception, with a focus on the temporal dynamics and the mechanisms underlying this bias. Subjects viewed stimuli simulating linear self-motion in the presence of a moving object and judged their direction of heading. Experiments 1 and 2 revealed that heading perception is biased when the object crosses or almost crosses the observer's future path toward the end of the trial, but not when the object crosses earlier in the trial. Nonetheless, heading perception is not based entirely on the instantaneous optic flow toward the end of the trial. This was demonstrated in Experiment 3 by varying how much of the earlier part of the trial, leading up to the last frame, was presented to subjects. When the stimulus duration was long enough to include the part of the trial before the moving object crossed the observer's path, heading judgments were less biased. The findings suggest that heading perception is affected by the temporal evolution of optic flow. The time course of dorsal medial superior temporal area (MSTd) neuron responses may play a crucial role in perceiving heading in the presence of moving objects, a property not captured by many existing models.
Affiliation(s)
- Oliver W Layton
- Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, New York
- Brett R Fajen
- Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, New York
|
32
|
Issen L, Huxlin KR, Knill D. Spatial integration of optic flow information in direction of heading judgments. J Vis 2015; 15:14. [PMID: 26024461 DOI: 10.1167/15.6.14] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
While we know that humans are extremely sensitive to optic flow information about direction of heading, we do not know how they integrate information across the visual field. We adapted the standard cue perturbation paradigm to investigate how young adult observers integrate optic flow information from different regions of the visual field to judge direction of heading. First, subjects judged direction of heading when viewing a three-dimensional field of random dots simulating linear translation through the world. We independently perturbed the flow in one visual field quadrant to indicate a different direction of heading relative to the other three quadrants. We then used subjects' judgments of direction of heading to estimate the relative influence of flow information in each quadrant on perception. Human subjects behaved similarly to the ideal observer in terms of integrating motion information across the visual field with one exception: Subjects overweighted information in the upper half of the visual field. The upper-field bias was robust under several different stimulus conditions, suggesting that it may represent a physiological adaptation to the uneven distribution of task-relevant motion information in our visual world.
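The weighting analysis can be sketched as a simple regression (simulated data, not the study's): perceived heading is modeled as a weighted sum of the heading signaled by each visual-field quadrant, and the quadrant weights are recovered by least squares.

```python
import numpy as np

# Minimal sketch of a cue-perturbation weighting analysis on simulated data
# (not the study's data or code). Perceived heading is modeled as a weighted sum
# of per-quadrant headings; one quadrant is perturbed on each trial, and the
# weights are recovered by least-squares regression.

rng = np.random.default_rng(1)
n_trials = 400
true_weights = np.array([0.35, 0.35, 0.15, 0.15])   # hypothetical upper-field bias

# Per-quadrant headings: a shared base heading plus a perturbation in one quadrant.
base = rng.uniform(-8, 8, (n_trials, 1))
perturb = np.zeros((n_trials, 4))
perturb[np.arange(n_trials), rng.integers(0, 4, n_trials)] = rng.choice([-4, 4], n_trials)
quadrant_headings = base + perturb

responses = quadrant_headings @ true_weights + rng.normal(0, 1.0, n_trials)

est_weights, *_ = np.linalg.lstsq(quadrant_headings, responses, rcond=None)
print("recovered quadrant weights:", est_weights.round(2))
```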
|
33
|
Navigational strategy used to intercept fly balls under real-world conditions with moving visual background fields. Atten Percept Psychophys 2014; 77:613-25. [PMID: 25425225 DOI: 10.3758/s13414-014-0797-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
This study explored the navigational strategy used to intercept fly balls in a real-world environment under conditions with moving visual background fields. Fielders ran across a gymnasium attempting to catch fly balls that varied in distance and direction. During each trial, the launched balls traveled in front of a moving background texture that was projected onto an entire wall of a gymnasium. The background texture consisted of a field of random dots that moved together, at a constant speed and direction that varied between trials. The fielder route deviation was defined as the signed area swept out between the actual running path and a straight-line path to the destination, and these route deviation values were compared as a function of the background motion conditions. The findings confirmed that the moving visual background fields systematically altered the fielder running paths, which curved more forward and then to the side when the background gradient moved laterally with the ball, and curved more to the side and then forward when the background gradient moved opposite the ball. Fielder running paths deviated systematically, in a manner consistent with the use of a geometric optical control strategy that helps guide real-world perception-action tasks of interception, such as catching balls.
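One minimal reading of the route-deviation measure is sketched below with an invented example path; this is our own interpretation of "signed area between the actual running path and a straight-line path", not the authors' analysis code.

```python
import numpy as np

# Toy computation of a signed-area route deviation (our reading of the measure,
# not the authors' code). Each path sample is projected onto the start-to-end
# chord, the signed perpendicular offsets are integrated with the trapezoidal
# rule, and the sign indicates which side of the straight line the fielder ran.

def route_deviation(path_xy):
    """Signed area between a sampled running path and the straight start-end line."""
    path_xy = np.asarray(path_xy, dtype=float)
    start, end = path_xy[0], path_xy[-1]
    chord = end - start
    length = np.hypot(*chord)
    rel = path_xy - start
    signed_dist = (chord[0] * rel[:, 1] - chord[1] * rel[:, 0]) / length
    along = rel @ chord / length                   # progress along the chord
    return np.sum(0.5 * (signed_dist[1:] + signed_dist[:-1]) * np.diff(along))

# Example: a path that bows to one side of the straight line yields a nonzero area.
t = np.linspace(0.0, 1.0, 50)
curved_path = np.column_stack([10.0 * t, 2.0 * np.sin(np.pi * t)])
print(f"route deviation: {route_deviation(curved_path):.2f} m^2")
```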
|
34
|
Raudies F, Neumann H. Modeling heading and path perception from optic flow in the case of independently moving objects. Front Behav Neurosci 2013; 7:23. [PMID: 23554589 PMCID: PMC3612589 DOI: 10.3389/fnbeh.2013.00023] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2012] [Accepted: 03/13/2013] [Indexed: 11/18/2022] Open
Abstract
Humans are usually accurate when estimating heading or path from optic flow, even in the presence of independently moving objects (IMOs) in an otherwise rigid scene. To invoke significant biases in perceived heading, IMOs have to be large and obscure the focus of expansion (FOE) in the image plane, which is the point of approach. For the estimation of path during curvilinear self-motion no significant biases were found in the presence of IMOs. What makes humans robust in their estimation of heading or path using optic flow? We derive analytical models of optic flow for linear and curvilinear self-motion using geometric scene models. Heading biases of a linear least squares method, which builds upon these analytical models, are large, larger than those reported for humans. This motivated us to study segmentation cues that are available from optic flow. We derive models of accretion/deletion, expansion/contraction, acceleration/deceleration, local spatial curvature, and local temporal curvature, to be used as cues to segment an IMO from the background. Integrating these segmentation cues into our method of estimating heading or path now explains human psychophysical data and extends, as well as unifies, previous investigations. Our analysis suggests that various cues available from optic flow help to segment IMOs and, thus, make humans' heading and path perception robust in the presence of such IMOs.
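A linear least-squares estimator of the kind referred to above can be sketched with the standard textbook construction for pure translation (a simplified illustration, not the paper's full model): for a rigid scene every flow vector points away from the focus of expansion (FOE), which gives a constraint that is linear in the unknown FOE; an independently moving patch of dots violates the constraint and biases the estimate, which is exactly the failure the segmentation cues are meant to repair.

```python
import numpy as np

# Sketch of a linear least-squares FOE estimator for pure forward translation
# (textbook construction, not the paper's model). For a stationary scene,
# (x - x0) * vy - (y - y0) * vx = 0 at every point, which is linear in the
# unknown FOE (x0, y0). Dots belonging to an independently moving object violate
# the constraint and bias the estimate.

rng = np.random.default_rng(2)

def radial_flow(points, foe):
    """Expansion flow away from the FOE (per-point depth folded into a rate)."""
    return points - foe

def estimate_foe(points, flow):
    """Solve vy * x0 - vx * y0 = x * vy - y * vx in the least-squares sense."""
    A = np.column_stack([flow[:, 1], -flow[:, 0]])
    b = points[:, 0] * flow[:, 1] - points[:, 1] * flow[:, 0]
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

points = rng.uniform(-1, 1, (300, 2))
true_foe = np.array([0.1, 0.0])
flow = radial_flow(points, true_foe) * rng.uniform(0.5, 1.5, (300, 1))  # depth variation

print("rigid scene, estimated FOE:      ", estimate_foe(points, flow).round(3))

is_object = rng.random(300) < 0.2             # 20% of dots belong to a moving object
flow[is_object] += np.array([0.8, 0.0])       # the object translates rightward
print("with moving object, estimated FOE:", estimate_foe(points, flow).round(3))
```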
Affiliation(s)
- Florian Raudies
- Center for Computational Neuroscience and Neural Technology, Boston University, Boston, MA, USA
- Center of Excellence for Learning in Education, Science, and Technology, Boston University, Boston, MA, USA
- Heiko Neumann
- Center of Excellence for Learning in Education, Science, and Technology, Boston University, Boston, MA, USA
- Institute for Neural Information Processing, University of Ulm, Ulm, Germany
|
35
|
Abstract
We have recently suggested that neural flow parsing mechanisms act to subtract global optic flow consistent with observer movement to aid in detecting and assessing scene-relative object movement. Here, we examine whether flow parsing can occur independently from heading estimation. To address this question we used stimuli comprising two superimposed limited-lifetime-dot optic flow fields (one planar and one radial). This stimulus gives rise to the so-called optic flow illusion (OFI), in which perceived heading is biased in the direction of the planar flow field. Observers were asked to report the perceived direction of motion of a probe object placed in the OFI stimulus. If flow parsing depends upon a prior estimate of heading then the perceived trajectory should reflect global subtraction of a field consistent with the heading experienced under the OFI. In Experiment 1 we tested this prediction directly, finding instead that the perceived trajectory was biased markedly in the direction opposite to that predicted under the OFI. In Experiment 2 we demonstrate that the results of Experiment 1 are consistent with a positively weighted vector sum of the effects seen when viewing the probe together with individual radial and planar flow fields. These results suggest that flow parsing is not necessarily dependent on prior estimation of heading direction. We discuss the implications of this finding for our understanding of the mechanisms of flow parsing.
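The two hypotheses contrasted above can be written down as simple vector computations; the positions, gains, and weights below are invented for illustration and are not the study's stimuli or fitted values.

```python
import numpy as np

# Toy version of the two flow-parsing predictions (invented numbers). Flow
# parsing predicts perceived probe motion as retinal probe motion minus a gain
# times the self-motion flow at the probe location. Prediction A subtracts flow
# consistent with the single, illusorily shifted heading; prediction B is a
# positively weighted vector sum of the parsing effects of the radial and planar
# fields taken separately.

probe_pos = np.array([0.3, 0.0])           # probe location on the screen
retinal_probe = np.array([0.0, 0.4])       # probe motion on the retina (upward)
gain = 0.7                                 # flow-parsing subtraction gain

def radial_flow_at(pos, foe):
    return np.asarray(pos) - np.asarray(foe)      # expansion away from the FOE

planar_flow = np.array([0.25, 0.0])        # uniform rightward planar field

# A: subtract flow consistent with the illusory (rightward-shifted) heading.
illusory_foe = np.array([0.2, 0.0])
pred_A = retinal_probe - gain * radial_flow_at(probe_pos, illusory_foe)

# B: weighted sum of the effects produced by each component field on its own.
effect_radial = retinal_probe - gain * radial_flow_at(probe_pos, [0.0, 0.0])
effect_planar = retinal_probe - gain * planar_flow
w_radial, w_planar = 0.8, 0.6
pred_B = w_radial * effect_radial + w_planar * effect_planar

print("prediction A (single illusory heading):", pred_A.round(3))
print("prediction B (weighted vector sum)    :", pred_B.round(3))
```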
Affiliation(s)
- Paul A. Warren
- School of Psychological Sciences, The University of Manchester, Manchester, UK
- Andrew J. Foulkes
- School of Psychological Sciences, The University of Manchester, Manchester, UK
|
36
|
MacNeilage PR, Zhang Z, DeAngelis GC, Angelaki DE. Vestibular facilitation of optic flow parsing. PLoS One 2012; 7:e40264. [PMID: 22768345 PMCID: PMC3388053 DOI: 10.1371/journal.pone.0040264] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2011] [Accepted: 06/04/2012] [Indexed: 11/18/2022] Open
Abstract
Simultaneous object motion and self-motion give rise to complex patterns of retinal image motion. In order to estimate object motion accurately, the brain must parse this complex retinal motion into self-motion and object motion components. Although this computational problem can be solved, in principle, through purely visual mechanisms, extra-retinal information that arises from the vestibular system during self-motion may also play an important role. Here we investigate whether combining vestibular and visual self-motion information improves the precision of object motion estimates. Subjects were asked to discriminate the direction of object motion in the presence of simultaneous self-motion, depicted either by visual cues alone (i.e. optic flow) or by combined visual/vestibular stimuli. We report a small but significant improvement in object motion discrimination thresholds with the addition of vestibular cues. This improvement was greatest for eccentric heading directions and negligible for forward movement, a finding that could reflect increased relative reliability of vestibular versus visual cues for eccentric heading directions. Overall, these results are consistent with the hypothesis that vestibular inputs can help parse retinal image motion into self-motion and object motion components.
Affiliation(s)
- Paul R MacNeilage
- Vertigo, Balance, and Oculomotor Research Center, University Hospital of Munich, Munich, Germany
|
37
|
Use of speed cues in the detection of moving objects by moving observers. Vision Res 2012; 59:17-24. [PMID: 22406544 DOI: 10.1016/j.visres.2012.02.006] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2011] [Revised: 01/12/2012] [Accepted: 02/21/2012] [Indexed: 11/20/2022]
Abstract
When an observer moves through an environment containing stationary and moving objects, he or she must be able to determine which objects are moving relative to the others in order to navigate successfully and avoid collisions. We investigated whether image speed can be used as a cue to detect a moving object in the scene. Our results show that image speed can be used to detect moving objects as long as the object is moving sufficiently faster or slower than it would if it were part of the stationary scene.
|
38
|
DeAngelis G, Angelaki D. Visual–Vestibular Integration for Self-Motion Perception. Front Neurosci 2011. [DOI: 10.1201/b11092-39] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022] Open
|
40
|
Mapstone M, Duffy CJ. Approaching objects cause confusion in patients with Alzheimer's disease regarding their direction of self-movement. Brain 2010; 133:2690-701. [PMID: 20647265 DOI: 10.1093/brain/awq140] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
Navigation requires real-time heading estimation based on self-movement cues from optic flow and object motion. We presented a simulated heading discrimination task to young, middle-aged, and older normal control subjects and to patients with mild cognitive impairment or Alzheimer's disease. Age-related decline and neurodegenerative disease effects were evident on a battery of neuropsychological and visual motion psychophysical measures. All subject groups made more accurate heading judgments when using optic flow patterns than when using simulated movement past earth-fixed objects. When optic flow and a congruent object were presented together, heading judgments showed intermediate accuracy. In separate trials, we combined optic flow with non-congruent object motion, simulating an independently moving object. In the case of non-congruent objects, almost all of our subjects shifted their perceived self-movement heading in the direction of the moving object. However, patients with Alzheimer's disease uniquely indicated that perceived self-movement was straight ahead, in the direction of visual fixation. The tendency to be confused by objects that appear to move independently in the simulated visual scene corresponded to the difficulty patients with Alzheimer's disease encountered in real-world navigation through the hospital lobby (R(2) = 0.87). This was not the case in older normal controls (R(2) = 0.09). We conclude that perceptual factors limit safe, autonomous navigation in early Alzheimer's disease. In particular, the presence of independently moving objects in naturalistic environments limits the capacity of patients with Alzheimer's disease to judge their heading of self-movement.
Affiliation(s)
- Mark Mapstone
- Department of Neurology, University of Rochester Medical Centre, 601 Elmwood Avenue, Rochester, NY 14642-0673, USA
|
41
|
Royden CS, Connors EM. The detection of moving objects by moving observers. Vision Res 2010; 50:1014-24. [DOI: 10.1016/j.visres.2010.03.008] [Citation(s) in RCA: 32] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2009] [Revised: 01/29/2010] [Accepted: 03/16/2010] [Indexed: 11/24/2022]
|
42
|
Fetsch CR, Deangelis GC, Angelaki DE. Visual-vestibular cue integration for heading perception: applications of optimal cue integration theory. Eur J Neurosci 2010; 31:1721-9. [PMID: 20584175 PMCID: PMC3108057 DOI: 10.1111/j.1460-9568.2010.07207.x] [Citation(s) in RCA: 92] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]
Abstract
The perception of self-motion is crucial for navigation, spatial orientation and motor control. In particular, estimation of one's direction of translation, or heading, relies heavily on multisensory integration in most natural situations. Visual and nonvisual (e.g., vestibular) information can be used to judge heading, but each modality alone is often insufficient for accurate performance. It is not surprising, then, that visual and vestibular signals converge frequently in the nervous system, and that these signals interact in powerful ways at the level of behavior and perception. Early behavioral studies of visual-vestibular interactions consisted mainly of descriptive accounts of perceptual illusions and qualitative estimation tasks, often with conflicting results. In contrast, cue integration research in other modalities has benefited from the application of rigorous psychophysical techniques, guided by normative models that rest on the foundation of ideal-observer analysis and Bayesian decision theory. Here we review recent experiments that have attempted to harness these so-called optimal cue integration models for the study of self-motion perception. Some of these studies used nonhuman primate subjects, enabling direct comparisons between behavioral performance and simultaneously recorded neuronal activity. The results indicate that humans and monkeys can integrate visual and vestibular heading cues in a manner consistent with optimal integration theory, and that single neurons in the dorsal medial superior temporal area show striking correlates of the behavioral effects. This line of research and other applications of normative cue combination models should continue to shed light on mechanisms of self-motion perception and the neuronal basis of multisensory integration.
Affiliation(s)
- Christopher R Fetsch
- Department of Anatomy and Neurobiology, Washington University School of Medicine, 660 S. Euclid Ave., Box 8108, St. Louis, MO 63110, USA
|
43
|
Browning NA, Grossberg S, Mingolla E. A neural model of how the brain computes heading from optic flow in realistic scenes. Cogn Psychol 2009; 59:320-56. [PMID: 19716125 DOI: 10.1016/j.cogpsych.2009.07.002] [Citation(s) in RCA: 28] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2008] [Accepted: 07/20/2009] [Indexed: 11/15/2022]
Abstract
Visually based navigation is a key competence during spatial cognition. Animals avoid obstacles and approach goals in novel cluttered environments using optic flow to compute heading with respect to the environment. Most navigation models try either to explain data or to demonstrate navigational competence in real-world environments, without regard to behavioral and neural substrates. The current article develops a model that does both. The ViSTARS neural model describes interactions among neurons in the primate magnocellular pathway, including V1, MT(+), and MSTd. Model outputs are quantitatively similar to human heading data in response to complex natural scenes. The model estimates heading to within 1.5 degrees in random dot or photo-realistically rendered scenes, and to within 3 degrees in video streams from driving in real-world environments. Simulated rotations of less than 1 degree/s do not affect heading estimates, but faster simulated rotation rates do, as in humans. The model is part of a larger navigational system that identifies and tracks objects while navigating in cluttered environments.
Affiliation(s)
- N Andrew Browning
- Department of Cognitive and Neural Systems, Center for Adaptive Systems, Boston University, 677 Beacon Street, Boston, MA 02215, USA
|
44
|
Beardsley SA, Vaina LM. An effect of relative motion on trajectory discrimination. Vision Res 2008; 48:1040-52. [PMID: 18304601 DOI: 10.1016/j.visres.2008.01.009] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2006] [Revised: 12/28/2007] [Accepted: 01/10/2008] [Indexed: 10/22/2022]
Abstract
Psychophysical studies point to the existence of specialized mechanisms sensitive to the relative motion between an object and its background. Such mechanisms would seem ideal for the motion-based segmentation of objects; however, their properties and role in processing the visual scene remain unclear. Here we examine the contribution of relative motion mechanisms to the processing of object trajectory. In a series of four psychophysical experiments we examine systematically the effects of relative direction and speed differences on the perceived trajectory of an object against a moving background. We show that background motion systematically influences the discrimination of object direction. Subjects' ability to discriminate direction was consistently better for objects moving opposite a translating background than for objects moving in the same direction as the background. This effect was limited to the case of a translating background and did not affect perceived trajectory for more complex background motions associated with self-motion. We interpret these differences as providing support for the role of relative motion mechanisms in the segmentation and representation of object motions that do not occlude the path of an observer's self-motion.
Affiliation(s)
- Scott A Beardsley
- Department of Biomedical Engineering, Marquette University, P.O. Box 1881, Milwaukee, WI 53201, USA
|
45
|
A model for simultaneous computation of heading and depth in the presence of rotations. Vision Res 2007; 47:3025-40. [DOI: 10.1016/j.visres.2007.08.008] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2007] [Revised: 08/15/2007] [Accepted: 08/17/2007] [Indexed: 11/22/2022]
|
46
|
Fetsch CR, Wang S, Gu Y, DeAngelis GC, Angelaki DE. Spatial reference frames of visual, vestibular, and multimodal heading signals in the dorsal subdivision of the medial superior temporal area. J Neurosci 2007; 27:700-12. [PMID: 17234602 PMCID: PMC1995026 DOI: 10.1523/jneurosci.3553-06.2007] [Citation(s) in RCA: 101] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Heading perception is a complex task that generally requires the integration of visual and vestibular cues. This sensory integration is complicated by the fact that these two modalities encode motion in distinct spatial reference frames (visual, eye-centered; vestibular, head-centered). Visual and vestibular heading signals converge in the primate dorsal subdivision of the medial superior temporal area (MSTd), a region thought to contribute to heading perception, but the reference frames of these signals remain unknown. We measured the heading tuning of MSTd neurons by presenting optic flow (visual condition), inertial motion (vestibular condition), or a congruent combination of both cues (combined condition). Static eye position was varied from trial to trial to determine the reference frame of tuning (eye-centered, head-centered, or intermediate). We found that tuning for optic flow was predominantly eye-centered, whereas tuning for inertial motion was intermediate but closer to head-centered. Reference frames in the two unimodal conditions were rarely matched in single neurons and uncorrelated across the population. Notably, reference frames in the combined condition varied as a function of the relative strength and spatial congruency of visual and vestibular tuning. This represents the first investigation of spatial reference frames in a naturalistic, multimodal condition in which cues may be integrated to improve perceptual performance. Our results compare favorably with the predictions of a recent neural network model that uses a recurrent architecture to perform optimal cue integration, suggesting that the brain could use a similar computational strategy to integrate sensory signals expressed in distinct frames of reference.
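The logic of classifying reference frames from tuning measured at different static eye positions can be sketched with simulated tuning curves. The sketch below is our illustration only; the tuning parameters, the cross-correlation recipe, and the "displacement index" label are assumptions for the example, not the recording analysis used in the study.

```python
import numpy as np

# Simulated illustration of reference-frame classification: a tuning curve is
# measured at two static eye positions and both are plotted against head-centered
# heading. An eye-centered cell's curve shifts by the full change in eye
# position, a head-centered cell's does not; the shift recovered by
# cross-correlation, divided by the eye-position change, gives an index between
# 0 (head-centered) and 1 (eye-centered).

headings = np.arange(-180, 180, 5.0)          # heading in head coordinates (deg)
eye_positions = (-10.0, 10.0)                 # two fixation positions (deg)

def tuning(heading, eye, frame, pref=30.0, sigma=40.0):
    center = pref + (eye if frame == "eye" else 0.0)   # eye-centered curves shift
    return np.exp(-0.5 * ((heading - center) / sigma) ** 2)

def displacement_index(frame):
    r1 = tuning(headings, eye_positions[0], frame)
    r2 = tuning(headings, eye_positions[1], frame)
    lags = np.arange(-len(headings) + 1, len(headings)) * 5.0
    shift = lags[np.argmax(np.correlate(r2 - r2.mean(), r1 - r1.mean(), "full"))]
    return shift / (eye_positions[1] - eye_positions[0])

print("displacement index, simulated eye-centered cell :", displacement_index("eye"))
print("displacement index, simulated head-centered cell:", displacement_index("head"))
```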
Affiliation(s)
- Christopher R. Fetsch
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, Missouri 63110
- Sentao Wang
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, Missouri 63110
- Yong Gu
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, Missouri 63110
- Gregory C. DeAngelis
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, Missouri 63110
- Dora E. Angelaki
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, Missouri 63110
|
47
|
Kim NG. Active Steering along Corrugated Surfaces. Perception 2006; 35:895-909. [PMID: 16970199 DOI: 10.1068/p5239] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
A study is reported of the effect of dynamic occlusion that arises during locomotion over corrugated surfaces and of its facilitating role in the control of locomotion, especially in cluttered environments. Surfaces varied in degree of corrugation and type of texture. Heading accuracy was assessed by having participants perform an active steering task. Results demonstrated the advantage of texture-mapped image surfaces over discrete element surfaces in the corrugated conditions. Observers appear to exploit accretion and deletion of optical texture at the occluding edge to extract and use information about heading direction for the control of movements in cluttered environments.
|
48
|
Gu Y, Watkins PV, Angelaki DE, DeAngelis GC. Visual and nonvisual contributions to three-dimensional heading selectivity in the medial superior temporal area. J Neurosci 2006; 26:73-85. [PMID: 16399674 PMCID: PMC1538979 DOI: 10.1523/jneurosci.2356-05.2006] [Citation(s) in RCA: 226] [Impact Index Per Article: 12.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Robust perception of self-motion requires integration of visual motion signals with nonvisual cues. Neurons in the dorsal subdivision of the medial superior temporal area (MSTd) may be involved in this sensory integration, because they respond selectively to global patterns of optic flow, as well as translational motion in darkness. Using a virtual-reality system, we have characterized the three-dimensional (3D) tuning of MSTd neurons to heading directions defined by optic flow alone, inertial motion alone, and congruent combinations of the two cues. Among 255 MSTd neurons, 98% exhibited significant 3D heading tuning in response to optic flow, whereas 64% were selective for heading defined by inertial motion. Heading preferences for visual and inertial motion could be aligned but were just as frequently opposite. Moreover, heading selectivity in response to congruent visual/vestibular stimulation was typically weaker than that obtained using optic flow alone, and heading preferences under congruent stimulation were dominated by the visual input. Thus, MSTd neurons generally did not integrate visual and nonvisual cues to achieve better heading selectivity. A simple two-layer neural network, which received eye-centered visual inputs and head-centered vestibular inputs, reproduced the major features of the MSTd data. The network was trained to compute heading in a head-centered reference frame under all stimulus conditions, such that it performed a selective reference-frame transformation of visual, but not vestibular, signals. The similarity between network hidden units and MSTd neurons suggests that MSTd may be an early stage of sensory convergence involved in transforming optic flow information into a (head-centered) reference frame that facilitates integration with vestibular signals.
Affiliation(s)
- Yong Gu
- Department of Neurobiology, Washington University School of Medicine, St. Louis, Missouri 63110, USA
|
49
|
Logan DJ, Duffy CJ. Cortical area MSTd combines visual cues to represent 3-D self-movement. Cereb Cortex 2005; 16:1494-507. [PMID: 16339087 DOI: 10.1093/cercor/bhj082] [Citation(s) in RCA: 35] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
As arboreal primates move through the jungle, they are immersed in visual motion that they must distinguish from the movement of predators and prey. We recorded dorsal medial superior temporal (MSTd) cortical neuronal responses to visual motion stimuli simulating self-movement and object motion. MSTd neurons encode the heading of simulated self-movement in three-dimensional (3-D) space. 3-D heading responses can be evoked either by the large patterns of visual motion in optic flow or by the visual object motion seen when an observer passes an earth-fixed landmark. Responses to naturalistically combined optic flow and object motion depend on their relative directions: an object moving as part of the optic flow field has little effect on neuronal responses. In contrast, an object moving separately from the optic flow field has large effects, decreasing the amplitude of the population response and shifting the population's heading estimate to match the direction of object motion as the object moves toward central vision. These effects parallel those seen in human heading perception with minimal effects of objects moving with the optic flow and substantial effects of objects violating the optic flow. We conclude that MSTd can contribute to navigation by supporting 3-D heading estimation, potentially switching from optic flow to object cues when a moving object passes in front of the observer.
Affiliation(s)
- David J Logan
- Department of Neurology, and the Center for Visual Science, The University of Rochester Medical Center, Rochester, NY 14642, USA
|
50
|
Wurfel JD, Barraza JF, Grzywacz NM. Measurement of rate of expansion in the perception of radial motion. Vision Res 2005; 45:2740-51. [PMID: 16023697 DOI: 10.1016/j.visres.2005.03.022] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2004] [Revised: 03/08/2005] [Accepted: 03/29/2005] [Indexed: 11/29/2022]
Abstract
Optic flow generated by rigid surface patches can be decomposed into a small number of elementary motion types. In these experiments, we show that the human visual system can evaluate expansion, one of these motion types, metrically. Moreover, we show that the discrimination of rates of expansion is spatially local. Because the estimation of the focus of expansion is somewhat imprecise, this locality sometimes produces predictable errors in the estimation of rate of expansion. Such predictions can be made with a model adapted from one previously developed for angular-velocity discrimination.
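The predictable errors described above follow directly from the geometry. The toy computation below is ours, not the paper's model: the locally estimated rate of expansion is the radial image speed divided by the distance to the estimated focus of expansion, so a mislocalized focus inflates the rate on the near side and deflates it on the far side.

```python
import numpy as np

# Toy computation (not the paper's model): for radial flow with true expansion
# rate k, the speed at an image point is k times its distance from the true FOE.
# Estimating the rate with a displaced FOE biases the local estimate in a
# predictable, location-dependent way.

true_rate = 2.0                       # 1/s; inverse of time-to-contact
true_foe = np.array([0.0, 0.0])
foe_estimate = np.array([0.1, 0.0])   # imprecise FOE localization

points = np.array([[0.5, 0.0], [-0.5, 0.0], [0.0, 0.5]])
speeds = true_rate * np.linalg.norm(points - true_foe, axis=1)     # radial speeds

estimated_rate = speeds / np.linalg.norm(points - foe_estimate, axis=1)
for p, est in zip(points, estimated_rate):
    print(f"point {p}: estimated rate {est:.2f} 1/s (true {true_rate:.1f})")
```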
Affiliation(s)
- Jeff D Wurfel
- Neuroscience Graduate Program, University of Southern California, Hedco Neuroscience Building, MC 2520, Los Angeles, CA 90089-2520, USA
|