1
DiRisio GF, Ra Y, Qiu Y, Anzai A, DeAngelis GC. Neurons in Primate Area MSTd Signal Eye Movement Direction Inferred from Dynamic Perspective Cues in Optic Flow. J Neurosci 2023; 43:1888-1904. [PMID: 36725323] [PMCID: PMC10027048] [DOI: 10.1523/jneurosci.1885-22.2023]
Abstract
Smooth eye movements are common during natural viewing; we frequently rotate our eyes to track moving objects or to maintain fixation on an object during self-movement. Reliable information about smooth eye movements is crucial to various neural computations, such as estimating heading from optic flow or judging depth from motion parallax. While it is well established that extraretinal signals (e.g., efference copies of motor commands) carry critical information about eye velocity, the rotational optic flow field produced by eye rotations also carries valuable information. Although previous work has shown that dynamic perspective cues in optic flow can be used in computations that require estimates of eye velocity, it has remained unclear where and how the brain processes these visual cues and how they are integrated with extraretinal signals regarding eye rotation. We examined how neurons in the dorsal region of the medial superior temporal area (MSTd) of two male rhesus monkeys represent the direction of smooth pursuit eye movements based on both visual cues (dynamic perspective) and extraretinal signals. We find that most MSTd neurons have matched preferences for the direction of eye rotation based on visual and extraretinal signals. Moreover, neural responses to combinations of these signals are well predicted by a weighted linear summation model. These findings demonstrate a neural substrate for representing the velocity of smooth eye movements based on rotational optic flow and establish area MSTd as a key node for integrating visual and extraretinal signals into a more generalized representation of smooth eye movements.

Significance Statement: We frequently rotate our eyes to smoothly track objects of interest during self-motion. Information about eye velocity is crucial for a variety of computations performed by the brain, including depth perception and heading perception. Traditionally, information about eye rotation has been thought to arise mainly from extraretinal signals, such as efference copies of motor commands. Previous work shows that eye velocity can also be inferred from rotational optic flow that accompanies smooth eye movements, but the neural origins of these visual signals about eye rotation have remained unknown. We demonstrate that macaque neurons signal the direction of smooth eye rotation based on visual signals, and that they integrate both visual and extraretinal signals regarding eye rotation in a congruent fashion.
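The weighted linear summation model named above lends itself to a short sketch. The code below is an illustrative reconstruction, not the authors' analysis: the firing rates are invented, and ordinary least-squares fitting is an assumption.

```python
import numpy as np

# Hypothetical single-cue responses of one MSTd neuron (spikes/s) across
# eight rotation directions; all values are made up for illustration.
r_visual = np.array([12.0, 18.0, 25.0, 30.0, 24.0, 17.0, 11.0, 9.0])
r_extraretinal = np.array([10.0, 15.0, 22.0, 28.0, 21.0, 14.0, 9.0, 8.0])
r_combined = np.array([20.0, 30.0, 43.0, 52.0, 41.0, 28.0, 18.0, 15.0])

# Model: combined response = w_vis * visual + w_extra * extraretinal + offset.
X = np.column_stack([r_visual, r_extraretinal, np.ones_like(r_visual)])
(w_vis, w_extra, offset), *_ = np.linalg.lstsq(X, r_combined, rcond=None)

prediction = X @ np.array([w_vis, w_extra, offset])
ss_res = np.sum((r_combined - prediction) ** 2)
ss_tot = np.sum((r_combined - r_combined.mean()) ** 2)
print(f"w_vis={w_vis:.2f}, w_extra={w_extra:.2f}, "
      f"offset={offset:.2f}, R^2={1 - ss_res / ss_tot:.3f}")
```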
Affiliation(s)
- Grace F DiRisio: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York 14627; Department of Neurobiology, University of Chicago, Chicago, Illinois 60637
- Yongsoo Ra: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York 14627; Department of Neurobiology, Harvard Medical School, Boston, Massachusetts 02115
- Yinghui Qiu: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York 14627; College of Veterinary Medicine, Cornell University, Ithaca, New York 14853-6401
- Akiyuki Anzai: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York 14627
- Gregory C DeAngelis: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York 14627
2
French RL, DeAngelis GC. Scene-relative object motion biases depth percepts. Sci Rep 2022; 12:18480. [PMID: 36323845] [PMCID: PMC9630409] [DOI: 10.1038/s41598-022-23219-4]
Abstract
An important function of the visual system is to represent 3D scene structure from a sequence of 2D images projected onto the retinae. During observer translation, the relative image motion of stationary objects at different distances (motion parallax) provides potent depth information. However, if an object moves relative to the scene, this complicates the computation of depth from motion parallax since there will be an additional component of image motion related to scene-relative object motion. To correctly compute depth from motion parallax, only the component of image motion caused by self-motion should be used by the brain. Previous experimental and theoretical work on perception of depth from motion parallax has assumed that objects are stationary in the world. Thus, it is unknown whether perceived depth based on motion parallax is biased by object motion relative to the scene. Naïve human subjects viewed a virtual 3D scene consisting of a ground plane and stationary background objects, while lateral self-motion was simulated by optic flow. A target object could be either stationary or moving laterally at different velocities, and subjects were asked to judge the depth of the object relative to the plane of fixation. Subjects showed a far bias when object and observer moved in the same direction, and a near bias when object and observer moved in opposite directions. This pattern of biases is expected if subjects confound image motion due to self-motion with that due to scene-relative object motion. These biases were large when the object was viewed monocularly, and were greatly reduced, but not eliminated, when binocular disparity cues were provided. Our findings establish that scene-relative object motion can confound perceptual judgements of depth during self-motion.
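The direction of these biases follows from the motion/pursuit intuition for depth from parallax: if the brain attributes all retinal motion to self-motion, any scene-relative object motion masquerades as depth. A toy sketch under that assumption (sign conventions and numbers are illustrative, not the paper's model):

```python
# Toy depth-from-parallax readout: depth relative to fixation is inferred
# from the ratio of retinal velocity to pursuit eye velocity during
# simulated lateral self-motion. All signs and units are illustrative.
eye_vel = 5.0                  # deg/s pursuit driven by the simulated self-motion
self_motion_component = 0.0    # the target truly lies in the fixation plane

for object_vel in (-2.0, 0.0, +2.0):   # scene-relative object motion, deg/s
    retinal_vel = self_motion_component + object_vel
    inferred = retinal_vel / eye_vel   # motion/pursuit ratio
    percept = "fixation plane" if inferred == 0 else ("far" if inferred > 0 else "near")
    print(f"object motion {object_vel:+.1f} deg/s -> depth estimate {inferred:+.2f} ({percept})")
# With this sign convention, object motion in the observer's direction is
# misread as far, and opposite motion as near, mirroring the reported biases.
```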
Affiliation(s)
- Ranran L. French: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, USA
- Gregory C. DeAngelis: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, USA
3
Kim HR, Angelaki DE, DeAngelis GC. A neural mechanism for detecting object motion during self-motion. eLife 2022; 11:74971. [PMID: 35642599] [PMCID: PMC9159750] [DOI: 10.7554/elife.74971]
Abstract
Detection of objects that move in a scene is a fundamental computation performed by the visual system. This computation is greatly complicated by observer motion, which causes most objects to move across the retinal image. How the visual system detects scene-relative object motion during self-motion is poorly understood. Human behavioral studies suggest that the visual system may identify local conflicts between motion parallax and binocular disparity cues to depth and may use these signals to detect moving objects. We describe a novel mechanism for performing this computation based on neurons in macaque middle temporal (MT) area with incongruent depth tuning for binocular disparity and motion parallax cues. Neurons with incongruent tuning respond selectively to scene-relative object motion, and their responses are predictive of perceptual decisions when animals are trained to detect a moving object during self-motion. This finding establishes a novel functional role for neurons with incongruent tuning for multiple depth cues.
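A minimal sketch of how incongruent tuning can flag scene-relative object motion: when the two depth cues agree, a congruent unit responds best; when object motion corrupts the parallax cue but not disparity, an incongruent unit responds best. The Gaussian tuning and depth values are invented for illustration and do not reproduce the paper's MT model.

```python
import numpy as np

def tuning(depth, pref, sigma=0.5):
    # Gaussian depth tuning (toy).
    return np.exp(-0.5 * ((depth - pref) / sigma) ** 2)

def response(d_disparity, d_parallax, pref_disp, pref_mp):
    # Joint response: product of tuning to the two depth cues.
    return tuning(d_disparity, pref_disp) * tuning(d_parallax, pref_mp)

# Cues agree for a stationary object; object motion corrupts parallax only.
cases = {"stationary object": (0.5, 0.5), "moving object": (0.5, -0.5)}

for label, (d_disp, d_mp) in cases.items():
    congruent = response(d_disp, d_mp, pref_disp=0.5, pref_mp=0.5)
    incongruent = response(d_disp, d_mp, pref_disp=0.5, pref_mp=-0.5)
    print(f"{label}: congruent={congruent:.2f}, incongruent={incongruent:.2f}")
# The incongruent unit responds selectively when the cues conflict, i.e.,
# when the object moves relative to the scene.
```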
Affiliation(s)
- HyungGoo R Kim: Department of Biomedical Engineering, Sungkyunkwan University, Suwon, Republic of Korea; Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, United States; Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Republic of Korea
- Dora E Angelaki: Center for Neural Science, New York University, New York, United States
- Gregory C DeAngelis: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, United States
4
Abstract
We perceive our environment through multiple independent sources of sensory input. The brain is tasked with deciding whether multiple signals are produced by the same or different events (i.e., solve the problem of causal inference). Here, we train a neural network to solve causal inference by either combining or separating visual and vestibular inputs in order to estimate self- and scene motion. We find that the network recapitulates key neurophysiological (i.e., congruent and opposite neurons) and behavioral (e.g., reliability-based cue weighting) properties of biological systems. We show how congruent and opposite neurons support motion estimation and how the balance in activity between these subpopulations determines whether to combine or separate multisensory signals.

Sitting in a static railway carriage can produce illusory self-motion if the train on an adjoining track moves off. While our visual system registers motion, vestibular signals indicate that we are stationary. The brain is faced with a difficult challenge: is there a single cause of sensations (I am moving) or two causes (I am static, another train is moving)? If a single cause, integrating signals produces a more precise estimate of self-motion, but if not, one cue should be ignored. In many cases, this process of causal inference works without error, but how does the brain achieve it? Electrophysiological recordings show that the macaque medial superior temporal area contains many neurons that encode combinations of vestibular and visual motion cues. Some respond best to vestibular and visual motion in the same direction (“congruent” neurons), while others prefer opposing directions (“opposite” neurons). Congruent neurons could underlie cue integration, but the function of opposite neurons remains a puzzle. Here, we seek to explain this computational arrangement by training a neural network model to solve causal inference for motion estimation. Like biological systems, the model develops congruent and opposite units and recapitulates known behavioral and neurophysiological observations. We show that all units (both congruent and opposite) contribute to motion estimation. Importantly, however, it is the balance between their activity that distinguishes whether visual and vestibular cues should be integrated or separated. This explains the computational purpose of puzzling neural representations and shows how a relatively simple feedforward network can solve causal inference.
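The balance-of-subpopulations readout described above can be sketched as follows, assuming von Mises tuning and multiplicative cue combination; this is a hand-built toy, not the trained network from the paper.

```python
import numpy as np

prefs = np.linspace(-np.pi, np.pi, 16, endpoint=False)  # preferred directions

def von_mises(direction, preferred, kappa=2.0):
    return np.exp(kappa * np.cos(direction - preferred))

def subpopulation_sums(visual_dir, vestibular_dir):
    # Congruent units: matched preferences; opposite units: visual preference
    # shifted by 180 degrees. Cues combine multiplicatively (a toy choice).
    congruent = von_mises(visual_dir, prefs) * von_mises(vestibular_dir, prefs)
    opposite = von_mises(visual_dir, prefs + np.pi) * von_mises(vestibular_dir, prefs)
    return congruent.sum(), opposite.sum()

for label, (vis, vest) in [("common cause", (0.2, 0.2)),
                           ("separate causes", (0.2, 0.2 + np.pi))]:
    c, o = subpopulation_sums(vis, vest)
    verdict = "integrate" if c > o else "separate"
    print(f"{label}: congruent={c:.1f}, opposite={o:.1f} -> {verdict}")
```

When the cues share a direction the congruent subpopulation dominates and the cues are integrated; when they conflict the opposite subpopulation dominates and the cues are kept separate, which is the role the paper assigns to the balance between the two groups.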
5
Burlingham CS, Heeger DJ. Heading perception depends on time-varying evolution of optic flow. Proc Natl Acad Sci U S A 2020; 117:33161-33169. [PMID: 33328275] [PMCID: PMC7776640] [DOI: 10.1073/pnas.2022984117]
Abstract
There is considerable support for the hypothesis that perception of heading in the presence of rotation is mediated by instantaneous optic flow. This hypothesis, however, has never been tested. We introduce a method, termed "nonvarying phase motion," for generating a stimulus that conveys a single instantaneous optic flow field, even though the stimulus is presented for an extended period of time. In this experiment, observers viewed stimulus videos and performed a forced-choice heading discrimination task. For nonvarying phase motion, observers made large errors in heading judgments. This suggests that instantaneous optic flow is insufficient for heading perception in the presence of rotation. These errors were mostly eliminated when the velocity of phase motion was varied over time to convey the evolving sequence of optic flow fields corresponding to a particular heading. This demonstrates that heading perception in the presence of rotation relies on the time-varying evolution of optic flow. We hypothesize that the visual system accurately computes heading, despite rotation, based on optic acceleration, the temporal derivative of optic flow.
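The proposed quantity, optic acceleration, is just the temporal derivative of the optic flow field. A minimal sketch of that computation on a toy flow sequence (the frame rate and array layout are assumptions):

```python
import numpy as np

# Toy sequence of optic flow fields: flow[t, y, x, :] is the image velocity
# (vx, vy) at each pixel on frame t. Here flow magnitude grows over time.
T, H, W = 10, 32, 32
t = np.arange(T, dtype=float)[:, None, None, None]
flow = 0.5 * t * np.ones((T, H, W, 2))

dt = 1.0 / 60.0   # assumed frame interval (60 Hz display)
optic_accel = np.gradient(flow, dt, axis=0)   # temporal derivative of flow

# For this linearly growing field the acceleration is constant: 0.5/dt.
print(optic_accel[5, 0, 0])   # -> [30. 30.]
```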
Affiliation(s)
- David J Heeger: Department of Psychology, New York University, New York, NY 10003; Center for Neural Science, New York University, New York, NY 10003
6
Rogoskii I, Mushtruk M, Titova L, Snezhko O, Rogach S, Blesnyuk O, Rosamaha Y, Zubok T, Yeremenko O, Nadtochiy O. Engineering management of starter cultures in study of temperature of fermentation of sour-milk drink with apiproducts. Potravinarstvo 2020. [DOI: 10.5219/1437]
Abstract
The article addresses engineering management of poly-strain fermentation in a study of the fermentation temperature of a sour-milk drink with apiproducts. In developing fermented dairy products, the component ingredients, the changes in their composition, and their interrelated properties are treated as a single technological system. The authors took into account that food technologies based on a pure culture of a single microorganism are limited by the capabilities of that organism's fermentation systems, so the desired outcome may not be achieved even by changing the conditions and parameters of cultivation. To carry out fermentation processes successfully in the technological system, it is more promising to use a combination of cultures, that is, associations of microorganisms with a wider range of fermentation products than a single culture offers. All experimental samples outperformed the controls across the full set of indicators; the best was a sample fermented with a starter containing an equal ratio of cultures at a temperature of 38-40 °C. The authors found that, based on the overall indicators of the finished product, production of sour-milk drinks with a complex of apiproducts should use a three-strain poly-fermentation starter with a congruent ratio of cultures and optimal fermentation regimes of 39 ± 1 °C for 5.0 ± 0.3 hours.
7
The Effects of Depth Cues and Vestibular Translation Signals on the Rotation Tolerance of Heading Tuning in Macaque Area MSTd. eNeuro 2020; 7:ENEURO.0259-20.2020. [PMID: 33127626] [PMCID: PMC7688306] [DOI: 10.1523/eneuro.0259-20.2020]
Abstract
When the eyes rotate during translational self-motion, the focus of expansion (FOE) in optic flow no longer indicates heading, yet heading judgements are largely unbiased. Much emphasis has been placed on the role of extraretinal signals in compensating for the visual consequences of eye rotation. However, recent studies also support a purely visual mechanism of rotation compensation in heading-selective neurons. Computational theories support a visual compensatory strategy but require different visual depth cues. We examined the rotation tolerance of heading tuning in macaque area MSTd using two different virtual environments, a frontoparallel (2D) wall and a 3D cloud of random dots. Both environments contained rotational optic flow cues (i.e., dynamic perspective), but only the 3D cloud stimulus contained local motion parallax cues, which are required by some models. The 3D cloud environment did not enhance the rotation tolerance of heading tuning for individual MSTd neurons, nor the accuracy of heading estimates decoded from population activity, suggesting a key role for dynamic perspective cues. We also added vestibular translation signals to optic flow, to test whether rotation tolerance is enhanced by non-visual cues to heading. We found no benefit of vestibular signals overall, but a modest effect for some neurons with significant vestibular heading tuning. We also found that neurons with more rotation-tolerant heading tuning are typically less selective to pure visual rotation cues. Together, our findings help to clarify the types of information that are used to construct heading representations that are tolerant to eye rotations.
8
Rideaux R, Michael E, Welchman AE. Adaptation to Binocular Anticorrelation Results in Increased Neural Excitability. J Cogn Neurosci 2019; 32:100-110. [PMID: 31560264] [DOI: 10.1162/jocn_a_01471]
Abstract
Throughout the brain, information from individual sources converges onto higher order neurons. For example, information from the two eyes first converges in binocular neurons in area V1. Some neurons are tuned to similarities between sources of information, which makes intuitive sense in a system striving to match multiple sensory signals to a single external cause; that is, to establish causal inference. However, there are also neurons that are tuned to dissimilar information. In particular, some binocular neurons respond maximally to a dark feature in one eye and a light feature in the other. Despite compelling neurophysiological and behavioral evidence supporting the existence of these neurons [Katyal, S., Vergeer, M., He, S., He, B., & Engel, S. A. Conflict-sensitive neurons gate interocular suppression in human visual cortex. Scientific Reports, 8, 1239, 2018; Kingdom, F. A. A., Jennings, B. J., & Georgeson, M. A. Adaptation to interocular difference. Journal of Vision, 18, 9, 2018; Janssen, P., Vogels, R., Liu, Y., & Orban, G. A. At least at the level of inferior temporal cortex, the stereo correspondence problem is solved. Neuron, 37, 693-701, 2003; Tsao, D. Y., Conway, B. R., & Livingstone, M. S. Receptive fields of disparity-tuned simple cells in macaque V1. Neuron, 38, 103-114, 2003; Cumming, B. G., & Parker, A. J. Responses of primary visual cortical neurons to binocular disparity without depth perception. Nature, 389, 280-283, 1997], their function has remained opaque. To determine how neural mechanisms tuned to dissimilarities support perception, here we use electroencephalography to measure human observers' steady-state visually evoked potentials in response to change in depth after prolonged viewing of anticorrelated and correlated random-dot stereograms (RDS). We find that adaptation to anticorrelated RDS results in larger steady-state visually evoked potentials, whereas adaptation to correlated RDS has no effect. These results are consistent with recent theoretical work suggesting "what not" neurons play a suppressive role in supporting stereopsis [Goncalves, N. R., & Welchman, A. E. "What not" detectors help the brain see in depth. Current Biology, 27, 1403-1412, 2017]; that is, selective adaptation of neurons tuned to binocular mismatches reduces suppression, resulting in increased neural excitability.
9
Retinal Stabilization Reveals Limited Influence of Extraretinal Signals on Heading Tuning in the Medial Superior Temporal Area. J Neurosci 2019; 39:8064-8078. [PMID: 31488610] [DOI: 10.1523/jneurosci.0388-19.2019]
Abstract
Heading perception in primates depends heavily on visual optic-flow cues. Yet during self-motion, heading percepts remain stable, even though smooth-pursuit eye movements often distort optic flow. According to theoretical work, self-motion can be represented accurately by compensating for these distortions in two ways: via retinal mechanisms or via extraretinal efference-copy signals, which predict the sensory consequences of movement. Psychophysical evidence strongly supports the efference-copy hypothesis, but physiological evidence remains inconclusive. Neurons that signal the true heading direction during pursuit are found in visual areas of monkey cortex, including the dorsal medial superior temporal area (MSTd). Here we measured heading tuning in MSTd using a novel stimulus paradigm, in which we stabilize the optic-flow stimulus on the retina during pursuit. This approach isolates the effects on neuronal heading preferences of extraretinal signals, which remain active while the retinal stimulus is prevented from changing. Our results from 3 female monkeys demonstrate a significant but small influence of extraretinal signals on the preferred heading directions of MSTd neurons. Under our stimulus conditions, which are rich in retinal cues, we find that retinal mechanisms dominate physiological corrections for pursuit eye movements, suggesting that extraretinal cues, such as predictive efference-copy mechanisms, have a limited role under naturalistic conditions.

Significance Statement: Sensory systems discount stimulation caused by an animal's own behavior. For example, eye movements cause irrelevant retinal signals that could interfere with motion perception. The visual system compensates for such self-generated motion, but how this happens is unclear. Two theoretical possibilities are a purely visual calculation or one using an internal signal of eye movements to compensate for their effects. The latter can be isolated by experimentally stabilizing the image on a moving retina, but this approach has never been adopted to study motion physiology. Using this method, we find that extraretinal signals have little influence on activity in visual cortex, whereas visually based corrections for ongoing eye movements have stronger effects and are likely most important under real-world conditions.
10
Rideaux R, Welchman AE. Proscription supports robust perceptual integration by suppression in human visual cortex. Nat Commun 2018; 9:1502. [PMID: 29666361] [PMCID: PMC5904115] [DOI: 10.1038/s41467-018-03400-y]
Abstract
Perception relies on integrating information within and between the senses, but how does the brain decide which pieces of information should be integrated and which kept separate? Here we demonstrate how proscription can be used to solve this problem: certain neurons respond best to unrealistic combinations of features to provide ‘what not’ information that drives suppression of unlikely perceptual interpretations. First, we present a model that captures both improved perception when signals are consistent (and thus should be integrated) and robust estimation when signals are conflicting. Second, we test for signatures of proscription in the human brain. We show that concentrations of inhibitory neurotransmitter GABA in a brain region intricately involved in integrating cues (V3B/KO) correlate with robust integration. Finally, we show that perturbing excitation/inhibition impairs integration. These results highlight the role of proscription in robust perception and demonstrate the functional purpose of ‘what not’ sensors in supporting sensory estimation.

Perception relies on information integration but it is unclear how the brain decides which information to integrate and which to keep separate. Here, the authors develop and test a biologically inspired model of cue-integration, implicating a key role for GABAergic proscription in robust perception.
Affiliation(s)
- Reuben Rideaux: Department of Psychology, University of Cambridge, Downing Street, Cambridge, CB2 3EB, UK
- Andrew E Welchman: Department of Psychology, University of Cambridge, Downing Street, Cambridge, CB2 3EB, UK
11
Laquitaine S, Gardner JL. A Switching Observer for Human Perceptual Estimation. Neuron 2017; 97:462-474.e6. [PMID: 29290551] [DOI: 10.1016/j.neuron.2017.12.011]
Abstract
Human perceptual inference has been fruitfully characterized as a normative Bayesian process in which sensory evidence and priors are multiplicatively combined to form posteriors from which sensory estimates can be optimally read out. We tested whether this basic Bayesian framework could explain human subjects' behavior in two estimation tasks in which we varied the strength of sensory evidence (motion coherence or contrast) and priors (set of directions or orientations). We found that, despite excellent agreement of the estimates' mean and variability with a Basic Bayesian observer model, the estimate distributions were bimodal, with unpredicted modes near the prior and the likelihood. We developed a model that switched between prior and sensory evidence rather than integrating the two, which better explained the data than the Basic and several other Bayesian observers. Our data suggest that humans can approximate Bayesian optimality with a switching heuristic that forgoes multiplicative combination of priors and likelihoods.
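The contrast between multiplicative combination and switching can be sketched in a few lines. The Gaussian approximation and the reliability-based switching probability below are assumptions chosen to mirror the description above, not the authors' fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)
prior_mean, prior_sd = 0.0, 10.0   # prior over direction (deg)
likelihood_sd = 20.0               # sensory noise, e.g., low coherence

w_prior = prior_sd**-2 / (prior_sd**-2 + likelihood_sd**-2)

def bayes_estimate(obs):
    # Multiplicative (precision-weighted) prior-likelihood combination.
    return w_prior * prior_mean + (1 - w_prior) * obs

def switching_estimate(obs):
    # Commit to prior or evidence with probability set by their reliabilities.
    return prior_mean if rng.random() < w_prior else obs

obs = 30.0 + rng.normal(0, likelihood_sd, size=5000)   # true direction 30 deg
bayes = bayes_estimate(obs)
switch = np.array([switching_estimate(o) for o in obs])

# Means are similar, but the switching observer's estimates are bimodal,
# with one mode at the prior and one at the sensory observations.
print(f"Bayes mean {bayes.mean():.1f}, switching mean {switch.mean():.1f}")
```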
Affiliation(s)
- Steeve Laquitaine: Department of Psychology, Stanford University, Stanford, CA 94305, USA; Laboratory for Human Systems Neuroscience, RIKEN Brain Science Institute, Wako-shi, Saitama 351-0198, Japan
- Justin L Gardner: Department of Psychology, Stanford University, Stanford, CA 94305, USA; Laboratory for Human Systems Neuroscience, RIKEN Brain Science Institute, Wako-shi, Saitama 351-0198, Japan
12
The Primary Role of Flow Processing in the Identification of Scene-Relative Object Movement. J Neurosci 2017; 38:1737-1743. [PMID: 29229707] [PMCID: PMC5815455] [DOI: 10.1523/jneurosci.3530-16.2017]
Abstract
Retinal image motion could be due to the movement of the observer through space or an object relative to the scene. Optic flow, form, and change of position cues all provide information that could be used to separate out retinal motion due to object movement from retinal motion due to observer movement. In Experiment 1, we used a minimal display to examine the contribution of optic flow and form cues. Human participants indicated the direction of movement of a probe object presented against a background of radially moving pairs of dots. By independently controlling the orientation of each dot pair, we were able to put flow cues to self-movement direction (the point from which all the motion radiated) and form cues to self-movement direction (the point toward which all the dot pairs were oriented) in conflict. We found that only flow cues influenced perceived probe movement. In Experiment 2, we switched to a rich stereo display composed of 3D objects to examine the contribution of flow and position cues. We moved the scene objects to simulate a lateral translation and counter-rotation of gaze. By changing the polarity of the scene objects (from light to dark and vice versa) between frames, we placed flow cues to self-movement direction in opposition to change of position cues. We found that again flow cues dominated the perceived probe movement relative to the scene. Together, these experiments indicate the neural network that processes optic flow has a primary role in the identification of scene-relative object movement.

Significance Statement: Motion of an object in the retinal image indicates relative movement between the observer and the object, but it does not indicate its cause: movement of an object in the scene; movement of the observer; or both. To isolate retinal motion due to movement of a scene object, the brain must parse out the retinal motion due to movement of the eye (“flow parsing”). Optic flow, form, and position cues all have potential roles in this process. We pitted the cues against each other and assessed their influence. We found that flow parsing relies on optic flow alone. These results indicate the primary role of the neural network that processes optic flow in the identification of scene-relative object movement.
13
Kim HR, Angelaki DE, DeAngelis GC. The neural basis of depth perception from motion parallax. Philos Trans R Soc Lond B Biol Sci 2017; 371:rstb.2015.0256. [PMID: 27269599] [DOI: 10.1098/rstb.2015.0256]
Abstract
In addition to depth cues afforded by binocular vision, the brain processes relative motion signals to perceive depth. When an observer translates relative to their visual environment, the relative motion of objects at different distances (motion parallax) provides a powerful cue to three-dimensional scene structure. Although perception of depth based on motion parallax has been studied extensively in humans, relatively little is known regarding the neural basis of this visual capability. We review recent advances in elucidating the neural mechanisms for representing depth-sign (near versus far) from motion parallax. We examine a potential neural substrate in the middle temporal visual area for depth perception based on motion parallax, and we explore the nature of the signals that provide critical inputs for disambiguating depth-sign. This article is part of the themed issue 'Vision in our three-dimensional world'.
Affiliation(s)
- HyungGoo R Kim: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, NY 14627, USA
- Dora E Angelaki: Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA; Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA
- Gregory C DeAngelis: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, NY 14627, USA
14
Gain Modulation as a Mechanism for Coding Depth from Motion Parallax in Macaque Area MT. J Neurosci 2017; 37:8180-8197. [PMID: 28739582] [DOI: 10.1523/jneurosci.0393-17.2017]
Abstract
Observer translation produces differential image motion between objects that are located at different distances from the observer's point of fixation [motion parallax (MP)]. However, MP can be ambiguous with respect to depth sign (near vs far), and this ambiguity can be resolved by combining retinal image motion with signals regarding eye movement relative to the scene. We have previously demonstrated that both extra-retinal and visual signals related to smooth eye movements can modulate the responses of neurons in area MT of macaque monkeys, and that these modulations generate neural selectivity for depth sign. However, the neural mechanisms that govern this selectivity have remained unclear. In this study, we analyze responses of MT neurons as a function of both retinal velocity and direction of eye movement, and we show that smooth eye movements modulate MT responses in a systematic, temporally precise, and directionally specific manner to generate depth-sign selectivity. We demonstrate that depth-sign selectivity is primarily generated by multiplicative modulations of the response gain of MT neurons. Through simulations, we further demonstrate that depth can be estimated reasonably well by a linear decoding of a population of MT neurons with response gains that depend on eye velocity. Together, our findings provide the first mechanistic description of how visual cortical neurons signal depth from MP.

Significance Statement: Motion parallax is a monocular cue to depth that commonly arises during observer translation. To compute from motion parallax whether an object appears nearer or farther than the point of fixation requires combining retinal image motion with signals related to eye rotation, but the neurobiological mechanisms have remained unclear. This study provides the first mechanistic account of how this interaction takes place in the responses of cortical neurons. Specifically, we show that smooth eye movements modulate the gain of responses of neurons in area MT in a directionally specific manner to generate selectivity for depth sign from motion parallax. We also show, through simulations, that depth could be estimated from a population of such gain-modulated neurons.
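A minimal sketch of the gain-modulation mechanism: identical retinal motion produces different responses depending on eye velocity because a multiplicative gain term depends on the eye movement. The tuning shape, gain slope, and the sign convention mapping eye velocity to near/far are placeholder assumptions, not the fitted model from the paper.

```python
import numpy as np

def mt_response(retinal_vel, eye_vel, pref_vel=4.0, sigma=2.0, gain_slope=0.2):
    """Toy MT unit: Gaussian speed tuning scaled by an eye-velocity-dependent
    multiplicative gain. All parameter values are placeholders."""
    tuning = np.exp(-0.5 * ((retinal_vel - pref_vel) / sigma) ** 2)
    gain = 1.0 + gain_slope * eye_vel      # multiplicative gain modulation
    return gain * tuning

retinal_vel = 4.0   # identical retinal motion in both conditions
for eye_vel, depth_sign in [(+3.0, "near"), (-3.0, "far")]:
    r = mt_response(retinal_vel, eye_vel)
    print(f"eye velocity {eye_vel:+.1f} deg/s ({depth_sign}): response {r:.2f}")
# The same retinal stimulus yields different firing depending on eye
# velocity, so a downstream linear decoder can read out depth sign.
```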
15
Goncalves NR, Welchman AE. "What Not" Detectors Help the Brain See in Depth. Curr Biol 2017; 27:1403-1412.e8. [PMID: 28502662] [PMCID: PMC5457481] [DOI: 10.1016/j.cub.2017.03.074]
Abstract
Binocular stereopsis is one of the primary cues for three-dimensional (3D) vision in species ranging from insects to primates. Understanding how the brain extracts depth from two different retinal images represents a tractable challenge in sensory neuroscience that has so far evaded full explanation. Central to current thinking is the idea that the brain needs to identify matching features in the two retinal images (i.e., solving the “stereoscopic correspondence problem”) so that the depth of objects in the world can be triangulated. Although intuitive, this approach fails to account for key physiological and perceptual observations. We show that formulating the problem to identify “correct matches” is suboptimal and propose an alternative, based on optimal information encoding, that mixes disparity detection with “proscription”: exploiting dissimilar features to provide evidence against unlikely interpretations. We demonstrate the role of these “what not” responses in a neural network optimized to extract depth in natural images. The network combines information for and against the likely depth structure of the viewed scene, naturally reproducing key characteristics of both neural responses and perceptual interpretations. We capture the encoding and readout computations of the network in simple analytical form and derive a binocular likelihood model that provides a unified account of long-standing puzzles in 3D vision at the physiological and perceptual levels. We suggest that marrying detection with proscription provides an effective coding strategy for sensory estimation that may be useful for diverse feature domains (e.g., motion) and multisensory integration.

Highlights: the brain uses “what not” detectors to facilitate 3D vision; binocular mismatches are used to drive suppression of incompatible depths; proscription accounts for depth perception without binocular correspondence; a simple analytical model captures perceptual and neural responses.
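One way to see what "what not" evidence adds is a toy disparity readout in which negative binocular correlation counts against an interpretation instead of being ignored. The correlation detector and the weighting below are illustrative assumptions, not the paper's binocular likelihood model.

```python
import numpy as np

rng = np.random.default_rng(0)

def match_evidence(left, right, disparity):
    # Normalized binocular correlation at a candidate disparity (toy detector).
    return float(np.corrcoef(left, np.roll(right, disparity))[0, 1])

left = rng.normal(size=512)
right_corr = np.roll(left, 3) + rng.normal(0, 0.3, 512)    # correlated RDS
right_anti = -np.roll(left, 3) + rng.normal(0, 0.3, 512)   # anticorrelated RDS

disparities = np.arange(-8, 9)
for name, right in [("correlated", right_corr), ("anticorrelated", right_anti)]:
    ev = np.array([match_evidence(left, right, d) for d in disparities])
    # Proscription: negative correlation is evidence AGAINST a disparity,
    # weighted more strongly than positive evidence counts for it.
    score = np.where(ev > 0, ev, 2.0 * ev)
    best = disparities[np.argmax(score)]
    print(f"{name}: peak score {score.max():+.2f} at disparity {best}")
# The anticorrelated stereogram yields no strongly supported disparity,
# consistent with the lack of a stable depth percept for such stimuli.
```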
Affiliation(s)
- Nuno R Goncalves: Department of Psychology, University of Cambridge, Downing Street, Cambridge CB2 3EB, UK
- Andrew E Welchman: Department of Psychology, University of Cambridge, Downing Street, Cambridge CB2 3EB, UK
16
Wohlgemuth MJ, Kothari NB, Moss CF. Action Enhances Acoustic Cues for 3-D Target Localization by Echolocating Bats. PLoS Biol 2016; 14:e1002544. [PMID: 27608186] [PMCID: PMC5015854] [DOI: 10.1371/journal.pbio.1002544]
Abstract
Under natural conditions, animals encounter a barrage of sensory information from which they must select and interpret biologically relevant signals. Active sensing can facilitate this process by engaging motor systems in the sampling of sensory information. The echolocating bat serves as an excellent model to investigate the coupling between action and sensing because it adaptively controls both the acoustic signals used to probe the environment and movements to receive echoes at the auditory periphery. We report here that the echolocating bat controls the features of its sonar vocalizations in tandem with the positioning of the outer ears to maximize acoustic cues for target detection and localization. The bat’s adaptive control of sonar vocalizations and ear positioning occurs on a millisecond timescale to capture spatial information from arriving echoes, as well as on a longer timescale to track target movement. Our results demonstrate that purposeful control over sonar sound production and reception can serve to improve acoustic cues for localization tasks. This finding also highlights the general importance of movement to sensory processing across animal species. Finally, our discoveries point to important parallels between spatial perception by echolocation and vision.

As an echolocating bat tracks a moving target, it produces head waggles and adjusts the separation of the tips of its ears to enhance cues for target detection and localization. These findings suggest parallels in active sensing between echolocation and vision.

As animals operate in the natural environment, they must detect and process relevant sensory information embedded in complex and noisy signals. One strategy to overcome this challenge is to use active sensing or behavioral adjustments to extract sensory information from a selected region of the environment. We studied one of nature’s champions in auditory active sensing—the echolocating bat—to understand how this animal extracts task-relevant acoustic cues to detect and track a moving target. The bat produces high-frequency vocalizations and processes information carried by returning echoes to navigate and catch prey. This animal serves as an excellent model of active sensing because both sonar signal transmission and echo reception are under the animal’s active control. We used high-speed stereo video images of the bat’s head and ear movements, along with synchronized audio recordings, to study how the bat coordinates adaptive motor behaviors when detecting and tracking moving prey. We found that the bat synchronizes changes in sonar vocal production with changes in the movements of the head and ears to enhance acoustic cues for target detection and localization.
Affiliation(s)
- Melville J. Wohlgemuth: Department of Psychology and Institute for Systems Research, Program in Neuroscience and Cognitive Science, University of Maryland, College Park, Maryland, United States of America; Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, Maryland, United States of America
- Ninad B. Kothari: Department of Psychology and Institute for Systems Research, Program in Neuroscience and Cognitive Science, University of Maryland, College Park, Maryland, United States of America; Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, Maryland, United States of America
- Cynthia F. Moss: Department of Psychology and Institute for Systems Research, Program in Neuroscience and Cognitive Science, University of Maryland, College Park, Maryland, United States of America; Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, Maryland, United States of America
17
Pinotsis DA, Perry G, Litvak V, Singh KD, Friston KJ. Intersubject variability and induced gamma in the visual cortex: DCM with empirical Bayes and neural fields. Hum Brain Mapp 2016; 37:4597-4614. [PMID: 27593199] [PMCID: PMC5111616] [DOI: 10.1002/hbm.23331]
Abstract
This article describes the first application of a generic (empirical) Bayesian analysis of between-subject effects in the dynamic causal modeling (DCM) of electrophysiological (MEG) data. It shows that (i) non-invasive (MEG) data can be used to characterize subject-specific differences in cortical microcircuitry and (ii) presents a validation of DCM with neural fields that exploits intersubject variability in gamma oscillations. We find that intersubject variability in visually induced gamma responses reflects changes in the excitation-inhibition balance in a canonical cortical circuit. Crucially, this variability can be explained by subject-specific differences in intrinsic connections to and from inhibitory interneurons that form a pyramidal-interneuron gamma network. Our approach uses Bayesian model reduction to evaluate the evidence for (large sets of) nested models, and to optimize the corresponding connectivity estimates at the within- and between-subject level. We also consider Bayesian cross-validation to obtain predictive estimates for gamma-response phenotypes, using a leave-one-out procedure.
Affiliation(s)
- Dimitris A Pinotsis: The Picower Institute for Learning & Memory and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts; The Wellcome Trust Centre for Neuroimaging, University College London, Queen Square, London, WC1N 3BG
- Gavin Perry: Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Park Place, Cardiff, Wales, CF10 3AT, United Kingdom
- Vladimir Litvak: The Wellcome Trust Centre for Neuroimaging, University College London, Queen Square, London, WC1N 3BG
- Krish D Singh: Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Park Place, Cardiff, Wales, CF10 3AT, United Kingdom
- Karl J Friston: The Wellcome Trust Centre for Neuroimaging, University College London, Queen Square, London, WC1N 3BG
18
3D Visual Response Properties of MSTd Emerge from an Efficient, Sparse Population Code. J Neurosci 2016; 36:8399-8415. [PMID: 27511012] [DOI: 10.1523/jneurosci.0396-16.2016]
Abstract
Neurons in the dorsal subregion of the medial superior temporal (MSTd) area of the macaque respond to large, complex patterns of retinal flow, implying a role in the analysis of self-motion. Some neurons are selective for the expanding radial motion that occurs as an observer moves through the environment ("heading"), and computational models can account for this finding. However, ample evidence suggests that MSTd neurons exhibit a continuum of visual response selectivity to large-field motion stimuli. Furthermore, the underlying computational principles by which these response properties are derived remain poorly understood. Here we describe a computational model of macaque MSTd based on the hypothesis that neurons in MSTd efficiently encode the continuum of large-field retinal flow patterns on the basis of inputs received from neurons in MT with receptive fields that resemble basis vectors recovered with non-negative matrix factorization. These assumptions are sufficient to quantitatively simulate neurophysiological response properties of MSTd cells, such as 3D translation and rotation selectivity, suggesting that these properties might simply be a byproduct of MSTd neurons performing dimensionality reduction on their inputs. At the population level, model MSTd accurately predicts eye velocity and heading using a sparse distributed code, consistent with the idea that biological MSTd might be well equipped to efficiently encode various self-motion variables. The present work aims to add some structure to the often contradictory findings about macaque MSTd, and offers a biologically plausible account of a wide range of visual response properties ranging from single-unit selectivity to population statistics.

Significance Statement: Using a dimensionality reduction technique known as non-negative matrix factorization, we found that a variety of medial superior temporal (MSTd) neural response properties could be derived from MT-like input features. The responses that emerge from this technique, such as 3D translation and rotation selectivity, spiral tuning, and heading selectivity, can account for a number of empirical results. These findings (1) provide a further step toward a scientific understanding of the often nonintuitive response properties of MSTd neurons; (2) suggest that response properties, such as complex motion tuning and heading selectivity, might simply be a byproduct of MSTd neurons performing dimensionality reduction on their inputs; and (3) imply that motion perception in the cortex is consistent with ideas from the efficient-coding and free-energy principles.
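The dimensionality-reduction step at the core of the model can be sketched with scikit-learn's NMF. The "MT" responses below are random non-negative stand-ins (the paper derives them from a model of MT), and the component count is arbitrary.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)

# Hypothetical MT-like population responses: 720 flow stimuli x 400 units,
# non-negative firing rates standing in for model MT afferents.
mt_responses = rng.gamma(shape=2.0, scale=5.0, size=(720, 400))

# Non-negative matrix factorization: activations per stimulus (W) times
# basis "MSTd-like" receptive fields over the MT inputs (H).
model = NMF(n_components=64, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(mt_responses)   # sparse activations per stimulus
H = model.components_                   # basis vectors over MT afferents

reconstruction = W @ H
err = np.linalg.norm(mt_responses - reconstruction) / np.linalg.norm(mt_responses)
print(f"relative reconstruction error: {err:.3f}")
```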
19
Kim HR, Pitkow X, Angelaki DE, DeAngelis GC. A simple approach to ignoring irrelevant variables by population decoding based on multisensory neurons. J Neurophysiol 2016; 116:1449-1467. [PMID: 27334948] [DOI: 10.1152/jn.00005.2016]
Abstract
Sensory input reflects events that occur in the environment, but multiple events may be confounded in sensory signals. For example, under many natural viewing conditions, retinal image motion reflects some combination of self-motion and movement of objects in the world. To estimate one stimulus event and ignore others, the brain can perform marginalization operations, but the neural bases of these operations are poorly understood. Using computational modeling, we examine how multisensory signals may be processed to estimate the direction of self-motion (i.e., heading) and to marginalize out effects of object motion. Multisensory neurons represent heading based on both visual and vestibular inputs and come in two basic types: "congruent" and "opposite" cells. Congruent cells have matched heading tuning for visual and vestibular cues and have been linked to perceptual benefits of cue integration during heading discrimination. Opposite cells have mismatched visual and vestibular heading preferences and are ill-suited for cue integration. We show that decoding a mixed population of congruent and opposite cells substantially reduces errors in heading estimation caused by object motion. In addition, we present a general formulation of an optimal linear decoding scheme that approximates marginalization and can be implemented biologically by simple reinforcement learning mechanisms. We also show that neural response correlations induced by task-irrelevant variables may greatly exceed intrinsic noise correlations. Overall, our findings suggest a general computational strategy by which neurons with mismatched tuning for two different sensory cues may be decoded to perform marginalization operations that dissociate possible causes of sensory inputs.
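A toy version of the decoding result: congruent and opposite cells carry the visual (object-motion-contaminated) signal with opposite signs, so a linear readout trained to report heading can cancel the object-motion component. The response parameterization below is invented for illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)
n_cells, n_trials = 100, 5000
heading = rng.uniform(-30, 30, n_trials)        # relevant variable (deg)
object_motion = rng.uniform(-10, 10, n_trials)  # nuisance variable (deg/s)

# Vestibular input encodes heading only; visual input encodes heading plus
# object motion. Congruent cells (first half) weight the visual input with
# the same sign as the vestibular input; opposite cells reverse the sign.
a = rng.normal(0, 1, n_cells)
sign = np.where(np.arange(n_cells) < n_cells // 2, 1.0, -1.0)
b = sign * rng.normal(1.0, 0.2, n_cells)
responses = (np.outer(heading, a) + np.outer(heading + object_motion, b)
             + rng.normal(0, 1, (n_trials, n_cells)))

# A linear decoder trained to report heading learns weights that cancel
# the object-motion component: an approximate marginalization.
w, *_ = np.linalg.lstsq(responses, heading, rcond=None)
error = responses @ w - heading
print(f"decoding error SD: {error.std():.2f} deg")
```

Because the visual weights have mixed signs across the population, weight vectors exist that are orthogonal to the object-motion direction while retaining heading information; with only congruent cells, no such cancellation would be possible.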
Affiliation(s)
- HyungGoo R Kim: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York
- Xaq Pitkow: Department of Neuroscience, Baylor College of Medicine, Houston, Texas; Department of Electrical and Computer Engineering, Rice University, Houston, Texas
- Dora E Angelaki: Department of Neuroscience, Baylor College of Medicine, Houston, Texas; Department of Electrical and Computer Engineering, Rice University, Houston, Texas
- Gregory C DeAngelis: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York
20
Joint representation of translational and rotational components of optic flow in parietal cortex. Proc Natl Acad Sci U S A 2016; 113:5077-5082. [PMID: 27095846] [DOI: 10.1073/pnas.1604818113]
Abstract
Terrestrial navigation naturally involves translations within the horizontal plane and eye rotations about a vertical (yaw) axis to track and fixate targets of interest. Neurons in the macaque ventral intraparietal (VIP) area are known to represent heading (the direction of self-translation) from optic flow in a manner that is tolerant to rotational visual cues generated during pursuit eye movements. Previous studies have also reported that eye rotations modulate the response gain of heading tuning curves in VIP neurons. We tested the hypothesis that VIP neurons simultaneously represent both heading and horizontal (yaw) eye rotation velocity by measuring heading tuning curves for a range of rotational velocities of either real or simulated eye movements. Three findings support the hypothesis of a joint representation. First, we show that rotation velocity selectivity based on gain modulations of visual heading tuning is similar to that measured during pure rotations. Second, gain modulations of heading tuning are similar for self-generated eye rotations and visually simulated rotations, indicating that the representation of rotation velocity in VIP is multimodal, driven by both visual and extraretinal signals. Third, we show that roughly one-half of VIP neurons jointly represent heading and rotation velocity in a multiplicatively separable manner. These results provide the first evidence, to our knowledge, for a joint representation of translation direction and rotation velocity in parietal cortex and show that rotation velocity can be represented based on visual cues, even in the absence of efference copy signals.
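Multiplicative separability of a joint tuning matrix is conveniently tested with a singular value decomposition, since a perfectly separable heading-by-rotation response matrix has rank 1. Below is a sketch of that test on synthetic tuning curves; the test itself is standard, but it is an assumption that it matches the paper's exact analysis.

```python
import numpy as np

# Synthetic joint tuning: rows index heading, columns index rotation velocity.
headings = np.linspace(-np.pi, np.pi, 16)
rotations = np.linspace(-20.0, 20.0, 9)
heading_curve = np.exp(np.cos(headings - 0.5))   # 1-D heading tuning
rotation_gain = 1.0 + 0.03 * rotations           # 1-D rotation gain

joint = np.outer(heading_curve, rotation_gain)   # separable by construction
joint += np.random.default_rng(3).normal(0, 0.05, joint.shape)  # noise

# A multiplicatively separable matrix is rank 1, so the first singular
# value should capture nearly all of the variance.
s = np.linalg.svd(joint, compute_uv=False)
print(f"fraction of variance in first component: {s[0]**2 / np.sum(s**2):.3f}")
```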
21
Cao B, Mingolla E, Yazdanbakhsh A. Tuning Properties of MT and MSTd and Divisive Interactions for Eye-Movement Compensation. PLoS One 2015; 10:e0142964. [PMID: 26575648] [PMCID: PMC4648577] [DOI: 10.1371/journal.pone.0142964]
Abstract
The primate brain intelligently processes visual information from the world as the eyes move constantly. The brain must take into account visual motion induced by eye movements, so that visual information about the outside world can be recovered. Certain neurons in the dorsal part of monkey medial superior temporal area (MSTd) play an important role in integrating information about eye movements and visual motion. When a monkey tracks a moving target with its eyes, these neurons respond to visual motion as well as to smooth pursuit eye movements. Furthermore, the responses of some MSTd neurons to the motion of objects in the world are very similar during pursuit and during fixation, even though the visual information on the retina is altered by the pursuit eye movement. We call these neurons compensatory pursuit neurons. In this study we develop a computational model of MSTd compensatory pursuit neurons based on physiological data from single unit studies. Our model MSTd neurons can simulate the velocity tuning of monkey MSTd neurons. The model MSTd neurons also show the pursuit compensation property. We find that pursuit compensation can be achieved by divisive interaction between signals coding eye movements and signals coding visual motion. The model generates two implications that can be tested in future experiments: (1) compensatory pursuit neurons in MSTd should have the same direction preference for pursuit and retinal visual motion; (2) there should be non-compensatory pursuit neurons that show opposite preferred directions of pursuit and retinal visual motion.
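The divisive-interaction idea can be sketched in a toy form: a pursuit-driven signal divisively rescales the visual drive, shrinking the spurious retinal motion that pursuit adds. The rectified-linear tuning and the constant k below are illustrative assumptions, not the published model.

```python
import numpy as np

def vis_drive(retinal_vel):
    # Rectified drive of a unit preferring rightward retinal motion (toy).
    return np.maximum(retinal_vel, 0.0)

def compensated_response(world_vel, pursuit_vel, k=0.25):
    """Divisive interaction between a pursuit signal and visual motion
    (illustrative form only). Leftward pursuit adds spurious rightward
    retinal motion; a matched divisive pursuit term attenuates it."""
    retinal_vel = world_vel - pursuit_vel
    pursuit_signal = np.maximum(-pursuit_vel, 0.0)  # leftward pursuit only
    return vis_drive(retinal_vel) / (1.0 + k * pursuit_signal)

for pursuit in (0.0, -4.0):                 # fixation vs leftward pursuit
    for world in (0.0, 4.0):                # motion of the scene on screen
        raw = vis_drive(world - pursuit)
        comp = compensated_response(world, pursuit)
        print(f"pursuit {pursuit:+.0f}, world {world:+.0f}: "
              f"retinal drive {raw:.1f}, after division {comp:.1f}")
# Division leaves fixation responses untouched, attenuates the pursuit-
# induced response to a stationary scene (4.0 -> 2.0 here), and with this
# k the response to 4 deg/s world motion matches across conditions, so the
# output tracks world motion more faithfully than the raw retinal drive.
```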
Affiliation(s)
- Bo Cao: Department of Psychiatry and Behavioral Sciences, Medical School, The University of Texas Health Science Center at Houston, Houston, United States of America
- Ennio Mingolla: Department of Communication Sciences and Disorders, Northeastern University, Boston, United States of America
- Arash Yazdanbakhsh: Center for Computational Neuroscience and Neural Technology, Boston University, Boston, United States of America; Department of Psychological & Brain Sciences, Boston University, Boston, United States of America
22
Cellular evidence for efference copy in Drosophila visuomotor processing. Nat Neurosci 2015; 18:1247-1255. [PMID: 26237362] [DOI: 10.1038/nn.4083]
Abstract
Each time a locomoting fly turns, the visual image sweeps over the retina and generates a motion stimulus. Classic behavioral experiments suggested that flies use active neural-circuit mechanisms to suppress the perception of self-generated visual motion during intended turns. Direct electrophysiological evidence, however, has been lacking. We found that visual neurons in Drosophila receive motor-related inputs during rapid flight turns. These inputs arrived with a sign and latency appropriate for suppressing each targeted cell's visual response to the turn. Precise measurements of behavioral and neuronal response latencies supported the idea that motor-related inputs to optic flow-processing cells represent internal predictions of the expected visual drive induced by voluntary turns. Motor-related inputs to small object-selective visual neurons could reflect either proprioceptive feedback from the turn or internally generated signals. Our results in Drosophila echo the suppression of visual perception during rapid eye movements in primates, demonstrating common functional principles of sensorimotor processing across phyla.
23
Sunkara A, DeAngelis GC, Angelaki DE. Role of visual and non-visual cues in constructing a rotation-invariant representation of heading in parietal cortex. eLife 2015; 4. [PMID: 25693417] [PMCID: PMC4337725] [DOI: 10.7554/elife.04693]
Abstract
As we navigate through the world, eye and head movements add rotational velocity patterns to the retinal image. When such rotations accompany observer translation, the rotational velocity patterns must be discounted to accurately perceive heading. The conventional view holds that this computation requires efference copies of self-generated eye/head movements. Here we demonstrate that the brain implements an alternative solution in which retinal velocity patterns are themselves used to dissociate translations from rotations. These results reveal a novel role for visual cues in achieving a rotation-invariant representation of heading in the macaque ventral intraparietal area. Specifically, we show that the visual system utilizes both local motion parallax cues and global perspective distortions to estimate heading in the presence of rotations. These findings further suggest that the brain is capable of performing complex computations to infer eye movements and discount their sensory consequences based solely on visual cues.

When strolling along a path beside a busy street, we can look around without losing our stride. The things we see change as we walk forward, and our view also changes if we turn our head—for example, to look at a passing car. Nevertheless, we can still tell that we are walking in a straight line because our brain is able to compute the direction in which we are heading by discounting the visual changes caused by rotating our head or eyes. It remains unclear how the brain gets the information about head and eye movements that it would need to be able to do this. Many researchers had proposed that the brain estimates these rotations by using a copy of the neural signals that are sent to the muscles to move the eyes or head. However, it is possible that the brain can estimate head and eye rotations by directly analyzing the visual information from the eyes. One region of the brain that may contribute to this process is the ventral intraparietal area or ‘area VIP’ for short. Sunkara et al. devised an experiment that can help distinguish the effects of visual cues from copies of neural signals sent to the muscles during eye rotations. This involved training monkeys to look at a 3D display of moving dots, which gives the impression of moving through space. Sunkara et al. then measured the electrical signals in area VIP either when the monkey moved its eyes (to follow a moving target), or when the display changed to give the monkey the same visual cues as if it had rotated its eyes, when in fact it had not. Sunkara et al. found that the electrical signals recorded in area VIP when the monkey was given the illusion of rotating its eyes were similar to the signals recorded when the monkey actually rotated its eyes. This suggests that visual cues play an important role in correcting for the effects of eye rotations and correctly estimating the direction in which we are heading. Further research into the mechanisms behind this neural process could lead to new vision-based treatments for medical disorders that cause people to have balance problems. Similar research could also help to identify ways to improve navigation in automated vehicles, such as driverless cars.
Affiliation(s)
- Adhira Sunkara: Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, United States
- Gregory C DeAngelis: Department of Brain and Cognitive Sciences, University of Rochester, Rochester, United States
- Dora E Angelaki: Department of Neuroscience, Baylor College of Medicine, Houston, United States