1. Peltier NE, Anzai A, Moreno-Bote R, DeAngelis GC. A neural mechanism for optic flow parsing in macaque visual cortex. Curr Biol 2024; 34:4983-4997.e9. PMID: 39389059; PMCID: PMC11537840; DOI: 10.1016/j.cub.2024.09.030.
Abstract
For the brain to compute object motion in the world during self-motion, it must discount the global patterns of image motion (optic flow) caused by self-motion. Optic flow parsing is a proposed visual mechanism for computing object motion in the world, and studies in both humans and monkeys have demonstrated perceptual biases consistent with the operation of a flow-parsing mechanism. However, the neural basis of flow parsing remains unknown. We demonstrate, at both the individual unit and population levels, that neural activity in macaque middle temporal (MT) area is biased by peripheral optic flow in a manner that can at least partially account for perceptual biases induced by flow parsing. These effects cannot be explained by conventional surround suppression mechanisms or choice-related activity and have substantial neural latency. Together, our findings establish the first neural basis for the computation of scene-relative object motion based on flow parsing.
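The subtraction at the heart of flow parsing can be illustrated with a toy calculation; the radial flow field and all numbers below are illustrative assumptions, not the authors' stimuli or model.

```python
import numpy as np

def self_motion_flow(p, foe=np.zeros(2), gain=1.0):
    """Toy radial flow field for forward self-motion: image points
    stream away from the focus of expansion (FOE)."""
    return gain * (p - foe)

# Retinal motion of an object = its scene-relative motion plus the
# background flow produced by self-motion at the object's location.
obj_pos = np.array([2.0, 1.0])           # object position (deg)
obj_in_world = np.array([0.0, 3.0])      # scene-relative velocity (deg/s)
retinal = obj_in_world + self_motion_flow(obj_pos)

# Flow parsing: subtract the estimated self-motion flow vector at the
# object's location; what remains is the object's motion in the world.
parsed = retinal - self_motion_flow(obj_pos)
print(parsed)  # [0. 3.] -- recovers obj_in_world exactly in this toy case
```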
Affiliation(s)
- Nicole E Peltier
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
- Akiyuki Anzai
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
- Rubén Moreno-Bote
- Center for Brain and Cognition & Department of Engineering, Universitat Pompeu Fabra, Barcelona 08002, Spain; Serra Húnter Fellow Programme, Universitat Pompeu Fabra, Barcelona 08002, Spain
- Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
2. Kong L, Zeng F, Zhang Y, Li L, Chen A. The influence of form on motion signal processing in the ventral intraparietal area of macaque monkeys. Heliyon 2024; 10:e36913. PMID: 39286089; PMCID: PMC11402950; DOI: 10.1016/j.heliyon.2024.e36913.
Abstract
The visual system relies on both motion and form signals to perceive the direction of self-motion, yet how these two elements are coordinated in this process remains elusive. In the current study, we employed heading perception as a model to examine the interaction between form and motion signals. We recorded the responses of neurons in the ventral intraparietal area (VIP), an area with strong heading selectivity, to motion-only, form-only, and combined stimuli simulating self-motion. Intriguingly, VIP neurons responded to form-only cues defined by Glass patterns, although they exhibited no tuning selectivity for them. In the combined condition, introducing a small offset between form and motion cues significantly enhanced neuronal sensitivity to motion cues; with a larger offset, the enhancement was comparatively smaller. Moreover, the influence of form cues on neuronal responses to motion cues was more pronounced in the later stage (1-2 s) of stimulation than in the early stage (0-1 s), suggesting a dynamic interaction between motion and form cues over time for heading perception. In summary, our study shows that in area VIP, form information contributes to constructing an accurate perception of self-motion, adding valuable insight into how the brain integrates motion and form cues to perceive one's own movement.
Affiliation(s)
- Lingqi Kong
- Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai, 200062, China
- Fu Zeng
- Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai, 200062, China
- Yingying Zhang
- Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai, 200062, China
- Li Li
- Faculty of Arts and Science, New York University Shanghai, Shanghai, 200122, China
- New York University-East China Normal University Joint Research Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, 200062, China
- Aihua Chen
- Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai, 200062, China
- New York University-East China Normal University Joint Research Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, 200062, China
3. Egger SW, Keemink SW, Goldman MS, Britten KH. Context-dependence of deterministic and nondeterministic contributions to closed-loop steering control. bioRxiv 2024:2024.07.26.605325. Preprint. PMID: 39131368; PMCID: PMC11312469; DOI: 10.1101/2024.07.26.605325.
Abstract
In natural circumstances, sensory systems operate in a closed loop with motor output, whereby actions shape subsequent sensory experiences. A prime example of this is the sensorimotor processing required to align one's direction of travel, or heading, with one's goal, a behavior we refer to as steering. In steering, motor outputs work to eliminate errors between the direction of heading and the goal, modifying subsequent errors in the process. The closed-loop nature of the behavior makes it challenging to determine how deterministic and nondeterministic processes contribute to behavior. We overcome this by applying a nonparametric, linear kernel-based analysis to behavioral data of monkeys steering through a virtual environment in two experimental contexts. In a given context, the results were consistent with previous work that described the transformation as a second-order linear system. Classically, the parameters of such second-order models are associated with physical properties of the limb such as viscosity and stiffness that are commonly assumed to be approximately constant. By contrast, we found that the fit kernels differed strongly across tasks in these and other parameters, suggesting context-dependent changes in neural and biomechanical processes. We additionally fit residuals to a simple noise model and found that the form of the noise was highly conserved across both contexts and animals. Strikingly, the fitted noise also closely matched that found previously in a human steering task. Altogether, this work presents a kernel-based analysis that characterizes the context-dependence of deterministic and non-deterministic components of a closed-loop sensorimotor task.
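The kernel-based analysis described above can be illustrated as a least-squares fit of a causal finite-impulse-response (FIR) kernel mapping heading error to steering output. This is a generic sketch on synthetic data, not the authors' code; the kernel shape, sample size, and noise level are all assumed.

```python
import numpy as np

def fit_linear_kernel(error, steer, n_taps=50):
    """Least-squares estimate of a causal FIR kernel k such that
    steer[t] ~ sum_i k[i] * error[t - i] (a nonparametric linear fit)."""
    T = len(error)
    X = np.array([[error[t - i] for i in range(n_taps)]
                  for t in range(n_taps, T)])
    k, *_ = np.linalg.lstsq(X, steer[n_taps:], rcond=None)
    return k

# Demo on synthetic data: a damped oscillatory impulse response, the kind
# of kernel a second-order linear system would produce.
rng = np.random.default_rng(0)
lags = np.arange(50)
true_k = np.exp(-lags / 10.0) * np.sin(lags / 5.0)   # assumed kernel shape
error = rng.standard_normal(5000)                    # heading error signal
steer = np.convolve(error, true_k)[:5000] + 0.1 * rng.standard_normal(5000)
k_hat = fit_linear_kernel(error, steer)
print(np.allclose(k_hat, true_k, atol=0.05))         # True: kernel recovered
```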
Affiliation(s)
- Seth W. Egger
- Center for Neuroscience, University of California, Davis
- Sander W. Keemink
- Department of Neurobiology, Physiology and Behavior, University of California, Davis
- Mark S. Goldman
- Center for Neuroscience, University of California, Davis
- Department of Neurobiology, Physiology and Behavior, University of California, Davis
- Department of Ophthalmology and Vision Science, University of California, Davis
- Kenneth H. Britten
- Center for Neuroscience, University of California, Davis
- Department of Neurobiology, Physiology and Behavior, University of California, Davis
4. Cruz TL, Chiappe ME. Multilevel visuomotor control of locomotion in Drosophila. Curr Opin Neurobiol 2023; 82:102774. PMID: 37651855; DOI: 10.1016/j.conb.2023.102774.
Abstract
Vision is critical for the control of locomotion, but the neural mechanisms by which visuomotor circuits contribute to the movement of the body through space are not yet well understood. Locomotion engages multiple control systems, forming distinct interacting "control levels" driven by the activity of distributed and overlapping circuits. A comprehensive understanding of locomotion control therefore requires considering all control levels and their necessary coordination. Due to its small size and the wide availability of experimental tools, Drosophila has become an important model system for studying this coordination. Traditionally, studies of insect locomotion have addressed either the biomechanics and local control of limbs, or navigation and course control. However, recent developments in tracking techniques and in physiological and genetic tools in Drosophila have prompted researchers to examine multilevel control coordination in flight and walking.
Affiliation(s)
- Tomás L Cruz
- Champalimaud Research, Champalimaud Centre for the Unknown, 1400-038 Lisbon, Portugal
- M Eugenia Chiappe
- Champalimaud Research, Champalimaud Centre for the Unknown, 1400-038 Lisbon, Portugal
5. DiRisio GF, Ra Y, Qiu Y, Anzai A, DeAngelis GC. Neurons in Primate Area MSTd Signal Eye Movement Direction Inferred from Dynamic Perspective Cues in Optic Flow. J Neurosci 2023; 43:1888-1904. PMID: 36725323; PMCID: PMC10027048; DOI: 10.1523/jneurosci.1885-22.2023.
Abstract
Smooth eye movements are common during natural viewing; we frequently rotate our eyes to track moving objects or to maintain fixation on an object during self-movement. Reliable information about smooth eye movements is crucial to various neural computations, such as estimating heading from optic flow or judging depth from motion parallax. While it is well established that extraretinal signals (e.g., efference copies of motor commands) carry critical information about eye velocity, the rotational optic flow field produced by eye rotations also carries valuable information. Although previous work has shown that dynamic perspective cues in optic flow can be used in computations that require estimates of eye velocity, it has remained unclear where and how the brain processes these visual cues and how they are integrated with extraretinal signals regarding eye rotation. We examined how neurons in the dorsal region of the medial superior temporal area (MSTd) of two male rhesus monkeys represent the direction of smooth pursuit eye movements based on both visual cues (dynamic perspective) and extraretinal signals. We find that most MSTd neurons have matched preferences for the direction of eye rotation based on visual and extraretinal signals. Moreover, neural responses to combinations of these signals are well predicted by a weighted linear summation model. These findings demonstrate a neural substrate for representing the velocity of smooth eye movements based on rotational optic flow and establish area MSTd as a key node for integrating visual and extraretinal signals into a more generalized representation of smooth eye movements.
Significance Statement: We frequently rotate our eyes to smoothly track objects of interest during self-motion. Information about eye velocity is crucial for a variety of computations performed by the brain, including depth perception and heading perception. Traditionally, information about eye rotation has been thought to arise mainly from extraretinal signals, such as efference copies of motor commands. Previous work shows that eye velocity can also be inferred from rotational optic flow that accompanies smooth eye movements, but the neural origins of these visual signals about eye rotation have remained unknown. We demonstrate that macaque neurons signal the direction of smooth eye rotation based on visual signals, and that they integrate both visual and extraretinal signals regarding eye rotation in a congruent fashion.
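The weighted linear summation model mentioned here can be sketched directly: the combined response is modeled as w_v times the visual response plus w_e times the extraretinal response plus a constant, with the weights fit by least squares. All firing rates below are invented for illustration; the actual fits were per neuron across many stimulus conditions.

```python
import numpy as np

# Hypothetical mean firing rates (spikes/s) of one cell across four
# pursuit directions; every number here is an illustrative assumption.
r_visual = np.array([12.0, 20.0, 35.0, 18.0])   # dynamic perspective only
r_extra  = np.array([10.0, 25.0, 30.0, 15.0])   # extraretinal (pursuit) only
r_comb   = np.array([14.0, 28.0, 41.0, 20.0])   # both signals together

# Weighted linear summation: r_comb ~ w_v * r_visual + w_e * r_extra + c
X = np.column_stack([r_visual, r_extra, np.ones_like(r_visual)])
(w_v, w_e, c), *_ = np.linalg.lstsq(X, r_comb, rcond=None)
predicted = X @ np.array([w_v, w_e, c])  # model's combined-condition prediction
print(w_v, w_e, c)
```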
Affiliation(s)
- Grace F DiRisio
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York 14627
- Department of Neurobiology, University of Chicago, Chicago, Illinois 60637
- Yongsoo Ra
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York 14627
- Department of Neurobiology, Harvard Medical School, Boston, Massachusetts 02115
- Yinghui Qiu
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York 14627
- College of Veterinary Medicine, Cornell University, Ithaca, New York 14853-6401
- Akiyuki Anzai
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York 14627
- Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York 14627
6. Maus N, Layton OW. Estimating heading from optic flow: Comparing deep learning network and human performance. Neural Netw 2022; 154:383-396. DOI: 10.1016/j.neunet.2022.07.007.
7. Luna R, Serrano-Pedraza I, Gegenfurtner KR, Schütz AC, Souto D. Achieving visual stability during smooth pursuit eye movements: Directional and confidence judgements favor a recalibration model. Vision Res 2021; 184:58-73. PMID: 33873123; DOI: 10.1016/j.visres.2021.03.003.
Abstract
During smooth pursuit eye movements, the visual system is faced with the task of telling apart reafferent retinal motion from motion in the world. While an efference copy signal can be used to predict the amount of reafference to subtract from the image, an image-based adaptive mechanism can ensure the continued accuracy of this computation. Indeed, repeatedly exposing observers to background motion with a fixed direction relative to that of the target that is pursued leads to a shift in their point of subjective stationarity (PSS). We asked whether the effect of exposure reflects adaptation to motion contingent on pursuit direction, recalibration of a reference signal or both. A recalibration account predicts a shift in reference signal (i.e. predicted reafference), resulting in a shift of PSS, but no change in sensitivity. Results show that both directional judgements and confidence judgements about them favor a recalibration account, whereby there is an adaptive shift in the reference signal caused by the prevailing retinal motion during pursuit. We also found that the recalibration effect is specific to the exposed visual hemifield.
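The two accounts can be distinguished with a simple cumulative-Gaussian observer: recalibration shifts the PSS (the curve's midpoint) while leaving sensitivity (the slope, set by sigma) unchanged, whereas an adaptation account would also change sensitivity. A toy sketch with assumed parameter values, not the paper's fitting code:

```python
import numpy as np
from scipy.stats import norm

def p_report_motion(v_bg, pss=0.0, sigma=1.0):
    """Probability of judging the background as moving, as a function of
    background velocity v_bg (deg/s) relative to the pursuit target."""
    return norm.cdf((v_bg - pss) / sigma)

v = np.linspace(-3.0, 3.0, 13)
pre = p_report_motion(v, pss=0.0, sigma=1.0)
# Recalibration account: exposure shifts the reference signal, moving the
# PSS (here, by an assumed 0.8 deg/s) with sigma unchanged; an adaptation
# account would instead predict a change in sigma (sensitivity).
post = p_report_motion(v, pss=0.8, sigma=1.0)
```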
Affiliation(s)
- Raúl Luna
- Department of Experimental Psychology, Faculty of Psychology, Universidad Complutense de Madrid, Madrid, Spain; School of Psychology, Universidad Autónoma de Madrid, Madrid, Spain
- Ignacio Serrano-Pedraza
- Department of Experimental Psychology, Faculty of Psychology, Universidad Complutense de Madrid, Madrid, Spain
- Alexander C Schütz
- Allgemeine und Biologische Psychologie, Philipps-Universität Marburg, Marburg, Germany
- David Souto
- Neuroscience, Psychology and Behaviour, University of Leicester, Leicester, United Kingdom
8. Wild B, Treue S. Primate extrastriate cortical area MST: a gateway between sensation and cognition. J Neurophysiol 2021; 125:1851-1882. PMID: 33656951; DOI: 10.1152/jn.00384.2020.
Abstract
Primate visual cortex consists of dozens of distinct brain areas, each providing a highly specialized component to the sophisticated task of encoding the incoming sensory information and creating a representation of our visual environment that underlies our perception and action. One such area is the medial superior temporal cortex (MST), a motion-sensitive, direction-selective part of the primate visual cortex. It receives most of its input from the middle temporal (MT) area, but MST cells have larger receptive fields and respond to more complex motion patterns. The finding that MST cells are tuned for optic flow patterns has led to the suggestion that the area plays an important role in the perception of self-motion. This hypothesis has received further support from studies showing that some MST cells also respond selectively to vestibular cues. Furthermore, the area is part of a network that controls the planning and execution of smooth pursuit eye movements, and its activity is modulated by cognitive factors such as attention and working memory. This review of more than 90 studies focuses on clarifying the heterogeneous findings on MST in the macaque cortex and its putative homolog in the human cortex. From this analysis of its unique anatomical and functional position in the hierarchy of areas and processing steps in primate visual cortex, MST emerges as a gateway between perception, cognition, and action planning. Given this pivotal role, the area represents an ideal model system for studying the transition from sensation to cognition.
Affiliation(s)
- Benedict Wild
- Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany; Goettingen Graduate Center for Neurosciences, Biophysics, and Molecular Biosciences (GGNB), University of Goettingen, Goettingen, Germany
- Stefan Treue
- Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany; Faculty of Biology and Psychology, University of Goettingen, Goettingen, Germany; Leibniz-ScienceCampus Primate Cognition, Goettingen, Germany; Bernstein Center for Computational Neuroscience, Goettingen, Germany
9. Ahn MH, Park JH, Jeon H, Lee HJ, Kim HJ, Hong SK. Temporal Dynamics of Visually Induced Motion Perception and Neural Evidence of Alterations in the Motion Perception Process in an Immersive Virtual Reality Environment. Front Neurosci 2020; 14:600839. PMID: 33328873; PMCID: PMC7710904; DOI: 10.3389/fnins.2020.600839.
Abstract
Although reciprocal inhibitory vestibular interactions following visual stimulation have been understood as a sensory-reweighting mechanism that stabilizes motion perception, this hypothesis has not been thoroughly investigated with temporally resolved measurements. Recently, virtual reality technology has been implemented in various medical domains. However, exposure to virtual reality environments can cause discomfort, including nausea or headache, due to visual-vestibular conflicts. We speculated that self-motion perception could be altered by accelerating visual motion stimulation in virtual reality because of the absence of corresponding vestibular signals (a visual-vestibular sensory conflict), and that this could result in sickness. The current study investigated spatio-temporal profiles of motion perception using immersive virtual reality. We demonstrated alterations in neural dynamics under the sensory mismatch condition (accelerating visual motion stimulation) and in participants with high levels of sickness after driving simulation. Additionally, an event-related potential study revealed that the high-sickness group presented higher P3 amplitudes under sensory mismatch conditions, suggesting a substantial demand on cognitive resources for motion perception under sensory mismatch.
Affiliation(s)
- Min-Hee Ahn
- Department of Otorhinolaryngology-Head and Neck Surgery, Hallym University College of Medicine, Anyang, South Korea; Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, South Korea
- Jeong Hye Park
- Department of Otorhinolaryngology-Head and Neck Surgery, Hallym University College of Medicine, Anyang, South Korea
- Hanjae Jeon
- Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, South Korea
- Hyo-Jeong Lee
- Department of Otorhinolaryngology-Head and Neck Surgery, Hallym University College of Medicine, Anyang, South Korea; Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, South Korea
- Hyung-Jong Kim
- Department of Otorhinolaryngology-Head and Neck Surgery, Hallym University College of Medicine, Anyang, South Korea
- Sung Kwang Hong
- Department of Otorhinolaryngology-Head and Neck Surgery, Hallym University College of Medicine, Anyang, South Korea; Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, South Korea
10. Wang Y, Wang X, Sang L, Zhang C, Zhao BT, Mo JJ, Hu WH, Shao XQ, Wang F, Ai L, Zhang JG, Zhang K. Network of ictal head version in mesial temporal lobe epilepsy. Brain Behav 2020; 10:e01820. PMID: 32857475; PMCID: PMC7667364; DOI: 10.1002/brb3.1820.
Abstract
OBJECTIVE: Ictal head version is a common clinical manifestation of mesial temporal lobe epilepsy (MTLE). Nevertheless, the location of the symptomatogenic zone and the network involved in head version remain unclear. We address these questions by analyzing interictal 18F-FDG-PET imaging and ictal stereo-electroencephalography (SEEG) recordings in MTLE patients.
METHODS: Fifty-eight patients with MTLE were retrospectively analyzed. The patients were divided into version (+) and version (-) groups according to the occurrence of versive head movements. The interictal PET data were compared among 18 healthy controls and the two patient groups. Furthermore, epileptogenicity index (EI) values and correlations with the onset time of head version were analyzed with SEEG.
RESULTS: Intergroup comparisons showed PET differences in the middle temporal neocortex (MTN), posterior temporal neocortex (PTN), supramarginal gyrus (SMG), and inferior parietal lobe (IPL). The EI values in the SMG, MTN, and PTN were significantly higher in the version (+) group than in the version (-) group. A linear relationship was observed between head-version onset and ipsilateral onset time in the SMG, orbitofrontal cortex (OFC), MTN, and PTN, and between EI, the difference between version onset and temporal neocortex onset, and the y-axis MNI coordinate.
CONCLUSION: The generation of ictal head version reflects the propagation of ictal discharges to the intraparietal sulcus (IPS) area. The version network originates in mesial temporal lobe structures, passes through the MTN, PTN, and SMG, and likely ends at the IPS.
Affiliation(s)
- Yao Wang
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Xiu Wang
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China; Beijing Key Laboratory of Neurostimulation, Beijing, China
- Lin Sang
- Epilepsy Center, Peking University First Hospital Fengtai Hospital, Beijing, China
- Chao Zhang
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China; Beijing Key Laboratory of Neurostimulation, Beijing, China
- Bao-Tian Zhao
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Jia-Jie Mo
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Wen-Han Hu
- Beijing Key Laboratory of Neurostimulation, Beijing, China; Stereotactic and Functional Neurosurgery Laboratory, Beijing Neurosurgical Institute, Capital Medical University, Beijing, China
- Xiao-Qiu Shao
- Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Feng Wang
- Department of Neurosurgery, General Hospital of Ningxia Medical University, Ningxia, China
- Lin Ai
- Department of Nuclear Medicine, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Jian-Guo Zhang
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China; Beijing Key Laboratory of Neurostimulation, Beijing, China; Stereotactic and Functional Neurosurgery Laboratory, Beijing Neurosurgical Institute, Capital Medical University, Beijing, China
- Kai Zhang
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China; Beijing Key Laboratory of Neurostimulation, Beijing, China
11. The Effects of Depth Cues and Vestibular Translation Signals on the Rotation Tolerance of Heading Tuning in Macaque Area MSTd. eNeuro 2020; 7:ENEURO.0259-20.2020. PMID: 33127626; PMCID: PMC7688306; DOI: 10.1523/eneuro.0259-20.2020.
Abstract
When the eyes rotate during translational self-motion, the focus of expansion (FOE) in optic flow no longer indicates heading, yet heading judgements are largely unbiased. Much emphasis has been placed on the role of extraretinal signals in compensating for the visual consequences of eye rotation. However, recent studies also support a purely visual mechanism of rotation compensation in heading-selective neurons. Computational theories support a visual compensatory strategy but require different visual depth cues. We examined the rotation tolerance of heading tuning in macaque area MSTd using two different virtual environments, a frontoparallel (2D) wall and a 3D cloud of random dots. Both environments contained rotational optic flow cues (i.e., dynamic perspective), but only the 3D cloud stimulus contained local motion parallax cues, which are required by some models. The 3D cloud environment did not enhance the rotation tolerance of heading tuning for individual MSTd neurons, nor the accuracy of heading estimates decoded from population activity, suggesting a key role for dynamic perspective cues. We also added vestibular translation signals to optic flow, to test whether rotation tolerance is enhanced by non-visual cues to heading. We found no benefit of vestibular signals overall, but a modest effect for some neurons with significant vestibular heading tuning. We also find that neurons with more rotation tolerant heading tuning typically are less selective to pure visual rotation cues. Together, our findings help to clarify the types of information that are used to construct heading representations that are tolerant to eye rotations.
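The retinal flow decomposition at issue here follows the standard pinhole motion-field equations, in which only the translational term is scaled by inverse depth: a frontoparallel wall therefore yields dynamic perspective (the rotational terms) but no motion parallax, whereas a 3D cloud yields both. A sketch under common sign conventions (these vary across texts); the demo values are arbitrary.

```python
import numpy as np

def motion_field(x, y, Z, T, omega, f=1.0):
    """Image velocity (u, v) at image point (x, y) for a scene point at
    depth Z, under observer translation T = (Tx, Ty, Tz) and eye/head
    rotation omega = (wx, wy, wz); f is the focal length."""
    Tx, Ty, Tz = T
    wx, wy, wz = omega
    # Translational component: scaled by 1/Z, so it carries depth
    # (motion parallax) information.
    u_t = (-f * Tx + x * Tz) / Z
    v_t = (-f * Ty + y * Tz) / Z
    # Rotational component: independent of depth (pure dynamic
    # perspective); identical for a wall and a 3D cloud.
    u_r = (x * y / f) * wx - (f + x**2 / f) * wy + y * wz
    v_r = (f + y**2 / f) * wx - (x * y / f) * wy - x * wz
    return u_t + u_r, v_t + v_r

# Same image location, same rotation, two depths: the vectors differ
# only through the translational (depth-dependent) term.
print(motion_field(0.5, 0.2, Z=1.0, T=(0, 0, 1), omega=(0, 0.1, 0)))
print(motion_field(0.5, 0.2, Z=5.0, T=(0, 0, 1), omega=(0, 0.1, 0)))
```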
12.
Abstract
During self-motion, an independently moving object generates retinal motion that is the vector sum of its world-relative motion and the optic flow caused by the observer's self-motion. A hypothesized mechanism for the computation of an object's world-relative motion is flow parsing, in which the optic flow field due to self-motion is globally subtracted from the retinal flow field. This subtraction generates a bias in perceived object direction (in retinal coordinates) away from the optic flow vector at the object's location. Despite psychophysical evidence for flow parsing in humans, the neural mechanisms underlying the process are unknown. To build the framework for investigation of the neural basis of flow parsing, we trained macaque monkeys to discriminate the direction of a moving object in the presence of optic flow simulating self-motion. Like humans, monkeys showed biases in object direction perception consistent with subtraction of background optic flow attributable to self-motion. The size of perceptual biases generally depended on the magnitude of the expected optic flow vector at the location of the object, which was contingent on object position and self-motion velocity. There was a modest effect of an object's depth on flow-parsing biases, which reached significance in only one of two subjects. Adding vestibular self-motion signals to optic flow facilitated flow parsing, increasing biases in direction perception. Our findings indicate that monkeys exhibit perceptual hallmarks of flow parsing, setting the stage for the examination of the neural mechanisms underlying this phenomenon.
Affiliation(s)
- Nicole E Peltier
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY, USA
- Dora E Angelaki
- Center for Neural Science, New York University, New York, NY, USA
- Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY, USA
13. Kuang S, Deng H, Zhang T. Adaptive heading performance during self-motion perception. Psych J 2019; 9:295-305. PMID: 31814320; DOI: 10.1002/pchj.330.
Abstract
Previous studies have documented that the perception of self-motion direction can be extracted from the patterns of image motion on the retina (also termed optic flow). Self-motion perception remains stable even when the optic-flow information is distorted by concurrent gaze shifts from body/eye rotations. This has been interpreted as evidence that extraretinal signals (efference copies of eye/body movements) are involved in compensating for the retinal distortions. Here, we tested an alternative hypothesis to this extraretinal interpretation: that accurate self-motion perception can be achieved with a purely optic-flow-based visual strategy acquired through experience, independent of extraretinal mechanisms. To test this, we asked human subjects to perform a self-motion direction discrimination task under normal optic flow (fixation condition) or optic flow distorted by either real (pursuit condition) or simulated (simulated condition) eye movements. The task was performed either without (pre- and posttraining) or with (during training) feedback about the correct answer. We first replicated the previous observation that, before training, direction perception was greatly impaired in the simulated condition, where the optic flow was distorted and extraretinal eye movement signals were absent. We further showed that after a few training sessions this initial impairment gradually improved. These results reveal that behavioral training can enforce the exploitation of retinal cues to compensate for the distortion, without any contribution from extraretinal signals, and suggest that self-motion perception is a flexible and adaptive process that may depend on neural plasticity in the relevant cortical areas.
Affiliation(s)
- Shenbing Kuang
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Hu Deng
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Tao Zhang
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
14. Retinal Stabilization Reveals Limited Influence of Extraretinal Signals on Heading Tuning in the Medial Superior Temporal Area. J Neurosci 2019; 39:8064-8078. PMID: 31488610; DOI: 10.1523/jneurosci.0388-19.2019.
Abstract
Heading perception in primates depends heavily on visual optic-flow cues. Yet during self-motion, heading percepts remain stable, even though smooth-pursuit eye movements often distort optic flow. According to theoretical work, self-motion can be represented accurately by compensating for these distortions in two ways: via retinal mechanisms or via extraretinal efference-copy signals, which predict the sensory consequences of movement. Psychophysical evidence strongly supports the efference-copy hypothesis, but physiological evidence remains inconclusive. Neurons that signal the true heading direction during pursuit are found in visual areas of monkey cortex, including the dorsal medial superior temporal area (MSTd). Here we measured heading tuning in MSTd using a novel stimulus paradigm, in which we stabilize the optic-flow stimulus on the retina during pursuit. This approach isolates the effects on neuronal heading preferences of extraretinal signals, which remain active while the retinal stimulus is prevented from changing. Our results from 3 female monkeys demonstrate a significant but small influence of extraretinal signals on the preferred heading directions of MSTd neurons. Under our stimulus conditions, which are rich in retinal cues, we find that retinal mechanisms dominate physiological corrections for pursuit eye movements, suggesting that extraretinal cues, such as predictive efference-copy mechanisms, have a limited role under naturalistic conditions.
Significance Statement: Sensory systems discount stimulation caused by an animal's own behavior. For example, eye movements cause irrelevant retinal signals that could interfere with motion perception. The visual system compensates for such self-generated motion, but how this happens is unclear. Two theoretical possibilities are a purely visual calculation or one using an internal signal of eye movements to compensate for their effects. The latter can be isolated by experimentally stabilizing the image on a moving retina, but this approach has never been adopted to study motion physiology. Using this method, we find that extraretinal signals have little influence on activity in visual cortex, whereas visually based corrections for ongoing eye movements have stronger effects and are likely most important under real-world conditions.
15. Yang L, Gu Y. Distinct spatial coordinate of visual and vestibular heading signals in macaque FEFsem and MSTd. eLife 2017; 6:e29809. PMID: 29134944; PMCID: PMC5685470; DOI: 10.7554/elife.29809.
Abstract
Precise heading estimation requires integration of visual optic flow and vestibular inertial motion signals, which originate in distinct spatial coordinates (eye- and head-centered, respectively). To explore whether the two heading signals may share a common reference frame along the hierarchy of cortical stages, we examined two multisensory areas in macaques: the smooth pursuit area of the frontal eye field (FEFsem), closer to the motor side, and the dorsal portion of the medial superior temporal area (MSTd), closer to the sensory side. In both areas, vestibular signals are head-centered, whereas visual signals are mainly eye-centered. However, visual signals in FEFsem are shifted further toward head-centered coordinates than in MSTd. These results are robust, being largely independent of (1) smooth pursuit eye movements, (2) motion parallax cues, and (3) the behavioral context of active heading estimation, indicating that visual and vestibular heading signals may be represented in distinct spatial coordinates in sensory cortices.
Affiliation(s)
- Lihua Yang
- Key Laboratory of Primate Neurobiology, Institute of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China; University of Chinese Academy of Sciences, Beijing, China
- Yong Gu
- Key Laboratory of Primate Neurobiology, Institute of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
16. Wang J, Guo X, Zhuang X, Chen T, Yan W. Disrupted pursuit compensation during self-motion perception in early Alzheimer's disease. Sci Rep 2017. PMID: 28642572; PMCID: PMC5481347; DOI: 10.1038/s41598-017-04377-2.
Abstract
Our perception of the world is remarkably stable despite distorted retinal input due to frequent eye movements. The brain is thought to use corollary discharge, efference copies of signals sent from motor to visual regions, to compensate for these distortions and stabilize visual perception. In this study, we tested whether patients with Alzheimer's disease (AD) have impaired corollary discharge function, as evidenced by reduced compensation during the perception of optic flow that simulates self-motion through the environment. We asked a group of early-stage AD patients and age-matched healthy controls to indicate the perceived direction of self-motion based on optic flow while tracking a moving target with smooth pursuit eye movements or maintaining fixation on a stationary target. We first replicated the previous finding that healthy participants were able to compensate for distorted optic flow in the presence of eye movements, as indicated by similar self-motion perception in the pursuit and fixation conditions. In stark contrast, AD patients showed impaired self-motion perception when the optic flow was distorted by eye movements. Our results suggest that early-stage AD pathology is associated with disrupted eye movement compensation during self-motion perception.
Affiliation(s)
- Jingru Wang
- Department of Neurology, Liaocheng People's Hospital and Liaocheng Clinical School of Taishan Medical University, Liaocheng city, Shandong Province, 252000, China
- Xiaojun Guo
- Department of Neurology, Liaocheng People's Hospital and Liaocheng Clinical School of Taishan Medical University, Liaocheng city, Shandong Province, 252000, China
- Xianbo Zhuang
- Department of Neurology, Liaocheng People's Hospital and Liaocheng Clinical School of Taishan Medical University, Liaocheng city, Shandong Province, 252000, China
- Tuanzhi Chen
- Department of Neurology, Liaocheng People's Hospital and Liaocheng Clinical School of Taishan Medical University, Liaocheng city, Shandong Province, 252000, China
- Wei Yan
- Department of Neurology, Liaocheng People's Hospital and Liaocheng Clinical School of Taishan Medical University, Liaocheng city, Shandong Province, 252000, China
17. Kuang S, Shi J, Wang Y, Zhang T. Where are you heading? Flexible integration of retinal and extra-retinal cues during self-motion perception. Psych J 2017; 6:141-152. PMID: 28514063; DOI: 10.1002/pchj.165.
Abstract
As we move forward in the environment, we experience a radial expansion of the retinal image whose center corresponds to the instantaneous direction of self-motion. Humans can precisely perceive their heading direction even when the retinal motion is distorted by gaze shifts due to eye/body rotations. Previous studies have suggested that both retinal and extra-retinal strategies can compensate for this retinal image distortion, but the relative contribution of each strategy remains unclear. To address this issue, we devised a two-alternative heading discrimination task in which participants made either real or simulated pursuit eye movements. The two conditions provided the same retinal input but differed in the presence of extra-retinal eye movement signals, so the behavioral difference between conditions served as a metric of the extra-retinal contribution. We systematically and independently manipulated pursuit speed, heading speed, and the reliability of retinal signals. We found that the extra-retinal contribution increased with increasing pursuit speed (stronger extra-retinal signal) and with decreasing heading speed (weaker retinal signal). The extra-retinal contribution also increased when we corrupted the retinal signals with noise. Our results reveal that the relative magnitudes of the retinal and extra-retinal contributions are not fixed but are flexibly adjusted to each specific task condition. This task-dependent, flexible integration appears to take the form of a reliability-based weighting scheme that maximizes heading performance.
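A reliability-based weighting scheme of the kind proposed here is the standard inverse-variance rule: each cue is weighted in proportion to its reliability (one over its variance). A minimal sketch with made-up numbers, not the authors' model:

```python
def combine(h_ret, var_ret, h_extra, var_extra):
    """Reliability-weighted combination of a retinal and an extra-retinal
    heading estimate; weights are proportional to inverse variances."""
    w_ret, w_extra = 1.0 / var_ret, 1.0 / var_extra
    h = (w_ret * h_ret + w_extra * h_extra) / (w_ret + w_extra)
    var = 1.0 / (w_ret + w_extra)  # combined estimate is more reliable
    return h, var

# Corrupting the retinal signal with noise (var_ret: 1 -> 4) shifts the
# weight toward the extra-retinal cue, as the behavior suggests.
print(combine(2.0, 1.0, 0.0, 1.0))  # (1.0, 0.5)
print(combine(2.0, 4.0, 0.0, 1.0))  # (0.4, 0.8)
```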
Affiliation(s)
- Shenbing Kuang
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Jinfu Shi
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Yang Wang
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Tao Zhang
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
18. Strong SL, Silson EH, Gouws AD, Morland AB, McKeefry DJ. Differential processing of the direction and focus of expansion of optic flow stimuli in areas MST and V3A of the human visual cortex. J Neurophysiol 2017; 117:2209-2217. PMID: 28298300; DOI: 10.1152/jn.00031.2017.
Abstract
Human neuropsychological and neuroimaging studies have raised the possibility that different attributes of optic flow stimuli, namely radial direction and the position of the focus of expansion (FOE), are processed within separate cortical areas. In the human brain, visual areas V5/MT+ and V3A have been proposed as integral to the analysis of these different attributes of optic flow stimuli. To establish direct causal relationships between neural activity in human (h)V5/MT+ and V3A and the perception of radial motion direction and FOE position, we used transcranial magnetic stimulation (TMS) to disrupt cortical activity in these areas while participants performed behavioral tasks dependent on these different aspects of optic flow stimuli. The cortical regions of interest were identified in seven human participants using standard functional MRI retinotopic mapping techniques and functional localizers. TMS to area V3A was found to disrupt FOE positional judgments but not radial direction discrimination, whereas the application of TMS to an anterior subdivision of hV5/MT+, MST/TO-2, produced the reverse effects, disrupting radial direction discrimination but eliciting no effect on the FOE positional judgment task. This double dissociation demonstrates that FOE position and radial direction of optic flow stimuli are signaled independently by neural activity in areas hV5/MT+ and V3A.
New & Noteworthy: Optic flow constitutes a biologically relevant visual cue as we move through any environment. With the use of neuroimaging and brain-stimulation techniques, this study demonstrates that separate human brain areas are involved in the analysis of the direction of radial motion and the focus of expansion in optic flow. This dissociation reveals the existence of separate processing pathways for the analysis of different attributes of optic flow that are important for the guidance of self-locomotion and object avoidance.
Affiliation(s)
- Samantha L Strong
- School of Optometry and Vision Science, University of Bradford, Bradford, West Yorkshire, United Kingdom; York Neuroimaging Centre, Department of Psychology, University of York, York, United Kingdom
- Edward H Silson
- York Neuroimaging Centre, Department of Psychology, University of York, York, United Kingdom; Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, Maryland
- André D Gouws
- York Neuroimaging Centre, Department of Psychology, University of York, York, United Kingdom
- Antony B Morland
- York Neuroimaging Centre, Department of Psychology, University of York, York, United Kingdom; Centre for Neuroscience, Hull-York Medical School, University of York, York, United Kingdom
- Declan J McKeefry
- School of Optometry and Vision Science, University of Bradford, Bradford, West Yorkshire, United Kingdom
19. Sheth BR, Young R. Two Visual Pathways in Primates Based on Sampling of Space: Exploitation and Exploration of Visual Information. Front Integr Neurosci 2016; 10:37. PMID: 27920670; PMCID: PMC5118626; DOI: 10.3389/fnint.2016.00037.
Abstract
Evidence is strong that the visual pathway is segregated into two distinct streams, ventral and dorsal. Two proposals theorize that the pathways are segregated in function: the ventral stream processes information about object identity, whereas the dorsal stream, according to one model, processes information about object location and, according to another, is responsible for executing movements under visual control. The models are influential; however, recent experimental evidence challenges them: the ventral stream is not solely responsible for object recognition and, conversely, its function is not strictly limited to object vision; the dorsal stream is not responsible by itself for spatial vision or visuomotor control and, conversely, its function extends beyond vision or visuomotor control. In their place, we suggest a robust dichotomy consisting of a ventral stream selectively sampling high-resolution/focal spaces and a dorsal stream sampling nearly all of space with reduced foveal bias. The proposal hews closely to the theme of embodied cognition: function arises as a consequence of an extant sensory underpinning. A continuous, not sharp, segregation based on function emerges, and carries with it an undercurrent of an exploitation-exploration dichotomy. Under this interpretation, cells of the ventral stream, which individually have more punctate receptive fields that generally include the fovea or parafovea, provide detailed information about object shapes and features and lead to the systematic exploitation of said information; cells of the dorsal stream, which individually have large receptive fields, contribute to visuospatial perception and provide information about the presence/absence of salient objects and their locations for novel exploration and subsequent exploitation by the ventral stream or, under certain conditions, the dorsal stream. We leverage the dichotomy to unify neuropsychological cases under a common umbrella, account for the increased prevalence of multisensory integration in the dorsal stream under a Bayesian framework, predict conditions under which object recognition utilizes the ventral or dorsal stream, and explain why cells of the dorsal stream drive sensorimotor control and motion processing and have poorer feature selectivity. Finally, the model speculates on a dynamic interaction between the two streams that underscores a unified, seamless perception. Existing theories are subsumed under our proposal.
Affiliation(s)
- Bhavin R Sheth
- Department of Electrical and Computer Engineering, University of Houston, Houston, TX, USA; Center for NeuroEngineering and Cognitive Systems, University of Houston, Houston, TX, USA
- Ryan Young
- Department of Neuroscience, Brandeis University, Waltham, MA, USA
20.
Abstract
In the current study, we explored observers' use of two distinct analyses for determining their direction of motion, or heading: a scene-based analysis and a motion-based analysis. In two experiments, subjects viewed sequentially presented, paired digitized images of real-world scenes and judged the direction of heading; the pairs were presented with various interstimulus intervals (ISIs). In Experiment 1, subjects could determine heading when the two frames were separated with a 1,000-ms ISI, long enough to eliminate apparent motion. In Experiment 2, subjects performed two tasks, a path-of-motion task and a memory-load task, under three different ISIs, 50 ms, 500 ms, and 1,000 ms. Heading accuracy decreased with an increase in ISI. Increasing memory load influenced heading judgments only for the longer ISI when motion-based information was not available. These results are consistent with the hypothesis that the scene-based analysis has a coarse spatial representation, is a sustained temporal process, and is capacity limited, whereas the motion-based analysis has a fine spatial resolution, is a transient temporal process, and is capacity unlimited.
Affiliation(s)
- Sowon Hahn
- University of California at Riverside, USA
21. A faithful internal representation of walking movements in the Drosophila visual system. Nat Neurosci 2016; 20:72-81. PMID: 27798632; DOI: 10.1038/nn.4435.
Abstract
The integration of sensorimotor signals to internally estimate self-movement is critical for spatial perception and motor control. However, which neural circuits accurately track body motion and how these circuits control movement remain unknown. We found that a population of Drosophila neurons that were sensitive to visual flow patterns typically generated during locomotion, the horizontal system (HS) cells, encoded unambiguous quantitative information about the fly's walking behavior independently of vision. Angular and translational velocity signals were integrated with a behavioral-state signal and generated direction-selective and speed-sensitive graded changes in the membrane potential of these non-spiking cells. The nonvisual direction selectivity of HS cells cooperated with their visual selectivity only when the visual input matched that expected from the fly's movements, thereby revealing a circuit for internally monitoring voluntary walking. Furthermore, given that HS cells promoted leg-based turning, the activity of these cells could be used to control forward walking.
22. 3D Visual Response Properties of MSTd Emerge from an Efficient, Sparse Population Code. J Neurosci 2016; 36:8399-8415. PMID: 27511012; DOI: 10.1523/jneurosci.0396-16.2016.
Abstract
Neurons in the dorsal subregion of the medial superior temporal (MSTd) area of the macaque respond to large, complex patterns of retinal flow, implying a role in the analysis of self-motion. Some neurons are selective for the expanding radial motion that occurs as an observer moves through the environment ("heading"), and computational models can account for this finding. However, ample evidence suggests that MSTd neurons exhibit a continuum of visual response selectivity to large-field motion stimuli. Furthermore, the underlying computational principles by which these response properties are derived remain poorly understood. Here we describe a computational model of macaque MSTd based on the hypothesis that neurons in MSTd efficiently encode the continuum of large-field retinal flow patterns on the basis of inputs received from neurons in MT with receptive fields that resemble basis vectors recovered with non-negative matrix factorization. These assumptions are sufficient to quantitatively simulate neurophysiological response properties of MSTd cells, such as 3D translation and rotation selectivity, suggesting that these properties might simply be a byproduct of MSTd neurons performing dimensionality reduction on their inputs. At the population level, model MSTd accurately predicts eye velocity and heading using a sparse distributed code, consistent with the idea that biological MSTd might be well equipped to efficiently encode various self-motion variables. The present work aims to add some structure to the often contradictory findings about macaque MSTd, and offers a biologically plausible account of a wide range of visual response properties ranging from single-unit selectivity to population statistics.
Significance Statement: Using a dimensionality reduction technique known as non-negative matrix factorization, we found that a variety of medial superior temporal (MSTd) neural response properties could be derived from MT-like input features. The responses that emerge from this technique, such as 3D translation and rotation selectivity, spiral tuning, and heading selectivity, can account for a number of empirical results. These findings (1) provide a further step toward a scientific understanding of the often nonintuitive response properties of MSTd neurons; (2) suggest that response properties, such as complex motion tuning and heading selectivity, might simply be a byproduct of MSTd neurons performing dimensionality reduction on their inputs; and (3) imply that motion perception in the cortex is consistent with ideas from the efficient-coding and free-energy principles.
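The core computation named here, learning a nonnegative basis for MT-like inputs, can be sketched with an off-the-shelf non-negative matrix factorization routine. The random matrix below merely stands in for MT population responses, and every dimension and parameter is an illustrative assumption rather than the paper's actual model.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Rows: large-field flow stimuli; columns: MT-like input units.
# Nonnegative "firing rates"; random data stands in for real responses.
mt_responses = rng.random((200, 144))

# Factorize mt_responses ~ W @ H: rows of H act as MSTd-like basis
# patterns, and W gives each stimulus a sparse encoding in that basis.
model = NMF(n_components=16, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(mt_responses)   # stimulus encodings
H = model.components_                   # basis patterns ("model MSTd units")
print(W.shape, H.shape)                 # (200, 16) (16, 144)
```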
23. MacSwan J, Rolstad K. Modularity and the Facilitation Effect: Psychological Mechanisms of Transfer in Bilingual Students. Hispanic Journal of Behavioral Sciences 2016. DOI: 10.1177/0739986305275173.
Abstract
This article draws upon recent work in the cognitive neurosciences to suggest that the facilitation effect follows naturally within current psychological theory. A view of the mind as consisting of discrete mental modules, called psychological modularity, is defended with case study evidence of double dissociation. It is argued that transfer of academic subject knowledge occurs in bilingual settings as an epiphenomenon of mental architecture: Because content knowledge is independent of linguistic knowledge, it is accessible to any language or languages a person happens to know. As such, transfer should be seen as a metaphor for a process; it is simply a natural consequence of our mental architecture. Cummins’s developmental interdependence hypothesis, threshold hypothesis, and common underlying proficiency model are discussed. It is concluded that the facilitation effect is derived by the modularity thesis within a framework in which language is viewed as a cognitive domain separate from literacy and school subject matter knowledge.
Collapse
|
24
|
Abstract
When observers move through an environment, they are immersed in a sea of motions that guide their further movements. The horizontal relative motions of all possible pairs of stationary objects fall into three classes: They converge, diverge and slow down, or diverge with increasing velocity. Conjoined with ordinal depth information, the first two motions reveal nominal invariants, constraining heading to one side of the visual field. When two object pairs yield invariants on opposing sides of the heading, they can constrain judgments to a narrow region. Distributional analyses of responses in an experiment involving simulated observer movement suggest that observers follow these constraints.
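A minimal sketch of the three pairwise motion classes named above, classifying a pair from the first and second differences of its angular separation over time; the sampling scheme and decision rules are illustrative assumptions, not the study's analysis.

```python
# Hypothetical classifier of pairwise horizontal relative motion into the
# three classes in the abstract: converging, diverging-and-slowing, or
# diverging with increasing velocity.
import numpy as np

def pair_motion_class(x1, x2):
    """x1, x2: horizontal image positions (3+ samples) of two objects."""
    sep = np.abs(np.asarray(x1, float) - np.asarray(x2, float))
    d_sep = np.diff(sep)              # relative (separation) velocity
    if d_sep[-1] < 0:
        return "converging"
    if np.diff(d_sep)[-1] < 0:
        return "diverging, slowing down"
    return "diverging, speeding up"

# Toy example: a pair whose separation grows with increasing velocity.
t = np.linspace(0, 1, 5)
print(pair_motion_class(1 + 0.1 * t**2, -1 - 0.1 * t**2))
```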
Collapse
Affiliation(s)
- Ranxiao Frances Wang
- Cornell University
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
| | | |
Collapse
|
25
|
Abstract
A basic principle in visual neuroscience is the retinotopic organization of neural receptive fields. Here, we review behavioral, neurophysiological, and neuroimaging evidence for nonretinotopic processing of visual stimuli. A number of behavioral studies have shown perception depending on object or external-space coordinate systems, in addition to retinal coordinates. Both single-cell neurophysiology and neuroimaging have provided evidence for the modulation of neural firing by gaze position and processing of visual information based on craniotopic or spatiotopic coordinates. Transient remapping of the spatial and temporal properties of neurons contingent on saccadic eye movements has been demonstrated in visual cortex, as well as frontal and parietal areas involved in saliency/priority maps, and is a good candidate to mediate some of the spatial invariance demonstrated by perception. Recent studies suggest that spatiotopic selectivity depends on a low spatial resolution system of maps that operates over a longer time frame than retinotopic processing and is strongly modulated by high-level cognitive factors such as attention. The interaction of an initial and rapid retinotopic processing stage, tied to new fixations, and a longer lasting but less precise nonretinotopic level of visual representation could underlie the perception of both a detailed and a stable visual world across saccadic eye movements.
Collapse
|
26
|
Joint representation of translational and rotational components of optic flow in parietal cortex. Proc Natl Acad Sci U S A 2016; 113:5077-82. [PMID: 27095846 DOI: 10.1073/pnas.1604818113] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Terrestrial navigation naturally involves translations within the horizontal plane and eye rotations about a vertical (yaw) axis to track and fixate targets of interest. Neurons in the macaque ventral intraparietal (VIP) area are known to represent heading (the direction of self-translation) from optic flow in a manner that is tolerant to rotational visual cues generated during pursuit eye movements. Previous studies have also reported that eye rotations modulate the response gain of heading tuning curves in VIP neurons. We tested the hypothesis that VIP neurons simultaneously represent both heading and horizontal (yaw) eye rotation velocity by measuring heading tuning curves for a range of rotational velocities of either real or simulated eye movements. Three findings support the hypothesis of a joint representation. First, we show that rotation velocity selectivity based on gain modulations of visual heading tuning is similar to that measured during pure rotations. Second, gain modulations of heading tuning are similar for self-generated eye rotations and visually simulated rotations, indicating that the representation of rotation velocity in VIP is multimodal, driven by both visual and extraretinal signals. Third, we show that roughly one-half of VIP neurons jointly represent heading and rotation velocity in a multiplicatively separable manner. These results provide the first evidence, to our knowledge, for a joint representation of translation direction and rotation velocity in parietal cortex and show that rotation velocity can be represented based on visual cues, even in the absence of efference copy signals.
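Multiplicative separability of a joint heading x rotation tuning surface can be checked with a rank-1 test: if R(heading, rotation) = f(heading) * g(rotation), the tuning matrix has rank one. A minimal synthetic sketch, in which the tuning shapes and noise level are assumptions:

```python
# Rank-1 (separability) test on a synthetic joint tuning matrix.
import numpy as np

rng = np.random.default_rng(1)
headings = np.deg2rad(np.arange(0, 360, 45))            # 8 headings
rotations = np.array([-20.0, -10.0, 0.0, 10.0, 20.0])   # yaw velocity, deg/s

f = np.exp(np.cos(headings - np.pi / 2))                # heading tuning
g = 1.0 + 0.03 * rotations                              # rotation gain
R = np.outer(f, g) + 0.05 * rng.standard_normal((8, 5)) # noisy joint tuning

s = np.linalg.svd(R, compute_uv=False)                  # singular values
sep_index = s[0] ** 2 / np.sum(s ** 2)                  # rank-1 variance share
print(f"separability index = {sep_index:.3f}")          # near 1 -> multiplicative
```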
Collapse
|
27
|
Greenlee M, Frank S, Kaliuzhna M, Blanke O, Bremmer F, Churan J, Cuturi LF, MacNeilage P, Smith A. Multisensory Integration in Self Motion Perception. Multisens Res 2016. [DOI: 10.1163/22134808-00002527] [Citation(s) in RCA: 43] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
Self motion perception involves the integration of visual, vestibular, somatosensory and motor signals. This article reviews the findings from single unit electrophysiology, functional and structural magnetic resonance imaging and psychophysics to present an update on how the human and non-human primate brain integrates multisensory information to estimate one’s position and motion in space. The results indicate that there is a network of regions in the non-human primate and human brain that processes self motion cues from the different sense modalities.
Collapse
Affiliation(s)
- Mark W. Greenlee
- Institute of Experimental Psychology, University of Regensburg, Regensburg, Germany
| | - Sebastian M. Frank
- Institute of Experimental Psychology, University of Regensburg, Regensburg, Germany
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
| | - Mariia Kaliuzhna
- Center for Neuroprosthetics, Laboratory of Cognitive Neuroscience, Ecole Polytechnique Fédérale de Lausanne, EPFL, Switzerland
| | - Olaf Blanke
- Center for Neuroprosthetics, Laboratory of Cognitive Neuroscience, Ecole Polytechnique Fédérale de Lausanne, EPFL, Switzerland
| | - Frank Bremmer
- Department of Neurophysics, University of Marburg, Marburg, Germany
| | - Jan Churan
- Department of Neurophysics, University of Marburg, Marburg, Germany
| | - Luigi F. Cuturi
- German Center for Vertigo, University Hospital of Munich, LMU, Munich, Germany
| | - Paul R. MacNeilage
- German Center for Vertigo, University Hospital of Munich, LMU, Munich, Germany
| | - Andrew T. Smith
- Department of Psychology, Royal Holloway, University of London, UK
| |
Collapse
|
28
|
Beyeler M, Oros N, Dutt N, Krichmar JL. A GPU-accelerated cortical neural network model for visually guided robot navigation. Neural Netw 2015; 72:75-87. [PMID: 26494281 DOI: 10.1016/j.neunet.2015.09.005] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2015] [Revised: 07/17/2015] [Accepted: 09/22/2015] [Indexed: 11/27/2022]
|
29
|
Holten V, Donker SF, Stuit SM, Verstraten FAJ, van der Smagt MJ. Visual directional anisotropy does not mirror the directional anisotropy apparent in postural sway. Perception 2015; 44:477-89. [PMID: 26422898 DOI: 10.1068/p7925] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
Presenting a large optic flow pattern to observers is likely to cause postural sway. However, directional anisotropies have been reported, in that contracting optic flow induces more postural sway than expanding optic flow. Recently, we showed that the biomechanics of the lower leg cannot account for this anisotropy (Holten, Donker, Verstraten, & van der Smagt, 2013, Experimental Brain Research, 228, 117-129). The question we address in the current study is whether differences in visual processing of optic flow directions, in particular the perceptual strength of these directions, mirror the anisotropy apparent in postural sway. That is, can contracting optic flow be considered to be a perceptually stronger visual stimulus than expanding optic flow? In the current study we use a breaking continuous flash suppression paradigm where we assume that perceptually stronger visual stimuli will break the flash suppression earlier, making the suppressed optic flow stimulus visible sooner. Surprisingly, our results show the opposite, in that expanding optic flow is detected earlier than contracting optic flow.
Collapse
|
30
|
Murdison TS, Leclercq G, Lefèvre P, Blohm G. Computations underlying the visuomotor transformation for smooth pursuit eye movements. J Neurophysiol 2015; 113:1377-99. [PMID: 25475344 DOI: 10.1152/jn.00273.2014] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Smooth pursuit eye movements are driven by retinal motion and enable us to view moving targets with high acuity. Complicating the generation of these movements is the fact that different eye and head rotations can produce different retinal stimuli while giving rise to identical smooth pursuit trajectories. However, because our eyes accurately pursue targets regardless of eye and head orientation (Blohm G, Lefèvre P. J Neurophysiol 104: 2103-2115, 2010), the brain must somehow take these signals into account. To learn about the neural mechanisms potentially underlying this visual-to-motor transformation, we trained a physiologically inspired neural network model to combine two-dimensional (2D) retinal motion signals with three-dimensional (3D) eye and head orientation and velocity signals to generate a spatially correct 3D pursuit command. We then simulated conditions of 1) head roll-induced ocular counterroll, 2) oblique gaze-induced retinal rotations, 3) eccentric gazes (invoking the half-angle rule), and 4) optokinetic nystagmus to investigate how units in the intermediate layers of the network accounted for different 3D constraints. Simultaneously, we simulated electrophysiological recordings (visual and motor tunings) and microstimulation experiments to quantify the reference frames of signals at each processing stage. We found a gradual retinal-to-intermediate-to-spatial feedforward transformation through the hidden layers. Our model is the first to describe the general 3D transformation for smooth pursuit mediated by eye- and head-dependent gain modulation. Based on several testable experimental predictions, our model provides a mechanism by which the brain could perform the 3D visuomotor transformation for smooth pursuit.
Collapse
Affiliation(s)
- T Scott Murdison
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN); and
| | - Guillaume Leclercq
- ICTEAM Institute and Institute of Neuroscience (IoNS), Université catholique de Louvain, Louvain-La-Neuve, Belgium
| | - Philippe Lefèvre
- ICTEAM Institute and Institute of Neuroscience (IoNS), Université catholique de Louvain, Louvain-La-Neuve, Belgium
| | - Gunnar Blohm
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN); and
| |
Collapse
|
31
|
Sunkara A, DeAngelis GC, Angelaki DE. Role of visual and non-visual cues in constructing a rotation-invariant representation of heading in parietal cortex. eLife 2015; 4. [PMID: 25693417 PMCID: PMC4337725 DOI: 10.7554/elife.04693] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2014] [Accepted: 01/20/2015] [Indexed: 11/16/2022] Open
Abstract
As we navigate through the world, eye and head movements add rotational velocity patterns to the retinal image. When such rotations accompany observer translation, the rotational velocity patterns must be discounted to accurately perceive heading. The conventional view holds that this computation requires efference copies of self-generated eye/head movements. Here we demonstrate that the brain implements an alternative solution in which retinal velocity patterns are themselves used to dissociate translations from rotations. These results reveal a novel role for visual cues in achieving a rotation-invariant representation of heading in the macaque ventral intraparietal area. Specifically, we show that the visual system utilizes both local motion parallax cues and global perspective distortions to estimate heading in the presence of rotations. These findings further suggest that the brain is capable of performing complex computations to infer eye movements and discount their sensory consequences based solely on visual cues.

When strolling along a path beside a busy street, we can look around without losing our stride. The things we see change as we walk forward, and our view also changes if we turn our head, for example, to look at a passing car. Nevertheless, we can still tell that we are walking in a straight line because our brain is able to compute the direction in which we are heading by discounting the visual changes caused by rotating our head or eyes. It remains unclear how the brain gets the information about head and eye movements that it would need to be able to do this. Many researchers have proposed that the brain estimates these rotations by using a copy of the neural signals that are sent to the muscles to move the eyes or head. However, it is possible that the brain can estimate head and eye rotations by directly analyzing the visual information from the eyes. One region of the brain that may contribute to this process is the ventral intraparietal area, or ‘area VIP’ for short. Sunkara et al. devised an experiment that can help distinguish the effects of visual cues from copies of neural signals sent to the muscles during eye rotations. This involved training monkeys to look at a 3D display of moving dots, which gives the impression of moving through space. Sunkara et al. then measured the electrical signals in area VIP either when the monkey moved its eyes (to follow a moving target), or when the display changed to give the monkey the same visual cues as if it had rotated its eyes, when in fact it had not. Sunkara et al. found that the electrical signals recorded in area VIP when the monkey was given the illusion of rotating its eyes were similar to the signals recorded when the monkey actually rotated its eyes. This suggests that visual cues play an important role in correcting for the effects of eye rotations and correctly estimating the direction in which we are heading. Further research into the mechanisms behind this neural process could lead to new vision-based treatments for medical disorders that cause people to have balance problems. Similar research could also help to identify ways to improve navigation in automated vehicles, such as driverless cars.
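The translation/rotation dissociation described here can be made concrete with the standard instantaneous motion-field equations (Longuet-Higgins and Prazdny): retinal flow is the sum of a depth-dependent translational term and a depth-independent rotational term that carries the perspective distortions the study exploits. A minimal sketch with one common sign convention, not the authors' code:

```python
# Motion-field equations for a pinhole camera with unit focal length.
import numpy as np

def retinal_flow(x, y, Z, T, w):
    """Flow (u, v) at normalized image point (x, y) for depth Z,
    translation T = (Tx, Ty, Tz), rotation w = (wx, wy, wz)."""
    Tx, Ty, Tz = T
    wx, wy, wz = w
    u_trans = (-Tx + x * Tz) / Z                 # depth-dependent
    v_trans = (-Ty + y * Tz) / Z
    u_rot = x * y * wx - (1 + x**2) * wy + y * wz  # depth-independent
    v_rot = (1 + y**2) * wx - x * y * wy - x * wz
    return u_trans + u_rot, v_trans + v_rot

# The rotational part is identical at all depths, so features at the same
# image location but different depths isolate the translational component
# (motion parallax), one of the cues named in the abstract:
u_near, v_near = retinal_flow(0.2, 0.1, Z=1.0, T=(0, 0, 1), w=(0, 0.1, 0))
u_far,  v_far  = retinal_flow(0.2, 0.1, Z=4.0, T=(0, 0, 1), w=(0, 0.1, 0))
print(u_near - u_far, v_near - v_far)  # depends only on T and the depths
```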
Collapse
Affiliation(s)
- Adhira Sunkara
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, United States
| | - Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, United States
| | - Dora E Angelaki
- Department of Neuroscience, Baylor College of Medicine, Houston, United States
| |
Collapse
|
32
|
Kim HR, Angelaki DE, DeAngelis GC. A novel role for visual perspective cues in the neural computation of depth. Nat Neurosci 2014; 18:129-37. [PMID: 25436667 PMCID: PMC4281299 DOI: 10.1038/nn.3889] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2014] [Accepted: 11/02/2014] [Indexed: 11/10/2022]
Abstract
As we explore a scene, our eye movements add global patterns of motion to the retinal image, complicating visual motion produced by self-motion or moving objects. Conventionally, it has been assumed that extra-retinal signals, such as efference copy of smooth pursuit commands, are required to compensate for the visual consequences of eye rotations. We consider an alternative possibility: namely, that the visual system can infer eye rotations from global patterns of image motion. We visually simulated combinations of eye translation and rotation, including perspective distortions that change dynamically over time. We demonstrate that incorporating these “dynamic perspective” cues allows the visual system to generate selectivity for depth sign from motion parallax in macaque area MT, a computation that was previously thought to require extra-retinal signals regarding eye velocity. Our findings suggest novel neural mechanisms that analyze global patterns of visual motion to perform computations that require knowledge of eye rotations.
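One compact way to express the underlying geometry is the motion/pursuit law of Nawrot and Stroyan (2009), in which relative depth from motion parallax follows the ratio of local retinal image velocity to eye rotation velocity; on the account above, the rotation term could be supplied by dynamic perspective cues rather than an efference copy. A toy sketch with an arbitrary sign convention:

```python
def depth_sign_from_parallax(retinal_velocity, rotation_velocity):
    """Motion/pursuit ratio; its sign labels near vs. far. The mapping of
    sign to near/far is arbitrary here and depends on the coordinate frame."""
    ratio = retinal_velocity / rotation_velocity
    return ratio, ("near" if ratio > 0 else "far")

# Feature moving at -0.8 deg/s on the retina during a 4 deg/s rotation
# inferred (hypothetically) from global perspective distortions of the flow.
print(depth_sign_from_parallax(-0.8, 4.0))
```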
Collapse
Affiliation(s)
- HyungGoo R Kim
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York, USA
| | - Dora E Angelaki
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas, USA
| | - Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York, USA
| |
Collapse
|
33
|
Kaminiarz A, Schlack A, Hoffmann KP, Lappe M, Bremmer F. Visual selectivity for heading in the macaque ventral intraparietal area. J Neurophysiol 2014; 112:2470-80. [PMID: 25122709 DOI: 10.1152/jn.00410.2014] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The patterns of optic flow seen during self-motion can be used to determine the direction of one's own heading. Tracking eye movements, which typically occur during everyday life, alter this task since they add further retinal image motion and (predictably) distort the retinal flow pattern. Humans employ both visual and nonvisual (extraretinal) information to solve a heading task in such cases. Likewise, it has been shown that neurons in the monkey medial superior temporal area (area MST) use both signals during the processing of self-motion information. In this article we report that neurons in the macaque ventral intraparietal area (area VIP) use visual information derived from the distorted flow patterns to encode heading during (simulated) eye movements. We recorded responses of VIP neurons to simple radial flow fields and to distorted flow fields that simulated self-motion plus eye movements. In 59% of the cases, cell responses compensated for the distortion and kept the same heading selectivity irrespective of different simulated eye movements. In addition, response modulations during real compared with simulated eye movements were smaller, being consistent with reafferent signaling involved in the processing of the visual consequences of eye movements in area VIP. We conclude that the motion selectivities found in area VIP, like those in area MST, provide a way to successfully analyze and use flow fields during self-motion and simultaneous tracking movements.
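For the undistorted case, heading recovery reduces to locating the focus of expansion (FOE) of the radial flow field. A minimal sketch, not the recording study's analysis: each flow vector must be parallel to the line from the FOE to its image location, which yields a linear least-squares problem.

```python
# Estimate the FOE of a radial flow field by least squares.
import numpy as np

rng = np.random.default_rng(2)
foe_true = np.array([0.1, -0.05])               # hidden heading point
pts = rng.uniform(-1, 1, size=(100, 2))         # sampled image locations
flow = (pts - foe_true) * 2.0                   # radial expansion field
flow += 0.02 * rng.standard_normal(flow.shape)  # measurement noise

# Parallelism constraint: u*(y - y0) - v*(x - x0) = 0 for FOE (x0, y0),
# which rearranges to v*x0 - u*y0 = v*x - u*y, linear in (x0, y0).
u, v = flow[:, 0], flow[:, 1]
A = np.column_stack([v, -u])
b = v * pts[:, 0] - u * pts[:, 1]
foe_est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(foe_true, foe_est)                        # close agreement
```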
Collapse
Affiliation(s)
| | - Anja Schlack
- Allgemeine Zoologie und Neurobiologie, University of Bochum, Bochum, Germany; and
| | - Klaus-Peter Hoffmann
- AG Neurophysik, University of Marburg, Marburg, Germany; Allgemeine Zoologie und Neurobiologie, University of Bochum, Bochum, Germany; and
| | - Markus Lappe
- Institut für Psychologie, University of Münster, Münster, Germany
| | - Frank Bremmer
- AG Neurophysik, University of Marburg, Marburg, Germany;
| |
Collapse
|
34
|
Royden CS, Holloway MA. Detecting moving objects in an optic flow field using direction- and speed-tuned operators. Vision Res 2014; 98:14-25. [PMID: 24607912 DOI: 10.1016/j.visres.2014.02.009] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2013] [Revised: 01/25/2014] [Accepted: 02/21/2014] [Indexed: 11/20/2022]
Abstract
An observer moving through a scene must be able to identify moving objects. Psychophysical results have shown that people can identify moving objects based on the speed or direction of their movement relative to the optic flow field generated by the observer's motion. Here we show that a model that uses speed- and direction-tuned units, whose responses are based on the response properties of cells in the primate visual cortex, can successfully identify the borders of moving objects in a scene through which an observer is moving.
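A stripped-down version of the model's logic: compare each local motion vector against the flow expected from the observer's self-motion and flag locations where speed or direction deviates beyond tolerance. The plain vector comparisons and thresholds below are assumptions standing in for the paper's physiologically based direction- and speed-tuned operators.

```python
# Flag motion vectors inconsistent with a radial self-motion flow field.
import numpy as np

def flag_moving(points, flow, foe, expansion_rate, speed_tol=0.3, angle_tol=0.3):
    """Boolean mask of vectors whose speed or direction deviates from the
    flow expected at their location for the given FOE and expansion rate."""
    expected = (points - foe) * expansion_rate
    speed_err = np.abs(np.linalg.norm(flow, axis=1)
                       - np.linalg.norm(expected, axis=1))
    cosang = np.sum(flow * expected, axis=1) / (
        np.linalg.norm(flow, axis=1) * np.linalg.norm(expected, axis=1) + 1e-12)
    angle_err = np.arccos(np.clip(cosang, -1.0, 1.0))
    return (speed_err > speed_tol) | (angle_err > angle_tol)

pts = np.array([[0.5, 0.5], [-0.4, 0.2], [0.3, -0.6]])
flow = pts * 1.0                            # consistent radial flow, FOE at 0
flow[2] += np.array([0.8, 0.0])             # one independently moving object
print(flag_moving(pts, flow, foe=np.zeros(2), expansion_rate=1.0))
```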
Collapse
Affiliation(s)
- Constance S Royden
- Department of Mathematics and Computer Science, College of the Holy Cross, United States.
| | - Michael A Holloway
- Department of Mathematics and Computer Science, College of the Holy Cross, United States
| |
Collapse
|
35
|
A unified model of heading and path perception in primate MSTd. PLoS Comput Biol 2014; 10:e1003476. [PMID: 24586130 PMCID: PMC3930491 DOI: 10.1371/journal.pcbi.1003476] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2013] [Accepted: 01/03/2014] [Indexed: 11/20/2022] Open
Abstract
Self-motion, steering, and obstacle avoidance during navigation in the real world require humans to travel along curved paths. Many perceptual models have been proposed that focus on heading, which specifies the direction of travel along straight paths, but not on path curvature, which humans accurately perceive and is critical to everyday locomotion. In primates, including humans, dorsal medial superior temporal area (MSTd) has been implicated in heading perception. However, the majority of MSTd neurons respond optimally to spiral patterns, rather than to the radial expansion patterns associated with heading. No existing theory of curved path perception explains the neural mechanisms by which humans accurately assess path, and no functional role for spiral-tuned cells has yet been proposed. Here we present a computational model that demonstrates how the continuum of observed cells (radial to circular) in MSTd can simultaneously code curvature and heading across the neural population. Curvature is encoded through the spirality of the most active cell, and heading is encoded through the visuotopic location of the center of the most active cell's receptive field. Model curvature and heading errors fit those made by humans. Our model challenges the view that the function of MSTd is heading estimation; based on our analysis, we claim that it is primarily concerned with trajectory estimation and the simultaneous representation of both curvature and heading. In our model, temporal dynamics afford time-history in the neural representation of optic flow, which may modulate its structure. This has far-reaching implications for the interpretation of studies that assume that optic flow is, and should be, represented as an instantaneous vector field. Our results suggest that spiral motion patterns that emerge in spatio-temporal optic flow are essential for guiding self-motion along complex trajectories, and that cells in MSTd are specifically tuned to extract complex trajectory estimation from flow.
Collapse
|
36
|
Brostek L, Büttner U, Mustari MJ, Glasauer S. Eye Velocity Gain Fields in MSTd During Optokinetic Stimulation. Cereb Cortex 2014; 25:2181-90. [PMID: 24557636 DOI: 10.1093/cercor/bhu024] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
Lesion studies argue for an involvement of the cortical dorsal medial superior temporal area (MSTd) in the control of optokinetic response (OKR) eye movements to planar visual stimulation. Neural recordings during OKR suggested that MSTd neurons directly encode stimulus velocity. On the other hand, studies using radial visual flow together with voluntary smooth pursuit eye movements showed that visual motion responses were modulated by eye movement-related signals. Here, we investigated neural responses in MSTd during continuous optokinetic stimulation using an information-theoretic approach for characterizing neural tuning with high resolution. We show that the majority of MSTd neurons exhibit gain-field-like tuning functions rather than directly encoding one variable. Neural responses showed a large diversity of tuning to combinations of retinal and extraretinal input. Eye velocity-related activity was observed prior to the actual eye movements, reflecting an efference copy. The observed tuning functions resembled those emerging in a network model trained to perform summation of 2 population-coded signals. Together, our findings support the hypothesis that MSTd implements the visuomotor transformation from retinal to head-centered stimulus velocity signals for the control of OKR.
Collapse
Affiliation(s)
- Lukas Brostek
- Clinical Neurosciences Bernstein Center for Computational Neuroscience, Munich 81377, Germany
| | - Ulrich Büttner
- Clinical Neurosciences German Vertigo Center IFB, Ludwig-Maximilians-Universität , Munich 81377, Germany
| | - Michael J Mustari
- Department of Ophthalmology and Washington National Primate Research Center, University of Washington, Seattle, WA 98195, USA
| | - Stefan Glasauer
- Clinical Neurosciences German Vertigo Center IFB, Ludwig-Maximilians-Universität , Munich 81377, Germany Bernstein Center for Computational Neuroscience, Munich 81377, Germany
| |
Collapse
|
37
|
Abstract
The brain must convert retinal coordinates into those required for directing an effector. One prominent theory holds that, through a combination of visual and motor/proprioceptive information, head-/body-centered representations are computed within the posterior parietal cortex (PPC). An alternative theory, supported by recent visual and saccade functional magnetic resonance imaging (fMRI) topographic mapping studies, suggests that PPC neurons provide a retinal/eye-centered coordinate system, in which the coding of a visual stimulus location and/or intended saccade endpoints should remain unaffected by changes in gaze position. To distinguish between a retinal/eye-centered and a head-/body-centered coordinate system, we measured how gaze direction affected the representation of visual space in the parietal cortex using fMRI. Subjects performed memory-guided saccades from a central starting point to locations “around the clock.” Starting points varied between left, central, and right gaze relative to the head-/body midline. We found that memory-guided saccadotopic maps throughout the PPC showed spatial reorganization with very subtle changes in starting gaze position, despite constant retinal input and eye movement metrics. Such a systematic shift is inconsistent with models arguing for a retinal/eye-centered coordinate system in the PPC, but it is consistent with head-/body-centered coordinate representations.
Collapse
Affiliation(s)
- Jason D. Connolly
- Faculty of Medical Sciences, Institute of Neuroscience, Newcastle University, Newcastle upon Tyne NE2 4HH, UK
- Current address: Wolfson Research Institute, University of Durham, Thornaby TS17 6BH, UK
- Current address: Department of Psychology, Durham University Science Site, Durham DH1 3LE, UK
| | - Quoc C. Vuong
- Faculty of Medical Sciences, Institute of Neuroscience, Newcastle University, Newcastle upon Tyne NE2 4HH, UK
| | - Alexander Thiele
- Faculty of Medical Sciences, Institute of Neuroscience, Newcastle University, Newcastle upon Tyne NE2 4HH, UK
| |
Collapse
|
38
|
Linking sensory neurons to visually guided behavior: relating MST activity to steering in a virtual environment. Vis Neurosci 2013; 30:315-30. [PMID: 24171813 PMCID: PMC9827659 DOI: 10.1017/s0952523813000412] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/12/2023]
Abstract
Many complex behaviors rely on guidance from sensations. To perform these behaviors, the motor system must decode information relevant to the task from the sensory system. However, identifying the neurons responsible for encoding the appropriate sensory information remains a difficult problem for neurophysiologists. A key step toward identifying candidate systems is finding neurons or groups of neurons capable of representing the stimuli adequately to support behavior. A traditional approach involves quantitatively measuring the performance of single neurons and comparing this to the performance of the animal. One of the strongest pieces of evidence in support of a neuronal population being involved in a behavioral task comes from the signals being sufficient to support behavior. Numerous experiments using perceptual decision tasks show that visual cortical neurons in many areas have this property. However, most visually guided behaviors are not categorical but continuous and dynamic. In this article, we review the concept of sufficiency and the tools used to measure neural and behavioral performance. We show how concepts from information theory can be used to measure the ongoing performance of both neurons and animal behavior. Finally, we apply these tools to dorsal medial superior temporal (MSTd) neurons and demonstrate that these neurons can represent stimuli important to navigation to a distant goal. We find that MSTd neurons represent ongoing steering error in a virtual-reality steering task. Although most individual neurons were insufficient to support the behavior, some very nearly matched the animal's estimation performance. These results are consistent with many results from perceptual experiments and in line with the predictions of Mountcastle's "lower envelope principle."
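The ongoing-performance measure the review describes can be sketched with a plug-in mutual-information estimate between a continuous task variable and spike counts. The bin counts, the synthetic linear tuning, and the estimator itself (which is biased for small samples) are all assumptions for illustration:

```python
# Plug-in mutual information between steering error and spike counts.
import numpy as np

rng = np.random.default_rng(3)
steering_error = rng.uniform(-30, 30, size=5000)   # deg, task variable
rate = 10 + 0.3 * steering_error                   # assumed linear tuning
spikes = rng.poisson(np.clip(rate, 0, None))       # counts per time bin

def mutual_info(x, y, bins=12):
    """Plug-in MI (bits) from a discretized joint histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

print(f"I(error; spikes) ~ {mutual_info(steering_error, spikes):.2f} bits")
```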
Collapse
|
39
|
Leclercq G, Blohm G, Lefèvre P. Accounting for direction and speed of eye motion in planning visually guided manual tracking. J Neurophysiol 2013; 110:1945-57. [DOI: 10.1152/jn.00130.2013] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/08/2023] Open
Abstract
Accurate motor planning in a dynamic environment is a critical skill for humans because we are often required to react quickly and adequately to the visual motion of objects. Moreover, we are often in motion ourselves, and this complicates motor planning. Indeed, the retinal and spatial motions of an object are different because of the retinal motion component induced by self-motion. Many studies have investigated motion perception during smooth pursuit and concluded that eye velocity is partially taken into account by the brain. Here we investigate whether the eye velocity during ongoing smooth pursuit is taken into account for the planning of visually guided manual tracking. We had 10 human participants manually track a target while in steady-state smooth pursuit toward another target such that the difference between the retinal and spatial target motion directions could be large, depending on both the direction and the speed of the eye. We used a measure of initial arm movement direction to quantify whether motor planning occurred in retinal coordinates (not accounting for eye motion) or was spatially correct (incorporating eye velocity). Results showed that the eye velocity was nearly fully taken into account by the neuronal areas involved in the visuomotor velocity transformation (between 75% and 102%). In particular, these neuronal pathways accounted for the nonlinear effects due to the relative velocity between the target and the eye. In conclusion, the brain network transforming visual motion into a motor plan for manual tracking adequately uses extraretinal signals about eye velocity.
Collapse
Affiliation(s)
- Guillaume Leclercq
- Institute of Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Institute of Neuroscience (IoNS), Université catholique de Louvain, Brussels, Belgium
| | - Gunnar Blohm
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; and
- Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada
| | - Philippe Lefèvre
- Institute of Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Institute of Neuroscience (IoNS), Université catholique de Louvain, Brussels, Belgium
| |
Collapse
|
40
|
Furlan M, Wann JP, Smith AT. A representation of changing heading direction in human cortical areas pVIP and CSv. Cereb Cortex 2013; 24:2848-58. [PMID: 23709643 DOI: 10.1093/cercor/bht132] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
When we move around in the environment, we continually change direction. Much work has examined how the brain extracts instantaneous direction of heading from optic flow but how changes in heading are encoded is unknown. Change could simply be inferred cognitively from successive instantaneous heading values, but we hypothesize that heading change is represented as a low-level signal that feeds into motor control with minimal need for attention or cognition. To test this, we first used functional MRI to measure activity in several predefined visual areas previously associated with processing optic flow (hMST, hV6, pVIP, and CSv) while participants viewed flow that simulated either constant heading or changing heading. We then trained a support vector machine (SVM) to distinguish the multivoxel activity pattern elicited by rightward versus leftward changes in heading direction. Some motion-sensitive visual cortical areas, including hMST, responded well to flow but did not appear to encode heading change. However, visual areas pVIP and, particularly, CSv responded with strong selectivity to changing flow and also allowed direction of heading change to be decoded. This suggests that these areas may construct a representation of heading change from instantaneous heading directions, permitting rapid and accurate preattentive detection and response to change.
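A toy stand-in for the decoding step: a linear support vector machine classifying leftward versus rightward heading change from multivoxel patterns, as in the study's SVM analysis. Voxel count, effect size, and the cross-validation scheme are assumptions:

```python
# Linear SVM decoding of a binary condition from synthetic voxel patterns.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n_trials, n_voxels = 80, 120
labels = np.repeat([0, 1], n_trials // 2)           # left vs. right change
signal = 0.4 * rng.standard_normal(n_voxels)        # fixed class pattern
X = rng.standard_normal((n_trials, n_voxels))
X[labels == 1] += signal                            # class-specific shift

clf = SVC(kernel="linear")
scores = cross_val_score(clf, X, labels, cv=5)      # stratified 5-fold CV
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```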
Collapse
Affiliation(s)
- Michele Furlan
- Department of Psychology, Royal Holloway, University of London, Egham, UK
| | - John P Wann
- Department of Psychology, Royal Holloway, University of London, Egham, UK
| | - Andrew T Smith
- Department of Psychology, Royal Holloway, University of London, Egham, UK
| |
Collapse
|
41
|
Duijnhouwer J, Noest AJ, Lankheet MJM, van den Berg AV, van Wezel RJA. Speed and direction response profiles of neurons in macaque MT and MST show modest constraint line tuning. Front Behav Neurosci 2013; 7:22. [PMID: 23576963 PMCID: PMC3616296 DOI: 10.3389/fnbeh.2013.00022] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2012] [Accepted: 03/05/2013] [Indexed: 11/13/2022] Open
Abstract
Several models of heading detection during smooth pursuit rely on the assumption of local constraint line tuning to exist in large scale motion detection templates. A motion detector that exhibits pure constraint line tuning responds maximally to any 2D-velocity in the set of vectors that can be decomposed into the central, or classic, preferred velocity (the shortest vector that still yields the maximum response) and any vector orthogonal to that. To test this assumption, we measured the firing rates of isolated middle temporal (MT) and medial superior temporal (MST) neurons to random dot stimuli moving in a range of directions and speeds. We found that as a function of 2D velocity, the pooled responses were best fit with a 2D Gaussian profile with a factor of elongation, orthogonal to the central preferred velocity, of roughly 1.5 for MST and 1.7 for MT. This means that MT and MST cells are more sharply tuned for speed than they are for direction; and that they indeed show some level of constraint line tuning. However, we argue that the observed elongation is insufficient to achieve behavioral heading discrimination accuracy on the order of 1-2 degrees as reported before.
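The descriptive model above, a 2D Gaussian over velocity space elongated orthogonally to the preferred velocity, can be written down directly; pure constraint-line tuning is the limit of infinite elongation. Parameter values here are illustrative only:

```python
# Elongated 2D Gaussian tuning over velocity space (modest constraint-line
# tuning for elongation factor k ~ 1.5-1.7, as reported in the abstract).
import numpy as np

def velocity_tuning(v, v_pref, sigma=2.0, k=1.7, r_max=1.0):
    """Response to 2D velocity v given preferred velocity v_pref."""
    v, v_pref = np.asarray(v, float), np.asarray(v_pref, float)
    u_par = v_pref / np.linalg.norm(v_pref)     # along preferred velocity
    u_orth = np.array([-u_par[1], u_par[0]])    # constraint-line direction
    d = v - v_pref
    return r_max * np.exp(-(d @ u_par) ** 2 / (2 * sigma**2)
                          - (d @ u_orth) ** 2 / (2 * (k * sigma) ** 2))

v_pref = np.array([8.0, 0.0])                   # deg/s, rightward
print(velocity_tuning([8, 4], v_pref))          # orthogonal offset: mild drop
print(velocity_tuning([12, 0], v_pref))         # parallel offset: larger drop
```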
Collapse
Affiliation(s)
- Jacob Duijnhouwer
- Center for Molecular and Behavioral Neuroscience, Rutgers University Newark, NJ, USA
| | | | | | | | | |
Collapse
|
42
|
Brostek L, Büttner U, Mustari MJ, Glasauer S. Neuronal variability of MSTd neurons changes differentially with eye movement and visually related variables. Cereb Cortex 2012; 23:1774-83. [PMID: 22772648 DOI: 10.1093/cercor/bhs146] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
Neurons in macaque cortical area MSTd are driven by visual motion and eye movement-related signals. This multimodal characteristic makes MSTd an ideal system for studying the dependence of neuronal activity on different variables. Here, we analyzed the temporal structure of spiking patterns during visual motion stimulation using 2 distinct behavioral paradigms: fixation (FIX) and optokinetic response. For the FIX condition, inter- and intra-trial variability of spiking activity decreased with increasing stimulus strength, consistent with a recent neurophysiological study reporting stimulus-related decline of neuronal variability. In contrast, for the optokinetic condition, variability increased together with increasing eye velocity while retinal image velocity remained low. Analysis of stimulus signal variability revealed a correlation between the normalized variance of image velocity and neuronal variability, but no correlation with normalized eye velocity variance. We further show that the observed difference in neuronal variability allows classifying spike trains according to the paradigm used, even when mean firing rates (FRs) were similar. The stimulus-dependence of neuronal variability may result from the local network structure and/or the variability characteristics of the input signals, but may also reflect additional timing-based mechanisms independent of the neuron's mean FR and related to the modality driving the neuron.
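A common summary of the across-trial variability analyzed here is the Fano factor. The sketch below contrasts Poisson-like counts with counts carrying trial-to-trial gain fluctuations; the synthetic statistics are assumptions, not the recorded MSTd data:

```python
# Fano factor comparison for two synthetic spike-count distributions.
import numpy as np

rng = np.random.default_rng(5)

def fano(counts):
    """Across-trial Fano factor: variance / mean (1.0 for a Poisson process)."""
    return counts.var(ddof=1) / counts.mean()

fix_counts = rng.poisson(20, size=200)              # fixation-like, Poisson
gain = rng.gamma(shape=10.0, scale=0.1, size=200)   # trial gain, mean 1
okr_counts = rng.poisson(20 * gain)                 # doubly stochastic counts
print(f"FF ~ {fano(fix_counts):.2f} vs {fano(okr_counts):.2f}")  # ~1 vs ~3
```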
Collapse
Affiliation(s)
- Lukas Brostek
- Clinical Neurosciences, Ludwig-Maximilians-University, Munich, Germany.
| | | | | | | |
Collapse
|
43
|
Modeling the influence of optic flow on grid cell firing in the absence of other cues. J Comput Neurosci 2012; 33:475-93. [PMID: 22555390 PMCID: PMC3484285 DOI: 10.1007/s10827-012-0396-6] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2011] [Revised: 03/30/2012] [Accepted: 04/03/2012] [Indexed: 11/17/2022]
Abstract
Information from the vestibular, sensorimotor, or visual systems can affect the firing of grid cells recorded in entorhinal cortex of rats. Optic flow provides information about the rat’s linear and rotational velocity and, thus, could influence the firing pattern of grid cells. To investigate this possible link, we model parts of the rat’s visual system and analyze their ability to estimate linear and rotational velocity. In our model a rat is simulated to move along trajectories recorded from rats foraging on a circular ground platform. Thus, we preserve the intrinsic statistics of real rats’ movements. Visual image motion is analytically computed for a spherical camera model and superimposed with noise in order to model the optic flow that would be available to the rat. This optic flow is fed into a template model to estimate the rat’s linear and rotational velocities, which in turn are fed into an oscillatory interference model of grid cell firing. Grid scores are reported while altering the flow noise, tilt angle of the optical axis with respect to the ground, the number of flow templates, and the frequency used in the oscillatory interference model. Activity patterns are compatible with those of grid cells, suggesting that optic flow can contribute to their firing.
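The model pipeline, velocity estimates from optic flow feeding an oscillatory interference grid model, can be caricatured in a few lines. Here noisy velocity samples stand in for the flow-template output, and the interference pattern is computed from path-integrated phases along three directions 60 degrees apart; the grid scale, noise levels, and trajectory statistics are assumptions:

```python
# Caricature: noisy flow-derived velocity -> path integration -> rectified
# three-band interference pattern (hexagonal, grid-cell-like firing).
import numpy as np

rng = np.random.default_rng(6)
dt, n_steps = 0.02, 5000
beta = 2 * np.pi / 0.5                             # assumed 0.5 m grid scale
dirs = np.stack([[np.cos(a), np.sin(a)]
                 for a in np.deg2rad([0, 60, 120])])

v_true = 0.15 * rng.standard_normal((n_steps, 2))  # random-walk velocity
v_est = v_true + 0.02 * rng.standard_normal((n_steps, 2))  # "flow" estimate

pos = np.cumsum(v_est * dt, axis=0)                # path integration
phases = beta * pos @ dirs.T                       # one phase per direction
rate = np.clip(np.prod(np.cos(phases), axis=1), 0, None)  # rectified product
print(rate.max(), (rate > 0.5).mean())             # grid-like firing on path
```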
Collapse
|
44
|
Furman M, Gur M. And yet it moves: Perceptual illusions and neural mechanisms of pursuit compensation during smooth pursuit eye movements. Neurosci Biobehav Rev 2012; 36:143-51. [DOI: 10.1016/j.neubiorev.2011.05.005] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2010] [Revised: 05/02/2011] [Accepted: 05/11/2011] [Indexed: 10/18/2022]
|
45
|
DeAngelis G, Angelaki D. Visual–Vestibular Integration for Self-Motion Perception. Front Neurosci 2011. [DOI: 10.1201/b11092-39] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022] Open
|
46
|
|
47
|
Spering M, Schütz AC, Braun DI, Gegenfurtner KR. Keep your eyes on the ball: smooth pursuit eye movements enhance prediction of visual motion. J Neurophysiol 2011; 105:1756-67. [DOI: 10.1152/jn.00344.2010] [Citation(s) in RCA: 74] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Success of motor behavior often depends on the ability to predict the path of moving objects. Here we asked whether tracking a visual object with smooth pursuit eye movements helps to predict its motion direction. We developed a paradigm, “eye soccer,” in which observers had to either track or fixate a visual target (ball) and judge whether it would have hit or missed a stationary vertical line segment (goal). Ball and goal were presented briefly for 100–500 ms and disappeared from the screen together before the perceptual judgment was prompted. In pursuit conditions, the ball moved towards the goal; in fixation conditions, the goal moved towards the stationary ball, resulting in similar retinal stimulation during pursuit and fixation. We also tested the condition in which the goal was fixated and the ball moved. Motion direction prediction was significantly better in pursuit than in fixation trials, regardless of whether ball or goal served as fixation target. In both fixation and pursuit trials, prediction performance was better when eye movements were accurate. Performance also increased with shorter ball-goal distance and longer presentation duration. A longer trajectory did not affect performance. During pursuit, an efference copy signal might provide additional motion information, leading to the advantage in motion prediction.
Collapse
Affiliation(s)
- Miriam Spering
- Department of Psychology, Experimental Psychology, Justus-Liebig University, Giessen, Germany; and
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada
| | - Alexander C. Schütz
- Department of Psychology, Experimental Psychology, Justus-Liebig University, Giessen, Germany; and
| | - Doris I. Braun
- Department of Psychology, Experimental Psychology, Justus-Liebig University, Giessen, Germany; and
| | - Karl R. Gegenfurtner
- Department of Psychology, Experimental Psychology, Justus-Liebig University, Giessen, Germany; and
| |
Collapse
|
48
|
Inaba N, Miura K, Kawano K. Direction and speed tuning to visual motion in cortical areas MT and MSTd during smooth pursuit eye movements. J Neurophysiol 2011; 105:1531-45. [PMID: 21273314 DOI: 10.1152/jn.00511.2010] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
When tracking a moving target in the natural world with pursuit eye movement, our visual system must compensate for the self-induced retinal slip of the visual features in the background to enable us to perceive their actual motion. We previously reported that the speed of the background stimulus in space is represented by dorsal medial superior temporal (MSTd) neurons in the monkey cortex, which compensate for retinal image motion resulting from eye movements when the direction of the pursuit and background motion are parallel to the preferred direction of each neuron. To further characterize the compensation observed in the MSTd responses to the background motion, we recorded single unit activities in cortical areas middle temporal (MT) and MSTd, and we selected neurons responsive to a large-field visual stimulus. We studied their responses to the large-field stimulus in the background while monkeys pursued a moving target and while they fixated a stationary target. We investigated whether compensation for retinal image motion of the background depended on the speed of pursuit. We also asked whether the directional selectivity of each neuron in relation to the external world remained the same even during pursuit and whether compensation for retinal image motion occurred irrespective of the direction of the pursuit. We found that the majority of the MSTd neurons responded to the visual motion in space by compensating for the image motion on the retina resulting from the pursuit regardless of pursuit speed and direction, whereas most of the MT neurons responded in relation to the genuine retinal image motion.
Collapse
Affiliation(s)
- Naoko Inaba
- Department of Integrative Brain Science, Graduate School of Medicine, Kyoto University, Japan
| | - Kenichiro Miura
- Department of Integrative Brain Science, Graduate School of Medicine, Kyoto University, Japan
| | - Kenji Kawano
- Department of Integrative Brain Science, Graduate School of Medicine, Kyoto University, Japan
| |
Collapse
|
49
|
Lee B, Pesaran B, Andersen RA. Area MSTd neurons encode visual stimuli in eye coordinates during fixation and pursuit. J Neurophysiol 2010; 105:60-8. [PMID: 20980545 DOI: 10.1152/jn.00495.2009] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Visual signals generated by self-motion are initially represented in retinal coordinates in the early parts of the visual system. Because this information can be used by an observer to navigate through the environment, it must be transformed into body or world coordinates at later stations of the visual-motor pathway. Neurons in the dorsal aspect of the medial superior temporal area (MSTd) are tuned to the focus of expansion (FOE) of the visual image. We performed experiments to determine whether focus tuning curves in area MSTd are represented in eye coordinates or in screen coordinates (which could be head, body, or world-centered in the head-fixed paradigm used). Because MSTd neurons adjust their FOE tuning curves during pursuit eye movements to compensate for changes in pursuit and translation speed that distort the visual image, the coordinate frame was determined while the eyes were stationary (fixed gaze or simulated pursuit conditions) and while the eyes were moving (real pursuit condition). We recorded extracellular responses from 80 MSTd neurons in two rhesus monkeys (Macaca mulatta). We found that the FOE tuning curves of the overwhelming majority of neurons were aligned in an eye-centered coordinate frame in each of the experimental conditions [fixed gaze: 77/80 (96%); real pursuit: 77/80 (96%); simulated pursuit 74/80 (93%); t-test, P < 0.05]. These results indicate that MSTd neurons represent heading in an eye-centered coordinate frame both when the eyes are stationary and when they are moving. We also found that area MSTd demonstrates significant eye position gain modulation of response fields much like its posterior parietal neighbors.
Collapse
Affiliation(s)
- Brian Lee
- Division of Biology, California Institute of Technology, Pasadena, CA 91125, USA
| | | | | |
Collapse
|
50
|
Cortical neurons combine visual cues about self-movement. Exp Brain Res 2010; 206:283-97. [PMID: 20852992 DOI: 10.1007/s00221-010-2406-0] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2010] [Accepted: 08/25/2010] [Indexed: 10/19/2022]
Abstract
Visual cues about self-movement are derived from the patterns of optic flow and the relative motion of discrete objects. We recorded dorsal medial superior temporal (MSTd) cortical neurons in monkeys that held centered visual fixation while viewing optic flow and object motion stimuli simulating the self-movement cues seen during translation on a circular path. Twenty stimulus configurations presented naturalistic combinations of optic flow with superimposed objects that simulated either earth-fixed landmark objects or independently moving animate objects. Landmarks and animate objects yielded the same response interactions with optic flow: mainly additive effects, with a substantial number of sub- and super-additive responses. Sub- and super-additive interactions reflect each neuron's local and global motion sensitivities: Local motion sensitivity is based on the spatial arrangement of directions created by object motion and the surrounding optic flow. Global motion sensitivity is based on the temporal sequence of self-movement headings that define a simulated path through the environment. We conclude that MST neurons' spatio-temporal response properties combine object motion and optic flow cues to represent self-movement in diverse, naturalistic circumstances.
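One simple way to quantify the additive versus sub-/super-additive interactions reported here is an additivity index comparing the combined response with the sum of the component responses above baseline. A hypothetical worked example; the index definition and the numbers are illustrative, not the paper's metric:

```python
# Additivity index: 1.0 -> additive, <1 sub-additive, >1 super-additive.
def additivity_index(r_combined, r_flow, r_object, r_baseline):
    """Compare the combined response to the additive prediction,
    with all responses measured relative to baseline firing."""
    predicted = r_flow + r_object - r_baseline   # additive prediction
    return (r_combined - r_baseline) / (predicted - r_baseline)

# Example rates in spikes/s: flow alone 30, object alone 22, baseline 10.
print(additivity_index(r_combined=42.0, r_flow=30.0,
                       r_object=22.0, r_baseline=10.0))   # 1.0 -> additive
```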
Collapse
|