51
Rao HM, San Juan J, Shen FY, Villa JE, Rafie KS, Sommer MA. Neural Network Evidence for the Coupling of Presaccadic Visual Remapping to Predictive Eye Position Updating. Front Comput Neurosci 2016; 10:52. PMID: 27313528; PMCID: PMC4889583; DOI: 10.3389/fncom.2016.00052.
Abstract
As we look around a scene, we perceive it as continuous and stable even though each saccadic eye movement changes the visual input to the retinas. How the brain achieves this perceptual stabilization is unknown, but a major hypothesis is that it relies on presaccadic remapping, a process in which neurons shift their visual sensitivity to a new location in the scene just before each saccade. This hypothesis is difficult to test in vivo because complete, selective inactivation of remapping is currently intractable. We tested it in silico with a hierarchical, sheet-based neural network model of the visual and oculomotor system. The model generated saccadic commands to move a video camera abruptly. Visual input from the camera and internal copies of the saccadic movement commands, or corollary discharge, converged at a map-level simulation of the frontal eye field (FEF), a primate brain area known to receive such inputs. FEF output was combined with eye position signals to yield a suitable coordinate frame for guiding arm movements of a robot. Our operational definition of perceptual stability was "useful stability," quantified as continuously accurate pointing to a visual object despite camera saccades. During training, the emergence of useful stability was correlated tightly with the emergence of presaccadic remapping in the FEF. Remapping depended on corollary discharge but its timing was synchronized to the updating of eye position. When coupled to predictive eye position signals, remapping served to stabilize the target representation for continuously accurate pointing. Graded inactivations of pathways in the model replicated, and helped to interpret, previous in vivo experiments. The results support the hypothesis that visual stability requires presaccadic remapping, provide explanations for the function and timing of remapping, and offer testable hypotheses for in vivo studies. 
We conclude that remapping allows for seamless coordinate frame transformations and quick actions despite visual afferent lags. With visual remapping in place for behavior, it may be exploited for perceptual continuity.
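The core operation the abstract describes, shifting a retinotopic map of visual activity by the corollary discharge vector before the eye moves, can be caricatured in a few lines. This is only an illustrative sketch, not the authors' sheet-based network; the map size, target location, and saccade vector below are invented.

```python
import numpy as np

def remap(activity_map, cd_vector):
    """Shift a 2-D retinotopic activity map by the saccade vector (dx, dy)
    carried by corollary discharge. A saccade of (dx, dy) displaces the
    retinal image by (-dx, -dy), so remapping applies that shift in advance."""
    dx, dy = cd_vector
    return np.roll(activity_map, shift=(-dy, -dx), axis=(0, 1))

# A single target at retinal location (row=5, col=6) on a 10x10 map.
pre = np.zeros((10, 10))
pre[5, 6] = 1.0

# Predicted map for a rightward 3-pixel saccade: the target representation
# moves to its expected postsaccadic retinal location (row=5, col=3).
post = remap(pre, (3, 0))
```

The point of the sketch is that the shifted map is available before the movement, so a downstream readout (here, the model's pointing system) never has to wait for postsaccadic visual input.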
Affiliation(s)
- Hrishikesh M Rao
- Department of Biomedical Engineering, Pratt School of Engineering, Duke University, Durham, NC, USA
- Juan San Juan
- Department of Biomedical Engineering, Pratt School of Engineering, Duke University, Durham, NC, USA
- Fred Y Shen
- Department of Biomedical Engineering, Pratt School of Engineering, Duke University, Durham, NC, USA
- Jennifer E Villa
- Department of Biomedical Engineering, Pratt School of Engineering, Duke University, Durham, NC, USA
- Kimia S Rafie
- Department of Biomedical Engineering, Pratt School of Engineering, Duke University, Durham, NC, USA
- Marc A Sommer
- Department of Biomedical Engineering, Pratt School of Engineering, Duke University, Durham, NC, USA; Department of Neurobiology, Duke School of Medicine, Duke University, Durham, NC, USA; Center for Cognitive Neuroscience, Duke University, Durham, NC, USA
52
Abstract
Perception of external objects involves sensory acquisition via the relevant sensory organs. A widely accepted assumption is that the sensory organ is the first station in a serial chain of processing circuits leading to an internal circuit in which a percept emerges. This open-loop scheme, in which the interaction between the sensory organ and the environment is not affected by concurrent downstream neuronal processing, is strongly challenged by behavioral and anatomical data. We present here a hypothesis in which the perception of external objects is a closed-loop dynamical process encompassing loops that integrate the organism and its environment and converge towards organism-environment steady states. We discuss the consistency of closed-loop perception (CLP) with empirical data and show that it can be synthesized in a robotic setup. Testable predictions are proposed for empirically distinguishing between open- and closed-loop schemes of perception.
Affiliation(s)
- Ehud Ahissar
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
- Eldad Assa
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
53
Abstract
Saccadic eye movements direct the high-resolution foveae of our retinas toward objects of interest. With each saccade, the image jumps on the retina, causing a discontinuity in visual input. Our visual perception, however, remains stable. Philosophers and scientists over centuries have proposed that visual stability depends upon an internal neuronal signal that is a copy of the neuronal signal driving the eye movement, now referred to as a corollary discharge (CD) or efference copy. In the Old World monkey, such a CD circuit for saccades has been identified extending from the superior colliculus through MD thalamus to frontal cortex, but there is little evidence that this circuit actually contributes to visual perception. We tested the influence of this CD circuit on visual perception by first training macaque monkeys to report their perceived eye direction, and then reversibly inactivating the CD as it passes through the thalamus. We found that the monkey's perception changed: during CD inactivation, there was a difference between where the monkey perceived its eyes to be directed and where they were actually directed. Perception and saccade were decoupled. We established that the perceived eye direction at the end of the saccade was not derived from proprioceptive input from the eye muscles, and was not altered by contextual visual information. We conclude that the CD provides internal information contributing to the brain's creation of perceived visual stability. More specifically, the CD might provide the internal saccade vector used to unite separate retinal images into a stable visual scene.

SIGNIFICANCE STATEMENT: Visual stability is one of the most remarkable aspects of human vision. The eyes move rapidly several times per second, displacing the retinal image each time. The brain compensates for this disruption, keeping our visual perception stable. A major hypothesis explaining this stability invokes a signal within the brain, a corollary discharge, that informs visual regions of the brain when and where the eyes are about to move. Such a corollary discharge circuit for eye movements has been identified in the macaque monkey. We now show that selectively inactivating this brain circuit alters the monkey's visual perception. We conclude that this corollary discharge provides a critical signal that can be used to unite jumping retinal images into a consistent visual scene.
54
Zhou Y, Liu Y, Lu H, Wu S, Zhang M. Neuronal representation of saccadic error in macaque posterior parietal cortex (PPC). eLife 2016; 5. PMID: 27097103; PMCID: PMC4865368; DOI: 10.7554/eLife.10912.
Abstract
Motor control, motor learning, self-recognition, and spatial perception all critically depend on the comparison of motor intention with the actually executed movement. Although the brainstem-cerebellum is known to play an important role in motor error detection and motor learning, the involvement of the neocortex remains largely unclear. Here, we report the neuronal computation and representation of saccadic error in macaque posterior parietal cortex (PPC). Neurons with persistent pre- and post-saccadic responses (PPS) represent the intended end-position of the saccade; neurons with late post-saccadic responses (LPS) represent its actual end-position. Remarkably, after the arrival of the LPS signal, the PPS neurons' activity becomes highly correlated with the discrepancy between the intended and actual end-positions, and with the probability of making a secondary (corrective) saccade. This neuronal computation might therefore underlie the formation of saccadic error signals in PPC, speeding up saccadic learning and driving the generation of secondary saccades.
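The comparison described in this abstract, an intended end-position checked against the actual one, with the discrepancy driving corrective saccades, can be written down directly. A hedged sketch: the logistic mapping and its slope/threshold parameters are invented for illustration, not taken from the paper.

```python
import numpy as np

def saccadic_error(intended_xy, actual_xy):
    """Discrepancy between the intended (PPS-like) and actual (LPS-like)
    saccade end-positions, in degrees."""
    return np.subtract(intended_xy, actual_xy)

def p_corrective(error_xy, slope=2.0, threshold=1.0):
    """Map error magnitude to a corrective-saccade probability via a
    logistic function (slope and threshold are made-up parameters)."""
    e = float(np.linalg.norm(error_xy))
    return 1.0 / (1.0 + np.exp(-slope * (e - threshold)))

err = saccadic_error((10.0, 0.0), (8.5, 0.5))   # a hypometric saccade
prob = p_corrective(err)                        # larger error -> correction more likely
```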
Affiliation(s)
- Yang Zhou
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China; Institute of Neuroscience, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai, China; University of Chinese Academy of Sciences, Shanghai, China
- Yining Liu
- The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- Haidong Lu
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Si Wu
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Mingsha Zhang
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
55
Abstract
Sleep spindles are brief cortical oscillations at 10–15 Hz that occur predominantly during non-REM (quiet) sleep in adult mammals and are thought to contribute to learning and memory. Spindle bursts are phenomenologically similar to sleep spindles, but they occur predominantly in early infancy and are triggered by peripheral sensory activity (e.g., by retinal waves); accordingly, spindle bursts are thought to organize neural networks in the developing brain and establish functional links with the sensory periphery. Whereas the spontaneous retinal waves that trigger spindle bursts in visual cortex are a transient feature of early development, the myoclonic twitches that drive spindle bursts in sensorimotor cortex persist into adulthood. Moreover, twitches—and their associated spindle bursts—occur exclusively during REM (active) sleep. Curiously, despite the persistence of twitching into adulthood, twitch-related spindle bursts have not been reported in adult sensorimotor cortex. This raises the question of whether such spindle burst activity does not occur in adulthood or, alternatively, occurs but has yet to be discovered. If twitch-related spindle bursts do occur in adults, they could contribute to the calibration, maintenance, and repair of sensorimotor systems.
56
Morris AP, Bremmer F, Krekelberg B. The Dorsal Visual System Predicts Future and Remembers Past Eye Position. Front Syst Neurosci 2016; 10:9. PMID: 26941617; PMCID: PMC4764714; DOI: 10.3389/fnsys.2016.00009.
Abstract
Eye movements are essential to primate vision but introduce potentially disruptive displacements of the retinal image. To maintain stable vision, the brain is thought to rely on neurons that carry both visual signals and information about the current direction of gaze in their firing rates. We have shown previously that these neurons provide an accurate representation of eye position during fixation, but whether they are updated fast enough during saccadic eye movements to support real-time vision remains controversial. Here we show that not only do these neurons carry a fast and accurate eye-position signal, but also that they support in parallel a range of time-lagged variants, including predictive and postdictive signals. We recorded extracellular activity in four areas of the macaque dorsal visual cortex during a saccade task, including the lateral and ventral intraparietal areas (LIP, VIP), and the middle temporal (MT) and medial superior temporal (MST) areas. As reported previously, neurons showed tonic eye-position-related activity during fixation. In addition, they showed a variety of transient changes in activity around the time of saccades, including relative suppression, enhancement, and pre-saccadic bursts for one saccade direction over another. We show that a hypothetical neuron that pools this rich population activity through a weighted sum can produce an output that mimics the true spatiotemporal dynamics of the eye. Further, with different pooling weights, this downstream eye position signal (EPS) could be updated long before (<100 ms) or after (<200 ms) an eye movement. The results suggest a flexible coding scheme in which downstream computations have access to past, current, and future eye positions simultaneously, providing a basis for visual stability and delay-free visually guided behavior.
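The pooling idea in this abstract, a weighted sum over a population whose members carry differently time-shifted eye-position information, reduces to a tiny simulation. The temporal offsets and one-hot readout weights below are invented for illustration; real readout weights would be distributed across many neurons.

```python
import numpy as np

# Each model neuron's rate is a copy of eye position at its own temporal
# offset; a downstream unit pooling the population with different weights
# can read out a past, current, or future eye position.
t = np.arange(200)                          # time (ms)
eye = np.where(t < 100, 0.0, 10.0)          # 10 deg saccade at t = 100 ms

offsets = np.array([-80, -40, 0, 40, 80])   # ms; positive = neuron leads the eye
rates = np.array([np.interp(t + off, t, eye) for off in offsets])

def readout(weights):
    """Downstream eye-position signal (EPS): weighted sum of population rates."""
    return weights @ rates

current = readout(np.array([0.0, 0.0, 1.0, 0.0, 0.0]))     # tracks the eye
predictive = readout(np.array([0.0, 0.0, 0.0, 0.0, 1.0]))  # ~80 ms early
```

With mixed (non-one-hot) weights, the same population supports a continuum of lags, which is the flexibility the abstract argues for.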
Collapse
Affiliation(s)
- Adam P Morris
- Neuroscience Program, Department of Physiology, Biomedicine Discovery Institute, Monash University, Clayton, VIC, Australia
- Frank Bremmer
- Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany
- Bart Krekelberg
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ, USA
57
Bergelt J, Hamker FH. Suppression of displacement detection in the presence and absence of eye movements: a neuro-computational perspective. Biol Cybern 2016; 110:81-89. PMID: 26733211; DOI: 10.1007/s00422-015-0677-z.
Abstract
Understanding the subjective experience of a visually stable world during eye movements has been an important research topic for many years, and various studies have been conducted to reveal the fundamental mechanisms of this phenomenon. For example, in the saccadic suppression of displacement (SSD) paradigm, it has been observed that a small displacement of a saccade target cannot easily be reported if the displacement takes place during a saccade. New results from Zimmermann et al. (J Neurophysiol 112(12):3066-3076, 2014) show that this obliviousness to small displacements occurs not only during saccades, but also when a mask is introduced while the target is displaced. We address the question of how neurons in the parietal cortex may be connected to each other to account for the SSD effect in experiments involving a saccade, and equally well in the absence of an eye movement when perception is disrupted by a mask.
Affiliation(s)
- Julia Bergelt
- Artificial Intelligence, Computer Science, Chemnitz University of Technology, Chemnitz, Germany
- Fred H Hamker
- Artificial Intelligence, Computer Science, Chemnitz University of Technology, Chemnitz, Germany
58
Bohlen MO, Chen LL. A noninvasive electromagnetic perturbation approach to probe extraocular proprioception. J AAPOS 2016; 20:12-8. PMID: 26917065; DOI: 10.1016/j.jaapos.2015.10.019.
Abstract
BACKGROUND: Extraocular proprioception has been shown to participate in spatial perception and binocular alignment. Yet the physiological approaches used to study this sensory signal are limited because proprioceptive signaling takes place at the same time as visuomotor signaling. It is critical to dissociate this sensory signal from the other visuomotor events that accompany eye movements.

METHODS: We present a novel, noninvasive, and quantifiable method for probing extraocular proprioception independent of other visuomotor processing, by attaching a rare-earth magnet to a real-time model eye and placing an electromagnet <20 mm from the eye. The electromagnet can increase or decrease the angular displacements and velocities of the model eye.

RESULTS: Electromagnetic activation rapidly (<2 ms) affected the rotation kinematics of the eye, and these effects correlated linearly with both the current supplied and the distance of the electromagnet from the eye.

CONCLUSIONS: This method circumvents the constraints of conventional physiological manipulations of extraocular proprioception, such as manually or mechanically tugging on the eyeball. It can be applied to produce a discrepancy between intended and executed eye movements, so that proprioceptive reafference signals are dissociated from corollary motor discharges and other visuomotor events.
Affiliation(s)
- Martin O Bohlen
- Program in Neuroscience, University of Mississippi Medical Center, Jackson, Mississippi
- Lewis L Chen
- Departments of Otolaryngology and Communicative Sciences, Neurobiology and Anatomical Sciences, Neurology, and Ophthalmology, University of Mississippi Medical Center, Jackson, Mississippi
59
Ego C, Yüksel D, Orban de Xivry JJ, Lefèvre P. Development of internal models and predictive abilities for visual tracking during childhood. J Neurophysiol 2016; 115:301-9. PMID: 26510757; PMCID: PMC4760460; DOI: 10.1152/jn.00534.2015.
Abstract
The prediction of the consequences of our own actions through internal models is an essential component of motor control. Previous studies showed improvement of anticipatory behaviors with age for grasping, drawing, and postural control. Since these actions require visual and proprioceptive feedback, those improvements might reflect the development of both internal models and feedback control. In contrast, visual tracking of a temporarily invisible target provides specific markers of prediction and internal models for eye movements. We therefore recorded eye movements in 50 children (aged 5-19 yr) and in 10 adults, who were asked to pursue a visual target that was temporarily blanked. Results show that the youngest children (5-7 yr) exhibit overall oculomotor behavior in this task that is qualitatively similar to that observed in adults. However, older subjects performed much better than the youngest children in terms of accuracy at target reappearance and variability of behavior. This late maturation of predictive mechanisms was reflected in the development, with age, of the accuracy of the internal models governing the synergy between the saccadic and pursuit systems. Altogether, we hypothesize that the maturation of the interaction between smooth pursuit and saccades, which relies on internal models of eye and target displacement, is related to the continuing maturation of the cerebellum.
Affiliation(s)
- Caroline Ego
- Institute of Information and Communication Technologies, Electronics and Applied Mathematics, Université Catholique de Louvain, Louvain-la-Neuve, Belgium; Institute of Neuroscience, Université Catholique de Louvain, Brussels, Belgium
- Demet Yüksel
- Institute of Neuroscience, Université Catholique de Louvain, Brussels, Belgium; Ophthalmology Department, Cliniques Universitaires Saint-Luc, Brussels, Belgium
- Jean-Jacques Orban de Xivry
- Department of Kinesiology, Movement Control and Neuroplasticity Research Group, Katholieke Universiteit Leuven, Leuven, Belgium
- Philippe Lefèvre
- Institute of Information and Communication Technologies, Electronics and Applied Mathematics, Université Catholique de Louvain, Louvain-la-Neuve, Belgium; Institute of Neuroscience, Université Catholique de Louvain, Brussels, Belgium
60
Jayet Bray LC, Bansal S, Joiner WM. Quantifying the spatial extent of the corollary discharge benefit to transsaccadic visual perception. J Neurophysiol 2015; 115:1132-45. PMID: 26683070; DOI: 10.1152/jn.00657.2015.
Abstract
Extraretinal information, such as corollary discharge (CD), is hypothesized to help compensate for saccade-induced disruptions of visual input. Support for this hypothesis, however, comes largely from one-dimensional transsaccadic visual changes, with little comprehensive information on its spatial characteristics. Here we systematically mapped the two-dimensional extent of this compensation by quantifying the insensitivity to different displacement metrics. Human subjects made saccades to targets positioned at different amplitudes (4° or 8°) and directions (rightward, oblique, or upward). After the saccade, the initial target disappeared and, after a blank period, reappeared at a shifted location (a collinear, diagonal, or orthogonal displacement). Subjects reported the perceived shift direction, and from these perceptual judgments we determined displacement detection thresholds. The two-dimensional insensitivity fields derived from these thresholds had spatial features similar to the variability of the saccadic eye movements themselves: they 1) scaled with movement amplitude, 2) were oriented (less sensitive to change) along the saccade vector, and 3) were approximately constant in shape when normalized by movement amplitude. In addition, comparing the postsaccadic perceptual estimate of the presaccadic target location to an estimate based solely on the postsaccadic visual error showed that, overall, the perceptual estimate was approximately 50% more accurate and 35% less variable than estimates based solely on this visual information. However, this relationship was not uniform: the benefit of extraretinal information was observed largely for displacements with a component parallel to the saccade vector. These results suggest a graded use of extraretinal information when forming the postsaccadic perceptual evaluation of transsaccadic environmental changes.
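The reported accuracy and variability advantage is the kind of benefit one would expect if an extraretinal (CD-based) estimate were fused with the visual error by reliability. A minimal inverse-variance-weighting sketch, with invented noise levels; it does not attempt to reproduce the paper's ~50%/~35% figures or their anisotropy.

```python
import numpy as np

rng = np.random.default_rng(0)
true_loc = 0.0                        # presaccadic target location (deg)
sd_vis, sd_cd = 1.0, 1.5              # invented cue noise levels (deg)

vis = true_loc + sd_vis * rng.standard_normal(100_000)
cd = true_loc + sd_cd * rng.standard_normal(100_000)

# Inverse-variance weighting: each cue is weighted by the other's variance.
w_vis = sd_cd**2 / (sd_vis**2 + sd_cd**2)
fused = w_vis * vis + (1.0 - w_vis) * cd

# The fused SD approaches sqrt(sd_vis^2 * sd_cd^2 / (sd_vis^2 + sd_cd^2)),
# lower than either cue alone.
```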
Affiliation(s)
- Sonia Bansal
- Department of Neuroscience, George Mason University, Fairfax, Virginia
- Wilsaan M Joiner
- Department of Bioengineering, George Mason University, Fairfax, Virginia; Department of Neuroscience, George Mason University, Fairfax, Virginia; Krasnow Institute for Advanced Study, Sensorimotor Integration Laboratory, George Mason University, Fairfax, Virginia
61
Troncoso XG, McCamy MB, Jazi AN, Cui J, Otero-Millan J, Macknik SL, Costela FM, Martinez-Conde S. V1 neurons respond differently to object motion versus motion from eye movements. Nat Commun 2015; 6:8114. PMID: 26370518; PMCID: PMC4579399; DOI: 10.1038/ncomms9114.
Abstract
How does the visual system differentiate self-generated motion from motion in the external world? Humans can discern object motion from identical retinal image displacements induced by eye movements, but the brain mechanisms underlying this ability are unknown. Here we exploit the frequent production of microsaccades during ocular fixation in the primate to compare primary visual cortical responses to self-generated motion (real microsaccades) versus motion in the external world (object motion mimicking microsaccades). Real and simulated microsaccades were randomly interleaved in the same viewing condition, thereby producing equivalent oculomotor and behavioural engagement. Our results show that real microsaccades generate biphasic neural responses, consisting of a rapid increase in the firing rate followed by a slow and smaller-amplitude suppression that drops below baseline. Simulated microsaccades generate solely excitatory responses. These findings indicate that V1 neurons can respond differently to internally and externally generated motion, and expand V1's potential role in information processing and visual stability during eye movements.

A key question in neuroscience is understanding how the brain distinguishes self-generated motion from motion in the external world. Here the authors demonstrate that the response of primary visual cortical neurons to a moving stimulus depends on whether the motion was self- or externally generated.
Affiliation(s)
- Xoana G Troncoso
- Barrow Neurological Institute, 350 W Thomas Road, Phoenix, Arizona 85013, USA; UNIC-CNRS (Unité de Neuroscience Information et Complexité, Centre National de la Recherche Scientifique), 1 Avenue de la Terrasse, 91198 Gif-sur-Yvette, France
- Michael B McCamy
- Barrow Neurological Institute, 350 W Thomas Road, Phoenix, Arizona 85013, USA
- Ali Najafian Jazi
- Barrow Neurological Institute, 350 W Thomas Road, Phoenix, Arizona 85013, USA; Program in Neuroscience, Arizona State University, PO Box 874601, Tempe, Arizona 85287, USA
- Jie Cui
- Barrow Neurological Institute, 350 W Thomas Road, Phoenix, Arizona 85013, USA
- Jorge Otero-Millan
- Barrow Neurological Institute, 350 W Thomas Road, Phoenix, Arizona 85013, USA; Department of Neurology, Johns Hopkins University, 600 N Wolfe Street, Baltimore, Maryland 21287, USA
- Stephen L Macknik
- Barrow Neurological Institute, 350 W Thomas Road, Phoenix, Arizona 85013, USA; State University of New York (SUNY) Downstate Medical Center, 450 Clarkson Avenue, Brooklyn, New York 11203, USA
- Francisco M Costela
- Barrow Neurological Institute, 350 W Thomas Road, Phoenix, Arizona 85013, USA; Program in Neuroscience, Arizona State University, PO Box 874601, Tempe, Arizona 85287, USA
- Susana Martinez-Conde
- Barrow Neurological Institute, 350 W Thomas Road, Phoenix, Arizona 85013, USA; State University of New York (SUNY) Downstate Medical Center, 450 Clarkson Avenue, Brooklyn, New York 11203, USA
62
Brain control and information transfer. Exp Brain Res 2015; 233:3335-47. DOI: 10.1007/s00221-015-4423-5.
63
Caminiti R, Innocenti GM, Battaglia-Mayer A. Organization and evolution of parieto-frontal processing streams in macaque monkeys and humans. Neurosci Biobehav Rev 2015; 56:73-96. PMID: 26112130; DOI: 10.1016/j.neubiorev.2015.06.014.
Abstract
The functional organization of the parieto-frontal system is crucial for understanding cognitive-motor behavior and provides the basis for interpreting the consequences of parietal lesions in humans from a neurobiological perspective. Parieto-frontal connectivity defines several main information streams that, rather than being devoted to restricted functions, underlie a rich behavioral repertoire. Surprisingly, from macaques to humans, evolution has added only a few new functional streams, while increasing their complexity and encoding power. In fact, the characterization of the conduction times from parietal and frontal areas to different target structures has recently opened a new window on cortical dynamics, suggesting that evolution has amplified the probability of dynamic interactions between the nodes of the network, thanks to communication patterns based on temporally dispersed conduction delays. This might allow the representation of sensory-motor signals within multiple neural assemblies and reference frames, so as to optimize sensory-motor remapping within an action space characterized by different and more complex demands across evolution.
Affiliation(s)
- Roberto Caminiti
- Department of Physiology and Pharmacology, University of Rome SAPIENZA, P.le Aldo Moro 5, 00185 Rome, Italy.
- Giorgio M Innocenti
- Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden; Brain and Mind Institute, Federal Institute of Technology (EPFL), Lausanne, Switzerland
- Alexandra Battaglia-Mayer
- Department of Physiology and Pharmacology, University of Rome SAPIENZA, P.le Aldo Moro 5, 00185 Rome, Italy
64
Daemi M, Crawford JD. A kinematic model for 3-D head-free gaze-shifts. Front Comput Neurosci 2015; 9:72. PMID: 26113816; PMCID: PMC4461827; DOI: 10.3389/fncom.2015.00072.
Abstract
Rotations of the line of sight are mainly implemented by coordinated motion of the eyes and head. Here, we propose a model for the kinematics of three-dimensional (3-D) head-unrestrained gaze-shifts. The model was designed to account for major principles in the known behavior, such as gaze accuracy, spatiotemporal coordination of saccades with vestibulo-ocular reflex (VOR), relative eye and head contributions, the non-commutativity of rotations, and Listing's and Fick constraints for the eyes and head, respectively. The internal design of the model was inspired by known and hypothesized elements of gaze control physiology. Inputs included retinocentric location of the visual target and internal representations of initial 3-D eye and head orientation, whereas outputs were 3-D displacements of eye relative to the head and head relative to shoulder. Internal transformations decomposed the 2-D gaze command into 3-D eye and head commands with the use of three coordinated circuits: (1) a saccade generator, (2) a head rotation generator, (3) a VOR predictor. Simulations illustrate that the model can implement: (1) the correct 3-D reference frame transformations to generate accurate gaze shifts (despite variability in other parameters), (2) the experimentally verified constraints on static eye and head orientations during fixation, and (3) the experimentally observed 3-D trajectories of eye and head motion during gaze-shifts. We then use this model to simulate how 2-D eye-head coordination strategies interact with 3-D constraints to influence 3-D orientations of the eye-in-space, and the implications of this for spatial vision.
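One of the constraints this model must respect, the non-commutativity of 3-D rotations, is easy to demonstrate numerically. This is only an illustration of the constraint, not the model itself; the angles and initial gaze direction are arbitrary.

```python
import numpy as np

def rot_x(a):
    """Rotation matrix about the x-axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_z(a):
    """Rotation matrix about the z-axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

gaze = np.array([0.0, 1.0, 0.0])   # arbitrary initial gaze direction
a = np.deg2rad(30.0)

# The same two rotations applied in opposite orders leave the "eye"
# pointing in different directions, which is why 3-D eye and head
# commands cannot simply be added as vectors.
xz = rot_x(a) @ rot_z(a) @ gaze
zx = rot_z(a) @ rot_x(a) @ gaze
```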
Affiliation(s)
- Mehdi Daemi
- Department of Biology and Neuroscience Graduate Diploma, York University, Toronto, ON, Canada; Centre for Vision Research, York University, Toronto, ON, Canada; CAN-ACT NSERC CREATE Program, Toronto, ON, Canada; Canadian Action and Perception Network, Toronto, ON, Canada
- J Douglas Crawford
- Department of Biology and Neuroscience Graduate Diploma, York University, Toronto, ON, Canada; Centre for Vision Research, York University, Toronto, ON, Canada; CAN-ACT NSERC CREATE Program, Toronto, ON, Canada; Canadian Action and Perception Network, Toronto, ON, Canada; Department of Psychology, York University, Toronto, ON, Canada; School of Kinesiology and Health Sciences, York University, Toronto, ON, Canada; Brain in Action NSERC CREATE/DFG IRTG Program, Canada/Germany
Collapse
|
65
|
Bansal S, Jayet Bray LC, Peterson MS, Joiner WM. The effect of saccade metrics on the corollary discharge contribution to perceived eye location. J Neurophysiol 2015; 113:3312-22. [PMID: 25761955 DOI: 10.1152/jn.00771.2014] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Received: 10/02/2014] [Accepted: 03/10/2015] [Indexed: 11/22/2022]
Abstract
Corollary discharge (CD) is hypothesized to provide the movement information (direction and amplitude) required to compensate for the saccade-induced disruptions to visual input. Here, we investigated to what extent these conveyed metrics influence perceptual stability in human subjects with a target-displacement detection task. Subjects made saccades to targets located at different amplitudes (4°, 6°, or 8°) and directions (horizontal or vertical). During the saccade, the target disappeared and then reappeared at a location shifted either in the same direction as the movement vector or opposite to it. Subjects reported the direction of the target displacement, and from these reports we determined the perceptual threshold for shift detection and an estimate of target location. Our results indicate that thresholds generally scaled with saccade amplitude across both movement directions. Additionally, subjects on average produced hypometric saccades with an estimated CD gain <1. Finally, we examined the contribution of two error signals to perceptual performance: the saccade error (movement-to-movement variability in saccade amplitude) and the visual error (distance between the fovea and the shifted target location). Perceptual judgment was not influenced by fluctuations in movement amplitude, and performance was largely the same across movement directions for different magnitudes of visual error. Importantly, subjects reported the correct direction of target displacement above chance level for very small visual errors (<0.75°), even when these errors were opposite to the target-shift direction. Collectively, these results suggest that the CD-based compensatory mechanisms for visual disruptions are highly accurate and comparable for saccades with different metrics.
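The role of an underestimating CD gain can be sketched with a toy computation (the linear model, the gain value, and the simplifying assumption that the saccade lands exactly on the original target location are all illustrative, not the paper's fitted model):

```python
def perceived_shift(target_ecc, shift, cd_gain=0.9):
    """Perceived target displacement (deg) after a saccade of amplitude
    target_ecc that lands on the original target location (assumption).

    The reappeared target's retinal eccentricity equals the true shift;
    adding the CD-based eye-position estimate and subtracting the
    remembered pre-saccadic target location gives the percept."""
    retinal_error = shift                  # where the target lands on the retina
    est_eye_pos = cd_gain * target_ecc     # CD conveys only cd_gain of the move
    return est_eye_pos + retinal_error - target_ecc

# With cd_gain < 1 the percept is biased opposite to the saccade by
# (1 - cd_gain) * amplitude, and the bias grows with saccade amplitude:
assert abs(perceived_shift(8.0, 0.0) - (-0.8)) < 1e-9
assert perceived_shift(4.0, 0.0) > perceived_shift(8.0, 0.0)
```

In this sketch an unshifted target (shift = 0) would be misperceived as having jumped backward, which is why the estimated CD gain and the displacement-detection thresholds have to be measured jointly.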
Affiliation(s)
- Sonia Bansal
- Department of Neuroscience, George Mason University, Fairfax, Virginia
- Matthew S Peterson
- Department of Neuroscience, George Mason University, Fairfax, Virginia; Department of Psychology, George Mason University, Fairfax, Virginia
- Wilsaan M Joiner
- Department of Neuroscience, George Mason University, Fairfax, Virginia; Department of Bioengineering, George Mason University, Fairfax, Virginia
66
Murdison TS, Leclercq G, Lefèvre P, Blohm G. Computations underlying the visuomotor transformation for smooth pursuit eye movements. J Neurophysiol 2015; 113:1377-99. [PMID: 25475344 DOI: 10.1152/jn.00273.2014] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Indexed: 11/22/2022]
Abstract
Smooth pursuit eye movements are driven by retinal motion and enable us to view moving targets with high acuity. Complicating the generation of these movements is the fact that different eye and head rotations can produce different retinal stimuli while giving rise to identical smooth pursuit trajectories. However, because our eyes accurately pursue targets regardless of eye and head orientation (Blohm G, Lefèvre P. J Neurophysiol 104: 2103-2115, 2010), the brain must somehow take these signals into account. To learn about the neural mechanisms potentially underlying this visual-to-motor transformation, we trained a physiologically inspired neural network model to combine two-dimensional (2D) retinal motion signals with three-dimensional (3D) eye and head orientation and velocity signals to generate a spatially correct 3D pursuit command. We then simulated conditions of 1) head roll-induced ocular counterroll, 2) oblique gaze-induced retinal rotations, 3) eccentric gazes (invoking the half-angle rule), and 4) optokinetic nystagmus to investigate how units in the intermediate layers of the network accounted for different 3D constraints. Simultaneously, we simulated electrophysiological recordings (visual and motor tunings) and microstimulation experiments to quantify the reference frames of signals at each processing stage. We found a gradual retinal-to-intermediate-to-spatial feedforward transformation through the hidden layers. Our model is the first to describe the general 3D transformation for smooth pursuit mediated by eye- and head-dependent gain modulation. Based on several testable experimental predictions, our model provides a mechanism by which the brain could perform the 3D visuomotor transformation for smooth pursuit.
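The core reference-frame problem the network solves can be sketched in two dimensions: a torsional offset between retinal and spatial coordinates must be undone using an internal estimate of eye-in-space orientation (the counterroll gain value and the purely rotational treatment are simplifying assumptions for illustration):

```python
import numpy as np

def rot2d(theta_deg):
    """2-D rotation matrix (counterclockwise positive)."""
    t = np.radians(theta_deg)
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

# Rightward target motion in space (deg/s).
spatial_motion = np.array([10.0, 0.0])

# A 30 deg head roll with an assumed counterroll gain of 0.2 yields ~6 deg
# of compensatory ocular counterroll, leaving the retina rotated ~24 deg
# relative to space; the retinal image of the motion is rotated accordingly.
head_roll, counterroll_gain = 30.0, 0.2
residual_torsion = head_roll * (1 - counterroll_gain)
retinal_motion = rot2d(-residual_torsion) @ spatial_motion

# A spatially correct pursuit command must undo that rotation using an
# internal estimate of 3-D eye-in-space orientation.
pursuit_command = rot2d(residual_torsion) @ retinal_motion
assert np.allclose(pursuit_command, spatial_motion)
```

A purely retinal controller would drive the eyes along `retinal_motion` and miss the target direction by the residual torsion angle; the network's eye- and head-dependent gain modulation is one way to implement the corrective rotation implicitly.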
Affiliation(s)
- T Scott Murdison
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN)
- Guillaume Leclercq
- ICTEAM Institute and Institute of Neuroscience (IoNS), Université catholique de Louvain, Louvain-La-Neuve, Belgium
- Philippe Lefèvre
- ICTEAM Institute and Institute of Neuroscience (IoNS), Université catholique de Louvain, Louvain-La-Neuve, Belgium
- Gunnar Blohm
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN)
67
67
|
A role for mixed corollary discharge and proprioceptive signals in predicting the sensory consequences of movements. J Neurosci 2015; 34:16103-16. [PMID: 25429151 DOI: 10.1523/jneurosci.2751-14.2014] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Indexed: 11/21/2022]
Abstract
Animals must distinguish behaviorally relevant patterns of sensory stimulation from those that are attributable to their own movements. In principle, this distinction could be made based on internal signals related to motor commands, known as corollary discharge (CD), sensory feedback, or some combination of both. Here we use an advantageous model system--the electrosensory lobe (ELL) of weakly electric mormyrid fish--to directly examine how CD and proprioceptive feedback signals are transformed into negative images of the predictable electrosensory consequences of the fish's motor commands and/or movements. In vivo recordings from ELL neurons and theoretical modeling suggest that negative images are formed via anti-Hebbian plasticity acting on random, nonlinear mixtures of CD and proprioception. In support of this, we find that CD and proprioception are randomly mixed in spinal mossy fibers and that properties of granule cells are consistent with a nonlinear recoding of these signals. The mechanistic account provided here may be relevant to understanding how internal models of movement consequences are implemented in other systems in which similar components (e.g., mixed sensory and motor signals and synaptic plasticity) are found.
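The proposed mechanism, anti-Hebbian plasticity acting on random nonlinear mixtures of CD and proprioception, can be sketched with a toy model (the rectifying granule-cell nonlinearity, the form of the self-generated signal, and all parameters here are illustrative assumptions, not the study's fitted circuit):

```python
import numpy as np

rng = np.random.default_rng(0)
n_granule, n_trials = 200, 20000

# Granule-like units: random nonlinear (rectified) mixtures of a corollary
# discharge signal and a proprioceptive signal.
W_mix = rng.normal(size=(n_granule, 2))
b = rng.normal(size=n_granule)

def granule(cd, prop):
    return np.maximum(0.0, W_mix @ np.array([cd, prop]) + b)

# Predictable electrosensory consequence of the fish's own command/movement
# (an arbitrary smooth function, assumed for illustration).
def self_generated(cd, prop):
    return 2.0 * cd - 1.5 * prop + 0.8 * cd * prop

# Anti-Hebbian plasticity: granule-to-output weights change opposite to the
# correlation of presynaptic input and postsynaptic response, sculpting a
# "negative image" that cancels the predictable input.
w = np.zeros(n_granule)
lr = 5e-4
for _ in range(n_trials):
    cd, prop = rng.uniform(-1, 1, size=2)
    g = granule(cd, prop)
    out = self_generated(cd, prop) + w @ g   # sensory input + negative image
    w -= lr * out * g                        # anti-Hebbian update

# After learning, the response to self-generated input is strongly reduced.
probes = rng.uniform(-1, 1, size=(200, 2))
baseline = np.mean([abs(self_generated(c, p)) for c, p in probes])
residual = np.mean([abs(self_generated(c, p) + w @ granule(c, p))
                    for c, p in probes])
assert residual < 0.5 * baseline
```

The nonlinear mixing step matters: because `self_generated` contains a multiplicative CD-proprioception interaction, a purely linear basis of the two raw signals could not cancel it, which is the computational argument for granule-cell recoding.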
68
Abstract
In strabismus, potentially either eye can inform the brain about the location of a target so that an accurate saccade can be made. Sixteen human subjects with alternating exotropia were tested dichoptically while viewing stimuli on a tangent screen. Each trial began with a fixation cross visible to only one eye. After the subject fixated the cross, a peripheral target visible to only one eye flashed briefly. The subject's task was to look at it. As a rule, the eye to which the target was presented was the eye that acquired the target. However, when stimuli were presented in the far nasal visual field, subjects occasionally performed a "crossover" saccade by placing the other eye on the target. This strategy avoided the need to make a large adducting saccade. In such cases, information about target location was obtained by one eye and used to program a saccade for the other eye, with a corresponding latency increase. In 10/16 subjects, targets were presented on some trials to both eyes. Binocular sensory maps were also compiled to delineate the portions of the visual scene perceived with each eye. These maps were compared with subjects' pattern of eye choice for target acquisition. There was a correspondence between suppression scotoma maps and the eye used to acquire peripheral targets. In other words, targets were fixated by the eye used to perceive them. These studies reveal how patients with alternating strabismus, despite eye misalignment, manage to localize and capture visual targets in their environment.
69
Odoj B, Balslev D. Role of Oculoproprioception in Coding the Locus of Attention. J Cogn Neurosci 2015; 28:517-28. [DOI: 10.1162/jocn_a_00910] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Indexed: 11/04/2022]
Abstract
The most common neural representations for spatial attention encode locations retinotopically, relative to center of gaze. To keep track of visual objects across saccades or to orient toward sounds, retinotopic representations must be combined with information about the rotation of one's own eyes in the orbits. Although gaze input is critical for a correct allocation of attention, the source of this input has so far remained unidentified. Two main signals are available: corollary discharge (copy of oculomotor command) and oculoproprioception (feedback from extraocular muscles). Here we asked whether the oculoproprioceptive signal relayed from the somatosensory cortex contributes to coding the locus of attention. We used continuous theta burst stimulation (cTBS) over a human oculoproprioceptive area in the postcentral gyrus (S1EYE). S1EYE-cTBS reduces proprioceptive processing, causing ∼1° underestimation of gaze angle. Participants discriminated visual targets whose location was cued in a nonvisual modality. Throughout the visual space, S1EYE-cTBS shifted the locus of attention away from the cue by ∼1°, in the same direction and by the same magnitude as the oculoproprioceptive bias. This systematic shift cannot be attributed to visual mislocalization. Accuracy of open-loop pointing to the same visual targets, a function thought to rely mainly on the corollary discharge, was unchanged. We argue that oculoproprioception is selective for attention maps. By identifying a potential substrate for the coupling between eye and attention, this study contributes to the theoretical models for spatial attention.
70
Boisgontier MP, Van Halewyck F, Corporaal SHA, Willacker L, Van Den Bergh V, Beets IAM, Levin O, Swinnen SP. Vision of the active limb impairs bimanual motor tracking in young and older adults. Front Aging Neurosci 2014; 6:320. [PMID: 25452727 PMCID: PMC4233931 DOI: 10.3389/fnagi.2014.00320] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.4] [Received: 07/04/2014] [Accepted: 10/30/2014] [Indexed: 12/01/2022]
Abstract
Despite the intensive investigation of bimanual coordination, it remains unclear how directing vision toward either limb influences performance, and whether this influence is affected by age. To examine these questions, we assessed the performance of young and older adults on a bimanual tracking task in which they matched motor-driven movements of their right hand (passive limb) with their left hand (active limb) according to in-phase and anti-phase patterns. Performance in six visual conditions involving central and/or peripheral vision of the active and/or passive limb was compared to performance in a no vision (NV) condition. Results indicated that directing central vision to the active limb consistently impaired performance, with greater impairment in older than in young adults. Conversely, directing central vision to the passive limb improved performance in young adults, but less consistently in older adults. In conditions involving central vision of one limb and peripheral vision of the other limb, effects were similar to those for conditions involving central vision of one limb only. Peripheral vision alone resulted in performance similar to or worse than that in the NV condition. These results indicate that the locus of visual attention is critical for bimanual motor control in young and older adults, with older adults being either more impaired by or less able to benefit from a given visual condition.
Affiliation(s)
- Matthieu P. Boisgontier
- Movement Control and Neuroplasticity Research Group, Biomedical Sciences Group, Department of Kinesiology, KU Leuven, Leuven, Belgium
- Florian Van Halewyck
- Movement Control and Neuroplasticity Research Group, Biomedical Sciences Group, Department of Kinesiology, KU Leuven, Leuven, Belgium
- Sharissa H. A. Corporaal
- Movement Control and Neuroplasticity Research Group, Biomedical Sciences Group, Department of Kinesiology, KU Leuven, Leuven, Belgium
- Lina Willacker
- Movement Control and Neuroplasticity Research Group, Biomedical Sciences Group, Department of Kinesiology, KU Leuven, Leuven, Belgium
- Veerle Van Den Bergh
- Movement Control and Neuroplasticity Research Group, Biomedical Sciences Group, Department of Kinesiology, KU Leuven, Leuven, Belgium
- Iseult A. M. Beets
- Movement Control and Neuroplasticity Research Group, Biomedical Sciences Group, Department of Kinesiology, KU Leuven, Leuven, Belgium
- Oron Levin
- Movement Control and Neuroplasticity Research Group, Biomedical Sciences Group, Department of Kinesiology, KU Leuven, Leuven, Belgium
- Stephan P. Swinnen
- Movement Control and Neuroplasticity Research Group, Biomedical Sciences Group, Department of Kinesiology, KU Leuven, Leuven, Belgium
- Leuven Research Institute for Neuroscience and Disease, KU Leuven, Leuven, Belgium
71
Functional magnetic resonance imaging of sensorimotor transformations in saccades and antisaccades. Neuroimage 2014; 102 Pt 2:848-60. [DOI: 10.1016/j.neuroimage.2014.08.033] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.1] [Received: 04/05/2014] [Revised: 07/10/2014] [Accepted: 08/20/2014] [Indexed: 11/17/2022]
72
Dhindsa K, Drobinin V, King J, Hall GB, Burgess N, Becker S. Examining the role of the temporo-parietal network in memory, imagery, and viewpoint transformations. Front Hum Neurosci 2014; 8:709. [PMID: 25278860 PMCID: PMC4165350 DOI: 10.3389/fnhum.2014.00709] [Citation(s) in RCA: 35] [Impact Index Per Article: 3.5] [Received: 05/02/2014] [Accepted: 08/25/2014] [Indexed: 11/13/2022]
Abstract
The traditional view of the medial temporal lobe (MTL) focuses on its role in episodic memory. However, some of the underlying functions of the MTL can be ascertained from its wider role in supporting spatial cognition in concert with parietal and prefrontal regions. The MTL is strongly implicated in the formation of enduring allocentric representations (e.g., O'Keefe, 1976; King et al., 2002; Ekstrom et al., 2003). According to our BBB model (Byrne et al., 2007), these representations must interact with head-centered and body-centered representations in posterior parietal cortex via a transformation circuit involving retrosplenial areas. Egocentric sensory representations in parietal areas can then cue the recall of allocentric spatial representations in long-term memory and, conversely, the products of retrieval in MTL can generate mental imagery within a parietal “window.” Such imagery is necessarily egocentric and forms part of visuospatial working memory, in which it can be manipulated for the purpose of planning/imagining the future. Recent fMRI evidence (Lambrey et al., 2012; Zhang et al., 2012) supports the BBB model. To further test the model, we had participants learn the locations of objects in a virtual scene and tested their spatial memory under conditions that impose varying demands on the transformation circuit. We analyzed how brain activity correlated with accuracy in judging the direction of an object (1) from visuospatial working memory (we assume transient working memory due to the order of tasks and the absence of change in viewpoint, but long-term memory retrieval is also possible), (2) after a rotation of viewpoint, or (3) after a rotation and translation of viewpoint (judgment of relative direction). We found performance-related activity in both tasks requiring viewpoint rotation (ROT and JRD, i.e., conditions 2 and 3) in the core medial temporal to medial parietal circuit identified by the BBB model. 
These results are consistent with the predictions of the BBB model, and shed further light on the neural mechanisms underlying spatial memory, mental imagery and viewpoint transformations.
Affiliation(s)
- Kiret Dhindsa
- School of Computational Science and Engineering, McMaster University, Hamilton, ON, Canada; Neurotechnology and Neuroplasticity Lab, Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, ON, Canada
- Vladislav Drobinin
- Neurotechnology and Neuroplasticity Lab, Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, ON, Canada
- John King
- Psychology and Language Sciences, University College London, London, UK
- Geoffrey B Hall
- Neurotechnology and Neuroplasticity Lab, Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, ON, Canada
- Neil Burgess
- Institute of Cognitive Neuroscience, University College London, London, UK
- Suzanna Becker
- Neurotechnology and Neuroplasticity Lab, Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, ON, Canada
73
Szpiro SFA, Spering M, Carrasco M. Perceptual learning modifies untrained pursuit eye movements. J Vis 2014; 14:8. [PMID: 25002412 DOI: 10.1167/14.8.8] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6] [Indexed: 01/08/2023]
Abstract
Perceptual learning improves detection and discrimination of relevant visual information in mature humans, revealing sensory plasticity. Whether visual perceptual learning affects motor responses is unknown. Here we implemented a protocol that enabled us to address this question. We tested a perceptual response (motion direction estimation, in which observers overestimate motion direction away from a reference) and a motor response (voluntary smooth pursuit eye movements). Perceptual training led to greater overestimation and, remarkably, it modified untrained smooth pursuit. In contrast, pursuit training did not affect overestimation in either pursuit or perception, even though observers in both training groups were exposed to the same stimuli for the same time period. A second experiment revealed that estimation training also improved discrimination, indicating that overestimation may optimize perceptual sensitivity. Hence, active perceptual training is necessary to alter perceptual responses, and an acquired change in perception suffices to modify pursuit, a motor response.
Affiliation(s)
- Sarit F A Szpiro
- Department of Psychology, New York University, New York, NY, USA
- Miriam Spering
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada; Brain Research Centre, University of British Columbia, Vancouver, Canada
- Marisa Carrasco
- Department of Psychology, New York University, New York, NY, USA; Center for Neural Science, New York University, New York, NY, USA
74
Strappini F, Pitzalis S, Snyder AZ, McAvoy MP, Sereno MI, Corbetta M, Shulman GL. Eye position modulates retinotopic responses in early visual areas: a bias for the straight-ahead direction. Brain Struct Funct 2014; 220:2587-601. [PMID: 24942135 PMCID: PMC4549389 DOI: 10.1007/s00429-014-0808-7] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.9] [Received: 12/11/2013] [Accepted: 05/21/2014] [Indexed: 11/30/2022]
Abstract
Even though the eyes constantly change position, the location of a stimulus can be accurately represented by a population of neurons with retinotopic receptive fields modulated by eye position gain fields. Recent electrophysiological studies, however, indicate that eye position gain fields may serve an additional function since they have a non-uniform spatial distribution that increases the neural response to stimuli in the straight-ahead direction. We used functional magnetic resonance imaging and a wide-field stimulus display to determine whether gaze modulations in early human visual cortex enhance the blood-oxygenation-level dependent (BOLD) response to stimuli that are straight-ahead. Subjects viewed rotating polar angle wedge stimuli centered straight-ahead or vertically displaced by ±20° eccentricity. Gaze position did not affect the topography of polar phase-angle maps, confirming that coding was retinotopic, but did affect the amplitude of the BOLD response, consistent with a gain field. In agreement with recent electrophysiological studies, BOLD responses in V1 and V2 to a wedge stimulus at a fixed retinal locus decreased when the wedge location in head-centered coordinates was farther from the straight-ahead direction. We conclude that stimulus-evoked BOLD signals are modulated by a systematic, non-uniform distribution of eye-position gain fields.
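The gain-field account tested here can be sketched as retinotopic tuning multiplied by an eye-position-dependent gain that peaks for straight-ahead gaze (the linear gain falloff and its slope are illustrative assumptions, not values fitted to the BOLD data):

```python
import numpy as np

def gain_field(eye_pos_deg, slope=0.01):
    """Eye-position gain: maximal for straight-ahead gaze.
    The linear falloff and slope are assumptions for illustration."""
    return 1.0 - slope * abs(eye_pos_deg)

def response(stim_retinal, pref_retinal, eye_pos_deg, sigma=5.0):
    """Retinotopic Gaussian tuning multiplied by an eye-position gain.
    The tuning peak stays fixed on the retina; only the response
    amplitude is modulated by where the eyes point."""
    tuning = np.exp(-(stim_retinal - pref_retinal) ** 2 / (2 * sigma ** 2))
    return gain_field(eye_pos_deg) * tuning

# Same retinal stimulus, eyes straight ahead vs. displaced 20 deg:
r_straight = response(stim_retinal=0.0, pref_retinal=0.0, eye_pos_deg=0.0)
r_displaced = response(stim_retinal=0.0, pref_retinal=0.0, eye_pos_deg=20.0)
assert r_straight > r_displaced   # amplitude drops away from straight ahead

# Pure gain modulation: the preferred retinal location is unchanged,
# so retinotopic map topography is preserved at any gaze position.
assert response(3.0, 0.0, 20.0) < response(0.0, 0.0, 20.0)
```

This captures the study's dissociation: gaze position leaves the polar-angle map topography intact (the Gaussian's peak) while scaling BOLD amplitude (the gain term) with distance from straight ahead.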
Affiliation(s)
- Francesca Strappini
- Department of Neurology, Washington University School of Medicine, Saint Louis, MO, 63110, USA
75
Abstract
Understanding how the brain computes eye position is essential to unraveling high-level visual functions such as eye movement planning, coordinate transformations and stability of spatial awareness. The lateral intraparietal area (LIP) is essential for this process. However, despite decades of research, its contribution to the eye position signal remains controversial. LIP neurons have recently been reported to inaccurately represent eye position during a saccadic eye movement, and to be too slow to support a role in high-level visual functions. We addressed this issue by predicting eye position and saccade direction from the responses of populations of LIP neurons. We found that both signals were accurately predicted before, during and after a saccade. Also, the dynamics of these signals support their contribution to visual functions. These findings provide a principled understanding of the coding of information in populations of neurons within an important node of the cortical network for visual-motor behaviors. DOI: http://dx.doi.org/10.7554/eLife.02813.001
Whenever we reach towards an object, we automatically use visual information to guide our movements and make any adjustments required. Visual feedback helps us to learn new motor skills, and ensures that our physical view of the world remains stable despite the fact that every eye movement causes the image on the retina to shift dramatically. However, such visual feedback is only useful because it can be compared with information on the position of the eyes, which is stored by the brain at all times. It is thought that one important structure where information on eye position is stored is an area towards the back of the brain called the lateral intraparietal cortex, but the exact contribution of this region has long been controversial. Graf and Andersen have now clarified the role of this area by studying monkeys as they performed an eye-movement task. Rhesus monkeys were trained to fixate on a particular location on a grid. A visual target was then flashed up briefly in another location and, after a short delay, the monkeys moved their eyes to the new location to earn a reward. As the monkeys performed the task, a group of electrodes recorded signals from multiple neurons within the lateral intraparietal cortex. This meant that Graf and Andersen could compare the neuronal responses of populations of neurons before, during, and after the movement. By studying neural populations, it was possible to accurately predict the direction in which a monkey was about to move his eyes, and also the initial and final eye positions. After a movement had occurred, the neurons also signaled the direction in which the monkey's eyes had been facing beforehand. Thus, the lateral intraparietal area stores both retrospective and forward-looking information about eye position and movement. The work of Graf and Andersen confirms that the LIP has a central role in eye movement functions, and also contributes more generally to our understanding of how behaviors are encoded at the level of populations of neurons. Such information could ultimately aid the development of neural prostheses to help patients with paralysis resulting from injury or neurodegeneration. DOI: http://dx.doi.org/10.7554/eLife.02813.002
Affiliation(s)
- Arnulf Ba Graf
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, United States
- Richard A Andersen
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, United States
76
Dell'Osso LF, Orge FH, Jacobs JB, Wang ZI. Fusion maldevelopment (latent/manifest latent) nystagmus syndrome: effects of four-muscle tenotomy and reattachment. J Pediatr Ophthalmol Strabismus 2014; 51:180-8. [PMID: 24694546 DOI: 10.3928/01913913-20140326-01] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Received: 12/10/2013] [Accepted: 01/16/2014] [Indexed: 11/20/2022]
Abstract
PURPOSE: To examine the waveform and clinical effects of the four-muscle tenotomy and reattachment procedure in fusion maldevelopment nystagmus syndrome (FMNS) and to compare them to those documented in infantile nystagmus syndrome (INS) and acquired nystagmus.
METHODS: Both infrared reflection and high-speed digital video systems were used to record the eye movements of a patient with FMNS before and after tenotomy and reattachment. Data were analyzed using the eXpanded Nystagmus Acuity Function (NAFX) that is part of the OMtools software. Model simulations and predictions were performed using the authors' behavioral ocular motor system model in MATLAB Simulink (The MathWorks, Inc., Natick, MA).
RESULTS: The model predicted, and the patient's data confirmed, that the tenotomy and reattachment procedure produces improvements in FMN waveforms across a broader field of gaze and decreases the Alexander's law variation. The patient's post-surgery plots of NAFX versus gaze angle were higher and had a shallower slope than before surgery. Clinically, despite moderate improvements in both peak measured acuity and stereoacuity, dramatic improvements in the patient's abilities and lifestyle resulted.
CONCLUSIONS: The four-muscle tenotomy and reattachment surgery produced beneficial therapeutic effects on FMN waveforms similar to those demonstrated in INS and acquired nystagmus. These results support the authors' prior recommendation that tenotomy and reattachment be added to required strabismus procedures in patients who also have FMNS (i.e., performing tenotomy and reattachment on all unoperated muscles in the plane of the nystagmus). Furthermore, when strabismus surgery is not required, four-muscle tenotomy and reattachment may be used to improve FMN waveforms and visual function.
77
Guipponi O, Odouard S, Pinède S, Wardak C, Ben Hamed S. fMRI Cortical Correlates of Spontaneous Eye Blinks in the Nonhuman Primate. Cereb Cortex 2014; 25:2333-45. [PMID: 24654257 DOI: 10.1093/cercor/bhu038] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.1] [Indexed: 11/13/2022]
Abstract
Eyeblinks are defined as a rapid closing and opening of the eyelid. Three types of blinks are distinguished: spontaneous, reflexive, and voluntary. Here, we focus on the cortical correlates of spontaneous blinks, using functional magnetic resonance imaging (fMRI) in the nonhuman primate. Our observations reveal an ensemble of cortical regions processing the somatosensory, proprioceptive, peripheral visual, and possibly nociceptive consequences of blinks. These observations indicate that spontaneous blinks have consequences on the brain beyond the visual cortex, possibly contaminating fMRI protocols in which participants produce heterogeneous blink behavior. This is especially the case when these protocols induce (not unusual) eye fatigue and corneal dryness due to demanding fixation requirements, as is the case here. Importantly, no blink-related activations were observed in the prefrontal and parietal blink motor command areas, nor in the prefrontal, parietal, and medial temporal blink suppression areas. This indicates that the absence of activation in these areas is not a signature of the absence of blink contamination in the data. While these observations increase our understanding of the neural bases of spontaneous blinks, they also strongly call for new criteria to identify whether fMRI recordings are contaminated by heterogeneous blink behavior or not.
Affiliation(s)
- Olivier Guipponi
- Centre de Neuroscience Cognitive, CNRS UMR 5229, Université Claude Bernard Lyon I, 69675 Bron Cedex, France
- Soline Odouard
- Centre de Neuroscience Cognitive, CNRS UMR 5229, Université Claude Bernard Lyon I, 69675 Bron Cedex, France
- Serge Pinède
- Centre de Neuroscience Cognitive, CNRS UMR 5229, Université Claude Bernard Lyon I, 69675 Bron Cedex, France
- Claire Wardak
- Centre de Neuroscience Cognitive, CNRS UMR 5229, Université Claude Bernard Lyon I, 69675 Bron Cedex, France
- Suliann Ben Hamed
- Centre de Neuroscience Cognitive, CNRS UMR 5229, Université Claude Bernard Lyon I, 69675 Bron Cedex, France
78
Ziesche A, Hamker FH. Brain circuits underlying visual stability across eye movements-converging evidence for a neuro-computational model of area LIP. Front Comput Neurosci 2014; 8:25. [PMID: 24653691 PMCID: PMC3949326 DOI: 10.3389/fncom.2014.00025] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.7] [Received: 10/07/2013] [Accepted: 02/14/2014] [Indexed: 11/13/2022]
Abstract
The understanding of the subjective experience of a visually stable world despite the occurrence of an observer's eye movements has been the focus of extensive research for over 20 years. These studies have revealed fundamental mechanisms such as anticipatory receptive field (RF) shifts and the saccadic suppression of stimulus displacements, yet there currently exists no single explanatory framework for these observations. We show that a previously presented neuro-computational model of peri-saccadic mislocalization accounts for the phenomenon of predictive remapping and for the observation of saccadic suppression of displacement (SSD). This converging evidence allows us to identify the potential ingredients of perceptual stability that generalize beyond different data sets in a formal physiology-based model. In particular we propose that predictive remapping stabilizes the visual world across saccades by introducing a feedback loop and, as an emergent result, small displacements of stimuli are not noticed by the visual system. The model provides a link from neural dynamics, to neural mechanism and finally to behavior, and thus offers a testable comprehensive framework of visual stability.
Affiliation(s)
- Arnold Ziesche
- Artificial Intelligence, Computer Science, Chemnitz University of Technology Chemnitz, Germany ; Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster Muenster, Germany
- Fred H Hamker
- Artificial Intelligence, Computer Science, Chemnitz University of Technology Chemnitz, Germany
79
Abstract
The human somatosensory cortex (S1) is not among the brain areas usually associated with visuospatial attention. However, such a function can be presumed, given the recently identified eye proprioceptive input to S1 and the established links between gaze and attention. Here we investigated a rare patient with a focal lesion of the right postcentral gyrus that interferes with the processing of eye proprioception without affecting the ability to locate visual objects relative to her body or to execute eye movements. As a behavioral measure of spatial attention, we recorded fixation time during visual search and reaction time for visual discrimination in lateral displays. In contrast to a group of age-matched controls, the patient showed a gradient in looking time and in visual sensitivity toward the midline. Because an attention bias in the opposite direction, toward the ipsilesional space, occurs in patients with spatial neglect, in a second study, we asked whether the incidental coinjury of S1 together with the neglect-typical perisylvian lesion leads to a milder neglect. A voxelwise lesion behavior mapping analysis of a group of right-hemisphere stroke patients supported this hypothesis. The effect of an isolated S1 lesion on visual exploration and visual sensitivity as well as the modulatory role of S1 in spatial neglect suggest a role of this area in visuospatial attention. We hypothesize that the proprioceptive gaze signal in S1, although playing only a minor role in locating visual objects relative to the body, affects the allocation of attention in the visual space.
80
Chen X, DeAngelis GC, Angelaki DE. Diverse spatial reference frames of vestibular signals in parietal cortex. Neuron 2013; 80:1310-21. [PMID: 24239126 DOI: 10.1016/j.neuron.2013.09.006]
Abstract
Reference frames are important for understanding how sensory cues from different modalities are coordinated to guide behavior, and the parietal cortex is critical to these functions. We compare reference frames of vestibular self-motion signals in the ventral intraparietal area (VIP), parietoinsular vestibular cortex (PIVC), and dorsal medial superior temporal area (MSTd). Vestibular heading tuning in VIP is invariant to changes in both eye and head positions, indicating a body (or world)-centered reference frame. Vestibular signals in PIVC have reference frames that are intermediate between head and body centered. In contrast, MSTd neurons show reference frames between head and eye centered but not body centered. Eye and head position gain fields were strongest in MSTd and weakest in PIVC. Our findings reveal distinct spatial reference frames for representing vestibular signals and pose new challenges for understanding the respective roles of these areas in potentially diverse vestibular functions.
Affiliation(s)
- Xiaodong Chen
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA
81
Eye-position signals in the dorsal visual system are accurate and precise on short timescales. J Neurosci 2013; 33:12395-406. [PMID: 23884945 DOI: 10.1523/jneurosci.0576-13.2013]
Abstract
Eye-position signals (EPS) are found throughout the primate visual system and are thought to provide a mechanism for representing spatial locations in a manner that is robust to changes in eye position. It remains unknown, however, whether cortical EPS (also known as "gain fields") have the necessary spatial and temporal characteristics to fulfill their purported computational roles. To quantify these EPS, we combined single-unit recordings in four dorsal visual areas of behaving rhesus macaques (lateral intraparietal area, ventral intraparietal area, middle temporal area, and the medial superior temporal area) with likelihood-based population-decoding techniques. The decoders used knowledge of spiking statistics to estimate eye position during fixation from a set of observed spike counts across neurons. Importantly, these samples were short in duration (100 ms) and from individual trials to mimic the real-time estimation problem faced by the brain. The results suggest that cortical EPS provide an accurate and precise representation of eye position, albeit with unequal signal fidelity across brain areas and a modest underestimation of eye eccentricity. The underestimation of eye eccentricity predicted a pattern of mislocalization that matches the errors made by human observers. In addition, we found that eccentric eye positions were associated with enhanced precision relative to the primary eye position. This predicts that positions in visual space should be represented more reliably during eccentric gaze than while looking straight ahead. Together, these results suggest that cortical eye-position signals provide a useable head-centered representation of visual space on timescales that are compatible with the duration of a typical ocular fixation.
82
Leclercq G, Blohm G, Lefèvre P. Accounting for direction and speed of eye motion in planning visually guided manual tracking. J Neurophysiol 2013; 110:1945-57. [DOI: 10.1152/jn.00130.2013]
Abstract
Accurate motor planning in a dynamic environment is a critical skill for humans because we are often required to react quickly and adequately to the visual motion of objects. Moreover, we are often in motion ourselves, and this complicates motor planning. Indeed, the retinal and spatial motions of an object are different because of the retinal motion component induced by self-motion. Many studies have investigated motion perception during smooth pursuit and concluded that eye velocity is partially taken into account by the brain. Here we investigate whether the eye velocity during ongoing smooth pursuit is taken into account for the planning of visually guided manual tracking. We had 10 human participants manually track a target while in steady-state smooth pursuit toward another target such that the difference between the retinal and spatial target motion directions could be large, depending on both the direction and the speed of the eye. We used a measure of initial arm movement direction to quantify whether motor planning occurred in retinal coordinates (not accounting for eye motion) or was spatially correct (incorporating eye velocity). Results showed that the eye velocity was nearly fully taken into account by the neuronal areas involved in the visuomotor velocity transformation (between 75% and 102%). In particular, these neuronal pathways accounted for the nonlinear effects due to the relative velocity between the target and the eye. In conclusion, the brain network transforming visual motion into a motor plan for manual tracking adequately uses extraretinal signals about eye velocity.
Affiliation(s)
- Guillaume Leclercq
- Institute of Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Institute of Neuroscience (IoNS), Université catholique de Louvain, Brussels, Belgium
- Gunnar Blohm
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; and
- Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada
- Philippe Lefèvre
- Institute of Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Institute of Neuroscience (IoNS), Université catholique de Louvain, Brussels, Belgium
83
84
Joiner WM, Cavanaugh J, FitzGibbon EJ, Wurtz RH. Corollary discharge contributes to perceived eye location in monkeys. J Neurophysiol 2013; 110:2402-13. [PMID: 23986562 DOI: 10.1152/jn.00362.2013]
Abstract
Despite saccades changing the image on the retina several times per second, we still perceive a stable visual world. A possible mechanism underlying this stability is that an internal retinotopic map is updated with each saccade, with the location of objects being compared before and after the saccade. Psychophysical experiments have shown that humans derive such location information from a corollary discharge (CD) accompanying saccades. Such a CD has been identified in the monkey brain in a circuit extending from superior colliculus to frontal cortex. There is a missing piece, however. Perceptual localization is established only in humans and the CD circuit only in monkeys. We therefore extended measurement of perceptual localization to the monkey by adapting the target displacement detection task developed in humans. During saccades to targets, the target disappeared and then reappeared, sometimes at a different location. The monkeys reported the displacement direction. Detections of displacement were similar in monkeys and humans, but enhanced detection of displacement from blanking the target at the end of the saccade was observed only in humans, not in monkeys. Saccade amplitude varied across trials, but the monkey's estimates of target location did not follow that variation, indicating that eye location depended on an internal CD rather than external visual information. We conclude that monkeys use a CD to determine their new eye location after each saccade, just as humans do.
Affiliation(s)
- Wilsaan M Joiner
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, Maryland; and
85
Abstract
To locate visual objects, the brain combines information about retinal location and direction of gaze. Studies in monkeys have demonstrated that eye position modulates the gain of visual signals with "gain fields," so that single neurons represent both retinotopic location and eye position. We wished to know whether eye position and retinotopic stimulus location are both represented in human visual cortex. Using functional magnetic resonance imaging, we measured separately for each of several different gaze positions cortical responses to stimuli that varied periodically in retinal locus. Visually evoked responses were periodic following the periodic retinotopic stimulation. Only the response amplitudes depended on eye position; response phases were indistinguishable across eye positions. We used multivoxel pattern analysis to decode eye position from the spatial pattern of response amplitudes. The decoder reliably discriminated eye position in five of the early visual cortical areas by taking advantage of a spatially heterogeneous eye position-dependent modulation of cortical activity. We conclude that responses in retinotopically organized visual cortical areas are modulated by gain fields qualitatively similar to those previously observed neurophysiologically.
86
Bedi H, Goltz HC, Wong AMF, Chandrakumar M, Niechwiej-Szwedo E. Error correcting mechanisms during antisaccades: contribution of online control during primary saccades and offline control via secondary saccades. PLoS One 2013; 8:e68613. [PMID: 23936308 PMCID: PMC3735558 DOI: 10.1371/journal.pone.0068613]
Abstract
Errors in eye movements can be corrected during the ongoing saccade through in-flight modifications (i.e., online control), or by programming a secondary eye movement (i.e., offline control). In a reflexive saccade task, the oculomotor system can use extraretinal information (i.e., efference copy) online to correct errors in the primary saccade, and offline retinal information to generate a secondary corrective saccade. The purpose of this study was to examine the error correction mechanisms in the antisaccade task. The roles of extraretinal and retinal feedback in maintaining eye movement accuracy were investigated by presenting visual feedback at the spatial goal of the antisaccade. We found that online control for antisaccades is not affected by the presence of visual feedback: whether visual feedback was present or not, the duration of the deceleration interval was extended and significantly correlated with reduced antisaccade endpoint error. We postulate that the extended duration of deceleration is a feature of online control during volitional saccades that improves their endpoint accuracy. We found that secondary saccades were generated more frequently in the antisaccade task than in the reflexive saccade task. Furthermore, we found evidence for a greater contribution from extraretinal sources of feedback in programming the secondary “corrective” saccades in the antisaccade task. Nonetheless, secondary saccades were more corrective for the remaining antisaccade amplitude error in the presence of visual feedback of the target. Taken together, our results reveal a distinctive online error control strategy through an extension of the deceleration interval in the antisaccade task. Target feedback does not improve online control; rather, it improves the accuracy of secondary saccades in the antisaccade task.
Affiliation(s)
- Harleen Bedi
- The Hospital for Sick Children, Toronto, Ontario, Canada
- Herbert C. Goltz
- The Hospital for Sick Children, Toronto, Ontario, Canada
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Agnes M. F. Wong
- The Hospital for Sick Children, Toronto, Ontario, Canada
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Ewa Niechwiej-Szwedo
- Department of Kinesiology, University of Waterloo, Waterloo, Ontario, Canada
87
Tian J, Ying HS, Zee DS. Revisiting corrective saccades: role of visual feedback. Vision Res 2013; 89:54-64. [PMID: 23891705 DOI: 10.1016/j.visres.2013.07.012]
Abstract
To clarify the role of visual feedback in the generation of corrective movements after inaccurate primary saccades, we used a visually-triggered saccade task in which we varied how long the target was visible. The target was on for only 100 ms (OFF100ms), on until the start of the primary saccade (OFFonset), or on for 2 s (ON). We found that the tolerance for post-saccadic error was small (-2%) with a visual signal (ON) but greater (-6%) without visual feedback (OFF100ms). Saccades with an error of -10%, however, were likely to be followed by corrective saccades regardless of whether or not visual feedback was present. Corrective saccades were generally generated earlier when visual error information was available; their latency was related to the size of the error. The LATER (Linear Approach to Threshold with Ergodic Rate) model analysis also showed a comparable small population of short-latency corrective saccades irrespective of target visibility. Finally, we found that, in the absence of visual feedback, the accuracy of corrective saccades across subjects was related to the latency of the primary saccade. Our findings provide new insights into the mechanisms underlying the programming of corrective saccades: (1) the preparation of corrective saccades begins along with the preparation of the primary saccade, (2) the accuracy of corrective saccades depends on the reaction time of the primary saccade, and (3) if visual feedback is available after the initiation of the primary saccade, the prepared correction can be updated.
Affiliation(s)
- Jing Tian
- Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA.
88
Dits J, King WM, van der Steen J. Scaling of compensatory eye movements during translations: virtual versus real depth. Neuroscience 2013; 246:73-81. [PMID: 23639883 DOI: 10.1016/j.neuroscience.2013.04.029]
Abstract
Vestibulo-ocular reflexes are the fastest compensatory reflex systems. One of these is the translational vestibulo-ocular reflex (TVOR), which stabilizes gaze on a given fixation point during whole-body translations. For a proper TVOR response, the eyes have to counter-rotate in the head with a velocity that is inversely scaled to the viewing distance of the target. It is generally assumed that scaling of the TVOR is automatically coupled to vergence angle at the brainstem level. However, different lines of evidence also argue that in humans scaling of the TVOR depends on a mechanism that pre-sets gain based on a priori knowledge of target distance. To discriminate between these two possibilities we used a real-target paradigm with vergence angle coupled to distance and a virtual-target paradigm with vergence angle dissociated from target distance. We compared TVOR responses in six subjects who underwent lateral sinusoidal whole-body translations at 1 and 2 Hz. Real targets varied in distance between 50 and 22.4 cm in front of the subjects, whereas the virtual targets, consisting of a green and a red light-emitting diode (LED), were physically located at 50 cm from the subject. The red and green LEDs were viewed dichoptically. By shifting the red LED relative to the green LED we created a range of virtual viewing distances where vergence angle changed but the ideal kinematic eye velocity was always the same. Eye velocity data recorded with virtual targets were compared to eye velocity data recorded with real targets. We also used flashing targets (flash frequency 1 Hz, duration 5 ms). With real, continuously visible targets, scaling of compensatory eye velocity with vergence angle was nearly perfect. With virtual targets, and with flashed targets, compensatory eye velocity was only weakly correlated with vergence angle, indicating that vergence angle is only partially coupled to compensatory eye velocity during translation. Our data suggest that in humans vergence angle, as a measure of target distance, has only limited use for automatic TVOR scaling.
Affiliation(s)
- J Dits
- Department of Neuroscience, Erasmus University Medical Centre Rotterdam, Dr. Molewaterplein 50, 3000 DR Rotterdam, The Netherlands
89
Abstract
To interact rapidly and effectively with our environment, our brain needs access to a neural representation, or map, of the spatial layout of the external world. However, the construction of such a map poses major challenges to the visual system, given that the images on our retinae depend on where the eyes are looking, and shift each time we move our eyes, head, and body to explore the world. Much research has been devoted to how this stability is achieved, with the debate often polarized between the utility of spatiotopic maps (which remain solid in external coordinates) and transiently updated retinotopic maps. Our research suggests that the visual system uses both strategies to maintain stability. fMRI, motion-adaptation, and saccade-adaptation studies demonstrate and characterize spatiotopic neural maps within the dorsal visual stream that remain solid in external rather than retinal coordinates. However, the construction of these maps takes time (up to 500 ms) and attentional resources. To solve the immediate problems created by individual saccades, we postulate the existence of a separate system to bridge each saccade, with neural units that are 'transiently craniotopic'. These units prepare for the effects of saccades with a shift of their receptive fields before the saccade starts, then relax back into their standard position during the saccade, compensating for its action. Psychophysical studies investigating the localization of stimuli flashed briefly around the time of saccades provide strong support for these neural mechanisms, and show quantitatively how they integrate information across saccades. This transient system cooperates with the spatiotopic mechanism to provide a useful map to guide interactions with our environment: one rapid and transitory, bringing into play the high-resolution visual areas; the other slow, long-lasting, and low-resolution, useful for interacting with the world.
Affiliation(s)
- David C Burr
- Department of Psychology, University of Florence, via San Salvi 12, 50135 Florence, Italy.
90
Axons giving rise to the palisade endings of feline extraocular muscles display motor features. J Neurosci 2013; 33:2784-93. [PMID: 23407938 DOI: 10.1523/jneurosci.4116-12.2013]
Abstract
Palisade endings are nerve specializations found in the extraocular muscles (EOMs) of mammals, including primates. They have long been postulated to be proprioceptors. It was recently demonstrated that palisade endings are cholinergic and that in monkeys they originate from the EOM motor nuclei. Nevertheless, there is considerable difference of opinion concerning the nature of palisade ending function. Palisade endings in EOMs were examined in cats to test whether they display motor or sensory characteristics. We injected an anterograde tracer into the oculomotor or abducens nuclei and combined tracer visualization with immunohistochemistry and α-bungarotoxin staining. Employing immunohistochemistry, we performed molecular analyses of palisade endings and trigeminal ganglia to determine whether cat palisade endings are a cholinergic trigeminal projection. We confirmed that palisade endings are cholinergic and showed, for the first time, that they, like extraocular motoneurons, are also immunoreactive for calcitonin gene-related peptide. Following tracer injection into the EOM nuclei, we observed tracer-positive palisade endings that exhibited choline acetyl transferase immunoreactivity. The tracer-positive nerve fibers supplying palisade endings also established motor terminals along the muscle fibers, as demonstrated by α-bungarotoxin. Neither the trigeminal ganglion nor the ophthalmic branch of the trigeminal nerve contained cholinergic elements. This study confirms that palisade endings originate in the EOM motor nuclei and further indicates that they are extensions of the axons supplying the muscle fiber related to the palisade. The present work excludes the possibility that they receive cholinergic trigeminal projections. These findings call into doubt the proposed proprioceptive function of palisade endings.
91
Abstract
Spatial attention can be defined as the selection of a location for privileged stimulus processing. Most oculomotor structures, such as the superior colliculus or the FEFs, play an additional role in visuospatial attention. Indeed, electrical stimulation of these structures can cause changes in visual sensitivity that are location specific. We have proposed that the recently discovered ocular proprioceptive area in the human postcentral gyrus (S1(EYE)) may have a similar function. This suggestion was based on the observation that a reduction of excitability in this area with TMS causes not only a shift in perceived eye position but also lateralized changes in visual sensitivity. Here we investigated whether these shifts in perceived gaze position and visual sensitivity are spatially congruent. After continuous theta burst stimulation over S1(EYE), participants underestimated their own eye rotation, so that saccades made from a lateral eye rotation undershot a central sound (Experiment 1). They discriminated letters faster if the letters were presented nearer the orbit midline (Experiment 2), and spent less time looking at locations nearer the orbit midline when searching for a nonexistent target in a letter array (Experiment 3). This suggests that visual sensitivity increased nearer the orbit midline, in the same direction as the shift in perceived eye position. This spatial congruence argues for a functional coupling between the cortical eye position signal in the somatosensory cortex and visuospatial attention.
92
Xu BY, Karachi C, Goldberg ME. The postsaccadic unreliability of gain fields renders it unlikely that the motor system can use them to calculate target position in space. Neuron 2013; 76:1201-9. [PMID: 23259954 DOI: 10.1016/j.neuron.2012.10.034]
Abstract
Gain fields, the eye-position modulation of visual responses, are thought to provide a mechanism by which the motor system can accurately calculate target position in space despite a constantly moving eye. Current gain-field models assume that the modulation of visual responses by eye position is accurate at all times, even around the time of a saccade. Here, we show that for at least 150 ms after a saccade, gain fields in the lateral intraparietal area (LIP) are unreliable. The majority of LIP cells with steady-state gain fields reflect the presaccadic eye position. The remainder of the cells have responses that cannot be predicted by their steady-state gain fields. Nonetheless, a monkey's oculomotor performance is accurate during this time. These results suggest that current models built upon a simple gain-field algorithm cannot be used to calculate the position of a target in space that flashes briefly after a saccade.
Affiliation(s)
- Benjamin Y Xu
- Mahoney-Keck Center for Brain and Behavior Research, Department of Neuroscience, Columbia University College of Physicians and Surgeons, New York, NY 10032, USA.
93
Ma R, Cui H, Lee SH, Anastasio TJ, Malpeli JG. Predictive encoding of moving target trajectory by neurons in the parabigeminal nucleus. J Neurophysiol 2013; 109:2029-43. [PMID: 23365185 DOI: 10.1152/jn.01032.2012]
Abstract
Intercepting momentarily invisible moving objects requires internally generated estimations of target trajectory. We demonstrate here that the parabigeminal nucleus (PBN) encodes such estimations, combining sensory representations of target location, extrapolated positions of briefly obscured targets, and eye position information. Cui and Malpeli (Cui H, Malpeli JG. J Neurophysiol 89: 3128-3142, 2003) reported that PBN activity for continuously visible tracked targets is determined by retinotopic target position. Here we show that when cats tracked moving, blinking targets the relationship between activity and target position was similar for ON and OFF phases (400 ms for each phase). The dynamic range of activity evoked by virtual targets was 94% of that of real targets for the first 200 ms after target offset and 64% for the next 200 ms. Activity peaked at about the same best target position for both real and virtual targets. PBN encoding of target position takes into account changes in eye position resulting from saccades, even without visual feedback. Since PBN response fields are retinotopically organized, our results suggest that activity foci associated with real and virtual targets at a given target position lie in the same physical location in the PBN, i.e., a retinotopic as well as a rate encoding of virtual-target position. We also confirm that PBN activity is specific to the intended target of a saccade and is predictive of which target will be chosen if two are offered. A Bayesian predictor-corrector model is presented that conceptually explains the differences in the dynamic ranges of PBN neuronal activity evoked during tracking of real and virtual targets.
Affiliation(s)
- Rui Ma
- Neuroscience Program, University of Illinois, Urbana, Illinois 61820, USA
94
Abstract
Human vision uses saccadic eye movements to rapidly shift the sensitive foveal portion of our retina to objects of interest. For vision to function properly amidst these ballistic eye movements, a mechanism is needed to extract discrete percepts on each fixation from the continuous stream of neural activity that spans fixations. The speed of visual parsing is crucial because human behaviors ranging from reading to driving to sports rely on rapid visual analysis. We find that a brain signal associated with moving the eyes appears to play a role in resetting visual analysis on each fixation, a process that may aid in parsing the neural signal. We quantified the degree to which the perception of tilt is influenced by the tilt of a stimulus on a preceding fixation. Two key conditions were compared: one in which a saccade moved the eyes from one stimulus to the next, and a second, simulated-saccade condition in which the stimuli moved in the same manner but the subjects did not move their eyes. We find that there is a brief period of time at the start of each fixation during which the tilt of the previous stimulus influences perception (in a direction opposite to the tilt aftereffect); perception is not instantaneously reset when a fixation starts. Importantly, the results show that this perceptual bias is much greater, with nearly identical visual input, when saccades are simulated. This finding suggests that, in real-saccade conditions, some signal related to the eye movement may be involved in the reset phenomenon. While proprioceptive information from the extraocular muscles is conceivably a factor, the fast speed of the effect we observe suggests that a more likely mechanism is a corollary discharge signal associated with eye movement.
95
Lienbacher K, Horn AKE. Palisade endings and proprioception in extraocular muscles: a comparison with skeletal muscles. Biol Cybern 2012; 106:643-55. [PMID: 23053430 DOI: 10.1007/s00422-012-0519-1]
Abstract
This article describes current views on motor and sensory control of extraocular muscles (EOMs) based on anatomical data. The special morphology of EOMs, including their motor innervation, is described in comparison to classical skeletal limb and trunk muscles. The presence of proprioceptive organs is reviewed with emphasis on the palisade endings (PEs), which are unique to EOMs but whose function is still debated. In light of current anatomical data on the location of the cell bodies of PEs, a hypothesis is put forward on the function of PEs in EOMs and of the multiply innervated muscle fibres to which they are attached.
Affiliation(s)
- Karoline Lienbacher
- Institute of Anatomy and Cell Biology, Department I, Ludwig-Maximilians University of Munich, Munich, Germany
|
96
|
Pynn LK, DeSouza JFX. The function of efference copy signals: implications for symptoms of schizophrenia. Vision Res 2012; 76:124-33. [PMID: 23159418 DOI: 10.1016/j.visres.2012.10.019] [Citation(s) in RCA: 69] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2010] [Revised: 09/12/2012] [Accepted: 10/31/2012] [Indexed: 11/29/2022]
Abstract
Efference copy signals are used to reduce cognitive load by decreasing sensory processing of reafferent information (those incoming sensory signals that are produced by an organism's own motor output). Attenuated sensory processing of self-generated afferents is seen across species and in multiple sensory systems involving many different neural structures and circuits including both cortical and subcortical structures with thalamic nuclei playing a particularly important role. It has been proposed that the failure to disambiguate self-induced from externally generated sensory input may cause some of the positive symptoms in schizophrenia such as auditory hallucinations and delusions of passivity. Here, we review the current data on the role of efference copy signals within different sensory modalities as well as the behavioral, structural and functional abnormalities in clinical groups that support this hypothesis.
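The attenuation of self-generated sensory input described above is often modeled as reafference cancellation: a forward model driven by the efference copy predicts the sensory consequences of the motor command, and that prediction is subtracted from the incoming signal. A minimal sketch, not taken from the paper, with a hypothetical gain parameter:

```python
# Toy reafference-cancellation model. The forward_model_gain value is
# illustrative; in the brain this mapping is learned and nonlinear.

def perceived(actual_input, motor_command, forward_model_gain=1.0):
    """Residual sensation after subtracting the predicted reafference."""
    predicted_reafference = forward_model_gain * motor_command
    return actual_input - predicted_reafference

# Self-generated movement: the input is fully predicted, so sensation
# is attenuated to zero.
print(perceived(actual_input=5.0, motor_command=5.0))  # -> 0.0

# External perturbation of the same size: no motor command, no prediction,
# so the full sensation gets through.
print(perceived(actual_input=5.0, motor_command=0.0))  # -> 5.0
```

On this view, a failure to generate or apply the prediction leaves self-produced input indistinguishable from external input, which is the proposed link to delusions of passivity.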
Affiliation(s)
- Laura K Pynn
- Centre for Vision Research, York University, Toronto, Ontario, Canada M3J 1P3
|
97
|
Van Grootel TJ, Van der Willigen RF, Van Opstal AJ. Experimental test of spatial updating models for monkey eye-head gaze shifts. PLoS One 2012; 7:e47606. [PMID: 23118883 PMCID: PMC3485288 DOI: 10.1371/journal.pone.0047606] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2012] [Accepted: 09/13/2012] [Indexed: 12/02/2022] Open
Abstract
How the brain maintains an accurate and stable representation of visual target locations despite the occurrence of saccadic gaze shifts is a classical problem in oculomotor research. Here we test and dissociate the predictions of different conceptual models for head-unrestrained gaze-localization behavior of macaque monkeys. We adopted the double-step paradigm with rapid eye-head gaze shifts to measure localization accuracy in response to flashed visual stimuli in darkness. We presented the second target flash either before (static), or during (dynamic) the first gaze displacement. In the dynamic case the brief visual flash induced a small retinal streak of up to about 20 deg at an unpredictable moment and retinal location during the eye-head gaze shift, which provides serious challenges for the gaze-control system. However, for both stimulus conditions, monkeys localized the flashed targets with accurate gaze shifts, which rules out several models of visuomotor control. First, these findings exclude the possibility that gaze-shift programming relies on retinal inputs only. Instead, they support the notion that accurate eye-head motor feedback updates the gaze-saccade coordinates. Second, in dynamic trials the visuomotor system cannot rely on the coordinates of the planned first eye-head saccade either, which rules out remapping on the basis of a predictive corollary gaze-displacement signal. Finally, because gaze-related head movements were also goal-directed, requiring continuous access to eye-in-head position, we propose that our results best support a dynamic feedback scheme for spatial updating in which visuomotor control incorporates accurate signals about instantaneous eye- and head positions rather than relative eye- and head displacements.
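The scheme the authors favor can be caricatured in a few lines: the corrective gaze shift is computed not from the retinal error alone, but by subtracting the instantaneous gaze position (eye-in-head plus head-in-space) from the remembered spatial target. This is an illustrative sketch, not the authors' implementation; angles are degrees along one dimension and all values are hypothetical.

```python
def retinal_only(retinal_error_at_flash):
    """Scheme ruled out by the data: program the corrective gaze shift
    straight from the retinal error, ignoring the intervening movement."""
    return retinal_error_at_flash

def dynamic_feedback(target_in_space, eye_in_head, head_in_space):
    """Scheme the results support: subtract instantaneous gaze position
    from the remembered spatial target location."""
    gaze_in_space = eye_in_head + head_in_space
    return target_in_space - gaze_in_space

# Target flashed 10 deg right of straight ahead while a gaze shift is
# underway: the eye has already rotated 12 deg in the head and the head
# 5 deg in space, so a 7 deg leftward corrective shift is required.
print(dynamic_feedback(10.0, 12.0, 5.0))  # -> -7.0
```

The retinal-only scheme gives the right answer only if gaze never moves between the flash and the corrective saccade, which is exactly the assumption the dynamic trials violate.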
Affiliation(s)
- Tom J. Van Grootel
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Department of Biophysics, Nijmegen, The Netherlands
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Robert F. Van der Willigen
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Department of Biophysics, Nijmegen, The Netherlands
- A. John Van Opstal
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Department of Biophysics, Nijmegen, The Netherlands
|
98
|
Abstract
Misalignment of the eyes can lead to double vision and visual confusion. However, these sensations are rare when strabismus is acquired early in life, because the extra image is suppressed. To explore the mechanism of perceptual suppression in strabismus, the visual fields were mapped binocularly in 14 human subjects with exotropia. Subjects wore red/blue filter glasses to permit dichoptic stimulation while fixating a central target on a tangent screen. A purple stimulus was flashed at a peripheral location; its reported color ("red" or "blue") revealed which eye's image was perceived at that locus. The maps showed a vertical border between the center of gaze for each eye, splitting the visual field into two separate regions. In each region, perception was mediated by only one eye, with suppression of the other eye. Unexpectedly, stimuli falling on the fovea of the deviated eye were seen in all subjects. However, they were perceived in a location shifted by the angle of ocular deviation. This plasticity in the coding of visual direction allows accurate localization of objects everywhere in the visual scene, despite the presence of strabismus.
|
99
|
Eye proprioception used for visual localization only if in conflict with the oculomotor plan. J Neurosci 2012; 32:8569-73. [PMID: 22723697 DOI: 10.1523/jneurosci.1488-12.2012] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Both the corollary discharge of the oculomotor command and eye muscle proprioception provide eye position information to the brain. Two contradictory models have been suggested about how these two sources contribute to visual localization: (1) only the efference copy is used whereas proprioception is a slow recalibrator of the forward model, and (2) both signals are used together as a weighted average. We had the opportunity to test these hypotheses in a patient (R.W.) with a circumscribed lesion of the right postcentral gyrus that overlapped the human eye proprioceptive representation. R.W. was as accurate and precise as the control group (n = 19) in locating a lit LED that she viewed through the eye contralateral to the lesion. However, when the task was preceded by a brief (<1 s), gentle push to the closed eye, which perturbed eye position and stimulated eye proprioceptors in the absence of a motor command, R.W.'s accuracy significantly decreased compared with both her own baseline and the healthy control group. The data suggest that in normal conditions, eye proprioception is not used for visual localization. Eye proprioception is, however, continuously monitored to be incorporated into the eye position estimate when a mismatch with the efference copy of the motor command is detected. Our result thus supports the first model and, furthermore, identifies the limits for its operation.
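The two competing models can be made concrete with a small sketch. This is a hypothetical illustration, not a fitted model: the weights and the mismatch threshold are invented for the example.

```python
def weighted_average(efference_copy, proprioception, w_prop=0.3):
    """Model 2 (rejected): the two eye-position signals are always
    combined as a weighted average."""
    return (1 - w_prop) * efference_copy + w_prop * proprioception

def gated_estimate(efference_copy, proprioception, threshold=2.0, w_prop=0.5):
    """Model 1 (supported, with the gating limit found here): rely on the
    efference copy alone unless proprioception disagrees by more than a
    threshold, e.g. after a passive push to the eye, in which case
    proprioception is folded into the estimate."""
    if abs(proprioception - efference_copy) > threshold:
        return (1 - w_prop) * efference_copy + w_prop * proprioception
    return efference_copy

# Normal viewing: the signals agree, the estimate follows the motor command.
print(gated_estimate(10.0, 10.5))  # -> 10.0

# Eye press: proprioception deviates with no accompanying motor command,
# so the mismatch gate opens and the estimate shifts.
print(gated_estimate(10.0, 16.0))  # -> 13.0
```

R.W.'s pattern fits the gated scheme: with the proprioceptive representation lesioned, baseline localization (gate closed) is spared, while the eye-press condition (gate open) is selectively impaired.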
|
100
|
Eye proprioception may provide real time eye position information. Neurol Sci 2012; 34:281-6. [PMID: 22872063 DOI: 10.1007/s10072-012-1172-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2012] [Accepted: 07/24/2012] [Indexed: 12/21/2022]
Abstract
Because of the frequency of eye movements, online knowledge of eye position is crucial for accurate spatial perception and behavioral navigation. Both the internal monitoring signal (corollary discharge) of eye movements and the eye proprioception signal are thought to contribute to localizing the eye's position in the orbit. However, the functional roles of these two eye position signals in spatial cognition have been disputed for more than a century. The predominant view proposes that the online analysis of eye position is provided exclusively by the corollary discharge signal, while the eye proprioception signal plays a role only in the long-term calibration of the oculomotor system. However, increasing evidence from recent behavioral and physiological studies suggests that the eye proprioception signal may also contribute to the online monitoring of eye position. The purpose of this review is to discuss the feasibility and possible function of the eye proprioceptive signal for online monitoring of eye position.
|