1
Ugolini G, Graf W. Pathways from the superior colliculus and the nucleus of the optic tract to the posterior parietal cortex in macaque monkeys: Functional frameworks for representation updating and online movement guidance. Eur J Neurosci 2024; 59:2792-2825. [PMID: 38544445] [DOI: 10.1111/ejn.16314]
Abstract
The posterior parietal cortex (PPC) integrates multisensory and motor-related information for generating and updating body representations and movement plans. We used retrograde transneuronal transfer of rabies virus combined with a conventional tracer in macaque monkeys to identify direct and disynaptic pathways to the arm-related rostral medial intraparietal area (MIP), the ventral lateral intraparietal area (LIPv), belonging to the parietal eye field, and the pursuit-related lateral subdivision of the medial superior temporal area (MSTl). We found that these areas receive major disynaptic pathways via the thalamus from the nucleus of the optic tract (NOT) and the superior colliculus (SC), mainly ipsilaterally. NOT pathways, targeting MSTl most prominently, serve to process the sensory consequences of slow eye movements for which the NOT is the key sensorimotor interface. They potentially contribute to the directional asymmetry of the pursuit and optokinetic systems. MSTl and LIPv receive feedforward inputs from SC visual layers, which are potential correlates for fast detection of motion, perceptual saccadic suppression and visual spatial attention. MSTl is the target of efference copy pathways from saccade- and head-related compartments of SC motor layers and head-related reticulospinal neurons. They are potential sources of extraretinal signals related to eye and head movement in MSTl visual-tracking neurons. LIPv and rostral MIP receive efference copy pathways from all SC motor layers, providing online estimates of eye, head and arm movements. Our findings have important implications for understanding the role of the PPC in representation updating, internal models for online movement guidance, eye-hand coordination and optic ataxia.
Affiliation(s)
- Gabriella Ugolini
- Paris-Saclay Institute of Neuroscience (NeuroPSI), UMR9197 CNRS - Université Paris-Saclay, Campus CEA Saclay, Saclay, France
- Werner Graf
- Department of Physiology and Biophysics, Howard University, Washington, DC, USA
2
Topographic organization of eye-position dependent gain fields in human visual cortex. Nat Commun 2022; 13:7925. [PMID: 36564372] [PMCID: PMC9789150] [DOI: 10.1038/s41467-022-35488-8]
Abstract
The ability to move confronts animals with a problem of sensory ambiguity: the position of an external stimulus could change over time because the stimulus moved, or because the animal moved its receptors. This ambiguity can be resolved with a change in neural response gain as a function of receptor orientation. Here, we developed an encoding model to capture gain modulation of visual responses in high-field (7 T) fMRI data. We characterized population eye-position dependent gain fields (pEGFs). The information contained in the pEGFs allowed us to reconstruct eye positions over time across the visual hierarchy. We discovered a systematic distribution of pEGF centers: pEGF centers shift from contra- to ipsilateral following pRF eccentricity. Such a topographic organization suggests that signals beyond pure retinotopy are accessible early in the visual hierarchy, providing the potential to solve sensory ambiguity and optimize sensory processing for functionally relevant behavior.
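As a rough illustration of the gain-field idea summarized above (a hypothetical sketch, not the study's encoding model), a voxel's response can be written as a retinotopic pRF response multiplied by an eye-position-dependent gain. The Gaussian shapes, parameter values, and function names below are assumptions for illustration only.

```python
import numpy as np

def prf_response(stim_pos, prf_center, prf_size):
    """Retinotopic population receptive field: Gaussian over stimulus position (deg)."""
    return np.exp(-np.sum((stim_pos - prf_center) ** 2) / (2 * prf_size ** 2))

def gain_field(eye_pos, gain_center, gain_width):
    """Eye-position gain field: multiplicative Gaussian gain over gaze direction (deg)."""
    return np.exp(-np.sum((eye_pos - gain_center) ** 2) / (2 * gain_width ** 2))

def voxel_response(stim_pos, eye_pos, prf_center, prf_size, gain_center, gain_width):
    """Response = retinotopic drive multiplied by an eye-position-dependent gain."""
    return (prf_response(stim_pos, prf_center, prf_size)
            * gain_field(eye_pos, gain_center, gain_width))

# Same retinal stimulus, two gaze directions -> different response gains (hypothetical values)
stim = np.array([4.0, 0.0])                       # stimulus position relative to fixation (deg)
prf_c, prf_s = np.array([4.0, 0.0]), 2.0          # pRF center and size
gain_c, gain_w = np.array([-10.0, 0.0]), 15.0     # assumed contralateral-shifted gain center
for eye in (np.array([-10.0, 0.0]), np.array([10.0, 0.0])):
    print(eye, voxel_response(stim, eye, prf_c, prf_s, gain_c, gain_w))
```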
3
Koshizawa R, Oki K, Takayose M. The presence of occlusion affects electroencephalogram activity patterns when the target is occluded and immediately before occlusion. Neuroreport 2022; 33:345-353. [DOI: 10.1097/wnr.0000000000001792]
4
McFadyen JR, Heider B, Karkhanis AN, Cloherty SL, Muñoz F, Siegel RM, Morris AP. Robust Coding of Eye Position in Posterior Parietal Cortex despite Context-Dependent Tuning. J Neurosci 2022; 42:4116-4130. [PMID: 35410881] [PMCID: PMC9121829] [DOI: 10.1523/jneurosci.0674-21.2022]
Abstract
Neurons in posterior parietal cortex (PPC) encode many aspects of the sensory world (e.g., scene structure), the posture of the body, and plans for action. For a downstream computation, however, only some of these dimensions are relevant; the rest are "nuisance variables" because their influence on neural activity changes with sensory and behavioral context, potentially corrupting the read-out of relevant information. Here we show that a key postural variable for vision (eye position) is represented robustly in male macaque PPC across a range of contexts, although the tuning of single neurons depended strongly on context. Contexts were defined by different stages of a visually guided reaching task, including (1) a visually sparse epoch, (2) a visually rich epoch, (3) a "go" epoch in which the reach was cued, and (4) during the reach itself. Eye position was constant within trials but varied across trials in a 3 × 3 grid spanning 24° × 24°. Using demixed principal component analysis of neural spike-counts, we found that the subspace of the population response encoding eye position is orthogonal to that encoding task context. Accordingly, a context-naive (fixed-parameter) decoder was nevertheless able to estimate eye position reliably across contexts. Errors were small given the sample size (∼1.78°) and would likely be even smaller with larger populations. Moreover, they were comparable to those of decoders that were optimized for each context. Our results suggest that population codes in PPC shield encoded signals from crosstalk to support robust sensorimotor transformations across contexts.
SIGNIFICANCE STATEMENT: Neurons in posterior parietal cortex (PPC) that are sensitive to gaze direction are thought to play a key role in spatial perception and behavior (e.g., reaching, navigation), and provide a potential substrate for brain-controlled prosthetics. Many, however, change their tuning under different sensory and behavioral contexts, raising the prospect that they provide unreliable representations of egocentric space. Here, we analyze the structure of encoding dimensions for gaze direction and context in PPC during different stages of a visually guided reaching task. We use demixed dimensionality reduction and decoding techniques to show that the coding of gaze direction in PPC is mostly invariant to context. This suggests that PPC can provide reliable spatial information across sensory and behavioral contexts.
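The following is a minimal sketch of the decoding logic described above: a fixed-parameter decoder fit in one task context and applied unchanged in the others. It uses simulated Poisson responses and ridge regression rather than the study's recordings and demixed PCA pipeline; all tuning parameters and names are assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_neurons, n_trials = 60, 600
eye_pos = rng.choice([-12, 0, 12], size=(n_trials, 2))   # gaze on a 3 x 3 grid (deg)
context = rng.integers(0, 4, size=n_trials)              # four task epochs

# Simulated tuning: planar eye-position sensitivity plus additive, context-dependent offsets,
# so the eye-position code is (by construction) separable from context.
gains = rng.normal(0, 0.05, size=(n_neurons, 2))
offsets = rng.normal(0, 1.0, size=(n_neurons, 4))
rates = 5 + eye_pos @ gains.T + offsets[:, context].T
spikes = rng.poisson(np.clip(rates, 0.1, None))

# Context-naive (fixed-parameter) decoder: fit in one context, test in the others.
train = context == 0
decoder = Ridge(alpha=1.0).fit(spikes[train], eye_pos[train])
for c in range(1, 4):
    test = context == c
    err = np.linalg.norm(decoder.predict(spikes[test]) - eye_pos[test], axis=1)
    print(f"context {c}: mean decoding error {err.mean():.2f} deg")
```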
Affiliation(s)
- Jamie R McFadyen
- Neuroscience Program, Biomedicine Discovery Institute, Department of Physiology, Monash University, Clayton, VIC, 3800, Australia
- Barbara Heider
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ, 07102
- Anushree N Karkhanis
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ, 07102
- Shaun L Cloherty
- School of Engineering, RMIT University, Melbourne, VIC, 3001, Australia
- Fabian Muñoz
- Department of Neuroscience, Columbia University, New York, NY, 10027
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027
- Ralph M Siegel
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ, 07102
- Adam P Morris
- Neuroscience Program, Biomedicine Discovery Institute, Department of Physiology, Monash University, Clayton, VIC, 3800, Australia
- Monash Data Futures Institute, Monash University, Clayton, VIC, 3800, Australia
5
Foster C, Sheng WA, Heed T, Ben Hamed S. The macaque ventral intraparietal area has expanded into three homologue human parietal areas. Prog Neurobiol 2021; 209:102185. [PMID: 34775040] [DOI: 10.1016/j.pneurobio.2021.102185]
Abstract
The macaque ventral intraparietal area (VIP) in the fundus of the intraparietal sulcus has been implicated in a diverse range of sensorimotor and cognitive functions such as motion processing, multisensory integration, processing of head peripersonal space, defensive behavior, and numerosity coding. Here, we exhaustively review macaque VIP function, cytoarchitectonics, and anatomical connectivity and integrate it with human studies that have attempted to identify a potential human VIP homologue. We show that human VIP research has consistently identified three, rather than one, bilateral parietal areas that each appear to subsume some, but not all, of the macaque area's functionality. Available evidence suggests that this human "VIP complex" has evolved as an expansion of the macaque area, but that some precursory specialization within macaque VIP has been previously overlooked. The three human areas are dominated, roughly, by coding the head or self in the environment, visual heading direction, and the peripersonal environment around the head, respectively. A unifying functional principle may be best described as prediction in space and time, linking VIP to state estimation as a key parietal sensorimotor function. VIP's expansive differentiation of head and self-related processing may have been key in the emergence of human bodily self-consciousness.
Affiliation(s)
- Celia Foster
- Biopsychology & Cognitive Neuroscience, Faculty of Psychology & Sports Science, Bielefeld University, Bielefeld, Germany; Center of Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
- Wei-An Sheng
- Institut des Sciences Cognitives Marc Jeannerod, UMR5229, CNRS-University of Lyon 1, France
- Tobias Heed
- Biopsychology & Cognitive Neuroscience, Faculty of Psychology & Sports Science, Bielefeld University, Bielefeld, Germany; Center of Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany; Department of Psychology, University of Salzburg, Salzburg, Austria; Centre for Cognitive Neuroscience, University of Salzburg, Salzburg, Austria
- Suliann Ben Hamed
- Institut des Sciences Cognitives Marc Jeannerod, UMR5229, CNRS-University of Lyon 1, France
6
Diomedi S, Vaccari FE, Filippini M, Fattori P, Galletti C. Mixed Selectivity in Macaque Medial Parietal Cortex during Eye-Hand Reaching. iScience 2020; 23:101616. [PMID: 33089104] [PMCID: PMC7559278] [DOI: 10.1016/j.isci.2020.101616]
Abstract
The activity of neurons of the medial posterior parietal area V6A in macaque monkeys is modulated by many aspects of the reach task. In the past, research mostly focused on the effect of single parameters upon the activity of V6A cells. Here, we used Generalized Linear Models (GLMs) to simultaneously test the contribution of several factors upon V6A cells during a fix-to-reach task. This approach resulted in the definition of a representative “functional fingerprint” for each neuron. We first studied how the features are distributed in the population. Our analysis highlighted the virtual absence of units strictly selective for only one factor and revealed that most cells are characterized by “mixed selectivity.” Then, exploiting our GLM framework, we investigated the dynamics of the spatial parameters encoded within V6A. We found that the tuning is not static, but changes along the trial, indicating the sequential occurrence of visuospatial transformations that help guide arm movement.
Highlights: the parietal cortex integrates a variety of sensorimotor inputs to guide reaching; GLMs disentangled the effect of various reaching parameters upon cell activity; V6A neurons were not functionally clustered, but characterized by mixed selectivity; spatial selectivity was dynamic and reached its peak during the movement phase.
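A minimal sketch of the GLM idea described above, in which each cell's spike count is modeled as a Poisson function of several task regressors and the fitted coefficients serve as a "functional fingerprint". The regressors, simulated data, and use of scikit-learn's PoissonRegressor are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(1)
n_trials = 400

# Illustrative task regressors: target direction (x, y), target depth, task epoch (one-hot).
target_xy = rng.uniform(-1, 1, size=(n_trials, 2))
depth = rng.uniform(-1, 1, size=(n_trials, 1))
epoch = np.eye(3)[rng.integers(0, 3, n_trials)]      # fixation / delay / movement
X = np.hstack([target_xy, depth, epoch])

# Simulated "mixed selectivity" cell: spike count depends on several factors at once.
true_w = np.array([0.6, -0.3, 0.4, 0.2, 0.0, 0.8])
counts = rng.poisson(np.exp(0.5 + X @ true_w))

glm = PoissonRegressor(alpha=1e-3, max_iter=300).fit(X, counts)
fingerprint = glm.coef_                               # one weight per task factor
print(np.round(fingerprint, 2))
```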
Affiliation(s)
- Stefano Diomedi
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Francesco E. Vaccari
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Matteo Filippini (corresponding author)
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Patrizia Fattori (corresponding author)
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Claudio Galletti
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
7
Lisi M. Uncertainty and spatial updating in posterior parietal cortex. Cortex 2020; 130:441-443. [DOI: 10.1016/j.cortex.2020.02.013]
8
Filippini M, Morris AP, Breveglieri R, Hadjidimitrakis K, Fattori P. Decoding of standard and non-standard visuomotor associations from parietal cortex. J Neural Eng 2020; 17:046027. [PMID: 32698164] [DOI: 10.1088/1741-2552/aba87e]
Abstract
OBJECTIVE: Neural signals can be decoded and used to move neural prostheses with the purpose of restoring motor function in patients with mobility impairments. Such patients typically have intact eye movement control and visual function, suggesting that cortical visuospatial signals could be used to guide external devices. Neurons in parietal cortex mediate sensory-motor transformations, encode the spatial coordinates for reaching goals, hand position and movements, and other spatial variables. We studied how spatial information is represented at the population level, and the possibility of decoding not only the position of visual targets and the plans to reach them, but also conditional, non-spatial motor responses.
APPROACH: The animals first fixated one of nine targets in 3D space and then, after the target changed color, either reached toward it or performed a non-spatial motor response (lift hand from a button). Spiking activity of parietal neurons was recorded in monkeys during the two tasks. We then decoded different task-related parameters.
MAIN RESULTS: We first show that a maximum-likelihood estimation (MLE) algorithm trained separately in each task transformed neural activity into accurate metric predictions of target location. Furthermore, by combining MLE with a Naïve Bayes classifier, we decoded the monkey's motor intention (reach or hand lift) and the different phases of the tasks. These results show that, although V6A encodes the spatial location of a target during a delay period, the signals it carries are updated around movement execution in an intention/motor-specific way.
SIGNIFICANCE: These findings show the presence of multiple levels of information in parietal cortex that could be decoded and used in brain-machine interfaces to control both goal-directed movements and more cognitive visuomotor associations.
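A schematic of the two decoding steps mentioned above (maximum-likelihood decoding of target identity plus a Naïve Bayes classifier for the motor response), using assumed Poisson tuning curves and simulated trials rather than the recorded V6A data; parameter values are arbitrary.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(2)
n_neurons, n_trials = 40, 900
tuning = rng.uniform(2, 20, size=(n_neurons, 9))       # assumed mean count per neuron and target

target_id = rng.integers(0, 9, n_trials)               # 9 possible target locations
intention = rng.integers(0, 2, n_trials)               # 0 = reach, 1 = hand lift
# Intention is simulated as a gain change around movement execution (illustrative only).
counts = rng.poisson(tuning[:, target_id].T * (1 + 0.3 * intention[:, None]))

def mle_target(count_vec):
    """Poisson maximum-likelihood estimate of the target identity."""
    loglik = (count_vec[None, :] * np.log(tuning.T) - tuning.T).sum(axis=1)
    return int(np.argmax(loglik))

decoded = np.array([mle_target(c) for c in counts])
print("target accuracy:", (decoded == target_id).mean())

clf = GaussianNB().fit(counts[:600], intention[:600])  # classify the motor response
print("intention accuracy:", clf.score(counts[600:], intention[600:]))
```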
Affiliation(s)
- M Filippini
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Piazza di Porta San Donato 2, Bologna 40126, Italy; ALMA-AI: Alma Mater Research Institute for Human-Centered Artificial Intelligence, University of Bologna, Bologna, Italy
9
Dowiasch S, Meyer-Stender S, Klingenhoefer S, Bremmer F. Nonretinocentric localization of successively presented flashes during smooth pursuit eye movements. J Vis 2020; 20:8. [PMID: 32298416] [PMCID: PMC7405758] [DOI: 10.1167/jov.20.4.8]
Abstract
Keeping track of objects in our environment across body and eye movements is essential for perceptual stability and localization of external objects. As of yet, it is largely unknown how this perceptual stability is achieved. A common behavioral approach to investigate potential neuronal mechanisms underlying spatial vision has been the presentation of one brief visual stimulus across eye movements. Here, we adopted this approach and aimed to determine the reference frame of the perceptual localization of two successively presented flashes during fixation and smooth pursuit eye movements (SPEMs). To this end, eccentric flashes with a stimulus onset asynchrony of zero or ± 200 ms had to be localized with respect to each other during fixation and SPEMs. The results were used to evaluate different models predicting the reference frame in which the spatial information is represented. First, we were able to reproduce the well-known effect of relative mislocalization during fixation. Second, smooth pursuit led to a characteristic relative mislocalization, different from that during fixation. A model assuming that relative localization takes place in a nonretinocentric reference frame described our data best. This suggests that the relative localization judgment is performed at a stage of visual processing in which retinal and nonretinal information is available.
10
Schneider L, Dominguez-Vargas AU, Gibson L, Kagan I, Wilke M. Eye position signals in the dorsal pulvinar during fixation and goal-directed saccades. J Neurophysiol 2020; 123:367-391. [DOI: 10.1152/jn.00432.2019]
Abstract
Sensorimotor cortical areas contain eye position information thought to ensure perceptual stability across saccades and underlie spatial transformations supporting goal-directed actions. One pathway by which eye position signals could be relayed to and across cortical areas is via the dorsal pulvinar. Several studies have demonstrated saccade-related activity in the dorsal pulvinar, and we have recently shown that many neurons exhibit postsaccadic spatial preference. In addition, dorsal pulvinar lesions lead to gaze-holding deficits expressed as nystagmus or ipsilesional gaze bias, prompting us to investigate the effects of eye position. We tested three starting eye positions (−15°, 0°, 15°) in monkeys performing a visually cued memory saccade task. We found two main types of gaze dependence. First, ~50% of neurons showed dependence on static gaze direction during initial and postsaccadic fixation, and might be signaling the position of the eyes in the orbit or coding foveal targets in a head/body/world-centered reference frame. The population-derived eye position signal lagged behind the saccade. Second, many neurons showed a combination of eye-centered and gaze-dependent modulation of visual, memory, and saccadic responses to a peripheral target. A small subset showed effects consistent with eye position-dependent gain modulation. Analysis of reference frames across task epochs from visual cue to postsaccadic fixation indicated a transition from predominantly eye-centered encoding to representation of final gaze or foveated locations in nonretinocentric coordinates. These results show that dorsal pulvinar neurons carry information about eye position, which could contribute to steady gaze during postural changes and to reference frame transformations for visually guided eye and limb movements.
NEW & NOTEWORTHY: Work on the pulvinar has focused on eye-centered visuospatial representations, but the position of the eyes in the orbit is also an important factor that needs to be taken into account during spatial orienting and goal-directed reaching. We show that dorsal pulvinar neurons are influenced by eye position. Gaze direction modulated ongoing firing during stable fixation, as well as visual and saccade responses to peripheral targets, suggesting involvement of the dorsal pulvinar in spatial coordinate transformations.
Affiliation(s)
- Lukas Schneider
- Decision and Awareness Group, Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany
- Department of Cognitive Neurology, University of Goettingen, Goettingen, Germany
- Adan-Ulises Dominguez-Vargas
- Decision and Awareness Group, Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany
- Escuela Nacional de Estudios Superiores Unidad-León, Universidad Nacional Autónoma de México, León, Guanajuato, Mexico
- Lydia Gibson
- Decision and Awareness Group, Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany
- Department of Cognitive Neurology, University of Goettingen, Goettingen, Germany
- Igor Kagan
- Decision and Awareness Group, Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany
- Department of Cognitive Neurology, University of Goettingen, Goettingen, Germany
- Leibniz ScienceCampus Primate Cognition, Goettingen, Germany
- Melanie Wilke
- Decision and Awareness Group, Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany
- Department of Cognitive Neurology, University of Goettingen, Goettingen, Germany
- Leibniz ScienceCampus Primate Cognition, Goettingen, Germany
11
Balslev D, Odoj B. Distorted gaze direction input to attentional priority map in spatial neglect. Neuropsychologia 2019; 131:119-128. [PMID: 31128129] [PMCID: PMC6667735] [DOI: 10.1016/j.neuropsychologia.2019.05.017]
Abstract
Gaze signals are presumed to contribute to the attention imbalance in spatial neglect; direct evidence, however, is still lacking. Theoretical models for spatial attention posit an internal representation of locations that are selected in the competition for neural processing resources – an attentional priority map. Following up on our recent research showing an imbalance in the allocation of attention after an oculoproprioceptive perturbation in healthy volunteers, we investigated here whether the lesion in spatial neglect distorts the gaze direction input to this representation. Information about one's own direction of gaze is critical for the coordinate transformation between retinotopic and hand proprioceptive locations. To assess the gaze direction input to the attentional priority map, patients with left spatial neglect performed a cross-modal attention task in their normal, right hemispace. They discriminated visual targets whose location was cued by the patient's right index finger hidden from view. The locus of attention in response to the cue was defined as the location with the largest decrease in reaction time for visual discrimination in the presence vs. absence of the cue. In two control groups consisting of healthy elderly and patients with a right hemisphere lesion without neglect, the loci of attention were at the exact location of the cues. In contrast, neglect patients allocated attention 0.5°-2° rightward of the finger for all tested locations. A control task using reaching to visual targets in the absence of visual hand feedback ruled out a general error in visual localization. These findings demonstrate that in spatial neglect the gaze direction input to the attentional priority map is distorted. This observation supports the emerging view that attention and gaze are coupled and suggests that interventions that target gaze signals could alleviate spatial neglect.
Highlights: the mechanisms of left inattention in spatial neglect are incompletely understood; attention loci in visual space are displaced to the right of somatosensory cues; this indicates a distorted gaze direction input to the attentional priority map; distorted gaze direction input could lead to left-right attention imbalance.
Affiliation(s)
- Daniela Balslev
- School of Psychology and Neuroscience, University of St Andrews, St Andrews, KY169JP, UK
- Bartholomäus Odoj
- Center of Neurology, Division of Neuropsychology, Hertie Institute for Clinical Brain Research, University of Tuebingen, Tuebingen, 72076, Germany; Department of Psychology, University of Copenhagen, Copenhagen, DK, 1353, Denmark
12
Morris AP, Krekelberg B. A Stable Visual World in Primate Primary Visual Cortex. Curr Biol 2019; 29:1471-1480.e6. [PMID: 31031112] [DOI: 10.1016/j.cub.2019.03.069]
Abstract
Humans and other primates rely on eye movements to explore visual scenes and to track moving objects. As a result, the image that is projected onto the retina, and propagated throughout the visual cortical hierarchy, is almost constantly changing and makes little sense without taking into account the momentary direction of gaze. How is this achieved in the visual system? Here, we show that in primary visual cortex (V1), the earliest stage of cortical vision, neural representations carry an embedded "eye tracker" that signals the direction of gaze associated with each image. Using chronically implanted multi-electrode arrays, we recorded the activity of neurons in area V1 of macaque monkeys during tasks requiring fast (exploratory) and slow (pursuit) eye movements. Neurons were stimulated with flickering, full-field luminance noise at all times. As in previous studies, we observed neurons that were sensitive to gaze direction during fixation, despite comparable stimulation of their receptive fields. We trained a decoder to translate neural activity into metric estimates of gaze direction. This decoded signal tracked the eye accurately not only during fixation but also during fast and slow eye movements. After a fast eye movement, the eye-position signal arrived in V1 at approximately the same time at which the new visual information arrived from the retina. Using simulations, we show that this V1 eye-position signal could be used to take into account the sensory consequences of eye movements and map the fleeting positions of objects on the retina onto their stable position in the world.
Affiliation(s)
- Adam P Morris
- Neuroscience Program, Biomedicine Discovery Institute, Department of Physiology, Monash University, 26 Innovation Walk, Clayton, Victoria 3800, Australia
- Bart Krekelberg
- Center for Molecular and Behavioral Neuroscience, Rutgers University, 197 University Ave., Newark, New Jersey 07102, USA
13
Paradiso MA, Akers-Campbell S, Ruiz O, Niemeyer JE, Geman S, Loper J. Transsaccadic Information and Corollary Discharge in Local Field Potentials of Macaque V1. Front Integr Neurosci 2019; 12:63. [PMID: 30692920] [PMCID: PMC6340263] [DOI: 10.3389/fnint.2018.00063]
Abstract
Approximately three times per second, human visual perception is interrupted by a saccadic eye movement. In addition to taking the eyes to a new location, several lines of evidence suggest that saccades play multiple roles in visual perception. Indeed, it may be crucial that visual processing is informed about movements of the eyes in order to analyze visual input distinctly and efficiently on each fixation and preserve stable visual perception of the world across saccades. A variety of studies has demonstrated that activity in multiple brain areas is modulated by saccades. The hypothesis tested here is that these signals carry significant information that could be used in visual processing. To test this hypothesis, local field potentials (LFPs) were simultaneously recorded from multiple electrodes in macaque primary visual cortex (V1); support vector machines (SVMs) were used to classify the peri-saccadic LFPs. We find that LFPs in area V1 carry information that can be used to distinguish neural activity associated with fixations from saccades, precisely estimate the onset time of fixations, and reliably infer the directions of saccades. This information may be used by the brain in processes including visual stability, saccadic suppression, receptive field (RF) remapping, fixation amplification, and trans-saccadic visual perception.
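A compact sketch of the classification approach named above (SVMs applied to peri-event LFP segments), using simulated signals and concatenated channel waveforms as features; the actual feature extraction and recording details of the study may differ.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n_trials, n_channels, n_samples = 300, 16, 100        # peri-event LFP snippets (assumed sizes)

labels = rng.integers(0, 2, n_trials)                 # 0 = fixation onset, 1 = saccade onset
t = np.linspace(0, 1, n_samples)
evoked = 0.5 * np.sin(2 * np.pi * 5 * t)              # small evoked deflection on saccade trials
lfp = rng.normal(0, 1, size=(n_trials, n_channels, n_samples))
lfp += labels[:, None, None] * evoked

# Feature vector: concatenated channel waveforms (the study's features may differ).
X = lfp.reshape(n_trials, -1)
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
print("fixation vs. saccade accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```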
Affiliation(s)
- Michael A Paradiso
- Department of Neuroscience, Robert J. and Nancy D. Carney Institute for Brain Science, Brown University, Providence, RI, United States
- Seth Akers-Campbell
- Department of Neuroscience, Robert J. and Nancy D. Carney Institute for Brain Science, Brown University, Providence, RI, United States
- Octavio Ruiz
- Department of Neuroscience, Robert J. and Nancy D. Carney Institute for Brain Science, Brown University, Providence, RI, United States
- James E Niemeyer
- Department of Neuroscience, Robert J. and Nancy D. Carney Institute for Brain Science, Brown University, Providence, RI, United States
- Stuart Geman
- Department of Applied Mathematics, Robert J. and Nancy D. Carney Institute for Brain Science, Brown University, Providence, RI, United States
- Jackson Loper
- Department of Applied Mathematics, Robert J. and Nancy D. Carney Institute for Brain Science, Brown University, Providence, RI, United States
14
Abstract
Primates use frequent, rapid eye movements to sample their visual environment. This is a fruitful strategy to make the best use of the highly sensitive foveal part of the retina, but it requires neural mechanisms to bind the rapidly changing visual input into a single, stable percept. Studies investigating these neural mechanisms have typically assumed that perisaccadic perception in nonhuman primates matches that of humans. We tested this assumption by performing identical experiments in human and nonhuman primates. Our data confirm that perisaccadic visual perception of macaques and humans is qualitatively similar. Specifically, we found a reduction in detectability and mislocalization of targets presented at the time of saccades. We also found substantial differences between human and nonhuman primates. Notably, in nonhuman primates, localization that requires knowledge of eye position was less precise, nonhuman primates detected fewer perisaccadic stimuli, and perisaccadic compression was not towards the saccade target. The qualitative similarities between species support the view that the nonhuman primate is ideally suited to study aspects of brain function—such as those relying on foveal vision—that are uniquely developed in primates. The quantitative differences, however, demonstrate the need for a reassessment of the models purportedly linking neural response changes at the time of saccades with the behavioral phenomena of perisaccadic reduction of detectability and mislocalization.
Affiliation(s)
- Steffen Klingenhoefer
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ, USA
- Bart Krekelberg
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ, USA
15
Bremmer F, Kaminiarz A, Klingenhoefer S, Churan J. Decoding Target Distance and Saccade Amplitude from Population Activity in the Macaque Lateral Intraparietal Area (LIP). Front Integr Neurosci 2016; 10:30. [PMID: 27630547] [PMCID: PMC5005376] [DOI: 10.3389/fnint.2016.00030]
Abstract
Primates perform saccadic eye movements in order to bring the image of an interesting target onto the fovea. Compared to stationary targets, saccades toward moving targets are computationally more demanding since the oculomotor system must use speed and direction information about the target as well as knowledge about its own processing latency to program an adequate, predictive saccade vector. In monkeys, different brain regions have been implicated in the control of voluntary saccades, among them the lateral intraparietal area (LIP). Here we asked whether activity in area LIP reflects the distance between fovea and saccade target, the amplitude of an upcoming saccade, or both. We recorded single unit activity in area LIP of two macaque monkeys. First, we determined for each neuron its preferred saccade direction. Then, monkeys performed visually guided saccades along the preferred direction toward either stationary or moving targets in pseudo-randomized order. LIP population activity allowed us to decode both the distance between fovea and saccade target and the size of an upcoming saccade. Previous work has shown comparable results for saccade direction (Graf and Andersen, 2014a,b). Hence, LIP population activity allows the prediction of any two-dimensional saccade vector. Functional equivalents of macaque area LIP have been identified in humans. Accordingly, our results provide further support for the concept of activity from area LIP as a neural basis for the control of an oculomotor brain-machine interface.
Affiliation(s)
- Frank Bremmer
- Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany
- Andre Kaminiarz
- Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany
- Jan Churan
- Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany
16
Abstract
Our world appears stable, although our eyes constantly shift its image across the retina. What brain mechanisms allow for this perceptual stability? A recent study has brought us a step closer to answering this millennia-old question.
Affiliation(s)
- Eckart Zimmermann
- Cognitive Neuroscience (INM3), Institute of Neuroscience and Medicine, Research Centre Juelich, D-52428 Juelich, Germany
- Frank Bremmer
- Department of Neurophysics, University of Marburg, Karl-v-Frisch Str. 8a, D-35043 Marburg, Germany
17
Dowiasch S, Blohm G, Bremmer F. Neural correlate of spatial (mis-)localization during smooth eye movements. Eur J Neurosci 2016; 44:1846-55. [PMID: 27177769] [PMCID: PMC5089592] [DOI: 10.1111/ejn.13276]
Abstract
The dependence of neuronal discharge on the position of the eyes in the orbit is a functional characteristic of many visual cortical areas of the macaque. It has been suggested that these eye-position signals provide relevant information for a coordinate transformation of visual signals into a non-eye-centered frame of reference. This transformation could be an integral part for achieving visual perceptual stability across eye movements. Previous studies demonstrated close to veridical eye-position decoding during stable fixation as well as characteristic erroneous decoding across saccadic eye-movements. Here we aimed to decode eye position during smooth pursuit. We recorded neural activity in macaque area VIP during steady fixation, saccades and smooth-pursuit and investigated the temporal and spatial accuracy of eye position as decoded from the neuronal discharges. Confirming previous results, the activity of the majority of neurons depended linearly on horizontal and vertical eye position. The application of a previously introduced computational approach (isofrequency decoding) allowed eye position decoding with considerable accuracy during steady fixation. We applied the same decoder on the activity of the same neurons during smooth-pursuit. On average, the decoded signal was leading the current eye position. A model combining this constant lead of the decoded eye position with a previously described attentional bias ahead of the pursuit target describes the asymmetric mislocalization pattern for briefly flashed stimuli during smooth pursuit eye movements as found in human behavioral studies.
Affiliation(s)
- Stefan Dowiasch
- Department of Neurophysics, Philipps-University Marburg, Karl-von-Frisch-Straße 8a, 35043 Marburg, Germany
- Frank Bremmer
- Department of Neurophysics, Philipps-University Marburg, Karl-von-Frisch-Straße 8a, 35043 Marburg, Germany
18
Tian X, Yoshida M, Hafed ZM. A Microsaccadic Account of Attentional Capture and Inhibition of Return in Posner Cueing. Front Syst Neurosci 2016; 10:23. [PMID: 27013991] [PMCID: PMC4779940] [DOI: 10.3389/fnsys.2016.00023]
Abstract
Microsaccades exhibit systematic oscillations in direction after spatial cueing, and these oscillations correlate with facilitatory and inhibitory changes in behavioral performance in the same tasks. However, independent of cueing, facilitatory and inhibitory changes in visual sensitivity also arise pre-microsaccadically. Given such pre-microsaccadic modulation, a key question becomes: how much of task performance in spatial cueing may be attributable to these peri-movement changes in visual sensitivity? To investigate this question, we adopted a theoretical approach. We developed a minimalist model in which: (1) microsaccades are repetitively generated using a rise-to-threshold mechanism, and (2) pre-microsaccadic target onset is associated with direction-dependent modulation of visual sensitivity, as found experimentally. We asked whether such a model alone is sufficient to account for performance dynamics in spatial cueing. Our model not only explained fine-scale microsaccade frequency and direction modulations after spatial cueing, but it also generated classic facilitatory (i.e., attentional capture) and inhibitory [i.e., inhibition of return (IOR)] effects of the cue on behavioral performance. According to the model, cues reflexively reset the oculomotor system, which unmasks oscillatory processes underlying microsaccade generation; once these oscillatory processes are unmasked, "attentional capture" and "IOR" become direct outcomes of pre-microsaccadic enhancement or suppression, respectively. Interestingly, our model predicted that facilitatory and inhibitory effects on behavior should appear as a function of target onset relative to microsaccades even without prior cues. We experimentally validated this prediction for both saccadic and manual responses. We also established a potential causal mechanism for the microsaccadic oscillatory processes hypothesized by our model. We used retinal-image stabilization to experimentally control instantaneous foveal motor error during the presentation of peripheral cues, and we found that post-cue microsaccadic oscillations were severely disrupted. This suggests that microsaccades in spatial cueing tasks reflect active oculomotor correction of foveal motor error, rather than presumed oscillatory covert attentional processes. Taken together, our results demonstrate that peri-microsaccadic changes in vision can go a long way in accounting for some classic behavioral phenomena.
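A toy sketch of the model's first ingredient described above: a noisy rise-to-threshold generator for microsaccades that is reset by cue onset, which phase-locks subsequent microsaccades to the cue. All constants are arbitrary illustrative choices, not the published model's parameters.

```python
import numpy as np

rng = np.random.default_rng(4)

def microsaccade_times(duration_ms=2000, cue_ms=800, drift=0.0015, noise=0.01, threshold=1.0):
    """Noisy accumulator: a microsaccade is triggered when the accumulator reaches
    threshold, after which it resets; cue onset also resets it, so later microsaccades
    (and any sensitivity changes tied to them) become time-locked to the cue."""
    x, times = 0.0, []
    for t in range(duration_ms):
        if t == cue_ms:
            x = 0.0                                   # cue-induced oculomotor reset
        x += drift + noise * rng.normal()
        if x >= threshold:
            times.append(t)
            x = 0.0
    return np.array(times)

ms = microsaccade_times()
print("inter-microsaccade intervals (ms):", np.diff(ms))
print("first microsaccade after the cue (ms from cue):", ms[ms >= 800][0] - 800)
```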
Affiliation(s)
- Xiaoguang Tian
- Physiology of Active Vision Laboratory, Werner Reichardt Centre for Integrative Neuroscience, University of Tuebingen, Tuebingen, Germany; Graduate School of Neural and Behavioural Sciences, International Max-Planck Research School, University of Tuebingen, Tuebingen, Germany
- Masatoshi Yoshida
- Department of Developmental Physiology, National Institute for Physiological Sciences, Okazaki, Japan
- Ziad M Hafed
- Physiology of Active Vision Laboratory, Werner Reichardt Centre for Integrative Neuroscience, University of Tuebingen, Tuebingen, Germany
19
Morris AP, Bremmer F, Krekelberg B. The Dorsal Visual System Predicts Future and Remembers Past Eye Position. Front Syst Neurosci 2016; 10:9. [PMID: 26941617] [PMCID: PMC4764714] [DOI: 10.3389/fnsys.2016.00009]
Abstract
Eye movements are essential to primate vision but introduce potentially disruptive displacements of the retinal image. To maintain stable vision, the brain is thought to rely on neurons that carry both visual signals and information about the current direction of gaze in their firing rates. We have shown previously that these neurons provide an accurate representation of eye position during fixation, but whether they are updated fast enough during saccadic eye movements to support real-time vision remains controversial. Here we show that not only do these neurons carry a fast and accurate eye-position signal, but also that they support in parallel a range of time-lagged variants, including predictive and postdictive signals. We recorded extracellular activity in four areas of the macaque dorsal visual cortex during a saccade task, including the lateral and ventral intraparietal areas (LIP, VIP), and the middle temporal (MT) and medial superior temporal (MST) areas. As reported previously, neurons showed tonic eye-position-related activity during fixation. In addition, they showed a variety of transient changes in activity around the time of saccades, including relative suppression, enhancement, and pre-saccadic bursts for one saccade direction over another. We show that a hypothetical neuron that pools this rich population activity through a weighted sum can produce an output that mimics the true spatiotemporal dynamics of the eye. Further, with different pooling weights, this downstream eye position signal (EPS) could be updated long before (<100 ms) or after (<200 ms) an eye movement. The results suggest a flexible coding scheme in which downstream computations have access to past, current, and future eye positions simultaneously, providing a basis for visual stability and delay-free visually guided behavior.
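A small simulation of the pooling idea described above: if individual neurons carry eye position at different latencies, different weighted sums of the same population can read out past, current, or future eye position. Latency ranges, noise levels, and the ridge-regression readout are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(5)
T, n_neurons = 3000, 80                                 # 3 s at 1 kHz (illustrative)

# Step-like eye position trace: a "saccade" to a new position every 500 ms.
eye = np.repeat(rng.uniform(-15, 15, T // 500), 500)

# Each neuron carries eye position at its own latency (circular shift; edge effects ignored).
latencies = rng.integers(-120, 200, n_neurons)          # ms relative to the eye
rates = np.stack([np.roll(eye, lag) for lag in latencies], axis=1)
rates += rng.normal(0, 3, size=rates.shape)

# Different pooling weights read out past, current, or future eye position.
for shift, label in [(-100, "predictive (-100 ms)"), (0, "current"), (200, "postdictive (+200 ms)")]:
    target = np.roll(eye, shift)
    readout = Ridge(alpha=10.0).fit(rates, target)
    r = np.corrcoef(readout.predict(rates), target)[0, 1]
    print(f"{label}: readout correlation {r:.2f}")
```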
Affiliation(s)
- Adam P Morris
- Neuroscience Program, Department of Physiology, Biomedicine Discovery Institute, Monash University, Clayton, VIC, Australia
- Frank Bremmer
- Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany
- Bart Krekelberg
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ, USA
20
Lehky SR, Sereno ME, Sereno AB. Characteristics of Eye-Position Gain Field Populations Determine Geometry of Visual Space. Front Integr Neurosci 2016; 9:72. [PMID: 26834587] [PMCID: PMC4718998] [DOI: 10.3389/fnint.2015.00072]
Abstract
We have previously demonstrated differences in eye-position spatial maps for anterior inferotemporal cortex (AIT) in the ventral stream and lateral intraparietal cortex (LIP) in the dorsal stream, based on population decoding of gaze angle modulations of neural visual responses (i.e., eye-position gain fields). Here we explore the basis of such spatial encoding differences through modeling of gain field characteristics. We created a population of model neurons, each having a different eye-position gain field. This population was used to reconstruct eye-position visual space using multidimensional scaling. As gain field shapes have never been well-established experimentally, we examined different functions, including planar, sigmoidal, elliptical, hyperbolic, and mixtures of those functions. All functions successfully recovered positions, indicating weak constraints on allowable gain field shapes. We then used a genetic algorithm to modify the characteristics of model gain field populations until the recovered spatial maps closely matched those derived from monkey neurophysiological data in AIT and LIP. The primary differences found between model AIT and LIP gain fields were that AIT gain fields were more foveally dominated. That is, gain fields in AIT operated on smaller spatial scales and smaller dispersions than in LIP. Thus, we show that the geometry of eye-position visual space depends on the population characteristics of gain fields, and that differences in gain field characteristics for different cortical areas may underlie differences in the representation of space.
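A compact sketch of the reconstruction step described above: population responses are generated for a grid of eye positions under one assumed gain-field shape (planar), and classical multidimensional scaling recovers the layout of eye-position space from pairwise response dissimilarities. The genetic-algorithm fitting stage is omitted and all parameters are illustrative.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(6)
n_neurons = 100

# One assumed gain-field shape: firing rate is a planar function of (x, y) eye position.
slopes = rng.normal(0, 1, size=(n_neurons, 2))
baseline = rng.uniform(5, 15, n_neurons)

# Population responses for a 5 x 5 grid of eye positions (deg).
grid = np.array([(x, y) for x in np.linspace(-20, 20, 5) for y in np.linspace(-20, 20, 5)])
responses = baseline + grid @ slopes.T                  # (25 positions, 100 neurons)

# Recover the geometry of eye-position space from response dissimilarities
# (the MDS solution is defined only up to rotation, reflection, and scale).
dist = squareform(pdist(responses))
recovered = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dist)
print(recovered[:5].round(1))
```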
Affiliation(s)
- Sidney R Lehky
- Computational Neurobiology Laboratory, The Salk Institute, La Jolla, CA, USA
- Anne B Sereno
- Department of Neurobiology and Anatomy, University of Texas Medical School, Houston, TX, USA
21
Hafed ZM, Chen CY, Tian X. Vision, Perception, and Attention through the Lens of Microsaccades: Mechanisms and Implications. Front Syst Neurosci 2015; 9:167. [PMID: 26696842] [PMCID: PMC4667031] [DOI: 10.3389/fnsys.2015.00167]
Abstract
Microsaccades are small saccades. Neurophysiologically, microsaccades are generated using similar brainstem mechanisms as larger saccades. This suggests that peri-saccadic changes in vision that accompany large saccades might also be expected to accompany microsaccades. In this review, we highlight recent evidence demonstrating this. Microsaccades are not only associated with suppressed visual sensitivity and perception, as in the phenomenon of saccadic suppression, but they are also associated with distorted spatial representations, as in the phenomenon of saccadic compression, and pre-movement response gain enhancement, as in the phenomenon of pre-saccadic attention. Surprisingly, the impacts of peri-microsaccadic changes in vision are far reaching, both in time relative to movement onset as well as spatial extent relative to movement size. Periods of ~100 ms before and ~100 ms after microsaccades exhibit significant changes in neuronal activity and behavior, and this happens at eccentricities much larger than the eccentricities targeted by the microsaccades themselves. Because microsaccades occur during experiments enforcing fixation, these effects create a need to consider the impacts of microsaccades when interpreting a variety of experiments on vision, perception, and cognition using awake, behaving subjects. The clearest example of this idea to date has been on the links between microsaccades and covert visual attention. Recent results have demonstrated that peri-microsaccadic changes in vision play a significant role in both neuronal and behavioral signatures of covert visual attention, so much so that in at least some attentional cueing paradigms, there is very tight synchrony between microsaccades and the emergence of attentional effects. Just like large saccades, microsaccades are genuine motor outputs, and their impacts can be substantial even during perceptual and cognitive experiments not concerned with overt motor generation per se.
Affiliation(s)
- Ziad M Hafed
- Physiology of Active Vision Laboratory, Werner Reichardt Centre for Integrative Neuroscience, University of Tuebingen, Tuebingen, Germany
- Chih-Yang Chen
- Physiology of Active Vision Laboratory, Werner Reichardt Centre for Integrative Neuroscience, University of Tuebingen, Tuebingen, Germany; Graduate School of Neural and Behavioural Sciences, International Max-Planck Research School, University of Tuebingen, Tuebingen, Germany
- Xiaoguang Tian
- Physiology of Active Vision Laboratory, Werner Reichardt Centre for Integrative Neuroscience, University of Tuebingen, Tuebingen, Germany; Graduate School of Neural and Behavioural Sciences, International Max-Planck Research School, University of Tuebingen, Tuebingen, Germany
22
Cloherty SL, Crowder NA, Mustari MJ, Ibbotson MR. Saccade-induced image motion cannot account for post-saccadic enhancement of visual processing in primate MST. Front Syst Neurosci 2015; 9:122. [PMID: 26388747] [PMCID: PMC4555012] [DOI: 10.3389/fnsys.2015.00122]
Abstract
Primates use saccadic eye movements to make gaze changes. In many visual areas, including the dorsal medial superior temporal area (MSTd) of macaques, neural responses to visual stimuli are reduced during saccades but enhanced afterwards. How does this enhancement arise: from an internal mechanism associated with saccade generation, or through visual mechanisms activated by the saccade sweeping the image of the visual scene across the retina? Spontaneous activity in MSTd is elevated even after saccades made in darkness, suggesting a central mechanism for post-saccadic enhancement. However, based on the timing of this effect, it may arise from a different mechanism than occurs in normal vision. Like neural responses in MSTd, initial ocular following eye speed is enhanced after saccades, with evidence suggesting both internal and visually mediated mechanisms. Here we recorded from visual neurons in MSTd and measured responses to motion stimuli presented soon after saccades and soon after simulated saccades, i.e., saccade-like displacements of the background image during fixation. We found that neural responses in MSTd were enhanced when preceded by real saccades but not when preceded by simulated saccades. Furthermore, we also observed enhancement following real saccades made across a blank screen that generated no motion signal within the recorded neurons' receptive fields. We conclude that in MSTd the mechanism leading to post-saccadic enhancement has internal origins.
Affiliation(s)
- Shaun L Cloherty
- National Vision Research Institute, Australian College of Optometry, Carlton, VIC, Australia; Department of Optometry and Vision Sciences, Australian Research Council Centre of Excellence for Integrative Brain Function, University of Melbourne, Parkville, VIC, Australia; Department of Electrical and Electronic Engineering, University of Melbourne, Parkville, VIC, Australia
- Nathan A Crowder
- Department of Psychology and Neuroscience, Life Sciences Centre, Dalhousie University, Halifax, NS, Canada
- Michael J Mustari
- Visual Sciences, Yerkes National Primate Research Center, Emory University, Atlanta, GA, USA
- Michael R Ibbotson
- National Vision Research Institute, Australian College of Optometry, Carlton, VIC, Australia; Department of Optometry and Vision Sciences, Australian Research Council Centre of Excellence for Integrative Brain Function, University of Melbourne, Parkville, VIC, Australia
23
Abstract
Eye movements are essential to human vision. A new study shows that the tiny eye movements we make while holding our gaze on a point of interest are associated with brief, attention-like changes in the sensitivity of visual neurons.
24
Mirpour K, Bisley JW. Remapping, Spatial Stability, and Temporal Continuity: From the Pre-Saccadic to Postsaccadic Representation of Visual Space in LIP. Cereb Cortex 2015; 26:3183-95. [PMID: 26142462] [DOI: 10.1093/cercor/bhv153]
Abstract
As our eyes move, we have a strong percept that the world is stable in space and time; however, the signals in cortex coming from the retina change with each eye movement. It is not known how this changing input produces the visual percept we experience, although the predictive remapping of receptive fields has been described as a likely candidate. To explain how remapping accounts for perceptual stability, we examined responses of neurons in the lateral intraparietal area while animals performed a visual foraging task. When a stimulus was brought into the response field of a neuron that exhibited remapping, the onset of the postsaccadic representation occurred shortly after the saccade ended. Whenever a stimulus was taken out of the response field, the presaccadic representation abruptly ended shortly after the eyes stopped moving. In the 38% (20/52) of neurons that exhibited remapping, there was no more than 30 ms between the end of the presaccadic representation and the start of the postsaccadic representation and, in some neurons and in the population as a whole, the representation was continuous. We conclude by describing how this seamless shift from a presaccadic to a postsaccadic representation could contribute to spatial stability and temporal continuity.
Affiliation(s)
- James W Bisley
- Department of Neurobiology, Jules Stein Eye Institute, David Geffen School of Medicine at UCLA, Los Angeles, CA 90095, USA; Department of Psychology and the Brain Research Institute, UCLA, Los Angeles, CA 90095, USA; Center for Interdisciplinary Research (ZiF), Universität Bielefeld, Bielefeld, Germany
25
High-resolution eye tracking using V1 neuron activity. Nat Commun 2014; 5:4605. [PMID: 25197783] [PMCID: PMC4159777] [DOI: 10.1038/ncomms5605]
Abstract
Studies of high-acuity visual cortical processing have been limited by the inability to track eye position with sufficient accuracy to precisely reconstruct the visual stimulus on the retina. As a result, studies of primary visual cortex (V1) have been performed almost entirely on neurons outside the high-resolution central portion of the visual field (the fovea). Here we describe a procedure for inferring eye position using multi-electrode array recordings from V1 coupled with nonlinear stimulus processing models. We show that this method can be used to infer eye position with 1 arc-min accuracy, significantly better than conventional techniques. This allows for analysis of foveal stimulus processing, and provides a means to correct for eye movement-induced biases present even outside the fovea. This method could thus reveal critical insights into the role of eye movements in cortical coding, as well as their contribution to measures of cortical variability.
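The inference procedure is not spelled out in this summary; the following is a highly simplified, hypothetical sketch of the underlying logic (pick the eye-position offset that maximizes the likelihood of the observed spikes under a stimulus-processing model), with a one-dimensional stimulus, a toy rectified response model, and arbitrary parameters.

```python
import numpy as np

rng = np.random.default_rng(7)
offsets = np.arange(-5, 6)                    # candidate horizontal eye offsets (arbitrary grid)
n_neurons, stim_len = 50, 200

stimulus = rng.normal(0, 1, stim_len)         # 1-D luminance noise stimulus (illustrative)
rf_centers = rng.integers(20, stim_len - 20, n_neurons)

def predicted_rate(offset):
    """Toy stimulus-processing model: half-rectified stimulus value at each RF center,
    with the retinal image shifted by the candidate eye offset."""
    return np.clip(stimulus[rf_centers + offset], 0, None) * 5 + 1

true_offset = 3
spikes = rng.poisson(predicted_rate(true_offset))

# Poisson log-likelihood of the observed counts under each candidate eye offset.
loglik = [np.sum(spikes * np.log(predicted_rate(o)) - predicted_rate(o)) for o in offsets]
print("inferred eye offset:", offsets[int(np.argmax(loglik))], "| true offset:", true_offset)
```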
Collapse
|
26
|
Wright JM, Krekelberg B. Transcranial direct current stimulation over posterior parietal cortex modulates visuospatial localization. J Vis 2014; 14:14.9.5. [PMID: 25104830 DOI: 10.1167/14.9.5] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023] Open
Abstract
Visual localization is based on the complex interplay of bottom-up and top-down processing. Based on previous work, the posterior parietal cortex (PPC) is assumed to play an essential role in this interplay. In this study, we investigated the causal role of the PPC in visual localization. Specifically, our goal was to determine whether modulation of the PPC via transcranial direct current stimulation (tDCS) could induce visual mislocalization similar to that induced by an exogenous attentional cue (Wright, Morris, & Krekelberg, 2011). We placed one stimulation electrode over the right PPC and the other over the left PPC (dual tDCS) and varied the polarity of the stimulation. We found that this manipulation altered visual localization; this supports the causal involvement of the PPC in visual localization. Notably, mislocalization was more rightward when the cathode was placed over the right PPC than when the anode was placed over the right PPC. This mislocalization emerged within a few minutes of stimulation onset; it dissipated during stimulation but resurfaced after stimulation offset and lasted for another 10-15 min. On the assumption that excitability is reduced beneath the cathode and increased beneath the anode, these findings support the view that each hemisphere biases processing to the contralateral hemifield and that the balance of activation between the hemispheres contributes to position perception (Kinsbourne, 1977; Szczepanski, Konen, & Kastner, 2010).
Collapse
Affiliation(s)
- Jessica M Wright
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ, USA
| | - Bart Krekelberg
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ, USA
| |
Collapse
|
27
|
Abstract
Eye tracking experiments show that neurons respond rapidly to eye movements, allowing our view of the world to remain stable.
Collapse
Affiliation(s)
- Vincent D Costa
- Vincent D Costa is in the Laboratory of Neuropsychology, National Institute of Mental Health, Maryland, United States
| | - Bruno B Averbeck
- Bruno B Averbeck is in the Laboratory of Neuropsychology, National Institute of Mental Health, Maryland, United States
| |
Collapse
|
28
|
Abstract
Understanding how the brain computes eye position is essential to unraveling high-level visual functions such as eye movement planning, coordinate transformations and stability of spatial awareness. The lateral intraparietal area (LIP) is essential for this process. However, despite decades of research, its contribution to the eye position signal remains controversial. LIP neurons have recently been reported to inaccurately represent eye position during a saccadic eye movement, and to be too slow to support a role in high-level visual functions. We addressed this issue by predicting eye position and saccade direction from the responses of populations of LIP neurons. We found that both signals were accurately predicted before, during and after a saccade. Also, the dynamics of these signals support their contribution to visual functions. These findings provide a principled understanding of the coding of information in populations of neurons within an important node of the cortical network for visual-motor behaviors. DOI:http://dx.doi.org/10.7554/eLife.02813.001

Whenever we reach towards an object, we automatically use visual information to guide our movements and make any adjustments required. Visual feedback helps us to learn new motor skills, and ensures that our physical view of the world remains stable despite the fact that every eye movement causes the image on the retina to shift dramatically. However, such visual feedback is only useful because it can be compared with information on the position of the eyes, which is stored by the brain at all times. It is thought that one important structure where information on eye position is stored is an area towards the back of the brain called the lateral intraparietal cortex, but the exact contribution of this region has long been controversial. Graf and Andersen have now clarified the role of this area by studying monkeys as they performed an eye-movement task. Rhesus monkeys were trained to fixate on a particular location on a grid. A visual target was then flashed up briefly in another location and, after a short delay, the monkeys moved their eyes to the new location to earn a reward. As the monkeys performed the task, a group of electrodes recorded signals from multiple neurons within the lateral intraparietal cortex. This meant that Graf and Andersen could compare the neuronal responses of populations of neurons before, during, and after the movement. By studying neural populations, it was possible to accurately predict the direction in which a monkey was about to move his eyes, and also the initial and final eye positions. After a movement had occurred, the neurons also signaled the direction in which the monkey's eyes had been facing beforehand. Thus, the lateral intraparietal area stores both retrospective and forward-looking information about eye position and movement. The work of Graf and Andersen confirms that the LIP has a central role in eye movement functions, and also contributes more generally to our understanding of how behaviors are encoded at the level of populations of neurons. Such information could ultimately aid the development of neural prostheses to help patients with paralysis resulting from injury or neurodegeneration. DOI:http://dx.doi.org/10.7554/eLife.02813.002
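As a rough illustration of population decoding of eye position, the sketch below fits a cross-validated ridge-regression readout to synthetic LIP-like responses with planar eye-position gain fields. All data are simulated and the linear decoder is an assumption for the sake of the example; it is not the decoder used by Graf and Andersen.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

# Hypothetical data: spike counts from an LIP population in a fixed window,
# and the horizontal/vertical eye position on each trial (degrees).
rng = np.random.default_rng(0)
n_trials, n_neurons = 400, 60
eye_pos = rng.uniform(-15, 15, size=(n_trials, 2))     # (x, y) in deg
gains = rng.normal(0, 0.5, size=(2, n_neurons))         # eye-position gains
rates = 20 + eye_pos @ gains                             # planar gain fields
counts = rng.poisson(np.clip(rates, 0.1, None))          # observed spike counts

# Linear (ridge) readout of eye position from the population response,
# evaluated with cross-validation so the prediction is out of sample.
decoder = Ridge(alpha=1.0)
pred = cross_val_predict(decoder, counts, eye_pos, cv=5)
err = np.sqrt(np.mean(np.sum((pred - eye_pos) ** 2, axis=1)))
print(f"mean Euclidean decoding error: {err:.2f} deg")
```

In the actual experiment the same idea is applied in successive time bins, which is what allows the eye-position and saccade-direction signals to be tracked before, during, and after the movement.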
Collapse
Affiliation(s)
- Arnulf B A Graf
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, United States
| | - Richard A Andersen
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, United States
| |
Collapse
|
29
|
Neurons in cortical area MST remap the memory trace of visual motion across saccadic eye movements. Proc Natl Acad Sci U S A 2014; 111:7825-30. [PMID: 24821778 DOI: 10.1073/pnas.1401370111] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Perception of a stable visual world despite eye motion requires integration of visual information across saccadic eye movements. To investigate how the visual system deals with localization of moving visual stimuli across saccades, we observed spatiotemporal changes of receptive fields (RFs) of motion-sensitive neurons across periods of saccades in the middle temporal (MT) and medial superior temporal (MST) areas. We found that the location of the RFs moved with shifts of eye position due to saccades, indicating that motion-sensitive neurons in both areas have retinotopic RFs across saccades. Different characteristic responses emerged when the moving visual stimulus was turned off before the saccades. For MT neurons, virtually no response was observed after the saccade, suggesting that the responses of these neurons simply reflect the reafferent visual information. In contrast, most MST neurons increased their firing rates when a saccade brought the location of the visual stimulus into their RFs, where the visual stimulus itself no longer existed. These findings suggest that the responses of such MST neurons after saccades were evoked by a memory of the stimulus that had preexisted in the postsaccadic RFs ("memory remapping"). A delayed-saccade paradigm further revealed that memory remapping in MST was linked to the saccade itself, rather than to a shift in attention. Thus, the visual motion information across saccades was integrated in spatiotopic coordinates and represented in the activity of MST neurons. This is likely to contribute to the perception of a stable visual world in the presence of eye movements.
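A minimal sketch of the retinotopic-versus-spatiotopic comparison implied here: if receptive-field centers measured in screen coordinates shift one-for-one with eye position, a retinotopic frame fits better; if they stay fixed on the screen across gaze shifts, a spatiotopic frame does. The variance criterion and the toy numbers are illustrative assumptions, not the authors' analysis.

```python
import numpy as np

def rf_frame_of_reference(rf_centers_screen, eye_positions):
    """Classify whether receptive-field centers measured at several eye
    positions are better explained by a retinotopic or a spatiotopic frame.

    rf_centers_screen : (n, 2) RF centers in screen coordinates (deg)
    eye_positions     : (n, 2) eye positions for the same measurements (deg)
    """
    # Retinotopic prediction: the RF stays fixed relative to the eye, so
    # eye-centered coordinates (screen minus gaze) should barely vary.
    retinal = rf_centers_screen - eye_positions
    retino_err = np.var(retinal, axis=0).sum()

    # Spatiotopic prediction: the RF stays fixed on the screen regardless of gaze.
    spatio_err = np.var(rf_centers_screen, axis=0).sum()

    return "retinotopic" if retino_err < spatio_err else "spatiotopic"

# Toy example: RF centers that shift with gaze, as reported for MT/MST above.
eyes = np.array([[0.0, 0.0], [10.0, 0.0], [-10.0, 0.0]])
rfs = np.array([[5.0, 3.0], [15.2, 2.9], [-4.8, 3.1]])
print(rf_frame_of_reference(rfs, eyes))  # -> "retinotopic"
```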
Collapse
|
30
|
Sereno AB, Sereno ME, Lehky SR. Recovering stimulus locations using populations of eye-position modulated neurons in dorsal and ventral visual streams of non-human primates. Front Integr Neurosci 2014; 8:28. [PMID: 24734008 PMCID: PMC3975102 DOI: 10.3389/fnint.2014.00028] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2013] [Accepted: 03/08/2014] [Indexed: 11/13/2022] Open
Abstract
We recorded visual responses while monkeys fixated the same target at different gaze angles, both dorsally (lateral intraparietal cortex, LIP) and ventrally (anterior inferotemporal cortex, AIT). While eye-position modulations occurred in both areas, they were both more frequent and stronger in LIP neurons. We used an intrinsic population decoding technique, multidimensional scaling (MDS), to recover eye positions, equivalent to recovering fixated target locations. We report that eye-position based visual space in LIP was more accurate (i.e., metric). Nevertheless, the AIT spatial representation remained largely topologically correct, perhaps indicative of a categorical spatial representation (i.e., a qualitative description such as "left of" or "above" as opposed to a quantitative, metrically precise description). Additionally, we developed a simple neural model of eye position signals and illustrate that differences in single cell characteristics can influence the ability to recover target position in a population of cells. We demonstrate for the first time that the ventral stream contains sufficient information for constructing an eye-position based spatial representation. Furthermore we demonstrate, in dorsal and ventral streams as well as modeling, that target locations can be extracted directly from eye position signals in cortical visual responses without computing coordinate transforms of visual space.
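The sketch below mimics the intrinsic population-decoding step described above: pairwise distances between population response vectors recorded at different fixated target locations are embedded in two dimensions with metric multidimensional scaling, recovering the target layout up to rotation, reflection, and scale. The simulated responses and the Euclidean dissimilarity metric are assumptions; the authors' MDS analysis may differ in its details.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

# Hypothetical population responses: one vector of mean firing rates per
# fixated target location (i.e., per eye position) on a 3 x 3 grid.
rng = np.random.default_rng(1)
targets = np.array([[x, y] for x in (-10, 0, 10) for y in (-10, 0, 10)])  # deg
n_neurons = 80
gains = rng.normal(0, 0.3, size=(2, n_neurons))
responses = 15 + targets @ gains + rng.normal(0, 0.5, size=(9, n_neurons))

# Intrinsic decoding: distances between population vectors -> 2-D embedding.
dissim = squareform(pdist(responses, metric="euclidean"))
embedding = MDS(n_components=2, dissimilarity="precomputed",
                random_state=0).fit_transform(dissim)

# The embedding recovers the target layout only up to rotation, reflection,
# and scale, so it should be compared to the true grid after alignment.
print(np.round(embedding, 2))
```

Stronger, more consistent eye-position gains (as in LIP) yield a more metric reconstruction; weaker or noisier gains (as in AIT) tend to preserve only the topology of the grid.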
Collapse
Affiliation(s)
- Anne B Sereno
- Department of Neurobiology and Anatomy, University of Texas Health Science Center at Houston, Houston, TX, USA
| | | | - Sidney R Lehky
- Computational Neurobiology Laboratory, The Salk Institute for Biological Studies, La Jolla, CA, USA
| |
Collapse
|
31
|
Krekelberg B, van Wezel RJA. Neural mechanisms of speed perception: transparent motion. J Neurophysiol 2013; 110:2007-18. [PMID: 23926031 DOI: 10.1152/jn.00333.2013] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Visual motion on the macaque retina is processed by direction- and speed-selective neurons in extrastriate middle temporal cortex (MT). There is strong evidence for a link between the activity of these neurons and direction perception. However, there is conflicting evidence for a link between speed selectivity of MT neurons and speed perception. Here we study this relationship by using a strong perceptual illusion in speed perception: when two transparently superimposed dot patterns move in opposite directions, their apparent speed is much larger than the perceived speed of a single pattern moving at that physical speed. Moreover, the sensitivity for speed discrimination is reduced for such bidirectional patterns. We first confirmed these behavioral findings in human subjects and extended them to a monkey subject. Second, we determined speed tuning curves of MT neurons to bidirectional motion and compared these to speed tuning curves for unidirectional motion. Consistent with previous reports, the response to bidirectional motion was often reduced compared with unidirectional motion at the preferred speed. In addition, we found that tuning curves for bidirectional motion were shifted to lower preferred speeds. As a consequence, bidirectional motion of some speeds typically evoked larger responses than unidirectional motion. Third, we showed that these changes in neural responses could explain changes in speed perception with a simple labeled line decoder. These data provide new insight into the encoding of transparent motion patterns and provide support for the hypothesis that MT activity can be decoded for speed perception with a labeled line model.
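The labeled-line readout mentioned in the last sentence can be made concrete with a small sketch: each neuron "votes" for its preferred speed, weighted by its response, and a shift of the tuning curves toward lower preferred speeds under bidirectional motion then yields a higher decoded speed, mirroring the perceptual overestimation. The log-Gaussian tuning curves, the 1.5-fold shift, and the 0.7 gain factor are illustrative assumptions, not fitted values from the paper.

```python
import numpy as np

def labeled_line_speed(responses, preferred_speeds):
    """Labeled-line readout: each neuron contributes its preferred speed,
    weighted by its response. Averaging is done on a log-speed axis.

    responses        : (n_neurons,) firing rates to the current stimulus
    preferred_speeds : (n_neurons,) preferred speeds in deg/s
    """
    w = np.clip(responses, 0, None)
    log_estimate = np.sum(w * np.log(preferred_speeds)) / np.sum(w)
    return np.exp(log_estimate)

# Toy MT population with log-Gaussian speed tuning.
prefs = np.logspace(0, 2, 50)                      # preferred speeds, 1-100 deg/s

def tuning(speed, pref, sigma=0.6, rmax=40.0):
    return rmax * np.exp(-0.5 * ((np.log(speed) - np.log(pref)) / sigma) ** 2)

stim_speed = 10.0
uni = tuning(stim_speed, prefs)                    # unidirectional pattern
# Bidirectional pattern: illustrative response suppression plus a shift of the
# tuning curves toward lower preferred speeds, as described in the abstract.
bi = 0.7 * tuning(stim_speed, prefs / 1.5)

print(f"decoded speed, unidirectional: {labeled_line_speed(uni, prefs):.1f} deg/s")
print(f"decoded speed, bidirectional:  {labeled_line_speed(bi, prefs):.1f} deg/s")
```

Because the labels (preferred speeds) are fixed while the bidirectional responses peak in neurons with higher labels, the decoded speed for the bidirectional pattern comes out larger than the physical speed, consistent with the illusion.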
Collapse
Affiliation(s)
- Bart Krekelberg
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, New Jersey
| | | |
Collapse
|