1
Gao W, Lin Y, Shen J, Han J, Song X, Lu Y, Zhan H, Li Q, Ge H, Lin Z, Shi W, Drugowitsch J, Tang H, Chen X. Diverse effects of gaze direction on heading perception in humans. Cereb Cortex 2023. [PMID: 36734278 DOI: 10.1093/cercor/bhac541]
Abstract
Gaze changes can misalign the spatial reference frames in which visual and vestibular signals are encoded in cortex, which may affect heading discrimination. Here, by systematically manipulating eye-in-head and head-on-body positions to change subjects' gaze direction, we tested heading discrimination with visual, vestibular, and combined stimuli in a reaction-time task in which subjects controlled when to respond. We found that gaze changes induced substantial biases in perceived heading and increased subjects' discrimination thresholds and reaction times in all stimulus conditions. For the visual stimulus, the gaze effects were induced by changes in eye-in-world position, and perceived heading was biased opposite to the gaze direction. In contrast, the vestibular gaze effects were induced by changes in eye-in-head position, and perceived heading was biased in the same direction as the gaze. Although the bias was reduced when the visual and vestibular stimuli were combined, the integration of the 2 signals deviated substantially from the predictions of an extended diffusion model that accumulates evidence optimally over time and across sensory modalities. These findings reveal diverse gaze effects on heading discrimination and suggest that the transformation of spatial reference frames may underlie these effects.
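The optimal-integration benchmark this abstract tests against is standard: a drift-diffusion model that weights each cue by its reliability predicts that combined sensitivity satisfies k_comb² = k_vis² + k_vest². A minimal sketch of that benchmark, not the paper's extended model; all function names and parameter values here are illustrative:

```python
import numpy as np

def combined_sensitivity(k_vis, k_vest):
    """Optimal (reliability-weighted) combined sensitivity:
    k_comb**2 = k_vis**2 + k_vest**2."""
    return np.hypot(k_vis, k_vest)

def simulate_ddm(heading_deg, k, bound=1.0, dt=1e-3, sigma=1.0, rng=None):
    """Simulate one drift-diffusion trial: evidence drifts at k * heading
    and diffuses with noise sigma until it hits +/- bound.

    Returns (chose_rightward, reaction_time_s)."""
    rng = np.random.default_rng() if rng is None else rng
    x, t = 0.0, 0.0
    drift = k * heading_deg
    while abs(x) < bound:
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return x > 0, t
```

Under this benchmark, the combined-condition psychometric threshold should be lower than either single-cue threshold; the abstract's point is that the observed behavior fell short of this prediction when gaze was deviated.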
Affiliation(s)
- Wei Gao
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Yipeng Lin
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Jiangrong Shen
- College of Computer Science and Technology, Zhejiang University, 38 Zheda Road, Xihu District, Hangzhou 310027, China
- Jianing Han
- College of Computer Science and Technology, Zhejiang University, 38 Zheda Road, Xihu District, Hangzhou 310027, China
- Xiaoxiao Song
- Department of Liberal Arts, School of Art Administration and Education, China Academy of Art, 218 Nanshan Road, Shangcheng District, Hangzhou 310002, China
- Yukun Lu
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Huijia Zhan
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Qianbing Li
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Haoting Ge
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Zheng Lin
- Department of Psychiatry, Second Affiliated Hospital, School of Medicine, Zhejiang University, 88 Jiefang Road, Shangcheng District, Hangzhou 310009, China
- Wenlei Shi
- Center for the Study of the History of Chinese Language and Center for the Study of Language and Cognition, Zhejiang University, 866 Yuhangtang Road, Xihu District, Hangzhou 310058, China
- Jan Drugowitsch
- Department of Neurobiology, Harvard Medical School, Longwood Avenue 220, Boston, MA 02116, United States
- Huajin Tang
- College of Computer Science and Technology, Zhejiang University, 38 Zheda Road, Xihu District, Hangzhou 310027, China
- Xiaodong Chen
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
2
Abstract
Our visual system is fundamentally retinotopic. When viewing a stable scene, each eye movement shifts object features and locations on the retina. Thus, sensory representations must be updated, or remapped, across saccades to align presaccadic and postsaccadic inputs. The earliest remapping studies focused on anticipatory, presaccadic shifts of neuronal spatial receptive fields. Over time, it has become clear that there are multiple forms of remapping and that different forms of remapping may be mediated by different neural mechanisms. This review attempts to organize the various forms of remapping into a functional taxonomy based on experimental data and ongoing debates about forward versus convergent remapping, presaccadic versus postsaccadic remapping, and spatial versus attentional remapping. We integrate findings from primate neurophysiological, human neuroimaging and behavioral, and computational modeling studies. We conclude by discussing persistent open questions related to remapping, with specific attention to binding of spatial and featural information during remapping and speculations about remapping's functional significance. Expected final online publication date for the Annual Review of Vision Science, Volume 7 is September 2021. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
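The geometric distinction at the heart of the forward-versus-convergent debate mentioned above can be stated concretely: in forward remapping the receptive field (RF) shifts parallel to the saccade vector (the "future field"), while in convergent remapping it shifts toward the saccade target. A toy vector sketch, with coordinates in degrees of visual angle; the function names and the shift fraction are illustrative, not from any specific study:

```python
import numpy as np

def forward_remap(rf, fix_pre, fix_post):
    """Forward remapping: the 'future field' lies at the current RF
    location shifted by the saccade vector, i.e. the location that will
    fall inside the RF once the eye lands."""
    saccade_vec = np.asarray(fix_post, float) - np.asarray(fix_pre, float)
    return np.asarray(rf, float) + saccade_vec

def convergent_remap(rf, saccade_target, fraction=0.5):
    """Convergent remapping: the RF shifts partway toward the saccade
    target (reported shift magnitudes vary across studies; 0.5 is an
    arbitrary illustrative value)."""
    rf = np.asarray(rf, float)
    return rf + fraction * (np.asarray(saccade_target, float) - rf)
```

For a 10 degree rightward saccade, the two accounts predict different presaccadic shifts for an RF at (5, 0): forward remapping moves it to (15, 0), whereas convergent remapping pulls it toward (10, 0), which is why careful mapping of probe locations is needed to distinguish them.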
Affiliation(s)
- Julie D Golomb
- Department of Psychology, The Ohio State University, Columbus, Ohio 43210, USA
- James A Mazer
- Department of Microbiology and Cell Biology, Montana State University, Bozeman, Montana 59717, USA
3
Abstract
Remapping is a property of some cortical and subcortical neurons that update their responses around the time of an eye movement to account for the shift of stimuli on the retina due to the saccade. Physiologically, remapping is traditionally tested by briefly presenting a single stimulus around the time of the saccade and looking at the onset of the response and the locations in space to which the neuron is responsive. Here we suggest that a better way to understand the functional role of remapping is to look at the time at which the neural signal emerges when saccades are made across a stable scene. Based on data obtained using this approach, we suggest that remapping in the lateral intraparietal area is sufficient to play a role in maintaining visual stability across saccades, whereas in the frontal eye field, remapped activity carries information that affects future saccadic choices and, in a separate subset of neurons, is used to maintain a map of locations in the scene that have been previously fixated.
Affiliation(s)
- James W Bisley
- Department of Neurobiology, David Geffen School of Medicine at UCLA, Los Angeles, CA, USA; Jules Stein Eye Institute, David Geffen School of Medicine at UCLA, Los Angeles, CA, USA; Department of Psychology and the Brain Research Institute, UCLA, Los Angeles, CA, USA
- Koorosh Mirpour
- Department of Neurobiology, David Geffen School of Medicine at UCLA, Los Angeles, CA, USA
- Yelda Alkan
- Department of Neurobiology, David Geffen School of Medicine at UCLA, Los Angeles, CA, USA
4
Characterizing and dissociating multiple time-varying modulatory computations influencing neuronal activity. PLoS Comput Biol 2019; 15:e1007275. [PMID: 31513570 PMCID: PMC6759185 DOI: 10.1371/journal.pcbi.1007275]
Abstract
In many brain areas, sensory responses are heavily modulated by factors including attentional state, context, reward history, motor preparation, learned associations, and other cognitive variables. Modelling the effect of these modulatory factors on sensory responses has proven challenging, mostly due to the time-varying and nonlinear nature of the underlying computations. Here we present a computational model capable of capturing and dissociating multiple time-varying modulatory effects on neuronal responses on the order of milliseconds. The model's performance is tested on extrastriate perisaccadic visual responses in nonhuman primates. Visual neurons respond to stimuli presented around the time of saccades differently than during fixation. These perisaccadic changes include sensitivity to stimuli presented at locations outside the neuron's receptive field, which suggests a contribution of multiple sources to perisaccadic response generation. Current computational approaches cannot quantitatively characterize the contribution of each modulatory source to response generation, mainly due to the very short timescale on which the saccade takes place. In this study, we use a high spatiotemporal resolution experimental paradigm along with a novel extension of the generalized linear model (GLM) framework, termed the sparse-variable GLM, to allow for time-varying model parameters representing the temporal evolution of the system with a resolution on the order of milliseconds. We used this model framework to precisely map the temporal evolution of the spatiotemporal receptive field of visual neurons in the middle temporal area during the execution of a saccade. Moreover, an extended model based on a factorization of the sparse-variable GLM allowed us to dissociate and quantify the contribution of individual sources to the perisaccadic response.
Our results show that our novel framework can precisely capture the changes in sensitivity of neurons around the time of saccades, and provide a general framework to quantitatively track the role of multiple modulatory sources over time. The sensory responses of neurons in many brain areas, particularly those in higher prefrontal or parietal areas, are strongly influenced by factors including task rules, attentional state, context, reward history, motor preparation, learned associations, and other cognitive variables. These modulations often occur in combination, or on fast timescales which present a challenge for both experimental and modelling approaches aiming to describe the underlying mechanisms or computations. Here we present a computational model capable of capturing and dissociating multiple time-varying modulatory effects on spiking responses on the order of milliseconds. The model’s performance is evaluated by testing its ability to reproduce and dissociate multiple changes in visual sensitivity occurring in extrastriate visual cortex around the time of rapid eye movements. No previous model is capable of capturing these changes with as fine a resolution as that presented here. Our model both provides specific insight into the nature and time course of changes in visual sensitivity around the time of eye movements, and offers a general framework applicable to a wide variety of contexts in which sensory processing is modulated dynamically by multiple time-varying cognitive or behavioral factors, to understand the neuronal computations underpinning these modulations and make predictions about the underlying mechanisms.
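The core idea of a GLM with time-varying weights, as described above, can be sketched in a few lines: the Poisson spike rate at each time bin depends on a weight vector specific to that bin, and a penalty coupling adjacent bins makes millisecond-resolution weights estimable from limited data. This is a toy illustration of that idea, not the authors' sparse-variable GLM implementation; all names and the penalty form are our assumptions:

```python
import numpy as np

def poisson_glm_rate(stimulus, weights_t, bias_t):
    """Conditional intensity of a GLM whose weights vary over time.

    stimulus  : (T, D) design matrix (e.g., probe-location regressors)
    weights_t : (T, D) one weight vector per time bin, so sensitivity
                can change on a millisecond-scale grid
    bias_t    : (T,) time-varying baseline
    returns   : (T,) expected spike count per bin
    """
    return np.exp(np.einsum("td,td->t", stimulus, weights_t) + bias_t)

def neg_log_likelihood(spikes, rate, weights_t, smooth_lambda=1.0):
    """Poisson negative log-likelihood (up to a constant) plus a
    smoothness penalty on adjacent time bins, which regularizes the
    otherwise underdetermined per-bin weights."""
    nll = np.sum(rate - spikes * np.log(rate + 1e-12))
    penalty = smooth_lambda * np.sum(np.diff(weights_t, axis=0) ** 2)
    return nll + penalty
```

Fitting would minimize this objective over `weights_t` and `bias_t`; a factorized variant, as the abstract describes, would additionally decompose `weights_t` into per-source components whose contributions can be read off separately.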
5
Marino AC, Mazer JA. Saccades Trigger Predictive Updating of Attentional Topography in Area V4. Neuron 2018; 98:429-438.e4. [PMID: 29673484 DOI: 10.1016/j.neuron.2018.03.020]
Abstract
During natural behavior, saccades and attention act together to allocate limited neural resources. Attention is generally mediated by retinotopic visual neurons; therefore, specific neurons representing attended features change with each saccade. We investigated the neural mechanisms that allow attentional targeting in the face of saccades. Specifically, we looked for predictive changes in attentional modulation state or receptive field position that could stabilize attentional representations across saccades in area V4, known to be necessary for attention-dependent behavior. We recorded from neurons in monkeys performing a novel spatiotopic attention task, in which performance depended on accurate saccade compensation. Measurements of attentional modulation revealed a predictive attentional "hand-off" corresponding to a presaccadic transfer of attentional state from neurons inside the attentional focus before the saccade to those that will be inside the focus after the saccade. The predictive nature of the hand-off ensures that attentional brain maps are properly configured immediately after each saccade.
Affiliation(s)
- Alexandria C Marino
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT, USA; Medical Scientist Training Program, Yale School of Medicine, New Haven, CT, USA; Department of Neurobiology, Yale School of Medicine, New Haven, CT, USA
- James A Mazer
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT, USA; Department of Neurobiology, Yale School of Medicine, New Haven, CT, USA; Department of Psychology, Yale University, New Haven, CT, USA.
6
Abstract
Our vision depends upon shifting our high-resolution fovea to objects of interest in the visual field. Each saccade displaces the image on the retina, which should produce a chaotic scene with jerks occurring several times per second. It does not. This review examines how an internal signal in the primate brain (a corollary discharge) contributes to visual continuity across saccades. The article begins with a review of evidence for a corollary discharge in the monkey and evidence from inactivation experiments that it contributes to perception. The next section examines a specific neuronal mechanism for visual continuity, based on corollary discharge that is referred to as visual remapping. Both the basic characteristics of this anticipatory remapping and the factors that control it are enumerated. The last section considers hypotheses relating remapping to the perceived visual continuity across saccades, including remapping's contribution to perceived visual stability across saccades.
Affiliation(s)
- Robert H Wurtz
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, Maryland 20892-4435, USA
7
Yao T, Treue S, Krishna BS. Saccade-synchronized rapid attention shifts in macaque visual cortical area MT. Nat Commun 2018; 9:958. [PMID: 29511189 PMCID: PMC5840291 DOI: 10.1038/s41467-018-03398-3]
Abstract
While making saccadic eye-movements to scan a visual scene, humans and monkeys are able to keep track of relevant visual stimuli by maintaining spatial attention on them. This ability requires a shift of attentional modulation from the neuronal population representing the relevant stimulus pre-saccadically to the one representing it post-saccadically. For optimal performance, this trans-saccadic attention shift should be rapid and saccade-synchronized. Whether this is so is not known. We trained two rhesus monkeys to make saccades while maintaining covert attention at a fixed spatial location. We show that the trans-saccadic attention shift in the cortical visual middle temporal (MT) area is well synchronized to saccades. Attentional modulation crosses over from the pre-saccadic to the post-saccadic neuronal representation by about 50 ms after a saccade. Taking response latency into account, the trans-saccadic attention shift is well timed to maintain spatial attention on relevant stimuli, so that they can be optimally tracked and processed across saccades. Saccades result in remapping of the neural representation of a target object as well as its attentional modulation. Here the authors show that the trans-saccadic attentional shift is precisely synchronized with the saccade, resulting in optimal maintenance of the locus of spatial attention.
Affiliation(s)
- Tao Yao
- Cognitive Neuroscience Laboratory, German Primate Center-Leibniz Institute for Primate Research, 37077, Goettingen, Germany; Laboratory for Neuro- and Psychophysiology, KU Leuven Medical School, Campus Gasthuisberg, 3000, Leuven, Belgium
- Stefan Treue
- Cognitive Neuroscience Laboratory, German Primate Center-Leibniz Institute for Primate Research, 37077, Goettingen, Germany; Bernstein Center for Computational Neuroscience, 37077, Goettingen, Germany; Leibniz-ScienceCampus Primate Cognition, 37077, Goettingen, Germany; Faculty of Biology and Psychology, University of Goettingen, 37073, Goettingen, Germany
- B Suresh Krishna
- Cognitive Neuroscience Laboratory, German Primate Center-Leibniz Institute for Primate Research, 37077, Goettingen, Germany; Leibniz-ScienceCampus Primate Cognition, 37077, Goettingen, Germany
8
Abstract
Primates use frequent, rapid eye movements to sample their visual environment. This is a fruitful strategy to make the best use of the highly sensitive foveal part of the retina, but it requires neural mechanisms to bind the rapidly changing visual input into a single, stable percept. Studies investigating these neural mechanisms have typically assumed that perisaccadic perception in nonhuman primates matches that of humans. We tested this assumption by performing identical experiments in human and nonhuman primates. Our data confirm that perisaccadic visual perception of macaques and humans is qualitatively similar. Specifically, we found a reduction in detectability and mislocalization of targets presented at the time of saccades. We also found substantial differences between human and nonhuman primates. Notably, in nonhuman primates, localization that requires knowledge of eye position was less precise, nonhuman primates detected fewer perisaccadic stimuli, and perisaccadic compression was not towards the saccade target. The qualitative similarities between species support the view that the nonhuman primate is ideally suited to study aspects of brain function—such as those relying on foveal vision—that are uniquely developed in primates. The quantitative differences, however, demonstrate the need for a reassessment of the models purportedly linking neural response changes at the time of saccades with the behavioral phenomena of perisaccadic reduction of detectability and mislocalization.
Affiliation(s)
- Steffen Klingenhoefer
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ, USA
- Bart Krekelberg
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ, USA
9
Rao HM, Mayo JP, Sommer MA. Circuits for presaccadic visual remapping. J Neurophysiol 2016; 116:2624-2636. [PMID: 27655962 DOI: 10.1152/jn.00182.2016]
Abstract
Saccadic eye movements rapidly displace the image of the world that is projected onto the retinas. In anticipation of each saccade, many neurons in the visual system shift their receptive fields. This presaccadic change in visual sensitivity, known as remapping, was first documented in the parietal cortex and has been studied in many other brain regions. Remapping requires information about upcoming saccades via corollary discharge. Analyses of neurons in a corollary discharge pathway that targets the frontal eye field (FEF) suggest that remapping may be assembled in the FEF's local microcircuitry. Complementary data from reversible inactivation, neural recording, and modeling studies provide evidence that remapping contributes to transsaccadic continuity of action and perception. Multiple forms of remapping have been reported in the FEF and other brain areas, however, and questions remain about the reasons for these differences. In this review of recent progress, we identify three hypotheses that may help to guide further investigations into the structure and function of circuits for remapping.
Affiliation(s)
- Hrishikesh M Rao
- Department of Biomedical Engineering, Pratt School of Engineering, Duke University, Durham, North Carolina
- J Patrick Mayo
- Department of Neurobiology, Duke School of Medicine, Duke University, Durham, North Carolina
- Marc A Sommer
- Department of Biomedical Engineering, Pratt School of Engineering, Duke University, Durham, North Carolina; Department of Neurobiology, Duke School of Medicine, Duke University, Durham, North Carolina; Center for Cognitive Neuroscience, Duke University, Durham, North Carolina
10
Abstract
A basic principle in visual neuroscience is the retinotopic organization of neural receptive fields. Here, we review behavioral, neurophysiological, and neuroimaging evidence for nonretinotopic processing of visual stimuli. A number of behavioral studies have shown perception depending on object or external-space coordinate systems, in addition to retinal coordinates. Both single-cell neurophysiology and neuroimaging have provided evidence for the modulation of neural firing by gaze position and processing of visual information based on craniotopic or spatiotopic coordinates. Transient remapping of the spatial and temporal properties of neurons contingent on saccadic eye movements has been demonstrated in visual cortex, as well as frontal and parietal areas involved in saliency/priority maps, and is a good candidate to mediate some of the spatial invariance demonstrated by perception. Recent studies suggest that spatiotopic selectivity depends on a low spatial resolution system of maps that operates over a longer time frame than retinotopic processing and is strongly modulated by high-level cognitive factors such as attention. The interaction of an initial and rapid retinotopic processing stage, tied to new fixations, and a longer lasting but less precise nonretinotopic level of visual representation could underlie the perception of both a detailed and a stable visual world across saccadic eye movements.
11
Seidel Malkinson T, Pertzov Y, Zohary E. Turning Symbolic: The Representation of Motion Direction in Working Memory. Front Psychol 2016; 7:165. [PMID: 26909059 PMCID: PMC4754772 DOI: 10.3389/fpsyg.2016.00165]
Abstract
What happens to the representation of a moving stimulus when it is no longer present and its motion direction has to be maintained in working memory (WM)? Is the initial, sensorial representation maintained during the delay period, or is there another representation at a higher level of abstraction? It is also feasible that multiple representations co-exist in WM, manifesting different facets of sensory and more abstract features. To that end, we investigated the mnemonic representation of motion direction in a series of three psychophysical experiments, using a delayed motion-discrimination task (relative clockwise/counter-clockwise judgment). First, we show that a change in the dots' contrast polarity does not hamper performance. Next, we demonstrate that performance is unaffected by relocation of the Test stimulus in either retinotopic or spatiotopic coordinate frames. Finally, we show that an arrow-shaped cue presented during the delay interval between the Sample and Test stimuli strongly biases performance toward the direction of the arrow, although the cue itself is non-informative (it has no predictive value for the correct answer). These results indicate that the representation of motion direction in WM can be independent of the physical features of the stimulus (polarity or position) and has non-sensorial, abstract qualities. It is plausible that an abstract mnemonic trace is activated alongside a more basic, analog representation of the stimulus. We speculate that the specific sensitivity of the mnemonic representation to the arrow-shaped symbol may stem from the long-term learned association between directions and hour positions on a clock face.
Affiliation(s)
- Tal Seidel Malkinson
- Department of Neurobiology, Alexander Silberman Institute of Life Sciences, Hebrew University of Jerusalem, Jerusalem, Israel; Department of Psychology, Hebrew University of Jerusalem, Jerusalem, Israel; Institut National de la Santé et de la Recherche Médicale U1127, Centre National de la Recherche Scientifique UMR 7225, UMR S 1127, Évaluation Physiologique chez les Sujets Sains et Atteints de Troubles Cognitifs (PICNIC Lab), Institut du Cerveau et de la Moelle Épinière, Sorbonne Universités, Université Pierre et Marie Curie-Paris 06, Paris, France
- Yoni Pertzov
- Department of Psychology, Hebrew University of Jerusalem, Jerusalem, Israel
- Ehud Zohary
- Department of Neurobiology, Alexander Silberman Institute of Life Sciences, Hebrew University of Jerusalem, Jerusalem, Israel; The Edmond and Lily Safra Center for Brain Sciences, Hebrew University of Jerusalem, Jerusalem, Israel
12
Yao T, Treue S, Krishna BS. An Attention-Sensitive Memory Trace in Macaque MT Following Saccadic Eye Movements. PLoS Biol 2016; 14:e1002390. [PMID: 26901857 PMCID: PMC4764326 DOI: 10.1371/journal.pbio.1002390]
Abstract
We experience a visually stable world despite frequent retinal image displacements induced by eye, head, and body movements. The neural mechanisms underlying this remain unclear. One mechanism that may contribute is transsaccadic remapping, in which the responses of some neurons in various attentional, oculomotor, and visual brain areas appear to anticipate the consequences of saccades. The functional role of transsaccadic remapping is actively debated, and many of its key properties remain unknown. Here, recording from two monkeys trained to make a saccade while directing attention to one of two spatial locations, we show that neurons in the middle temporal area (MT), a key locus in the motion-processing pathway of humans and macaques, show a form of transsaccadic remapping called a memory trace. The memory trace in MT neurons is enhanced by the allocation of top-down spatial attention. Our data provide the first demonstration, to our knowledge, of the influence of top-down attention on the memory trace anywhere in the brain. We find evidence only for a small and transient effect of motion direction on the memory trace (and in only one of two monkeys), arguing against a role for MT in the theoretically critical yet empirically contentious phenomenon of spatiotopic feature-comparison and adaptation transfer across saccades. Our data support the hypothesis that transsaccadic remapping represents the shift of attentional pointers in a retinotopic map, so that relevant locations can be tracked and rapidly processed across saccades. Our results resolve important issues concerning the perisaccadic representation of visual stimuli in the dorsal stream and demonstrate a significant role for top-down attention in modulating this representation. How does the brain keep track of specific attended features after eye movements? A new study of the macaque brain implicates the middle temporal (MT) area in the remapping of attentional pointers across saccades. 
Humans experience a visually stable world despite the fact that eye, head, and body movements cause frequent shifts of the image on the retina. Humans and monkeys are also able to keep track of visual stimuli across such movements. One mechanism that may contribute to these abilities is “transsaccadic remapping,” in which the responses of some neurons in various attentional, oculomotor, and visual brain areas appear to anticipate the consequences of saccades. A current hypothesis proposes that the brain maintains “attentional pointers” to the locations of relevant stimuli and that, via transsaccadic remapping, it rapidly relocates these pointers to compensate for intervening eye movements. Whether stimulus features are also remapped across saccades (along with their location) remains unclear. Here, we show the presence of transsaccadic remapping in a macaque monkey brain area critical for visual motion processing, the middle temporal area (MT). This remapped response is stronger for an attended stimulus. We find only weak evidence for motion-direction information in the remapped response. These results support the attentional pointer hypothesis and demonstrate for the first time, to our knowledge, the impact of top-down attention on transsaccadic remapping in the brain.
Affiliation(s)
- Tao Yao
- Cognitive Neuroscience Laboratory, German Primate Center, Goettingen, Germany
- Stefan Treue
- Cognitive Neuroscience Laboratory, German Primate Center, Goettingen, Germany
- Bernstein Center for Computational Neuroscience, Goettingen, Germany
- Faculty of Biology and Psychology, Goettingen University, Goettingen, Germany
- B. Suresh Krishna
- Cognitive Neuroscience Laboratory, German Primate Center, Goettingen, Germany
13
Marino AC, Mazer JA. Perisaccadic Updating of Visual Representations and Attentional States: Linking Behavior and Neurophysiology. Front Syst Neurosci 2016; 10:3. [PMID: 26903820 PMCID: PMC4743436 DOI: 10.3389/fnsys.2016.00003]
Abstract
During natural vision, saccadic eye movements lead to frequent retinal image changes that result in different neuronal subpopulations representing the same visual feature across fixations. Despite these potentially disruptive changes to the neural representation, our visual percept is remarkably stable. Visual receptive field remapping, characterized as an anticipatory shift in the position of a neuron's spatial receptive field immediately before saccades, has been proposed as one possible neural substrate for visual stability. Many of the specific properties of remapping, e.g., the exact direction of remapping relative to the saccade vector and the precise mechanisms by which remapping could instantiate stability, remain a matter of debate. Recent studies have also shown that visual attention, like perception itself, can be sustained across saccades, suggesting that the attentional control system can also compensate for eye movements. Classical remapping could have an attentional component, or there could be a distinct attentional analog of visual remapping. At this time we do not yet fully understand how the stability of attentional representations relates to perisaccadic receptive field shifts. In this review, we develop a vocabulary for discussing perisaccadic shifts in receptive field location and perisaccadic shifts of attentional focus, review and synthesize behavioral and neurophysiological studies of perisaccadic perception and perisaccadic attention, and identify open questions that remain to be experimentally addressed.
Affiliation(s)
- Alexandria C Marino
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT, USA; Medical Scientist Training Program, Yale University School of Medicine, New Haven, CT, USA
- James A Mazer
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT, USA; Department of Neurobiology, Yale University School of Medicine, New Haven, CT, USA; Department of Psychology, Yale University, New Haven, CT, USA
14
Two distinct types of remapping in primate cortical area V4. Nat Commun 2016; 7:10402. [PMID: 26832423 PMCID: PMC4740356 DOI: 10.1038/ncomms10402] [Citation(s) in RCA: 67] [Impact Index Per Article: 8.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2015] [Accepted: 12/08/2015] [Indexed: 11/25/2022] Open
Abstract
Visual neurons typically receive information from a limited portion of the retina, and such receptive fields are a key organizing principle for much of visual cortex. At the same time, there is strong evidence that receptive fields transiently shift around the time of saccades. The nature of the shift is controversial: Previous studies have found shifts consistent with a role for perceptual constancy; other studies suggest a role in the allocation of spatial attention. Here we present evidence that both the previously documented functions exist in individual neurons in primate cortical area V4. Remapping associated with perceptual constancy occurs for saccades in all directions, while attentional shifts mainly occur for neurons with receptive fields in the same hemifield as the saccade end point. The latter are relatively sluggish and can be observed even during saccade planning. Overall these results suggest a complex interplay of visual and extraretinal influences during the execution of saccades. Visual receptive fields are known to change positions around the time of a saccade, but the nature of this remapping is unclear. Here Neupane and colleagues show that neurons in area V4 of the visual cortex exhibit two types of remapping, one consistent with a role in maintaining perceptual stability, and a second that seems to reflect shifts of attention.
15
Transsaccadic processing: stability, integration, and the potential role of remapping. Atten Percept Psychophys 2015; 77:3-27. [PMID: 25380979 DOI: 10.3758/s13414-014-0751-y] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
While our frequent saccades allow us to sample the complex visual environment in a highly efficient manner, they also raise certain challenges for interpreting and acting upon visual input. In this selective review, we discuss key findings from the domains of cognitive psychology, visual perception, and neuroscience concerning two such challenges: (1) maintaining the phenomenal experience of visual stability despite our rapidly shifting gaze, and (2) integrating visual information across discrete fixations. In the first two sections of the article, we focus primarily on behavioral findings. Next, we examine the possibility that a neural phenomenon known as predictive remapping may provide an explanation for aspects of transsaccadic processing. In this section of the article, we delineate and critically evaluate multiple proposals about the potential role of predictive remapping in light of both theoretical principles and empirical findings.
16
Smith JET, Beliveau V, Schoen A, Remz J, Zhan CA, Cook EP. Dynamics of the functional link between area MT LFPs and motion detection. J Neurophysiol 2015; 114:80-98. [PMID: 25948867 DOI: 10.1152/jn.00058.2015] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2015] [Accepted: 04/30/2015] [Indexed: 01/24/2023] Open
Abstract
The evolution of a visually guided perceptual decision results from multiple neural processes, and recent work suggests that signals with different neural origins are reflected in separate frequency bands of the cortical local field potential (LFP). Spike activity and LFPs in the middle temporal area (MT) have a functional link with the perception of motion stimuli (referred to as neural-behavioral correlation). To cast light on the different neural origins that underlie this functional link, we compared the temporal dynamics of the neural-behavioral correlations of MT spikes and LFPs. Wide-band activity was simultaneously recorded from two locations of MT from monkeys performing a threshold, two-stimuli, motion pulse detection task. Shortly after the motion pulse occurred, we found that high-gamma (100-200 Hz) LFPs had a fast, positive correlation with detection performance that was similar to that of the spike response. Beta (10-30 Hz) LFPs were negatively correlated with detection performance, but their dynamics were much slower, peaked late, and did not depend on stimulus configuration or reaction time. A late change in the correlation of all LFPs across the two recording electrodes suggests that a common input arrived at both MT locations prior to the behavioral response. Our results support a framework in which early high-gamma LFPs likely reflected fast, bottom-up, sensory processing that was causally linked to perception of the motion pulse. In comparison, late-arriving beta and high-gamma LFPs likely reflected slower, top-down, sources of neural-behavioral correlation that originated after the perception of the motion pulse.
Affiliation(s)
- Jackson E T Smith
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, United Kingdom; Department of Physiology, McGill University, Montreal, Quebec, Canada
- Vincent Beliveau
- Department of Physiology, McGill University, Montreal, Quebec, Canada
- Alan Schoen
- Department of Physiology, McGill University, Montreal, Quebec, Canada
- Jordana Remz
- Department of Physiology, McGill University, Montreal, Quebec, Canada
- Chang'an A Zhan
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Erik P Cook
- Department of Physiology, McGill University, Montreal, Quebec, Canada
17
Neurons in cortical area MST remap the memory trace of visual motion across saccadic eye movements. Proc Natl Acad Sci U S A 2014; 111:7825-30. [PMID: 24821778 DOI: 10.1073/pnas.1401370111] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Perception of a stable visual world despite eye motion requires integration of visual information across saccadic eye movements. To investigate how the visual system deals with localization of moving visual stimuli across saccades, we observed spatiotemporal changes of receptive fields (RFs) of motion-sensitive neurons across periods of saccades in the middle temporal (MT) and medial superior temporal (MST) areas. We found that the location of the RFs moved with shifts of eye position due to saccades, indicating that motion-sensitive neurons in both areas have retinotopic RFs across saccades. Different characteristic responses emerged when the moving visual stimulus was turned off before the saccades. For MT neurons, virtually no response was observed after the saccade, suggesting that the responses of these neurons simply reflect the reafferent visual information. In contrast, most MST neurons increased their firing rates when a saccade brought the location of the visual stimulus into their RFs, where the visual stimulus itself no longer existed. These findings suggest that the responses of such MST neurons after saccades were evoked by a memory of the stimulus that had preexisted in the postsaccadic RFs ("memory remapping"). A delayed-saccade paradigm further revealed that memory remapping in MST was linked to the saccade itself, rather than to a shift in attention. Thus, the visual motion information across saccades was integrated in spatiotopic coordinates and represented in the activity of MST neurons. This is likely to contribute to the perception of a stable visual world in the presence of eye movements.
18
Zhang E, Zhang GL, Li W. Spatiotopic perceptual learning mediated by retinotopic processing and attentional remapping. Eur J Neurosci 2013; 38:3758-67. [PMID: 24118649 DOI: 10.1111/ejn.12379] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2013] [Accepted: 09/02/2013] [Indexed: 11/28/2022]
Abstract
Visual processing takes place in both retinotopic and spatiotopic frames of reference. Whereas visual perceptual learning is usually specific to the trained retinotopic location, our recent study has shown spatiotopic specificity of learning in motion direction discrimination. To explore the mechanisms underlying spatiotopic processing and learning, and to examine whether similar mechanisms also exist in visual form processing, we trained human subjects to discriminate an orientation difference between two successively displayed stimuli, with a gaze shift in between to manipulate their positional relation in the spatiotopic frame of reference without changing their retinal locations. Training resulted in better orientation discriminability for the trained than for the untrained spatial relation of the two stimuli. This learning-induced spatiotopic preference was seen only at the trained retinal location and orientation, suggesting experience-dependent spatiotopic form processing directly based on a retinotopic map. Moreover, a similar but weaker learning-induced spatiotopic preference was still present even if the first stimulus was rendered irrelevant to the orientation discrimination task by having the subjects judge the orientation of the second stimulus relative to its mean orientation in a block of trials. However, if the first stimulus was absent, and thus no attention was captured before the gaze shift, the learning produced no significant spatiotopic preference, suggesting an important role of attentional remapping in spatiotopic processing and learning. Taken together, our results suggest that spatiotopic visual representation can be mediated by interactions between retinotopic processing and attentional remapping, and can be modified by perceptual training.
Affiliation(s)
- En Zhang
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, 100875, China
19
Abstract
How is visual space represented in cortical area MT+? At a relatively coarse scale, the organization of MT+ is debated; retinotopic, spatiotopic, or mixed representations have all been proposed. However, none of these representations entirely explain the perceptual localization of objects at a fine spatial scale--a scale relevant for tasks like navigating or manipulating objects. For example, perceived positions of objects are strongly modulated by visual motion; stationary flashes appear shifted in the direction of nearby motion. Does spatial coding in MT+ reflect these shifts in perceived position? We performed an fMRI experiment employing this "flash-drag" effect and found that flashes presented near motion produced patterns of activity similar to physically shifted flashes in the absence of motion. This reveals a motion-dependent change in the neural representation of object position in human MT+, a process that could help compensate for perceptual and motor delays in localizing objects in dynamic scenes.
20
Abstract
It has been suggested that one way we may create a stable percept of the visual world across multiple eye movements is to pass information from one set of neurons to another around the time of each eye movement. Previous studies have shown that some neurons in the lateral intraparietal area (LIP) exhibit anticipatory remapping: these neurons produce a visual response to a stimulus that will enter their receptive field after a saccade but before it actually does so. LIP responses during fixation are thought to represent attentional priority, behavioral relevance, or value. In this study, we test whether the remapped response represents this attentional priority by examining the activity of LIP neurons while animals perform a visual foraging task. We find that the population responds more to a target than to a distractor before the saccade even begins to bring the stimulus into the receptive field. Within 20 ms of the saccade ending, the responses in almost one-third of LIP neurons closely resemble the responses that will emerge during stable fixation. Finally, we show that, in these neurons and in the population as a whole, this remapping occurs for all stimuli in all locations across the visual field and for both long and short saccades. We conclude that this complete remapping of attentional priority across the visual field could underlie spatial stability across saccades.
21
Abstract
Successful visually guided behavior requires information about spatiotopic (i.e., world-centered) locations, but how accurately is this information actually derived from initial retinotopic (i.e., eye-centered) visual input? We conducted a spatial working memory task in which subjects remembered a cued location in spatiotopic or retinotopic coordinates while making guided eye movements during the memory delay. Surprisingly, after a saccade, subjects were significantly more accurate and precise at reporting retinotopic locations than spatiotopic locations. This difference grew with each eye movement, such that spatiotopic memory continued to deteriorate, whereas retinotopic memory did not accumulate error. The loss in spatiotopic fidelity is therefore not a generic consequence of eye movements, but a direct result of converting visual information from native retinotopic coordinates. Thus, despite our conscious experience of an effortlessly stable spatiotopic world and our lifetime of practice with spatiotopic tasks, memory is actually more reliable in raw retinotopic coordinates than in ecologically relevant spatiotopic coordinates.
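The core result above (spatiotopic memory degrades with each saccade while retinotopic memory does not) follows from a simple noise-accumulation argument, which can be sketched as a toy Monte Carlo simulation. This is our illustration only, not the study's analysis; the Gaussian noise magnitudes and the assumption of one independent noisy conversion per saccade are hypothetical.

```python
import random

def simulate(n_saccades, base_noise=0.5, conversion_noise=0.5,
             n_trials=10000, seed=1):
    """Toy sketch: both memory codes start with the same Gaussian encoding
    noise, but only the spatiotopic code pays an extra noisy
    retinotopic-to-spatiotopic conversion after every saccade."""
    rng = random.Random(seed)
    retino_errs, spatio_errs = [], []
    for _ in range(n_trials):
        retino = rng.gauss(0.0, base_noise)   # eye-centered trace: encoding noise only
        spatio = rng.gauss(0.0, base_noise)
        for _ in range(n_saccades):
            # each gaze shift forces a fresh coordinate conversion
            spatio += rng.gauss(0.0, conversion_noise)
        retino_errs.append(abs(retino))
        spatio_errs.append(abs(spatio))
    mean = lambda xs: sum(xs) / len(xs)
    return mean(retino_errs), mean(spatio_errs)

# Mean absolute report error: flat for retinotopic, growing for spatiotopic.
for n in (1, 2, 3):
    retino, spatio = simulate(n)
    print(f"{n} saccade(s): retinotopic {retino:.2f}, spatiotopic {spatio:.2f}")
```

Under these assumptions the spatiotopic error grows roughly as the square root of the number of saccades, reproducing the qualitative pattern reported in the abstract.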
22
Dynamics of eye-position signals in the dorsal visual system. Curr Biol 2012; 22:173-9. [PMID: 22225775 DOI: 10.1016/j.cub.2011.12.032] [Citation(s) in RCA: 56] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2011] [Revised: 12/01/2011] [Accepted: 12/12/2011] [Indexed: 11/22/2022]
Abstract
BACKGROUND: Many visual areas of the primate brain contain signals related to the current position of the eyes in the orbit. These cortical eye-position signals are thought to underlie the transformation of retinal input (which changes with every eye movement) into a stable representation of visual space. For this coding scheme to work, such signals would need to be updated fast enough to keep up with the eye during normal exploratory behavior. We examined the dynamics of cortical eye-position signals in four dorsal visual areas of the macaque brain: the lateral and ventral intraparietal areas (LIP; VIP), the middle temporal area (MT), and the medial-superior temporal area (MST). We recorded extracellular activity of single neurons while the animal performed sequences of fixations and saccades in darkness.
RESULTS: The data show that eye-position signals are updated predictively, such that the representation shifts in the direction of a saccade prior to (<100 ms) the actual eye movement. Despite this early start, eye-position signals remain inaccurate until shortly after (10-150 ms) the eye movement. By using simulated behavioral experiments, we show that this brief misrepresentation of eye position provides a neural explanation for the psychophysical phenomenon of perisaccadic mislocalization, in which observers misperceive the positions of visual targets flashed around the time of saccadic eye movements.
CONCLUSIONS: Together, these results suggest that eye-position signals in the dorsal visual system are updated rapidly across eye movements and play a direct role in perceptual localization, even when they are erroneous.
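The explanation of perisaccadic mislocalization described above (a predictive but sluggish internal eye-position signal that briefly disagrees with the true eye position) can be sketched in a few lines. This is a hypothetical toy model, not the authors' simulation; the amplitudes and time constants below are illustrative stand-ins.

```python
def eye_position(t, amplitude=10.0, duration=0.05):
    """Actual eye position (deg): a saccade from 0 to `amplitude` deg
    starting at t = 0 s and lasting `duration` s (values illustrative)."""
    if t <= 0.0:
        return 0.0
    if t >= duration:
        return amplitude
    return amplitude * t / duration  # linear sweep during the movement

def internal_eye_signal(t, amplitude=10.0, lead=0.05, rise=0.15):
    """Internal eye-position signal: starts shifting `lead` s before the
    saccade (predictive) but takes `rise` s to settle, so it is briefly
    inaccurate around the time of the movement."""
    if t <= -lead:
        return 0.0
    return min(amplitude, amplitude * (t + lead) / rise)

def perceived_flash_position(flash_world_pos, t):
    """Perceived position = retinal position of the flash plus the
    (possibly erroneous) internal eye-position signal at flash time."""
    retinal = flash_world_pos - eye_position(t)
    return retinal + internal_eye_signal(t)

# A flash at 5 deg is veridical well before and well after the saccade,
# but mislocalized toward the saccade target just before movement onset,
# when the predictive signal has already shifted but the eye has not.
for t in (-0.5, -0.02, 0.5):
    print(f"t = {t:+.2f} s: perceived at {perceived_flash_position(5.0, t):.1f} deg")
```

The transient disagreement between `eye_position` and `internal_eye_signal` produces localization errors only in a brief perisaccadic window, matching the pattern the abstract describes.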
23
Abstract
Perceptual stability requires the integration of information across eye movements. We first tested the hypothesis that motion signals are integrated by neurons whose receptive fields (RFs) do not move with the eye but stay fixed in the world. Specifically, we measured the RF properties of neurons in the middle temporal area (MT) of macaques (Macaca mulatta) during the slow phase of optokinetic nystagmus. Using a novel method to estimate RF locations for both spikes and local field potentials, we found that the location on the retina that changed spike rates or local field potentials did not change with eye position; RFs moved with the eye. Second, we tested the hypothesis that neurons link information across eye positions by remapping the retinal location of their RFs to future locations. To test this, we compared RF locations during leftward and rightward slow phases of optokinetic nystagmus. We found no evidence for remapping during slow eye movements; the RF location was not affected by eye-movement direction. Together, our results show that RFs of MT neurons and the aggregate activity reflected in local field potentials are yoked to the eye during slow eye movements. This implies that individual MT neurons do not integrate sensory information from a single position in the world across eye movements. Future research will have to determine whether such integration, and the construction of perceptual stability, takes place in the form of a distributed population code in eye-centered visual cortex or is deferred to downstream areas.