1
Weng G, Akbarian A, Clark K, Noudoost B, Nategh N. Neural correlates of perisaccadic visual mislocalization in extrastriate cortex. Nat Commun 2024; 15:6335. PMID: 39068199; PMCID: PMC11283495; DOI: 10.1038/s41467-024-50545-0.
Abstract
When interacting with the visual world using saccadic eye movements (saccades), the perceived location of visual stimuli becomes biased, a phenomenon called perisaccadic mislocalization. However, the neural mechanism underlying this altered visuospatial perception and its potential link to other perisaccadic perceptual phenomena have not been established. Using electrophysiological recordings from extrastriate areas of four male macaque monkeys, combined with a computational model, we quantified spatial bias around the saccade target (ST) based on the perisaccadic dynamics of extrastriate spatiotemporal sensitivity captured by a statistical model. This approach predicted the perisaccadic spatial bias around the ST, consistent with behavioral data, and revealed the precise neuronal response components underlying the representational bias. These findings also establish the crucial role of increased sensitivity near the ST, for neurons with receptive fields far from the ST, in driving the ST spatial bias. Moreover, we showed that, by allocating more resources to visual target representation, visual areas enhance their representation of the ST location, even at the expense of transient distortions in spatial representation. This potential neural basis for perisaccadic ST representation also supports a general role for extrastriate neurons in creating the perception of stimulus location.
Affiliation(s)
- Geyu Weng: Department of Biomedical Engineering, University of Utah, Salt Lake City, UT, USA; Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, USA
- Amir Akbarian: Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, USA
- Kelsey Clark: Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, USA
- Behrad Noudoost: Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, USA
- Neda Nategh: Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, USA; Department of Electrical and Computer Engineering, University of Utah, Salt Lake City, UT, USA

2
Moran C, Johnson PA, Landau AN, Hogendoorn H. Decoding Remapped Spatial Information in the Peri-Saccadic Period. J Neurosci 2024; 44:e2134232024. PMID: 38871460; PMCID: PMC11270511; DOI: 10.1523/jneurosci.2134-23.2024.
Abstract
It has been suggested that, prior to a saccade, visual neurons predictively respond to stimuli that will fall in their receptive fields after completion of the saccade. This saccadic remapping process is thought to compensate for the shift of the visual world across the retina caused by eye movements. To map the timing of this predictive process in the brain, we recorded neural activity using electroencephalography during a saccade task. Human participants (male and female) made saccades between two fixation points while covertly attending to oriented gratings briefly presented at various locations on the screen. Data recorded during trials in which participants maintained fixation were used to train classifiers on stimuli in different positions. Subsequently, data collected during saccade trials were used to test for the presence of remapped stimulus information at the post-saccadic retinotopic location in the peri-saccadic period, providing unique insight into when remapped information becomes available. We found that the stimulus could be decoded at the remapped location ∼180 ms post-stimulus onset, but only when the stimulus was presented 100-200 ms before saccade onset. Within this range, we found that the timing of remapping was dictated by stimulus onset rather than saccade onset. We conclude that presenting the stimulus immediately before the saccade allows for optimal integration of the corollary discharge signal with the incoming peripheral visual information, resulting in a remapping of activation to the relevant post-saccadic retinotopic neurons.
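The train-on-fixation, test-on-saccade decoding logic described in this abstract can be sketched with a toy nearest-centroid classifier. Everything below (the simulated sensor channels, noise level, locations, and the classifier itself) is an invented stand-in for illustration, not the authors' EEG pipeline.

```python
import random

random.seed(0)
N_CHANNELS = 8   # toy "sensor" channels, one per stimulus location (illustrative)
NOISE = 0.3      # per-channel Gaussian noise (arbitrary choice)

def trial(active_channel):
    """One simulated trial: unit response at the active channel plus noise."""
    return [(1.0 if i == active_channel else 0.0) + random.gauss(0, NOISE)
            for i in range(N_CHANNELS)]

# "Fixation" trials: train one centroid per stimulus location.
centroids = []
for pos in range(N_CHANNELS):
    trials = [trial(pos) for _ in range(50)]
    centroids.append([sum(t[i] for t in trials) / len(trials)
                      for i in range(N_CHANNELS)])

# "Saccade" trials: the stimulus was physically shown at location 2, but the
# remapped activity appears at the post-saccadic retinotopic location 5.
REMAPPED_LOC = 5
test_trials = [trial(REMAPPED_LOC) for _ in range(20)]
test_mean = [sum(t[i] for t in test_trials) / len(test_trials)
             for i in range(N_CHANNELS)]

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# The fixation-trained classifier reads out the remapped location.
decoded = min(range(N_CHANNELS), key=lambda p: sq_dist(test_mean, centroids[p]))
```

Decoding `REMAPPED_LOC` rather than the physical stimulus location is, schematically, the signature the authors test for in the peri-saccadic window.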
Affiliation(s)
- Caoimhe Moran: Melbourne School of Psychological Sciences, The University of Melbourne, Parkville, Melbourne, Victoria 3052, Australia; Department of Psychology, Hebrew University of Jerusalem, Mount Scopus, Jerusalem 9190501, Israel
- Philippa A Johnson: Melbourne School of Psychological Sciences, The University of Melbourne, Parkville, Melbourne, Victoria 3052, Australia; Cognitive Psychology Unit, Institute of Psychology & Leiden Institute for Brain and Cognition, Leiden University, Leiden 2333 AK, The Netherlands
- Ayelet N Landau: Department of Psychology, Hebrew University of Jerusalem, Mount Scopus, Jerusalem 9190501, Israel; Department of Cognitive and Brain Sciences, Hebrew University of Jerusalem, Mount Scopus, Jerusalem 9190501, Israel
- Hinze Hogendoorn: Melbourne School of Psychological Sciences, The University of Melbourne, Parkville, Melbourne, Victoria 3052, Australia; School of Psychology and Counselling, Queensland University of Technology, Kelvin Grove, Queensland 4059, Australia

3
Rafal RD. Seeing without a Scene: Neurological Observations on the Origin and Function of the Dorsal Visual Stream. J Intell 2024; 12:50. PMID: 38786652; PMCID: PMC11121949; DOI: 10.3390/jintelligence12050050.
Abstract
In all vertebrates, visual signals from each visual field project to the opposite midbrain tectum (called the superior colliculus in mammals). The tectum/colliculus computes visual salience to select targets for context-contingent visually guided behavior: a frog will orient toward a small, moving stimulus (insect prey) but away from a large, looming stimulus (a predator). In mammals, visual signals competing for behavioral salience are also transmitted to the visual cortex, where they are integrated with collicular signals and then projected via the dorsal visual stream to the parietal and frontal cortices. To control visually guided behavior, visual signals must be encoded in body-centered (egocentric) coordinates, and so they must be integrated with information encoding eye position in the orbit, that is, where the individual is looking. Eye position information is derived from copies of eye movement signals transmitted from the colliculus to the frontal and parietal cortices. In the intraparietal cortex of the dorsal stream, eye movement signals from the colliculus are used to predict the sensory consequences of action. These eye position signals are integrated with retinotopic visual signals to generate scaffolding for a visual scene that contains goal-relevant objects that are seen to have spatial relationships with each other and with the observer. Patients with degeneration of the superior colliculus, although they can see, behave as though they are blind. Bilateral damage to the intraparietal cortex of the dorsal stream causes the visual scene to disappear, leaving awareness of only one object that is lost in space. This tutorial considers what we have learned from patients with damage to the colliculus, or to the intraparietal cortex, about how the phylogenetically older midbrain and the newer mammalian dorsal cortical visual stream jointly coordinate the experience of a spatially and temporally coherent visual scene.
Affiliation(s)
- Robert D Rafal: Department of Psychological and Brain Sciences, University of Delaware, Newark, DE 19716, USA

4
Weng G, Clark K, Akbarian A, Noudoost B, Nategh N. Time-varying generalized linear models: characterizing and decoding neuronal dynamics in higher visual areas. Front Comput Neurosci 2024; 18:1273053. PMID: 38348287; PMCID: PMC10859875; DOI: 10.3389/fncom.2024.1273053.
Abstract
To create a behaviorally relevant representation of the visual world, neurons in higher visual areas exhibit dynamic response changes to account for the time-varying interactions between external (e.g., visual input) and internal (e.g., reward value) factors. The resulting high-dimensional representational space poses challenges for precisely quantifying individual factors' contributions to the representation and readout of sensory information during a behavior. The widely used point process generalized linear model (GLM) approach provides a powerful framework for a quantitative description of neuronal processing as a function of various sensory and non-sensory inputs (encoding) as well as linking particular response components to particular behaviors (decoding), at the level of single trials and individual neurons. However, most existing variations of GLMs assume the neural systems to be time-invariant, making them inadequate for modeling nonstationary characteristics of neuronal sensitivity in higher visual areas. In this review, we summarize some of the existing GLM variations, with a focus on time-varying extensions. We highlight their applications to understanding neural representations in higher visual areas and decoding transient neuronal sensitivity as well as linking physiology to behavior through manipulation of model components. This time-varying class of statistical models provides valuable insights into the neural basis of various visual behaviors in higher visual areas and holds significant potential for uncovering the fundamental computational principles that govern neuronal processing underlying various behaviors in different regions of the brain.
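As a minimal, hedged illustration of the point-process GLM encoding idea this review builds on: the sketch below simulates Poisson spike counts driven by a one-dimensional stimulus and recovers the encoding weight by Newton's method. The single-lag design, the parameter values, and the hand-rolled fitter are all invented for illustration; the time-varying extensions the review surveys additionally let the weights themselves change around events such as saccade onset.

```python
import math
import random

random.seed(1)

# Simulate spike counts: rate_t = exp(b + w * stim_t), counts ~ Poisson(rate_t).
T, B_TRUE, W_TRUE = 4000, -1.0, 0.7   # invented ground-truth parameters
stim = [random.gauss(0, 1) for _ in range(T)]

def poisson(lam):
    """Knuth's multiplication algorithm for a Poisson draw (stdlib has none)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

spikes = [poisson(math.exp(B_TRUE + W_TRUE * s)) for s in stim]

# Fit the Poisson GLM by Newton's method (2 parameters: intercept b, weight w).
b_hat = w_hat = 0.0
for _ in range(25):
    g0 = g1 = h00 = h01 = h11 = 0.0
    for s, y in zip(stim, spikes):
        mu = math.exp(b_hat + w_hat * s)
        g0 += y - mu            # log-likelihood gradient wrt b
        g1 += (y - mu) * s      # log-likelihood gradient wrt w
        h00 += mu               # Fisher information entries
        h01 += mu * s
        h11 += mu * s * s
    det = h00 * h11 - h01 * h01
    b_hat += (h11 * g0 - h01 * g1) / det   # Newton step: H^-1 @ gradient
    w_hat += (h00 * g1 - h01 * g0) / det
```

With a few thousand bins the estimates land close to the generating values, which is the encoding half of the GLM story; decoding runs the fitted model in the other direction.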
Affiliation(s)
- Geyu Weng: Department of Biomedical Engineering, University of Utah, Salt Lake City, UT, United States; Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Kelsey Clark: Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Amir Akbarian: Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Behrad Noudoost: Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Neda Nategh: Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States; Department of Electrical and Computer Engineering, University of Utah, Salt Lake City, UT, United States

5
Weng G, Akbarian A, Clark K, Noudoost B, Nategh N. Neural correlates of perisaccadic visual mislocalization in extrastriate cortex. bioRxiv [Preprint] 2023:2023.11.06.565871. PMID: 37986765; PMCID: PMC10659380; DOI: 10.1101/2023.11.06.565871.
Abstract
When interacting with the visual world using saccadic eye movements (saccades), the perceived location of visual stimuli becomes biased, a phenomenon called perisaccadic mislocalization, itself an example of the brain's dynamic representation of the visual world. However, the neural mechanism underlying this altered visuospatial perception and its potential link to other perisaccadic perceptual phenomena have not been established. Using a combined experimental and computational approach, we quantified spatial bias around the saccade target (ST) based on the perisaccadic dynamics of extrastriate spatiotemporal sensitivity captured by statistical models. This approach predicted the perisaccadic spatial bias around the ST, consistent with psychophysical studies, and revealed the precise neuronal response components underlying the representational bias. These findings also established the crucial role of response remapping toward the ST representation, for neurons with receptive fields far from the ST, in driving the ST spatial bias. Moreover, we showed that, by allocating more resources to visual target representation, visual areas enhance their representation of the ST location, even at the expense of transient distortions in spatial representation. This potential neural basis for perisaccadic ST representation also supports a general role for extrastriate neurons in creating the perception of stimulus location.
Affiliation(s)
- Geyu Weng: Department of Biomedical Engineering, University of Utah, Salt Lake City, UT, USA; Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, USA
- Amir Akbarian: Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, USA
- Kelsey Clark: Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, USA
- Behrad Noudoost: Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, USA
- Neda Nategh: Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, USA; Department of Electrical and Computer Engineering, University of Utah, Salt Lake City, UT, USA

6
Fabius JH, Fracasso A, Deodato M, Melcher D, Van der Stigchel S. Bilateral increase in MEG planar gradients prior to saccade onset. Sci Rep 2023; 13:5830. PMID: 37037892; PMCID: PMC10086038; DOI: 10.1038/s41598-023-32980-z.
Abstract
Every time we move our eyes, the retinal locations of objects change. To distinguish the changes caused by eye movements from actual external motion of the objects, the visual system is thought to anticipate the consequences of eye movements (saccades). Single neuron recordings have indeed demonstrated changes in receptive fields before saccade onset. Although some EEG studies with human participants have also demonstrated a pre-saccadic increased potential over the hemisphere that will process a stimulus after a saccade, results have been mixed. Here, we used magnetoencephalography to investigate the timing and lateralization of visually evoked planar gradients before saccade onset. We modelled the gradients from trials with both a saccade and a stimulus as the linear combination of the gradients from two conditions with either only a saccade or only a stimulus. We reasoned that any residual gradients in the condition with both a saccade and a stimulus must be uniquely linked to visually-evoked neural activity before a saccade. We observed a widespread increase in residual planar gradients. Interestingly, this increase was bilateral, showing activity both contralateral and ipsilateral to the stimulus, i.e. over the hemisphere that would process the stimulus after saccade offset. This pattern of results is consistent with predictive pre-saccadic changes involving both the current and the future receptive fields involved in processing an attended object, well before the start of the eye movement. The active, sensorimotor coupling of vision and the oculomotor system may underlie the seamless subjective experience of stable and continuous perception.
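The linear-combination logic of this analysis (gradients from saccade-plus-stimulus trials modeled as a weighted sum of saccade-only and stimulus-only gradients, with the residual attributed to pre-saccadic visually evoked activity) can be sketched on toy data. The Gaussian time courses, timings, and amplitudes below are fabricated stand-ins for real planar-gradient waveforms.

```python
import math

N = 300  # toy time points

def bump(center, width, amp=1.0):
    """A Gaussian activity bump (stand-in for an event-related waveform)."""
    return [amp * math.exp(-((t - center) / width) ** 2) for t in range(N)]

stim_only = bump(60, 15)            # stimulus-evoked gradient
sacc_only = bump(200, 20)           # saccade-related gradient
presacc = bump(160, 10, amp=0.5)    # hidden pre-saccadic visual response

# "Both" condition = the two known components plus the extra hidden one.
both = [s + v + e for s, v, e in zip(sacc_only, stim_only, presacc)]

# Least-squares fit of both ~ a*sacc_only + b*stim_only (normal equations).
Sss = sum(s * s for s in sacc_only)
Svv = sum(v * v for v in stim_only)
Ssv = sum(s * v for s, v in zip(sacc_only, stim_only))
Ssb = sum(s * x for s, x in zip(sacc_only, both))
Svb = sum(v * x for v, x in zip(stim_only, both))
det = Sss * Svv - Ssv * Ssv
a = (Svv * Ssb - Ssv * Svb) / det
b = (Sss * Svb - Ssv * Ssb) / det

# Whatever the fit cannot explain is the residual of interest.
residual = [x - a * s - b * v for x, s, v in zip(both, sacc_only, stim_only)]
peak = max(range(N), key=lambda t: residual[t])  # recovers the hidden bump
```

The residual peaks at the hidden bump's latency, which is the sense in which residual gradients isolate activity uniquely tied to the combined condition.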
Affiliation(s)
- Jasper H Fabius: School of Psychology and Neuroscience, University of Glasgow, Glasgow, G12 8QQ, UK; Experimental Psychology, Helmholtz Institute, Utrecht University, 3584 CS, Utrecht, The Netherlands
- Alessio Fracasso: School of Psychology and Neuroscience, University of Glasgow, Glasgow, G12 8QQ, UK
- Michele Deodato: Psychology Program, Division of Science, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- David Melcher: Psychology Program, Division of Science, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Stefan Van der Stigchel: Experimental Psychology, Helmholtz Institute, Utrecht University, 3584 CS, Utrecht, The Netherlands

7
Akbarian A, Clark K, Noudoost B, Nategh N. A sensory memory to preserve visual representations across eye movements. Nat Commun 2021; 12:6449. PMID: 34750376; PMCID: PMC8575989; DOI: 10.1038/s41467-021-26756-0.
Abstract
Saccadic eye movements (saccades) disrupt the continuous flow of visual information, yet our perception of the visual world remains uninterrupted. Here we assess the representation of the visual scene across saccades from single-trial spike trains of extrastriate visual areas, using a combined electrophysiology and statistical modeling approach. Using a model-based decoder we generate a high temporal resolution readout of visual information, and identify the specific changes in neurons' spatiotemporal sensitivity that underlie an integrated perisaccadic representation of visual space. Our results show that by maintaining a memory of the visual scene, extrastriate neurons produce an uninterrupted representation of the visual world. Extrastriate neurons exhibit a late response enhancement close to the time of saccade onset, which preserves the latest pre-saccadic information until the post-saccadic flow of retinal information resumes. These results show how our brain exploits available information to maintain a representation of the scene while visual inputs are disrupted.
Affiliation(s)
- Amir Akbarian
- grid.223827.e0000 0001 2193 0096Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT USA
| | - Kelsey Clark
- grid.223827.e0000 0001 2193 0096Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT USA
| | - Behrad Noudoost
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, USA.
| | - Neda Nategh
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, USA. .,Department of Electrical and Computer Engineering, University of Utah, Salt Lake City, UT, USA.
| |
Collapse
|
8
|
Wilmott JP, Michel MM. Transsaccadic integration of visual information is predictive, attention-based, and spatially precise. J Vis 2021; 21(8):14. PMID: 34374744; PMCID: PMC8366295; DOI: 10.1167/jov.21.8.14.
Abstract
Eye movements produce shifts in the positions of objects in the retinal image, but observers are able to integrate these shifting retinal images into a coherent representation of visual space. This ability is thought to be mediated by attention-dependent saccade-related neural activity that is used by the visual system to anticipate the retinal consequences of impending eye movements. Previous investigations of the perceptual consequences of this predictive activity typically infer attentional allocation using indirect measures such as accuracy or reaction time. Here, we investigated the perceptual consequences of saccades using an objective measure of attentional allocation, reverse correlation. Human observers executed a saccade while monitoring a flickering target object flanked by flickering distractors and reported whether the average luminance of the target was lighter or darker than the background. Successful task performance required subjects to integrate visual information across the saccade. A reverse correlation analysis yielded a spatiotemporal "psychophysical kernel" characterizing how different parts of the stimulus contributed to the luminance decision throughout each trial. Just before the saccade, observers integrated luminance information from a distractor located at the post-saccadic retinal position of the target, indicating a predictive perceptual updating of the target. Observers did not integrate information from distractors placed in alternative locations, even when they were nearer to the target object. We also observed simultaneous predictive perceptual updating for two spatially distinct targets. These findings suggest both that shifting neural representations mediate the coherent representation of visual space, and that these shifts have significant consequences for transsaccadic perception.
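In its simplest form, the reverse-correlation analysis used here reduces to averaging the noise stimuli conditioned on the observer's response (a classification image). The simulated observer below judges "light" versus "dark" from a single target location plus internal noise; the display size, trial count, and noise levels are invented, and the paper's actual kernels are spatiotemporal rather than this single-frame toy.

```python
import random

random.seed(2)
N_LOC, TARGET, N_TRIALS = 8, 3, 4000   # toy display and trial count
sum_light = [0.0] * N_LOC
sum_dark = [0.0] * N_LOC
n_light = n_dark = 0

for _ in range(N_TRIALS):
    # Per-location luminance noise around the background (zero mean).
    stim = [random.gauss(0, 1) for _ in range(N_LOC)]
    # Simulated observer: decision based only on the target location,
    # corrupted by internal noise.
    says_light = stim[TARGET] + random.gauss(0, 0.5) > 0
    if says_light:
        n_light += 1
        sum_light = [a + s for a, s in zip(sum_light, stim)]
    else:
        n_dark += 1
        sum_dark = [a + s for a, s in zip(sum_dark, stim)]

# Classification image: mean "light" stimulus minus mean "dark" stimulus.
kernel = [sl / n_light - sd / n_dark for sl, sd in zip(sum_light, sum_dark)]
peak = max(range(N_LOC), key=lambda i: kernel[i])  # where the decision weight sits
```

The kernel peaks only where the observer actually integrated information, which is why the method gives an objective readout of attentional allocation: locations the decision ignores average out to zero.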
Affiliation(s)
- James P Wilmott: Department of Cognitive, Linguistic, & Psychological Sciences, Brown University, Providence, RI, USA
- Melchi M Michel: Department of Psychology and Center for Cognitive Science (RuCCS), Rutgers University, Piscataway, NJ, USA (https://mmmlab.org/)

9
Li HH, Hanning NM, Carrasco M. To look or not to look: dissociating presaccadic and covert spatial attention. Trends Neurosci 2021; 44:669-686. PMID: 34099240; PMCID: PMC8552810; DOI: 10.1016/j.tins.2021.05.002.
Abstract
Attention is a central neural process that enables selective and efficient processing of visual information. Individuals can attend to specific visual information either overtly, by making an eye movement to an object of interest, or covertly, without moving their eyes. We review behavioral, neuropsychological, neurophysiological, and computational evidence of presaccadic attentional modulations that occur while preparing saccadic eye movements, and highlight their differences from those of covert spatial endogenous (voluntary) and exogenous (involuntary) attention. We discuss recent studies and experimental procedures on how these different types of attention impact visual performance, alter appearance, differentially modulate the featural representation of basic visual dimensions (orientation and spatial frequency), engage different neural computations, and recruit partially distinct neural substrates. We conclude that presaccadic attention and covert attention are dissociable.
Affiliation(s)
- Hsin-Hung Li: Department of Psychology and Center for Neural Science, New York University, New York, NY, USA
- Nina M Hanning: Department of Psychology and Center for Neural Science, New York University, New York, NY, USA
- Marisa Carrasco: Department of Psychology and Center for Neural Science, New York University, New York, NY, USA

10
Golomb JD, Mazer JA. Visual Remapping. Annu Rev Vis Sci 2021; 7.
Abstract
Our visual system is fundamentally retinotopic. When viewing a stable scene, each eye movement shifts object features and locations on the retina. Thus, sensory representations must be updated, or remapped, across saccades to align presaccadic and postsaccadic inputs. The earliest remapping studies focused on anticipatory, presaccadic shifts of neuronal spatial receptive fields. Over time, it has become clear that there are multiple forms of remapping and that different forms of remapping may be mediated by different neural mechanisms. This review attempts to organize the various forms of remapping into a functional taxonomy based on experimental data and ongoing debates about forward versus convergent remapping, presaccadic versus postsaccadic remapping, and spatial versus attentional remapping. We integrate findings from primate neurophysiological, human neuroimaging and behavioral, and computational modeling studies. We conclude by discussing persistent open questions related to remapping, with specific attention to binding of spatial and featural information during remapping and speculations about remapping's functional significance. Expected final online publication date for the Annual Review of Vision Science, Volume 7 is September 2021.
Affiliation(s)
- Julie D Golomb: Department of Psychology, The Ohio State University, Columbus, Ohio 43210, USA
- James A Mazer: Department of Microbiology and Cell Biology, Montana State University, Bozeman, Montana 59717, USA

11
Dreneva A, Chernova U, Ermolova M, MacInnes WJ. Attention Trade-Off for Localization and Saccadic Remapping. Vision (Basel) 2021; 5:24. PMID: 34065173; PMCID: PMC8163179; DOI: 10.3390/vision5020024.
Abstract
Predictive remapping may be the principal mechanism of maintaining visual stability, and attention is crucial for this process. We aimed to investigate the role of attention in predictive remapping in a dual task paradigm with two conditions, with and without saccadic remapping. The first task was to remember the clock hand position either after a saccade to the clock face (saccade condition requiring remapping) or after the clock was displaced to the fixation point (fixation condition with no saccade). The second task was to report the remembered location of a dot shown peripherally in the upper part of the screen for 1 s. We predicted that performance in the two tasks would interfere in the saccade condition, but not in the fixation condition, because of the attentional demands needed for remapping with the saccade. For the clock estimation task, answers in the saccadic trials tended to underestimate the actual position by approximately 37 ms, while responses in the fixation trials were closer to veridical. As predicted, the findings also revealed a significant interaction between the two tasks: accuracy in the clock task decreased as error in the localization task increased, but only in the saccade condition. Taken together, these results point to the key role of attention in predictive remapping.
Affiliation(s)
- Anna Dreneva: Faculty of Psychology, Lomonosov Moscow State University, 125009 Moscow, Russia
- Ulyana Chernova: Vision Modelling Laboratory, Faculty of Social Science, HSE University, 101000 Moscow, Russia; School of Psychology, HSE University, 101000 Moscow, Russia
- Maria Ermolova: School of Psychology, HSE University, 101000 Moscow, Russia; Department of Neurology & Stroke, Hertie Institute for Clinical Brain Research, University of Tübingen, 72074 Tübingen, Germany
- William Joseph MacInnes: Vision Modelling Laboratory, Faculty of Social Science, HSE University, 101000 Moscow, Russia; School of Psychology, HSE University, 101000 Moscow, Russia

12
O'Reilly RC, Russin JL, Zolfaghar M, Rohrlich J. Deep Predictive Learning in Neocortex and Pulvinar. J Cogn Neurosci 2021; 33:1158-1196. PMID: 34428793; PMCID: PMC10164227; DOI: 10.1162/jocn_a_01708.
Abstract
How do humans learn from raw sensory experience? Throughout life, but most obviously in infancy, we learn without explicit instruction. We propose a detailed biological mechanism for the widely embraced idea that learning is driven by the differences between predictions and actual outcomes (i.e., predictive error-driven learning). Specifically, numerous weak projections into the pulvinar nucleus of the thalamus generate top-down predictions, and sparse driver inputs from lower areas supply the actual outcome, originating in Layer 5 intrinsic bursting neurons. Thus, the outcome representation is only briefly activated, roughly every 100 msec (i.e., 10 Hz, alpha), resulting in a temporal difference error signal, which drives local synaptic changes throughout the neocortex. This results in a biologically plausible form of error backpropagation learning. We implemented these mechanisms in a large-scale model of the visual system and found that the simulated inferotemporal pathway learns to systematically categorize 3-D objects according to invariant shape properties, based solely on predictive learning from raw visual inputs. These categories match human judgments on the same stimuli and are consistent with neural representations in inferotemporal cortex in primates.
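Stripped to its core, the error-driven learning principle this paper builds on (adjust weights in proportion to the difference between a top-down prediction and the actual outcome) is the delta rule. The sketch below learns to predict the next sample of a toy sinusoidal "sensory stream" from its two most recent samples; the signal, learning rate, and feature choice are invented for illustration and bear no relation to the paper's large-scale cortical model.

```python
import math

# Toy sensory stream: a sinusoid is perfectly predictable from two past samples.
STEPS, LR = 3000, 0.1
x = [math.sin(0.3 * t) for t in range(STEPS + 2)]

w = [0.0, 0.0, 0.0]  # weights on [x_t, x_{t-1}, bias]
errors = []
for t in range(1, STEPS + 1):
    features = [x[t], x[t - 1], 1.0]
    prediction = sum(wi * f for wi, f in zip(w, features))
    outcome = x[t + 1]              # the actual next input
    err = outcome - prediction      # prediction error drives all learning
    w = [wi + LR * err * f for wi, f in zip(w, features)]
    errors.append(abs(err))

early = sum(errors[:100]) / 100
late = sum(errors[-100:]) / 100   # error collapses once predictions are learned
```

No explicit teaching signal is ever supplied: the outcome itself plays the role of the target, which is the sense in which prediction turns raw experience into a supervised learning problem.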
13
Schwenk JCB, Klingenhoefer S, Werner BO, Dowiasch S, Bremmer F. Perisaccadic encoding of temporal information in macaque area V4. J Neurophysiol 2021; 125:785-795. PMID: 33502931; DOI: 10.1152/jn.00387.2020.
Abstract
The accurate processing of temporal information is of critical importance in everyday life. Yet, psychophysical studies in humans have shown that the perception of time is distorted around saccadic eye movements. The neural correlates of this misperception are still poorly understood. Behavioral and neural evidence suggest that it is tightly linked to other known perisaccadic modulations of visual perception. To further our understanding of how temporal processing is affected by saccades, we studied the representations of brief visual time intervals during fixation and saccades in area V4 of two awake macaques. We presented random sequences of vertical bar stimuli and extracted neural responses to double-pulse stimulation at varying interstimulus intervals. Our results show that temporal information about intervals as brief as 20 ms is reliably represented in the multiunit activity in area V4. Response latencies were not systematically modulated by the saccade. However, a general increase in perisaccadic activity altered the ratio of response amplitudes within stimulus pairs compared with fixation. In line with previous studies showing that the perception of brief time intervals is partly based on response levels, this may be seen as a possible correlate of the perisaccadic misperception of time.

NEW & NOTEWORTHY: We investigated for the first time how temporal information on very brief timescales is represented in area V4 around the time of saccadic eye movements. Overall, the responses showed an unexpectedly precise representation of time intervals. Our finding of a perisaccadic modulation of relative response amplitudes introduces a new possible correlate of saccade-related perceptual distortions of time.
Affiliation(s)
- Jakob C B Schwenk
- Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany; Center for Mind, Brain and Behavior (CMBB), Philipps-Universität Marburg and Justus-Liebig-University Giessen, Germany
- Björn-Olaf Werner
- Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany
- Stefan Dowiasch
- Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany; Center for Mind, Brain and Behavior (CMBB), Philipps-Universität Marburg and Justus-Liebig-University Giessen, Germany
- Frank Bremmer
- Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany

14
Schweitzer R, Rolfs M. Intra-saccadic motion streaks as cues to linking object locations across saccades. J Vis 2020; 20:17. [PMID: 32334429 PMCID: PMC7405763 DOI: 10.1167/jov.20.4.17]
Abstract
When visual objects shift rapidly across the retina, they produce motion blur. Intra-saccadic visual signals, caused incessantly by our own saccades, are thought to be eliminated at early stages of visual processing. Here we investigate whether they are still available to the visual system and could—in principle—be used as cues for localizing objects as they change locations on the retina. Using a high-speed projection system, we developed a trans-saccadic identification task in which brief but continuous intra-saccadic object motion was key to successful performance. Observers made a saccade to a target stimulus that moved rapidly either up or down, strictly during the eye movement. Just as the target reached its final position, an identical distractor stimulus appeared on the opposite side, resulting in a display of two identical stimuli upon saccade landing. Observers had to identify the original target using the only available clue: the target's intra-saccadic movement. In an additional replay condition, we presented the observers’ own intra-saccadic retinal stimulus trajectories during fixation. Compared to the replay condition, task performance was impaired during saccades but recovered fully when a post-saccadic blank was introduced. Reverse regression analyses and confirmatory experiments showed that performance increased markedly when targets had long movement durations, low spatial frequencies, and orientations parallel to their retinal trajectory—features that promote intra-saccadic motion streaks. Although the potential functional role of intra-saccadic visual signals is still unclear, our results suggest that they could provide cues to tracking objects that rapidly change locations across saccades.
15
Schwetlick L, Rothkegel LOM, Trukenbrod HA, Engbert R. Modeling the effects of perisaccadic attention on gaze statistics during scene viewing. Commun Biol 2020; 3:727. [PMID: 33262536 PMCID: PMC7708631 DOI: 10.1038/s42003-020-01429-8]
Abstract
How we perceive a visual scene depends critically on the selection of gaze positions. For this selection process, visual attention is known to play a key role in two ways. First, image features attract visual attention, a fact that is captured well by time-independent fixation models. Second, millisecond-level attentional dynamics around the time of a saccade drive our gaze from one position to the next. These two related research areas on attention are typically treated as separate, both theoretically and experimentally. Here we link the two research areas by demonstrating that perisaccadic attentional dynamics improve predictions of scan path statistics. In a mathematical model, we integrated perisaccadic covert attention with dynamic scan path generation. Our model reproduces saccade amplitude distributions, angular statistics, intersaccadic turning angles, and their impact on fixation durations, as well as inter-individual differences, using Bayesian inference. Our results therefore lend support to the relevance of perisaccadic attention to gaze statistics.
Affiliation(s)
- Lisa Schwetlick
- Department of Psychology, University of Potsdam, 14469, Potsdam, Germany
- DFG Collaborative Research Center 1294, University of Potsdam, 14469, Potsdam, Germany
- Ralf Engbert
- Department of Psychology, University of Potsdam, 14469, Potsdam, Germany
- DFG Collaborative Research Center 1294, University of Potsdam, 14469, Potsdam, Germany
- Research Focus Cognitive Science, University of Potsdam, 14469, Potsdam, Germany

16
Neupane S, Guitton D, Pack CC. Perisaccadic remapping: What? How? Why? Rev Neurosci 2020; 31:505-520. [PMID: 32242834 DOI: 10.1515/revneuro-2019-0097]
Abstract
About 25 years ago, the discovery of receptive field (RF) remapping in the parietal cortex of nonhuman primates revealed that visual RFs, widely assumed to have a fixed retinotopic organization, can change position before every saccade. Measuring such changes can be deceptively difficult. As a result, studies that followed have generated a fascinating but somewhat confusing picture of the phenomenon. In this review, we describe how observations of RF remapping depend on the spatial and temporal sampling of visual RFs and saccade directions. Further, we summarize some of the theories of how remapping might occur in neural circuitry. Finally, based on neurophysiological and psychophysical observations, we discuss the ways in which remapping information might facilitate computations in downstream brain areas.
Affiliation(s)
- Sujaya Neupane
- Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Daniel Guitton
- Department of Neurology and Neurosurgery, McGill University, Montreal, Quebec H3A2B4, Canada
- Christopher C Pack
- Department of Neurology and Neurosurgery, McGill University, Montreal, Quebec H3A2B4, Canada

17
Abstract
Current models of trans-saccadic perception propose that, after a saccade, the saccade target object must be localized among objects near the landing position. However, the nature of the attentional mechanisms supporting this process is currently under debate. In the present study, we tested whether surface properties of the saccade target object automatically bias post-saccadic selection using a variant of the visual search task. Participants executed a saccade to a shape-singleton target in a circular array. During this primary saccade, the array sometimes rotated so that the eyes landed between the target and an adjacent distractor, requiring gaze correction. In addition, each object in the array had an incidental color value. On Switch trials, the target and adjacent distractor switched colors. The accuracy and latency of gaze correction to the target (measures that provide a direct index of target localization) were compared with a control condition in which no color switch occurred (No-switch trials). Gaze correction to the target was substantially impaired in the Switch condition. This result was obtained even when participants had substantial incentive to avoid encoding the color of the saccade target. In addition, similar effects were observed when the roles of the two feature dimensions (color and shape) were reversed. The results indicate that saccade target features are automatically encoded before a saccade, are retained in visual working memory across the saccade, and instantiate a feature-based selection operation when the eyes land, biasing attention toward objects that match target features.
18
Marino AC, Mazer JA. Saccades Trigger Predictive Updating of Attentional Topography in Area V4. Neuron 2018; 98:429-438.e4. [PMID: 29673484 DOI: 10.1016/j.neuron.2018.03.020]
Abstract
During natural behavior, saccades and attention act together to allocate limited neural resources. Attention is generally mediated by retinotopic visual neurons; therefore, specific neurons representing attended features change with each saccade. We investigated the neural mechanisms that allow attentional targeting in the face of saccades. Specifically, we looked for predictive changes in attentional modulation state or receptive field position that could stabilize attentional representations across saccades in area V4, known to be necessary for attention-dependent behavior. We recorded from neurons in monkeys performing a novel spatiotopic attention task, in which performance depended on accurate saccade compensation. Measurements of attentional modulation revealed a predictive attentional "hand-off" corresponding to a presaccadic transfer of attentional state from neurons inside the attentional focus before the saccade to those that will be inside the focus after the saccade. The predictive nature of the hand-off ensures that attentional brain maps are properly configured immediately after each saccade.
Affiliation(s)
- Alexandria C Marino
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT, USA; Medical Scientist Training Program, Yale School of Medicine, New Haven, CT, USA; Department of Neurobiology, Yale School of Medicine, New Haven, CT, USA
- James A Mazer
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT, USA; Department of Neurobiology, Yale School of Medicine, New Haven, CT, USA; Department of Psychology, Yale University, New Haven, CT, USA

19
Abstract
Our vision depends upon shifting our high-resolution fovea to objects of interest in the visual field. Each saccade displaces the image on the retina, which should produce a chaotic scene with jerks occurring several times per second. It does not. This review examines how an internal signal in the primate brain (a corollary discharge) contributes to visual continuity across saccades. The article begins with a review of evidence for a corollary discharge in the monkey and evidence from inactivation experiments that it contributes to perception. The next section examines a specific neuronal mechanism for visual continuity, based on corollary discharge that is referred to as visual remapping. Both the basic characteristics of this anticipatory remapping and the factors that control it are enumerated. The last section considers hypotheses relating remapping to the perceived visual continuity across saccades, including remapping's contribution to perceived visual stability across saccades.
Affiliation(s)
- Robert H Wurtz
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, Maryland 20892-4435, USA

20
Golomb JD. Remapping locations and features across saccades: a dual-spotlight theory of attentional updating. Curr Opin Psychol 2019; 29:211-218. [PMID: 31075621 DOI: 10.1016/j.copsyc.2019.03.018]
Abstract
How do we maintain visual stability across eye movements? Much work has focused on how visual information is rapidly updated to maintain spatiotopic representations. However, predictive spatial remapping is only part of the story. Here I review key findings, recent debates, and open questions regarding remapping and its implications for visual attention and perception. This review focuses on two key questions: when does remapping occur, and what is the impact on feature perception? Findings are reviewed within the framework of a two-stage, or dual-spotlight, remapping process, where spatial attention must be both updated to the new location (fast, predictive stage) and withdrawn from the previous retinotopic location (slow, post-saccadic stage), with a particular focus on the link between spatial and feature information across eye movements.
Affiliation(s)
- Julie D Golomb
- Department of Psychology, The Ohio State University, United States

21
Yoshimoto S, Takeuchi T. Effect of spatial attention on spatiotopic visual motion perception. J Vis 2019; 19:4. [PMID: 30943532 DOI: 10.1167/19.4.4]
Abstract
We almost never experience visual instability, despite retinal image instability induced by eye movements. How the stability of visual perception is maintained through spatiotopic representation remains a matter of debate. The discrepancies among the findings of existing neuroscience studies regarding spatiotopic representation partly originate from differences in how attention is deployed to stimuli. In this study, we psychophysically examined whether spatial attention is needed to perceive spatiotopic visual motion. For this purpose, we used visual motion priming, a phenomenon in which a preceding priming stimulus modulates the perceived moving direction of an ambiguous test stimulus, such as a drifting grating that phase shifts by 180°. To examine the priming effect in different coordinates, participants performed a saccade soon after the offset of a primer. The participants were tasked with judging the direction of a subsequently presented test stimulus. To control the effect of spatial attention, the participants were asked to conduct a concurrent dot contrast-change detection task after the saccade. Positive priming was prominent in spatiotopic conditions, whereas negative priming was dominant in retinotopic conditions. At least a 600-ms interval between the priming and test stimuli was needed to observe positive priming in spatiotopic coordinates. When spatial attention was directed away from the location of the test stimulus, spatiotopic positive motion priming completely disappeared; meanwhile, spatiotopic positive motion priming at shorter interstimulus intervals was enhanced when spatial attention was directed to the location of the test stimulus. These results provide evidence that an attentional resource is requisite for developing spatiotopic representation more quickly.
Affiliation(s)
- Sanae Yoshimoto
- Graduate School of Integrated Arts and Sciences, Hiroshima University, Hiroshima, Japan
- Tatsuto Takeuchi
- Department of Psychology, Japan Women's University, Kanagawa, Japan

22
Nikolaev AR, van Leeuwen C. Scene Buildup From Latent Memory Representations Across Eye Movements. Front Psychol 2019; 9:2701. [PMID: 30687166 PMCID: PMC6336688 DOI: 10.3389/fpsyg.2018.02701]
Abstract
An unresolved problem in eye movement research is how a representation is constructed on-line from several consecutive fixations of a scene. Such a scene representation is generally understood to be sparse; yet, for meeting behavioral goals a certain level of detail is needed. We propose that this is achieved through the buildup of latent representations acquired at fixation. Latent representations are retained in an activity-silent manner, require minimal energy expenditure for their maintenance, and thus allow a larger storage capacity than traditional, activation-based, visual working memory. The latent representations accumulate and interact in working memory to form the scene representation. The result is rich in detail while sparse in the sense that it is restricted to the task-relevant aspects of the scene sampled through fixations. Relevant information can quickly and flexibly be retrieved by dynamical attentional prioritization. Latent representations are observable as transient functional connectivity patterns, which emerge due to short-term changes in synaptic weights. We discuss how observing latent representations could benefit from recent methodological developments in EEG-eye movement co-registration.
Affiliation(s)
- Andrey R Nikolaev
- Laboratory for Perceptual Dynamics, Brain & Cognition Research Unit, KU Leuven, Leuven, Belgium
- Cees van Leeuwen
- Laboratory for Perceptual Dynamics, Brain & Cognition Research Unit, KU Leuven, Leuven, Belgium

23
Rolfs M, Murray-Smith N, Carrasco M. Perceptual learning while preparing saccades. Vision Res 2018; 152:126-138. [PMID: 29277450 PMCID: PMC6028304 DOI: 10.1016/j.visres.2017.11.009]
Abstract
Traditional perceptual learning protocols rely almost exclusively on long periods of uninterrupted fixation. Taking a first step towards understanding perceptual learning in natural vision, we had observers report the orientation of a briefly flashed stimulus (clockwise or counterclockwise from a reference orientation) presented strictly during saccade preparation at a location offset from the saccade target. For each observer, the saccade direction, stimulus location, and orientation remained the same throughout training. Subsequently, we assessed performance during fixation in three transfer sessions, either at the trained or at an untrained location, and either using an untrained (Experiment 1) or the trained (Experiment 2) stimulus orientation. We modeled the evolution of contrast thresholds (i.e., the stimulus contrast necessary to discriminate its orientation correctly 75% of the time) as an exponential learning curve, and quantified departures from this curve in transfer sessions using two new, complementary measures of transfer costs (i.e., performance decrements after the transition into the Transfer phase). We observed robust perceptual learning and associated transfer costs for untrained locations and orientations. We also assessed if spatial transfer costs were reduced for the remapped location of the pre-saccadic stimulus-the location the stimulus would have had (but never had) after the saccade. Although the pattern of results at that location differed somewhat from that at the control location, we found no clear evidence for perceptual learning at remapped locations. Using novel, model-based ways to assess learning and transfer costs, our results show that location and feature specificity, hallmarks of perceptual learning, subsist if the target stimulus is presented strictly during saccade preparation throughout training.
Affiliation(s)
- Martin Rolfs
- Department of Psychology, New York University, NY, USA; Center for Neural Science, New York University, NY, USA; Department of Psychology, Humboldt-Universität zu Berlin, Germany; Bernstein Center for Computational Neuroscience, Humboldt-Universität zu Berlin, Germany
- Marisa Carrasco
- Department of Psychology, New York University, NY, USA; Center for Neural Science, New York University, NY, USA

24
Nau M, Julian JB, Doeller CF. How the Brain's Navigation System Shapes Our Visual Experience. Trends Cogn Sci 2018; 22:810-825. [PMID: 30031670 DOI: 10.1016/j.tics.2018.06.008]
Abstract
We explore the environment not only by navigating, but also by viewing our surroundings with our eyes. Here we review growing evidence that the mammalian hippocampal formation, extensively studied in the context of navigation and memory, mediates a representation of visual space that is stably anchored to the external world. This visual representation puts the hippocampal formation in a central position to guide viewing behavior and to modulate visual processing beyond the medial temporal lobe (MTL). We suggest that vision and navigation share several key computational challenges that are solved by overlapping and potentially common neural systems, making vision an optimal domain to explore whether and how the MTL supports cognitive operations beyond navigation.
Affiliation(s)
- Matthias Nau
- Kavli Institute for Systems Neuroscience, Centre for Neural Computation, The Egil and Pauline Braathen and Fred Kavli Centre for Cortical Microcircuits, NTNU, Norwegian University of Science and Technology, Trondheim, Norway; these authors contributed equally to this work
- Joshua B Julian
- Kavli Institute for Systems Neuroscience, Centre for Neural Computation, The Egil and Pauline Braathen and Fred Kavli Centre for Cortical Microcircuits, NTNU, Norwegian University of Science and Technology, Trondheim, Norway; these authors contributed equally to this work
- Christian F Doeller
- Kavli Institute for Systems Neuroscience, Centre for Neural Computation, The Egil and Pauline Braathen and Fred Kavli Centre for Cortical Microcircuits, NTNU, Norwegian University of Science and Technology, Trondheim, Norway; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands; St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany

25
Abstract
The thalamus has long been suspected to have an important role in cognition, yet recent theories have favored a more corticocentric view. According to this view, the thalamus is an excitatory feedforward relay to or between cortical regions, and cognitively relevant computations are exclusively cortical. Here, we review anatomical, physiological, and behavioral studies along evolutionary and theoretical dimensions, arguing for essential and unique thalamic computations in cognition. Considering their architectural features as well as their ability to initiate, sustain, and switch cortical activity, thalamic circuits appear uniquely suited for computing contextual signals that rapidly reconfigure task-relevant cortical representations. We introduce a framework that formalizes this notion, show its consistency with several findings, and discuss its prediction of thalamic roles in perceptual inference and behavioral flexibility. Overall, our framework emphasizes an expanded view of the thalamus in cognitive computations and provides a roadmap to test several of its theoretical and experimental predictions.
Affiliation(s)
- Rajeev V. Rikhye
- Department of Brain and Cognitive Science, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
- Ralf D. Wimmer
- Department of Brain and Cognitive Science, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
- Stanley Center for Psychiatric Genetics, Broad Institute, Cambridge, Massachusetts 02139, USA
- Michael M. Halassa
- Department of Brain and Cognitive Science, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
- Stanley Center for Psychiatric Genetics, Broad Institute, Cambridge, Massachusetts 02139, USA

26
Van der Stigchel S, Hollingworth A. Visuospatial Working Memory as a Fundamental Component of the Eye Movement System. Curr Dir Psychol Sci 2018; 27:136-143. [PMID: 29805202 DOI: 10.1177/0963721417741710]
Abstract
Humans make frequent movements of the eyes (saccades) to explore the visual environment. Here we argue that visuo-spatial working memory (VSWM) is a fundamental component of the eye movement system. Memory representations in VSWM are functionally integrated at all stages of orienting, from selection of the target, to maintenance of visual features across the saccade, to processes supporting the experience of perceptual continuity after the saccade, to the correction of gaze when the eyes fail to land on the intended object. VSWM is finely tuned to meet the challenges of active vision.
27
Yao T, Treue S, Krishna BS. Saccade-synchronized rapid attention shifts in macaque visual cortical area MT. Nat Commun 2018; 9:958. [PMID: 29511189 PMCID: PMC5840291 DOI: 10.1038/s41467-018-03398-3]
Abstract
While making saccadic eye movements to scan a visual scene, humans and monkeys are able to keep track of relevant visual stimuli by maintaining spatial attention on them. This ability requires a shift of attentional modulation from the neuronal population representing the relevant stimulus pre-saccadically to the one representing it post-saccadically. For optimal performance, this trans-saccadic attention shift should be rapid and saccade-synchronized. Whether this is so is not known. We trained two rhesus monkeys to make saccades while maintaining covert attention at a fixed spatial location. We show that the trans-saccadic attention shift in visual cortical area MT is well synchronized to saccades. Attentional modulation crosses over from the pre-saccadic to the post-saccadic neuronal representation by about 50 ms after a saccade. Taking response latency into account, the trans-saccadic attention shift is well timed to maintain spatial attention on relevant stimuli, so that they can be optimally tracked and processed across saccades.

Editor's summary: Saccades result in remapping of the neural representation of a target object as well as its attentional modulation. Here the authors show that the trans-saccadic attentional shift is precisely synchronized with the saccade, resulting in optimal maintenance of the locus of spatial attention.
Affiliation(s)
- Tao Yao
- Cognitive Neuroscience Laboratory, German Primate Center-Leibniz Institute for Primate Research, 37077, Goettingen, Germany; Laboratory for Neuro- and Psychophysiology, KU Leuven Medical School, Campus Gasthuisberg, 3000, Leuven, Belgium
- Stefan Treue
- Cognitive Neuroscience Laboratory, German Primate Center-Leibniz Institute for Primate Research, 37077, Goettingen, Germany; Bernstein Center for Computational Neuroscience, 37077, Goettingen, Germany; Leibniz-ScienceCampus Primate Cognition, 37077, Goettingen, Germany; Faculty of Biology and Psychology, University of Goettingen, 37073, Goettingen, Germany
- B Suresh Krishna
- Cognitive Neuroscience Laboratory, German Primate Center-Leibniz Institute for Primate Research, 37077, Goettingen, Germany; Leibniz-ScienceCampus Primate Cognition, 37077, Goettingen, Germany

28
Zhang X, Golomb JD. Target Localization after Saccades and at Fixation: Nontargets both Facilitate and Bias Responses. Vis Cogn 2018; 26:734-752. [PMID: 30906199 DOI: 10.1080/13506285.2018.1553810]
Abstract
The image on our retina changes every time we make an eye movement. To maintain visual stability after saccades, specifically to locate visual targets, we may use nontarget objects as "landmarks". In the current study, we compared how the presence of nontargets affects target localization after saccades and during sustained fixation. Participants fixated a target object, which either maintained its location on the screen (sustained-fixation trials), or displaced to trigger a saccade (saccade trials). After the target disappeared, participants reported the most recent target location with a mouse click. We found that the presence of nontargets decreased response error magnitude and variability. However, this nontarget facilitation effect was not larger for saccade trials than sustained-fixation trials, indicating that nontarget facilitation might be a general effect for target localization, rather than of particular importance to post-saccadic stability. Additionally, participants' responses were biased towards the nontarget locations, particularly when the nontarget-target relationships were preserved in relative coordinates across the saccade. This nontarget bias interacted with biases from other spatial references, e.g. eye movement paths, possibly in a way that emphasized non-redundant information. In summary, the presence of nontargets is one of several sources of reference that combine to influence (both facilitate and bias) target localization.
Affiliation(s)
- Xiaoli Zhang
- Department of Psychology, The Ohio State University, Columbus, OH 43210, USA
- Julie D Golomb
- Department of Psychology, The Ohio State University, Columbus, OH 43210, USA

29
Neupane S, Guitton D, Pack CC. Dissociation of forward and convergent remapping in primate visual cortex. Curr Biol 2016; 26:R491-R492. [PMID: 27326707 DOI: 10.1016/j.cub.2016.04.050]
Abstract
A fundamental concept in neuroscience is the receptive field, the area of space over which a neuron gathers information. Until about 25 years ago, visual receptive fields were thought to be determined entirely by the pattern of retinal inputs, so it was quite surprising to find neurons in primate cortex with receptive fields that changed position every time a saccade was executed [1]. Although this discovery has figured prominently into theories of visual perception, there is still much debate about the nature of the phenomenon: Some studies report forward remapping [1-3], in which receptive fields shift to their postsaccadic locations, and others report convergent remapping, in which receptive fields shift toward the saccade target [4]. These two possibilities can be difficult to distinguish, particularly when the two types of remapping lead to receptive field shifts in similar directions [5], as was the case in virtually all previous experiments. Here we report new data from neurons in primate cortical area V4, where both types of remapping have previously been reported [3,6]. Using an experimental configuration in which forward and convergent remapping would lead to receptive field shifts in opposite directions, we show that forward remapping is the dominant type of receptive field shift in V4.
Affiliation(s)
- Sujaya Neupane
- Montreal Neurological Institute, McGill University, 3801 University Street, Montreal, Quebec, Canada
- Daniel Guitton
- Montreal Neurological Institute, McGill University, 3801 University Street, Montreal, Quebec, Canada
- Christopher C Pack
- Montreal Neurological Institute, McGill University, 3801 University Street, Montreal, Quebec, Canada
30
Interaction between the oculomotor and postural systems during a dual-task: Compensatory reductions in head sway following visually-induced postural perturbations promote the production of accurate double-step saccades in standing human adults. PLoS One 2017; 12:e0173678. [PMID: 28296958 PMCID: PMC5351857 DOI: 10.1371/journal.pone.0173678] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2016] [Accepted: 02/25/2017] [Indexed: 11/19/2022] Open
Abstract
Humans routinely scan their environment for useful information using saccadic eye movements and/or coordinated movements of the eyes and other body segments such as the head and the torso. Most previous eye movement studies were conducted with seated subjects and showed that single saccades and sequences of saccades (e.g. double-step saccades) made to briefly flashed stimuli were equally accurate and precise. As one can easily appreciate, most gaze shifts performed daily by a given person are not produced from a seated position, but rather from a standing position, either as subjects perform an action from an upright stance or as they walk from one place to another. In the experiments presented here, we developed a new dual-task paradigm in order to study the interaction between the gaze control system and the postural system. Healthy adults (n = 12) were required to both maintain balance and produce accurate single-step and double-step eye saccades from a standing position. Visually-induced changes in head sway were evoked using wide-field background stimuli that moved either in the mediolateral direction or in the anteroposterior direction. We found that, as in the seated condition, single- and double-step saccades were very precise and accurate when made from a standing position, but that a tighter control of head sway was necessary in the more complex double-step saccade condition for equivalent results to be obtained. Our perturbation results support the "common goal" hypothesis, which states that, if necessary, as was the case during the more complex oculomotor task, context-dependent modulations of the postural system can be triggered to reduce instability and thereby support the accomplishment of a suprapostural goal.
31
Yao T, Ketkar M, Treue S, Krishna BS. Visual attention is available at a task-relevant location rapidly after a saccade. eLife 2016; 5:e18009. [PMID: 27879201 PMCID: PMC5120882 DOI: 10.7554/elife.18009] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2016] [Accepted: 10/25/2016] [Indexed: 11/13/2022] Open
Abstract
Maintaining attention at a task-relevant spatial location while making eye movements necessitates a rapid, saccade-synchronized shift of attentional modulation from the neuronal population representing the task-relevant location before the saccade to the one representing it after the saccade. Currently, the precise time at which spatial attention becomes fully allocated to the task-relevant location after the saccade remains unclear. Using a fine-grained temporal analysis of human peri-saccadic detection performance in an attention task, we show that spatial attention is fully available at the task-relevant location within 30 milliseconds after the saccade. Subjects tracked the attentional target veridically throughout our task, i.e., they almost never responded to non-target stimuli. Spatial attention and saccadic processing therefore coordinate well to ensure that relevant locations are attentionally enhanced soon after the beginning of each eye fixation.

When we look at a scene, our gaze does not move continuously across it. Instead, our eyes move discontinuously, shifting gaze rapidly from point to point to focus on different locations in the scene. These eye movements are known as saccades, and during them the brain temporarily and selectively stops processing visual information. In the brain, a particular area of a scene is represented by different neurons before and after a saccade. Paying attention to a relevant location in a scene across an eye movement therefore requires the brain to shift its attentional effects from the neurons that represented that location before the saccade to the set of neurons that do so after the saccade. Ideally, this shift should happen rapidly and be synchronized with the eye movement.

Exactly how long it takes for attention to emerge at a relevant location after a saccade was not clear, because attention had not been measured on a fine enough time scale immediately after an eye movement. Yao et al. have now addressed this issue in a series of experiments that asked volunteers to focus their eyes on a fixed point. The volunteers had to follow the point with their eyes as it jumped to a new location, and at the same time had to look out for a change in the movement of a pattern of random dots. The results reveal that attention is fully available at the relevant location within 30 milliseconds after the saccade. In fact, the 30-millisecond delay in the emergence of attention matches the period during which vision is suppressed during a saccade. Thus, the change in the brain's focus of attention coordinates with the saccadic eye movement to ensure that attention can be fixed on a relevant location as soon as possible after the eye movement ends. More studies are now needed to investigate how the brain coordinates its attention and eye-movement processes to synchronize the shift in attention with the eye movement.
Affiliation(s)
- Tao Yao
- Cognitive Neuroscience Laboratory, German Primate Center, Goettingen, Germany
- Madhura Ketkar
- Cognitive Neuroscience Laboratory, German Primate Center, Goettingen, Germany; European Neuroscience Institute, Goettingen, Germany
- Stefan Treue
- Cognitive Neuroscience Laboratory, German Primate Center, Goettingen, Germany; Bernstein Center for Computational Neuroscience, Goettingen, Germany; Faculty of Biology and Psychology, Goettingen University, Goettingen, Germany
- B Suresh Krishna
- Cognitive Neuroscience Laboratory, German Primate Center, Goettingen, Germany
32
Fabius JH, Fracasso A, Van der Stigchel S. Spatiotopic updating facilitates perception immediately after saccades. Sci Rep 2016; 6:34488. [PMID: 27686998 PMCID: PMC5043283 DOI: 10.1038/srep34488] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2016] [Accepted: 09/14/2016] [Indexed: 11/08/2022] Open
Abstract
As the neural representation of visual information is initially coded in retinotopic coordinates, eye movements (saccades) pose a major problem for visual stability. If no visual information were maintained across saccades, retinotopic representations would have to be rebuilt after each saccade. It is currently strongly debated what kind of information (if any at all) is accumulated across saccades, and when this information becomes available after a saccade. Here, we use a motion illusion to examine the accumulation of visual information across saccades. In this illusion, an annulus with a random texture slowly rotates and is then replaced with a second texture (motion transient). With increasing rotation durations, observers consistently perceive the transient as a large rotational jump in the direction opposite to the rotation (a backward jump). We first show that accumulated motion information is updated spatiotopically across saccades. Then, we show that this accumulated information is readily available after a saccade, immediately biasing postsaccadic perception. The current findings suggest that presaccadic information is used to facilitate postsaccadic perception and support a forward model of transsaccadic perception that anticipates the consequences of eye movements and operates within the narrow perisaccadic time window.
Affiliation(s)
- Jasper H. Fabius
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS Utrecht, The Netherlands
- Alessio Fracasso
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS Utrecht, The Netherlands
- Radiology, Center for Image Sciences, University Medical Center Utrecht, 3584 CX Utrecht, The Netherlands
- Spinoza Centre for Neuroimaging, University of Amsterdam, 1105 BK Amsterdam, The Netherlands
- Stefan Van der Stigchel
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS Utrecht, The Netherlands
33
Rao HM, Mayo JP, Sommer MA. Circuits for presaccadic visual remapping. J Neurophysiol 2016; 116:2624-2636. [PMID: 27655962 DOI: 10.1152/jn.00182.2016] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2016] [Accepted: 09/14/2016] [Indexed: 01/08/2023] Open
Abstract
Saccadic eye movements rapidly displace the image of the world that is projected onto the retinas. In anticipation of each saccade, many neurons in the visual system shift their receptive fields. This presaccadic change in visual sensitivity, known as remapping, was first documented in the parietal cortex and has been studied in many other brain regions. Remapping requires information about upcoming saccades via corollary discharge. Analyses of neurons in a corollary discharge pathway that targets the frontal eye field (FEF) suggest that remapping may be assembled in the FEF's local microcircuitry. Complementary data from reversible inactivation, neural recording, and modeling studies provide evidence that remapping contributes to transsaccadic continuity of action and perception. Multiple forms of remapping have been reported in the FEF and other brain areas, however, and questions remain about the reasons for these differences. In this review of recent progress, we identify three hypotheses that may help to guide further investigations into the structure and function of circuits for remapping.
Affiliation(s)
- Hrishikesh M Rao
- Department of Biomedical Engineering, Pratt School of Engineering, Duke University, Durham, North Carolina
- J Patrick Mayo
- Department of Neurobiology, Duke School of Medicine, Duke University, Durham, North Carolina
- Marc A Sommer
- Department of Biomedical Engineering, Pratt School of Engineering, Duke University, Durham, North Carolina; Department of Neurobiology, Duke School of Medicine, Duke University, Durham, North Carolina; Center for Cognitive Neuroscience, Duke University, Durham, North Carolina
34
Rao HM, San Juan J, Shen FY, Villa JE, Rafie KS, Sommer MA. Neural Network Evidence for the Coupling of Presaccadic Visual Remapping to Predictive Eye Position Updating. Front Comput Neurosci 2016; 10:52. [PMID: 27313528 PMCID: PMC4889583 DOI: 10.3389/fncom.2016.00052] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2016] [Accepted: 05/18/2016] [Indexed: 11/13/2022] Open
Abstract
As we look around a scene, we perceive it as continuous and stable even though each saccadic eye movement changes the visual input to the retinas. How the brain achieves this perceptual stabilization is unknown, but a major hypothesis is that it relies on presaccadic remapping, a process in which neurons shift their visual sensitivity to a new location in the scene just before each saccade. This hypothesis is difficult to test in vivo because complete, selective inactivation of remapping is currently intractable. We tested it in silico with a hierarchical, sheet-based neural network model of the visual and oculomotor system. The model generated saccadic commands to move a video camera abruptly. Visual input from the camera and internal copies of the saccadic movement commands, or corollary discharge, converged at a map-level simulation of the frontal eye field (FEF), a primate brain area known to receive such inputs. FEF output was combined with eye position signals to yield a suitable coordinate frame for guiding arm movements of a robot. Our operational definition of perceptual stability was "useful stability," quantified as continuously accurate pointing to a visual object despite camera saccades. During training, the emergence of useful stability was correlated tightly with the emergence of presaccadic remapping in the FEF. Remapping depended on corollary discharge but its timing was synchronized to the updating of eye position. When coupled to predictive eye position signals, remapping served to stabilize the target representation for continuously accurate pointing. Graded inactivations of pathways in the model replicated, and helped to interpret, previous in vivo experiments. The results support the hypothesis that visual stability requires presaccadic remapping, provide explanations for the function and timing of remapping, and offer testable hypotheses for in vivo studies. 
We conclude that remapping allows for seamless coordinate frame transformations and quick actions despite visual afferent lags. With visual remapping in place for behavior, it may be exploited for perceptual continuity.
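The core loop the model above describes, corollary discharge shifting visual sensitivity before the eyes move, then an eye-position signal restoring a stable frame for pointing, can be pictured with a toy sketch. This is not the authors' sheet-based implementation; the grid size, coordinates, and roll-based shift are illustrative assumptions:

```python
import numpy as np

def remap(sensitivity, saccade_vec):
    """Presaccadic remapping: shift the retinotopic sensitivity map by the
    corollary-discharge copy of the saccade command, so the map anticipates
    the postsaccadic retinal image before the eyes actually move."""
    return np.roll(sensitivity, shift=(-saccade_vec[0], -saccade_vec[1]), axis=(0, 1))

def to_world(retinal_pos, eye_pos):
    """Eye-position signal converts a retinal location into a stable
    world coordinate suitable for guiding a pointing movement."""
    return retinal_pos + eye_pos

# A target at world position (12, 7); eyes initially at (0, 0).
world_target = np.array([12, 7])
eye = np.array([0, 0])
grid = np.zeros((32, 32))
grid[tuple(world_target - eye)] = 1.0   # retinal image of the target

saccade = np.array([5, 3])              # saccade command (and its corollary discharge)
grid = remap(grid, saccade)             # sensitivity map shifts BEFORE the eyes move
eye = eye + saccade                     # eyes land; retinal input now matches the map

retinal_peak = np.array(np.unravel_index(np.argmax(grid), grid.shape))
# "Useful stability": decoded world position is unchanged, so pointing stays accurate.
assert np.array_equal(to_world(retinal_peak, eye), world_target)
```

Without the `remap` step, the map and the postsaccadic retinal input would disagree during the visual afferent lag, which is the sense in which remapping enables quick, continuously accurate action across saccades.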
Affiliation(s)
- Hrishikesh M Rao
- Department of Biomedical Engineering, Pratt School of Engineering, Duke University, Durham, NC, USA
- Juan San Juan
- Department of Biomedical Engineering, Pratt School of Engineering, Duke University, Durham, NC, USA
- Fred Y Shen
- Department of Biomedical Engineering, Pratt School of Engineering, Duke University, Durham, NC, USA
- Jennifer E Villa
- Department of Biomedical Engineering, Pratt School of Engineering, Duke University, Durham, NC, USA
- Kimia S Rafie
- Department of Biomedical Engineering, Pratt School of Engineering, Duke University, Durham, NC, USA
- Marc A Sommer
- Department of Biomedical Engineering, Pratt School of Engineering, Duke University, Durham, NC, USA; Department of Neurobiology, Duke School of Medicine, Duke University, Durham, NC, USA; Center for Cognitive Neuroscience, Duke University, Durham, NC, USA
35
Rolfs M, Szinte M. Remapping Attention Pointers: Linking Physiology and Behavior. Trends Cogn Sci 2016; 20:399-401. [PMID: 27118641 DOI: 10.1016/j.tics.2016.04.003] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2016] [Accepted: 04/12/2016] [Indexed: 10/21/2022]
Abstract
Our eyes rapidly scan visual scenes, displacing the projection on the retina with every move. Yet these frequent retinal image shifts do not appear to hamper vision. Two recent physiological studies shed new light on the role of attention in visual processing across saccadic eye movements.
Affiliation(s)
- Martin Rolfs
- Department of Psychology and Bernstein Center for Computational Neuroscience, Humboldt Universität zu Berlin, 10099 Berlin, Germany.
- Martin Szinte
- Allgemeine und Experimentelle Psychologie, Ludwig-Maximilians-Universität München, Munich, 80802, Germany