1. Moran C, Johnson PA, Landau AN, Hogendoorn H. Decoding Remapped Spatial Information in the Peri-Saccadic Period. J Neurosci 2024; 44:e2134232024. [PMID: 38871460; PMCID: PMC11270511; DOI: 10.1523/jneurosci.2134-23.2024]
Abstract
It has been suggested that, prior to a saccade, visual neurons predictively respond to stimuli that will fall in their receptive fields after completion of the saccade. This saccadic remapping process is thought to compensate for the shift of the visual world across the retina caused by eye movements. To map the timing of this predictive process in the brain, we recorded neural activity using electroencephalography during a saccade task. Human participants (male and female) made saccades between two fixation points while covertly attending to oriented gratings briefly presented at various locations on the screen. Data recorded during trials in which participants maintained fixation were used to train classifiers on stimuli in different positions. Subsequently, data collected during saccade trials were used to test for the presence of remapped stimulus information at the post-saccadic retinotopic location in the peri-saccadic period, providing unique insight into when remapped information becomes available. We found that the stimulus could be decoded at the remapped location ∼180 ms post-stimulus onset, but only when the stimulus was presented 100-200 ms before saccade onset. Within this range, we found that the timing of remapping was dictated by stimulus onset rather than saccade onset. We conclude that presenting the stimulus immediately before the saccade allows for optimal integration of the corollary discharge signal with the incoming peripheral visual information, resulting in a remapping of activation to the relevant post-saccadic retinotopic neurons.
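The train-during-fixation, test-during-saccades logic described above can be sketched with synthetic data (a minimal numpy illustration; the nearest-centroid classifier, channel counts, and noise levels are assumptions of this sketch, not the authors' pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels = 32  # stand-in for EEG channels

# Hypothetical mean scalp patterns evoked by stimuli at two positions.
means = rng.normal(size=(2, n_channels))

def simulate(label, n_trials, noise=1.0):
    """Synthetic single-trial feature vectors for one stimulus position."""
    return means[label] + noise * rng.normal(size=(n_trials, n_channels))

# Train on fixation trials labelled by stimulus position.
X_train = np.vstack([simulate(0, 100), simulate(1, 100)])
y_train = np.repeat([0, 1], 100)
centroids = np.stack([X_train[y_train == k].mean(axis=0) for k in (0, 1)])

def predict(X):
    """Nearest-centroid classification of each trial."""
    dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)

# Test on saccade trials: if remapping has transferred stimulus
# information to the post-saccadic retinotopic location, those trials
# should match the classifier trained on that location during fixation.
X_test = simulate(1, 100)
accuracy = (predict(X_test) == 1).mean()
print(f"cross-condition decoding accuracy: {accuracy:.2f}")
```

In the actual analysis this cross-condition test is repeated at each time point, which is what localizes the ~180 ms post-stimulus emergence of remapped information.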
Affiliation(s)
- Caoimhe Moran
- Melbourne School of Psychological Sciences, The University of Melbourne, Parkville, Melbourne, Victoria 3052, Australia
- Department of Psychology, Hebrew University of Jerusalem, Mount Scopus, Jerusalem 9190501, Israel
- Philippa A Johnson
- Melbourne School of Psychological Sciences, The University of Melbourne, Parkville, Melbourne, Victoria 3052, Australia
- Cognitive Psychology Unit, Institute of Psychology & Leiden Institute for Brain and Cognition, Leiden University, Leiden 2333 AK, The Netherlands
- Ayelet N Landau
- Department of Psychology, Hebrew University of Jerusalem, Mount Scopus, Jerusalem 9190501, Israel
- Department of Cognitive and Brain Sciences, Hebrew University of Jerusalem, Mount Scopus, Jerusalem 9190501, Israel
- Hinze Hogendoorn
- Melbourne School of Psychological Sciences, The University of Melbourne, Parkville, Melbourne, Victoria 3052, Australia
- School of Psychology and Counselling, Queensland University of Technology, Kelvin Grove, Queensland 4059, Australia
2. Harrison WJ, Stead I, Wallis TSA, Bex PJ, Mattingley JB. A computational account of transsaccadic attentional allocation based on visual gain fields. Proc Natl Acad Sci U S A 2024; 121:e2316608121. [PMID: 38941277; PMCID: PMC11228487; DOI: 10.1073/pnas.2316608121]
Abstract
Coordination of goal-directed behavior depends on the brain's ability to recover the locations of relevant objects in the world. In humans, the visual system encodes the spatial organization of sensory inputs, but neurons in early visual areas map objects according to their retinal positions, rather than where they are in the world. How the brain computes world-referenced spatial information across eye movements has been widely researched and debated. Here, we tested whether shifts of covert attention are sufficiently precise in space and time to track an object's real-world location across eye movements. We found that observers' attentional selectivity is remarkably precise and is barely perturbed by the execution of saccades. Inspired by recent neurophysiological discoveries, we developed an observer model that rapidly estimates the real-world locations of objects and allocates attention within this reference frame. The model recapitulates the human data and provides a parsimonious explanation for previously reported phenomena in which observers allocate attention to task-irrelevant locations across eye movements. Our findings reveal that visual attention operates in real-world coordinates, which can be computed rapidly at the earliest stages of cortical processing.
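The gain-field idea behind the model can be illustrated with a toy population in which retinotopic tuning is multiplicatively scaled by eye position, so a downstream readout can recover world-centered location (the tuning width, linear gain slope, and readout are illustrative assumptions, not the authors' model):

```python
import numpy as np

positions = np.linspace(-20, 20, 81)  # preferred retinal positions (deg)

def population(retinal_loc, eye_pos):
    """Gaussian retinotopic tuning scaled by a linear eye-position gain."""
    tuning = np.exp(-0.5 * ((positions - retinal_loc) / 3.0) ** 2)
    gain = 1.0 + 0.05 * eye_pos
    return gain * tuning

# The same world location (5 deg) viewed from two eye positions gives
# different retinal inputs; reading out peak location (retinal position)
# plus peak amplitude (eye position, carried by the gain) recovers the
# world-centered location either way.
world_estimates = []
for eye in (-10.0, 10.0):
    resp = population(5.0 - eye, eye)          # world = retinal + eye
    retinal_hat = positions[resp.argmax()]     # where the peak is
    eye_hat = (resp.max() - 1.0) / 0.05        # invert the gain field
    world_estimates.append(retinal_hat + eye_hat)
print(world_estimates)
```

Because the gain tags the population response with eye position, peak location and peak amplitude jointly determine the world-centered coordinate regardless of where the eyes point.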
Affiliation(s)
- William J. Harrison
- Psychology, School of Health, University of the Sunshine Coast, Sippy Downs, QLD 4556, Australia
- Queensland Brain Institute, The University of Queensland, St. Lucia, QLD 4072, Australia
- The School of Psychology, The University of Queensland, St. Lucia, QLD 4072, Australia
- Imogen Stead
- Queensland Brain Institute, The University of Queensland, St. Lucia, QLD 4072, Australia
- Thomas S. A. Wallis
- Centre for Cognitive Science and Institute of Psychology, Technical University of Darmstadt, Darmstadt 64283, Germany
- Center for Mind, Brain and Behavior (CMBB), Universities of Marburg, Giessen, and Darmstadt, Marburg 35032, Germany
- Peter J. Bex
- Department of Psychology, Northeastern University, Boston, MA 02115
- Jason B. Mattingley
- Queensland Brain Institute, The University of Queensland, St. Lucia, QLD 4072, Australia
- The School of Psychology, The University of Queensland, St. Lucia, QLD 4072, Australia
- Canadian Institute for Advanced Research, Toronto, ON M5G 1M1, Canada
3. Xiao W, Sharma S, Kreiman G, Livingstone MS. Feature-selective responses in macaque visual cortex follow eye movements during natural vision. Nat Neurosci 2024; 27:1157-1166. [PMID: 38684892; PMCID: PMC11156562; DOI: 10.1038/s41593-024-01631-5]
Abstract
In natural vision, primates actively move their eyes several times per second via saccades. It remains unclear whether, during this active looking, visual neurons exhibit classical retinotopic properties, anticipate gaze shifts or mirror the stable quality of perception, especially in complex natural scenes. Here, we let 13 monkeys freely view thousands of natural images across 4.6 million fixations, recorded 883 h of neuronal responses in six areas spanning primary visual to anterior inferior temporal cortex and analyzed spatial, temporal and featural selectivity in these responses. Face neurons tracked their receptive field contents, indicated by category-selective responses. Self-consistency analysis showed that general feature-selective responses also followed eye movements and remained gaze-dependent over seconds of viewing the same image. Computational models of feature-selective responses located retinotopic receptive fields during free viewing. We found limited evidence for feature-selective predictive remapping and no viewing-history integration. Thus, ventral visual neurons represent the world in a predominantly eye-centered reference frame during natural vision.
Affiliation(s)
- Will Xiao
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, USA
- Saloni Sharma
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Gabriel Kreiman
- Department of Ophthalmology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
4. Baltaretu BR, Stevens WD, Freud E, Crawford JD. Occipital and parietal cortex participate in a cortical network for transsaccadic discrimination of object shape and orientation. Sci Rep 2023; 13:11628. [PMID: 37468709; DOI: 10.1038/s41598-023-38554-3]
Abstract
Saccades change eye position and interrupt vision several times per second, necessitating neural mechanisms for continuous perception of object identity, orientation, and location. Neuroimaging studies suggest that occipital and parietal cortex play complementary roles for transsaccadic perception of intrinsic versus extrinsic spatial properties, e.g., dorsomedial occipital cortex (cuneus) is sensitive to changes in spatial frequency, whereas the supramarginal gyrus (SMG) is modulated by changes in object orientation. Based on this, we hypothesized that both structures would be recruited to simultaneously monitor object identity and orientation across saccades. To test this, we merged two previous neuroimaging protocols: 21 participants viewed a 2D object and then, after sustained fixation or a saccade, judged whether the shape or orientation of the re-presented object changed. We then performed a bilateral region-of-interest analysis on identified cuneus and SMG sites. As hypothesized, cuneus showed both saccade and feature (i.e., object orientation vs. shape change) modulations, and right SMG showed saccade-feature interactions. Further, the cuneus activity time course correlated with several other cortical saccade/visual areas, suggesting a 'functional network' for feature discrimination. These results confirm the involvement of occipital/parietal cortex in transsaccadic vision and support complementary roles in spatial versus identity updating.
Affiliation(s)
- B R Baltaretu
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, M3J 1P3, Canada
- Department of Biology, York University, Toronto, ON, M3J 1P3, Canada
- Department of Psychology, Justus-Liebig University Giessen, Otto-Behaghel-Strasse 10F, 35394 Giessen, Hesse, Germany
- W Dale Stevens
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, M3J 1P3, Canada
- Department of Psychology and Neuroscience Graduate Diploma Program, York University, Toronto, ON, M3J 1P3, Canada
- E Freud
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, M3J 1P3, Canada
- Department of Psychology and Neuroscience Graduate Diploma Program, York University, Toronto, ON, M3J 1P3, Canada
- J D Crawford
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, M3J 1P3, Canada
- Department of Biology, York University, Toronto, ON, M3J 1P3, Canada
- Department of Psychology and Neuroscience Graduate Diploma Program, York University, Toronto, ON, M3J 1P3, Canada
- School of Kinesiology and Health Sciences, York University, Toronto, ON, M3J 1P3, Canada
5. Chen J, Golomb JD. Dynamic neural reconstructions of attended object location and features using EEG. J Neurophysiol 2023; 130:139-154. [PMID: 37283457; PMCID: PMC10393364; DOI: 10.1152/jn.00180.2022]
Abstract
Attention allows us to select relevant and ignore irrelevant information from our complex environments. What happens when attention shifts from one item to another? To answer this question, it is critical to have tools that accurately recover neural representations of both feature and location information with high temporal resolution. In the present study, we used human electroencephalography (EEG) and machine learning to explore how neural representations of object features and locations update across dynamic shifts of attention. We demonstrate that EEG can be used to create simultaneous time courses of neural representations of attended features (time point-by-time point inverted encoding model reconstructions) and attended location (time point-by-time point decoding) during both stable periods and across dynamic shifts of attention. Each trial presented two oriented gratings that flickered at the same frequency but had different orientations; participants were cued to attend one of them and on half of trials received a shift cue midtrial. We trained models on a stable period from Hold attention trials and then reconstructed/decoded the attended orientation/location at each time point on Shift attention trials. Our results showed that both feature reconstruction and location decoding dynamically track the shift of attention and that there may be time points during the shifting of attention when 1) feature and location representations become uncoupled and 2) both the previously attended and currently attended orientations are represented with roughly equal strength. The results offer insight into our understanding of attentional shifts, and the noninvasive techniques developed in the present study lend themselves well to a wide variety of future applications.

NEW & NOTEWORTHY We used human EEG and machine learning to reconstruct neural response profiles during dynamic shifts of attention. Specifically, we demonstrated that we could simultaneously read out both location and feature information from an attended item in a multistimulus display. Moreover, we examined how that readout evolves over time during the dynamic process of attentional shifts. These results provide insight into our understanding of attention, and this technique carries substantial potential for versatile extensions and applications.
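The inverted encoding model (IEM) step can be sketched at a single time point with synthetic data (the channel basis, dimensions, and noise levels are assumptions of this sketch, not the study's exact parameters):

```python
import numpy as np

rng = np.random.default_rng(1)
n_chan, n_elec = 8, 32
centers = np.arange(0, 180, 180 / n_chan)  # channel centers (deg)

def basis(theta):
    """Cosine orientation channels on the 180-deg circular space.
    Returns an (n_chan, n_trials) matrix of channel responses."""
    delta = (theta[None, :] - centers[:, None] + 90.0) % 180.0 - 90.0
    return np.cos(np.deg2rad(delta)) ** 7

# Forward model: electrode data are a linear mix of channel responses.
W_true = rng.normal(size=(n_elec, n_chan))  # synthetic mixing weights
theta_train = rng.uniform(0, 180, 200)
B_train = W_true @ basis(theta_train) + 0.5 * rng.normal(size=(n_elec, 200))

# Step 1: estimate weights from training data (least squares).
W_hat = B_train @ np.linalg.pinv(basis(theta_train))

# Step 2: invert the model on test data to reconstruct channel responses.
theta_test = np.full(50, 45.0)
B_test = W_true @ basis(theta_test) + 0.5 * rng.normal(size=(n_elec, 50))
C_hat = np.linalg.pinv(W_hat) @ B_test

# The average reconstruction should peak at the channel nearest 45 deg.
peak = centers[C_hat.mean(axis=1).argmax()]
print(peak)
```

In the study's analysis this train/invert step is repeated at every time point, with models trained on the stable period of Hold trials and inverted on Shift trials.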
Affiliation(s)
- Jiageng Chen
- Department of Psychology, The Ohio State University, Columbus, Ohio, United States
- Julie D Golomb
- Department of Psychology, The Ohio State University, Columbus, Ohio, United States
6. Lu Z, Golomb JD. Dynamic saccade context triggers more stable object-location binding. bioRxiv [Preprint] 2023:2023.04.26.538469. [PMID: 37162863; PMCID: PMC10168424; DOI: 10.1101/2023.04.26.538469]
Abstract
Our visual systems rapidly perceive and integrate information about object identities and locations. There is long-standing debate about how we achieve world-centered (spatiotopic) object representations across eye movements, with many studies reporting persistent retinotopic (eye-centered) effects even for higher-level object-location binding. But these studies are generally conducted in fairly static experimental contexts. Might spatiotopic object-location binding only emerge in more dynamic saccade contexts? In the present study, we investigated this using the Spatial Congruency Bias paradigm in healthy adults. In the static (single saccade) context, we found purely retinotopic binding, as before. However, robust spatiotopic binding emerged in the dynamic (multiple frequent saccades) context. We further isolated specific factors that modulate retinotopic and spatiotopic binding. Our results provide strong evidence that dynamic saccade context can trigger more stable object-location binding in ecologically-relevant spatiotopic coordinates, perhaps via a more flexible brain state which accommodates improved visual stability in the dynamic world.
7. Fabius JH, Fracasso A, Deodato M, Melcher D, Van der Stigchel S. Bilateral increase in MEG planar gradients prior to saccade onset. Sci Rep 2023; 13:5830. [PMID: 37037892; PMCID: PMC10086038; DOI: 10.1038/s41598-023-32980-z]
Abstract
Every time we move our eyes, the retinal locations of objects change. To distinguish the changes caused by eye movements from actual external motion of the objects, the visual system is thought to anticipate the consequences of eye movements (saccades). Single neuron recordings have indeed demonstrated changes in receptive fields before saccade onset. Although some EEG studies with human participants have also demonstrated a pre-saccadic increased potential over the hemisphere that will process a stimulus after a saccade, results have been mixed. Here, we used magnetoencephalography to investigate the timing and lateralization of visually evoked planar gradients before saccade onset. We modelled the gradients from trials with both a saccade and a stimulus as the linear combination of the gradients from two conditions with either only a saccade or only a stimulus. We reasoned that any residual gradients in the condition with both a saccade and a stimulus must be uniquely linked to visually-evoked neural activity before a saccade. We observed a widespread increase in residual planar gradients. Interestingly, this increase was bilateral, showing activity both contralateral and ipsilateral to the stimulus, i.e. over the hemisphere that would process the stimulus after saccade offset. This pattern of results is consistent with predictive pre-saccadic changes involving both the current and the future receptive fields involved in processing an attended object, well before the start of the eye movement. The active, sensorimotor coupling of vision and the oculomotor system may underlie the seamless subjective experience of stable and continuous perception.
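The linear-combination logic can be sketched as follows (synthetic sensor-by-time matrices; the injected "remapping" window and all dimensions are assumptions of the sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
n_sensors, n_times = 20, 120

# Trial-averaged planar-gradient time courses for the two single-factor
# conditions (synthetic stand-ins).
saccade_only = rng.normal(size=(n_sensors, n_times))
stimulus_only = rng.normal(size=(n_sensors, n_times))

# Assume some extra activity appears only when saccade and stimulus
# occur together, in a pre-saccadic window (samples 70-90 here).
extra = np.zeros((n_sensors, n_times))
extra[:, 70:90] = 0.8
both = saccade_only + stimulus_only + extra

# Fit the combined condition as a linear combination of the two
# single-factor conditions, then take the residual.
X = np.stack([saccade_only.ravel(), stimulus_only.ravel()], axis=1)
coef, *_ = np.linalg.lstsq(X, both.ravel(), rcond=None)
residual = both - (coef[0] * saccade_only + coef[1] * stimulus_only)

pre_window = residual[:, 70:90].mean()
baseline = residual[:, :70].mean()
print(f"residual in window: {pre_window:.2f}, baseline: {baseline:.2f}")
```

Any systematic residual, i.e. activity in the combined condition that the two single-factor conditions cannot explain, is the signature of visually evoked neural activity tied specifically to the upcoming saccade.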
Affiliation(s)
- Jasper H Fabius
- School of Psychology and Neuroscience, University of Glasgow, Glasgow, G12 8QQ, UK
- Experimental Psychology, Helmholtz Institute, Utrecht University, 3584 CS, Utrecht, The Netherlands
- Alessio Fracasso
- School of Psychology and Neuroscience, University of Glasgow, Glasgow, G12 8QQ, UK
- Michele Deodato
- Psychology Program, Division of Science, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- David Melcher
- Psychology Program, Division of Science, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Stefan Van der Stigchel
- Experimental Psychology, Helmholtz Institute, Utrecht University, 3584 CS, Utrecht, The Netherlands
8. Narhi-Martinez W, Dube B, Golomb JD. Attention as a multi-level system of weights and balances. Wiley Interdiscip Rev Cogn Sci 2023; 14:e1633. [PMID: 36317275; PMCID: PMC9840663; DOI: 10.1002/wcs.1633]
Abstract
This opinion piece is part of a collection on the topic: "What is attention?" Despite the word's place in the common vernacular, a satisfying definition for "attention" remains elusive. Part of the challenge is that there exist many different types of attention, which may or may not share common mechanisms. Here we review this literature and offer an intuitive definition that draws from aspects of prior theories and models of attention but is broad enough to recognize the various types of attention and modalities it acts upon: attention as a multi-level system of weights and balances. While the specific mechanism(s) governing the weighting/balancing may vary across levels, the fundamental role of attention is to dynamically weigh and balance all signals-both externally-generated and internally-generated-such that the highest weighted signals are selected and enhanced. Top-down, bottom-up, and experience-driven factors dynamically impact this balancing, and competition occurs both within and across multiple levels of processing. This idea of a multi-level system of weights and balances is intended to incorporate both external and internal attention and capture their myriad of constantly interacting processes. We review key findings and open questions related to external attention guidance, internal attention and working memory, and broader attentional control (e.g., ongoing competition between external stimuli and internal thoughts) within the framework of this analogy. We also speculate about the implications of failures of attention in terms of weights and balances, ranging from momentary one-off errors to clinical disorders, as well as attentional development and degradation across the lifespan. This article is categorized under: Psychology > Attention; Neuroscience > Cognition.
Affiliation(s)
- Blaire Dube
- The Ohio State University, Department of Psychology
- Julie D. Golomb
- Correspondence concerning this article should be addressed to Julie Golomb, Department of Psychology, The Ohio State University, Columbus, OH, 43210.
9. Kroell LM, Rolfs M. Foveal vision anticipates defining features of eye movement targets. eLife 2022; 11:e78106. [PMID: 36082940; PMCID: PMC9581528; DOI: 10.7554/elife.78106]
Abstract
High-acuity foveal processing is vital for human vision. Nonetheless, little is known about how the preparation of large-scale rapid eye movements (saccades) affects visual sensitivity in the center of gaze. Based on findings from passive fixation tasks, we hypothesized that during saccade preparation, foveal processing anticipates soon-to-be fixated visual features. Using a dynamic large-field noise paradigm, we indeed demonstrate that defining features of an eye movement target are enhanced in the pre-saccadic center of gaze. Enhancement manifested as higher hit rates for foveal probes with target-congruent orientation and a sensitization to incidental, target-like orientation information in foveally presented noise. Enhancement was spatially confined to the center of gaze and its immediate vicinity, even after parafoveal task performance had been raised to a foveal level. Moreover, foveal enhancement during saccade preparation was more pronounced and developed faster than enhancement during passive fixation. Based on these findings, we suggest a crucial contribution of foveal processing to trans-saccadic visual continuity: Foveal processing of saccade targets commences before the movement is executed and thereby enables a seamless transition once the center of gaze reaches the target.
Affiliation(s)
- Lisa M Kroell
- Department of Psychology, Humboldt-Universität zu Berlin, Berlin, Germany
- Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Martin Rolfs
- Department of Psychology, Humboldt-Universität zu Berlin, Berlin, Germany
- Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Exzellenzcluster Science of Intelligence, Technische Universität Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
Collapse
|
10
|
Abstract
Many models of attention assume that attentional selection takes place at a specific moment in time that demarcates the critical transition from pre-attentive to attentive processing of sensory input. We argue that this intuitively appealing standard account of attentional selectivity is not only inaccurate, but has led to substantial conceptual confusion. As an alternative, we offer a 'diachronic' framework that describes attentional selectivity as a process that unfolds over time. Key to this view is the concept of attentional episodes, brief periods of intense attentional amplification of sensory representations that regulate access to working memory and response-related processes. We describe how attentional episodes are linked to earlier attentional mechanisms and to recurrent processing at the neural level. We review studies that establish the existence of attentional episodes, delineate the factors that determine if and when they are triggered, and discuss the costs associated with processing multiple events within a single episode. Finally, we argue that this framework offers new solutions to old problems in attention research that have never been resolved. It can provide a unified and conceptually coherent account of the network of cognitive and neural processes that produce the goal-directed selectivity in perceptual processing that is commonly referred to as 'attention'.
11. Wilmott JP, Michel MM. Transsaccadic integration of visual information is predictive, attention-based, and spatially precise. J Vis 2021; 21:14. [PMID: 34374744; PMCID: PMC8366295; DOI: 10.1167/jov.21.8.14]
Abstract
Eye movements produce shifts in the positions of objects in the retinal image, but observers are able to integrate these shifting retinal images into a coherent representation of visual space. This ability is thought to be mediated by attention-dependent saccade-related neural activity that is used by the visual system to anticipate the retinal consequences of impending eye movements. Previous investigations of the perceptual consequences of this predictive activity typically infer attentional allocation using indirect measures such as accuracy or reaction time. Here, we investigated the perceptual consequences of saccades using an objective measure of attentional allocation, reverse correlation. Human observers executed a saccade while monitoring a flickering target object flanked by flickering distractors and reported whether the average luminance of the target was lighter or darker than the background. Successful task performance required subjects to integrate visual information across the saccade. A reverse correlation analysis yielded a spatiotemporal "psychophysical kernel" characterizing how different parts of the stimulus contributed to the luminance decision throughout each trial. Just before the saccade, observers integrated luminance information from a distractor located at the post-saccadic retinal position of the target, indicating a predictive perceptual updating of the target. Observers did not integrate information from distractors placed in alternative locations, even when they were nearer to the target object. We also observed simultaneous predictive perceptual updating for two spatially distinct targets. These findings suggest both that shifting neural representations mediate the coherent representation of visual space, and that these shifts have significant consequences for transsaccadic perception.
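Reverse correlation of this kind can be sketched with a simulated observer (the number of locations, trial count, and decision rule are assumptions of the sketch):

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_locs = 5000, 5

# Per-trial luminance noise at five screen locations (synthetic).
noise = rng.normal(size=(n_trials, n_locs))

# Simulated observer: "lighter" responses driven only by location 2
# (standing in for the target's post-saccadic retinal position), plus
# internal noise.
responses = (noise[:, 2] + 0.5 * rng.normal(size=n_trials)) > 0

# Classification image: mean noise on "lighter" minus "darker" trials.
kernel = noise[responses].mean(axis=0) - noise[~responses].mean(axis=0)
print(np.round(kernel, 2))  # large weight only at the used location
```

Computing such a kernel separately for each pre-saccadic time bin yields the spatiotemporal psychophysical kernel used to track when luminance integration shifts to the target's post-saccadic retinal position.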
Affiliation(s)
- James P Wilmott
- Department of Cognitive, Linguistic, & Psychological Sciences, Brown University, Providence, RI, USA
- Melchi M Michel
- Department of Psychology and Center for Cognitive Science (RuCCS), Rutgers University, Piscataway, NJ, USA
- https://mmmlab.org/
12.
Abstract
Our visual system is fundamentally retinotopic. When viewing a stable scene, each eye movement shifts object features and locations on the retina. Thus, sensory representations must be updated, or remapped, across saccades to align presaccadic and postsaccadic inputs. The earliest remapping studies focused on anticipatory, presaccadic shifts of neuronal spatial receptive fields. Over time, it has become clear that there are multiple forms of remapping and that different forms of remapping may be mediated by different neural mechanisms. This review attempts to organize the various forms of remapping into a functional taxonomy based on experimental data and ongoing debates about forward versus convergent remapping, presaccadic versus postsaccadic remapping, and spatial versus attentional remapping. We integrate findings from primate neurophysiological, human neuroimaging and behavioral, and computational modeling studies. We conclude by discussing persistent open questions related to remapping, with specific attention to binding of spatial and featural information during remapping and speculations about remapping's functional significance. Expected final online publication date for the Annual Review of Vision Science, Volume 7 is September 2021. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
Affiliation(s)
- Julie D Golomb
- Department of Psychology, The Ohio State University, Columbus, Ohio 43210, USA
- James A Mazer
- Department of Microbiology and Cell Biology, Montana State University, Bozeman, Montana 59717, USA
13. Neural Representations of Covert Attention across Saccades: Comparing Pattern Similarity to Shifting and Holding Attention during Fixation. eNeuro 2021; 8:ENEURO.0186-20.2021. [PMID: 33558269; PMCID: PMC8026251; DOI: 10.1523/eneuro.0186-20.2021]
Abstract
We can focus visuospatial attention by covertly attending to relevant locations, moving our eyes, or both simultaneously. How does shifting versus holding covert attention during fixation compare with maintaining covert attention across saccades? We acquired human fMRI data during a combined saccade and covert attention task. On Eyes-fixed trials, participants either held attention at the same initial location (“hold attention”) or shifted attention to another location midway through the trial (“shift attention”). On Eyes-move trials, participants made a saccade midway through the trial, while maintaining attention in one of two reference frames: the “retinotopic attention” condition involved holding attention at a fixation-relative location but shifting to a different screen-centered location, whereas the “spatiotopic attention” condition involved holding attention on the same screen-centered location but shifting relative to fixation. We localized the brain network sensitive to attention shifts (shift > hold attention), and used multivoxel pattern time course (MVPTC) analyses to investigate the patterns of brain activity for spatiotopic and retinotopic attention across saccades. In the attention shift network, we found transient information about both whether covert shifts were made and whether saccades were executed. Moreover, in this network, both retinotopic and spatiotopic conditions were represented more similarly to shifting than to holding covert attention. An exploratory searchlight analysis revealed additional regions where spatiotopic was relatively more similar to shifting and retinotopic more to holding. Thus, maintaining retinotopic and spatiotopic attention across saccades may involve different types of updating that vary in similarity to covert attention “hold” and “shift” signals across different regions.
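The multivoxel pattern-similarity comparison can be sketched for a single time point (the mixture weights and noise are assumptions chosen to mimic the reported direction of the effect, not the measured data):

```python
import numpy as np

rng = np.random.default_rng(4)
n_voxels = 500

# Template multivoxel patterns estimated from Eyes-fixed trials
# (synthetic stand-ins for "hold" and "shift" attention patterns).
hold = rng.normal(size=n_voxels)
shift = rng.normal(size=n_voxels)

# Toy saccade-condition patterns leaning toward the shift template,
# mirroring the reported result that both reference frames resembled
# shifting more than holding covert attention.
spatiotopic = 0.7 * shift + 0.3 * hold + 0.5 * rng.normal(size=n_voxels)
retinotopic = 0.7 * shift + 0.3 * hold + 0.5 * rng.normal(size=n_voxels)

def r(a, b):
    """Pearson correlation between two voxel patterns."""
    return np.corrcoef(a, b)[0, 1]

# Which fixation template does each saccade condition resemble more?
spat_like_shift = r(spatiotopic, shift) > r(spatiotopic, hold)
ret_like_shift = r(retinotopic, shift) > r(retinotopic, hold)
print(spat_like_shift, ret_like_shift)
```

Repeating such correlations at every time point gives the multivoxel pattern time course (MVPTC) analysis used to ask whether the saccade conditions look more like "shift" or "hold" signals as trials unfold.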
|
14
|
Predictive remapping leaves a behaviorally measurable attentional trace on eye-centered brain maps. Psychon Bull Rev 2021; 28:1243-1251. [PMID: 33634356 DOI: 10.3758/s13423-021-01893-1] [Accepted: 01/26/2021]
Abstract
How does the brain maintain spatial attention despite the retinal displacement of objects by saccades? A possible solution is to use the vector of an upcoming saccade to compensate for the shift of objects on eye-centered (retinotopic) brain maps. In support of this hypothesis, previous studies have revealed attentional effects at the future retinal locus of an attended object just before the onset of saccades. A critical yet unresolved theoretical issue is whether predictively remapped attentional effects persist long enough on eye-centered brain maps that no external input (goal, expectation, reward, memory, etc.) is needed to maintain spatial attention immediately following saccades. The present study examined this issue with inhibition of return (IOR), an attentional effect that reveals itself in both world-centered and eye-centered coordinates and predictively remaps before saccades. In the first task, a saccade was introduced into a cueing task (the "nonreturn-saccade" task) to show that IOR is coded in world-centered coordinates following saccades. In a second cueing task, two consecutive saccades were executed to trigger remapping and to dissociate the retinal locus relevant to remapping from the cued retinal locus (the "return-saccade" task). IOR was observed at the remapped retinal locus 430 ms after the first saccade, which triggered remapping. A third cueing task (the "no-remapping" task) further revealed that the lingering IOR effect left by remapping was not confounded by attention spillover. Together, these results show that predictive remapping leaves a robust attentional trace on eye-centered brain maps. This retinotopic trace is sufficient to sustain spatial attention for a few hundred milliseconds following saccades.
|
15
|
Fabius JH, Fracasso A, Acunzo DJ, Van der Stigchel S, Melcher D. Low-Level Visual Information Is Maintained across Saccades, Allowing for a Postsaccadic Handoff between Visual Areas. J Neurosci 2020; 40:9476-9486. [PMID: 33115930 PMCID: PMC7724139 DOI: 10.1523/jneurosci.1169-20.2020] [Received: 05/13/2020] [Revised: 09/17/2020] [Accepted: 10/20/2020]
Abstract
Experience seems continuous and detailed despite saccadic eye movements changing retinal input several times per second. There is debate whether neural signals related to updating across saccades contain information about stimulus features, or only location pointers without visual details. We investigated the time course of low-level visual information processing across saccades by decoding the spatial frequency of a stationary stimulus that changed from one visual hemifield to the other because of a horizontal saccadic eye movement. We recorded magnetoencephalography while human subjects (both sexes) monitored the orientation of a grating stimulus, making spatial frequency task-irrelevant. Separate trials, in which subjects maintained fixation, were used to train a classifier, whose performance was then tested on saccade trials. Decoding performance showed that spatial frequency information about the presaccadic stimulus remained present for ∼200 ms after the saccade, transcending retinotopic specificity. Postsaccadic information ramped up rapidly after saccade offset, yielding an overlap of over 100 ms during which decoding was significant from both presaccadic and postsaccadic processing areas. This suggests that the apparent richness of perception across saccades may be supported by the continuous availability of low-level information, with a "soft handoff" of information during the initial processing sweep of the new fixation.
SIGNIFICANCE STATEMENT: Saccades create frequent discontinuities in visual input, yet perception appears stable and continuous. How is this discontinuous input processed so as to yield visual stability? Previous studies have focused on presaccadic remapping. Here we examined the time course of processing of low-level visual information (spatial frequency) across saccades with magnetoencephalography. The results suggest that spatial frequency information is not predictively remapped, but neither is it discarded. Instead, they suggest a soft handoff over time between different visual areas, making this information continuously available across the saccade: information about the presaccadic stimulus remains available even as information about the postsaccadic stimulus becomes available. The simultaneous availability of both could enable rich and continuous perception across saccades.
Affiliation(s)
- Jasper H Fabius
- Institute of Neuroscience and Psychology, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G12 8QQ, United Kingdom
- Alessio Fracasso
- Institute of Neuroscience and Psychology, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G12 8QQ, United Kingdom
- David J Acunzo
- Center for Mind/Brain Sciences and Department of Psychology and Cognitive Sciences, University of Trento, I-38122 Trento, Italy
- Stefan Van der Stigchel
- Experimental Psychology, Helmholtz Institute, Utrecht University, 3584 CS, Utrecht, The Netherlands
- David Melcher
- Center for Mind/Brain Sciences and Department of Psychology and Cognitive Sciences, University of Trento, I-38122 Trento, Italy
- Psychology Program, Division of Science, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
|
16
|
Malevich T, Rybina E, Ivtushok E, Ardasheva L, MacInnes WJ. No evidence for an independent retinotopic reference frame for inhibition of return. Acta Psychol (Amst) 2020; 208:103107. [PMID: 32562893 DOI: 10.1016/j.actpsy.2020.103107] [Received: 01/27/2020] [Revised: 04/07/2020] [Accepted: 05/26/2020]
Abstract
Inhibition of return (IOR) is a delay in responding to a previously inspected location and is viewed as a crucial mechanism that sways attention toward novelty in visual search. Although most visual processing occurs in retinotopic (eye-centered) coordinates, IOR must be coded in spatiotopic (environmental) coordinates to successfully serve its role as a foraging facilitator. Early studies supported this suggestion, but recent results have shown that spatiotopic and retinotopic reference frames of IOR may coexist. The present study tested possible sources of IOR at the retinotopic location, including being part of the spatiotopic IOR gradient, being part of hemifield inhibition, and being an independent source of IOR. We conducted four experiments that varied the cue-target spatial distance (discrete vs. contiguous) and the response modality (manual vs. saccadic). In all experiments, we tested spatiotopic, retinotopic, and neutral (neither spatiotopic nor retinotopic) locations. We found IOR at both the retinotopic and spatiotopic locations but no evidence for an independent source of retinotopic IOR for either response modality. In fact, we observed IOR spreading across the entire validly cued hemifield, including at neutral locations. We conclude that these results indicate either a strategy to inhibit the whole cued hemifield or a large horizontal gradient around the spatiotopically cued location.
PUBLIC SIGNIFICANCE STATEMENT: We perceive the visual world around us as stable despite constant shifts of the retinal image due to saccadic eye movements. In this study, we explore whether inhibition of return (IOR), a mechanism preventing us from returning to previously attended locations, operates in spatiotopic (world-centered) or retinotopic (eye-centered) coordinates. We tested both saccadic and manual IOR at spatiotopic, retinotopic, and control locations. We did not find an independent retinotopic source of IOR for either response modality. The results suggest that IOR spreads over the whole previously attended visual hemifield, or that there is a large horizontal spatiotopic gradient. These results are in line with the idea of IOR as a foraging facilitator in visual search and contribute to our understanding of spatiotopically organized aspects of the visual and attentional systems.
Affiliation(s)
- Tatiana Malevich
- Vision Modelling Laboratory, Faculty of Social Sciences, National Research University - Higher School of Economics, Moscow, Russia; Werner Reichardt Centre for Integrative Neuroscience, University of Tuebingen, Tuebingen, Germany
- Elena Rybina
- Department of Psychology, Faculty of Social Sciences, National Research University - Higher School of Economics, Moscow, Russia
- Elizaveta Ivtushok
- Department of Psychology, Faculty of Social Sciences, National Research University - Higher School of Economics, Moscow, Russia
- Liubov Ardasheva
- Department of Psychology, Faculty of Social Sciences, National Research University - Higher School of Economics, Moscow, Russia
- W Joseph MacInnes
- Vision Modelling Laboratory, Faculty of Social Sciences, National Research University - Higher School of Economics, Moscow, Russia; Department of Psychology, Faculty of Social Sciences, National Research University - Higher School of Economics, Moscow, Russia
|
17
|
Abstract
Spatial attention is thought to be the "glue" that binds features together (e.g., Treisman & Gelade, 1980, Cognitive Psychology, 12(1), 97-136), but attention is dynamic, constantly moving across multiple goals and locations. For example, when a person moves her eyes, visual inputs that are coded relative to the eyes (retinotopic) must be rapidly updated to maintain stable world-centered (spatiotopic) representations. Here, we examined how dynamic updating of spatial attention after a saccadic eye movement affects object-feature binding. Immediately after a saccade, participants were simultaneously presented with four colored and oriented bars (one at a precued spatiotopic target location) and instructed to reproduce both the color and orientation of the target item. Object-feature binding was assessed by applying probabilistic mixture models to the joint distribution of feature errors: feature reports for the target item could be correlated (and thus bound together) or independent. We found that compared with holding attention without an eye movement, attentional updating after an eye movement produced more independent errors, including illusory conjunctions, in which one feature of the item at the spatiotopic target location was misbound with the other feature of the item at the initial retinotopic location. These findings suggest that even when only one spatiotopic location is task relevant, spatial attention-and thus object-feature binding-is malleable across and after eye movements, heightening the challenge that eye movements pose for the binding problem and for visual stability.
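The logic behind analyzing the joint distribution of feature errors can be shown with a toy simulation: if the two features of one object are reported from a shared attended sample ("bound"), their report errors succeed or fail together, producing correlated error magnitudes; if binding breaks, the errors are independent. This is a simplified illustration of that logic, not the probabilistic mixture-model fit used in the study; all numbers are made up.

```python
# Toy illustration of correlated vs. independent feature errors.
# "Bound" trials share one success/failure outcome across both
# features; "independent" trials draw outcomes separately.
import numpy as np

rng = np.random.default_rng(1)
n = 5000

def error_correlation(p_success, bound):
    """Correlation between |color error| and |orientation error|."""
    if bound:
        ok_color = ok_orient = rng.random(n) < p_success  # shared outcome
    else:
        ok_color = rng.random(n) < p_success              # independent
        ok_orient = rng.random(n) < p_success             # outcomes
    # Small error when the feature is reported correctly,
    # a uniform random guess otherwise.
    err_c = np.where(ok_color, rng.normal(0, 5, n), rng.uniform(-90, 90, n))
    err_o = np.where(ok_orient, rng.normal(0, 5, n), rng.uniform(-90, 90, n))
    return np.corrcoef(np.abs(err_c), np.abs(err_o))[0, 1]

r_bound = error_correlation(0.7, bound=True)
r_indep = error_correlation(0.7, bound=False)
print(f"bound r={r_bound:.2f}, independent r={r_indep:.2f}")
```

The bound simulation yields a clearly positive correlation while the independent one hovers near zero, which is the signature the mixture-model analysis distinguishes in the real error data.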
|
18
|
MacInnes WJ, Jóhannesson ÓI, Chetverikov A, Kristjánsson Á. No Advantage for Separating Overt and Covert Attention in Visual Search. Vision (Basel) 2020; 4:E28. [PMID: 32443506 PMCID: PMC7356832 DOI: 10.3390/vision4020028] [Received: 01/28/2020] [Revised: 04/02/2020] [Accepted: 05/10/2020]
Abstract
We move our eyes roughly three times every second while searching complex scenes, but covert attention helps to guide where we allocate those overt fixations. Covert attention may be allocated reflexively or voluntarily, and it speeds the rate of information processing at the attended location. Reducing access to covert attention hinders performance, but it is not known to what degree the locus of covert attention is tied to the current gaze position. We compared visual search performance in a traditional gaze-contingent display with a second task in which a similarly sized contingent window was controlled with a mouse, allowing the covert aperture to be controlled independently of overt gaze. Larger apertures improved performance for both the mouse- and gaze-contingent trials, suggesting that covert attention was beneficial regardless of control type. We also found evidence that participants used the mouse-controlled aperture somewhat independently of gaze position, suggesting that they attempted to untether their covert and overt attention when possible. This untethering manipulation, however, resulted in an overall cost to search performance, a result at odds with previous results from a change blindness paradigm. Untethering covert and overt attention may therefore carry costs or benefits depending on the task demands in each case.
Affiliation(s)
- W. Joseph MacInnes
- School of Psychology, National Research University Higher School of Economics, Moscow 101000, Russia
- Vision Modelling Lab, Faculty of Social Sciences, National Research University Higher School of Economics, Moscow 101000, Russia
- Ómar I. Jóhannesson
- Icelandic Vision Laboratory, Department of Psychology, University of Iceland, 102 Reykjavik, Iceland
- Andrey Chetverikov
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, 6525 EN Nijmegen, The Netherlands
- Árni Kristjánsson
- School of Psychology, National Research University Higher School of Economics, Moscow 101000, Russia
- Icelandic Vision Laboratory, Department of Psychology, University of Iceland, 102 Reykjavik, Iceland
|
19
|
Abstract
Most people easily learn to recognize new faces and places, and with more extensive practice they can become experts at visual tasks as complex as radiological diagnosis and action video games. Such perceptual plasticity has been thoroughly studied in the context of training paradigms that require constant fixation. In contrast, when observers learn under more natural conditions, they make frequent saccadic eye movements. Here we show that such eye movements can play an important role in visual learning. Observers performed a task in which they executed a saccade while discriminating the motion of a cued visual stimulus. Additional stimuli, presented simultaneously with the cued one, permitted an assessment of the perceptual integration of information across visual space. Consistent with previous results on perisaccadic remapping [M. Szinte, D. Jonikaitis, M. Rolfs, P. Cavanagh, H. Deubel, J. Neurophysiol. 116, 1592-1602 (2016)], most observers preferentially integrated information from locations representing the presaccadic and postsaccadic retinal positions of the cue. With extensive training on the saccade task, these observers gradually acquired the ability to perform similar motion integration without making eye movements. Importantly, the newly acquired pattern of spatial integration was determined by the metrics of the saccades made during training. These results suggest that oculomotor influences on visual processing, long thought to subserve the function of perceptual stability, also play a role in visual plasticity.
|
20
|
Affiliation(s)
- Joy J Geng
- Department of Psychology, Center for Mind and Brain, University of California Davis, United States
- Andrew B Leber
- Department of Psychology and Center for Cognitive & Brain Sciences, The Ohio State University, United States
- Sarah Shomstein
- Department of Psychological and Brain Sciences, George Washington University, United States
|