1
Roelfsema PR. Solving the binding problem: Assemblies form when neurons enhance their firing rate-they don't need to oscillate or synchronize. Neuron 2023; 111:1003-1019. PMID: 37023707. DOI: 10.1016/j.neuron.2023.03.016.
Abstract
When we look at an image, its features are represented in our visual system in a highly distributed manner, calling for a mechanism that binds them into coherent object representations. There have been different proposals for the neuronal mechanisms that can mediate binding. One hypothesis is that binding is achieved by oscillations that synchronize neurons representing features of the same perceptual object. This view allows separate communication channels between different brain areas. Another hypothesis is that binding of features that are represented in different brain regions occurs when the neurons in these areas that respond to the same object simultaneously enhance their firing rate, which would correspond to directing object-based attention to these features. This review summarizes evidence in favor of and against these two hypotheses, examining the neuronal correlates of binding and assessing the time course of perceptual grouping. I conclude that enhanced neuronal firing rates bind features into coherent object representations, whereas oscillations and synchrony are unrelated to binding.
Affiliation(s)
- Pieter R Roelfsema
- Department of Vision & Cognition, Netherlands Institute for Neuroscience (KNAW), 1105 BA Amsterdam, the Netherlands; Department of Integrative Neurophysiology, VU University, De Boelelaan 1085, 1081 HV Amsterdam, the Netherlands; Department of Psychiatry, Academic Medical Centre, Postbus 22660, 1100 DD Amsterdam, the Netherlands; Laboratory of Visual Brain Therapy, Sorbonne Université, INSERM, CNRS, Institut de la Vision, 17 rue Moreau, 75012 Paris, France.
2
Fabius JH, Fracasso A, Acunzo DJ, Van der Stigchel S, Melcher D. Low-Level Visual Information Is Maintained across Saccades, Allowing for a Postsaccadic Handoff between Visual Areas. J Neurosci 2020; 40:9476-9486. PMID: 33115930. PMCID: PMC7724139. DOI: 10.1523/JNEUROSCI.1169-20.2020.
Abstract
Experience seems continuous and detailed despite saccadic eye movements changing retinal input several times per second. There is debate whether neural signals related to updating across saccades contain information about stimulus features, or only location pointers without visual details. We investigated the time course of low-level visual information processing across saccades by decoding the spatial frequency of a stationary stimulus that changed from one visual hemifield to the other because of a horizontal saccadic eye movement. We recorded magnetoencephalography while human subjects (both sexes) monitored the orientation of a grating stimulus, making spatial frequency task-irrelevant. Separate trials, in which subjects maintained fixation, were used to train a classifier, whose performance was then tested on saccade trials. Decoding performance showed that spatial frequency information of the presaccadic stimulus remained present for ∼200 ms after the saccade, transcending retinotopic specificity. Postsaccadic information ramped up rapidly after saccade offset. There was an overlap of over 100 ms during which decoding was significant from both presaccadic and postsaccadic processing areas. This suggests that the apparent richness of perception across saccades may be supported by the continuous availability of low-level information with a "soft handoff" of information during the initial processing sweep of the new fixation.

SIGNIFICANCE STATEMENT: Saccades create frequent discontinuities in visual input, yet perception appears stable and continuous. How is this discontinuous input processed resulting in visual stability? Previous studies have focused on presaccadic remapping. Here we examined the time course of processing of low-level visual information (spatial frequency) across saccades with magnetoencephalography. The results suggest that spatial frequency information is not predictively remapped but also is not discarded. Instead, they suggest a soft handoff over time between different visual areas, making this information continuously available across the saccade. Information about the presaccadic stimulus remains available, while the information about the postsaccadic stimulus has also become available. The simultaneous availability of both the presaccadic and postsaccadic information could enable rich and continuous perception across saccades.
Affiliation(s)
- Jasper H Fabius
- Institute of Neuroscience and Psychology, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G12 8QQ, United Kingdom
- Alessio Fracasso
- Institute of Neuroscience and Psychology, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G12 8QQ, United Kingdom
- David J Acunzo
- Center for Mind/Brain Sciences and Department of Psychology and Cognitive Sciences, University of Trento, I-38122 Trento, Italy
- Stefan Van der Stigchel
- Experimental Psychology, Helmholtz Institute, Utrecht University, 3584 CS, Utrecht, The Netherlands
- David Melcher
- Center for Mind/Brain Sciences and Department of Psychology and Cognitive Sciences, University of Trento, I-38122 Trento, Italy
- Psychology Program, Division of Science, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
3
Yao T, Treue S, Krishna BS. Saccade-synchronized rapid attention shifts in macaque visual cortical area MT. Nat Commun 2018; 9:958. PMID: 29511189. PMCID: PMC5840291. DOI: 10.1038/s41467-018-03398-3.
Abstract
While making saccadic eye-movements to scan a visual scene, humans and monkeys are able to keep track of relevant visual stimuli by maintaining spatial attention on them. This ability requires a shift of attentional modulation from the neuronal population representing the relevant stimulus pre-saccadically to the one representing it post-saccadically. For optimal performance, this trans-saccadic attention shift should be rapid and saccade-synchronized. Whether this is so is not known. We trained two rhesus monkeys to make saccades while maintaining covert attention at a fixed spatial location. We show that the trans-saccadic attention shift in the cortical visual middle temporal (MT) area is well synchronized to saccades. Attentional modulation crosses over from the pre-saccadic to the post-saccadic neuronal representation by about 50 ms after a saccade. Taking response latency into account, the trans-saccadic attention shift is well timed to maintain spatial attention on relevant stimuli, so that they can be optimally tracked and processed across saccades.

Saccades result in remapping the neural representation of a target object as well as its attentional modulation. Here the authors show that the trans-saccadic attentional shift is precisely synchronized with the saccade, resulting in optimal maintenance of the locus of spatial attention.
Affiliation(s)
- Tao Yao
- Cognitive Neuroscience Laboratory, German Primate Center-Leibniz Institute for Primate Research, 37077, Goettingen, Germany; Laboratory for Neuro- and Psychophysiology, KU Leuven Medical School, Campus Gasthuisberg, 3000, Leuven, Belgium
- Stefan Treue
- Cognitive Neuroscience Laboratory, German Primate Center-Leibniz Institute for Primate Research, 37077, Goettingen, Germany; Bernstein Center for Computational Neuroscience, 37077, Goettingen, Germany; Leibniz-ScienceCampus Primate Cognition, 37077, Goettingen, Germany; Faculty of Biology and Psychology, University of Goettingen, 37073, Goettingen, Germany
- B Suresh Krishna
- Cognitive Neuroscience Laboratory, German Primate Center-Leibniz Institute for Primate Research, 37077, Goettingen, Germany; Leibniz-ScienceCampus Primate Cognition, 37077, Goettingen, Germany
4
Yao T, Ketkar M, Treue S, Krishna BS. Visual attention is available at a task-relevant location rapidly after a saccade. eLife 2016; 5:e18009. PMID: 27879201. PMCID: PMC5120882. DOI: 10.7554/eLife.18009.
Abstract
Maintaining attention at a task-relevant spatial location while making eye-movements necessitates a rapid, saccade-synchronized shift of attentional modulation from the neuronal population representing the task-relevant location before the saccade to the one representing it after the saccade. Currently, the precise time at which spatial attention becomes fully allocated to the task-relevant location after the saccade remains unclear. Using a fine-grained temporal analysis of human peri-saccadic detection performance in an attention task, we show that spatial attention is fully available at the task-relevant location within 30 milliseconds after the saccade. Subjects tracked the attentional target veridically throughout our task: i.e., they almost never responded to non-target stimuli. Spatial attention and saccadic processing therefore co-ordinate well to ensure that relevant locations are attentionally enhanced soon after the beginning of each eye fixation.

When we look at a scene, our gaze does not move continuously across it. Instead, our eyes move discontinuously, shifting gaze rapidly from point to point to focus on different locations in the scene. These eye movements are known as saccades, and during them the brain temporarily and selectively stops processing visual information. In the brain, a particular area of a scene is represented by different neurons before and after a saccade. Paying attention to a relevant location in a scene across an eye movement therefore requires the brain to shift its attentional effects from the neurons that represented that location in the scene before the saccade to the set of neurons that do so after the saccade. Ideally, this shift should happen rapidly and be synchronized with the eye movement. Exactly how long it takes for attention to emerge at a relevant location after a saccade was not clear because attention had not been recorded on a fine enough time-scale immediately after an eye movement. Yao et al. have now addressed this issue in a series of experiments that asked volunteers to focus their eyes on a fixed point. The volunteers had to follow the point with their eyes as it jumped to a new location, and at the same time had to look out for a change in the movement of a pattern of random dots. The results reveal that attention is fully available at the relevant location within 30 milliseconds after the saccade. In fact, the 30-millisecond delay in the emergence of attention matches the period during which vision is suppressed during a saccade. Thus, the change in the brain's focus of attention coordinates with the saccadic eye movement to ensure that attention can be fixed on a relevant location as soon as possible after the eye movement ends. More studies are now needed to investigate how the brain coordinates its attention and eye-movement processes to synchronize the shift in attention with the eye movement.
Affiliation(s)
- Tao Yao
- Cognitive Neuroscience Laboratory, German Primate Center, Goettingen, Germany
- Madhura Ketkar
- Cognitive Neuroscience Laboratory, German Primate Center, Goettingen, Germany; European Neuroscience Institute, Goettingen, Germany
- Stefan Treue
- Cognitive Neuroscience Laboratory, German Primate Center, Goettingen, Germany; Bernstein Center for Computational Neuroscience, Goettingen, Germany; Faculty of Biology and Psychology, Goettingen University, Goettingen, Germany
- B Suresh Krishna
- Cognitive Neuroscience Laboratory, German Primate Center, Goettingen, Germany
5
Abstract
Neurons in early visual cortical areas not only represent incoming visual information but are also engaged by higher level cognitive processes, including attention, working memory, imagery, and decision-making. Are these cognitive effects an epiphenomenon or are they functionally relevant for these mental operations? We review evidence supporting the hypothesis that the modulation of activity in early visual areas has a causal role in cognition. The modulatory influences allow the early visual cortex to act as a multiscale cognitive blackboard for read and write operations by higher visual areas, which can thereby efficiently exchange information. This blackboard architecture explains how the activity of neurons in the early visual cortex contributes to scene segmentation and working memory, and relates to the subject's inferences about the visual world. The architecture also has distinct advantages for the processing of visual routines that rely on a number of sequentially executed processing steps.
Affiliation(s)
- Pieter R Roelfsema
- Netherlands Institute for Neuroscience, 1105 BA Amsterdam, The Netherlands; Department of Integrative Neurophysiology, VU University Amsterdam, 1081 HV Amsterdam, The Netherlands; Psychiatry Department, Academic Medical Center, 1105 AZ Amsterdam, The Netherlands
- Floris P de Lange
- Donders Institute for Brain, Cognition and Behavior, Radboud University, 6525 EN Nijmegen, The Netherlands
6
Marino AC, Mazer JA. Perisaccadic Updating of Visual Representations and Attentional States: Linking Behavior and Neurophysiology. Front Syst Neurosci 2016; 10:3. PMID: 26903820. PMCID: PMC4743436. DOI: 10.3389/fnsys.2016.00003.
Abstract
During natural vision, saccadic eye movements lead to frequent retinal image changes that result in different neuronal subpopulations representing the same visual feature across fixations. Despite these potentially disruptive changes to the neural representation, our visual percept is remarkably stable. Visual receptive field remapping, characterized as an anticipatory shift in the position of a neuron's spatial receptive field immediately before saccades, has been proposed as one possible neural substrate for visual stability. Many of the specific properties of remapping, e.g., the exact direction of remapping relative to the saccade vector and the precise mechanisms by which remapping could instantiate stability, remain a matter of debate. Recent studies have also shown that visual attention, like perception itself, can be sustained across saccades, suggesting that the attentional control system can also compensate for eye movements. Classical remapping could have an attentional component, or there could be a distinct attentional analog of visual remapping. At this time we do not yet fully understand how the stability of attentional representations relates to perisaccadic receptive field shifts. In this review, we develop a vocabulary for discussing perisaccadic shifts in receptive field location and perisaccadic shifts of attentional focus, review and synthesize behavioral and neurophysiological studies of perisaccadic perception and perisaccadic attention, and identify open questions that remain to be experimentally addressed.
Affiliation(s)
- Alexandria C Marino
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT, USA; Medical Scientist Training Program, Yale University School of Medicine, New Haven, CT, USA
- James A Mazer
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT, USA; Department of Neurobiology, Yale University School of Medicine, New Haven, CT, USA; Department of Psychology, Yale University, New Haven, CT, USA
7
Abstract
Neurons at early stages of the visual cortex signal elemental features, such as pieces of contour, but how these signals are organized into perceptual objects is unclear. Theories have proposed that spiking synchrony between these neurons encodes how features are grouped (binding-by-synchrony), but recent studies did not find the predicted increase in synchrony with binding. Here we propose that features are grouped to "proto-objects" by intrinsic feedback circuits that enhance the responses of the participating feature neurons. This hypothesis predicts synchrony exclusively between feature neurons that receive feedback from the same grouping circuit. We recorded from neurons in macaque visual cortex and used border-ownership selectivity, an intrinsic property of the neurons, to infer whether or not two neurons are part of the same grouping circuit. We found that binding produced synchrony between same-circuit neurons, but not between other pairs of neurons, as predicted by the grouping hypothesis. In a selective attention task, synchrony emerged with ignored as well as attended objects, and higher synchrony was associated with faster behavioral responses, as would be expected from early grouping mechanisms that provide the structure for object-based processing. Thus, synchrony could be produced by automatic activation of intrinsic grouping circuits. However, the binding-related elevation of synchrony was weak compared with its random fluctuations, arguing against synchrony as a code for binding. In contrast, feedback grouping circuits encode binding by modulating the response strength of related feature neurons. Thus, our results suggest a novel coding mechanism that might underlie the proto-objects of perception.
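Spiking synchrony of the kind analyzed above is conventionally quantified with a cross-correlogram: coincidence counts between two spike trains across a range of time lags, with a zero-lag peak indicating synchrony. The sketch below applies this to two synthetic Poisson spike trains that share a common input; the firing rates, 1 ms bins, and lag window are invented for illustration, not the recording parameters of the study.

```python
# Cross-correlogram sketch: count spike coincidences between two trains at a
# range of time lags. A shared "common input" produces a peak at zero lag,
# the classic signature of spiking synchrony. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
dur_ms = 100_000                         # 100 s of 1 ms bins
p_ind, p_common = 0.02, 0.005            # independent and shared spike probabilities

common = rng.random(dur_ms) < p_common   # shared drive -> near-coincident spikes
train_a = (rng.random(dur_ms) < p_ind) | common
train_b = (rng.random(dur_ms) < p_ind) | common

def cross_correlogram(a, b, max_lag=20):
    """Coincidence counts of train b at each lag (ms) relative to train a."""
    lags = np.arange(-max_lag, max_lag + 1)
    counts = np.empty(lags.size)
    for i, lag in enumerate(lags):
        if lag >= 0:
            counts[i] = np.count_nonzero(a[:a.size - lag] & b[lag:])
        else:
            counts[i] = np.count_nonzero(a[-lag:] & b[:lag])
    return lags, counts

lags, counts = cross_correlogram(train_a, train_b)
center = counts[lags == 0][0]
flanks = counts[np.abs(lags) > 10].mean()
print(f"zero-lag count {center:.0f} vs flank average {flanks:.1f}")
```

The paper's point that binding-related synchrony is weak relative to its fluctuations corresponds, in this picture, to a central peak only marginally above the Poisson variability of the flanks.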
8
Grossberg S, Srinivasan K, Yazdanbakhsh A. Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements. Front Psychol 2015; 5:1457. PMID: 25642198. PMCID: PMC4294135. DOI: 10.3389/fpsyg.2014.01457.
Abstract
How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations. The model explains data from single neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation and the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object's surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations.
Affiliation(s)
- Stephen Grossberg
- Center for Adaptive Systems, Graduate Program in Cognitive and Neural Systems, Center of Excellence for Learning in Education, Science and Technology, Center for Computational Neuroscience and Neural Technology, and Department of Mathematics, Boston University, Boston, MA, USA
- Karthik Srinivasan
- Center for Adaptive Systems, Graduate Program in Cognitive and Neural Systems, Center of Excellence for Learning in Education, Science and Technology, Center for Computational Neuroscience and Neural Technology, and Department of Mathematics, Boston University, Boston, MA, USA
- Arash Yazdanbakhsh
- Center for Adaptive Systems, Graduate Program in Cognitive and Neural Systems, Center of Excellence for Learning in Education, Science and Technology, Center for Computational Neuroscience and Neural Technology, and Department of Mathematics, Boston University, Boston, MA, USA
9
Abstract
Psychophysical and neurophysiological studies indicate that during the preparation of saccades, visual processing at the target location is facilitated automatically by the deployment of attention. It has been assumed that the neural mechanisms involved in presaccadic shifts of attention are purely spatial in nature. Saccade preparation modulates the visual responses of neurons within extrastriate area V4, where the responses to targets are enhanced and responses to nontargets are suppressed. We tested whether this effect also engages a nonspatial form of modulation. We measured the responses of area V4 neurons to oriented gratings in two monkeys (Macaca mulatta) making delayed saccades to targets distant from the neuronal receptive field (RF). We varied the orientation of both the RF stimulus and the saccadic target. We found that, in addition to the spatial modulation, saccade preparation involves a feature-dependent modulation of V4 neuronal responses. Specifically, we found that the suppression of area V4 responses to nontarget stimuli during the preparation of saccades depends on the features of the saccadic target. Presaccadic suppression was absent when the features of the saccadic target matched the features preferred by individual V4 neurons. This feature-dependent modulation occurred in the absence of any feature-attention task. We show that our observations are consistent with a computational framework in which feature-based effects automatically emerge from saccade-related feedback signals that are spatial in nature.
10
Visual space is compressed in prefrontal cortex before eye movements. Nature 2014; 507:504-507. PMID: 24670771. PMCID: PMC4064801. DOI: 10.1038/nature13149.
Abstract
We experience the visual world through a series of saccadic eye movements, each one shifting our gaze to bring objects of interest to the fovea for further processing. Although such movements lead to frequent and substantial displacements of the retinal image, these displacements go unnoticed. It is widely assumed that a primary mechanism underlying this apparent stability is an anticipatory shifting of visual receptive fields (RFs) from their presaccadic to their postsaccadic locations before movement onset. Evidence of this predictive 'remapping' of RFs has been particularly apparent within brain structures involved in gaze control. However, critically absent among that evidence are detailed measurements of visual RFs before movement onset. Here we show that during saccade preparation, rather than remap, RFs of neurons in a prefrontal gaze control area massively converge towards the saccadic target. We mapped the visual RFs of prefrontal neurons during stable fixation and immediately before the onset of eye movements, using multi-electrode recordings in monkeys. Following movements from an initial fixation point to a target, RFs remained stationary in retinocentric space. However, in the period immediately before movement onset, RFs shifted by as much as 18 degrees of visual angle, and converged towards the target location. This convergence resulted in a threefold increase in the proportion of RFs responding to stimuli near the target region. In addition, as in human observers, the population of prefrontal neurons grossly mislocalized presaccadic stimuli as being closer to the target. Our results show that RF shifts do not predict the retinal displacements due to saccades, but instead reflect the overriding perception of target space during eye movements.
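The distinction drawn above between predictive remapping and convergence is geometric, and can be made concrete with a toy calculation. In the sketch below, the RF positions, saccade vector, and 50% shrink factor are invented purely for illustration; the point is only that remapping predicts a rigid translation of the RF map, while convergence predicts compression toward the target.

```python
# Toy geometry contrasting the two accounts discussed above: forward remapping
# (every RF shifts by the inverse saccade vector) versus convergence (RFs shift
# toward the saccade target). RF positions, the saccade, and the 50% shrink
# factor are invented purely for illustration.
import numpy as np

rng = np.random.default_rng(2)
target = np.array([10.0, 0.0])                    # saccade target, deg of visual angle
saccade = target.copy()                           # saccade from fixation at the origin
rf_pre = rng.uniform(-20.0, 20.0, size=(100, 2))  # presaccadic RF centers

rf_remap = rf_pre - saccade                       # prediction of forward remapping
rf_conv = rf_pre + 0.5 * (target - rf_pre)        # prediction of convergence

def mean_dist_to_target(rf):
    return np.linalg.norm(rf - target, axis=1).mean()

# Remapping is a rigid translation: every RF moves by the same vector and
# nothing accumulates at the target. Convergence compresses the map toward it.
print(mean_dist_to_target(rf_pre), mean_dist_to_target(rf_remap), mean_dist_to_target(rf_conv))
```

Under convergence, the mean distance of RF centers to the target shrinks, so the proportion of RFs near the target rises, which is the signature reported in the paper.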
11
Figure-ground processing during fixational saccades in V1: indication for higher-order stability. J Neurosci 2014; 34:3247-3252. PMID: 24573283. DOI: 10.1523/JNEUROSCI.4375-13.2014.
Abstract
In a typical visual scene we continuously perceive a "figure" that is segregated from the surrounding "background" despite ongoing microsaccades and small saccades that are performed when attempting fixation (fixational saccades [FSs]). Previously reported neuronal correlates of figure-ground (FG) segregation in the primary visual cortex (V1) showed enhanced activity in the "figure" along with suppressed activity in the noisy "background." However, it is unknown how this FG modulation in V1 is affected by FSs. To investigate this question, we trained two monkeys to detect a contour embedded in a noisy background while simultaneously imaging V1 using voltage-sensitive dyes. During stimulus presentation, the monkeys typically performed 1-3 FSs, which displaced the contour over the retina. Using eye position and a 2D analytical model to map the stimulus onto V1, we were able to compute FG modulation before and after each FS. On the spatial cortical scale, we found that, after each FS, FG modulation follows the stimulus retinal displacement and "hops" within the V1 retinotopic map, suggesting visual instability. On the temporal scale, FG modulation is initiated in the new retinotopic position before it disappeared from the old retinotopic position. Moreover, the FG modulation developed faster after an FS, compared with after stimulus onset, which may contribute to visual stability of FG segregation, along the timeline of stimulus presentation. Therefore, despite spatial discontinuity of FG modulation in V1, the higher-order stability of FG modulation along time may enable our stable and continuous perception.
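The "2D analytical model" used above to project the stimulus onto the V1 retinotopic map is not specified in the abstract; a standard choice in the literature is the complex-log (monopole) retinotopic map of Schwartz. The sketch below uses typical textbook parameter values for the scale `k` and foveal offset `a`, which are assumptions rather than the study's fitted values.

```python
# A common analytical model of the V1 retinotopic map: the complex-log
# (monopole) mapping, in which cortical position is proportional to the log
# of eccentricity plus a foveal offset. Parameter values are typical textbook
# assumptions, not those fitted in the study above.
import numpy as np

def visual_to_cortex(x_deg, y_deg, k=15.0, a=0.7):
    """Map a visual-field point (deg) in one hemifield to cortical mm."""
    z = x_deg + 1j * y_deg        # visual field as a complex number
    w = k * np.log(z + a)         # monopole complex-log map
    return w.real, w.imag

# Cortical magnification falls with eccentricity: a half-degree step near the
# fovea covers far more cortex than the same step at 20 degrees, which is why
# a small fixational saccade makes the stimulus "hop" across the V1 map.
foveal_step = visual_to_cortex(1.0, 0.0)[0] - visual_to_cortex(0.5, 0.0)[0]
peripheral_step = visual_to_cortex(20.0, 0.0)[0] - visual_to_cortex(19.5, 0.0)[0]
print(f"foveal {foveal_step:.2f} mm vs peripheral {peripheral_step:.2f} mm per 0.5 deg")
```

A map of this form lets eye-position records translate each retinal displacement of the contour into a predicted displacement on the imaged cortical surface, which is what the figure-ground analysis above requires.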
12
Zhang E, Zhang GL, Li W. Spatiotopic perceptual learning mediated by retinotopic processing and attentional remapping. Eur J Neurosci 2013; 38:3758-3767. PMID: 24118649. DOI: 10.1111/ejn.12379.
Abstract
Visual processing takes place in both retinotopic and spatiotopic frames of reference. Whereas visual perceptual learning is usually specific to the trained retinotopic location, our recent study has shown spatiotopic specificity of learning in motion direction discrimination. To explore the mechanisms underlying spatiotopic processing and learning, and to examine whether similar mechanisms also exist in visual form processing, we trained human subjects to discriminate an orientation difference between two successively displayed stimuli, with a gaze shift in between to manipulate their positional relation in the spatiotopic frame of reference without changing their retinal locations. Training resulted in better orientation discriminability for the trained than for the untrained spatial relation of the two stimuli. This learning-induced spatiotopic preference was seen only at the trained retinal location and orientation, suggesting experience-dependent spatiotopic form processing directly based on a retinotopic map. Moreover, a similar but weaker learning-induced spatiotopic preference was still present even if the first stimulus was rendered irrelevant to the orientation discrimination task by having the subjects judge the orientation of the second stimulus relative to its mean orientation in a block of trials. However, if the first stimulus was absent, and thus no attention was captured before the gaze shift, the learning produced no significant spatiotopic preference, suggesting an important role of attentional remapping in spatiotopic processing and learning. Taken together, our results suggest that spatiotopic visual representation can be mediated by interactions between retinotopic processing and attentional remapping, and can be modified by perceptual training.
Affiliation(s)
- En Zhang
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, 100875, China
13
O'Herron P, von der Heydt R. Remapping of border ownership in the visual cortex. J Neurosci 2013; 33:1964-1974. PMID: 23365235. PMCID: PMC4086328. DOI: 10.1523/JNEUROSCI.2797-12.2013.
Abstract
We see objects as having continuity although the retinal image changes frequently. How such continuity is achieved is hard to understand, because neurons in the visual cortex have small receptive fields that are fixed on the retina, which means that a different set of neurons is activated every time the eyes move. Neurons in areas V1 and V2 of the visual cortex signal the local features that are currently in their receptive fields and do not show "remapping" when the image moves. However, subsets of neurons in these areas also carry information about global aspects, such as figure-ground organization. Here we performed experiments to find out whether figure-ground organization is remapped. We recorded single neurons in macaque V1 and V2 in which figure-ground organization is represented by assignment of contours to regions (border ownership). We found previously that border-ownership signals persist when a figure edge is switched to an ambiguous edge by removing the context. We now used this paradigm to see whether border ownership transfers when the ambiguous edge is moved across the retina. In the new position, the edge activated a different set of neurons at a different location in cortex. We found that border ownership was transferred to the newly activated neurons. The transfer occurred whether the edge was moved by a saccade or by moving the visual display. Thus, although the contours are coded in retinal coordinates, their assignment to objects is maintained across movements of the retinal image.
Affiliation(s)
- Philip O'Herron
- Krieger Mind/Brain Institute and Department of Neuroscience, Johns Hopkins University, Baltimore, Maryland 21218, USA.
14
Prime SL, Vesia M, Crawford JD. Cortical mechanisms for trans-saccadic memory and integration of multiple object features. Philos Trans R Soc Lond B Biol Sci 2011; 366:540-53. [PMID: 21242142 DOI: 10.1098/rstb.2010.0184] [Citation(s) in RCA: 53] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Constructing an internal representation of the world from successive visual fixations, i.e. separated by saccadic eye movements, is known as trans-saccadic perception. Research on trans-saccadic perception (TSP) has been traditionally aimed at resolving the problems of memory capacity and visual integration across saccades. In this paper, we review this literature on TSP with a focus on research showing that egocentric measures of the saccadic eye movement can be used to integrate simple object features across saccades, and that the memory capacity for items retained across saccades, like visual working memory, is restricted to about three to four items. We also review recent transcranial magnetic stimulation experiments which suggest that the right parietal eye field and frontal eye fields play a key functional role in spatial updating of objects in TSP. We conclude by speculating on possible cortical mechanisms for governing egocentric spatial updating of multiple objects in TSP.
Affiliation(s)
- Steven L Prime
- Department of Psychology, University of Manitoba, Winnipeg, Manitoba, Canada, R3T 2N2
15
Abstract
In the present review, we address the relationship between attention and visual stability. Even though with each eye, head and body movement the retinal image changes dramatically, we perceive the world as stable and are able to perform visually guided actions. However, visual stability is not as complete as introspection would lead us to believe. We attend to only a few items at a time and stability is maintained only for those items. There appear to be two distinct mechanisms underlying visual stability. The first is a passive mechanism: the visual system assumes the world to be stable, unless there is a clear discrepancy between the pre- and post-saccadic image of the region surrounding the saccade target. This is related to the pre-saccadic shift of attention, which allows for an accurate preview of the saccade target. The second is an active mechanism: information about attended objects is remapped within retinotopic maps to compensate for eye movements. The locus of attention itself, which is also characterized by localized retinotopic activity, is remapped as well. We conclude that visual attention is crucial in our perception of a stable world.
Affiliation(s)
- Sebastiaan Mathôt
- Department of Cognitive Psychology, Vrije Universiteit, Amsterdam, The Netherlands.
16
Biber U, Ilg UJ. Visual stability and the motion aftereffect: a psychophysical study revealing spatial updating. PLoS One 2011; 6:e16265. [PMID: 21298104 PMCID: PMC3027650 DOI: 10.1371/journal.pone.0016265] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2010] [Accepted: 12/08/2010] [Indexed: 11/21/2022] Open
Abstract
Eye movements create an ever-changing image of the world on the retina. In particular, frequent saccades call for a compensatory mechanism to transform the changing visual information into a stable percept. To this end, the brain presumably uses internal copies of motor commands. Electrophysiological recordings of visual neurons in the primate lateral intraparietal cortex, the frontal eye fields, and the superior colliculus suggest that the receptive fields (RFs) of certain neurons shift towards their post-saccadic positions before the onset of a saccade. However, the perceptual consequences of these shifts remain controversial. We wanted to test in humans whether a remapping of motion adaptation occurs in visual perception. The motion aftereffect (MAE) occurs after viewing a moving stimulus, as an apparent movement in the opposite direction. We designed a saccade paradigm suitable for revealing pre-saccadic remapping of the MAE. Indeed, a transfer of motion adaptation from the pre-saccadic to the post-saccadic position was observed when subjects prepared saccades. In the remapping condition, the strength of the MAE was comparable to the effect measured in a control condition (33±7% vs. 27±4%). In contrast, after a saccade or without saccade planning, the MAE was weak or absent when the adaptation and test stimuli were located at different retinal locations, i.e., the effect was clearly retinotopic. Regarding visual cognition, our study reveals for the first time predictive remapping of the MAE but no spatiotopic transfer across saccades. Since the cortical sites involved in motion adaptation in primates are most likely the primary visual cortex and the middle temporal area (MT/V5), corresponding to human MT, our results suggest that pre-saccadic remapping extends to these areas, which have been associated with strict retinotopy and therefore with classical RF organization. The pre-saccadic transfer of visual features demonstrated here may be a crucial determinant of a stable percept despite saccades.
Affiliation(s)
- Ulrich Biber
- Hertie-Institute for Clinical Brain Research, Department of Cognitive Neurology, University of Tübingen, Tübingen, Germany.
17
Neuronal activity in the visual cortex reveals the temporal order of cognitive operations. J Neurosci 2010; 30:16293-303. [PMID: 21123575 DOI: 10.1523/jneurosci.1256-10.2010] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Most mental processes consist of a number of processing steps that are executed sequentially. The timing of the individual mental operations can usually only be estimated indirectly, from the pattern of reaction times. In vision, however, many processing steps are associated with the modulation of neuronal activity in early visual areas. Here we exploited this association to elucidate the time course of neuronal activity related to each of the self-paced mental processing steps in complex visual tasks. We trained monkeys to perform two tasks, search-trace and trace-search, which required performing a sequence of two operations: a visual search for a specific color and the mental tracing of a curve. We used multielectrode recording techniques to monitor the representations of multiple visual items in area V1 at the same time and found that the relevant curve as well as the target of visual search evoked enhanced neuronal activity with a timing that depended on the order of operations. This modulation of neuronal activity in early visual areas could allow these areas to (1) act as a cognitive blackboard that permits the exchange of information between successive processing steps of a sequential visual task and to (2) contribute to the orderly progression of task-dependent endogenous attention shifts that are driven by task structure and evolve over hundreds of milliseconds.
18
Attentional facilitation throughout human visual cortex lingers in retinotopic coordinates after eye movements. J Neurosci 2010; 30:10493-506. [PMID: 20685992 DOI: 10.1523/jneurosci.1546-10.2010] [Citation(s) in RCA: 61] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
With each eye movement, the image of the world received by the visual system changes dramatically. To maintain stable spatiotopic (world-centered) visual representations, the retinotopic (eye-centered) coordinates of visual stimuli are continually remapped, even before the eye movement is completed. Recent psychophysical work has suggested that updating of attended locations occurs as well, although on a slower timescale, such that sustained attention lingers in retinotopic coordinates for several hundred milliseconds after each saccade. To explore where and when this "retinotopic attentional trace" resides in the cortical visual processing hierarchy, we conducted complementary functional magnetic resonance imaging and event-related potential (ERP) experiments using a novel gaze-contingent task. Human subjects executed visually guided saccades while covertly monitoring a fixed spatiotopic target location. Although subjects responded only to stimuli appearing at the attended spatiotopic location, blood oxygen level-dependent responses to stimuli appearing after the eye movement at the previously, but no longer, attended retinotopic location were enhanced in visual cortical area V4 and throughout visual cortex. This retinotopic attentional trace was also detectable with higher temporal resolution in the anterior N1 component of the ERP data, a well established signature of attentional modulation. Together, these results demonstrate that, when top-down spatiotopic signals act to redirect visuospatial attention to new retinotopic locations after eye movements, facilitation transiently persists in the cortical regions representing the previously relevant retinotopic location.
19
Alexander DM, Van Leeuwen C. Mapping of contextual modulation in the population response of primary visual cortex. Cogn Neurodyn 2010; 4:1-24. [PMID: 19898958 PMCID: PMC2837531 DOI: 10.1007/s11571-009-9098-9] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2009] [Revised: 10/04/2009] [Accepted: 10/11/2009] [Indexed: 10/20/2022] Open
Abstract
We review the evidence of long-range contextual modulation in V1. Populations of neurons in V1 are activated by a wide variety of stimuli outside of their classical receptive fields (RF), well beyond their surround region. These effects generally involve extra-RF features with an orientation component. The population mapping of orientation preferences to the upper layers of V1 is well understood, as far as the classical RF properties are concerned, and involves organization into pinwheel-like structures. We introduce a novel hypothesis regarding the organization of V1's contextual response. We show that RF and extra-RF orientation preferences are mapped in related ways. Orientation pinwheels are the foci of both types of features. The mapping of contextual features onto the orientation pinwheel has a form that recapitulates the organization of the visual field: an iso-orientation patch within the pinwheel also responds to extra-RF stimuli of the same orientation. We hypothesize that the same form of mapping applies to other stimulus properties that are mapped out in V1, such as colour and contrast selectivity. A specific consequence is that fovea-like properties will be mapped in a systematic way to orientation pinwheels. We review the evidence that cytochrome oxidase blobs comprise the foci of this contextual remapping for colour and low contrasts. Neurodynamics and motion in the visual field are argued to play an important role in the shaping and maintenance of this type of mapping in V1.
Affiliation(s)
- David M. Alexander
- Laboratory for Perceptual Dynamics, RIKEN Brain Science Institute, 2-1 Hirosawa, Wako-shi, Saitama 351-0198 Japan
- Cees Van Leeuwen
- Laboratory for Perceptual Dynamics, RIKEN Brain Science Institute, 2-1 Hirosawa, Wako-shi, Saitama 351-0198 Japan
20
Khayat PS, Pooresmaeili A, Roelfsema PR. Time course of attentional modulation in the frontal eye field during curve tracing. J Neurophysiol 2009; 101:1813-22. [PMID: 19176609 DOI: 10.1152/jn.91050.2008] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Neurons in the frontal eye fields (FEFs) register incoming visual information and select visual stimuli that are relevant for behavior. Here we investigated the timing of the visual response and the timing of selection by recording from single FEF neurons in a curve-tracing task that requires shifts of attention followed by an oculomotor response. We found that the behavioral selection signal in area FEF had a latency of 147 ms and that it was delayed substantially relative to the visual response, which occurred 50 ms after stimulus presentation. We compared the FEF responses to activity previously recorded in the primary visual cortex (area V1) during the same task. Visual responses in area V1 preceded the FEF responses, but the latencies of selection signals in areas V1 and FEF were similar. The similarity of timing of selection signals in structures at opposite ends of the visual cortical processing hierarchy supports the view that stimulus selection occurs in an interaction between widely separated cortical regions.
Affiliation(s)
- P S Khayat
- Department of Physiology, McGill University, 3655 Promenade Sir William Osler, Montréal, QC H3G 1Y6, Canada.
21
Melcher D. Predictive remapping of visual features precedes saccadic eye movements. Nat Neurosci 2007; 10:903-7. [PMID: 17589507 DOI: 10.1038/nn1917] [Citation(s) in RCA: 146] [Impact Index Per Article: 8.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2007] [Accepted: 05/15/2007] [Indexed: 11/09/2022]
Abstract
The frequent occurrence of saccadic eye movements raises the question of how information is combined across separate glances into a stable, continuous percept. Here I show that visual form processing is altered at both the current fixation position and the location of the saccadic target before the saccade. When human observers prepared to follow a displacement of the stimulus with the eyes, visual form adaptation was transferred from current fixation to the future gaze position. This transfer of adaptation also influenced the perception of test stimuli shown at an intermediate position between fixation and saccadic target. Additionally, I found a presaccadic transfer of adaptation when observers prepared to move their eyes toward a stationary adapting stimulus in peripheral vision. The remapping of visual processing, demonstrated here with form adaptation, may help to explain our impression of a smooth transition, with no temporal delay, of visual perception across glances.
Affiliation(s)
- David Melcher
- Center for Mind/Brain Studies and Department of Cognitive Science, University of Trento, Corso Bettini 31, Rovereto 38068, Italy.
22
Alexander DM, Wright JJ. The maximum range and timing of excitatory contextual modulation in monkey primary visual cortex. Vis Neurosci 2006; 23:721-8. [PMID: 17020628 DOI: 10.1017/s0952523806230049] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2004] [Accepted: 03/27/2006] [Indexed: 11/06/2022]
Abstract
Contextual modulations of receptive field properties by distal stimulus configurations have been shown for a variety of stimulus paradigms. A survey of excitatory contextual modulation data for V1 shows the maximum scale of interactions, measured in terms of distance in V1, to be between 10 mm and 30 mm. Different types of excitatory contextual modulation in V1 occur throughout the interval of 40-250 ms after stimulus delivery. This window provides opportunity for global propagation of visual contextual information to a subset of V1 neurons, via several routes within the visual system. We propose a number of experiments and analyses to confirm the results from this empirical survey.
Affiliation(s)
- D M Alexander
- Faculty of Information Technology, University of Technology, Sydney, Australia.
23
Abstract
With each eye movement, stationary objects in the world change position on the retina, yet we perceive the world as stable. Spatial updating, or remapping, is one neural mechanism by which the brain compensates for shifts in the retinal image caused by voluntary eye movements. Remapping of a visual representation is believed to arise from a widespread neural circuit including parietal and frontal cortex. The current experiment tests the hypothesis that extrastriate visual areas in human cortex have access to remapped spatial information. We tested this hypothesis using functional magnetic resonance imaging (fMRI). We first identified the borders of several occipital lobe visual areas using standard retinotopic techniques. We then tested subjects while they performed a single-step saccade task analogous to the task used in neurophysiological studies in monkeys, and two conditions that control for visual and motor effects. We analyzed the fMRI time series data with a nonlinear, fully Bayesian hierarchical statistical model. We identified remapping as activity in the single-step task that could not be attributed to purely visual or oculomotor effects. The strength of remapping was roughly monotonic with position in the visual hierarchy: remapped responses were largest in areas V3A and hV4 and smallest in V1 and V2. These results demonstrate that updated visual representations are present in cortical areas that are directly linked to visual perception.
Affiliation(s)
- Elisha P Merriam
- Department of Neuroscience, and Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA.
24
Hafed ZM, Krauzlis RJ. Ongoing eye movements constrain visual perception. Nat Neurosci 2006; 9:1449-57. [PMID: 17028586 DOI: 10.1038/nn1782] [Citation(s) in RCA: 27] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2006] [Accepted: 09/13/2006] [Indexed: 11/08/2022]
Abstract
Eye movements markedly change the pattern of retinal stimulation. To maintain stable vision, the brain possesses a variety of mechanisms that compensate for the retinal consequences of eye movements. However, eye movements may also be important for resolving the ambiguities often posed by visual inputs, because motor commands contain additional spatial information that is necessarily absent from retinal signals. To test this possibility, we used a perceptually ambiguous stimulus composed of four line segments, consistent with a shape whose vertices were occluded. In a passive condition, subjects fixated a spot while the shape translated along a certain trajectory. In several active conditions, the spot, occluder and shape translated such that when subjects tracked the spot, they experienced the same retinal stimulus as during fixation. We found that eye movements significantly promoted perceptual coherence compared to fixation. These results indicate that eye movement information constrains the perceptual interpretation of visual inputs.
Affiliation(s)
- Ziad M Hafed
- Salk Institute for Biological Studies, 10010 North Torrey Pines Road, La Jolla, California 92037, USA.
25
Khayat PS, Spekreijse H, Roelfsema PR. Attention lights up new object representations before the old ones fade away. J Neurosci 2006; 26:138-42. [PMID: 16399680 PMCID: PMC6674304 DOI: 10.1523/jneurosci.2784-05.2006] [Citation(s) in RCA: 57] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
We investigated how attention shifts from one object to another by recording neuronal activity in the primary visual cortex. Monkeys performed a contour-grouping task in which they had to select a target curve and ignore a distractor curve. Some trials required a shift of attention, because the target and distractor curves were switched during the course of the trial. We monitored the dynamics of this attention shift in area V1, in which neuronal responses evoked by the target curve are stronger than those evoked by the distractor. The reallocation of attention was associated with a rapid and strong enhancement of responses to the newly attended curve, followed, after approximately 60 ms, by a weaker suppression of responses to the curve from which attention was removed. We conclude that attention can be rapidly allocated to a new object before it disengages from the previously attended one.
Affiliation(s)
- Paul S Khayat
- Department of Vision and Cognition, The Netherlands Ophthalmic Research Institute, 1105 BA Amsterdam, The Netherlands.
26
Thiele A. Vision: a brake on the speed of sight. Curr Biol 2005; 15:R917-9. [PMID: 16303547 DOI: 10.1016/j.cub.2005.10.057] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
We move our eyes more often than our heart beats. Our brain seems to cope effortlessly with the consequences of these rapid visual alterations, but a new study shows that similar scene changes in the absence of eye movements delay the speed of information processing. So are there costs in constantly shifting our focus of gaze?
27
Royal DW, Sáry G, Schall JD, Casagrande VA. Correlates of motor planning and postsaccadic fixation in the macaque monkey lateral geniculate nucleus. Exp Brain Res 2005; 168:62-75. [PMID: 16151777 DOI: 10.1007/s00221-005-0093-z] [Citation(s) in RCA: 40] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2005] [Accepted: 05/05/2005] [Indexed: 12/23/2022]
Abstract
There is significant controversy regarding the ability of the primate visual system to construct stable percepts from a never-ending stream of brief fixations and rapid saccadic eye movements. In this study, we examined the timing and occurrence of perisaccadic modulation of LGN single-unit activity in awake-behaving macaque monkeys while they made spontaneous saccades in the dark and made visually guided saccades to discrete stimuli located outside the receptive field. Our hypothesis was that the activity of LGN cells is modulated by efference copies of motor plans to produce saccadic eye movements and that this modulation depends neither on the presence of feedforward visual information nor on a corollary discharge of signals directing saccadic eye movements. On average, 25% of LGN cells demonstrated significant perisaccadic modulation. This modulation consisted of a moderate suppression of activity that began more than 100 ms prior to the initiation of a saccadic eye movement and continued beyond the termination of the saccadic eye movement. This suppression was followed by a large enhancement of activity after the eyes arrived at the next fixation. Although members of all three LGN relay cell classes (magnocellular, parvocellular, and koniocellular) demonstrated significant saccade-related suppression and enhancement of activity, more cells demonstrated postsaccadic enhancement (25%) than perisaccadic suppression (17%). In no case did the timing of the modulation coincide directly with saccade duration. The degree of modulation observed did not vary with LGN cell class, LGN receptive field center location, center sign (ON-center or OFF-center), or saccade latency or velocity. The time course of modulation did, however, vary with saccade size such that suppression was longer for longer saccades. The fact that activity from a percentage of LGN cells from all cell classes was modulated in relationship to saccadic eye movements in the absence of direct visual stimulation suggests that this modulation is a general phenomenon not tied to specific types of visual stimuli. Similarly, because the onset of the modulation preceded eye movements by more than 100 ms, it is likely that this modulation reflects higher-order motor planning rather than a corollary of mechanisms in direct control of the eye movements themselves. Finally, the fact that the largest modulation is a postsaccadic enhancement of activity may suggest that perisaccadic modulations are designed more for the facilitation of visual information processing once the eyes land at a new location than for filtering unwanted visual stimuli.
Affiliation(s)
- D W Royal
- Center for Molecular Neuroscience, Vanderbilt University, Nashville, TN 37232-2175, USA
28
Khayat PS, Spekreijse H, Roelfsema PR. Visual information transfer across eye movements in the monkey. Vision Res 2004; 44:2901-17. [PMID: 15380995 DOI: 10.1016/j.visres.2004.06.018] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2003] [Revised: 06/15/2004] [Indexed: 11/19/2022]
Abstract
During normal viewing, the eyes move from one location to another in order to sample the visual environment. Information acquired before the eye movement facilitates post-saccadic processing. This "preview effect" indicates that some information is maintained in transsaccadic memory and combined with information acquired at the next fixation. However, the nature of transsaccadic memory remains a subject of debate. Here, we investigate preview effects in monkeys that carry out a contour-grouping (curve-tracing) task, by manipulating the consistency between pre- and post-saccadic information. The results show that consistent information causes a preview benefit, whereas inconsistent information causes a preview cost. These preview effects are relatively independent of the pre-saccadic viewing duration, and they occur even when the stimulus is exposed for only approximately 10 ms. The results further demonstrate that an entire relevant curve is stored in transsaccadic memory, instead of just the items at the saccade goal. They suggest that preview effects are caused by a mechanism that stores attended sensory information to make it available at the next fixation. The results are discussed within a theoretical framework that establishes an intimate relationship between attention, short-term memory and transsaccadic memory.
Affiliation(s)
- Paul S Khayat
- Department of Vision and Cognition, The Netherlands Ophthalmic Research Institute, Meibergdreef 47, 1105 BA Amsterdam, The Netherlands.