51
Lescroart MD, Kanwisher N, Golomb JD. No Evidence for Automatic Remapping of Stimulus Features or Location Found with fMRI. Front Syst Neurosci 2016; 10:53. PMID: 27378866; PMCID: PMC4904027; DOI: 10.3389/fnsys.2016.00053.
Abstract
The input to our visual system shifts every time we move our eyes. To maintain a stable percept of the world, visual representations must be updated with each saccade. Near the time of a saccade, neurons in several visual areas become sensitive to the regions of visual space that their receptive fields occupy after the saccade. This process, known as remapping, transfers information from one set of neurons to another, and may provide a mechanism for visual stability. However, it is not clear whether remapping transfers information about stimulus features in addition to information about stimulus location. To investigate this issue, we recorded blood-oxygen-level dependent (BOLD) functional magnetic resonance imaging (fMRI) responses while human subjects viewed images of faces and houses (two visual categories with many feature differences). Immediately after some image presentations, subjects made a saccade that moved the previously stimulated location to the opposite side of the visual field. We then used a combination of univariate analyses and multivariate pattern analyses to test whether information about stimulus location and stimulus features was remapped to the ipsilateral hemisphere after the saccades. We found no reliable indication of stimulus feature remapping in any region. However, we also found no reliable indication of stimulus location remapping, despite the fact that our paradigm was highly similar to previous fMRI studies of remapping. The absence of location remapping in our study precludes strong conclusions regarding feature remapping. However, these results also suggest that measurement of location remapping with fMRI depends strongly on the details of the experimental paradigm used. We highlight differences in our approach from the original fMRI studies of remapping, discuss potential reasons for the failure to generalize prior location remapping results, and suggest directions for future research.
Affiliation(s)
- Mark D Lescroart
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA
- Nancy Kanwisher
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Julie D Golomb
- Department of Psychology, Center for Cognitive and Brain Sciences, Ohio State University, Columbus, OH, USA
52
Abstract
Visual perception seems continuous, but recent evidence suggests that the underlying perceptual mechanisms are in fact periodic, particularly visual attention. Because visual attention is closely linked to the preparation of saccadic eye movements, the question arises of how periodic attentional processes interact with the preparation and execution of voluntary saccades. In two experiments, human observers made voluntary saccades between two placeholders, monitoring each one for the presentation of a threshold-level target. Detection performance was evaluated as a function of latency with respect to saccade landing. The time course of detection performance revealed oscillations at around 4 Hz both before the saccade at the saccade origin and after the saccade at the saccade destination. Furthermore, oscillations before and after the saccade were in phase, meaning that the saccade did not disrupt or reset the ongoing attentional rhythm. Instead, it seems that voluntary saccades are executed as part of an ongoing attentional rhythm, with the eyes in flight during the troughs of the attentional wave. This finding demonstrates for the first time that periodic attentional mechanisms affect not only perception but also overt motor behavior.
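The time-course analysis described above, extracting the amplitude and phase of a roughly 4 Hz rhythm in detection performance, can be sketched as a single-frequency Fourier projection. The hit-rate time course below is a hypothetical illustration, not the study's data or its actual analysis pipeline:

```python
import math

def component(samples, dt, freq_hz):
    """Amplitude and phase of one frequency in a detection time course
    (discrete Fourier projection at freq_hz; samples spaced dt seconds)."""
    re = sum(x * math.cos(2 * math.pi * freq_hz * i * dt)
             for i, x in enumerate(samples))
    im = -sum(x * math.sin(2 * math.pi * freq_hz * i * dt)
              for i, x in enumerate(samples))
    n = len(samples)
    return 2 * math.hypot(re, im) / n, math.atan2(im, re)

# Hypothetical hit-rate time course oscillating at 4 Hz, sampled every 10 ms.
dt = 0.01
hit_rate = [0.5 + 0.1 * math.cos(2 * math.pi * 4 * i * dt) for i in range(100)]
amp, phase = component(hit_rate, dt, 4.0)  # amp -> 0.1, phase -> 0.0
```

Comparing the phase estimated from presaccadic and postsaccadic segments is what tests whether the saccade resets the rhythm; equal phases indicate one continuous oscillation, as the study reports.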
53
Abstract
Saccadic remapping, a presaccadic increase in neural activity when a saccade is about to bring an object into a neuron's receptive field, may be crucial for our perception of a stable world. Studies of perception and saccadic remapping, like ours, focus on the presaccadic acquisition of information from the saccade target, with no direct reference to underlying physiology. While information is known to be acquired prior to a saccade, it is unclear whether object-selective or feature-specific information is remapped. To test this, we performed a series of psychophysical experiments in which we presented a peripheral, nonfoveated face as a presaccadic target. The target face disappeared at saccade onset. After making a saccade to the location of the peripheral target face (which was no longer visible), subjects misperceived the expression of a subsequent, foveally presented neutral face as being repelled away from the peripheral presaccadic face target. This effect was similar to a sequential shape contrast or negative aftereffect but required a saccade, because covert attention was not sufficient to generate the illusion. Additional experiments further revealed that inverting the faces disrupted the illusion, suggesting that presaccadic remapping is object-selective and not based on low-level features. Our results demonstrate that saccadic remapping can be an object-selective process, spatially tuned to the target of the saccade and distinct from covert attention in the absence of a saccade.
54
Abstract
A basic principle in visual neuroscience is the retinotopic organization of neural receptive fields. Here, we review behavioral, neurophysiological, and neuroimaging evidence for nonretinotopic processing of visual stimuli. A number of behavioral studies have shown that perception depends on object-centered or external-space coordinate systems, in addition to retinal coordinates. Both single-cell neurophysiology and neuroimaging have provided evidence for the modulation of neural firing by gaze position and for the processing of visual information based on craniotopic or spatiotopic coordinates. Transient remapping of the spatial and temporal properties of neurons contingent on saccadic eye movements has been demonstrated in visual cortex, as well as in frontal and parietal areas involved in saliency/priority maps, and is a good candidate to mediate some of the spatial invariance demonstrated by perception. Recent studies suggest that spatiotopic selectivity depends on a low-spatial-resolution system of maps that operates over a longer time frame than retinotopic processing and is strongly modulated by high-level cognitive factors such as attention. The interaction of an initial and rapid retinotopic processing stage, tied to new fixations, and a longer-lasting but less precise nonretinotopic level of visual representation could underlie the perception of both a detailed and a stable visual world across saccadic eye movements.
55
Meilinger T, Watanabe K. Multiple Strategies for Spatial Integration of 2D Layouts within Working Memory. PLoS One 2016; 11:e0154088. PMID: 27101011; PMCID: PMC4839648; DOI: 10.1371/journal.pone.0154088.
Abstract
Prior results on the spatial integration of layouts within a room have differed regarding the reference frame that participants used for integration. We asked whether these differences also occur when integrating 2D screen views and, if so, what the reasons for this might be. In four experiments, we showed that the integrating reference frame varied as a function of task familiarity combined with processing time, cues for spatial transformation, and information about action requirements, paralleling results in the 3D case. Participants saw part of an object layout on screen 1, another part on screen 2, and responded to the integrated layout on screen 3. Layout presentations on successive screens coincided or differed in orientation. Aligning misaligned screens for integration is known to increase errors and latencies, so the error/latency pattern indicated which reference frame was used for integration. We showed that task familiarity combined with self-paced learning, visual updating, and knowing from where to act prioritized integration within, respectively, the reference frame of the initial presentation, that frame as updated later, and the frame from which participants acted. Participants also relied heavily on layout-intrinsic frames. The results show how humans flexibly adjust their integration strategy to a wide variety of conditions.
Affiliation(s)
- Tobias Meilinger
- Research Center for Advanced Science and Technology, The University of Tokyo, Tokyo, Japan
- Department for Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Katsumi Watanabe
- Research Center for Advanced Science and Technology, The University of Tokyo, Tokyo, Japan
- Department of Intermedia Art and Science, Waseda University, Tokyo, Japan
56
Marino AC, Mazer JA. Perisaccadic Updating of Visual Representations and Attentional States: Linking Behavior and Neurophysiology. Front Syst Neurosci 2016; 10:3. PMID: 26903820; PMCID: PMC4743436; DOI: 10.3389/fnsys.2016.00003.
Abstract
During natural vision, saccadic eye movements lead to frequent retinal image changes that result in different neuronal subpopulations representing the same visual feature across fixations. Despite these potentially disruptive changes to the neural representation, our visual percept is remarkably stable. Visual receptive field remapping, characterized as an anticipatory shift in the position of a neuron's spatial receptive field immediately before saccades, has been proposed as one possible neural substrate for visual stability. Many of the specific properties of remapping, e.g., the exact direction of remapping relative to the saccade vector and the precise mechanisms by which remapping could instantiate stability, remain a matter of debate. Recent studies have also shown that visual attention, like perception itself, can be sustained across saccades, suggesting that the attentional control system can also compensate for eye movements. Classical remapping could have an attentional component, or there could be a distinct attentional analog of visual remapping. At this time we do not yet fully understand how the stability of attentional representations relates to perisaccadic receptive field shifts. In this review, we develop a vocabulary for discussing perisaccadic shifts in receptive field location and perisaccadic shifts of attentional focus, review and synthesize behavioral and neurophysiological studies of perisaccadic perception and perisaccadic attention, and identify open questions that remain to be experimentally addressed.
Affiliation(s)
- Alexandria C Marino
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT, USA; Medical Scientist Training Program, Yale University School of Medicine, New Haven, CT, USA
- James A Mazer
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT, USA; Department of Neurobiology, Yale University School of Medicine, New Haven, CT, USA; Department of Psychology, Yale University, New Haven, CT, USA
57
Grossberg S. Cortical Dynamics of Figure-Ground Separation in Response to 2D Pictures and 3D Scenes: How V2 Combines Border Ownership, Stereoscopic Cues, and Gestalt Grouping Rules. Front Psychol 2016; 6:2054. PMID: 26858665; PMCID: PMC4726768; DOI: 10.3389/fpsyg.2015.02054.
Abstract
The FACADE model, and its laminar cortical realization and extension in the 3D LAMINART model, have explained, simulated, and predicted many perceptual and neurobiological data about how the visual cortex carries out 3D vision and figure-ground perception, and how these cortical mechanisms enable 2D pictures to generate 3D percepts of occluding and occluded objects. In particular, these models have proposed how border ownership occurs, but have not yet explicitly explained the correlation between multiple properties of border ownership neurons in cortical area V2 that were reported in a remarkable series of neurophysiological experiments by von der Heydt and his colleagues; namely, border ownership, contrast preference, binocular stereoscopic information, selectivity for side-of-figure, Gestalt rules, and strength of attentional modulation, as well as the time course during which such properties arise. This article shows how, by combining 3D LAMINART properties that were discovered in two parallel streams of research, a unified explanation of these properties emerges. This explanation proposes, moreover, how these properties contribute to the generation of consciously seen 3D surfaces. The first research stream models how processes like 3D boundary grouping and surface filling-in interact in multiple stages within and between the V1 interblob—V2 interstripe—V4 cortical stream and the V1 blob—V2 thin stripe—V4 cortical stream, respectively. Of particular importance for understanding figure-ground separation is how these cortical interactions convert computationally complementary boundary and surface mechanisms into a consistent conscious percept, including the critical use of surface contour feedback signals from surface representations in V2 thin stripes to boundary representations in V2 interstripes. Remarkably, key figure-ground properties emerge from these feedback interactions. 
The second research stream shows how cells that compute absolute disparity in cortical area V1 are transformed into cells that compute relative disparity in cortical area V2. Relative disparity is a more invariant measure of an object's depth and 3D shape, and is sensitive to figure-ground properties.
Affiliation(s)
- Stephen Grossberg
- Center for Adaptive Systems, Graduate Program in Cognitive and Neural Systems, Center for Computational Neuroscience and Neural Technology, Boston University, Boston, MA, USA; Department of Mathematics, Boston University, Boston, MA, USA
58
A "blanking effect" for surface features: Transsaccadic spatial-frequency discrimination is improved by postsaccadic blanking. Atten Percept Psychophys 2015; 77:1500-6. PMID: 25991033; DOI: 10.3758/s13414-015-0926-1.
Abstract
Although saccadic eye movements occur frequently—about three or four times a second—humans are astonishingly blind to transsaccadic changes. Locational displacements of the saccade target of up to 2 deg of visual angle, and even large changes of a visual scene, can go unnoticed. For a long time, this insensitivity was ascribed to deficits in transsaccadic memory: only a coarse, (spatially) imprecise representation would be retained across a saccade. This assumption was contradicted by Deubel and Schneider's (Behavioral and Brain Sciences 17:259-260, 1994) striking finding that locational discrimination performance across a saccade is greatly improved by inserting a short postsaccadic blank. Surprisingly, the question of whether blanking effects also occur for other forms of transsaccadic changes (i.e., surface-feature changes) has been widely ignored. We tested this question by means of a transsaccadic change in spatial frequency. Postsaccadic blanking facilitated spatial-frequency discrimination, but to a lesser extent than the usual blanking effects obtained with locational displacements. This finding bears important implications for models of visual stability and transsaccadic memory.
59
Takemura N, Fukui T, Inui T. A Computational Model for Aperture Control in Reach-to-Grasp Movement Based on Predictive Variability. Front Comput Neurosci 2015; 9:143. PMID: 26696874; PMCID: PMC4675317; DOI: 10.3389/fncom.2015.00143.
Abstract
In human reach-to-grasp movement, visual occlusion of a target object leads to a larger peak grip aperture compared to conditions where online vision is available. However, no previous computational and neural network models for reach-to-grasp movement explain the mechanism of this effect. We simulated the effect of online vision on the reach-to-grasp movement by proposing a computational control model based on the hypothesis that the grip aperture is controlled to compensate for both motor variability and sensory uncertainty. In this model, the aperture is formed to achieve a target aperture size that is sufficiently large to accommodate the actual target; it also includes a margin to ensure proper grasping despite sensory and motor variability. To this end, the model considers: (i) the variability of the grip aperture, which is predicted by the Kalman filter, and (ii) the uncertainty of the object size, which is affected by visual noise. Using this model, we simulated experiments in which the effect of the duration of visual occlusion was investigated. The simulation replicated the experimental result wherein the peak grip aperture increased when the target object was occluded, especially in the early phase of the movement. Both predicted motor variability and sensory uncertainty play important roles in the online visuomotor process responsible for grip aperture control.
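The control principle described in the abstract, an aperture target equal to the estimated object size plus a safety margin covering both predicted motor variability and sensory uncertainty, can be sketched as follows. All sizes, noise levels, and the margin factor `k` are hypothetical illustration values, not the model's fitted parameters:

```python
import math

def target_aperture(size_est_cm, sigma_size_cm, sigma_motor_cm, k=2.0):
    """Grip aperture large enough to clear the object despite variability:
    estimated object size plus k standard deviations of the combined
    sensory (object-size) and predicted motor (aperture) noise."""
    margin = k * math.sqrt(sigma_size_cm ** 2 + sigma_motor_cm ** 2)
    return size_est_cm + margin

# With online vision, sensory uncertainty stays small; under visual
# occlusion it grows, which widens the margin and thus the peak aperture.
with_vision = target_aperture(7.0, sigma_size_cm=0.2, sigma_motor_cm=0.5)
occluded = target_aperture(7.0, sigma_size_cm=0.8, sigma_motor_cm=0.5)
```

The qualitative prediction (larger peak grip aperture under occlusion) follows directly from the larger sensory term in the margin, which is the effect the simulation in the paper replicates.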
Affiliation(s)
- Naohiro Takemura
- Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University, Yoshida-honmachi, Kyoto, Japan
- Takao Fukui
- Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University, Yoshida-honmachi, Kyoto, Japan
- Toshio Inui
- Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University, Yoshida-honmachi, Kyoto, Japan
60
He T, Ding Y, Wang Z. Environment- and eye-centered inhibitory cueing effects are both observed after a methodological confound is eliminated. Sci Rep 2015; 5:16586. PMID: 26565380; PMCID: PMC4643241; DOI: 10.1038/srep16586.
Abstract
Inhibition of return (IOR), typically explored in cueing paradigms, is a performance cost associated with previously attended locations and has been suggested as a crucial attentional mechanism that biases orienting toward novelty. In their seminal IOR paper, Posner and Cohen (1984) showed that IOR is coded in spatiotopic or environment-centered coordinates. Recent studies, however, have consistently reported IOR effects in both spatiotopic and retinotopic (eye-centered) coordinates. One overlooked methodological confound of all previous studies is that the spatial gradient of IOR is not considered when selecting the baseline for estimating IOR effects. This methodological issue makes it difficult to tell whether the IOR effects reported in previous studies were coded in retinotopic or spatiotopic coordinates, or in both. The present study addresses this issue by incorporating no-cue trials into a modified cueing paradigm in which the cue and target are always separated by a gaze shift. The results revealed that a) IOR is indeed coded in both spatiotopic and retinotopic coordinates, and b) the methodology of previous work may have underestimated spatiotopic and retinotopic IOR effects.
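The methodological point, measuring cueing effects against a no-cue baseline taken at the same target location so that the spatial gradient of IOR cannot bias the estimate, can be illustrated with a toy computation. All reaction times here are hypothetical illustration values:

```python
def mean(xs):
    return sum(xs) / len(xs)

def cueing_effect(rt_cued_ms, rt_nocue_same_location_ms):
    """IOR effect = slowing at a previously cued location relative to a
    no-cue baseline measured at the SAME target location, so that the
    spatial gradient of IOR does not contaminate the baseline."""
    return mean(rt_cued_ms) - mean(rt_nocue_same_location_ms)

# Hypothetical RTs: a positive cost emerges in both coordinate frames.
spatiotopic_ior = cueing_effect([412, 405, 398], [380, 376, 384])  # -> 25.0 ms
retinotopic_ior = cueing_effect([408, 399, 402], [381, 377, 379])  # -> 24.0 ms
```

Using a single baseline location for all cued locations, as in earlier designs, would conflate the coordinate-frame question with the distance-dependent falloff of IOR.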
Affiliation(s)
- Tao He
- Center for Cognition and Brain Disorders, Hangzhou Normal University, Hangzhou, 311121, China; Zhejiang Key Laboratory for Research in Assessment of Cognitive Impairments, Hangzhou, 311121, China
- Yun Ding
- Center for Cognition and Brain Disorders, Hangzhou Normal University, Hangzhou, 311121, China; Zhejiang Key Laboratory for Research in Assessment of Cognitive Impairments, Hangzhou, 311121, China
- Zhiguo Wang
- Center for Cognition and Brain Disorders, Hangzhou Normal University, Hangzhou, 311121, China; Zhejiang Key Laboratory for Research in Assessment of Cognitive Impairments, Hangzhou, 311121, China
61
Nakashima Y, Iijima T, Sugita Y. Surround-contingent motion aftereffect. Vision Res 2015; 117:9-15. PMID: 26459145; DOI: 10.1016/j.visres.2015.09.010.
Abstract
We investigated whether motion aftereffects (MAE) can be contingent on surroundings. Random dots moving leftward and rightward were presented in alternation. The moving dots were surrounded by an open circle or an open square. After prolonged exposure to these stimuli, MAE were found to be contingent upon the surrounding frames: dots moving in a random direction appeared to move leftward when surrounded by the frame that had been presented in conjunction with rightward motion. The effect lasted for 24 h and was observed when adapter and test stimuli were presented not only retinotopically, but also at the same spatiotopic position. Furthermore, the effect was observed even when the adapter and test stimuli were presented at different retinotopic and spatiotopic positions, as long as they were presented in the same hemifield. These results indicate that MAE are influenced not only by stimulus features but also by their surroundings, and they suggest that the surround-contingent MAE may be mediated at a higher stage of the motion-processing pathway.
Affiliation(s)
- Yusuke Nakashima
- Department of Psychology, Waseda University, 1-24-1 Toyama, Shinjuku-ku, Tokyo 162-8644, Japan
- Takumi Iijima
- Department of Psychology, Waseda University, 1-24-1 Toyama, Shinjuku-ku, Tokyo 162-8644, Japan
- Yoichi Sugita
- Department of Psychology, Waseda University, 1-24-1 Toyama, Shinjuku-ku, Tokyo 162-8644, Japan
62
Abstract
We explore the visual world through saccadic eye movements, but saccades also present a challenge to visual processing by shifting externally stable objects from one retinal location to another. The brain could solve this problem in two ways: by overwriting preceding input and starting afresh with each fixation, or by maintaining a representation of presaccadic visual features in working memory and updating it with new information from the remapped location. Crucially, when multiple objects are present in a scene, the planning of eye movements profoundly affects the precision of their working memory representations, transferring limited memory resources from fixation toward the saccade target. Here we show that when humans make saccades, not only the precision of representations but also their contents are updated. When multiple item colors are shifted imperceptibly during a saccade, the perceived colors fall between presaccadic and postsaccadic values, with the weight given to each input varying continuously with item location and fixed relative to saccade parameters. Increasing sensory uncertainty, by adding color noise, biases updating toward the more reliable input, consistent with an optimal integration of presaccadic working memory with a postsaccadic updating signal. We recover this update signal and show it to be tightly focused on the vicinity of the saccade target. These results reveal how the nervous system accumulates detailed visual information from multiple views of the same object or scene.
SIGNIFICANCE STATEMENT: This study examines the consequences of saccadic eye movements for the internal representation of visual objects. A saccade shifts the image of a stable visual object from one part of the retina to another. We show that visual representations are built up over these different views of the same object by combining information obtained before and after each saccade. The weights given to presaccadic and postsaccadic information are determined by the relative reliability of each input. This provides evidence that the visual system combines inputs over time in a statistically optimal way.
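The optimal-integration account can be sketched as a precision-weighted average of the presaccadic memory and the postsaccadic input. The hue values and variances below are hypothetical illustration numbers, not data from the study:

```python
def integrate(pre, var_pre, post, var_post):
    """Reliability-weighted average: each input is weighted by its
    inverse variance, the statistically optimal combination rule."""
    w_pre = (1.0 / var_pre) / (1.0 / var_pre + 1.0 / var_post)
    return w_pre * pre + (1.0 - w_pre) * post

# Equally reliable inputs average evenly...
balanced = integrate(10.0, 4.0, 30.0, 4.0)    # -> 20.0
# ...while color noise on the presaccadic input (larger variance)
# biases the percept toward the more reliable postsaccadic signal.
noisy_pre = integrate(10.0, 16.0, 30.0, 4.0)  # close to 26.0
```

This is the signature the study reports: perceived color falls between the two inputs, and adding noise to one input shifts the weight toward the other.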
63
Transsaccadic processing: stability, integration, and the potential role of remapping. Atten Percept Psychophys 2015; 77:3-27. PMID: 25380979; DOI: 10.3758/s13414-014-0751-y.
Abstract
While our frequent saccades allow us to sample the complex visual environment in a highly efficient manner, they also raise certain challenges for interpreting and acting upon visual input. In this selective review, we discuss key findings from the domains of cognitive psychology, visual perception, and neuroscience concerning two such challenges: (1) maintaining the phenomenal experience of visual stability despite our rapidly shifting gaze, and (2) integrating visual information across discrete fixations. In the first two sections of the article, we focus primarily on behavioral findings. Next, we examine the possibility that a neural phenomenon known as predictive remapping may provide an explanation for aspects of transsaccadic processing. In this section of the article, we delineate and critically evaluate multiple proposals about the potential role of predictive remapping in light of both theoretical principles and empirical findings.
64
Chen C, Chen X, Gao M, Yang Q, Yan H. Contextual influence on the tilt after-effect in foveal and para-foveal vision. Neurosci Bull 2015; 31:307-16. PMID: 25895001; DOI: 10.1007/s12264-014-1521-5.
Abstract
A sensory stimulus can only be properly interpreted in light of the stimuli that surround it in space and time. The tilt illusion (TI) and tilt after-effect (TAE) provide good evidence that the perception of a target depends strongly on both its spatial and temporal context. In previous studies, the TI and TAE have typically been investigated separately, so little is known about their combined effects on visual perception and its underlying processing mechanisms. Here, we considered the influence of the spatial context and the temporal effect together and asked how center-surround context affects the TAE in foveal and para-foveal vision. Our results showed that different center-surround spatial patterns significantly affected the TAE in both foveal and para-foveal vision. In the fovea, the TAE was mainly produced by central adaptive gratings. Cross-oriented surroundings significantly inhibited the TAE, and iso-oriented surroundings slightly facilitated it; surround inhibition was much stronger than surround facilitation. In the para-fovea, the TAE was mainly determined by the surrounding patches. Likewise, a cross-oriented central patch inhibited the TAE, and an iso-oriented one facilitated it, but there was no significant difference between inhibition and facilitation. Our findings demonstrate, at the perceptual level, that our visual system adopts different mechanisms to process consistent and inconsistent center-surround orientation information and that the unequal magnitude of surround inhibition and facilitation is vitally important for the visual system to improve the detectability or discriminability of novel or incongruent stimuli.
Affiliation(s)
- Cheng Chen
- Chengdu College, University of Electronic Science and Technology of China, Chengdu, 610054, China
65
Disrupting saccadic updating: visual interference prior to the first saccade elicits spatial errors in the secondary saccade in a double-step task. Exp Brain Res 2015; 233:1893-905. PMID: 25832623; DOI: 10.1007/s00221-015-4261-5.
Abstract
When we explore the visual environment around us, we produce sequences of very precise eye movements, aligning the objects of interest with the most sensitive part of the retina for detailed visual processing. A copy of the impending motor command, the corollary discharge, is sent as soon as the first saccade in a sequence is ready, in order to monitor the next fixation location and correctly plan the subsequent eye movement. Neurophysiological investigations have shown that chemical interference with the corollary discharge generates a distinct pattern of spatial errors in sequential eye movements, with similar results also coming from clinical and TMS studies. Here, we used saccadic inhibition to interfere with the temporal domain of the first of two subsequent saccades during a standard double-step paradigm. In two experiments, we report that the temporal interference with the primary saccade led to a specific error in the final landing position of the second saccade that was consistent with previous lesion and neurophysiological studies, but without affecting the spatial characteristics of the first eye movement. On the other hand, single-step saccades were influenced differently by the flash, with a general undershoot that was more pronounced for larger saccadic amplitudes. These findings show that a flashed visual transient can disrupt saccadic updating in a double-step task, possibly due to the mismatch between the planned and the executed saccadic eye movement.
66
Grossberg S, Srinivasan K, Yazdanbakhsh A. Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements. Front Psychol 2015; 5:1457. PMID: 25642198; PMCID: PMC4294135; DOI: 10.3389/fpsyg.2014.01457.
Abstract
How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations. The model explains data from single neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation and the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object's surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations.
Affiliation(s)
- Stephen Grossberg
- Center for Adaptive Systems, Graduate Program in Cognitive and Neural Systems, Center of Excellence for Learning in Education, Science and Technology, Center for Computational Neuroscience and Neural Technology, and Department of Mathematics Boston University, Boston, MA, USA
- Karthik Srinivasan
- Center for Adaptive Systems, Graduate Program in Cognitive and Neural Systems, Center of Excellence for Learning in Education, Science and Technology, Center for Computational Neuroscience and Neural Technology, and Department of Mathematics Boston University, Boston, MA, USA
- Arash Yazdanbakhsh
- Center for Adaptive Systems, Graduate Program in Cognitive and Neural Systems, Center of Excellence for Learning in Education, Science and Technology, Center for Computational Neuroscience and Neural Technology, and Department of Mathematics Boston University, Boston, MA, USA
|
67
|
Use of exocentric and egocentric representations in the concurrent planning of sequential saccades. J Neurosci 2014; 34:16009-21. [PMID: 25429142 DOI: 10.1523/jneurosci.0328-14.2014]
Abstract
The concurrent planning of sequential saccades offers a simple model to study the nature of visuomotor transformations since the second saccade vector needs to be remapped to foveate the second target following the first saccade. Remapping is thought to occur through egocentric mechanisms involving an efference copy of the first saccade that is available around the time of its onset. In contrast, an exocentric representation of the second target relative to the first target, if available, can be used to directly code the second saccade vector. While human volunteers performed a modified double-step task, we examined the role of exocentric encoding in concurrent saccade planning by shifting the first target location well before the efference copy could be used by the oculomotor system. The impact of the first target shift on concurrent processing was tested by examining the end-points of second saccades following a shift of the second target during the first saccade. The frequency of second saccades to the old versus new location of the second target, as well as the propagation of first saccade localization errors, both indices of concurrent processing, were found to be significantly reduced in trials with the first target shift compared to those without it. A similar decrease in concurrent processing was obtained when we shifted the first target but kept constant the second saccade vector. Overall, these results suggest that the brain can use relatively stable visual landmarks, independent of efference copy-based egocentric mechanisms, for concurrent planning of sequential saccades.
|
68
|
Abekawa N, Gomi H. Online gain update for manual following response accompanied by gaze shift during arm reaching. J Neurophysiol 2014; 113:1206-16. [PMID: 25429112 DOI: 10.1152/jn.00281.2014]
Abstract
To capture objects by hand, online motor corrections are required to compensate for self-body movements. Recent studies have shown that background visual motion, usually caused by body movement, plays a significant role in such online corrections. Visual motion applied during a reaching movement induces a rapid and automatic manual following response (MFR) in the direction of the visual motion. Importantly, the MFR amplitude is modulated by the gaze direction relative to the reach target location (i.e., foveal or peripheral reaching). That is, the brain specifies the adequate visuomotor gain for an online controller based on gaze-reach coordination. However, the time or state point at which the brain specifies this visuomotor gain remains unclear. More specifically, does the gain change occur even during the execution of reaching? In the present study, we measured MFR amplitudes during a task in which the participant performed a saccadic eye movement that altered the gaze-reach coordination during reaching. The results indicate that the MFR amplitude immediately after the saccade termination changed according to the new gaze-reach coordination, suggesting a flexible online updating of the MFR gain during reaching. An additional experiment showed that this gain updating mostly started before the saccade terminated. Therefore, the MFR gain updating process would be triggered by an ocular command related to saccade planning or execution based on forthcoming changes in the gaze-reach coordination. Our findings suggest that the brain flexibly updates the visuomotor gain for an online controller even during reaching movements based on continuous monitoring of the gaze-reach coordination.
Affiliation(s)
- Naotoshi Abekawa
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Wakamiya, Morinosato, Atsugi, Kanagawa, Japan; and
- Hiroaki Gomi
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Wakamiya, Morinosato, Atsugi, Kanagawa, Japan; and CREST, Japan Science and Technology Agency, Kawaguchi, Saitama, Japan
|
69
|
From brain synapses to systems for learning and memory: Object recognition, spatial navigation, timed conditioning, and movement control. Brain Res 2014; 1621:270-93. [PMID: 25446436 DOI: 10.1016/j.brainres.2014.11.018]
Abstract
This article provides an overview of neural models of synaptic learning and memory whose expression in adaptive behavior depends critically on the circuits and systems in which the synapses are embedded. It reviews Adaptive Resonance Theory, or ART, models that use excitatory matching and match-based learning to achieve fast category learning and whose learned memories are dynamically stabilized by top-down expectations, attentional focusing, and memory search. ART clarifies mechanistic relationships between consciousness, learning, expectation, attention, resonance, and synchrony. ART models are embedded in ARTSCAN architectures that unify processes of invariant object category learning, recognition, spatial and object attention, predictive remapping, and eye movement search, and that clarify how conscious object vision and recognition may fail during perceptual crowding and parietal neglect. The generality of learned categories depends upon a vigilance process that is regulated by acetylcholine via the nucleus basalis. Vigilance can get stuck at too high or too low values, thereby causing learning problems in autism and medial temporal amnesia. Similar synaptic learning laws support qualitatively different behaviors: Invariant object category learning in the inferotemporal cortex; learning of grid cells and place cells in the entorhinal and hippocampal cortices during spatial navigation; and learning of time cells in the entorhinal-hippocampal system during adaptively timed conditioning, including trace conditioning. Spatial and temporal processes through the medial and lateral entorhinal-hippocampal system seem to be carried out with homologous circuit designs. Variations of a shared laminar neocortical circuit design have modeled 3D vision, speech perception, and cognitive working memory and learning. A complementary kind of inhibitory matching and mismatch learning controls movement. This article is part of a Special Issue entitled SI: Brain and Memory.
|
70
|
Zirnsak M, Moore T. Saccades and shifting receptive fields: anticipating consequences or selecting targets? Trends Cogn Sci 2014; 18:621-8. [PMID: 25455690 DOI: 10.1016/j.tics.2014.10.002]
Abstract
Saccadic eye movements cause frequent and substantial displacements of the retinal image, but those displacements go unnoticed. It has been widely assumed that this perceived stability emerges from the shifting of visual receptive fields from their current, presaccadic locations to their future, postsaccadic locations in anticipation of the retinal consequences of saccades. Although evidence consistent with this anticipatory remapping has accumulated over the years, more recent work suggests an alternative view. In this opinion article, we examine the evidence of presaccadic receptive field shifts and their relationship to the perceptual changes that accompany saccades. We argue that both reflect the selection of targets for saccades rather than the anticipation of a displaced retinal image.
Affiliation(s)
- Marc Zirnsak
- Department of Neurobiology, and Howard Hughes Medical Institute, Stanford University School of Medicine, Stanford, CA 94305, USA.
- Tirin Moore
- Department of Neurobiology, and Howard Hughes Medical Institute, Stanford University School of Medicine, Stanford, CA 94305, USA
|
71
|
Abstract
The receptive fields of early visual neurons are anchored in retinotopic coordinates (Hubel and Wiesel, 1962). Eye movements shift these receptive fields and therefore require that different populations of neurons encode an object's constituent features across saccades. Whether feature groupings are preserved across successive fixations or processing starts anew with each fixation has been hotly debated (Melcher and Morrone, 2003; Melcher, 2005, 2010; Knapen et al., 2009; Cavanagh et al., 2010a,b; Morris et al., 2010). Here we show that feature integration initially occurs within retinotopic coordinates, but is then conserved within a spatiotopic coordinate frame independent of where the features fall on the retinas. With human observers, we first found that the relative timing of visual features plays a critical role in determining the spatial area over which features are grouped. We exploited this temporal dependence of feature integration to show that features co-occurring within 45 ms remain grouped across eye movements. Our results thus challenge purely feedforward models of feature integration (Pelli, 2008; Freeman and Simoncelli, 2011) that begin de novo after every eye movement, and implicate the involvement of brain areas beyond early visual cortex. The strong temporal dependence we quantify and its link with trans-saccadic object perception instead suggest that feature integration depends, at least in part, on feedback from higher brain areas (Mumford, 1992; Rao and Ballard, 1999; Di Lollo et al., 2000; Moore and Armstrong, 2003; Stanford et al., 2010).
|
72
|
Characterizing ensemble statistics: mean size is represented across multiple frames of reference. Atten Percept Psychophys 2014; 76:746-58. [PMID: 24347042 DOI: 10.3758/s13414-013-0595-x]
Abstract
The visual system represents the overall statistical, not individual, properties of sets. Here we tested the spatial nature of ensemble statistics. We used a mean-size adaptation paradigm (Corbett et al. in Visual Cognition, 20, 211-231, 2012) to examine whether average size is encoded in multiple reference frames. We adapted observers to patches of small- and large-sized dots in opposite regions of the display (left/right or top/bottom) and then tested their perceptions of the sizes of single test dots presented in regions that corresponded to retinotopic, spatiotopic, and hemispheric coordinates within the adapting displays. We observed retinotopic, spatiotopic, and hemispheric adaptation aftereffects, such that participants perceived a test dot as being larger when it was presented in the area adapted to the patch of small dots than when it was presented in the area adapted to large dots. This aftereffect also transferred between eyes. Our results demonstrate that mean size is represented across multiple spatial frames of reference, supporting the proposal that ensemble statistics play a fundamental role in maintaining perceptual stability.
|
73
|
Abstract
In natural scenes, multiple visual stimuli compete for selection; however, each saccade displaces the stimulus representations in retinotopically organized visual and oculomotor maps. In the present study, we used saccade curvature to investigate whether oculomotor competition across eye movements is represented in retinotopic or spatiotopic coordinates. Participants performed a sequence of saccades and we induced oculomotor competition by briefly presenting a task-irrelevant distractor at different times during the saccade sequence. Despite the intervening saccade, the second saccade curved away from a spatial representation of the distractor that was presented before the first saccade. Furthermore, the degree of saccade curvature increased with the salience of the distractor presented before the first saccade. The results suggest that spatiotopic representations of target-distractor competition are crucial for successful interaction with objects of interest despite the intervening eye movements.
|
74
|
Figure-ground processing during fixational saccades in V1: indication for higher-order stability. J Neurosci 2014; 34:3247-52. [PMID: 24573283 DOI: 10.1523/jneurosci.4375-13.2014]
Abstract
In a typical visual scene we continuously perceive a "figure" that is segregated from the surrounding "background" despite ongoing microsaccades and small saccades that are performed when attempting fixation (fixational saccades [FSs]). Previously reported neuronal correlates of figure-ground (FG) segregation in the primary visual cortex (V1) showed enhanced activity in the "figure" along with suppressed activity in the noisy "background." However, it is unknown how this FG modulation in V1 is affected by FSs. To investigate this question, we trained two monkeys to detect a contour embedded in a noisy background while simultaneously imaging V1 using voltage-sensitive dyes. During stimulus presentation, the monkeys typically performed 1-3 FSs, which displaced the contour over the retina. Using eye position and a 2D analytical model to map the stimulus onto V1, we were able to compute FG modulation before and after each FS. On the spatial cortical scale, we found that, after each FS, FG modulation follows the stimulus retinal displacement and "hops" within the V1 retinotopic map, suggesting visual instability. On the temporal scale, FG modulation is initiated in the new retinotopic position before it disappeared from the old retinotopic position. Moreover, the FG modulation developed faster after an FS, compared with after stimulus onset, which may contribute to visual stability of FG segregation, along the timeline of stimulus presentation. Therefore, despite spatial discontinuity of FG modulation in V1, the higher-order stability of FG modulation along time may enable our stable and continuous perception.
|
75
|
|
76
|
The background is remapped across saccades. Exp Brain Res 2013; 232:609-18. [PMID: 24276312 DOI: 10.1007/s00221-013-3769-9]
Abstract
Physiological studies have found that neurons prepare for impending eye movements, showing anticipatory responses to stimuli presented at the location of the post-saccadic receptive fields (RFs) (Wurtz in Vis Res 48:2070-2089, 2008). These studies proposed that visual neurons with shifting RFs prepared for the stimuli they would process after an impending saccade. Additionally, psychophysical studies have shown behavioral consequences of those anticipatory responses, including the transfer of aftereffects (Melcher in Nat Neurosci 10:903-907, 2007) and the remapping of attention (Rolfs et al. in Nat Neurosci 14:252-258, 2011). As the physiological studies proposed, the shifting RF mechanism explains the transfer of aftereffects. Recently, a new mechanism based on activation transfer via a saliency map was proposed, which accounted for the remapping of attention (Cavanagh et al. in Trends Cogn Sci 14:147-153, 2010). We hypothesized that there would be different aspects of the remapping corresponding to these different neural mechanisms. This study found that the information in the background was remapped to a similar extent as the figure, provided that the visual context remained stable. We manipulated the status of the figure and the ground in the saliency map and showed that the manipulation modulated the remapping of the figure and the ground in different ways. These results suggest that the visual system has an ability to remap the background as well as the figure, but lacks the ability to modulate the remapping of the background based on the visual context, and that different neural mechanisms might work together to maintain visual stability across saccades.
|
77
|
Subramanian J, Colby CL. Shape selectivity and remapping in dorsal stream visual area LIP. J Neurophysiol 2013; 111:613-27. [PMID: 24225538 DOI: 10.1152/jn.00841.2011]
Abstract
We explore the visual world by making rapid eye movements (saccades) to focus on objects and locations of interest. Despite abrupt retinal image shifts, we see the world as stable. Remapping contributes to visual stability by updating the internal image with every saccade. Neurons in macaque lateral intraparietal cortex (LIP) and other brain areas update information about salient locations around the time of a saccade. The depth of information transfer remains to be thoroughly investigated. Area LIP, as part of the dorsal visual stream, is regarded as a spatially selective area, yet there is evidence that LIP neurons also encode object features. We sought to determine whether LIP remaps shape information. This knowledge is important for understanding what information is retained from each glance. We identified 82 remapping neurons. First, we presented shapes within the receptive field and tested for shape selectivity in a fixation task. Among the remapping neurons, 28 neurons (34%) were selective for shape. Second, we presented the same shapes in the future location of the receptive field around the time of the saccade and tested for shape selectivity during remapping. Thirty-one (38%) neurons were selective for shape. Of 11 neurons that were shape selective in both tasks, 5 showed significant correlation between shape selectivity in the two tasks. Across the population, there was a weak but significant correlation between responses to shape in the two tasks. Our results provide neurophysiological evidence that remapped responses in area LIP can encode shape information as well as spatial information.
Affiliation(s)
- Janani Subramanian
- Department of Neuroscience and Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, Pennsylvania
|
78
|
Harrison WJ, Retell JD, Remington RW, Mattingley JB. Visual crowding at a distance during predictive remapping. Curr Biol 2013; 23:793-8. [PMID: 23562269 DOI: 10.1016/j.cub.2013.03.050]
Abstract
When we move our eyes, images of objects are displaced on the retina, yet the visual world appears stable. Oculomotor activity just prior to an eye movement contributes to perceptual stability by providing information about the predicted location of a relevant object on the retina following a saccade. It remains unclear, however, whether an object's features are represented at the remapped location. Here, we exploited the phenomenon of visual crowding to show that presaccadic remapping preserves the elementary features of objects at their predicted postsaccadic locations. Observers executed an eye movement and identified a letter probe flashed just before the saccade. Flanking stimuli were flashed around the location that would be occupied by the probe immediately following the saccade. Despite being positioned in the opposite visual field to the probe, these flankers disrupted observers' ability to identify the probe. Crucially, this "remapped crowding" interference was stronger when the flankers were visually similar to the probe than when the flanker and probe stimuli were distinct. Our findings suggest that visual processing at remapped locations is featurally dependent, providing a mechanism for achieving perceptual continuity of objects across saccades.
Affiliation(s)
- William J Harrison
- School of Psychology, The University of Queensland, St. Lucia, QLD 4072, Australia.
|
79
|
Abstract
The fundamental role of the visual system is to guide behavior in natural environments. To optimize information transmission, many animals have evolved a non-homogeneous retina and serially sample visual scenes by saccadic eye movements. Such eye movements, however, introduce high-speed retinal motion and decouple external and internal reference frames. Until now, these processes have only been studied with unnatural stimuli, eye movement behavior, and tasks. These experiments confound retinotopic and geotopic coordinate systems and may probe a non-representative functional range. Here we develop a real-time, gaze-contingent display with precise spatiotemporal control over high-definition natural movies. In an active condition, human observers freely watched nature documentaries and indicated the location of periodic narrow-band contrast increments relative to their gaze position. In a passive condition under central fixation, the same retinal input was replayed to each observer by updating the video's screen position. Comparison of visual sensitivity between conditions revealed three mechanisms that the visual system has adapted to compensate for peri-saccadic vision changes. Under natural conditions we show that reduced visual sensitivity during eye movements can be explained simply by the high retinal speed during a saccade without recourse to an extra-retinal mechanism of active suppression; we give evidence for enhanced sensitivity immediately after an eye movement indicative of visual receptive fields remapping in anticipation of forthcoming spatial structure; and we demonstrate that perceptual decisions can be made in world rather than retinal coordinates.
|
80
|
Talsma D, White BJ, Mathôt S, Munoz DP, Theeuwes J. A retinotopic attentional trace after saccadic eye movements: evidence from event-related potentials. J Cogn Neurosci 2013; 25:1563-77. [PMID: 23530898 DOI: 10.1162/jocn_a_00390]
Abstract
Saccadic eye movements are a major source of disruption to visual stability, yet we experience little of this disruption. We can keep track of the same object across multiple saccades. It is generally assumed that visual stability is due to the process of remapping, in which retinotopically organized maps are updated to compensate for the retinal shifts caused by eye movements. Recent behavioral and ERP evidence suggests that visual attention is also remapped, but that it may still leave a residual retinotopic trace immediately after a saccade. The current study was designed to further examine electrophysiological evidence for such a retinotopic trace by recording ERPs elicited by stimuli that were presented immediately after a saccade (80 msec SOA). Participants were required to maintain attention at a specific location (and to memorize this location) while making a saccadic eye movement. Immediately after the saccade, a visual stimulus was briefly presented at either the attended location (the same spatiotopic location), a location that matched the attended location retinotopically (the same retinotopic location), or one of two control locations. ERP data revealed an enhanced P1 amplitude for the stimulus presented at the retinotopically matched location, but a significant attenuation for probes presented at the original attended location. These results are consistent with the hypothesis that visuospatial attention lingers in retinotopic coordinates immediately following gaze shifts.
Affiliation(s)
- Durk Talsma
- Department of Experimental Psychology, Faculty of Psychology and Educational Sciences, Ghent University, Henri Dunantlaan 2, 9000 Gent, Belgium.
|
81
|
Mathôt S, Theeuwes J. A reinvestigation of the reference frame of the tilt-adaptation aftereffect. Sci Rep 2013; 3:1152. [PMID: 23359857 PMCID: PMC3556595 DOI: 10.1038/srep01152]
Abstract
The tilt-adaptation aftereffect (TAE) is the phenomenon that prolonged perception of a tilted ‘adapter’ stimulus affects the perceived tilt of a subsequent ‘tester’ stimulus. Although it is clear that TAE is strongest when adapter and tester are presented at the same location, the reference frame of the effect is debated. Some authors have reported that TAE is spatiotopic (world centred): It occurs when adapter and tester are presented at the same display location, even when this corresponds to different retinal locations. Others have reported that TAE is exclusively retinotopic (eye centred): It occurs only when adapter and tester are presented at the same retinal location, even when this corresponds to different display locations. Because this issue is crucial for models of transsaccadic perception, we reinvestigated the reference frame of TAE. We report that TAE is exclusively retinotopic, supporting the notion that there is no transsaccadic integration of low-level visual information.
|
82
|
Adaptive Resonance Theory: How a brain learns to consciously attend, learn, and recognize a changing world. Neural Netw 2013; 37:1-47. [PMID: 23149242 DOI: 10.1016/j.neunet.2012.09.017]
|
83
|
Abstract
Human vision uses saccadic eye movements to rapidly shift the sensitive foveal portion of our retina to objects of interest. For vision to function properly amidst these ballistic eye movements, a mechanism is needed to extract discrete percepts on each fixation from the continuous stream of neural activity that spans fixations. The speed of visual parsing is crucial because human behaviors ranging from reading to driving to sports rely on rapid visual analysis. We find that a brain signal associated with moving the eyes appears to play a role in resetting visual analysis on each fixation, a process that may aid in parsing the neural signal. We quantified the degree to which the perception of tilt is influenced by the tilt of a stimulus on a preceding fixation. Two key conditions were compared, one in which a saccade moved the eyes from one stimulus to the next and a second simulated saccade condition in which the stimuli moved in the same manner but the subjects did not move their eyes. We find that there is a brief period of time at the start of each fixation during which the tilt of the previous stimulus influences perception (in a direction opposite to the tilt aftereffect); perception is not instantaneously reset when a fixation starts. Importantly, the results show that this perceptual bias is much greater, with nearly identical visual input, when saccades are simulated. This finding suggests that, in real-saccade conditions, some signal related to the eye movement may be involved in the reset phenomenon. While proprioceptive information from the extraocular muscles is conceivably a factor, the fast speed of the effect we observe suggests that a more likely mechanism is a corollary discharge signal associated with eye movement.
|
84
|
Golomb JD, Kanwisher N. Higher level visual cortex represents retinotopic, not spatiotopic, object location. Cereb Cortex 2012; 22:2794-810. [PMID: 22190434 PMCID: PMC3491766 DOI: 10.1093/cercor/bhr357]
Abstract
The crux of vision is to identify objects and determine their locations in the environment. Although initial visual representations are necessarily retinotopic (eye centered), interaction with the real world requires spatiotopic (absolute) location information. We asked whether higher level human visual cortex, which is important for stable object recognition and action, contains information about retinotopic and/or spatiotopic object position. Using functional magnetic resonance imaging multivariate pattern analysis techniques, we found information about both object category and object location in each of the ventral, dorsal, and early visual regions tested, replicating previous reports. By manipulating fixation position and stimulus position, we then tested whether these location representations were retinotopic or spatiotopic. Crucially, all location information was purely retinotopic. This pattern persisted when location information was irrelevant to the task, and even when spatiotopic (not retinotopic) stimulus position was explicitly emphasized. We also conducted a "searchlight" analysis across our entire scanned volume to explore additional cortex but again found predominantly retinotopic representations. The lack of explicit spatiotopic representations suggests that spatiotopic object position may instead be computed indirectly and continually reconstructed with each eye movement. Thus, despite our subjective impression that visual information is spatiotopic, even in higher level visual cortex, object location continues to be represented in retinotopic coordinates.
Affiliation(s)
- Julie D Golomb
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
|
85
|
Cicchini GM, Binda P, Burr DC, Morrone MC. Transient spatiotopic integration across saccadic eye movements mediates visual stability. J Neurophysiol 2012. [PMID: 23197453 DOI: 10.1152/jn.00478.2012]
Abstract
Eye movements pose major problems to the visual system, because each new saccade changes the mapping of external objects on the retina. It is known that stimuli briefly presented around the time of saccades are systematically mislocalized, whereas continuously visible objects are perceived as spatially stable even when they undergo large transsaccadic displacements. In this study we investigated the relationship between these two phenomena and measured how human subjects perceive the position of pairs of bars briefly displayed around the time of large horizontal saccades. We show that they interact strongly, with the perisaccadic bar being drawn toward the other, dramatically altering the pattern of perisaccadic mislocalization. The interaction field extends over a wide range (200 ms and 20°) and is oriented along the retinotopic trajectory of the saccade-induced motion, suggesting a mechanism that integrates pre- and postsaccadic stimuli at different retinal locations but similar external positions. We show how transient changes in spatial integration mechanisms, which are consistent with the present psychophysical results and with the properties of "remapping cells" reported in the literature, can create transient craniotopy by merging the distinct retinal images of the pre- and postsaccadic fixations to signal a single stable object.
Affiliation(s)
- Guido M Cicchini
- Institute of Neuroscience, National Research Council, Pisa, Italy
86
Tas AC, Moore CM, Hollingworth A. An object-mediated updating account of insensitivity to transsaccadic change. J Vis 2012; 12:18. [PMID: 23092946 PMCID: PMC3720035 DOI: 10.1167/12.11.18] [Citation(s) in RCA: 36] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2012] [Accepted: 09/14/2012] [Indexed: 11/24/2022] Open
Abstract
Recent evidence has suggested that relatively precise information about the location and visual form of a saccade target object is retained across a saccade. However, this information appears to be available for report only when the target is removed briefly, so that the display is blank when the eyes land. We hypothesized that the availability of precise target information is dependent on whether a post-saccade object is mapped to the same object representation established for the presaccade target. If so, then the post-saccade features of the target overwrite the presaccade features, a process of object-mediated updating in which visual masking is governed by object continuity. In two experiments, participants' sensitivity to the spatial displacement of a saccade target was improved when that object changed surface feature properties across the saccade, consistent with the prediction of the object-mediated updating account. Transsaccadic perception appears to depend on a mechanism of object-based masking that is observed across multiple domains of vision. In addition, the results demonstrate that surface-feature continuity contributes to visual stability across saccades.
Affiliation(s)
- A. Caglar Tas
- University of Iowa, Department of Psychology, Iowa City, IA, USA
87
Zhao M, Gersch TM, Schnitzer BS, Dosher BA, Kowler E. Eye movements and attention: the role of pre-saccadic shifts of attention in perception, memory and the control of saccades. Vision Res 2012; 74:40-60. [PMID: 22809798 DOI: 10.1016/j.visres.2012.06.017] [Citation(s) in RCA: 84] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2012] [Revised: 05/11/2012] [Accepted: 06/25/2012] [Indexed: 11/18/2022]
Abstract
Saccadic eye movements and perceptual attention work in a coordinated fashion to allow selection of the objects, features or regions with the greatest momentary need for limited visual processing resources. This study investigates perceptual characteristics of pre-saccadic shifts of attention during a sequence of saccades using the visual manipulations employed to study mechanisms of attention during maintained fixation. The first part of this paper reviews studies of the connections between saccades and attention, and their significance for both saccadic control and perception. The second part presents three experiments that examine the effects of pre-saccadic shifts of attention on vision during sequences of saccades. Perceptual enhancements at the saccadic goal location relative to non-goal locations were found across a range of stimulus contrasts, with either perceptual discrimination or detection tasks, with either single or multiple perceptual targets, and regardless of the presence of external noise. The results show that the preparation of saccades can evoke a variety of attentional effects, including attentionally-mediated changes in the strength of perceptual representations, selection of targets for encoding in visual memory, exclusion of external noise, or changes in the levels of internal visual noise. The visual changes evoked by saccadic planning make it possible for the visual system to effectively use saccadic eye movements to explore the visual environment.
Affiliation(s)
- Min Zhao
- Department of Psychology, Rutgers University, Piscataway, NJ 08854, United States.
88
Dickinson JE, Mighall HK, Almeida RA, Bell J, Badcock DR. Rapidly acquired shape and face aftereffects are retinotopic and local in origin. Vision Res 2012; 65:1-11. [DOI: 10.1016/j.visres.2012.05.012] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2011] [Revised: 05/22/2012] [Accepted: 05/27/2012] [Indexed: 11/29/2022]
89
Foley NC, Grossberg S, Mingolla E. Neural dynamics of object-based multifocal visual spatial attention and priming: object cueing, useful-field-of-view, and crowding. Cogn Psychol 2012; 65:77-117. [PMID: 22425615 DOI: 10.1016/j.cogpsych.2012.02.001] [Citation(s) in RCA: 36] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2011] [Revised: 01/07/2012] [Accepted: 02/02/2012] [Indexed: 11/18/2022]
Abstract
How are spatial and object attention coordinated to achieve rapid object learning and recognition during eye movement search? How do prefrontal priming and parietal spatial mechanisms interact to determine the reaction time costs of intra-object attention shifts, inter-object attention shifts, and shifts between visible objects and covertly cued locations? What factors underlie individual differences in the timing and frequency of such attentional shifts? How do transient and sustained spatial attentional mechanisms work and interact? How can volition, mediated via the basal ganglia, influence the span of spatial attention? A neural model is developed of how spatial attention in the where cortical stream coordinates view-invariant object category learning in the what cortical stream under free viewing conditions. The model simulates psychological data about the dynamics of covert attention priming and switching requiring multifocal attention without eye movements. The model predicts how "attentional shrouds" are formed when surface representations in cortical area V4 resonate with spatial attention in posterior parietal cortex (PPC) and prefrontal cortex (PFC), while shrouds compete among themselves for dominance. Winning shrouds support invariant object category learning, and active surface-shroud resonances support conscious surface perception and recognition. Attentive competition between multiple objects and cues simulates reaction-time data from the two-object cueing paradigm. The relative strength of sustained surface-driven and fast-transient motion-driven spatial attention controls individual differences in reaction time for invalid cues. Competition between surface-driven attentional shrouds controls individual differences in detection rate of peripheral targets in useful-field-of-view tasks. The model proposes how the strength of competition can be mediated, through learning or momentary changes in volition, by the basal ganglia. A new explanation of crowding shows how the cortical magnification factor, among other variables, can cause multiple object surfaces to share a single surface-shroud resonance, thereby preventing recognition of the individual objects.
Affiliation(s)
- Nicholas C Foley
- Center for Adaptive Systems, Department of Cognitive and Neural Systems, Boston University, 677 Beacon Street, Boston, MA 02215, USA
90
Abstract
As we shift our gaze to explore the visual world, information enters cortex in a sequence of successive snapshots, interrupted by phases of blur. Our experience, in contrast, appears like a movie of a continuous stream of objects embedded in a stable world. This perception of stability across eye movements has been linked to changes in spatial sensitivity of visual neurons anticipating the upcoming saccade, often referred to as shifting receptive fields (Duhamel et al., 1992; Walker et al., 1995; Umeno and Goldberg, 1997; Nakamura and Colby, 2002). How exactly these receptive field dynamics contribute to perceptual stability is currently not clear. Anticipatory receptive field shifts toward the future, postsaccadic position may bridge the transient perisaccadic epoch (Sommer and Wurtz, 2006; Wurtz, 2008; Melcher and Colby, 2008). Alternatively, a presaccadic shift of receptive fields toward the saccade target area (Tolias et al., 2001) may serve to focus visual resources onto the most relevant objects in the postsaccadic scene (Hamker et al., 2008). In this view, shifts of feature detectors serve to facilitate the processing of the peripheral visual content before it is foveated. While this conception is consistent with previous observations on receptive field dynamics and on perisaccadic compression (Ross et al., 1997; Morrone et al., 1997; Kaiser and Lappe, 2004), it predicts that receptive fields beyond the saccade target shift toward the saccade target rather than in the direction of the saccade. We have tested this prediction in human observers via the presaccadic transfer of the tilt-aftereffect (Melcher, 2007).
91
Abstract
Successful visually guided behavior requires information about spatiotopic (i.e., world-centered) locations, but how accurately is this information actually derived from initial retinotopic (i.e., eye-centered) visual input? We conducted a spatial working memory task in which subjects remembered a cued location in spatiotopic or retinotopic coordinates while making guided eye movements during the memory delay. Surprisingly, after a saccade, subjects were significantly more accurate and precise at reporting retinotopic locations than spatiotopic locations. This difference grew with each eye movement, such that spatiotopic memory continued to deteriorate, whereas retinotopic memory did not accumulate error. The loss in spatiotopic fidelity is therefore not a generic consequence of eye movements, but a direct result of converting visual information from native retinotopic coordinates. Thus, despite our conscious experience of an effortlessly stable spatiotopic world and our lifetime of practice with spatiotopic tasks, memory is actually more reliable in raw retinotopic coordinates than in ecologically relevant spatiotopic coordinates.
92
Boi M, Vergeer M, Ogmen H, Herzog MH. Nonretinotopic exogenous attention. Curr Biol 2011; 21:1732-7. [PMID: 22000104 DOI: 10.1016/j.cub.2011.08.059] [Citation(s) in RCA: 24] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2011] [Revised: 07/18/2011] [Accepted: 08/30/2011] [Indexed: 11/30/2022]
Abstract
Attention is crucial for visual perception because it allows the visual system to effectively use its limited resources by selecting behaviorally and cognitively relevant stimuli from the large amount of information impinging on the eyes. Reflexive, stimulus-driven attention is essential for successful interactions with the environment because it can, for example, speed up responses to life-threatening events. It is commonly believed that exogenous attention operates in the retinotopic coordinates of the early visual system. Here, using a novel experimental paradigm [1], we show that a nonretinotopic cue improves both accuracy and reaction times in a visual search task. Furthermore, the influence of the cue is limited both in space and time, a characteristic typical of exogenous cueing. These and other recent findings show that many more aspects of vision are processed nonretinotopically than previously thought.
Affiliation(s)
- Marco Boi
- Laboratory of Psychophysics, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne, CH-1015 Lausanne, Switzerland.
93
Ibbotson M, Krekelberg B. Visual perception and saccadic eye movements. Curr Opin Neurobiol 2011; 21:553-8. [PMID: 21646014 PMCID: PMC3175312 DOI: 10.1016/j.conb.2011.05.012] [Citation(s) in RCA: 106] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2011] [Revised: 05/12/2011] [Accepted: 05/15/2011] [Indexed: 12/22/2022]
Abstract
We use saccades several times per second to move the fovea between points of interest and build an understanding of our visual environment. Recent behavioral experiments show evidence for the integration of pre- and postsaccadic information (even subliminally), the modulation of visual sensitivity, and the rapid reallocation of attention. The recent physiological literature has identified a characteristic modulation of neural responsiveness (perisaccadic reduction followed by a postsaccadic increase) that is found in many visual areas, but whose source is as yet unknown. This modulation seems optimal for reducing sensitivity during and boosting sensitivity between saccades, but no study has yet established a direct causal link between neural and behavioral changes.
Affiliation(s)
- Michael Ibbotson
- ARC Centre of Excellence in Vision Science, R.N. Robertson Building, Australian National University, Canberra, ACT 0200, Australia
- Bart Krekelberg
- Center for Molecular and Behavioral Neuroscience, Rutgers University, 197 University, Avenue, Newark, New Jersey 07102, United States, T: +1 973 353 3602, F: +1 973 273 4803
94
Kowler E. Eye movements: the past 25 years. Vision Res 2011; 51:1457-83. [PMID: 21237189 PMCID: PMC3094591 DOI: 10.1016/j.visres.2010.12.014] [Citation(s) in RCA: 288] [Impact Index Per Article: 22.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2010] [Revised: 11/29/2010] [Accepted: 12/27/2010] [Indexed: 11/30/2022]
Abstract
This article reviews the past 25 years of research on eye movements (1986-2011). Emphasis is on three oculomotor behaviors: gaze control, smooth pursuit and saccades, and on their interactions with vision. Focus over the past 25 years has remained on the fundamental and classical questions: What are the mechanisms that keep gaze stable with either stationary or moving targets? How does the motion of the image on the retina affect vision? Where do we look, and why, when performing a complex task? How can the world appear clear and stable despite continual movements of the eyes? The past 25 years of investigation of these questions has seen progress and transformations at all levels due to new approaches (behavioral, neural and theoretical) aimed at studying how eye movements cope with real-world visual and cognitive demands. The work has led to a better understanding of how prediction, learning and attention work with sensory signals to contribute to the effective operation of eye movements in visually rich environments.
Affiliation(s)
- Eileen Kowler
- Department of Psychology, Rutgers University, Piscataway, NJ 08854, United States.
95
Abstract
Human observers explore scenes by shifting their gaze from object to object. Before each eye movement, however, a peripheral glimpse of the next object to be fixated has already been caught. Here we investigate whether the perceptual organization extracted from such a preview could guide the perceptual analysis of the same object during the next fixation. We observed that participants were indeed significantly faster at grouping together spatially separate elements into an object contour, when the same contour elements had also been grouped together in the peripheral preview display. Importantly, this facilitation occurred despite a change in the grouping cue defining the object contour (similarity versus collinearity). We conclude that an intermediate-level description of object shape persists in the visual system across gaze shifts, providing it with a robust basis for balancing efficiency and continuity during scene exploration.
96
Hamker FH, Zirnsak M, Ziesche A, Lappe M. Computational models of spatial updating in peri-saccadic perception. Philos Trans R Soc Lond B Biol Sci 2011; 366:554-71. [PMID: 21242143 DOI: 10.1098/rstb.2010.0229] [Citation(s) in RCA: 45] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Perceptual phenomena that occur around the time of a saccade, such as peri-saccadic mislocalization or saccadic suppression of displacement, have often been linked to mechanisms of spatial stability. These phenomena are usually regarded as errors in processes of trans-saccadic spatial transformations and they provide important tools to study these processes. However, a true understanding of the underlying brain processes that participate in the preparation for a saccade and in the transfer of information across it requires a closer, more quantitative approach that links different perceptual phenomena with each other and with the functional requirements of ensuring spatial stability. We review a number of computational models of peri-saccadic spatial perception that provide steps in that direction. Although most models are concerned with only specific phenomena, some generalization and interconnection between them can be obtained from a comparison. Our analysis shows how different perceptual effects can coherently be brought together and linked back to neuronal mechanisms on the way to explaining vision across saccades.
Affiliation(s)
- Fred H Hamker
- Department of Psychology, Westfälische Wilhelms University Münster, Münster, Germany.
97
Wurtz RH, Joiner WM, Berman RA. Neuronal mechanisms for visual stability: progress and problems. Philos Trans R Soc Lond B Biol Sci 2011; 366:492-503. [PMID: 21242138 DOI: 10.1098/rstb.2010.0186] [Citation(s) in RCA: 97] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
How our vision remains stable in spite of the interruptions produced by saccadic eye movements has been a repeatedly revisited perceptual puzzle. The major hypothesis is that a corollary discharge (CD) or efference copy signal provides information that the eye has moved, and this information is used to compensate for the motion. There has been progress in the search for neuronal correlates of such a CD in the monkey brain, the best animal model of the human visual system. In this article, we briefly summarize the evidence for a CD pathway to frontal cortex, and then consider four questions on the relation of neuronal mechanisms in the monkey brain to stable visual perception. First, how can we determine whether the neuronal activity is related to stable visual perception? Second, is the activity a possible neuronal correlate of the proposed transsaccadic memory hypothesis of visual stability? Third, are the neuronal mechanisms modified by visual attention and does our perceived visual stability actually result from neuronal mechanisms related primarily to the central visual field? Fourth, does the pathway from superior colliculus through the pulvinar nucleus to visual cortex contribute to visual stability through suppression of the visual blur produced by saccades?
Affiliation(s)
- Robert H Wurtz
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD 20892, USA.
98
Abstract
Our vision remains stable even though the movements of our eyes, head and bodies create a motion pattern on the retina. One of the most important, yet basic, feats of the visual system is to correctly determine whether this retinal motion is owing to real movement in the world or rather our own self-movement. This problem has occupied many great thinkers, such as Descartes and Helmholtz, at least since the time of Alhazen. This theme issue brings together leading researchers from animal neurophysiology, clinical neurology, psychophysics and cognitive neuroscience to summarize the state of the art in the study of visual stability. Recently, there has been significant progress in understanding the limits of visual stability in humans and in identifying many of the brain circuits involved in maintaining a stable percept of the world. Clinical studies and new experimental methods, such as transcranial magnetic stimulation, now make it possible to test the causal role of different brain regions in creating visual stability and also allow us to measure the consequences when the mechanisms of visual stability break down.
Affiliation(s)
- David Melcher
- Faculty of Cognitive Science, University of Trento, Italy.
99
Abstract
Visual perception is based on both incoming sensory signals and information about ongoing actions. Recordings from single neurons have shown that corollary discharge signals can influence visual representations in parietal, frontal and extrastriate visual cortex, as well as the superior colliculus (SC). In each of these areas, visual representations are remapped in conjunction with eye movements. Remapping provides a mechanism for creating a stable, eye-centred map of salient locations. Temporal and spatial aspects of remapping are highly variable from cell to cell and area to area. Most neurons in the lateral intraparietal area remap stimulus traces, as do many neurons in closely allied areas such as the frontal eye fields, the SC and extrastriate area V3A. Remapping is not purely a cortical phenomenon. Stimulus traces are remapped from one hemifield to the other even when direct cortico-cortical connections are removed. The neural circuitry that produces remapping is distinguished by significant plasticity, suggesting that updating of salient stimuli is fundamental for spatial stability and visuospatial behaviour. These findings provide new evidence that a unified and stable representation of visual space is constructed by redundant circuitry, comprising cortical and subcortical pathways, with a remarkable capacity for reorganization.
Affiliation(s)
- Nathan J Hall
- Department of Neuroscience and Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA 15260, USA
100
Attention doesn't slide: spatiotopic updating after eye movements instantiates a new, discrete attentional locus. Atten Percept Psychophys 2011; 73:7-14. [PMID: 21258903 DOI: 10.3758/s13414-010-0016-3] [Citation(s) in RCA: 32] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
During natural vision, eye movements can drastically alter the retinotopic (eye-centered) coordinates of locations and objects, yet the spatiotopic (world-centered) percept remains stable. Maintaining visuospatial attention in spatiotopic coordinates requires updating of attentional representations following each eye movement. However, this updating is not instantaneous; attentional facilitation temporarily lingers at the previous retinotopic location after a saccade, a phenomenon known as the retinotopic attentional trace. At various times after a saccade, we probed attention at an intermediate location between the retinotopic and spatiotopic locations to determine whether a single locus of attentional facilitation slides progressively from the previous retinotopic location to the appropriate spatiotopic location, or whether retinotopic facilitation decays while a new, independent spatiotopic locus concurrently becomes active. Facilitation at the intermediate location was not significant at any time, suggesting that top-down attention can result in enhancement of discrete retinotopic and spatiotopic locations without passing through intermediate locations.