1. Denagamage S, Morton MP, Hudson NV, Nandy AS. Widespread receptive field remapping in early primate visual cortex. Cell Rep 2024; 43:114557. PMID: 39058592. DOI: 10.1016/j.celrep.2024.114557.
Abstract
Predictive remapping of receptive fields (RFs) is thought to be one of the critical mechanisms for enforcing perceptual stability during eye movements. While RF remapping has been observed in several cortical areas, its role in early visual cortex and its consequences for the tuning properties of neurons remain poorly understood. Here, we track remapping RFs in hundreds of neurons from visual area V2 while subjects perform a cued saccade task. We find that remapping is widespread in area V2 across neurons from all recorded cortical layers and cell types. Furthermore, our results suggest that remapping RFs not only maintain but also transiently enhance their feature selectivity due to untuned suppression. Taken together, these findings shed light on the dynamics and prevalence of remapping in the early visual cortex, forcing us to revise current models of perceptual stability during saccadic eye movements.
Affiliation(s)
- Sachira Denagamage
- Department of Neuroscience, Yale University, New Haven, CT 06510, USA; Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06510, USA.
- Mitchell P Morton
- Department of Neuroscience, Yale University, New Haven, CT 06510, USA; Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06510, USA.
- Nyomi V Hudson
- Department of Neuroscience, Yale University, New Haven, CT 06510, USA.
- Anirvan S Nandy
- Department of Neuroscience, Yale University, New Haven, CT 06510, USA; Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06510, USA; Kavli Institute for Neuroscience, Yale University, New Haven, CT 06510, USA; Wu Tsai Institute, Yale University, New Haven, CT 06510, USA; Department of Psychology, Yale University, New Haven, CT 06510, USA.
2. Moran C, Johnson PA, Landau AN, Hogendoorn H. Decoding remapped spatial information in the peri-saccadic period. J Neurosci 2024; 44:e2134232024. PMID: 38871460. PMCID: PMC11270511. DOI: 10.1523/jneurosci.2134-23.2024.
Abstract
It has been suggested that, prior to a saccade, visual neurons predictively respond to stimuli that will fall in their receptive fields after completion of the saccade. This saccadic remapping process is thought to compensate for the shift of the visual world across the retina caused by eye movements. To map the timing of this predictive process in the brain, we recorded neural activity using electroencephalography during a saccade task. Human participants (male and female) made saccades between two fixation points while covertly attending to oriented gratings briefly presented at various locations on the screen. Data recorded during trials in which participants maintained fixation were used to train classifiers on stimuli in different positions. Subsequently, data collected during saccade trials were used to test for the presence of remapped stimulus information at the post-saccadic retinotopic location in the peri-saccadic period, providing unique insight into when remapped information becomes available. We found that the stimulus could be decoded at the remapped location ∼180 ms post-stimulus onset, but only when the stimulus was presented 100-200 ms before saccade onset. Within this range, we found that the timing of remapping was dictated by stimulus onset rather than saccade onset. We conclude that presenting the stimulus immediately before the saccade allows for optimal integration of the corollary discharge signal with the incoming peripheral visual information, resulting in a remapping of activation to the relevant post-saccadic retinotopic neurons.
Affiliation(s)
- Caoimhe Moran
- Melbourne School of Psychological Sciences, The University of Melbourne, Parkville, Melbourne, Victoria 3052, Australia
- Department of Psychology, Hebrew University of Jerusalem, Mount Scopus, Jerusalem 9190501, Israel
- Philippa A Johnson
- Melbourne School of Psychological Sciences, The University of Melbourne, Parkville, Melbourne, Victoria 3052, Australia
- Cognitive Psychology Unit, Institute of Psychology & Leiden Institute for Brain and Cognition, Leiden University, Leiden 2333 AK, The Netherlands
- Ayelet N Landau
- Department of Psychology, Hebrew University of Jerusalem, Mount Scopus, Jerusalem 9190501, Israel
- Department of Cognitive and Brain Sciences, Hebrew University of Jerusalem, Mount Scopus, Jerusalem 9190501, Israel
- Hinze Hogendoorn
- Melbourne School of Psychological Sciences, The University of Melbourne, Parkville, Melbourne, Victoria 3052, Australia
- School of Psychology and Counselling, Queensland University of Technology, Kelvin Grove, Queensland 4059, Australia
3. Melcher D, Alaberkyan A, Anastasaki C, Liu X, Deodato M, Marsicano G, Almeida D. An early effect of the parafoveal preview on post-saccadic processing of English words. Atten Percept Psychophys 2024. PMID: 38956003. DOI: 10.3758/s13414-024-02916-4.
Abstract
A key aspect of efficient visual processing is to use current and previous information to make predictions about what we will see next. In natural viewing, and when looking at words, there is typically an indication of forthcoming visual information from extrafoveal areas of the visual field before we make an eye movement to an object or word of interest. This "preview effect" has been studied for many years in the word reading literature and, more recently, in object perception. Here, we integrated methods from word recognition and object perception to investigate the timing of preview effects on neural measures of word recognition. Through the combined use of EEG and eye tracking, a group of multilingual participants took part in a gaze-contingent, single-shot saccade experiment in which words appeared in their parafoveal visual field. In valid preview trials, the same word was presented during the preview and after the saccade, while in the invalid condition, the saccade target was a number string that turned into a word during the saccade. As hypothesized, the valid preview greatly reduced the fixation-related evoked response. Interestingly, multivariate decoding analyses revealed much earlier preview effects than previously reported for words, and individual decoding performance correlated with participant reading scores. These results demonstrate that a parafoveal preview can influence relatively early aspects of post-saccadic word processing and help to resolve some discrepancies between the word and object literatures.
Affiliation(s)
- David Melcher
- Psychology Program, Division of Science, New York University Abu Dhabi, PO Box 129188, Abu Dhabi, United Arab Emirates.
- Center for Brain and Health, NYUAD Research Institute, New York University Abu Dhabi, PO Box 129188, Abu Dhabi, United Arab Emirates.
- Ani Alaberkyan
- Psychology Program, Division of Science, New York University Abu Dhabi, PO Box 129188, Abu Dhabi, United Arab Emirates
- Chrysi Anastasaki
- Psychology Program, Division of Science, New York University Abu Dhabi, PO Box 129188, Abu Dhabi, United Arab Emirates
- Xiaoyi Liu
- Psychology Program, Division of Science, New York University Abu Dhabi, PO Box 129188, Abu Dhabi, United Arab Emirates
- Department of Psychology, Princeton University, Washington Rd, Princeton, NJ, 08540, USA
- Michele Deodato
- Psychology Program, Division of Science, New York University Abu Dhabi, PO Box 129188, Abu Dhabi, United Arab Emirates
- Center for Brain and Health, NYUAD Research Institute, New York University Abu Dhabi, PO Box 129188, Abu Dhabi, United Arab Emirates
- Gianluca Marsicano
- Department of Psychology, University of Bologna, Viale Berti Pichat 5, 40121, Bologna, Italy
- Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Via Rasi e Spinelli 176, 47023, Cesena, Italy
- Diogo Almeida
- Psychology Program, Division of Science, New York University Abu Dhabi, PO Box 129188, Abu Dhabi, United Arab Emirates
4. Liu X, Melcher D, Carrasco M, Hanning NM. Pre-saccadic preview shapes post-saccadic processing more where perception is poor. bioRxiv 2024:2023.05.18.541028. Preprint. PMID: 37292871. PMCID: PMC10245755. DOI: 10.1101/2023.05.18.541028.
Abstract
The pre-saccadic preview of a peripheral target enhances the efficiency of its post-saccadic processing, termed the extrafoveal preview effect. Peripheral visual performance, and thus the quality of the preview, varies around the visual field, even at iso-eccentric locations: it is better along the horizontal than the vertical meridian and along the lower than the upper vertical meridian. To investigate whether these polar angle asymmetries influence the preview effect, we asked human participants to preview four tilted gratings at the cardinal locations until a central cue indicated which one to saccade to. During the saccade, the target orientation either remained the same or changed slightly (valid/invalid preview). After saccade landing, participants discriminated the orientation of the (briefly presented) second grating. Stimulus contrast was titrated with adaptive staircases to assess visual performance. As expected, valid previews increased participants' post-saccadic contrast sensitivity. This preview benefit, however, was inversely related to polar angle perceptual asymmetries: it was largest at the upper vertical meridian and smallest at the horizontal meridian. This finding reveals that the visual system compensates for peripheral asymmetries when integrating information across saccades by selectively assigning higher weights to less-well-perceived preview information. Our study supports a recent line of evidence showing that perceptual dynamics around saccades vary with eye movement direction.
5. Denagamage S, Morton MP, Hudson NV, Nandy AS. Widespread receptive field remapping in early visual cortex. bioRxiv 2024:2023.05.01.539001. Preprint. PMID: 37205367. PMCID: PMC10187178. DOI: 10.1101/2023.05.01.539001.
Abstract
Our eyes are in constant motion, yet we perceive the visual world as stable. Predictive remapping of receptive fields is thought to be one of the critical mechanisms for enforcing perceptual stability during eye movements. While receptive field remapping has been identified in several cortical areas, the spatiotemporal dynamics of remapping, and its consequences on the tuning properties of neurons, remain poorly understood. Here, we tracked remapping receptive fields in hundreds of neurons from visual Area V2 while subjects performed a cued saccade task. We found that remapping was far more widespread in Area V2 than previously reported and can be found in neurons from all recorded cortical layers and cell types. Surprisingly, neurons undergoing remapping exhibit sensitivity to two punctate locations in visual space. Furthermore, we found that feature selectivity is not only maintained during remapping but transiently increases due to untuned suppression. Taken together, these results shed light on the spatiotemporal dynamics of remapping and its ubiquitous prevalence in the early visual cortex, and force us to revise current models of perceptual stability.
Affiliation(s)
- Sachira Denagamage
- Department of Neuroscience, Yale University, New Haven, CT 06510
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06510
- Lead contact
- Mitchell P. Morton
- Department of Neuroscience, Yale University, New Haven, CT 06510
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06510
- Nyomi V. Hudson
- Department of Neuroscience, Yale University, New Haven, CT 06510
- Anirvan S. Nandy
- Department of Neuroscience, Yale University, New Haven, CT 06510
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06510
- Kavli Institute for Neuroscience, Yale University, New Haven, CT 06510
- Wu Tsai Institute, Yale University, New Haven, CT 06510
- Department of Psychology, Yale University, New Haven, CT 06510
6. Huber-Huber C, Melcher D. Saccade execution increases the preview effect with faces: An EEG and eye-tracking coregistration study. Atten Percept Psychophys 2023. PMID: 37917292. DOI: 10.3758/s13414-023-02802-5.
Abstract
Under naturalistic viewing conditions, humans conduct about three to four saccadic eye movements per second. These dynamics imply that in real life, humans rarely see something completely new; there is usually a preview of the upcoming foveal input from extrafoveal regions of the visual field. In line with results from the field of reading research, we have shown with EEG and eye-tracking coregistration that an extrafoveal preview also affects postsaccadic visual object processing and facilitates discrimination. Here, we ask whether this preview effect in the fixation-locked N170, and in manual responses to the postsaccadic target face (tilt discrimination), requires saccade execution. Participants performed a gaze-contingent experiment in which extrafoveal face images could change their orientation during a saccade directed to them. In a control block, participants maintained stable gaze throughout the experiment and the extrafoveal face reappeared foveally after a simulated saccade latency. Compared with this no-saccade condition, the neural and the behavioral preview effects were much larger in the saccade condition. We also found shorter first fixation durations after an invalid preview, which is in contrast to reading studies. We interpret the increased preview effect under saccade execution as the result of the additional sensorimotor processes that come with gaze behavior compared with visual perception under stable fixation. In addition, our findings call into question whether EEG studies with fixed gaze capture key properties and dynamics of active, natural vision.
Affiliation(s)
- Christoph Huber-Huber
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068, Rovereto, Italy.
- David Melcher
- Center for Brain & Health, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Psychology Program, Division of Science, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
7. Lu Z, Golomb JD. Dynamic saccade context triggers more stable object-location binding. bioRxiv 2023:2023.04.26.538469. Preprint. PMID: 37162863. PMCID: PMC10168424. DOI: 10.1101/2023.04.26.538469.
Abstract
Our visual systems rapidly perceive and integrate information about object identities and locations. There is long-standing debate about how we achieve world-centered (spatiotopic) object representations across eye movements, with many studies reporting persistent retinotopic (eye-centered) effects even for higher-level object-location binding. But these studies are generally conducted in fairly static experimental contexts. Might spatiotopic object-location binding only emerge in more dynamic saccade contexts? In the present study, we investigated this using the Spatial Congruency Bias paradigm in healthy adults. In the static (single saccade) context, we found purely retinotopic binding, as before. However, robust spatiotopic binding emerged in the dynamic (multiple frequent saccades) context. We further isolated specific factors that modulate retinotopic and spatiotopic binding. Our results provide strong evidence that dynamic saccade context can trigger more stable object-location binding in ecologically-relevant spatiotopic coordinates, perhaps via a more flexible brain state which accommodates improved visual stability in the dynamic world.
8. The role of foveal cortex in discriminating peripheral stimuli: the sketchpad hypothesis. NeuroSci 2022. DOI: 10.3390/neurosci4010002.
Abstract
Foveal (central) and peripheral vision are strongly interconnected to provide an integrated experience of the world around us. Recently, it has been suggested that there is a feedback mechanism that links foveal and peripheral vision. This peripheral-to-foveal feedback differs from other feedback mechanisms in that during visual processing a novel representation of a stimulus is formed in a different cortical region than that of the feedforward representation. The functional role of foveal feedback is not yet completely understood, but some evidence from neuroimaging studies suggests a link with peripheral shape processing. Behavioural and transcranial magnetic stimulation studies show impairment in peripheral shape discrimination when the foveal retinotopic cortex is disrupted post stimulus presentation. This review aims to link these findings to the visual sketchpad hypothesis. According to this hypothesis, foveal retinotopic cortex stores task-relevant information to aid identification of peripherally presented objects. We discuss how the characteristics of foveal feedback support this hypothesis and rule out other possible explanations. We also discuss the possibility that the foveal feedback may be independent of the sensory modality of the stimulation.
9. Jovanovic L, McGraw PV, Roach NW, Johnston A. The spatial properties of adaptation-induced distance compression. J Vis 2022; 22:7. PMID: 36223110. PMCID: PMC9583746. DOI: 10.1167/jov.22.11.7.
Abstract
Exposure to a dynamic texture reduces the perceived separation between objects, altering the mapping between physical relations in the environment and their neural representations. Here we investigated the spatial tuning and spatial frame of reference of this aftereffect to understand the stage(s) of processing where adaptation-induced changes occur. In Experiment 1, we measured apparent separation at different positions relative to the adapted area, revealing a strong but tightly tuned compression effect. We next tested the spatial frame of reference of the effect, either by introducing a gaze shift between adaptation and test phase (Experiment 2) or by decoupling the spatial selectivity of adaptation in retinotopic and world-centered coordinates (Experiment 3). Results across the two experiments indicated that both retinotopic and world-centered adaptation effects can occur independently. Spatial attention to the location of the adaptor alone could not account for the world-centered transfer we observed, and retinotopic adaptation did not transfer to world-centered coordinates after a saccade (Experiment 4). Finally, we found that aftereffects in different reference frames have a similar, narrow spatial tuning profile (Experiment 5). Together, our results suggest that the neural representation of local separation resides early in the visual cortex, but it can also be modulated by activity in higher visual areas.
Affiliation(s)
- Paul V McGraw
- School of Psychology, University of Nottingham, Nottingham, UK
- Neil W Roach
- School of Psychology, University of Nottingham, Nottingham, UK
- Alan Johnston
- School of Psychology, University of Nottingham, Nottingham, UK
10. Velji-Ibrahim J, Crawford JD, Cattaneo L, Monaco S. Action planning modulates the representation of object features in human fronto-parietal and occipital cortex. Eur J Neurosci 2022; 56:4803-4818. PMID: 35841138. PMCID: PMC9545676. DOI: 10.1111/ejn.15776.
Abstract
The visual cortex has been studied extensively to investigate its role in object recognition, but less so to determine how action planning influences the representation of object features. We used functional MRI and pattern classification methods to determine whether, during action planning, object features (orientation and location) could be decoded in an action-dependent way. Sixteen human participants used their right dominant hand to perform movements (Align or Open reach) towards one of two real, oriented 3D objects that were simultaneously presented and placed on either side of a fixation cross. While both movements required aiming towards the target location, Align but not Open reach movements required participants to precisely adjust hand orientation. We therefore hypothesized that if the representation of object features is modulated by the upcoming action, pre-movement activity patterns would allow more accurate dissociation between object features in the Align than the Open reach task. We found such dissociation in the anterior and posterior parietal cortex, as well as in the dorsal premotor cortex, suggesting that visuomotor processing is modulated by the upcoming task. The early visual cortex showed significant decoding accuracy for the dissociation between object features in the Align but not the Open reach task. However, there was no significant difference between the decoding accuracies in the two tasks. These results demonstrate that movement-specific preparatory signals modulate object representation in the frontal and parietal cortex, and to a lesser extent in the early visual cortex, likely through feedback functional connections.
Affiliation(s)
- Jena Velji-Ibrahim
- CIMeC - Center for Mind/Brain Sciences, University of Trento, Trento, Italy; Center for Vision Research, York University, Toronto, Ontario, Canada; School of Kinesiology and Health Science, Toronto, Ontario, Canada
- J Douglas Crawford
- Center for Vision Research, York University, Toronto, Ontario, Canada; School of Kinesiology and Health Science, Toronto, Ontario, Canada; Departments of Biology and Psychology, York University, Toronto, Ontario, Canada
- Luigi Cattaneo
- CIMeC - Center for Mind/Brain Sciences, University of Trento, Trento, Italy
- Simona Monaco
- CIMeC - Center for Mind/Brain Sciences, University of Trento, Trento, Italy
11. Steinberg NJ, Roth ZN, Merriam EP. Spatiotopic and retinotopic memory in the context of natural images. J Vis 2022; 22:11. PMID: 35323869. PMCID: PMC8963666. DOI: 10.1167/jov.22.4.11.
Abstract
Neural responses throughout the visual cortex encode stimulus location in a retinotopic (i.e., eye-centered) reference frame, and memory for stimulus position is most precise in retinal coordinates. Yet visual perception is spatiotopic: objects are perceived as stationary, even though eye movements cause frequent displacement of their location on the retina. Previous studies found that, after a single saccade, memory of retinotopic locations is more accurate than memory of spatiotopic locations. However, it is not known whether various aspects of natural viewing affect the retinotopic reference frame advantage. We found that the retinotopic advantage may in part depend on a retinal afterimage, which can be effectively nullified through backwards masking. Moreover, in the presence of natural scenes, spatiotopic memory is more accurate than retinotopic memory, but only when subjects are provided sufficient time to process the scene before the eye movement. Our results demonstrate that retinotopic memory is not always more accurate than spatiotopic memory and that the fidelity of memory traces in both reference frames is sensitive to the presence of contextual cues.
Affiliation(s)
- Noah J Steinberg
- Laboratory of Brain and Cognition, National Institute of Mental Health, NIH, Bethesda, MD, USA
- Zvi N Roth
- Laboratory of Brain and Cognition, National Institute of Mental Health, NIH, Bethesda, MD, USA
- Elisha P Merriam
- Laboratory of Brain and Cognition, National Institute of Mental Health, NIH, Bethesda, MD, USA
12. Hübner C, Schütz AC. Rapid visual adaptation persists across saccades. iScience 2021; 24:102986. PMID: 34485868. PMCID: PMC8403744. DOI: 10.1016/j.isci.2021.102986.
Abstract
Neurons in the visual cortex quickly adapt to constant input, which should lead to perceptual fading within a few tens of milliseconds. However, perceptual fading is rarely observed in everyday perception, possibly because eye movements refresh the retinal input. Recently, it has been suggested that the amplitudes of large saccadic eye movements are scaled to maximally decorrelate presaccadic and postsaccadic inputs and thus to annul perceptual fading. However, this argument builds on the assumption that adaptation within naturally brief fixation durations is strong enough to survive any visually disruptive saccade and to affect perception. We tested this assumption by measuring the effect of luminance adaptation on postsaccadic contrast perception. We found that postsaccadic contrast perception was affected by presaccadic luminance adaptation during brief periods of fixation. This adaptation effect emerges within 100 milliseconds and persists over seconds. These results indicate that adaptation during natural fixation periods can affect perception even after visually disruptive saccades.
Affiliation(s)
- Carolin Hübner
- Allgemeine und Biologische Psychologie, Philipps-Universität Marburg, 35037 Marburg, Germany; Institut für Psychologie, Humboldt-Universität zu Berlin, 12489 Berlin, Germany
- Alexander C Schütz
- Allgemeine und Biologische Psychologie, Philipps-Universität Marburg, 35037 Marburg, Germany; Center for Mind, Brain and Behavior, Philipps-Universität Marburg, 35037 Marburg, Germany
13. Golomb JD, Mazer JA. Visual remapping. Annu Rev Vis Sci 2021; 7.
Abstract
Our visual system is fundamentally retinotopic. When viewing a stable scene, each eye movement shifts object features and locations on the retina. Thus, sensory representations must be updated, or remapped, across saccades to align presaccadic and postsaccadic inputs. The earliest remapping studies focused on anticipatory, presaccadic shifts of neuronal spatial receptive fields. Over time, it has become clear that there are multiple forms of remapping and that different forms of remapping may be mediated by different neural mechanisms. This review attempts to organize the various forms of remapping into a functional taxonomy based on experimental data and ongoing debates about forward versus convergent remapping, presaccadic versus postsaccadic remapping, and spatial versus attentional remapping. We integrate findings from primate neurophysiological, human neuroimaging and behavioral, and computational modeling studies. We conclude by discussing persistent open questions related to remapping, with specific attention to binding of spatial and featural information during remapping and speculations about remapping's functional significance.
Affiliation(s)
- Julie D Golomb
- Department of Psychology, The Ohio State University, Columbus, Ohio 43210, USA
- James A Mazer
- Department of Microbiology and Cell Biology, Montana State University, Bozeman, Montana 59717, USA
14. Huber-Huber C, Buonocore A, Melcher D. The extrafoveal preview paradigm as a measure of predictive, active sampling in visual perception. J Vis 2021; 21:12. PMID: 34283203. PMCID: PMC8300052. DOI: 10.1167/jov.21.7.12.
Abstract
A key feature of visual processing in humans is the use of saccadic eye movements to look around the environment. Saccades are typically used to bring relevant information, which is glimpsed with extrafoveal vision, into the high-resolution fovea for further processing. With the exception of some unusual circumstances, such as the first fixation when walking into a room, our saccades are mainly guided by this extrafoveal preview. In contrast, the majority of experimental studies in vision science have investigated "passive" behavioral and neural responses to suddenly appearing and often temporally or spatially unpredictable stimuli. As reviewed here, a growing number of studies have investigated visual processing of objects under more natural viewing conditions in which observers move their eyes to a stationary stimulus, visible previously in extrafoveal vision, during each trial. These studies demonstrate that the extrafoveal preview has a profound influence on visual processing of objects, both for behavior and neural activity. Starting from the preview effect in reading research, we follow subsequent developments in vision research more generally and finally argue that taking such evidence seriously leads to a reconceptualization of the nature of human visual perception that incorporates the strong influence of prediction and action on sensory processing. We review theoretical perspectives on visual perception under naturalistic viewing conditions, including theories of active vision, active sensing, and sampling. Although the extrafoveal preview paradigm has already provided useful information about the timing of, and potential mechanisms for, the close interaction of the oculomotor and visual systems while reading and in natural scenes, the findings thus far also raise many new questions for future research.
Collapse
Affiliation(s)
- Christoph Huber-Huber
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, The Netherlands
- CIMeC, University of Trento, Italy
| | - Antimo Buonocore
- Werner Reichardt Centre for Integrative Neuroscience, Tübingen University, Tübingen, BW, Germany
- Hertie Institute for Clinical Brain Research, Tübingen University, Tübingen, BW, Germany
| | - David Melcher
- CIMeC, University of Trento, Italy
- Division of Science, New York University Abu Dhabi, UAE
| |
Collapse
|
15
|
Occipital cortex is modulated by transsaccadic changes in spatial frequency: an fMRI study. Sci Rep 2021; 11:8611. [PMID: 33883578 PMCID: PMC8060420 DOI: 10.1038/s41598-021-87506-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2020] [Accepted: 03/24/2021] [Indexed: 11/15/2022] Open
Abstract
Previous neuroimaging studies have shown that inferior parietal and ventral occipital cortex are involved in the transsaccadic processing of visual object orientation. Here, we investigated whether the same areas are also involved in transsaccadic processing of a different feature, namely, spatial frequency. We employed a functional magnetic resonance imaging paradigm where participants briefly viewed a grating stimulus with a specific spatial frequency that later reappeared with the same or different frequency, after a saccade or continuous fixation. First, using a whole-brain Saccade > Fixation contrast, we localized two frontal (left precentral sulcus and right medial superior frontal gyrus), four parietal (bilateral superior parietal lobule and precuneus), and four occipital (bilateral cuneus and lingual gyri) regions. Whereas the frontoparietal sites showed task specificity, the occipital sites were also modulated in a saccade control task. Only occipital cortex showed transsaccadic feature modulations, with significant repetition enhancement in right cuneus. These observations (parietal task specificity, occipital enhancement, right lateralization) are consistent with previous transsaccadic studies. However, the specific regions differed (ventrolateral for orientation, dorsomedial for spatial frequency). Overall, this study supports a general role for occipital and parietal cortex in transsaccadic vision, with a specific role for cuneus in spatial frequency processing.
Collapse
|
16
|
Predictive remapping leaves a behaviorally measurable attentional trace on eye-centered brain maps. Psychon Bull Rev 2021; 28:1243-1251. [PMID: 33634356 DOI: 10.3758/s13423-021-01893-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/26/2021] [Indexed: 11/08/2022]
Abstract
How does the brain maintain spatial attention despite the retinal displacement of objects by saccades? A possible solution is to use the vector of an upcoming saccade to compensate for the shift of objects on eye-centered (retinotopic) brain maps. In support of this hypothesis, previous studies have revealed attentional effects at the future retinal locus of an attended object, just before the onset of saccades. A critical yet unresolved theoretical issue is whether predictively remapped attentional effects persist long enough on eye-centered brain maps that no external input (goal, expectation, reward, memory, etc.) is needed to maintain spatial attention immediately following saccades. The present study examined this issue with inhibition of return (IOR), an attentional effect that reveals itself in both world-centered and eye-centered coordinates and predictively remaps before saccades. In the first task, a saccade was introduced to a cueing task (the "nonreturn-saccade" task) to show that IOR is coded in world-centered coordinates following saccades. In a second cueing task, two consecutive saccades were executed to trigger remapping and to dissociate the retinal locus relevant to remapping from the cued retinal locus (the "return-saccade" task). IOR was observed at the remapped retinal locus 430 ms after the (first) saccade that triggered remapping. A third cueing task (the "no-remapping" task) further revealed that the lingering IOR effect left by remapping was not confounded by attention spillover. These results together show that predictive remapping leaves a robust attentional trace on eye-centered brain maps. This retinotopic trace is sufficient to sustain spatial attention for a few hundred milliseconds following saccades.
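The compensation mechanism described in this abstract, shifting an attended locus on an eye-centered map by the inverse of the upcoming saccade vector, reduces to simple vector arithmetic: the predicted retinal locus is the attended locus minus the saccade vector. A minimal sketch (the function name and tuple representation are illustrative, not from the study):

```python
# Toy illustration of predictive remapping on an eye-centered (retinotopic) map.
# All positions and vectors are (x, y) tuples in degrees of visual angle.

def remap_retinal_locus(attended_retinal, saccade_vector):
    """Predict where an attended location will land on the retina after a
    saccade: the eye moves by saccade_vector, so the retinal image shifts
    by its inverse."""
    ax, ay = attended_retinal
    sx, sy = saccade_vector
    return (ax - sx, ay - sy)

# A location cued at (5, 0) deg with an upcoming 10-deg rightward saccade:
# the attentional trace should appear at (-5, 0) deg just before saccade onset.
print(remap_retinal_locus((5.0, 0.0), (10.0, 0.0)))  # (-5.0, 0.0)
```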
Collapse
|
17
|
The behavioural preview effect with faces is susceptible to statistical regularities: Evidence for predictive processing across the saccade. Sci Rep 2021; 11:942. [PMID: 33441804 PMCID: PMC7806959 DOI: 10.1038/s41598-020-79957-w] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2020] [Accepted: 12/11/2020] [Indexed: 01/29/2023] Open
Abstract
The world around us appears stable and continuous despite saccadic eye movements. This apparent visual stability is achieved by trans-saccadic perception leading at the behavioural level to preview effects: performance in processing a foveal stimulus is better if the stimulus remained unchanged (valid) compared to when it changed (invalid) during the saccade that brought it into focus. Trans-saccadic perception is known to predictively adapt to the statistics of the environment. Here, we asked whether the behavioural preview effect shows the same characteristics, employing a between-participants training design. Participants made saccades to faces which could change their orientation (upright/inverted) during the saccade. In addition, the post-saccadic face was slightly tilted and participants reported this tilt upon fixation. In a training phase, one group of participants conducted only invalid trials whereas another group conducted only valid trials. In a subsequent test phase with 50% valid and 50% invalid trials, we measured the preview effect. Invalid training reduced the preview effect. With a mixed-model analysis, we could show how this training effect gradually declines in the course of the test phase. These results show that the behavioural preview effect adapts to the statistics of the environment suggesting that it results from predictive processes.
Collapse
|
18
|
Attentional bias towards negative stimuli in healthy individuals and the effects of trait anxiety. Sci Rep 2020; 10:11826. [PMID: 32678129 PMCID: PMC7367300 DOI: 10.1038/s41598-020-68490-5] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2019] [Accepted: 06/24/2020] [Indexed: 01/20/2023] Open
Abstract
This study aimed to investigate the time course of attentional bias for negative information in healthy individuals and to assess the associated influence of trait anxiety. Thirty-eight healthy volunteers performed an emotional dot-probe task with pairs of negative and neutral scenes, presented for either 1 or 2 s and followed by a target placed at the previous location of either the negative or the neutral stimulus. Analyses included eye movements during the presentation of the scenes and response times associated with target localization. In a second step, analyses focused on the influence of trait anxiety. While there was no significant difference at the behavioral level, the eye-tracking data revealed that negative information, once fixated, held attention longer than neutral stimuli. This initial maintenance bias towards negative pictures then increased with increasing trait anxiety. However, at later processing stages, only individuals with the highest trait anxiety appeared to fixate longer on negative pictures than on neutral pictures, whereas individuals with low trait anxiety showed the opposite pattern. This study provides novel evidence that healthy individuals display an attentional maintenance bias towards negative stimuli, which is associated with trait anxiety.
Collapse
|
19
|
Drissi-Daoudi L, Ögmen H, Herzog MH, Cicchini GM. Object identity determines trans-saccadic integration. J Vis 2020; 20:33. [PMID: 32729906 PMCID: PMC7424110 DOI: 10.1167/jov.20.7.33] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Humans make two to four rapid eye movements (saccades) per second, which, surprisingly, does not lead to abrupt changes in vision. To the contrary, we perceive a stable world. Hence, an important question is how information is integrated across saccades. To investigate this question, we used the sequential metacontrast paradigm (SQM), where two expanding streams of lines are presented. When one line is spatially offset, the other lines are perceived as being offset, too. When more lines are offset, all offsets integrate mandatorily; that is, observers cannot report the individual offsets but perceive one integrated offset. Here, we asked observers to make a saccade during the SQM. Even though the saccades caused a highly disrupted motion trajectory on the retina, offsets presented before and after the saccade integrated mandatorily. When observers made no saccade and the streams were displaced on the screen so that a similarly disrupted retinal image occurred as in the previous condition, no integration occurred. We suggest that trans-saccadic integration and perception are determined by object identity in spatiotopic coordinates and not by the retinal image.
Collapse
|
20
|
Abstract
It is known that attention shifts prior to a saccade to start processing the saccade target before it lands in the foveola, the high-resolution region of the retina. Yet, once the target is foveated, microsaccades, tiny saccades maintaining the fixated object within the fovea, continue to occur. What is the link between these eye movements and attention? There is growing evidence that these eye movements are associated with covert shifts of attention in the visual periphery, when the attended stimuli are presented far from the center of gaze. Yet, microsaccades are primarily used to explore complex foveal stimuli and to optimize fine spatial vision in the foveola, suggesting that the influences of microsaccades on attention may predominantly impact vision at this scale. To address this question we tracked gaze position with high precision and briefly presented high-acuity stimuli at predefined foveal locations right before microsaccade execution. Our results show that visual discrimination changes prior to microsaccade onset. An enhancement occurs at the microsaccade target location. This modulation is highly selective and it is coupled with a drastic impairment at the opposite foveal location, just a few arcminutes away. This effect is strongest when stimuli are presented closer to the eye movement onset time. These findings reveal that the link between attention and microsaccades is deeper than previously thought, exerting its strongest effects within the foveola. As a result, during fixation, foveal vision is constantly being reshaped both in space and in time with the occurrence of microsaccades.
Collapse
|
21
|
Murdison TS, Blohm G, Bremmer F. Saccade-induced changes in ocular torsion reveal predictive orientation perception. J Vis 2020; 19:10. [PMID: 31533148 DOI: 10.1167/19.11.10] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Natural orienting of gaze often results in a retinal image that is rotated relative to space due to ocular torsion. However, we perceive neither this rotation nor a moving world despite visual rotational motion on the retina. This perceptual stability is often attributed to the phenomenon known as predictive remapping, but the current remapping literature ignores this torsional component. In addition, studies often simply measure remapping across either space or features (e.g., orientation) but in natural circumstances, both components are bound together for stable perception. One natural circumstance in which the perceptual system must account for the current and future eye orientation to correctly interpret the orientation of external stimuli occurs during movements to or from oblique eye orientations (i.e., eye orientations with both a horizontal and vertical angular component relative to the primary position). Here we took advantage of oblique eye orientation-induced ocular torsion to examine perisaccadic orientation perception. First, we found that orientation perception was largely predicted by the rotated retinal image. Second, we observed a presaccadic remapping of orientation perception consistent with maintaining a stable (but spatially inaccurate) retinocentric perception throughout the saccade. These findings strongly suggest that our seamless perceptual stability relies on retinocentric signals that are predictively remapped in all three ocular dimensions with each saccade.
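The torsional component discussed above amounts to a rotation of retinal coordinates by the ocular torsion angle, which the perceptual system must account for (and, per this study, predictively remap). A minimal sketch of that rotation (illustrative only, not code from the study):

```python
import math

def rotate_retinal_point(point, torsion_deg):
    """Rotate a retinal-image point by the ocular torsion angle
    (degrees, counterclockwise); a standard 2D rotation."""
    t = math.radians(torsion_deg)
    x, y = point
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t))

# A vertical edge viewed with 5 deg of torsion is encoded as tilted by 5 deg;
# accurate orientation perception must undo this rotation, and presaccadic
# remapping must additionally anticipate the post-saccadic torsion.
```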
Collapse
Affiliation(s)
- T Scott Murdison
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN), Kingston, Ontario, Canada
| | - Gunnar Blohm
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN), Kingston, Ontario, Canada
| | - Frank Bremmer
- Department of Neurophysics, Philipps-Universität Marburg, Germany
| |
Collapse
|
22
|
Cimminella F, Sala SD, Coco MI. Extra-foveal Processing of Object Semantics Guides Early Overt Attention During Visual Search. Atten Percept Psychophys 2020; 82:655-670. [PMID: 31792893 PMCID: PMC7246246 DOI: 10.3758/s13414-019-01906-1] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/05/2022]
Abstract
Eye-tracking studies using arrays of objects have demonstrated that some high-level processing of object semantics can occur in extra-foveal vision, but its role on the allocation of early overt attention is still unclear. This eye-tracking visual search study contributes novel findings by examining the role of object-to-object semantic relatedness and visual saliency on search responses and eye-movement behaviour across arrays of increasing size (3, 5, 7). Our data show that a critical object was looked at earlier and for longer when it was semantically unrelated than related to the other objects in the display, both when it was the search target (target-present trials) and when it was a target's semantically related competitor (target-absent trials). Semantic relatedness effects manifested already during the very first fixation after array onset, were consistently found for increasing set sizes, and were independent of low-level visual saliency, which did not play any role. We conclude that object semantics can be extracted early in extra-foveal vision and capture overt attention from the very first fixation. These findings pose a challenge to models of visual attention which assume that overt attention is guided by the visual appearance of stimuli, rather than by their semantics.
Collapse
Affiliation(s)
- Francesco Cimminella
- Human Cognitive Neuroscience, Psychology, University of Edinburgh, Edinburgh, UK.
- Laboratory of Experimental Psychology, Suor Orsola Benincasa University, Naples, Italy.
| | - Sergio Della Sala
- Human Cognitive Neuroscience, Psychology, University of Edinburgh, Edinburgh, UK
| | - Moreno I Coco
- Human Cognitive Neuroscience, Psychology, University of Edinburgh, Edinburgh, UK.
- School of Psychology, The University of East London, London, UK.
- Faculdade de Psicologia, Universidade de Lisboa, Lisbon, Portugal.
| |
Collapse
|
23
|
Grossberg S. The resonant brain: How attentive conscious seeing regulates action sequences that interact with attentive cognitive learning, recognition, and prediction. Atten Percept Psychophys 2019; 81:2237-2264. [PMID: 31218601 PMCID: PMC6848053 DOI: 10.3758/s13414-019-01789-2] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
This article describes mechanistic links that exist in advanced brains between processes that regulate conscious attention, seeing, and knowing, and those that regulate looking and reaching. These mechanistic links arise from basic properties of brain design principles such as complementary computing, hierarchical resolution of uncertainty, and adaptive resonance. These principles require conscious states to mark perceptual and cognitive representations that are complete, context sensitive, and stable enough to control effective actions. Surface-shroud resonances support conscious seeing and action, whereas feature-category resonances support learning, recognition, and prediction of invariant object categories. Feedback interactions between cortical areas such as peristriate visual cortical areas V2, V3A, and V4, and the lateral intraparietal area (LIP) and inferior parietal sulcus (IPS) of the posterior parietal cortex (PPC) control sequences of saccadic eye movements that foveate salient features of attended objects and thereby drive invariant object category learning. Learned categories can, in turn, prime the objects and features that are attended and searched. These interactions coordinate processes of spatial and object attention, figure-ground separation, predictive remapping, invariant object category learning, and visual search. They create a foundation for learning to control motor-equivalent arm movement sequences, and for storing these sequences in cognitive working memories that can trigger the learning of cognitive plans with which to read out skilled movement sequences. Cognitive-emotional interactions that are regulated by reinforcement learning can then help to select the plans that control actions most likely to acquire valued goal objects in different situations. Many interdisciplinary psychological and neurobiological data about conscious and unconscious behaviors in normal individuals and clinical patients have been explained in terms of these concepts and mechanisms.
Collapse
Affiliation(s)
- Stephen Grossberg
- Center for Adaptive Systems, Room 213, Graduate Program in Cognitive and Neural Systems, Departments of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering, Boston University, 677 Beacon Street, Boston, MA, 02215, USA.
| |
Collapse
|
24
|
van Leeuwen J, Belopolsky AV. Detection of object displacement during a saccade is prioritized by the oculomotor system. J Vis 2019; 19:11. [DOI: 10.1167/19.11.11] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Affiliation(s)
- Jonathan van Leeuwen
- Department of Experimental and Applied Psychology, Vrije Universiteit, Amsterdam, The Netherlands
| | - Artem V. Belopolsky
- Department of Experimental and Applied Psychology, Vrije Universiteit, Amsterdam, The Netherlands
| |
Collapse
|
25
|
Ramezani F, Kheradpisheh SR, Thorpe SJ, Ghodrati M. Object categorization in visual periphery is modulated by delayed foveal noise. J Vis 2019; 19:1. [PMID: 31369042 DOI: 10.1167/19.9.1] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Behavioral studies in humans indicate that peripheral vision can perform object recognition to some extent. Moreover, recent studies have shown that some information from brain regions retinotopic to the visual periphery is somehow fed back to regions retinotopic to the fovea, and that disrupting this feedback impairs object recognition in humans. However, it is unclear to what extent the information in the visual periphery contributes to human object categorization. Here, we designed two series of rapid object categorization tasks to first investigate the performance of human peripheral vision in categorizing natural object images at different eccentricities and abstraction levels (superordinate, basic, and subordinate). Then, using a delayed foveal noise mask, we studied how modulating the foveal representation impacts peripheral object categorization at each abstraction level. We found that peripheral vision can quickly and accurately accomplish superordinate categorization, while its performance at finer categorization levels drops dramatically as the object is presented farther in the periphery. Also, we found that a 300-ms delayed foveal noise mask can significantly disturb categorization performance at the basic and subordinate levels, while it has no effect at the superordinate level. Our results suggest that human peripheral vision can easily process objects at high abstraction levels, and that this information is fed back to foveal vision to prime the foveal cortex for finer categorizations when a saccade is made toward the target object.
Collapse
Affiliation(s)
- Farzad Ramezani
- Department of Computer Science, School of Mathematics, Statistics, and Computer Science, University of Tehran, Tehran, Iran
| | - Saeed Reza Kheradpisheh
- Department of Computer and Data Sciences, Faculty of Mathematical Sciences, Shahid Beheshti University, Tehran, Iran
| | - Simon J Thorpe
- Centre de Recherche Cerveau et Cognition (CerCo), Université Paul Sabatier, Toulouse, France
| | - Masoud Ghodrati
- Neuroscience Program, Biomedicine Discovery Institute, Monash University, Clayton, Victoria, Australia
| |
Collapse
|
26
|
Stewart EEM, Schütz AC. Transsaccadic integration benefits are not limited to the saccade target. J Neurophysiol 2019; 122:1491-1501. [PMID: 31365324 PMCID: PMC6783298 DOI: 10.1152/jn.00420.2019] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/04/2023] Open
Abstract
Across saccades, humans can integrate the low-resolution presaccadic information of an upcoming saccade target with the high-resolution postsaccadic information. There is converging evidence to suggest that transsaccadic integration occurs at the saccade target. However, given divergent evidence on the spatial specificity of related mechanisms such as attention, visual working memory, and remapping, it is unclear whether integration is also possible at locations other than the saccade target. We tested the spatial profile of transsaccadic integration, by testing perceptual performance at six locations around the saccade target and between the saccade target and initial fixation. Results show that integration benefits do not differ between the saccade target and surrounding locations. Transsaccadic integration benefits are not specific to the saccade target and can occur at other locations when they are behaviorally relevant, although there is a trend for worse performance for the location above initial fixation compared with those in the direction of the saccade. This suggests that transsaccadic integration may be a more general mechanism used to reconcile task-relevant pre- and postsaccadic information at attended locations other than the saccade target. NEW & NOTEWORTHY This study shows that integration of pre- and postsaccadic information across saccades is not restricted to the saccade target. We found performance benefits of transsaccadic integration at attended locations other than the saccade target, and these benefits did not differ from those found at the saccade target. This suggests that transsaccadic integration may be a more general mechanism used to reconcile pre- and postsaccadic information at task-relevant locations.
Collapse
Affiliation(s)
- Emma E M Stewart
- Allgemeine und Biologische Psychologie, Philipps-Universität Marburg, Marburg, Germany
| | - Alexander C Schütz
- Allgemeine und Biologische Psychologie, Philipps-Universität Marburg, Marburg, Germany
| |
Collapse
|
28
|
Memory for retinotopic locations is more accurate than memory for spatiotopic locations, even for visually guided reaching. Psychon Bull Rev 2019; 25:1388-1398. [PMID: 29159799 DOI: 10.3758/s13423-017-1401-x] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
To interact successfully with objects, we must maintain stable representations of their locations in the world. However, their images on the retina may be displaced several times per second by large, rapid eye movements. A number of studies have demonstrated that visual processing is heavily influenced by gaze-centered (retinotopic) information, including a recent finding that memory for an object's location is more accurate and precise in gaze-centered (retinotopic) than world-centered (spatiotopic) coordinates (Golomb & Kanwisher, 2012b). This effect is somewhat surprising, given our intuition that behavior is successfully guided by spatiotopic representations. In the present experiment, we asked whether the visual system may rely on a more spatiotopic memory store depending on the mode of responding. Specifically, we tested whether reaching toward and tapping directly on an object's location could improve memory for its spatiotopic location. Participants performed a spatial working memory task under four conditions: retinotopic vs. spatiotopic task, and computer mouse click vs. touchscreen reaching response. When participants responded by clicking with a mouse on the screen, we replicated Golomb & Kanwisher's original results, finding that memory was more accurate in retinotopic than spatiotopic coordinates and that the accuracy of spatiotopic memory deteriorated substantially more than retinotopic memory with additional eye movements during the memory delay. Critically, we found the same pattern of results when participants responded by using their finger to reach and tap the remembered location on the monitor. These results further support the hypothesis that spatial memory is natively retinotopic; we found no evidence that engaging the motor system improves spatiotopic memory across saccades.
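The two reference frames contrasted in this abstract differ only by the current gaze position, so converting between them is a single vector operation, and every eye movement changes the mapping. A minimal sketch (function names and tuple representation are illustrative):

```python
# Minimal sketch of the retinotopic/spatiotopic relationship:
# a spatiotopic (world-centered) location equals the retinotopic
# (gaze-centered) location plus the current gaze position.

def to_spatiotopic(retinotopic, gaze):
    rx, ry = retinotopic
    gx, gy = gaze
    return (rx + gx, ry + gy)

def to_retinotopic(spatiotopic, gaze):
    sx, sy = spatiotopic
    gx, gy = gaze
    return (sx - gx, sy - gy)

# After a saccade, a remembered retinotopic location stays fixed on the
# retina, while recovering the matching spatiotopic location requires
# re-applying the new gaze position -- the updating step where, per the
# study, memory errors accumulate with each additional eye movement.
```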
Collapse
|
29
|
He T, Fritsche M, de Lange FP. Predictive remapping of visual features beyond saccadic targets. J Vis 2019; 18:20. [PMID: 30593063 DOI: 10.1167/18.13.20] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Visual stability is thought to be mediated by predictive remapping of the relevant object information from its current, presaccadic location to its future, postsaccadic location on the retina. However, it is heavily debated whether and what feature information is predictively remapped during the presaccadic interval. Here we examined the spatial and featural properties of predictive remapping in a set of three psychophysical studies. We made use of an orientation-adaptation paradigm, in which we induced a tilt aftereffect by prolonged exposure to an oriented adaptor stimulus. Following this adaptation phase, a test stimulus was presented shortly before saccade onset. We found strong evidence for predictive remapping of the features of this test stimulus presented shortly before saccade onset, evidenced by a large tilt aftereffect elicited when the adaptor was positioned at the postsaccadic retinal location of the test stimulus. Conversely, the adaptation state itself, caused by the exposure to the adaptor stimulus, was not predictively remapped. Furthermore, we establish that predictive remapping also occurs for stimuli that are not saccade targets, pointing toward a forward remapping process operating across the whole visual field. Together, our findings suggest that predictive feature remapping of object information plays an important role in mediating visual stability.
Collapse
Affiliation(s)
- Tao He
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
| | - Matthias Fritsche
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
| | - Floris P de Lange
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
| |
Collapse
|
30
|
Herwig A, Weiß K, Schneider WX. Feature prediction across eye movements is location specific and based on retinotopic coordinates. J Vis 2019; 18:13. [PMID: 30372762 DOI: 10.1167/18.8.13] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
With each saccadic eye movement, internal object representations change their retinal position and spatial resolution. Recently, we suggested that the visual system deals with these saccade-induced changes by predicting visual features across saccades based on transsaccadic associations of peripheral and foveal input (Herwig & Schneider, 2014). Here we tested the specificity of feature prediction by asking (a) whether it is spatially restricted to the previous learning location or the saccade target location, and (b) whether it is based on retinotopic (eye-centered) or spatiotopic (world-centered) coordinates. In a preceding acquisition phase, objects systematically changed their spatial frequency during saccades. In the following test phases of two experiments, participants had to judge the frequency of briefly presented peripheral objects. These objects were presented either at the previous learning location or at new locations and were either the target of a saccadic eye movement or not (Experiment 1). Moreover, objects were presented either in the same or different retinotopic and spatiotopic coordinates (Experiment 2). Spatial frequency perception was biased toward previously associated foveal input indicating transsaccadic learning and feature prediction. Importantly, while this pattern was not bound to the saccade target location, it was seen only at the previous learning location in retinotopic coordinates, suggesting that feature prediction probably affects low- or mid-level perception.
Collapse
Affiliation(s)
- Arvid Herwig
- Department of Psychology, Bielefeld University, Bielefeld, Germany; Cluster of Excellence, Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany
| | - Katharina Weiß
- Department of Psychology, Bielefeld University, Bielefeld, Germany; Cluster of Excellence, Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany
| | - Werner X Schneider
- Department of Psychology, Bielefeld University, Bielefeld, Germany; Cluster of Excellence, Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany
| |
Collapse
|
31
|
Golomb JD. Remapping locations and features across saccades: a dual-spotlight theory of attentional updating. Curr Opin Psychol 2019; 29:211-218. [PMID: 31075621 DOI: 10.1016/j.copsyc.2019.03.018] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2018] [Revised: 03/23/2019] [Accepted: 03/28/2019] [Indexed: 01/06/2023]
Abstract
How do we maintain visual stability across eye movements? Much work has focused on how visual information is rapidly updated to maintain spatiotopic representations. However, predictive spatial remapping is only part of the story. Here I review key findings, recent debates, and open questions regarding remapping and its implications for visual attention and perception. This review focuses on two key questions: when does remapping occur, and what is the impact on feature perception? Findings are reviewed within the framework of a two-stage, or dual-spotlight, remapping process, where spatial attention must be both updated to the new location (fast, predictive stage) and withdrawn from the previous retinotopic location (slow, post-saccadic stage), with a particular focus on the link between spatial and feature information across eye movements.
Collapse
Affiliation(s)
- Julie D Golomb
- Department of Psychology, The Ohio State University, United States.
| |
Collapse
|
32
|
Stewart EEM, Schütz AC. Optimal trans-saccadic integration relies on visual working memory. Vision Res 2018; 153:70-81. [PMID: 30312623 PMCID: PMC6241852 DOI: 10.1016/j.visres.2018.10.002] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2018] [Revised: 09/11/2018] [Accepted: 10/01/2018] [Indexed: 11/24/2022]
Abstract
Saccadic eye movements alter the visual processing of objects of interest by bringing them from the periphery, where there is only low-resolution vision, to the high-resolution fovea. Evidence suggests that people are able to achieve trans-saccadic integration in a near-optimal manner; however, the mechanisms underlying integration are still unclear. Visual working memory (VWM) is sustained across a saccade, and it has been suggested that this memory resource is used to store and compare the pre- and post-saccadic percepts. This study directly tested the hypothesis that VWM is necessary for optimal trans-saccadic integration, by introducing memory load during a saccade, and testing subsequent integration performance on feature-similar and -dissimilar stimuli. Results show that integration performance was impaired when there was an additional memory task. Additionally, performance on the memory task was affected by feature-specific integration stimuli. Our results suggest that VWM supports the integration of pre- and post-saccadic stimuli because integration performance is impaired under VWM load.
Collapse
Affiliation(s)
- Emma E M Stewart
- Allgemeine und Biologische Psychologie, Philipps-Universität Marburg, Marburg, Germany
| | - Alexander C Schütz
- Allgemeine und Biologische Psychologie, Philipps-Universität Marburg, Marburg, Germany
| |
Collapse
|
33
|
Rolfs M, Murray-Smith N, Carrasco M. Perceptual learning while preparing saccades. Vision Res 2018; 152:126-138. [PMID: 29277450 PMCID: PMC6028304 DOI: 10.1016/j.visres.2017.11.009] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2017] [Revised: 11/25/2017] [Accepted: 11/28/2017] [Indexed: 10/18/2022]
Abstract
Traditional perceptual learning protocols rely almost exclusively on long periods of uninterrupted fixation. Taking a first step towards understanding perceptual learning in natural vision, we had observers report the orientation of a briefly flashed stimulus (clockwise or counterclockwise from a reference orientation) presented strictly during saccade preparation at a location offset from the saccade target. For each observer, the saccade direction, stimulus location, and orientation remained the same throughout training. Subsequently, we assessed performance during fixation in three transfer sessions, either at the trained or at an untrained location, and either using an untrained (Experiment 1) or the trained (Experiment 2) stimulus orientation. We modeled the evolution of contrast thresholds (i.e., the stimulus contrast necessary to discriminate its orientation correctly 75% of the time) as an exponential learning curve, and quantified departures from this curve in transfer sessions using two new, complementary measures of transfer costs (i.e., performance decrements after the transition into the Transfer phase). We observed robust perceptual learning and associated transfer costs for untrained locations and orientations. We also assessed if spatial transfer costs were reduced for the remapped location of the pre-saccadic stimulus-the location the stimulus would have had (but never had) after the saccade. Although the pattern of results at that location differed somewhat from that at the control location, we found no clear evidence for perceptual learning at remapped locations. Using novel, model-based ways to assess learning and transfer costs, our results show that location and feature specificity, hallmarks of perceptual learning, subsist if the target stimulus is presented strictly during saccade preparation throughout training.
Collapse
Affiliation(s)
- Martin Rolfs
- Department of Psychology, New York University, NY, USA; Center for Neural Science, New York University, NY, USA; Department of Psychology, Humboldt-Universität zu Berlin, Germany; Bernstein Center for Computational Neuroscience, Humboldt-Universität zu Berlin, Germany.
| | | | - Marisa Carrasco
- Department of Psychology, New York University, NY, USA; Center for Neural Science, New York University, NY, USA
| |
Collapse
|
34
|
Bucher L, Bublak P, Kerkhoff G, Geyer T, Müller H, Finke K. Spatial remapping in visual search: Remapping cues are provided at attended and ignored locations. Acta Psychol (Amst) 2018; 190:103-115. [PMID: 30056328 DOI: 10.1016/j.actpsy.2018.07.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2018] [Revised: 05/31/2018] [Accepted: 07/10/2018] [Indexed: 10/28/2022] Open
Abstract
We experience the world as stable and continuous, despite the fact that visual input is overwritten on the retina with each new ocular fixation. Spatial remapping is the process that integrates selected visual information into successive (continuous) representations of our spatial environment, thereby allowing us to keep track of objects, and experience the world as stable, despite frequent eye (re-)fixations. The present paper investigates spatial remapping in the context of visual pop-out search. Within standard instances of the pop-out paradigm, reactions to stimuli at previously attended locations are facilitated (faster and more accurate), and reactions to stimuli at previously ignored locations are inhibited (slower and less accurate). The mechanisms that support facilitation at previously attended locations, and inhibition at previously ignored locations, serve to enhance the efficiency of visual search. It is thus natural to expect that information about which locations were previously attended to or ignored is stored and remapped as a concomitant to successive representations of the spatial environment. Using variants of the pop-out paradigm, we corroborate this expectation, and show that information concerning the prior status of locations, as attended to or ignored, is remapped following attention shifts, with some degradation of information concerning ignored locations.
Collapse
|
35
|
Out of sight, out of mind: Occlusion and eye closure destabilize moving bistable structure-from-motion displays. Atten Percept Psychophys 2018; 80:1193-1204. [PMID: 29560607 DOI: 10.3758/s13414-018-1505-z] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Our brain constantly tries to anticipate the future by using a variety of memory mechanisms. Interestingly, studies using the intermittent presentation of multistable displays have shown little perceptual persistence for interruptions longer than a few hundred milliseconds. Here we examined whether we can facilitate the perceptual stability of bistable displays following a period of invisibility by employing a physically plausible and ecologically valid occlusion event sequence, as opposed to the typical intermittent presentation, with sudden onsets and offsets. To this end, we presented a bistable rotating structure-from-motion display that was moving along a linear horizontal trajectory on the screen and either was temporarily occluded by another object (a cardboard strip in Exp. 1, a computer-generated image in Exp. 2) or became invisible due to eye closure (Exp. 3). We report that a bistable rotation direction reliably persisted following occlusion or interruption only (1) if the pre- and postinterruption locations overlapped spatially (an occluder with apertures in Exp. 2 or brief, spontaneous blinks in Exp. 3) or (2) if an object's size allowed for the efficient grouping of dots on both sides of the occluding object (large objects in Exp. 1). In contrast, we observed no persistence whenever the pre- and postinterruption locations were nonoverlapping (large solid occluding objects in Exps. 1 and 2 and long, prompted blinks in Exp. 3). We report that the bistable rotation direction of a moving object persisted only for spatially overlapping neural representations, and that persistence was not facilitated by a physically plausible and ecologically valid occlusion event.
Collapse
|
36
|
Distractor displacements during saccades are reflected in the time-course of saccade curvature. Sci Rep 2018; 8:2469. [PMID: 29410421 PMCID: PMC5802815 DOI: 10.1038/s41598-018-20578-9] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2017] [Accepted: 01/17/2018] [Indexed: 11/25/2022] Open
Abstract
Every time we make a saccade we form a prediction about where objects are going to be when the eye lands. This is crucial since the oculomotor system is retinotopically organized and every saccade drastically changes the projection of objects on the retina. We investigated how quickly the oculomotor system accommodates new spatial information when a distractor is displaced during a saccade. Participants performed sequences of horizontal and vertical saccades and oculomotor competition was induced by presenting a task-irrelevant distractor before the first saccade. On half of the trials the distractor remained in the same location after the first saccade and on the other half the distractor moved during the first saccade. Curvature of the second saccade was used to track target-distractor competition. At short intersaccadic intervals, saccades curved away from the original distractor location, confirming that in the oculomotor system spatiotopic representations emerge rapidly and automatically. Approximately 190 ms after the first saccade, second saccades curved away from the new distractor location. These results show that after a saccade the oculomotor system is initially driven by the spatial prediction made before the saccade, but it is able to quickly update these spatial predictions based on new visual information.
Collapse
|
37
|
Howard MW. Memory as Perception of the Past: Compressed Time in Mind and Brain. Trends Cogn Sci 2018; 22:124-136. [PMID: 29389352 PMCID: PMC5881576 DOI: 10.1016/j.tics.2017.11.004] [Citation(s) in RCA: 40] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2017] [Revised: 11/07/2017] [Accepted: 11/16/2017] [Indexed: 01/27/2023]
Abstract
In the visual system retinal space is compressed such that acuity decreases further from the fovea. Different forms of memory may rely on a compressed representation of time, manifested as decreased accuracy for events that happened further in the past. Neurophysiologically, "time cells" show receptive fields in time. Analogous to the compression of visual space, time cells show less acuity for events further in the past. Behavioral evidence suggests memory can be accessed by scanning a compressed temporal representation, analogous to visual search. This suggests a common computational language for visual attention and memory retrieval. In this view, time functions like a scaffolding that organizes memories in much the same way that retinal space functions like a scaffolding for visual perception.
Collapse
Affiliation(s)
- Marc W Howard
- Center for Memory and Brain, Department of Psychological and Brain Sciences, Department of Physics, Boston University, Boston, MA, USA.
| |
Collapse
|
38
|
Köller CP, Poth CH, Herwig A. Object discrepancy modulates feature prediction across eye movements. PSYCHOLOGICAL RESEARCH 2018; 84:231-244. [PMID: 29387939 DOI: 10.1007/s00426-018-0988-5] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2017] [Accepted: 01/27/2018] [Indexed: 10/18/2022]
Abstract
Object perception across saccadic eye movements is assumed to result from integrating two information sources: incoming peripheral object information and information from a foveal prediction (Herwig and Schneider, J Exp Psychol Gen 143(5):1903-1922, 2014; Herwig, J Vis 15(16):7, 2015). Predictions are supposed to be based on transsaccadic associations of peripheral and foveal object information. The main function of these predictions may be to conceal discrepancies in resolution and locations across saccades. Here we ask how predictions are affected by discrepancies between peripheral and foveal objects. Participants learned unfamiliar transsaccadic associations by making saccades to objects whose shape systematically changed during the saccade. Importantly, we manipulated the size of this change between participants to induce different magnitudes of object discrepancy. In a subsequent test, we found that judgment shifts of peripheral shape perception toward the predicted foveal input depended on change size during acquisition. Specifically, the contribution of prediction decreased for large changes but did not reach zero, showing that even for large changes (i.e., square to circle or vice versa) the prediction was not ignored completely. These findings indicate that object discrepancy during learning determines how much the resulting foveal prediction contributes to perception in the periphery.
Collapse
Affiliation(s)
- Cassandra Philine Köller
- Neuro-cognitive Psychology, Department of Psychology and Cluster of Excellence Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany
| | - Christian H Poth
- Neuro-cognitive Psychology, Department of Psychology and Cluster of Excellence Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany
| | - Arvid Herwig
- Neuro-cognitive Psychology, Department of Psychology and Cluster of Excellence Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany.
| |
Collapse
|
39
|
Stewart EEM, Schütz AC. Attention modulates trans-saccadic integration. Vision Res 2017; 142:1-10. [PMID: 29183779 PMCID: PMC5757795 DOI: 10.1016/j.visres.2017.11.006] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2017] [Revised: 11/13/2017] [Accepted: 11/17/2017] [Indexed: 11/16/2022]
Abstract
With every saccade, humans must reconcile the low resolution peripheral information available before a saccade, with the high resolution foveal information acquired after the saccade. While research has shown that we are able to integrate peripheral and foveal vision in a near-optimal manner, it is still unclear which mechanisms may underpin this important perceptual process. One potential mechanism that may moderate this integration process is visual attention. Pre-saccadic attention is a well documented phenomenon, whereby visual attention shifts to the location of an upcoming saccade before the saccade is executed. While it plays an important role in other peri-saccadic processes such as predictive remapping, the role of attention in the integration process is as yet unknown. This study aimed to determine whether the presentation of an attentional distractor during a saccade impaired trans-saccadic integration, and to measure the time-course of this impairment. Results showed that presenting an attentional distractor impaired integration performance both before saccade onset, and during the saccade, in selected subjects who showed integration in the absence of a distractor. This suggests that visual attention may be a mechanism that facilitates trans-saccadic integration.
Collapse
Affiliation(s)
- Emma E M Stewart
- Allgemeine und Biologische Psychologie, Philipps-Universität Marburg, Marburg, Germany.
| | - Alexander C Schütz
- Allgemeine und Biologische Psychologie, Philipps-Universität Marburg, Marburg, Germany
| |
Collapse
|
40
|
Buonocore A, Fracasso A, Melcher D. Pre-saccadic perception: Separate time courses for enhancement and spatial pooling at the saccade target. PLoS One 2017; 12:e0178902. [PMID: 28614367 PMCID: PMC5470679 DOI: 10.1371/journal.pone.0178902] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2016] [Accepted: 05/19/2017] [Indexed: 11/25/2022] Open
Abstract
We interact with complex scenes using eye movements to select targets of interest. Studies have shown that the future target of a saccadic eye movement is processed differently by the visual system. A number of effects have been reported, including a benefit for perceptual performance at the target (“enhancement”), reduced influences of backward masking (“un-masking”), reduced crowding (“un-crowding”) and spatial compression towards the saccade target. We investigated the time course of these effects by measuring orientation discrimination for targets that were spatially crowded or temporally masked. In four experiments, we varied the target-flanker distance, the presence of forward/backward masks, the orientation of the flankers and whether participants made a saccade. Masking and randomizing flanker orientation reduced performance in both fixation and saccade trials. We found a small improvement in performance on saccade trials, compared to fixation trials, with a time course that was consistent with a general enhancement at the saccade target. In addition, a decrement in performance (reporting the average flanker orientation, rather than the target) was found in the time bins nearest saccade onset when random oriented flankers were used, consistent with spatial pooling around the saccade target. We did not find strong evidence for un-crowding. Overall, our pattern of results was consistent with both an early, general enhancement at the saccade target and a later, peri-saccadic compression/pooling towards the saccade target.
Collapse
Affiliation(s)
- Antimo Buonocore
- Werner Reichardt Centre for Integrative Neuroscience, Tübingen University, Tübingen, Germany
- Hertie Institute for Clinical Brain Research, Tübingen University, Tübingen, Germany
| | - Alessio Fracasso
- Spinoza Center for Neuroimaging, Amsterdam Zuidoost, Netherlands
- Radiology, Imaging Division, University Medical Center Utrecht, Utrecht, Netherlands
| | - David Melcher
- Center for Mind/Brain Sciences, University of Trento, Rovereto, Italy
| |
Collapse
|
41
|
Shafer-Skelton A, Kupitz CN, Golomb JD. Object-location binding across a saccade: A retinotopic spatial congruency bias. Atten Percept Psychophys 2017; 79:765-781. [PMID: 28070793 PMCID: PMC5354979 DOI: 10.3758/s13414-016-1263-8] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Despite frequent eye movements that rapidly shift the locations of objects on our retinas, our visual system creates a stable perception of the world. To do this, it must convert eye-centered (retinotopic) input to world-centered (spatiotopic) percepts. Moreover, for successful behavior we must also incorporate information about object features/identities during this updating - a fundamental challenge that remains to be understood. Here we adapted a recent behavioral paradigm, the "spatial congruency bias," to investigate object-location binding across an eye movement. In two initial baseline experiments, we showed that the spatial congruency bias was present for both gabor and face stimuli in addition to the object stimuli used in the original paradigm. Then, across three main experiments, we found the bias was preserved across an eye movement, but only in retinotopic coordinates: Subjects were more likely to perceive two stimuli as having the same features/identity when they were presented in the same retinotopic location. Strikingly, there was no evidence of location binding in the more ecologically relevant spatiotopic (world-centered) coordinates; the reference frame did not update to spatiotopic even at longer post-saccade delays, nor did it transition to spatiotopic with more complex stimuli (gabors, shapes, and faces all showed a retinotopic congruency bias). Our results suggest that object-location binding may be tied to retinotopic coordinates, and that it may need to be re-established following each eye movement rather than being automatically updated to spatiotopic coordinates.
Collapse
Affiliation(s)
- Anna Shafer-Skelton
- Department of Psychology, The Ohio State University, Columbus, OH, 43210, USA
| | - Colin N Kupitz
- Department of Psychology, The Ohio State University, Columbus, OH, 43210, USA
| | - Julie D Golomb
- Department of Psychology, The Ohio State University, Columbus, OH, 43210, USA.
| |
Collapse
|
42
|
Harrison C, Binetti N, Mareschal I, Johnston A. Time-Order Errors in Duration Judgment Are Independent of Spatial Positioning. Front Psychol 2017; 8:340. [PMID: 28337162 PMCID: PMC5343025 DOI: 10.3389/fpsyg.2017.00340] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2016] [Accepted: 02/22/2017] [Indexed: 11/23/2022] Open
Abstract
Time-order errors (TOEs) occur when the discriminability between two stimuli is affected by the order in which they are presented. While TOEs have been studied since the 1860s, it is unknown whether the spatial properties of a stimulus will affect this temporal phenomenon. In this experiment, we asked whether perceived duration, or duration discrimination, might be influenced by whether two intervals in a standard two-interval method of constants paradigm were spatially overlapping in visual short-term memory. Two circular sinusoidal gratings (one standard and the other a comparison) were shown sequentially and participants judged which of the two was presented for a longer duration. The test stimuli were either spatially overlapping (in different spatial frames) or separate. Stimulus order was randomized between trials. The standard stimulus lasted 600 ms, and the test stimulus had one of seven possible values (between 300 and 900 ms). There were no overall significant differences observed between spatially overlapping and separate stimuli. However, in trials where the standard stimulus was presented second, TOEs were greater, and participants were significantly less sensitive to differences in duration. TOEs were also greater in conditions involving a saccade. This suggests there is an intrinsic memory component to two-interval tasks, in that the information from the first interval has to be stored; this is more demanding when the standard is presented in the second interval. Overall, this study suggests that while temporal information may be encoded in some spatial form, it is not dependent on visual short-term memory.
Collapse
Affiliation(s)
- Charlotte Harrison
- Department of Experimental Psychology, University College London, London, UK
| | - Nicola Binetti
- Department of Experimental Psychology, University College London, London, UK
| | - Isabelle Mareschal
- School of Biological and Chemical Sciences, Psychology, Queen Mary University of London, London, UK
| | - Alan Johnston
- Department of Experimental Psychology, University College London, London, UK; School of Psychology, University of Nottingham, Nottingham, UK
| |
Collapse
|
43
|
The reference frame of the tilt aftereffect measured by differential Pavlovian conditioning. Sci Rep 2017; 7:40525. [PMID: 28094321 PMCID: PMC5240094 DOI: 10.1038/srep40525] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2016] [Accepted: 12/07/2016] [Indexed: 11/08/2022] Open
Abstract
We used a differential Pavlovian conditioning paradigm to measure tilt aftereffect (TAE) strength. Gabor patches, rotated clockwise and anticlockwise, were used as conditioned stimuli (CSs), one of which (CS+) was followed by the unconditioned stimulus (UCS), whereas the other (CS−) appeared alone. The UCS was an air puff delivered to the left eye. In addition to the CS+ and CS−, the vertical test patch was also presented for the clockwise and anticlockwise adapters. The vertical patch was not followed by the UCS. After participants acquired differential conditioning, eyeblink conditioned responses (CRs) were observed for the vertical patch when it appeared to be tilted in the same direction as the CS+ owing to the TAE. The effect was observed not only when the adapter and test stimuli were presented in the same retinotopic position but also when they were presented in the same spatiotopic position, although spatiotopic TAE was weak—it occurred approximately half as often as the full effect. Furthermore, spatiotopic TAE decayed as the time after saccades increased, but did not decay as the time before saccades increased. These results suggest that the time before the performance of saccadic eye movements is needed to compute the spatiotopic representation.
Collapse
|
44
|
Brenner E, Smeets JB. Accumulating visual information for action. PROGRESS IN BRAIN RESEARCH 2017; 236:75-95. [DOI: 10.1016/bs.pbr.2017.07.007] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
|
45
|
Grossberg S. Towards solving the hard problem of consciousness: The varieties of brain resonances and the conscious experiences that they support. Neural Netw 2016; 87:38-95. [PMID: 28088645 DOI: 10.1016/j.neunet.2016.11.003] [Citation(s) in RCA: 45] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2016] [Revised: 10/21/2016] [Accepted: 11/20/2016] [Indexed: 10/20/2022]
Abstract
The hard problem of consciousness is the problem of explaining how we experience qualia or phenomenal experiences, such as seeing, hearing, and feeling, and knowing what they are. To solve this problem, a theory of consciousness needs to link brain to mind by modeling how emergent properties of several brain mechanisms interacting together embody detailed properties of individual conscious psychological experiences. This article summarizes evidence that Adaptive Resonance Theory, or ART, accomplishes this goal. ART is a cognitive and neural theory of how advanced brains autonomously learn to attend, recognize, and predict objects and events in a changing world. ART has predicted that "all conscious states are resonant states" as part of its specification of mechanistic links between processes of consciousness, learning, expectation, attention, resonance, and synchrony. It hereby provides functional and mechanistic explanations of data ranging from individual spikes and their synchronization to the dynamics of conscious perceptual, cognitive, and cognitive-emotional experiences. ART has reached sufficient maturity to begin classifying the brain resonances that support conscious experiences of seeing, hearing, feeling, and knowing. Psychological and neurobiological data in both normal individuals and clinical patients are clarified by this classification. This analysis also explains why not all resonances become conscious, and why not all brain dynamics are resonant. The global organization of the brain into computationally complementary cortical processing streams (complementary computing), and the organization of the cerebral cortex into characteristic layers of cells (laminar computing), figure prominently in these explanations of conscious and unconscious processes. Alternative models of consciousness are also discussed.
Collapse
Affiliation(s)
- Stephen Grossberg
- Center for Adaptive Systems, Boston University, 677 Beacon Street, Boston, MA 02215, USA; Graduate Program in Cognitive and Neural Systems, Departments of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering Boston University, 677 Beacon Street, Boston, MA 02215, USA.
| |
Collapse
|
46
|
Hussain Ismail AM, Solomon JA, Hansard M, Mareschal I. A tilt after-effect for images of buildings: evidence of selectivity for the orientation of everyday scenes. ROYAL SOCIETY OPEN SCIENCE 2016; 3:160551. [PMID: 28018643 PMCID: PMC5180141 DOI: 10.1098/rsos.160551] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/27/2016] [Accepted: 10/25/2016] [Indexed: 06/06/2023]
Abstract
The tilt after-effect (TAE) is thought to be a manifestation of gain control in mechanisms selective for spatial orientation in visual stimuli. It has been demonstrated with luminance-defined stripes, contrast-defined stripes, orientation-defined stripes and even with natural images. Of course, all images can be decomposed into a sum of stripes, so it should not be surprising to find a TAE when adapting and test images contain stripes that differ by 15° or so. We show this latter condition is not necessary for the TAE with natural images: adaptation to slightly tilted and vertically filtered houses produced a 'repulsive' bias in the perceived orientation of horizontally filtered houses. These results suggest gain control in mechanisms selective for spatial orientation in natural images.
Collapse
Affiliation(s)
- Ahamed Miflah Hussain Ismail
- Department of Experimental Psychology, School of Biological and Chemical Sciences, Queen Mary University of London, London, UK
| | - Joshua A. Solomon
- Centre for Applied Vision Research, City, University of London, London, UK
| | - Miles Hansard
- School of Electronic Engineering and Computer Science, Queen Mary University of London, London, UK
| | - Isabelle Mareschal
- Department of Experimental Psychology, School of Biological and Chemical Sciences, Queen Mary University of London, London, UK
| |
Collapse
|
47
|
Sun LD, Goldberg ME. Corollary Discharge and Oculomotor Proprioception: Cortical Mechanisms for Spatially Accurate Vision. Annu Rev Vis Sci 2016; 2:61-84. [PMID: 28532350 PMCID: PMC5691365 DOI: 10.1146/annurev-vision-082114-035407] [Citation(s) in RCA: 37] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
A classic problem in psychology is understanding how the brain creates a stable and accurate representation of space for perception and action despite a constantly moving eye. Two mechanisms have been proposed to solve this problem: Hermann von Helmholtz's idea that the brain uses a corollary discharge of the motor command that moves the eye to adjust the visual representation, and Sir Charles Sherrington's idea that the brain measures eye position to calculate a spatial representation. Here, we discuss the cognitive, neuropsychological, and physiological mechanisms that support each of these ideas. We propose that both are correct: A rapid corollary discharge signal remaps the visual representation before an impending saccade, computing accurate movement vectors; and an oculomotor proprioceptive signal enables the brain to construct a more accurate craniotopic representation of space that develops slowly after the saccade.
Collapse
Affiliation(s)
- Linus D Sun
- Mahoney-Keck Center for Brain and Behavior Research, Department of Neuroscience, Columbia University College of Physicians and Surgeons, New York, NY 10032
- Department of Neuroscience, Columbia University College of Physicians and Surgeons, New York, NY 10032
- Department of Ophthalmology, Columbia University College of Physicians and Surgeons, New York, NY 10032
- Division of Neurobiology and Behavior, New York State Psychiatric Institute, New York, NY 10032
| | - Michael E Goldberg
- Mahoney-Keck Center for Brain and Behavior Research, Department of Neuroscience, Columbia University College of Physicians and Surgeons, New York, NY 10032
- Department of Neuroscience, Columbia University College of Physicians and Surgeons, New York, NY 10032
- Department of Neurology, Columbia University College of Physicians and Surgeons, New York, NY 10032
- Department of Psychiatry, Columbia University College of Physicians and Surgeons, New York, NY 10032
- Department of Ophthalmology, Columbia University College of Physicians and Surgeons, New York, NY 10032
- Kavli Institute for Neuroscience, Columbia University, New York, NY 10032
- Division of Neurobiology and Behavior, New York State Psychiatric Institute, New York, NY 10032
| |
Collapse
|
48
|
Temporally flexible feedback signal to foveal cortex for peripheral object recognition. Proc Natl Acad Sci U S A 2016; 113:11627-11632. [PMID: 27671651 DOI: 10.1073/pnas.1606137113] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Recent studies have shown that information from peripherally presented images is present in the human foveal retinotopic cortex, presumably because of feedback signals. We investigated this potential feedback signal by presenting noise at the fovea at different object-noise stimulus onset asynchronies (SOAs) while subjects performed a discrimination task on peripheral objects. Results revealed a selective impairment of performance when foveal noise was presented at 250-ms SOA, but only for tasks that required comparing objects' spatial details, suggesting a task- and stimulus-dependent foveal processing mechanism. Critically, the temporal window of foveal processing was shifted when mental rotation was required for the peripheral objects, indicating that foveal retinotopic processing is not automatically engaged at a fixed time following peripheral stimulation; rather, it occurs at the stage when detailed information is required. Moreover, fMRI measurements using multivoxel pattern analysis showed that both image- and object-category-relevant information about peripheral objects was represented in the foveal cortex. Taken together, our results support the hypothesis of a temporally flexible feedback signal to the foveal retinotopic cortex when discriminating objects in the visual periphery.
Collapse
|
49
|
Rao HM, Mayo JP, Sommer MA. Circuits for presaccadic visual remapping. J Neurophysiol 2016; 116:2624-2636. [PMID: 27655962 DOI: 10.1152/jn.00182.2016] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2016] [Accepted: 09/14/2016] [Indexed: 01/08/2023] Open
Abstract
Saccadic eye movements rapidly displace the image of the world that is projected onto the retinas. In anticipation of each saccade, many neurons in the visual system shift their receptive fields. This presaccadic change in visual sensitivity, known as remapping, was first documented in the parietal cortex and has been studied in many other brain regions. Remapping requires information about upcoming saccades via corollary discharge. Analyses of neurons in a corollary discharge pathway that targets the frontal eye field (FEF) suggest that remapping may be assembled in the FEF's local microcircuitry. Complementary data from reversible inactivation, neural recording, and modeling studies provide evidence that remapping contributes to transsaccadic continuity of action and perception. Multiple forms of remapping have been reported in the FEF and other brain areas, however, and questions remain about the reasons for these differences. In this review of recent progress, we identify three hypotheses that may help to guide further investigations into the structure and function of circuits for remapping.
Collapse
Affiliation(s)
- Hrishikesh M Rao
- Department of Biomedical Engineering, Pratt School of Engineering, Duke University, Durham, North Carolina
| | - J Patrick Mayo
- Department of Neurobiology, Duke School of Medicine, Duke University, Durham, North Carolina
| | - Marc A Sommer
- Department of Biomedical Engineering, Pratt School of Engineering, Duke University, Durham, North Carolina; Department of Neurobiology, Duke School of Medicine, Duke University, Durham, North Carolina; Center for Cognitive Neuroscience, Duke University, Durham, North Carolina
| |
Collapse
|
50
|
Szinte M, Jonikaitis D, Rolfs M, Cavanagh P, Deubel H. Presaccadic motion integration between current and future retinotopic locations of attended objects. J Neurophysiol 2016; 116:1592-1602. [PMID: 27385792 DOI: 10.1152/jn.00171.2016] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2016] [Accepted: 07/05/2016] [Indexed: 11/22/2022] Open
Abstract
Object tracking across eye movements is thought to rely on presaccadic updating of attention between the object's current and its "remapped" location (i.e., the postsaccadic retinotopic location). We report evidence for a bifocal, presaccadic sampling between these two positions. While preparing a saccade, participants viewed four spatially separated random dot kinematograms, one of which was cued by a colored flash. They reported the direction of a coherent motion signal at the cued location while a second signal occurred simultaneously either at the cue's remapped location or at one of several control locations. Motion integration between the signals occurred only when the two motion signals were congruent and were shown at the cue and at its remapped location. This shows that the visual system integrates features between both the current and the future retinotopic locations of an attended object and that such presaccadic sampling is feature specific.
Collapse
Affiliation(s)
- Martin Szinte
- Allgemeine und Experimentelle Psychologie, Ludwig-Maximilians-Universität München, Munich, Germany
| | - Donatas Jonikaitis
- Allgemeine und Experimentelle Psychologie, Ludwig-Maximilians-Universität München, Munich, Germany
| | - Martin Rolfs
- Bernstein Center for Computational Neuroscience and Department of Psychology, Humboldt Universität zu Berlin, Berlin, Germany
| | - Patrick Cavanagh
- Laboratoire Psychologie de la Perception, Université Paris Descartes and Centre National de la Recherche Scientifique (UMR 8242), Paris, France; and Department of Psychological and Brain Sciences, Dartmouth College, Hanover, New Hampshire
| | - Heiner Deubel
- Allgemeine und Experimentelle Psychologie, Ludwig-Maximilians-Universität München, Munich, Germany
| |
Collapse
|