1
Steinberg NJ, Roth ZN, Movshon JA, Merriam E. Brain representations of motion and position in the double-drift illusion. eLife 2024; 13:e76803. [PMID: 38809774] [PMCID: PMC11136492] [DOI: 10.7554/elife.76803]
Abstract
In the 'double-drift' illusion, local motion within a window moving in the periphery of the visual field alters the window's perceived path. The illusion is strong even when the eyes track a target whose motion matches the window so that the stimulus remains stable on the retina. This implies that the illusion involves the integration of retinal signals with non-retinal eye-movement signals. To identify where in the brain this integration occurs, we measured BOLD fMRI responses in visual cortex while subjects experienced the double-drift illusion. We then used a combination of univariate and multivariate decoding analyses to identify (1) which brain areas were sensitive to the illusion and (2) whether these brain areas contained information about the illusory stimulus trajectory. We identified a number of cortical areas that responded more strongly during the illusion than a control condition that was matched for low-level stimulus properties. Only in area hMT+ was it possible to decode the illusory trajectory. We additionally performed a number of important controls that rule out possible low-level confounds. Concurrent eye tracking confirmed that subjects accurately tracked the moving target; we were unable to decode the illusion trajectory using eye position measurements recorded during fMRI scanning, ruling out explanations based on differences in oculomotor behavior. Our results provide evidence for a perceptual representation in human visual cortex that incorporates extraretinal information.
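The decoding logic described in this abstract, testing whether multivoxel response patterns in an area carry information about the illusory trajectory, can be illustrated with a minimal sketch. Everything below (trial counts, voxel counts, the nearest-centroid decoder) is an illustrative assumption, not the study's actual analysis pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: 40 "trials" x 50 "voxels", two illusory path directions.
n_trials, n_voxels = 40, 50
labels = np.repeat([0, 1], n_trials // 2)   # leftward vs rightward illusory path
signal = rng.normal(0, 1, n_voxels)         # direction-dependent voxel pattern
X = rng.normal(0, 1, (n_trials, n_voxels))
X[labels == 1] += 0.8 * signal              # condition 1 carries the pattern

def loo_nearest_centroid(X, y):
    """Leave-one-out nearest-centroid decoding accuracy."""
    correct = 0
    for i in range(len(y)):
        train = np.ones(len(y), bool)
        train[i] = False
        c0 = X[train & (y == 0)].mean(axis=0)
        c1 = X[train & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
        correct += pred == y[i]
    return correct / len(y)

acc = loo_nearest_centroid(X, labels)
print(f"decoding accuracy: {acc:.2f}")      # well above the 0.5 chance level
```

If an area's voxel patterns carry no trajectory information, this cross-validated accuracy stays near chance; above-chance accuracy is the signature the study looked for in hMT+.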
Affiliation(s)
- Noah J Steinberg
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, United States
- Zvi N Roth
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, United States
- School of Psychological Sciences, Faculty of Social Sciences, Tel Aviv University, Tel Aviv, Israel
- Elisha Merriam
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, United States
2
Lu Z, Golomb JD. Dynamic saccade context triggers more stable object-location binding. bioRxiv 2023:2023.04.26.538469. [PMID: 37162863] [PMCID: PMC10168424] [DOI: 10.1101/2023.04.26.538469]
Abstract
Our visual systems rapidly perceive and integrate information about object identities and locations. There is long-standing debate about how we achieve world-centered (spatiotopic) object representations across eye movements, with many studies reporting persistent retinotopic (eye-centered) effects even for higher-level object-location binding. But these studies are generally conducted in fairly static experimental contexts. Might spatiotopic object-location binding only emerge in more dynamic saccade contexts? In the present study, we investigated this using the Spatial Congruency Bias paradigm in healthy adults. In the static (single saccade) context, we found purely retinotopic binding, as before. However, robust spatiotopic binding emerged in the dynamic (multiple frequent saccades) context. We further isolated specific factors that modulate retinotopic and spatiotopic binding. Our results provide strong evidence that dynamic saccade context can trigger more stable object-location binding in ecologically-relevant spatiotopic coordinates, perhaps via a more flexible brain state which accommodates improved visual stability in the dynamic world.
3
Gaglianese A, Fracasso A, Fernandes FG, Harvey B, Dumoulin SO, Petridou N. Mechanisms of speed encoding in the human middle temporal cortex measured by 7T fMRI. Hum Brain Mapp 2023; 44:2050-2061. [PMID: 36637226] [PMCID: PMC9980888] [DOI: 10.1002/hbm.26193]
Abstract
Perception of dynamic scenes in our environment results from the evaluation of visual features such as the fundamental spatial and temporal frequency components of a moving object. The ratio between these two components represents the object's speed of motion. The human middle temporal cortex (hMT+) has a crucial biological role in the direct encoding of object speed. However, the link between hMT+ speed encoding and the spatiotemporal frequency components of a moving object is still underexplored. Here, we recorded high-resolution 7T blood oxygen level-dependent (BOLD) responses to different visual motion stimuli as a function of their fundamental spatial and temporal frequency components. We fitted each hMT+ BOLD response with a 2D Gaussian model allowing for two different speed encoding mechanisms: (1) distinct and independent selectivity for the spatial and temporal frequencies of the visual motion stimuli; (2) pure tuning for the speed of motion. We show that both mechanisms occur but in different neuronal groups within hMT+, with the largest subregion of the complex showing separable tuning for the spatial and temporal frequency of the visual stimuli. Both mechanisms were highly reproducible within participants, reconciling single-cell recordings from MT in animals that have shown both encoding mechanisms. Our findings confirm that a more complex process is involved in the perception of speed than initially thought and suggest that hMT+ plays a primary role in the evaluation of the spatial features of the moving visual input.
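The two candidate mechanisms contrasted here, separable spatial/temporal frequency tuning versus pure speed tuning, can be written down as simple response surfaces in log-frequency space. The functional forms and parameter values below are illustrative assumptions for demonstration, not the paper's fitted 2D Gaussian model:

```python
import numpy as np

# Illustrative tuning surfaces in log2 spatial (sf) / temporal (tf) frequency space.
sf = np.linspace(-2, 4, 61)     # log2 cycles/deg
tf = np.linspace(-1, 5, 61)     # log2 Hz
SF, TF = np.meshgrid(sf, tf)

def separable(SF, TF, sf0=1.0, tf0=2.0, sigma_sf=1.0, sigma_tf=1.0):
    """Independent Gaussian tuning for spatial and temporal frequency."""
    return np.exp(-((SF - sf0) ** 2) / (2 * sigma_sf ** 2)
                  - ((TF - tf0) ** 2) / (2 * sigma_tf ** 2))

def speed_tuned(SF, TF, log_speed0=1.0, sigma=1.0):
    """Tuning for speed: response depends only on tf/sf (TF - SF in log units)."""
    return np.exp(-((TF - SF - log_speed0) ** 2) / (2 * sigma ** 2))

# A speed-tuned unit responds equally to any sf/tf pair lying on the line
# TF - SF = log_speed0; a separable unit does not.
r = speed_tuned(SF, TF)
print(r.max())
```

The diagnostic difference is the orientation of the response surface: speed tuning is constant along diagonals of equal tf/sf, whereas separable tuning peaks at one (sf, tf) point regardless of their ratio.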
Affiliation(s)
- Anna Gaglianese
- The Laboratory for Investigative Neurophysiology (The LINE), Department of Radiology, University Hospital Center and University of Lausanne, Lausanne, Switzerland
- Department of Neurosurgery and Neurology, UMC Utrecht Brain Center, University Medical Center, Utrecht, Netherlands
- Department of Radiology, Center for Image Sciences, University Medical Center, Utrecht, Netherlands
- Alessio Fracasso
- Department of Radiology, Center for Image Sciences, University Medical Center, Utrecht, Netherlands
- School of Psychology and Neuroscience, University of Glasgow, Glasgow, UK
- Spinoza Center for Neuroimaging, Amsterdam, Netherlands
- Francisco G. Fernandes
- Department of Neurosurgery and Neurology, UMC Utrecht Brain Center, University Medical Center, Utrecht, Netherlands
- Ben Harvey
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands
- Serge O. Dumoulin
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands
- Natalia Petridou
- Department of Radiology, Center for Image Sciences, University Medical Center, Utrecht, Netherlands
4
Zhao Z, Ahissar E, Victor JD, Rucci M. Inferring visual space from ultra-fine extra-retinal knowledge of gaze position. Nat Commun 2023; 14:269. [PMID: 36650146] [PMCID: PMC9845343] [DOI: 10.1038/s41467-023-35834-4]
Abstract
It has long been debated how humans resolve fine details and perceive a stable visual world despite the incessant fixational motion of their eyes. Current theories assume these processes to rely solely on the visual input to the retina, without contributions from motor and/or proprioceptive sources. Here we show that contrary to this widespread assumption, the visual system has access to high-resolution extra-retinal knowledge of fixational eye motion and uses it to deduce spatial relations. Building on recent advances in gaze-contingent display control, we created a spatial discrimination task in which the stimulus configuration was entirely determined by oculomotor activity. Our results show that humans correctly infer geometrical relations in the absence of spatial information on the retina and accurately combine high-resolution extraretinal monitoring of gaze displacement with retinal signals. These findings reveal a sensory-motor strategy for encoding space, in which fine oculomotor knowledge is used to interpret the fixational input to the retina.
Affiliation(s)
- Zhetuo Zhao
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Center for Visual Science, University of Rochester, Rochester, NY, USA
- Ehud Ahissar
- Department of Brain Sciences, Weizmann Institute of Science, Rehovot, Israel
- Jonathan D Victor
- Feil Family Brain and Mind Research Institute, Weill Cornell Medical College, New York, NY, USA
- Michele Rucci
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Center for Visual Science, University of Rochester, Rochester, NY, USA
5
Yoshimatsu H, Murai Y, Yotsumoto Y. Effect of luminance signal and perceived speed on motion-related duration distortions. Vision Res 2022; 198:108070. [DOI: 10.1016/j.visres.2022.108070]
6
Serial dependence for oculomotor control depends on early sensory signals. Curr Biol 2022; 32:2956-2961.e3. [PMID: 35640623] [DOI: 10.1016/j.cub.2022.05.011]
Abstract
To create an accurate percept of the world, the visual system relies on past experience and prior assumptions [1]. For example, although the retinal projection of an object moving in depth changes drastically, we still perceive the object at a constant size and velocity [2,3]. Consequently, if we see the same object with a constant retinal size at two different depth levels, the perceived size differs (illustrated by the Ponzo illusion). Past experience also directly influences perceptual judgments, an effect known as serial dependence [4,5]. Such sequential effects have also been reported for oculomotor behavior, even on the trial-by-trial level [6-10]. An integration of past experiences seems like a smart and sophisticated mechanism to reduce uncertainty and improve behavior in a world full of statistical regularities. By leveraging the Ponzo illusion to dissociate perceived size and speed from retinal signals, we show that serial-dependence effects for oculomotor control are mediated by retinal error signals. These sequential effects likely take place in early sensory processing because they transfer to different visual stimuli. In contrast to recently reported history effects for perceptual decisions [11], sequential effects for oculomotor control deviate from perceptual mechanisms by not integrating spatial context and by ignoring size and velocity constancy. Although this dissociation might appear suboptimal, we argue that this effect reveals the different goals of the oculomotor and perceptual systems. The oculomotor system tries to reduce retinal error signals to bring and keep the target close to the fovea, whereas the visual system interprets retinal input to achieve an accurate representation of the world [12].
7
Steinberg NJ, Roth ZN, Merriam EP. Spatiotopic and retinotopic memory in the context of natural images. J Vis 2022; 22:11. [PMID: 35323869] [PMCID: PMC8963666] [DOI: 10.1167/jov.22.4.11]
Abstract
Neural responses throughout the visual cortex encode stimulus location in a retinotopic (i.e., eye-centered) reference frame, and memory for stimulus position is most precise in retinal coordinates. Yet visual perception is spatiotopic: objects are perceived as stationary, even though eye movements cause frequent displacement of their location on the retina. Previous studies found that, after a single saccade, memory of retinotopic locations is more accurate than memory of spatiotopic locations. However, it is not known whether various aspects of natural viewing affect the retinotopic reference frame advantage. We found that the retinotopic advantage may in part depend on a retinal afterimage, which can be effectively nullified through backwards masking. Moreover, in the presence of natural scenes, spatiotopic memory is more accurate than retinotopic memory, but only when subjects are provided sufficient time to process the scene before the eye movement. Our results demonstrate that retinotopic memory is not always more accurate than spatiotopic memory and that the fidelity of memory traces in both reference frames is sensitive to the presence of contextual cues.
Affiliation(s)
- Noah J Steinberg
- Laboratory of Brain and Cognition, National Institute of Mental Health, NIH, Bethesda, MD, USA
- Zvi N Roth
- Laboratory of Brain and Cognition, National Institute of Mental Health, NIH, Bethesda, MD, USA
- Elisha P Merriam
- Laboratory of Brain and Cognition, National Institute of Mental Health, NIH, Bethesda, MD, USA
8
Yoshimoto S, Hayasaka T. Common and independent processing of visual motion perception and oculomotor response. J Vis 2022; 22:6. [PMID: 35293955] [PMCID: PMC8944401] [DOI: 10.1167/jov.22.4.6]
Abstract
Visual motion signals are used not only to drive motion perception but also to elicit oculomotor responses. A fundamental question is whether perceptual and oculomotor processing of motion signals shares a common mechanism. This study aimed to address this question using visual motion priming, in which the perceived direction of a directionally ambiguous stimulus is biased in the same (positive priming) or opposite (negative priming) direction as that of a priming stimulus. The priming effect depends on the duration of the priming stimulus: it is assumed that positive and negative priming are mediated by high- and low-level motion systems, respectively. Participants were asked to judge the perceived direction of a π-phase-shifted test grating presented after a smoothly drifting priming grating of varied duration. Their eye movements were measured while the test grating was presented. Perception and eye movements were discrepant under positive priming and correlated under negative priming on a trial-by-trial basis when an interstimulus interval was inserted between the priming and test stimuli, indicating that the eye movements were evoked by the test stimulus per se. These findings suggest that perceptual and oculomotor responses are induced by a common mechanism at a low level of motion processing but by independent mechanisms at a high level of motion processing.
Affiliation(s)
- Sanae Yoshimoto
- School of Integrated Arts and Sciences, Hiroshima University, Hiroshima, Japan
- Tomoyuki Hayasaka
- School of Integrated Arts and Sciences, Hiroshima University, Hiroshima, Japan
9
Dreneva A, Chernova U, Ermolova M, MacInnes WJ. Attention Trade-Off for Localization and Saccadic Remapping. Vision (Basel) 2021; 5:vision5020024. [PMID: 34065173] [PMCID: PMC8163179] [DOI: 10.3390/vision5020024]
Abstract
Predictive remapping may be the principal mechanism of maintaining visual stability, and attention is crucial for this process. We aimed to investigate the role of attention in predictive remapping in a dual-task paradigm with two conditions, with and without saccadic remapping. The first task was to remember the clock hand position either after a saccade to the clock face (saccade condition requiring remapping) or after the clock was displaced to the fixation point (fixation condition with no saccade). The second task was to report the remembered location of a dot shown peripherally in the upper screen for 1 s. We predicted that performance in the two tasks would interfere in the saccade condition, but not in the fixation condition, because of the attentional demands of remapping with the saccade. For the clock estimation task, answers in the saccadic trials tended to underestimate the actual position by approximately 37 ms, while responses in the fixation trials were closer to veridical. As predicted, the findings also revealed a significant interaction between the two tasks, showing decreased accuracy in the clock task for increased error in the localization task, but only for the saccadic condition. Taken together, these results point to the key role of attention in predictive remapping.
Affiliation(s)
- Anna Dreneva
- Faculty of Psychology, Lomonosov Moscow State University, 125009 Moscow, Russia
- Ulyana Chernova
- Vision Modelling Laboratory, Faculty of Social Science, HSE University, 101000 Moscow, Russia
- School of Psychology, HSE University, 101000 Moscow, Russia
- Maria Ermolova
- School of Psychology, HSE University, 101000 Moscow, Russia
- Department of Neurology & Stroke, Hertie Institute for Clinical Brain Research, University of Tübingen, 72074 Tübingen, Germany
- William Joseph MacInnes
- Vision Modelling Laboratory, Faculty of Social Science, HSE University, 101000 Moscow, Russia
- School of Psychology, HSE University, 101000 Moscow, Russia
10
Ge Y, Sun Z, Qian C, He S. Spatiotopic updating across saccades in the absence of awareness. J Vis 2021; 21:7. [PMID: 33961004] [PMCID: PMC8114003] [DOI: 10.1167/jov.21.5.7]
Abstract
Despite the continuously changing visual inputs caused by eye movements, our perceptual representation of the visual world remains remarkably stable. Visual stability has been a major area of interest within the field of visual neuroscience. The early visual cortical areas are retinotopically organized, and presumably there is a retinotopic-to-spatiotopic transformation process that supports the stable representation of the visual world. In this study, we used a cross-saccadic adaptation paradigm to show that both orientation adaptation and face gender adaptation could still be observed at the same spatiotopic (but different retinotopic) locations even when the adapting stimuli were rendered invisible. These results suggest that awareness of a visual object is not required for its transformation from the retinotopic to the spatiotopic reference frame.
Affiliation(s)
- Yijun Ge
- State Key Lab of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- Vision and Attention Lab, Department of Psychology, University of Minnesota, MN, USA
- Zhouyuan Sun
- State Key Lab of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- Department of Neurosurgery, Huazhong University of Science and Technology Union Shenzhen Hospital, Shenzhen, Guangdong, China
- The 6th Affiliated Hospital of Shenzhen University Health Science Center, Shenzhen, Guangdong, China
- Chencan Qian
- State Key Lab of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Sheng He
- State Key Lab of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- Vision and Attention Lab, Department of Psychology, University of Minnesota, MN, USA
- Chinese Academy of Sciences, Center for Excellence in Brain Science and Intelligence Technology, Shanghai, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
11
Neural Representations of Covert Attention across Saccades: Comparing Pattern Similarity to Shifting and Holding Attention during Fixation. eNeuro 2021; 8:ENEURO.0186-20.2021. [PMID: 33558269] [PMCID: PMC8026251] [DOI: 10.1523/eneuro.0186-20.2021]
Abstract
We can focus visuospatial attention by covertly attending to relevant locations, moving our eyes, or both simultaneously. How does shifting versus holding covert attention during fixation compare with maintaining covert attention across saccades? We acquired human fMRI data during a combined saccade and covert attention task. On Eyes-fixed trials, participants either held attention at the same initial location (“hold attention”) or shifted attention to another location midway through the trial (“shift attention”). On Eyes-move trials, participants made a saccade midway through the trial, while maintaining attention in one of two reference frames: the “retinotopic attention” condition involved holding attention at a fixation-relative location but shifting to a different screen-centered location, whereas the “spatiotopic attention” condition involved holding attention on the same screen-centered location but shifting relative to fixation. We localized the brain network sensitive to attention shifts (shift > hold attention), and used multivoxel pattern time course (MVPTC) analyses to investigate the patterns of brain activity for spatiotopic and retinotopic attention across saccades. In the attention shift network, we found transient information about both whether covert shifts were made and whether saccades were executed. Moreover, in this network, both retinotopic and spatiotopic conditions were represented more similarly to shifting than to holding covert attention. An exploratory searchlight analysis revealed additional regions where spatiotopic was relatively more similar to shifting and retinotopic more to holding. Thus, maintaining retinotopic and spatiotopic attention across saccades may involve different types of updating that vary in similarity to covert attention “hold” and “shift” signals across different regions.
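The core of the pattern-similarity comparison described here is correlating the multivoxel response pattern of each saccade condition with the "shift" and "hold" attention patterns. The sketch below uses synthetic voxel data and a plain Pearson correlation; it is an illustrative assumption, not the study's MVPTC pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)
n_voxels = 100
shift = rng.normal(size=n_voxels)   # synthetic "shift attention" voxel pattern
hold = rng.normal(size=n_voxels)    # synthetic "hold attention" voxel pattern

# Construct a condition pattern that resembles "shift" more than "hold".
spatiotopic = 0.7 * shift + 0.2 * hold + 0.3 * rng.normal(size=n_voxels)

def pattern_r(a, b):
    """Pearson correlation between two voxel patterns."""
    return float(np.corrcoef(a, b)[0, 1])

r_shift = pattern_r(spatiotopic, shift)
r_hold = pattern_r(spatiotopic, hold)
print(r_shift > r_hold)   # this condition is "more similar to shifting"
```

Comparing such correlations across conditions is what licenses statements like "spatiotopic was relatively more similar to shifting and retinotopic more to holding."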
12
Predictive remapping leaves a behaviorally measurable attentional trace on eye-centered brain maps. Psychon Bull Rev 2021; 28:1243-1251. [PMID: 33634356] [DOI: 10.3758/s13423-021-01893-1]
Abstract
How does the brain maintain spatial attention despite the retinal displacement of objects by saccades? A possible solution is to use the vector of an upcoming saccade to compensate for the shift of objects on eye-centered (retinotopic) brain maps. In support of this hypothesis, previous studies have revealed attentional effects at the future retinal locus of an attended object, just before the onset of saccades. A critical yet unresolved theoretical issue is whether predictively remapped attentional effects persist long enough on eye-centered brain maps that no external input (goal, expectation, reward, memory, etc.) is needed to maintain spatial attention immediately following saccades. The present study examined this issue with inhibition of return (IOR), an attentional effect that reveals itself in both world-centered and eye-centered coordinates, and predictively remaps before saccades. In the first task, a saccade was introduced to a cueing task (the "nonreturn-saccade" task) to show that IOR is coded in world-centered coordinates following saccades. In a second cueing task, two consecutive saccades were executed to trigger remapping and to dissociate the retinal locus relevant to remapping from the cued retinal locus (the "return-saccade" task). IOR was observed at the remapped retinal locus 430 ms after the (first) saccade that triggered remapping. A third cueing task (the "no-remapping" task) further revealed that the lingering IOR effect left by remapping was not confounded by attentional spillover. These results together show that predictive remapping leaves a robust attentional trace on eye-centered brain maps. This retinotopic trace is sufficient to sustain spatial attention for a few hundred milliseconds following saccades.
13
Sauer Y, Wahl S, Rifai K. Parallel Adaptation to Spatially Distinct Distortions. Front Psychol 2020; 11:544867. [PMID: 33329178] [PMCID: PMC7715010] [DOI: 10.3389/fpsyg.2020.544867]
Abstract
Optical distortions as a visual disturbance are inherent in many optical devices such as spectacles or virtual reality headsets. In such devices, distortions vary spatially across the visual field. In progressive addition lenses, for example, the left and right regions of the lens skew the peripheral parts of the wearer's visual field in opposing directions. The human visual system adapts to homogeneous distortions, and the respective aftereffects transfer to non-retinotopic locations. This study investigates simultaneous adaptation to two opposing distortions at different retinotopic locations. Two oppositely skewed natural image sequences were presented to 10 subjects as adaptation stimuli at two distinct locations in the visual field. To keep these locations fixed on the retina, subjects were instructed to maintain fixation on a target, with eye tracking used for gaze control. Change of perceived motion direction was measured in a direction-identification task. The point of subjective equality (PSE), that is, the angle at which a group of coherently moving dots was perceived as moving horizontally, was determined for both retinal locations. The shift of perceived motion direction was evaluated by comparing the PSE before and after adaptation. A significant shift at both retinal locations, in the direction of the skew distortion of the corresponding adaptation stimulus, was demonstrated. Consequently, this study confirms parallel adaptation to two opposing distortions in a retinotopic reference frame.
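A PSE like the one measured here is typically obtained by fitting a psychometric function to direction-identification responses and reading off the 50% point. Below is a minimal sketch with a simulated observer; the tested angles, trial counts, and the 3-degree post-adaptation shift are illustrative assumptions, not data from the study:

```python
import numpy as np
from math import erf

# Simulated direction-identification data: proportion "upward" responses
# as a function of motion angle (deg from physical horizontal).
angles = np.linspace(-10, 10, 9)
n_rep = 40
true_pse, true_sigma = 3.0, 4.0      # e.g., a 3-deg shift after adaptation

def p_up(x, mu, sigma):
    """Cumulative-Gaussian psychometric function."""
    return 0.5 * (1 + erf((x - mu) / (sigma * np.sqrt(2))))

rng = np.random.default_rng(1)
k_up = np.array([rng.binomial(n_rep, p_up(a, true_pse, true_sigma)) for a in angles])

# Grid-search maximum-likelihood fit; the PSE is the fitted mean mu.
mus = np.linspace(-8, 8, 161)
sigmas = np.linspace(0.5, 10, 96)
best, best_ll = None, -np.inf
for mu in mus:
    for sg in sigmas:
        p = np.clip([p_up(a, mu, sg) for a in angles], 1e-9, 1 - 1e-9)
        ll = np.sum(k_up * np.log(p) + (n_rep - k_up) * np.log(1 - p))
        if ll > best_ll:
            best, best_ll = (mu, sg), ll

pse, sigma_hat = best
print(f"estimated PSE: {pse:.1f} deg")   # close to the simulated 3-deg shift
```

Comparing such fitted PSEs before and after adaptation, separately at each retinal location, is how a retinotopic adaptation shift is quantified.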
Affiliation(s)
- Yannick Sauer
- Institute for Ophthalmic Research, University of Tuebingen, Tuebingen, Germany
- Siegfried Wahl
- Institute for Ophthalmic Research, University of Tuebingen, Tuebingen, Germany
- Carl Zeiss Vision International GmbH, Aalen, Germany
- Katharina Rifai
- Institute for Ophthalmic Research, University of Tuebingen, Tuebingen, Germany
- Carl Zeiss Vision International GmbH, Aalen, Germany
14
Brief localised monocular deprivation in adults alters binocular rivalry predominance retinotopically and reduces spatial inhibition. Sci Rep 2020; 10:18739. [PMID: 33127963] [PMCID: PMC7603489] [DOI: 10.1038/s41598-020-75252-w]
Abstract
Short-term deprivation (2.5 h) of an eye has been shown to boost its relative ocular dominance in young adults. Here, we show that a much shorter deprivation period (3–6 min) produces a similar paradoxical boost that is retinotopic and reduces spatial inhibition on neighbouring, non-deprived areas. Partial deprivation was conducted in the left hemifield, central vision or in an annular region, later assessed with a binocular rivalry tracking procedure. Post-deprivation, dominance of the deprived eye increased when rivalling images were within the deprived retinotopic region, but not within neighbouring, non-deprived areas where dominance was dependent on the correspondence between the orientation content of the stimuli presented in the deprived and that of the stimuli presented in non-deprived areas. Together, these results accord with other deprivation studies showing V1 activity changes and reduced GABAergic inhibition.
15
Ayhan I, Ozbagci D. Action-induced changes in the perceived temporal features of visual events. Vision Res 2020; 175:1-13. [PMID: 32623245] [DOI: 10.1016/j.visres.2020.05.008]
Abstract
Perceived duration can be subject to deviations around the time of a voluntary action. Whether the mechanisms underlying action-induced visual duration effects are effector-specific or require a more generalized action-linked multimodal calibration with the transient visual system, however, is a question yet to be answered. Here, we investigate this using dynamic visual stimuli presented as contingent upon the execution of an arbitrarily associated voluntary manual response. Our results demonstrate that the duration of intervals containing an arbitrarily associated keypress-visual event pair is perceived as shorter than the duration in a pure visual condition, where the same stimuli are passively observed without the execution of a concurrent action. Control experiments show that motor memory and attention cannot explain the action-induced changes in perceived temporal features, that action-induced changes in perceived speed are dissociated from those in perceived duration, and that the duration compression disappears with isoluminant or static stimuli. Together, these findings provide evidence that the two effects can be modulated in motion-processing units, albeit via separate neural mechanisms.
Affiliation(s)
- Inci Ayhan
- Department of Psychology, Bogazici University, Istanbul, Turkey
- Cognitive Science Program, Bogazici University, Istanbul, Turkey
- Duygu Ozbagci
- Cognitive Science Program, Bogazici University, Istanbul, Turkey
16
Van der Stoep N, Alais D. Motion Perception: Auditory Motion Encoded in a Visual Motion Area. Curr Biol 2020; 30:R775-R778. [DOI: 10.1016/j.cub.2020.05.010]
17
Spatial congruency bias in identifying objects is triggered by retinal position congruence: Examination using the Ternus-Pikler illusion. Sci Rep 2020; 10:4630. [PMID: 32170153 PMCID: PMC7070042 DOI: 10.1038/s41598-020-61698-5]
Abstract
When two different objects are sequentially presented at the same location, the viewer tends to misjudge them as identical (the spatial congruency bias). The present study used the Ternus-Pikler illusion to examine whether the spatial congruency bias involves not only retinotopic but also non-retinotopic processing. In the experiments, two objects (central and peripheral) appeared in an initial frame. The target object was presented in the central area of the display, while the peripheral object was either on the left or right side of the target. In the second frame, the target object was again presented in the central area, and the peripheral object was on the opposite side. Two inter-stimulus intervals (ISIs) were used. In the no-blank condition, the target object was perceived as stationary, and the peripheral object appeared to move to the opposite side. In the long-blank condition, however, the two objects were perceived to move together. Participants judged whether the target objects in the two frames were identical. The spatial congruency bias occurred irrespective of the ISI condition. Our findings suggest that the spatial congruency bias is based mainly on retinotopic processing.
18
Abstract
Humans are able to integrate pre- and postsaccadic percepts of an object across saccades to maintain perceptual stability. Previous studies have used Maximum Likelihood Estimation (MLE) to determine that integration occurs in a near-optimal manner. Here, we compared three different models to investigate the mechanism of integration in more detail: an early noise model, where noise is added to the pre- and postsaccadic signals before integration occurs; a late-noise model, where noise is added to the integrated signal after integration occurs; and a temporal summation model, where integration benefits arise from the longer transsaccadic presentation duration compared to pre- and postsaccadic presentation only. We also measured spatiotemporal aspects of integration to determine whether integration can occur for very brief stimulus durations, across two hemifields, and in spatiotopic and retinotopic coordinates. Pre-, post-, and transsaccadic performance was measured at different stimulus presentation durations, both at the saccade target and a location where the pre- and postsaccadic stimuli were presented in different hemifields across the saccade. Results showed that for both within- and between-hemifields conditions, integration could occur when pre- and postsaccadic stimuli were presented only briefly, and that the pattern of integration followed an early noise model. Whereas integration occurred when the pre- and post-saccadic stimuli were presented in the same spatiotopic coordinates, there was no integration when they were presented in the same retinotopic coordinates. This contrast suggests that transsaccadic integration is limited by early, independent, sensory noise acting separately on pre- and postsaccadic signals.
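The optimal-integration benchmark that these models are compared against follows directly from maximum likelihood estimation: the reliabilities (inverse variances) of the pre- and postsaccadic estimates add. A minimal sketch of the predicted discrimination thresholds under the early- and late-noise accounts (function names and noise parameters are illustrative assumptions, not taken from the study):

```python
import numpy as np

def mle_combined_sd(sd_pre, sd_post):
    """Maximum-likelihood (reliability-weighted) prediction for the
    transsaccadic threshold: reliabilities (1/sd^2) of the pre- and
    postsaccadic estimates add."""
    return np.sqrt(1.0 / (1.0 / sd_pre**2 + 1.0 / sd_post**2))

def early_noise_sd(sd_pre, sd_post, sd_early):
    # Independent sensory noise corrupts each signal *before* integration.
    return mle_combined_sd(np.hypot(sd_pre, sd_early),
                           np.hypot(sd_post, sd_early))

def late_noise_sd(sd_pre, sd_post, sd_late):
    # A single noise source corrupts the estimate *after* integration.
    return np.hypot(mle_combined_sd(sd_pre, sd_post), sd_late)

# With equal pre- and postsaccadic thresholds, optimal integration
# predicts an improvement by a factor of sqrt(2).
print(round(mle_combined_sd(2.0, 2.0), 3))  # 1.414
```

Both noise models predict thresholds above the noiseless MLE bound; they differ in where the extra variance enters, which is what the authors' model comparison exploits.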
Affiliation(s)
- Emma E M Stewart
- Experimental and Biological Psychology, University of Marburg, Marburg, Germany
- Alexander C Schütz
- Experimental and Biological Psychology, University of Marburg, Marburg, Germany
19
Memory for retinotopic locations is more accurate than memory for spatiotopic locations, even for visually guided reaching. Psychon Bull Rev 2019; 25:1388-1398. [PMID: 29159799 DOI: 10.3758/s13423-017-1401-x]
Abstract
To interact successfully with objects, we must maintain stable representations of their locations in the world. However, their images on the retina may be displaced several times per second by large, rapid eye movements. A number of studies have demonstrated that visual processing is heavily influenced by gaze-centered (retinotopic) information, including a recent finding that memory for an object's location is more accurate and precise in gaze-centered (retinotopic) than world-centered (spatiotopic) coordinates (Golomb & Kanwisher, 2012b). This effect is somewhat surprising, given our intuition that behavior is successfully guided by spatiotopic representations. In the present experiment, we asked whether the visual system may rely on a more spatiotopic memory store depending on the mode of responding. Specifically, we tested whether reaching toward and tapping directly on an object's location could improve memory for its spatiotopic location. Participants performed a spatial working memory task under four conditions: retinotopic vs. spatiotopic task, and computer mouse click vs. touchscreen reaching response. When participants responded by clicking with a mouse on the screen, we replicated Golomb & Kanwisher's original results, finding that memory was more accurate in retinotopic than spatiotopic coordinates and that the accuracy of spatiotopic memory deteriorated substantially more than retinotopic memory with additional eye movements during the memory delay. Critically, we found the same pattern of results when participants responded by using their finger to reach and tap the remembered location on the monitor. These results further support the hypothesis that spatial memory is natively retinotopic; we found no evidence that engaging the motor system improves spatiotopic memory across saccades.
20
Yoshimoto S, Takeuchi T. Effect of spatial attention on spatiotopic visual motion perception. J Vis 2019; 19:4. [PMID: 30943532 DOI: 10.1167/19.4.4]
Abstract
We almost never experience visual instability, despite retinal image instability induced by eye movements. How the stability of visual perception is maintained through spatiotopic representation remains a matter of debate. The discrepancies observed in the findings of existing neuroscience studies regarding spatiotopic representation partly originate from differences in regard to how attention is deployed to stimuli. In this study, we psychophysically examined whether spatial attention is needed to perceive spatiotopic visual motion. For this purpose, we used visual motion priming, which is a phenomenon in which a preceding priming stimulus modulates the perceived moving direction of an ambiguous test stimulus, such as a drifting grating that phase shifts by 180°. To examine the priming effect in different coordinates, participants performed a saccade soon after the offset of a primer. The participants were tasked with judging the direction of a subsequently presented test stimulus. To control the effect of spatial attention, the participants were asked to conduct a concurrent dot contrast-change detection task after the saccade. Positive priming was prominent in spatiotopic conditions, whereas negative priming was dominant in retinotopic conditions. At least a 600-ms interval between the priming and test stimuli was needed to observe positive priming in spatiotopic coordinates. When spatial attention was directed away from the location of the test stimulus, spatiotopic positive motion priming completely disappeared; meanwhile, the spatiotopic positive motion priming at shorter interstimulus intervals was enhanced when spatial attention was directed to the location of the test stimulus. These results provide evidence that an attentional resource is requisite for developing spatiotopic representation more quickly.
Affiliation(s)
- Sanae Yoshimoto
- Graduate School of Integrated Arts and Sciences, Hiroshima University, Hiroshima, Japan
- Tatsuto Takeuchi
- Department of Psychology, Japan Women's University, Kanagawa, Japan
21
Abstract
Both adaptation and perceptual learning can change how we perceive the visual environment, reflecting the plasticity of the visual system. Our previous work investigated the interaction between these two aspects of visual plasticity. One of the main findings was that multiple days of repeated motion adaptation attenuate the motion aftereffect, which is explained by habituation of motion adaptation. Interestingly, there was almost no transfer of the effect to an untrained adapter that differed from the trained adapter in features including retinotopic location, spatiotopic location, and motion direction. Given that the reference frame of the motion aftereffect is proposed to be retinotopic, it remained unclear whether the effect we refer to as habituation of motion adaptation is simply a special type of motion adaptation. Therefore, in three experiments, we examined the roles of retinotopic location, spatiotopic location, and motion direction in the transfer of habituation. In each experiment, only one of these features was kept the same for the trained and untrained conditions. We found that the habituation effect transferred across both retinotopic and spatiotopic locations as long as the adapting direction remained the same. The findings indicate that the habituation effect of motion adaptation is anchored neither in eye-centered (retinotopic) nor world-centered (spatiotopic) coordinates. Rather, it is specific to the direction of the adapter. Therefore, the habituation effect of motion adaptation cannot be ascribed to a variant of motion adaptation.
Affiliation(s)
- Xue Dong
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Min Bao
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China; State Key Laboratory of Brain and Cognitive Science, Beijing, China
22
van der Groen O, Tang MF, Wenderoth N, Mattingley JB. Stochastic resonance enhances the rate of evidence accumulation during combined brain stimulation and perceptual decision-making. PLoS Comput Biol 2018; 14:e1006301. [PMID: 30020922 PMCID: PMC6066257 DOI: 10.1371/journal.pcbi.1006301]
Abstract
Perceptual decision-making relies on the gradual accumulation of noisy sensory evidence. It is often assumed that such decisions are degraded by adding noise to a stimulus, or to the neural systems involved in the decision making process itself. But it has been suggested that adding an optimal amount of noise can, under appropriate conditions, enhance the quality of subthreshold signals in nonlinear systems, a phenomenon known as stochastic resonance. Here we asked whether perceptual decisions made by human observers obey these stochastic resonance principles, by adding noise directly to the visual cortex using transcranial random noise stimulation (tRNS) while participants judged the direction of coherent motion in random-dot kinematograms presented at the fovea. We found that adding tRNS bilaterally to visual cortex enhanced decision-making when stimuli were just below perceptual threshold, but not when they were well below or above threshold. We modelled the data under a drift diffusion framework, and showed that bilateral tRNS selectively increased the drift rate parameter, which indexes the rate of evidence accumulation. Our study is the first to provide causal evidence that perceptual decision-making is susceptible to a stochastic resonance effect induced by tRNS, and to show that this effect arises from selective enhancement of the rate of evidence accumulation for sub-threshold sensory events.
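The drift-diffusion account can be made concrete with a toy simulation: evidence accumulates noisily toward one of two bounds, and raising only the drift-rate parameter (the change the authors attribute to bilateral tRNS at near-threshold contrast) increases choice accuracy. This is an illustrative sketch, not the authors' fitting code; all parameter values are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ddm_accuracy(drift, n_trials=2000, bound=1.0, noise_sd=1.0,
                 dt=0.005, max_steps=4000):
    """Euler simulation of a two-boundary drift-diffusion model; returns
    the proportion of trials absorbed at the correct (upper) bound."""
    x = np.zeros(n_trials)
    done = np.zeros(n_trials, dtype=bool)
    hit_upper = np.zeros(n_trials, dtype=bool)
    for _ in range(max_steps):
        active = ~done
        if not active.any():
            break
        x[active] += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal(active.sum())
        hit_upper |= active & (x >= bound)
        done |= active & ((x >= bound) | (x <= -bound))
    return hit_upper.mean()

# Raising the drift rate alone, with bound and diffusion noise fixed,
# increases the fraction of correct decisions.
low_drift, high_drift = ddm_accuracy(0.5), ddm_accuracy(1.5)
print(low_drift < high_drift)  # True
```

For this model the analytic accuracy is 1 / (1 + exp(-2·drift·bound / noise²)), so the simulated proportions should sit near 0.73 and 0.95 for the two drift rates used here.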
Affiliation(s)
- Onno van der Groen
- Neural Control of Movement Laboratory, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
- Matthew F. Tang
- Queensland Brain Institute, The University of Queensland, St Lucia, Queensland, Australia
- Nicole Wenderoth
- Neural Control of Movement Laboratory, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
- Jason B. Mattingley
- Queensland Brain Institute, The University of Queensland, St Lucia, Queensland, Australia
- School of Psychology, The University of Queensland, St Lucia, Queensland, Australia
23
Nau M, Schindler A, Bartels A. Real-motion signals in human early visual cortex. Neuroimage 2018; 175:379-387. [PMID: 29649561 DOI: 10.1016/j.neuroimage.2018.04.012]
Abstract
Eye movements induce visual motion that can complicate the stable perception of the world. The visual system compensates for such self-induced visual motion by integrating visual input with efference copies of eye movement commands. This mechanism is central, as it not only supports perceptual stability but also mediates reliable perception of world-centered objective motion. In humans, it remains elusive whether visual motion responses in early retinotopic cortex are driven by objective motion or by the retinal motion associated with it. To address this question, we used fMRI to examine functional responses of sixteen visual areas to combinations of planar objective motion and pursuit eye movements. Observers were exposed to objective motion that was faster, matched, or slower relative to pursuit, allowing us to compare conditions that differed in objective motion velocity while retinal motion and eye movement signals were matched. Our results show that not only higher-level motion regions such as V3A and V6, but also early visual areas signaled the velocity of objective motion, hence the product of integrating retinal with non-retinal signals. These results shed new light on mechanisms that mediate perceptual stability and real-motion perception, and show that extra-retinal signals related to pursuit eye movements influence processing in human early visual cortex.
Affiliation(s)
- Matthias Nau
- Kavli Institute for Systems Neuroscience, Centre for Neural Computation, Trondheim, Norway; Egil & Pauline Braathen and Fred Kavli Centre for Cortical Microcircuits, Trondheim, Norway; Norwegian University of Science and Technology, Trondheim, Norway
- Andreas Schindler
- Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany; Department of Psychology, University of Tübingen, Tübingen, Germany; Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Andreas Bartels
- Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany; Department of Psychology, University of Tübingen, Tübingen, Germany; Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Bernstein Centre for Computational Neuroscience, Tübingen, Germany.
24
Organization of area hV5/MT+ in subjects with homonymous visual field defects. Neuroimage 2018; 190:254-268. [PMID: 29627591 DOI: 10.1016/j.neuroimage.2018.03.062]
Abstract
Damage to the primary visual cortex (V1) leads to a visual field loss (scotoma) in the retinotopically corresponding part of the visual field. Nonetheless, a small amount of residual visual sensitivity persists within the blind field. This residual capacity has been linked to activity observed in the middle temporal area complex (V5/MT+). However, it remains unknown whether the organization of hV5/MT+ changes following early visual cortical lesions. We studied the organization of area hV5/MT+ of five patients with dense homonymous defects in a quadrant of the visual field as a result of partial V1+ or optic radiation lesions. To do so, we developed a new method, which models the boundaries of population receptive fields directly from the BOLD signal of each voxel in the visual cortex. We found responses in hV5/MT+ arising inside the scotoma for all patients and identified two possible sources of activation: 1) responses might originate from partially lesioned parts of area V1 corresponding to the scotoma, and 2) responses can also originate independent of area V1 input suggesting the existence of functional V1-bypassing pathways. Apparently, visually driven activity observed in hV5/MT+ is not sufficient to mediate conscious vision. More surprisingly, visually driven activity in corresponding regions of V1 and early extrastriate areas including hV5/MT+ did not guarantee visual perception in the group of patients with post-geniculate lesions that we examined. This suggests that the fine coordination of visual activity patterns across visual areas may be an important determinant of whether visual perception persists following visual cortical lesions.
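For context, the conventional population receptive field (pRF) model from which such voxel-wise analyses depart summarizes each voxel by a 2D Gaussian in visual-field coordinates; the predicted response is the overlap between that Gaussian and the stimulus aperture. The sketch below shows only this standard model (the paper introduces a different, boundary-based method), and the grid, bar aperture, and parameter values are invented for illustration.

```python
import numpy as np

# Visual-field grid in degrees of eccentricity.
xs, ys = np.meshgrid(np.linspace(-10, 10, 101), np.linspace(-10, 10, 101))

def prf_prediction(x0, y0, sigma, aperture):
    """Conventional 2D-Gaussian pRF model: the predicted response is the
    overlap of the stimulus aperture with the voxel's Gaussian sensitivity
    profile, normalised by the total Gaussian volume."""
    gauss = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))
    return float((gauss * aperture).sum() / gauss.sum())

# Hypothetical bar aperture covering the left edge of the visual field.
bar_left = (xs < -5).astype(float)

# A pRF centred under the bar responds strongly; one in the opposite
# hemifield barely responds at all.
print(prf_prediction(-7, 0, 1.5, bar_left) > prf_prediction(7, 0, 1.5, bar_left))  # True
```

In a full analysis these predictions are convolved with a hemodynamic response function and fit to each voxel's BOLD time course; scotoma mapping then asks which pRF centres fall inside the blind field.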
25
Heller NH, Davidenko N. Dissociating Higher and Lower Order Visual Motion Systems by Priming Illusory Apparent Motion. Perception 2017; 47:30-43. [PMID: 28893151 DOI: 10.1177/0301006617731007]
Abstract
Motion processing is thought of as a hierarchical system composed of higher and lower order components. Past research has shown that these components can be dissociated using motion priming paradigms in which the lower order system produces negative priming while the higher order system produces positive priming. By manipulating various stimulus parameters, researchers have probed these two systems using bistable test stimuli that permit only two motion interpretations. Here we employ maximally ambiguous test stimuli composed of randomly refreshing pixels in a task that allows observers to report more than just two types of motion percepts. We show that even with such stimuli, motion priming can constrain the unstructured random pixel patterns into coherent percepts of positive or negative apparent motion. Moreover, we find that the higher order system is uniquely susceptible to cognitive influences, as evidenced by a significant suppression of positive priming in the presence of alternative response options.
26
Abstract
The ability to perceive the visual world around us as spatially stable despite frequent eye movements is one of the long-standing mysteries of neuroscience. The existence of neural mechanisms processing spatiotopic information is indispensable for a successful interaction with the external world. However, how the brain handles spatiotopic information remains a matter of debate. Here we combined behavioral and fMRI adaptation to investigate the coding of spatiotopic information in the human brain. Subjects were adapted by prolonged presentation of a tilted grating. Thereafter, they performed a saccade followed by the brief presentation of a probe. This procedure allowed us to dissociate adaptation aftereffects at retinal and spatiotopic positions. We found significant behavioral and functional adaptation at both retinal and spatiotopic positions, indicating information transfer into a spatiotopic coordinate system. The brain regions involved were located in ventral visual areas V3, V4, and VO. Our findings suggest that spatiotopic representations involved in maintaining visual stability are constructed by dynamically remapping visual feature information between retinotopic regions within early visual areas.

SIGNIFICANCE STATEMENT: Why do we perceive the visual world as stable, although we constantly perform saccadic eye movements? We investigated how the visual system codes object locations in spatiotopic (i.e., external world) coordinates. We combined visual adaptation, in which prolonged exposure to a specific visual feature alters perception, with fMRI adaptation, in which the repeated presentation of a stimulus leads to a reduction in the BOLD amplitude. Functionally, adaptation was found not only in visual areas representing the retinal location of an adaptor but also at representations corresponding to its spatiotopic position. The results suggest that an active dynamic shift transports information in visual cortex to counteract the retinal displacement associated with saccadic eye movements.
27
Nishimoto S, Huth AG, Bilenko NY, Gallant JL. Eye movement-invariant representations in the human visual system. J Vis 2017; 17:11. [PMID: 28114479 PMCID: PMC5256465 DOI: 10.1167/17.1.11]
Abstract
During natural vision, humans make frequent eye movements but perceive a stable visual world. It is therefore likely that the human visual system contains representations of the visual world that are invariant to eye movements. Here we present an experiment designed to identify visual areas that might contain eye-movement-invariant representations. We used functional MRI to record brain activity from four human subjects who watched natural movies. In one condition subjects were required to fixate steadily, and in the other they were allowed to freely make voluntary eye movements. The movies used in each condition were identical. We reasoned that the brain activity recorded in a visual area that is invariant to eye movement should be similar under fixation and free viewing conditions. In contrast, activity in a visual area that is sensitive to eye movement should differ between fixation and free viewing. We therefore measured the similarity of brain activity across repeated presentations of the same movie within the fixation condition, and separately between the fixation and free viewing conditions. The ratio of these measures was used to determine which brain areas are most likely to contain eye movement-invariant representations. We found that voxels located in early visual areas are strongly affected by eye movements, while voxels in ventral temporal areas are only weakly affected by eye movements. These results suggest that the ventral temporal visual areas contain a stable representation of the visual world that is invariant to eye movements made during natural vision.
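The logic of this similarity-ratio analysis can be illustrated with synthetic time courses: a voxel whose response depends only on the stimulus shows comparable correlations within the fixation condition and between fixation and free viewing, whereas an eye-movement-sensitive voxel does not. A toy sketch, with all variable names, noise levels, and signal mixtures invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def invariance_index(fix_a, fix_b, free):
    """Ratio of the fixation-vs-free-viewing correlation to the
    within-fixation (repeat) correlation; values near 1 suggest an
    eye-movement-invariant response."""
    r_within = np.corrcoef(fix_a, fix_b)[0, 1]
    r_between = np.corrcoef(fix_a, free)[0, 1]
    return r_between / r_within

stimulus = rng.standard_normal(300)       # stimulus-driven signal (shared)
eye_artifact = rng.standard_normal(300)   # signal tied to eye movements only

fix_a = stimulus + 0.3 * rng.standard_normal(300)
fix_b = stimulus + 0.3 * rng.standard_normal(300)
invariant_free = stimulus + 0.3 * rng.standard_normal(300)
sensitive_free = stimulus + 2.0 * eye_artifact + 0.3 * rng.standard_normal(300)

idx_invariant = invariance_index(fix_a, fix_b, invariant_free)
idx_sensitive = invariance_index(fix_a, fix_b, sensitive_free)
print(idx_invariant > idx_sensitive)  # True
```

Normalising by the within-condition repeat correlation controls for voxels that are simply noisy, which is why a ratio rather than the raw between-condition correlation indexes invariance.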
Affiliation(s)
- Shinji Nishimoto
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA; Center for Information and Neural Networks, NICT and Osaka University, Osaka
- Alexander G Huth
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA
- Natalia Y Bilenko
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA
- Jack L Gallant
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA; Department of Psychology, University of California, Berkeley, CA, USA
28
Shafer-Skelton A, Kupitz CN, Golomb JD. Object-location binding across a saccade: A retinotopic spatial congruency bias. Atten Percept Psychophys 2017; 79:765-781. [PMID: 28070793 PMCID: PMC5354979 DOI: 10.3758/s13414-016-1263-8]
Abstract
Despite frequent eye movements that rapidly shift the locations of objects on our retinas, our visual system creates a stable perception of the world. To do this, it must convert eye-centered (retinotopic) input to world-centered (spatiotopic) percepts. Moreover, for successful behavior we must also incorporate information about object features/identities during this updating - a fundamental challenge that remains to be understood. Here we adapted a recent behavioral paradigm, the "spatial congruency bias," to investigate object-location binding across an eye movement. In two initial baseline experiments, we showed that the spatial congruency bias was present for both gabor and face stimuli in addition to the object stimuli used in the original paradigm. Then, across three main experiments, we found the bias was preserved across an eye movement, but only in retinotopic coordinates: Subjects were more likely to perceive two stimuli as having the same features/identity when they were presented in the same retinotopic location. Strikingly, there was no evidence of location binding in the more ecologically relevant spatiotopic (world-centered) coordinates; the reference frame did not update to spatiotopic even at longer post-saccade delays, nor did it transition to spatiotopic with more complex stimuli (gabors, shapes, and faces all showed a retinotopic congruency bias). Our results suggest that object-location binding may be tied to retinotopic coordinates, and that it may need to be re-established following each eye movement rather than being automatically updated to spatiotopic coordinates.
Affiliation(s)
- Anna Shafer-Skelton
- Department of Psychology, The Ohio State University, Columbus, OH, 43210, USA
- Colin N Kupitz
- Department of Psychology, The Ohio State University, Columbus, OH, 43210, USA
- Julie D Golomb
- Department of Psychology, The Ohio State University, Columbus, OH, 43210, USA.
29
The reference frame of the tilt aftereffect measured by differential Pavlovian conditioning. Sci Rep 2017; 7:40525. [PMID: 28094321 PMCID: PMC5240094 DOI: 10.1038/srep40525]
Abstract
We used a differential Pavlovian conditioning paradigm to measure tilt aftereffect (TAE) strength. Gabor patches, rotated clockwise and anticlockwise, were used as conditioned stimuli (CSs), one of which (CS+) was followed by the unconditioned stimulus (UCS), whereas the other (CS−) appeared alone. The UCS was an air puff delivered to the left eye. In addition to the CS+ and CS−, the vertical test patch was also presented for the clockwise and anticlockwise adapters. The vertical patch was not followed by the UCS. After participants acquired differential conditioning, eyeblink conditioned responses (CRs) were observed for the vertical patch when it appeared to be tilted in the same direction as the CS+ owing to the TAE. The effect was observed not only when the adapter and test stimuli were presented in the same retinotopic position but also when they were presented in the same spatiotopic position, although spatiotopic TAE was weak—it occurred approximately half as often as the full effect. Furthermore, spatiotopic TAE decayed as the time after saccades increased, but did not decay as the time before saccades increased. These results suggest that the time before the performance of saccadic eye movements is needed to compute the spatiotopic representation.
30
Takeuchi T, Yoshimoto S, Shimada Y, Kochiyama T, Kondo HM. Individual differences in visual motion perception and neurotransmitter concentrations in the human brain. Philos Trans R Soc Lond B Biol Sci 2017; 372:20160111. [PMID: 28044021 DOI: 10.1098/rstb.2016.0111]
Abstract
Recent studies have shown that interindividual variability can be a rich source of information regarding the mechanisms of human visual perception. In this study, we examined the mechanisms underlying interindividual variability in the perception of visual motion, one of the fundamental components of visual scene analysis, by measuring neurotransmitter concentrations using magnetic resonance spectroscopy. First, by psychophysically examining two types of motion phenomena, motion assimilation and motion contrast, we found that, following the presentation of the same stimulus, some participants perceived motion assimilation, while others perceived motion contrast. Furthermore, we found that the concentration of the excitatory neurotransmitter glutamate-glutamine (Glx) in the dorsolateral prefrontal cortex (Brodmann area 46) was positively correlated with a participant's tendency toward motion assimilation over motion contrast; this effect was not observed in the visual areas. The concentration of the inhibitory neurotransmitter γ-aminobutyric acid had only a weak effect compared with that of Glx. We conclude that excitatory processes in this suprasensory area are important for an individual's tendency to resolve antagonistically perceived visual motion phenomena. This article is part of the themed issue 'Auditory and visual scene analysis'.
Affiliation(s)
- Tatsuto Takeuchi
- Department of Psychology, Japan Women's University, Kawasaki, Kanagawa 214-8565, Japan; Human Information Science Laboratory, NTT Communication Science Laboratories, NTT Corporation, Atsugi, Kanagawa 243-0198, Japan
- Sanae Yoshimoto
- Human Information Science Laboratory, NTT Communication Science Laboratories, NTT Corporation, Atsugi, Kanagawa 243-0198, Japan; School of Psychology, Chukyo University, Nagoya, Aichi 466-8666, Japan
- Yasuhiro Shimada
- Brain Activity Imaging Center, ATR-Promotions, Seika-cho, Kyoto 619-0288, Japan
- Takanori Kochiyama
- Brain Activity Imaging Center, ATR-Promotions, Seika-cho, Kyoto 619-0288, Japan; Department of Cognitive Neuroscience, Advanced Telecommunications Research Institute International, Seika-cho, Kyoto 619-0228, Japan
- Hirohito M Kondo
- Human Information Science Laboratory, NTT Communication Science Laboratories, NTT Corporation, Atsugi, Kanagawa 243-0198, Japan
31
Mikellidou K, Turi M, Burr DC. Spatiotopic coding during dynamic head tilt. J Neurophysiol 2016; 117:808-817. [PMID: 27903636 DOI: 10.1152/jn.00508.2016]
Abstract
Humans maintain a stable representation of the visual world effortlessly, despite constant movements of the eyes, head, and body, across multiple planes. Whereas visual stability in the face of saccadic eye movements has been intensely researched, fewer studies have investigated retinal image transformations induced by head movements, especially in the frontal plane. Unlike head rotations in the horizontal and sagittal planes, tilting the head in the frontal plane is only partially counteracted by torsional eye movements and consequently induces a distortion of the retinal image to which we seem to be completely oblivious. One possible mechanism aiding perceptual stability is an active reconstruction of a spatiotopic map of the visual world, anchored in allocentric coordinates. To explore this possibility, we measured the positional motion aftereffect (PMAE; the apparent change in position after adaptation to motion) with head tilts of ∼42° between adaptation and test (to dissociate retinal from allocentric coordinates). The aftereffect was shown to have both a retinotopic and spatiotopic component. When tested with unpatterned Gaussian blobs rather than sinusoidal grating stimuli, the retinotopic component was greatly reduced, whereas the spatiotopic component remained. The results suggest that perceptual stability may be maintained at least partially through mechanisms involving spatiotopic coding.

NEW & NOTEWORTHY: Given that spatiotopic coding could play a key role in maintaining visual stability, we look for evidence of spatiotopic coding after retinal image transformations caused by head tilt. To this end, we measure the strength of the positional motion aftereffect (PMAE; previously shown to be largely spatiotopic after saccades) after large head tilts. We find that, as with eye movements, the spatial selectivity of the PMAE has a large spatiotopic component after head rotation.
Affiliation(s)
- Kyriaki Mikellidou
- Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Marco Turi
- Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy; Fondazione Stella Maris Mediterraneo, Chiaromonte, Potenza, Italy
- David C Burr
- Department of Neuroscience, Psychology, Pharmacology and Child Health, University of Florence, Florence, Italy; Neuroscience Institute, National Research Council (CNR), Pisa, Italy
|
32
|
Spatiotopic updating across saccades revealed by spatially-specific fMRI adaptation. Neuroimage 2016; 147:339-345. [PMID: 27913216 DOI: 10.1016/j.neuroimage.2016.11.071] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2016] [Revised: 10/17/2016] [Accepted: 11/28/2016] [Indexed: 11/21/2022] Open
Abstract
Brain representations of visual space are predominantly eye-centred (retinotopic) yet our experience of the world is largely world-centred (spatiotopic). A long-standing question is how the brain creates continuity between these reference frames across successive eye movements (saccades). Here we use functional magnetic resonance imaging (fMRI) to address whether spatially specific repetition suppression (RS) is evident during trans-saccadic perception. We presented two successive Gabor patches (S1 and S2) in either the upper or lower visual field, left or right of fixation. Spatial congruency was manipulated by having S1 and S2 occur in the same or different upper/lower visual field. On half the trials, a saccade was cued between S1 and S2, placing spatiotopic and retinotopic reference frames in opposition. Equivalent RS was observed in the posterior parietal cortex and frontal eye fields when S1-S2 were spatiotopically congruent, irrespective of whether retinotopic and spatiotopic coordinates were in accord or were placed in opposition by a saccade. Additionally the post-saccadic response to S2 demonstrated spatially-specific RS in retinotopic visual regions, with stronger RS in extrastriate than striate cortex. Collectively, these results are consistent with a robust trans-saccadic spatial updating mechanism for object position that directly influences even the earliest levels of visual processing.
|
33
|
Fabius JH, Fracasso A, Van der Stigchel S. Spatiotopic updating facilitates perception immediately after saccades. Sci Rep 2016; 6:34488. [PMID: 27686998 PMCID: PMC5043283 DOI: 10.1038/srep34488] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2016] [Accepted: 09/14/2016] [Indexed: 11/08/2022] Open
Abstract
As the neural representation of visual information is initially coded in retinotopic coordinates, eye movements (saccades) pose a major problem for visual stability. If no visual information were maintained across saccades, retinotopic representations would have to be rebuilt after each saccade. It is currently strongly debated what kind of information (if any at all) is accumulated across saccades, and when this information becomes available after a saccade. Here, we use a motion illusion to examine the accumulation of visual information across saccades. In this illusion, an annulus with a random texture slowly rotates, and is then replaced with a second texture (motion transient). With increasing rotation durations, observers consistently perceive the transient as large rotational jumps in the direction opposite to rotation direction (backward jumps). We first show that accumulated motion information is updated spatiotopically across saccades. Then, we show that this accumulated information is readily available after a saccade, immediately biasing postsaccadic perception. The current findings suggest that presaccadic information is used to facilitate postsaccadic perception and are in support of a forward model of transsaccadic perception, aiming at anticipating the consequences of eye movements and operating within the narrow perisaccadic time window.
Affiliation(s)
- Jasper H. Fabius
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS Utrecht, The Netherlands
- Alessio Fracasso
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS Utrecht, The Netherlands
- Radiology, Center for Image Sciences, University Medical Center Utrecht, 3584 CX Utrecht, The Netherlands
- Spinoza Centre for Neuroimaging, University of Amsterdam, 1105 BK Amsterdam, The Netherlands
- Stefan Van der Stigchel
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS Utrecht, The Netherlands
|
34
|
Dunkley BT, Baltaretu B, Crawford JD. Trans-saccadic interactions in human parietal and occipital cortex during the retention and comparison of object orientation. Cortex 2016; 82:263-276. [PMID: 27424061 DOI: 10.1016/j.cortex.2016.06.012] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2015] [Revised: 05/21/2016] [Accepted: 06/15/2016] [Indexed: 02/03/2023]
Abstract
The cortical sites for the trans-saccadic storage and integration of visual object features are unknown. Here, we used a variant of fMRI-Adaptation where subjects fixated to the left or right of a briefly presented visual grating, maintained fixation or saccaded to the opposite side, then judged whether a re-presented grating had the same or different orientation. fMRI analysis revealed trans-saccadic interactions (different > same orientation) in a visual field-insensitive cluster within right supramarginal gyrus. This cluster was located at the anterolateral pole of the parietal eye field (identified in a localizer task). We also observed gaze centered, field-specific interactions (same > different orientation) in an extrastriate cluster overlapping with putative 'V4'. Based on these data and our literature review, we conclude that these supramarginal and extrastriate areas are involved in the retention, spatial updating, and evaluation of object orientation information across saccades.
Affiliation(s)
- Benjamin T Dunkley
- York Centre for Vision Research and Canadian Action and Perception Network, York University, Toronto, Ontario, Canada
- Bianca Baltaretu
- York Centre for Vision Research and Canadian Action and Perception Network, York University, Toronto, Ontario, Canada; Department of Biology, Neuroscience Graduate Diploma Program and NSERC Brain in Action CREATE Program, York University, Toronto, Ontario, Canada
- J Douglas Crawford
- York Centre for Vision Research and Canadian Action and Perception Network, York University, Toronto, Ontario, Canada; Department of Biology, Neuroscience Graduate Diploma Program and NSERC Brain in Action CREATE Program, York University, Toronto, Ontario, Canada; Departments of Psychology, and Kinesiology and Health Sciences, York University, Toronto, Ontario, Canada
|
35
|
Thunell E, van der Zwaag W, Ögmen H, Plomp G, Herzog MH. Retinotopic encoding of the Ternus-Pikler display reflected in the early visual areas. J Vis 2016; 16:26. [PMID: 26894510 PMCID: PMC4777237 DOI: 10.1167/16.3.26] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
The visual representation of the world is often assumed to be retinotopic, and many visual brain areas are indeed organized retinotopically. Visual perception, however, is not based on a reference frame anchored in retinotopic coordinates. For example, when an object moves, motion of its constituent parts is perceived relative to the object rather than in retinotopic coordinates. The moving object thus serves as a nonretinotopic reference system for computing the properties of its parts. It is largely unknown how the brain accomplishes this feat. Here, we used the Ternus-Pikler display to pit retinotopic processing in a stationary reference system against nonretinotopic processing in a moving one. Using 7T fMRI, we found that the average blood-oxygen-level dependent activations in V1, V2, and V3 reflected the retinotopic properties, but not the nonretinotopic percepts, of the Ternus-Pikler display. In the human motion processing complex (hMT+), activations were compatible with both retinotopic and nonretinotopic encoding. Thus, hMT+ may be the first visual area encoding the nonretinotopic percepts of the Ternus-Pikler display.
|
36
|
Abstract
A basic principle in visual neuroscience is the retinotopic organization of neural receptive fields. Here, we review behavioral, neurophysiological, and neuroimaging evidence for nonretinotopic processing of visual stimuli. A number of behavioral studies have shown perception depending on object or external-space coordinate systems, in addition to retinal coordinates. Both single-cell neurophysiology and neuroimaging have provided evidence for the modulation of neural firing by gaze position and processing of visual information based on craniotopic or spatiotopic coordinates. Transient remapping of the spatial and temporal properties of neurons contingent on saccadic eye movements has been demonstrated in visual cortex, as well as frontal and parietal areas involved in saliency/priority maps, and is a good candidate to mediate some of the spatial invariance demonstrated by perception. Recent studies suggest that spatiotopic selectivity depends on a low spatial resolution system of maps that operates over a longer time frame than retinotopic processing and is strongly modulated by high-level cognitive factors such as attention. The interaction of an initial and rapid retinotopic processing stage, tied to new fixations, and a longer lasting but less precise nonretinotopic level of visual representation could underlie the perception of both a detailed and a stable visual world across saccadic eye movements.
|
37
|
Latimer K, Curran W. The duration compression effect is mediated by adaptation of both retinotopic and spatiotopic mechanisms. Vision Res 2016; 122:60-65. [PMID: 27063361 DOI: 10.1016/j.visres.2016.01.010] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2015] [Revised: 12/08/2015] [Accepted: 01/13/2016] [Indexed: 11/16/2022]
Abstract
The duration compression effect is a phenomenon in which prior adaptation to a spatially circumscribed dynamic stimulus results in the duration of subsequent subsecond stimuli presented in the adapted region being underestimated. There is disagreement over the frame of reference within which the duration compression phenomenon occurs. One view holds that the effect is driven by retinotopic-tuned mechanisms located at early stages of visual processing, and an alternate position is that the mechanisms are spatiotopic and occur at later stages of visual processing (MT+). We addressed the retinotopic-spatiotopic question by using adapting stimuli - drifting plaids - that are known to activate global-motion mechanisms in area MT. If spatiotopic mechanisms contribute to the duration compression effect, drifting plaid adaptors should be well suited to revealing them. Following adaptation participants were tasked with estimating the duration of a 600 ms random dot stimulus, whose direction was identical to the pattern direction of the adapting plaid, presented at either the same retinotopic or the same spatiotopic location as the adaptor. Our results reveal significant duration compression in both conditions, pointing to the involvement of both retinotopic-tuned and spatiotopic-tuned mechanisms in the duration compression effect.
Affiliation(s)
- Kevin Latimer
- School of Psychology, Queen's University Belfast, United Kingdom
- William Curran
- School of Psychology, Queen's University Belfast, United Kingdom
|
38
|
Marino AC, Mazer JA. Perisaccadic Updating of Visual Representations and Attentional States: Linking Behavior and Neurophysiology. Front Syst Neurosci 2016; 10:3. [PMID: 26903820 PMCID: PMC4743436 DOI: 10.3389/fnsys.2016.00003] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2015] [Accepted: 01/15/2016] [Indexed: 11/13/2022] Open
Abstract
During natural vision, saccadic eye movements lead to frequent retinal image changes that result in different neuronal subpopulations representing the same visual feature across fixations. Despite these potentially disruptive changes to the neural representation, our visual percept is remarkably stable. Visual receptive field remapping, characterized as an anticipatory shift in the position of a neuron's spatial receptive field immediately before saccades, has been proposed as one possible neural substrate for visual stability. Many of the specific properties of remapping, e.g., the exact direction of remapping relative to the saccade vector and the precise mechanisms by which remapping could instantiate stability, remain a matter of debate. Recent studies have also shown that visual attention, like perception itself, can be sustained across saccades, suggesting that the attentional control system can also compensate for eye movements. Classical remapping could have an attentional component, or there could be a distinct attentional analog of visual remapping. At this time we do not yet fully understand how the stability of attentional representations relates to perisaccadic receptive field shifts. In this review, we develop a vocabulary for discussing perisaccadic shifts in receptive field location and perisaccadic shifts of attentional focus, review and synthesize behavioral and neurophysiological studies of perisaccadic perception and perisaccadic attention, and identify open questions that remain to be experimentally addressed.
Affiliation(s)
- Alexandria C Marino
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT, USA; Medical Scientist Training Program, Yale University School of Medicine, New Haven, CT, USA
- James A Mazer
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT, USA; Department of Neurobiology, Yale University School of Medicine, New Haven, CT, USA; Department of Psychology, Yale University, New Haven, CT, USA
|
40
|
Nakashima Y, Iijima T, Sugita Y. Surround-contingent motion aftereffect. Vision Res 2015; 117:9-15. [PMID: 26459145 DOI: 10.1016/j.visres.2015.09.010] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2015] [Revised: 09/25/2015] [Accepted: 09/28/2015] [Indexed: 11/26/2022]
Abstract
We investigated whether motion aftereffects (MAE) can be contingent on surroundings. Random dots moving leftward and rightward were presented in alternation. Moving dots were surrounded by an open circle or an open square. After prolonged exposure to these stimuli, MAE were found to be contingent upon the surrounding frames: dots moving in a random direction appeared to move leftward when surrounded by the frame that had been presented in conjunction with rightward motion. The effect lasted for 24 h and was observed when adapter and test stimuli were presented not only retinotopically, but also at the same spatiotopic position. Furthermore, the effect was observed even when the adapter and test stimuli were presented at different retinotopic and spatiotopic positions, as long as they were presented in the same hemifield. These results indicate that MAE can be influenced not only by stimulus features, but also by their surroundings, and they suggest that the surround-contingent MAE may be mediated at a higher stage of the motion processing pathway.
Affiliation(s)
- Yusuke Nakashima
- Department of Psychology, Waseda University, 1-24-1 Toyama, Shinjuku-ku 162-8644, Tokyo, Japan
- Takumi Iijima
- Department of Psychology, Waseda University, 1-24-1 Toyama, Shinjuku-ku 162-8644, Tokyo, Japan
- Yoichi Sugita
- Department of Psychology, Waseda University, 1-24-1 Toyama, Shinjuku-ku 162-8644, Tokyo, Japan
|
41
|
Abstract
Much evidence has accumulated to suggest that many animals, including young human infants, possess an abstract sense of approximate quantity, a number sense. Most research has concentrated on apparent numerosity of spatial arrays of dots or other objects, but a truly abstract sense of number should be capable of encoding the numerosity of any set of discrete elements, however displayed and in whatever sensory modality. Here, we use the psychophysical technique of adaptation to study the sense of number for serially presented items. We show that numerosity of both auditory and visual sequences is greatly affected by prior adaptation to slow or rapid sequences of events. The adaptation to visual stimuli was spatially selective (in external, not retinal coordinates), pointing to a sensory rather than cognitive process. However, adaptation generalized across modalities, from auditory to visual and vice versa. Adaptation also generalized across formats: adapting to sequential streams of flashes affected the perceived numerosity of spatial arrays. All these results point to a perceptual system that transcends vision and audition to encode an abstract sense of number in space and in time.
Affiliation(s)
- Roberto Arrighi
- Department of Neuroscience, Psychology, Pharmacology and Child Health, University of Florence, via San Salvi 12, Florence 50135, Italy
- Irene Togoli
- Department of Neuroscience, Psychology, Pharmacology and Child Health, University of Florence, via San Salvi 12, Florence 50135, Italy
- David C Burr
- Department of Neuroscience, Psychology, Pharmacology and Child Health, University of Florence, via San Salvi 12, Florence 50135, Italy; Institute of Neuroscience CNR, via Moruzzi 1, Pisa 56124, Italy
|
42
|
Barendregt M, Harvey BM, Rokers B, Dumoulin SO. Transformation from a retinal to a cyclopean representation in human visual cortex. Curr Biol 2015; 25:1982-7. [PMID: 26144967 DOI: 10.1016/j.cub.2015.06.003] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2015] [Revised: 05/13/2015] [Accepted: 06/01/2015] [Indexed: 11/28/2022]
Abstract
We experience our visual world as seen from a single viewpoint, even though our two eyes receive slightly different images. One role of the visual system is to combine the two retinal images into a single representation of the visual field, sometimes called the cyclopean image [1]. Conventional terminology, i.e. retinotopy, implies that the topographic organization of visual areas is maintained throughout visual cortex [2]. However, following the hypothesis that a transformation occurs from a representation of the two retinal images (retinotopy) to a representation of a single cyclopean image (cyclopotopy), we set out to identify the stage in visual processing at which this transformation occurs in the human brain. Using binocular stimuli, population receptive field mapping (pRF), and ultra-high-field (7 T) fMRI, we find that responses in striate cortex (V1) best reflect stimulus position in the two retinal images. In extrastriate cortex (from V2 to LO), on the other hand, responses better reflect stimulus position in the cyclopean image. These results pinpoint the location of the transformation from a retinal to a cyclopean representation and contribute to an understanding of the transition from sensory to perceptual stimulus space in the human brain.
Affiliation(s)
- Martijn Barendregt
- Experimental Psychology, Helmholtz Institute, Utrecht University, 3584 CS Utrecht, the Netherlands; Department of Psychology, University of Wisconsin-Madison, 1202 West Johnson Street, Madison, WI 53706, USA
- Ben M Harvey
- Experimental Psychology, Helmholtz Institute, Utrecht University, 3584 CS Utrecht, the Netherlands; Faculty of Psychology and Education Sciences, University of Coimbra, 3001-802 Coimbra, Portugal
- Bas Rokers
- Experimental Psychology, Helmholtz Institute, Utrecht University, 3584 CS Utrecht, the Netherlands; Department of Psychology, University of Wisconsin-Madison, 1202 West Johnson Street, Madison, WI 53706, USA
- Serge O Dumoulin
- Experimental Psychology, Helmholtz Institute, Utrecht University, 3584 CS Utrecht, the Netherlands
|
43
|
Gardner JL. A case for human systems neuroscience. Neuroscience 2015; 296:130-7. [PMID: 24997268 DOI: 10.1016/j.neuroscience.2014.06.052] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2014] [Revised: 06/20/2014] [Accepted: 06/24/2014] [Indexed: 11/15/2022]
Abstract
Can the human brain itself serve as a model for a systems neuroscience approach to understanding the human brain? After all, how the brain is able to create the richness and complexity of human behavior is still largely mysterious. What better choice to study that complexity than to study it in humans? However, measurements of brain activity typically need to be made non-invasively which puts severe constraints on what can be learned about the internal workings of the brain. Our approach has been to use a combination of psychophysics in which we can use human behavioral flexibility to make quantitative measurements of behavior and link those through computational models to measurements of cortical activity through magnetic resonance imaging. In particular, we have tested various computational hypotheses about what neural mechanisms could account for behavioral enhancement with spatial attention (Pestilli et al., 2011). Resting both on quantitative measurements and considerations of what is known through animal models, we concluded that weighting of sensory signals by the magnitude of their response is a neural mechanism for efficient selection of sensory signals and consequent improvements in behavioral performance with attention. While animal models have many technical advantages over studying the brain in humans, we believe that human systems neuroscience should endeavor to validate, replicate and extend basic knowledge learned from animal model systems and thus form a bridge to understanding how the brain creates the complex and rich cognitive capacities of humans.
Affiliation(s)
- J L Gardner
- Laboratory for Human Systems Neuroscience, RIKEN Brain Science Institute, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan
|
44
|
Transsaccadic processing: stability, integration, and the potential role of remapping. Atten Percept Psychophys 2015; 77:3-27. [PMID: 25380979 DOI: 10.3758/s13414-014-0751-y] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
While our frequent saccades allow us to sample the complex visual environment in a highly efficient manner, they also raise certain challenges for interpreting and acting upon visual input. In the present selective review, we discuss key findings from the domains of cognitive psychology, visual perception, and neuroscience concerning two such challenges: (1) maintaining the phenomenal experience of visual stability despite our rapidly shifting gaze, and (2) integrating visual information across discrete fixations. In the first two sections of the article, we focus primarily on behavioral findings. Next, we examine the possibility that a neural phenomenon known as predictive remapping may provide an explanation for aspects of transsaccadic processing. In this section of the article, we delineate and critically evaluate multiple proposals about the potential role of predictive remapping in light of both theoretical principles and empirical findings.
|
45
|
Uchimura M, Nakano T, Morito Y, Ando H, Kitazawa S. Automatic representation of a visual stimulus relative to a background in the right precuneus. Eur J Neurosci 2015; 42:1651-9. [PMID: 25925368 PMCID: PMC5032987 DOI: 10.1111/ejn.12935] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2015] [Revised: 04/23/2015] [Accepted: 04/27/2015] [Indexed: 11/29/2022]
Abstract
Our brains represent the position of a visual stimulus egocentrically, in either retinal or craniotopic coordinates. In addition, recent behavioral studies have shown that the stimulus position is automatically represented allocentrically relative to a large frame in the background. Here, we investigated neural correlates of the ‘background coordinate’ using an fMRI adaptation technique. A red dot was presented at different locations on a screen, in combination with a rectangular frame that was also presented at different locations, while the participants looked at a fixation cross. When the red dot was presented repeatedly at the same location relative to the rectangular frame, the fMRI signals significantly decreased in the right precuneus. No adaptation was observed after repeated presentations relative to a small, but salient, landmark. These results suggest that the background coordinate is implemented in the right precuneus.
Affiliation(s)
- Motoaki Uchimura
- Dynamic Brain Network Laboratory, Graduate School of Frontier Biosciences, Osaka University, 1-3 Yamadaoka, Suita, Osaka, 565-0871, Japan; Department of Brain Physiology, Graduate School of Medicine, Osaka University, 1-3 Yamadaoka, Suita, Osaka, 565-0871, Japan; Japan Society for the Promotion of Science, 5-3-1 Kojimachi, Chiyoda, Tokyo, 102-0083, Japan
- Tamami Nakano
- Dynamic Brain Network Laboratory, Graduate School of Frontier Biosciences, Osaka University, 1-3 Yamadaoka, Suita, Osaka, 565-0871, Japan; Department of Brain Physiology, Graduate School of Medicine, Osaka University, 1-3 Yamadaoka, Suita, Osaka, 565-0871, Japan; Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology, Osaka University, Suita, Osaka, 565-0871, Japan
- Yusuke Morito
- Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology, Osaka University, Suita, Osaka, 565-0871, Japan
- Hiroshi Ando
- Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology, Osaka University, Suita, Osaka, 565-0871, Japan; Multisensory Cognition and Computation Laboratory, National Institute of Information and Communications Technology, 3-5 Hikaridai, Seika, Kyoto, 619-0289, Japan
- Shigeru Kitazawa
- Dynamic Brain Network Laboratory, Graduate School of Frontier Biosciences, Osaka University, 1-3 Yamadaoka, Suita, Osaka, 565-0871, Japan; Department of Brain Physiology, Graduate School of Medicine, Osaka University, 1-3 Yamadaoka, Suita, Osaka, 565-0871, Japan; Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology, Osaka University, Suita, Osaka, 565-0871, Japan
|
46
|
Blumberg EJ, Peterson MS, Parasuraman R. Enhancing multiple object tracking performance with noninvasive brain stimulation: a causal role for the anterior intraparietal sulcus. Front Syst Neurosci 2015; 9:3. [PMID: 25698943 PMCID: PMC4318277 DOI: 10.3389/fnsys.2015.00003] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2014] [Accepted: 01/09/2015] [Indexed: 11/13/2022] Open
Abstract
Multiple object tracking (MOT) is a complex task recruiting a distributed network of brain regions, and there are marked individual differences in MOT performance. A positive causal relationship between the anterior intraparietal sulcus (AIPS), an integral region in the MOT attention network, and inter-individual variation in MOT performance has not previously been established. The present study used transcranial direct current stimulation (tDCS), a form of non-invasive brain stimulation, to examine such a causal link. Active anodal stimulation (or sham) was applied to the right AIPS or to the left dorsolateral prefrontal cortex (DLPFC), an area associated with working memory but not MOT, while participants completed a MOT task. Stimulation of the right AIPS improved MOT accuracy significantly more than the other two conditions. The results confirm a causal role of the AIPS in the MOT task and illustrate that tDCS can improve MOT performance.
Affiliation(s)
- Eric J Blumberg
- Arch Lab, Department of Psychology, George Mason University, Fairfax, VA, USA
- Matthew S Peterson
- Arch Lab, Department of Psychology, George Mason University, Fairfax, VA, USA
- Raja Parasuraman
- Arch Lab, Department of Psychology, George Mason University, Fairfax, VA, USA
|
47
|
Abstract
The ventral surface of the human occipital lobe contains multiple retinotopic maps. The most posterior of these maps is considered a potential homolog of macaque V4, and referred to as human V4 ("hV4"). The location of the hV4 map, its retinotopic organization, its role in visual encoding, and the cortical areas it borders have been the subject of considerable investigation and debate over the last 25 years. We review the history of this map and adjacent maps in ventral occipital cortex, and consider the different hypotheses for how these ventral occipital maps are organized. Advances in neuroimaging, computational modeling, and characterization of the nearby anatomical landmarks and functional brain areas have improved our understanding of where human V4 is and what kind of visual representations it contains.
Affiliation(s)
- Jonathan Winawer
- Department of Psychology and Center for Neural Science, New York University, New York, New York 10003
- Nathan Witthoft
- Department of Psychology, Stanford University, Stanford, California 94305
|
48
|
Latimer K, Curran W, Benton CP. Direction-contingent duration compression is primarily retinotopic. Vision Res 2014; 105:47-52. [PMID: 25250984 DOI: 10.1016/j.visres.2014.09.004] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2014] [Revised: 09/10/2014] [Accepted: 09/12/2014] [Indexed: 10/24/2022]
Abstract
Previous research has shown that prior adaptation to a spatially circumscribed, oscillating grating results in the duration of a subsequent stimulus briefly presented within the adapted region being underestimated. There is an ongoing debate about where in the motion processing pathway the adaptation underlying this distortion of sub-second duration perception occurs. One position is that the LGN and, perhaps, early cortical processing areas are likely sites for the adaptation; an alternative suggestion is that visual area MT+ contains the neural mechanisms for sub-second timing; and a third position proposes that the effect is driven by adaptation at multiple levels of the motion processing pathway. A related issue is the frame of reference, retinotopic or spatiotopic, in which adaptation-induced duration distortion occurs. We addressed these questions by having participants adapt to a unidirectional random dot kinematogram (RDK), and then measuring the perceived duration of a 600 ms test RDK positioned in either the same retinotopic or the same spatiotopic location as the adaptor. We found that, when it did occur, duration distortion of the test stimulus was direction contingent; that is, it occurred when the adaptor and test stimuli drifted in the same direction, but not when they drifted in opposite directions. Furthermore, the duration compression was evident primarily under retinotopic viewing conditions, with little evidence of duration distortion under spatiotopic viewing conditions. Our results support previous research implicating cortical mechanisms in the duration encoding of sub-second visual events, and reveal that these mechanisms encode duration within a retinotopic frame of reference.
Affiliation(s)
- Kevin Latimer
- School of Psychology, Queen's University Belfast, United Kingdom.
- William Curran
- School of Psychology, Queen's University Belfast, United Kingdom.
49
Zimmermann E, Morrone MC, Burr DC. Buildup of spatial information over time and across eye-movements. Behav Brain Res 2014; 275:281-7. [PMID: 25224817 DOI: 10.1016/j.bbr.2014.09.013] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2014] [Revised: 09/04/2014] [Accepted: 09/07/2014] [Indexed: 11/27/2022]
Abstract
To interact rapidly and effectively with our environment, our brain needs access to a neural representation of the spatial layout of the external world. However, the construction of such a map poses major challenges, as the images on our retinae depend on where the eyes are looking, and shift each time we move our eyes, head and body to explore the world. Research from many laboratories including our own suggests that the visual system does compute spatial maps that are anchored to real-world coordinates. However, the construction of these maps takes time (up to 500 ms) and requires attentional resources. We discuss research investigating how retinotopic reference frames are transformed into spatiotopic reference frames, and how this transformation takes time to complete. These results have implications for theories about visual space coordinates and particularly for the current debate about the existence of spatiotopic representations.
Affiliation(s)
- Eckart Zimmermann
- Psychology Department, University of Florence, Italy; Neuroscience Institute, National Research Council, Pisa, Italy.
- M Concetta Morrone
- Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, via San Zeno 31, 56123 Pisa, Italy; Scientific Institute Stella Maris (IRCSS), viale del Tirreno 331, 56018 Calambrone, Pisa, Italy
- David C Burr
- Department of Neuroscience, Psychology, Pharmacology and Child Health, University of Florence, via San Salvi 12, 50135 Florence, Italy; Institute of Neuroscience CNR, via Moruzzi 1, 56124 Pisa, Italy
50
MacInnes WJ, Hunt AR. Attentional load interferes with target localization across saccades. Exp Brain Res 2014; 232:3737-48. [PMID: 25138910 DOI: 10.1007/s00221-014-4062-2] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2013] [Accepted: 08/01/2014] [Indexed: 11/30/2022]
Abstract
The retinal positions of objects in the world change with each eye movement, but we seem to have little trouble keeping track of spatial information from one fixation to the next. We examined the role of attention in trans-saccadic localization by asking participants to localize targets while performing an attentionally demanding secondary task. In the first experiment, attentional load decreased localization precision for a remembered target, but only when a saccade intervened between target presentation and report. We then repeated the experiment and included a salient landmark that shifted on half the trials. The shifting landmark had a larger effect on localization under high load, indicating that observers rely more on landmarks to make localization judgments under high than under low attentional load. The results suggest that attention facilitates trans-saccadic localization judgments based on spatial updating of gaze-centered coordinates when visual landmarks are not available. The availability of reliable landmarks (present in most natural circumstances) can compensate for the effects of scarce attentional resources on trans-saccadic localization.
Affiliation(s)
- W Joseph MacInnes
- School of Psychology, University of Aberdeen, Aberdeen, AB24 3FX, UK