1
Psychological and physiological evidence for an initial 'Rough Sketch' calculation of personal space. Sci Rep 2021; 11:20960. PMID: 34697390; PMCID: PMC8545955; DOI: 10.1038/s41598-021-99578-1.
Abstract
Personal space has been defined as “the area individuals maintain around themselves into which others cannot intrude without arousing discomfort”. However, the precise form of discomfort (or arousal) responses as a function of distance from an observer remains incompletely understood. Also, the mechanisms involved in recognizing conspecifics and distinguishing them from other objects within personal space have not been identified. Accordingly, here we measured personal space preferences in response to real humans and human-like avatars (in virtual reality), using well-validated “stop distance” procedures. Based on threshold measurements of personal space, we examined within-subject variations in discomfort-related responses across multiple distances (spanning inside and outside each individual’s personal space boundary), as reflected by psychological (ratings) and physiological (skin conductance) responses to both humans and avatars. We found that the discomfort-by-distance functions for both humans and avatars were closely fit by a power law. These results suggest that the brain computation of visually defined personal space begins with a ‘rough sketch’ stage, which generates responses to a broad range of human-like stimuli in addition to humans. Analogous processing mechanisms may underlie other brain functions that respond similarly to both real and simulated human body parts.
2
Abstract
Our visual system is fundamentally retinotopic. When viewing a stable scene, each eye movement shifts object features and locations on the retina. Thus, sensory representations must be updated, or remapped, across saccades to align presaccadic and postsaccadic inputs. The earliest remapping studies focused on anticipatory, presaccadic shifts of neuronal spatial receptive fields. Over time, it has become clear that there are multiple forms of remapping and that different forms of remapping may be mediated by different neural mechanisms. This review attempts to organize the various forms of remapping into a functional taxonomy based on experimental data and ongoing debates about forward versus convergent remapping, presaccadic versus postsaccadic remapping, and spatial versus attentional remapping. We integrate findings from primate neurophysiological, human neuroimaging and behavioral, and computational modeling studies. We conclude by discussing persistent open questions related to remapping, with specific attention to binding of spatial and featural information during remapping and speculations about remapping's functional significance. Expected final online publication date for the Annual Review of Vision Science, Volume 7 is September 2021. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
Affiliation(s)
- Julie D Golomb
- Department of Psychology, The Ohio State University, Columbus, Ohio 43210, USA;
- James A Mazer
- Department of Microbiology and Cell Biology, Montana State University, Bozeman, Montana 59717, USA;
3
Ge Y, Sun Z, Qian C, He S. Spatiotopic updating across saccades in the absence of awareness. J Vis 2021; 21:7. PMID: 33961004; PMCID: PMC8114003; DOI: 10.1167/jov.21.5.7.
Abstract
Despite the continuously changing visual inputs caused by eye movements, our perceptual representation of the visual world remains remarkably stable. Visual stability has been a major area of interest within the field of visual neuroscience. The early visual cortical areas are retinotopically organized, and presumably a retinotopic-to-spatiotopic transformation process supports the stable representation of the visual world. In this study, we used a cross-saccadic adaptation paradigm to show that both orientation adaptation and face-gender adaptation could still be observed at the same spatiotopic (but different retinotopic) locations even when the adapting stimuli were rendered invisible. These results suggest that awareness of a visual object is not required for its transformation from the retinotopic to the spatiotopic reference frame.
Affiliation(s)
- Yijun Ge
- State Key Lab of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- Vision and Attention Lab, Department of Psychology, University of Minnesota, MN, USA
- Zhouyuan Sun
- State Key Lab of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- Department of Neurosurgery, Huazhong University of Science and Technology Union Shenzhen Hospital, Shenzhen, Guangdong, China
- The 6th Affiliated Hospital of Shenzhen University Health Science Center, Shenzhen, Guangdong, China
- Chencan Qian
- State Key Lab of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Sheng He
- State Key Lab of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- Vision and Attention Lab, Department of Psychology, University of Minnesota, MN, USA
- Chinese Academy of Sciences, Center for Excellence in Brain Science and Intelligence Technology, Shanghai, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
4
Predictive remapping leaves a behaviorally measurable attentional trace on eye-centered brain maps. Psychon Bull Rev 2021; 28:1243-1251. PMID: 33634356; DOI: 10.3758/s13423-021-01893-1.
Abstract
How does the brain maintain spatial attention despite the retinal displacement of objects by saccades? A possible solution is to use the vector of an upcoming saccade to compensate for the shift of objects on eye-centered (retinotopic) brain maps. In support of this hypothesis, previous studies have revealed attentional effects at the future retinal locus of an attended object just before the onset of saccades. A critical yet unresolved theoretical issue is whether predictively remapped attentional effects persist long enough on eye-centered brain maps that no external input (goal, expectation, reward, memory, etc.) is needed to maintain spatial attention immediately following saccades. The present study examined this issue with inhibition of return (IOR), an attentional effect that reveals itself in both world-centered and eye-centered coordinates and predictively remaps before saccades. In the first task, a saccade was introduced to a cueing task ("nonreturn-saccade task") to show that IOR is coded in world-centered coordinates following saccades. In a second cueing task, two consecutive saccades were executed to trigger remapping and to dissociate the retinal locus relevant to remapping from the cued retinal locus ("return-saccade task"). IOR was observed at the remapped retinal locus 430 ms after the (first) saccade that triggered remapping. A third cueing task ("no-remapping task") further revealed that the lingering IOR effect left by remapping was not confounded by attention spillover. Together, these results show that predictive remapping leaves a robust attentional trace on eye-centered brain maps. This retinotopic trace is sufficient to sustain spatial attention for a few hundred milliseconds following saccades.
5
Malevich T, Rybina E, Ivtushok E, Ardasheva L, MacInnes WJ. No evidence for an independent retinotopic reference frame for inhibition of return. Acta Psychol (Amst) 2020; 208:103107. PMID: 32562893; DOI: 10.1016/j.actpsy.2020.103107.
Abstract
Inhibition of return (IOR) is a delay in responding to a previously inspected location and is viewed as a crucial mechanism that biases attention toward novelty in visual search. Although most visual processing occurs in retinotopic (eye-centered) coordinates, IOR must be coded in spatiotopic (environmental) coordinates to successfully serve its role as a foraging facilitator. Early studies supported this suggestion, but recent results have shown that spatiotopic and retinotopic reference frames of IOR may coexist. The present study tested possible sources of IOR at the retinotopic location: being part of a spatiotopic IOR gradient, being part of hemifield inhibition, or being an independent source of IOR. We conducted four experiments that varied the cue-target spatial distance (discrete vs. contiguous) and the response modality (manual vs. saccadic). In all experiments, we tested spatiotopic, retinotopic, and neutral (neither spatiotopic nor retinotopic) locations. We found IOR at both the retinotopic and spatiotopic locations but no evidence for an independent source of retinotopic IOR for either response modality. In fact, we observed a spread of IOR across the entire validly cued hemifield, including at neutral locations. We conclude that these results indicate a strategy to inhibit the whole cued hemifield, or suggest a large horizontal gradient around the spatiotopically cued location. PUBLIC SIGNIFICANCE STATEMENT: We perceive the visual world around us as stable despite constant shifts of the retinal image due to saccadic eye movements. In this study, we explored whether inhibition of return (IOR), a mechanism preventing us from returning to previously attended locations, operates in spatiotopic (world-centered) or retinotopic (eye-centered) coordinates. We tested both saccadic and manual IOR at spatiotopic, retinotopic, and control locations. We did not find an independent retinotopic source of IOR for either response modality. The results suggest that IOR spreads over the whole previously attended visual hemifield, or that there is a large horizontal spatiotopic gradient. These results are in line with the idea of IOR as a foraging facilitator in visual search and contribute to our understanding of spatiotopically organized aspects of the visual and attentional systems.
Affiliation(s)
- Tatiana Malevich
- Vision Modelling Laboratory, Faculty of Social Sciences, National Research University - Higher School of Economics, Moscow, Russia; Werner Reichardt Centre for Integrative Neuroscience, University of Tuebingen, Tuebingen, Germany
- Elena Rybina
- Department of Psychology, Faculty of Social Sciences, National Research University - Higher School of Economics, Moscow, Russia
- Elizaveta Ivtushok
- Department of Psychology, Faculty of Social Sciences, National Research University - Higher School of Economics, Moscow, Russia
- Liubov Ardasheva
- Department of Psychology, Faculty of Social Sciences, National Research University - Higher School of Economics, Moscow, Russia
- W Joseph MacInnes
- Vision Modelling Laboratory, Faculty of Social Sciences, National Research University - Higher School of Economics, Moscow, Russia; Department of Psychology, Faculty of Social Sciences, National Research University - Higher School of Economics, Moscow, Russia.
6
Abstract
Spatial attention is thought to be the "glue" that binds features together (e.g., Treisman & Gelade, 1980, Cognitive Psychology, 12(1), 97-136), but attention is dynamic, constantly moving across multiple goals and locations. For example, when a person moves her eyes, visual inputs that are coded relative to the eyes (retinotopic) must be rapidly updated to maintain stable world-centered (spatiotopic) representations. Here, we examined how dynamic updating of spatial attention after a saccadic eye movement affects object-feature binding. Immediately after a saccade, participants were simultaneously presented with four colored and oriented bars (one at a precued spatiotopic target location) and instructed to reproduce both the color and orientation of the target item. Object-feature binding was assessed by applying probabilistic mixture models to the joint distribution of feature errors: feature reports for the target item could be correlated (and thus bound together) or independent. We found that, compared with holding attention without an eye movement, attentional updating after an eye movement produced more independent errors, including illusory conjunctions, in which one feature of the item at the spatiotopic target location was misbound with the other feature of the item at the initial retinotopic location. These findings suggest that even when only one spatiotopic location is task relevant, spatial attention, and thus object-feature binding, is malleable across and after eye movements, heightening the challenge that eye movements pose for the binding problem and for visual stability.
7
Grossberg S. The resonant brain: How attentive conscious seeing regulates action sequences that interact with attentive cognitive learning, recognition, and prediction. Atten Percept Psychophys 2019; 81:2237-2264. PMID: 31218601; PMCID: PMC6848053; DOI: 10.3758/s13414-019-01789-2.
Abstract
This article describes mechanistic links that exist in advanced brains between processes that regulate conscious attention, seeing, and knowing, and those that regulate looking and reaching. These mechanistic links arise from basic properties of brain design principles such as complementary computing, hierarchical resolution of uncertainty, and adaptive resonance. These principles require conscious states to mark perceptual and cognitive representations that are complete, context sensitive, and stable enough to control effective actions. Surface-shroud resonances support conscious seeing and action, whereas feature-category resonances support learning, recognition, and prediction of invariant object categories. Feedback interactions between cortical areas such as peristriate visual cortical areas V2, V3A, and V4, and the lateral intraparietal area (LIP) and intraparietal sulcus (IPS) of the posterior parietal cortex (PPC) control sequences of saccadic eye movements that foveate salient features of attended objects and thereby drive invariant object category learning. Learned categories can, in turn, prime the objects and features that are attended and searched. These interactions coordinate processes of spatial and object attention, figure-ground separation, predictive remapping, invariant object category learning, and visual search. They create a foundation for learning to control motor-equivalent arm movement sequences, and for storing these sequences in cognitive working memories that can trigger the learning of cognitive plans with which to read out skilled movement sequences. Cognitive-emotional interactions that are regulated by reinforcement learning can then help to select the plans that control actions most likely to acquire valued goal objects in different situations. Many interdisciplinary psychological and neurobiological data about conscious and unconscious behaviors in normal individuals and clinical patients have been explained in terms of these concepts and mechanisms.
Affiliation(s)
- Stephen Grossberg
- Center for Adaptive Systems, Room 213, Graduate Program in Cognitive and Neural Systems, Departments of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering, Boston University, 677 Beacon Street, Boston, MA, 02215, USA.
8
van Leeuwen J, Belopolsky AV. Detection of object displacement during a saccade is prioritized by the oculomotor system. J Vis 2019; 19:11. DOI: 10.1167/19.11.11.
Affiliation(s)
- Jonathan van Leeuwen
- Department of Experimental and Applied Psychology, Vrije Universiteit, Amsterdam, The Netherlands
- Artem V. Belopolsky
- Department of Experimental and Applied Psychology, Vrije Universiteit, Amsterdam, The Netherlands
9
Updating spatial working memory in a dynamic visual environment. Cortex 2019; 119:267-286. PMID: 31170650; DOI: 10.1016/j.cortex.2019.04.021.
Abstract
The present review describes recent developments regarding the role of the eye movement system in representing spatial information and keeping track of locations of relevant objects. First, we discuss the active vision perspective and why eye movements are considered crucial for perception and attention. The second part focuses on the question of how the oculomotor system is used to represent spatial attentional priority, and the role of the oculomotor system in maintenance of this spatial information. Lastly, we discuss recent findings demonstrating rapid updating of information across saccadic eye movements. We argue that the eye movement system plays a key role in maintaining and rapidly updating spatial information. Furthermore, we suggest that rapid updating emerges primarily to make sure actions are minimally affected by intervening eye movements, allowing us to efficiently interact with the world around us.
10
Abstract
Our perception of the world remains stable despite the retinal shifts that occur with each saccade. The role of spatial attention in matching pre- to postsaccadic visual information has been well established, but the role of feature-based attention remains unclear. In this study, we examined the transsaccadic processing of a color pop-out target. Participants made a saccade towards a neutral target and performed a search task on a peripheral array presented once the saccade landed. A similar array was presented just before the saccade and we analyzed what aspect of this preview benefitted the postsaccadic search task. We assessed the preview effect in the spatiotopic and retinotopic reference frames, and the potential transfer of feature selectivity across the saccade. In the first experiment, the target and distractor colors remained identical for the preview and the postsaccadic array and performance improved. The largest benefit was observed at the spatiotopic location. In the second experiment, the target and distractor colors were swapped across the saccade. All responses were slowed but the cost was least at the spatiotopic location. Our results show that the preview attracted spatial attention to the target location, which was then remapped, and suggest that previewed features, specifically colors, were transferred across the saccade. Furthermore, the preview induced a spatiotopic advantage regardless of whether the target switched color or not, suggesting that spatiotopy was established independently of feature processing. Our results support independent priming effects of features versus location and underline the role of feature-based selection in visual stability.
11
Golomb JD. Remapping locations and features across saccades: a dual-spotlight theory of attentional updating. Curr Opin Psychol 2019; 29:211-218. PMID: 31075621; DOI: 10.1016/j.copsyc.2019.03.018.
Abstract
How do we maintain visual stability across eye movements? Much work has focused on how visual information is rapidly updated to maintain spatiotopic representations. However, predictive spatial remapping is only part of the story. Here I review key findings, recent debates, and open questions regarding remapping and its implications for visual attention and perception. This review focuses on two key questions: when does remapping occur, and what is the impact on feature perception? Findings are reviewed within the framework of a two-stage, or dual-spotlight, remapping process, where spatial attention must be both updated to the new location (fast, predictive stage) and withdrawn from the previous retinotopic location (slow, post-saccadic stage), with a particular focus on the link between spatial and feature information across eye movements.
Affiliation(s)
- Julie D Golomb
- Department of Psychology, The Ohio State University, United States.
12
Yoshimoto S, Takeuchi T. Effect of spatial attention on spatiotopic visual motion perception. J Vis 2019; 19:4. PMID: 30943532; DOI: 10.1167/19.4.4.
Abstract
We almost never experience visual instability, despite retinal image instability induced by eye movements. How the stability of visual perception is maintained through spatiotopic representation remains a matter of debate. The discrepancies observed in the findings of existing neuroscience studies regarding spatiotopic representation partly originate from differences in regard to how attention is deployed to stimuli. In this study, we psychophysically examined whether spatial attention is needed to perceive spatiotopic visual motion. For this purpose, we used visual motion priming, which is a phenomenon in which a preceding priming stimulus modulates the perceived moving direction of an ambiguous test stimulus, such as a drifting grating that phase shifts by 180°. To examine the priming effect in different coordinates, participants performed a saccade soon after the offset of a primer. The participants were tasked with judging the direction of a subsequently presented test stimulus. To control the effect of spatial attention, the participants were asked to conduct a concurrent dot contrast-change detection task after the saccade. Positive priming was prominent in spatiotopic conditions, whereas negative priming was dominant in retinotopic conditions. At least a 600-ms interval between the priming and test stimuli was needed to observe positive priming in spatiotopic coordinates. When spatial attention was directed away from the location of the test stimulus, spatiotopic positive motion priming completely disappeared; meanwhile, the spatiotopic positive motion priming at shorter interstimulus intervals was enhanced when spatial attention was directed to the location of the test stimulus. These results provide evidence that an attentional resource is requisite for developing spatiotopic representation more quickly.
Affiliation(s)
- Sanae Yoshimoto
- Graduate School of Integrated Arts and Sciences, Hiroshima University, Hiroshima, Japan
- Tatsuto Takeuchi
- Department of Psychology, Japan Women's University, Kanagawa, Japan
13
Seidel Malkinson T, Bartolomeo P. Fronto-parietal organization for response times in inhibition of return: The FORTIOR model. Cortex 2018; 102:176-192. DOI: 10.1016/j.cortex.2017.11.005.
14
Michalczyk Ł, Paszulewicz J, Bielas J, Wolski P. Is saccade preparation required for inhibition of return (IOR)? Neurosci Lett 2018; 665:13-17. DOI: 10.1016/j.neulet.2017.11.035.
15
Abstract
Each time we make an eye movement, the positions of objects on the retina change. In order to keep track of relevant objects, their positions have to be updated. The situation becomes even more complex if the object is no longer present in the world and has to be held in memory. In the present study, we used saccadic curvature to investigate the time course of updating a memorized location across saccades. Previous studies have shown that a memorized location competes with a saccade target for selection on the oculomotor map, which leads to saccades curving away from it. In our study, participants performed a sequence of two saccades while keeping a location in memory. The trajectory of the second saccade was used to measure when the memorized location was updated after the first saccade. The results showed that the memorized location was rapidly updated, with the eyes curving away from its spatial coordinates within 130 ms after the first eye movement. The time course of updating was comparable to that of an exogenously attended location and depended on how well the location was memorized.
16
Hilchey MD, Pratt J, Christie J. Placeholders dissociate two forms of inhibition of return. Q J Exp Psychol (Hove) 2018; 71:360-371. PMID: 27737621; DOI: 10.1080/17470218.2016.1247898.
Abstract
Decades of research using Posner's classic spatial cueing paradigm have uncovered at least two forms of inhibition of return (IOR) in the aftermath of an exogenous, peripheral orienting cue. One prominent dissociation concerns the role of covert and overt orienting in generating IOR effects that relate to perception- and action-oriented processes, respectively. Another prominent dissociation concerns the role of covert and overt orienting in generating IOR effects that depend on object- and space-based representation, respectively. Our objective was to evaluate whether these dichotomies are functionally equivalent by manipulating the presence of placeholder objects in the cueing paradigm. By discouraging eye movements throughout, Experiments 1A and 1B validated a perception-oriented form of IOR that depended critically on placeholders. Experiment 2A demonstrated that IOR was robust without placeholders when eye movements went to the cue and back to fixation before the manual response target. In Experiment 2B, we replicated Experiment 2A's procedures except that we discouraged eye movements. IOR was observed, albeit only weakly and significantly diminished relative to when eye movements were involved. We conclude that action-oriented IOR is robust against placeholders, but that the magnitude of perception-oriented IOR is critically sensitive to placeholder presence when unwanted oculomotor activity can be ruled out.
Affiliation(s)
- Matthew D Hilchey
- Department of Psychology, University of Toronto, Toronto, ON, Canada
- Jay Pratt
- Department of Psychology, University of Toronto, Toronto, ON, Canada
- John Christie
- Department of Psychology and Neuroscience, Dalhousie University, Halifax, NS, Canada
17
Shafer-Skelton A, Kupitz CN, Golomb JD. Object-location binding across a saccade: A retinotopic spatial congruency bias. Atten Percept Psychophys 2017; 79:765-781. PMID: 28070793; PMCID: PMC5354979; DOI: 10.3758/s13414-016-1263-8.
Abstract
Despite frequent eye movements that rapidly shift the locations of objects on our retinas, our visual system creates a stable perception of the world. To do this, it must convert eye-centered (retinotopic) input to world-centered (spatiotopic) percepts. Moreover, for successful behavior we must also incorporate information about object features/identities during this updating, a fundamental challenge that remains to be understood. Here we adapted a recent behavioral paradigm, the "spatial congruency bias," to investigate object-location binding across an eye movement. In two initial baseline experiments, we showed that the spatial congruency bias was present for both Gabor and face stimuli in addition to the object stimuli used in the original paradigm. Then, across three main experiments, we found the bias was preserved across an eye movement, but only in retinotopic coordinates: Subjects were more likely to perceive two stimuli as having the same features/identity when they were presented in the same retinotopic location. Strikingly, there was no evidence of location binding in the more ecologically relevant spatiotopic (world-centered) coordinates; the reference frame did not update to spatiotopic even at longer post-saccade delays, nor did it transition to spatiotopic with more complex stimuli (Gabors, shapes, and faces all showed a retinotopic congruency bias). Our results suggest that object-location binding may be tied to retinotopic coordinates, and that it may need to be re-established following each eye movement rather than being automatically updated to spatiotopic coordinates.
- Anna Shafer-Skelton
- Department of Psychology, The Ohio State University, Columbus, OH, 43210, USA
- Colin N Kupitz
- Department of Psychology, The Ohio State University, Columbus, OH, 43210, USA
- Julie D Golomb
- Department of Psychology, The Ohio State University, Columbus, OH, 43210, USA.
18
MacInnes WJ. Multiple Diffusion Models to Compare Saccadic and Manual Responses for Inhibition of Return. Neural Comput 2017; 29:804-824. DOI: 10.1162/neco_a_00904.
Abstract
Cuing a location in space produces a short-lived advantage in reaction time to targets at that location. This early advantage, however, switches to a reaction time cost and has been termed inhibition of return (IOR). IOR behaves differently for different response modalities, suggesting that it may not be a unified effect. This letter presents new data from two experiments testing the gradient of IOR with random, continuous cue-target Euclidean distance and cue-target onset asynchrony. These data were then used to train multiple diffusion models of saccadic and manual reaction time for these cuing experiments. Diffusion models can generate accurate distributions of reaction time data by modeling a response as a buildup of evidence toward a response threshold. If saccadic and attentional IOR are based on similar processes, then differences in distribution will be best explained by adjusting parameter values such as signal and noise within the same model structure. Although experimental data show differences in the timing of IOR across modality, best-fit models are shown to have similar model parameters for the gradient of IOR, suggesting similar underlying mechanisms for saccadic and manual IOR.
Affiliation(s)
- W. Joseph MacInnes
- National Research University Higher School of Economics, Moscow, Russian Federation, 101000
|
19
|
Grossberg S. Towards solving the hard problem of consciousness: The varieties of brain resonances and the conscious experiences that they support. Neural Netw 2016; 87:38-95. [PMID: 28088645 DOI: 10.1016/j.neunet.2016.11.003] [Citation(s) in RCA: 45] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2016] [Revised: 10/21/2016] [Accepted: 11/20/2016] [Indexed: 10/20/2022]
Abstract
The hard problem of consciousness is the problem of explaining how we experience qualia or phenomenal experiences, such as seeing, hearing, and feeling, and knowing what they are. To solve this problem, a theory of consciousness needs to link brain to mind by modeling how emergent properties of several brain mechanisms interacting together embody detailed properties of individual conscious psychological experiences. This article summarizes evidence that Adaptive Resonance Theory, or ART, accomplishes this goal. ART is a cognitive and neural theory of how advanced brains autonomously learn to attend, recognize, and predict objects and events in a changing world. ART has predicted that "all conscious states are resonant states" as part of its specification of mechanistic links between processes of consciousness, learning, expectation, attention, resonance, and synchrony. It hereby provides functional and mechanistic explanations of data ranging from individual spikes and their synchronization to the dynamics of conscious perceptual, cognitive, and cognitive-emotional experiences. ART has reached sufficient maturity to begin classifying the brain resonances that support conscious experiences of seeing, hearing, feeling, and knowing. Psychological and neurobiological data in both normal individuals and clinical patients are clarified by this classification. This analysis also explains why not all resonances become conscious, and why not all brain dynamics are resonant. The global organization of the brain into computationally complementary cortical processing streams (complementary computing), and the organization of the cerebral cortex into characteristic layers of cells (laminar computing), figure prominently in these explanations of conscious and unconscious processes. Alternative models of consciousness are also discussed.
Affiliation(s)
- Stephen Grossberg
- Center for Adaptive Systems, Boston University, 677 Beacon Street, Boston, MA 02215, USA; Graduate Program in Cognitive and Neural Systems, Departments of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering, Boston University, 677 Beacon Street, Boston, MA 02215, USA
|
20
|
Abstract
In oculomotor selection, each saccade is thought to be automatically biased toward uninspected locations, inhibiting the inefficient behavior of repeatedly refixating the same objects. This automatic bias is related to inhibition of return (IOR). Although IOR seems an appealing property that increases efficiency in visual search, such a mechanism would not be efficient in other tasks. Indeed, evidence for additional, more flexible control over refixations has been provided. Here, we investigated whether task demands implicitly affect the rate of refixations. We measured the probability of refixations after series of six binary saccadic decisions under two conditions: visual search and free viewing. The rate of refixations appears to be influenced by two effects. First, more refixations were observed with more intervening fixations. Second, we observed an effect of task set, with fewer refixations in visual search than in free viewing. Importantly, the history-related effect was more pronounced when sufficient spatial references were provided, suggesting that it depends on spatiotopic encoding of previously fixated locations. Thus, the known history-related bias in gaze direction is not the sole influence on the refixation rate; other factors, such as task set and spatial references, exert strong influences as well.
|
21
|
Marino AC, Mazer JA. Perisaccadic Updating of Visual Representations and Attentional States: Linking Behavior and Neurophysiology. Front Syst Neurosci 2016; 10:3. [PMID: 26903820 PMCID: PMC4743436 DOI: 10.3389/fnsys.2016.00003] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2015] [Accepted: 01/15/2016] [Indexed: 11/13/2022] Open
Abstract
During natural vision, saccadic eye movements lead to frequent retinal image changes that result in different neuronal subpopulations representing the same visual feature across fixations. Despite these potentially disruptive changes to the neural representation, our visual percept is remarkably stable. Visual receptive field remapping, characterized as an anticipatory shift in the position of a neuron's spatial receptive field immediately before saccades, has been proposed as one possible neural substrate for visual stability. Many of the specific properties of remapping, e.g., the exact direction of remapping relative to the saccade vector and the precise mechanisms by which remapping could instantiate stability, remain a matter of debate. Recent studies have also shown that visual attention, like perception itself, can be sustained across saccades, suggesting that the attentional control system can also compensate for eye movements. Classical remapping could have an attentional component, or there could be a distinct attentional analog of visual remapping. At this time we do not yet fully understand how the stability of attentional representations relates to perisaccadic receptive field shifts. In this review, we develop a vocabulary for discussing perisaccadic shifts in receptive field location and perisaccadic shifts of attentional focus, review and synthesize behavioral and neurophysiological studies of perisaccadic perception and perisaccadic attention, and identify open questions that remain to be experimentally addressed.
Affiliation(s)
- Alexandria C Marino
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT, USA; Medical Scientist Training Program, Yale University School of Medicine, New Haven, CT, USA
- James A Mazer
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT, USA; Department of Neurobiology, Yale University School of Medicine, New Haven, CT, USA; Department of Psychology, Yale University, New Haven, CT, USA
|
22
|
Grossberg S. Cortical Dynamics of Figure-Ground Separation in Response to 2D Pictures and 3D Scenes: How V2 Combines Border Ownership, Stereoscopic Cues, and Gestalt Grouping Rules. Front Psychol 2016; 6:2054. [PMID: 26858665 PMCID: PMC4726768 DOI: 10.3389/fpsyg.2015.02054] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2015] [Accepted: 12/24/2015] [Indexed: 11/20/2022] Open
Abstract
The FACADE model, and its laminar cortical realization and extension in the 3D LAMINART model, have explained, simulated, and predicted many perceptual and neurobiological data about how the visual cortex carries out 3D vision and figure-ground perception, and how these cortical mechanisms enable 2D pictures to generate 3D percepts of occluding and occluded objects. In particular, these models have proposed how border ownership occurs, but have not yet explicitly explained the correlation between multiple properties of border ownership neurons in cortical area V2 that were reported in a remarkable series of neurophysiological experiments by von der Heydt and his colleagues; namely, border ownership, contrast preference, binocular stereoscopic information, selectivity for side-of-figure, Gestalt rules, and strength of attentional modulation, as well as the time course during which such properties arise. This article shows how, by combining 3D LAMINART properties that were discovered in two parallel streams of research, a unified explanation of these properties emerges. This explanation proposes, moreover, how these properties contribute to the generation of consciously seen 3D surfaces. The first research stream models how processes like 3D boundary grouping and surface filling-in interact in multiple stages within and between the V1 interblob—V2 interstripe—V4 cortical stream and the V1 blob—V2 thin stripe—V4 cortical stream, respectively. Of particular importance for understanding figure-ground separation is how these cortical interactions convert computationally complementary boundary and surface mechanisms into a consistent conscious percept, including the critical use of surface contour feedback signals from surface representations in V2 thin stripes to boundary representations in V2 interstripes. Remarkably, key figure-ground properties emerge from these feedback interactions. 
The second research stream shows how cells that compute absolute disparity in cortical area V1 are transformed into cells that compute relative disparity in cortical area V2. Relative disparity is a more invariant measure of an object's depth and 3D shape, and is sensitive to figure-ground properties.
Affiliation(s)
- Stephen Grossberg
- Center for Adaptive Systems, Graduate Program in Cognitive and Neural Systems, Center for Computational Neuroscience and Neural Technology, Boston University, Boston, MA, USA; Department of Mathematics, Boston University, Boston, MA, USA
|
23
|
Hoffmann D, Goffaux V, Schuller AM, Schiltz C. Inhibition of return and attentional facilitation: Numbers can be counted in, letters tell a different story. Acta Psychol (Amst) 2016; 163:74-80. [PMID: 26613388 DOI: 10.1016/j.actpsy.2015.11.007] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2014] [Revised: 09/21/2015] [Accepted: 11/17/2015] [Indexed: 11/18/2022] Open
Abstract
Prior research has provided strong evidence for spatial-numerical associations. Single digits can for instance act as attentional cues, orienting visuo-spatial attention to the left or right hemifield depending on the digit's magnitude, thus facilitating target detection in the cued hemifield (left/right hemifield after small/large digits, respectively). Studies using other types of behaviourally or biologically relevant central cues known to elicit automated symbolic attention orienting effects such as arrows or gaze have shown that the initial facilitation of cued target detection can turn into inhibition at longer stimulus onset asynchronies (SOAs). However, no studies so far investigated whether inhibition of return (IOR) is also observed using digits as uninformative central cues. To address this issue we designed an attentional cueing paradigm using SOAs ranging from 500 ms to 1650 ms. As expected, the results showed a facilitation effect at the relatively short 650 ms SOA, replicating previous findings. At the long 1650 ms SOA, however, participants were faster to detect targets in the uncued hemifield compared to the cued hemifield, showing an IOR effect. A control experiment with letters showed no such congruency effects at any SOA. These findings provide the first evidence that digits not only produce facilitation effects at shorter intervals, but also induce inhibitory effects at longer intervals, confirming that Arabic digits engage automated symbolic orienting of attention.
Affiliation(s)
- Danielle Hoffmann
- Research and Transfer Centre LUCET, FLSHASE, University of Luxembourg, Luxembourg
- Valérie Goffaux
- Research Institute IPSY, Université Catholique de Louvain, Belgium
|
24
|
He T, Ding Y, Wang Z. Environment- and eye-centered inhibitory cueing effects are both observed after a methodological confound is eliminated. Sci Rep 2015; 5:16586. [PMID: 26565380 PMCID: PMC4643241 DOI: 10.1038/srep16586] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2015] [Accepted: 10/16/2015] [Indexed: 12/02/2022] Open
Abstract
Inhibition of return (IOR), typically explored in cueing paradigms, is a performance cost associated with previously attended locations and has been suggested as a crucial attentional mechanism that biases orienting towards novelty. In their seminal IOR paper, Posner and Cohen (1984) showed that IOR is coded in spatiotopic, or environment-centered, coordinates. Recent studies, however, have consistently reported IOR effects in both spatiotopic and retinotopic (eye-centered) coordinates. One overlooked methodological confound in all previous studies is that the spatial gradient of IOR was not considered when selecting the baseline for estimating IOR effects. This methodological issue makes it difficult to tell whether the IOR effects reported in previous studies were coded in retinotopic or spatiotopic coordinates, or in both. The present study addresses this issue by incorporating no-cue trials into a modified cueing paradigm in which a gaze shift always intervenes between the cue and the target. The results revealed that (a) IOR is indeed coded in both spatiotopic and retinotopic coordinates, and (b) the methodology of previous work may have underestimated spatiotopic and retinotopic IOR effects.
Affiliation(s)
- Tao He
- Center for Cognition and Brain Disorders, Hangzhou Normal University, Hangzhou, 311121, China; Zhejiang Key Laboratory for Research in Assessment of Cognitive Impairments, Hangzhou, 311121, China
- Yun Ding
- Center for Cognition and Brain Disorders, Hangzhou Normal University, Hangzhou, 311121, China; Zhejiang Key Laboratory for Research in Assessment of Cognitive Impairments, Hangzhou, 311121, China
- Zhiguo Wang
- Center for Cognition and Brain Disorders, Hangzhou Normal University, Hangzhou, 311121, China; Zhejiang Key Laboratory for Research in Assessment of Cognitive Impairments, Hangzhou, 311121, China
|
25
|
|
26
|
Jiang YV, Won BY. Spatial scale, rather than nature of task or locomotion, modulates the spatial reference frame of attention. J Exp Psychol Hum Percept Perform 2015; 41:866-78. [PMID: 25867510 DOI: 10.1037/xhp0000056] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Visuospatial attention is strongly biased to locations that had frequently contained a search target before. However, the function of this bias depends on the reference frame in which attended locations are coded. Previous research has shown a striking difference between tasks administered on a computer monitor and those administered in a large environment, with the former inducing viewer-centered learning and the latter environment-centered learning. Why does environment-centered learning fail on a computer? Here, we tested 3 possibilities: differences in spatial scale, the nature of the task, and locomotion may each influence the reference frame of attention. Participants searched for a target on a monitor placed flat on a stand. On each trial, they stood at a different location around the monitor. The target was frequently located in a fixed area of the monitor, but changes in participants' perspective rendered this area random relative to the participants. Under incidental learning conditions, participants failed to acquire environment-centered learning even when (a) the task and display resembled those of a large-scale task and (b) the search task required locomotion. The difficulty of inducing environment-centered learning on a computer underscores the egocentric nature of visual attention. It supports the idea that spatial scale modulates the reference frame of attention.
|
27
|
Spatial constancy of attention across eye movements is mediated by the presence of visual objects. Atten Percept Psychophys 2015; 77:1159-69. [DOI: 10.3758/s13414-015-0861-1] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/08/2023]
|
28
|
MacInnes WJ, Krüger HM, Hunt AR. Just passing through? Inhibition of return in saccadic sequences. Q J Exp Psychol (Hove) 2015; 68:402-16. [DOI: 10.1080/17470218.2014.945097] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
Responses tend to be slower to previously fixated spatial locations, an effect known as “inhibition of return” (IOR). Saccades cannot be assumed to be independent, however, and saccade sequences programmed in parallel differ from independent eye movements. We measured the speed of both saccadic and manual responses to probes appearing in previously fixated locations when those locations were fixated as part of either parallel or independent saccade sequences. Saccadic IOR was observed in independent but not parallel saccade sequences, while manual IOR was present in both parallel and independent sequence types. Saccadic IOR was also short-lived, and dissipated with delays of more than ∼1500 ms between the intermediate fixation and the probe onset. The results confirm that the characteristics of IOR depend critically on the response modality used for measuring it, with saccadic and manual responses giving rise to motor and attentional forms of IOR, respectively. Saccadic IOR is relatively short-lived and is not observed at intermediate locations of parallel saccade sequences, while attentional IOR is long-lasting and consistent for all sequence types.
Affiliation(s)
- W. Joseph MacInnes
- School of Psychology, University of Aberdeen, Old Aberdeen, UK
- Faculty of Psychology, Higher School of Economics (HSE), Moscow, Russian Federation
- Hannah M. Krüger
- School of Psychology, University of Aberdeen, Old Aberdeen, UK
- Centre Attention and Vision, Laboratoire Psychologie de la Perception, Université Paris Descartes, Paris, France
- Amelia R. Hunt
- School of Psychology, University of Aberdeen, Old Aberdeen, UK
|
29
|
Grossberg S, Srinivasan K, Yazdanbakhsh A. Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements. Front Psychol 2015; 5:1457. [PMID: 25642198 PMCID: PMC4294135 DOI: 10.3389/fpsyg.2014.01457] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2014] [Accepted: 11/28/2014] [Indexed: 12/02/2022] Open
Abstract
How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations. The model explains data from single neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation and the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object's surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations.
Affiliation(s)
- Stephen Grossberg
- Center for Adaptive Systems, Graduate Program in Cognitive and Neural Systems, Center of Excellence for Learning in Education, Science and Technology, Center for Computational Neuroscience and Neural Technology, and Department of Mathematics, Boston University, Boston, MA, USA
- Karthik Srinivasan
- Center for Adaptive Systems, Graduate Program in Cognitive and Neural Systems, Center of Excellence for Learning in Education, Science and Technology, Center for Computational Neuroscience and Neural Technology, and Department of Mathematics, Boston University, Boston, MA, USA
- Arash Yazdanbakhsh
- Center for Adaptive Systems, Graduate Program in Cognitive and Neural Systems, Center of Excellence for Learning in Education, Science and Technology, Center for Computational Neuroscience and Neural Technology, and Department of Mathematics, Boston University, Boston, MA, USA
|
30
|
Odoj B, Balslev D. Role of Oculoproprioception in Coding the Locus of Attention. J Cogn Neurosci 2015; 28:517-28. [DOI: 10.1162/jocn_a_00910] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
The most common neural representations for spatial attention encode locations retinotopically, relative to center of gaze. To keep track of visual objects across saccades or to orient toward sounds, retinotopic representations must be combined with information about the rotation of one's own eyes in the orbits. Although gaze input is critical for a correct allocation of attention, the source of this input has so far remained unidentified. Two main signals are available: corollary discharge (copy of oculomotor command) and oculoproprioception (feedback from extraocular muscles). Here we asked whether the oculoproprioceptive signal relayed from the somatosensory cortex contributes to coding the locus of attention. We used continuous theta burst stimulation (cTBS) over a human oculoproprioceptive area in the postcentral gyrus (S1EYE). S1EYE-cTBS reduces proprioceptive processing, causing ∼1° underestimation of gaze angle. Participants discriminated visual targets whose location was cued in a nonvisual modality. Throughout the visual space, S1EYE-cTBS shifted the locus of attention away from the cue by ∼1°, in the same direction and by the same magnitude as the oculoproprioceptive bias. This systematic shift cannot be attributed to visual mislocalization. Accuracy of open-loop pointing to the same visual targets, a function thought to rely mainly on the corollary discharge, was unchanged. We argue that oculoproprioception is selective for attention maps. By identifying a potential substrate for the coupling between eye and attention, this study contributes to the theoretical models for spatial attention.
|
31
|
Jiang YV, Swallow KM. Changing viewer perspectives reveals constraints to implicit visual statistical learning. J Vis 2014; 14:14.12.3. [PMID: 25294640 DOI: 10.1167/14.12.3] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Statistical learning, the learning of environmental regularities to guide behavior, likely plays an important role in natural human behavior. One potential use is in search for valuable items. Because visual statistical learning can be acquired quickly and without intention or awareness, it could optimize search and thereby conserve energy. For this to be true, however, visual statistical learning needs to be viewpoint invariant, facilitating search even when people walk around. To test whether implicit visual statistical learning of spatial information is viewpoint independent, we asked participants to perform a visual search task from variable locations around a monitor placed flat on a stand. Unbeknownst to participants, the target was more often in some locations than others. In contrast to previous research on stationary observers, visual statistical learning failed to produce a search advantage for targets in high-probability regions that were stable within the environment but variable relative to the viewer. This failure was observed even when conditions for spatial updating were optimized. However, learning was successful when the rich locations were referenced relative to the viewer. We conclude that changing viewer perspective disrupts implicit learning of the target's location probability. This form of learning shows limited integration with spatial updating or spatiotopic representations.
Affiliation(s)
- Yuhong V Jiang
- Department of Psychology, University of Minnesota, Minneapolis, MN, USA
- Khena M Swallow
- Department of Psychology, Cornell University, Ithaca, NY, USA
|
32
|
Zimmermann E, Morrone MC, Burr DC. Buildup of spatial information over time and across eye-movements. Behav Brain Res 2014; 275:281-7. [PMID: 25224817 DOI: 10.1016/j.bbr.2014.09.013] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2014] [Revised: 09/04/2014] [Accepted: 09/07/2014] [Indexed: 11/27/2022]
Abstract
To interact rapidly and effectively with our environment, our brain needs access to a neural representation of the spatial layout of the external world. However, the construction of such a map poses major challenges, as the images on our retinae depend on where the eyes are looking, and shift each time we move our eyes, head, and body to explore the world. Research from many laboratories, including our own, suggests that the visual system does compute spatial maps that are anchored to real-world coordinates. However, the construction of these maps requires both time (up to 500 ms) and attentional resources. We discuss research investigating how retinotopic reference frames are transformed into spatiotopic reference frames, and how this transformation takes time to complete. These results have implications for theories about visual space coordinates and, in particular, for the current debate about the existence of spatiotopic representations.
Affiliation(s)
- Eckart Zimmermann
- Psychology Department, University of Florence, Italy; Neuroscience Institute, National Research Council, Pisa, Italy
- M Concetta Morrone
- Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, via San Zeno 31, 56123 Pisa, Italy; Scientific Institute Stella Maris (IRCSS), viale del Tirreno 331, 56018 Calambrone, Pisa, Italy
- David C Burr
- Department of Neuroscience, Psychology, Pharmacology and Child Health, University of Florence, via San Salvi 12, 50135 Florence, Italy; Institute of Neuroscience CNR, via Moruzzi 1, 56124 Pisa, Italy
|
33
|
Abstract
In natural scenes, multiple visual stimuli compete for selection; however, each saccade displaces the stimulus representations in retinotopically organized visual and oculomotor maps. In the present study, we used saccade curvature to investigate whether oculomotor competition across eye movements is represented in retinotopic or spatiotopic coordinates. Participants performed a sequence of saccades, and we induced oculomotor competition by briefly presenting a task-irrelevant distractor at different times during the saccade sequence. Despite the intervening saccade, the second saccade curved away from the spatial representation of a distractor that was presented before the first saccade. Furthermore, the degree of saccade curvature increased with the salience of the distractor presented before the first saccade. The results suggest that spatiotopic representations of target-distractor competition are crucial for successful interaction with objects of interest despite the intervening eye movements.
|
34
|
|
35
|
Chang HC, Grossberg S, Cao Y. Where's Waldo? How perceptual, cognitive, and emotional brain processes cooperate during learning to categorize and find desired objects in a cluttered scene. Front Integr Neurosci 2014; 8:43. [PMID: 24987339 PMCID: PMC4060746 DOI: 10.3389/fnint.2014.00043] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2013] [Accepted: 05/02/2014] [Indexed: 11/13/2022] Open
Abstract
The Where's Waldo problem concerns how individuals can rapidly learn to search a scene to detect, attend, recognize, and look at a valued target object in it. This article develops the ARTSCAN Search neural model to clarify how brain mechanisms across the What and Where cortical streams are coordinated to solve the Where's Waldo problem. The What stream learns positionally-invariant object representations, whereas the Where stream controls positionally-selective spatial and action representations. The model overcomes deficiencies of these computationally complementary properties through What and Where stream interactions. Where stream processes of spatial attention and predictive eye movement control modulate What stream processes whereby multiple view- and positionally-specific object categories are learned and associatively linked to view- and positionally-invariant object categories through bottom-up and attentive top-down interactions. Gain fields control the coordinate transformations that enable spatial attention and predictive eye movements to carry out this role. What stream cognitive-emotional learning processes enable the focusing of motivated attention upon the invariant object categories of desired objects. What stream cognitive names or motivational drives can prime a view- and positionally-invariant object category of a desired target object. A volitional signal can convert these primes into top-down activations that can, in turn, prime What stream view- and positionally-specific categories. When it also receives bottom-up activation from a target, such a positionally-specific category can cause an attentional shift in the Where stream to the positional representation of the target, and an eye movement can then be elicited to foveate it. These processes describe interactions among brain regions that include visual cortex, parietal cortex, inferotemporal cortex, prefrontal cortex (PFC), amygdala, basal ganglia (BG), and superior colliculus (SC).
Collapse
Affiliation(s)
- Hung-Cheng Chang
- Graduate Program in Cognitive and Neural Systems, Department of Mathematics, Center for Adaptive Systems, Center for Computational Neuroscience and Neural Technology, Boston University Boston, MA, USA
- Stephen Grossberg
- Graduate Program in Cognitive and Neural Systems, Department of Mathematics, Center for Adaptive Systems, Center for Computational Neuroscience and Neural Technology, Boston University Boston, MA, USA
- Yongqiang Cao
- Graduate Program in Cognitive and Neural Systems, Department of Mathematics, Center for Adaptive Systems, Center for Computational Neuroscience and Neural Technology, Boston University Boston, MA, USA
36
Jiang YV, Won BY, Swallow KM, Mussack DM. Spatial reference frame of attention in a large outdoor environment. J Exp Psychol Hum Percept Perform 2014; 40:1346-57. [PMID: 24842066 DOI: 10.1037/a0036779]
Abstract
A central question about spatial attention is whether it is referenced relative to the external environment or to the viewer. This question has received great interest in recent psychological and neuroscience research, with many, but not all, studies finding evidence for a viewer-centered representation. However, these previous findings were confined to computer-based tasks that involved stationary viewers. Because natural search behaviors differ from computer-based tasks in viewer mobility and spatial scale, it is important to understand how spatial attention is coded in the natural environment. To this end, we created an outdoor visual search task in which participants searched a large (690 square ft), concrete, outdoor space to report which side of a coin on the ground faced up. They began search in the middle of the space and were free to move around. Attentional cuing by statistical learning was examined by placing the coin in 1 quadrant of the search space on 50% of the trials. As in computer-based tasks, participants learned and used these regularities to guide search. However, cuing could be referenced to either the environment or the viewer. The spatial reference frame of attention thus shows greater flexibility in the natural environment than previously found in the lab.
37
Lim A, Sinnett S. The interaction of feature and space based orienting within the attention set. Front Integr Neurosci 2014; 8:9. [PMID: 24523682 PMCID: PMC3906572 DOI: 10.3389/fnint.2014.00009]
Abstract
The processing of sensory information relies on interacting mechanisms of sustained attention and attentional capture, both of which operate in space and on object features. While evidence indicates that exogenous attentional capture, a mechanism previously understood to be automatic, can be eliminated while concurrently performing a demanding task, we reframe this phenomenon within the theoretical framework of the “attention set” (Most et al., 2005). Consequently, the specific prediction that cuing effects should reappear when feature dimensions of the cue overlap with those in the attention set (i.e., elements of the demanding task) was empirically tested and confirmed using a dual-task paradigm involving both sustained attention and attentional capture, adapted from Santangelo et al. (2007). Participants were required either to detect a centrally presented target in a stream of distractors (the primary task), or to respond to a spatially cued target (the secondary task). Importantly, the spatial cue could either share features with the target in the centrally presented primary task, or not share any features. Overall, the findings supported the attention set hypothesis, showing that a spatial cuing effect was only observed when the peripheral cue shared a feature with objects that were already in the attention set (i.e., the primary task). However, this finding was accompanied by differential attentional orienting depending on the different types of objects within the attention set, with feature-based orienting occurring for target-related objects, and additional spatial-based orienting for distractor-related objects.
Affiliation(s)
- Ahnate Lim
- Department of Psychology, University of Hawaii at Manoa Honolulu, HI, USA
- Scott Sinnett
- Department of Psychology, University of Hawaii at Manoa Honolulu, HI, USA
38
Boon PJ, Theeuwes J, Belopolsky AV. Updating visual-spatial working memory during object movement. Vision Res 2013; 94:51-7. [PMID: 24262811 DOI: 10.1016/j.visres.2013.11.002]
Abstract
Working memory enables temporary maintenance and manipulation of information for immediate access by cognitive processes. The present study investigates how spatial information stored in working memory is updated during object movement. Participants had to remember a particular location on an object which, after a retention interval, started to move. The question was whether the memorized location was updated with the movement of the object or whether, after object movement, it remained represented in retinotopic coordinates. We used saccade trajectories to examine how memorized locations were represented. The results showed that immediately after the object stopped moving, there was both a retinotopic and an object-centered representation. However, 200 ms later, the activity at the retinotopic location decayed, making the memory representation fully object-centered. Our results suggest that memorized locations are updated from retinotopic to object-centered coordinates during, or shortly after, object movement.
Affiliation(s)
- Paul J Boon
- Department of Cognitive Psychology, Vrije Universiteit, Amsterdam, The Netherlands.
- Jan Theeuwes
- Department of Cognitive Psychology, Vrije Universiteit, Amsterdam, The Netherlands
- Artem V Belopolsky
- Department of Cognitive Psychology, Vrije Universiteit, Amsterdam, The Netherlands
39

40
Talsma D, White BJ, Mathôt S, Munoz DP, Theeuwes J. A retinotopic attentional trace after saccadic eye movements: evidence from event-related potentials. J Cogn Neurosci 2013; 25:1563-77. [PMID: 23530898 DOI: 10.1162/jocn_a_00390]
Abstract
Saccadic eye movements are a major source of disruption to visual stability, yet we experience little of this disruption. We can keep track of the same object across multiple saccades. It is generally assumed that visual stability is due to the process of remapping, in which retinotopically organized maps are updated to compensate for the retinal shifts caused by eye movements. Recent behavioral and ERP evidence suggests that visual attention is also remapped, but that it may still leave a residual retinotopic trace immediately after a saccade. The current study was designed to further examine electrophysiological evidence for such a retinotopic trace by recording ERPs elicited by stimuli that were presented immediately after a saccade (80 msec SOA). Participants were required to maintain attention at a specific location (and to memorize this location) while making a saccadic eye movement. Immediately after the saccade, a visual stimulus was briefly presented at either the attended location (the same spatiotopic location), a location that matched the attended location retinotopically (the same retinotopic location), or one of two control locations. ERP data revealed an enhanced P1 amplitude for the stimulus presented at the retinotopically matched location, but a significant attenuation for probes presented at the original attended location. These results are consistent with the hypothesis that visuospatial attention lingers in retinotopic coordinates immediately following gaze shifts.
Affiliation(s)
- Durk Talsma
- Department of Experimental Psychology, Faculty of Psychology and Educational Sciences, Ghent University, Henri Dunantlaan 2, 9000 Gent, Belgium.
41
Spatial reference frame of incidentally learned attention. Cognition 2013; 126:378-90. [DOI: 10.1016/j.cognition.2012.10.011]
42
Mathôt S, Theeuwes J. A reinvestigation of the reference frame of the tilt-adaptation aftereffect. Sci Rep 2013; 3:1152. [PMID: 23359857 PMCID: PMC3556595 DOI: 10.1038/srep01152]
Abstract
The tilt-adaptation aftereffect (TAE) is the phenomenon that prolonged perception of a tilted ‘adapter’ stimulus affects the perceived tilt of a subsequent ‘tester’ stimulus. Although it is clear that TAE is strongest when adapter and tester are presented at the same location, the reference frame of the effect is debated. Some authors have reported that TAE is spatiotopic (world centred): It occurs when adapter and tester are presented at the same display location, even when this corresponds to different retinal locations. Others have reported that TAE is exclusively retinotopic (eye centred): It occurs only when adapter and tester are presented at the same retinal location, even when this corresponds to different display locations. Because this issue is crucial for models of transsaccadic perception, we reinvestigated the reference frame of TAE. We report that TAE is exclusively retinotopic, supporting the notion that there is no transsaccadic integration of low-level visual information.
43
Golomb JD, Kanwisher N. Higher level visual cortex represents retinotopic, not spatiotopic, object location. Cereb Cortex 2012; 22:2794-810. [PMID: 22190434 PMCID: PMC3491766 DOI: 10.1093/cercor/bhr357]
Abstract
The crux of vision is to identify objects and determine their locations in the environment. Although initial visual representations are necessarily retinotopic (eye centered), interaction with the real world requires spatiotopic (absolute) location information. We asked whether higher level human visual cortex, which is important for stable object recognition and action, contains information about retinotopic and/or spatiotopic object position. Using functional magnetic resonance imaging multivariate pattern analysis techniques, we found information about both object category and object location in each of the ventral, dorsal, and early visual regions tested, replicating previous reports. By manipulating fixation position and stimulus position, we then tested whether these location representations were retinotopic or spatiotopic. Crucially, all location information was purely retinotopic. This pattern persisted when location information was irrelevant to the task, and even when spatiotopic (not retinotopic) stimulus position was explicitly emphasized. We also conducted a "searchlight" analysis across our entire scanned volume to explore additional cortex but again found predominantly retinotopic representations. The lack of explicit spatiotopic representations suggests that spatiotopic object position may instead be computed indirectly and continually reconstructed with each eye movement. Thus, despite our subjective impression that visual information is spatiotopic, even in higher level visual cortex, object location continues to be represented in retinotopic coordinates.
Affiliation(s)
- Julie D Golomb
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
44
Satel J, Wang Z. Investigating a two causes theory of inhibition of return. Exp Brain Res 2012; 223:469-78. [PMID: 23111426 DOI: 10.1007/s00221-012-3274-6]
Abstract
It has recently been demonstrated that there are independent sensory and motor mechanisms underlying inhibition of return (IOR) when measured with oculomotor responses (Wang et al., Exp Brain Res 218:441-453, 2012). However, these results are seemingly in conflict with previous empirical results which led to the proposal that there are two mutually exclusive flavors of IOR (Taylor and Klein, J Exp Psychol Hum Percept Perform 26:1639-1656, 2000). The observed differences in empirical results across these studies, and the theoretical frameworks that were proposed based on them, are likely due to differences in the experimental designs. The current experiments establish that the additive sensory and motor contributions to IOR do not depend on target type, repeated spatiotopic stimulation, attentional control settings, or a temporal gap between fixation offset and cue onset, when measured with saccadic responses. Furthermore, our experiments show that the motor mechanism proposed by Wang et al. (2012) is likely restricted to the oculomotor system, since the additivity effect does not carry over into the manual response modality.
Affiliation(s)
- Jason Satel
- Faculty of Computer Science, Dalhousie University, Halifax, Canada.
45

46
Oculomotor inhibition of return: How soon is it “recoded” into spatiotopic coordinates? Atten Percept Psychophys 2012; 74:1145-53. [DOI: 10.3758/s13414-012-0312-1]
47
Inhibition of return: a "depth-blind" mechanism? Acta Psychol (Amst) 2012; 140:75-80. [PMID: 22465912 DOI: 10.1016/j.actpsy.2012.02.011]
Abstract
When attention is oriented to a peripheral visual event, observers respond faster to stimuli presented at a cued location than at an uncued location. Following initial reaction-time facilitation, responses are slower to stimuli subsequently displayed at the cued location, an effect known as inhibition of return (IOR). Both facilitatory and inhibitory effects have been extensively investigated in two-dimensional space. Facilitation has also been documented in three-dimensional space; however, the presence of IOR in 3D space is unclear, possibly because IOR has not been evaluated in an empty 3D space. Determining whether IOR is sensitive to the depth plane of stimuli, or whether only their bi-dimensional location is inhibited, may clarify the nature of IOR. To address this issue, we used an attentional cueing paradigm in three-dimensional (3D) space. Results obtained from fourteen participants showed IOR components in 3D space when binocular disparity was used to induce depth. We conclude that attentional orienting in depth operates as efficiently as in bi-dimensional space.
48
Foley NC, Grossberg S, Mingolla E. Neural dynamics of object-based multifocal visual spatial attention and priming: object cueing, useful-field-of-view, and crowding. Cogn Psychol 2012; 65:77-117. [PMID: 22425615 DOI: 10.1016/j.cogpsych.2012.02.001]
Abstract
How are spatial and object attention coordinated to achieve rapid object learning and recognition during eye movement search? How do prefrontal priming and parietal spatial mechanisms interact to determine the reaction time costs of intra-object attention shifts, inter-object attention shifts, and shifts between visible objects and covertly cued locations? What factors underlie individual differences in the timing and frequency of such attentional shifts? How do transient and sustained spatial attentional mechanisms work and interact? How can volition, mediated via the basal ganglia, influence the span of spatial attention? A neural model is developed of how spatial attention in the where cortical stream coordinates view-invariant object category learning in the what cortical stream under free viewing conditions. The model simulates psychological data about the dynamics of covert attention priming and switching requiring multifocal attention without eye movements. The model predicts how "attentional shrouds" are formed when surface representations in cortical area V4 resonate with spatial attention in posterior parietal cortex (PPC) and prefrontal cortex (PFC), while shrouds compete among themselves for dominance. Winning shrouds support invariant object category learning, and active surface-shroud resonances support conscious surface perception and recognition. Attentive competition between multiple objects and cues simulates reaction-time data from the two-object cueing paradigm. The relative strength of sustained surface-driven and fast-transient motion-driven spatial attention controls individual differences in reaction time for invalid cues. Competition between surface-driven attentional shrouds controls individual differences in detection rate of peripheral targets in useful-field-of-view tasks. The model proposes how the strength of competition can be mediated, through learning or momentary changes in volition, by the basal ganglia. A new explanation of crowding shows how the cortical magnification factor, among other variables, can cause multiple object surfaces to share a single surface-shroud resonance, thereby preventing recognition of the individual objects.
Affiliation(s)
- Nicholas C Foley
- Center for Adaptive Systems, Department of Cognitive and Neural Systems, Boston University, 677 Beacon Street, Boston, MA 02215, USA
49
Wang Z, Satel J, Klein RM. Sensory and motor mechanisms of oculomotor inhibition of return. Exp Brain Res 2012; 218:441-53. [DOI: 10.1007/s00221-012-3033-8]
50
Crespi S, Biagi L, d'Avossa G, Burr DC, Tosetti M, Morrone MC. Spatiotopic coding of BOLD signal in human visual cortex depends on spatial attention. PLoS One 2011; 6:e21661. [PMID: 21750720 PMCID: PMC3131281 DOI: 10.1371/journal.pone.0021661]
Abstract
The neural substrate of the phenomenological experience of a stable visual world remains obscure. One possible mechanism would be to construct spatiotopic neural maps where the response is selective to the position of the stimulus in external space, rather than to retinal eccentricities, but evidence for these maps has been inconsistent. Here we show, with fMRI, that when human subjects perform concomitantly a demanding attentive task on stimuli displayed at the fovea, BOLD responses evoked by moving stimuli irrelevant to the task were mostly tuned in retinotopic coordinates. However, under more unconstrained conditions, where subjects could attend easily to the motion stimuli, BOLD responses were tuned not in retinal but in external coordinates (spatiotopic selectivity) in many visual areas, including MT, MST, LO and V6, agreeing with our previous fMRI study. These results indicate that spatial attention may play an important role in mediating spatiotopic selectivity.
Affiliation(s)
- Sofia Crespi
- Department of Psychology, Università Degli Studi di Firenze, Florence, Italy
- Department of Psychology, Università Vita-Salute San Raffaele, Milan, Italy
- Laura Biagi
- Fondazione Stella Maris, Calambrone, Pisa, Italy
- Giovanni d'Avossa
- School of Psychology Adeilad Brigantia, Bangor University, Bangor, United Kingdom
- David C. Burr
- Department of Psychology, Università Degli Studi di Firenze, Florence, Italy
- Istituto di Neuroscienze, CNR, Pisa, Italy
- Maria Concetta Morrone
- Department of Physiological Sciences, University of Pisa, Pisa, Italy
- Department of Robotic, Brain and Cognitive Sciences, Istituto Italiano di Tecnologia, Genova, Italy