1
Leticevscaia O, Brandman T, Peelen MV. Scene context and attention independently facilitate MEG decoding of object category. Vision Res 2024; 224:108484. PMID: 39260230; DOI: 10.1016/j.visres.2024.108484.
Abstract
Many of the objects we encounter in our everyday environments would be hard to recognize without any expectations about these objects. For example, a distant silhouette may be perceived as a car because we expect objects of that size, positioned on a road, to be cars. Reflecting the influence of such expectations on visual processing, neuroimaging studies have shown that when objects are poorly visible, expectations derived from scene context facilitate the representations of these objects in visual cortex from around 300 ms after scene onset. The current magnetoencephalography (MEG) study tested whether this facilitation occurs independently of attention and task relevance. Participants viewed degraded objects alone or within scene context while they either attended the scenes (attended condition) or the fixation cross (unattended condition), also temporally directing attention away from the scenes. Results showed that at 300 ms after stimulus onset, multivariate classifiers trained to distinguish clearly visible animate vs inanimate objects generalized to distinguish degraded objects in scenes better than degraded objects alone, despite the added clutter of the scene background. Attention also modulated object representations at this latency, with better category decoding in the attended than the unattended condition. The modulatory effects of context and attention were independent of each other. Finally, data from the current study and a previous study were combined (N = 51) to provide a more detailed temporal characterization of contextual facilitation. These results extend previous work by showing that facilitatory scene-object interactions are independent of the specific task performed on the visual input.
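As a rough illustration of the time-resolved cross-decoding logic described in this abstract (train a classifier on clearly visible objects, test its generalization to degraded objects at each timepoint), a minimal sketch is given below. It is not the authors' analysis pipeline; the array shapes, the linear classifier, and all variable names are illustrative assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

def cross_decode(X_train, y_train, X_test, y_test):
    """Time-resolved cross-decoding: fit a classifier at each timepoint on one
    stimulus set and score its generalization to another.
    X_*: arrays of shape (trials, sensors, timepoints); y_*: category labels."""
    n_times = X_train.shape[2]
    accuracy = np.zeros(n_times)
    for t in range(n_times):
        clf = make_pipeline(StandardScaler(), LinearSVC())
        clf.fit(X_train[:, :, t], y_train)                # e.g. animate vs inanimate, clear objects
        accuracy[t] = clf.score(X_test[:, :, t], y_test)  # generalization to degraded objects
    return accuracy

# Hypothetical comparison of the two degraded conditions:
# acc_scene = cross_decode(X_clear, y_clear, X_degraded_in_scene, y_scene)
# acc_alone = cross_decode(X_clear, y_clear, X_degraded_alone, y_alone)
```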
Affiliation(s)
- Olga Leticevscaia
- University of Reading, Centre for Integrative Neuroscience and Neurodynamics, United Kingdom
- Talia Brandman
- Department of Brain Sciences, Weizmann Institute of Science, Rehovot 76100, Israel
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands.
2
Quek GL, de Heering A. Visual periodicity reveals distinct attentional signatures for face and non-face categories. Cereb Cortex 2024; 34:bhae228. PMID: 38879816; PMCID: PMC11180377; DOI: 10.1093/cercor/bhae228.
Abstract
Observers can selectively deploy attention to regions of space, moments in time, specific visual features, individual objects, and even specific high-level categories, for example when keeping an eye out for dogs while jogging. Here, we exploited visual periodicity to examine how category-based attention differentially modulates selective neural processing of face and non-face categories. We combined electroencephalography with a novel frequency-tagging paradigm capable of capturing selective neural responses for multiple visual categories contained within the same rapid image stream (faces/birds in Exp 1; houses/birds in Exp 2). We found that the pattern of attentional enhancement and suppression for face-selective processing is unique compared to other object categories: where attending to non-face objects strongly enhances their selective neural signals during a later stage of processing (300-500 ms), attentional enhancement of face-selective processing is both earlier and comparatively more modest. Moreover, only the selective neural response for faces appears to be actively suppressed by attending towards an alternate visual category. These results underscore the special status that faces hold within the human visual system, and highlight the utility of visual periodicity as a powerful tool for indexing selective neural processing of multiple visual categories contained within the same image sequence.
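As a generic illustration of how frequency-tagged, category-selective responses of this kind are commonly quantified, the sketch below computes a signal-to-noise ratio at the tagging frequency relative to neighbouring FFT bins. It is not the authors' pipeline; the sampling rate, tag frequency, and function names are assumptions.

```python
import numpy as np

def tagged_snr(signal, sfreq, tag_freq, n_neighbors=10):
    """SNR at a tagging frequency: amplitude of the tagged FFT bin divided by
    the mean amplitude of surrounding bins (excluding immediate neighbours).
    signal: 1-D array, e.g. one EEG channel averaged across epochs."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sfreq)
    tag_bin = int(np.argmin(np.abs(freqs - tag_freq)))
    idx = np.arange(tag_bin - n_neighbors, tag_bin + n_neighbors + 1)
    idx = idx[np.abs(idx - tag_bin) > 1]   # drop the tag bin and its direct neighbours
    return spectrum[tag_bin] / spectrum[idx].mean()

# e.g. a category response tagged at 1.2 Hz inside a faster image stream
# (values illustrative): tagged_snr(channel, sfreq=250, tag_freq=1.2)
```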
Affiliation(s)
- Genevieve L Quek
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Westmead Innovation Quarter, 160 Hawkesbury Rd, Westmead NSW 2145, Australia
- Adélaïde de Heering
- Unité de Recherche en Neurosciences Cognitives (UNESCOG), ULB Neuroscience Institute (UNI), Center for Research in Cognition & Neurosciences (CRCN), Université libre de Bruxelles (ULB), Avenue Franklin Roosevelt, 50-CP191, 1050 Brussels, Belgium
3
Gayet S, Battistoni E, Thorat S, Peelen MV. Searching near and far: The attentional template incorporates viewing distance. J Exp Psychol Hum Percept Perform 2024; 50:216-231. PMID: 38376937; PMCID: PMC7616437; DOI: 10.1037/xhp0001172.
Abstract
According to theories of visual search, observers generate a visual representation of the search target (the "attentional template") that guides spatial attention toward target-like visual input. In real-world vision, however, objects produce vastly different visual input depending on their location: your car produces a retinal image that is 10 times smaller when it is parked 50 m away than when it is parked 5 m away. Across four experiments, we investigated whether the attentional template incorporates viewing distance when observers search for familiar object categories. On each trial, participants were precued to search for a car or person in the near or far plane of an outdoor scene. In "search trials," the scene reappeared and participants had to indicate whether the search target was present or absent. In intermixed "catch trials," two silhouettes were briefly presented on either side of fixation (matching the shape and/or predicted size of the search target), one of which was followed by a probe stimulus. We found that participants were more accurate at reporting the location (Experiments 1 and 2) and orientation (Experiment 3) of probe stimuli when they were presented at the location of size-matching silhouettes. Thus, attentional templates incorporate the predicted size of an object based on the current viewing distance. This was only the case, however, when silhouettes also matched the shape of the search target (Experiment 2). We conclude that attentional templates for finding objects in scenes are shaped by a combination of category-specific attributes (shape) and context-dependent expectations about the likely appearance (size) of these objects at the current viewing location.
Affiliation(s)
- Surya Gayet
- Experimental Psychology, Helmholtz Institute, Utrecht University
- Sushrut Thorat
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
4
Dube B, Pidaparthi L, Golomb JD. Visual Distraction Disrupts Category-tuned Attentional Filters in Ventral Visual Cortex. J Cogn Neurosci 2022; 34:1521-1533. PMID: 35579979; DOI: 10.1162/jocn_a_01870.
Abstract
Our behavioral goals shape how we process information via attentional filters that prioritize goal-relevant information, dictating both where we attend and what we attend to. When something unexpected or salient appears in the environment, it captures our spatial attention. Extensive research has focused on the spatiotemporal aspects of attentional capture, but what happens to concurrent nonspatial filters during visual distraction? Here, we demonstrate a novel, broader consequence of distraction: widespread disruption to filters that regulate category-specific object processing. We recorded fMRI while participants viewed arrays of face/house hybrid images. On distractor-absent trials, we found robust evidence for the standard signature of category-tuned attentional filtering: greater BOLD activation in fusiform face area during attend-faces blocks and in parahippocampal place area during attend-houses blocks. However, on trials where a salient distractor (white rectangle) flashed abruptly around a nontarget location, not only was spatial attention captured, but the concurrent category-tuned attentional filter was disrupted, revealing a boost in activation for the to-be-ignored category. This disruption was robust, resulting in errant processing (and, early on, prioritization) of goal-inconsistent information. These findings provide a direct test of the filter disruption theory: that in addition to disrupting spatial attention, distraction also disrupts nonspatial attentional filters tuned to goal-relevant information. Moreover, these results reveal that, under certain circumstances, the filter disruption may be so profound as to induce a full reversal of the attentional control settings, which carries novel implications for both theory and real-world perception.
5
Thorat S, Peelen MV. Body shape as a visual feature: Evidence from spatially-global attentional modulation in human visual cortex. Neuroimage 2022; 255:119207. PMID: 35427768; DOI: 10.1016/j.neuroimage.2022.119207.
Abstract
Feature-based attention modulates visual processing beyond the focus of spatial attention. Previous work has reported such spatially-global effects for low-level features such as color and orientation, as well as for faces. Here, using fMRI, we provide evidence for spatially-global attentional modulation for human bodies. Participants were cued to search for one of six object categories in two vertically-aligned images. Two additional, horizontally-aligned, images were simultaneously presented but were never task-relevant across three experimental sessions. Analyses time-locked to the objects presented in these task-irrelevant images revealed that responses evoked by body silhouettes were modulated by the participants' top-down attentional set, becoming more body-selective when participants searched for bodies in the task-relevant images. These effects were observed both in univariate analyses of the body-selective cortex and in multivariate analyses of the object-selective visual cortex. Additional analyses showed that this modulation reflected response gain rather than a bias induced by the cues, and that it reflected enhancement of body responses rather than suppression of non-body responses. These findings provide evidence for a spatially-global attention mechanism for body shapes, supporting the rapid and parallel detection of conspecifics in our environment.
Affiliation(s)
- Sushrut Thorat
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands.
6
Castelhano MS, Krzyś K. Rethinking Space: A Review of Perception, Attention, and Memory in Scene Processing. Annu Rev Vis Sci 2020; 6:563-586. PMID: 32491961; DOI: 10.1146/annurev-vision-121219-081745.
Abstract
Scene processing is fundamentally influenced and constrained by spatial layout and spatial associations with objects. However, semantic information has played a vital role in propelling our understanding of real-world scene perception forward. In this article, we review recent advances in assessing how spatial layout and spatial relations influence scene processing. We examine the organization of the larger environment and how we take full advantage of spatial configurations independently of semantic information. We demonstrate that a clear differentiation of spatial from semantic information is necessary to advance research in the field of scene processing.
Affiliation(s)
- Monica S Castelhano
- Department of Psychology, Queen's University, Kingston, Ontario K7L 3N6, Canada
- Karolina Krzyś
- Department of Psychology, Queen's University, Kingston, Ontario K7L 3N6, Canada
7
Zhang X, Sun Y, Liu W, Zhang Z, Wu B. Twin mechanisms: Rapid scene recognition involves both feedforward and feedback processing. Acta Psychol (Amst) 2020; 208:103101. PMID: 32485339; DOI: 10.1016/j.actpsy.2020.103101.
Abstract
The low spatial frequency (LSF) component of visual information rapidly conveys coarse information for global perception, while the high spatial frequency (HSF) component delivers fine-grained information for detailed analysis. Feedforward theorists hold that a coarse-to-fine process is sufficient for rapid scene recognition. Using the response priming paradigm, the present study examined how different spatial frequencies interact during rapid scene recognition. The response priming paradigm posits that as long as the prime slide can be rapidly recognized, the prime-target system is behaviorally equivalent to a feedforward system. With broad spatial frequency images, Experiment 1 revealed a typical response priming effect. In Experiment 2, however, when the HSF and LSF components of the same pictures were presented separately, neither the LSF-to-HSF sequence nor the HSF-to-LSF sequence reproduced the response priming effect. These results demonstrate that neither the LSF nor the HSF component alone is sufficient for rapid scene recognition and, further, that the integration of different spatial frequencies requires early feedback loops. These findings support the involvement of local recurrent processing loops within early visual cortex during rapid scene recognition.
8
Battistoni E, Kaiser D, Hickey C, Peelen MV. The time course of spatial attention during naturalistic visual search. Cortex 2020; 122:225-234. DOI: 10.1016/j.cortex.2018.11.018.
9
Krasovskaya S, MacInnes WJ. Salience Models: A Computational Cognitive Neuroscience Review. Vision (Basel) 2019; 3:E56. PMID: 31735857; PMCID: PMC6969943; DOI: 10.3390/vision3040056.
Abstract
The seminal model by Laurent Itti and Christof Koch demonstrated that we can compute the entire flow of visual processing from input to resulting fixations. Despite many replications and follow-ups, few have matched the impact of the original model, so what made this model so groundbreaking? We have selected five key contributions that distinguish the original salience model by Itti and Koch; namely, its contribution to our theoretical, neural, and computational understanding of visual processing, as well as the spatial and temporal predictions for fixation distributions. During the last 20 years, advances in the field have brought up various techniques and approaches to salience modelling, many of which tried to improve or add to the initial Itti and Koch model. One of the most recent trends has been to adopt the computational power of deep learning neural networks; however, this has also shifted their primary focus to spatial classification. We present a review of recent approaches to modelling salience, starting from direct variations of the Itti and Koch salience model to sophisticated deep-learning architectures, and discuss the models from the point of view of their contribution to computational cognitive neuroscience.
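For readers unfamiliar with the architecture this review takes as its starting point, the sketch below reduces the Itti and Koch idea to its core: per-feature centre-surround contrast maps are normalised and summed into a single salience map. Real implementations use multi-scale image pyramids and orientation (Gabor) channels; the channel choices and parameters here are illustrative only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround(feature_map, center_sigma=2.0, surround_sigma=8.0):
    """Centre-surround contrast: difference between a fine and a coarse blur."""
    return np.abs(gaussian_filter(feature_map, center_sigma)
                  - gaussian_filter(feature_map, surround_sigma))

def salience_map(image_rgb):
    """Toy salience map from intensity and two colour-opponency channels."""
    img = image_rgb.astype(float)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    channels = [
        (r + g + b) / 3.0,     # intensity
        r - g,                 # red-green opponency
        b - (r + g) / 2.0,     # blue-yellow opponency
    ]
    maps = [center_surround(c) for c in channels]
    maps = [m / (m.max() + 1e-9) for m in maps]   # crude per-map normalisation
    return sum(maps)
```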
Affiliation(s)
- Sofia Krasovskaya
- Vision Modelling Laboratory, Faculty of Social Science, National Research University Higher School of Economics, 101000 Moscow, Russia
- School of Psychology, National Research University Higher School of Economics, 101000 Moscow, Russia
- W. Joseph MacInnes
- Vision Modelling Laboratory, Faculty of Social Science, National Research University Higher School of Economics, 101000 Moscow, Russia
- School of Psychology, National Research University Higher School of Economics, 101000 Moscow, Russia
10
Võ MLH, Boettcher SEP, Draschkow D. Reading scenes: how scene grammar guides attention and aids perception in real-world environments. Curr Opin Psychol 2019; 29:205-210. DOI: 10.1016/j.copsyc.2019.03.009.
11
Vasser M, Vuillaume L, Cleeremans A, Aru J. Waving goodbye to contrast: self-generated hand movements attenuate visual sensitivity. Neurosci Conscious 2019; 2019:niy013. PMID: 30687519; PMCID: PMC6342231; DOI: 10.1093/nc/niy013.
Abstract
It is well known that the human brain continuously predicts the sensory consequences of its own body movements, which typically results in sensory attenuation. Yet, the extent and exact mechanisms underlying sensory attenuation are still debated. To explore this issue, we asked participants to decide which of two visual stimuli was of higher contrast in a virtual reality situation where one of the stimuli could appear behind the participants’ invisible moving hand or not. Over two experiments, we measured the effects of such “virtual occlusion” on first-order sensitivity and on metacognitive monitoring. Our findings show that self-generated hand movements reduced the apparent contrast of the stimulus. This result can be explained by the active inference theory. Moreover, sensory attenuation seemed to affect only first-order sensitivity and not (second-order) metacognitive judgments of confidence.
Affiliation(s)
- Madis Vasser
- Institute of Computer Science, University of Tartu, Estonia
- Laurène Vuillaume
- Consciousness, Cognition, and Computation Group (CO3), Center for Research in Cognition and Neurosciences (CRCN), ULB Neuroscience Institute (UNI), Université Libre de Bruxelles (ULB), Belgium
- Axel Cleeremans
- Consciousness, Cognition, and Computation Group (CO3), Center for Research in Cognition and Neurosciences (CRCN), ULB Neuroscience Institute (UNI), Université Libre de Bruxelles (ULB), Belgium
- Jaan Aru
- Consciousness, Cognition, and Computation Group (CO3), Center for Research in Cognition and Neurosciences (CRCN), ULB Neuroscience Institute (UNI), Université Libre de Bruxelles (ULB), Belgium
- Institute of Biology, Humboldt University Berlin, Germany
- Institute of Penal Law, University of Tartu, Estonia
12
Lindsay GW, Miller KD. How biological attention mechanisms improve task performance in a large-scale visual system model. eLife 2018; 7:e38105. PMID: 30272560; PMCID: PMC6207429; DOI: 10.7554/elife.38105.
Abstract
How does attentional modulation of neural activity enhance performance? Here we use a deep convolutional neural network as a large-scale model of the visual system to address this question. We model the feature similarity gain model of attention, in which attentional modulation is applied according to neural stimulus tuning. Using a variety of visual tasks, we show that neural modulations of the kind and magnitude observed experimentally lead to performance changes of the kind and magnitude observed experimentally. We find that, at earlier layers, attention applied according to tuning does not successfully propagate through the network, and has a weaker impact on performance than attention applied according to values computed for optimally modulating higher areas. This raises the question of whether biological attention might be applied at least in part to optimize function rather than strictly according to tuning. We suggest a simple experiment to distinguish these alternatives.
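The feature similarity gain model referred to here amounts to a multiplicative gain on each unit's activity that grows with how strongly the unit is tuned to the attended category. A minimal sketch for one convolutional layer is given below; the tuning matrix, the strength parameter beta, and the array layout are assumptions, not the paper's exact implementation.

```python
import numpy as np

def feature_similarity_gain(activations, tuning, attended_category, beta=0.5):
    """Channel-wise attentional gain scaled by category tuning.
    activations: (channels, height, width) output of one CNN layer
    tuning:      (channels, n_categories), e.g. z-scored mean response of each
                 channel to images of each category
    beta:        overall attention strength."""
    gain = 1.0 + beta * tuning[:, attended_category]   # stronger tuning -> larger gain
    return activations * gain[:, None, None]

# In a forward pass the modulated activations would simply replace the layer's
# output before propagating to the next layer.
```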
Affiliation(s)
- Grace W Lindsay
- Center for Theoretical Neuroscience, College of Physicians and Surgeons, Columbia University, New York, United States
- Mortimer B. Zuckerman Mind Brain Behaviour Institute, Columbia University, New York, United States
- Kenneth D Miller
- Center for Theoretical Neuroscience, College of Physicians and Surgeons, Columbia University, New York, United States
- Mortimer B. Zuckerman Mind Brain Behaviour Institute, Columbia University, New York, United States
- Swartz Program in Theoretical Neuroscience, Kavli Institute for Brain Science, New York, United States
- Department of Neuroscience, Columbia University, New York, United States
13
Matthews J, Schröder P, Kaunitz L, van Boxtel JJA, Tsuchiya N. Conscious access in the near absence of attention: critical extensions on the dual-task paradigm. Philos Trans R Soc Lond B Biol Sci 2018; 373:20170352. PMID: 30061465; PMCID: PMC6074075; DOI: 10.1098/rstb.2017.0352.
Abstract
Whether conscious perception requires attention remains a topic of intense debate. While certain complex stimuli such as faces and animals can be discriminated outside the focus of spatial attention, many simpler stimuli cannot. Because such evidence was obtained in dual-task paradigms involving no measure of subjective insight, it remains unclear whether accurate discrimination of unattended complex stimuli is the product of automatic, unconscious processing, as in blindsight, or is accessible to consciousness. Furthermore, these paradigms typically require extensive training over many hours, bringing into question whether this phenomenon can be achieved in naive subjects. We developed a novel dual-task paradigm incorporating confidence ratings to calculate metacognition and adaptive staircase procedures to reduce training. With minimal training, subjects were able to discriminate face-gender in the near absence of top-down attentional amplification, while also displaying above-chance metacognitive accuracy. By contrast, the discrimination of simple coloured discs was significantly impaired and metacognitive accuracy dropped to chance-level, even in a partial-report condition. In a final experiment, we used blended face/disc stimuli and confirmed that face-gender but not colour orientation can be discriminated in the dual task. Our results show direct evidence for metacognitive conscious access in the near absence of attention for complex, but not simple, stimuli. This article is part of the theme issue 'Perceptual consciousness and cognitive access'.
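One common way to turn confidence ratings into the kind of metacognitive accuracy measure invoked here is the area under the type-2 ROC: how well confidence discriminates correct from incorrect responses. The sketch below uses the Mann-Whitney formulation and is a generic illustration; it may differ from the estimator the authors actually used.

```python
import numpy as np

def type2_auroc(correct, confidence):
    """Area under the type-2 ROC (0.5 = chance metacognitive accuracy).
    correct:    per-trial booleans (was the first-order response correct?)
    confidence: per-trial confidence ratings on any ordinal scale."""
    correct = np.asarray(correct, dtype=bool)
    confidence = np.asarray(confidence, dtype=float)
    conf_correct = confidence[correct]
    conf_incorrect = confidence[~correct]
    # P(confidence on a correct trial > confidence on an incorrect trial),
    # counting ties as one half
    greater = (conf_correct[:, None] > conf_incorrect[None, :]).mean()
    ties = (conf_correct[:, None] == conf_incorrect[None, :]).mean()
    return greater + 0.5 * ties

# type2_auroc([1, 1, 0, 1, 0], [4, 3, 2, 4, 1]) -> 1.0 (confidence tracks accuracy)
```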
Affiliation(s)
- Julian Matthews
- Cognition and Philosophy Lab, Faculty of Arts, Monash University, Clayton, Victoria 3800, Australia
- School of Psychological Sciences, Monash University, Clayton, Victoria 3800, Australia
- Pia Schröder
- School of Psychological Sciences, Monash University, Clayton, Victoria 3800, Australia
- Lisandro Kaunitz
- School of Psychological Sciences, Monash University, Clayton, Victoria 3800, Australia
- Jeroen J A van Boxtel
- School of Psychological Sciences, Monash University, Clayton, Victoria 3800, Australia
- Monash Institute of Cognitive and Clinical Neurosciences, Monash University, Clayton, Victoria 3800, Australia
- Naotsugu Tsuchiya
- School of Psychological Sciences, Monash University, Clayton, Victoria 3800, Australia
- Monash Institute of Cognitive and Clinical Neurosciences, Monash University, Clayton, Victoria 3800, Australia
14
Reward Selectively Modulates the Lingering Neural Representation of Recently Attended Objects in Natural Scenes. J Neurosci 2017. PMID: 28630254; DOI: 10.1523/jneurosci.0684-17.2017.
Abstract
Theories of reinforcement learning and approach behavior suggest that reward can increase the perceptual salience of environmental stimuli, ensuring that potential predictors of outcome are noticed in the future. However, outcome commonly follows visual processing of the environment, occurring even when potential reward cues have long disappeared. How can reward feedback retroactively cause now-absent stimuli to become attention-drawing in the future? One possibility is that reward and attention interact to prime lingering visual representations of attended stimuli that sustain through the interval separating stimulus and outcome. Here, we test this idea using multivariate pattern analysis of fMRI data collected from male and female humans. While in the scanner, participants searched for examples of target categories in briefly presented pictures of cityscapes and landscapes. Correct task performance was followed by reward feedback that could randomly have either high or low magnitude. Analysis showed that high-magnitude reward feedback boosted the lingering representation of target categories while reducing the representation of nontarget categories. The magnitude of this effect in each participant predicted the behavioral impact of reward on search performance in subsequent trials. Other analyses show that sensitivity to reward, as expressed in a personality questionnaire and in reactivity to reward feedback in the dopaminergic midbrain, predicted reward-elicited variance in lingering target and nontarget representations. Credit for rewarding outcome thus appears to be assigned to the target representation, causing the visual system to become sensitized for similar objects in the future. SIGNIFICANCE STATEMENT: How do reward-predictive visual stimuli become salient and attention-drawing? In the real world, reward cues precede outcome and reward is commonly received long after potential predictors have disappeared. How can the representation of environmental stimuli be affected by outcome that occurs later in time? Here, we show that reward acts on lingering representations of environmental stimuli that sustain through the interval between stimulus and outcome. Using naturalistic scene stimuli and multivariate pattern analysis of fMRI data, we show that reward boosts the representation of attended objects and reduces the representation of unattended objects. This interaction of attention and reward processing acts to prime vision for stimuli that may serve to predict outcome.
15
Battistoni E, Stein T, Peelen MV. Preparatory attention in visual cortex. Ann N Y Acad Sci 2017; 1396:92-107. PMID: 28253445; DOI: 10.1111/nyas.13320.
Abstract
Top-down attention is the mechanism that allows us to selectively process goal-relevant aspects of a scene while ignoring irrelevant aspects. A large body of research has characterized the effects of attention on neural activity evoked by a visual stimulus. However, attention also includes a preparatory phase before stimulus onset in which the attended dimension is internally represented. Here, we review neurophysiological, functional magnetic resonance imaging, magnetoencephalography, electroencephalography, and transcranial magnetic stimulation (TMS) studies investigating the neural basis of preparatory attention, both when attention is directed to a location in space and when it is directed to nonspatial stimulus attributes (content-based attention) ranging from low-level features to object categories. Results show that both spatial and content-based attention lead to increased baseline activity in neural populations that selectively code for the attended attribute. TMS studies provide evidence that this preparatory activity is causally related to subsequent attentional selection and behavioral performance. Attention thus acts by preactivating selective neurons in the visual cortex before stimulus onset. This appears to be a general mechanism that can operate on multiple levels of representation. We discuss the functional relevance of this mechanism, its limitations, and its relation to working memory, imagery, and expectation. We conclude by outlining open questions and future directions.
Affiliation(s)
- Elisa Battistoni
- Center for Mind/Brain Sciences, University of Trento, Rovereto, Italy
- Timo Stein
- Center for Mind/Brain Sciences, University of Trento, Rovereto, Italy
- Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands
- Marius V Peelen
- Center for Mind/Brain Sciences, University of Trento, Rovereto, Italy