1
Martinovic J, Boyanova A, Andersen SK. Division and spreading of attention across color. Cereb Cortex 2024; 34:bhae240. PMID: 38858841; PMCID: PMC11164655; DOI: 10.1093/cercor/bhae240.
Abstract
Biological systems must allocate limited perceptual resources to relevant elements in their environment. This often requires simultaneous selection of multiple elements from the same feature dimension (e.g. color). To establish the determinants of divided attentional selection of color, we conducted an experiment that used multicolored displays with four overlapping random dot kinematograms that differed only in hue. We manipulated (i) the requirement to focus attention on a single color or divide it between two colors; and (ii) the distances of distractor hues from target hues in a perceptual color space. We conducted a behavioral and an electroencephalographic experiment, in which each color was tagged with a specific flicker frequency, driving its own steady-state visual evoked potential. Behavioral and neural indices of attention showed several major consistencies. Concurrent selection halved the neural signature of target enhancement observed for single targets, consistent with an approximately equal division of limited resources between two hue-selective foci. Distractors interfered with behavioral performance in a context-dependent fashion, but their effects were asymmetric, indicating that perceptual distance did not adequately capture attentional distance. These asymmetries point towards an important role of higher-level mechanisms such as categorization and grouping-by-color in determining the efficiency of attentional allocation in complex, multicolored scenes.
Affiliation(s)
- Jasna Martinovic
- School of Philosophy, Psychology and Language Sciences, University of Edinburgh, 7 George Square, EH8 9JZ, Edinburgh, United Kingdom
- Antoniya Boyanova
- School of Psychology, University of Aberdeen, William Guild Building, AB24 3UB, Aberdeen, United Kingdom
- Søren K Andersen
- School of Psychology, University of Aberdeen, William Guild Building, AB24 3UB, Aberdeen, United Kingdom
- Department of Psychology, University of Southern Denmark, Campusvej 55, 5230 Odense, Denmark
2
Walper D, Bendixen A, Grimm S, Schubö A, Einhäuser W. Attention deployment in natural scenes: Higher-order scene statistics rather than semantics modulate the N2pc component. J Vis 2024; 24:7. PMID: 38848099; PMCID: PMC11166226; DOI: 10.1167/jov.24.6.7.
Abstract
Which properties of a natural scene affect visual search? We consider the alternative hypotheses that low-level statistics, higher-level statistics, semantics, or layout affect search difficulty in natural scenes. Across three experiments (n = 20 each), we used four different backgrounds that preserve distinct scene properties: (a) natural scenes (all experiments); (b) 1/f noise (pink noise, which preserves only low-level statistics and was used in Experiments 1 and 2); (c) textures that preserve low-level and higher-level statistics but not semantics or layout (Experiments 2 and 3); and (d) inverted (upside-down) scenes that preserve statistics and semantics but not layout (Experiment 2). We included "split scenes" that contained different backgrounds left and right of the midline (Experiment 1, natural/noise; Experiment 3, natural/texture). Participants searched for a Gabor patch that occurred at one of six locations (all experiments). Reaction times were faster for targets on noise and slower on inverted images, compared to natural scenes and textures. The N2pc component of the event-related potential, a marker of attentional selection, had a shorter latency and a higher amplitude for targets in noise than for all other backgrounds. The background contralateral to the target had an effect similar to that on the target side: noise led to faster reactions and shorter N2pc latencies than natural scenes, although we observed no difference in N2pc amplitude. There were no interactions between the target side and the non-target side. Together, this shows that, at least when searching for simple targets without semantic content of their own, natural scenes are more effective distractors than noise, and that this results from higher-order statistics rather than from semantics or layout.
Affiliation(s)
- Daniel Walper
- Physics of Cognition Group, Chemnitz University of Technology, Chemnitz, Germany
- Alexandra Bendixen
- Cognitive Systems Lab, Chemnitz University of Technology, Chemnitz, Germany
- https://www.tu-chemnitz.de/physik/SFKS/index.html.en
- Sabine Grimm
- Physics of Cognition Group, Chemnitz University of Technology, Chemnitz, Germany
- Cognitive Systems Lab, Chemnitz University of Technology, Chemnitz, Germany
- Anna Schubö
- Cognitive Neuroscience of Perception & Action, Philipps University Marburg, Marburg, Germany
- https://www.uni-marburg.de/en/fb04/team-schuboe
- Wolfgang Einhäuser
- Physics of Cognition Group, Chemnitz University of Technology, Chemnitz, Germany
- https://www.tu-chemnitz.de/physik/PHKP/index.html.en
3
Kumle L, Võ MLH, Nobre AC, Draschkow D. Multifaceted consequences of visual distraction during natural behaviour. Commun Psychol 2024; 2:49. PMID: 38812582; PMCID: PMC11129948; DOI: 10.1038/s44271-024-00099-0.
Abstract
Visual distraction is a ubiquitous aspect of everyday life. Studying the consequences of distraction during temporally extended tasks, however, is not tractable with traditional methods. Here we developed a virtual reality approach that segments complex behaviour into cognitive subcomponents, including encoding, visual search, working memory usage, and decision-making. Participants copied a model display by selecting objects from a resource pool and placing them into a workspace. By manipulating the distractibility of objects in the resource pool, we discovered interfering effects of distraction across the different cognitive subcomponents. We successfully traced the consequences of distraction all the way from overall task performance to the decision-making processes that gate memory usage. Distraction slowed down behaviour and increased costly body movements. Critically, distraction increased encoding demands, slowed visual search, and decreased reliance on working memory. Our findings illustrate that the effects of visual distraction during natural behaviour can be rather focal but nevertheless have cascading consequences.
Affiliation(s)
- Levi Kumle
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, UK
- Melissa L.-H. Võ
- Department of Psychology, Goethe University Frankfurt, Frankfurt, Germany
- Anna C. Nobre
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, UK
- Wu Tsai Institute and Department of Psychology, Yale University, New Haven, CT, USA
- Dejan Draschkow
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, UK
4
Chapman AF, Störmer VS. Representational structures as a unifying framework for attention. Trends Cogn Sci 2024; 28:416-427. PMID: 38280837; PMCID: PMC11290436; DOI: 10.1016/j.tics.2024.01.002.
Abstract
Our visual system consciously processes only a subset of the incoming information. Selective attention allows us to prioritize relevant inputs, and can be allocated to features, locations, and objects. Recent advances in feature-based attention suggest that several selection principles are shared across these domains and that many differences between the effects of attention on perceptual processing can be explained by differences in the underlying representational structures. Moving forward, it can thus be useful to assess how attention changes the structure of the representational spaces over which it operates, which include the spatial organization, feature maps, and object-based coding in visual cortex. This will ultimately add to our understanding of how attention changes the flow of visual information processing more broadly.
Affiliation(s)
- Angus F Chapman
- Department of Psychological and Brain Sciences, Boston University, Boston, MA, USA.
- Viola S Störmer
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA.
5
Jahn CI, Markov NT, Morea B, Daw ND, Ebitz RB, Buschman TJ. Learning attentional templates for value-based decision-making. Cell 2024; 187:1476-1489.e21. PMID: 38401541; DOI: 10.1016/j.cell.2024.01.041.
Abstract
Attention filters sensory inputs to enhance task-relevant information. It is guided by an "attentional template" that represents the stimulus features that are currently relevant. To understand how the brain learns and uses templates, we trained monkeys to perform a visual search task that required them to repeatedly learn new attentional templates. Neural recordings found that templates were represented across the prefrontal and parietal cortex in a structured manner, such that perceptually neighboring templates had similar neural representations. When the task changed, a new attentional template was learned by incrementally shifting the template toward rewarded features. Finally, we found that attentional templates transformed stimulus features into a common value representation that allowed the same decision-making mechanisms to deploy attention, regardless of the identity of the template. Altogether, our results provide insight into the neural mechanisms by which the brain learns to control attention and how attention can be flexibly deployed across tasks.
Affiliation(s)
- Caroline I Jahn
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08540, USA.
- Nikola T Markov
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08540, USA
- Britney Morea
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08540, USA
- Nathaniel D Daw
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08540, USA; Department of Psychology, Princeton University, Princeton, NJ 08540, USA
- R Becket Ebitz
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08540, USA; Department of Neurosciences, Université de Montréal, Montréal, QC H3C 3J7, Canada
- Timothy J Buschman
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08540, USA; Department of Psychology, Princeton University, Princeton, NJ 08540, USA.
6
Gayet S, Battistoni E, Thorat S, Peelen MV. Searching near and far: The attentional template incorporates viewing distance. J Exp Psychol Hum Percept Perform 2024; 50:216-231. PMID: 38376937; PMCID: PMC7616437; DOI: 10.1037/xhp0001172.
Abstract
According to theories of visual search, observers generate a visual representation of the search target (the "attentional template") that guides spatial attention toward target-like visual input. In real-world vision, however, objects produce vastly different visual input depending on their location: your car produces a retinal image that is 10 times smaller when it is parked 50 m away than when it is parked 5 m away. Across four experiments, we investigated whether the attentional template incorporates viewing distance when observers search for familiar object categories. On each trial, participants were precued to search for a car or person in the near or far plane of an outdoor scene. In "search trials," the scene reappeared and participants had to indicate whether the search target was present or absent. In intermixed "catch trials," two silhouettes were briefly presented on either side of fixation (matching the shape and/or predicted size of the search target), one of which was followed by a probe stimulus. We found that participants were more accurate at reporting the location (Experiments 1 and 2) and orientation (Experiment 3) of probe stimuli when they were presented at the location of size-matching silhouettes. Thus, attentional templates incorporate the predicted size of an object based on the current viewing distance. This was only the case, however, when silhouettes also matched the shape of the search target (Experiment 2). We conclude that attentional templates for finding objects in scenes are shaped by a combination of category-specific attributes (shape) and context-dependent expectations about the likely appearance (size) of these objects at the current viewing location.
Affiliation(s)
- Surya Gayet
- Experimental Psychology, Helmholtz Institute, Utrecht University
- Sushrut Thorat
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
7
Yu X, Rahim RA, Geng JJ. Task-adaptive changes to the target template in response to distractor context: Separability versus similarity. J Exp Psychol Gen 2024; 153:564-572. PMID: 37917441; PMCID: PMC10843062; DOI: 10.1037/xge0001507.
Abstract
Theories of attention hypothesize the existence of an attentional template that contains target features in working or long-term memory. It is frequently assumed that the template contains a veridical copy of the target, but recent studies suggest that this is not true when the distractors are linearly separable from the target. In such cases, target representations shift "off-veridical" in response to the distractor context, presumably because doing so is adaptive and increases the representational distinctiveness of targets from distractors. However, some have argued that the shifts may be entirely explained by perceptual biases created by simultaneous color contrast. Here we address this debate and test the more general hypothesis that the target template is adaptively shaped by elements of the distractor context needed to distinguish targets from distractors. We used a two-dimensional target and separately manipulated the linear separability of one dimension (color) and the visual similarity of the other (orientation). We found that target shifting along the linearly separable color dimension was dependent on the similarity of targets-to-distractors along the other dimension. The target representations were consistent with a postexperiment strategy questionnaire in which participants reported using color more when orientation was hard to use, and orientation more when it was easier to use. We conclude that the target template is task-adaptive and exploits features in the distractor context that most predictably distinguish targets from distractors to increase visual search efficiency.
Affiliation(s)
- Xinger Yu
- Center for Mind and Brain, University of California, Davis
- Raisa A. Rahim
- Center for Mind and Brain, University of California, Davis
- Joy J. Geng
- Center for Mind and Brain, University of California, Davis
- Department of Psychology, University of California, Davis
8
Liesefeld HR, Lamy D, Gaspelin N, Geng JJ, Kerzel D, Schall JD, Allen HA, Anderson BA, Boettcher S, Busch NA, Carlisle NB, Colonius H, Draschkow D, Egeth H, Leber AB, Müller HJ, Röer JP, Schubö A, Slagter HA, Theeuwes J, Wolfe J. Terms of debate: Consensus definitions to guide the scientific discourse on visual distraction. Atten Percept Psychophys 2024. PMID: 38177944; DOI: 10.3758/s13414-023-02820-3.
Abstract
Hypothesis-driven research rests on clearly articulated scientific theories. The building blocks for communicating these theories are scientific terms. Obviously, communication, and thus scientific progress, is hampered if the meaning of these terms varies idiosyncratically across (sub)fields and even across individual researchers within the same subfield. We have formed an international group of experts representing various theoretical stances with the goal of homogenizing the use of the terms that are most relevant to fundamental research on visual distraction in visual search. Our discussions revealed striking heterogeneity, and we had to invest much time and effort to increase our mutual understanding of each other's use of central terms, which turned out to be strongly related to our respective theoretical positions. We present the outcomes of these discussions in a glossary and provide some context in several essays. Specifically, we explicate how central terms are used in the distraction literature and consensually sharpen their definitions in order to enable communication across theoretical standpoints. Where applicable, we also explain how the respective constructs can be measured. We believe that this novel type of adversarial collaboration can serve as a model for other fields of psychological research that strive to build a solid groundwork for theorizing and communicating by establishing a common language. For the field of visual distraction, the present paper should facilitate communication across theoretical standpoints and may serve as an introduction and reference text for newcomers.
Affiliation(s)
- Heinrich R Liesefeld
- Department of Psychology, University of Bremen, Hochschulring 18, D-28359, Bremen, Germany.
- Dominique Lamy
- The School of Psychology Sciences and The Sagol School of Neuroscience, Tel Aviv University, Ramat Aviv 69978, POB 39040, Tel Aviv, Israel.
- Joy J Geng
- University of California Davis, Davis, CA, USA
- Hans Colonius
- Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Anna Schubö
- Philipps University Marburg, Marburg, Germany
- Jeremy Wolfe
- Harvard Medical School, Boston, MA, USA
- Brigham & Women's Hospital, Boston, MA, USA
9
Zhou Z, Geng JJ. Learned associations serve as target proxies during difficult but not easy visual search. Cognition 2024; 242:105648. PMID: 37897882; DOI: 10.1016/j.cognition.2023.105648.
Abstract
The target template contains information in memory that is used to guide attention during visual search and is typically thought of as containing features of the actual target object. However, when targets are hard to find, it is advantageous to use other information in the visual environment that is predictive of the target's location to help guide attention. The purpose of these studies was to test if newly learned associations between face and scene category images lead observers to use scene information as a proxy for the face target. Our results showed that scene information was used as a proxy for the target to guide attention but only when the target face was difficult to discriminate from the distractor face; when the faces were easy to distinguish, attention was no longer guided by the scene unless the scene was presented earlier. The results suggest that attention is flexibly guided by both target features as well as features of objects that are predictive of the target location. The degree to which each contributes to guiding attention depends on the efficiency with which that information can be used to decode the location of the target in the current moment. The results contribute to the view that attentional guidance is highly flexible in its use of information to rapidly locate the target.
Affiliation(s)
- Zhiheng Zhou
- Center for Mind and Brain, University of California, 267 Cousteau Place, Davis, CA 95618, USA.
- Joy J Geng
- Center for Mind and Brain, University of California, 267 Cousteau Place, Davis, CA 95618, USA; Department of Psychology, University of California, One Shields Ave, Davis, CA 95616, USA.
10
Witkowski PP, Geng JJ. Prefrontal Cortex Codes Representations of Target Identity and Feature Uncertainty. J Neurosci 2023; 43:8769-8776. PMID: 37875376; PMCID: PMC10727173; DOI: 10.1523/jneurosci.1117-23.2023.
Abstract
Many objects in the real world have features that vary over time, creating uncertainty in how they will look in the future. This uncertainty makes statistical knowledge about the likelihood of features critical to attention-demanding processes such as visual search. However, little is known about how the uncertainty of visual features is integrated into predictions about search targets in the brain. In the current study, we test the idea that regions of prefrontal cortex code statistical knowledge about search targets before the onset of search. Across 20 human participants (13 female; 7 male), we observe target identity in the multivariate pattern and uncertainty in the overall activation of dorsolateral prefrontal cortex (DLPFC) and inferior frontal junction (IFJ) in advance of the search display. This indicates that the target identity (mean) and uncertainty (variance) of the target distribution are coded independently within the same regions. Furthermore, once the search display appears, the univariate IFJ signal scaled with the distance of the actual target from the expected mean, but more so when expected variability was low. These results inform neural theories of attention by showing how the prefrontal cortex represents both the identity and expected variability of features in service of top-down attentional control.
SIGNIFICANCE STATEMENT: Theories of attention and working memory posit that when we engage in complex cognitive tasks, our performance is determined by how precisely we remember task-relevant information. However, in the real world the properties of objects change over time, creating uncertainty about many aspects of the task. There is currently a gap in our understanding of how neural systems represent this uncertainty and combine it with target identity information in anticipation of attention-demanding cognitive tasks. In this study, we show that the prefrontal cortex represents identity and uncertainty as unique codes before task onset. These results advance theories of attention by showing that the prefrontal cortex codes both target identity and uncertainty to implement top-down attentional control.
Affiliation(s)
- Phillip P Witkowski
- Center for Mind and Brain, University of California, Davis, Davis, California 95618
- Department of Psychology, University of California, Davis, Davis, California 95618
- Joy J Geng
- Center for Mind and Brain, University of California, Davis, Davis, California 95618
- Department of Psychology, University of California, Davis, Davis, California 95618
11
Lerebourg M, de Lange FP, Peelen MV. Expected distractor context biases the attentional template for target shapes. J Exp Psychol Hum Percept Perform 2023; 49:1236-1255. PMID: 37410402; PMCID: PMC7616464; DOI: 10.1037/xhp0001129.
Abstract
Visual search is supported by an internal representation of the target, the attentional template. However, which features are diagnostic of target presence critically depends on the distractors. Accordingly, previous research showed that consistent distractor context shapes the attentional template for simple targets, with the template emphasizing diagnostic dimensions (e.g., color or orientation) in blocks of trials. Here, we investigated how distractor expectations bias attentional templates for complex shapes, and tested whether such biases reflect intertrial priming or can be instantiated flexibly. Participants searched for novel shapes (cued by name) in two probabilistic distractor contexts: Either the target's orientation or rectilinearity was unique (80% validity). Across four experiments, performance was better when the distractor context was expected, indicating that target features in the expected diagnostic dimension were emphasized. Attentional templates were biased by distractor expectations when distractor context was blocked, also for participants reporting no awareness of the manipulation. Interestingly, attentional templates were also biased when distractor context was cued on a trial-by-trial basis, but only when the two contexts were consistently presented at distinct spatial locations. These results show that attentional templates can flexibly and adaptively incorporate expectations about target-distractor relations when looking for the same object in different contexts.
Affiliation(s)
- Maëlle Lerebourg
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Floris P de Lange
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
12
Yu X, Zhou Z, Becker SI, Boettcher SEP, Geng JJ. Good-enough attentional guidance. Trends Cogn Sci 2023; 27:391-403. PMID: 36841692; DOI: 10.1016/j.tics.2023.01.007.
Abstract
Theories of attention posit that attentional guidance operates on information held in a target template within memory. The template is often thought to contain veridical target features, akin to a photograph, and to guide attention to objects that match the exact target features. However, recent evidence suggests that attentional guidance is highly flexible and often guided by non-veridical features, a subset of features, or only associated features. We integrate these findings and propose that attentional guidance maximizes search efficiency based on a 'good-enough' principle to rapidly localize candidate target objects. Candidates are then serially interrogated to make target-match decisions using more precise information. We suggest that good-enough guidance optimizes the speed-accuracy-effort trade-offs inherent in each stage of visual search.
Affiliation(s)
- Xinger Yu
- Center for Mind and Brain, University of California Davis, Davis, CA, USA; Department of Psychology, University of California Davis, Davis, CA, USA
- Zhiheng Zhou
- Center for Mind and Brain, University of California Davis, Davis, CA, USA
- Stefanie I Becker
- School of Psychology, University of Queensland, Brisbane, QLD, Australia
- Joy J Geng
- Center for Mind and Brain, University of California Davis, Davis, CA, USA; Department of Psychology, University of California Davis, Davis, CA, USA.
13
Chapman AF, Störmer VS. Efficient tuning of attention to narrow and broad ranges of task-relevant feature values. Vis Cogn 2023. DOI: 10.1080/13506285.2023.2192993.
14
Zhao C, Kong Y, Li D, Huang J, Kong L, Li X, Jensen O, Song Y. Suppression of distracting inputs by visual-spatial cues is driven by anticipatory alpha activity. PLoS Biol 2023; 21:e3002014. PMID: 36888690; PMCID: PMC10027219; DOI: 10.1371/journal.pbio.3002014.
Abstract
A growing body of research demonstrates that distracting inputs can be proactively suppressed via spatial cues, nonspatial cues, or experience, which are governed by more than one top-down mechanism of attention. However, the neural mechanisms by which spatial distractor cues guide proactive suppression of distracting inputs remain unresolved. Here, we recorded electroencephalography signals from 110 participants in 3 experiments to identify the role of alpha activity in proactive distractor suppression induced by spatial cues and its influence on subsequent distractor inhibition. Behaviorally, we found novel effects of the spatial proximity of the distractor: cueing distractors far away from the target improved search performance for the target, while cueing distractors close to the target hampered performance. Crucially, we found dynamic characteristics of the spatial representation for distractor suppression during anticipation. This result was further verified by an increase in alpha power contralateral to the cued distractor. At both the between- and within-subjects levels, we found that these activities further predicted the decrement of the subsequent PD component, which was indicative of reduced distractor interference. Moreover, anticipatory alpha activity and its link with the subsequent PD component were specific to the high predictive validity of the distractor cue. Together, our results reveal the underlying neural mechanisms by which cueing the spatial distractor may contribute to reduced distractor interference. These results also provide evidence supporting the role of alpha activity as a gating mechanism in proactive suppression.
Affiliation(s)
- Chenguang Zhao
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Center for Cognition and Neuroergonomics, State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Zhuhai, China
- School of Systems Science, Beijing Normal University, Beijing, China
- International Academic Center of Complex Systems, Beijing Normal University, Zhuhai, China
- Yuanjun Kong
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Dongwei Li
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Jing Huang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Center for Cognition and Neuroergonomics, State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Zhuhai, China
- Lujiao Kong
- School of Journalism and Communication, Beijing Normal University, Beijing, China
- Xiaoli Li
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Center for Cognition and Neuroergonomics, State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Zhuhai, China
- Ole Jensen
- Centre for Human Brain Health, School of Psychology, University of Birmingham, Birmingham, United Kingdom
- Yan Song
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
15
Du YK, McAvan AS, Zheng J, Ekstrom AD. Spatial memory distortions for the shapes of walked paths occur in violation of physically experienced geometry. PLoS One 2023; 18:e0281739. [PMID: 36763702 PMCID: PMC9916584 DOI: 10.1371/journal.pone.0281739]
Abstract
An important question concerns the nature of our spatial memories for the paths that we have walked and, in particular, whether distortions in those memories might violate the topological properties of the shape of the paths (i.e., creating an intersection when two paths did not intersect, or vice versa). To investigate whether and how this might occur, we tested humans in situations in which they walked simple paths while idiothetic and visual cues either matched or mismatched, with the mismatching cues creating the greatest potential for topological distortions. Participants walked four-segment paths with 90° turns in immersive virtual reality and pointed to their start location when they arrived at the end of the path. In paths with a crossing, when the intersection was not presented, participants pointed to a novel start location suggesting a topological distortion involving non-crossed paths. In paths without a crossing, when a false intersection was presented, participants pointed to a novel start location suggesting a topological distortion involving crossed paths. In paths without crossings and without false intersections, participants showed reduced pointing errors that typically did not involve topological distortions. Distortions more generally, as indicated by pointing errors to the start location, were significantly reduced for walked paths involving primarily idiothetic cues with limited visual cues; conversely, distortions were significantly increased when idiothetic cues were diminished and navigation relied primarily on visual cues. Our findings suggest that our spatial memories for walked paths sometimes involve topological distortions, particularly when resolving the competition between idiothetic and visual cues.
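The topological property at issue above, whether a walked path contains a crossing, can be checked with a standard segment-intersection test over all non-adjacent segment pairs of the path. This is an illustrative stdlib-Python sketch, not the authors' analysis code, and the example paths are hypothetical.

```python
def _ccw(a, b, c):
    # Sign of the 2D cross product: >0 for a counter-clockwise turn.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, p3, p4):
    """True if segment p1-p2 properly intersects segment p3-p4."""
    d1, d2 = _ccw(p3, p4, p1), _ccw(p3, p4, p2)
    d3, d4 = _ccw(p1, p2, p3), _ccw(p1, p2, p4)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def path_has_crossing(points):
    """Check every pair of non-adjacent segments of a polyline."""
    segs = list(zip(points, points[1:]))
    return any(
        segments_cross(*segs[i], *segs[j])
        for i in range(len(segs))
        for j in range(i + 2, len(segs))
    )

# A four-segment path with 90-degree turns that crosses itself:
crossed = [(0, 0), (2, 0), (2, 1), (1, 1), (1, -1)]
# Four 90-degree turns arranged without a crossing:
open_path = [(0, 0), (2, 0), (2, 1), (3, 1), (3, 2)]
```

Comparing the crossing status of the physically walked path with the path implied by participants' pointing is one way to operationalize the "topological distortion" the abstract describes.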
Affiliation(s)
- Yu K. Du
- Department of Psychology, University of Arizona, Tucson, AZ, United States of America
- Andrew S. McAvan
- Department of Psychology, University of Arizona, Tucson, AZ, United States of America
- Jingyi Zheng
- Department of Mathematics and Statistics, Auburn University, Auburn, AL, United States of America
- Arne D. Ekstrom
- Department of Psychology, University of Arizona, Tucson, AZ, United States of America
- Evelyn McKnight Brain Institute, University of Arizona, Tucson, AZ, United States of America
16
Witkowski PP, Geng JJ. Attentional priority is determined by predicted feature distributions. J Exp Psychol Hum Percept Perform 2022; 48:1201-1212. [PMID: 36048065 PMCID: PMC10249461 DOI: 10.1037/xhp0001041]
Abstract
Visual attention is often characterized as being guided by precise memories for target objects. However, real-world search targets have dynamic features that vary over time, meaning that observers must predict how the target could look based on how features are expected to change. Despite its importance, little is known about how target feature predictions influence feature-based attention, or how these predictions are represented in the target template. In Experiment 1 (N = 60 university students), we show that observers readily track the statistics of target features over time and adapt attentional priority to predictions about the distribution of target features. In Experiments 2a and 2b (N = 480 university students), we show that these predictions are encoded into the target template as a distribution of likelihoods over possible target features, which are independent of memory precision for the cued item. These results provide a novel demonstration of how observers represent predicted feature distributions when target features are uncertain and show that these predictions are used to set attentional priority during visual search.
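The idea of tracking target-feature statistics over trials and turning them into a prediction can be sketched very simply: estimate the average trial-to-trial drift and carry the spread of past drifts forward as uncertainty. This toy model and its numbers are illustrative assumptions, not the authors' formal account.

```python
import statistics

def predict_next_feature(history):
    """Predict the next target feature from past trial-to-trial changes:
    point prediction = last value + mean drift, with the spread of past
    drifts serving as the uncertainty of the prediction."""
    drifts = [b - a for a, b in zip(history, history[1:])]
    mean_drift = statistics.fmean(drifts)
    spread = statistics.stdev(drifts) if len(drifts) > 1 else 0.0
    return history[-1] + mean_drift, spread

# A target feature (e.g. a hue coordinate) drifting ~+5 units per trial:
prediction, uncertainty = predict_next_feature([10, 15, 19, 26, 30])
```

In the paper's terms, the point prediction plus spread corresponds to a distribution of likelihoods over possible target features rather than a single remembered value.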
Affiliation(s)
- Phillip P. Witkowski
- Center for Mind and Brain, University of California Davis, Davis, CA, 95618
- Department of Psychology, University of California Davis, Davis, CA, 95618
- Joy J. Geng
- Center for Mind and Brain, University of California Davis, Davis, CA, 95618
- Department of Psychology, University of California Davis, Davis, CA, 95618
17
Kawashima T, Amano K. Can enhancement and suppression concurrently guide attention? An assessment at the individual level. F1000Res 2022; 11:232. [PMID: 35811789 PMCID: PMC9237560 DOI: 10.12688/f1000research.77430.2]
Abstract
Background: Although people can pay attention to targets while ignoring distractors, previous research suggests that target enhancement and distractor suppression work separately and independently. Here, we sought to replicate previous findings and re-establish their independence. Methods: We employed an internet-based psychological experiment. We presented participants with a visual search task in which they searched for a specified shape with or without a singleton. We replicated the singleton-presence benefit in search performance, but this effect was limited to cases where the target color was fixed across all trials. In a randomly intermixed probe task (30% of all trials), the participants searched for a letter among colored probes; we used this task to assess how far attention was separately allocated toward the target or distractor dimensions. Results: We found a negative correlation between target enhancement and distractor suppression, indicating that the participants who paid closer attention to target features ignored distractor features less effectively and vice versa. Averaged data showed no benefit from target color or cost from distractor color, possibly because of the substantial differences in strategy across participants. Conclusions: These results suggest that target enhancement and distractor suppression guide attention in mutually dependent ways and that the relative contribution of these components depends on the participants’ search strategy.
Affiliation(s)
- Tomoya Kawashima
- Graduate School of Human Sciences, Osaka University, 1-2 Yamadaoka, Suita City, Osaka, 565-0871, Japan
- Center for Information and Neural Networks (CiNet), Advanced ICT Research Institute, National Institute of Information and Communications Technology (NICT), 1-4 Yamadaoka, Suita City, Osaka, 565-0871, Japan
- Kaoru Amano
- Center for Information and Neural Networks (CiNet), Advanced ICT Research Institute, National Institute of Information and Communications Technology (NICT), 1-4 Yamadaoka, Suita City, Osaka, 565-0871, Japan
- Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan
18
Thorat S, Quek GL, Peelen MV. Statistical learning of distractor co-occurrences facilitates visual search. J Vis 2022; 22:2. [PMID: 36053133 PMCID: PMC9440606 DOI: 10.1167/jov.22.10.2]
Abstract
Visual search is facilitated by knowledge of the relationship between the target and the distractors, including both where the target is likely to be among the distractors and how it differs from the distractors. Whether the statistical structure among distractors themselves, unrelated to target properties, facilitates search is less well understood. Here, we assessed the benefit of distractor structure using novel shapes whose relationship to each other was learned implicitly during visual search. Participants searched for target items in arrays of shapes that comprised either four pairs of co-occurring distractor shapes (structured scenes) or eight distractor shapes randomly partitioned into four pairs on each trial (unstructured scenes). Across five online experiments (N = 1,140), we found that after a period of search training, participants were more efficient when searching for targets in structured than unstructured scenes. This structure benefit emerged independently of whether the position of the shapes within each pair was fixed or variable and despite participants having no explicit knowledge of the structured pairs they had seen. These results show that implicitly learned co-occurrence statistics between distractor shapes increase search efficiency. Increased efficiency in the rejection of regularly co-occurring distractors may contribute to the efficiency of visual search in natural scenes, where such regularities are abundant.
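The structured-versus-unstructured scene manipulation described above can be sketched as a small display generator: structured trials reuse fixed co-occurring pairs, unstructured trials repartition the same shapes every trial. The shape labels and layout are hypothetical stand-ins for the novel shapes used in the study.

```python
import random

# Hypothetical shape labels; the experiment used novel shapes.
PAIRS = [("A", "B"), ("C", "D"), ("E", "F"), ("G", "H")]

def make_scene(structured, rng):
    """Return eight distractor shapes grouped into four pairs.
    Structured scenes keep the fixed co-occurring pairs; unstructured
    scenes randomly repartition the same shapes on every trial."""
    if structured:
        pairs = [list(p) for p in PAIRS]
    else:
        shapes = [s for p in PAIRS for s in p]
        rng.shuffle(shapes)
        pairs = [shapes[i:i + 2] for i in range(0, 8, 2)]
    rng.shuffle(pairs)  # pair locations still vary from trial to trial
    return pairs

scene = make_scene(structured=True, rng=random.Random(0))
```

Because only the pairings differ between conditions, any search benefit in structured scenes can be attributed to learned distractor co-occurrences rather than to the shapes themselves.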
Affiliation(s)
- Sushrut Thorat
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Genevieve L Quek
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
19
Hansmann-Roth S, Þorsteinsdóttir S, Geng JJ, Kristjánsson Á. Temporal integration of feature probability distributions. Psychol Res 2022; 86:2030-2044. [PMID: 34997327 DOI: 10.1007/s00426-021-01621-3]
Abstract
Humans are surprisingly good at learning the statistical characteristics of their visual environment. Recent studies have revealed that the visual system can learn not only repeated features of visual search distractors but also their actual probability distributions. Search times were determined by the frequency of distractor features over consecutive search trials. The search displays applied in these studies involved many exemplars of distractors on each trial, and while there is clear evidence that feature distributions can be learned from large distractor sets, it is less clear whether distributions are learned equally well for single targets presented on each trial. Here, we investigated potential learning of probability distributions of single targets during visual search. Over blocks of trials, observers searched for an oddly colored target that was drawn from either a Gaussian or a uniform distribution. Search times for the different target colors were clearly influenced by the probability of that feature within trial blocks. The same search targets, drawn from the extremes of the two distributions, were found significantly more slowly during blocks where the targets came from a Gaussian distribution than during blocks where they came from a uniform distribution, indicating that observers were sensitive to the target probability determined by the shape of the distribution. In Experiment 2, we replicated the effect using binned distributions and revealed the limitations of encoding complex target distributions. Our results demonstrate detailed internal representations of target feature distributions and show that the visual system integrates probability distributions of target colors over surprisingly long trial sequences.
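The Gaussian-versus-uniform block manipulation can be sketched as a trial generator: both distributions span the same feature range, but edge values are rare under the Gaussian and just as likely as any other value under the uniform. The center, width, and clamping below are illustrative assumptions, not the study's actual stimulus parameters.

```python
import random

def sample_target_colors(n_trials, dist, rng, center=180.0, width=60.0):
    """Draw one target hue per trial from a Gaussian or a uniform
    distribution over the same range [center - width, center + width].
    Gaussian samples are clamped to keep both conditions in range."""
    lo, hi = center - width, center + width
    colors = []
    for _ in range(n_trials):
        if dist == "gaussian":
            h = min(max(rng.gauss(center, width / 3), lo), hi)
        else:
            h = rng.uniform(lo, hi)
        colors.append(h)
    return colors

hues = sample_target_colors(100, "gaussian", random.Random(0))
```

Comparing search times for identical extreme-valued targets across the two block types, as the study does, isolates the effect of learned distribution shape from the target feature itself.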
Affiliation(s)
- Sabrina Hansmann-Roth
- Icelandic Vision Lab, School of Health Sciences, University of Iceland, Reykjavík, Iceland
- Université de Lille, CNRS, UMR 9193-SCALab-Sciences Cognitives et Sciences Affectives, 59000, Lille, France
- Sóley Þorsteinsdóttir
- Icelandic Vision Lab, School of Health Sciences, University of Iceland, Reykjavík, Iceland
- Joy J Geng
- Center for Mind and Brain, University of California Davis, Davis, CA, USA
- Department of Psychology, University of California Davis, Davis, CA, USA
- Árni Kristjánsson
- Icelandic Vision Lab, School of Health Sciences, University of Iceland, Reykjavík, Iceland
- School of Psychology, National Research University Higher School of Economics, Moscow, Russia
20
Chapman AF, Störmer VS. Feature similarity is non-linearly related to attentional selection: Evidence from visual search and sustained attention tasks. J Vis 2022; 22:4. [PMID: 35834377 PMCID: PMC9290316 DOI: 10.1167/jov.22.8.4]
Abstract
Although many theories of attention highlight the importance of similarity between target and distractor items for selection, few studies have directly quantified the function underlying this relationship. Across two commonly used tasks, visual search and sustained attention, we investigated how target-distractor similarity impacts feature-based attentional selection. Importantly, we found comparable patterns of performance in both visual search and sustained feature-based attention tasks, with performance (response times and d', respectively) plateauing at medium target-distractor distances (40°-50° around a luminance-matched color wheel). In contrast, visual search efficiency, as measured by search slopes, was affected by a much narrower range of similarity levels (10°-20°). We assessed the relationship between target-distractor similarity and attentional performance using both a stimulus-based and a psychologically based measure of similarity and found this nonlinear relationship in both cases. However, psychological similarity accounted for some of the nonlinearities observed in the data, suggesting that measures of psychological similarity are more appropriate when studying effects of target-distractor similarity. These findings place novel constraints on models of selective attention and emphasize the importance of considering the similarity structure of the feature space over which attention operates. Broadly, the nonlinear effects of similarity on attention are consistent with accounts proposing that attention exaggerates the distance between competing representations, possibly through enhancement of off-tuned neurons.
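Target-distractor distances like "40°-50° around a color wheel" are circular quantities, so similarity is naturally measured as the shortest angular distance with wrap-around. A minimal sketch of that stimulus-based distance measure (the psychological measure used in the paper would additionally warp this scale):

```python
def hue_distance(a, b):
    """Shortest angular distance in degrees between two hues on a
    360-degree color wheel, i.e. the stimulus-based target-distractor
    distance at which performance effects are measured."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

# Nearby hues across the wrap-around point, and maximally distant hues:
near = hue_distance(10.0, 350.0)   # wraps around the wheel
far = hue_distance(0.0, 180.0)     # opposite sides of the wheel
```

Plotting performance against this distance is what reveals the plateau at medium separations that the abstract reports.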
Affiliation(s)
- Angus F Chapman
- Department of Psychology, University of California, San Diego, La Jolla, CA, USA
- Viola S Störmer
- Department of Brain and Psychological Sciences, Dartmouth College, Hanover, NH, USA
21
Wen 文雯 W, Huang 黄志邦 Z, Hou 侯寅 Y, Li 李晟 S. Tracking Neural Markers of Template Formation and Implementation in Attentional Inhibition under Different Distractor Consistency. J Neurosci 2022; 42:4927-4936. [PMID: 35545435 PMCID: PMC9188384 DOI: 10.1523/jneurosci.1705-21.2022]
Abstract
Performing visual search tasks requires optimal attention deployment to promote targets and inhibit distractors. Rejection templates based on the features of the distractor can be built to constrain the search process. We measured electroencephalography (EEG) of human participants of both sexes while they performed a visual search task in conditions where the distractor cues were constant within a block (fixed cueing) or changed on a trial-by-trial basis (varied cueing). In the fixed-cueing condition, sustained decoding of the cued colors could be achieved during the retention interval, and participants with higher decoding accuracy showed larger suppression benefits of the distractor cueing in the search period. In the varied-cueing condition, the cued color could only be transiently decoded after its onset, and higher decoding accuracy was observed for the participants who demonstrated lower suppression benefit. The differential neural representations of the to-be-ignored color in the two cueing conditions, as well as their reverse associations with behavioral performance, imply that rejection templates were formed in the fixed-cueing condition but not in the varied-cueing condition. Additionally, we observed stronger posterior alpha lateralization and midfrontal theta/beta power during the retention interval of the varied-cueing condition, indicating the cognitive costs of template formation caused by the trialwise change of distractor colors. Together, our findings reveal the neural markers associated with the critical role of distractor consistency in linking template formation to successful inhibition. SIGNIFICANCE STATEMENT: How do we strategically build a rejection template based on distractor features to filter out matching items when performing visual search tasks? Previous studies have suggested that the consistency of the to-be-ignored feature may play a significant role in this process. We recorded scalp EEG while human participants searched for a target among distractors. Capitalizing on multivariate decoding techniques and time-frequency analysis, we revealed the neural markers of the rejection template under different distractor consistencies. Being able to track these processes in visual search could help us understand the connection between template formation and successful distractor inhibition. Our findings may also benefit future EEG-based interventions for individuals with deficits in attentional control.
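The multivariate decoding of the cued color described above can be illustrated with the simplest possible classifier: assign a trial to the class whose training centroid is nearest. Real EEG decoding pipelines use cross-validated classifiers over many channels and time points; this sketch and its data are purely illustrative.

```python
def nearest_centroid_decode(train, test_trial):
    """Classify one test trial by its nearest class centroid in
    Euclidean distance, a minimal stand-in for multivariate decoding."""
    def centroid(trials):
        return [sum(t[i] for t in trials) / len(trials)
                for i in range(len(trials[0]))]

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    centroids = {label: centroid(trials) for label, trials in train.items()}
    return min(centroids, key=lambda label: dist2(centroids[label], test_trial))

# Toy two-channel "EEG" patterns for two cued colors (hypothetical data):
train = {"red": [[1.0, 0.1], [0.9, 0.2]], "green": [[0.1, 1.0], [0.2, 0.9]]}
decoded = nearest_centroid_decode(train, [0.8, 0.3])
```

Sustained above-chance decoding of the cued color during the retention interval is the signature the authors take as evidence that a rejection template is being maintained.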
Affiliation(s)
- Wen Wen 文雯
- School of Psychological and Cognitive Sciences, Peking University, Beijing 100871, China
- Zhibang Huang 黄志邦
- School of Psychological and Cognitive Sciences, Peking University, Beijing 100871, China
- Yin Hou 侯寅
- School of Psychological and Cognitive Sciences, Peking University, Beijing 100871, China
- Sheng Li 李晟
- School of Psychological and Cognitive Sciences, Peking University, Beijing 100871, China
- Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing 100101, China
- PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing 100875, China
- Key Laboratory of Machine Perception, Ministry of Education, Peking University, Beijing 100871, China
22
Wöstmann M, Störmer VS, Obleser J, Addleman DA, Andersen SK, Gaspelin N, Geng JJ, Luck SJ, Noonan MP, Slagter HA, Theeuwes J. Ten simple rules to study distractor suppression. Prog Neurobiol 2022; 213:102269. [PMID: 35427732 DOI: 10.1016/j.pneurobio.2022.102269]
Abstract
Distractor suppression refers to the ability to filter out distracting and task-irrelevant information. Distractor suppression is essential for survival and considered a key aspect of selective attention. Despite the recent and rapidly evolving literature on distractor suppression, we still know little about how the brain suppresses distracting information. What limits progress is that we lack mutually agreed upon principles of how to study the neural basis of distractor suppression and its manifestation in behavior. Here, we offer ten simple rules that we believe are fundamental when investigating distractor suppression. We provide guidelines on how to design conclusive experiments on distractor suppression (Rules 1-3), discuss different types of distractor suppression that need to be distinguished (Rules 4-6), and provide an overview of models of distractor suppression and considerations of how to evaluate distractor suppression statistically (Rules 7-10). Together, these rules provide a concise and comprehensive synopsis of promising advances in the field of distractor suppression. Following these rules will propel research on distractor suppression in important ways, not only by highlighting prominent issues to both new and more advanced researchers in the field, but also by facilitating communication between sub-disciplines.
Affiliation(s)
- Malte Wöstmann
- Department of Psychology, University of Lübeck, Lübeck, Germany; Center of Brain, Behavior and Metabolism (CBBM), University of Lübeck, Lübeck, Germany
- Viola S Störmer
- Department of Psychological and Brain Sciences, Dartmouth College, USA
- Jonas Obleser
- Department of Psychology, University of Lübeck, Lübeck, Germany; Center of Brain, Behavior and Metabolism (CBBM), University of Lübeck, Lübeck, Germany
- Søren K Andersen
- School of Psychology, University of Aberdeen, UK; Department of Psychology, University of Southern Denmark, Denmark
- Nicholas Gaspelin
- Department of Psychology and Department of Integrative Neuroscience, Binghamton University, State University of New York, USA
- Joy J Geng
- Center for Mind and Brain and Department of Psychology, University of California, Davis, USA
- Steven J Luck
- Center for Mind and Brain and Department of Psychology, University of California, Davis, USA
- Heleen A Slagter
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands; Institute for Brain and Behavior, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
- Jan Theeuwes
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands; Institute for Brain and Behavior, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
23
|
Wang B, Knapen T, Olivers CNL. Visual Working Memory Adapts to the Nature of Anticipated Interference. J Cogn Neurosci 2022; 34:1148-1163. [PMID: 35468211 DOI: 10.1162/jocn_a_01853]
Abstract
Visual working memory has proven to be relatively robust against interference. However, little is known about whether such robust coding is obligatory or can be flexibly recruited depending on its expected usefulness. To address this, participants remembered both the color and orientation of a grating. During maintenance, we inserted a secondary color/orientation memory task that interfered with the primary task. Crucially, we varied expectations about the type of interference by varying the probability of the two types of intervening task. Behavioral data indicate that to-be-remembered features for which interference is expected are bolstered, whereas to-be-remembered features for which no interference is expected are left vulnerable. This was further supported by fMRI data obtained from visual cortex. In conclusion, the flexibility of visual working memory allows it to strengthen memories for which it anticipates the highest risk of interference.
Affiliation(s)
- Benchi Wang
- South China Normal University, China
- Cognition and Education Sciences (South China Normal University), China
- Vrije Universiteit Amsterdam, The Netherlands
24
Is perceptual learning always better at task-relevant locations? It depends on the distractors. Atten Percept Psychophys 2022; 84:992-1003. [PMID: 35217980 DOI: 10.3758/s13414-022-02450-1]
Abstract
The role of attention in task-irrelevant perceptual learning has been contested. Attention has been studied in the past using distractor-type manipulations. Hence, during an initial exposure phase, we manipulated distractor similarity within a set of six gratings to study its effects on perceptual learning at task-relevant and task-irrelevant locations. Of these six gratings, one was at a task-relevant location, one was at a task-irrelevant location that shared its orientation with the task-relevant grating, and the remaining four were distractor gratings. The orientations of the distractor gratings were all either the same (homogeneous) or different from each other (heterogeneous). We hypothesized that learning at the task-irrelevant location would be worse than learning at the task-relevant location when distractors were heterogeneous, and vice versa when the distractors were homogeneous. Participants were initially exposed to a grating set; they reported contrast changes at only one prespecified task-relevant location. This grating was grouped based on orientation with a task-irrelevant grating presented at the furthermost distractor location and presented alongside four control-distractors (homogeneous or heterogeneous). In the testing phase, orientation discrimination performance was measured at task-relevant, task-irrelevant (grouped), and control-distractor locations. Participants were exposed and tested sequentially, each day for 5 days. Participants learned and performed better at the task-irrelevant location than at the task-relevant location with homogeneous distractors, and vice versa with heterogeneous distractors. The poorer learning at the task-relevant location compared to the task-irrelevant location challenges current models of perceptual learning. Selection mechanisms driven by the nature of distractors influence perceptual learning at both task-relevant and task-irrelevant locations.
25
Wöstmann M, Störmer VS, Obleser J, Addleman DA, Andersen SK, Gaspelin N, Geng JJ, Luck SJ, Noonan MP, Slagter HA, Theeuwes J. Ten simple rules to study distractor suppression. Prog Neurobiol 2022; 213:102269. [PMID: 35427732 PMCID: PMC9069241 DOI: 10.1016/j.pneurobio.2022.102269]
Abstract
Distractor suppression refers to the ability to filter out distracting and task-irrelevant information. Distractor suppression is essential for survival and considered a key aspect of selective attention. Despite the recent and rapidly evolving literature on distractor suppression, we still know little about how the brain suppresses distracting information. What limits progress is that we lack mutually agreed upon principles of how to study the neural basis of distractor suppression and its manifestation in behavior. Here, we offer ten simple rules that we believe are fundamental when investigating distractor suppression. We provide guidelines on how to design conclusive experiments on distractor suppression (Rules 1–3), discuss different types of distractor suppression that need to be distinguished (Rules 4–6), and provide an overview of models of distractor suppression and considerations of how to evaluate distractor suppression statistically (Rules 7–10). Together, these rules provide a concise and comprehensive synopsis of promising advances in the field of distractor suppression. Following these rules will propel research on distractor suppression in important ways, not only by highlighting prominent issues to both new and more advanced researchers in the field, but also by facilitating communication between sub-disciplines.
Affiliation(s)
- Malte Wöstmann
- Department of Psychology, University of Lübeck, Lübeck, Germany; Center of Brain, Behavior and Metabolism (CBBM), University of Lübeck, Lübeck, Germany
- Viola S Störmer
- Department of Psychological and Brain Sciences, Dartmouth College, USA
- Jonas Obleser
- Department of Psychology, University of Lübeck, Lübeck, Germany; Center of Brain, Behavior and Metabolism (CBBM), University of Lübeck, Lübeck, Germany
- Søren K Andersen
- School of Psychology, University of Aberdeen, UK; Department of Psychology, University of Southern Denmark, Denmark
- Nicholas Gaspelin
- Department of Psychology and Department of Integrative Neuroscience, Binghamton University, State University of New York, USA
- Joy J Geng
- Center for Mind and Brain and Department of Psychology, University of California, Davis, USA
- Steven J Luck
- Center for Mind and Brain and Department of Psychology, University of California, Davis, USA
- Heleen A Slagter
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands; Institute for Brain and Behavior, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
- Jan Theeuwes
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands; Institute for Brain and Behavior, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
26
Yu X, Hanks TD, Geng JJ. Attentional Guidance and Match Decisions Rely on Different Template Information During Visual Search. Psychol Sci 2021; 33:105-120. [PMID: 34878949 DOI: 10.1177/09567976211032225]
Abstract
When searching for a target object, we engage in a continuous "look-identify" cycle in which we use known features of the target to guide attention toward potential targets and then to decide whether the selected object is indeed the target. Target information in memory (the target template or attentional template) is typically characterized as having a single, fixed source. However, debate has recently emerged over whether flexibility in the target template is relational or optimal. On the basis of evidence from two experiments using college students (Ns = 30 and 70, respectively), we propose that initial guidance of attention uses a coarse relational code, but subsequent decisions use an optimal code. Our results offer a novel perspective that the precision of template information differs when guiding sensory selection and when making identity decisions during visual search.
Affiliation(s)
- Xinger Yu
- Center for Mind and Brain, University of California, Davis; Department of Psychology, University of California, Davis
- Timothy D Hanks
- Center for Neuroscience, University of California, Davis; Department of Neurology, University of California, Davis
- Joy J Geng
- Center for Mind and Brain, University of California, Davis; Department of Psychology, University of California, Davis
27
Rafiei M, Chetverikov A, Hansmann-Roth S, Kristjánsson Á. You see what you look for: Targets and distractors in visual search can cause opposing serial dependencies. J Vis 2021; 21:3. [PMID: 34468704 PMCID: PMC8419872 DOI: 10.1167/jov.21.10.3] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2021] [Accepted: 08/06/2021] [Indexed: 01/06/2023] Open
Abstract
Visual perception is, at any given moment, strongly influenced by its temporal context: what stimuli have recently been perceived and in what surroundings. We have previously shown that to-be-ignored items produce a bias upon subsequent perceptual decisions that acts in parallel with other biases induced by attended items. However, our previous investigations were confined to biases upon the perceived orientation of a visual search target, and it is unclear whether these biases influence perceptual decisions in a more general sense. Here, we test whether the biases from visual search targets and distractors affect the perceived orientation of a neutral test line, one that is neither a target nor a distractor. To do so, we asked participants to search for an oddly oriented line among distractors and report its location for a few trials, and then presented a test line irrelevant to the search task, asking participants to report its orientation. Our results indicate that in tasks involving visual search, targets induce a positive bias upon a neutral test line if their orientations are similar, whereas distractors produce an attractive bias for similar test lines and a repulsive bias if the orientations of the test line and the average orientation of the distractors are far apart in feature space. In sum, our results show that both attentional role and proximity in feature space between previous and current stimuli determine the direction of biases in perceptual decisions.
Affiliation(s)
- Mohsen Rafiei
- Icelandic Vision Lab, Faculty of Psychology, University of Iceland, Reykjavík, Iceland
- Andrey Chetverikov
- Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, Netherlands
- Sabrina Hansmann-Roth
- Icelandic Vision Lab, Faculty of Psychology, University of Iceland, Reykjavík, Iceland
- Sciences Cognitives et Sciences Affectives (SCALab), Université de Lille, Lille, France
- Árni Kristjánsson
- Icelandic Vision Lab, Faculty of Psychology, University of Iceland, Reykjavík, Iceland
- School of Psychology, National Research University Higher School of Economics, Moscow, Russian Federation
28
Freund MC, Etzel JA, Braver TS. Neural Coding of Cognitive Control: The Representational Similarity Analysis Approach. Trends Cogn Sci 2021; 25:622-638. [PMID: 33895065 PMCID: PMC8279005 DOI: 10.1016/j.tics.2021.03.011] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2020] [Revised: 03/17/2021] [Accepted: 03/18/2021] [Indexed: 01/07/2023]
Abstract
Cognitive control relies on distributed and potentially high-dimensional frontoparietal task representations. Yet, the classical cognitive neuroscience approach in this domain has focused on aggregating and contrasting neural measures, whether univariate or multivariate, along highly abstracted one-dimensional factors (e.g., Stroop congruency). Here, we present representational similarity analysis (RSA) as a complementary approach that can powerfully inform the representational components of cognitive control theories. We review several exemplary uses of RSA in this regard. We further show that most classical paradigms, given their factorial structure, can be optimized for RSA with minimal modification. Our aim is to illustrate how RSA can be incorporated into cognitive control investigations to shed new light on old questions.
Affiliation(s)
- Michael C Freund
- Department of Psychological and Brain Sciences, Washington University in St Louis, St Louis, MO 63130, USA
- Joset A Etzel
- Department of Psychological and Brain Sciences, Washington University in St Louis, St Louis, MO 63130, USA
- Todd S Braver
- Department of Psychological and Brain Sciences, Washington University in St Louis, St Louis, MO 63130, USA; Department of Radiology, Washington University in St Louis School of Medicine, St Louis, MO 63110, USA; Department of Neuroscience, Washington University in St Louis School of Medicine, St Louis, MO 63110, USA
29
Optimizing perception: Attended and ignored stimuli create opposing perceptual biases. Atten Percept Psychophys 2021; 83:1230-1239. [PMID: 32333372 DOI: 10.3758/s13414-020-02030-1] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Humans have a remarkable ability to construct a stable visual world from continuously changing input. There is increasing evidence that momentary visual input blends with previous input to preserve perceptual continuity. Most studies have shown that such influences can be traced to characteristics of the attended object at a given moment. Little is known about the role of ignored stimuli in creating this continuity. This is important because, while some input is selected for processing, other input must be actively ignored for efficient selection of the task-relevant stimuli. We asked whether attended targets and actively ignored distractor stimuli in an odd-one-out search task would bias observers' perception differently. Our observers searched for an oddly oriented line among distractors and were occasionally asked to report the orientation of the last visual search target they saw in an adjustment task. Our results show that at least two opposite biases from past stimuli influence current perception: a positive bias caused by serial dependence pulls perception of the target toward the previous target features, while a negative bias induced by the to-be-ignored distractor features pushes perception of the target away from the distractor distribution. Our results suggest that to-be-ignored items produce a perceptual bias that acts in parallel with other biases induced by attended items to optimize perception. Our results are the first to demonstrate how actively ignored information facilitates continuity in visual perception.
30
Hamblin-Frohman Z, Becker SI. The attentional template in high and low similarity search: Optimal tuning or tuning to relations? Cognition 2021; 212:104732. [PMID: 33862440 DOI: 10.1016/j.cognition.2021.104732] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2020] [Revised: 04/08/2021] [Accepted: 04/09/2021] [Indexed: 10/21/2022]
Abstract
The attentional template is often described as the mental representation that drives attentional selection and guidance, for instance, in visual search. Recent research suggests that this template is not a veridical representation of the sought-for target, but instead an altered representation that allows more efficient search. The current paper contrasts two such theories: first, the Optimal Tuning account, which posits that the attentional template shifts to an exaggerated target value to maximise the signal-to-noise ratio between similar targets and non-targets; second, the Relational account, which states that instead of tuning to feature values, attention is directed to the relative value created by the search context, e.g. all redder items or the reddest item. Both theories are empirically supported, but by different paradigms (perceptual decision tasks vs. visual search) and different attentional measures (probe response accuracy vs. gaze capture). The current design incorporates both paradigms and measures. The results reveal that while Optimal Tuning shifts are observed in probe trials, they do not drive early attention or eye-movement behaviour in visual search. Instead, early attention follows the Relational account, selecting all items with the relative target colour (e.g., redder). This suggests that the masked probe trials used in Optimal Tuning studies do not probe the attentional template that guides attention. In Experiment 3 we find that Optimal Tuning shifts correspond in magnitude to purely perceptual shifts created by contrast biases in the visual search arrays. This suggests that the shift in probe responses may in fact be a perceptual artefact rather than a strategic adaptation to optimise the signal-to-noise ratio. These results highlight the distinction between early attentional mechanisms and later target identification mechanisms.
SIGNIFICANCE STATEMENT: Classical theories of attention suggest that attention is guided by a feature-specific target template. In recent designs this has been challenged by an apparent non-veridical tuning of the template in situations where the target stimulus is similar to non-targets. The current studies compare two theories that propose different explanations for non-veridical tuning: the Relational and the Optimal Tuning account. We show that the Relational account describes the mechanism that guides early search behaviour, while the Optimal Tuning account describes perceptual decision-making. Optimal Tuning effects may be due to an artefact that has not been described in visual search before (simultaneous contrast).
Affiliation(s)
- Stefanie I Becker
- School of Psychology, The University of Queensland, Brisbane, Australia
31
Constant M, Liesefeld HR. Massive Effects of Saliency on Information Processing in Visual Working Memory. Psychol Sci 2021; 32:682-691. [DOI: 10.1177/0956797620975785] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Limitations in the ability to temporarily represent information in visual working memory (VWM) are crucial for visual cognition. Whether VWM processing is dependent on an object’s saliency (i.e., how much it stands out) has been neglected in VWM research. Therefore, we developed a novel VWM task that allows direct control over saliency. In three experiments with this task (on 10, 31, and 60 adults, respectively), we consistently found that VWM performance is strongly and parametrically influenced by saliency and that both an object’s relative saliency (compared with concurrently presented objects) and absolute saliency influence VWM processing. We also demonstrated that this effect is indeed due to bottom-up saliency rather than differential fit between each object and the top-down attentional template. A simple computational model assuming that VWM performance is determined by the weighted sum of absolute and relative saliency accounts well for the observed data patterns.
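The weighted-sum model mentioned at the end of this abstract can be sketched in a few lines. This is a schematic reconstruction under our own naming, weight choices, and operationalization of relative saliency, not the authors' implementation; all values below are hypothetical.

```python
def predicted_performance(saliencies, index, w_abs=0.6, w_rel=0.4):
    """Predicted VWM performance for one object as a weighted sum of its
    absolute saliency and its relative saliency, here operationalized as the
    difference from the mean saliency of the other objects in the display."""
    target = saliencies[index]
    others = [s for i, s in enumerate(saliencies) if i != index]
    relative = target - sum(others) / len(others)
    return w_abs * target + w_rel * relative

# Toy display with three objects of increasing saliency (arbitrary units).
display = [0.2, 0.5, 0.9]
scores = [predicted_performance(display, i) for i in range(len(display))]
```

On this operationalization, an object benefits both from being salient in absolute terms and from standing out against the objects shown alongside it, matching the abstract's claim that both factors jointly determine VWM processing.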
Affiliation(s)
- Martin Constant
- Department of Psychology, Ludwig-Maximilians-Universität München
- Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München
- Heinrich R. Liesefeld
- Department of Psychology, Ludwig-Maximilians-Universität München
- Munich Center for Neurosciences – Brain & Mind, Ludwig-Maximilians-Universität München
32
Kerzel D, Cong SH. Attentional Templates Are Sharpened through Differential Signal Enhancement, Not Differential Allocation of Attention. J Cogn Neurosci 2021; 33:594-610. [PMID: 33464161 DOI: 10.1162/jocn_a_01677] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/14/2023]
Abstract
In visual search, the internal representation of the target feature is referred to as the attentional template. The attentional template can be broad or precise depending on the task requirements. In singleton search, the attentional template is broad because the target is the only colored element in the display. In feature search, a precise attentional template is required because the target has a specific color within an array of varied colors. To measure the precision of the attentional template, we used a cue-target paradigm where cueing benefits decrease when the cue color differs from the target color. Consistent with broad and precise attentional templates, the decrease of cueing effects was stronger in feature than in singleton search. Measurements of ERPs showed that the N2pc elicited by the cue decreased with increasing color difference, suggesting that attention was more strongly captured by cues that were similar to the target. However, the cue-elicited N2pc did not differ between feature and singleton search, making it unlikely to reflect the mechanism underlying attentional template precision. Furthermore, there was no evidence for attentional suppression, as there was no cue-elicited PD, even in conditions where the cueing benefit turned into a same-location cost. However, an index of signal enhancement, the contralateral positivity, did reflect attentional template precision. In general, there was sensory enhancement of the stimulus appearing at the cued location in the search display. With broad attentional templates, any stimulus at the cued location was enhanced, whereas with precise attentional templates enhancement was restricted to target-matching colors.
33
van Moorselaar D, Lampers E, Cordesius E, Slagter HA. Neural mechanisms underlying expectation-dependent inhibition of distracting information. eLife 2020; 9:e61048. [PMID: 33320084 PMCID: PMC7758066 DOI: 10.7554/elife.61048] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2020] [Accepted: 12/14/2020] [Indexed: 12/30/2022] Open
Abstract
Predictions based on learned statistical regularities in the visual world have been shown to facilitate attention and goal-directed behavior by sharpening the sensory representation of goal-relevant stimuli in advance. Yet, how the brain learns to ignore predictable goal-irrelevant or distracting information is unclear. Here, we used EEG and a visual search task in which the predictability of a distractor's location and/or spatial frequency was manipulated to determine how spatial and feature distractor expectations are neurally implemented and reduce distractor interference. We find that expected distractor features could not only be decoded pre-stimulus, but their representation also differed from the representation of that same feature when it was part of the target. Spatial distractor expectations did not induce changes in preparatory neural activity, but were associated with a strongly reduced Pd, an ERP index of inhibition. These results demonstrate that the neural effects of statistical learning critically depend on the task relevance and dimension (spatial, feature) of the predictions.
Affiliation(s)
- Dirk van Moorselaar
- Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
- Amsterdam Brain and Cognition, University of Amsterdam, Amsterdam, Netherlands
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Institute of Brain and Behaviour Amsterdam, Amsterdam, Netherlands
- Eline Lampers
- Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
- Elisa Cordesius
- Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
- Heleen A Slagter
- Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
- Amsterdam Brain and Cognition, University of Amsterdam, Amsterdam, Netherlands
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Institute of Brain and Behaviour Amsterdam, Amsterdam, Netherlands
34
Boettcher SEP, van Ede F, Nobre AC. Functional biases in attentional templates from associative memory. J Vis 2020; 20:7. [PMID: 33296459 PMCID: PMC7729124 DOI: 10.1167/jov.20.13.7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
In everyday life, attentional templates—which facilitate the perception of task-relevant sensory inputs—are often based on associations in long-term memory. We ask whether templates retrieved from memory are necessarily faithful reproductions of the encoded information or if associative-memory templates can be functionally adapted after retrieval in service of current task demands. Participants learned associations between four shapes and four colored gratings, each with a characteristic combination of color (green or pink) and orientation (left or right tilt). On each trial, observers saw one shape followed by a grating and indicated whether the pair matched the learned shape-grating association. Across experimental blocks, we manipulated the types of nonmatch (lure) gratings most often presented. In some blocks the lures were most likely to differ in color but not tilt, whereas in other blocks this was reversed. If participants functionally adapt the retrieved template such that the distinguishing information between lures and targets is prioritized, then they should overemphasize the most commonly diagnostic feature dimension within the template. We found evidence for this in the behavioral responses to the lures: participants were more accurate and faster when responding to common versus rare lures, as predicted by the functional—but not the strictly veridical—template hypothesis. This shows that templates retrieved from memory can be functionally biased to optimize task performance in a flexible, context-dependent, manner.
Affiliation(s)
- Sage E P Boettcher
- Department of Experimental Psychology, University of Oxford, Oxford, UK; Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
- Freek van Ede
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK; Institute for Brain and Behavior Amsterdam, Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, The Netherlands
- Anna C Nobre
- Department of Experimental Psychology, University of Oxford, Oxford, UK; Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
35
Won BY, Haberman J, Bliss-Moreau E, Geng JJ. Flexible target templates improve visual search accuracy for faces depicting emotion. Atten Percept Psychophys 2020; 82:2909-2923. [PMID: 31974937 PMCID: PMC8806142 DOI: 10.3758/s13414-019-01965-4] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
Theories of visual attention hypothesize that target selection depends upon matching visual inputs to a memory representation of the target, known as the target or attentional template. Most theories assume that the template contains a veridical copy of target features, but recent studies suggest that target representations may shift "off veridical" from actual target features to increase target-to-distractor distinctiveness. However, these studies have been limited to simple visual features (e.g., orientation, color), which leaves open the question of whether similar principles apply to complex stimuli, such as a face depicting an emotion, the perception of which is known to be shaped by conceptual knowledge. In three studies, we find confirmatory evidence for the hypothesis that attention modulates the representation of an emotional face to increase target-to-distractor distinctiveness. This occurs over and above strong pre-existing conceptual and perceptual biases in the representation of individual faces. The results are consistent with the view that visual search accuracy is determined by the representational distance between the target template in memory and distractor information in the environment, not the veridical target and distractor features.
Affiliation(s)
- Bo-Yeong Won
- Center for Mind and Brain, University of California Davis, Davis, CA, USA
- Eliza Bliss-Moreau
- Department of Psychology, University of California Davis, Davis, CA, USA
- California National Primate Research Center, University of California Davis, Davis, CA, USA
- Joy J Geng
- Center for Mind and Brain, University of California Davis, Davis, CA, USA
- Department of Psychology, University of California Davis, Davis, CA, USA
36
Liesefeld HR, Liesefeld AM, Sauseng P, Jacob SN, Müller HJ. How visual working memory handles distraction: cognitive mechanisms and electrophysiological correlates. VISUAL COGNITION 2020. [DOI: 10.1080/13506285.2020.1773594] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Affiliation(s)
- Heinrich R. Liesefeld
- Department Psychologie, Ludwig-Maximilians-Universität München, München, Germany
- Munich Center for Neurosciences – Brain & Mind, Ludwig-Maximilians-Universität München, München, Germany
- Anna M. Liesefeld
- Department Psychologie, Ludwig-Maximilians-Universität München, München, Germany
- Paul Sauseng
- Department Psychologie, Ludwig-Maximilians-Universität München, München, Germany
- Simon N. Jacob
- Department of Neurosurgery, Technische Universität München, München, Germany
- Hermann J. Müller
- Department Psychologie, Ludwig-Maximilians-Universität München, München, Germany
37
Ryan JD, Shen K, Liu Z. The intersection between the oculomotor and hippocampal memory systems: empirical developments and clinical implications. Ann N Y Acad Sci 2020; 1464:115-141. [PMID: 31617589 PMCID: PMC7154681 DOI: 10.1111/nyas.14256] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2019] [Revised: 08/29/2019] [Accepted: 09/19/2019] [Indexed: 12/28/2022]
Abstract
Decades of cognitive neuroscience research have shown that where we look is intimately connected to what we remember. In this article, we review findings from human and nonhuman animals, using behavioral, neuropsychological, neuroimaging, and computational modeling methods, to show that the oculomotor and hippocampal memory systems interact in a reciprocal manner, on a moment-to-moment basis, mediated by a vast structural and functional network. Visual exploration serves to efficiently gather information from the environment for the purpose of creating new memories, updating existing memories, and reconstructing the rich, vivid details from memory. Conversely, memory increases the efficiency of visual exploration. We call for models of oculomotor control to consider the influence of the hippocampal memory system on the cognitive control of eye movements, and for models of hippocampal and broader medial temporal lobe function to consider the influence of the oculomotor system on the development and expression of memory. We describe eye movement-based applications for the detection of neurodegeneration and delivery of therapeutic interventions for mental health disorders for which the hippocampus is implicated and memory dysfunctions are at the forefront.
Affiliation(s)
- Jennifer D. Ryan
- Rotman Research Institute, Baycrest, Toronto, Ontario, Canada
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
- Kelly Shen
- Rotman Research Institute, Baycrest, Toronto, Ontario, Canada
- Zhong-Xu Liu
- Department of Behavioral Sciences, University of Michigan-Dearborn, Dearborn, Michigan
38
van Moorselaar D, Slagter HA. Inhibition in selective attention. Ann N Y Acad Sci 2020; 1464:204-221. [PMID: 31951294 PMCID: PMC7155061 DOI: 10.1111/nyas.14304] [Citation(s) in RCA: 80] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2019] [Revised: 12/23/2019] [Accepted: 01/06/2020] [Indexed: 01/04/2023]
Abstract
Our ability to focus on goal-relevant aspects of the environment is critically dependent on our ability to ignore or inhibit distracting information. One perspective is that distractor inhibition is under similar voluntary control as attentional facilitation of target processing. However, a rapidly growing body of research shows that distractor inhibition often relies on prior experience with the distracting information or other mechanisms that need not rely on active representation in working memory. Yet, how and when these different forms of inhibition are neurally implemented remains largely unclear. Here, we review findings from recent behavioral and neuroimaging studies to address this outstanding question. We specifically explore how experience with distracting information may change the processing of that information in the context of current predictive processing views of perception: by modulating a distractor's representation already in anticipation of the distractor, or after integration of top-down and bottom-up sensory signals. We also outline directions for future research necessary to enhance our understanding of how the brain filters out distracting information.
Affiliation(s)
- Dirk van Moorselaar
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, and Institute of Brain and Behavior Amsterdam, Amsterdam, the Netherlands
- Heleen A. Slagter
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, and Institute of Brain and Behavior Amsterdam, Amsterdam, the Netherlands
39
Kerzel D, Andres MKS. Object features reinstated from episodic memory guide attentional selection. Cognition 2020; 197:104158. [PMID: 31986352 DOI: 10.1016/j.cognition.2019.104158] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2019] [Revised: 12/09/2019] [Accepted: 12/14/2019] [Indexed: 10/25/2022]
Abstract
When observers search for an object in the environment, they compare the incoming sensory information to the attentional template, a representation of the target in visual working memory (VWM). Previous studies have shown that visual search is more efficient when the attentional template is precise. We pursued the hypothesis that the attentional template in VWM is automatically complemented by features from long-term memory, possibly to increase its precision. At the beginning of the experiment, observers learned associations between shape and color. Then, we tested whether selecting one of these shapes was influenced by the previously associated color. To this end, we ran a saccadic selection task consisting of a memory display and a choice display. In the memory display, the target shape was presented at central fixation, and participants were instructed to foveate this shape in the subsequent choice display. In the choice display, the target shape appeared together with a distractor shape at eccentric positions. Importantly, the target shape was colorless (gray) in the memory display so that only shape, but not color, was loaded into VWM. However, saccades went more frequently to the target shape when it was shown in the learned color than when this color was shown in the distractor. Thus, the color of the target shape was reinstated from episodic memory to complement the attentional template in VWM.
Affiliation(s)
- Dirk Kerzel
- Faculté de Psychologie et des Sciences de l'Education, Université de Genève, Switzerland
- Maïté Kun-Sook Andres
- Faculté de Psychologie et des Sciences de l'Education, Université de Genève, Switzerland
40
de Vries IEJ, Slagter HA, Olivers CNL. Oscillatory Control over Representational States in Working Memory. Trends Cogn Sci 2019; 24:150-162. [PMID: 31791896 DOI: 10.1016/j.tics.2019.11.006] [Citation(s) in RCA: 73] [Impact Index Per Article: 14.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2019] [Revised: 11/05/2019] [Accepted: 11/07/2019] [Indexed: 12/21/2022]
Abstract
In the visual world, attention is guided by perceptual goals activated in visual working memory (VWM). However, planning multiple-task sequences also requires VWM to store representations for future goals. These future goals need to be prevented from interfering with the current perceptual task. Recent findings have implicated neural oscillations as a control mechanism serving the implementation and switching of different states of prioritization of VWM representations. We review recent evidence that posterior alpha-band oscillations underlie the flexible activation and deactivation of VWM representations and that frontal delta-to-theta-band oscillations play a role in the executive control of this process. That is, frontal delta-to-theta appears to orchestrate posterior alpha through long-range oscillatory networks to flexibly set up and change VWM states during multitask sequences.
Affiliation(s)
- Ingmar E J de Vries
- Department of Experimental and Applied Psychology and Institute for Brain and Behavior Amsterdam, Faculty of Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, Van der Boechorststraat 7, 1081BT, Amsterdam, The Netherlands.
- Heleen A Slagter
- Department of Experimental and Applied Psychology and Institute for Brain and Behavior Amsterdam, Faculty of Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, Van der Boechorststraat 7, 1081BT, Amsterdam, The Netherlands.
- Christian N L Olivers
- Department of Experimental and Applied Psychology and Institute for Brain and Behavior Amsterdam, Faculty of Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, Van der Boechorststraat 7, 1081BT, Amsterdam, The Netherlands.
|
41
|
Affiliation(s)
- Joy J Geng
- Department of Psychology, Center for Mind and Brain at University of California Davis, United States.
- Andrew B Leber
- Department of Psychology and Center for Cognitive & Brain Sciences, The Ohio State University, United States.
- Sarah Shomstein
- Department of Psychological and Brain Sciences, George Washington University, United States.
|
42
|
Clarke ADF, Nowakowska A, Hunt AR. Seeing Beyond Salience and Guidance: The Role of Bias and Decision in Visual Search. Vision (Basel) 2019; 3:E46. [PMID: 31735847 PMCID: PMC6802808 DOI: 10.3390/vision3030046] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2019] [Revised: 08/07/2019] [Accepted: 08/21/2019] [Indexed: 11/17/2022] Open
Abstract
Visual search is a popular tool for studying a range of questions about perception and attention, thanks to the ease with which the basic paradigm can be controlled and manipulated. While often thought of as a sub-field of vision science, search tasks are significantly more complex than most other perceptual tasks, with strategy and decision playing an essential, but neglected, role. In this review, we briefly describe some of the important theoretical advances about perception and attention that have been gained from studying visual search within the signal detection and guided search frameworks. Under most circumstances, search also involves executing a series of eye movements. We argue that understanding the contribution of biases, routines and strategies to visual search performance over multiple fixations will lead to new insights about these decision-related processes and how they interact with perception and attention. We also highlight the neglected potential for variability, both within and between searchers, to contribute to our understanding of visual search. The exciting challenge will be to account for variations in search performance caused by these numerous factors and their interactions. We conclude the review with some recommendations for ways future research can tackle these challenges to move the field forward.
Affiliation(s)
- Anna Nowakowska
- School of Psychology, University of Aberdeen, Aberdeen AB24 3FX, UK
- Amelia R. Hunt
- School of Psychology, University of Aberdeen, Aberdeen AB24 3FX, UK
|
43
|
Yu X, Geng JJ. The attentional template is shifted and asymmetrically sharpened by distractor context. J Exp Psychol Hum Percept Perform 2019; 45:336-353. [PMID: 30742475 DOI: 10.1037/xhp0000609] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Theories of attention hypothesize the existence of an "attentional template" that contains target features in working or long-term memory. It is often assumed that the template contents are veridical, but recent studies have found that this is not true when the distractor set is linearly separable from the target (e.g. all distractors are "yellower" than an orange-colored target). In such cases, the target representation in memory shifts away from distractor features (Navalpakkam & Itti, 2007) and develops a sharper boundary with distractors (Geng, DiQuattro, & Helm, 2017). These changes in the target template are presumed to increase the target-to-distractor psychological distinctiveness and lead to better attentional selection, but it remains unclear what characteristics of the distractor context produce shifting versus sharpening. Here, we tested the hypothesis that the template representation shifts whenever the distractor set (i.e. all of the distractors) is linearly separable from the target, but asymmetrical sharpening occurs only when linearly separable distractors are highly target-similar. Our results were consistent with this hypothesis, suggesting that template shifting and asymmetrical sharpening are two mechanisms that increase the representational distinctiveness of targets from expected distractors and improve visual search performance. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
|
44
|
Feature Distribution Learning (FDL): A New Method for Studying Visual Ensembles Perception with Priming of Attention Shifts. SPATIAL LEARNING AND ATTENTION GUIDANCE 2019. [DOI: 10.1007/7657_2019_20] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
|
45
|
Witkowski P, Geng JJ. Learned feature variance is encoded in the target template and drives visual search. VISUAL COGNITION 2019; 27:487-501. [PMID: 32982562 DOI: 10.1080/13506285.2019.1645779] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Abstract
Real world visual search targets are frequently imperfect perceptual matches to our internal target templates. For example, the same friend on different occasions is likely to wear different clothes, hairstyles, and accessories, but some of these may be more likely to vary than others. The ability to deal with template-to-target variability is important to visual search in natural environments, but we know relatively little about how feature variability is handled by the attentional system. In these studies, we test the hypothesis that top-down attentional biases are sensitive to the variance of target feature dimensions over time and prioritize information from less-variable dimensions. On each trial, subjects were shown a target cue composed of colored dots moving in a specific direction followed by a working memory probe (30%) or visual search display (70%). Critically, the target features in the visual search display differed from the cue, with one feature drawn from a distribution narrowly centered over the cued feature (low-variance dimension), and the other sampled from a broader distribution (high-variance dimension). The results demonstrate that subjects used knowledge of the likely cue-to-target variance to set template precision and bias attentional selection. Moreover, an individual's working memory precision for each feature predicted search performance. Our results suggest that observers are sensitive to the variance of feature dimensions within a target and use this information to weight mechanisms of attentional selection.
Affiliation(s)
- Phillip Witkowski
- Center for Mind and Brain, University of California Davis, Davis, CA 95616; Department of Psychology, University of California Davis, Davis, CA 95616
- Joy J Geng
- Center for Mind and Brain, University of California Davis, Davis, CA 95616; Department of Psychology, University of California Davis, Davis, CA 95616
|