1
Yu X, Rahim RA, Geng JJ. Task-adaptive changes to the target template in response to distractor context: Separability versus similarity. J Exp Psychol Gen 2024; 153:564-572. PMID: 37917441; PMCID: PMC10843062; DOI: 10.1037/xge0001507.
Abstract
Theories of attention hypothesize the existence of an attentional template that contains target features in working or long-term memory. It is frequently assumed that the template contains a veridical copy of the target, but recent studies suggest that this is not true when the distractors are linearly separable from the target. In such cases, target representations shift "off-veridical" in response to the distractor context, presumably because doing so is adaptive and increases the representational distinctiveness of targets from distractors. However, some have argued that the shifts may be entirely explained by perceptual biases created by simultaneous color contrast. Here we address this debate and test the more general hypothesis that the target template is adaptively shaped by elements of the distractor context needed to distinguish targets from distractors. We used a two-dimensional target and separately manipulated the linear separability of one dimension (color) and the visual similarity of the other (orientation). We found that target shifting along the linearly separable color dimension depended on the similarity of targets to distractors along the other dimension. The target representations were consistent with a postexperiment strategy questionnaire in which participants reported using color more when orientation was hard to use, and orientation more when it was easier to use. We conclude that the target template is task-adaptive and exploits the features in the distractor context that most predictably distinguish targets from distractors, increasing visual search efficiency. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Affiliation(s)
- Xinger Yu
- Center for Mind and Brain, University of California, Davis
- Raisa A. Rahim
- Center for Mind and Brain, University of California, Davis
- Joy J. Geng
- Center for Mind and Brain, University of California, Davis
- Department of Psychology, University of California, Davis
2
Witkowski PP, Geng JJ. Prefrontal Cortex Codes Representations of Target Identity and Feature Uncertainty. J Neurosci 2023; 43:8769-8776. PMID: 37875376; PMCID: PMC10727173; DOI: 10.1523/jneurosci.1117-23.2023.
Abstract
Many objects in the real world have features that vary over time, creating uncertainty about how they will look in the future. This uncertainty makes statistical knowledge about the likelihood of features critical to attention-demanding processes such as visual search. However, little is known about how the uncertainty of visual features is integrated into predictions about search targets in the brain. In the current study, we test the idea that regions of prefrontal cortex code statistical knowledge about search targets before the onset of search. Across 20 human participants (13 female; 7 male), we observe target identity in the multivariate pattern, and uncertainty in the overall activation, of dorsolateral prefrontal cortex (DLPFC) and inferior frontal junction (IFJ) in advance of the search display. This indicates that the target identity (mean) and uncertainty (variance) of the target distribution are coded independently within the same regions. Furthermore, once the search display appeared, the univariate IFJ signal scaled with the distance of the actual target from the expected mean, but more so when expected variability was low. These results inform neural theories of attention by showing how the prefrontal cortex represents both the identity and expected variability of features in service of top-down attentional control. SIGNIFICANCE STATEMENT: Theories of attention and working memory posit that when we engage in complex cognitive tasks, our performance is determined by how precisely we remember task-relevant information. However, in the real world the properties of objects change over time, creating uncertainty about many aspects of the task. There is currently a gap in our understanding of how neural systems represent this uncertainty and combine it with target identity information in anticipation of attention-demanding cognitive tasks. In this study, we show that the prefrontal cortex represents identity and uncertainty as unique codes before task onset. These results advance theories of attention by showing that the prefrontal cortex codes both target identity and uncertainty to implement top-down attentional control.
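The reported scaling of the IFJ response with the target's distance from the expected mean, steeper when expected variability was low, is consistent with a precision-weighted mismatch signal. The sketch below is a minimal illustration of that idea only; the dividing-by-variability form and the numbers are assumptions, not the model reported in the paper.

```python
def mismatch_signal(target, expected_mean, expected_sd):
    """Hypothetical precision-weighted mismatch: grows with the distance of the
    target from the expected mean, and more steeply when expected variability is low."""
    return abs(target - expected_mean) / expected_sd

# Same target-to-mean distance, different expected variability (arbitrary units)
print(mismatch_signal(target=30.0, expected_mean=20.0, expected_sd=5.0))   # low variability  -> 2.0
print(mismatch_signal(target=30.0, expected_mean=20.0, expected_sd=20.0))  # high variability -> 0.5
```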
Affiliation(s)
- Phillip P Witkowski
- Center for Mind and Brain, University of California, Davis, Davis, California 95618
- Department of Psychology, University of California, Davis, Davis, California 95618
- Joy J Geng
- Center for Mind and Brain, University of California, Davis, Davis, California 95618
- Department of Psychology, University of California, Davis, Davis, California 95618
3
Lerebourg M, de Lange FP, Peelen MV. Expected distractor context biases the attentional template for target shapes. J Exp Psychol Hum Percept Perform 2023; 49:1236-1255. PMID: 37410402; PMCID: PMC7616464; DOI: 10.1037/xhp0001129.
Abstract
Visual search is supported by an internal representation of the target, the attentional template. However, which features are diagnostic of target presence critically depends on the distractors. Accordingly, previous research showed that a consistent distractor context shapes the attentional template for simple targets, with the template emphasizing diagnostic dimensions (e.g., color or orientation) in blocks of trials. Here, we investigated how distractor expectations bias attentional templates for complex shapes, and tested whether such biases reflect intertrial priming or can be instantiated flexibly. Participants searched for novel shapes (cued by name) in two probabilistic distractor contexts: either the target's orientation or its rectilinearity was unique (80% validity). Across four experiments, performance was better when the distractor context was expected, indicating that target features in the expected diagnostic dimension were emphasized. Attentional templates were biased by distractor expectations when the distractor context was blocked, including for participants who reported no awareness of the manipulation. Interestingly, attentional templates were also biased when the distractor context was cued on a trial-by-trial basis, but only when the two contexts were consistently presented at distinct spatial locations. These results show that attentional templates can flexibly and adaptively incorporate expectations about target-distractor relations when looking for the same object in different contexts. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
Affiliation(s)
- Maëlle Lerebourg
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Floris P de Lange
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
4
Yu X, Zhou Z, Becker SI, Boettcher SEP, Geng JJ. Good-enough attentional guidance. Trends Cogn Sci 2023; 27:391-403. PMID: 36841692; DOI: 10.1016/j.tics.2023.01.007.
Abstract
Theories of attention posit that attentional guidance operates on information held in a target template within memory. The template is often thought to contain veridical target features, akin to a photograph, and to guide attention to objects that match the exact target features. However, recent evidence suggests that attentional guidance is highly flexible and often guided by non-veridical features, a subset of features, or only associated features. We integrate these findings and propose that attentional guidance maximizes search efficiency based on a 'good-enough' principle to rapidly localize candidate target objects. Candidates are then serially interrogated to make target-match decisions using more precise information. We suggest that good-enough guidance optimizes the speed-accuracy-effort trade-offs inherent in each stage of visual search.
Affiliation(s)
- Xinger Yu
- Center for Mind and Brain, University of California Davis, Davis, CA, USA; Department of Psychology, University of California Davis, Davis, CA, USA
- Zhiheng Zhou
- Center for Mind and Brain, University of California Davis, Davis, CA, USA
- Stefanie I Becker
- School of Psychology, University of Queensland, Brisbane, QLD, Australia
- Joy J Geng
- Center for Mind and Brain, University of California Davis, Davis, CA, USA; Department of Psychology, University of California Davis, Davis, CA, USA.
5
Priming of probabilistic attentional templates. Psychon Bull Rev 2023; 30:22-39. PMID: 35831678; DOI: 10.3758/s13423-022-02125-w.
Abstract
Attentional priming has a dominating influence on vision, speeding visual search, releasing items from crowding, and reducing masking effects; during free choice, primed targets are chosen over unprimed ones. Many accounts postulate that templates stored in working memory control what we attend to and mediate the priming. But what is the nature of these templates (or representations)? Analyses of real-world visual scenes suggest that tuning templates to exact color or luminance values would be impractical, since those can vary greatly because of changes in environmental circumstances and perceptual interpretation. Tuning templates to a range of the most probable values would be more efficient. Recent evidence does indeed suggest that the visual system represents such probability, gradually encoding statistical variation in the environment through repeated exposure to input statistics. This is consistent with evidence from neurophysiology and theoretical neuroscience as well as computational evidence of probabilistic representations in visual perception. I argue that such probabilistic representations are the unit of attentional priming and that priming of, say, a repeated single-color value simply involves priming of a distribution with no variance. This "priming of probability" view can be modelled within a Bayesian framework in which priming provides contextual priors. Priming can therefore be thought of as learning the underlying probability density function of the target or distractor sets in a given continuous task.
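As a rough illustration of the "priming of probability" idea, the sketch below represents the primed template as a probability distribution over hues that is updated after each exposure; repeating a single color concentrates the distribution, i.e., priming a distribution with essentially no variance. The update rule, bin width, and parameters are assumptions for illustration, not the Bayesian model proposed in the paper.

```python
import numpy as np

hue_bins = np.arange(0, 360, 10)       # candidate target hues (degrees)
template = np.ones(len(hue_bins))      # flat prior before any trials
template /= template.sum()

def prime(template, observed_hue, concentration=10.0, learning_rate=0.3):
    """Blend the current template with a circular (von Mises-like) bump at the observed hue."""
    dist = np.deg2rad(hue_bins - observed_hue)
    likelihood = np.exp(concentration * np.cos(dist))
    likelihood /= likelihood.sum()
    updated = (1 - learning_rate) * template + learning_rate * likelihood
    return updated / updated.sum()

# Repeating a single target color narrows the distribution; a variable color
# sequence would instead spread probability over the sampled range.
for hue in [120, 120, 120, 120]:
    template = prime(template, hue)
print(hue_bins[np.argmax(template)])   # the template now peaks at the repeated hue (120)
```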
6
Adamo SH, Roque N, Barufaldi B, Schmidt J, Mello-Thoms C, Lago M. Assessing satisfaction of search in virtual mammograms for experienced and novice searchers. J Med Imaging (Bellingham) 2023; 10:S11917. PMID: 37485309; PMCID: PMC10359808; DOI: 10.1117/1.jmi.10.s1.s11917.
Abstract
Purpose: Satisfaction of search (SOS) is a phenomenon in which searchers are more likely to miss a lesion/target after detecting a first lesion/target. Here, we investigated SOS for masses and calcifications in virtual mammograms with experienced and novice searchers to determine the extent to which: (1) SOS affects breast lesion detection, (2) similarity between lesions impacts detection, and (3) experience impacts SOS rates. Approach: The open virtual clinical trials framework was used to simulate the breast anatomy of patients, and up to two simulated masses and/or single calcifications were inserted into the breast models. Experienced searchers (residents, fellows, and radiologists with breast imaging experience) and novice searchers (undergraduates with no breast imaging experience) were instructed to search for up to two lesions (masses and calcifications) per image. Results: 2 × 2 mixed-factors analyses of variance (ANOVAs) were run with (1) single- versus second-lesion hit rates, (2) similar versus dissimilar second-lesion hit rates, and (3) similar versus dissimilar second-lesion response times as within-subject factors and experience as the between-subjects factor. The ANOVAs demonstrated that: (1) experienced and novice searchers made a significant number of SOS errors, (2) similarity had little impact on experienced searchers, but novice searchers were more likely to miss a dissimilar second lesion than one similar to a detected first lesion, and (3) experienced and novice searchers were faster at finding similar compared with dissimilar second lesions. Conclusions: We demonstrated that SOS is a significant cause of lesion misses in virtual mammograms and that reader experience impacts detection rates for similar compared with dissimilar abnormalities. These results suggest that experience may impact strategy and/or recognition, with theoretical implications for determining why SOS occurs.
Affiliation(s)
- Nelson Roque
- University of Central Florida, Orlando, Florida, United States
- Bruno Barufaldi
- University of Pennsylvania, Philadelphia, Pennsylvania, United States
- Joseph Schmidt
- University of Central Florida, Orlando, Florida, United States
- Miguel Lago
- U.S. Food and Drug Administration, Silver Spring, Maryland, United States
7
Learned feature regularities enable suppression of spatially overlapping stimuli. Atten Percept Psychophys 2022; 85:769-784. PMID: 36417129; PMCID: PMC10066085; DOI: 10.3758/s13414-022-02612-1.
Abstract
Contemporary theories of attentional control state that information can be prioritized based on selection history. Even though theories agree that selection history can impact representations of spatial location, which in turn helps guide attention, there remains disagreement on whether nonspatial features (e.g., color) are modulated in a similar way. While previous work has demonstrated color suppression using visual search tasks, it is possible that the location corresponding to the distractor was suppressed, consistent with a spatial mechanism of suppression. Here, we sought to rule out this possibility by testing whether similar suppression of a learned distractor color can occur for spatially overlapping visual stimuli. On a given trial, two spatially superimposed stimuli (line arrays) were tilted either left or right of vertical and presented in one of four distinct colors. Subjects performed a speeded report of the orientation of the “target” array with the most lines. Critically, the distractor array was regularly one color, and this high-probability color was never the color of the target array, which encouraged learned suppression. In two experiments, responses to the target array were fastest when the distractor array was in the high-probability color, suggesting participants suppressed the distractor color. Additionally, when the regularities were removed, the high-probability distractor color continued to benefit speeded target identification for individual subjects (E1) but slowed target identification when presented in the target array (E2). Together, these results indicate that learned suppression of feature-based regularities modulates target detection performance independent of spatial location and persists over time.
8
Witkowski PP, Geng JJ. Attentional priority is determined by predicted feature distributions. J Exp Psychol Hum Percept Perform 2022; 48:1201-1212. PMID: 36048065; PMCID: PMC10249461; DOI: 10.1037/xhp0001041.
Abstract
Visual attention is often characterized as being guided by precise memories for target objects. However, real-world search targets have dynamic features that vary over time, meaning that observers must predict how the target could look based on how its features are expected to change. Despite its importance, little is known about how target feature predictions influence feature-based attention, or how these predictions are represented in the target template. In Experiment 1 (N = 60 university students), we show that observers readily track the statistics of target features over time and adapt attentional priority to predictions about the distribution of target features. In Experiments 2a and 2b (N = 480 university students), we show that these predictions are encoded into the target template as a distribution of likelihoods over possible target features, which are independent of memory precision for the cued item. These results provide a novel demonstration of how observers represent predicted feature distributions when target features are uncertain and show that these predictions are used to set attentional priority during visual search. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
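One way to picture a template encoded as a distribution of likelihoods over possible target features is to score each display item by how probable its feature value is under the predicted distribution. The Gaussian form and the numbers below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

predicted_mean, predicted_sd = 45.0, 8.0            # expected orientation and its predicted spread
item_orientations = np.array([45.0, 55.0, 80.0])    # feature values of items in the display

# Likelihood of each item's feature under the predicted (Gaussian) target distribution,
# used here as a stand-in for attentional priority
priority = np.exp(-0.5 * ((item_orientations - predicted_mean) / predicted_sd) ** 2)
print(priority / priority.max())                    # highest priority for the most probable feature
```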
Affiliation(s)
- Phillip P. Witkowski
- Center for Mind and Brain, University of California Davis, Davis, CA, 95618
- Department of Psychology, University of California Davis, Davis, CA, 95618
- Joy J. Geng
- Center for Mind and Brain, University of California Davis, Davis, CA, 95618
- Department of Psychology, University of California Davis, Davis, CA, 95618
9
Hansmann-Roth S, Þorsteinsdóttir S, Geng JJ, Kristjánsson Á. Temporal integration of feature probability distributions. Psychol Res 2022; 86:2030-2044. PMID: 34997327; DOI: 10.1007/s00426-021-01621-3.
Abstract
Humans are surprisingly good at learning the statistical characteristics of their visual environment. Recent studies have revealed that the visual system can learn not only repeated features of visual search distractors, but also their actual probability distributions: search times were determined by the frequency of distractor features over consecutive search trials. The search displays used in these studies involved many exemplars of distractors on each trial, and while there is clear evidence that feature distributions can be learned from large distractor sets, it is less clear whether distributions are learned well for single targets presented on each trial. Here, we investigated potential learning of probability distributions of single targets during visual search. Over blocks of trials, observers searched for an oddly colored target that was drawn from either a Gaussian or a uniform distribution. Search times for the different target colors were clearly influenced by the probability of that feature within trial blocks. The same search targets, coming from the extremes of the two distributions, were found significantly more slowly during blocks where the targets were drawn from a Gaussian distribution than during blocks where they were drawn from a uniform distribution, indicating that observers were sensitive to the target probability determined by the distribution's shape. In Experiment 2, we replicated the effect using binned distributions and revealed the limitations of encoding complex target distributions. Our results demonstrate detailed internal representations of target feature distributions and show that the visual system integrates probability distributions of target colors over surprisingly long trial sequences.
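The slower search for extreme target colors under the Gaussian distribution follows from those colors simply being rarer than under a uniform distribution over the same range. A small simulation makes the probability difference concrete; the range, mean, and spread below are arbitrary illustrative values, not the stimulus parameters used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

low, high = 0.0, 60.0                                # target color range (arbitrary units)
gaussian_colors = np.clip(rng.normal(30.0, 10.0, 10_000), low, high)
uniform_colors = rng.uniform(low, high, 10_000)

def extreme_rate(colors):
    """Proportion of sampled targets falling near the extremes of the range."""
    return np.mean((colors < 5) | (colors > 55))

# Extreme colors are far rarer under the Gaussian than under the uniform distribution,
# which is the probability difference the slower search times track.
print(extreme_rate(gaussian_colors), extreme_rate(uniform_colors))
```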
Affiliation(s)
- Sabrina Hansmann-Roth
- Icelandic Vision Lab, School of Health Sciences, University of Iceland, Reykjavík, Iceland.
- Université de Lille, CNRS, UMR 9193-SCALab-Sciences Cognitives et Sciences Affectives, 59000, Lille, France.
- Sóley Þorsteinsdóttir
- Icelandic Vision Lab, School of Health Sciences, University of Iceland, Reykjavík, Iceland
- Joy J Geng
- Center for Mind and Brain, University of California Davis, Davis, CA, USA
- Department of Psychology, University of California Davis, Davis, CA, USA
- Árni Kristjánsson
- Icelandic Vision Lab, School of Health Sciences, University of Iceland, Reykjavík, Iceland
- School of Psychology, National Research University Higher School of Economics, Moscow, Russia
10
Abstract
Models of attention posit that attentional priority is established by summing the saliency and relevancy signals from feature-selective maps. The dimension-weighting account further hypothesizes that information from each feature-selective map is weighted based on expectations of how informative each dimension will be. In the current studies, we investigated whether attentional biases to the features of a conjunction target (color and orientation) differ when one dimension is expected to be more diagnostic of the target. In a series of color-orientation conjunction search tasks, observers saw an exact cue for the upcoming target, while the probability of distractors sharing a target feature in each dimension was manipulated. In one context, distractors were more likely to share the target color, and in another, distractors were more likely to share the target orientation. The results indicated that despite an overall bias toward color, attentional priority to each target feature was flexibly adjusted according to the distractor context: RT and accuracy were better when the diagnostic feature was expected than when it was unexpected. This occurred whether the distractor context was learned implicitly or explicitly. These results suggest that feature-based enhancement can occur selectively for the dimension expected to be most informative in distinguishing the target from distractors.
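A minimal sketch of the dimension-weighting computation described above: an item's priority is a weighted sum of its target-match signals on the color and orientation maps, with the weight shifted toward whichever dimension is expected to be more diagnostic. The weights and match values are invented for illustration and are not parameters from the study.

```python
def priority(color_match, orientation_match, w_color, w_orientation):
    """Dimension-weighted priority: weighted sum of per-dimension target-match signals."""
    return w_color * color_match + w_orientation * orientation_match

# A distractor that shares the target's color but not its orientation
item = {"color_match": 1.0, "orientation_match": 0.2}

# When color is expected to be diagnostic it carries more weight and this distractor
# competes strongly; when orientation is expected to be diagnostic instead, the color
# weight drops and the same distractor attracts much less priority.
print(priority(**item, w_color=0.8, w_orientation=0.2))  # color-weighted context       -> 0.84
print(priority(**item, w_color=0.2, w_orientation=0.8))  # orientation-weighted context -> 0.36
```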