1
Wang Y, Buetti S, Cui AY, Lleras A. Color-color feature guidance in visual search. Atten Percept Psychophys 2025. [PMID: 40295424] [DOI: 10.3758/s13414-025-03055-0]
Abstract
Previous work has demonstrated that when the target and distractors differ in two features across different dimensions (e.g., red and square), search can unfold in parallel in either a simultaneous or a sequential feature-guidance manner. However, the underlying mechanism of how two features within a single feature dimension guide search remains elusive. This study specifically aims to explore how two colors, arranged in a center-surround configuration (e.g., red center/green surround), guide search. Our investigation encompasses homogeneous (Experiments 1-3) and heterogeneous (Experiment 4) search displays. Experiment 1 demonstrated a parallel search mechanism with a two-color, location-bound search template by using a search display containing distractors that have an inverse color relation with the target. Experiment 2 revealed a strategic preference for using a single color to guide search without location binding when distractor types were intermixed across trials, and this preference persisted even when the template was emphasized by presenting it before each trial. Furthermore, Experiment 3 illustrated that, with fixed-distractor practice, participants can acquire a two-color, location-bound search strategy. Once in place, this strategy persists even when the distractor types become intermixed in subsequent blocks of trials. Experiments 4A-D used a computational modeling approach and found that two-color guidance operates in a parallel, sequential manner in heterogeneous displays: participants first use one of the two target colors in a location-bound manner to filter out one subset of distractors and then attend to the second target color (also location bound) to reject the remaining distractors.
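To make the two-stage strategy concrete, here is a purely illustrative sketch; the item structure, color names, and display below are hypothetical examples, not the study's stimuli or model:

```python
from dataclasses import dataclass

# Illustrative sketch of the two-stage, location-bound color filtering described
# above, for a center-surround target such as red center / green surround.
@dataclass
class Item:
    center: str    # color of the item's center region
    surround: str  # color of the item's surround region

TARGET = Item(center="red", surround="green")

def two_stage_filter(display: list[Item]) -> list[Item]:
    # Stage 1: use one target color in a location-bound way (center must be red)
    # to discard one subset of distractors.
    candidates = [item for item in display if item.center == TARGET.center]
    # Stage 2: apply the second target color, also location-bound
    # (surround must be green), to reject the remaining distractors.
    return [item for item in candidates if item.surround == TARGET.surround]

display = [Item("red", "green"),   # target
           Item("green", "red"),   # inverse-relation distractor, rejected at stage 1
           Item("red", "blue")]    # rejected at stage 2
print(two_stage_filter(display))   # -> [Item(center='red', surround='green')]
```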
Affiliation(s)
- Yiwen Wang, Simona Buetti, Andrea Yaoyun Cui, Alejandro Lleras
- Department of Psychology, University of Illinois at Urbana-Champaign, 603 E. Daniel St., Champaign, IL, 61820, USA
2
Cui AY, Buetti S, Xu ZJ, Lleras A. Evaluating the contribution of parallel processing of color and shape in a conjunction search task. Sci Rep 2025; 15:7760. [PMID: 40044949] [PMCID: PMC11882880] [DOI: 10.1038/s41598-025-92453-3]
Abstract
Traditionally, researchers interpret the difficulty of conjunction search as a difficulty in binding features. In the present study, we used a behavioral-computational approach to assess whether parameters from feature search could predict performance in a color-shape conjunction search task. We also investigated whether pooling-mediated processing in peripheral regions was a key limiting factor in conjunction search performance by manipulating display arrangements across different experiments. The results indicated that parameters estimated from homogeneous search displays can indeed be used to successfully predict performance in conjunction search displays. This finding is noteworthy because it indicates that the visual system must be extracting the same information (i.e., target-distractor similarity along color and shape) from the display in feature and conjunction search tasks. Furthermore, there was no compelling evidence that pooling-mediated processing was the primary constraint on performance in this conjunction search task. A model-comparison approach then assessed how accurately different distractor-rejection architectures predicted performance in conjunction search. The winning model showed participants engaging hierarchically with the display, selecting and rejecting distractor subsets based on a single defining feature. Taken in the context of previous research on heterogeneous search performance, the current results imply that the inherent demands of searching for a conjunction of color and shape compel participants to adopt this targeted search strategy.
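As a rough illustration of how feature-search parameters could feed such a prediction, the sketch below assumes the logarithmic set-size costs commonly used in this line of work and implements only the hierarchical subset-rejection idea described above; the slope and intercept values are made-up placeholders, not the parameters estimated in the paper:

```python
import math

# Hypothetical log-efficiency slopes (ms per log unit), assumed to come from
# homogeneous feature-search displays for each distractor type; the numbers
# are placeholders, not fitted values.
D_COLOR_LURE = 25.0   # distractors sharing the target's shape, differing in color
D_SHAPE_LURE = 40.0   # distractors sharing the target's color, differing in shape
BASELINE_MS = 450.0   # hypothetical intercept

def predicted_conjunction_rt(n_color_lures: int, n_shape_lures: int) -> float:
    """Hierarchical subset rejection: each distractor subset is selected and
    rejected on its single distinguishing feature, contributing its own
    logarithmic cost on top of the baseline."""
    return (BASELINE_MS
            + D_COLOR_LURE * math.log(n_color_lures + 1)
            + D_SHAPE_LURE * math.log(n_shape_lures + 1))

print(round(predicted_conjunction_rt(8, 8)))  # ~593 ms with these toy numbers
```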
Affiliation(s)
- Andrea Yaoyun Cui, Simona Buetti, Zoe Jing Xu, Alejandro Lleras
- Department of Psychology, University of Illinois Urbana-Champaign, Champaign, 61820, United States
3
Xu ZJ, Lleras A, Gong ZG, Buetti S. Top-down instructions influence the attentional weight on color and shape dimensions during bidimensional search. Sci Rep 2024; 14:31376. [PMID: 39732851] [PMCID: PMC11682205] [DOI: 10.1038/s41598-024-82866-x]
Abstract
Efficient searches are guided by target-distractor distinctiveness: the greater the distinctiveness, the faster the search. Previous research showed that when the target and distractors differ along both color and shape dimensions (i.e., bidimensional search), distinctiveness along the individual dimensions combines collinearly to guide the search, following a city-block metric. This result was found when participants expected the target and distractors to differ along both dimensions. In the present study, we used an instruction manipulation to investigate how bidimensional search varies in response to different top-down instructions. Using unidimensional search performance observed in Experiment 1, we predicted bidimensional search performance under three conditions: when participants were instructed to attend to color (Experiment 2), shape (Experiment 3), or both (Experiment 4). Results showed that instructions influenced how distinctiveness along color and shape combines to guide attention: when instructed to search for a target color, participants allocated more attentional weight to the color dimension (and less to the shape dimension) than when instructed to search for a target shape. Our study presents a novel technique to quantify how top-down instructions change attentional weighting of different features during bidimensional visual search.
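The combination rule this describes can be written compactly; the notation below is an editorial sketch under assumed symbols, not the authors' fitted equation:

```latex
% D_c, D_s : unidimensional distinctiveness scores for color and shape
%            (estimated from Experiment 1)
% w_c, w_s : attentional weights set by the top-down instruction
\[
  D_{\mathrm{bidimensional}} = w_c D_c + w_s D_s ,
  \qquad \text{with } w_c > w_s \text{ when instructed to attend to color.}
\]
```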
Affiliation(s)
- Zoe Jing Xu, Alejandro Lleras, Zixu Gavin Gong, Simona Buetti
- Psychology Department, University of Illinois at Urbana-Champaign, Champaign, United States
4
Xu ZJ, Buetti S, Xia Y, Lleras A. Skills and cautiousness predict performance in difficult search. Atten Percept Psychophys 2024; 86:1897-1912. [PMID: 38997576] [DOI: 10.3758/s13414-024-02923-5]
Abstract
People differ in how well they search. What factors might contribute to this variability? We tested the contribution of two cognitive abilities: visual working memory (VWM) capacity and object recognition ability. Participants completed three tasks: a difficult, inefficient visual search task, where they searched for a target letter T among skewed L distractors; a VWM task, where they memorized a color array and then identified whether a probed color belonged to the previous array; and the Novel Object Memory Test (NOMT), where they learnt complex novel objects and then identified them amongst objects that closely resembled them. Exploratory and confirmatory factor analyses revealed two latent factors that explain the shared variance among these three tasks: a factor indicative of the level of caution participants exercised during the challenging visual search task, and a factor representing their visual cognitive abilities. People who score high on the search cautiousness factor tend to perform a more accurate but slower search. People who score high on the visual cognitive ability factor tend to have a higher VWM capacity, a better object recognition ability, and a faster search speed. The results reflect two points: (1) visual search tasks share components with visual working memory and object recognition tasks; (2) search performance is influenced not only by the search display's properties but also by individual predispositions such as caution and general visual abilities. This study introduces new factors for consideration when interpreting variations in visual search behaviors.
Affiliation(s)
- Zoe Jing Xu, Simona Buetti, Yan Xia, Alejandro Lleras
- University of Illinois, 603 E. Daniel St., Champaign, IL, 61820, USA
5
Chapman AF, Störmer VS. Target-distractor similarity predicts visual search efficiency but only for highly similar features. Atten Percept Psychophys 2024; 86:1872-1882. [PMID: 39251566] [DOI: 10.3758/s13414-024-02954-y]
Abstract
A major constraining factor for attentional selection is the similarity between targets and distractors. When similarity is low, target items can be identified quickly and efficiently, whereas high similarity can incur large costs on processing speed. Models of visual search contrast a fast, efficient parallel stage with a slow serial processing stage where search times are strongly modulated by the number of distractors in the display. In particular, recent work has argued that the magnitude of search slopes should be inversely proportional to target-distractor similarity. Here, we assessed the relationship between target-distractor similarity and search slopes. In our visual search tasks, participants detected an oddball color target among distractors (Experiments 1 & 2) or discriminated the direction of a triangle in the oddball color (Experiment 3). We systematically varied the similarity between target and distractor colors (along a circular CIELAB color wheel) and the number of distractors in the search array, finding logarithmic search slopes that were inversely proportional to the number of items in the array. Surprisingly, we also found that searches were highly efficient (i.e., near-zero slopes) for targets and distractors that were extremely similar (≤20° in color space). These findings indicate that visual search is systematically influenced by target-distractor similarity across different processing stages. Importantly, we found that search can be highly efficient and entirely unaffected by the number of distractors despite high perceptual similarity, in contrast to the general assumption that high similarity must lead to slow and serial search behavior.
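For readers unfamiliar with the set-size convention, the pattern above can be summarized with the standard logarithmic form used in this literature; this is an illustrative sketch, not the authors' fitted model:

```latex
% N : number of items in the search array
% b : logarithmic search slope, which varies with target-distractor similarity
\[
  \mathrm{RT}(N) \approx a + b \log N ,
\]
% "Highly efficient" search corresponds to b \approx 0, which the abstract
% reports even when target and distractor colors differ by 20 degrees or less.
```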
Affiliation(s)
- Angus F Chapman
- Department of Psychological and Brain Sciences, Boston University, Boston, MA, USA
- Department of Psychology, University of California San Diego, La Jolla, CA, USA
- Viola S Störmer
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
6
Noonan MP, Störmer VS. Contextual and Temporal Constraints for Attentional Capture: Commentary on Theeuwes' 2023 Review "The Attentional Capture Debate: When Can We Avoid Salient Distractors and When Not?". J Cogn 2023; 6:37. [PMID: 37426062] [PMCID: PMC10327855] [DOI: 10.5334/joc.274]
Abstract
Salient distractors demand our attention. Their salience, derived from intensity, relative contrast or learned relevance, captures our limited information capacity. This is typically an adaptive response as salient stimuli may require an immediate change in behaviour. However, sometimes apparent salient distractors do not capture attention. Theeuwes, in his recent commentary, has proposed certain boundary conditions of the visual scene that result in one of two search modes, serial or parallel, that determine whether we can avoid salient distractors or not. Here, we argue that a more complete theory should consider the temporal and contextual factors that influence the very salience of the distractor itself.
Affiliation(s)
- MaryAnn P. Noonan
- Department of Psychology, University of York, Heslington, York, UK
- Department of Experimental Psychology, University of Oxford, South Parks Road, Oxford, UK
- Viola S. Störmer
- Department of Psychological and Brain Sciences, Dartmouth College, USA
7
Yu X, Zhou Z, Becker SI, Boettcher SEP, Geng JJ. Good-enough attentional guidance. Trends Cogn Sci 2023; 27:391-403. [PMID: 36841692] [DOI: 10.1016/j.tics.2023.01.007]
Abstract
Theories of attention posit that attentional guidance operates on information held in a target template within memory. The template is often thought to contain veridical target features, akin to a photograph, and to guide attention to objects that match the exact target features. However, recent evidence suggests that attentional guidance is highly flexible and often guided by non-veridical features, a subset of features, or only associated features. We integrate these findings and propose that attentional guidance maximizes search efficiency based on a 'good-enough' principle to rapidly localize candidate target objects. Candidates are then serially interrogated to make target-match decisions using more precise information. We suggest that good-enough guidance optimizes the speed-accuracy-effort trade-offs inherent in each stage of visual search.
Affiliation(s)
- Xinger Yu
- Center for Mind and Brain, University of California Davis, Davis, CA, USA
- Department of Psychology, University of California Davis, Davis, CA, USA
- Zhiheng Zhou
- Center for Mind and Brain, University of California Davis, Davis, CA, USA
- Stefanie I Becker
- School of Psychology, University of Queensland, Brisbane, QLD, Australia
- Joy J Geng
- Center for Mind and Brain, University of California Davis, Davis, CA, USA
- Department of Psychology, University of California Davis, Davis, CA, USA
8
Expectations generated based on associative learning guide visual search for novel packaging labels. Food Qual Prefer 2023. [DOI: 10.1016/j.foodqual.2022.104743]
9
Chapman AF, Störmer VS. Feature similarity is non-linearly related to attentional selection: Evidence from visual search and sustained attention tasks. J Vis 2022; 22:4. [PMID: 35834377] [PMCID: PMC9290316] [DOI: 10.1167/jov.22.8.4]
Abstract
Although many theories of attention highlight the importance of similarity between target and distractor items for selection, few studies have directly quantified the function underlying this relationship. Across two commonly used tasks, visual search and sustained attention, we investigated how target-distractor similarity impacts feature-based attentional selection. Importantly, we found comparable patterns of performance in both visual search and sustained feature-based attention tasks, with performance (response times and d', respectively) plateauing at medium target-distractor distances (40°-50° around a luminance-matched color wheel). In contrast, visual search efficiency, as measured by search slopes, was affected by a much narrower range of similarity levels (10°-20°). We assessed the relationship between target-distractor similarity and attentional performance using both a stimulus-based and a psychologically based measure of similarity and found this nonlinear relationship in both cases. However, psychological similarity accounted for some of the nonlinearities observed in the data, suggesting that measures of psychological similarity are more appropriate when studying effects of target-distractor similarity. These findings place novel constraints on models of selective attention and emphasize the importance of considering the similarity structure of the feature space over which attention operates. Broadly, the nonlinear effects of similarity on attention are consistent with accounts proposing that attention exaggerates the distance between competing representations, possibly through enhancement of off-tuned neurons.
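For concreteness, the target-distractor distances referred to above are simply the shorter arc between two hue angles on the color wheel; a minimal helper, with hypothetical example angles:

```python
def circular_distance(a_deg: float, b_deg: float) -> float:
    """Shorter angular distance (degrees) between two positions on a 360-degree
    color wheel, e.g., a luminance-matched CIELAB hue circle."""
    d = abs(a_deg - b_deg) % 360.0
    return min(d, 360.0 - d)

print(circular_distance(10.0, 355.0))   # 15.0 -> highly similar colors
print(circular_distance(90.0, 140.0))   # 50.0 -> near the 40-50 degree plateau noted above
```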
Affiliation(s)
- Angus F Chapman
- Department of Psychology, University of California, San Diego, La Jolla, CA, USA
- Viola S Störmer
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
10
Regenwetter M, Robinson MM, Wang C. Four Internal Inconsistencies in Tversky and Kahneman’s (1992) Cumulative Prospect Theory Article: A Case Study in Ambiguous Theoretical Scope and Ambiguous Parsimony. Advances in Methods and Practices in Psychological Science 2022. [DOI: 10.1177/25152459221074653]
Abstract
Scholars rely heavily on theoretical scope as a tool to challenge existing theory. We advocate that scientific discovery could be accelerated if far more effort were invested into also overtly specifying and painstakingly delineating the intended purview of any proposed new theory at the time of its inception. As a case study, we consider Tversky and Kahneman (1992). They motivated their Nobel-Prize-winning cumulative prospect theory with evidence that, in each of two studies, roughly half of the participants violated independence, a property required by expected utility theory (EUT). Yet even at the time of inception, new theories may reveal signs of their own limited scope. For example, we show that Tversky and Kahneman's findings in their own test of loss aversion provide evidence that, in that study, at least half of their participants in turn violated their theory. We highlight a combination of conflicting findings in the original article that makes it ambiguous to evaluate both cumulative prospect theory's scope and its parsimony on the authors' own evidence. The Tversky and Kahneman article is illustrative of a social and behavioral research culture in which theoretical scope plays an extremely asymmetric role: to call existing theory into question and motivate surrogate proposals.
Affiliation(s)
- Michel Regenwetter
- Department of Psychology, University of Illinois at Urbana-Champaign, Champaign, Illinois
- Department of Political Science, University of Illinois at Urbana-Champaign, Urbana, Illinois
- Department of Electrical & Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois
- Maria M. Robinson
- Department of Psychology, University of California San Diego, La Jolla, California
- Cihang Wang
- Department of Economics, University of Illinois at Urbana-Champaign, Urbana, Illinois
11
Dent K. On the relationship between cognitive load and the efficiency of distractor rejection in visual search: The case of motion-form conjunctions. Visual Cognition 2021. [DOI: 10.1080/13506285.2021.2017376]
Affiliation(s)
- Kevin Dent
- Department of Psychology, University of Essex, Colchester, Essex
12
Balke J, Rolke B, Seibold VC. Temporal preparation accelerates spatial selection by facilitating bottom-up processing. Brain Res 2021; 1777:147765. [PMID: 34951971] [DOI: 10.1016/j.brainres.2021.147765]
Abstract
Temporal preparation facilitates spatial selection in visual search. This selection benefit has not only been observed for targets, but also for task-irrelevant, salient distractors. This result suggests that temporal preparation influences bottom-up salience in spatial selection. To test this assumption, we conducted an event-related-potential (ERP) study in which we measured the joint effect of temporal preparation and target salience on the N2pc as an index of spatial selection and the N1 as an index of perceptual discrimination. To manipulate target salience, we employed two different setsizes (i.e., a small or large number of homogeneous distractors). To manipulate temporal preparation, we presented a warning signal before the search display and we varied the length of the interval (foreperiod) between warning signal and search display in different blocks of trials (constant foreperiod paradigm). Replicating previous results, we observed that the N1 and the N2pc arose earlier in case of good temporal preparation. Importantly, the beneficial effect on the N2pc onset latency was stronger when the target salience was initially low (i.e., small setsize). This result provides evidence that temporal preparation influences bottom-up processing and, thereby, facilitates spatial selection.
Affiliation(s)
- Janina Balke, Bettina Rolke, Verena C Seibold
- Evolutionary Cognition, Department of Psychology, University of Tuebingen, Schleichstraße 4, 72076 Tuebingen, Germany
13
Yu X, Hanks TD, Geng JJ. Attentional Guidance and Match Decisions Rely on Different Template Information During Visual Search. Psychol Sci 2021; 33:105-120. [PMID: 34878949] [DOI: 10.1177/09567976211032225]
Abstract
When searching for a target object, we engage in a continuous "look-identify" cycle in which we use known features of the target to guide attention toward potential targets and then to decide whether the selected object is indeed the target. Target information in memory (the target template or attentional template) is typically characterized as having a single, fixed source. However, debate has recently emerged over whether flexibility in the target template is relational or optimal. On the basis of evidence from two experiments using college students (Ns = 30 and 70, respectively), we propose that initial guidance of attention uses a coarse relational code, but subsequent decisions use an optimal code. Our results offer a novel perspective that the precision of template information differs when guiding sensory selection and when making identity decisions during visual search.
Affiliation(s)
- Xinger Yu
- Center for Mind and Brain, University of California, Davis
- Department of Psychology, University of California, Davis
- Timothy D Hanks
- Center for Neuroscience, University of California, Davis
- Department of Neurology, University of California, Davis
- Joy J Geng
- Center for Mind and Brain, University of California, Davis
- Department of Psychology, University of California, Davis
14
Abstract
This paper describes Guided Search 6.0 (GS6), a revised model of visual search. When we encounter a scene, we can see something everywhere. However, we cannot recognize more than a few items at a time. Attention is used to select items so that their features can be "bound" into recognizable objects. Attention is "guided" so that items can be processed in an intelligent order. In GS6, this guidance comes from five sources of preattentive information: (1) top-down and (2) bottom-up feature guidance, (3) prior history (e.g., priming), (4) reward, and (5) scene syntax and semantics. These sources are combined into a spatial "priority map," a dynamic attentional landscape that evolves over the course of search. Selective attention is guided to the most active location in the priority map approximately 20 times per second. Guidance will not be uniform across the visual field. It will favor items near the point of fixation. Three types of functional visual field (FVFs) describe the nature of these foveal biases. There is a resolution FVF, an FVF governing exploratory eye movements, and an FVF governing covert deployments of attention. To be identified as targets or rejected as distractors, items must be compared to target templates held in memory. The binding and recognition of an attended object is modeled as a diffusion process taking > 150 ms/item. Since selection occurs more frequently than that, it follows that multiple items are undergoing recognition at the same time, though asynchronously, making GS6 a hybrid of serial and parallel processes. In GS6, if a target is not found, search terminates when an accumulating quitting signal reaches a threshold. Setting of that threshold is adaptive, allowing feedback about performance to shape subsequent searches. Simulation shows that the combination of asynchronous diffusion and a quitting signal can produce the basic patterns of response time and error data from a range of search experiments.
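A highly simplified sketch of this control structure is given below. It is an editorial illustration of the ideas in the abstract, not the GS6 implementation: the priority computation, thresholds, and most timing constants are placeholders, except where the abstract supplies them (roughly 20 selections per second; more than 150 ms per item to identify).

```python
import random

SELECTION_INTERVAL_MS = 50   # ~20 covert selections per second (from the abstract)
IDENTIFICATION_MS = 180      # diffusion-like identification, > 150 ms/item
QUIT_THRESHOLD = 30.0        # adaptive in GS6; a fixed placeholder here
QUIT_RATE_PER_MS = 0.005     # placeholder accumulation rate for the quitting signal

def priority(item) -> float:
    # In GS6 this would combine top-down, bottom-up, history, reward, and scene
    # guidance, modulated by functional visual fields; here it is random noise
    # plus a bump for the target, purely for illustration.
    return random.random() + (1.0 if item["is_target"] else 0.0)

def search(display):
    pending = []                       # items undergoing recognition (asynchronous, parallel)
    quit_signal, t = 0.0, 0
    remaining = list(display)
    while quit_signal < QUIT_THRESHOLD:
        t += SELECTION_INTERVAL_MS
        quit_signal += QUIT_RATE_PER_MS * SELECTION_INTERVAL_MS
        if remaining:                  # guide attention to the most active location
            selected = max(remaining, key=priority)
            remaining.remove(selected)
            pending.append((selected, t + IDENTIFICATION_MS))
        for item, done_at in list(pending):   # recognitions finish asynchronously
            if t >= done_at:
                pending.remove((item, done_at))
                if item["is_target"]:
                    return "target found", t
    return "quit, target absent", t    # quitting signal reached threshold

display = [{"is_target": i == 3} for i in range(8)]
print(search(display))
```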
Affiliation(s)
- Jeremy M Wolfe
- Ophthalmology and Radiology, Brigham & Women's Hospital/Harvard Medical School, Cambridge, MA, USA
- Visual Attention Lab, 65 Landsdowne St, 4th Floor, Cambridge, MA, 02139, USA
15
Xu ZJ, Lleras A, Buetti S. Predicting how surface texture and shape combine in the human visual system to direct attention. Sci Rep 2021; 11:6170. [PMID: 33731840] [PMCID: PMC7971056] [DOI: 10.1038/s41598-021-85605-8]
Abstract
Objects differ from one another along a multitude of visual features. The more distinct an object is from other objects in its surroundings, the easier it is to find it. However, it is still unknown how this distinctiveness advantage emerges in human vision. Here, we studied how visual distinctiveness signals along two feature dimensions, shape and surface texture, combine to determine the overall distinctiveness of an object in the scene. Distinctiveness scores between a target object and distractors were measured separately for shape and texture using a search task. These scores were then used to predict search times when a target differed from distractors along both shape and texture. Model comparison showed that overall object distinctiveness was best predicted when shape and texture were combined using a Euclidean metric, confirming that the brain computes independent distinctiveness scores for shape and texture and combines them to direct attention.
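The winning combination rule can be stated compactly; the symbols below are assumed for illustration, with the additive city-block rule (as reported for color and shape in entry 3 above) shown for contrast:

```latex
% D_shape, D_texture : distinctiveness scores measured in the separate
%                      single-feature search tasks
\[
  D_{\mathrm{combined}} = \sqrt{D_{\mathrm{shape}}^{2} + D_{\mathrm{texture}}^{2}}
  \quad \text{(Euclidean)}
  \qquad \text{vs.} \qquad
  D_{\mathrm{combined}} = D_{\mathrm{shape}} + D_{\mathrm{texture}}
  \quad \text{(city block)} .
\]
```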
Affiliation(s)
- Zoe Jing Xu, Alejandro Lleras, Simona Buetti
- University of Illinois, 603 E. Daniel St., Champaign, IL, 61820, USA
16
Abstract
In visual search tasks, observers look for targets among distractors. In the lab, this often takes the form of multiple searches for a simple shape that may or may not be present among other items scattered at random on a computer screen (e.g., Find a red T among other letters that are either black or red.). In the real world, observers may search for multiple classes of target in complex scenes that occur only once (e.g., As I emerge from the subway, can I find lunch, my friend, and a street sign in the scene before me?). This article reviews work on how search is guided intelligently. I ask how serial and parallel processes collaborate in visual search, describe the distinction between search templates in working memory and target templates in long-term memory, and consider how searches are terminated.
Affiliation(s)
- Jeremy M. Wolfe
- Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts 02115, USA
- Department of Radiology, Harvard Medical School, Boston, Massachusetts 02115, USA
- Visual Attention Lab, Brigham & Women's Hospital, Cambridge, Massachusetts 02139, USA
17
Wolfe JM. Major issues in the study of visual search: Part 2 of "40 Years of Feature Integration: Special Issue in Memory of Anne Treisman". Atten Percept Psychophys 2020; 82:383-393. [PMID: 32291612] [PMCID: PMC7250731] [DOI: 10.3758/s13414-020-02022-1]
Affiliation(s)
- Jeremy M Wolfe
- Ophthalmology & Radiology, Harvard Medical School, Boston, MA, USA.
- Visual Attention Lab, Department of Surgery, Brigham & Women's Hospital, 65 Landsdowne St, 4th Floor, Cambridge, MA, 02139, USA.