1. Chapman AF, Störmer VS. Target-distractor similarity predicts visual search efficiency but only for highly similar features. Atten Percept Psychophys 2024;86:1872-1882. PMID: 39251566. DOI: 10.3758/s13414-024-02954-y.
Abstract
A major constraining factor for attentional selection is the similarity between targets and distractors. When similarity is low, target items can be identified quickly and efficiently, whereas high similarity can incur large costs on processing speed. Models of visual search contrast a fast, efficient parallel stage with a slow serial processing stage in which search times are strongly modulated by the number of distractors in the display. In particular, recent work has argued that the magnitude of search slopes should be inversely proportional to target-distractor similarity. Here, we assessed the relationship between target-distractor similarity and search slopes. In our visual search tasks, participants detected an oddball color target among distractors (Experiments 1 & 2) or discriminated the direction of a triangle in the oddball color (Experiment 3). We systematically varied the similarity between target and distractor colors (along a circular CIELAB color wheel) and the number of distractors in the search array, finding logarithmic search slopes whose magnitude was inversely proportional to target-distractor similarity. Surprisingly, we also found that searches were highly efficient (i.e., near-zero slopes) for targets and distractors that were extremely similar (≤20° in color space). These findings indicate that visual search is systematically influenced by target-distractor similarity across different processing stages. Importantly, we found that search can be highly efficient and entirely unaffected by the number of distractors despite high perceptual similarity, in contrast to the general assumption that high similarity must lead to slow and serial search behavior.
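For readers who want to see what such a fit looks like in practice, here is a minimal sketch (not the authors' analysis code) that estimates a logarithmic search function, RT = a + b·log(N + 1), by least squares; all RT values below are invented for illustration.

```python
import numpy as np

def fit_log_slope(set_sizes, mean_rts):
    """Fit RT = a + b * log(N + 1) by ordinary least squares.

    Returns (intercept a, logarithmic slope b)."""
    X = np.column_stack([np.ones_like(set_sizes, dtype=float),
                         np.log(np.asarray(set_sizes, dtype=float) + 1)])
    (a, b), *_ = np.linalg.lstsq(X, np.asarray(mean_rts, dtype=float), rcond=None)
    return a, b

# Invented example data: mean RTs (ms) at set sizes 4-32 for a dissimilar
# condition and a highly similar (<=20 deg in color space) condition.
set_sizes = np.array([4, 8, 16, 32])
rt_dissimilar = np.array([520, 545, 570, 595])    # clear logarithmic cost
rt_very_similar = np.array([640, 642, 641, 643])  # near-zero slope

for label, rts in [("dissimilar", rt_dissimilar), ("very similar", rt_very_similar)]:
    a, b = fit_log_slope(set_sizes, rts)
    print(f"{label}: intercept = {a:.0f} ms, log slope = {b:.1f} ms/log-unit")
```

A near-zero recovered slope b for the highly similar condition corresponds to the "highly efficient despite high similarity" result described above.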
Affiliation(s)
- Angus F Chapman: Department of Psychological and Brain Sciences, Boston University, Boston, MA, USA; Department of Psychology, University of California San Diego, La Jolla, CA, USA.
- Viola S Störmer: Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA.
2. Davis G. ATLAS: Mapping ATtention's Location And Size to probe five modes of serial and parallel search. Atten Percept Psychophys 2024;86:1938-1962. PMID: 38982008. PMCID: PMC11410986. DOI: 10.3758/s13414-024-02921-7.
Abstract
Conventional visual search tasks do not address attention directly, and their core manipulation of 'set size' (the number of displayed items) introduces stimulus confounds that hinder interpretation. However, alternative approaches have not been widely adopted, perhaps reflecting their complexity, assumptions, or indirect attention-sampling. Here, a new procedure, the ATtention Location And Size ('ATLAS') task, used probe displays to track attention's location, breadth, and guidance during search. Though most probe displays comprised six items, participants reported only the single item they judged themselves to have perceived most clearly, indexing the attention 'peak'. By sampling peaks across variable 'choice sets', the size and position of the attention window during search were profiled. These indices appeared to distinguish narrow from broad attention, signalled attention to pairs of items where it arose, and tracked evolving attention guidance over time. ATLAS is designed to discriminate five key search modes: serial-unguided, sequential-guided, unguided attention to 'clumps' with local guidance, and broad parallel attention with or without guidance. This initial investigation used only an example set of highly regular stimuli, but its broader potential should be investigated.
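The logic of inferring an attention window from peak reports can be illustrated with a toy simulation; this is an invented sketch of the general idea only, not the published ATLAS procedure or its parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: attention is a Gaussian window over horizontal position.
true_center, true_width = 2.0, 1.5  # invented ground truth

def run_trial(probe_positions):
    """Perceived clarity = attentional weight + noise; report the peak item."""
    weights = np.exp(-(probe_positions - true_center) ** 2 / (2 * true_width ** 2))
    clarity = weights + rng.normal(0, 0.2, size=probe_positions.shape)
    return probe_positions[np.argmax(clarity)]

# Sample reported peaks across many trials with random six-item probe sets.
peaks = np.array([run_trial(rng.uniform(-5, 5, size=6)) for _ in range(5000)])

# The distribution of reported peaks profiles the window's position and breadth.
print(f"estimated center ~ {peaks.mean():.2f}, breadth ~ {peaks.std():.2f}")
```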
Affiliation(s)
- Gregory Davis: Department of Psychology, University of Cambridge, Downing Street, Cambridge, CB2 3EB, UK.
3. Hughes AE, Nowakowska A, Clarke ADF. Bayesian multi-level modelling for predicting single and double feature visual search. Cortex 2024;171:178-193. PMID: 38007862. DOI: 10.1016/j.cortex.2023.10.014.
Abstract
Performance in visual search tasks is frequently summarised by "search slopes", the additional cost in reaction time for each additional distractor. While search tasks with shallow search slopes are termed efficient (pop-out, parallel, feature), there is no clear dichotomy between efficient and inefficient (serial, conjunction) search. Indeed, a range of search slopes is observed in empirical data. The Target Contrast Signal (TCS) Theory is a rare example of a quantitative model that attempts to predict search slopes for efficient visual search. One study using the TCS framework has shown that the search slope in a double-feature search (where the target differs in both colour and shape from the distractors) can be estimated from the slopes of the associated single-feature searches. This estimation is done using a contrast combination model, and a collinear contrast integration model was shown to outperform other options. In our work, we extend TCS to a Bayesian multi-level framework. We investigate modelling using normal and shifted-lognormal distributions, and show that the latter allows for a better fit to previously published data. We run a new, fully within-subjects experiment to attempt to replicate the key original findings, and show that overall, TCS does a good job of predicting the data. However, we do not replicate the finding that the collinear combination model outperforms the other contrast combination models, instead finding that it may be difficult to conclusively distinguish between them.
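As a concrete illustration of the shifted-lognormal family mentioned here, a minimal SciPy sketch (parameters invented, not the paper's Bayesian multi-level model) simulates shifted-lognormal RTs and recovers the shift by maximum likelihood.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate shifted-lognormal RTs: shift + lognormal(mu, sigma).
shift, mu, sigma = 0.25, -0.8, 0.4  # invented parameters (seconds)
rts = shift + rng.lognormal(mu, sigma, size=2000)

# SciPy's parameterization: s = sigma, loc = shift, scale = exp(mu).
s_hat, loc_hat, scale_hat = stats.lognorm.fit(rts)
print(f"recovered shift = {loc_hat:.3f}, "
      f"mu = {np.log(scale_hat):.3f}, sigma = {s_hat:.3f}")
```

The shift parameter absorbs the non-decision portion of the RT, which is one reason a shifted lognormal can fit RT data better than an unshifted normal or lognormal.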
Affiliation(s)
- Anna E Hughes: Department of Psychology, University of Essex, Colchester, CO4 3SQ, UK.
- Anna Nowakowska: School of Psychology, University of Aberdeen, Aberdeen, AB24 3FX, UK; School of Psychology and Vision Sciences, University of Leicester, Leicester, LE1 7RH, UK.
4. van Heusden E, Olivers CNL, Donk M. The effects of eccentricity on attentional capture. Atten Percept Psychophys 2024;86:422-438. PMID: 37258897. PMCID: PMC10806068. DOI: 10.3758/s13414-023-02735-z.
Abstract
Visual attention may be captured by an irrelevant yet salient distractor, thereby slowing search for a relevant target. This phenomenon has been widely studied using the additional singleton paradigm in which search items are typically all presented at one and the same eccentricity. Yet, differences in eccentricity may well bias the competition between target and distractor. Here we investigate how attentional capture is affected by the relative eccentricities of a target and a distractor. Participants searched for a shape-defined target in a grid of homogeneous nontargets of the same color. On 75% of trials, one of the nontarget items was replaced by a salient color-defined distractor. Crucially, target and distractor eccentricities were independently manipulated across three levels of eccentricity (i.e., near, middle, and far). Replicating previous work, we show that the presence of a distractor slows down search. Interestingly, capture as measured by manual reaction times was not affected by target and distractor eccentricity, whereas capture as measured by the eyes was: items close to fixation were more likely to be selected than items presented further away. Furthermore, the effects of target and distractor eccentricity were largely additive, suggesting that the competition between saliency- and relevance-driven selection was modulated by an independent eccentricity-based spatial component. Implications of the dissociation between manual and oculomotor responses are also discussed.
Affiliation(s)
- Elle van Heusden: Faculty of Behavioral and Movement Sciences, Cognitive Psychology, Vrije Universiteit Amsterdam, Van der Boechorststraat 7, 1081 HV, Amsterdam, The Netherlands.
- Christian N L Olivers: Faculty of Behavioral and Movement Sciences, Cognitive Psychology, Vrije Universiteit Amsterdam, Van der Boechorststraat 7, 1081 HV, Amsterdam, The Netherlands.
- Mieke Donk: Faculty of Behavioral and Movement Sciences, Cognitive Psychology, Vrije Universiteit Amsterdam, Van der Boechorststraat 7, 1081 HV, Amsterdam, The Netherlands.
5. Xu ZJ, Lleras A, Buetti S. Predicting how surface texture and shape combine in the human visual system to direct attention. Sci Rep 2021;11:6170. PMID: 33731840. PMCID: PMC7971056. DOI: 10.1038/s41598-021-85605-8.
Abstract
Objects differ from one another along a multitude of visual features. The more distinct an object is from other objects in its surroundings, the easier it is to find. However, it is still unknown how this distinctiveness advantage emerges in human vision. Here, we studied how visual distinctiveness signals along two feature dimensions (shape and surface texture) combine to determine the overall distinctiveness of an object in the scene. Distinctiveness scores between a target object and distractors were measured separately for shape and texture using a search task. These scores were then used to predict search times when a target differed from distractors along both shape and texture. Model comparison showed that overall object distinctiveness was best predicted when shape and texture were combined using a Euclidean metric, confirming that the brain computes independent distinctiveness scores for shape and texture and combines them to direct attention.
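As a worked example of the winning rule, here is a short sketch of Euclidean combination of two unidimensional distinctiveness scores; the score values are invented for illustration.

```python
import math

# Invented unidimensional distinctiveness scores for one target-distractor pair.
d_shape, d_texture = 0.8, 0.6

# Euclidean combination: overall distinctiveness from independent dimensions.
d_overall = math.hypot(d_shape, d_texture)
print(f"overall distinctiveness = {d_overall:.2f}")  # sqrt(0.8^2 + 0.6^2) = 1.00
```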
Affiliation(s)
- Zoe Jing Xu: University of Illinois, 603 E. Daniel St., Champaign, IL, 61820, USA.
- Alejandro Lleras: University of Illinois, 603 E. Daniel St., Champaign, IL, 61820, USA.
- Simona Buetti: University of Illinois, 603 E. Daniel St., Champaign, IL, 61820, USA.
6.
Abstract
Feature Integration Theory (FIT) laid the groundwork for much of the work in visual cognition since its publication. One of the most important legacies of this theory has been the emphasis on feature-specific processing. Nowadays, visual features are thought of as a sort of currency of visual attention (e.g., features can be attended, and processing of attended features is enhanced), and attended features are thought to guide attention towards likely targets in a scene. Here we propose an alternative theory, the Target Contrast Signal Theory, based on the idea that when we search for a specific target, it is not the target-specific features that guide our attention towards the target; rather, what determines behavior is the result of an active comparison between the target template in mind and every element present in the scene. This comparison occurs in parallel and is aimed at rejecting from consideration items that peripheral vision can confidently classify as non-targets. The speed at which each item is evaluated is determined by the overall contrast between that item and the target template. We present computational simulations to demonstrate the workings of the theory, as well as eye-movement data that support core predictions of the theory. The theory is discussed in the context of FIT and other important theories of visual search.
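The link between parallel, exhaustive rejection and logarithmic set-size costs can be illustrated with a toy simulation: if each item's rejection time is exponentially distributed with a rate proportional to its contrast against the template, the slowest rejection (which terminates an exhaustive evaluation) grows roughly logarithmically with the number of items. This sketch illustrates that statistical fact under invented rates; it is not the theory's published simulation code.

```python
import numpy as np

rng = np.random.default_rng(7)

def search_time(n_items, contrast, n_sims=20000):
    """Exhaustive parallel rejection: RT = time of the slowest rejection.

    Each item is rejected after an exponential waiting time whose rate
    scales with its contrast to the target template."""
    rejection_times = rng.exponential(1.0 / contrast, size=(n_sims, n_items))
    return rejection_times.max(axis=1).mean()

for n in (4, 8, 16, 32):
    print(f"N = {n:2d}: mean RT ~ {search_time(n, contrast=2.0):.3f}")
# The printed means grow approximately linearly in log(N),
# i.e., logarithmically in set size.
```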
7.
Abstract
The mechanisms guiding visual attention are of great interest within cognitive and perceptual psychology. Many researchers have proposed models of these mechanisms, which serve to both formalize their theories and to guide further empirical investigations. The assumption that a number of basic features are processed in parallel early in the attentional process is common among most models of visual attention and visual search. To date, much of the evidence for parallel processing has been limited to set-size manipulations. Unfortunately, set-size manipulations have been shown to be insufficient evidence for parallel processing. We applied Systems Factorial Technology, a general nonparametric framework, to test this assumption, specifically whether color and shape are processed in parallel or in serial, in three experiments representative of feature search, conjunctive search, and odd-one-out search, respectively. Our results provide strong evidence that color and shape information guides search through parallel processes. Furthermore, we found evidence for facilitation between color and shape when the target was known in advance but performance consistent with unlimited capacity, independent parallel processing in odd-one-out search. These results confirm core assumptions about color and shape feature processing instantiated in most models of visual search and provide more detailed clues about the manner in which color and shape information is combined to guide search.
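For reference, Systems Factorial Technology's key diagnostic is the survivor interaction contrast, SIC(t) = [S_LL(t) - S_LH(t)] - [S_HL(t) - S_HH(t)], computed across factorial combinations of high (H) and low (L) salience on each feature channel. Below is a minimal sketch with simulated data (all distributions invented); it uses a parallel exhaustive model, for which SIC(t) is predicted to be negative throughout.

```python
import numpy as np

rng = np.random.default_rng(3)

def survivor(rts, t_grid):
    """Empirical survivor function S(t) = P(RT > t)."""
    return (rts[None, :] > t_grid[:, None]).mean(axis=1)

# Parallel exhaustive (AND) process: RT = max of two channels, each faster
# when its feature is high (H) salience. Channel rates are invented.
def sim(rate_color, rate_shape, n=20000):
    return np.maximum(rng.exponential(1 / rate_color, n),
                      rng.exponential(1 / rate_shape, n))

t = np.linspace(0, 3, 300)
S = {cond: survivor(sim(rc, rs), t)
     for cond, (rc, rs) in {"HH": (4, 4), "HL": (4, 2),
                            "LH": (2, 4), "LL": (2, 2)}.items()}

# SIC(t) is entirely negative for parallel-exhaustive processing.
sic = (S["LL"] - S["LH"]) - (S["HL"] - S["HH"])
print(f"min SIC = {sic.min():.3f}, max SIC = {sic.max():.3f}")
```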
8. Buetti S, Xu J, Lleras A. Predicting how color and shape combine in the human visual system to direct attention. Sci Rep 2019;9:20258. PMID: 31889066. PMCID: PMC6937264. DOI: 10.1038/s41598-019-56238-9.
Abstract
Objects in a scene can be distinct from one another along a multitude of visual attributes, such as color and shape, and the more distinct an object is from its surroundings, the easier it is to find. However, exactly how this distinctiveness advantage arises in vision is not well understood. Here we studied whether and how visual distinctiveness along different visual attributes (color and shape, assessed in four experiments) combines to determine an object's overall distinctiveness in a scene. Unidimensional distinctiveness scores were used to predict performance in six separate experiments where a target object differed from distractor objects along both color and shape. Results showed that a simple mathematical law determines overall distinctiveness: it is the sum of the distinctiveness scores along each visual attribute. Thus, the brain must compute distinctiveness scores independently for each visual attribute before summing them into the overall score that directs human attention.
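As a worked equation, the additive law reported here can be written as follows; the D notation is generic shorthand for per-attribute distinctiveness scores, not notation taken from the paper.

```latex
% Additive law: overall distinctiveness is the sum of per-attribute scores.
D_{\text{overall}} = D_{\text{color}} + D_{\text{shape}}
```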
Affiliation(s)
- Jing Xu: University of Illinois, Champaign, United States.
9. Ng GJP, Buetti S, Dolcos S, Dolcos F, Lleras A. Distractor rejection in parallel search tasks takes time but does not benefit from context repetition. Visual Cognition 2019. DOI: 10.1080/13506285.2019.1676353.
Affiliation(s)
- Gavin Jun Peng Ng: Department of Psychology, University of Illinois at Urbana-Champaign, Champaign, IL, USA.
- Simona Buetti: Department of Psychology, University of Illinois at Urbana-Champaign, Champaign, IL, USA.
- Sanda Dolcos: Department of Psychology, University of Illinois at Urbana-Champaign, Champaign, IL, USA; Beckman Institute for Advanced Science and Technology, Urbana, IL, USA.
- Florin Dolcos: Department of Psychology, University of Illinois at Urbana-Champaign, Champaign, IL, USA; Beckman Institute for Advanced Science and Technology, Urbana, IL, USA.
- Alejandro Lleras: Department of Psychology, University of Illinois at Urbana-Champaign, Champaign, IL, USA; Beckman Institute for Advanced Science and Technology, Urbana, IL, USA.
10. Predicting Search Performance in Heterogeneous Scenes: Quantifying the Impact of Homogeneity Effects in Efficient Search. Collabra: Psychology 2019. DOI: 10.1525/collabra.151.
Abstract
Recently, Wang, Buetti and Lleras (2017) developed an equation to predict search performance in heterogeneous visual search scenes (i.e., multiple types of non-target objects simultaneously present) based on parameters observed when participants perform search in homogeneous scenes (i.e., when all non-target objects are identical to one another). The equation was based on a computational model in which every item in the display is processed with unlimited capacity and independently of the others, with the goal of determining whether the item is likely to be a target or not. The model was tested in two experiments using real-world objects. Here, we extend those findings by testing the predictive power of the equation with simpler objects. Further, we compare the model's performance under two stimulus arrangements: spatially-intermixed (items randomly placed around the scene) and spatially-segregated displays (identical items presented near each other). This comparison allowed us to isolate and quantify the facilitatory effect of processing displays that contain identical items (homogeneity facilitation), a factor that improves performance in visual search above and beyond target-distractor dissimilarity. The results suggest that homogeneity facilitation effects in search arise from local item-to-item interactions (rather than from rejecting items as "groups") and that the strength of those interactions might be determined by stimulus complexity, with simpler stimuli producing stronger interactions and thus stronger homogeneity facilitation effects.
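The equation itself is not reproduced in this abstract. The sketch below conveys the general idea under the simplifying assumption that each lure type contributes its own logarithmic cost (an additive-logarithmic form); the baseline, slope, and count values are all invented.

```python
import numpy as np

def predict_heterogeneous_rt(a, log_slopes, counts):
    """Predict heterogeneous-scene RT from homogeneous-scene parameters,
    assuming each lure type contributes its own logarithmic cost.

    a          : baseline RT (ms) from target-only displays
    log_slopes : per-lure-type log slopes b_i from homogeneous search
    counts     : number of lures of each type in the heterogeneous scene
    """
    costs = [b * np.log(n + 1) for b, n in zip(log_slopes, counts)]
    return a + sum(costs)

# Invented example: one lure type similar to the target (steep slope) and
# one dissimilar type (shallow slope), eight lures of each in the display.
print(f"predicted RT ~ {predict_heterogeneous_rt(450, [30.0, 8.0], [8, 8]):.0f} ms")
```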
11. Fixed-target efficient search has logarithmic efficiency with and without eye movements. Atten Percept Psychophys 2018;80:1752-1762. PMID: 29981011. DOI: 10.3758/s13414-018-1561-4.
Abstract
Stage 1 processing in visual search (e.g., efficient search) has long been thought to be unaffected by factors such as set size or lure-target similarity (or at least to be only minimally affected). Recent research from Buetti, Cronin, Madison, Wang, and Lleras (Journal of Experimental Psychology: General, 145, 672-707, 2016) showed that in efficient visual search with a fixed target, reaction times increase logarithmically as a function of set size and, further, that the slope of these logarithmic functions is modulated by target-distractor similarity. This has led to the proposal that the cognitive architecture of Stage 1 processing is parallel, of unlimited capacity, and exhaustive in nature. Such an architecture produces reaction time functions that increase logarithmically with set size (as opposed to being unaffected by it). However, in the previous studies, eye movements were not monitored. It is thus possible that the logarithmicity of the reaction time functions emerged simply as an artifact of eye movements rather than as a reflection of the underlying cognitive architecture. Here we ruled out the possibility that eye movements produced the observed logarithmic functions by asking participants to keep their eyes at fixation while completing fixed-target efficient visual search tasks. The logarithmic RT functions still emerged even when participants were not allowed to make eye movements, providing further support for our proposal. Additionally, we found that search efficiency is slightly improved when eye movements are restricted and lure-target similarity is relatively high.
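A quick way to see what "logarithmic efficiency" means in model-comparison terms is to fit both a linear and a logarithmic RT function and compare residual error; the sketch below uses invented, log-shaped mean RTs purely for illustration.

```python
import numpy as np

set_sizes = np.array([1, 5, 15, 31], dtype=float)
mean_rts = np.array([480, 512, 533, 546], dtype=float)  # invented, log-like

def sse(design):
    """Sum of squared errors for an OLS fit with the given design matrix."""
    coefs, *_ = np.linalg.lstsq(design, mean_rts, rcond=None)
    return float(((design @ coefs - mean_rts) ** 2).sum())

ones = np.ones_like(set_sizes)
linear = np.column_stack([ones, set_sizes])
logarithmic = np.column_stack([ones, np.log(set_sizes + 1)])

print(f"SSE linear      = {sse(linear):.1f}")
print(f"SSE logarithmic = {sse(logarithmic):.1f}  # lower for log-shaped data")
```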