1. Khvostov VA, Iakovlev AU, Wolfe JM, Utochkin IS. What is the basis of ensemble subset selection? Atten Percept Psychophys 2024; 86:776-798. PMID: 38351233. DOI: 10.3758/s13414-024-02850-5.
Abstract
The visual system can rapidly calculate the ensemble statistics of a set of objects; for example, people can easily estimate the average size of the apples on a tree. To accomplish this, it is not always useful to summarize all the visual information. If there are various types of objects, the visual system should select a relevant subset: only apples, not leaves and branches. Here, we ask what kind of visual information makes a "good" ensemble that can be selectively attended to provide an accurate summary estimate. We tested three candidate representations: basic features, preattentive object files, and full-fledged bound objects. In four experiments, we presented a target set and several distractor sets of differently colored objects. We found that conditions in which the target ensemble had at least one unique color (a basic feature) yielded ensemble averaging performance comparable to baseline displays without distractors. When the target subset was defined as a conjunction of two colors, or of color and shape, partly shared with distractors (so that the items could be differentiated only as preattentive object files), subset averaging was also possible but less accurate than in the baseline and feature conditions. Finally, performance was very poor when the target subset was defined by an exact feature relationship, such as the spatial conjunction of two colors (a spatially bound object). Overall, these results suggest that distinguishable features and, to a lesser degree, preattentive object files can serve as the representational basis of ensemble selection, while bound objects cannot.
Affiliation(s)
- Vladislav A Khvostov: Faculty of Psychology, School of Health Sciences, University of Iceland, Reykjavik, Iceland; HSE University, Moscow, Russia.
- Aleksei U Iakovlev: Faculty of Psychology, School of Health Sciences, University of Iceland, Reykjavik, Iceland.
- Jeremy M Wolfe: Visual Attention Laboratory, Brigham and Women's Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA.
- Igor S Utochkin: Institute for Mind and Biology, University of Chicago, Chicago, IL, USA.
2. The influence of category representativeness on the low prevalence effect in visual search. Psychon Bull Rev 2022; 30:634-642. PMID: 36138284. DOI: 10.3758/s13423-022-02183-0.
Abstract
Visual search is strongly affected by the appearance rate of given target types: low-prevalence items are harder to detect, which has consequences for real-world search tasks in which target frequency cannot be balanced. However, targets that are highly representative of a categorically defined task set are also easier to find. We hypothesized that highly representative targets are less vulnerable to low-prevalence effects because an observer's attentional set prioritizes guidance toward them even when they are rare. We tested this hypothesis by first determining the categorical structure of "prohibited carry-ons" via an exemplar-naming task, and then using that structure to assess how category representativeness interacted with prevalence. Specifically, from the exemplar-naming task we selected a commonly named (knives) and a rarely named (gas cans) target for a search task in which one of the targets was shown infrequently. As predicted, highly representative targets were found more easily than their less representative counterparts, and they were also less affected by prevalence manipulations. Experiment 1b replicated the results with targets matched for emotional valence (water bottles and fireworks). These findings demonstrate the explanatory power of theories of attentional guidance that combine the dynamic influence of recent experience with the knowledge that comes from life experience to better predict behavioral outcomes in high-stakes search environments.
3. Body dissatisfaction, rumination and attentional disengagement toward computer-generated bodies. Curr Psychol 2021. DOI: 10.1007/s12144-021-02180-x.
4. Xu ZJ, Lleras A, Buetti S. Predicting how surface texture and shape combine in the human visual system to direct attention. Sci Rep 2021; 11:6170. PMID: 33731840. PMCID: PMC7971056. DOI: 10.1038/s41598-021-85605-8.
Abstract
Objects differ from one another along a multitude of visual features. The more distinct an object is from other objects in its surroundings, the easier it is to find. However, it is still unknown how this distinctiveness advantage emerges in human vision. Here, we studied how visual distinctiveness signals along two feature dimensions (shape and surface texture) combine to determine the overall distinctiveness of an object in the scene. Distinctiveness scores between a target object and distractors were measured separately for shape and for texture using a search task. These scores were then used to predict search times when a target differed from distractors along both shape and texture. Model comparison showed that overall object distinctiveness was best predicted when shape and texture were combined using a Euclidean metric, confirming that the brain computes independent distinctiveness scores for shape and texture and combines them to direct attention.
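The Euclidean combination rule this abstract reports can be written out explicitly. In the sketch below, the symbols are illustrative labels for the two unidimensional distinctiveness scores, not the paper's own notation:

```latex
% Euclidean combination of the two single-dimension distinctiveness scores
D_{\mathrm{overall}} = \sqrt{D_{\mathrm{shape}}^{2} + D_{\mathrm{texture}}^{2}}
```

Under this rule, larger values of the combined score correspond to a more distinct target and therefore faster search.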
Affiliation(s)
- Zoe Jing Xu: University of Illinois, 603 E. Daniel St., Champaign, IL, 61820, USA.
- Alejandro Lleras: University of Illinois, 603 E. Daniel St., Champaign, IL, 61820, USA.
- Simona Buetti: University of Illinois, 603 E. Daniel St., Champaign, IL, 61820, USA.
5. Nazareth ACDP, Escobar VS, DeCastro TG. Body Size Judgments at 17 ms: Evidence From Perceptual and Attitudinal Body Image Indexes. Front Psychol 2020; 10:3018. PMID: 32010033. PMCID: PMC6978682. DOI: 10.3389/fpsyg.2019.03018.
Abstract
Evidence on the temporal control of whole-body image stimulus presentation is generally associated with attentional bias toward ideal thin bodies, and few studies have examined whole-body stimulus recognition at fast visual exposure intervals. The aim of this study was to evaluate accuracy and reaction times for judgments of different-sized body silhouettes presented for 17 ms in a non-clinical sample. Thirty-one participants were divided into attitudinal and perceptual body image groups based on Figure Rating Scale output and performed two experiments. The first experiment assessed perception and the clarity of visual experience for human and non-human body stimuli at 17 ms. Overall accuracy was 69.17%, with no differences between the perceptual and attitudinal body image groups, indicating that the way participants perceive their own bodies does not influence recognition of general visual silhouette stimuli. Clarity of visual experience was positively correlated with stimulus recognition accuracy. In the second experiment, participants responded on a seven-point Likert scale whether the presented body silhouettes were bigger than, equal to, or thinner than their own bodies. Trials were divided into two blocks based on spatial rotation, half at 0° and half at 180°. Overall accuracy for body silhouette recognition was 41.1%, and accuracy was greater for regularly positioned (0°) stimuli. The attitudinal dimension of body image did not predict differential performance, whereas the perceptual body image groups showed contrasting recognition performance: participants with a distorted body image were more accurate than those with an undistorted body image, with greater accuracy for thinner silhouette figures. Women had significantly higher overall accuracy than men across both experimental blocks. When comparing cumulative accuracy curves across experimental trials, an exposure effect was registered only in the first experiment. Together, the results show that body silhouette stimuli can be judged at fast exposure intervals, with differential accuracy rates only for the perceptual body image groups. This evidence suggests that conscious body image is associated with implicit detection of visual human body stimuli. Future studies should further test how traditional explicit body image measures perform within experimental approaches.
Affiliation(s)
- Ana Clara de Paula Nazareth: Laboratory of Experimental Phenomenology and Cognition, Institute of Psychology, Federal University of Rio Grande do Sul, Porto Alegre, Brazil.
- Vinícius Spencer Escobar: Laboratory of Experimental Phenomenology and Cognition, Institute of Psychology, Federal University of Rio Grande do Sul, Porto Alegre, Brazil.
- Thiago Gomes DeCastro: Laboratory of Experimental Phenomenology and Cognition, Institute of Psychology, Federal University of Rio Grande do Sul, Porto Alegre, Brazil.
6. Buetti S, Xu J, Lleras A. Predicting how color and shape combine in the human visual system to direct attention. Sci Rep 2019; 9:20258. PMID: 31889066. PMCID: PMC6937264. DOI: 10.1038/s41598-019-56238-9.
Abstract
Objects in a scene can be distinct from one another along a multitude of visual attributes, such as color and shape, and the more distinct an object is from its surroundings, the easier it is to find. However, exactly how this distinctiveness advantage arises in vision is not well understood. Here we studied whether and how visual distinctiveness signals along different visual attributes (color and shape, assessed in four experiments) combine to determine an object's overall distinctiveness in a scene. Unidimensional distinctiveness scores were used to predict performance in six separate experiments in which a target object differed from distractor objects along both color and shape. Results showed that a mathematical law determines overall distinctiveness as the simple sum of the distinctiveness scores along each visual attribute. Thus, the brain must compute distinctiveness scores independently for each visual attribute before summing them into the overall score that directs human attention.
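The additive law this abstract describes can be made explicit. In the sketch below, the symbols are illustrative labels for the unidimensional distinctiveness scores, not the paper's own notation:

```latex
% Additive combination of the single-dimension distinctiveness scores
D_{\mathrm{overall}} = D_{\mathrm{color}} + D_{\mathrm{shape}}
```

Note the contrast with the Euclidean combination reported for shape and surface texture in entry 4: the combination rule itself appears to depend on which feature dimensions are paired.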
Affiliation(s)
- Jing Xu: University of Illinois, Champaign, United States.
7. Ng GJP, Buetti S, Dolcos S, Dolcos F, Lleras A. Distractor rejection in parallel search tasks takes time but does not benefit from context repetition. Visual Cognition 2019. DOI: 10.1080/13506285.2019.1676353.
Affiliation(s)
- Gavin Jun Peng Ng: Department of Psychology, University of Illinois at Urbana-Champaign, Champaign, IL, USA.
- Simona Buetti: Department of Psychology, University of Illinois at Urbana-Champaign, Champaign, IL, USA.
- Sanda Dolcos: Department of Psychology, University of Illinois at Urbana-Champaign, Champaign, IL, USA; Beckman Institute for Advanced Science and Technology, Urbana, IL, USA.
- Florin Dolcos: Department of Psychology, University of Illinois at Urbana-Champaign, Champaign, IL, USA; Beckman Institute for Advanced Science and Technology, Urbana, IL, USA.
- Alejandro Lleras: Department of Psychology, University of Illinois at Urbana-Champaign, Champaign, IL, USA; Beckman Institute for Advanced Science and Technology, Urbana, IL, USA.
8. Predicting Search Performance in Heterogeneous Scenes: Quantifying the Impact of Homogeneity Effects in Efficient Search. Collabra: Psychology 2019. DOI: 10.1525/collabra.151.
Abstract
Recently, Wang, Buetti, and Lleras (2017) developed an equation to predict search performance in heterogeneous visual search scenes (i.e., scenes with multiple types of non-target objects simultaneously present) from parameters observed when participants search homogeneous scenes (i.e., scenes in which all non-target objects are identical to one another). The equation was based on a computational model in which every item in the display is processed with unlimited capacity and independently of the others, with the goal of determining whether the item is likely to be a target. The model was tested in two experiments using real-world objects. Here, we extend those findings by testing the predictive power of the equation with simpler objects. Further, we compare the model's performance under two stimulus arrangements: spatially intermixed displays (items randomly placed around the scene) and spatially segregated displays (identical items presented near each other). This comparison allowed us to isolate and quantify the facilitatory effect of processing displays that contain identical items (homogeneity facilitation), a factor that improves visual search performance above and beyond target-distractor dissimilarity. The results suggest that homogeneity facilitation effects in search arise from local item-to-item interactions (rather than from rejecting items as "groups") and that the strength of those interactions may be determined by stimulus complexity, with simpler stimuli producing stronger interactions and thus stronger homogeneity facilitation effects.
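As a hedged sketch of the kind of equation this abstract refers to (reconstructed from the logarithmic search functions used in this line of work; the symbols are illustrative, not necessarily the original authors' notation): in homogeneous displays, reaction time grows logarithmically with set size, and the heterogeneous prediction sums the contributions of each distractor type i:

```latex
% Homogeneous displays: logarithmic set-size function,
% with intercept a and log slope D
RT = a + D \,\ln(N + 1)
% Heterogeneous prediction: sum the contribution of each distractor type i,
% where D_i is the log slope estimated from that type's homogeneous condition
% and N_i is the number of type-i items in the display
RT = a + \sum_{i} D_i \,\ln(N_i + 1)
```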
9. Parallel, exhaustive processing underlies logarithmic search functions: Visual search with cortical magnification. Psychon Bull Rev 2018; 25:1343-1350. DOI: 10.3758/s13423-018-1466-1.
10. The role of crowding in parallel search: Peripheral pooling is not responsible for logarithmic efficiency in parallel search. Atten Percept Psychophys 2017; 80:352-373. DOI: 10.3758/s13414-017-1441-3.