1
Chapman AF, Störmer VS. Representational structures as a unifying framework for attention. Trends Cogn Sci 2024;28:416-427. PMID: 38280837; PMCID: PMC11290436; DOI: 10.1016/j.tics.2024.01.002.
Abstract
Our visual system consciously processes only a subset of the incoming information. Selective attention allows us to prioritize relevant inputs, and can be allocated to features, locations, and objects. Recent advances in feature-based attention suggest that several selection principles are shared across these domains and that many differences between the effects of attention on perceptual processing can be explained by differences in the underlying representational structures. Moving forward, it can thus be useful to assess how attention changes the structure of the representational spaces over which it operates, which include the spatial organization, feature maps, and object-based coding in visual cortex. This will ultimately add to our understanding of how attention changes the flow of visual information processing more broadly.
Affiliation(s)
- Angus F Chapman
- Department of Psychological and Brain Sciences, Boston University, Boston, MA, USA.
- Viola S Störmer
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA.
2
Cybulski P, Krassanakis V. Motion velocity as a preattentive feature in cartographic symbolization. J Eye Mov Res 2023;16. PMID: 38379834; PMCID: PMC10875601; DOI: 10.16910/jemr.16.4.1.
Abstract
The presented study examines the preattentive processing of dynamic point symbols used in cartographic symbology. More specifically, we explore different motion types of geometric symbols on a map, together with various motion velocity distribution scales. The main hypothesis is that, in specific cases, the motion velocity of dynamic point symbols is a feature that can be perceived preattentively on a map. In a controlled laboratory eye-tracking experiment with 103 participants, we used administrative border maps with animated symbols. Participants' task was to find and precisely identify the fastest-changing symbol. It turned out that not every type of motion could be perceived preattentively, even though the motion distribution scale did not change; the same applied to the symbols' shape. Eye movement analysis revealed that successful detection was closely related to fixation on the target after initial preattentive vision. This confirms the significant role of motion velocity distribution and symbol shape in the cartographic design of animated maps.
3
Li Y, Ye B, Bao Y. The same phase creates a unique visual rhythm unifying moving elements in time. Psych J 2023;12:500-506. PMID: 36916772; DOI: 10.1002/pchj.636.
Abstract
Attention can be selectively tuned to particular features at different spatial locations or objects. The deployment of attention can be guided by properties such as color, orientation, and so forth, known as guiding features. What might such guiding features be for visual stimuli under dynamic rhythmic conditions? We asked specifically which parameters attract attention when perceiving a visual rhythm. We used a visual search paradigm in which a dynamic search display consisted of vertically "bouncing balls" with regular rhythms. The search target was defined by a unique visual rhythm (i.e., with either a shorter or longer period) among rhythmic distractors sharing an identical period. We systematically modulated the amplitudes and phases of the distractor balls. The results showed that phase, not amplitude, was the crucial factor: if the phase is violated, the target suddenly "pops out" as an "oddball," showing an efficient parallel search. The findings indicate the essential role of phase, in conjunction with amplitude and period, in visual rhythm perception. Furthermore, moving objects with a higher frequency component were also found to be more salient.
Affiliation(s)
- Yao Li
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China
- Peking-Tsinghua Center for Life Sciences, Peking University, Beijing, China
- Biyi Ye
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China
- Yan Bao
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China
- Institute of Medical Psychology, Ludwig Maximilian University Munich, Munich, Germany
- Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
4
Yang Y, Mo L, Lio G, Huang Y, Perret T, Sirigu A, Duhamel JR. Assessing the allocation of attention during visual search using digit-tracking, a calibration-free alternative to eye tracking. Sci Rep 2023;13:2376. PMID: 36759694; PMCID: PMC9911646; DOI: 10.1038/s41598-023-29133-7.
Abstract
Digit-tracking, a simple, calibration-free technique, has proven to be a good alternative to eye tracking in vision science. Participants view stimuli superimposed by Gaussian blur on a touchscreen interface and slide a finger across the display to locally sharpen an area the size of the foveal region just at the finger's position. Finger movements are recorded as an indicator of eye movements and attentional focus. Because of its simplicity and portability, this system has many potential applications in basic and applied research. Here we used digit-tracking to investigate visual search and replicated several known effects observed using different types of search arrays. Exploration patterns measured with digit-tracking during visual search of natural scenes were comparable to those previously reported for eye-tracking and constrained by similar saliency. Therefore, our results provide further evidence for the validity and relevance of digit-tracking for basic and applied research on vision and attention.
Affiliation(s)
- Yidong Yang
- Key Laboratory of Brain, Cognition and Education, Ministry of Education, South China Normal University, Guangzhou, 510631, China
- Institute of Cognitive Sciences Marc Jeannerod, CNRS UMR 5229, 69675 Bron, France
- Lei Mo
- Key Laboratory of Brain, Cognition and Education, Ministry of Education, South China Normal University, Guangzhou, 510631, China
- Guillaume Lio
- IMind Center of Excellence for Autism, Le Vinatier Hospital, Bron, France
- Yulong Huang
- Key Laboratory of Brain, Cognition and Education, Ministry of Education, South China Normal University, Guangzhou, 510631, China
- Institute of Cognitive Sciences Marc Jeannerod, CNRS UMR 5229, 69675 Bron, France
- Thomas Perret
- Institute of Cognitive Sciences Marc Jeannerod, CNRS UMR 5229, 69675 Bron, France
- Angela Sirigu
- Institute of Cognitive Sciences Marc Jeannerod, CNRS UMR 5229, 69675 Bron, France
- IMind Center of Excellence for Autism, Le Vinatier Hospital, Bron, France
- Jean-René Duhamel
- Institute of Cognitive Sciences Marc Jeannerod, CNRS UMR 5229, 69675 Bron, France
5
Earl B. Humans, fish, spiders and bees inherited working memory and attention from their last common ancestor. Front Psychol 2023;13:937712. PMID: 36814887; PMCID: PMC9939904; DOI: 10.3389/fpsyg.2022.937712.
Abstract
All brain processes that generate behaviour, apart from reflexes, operate with information that is in an "activated" state. This activated information, which is known as working memory (WM), is generated by the effect of attentional processes on incoming information or information previously stored in short-term or long-term memory (STM or LTM). Information in WM tends to remain the focus of attention; and WM, attention and STM together enable information to be available to mental processes and the behaviours that follow on from them. WM and attention underpin all flexible mental processes, such as solving problems, making choices, preparing for opportunities or threats that could be nearby, or simply finding the way home. Neither WM nor attention are necessarily conscious, and both may have evolved long before consciousness. WM and attention, with similar properties, are possessed by humans, archerfish, and other vertebrates; jumping spiders, honey bees, and other arthropods; and members of other clades, whose last common ancestor (LCA) is believed to have lived more than 600 million years ago. It has been reported that very similar genes control the development of vertebrate and arthropod brains, and were likely inherited from their LCA. Genes that control brain development are conserved because brains generate adaptive behaviour. However, the neural processes that generate behaviour operate with the activated information in WM, so WM and attention must have existed prior to the evolution of brains. It is proposed that WM and attention are widespread amongst animal species because they are phylogenetically conserved mechanisms that are essential to all mental processing, and were inherited from the LCA of vertebrates, arthropods, and some other animal clades.
6
Is a new feature learned behind a newly efficient color-orientation conjunction search? Psychon Bull Rev 2023;30:250-260. PMID: 35953667; DOI: 10.3758/s13423-022-02156-3.
Abstract
It is well known that feature search is efficient, whereas conjunction search is usually inefficient. However, prior studies have shown that some conjunction searches can become very efficient through perceptual learning, behaving like traditional feature searches. An unanswered question is whether a new feature is learned when an inefficient conjunction search becomes efficient after extensive training. A popular view is that the trained conjunction is unitized into a new feature and can thus pop out from neighboring distractors. Here, using stimulus specificity and transfer of perceptual learning as an approach, we investigated whether a new feature is learned when an initially inefficient conjunction search becomes highly efficient after extensive training. In two experiments, we consistently found that long-term perceptual learning over days could induce an inefficient-to-efficient change in a color-orientation conjunction search. Moreover, the learning effect for the conjunction target could partly transfer to a new target that shared the same color or the same orientation as the trained target. Remarkably, the total learning effect was approximately equal to the sum of the transfer effects of the individual features. This additive learning pattern lasted for at least several months, although the learning of the separate features showed different patterns of persistence. These results do not support the view that the trained conjunction is unitized into a new, inseparable feature after learning. Instead, our findings point to a feature-based attentional enhancement mechanism underlying the long-term perceptual learning of color-orientation conjunction search and its persistence.
7
Hout MC, Papesh MH, Masadeh S, Sandin H, Walenchok SC, Post P, Madrid J, White B, Pinto JDG, Welsh J, Goode D, Skulsky R, Rodriguez MC. The Oddity Detection in Diverse Scenes (ODDS) database: Validated real-world scenes for studying anomaly detection. Behav Res Methods 2023;55:583-599. PMID: 35353316; PMCID: PMC8966608; DOI: 10.3758/s13428-022-01816-5.
Abstract
Many applied screening tasks (e.g., medical image or baggage screening) involve challenging searches for which standard laboratory search is rarely equivalent. For example, whereas laboratory search frequently requires observers to look for precisely defined targets among isolated, non-overlapping images randomly arrayed on clean backgrounds, medical images present unspecified targets in noisy, yet spatially regular scenes. Those unspecified targets are typically oddities, elements that do not belong. To develop a closer laboratory analogue to this, we created a database of scenes containing subtle, ill-specified "oddity" targets. These scenes have similar perceptual densities and spatial regularities to those found in expert search tasks, and each includes 16 variants of the unedited scene wherein an oddity (a subtle deformation of the scene) is hidden. In Experiment 1, eight volunteers searched thousands of scene variants for an oddity. Regardless of their search accuracy, they were then shown the highlighted anomaly and rated its subtlety. Subtlety ratings reliably predicted search performance (accuracy and response times) and did so better than image statistics. In Experiment 2, we conducted a conceptual replication in which a larger group of naïve searchers scanned subsets of the scene variants. Prior subtlety ratings reliably predicted search outcomes. Whereas medical image targets are difficult for naïve searchers to detect, our database contains thousands of interior and exterior scenes that vary in difficulty, but are nevertheless searchable by novices. In this way, the stimuli will be useful for studying visual search as it typically occurs in expert domains: Ill-specified search for anomalies in noisy displays.
Affiliation(s)
- Michael C Hout
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- National Science Foundation, Alexandria, VA, USA
- Megan H Papesh
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Saleem Masadeh
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Hailey Sandin
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Phillip Post
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Jessica Madrid
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Bryan White
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Julian Welsh
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Dre Goode
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Rebecca Skulsky
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Mariana Cazares Rodriguez
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
8
Tsurumi S, Kanazawa S, Yamaguchi MK, Kawahara JI. Infants' anticipatory eye movements: feature-based attention guides infants' visual attention. Exp Brain Res 2022;240:2277-2284. PMID: 35906428; DOI: 10.1007/s00221-022-06428-1.
Abstract
When looking for an object, we identify it by selectively focusing our attention on a specific feature, a process known as feature-based attention. This basic attentional system has been reported in young children; however, little is known about whether infants can use feature-based attention. We introduced a newly developed anticipatory-looking task in which 60 preverbal infants aged 7-8 months learned to direct their attention endogenously to a specific feature (color or orientation). We found that these infants could direct their attention endogenously to the specific target feature among irrelevant features, demonstrating feature-based attentional selection. Experiment 2 bolstered this finding by showing that infants directed their attention according to the familiarized feature even when it belonged to a never-experienced object. These results, showing that infants can form anticipations based on color and orientation, indicate that they can guide their attention through feature-based selection.
Affiliation(s)
- Shuma Tsurumi
- Department of Psychology, Chuo University, 742-1 Higashi-Nakano, Hachioji, Tokyo, 192-0393, Japan
- Japan Society for the Promotion of Science, 5-3-1 Kojimachi, Chiyoda-ku, Tokyo, 102-0083, Japan
- Department of Psychology, Hokkaido University, N10 W7, Kita, Sapporo, Hokkaido, 060-0810, Japan
- So Kanazawa
- Department of Psychology, Japan Women's University, 2-8-1 Mejirodai, Bunkyo-ku, Tokyo, 112-8681, Japan
- Masami K Yamaguchi
- Department of Psychology, Chuo University, 742-1 Higashi-Nakano, Hachioji, Tokyo, 192-0393, Japan
- Jun-Ichiro Kawahara
- Department of Psychology, Hokkaido University, N10 W7, Kita, Sapporo, Hokkaido, 060-0810, Japan
9
Chen J, Paul JM, Reeve R. Manipulation of Attention Affects Subitizing Performance: A Systematic Review and Meta-analysis. Neurosci Biobehav Rev 2022;139:104753. PMID: 35772633; DOI: 10.1016/j.neubiorev.2022.104753.
Abstract
Subitizing is the fast and accurate enumeration of small sets. Whether attention is necessary for subitizing remains controversial, considering that (1) subitizing is claimed to be "pre-attentive", and (2) existing experimental methods and results are inconsistent. To determine whether manipulations of attention demonstrably affect subitizing, the current study comprises a systematic review and meta-analysis. Results from fourteen studies (22 experiments, 35 comparisons) suggest that changes to attentional demands interfere with the enumeration of small sets, leading to slower response times, lower accuracy, and poorer Weber acuity (p < .010, p < .001, and p < .001, respectively), notwithstanding a potential publication bias. A unifying framework is proposed to explain the role of attention in visual enumeration, with progressively greater attentional involvement from estimation to subitizing to counting. Our findings suggest attention is integral to subitizing and highlight the need to incorporate attentional mechanisms into neurocognitive models of numerosity processing. We also discuss the possible role of attention in numerical processing difficulties (e.g., dyscalculia).
Affiliation(s)
- Jian Chen
- Institute for Social Neuroscience, Melbourne, VIC, Australia; School of Psychological Sciences, The University of Melbourne, Melbourne, VIC, Australia.
- Jacob M Paul
- School of Psychological Sciences, The University of Melbourne, Melbourne, VIC, Australia
- Robert Reeve
- School of Psychological Sciences, The University of Melbourne, Melbourne, VIC, Australia
10
Fonteyn-Vinke A, Huurneman B, Boonstra FN. Viewing Strategies in Children With Visual Impairment and Children With Normal Vision: A Systematic Scoping Review. Front Psychol 2022;13:898719. PMID: 35783772; PMCID: PMC9248372; DOI: 10.3389/fpsyg.2022.898719.
Abstract
Viewing strategies are strategies used to support visual information processing. These strategies may differ between children with cerebral visual impairment (CVI), children with ocular visual impairment, and children with normal vision since visual impairment might have an impact on viewing behavior. In current visual rehabilitation practice a variety of strategies is used without consideration of the differences in etiology of the visual impairment or in the spontaneous viewing strategies used. This systematic scoping review focuses on viewing strategies used during near school-based tasks like reading and on possible interventions aimed at viewing strategies. The goal is threefold: (1) creating a clear concept of viewing strategies, (2) mapping differences in viewing strategies between children with ocular visual impairment, children with CVI and children with normal vision, and (3) identifying interventions that can improve visual processing by targeting viewing strategies. Four databases were used to conduct the literature search: PubMed, Embase, PsycINFO and Cochrane. Seven hundred and ninety-nine articles were screened by two independent reviewers using PRISMA reporting guidelines of which 30 were included for qualitative analysis. Only five studies explicitly mentioned strategies used during visual processing, namely gaze strategies, reading strategies and search strategies. We define a viewing strategy as a conscious and systematic way of viewing during task performance. The results of this review are integrated with different attention network systems, which provide direction on how to design future interventions targeting the use of viewing strategies to improve different aspects of visual processing.
Affiliation(s)
- Anke Fonteyn-Vinke
- Royal Dutch Visio, Nijmegen, Netherlands
- Behavioral Science Institute, Radboud University, Nijmegen, Netherlands
- Bianca Huurneman
- Royal Dutch Visio, Nijmegen, Netherlands
- Department of Cognitive Neuroscience, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Centre, Nijmegen, Netherlands
- Correspondence: Bianca Huurneman
- Frouke N. Boonstra
- Royal Dutch Visio, Nijmegen, Netherlands
- Behavioral Science Institute, Radboud University, Nijmegen, Netherlands
- Department of Cognitive Neuroscience, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Centre, Nijmegen, Netherlands
11
Dreneva A, Shvarts A, Chumachenko D, Krichevets A. Extrafoveal Processing in Categorical Search for Geometric Shapes: General Tendencies and Individual Variations. Cogn Sci 2021;45:e13025. PMID: 34379345; PMCID: PMC8459262; DOI: 10.1111/cogs.13025.
Abstract
The paper addresses the capabilities and limitations of extrafoveal processing during categorical visual search. Previous research has established that a target can be identified from the very first saccade, or even without any saccade, suggesting that extrafoveal perception is necessarily involved. However, the complexity limits of the information processed extrafoveally are still not clear. We performed four experiments with a gradual increase in stimulus complexity to determine the role of extrafoveal processing in searching for a categorically defined geometric shape. The experiments demonstrated a significant role of extrafoveal processing in searching for simple two-dimensional shapes, and its gradual decrease in a condition with more complicated three-dimensional shapes. The objects' spatial orientation and the homogeneity of the distractors significantly influenced both reaction time and the number of saccades required to identify a categorically defined target. An analysis of the individual p-value distributions revealed pronounced individual differences in the use of extrafoveal analysis and allowed examination of each participant's performance. A condition in which eye movements were prohibited enabled us to investigate the efficacy of covert attention with complicated shapes. Our results indicate that foveal and extrafoveal processing are simultaneously involved during categorical search, and that the specificity of their interaction is determined by the spatial orientation of the objects, the type of distractors, the prohibition of overt attention, and the individual characteristics of the participants.
Affiliation(s)
- Anna Dreneva
- Faculty of Psychology, Lomonosov Moscow State University
- Anna Shvarts
- Freudenthal Institute, Faculty of Science, Utrecht University
12
Sun Z, Firestone C. Curious Objects: How Visual Complexity Guides Attention and Engagement. Cogn Sci 2021;45:e12933. PMID: 33873259; DOI: 10.1111/cogs.12933.
Abstract
Some things look more complex than others. For example, a crenulate and richly organized leaf may seem more complex than a plain stone. What is the nature of this experience, and why do we have it in the first place? Here, we explore how object complexity serves as an efficiently extracted visual signal that an object merits further exploration. We algorithmically generated a library of geometric shapes and determined their complexity by computing the cumulative surprisal of their internal skeletons, essentially quantifying the "amount of information" within each shape, and then used this approach to ask new questions about the perception of complexity. Experiments 1-3 asked what kind of mental process extracts visual complexity: a slow, deliberate, reflective process (as when we decide that an object is expensive or popular) or a fast, effortless, automatic process (as when we see that an object is big or blue)? We placed simple and complex objects in visual search arrays and discovered that complex objects were easier to find among simple distractors than simple objects were among complex distractors, a classic search asymmetry indicating that complexity is prioritized in visual processing. Next, we explored the function of complexity: why do we represent object complexity in the first place? Experiments 4-5 asked subjects to study serially presented objects in a self-paced manner (for a later memory test); subjects dwelled longer on complex objects than on simple objects, even when object shape was completely task-irrelevant, suggesting a connection between visual complexity and exploratory engagement. Finally, Experiment 6 connected these implicit measures of complexity to explicit judgments. Collectively, these findings suggest that visual complexity is extracted efficiently and automatically, and even arouses a kind of "perceptual curiosity" about objects that encourages subsequent attentional engagement.
Affiliation(s)
- Zekun Sun
- Department of Psychological & Brain Sciences, Johns Hopkins University
- Chaz Firestone
- Department of Psychological & Brain Sciences, Johns Hopkins University
13
Human–Machine Interface Design for Monitoring Safety Risks Associated with Operating Small Unmanned Aircraft Systems in Urban Areas. Aerospace 2021. DOI: 10.3390/aerospace8030071.
Abstract
The envisioned introduction of autonomous Small Unmanned Aircraft Systems (sUAS) into low-altitude urban airspace necessitates high levels of system safety. Despite increased system autonomy, humans will most likely remain an essential component in assuring safety. This paper derives, applies, and evaluates a display design concept that aims to support safety risk monitoring of multiple sUAS by a human operator. The concept comprises five design principles. Its core idea is to limit display complexity, despite an increasing number of monitored sUAS, by primarily visualizing highly abstracted information while hiding detailed information of lower abstraction unless specifically requested by the human operator. States of highly abstracted functions are visualized by function-specific icons that change hue in accordance with specified system states. Simultaneously, the design concept aims to support the human operator in identifying off-nominal situations through design properties that guide visual attention. The display was evaluated in a study with seven subject matter experts. Although preliminary, the results clearly favor the proposed display design concept. The advantages of the proposed design concept are demonstrated, and the next steps for further exploring it are outlined.
14
Na E, Lee K, Kim EJ, Bae JB, Suh SW, Byun S, Han JW, Kim KW. Pre-attentive Visual Processing in Alzheimer's Disease: An Event-related Potential Study. Curr Alzheimer Res 2021;17:1195-1207. PMID: 33593259; DOI: 10.2174/1567205018666210216084534.
Abstract
INTRODUCTION: While identifying Alzheimer's Disease (AD) in its early stages is crucial, traditional neuropsychological tests tend to lack sensitivity and specificity for its diagnosis. Neuropsychological studies have reported visual processing deficits in AD, and event-related potentials (ERPs) are well suited to investigating pre-attentive processing with superior temporal resolution. OBJECTIVE: This study aimed to investigate the visual attentional characteristics of adults with AD, from pre-attentive to attentive processing, using a visual oddball task and ERPs. METHODS: Cognitively normal elderly controls (CN) and patients with probable AD were recruited. Participants performed a three-stimulus visual oddball task and were asked to press a designated button in response to the target stimuli. The amplitudes of four ERP components were analyzed. Visual mismatch negativity (vMMN) was analyzed around the parieto-occipital and temporo-occipital regions; P3a was analyzed around the fronto-central regions, whereas P3b was analyzed around the centro-parietal regions. RESULTS: Late vMMN amplitudes of the AD group were significantly smaller than those of the CN group, while early vMMN amplitudes were comparable. Compared to the CN group, P3a amplitudes of the AD group were significantly smaller for the infrequent deviant stimuli, but the amplitudes for the standard stimuli were comparable. Lastly, the AD group had significantly smaller P3b amplitudes for the target stimuli compared to the CN group. CONCLUSION: Our findings imply that AD patients exhibit pre-attentive visual processing deficits, which are known to affect later higher-order brain functions. In a clinical setting, the visual oddball paradigm could provide helpful diagnostic information, since pre-attentive ERPs can be induced by passive exposure to infrequent stimuli.
Affiliation(s)
- Eunchan Na, Kanghee Lee, Eun J Kim, Jong B Bae, Seung W Suh, Seonjeong Byun, Ji W Han, Ki W Kim: Department of Neuropsychiatry, Seoul National University Bundang Hospital, Seongnam, Korea
|
15
|
Papesh MH, Hout MC, Guevara Pinto JD, Robbins A, Lopez A. Eye movements reflect expertise development in hybrid search. Cogn Res Princ Implic 2021; 6:7. [PMID: 33587219 PMCID: PMC7884546 DOI: 10.1186/s41235-020-00269-8]
Abstract
Domain-specific expertise changes the way people perceive, process, and remember information from that domain. This is often observed in visual domains involving skilled search, such as those of athletics referees or professional visual searchers (e.g., security and medical screeners). Although existing research has compared expert to novice performance in visual search, little work has directly documented how accumulating experience changes behavior. A longitudinal approach to studying visual search performance may permit a finer-grained understanding of experience-dependent changes in visual scanning, and of the extent to which various cognitive processes are affected by experience. In this study, participants acquired experience by taking part in many experimental sessions over the course of an academic semester. Searchers looked for 20 categories of targets simultaneously (which appeared with unequal frequency), in displays with 0-3 targets present, while having their eye movements recorded. With experience, accuracy increased and response times decreased. Fixation probabilities and durations decreased with increasing experience, but saccade amplitudes and visual span increased. These findings suggest that the behavioral benefits endowed by expertise emerge from oculomotor behaviors that reflect enhanced reliance on memory to guide attention and the ability to process more of the visual field within individual fixations.
Affiliation(s)
- Megan H Papesh: Department of Psychology, New Mexico State University, P.O. Box 30001/MSC 3452, Las Cruces, NM, 88003, USA
- Michael C Hout: Department of Psychology, New Mexico State University, P.O. Box 30001/MSC 3452, Las Cruces, NM, 88003, USA
- Arryn Robbins: Department of Psychology, New Mexico State University, P.O. Box 30001/MSC 3452, Las Cruces, NM, 88003, USA; Carthage College, Kenosha, WI, USA
- Alexis Lopez: Department of Psychology, New Mexico State University, P.O. Box 30001/MSC 3452, Las Cruces, NM, 88003, USA
|
16
|
Markov YA, Tiurina NA. Size-distance rescaling in the ensemble representation of range: Study with binocular and monocular cues. Acta Psychol (Amst) 2021; 213:103238. [PMID: 33387867 DOI: 10.1016/j.actpsy.2020.103238]
Abstract
According to numerous studies, observers can rapidly and precisely evaluate the mean or range of a set. Recent studies have shown that mean size is estimated based on the sizes of objects rescaled to their distances (Tiurina & Utochkin, 2019). In the current study, we directly tested this rescaling mechanism on the perception of range using binocular and monocular cues. In Experiment 1, a sample set of circles with different angular sizes and at different apparent distances was presented stereoscopically. Participants had to adjust the range of the test set to match the range of the sample set. The main manipulation was the size-distance correlation for the sample and test sets: with a negative size-distance correlation, the apparent range had to decrease, while with a positive correlation it had to increase. We found the greatest underestimation in the condition with a negative sample correlation and a positive test correlation, which could be explained only if ensemble summary statistics were estimated after the items' rescaling. In Experiment 2, we used a Ponzo-like illusion and spatial positions as a depth cue. Sets were presented with a positive, negative, or no size-distance correlation, on a grey background or a background with a Ponzo-like illusion. We found that the range was underestimated with a negative correlation and overestimated with a positive correlation. Thus, items of an ensemble can be automatically rescaled according to their distance, based on both binocular and monocular cues, and ensemble summary statistics are estimated from perceived sizes.
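The rescaling account can be made concrete with a minimal sketch. This is not the authors' procedure; the linear size-by-distance rule and the numbers are assumptions chosen only to show how a size-distance correlation changes the ensemble range when summary statistics are computed over perceived (rescaled) sizes:

```python
# A minimal sketch, assuming (not from the paper) that perceived size can be
# modeled as angular size multiplied by apparent distance, and that the
# ensemble range is computed over the perceived sizes.

def rescaled_sizes(angular_sizes, distances):
    """Perceived size of each item under the assumed linear rescaling rule."""
    return [s * d for s, d in zip(angular_sizes, distances)]

def ensemble_range(sizes):
    """Range summary statistic: largest minus smallest size in the set."""
    return max(sizes) - min(sizes)

angular = [1.0, 2.0, 3.0, 4.0]   # angular sizes of the items
neg_corr = [4.0, 3.0, 2.0, 1.0]  # larger items nearer: negative size-distance correlation
pos_corr = [1.0, 2.0, 3.0, 4.0]  # larger items farther: positive size-distance correlation

range_raw = ensemble_range(angular)                            # range before rescaling
range_neg = ensemble_range(rescaled_sizes(angular, neg_corr))  # compressed apparent range
range_pos = ensemble_range(rescaled_sizes(angular, pos_corr))  # expanded apparent range
```

Under these toy numbers, rescaling compresses the range with a negative correlation and expands it with a positive one, which is the direction of the under- and overestimation the experiments report.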
Affiliation(s)
- Yuri A Markov: National Research University Higher School of Economics, Russia
|
17
|
Sasin E, Fougnie D. Memory-driven capture occurs for individual features of an object. Sci Rep 2020; 10:19499. [PMID: 33177574 PMCID: PMC7658969 DOI: 10.1038/s41598-020-76431-5]
Abstract
Items held in working memory (WM) capture attention (memory-driven capture). People can selectively prioritize specific object features in WM. Here, we examined whether feature-specific prioritization within WM modulates memory-driven capture. In Experiment 1, after remembering the color and orientation of a triangle, participants were instructed, via retro-cue, whether the color, the orientation, or both features were relevant. To measure capture, we asked participants to execute a subsequent search task, and we compared performance in displays that did and did not contain the memory-matching feature. Color attracted attention only when it was relevant. No capture by orientation was found. In Experiment 2, we presented the retro-cue at one of the four locations of the search display to direct attention to specific objects. We found capture by color, and this capture was larger when color was indicated as relevant. Crucially, orientation also attracted attention, but only when it was relevant. These findings provide evidence for a reciprocal interaction between internal prioritization and external attention at the feature level. Specifically, internal feature-specific prioritization modulates memory-driven capture, but this capture also depends on the salience of the features.
Affiliation(s)
- Edyta Sasin: Department of Psychology, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Daryl Fougnie: Department of Psychology, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
|
18
|
Behe BK, Huddleston PT, Childs KL, Chen J, Muraro IS. Seeing through the forest: The gaze path to purchase. PLoS One 2020; 15:e0240179. [PMID: 33036020 PMCID: PMC7546910 DOI: 10.1371/journal.pone.0240179]
Abstract
Eye tracking studies have analyzed the relationship between visual attention to point-of-purchase marketing elements (price, signage, etc.) and purchase intention. Our study is the first to investigate the relationship between the gaze sequence in which consumers view a display (including gaze aversion away from products) and the influence of consumer (top-down) characteristics on product choice. We conducted an in-lab 3 (display size: large, moderate, small) × 2 (price: sale, non-sale) within-subject experiment with 92 participants. After viewing the displays, subjects completed an online survey to provide demographic data, self-reported and actual product knowledge, and past purchase information. We employed a random forest machine learning approach via R software to analyze all possible three-unit subsequences of gaze fixations. Models were compared on multiclass F1-macro and F1-micro scores for product choice. Gaze sequence models that included gaze aversion more accurately predicted product choice in a lab setting for more complex displays. Inclusion of consumer characteristics generally improved model F1-macro and F1-micro scores for less complex displays with fewer plant sizes. Consumer attributes that helped improve model prediction performance were product expertise, ethnicity, and previous plant purchases.
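The two scores compared in this study weight classes differently: F1-micro pools all classification decisions, while F1-macro averages per-class F1, so a rare, poorly predicted product choice drags F1-macro down. A minimal pure-Python sketch (invented labels, not the study's R pipeline) illustrates the difference:

```python
# Sketch of multiclass F1-macro versus F1-micro. The label data are
# hypothetical and stand in for predicted product choices.

def f1_macro_micro(y_true, y_pred):
    """Return (F1-macro, F1-micro) for multiclass label lists."""
    classes = sorted(set(y_true) | set(y_pred))
    per_class = []
    tp_all = fp_all = fn_all = 0
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        per_class.append(2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 0.0)
        tp_all, fp_all, fn_all = tp_all + tp, fp_all + fp, fn_all + fn
    macro = sum(per_class) / len(classes)          # unweighted mean of per-class F1
    micro = 2 * tp_all / (2 * tp_all + fp_all + fn_all)  # pooled over all decisions
    return macro, micro

# Hypothetical choices: class "c" is rare and always misclassified.
y_true = ["a", "a", "a", "a", "b", "b", "b", "c"]
y_pred = ["a", "a", "a", "a", "b", "b", "b", "a"]
macro, micro = f1_macro_micro(y_true, y_pred)  # macro ≈ 0.63, micro ≈ 0.88
```

The gap between the two numbers is exactly why reporting both is informative when product choices are unevenly distributed.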
Affiliation(s)
- Bridget K. Behe: Department of Horticulture, Michigan State University, East Lansing, Michigan, United States of America
- Patricia T. Huddleston: Department of Advertising & Public Relations, College of Communication Arts & Sciences, Michigan State University, East Lansing, Michigan, United States of America
- Kevin L. Childs: Department of Plant Biology, Michigan State University, East Lansing, Michigan, United States of America
- Jiaoping Chen: Eli Broad College of Business, Michigan State University, East Lansing, Michigan, United States of America
- Iago S. Muraro: Department of Advertising & Public Relations, College of Communication Arts & Sciences, Michigan State University, East Lansing, Michigan, United States of America
|
19
|
Exogenous Orientation of Attention to the Center of Mass in a Visual Search Task. Atten Percept Psychophys 2020; 82:729-738. [PMID: 31875316 DOI: 10.3758/s13414-019-01908-z]
Abstract
Anne Treisman's scientific career included broad-ranging contributions that advanced our understanding of the attentional mechanisms that people rely on to make sense of the world. In this paper, we describe results from a visual-search paradigm first developed by Grabowecky and Treisman (Grabowecky, 1992). Their design exploited known feature-search asymmetries (Treisman & Gormican, 1988) to investigate the role of a center-of-mass (CoM) mechanism in determining the initial locus of visual-spatial attention in visual search. The original experiment supported the hypothesis that the CoM influences the initial orienting of visual-spatial attention, as targets near the CoM of a multi-element array were detected more quickly than targets distant from the CoM. These findings were replicated in a follow-up experiment using a different feature-search asymmetry, with eye-tracking added to verify central fixation. We also investigated whether the CoM had any influence on pop-out search, and found no evidence that it does. Surprisingly, the effect of the search array's position suggested that the CoM may be computed independently for elements contained within each visual hemifield. Whereas our work on the CoM with Treisman was initiated within an earlier theoretical context, the present results are also compatible with contemporary theoretical advances; both the early results and the new results can be integrated within current ways of thinking about attention and pre-attentive mechanisms.
|
20
|
Chuquichambi EG, Rey C, Llames R, Escudero JT, Dorado A, Munar E. Circles Are Detected Faster Than Downward-Pointing Triangles in a Speeded Response Task. Perception 2020; 49:1026-1042. [PMID: 32957841 DOI: 10.1177/0301006620957472]
Abstract
Simple geometric shapes are associated with facial emotional expressions. According to previous research, a downward-pointing triangle conveys the threatening perception of an angry facial expression, and a circle conveys the pleasant perception of a happy facial expression. Some studies have shown that downward-pointing triangles capture attention faster than circles. Other studies have proposed that curvature enhances visual detection and guides attention. We tested a downward-pointing triangle and a circle as target stimuli in a speeded response task. The distractors were two stimuli created by mixing both targets, so that the presentation of low-level features was balanced. We used 3 × 3, 4 × 4, and 5 × 5 matrices to test whether these shapes guided attention to an efficient response. In Experiment 1, participants responded faster to the circle than to the downward-pointing triangle. They also responded more slowly to both targets as the number of distractors increased. In Experiment 2, we replicated the main findings of Experiment 1. Overall, the circle was detected faster than the downward-pointing triangle with small matrices, but this difference decreased as the matrix size increased. We suggest that circles capture attention faster because of the influence of low-level features, that is, curvature in this case.
Affiliation(s)
- Carlos Rey: University of the Balearic Islands, Spain
- Rosana Llames: University of Seville, Spain; University of the Balearic Islands, Spain
|
21
|
Yang YH, Wolfe JM. Is apparent instability a guiding feature in visual search? Visual Cognition 2020; 28:218-238. [PMID: 33100884 PMCID: PMC7577071 DOI: 10.1080/13506285.2020.1779892]
Abstract
Humans are quick to notice if an object is unstable. Does that assessment require attention, or can instability serve as a preattentive feature that guides the deployment of attention? This paper describes a series of visual search experiments designed to address this question. Experiment 1 shows that less stable images among more stable images are found more efficiently than more stable among less stable; a search asymmetry that supports guidance by instability. Experiment 2 shows efficient search but no search asymmetry when the orientation of the objects is removed as a confound. Experiment 3 independently varies the orientation cues and perceived stability and finds a clear main effect of apparent stability. Experiment 4 shows converging evidence for a role of stability using different stimuli that lack an orientation cue; however, here search for both stable and unstable targets is inefficient. Experiment 5 is a control for Experiment 4, showing that the stability effect in Experiment 4 is not a simple side-effect of the geometry of the stimuli. On balance, the data support a role for instability in the guidance of attention in visual search.
Affiliation(s)
- Yung-Hao Yang: Visual Attention Laboratory, Brigham and Women’s Hospital & Harvard Medical School, Boston, MA, USA; Human Information Science Laboratory, NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Atsugi, Japan
- Jeremy M Wolfe: Visual Attention Laboratory, Brigham and Women’s Hospital & Harvard Medical School, Boston, MA, USA
|
22
|
Bonacci LM, Bressler S, Kwasa JAC, Noyce AL, Shinn-Cunningham BG. Effects of Visual Scene Complexity on Neural Signatures of Spatial Attention. Front Hum Neurosci 2020; 14:91. [PMID: 32265675 PMCID: PMC7105597 DOI: 10.3389/fnhum.2020.00091]
Abstract
Spatial selective attention greatly affects our processing of complex visual scenes, yet the way in which the brain selects relevant objects while suppressing irrelevant objects is still unclear. Evidence of these processes has been found using non-invasive electroencephalography (EEG). However, few studies have characterized these measures during attention to dynamic stimuli, and little is known regarding how these measures change with increased scene complexity. Here, we compared attentional modulation of the EEG N1 and alpha power (oscillations between 8–14 Hz) across three visual selective attention tasks. The tasks differed in the number of irrelevant stimuli presented, but all required sustained attention to the orientation trajectory of a lateralized stimulus. In scenes with few irrelevant stimuli, top-down control of spatial attention is associated with strong modulation of both the N1 and alpha power across parietal-occipital channels. In scenes with many irrelevant stimuli in both hemifields, however, top-down control is no longer represented by strong modulation of alpha power, and N1 amplitudes are overall weaker. These results suggest that as a scene becomes more complex, requiring suppression in both hemifields, the neural signatures of top-down control degrade, likely reflecting some limitation in EEG to represent this suppression.
Affiliation(s)
- Lia M Bonacci: Department of Biomedical Engineering, Boston University, Boston, MA, United States
- Scott Bressler: Graduate Program in Neuroscience, Boston University, Boston, MA, United States
- Jasmine A C Kwasa: Department of Biomedical Engineering, Boston University, Boston, MA, United States
- Abigail L Noyce: Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, United States
|
23
|
Cimminella F, Sala SD, Coco MI. Extra-foveal Processing of Object Semantics Guides Early Overt Attention During Visual Search. Atten Percept Psychophys 2020; 82:655-670. [PMID: 31792893 PMCID: PMC7246246 DOI: 10.3758/s13414-019-01906-1]
Abstract
Eye-tracking studies using arrays of objects have demonstrated that some high-level processing of object semantics can occur in extra-foveal vision, but its role in the allocation of early overt attention is still unclear. This eye-tracking visual search study contributes novel findings by examining the effects of object-to-object semantic relatedness and visual saliency on search responses and eye-movement behaviour across arrays of increasing size (3, 5, 7). Our data show that a critical object was looked at earlier and for longer when it was semantically unrelated than related to the other objects in the display, both when it was the search target (target-present trials) and when it was a target's semantically related competitor (target-absent trials). Semantic relatedness effects manifested already during the very first fixation after array onset, were consistently found for increasing set sizes, and were independent of low-level visual saliency, which did not play any role. We conclude that object semantics can be extracted early in extra-foveal vision and capture overt attention from the very first fixation. These findings pose a challenge to models of visual attention which assume that overt attention is guided by the visual appearance of stimuli, rather than by their semantics.
Affiliation(s)
- Francesco Cimminella: Human Cognitive Neuroscience, Psychology, University of Edinburgh, Edinburgh, UK; Laboratory of Experimental Psychology, Suor Orsola Benincasa University, Naples, Italy
- Sergio Della Sala: Human Cognitive Neuroscience, Psychology, University of Edinburgh, Edinburgh, UK
- Moreno I Coco: Human Cognitive Neuroscience, Psychology, University of Edinburgh, Edinburgh, UK; School of Psychology, The University of East London, London, UK; Faculdade de Psicologia, Universidade de Lisboa, Lisbon, Portugal
|
24
|
Petilli MA, Marini F, Daini R. Distractor context manipulation in visual search: How expectations modulate proactive control. Cognition 2019; 196:104129. [PMID: 31765925 DOI: 10.1016/j.cognition.2019.104129]
Abstract
Visual search can be guided by top-down and bottom-up processes, with either one dominating the other depending on the task (e.g., feature versus conjunction search). Moreover, different search tasks bring about different expectations about the type, or frequency, of distractor stimuli. These expectations could promote top-down "task-sets" that may impact performance even when distractors are temporarily absent. Here, we characterized the role and extent of recruitment of proactive top-down processes for distractor expectation in feature and conjunction search. Participants conducted feature and conjunction search tasks for a visual target among distractors, which were either frequently presented or completely absent. Recruiting proactive top-down processes for distractor expectation entailed slower, yet more accurate, responses on distractor-absent trials in the frequent-distractor (versus no-distractor) context of both tasks. These effects were larger in the conjunction versus feature task and were not impacted by stimulus duration and time pressure (short/present in Experiment 1, unlimited/absent in Experiment 2, respectively). Results were replicated when the presence/absence of distractors at each trial was fully predictable (Experiment 3), and when several parameters of visual search were changed (Experiment 4). Our findings indicate that top-down task-sets related to distractor expectation entail performance costs and benefits in visual search. These effects occur throughout task blocks rather than trial-to-trial, are modulated by search type, and confirm that proactive top-down processes intervene in feature search.
Affiliation(s)
- Marco A Petilli: Department of Psychology, University of Milano-Bicocca, Milan, Italy
- Francesco Marini: Swartz Center for Computational Neuroscience, University of California, San Diego, La Jolla, USA
- Roberta Daini: Department of Psychology, University of Milano-Bicocca, Milan, Italy; NeuroMI - Milan Center for Neuroscience, Milan, Italy; COMiB - Optics and Optometry Research Center, University of Milano-Bicocca, Milano, Italy
|
25
|
Affiliation(s)
- Joy J Geng: Department of Psychology, Center for Mind and Brain at University of California Davis, United States
- Andrew B Leber: Department of Psychology and Center for Cognitive & Brain Sciences, The Ohio State University, United States
- Sarah Shomstein: Department of Psychological and Brain Sciences, George Washington University, United States
|
26
|
Clarke ADF, Nowakowska A, Hunt AR. Seeing Beyond Salience and Guidance: The Role of Bias and Decision in Visual Search. Vision (Basel) 2019; 3:E46. [PMID: 31735847 PMCID: PMC6802808 DOI: 10.3390/vision3030046]
Abstract
Visual search is a popular tool for studying a range of questions about perception and attention, thanks to the ease with which the basic paradigm can be controlled and manipulated. While visual search is often thought of as a sub-field of vision science, search tasks are significantly more complex than most other perceptual tasks, with strategy and decision playing an essential, but neglected, role. In this review, we briefly describe some of the important theoretical advances about perception and attention that have been gained from studying visual search within the signal detection and guided search frameworks. Under most circumstances, search also involves executing a series of eye movements. We argue that understanding the contribution of biases, routines and strategies to visual search performance over multiple fixations will lead to new insights about these decision-related processes and how they interact with perception and attention. We also highlight the neglected potential for variability, both within and between searchers, to contribute to our understanding of visual search. The exciting challenge will be to account for variations in search performance caused by these numerous factors and their interactions. We conclude the review with some recommendations for ways future research can tackle these challenges to move the field forward.
Affiliation(s)
- Anna Nowakowska: School of Psychology, University of Aberdeen, Aberdeen AB24 3FX, UK
- Amelia R. Hunt: School of Psychology, University of Aberdeen, Aberdeen AB24 3FX, UK
|
27
|
Preview of partial stimulus information in search prioritizes features and conjunctions, not locations. Atten Percept Psychophys 2019; 82:140-152. [PMID: 31482279 PMCID: PMC6994444 DOI: 10.3758/s13414-019-01841-1]
Abstract
Visual search often requires combining information on distinct visual features such as color and orientation, but how the visual system does this is not fully understood. To better understand this, we showed observers a brief preview of part of a search stimulus (either its color or its orientation) before they performed a conjunction search task. Our experimental questions were (1) whether observers would use such previews to prioritize either potential target locations or features, and (2) which neural mechanisms might underlie the observed effects. In two experiments, participants searched for a prespecified target in a display consisting of bar elements, each combining one of two possible colors and one of two possible orientations. Participants responded by making an eye movement to the selected bar. In our first experiment, we found that a preview consisting of colored bars with identical orientation improved saccadic target selection performance, while a preview of oriented gray bars substantially decreased performance. In a follow-up experiment, we found that previews consisting of discs of the same color as the bars (and thus without orientation information) hardly affected performance. Thus, performance improved only when the preview combined color and (noninformative) orientation information. Previews apparently result in a prioritization of features and conjunctions rather than of spatial locations (in the latter case, all previews should have had similar effects). Our results thus also indicate that search for, and prioritization of, combinations involve conjunctively tuned neural mechanisms. These probably reside at the level of the primary visual cortex.
|
28
|
Abstract
In a series of four experiments, standard visual search was used to explore whether the onset of illusory motion pre-attentively guides vision in the same way that the onset of real-motion is known to do. Participants searched for target stimuli based on Akiyoshi Kitaoka's classic illusions, configured so that they either did or did not give the subjective impression of illusory motion. Distractor items always contained the same elements as target items, but did not convey a sense of illusory motion. When target items contained illusory motion, they popped-out, with flat search slopes that were independent of set size. Search for control items without illusory motion - but with identical structural differences to distractors - was slow and serial in nature (> 200 ms/item). Using a nulling task, we estimated the speed of illusory rotation in our displays to be approximately 2 °/s. Direct comparison of illusory and real-motion targets moving with matched velocity showed that illusory motion targets were detected more quickly. Blurred target items that conveyed a weak subjective impression of illusory motion gave rise to serial but faster (< 100 ms/item) search than control items. Our behavioral findings of parallel detection across the visual field, together with previous imaging and neurophysiological studies, suggests that relatively early cortical areas play a causal role in the perception of illusory motion. Furthermore, we hope to re-emphasize the way in which visual search can be used as a flexible, objective measure of illusion strength.
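The "ms/item" figures quoted above are search slopes: the least-squares slope of response time against display set size, with a near-flat slope taken as evidence of parallel (pop-out) search and a steep slope as evidence of serial search. A minimal sketch with invented RT values illustrates the computation:

```python
# Sketch of search-slope estimation. The set sizes and RTs are hypothetical,
# chosen only to contrast a flat (pop-out) and a steep (serial) pattern.

def search_slope(set_sizes, rts_ms):
    """Ordinary least-squares slope (ms per item) of RT against set size."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(rts_ms) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(set_sizes, rts_ms))
    var = sum((x - mx) ** 2 for x in set_sizes)
    return cov / var

# Pop-out search: RT barely changes as items are added (near-zero slope).
popout_slope = search_slope([4, 8, 12], [520, 525, 522])
# Serial search: each added item costs a large, roughly constant amount of RT.
serial_slope = search_slope([4, 8, 12], [800, 1650, 2500])
```

A slope near zero is the signature of the flat, set-size-independent search the abstract reports for illusory-motion targets, while slopes above roughly 200 ms/item indicate the slow, serial search found for the control items.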
|