101
Periodic attention operates faster during more complex visual search. Sci Rep 2022; 12:6688. PMID: 35461325; PMCID: PMC9035177; DOI: 10.1038/s41598-022-10647-5.
Abstract
Attention has been found to sample visual information periodically, in a wide range of frequencies below 20 Hz. This periodicity may be supported by brain oscillations at corresponding frequencies. We propose that part of the discrepancy in periodic frequencies observed in the literature is due to differences in attentional demands, resulting from heterogeneity in tasks performed. To test this hypothesis, we used visual search and manipulated task complexity, i.e., target discriminability (high, medium, low) and number of distractors (set size), while electroencephalography was simultaneously recorded. We replicated previous results showing that the phase of pre-stimulus low-frequency oscillations predicts search performance. Crucially, such effects were observed at increasing frequencies within the theta-alpha range (6–18 Hz) for decreasing target discriminability. In medium and low discriminability conditions, correct responses were further associated with higher post-stimulus phase-locking than incorrect ones, at increasingly higher frequencies and later latencies. Finally, the larger the set size, the later the post-stimulus effect peaked. Together, these results suggest that increased complexity (lower discriminability or larger set size) requires more attentional cycles to perform the task, partially explaining discrepancies between reports of attentional sampling. Low-frequency oscillations structure the temporal dynamics of neural activity and aid top-down attentional control for efficient visual processing.
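Pre-stimulus phase effects like those described in this abstract are commonly quantified with inter-trial phase coherence (ITC): trials are split by outcome and the consistency of oscillatory phase across trials is compared. The following is a minimal, hypothetical sketch of that computation (not the authors' actual pipeline), assuming single-channel epoched EEG stored as a NumPy array:

```python
import numpy as np

def intertrial_phase_coherence(epochs, sfreq, freq):
    """Inter-trial phase coherence (ITC) at one frequency.

    epochs : (n_trials, n_samples) array of pre-stimulus EEG segments.
    Returns a value in [0, 1]; 1 means identical phase on every trial.
    """
    n_trials, n_samples = epochs.shape
    t = np.arange(n_samples) / sfreq
    # Project each trial onto a complex sinusoid at the target frequency
    # (equivalent to a single DFT coefficient).
    basis = np.exp(-2j * np.pi * freq * t)
    coeffs = epochs @ basis                 # (n_trials,) complex amplitudes
    phases = coeffs / np.abs(coeffs)        # unit-length phase vectors
    return np.abs(phases.mean())            # length of the mean phasor

# Synthetic demo: 60 phase-locked trials vs. 60 random-phase trials at 10 Hz
rng = np.random.default_rng(0)
sfreq, f = 250.0, 10.0
t = np.arange(125) / sfreq                  # 500 ms pre-stimulus window
locked = np.array([np.sin(2 * np.pi * f * t + 0.3)
                   + 0.2 * rng.standard_normal(t.size) for _ in range(60)])
random_phase = np.array([np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
                         for _ in range(60)])
itc_locked = intertrial_phase_coherence(locked, sfreq, f)
itc_random = intertrial_phase_coherence(random_phase, sfreq, f)
```

Comparing ITC between correct and incorrect trials at several candidate frequencies is one way such phase-performance relationships are tested.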
102
Facial hair may slow detection of happy facial expressions in the face in the crowd paradigm. Sci Rep 2022; 12:5911. PMID: 35396450; PMCID: PMC8993935; DOI: 10.1038/s41598-022-09397-1.
Abstract
Human visual systems have evolved to extract ecologically relevant information from complex scenery. In some cases, the face in the crowd visual search task demonstrates an anger superiority effect, where anger is allocated preferential attention. Across three studies (N = 419), we tested whether facial hair guides attention in visual search and influences the speed of detecting angry and happy facial expressions in large arrays of faces. In Study 1, participants were faster to search through clean-shaven crowds and detect bearded targets than to search through bearded crowds and detect clean-shaven targets. In Study 2, targets were angry and happy faces presented in neutral backgrounds. Facial hair of the target faces was also manipulated. An anger superiority effect emerged that was augmented by the presence of facial hair, which was due to the slower detection of happiness on bearded faces. In Study 3, targets were happy and angry faces presented in either bearded or clean-shaven backgrounds. Facial hair of the background faces was also systematically manipulated. A significant anger superiority effect was revealed, although this was not moderated by the target's facial hair. Rather, the anger superiority effect was larger in clean-shaven than bearded face backgrounds. Together, these results suggest that facial hair does influence detection of emotional expressions in visual search; however, rather than facilitating an anger superiority effect as a potential threat-detection system, facial hair may reduce detection of happy faces within the face in the crowd paradigm.
103
Felin T, Koenderink J. A Generative View of Rationality and Growing Awareness. Front Psychol 2022; 13:807261. PMID: 35465538; PMCID: PMC9021390; DOI: 10.3389/fpsyg.2022.807261.
Abstract
In this paper we contrast bounded and ecological rationality with a proposed alternative, generative rationality. Ecological approaches to rationality build on the idea of humans as "intuitive statisticians," while we argue for a more generative conception of humans as "probing organisms." We first highlight how ecological rationality's focus on cues and statistics is problematic for two reasons: (a) the problem of cue salience, and (b) the problem of cue uncertainty. We highlight these problems by revisiting the statistical and cue-based logic that underlies ecological rationality, which originates from the misapplication of concepts in psychophysics (e.g., signal detection, just-noticeable differences). We then work through the most popular experimental task in the ecological rationality literature, the city size task, to illustrate how psychophysical assumptions have informally been linked to ecological rationality. After highlighting these problems, we contrast ecological rationality with a proposed alternative, generative rationality. Generative rationality builds on biology, in contrast to ecological rationality's focus on statistics. We argue that in uncertain environments cues are rarely given or available for statistical processing. Therefore, we focus on the psychogenesis of awareness rather than the psychophysics of cues. For any agent or organism, environments "teem" with indefinite cues, meanings, and potential objects, the salience or relevance of which is scarcely obvious based on their statistical or physical properties. We focus on organism-specificity and the organism-directed probing that shapes awareness and perception. Cues in teeming environments are noticed when they serve as cues-for-something, requiring what might be called a "cue-to-clue" transformation. In this sense, awareness toward a cue or cues is actively "grown."
We thus argue that perception might more productively be seen as the presentation of cues and objects rather than their representation. This generative approach not only applies to relatively mundane organism (including human) interactions with their environments (as well as organism-object relationships and their embodied nature) but also has significant implications for understanding the emergence of novelty in economic settings. We conclude with a discussion of how our arguments link with, but modify, Herbert Simon's popular "scissors" metaphor, as it applies to bounded rationality and its implications for decision making in uncertain, teeming environments.
Affiliation(s)
- Teppo Felin
- Jon M. Huntsman School of Business, Utah State University, Logan, UT, United States
- Saïd Business School, University of Oxford, Oxford, United Kingdom
- Jan Koenderink
- Department of Experimental Psychology, Katholieke Universiteit Leuven, Leuven, Belgium
- Department of Experimental Psychology, Utrecht University, Utrecht, Netherlands
104
Shalev N, Boettcher S, Wilkinson H, Scerif G, Nobre AC. Be there on time: Spatial-temporal regularities guide young children's attention in dynamic environments. Child Dev 2022; 93:1414-1426. PMID: 35385168; PMCID: PMC9545323; DOI: 10.1111/cdev.13770.
Abstract
Children's ability to benefit from spatiotemporal regularities to detect goal-relevant targets was tested in a dynamic, extended context. Young adults and children (from a low-deprivation area school in the United Kingdom; N = 80; 5-6 years; 39 female; ethics approval did not permit individual-level race/ethnicity surveying) completed a dynamic visual-search task. Targets and distractors faded in and out of a display over seconds. Half of the targets appeared at predictable times and locations. Search performance in children was poorer overall. Nevertheless, they benefitted equivalently from spatiotemporal regularities, detecting more predictable than unpredictable targets. Children's benefits from predictions correlated positively with their attention. The study brings ecological validity to the investigation of attentional guidance in children, revealing striking behavioral benefits of dynamic experience-based predictions.
Affiliation(s)
- Nir Shalev
- Department of Experimental Psychology, University of Oxford, Oxford, UK; Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
- Sage Boettcher
- Department of Experimental Psychology, University of Oxford, Oxford, UK; Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
- Hannah Wilkinson
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Gaia Scerif
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Anna C Nobre
- Department of Experimental Psychology, University of Oxford, Oxford, UK; Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
105
Hilchey MD, Rondina R, Soman D. Information-seeking when information doesn't matter. Journal of Behavioral Decision Making 2022. DOI: 10.1002/bdm.2280.
Affiliation(s)
- Matthew D. Hilchey
- Rotman School of Management, University of Toronto, 105 St George St, Toronto, Ontario M5S 3E6, Canada
- Renante Rondina
- Rotman School of Management, University of Toronto, 105 St George St, Toronto, Ontario M5S 3E6, Canada
- Dilip Soman
- Rotman School of Management, University of Toronto, 105 St George St, Toronto, Ontario M5S 3E6, Canada
- Canada Research Chair in Behavioural Science and Economics, Rotman School of Management, University of Toronto, 105 St George St, Toronto, Ontario M5S 3E6, Canada
106
Doradzińska Ł, Furtak M, Bola M. Perception of semantic relations in scenes: A registered report study of attention hold. Conscious Cogn 2022; 100:103315. PMID: 35339910; DOI: 10.1016/j.concog.2022.103315.
Abstract
To what extent the semantic relations present in scenes guide spatial attention automatically remains a matter of debate. Considering that spatial attention can be understood as a sequence of shifts, engagements, and disengagements, semantic relations might affect each stage of this process differently. Therefore, we investigated whether objects that violate semantic rules engage attention for longer than objects that are expected in a given context. The experiment involved a central presentation of a distractor scene that contained a semantically congruent or incongruent object, and a peripheral presentation of a small target letter. We found that incongruent scenes did not delay responses to the peripheral target, which indicates that they did not hold attention for longer than congruent scenes. Therefore, by showing that violations of semantic relations do not engage attention automatically, our study contributes to a better understanding of how attention operates in naturalistic settings.
Affiliation(s)
- Łucja Doradzińska
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology of Polish Academy of Sciences, 3 Pasteur Street, 02-093 Warsaw, Poland
- Marcin Furtak
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology of Polish Academy of Sciences, 3 Pasteur Street, 02-093 Warsaw, Poland
- Michał Bola
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology of Polish Academy of Sciences, 3 Pasteur Street, 02-093 Warsaw, Poland
107
Chakraborty S, Samaras D, Zelinsky GJ. Weighting the factors affecting attention guidance during free viewing and visual search: The unexpected role of object recognition uncertainty. J Vis 2022; 22:13. PMID: 35323870; PMCID: PMC8963662; DOI: 10.1167/jov.22.4.13.
Abstract
The factors determining how attention is allocated during visual tasks have been studied for decades, but few studies have attempted to model the weighting of several of these factors within and across tasks to better understand their relative contributions. Here we consider the roles of saliency, center bias, target features, and object recognition uncertainty in predicting the first nine changes in fixation made during free viewing and visual search tasks in the OSIE and COCO-Search18 datasets, respectively. We focus on the last and least familiar of these factors by proposing a new method of quantifying uncertainty in an image, one based on object recognition. We hypothesize that the greater the number of object categories competing for an object proposal, the greater the uncertainty of how that object should be recognized and, hence, the greater the need for attention to resolve this uncertainty. As expected, we found that target features best predicted target-present search, with their dominance obscuring the use of other features. Unexpectedly, we found that target features were only weakly used during target-absent search. We also found that object recognition uncertainty outperformed an unsupervised saliency model in predicting free-viewing fixations, although saliency was slightly more predictive of search. We conclude that uncertainty in object recognition, a measure that is image computable and highly interpretable, is better than bottom-up saliency in predicting attention during free viewing.
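One plausible way to formalize the "categories competing for an object proposal" idea (a hypothetical sketch, not necessarily the exact measure the paper defines) is the Shannon entropy of a classifier's category posterior for each proposal: a flat posterior means many competing categories and hence high recognition uncertainty.

```python
import numpy as np

def recognition_uncertainty(logits):
    """Shannon entropy (bits) of the category posterior for one object proposal.

    More categories competing for the proposal -> flatter softmax posterior ->
    higher entropy -> greater hypothesized need for attention to resolve it.
    """
    logits = np.asarray(logits, dtype=float)
    p = np.exp(logits - logits.max())   # numerically stable softmax
    p /= p.sum()
    nz = p[p > 0]                       # avoid log2(0)
    return float(-(nz * np.log2(nz)).sum())

confident = recognition_uncertainty([8.0, 0.5, 0.2, 0.1])  # one clear winner
ambiguous = recognition_uncertainty([1.0, 1.0, 1.0, 1.0])  # four-way tie
```

With four equiprobable categories the entropy is exactly 2 bits, while a confidently recognized proposal scores near zero; a fixation-priority map could then weight proposals by this score.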
Affiliation(s)
- Dimitris Samaras
- Department of Computer Science, Stony Brook University, Stony Brook, NY, USA
- Gregory J Zelinsky
- Department of Psychology, Stony Brook University, Stony Brook, NY, USA; Department of Computer Science, Stony Brook University, Stony Brook, NY, USA
108
Humans represent the precision and utility of information acquired across fixations. Sci Rep 2022; 12:2411. PMID: 35165336; PMCID: PMC8844410; DOI: 10.1038/s41598-022-06357-7.
Abstract
Our environment contains an abundance of objects with which humans interact daily, gathering visual information through sequences of eye movements to choose which object is best suited to a particular task. This process is not trivial, and requires a complex strategy where task affordance defines the search strategy, and the estimated precision of the visual information gathered from each object may be used to track perceptual confidence for object selection. This study addresses the fundamental problem of how such visual information is metacognitively represented and used for subsequent behaviour, and reveals a complex interplay between task affordance, visual information gathering, and metacognitive decision making. People fixate higher-utility objects and, most importantly, retain metaknowledge about how much information they have gathered about these objects, which is used to guide perceptual report choices. These findings suggest that such metacognitive knowledge is important in situations where decisions are based on information acquired in a temporal sequence.
109
Chen S, Shi Z, Zinchenko A, Müller HJ, Geyer T. Cross-modal contextual memory guides selective attention in visual-search tasks. Psychophysiology 2022; 59:e14025. PMID: 35141899; DOI: 10.1111/psyp.14025.
Abstract
Visual search is speeded when a target item is positioned consistently within an invariant (repeatedly encountered) configuration of distractor items ("contextual cueing"). Contextual cueing is also observed in cross-modal search, when the location of the (visual) target is predicted by distractors from another (tactile) sensory modality. Previous studies examining lateralized waveforms of the event-related potential (ERP) with millisecond precision have shown that learned visual contexts improve a whole cascade of search-processing stages. Drawing on ERPs, the present study tested alternative accounts of contextual cueing in tasks in which distractor-target contextual associations are established across, as compared to within, sensory modalities. To this end, we devised a novel, cross-modal search task: search for a visual feature singleton, with repeated (and nonrepeated) distractor configurations presented either within the same (visual) or a different (tactile) modality. We found reaction times (RTs) to be faster for repeated versus nonrepeated configurations, with comparable facilitation effects between visual (unimodal) and tactile (crossmodal) context cues. Further, for repeated configurations, there were enhanced amplitudes (and reduced latencies) of ERPs indexing attentional allocation (PCN) and postselective analysis of the target (CDA), respectively; both components correlated positively with the RT facilitation. These effects were again comparable between uni- and crossmodal cueing conditions. In contrast, motor-related processes indexed by the response-locked LRP contributed little to the RT effects. These results indicate that both uni- and crossmodal context cues benefit the same visual processing stages related to the selection and subsequent analysis of the search target.
Affiliation(s)
- Siyi Chen
- General and Experimental Psychology, Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany
- Zhuanghua Shi
- General and Experimental Psychology, Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany; Munich Center for Neurosciences-Brain & Mind, Ludwig-Maximilians-Universität München, Munich, Germany
- Artyom Zinchenko
- General and Experimental Psychology, Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany
- Hermann J Müller
- General and Experimental Psychology, Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany; Munich Center for Neurosciences-Brain & Mind, Ludwig-Maximilians-Universität München, Munich, Germany
- Thomas Geyer
- General and Experimental Psychology, Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany; Munich Center for Neurosciences-Brain & Mind, Ludwig-Maximilians-Universität München, Munich, Germany
110
Galusca CI, Fang W, Wang Z, Zhong M, Sun YHP, Pascalis O, Xiao NG. The "Fat Face" illusion: A robust adaptation for processing pairs of faces. Vision Res 2022; 195:108015. PMID: 35149376; DOI: 10.1016/j.visres.2022.108015.
Abstract
Converging evidence has demonstrated our remarkable capacities to process individual faces. However, in real-life contexts, we rarely see faces in isolation. It is largely unknown how our visual system processes a multitude of faces. The current study explored this question by using the "Fat Face" illusion: when two identical faces are vertically aligned, the bottom face appears bigger. In Experiment 1, we tested the robustness of this illusion by using faces varied by gender and race, by recruiting participants from different countries (Canadian, Chinese, and French), and by implementing different task requirements. We found that the illusion was stable and immune to variations in face gender or face race, perceptual familiarity, and task requirements. Experiment 2 further indicated that binocular vision was essential for this visual illusion. When participants performed the task with one eye covered, the previously robust illusion completely disappeared. Together, these findings revealed a visual adaptation for processing multiple faces in the environment: the face at the top is perceived as more distant from the viewer and appears smaller in size than the face at the bottom. More broadly, overestimating the size of the bottom face may represent a fundamental mechanism for social interactions, ensuring the deployment of attention to those closest to self.
Affiliation(s)
- Wei Fang
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Ontario, Canada
- Zhe Wang
- Department of Psychology, Zhejiang Sci-Tech University, Zhejiang, China
- Ming Zhong
- Department of Psychology, Zhejiang Sci-Tech University, Zhejiang, China
- Yu-Hao P Sun
- Department of Psychology, Zhejiang Sci-Tech University, Zhejiang, China
- Naiqi G Xiao
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Ontario, Canada
111
Abstract
Despite our best intentions, physically salient but entirely task-irrelevant stimuli can sometimes capture our attention. With learning, it is possible to more efficiently ignore such stimuli, although specifically how the visual system accomplishes this remains to be clarified. Using a sample of young-adult participants, we examined the time course of eye movements to targets and distractors. We replicated a reduced frequency of eye movements to the distractor when it appeared in a location at which distractors are frequently encountered. This reduction was observed even for the earliest saccades, when selection tends to be most stimulus-driven. When the distractor appeared at the high-probability location, saccadic reaction time was slowed specifically for distractor-going saccades, suggesting a slowing of priority accumulation at this location. In the event that the distractor was fixated, disengagement from the distractor was also faster when it appeared in the high-probability location. Both proactive and reactive mechanisms of distractor suppression work together to minimize attentional capture by frequently encountered distractors.
112
Thornton IM, Nguyen TT, Kristjánsson Á. Foraging tempo: Human run patterns in multiple-target search are constrained by the rate of successive responses. Q J Exp Psychol (Hove) 2022; 75:297-312. PMID: 32933424; DOI: 10.1177/1747021820961640.
Abstract
Human foraging tasks are beginning to provide new insights into the roles of vision, attention, and working memory during complex, multiple-target search. Here, we test the idea that "foraging tempo" (the rate of successive target selections) helps determine patterns of behaviour in these tasks. Previously, we established that the majority of target selections during unconstrained foraging happen at regular, rapid intervals, forming the "cruise phase" of a foraging trial. Furthermore, we noted that when the temporal interval between cruise phase responses was longer, the tendency to switch between target categories increased. To directly explore this relationship, we modified our standard iPad foraging task so that observers had to synchronise each response with an auditory metronome signal. Across trials, we increased the tempo and examined how this changed patterns of foraging when targets were defined either by a single feature or by a conjunction of features. The results were very clear. Increasing tempo systematically decreased the tendency for participants to switch between target categories. Although this was true for both feature and conjunction trials, there was also evidence that time constraints and target complexity interacted. As in our previous work, we also observed clear individual differences in how participants responded to changes in task difficulty. Overall, our results show that foraging tempo does influence the way participants respond, and we suggest this parameter may prove to be useful in further explorations of group and individual strategies during multiple-target search.
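The run and switch statistics central to this kind of foraging analysis reduce to simple sequence operations over the per-trial list of selected target categories. A minimal sketch (the category labels and the example trial below are invented for illustration):

```python
def run_lengths(selections):
    """Lengths of consecutive same-category 'runs' in a selection sequence."""
    runs, count = [], 1
    for prev, cur in zip(selections, selections[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    return runs

def switch_rate(selections):
    """Proportion of successive selections that switch target category."""
    switches = sum(a != b for a, b in zip(selections, selections[1:]))
    return switches / (len(selections) - 1)

# Category of each successive target selection in one hypothetical trial
trial = list("AAABBAABBB")
runs = run_lengths(trial)    # run structure of the trial
rate = switch_rate(trial)    # tendency to switch between categories
```

Comparing `rate` (or mean run length) across metronome tempos is the kind of summary the tempo manipulation described above would feed into.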
Affiliation(s)
- Ian M Thornton
- Department of Cognitive Science, Faculty of Media and Knowledge Sciences, University of Malta, Msida, Malta
- Tram Tn Nguyen
- Department of Cognitive Science, Faculty of Media and Knowledge Sciences, University of Malta, Msida, Malta
- Árni Kristjánsson
- Faculty of Psychology, School of Health Sciences, University of Iceland, Reykjavik, Iceland
- School of Psychology, National Research University Higher School of Economics, Moscow, Russian Federation
113
Tagu J, Kristjánsson Á. Dynamics of attentional and oculomotor orienting in visual foraging tasks. Q J Exp Psychol (Hove) 2022; 75:260-276. PMID: 32238034; DOI: 10.1177/1747021820919351.
Abstract
A vast amount of research has been carried out to understand how humans visually search for targets in their environment. However, this research has typically involved search for one unique target among several distractors. Although this line of research has yielded important insights into the basic characteristics of how humans explore their visual environment, this may not be a very realistic model for everyday visual orientation. Recently, researchers have used multi-target displays to assess orienting in the visual field. Eye movements in such tasks are, however, less well understood. Here, we investigated oculomotor dynamics during four visual foraging tasks differing in target crypticity (feature-based foraging vs. conjunction-based foraging) and the effector type being used for target selection (mouse foraging vs. gaze foraging). Our results show that both target crypticity and effector type affect foraging strategies. These changes are reflected in oculomotor dynamics, feature foraging being associated with focal exploration (long fixations and short-amplitude saccades), and conjunction foraging with ambient exploration (short fixations and high-amplitude saccades). These results provide important new information for existing accounts of visual attention and oculomotor control and emphasise the usefulness of foraging tasks for a better understanding of how humans orient in the visual environment.
Affiliation(s)
- Jérôme Tagu
- Icelandic Vision Laboratory, Faculty of Psychology, School of Health Sciences, University of Iceland, Reykjavík, Iceland
- Árni Kristjánsson
- Icelandic Vision Laboratory, Faculty of Psychology, School of Health Sciences, University of Iceland, Reykjavík, Iceland
- School of Psychology, National Research University Higher School of Economics, Moscow, Russia
114
Stilwell BT, Egeth H, Gaspelin N. Electrophysiological Evidence for the Suppression of Highly Salient Distractors. J Cogn Neurosci 2022; 34:787-805. PMID: 35104346; DOI: 10.1162/jocn_a_01827.
Abstract
There has been a longstanding debate as to whether salient stimuli have the power to involuntarily capture attention. As a potential resolution to this debate, the signal suppression hypothesis proposes that salient items generate a bottom-up signal that automatically attracts attention, but that salient items can be suppressed by top-down mechanisms to prevent attentional capture. Despite much support, the signal suppression hypothesis has been challenged on the grounds that many prior studies may have used color singletons with relatively low salience that are too weak to capture attention. The current study addressed this by using previous methods to study suppression but increased the set size to improve the relative salience of the color singletons. To assess whether salient distractors captured attention, electrophysiological markers of attentional allocation (the N2pc component) and suppression (the PD component) were measured. The results provided no evidence of attentional capture, but instead indicated suppression of the highly salient singleton distractors, as indexed by the PD component. This suppression occurred even though a computational model of saliency confirmed that the color singleton was highly salient. Altogether, this supports the signal suppression hypothesis and is inconsistent with stimulus-driven models of attentional capture.
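Lateralized components such as the N2pc and PD are computed as contralateral-minus-ipsilateral difference waves at homologous posterior electrodes, relative to the side of the singleton. A minimal sketch with synthetic single-trial data (the electrode pairing and the toy deflection are assumptions for illustration, not the study's data):

```python
import numpy as np

def lateralized_difference(left_elec, right_elec, stim_side):
    """Contralateral-minus-ipsilateral difference wave for one trial.

    left_elec, right_elec : (n_samples,) voltages at homologous posterior
    electrodes (e.g., PO7/PO8); stim_side is 'left' or 'right'.
    By convention, a negative deflection indexes the N2pc (attentional
    allocation) and a positive one indexes the PD (suppression).
    """
    contra = right_elec if stim_side == 'left' else left_elec
    ipsi = left_elec if stim_side == 'left' else right_elec
    return contra - ipsi

rng = np.random.default_rng(1)
n = 200
# Toy trial: identical noise at both sites plus a positive contralateral
# (PD-like) deflection in samples 100-149; the singleton is on the left,
# so the right electrode is contralateral.
base_l = rng.standard_normal(n) * 0.1
base_r = base_l.copy()
base_r[100:150] += 1.0
diff = lateralized_difference(base_l, base_r, 'left')
mean_component = diff[100:150].mean()   # mean amplitude in the component window
```

Averaging such difference waves across trials, and measuring mean amplitude in a component window, is how capture (N2pc) versus suppression (PD) is typically adjudicated.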
115
Yu Y, Qian J, Wu Q. Visual Saliency via Multiscale Analysis in Frequency Domain and Its Applications to Ship Detection in Optical Satellite Images. Front Neurorobot 2022; 15:767299. PMID: 35095455; PMCID: PMC8793482; DOI: 10.3389/fnbot.2021.767299.
Abstract
This article proposes a bottom-up visual saliency model that uses the wavelet transform to conduct multiscale analysis and computation in the frequency domain. First, we compute the multiscale magnitude spectra by performing a wavelet transform to decompose the magnitude spectrum of the discrete cosine coefficients of an input image. Next, we obtain multiple saliency maps of different spatial scales through an inverse transformation from the frequency domain to the spatial domain, which utilizes the discrete cosine magnitude spectra after multiscale wavelet decomposition. Then, we employ an evaluation function to automatically select the two best multiscale saliency maps. A final saliency map is generated via an adaptive integration of the two selected multiscale saliency maps. The proposed model is fast, efficient, and can simultaneously detect salient regions or objects of different sizes. It outperforms state-of-the-art bottom-up saliency approaches in the experiments of psychophysical consistency, eye fixation prediction, and saliency detection for natural images. In addition, the proposed model is applied to automatic ship detection in optical satellite images. Ship detection tests on visible-spectrum optical satellite data not only demonstrate our saliency model's effectiveness in detecting small and large salient targets but also verify its robustness against various sea background disturbances.
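The article's model wavelet-decomposes the DCT magnitude spectrum at multiple scales, which is beyond a short sketch; as a simplified single-scale relative of such frequency-domain saliency methods, a phase-only (spectral-whitening) map can be computed with plain FFTs. Everything below (the toy "ship" image, the blur width) is an illustrative assumption, not the authors' algorithm:

```python
import numpy as np

def phase_spectrum_saliency(image, sigma=3.0):
    """Single-scale frequency-domain saliency: keep only the phase of the
    Fourier spectrum, invert, square, and Gaussian-blur. Flattening the
    magnitude spectrum suppresses repetitive backgrounds (e.g., sea
    texture) and highlights spectrally 'novel' regions."""
    f = np.fft.fft2(image)
    recon = np.fft.ifft2(np.exp(1j * np.angle(f)))   # phase-only reconstruction
    sal = np.abs(recon) ** 2
    # Blur via a Gaussian transfer function in the frequency domain
    h, w = sal.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    gauss = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fy ** 2 + fx ** 2))
    sal = np.real(np.fft.ifft2(np.fft.fft2(sal) * gauss))
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)

# Toy scene: flat background with one small bright square (the "ship")
img = np.zeros((64, 64))
img[30:34, 40:44] = 1.0
sal = phase_spectrum_saliency(img)
peak = np.unravel_index(sal.argmax(), sal.shape)
```

Thresholding `sal` would then localize candidate ships; the published model replaces this single whitening step with a multiscale wavelet analysis of the DCT magnitude spectrum.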
Collapse
Affiliation(s)
- Ying Yu
- School of Information Science and Engineering, Yunnan University, Kunming, China
| | | | | |
Collapse
|
116
|
Miuccio MT, Zelinsky GJ, Schmidt J. Are all real-world objects created equal? Estimating the "set-size" of the search target in visual working memory. Psychophysiology 2022; 59:e13998. [PMID: 35001411 PMCID: PMC8957527 DOI: 10.1111/psyp.13998] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2021] [Revised: 11/23/2021] [Accepted: 12/16/2021] [Indexed: 11/30/2022]
Abstract
Are all real-world objects created equal? Visual search difficulty increases with the number of targets and as target-related visual working memory (VWM) load increases. Our goal was to investigate the load imposed by individual real-world objects held in VWM in the context of search. Measures of visual clutter attempt to quantify real-world set-size in the context of scenes. We applied one of these measures, the number of proto-objects, to individual real-world objects and used contralateral delay activity (CDA) to measure the resulting VWM load. The current study presented a real-world object as a target cue, followed by a delay during which CDA was measured, and then a four-object search array. We compared CDA and subsequent search performance between target cues containing a high or low number of proto-objects. High proto-object target cues resulted in greater CDA, longer search RTs and target dwell times, and reduced search guidance, relative to low proto-object targets. These findings demonstrate that targets with more proto-objects result in a higher VWM load and reduced search performance. This shows that the number of proto-objects contained within individual objects produces set-size-like effects in VWM and suggests that proto-objects may be a viable unit of measure of real-world VWM load. Importantly, this demonstrates that not all real-world objects are created equal.
Collapse
Affiliation(s)
- Michael T Miuccio
- Department of Psychology, University of Central Florida, Orlando, Florida, USA
| | - Gregory J Zelinsky
- Department of Psychology, Stony Brook University, Stony Brook, New York, USA.,Department of Computer Science, Stony Brook University, Stony Brook, New York, USA
| | - Joseph Schmidt
- Department of Psychology, University of Central Florida, Orlando, Florida, USA
| |
Collapse
|
117
|
Satmarean TS, Milne E, Rowe R. Working memory guidance of visual attention to threat in offenders. PLoS One 2022; 17:e0261882. [PMID: 34995301 PMCID: PMC8741051 DOI: 10.1371/journal.pone.0261882] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2021] [Accepted: 12/13/2021] [Indexed: 11/18/2022] Open
Abstract
Aggression and trait anger have been linked to attentional biases toward angry faces and attribution of hostile intent in ambiguous social situations. Memory and emotion play a crucial role in social-cognitive models of aggression but their mechanisms of influence are not fully understood. Combining a memory task and a visual search task, this study investigated the guidance of attention allocation toward naturalistic face targets during visual search by visual working memory (WM) templates in 113 participants who self-reported having served a custodial sentence. Searches were faster when angry faces were held in working memory regardless of the emotional valence of the visual search target. Higher aggression and trait anger predicted increased working memory modulated attentional bias. These results are consistent with the Social-Information Processing model, demonstrating that internal representations bias attention allocation to threat and that the bias is linked to aggression and trait anger.
Collapse
Affiliation(s)
- Tamara S. Satmarean
- Department of Psychology, University of Sheffield, Sheffield, United Kingdom
| | - Elizabeth Milne
- Department of Psychology, University of Sheffield, Sheffield, United Kingdom
| | - Richard Rowe
- Department of Psychology, University of Sheffield, Sheffield, United Kingdom
| |
Collapse
|
118
|
Nuthmann A, Canas-Bajo T. Visual search in naturalistic scenes from foveal to peripheral vision: A comparison between dynamic and static displays. J Vis 2022; 22:10. [PMID: 35044436 PMCID: PMC8802022 DOI: 10.1167/jov.22.1.10] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2021] [Accepted: 12/03/2021] [Indexed: 11/24/2022] Open
Abstract
How important foveal, parafoveal, and peripheral vision are depends on the task. For object search and letter search in static images of real-world scenes, peripheral vision is crucial for efficient search guidance, whereas foveal vision is relatively unimportant. Extending this research, we used gaze-contingent Blindspots and Spotlights to investigate visual search in complex dynamic and static naturalistic scenes. In Experiment 1, we used dynamic scenes only, whereas in Experiments 2 and 3, we directly compared dynamic and static scenes. Each scene contained a static, contextually irrelevant target (i.e., a gray annulus). Scene motion was not predictive of target location. For dynamic scenes, the search-time results from all three experiments converge on the novel finding that neither foveal nor central vision was necessary to attain normal search proficiency. Since motion is known to attract attention and gaze, we explored whether guidance to the target was equally efficient in dynamic as compared to static scenes. We found that the very first saccade was guided by motion in the scene. This was not the case for subsequent saccades made during the scanning epoch, representing the actual search process. Thus, effects of task-irrelevant motion were fast-acting and short-lived. Furthermore, when motion was potentially present (Spotlights) or absent (Blindspots) in foveal or central vision only, we observed differences in verification times for dynamic and static scenes (Experiment 2). When using scenes with greater visual complexity and more motion (Experiment 3), however, the differences between dynamic and static scenes were much reduced.
Collapse
Affiliation(s)
- Antje Nuthmann
- Institute of Psychology, Kiel University, Kiel, Germany
- Psychology Department, School of Philosophy, Psychology and Language Sciences, University of Edinburgh, Edinburgh, UK
- http://orcid.org/0000-0003-3338-3434
| | - Teresa Canas-Bajo
- Vision Science Graduate Group, University of California, Berkeley, Berkeley, CA, USA
- Psychology Department, School of Philosophy, Psychology and Language Sciences, University of Edinburgh, Edinburgh, UK
| |
Collapse
|
119
|
Phasic Alertness is Unaffected by the Attentional Set for Orienting. J Cogn 2022; 5:46. [PMID: 36304587 PMCID: PMC9541150 DOI: 10.5334/joc.242] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2022] [Accepted: 09/19/2022] [Indexed: 11/05/2022] Open
Abstract
Warning stimuli that precede behaviourally relevant target stimuli improve performance, an effect referred to as phasic alerting. Similar benefits occur due to preceding orienting cues that draw spatial attention to the targets. It has long been assumed that alerting and orienting effects arise from separate attention systems, but recent views call this into question. As it stands, it remains unclear if the two systems are interdependent, or if they function independently. Here, we investigated whether the current attentional set for orienting modulates the effectiveness of alerting. In three experiments, participants classified visual stimuli in a speeded fashion. These target stimuli were preceded by orienting cues that could predict the target’s location, by alerting cues that were neutral regarding the target’s location, or by no cues. Alerting cues and orienting cues consisted of the same visual stimuli, linking alerting cues with the attentional set for orienting. The attentional set for orienting was manipulated in blocks, in which orienting cues were either informative or uninformative about the target’s location. Results showed that while alerting generally enhanced performance, alerting was unaffected by the informativeness of the orienting cues. These findings show that alerting does not depend on the attentional set that controls orienting based on the informational value of orienting cues. As such, the findings provide a simple dissociation of mechanisms underlying phasic alertness and spatial attentional orienting.
Collapse
|
120
|
Ren Z, Yu SX, Whitney D. Controllable Medical Image Generation via GAN. JOURNAL OF PERCEPTUAL IMAGING 2022; 5:0005021-50215. [PMID: 37621378 PMCID: PMC10448967 DOI: 10.2352/j.percept.imaging.2022.5.000502] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/26/2023]
Abstract
Medical image data is critically important for a range of disciplines, including medical image perception research, clinician training programs, and computer vision algorithms, among many other applications. Authentic medical image data, unfortunately, is relatively scarce for many of these uses. Because of this, researchers often collect their own data in nearby hospitals, which limits the generalizability of the data and findings. Moreover, even when larger datasets become available, they are of limited use because of the necessary data processing procedures such as de-identification, labeling, and categorizing, which require significant time and effort. Thus, in some applications, including behavioral experiments on medical image perception, researchers have used naive artificial medical images (e.g., shapes or textures that are not realistic). These artificial medical images are easy to generate and manipulate, but the lack of authenticity inevitably raises questions about the applicability of the research to clinical practice. Recently, with the great progress in Generative Adversarial Networks (GAN), authentic images can be generated with high quality. In this paper, we propose to use GANs to generate authentic medical images for medical imaging studies. We also adopt a controllable method to manipulate the generated image attributes such that these images can satisfy any arbitrary experimenter goals, tasks, or stimulus settings. We have tested the proposed method on various medical image modalities, including mammogram, MRI, CT, and skin cancer images. The generated authentic medical images verify the success of the proposed method. The model and generated images could be employed in any medical image perception research.
Collapse
Affiliation(s)
- Zhihang Ren
- Vision Science Graduate Group, University of California, Berkeley, CA 94720, United States of America
- International Computer Science Institute, Berkeley, CA 94720, United States of America
| | - Stella X Yu
- Vision Science Graduate Group, University of California, Berkeley, CA 94720, United States of America
- International Computer Science Institute, Berkeley, CA 94720, United States of America
| | - David Whitney
- Vision Science Graduate Group, University of California, Berkeley, CA 94720, United States of America
- International Computer Science Institute, Berkeley, CA 94720, United States of America
- Department of Psychology, University of California, Berkeley CA 94720, United States of America
- Helen Wills Neuroscience Institute, University of California, Berkeley CA 94720, United States of America
| |
Collapse
|
121
|
Franconeri SL, Padilla LM, Shah P, Zacks JM, Hullman J. The Science of Visual Data Communication: What Works. Psychol Sci Public Interest 2021; 22:110-161. [PMID: 34907835 DOI: 10.1177/15291006211051956] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Effectively designed data visualizations allow viewers to use their powerful visual systems to understand patterns in data across science, education, health, and public policy. But ineffectively designed visualizations can cause confusion, misunderstanding, or even distrust, especially among viewers with low graphical literacy. We review research-backed guidelines for creating effective and intuitive visualizations oriented toward communicating data to students, coworkers, and the general public. We describe how the visual system can quickly extract broad statistics from a display, whereas poorly designed displays can lead to misperceptions and illusions. Extracting global statistics is fast, but comparing between subsets of values is slow. Effective graphics avoid taxing working memory, guide attention, and respect familiar conventions. Data visualizations can play a critical role in teaching and communication, provided that designers tailor those visualizations to their audience.
Collapse
Affiliation(s)
| | - Lace M Padilla
- Department of Cognitive and Information Sciences, University of California, Merced
| | - Priti Shah
- Department of Psychology, University of Michigan
| | - Jeffrey M Zacks
- Department of Psychological & Brain Sciences, Washington University in St. Louis
| | | |
Collapse
|
122
|
Rhodes RE, Cowley HP, Huang JG, Gray-Roncal W, Wester BA, Drenkow N. Benchmarking Human Performance for Visual Search of Aerial Images. Front Psychol 2021; 12:733021. [PMID: 34970183 PMCID: PMC8713551 DOI: 10.3389/fpsyg.2021.733021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2021] [Accepted: 11/08/2021] [Indexed: 12/05/2022] Open
Abstract
Aerial images are frequently used in geospatial analysis to inform responses to crises and disasters but can pose unique challenges for visual search when they have low resolution, degraded color information, and small object sizes. Aerial image analysis is often performed by humans, but machine learning approaches are being developed to complement manual analysis. To date, however, relatively little work has explored how humans perform visual search on these tasks, and understanding this could ultimately help enable human-machine teaming. We designed a set of studies to understand what features of an aerial image make visual search difficult for humans and what strategies humans use when performing these tasks. Across two experiments, we tested human performance on a counting task with a series of aerial images and examined the influence of features such as target size, location, color, clarity, and number of targets on accuracy and search strategies. Both experiments presented trials consisting of an aerial satellite image; participants were asked to find all instances of a search template in the image. Target size was consistently a significant predictor of performance, influencing not only accuracy of selections but the order in which participants selected target instances in the trial. Experiment 2 demonstrated that the clarity of the target instance and the match between the color of the search template and the color of the target instance also predicted accuracy. Furthermore, color also predicted the order of selecting instances in the trial. These experiments establish not only a benchmark of typical human performance on visual search of aerial images but also identify several features that can influence the task difficulty level for humans. These results have implications for understanding human visual search on real-world tasks and when humans may benefit from automated approaches.
Collapse
Affiliation(s)
- Rebecca E. Rhodes
- Johns Hopkins University Applied Physics Laboratory, Laurel, MD, United States
| | | | | | | | | | - Nathan Drenkow
- Johns Hopkins University Applied Physics Laboratory, Laurel, MD, United States
| |
Collapse
|
123
|
Moon A, He C, Ditta AS, Cheung OS, Wu R. Rapid category selectivity for animals versus man-made objects: An N2pc study. Int J Psychophysiol 2021; 171:20-28. [PMID: 34856220 DOI: 10.1016/j.ijpsycho.2021.11.004] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2020] [Revised: 08/24/2021] [Accepted: 11/25/2021] [Indexed: 10/19/2022]
Abstract
Visual recognition occurs rapidly at multiple categorization levels, including the superordinate level (e.g., animal), basic level (e.g., cat), or exemplar level (e.g., my cat). Visual search for animals is faster than for man-made objects, even when the images from those categories have comparable gist statistics (i.e., low- or mid-level visual information), which suggests that higher-level, conceptual influences may support this search advantage for animals. However, it remains unclear whether the search advantage can be explained in part by early visual search processes via the N2pc ERP component, which emerges earlier than behavioral responses, across different categorization levels. Participants searched for 1) an exact image (e.g., a specific squirrel image, Exemplar-level Search), 2) any images of an item (e.g., any squirrels, Basic-level Search), or 3) any items in a category (e.g., any animals, Superordinate-level Search). In addition to Target Present trials, Foil trials measured involuntary attentional selection of task-irrelevant images related to the targets (e.g., other squirrel images when searching for a specific squirrel image, or other animals when searching for squirrels). ERP results revealed 1) a larger N2pc amplitude during Foil trials in Exemplar-level Search for animals than man-made objects, and 2) faster onset latencies for animal search than man-made object search across all categorization levels. These results suggest that the search advantage for animals over man-made objects emerges early, and that attentional selection is more biased toward the basic-level (e.g., squirrel) for animals than for man-made objects during visual search.
Collapse
Affiliation(s)
- Austin Moon
- Department of Psychology, University of California, Riverside, United States of America.
| | - Chenxi He
- INSERM, U992, Cognitive Neuroimaging Unit, Gif/Yvette, France
| | - Annie S Ditta
- Department of Psychology, University of California, Riverside, United States of America
| | - Olivia S Cheung
- Department of Psychology, Division of Science, New York University Abu Dhabi, United Arab Emirates
| | - Rachel Wu
- Department of Psychology, University of California, Riverside, United States of America
| |
Collapse
|
124
|
Fryburg DA. What's Playing in Your Waiting Room? Patient and Provider Stress and the Impact of Waiting Room Media. J Patient Exp 2021; 8:23743735211049880. [PMID: 34869835 PMCID: PMC8641118 DOI: 10.1177/23743735211049880] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Patients enter the healthcare space shouldering a lot of personal stress. Concurrently, health care providers and staff are managing their own personal stressors as well as workplace stressors. As stress can negatively affect the patient-provider experience and cognitive function of both individuals, it is imperative to try to uplift the health care environment for all. Part of the healthcare environmental psychology strategy to reduce stress often includes televisions in waiting rooms, cafeterias, and elsewhere, with the intent to distract the viewer and make waiting easier. Although well-intentioned, many select programming which can induce stress (e.g., news). In contrast, as positive media can induce desirable changes in mood, it is possible to use it to decrease stress and uplift viewers, including staff. Positive media includes both nature media, which can relax and calm viewers, and kindness media, which uplifts viewers, induces calm, and promotes interpersonal connection and generosity. Careful consideration of waiting room media can affect the patient-provider experience.
Collapse
|
125
|
van der Horst F, Snell J, Theeuwes J. Enhancing banknote authentication by guiding attention to security features and manipulating prevalence expectancy. Cogn Res Princ Implic 2021; 6:73. [PMID: 34773512 PMCID: PMC8590640 DOI: 10.1186/s41235-021-00341-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2021] [Accepted: 10/22/2021] [Indexed: 11/10/2022] Open
Abstract
All banknotes have security features which are intended to help determine whether they are false or genuine. Typically, however, the general public has limited knowledge of where on a banknote these security features can be found. Here, we tested whether counterfeit detection can be improved with the help of salient elements, designed to guide bottom-up visuospatial attention. We also tested the influence of the participant's a priori level of trust in the authenticity of the banknote. In an online study (N = 422), a demographically diverse panel of Dutch participants distinguished genuine banknotes from banknotes with one (left- or right-sided) counterfeited security feature. Either normal banknotes (without novel design elements) or banknotes that contained a salient element (a pink rectangular frame) were presented for 1 s. To manipulate the participant's level of trust, trials were administered in three blocks, whereby at the start of each block, participants were instructed that either one third, one half, or two thirds of the upcoming banknotes were counterfeit (though the true ratio was always 1:1). We hypothesized (i) that in the presence of a salient element, counterfeits would be better detected when the location of the salient element aligned with the location of the counterfeited security feature, i.e., that it would act as an attentional cue; and (ii) that this effect would be stronger with lower trust. Our hypotheses were partly confirmed: counterfeit detection improved with 'valid cues' and decreasing trust, but the level of trust did not modulate the cueing effect. As the overall detection performance was rather poor, we replicated the study with a sample of university students (N = 66), this time presenting stimuli until response. While indeed observing better overall performance, all other patterns were replicated. Our results provide evidence that attention can be guided to enhance banknote authentication.
Collapse
Affiliation(s)
| | - Joshua Snell
- Department of Experimental and Applied Psychology Vrije Universiteit, Amsterdam, The Netherlands
- Institute of Brain and Behavior Amsterdam (iBBA), Amsterdam, The Netherlands
| | - Jan Theeuwes
- Department of Experimental and Applied Psychology Vrije Universiteit, Amsterdam, The Netherlands
- Institute of Brain and Behavior Amsterdam (iBBA), Amsterdam, The Netherlands
| |
Collapse
|
126
|
Kanaan M, Moacdieh NM. How do we react to cluttered displays? Evidence from the first seconds of visual search in websites. ERGONOMICS 2021; 64:1452-1464. [PMID: 33957850 DOI: 10.1080/00140139.2021.1927200] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/27/2019] [Accepted: 05/03/2021] [Indexed: 06/12/2023]
Abstract
Display clutter is known to degrade search performance and lead to differences in eye movement measures in different contexts. The goal of this study was to determine whether these differences in eye movements could be detected in the first few seconds of a search task using a realistic display, both with or without time pressure. Participants were asked to search for image or word targets in 40 website screenshots. Time pressure was introduced for half the trials. Clutter algorithms were used to classify the websites as low- or high-clutter. Performance, subjective, and eye-tracking metrics were collected. Results showed that people's attention allocation within the first 3 s of search is different when viewing high-clutter websites. In particular, people's spread of attention was larger in high-clutter websites. The results can be used to detect whether a person is struggling with clutter early on after they view a display. Practitioner summary: Eye-tracking metrics showed that people react differently to a cluttered website in a variety of conditions. These differences were evident within the first 3 s of the search. The eye-tracking metrics identified can be used to detect people struggling with clutter as soon as they look at a website.
Collapse
Affiliation(s)
- Malk Kanaan
- Department of Industrial Engineering and Management, American University of Beirut, Beirut, Lebanon
| | - Nadine Marie Moacdieh
- Department of Industrial Engineering and Management, American University of Beirut, Beirut, Lebanon
| |
Collapse
|
127
|
Lautenschlager S. True colours or red herrings?: colour maps for finite-element analysis in palaeontological studies to enhance interpretation and accessibility. ROYAL SOCIETY OPEN SCIENCE 2021; 8:211357. [PMID: 34804580 PMCID: PMC8596014 DOI: 10.1098/rsos.211357] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/21/2021] [Accepted: 10/21/2021] [Indexed: 06/13/2023]
Abstract
Accessibility is a key aspect for the presentation of research data. In palaeontology, new data is routinely obtained with computational techniques, such as finite-element analysis (FEA). FEA is used to calculate stress and deformation in objects when subjected to external forces. Results are displayed using contour plots in which colour information is used to convey the underlying biomechanical data. The Rainbow colour map is nearly exclusively used for these contour plots in palaeontological studies. However, numerous studies in other disciplines have shown the Rainbow map to be problematic due to uneven colour representation and its inaccessibility for those with colour vision deficiencies. Here, different colour maps were tested for their accuracy in representing values of FEA models. Differences in stress magnitudes (ΔS) and colour values (ΔE) of subsequent points from the FEA models were compared and their correlation was used as a measure of accuracy. The results confirm that the Rainbow colour map is not well suited to represent the underlying stress distribution of FEA models, whereas other colour maps showed higher discriminative power. As the performance of the colour maps varied with the tested scenarios and stress types, it is recommended to use different colour maps for specific purposes.
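The accuracy measure the abstract describes, correlating differences in stress magnitudes (ΔS) with differences in colour values (ΔE), can be illustrated with a small sketch. This is not the study's code: random value pairs stand in for FEA stresses, and Euclidean RGB distance is a crude stand-in for a perceptual ΔE; both are assumptions.

```python
import numpy as np
from matplotlib import colormaps

def colour_accuracy(cmap_name, n_pairs=5000, seed=0):
    """Correlate value differences (a stand-in for stress differences,
    dS) with RGB colour distances (a crude stand-in for dE) for one
    colour map; higher correlation = more faithful encoding."""
    rng = np.random.default_rng(seed)
    u, v = rng.random(n_pairs), rng.random(n_pairs)
    cmap = colormaps[cmap_name]
    dS = np.abs(u - v)
    dE = np.linalg.norm(cmap(u)[:, :3] - cmap(v)[:, :3], axis=1)
    return np.corrcoef(dS, dE)[0, 1]

r_rainbow = colour_accuracy("jet")      # a common Rainbow-style map
r_uniform = colour_accuracy("viridis")  # a perceptually uniform map
```

Under this toy measure, a perceptually uniform map such as viridis should track ΔS more faithfully than a Rainbow-style map, mirroring the paper's finding that Rainbow has lower discriminative power.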
Collapse
Affiliation(s)
- Stephan Lautenschlager
- School of Geography, Earth and Environmental Sciences, University of Birmingham, Birmingham, UK
| |
Collapse
|
128
|
Phyo Wai AA, Ern Tchen J, Guan C. A Study of Visual Search based Calibration Protocol for EEG Attention Detection. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:5792-5795. [PMID: 34892436 DOI: 10.1109/embc46164.2021.9631083] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Attention, a multi-faceted cognitive process, is essential in our daily lives. We can measure visual attention using an EEG brain-computer interface to detect different levels of attention in gaming, performance training, and clinical applications. In conventional attention calibration, the Flanker task is used to capture EEG data for the attentive class; for the inattentive class, subjects are instructed not to focus on any specific position on the screen. Attention levels are then classified using a binary classifier trained on these surrogate ground-truth classes. However, subjects may not be in the desired attentional state when performing repetitive, boring activities over a long experiment. In this paper, we propose attention calibration protocols that use a visual search task with simultaneous audio directional changes as the 'attentive' condition and static white noise as the 'inattentive' condition. To compare the proposed calibrations against baselines, we collected data from sixteen healthy subjects. For a fair comparison of classification performance, we used six basic EEG band-power features with a standard binary classifier. With the new calibration protocol, we achieved a mean subject accuracy of 74.37 ± 6.56%, about 3.73 ± 2.49% higher than the baseline, although the difference was not statistically significant. According to post-experiment survey results, the new calibrations are more effective at inducing the desired perceived attention levels. Based on these promising results, we will improve the calibration protocols with reliable attention classifier modeling to enable better attention recognition.
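A minimal sketch of the classification pipeline the abstract describes (six band-power features feeding a binary classifier) might look as follows. The band edges, the nearest-centroid classifier, and the synthetic "attentive" beta boost are all assumptions for illustration, not details from the paper.

```python
import numpy as np
from scipy.signal import welch

# Six canonical EEG bands (Hz); the paper does not list its exact edges.
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "low_beta": (13, 20), "high_beta": (20, 30), "gamma": (30, 45)}

def band_powers(sig, fs=256):
    """One feature vector of six log band powers for a 1-D EEG epoch."""
    f, pxx = welch(sig, fs=fs, nperseg=fs)
    return np.array([np.log(pxx[(f >= lo) & (f < hi)].mean())
                     for lo, hi in BANDS.values()])

def fit_centroids(X, y):
    """Stand-in for the paper's 'standard binary classifier': centroids."""
    return {c: X[y == c].mean(axis=0) for c in (0, 1)}

def predict(centroids, x):
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Synthetic demo: 'attentive' epochs carry an extra 25 Hz beta component.
rng = np.random.default_rng(1)
fs, t = 256, np.arange(256) / 256
def epoch(attentive):
    sig = rng.normal(0, 1, t.size)
    if attentive:
        sig += 2 * np.sin(2 * np.pi * 25 * t)
    return sig

X = np.array([band_powers(epoch(i % 2), fs) for i in range(40)])
y = np.array([i % 2 for i in range(40)])
model = fit_centroids(X, y)
acc = np.mean([predict(model, band_powers(epoch(i % 2), fs)) == (i % 2)
               for i in range(20)])
```

On real EEG the two calibration conditions differ far more subtly than this synthetic beta boost, which is why the paper's accuracies sit around 74% rather than near ceiling.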
Collapse
|
129
|
Anderson BA, Kim H, Kim AJ, Liao MR, Mrkonja L, Clement A, Grégoire L. The past, present, and future of selection history. Neurosci Biobehav Rev 2021; 130:326-350. [PMID: 34499927 PMCID: PMC8511179 DOI: 10.1016/j.neubiorev.2021.09.004] [Citation(s) in RCA: 48] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2021] [Revised: 07/08/2021] [Accepted: 09/02/2021] [Indexed: 01/22/2023]
Abstract
The last ten years of attention research have witnessed a revolution, replacing a theoretical dichotomy (top-down vs. bottom-up control) with a trichotomy (biased by current goals, physical salience, and selection history). This third new mechanism of attentional control, selection history, is multifaceted. Some aspects of selection history must be learned over time whereas others reflect much more transient influences. A variety of different learning experiences can shape the attention system, including reward, aversive outcomes, past experience searching for a target, target–non-target relations, and more. In this review, we provide an overview of the historical forces that led to the proposal of selection history as a distinct mechanism of attentional control. We then propose a formal definition of selection history, with concrete criteria, and identify different components of experience-driven attention that fit within this definition. The bulk of the review is devoted to exploring how these different components relate to one another. We conclude by proposing an integrative account of selection history centered on underlying themes that emerge from our review.
Collapse
Affiliation(s)
- Brian A Anderson
- Texas A&M University, College Station, TX, 77843, United States.
| | - Haena Kim
- Texas A&M University, College Station, TX, 77843, United States
| | - Andy J Kim
- Texas A&M University, College Station, TX, 77843, United States
| | - Ming-Ray Liao
- Texas A&M University, College Station, TX, 77843, United States
| | - Lana Mrkonja
- Texas A&M University, College Station, TX, 77843, United States
| | - Andrew Clement
- Texas A&M University, College Station, TX, 77843, United States
| | | |
Collapse
|
130
|
Bröder A, Scharf S, Jekel M, Glöckner A, Franke N. Salience effects in information acquisition: No evidence for a top-down coherence influence. Mem Cognit 2021; 49:1537-1554. [PMID: 34133002 PMCID: PMC8563519 DOI: 10.3758/s13421-021-01188-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/08/2021] [Indexed: 11/08/2022]
Abstract
The Integrated Coherence-Based Decision and Search (iCodes) model proposed by Jekel et al. (Psychological Review, 125 (5), 744-768, 2018) formalizes both decision making and pre-decisional information search as coherence-maximization processes in an interactive network. In addition to bottom-up attribute influences, the coherence of option information exerts a top-down influence on the search processes in this model, predicting the tendency to continue information search with the currently most attractive option. This hallmark "attraction search effect" (ASE) has been demonstrated in several studies. In three experiments with 250 participants altogether, a more subtle prediction of an extended version of iCodes including exogenous influence factors was tested: The salience of information is assumed to have both a direct (bottom-up) and an indirect (top-down) effect on search, the latter driven by the match between information valence and option attractiveness. The results of the experiments largely agree in (1) showing a strong ASE, (2) demonstrating a bottom-up salience effect on search, but (3) suggesting the absence of the hypothesized indirect top-down salience effect. Hence, only two of three model predictions were confirmed. Implications for various implementations of exogenous factors in the iCodes model are discussed.
Collapse
Affiliation(s)
- Arndt Bröder
- School of Social Sciences, University of Mannheim, 68131, Mannheim, Germany.
| | - Sophie Scharf
- School of Social Sciences, University of Mannheim, 68131, Mannheim, Germany
| | - Marc Jekel
- Department of Psychology, University of Cologne, Cologne, Germany
| | - Andreas Glöckner
- Department of Psychology, University of Cologne, Cologne, Germany
| | - Nicole Franke
- Department of Psychology, University of Hagen, Hagen, Germany
| |
Collapse
|
131
|
Cooper PS, Baillet S, Maroun REK, Chong TTJ. Over the rainbow: Guidelines for meaningful use of colour maps in neurophysiology. Neuroimage 2021; 245:118628. [PMID: 34637902 DOI: 10.1016/j.neuroimage.2021.118628] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2021] [Revised: 07/16/2021] [Accepted: 09/28/2021] [Indexed: 10/20/2022] Open
Abstract
Visualization of complex data is commonplace in neurophysiology research. Here, we highlight specific perceptual issues related to the ongoing misuse of variations of the rainbow colour scheme, with a particular emphasis on time-frequency decompositions in electrophysiology as an illustrative example. We review the risks of biased interpretation of neurophysiological data in this context, and provide guidelines to improve the use of colour maps to visualise complex, multidimensional data in neurophysiology research.
Collapse
Affiliation(s)
- Patrick S Cooper
- Turner Institute for Brain and Mental Health, Monash University, Victoria 3800, Australia; Melbourne School of Psychological Sciences, University of Melbourne, Victoria 3010, Australia.
| | - Sylvain Baillet
- Montreal Neurological Institute, McGill University, Québec H3A 2B4, Canada
| | | | - Trevor T-J Chong
- Turner Institute for Brain and Mental Health, Monash University, Victoria 3800, Australia; Department of Neurology, Alfred Health, Melbourne, Victoria 3004, Australia; Department of Clinical Neurosciences, St Vincent's Hospital, Victoria 3065, Australia
| |
Collapse
|
132
|
Does feature intertrial priming guide attention? The jury is still out. Psychon Bull Rev 2021; 29:369-393. [PMID: 34625924 DOI: 10.3758/s13423-021-01997-8] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/20/2021] [Indexed: 11/08/2022]
Abstract
Our search performance is strongly influenced by our past experience. In the lab, this influence has been demonstrated by investigating a variety of phenomena, including intertrial priming, statistical learning, and reward history, collectively referred to as selection history. The resulting findings have led researchers to claim that selection history guides attention, thereby challenging the prevailing dichotomy, according to which top-down and bottom-up factors alone determine attentional priority. Here, we re-examine this claim with regard to one selection-history phenomenon, feature intertrial priming (aka priming of pop-out). We evaluate the evidence that specifically pertains to the role of feature intertrial priming in attentional guidance, rather than in later selective processes occurring after the target is found. We distinguish between the main experimental rationales, while considering the extent to which feature intertrial priming, as studied through different protocols, shares characteristics of top-down attention. We show that there is strong evidence that feature intertrial priming guides attention when the experimental protocol departs from the canonical paradigm and encourages observers to maintain the critical feature in visual working memory or to form expectations about the upcoming target. By contrast, the current evidence regarding the standard feature intertrial priming phenomenon is inconclusive. We propose directions for future research and suggest that applying the methodology used here to re-evaluate the role of other selection-history phenomena in attentional guidance should clarify the mechanisms underlying the strong impact of past experience on visual search performance.
Collapse
|
133
|
Peacock CE, Cronin DA, Hayes TR, Henderson JM. Meaning and expected surfaces combine to guide attention during visual search in scenes. J Vis 2021; 21:1. [PMID: 34609475 PMCID: PMC8496418 DOI: 10.1167/jov.21.11.1] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2021] [Accepted: 09/02/2021] [Indexed: 11/24/2022] Open
Abstract
How do spatial constraints and meaningful scene regions interact to control overt attention during visual search for objects in real-world scenes? To answer this question, we combined novel surface maps of the likely locations of target objects with maps of the spatial distribution of scene semantic content. The surface maps captured likely target surfaces as continuous probabilities. Meaning was represented by meaning maps highlighting the distribution of semantic content in local scene regions. Attention was indexed by eye movements during the search for target objects that varied in the likelihood they would appear on specific surfaces. The interaction between surface maps and meaning maps was analyzed to test whether fixations were directed to meaningful scene regions on target-related surfaces. Overall, meaningful scene regions were more likely to be fixated if they appeared on target-related surfaces than if they appeared on target-unrelated surfaces. These findings suggest that the visual system prioritizes meaningful scene regions on target-related surfaces during visual search in scenes.
Collapse
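The prioritization idea in the abstract above — meaningful regions weighted by the likelihood that the target's surface covers them — can be illustrated with a toy combination rule. The multiplicative rule and the numbers below are assumptions for illustration only; the study tested the meaning-by-surface interaction statistically on eye-movement data rather than committing to a fixed formula.

```python
import numpy as np

# Toy 1-D "scene" of six regions; values are illustrative probabilities.
meaning = np.array([0.9, 0.2, 0.8, 0.1, 0.7, 0.1])   # semantic density per region
surface = np.array([0.1, 0.1, 0.9, 0.9, 0.1, 0.1])   # target-related surface likelihood

# Assumed combination: meaningful regions on target-related surfaces win.
priority = meaning * surface
priority /= priority.sum()            # normalise to a fixation-probability map

best = int(priority.argmax())
print(best, priority.round(2))        # region 2: meaningful AND on the right surface
```

Region 0 is highly meaningful but off-surface, region 3 is on-surface but meaningless; only region 2 scores high on both, matching the paper's finding that meaningful regions are prioritized chiefly when they lie on target-related surfaces.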
Affiliation(s)
- Candace E Peacock
- Center for Mind and Brain, University of California, Davis, Davis, CA, USA
- Department of Psychology, University of California, Davis, Davis, CA, USA
| | - Deborah A Cronin
- Center for Mind and Brain, University of California, Davis, Davis, CA, USA
| | - Taylor R Hayes
- Center for Mind and Brain, University of California, Davis, Davis, CA, USA
| | - John M Henderson
- Center for Mind and Brain, University of California, Davis, Davis, CA, USA
- Department of Psychology, University of California, Davis, Davis, CA, USA
| |
Collapse
|
134
|
Blakley EC, Gaspelin N, Gerhardstein P. The development of oculomotor suppression of salient distractors in children. J Exp Child Psychol 2021; 214:105291. [PMID: 34607075 DOI: 10.1016/j.jecp.2021.105291] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2020] [Revised: 07/10/2021] [Accepted: 08/25/2021] [Indexed: 10/20/2022]
Abstract
There is considerable evidence that adults can prevent attentional capture by physically salient stimuli via proactive inhibition. A key question is whether young children can also inhibit salient stimuli to prevent visual distraction. The current study directly compared attentional capture in children (mean age = 5.5 years) and adults (mean age = 19.3 years) by measuring overt eye movements. Participants searched for a target shape among heterogeneous distractor shapes and attempted to ignore a salient color singleton distractor. The destination of first saccades was used to assess attentional capture by the salient distractor, providing a more direct index of attentional allocation than prior developmental studies. Adults were able to suppress saccades to the singleton distractor, replicating previous studies. Children, however, demonstrated no such oculomotor suppression; first saccades were equally likely to be directed to the singleton distractor and nonsingleton distractors. Subsequent analyses indicated that children were able to suppress the distractor, but this occurred approximately 550 ms after stimulus presentation. The current results suggest that children possess some level of top-down control over visual attention, but this top-down control is delayed compared with adults. Development of this ability may be related to executive functions, which include goal-directed behavior such as organized search and impulse control as well as preparatory and inhibitory cognitive functions.
Collapse
Affiliation(s)
- Emily C Blakley
- Department of Psychology, Binghamton University, State University of New York, Binghamton, NY 13902, USA.
| | - Nicholas Gaspelin
- Department of Psychology, Binghamton University, State University of New York, Binghamton, NY 13902, USA
| | - Peter Gerhardstein
- Department of Psychology, Binghamton University, State University of New York, Binghamton, NY 13902, USA
| |
Collapse
|
135
|
Salinas E, Stanford TR. Under time pressure, the exogenous modulation of saccade plans is ubiquitous, intricate, and lawful. Curr Opin Neurobiol 2021; 70:154-162. [PMID: 34818614 PMCID: PMC8688226 DOI: 10.1016/j.conb.2021.10.012] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2021] [Revised: 09/29/2021] [Accepted: 10/27/2021] [Indexed: 11/21/2022]
Abstract
The choice of where to look next is determined by both exogenous (bottom-up) and endogenous (top-down) factors, but details of their interaction and distinct contributions to target selection have remained elusive. Recent experiments with urgent choice tasks, in which stimuli are evaluated while motor plans are already advancing, have greatly clarified these contributions. Specifically, exogenous modulations associated with stimulus detection act rapidly and briefly (∼25 ms) to automatically halt and/or boost ongoing motor plans as per spatial congruence rules. These stereotypical modulations explain, in quantitative detail, characteristic features of many saccadic tasks (e.g. antisaccade, countermanding, saccadic-inhibition, gap, and double-step). Thus, the same low-level visuomotor interactions contribute to diverse oculomotor phenomena traditionally attributed to different neural mechanisms.
Collapse
Affiliation(s)
- Emilio Salinas
- Department of Neurobiology & Anatomy, Wake Forest School of Medicine, 1 Medical Center Blvd., Winston-Salem, NC, 27157-1010, USA.
| | - Terrence R Stanford
- Department of Neurobiology & Anatomy, Wake Forest School of Medicine, 1 Medical Center Blvd., Winston-Salem, NC, 27157-1010, USA
| |
Collapse
|
136
|
The relationship between the subjective experience of real-world cognitive failures and objective target-detection performance in visual search. Cognition 2021; 217:104914. [PMID: 34592479 DOI: 10.1016/j.cognition.2021.104914] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2021] [Revised: 09/18/2021] [Accepted: 09/20/2021] [Indexed: 11/24/2022]
Abstract
Visual search is a common occurrence in everyday life, such as searching for the location of keys, identifying a friend in a crowd, or scanning an upcoming intersection for hazards while driving. Visual search is also used in professional contexts, such as medical diagnostic imaging and airport baggage screening. These contexts are often characterised by low-prevalence or rare targets. Here we tested whether individual differences in the detection of targets in visual search could be predicted from variables derived from the rich informational source of participants' subjective experience of their cognitive and attentional function in everyday life. We tested this in both low-prevalence (Experiment 1) and high-prevalence (Experiment 2) visual search conditions. In both experiments, participants completed a visual search with arrays containing multiple photorealistic objects, and their task was to detect the presence of a gun. Following this, they completed the Cognitive Failures Questionnaire (CFQ) and the Attentional Control Scale (ACS). In Experiment 1, the target was present on 2% of trials, while in Experiment 2, it was present on 50%. In both experiments, participants' scores on the False Triggering component of the CFQ were negatively associated with accuracy on target-present trials, while participants' scores on the Forgetfulness component of the CFQ were positively associated with target-present accuracy. These results show that objective performance in visual search can be predicted from subjective experiences of cognitive function. They also highlight that the CFQ is not monolithic. Instead, the CFQ subfactors can have qualitatively different relationships with performance. Theoretical and practical implications are discussed.
Collapse
|
137
|
Liesefeld HR, Liesefeld AM, Müller HJ. Preparatory Control Against Distraction Is Not Feature-Based. Cereb Cortex 2021; 32:2398-2411. [PMID: 34585718 DOI: 10.1093/cercor/bhab341] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2021] [Revised: 08/24/2021] [Accepted: 08/25/2021] [Indexed: 12/20/2022] Open
Abstract
Salient-but-irrelevant stimuli (distractors) co-occurring with search targets can capture attention against the observer's will. Recently, evidence has accumulated that preparatory control can prevent this misguidance of spatial attention in predictable situations. However, the underlying mechanisms have remained elusive. Most pertinent theories assume that attention is guided by specific features. This widespread theoretical claim provides several strong predictions with regard to distractor handling that are disconfirmed here: Employing electrophysiological markers of covert attentional dynamics, in three experiments, we show that distractors standing out by a feature that is categorically different from the target's consistently capture attention. However, equally salient distractors standing out in a different feature dimension are effectively down-weighted, even when they unpredictably swap their defining feature with the target. This shows that preparing for a distractor's feature is neither necessary nor sufficient for successful avoidance of attentional capture. Rather, capture is prevented by preparing for the distractor's feature dimension.
Collapse
Affiliation(s)
- Heinrich R Liesefeld
- Department of Psychology, University of Bremen, Bremen D-28359, Germany.,Department Psychologie, Ludwig-Maximilians-Universität, München D-80802, Germany
| | - Anna M Liesefeld
- Department Psychologie, Ludwig-Maximilians-Universität, München D-80802, Germany
| | - Hermann J Müller
- Department Psychologie, Ludwig-Maximilians-Universität, München D-80802, Germany
| |
Collapse
|
138
|
Liesefeld HR, Liesefeld AM, Müller HJ. Attentional capture: An ameliorable side-effect of searching for salient targets. VISUAL COGNITION 2021. [DOI: 10.1080/13506285.2021.1925798] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Affiliation(s)
- Heinrich R. Liesefeld
- Department of Psychology, University of Bremen, Bremen, Germany
- Department Psychologie, Ludwig-Maximilians-Universität, München, Germany
| | - Anna M. Liesefeld
- Department Psychologie, Ludwig-Maximilians-Universität, München, Germany
| | - Hermann J. Müller
- Department Psychologie, Ludwig-Maximilians-Universität, München, Germany
| |
Collapse
|
139
|
Pitt KM, McCarthy JW. What's in a Photograph? The Perspectives of Composition Experts on Factors Impacting Visual Scene Display Complexity for Augmentative and Alternative Communication and Strategies for Improving Visual Communication. AMERICAN JOURNAL OF SPEECH-LANGUAGE PATHOLOGY 2021; 30:2080-2097. [PMID: 34310201 DOI: 10.1044/2021_ajslp-20-00350] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Purpose Visual scene displays (VSDs) can support augmentative and alternative communication (AAC) success for children and adults with complex communication needs. Static VSDs incorporate contextual photographs that include meaningful events, places, and people. Although the processing of VSDs has been studied, their power as a medium to effectively convey meaning may benefit from the perspective of individuals who regularly engage in visual storytelling. The aim of this study was to evaluate the perspectives of individuals with expertise in photographic and/or artistic composition regarding factors contributing to VSD complexity and how to limit the time and effort required to apply principles of photographic composition. Method Semistructured interviews were completed with 13 participants with expertise in photographic and/or artistic composition. Results Four main themes were noted, including (a) factors increasing photographic image complexity and decreasing cohesion, (b) how complexity impacts the viewer, (c) composition strategies to decrease photographic image complexity and increase cohesion, and (d) strategies to support the quick application of composition strategies in a just-in-time setting. Findings both support and extend existing research regarding best practice for VSD design. Conclusions Findings provide an initial framework for understanding photographic image complexity and how it differs from drawn AAC symbols. Furthermore, findings outline a toolbox of composition principles that may help limit VSD complexity, along with providing recommendations for AAC development to support the quick application of compositional principles to limit burdens associated with capturing photographic images. Supplemental Material https://doi.org/10.23641/asha.15032700.
Collapse
Affiliation(s)
- Kevin M Pitt
- Department of Special Education and Communication Disorders, University of Nebraska-Lincoln
| | - John W McCarthy
- Division of Communication Sciences and Disorders, Ohio University, Athens
| |
Collapse
|
140
|
Li W, Guan J, Shi W. Increasing the load on executive working memory reduces the search performance in the natural scenes: Evidence from eye movements. CURRENT PSYCHOLOGY 2021. [DOI: 10.1007/s12144-021-02270-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
141
|
Hayes TR, Henderson JM. Deep saliency models learn low-, mid-, and high-level features to predict scene attention. Sci Rep 2021; 11:18434. [PMID: 34531484 PMCID: PMC8445969 DOI: 10.1038/s41598-021-97879-z] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2021] [Accepted: 08/31/2021] [Indexed: 02/08/2023] Open
Abstract
Deep saliency models represent the current state-of-the-art for predicting where humans look in real-world scenes. However, for deep saliency models to inform cognitive theories of attention, we need to know how deep saliency models prioritize different scene features to predict where people look. Here we open the black box of three prominent deep saliency models (MSI-Net, DeepGaze II, and SAM-ResNet) using an approach that models the association between attention, deep saliency model output, and low-, mid-, and high-level scene features. Specifically, we measured the association between each deep saliency model and low-level image saliency, mid-level contour symmetry and junctions, and high-level meaning by applying a mixed effects modeling approach to a large eye movement dataset. We found that all three deep saliency models were most strongly associated with high-level and low-level features, but exhibited qualitatively different feature weightings and interaction patterns. These findings suggest that prominent deep saliency models are primarily learning image features associated with high-level scene meaning and low-level image saliency and highlight the importance of moving beyond simply benchmarking performance.
Collapse
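The association analysis sketched in the abstract above — relating a deep saliency model's output map to low-, mid-, and high-level feature maps — can be caricatured with ordinary least squares on one simulated "scene". The paper used mixed-effects models across a large eye-movement dataset, so the feature weights, noise level, and single-scene OLS below are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500                                  # flattened map cells of one simulated scene

# Standardised feature maps: low-level saliency, mid-level contours, high-level meaning.
low, mid, high = rng.standard_normal((3, n))

# Simulated model output: driven mostly by high- and low-level features, plus noise
# (mirroring the qualitative pattern the paper reports).
model_out = 0.6 * high + 0.5 * low + 0.1 * mid + 0.2 * rng.standard_normal(n)

# OLS recovers the per-feature association weights; with many scenes and random
# effects this generalises to the mixed-effects approach used in the paper.
X = np.column_stack([low, mid, high, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, model_out, rcond=None)
print(dict(zip(["low", "mid", "high"], beta[:3].round(2))))
```

The recovered weights rank high- and low-level features above the mid-level one, which is the kind of feature-weighting profile the authors extracted to characterise each saliency model.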
Affiliation(s)
- Taylor R Hayes
- Center for Mind and Brain, University of California, Davis, 95618, USA.
| | - John M Henderson
- Center for Mind and Brain, University of California, Davis, 95618, USA
- Department of Psychology, University of California, Davis, 95616, USA
| |
Collapse
|
142
|
Plewan T, Rinkenauer G. Visual search in virtual 3D space: the relation of multiple targets and distractors. PSYCHOLOGICAL RESEARCH 2021; 85:2151-2162. [PMID: 33388993 PMCID: PMC8357743 DOI: 10.1007/s00426-020-01392-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2020] [Accepted: 07/13/2020] [Indexed: 11/16/2022]
Abstract
Visual search and attentional alignment in 3D space are potentially modulated by information in unattended depth planes. The number of relevant and irrelevant items as well as their spatial relations may be regarded as factors which contribute to such effects. On a behavioral level, it might be different whether multiple distractors are presented in front of or behind target items. However, several studies revealed that attention cannot be restricted to a single depth plane. To further investigate this issue, two experiments were conducted. In the first experiment, participants searched for (multiple) targets in one depth plane, while non-target items (distractors) were simultaneously presented in this or another depth plane. In the second experiment, an additional spatial cue was presented with different validities to highlight the target position. Search durations were generally shorter when the search array contained two additional targets and were markedly longer when three distractors were displayed. The latter effect was most pronounced when a single target and three distractors coincided in the same depth plane and this effect persisted even when the target position was validly cued. The study reveals that the depth relation of target and distractor stimuli was more important than the absolute distance between these objects. Furthermore, the present findings suggest that within an attended depth plane, irrelevant information elicits strong interference. In sum, this study provides further evidence that allocation of attention is a flexible process which may be modulated by a variety of perceptual and cognitive factors.
Collapse
Affiliation(s)
- Thorsten Plewan
- Department of Ergonomics, Leibniz Research Centre for Working Environment and Human Factors Dortmund, Ardeystr. 67, 44139, Dortmund, Germany.
- Psychology School, Hochschule Fresenius - University of Applied Sciences Düsseldorf, Düsseldorf, Germany.
| | - Gerhard Rinkenauer
- Department of Ergonomics, Leibniz Research Centre for Working Environment and Human Factors Dortmund, Ardeystr. 67, 44139, Dortmund, Germany
| |
Collapse
|
143
|
I see what you mean: Semantic but not lexical factors modulate image processing in bilingual adults. Mem Cognit 2021; 50:245-260. [PMID: 34462894 DOI: 10.3758/s13421-021-01229-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/29/2021] [Indexed: 11/08/2022]
Abstract
Bilinguals frequently juggle competing representations from their two languages when they interact with their environment (i.e., nonselective activation). As a result, both first (L1) and second language (L2) communication may be impeded when words share orthographic form but not meaning (i.e., interlingual homographs; e.g., CRANE, a machine in English, a skull in French). Similarly, bilinguals' reduced exposure to each known language makes bilingual lexical processing more vulnerable to larger frequency effects. While much is known about processes within the language system, less is known about how the bilingual language system interacts with the visual system, specifically in the context of image processing. We investigated this by testing whether commonly observed semantic (homograph interference) and lexical (frequency) effects extend to a visual word-image matching task. We tested 48 bilinguals, who were asked to determine whether an image corresponded to a written word that was presented immediately beforehand. By modulating the complexity of visual referents and the semantic (Analysis 1) or lexical (Analysis 2) complexity of word cues, we simultaneously burdened the visual and language systems. The results showed that both semantic and lexical factors modulated response accuracy and correct reaction time on the word-image matching task. Crucially, we observed an interaction between the image factor (visual complexity) with the semantic (homograph status) but not the lexical factor (word frequency). We conclude that it is possible for the language and image processing systems to interact, although the extent to which this occurs depends on the degree of linguistic processing involved.
Collapse
|
144
|
Womelsdorf T, Thomas C, Neumann A, Watson MR, Banaie Boroujeni K, Hassani SA, Parker J, Hoffman KL. A Kiosk Station for the Assessment of Multiple Cognitive Domains and Cognitive Enrichment of Monkeys. Front Behav Neurosci 2021; 15:721069. [PMID: 34512289 PMCID: PMC8426617 DOI: 10.3389/fnbeh.2021.721069] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2021] [Accepted: 07/30/2021] [Indexed: 02/01/2023] Open
Abstract
Nonhuman primates (NHPs) are self-motivated to perform cognitive tasks on touchscreens in their animal housing setting. To leverage this ability, fully integrated hardware and software solutions are needed that work within housing and husbandry routines while also spanning cognitive task constructs of the Research Domain Criteria (RDoC). Here, we detail such an integrated, robust hardware and software solution for running cognitive tasks in cage-housed NHPs with a cage-mounted Kiosk Station (KS-1). KS-1 consists of a frame for mounting flexibly on housing cages, a touchscreen animal interface with mounts for receptacles, reward pumps, and cameras, and a compact computer cabinet with an interface for controlling behavior. Behavioral control is achieved with a Unity3D program that is virtual-reality capable, allowing semi-naturalistic visual tasks to assess multiple cognitive domains. KS-1 is fully integrated into the regular housing routines of monkeys. A single person can operate multiple KS-1s. Monkeys engage with KS-1 at high motivation and cognitive performance levels with high intra-individual consistency. KS-1 is optimized for flexible mounting onto standard apartment cage systems and provides a new design variation complementing existing cage-mounted touchscreen systems. KS-1 has a robust animal interface with options for gaze/reach monitoring. It has an integrated user interface for controlling multiple cognitive tasks using a common naturalistic object space designed to enhance task engagement. All custom KS-1 components are open-sourced. In summary, KS-1 is a versatile new tool for cognitive profiling and cognitive enrichment of cage-housed monkeys. It reliably measures multiple cognitive domains, which promises to advance our understanding of animal cognition, inter-individual differences, and underlying neurobiology in refined, ethologically meaningful behavioral foraging contexts.
Collapse
Affiliation(s)
- Thilo Womelsdorf
- Department of Psychology, Vanderbilt University, Nashville, TN, United States
| | - Christopher Thomas
- Department of Psychology, Vanderbilt University, Nashville, TN, United States
| | - Adam Neumann
- Department of Psychology, Vanderbilt University, Nashville, TN, United States
| | - Marcus R. Watson
- Department of Biology, Centre for Vision Research, York University, Toronto, ON, Canada
| | | | - Seyed A. Hassani
- Department of Psychology, Vanderbilt University, Nashville, TN, United States
| | - Jeremy Parker
- Division of Animal Care, Vanderbilt University Medical Center, Nashville, TN, United States
| | - Kari L. Hoffman
- Department of Psychology, Vanderbilt University, Nashville, TN, United States
| |
Collapse
|
145
|
Carrigan AJ, Stoodley P, Ng K, Moerel D, Wiggins MW. Static versus dynamic medical images: The role of cue utilization in diagnostic performance. APPLIED COGNITIVE PSYCHOLOGY 2021. [DOI: 10.1002/acp.3861] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Affiliation(s)
- Ann J. Carrigan
- Centre for Elite Performance, Expertise and Training, Macquarie University, Sydney, New South Wales, Australia
- Perception in Action Research Centre, Macquarie University, Sydney, New South Wales, Australia
- Department of Psychology, Macquarie University, Sydney, New South Wales, Australia
| | - Paul Stoodley
- School of Medicine, Western Sydney University, Sydney, New South Wales, Australia
- Westmead Private Cardiology, Westmead, New South Wales, Australia
| | - Kenny Ng
- Cardiology Department, Royal North Shore Hospital, Sydney, New South Wales, Australia
| | - Denise Moerel
- Perception in Action Research Centre, Macquarie University, Sydney, New South Wales, Australia
- Department of Cognitive Science, Macquarie University, Sydney, New South Wales, Australia
| | - Mark W. Wiggins
- Centre for Elite Performance, Expertise and Training, Macquarie University, Sydney, New South Wales, Australia
- Department of Psychology, Macquarie University, Sydney, New South Wales, Australia
| |
Collapse
|
146
|
The detail is in the difficulty: Challenging search facilitates rich incidental object encoding. Mem Cognit 2021; 48:1214-1233. [PMID: 32562249 DOI: 10.3758/s13421-020-01051-3] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
When searching for objects in the environment, observers necessarily encounter other, nontarget, objects. Despite their irrelevance for search, observers often incidentally encode the details of these objects, an effect that is exaggerated as the search task becomes more challenging. Although it is well established that searchers create incidental memories for targets, less is known about the fidelity with which nontargets are remembered. Do observers store richly detailed representations of nontargets, or are these memories characterized by gist-level detail, containing only the information necessary to reject the item as a nontarget? We addressed this question across two experiments in which observers completed multiple-target (one to four potential targets) searches, followed by surprise alternative forced-choice (AFC) recognition tests for all encountered objects. To assess the detail of incidentally stored memories, we used similarity rankings derived from multidimensional scaling to manipulate the perceptual similarity across objects in 4-AFC (Experiment 1a) and 16-AFC (Experiments 1b and 2) tests. Replicating prior work, observers recognized more nontarget objects encountered during challenging, relative to easier, searches. More importantly, AFC results revealed that observers stored more than gist-level detail: When search objects were not recognized, observers systematically chose lures with higher perceptual similarity, reflecting partial encoding of the search object's perceptual features. Further, similarity effects increased with search difficulty, revealing that incidental memories for visual search objects are sharpened when the search task requires greater attentional processing.
Collapse
|
147
|
Dreneva A, Shvarts A, Chumachenko D, Krichevets A. Extrafoveal Processing in Categorical Search for Geometric Shapes: General Tendencies and Individual Variations. Cogn Sci 2021; 45:e13025. [PMID: 34379345 PMCID: PMC8459262 DOI: 10.1111/cogs.13025] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2020] [Revised: 06/10/2021] [Accepted: 06/27/2021] [Indexed: 11/29/2022]
Abstract
The paper addresses the capabilities and limitations of extrafoveal processing during categorical visual search. Previous research has established that a target can be identified from the very first saccade, or even without any saccade at all, suggesting that extrafoveal perception is necessarily involved. However, the limits on the complexity of the information that can be processed extrafoveally are still not clear. We performed four experiments with a gradual increase in stimulus complexity to determine the role of extrafoveal processing in searching for a categorically defined geometric shape. The series of experiments demonstrated a significant role of extrafoveal processing when searching for simple two-dimensional shapes, and its gradual decrease in a condition with more complicated three-dimensional shapes. The factors of objects' spatial orientation and distractor homogeneity significantly influenced both reaction time and the number of saccades required to identify a categorically defined target. An analysis of individual p-value distributions revealed pronounced individual differences in the use of extrafoveal analysis and allowed examination of each participant's performance. A condition with forced prohibition of eye movements enabled us to investigate the efficacy of covert attention with the complicated shapes. Our results indicate that foveal and extrafoveal processing are involved simultaneously during categorical search, and that the specifics of their interaction are determined by the objects' spatial orientation, the type of distractors, the prohibition of overt attention, and the individual characteristics of the participants.
Collapse
Affiliation(s)
- Anna Dreneva
- Faculty of Psychology, Lomonosov Moscow State University
| | - Anna Shvarts
- Freudenthal Institute, Faculty of Science, Utrecht University
| | | | | |
Collapse
|
148
|
Assessing how visual search entropy and engagement predict performance in a multiple-objects tracking air traffic control task. COMPUTERS IN HUMAN BEHAVIOR REPORTS 2021. [DOI: 10.1016/j.chbr.2021.100127] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022] Open
|
149
|
Hayes TR, Henderson JM. Looking for Semantic Similarity: What a Vector-Space Model of Semantics Can Tell Us About Attention in Real-World Scenes. Psychol Sci 2021; 32:1262-1270. [PMID: 34252325 PMCID: PMC8726595 DOI: 10.1177/0956797621994768] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2020] [Accepted: 12/23/2020] [Indexed: 11/15/2022] Open
Abstract
The visual world contains more information than we can perceive and understand in any given moment. Therefore, we must prioritize important scene regions for detailed analysis. Semantic knowledge gained through experience is theorized to play a central role in determining attentional priority in real-world scenes, but that role is poorly understood. Here, we examined the relationship between object semantics and attention by combining a vector-space model of semantics with eye movements in scenes. In this approach, the vector-space semantic model served as the basis for a concept map, an index of the spatial distribution of the semantic similarity of objects across a given scene. The results showed a strong positive relationship between the semantic similarity of a scene region and viewers' focus of attention; specifically, greater attention was given to more semantically related scene regions. We conclude that object semantics play a critical role in guiding attention through real-world scenes.
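The concept-map idea in this abstract reduces to a simple computation: embed each object's label as a vector, then score each scene region by how semantically similar its object is to the other objects in the scene. The following is a minimal sketch of that core step only; the toy 3-d embeddings, the object names, and the `semantic_map_value` helper are invented for illustration and are not the authors' model or implementation, which used high-dimensional trained vectors.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-d "semantic" embeddings for objects in a hypothetical kitchen scene;
# a real vector-space model would supply high-dimensional trained vectors.
vectors = {
    "stove":  [0.9, 0.1, 0.0],
    "kettle": [0.8, 0.2, 0.1],
    "sofa":   [0.1, 0.9, 0.2],
}

def semantic_map_value(obj, scene_objects):
    """Mean similarity of one object to every other object in the scene --
    the per-region score a concept map would assign to that object's location."""
    others = [o for o in scene_objects if o != obj]
    return sum(cosine(vectors[obj], vectors[o]) for o in others) / len(others)
```

On these toy vectors the kettle, which fits the kitchen context, receives a higher concept-map score than the out-of-place sofa, mirroring the abstract's finding that attention favors more semantically related scene regions.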
Collapse
Affiliation(s)
| | - John M. Henderson
- Center for Mind and Brain, University of California, Davis
- Department of Psychology, University of California, Davis
| |
Collapse
|
150
|
Abstract
This paper describes Guided Search 6.0 (GS6), a revised model of visual search. When we encounter a scene, we can see something everywhere. However, we cannot recognize more than a few items at a time. Attention is used to select items so that their features can be "bound" into recognizable objects. Attention is "guided" so that items can be processed in an intelligent order. In GS6, this guidance comes from five sources of preattentive information: (1) top-down and (2) bottom-up feature guidance, (3) prior history (e.g., priming), (4) reward, and (5) scene syntax and semantics. These sources are combined into a spatial "priority map," a dynamic attentional landscape that evolves over the course of search. Selective attention is guided to the most active location in the priority map approximately 20 times per second. Guidance will not be uniform across the visual field; it will favor items near the point of fixation. Three types of functional visual fields (FVFs) describe the nature of these foveal biases: a resolution FVF, an FVF governing exploratory eye movements, and an FVF governing covert deployments of attention. To be identified as targets or rejected as distractors, items must be compared to target templates held in memory. The binding and recognition of an attended object is modeled as a diffusion process taking > 150 ms/item. Since selection occurs more frequently than that, it follows that multiple items undergo recognition at the same time, though asynchronously, making GS6 a hybrid of serial and parallel processes. In GS6, if a target is not found, search terminates when an accumulating quitting signal reaches a threshold. The setting of that threshold is adaptive, allowing feedback about performance to shape subsequent searches. Simulation shows that the combination of asynchronous diffusion and a quitting signal can produce the basic patterns of response time and error data from a range of search experiments.
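The timing claims in this abstract (selection roughly 20 times per second, recognition as a diffusion taking > 150 ms per item, an accumulating quitting signal) can be illustrated with a toy simulation. This is a sketch under loose assumptions, not the authors' GS6 implementation: the function name, the jitter on recognition time, and the threshold rule are invented, and the five guidance sources are replaced by a random selection order.

```python
import random

def gs6_trial(set_size, target_present, rng,
              select_period=0.05,   # attention samples the priority map ~20x/second
              recog_time=0.15,      # diffusion/recognition takes > 150 ms per item
              quit_threshold=None): # rejections needed before quitting
    """Toy timing sketch: asynchronous item recognition plus a quitting signal.
    Returns the simulated response time in seconds for one search trial."""
    if quit_threshold is None:
        quit_threshold = set_size  # stand-in for an adaptively set threshold
    order = list(range(set_size))
    rng.shuffle(order)             # random stand-in for guided selection order
    target = 0 if target_present else -1
    pending = []                   # (finish_time, item): diffusions still running
    rejected, launched = 0, 0
    t, next_select = 0.0, 0.0
    while True:
        # launch a new recognition diffusion every select_period
        if launched < set_size and t >= next_select:
            pending.append((t + recog_time + rng.uniform(0.0, 0.05), order[launched]))
            launched += 1
            next_select = t + select_period
        # resolve diffusions asynchronously, as each one finishes
        for entry in list(pending):
            finish, item = entry
            if finish <= t:
                pending.remove(entry)
                if item == target:
                    return t                 # target found
                rejected += 1                # quitting signal accumulates
                if rejected >= quit_threshold:
                    return t                 # give up: respond "absent"
        t += 0.005
```

Because several recognitions are in flight at once, simulated target-present response times grow more slowly with set size than target-absent times, which must wait for the quitting signal: a crude version of the serial/parallel hybrid pattern the abstract describes.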
Collapse
Affiliation(s)
- Jeremy M Wolfe
- Ophthalmology and Radiology, Brigham & Women's Hospital/Harvard Medical School, Cambridge, MA, USA.
- Visual Attention Lab, 65 Landsdowne St, 4th Floor, Cambridge, MA, 02139, USA.
| |
Collapse
|