1
Hoogerbrugge AJ, Strauch C, Nijboer TCW, Van der Stigchel S. Persistent resampling of external information despite 25 repetitions of the same visual search templates. Atten Percept Psychophys 2024; 86:2301-2314. [PMID: 39285145 PMCID: PMC11480145 DOI: 10.3758/s13414-024-02953-z]
Abstract
We commonly load visual working memory minimally when to-be-remembered information remains available in the external world. In visual search, this is characterised by participants frequently resampling previously encoded templates, which helps minimize cognitive effort and improves task performance. If all search templates have been rehearsed many times, they should become strongly represented in memory, possibly eliminating the benefit of reinspections. To test whether repetition indeed leads to less resampling, participants searched for sets of 1, 2, and 4 continuously available search templates. Critically, each unique set of templates was repeated for 25 consecutive trials. Although the number of inspections and inspection durations initially decreased strongly when a template set was repeated, behaviour largely stabilised between the tenth and last repetition: Participants kept resampling templates frequently. In Experiment 2, participants performed the same task, but templates became unavailable after 15 repetitions. Strikingly, accuracy remained high even when templates could not be inspected, suggesting that resampling was not strictly necessary in later repetitions. We further show that seemingly 'excessive' resampling behaviour had no direct within-trial benefit to either speed or accuracy, and did not improve performance on long-term memory tests. Rather, we argue that resampling was partially used to boost metacognitive confidence regarding memory representations. As such, eliminating the benefit of minimizing working memory load does not eliminate the persistence with which we sample information from the external world, although the underlying reason for resampling behaviour may be different.
Affiliation(s)
- Alex J Hoogerbrugge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands.
- Christoph Strauch
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
- Tanja C W Nijboer
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
- Stefan Van der Stigchel
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
2
Moher J, Delos Reyes A, Drew T. Cue relevance drives early quitting in visual search. Cogn Res Princ Implic 2024; 9:54. [PMID: 39183257 PMCID: PMC11345343 DOI: 10.1186/s41235-024-00587-1]
Abstract
Irrelevant salient distractors can trigger early quitting in visual search, causing observers to miss targets they might otherwise find. Here, we asked whether task-relevant salient cues can produce a similar early quitting effect on the subset of trials where those cues fail to highlight the target. We presented participants with a difficult visual search task and used two cueing conditions. In the high-predictive condition, a salient cue in the form of a red circle highlighted the target most of the time a target was present. In the low-predictive condition, the cue was far less accurate and did not reliably predict the target (i.e., the cue was often a false positive). These were contrasted against a control condition in which no cues were presented. In the high-predictive condition, we found clear evidence of early quitting on trials where the cue was a false positive, as evidenced by both increased miss errors and shorter response times on target absent trials. No such effects were observed with low-predictive cues. Together, these results suggest that salient cues which are false positives can trigger early quitting, though perhaps only when the cues have a high-predictive value. These results have implications for real-world searches, such as medical image screening, where salient cues (referred to as computer-aided detection or CAD) may be used to highlight potentially relevant areas of images but are sometimes inaccurate.
Affiliation(s)
- Jeff Moher
- Psychology Department, Connecticut College, 270 Mohegan Avenue, New London, CT, 06320, USA.
3
Davis G. ATLAS: Mapping ATtention's Location And Size to probe five modes of serial and parallel search. Atten Percept Psychophys 2024; 86:1938-1962. [PMID: 38982008 PMCID: PMC11410986 DOI: 10.3758/s13414-024-02921-7]
Abstract
Conventional visual search tasks do not address attention directly and their core manipulation of 'set size' - the number of displayed items - introduces stimulus confounds that hinder interpretation. However, alternative approaches have not been widely adopted, perhaps reflecting their complexity, assumptions, or indirect attention-sampling. Here, a new procedure, the ATtention Location And Size ('ATLAS') task used probe displays to track attention's location, breadth, and guidance during search. Though most probe displays comprised six items, participants reported only the single item they judged themselves to have perceived most clearly - indexing the attention 'peak'. By sampling peaks across variable 'choice sets', the size and position of the attention window during search was profiled. These indices appeared to distinguish narrow- from broad attention, signalled attention to pairs of items where it arose and tracked evolving attention-guidance over time. ATLAS is designed to discriminate five key search modes: serial-unguided, sequential-guided, unguided attention to 'clumps' with local guidance, and broad parallel-attention with or without guidance. This initial investigation used only an example set of highly regular stimuli, but its broader potential should be investigated.
Affiliation(s)
- Gregory Davis
- Department of Psychology, University of Cambridge, Downing Street, Cambridge, CB2 3EB, UK.
4
Walter K, Freeman M, Bex P. Quantifying task-related gaze. Atten Percept Psychophys 2024; 86:1318-1329. [PMID: 38594445 PMCID: PMC11093728 DOI: 10.3758/s13414-024-02883-w]
Abstract
Competing theories attempt to explain what guides eye movements when exploring natural scenes: bottom-up image salience and top-down semantic salience. In one study, we apply language-based analyses to quantify the well-known observation that task influences gaze in natural scenes. Subjects viewed ten scenes as if they were performing one of two tasks. We found that the semantic similarity between the task and the labels of objects in the scenes captured the task-dependence of gaze (t(39) = 13.083; p < 0.001). In another study, we examined whether image salience or semantic salience better predicts gaze during a search task, and if viewing strategies are affected by searching for targets of high or low semantic relevance to the scene. Subjects searched 100 scenes for a high- or low-relevance object. We found that image salience becomes a worse predictor of gaze across successive fixations, while semantic salience remains a consistent predictor (χ2(1, N = 40) = 75.148, p < .001). Furthermore, we found that semantic salience decreased as object relevance decreased (t(39) = 2.304; p = .027). These results suggest that semantic salience is a useful predictor of gaze during task-related scene viewing, and that even in target-absent trials, gaze is modulated by the relevance of a search target to the scene in which it might be located.
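The semantic-similarity measure described above can be illustrated with a minimal sketch: task-to-object similarity is simply the cosine between word vectors for the task label and each object label. The toy embeddings and labels below are invented placeholders, not the vectors or pipeline used by Walter and colleagues.

    import numpy as np

    # Toy 4-dimensional word vectors; a real analysis would use pretrained
    # embeddings (e.g., GloVe or word2vec). Values here are illustrative only.
    embeddings = {
        "cooking":  np.array([0.9, 0.1, 0.3, 0.0]),
        "cleaning": np.array([0.1, 0.8, 0.2, 0.1]),
        "pan":      np.array([0.8, 0.2, 0.4, 0.1]),
        "sponge":   np.array([0.2, 0.9, 0.1, 0.0]),
    }

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    task = "cooking"
    for obj in ("pan", "sponge"):
        sim = cosine_similarity(embeddings[task], embeddings[obj])
        print(f"semantic similarity({task}, {obj}) = {sim:.3f}")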
Affiliation(s)
- Kerri Walter
- Department of Psychology, Northeastern University, Boston, MA, USA.
- Michelle Freeman
- Department of Psychology, Northeastern University, Boston, MA, USA
- Peter Bex
- Department of Psychology, Northeastern University, Boston, MA, USA
5
Sternberg S. Combining reaction-time distributions to conserve shape. Behav Res Methods 2024; 56:1164-1191. [PMID: 37253959 DOI: 10.3758/s13428-023-02084-7]
Abstract
To improve the estimate of the shape of a reaction-time distribution, it is sometimes desirable to combine several samples, drawn from different sessions or different subjects. How should these samples be combined? This paper provides an evaluation of four combination methods, two that are currently in use (the bin-means histogram, often called "Vincentizing", and quantile averaging) and two that are new (linear-transform pooling and shape averaging). The evaluation makes use of a modern method for describing the shape of a distribution, based on L-moments, rather than the traditional method, based on central moments. Also provided is an introduction to shape descriptors based on L-moments, whose advantages over central moments (less bias and less sensitivity to outliers) are demonstrated. Whether traditional or modern shape descriptions are employed, the combination methods currently in use, especially bin-means histograms, prove to be substantially inferior to the new methods. Averaged bin means themselves are less deficient when estimating differences between distribution shapes, as in delta plots, but are nonetheless inferior to linear-transform pooling.
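As a rough illustration of two of the combination methods under discussion, the sketch below contrasts quantile averaging (average the same quantiles across samples) with the bin-means histogram, or "Vincentizing" (average the means of rank-ordered bins). The reaction-time samples, bin count, and quantile grid are invented; this is not Sternberg's evaluation code.

    import numpy as np

    rng = np.random.default_rng(1)
    # Two invented reaction-time samples (ms) from different "subjects".
    samples = [rng.gamma(shape=4, scale=90, size=300) + 200,
               rng.gamma(shape=6, scale=70, size=300) + 250]

    def quantile_average(samples, probs):
        # Average the same quantiles across samples.
        return np.mean([np.quantile(s, probs) for s in samples], axis=0)

    def bin_means(sample, n_bins):
        # Means of rank-ordered, roughly equal-sized bins ("Vincentizing").
        s = np.sort(sample)
        return np.array([chunk.mean() for chunk in np.array_split(s, n_bins)])

    def vincentize(samples, n_bins):
        return np.mean([bin_means(s, n_bins) for s in samples], axis=0)

    probs = np.linspace(0.05, 0.95, 10)
    print("quantile-averaged RTs:", np.round(quantile_average(samples, probs)))
    print("averaged bin means:   ", np.round(vincentize(samples, 10)))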
6
Hughes AE, Nowakowska A, Clarke ADF. Bayesian multi-level modelling for predicting single and double feature visual search. Cortex 2024; 171:178-193. [PMID: 38007862 DOI: 10.1016/j.cortex.2023.10.014]
Abstract
Performance in visual search tasks is frequently summarised by "search slopes": the additional cost in reaction time for each additional distractor. While search tasks with shallow search slopes are termed efficient (pop-out, parallel, feature), there is no clear dichotomy between efficient and inefficient (serial, conjunction) search. Indeed, a range of search slopes are observed in empirical data. The Target Contrast Signal (TCS) Theory is a rare example of a quantitative model that attempts to predict search slopes for efficient visual search. One study using the TCS framework has shown that the search slope in a double-feature search (where the target differs in both colour and shape from the distractors) can be estimated from the slopes of the associated single-feature searches. This estimation is done using a contrast combination model, and a collinear contrast integration model was shown to outperform other options. In our work, we extend TCS to a Bayesian multi-level framework. We investigate modelling using normal and shifted-lognormal distributions, and show that the latter allows for a better fit to previously published data. We run a new fully within-subjects experiment to attempt to replicate the key original findings, and show that overall, TCS does a good job of predicting the data. However, we do not replicate the finding that the collinear combination model outperforms the other contrast combination models, instead finding that it may be difficult to conclusively distinguish between them.
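A minimal sketch of the distributional point, using invented data and none of the authors' multi-level structure: a shifted lognormal adds a shift parameter to the usual lognormal shape and scale, and scipy can fit all three by maximum likelihood.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Simulated reaction times (s): a 0.25 s shift plus a lognormal component.
    rt = 0.25 + rng.lognormal(mean=-1.2, sigma=0.5, size=500)

    # scipy's lognorm uses (shape=sigma, loc=shift, scale=exp(mu)).
    shape, loc, scale = stats.lognorm.fit(rt)
    print(f"sigma ~ {shape:.2f}, shift ~ {loc:.3f} s, exp(mu) ~ {scale:.3f} s")

    # Compare fit quality against a plain normal via log-likelihood.
    ll_lognorm = stats.lognorm.logpdf(rt, shape, loc, scale).sum()
    mu, sd = stats.norm.fit(rt)
    ll_norm = stats.norm.logpdf(rt, mu, sd).sum()
    print(f"log-likelihood: shifted lognormal {ll_lognorm:.1f} vs normal {ll_norm:.1f}")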
Affiliation(s)
- Anna E Hughes
- Department of Psychology, University of Essex, Colchester, CO4 3SQ, UK.
- Anna Nowakowska
- School of Psychology, University of Aberdeen, Aberdeen, AB24 3FX, UK; School of Psychology and Vision Sciences, University of Leicester, Leicester, LE1 7RH, UK
7
Tricoche L, Meunier M, Hassen S, Prado J, Pélisson D. Developmental Trajectory of Anticipation: Insights from Sequential Comparative Judgments. Behav Sci (Basel) 2023; 13:646. [PMID: 37622787 PMCID: PMC10451546 DOI: 10.3390/bs13080646]
Abstract
Reaction time (RT) is a critical measure of performance, and studying its distribution at the group or individual level provides useful information on the cognitive processes or strategies used to perform a task. In a previous study measuring RT in children and adults asked to compare two successive stimuli (quantities or words), we discovered that the group RT distribution was bimodal, with some subjects responding with a mean RT of around 1100 ms and others with a mean RT of around 500 ms. This bimodal distribution suggested two distinct response strategies, one reactive, the other anticipatory. In the present study, we tested whether subjects' segregation into fast and slow responders (1) extended to other sequential comparative judgments, (2) evolved from age 8 to adulthood, (3) could be linked to anticipation as assessed using computer modeling, and (4) stemmed from individual-specific strategies amenable to instruction. To test the first three predictions, we conducted a distributional and theoretical analysis of the RT of 158 subjects tested earlier using four different sequential comparative judgment tasks (numerosity, phonological, multiplication, subtraction). Group RT distributions were bimodal in all tasks, with the two strategies differing in speed and sometimes accuracy too. The fast strategy, which was rare or absent in 8- to 9-year-olds, steadily increased through childhood. Its frequency in adolescence remained, however, lower than in adulthood. A mixture model confirmed this developmental evolution, while a diffusion model corroborated the idea that the difference between the two strategies concerns anticipatory processes preceding decision processes. To test the fourth prediction, we conducted an online experiment where 236 participants made numerosity comparisons before and after an instruction favoring either reactive or anticipatory responses. The results provide out-of-the-lab evidence of the bimodal RT distribution associated with sequential comparisons and demonstrate that the proportions of fast vs. slow responders can be modulated simply by instructing subjects either to anticipate the future result of the comparison or not. Although anticipation of the future is as important for cognition as memory of the past, its evolution after the first year of life is much more poorly known. The present study is a step toward meeting this challenge. It also illustrates how analyzing individual RT distributions in addition to group RT distributions and using computational models can improve the assessment of decision-making processes.
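The bimodality argument can be made concrete with a two-component Gaussian mixture fitted to log reaction times. The sketch below uses invented RTs and a far simpler model than the mixture and diffusion models reported in the study.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(2)
    # Invented bimodal RTs (ms): an "anticipatory" mode near 500 ms
    # and a "reactive" mode near 1100 ms.
    rt = np.concatenate([rng.normal(500, 80, 120), rng.normal(1100, 150, 180)])

    gmm = GaussianMixture(n_components=2, random_state=0).fit(np.log(rt).reshape(-1, 1))
    means_ms = np.exp(gmm.means_.ravel())
    order = np.argsort(means_ms)
    for mode, w, m in zip(("fast", "slow"), gmm.weights_[order], means_ms[order]):
        print(f"{mode} mode: weight {w:.2f}, mean RT ~ {m:.0f} ms")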
Affiliation(s)
- Leslie Tricoche
- IMPACT Team, Lyon Neuroscience Research Center, University Lyon, UCBL, UJM, INSERM, CNRS, U1028, UMR5292, F-69000 Lyon, France
- Martine Meunier
- IMPACT Team, Lyon Neuroscience Research Center, University Lyon, UCBL, UJM, INSERM, CNRS, U1028, UMR5292, F-69000 Lyon, France
- Sirine Hassen
- IMPACT Team, Lyon Neuroscience Research Center, University Lyon, UCBL, UJM, INSERM, CNRS, U1028, UMR5292, F-69000 Lyon, France
- Jérôme Prado
- EDUWELL Team, Lyon Neuroscience Research Center, University Lyon, UCBL, UJM, INSERM, CNRS, U1028, UMR5292, F-69000 Lyon, France
- Denis Pélisson
- IMPACT Team, Lyon Neuroscience Research Center, University Lyon, UCBL, UJM, INSERM, CNRS, U1028, UMR5292, F-69000 Lyon, France
8
Yang Y, Mo L, Lio G, Huang Y, Perret T, Sirigu A, Duhamel JR. Assessing the allocation of attention during visual search using digit-tracking, a calibration-free alternative to eye tracking. Sci Rep 2023; 13:2376. [PMID: 36759694 PMCID: PMC9911646 DOI: 10.1038/s41598-023-29133-7]
Abstract
Digit-tracking, a simple, calibration-free technique, has proven to be a good alternative to eye tracking in vision science. Participants view stimuli superimposed by Gaussian blur on a touchscreen interface and slide a finger across the display to locally sharpen an area the size of the foveal region just at the finger's position. Finger movements are recorded as an indicator of eye movements and attentional focus. Because of its simplicity and portability, this system has many potential applications in basic and applied research. Here we used digit-tracking to investigate visual search and replicated several known effects observed using different types of search arrays. Exploration patterns measured with digit-tracking during visual search of natural scenes were comparable to those previously reported for eye-tracking and constrained by similar saliency. Therefore, our results provide further evidence for the validity and relevance of digit-tracking for basic and applied research on vision and attention.
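The display manipulation at the heart of digit-tracking (a blurred image that is locally sharpened around the finger position) can be sketched as a Gaussian window blending a sharp and a blurred copy of the image. This is a schematic reconstruction, not the authors' implementation; the aperture size and blur level below are arbitrary.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def digit_tracking_frame(image, finger_xy, aperture_sigma=25, blur_sigma=8):
        """Blend a blurred copy of `image` with the sharp original inside a
        Gaussian aperture centred on the finger position (x, y) in pixels."""
        blurred = gaussian_filter(image, sigma=blur_sigma)
        ys, xs = np.mgrid[0:image.shape[0], 0:image.shape[1]]
        x0, y0 = finger_xy
        window = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * aperture_sigma ** 2))
        return window * image + (1 - window) * blurred

    # Usage with a random grayscale "scene"; a real display loop would update
    # finger_xy from touchscreen events on every frame.
    scene = np.random.default_rng(3).random((240, 320))
    frame = digit_tracking_frame(scene, finger_xy=(160, 120))
    print(frame.shape, frame.min() >= 0, frame.max() <= 1)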
Affiliation(s)
- Yidong Yang
- Key Laboratory of Brain, Cognition and Education, Ministry of Education, South China Normal University, Guangzhou, 510631, China; Institute of Cognitive Sciences Marc Jeannerod CNRS, UMR 5229, 69675 Bron, France
- Lei Mo
- Key Laboratory of Brain, Cognition and Education, Ministry of Education, South China Normal University, Guangzhou, 510631, China
- Guillaume Lio
- IMind Center of Excellence for Autism, Le Vinatier Hospital, Bron, France
- Yulong Huang
- Key Laboratory of Brain, Cognition and Education, Ministry of Education, South China Normal University, Guangzhou, 510631, China; Institute of Cognitive Sciences Marc Jeannerod CNRS, UMR 5229, 69675 Bron, France
- Thomas Perret
- Institute of Cognitive Sciences Marc Jeannerod CNRS, UMR 5229, 69675 Bron, France
- Angela Sirigu
- Institute of Cognitive Sciences Marc Jeannerod CNRS, UMR 5229, 69675 Bron, France; IMind Center of Excellence for Autism, Le Vinatier Hospital, Bron, France
- Jean-René Duhamel
- Institute of Cognitive Sciences Marc Jeannerod CNRS, UMR 5229, 69675 Bron, France
10
The influence of category representativeness on the low prevalence effect in visual search. Psychon Bull Rev 2022; 30:634-642. [PMID: 36138284 DOI: 10.3758/s13423-022-02183-0]
Abstract
Visual search is greatly affected by the appearance rate of given target types, such that low-prevalence items are harder to detect, which has consequences for real-world search tasks where target frequency cannot be balanced. However, targets that are highly representative of a categorically defined task set are also easier to find. We hypothesized that targets that are highly representative are less vulnerable to low-prevalence effects because an observer's attentional set prioritizes guidance toward them even when they are rare. We assessed this hypothesis by first determining the categorical structure of "prohibited carry-ons" via an exemplar-naming task, and used this structure to assess how category representativeness interacted with prevalence. Specifically, from the exemplar-naming task we selected a commonly named (knives) and rarely named (gas cans) target for a search task in which one of the targets was shown infrequently. As predicted, highly representative targets were found more easily than their less representative counterparts, but they also were less affected by prevalence manipulations. Experiment 1b replicated the results with targets matched for emotional valence (water bottles and fireworks). These findings demonstrate the explanatory power of theories of attentional guidance that combine the dynamic influence of recent experience with the knowledge that comes from life experience to better predict behavioral outcomes associated with high-stakes search environments.
11
Kühne SJ, Reijnen E, Granja G, Hansen RS. Labels Affect Food Choices, but in What Ways? Nutrients 2022; 14:3204. [PMID: 35956380 PMCID: PMC9370702 DOI: 10.3390/nu14153204]
Abstract
To promote healthy food choices and thus reduce obesity, front-of-pack (FOP) labels have been introduced. Though FOP labels help identify healthy foods, their impact on actual food choices is rather small. A newly developed so-called swipe task was used to investigate whether the type of label used (summary vs. nutrient-specific) had differential effects on different operationalizations of the "healthier choice" measure (e.g., calories and sugar). After learning about the product offerings of a small online store, observers (N = 354) could, by means of a swipe gesture, purchase the products they needed for a weekend with six people. Observers were randomly assigned to one of five conditions: two summary label conditions (Nutri-Score and HFL), two nutrient (sugar)-specific label conditions (manga and comic), or a control condition without a label. Unexpectedly, more products (+7.3 products, albeit mostly healthy ones) and thus more calories (+1732 kcal) were purchased in the label conditions than in the control condition. Furthermore, the tested labels had different effects with respect to the different operationalizations (e.g., manga reduced sugar purchase). We argue that the additional green-labeled healthy products purchased (in label conditions) "compensate" for the purchase of red-labeled unhealthy products (see averaging bias and licensing effect).
12
Huang YY, Menozzi M. Effects of viewing distance and age on the performance and symptoms in a visual search task in augmented reality. Appl Ergon 2022; 102:103746. [PMID: 35290897 DOI: 10.1016/j.apergo.2022.103746]
Abstract
In augmented reality (AR), virtual information is optically combined with the physical environment. In the most frequently used combination technique, optical settings in AR depart from the settings in natural viewing. Depending on the combination of viewing distances of the virtual task and its physical background, this deviation may lower visual performance and cause visual disturbance symptoms. The so-called vergence-accommodation conflict (VAC) has been identified as a cause of the visual disturbance symptoms in AR. In this study, for various distance combinations, the performance and symptoms when performing a search task displayed in a see-through head-mounted display (AR HMD, HoloLens 1st generation, Microsoft, USA) were investigated. The search task was displayed at a virtual distance of either 200 cm or 30 cm, and the real background was viewed either at a distance of 200 cm or 30 cm. Three combinations of viewing distances for the background and the virtual task were studied: 200 cm/200 cm, 200 cm/30 cm, and 30 cm/30 cm. Results revealed that both performance and visual disturbance symptoms depend on the combination of the viewing distances of the physical background and the virtual task. When the physical background was viewed at a distance of 200 cm, younger participants showed a significantly better search performance and reported stronger symptoms compared with older participants, no matter whether the virtual task was performed at 30 cm or at 200 cm. However, with the physical background at a distance of 30 cm, the performance of the younger group dropped to the level of the performance of the older group, and younger participants tended to report a stronger increase in visual disturbance symptoms compared with the older participants. For the AR HMD technology used in this study, it can be concluded that a near viewing distance of the virtual task does not cause a negative impact on performance and visual disturbance symptoms, provided any physical background seen through the AR HMD is not at a near viewing distance. The findings indicate that the VAC, which persists in augmented and virtual reality, depends, in addition to the physical component evaluating the optical distance, on a cognitive component evaluating the perceived distance. AR settings should therefore also be evaluated in terms of possible effects on perceived distance.
Affiliation(s)
- Ying-Yin Huang
- Department of Industrial Engineering and Management, National Taipei University of Technology, Taipei, 10608, Taiwan.
- Marino Menozzi
- Human Factors Engineering, Department of Health Sciences and Technology, ETH, Zürich, Switzerland
13
Nicholson DA, Prinz AA. Could simplified stimuli change how the brain performs visual search tasks? A deep neural network study. J Vis 2022; 22:3. [PMID: 35675057 PMCID: PMC9187944 DOI: 10.1167/jov.22.7.3]
Abstract
Visual search is a complex behavior influenced by many factors. To control for these factors, many studies use highly simplified stimuli. However, the statistics of these stimuli are very different from the statistics of the natural images that the human visual system is optimized by evolution and experience to perceive. Could this difference change search behavior? If so, simplified stimuli may contribute to effects typically attributed to cognitive processes, such as selective attention. Here we use deep neural networks to test how optimizing models for the statistics of one distribution of images constrains performance on a task using images from a different distribution. We train four deep neural network architectures on one of three source datasets (natural images, faces, and x-ray images) and then adapt them to a visual search task using simplified stimuli. This adaptation produces models that exhibit performance limitations similar to humans, whereas models trained on the search task alone exhibit no such limitations. However, we also find that deep neural networks trained to classify natural images exhibit similar limitations when adapted to a search task that uses a different set of natural images. Therefore, the distribution of data alone cannot explain this effect. We discuss how future work might integrate an optimization-based approach into existing models of visual search behavior.
Affiliation(s)
- David A Nicholson
- Emory University, Department of Biology, O. Wayne Rollins Research Center, Atlanta, Georgia
- Astrid A Prinz
- Emory University, Department of Biology, O. Wayne Rollins Research Center, Atlanta, Georgia
14
Riedel P, Domachowska IM, Lee Y, Neukam PT, Tönges L, Li SC, Goschke T, Smolka MN. L-DOPA administration shifts the stability-flexibility balance towards attentional capture by distractors during a visual search task. Psychopharmacology (Berl) 2022; 239:867-885. [PMID: 35147724 PMCID: PMC8891202 DOI: 10.1007/s00213-022-06077-w]
Abstract
RATIONALE: The cognitive control dilemma describes the necessity to balance two antagonistic modes of attention: stability and flexibility. Stability refers to goal-directed thought, feeling, or action and flexibility refers to the complementary ability to adapt to an ever-changing environment. Their balance is thought to be maintained by neurotransmitters such as dopamine, most likely in a U-shaped rather than linear manner. However, in humans, studies on the stability-flexibility balance using a dopaminergic agent and/or measurement of brain dopamine are scarce. OBJECTIVE: The study aimed to investigate the causal involvement of dopamine in the stability-flexibility balance and the nature of this relationship in humans. METHODS: Distractibility was assessed as the difference in reaction time (RT) between distractor and non-distractor trials in a visual search task. In a randomized, placebo-controlled, double-blind, crossover study, 65 healthy participants performed the task under placebo and a dopamine precursor (L-DOPA). Using 18F-DOPA-PET, dopamine availability in the striatum was examined at baseline to investigate its relationship to the RT distractor effect and to the L-DOPA-induced change of the RT distractor effect. RESULTS: There was a pronounced RT distractor effect in the placebo session that increased under L-DOPA. Neither the RT distractor effect in the placebo session nor the magnitude of its L-DOPA-induced increase were related to baseline striatal dopamine. CONCLUSIONS: L-DOPA administration shifted the stability-flexibility balance towards attentional capture by distractors, suggesting causal involvement of dopamine. This finding is consistent with current theories of prefrontal cortex dopamine function. Current data can neither confirm nor falsify the inverted U-shaped function hypothesis with regard to cognitive control.
Affiliation(s)
- P. Riedel
- Department of Psychiatry and Psychotherapy, Technische Universität Dresden, Fetscherstraße 74, 01307 Dresden, Germany
- I. M. Domachowska
- Department of Psychology, Technische Universität Dresden, Zellescher Weg 17, 01069 Dresden, Germany
- Y. Lee
- Department of Psychiatry and Psychotherapy, Technische Universität Dresden, Fetscherstraße 74, 01307 Dresden, Germany
- P. T. Neukam
- Department of Psychiatry and Psychotherapy, Technische Universität Dresden, Fetscherstraße 74, 01307 Dresden, Germany
- L. Tönges
- Department of Neurology, Ruhr University Bochum, St. Josef-Hospital, Gudrunstraße 56, 44791 Bochum, Germany
- S. C. Li
- Department of Psychology, Technische Universität Dresden, Zellescher Weg 17, 01069 Dresden, Germany; Centre for Tactile Internet with Human-in-the-Loop, Technische Universität Dresden, Georg-Schumann-Str. 9, 01187 Dresden, Germany
- T. Goschke
- Department of Psychology, Technische Universität Dresden, Zellescher Weg 17, 01069 Dresden, Germany
- M. N. Smolka
- Department of Psychiatry and Psychotherapy, Technische Universität Dresden, Fetscherstraße 74, 01307 Dresden, Germany
15
The impact of cognitive load on prospective and retrospective time estimates at long durations: An investigation using a visual and memory search paradigm. Mem Cognit 2021; 50:837-851. [PMID: 34655029 DOI: 10.3758/s13421-021-01241-7]
Abstract
As human beings, we are bound by time. It is essential for daily functioning, and yet our ability to keep track of time is influenced by a myriad of factors (Block & Zakay, 1997, Psychonomic Bulletin & Review, 4[2], 184-197). First and foremost, time estimation has been found to depend on whether participants estimate the time prospectively or retrospectively (Hicks et al., 1976, The American Journal of Psychology, 89[4], 719-730). However, there is a paucity of research investigating differences between these two conditions in tasks over two minutes (Tobin et al., 2010, PLOS ONE, 5[2], Article e9271). Moreover, estimates have also been shown to be influenced by cognitive load. We thus investigated participants' ability to keep track of time during a visual and memory search task and manipulated its difficulty and duration. Two hundred and ninety-two participants performed the task for 8 or 58 minutes. Participants in the prospective time judgment condition were forewarned of an impending time estimate, whereas participants in the retrospective condition were not. Cognitive load was manipulated and assessed by altering the task's difficulty. The results revealed a higher overestimation of time in the prospective condition compared with the retrospective condition. However, this was found in the 8-minute task only. Overall, participants significantly overestimated the duration of the 8-minute task and underestimated the 58-minute task. Finally, cognitive load had no effect on participants' time estimates. Thus, the well-known cross-over interaction between cognitive load and estimation paradigm (Block et al., 2010, Acta Psychologica, 134[3], 330-343) did not extend to a longer duration in this experiment.
16
Wolfe JM. Guided Search 6.0: An updated model of visual search. Psychon Bull Rev 2021; 28:1060-1092.
Abstract
This paper describes Guided Search 6.0 (GS6), a revised model of visual search. When we encounter a scene, we can see something everywhere. However, we cannot recognize more than a few items at a time. Attention is used to select items so that their features can be "bound" into recognizable objects. Attention is "guided" so that items can be processed in an intelligent order. In GS6, this guidance comes from five sources of preattentive information: (1) top-down and (2) bottom-up feature guidance, (3) prior history (e.g., priming), (4) reward, and (5) scene syntax and semantics. These sources are combined into a spatial "priority map," a dynamic attentional landscape that evolves over the course of search. Selective attention is guided to the most active location in the priority map approximately 20 times per second. Guidance will not be uniform across the visual field. It will favor items near the point of fixation. Three types of functional visual field (FVFs) describe the nature of these foveal biases. There is a resolution FVF, an FVF governing exploratory eye movements, and an FVF governing covert deployments of attention. To be identified as targets or rejected as distractors, items must be compared to target templates held in memory. The binding and recognition of an attended object is modeled as a diffusion process taking > 150 ms/item. Since selection occurs more frequently than that, it follows that multiple items are undergoing recognition at the same time, though asynchronously, making GS6 a hybrid of serial and parallel processes. In GS6, if a target is not found, search terminates when an accumulating quitting signal reaches a threshold. Setting of that threshold is adaptive, allowing feedback about performance to shape subsequent searches. Simulation shows that the combination of asynchronous diffusion and a quitting signal can produce the basic patterns of response time and error data from a range of search experiments.
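To make the hybrid serial/parallel idea concrete, here is a deliberately simplified toy simulation, my own sketch rather than the GS6 implementation: items are selected from a noisy priority map roughly every 50 ms, each selected item starts an independent identification process taking more than 150 ms, and an accumulating quitting signal terminates target-absent search. All parameter values are arbitrary.

    import numpy as np

    rng = np.random.default_rng(4)

    def gs6_toy_trial(set_size, target_present, dt=0.05, quit_threshold=40.0):
        """One toy search trial. Returns (response, reaction time in s)."""
        priority = rng.random(set_size)            # noisy preattentive guidance
        if target_present:
            priority[0] += 0.5                     # target tends to be prioritised
        unselected = list(range(set_size))
        finish_times, is_target, quit_signal, t = [], [], 0.0, 0.0
        while True:
            t += dt
            if unselected:                         # covert selection ~every 50 ms
                i = max(unselected, key=lambda j: priority[j])
                unselected.remove(i)
                # asynchronous identification: takes > 150 ms per item
                finish_times.append(t + 0.15 + rng.exponential(0.10))
                is_target.append(target_present and i == 0)
            for k, ft in enumerate(finish_times):  # has any item been identified?
                if ft <= t and is_target[k]:
                    return "present", t
            quit_signal += rng.normal(1.0, 0.3)    # noisy quitting accumulator
            if quit_signal >= quit_threshold:
                return "absent", t

    for n in (4, 8, 16):
        rts = [gs6_toy_trial(n, True)[1] for _ in range(200)]
        print(f"set size {n}: mean target-present RT ~ {np.mean(rts) * 1000:.0f} ms")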
Affiliation(s)
- Jeremy M Wolfe
- Ophthalmology and Radiology, Brigham & Women's Hospital/Harvard Medical School, Cambridge, MA, USA.
- Visual Attention Lab, 65 Landsdowne St, 4th Floor, Cambridge, MA, 02139, USA.
17
Park HB, Ahn S, Zhang W. Visual search under physical effort is faster but more vulnerable to distractor interference. Cogn Res Princ Implic 2021; 6:17. [PMID: 33710497 PMCID: PMC7977006 DOI: 10.1186/s41235-021-00283-4]
Abstract
Cognition and action are often intertwined in everyday life. It is thus pivotal to understand how cognitive processes operate with concurrent actions. The present study aims to assess how simple physical effort operationalized as isometric muscle contractions affects visual attention and inhibitory control. In a dual-task paradigm, participants performed a singleton search task and a handgrip task concurrently. In the search task, the target was a shape singleton among distractors with a homogeneous but different shape. A salient-but-irrelevant distractor with a unique color (i.e., color singleton) appeared on half of the trials (Singleton distractor present condition), and its presence often captures spatial attention. Critically, the visual search task was performed by the participants with concurrent hand grip exertion, at 5% or 40% of their maximum strength (low vs. high physical load), on a hand dynamometer. We found that visual search under physical effort is faster, but more vulnerable to distractor interference, potentially due to arousal and reduced inhibitory control, respectively. The two effects further manifest in different aspects of RT distributions that can be captured by different components of the ex-Gaussian model using hierarchical Bayesian method. Together, these results provide behavioral evidence and a novel model for two dissociable cognitive mechanisms underlying the effects of simple muscle exertion on the ongoing visual search process on a moment-by-moment basis.
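The ex-Gaussian decomposition mentioned above (a Gaussian mu and sigma plus an exponential tau capturing the slow tail) can be sketched with scipy's exponnorm distribution. The simulated data and parameter values below are illustrative and do not reproduce the study's hierarchical Bayesian fit.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    # Simulate ex-Gaussian RTs (ms): Gaussian(mu, sigma) + Exponential(tau).
    mu, sigma, tau = 450.0, 50.0, 150.0
    rt = rng.normal(mu, sigma, 1000) + rng.exponential(tau, 1000)

    # scipy parameterises the ex-Gaussian as exponnorm(K, loc, scale)
    # with K = tau / sigma, loc = mu, scale = sigma.
    K, loc, scale = stats.exponnorm.fit(rt)
    print(f"recovered mu ~ {loc:.0f} ms, sigma ~ {scale:.0f} ms, tau ~ {K * scale:.0f} ms")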
Affiliation(s)
- Hyung-Bum Park
- Department of Psychology, University of California, Riverside, USA.
- Shinhae Ahn
- Department of Psychology, Chungbuk National University, Cheongju, Korea
- Weiwei Zhang
- Department of Psychology, University of California, Riverside, USA
18
Panis S, Schmidt F, Wolkersdorfer MP, Schmidt T. Analyzing Response Times and Other Types of Time-to-Event Data Using Event History Analysis: A Tool for Mental Chronometry and Cognitive Psychophysiology. Iperception 2020; 11:2041669520978673. [PMID: 35145613 PMCID: PMC8822313 DOI: 10.1177/2041669520978673]
Abstract
In this Methods article, we discuss and illustrate a unifying, principled way to analyze response time data from psychological experiments—and all other types of time-to-event data. We advocate the general application of discrete-time event history analysis (EHA) which is a well-established, intuitive longitudinal approach to statistically describe and model the shape of time-to-event distributions. After discussing the theoretical background behind the so-called hazard function of event occurrence in both continuous and discrete time units, we illustrate how to calculate and interpret the descriptive statistics provided by discrete-time EHA using two example data sets (masked priming, visual search). In case of discrimination data, the hazard analysis of response occurrence can be extended with a microlevel speed-accuracy trade-off analysis. We then discuss different approaches for obtaining inferential statistics. We consider the advantages and disadvantages of a principled use of discrete-time EHA for time-to-event data compared to (a) comparing means with analysis of variance, (b) other distributional methods available in the literature such as delta plots and continuous-time EHA methods, and (c) only fitting parametric distributions or computational models to empirical data. We conclude that statistically controlling for the passage of time during data analysis is equally important as experimental control during the design of an experiment, to understand human behavior in our experimental paradigms.
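A bare-bones life-table estimate of the discrete-time hazard function, h(t) = P(response in bin t | no response before bin t), is sketched below with invented reaction times and an arbitrary bin width; the authors' own analyses go well beyond this.

    import numpy as np

    rng = np.random.default_rng(6)
    rt = rng.gamma(shape=5, scale=80, size=500) + 200   # invented RTs in ms
    bin_width = 100                                     # discrete time bins (ms)

    edges = np.arange(0, rt.max() + bin_width, bin_width)
    events, _ = np.histogram(rt, bins=edges)            # responses per bin
    # Number of trials still without a response at the start of each bin.
    at_risk = len(rt) - np.cumsum(np.concatenate([[0], events[:-1]]))
    hazard = np.where(at_risk > 0, events / at_risk, np.nan)

    for lo, h in zip(edges[:-1], hazard):
        if not np.isnan(h) and h > 0:
            print(f"{lo:4.0f}-{lo + bin_width:4.0f} ms: h(t) = {h:.2f}")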
Affiliation(s)
- Sven Panis
- Experimental Psychology Unit, Faculty of Social Sciences, Technische Universität Kaiserslautern, Kaiserslautern, Germany
- Filipp Schmidt
- Abteilung Allgemeine Psychologie, Fachbereich 06, Psychologie und Sportwissenschaft, Justus-Liebig-Universität Gießen, Giessen, Germany
- Experimental Psychology Unit, Faculty of Social Sciences, Technische Universität Kaiserslautern, Kaiserslautern, Germany
- Maximilian P. Wolkersdorfer
- Experimental Psychology Unit, Faculty of Social Sciences, Technische Universität Kaiserslautern, Kaiserslautern, Germany
- Thomas Schmidt
- Experimental Psychology Unit, Faculty of Social Sciences, Technische Universität Kaiserslautern, Kaiserslautern, Germany
19
Studying the dynamics of visual search behavior using RT hazard and micro-level speed-accuracy tradeoff functions: A role for recurrent object recognition and cognitive control processes. Atten Percept Psychophys 2020; 82:689-714. [PMID: 31942704 DOI: 10.3758/s13414-019-01897-z]
Abstract
Thanks to the work of Anne Treisman and many others, the visual search paradigm has become one of the most popular paradigms in the study of visual attention. However, statistics like mean correct response time (RT) and percent error do not usually suffice to decide between the different search models that have been developed. Recently, to move beyond mean performance measures in visual search, RT histograms have been plotted, theoretical waiting time distributions have been fitted, and whole RT and error distributions have been simulated. Here we promote and illustrate the general application of discrete-time hazard analysis to response times, and of micro-level speed-accuracy tradeoff analysis to timed response accuracies. An exploratory analysis of published benchmark search data from feature, conjunction, and spatial configuration search tasks reveals new features of visual search behavior, such as a relatively flat hazard function in the right tail of the RT distributions for all tasks, a clear effect of set size on the shape of the RT distribution for the feature search task, and individual differences in the presence of a systematic pattern of early errors. Our findings suggest that the temporal dynamics of visual search behavior results from a decision process that is temporally modulated by concurrently active recurrent object recognition, learning, and cognitive control processes, next to attentional selection processes.
20
Visual and central attention share a capacity limitation when the demands for serial item selection in visual search are high. Atten Percept Psychophys 2020; 82:715-728. [PMID: 31974939 DOI: 10.3758/s13414-019-01903-4]
Abstract
Visual and central attention are limited in capacity. In conjunction search, visual attention is required to select the items and to bind their features (e.g., color, form, size), which results in a serial search process. In dual-tasks, central attention is required for response selection, but because central attention is limited in capacity, response selection can only be carried out for one task at a time. Here, we investigated whether visual and central attention rely on a common or on distinct capacity limitations. In two dual-task experiments, participants completed an auditory two-choice discrimination Task 1 and a conjunction search Task 2 that were presented with an experimentally modulated temporal interval between them (stimulus onset asynchrony [SOA]). In Experiment 1, Task 2 was a triple conjunction search task. Each item consisted of a conjunction of three features, so that target and distractors shared two features. In Experiment 2, Task 2 was a plus conjunction search task, in which target and distractors shared the same four features. The hypotheses for conjunction search time were derived from the locus-of-slack method. While plus conjunction search was performed after response selection in Task 1, a small part of triple conjunction search was still performed in parallel to response selection in Task 1. However, the between-experiment comparison was not significant, indicating that both search tasks may require central attention. Taken together, the present study provides evidence that visual and central attention share a common capacity limitation when conjunction search relies strongly on serial item selection.
21
Wolfe JM. Visual Search: How Do We Find What We Are Looking For? Annu Rev Vis Sci 2020; 6:539-562.
Abstract
In visual search tasks, observers look for targets among distractors. In the lab, this often takes the form of multiple searches for a simple shape that may or may not be present among other items scattered at random on a computer screen (e.g., Find a red T among other letters that are either black or red.). In the real world, observers may search for multiple classes of target in complex scenes that occur only once (e.g., As I emerge from the subway, can I find lunch, my friend, and a street sign in the scene before me?). This article reviews work on how search is guided intelligently. I ask how serial and parallel processes collaborate in visual search, describe the distinction between search templates in working memory and target templates in long-term memory, and consider how searches are terminated.
Affiliation(s)
- Jeremy M. Wolfe
- Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts 02115, USA
- Department of Radiology, Harvard Medical School, Boston, Massachusetts 02115, USA
- Visual Attention Lab, Brigham & Women's Hospital, Cambridge, Massachusetts 02139, USA
22
Synthetic-Neuroscore: Using a neuro-AI interface for evaluating generative adversarial networks. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.04.069]
23
Utochkin IS. Categorical grouping is not required for guided conjunction search. J Vis 2020; 20:30. [PMID: 32857110 PMCID: PMC7463200 DOI: 10.1167/jov.20.8.30]
Abstract
Knowledge of target features can guide attention in many conjunction searches in a top-down manner. For example, in search of a red vertical line among blue vertical and red horizontal lines, observers can guide attention toward all red items and all vertical items. In typical conjunction searches, distractors often form perceptually vivid, categorical groups of identical objects. This could favor the efficient search via guidance of attention to these "segmentable" groups. Can attention be guided if the distractors are not neatly segmentable (e.g., if colors vary continuously from red through purple to blue)? We tested search for conjunctions of color × orientation (Experiments 1, 3, 4, 5) or length × orientation (Experiment 2). In segmentable conditions, distractors could form two clear groups (e.g., blue steep and red flat). In non-segmentable conditions, distractors varied smoothly from red to blue and/or steep to flat; thus, discouraging grouping and increasing overall heterogeneity. We found that the efficiency of conjunction search was reasonably high and unaffected by segmentability. The same lack of segmentability had a detrimental effect on feature search (Experiment 4) and on conjunction search, if target information was limited to one feature (e.g., find the odd item in the red set, "subset search," Experiment 3). Guidance in conjunction search may not require grouping and segmentation cues that are very important in other tasks like texture discrimination. Our results support an idea of simultaneous, parallel top-down guidance by multiple features and argue against models suggesting sequential guidance by each feature in turn.
24
Yang YH, Wolfe JM. Is apparent instability a guiding feature in visual search? Vis Cogn 2020; 28:218-238. [PMID: 33100884 PMCID: PMC7577071 DOI: 10.1080/13506285.2020.1779892]
Abstract
Humans are quick to notice if an object is unstable. Does that assessment require attention or can instability serve as a preattentive feature that can guide the deployment of attention? This paper describes a series of visual search experiments, designed to address this question. Experiment 1 shows that less stable images among more stable images are found more efficiently than more stable among less stable; a search asymmetry that supports guidance by instability. Experiment 2 shows efficient search but no search asymmetry when the orientation of the objects is removed as a confound. Experiment 3 independently varies the orientation cues and perceived stability and finds a clear main effect of apparent stability. Experiment 4 shows converging evidence for a role of stability using different stimuli that lack an orientation cue. However, here both search for stable and unstable targets is inefficient. Experiment 5 is a control for Experiment 4, showing that the stability effect in Experiment 4 is not a simple side effect of the geometry of the stimuli. On balance, the data support a role for instability in the guidance of attention in visual search.
Affiliation(s)
- Yung-Hao Yang
- Visual Attention Laboratory, Brigham and Women’s Hospital & Harvard Medical School, Boston, MA, USA
- Human Information Science Laboratory, NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Atsugi, Japan
- Jeremy M Wolfe
- Visual Attention Laboratory, Brigham and Women’s Hospital & Harvard Medical School, Boston, MA, USA
25
Abstract
In our physical environment as well as in many experimental paradigms, we need to decide whether an occurring stimulus is relevant to us or not; further, stimuli occur with uneven probabilities. Both decision making and the difference between rare and frequent stimuli (the oddball effect) have been described as affecting pupil dilation. Surprisingly though, conjoint systematic pupillometric investigations into both factors are still rare. In two experiments, both factors as well as their interplay were investigated. Participants completed a sequential letter matching task. In this task, stimulus probability and letter matching (decision making) were manipulated independently. As dependent variables, pupil dilation and reaction time were assessed. Results suggest a clearly larger pupil dilation for target than for distractor letters, even when targets were frequent and distractors rare. When the data structure was best accounted for, no main effect of stimulus probability was found; instead, oddball effects emerged only when stimuli were goal-relevant to participants. The results are discussed in the light of common theoretical concepts of decision making and stimulus probability. Finally, relating theories of each factor, we propose an integrated framework for effects of decision making and stimulus features on pupil dilation. We assume a sequential mechanism during which incoming stimuli are decided upon regarding their goal relevance and, about 200 ms later, relevant stimuli are appraised regarding their value.
26
Merzon L, Malevich T, Zhulikov G, Krasovskaya S, MacInnes WJ. Temporal Limitations of the Standard Leaky Integrate and Fire Model. Brain Sci 2019; 10:E16. [PMID: 31892197 PMCID: PMC7016704 DOI: 10.3390/brainsci10010016]
Abstract
Itti and Koch's Saliency Model has been used extensively to simulate fixation selection in a variety of tasks from visual search to simple reaction times. Although the Saliency Model has been tested for its spatial prediction of fixations in visual salience, it has not been well tested for temporal accuracy. Visual tasks, like search, invariably result in a positively skewed distribution of saccadic reaction times over large numbers of samples, yet we show that the leaky integrate and fire (LIF) neuronal model included in the classic implementation of the model tends to produce a distribution shifted to shorter fixations (in comparison with human data). Further, while parameter optimization using a genetic algorithm and Nelder-Mead method does improve the fit of the resulting distribution, it is still unable to match temporal distributions of human responses in a visual task. Analysis of times for individual images reveals that the LIF algorithm produces initial fixation durations that are fixed rather than sampled from a distribution (as in the human case). Only by aggregating responses over many input images do they result in a distribution, although the form of this distribution still depends on the input images used to create it and not on internal model variability.
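A minimal leaky integrate-and-fire unit, driven by noisy input until its membrane potential crosses threshold, illustrates the first-passage-time mechanism whose latency distribution the paper compares against human saccadic reaction times. Parameters are arbitrary and this is not the Itti-Koch implementation evaluated in the paper.

    import numpy as np

    rng = np.random.default_rng(7)

    def lif_first_spike(drive, noise_sd=0.5, tau=20.0, threshold=1.0,
                        dt=1.0, t_max=2000.0):
        """Time (ms) of the first threshold crossing of a leaky integrator."""
        v, t = 0.0, 0.0
        while t < t_max:
            dv = (-v + drive + rng.normal(0.0, noise_sd)) * dt / tau
            v += dv
            t += dt
            if v >= threshold:
                return t
        return np.nan

    latencies = np.array([lif_first_spike(drive=1.2) for _ in range(500)])
    print(f"mean {np.nanmean(latencies):.0f} ms, sd {np.nanstd(latencies):.0f} ms, "
          f"mean-median gap {np.nanmean(latencies) - np.nanmedian(latencies):.1f} ms")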
Affiliation(s)
- Liya Merzon
- Vision Modelling Laboratory, National Research University Higher School of Economics, 109074 Moscow, Russia
- Department of Psychology, National Research University Higher School of Economics, 101000 Moscow, Russia
- Neuroscience and Biomedical Engineering Department, Aalto University, 02150 Espoo, Finland
- Tatiana Malevich
- Werner Reichardt Centre for Integrative Neuroscience, 72076 Tuebingen, Germany
- Georgiy Zhulikov
- Vision Modelling Laboratory, National Research University Higher School of Economics, 109074 Moscow, Russia
- Institute of Water Problems, Russian Academy of Sciences, 117971 Moscow, Russia
- Sofia Krasovskaya
- Vision Modelling Laboratory, National Research University Higher School of Economics, 109074 Moscow, Russia
- Department of Psychology, National Research University Higher School of Economics, 101000 Moscow, Russia
- W. Joseph MacInnes
- Vision Modelling Laboratory, National Research University Higher School of Economics, 109074 Moscow, Russia
- Department of Psychology, National Research University Higher School of Economics, 101000 Moscow, Russia
27
Lu H, Yi L, Zhang H. Autistic traits influence the strategic diversity of information sampling: Insights from two-stage decision models. PLoS Comput Biol 2019; 15:e1006964. [PMID: 31790391 PMCID: PMC6907874 DOI: 10.1371/journal.pcbi.1006964]
Abstract
Information sampling can reduce uncertainty in future decisions but is often costly. To maximize reward, people need to balance sampling cost and information gain. Here we aimed to understand how autistic traits influence the optimality of information sampling and to identify the cognitive processes that are particularly affected. Healthy human adults with different levels of autistic traits performed a probabilistic inference task in which they could sequentially sample information to increase their likelihood of correct inference and could choose to stop at any moment. We manipulated the cost and evidence associated with each sample and compared participants' performance to strategies that maximize expected gain. We found that participants were overall close to optimal but also showed autistic-trait-related differences. Participants with higher autistic traits had a higher efficiency of winning rewards when the sampling cost was zero but a lower efficiency when the cost was high and the evidence was more ambiguous. Computational modeling of participants' sampling choices and decision times revealed a two-stage decision process, with the second stage being an optional second thought. Participants may consider cost in the first stage and evidence in the second stage, or in the reverse order. The probability of choosing to stop sampling at a specific stage increases with increasing cost or increasing evidence. Surprisingly, autistic traits did not influence the decision in either stage. However, participants with higher autistic traits were inclined to consider cost first, while those with lower autistic traits considered cost or evidence first in a more balanced way. This would lead to the observed autistic-trait-related advantages or disadvantages in sampling optimality, depending on whether the optimal sampling strategy is determined only by cost or jointly by cost and evidence.

Author summary: Children with autism can spend hours lining up toys or learning all about cars or lighthouses. Such behavior, we think, may reflect suboptimal information sampling strategies, that is, a failure to balance the gain of information against its cost in time, energy, or money. We hypothesized that suboptimal information sampling is a general characteristic of people with autism or a high level of autistic traits. In our experiment, we tested how participants adjusted their sampling strategies as sampling cost and information gain changed in the environment. Although all participants were healthy young adults with similar IQs, higher autistic traits were associated with higher or lower efficiency of winning rewards under different conditions. Counterintuitively, participants with different levels of autistic traits did not differ in their general tendency to oversample or undersample, or in the decision they would reach once a specific combination of sampling cost and information gain was considered. Instead, participants with higher autistic traits consistently considered sampling cost first and only weighed information gain during a second thought, while those with lower autistic traits had more diverse sampling strategies that consequently better balanced sampling cost and information gain.
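A minimal sketch of the kind of two-stage stop/continue rule described above (the functional form and all parameters are invented for illustration, not the authors' fitted model): the first stage weighs one attribute and an optional "second thought" weighs the other, so the order considered (cost-first versus evidence-first) changes how often sampling decisions are settled in a single stage.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def two_stage_decision(cost, evidence, cost_first=True, rng=None):
    """Return ('stop' | 'continue', number_of_stages_consulted)."""
    rng = rng or np.random.default_rng()
    # Stop probabilities increase with sampling cost and with evidence strength.
    p_cost, p_evid = sigmoid(3 * cost - 1), sigmoid(4 * evidence - 2)
    first, second = (p_cost, p_evid) if cost_first else (p_evid, p_cost)
    if rng.random() < first:
        return "stop", 1          # settled in one stage: a faster decision
    if rng.random() < second:
        return "stop", 2          # stopped only after a second thought
    return "continue", 2

rng = np.random.default_rng(0)
runs = [two_stage_decision(cost=0.6, evidence=0.7, cost_first=True, rng=rng)
        for _ in range(10000)]
print(np.mean([r[0] == "stop" for r in runs]),   # overall stop probability
      np.mean([r[1] for r in runs]))             # mean stages, a proxy for decision time
```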
Collapse
Affiliation(s)
- Haoyang Lu
- Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
- Peking-Tsinghua Center for Life Sciences, Peking University, Beijing, China
| | - Li Yi
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- * E-mail: (YL); (HZ)
| | - Hang Zhang
- Peking-Tsinghua Center for Life Sciences, Peking University, Beijing, China
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing, China
- * E-mail: (YL); (HZ)
| |
Collapse
|
28
|
Clarke ADF, Nowakowska A, Hunt AR. Seeing Beyond Salience and Guidance: The Role of Bias and Decision in Visual Search. Vision (Basel) 2019; 3:E46. [PMID: 31735847 PMCID: PMC6802808 DOI: 10.3390/vision3030046] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2019] [Revised: 08/07/2019] [Accepted: 08/21/2019] [Indexed: 11/17/2022] Open
Abstract
Visual search is a popular tool for studying a range of questions about perception and attention, thanks to the ease with which the basic paradigm can be controlled and manipulated. While often thought of as a sub-field of vision science, search tasks are significantly more complex than most other perceptual tasks, with strategy and decision playing an essential, but neglected, role. In this review, we briefly describe some of the important theoretical advances about perception and attention that have been gained from studying visual search within the signal detection and guided search frameworks. Under most circumstances, search also involves executing a series of eye movements. We argue that understanding the contribution of biases, routines and strategies to visual search performance over multiple fixations will lead to new insights about these decision-related processes and how they interact with perception and attention. We also highlight the neglected potential for variability, both within and between searchers, to contribute to our understanding of visual search. The exciting challenge will be to account for variations in search performance caused by these numerous factors and their interactions. We conclude the review with some recommendations for ways future research can tackle these challenges to move the field forward.
Collapse
Affiliation(s)
| | - Anna Nowakowska
- School of Psychology, University of Aberdeen, Aberdeen AB24 3FX, UK
| | - Amelia R. Hunt
- School of Psychology, University of Aberdeen, Aberdeen AB24 3FX, UK
| |
Collapse
|
29
|
|
30
|
|
31
|
|
32
|
Wiegand I, Wolfe JM. Age doesn't matter much: hybrid visual and memory search is preserved in older adults. Aging Neuropsychol Cogn 2019; 27:220-253. [PMID: 31050319 DOI: 10.1080/13825585.2019.1604941] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Abstract
We tested younger and older observers' attention and long-term memory functions in a "hybrid search" task, in which observers look through visual displays for instances of any of several types of targets held in memory. Apart from a general slowing, search efficiency did not change with age. In both age groups, reaction times increased linearly with the visual set size and logarithmically with the memory set size, with similar relative costs of increasing load (Experiment 1). We replicated the finding and further showed that performance remained comparable between age groups when familiarity cues were made irrelevant (Experiment 2) and target-context associations were to be retrieved (Experiment 3). Our findings are at variance with theories of cognitive aging that propose age-specific deficits in attention and memory. As hybrid search resembles many real-world searches, our results might be relevant to improve the ecological validity of assessing age-related cognitive decline.
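The reported scaling can be summarized compactly. The sketch below uses invented coefficients (not the paper's estimates) and treats age as a multiplicative slowing factor on an RT that is linear in the visual set size and logarithmic in the memory set size.

```python
import math

def hybrid_search_rt(visual_set, memory_set, base=0.5,
                     visual_slope=0.04, memory_slope=0.15, slowing=1.0):
    """Predicted RT (s) for one hybrid-search trial under the reported scaling."""
    return slowing * (base
                      + visual_slope * visual_set
                      + memory_slope * math.log2(memory_set))

# Younger vs. older adults: same slopes, a general slowing factor for age.
for memory_set in (1, 4, 16, 64):
    print(memory_set,
          round(hybrid_search_rt(12, memory_set), 3),
          round(hybrid_search_rt(12, memory_set, slowing=1.3), 3))
```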
Collapse
Affiliation(s)
- Iris Wiegand
- Visual Attention Lab, Brigham & Women's Hospital, Cambridge, MA, USA; Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Berlin, Germany; Center for Lifespan Psychology, Max Planck Institute for Human Development, Berlin, Germany
| | - Jeremy M Wolfe
- Visual Attention Lab, Brigham & Women's Hospital, Cambridge, MA, USA; Departments of Ophthalmology & Radiology, Harvard Medical School, Boston, MA, USA
| |
Collapse
|
33
|
Abstract
In Hybrid Foraging tasks, observers search for multiple instances of several types of target. Collecting all the dirty laundry and kitchenware out of a child's room would be a real-world example. How are such foraging episodes structured? A series of four experiments shows that selection of one item from the display makes it more likely that the next item will be of the same type. This pattern holds if the targets are defined by basic features like color and shape but not if they are defined by their identity (e.g., the letters p & d). Additionally, switching between target types during search is expensive in time, with longer response times between successive selections if the target type changes than if they are the same. Finally, the decision to leave a screen/patch for the next screen in these foraging tasks is imperfectly consistent with the predictions of optimal foraging theory. The results of these hybrid foraging studies cast new light on the ways in which prior selection history guides subsequent visual search in general.
Collapse
Affiliation(s)
- Jeremy M Wolfe
- Visual Attention Laboratory, Department of Surgery, Brigham and Women's Hospital, Boston, MA, USA.
- Department of Ophthalmology and Radiology, Harvard Medical School, Boston, MA, USA.
- Visual Attention Laboratory, Department of Surgery, Brigham and Women's Hospital, 64 Sidney St. Suite. 170, Cambridge, MA, 02139-4170, USA.
| | - Matthew S Cain
- US Army Natick Soldier Research, Development, and Engineering Center, Natick, MA, USA
- Center for Applied Brain and Cognitive Sciences, Tufts University, Medford, MA, USA
| | - Avigael M Aizenman
- Vision Science Department, University of California Berkeley, Berkeley, CA, USA
| |
Collapse
|
34
|
Liesefeld HR, Liesefeld AM, Müller HJ. Distractor-interference reduction is dimensionally constrained. Visual Cognition 2019. [DOI: 10.1080/13506285.2018.1561568] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Affiliation(s)
- Heinrich René Liesefeld
- Department Psychologie, Ludwig-Maximilians-Universität, München, Germany
- Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München, Munich, Germany
| | - Anna M. Liesefeld
- Department Psychologie, Ludwig-Maximilians-Universität, München, Germany
| | - Hermann J. Müller
- Department Psychologie, Ludwig-Maximilians-Universität, München, Germany
- Department of Psychological Sciences, Birkbeck College, University of London, London, UK
| |
Collapse
|
35
|
Liesefeld HR, Liesefeld AM, Pollmann S, Müller HJ. Biasing Allocations of Attention via Selective Weighting of Saliency Signals: Behavioral and Neuroimaging Evidence for the Dimension-Weighting Account. Curr Top Behav Neurosci 2019; 41:87-113. [PMID: 30588570 DOI: 10.1007/7854_2018_75] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Objects that stand out from the environment tend to be behaviorally relevant, and the visual system is tuned to preferentially process these salient objects by allocating focused attention to them. However, attention is not just passively (bottom-up) driven by stimulus features; previous experiences and task goals exert strong biases toward attending or actively ignoring salient objects. The core and eponymous assumption of the dimension-weighting account (DWA) is that these top-down biases are not as flexible as one would like them to be; rather, they are subject to dimensional constraints. In particular, the DWA assumes that people often cannot search for objects that have a particular feature, but only for objects that stand out from the environment (i.e., that are salient) in a particular feature dimension. We review behavioral and neuroimaging evidence for such dimensional constraints in three areas: search history, voluntary target enhancement, and distractor handling. The first two have been the focus of research on the DWA since its inception; the latter has been the subject of our more recent research. Additionally, we discuss various challenges to the DWA and its relation to other prominent theories of top-down influences in visual search.
Collapse
Affiliation(s)
- Heinrich René Liesefeld
- Department Psychologie, Ludwig-Maximilians-Universität, Munich, Germany.
- Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität, Munich, Germany.
| | - Anna M Liesefeld
- Department Psychologie, Ludwig-Maximilians-Universität, Munich, Germany
| | - Stefan Pollmann
- Institute of Psychology and Center for Behavioral Brain Sciences, Otto von Guericke University, Magdeburg, Germany
| | - Hermann J Müller
- Department Psychologie, Ludwig-Maximilians-Universität, Munich, Germany
- Department of Psychological Sciences, Birkbeck College, University of London, London, UK
| |
Collapse
|
36
|
Kieras DE. Visual Search Without Selective Attention: A Cognitive Architecture Account. Top Cogn Sci 2018; 11:222-239. [PMID: 30585421 DOI: 10.1111/tops.12406] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2018] [Revised: 12/03/2018] [Accepted: 12/04/2018] [Indexed: 11/28/2022]
Abstract
A key phenomenon in visual search experiments is the linear relation of reaction time (RT) to the number of objects to be searched (set size). The dominant theory of visual search claims that this results from covert selective attention operating sequentially to "bind" visual features into objects, and that this mechanism operates differently depending on the nature of the search task and the visual features involved, causing the slope of RT as a function of set size to range from zero to large values. However, the cognitive architectural model presented here shows that these effects on RT in three different search task conditions can be obtained from basic visual mechanisms, eye movements, and simple task strategies; no selective attention mechanism is needed. In addition, there are little-explored effects of visual crowding, which is typically confounded with set size in visual search experiments. Including a simple mechanism for crowding in the model also allows it to account for significant effects on error rate (ER). The resulting model captures the interaction between visual mechanisms and task strategy, and thus represents a more comprehensive and fruitful approach to visual search than the dominant theory.
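As a toy illustration of how linear set-size slopes can emerge from eye movements alone (my own construction, not Kieras's architecture): if each fixation inspects a handful of items and search ends when the target's group is fixated, mean RT grows roughly linearly with set size, with a slope governed by how many items a fixation covers (e.g., fewer under crowding).

```python
import numpy as np

def simulated_search_rt(set_size, items_per_fixation=3,
                        fixation_ms=250, base_ms=350, rng=None):
    """RT (ms) for one target-present trial: fixate groups of items in a
    random order until the group containing the target is inspected."""
    rng = rng or np.random.default_rng()
    inspection_order = rng.permutation(set_size)
    target_rank = int(np.where(inspection_order == rng.integers(set_size))[0][0])
    fixations_needed = target_rank // items_per_fixation + 1
    return base_ms + fixations_needed * fixation_ms

rng = np.random.default_rng(1)
for n in (4, 8, 16, 32):
    mean_rt = np.mean([simulated_search_rt(n, rng=rng) for _ in range(5000)])
    print(n, round(float(mean_rt), 1))   # mean RT grows roughly linearly with set size
```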
Collapse
Affiliation(s)
- David E Kieras
- Electrical Engineering & Computer Science Department, University of Michigan
| |
Collapse
|
37
|
Schuster S. Hunting in archerfish - an ecological perspective on a remarkable combination of skills. J Exp Biol 2018; 221:jeb159723. [PMID: 30530768 DOI: 10.1242/jeb.159723] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/30/2023]
Abstract
Archerfish are well known for using jets of water to dislodge distant aerial prey from twigs or leaves. This Review gives a brief overview of a number of skills that the fish need to secure prey with their shooting technique. Archerfish are opportunistic hunters and, even in the wild, shoot at artificial objects to determine whether these are rewarding. They can detect non-moving targets and use efficient search strategies with characteristics of human visual search. Their learning of how to engage targets can be remarkably efficient and can show impressive degrees of generalization, including learning from observation. In other cases, however, the fish seem unable to learn and it requires some understanding of the ecological and biophysical constraints to appreciate why. The act of shooting has turned out not to be of a simple all-or-none character. Rather, the fish adjust the volume of water fired according to target size and use fine adjustments in the timing of their mouth opening and closing manoeuvre to adjust the hydrodynamic stability of their jets to target distance. As soon as prey is dislodged and starts falling, the fish make rapid and yet sophisticated multi-dimensional decisions to secure their prey against many intraspecific and interspecific competitors. Although it is not known why and how archerfish evolved an ability to shoot in the first place, I suggest that the evolution of shooting has strongly pushed the co-evolution of diverse other skills that are needed to secure a catch.
Collapse
Affiliation(s)
- Stefan Schuster
- Department of Animal Physiology, University of Bayreuth, 95440 Bayreuth, Germany
| |
Collapse
|
38
|
Humans and Algorithms for Facial Recognition: The Effects of Candidate List Length and Experience on Performance. Journal of Applied Research in Memory and Cognition 2018. [DOI: 10.1016/j.jarmac.2018.06.002] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
|
39
|
Kamienkowski JE, Varatharajah A, Sigman M, Ison MJ. Parsing a mental program: Fixation-related brain signatures of unitary operations and routines in natural visual search. Neuroimage 2018; 183:73-86. [DOI: 10.1016/j.neuroimage.2018.08.010] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2018] [Revised: 07/24/2018] [Accepted: 08/06/2018] [Indexed: 10/28/2022] Open
|
40
|
Berga D, Fdez-Vidal XR, Otazu X, Leborán V, Pardo XM. Psychophysical evaluation of individual low-level feature influences on visual attention. Vision Res 2018; 154:60-79. [PMID: 30408434 DOI: 10.1016/j.visres.2018.10.006] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2018] [Revised: 10/23/2018] [Accepted: 10/26/2018] [Indexed: 11/16/2022]
Abstract
In this study we analyze eye movement behavior elicited by low-level feature distinctiveness using a dataset of synthetically generated image patterns. The visual stimuli were inspired by those used in previous psychophysical experiments, namely free-viewing and visual search tasks, yielding a total of 15 stimulus types divided according to the task and the feature to be analyzed. Our interest is in the influence of low-level feature contrast between a salient region and the surrounding distractors, characterized by fixation localization and the reaction time of landing inside the salient region. Eye-tracking data were collected from 34 participants viewing a dataset of 230 images. Results show that saliency is predominantly and distinctively influenced by (1) feature type, (2) feature contrast, (3) temporality of fixations, (4) task difficulty, and (5) center bias. These experiments propose a new psychophysical basis for evaluating saliency models with synthetic images.
Collapse
Affiliation(s)
- David Berga
- Computer Vision Center, Universitat Autonoma de Barcelona, Spain; Computer Science Department, Universitat Autonoma de Barcelona, Spain.
| | - Xosé R Fdez-Vidal
- Centro de Investigacion en Tecnoloxias da Informacion, Universidade Santiago de Compostela, Spain
| | - Xavier Otazu
- Computer Vision Center, Universitat Autonoma de Barcelona, Spain; Computer Science Department, Universitat Autonoma de Barcelona, Spain
| | - Víctor Leborán
- Centro de Investigacion en Tecnoloxias da Informacion, Universidade Santiago de Compostela, Spain
| | - Xosé M Pardo
- Centro de Investigacion en Tecnoloxias da Informacion, Universidade Santiago de Compostela, Spain
| |
Collapse
|
41
|
Ares G, Varela F, Machin L, Antúnez L, Giménez A, Curutchet MR, Aschemann-Witzel J. Comparative performance of three interpretative front-of-pack nutrition labelling schemes: Insights for policy making. Food Qual Prefer 2018. [DOI: 10.1016/j.foodqual.2018.03.007] [Citation(s) in RCA: 66] [Impact Index Per Article: 9.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/20/2023]
|
42
|
Allenmark F, Müller HJ, Shi Z. Inter-trial effects in visual pop-out search: Factorial comparison of Bayesian updating models. PLoS Comput Biol 2018; 14:e1006328. [PMID: 30059500 PMCID: PMC6091979 DOI: 10.1371/journal.pcbi.1006328] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2018] [Revised: 08/14/2018] [Accepted: 06/26/2018] [Indexed: 01/08/2023] Open
Abstract
Many previous studies on visual search have reported inter-trial effects: observers respond faster when some target property, such as a defining feature or dimension, or the response associated with the target, repeats rather than changes across consecutive trials. However, what processes drive these inter-trial effects is still controversial. Here, we investigated this question using a combination of Bayesian modeling of belief updating and evidence accumulation modeling in perceptual decision-making. In three visual singleton ('pop-out') search experiments, we explored how the probability of the response-critical states of the search display (e.g., target presence/absence) and the repetition/switch of the target-defining dimension (color/orientation) affect reaction time distributions. The results replicated the mean reaction time (RT) inter-trial and dimension repetition/switch effects that have been reported in previous studies. Going beyond this, to uncover the underlying mechanisms, we used the Drift-Diffusion Model (DDM) and the Linear Approach to Threshold with Ergodic Rate (LATER) model to explain the RT distributions in terms of decision bias (starting point) and information processing speed (evidence accumulation rate). We further investigated how these different aspects of the decision-making process are affected by different properties of stimulus history, giving rise to dissociable inter-trial effects. We approached this question by (i) combining each perceptual decision-making model (DDM or LATER) with different updating models, each specifying a plausible rule for updating either the starting point or the rate based on stimulus history, and (ii) comparing every possible combination of trial-wise updating mechanism and perceptual decision model in a factorial model comparison. Consistently across experiments, we found that the (recent) history of the response-critical property influences the initial decision bias, while repetition/switch of the target-defining dimension affects the accumulation rate, likely reflecting an implicit 'top-down' modulation process. This provides strong evidence of a dissociation between response- and dimension-based inter-trial effects.
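A rough sketch of this conclusion in simulation form (distributional choices and parameters are mine, not the fitted models): recent history of the response-critical state nudges the diffusion starting point, while a dimension repetition versus switch scales the drift rate; choice accuracy is ignored here.

```python
import numpy as np

def ddm_rt(drift, start, bound=1.0, dt=0.001, noise=1.0, rng=None):
    """Simulate one diffusion-to-bound trial; return (RT in s, hit upper bound?)."""
    rng = rng or np.random.default_rng()
    x, t = start, 0.0
    while 0.0 < x < bound:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t, x >= bound

def simulate_sequence(trials, rng=None):
    """trials: list of (target_present, dimension) tuples, e.g. (True, 'color')."""
    rng = rng or np.random.default_rng()
    start_bias, prev_dim, rts = 0.5, None, []
    for present, dim in trials:
        drift_base = 2.0 if present else -2.0
        drift = drift_base * (1.15 if dim == prev_dim else 0.85)  # dimension repetition benefit
        rt, _ = ddm_rt(drift, start=start_bias, rng=rng)
        rts.append(rt)
        # Leaky update of the starting point toward the state just experienced.
        start_bias = 0.8 * start_bias + 0.2 * (0.7 if present else 0.3)
        prev_dim = dim
    return rts

rng = np.random.default_rng(2)
trials = [(bool(rng.integers(2)), rng.choice(['color', 'orientation']))
          for _ in range(200)]
print(round(float(np.mean(simulate_sequence(trials, rng=rng))), 3))
```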
Collapse
Affiliation(s)
- Fredrik Allenmark
- Experimental Psychology, Department of Psychology, LMU Munich, Munich, Germany
| | - Hermann J. Müller
- Experimental Psychology, Department of Psychology, LMU Munich, Munich, Germany
- Department of Psychological Science, Birkbeck College (University of London), London, United Kingdom
| | - Zhuanghua Shi
- Experimental Psychology, Department of Psychology, LMU Munich, Munich, Germany
| |
Collapse
|
43
|
Abstract
Much of the evidence for theories in visual search (including Hulleman & Olivers' [H&O's]) comes from inferences made using changes in mean RT as a function of the number of items in a display. We have known for more than 40 years that these inferences are based on flawed reasoning and obscured by model mimicry. Here we describe a method that avoids these problems.
Collapse
|
44
|
Scanning movements during haptic search: similarity with fixations during visual search. Behav Brain Sci 2018; 40:e151. [PMID: 29342610 DOI: 10.1017/s0140525x16000212] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Finding relevant objects through vision, or visual search, is a crucial function that has received considerable attention in the literature. After decades of research, data suggest that visual fixations are more crucial to understanding how visual search works than are the attributes of stimuli. This idea receives further support from the field of haptic search.
Collapse
|
45
|
Kilpatrick ZP, Poll DB. Neural field model of memory-guided search. Phys Rev E 2017; 96:062411. [PMID: 29347320 DOI: 10.1103/physreve.96.062411] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2017] [Indexed: 11/07/2022]
Abstract
Many organisms can remember locations they have previously visited during a search. Visual search experiments have shown exploration is guided away from these locations, reducing redundancies in the search path before finding a hidden target. We develop and analyze a two-layer neural field model that encodes positional information during a search task. A position-encoding layer sustains a bump attractor corresponding to the searching agent's current location, and search is modeled by velocity input that propagates the bump. A memory layer sustains persistent activity bounded by a wave front, whose edges expand in response to excitatory input from the position layer. Search can then be biased in response to remembered locations, influencing velocity inputs to the position layer. Asymptotic techniques are used to reduce the dynamics of our model to a low-dimensional system of equations that track the bump position and front boundary. Performance is compared for different target-finding tasks.
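A discrete caricature of what the model is built to capture (not the paper's neural field equations): a searcher that remembers visited locations and prefers unvisited neighbours finds a hidden target in far fewer steps than a memoryless random walk.

```python
import numpy as np

def search_steps(n_sites=60, use_memory=True, rng=None):
    """Steps needed to reach a hidden target on a ring of locations."""
    rng = rng or np.random.default_rng()
    target = int(rng.integers(1, n_sites))
    pos, visited, steps = 0, {0}, 0
    while pos != target:
        options = [(pos - 1) % n_sites, (pos + 1) % n_sites]
        if use_memory:
            unvisited = [o for o in options if o not in visited]
            options = unvisited or options     # avoid remembered locations when possible
        pos = int(rng.choice(options))
        visited.add(pos)
        steps += 1
    return steps

rng = np.random.default_rng(4)
for memory in (True, False):
    runs = [search_steps(use_memory=memory, rng=rng) for _ in range(500)]
    print("memory-guided" if memory else "memoryless", round(float(np.mean(runs)), 1))
```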
Collapse
Affiliation(s)
- Zachary P Kilpatrick
- Department of Applied Mathematics, University of Colorado, Boulder, Colorado 80309, USA; Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, Colorado 80045, USA
| | - Daniel B Poll
- Department of Mathematics, University of Houston, Houston, Texas 77204, USA; Department of Engineering Sciences and Applied Mathematics, Northwestern University, Evanston, Illinois 60208, USA
| |
Collapse
|
46
|
When is it time to move to the next map? Optimal foraging in guided visual search. Atten Percept Psychophys 2017; 78:2135-51. [PMID: 27192994 DOI: 10.3758/s13414-016-1128-1] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Suppose that you are looking for visual targets in a set of images, each containing an unknown number of targets. How do you perform that search, and how do you decide when to move from the current image to the next? Optimal foraging theory predicts that foragers should leave the current image when the expected value from staying falls below the expected value from leaving. Here, we describe how to apply these models to more complex tasks, like search for objects in natural scenes where people have prior beliefs about the number and locations of targets in each image, and search is guided by target features and scene context. We model these factors in a guided search task and predict the optimal time to quit search. The data come from a satellite image search task. Participants searched for small gas stations in large satellite images. We model quitting times with a Bayesian model that incorporates prior beliefs about the number of targets in each map, average search efficiency (guidance), and actual search history in the image. Clicks deploying local magnification were used as surrogates for deployments of attention and, thus, for time. Leaving times (measured in mouse clicks) were well-predicted by the model. People terminated search when their expected rate of target collection fell to the average rate for the task. Apparently, people follow a rate-optimizing strategy in this task and use both their prior knowledge and search history in the image to decide when to quit searching.
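The quitting rule itself is compact; the sketch below (with invented numbers) applies the stated criterion: leave the current image once the expected rate of further target collection falls below the observer's average collection rate for the task.

```python
def should_leave(p_another_target, time_per_click, average_rate):
    """Leave when the expected rate from continuing to search this image
    (probability of another find divided by the time one more deployment
    of attention costs) drops below the task-wide average rate."""
    expected_rate_if_stay = p_another_target / time_per_click
    return expected_rate_if_stay < average_rate

# Example: 10% chance the next click yields a target, 2 s per click,
# and the observer has been collecting about 0.08 targets/s across the task.
print(should_leave(p_another_target=0.10, time_per_click=2.0,
                   average_rate=0.08))   # True, so move to the next image
```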
Collapse
|
47
|
More insight into the interplay of response selection and visual attention in dual-tasks: masked visual search and response selection are performed in parallel. PSYCHOLOGICAL RESEARCH 2017; 83:459-475. [PMID: 28917014 DOI: 10.1007/s00426-017-0906-2] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2016] [Accepted: 08/12/2017] [Indexed: 10/18/2022]
Abstract
Both response selection and visual attention are limited in capacity. According to the central bottleneck model, the response selection processes of two tasks in a dual-task situation are performed sequentially. In conjunction search, visual attention is required to select the items and to bind their features (e.g., color and form), which results in a serial search process. Search time increases as items are added to the search display (i.e., the set size effect). When the search display is masked, the deployment of visual attention is restricted to a brief period of time and target detection decreases as a function of set size. Here, we investigated whether response selection and visual attention (i.e., feature binding) rely on a common capacity limitation or on distinct ones. In four dual-task experiments, participants completed an auditory Task 1 and a conjunction search Task 2 that were separated by an experimentally modulated temporal interval (stimulus onset asynchrony, SOA). In Experiment 1, Task 1 was a two-choice discrimination task and the conjunction search display was not masked. In Experiment 2, the response selection difficulty in Task 1 was increased to a four-choice discrimination and the search task was the same as in Experiment 1. We applied the locus-of-slack method in both experiments to analyze conjunction search time; that is, we compared the set size effects across SOAs. Similar set size effects across SOAs (i.e., additive effects of SOA and set size) would indicate sequential processing of response selection and visual attention, whereas a significantly smaller set size effect at the short SOA compared to the long SOA (i.e., an underadditive interaction of SOA and set size) would indicate parallel processing of response selection and visual attention. In both experiments, we found underadditive interactions of SOA and set size. In Experiments 3 and 4, the conjunction search display in Task 2 was masked, and Task 1 was the same as in Experiments 1 and 2, respectively. In both experiments, the d' analysis revealed that response selection did not affect target detection. Overall, Experiments 1-4 indicated that neither the response selection difficulty in the auditory Task 1 (i.e., two-choice vs. four-choice) nor the type of presentation of the search display in Task 2 (i.e., not masked vs. masked) impaired parallel processing of response selection and conjunction search. We conclude that, in general, response selection and visual attention (i.e., feature binding) rely on distinct capacity limitations.
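Toy numbers (invented) illustrating the locus-of-slack logic: if Task 2's set-size effect is smaller at the short SOA than at the long SOA (an underadditive interaction), the search stage can be inferred to run in parallel with Task 1's response selection.

```python
# Hypothetical mean Task-2 search times in ms, keyed by (SOA, set size).
rt = {
    ("short", 4): 820, ("short", 8): 900,
    ("long", 4): 650, ("long", 8): 790,
}
effect_short = rt[("short", 8)] - rt[("short", 4)]   # 80 ms set-size effect
effect_long = rt[("long", 8)] - rt[("long", 4)]      # 140 ms set-size effect
# Underadditive interaction -> search proceeds during the Task-1 bottleneck.
print("underadditive:", effect_short < effect_long)
```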
Collapse
|
48
|
Narbutas V, Lin YS, Kristan M, Heinke D. Serial versus parallel search: A model comparison approach based on reaction time distributions. Visual Cognition 2017. [DOI: 10.1080/13506285.2017.1352055] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Affiliation(s)
- V. Narbutas
- School of Psychology, University of Birmingham, Birmingham, UK
| | - Y.-S. Lin
- School of Medicine, Division of Psychology, University of Tasmania, Sandy Bay, Australia
| | - M. Kristan
- Faculty of Computer and Information Science, University of Ljubljana, Ljubljana, Slovenia
| | - D. Heinke
- School of Psychology, University of Birmingham, Birmingham, UK
| |
Collapse
|
49
|
Aponte EA, Schöbi D, Stephan KE, Heinzle J. The Stochastic Early Reaction, Inhibition, and late Action (SERIA) model for antisaccades. PLoS Comput Biol 2017; 13:e1005692. [PMID: 28767650 PMCID: PMC5555715 DOI: 10.1371/journal.pcbi.1005692] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2017] [Revised: 08/14/2017] [Accepted: 07/20/2017] [Indexed: 01/19/2023] Open
Abstract
The antisaccade task is a classic paradigm used to study the voluntary control of eye movements. It requires participants to suppress a reactive eye movement to a visual target and to concurrently initiate a saccade in the opposite direction. Although several models have been proposed to explain error rates and reaction times in this task, no formal model comparison has yet been performed. Here, we describe a Bayesian modeling approach to the antisaccade task that allows us to formally compare different models on the basis of their evidence. First, we provide a formal likelihood function of actions (pro- and antisaccades) and reaction times based on previously published models. Second, we introduce the Stochastic Early Reaction, Inhibition, and late Action model (SERIA), a novel model postulating two different mechanisms that interact in the antisaccade task: an early GO/NO-GO race decision process and a late GO/GO decision process. Third, we apply these models to a data set from an experiment with three mixed blocks of pro- and antisaccade trials. Bayesian model comparison demonstrates that the SERIA model explains the data better than competing models that do not incorporate a late decision process. Moreover, we show that the early decision process postulated by the SERIA model is, to a large extent, insensitive to the cue presented in a single trial. Finally, we use parameter estimates to demonstrate that changes in reaction time and error rate due to the probability of a trial type (pro- or antisaccade) are best explained by faster or slower inhibition and the probability of generating late voluntary prosaccades.
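A simplified sketch of the race architecture described above (distributional choices and parameters are mine, not the paper's likelihood): an early race between a reactive go unit and an inhibition unit, followed, when the reactive response is inhibited, by a late race between voluntary pro- and antisaccade units.

```python
import numpy as np

def seria_like_trial(rng, early_go=(0.18, 0.04), inhibit=(0.16, 0.04),
                     late_pro=(0.30, 0.06), late_anti=(0.28, 0.06)):
    """Return (action, reaction_time_s) for one simulated antisaccade trial.
    Each unit's finishing time is drawn from a normal distribution
    (mean, sd); times are floored at 50 ms to avoid negative draws."""
    t_early, t_stop = rng.normal(*early_go), rng.normal(*inhibit)
    if t_early < t_stop:                      # reactive saccade escapes inhibition
        return "prosaccade_error", max(t_early, 0.05)
    t_pro, t_anti = rng.normal(*late_pro), rng.normal(*late_anti)
    if t_anti <= t_pro:
        return "antisaccade", max(t_anti, 0.05)
    return "late_prosaccade", max(t_pro, 0.05)

rng = np.random.default_rng(5)
trials = [seria_like_trial(rng) for _ in range(10000)]
error_rate = np.mean([action == "prosaccade_error" for action, _ in trials])
print(round(float(error_rate), 3))
```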
Collapse
Affiliation(s)
- Eduardo A. Aponte
- Translational Neuromodeling Unit, Institute for Biomedical Engineering, University of Zurich & Swiss Institute of Technology Zurich, Zurich, Switzerland
- * E-mail: (EAA); (JH)
| | - Dario Schöbi
- Translational Neuromodeling Unit, Institute for Biomedical Engineering, University of Zurich & Swiss Institute of Technology Zurich, Zurich, Switzerland
| | - Klaas E. Stephan
- Translational Neuromodeling Unit, Institute for Biomedical Engineering, University of Zurich & Swiss Institute of Technology Zurich, Zurich, Switzerland
- Wellcome Trust Centre for Neuroimaging, University College London, London, United Kingdom
| | - Jakob Heinzle
- Translational Neuromodeling Unit, Institute for Biomedical Engineering, University of Zurich & Swiss Institute of Technology Zurich, Zurich, Switzerland
- * E-mail: (EAA); (JH)
| |
Collapse
|
50
|
An appeal against the item's death sentence: Accounting for diagnostic data patterns with an item-based model of visual search. Behav Brain Sci 2017; 40:e148. [DOI: 10.1017/s0140525x16000182] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
We show that our item-based model, competitive guided search, accounts for the empirical patterns that Hulleman & Olivers (H&O) invoke against item-based models, and we highlight recently reported diagnostic data that challenge their approach. We advise against "forsaking the item" unless and until a full fixation-based model is shown to be superior to extant item-based models.
Collapse
|