1. Nikolaev AR, Meghanathan RN, van Leeuwen C. Refixation behavior in naturalistic viewing: Methods, mechanisms, and neural correlates. Atten Percept Psychophys 2024. [PMID: 38169029] [DOI: 10.3758/s13414-023-02836-9]
Abstract
When freely viewing a scene, the eyes often return to previously visited locations. Eye tracking and coregistered eye-movement and EEG recordings show that such refixations serve multiple roles: repairing insufficient encoding from precursor fixations, supporting ongoing viewing by resampling relevant locations prioritized by precursor fixations, and aiding the construction of memory representations. All these functions of refixation behavior are understood to be underpinned by three oculomotor and cognitive systems and their associated brain structures. First, immediate saccade planning prior to refixations involves attentional selection of candidate locations to revisit. This process is likely supported by the dorsal attentional network. Second, visual working memory, involved in maintaining task-related information, is likely supported by the visual cortex. Third, higher-order relevance of scene locations, which depends on general knowledge and understanding of scene meaning, is likely supported by the hippocampal memory system. Working together, these structures bring about viewing behavior that balances exploring previously unvisited areas of a scene with exploiting visited areas through refixations.
Affiliation(s)
- Andrey R Nikolaev
- Department of Psychology, Lund University, Box 213, 22100, Lund, Sweden.
- Brain & Cognition Research Unit, KU Leuven-University of Leuven, Leuven, Belgium.
- Cees van Leeuwen
- Brain & Cognition Research Unit, KU Leuven-University of Leuven, Leuven, Belgium
- Center for Cognitive Science, Rheinland-Pfälzische Technische Universität Kaiserslautern-Landau, Kaiserslautern, Germany
2. Godwin HJ, Hout MC. Just say 'I don't know': Understanding information stagnation during a highly ambiguous visual search task. PLoS One 2023; 18:e0295669. [PMID: 38060624] [PMCID: PMC10703240] [DOI: 10.1371/journal.pone.0295669]
Abstract
Visual search experiments typically involve participants searching simple displays with two potential response options: 'present' or 'absent'. Here we examined search behavior and decision-making when participants were tasked with searching ambiguous displays whilst also being given a third response option: 'I don't know'. Participants searched for a simple target (the letter 'o') amongst other letters in the displays. We made the target difficult to detect by increasing the degree to which letters overlapped in the displays. The results showed that as overlap increased, participants were more likely to respond 'I don't know', as expected. RT analyses demonstrated that 'I don't know' responses occurred at a later time than 'present' responses (but before 'absent' responses) when the overlap was low. By contrast, when the overlap was high, 'I don't know' responses occurred very rapidly. We discuss the implications of our findings for current models and theories in terms of what we refer to as 'information stagnation' during visual search.
Affiliation(s)
- Hayward J. Godwin
- School of Psychology, University of Southampton, Southampton, Hampshire, United Kingdom
- Michael C. Hout
- Department of Psychology, New Mexico State University, Las Cruces, New Mexico, United States of America
3. Ernst D, Wolfe JM. How fixation durations are affected by search difficulty manipulations. Visual Cognition 2022. [DOI: 10.1080/13506285.2022.2063465]
Affiliation(s)
- Daniel Ernst
- Brigham & Women’s Hospital, Boston, MA, United States
- Harvard Medical School, Boston, MA, United States
- Bielefeld University, Bielefeld, Germany
- Jeremy M. Wolfe
- Brigham & Women’s Hospital, Boston, MA, United States
- Harvard Medical School, Boston, MA, United States
4. Wu CC, Wolfe JM. The Functional Visual Field(s) in simple visual search. Vision Res 2022; 190:107965. [PMID: 34775158] [PMCID: PMC8976560] [DOI: 10.1016/j.visres.2021.107965]
Abstract
During a visual search for a target among distractors, observers do not fixate every location in the search array. Rather, processing is thought to occur within a Functional Visual Field (FVF) surrounding each fixation. We argue that there are three questions that can be asked at each fixation and that these imply three different senses of the FVF. 1) Can I identify what is at location XY? This defines a Resolution FVF. 2) To what shall I attend during this fixation? This defines an Attentional FVF. 3) Where should I fixate next? This defines an Exploratory FVF. We examine FVFs 2 and 3 using eye movements in visual search. In three experiments, we collected eye movements during visual search for the target letter T among distractor letter Ls (Experiments 1 and 3) or for a color × orientation conjunction (Experiment 2). Saccades that do not go to the target can be used to define the Exploratory FVF. The saccade that goes to the target can be used to define the Attentional FVF, since the target was probably covertly detected during the prior fixation. The Exploratory FVF is larger than the Attentional FVF for all three experiments. Interestingly, the probability that the next saccade would go to the target was always well below 1.0, even when the current fixation was close to the target and well within any reasonable estimate of the FVF. Measuring search-based Exploratory and Attentional FVFs sheds light on how we can miss clearly visible targets.
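The distinction between the two search-based FVFs above translates directly into a measurement rule over fixation sequences. The sketch below is a minimal illustration, not the authors' analysis code: the coordinate format, the on-target radius, and the function name are assumptions made for the example.

```python
import math

def fvf_samples(fixations, target, on_target_radius=1.5):
    """Split successive saccades into Exploratory and Attentional FVF samples.

    fixations        : list of (x, y) fixation positions in degrees, in temporal order
    target           : (x, y) position of the target in degrees
    on_target_radius : distance (deg) within which a fixation counts as landing
                       on the target (hypothetical value)
    Returns (exploratory, attentional) lists of distances in degrees.
    """
    exploratory, attentional = [], []
    for (x0, y0), (x1, y1) in zip(fixations, fixations[1:]):
        if math.hypot(x1 - target[0], y1 - target[1]) <= on_target_radius:
            # The targeting saccade: the prior fixation's distance from the
            # target estimates how far away it could be covertly detected.
            attentional.append(math.hypot(x0 - target[0], y0 - target[1]))
        else:
            # Saccades that do not go to the target index the Exploratory FVF
            # via their amplitude.
            exploratory.append(math.hypot(x1 - x0, y1 - y0))
    return exploratory, attentional
```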
Affiliation(s)
- Chia-Chien Wu
- Harvard Medical School, Boston, MA, USA; Brigham & Women's Hospital, Boston, MA, USA.
- Jeremy M Wolfe
- Harvard Medical School, Boston, MA, USA; Brigham & Women's Hospital, Boston, MA, USA
5. Wolfe JM, Wu CC, Li J, Suresh SB. What do experts look at and what do experts find when reading mammograms? J Med Imaging (Bellingham) 2021; 8:045501. [PMID: 34277890] [DOI: 10.1117/1.jmi.8.4.045501]
Abstract
Purpose: Radiologists sometimes fail to report clearly visible, clinically significant findings. Eye tracking can provide insight into the causes of such errors. Approach: We tracked eye movements of 17 radiologists searching for masses in 80 mammograms (60 with masses). Results: Errors were classified using the Kundel et al. (1978) taxonomy: search errors (target never fixated), recognition errors (fixated <500 ms), or decision errors (fixated >500 ms). Error proportions replicated Krupinski (1996): search 25%, recognition 25%, and decision 50%. Interestingly, we found few differences between experts and residents in accuracy or eye-movement metrics. Error categorization depends on the definition of the useful field of view (UFOV) around fixation. We explored different UFOV definitions based on targeting saccades and search saccades. Targeting saccades averaged slightly longer than search saccades. Of most interest, we found that the probability that the eyes would move to the target on the next saccade, or even on one of the next three saccades, was strikingly low (~33%, even when the eyes were <2 deg from the target). This makes it clear that observers do not fully process everything within a UFOV. Using a probabilistic UFOV, we find, unsurprisingly, that observers cover more of the image when no target is present than when it is found. Interestingly, we do not find evidence that observers cover too little of the image on trials when they miss the target. Conclusions: These results indicate that many errors in mammography reflect failed deployment of attention, not failure to fixate clinically significant locations.
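The Kundel et al. (1978) taxonomy quoted above amounts to a simple decision rule over the dwell time accumulated on the missed target. A minimal sketch follows; the data structure, the single fixed UFOV radius, and the function name are assumptions for illustration only (the paper itself explores several UFOV definitions).

```python
import math

def classify_miss(fixations, target, ufov_radius_deg=2.5, dwell_threshold_ms=500):
    """Classify a missed target per the Kundel et al. (1978) taxonomy.

    fixations          : list of dicts with keys 'x', 'y' (deg) and 'duration_ms'
    target             : (x, y) location of the missed mass in degrees
    ufov_radius_deg    : assumed useful field of view around each fixation
    dwell_threshold_ms : boundary between recognition and decision errors
    """
    # Total viewing time during which the target fell inside the UFOV.
    dwell = sum(
        f["duration_ms"]
        for f in fixations
        if math.hypot(f["x"] - target[0], f["y"] - target[1]) <= ufov_radius_deg
    )
    if dwell == 0:
        return "search error"        # target never fixated
    if dwell < dwell_threshold_ms:
        return "recognition error"   # fixated < 500 ms
    return "decision error"          # fixated > 500 ms
```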
Affiliation(s)
- Jeremy M Wolfe
- Brigham and Women's Hospital, Boston, Massachusetts, United States; Harvard Medical School, Cambridge, Massachusetts, United States
- Chia-Chien Wu
- Brigham and Women's Hospital, Boston, Massachusetts, United States; Harvard Medical School, Cambridge, Massachusetts, United States
- Jonathan Li
- Melbourne Medical School, Melbourne, Victoria, Australia
- Sneha B Suresh
- Brigham and Women's Hospital, Boston, Massachusetts, United States
6. Avoiding potential pitfalls in visual search and eye-movement experiments: A tutorial review. Atten Percept Psychophys 2021; 83:2753-2783. [PMID: 34089167] [PMCID: PMC8460493] [DOI: 10.3758/s13414-021-02326-w]
Abstract
Examining eye-movement behavior during visual search is an increasingly popular approach for gaining insights into the moment-to-moment processing that takes place when we look for targets in our environment. In this tutorial review, we describe a set of pitfalls and considerations that are important for researchers – both experienced and new to the field – when engaging in eye-movement and visual search experiments. We walk the reader through the research cycle of a visual search and eye-movement experiment, from choosing the right predictions, through to data collection, reporting of methodology, analytic approaches, the different dependent variables to analyze, and drawing conclusions from patterns of results. Overall, our hope is that this review can serve as a guide, a talking point, a reflection on the practices and potential problems with the current literature on this topic, and ultimately a first step towards standardizing research practices in the field.
7. Mirpour K, Bisley JW. The roles of the lateral intraparietal area and frontal eye field in guiding eye movements in free viewing search behavior. J Neurophysiol 2021; 125:2144-2157. [PMID: 33949898] [DOI: 10.1152/jn.00559.2020]
Abstract
The lateral intraparietal area (LIP) and frontal eye field (FEF) have been shown to play significant roles in oculomotor control, yet most studies have found that the two areas behave similarly. To identify the unique roles each area plays in guiding eye movements, we recorded 200 LIP neurons and 231 FEF neurons from four animals performing a free viewing visual foraging task. We analyzed how neuronal responses were modulated by stimulus identity and the animals' choice of where to make a saccade. We additionally analyzed the comodulation of the sensory signals and the choice signal to identify how the sensory signals drove the choice. We found a clearly defined division of labor: LIP provided a stable map integrating task rules and stimulus identity, whereas FEF responses were dynamic, representing more complex information and, just before the saccade, were integrated with task rules and stimulus identity to decide where to move the eye. NEW & NOTEWORTHY: The lateral intraparietal area (LIP) and frontal eye field (FEF) are known to contribute to guiding eye movements, but little is known about the unique roles that each area plays. Using a free viewing visual search task, we found that LIP provides a stable map of the visual world, integrating task rules and stimulus identity. FEF activity is consistently modulated by more complex information but, just before the saccade, integrates all the information to make the final decision about where to move.
Affiliation(s)
- Koorosh Mirpour
- Department of Neurobiology, David Geffen School of Medicine at UCLA, Los Angeles, California
- James W Bisley
- Department of Neurobiology, David Geffen School of Medicine at UCLA, Los Angeles, California; Jules Stein Eye Institute, David Geffen School of Medicine at UCLA, Los Angeles, California; Department of Psychology and the Brain Research Institute, UCLA, Los Angeles, California
8.
Abstract
Research and theories on visual search often focus on visual guidance to explain differences in search. Guidance is the tuning of attention to target features, and it facilitates search because distractors that do not show target features can be more effectively ignored (skipping). As a general rule, the better the guidance is, the more efficient search is. Correspondingly, behavioral experiments have often interpreted differences in efficiency as reflecting varying degrees of attentional guidance. But other factors, such as the time spent on processing a distractor (dwelling) or multiple visits to the same stimulus in a search display (revisiting), are also involved in determining search efficiency. While there is some research showing that dwelling and revisiting modulate search times in addition to skipping, the corresponding studies used complex naturalistic and category-defined stimuli. The present study tests whether results from prior research can be generalized to simpler stimuli, where target-distractor similarity, a strong factor influencing search performance, can be manipulated in a detailed fashion. Thus, in the present study, simple stimuli with varying degrees of target-distractor similarity were used to deliver conclusive evidence for the contribution of dwelling and revisiting to search performance. The results have theoretical and methodological implications: They imply that visual search models should not treat dwelling and revisiting as constants across varying levels of search efficiency, and that behavioral search experiments are equivocal with respect to the processing mechanisms underlying more versus less efficient search. We also suggest that eye-tracking methods may be used to disentangle different search components such as skipping, dwelling, and revisiting.
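The abstract's closing point, that eye tracking can disentangle skipping, dwelling, and revisiting, can be made concrete with a small per-trial summary over a fixation-to-item assignment. This is a sketch under assumed data structures (item indices per fixation and fixation durations), not code from the study.

```python
from collections import defaultdict
from itertools import groupby

def search_components(aoi_sequence, durations_ms, n_items):
    """Summarize skipping, dwelling, and revisiting for one search trial.

    aoi_sequence : item index of the stimulus each fixation landed on, in order
    durations_ms : fixation durations (ms), same length as aoi_sequence
    n_items      : number of stimuli in the display
    """
    visits = defaultdict(int)     # number of separate visits per item
    dwell = defaultdict(float)    # total viewing time per item
    pos = 0
    # Collapse consecutive fixations on the same item into a single visit.
    for item, run in groupby(aoi_sequence):
        run_len = len(list(run))
        visits[item] += 1
        dwell[item] += sum(durations_ms[pos:pos + run_len])
        pos += run_len

    skipped = [i for i in range(n_items) if i not in visits]   # never inspected
    revisited = [i for i, v in visits.items() if v > 1]        # returned to after leaving
    mean_dwell = sum(dwell.values()) / max(len(visits), 1)     # ms per visited item
    return {"skipped": skipped, "revisited": revisited, "mean_dwell_ms": mean_dwell}
```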
9. Dwelling on distractors varying in target-distractor similarity. Acta Psychol (Amst) 2019; 198:102859. [PMID: 31212105] [DOI: 10.1016/j.actpsy.2019.05.011]
Abstract
Present-day models of visual search focus on explaining search efficiency by visual guidance: the target guides attention to the target's position better in more efficient than in less efficient search. The time spent processing the distractor, however, is set to a constant in these models. In contrast to this assumption, recent studies found that dwelling on distractors is longer in less efficient search. Previous experiments in support of this contention all presented the same distractors across all conditions, while varying the targets. While this procedure has its virtues, it confounds the manipulation of search efficiency with target type. Here we use the same targets over the entire experiment, while varying search efficiency by presenting different types of distractors. Eye fixation behavior was used to infer the amount of distractor dwelling, skipping, and revisiting. The results replicate previous results, with similarity affecting dwelling, and dwelling in turn affecting search performance. A regression analysis confirmed that variations in dwelling account for a large amount of variance in search speed, and that the similarity effect in dwelling accounts for the similarity effect in overall search performance.
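The regression described in the abstract relates dwelling to overall search speed. A minimal version of such an analysis is sketched below with invented numbers, purely to show the form of the computation; the values are not data from the study.

```python
import numpy as np

# Hypothetical condition means (not from the paper): mean distractor dwell
# time and mean correct-trial search time across similarity conditions.
dwell_ms = np.array([180.0, 210.0, 250.0, 310.0, 390.0])
search_rt_ms = np.array([820.0, 900.0, 1040.0, 1260.0, 1580.0])

# Ordinary least-squares fit of search time on dwelling. The slope gives the
# extra search time associated with each added millisecond of dwelling, and
# R^2 gives the variance in search speed accounted for by dwelling.
slope, intercept = np.polyfit(dwell_ms, search_rt_ms, 1)
predicted = slope * dwell_ms + intercept
ss_res = float(np.sum((search_rt_ms - predicted) ** 2))
ss_tot = float(np.sum((search_rt_ms - search_rt_ms.mean()) ** 2))
print(f"slope = {slope:.2f}, intercept = {intercept:.1f} ms, R^2 = {1 - ss_res / ss_tot:.3f}")
```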
10. Kangasrääsiö A, Jokinen JPP, Oulasvirta A, Howes A, Kaski S. Parameter inference for computational cognitive models with approximate Bayesian computation. Cogn Sci 2019; 43:e12738. [PMID: 31204797] [PMCID: PMC6593436] [DOI: 10.1111/cogs.12738]
Abstract
This paper addresses a common challenge with computational cognitive models: identifying parameter values that are both theoretically plausible and generate predictions that match well with empirical data. While computational models can offer deep explanations of cognition, they are computationally complex and often out of reach of traditional parameter fitting methods. Weak methodology may lead to premature rejection of valid models or to acceptance of models that might otherwise be falsified. Mathematically robust fitting methods are, therefore, essential to the progress of computational modeling in cognitive science. In this article, we investigate the capability and role of modern fitting methods—including Bayesian optimization and approximate Bayesian computation—and contrast them to some more commonly used methods: grid search and Nelder–Mead optimization. Our investigation consists of a reanalysis of the fitting of two previous computational models: an Adaptive Control of Thought—Rational model of skill acquisition and a computational rationality model of visual search. The results contrast the efficiency and informativeness of the methods. A key advantage of the Bayesian methods is the ability to estimate the uncertainty of fitted parameter values. We conclude that approximate Bayesian computation is (a) efficient, (b) informative, and (c) offers a path to reproducible results.
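For readers unfamiliar with the method, the core of rejection-based approximate Bayesian computation is only a few lines: draw parameters from the prior, simulate summary statistics, and keep the draws whose simulated summaries land close to the observed ones. The sketch below is a generic illustration with an assumed interface, not the authors' implementation (the paper also evaluates other fitting methods, such as Bayesian optimization).

```python
import numpy as np

def abc_rejection(simulate, observed_summary, prior_sampler,
                  n_draws=10_000, tolerance=0.05, rng=None):
    """Minimal rejection ABC for a simulator-based cognitive model.

    simulate(theta, rng) -> vector of summary statistics for parameters theta
                            (stand-in for the cognitive model; assumed interface)
    observed_summary     -> summary statistics computed from the empirical data
    prior_sampler(rng)   -> one parameter draw from the prior
    """
    rng = rng or np.random.default_rng()
    observed = np.asarray(observed_summary, dtype=float)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler(rng)
        summary = np.asarray(simulate(theta, rng), dtype=float)
        # Keep the draw if its simulated summaries are close to the data.
        if np.linalg.norm(summary - observed) < tolerance:
            accepted.append(theta)
    # The accepted draws approximate the posterior; their spread quantifies
    # the parameter uncertainty that the abstract highlights.
    return np.asarray(accepted)
```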
Affiliation(s)
- Andrew Howes
- School of Computer Science, University of Birmingham
- Samuel Kaski
- Department of Computer Science, Aalto University
11. Suppression of frontal eye field neuronal responses with maintained fixation. Proc Natl Acad Sci U S A 2018; 115:804-809. [PMID: 29311323] [DOI: 10.1073/pnas.1716315115]
Abstract
The decision of where to make an eye movement is thought to be driven primarily by responses to stimuli in neurons' receptive fields (RFs) in oculomotor areas, including the frontal eye field (FEF) of prefrontal cortex. It is also thought that a saccade may be generated when the accumulation of this activity in favor of one location or another reaches a threshold. However, in the reading and scene perception fields, it is well known that the properties of the stimulus at the fovea often affect when the eyes leave that stimulus. We propose that if FEF plays a role in generating eye movements, then the identity of the stimulus at fixation should affect the FEF responses so as to reduce the probability of making a saccade when fixating an item of interest. Using a visual foraging task in which animals could make multiple eye movements within a single trial, we found that responses were strongly modulated by the identity of the stimulus at the fovea. Specifically, responses to the stimulus in the RF were suppressed when the animal maintained fixation for longer durations on a stimulus that could be associated with a reward. We suggest that this suppression, which was predicted by models of eye movement behavior, could be a mechanism by which FEF can modulate the temporal flow of saccades based on the importance of the stimulus at the fovea.
12. Horstmann G, Becker S, Ernst D. Dwelling, rescanning, and skipping of distractors explain search efficiency in difficult search better than guidance by the target. Visual Cognition 2017. [DOI: 10.1080/13506285.2017.1347591]
Affiliation(s)
- Gernot Horstmann
- Department of Psychology and CITEC, Bielefeld University, Bielefeld, Germany
- Stefanie Becker
- School of Psychology, The University of Queensland, St Lucia, Australia
- Daniel Ernst
- Department of Psychology and CITEC, Bielefeld University, Bielefeld, Germany