101
Dziemianko M, Keller F. Memory modulated saliency: A computational model of the incremental learning of target locations in visual search. Visual Cognition 2013. DOI: 10.1080/13506285.2013.784717
102
Abstract
It seems intuitive to think that previous exposure to or interaction with an environment should make it easier to search through it and, no doubt, this is true in many real-world situations. However, in a recent study, we demonstrated that previous exposure to a scene does not necessarily speed search within that scene. For instance, when observers performed as many as 15 searches for different objects in the same, unchanging scene, the speed of search did not decrease much over the course of these multiple searches (Võ & Wolfe, 2012). Only when observers were asked to search for the same object again did search become considerably faster. We argued that our naturalistic scenes provided such strong "semantic" guidance (e.g., knowing that a faucet is usually located near a sink) that guidance by incidental episodic memory (having seen that faucet previously) was rendered less useful. Here, we directly manipulated the availability of semantic information provided by a scene. By monitoring observers' eye movements, we found a tight coupling of semantic and episodic memory guidance: Decreasing the availability of semantic information increases the use of episodic memory to guide search. These findings have broad implications regarding the use of memory during search in general, and particularly during search in naturalistic scenes.
Affiliation(s)
- Melissa L-H Võ
- Visual Attention Lab, Harvard Medical School, Brigham and Women's Hospital, USA.
103
Staub A, Abbott M, Bogartz RS. Linguistically guided anticipatory eye movements in scene viewing. Visual Cognition 2012. DOI: 10.1080/13506285.2012.715599
104
Taya S, Windridge D, Osman M. Looking to score: the dissociation of goal influence on eye movement and meta-attentional allocation in a complex dynamic natural scene. PLoS One 2012; 7:e39060. PMID: 22768058; PMCID: PMC3387190; DOI: 10.1371/journal.pone.0039060
Abstract
Several studies have reported that task instructions influence eye-movement behavior during static image observation. In contrast, during dynamic scene observation we show that while the specificity of a task's goal influences observers' beliefs about where they look, it does not in turn influence their eye-movement patterns. In our study, observers watched short video clips of a single tennis match and were asked to make subjective judgments about the allocation of their visual attention to the items presented in the clip (e.g., ball, players, court lines, and umpire). Before viewing the clips, however, observers were either told simply to watch them (non-specific goal) or told to watch them with a view to judging which of the two tennis players was awarded the point (specific goal). The subjective reports suggest that observers believed they allocated more attention to goal-related items (e.g., court lines) when they performed the goal-specific task. However, we found no effect of goal specificity on the major eye-movement parameters (i.e., saccadic amplitudes, inter-saccadic intervals, and gaze coherence). We conclude that the specificity of a task goal can alter observers' beliefs about their attention-allocation strategy, but such task-driven meta-attentional modulation does not necessarily correlate with eye-movement behavior.
Affiliation(s)
- Shuichiro Taya
- School of Biological and Chemical Science, Queen Mary College, University of London, London, United Kingdom
- Centre for Vision Speech and Signal Processing, University of Surrey, Guildford, United Kingdom
- David Windridge
- Centre for Vision Speech and Signal Processing, University of Surrey, Guildford, United Kingdom
- Magda Osman
- School of Biological and Chemical Science, Queen Mary College, University of London, London, United Kingdom
- Centre for Vision Speech and Signal Processing, University of Surrey, Guildford, United Kingdom
105
Glaholt MG, Reingold EM. Direct control of fixation times in scene viewing: Evidence from analysis of the distribution of first fixation duration. Visual Cognition 2012. DOI: 10.1080/13506285.2012.666295
106
Foulsham T, Kingstone A. Modelling the influence of central and peripheral information on saccade biases in gaze-contingent scene viewing. Visual Cognition 2012. DOI: 10.1080/13506285.2012.680934
107
Amano K, Foster DH, Mould MS, Oakley JP. Visual search in natural scenes explained by local color properties. Journal of the Optical Society of America A: Optics, Image Science, and Vision 2012; 29:A194-A199. PMID: 22330379; DOI: 10.1364/josaa.29.00a194
Abstract
Success in visually searching for a small object or target in a natural scene depends on many factors, including the spatial structure of the scene and the pattern of observers' eye movements. The aim of this study was to determine to what extent local color properties of natural scenes can account for target-detection performance. A computer-controlled high-resolution color monitor was used to present images of natural scenes containing a small, randomly located, shaded gray sphere, which served as the target. Observers' gaze position was simultaneously monitored with an infrared video eye-tracker. About 60% of the adjusted variance in observers' detection performance was accounted for by local color properties, namely, lightness and the red-green and blue-yellow components of chroma. A similar level of variance was accounted for by observers' fixations. These results suggest that local color can be as influential as gaze position in determining observers' search performance in natural scenes.
Affiliation(s)
- Kinjiro Amano
- School of Electrical and Electronic Engineering, University of Manchester, Manchester M13 9PL, UK.
108
Hollingworth A. Guidance of visual search by memory and knowledge. Nebraska Symposium on Motivation 2012; 59:63-89. PMID: 23437630; DOI: 10.1007/978-1-4614-4794-8_4
Abstract
To behave intelligently in the world, humans must be able to find objects efficiently within the complex environments they inhabit. A growing proportion of the literature on visual search is devoted to understanding this type of natural search. In the present chapter, I review the literature on visual search through natural scenes, focusing on the role of memory and knowledge in guiding attention to task-relevant objects.
109
Abstract
How efficient is visual search in real scenes? In searches for targets among arrays of randomly placed distractors, efficiency is often indexed by the slope of the reaction time (RT) × Set Size function. However, it may be impossible to define set size for real scenes. As an approximation, we hand-labeled 100 indoor scenes and used the number of labeled regions as a surrogate for set size. In Experiment 1, observers searched for named objects (a chair, bowl, etc.). With set size defined as the number of labeled regions, search was very efficient (~5 ms/item). When we controlled for a possible guessing strategy in Experiment 2, slopes increased somewhat (~15 ms/item), but they were much shallower than search for a random object among other distinctive objects outside of a scene setting (Exp. 3: ~40 ms/item). In Experiments 4-6, observers searched repeatedly through the same scene for different objects. Increased familiarity with scenes had modest effects on RTs, while repetition of target items had large effects (>500 ms). We propose that visual search in scenes is efficient because scene-specific forms of attentional guidance can eliminate most regions from the "functional set size" of items that could possibly be the target.
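The search slopes quoted in this abstract are the fitted linear coefficient of mean reaction time against set size. As a minimal sketch of how such a slope is computed (the RT values below are hypothetical and chosen only for illustration, not data from the study):

```python
import numpy as np

# Hypothetical mean RTs (ms) at three set sizes; search efficiency is
# indexed by the slope of the best-fitting line RT = slope * set_size + b.
set_sizes = np.array([5, 10, 20])
mean_rts = np.array([620, 700, 860])  # illustrative values only

slope, intercept = np.polyfit(set_sizes, mean_rts, 1)
print(f"search slope: {slope:.1f} ms/item, intercept: {intercept:.0f} ms")
# → search slope: 16.0 ms/item, intercept: 540 ms
```

A slope near 5-15 ms/item counts as efficient search on this index, whereas ~40 ms/item, as in Experiment 3, indicates item-by-item inspection.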
110
Wiener JM, Hölscher C, Büchner S, Konieczny L. Gaze behaviour during space perception and spatial decision making. Psychological Research 2011; 76:713-29. PMID: 22139023; DOI: 10.1007/s00426-011-0397-5
Abstract
A series of four experiments investigating gaze behavior and decision making in the context of wayfinding is reported. Participants were presented with screenshots of choice points taken in large virtual environments, each depicting alternative path options. In Experiment 1, participants had to decide between these options to find an object hidden in the environment. In Experiment 2, participants were first informed which path option to take, as if following a guided route; subsequently, they were presented with the same images in random order and had to indicate which path option they had chosen during initial exposure. In Experiment 1, we demonstrate (1) that participants tend to choose the path option featuring the longer line of sight, and (2) a robust gaze bias towards the eventually chosen path option. In Experiment 2, systematic differences in gaze behavior towards the alternative path options between encoding and decoding were observed. Based on the data from Experiments 1 and 2, and from two control experiments ensuring that fixation patterns were specific to the spatial tasks, we develop a tentative model of gaze behavior during wayfinding decision making, suggesting that particular attention is paid to image areas depicting changes in the local geometry of the environment, such as corners, openings, and occlusions. Together, the results suggest that gaze during wayfinding tasks is directed toward, and can be predicted by, a subset of environmental features, and that gaze bias effects are a general phenomenon of visual decision making.
Affiliation(s)
- Jan M Wiener
- Department of Psychology, Bournemouth University, Poole House, Talbot Campus, Fern Barrow, Poole, Dorset, BH12 5BB, UK.
111
Chiappe DL, Strybel TZ, Vu KPL. Mechanisms for the acquisition of situation awareness in situated agents. Theoretical Issues in Ergonomics Science 2011. DOI: 10.1080/1463922x.2011.611267
112
Does oculomotor inhibition of return influence fixation probability during scene search? Atten Percept Psychophys 2011; 73:2384-98. DOI: 10.3758/s13414-011-0191-x
113
Task relevance predicts gaze in videos of real moving scenes. Exp Brain Res 2011; 214:131-7. PMID: 21822674; DOI: 10.1007/s00221-011-2812-y
Abstract
Low-level stimulus salience and task relevance together determine the human fixation priority assigned to scene locations (Fecteau and Munoz in Trends Cogn Sci 10(8):382-390, 2006). However, surprisingly little is known about the contribution of task relevance to eye movements during real-world visual search, where stimuli are in constant motion and where the 'target' of the search is abstract and semantic in nature. Here, we investigate this issue as participants continuously search an array of four closed-circuit television (CCTV) screens for suspicious events. We recorded eye movements whilst participants watched real CCTV footage and moved a joystick to continuously indicate perceived suspiciousness. We find that when multiple areas of a display compete for attention, gaze is allocated according to relative levels of reported suspiciousness. Furthermore, this measure of task relevance accounted for twice as much variance in gaze likelihood as the amount of low-level visual change over time in the video stimuli.
114
Salverda AP, Brown M, Tanenhaus MK. A goal-based perspective on eye movements in visual world studies. Acta Psychol (Amst) 2011; 137:172-80. PMID: 21067708; DOI: 10.1016/j.actpsy.2010.09.010
Abstract
There is an emerging literature on visual search in natural tasks suggesting that task-relevant goals account for a remarkably high proportion of saccades, including anticipatory eye movements. Moreover, factors such as "visual saliency" that otherwise affect fixations become less important when they are bound to objects that are not relevant to the task at hand. We briefly review this literature and discuss the implications for task-based variants of the visual world paradigm. We argue that the results and their likely interpretation may profoundly affect the "linking hypothesis" between language processing and the location and timing of fixations in task-based visual world studies. We outline a goal-based linking hypothesis and discuss some of the implications for how we conduct visual world studies, including how we interpret and analyze the data. Finally, we outline some avenues of research, including examples of some classes of experiments that might prove fruitful for evaluating the effects of goals in visual world experiments and the viability of a goal-based linking hypothesis.
115
Tatler BW, Hayhoe MM, Land MF, Ballard DH. Eye guidance in natural vision: reinterpreting salience. J Vis 2011; 11:5. PMID: 21622729; DOI: 10.1167/11.5.5
Abstract
Models of gaze allocation in complex scenes are derived mainly from studies of static picture viewing. The dominant framework to emerge has been image salience, where properties of the stimulus play a crucial role in guiding the eyes. However, salience-based schemes are poor at accounting for many aspects of picture viewing and can fail dramatically in the context of natural task performance. These failures have led to the development of new models of gaze allocation in scene viewing that address a number of these issues. However, models based on the picture-viewing paradigm are unlikely to generalize to a broader range of experimental contexts, because the stimulus context is limited, and the dynamic, task-driven nature of vision is not represented. We argue that there is a need to move away from this class of model and find the principles that govern gaze allocation in a broader range of settings. We outline the major limitations of salience-based selection schemes and highlight what we have learned from studies of gaze allocation in natural vision. Clear principles of selection are found across many instances of natural vision and these are not the principles that might be expected from picture-viewing studies. We discuss the emerging theoretical framework for gaze allocation on the basis of reward maximization and uncertainty reduction.
116
Matsukura M, Brockmole JR, Boot WR, Henderson JM. Oculomotor capture during real-world scene viewing depends on cognitive load. Vision Res 2011; 51:546-52. PMID: 21310171; DOI: 10.1016/j.visres.2011.01.014
Abstract
It has been claimed that gaze control during scene viewing is largely governed by stimulus-driven, bottom-up selection mechanisms. Recent research, however, has strongly suggested that observers' top-down control plays a dominant role in attentional prioritization in scenes. A notable exception to this strong top-down control is oculomotor capture, where visual transients in a scene draw the eyes. One way to test whether oculomotor capture during scene viewing is independent of an observer's top-down goal setting is to reduce observers' available cognitive resources. In the present study, we examined whether increasing observers' cognitive load influences the frequency and speed of oculomotor capture during scene viewing. In Experiment 1, we tested whether increasing cognitive load modulates the degree of oculomotor capture by a new object that suddenly appears in a scene. Similarly, in Experiment 2, we tested whether increasing cognitive load modulates the degree of oculomotor capture by an object's color change. In both experiments, the degree of oculomotor capture decreased as observers' cognitive resources were reduced. These results suggest that oculomotor capture during scene viewing depends on observers' top-down selection mechanisms.
Affiliation(s)
- Michi Matsukura
- University of Iowa, Department of Psychology, 11 Seashore Hall E, Iowa City, IA 52242, USA.
117
Foulsham T, Barton JJS, Kingstone A, Dewhurst R, Underwood G. Modeling eye movements in visual agnosia with a saliency map approach: bottom-up guidance or top-down strategy? Neural Netw 2011; 24:665-77. PMID: 21316191; DOI: 10.1016/j.neunet.2011.01.004
Abstract
Two recent papers (Foulsham, Barton, Kingstone, Dewhurst, & Underwood, 2009; Mannan, Kennard, & Husain, 2009) report that neuropsychological patients with a profound object-recognition impairment (visual agnosic subjects) differ from healthy observers in the way their eye movements are controlled when looking at images. The interpretation of these papers is that eye movements can be modeled as the selection of points on a saliency map, and that agnosic subjects show an increased reliance on visual saliency, i.e., brightness and contrast in low-level stimulus features. Here we review this approach and present new data from our own experiments with an agnosic patient that quantify the relationship between saliency and fixation location. In addition, we consider whether the perceptual difficulties of individual patients might be modeled by selectively weighting the different features involved in a saliency map. Our data indicate that saliency is not always a good predictor of fixation in agnosia: even for our agnosic subject, as for normal observers, the saliency-fixation relationship varied as a function of the task. This means that top-down processes still have a significant effect on the earliest stages of scanning in visual agnosia, indicating severe limitations for the saliency map model. Top-down, active strategies, which are the hallmark of the human visual system, play a vital role in eye movement control, whether we know what we are looking at or not.
Affiliation(s)
- Tom Foulsham
- Department of Psychology, University of British Columbia, Canada.
118
Wolfe JM, Võ MLH, Evans KK, Greene MR. Visual search in scenes involves selective and nonselective pathways. Trends Cogn Sci 2011; 15:77-84. PMID: 21227734; DOI: 10.1016/j.tics.2010.12.001
Abstract
How does one find objects in scenes? For decades, visual search models have been built on experiments in which observers search for targets, presented among distractor items, isolated and randomly arranged on blank backgrounds. Are these models relevant to search in continuous scenes? This article argues that the mechanisms that govern artificial, laboratory search tasks do play a role in visual search in scenes. However, scene-based information is used to guide search in ways that had no place in earlier models. Search in scenes might be best explained by a dual-path model: a 'selective' path in which candidate objects must be individually selected for recognition and a 'nonselective' path in which information can be extracted from global and/or statistical information.
Affiliation(s)
- Jeremy M Wolfe
- Brigham & Women's Hospital, Harvard Medical School, 64 Sidney St. Suite 170, Cambridge, MA 02139, USA.
119
120
Foulsham T, Underwood G. If Visual Saliency Predicts Search, Then Why? Evidence from Normal and Gaze-Contingent Search Tasks in Natural Scenes. Cognit Comput 2010. DOI: 10.1007/s12559-010-9069-9