1. Saltzmann SM, Eich B, Moen KC, Beck MR. Activated long-term memory and visual working memory during hybrid visual search: Effects on target memory search and distractor memory. Mem Cognit 2024. [PMID: 38528298] [DOI: 10.3758/s13421-024-01556-1]
Abstract
In hybrid visual search, observers must maintain multiple target templates and subsequently search for any one of those targets. If the number of potential target templates exceeds visual working memory (VWM) capacity, then the target templates are assumed to be maintained in activated long-term memory (aLTM). Observers must search the array for potential targets (visual search), as well as search through memory (target memory search). Increasing the target memory set size reduces accuracy, increases search response times (RTs), and increases dwell time on distractors. However, the extent of observers' memory for distractors during hybrid search is largely unknown. In the current study, the impact of hybrid search on target memory search (measured by dwell time on distractors, false alarms, and misses) and distractor memory (measured by distractor revisits and recognition memory of recently viewed distractors) was measured. Specifically, we aimed to better understand how changes in behavior during hybrid search impact distractor memory. Increased target memory set size led to an increase in search RTs, distractor dwell times, false alarms, and target identification misses. Increasing the target memory set size also increased revisits to distractors, suggesting impaired distractor location memory, but had no effect on a two-alternative forced-choice (2AFC) distractor recognition memory test presented during the search trial. These results suggest a lack of interference between the memory stores maintaining target template representations (aLTM) and distractor information (VWM): loading aLTM with more target templates does not impact VWM for distracting information.
Affiliation(s)
- Stephanie M Saltzmann: Department of Psychology, Louisiana State University, 236 Audubon Hall, Baton Rouge, LA, 70803, USA
- Brandon Eich: Department of Psychology, Louisiana State University, 236 Audubon Hall, Baton Rouge, LA, 70803, USA
- Katherine C Moen: Department of Psychology, Louisiana State University, 236 Audubon Hall, Baton Rouge, LA, 70803, USA; Department of Psychology, University of Nebraska at Kearney, 2504 9th Ave, Kearney, NE, 68849, USA
- Melissa R Beck: Department of Psychology, Louisiana State University, 236 Audubon Hall, Baton Rouge, LA, 70803, USA
2. Williams LH, Carrigan AJ, Mills M, Auffermann WF, Rich AN, Drew T. Characteristics of expert search behavior in volumetric medical image interpretation. J Med Imaging (Bellingham) 2021; 8:041208. [PMID: 34277889] [DOI: 10.1117/1.jmi.8.4.041208]
Abstract
Purpose: Experienced radiologists have enhanced global processing ability relative to novices, allowing experts to rapidly detect medical abnormalities without performing an exhaustive search. However, evidence for global processing models is primarily limited to two-dimensional image interpretation, and it is unclear whether these findings generalize to volumetric images, which are widely used in clinical practice. We examined whether radiologists searching volumetric images use methods consistent with global processing models of expertise. In addition, we investigated whether search strategy (scanning/drilling) differs with experience level. Approach: Fifty radiologists with a wide range of experience evaluated chest computed-tomography scans for lung nodules while their eye movements and scrolling behaviors were tracked. Multiple linear regressions were used to determine: (1) how search behaviors differed with years of experience and the number of chest CTs evaluated per week and (2) which search behaviors predicted better performance. Results: Contrary to global processing models based on 2D images, experience was unrelated to measures of global processing (saccadic amplitude, coverage, time to first fixation, search time, and depth passes) in this task. Drilling behavior was associated with better accuracy than scanning behavior when controlling for observer experience. Greater image coverage was a strong predictor of task accuracy. Conclusions: Global processing ability may play a relatively small role in volumetric image interpretation, where global scene statistics are not available to radiologists in a single glance. Rather, in volumetric images, it may be more important to engage in search strategies that support a more thorough search of the image.
Affiliation(s)
- Lauren H Williams: University of California, San Diego, Department of Psychology, San Diego, California, United States
- Ann J Carrigan: Macquarie University, Department of Psychology, Sydney, New South Wales, Australia; Macquarie University, Perception in Action Research Centre, Sydney, New South Wales, Australia; Macquarie University, Centre for Elite Performance, Expertise, and Training, Sydney, New South Wales, Australia
- Megan Mills: University of Utah, School of Medicine, Department of Radiology and Imaging Sciences, Salt Lake City, Utah, United States
- William F Auffermann: University of Utah, School of Medicine, Department of Radiology and Imaging Sciences, Salt Lake City, Utah, United States
- Anina N Rich: Macquarie University, Perception in Action Research Centre, Sydney, New South Wales, Australia; Macquarie University, Centre for Elite Performance, Expertise, and Training, Sydney, New South Wales, Australia; Macquarie University, Department of Cognitive Science, Sydney, New South Wales, Australia
- Trafton Drew: University of Utah, Department of Psychology, Salt Lake City, Utah, United States
3. Avoiding potential pitfalls in visual search and eye-movement experiments: A tutorial review. Atten Percept Psychophys 2021; 83:2753-2783. [PMID: 34089167] [PMCID: PMC8460493] [DOI: 10.3758/s13414-021-02326-w]
Abstract
Examining eye-movement behavior during visual search is an increasingly popular approach for gaining insights into the moment-to-moment processing that takes place when we look for targets in our environment. In this tutorial review, we describe a set of pitfalls and considerations that are important for researchers – both experienced and new to the field – when engaging in eye-movement and visual search experiments. We walk the reader through the research cycle of a visual search and eye-movement experiment, from choosing the right predictions, through to data collection, reporting of methodology, analytic approaches, the different dependent variables to analyze, and drawing conclusions from patterns of results. Overall, our hope is that this review can serve as a guide, a talking point, a reflection on the practices and potential problems with the current literature on this topic, and ultimately a first step towards standardizing research practices in the field.
4. Maintaining rejected distractors in working memory during visual search depends on search stimuli: Evidence from contralateral delay activity. Atten Percept Psychophys 2021; 83:67-84. [PMID: 33000442] [DOI: 10.3758/s13414-020-02127-7]
Abstract
The presence of memory for rejected distractors during visual search has been heavily debated in the literature and has proven challenging to investigate behaviorally. In this research, we used an electrophysiological index of working memory (contralateral delay activity) to passively measure working memory activity during visual search. Participants were asked to indicate whether a novel target was present or absent in a lateralized search array with three visual set sizes (2, 4, or 6). If rejected distractors are maintained in working memory during search, working memory activity should increase with the number of distractors that need to be evaluated. Therefore, we predicted the amplitude of the contralateral delay activity would be larger for target-absent trials and would increase with visual set size until WM capacity was reached. In Experiment 1, we found no evidence for distractor maintenance in working memory during search for real-world stimuli. In Experiment 2, we found partial evidence in support of distractor maintenance during search for stimuli with high target/distractor similarity. In both experiments, working memory capacity did not appear to be a limiting factor during visual search. These results suggest the role of working memory during search may depend on the visual search task in question. Maintaining distractors in working memory appears to be unnecessary during search for realistic stimuli. However, there appears to be a limited role for distractor maintenance during search for artificial stimuli with a high degree of feature overlap.
5. Zhou Y, Yu Y. Human visual search follows a suboptimal Bayesian strategy revealed by a spatiotemporal computational model and experiment. Commun Biol 2021; 4:34. [PMID: 33397998] [PMCID: PMC7782508] [DOI: 10.1038/s42003-020-01485-0]
Abstract
There is conflicting evidence regarding whether humans can make spatially optimal eye movements during visual search. Some studies have shown that humans can optimally integrate information across fixations and determine the next fixation location; however, these models have generally ignored the control of fixation duration and memory limitations, and the model results do not agree well with the details of human eye movement metrics. Here, we measured the temporal course of the human visibility map and performed a visual search experiment. We further built a continuous-time eye movement model that considers saccadic inaccuracy, saccadic bias, and memory constraints. We show that this model agrees better with the spatial and temporal properties of human eye movements and predicts that humans have a memory capacity of around eight previous fixations. The model results reveal that humans employ a suboptimal eye movement strategy to find a target, which may minimize costs while still achieving sufficiently high search performance.
Affiliation(s)
- Yunhui Zhou: School of Life Sciences, Fudan University, 200433, Shanghai, China
- Yuguo Yu: School of Life Sciences, Fudan University, 200433, Shanghai, China; State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Fudan University, 200433, Shanghai, China; Human Phenome Institute, Fudan University, 200433, Shanghai, China; Research Institute of Intelligent Complex Systems and Institutes of Brain Science, Fudan University, 200433, Shanghai, China; Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, 200433, Shanghai, China
6. Xin K, Li Z. Visual working memory load does not affect the overall stimulus processing time in visual search. Q J Exp Psychol (Hove) 2020; 73:330-343. [DOI: 10.1177/1747021819881622]
Abstract
The dual-task paradigm is widely used to study the interaction between visual search and working memory. A number of studies have shown that holding items in working memory delays the overall response time (RT) in visual search but does not affect the efficiency of search (i.e., the slope of the RT × set size function). Why does the memory load affect only the overall RT? Some researchers have proposed that this load effect on overall RT may be caused by factors that only affect response-selection processes, while others have argued that it may reflect the effect of visual working memory load on visual search itself. This study investigated the two competing hypotheses by measuring the threshold stimulus exposure duration (TSED) for successfully fulfilling a search task. Experiment 1 replicated the large overall RT difference with the RT method but found only a small, though reliable, overall TSED difference with the TSED method. Experiment 2, with better controls, found no TSED difference when manipulating the visual working memory load. Experiment 3 showed that the TSED is not influenced by processes in the response-selection stage. The present findings suggest that the overall stimulus processing time in visual search is not affected by visual working memory load and that the effect of memory load on overall RT is largely due to factors affecting response selection alone.
Affiliation(s)
- Keyun Xin: Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, P.R. China
- Zhi Li: Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, P.R. China
7. Williams LH, Drew T. What do we know about volumetric medical image interpretation? A review of the basic science and medical image perception literatures. Cogn Res Princ Implic 2019; 4:21. [PMID: 31286283] [PMCID: PMC6614227] [DOI: 10.1186/s41235-019-0171-6]
Abstract
Interpretation of volumetric medical images represents a rapidly growing proportion of the workload in radiology. However, relatively little is known about the strategies that best guide search behavior when looking for abnormalities in volumetric images. Although there is extensive literature on two-dimensional medical image perception, it is an open question whether the conclusions drawn from these images can be generalized to volumetric images. Importantly, volumetric images have distinct characteristics (e.g., scrolling through depth, smooth-pursuit eye-movements, motion onset cues, etc.) that should be considered in future research. In this manuscript, we will review the literature on medical image perception and discuss relevant findings from basic science that can be used to generate predictions about expertise in volumetric image interpretation. By better understanding search through volumetric images, we may be able to identify common sources of error, characterize the optimal strategies for searching through depth, or develop new training and assessment techniques for radiology residents.
8. Friedman GN, Johnson L, Williams ZM. Long-Term Visual Memory and Its Role in Learning Suppression. Front Psychol 2018; 9:1896. [PMID: 30369895] [PMCID: PMC6194155] [DOI: 10.3389/fpsyg.2018.01896]
Abstract
Long-term memory is a core aspect of human learning that permits a wide range of skills and behaviors often important for survival. While this core ability has been broadly observed for procedural and declarative memory, whether similar mechanisms subserve basic sensory or perceptual processes remains unclear. Here, we use a visual learning paradigm to show that training humans to search for common visual features in the environment leads to a persistent improvement in performance over consecutive days but, surprisingly, suppresses the subsequent ability to learn similar visual features. This suppression is reversed if the memory is prevented from consolidating, while still permitting the ability to learn multiple visual features simultaneously. These findings reveal a memory mechanism that may enable salient sensory patterns to persist in memory over prolonged durations, but which also functions to prevent false-positive detection by proactively suppressing new learning.
Affiliation(s)
- Gabriel N Friedman: Department of Neurosurgery, Harvard Medical School, Massachusetts General Hospital, Boston, MA, United States
- Lance Johnson: Department of Neurobiology, Harvard University, Cambridge, MA, United States
- Ziv M Williams: Department of Neurosurgery, Harvard Medical School, Massachusetts General Hospital, Boston, MA, United States; Harvard-MIT Health Sciences and Technology, Boston, MA, United States; Program in Neuroscience, Harvard Medical School, Harvard University, Boston, MA, United States
9. What Does It Take to Search Organized? The Cognitive Correlates of Search Organization During Cancellation After Stroke. J Int Neuropsychol Soc 2018; 24:424-436. [PMID: 29198217] [DOI: 10.1017/s1355617717001254]
Abstract
OBJECTIVES: Stroke can lead to deficits in the organization of visual search. Cancellation tests are frequently used in standard neuropsychological assessment and appear suitable for measuring search organization. The current aim was to evaluate which cognitive functions are associated with cancellation organization measures after stroke. METHODS: Stroke patients admitted to inpatient rehabilitation were included in this retrospective study. We performed exploratory factor analyses to explore cognitive domains. A digital shape cancellation test (SC) was administered, and measures of search organization (intersections rate and best r) were computed. The following cognitive functions were measured by neuropsychological testing: neglect (SC; line bisection, LB; Catherine Bergego Scale, CBS; and the Balloons Test), visuospatial perception and construction (Rey Complex Figure Test, RCFT), psychomotor speed (Trail Making Test, TMT-A), executive functioning/working memory (TMT-B), spatial planning (Tower Test), rule learning (Brixton Test), short-term auditory memory (Digit Span Forward, DSF), and verbal working memory (Digit Span Backward, DSB). RESULTS: In total, 439 stroke patients were included in our analyses. Four clusters were separated: "Executive functioning" (TMT-A, TMT-B, Brixton Test, and Tower Test), "Verbal memory" (DSF and DSB), "Search organization" (intersections rate and best r), and "Neglect" (CBS, RCFT copy, Balloons Test, SC, and LB). CONCLUSIONS: Search organization during cancellation, as measured with intersections rate and best r, seems to be a distinct cognitive construct compared to the existing cognitive domains tested during neuropsychological assessment. Administering cancellation tests and analyzing measures of search organization could provide useful additional insights into the visuospatial processes of stroke patients. (JINS, 2018, 24, 424-436).
10. Visual search for changes in scenes creates long-term, incidental memory traces. Atten Percept Psychophys 2018; 80:829-843. [PMID: 29427122] [DOI: 10.3758/s13414-018-1486-y]
Abstract
Humans are very good at remembering large numbers of scenes over substantial periods of time. But how good are they at remembering changes to scenes? In this study, we tested scene memory and change detection two weeks after initial scene learning. In Experiments 1-3, scenes were learned incidentally during visual search for change. In Experiment 4, observers explicitly memorized scenes. At test, after two weeks observers were asked to discriminate old from new scenes, to recall a change that they had detected in the study phase, or to detect a newly introduced change in the memorization experiment. Next, they performed a change detection task, usually looking for the same change as in the study period. Scene recognition memory was found to be similar in all experiments, regardless of the study task. In Experiment 1, more difficult change detection produced better scene memory. Experiments 2 and 3 supported a "depth-of-processing" account for the effects of initial search and change detection on incidental memory for scenes. Of most interest, change detection was faster during the test phase than during the study phase, even when the observer had no explicit memory of having found that change previously. This result was replicated in two of our three change detection experiments. We conclude that scenes can be encoded incidentally as well as explicitly and that changes in those scenes can leave measurable traces even if they are not explicitly recalled.
11. Godwin HJ, Reichle ED, Menneer T. Modeling Lag-2 Revisits to Understand Trade-Offs in Mixed Control of Fixation Termination During Visual Search. Cogn Sci 2016; 41:996-1019. [PMID: 27322836] [DOI: 10.1111/cogs.12379]
Abstract
An important question about eye-movement behavior is when the decision is made to terminate a fixation and program the following saccade. Different approaches have found converging evidence in favor of a mixed-control account, in which there is some overlap between processing information at fixation and planning the following saccade. We examined one interesting instance of mixed control in visual search: lag-2 revisits, during which observers fixate a stimulus, move to a different stimulus, and then revisit the first stimulus on the next fixation. Results show that the probability of lag-2 revisits occurring increased with the number of target-similar stimuli, and revisits were preceded by a brief fixation on the intervening distractor stimulus. We developed the Efficient Visual Sampling (EVS) computational model to simulate our findings (fixation durations and fixation locations) and to provide insight into mixed control of fixations and the perceptual, cognitive, and motor processes that produce lag-2 revisits.
12.
Abstract
Many daily activities involve looking for something. The ease with which these searches are performed often allows one to forget that searching represents complex interactions between visual attention and memory. Although a clear understanding exists of how search efficiency will be influenced by visual features of targets and their surrounding distractors or by the number of items in the display, the role of memory in search is less well understood. Contextual cueing studies have shown that implicit memory for repeated item configurations can facilitate search in artificial displays. When searching more naturalistic environments, other forms of memory come into play. For instance, semantic memory provides useful information about which objects are typically found where within a scene, and episodic scene memory provides information about where a particular object was seen the last time a particular scene was viewed. In this paper, we will review work on these topics, with special emphasis on the role of memory in guiding search in organized, real-world scenes.
Affiliation(s)
- Melissa Le-Hoa Võ: Scene Grammar Lab, Department of Cognitive Psychology, Goethe University Frankfurt, Frankfurt, Germany
13. Draschkow D, Wolfe JM, Võ MLH. Seek and you shall remember: scene semantics interact with visual search to build better memories. J Vis 2014; 14:10. [PMID: 25015385] [DOI: 10.1167/14.8.10]
Abstract
Memorizing critical objects and their locations is an essential part of everyday life. In the present study, incidental encoding of objects in naturalistic scenes during search was compared to explicit memorization of those scenes. To investigate if prior knowledge of scene structure influences these two types of encoding differently, we used meaningless arrays of objects as well as objects in real-world, semantically meaningful images. Surprisingly, when participants were asked to recall scenes, their memory performance was markedly better for searched objects than for objects they had explicitly tried to memorize, even though participants in the search condition were not explicitly asked to memorize objects. This finding held true even when objects were observed for an equal amount of time in both conditions. Critically, the recall benefit for searched over memorized objects in scenes was eliminated when objects were presented on uniform, non-scene backgrounds rather than in a full scene context. Thus, scene semantics not only help us search for objects in naturalistic scenes, but appear to produce a representation that supports our memory for those objects beyond intentional memorization.
Affiliation(s)
- Jeremy M Wolfe: Harvard Medical School, Cambridge, MA, USA; Brigham and Women's Hospital, Boston, MA, USA
- Melissa L H Võ: Harvard Medical School, Cambridge, MA, USA; Brigham and Women's Hospital, Boston, MA, USA; Johann Wolfgang Goethe-Universität, Frankfurt, Germany
14. Were you paying attention to where you looked? The role of executive working memory in visual search. Psychon Bull Rev 2013; 15:372-7. [DOI: 10.3758/pbr.15.2.372]
15. Search through complex motion displays does not break down under spatial memory load. Psychon Bull Rev 2013; 21:652-8. [DOI: 10.3758/s13423-013-0537-6]
16.
Abstract
Visual working memory is an online workspace for temporarily representing visual information from the environment. The two most prevalent empirical characteristics of working memory are that it is supported by sustained neural activity over a delay period and that it has a severely limited capacity for representing multiple items simultaneously. Traditionally, such delay activity and capacity limits have been considered exclusive to maintaining information about objects that are no longer visible to the observer. Here, by contrast, we provide both neurophysiological and psychophysical evidence that the sustained neural activity and capacity limits for items that are continuously visible to the human observer are indistinguishable from those measured for items that are no longer visible. This holds true even when observers know that the objects will not disappear from the visual field. These results demonstrate that our explicit representation of objects that are still "in view" is far more limited than previously assumed.
17. Woods AJ, Göksun T, Chatterjee A, Zelonis S, Mehta A, Smith SE. The development of organized visual search. Acta Psychol (Amst) 2013; 143:191-9. [PMID: 23584560] [DOI: 10.1016/j.actpsy.2013.03.008]
Abstract
Visual search plays an important role in guiding behavior. Children have more difficulty performing conjunction search tasks than adults. The present research evaluates whether developmental differences in children's ability to organize serial visual search (i.e., search organization skills) contribute to performance limitations in a typical conjunction search task. We evaluated 134 children between the ages of 2 and 17 on separate tasks measuring search for targets defined by a conjunction of features or by distinct features. Our results demonstrated that children organize their visual search better as they get older. As children's skills at organizing visual search improve, they become more accurate at locating targets defined by a conjunction of features amongst distractors, but not targets with distinct features. Developmental limitations in children's ability to organize their visual search of the environment are an important component of poor conjunction search in young children. In addition, our findings provide preliminary evidence that, like other visuospatial tasks, exposure to reading may influence children's spatial orientation to the visual environment when performing a visual search.
18.
Abstract
It seems intuitive to think that previous exposure or interaction with an environment should make it easier to search through it and, no doubt, this is true in many real-world situations. However, in a recent study, we demonstrated that previous exposure to a scene does not necessarily speed search within that scene. For instance, when observers performed as many as 15 searches for different objects in the same, unchanging scene, the speed of search did not decrease much over the course of these multiple searches (Võ & Wolfe, 2012). Only when observers were asked to search for the same object again did search become considerably faster. We argued that our naturalistic scenes provided such strong "semantic" guidance-e.g., knowing that a faucet is usually located near a sink-that guidance by incidental episodic memory-having seen that faucet previously-was rendered less useful. Here, we directly manipulated the availability of semantic information provided by a scene. By monitoring observers' eye movements, we found a tight coupling of semantic and episodic memory guidance: Decreasing the availability of semantic information increases the use of episodic memory to guide search. These findings have broad implications regarding the use of memory during search in general and particularly during search in naturalistic scenes.
Affiliation(s)
- Melissa L-H Võ: Visual Attention Lab, Harvard Medical School, Brigham and Women's Hospital, USA
19. Oculomotor inhibition of return: How soon is it “recoded” into spatiotopic coordinates? Atten Percept Psychophys 2012; 74:1145-53. [DOI: 10.3758/s13414-012-0312-1]
20
|
Solman GJ, Cheyne JA, Smilek D. Found and missed: Failing to recognize a search target despite moving it. Cognition 2012; 123:100-18. [DOI: 10.1016/j.cognition.2011.12.006] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2011] [Revised: 11/04/2011] [Accepted: 12/12/2011] [Indexed: 11/29/2022]
|
21
|
Does oculomotor inhibition of return influence fixation probability during scene search? Atten Percept Psychophys 2011; 73:2384-98. [DOI: 10.3758/s13414-011-0191-x] [Citation(s) in RCA: 34] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
22
|
Võ MLH, Wolfe JM. When does repeated search in scenes involve memory? Looking at versus looking for objects in scenes. J Exp Psychol Hum Percept Perform 2011; 38:23-41. [PMID: 21688939 DOI: 10.1037/a0024147] [Citation(s) in RCA: 61] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
One might assume that familiarity with a scene or previous encounters with objects embedded in a scene would benefit subsequent search for those items. However, in a series of experiments we show that this is not the case: When participants were asked to subsequently search for multiple objects in the same scene, search performance remained essentially unchanged over the course of searches despite increasing scene familiarity. Similarly, looking at target objects during previews, which included letter search, 30 seconds of free viewing, or even 30 seconds of memorizing a scene, also did not benefit search for the same objects later on. However, when the same object was searched for again, memory for the previous search was capable of producing very substantial speeding of search despite many different intervening searches. This was especially the case when the previous search engagement had been active rather than supported by a cue. While these search benefits speak to the strength of memory-guided search when the same search target is repeated, the lack of memory guidance during initial object searches, despite previous encounters with the target objects, demonstrates the dominance of guidance by generic scene knowledge in real-world search.
Affiliation(s)
- Melissa L-H Võ
- Visual Attention Lab, Harvard Medical School, 64 Sidney Street, Suite 170, Cambridge, MA 02139, USA.
|
23
|
Hout MC, Goldinger SD. Incidental learning speeds visual search by lowering response thresholds, not by improving efficiency: evidence from eye movements. J Exp Psychol Hum Percept Perform 2011; 38:90-112. [PMID: 21574743 DOI: 10.1037/a0023894] [Citation(s) in RCA: 44] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
When observers search for a target object, they incidentally learn the identities and locations of "background" objects in the same display. This learning can facilitate search performance, eliciting faster reaction times for repeated displays. Despite these findings, visual search has been successfully modeled using architectures that maintain no history of attentional deployments; they are amnesic (e.g., Guided Search Theory). In the current study, we asked two questions: (1) Under what conditions does such incidental learning occur? (2) What does viewing behavior reveal about the efficiency of attentional deployments over time? In two experiments, we tracked eye movements during repeated visual search, and we tested incidental memory for repeated nontarget objects. Across conditions, the consistency of search sets and spatial layouts was manipulated to assess their respective contributions to learning. Using viewing behavior, we contrasted three potential accounts for faster searching with experience. The results indicate that learning does not result in faster object identification or greater search efficiency. Instead, familiar search arrays appear to allow faster resolution of search decisions, whether targets are present or absent.
Affiliation(s)
- Michael C Hout
- Department of Psychology, Arizona State University, Tempe, AZ 85287-1104, USA
|
24
|
Memory load affects visual search processes without influencing search efficiency. Vision Res 2011; 51:1185-91. [DOI: 10.1016/j.visres.2011.03.009] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2010] [Revised: 03/07/2011] [Accepted: 03/14/2011] [Indexed: 11/22/2022]
|
25
|
Inhibitory tagging in visual search: Only in difficult search are items tagged individually. Vision Res 2010; 50:2069-79. [DOI: 10.1016/j.visres.2010.07.017] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2010] [Revised: 07/20/2010] [Accepted: 07/21/2010] [Indexed: 11/15/2022]
|
26
|
Emrich SM, Al-Aidroos N, Pratt J, Ferber S. Rapid Communication: Finding memory in search: The effect of visual working memory load on visual search. Q J Exp Psychol (Hove) 2010; 63:1457-66. [DOI: 10.1080/17470218.2010.483768] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
There is now substantial evidence that during visual search, previously searched distractors are stored in memory to prevent them from being reselected. Studies examining which memory resources are involved in this process have indicated that while a concurrent spatial working memory task does affect search slopes, depleting visual working memory (VWM) resources does not. In the present study, we confirm that VWM load indeed has no effect on the search slope; however, there is an increase in overall reaction times that is directly related to the number of items held in VWM. Importantly, this effect on search time increases proportionally with the memory load until the capacity of VWM is reached. Furthermore, the search task interfered with the number of items stored in VWM during the concurrent change-detection task. These findings suggest that VWM plays a role in the inhibition of previously searched distractors.
Affiliation(s)
- Jay Pratt
- University of Toronto, Toronto, Ontario, Canada
- Susanne Ferber
- University of Toronto, Toronto, Ontario, Canada, and Rotman Research Institute, Baycrest, Toronto, Ontario, Canada
|
27
|
Eye movements in active visual search: A computable phenomenological model. Atten Percept Psychophys 2010; 72:285-307. [DOI: 10.3758/app.72.2.285] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
28
|
Eye movement trajectories in active visual search: Contributions of attention, memory, and scene boundaries to pattern formation. Atten Percept Psychophys 2010; 72:114-41. [DOI: 10.3758/app.72.1.114] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
29
|
Emrich SM, Al-Aidroos N, Pratt J, Ferber S. Visual search elicits the electrophysiological marker of visual working memory. PLoS One 2009; 4:e8042. [PMID: 19956663 PMCID: PMC2777337 DOI: 10.1371/journal.pone.0008042] [Citation(s) in RCA: 70] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2009] [Accepted: 11/02/2009] [Indexed: 11/18/2022] Open
Abstract
BACKGROUND
Although limited in capacity, visual working memory (VWM) plays an important role in many aspects of visually-guided behavior. Recent experiments have demonstrated an electrophysiological marker of VWM encoding and maintenance, the contralateral delay activity (CDA), which has been shown in multiple tasks that have both explicit and implicit memory demands. Here, we investigate whether the CDA is evident during visual search, a thoroughly-researched task that is a hallmark of visual attention but has no explicit memory requirements.
METHODOLOGY/PRINCIPAL FINDINGS
The results demonstrate that the CDA is present during a lateralized search task, and that it is similar in amplitude to the CDA observed in a change-detection task, but peaks slightly later. The changes in CDA amplitude during search were strongly correlated with VWM capacity, as well as with search efficiency. These results were paralleled by behavioral findings showing a strong correlation between VWM capacity and search efficiency.
CONCLUSIONS/SIGNIFICANCE
We conclude that the activity observed during visual search was generated by the same neural resources that subserve VWM, and that this activity reflects the maintenance of previously searched distractors.
Affiliation(s)
- Stephen M Emrich
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada.
|
30
|
Dickinson CA, Zelinsky GJ. Memory for the search path: evidence for a high-capacity representation of search history. Vision Res 2007; 47:1745-55. [PMID: 17482657 PMCID: PMC2129092 DOI: 10.1016/j.visres.2007.02.010] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2006] [Revised: 02/06/2007] [Accepted: 02/16/2007] [Indexed: 10/23/2022]
Abstract
Using a gaze-contingent paradigm, we directly measured observers' memory capacity for fixated distractor locations during search. After approximately half of the search objects had been fixated, they were masked and a spatial probe appeared at either a previously fixated location or a non-fixated location; observers then rated their confidence that the target had appeared at the probed location. Observers were able to differentiate the 12 most recently fixated distractor locations from non-fixated locations, but analyses revealed that these locations were represented fairly coarsely. We conclude that there exists a high-capacity, but low-resolution, memory for a search path.
|