1. Moskowitz JB, Fooken J, Castelhano MS, Gallivan JP, Flanagan JR. Visual search for reach targets in actionable space is influenced by movement costs imposed by obstacles. J Vis 2023; 23(6):4. PMID: 37289172. PMCID: PMC10257340. DOI: 10.1167/jov.23.6.4.
Abstract
Real-world search tasks often involve acting on a target object once it has been located. However, few studies have examined whether movement-related costs associated with acting on located objects influence visual search. Here, using a task in which participants reached to a target object after locating it, we examined whether people take into account obstacles that increase movement-related costs for some regions of the reachable search space but not others. In each trial, a set of 36 objects (4 targets and 32 distractors) was displayed on a vertical screen and participants moved a cursor to a target after locating it. Participants had to fixate on an object to determine whether it was a target or distractor. A rectangular obstacle, of varying length, location, and orientation, was briefly displayed at the start of each trial. Participants controlled the cursor by moving the handle of a robotic manipulandum in a horizontal plane. The handle applied forces to simulate contact between the cursor and the unseen obstacle. We found that search, measured using eye movements, was biased toward regions of the search space that could be reached without moving around the obstacle. This result suggests that when deciding where to search, people can incorporate the physical structure of the environment so as to reduce the movement-related cost of subsequently acting on the located target.
Affiliation(s)
- Joshua B Moskowitz
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Department of Psychology, Queen's University, Kingston, Ontario, Canada
- Jolande Fooken
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Monica S Castelhano
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Department of Psychology, Queen's University, Kingston, Ontario, Canada
- Jason P Gallivan
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Department of Psychology, Queen's University, Kingston, Ontario, Canada
- Department of Biomedical and Molecular Sciences, Queen's University, Kingston, Ontario, Canada
- J Randall Flanagan
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Department of Psychology, Queen's University, Kingston, Ontario, Canada
2. Beitner J, Helbing J, Draschkow D, Võ MLH. Get your guidance going: investigating the activation of spatial priors for efficient search in virtual reality. Brain Sci 2021; 11(1):44. PMID: 33406655. PMCID: PMC7823740. DOI: 10.3390/brainsci11010044.
Abstract
Repeated search studies are a hallmark of research on the interplay between memory and attention. Because results are usually averaged across searches, the substantial decrease in response times between the first and second search through the same environment is rarely discussed. This search initiation effect is often the most dramatic decrease in search times in a series of sequential searches, yet the nature of this initial lack of search efficiency has thus far remained unexplored. We tested the hypothesis that the activation of spatial priors produces this search efficiency profile. Before searching repeatedly through scenes in virtual reality, participants either (1) previewed the scene, (2) saw an interrupted preview, or (3) started searching immediately. The search initiation effect was present in the last condition but in neither of the preview conditions. Eye movement metrics revealed that the locus of this effect lies in search guidance rather than in search initiation or decision time, and that it goes beyond effects of object learning or incidental memory. Our study suggests that upon visual processing of an environment, a process of activating spatial priors to enable orientation is initiated; this takes a toll on search time at first, but once the priors are activated they can be used to guide subsequent searches.
Affiliation(s)
- Julia Beitner
- Scene Grammar Lab, Institute of Psychology, Goethe University, 60323 Frankfurt am Main, Germany
- Jason Helbing
- Scene Grammar Lab, Institute of Psychology, Goethe University, 60323 Frankfurt am Main, Germany
- Dejan Draschkow
- Brain and Cognition Laboratory, Department of Psychiatry, University of Oxford, Oxford OX3 7JX, UK
- Melissa L.-H. Võ
- Scene Grammar Lab, Institute of Psychology, Goethe University, 60323 Frankfurt am Main, Germany
3. Litchfield D, Donovan T. Expecting the initial glimpse: prior target knowledge activation or repeated search does not eliminate scene preview search benefits. J Cogn Psychol 2019. DOI: 10.1080/20445911.2018.1555163.
Affiliation(s)
- Tim Donovan
- Medical & Sport Sciences, University of Cumbria, Carlisle, UK
4. Josephs EL, Draschkow D, Wolfe JM, Võ MLH. Gist in time: scene semantics and structure enhance recall of searched objects. Acta Psychol (Amst) 2016; 169:100-108. PMID: 27270227. DOI: 10.1016/j.actpsy.2016.05.013.
Abstract
Previous work has shown that recall of objects incidentally encountered as targets in visual search is better than recall of objects that have been intentionally memorized (Draschkow, Wolfe, & Võ, 2014). However, this counterintuitive result is not seen when these tasks are performed with non-scene stimuli. The goal of the current paper is to determine which features of search in a scene contribute to higher recall rates compared with a memorization task. In each of four experiments, we compare the free recall rate for target objects following a search to the rate following a memorization task. Across the experiments, the stimuli include progressively more scene-related information. Experiment 1 provides the spatial relations between objects. Experiment 2 adds relative size and depth of objects. Experiments 3 and 4 include scene layout and semantic information. We find that search leads to better recall than explicit memorization when scene layout and semantic information are present, as long as the participant has ample time (2,500 ms) to integrate this information with knowledge about the target object (Exp. 4). These results suggest that the integration of scene and target information not only leads to more efficient search but can also contribute to stronger memory representations than intentional memorization.
Affiliation(s)
- Emilie L Josephs
- Cognitive and Neural Organization Lab, Harvard University, Cambridge, MA, USA
- Dejan Draschkow
- Scene Grammar Lab, Johann Wolfgang Goethe-Universität, Frankfurt, Germany
- Jeremy M Wolfe
- Visual Attention Lab, Brigham and Women's Hospital, Boston, MA, USA
- Harvard Medical School, Boston, MA, USA
- Melissa L-H Võ
- Scene Grammar Lab, Johann Wolfgang Goethe-Universität, Frankfurt, Germany
5.
Abstract
Many daily activities involve looking for something. The ease with which these searches are performed often allows one to forget that searching represents complex interactions between visual attention and memory. Although a clear understanding exists of how search efficiency will be influenced by visual features of targets and their surrounding distractors or by the number of items in the display, the role of memory in search is less well understood. Contextual cueing studies have shown that implicit memory for repeated item configurations can facilitate search in artificial displays. When searching more naturalistic environments, other forms of memory come into play. For instance, semantic memory provides useful information about which objects are typically found where within a scene, and episodic scene memory provides information about where a particular object was seen the last time a particular scene was viewed. In this paper, we will review work on these topics, with special emphasis on the role of memory in guiding search in organized, real-world scenes.
Affiliation(s)
- Melissa Le-Hoa Võ
- Scene Grammar Lab, Department of Cognitive Psychology, Goethe University Frankfurt, Frankfurt, Germany
6. LaPointe MRP, Lupiáñez J, Milliken B. Context congruency effects in change detection: opposing effects on detection and identification. Vis Cogn 2013. DOI: 10.1080/13506285.2013.787133.
7. Glaholt MG, Reingold EM. Direct control of fixation times in scene viewing: evidence from analysis of the distribution of first fixation duration. Vis Cogn 2012. DOI: 10.1080/13506285.2012.666295.
8.
9. Object-scene inconsistencies do not capture gaze: evidence from the flash-preview moving-window paradigm. Atten Percept Psychophys 2011; 73:1742-53. PMID: 21607814. DOI: 10.3758/s13414-011-0150-6.
Abstract
In the present study, we investigated the influence of object-scene relationships on eye movement control during scene viewing. We specifically tested whether an object that is inconsistent with its scene context can capture gaze from the visual periphery. In four experiments, we presented rendered images of naturalistic scenes and compared consistent (baseline) objects with objects that were semantically inconsistent, syntactically inconsistent, or both, within those scenes. To disentangle the effects of extrafoveal and foveal object-scene processing on eye movement control, we used the flash-preview moving-window paradigm: a short scene preview was followed by an object search or free viewing of the scene, during which visual input was available only via a small gaze-contingent window. This method maximized extrafoveal processing during the preview but limited scene analysis to near-foveal regions during later stages of scene viewing. Across all experiments, there was no indication that gaze was attracted toward object-scene inconsistencies. Rather than capturing gaze, the semantic inconsistency of an object weakened contextual guidance, resulting in impeded search performance and inefficient eye movement control. We conclude that inconsistent objects do not capture gaze from an initial glimpse of a scene.
10. Brooks J, Belopolsky A, Matsukura M, Palomares M. Object Perception, Attention, and Memory (OPAM) 2009 conference report: 17th annual meeting, Boston, MA, USA. Vis Cogn 2010. DOI: 10.1080/13506280903314433.