1. Falon SL, Jobson L, Liddell BJ. Does culture moderate the encoding and recognition of negative cues? Evidence from an eye-tracking study. PLoS One 2024; 19:e0295301. PMID: 38630733; PMCID: PMC11023573; DOI: 10.1371/journal.pone.0295301.
Abstract
Cross-cultural research has elucidated many important differences between people from Western European and East Asian cultural backgrounds regarding how each group encodes and consolidates the contents of complex visual stimuli. While Western European groups typically demonstrate a perceptual bias towards centralised information, East Asian groups favour a perceptual bias towards background information. However, this research has largely focused on the perception of neutral cues, and thus questions remain regarding cultural group differences in both the perception and recognition of negative, emotionally significant cues. The present study therefore compared Western European (n = 42) and East Asian (n = 40) participants on a free-viewing task and a subsequent memory task utilising negative and neutral social cues. Attentional deployment to the centralised versus background components of negative and neutral social cues was indexed via eye-tracking, and memory was assessed with a cued-recognition task two days later. While both groups demonstrated an attentional bias towards the centralised components of the neutral cues, only the Western European group demonstrated this bias in the case of the negative cues. There were no significant differences observed between Western European and East Asian groups in terms of memory accuracy, although the Western European group was unexpectedly less sensitive to the centralised components of the negative cues. These findings suggest that culture modulates low-level attentional deployment to negative information but not higher-level recognition after a temporal interval. This paper is, to our knowledge, the first to concurrently consider the effect of culture on both attentional outcomes and memory for both negative and neutral cues.
Affiliation(s)
- Laura Jobson, School of Psychological Sciences, Monash University, Clayton, Australia
2. Beitner J, Helbing J, David EJ, Võ MLH. Using a flashlight-contingent window paradigm to investigate visual search and object memory in virtual reality and on computer screens. Sci Rep 2024; 14:8596. PMID: 38615047; DOI: 10.1038/s41598-024-58941-8.
Abstract
A popular technique to modulate visual input during search is to use gaze-contingent windows. However, these are often rather discomforting, providing the impression of visual impairment. To counteract this, we asked participants in this study to search through illuminated as well as dark three-dimensional scenes using a more naturalistic flashlight with which they could illuminate the rooms. In a surprise incidental memory task, we tested the identities and locations of objects encountered during search. Importantly, we tested this study design in both immersive virtual reality (VR; Experiment 1) and on a desktop-computer screen (Experiment 2). As hypothesized, searching with a flashlight increased search difficulty and memory usage during search. We found a memory benefit for identities of distractors in the flashlight condition in VR but not in the computer screen experiment. Surprisingly, location memory was comparable across search conditions despite the enormous difference in visual input. Subtle differences across experiments only appeared in VR after accounting for previous recognition performance, hinting at a benefit of flashlight search in VR. Our findings highlight that removing visual information does not necessarily impair location memory, and that screen experiments using virtual environments can elicit the same major effects as VR setups.
Affiliation(s)
- Julia Beitner, Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- Jason Helbing, Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- Erwan Joël David, Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany; LIUM, Le Mans Université, Le Mans, France
- Melissa Lê-Hoa Võ, Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
3. Broers N, Bainbridge WA, Michel R, Balestrieri E, Busch NA. The extent and specificity of visual exploration determines the formation of recollected memories in complex scenes. J Vis 2022; 22:9. PMID: 36227616; PMCID: PMC9583750; DOI: 10.1167/jov.22.11.9.
Abstract
Our visual memories of complex scenes often appear as robust, detailed records of the past. Several studies have demonstrated that active exploration with eye movements improves recognition memory for scenes, but it is unclear whether this improvement is due to stronger feelings of familiarity or more detailed recollection. We related the extent and specificity of fixation patterns at encoding and retrieval to different recognition decisions in an incidental memory paradigm. After incidental encoding of 240 real-world scene photographs, participants (N = 44) answered a surprise memory test by reporting whether an image was new, remembered (indicating recollection), or just known to be old (indicating familiarity). To assess the specificity of their visual memories, we devised a novel report procedure in which participants selected the scene region that they specifically recollected, that appeared most familiar, or that was particularly new to them. At encoding, when considering the entire scene, subsequently recollected compared to familiar or forgotten scenes showed a larger number of fixations that were more broadly distributed, suggesting that more extensive visual exploration determines stronger and more detailed memories. However, when considering only the memory-relevant image areas, fixations were more dense and more clustered for subsequently recollected compared to subsequently familiar scenes. At retrieval, the extent of visual exploration was more restricted for recollected compared to new or forgotten scenes, with a smaller number of fixations. Importantly, fixation density and clustering were greater in memory-relevant areas for recollected versus familiar or falsely recognized images. Our findings suggest that more extensive visual exploration across the entire scene, with a subset of more focal and dense fixations in specific image areas, leads to increased potential for recollecting specific image aspects.
Affiliation(s)
- Nico Broers, Institute of Psychology, University of Münster, Münster, Germany; Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
- René Michel, Institute of Psychology, University of Münster, Münster, Germany; Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
- Elio Balestrieri, Institute of Psychology, University of Münster, Münster, Germany; Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
- Niko A Busch, Institute of Psychology, University of Münster, Münster, Germany; Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
4. Děchtěrenko F, Lukavský J. False memories when viewing overlapping scenes. PeerJ 2022; 10:e13187. PMID: 35411252; PMCID: PMC8994494; DOI: 10.7717/peerj.13187.
Abstract
Humans can memorize and later recognize many objects and complex scenes. In this study, we prepared large photographs and presented participants with only partial views to test the fidelity of their memories. The unpresented parts of the photographs were used as a source of distractors with similar semantic and perceptual information. Additionally, we presented overlapping views to determine whether the second presentation provided a memory advantage for later recognition tests. Experiment 1 (N = 28) showed that while people were good at recognizing presented content and identifying new foils, they showed a remarkable level of uncertainty about foils selected from the unseen parts of presented photographs (false alarm, 59%). The recognition accuracy was higher for the parts that were shown twice, irrespective of whether the same identical photograph was viewed twice or whether two photographs with overlapping content were observed. In Experiment 2 (N = 28), the memorability of the large image was estimated by a pre-trained deep neural network. Neither the recognition accuracy for an image part nor the tendency for false alarms correlated with the memorability. Finally, in Experiment 3 (N = 21), we repeated the experiment while measuring eye movements. Fixations were biased toward the center of the original large photograph in the first presentation, and this bias was repeated during the second presentation in both identical and overlapping views. Altogether, our experiments show that people recognize parts of remembered photographs, but they find it difficult to reject foils from unseen parts, suggesting that their memory representation is not sufficiently detailed to rule them out as distractors.
5. Effects of pointing movements on visuospatial working memory in a joint-action condition: Evidence from eye movements. Mem Cognit 2021; 50:261-277. PMID: 34480326; PMCID: PMC8821511; DOI: 10.3758/s13421-021-01230-w.
Abstract
Previous studies showed that (a) performing pointing movements towards to-be-remembered locations enhanced their later recognition, and (b) in a joint-action condition, experimenter-performed pointing movements benefited memory to the same extent as self-performed movements. The present study replicated these findings and additionally recorded participants’ fixations towards studied arrays. Each trial involved the presentation of two consecutive spatial arrays, where each item occupied a different spatial location. The item locations of one array were encoded by mere visual observation (the no-move array), whereas the locations of the other array were encoded by observation plus pointing movements (the move array). Critically, in Experiment 1, participants took turns with the experimenter in pointing towards the move arrays (joint-action condition), while in Experiment 2 pointing was performed only by the experimenter (passive condition). The results showed that the locations of move arrays were recognized better than the locations of no-move arrays in Experiment 1, but not in Experiment 2. The pattern of eye-fixations was in line with behavioral findings, indicating that in Experiment 1, fixations to the locations of move arrays were higher in number and longer in duration than fixations to the locations of no-move arrays, irrespective of the agent who performed the movements. In contrast, no differences emerged in Experiment 2. We propose that, in the joint-action condition, self- and other-performed pointing movements are coded at the same representational level and their functional equivalency is reflected in a similar pattern of eye-fixations.
6. Lyu M, Choe KW, Kardan O, Kotabe HP, Henderson JM, Berman MG. Overt attentional correlates of memorability of scene images and their relationships to scene semantics. J Vis 2021; 20:2. PMID: 32876677; PMCID: PMC7476653; DOI: 10.1167/jov.20.9.2.
Abstract
Computer vision-based research has shown that scene semantics (e.g., presence of meaningful objects in a scene) can predict memorability of scene images. Here, we investigated whether and to what extent overt attentional correlates, such as fixation map consistency (also called inter-observer congruency of fixation maps) and fixation counts, mediate the relationship between scene semantics and scene memorability. First, we confirmed that the higher the fixation map consistency of a scene, the higher its memorability. Moreover, both fixation map consistency and its correlation to scene memorability were the highest in the first 2 seconds of viewing, suggesting that meaningful scene features that contribute to producing more consistent fixation maps early in viewing, such as faces and humans, may also be important for scene encoding. Second, we found that the relationship between scene semantics and scene memorability was partially (but not fully) mediated by fixation map consistency and fixation counts, separately as well as together. Third, we found that fixation map consistency, fixation counts, and scene semantics significantly and additively contributed to scene memorability. Together, these results suggest that eye-tracking measurements can complement computer vision-based algorithms and improve overall scene memorability prediction.
Affiliation(s)
- Muxuan Lyu, Department of Management and Marketing, The Hong Kong Polytechnic University, Hong Kong, China
- Kyoung Whan Choe, Department of Psychology, The University of Chicago, Chicago, IL, USA; Mansueto Institute for Urban Innovation, The University of Chicago, Chicago, IL, USA
- Omid Kardan, Department of Psychology, The University of Chicago, Chicago, IL, USA
- John M Henderson, Center for Mind and Brain and Department of Psychology, University of California, Davis, Davis, CA, USA
- Marc G Berman, Department of Psychology, The University of Chicago, Chicago, IL, USA; Grossman Institute for Neuroscience, Quantitative Biology and Human Behavior, The University of Chicago, Chicago, IL, USA
7. Kristjánsson Á, Draschkow D. Keeping it real: Looking beyond capacity limits in visual cognition. Atten Percept Psychophys 2021; 83:1375-1390. PMID: 33791942; PMCID: PMC8084831; DOI: 10.3758/s13414-021-02256-7.
Abstract
Research within visual cognition has made tremendous strides in uncovering the basic operating characteristics of the visual system by reducing the complexity of natural vision to artificial but well-controlled experimental tasks and stimuli. This reductionist approach has for example been used to assess the basic limitations of visual attention, visual working memory (VWM) capacity, and the fidelity of visual long-term memory (VLTM). The assessment of these limits is usually made in a pure sense, irrespective of goals, actions, and priors. While it is important to map out the bottlenecks our visual system faces, we focus here on selected examples of how such limitations can be overcome. Recent findings suggest that during more natural tasks, capacity may be higher than reductionist research suggests and that separable systems subserve different actions, such as reaching and looking, which might provide important insights about how pure attentional or memory limitations could be circumvented. We also review evidence suggesting that the closer we get to naturalistic behavior, the more we encounter implicit learning mechanisms that operate "for free" and "on the fly." These mechanisms provide a surprisingly rich visual experience, which can support capacity-limited systems. We speculate whether natural tasks may yield different estimates of the limitations of VWM, VLTM, and attention, and propose that capacity measurements should also pass the real-world test within naturalistic frameworks. Our review highlights various approaches for this and suggests that our understanding of visual cognition will benefit from incorporating the complexities of real-world cognition in experimental approaches.
Affiliation(s)
- Árni Kristjánsson, School of Health Sciences, University of Iceland, Reykjavík, Iceland; School of Psychology, National Research University Higher School of Economics, Moscow, Russia
- Dejan Draschkow, Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
8. Cronin DA, Hall EH, Goold JE, Hayes TR, Henderson JM. Eye Movements in Real-World Scene Photographs: General Characteristics and Effects of Viewing Task. Front Psychol 2020; 10:2915. PMID: 32010016; PMCID: PMC6971407; DOI: 10.3389/fpsyg.2019.02915.
Abstract
The present study examines eye movement behavior in real-world scenes with a large (N = 100) sample. We report baseline measures of eye movement behavior in our sample, including mean fixation duration, saccade amplitude, and initial saccade latency. We also characterize how eye movement behaviors change over the course of a 12 s trial. These baseline measures will be of use to future work studying eye movement behavior in scenes in a variety of literatures. We also examine effects of viewing task on when and where the eyes move in real-world scenes: participants engaged in a memorization and an aesthetic judgment task while viewing 100 scenes. While we find no difference at the mean-level between the two tasks, temporal- and distribution-level analyses reveal significant task-driven differences in eye movement behavior.
Affiliation(s)
- Deborah A. Cronin, Center for Mind and Brain, University of California, Davis, Davis, CA, United States
- Elizabeth H. Hall, Center for Mind and Brain, University of California, Davis, Davis, CA, United States; Department of Psychology, University of California, Davis, Davis, CA, United States
- Jessica E. Goold, Center for Mind and Brain, University of California, Davis, Davis, CA, United States
- Taylor R. Hayes, Center for Mind and Brain, University of California, Davis, Davis, CA, United States
- John M. Henderson, Center for Mind and Brain, University of California, Davis, Davis, CA, United States; Department of Psychology, University of California, Davis, Davis, CA, United States
9. Williams CC, Castelhano MS. The Changing Landscape: High-Level Influences on Eye Movement Guidance in Scenes. Vision (Basel) 2019; 3:E33. PMID: 31735834; PMCID: PMC6802790; DOI: 10.3390/vision3030033.
Abstract
The use of eye movements to explore scene processing has exploded over the last decade. Eye movements provide distinct advantages when examining scene processing because they are both fast and spatially measurable. By using eye movements, researchers have investigated many questions about scene processing. Our review will focus on research performed in the last decade examining: (1) attention and eye movements; (2) where you look; (3) influence of task; (4) memory and scene representations; and (5) dynamic scenes and eye movements. Although typically addressed as separate issues, we argue that these distinctions are now holding back research progress. Instead, it is time to examine how these seemingly separate influences intersect and interact, in order to more completely understand what eye movements can tell us about scene processing.
Affiliation(s)
- Carrick C. Williams, Department of Psychology, California State University San Marcos, San Marcos, CA 92069, USA
10. Šikl R, Svatoňová H, Děchtěrenko F, Urbánek T. Visual recognition memory for scenes in aerial photographs: Exploring the role of expertise. Acta Psychol (Amst) 2019; 197:23-31. PMID: 31077995; DOI: 10.1016/j.actpsy.2019.04.019.
Abstract
Aerial photographs depict objects from an overhead position, which gives them several unusual visual characteristics that are challenging for viewers to perceive and memorize. However, even for untrained viewers, aerial photographs are still meaningful and rich with contextual information. Such visual stimulus properties are considered appropriate and important when testing for expertise effects in visual recognition memory. The current experiment investigated memory recognition in expert image analysts and untrained viewers using two types of aerial photographs. The experts were better than untrained viewers at recognizing both vertical aerial photographs, which is the domain of their expertise, and oblique aerial photographs. Thus, one notable finding is that the superior memory performance of experts is not limited to a domain of expertise but extends to a broader category of large-scale landscape scenes. Furthermore, the experts' recognition accuracy remained relatively stable throughout the experimental conditions, illustrating the ability to use semantic information over strictly visual information in memory processes.
11. Distinct roles of eye movements during memory encoding and retrieval. Cognition 2019; 184:119-129. DOI: 10.1016/j.cognition.2018.12.014.
12. Zhou W, Mo F, Zhang Y, Ding J. Semantic and Syntactic Associations During Word Search Modulate the Relationship Between Attention and Subsequent Memory. J Gen Psychol 2017; 144:69-88. PMID: 28098521; DOI: 10.1080/00221309.2016.1258389.
Abstract
Two experiments were conducted to investigate how linguistic information influences attention allocation in visual search and memory for words. In Experiment 1, participants searched for the synonym of a cue word among five words. The distractors included one antonym and three unrelated words. In Experiment 2, participants were asked to judge whether the five words presented on the screen comprise a valid sentence. The relationships among words were sentential, semantically related or unrelated. A memory recognition task followed. Results in both experiments showed that linguistically related words produced better memory performance. We also found that there were significant interactions between linguistic relation conditions and memorization on eye-movement measures, indicating that good memory for words relied on frequent and long fixations during search in the unrelated condition but to a much lesser extent in linguistically related conditions. We conclude that semantic and syntactic associations attenuate the link between overt attention allocation and subsequent memory performance, suggesting that linguistic relatedness can somewhat compensate for a relative lack of attention during word search.
Affiliation(s)
- Fei Mo, Capital Normal University
13. Doherty BR, Patai EZ, Duta M, Nobre AC, Scerif G. The functional consequences of social distraction: Attention and memory for complex scenes. Cognition 2017; 158:215-223. PMID: 27842274; DOI: 10.1016/j.cognition.2016.10.015.
Affiliation(s)
- Brianna Ruth Doherty, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Eva Zita Patai, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom; Oxford Centre for Human Brain Activity, University of Oxford, Oxford, United Kingdom
- Mihaela Duta, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Anna Christina Nobre, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom; Oxford Centre for Human Brain Activity, University of Oxford, Oxford, United Kingdom
- Gaia Scerif, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
14. Making Sense of Real-World Scenes. Trends Cogn Sci 2016; 20:843-856. PMID: 27769727; DOI: 10.1016/j.tics.2016.09.003.
Abstract
To interact with the world, we have to make sense of the continuous sensory input conveying information about our environment. A recent surge of studies has investigated the processes enabling scene understanding, using increasingly complex stimuli and sophisticated analyses to highlight the visual features and brain regions involved. However, there are two major challenges to producing a comprehensive framework for scene understanding. First, scene perception is highly dynamic, subserving multiple behavioral goals. Second, a multitude of different visual properties co-occur across scenes and may be correlated or independent. We synthesize the recent literature and argue that for a complete view of scene understanding, it is necessary to account for both differing observer goals and the contribution of diverse scene properties.
15. Josephs EL, Draschkow D, Wolfe JM, Võ MLH. Gist in time: Scene semantics and structure enhance recall of searched objects. Acta Psychol (Amst) 2016; 169:100-108. PMID: 27270227; DOI: 10.1016/j.actpsy.2016.05.013.
Abstract
Previous work has shown that recall of objects that are incidentally encountered as targets in visual search is better than recall of objects that have been intentionally memorized (Draschkow, Wolfe, & Võ, 2014). However, this counter-intuitive result is not seen when these tasks are performed with non-scene stimuli. The goal of the current paper is to determine what features of search in a scene contribute to higher recall rates when compared to a memorization task. In each of four experiments, we compare the free recall rate for target objects following a search to the rate following a memorization task. Across the experiments, the stimuli include progressively more scene-related information. Experiment 1 provides the spatial relations between objects. Experiment 2 adds relative size and depth of objects. Experiments 3 and 4 include scene layout and semantic information. We find that search leads to better recall than explicit memorization in cases where scene layout and semantic information are present, as long as the participant has ample time (2500 ms) to integrate this information with knowledge about the target object (Exp. 4). These results suggest that the integration of scene and target information not only leads to more efficient search, but can also contribute to stronger memory representations than intentional memorization.
Affiliation(s)
- Emilie L Josephs, Cognitive and Neural Organization Lab, Harvard University, Cambridge, MA, USA
- Dejan Draschkow, Scene Grammar Lab, Johann Wolfgang Goethe-Universität, Frankfurt, Germany
- Jeremy M Wolfe, Visual Attention Lab, Brigham and Women's Hospital, Boston, MA, USA; Harvard Medical School, Cambridge, MA, USA
- Melissa L-H Võ, Scene Grammar Lab, Johann Wolfgang Goethe-Universität, Frankfurt, Germany
16.
Abstract
Many daily activities involve looking for something. The ease with which these searches are performed often allows one to forget that searching represents complex interactions between visual attention and memory. Although a clear understanding exists of how search efficiency will be influenced by visual features of targets and their surrounding distractors or by the number of items in the display, the role of memory in search is less well understood. Contextual cueing studies have shown that implicit memory for repeated item configurations can facilitate search in artificial displays. When searching more naturalistic environments, other forms of memory come into play. For instance, semantic memory provides useful information about which objects are typically found where within a scene, and episodic scene memory provides information about where a particular object was seen the last time a particular scene was viewed. In this paper, we will review work on these topics, with special emphasis on the role of memory in guiding search in organized, real-world scenes.
Affiliation(s)
- Melissa Le-Hoa Võ, Scene Grammar Lab, Department of Cognitive Psychology, Goethe University Frankfurt, Frankfurt, Germany