1
Kyle-Davidson C, Solis O, Robinson S, Tan RTW, Evans KK. Scene complexity and the detail trace of human long-term visual memory. Vision Res 2024; 227:108525. PMID: 39644707. DOI: 10.1016/j.visres.2024.108525.
Abstract
Humans can remember a vast number of scene images, an ability often attributed to encoding only low-fidelity gist traces of a scene. Studies instead show that a surprising amount of detail is retained for each scene image, allowing scenes to be distinguished from highly similar in-category distractors. The gist trace of an image can be captured relatively easily through both computational and behavioural techniques, but capturing detail is much harder. While detail can be broadly estimated at the categorical level (e.g. man-made scenes are more complex than natural ones), there is a lack of both ground-truth detail data at the sample level and a way to operationalise detail for measurement purposes. Here, through three studies, we investigate whether the perceptual complexity of scenes can serve as a suitable analogue for the detail present in a scene, and hence whether complexity can be used to determine the relationship between scene detail and visual long-term memory for scenes. First, we examine this relationship directly using the VISCHEMA datasets to determine whether the perceived complexity of a scene interacts with memorability, finding a significant positive correlation between complexity and memory, in contrast to the U-shaped relation often hypothesised in the literature. In the second study, we model complexity via artificial means and find that even predicted measures of complexity correlate with the ground-truth memorability of a scene, indicating that complexity and memorability cannot be easily disentangled. Finally, we investigate how cognitive load affects the influence of scene complexity on image memorability. Together, the findings indicate that complexity and memorability do vary non-linearly, but generally only at the extremes of the image complexity range. The effect of complexity on memory closely mirrors previous findings that detail enhances memory, suggesting that complexity is a suitable analogue for detail in visual long-term scene memory.
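At its core, the item-level analysis described here reduces to correlating per-image complexity with per-image memorability. As a rough illustration only, a minimal sketch in Python; the arrays, values, and the quadratic probe of the U-shaped hypothesis are illustrative assumptions, not the VISCHEMA data or the authors' pipeline:

```python
import numpy as np
from scipy import stats

# Hypothetical per-image scores, standing in for the real data.
complexity = np.array([2.1, 3.4, 4.8, 1.9, 4.2, 3.0])          # mean rated complexity
memorability = np.array([0.55, 0.63, 0.71, 0.50, 0.69, 0.61])  # e.g., hit rate per image

r, p = stats.pearsonr(complexity, memorability)          # linear association
rho, p_rho = stats.spearmanr(complexity, memorability)   # rank-based alternative
print(f"Pearson r = {r:.2f} (p = {p:.3f}); Spearman rho = {rho:.2f}")

# A quadratic term is one simple way to probe the hypothesised U-shape:
print("quadratic fit coefficients:", np.polyfit(complexity, memorability, deg=2))
```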
Affiliation(s)
- Oscar Solis: University of York, Dept. of Psychology, York, YO10 5NA, UK
- Karla K Evans: University of York, Dept. of Psychology, York, YO10 5NA, UK
2
Cárdenas-Miller N, O'Donnell RE, Tam J, Wyble B. Surprise! Draw the scene: Visual recall reveals poor incidental working memory following visual search in natural scenes. Mem Cognit 2023. PMID: 37770695. DOI: 10.3758/s13421-023-01465-9.
Abstract
Searching within natural scenes can induce incidental encoding of information about the scene and the target, particularly when the scene is complex or repeated. However, recent evidence from attribute amnesia (AA) suggests that in some situations, searchers can find a target without building a robust incidental memory of its task relevant features. Through drawing-based visual recall and an AA search task, we investigated whether search in natural scenes necessitates memory encoding. Participants repeatedly searched for and located an easily detected item in novel scenes for numerous trials before being unexpectedly prompted to draw either the entire scene (Experiment 1) or their search target (Experiment 2) directly after viewing the search image. Naïve raters assessed the similarity of the drawings to the original information. We found that surprise-trial drawings of the scene and search target were both poorly recognizable, but the same drawers produced highly recognizable drawings on the next trial when they had an expectation to draw the image. Experiment 3 further showed that the poor surprise trial memory could not merely be attributed to interference from the surprising event. Our findings suggest that even for searches done in natural scenes, it is possible to locate a target without creating a robust memory of either it or the scene it was in, even if attended to just a few seconds prior. This disconnection between attention and memory might reflect a fundamental property of cognitive computations designed to optimize task performance and minimize resource use.
Affiliation(s)
- Ryan E O'Donnell: Pennsylvania State University, University Park, PA, USA; Drexel University, Philadelphia, PA, USA
- Joyce Tam: Pennsylvania State University, University Park, PA, USA
- Brad Wyble: Pennsylvania State University, University Park, PA, USA
3
Long-term memory representations for audio-visual scenes. Mem Cognit 2023; 51:349-370. PMID: 36100821. PMCID: PMC9950240. DOI: 10.3758/s13421-022-01355-6.
Abstract
In this study, we investigated the nature of long-term memory representations for naturalistic audio-visual scenes. Whereas previous research has shown that audio-visual scenes are recognized more accurately than their unimodal counterparts, it remains unclear whether this benefit stems from audio-visually integrated long-term memory representations or a summation of independent retrieval cues. We tested two predictions for audio-visually integrated memory representations. First, we used a modeling approach to test whether recognition performance for audio-visual scenes is more accurate than would be expected from independent retrieval cues. This analysis shows that audio-visual integration is not necessary to explain the benefit of audio-visual scenes relative to purely auditory or purely visual scenes. Second, we report a series of experiments investigating the occurrence of study-test congruency effects for unimodal and audio-visual scenes. Most importantly, visually encoded information was immune to additional auditory information presented during testing, whereas auditory encoded information was susceptible to additional visual information presented during testing. This renders a true integration of visual and auditory information in long-term memory representations unlikely. In sum, our results instead provide evidence for visual dominance in long-term memory. Whereas associative auditory information is capable of enhancing memory performance, the long-term memory representations appear to be primarily visual.
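The abstract does not spell out the model, but a common independent-retrieval-cues benchmark is probability summation: the audio-visual condition succeeds if either cue alone would. A hedged sketch under that assumption, with illustrative numbers rather than the paper's data:

```python
def independent_cues_prediction(p_audio: float, p_visual: float) -> float:
    """P(at least one independent cue succeeds) = 1 - P(both fail)."""
    return 1.0 - (1.0 - p_audio) * (1.0 - p_visual)

# Illustrative unimodal retrieval probabilities (not the paper's values):
print(independent_cues_prediction(0.40, 0.60))  # 0.76
```

Observed audio-visual performance exceeding such a prediction would point to integrated representations; performance at or below it, as reported here, would not.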
4
Hall EH, Bainbridge WA, Baker CI. Highly similar and competing visual scenes lead to diminished object but not spatial detail in memory drawings. Memory 2021; 30:279-292. PMID: 34913412. DOI: 10.1080/09658211.2021.2010761.
Abstract
Drawings of scenes made from memory can be highly detailed and spatially accurate, with little information not found in the observed stimuli. While prior work has focused on studying memory for distinct scenes, less is known about the specific detail recalled when episodes are highly similar and competing. Here, participants (N = 30) were asked to study and recall eight complex scene images using a drawing task. Importantly, four of these images were exemplars of different scene categories, while the other four images were from the same scene category. The resulting 213 drawings were judged by 1764 online scorers for a comprehensive set of measures, including scene and object diagnosticity, spatial information, and fixation and pen movement behaviour. We observed that competition in memory resulted in diminished object detail, with drawings and objects that were less diagnostic of their original image. However, repeated exemplars of a category did not result in differences in spatial memory accuracy, and there were no differences in fixations during study or pen movements during recall. These results reveal that while drawings for distinct categories of scenes can be highly detailed and accurate, drawings for scenes from repeated categories, creating competition in memory, show reduced object detail.
Affiliation(s)
- Elizabeth H Hall: Department of Psychology, University of California Davis, Davis, CA, USA; Center for Mind and Brain, University of California Davis, Davis, CA, USA; Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD, USA
- Chris I Baker: Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD, USA
5
Information stored in memory affects abductive reasoning. Psychol Res 2021; 85:3119-3133. PMID: 33428007. PMCID: PMC8476388. DOI: 10.1007/s00426-020-01460-8.
Abstract
Abductive reasoning describes the process of deriving an explanation from given observations. The theory of abductive reasoning (TAR; Johnson and Krems, Cognitive Science 25:903-939, 2001) assumes that when information is presented sequentially, new information is integrated into a mental representation, a situation model, the central data structure on which all reasoning processes are based. Because working memory capacity is limited, the question arises of how reasoning might change with the amount of information that has to be processed in memory. We therefore conducted an experiment (N = 34) in which we manipulated whether previous observation information and previously found explanations had to be retrieved from memory or were still visually present. Our results provide evidence that people experience differences in task difficulty when more information has to be retrieved from memory. This is also evident in changes in the mental representation, as reflected by eye-tracking measures. However, no differences were found between groups in the reasoning outcome. These findings suggest that individuals construct their situation model from both information in memory and external memory stores. The complexity of the model depends on the task: when memory demands are high, only relevant information is included. With this compensation strategy, people are able to achieve similar reasoning outcomes even when faced with more difficult tasks. This implies that people can adapt their strategy to the task in order to keep their reasoning successful.
6
Henderson JM, Goold JE, Choi W, Hayes TR. Neural Correlates of Fixated Low- and High-level Scene Properties during Active Scene Viewing. J Cogn Neurosci 2020; 32:2013-2023. PMID: 32573384. PMCID: PMC11164273. DOI: 10.1162/jocn_a_01599.
Abstract
During real-world scene perception, viewers actively direct their attention through a scene in a controlled sequence of eye fixations. During each fixation, local scene properties are attended, analyzed, and interpreted. What is the relationship between fixated scene properties and neural activity in the visual cortex? Participants inspected photographs of real-world scenes in an MRI scanner while their eye movements were recorded. Fixation-related fMRI was used to measure activation as a function of lower- and higher-level scene properties at fixation, operationalized as edge density and meaning maps, respectively. We found that edge density at fixation was most associated with activation in early visual areas, whereas semantic content at fixation was most associated with activation along the ventral visual stream including core object and scene-selective areas (lateral occipital complex, parahippocampal place area, occipital place area, and retrosplenial cortex). The observed activation from semantic content was not accounted for by differences in edge density. The results are consistent with active vision models in which fixation gates detailed visual analysis for fixated scene regions, and this gating influences both lower and higher levels of scene analysis.
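Meaning maps are built from crowd-sourced ratings and cannot be reproduced in a few lines, but the lower-level predictor, edge density at fixation, can be sketched directly. A minimal illustration using Sobel gradients; the window radius and edge threshold are assumptions, not the paper's parameters:

```python
import numpy as np
from scipy import ndimage

def edge_density_at_fixation(gray: np.ndarray, fx: int, fy: int,
                             radius: int = 40, threshold: float = 0.1) -> float:
    # Gradient magnitude from horizontal and vertical Sobel filters.
    gx = ndimage.sobel(gray, axis=1, mode="reflect")
    gy = ndimage.sobel(gray, axis=0, mode="reflect")
    magnitude = np.hypot(gx, gy)
    magnitude /= magnitude.max() if magnitude.max() > 0 else 1.0
    # Fraction of above-threshold "edge" pixels in a window around fixation.
    patch = magnitude[max(fy - radius, 0):fy + radius,
                      max(fx - radius, 0):fx + radius]
    return float((patch > threshold).mean())

scene = np.random.rand(600, 800)  # stand-in for a grayscale scene image
print(edge_density_at_fixation(scene, fx=400, fy=300))
```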
Affiliation(s)
- Wonil Choi: Gwangju Institute of Science and Technology
7
Rosen ML, Stern CE, Devaney KJ, Somers DC. Cortical and Subcortical Contributions to Long-Term Memory-Guided Visuospatial Attention. Cereb Cortex 2019; 28:2935-2947. PMID: 28968648. DOI: 10.1093/cercor/bhx172.
Abstract
Long-term memory (LTM) helps to efficiently direct and deploy the scarce resources of the attentional system; however, the neural substrates that support LTM-guidance of visual attention are not well understood. Here, we present results from fMRI experiments that demonstrate that cortical and subcortical regions of a network defined by resting-state functional connectivity are selectively recruited for LTM-guided attention, relative to a similarly demanding stimulus-guided attention paradigm that lacks memory retrieval and relative to a memory retrieval paradigm that lacks covert deployment of attention. Memory-guided visuospatial attention recruited posterior callosal sulcus, posterior precuneus, and lateral intraparietal sulcus bilaterally. Additionally, 3 subcortical regions defined by intrinsic functional connectivity were recruited: the caudate head, mediodorsal thalamus, and cerebellar lobule VI/Crus I. Although the broad resting-state network to which these nodes belong has been referred to as a cognitive control network, the posterior cortical regions activated in the present study are not typically identified with supporting standard cognitive control tasks. We propose that these regions form a Memory-Attention Network that is recruited for processes that integrate mnemonic and stimulus-based representations to guide attention. These findings may have important implications for understanding the mechanisms by which memory retrieval influences attentional deployment.
Affiliation(s)
- Maya L Rosen: Department of Psychological and Brain Sciences, Boston University, Boston, MA, USA; Department of Psychology, University of Washington, Seattle, WA, USA
- Chantal E Stern: Department of Psychological and Brain Sciences, Boston University, Boston, MA, USA; Center for Memory and Brain, Boston University, Boston, MA, USA
- Kathryn J Devaney: Department of Psychological and Brain Sciences, Boston University, Boston, MA, USA
- David C Somers: Department of Psychological and Brain Sciences, Boston University, Boston, MA, USA; Center for Memory and Brain, Boston University, Boston, MA, USA
8
Shafer-Skelton A, Brady TF. Scene layout priming relies primarily on low-level features rather than scene layout. J Vis 2019; 19:14. PMID: 30677124. DOI: 10.1167/19.1.14.
Abstract
The ability to perceive and remember the spatial layout of a scene is critical to understanding the visual world, both for navigation and for other complex tasks that depend upon the structure of the current environment. However, surprisingly little work has investigated how and when scene layout information is maintained in memory. One prominent line of work investigating this issue is a scene-priming paradigm (e.g., Sanocki & Epstein, 1997), in which different types of previews are presented to participants shortly before they judge which of two regions of a scene is closer in depth to the viewer. Experiments using this paradigm have been widely cited as evidence that scene layout information is stored across brief delays and have been used to investigate the structure of the representations underlying memory for scene layout. In the present experiments, we better characterize these scene-priming effects. We find that a large amount of visual detail rather than the presence of depth information is necessary for the priming effect; that participants show a preview benefit for a judgment completely unrelated to the scene itself; and that preview benefits are susceptible to masking and quickly decay. Together, these results suggest that "scene priming" effects do not isolate scene layout information in memory, and that they may arise from low-level visual information held in sensory memory. This broadens the range of interpretations of scene priming effects and suggests that other paradigms may need to be developed to selectively investigate how we represent scene layout information in memory.
Affiliation(s)
- Timothy F Brady: Department of Psychology, University of California, San Diego, CA, USA
9
Abstract
The present research explored the role of the medial temporal lobes in object memory in the unique patient MR, who has a selective lesion to her left lateral entorhinal cortex. Two experiments explored recognition memory for object identity and object location in MR and matched controls. The results showed that MR had intact performance in an object location task [MR=0.70, controls=0.69, t(6)=0.06, P>0.05] but was impaired in an object identity task [MR=0.62, controls=0.84, t(6)=-4.12, P<0.05]. No differences in correct recollection or familiarity emerged. These results suggest a differential role of the entorhinal cortex in object recognition memory. The current research is therefore the first patient study to show a role of the lateral entorhinal cortex in object identity recognition, and it suggests that current medial temporal lobe models of object and recognition memory require theoretical revision to account for the contributions of the entorhinal cortex to these processes.
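The reported t(6) statistics are consistent with a single patient compared against seven controls using a Crawford-Howell style modified t-test, although the abstract does not name the procedure; treating that as an assumption, a minimal sketch with illustrative control scores:

```python
import numpy as np
from scipy import stats

def crawford_howell_t(patient: float, controls: np.ndarray):
    """Single-case comparison of one patient score against a small control sample."""
    n = len(controls)
    t = (patient - controls.mean()) / (controls.std(ddof=1) * np.sqrt(1 + 1 / n))
    p = 2 * stats.t.sf(abs(t), df=n - 1)  # two-tailed
    return t, p

controls = np.array([0.86, 0.82, 0.85, 0.83, 0.84, 0.85, 0.83])  # hypothetical scores
t, p = crawford_howell_t(0.62, controls)
print(f"t({len(controls) - 1}) = {t:.2f}, p = {p:.3f}")
```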
10
Mercer T, Jones GA. Time-dependent forgetting and retrieval practice effects in detailed visual long-term memory. Q J Exp Psychol (Hove) 2018; 72:1561-1577. PMID: 30142989. DOI: 10.1177/1747021818799697.
Abstract
Memories, especially those containing fine details, are usually lost over time, but this study assessed whether detailed visual memories can survive a 1-week delay if retrieval practice is provided. In three experiments, participants viewed 300 objects and then completed recognition tests assessing memory for precise object exemplars and their state. The recognition tests occurred immediately after encoding and 1 week later, and required participants to distinguish between a previously seen target object and an incorrect foil. While there was forgetting when participants were tested on different sets of stimuli across the delay, retrieval practice led to an advantage in recognition performance. This effect was not simply due to mere exposure, as retrieval practice boosted recognition beyond a restudy condition, which had a second encoding opportunity but no retrieval practice. Yet more detailed analyses revealed that the effect of retrieval practice was highly dependent upon the type of information being tested (exemplar or state) and the specific foil that was presented. In addition, state information was harder to retain over the delay than exemplar information, suggesting that memory for different properties is forgotten at different rates.
Affiliation(s)
- Tom Mercer: Faculty of Education, Health and Wellbeing, University of Wolverhampton, Wolverhampton, UK
- Gemma A Jones: Faculty of Education, Health and Wellbeing, University of Wolverhampton, Wolverhampton, UK
11
Hansen NE, Noesen BT, Nador JD, Harel A. The influence of behavioral relevance on the processing of global scene properties: An ERP study. Neuropsychologia 2018; 114:168-180. PMID: 29729276. DOI: 10.1016/j.neuropsychologia.2018.04.040.
Abstract
Recent work studying the temporal dynamics of visual scene processing (Harel et al., 2016) has found that global scene properties (GSPs) modulate the amplitude of early event-related potentials (ERPs). It is still not clear, however, to what extent the processing of these GSPs is influenced by their behavioral relevance, determined by the goals of the observer. To address this question, we investigated how behavioral relevance, operationalized by task context, impacts the electrophysiological responses to GSPs. In two experiments, we recorded ERPs while participants viewed images of real-world scenes varying along two GSPs: naturalness (man-made/natural) and spatial expanse (open/closed). In Experiment 1, very little attention to scene content was required, as participants viewed the scenes while performing an orthogonal fixation-cross task. In Experiment 2, participants saw the same scenes but now had to actively categorize them, based either on their naturalness or their spatial expanse. We found that task context had very little impact on the early ERP responses to the naturalness and spatial expanse of the scenes: P1, N1, and P2 could distinguish between open and closed scenes and between man-made and natural scenes across both experiments. Further, the specific effects of naturalness and spatial expanse on the ERP components were largely unaffected by their relevance for the task. A task effect was found at the N1 and P2 level, but it was manifest across all scene dimensions, indicating a general effect rather than an interaction between task context and GSPs. Together, these findings suggest that the extraction of global scene information reflected in the early ERP components is rapid and largely unaffected by top-down, observer-based goals.
Affiliation(s)
- Natalie E Hansen: Department of Psychology, Wright State University, Dayton, OH, USA
- Birken T Noesen: Department of Psychology, Wright State University, Dayton, OH, USA
- Jeffrey D Nador: Department of Psychology, Wright State University, Dayton, OH, USA
- Assaf Harel: Department of Psychology, Wright State University, Dayton, OH, USA
12
Visual search for changes in scenes creates long-term, incidental memory traces. Atten Percept Psychophys 2018; 80:829-843. PMID: 29427122. DOI: 10.3758/s13414-018-1486-y.
Abstract
Humans are very good at remembering large numbers of scenes over substantial periods of time. But how good are they at remembering changes to scenes? In this study, we tested scene memory and change detection two weeks after initial scene learning. In Experiments 1-3, scenes were learned incidentally during visual search for change. In Experiment 4, observers explicitly memorized scenes. At test, after two weeks observers were asked to discriminate old from new scenes, to recall a change that they had detected in the study phase, or to detect a newly introduced change in the memorization experiment. Next, they performed a change detection task, usually looking for the same change as in the study period. Scene recognition memory was found to be similar in all experiments, regardless of the study task. In Experiment 1, more difficult change detection produced better scene memory. Experiments 2 and 3 supported a "depth-of-processing" account for the effects of initial search and change detection on incidental memory for scenes. Of most interest, change detection was faster during the test phase than during the study phase, even when the observer had no explicit memory of having found that change previously. This result was replicated in two of our three change detection experiments. We conclude that scenes can be encoded incidentally as well as explicitly and that changes in those scenes can leave measurable traces even if they are not explicitly recalled.
13
Lomp O, Faubel C, Schöner G. A Neural-Dynamic Architecture for Concurrent Estimation of Object Pose and Identity. Front Neurorobot 2017; 11:23. PMID: 28503145. PMCID: PMC5408094. DOI: 10.3389/fnbot.2017.00023.
Abstract
Handling objects or interacting with a human user about objects on a shared tabletop requires that objects be identified after learning from a small number of views and that object pose be estimated. We present a neurally inspired architecture that learns object instances by storing features extracted from a single view of each object. Input features are color and edge histograms from a localized area that is updated during processing. The system finds the best-matching view for the object in a novel input image while concurrently estimating the object’s pose, aligning the learned view with current input. The system is based on neural dynamics, computationally operating in real time, and can handle dynamic scenes directly off live video input. In a scenario with 30 everyday objects, the system achieves recognition rates of 87.2% from a single training view for each object, while also estimating pose quite precisely. We further demonstrate that the system can track moving objects, and that it can segment the visual array, selecting and recognizing one object while suppressing input from another known object in the immediate vicinity. Evaluation on the COIL-100 dataset, in which objects are depicted from different viewing angles, revealed recognition rates of 91.1% on the first 30 objects, each learned from four training views.
Affiliation(s)
- Oliver Lomp: Institut für Neuroinformatik, Ruhr-University Bochum, Bochum, Germany (correspondence)
- Christian Faubel: Institut für Neuroinformatik, Ruhr-University Bochum, Bochum, Germany
- Gregor Schöner: Institut für Neuroinformatik, Ruhr-University Bochum, Bochum, Germany
14
Semantic congruency but not temporal synchrony enhances long-term memory performance for audio-visual scenes. Mem Cognit 2017; 44:390-402. PMID: 26620810. DOI: 10.3758/s13421-015-0575-6.
Abstract
Human long-term memory for visual objects and scenes is tremendous. Here, we test how auditory information contributes to long-term memory performance for realistic scenes. In a total of six experiments, we manipulated the presentation modality (auditory, visual, audio-visual) as well as semantic congruency and temporal synchrony between auditory and visual information of brief filmic clips. Our results show that audio-visual clips generally elicit more accurate memory performance than unimodal clips. This advantage even increases with congruent visual and auditory information. However, violations of audio-visual synchrony hardly have any influence on memory performance. Memory performance remained intact even with a sequential presentation of auditory and visual information, but finally declined when the matching tracks of one scene were presented separately with intervening tracks during learning. With respect to memory performance, our results therefore show that audio-visual integration is sensitive to semantic congruency but remarkably robust against asymmetries between different modalities.
15
Abstract
How can we reconcile remarkably precise long-term memory for thousands of images with failures to detect changes to similar images? We explored whether people can use detailed, long-term memory to improve change detection performance. Subjects studied a set of images of objects and then performed recognition and change detection tasks with those images. Recognition memory performance exceeded change detection performance, even when a single familiar object in the postchange display consistently indicated the change location. In fact, participants were no better when a familiar object predicted the change location than when the displays consisted of unfamiliar objects. When given an explicit strategy to search for a familiar object as a way to improve performance on the change detection task, they performed no better than in a 6-alternative recognition memory task. Subjects only benefited from the presence of familiar objects in the change detection task when they had more time to view the prechange array before it switched. Once the cost to using the change detection information decreased, subjects made use of it in conjunction with memory to boost performance on the familiar-item change detection task. This suggests that even useful information will go unused if it is sufficiently difficult to extract.
16
Working memory is not fixed-capacity: More active storage capacity for real-world objects than for simple stimuli. Proc Natl Acad Sci U S A 2016; 113:7459-64. PMID: 27325767. DOI: 10.1073/pnas.1520027113.
Abstract
Visual working memory is the cognitive system that holds visual information active to make it resistant to interference from new perceptual input. Information about simple stimuli, such as colors and orientations, is encoded into working memory rapidly: in under 100 ms, working memory "fills up," revealing a stark capacity limit. However, for real-world objects, the same behavioral limits do not hold: with increasing encoding time, people store more real-world objects and do so with more detail. This boost in performance for real-world objects is generally assumed to reflect the use of a separate episodic long-term memory system rather than working memory. Here we show that this behavioral increase in capacity with real-world objects is not solely due to the use of separate episodic long-term memory systems. In particular, we show that this increase is a result of active storage in working memory, as shown by directly measuring neural activity during the delay period of a working memory task using EEG. These data challenge fixed-capacity working memory models and demonstrate that working memory and its capacity limitations are dependent upon our existing knowledge.
17
Rosen ML, Stern CE, Michalka SW, Devaney KJ, Somers DC. Cognitive Control Network Contributions to Memory-Guided Visual Attention. Cereb Cortex 2015; 26:2059-2073. PMID: 25750253. DOI: 10.1093/cercor/bhv028.
Abstract
Visual attentional capacity is severely limited, but humans excel in familiar visual contexts, in part because long-term memories guide efficient deployment of attention. To investigate the neural substrates that support memory-guided visual attention, we performed a set of functional MRI experiments that contrast long-term, memory-guided visuospatial attention with stimulus-guided visuospatial attention in a change detection task. Whereas the dorsal attention network was activated for both forms of attention, the cognitive control network (CCN) was preferentially activated during memory-guided attention. Three posterior nodes in the CCN (posterior precuneus, posterior callosal sulcus/mid-cingulate, and lateral intraparietal sulcus) exhibited the greatest specificity for memory-guided attention. These three regions exhibit functional connectivity at rest, and we propose that they form a subnetwork within the broader CCN. Based on the task activation patterns, we conclude that the nodes of this subnetwork are preferentially recruited for long-term memory guidance of visuospatial attention.
Affiliation(s)
- Chantal E Stern: Department of Psychological and Brain Sciences; Center for Memory and Brain; Graduate Program for Neuroscience, Boston University, Boston, MA 02215, USA
- David C Somers: Department of Psychological and Brain Sciences; Center for Memory and Brain; Graduate Program for Neuroscience, Boston University, Boston, MA 02215, USA
18
Olejarczyk JH, Luke SG, Henderson JM. Incidental memory for parts of scenes from eye movements. Visual Cognition 2014. DOI: 10.1080/13506285.2014.941433.
19
Something from (almost) nothing: buildup of object memory from forgettable single fixations. Atten Percept Psychophys 2014; 76:2413-23. DOI: 10.3758/s13414-014-0706-3.
20
Rosen ML, Stern CE, Somers DC. Long-term memory guidance of visuospatial attention in a change-detection paradigm. Front Psychol 2014; 5:266. PMID: 24744744. PMCID: PMC3978356. DOI: 10.3389/fpsyg.2014.00266.
Abstract
Visual task performance is generally stronger in familiar environments. One reason for this familiarity benefit is that we learn where to direct our visual attention, and effective attentional deployment enhances performance. Visual working memory plays a central role in supporting long-term memory guidance of visuospatial attention. We modified a change detection task to create a new paradigm for investigating long-term memory guidance of attention. During the training phase, subjects viewed images in a flicker paradigm and were asked to detect between one and three changes in the images. The test phase required subjects to detect a single change in a one-shot change detection task, in which they held all possible locations of changes in visual working memory and deployed attention to those locations to determine whether a change occurred. Subjects detected significantly more changes in images for which they had been trained to detect the changes, demonstrating that memory of the images guided subjects in deploying their attention. Moreover, capacity to detect changes was greater for images that had multiple changes during the training phase. In Experiment 2, we observed that capacity to detect changes in the condition with three studied changes increased significantly with more study exposures, and capacity was significantly higher than 1, indicating that subjects were able to attend to more than one location. Together, these findings suggest that memory and attentional systems interact via working memory, such that long-term memory can be used to direct visuospatial attention to multiple locations based on previous experience.
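In one-shot change detection, "capacity to detect changes" is commonly estimated with Cowan's K; whether that exact estimator was used here is an assumption, and the rates below are illustrative:

```python
def cowan_k(set_size: int, hit_rate: float, false_alarm_rate: float) -> float:
    """Cowan's K: estimated number of items held in memory."""
    return set_size * (hit_rate - false_alarm_rate)

# e.g., three studied change locations with illustrative hit/false-alarm rates:
print(cowan_k(set_size=3, hit_rate=0.75, false_alarm_rate=0.20))  # 1.65
```

A K reliably above 1, as in Experiment 2, indicates attention deployed to more than one remembered location.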
Affiliation(s)
- Maya L Rosen: Department of Psychological and Brain Sciences, Boston University, Boston, MA, USA
- Chantal E Stern: Department of Psychological and Brain Sciences, Boston University, Boston, MA, USA
- David C Somers: Department of Psychological and Brain Sciences, Boston University, Boston, MA, USA
21
Abstract
Visual working memory (WM) capacity is thought to be limited to 3 or 4 items. However, many cognitive activities seem to require larger temporary memory stores. Here, we provide evidence for a temporary memory store with much larger capacity than past WM capacity estimates. Further, based on previous WM research, we show that a single factor, proactive interference, is sufficient to bring capacity estimates down to the range of previous WM capacity estimates. Participants saw a rapid serial visual presentation of 5-21 pictures of familiar objects or words presented at rates of 4/s or 8/s, respectively, too fast for strategies such as rehearsal. Recognition memory was tested with a single probe item. When new items were used on all trials, no fixed memory capacity was observed, with estimates of up to 9.1 retained pictures for 21-item lists and up to 30.0 retained pictures for 100-item lists, and no clear upper bound on how many items could be retained. Further, memory items were not stored in a temporally stable form of memory but decayed almost completely after a few minutes. In contrast, when, as in most WM experiments, a small set of items was reused across all trials, creating proactive interference among items, capacity remained in the range reported in previous WM experiments. These results show that humans have a large-capacity temporary memory store in the absence of proactive interference, and they raise the question of whether temporary memory in everyday cognitive processing is severely limited, as in WM experiments, or has the much larger capacity found in the present experiments.
Affiliation(s)
- Ansgar D Endress: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
- Mary C Potter: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
22
Gajewski DA, Philbeck JW, Wirtz PW, Chichka D. Angular declination and the dynamic perception of egocentric distance. J Exp Psychol Hum Percept Perform 2014; 40:361-77. PMID: 24099588. PMCID: PMC4140626. DOI: 10.1037/a0034394.
Abstract
The extraction of the distance between an object and an observer is fast when angular declination is informative, as it is with targets placed on the ground. To what extent does angular declination drive performance when viewing time is limited? Participants judged target distances in a real-world environment with viewing durations ranging from 36 to 220 ms. An important role for angular declination was supported by experiments showing that the cue provides information about egocentric distance even on the very first glimpse, and that it supports a sensitive response to distance in the absence of other useful cues. Performance was better at 220-ms viewing durations than for briefer glimpses, suggesting that the perception of distance is dynamic even within the time frame of a typical eye fixation. Critically, performance in limited-viewing trials was better when preceded by a 15-s preview of the room without a designated target. The results indicate that the perception of distance is powerfully shaped by memory from prior visual experience with the scene. A theoretical framework for the dynamic perception of distance is presented.
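For a target on the ground, angular declination specifies distance through simple geometry: d = h / tan(theta), where h is eye height and theta is the angle below eye level. A worked example with illustrative values:

```python
import math

def distance_from_declination(eye_height_m: float, declination_deg: float) -> float:
    """Egocentric distance to a ground-plane target from its angular declination."""
    return eye_height_m / math.tan(math.radians(declination_deg))

print(round(distance_from_declination(1.6, 10.0), 2))  # ~9.07 m
print(round(distance_from_declination(1.6, 20.0), 2))  # ~4.4 m: lower in the field = nearer
```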
Affiliation(s)
- Philip W Wirtz: Department of Psychology, The George Washington University; Department of Decision Sciences, The George Washington University
- David Chichka: Department of Mechanical and Aerospace Engineering, The George Washington University
23
Busch NA. The fate of object memory traces under change detection and change blindness. Brain Res 2013; 1520:107-15. DOI: 10.1016/j.brainres.2013.05.014.
24
Hollingworth A. Task specificity and the influence of memory on visual search: comment on Võ and Wolfe (2012). J Exp Psychol Hum Percept Perform 2013. PMID: 23205947. DOI: 10.1037/a0030237.
Abstract
Recent results from Võ and Wolfe (2012b) suggest that the application of memory to visual search may be task specific: previous experience searching for an object facilitated later search for that object, but object information acquired during a different task did not appear to transfer to search. The latter inference depended on evidence that a preview task did not improve later search, but Võ and Wolfe used a relatively insensitive, between-subjects design. Here, we replicated the Võ and Wolfe study using a within-subject manipulation of scene preview. A preview session (focused either on object location memory or on the assessment of object semantics) reliably facilitated later search. In addition, information acquired from distractors in a scene facilitated search when a distractor later became the target. Instead of being strongly constrained by task, visual memory is applied flexibly to guide attention and gaze during visual search.
Affiliation(s)
- Andrew Hollingworth: Department of Psychology, University of Iowa, 11 Seashore Hall E, Iowa City, IA 52242-1407, USA
25
Brady TF, Konkle T, Gill J, Oliva A, Alvarez GA. Visual Long-Term Memory Has the Same Limit on Fidelity as Visual Working Memory. Psychol Sci 2013; 24:981-90. DOI: 10.1177/0956797612465439.
Abstract
Visual long-term memory can store thousands of objects with surprising visual detail, but just how detailed are these representations, and how can one quantify this fidelity? Using the property of color as a case study, we estimated the precision of visual information in long-term memory, and compared this with the precision of the same information in working memory. Observers were shown real-world objects in random colors and were asked to recall the colors after a delay. We quantified two parameters of performance: the variability of internal representations of color (fidelity) and the probability of forgetting an object’s color altogether. Surprisingly, the fidelity of color information in long-term memory was comparable to the asymptotic precision of working memory. These results suggest that long-term memory and working memory may be constrained by a common limit, such as a bound on the fidelity required to retrieve a memory representation.
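The two parameters quantified here, fidelity and the probability of forgetting, correspond to the standard mixture of a von Mises distribution (memory-based responses) and a uniform distribution (guesses) over recall errors. A minimal fitting sketch on simulated data; the paper's exact fitting procedure is not reproduced here:

```python
import numpy as np
from scipy import optimize, stats

def neg_log_likelihood(params, errors):
    guess_rate, kappa = params                 # P(forgotten) and memory precision
    vm = stats.vonmises.pdf(errors, kappa)     # responses centered on the true color
    uniform = 1.0 / (2 * np.pi)                # random guessing
    return -np.sum(np.log((1 - guess_rate) * vm + guess_rate * uniform))

# Simulate recall errors (radians): 70% remembered with kappa = 8, 30% guesses.
rng = np.random.default_rng(0)
errors = np.concatenate([stats.vonmises.rvs(8.0, size=700, random_state=rng),
                         rng.uniform(-np.pi, np.pi, size=300)])

fit = optimize.minimize(neg_log_likelihood, x0=[0.2, 4.0], args=(errors,),
                        bounds=[(1e-3, 1 - 1e-3), (0.1, 100.0)])
print("estimated guess rate and kappa:", fit.x)  # should recover roughly 0.3 and 8
```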
Affiliation(s)
- Jonathan Gill: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
- Aude Oliva: Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology
26
Urgolites ZJ, Wood JN. Visual long-term memory stores high-fidelity representations of observed actions. Psychol Sci 2013; 24:403-11. PMID: 23436784. DOI: 10.1177/0956797612457375.
Abstract
The ability to remember others' actions is fundamental to social cognition, but the precision of action memories remains unknown. To probe the fidelity of the action representations stored in visual long-term memory, we asked observers to view a large number of computer-animated actions. Afterward, observers were shown pairs of actions and indicated which of the two actions they had seen for each pair. On some trials, the previously viewed action was paired with an action from a different action category, and on other trials, it was paired with an action from the same category. Accuracy on both types of trials was remarkably high (81% and 82%, respectively). Further, results from a second experiment showed that the action representations maintained in visual long-term memory can be nearly as precise as the action representations maintained in visual working memory. Together, these findings provide evidence for a mechanism in visual long-term memory that maintains high-fidelity representations of observed actions.
Affiliation(s)
- Zhisen Jiang Urgolites: Department of Psychology, University of Southern California, Los Angeles, CA 90089, USA
27
Huebner GM, Gegenfurtner KR. Conceptual and visual features contribute to visual memory for natural images. PLoS One 2012; 7:e37575. PMID: 22719842. PMCID: PMC3374796. DOI: 10.1371/journal.pone.0037575.
Abstract
We examined the role of conceptual and visual similarity in a memory task for natural images. The important novelty of our approach was that visual similarity was determined using an algorithm [1] instead of being judged subjectively. This similarity index takes colours and spatial frequencies into account. For each target, four distractors were selected that were (1) conceptually and visually similar, (2) only conceptually similar, (3) only visually similar, or (4) neither conceptually nor visually similar to the target image. Participants viewed 219 images with the instruction to memorize them. Memory for a subset of these images was tested subsequently. In Experiment 1, participants performed a two-alternative forced choice recognition task and in Experiment 2, a yes/no-recognition task. In Experiment 3, testing occurred after a delay of one week. We analyzed the distribution of errors depending on distractor type. Performance was lowest when the distractor image was conceptually and visually similar to the target image, indicating that both factors matter in such a memory task. After delayed testing, these differences disappeared. Overall performance was high, indicating a large-capacity, detailed visual long-term memory.
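The similarity algorithm cited as [1] is not reproduced here; purely as a hedged sketch of the general idea, color histograms and FFT amplitude spectra can be combined into a single index. The bin count, the equal weighting, and the combination rule are all assumptions:

```python
import numpy as np

def similarity(img1: np.ndarray, img2: np.ndarray, bins: int = 8) -> float:
    """Toy similarity index over RGB images with values in [0, 1]."""
    # Color component: intersection of normalized 3D color histograms.
    h1, _ = np.histogramdd(img1.reshape(-1, 3), bins=bins, range=[(0, 1)] * 3)
    h2, _ = np.histogramdd(img2.reshape(-1, 3), bins=bins, range=[(0, 1)] * 3)
    color_sim = float(np.minimum(h1 / h1.sum(), h2 / h2.sum()).sum())
    # Spatial-frequency component: correlation of grayscale FFT amplitude spectra.
    a1 = np.abs(np.fft.fft2(img1.mean(axis=2)))
    a2 = np.abs(np.fft.fft2(img2.mean(axis=2)))
    freq_sim = float(np.corrcoef(a1.ravel(), a2.ravel())[0, 1])
    return 0.5 * color_sim + 0.5 * freq_sim  # equal weighting assumed

rng = np.random.default_rng(1)
print(similarity(rng.random((64, 64, 3)), rng.random((64, 64, 3))))
```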
Affiliation(s)
- Gesche M Huebner: Department of Psychology, Justus-Liebig-University of Giessen, Giessen, Germany
28
Eye movements during long-term pictorial recall. Psychol Res 2012; 77:303-9. PMID: 22610303. DOI: 10.1007/s00426-012-0439-7.
Abstract
We investigated eye movements during long-term pictorial recall. Participants performed a perceptual encoding task in which they memorized 16 stimuli displayed in different areas of a computer screen. After the encoding phase, participants had to recall and visualize the images and answer specific questions about visual details of the stimuli. One week later, they repeated the pictorial recall task. Interestingly, not only in the immediate recall task but also one week later, participants looked longer at the areas where the stimuli had been encoded. The major contribution of this study is the finding that memory for pictorial objects, including their spatial location, is stable and robust over time.
29
Inoue K, Takeda Y. Scene-context effect in visual memory is independent of retention interval. Japanese Psychological Research 2012. DOI: 10.1111/j.1468-5884.2012.00516.x.
30
Inoue K, Takeda Y. The role of attention in the contextual enhancement of visual memory for natural scenes. Visual Cognition 2012. DOI: 10.1080/13506285.2011.640648.
31
Hollingworth A. Guidance of visual search by memory and knowledge. Nebraska Symposium on Motivation 2012; 59:63-89. PMID: 23437630. DOI: 10.1007/978-1-4614-4794-8_4.
Abstract
To behave intelligently in the world, humans must be able to find objects efficiently within the complex environments they inhabit. A growing proportion of the literature on visual search is devoted to understanding this type of natural search. In the present chapter, I review the literature on visual search through natural scenes, focusing on the role of memory and knowledge in guiding attention to task-relevant objects.
32
Melcher D, Murphy B. The role of semantic interference in limiting memory for the details of visual scenes. Front Psychol 2011; 2:262. PMID: 22016743. PMCID: PMC3192955. DOI: 10.3389/fpsyg.2011.00262.
Abstract
Many studies suggest a large capacity memory for briefly presented pictures of whole scenes. At the same time, visual working memory (WM) of scene elements is limited to only a few items. We examined the role of retroactive interference in limiting memory for visual details. Participants viewed a scene for 5 s and then, after a short delay containing either a blank screen or 10 distracter scenes, answered questions about the location, color, and identity of objects in the scene. We found that the influence of the distracters depended on whether they were from a similar semantic domain, such as “kitchen” or “airport.” Increasing the number of similar scenes reduced, and eventually eliminated, memory for scene details. Although scene memory was firmly established over the initial study period, this memory was fragile and susceptible to interference. This may help to explain the discrepancy in the literature between studies showing limited visual WM and those showing a large capacity memory for scenes.
Affiliation(s)
- David Melcher: Center for Mind/Brain Sciences, University of Trento, Trento, Italy
33
Kaakinen JK, Hyönä J, Viljanen M. Influence of a Psychological Perspective on Scene Viewing and Memory for Scenes. Q J Exp Psychol (Hove) 2011; 64:1372-87. DOI: 10.1080/17470218.2010.548872.
Abstract
In the study, 33 participants viewed photographs from either a potential homebuyer's or a burglar's perspective, or in preparation for a memory test, while their eye movements were recorded. A free recall and a picture recognition task were performed after viewing. The results showed that perspective had rapid effects, in that the second fixation after the scene onset was more likely to land on perspective-relevant than on perspective-irrelevant areas within the scene. Perspective-relevant areas also attracted longer total fixation time, more visits, and longer first-pass dwell times than did perspective-irrelevant areas. As for the effects of visual saliency, the first fixation was more likely to land on a salient than on a nonsalient area; salient areas also attracted more visits and longer total fixation time than did nonsalient areas. Recall and recognition performance reflected the eye fixation results: Both were overall higher for perspective-relevant than for perspective-irrelevant scene objects. The relatively low error rates in the recognition task suggest that participants had gained an accurate memory for scene objects. The findings suggest that the role of bottom-up versus top-down factors varies as a function of viewing task and the time-course of scene processing.
Affiliation(s)
- Jukka Hyönä: Department of Psychology, University of Turku, Turku, Finland
- Minna Viljanen: Department of Psychology, University of Turku, Turku, Finland
34
Brady TF, Konkle T, Alvarez GA. A review of visual memory capacity: Beyond individual items and toward structured representations. J Vis 2011; 11:4. PMID: 21617025. PMCID: PMC3405498. DOI: 10.1167/11.5.4.
Abstract
Traditional memory research has focused on identifying separate memory systems and exploring different stages of memory processing. This approach has been valuable for establishing a taxonomy of memory systems and characterizing their function but has been less informative about the nature of stored memory representations. Recent research on visual memory has shifted toward a representation-based emphasis, focusing on the contents of memory and attempting to determine the format and structure of remembered information. The main thesis of this review is that one cannot fully understand memory systems or memory processes without also determining the nature of memory representations. Nowhere is this connection more obvious than in research that attempts to measure the capacity of visual memory. We review research on the capacity of visual working memory and visual long-term memory, highlighting recent work that emphasizes the contents of memory. This focus impacts not only how we estimate the capacity of the system, going beyond quantifying how many items can be remembered and moving toward structured representations, but also how we model memory systems and memory processes.
Affiliation(s)
- Timothy F Brady: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
- Talia Konkle: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
- George A Alvarez: Vision Sciences Laboratory, Department of Psychology, Harvard University
35
Tatler BW, Land MF. Vision and the representation of the surroundings in spatial memory. Philos Trans R Soc Lond B Biol Sci 2011; 366:596-610. PMID: 21242146. DOI: 10.1098/rstb.2010.0188.
Abstract
One of the paradoxes of vision is that the world as it appears to us and the image on the retina at any moment are not much like each other. The visual world seems extensive and continuous across time, yet the manner in which we sample the visual environment is neither extensive nor continuous. How does the brain reconcile these differences? Here, we consider existing evidence from both static and dynamic viewing paradigms, together with the logical requirements of any representational scheme that could support active behaviour. While static scene-viewing paradigms favour extensive, but perhaps abstracted, memory representations, dynamic settings suggest sparser, task-selective representation. We suggest that in dynamic settings, where movement within extended environments is required to complete a task, visual input and egocentric and allocentric representations work together to allow efficient behaviour. The egocentric model serves as a coding scheme in which actions can be planned, but it also offers a potential means of providing the perceptual stability that we experience.
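As a concrete gloss on the egocentric/allocentric distinction above: the two frames are related by a rigid transform (translate by the observer's position, then rotate by the heading). A toy sketch under that standard definition, not taken from the paper:

```python
import math

def allocentric_to_egocentric(point, observer, heading_rad):
    """Express a world-fixed (allocentric) point in observer-centred
    (egocentric) coordinates: translate, then rotate by -heading."""
    dx, dy = point[0] - observer[0], point[1] - observer[1]
    c, s = math.cos(-heading_rad), math.sin(-heading_rad)
    return (c * dx - s * dy, s * dx + c * dy)

# An observer at (2, 0) heading along +y sees a landmark at (2, 5) as
# 5 units straight ahead: the result is approximately (5.0, 0.0).
print(allocentric_to_egocentric((2, 5), (2, 0), math.pi / 2))
```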
36
Hout MC, Goldinger SD. Incidental learning speeds visual search by lowering response thresholds, not by improving efficiency: evidence from eye movements. J Exp Psychol Hum Percept Perform 2011; 38:90-112. [PMID: 21574743 DOI: 10.1037/a0023894] [Citation(s) in RCA: 44] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
When observers search for a target object, they incidentally learn the identities and locations of "background" objects in the same display. This learning can facilitate search performance, eliciting faster reaction times for repeated displays. Despite these findings, visual search has been successfully modeled using architectures that maintain no history of attentional deployments; they are amnesic (e.g., Guided Search Theory). In the current study, we asked two questions: (1) Under what conditions does such incidental learning occur? (2) What does viewing behavior reveal about the efficiency of attentional deployments over time? In two experiments, we tracked eye movements during repeated visual search, and we tested incidental memory for repeated nontarget objects. Across conditions, the consistency of search sets and spatial layouts was manipulated to assess their respective contributions to learning. Using viewing behavior, we contrasted three potential accounts of faster searching with experience. The results indicate that learning does not result in faster object identification or greater search efficiency. Instead, familiar search arrays appear to allow faster resolution of search decisions, whether targets are present or absent.
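In this literature, search efficiency is indexed by the slope of reaction time over set size (ms/item); the authors' threshold account predicts that practice lowers the intercept rather than the slope. A minimal sketch of the slope computation, with invented numbers:

```python
import numpy as np

def search_slope(set_sizes, rts_ms):
    """Least-squares slope (ms/item) and intercept (ms) of RT on set size."""
    slope, intercept = np.polyfit(set_sizes, rts_ms, deg=1)
    return slope, intercept

# Hypothetical data: practice leaves the ~30 ms/item slope unchanged
# (no efficiency gain) but lowers the intercept (a lowered threshold).
print(search_slope([4, 8, 12], [820, 940, 1060]))  # -> (30.0, 700.0)
print(search_slope([4, 8, 12], [700, 820, 940]))   # -> (30.0, 580.0)
```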
Affiliation(s)
- Michael C Hout, Department of Psychology, Arizona State University, Tempe, AZ 85287-1104, USA

37
Jensen MS, Yao R, Street WN, Simons DJ. Change blindness and inattentional blindness. Wiley Interdiscip Rev Cogn Sci 2011; 2:529-546. [PMID: 26302304 DOI: 10.1002/wcs.130] [Citation(s) in RCA: 51] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/02/2023]
Abstract
Change blindness and inattentional blindness are both failures of visual awareness. Change blindness is the failure to notice an obvious change; inattentional blindness is the failure to notice the existence of an unexpected item. In each case, we fail to notice something that is clearly visible once we know to look for it. Despite these similarities, each type of blindness has a unique background and distinct theoretical implications. Here, we discuss the central paradigms used to explore each phenomenon in a historical context. We also outline the central findings from each field and discuss their implications for visual perception and attention. In addition, we examine the impact of task and observer effects on both types of blindness, as well as common pitfalls and confusions that arise in studying these topics.
Affiliation(s)
- Melinda S Jensen, Department of Psychology, University of Illinois, Champaign, IL, USA
- Richard Yao, Department of Psychology, University of Illinois, Champaign, IL, USA
- Whitney N Street, Department of Psychology, University of Illinois, Champaign, IL, USA
- Daniel J Simons, Department of Psychology, University of Illinois, Champaign, IL, USA

38
Ebersbach M, Stiehler S, Asmus P. On the relationship between children's perspective taking in complex scenes and their spatial drawing ability. Br J Dev Psychol 2011; 29:455-74. [DOI: 10.1348/026151010x504942] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
39
Abstract
In five experiments, we examined the influence of contextual objects' locations and visual features on visual memory. Participants' visual memory was tested with a change detection task in which they had to judge whether the orientation (Experiments 1A, 1B, and 2) or color (Experiments 3A and 3B) of a target object had remained the same, while contextual objects' locations and visual features were manipulated in the test image. The results showed that change detection performance was better when contextual objects' locations remained the same from study to test, demonstrating that the original spatial configuration is important for subsequent visual memory retrieval. The results further showed that changes to contextual objects' orientation, but not color, reduced orientation change detection performance, and changes to contextual objects' color, but not orientation, impaired color change detection performance. Contextual objects' visual features are therefore capable of affecting visual memory; however, selective attention plays an influential role in modulating such effects.
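Performance in same/different change-detection tasks like these is conventionally summarized as d' = z(H) - z(FA). A generic sketch of that computation (not the paper's analysis; the example rates are invented):

```python
from scipy.stats import norm

def dprime(hit_rate: float, fa_rate: float) -> float:
    """Yes/no signal-detection sensitivity: d' = z(H) - z(FA)."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example: intact context (H=.85, FA=.20) vs. relocated context (H=.70,
# FA=.20) gives a sensitivity drop of about 0.51 in d' units.
print(dprime(0.85, 0.20) - dprime(0.70, 0.20))
```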
40

41

42
Abstract
Visual search (e.g., finding a specific object in an array of other objects) is performed most effectively when people are able to ignore distracting nontargets. In repeated search, however, incidental learning of object identities may facilitate performance. In three experiments, with over 1,100 participants, we examined the extent to which search could be facilitated by object memory and by memory for spatial layouts. Participants searched for new targets (real-world, nameable objects) embedded among repeated distractors. To make the task more challenging, some participants performed search for multiple targets, increasing demands on visual working memory (WM). Following search, memory for search distractors was assessed using a surprise two-alternative forced choice recognition memory test with semantically matched foils. Search performance was facilitated by distractor object learning and by spatial memory; it was most robust when object identity was consistently tied to spatial locations and weakest (or absent) when object identities were inconsistent across trials. Incidental memory for distractors was better among participants who searched under high WM load, relative to low WM load. These results were observed when visual search included exhaustive-search trials (Experiment 1) or when all trials were self-terminating (Experiment 2). In Experiment 3, stimulus exposure was equated across WM load groups by presenting objects in a single-object stream; recognition accuracy was similar to that in Experiments 1 and 2. Together, the results suggest that people incidentally generate memory for nontarget objects encountered during search and that such memory can facilitate search performance.
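Accuracy on a surprise two-alternative forced-choice test like the one above is often converted to sensitivity with the standard 2AFC relation d' = sqrt(2) * z(proportion correct). A minimal sketch, with an invented accuracy value:

```python
from math import sqrt
from scipy.stats import norm

def dprime_2afc(prop_correct: float) -> float:
    """Sensitivity from 2AFC accuracy: d' = sqrt(2) * z(pC)."""
    return sqrt(2) * norm.ppf(prop_correct)

# Example: 75% correct against semantically matched foils -> d' of ~0.95.
print(dprime_2afc(0.75))
```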
43
Melcher D. Accumulating and remembering the details of neutral and emotional natural scenes. Perception 2010; 39:1011-25. [PMID: 20942355 DOI: 10.1068/p6670] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
In contrast to our rich sensory experience with complex scenes in everyday life, the capacity of visual working memory is thought to be quite limited. Here, memory for the details of naturalistic scenes was examined as a function of display duration, emotional valence of the scene, and delay before test. Experiment 1 examined individual differences in working memory and long-term memory for pictorial scenes; Experiment 2 investigated the accumulation of memory for emotional scenes and the retention of these details in long-term memory. Although there were large individual differences in performance, memory for scene details generally exceeded the traditional working memory limit within a few seconds. Information about positive scenes was learned most quickly, while negative scenes showed the worst memory for details. The overall pattern of results was consistent with the idea that both short-term and long-term representations are mixed together in a medium-term 'online' memory for scenes.
Affiliation(s)
- David Melcher, Centre for Mind/Brain Sciences and Department of Cognitive Sciences, University of Trento, Palazzo Fedrigotti, Corso Bettini 31, I 38068 Rovereto, Italy

44
Nakashima R, Yokosawa K. [Visual representation of natural scenes in flicker changes]. Shinrigaku Kenkyu 2010; 81:210-217. [PMID: 20845726 DOI: 10.4992/jjpsy.81.210] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/29/2023]
Abstract
Coherence theory of scene perception (Rensink, 2002) assumes that object representations on which attention is not focused are volatile, whereas visual memory theory (Hollingworth & Henderson, 2002) assumes that robust object representations are retained. We hypothesized that the difference between these two theories derives from the different experimental tasks on which they are based. To test this hypothesis, we examined the properties of visual representations using a change detection and memory task in a flicker paradigm. We measured the representations formed when participants were instructed to search for a change in a scene, and compared them with intentional memory representations. Visual representations were retained in visual long-term memory even in the flicker paradigm and were as robust as intentional memory representations. However, these representations were unavailable for explicitly localizing a scene change, yet available for answering the recognition test. This suggests that coherence theory and visual memory theory are compatible.
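The flicker paradigm alternates the original and a modified scene, separated by blanks, until the observer responds. A library-free sketch of the event schedule; the 240 ms image and 80 ms blank durations are common choices assumed here, not taken from this paper:

```python
from itertools import cycle

def flicker_schedule(n_cycles, image_ms=240, blank_ms=80):
    """Yield (frame, duration_ms) events for one flicker trial:
    original, blank, modified, blank, repeating n_cycles times."""
    frames = cycle([("original", image_ms), ("blank", blank_ms),
                    ("modified", image_ms), ("blank", blank_ms)])
    for _ in range(4 * n_cycles):
        yield next(frames)

# Drive these events with any presentation library; here we just print them.
for event in flicker_schedule(n_cycles=2):
    print(event)
```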
Affiliation(s)
- Ryoichi Nakashima, Department of Psychology, Graduate School of Humanities and Sociology, University of Tokyo, Hongo, Bunkyo-ku, Tokyo 113-0033, Japan

45
Abstract
Attention capture occurs when a stimulus event involuntarily recruits attention. The abrupt appearance of a new object is perhaps the best-studied attention-capturing event, yet there is debate over the root cause of this capture: does a new object capture attention because it involves the creation of a new object representation, or because its appearance creates a characteristic luminance transient? The present study sought to resolve this question by introducing a new object into a search display either with or without a unique luminance transient. Contrary to the results of a recent study (Davoli, Suszko, & Abrams, 2007), when the new object's transient was masked by a brief interstimulus interval introduced between the placeholder and search arrays, the new object did not capture attention. Moreover, when a new object's transient was masked, participants could not locate it efficiently even when that was their explicit goal. Together, these data suggest that luminance transient signals are necessary for attention capture by new objects.
46
Huebner GM, Gegenfurtner KR. Effects of Viewing Time, Fixations, and Viewing Strategies on Visual Memory for Briefly Presented Natural Objects. Q J Exp Psychol (Hove) 2010; 63:1398-413. [DOI: 10.1080/17470210903398139] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
We investigated the impact of viewing time and fixations on visual memory for briefly presented natural objects. Participants saw a display of eight natural objects arranged in a circle and used a partial report procedure to assign one object to the position it previously occupied during stimulus presentation. At the longest viewing time of 7,000 ms or 10 fixations, memory performance was significantly higher than at the shorter times. This increase was accompanied by a primacy effect, suggesting a contribution of another memory component—for example, visual long-term memory (VLTM). We found a very limited beneficial effect of fixations on objects; fixated objects were only remembered better at the shortest viewing times. Our results revealed an intriguing difference between the use of a blocked versus an interleaved experimental design. When trial length was predictable, in the blocked design, target fixation durations increased with longer viewing times. When trial length was unpredictable, fixation durations stayed the same for all viewing lengths. Memory performance was not affected by this design manipulation, thus also supporting the idea that the number and duration of fixations are not closely coupled to memory performance.
47
How high is visual short-term memory capacity for object layout? Atten Percept Psychophys 2010; 72:1097-109. [DOI: 10.3758/app.72.4.1097] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
48
Võ MLH, Schneider WX. A glimpse is not a glimpse: Differential processing of flashed scene previews leads to differential target search benefits. Vis Cogn 2010. [DOI: 10.1080/13506280802547901] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
49

50
Parra MA, Sala SD, Logie RH, Abrahams S. Selective impairment in visual short-term memory binding. Cogn Neuropsychol 2009; 26:583-605. [DOI: 10.1080/02643290903523286] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]