1
Torres RE, Duprey MS, Campbell KL, Emrich SM. Not all objects are created equal: The object benefit in visual working memory is supported by greater recollection-like memory, but only for memorable objects. Mem Cognit 2024. PMID: 39467965; DOI: 10.3758/s13421-024-01655-z.
Abstract
Visual working memory is thought to have a fixed capacity limit. However, recent evidence suggests that a greater number of real-world objects than simple features (e.g., colors) can be maintained, an effect termed the object benefit. Here, we examined whether this object benefit in visual working memory is due to qualitatively different memory processes employed for meaningful stimuli compared to simple features. In online samples of young adults, real-world objects were better remembered than colors, had higher measures of recollection, and showed a greater proportion of high-confidence responses (Exp. 1). Objects were also remembered better than their scrambled counterparts (Exp. 2), suggesting that this benefit is related to semantic information, rather than visual complexity. Critically, the specific objects that were likely to be remembered with high confidence were highly correlated across experiments, consistent with the idea that some objects are more memorable than others. Visual working memory performance for the least-memorable objects was worse than that of colors and scrambled objects. These findings suggest that real-world objects give rise to recollective, or at least high-confidence, responses at retrieval that may depend on activation of semantic features, but that this effect is limited to certain objects.
Affiliation(s)
- Rosa E Torres
- Department of Psychology, Brock University, St. Catharines, ON, Canada
- Mallory S Duprey
- Department of Psychology, Brock University, St. Catharines, ON, Canada
- Karen L Campbell
- Department of Psychology, Brock University, St. Catharines, ON, Canada
- Stephen M Emrich
- Department of Psychology, Brock University, St. Catharines, ON, Canada
2
Morales-Calva F, Leal SL. Tell me why: the missing w in episodic memory's what, where, and when. Cogn Affect Behav Neurosci 2024. PMID: 39455523; DOI: 10.3758/s13415-024-01234-4.
Abstract
Endel Tulving defined episodic memory as consisting of a spatiotemporal context. It enables us to recollect personal experiences of people, things, places, and situations. In other words, it is made up of what, where, and when components. However, this definition does not include arguably the most important aspect of episodic memory: the why. Understanding why we remember has important implications for how we understand our memory system and offers a potential target of intervention for memory impairment. The intrinsic and extrinsic factors related to why some experiences are better remembered than others have been widely investigated but largely independently studied. How these factors interact with one another to drive an event to become a lasting memory is still unknown. This review summarizes research examining the why of episodic memory, where we aim to uncover the factors that drive core features of our memory. We discuss the concept of episodic memory examining the what, where, and when, and how the why is essential to each of these key components of episodic memory. Furthermore, we discuss the neural mechanisms known to support our rich episodic memories and how a why signal may provide critical modulatory impact on neural activity and communication. Finally, we discuss the individual differences that may further drive why we remember certain experiences over others. A better understanding of these elements, and how we experience memory in daily life, can elucidate why we remember what we remember, providing important insight into the overarching goal of our memory system.
Affiliation(s)
- Stephanie L Leal
- Department of Psychological Sciences, Rice University, Houston, TX, USA
- Department of Integrative Biology & Physiology, UCLA, 621 Charles E Young Dr S, Los Angeles, CA, 90095, USA
3
Roberts BRT, Pruin J, Bainbridge WA, Rosenberg MD, deBettencourt MT. Memory augmentation with an adaptive cognitive interface. Psychon Bull Rev 2024. PMID: 39379775; DOI: 10.3758/s13423-024-02589-y.
Abstract
What we remember reflects both what we encounter, such as the intrinsic memorability of a stimulus, and our internal attentional state when we encounter that stimulus. Our memories are better for memorable images and images encountered in an engaged attentional state. Here, in an effort to modulate long-term memory performance, we manipulated these factors in combination by selecting the memorability of presented images contingent on individuals' natural fluctuations in sustained attention. Can image memorability and attentional state be strategically combined to improve memory? Are memorable images still well remembered during lapses in sustained attention, and conversely, can attentive states rescue memory performance for forgettable images? We designed a procedure to monitor participants' sustained attention dynamics on the fly via their response time fluctuations during a continuous performance task with trial-unique scene images. When high- or low-attentional states were detected, our algorithm triggered the presentation of high- or low-memorability images. Afterwards, participants completed a surprise recognition memory test for the attention-triggered images. Results demonstrated that memory performance for memorable items is not only resistant to lapses in sustained attention but also that memory cannot be further improved by encoding memorable items in engaged attentional states. On the other hand, memory performance for low-memorability images can be rescued by attentive encoding states. In sum, we show that both memorability and sustained attention can be leveraged in real time to maximize memory performance. This approach suggests that adaptive cognitive interfaces can tailor what information appears when to best support overall memory.
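The triggering procedure described above can be sketched in a few lines. This is a hedged illustration, not the authors' code: the calibration block, the three-trial window, the decile thresholds, and the mapping of fast responses to an "engaged" state are all assumptions standing in for the paper's actual parameters.

```python
import statistics
from collections import deque
from random import Random

# Calibration: collect baseline response times (RTs, in seconds) for this
# participant. Gaussian RTs here are simulated stand-ins for real data.
rng = Random(7)
baseline = [rng.gauss(0.50, 0.08) for _ in range(50)]
deciles = statistics.quantiles(baseline, n=10)
fast_cut, slow_cut = deciles[0], deciles[-1]   # 10th / 90th percentile cuts

recent = deque(maxlen=3)                       # sliding window of recent RTs

def attention_state(rt):
    """Classify the current attentional state from a short RT window:
    unusually fast = engaged, unusually slow = lapsing (an assumed mapping).
    A detected state would trigger a high- or low-memorability image."""
    recent.append(rt)
    if len(recent) < recent.maxlen:
        return None                            # not enough trials yet
    mean_rt = statistics.fmean(recent)
    if mean_rt < fast_cut:
        return "engaged"
    if mean_rt > slow_cut:
        return "lapsing"
    return None

for rt in (0.30, 0.29, 0.28):                  # a run of fast responses
    state = attention_state(rt)
print(state)
```

In the actual study, images of both memorability levels were presented in both detected states, so the interface crosses attentional state with image memorability rather than pairing them one-to-one.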
Affiliation(s)
- Brady R T Roberts
- Department of Psychology, University of Chicago, 940 East 57th Street, Chicago, IL, 60637, USA
- Institute for Mind and Biology, University of Chicago, Chicago, IL, USA
- Julia Pruin
- Department of Psychology, University of Chicago, 940 East 57th Street, Chicago, IL, 60637, USA
- Wilma A Bainbridge
- Department of Psychology, University of Chicago, 940 East 57th Street, Chicago, IL, 60637, USA
- Neuroscience Institute, University of Chicago, Chicago, IL, USA
- Institute for Mind and Biology, University of Chicago, Chicago, IL, USA
- Monica D Rosenberg
- Department of Psychology, University of Chicago, 940 East 57th Street, Chicago, IL, 60637, USA
- Neuroscience Institute, University of Chicago, Chicago, IL, USA
- Institute for Mind and Biology, University of Chicago, Chicago, IL, USA
- Megan T deBettencourt
- Department of Psychology, University of Chicago, 940 East 57th Street, Chicago, IL, 60637, USA
- Institute for Mind and Biology, University of Chicago, Chicago, IL, USA
4
Lee FM, Berman MG, Stier AJ, Bainbridge WA. Navigating Memorability Landscapes: Hyperbolic Geometry Reveals Hierarchical Structures in Object Concept Memory. bioRxiv 2024:2024.09.22.614329. PMID: 39386606; PMCID: PMC11463604; DOI: 10.1101/2024.09.22.614329.
Abstract
Why are some object concepts (e.g., birds, cars, vegetables) more memorable than others? Prior studies have suggested that features (e.g., color, animacy) and typicality (e.g., robin vs. penguin) of object images influence the likelihood of their being remembered. However, a complete understanding of object memorability remains elusive. In this study, we examine whether the geometric relationship between object concepts explains differences in their memorability. Specifically, we hypothesize that image concepts will be geometrically arranged in hierarchical structures and that memorability will be explained by a concept's depth in these hierarchical trees. To test this hypothesis, we construct a Hyperbolic representation space of object concepts (N=1,854) from the THINGS database (Hebart et al., 2019), which consists of naturalistic images of concrete objects, and a space of 49 feature dimensions derived from data-driven models. Using ALBATROSS (Stier, A. J., Giusti, C., & Berman, M. G., In prep), a stochastic topological data analysis technique that detects underlying structures of data, we demonstrate that Hyperbolic geometry efficiently captures the hierarchical organization of object concepts above and beyond a traditional Euclidean geometry and that hierarchical organization is related to memorability. We find that concepts closer to the center of the representational space are more prototypical and also more memorable. Importantly, Hyperbolic distances are more predictive of memorability and prototypicality than Euclidean distances, suggesting that concept memorability and typicality are organized hierarchically. Taken together, our work presents a novel hierarchical representational structure of object concepts that explains memorability and typicality.
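The geometric contrast at the heart of this preprint can be illustrated with the Poincaré-ball distance formula. This sketch only shows why hyperbolic distances suit hierarchies (they grow rapidly toward the boundary of the ball); the toy points are assumptions and do not reproduce the THINGS embedding or ALBATROSS.

```python
import numpy as np

def poincare_distance(u, v):
    """Distance in the Poincare ball model of hyperbolic space:
    d(u, v) = arccosh(1 + 2|u-v|^2 / ((1-|u|^2)(1-|v|^2)))."""
    sq = np.sum((u - v) ** 2)
    denom = (1 - np.sum(u ** 2)) * (1 - np.sum(v ** 2))
    return np.arccosh(1 + 2 * sq / denom)

# Toy points: two "prototypical" concepts near the center of the ball,
# and one "atypical" concept pushed out toward the boundary.
proto_a = np.array([0.05, 0.00])
proto_b = np.array([0.00, 0.05])
atypical = np.array([0.90, 0.00])

d_center = poincare_distance(proto_a, proto_b)   # small near the center
d_edge = poincare_distance(proto_a, atypical)    # inflated near the boundary

# Hyperbolic distance dwarfs the Euclidean one near the boundary, which is
# what lets tree-like structures (exponentially many leaves) embed with
# low distortion.
print(d_center, np.linalg.norm(proto_a - proto_b))
print(d_edge, np.linalg.norm(proto_a - atypical))
```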
5
Trinkl N, Wolfe JM. Image memorability influences memory for where the item was seen but not when. Mem Cognit 2024. PMID: 39256320; DOI: 10.3758/s13421-024-01635-3.
Abstract
Observers can determine whether they have previously seen hundreds of images with more than 80% accuracy. This "massive memory" for WHAT we have seen is accompanied by smaller but still massive memories for WHERE and WHEN the item was seen (spatial & temporal massive memory). Recent studies have shown that certain images are more easily remembered than others (higher "memorability"). Does memorability influence spatial massive memory and temporal massive memory? In two experiments, viewers saw 150 images presented twice in random order. These 300 images were sequentially presented at random locations in a 7 × 7 grid. If an image was categorized as old, observers clicked on the spot in the grid where they thought they had previously seen it. They also noted when they had seen it: in Experiment 1, by clicking on a timeline; in Experiment 2, by estimating the trial number at which the item first appeared. Replicating prior work, the data show that high-memorability images are remembered better than low-memorability images. Interestingly, in both experiments, spatial memory precision was correlated with image memorability, while temporal memory precision did not vary as a function of memorability. Apparently, properties that make images memorable help us remember WHERE but not WHEN those images were presented. The lack of correlation between memorability and temporal memory is, of course, a negative result and should be treated with caution.
Affiliation(s)
- Nathan Trinkl
- Visual Attention Laboratory, Dept. of Surgery, Brigham and Women's Hospital, Boston, MA, USA
- Jeremy M Wolfe
- Visual Attention Laboratory, Dept. of Surgery, Brigham and Women's Hospital, Boston, MA, USA
- Depts of Ophthalmology & Radiology, Harvard Medical School, Boston, MA, USA
- Visual Attention Lab, Department of Surgery, Brigham & Women's Hospital, 900 Commonwealth Ave, 3rd Floor, Boston, MA, 02215, USA
6
Ye C, Guo L, Wang N, Liu Q, Xie W. Perceptual encoding benefit of visual memorability on visual memory formation. Cognition 2024;248:105810. PMID: 38733867; PMCID: PMC11369960; DOI: 10.1016/j.cognition.2024.105810.
Abstract
Human observers often exhibit remarkable consistency in remembering specific visual details, such as certain face images. This phenomenon is commonly attributed to visual memorability, a collection of stimulus attributes that enhance the long-term retention of visual information. However, the exact contributions of visual memorability to visual memory formation remain elusive, as these effects could emerge anywhere from early perceptual encoding to post-perceptual memory consolidation processes. To clarify this, we tested three key predictions from the hypothesis that visual memorability facilitates early perceptual encoding that supports the formation of visual short-term memory (VSTM) and the retention of visual long-term memory (VLTM). First, we examined whether memorability benefits in VSTM encoding manifest early, even within the constraints of a brief stimulus presentation (100-200 ms; Experiment 1). We achieved this by manipulating stimulus presentation duration in a VSTM change detection task using face images with high or low memorability while ensuring they were equally familiar to the participants. Second, we assessed whether this early memorability benefit increases the likelihood of VSTM retention, even with post-stimulus masking designed to interrupt post-perceptual VSTM consolidation processes (Experiment 2). Last, we investigated the durability of memorability benefits by manipulating memory retention intervals from seconds to 24 h (Experiment 3). Across experiments, our data suggest that visual memorability has an early impact on VSTM formation, persisting across variable retention intervals and predicting subsequent VLTM overnight. Combined, these findings highlight that visual memorability enhances visual memory within 100-200 ms following stimulus onset, resulting in robust memory traces resistant to post-perceptual interruption and long-term forgetting.
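Change detection performance of this kind is commonly summarized with Cowan's K. The abstract does not specify the authors' scoring, so the formula choice and the hit/false-alarm rates below are illustrative assumptions, not values from the study.

```python
def cowans_k(hit_rate, false_alarm_rate, set_size):
    """Cowan's K capacity estimate for single-probe change detection:
    K = set_size * (hit rate - false alarm rate)."""
    return set_size * (hit_rate - false_alarm_rate)

# Hypothetical data: high- vs. low-memorability faces probed at set size 4.
k_high_mem = cowans_k(hit_rate=0.80, false_alarm_rate=0.20, set_size=4)
k_low_mem = cowans_k(hit_rate=0.65, false_alarm_rate=0.25, set_size=4)

# A higher K for memorable faces would reflect the encoding benefit the
# paper reports.
print(k_high_mem, k_low_mem)
```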
Affiliation(s)
- Chaoxiong Ye
- Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu 610066, China; Department of Psychology, University of Jyväskylä, Jyväskylä 40014, Finland; School of Education, Anyang Normal University, Anyang 455000, China
- Lijing Guo
- Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu 610066, China; Department of Psychology, University of Jyväskylä, Jyväskylä 40014, Finland
- Nathan Wang
- Johns Hopkins University, Baltimore, MD 21218, United States of America
- Qiang Liu
- Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu 610066, China; Department of Psychology, University of Jyväskylä, Jyväskylä 40014, Finland
- Weizhen Xie
- Department of Psychology, University of Maryland, College Park, MD 20742, United States of America
7
Ma AC, Cameron AD, Wiener M. Memorability shapes perceived time (and vice versa). Nat Hum Behav 2024;8:1296-1308. PMID: 38649460; DOI: 10.1038/s41562-024-01863-2.
Abstract
Visual stimuli are known to vary in their perceived duration. Some visual stimuli are also known to linger for longer in memory. Yet, whether these two features of visual processing are linked is unknown. Despite early assumptions that time is an extracted or higher-order feature of perception, more recent work over the past two decades has demonstrated that timing may be instantiated within sensory modality circuits. A primary location for many of these studies is the visual system, where duration-sensitive responses have been demonstrated. Furthermore, visual stimulus features have been observed to shift perceived duration. These findings suggest that visual circuits mediate or construct perceived time. Here we present evidence across a series of experiments that perceived time is affected by the image properties of scene size, clutter and memorability. More specifically, we observe that scene size and memorability dilate time, whereas clutter contracts it. Furthermore, the durations of more memorable images are also perceived more precisely. Conversely, the longer the perceived duration of an image, the more memorable it is. To explain these findings, we applied a recurrent convolutional neural network model of the ventral visual system, in which images are progressively processed over time. We find that more memorable images are processed faster, and that this increase in processing speed predicts both the lengthening and the increased precision of perceived durations. These findings provide evidence for a link between image features, time perception and memory that can be further explored with models of visual processing.
Affiliation(s)
- Alex C Ma
- Department of Psychology, George Mason University, Fairfax, VA, USA
- Ayana D Cameron
- Department of Psychology, George Mason University, Fairfax, VA, USA
- Martin Wiener
- Department of Psychology, George Mason University, Fairfax, VA, USA
8
Lin Q, Li Z, Lafferty J, Yildirim I. Images with harder-to-reconstruct visual representations leave stronger memory traces. Nat Hum Behav 2024;8:1309-1320. PMID: 38740989; DOI: 10.1038/s41562-024-01870-3.
Abstract
Much of what we remember is not because of intentional selection, but simply a by-product of perceiving. This raises a foundational question about the architecture of the mind: how does perception interface with and influence memory? Here, inspired by a classic proposal relating perceptual processing to memory durability, the level-of-processing theory, we present a sparse coding model for compressing feature embeddings of images, and show that the reconstruction residuals from this model predict how well images are encoded into memory. In an open memorability dataset of scene images, we show that reconstruction error not only explains memory accuracy, but also response latencies during retrieval, subsuming, in the latter case, all of the variance explained by powerful vision-only models. We also confirm a prediction of this account with 'model-driven psychophysics'. This work establishes reconstruction error as an important signal interfacing perception and memory, possibly through adaptive modulation of perceptual processing.
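The paper's core quantity, a reconstruction residual from sparse coding, can be sketched with a toy orthogonal matching pursuit. Everything below is an assumption for illustration: the random dictionary, the dimensions, and the synthetic "embeddings" stand in for the authors' compression of feature embeddings of real scene images.

```python
import numpy as np

def omp_residual(x, D, n_nonzero):
    """Greedy orthogonal matching pursuit: approximate x with n_nonzero
    columns (atoms) of D, returning the norm of the reconstruction residual."""
    residual, support = x.copy(), []
    for _ in range(n_nonzero):
        # Pick the atom most correlated with what is still unexplained.
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coefs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coefs
    return float(np.linalg.norm(residual))

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))        # 50 atoms in 20 dimensions
D /= np.linalg.norm(D, axis=0)           # unit-norm atoms

easy = 2.0 * D[:, 3] - 1.0 * D[:, 17]    # exactly sparse in the dictionary
hard = rng.standard_normal(20)           # generic vector, not sparse in D

# Larger residual = harder to reconstruct = (per the paper) more strongly
# encoded into memory.
print(omp_residual(easy, D, 2), omp_residual(hard, D, 2))
```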
Affiliation(s)
- Qi Lin
- Department of Psychology, Yale University, New Haven, CT, USA
- Center for Brain Science, RIKEN, Wako, Japan
- Zifan Li
- Department of Statistics & Data Science, Yale University, New Haven, CT, USA
- John Lafferty
- Department of Statistics & Data Science, Yale University, New Haven, CT, USA
- Wu-Tsai Institute, Yale University, New Haven, CT, USA
- Ilker Yildirim
- Department of Psychology, Yale University, New Haven, CT, USA
- Department of Statistics & Data Science, Yale University, New Haven, CT, USA
- Wu-Tsai Institute, Yale University, New Haven, CT, USA
9
Brook L, Kreichman O, Masarwa S, Gilaie-Dotan S. Higher-contrast images are better remembered during naturalistic encoding. Sci Rep 2024;14:13445. PMID: 38862623; PMCID: PMC11166978; DOI: 10.1038/s41598-024-63953-5.
Abstract
It is unclear whether memory for images of poorer visibility (such as low contrast or small size) will be lower due to weak signals elicited in early visual processing stages, or perhaps better, since their processing may entail top-down processes (such as effort and attention) associated with deeper encoding. We have recently shown that during naturalistic encoding (free viewing without task-related modulations), for image sizes between 3° and 24°, bigger images stimulating more visual system processing resources at early processing stages are better remembered. Similar to size, higher contrast leads to higher activity in early visual processing. Therefore, here we hypothesized that during naturalistic encoding, at critical visibility ranges, higher contrast images will lead to higher signal-to-noise ratio and better signal quality flowing downstream and will thus be better remembered. Indeed, we found that during naturalistic encoding higher contrast images were remembered better than lower contrast ones (~15% higher accuracy, ~1.58 times better) for images in the 7.5-60 RMS contrast range. Although image contrast and size modulate early visual processing very differently, our results further substantiate that at poor visibility ranges, during naturalistic non-instructed visual behavior, physical image dimensions (contributing to image visibility) impact image memory.
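RMS contrast, the measure manipulated here, is simply the standard deviation of pixel intensities. A minimal sketch under assumptions: 8-bit grayscale values, synthetic noise images, and a 0-1 normalization rather than the paper's 0-100 RMS scale.

```python
import numpy as np

def rms_contrast(image):
    """RMS contrast: standard deviation of intensities after scaling 8-bit
    pixel values to the 0-1 range."""
    return float((np.asarray(image, dtype=float) / 255.0).std())

rng = np.random.default_rng(1)
# Synthetic mid-gray images whose pixel spread sets their contrast.
low = 128.0 + rng.normal(0.0, 10.0, size=(64, 64))
high = 128.0 + rng.normal(0.0, 60.0, size=(64, 64))

# The wider intensity spread yields the higher RMS contrast.
print(rms_contrast(low), rms_contrast(high))
```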
Affiliation(s)
- Limor Brook
- School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, Ramat Gan, Israel
- The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- Olga Kreichman
- School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, Ramat Gan, Israel
- The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- Shaimaa Masarwa
- School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, Ramat Gan, Israel
- The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- Sharon Gilaie-Dotan
- School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, Ramat Gan, Israel
- The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- UCL Institute of Cognitive Neuroscience, London, UK
10
Qasim SE, Deswal A, Saez I, Gu X. Positive affect modulates memory by regulating the influence of reward prediction errors. Commun Psychol 2024;2:52. PMID: 39242805; PMCID: PMC11332028; DOI: 10.1038/s44271-024-00106-4.
Abstract
How our decisions impact our memories is not well understood. Reward prediction errors (RPEs), the difference between expected and obtained reward, help us learn to make optimal decisions-providing a signal that may influence subsequent memory. To measure this influence and how it might go awry in mood disorders, we recruited a large cohort of human participants to perform a decision-making task in which perceptually memorable stimuli were associated with probabilistic rewards, followed by a recognition test for those stimuli. Computational modeling revealed that positive RPEs enhanced both the accuracy of memory and the temporal efficiency of memory search, beyond the contribution of perceptual information. Critically, positive affect upregulated the beneficial effect of RPEs on memory. These findings demonstrate how affect selectively regulates the impact of RPEs on memory, providing a computational mechanism for biased memory in mood disorders.
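The RPE computation at the center of this study is the standard Rescorla-Wagner-style update. The sketch below shows the general mechanism only; the learning rate, reward sequence, and especially the affect-modulated `mood_gain` weight are hypothetical illustrations of positive affect upregulating the RPE's influence on memory, not the authors' fitted model.

```python
def update(value, reward, lr=0.3):
    """One Rescorla-Wagner step: RPE = obtained minus expected reward."""
    rpe = reward - value
    return value + lr * rpe, rpe

value = 0.0
memory_boost = 0.0
mood_gain = 1.5                    # >1 under positive affect (assumption)

for reward in [1, 1, 0, 1]:        # a probabilistic reward sequence
    value, rpe = update(value, reward)
    if rpe > 0:                    # positive RPEs strengthen encoding
        memory_boost += mood_gain * rpe

print(round(value, 4), round(memory_boost, 4))
```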
Affiliation(s)
- Salman E Qasim
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Center for Computational Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Ignacio Saez
- Center for Computational Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Department of Neurosurgery, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Department of Neurology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Xiaosi Gu
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Center for Computational Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA
11
Lahner B, Mohsenzadeh Y, Mullin C, Oliva A. Visual perception of highly memorable images is mediated by a distributed network of ventral visual regions that enable a late memorability response. PLoS Biol 2024;22:e3002564. PMID: 38557761; PMCID: PMC10984539; DOI: 10.1371/journal.pbio.3002564.
Abstract
Behavioral and neuroscience studies in humans and primates have shown that memorability is an intrinsic property of an image that predicts its strength of encoding into and retrieval from memory. While previous work has independently probed when or where this memorability effect may occur in the human brain, a description of its spatiotemporal dynamics is missing. Here, we used representational similarity analysis (RSA) to combine functional magnetic resonance imaging (fMRI) with source-estimated magnetoencephalography (MEG) to simultaneously measure when and where the human cortex is sensitive to differences in image memorability. Results reveal that visual perception of High Memorable images, compared to Low Memorable images, recruits a set of regions of interest (ROIs) distributed throughout the ventral visual cortex: a late memorability response (from around 300 ms) in early visual cortex (EVC), inferior temporal cortex, lateral occipital cortex, fusiform gyrus, and banks of the superior temporal sulcus. Image memorability magnitudes are represented after high-level feature processing in visual regions and are reflected in classical memory regions in the medial temporal lobe (MTL). Our results present, to our knowledge, the first unified spatiotemporal account of the visual memorability effect across the human cortex, further supporting the levels-of-processing theory of perception and memory.
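RSA, the method used to fuse the fMRI and MEG data, reduces each measurement space to a representational dissimilarity matrix (RDM) and then correlates the RDMs. A miniature sketch with toy data standing in for real response patterns (the sizes and noise level are arbitrary assumptions):

```python
import numpy as np

def rdm(patterns):
    """RDM: 1 - Pearson correlation between all pairs of condition patterns."""
    return 1 - np.corrcoef(patterns)

rng = np.random.default_rng(0)
conditions = rng.standard_normal((6, 40))                    # 6 images x 40 "voxels"
meg_like = conditions + 0.3 * rng.standard_normal((6, 40))   # noisy second modality

rdm_a, rdm_b = rdm(conditions), rdm(meg_like)

# Compare only the unique off-diagonal entries of the two RDMs.
iu = np.triu_indices(6, k=1)
similarity = float(np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1])
print(similarity)
```

Because the two toy "modalities" share the same underlying patterns, their RDMs correlate highly; in the study this second-order correlation is what links the fMRI (where) and MEG (when) measurements.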
Affiliation(s)
- Benjamin Lahner
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Yalda Mohsenzadeh
- The Brain and Mind Institute, The University of Western Ontario, London, Canada
- Department of Computer Science, The University of Western Ontario, London, Canada
- Vector Institute for Artificial Intelligence, Toronto, Ontario, Canada
- Caitlin Mullin
- Vision: Science to Application (VISTA), York University, Toronto, Ontario, Canada
- Aude Oliva
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
12
Morales-Calva F, Leal SL. Emotional modulation of memorability in mnemonic discrimination. Neurobiol Learn Mem 2024;210:107904. PMID: 38423168; DOI: 10.1016/j.nlm.2024.107904.
Abstract
Although elements such as emotion may serve to enhance or impair memory for images, some images are consistently remembered or forgotten by most people, an intrinsic characteristic of images known as memorability. Memorability explains some of the variability in memory performance; however, the underlying mechanisms of memorability remain unclear. It is known that emotional valence can increase the memorability of an experience, but how these two elements interact is still unknown. Hippocampal pattern separation, a computation that orthogonalizes overlapping experiences as distinct from one another, may be a candidate mechanism underlying memorability. However, these two literatures have remained largely separate. To explore the interaction between image memorability and emotion on pattern separation, we examined performance on an emotional mnemonic discrimination task, a putative behavioral correlate of hippocampal pattern separation, by splitting stimuli into memorable and forgettable categories as determined by a convolutional neural network, as well as by emotion, lure similarity, and time of testing (immediate and 24-hour delay). We measured target recognition, which is typically used to determine memorability scores, as well as lure discrimination, which taxes hippocampal pattern separation and has not yet been examined within a memorability framework. Here, we show that more memorable images were better remembered across both target recognition and lure discrimination measures. However, for target recognition, this was only true upon immediate testing, not after a 24-hour delay. For lure discrimination, we found that memorability interacts with lure similarity in a way that depends on the time of testing: memorability primarily impacts high-similarity lure discrimination when tested immediately but impacts low-similarity lure discrimination after a 24-hour delay. Furthermore, only lure discrimination showed an interaction between emotion and memorability, in which forgettable neutral images showed better lure discrimination than more memorable ones. These results suggest that careful consideration of what makes an image memorable is required, and that the answer may depend on which aspects of the image are more memorable (e.g., gist vs. detail, emotional vs. neutral).
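Mnemonic discrimination of the kind taxed here is typically scored with a bias-corrected lure discrimination index (LDI). The formula is standard for mnemonic similarity tasks, but the response rates below are hypothetical numbers chosen only to mirror the direction of the neutral-image interaction reported above; they are not data from the study.

```python
def lure_discrimination_index(p_similar_lure, p_similar_foil):
    """LDI = p('similar' | similar lure) - p('similar' | novel foil),
    subtracting foil responses to correct for a bias toward 'similar'."""
    return p_similar_lure - p_similar_foil

# Hypothetical neutral-image rates: forgettable items out-discriminating
# memorable ones.
ldi_forgettable = lure_discrimination_index(0.60, 0.10)
ldi_memorable = lure_discrimination_index(0.45, 0.10)
print(ldi_forgettable, ldi_memorable)
```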
Affiliation(s)
- Fernanda Morales-Calva
- Department of Psychological Sciences, Rice University, BioScience Research Collaborative, Suite 780B, 6500 Main Street, Houston, TX 77030, USA
- Stephanie L Leal
- Department of Psychological Sciences, Rice University, BioScience Research Collaborative, Suite 780B, 6500 Main Street, Houston, TX 77030, USA
13
Almog G, Alavi Naeini S, Hu Y, Duerden EG, Mohsenzadeh Y. Memoir study: Investigating image memorability across developmental stages. PLoS One 2023;18:e0295940. PMID: 38117776; PMCID: PMC10732434; DOI: 10.1371/journal.pone.0295940.
Abstract
Images have been shown to differ consistently in their memorability in healthy adults: some images stick in one's mind while others are quickly forgotten. Studies have suggested that memorability is an intrinsic, continuous property of a visual stimulus that can be both measured and manipulated. The memory literature suggests that important developmental changes occur throughout adolescence that affect recognition memory, yet the effect of these changes on image memorability has not been investigated. In the current study, we recruited adolescents ages 11-18 (n = 273, mean age = 16) for an online visual memory experiment to explore the effects of developmental changes throughout adolescence on image memorability and to determine whether memorability findings in adults generalize to adolescents. We used the online experiment to calculate adolescent memorability scores for 1,000 natural images and compared the results to the MemCat dataset, a memorability dataset annotated with adult memorability scores (ages 19-27). We find that memorability scores in adolescents and adults are strongly and significantly correlated (Spearman's rank correlation, r = 0.76, p < 0.001). This correlation persists even when comparing adults with developmentally distinct sub-groups of adolescents (ages 11-14: r = 0.67, p < 0.001; ages 15-18: r = 0.60, p < 0.001). Moreover, the rankings of image categories by mean memorability score were identical in adolescents and adults (including the adolescent sub-groups), indicating that, broadly, certain image categories are more memorable for both groups. Interestingly, however, adolescents showed significantly higher false alarm rates than adults, consistent with studies showing increased impulsivity and reward-seeking behaviour in adolescence. Our results reveal that the memorability of images remains consistent across individuals at different stages of development. This consistency aligns with and strengthens prior research indicating that memorability is an intrinsic property of images, and it opens new pathways for applying memorability studies to adolescent populations, with implications for fields such as education, marketing, and psychology.
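The cross-group consistency reported here rests on Spearman rank correlation of per-image memorability scores. A self-contained sketch of that computation (the scores below are invented for illustration, not the study's data):

```python
def rankdata(xs):
    """Average ranks (1-based); tied values get the mean of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of 0-based positions, +1 for 1-based ranks
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: the Pearson correlation of the two rank vectors."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical per-image hit rates for the same images in two groups.
adolescent = [0.91, 0.55, 0.78, 0.34, 0.62, 0.88, 0.47, 0.70]
adult      = [0.91, 0.50, 0.81, 0.30, 0.58, 0.85, 0.52, 0.66]
rho = spearman_rho(adolescent, adult)
print(f"rho = {rho:.3f}")
```

A high rho indicates that images rank similarly in memorability across the two groups, even if absolute hit rates differ.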
Affiliation(s)
- Gal Almog
  - Western Institute for Neuroscience, University of Western Ontario, London, Ontario, Canada
  - Department of Computer Science, University of Western Ontario, London, Ontario, Canada
  - Department of Pathology and Laboratory Medicine, University of Western Ontario, London, Ontario, Canada
- Saeid Alavi Naeini
  - Western Institute for Neuroscience, University of Western Ontario, London, Ontario, Canada
  - Department of Computer Science, University of Western Ontario, London, Ontario, Canada
- Yu Hu
  - Western Institute for Neuroscience, University of Western Ontario, London, Ontario, Canada
- Emma G. Duerden
  - Western Institute for Neuroscience, University of Western Ontario, London, Ontario, Canada
  - Applied Psychology, Faculty of Education, University of Western Ontario, London, Ontario, Canada
- Yalda Mohsenzadeh
  - Western Institute for Neuroscience, University of Western Ontario, London, Ontario, Canada
  - Department of Computer Science, University of Western Ontario, London, Ontario, Canada
  - Vector Institute for Artificial Intelligence, Toronto, Ontario, Canada
|
14
|
Gedvila M, Ongchoco JDK, Bainbridge WA. Memorable beginnings, but forgettable endings: Intrinsic memorability alters our subjective experience of time. Visual Cognition 2023; 31:380-389. [PMID: 38708421 PMCID: PMC11068022 DOI: 10.1080/13506285.2023.2268382] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Received: 01/07/2023] [Accepted: 09/19/2023] [Indexed: 05/07/2024]
Abstract
Time is the fabric of experience, yet it is remarkably malleable in the mind of the observer, seeming to drag on or fly right by at different moments. One of the most influential drivers of temporal distortions is attention: heightened attention dilates subjective time. But an equally important feature of subjective experience involves not just the objects of attention but also what information will naturally be remembered or forgotten, independent of attention (i.e., intrinsic image memorability). Here we test how memorability influences time perception. Observers viewed scenes in an oddball paradigm, where the last scene could be a forgettable "oddball" amidst memorable ones, or vice versa. Subjective time dilation occurred only for forgettable oddballs, not memorable ones, demonstrating an oddball effect in which the oddball did not differ in low-level visual features, image category, or even subjective memorability. More importantly, these results emphasize how memory can interact with temporal experience: forgettable endings amidst memorable sequences dilate our experience of time.
|
15
|
Roberts BRT, MacLeod CM, Fernandes MA. Symbol superiority: Why $ is better remembered than 'dollar'. Cognition 2023; 238:105435. [PMID: 37285688 DOI: 10.1016/j.cognition.2023.105435] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/03/2022] [Revised: 03/02/2023] [Accepted: 03/05/2023] [Indexed: 06/09/2023]
Abstract
Memory is typically better for information presented in picture format than in word format. Dual-coding theory (Paivio, 1969) proposes that this is because pictures are spontaneously labelled, creating two representational codes, image and verbal, whereas words often yield only a single (verbal) code. With this perspective as motivation, the present investigation asked whether common graphic symbols (e.g., !@#$%&) are afforded primarily verbal coding, akin to words, or whether they also invoke visual imagery, as pictures do. Across four experiments, participants were presented at study with graphic symbols or words (e.g., $ or 'dollar'). In Experiment 1, memory was assessed using free recall; in Experiment 2, using old-new recognition. In Experiment 3, the word set was restricted to a single category. In Experiment 4, memory for graphic symbols, pictures, and words was directly compared. All four experiments demonstrated a memory benefit for symbols relative to words. In a fifth experiment, machine-learning estimates of inherent stimulus memorability predicted memory performance in the earlier experiments. This study is the first to present evidence that, like pictures, graphic symbols are better remembered than words, in line with dual-coding theory and with a distinctiveness account. We reason that symbols offer a visual referent for abstract concepts that are otherwise unlikely to be spontaneously imaged.
Affiliation(s)
- Brady R T Roberts
  - Department of Psychology, University of Waterloo, Waterloo, Ontario, Canada
- Colin M MacLeod
  - Department of Psychology, University of Waterloo, Waterloo, Ontario, Canada
- Myra A Fernandes
  - Department of Psychology, University of Waterloo, Waterloo, Ontario, Canada
|
16
|
Gillies G, Park H, Woo J, Walther DB, Cant JS, Fukuda K. Tracing the emergence of the memorability benefit. Cognition 2023; 238:105489. [PMID: 37163952 DOI: 10.1016/j.cognition.2023.105489] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Received: 04/27/2022] [Revised: 04/10/2023] [Accepted: 05/03/2023] [Indexed: 05/12/2023]
Abstract
Some visual stimuli are consistently better remembered than others across individuals, owing to variations in memorability, the stimulus-intrinsic property that determines how easily a stimulus is encoded into visual long-term memory (VLTM). However, it remains unclear which cognitive processes give rise to this mnemonic benefit. One possibility is that the benefit is imbued within the capacity-limited bottleneck of VLTM encoding, namely visual working memory (VWM). More precisely, memorable stimuli may be preferentially encoded into VLTM because fewer cognitive resources are required to store them in VWM (the efficiency hypothesis). Alternatively, memorable stimuli may be more competitive than forgettable stimuli in obtaining cognitive resources, leading to more successful storage in VWM (the competitiveness hypothesis). The memorability benefit might also emerge post-VWM, specifically if memorable stimuli are less prone to being forgotten (i.e., are "stickier") after they pass through the encoding bottleneck (the stickiness hypothesis). To test these possibilities, we conducted two experiments examining how memorability benefits emerge by manipulating stimulus memorability, set size, and the degree of competition among stimuli as participants encoded them in the context of a working memory task; memory for the encoded stimuli was subsequently tested in a VLTM task. In the VWM task, performance was better for memorable than for forgettable stimuli, supporting the efficiency hypothesis. In addition, when in direct competition, memorable stimuli were better at attracting limited VWM resources than forgettable stimuli, supporting the competitiveness hypothesis. However, only the efficiency advantage translated into a performance benefit in VLTM. Lastly, memorable stimuli were less likely to be forgotten after passing through the encoding bottleneck imposed by VWM, supporting the stickiness hypothesis. Thus, our results demonstrate that the memorability benefit develops across multiple cognitive processes.
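VWM performance in change-detection tasks of the kind used in these experiments is conventionally summarized with Cowan's K, K = set size × (hit rate − false-alarm rate). A brief sketch under that standard formula (the rates below are hypothetical; the paper's exact analysis may differ):

```python
def cowans_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K: estimated number of items held in visual working memory,
    derived from single-probe change-detection performance."""
    return set_size * (hit_rate - false_alarm_rate)

# Hypothetical: at set size 6, memorable images yield more hits at the same
# false-alarm rate than forgettable images, i.e., higher estimated storage.
k_memorable = cowans_k(6, 0.75, 0.15)    # ≈ 3.6 items
k_forgettable = cowans_k(6, 0.55, 0.15)  # ≈ 2.4 items
print(k_memorable, k_forgettable)
```

Comparing K across memorable and forgettable stimulus sets at matched set sizes is one way to express the "efficiency" advantage in storage terms.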
Affiliation(s)
- Greer Gillies
  - University of Toronto, Mississauga, Canada; University of Toronto, Scarborough, Canada; University of Toronto, Canada
- Hyun Park
  - University of Toronto, Mississauga, Canada; University of Toronto, Canada
- Jason Woo
  - University of Toronto, Mississauga, Canada
- Jonathan S Cant
  - University of Toronto, Scarborough, Canada; University of Toronto, Canada
- Keisuke Fukuda
  - University of Toronto, Mississauga, Canada; University of Toronto, Canada
|
17
|
Jeong SK. Cross-cultural consistency of image memorability. Sci Rep 2023; 13:12737. [PMID: 37543662 PMCID: PMC10404227 DOI: 10.1038/s41598-023-39988-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 01/17/2023] [Accepted: 08/03/2023] [Indexed: 08/07/2023] Open
Abstract
Memorability refers to the intrinsic property of an image that determines how well it is remembered or forgotten. Recent studies have found that memorability is highly consistent across individuals. However, most memorability studies have been conducted with participants from Western cultures, using culturally biased image sets. Previous studies implicitly assumed that memorability holds constant across cultural groups; to the best of our knowledge, however, this assumption has not been empirically tested. In the current study, we recruited participants from South Korea and the US and examined whether image memorability was consistent across these two cultures. We found that South Korean participants showed better memory for images rated as highly memorable by US participants. These findings provide converging evidence that image memorability is not fully accounted for by individual differences and suggest cross-cultural consistency in image memorability.
Affiliation(s)
- Su Keun Jeong
  - Department of Psychology, Chungbuk National University, Chungdae-ro 1, Seowon-gu, Cheongju, Chungbuk, 28644, Korea
|
18
|
Davis TM, Bainbridge WA. Memory for artwork is predictable. Proc Natl Acad Sci U S A 2023; 120:e2302389120. [PMID: 37399388 PMCID: PMC10334794 DOI: 10.1073/pnas.2302389120] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 02/10/2023] [Accepted: 05/17/2023] [Indexed: 07/05/2023] Open
Abstract
Viewing art is often seen as a highly personal and subjective experience. However, are there universal factors that make a work of art memorable? We conducted three experiments, where we recorded online memory performance for 4,021 paintings from the Art Institute of Chicago, tested in-person memory after an unconstrained visit to the Art Institute, and obtained abstract attribute measures such as beauty and emotional valence for these pieces. Participants showed significant agreement in their memories both online and in-person, suggesting that pieces have an intrinsic "memorability" based solely on their visual properties that is predictive of memory in a naturalistic museum setting. Importantly, ResMem, a deep learning neural network designed to estimate image memorability, could significantly predict memory both online and in-person based on the images alone, and these predictions could not be explained by other low- or high-level attributes like color, content type, aesthetics, and emotion. A regression comprising ResMem and other stimulus factors could predict as much as half of the variance of in-person memory performance. Further, ResMem could predict the fame of a piece, despite having no cultural or historical knowledge. These results suggest that perceptual features of a painting play a major role in influencing its success, both in memory for a museum visit and in cultural memory over generations.
Affiliation(s)
- Trent M. Davis
  - Department of Psychology, University of Chicago, Chicago, IL 60637
|
19
|
Kramer MA, Hebart MN, Baker CI, Bainbridge WA. The features underlying the memorability of objects. Science Advances 2023; 9:eadd2981. [PMID: 37126552 PMCID: PMC10132746 DOI: 10.1126/sciadv.add2981] [Citation(s) in RCA: 12] [Impact Index Per Article: 12.0] [Received: 06/17/2022] [Accepted: 03/29/2023] [Indexed: 05/03/2023]
Abstract
What makes certain images more memorable than others? While much memory research has focused on participant effects, recent studies taking a stimulus-centric perspective have sparked debate on the determinants of memory, including the roles of semantic and visual features and whether the most prototypical or the most atypical items are best remembered. Prior studies have typically relied on constrained stimulus sets, limiting a generalized view of the features underlying what we remember. Here, we collected more than 1 million memory ratings for a naturalistic dataset of 26,107 object images designed to comprehensively sample concrete objects. We established a model of object features that is predictive of image memorability and examined whether memorability could be accounted for by the typicality of the objects. We find that semantic features exert a stronger influence than perceptual features on what we remember and that the relationship between memorability and typicality is more complex than a simple positive or negative association alone.
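Asking whether semantic or perceptual features better predict memorability amounts, in its simplest univariate form, to comparing explained variance (R²) between feature types. A minimal sketch of that comparison (all scores below are invented for illustration, not the study's model):

```python
def r_squared(x, y):
    """Proportion of variance in y explained by a univariate linear fit on x
    (equivalently, the squared Pearson correlation)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return (cov * cov) / (vx * vy)

# Hypothetical per-object scores: a semantic feature rating, a perceptual
# feature rating, and measured memorability (hit rate).
semantic = [0.2, 0.5, 0.9, 0.4, 0.7, 0.8]
perceptual = [0.6, 0.1, 0.5, 0.9, 0.3, 0.4]
memorability = [0.35, 0.55, 0.90, 0.45, 0.75, 0.80]

r2_sem = r_squared(semantic, memorability)
r2_per = r_squared(perceptual, memorability)
print(f"semantic R^2 = {r2_sem:.2f}, perceptual R^2 = {r2_per:.2f}")
```

In this toy example the semantic predictor explains far more variance, mirroring the qualitative pattern the abstract reports; the actual study used a much richer multi-feature model.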
Affiliation(s)
- Max A. Kramer
  - Department of Psychology, University of Chicago, Chicago, IL, USA
  - Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA
- Martin N. Hebart
  - Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
  - Department of Medicine, Justus Liebig University Giessen, Giessen, Germany
- Chris I. Baker
  - Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
- Wilma A. Bainbridge
  - Department of Psychology, University of Chicago, Chicago, IL, USA
  - Neuroscience Institute, University of Chicago, Chicago, IL, USA
|
20
|
Kolisnyk M, Pereira AE, Tozios CJI, Fukuda K. Dissociating the Impact of Memorability on Electrophysiological Correlates of Memory Encoding Success. J Cogn Neurosci 2023; 35:603-627. [PMID: 36626358 DOI: 10.1162/jocn_a_01960] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 01/11/2023]
Abstract
Despite the effectively unlimited capacity of visual long-term memory (VLTM), not all visual information we encounter is encoded into it. Traditionally, variability in encoding success has been ascribed to variability in the types and efficacy of an individual's cognitive processes during encoding. Accordingly, past studies have identified several neural correlates of variability in encoding success, namely frontal positivity, occipital alpha amplitude, and frontal theta amplitude, by contrasting electrophysiological signals recorded during successful versus failed encoding (i.e., subsequent memory). However, recent research has demonstrated that individuals remember and forget consistent sets of stimuli, elucidating stimulus-intrinsic factors (i.e., memorability) that determine the ease of memory encoding independent of individual-specific variability in encoding processes. The existence of memorability raises the possibility that canonical EEG correlates of subsequent memory reflect variability in stimulus-intrinsic factors rather than individual-specific encoding processes. To test this, we recorded EEG correlates of subsequent memory while participants encoded 600 images of real-world objects and assessed the unique contribution of individual-specific and stimulus-intrinsic factors to each EEG correlate. We found that frontal theta amplitude and occipital alpha amplitude were influenced only by individual-specific encoding success, whereas frontal positivity was influenced by both stimulus-intrinsic and individual-specific encoding success. Overall, our results offer novel interpretations of canonical EEG correlates of subsequent memory by demonstrating a dissociable impact of stimulus-intrinsic and individual-specific factors on memory encoding success.
Affiliation(s)
- Matthew Kolisnyk
  - University of Toronto Mississauga, Ontario, Canada; Western University, London, Ontario, Canada
- Keisuke Fukuda
  - University of Toronto Mississauga, Ontario, Canada; University of Toronto, Ontario, Canada
|
21
|
Jayakumar M, Balusu C, Aly M. Attentional fluctuations and the temporal organization of memory. Cognition 2023; 235:105408. [PMID: 36893523 DOI: 10.1016/j.cognition.2023.105408] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 02/03/2022] [Revised: 02/09/2023] [Accepted: 02/11/2023] [Indexed: 03/10/2023]
Abstract
Event boundaries and temporal context shape the organization of episodic memories. We hypothesized that attentional fluctuations during encoding serve as "events" that affect temporal context representations and recall organization. Individuals encoded trial-unique objects during a modified sustained attention task; memory was tested with free recall. Response time variability during encoding was used to characterize "in the zone" and "out of the zone" attentional states. We predicted that: 1) "in the zone" attentional states, relative to "out of the zone" states, should be more conducive to maintaining temporal context representations that can cue temporally organized recall; and 2) temporally distant "in the zone" states may enable more recall "leaps" across intervening items. We replicated several important findings in the sustained attention and memory fields, including more online errors during "out of the zone" than "in the zone" attentional states, and recall that was temporally structured. Yet, across four studies, we found no evidence for either of our main hypotheses. Recall was robustly temporally organized, and there was no difference in recall organization for items encoded "in the zone" versus "out of the zone". We conclude that temporal context serves as a strong scaffold for episodic memory, one that can support organized recall even for items encoded during relatively poor attentional states. We also highlight the numerous challenges in striking a balance between sustained attention tasks (long blocks of a repetitive task) and memory recall tasks (short lists of unique items), and describe strategies for researchers interested in uniting these two fields.
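In this literature, "in the zone" versus "out of the zone" states are commonly derived from a smoothed response-time variability time course followed by a median split. A simplified sketch of that procedure (the window size and split rule are illustrative, not necessarily the authors' exact parameters):

```python
def variance_time_course(rts, window=5):
    """For each trial, the mean absolute deviation of RT from the task mean,
    smoothed over a centered window of neighboring trials."""
    mean_rt = sum(rts) / len(rts)
    dev = [abs(rt - mean_rt) for rt in rts]
    half = window // 2
    vtc = []
    for i in range(len(rts)):
        lo, hi = max(0, i - half), min(len(rts), i + half + 1)
        vtc.append(sum(dev[lo:hi]) / (hi - lo))
    return vtc

def zone_labels(rts, window=5):
    """Median-split the variance time course: low RT variability = 'in the zone'."""
    vtc = variance_time_course(rts, window)
    med = sorted(vtc)[len(vtc) // 2]
    return ["in" if v < med else "out" for v in vtc]

# Hypothetical RTs (seconds): stable early trials, erratic later trials.
rts = [0.50, 0.51, 0.49, 0.50, 0.51, 0.90, 0.20, 1.00, 0.10, 0.95]
labels = zone_labels(rts)
print(labels)
```

Items encoded on "in" trials can then be compared with items encoded on "out" trials for subsequent recall organization, as in the studies above.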
Affiliation(s)
- Manasi Jayakumar
  - Department of Psychology, Columbia University, New York, NY 10027, United States of America
- Chinmayi Balusu
  - Department of Psychology, Columbia University, New York, NY 10027, United States of America
- Mariam Aly
  - Department of Psychology, Columbia University, New York, NY 10027, United States of America
|
22
|
Wolfe JM, Wick FA, Mishra M, DeGutis J, Lyu W. Spatial and temporal massive memory in humans. Curr Biol 2023; 33:405-410.e4. [PMID: 36693302 DOI: 10.1016/j.cub.2022.12.040] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 10/12/2022] [Revised: 11/30/2022] [Accepted: 12/15/2022] [Indexed: 01/24/2023]
Abstract
It is well known that humans have a massive memory for pictures and scenes.1,2,3,4 They can encode thousands of images with only a few seconds of exposure to each. In addition to this massive memory for "what" observers have seen, three experiments reported here show that observers have a "spatial massive memory" (SMM) for "where" stimuli have been seen and a "temporal massive memory" (TMM) for "when" stimuli have been seen. The positions in time and space of at least dozens of items can be reported with good, if not perfect, accuracy. Previous work has suggested that memory for stimulus location might be good,5,6 but there do not seem to have been concerted efforts to measure the extent of this memory. Moreover, in our method, observers recall where items were located rather than merely recognizing the correct location. This is notable because massive memory is sometimes thought to be limited to recognition tasks based on a sense of familiarity.
Affiliation(s)
- Jeremy M Wolfe
  - Visual Attention Lab, Department of Surgery, Brigham & Women's Hospital, 900 Commonwealth Avenue, Boston, MA 02115, USA; Departments of Ophthalmology & Radiology, Harvard Medical School, 25 Shattuck Street, Boston, MA 02115, USA; Department of Psychological & Brain Sciences, Boston University, Boston, MA, USA
- Maruti Mishra
  - Department of Psychology, University of Richmond, Richmond Hall, 114 UR Drive, Richmond, VA 23173, USA
- Joseph DeGutis
  - Boston Attention and Learning Laboratory, VA Boston Healthcare System, 150 S. Huntington Avenue, Boston, MA 02130, USA; Department of Psychiatry, Harvard Medical School, 940 Belmont Street, Brockton, MA 02301, USA
- Wanyi Lyu
  - Department of Biology, Centre for Vision Research, Vision Science to Application, York University, 4700 Keele Street, Toronto, ON M3J 1P3, Canada
|
23
|
Bonham LW, Geier EG, Sirkis DW, Leong JK, Ramos EM, Wang Q, Karydas A, Lee SE, Sturm VE, Sawyer RP, Friedberg A, Ichida JK, Gitler AD, Sugrue L, Cordingley M, Bee W, Weber E, Kramer JH, Rankin KP, Rosen HJ, Boxer AL, Seeley WW, Ravits J, Miller BL, Yokoyama JS. Radiogenomics of C9orf72 Expansion Carriers Reveals Global Transposable Element Derepression and Enables Prediction of Thalamic Atrophy and Clinical Impairment. J Neurosci 2023. [PMID: 36446586 DOI: 10.1101/2022.04.29.490104] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Indexed: 05/10/2023] Open
Abstract
Hexanucleotide repeat expansion (HRE) within C9orf72 is the most common genetic cause of frontotemporal dementia (FTD). Thalamic atrophy occurs in both sporadic and familial FTD but is thought to distinctly affect HRE carriers. Separately, emerging evidence suggests widespread derepression of transposable elements (TEs) in the brain in several neurodegenerative diseases, including C9orf72 HRE-mediated FTD (C9-FTD). Whether TE activation can be measured in peripheral blood, and how the reduction in peripheral C9orf72 expression observed in HRE carriers relates to atrophy and clinical impairment, remain unknown. We used FreeSurfer software to assess the effects of C9orf72 HRE and clinical diagnosis (n = 78 individuals, male and female) on atrophy of thalamic nuclei. We also generated a novel human whole-blood RNA-sequencing dataset to determine the relationships among peripheral C9orf72 expression, TE activation, thalamic atrophy, and clinical severity (n = 114 individuals, male and female). We confirmed global thalamic atrophy and reduced C9orf72 expression in HRE carriers. Moreover, we identified disproportionate atrophy of the right mediodorsal lateral nucleus in HRE carriers and showed that C9orf72 expression was associated with clinical severity, independent of thalamic atrophy. Strikingly, we found global peripheral activation of TEs, including the human endogenous LINE-1 element L1HS. L1HS levels were associated with atrophy of multiple pulvinar nuclei, a thalamic region implicated in C9-FTD. Integration of peripheral transcriptomic and neuroimaging data from human HRE carriers revealed atrophy of specific thalamic nuclei, demonstrated that C9orf72 levels relate to clinical severity, and identified marked derepression of TEs, including L1HS, which predicted atrophy of FTD-relevant thalamic nuclei.
SIGNIFICANCE STATEMENT: Pathogenic repeat expansion in C9orf72 is the most frequent genetic cause of FTD and amyotrophic lateral sclerosis (ALS; C9-FTD/ALS). The clinical, neuroimaging, and pathologic features of C9-FTD/ALS are well characterized, whereas the intersections of transcriptomic dysregulation and brain structure remain largely unexplored. Here, we used a novel radiogenomic approach to examine the relationship between peripheral blood transcriptomics and thalamic atrophy, a neuroimaging feature disproportionately impacted in C9-FTD/ALS. We confirmed reduction of C9orf72 in blood and found broad dysregulation of transposable elements (genetic elements typically repressed in the human genome) in symptomatic C9orf72 expansion carriers, which was associated with atrophy of thalamic nuclei relevant to FTD. C9orf72 expression was also associated with clinical severity, suggesting that peripheral C9orf72 levels capture disease-relevant information.
Affiliation(s)
- Luke W Bonham
  - Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
  - Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, California 94158
- Ethan G Geier
  - Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
  - Transposon Therapeutics, San Diego, California 92122
- Daniel W Sirkis
  - Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
- Josiah K Leong
  - Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
  - Department of Psychological Science, University of Arkansas, Fayetteville, Arkansas 72701
- Eliana Marisa Ramos
  - Department of Neurology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California 90095
- Qing Wang
  - Semel Institute for Neuroscience and Human Behavior, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California 90095
- Anna Karydas
  - Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
- Suzee E Lee
  - Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
- Virginia E Sturm
  - Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
  - Global Brain Health Institute, University of California, San Francisco, San Francisco, California 94158, and Trinity College Dublin, Dublin, Ireland
- Russell P Sawyer
  - Department of Neurology and Rehabilitation Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio 45267
- Adit Friedberg
  - Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
  - Global Brain Health Institute, University of California, San Francisco, San Francisco, California 94158, and Trinity College Dublin, Dublin, Ireland
- Justin K Ichida
  - Department of Stem Cell Biology and Regenerative Medicine, Keck School of Medicine of USC, University of Southern California, Los Angeles, California 90033
- Aaron D Gitler
  - Department of Genetics, Stanford University School of Medicine, Stanford, California 94305
- Leo Sugrue
  - Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, California 94158
- Walter Bee
  - Transposon Therapeutics, San Diego, California 92122
- Eckard Weber
  - Transposon Therapeutics, San Diego, California 92122
- Joel H Kramer
  - Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
  - Global Brain Health Institute, University of California, San Francisco, San Francisco, California 94158, and Trinity College Dublin, Dublin, Ireland
- Katherine P Rankin
  - Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
- Howard J Rosen
  - Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
  - Global Brain Health Institute, University of California, San Francisco, San Francisco, California 94158, and Trinity College Dublin, Dublin, Ireland
- Adam L Boxer
  - Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
- William W Seeley
  - Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
  - Department of Pathology, University of California, San Francisco, San Francisco, California 94158
- John Ravits
  - Department of Neurosciences, ALS Translational Research, University of California, San Diego, La Jolla, California 92093
- Bruce L Miller
  - Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
  - Global Brain Health Institute, University of California, San Francisco, San Francisco, California 94158, and Trinity College Dublin, Dublin, Ireland
- Jennifer S Yokoyama
  - Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
  - Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, California 94158
  - Global Brain Health Institute, University of California, San Francisco, San Francisco, California 94158, and Trinity College Dublin, Dublin, Ireland
|
24
|
Li X, Bainbridge WA, Bakkour A. Item memorability has no influence on value-based decisions. Sci Rep 2022; 12:22056. [PMID: 36543818 PMCID: PMC9772201 DOI: 10.1038/s41598-022-26333-5] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2022] [Accepted: 12/13/2022] [Indexed: 12/24/2022] Open
Abstract
While making decisions, we often rely on past experiences to guide our choices. However, not all experiences are remembered equally well, and some elements of an experience are more memorable than others. Thus, the intrinsic memorability of past experiences may bias our decisions. Here, we hypothesized that individuals would tend to choose more memorable options than less memorable ones. We investigated the effect of item memorability on choice in two experiments. First, using food images, we found that the same items were consistently remembered, and others consistently forgotten, across participants. However, contrary to our hypothesis, we found that participants did not prefer or choose the more memorable over the less memorable items when choice options were matched for the individuals' valuation of the items. Second, we replicated these findings in an alternate stimulus domain, using words that described the same food items. These findings suggest that stimulus memorability does not play a significant role in determining choice based on subjective value.
Collapse
Affiliation(s)
- Xinyue Li
- Department of Psychology, University of Chicago, 5848 S University Ave, Chicago, IL, 60637, USA
| | - Wilma A Bainbridge
- Department of Psychology, University of Chicago, 5848 S University Ave, Chicago, IL, 60637, USA
- Neuroscience Institute, University of Chicago, 5812 S Ellis Ave, Chicago, IL, 60637, USA
| | - Akram Bakkour
- Department of Psychology, University of Chicago, 5848 S University Ave, Chicago, IL, 60637, USA.
- Neuroscience Institute, University of Chicago, 5812 S Ellis Ave, Chicago, IL, 60637, USA.
| |
Collapse
|
25
|
Prasad D, Bainbridge WA. The Visual Mandela Effect as Evidence for Shared and Specific False Memories Across People. Psychol Sci 2022; 33:1971-1988. [PMID: 36219739 DOI: 10.1177/09567976221108944] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022] Open
Abstract
The Mandela effect is an Internet phenomenon describing shared and consistent false memories for specific icons in popular culture. The visual Mandela effect is a Mandela effect specific to visual icons (e.g., the Monopoly Man is falsely remembered as having a monocle) and has not yet been empirically quantified or tested. In Experiment 1 (N = 100 adults), we demonstrated that certain images from popular iconography elicit consistent, specific false memories. In Experiment 2 (N = 60 adults), using eye-tracking methods, we found no attentional or visual differences that drive this phenomenon. There is no clear difference in the natural visual experience of these images (Experiment 3), and these errors also occur spontaneously during recall (Experiment 4; N = 50 adults). These results demonstrate that there are certain images for which people consistently make the same false-memory error, despite the majority of visual experience being the canonical image.
Collapse
|
26
|
Wakeland-Hart CD, Cao SA, deBettencourt MT, Bainbridge WA, Rosenberg MD. Predicting visual memory across images and within individuals. Cognition 2022; 227:105201. [PMID: 35868240 DOI: 10.1016/j.cognition.2022.105201] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2021] [Revised: 06/01/2022] [Accepted: 06/06/2022] [Indexed: 11/19/2022]
Abstract
We only remember a fraction of what we see, including images that are highly memorable and those that we encounter during highly attentive states. However, most models of human memory disregard both an image's memorability and an individual's fluctuating attentional states. Here, we build the first model of memory synthesizing these two disparate factors to predict subsequent image recognition. We combine memorability scores of 1100 images (Experiment 1, n = 706) and attentional state indexed by response time on a continuous performance task (Experiments 2 and 3, n = 57 total). Image memorability and sustained attentional state explained significant variance in image memory, and a joint model of memory including both factors outperformed models including either factor alone. Furthermore, models including both factors successfully predicted memory in an out-of-sample group. Thus, building models based on individual- and image-specific factors allows for directed forecasting of our memories. SIGNIFICANCE STATEMENT: Although memory is a fundamental cognitive process, much of the time memory failures cannot be predicted until it is too late. However, in this study, we show that much of memory is surprisingly pre-determined, by factors shared across the population and factors highly specific to each individual. Specifically, we build a new multidimensional model that predicts memory based just on the images a person sees and when they see them. This research synthesizes findings from disparate domains ranging from computer vision to attention and memory into a predictive model. These findings have resounding implications for domains such as education, business, and marketing, where it is a top priority to predict (and even manipulate) what information people will remember.
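The joint two-factor model described above can be illustrated with a minimal sketch, assuming hypothetical data and a plain least-squares fit rather than the authors' actual implementation: memory for an image is predicted from the image's population memorability score plus the viewer's attentional state indexed by response time.

```python
# Minimal sketch (not the study's code): a joint linear model predicting
# image memory from (1) the image's population memorability score and
# (2) the viewer's attentional state, indexed by response time.

def fit_two_factor(memorability, attention, remembered):
    """Least-squares fit of remembered ~ b0 + b1*memorability + b2*attention."""
    n = len(remembered)
    X = [[1.0, m, a] for m, a in zip(memorability, attention)]
    # Normal equations (X'X) b = X'y, solved by Gauss-Jordan elimination.
    XtX = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(3)]
           for i in range(3)]
    Xty = [sum(X[k][i] * remembered[k] for k in range(n)) for i in range(3)]
    A = [row[:] + [rhs] for row, rhs in zip(XtX, Xty)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(3):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    return [A[i][3] / A[i][i] for i in range(3)]

def predict(coefs, memorability, attention):
    b0, b1, b2 = coefs
    return b0 + b1 * memorability + b2 * attention
```

A model of this shape captures the paper's central claim: both coefficients come out positive when memorable images and attentive states each improve memory, and the joint fit explains variance that either predictor alone misses.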
Collapse
Affiliation(s)
- Cheyenne D Wakeland-Hart
- Department of Psychology, University of Chicago, Chicago, IL, USA; Department of Psychology, Columbia University, New York, NY, USA
| | - Steven A Cao
- Department of Psychology, University of Chicago, Chicago, IL, USA
| | | | - Wilma A Bainbridge
- Department of Psychology, University of Chicago, Chicago, IL, USA; Neuroscience Institute, University of Chicago, Chicago, IL, USA
| | - Monica D Rosenberg
- Department of Psychology, University of Chicago, Chicago, IL, USA; Neuroscience Institute, University of Chicago, Chicago, IL, USA.
| |
Collapse
|
27
|
Kim H. Attention- versus significance-driven memory formation: Taxonomy, neural substrates, and meta-analyses. Neurosci Biobehav Rev 2022; 138:104685. [PMID: 35526692 DOI: 10.1016/j.neubiorev.2022.104685] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2021] [Revised: 04/29/2022] [Accepted: 05/01/2022] [Indexed: 11/26/2022]
Abstract
Functional neuroimaging data on episodic memory formation have expanded rapidly over the last 30 years, which raises the need for an integrative framework. This study proposes a taxonomy of episodic memory formation to address this need. At the broadest level, the taxonomy distinguishes between attention-driven vs. significance-driven memory formation. The three subtypes of attention-driven memory formation are selection-, fluctuation-, and level-related. The three subtypes of significance-driven memory formation are novelty-, emotion-, and reward-related. Meta-analytic data indicated that attention-driven memory formation affects the functioning of regions outside the medial temporal lobe (extra-MTL regions) more strongly than the MTL regions. In contrast, significance-driven memory formation affects the functioning of the MTL more strongly than the extra-MTL regions. The study also proposes a model in which attention has a stronger impact on the formation of neocortical traces than hippocampus/MTL traces, whereas significance has a stronger impact on the formation of hippocampus/MTL traces than neocortical traces. Overall, the taxonomy and model provide an integrative framework in which to place diverse encoding-related findings in proper perspective.
Collapse
Affiliation(s)
- Hongkeun Kim
- Department of Rehabilitation Psychology, Daegu University, Republic of Korea.
| |
Collapse
|
28
|
Folville A, Bahri MA, Delhaye E, Salmon E, Bastin C. Shared vivid remembering: age-related differences in across-participants similarity of neural representations during encoding and retrieval. NEUROPSYCHOLOGY, DEVELOPMENT, AND COGNITION. SECTION B, AGING, NEUROPSYCHOLOGY AND COGNITION 2022; 29:526-551. [PMID: 35168499 DOI: 10.1080/13825585.2022.2036683] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/27/2021] [Accepted: 01/27/2022] [Indexed: 06/14/2023]
Abstract
Recent advances in multivariate neuroimaging analyses have made it possible to examine the similarity of neural patterns of activation measured across participants, but whether such a measure is age-sensitive has not yet been investigated. Here, in the scanner, young and older participants viewed scene pictures associated with labels. At test, participants were presented with the labels and were asked to recollect the associated picture. We used Pattern Similarity Analyses, in which we compared the pattern of neural activation during the encoding or the remembering of each picture for one participant with the averaged pattern of activation across the remaining participants. Results revealed that across-participants neural similarity was higher in young than in older adults in distributed occipital, temporal, and parietal areas during encoding and retrieval. These findings demonstrate that an age-related reduction in the specificity of neural activation is also evident when the similarity of neural representations is examined across participants.
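The leave-one-out similarity measure described above can be sketched as follows: each participant's activation pattern for an item is correlated with the average pattern of all remaining participants. This is an illustrative toy version operating on plain vectors, not the study's fMRI pipeline.

```python
# Illustrative sketch of across-participants pattern similarity (not the
# study's analysis code): correlate each participant's activation pattern
# for an item with the average pattern of the remaining participants.
from statistics import mean

def pearson(x, y):
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def leave_one_out_similarity(patterns):
    """patterns: one activation vector per participant, for a single item.
    Returns each participant's correlation with the mean of the others."""
    sims = []
    for i, p in enumerate(patterns):
        rest = [patterns[j] for j in range(len(patterns)) if j != i]
        avg = [mean(vals) for vals in zip(*rest)]
        sims.append(pearson(p, avg))
    return sims
```

Under this scheme, a group whose members encode an item in similar neural patterns yields high leave-one-out correlations, which is the sense in which the measure indexes shared representational specificity.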
Collapse
Affiliation(s)
- Adrien Folville
- GIGA-CRC in Vivo Imaging, University of Liège, Liège, Belgium
- Department of Psychology, Psychology and Neuroscience of Cognition Research Unit, University of Liège, Liège, Belgium
| | | | - Emma Delhaye
- GIGA-CRC in Vivo Imaging, University of Liège, Liège, Belgium
- Department of Psychology, Psychology and Neuroscience of Cognition Research Unit, University of Liège, Liège, Belgium
- Faculdade de Psicologia, CICPSI, Universidade de Lisboa, Lisbon, Portugal
| | - Eric Salmon
- GIGA-CRC in Vivo Imaging, University of Liège, Liège, Belgium
- Department of Psychology, Psychology and Neuroscience of Cognition Research Unit, University of Liège, Liège, Belgium
| | - Christine Bastin
- GIGA-CRC in Vivo Imaging, University of Liège, Liège, Belgium
- Department of Psychology, Psychology and Neuroscience of Cognition Research Unit, University of Liège, Liège, Belgium
| |
Collapse
|
29
|
Can intentional forgetting reduce the cross-race effect in memory? Psychon Bull Rev 2022; 29:1387-1396. [PMID: 35377049 PMCID: PMC8978768 DOI: 10.3758/s13423-022-02080-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/04/2022] [Indexed: 11/08/2022]
Abstract
Across three studies, we utilized an item-method directed forgetting (DF) procedure with faces of different races to investigate the magnitude of intentional forgetting of own-race versus other-race faces. All three experiments shared the same procedure but differed in the number of faces presented. Participants were presented with own-race and other-race faces, each followed by a remember or forget memory instruction, and subsequently received a recognition test for all studied faces. We obtained a robust cross-race effect (CRE) but did not find a DF effect in Experiment 1. Experiments 2 and 3 used shorter study and test lists and obtained a significant DF effect along with significant CRE, but no interaction between face type and memory instruction. The results suggest that own-race and other-race faces are equally susceptible to DF. The results are discussed in terms of the theoretical explanations for CRE and their implications for DF.
Collapse
|
30
|
Kyle-Davidson C, Bors AG, Evans KK. Modulating human memory for complex scenes with artificially generated images. Sci Rep 2022; 12:1583. [PMID: 35091559 PMCID: PMC8799683 DOI: 10.1038/s41598-022-05623-y] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2021] [Accepted: 01/10/2022] [Indexed: 11/17/2022] Open
Abstract
Visual memory schemas (VMS) are two-dimensional memorability maps that capture the most memorable regions of a given scene, predicting human observers' memory for the same images with a high degree of consistency. These maps are hypothesized to correlate with a mental framework of knowledge that humans employ to encode visual memories. In this study, we develop a generative model, termed 'MEMGAN', constrained by extracted visual memory schemas, which generates completely new complex scene images that vary in their degree of predicted memorability. The generated populations of high- and low-memorability images are then evaluated for their memorability in a human observer experiment. We gather VMS maps for these generated images from participants in the memory experiment and compare them with the intended target VMS maps. Evaluating observers' memory performance through both VMS-defined memorability and hit rate, we find significantly better memory performance for the highly memorable generated images than for the poorly memorable ones. Implementing and testing a construct from cognitive science allows us to generate images whose memorability we can manipulate at will, and provides a tool for further study of mental schemas in humans.
Collapse
Affiliation(s)
| | - Adrian G Bors
- Department of Computer Science, University of York, York, YO10 5GH, UK
| | - Karla K Evans
- Department of Psychology, University of York, York, YO10 5DD, UK
| |
Collapse
|
31
|
Masarwa S, Kreichman O, Gilaie-Dotan S. Larger images are better remembered during naturalistic encoding. Proc Natl Acad Sci U S A 2022; 119:e2119614119. [PMID: 35046050 PMCID: PMC8794838 DOI: 10.1073/pnas.2119614119] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2021] [Accepted: 12/03/2021] [Indexed: 11/18/2022] Open
Abstract
We are constantly exposed to multiple visual scenes, and while freely viewing them without an intentional effort to memorize or encode them, only some are remembered. It has been suggested that image memory is influenced by multiple factors, such as depth of processing, familiarity, and visual category. However, this is typically investigated when people are instructed to perform a task (e.g., remember or make some judgment about the images), which may modulate processing at multiple levels and thus may not generalize to naturalistic visual behavior. Visual memory is assumed to rely on high-level visual perception, which shows a degree of size invariance, and is therefore not assumed to be highly dependent on image size. Here, we reasoned that during naturalistic vision, free of task-related modulations, bigger images stimulate more visual system processing resources (from retina to cortex) and would, therefore, be better remembered. In an extensive set of seven experiments, naïve participants (n = 182) were asked to freely view presented images (sized 3° to 24°) without any instructed encoding task. Afterward, they were given a surprise recognition test (midsized images, 50% already seen). Larger images were remembered better than smaller ones across all experiments (∼20% higher accuracy, or ∼1.5 times better). Memory was proportional to image size; faces were remembered best and outdoor scenes least. Results were robust even when controlling for image set, presentation order, screen resolution, image scaling at test, or the amount of information. While multiple factors affect image memory, our results suggest that low- to high-level processes may all contribute to image memory.
Collapse
Affiliation(s)
- Shaimaa Masarwa
- School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, Ramat Gan 5290002, Israel
- The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan 5290002, Israel
| | - Olga Kreichman
- School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, Ramat Gan 5290002, Israel
- The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan 5290002, Israel
| | - Sharon Gilaie-Dotan
- School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, Ramat Gan 5290002, Israel;
- The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan 5290002, Israel
- Institute of Cognitive Neuroscience, University College London, London WC1N 3AZ, United Kingdom
| |
Collapse
|
32
|
I remember it like it was yesterday: Age-related differences in the subjective experience of remembering. Psychon Bull Rev 2021; 29:1223-1245. [PMID: 34918271 DOI: 10.3758/s13423-021-02048-y] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/24/2021] [Indexed: 11/08/2022]
Abstract
It has been frequently described that older adults subjectively report the vividness of their memories as being as high as, or even higher than, that of young adults, despite poorer objective memory performance. Here, we review studies that examined age-related differences in the subjective experience of memory vividness. By examining vividness calibration and resolution, studies using different types of approaches converge to suggest that older adults overestimate the intensity of their vividness ratings relative to young adults, and that they rely on retrieved memory details to a lesser extent to judge vividness. We discuss potential mechanisms underlying these observations. Inflation of memory vividness relative to the richness of memory content may stem from age differences in vividness criterion or scale interpretation, as well as from psychosocial factors. The reduced reliance on episodic memory details in older adults may stem from age-related differences in how they monitor these details to make their vividness ratings. Considered together, these findings emphasize the importance of examining age differences in memory vividness using different analytical methods, and they provide valuable evidence that the subjective experience of remembering is more than the reactivation of memory content. In this vein, we recommend that future studies explore the links between memory vividness and other subjective memory scales (e.g., ratings of details or memory confidence) in healthy aging and/or other populations, as this could serve as a window to better characterize the cognitive processes that underpin the subjective assessment of the quality of recollected events.
Collapse
|
33
|
Maxcey AM, Joykutty Z, Megla E. Tracking induced forgetting across both strong and weak memory representations to test competing theories of forgetting. Sci Rep 2021; 11:23028. [PMID: 34845275 PMCID: PMC8629985 DOI: 10.1038/s41598-021-02347-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2021] [Accepted: 11/15/2021] [Indexed: 11/17/2022] Open
Abstract
Here we employ a novel analysis to address the question: what causes induced forgetting of pictures? We use baseline memorability as a measure of initial memory strength to ask whether induced forgetting is due to (1) recognition practice damaging the association between the memory representation and the category cue used to activate the representation, (2) the updating of a memory trace by incorporating information about a memory probe presented during recognition practice to the stored trace, (3) inhibitory mechanisms used to resolve the conflict created when correctly selecting the practiced item activates competing exemplars, (4) a global matching model in which repeating some items will hurt memory for other items, or (5) falling into the zone of destruction, where a moderate amount of activation leads to the highest degree of forgetting. None of the accounts of forgetting tested here can comprehensively account for both the novel analyses reported here and previous data using the induced forgetting paradigm. We discuss aspects of forgetting theories that are consistent with the novel analyses and existing data, a potential solution for existing models, proposals for future directions, and considerations when incorporating memorability into models of memory.
Collapse
Affiliation(s)
- Ashleigh M Maxcey
- Department of Psychology, Vanderbilt University, Wilson Hall, 111 21st Ave S, Nashville, TN, 37212, USA.
| | - Zara Joykutty
- Department of Psychology, Vanderbilt University, Wilson Hall, 111 21st Ave S, Nashville, TN, 37212, USA
| | - Emma Megla
- Department of Psychology, University of Chicago, Chicago, IL, USA
| |
Collapse
|
34
|
Praveen A, Noorwali A, Samiayya D, Zubair Khan M, Vincent P M DR, Bashir AK, Alagupandi V. ResMem-Net: memory based deep CNN for image memorability estimation. PeerJ Comput Sci 2021; 7:e767. [PMID: 34825056 PMCID: PMC8594589 DOI: 10.7717/peerj-cs.767] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2021] [Accepted: 10/12/2021] [Indexed: 06/13/2023]
Abstract
Image memorability is a very hard problem in image processing due to its subjective nature. However, with the introduction of deep learning and the wide availability of data and GPUs, great strides have been made in predicting the memorability of an image. In this paper, we propose a novel deep learning architecture called ResMem-Net, a hybrid of LSTM and CNN that uses information from the hidden layers of the CNN to compute the memorability score of an image. The intermediate layers are important for predicting the output because they contain information about the intrinsic properties of the image. The proposed architecture automatically learns visual emotions and saliency, as shown by the heatmaps generated using the GradRAM technique. We have also used the heatmaps and results to analyze and answer one of the most important questions in image memorability: "What makes an image memorable?". The model is trained and evaluated using the publicly available Large-scale Image Memorability dataset (LaMem) from MIT. The results show that the model achieves a rank correlation of 0.679 and a mean squared error of 0.011, which is better than the current state-of-the-art models and close to human consistency (ρ = 0.68). The proposed architecture also has significantly fewer parameters than the state-of-the-art architecture, making it memory efficient and suitable for production.
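The headline metric above, rank correlation, is Spearman's ρ: the Pearson correlation of the rank-transformed predicted and observed memorability scores. A generic stdlib sketch of the metric (not the paper's evaluation code):

```python
# Generic sketch: evaluating a memorability model with Spearman rank
# correlation, i.e., Pearson correlation computed on the ranks.

def _ranks(values):
    # 1-based ranks, with ties assigned the average of their positions.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(pred, obs):
    rx, ry = _ranks(pred), _ranks(obs)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

Because only ranks matter, the metric rewards getting the ordering of images by memorability right, regardless of whether the predicted scores are calibrated to the observed hit rates.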
Collapse
Affiliation(s)
| | | | - Duraimurugan Samiayya
- Department of Information Technology, St. Joseph’s College of Engineering, Chennai, India
| | | | - Durai Raj Vincent P M
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, Tamilnadu, India
| | | | | |
Collapse
|
35
|
Santacà M, Dadda M, Miletto Petrazzini ME, Bisazza A. Stimulus characteristics, learning bias and visual discrimination in zebrafish (Danio rerio). Behav Processes 2021; 192:104499. [PMID: 34499984 DOI: 10.1016/j.beproc.2021.104499] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2021] [Revised: 07/31/2021] [Accepted: 09/03/2021] [Indexed: 12/27/2022]
Abstract
Zebrafish is an emerging model in the study of brain function; however, knowledge about its behaviour and cognition is incomplete. Previous studies suggest this species has limited ability in visual learning tasks compared to other teleosts. In this study, we systematically examined zebrafish's ability to learn to discriminate colour, shape, size, and orientation of figures using an appetitive conditioning paradigm. Contrary to earlier reports, the zebrafish successfully completed all tasks. Not all discriminations were learned with the same speed and accuracy. Subjects discriminated the size of objects better than their shape or colour. In all three tasks, they were faster and more accurate when required to discriminate between outlined figures than between filled figures. With stimuli consisting of outlines, the learning performance of zebrafish was comparable to that observed in higher vertebrates. Zebrafish easily learned a horizontal-vertical discrimination task, but like many other vertebrates, they had great difficulty discriminating a figure from its mirror image. Performance was more accurate for subjects reinforced on one stimulus (green over red, triangle over circle, large over small). Unexpectedly, these stimulus biases occurred only when zebrafish were tested with filled figures, suggesting some causal relationship between stimulus preference, learning bias and performance.
Collapse
Affiliation(s)
- Maria Santacà
- Department of General Psychology, University of Padova, Padova, Italy.
| | - Marco Dadda
- Department of General Psychology, University of Padova, Padova, Italy
| | | | - Angelo Bisazza
- Department of General Psychology, University of Padova, Padova, Italy; Padua Neuroscience Center, University of Padova, Padova, Italy
| |
Collapse
|
36
|
Grande X, Berron D, Maass A, Bainbridge WA, Düzel E. Content-specific vulnerability of recent episodic memories in Alzheimer's disease. Neuropsychologia 2021; 160:107976. [PMID: 34314781 PMCID: PMC8434425 DOI: 10.1016/j.neuropsychologia.2021.107976] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2021] [Revised: 07/21/2021] [Accepted: 07/22/2021] [Indexed: 11/21/2022]
Abstract
Endel Tulving's episodic memory framework emphasizes the multifaceted re-experiencing of personal events. Indeed, decades of research focused on the experiential nature of episodic memories, usually treating recent episodic memory as a coherent experiential quality. However, recent insights into the functional architecture of the medial temporal lobe show that different types of mnemonic information are segregated into distinct neural pathways in brain circuits empirically associated with episodic memory. Moreover, recent memories do not fade as a whole under conditions of progressive neurodegeneration in these brain circuits, notably in Alzheimer's disease. Instead, certain memory content seems particularly vulnerable from the moment of encoding, while other content can remain memorable consistently across individuals and contexts. We propose that these observations are related to the content-specific functional architecture of the medial temporal lobe and, consequently, to a content-specific impairment of memory at different stages of the neurodegeneration. To develop Endel Tulving's inspirational legacy further and to advance our understanding of how memory function is affected by neurodegenerative conditions such as Alzheimer's disease, we postulate that it is compelling to focus on the representational content of recent episodic memories. In sum: the functional anatomy of episodic memory segregates different memory content; Alzheimer's disease may cause content-specific loss of recent memories; content-specific memorability across individuals changes with Alzheimer's disease; and content-specific assessment could provide new insights into episodic memory in health and disease.
Collapse
Affiliation(s)
- Xenia Grande
- German Center for Neurodegenerative Diseases, Magdeburg, Germany; Institute of Cognitive Neurology and Dementia Research, Otto von Guericke University Magdeburg, Germany.
| | - David Berron
- German Center for Neurodegenerative Diseases, Magdeburg, Germany; Clinical Memory Research Unit, Department of Clinical Sciences Malmö, Lund University, Lund, Sweden
| | - Anne Maass
- German Center for Neurodegenerative Diseases, Magdeburg, Germany
| | | | - Emrah Düzel
- German Center for Neurodegenerative Diseases, Magdeburg, Germany; Institute of Cognitive Neurology and Dementia Research, Otto von Guericke University Magdeburg, Germany; Institute of Cognitive Neuroscience, University College London, United Kingdom.
| |
Collapse
|
37
|
Eye-movements reveal semantic interference effects during the encoding of naturalistic scenes in long-term memory. Psychon Bull Rev 2021; 28:1601-1614. [PMID: 34009623 DOI: 10.3758/s13423-021-01920-1] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/22/2021] [Indexed: 11/08/2022]
Abstract
Similarity-based semantic interference (SI) hinders memory recognition. Within long-term visual memory paradigms, the more scenes (or objects) from the same semantic category are viewed, the harder it is to recognize each individual instance. A growing body of evidence shows that overt attention is intimately linked to memory. However, it is yet to be understood whether SI mediates overt attention during scene encoding, and so explains its detrimental impact on recognition memory. In the current experiment, participants viewed 372 photographs belonging to different semantic categories (e.g., a kitchen) with different frequencies (4, 20, 40, or 60 images), while being eye-tracked. After 10 minutes, they were presented with the same 372 photographs plus 372 new photographs and asked whether they recognized (or not) each photo (i.e., old/new paradigm). We found that the more the SI, the poorer the recognition performance, especially for old scenes for which memory representations existed. Scenes more widely explored were better recognized, but with increasing SI, participants focused on more local regions of the scene in search of potentially distinctive details. Attending to the centre of the display, or to scene regions rich in low-level saliency, was detrimental to recognition accuracy, and as SI increased participants were more likely to rely on visual saliency. The complexity of maintaining faithful memory representations under increasing SI also manifested in longer fixation durations; indeed, more successful encoding was associated with shorter fixations. Our study highlights the interdependence between attention and memory during high-level processing of semantic information.
|
38
|
Scotti PS, Maxcey AM. What do laboratory-forgetting paradigms tell us about use-inspired forgetting? COGNITIVE RESEARCH-PRINCIPLES AND IMPLICATIONS 2021; 6:37. [PMID: 33961151 PMCID: PMC8102837 DOI: 10.1186/s41235-021-00300-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/04/2020] [Accepted: 04/27/2021] [Indexed: 11/10/2022]
Abstract
Directed forgetting is a laboratory task in which subjects are told to remember some information and forget other information. Participants can thus exert intentional control over which information they retain in memory and which they forget; forgetting in this task appears to be mediated by executive control mechanisms that suppress unwanted information. Recognition-induced forgetting is another laboratory task in which subjects forget information: recognizing a target memory induces the forgetting of related items stored in memory. Rather than resulting from volitional control, recognition-induced forgetting is an incidental by-product of activating items in memory. Here we asked whether intentional directed forgetting or unintentional recognition-induced forgetting is the more robust forgetting effect. While forgetting magnitudes were correlated when the same subjects completed both tasks, the magnitude of recognition-induced forgetting was larger than that of directed forgetting. These results point to practical differences in forgetting outcomes between two commonly used laboratory-forgetting paradigms.
Affiliation(s)
- Paul S Scotti
- Department of Psychology, The Ohio State University, Columbus, OH, USA
| | - Ashleigh M Maxcey
- Department of Psychology, Vanderbilt University, Wilson Hall, 111 21st Ave S, Nashville, TN, 37212, USA.
| |
|
39
|
Abstract
Many photographs of real-life scenes are very consistently remembered or forgotten by most people, making these images intrinsically memorable or forgettable. Although machine vision algorithms can predict a given image's memorability very well, nothing is known about the subjective quality of these memories: are memorable images recognized based on strong feelings of familiarity or on recollection of episodic details? We tested people's recognition memory for memorable and forgettable scenes selected from image memorability databases, which contain memorability scores for each image based on large-scale recognition memory experiments. Specifically, we tested the effect of intrinsic memorability on recollection and familiarity using cognitive computational models based on receiver operating characteristics (ROCs; Experiments 1 and 2) and on remember/know (R/K) judgments (Experiment 2). The ROC data of Experiment 1 indicated that image memorability boosted memory strength, but showed no specific effect on recollection or familiarity. By contrast, ROC data from Experiment 2, which was designed to facilitate encoding and, in turn, recollection, showed evidence for a specific effect of image memorability on recollection. Moreover, R/K judgments showed that, on average, memorability boosts recollection rather than familiarity. However, we also found a large degree of variability in these judgments across individual images: some images achieved high recognition rates by exclusively boosting familiarity rather than recollection. Together, these results show that current machine vision algorithms, which can predict an image's intrinsic memorability in terms of hit rates, fall short of describing the subjective quality of human memories.
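The ROC-based analysis described in this abstract is typically carried out with a dual-process signal-detection model, in which recollection contributes an all-or-none hit component and familiarity contributes a continuous, equal-variance signal-detection component. A minimal sketch of that standard model follows; the function name and parameter values are illustrative assumptions, not taken from the study itself:

```python
from statistics import NormalDist

def dual_process_roc(recollection, d_prime, criteria):
    """Predicted (false-alarm, hit) ROC points under a dual-process model:
    hits = all-or-none recollection plus familiarity-driven detection,
    false alarms = familiarity-driven detection only."""
    phi = NormalDist().cdf  # standard normal CDF
    points = []
    for c in criteria:
        hit = recollection + (1 - recollection) * phi(d_prime / 2 - c)
        false_alarm = phi(-d_prime / 2 - c)
        points.append((false_alarm, hit))
    return points

# Sweeping the response criterion from liberal to conservative
# traces out the predicted ROC curve (illustrative parameters):
roc = dual_process_roc(recollection=0.3, d_prime=1.0, criteria=[-1, 0, 1])
```

A nonzero recollection parameter raises the ROC's left-side intercept while leaving its shape familiarity-driven elsewhere, which is the asymmetry such studies fit for when separating recollection from familiarity.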
|
40
|
Rust NC, Mehrpour V. Understanding Image Memorability. Trends Cogn Sci 2020; 24:557-568. [DOI: 10.1016/j.tics.2020.04.001] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2020] [Revised: 04/10/2020] [Accepted: 04/11/2020] [Indexed: 11/29/2022]
|