1. Andermane N, Moccia A, Zhai C, Henderson LM, Horner AJ. The holistic forgetting of events and the (sometimes) fragmented forgetting of objects. Cognition 2025;255:106017. PMID: 39615225. DOI: 10.1016/j.cognition.2024.106017.
Abstract
Episodic events are typically retrieved and forgotten holistically. If you recall one element (e.g., a person), you are more likely to recall other elements from the same event (e.g., the location), a pattern that is retained over time in the presence of forgetting. In contrast, representations of individual items, such as objects, may be less coherently bound, such that object features are forgotten at different rates and retrieval dependency decreases across delay. To test the theoretical prediction that forgetting qualitatively differs across levels in a representational hierarchy, we investigated the potential dissociation between event and item memory across five experiments. Participants encoded three-element events comprising images of famous people, locations, and objects. We measured retrieval accuracy and the dependency between the retrieval of event associations and object features, immediately after encoding and after various delays (5 h to 3 days). Across experiments, retrieval accuracy decreased for both events and objects over time, revealing forgetting. Retrieval dependency for event elements (i.e., people, locations, and objects) did not change over time, suggesting the holistic forgetting of events. Retrieval dependency for object features (i.e., state and colour) was more variable. Depending on encoding and delay conditions across the experiments, we observed both fragmentation and holistic forgetting of object features. Our results suggest that event representations remain coherent over time, whereas object representations can, but do not always, fragment. This provides support for our representational hierarchy framework of forgetting; however, there are (still to be determined) boundary conditions in relation to the fragmentation of object representations.
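Retrieval dependency of the kind measured in this abstract is commonly quantified by comparing the observed proportion of joint retrieval outcomes (both elements retrieved, or both forgotten) against the proportion expected if the two retrievals were statistically independent. The sketch below illustrates that contingency-style logic in Python; it is a minimal, hypothetical illustration with invented data and a simplified analysis, not the authors' actual pipeline.

```python
# Illustrative sketch of a retrieval-dependency measure: observed joint
# retrieval minus the joint rate expected under independence.
# All data below are hypothetical.

def retrieval_dependency(trials):
    """trials: list of (a_correct, b_correct) booleans, one per event,
    e.g. whether a location and an object cued by the same person were
    each retrieved correctly."""
    n = len(trials)
    p_a = sum(a for a, _ in trials) / n                 # P(A retrieved)
    p_b = sum(b for _, b in trials) / n                 # P(B retrieved)
    p_joint = sum(1 for a, b in trials if a == b) / n   # P(both correct or both incorrect)
    p_independent = p_a * p_b + (1 - p_a) * (1 - p_b)   # joint rate expected under independence
    return p_joint - p_independent                      # > 0 suggests dependency

# Hypothetical data for 8 events: retrieval of the two elements co-occurs.
data = [(True, True), (True, True), (True, True), (False, False),
        (False, False), (True, False), (False, True), (True, True)]
print(round(retrieval_dependency(data), 3))  # 0.219
```

A value near zero after a delay would indicate that elements are forgotten independently; the abstract's finding of stable dependency over time corresponds to this quantity staying positive.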
Affiliation(s)
- Nora Andermane
- Department of Psychology, University of York, UK; School of Psychology, University of Sussex, UK
- Chong Zhai
- Department of Psychology, University of York, UK
- Lisa M Henderson
- Department of Psychology, University of York, UK; York Biomedical Research Institute, University of York, UK
- Aidan J Horner
- Department of Psychology, University of York, UK; York Biomedical Research Institute, University of York, UK
2. Kyle-Davidson C, Solis O, Robinson S, Tan RTW, Evans KK. Scene complexity and the detail trace of human long-term visual memory. Vision Res 2024;227:108525. PMID: 39644707. DOI: 10.1016/j.visres.2024.108525.
Abstract
Humans can remember a vast amount of scene images, an ability often attributed to encoding only low-fidelity gist traces of a scene. Yet studies show that a surprising amount of detail is retained for each scene image, allowing scenes to be distinguished from highly similar in-category distractors. The gist trace for images can be captured relatively easily through both computational and behavioural techniques, but capturing detail is much harder. While detail can be broadly estimated at the categorical level (e.g., man-made scenes are more complex than natural ones), there is a lack of both ground-truth detail data at the sample level and a way to operationalise it for measurement purposes. Here, through three different studies, we investigate whether the perceptual complexity of scenes can serve as a suitable analogue for the detail present in a scene, and hence whether we can use complexity to determine the relationship between scene detail and visual long-term memory for scenes. First, we examine this relationship directly using the VISCHEMA datasets to determine whether the perceived complexity of a scene interacts with memorability, finding a significant positive correlation between complexity and memory, in contrast to the U-shaped relation often hypothesised in the literature. In the second study, we model complexity via artificial means and find that even predicted measures of complexity correlate with the overall ground-truth memorability of a scene, indicating that complexity and memorability cannot be easily disentangled. Finally, we investigate how cognitive load affects the influence of scene complexity on image memorability. Together, the findings indicate that complexity and memorability do vary non-linearly, but generally only at the extremes of the image complexity ranges. The effect of complexity on memory closely mirrors previous findings that detail enhances memory, and suggests that complexity is a suitable analogue for detail in visual long-term scene memory.
Affiliation(s)
- Oscar Solis
- University of York, Dept. of Psychology, York, YO10 5NA, UK
- Karla K Evans
- University of York, Dept. of Psychology, York, YO10 5NA, UK
3. Persaud K, Hemmer P. The influence of functional components of natural scenes on episodic memory. Sci Rep 2024;14:30313. PMID: 39639108. PMCID: PMC11621360. DOI: 10.1038/s41598-024-81900-2.
Abstract
Prior expectation for the structure of natural scenes is perhaps the most influential contributor to episodic memory for objects in scenes. While the influence of functional components of natural scenes on scene perception and visual search has been well studied, far less is known about the independent contributions of these components to episodic memory. In this investigation, we systematically removed three functional components of natural scenes: global-background, local spatial, and local associative information, to evaluate their impact on episodic memory. Results revealed that [partially] removing the global-background negatively impacted recall accuracy following short encoding times but had relatively little impact on memory after longer times. In contrast, systematically removing local spatial and associative relationships of scene objects negatively impacted recall accuracy following short and longer encoding times. These findings suggest that scene background, object spatial arrangements, and object relationships facilitate not only scene perception and object recognition, but also episodic memory. Interestingly, the impact of these components depends on how much encoding time is available to store information in episodic memory. This work has important implications for understanding how the inherent structure and function of the natural world interacts with memory and cognition in naturalistic contexts.
Affiliation(s)
- Kimele Persaud
- Department of Psychology, Rutgers University, Newark, USA
- Pernille Hemmer
- Department of Psychology, Rutgers University, New Brunswick, USA
4. Sefranek M, Zokaei N, Draschkow D, Nobre AC. Comparing the impact of contextual associations and statistical regularities in visual search and attention orienting. PLoS One 2024;19:e0302751. PMID: 39570820. PMCID: PMC11581329. DOI: 10.1371/journal.pone.0302751.
Abstract
During visual search, we quickly learn to attend to an object's likely location. Research has shown that this process can be guided by learning target locations based on consistent spatial contextual associations or other statistical regularities. Here, we tested how different types of associations guide learning and the utilisation of established memories for different purposes. Participants learned contextual associations or rule-like statistical regularities that predicted target locations within different scenes. The consequences of this learning for subsequent performance were then evaluated on attention-orienting and memory-recall tasks. Participants demonstrated facilitated attention-orienting and recall performance based on both contextual associations and statistical regularities. Contextual associations facilitated attention orienting with a different time course compared to statistical regularities. Benefits to memory-recall performance depended on the alignment between the learned association or regularity and the recall demands. The distinct patterns of behavioural facilitation by contextual associations and statistical regularities show how different forms of long-term memory may influence neural information processing through different modulatory mechanisms.
Affiliation(s)
- Marcus Sefranek
- Brain and Cognition Lab, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, United Kingdom
- Nahid Zokaei
- Brain and Cognition Lab, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, United Kingdom
- Dejan Draschkow
- Brain and Cognition Lab, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, United Kingdom
- Anna C. Nobre
- Brain and Cognition Lab, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, United Kingdom
- Wu Tsai Institute, Yale University, New Haven, CT, United States of America
- Department of Psychology, Yale University, New Haven, CT, United States of America
5. Zohar E, Kozak S, Abeles D, Shahar M, Censor N. Convolutional neural networks uncover the dynamics of human visual memory representations over time. Cereb Cortex 2024;34:bhae447. PMID: 39530747. DOI: 10.1093/cercor/bhae447.
Abstract
The ability to accurately retrieve visual details of past events is a fundamental cognitive function relevant for daily life. While a visual stimulus contains an abundance of information, only some of it is later encoded into long-term memory representations. However, an ongoing challenge has been to isolate memory representations that integrate various visual features and uncover their dynamics over time. To address this question, we leveraged a novel combination of empirical and computational frameworks based on the hierarchical structure of convolutional neural networks and their correspondence to human visual processing. This enabled us to reveal the contribution of different levels of visual representations to memory strength and their dynamics over time. Visual memory strength was measured with distractors selected based on their shared similarity to the target memory along low or high layers of the convolutional neural network hierarchy. The results show that visual working memory relies similarly on low- and high-level visual representations. However, already after a few minutes and on to the next day, visual memory relies more strongly on high-level visual representations. These findings suggest that visual representations transform from a distributed to a stronger high-level conceptual representation, providing novel insights into the dynamics of visual memory over time.
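The distractor-selection idea described in the abstract (matching distractors to the target along low, perceptual, versus high, conceptual, CNN layers) can be sketched as a nearest-neighbour search over layer-wise feature vectors. The snippet below is a toy illustration: the feature vectors are made up and stand in for real network activations, and the function name `most_similar_distractor` is hypothetical, not the authors' code.

```python
# Toy sketch: pick the distractor most similar to a target along a
# chosen CNN layer, using cosine similarity over feature vectors.
# Vectors here are hypothetical stand-ins for real CNN activations.
import math

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(y * y for y in v))
    return dot / (norm_u * norm_v)

def most_similar_distractor(target_feats, candidates, layer):
    """candidates: {name: {layer: feature_vector}}; returns the candidate
    maximizing cosine similarity to the target at the given layer."""
    return max(candidates,
               key=lambda name: cosine(target_feats[layer], candidates[name][layer]))

# One target and two candidate distractors, each with low- and high-layer features.
target = {"low": [1.0, 0.0, 0.2], "high": [0.1, 0.9, 0.4]}
candidates = {
    "distractor_a": {"low": [0.9, 0.1, 0.3], "high": [0.8, 0.1, 0.1]},
    "distractor_b": {"low": [0.0, 1.0, 0.9], "high": [0.2, 0.8, 0.5]},
}
print(most_similar_distractor(target, candidates, "low"))   # perceptually matched lure
print(most_similar_distractor(target, candidates, "high"))  # conceptually matched lure
```

In the study's logic, memory tested against distractors chosen at a given layer probes how much the stored representation relies on that level of the visual hierarchy.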
Affiliation(s)
- Eden Zohar
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv 6997801, Israel
- Stas Kozak
- School of Psychological Sciences, Tel Aviv University, Tel Aviv 6997801, Israel
- Dekel Abeles
- School of Psychological Sciences, Tel Aviv University, Tel Aviv 6997801, Israel
- Moni Shahar
- The Center for Artificial Intelligence and Data Science (TAD), Tel Aviv University, Tel Aviv 6997801, Israel
- Nitzan Censor
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv 6997801, Israel
- School of Psychological Sciences, Tel Aviv University, Tel Aviv 6997801, Israel
6. Blalock LD, Weichman K, VanWormer LA. Conceptual masking disrupts change-detection performance. Mem Cognit 2024;52:1900-1914. PMID: 39313588. DOI: 10.3758/s13421-024-01639-z.
Abstract
The present study investigated the effects of long-term knowledge on backward masking interference in visual working memory (VWM) by varying the similarity of mask stimuli along categorical dimensions. To-be-remembered items and masks were taken from categories controlled for perceptual distinctiveness and distinctiveness in kinds (e.g., there are many kinds of cars and few kinds of coffee mugs). Participants completed a change-detection task in which the memory array consisted of exemplars from either a similar or distinctive category, followed by a mask array of items from the same category (conceptually similar versus conceptually distinct categories), a different category, or no mask. The results over two experiments showed greater interference from conceptually similar masks as compared with the other conditions across stimulus onset asynchrony (SOA) conditions, suggesting masking with conceptually similar categories leads to more interference even when masks are shown well after the stimulus. These results have important implications for both the nature and time course of long-term conceptual knowledge influencing VWM, particularly when using complex real-world objects.
Affiliation(s)
- Lisa Durrance Blalock
- Department of Psychology, University of West Florida, 11000 University Parkway, Pensacola, FL, 32514, USA
- Kyle Weichman
- Department of Psychology, University of West Florida, 11000 University Parkway, Pensacola, FL, 32514, USA
- Lisa A VanWormer
- School of Psychology, Fielding Graduate University, Santa Barbara, CA, 93105, USA
7. Saltzmann SM, Eich B, Moen KC, Beck MR. Activated long-term memory and visual working memory during hybrid visual search: Effects on target memory search and distractor memory. Mem Cognit 2024;52:2156-2171. PMID: 38528298. DOI: 10.3758/s13421-024-01556-1.
Abstract
In hybrid visual search, observers must maintain multiple target templates and subsequently search for any one of those targets. If the number of potential target templates exceeds visual working memory (VWM) capacity, then the target templates are assumed to be maintained in activated long-term memory (aLTM). Observers must search the array for potential targets (visual search), as well as search through memory (target memory search). Increasing the target memory set size reduces accuracy, increases search response times (RT), and increases dwell time on distractors. However, the extent of observers' memory for distractors during hybrid search is largely unknown. In the current study, the impact of hybrid search on target memory search (measured by dwell time on distractors, false alarms, and misses) and distractor memory (measured by distractor revisits and recognition memory of recently viewed distractors) was measured. Specifically, we aimed to better understand how changes in behavior during hybrid search impacts distractor memory. Increased target memory set size led to an increase in search RTs, distractor dwell times, false alarms, and target identification misses. Increasing target memory set size increased revisits to distractors, suggesting impaired distractor location memory, but had no effect on a two-alternative forced-choice (2AFC) distractor recognition memory test presented during the search trial. The results from the current study suggest a lack of interference between memory stores maintaining target template representations (aLTM) and distractor information (VWM). Loading aLTM with more target templates does not impact VWM for distracting information.
Affiliation(s)
- Stephanie M Saltzmann
- Department of Psychology, Louisiana State University, 236 Audubon Hall, Baton Rouge, LA, 70803, USA
- Brandon Eich
- Department of Psychology, Louisiana State University, 236 Audubon Hall, Baton Rouge, LA, 70803, USA
- Katherine C Moen
- Department of Psychology, Louisiana State University, 236 Audubon Hall, Baton Rouge, LA, 70803, USA
- Department of Psychology, University of Nebraska at Kearney, 2504 9th Ave, Kearney, NE, 68849, USA
- Melissa R Beck
- Department of Psychology, Louisiana State University, 236 Audubon Hall, Baton Rouge, LA, 70803, USA
8. Sahar T, Gronau N, Makovski T. Semantic meaning enhances feature-binding but not quantity or precision of locations in visual working memory. Mem Cognit 2024;52:2107-2118. PMID: 39080186. PMCID: PMC11588879. DOI: 10.3758/s13421-024-01611-x.
Abstract
Recent studies showed that real-world items are better remembered in visual working memory (VWM) than visually similar stimuli that are stripped of their semantic meaning. However, the exact nature of this advantage remains unclear. We used meaningful and meaningless stimuli in a location-reproduction VWM task. Employing a mixture-modeling analysis, we examined whether semantic meaning enables more item locations to be remembered, whether it improves the precision of the locations stored in memory, or whether it improves binding between the specific items and their locations. Participants were presented with streams of four (Experiments 1 & 2) or six (Experiment 3) real-world items, or their scrambled, meaningless counterparts. Each item was presented at a unique location, and the task was to reproduce one item's location. Overall, location memory was consistently better for real-world items compared with their scrambled counterparts. Furthermore, the results revealed that participants were less likely to make swap errors for the meaningful items, but there was no effect of conceptual meaning on the guess rate or the precision of the report. In line with previous findings, these results indicate that conceptual meaning enhances VWM for arbitrary stimulus properties such as item location, and this improvement is primarily due to a more efficient identity-location binding rather than an increase in the quantity or quality (precision) of the locations held in memory.
Affiliation(s)
- Tomer Sahar
- Department of Psychology and Education, The Open University of Israel, Ra'anana, Israel
- School of Psychological Sciences & The Institute of Information Processing and Decision Making, University of Haifa, Haifa, Israel
- Nurit Gronau
- Department of Psychology and Education, The Open University of Israel, Ra'anana, Israel
- Tal Makovski
- Department of Psychology and Education, The Open University of Israel, Ra'anana, Israel
9. Singletary NM, Horga G, Gottlieb J. A neural code supporting prospective probabilistic reasoning for instrumental information demand in humans. Commun Biol 2024;7:1242. PMID: 39358516. PMCID: PMC11447085. DOI: 10.1038/s42003-024-06927-7.
Abstract
When making adaptive decisions, we actively demand information, but relatively little is known about the mechanisms of active information gathering. An open question is how the brain prospectively estimates the information gains that are expected to accrue from various sources by integrating simpler quantities of prior certainty and the reliability (diagnosticity) of a source. We examine this question using fMRI in a task in which people placed bids to obtain information in conditions that varied independently in the rewards, decision uncertainty, and information diagnosticity. We show that, consistent with value of information theory, the participants' bids are sensitive to prior certainty (the certainty about the correct choice before gathering information) and expected posterior certainty (the certainty expected after gathering information). Expected posterior certainty is decoded from multivoxel activation patterns in the posterior parietal and extrastriate cortices. This representation is independent of instrumental rewards and spatially overlaps with distinct representations of prior certainty and expected information gains. The findings suggest that the posterior parietal and extrastriate cortices are candidates for mediating the prospection of posterior probabilities as a key step to anticipating information gains during active gathering of instrumental information.
Affiliation(s)
- Nicholas M Singletary
- Doctoral Program in Neurobiology and Behavior, Columbia University, New York, NY, USA
- Department of Neuroscience, Columbia University, New York, NY, USA
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- New York State Psychiatric Institute, New York, NY, USA
- Guillermo Horga
- New York State Psychiatric Institute, New York, NY, USA
- Department of Psychiatry, Columbia University, New York, NY, USA
- Jacqueline Gottlieb
- Department of Neuroscience, Columbia University, New York, NY, USA
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Kavli Institute for Brain Science, Columbia University, New York, NY, USA
10. Delhaye E, D'Innocenzo G, Raposo A, Coco MI. The upside of cumulative conceptual interference on exemplar-level mnemonic discrimination. Mem Cognit 2024;52:1567-1578. PMID: 38709388. PMCID: PMC11522113. DOI: 10.3758/s13421-024-01563-2.
Abstract
Although long-term visual memory (LTVM) has a remarkable capacity, the fidelity of its episodic representations can be influenced by at least two intertwined interference mechanisms during the encoding of objects belonging to the same category: the capacity to hold similar episodic traces (e.g., different birds) and the conceptual similarity of the encoded traces (e.g., a sparrow shares more features with a robin than with a penguin). The precision of episodic traces can be tested by having participants discriminate lures (unseen objects) from targets (seen objects) representing different exemplars of the same concept (e.g., two visually similar penguins), which generates interference at retrieval that can be resolved if efficient pattern separation happened during encoding. The present study examines the impact of within-category encoding interference on the fidelity of mnemonic object representations by manipulating an index of cumulative conceptual interference that represents the concurrent impact of capacity and similarity. The precision of mnemonic discrimination was further assessed by measuring the impact of visual similarity between targets and lures in a recognition task. Our results show a significant decrement in the correct identification of targets with increasing interference. Correct rejections of lures were also negatively impacted by cumulative interference, as well as by visual similarity with the target. Most interestingly, though, mnemonic discrimination for targets presented with a visually similar lure was more difficult when objects were encoded under lower, not higher, interference. These findings counter a simply additive account of interference on the fidelity of object representations, providing a finer-grained, multi-factorial understanding of interference in LTVM.
Affiliation(s)
- Emma Delhaye
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- GIGA-CRC In-Vivo Imaging, University of Liège, Liège, Belgium
- Ana Raposo
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- Moreno I Coco
- Department of Psychology, Sapienza University of Rome, Rome, Italy
- IRCCS Santa Lucia, Rome, Italy
11. Carter AA, Kaiser D. An object numbering task reveals an underestimation of complexity for typically structured scenes. Psychon Bull Rev 2024. PMID: 39289240. DOI: 10.3758/s13423-024-02577-2.
Abstract
Our visual environments are composed of an abundance of individual objects. The efficiency with which we can parse such rich environments is remarkable. Previous work suggests that this efficiency is partly explained by grouping mechanisms, which allow the visual system to process the objects that surround us as meaningful groups rather than individual entities. Here, we show that the grouping of objects in typically and meaningfully structured environments directly relates to a reduction of perceived complexity. In an object numerosity discrimination task, we showed participants pairs of schematic scene miniatures, in which objects were structured in typical or atypical ways, and asked them to judge which scene consisted of more individual objects. Critically, participants underestimated the number of objects in typically structured compared with atypically structured scenes, suggesting that grouping based on typical object configurations reduces the perceived numerical complexity of a scene. In two control experiments, we show that this underestimation also occurs when the objects are presented on textured backgrounds, and that it is specific to upright scenes, indicating that it is not related to basic visual feature differences between typically and atypically structured scenes. Together, our results suggest that our visual surroundings appear less complex to the visual system than the number of objects in them makes us believe.
Affiliation(s)
- Alex A Carter
- Department of Psychology, University of York, York, UK
- Daniel Kaiser
- Department of Mathematics and Computer Science, Physics, Geography, Justus-Liebig-Universität Gießen, Arndtstraße 2, 35392, Gießen, Germany
- Center for Mind, Brain and Behavior (CMBB), Philipps-Universität Marburg, Justus-Liebig-Universität Gießen, and Technische Universität Darmstadt, Hans-Meerwein-Straße 6, 35032, Marburg, Germany
12. Trinkl N, Wolfe JM. Image memorability influences memory for where the item was seen but not when. Mem Cognit 2024. PMID: 39256320. DOI: 10.3758/s13421-024-01635-3.
Abstract
Observers can determine whether they have previously seen hundreds of images with more than 80% accuracy. This "massive memory" for WHAT we have seen is accompanied by smaller but still massive memories for WHERE and WHEN the item was seen (spatial & temporal massive memory). Recent studies have shown that certain images are more easily remembered than others (higher "memorability"). Does memorability influence spatial massive memory and temporal massive memory? In two experiments, viewers saw 150 images presented twice in random order. These 300 images were sequentially presented at random locations in a 7 × 7 grid. If an image was categorized as old, observers clicked on the spot in the grid where they thought they had previously seen it. They also noted when they had seen it: Experiment 1-clicking on a timeline; Experiment 2-estimating the trial number when the item first appeared. Replicating prior work, data show that high-memorability images are remembered better than low-memorability images. Interestingly, in both experiments, spatial memory precision was correlated with image memorability, while temporal memory precision did not vary as a function of memorability. Apparently, properties that make images memorable help us remember WHERE but not WHEN those images were presented. The lack of correlation between memorability and temporal memory is, of course, a negative result and should be treated with caution.
Affiliation(s)
- Nathan Trinkl
- Visual Attention Laboratory, Dept. of Surgery, Brigham and Women's Hospital, Boston, MA, USA
- Jeremy M Wolfe
- Visual Attention Laboratory, Dept. of Surgery, Brigham and Women's Hospital, Boston, MA, USA
- Depts of Ophthalmology & Radiology, Harvard Medical School, Boston, MA, USA
- Visual Attention Lab, Department of Surgery, Brigham & Women's Hospital, 900 Commonwealth Ave, 3rd Floor, Boston, MA, 02215, USA
13. Novick LR, Liu J. Seeing what you believe: recognition memory for evolutionary tree structure is affected by students' misconceptions. Memory 2024;32:874-888. PMID: 38805606. DOI: 10.1080/09658211.2024.2360567.
Abstract
People's recognition memory for pictorial stimuli is extremely good. Even complex scientific visualisations are recognised with a high degree of accuracy. The present research examined recognition memory for the branching structure of evolutionary trees. This is an educationally consequential topic due to the potential for contamination from students' misconceptions. The authors created six pairs of scientifically accurate and structurally identical evolutionary trees that differed in whether they included a taxon that cued a misconception in memory. As predicted, Experiment 1 found that (a) college students (N = 90) had better memory for each of the six tree structures when a neutral taxon (M = 0.73) rather than a misconception-cuing taxon (M = 0.64) was included in the tree, and (b) recognition memory was significantly above chance for both sets of trees. Experiment 2 ruled out an alternative hypothesis based on the possibility that 8-12 sec was not enough time for students to encode the relationships depicted in the trees. The authors consider implications of these results for using evolutionary trees to better communicate scientific information. This is important because these trees provide information that is relevant for everyday life.
Affiliation(s)
- Laura R Novick
- Department of Psychology and Human Development, Vanderbilt University, Nashville, TN, USA
- Jingyi Liu
- Department of Psychology and Human Development, Vanderbilt University, Nashville, TN, USA
14
Děchtěrenko F, Bainbridge WA, Lukavský J. Visual free recall and recognition in art students and laypeople. Mem Cognit 2024:10.3758/s13421-024-01607-7. [PMID: 39078592 DOI: 10.3758/s13421-024-01607-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 06/19/2024] [Indexed: 07/31/2024]
Abstract
Artists and laypeople differ in their ability to create drawings. Previous research has shown that artists have improved memory performance during drawing; however, it is unclear whether they have better visual memory after the drawing is finished. In this paper, we focused on differences in visual memory between art students and the general population in two studies. In Study 1, both groups studied a set of images and later drew them in a surprise visual recall test. In Study 2, the drawings from Study 1 were evaluated by a different set of raters on their drawing quality and similarity to the original image, to link drawing evaluations with memory performance for both groups. We found that both groups showed comparable visual recognition memory performance; however, the artist group showed increased recall memory performance. Moreover, artists produced drawings that were both of better quality and more similar to the original image. Individually, participants whose drawings were rated as better showed higher recognition accuracy. Results from Study 2 also have practical implications for the use of drawing as a tool for measuring free recall: the majority of the drawings were recognizable, and raters showed a high level of consistency in their evaluation of the drawings. Taken together, we found that artists have better visual recall memory than laypeople.
Affiliation(s)
- Filip Děchtěrenko
- Institute of Psychology, Czech Academy of Sciences, Pod Vodárenskou věží 4, Prague, 18200, Czech Republic.
- Wilma A Bainbridge
- Department of Psychology, University of Chicago, 5848 S University Ave, Beecher Hall 303, Chicago, IL, 60637, USA
- Neuroscience Institute, University of Chicago, 5812 S Ellis Ave, Chicago, IL, 60637, USA
- Jiří Lukavský
- Institute of Psychology, Czech Academy of Sciences, Pod Vodárenskou věží 4, Prague, 18200, Czech Republic
15
Ahmad FN, Tremblay S, Karkuszewski MD, Alvi M, Hockley WE. A conceptual-perceptual distinctiveness processing account of the superior recognition memory of pictures over environmental sounds. Q J Exp Psychol (Hove) 2024; 77:1555-1580. [PMID: 37705452 PMCID: PMC11181738 DOI: 10.1177/17470218231202986] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2023] [Revised: 07/27/2023] [Accepted: 09/02/2023] [Indexed: 09/15/2023]
Abstract
Researchers have proposed a coarser or gist-based representation for sounds, whereas a more verbatim-based representation is retrieved from long-term memory to account for higher recognition performance for pictures. This study examined the mechanism behind the recognition advantage for pictures. In Experiment 1A, pictures and sounds were presented in separate trials in a mixed list during the study phase, and in a yes-no test participants showed a higher proportion of correct responses for targets, exemplar foils categorically related to the target, and novel foils for pictures compared with sounds. In Experiment 1B, the picture recognition advantage was replicated in a two-alternative forced-choice test for the novel and exemplar foil conditions. For Experiment 2A, even when verbal labels (i.e., written labels) were presented for sounds during the study phase, a recognition advantage for pictures was shown for both targets and exemplar foils. Experiment 2B showed that the presence of written labels for sounds during both the study and test phases did not eliminate the picture recognition advantage in terms of correct rejection of exemplar foils. Finally, in two additional experiments, we examined whether the degree of similarity within pictures and sounds could account for the recognition advantage of pictures. The mean similarity rating for pictures was higher than that for sounds in the exemplar test condition, whereas the mean similarity rating for sounds was higher than that for pictures in the novel test condition. These results pose a challenge for some versions of distinctiveness accounts of the picture superiority effect. We propose a conceptual-perceptual distinctiveness processing account of recognition memory for pictures and sounds.
Affiliation(s)
- Fahad N Ahmad
- Department of Psychology, Wilfrid Laurier University, Waterloo, ON, Canada
- Savannah Tremblay
- Department of Psychology, University of Toronto, Toronto, ON, Canada
- Rotman Research Institute at Baycrest, Toronto, ON, Canada
- Marium Alvi
- Department of Psychology, York University, Toronto, ON, Canada
- William E Hockley
- Department of Psychology, Wilfrid Laurier University, Waterloo, ON, Canada
16
Morales-Torres R, Wing EA, Deng L, Davis SW, Cabeza R. Visual Recognition Memory of Scenes Is Driven by Categorical, Not Sensory, Visual Representations. J Neurosci 2024; 44:e1479232024. [PMID: 38569925 PMCID: PMC11112637 DOI: 10.1523/jneurosci.1479-23.2024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2023] [Revised: 02/07/2024] [Accepted: 02/14/2024] [Indexed: 04/05/2024] Open
Abstract
When we perceive a scene, our brain processes various types of visual information simultaneously, ranging from sensory features, such as line orientations and colors, to categorical features, such as objects and their arrangements. Whereas the role of sensory and categorical visual representations in predicting subsequent memory has been studied using isolated objects, their impact on memory for complex scenes remains largely unknown. To address this gap, we conducted an fMRI study in which female and male participants encoded pictures of familiar scenes (e.g., an airport picture) and later recalled them, while rating the vividness of their visual recall. Outside the scanner, participants had to distinguish each seen scene from three similar lures (e.g., three airport pictures). We modeled the sensory and categorical visual features of multiple scenes using both early and late layers of a deep convolutional neural network. Then, we applied representational similarity analysis to determine which brain regions represented stimuli in accordance with the sensory and categorical models. We found that categorical, but not sensory, representations predicted subsequent memory. In line with this result, only for the categorical model did the average recognition performance for each scene correlate positively with the average visual dissimilarity between the item in question and its respective lures. These results strongly suggest that even in memory tests that ostensibly rely solely on visual cues (such as forced-choice visual recognition with similar distractors), memory decisions for scenes may be primarily influenced by categorical rather than sensory representations.
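Representational similarity analysis of the kind described above compares the pairwise (dis)similarity structure of brain activity patterns with that of a model's feature space. A minimal sketch with random stand-in data (the array sizes and the use of Pearson correlation are illustrative assumptions, not details from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
n_scenes, n_voxels, n_features = 10, 50, 40

brain = rng.normal(size=(n_scenes, n_voxels))    # stand-in fMRI patterns, one row per scene
model = rng.normal(size=(n_scenes, n_features))  # stand-in network-layer features

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson r between rows."""
    return 1.0 - np.corrcoef(patterns)

# Correlate the two RDMs over their upper triangles (excluding the diagonal):
# a high value means the brain region organizes scenes the way the model does.
iu = np.triu_indices(n_scenes, k=1)
rsa_score = np.corrcoef(rdm(brain)[iu], rdm(model)[iu])[0, 1]
print(rsa_score)
```

In practice one such RDM per brain region would be compared against RDMs built from early versus late network layers to adjudicate sensory versus categorical coding.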
Affiliation(s)
- Erik A Wing
- Rotman Research Institute, Baycrest Health Sciences, Toronto, Ontario M6A 2E1, Canada
- Lifu Deng
- Department of Psychology & Neuroscience, Duke University, Durham, North Carolina 27708
- Simon W Davis
- Department of Psychology & Neuroscience, Duke University, Durham, North Carolina 27708
- Department of Neurology, Duke University School of Medicine, Durham, North Carolina 27708
- Roberto Cabeza
- Department of Psychology & Neuroscience, Duke University, Durham, North Carolina 27708
17
Rennig J, Langenberger C, Karnath HO. Beyond visual integration: sensitivity of the temporal-parietal junction for objects, places, and faces. BEHAVIORAL AND BRAIN FUNCTIONS : BBF 2024; 20:8. [PMID: 38637870 PMCID: PMC11027340 DOI: 10.1186/s12993-024-00233-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/12/2023] [Accepted: 03/24/2024] [Indexed: 04/20/2024]
Abstract
One important role of the TPJ is its contribution to perceiving the global gist of hierarchically organized stimuli, in which individual elements create a global visual percept. However, the link between clinical findings in simultanagnosia and neuroimaging in healthy subjects is missing for real-world global stimuli, like visual scenes. It is well known that hierarchical, global stimuli activate TPJ regions and that simultanagnosia patients show deficits during the recognition of hierarchical stimuli and real-world visual scenes. However, the role of the TPJ in real-world scene processing is entirely unexplored. In the present study, we first localized TPJ regions significantly responding to the global gist of hierarchical stimuli and then investigated their responses to visual scenes, as well as to single objects and faces as control stimuli. All three stimulus classes evoked significantly positive univariate responses in the previously localized TPJ regions. In a multivariate analysis, we were able to demonstrate that voxel patterns of the TPJ were classified significantly above chance level for all three stimulus classes. These results demonstrate that the TPJ is significantly involved in processing complex visual stimuli, that this involvement is not restricted to visual scenes, and that the TPJ is sensitive to different classes of visual stimuli, each with a specific signature of neuronal activation.
Affiliation(s)
- Johannes Rennig
- Division of Neuropsychology, Center of Neurology, Hertie-Institute for Clinical Brain Research, University of Tübingen, D-72076, Tübingen, Germany.
- Christina Langenberger
- Division of Neuropsychology, Center of Neurology, Hertie-Institute for Clinical Brain Research, University of Tübingen, D-72076, Tübingen, Germany
- Hans-Otto Karnath
- Division of Neuropsychology, Center of Neurology, Hertie-Institute for Clinical Brain Research, University of Tübingen, D-72076, Tübingen, Germany
- Department of Psychology, University of South Carolina, Columbia, USA
18
Andrade MÂ, Cipriano M, Raposo A. ObScene database: Semantic congruency norms for 898 pairs of object-scene pictures. Behav Res Methods 2024; 56:3058-3071. [PMID: 37488464 PMCID: PMC11133025 DOI: 10.3758/s13428-023-02181-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 06/23/2023] [Indexed: 07/26/2023]
Abstract
Research on the interaction between object and scene processing has a long history in the fields of perception and visual memory. Most databases have established norms for pictures where the object is embedded in the scene. In this study, we provide a diverse and controlled stimulus set comprising real-world pictures of 375 objects (e.g., suitcase), 245 scenes (e.g., airport), and 898 object-scene pairs (e.g., suitcase-airport), with object and scene presented separately. Our goal was twofold. First, to create a database of object and scene pictures, normed for the same variables to have comparable measures for both types of pictures. Second, to acquire normative data for the semantic relationships between objects and scenes presented separately, which offers more flexibility in the use of the pictures and allows disentangling the processing of the object and its context (the scene). Across three experiments, participants evaluated each object or scene picture on name agreement, familiarity, and visual complexity, and rated object-scene pairs on semantic congruency. A total of 125 septuplets of one scene and six objects (three congruent, three incongruent), and 120 triplets of one object and two scenes (in congruent and incongruent pairings) were built. In future studies, these objects and scenes can be used separately or combined, while controlling for their key features. Additionally, as object-scene pairs received semantic congruency ratings along the entire scale, researchers may select among a wide range of congruency values. ObScene is a comprehensive and ecologically valid database, useful for psychology and neuroscience studies of visual object and scene processing.
Affiliation(s)
- Miguel Ângelo Andrade
- Research Center for Psychological Science, Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013, Lisboa, Portugal.
- Margarida Cipriano
- Research Center for Psychological Science, Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013, Lisboa, Portugal
- Ana Raposo
- Research Center for Psychological Science, Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013, Lisboa, Portugal
19
Liu T, Hao X, Zhang X, Bai X, Xing M. The effect of part-list cuing on associative recognition. Q J Exp Psychol (Hove) 2024:17470218241234145. [PMID: 38326325 DOI: 10.1177/17470218241234145] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/09/2024]
Abstract
The modulation of part-list cuing on item memory has been well documented, whereas its impact on associative memory remains largely unknown. The present study explored the effect of part-list cuing on associative recognition and, more specifically, whether this forgetting effect caused by part-list cuing is more sensitive to recollection or familiarity in recognition memory. Experiments 1a and 1b combined the intact/rearranged/new judgement task of associative recognition with the classical part-list cuing paradigm, and the results showed that part-list cuing impaired the recognition accuracy of "intact" and "rearranged" face-scene pairs. Moreover, the discriminability score of relational recognition and item recognition was significantly decreased in the part-list cuing condition compared to the no-part-list cuing condition. Experiments 2a and 2b further used the Remember/Know/Guess task to explore which recognition processes (recollection vs. familiarity) were sensitive to the presentation of part-list cuing. The results showed that part-list cuing reduced the familiarity of relational recognition and the recollection and familiarity of item recognition. These findings suggest that part-list cuing was harmful to the recognition of relationships (familiarity) and items (recollection and familiarity) in associative memory.
Affiliation(s)
- Tuanli Liu
- School of Educational Science, Xinyang Normal University, Xinyang, China
- Xingfeng Hao
- School of Educational Science, Xinyang Normal University, Xinyang, China
- Xingyuan Zhang
- School of Business, Xinyang Normal University, Xinyang, China
- Xuejun Bai
- Faculty of Psychology, Tianjin Normal University, Tianjin, China
- Min Xing
- School of Educational Science, Xinyang Normal University, Xinyang, China
20
Westebbe L, Liang Y, Blaser E. The Accuracy and Precision of Memory for Natural Scenes: A Walk in the Park. Open Mind (Camb) 2024; 8:131-147. [PMID: 38435706 PMCID: PMC10898787 DOI: 10.1162/opmi_a_00122] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2023] [Accepted: 01/17/2024] [Indexed: 03/05/2024] Open
Abstract
It is challenging to quantify the accuracy and precision of scene memory because it is unclear what 'space' scenes occupy (how can we quantify error when misremembering a natural scene?). To address this, we exploited the ecologically valid, metric space in which scenes occur and are represented: routes. In a delayed estimation task, participants briefly saw a target scene drawn from a video of an outdoor 'route loop', then used a continuous report wheel of the route to pinpoint the scene. Accuracy was high and unbiased, indicating there was no net boundary extension/contraction. Interestingly, precision was higher for routes that were more self-similar (as characterized by the half-life, in meters, of a route's Multiscale Structural Similarity index), consistent with previous work finding a 'similarity advantage' where memory precision is regulated according to task demands. Overall, scenes were remembered to within a few meters of their actual location.
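Because the report wheel covers a closed route loop, error between the reported and true scene position is naturally measured circularly, wrapping at the loop's length. A small sketch of that wrapping (the function name and numbers are illustrative, not from the paper):

```python
def loop_error(reported_m, target_m, loop_length_m):
    """Signed error along a closed route, wrapped into (-L/2, L/2]."""
    err = (reported_m - target_m) % loop_length_m
    if err > loop_length_m / 2:
        err -= loop_length_m
    return err

# On a 1000 m loop, reporting 5 m for a scene at 990 m is a 15 m overshoot,
# not a 985 m error in the other direction.
print(loop_error(5, 990, 1000))    # -> 15
print(loop_error(980, 990, 1000))  # -> -10
```

Accuracy would then correspond to the mean of such signed errors (bias, e.g. net boundary extension/contraction) and precision to their spread.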
Affiliation(s)
- Leo Westebbe
- Department of Psychology, University of Massachusetts Boston, Boston, MA, USA
- Yibiao Liang
- Department of Psychology, University of Massachusetts Boston, Boston, MA, USA
- Erik Blaser
- Department of Psychology, University of Massachusetts Boston, Boston, MA, USA
21
Singletary NM, Gottlieb J, Horga G. The parieto-occipital cortex is a candidate neural substrate for the human ability to approximate Bayesian inference. Commun Biol 2024; 7:165. [PMID: 38337012 PMCID: PMC10858241 DOI: 10.1038/s42003-024-05821-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2022] [Accepted: 01/15/2024] [Indexed: 02/12/2024] Open
Abstract
Adaptive decision-making often requires one to infer unobservable states based on incomplete information. Bayesian logic prescribes that individuals should do so by estimating the posterior probability by integrating the prior probability with new information, but the neural basis of this integration is incompletely understood. We record fMRI during a task in which participants infer the posterior probability of a hidden state while we independently modulate the prior probability and likelihood of evidence regarding the state; the task incentivizes participants to make accurate inferences and dissociates expected value from posterior probability. Here we show that activation in a region of left parieto-occipital cortex independently tracks the subjective posterior probability, combining its subcomponents of prior probability and evidence likelihood, and reflecting the individual participants' systematic deviations from objective probabilities. The parieto-occipital cortex is thus a candidate neural substrate for humans' ability to approximate Bayesian inference by integrating prior beliefs with new information.
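The Bayesian integration the task targets is simple to state: the posterior is the prior multiplied by the likelihood, renormalized. A minimal sketch (the two-state example and its numbers are invented for illustration):

```python
def posterior(prior, likelihood):
    """Bayes' rule over a discrete set of hidden states:
    posterior[i] is proportional to prior[i] * likelihood[i]."""
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Prior favors state A (0.8), but the new evidence favors state B:
# the posterior shifts toward B without fully overriding the prior.
print(posterior([0.8, 0.2], [0.25, 0.75]))
```

Participants' systematic deviations from such objective values, which the parieto-occipital signal tracked, can be modeled by distorting the prior or the likelihood before combining them.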
Affiliation(s)
- Nicholas M Singletary
- Doctoral Program in Neurobiology and Behavior, Columbia University, New York, NY, USA.
- Department of Neuroscience, Columbia University, New York, NY, USA.
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA.
- New York State Psychiatric Institute, New York, NY, USA.
- Jacqueline Gottlieb
- Department of Neuroscience, Columbia University, New York, NY, USA.
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA.
- Kavli Institute for Brain Science, Columbia University, New York, NY, USA.
- Guillermo Horga
- New York State Psychiatric Institute, New York, NY, USA.
- Department of Psychiatry, Columbia University, New York, NY, USA.
22
Nitsch A, Garvert MM, Bellmund JLS, Schuck NW, Doeller CF. Grid-like entorhinal representation of an abstract value space during prospective decision making. Nat Commun 2024; 15:1198. [PMID: 38336756 PMCID: PMC10858181 DOI: 10.1038/s41467-024-45127-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2023] [Accepted: 01/16/2024] [Indexed: 02/12/2024] Open
Abstract
How valuable a choice option is often changes over time, making the prediction of value changes an important challenge for decision making. Prior studies identified a cognitive map in the hippocampal-entorhinal system that encodes relationships between states and enables prediction of future states, but does not inherently convey value during prospective decision making. In this fMRI study, participants predicted changing values of choice options in a sequence, forming a trajectory through an abstract two-dimensional value space. During this task, the entorhinal cortex exhibited a grid-like representation with an orientation aligned to the axis through the value space most informative for choices. A network of brain regions, including ventromedial prefrontal cortex, tracked the prospective value difference between options. These findings suggest that the entorhinal grid system supports the prediction of future values by representing a cognitive map, which might be used to generate lower-dimensional value signals to guide prospective decision making.
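A grid-like (sixfold-symmetric) signal is commonly detected by regressing activity on the cosine and sine of six times the movement direction, then recovering the putative grid orientation from the two weights. A synthetic sketch of that logic (data and parameter values are invented, not the paper's pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)
angles = rng.uniform(0, 2 * np.pi, size=200)      # directions through the value space
true_orientation = np.pi / 12                     # hypothetical grid orientation
bold = 0.5 * np.cos(6 * (angles - true_orientation)) \
    + rng.normal(scale=0.1, size=200)             # noisy stand-in signal

# Fit sixfold regressors; the two weights encode the amplitude and phase
# of the hexadirectional modulation.
X = np.column_stack([np.cos(6 * angles), np.sin(6 * angles)])
beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
est_orientation = np.arctan2(beta[1], beta[0]) / 6
amplitude = np.hypot(beta[0], beta[1])
print(est_orientation, amplitude)  # close to pi/12 and 0.5
```

Alignment of the estimated orientation with a behaviorally informative axis, as reported above, would then be tested on held-out data.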
Affiliation(s)
- Alexander Nitsch
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
- Mona M Garvert
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Max Planck Research Group NeuroCode, Max Planck Institute for Human Development, Berlin, Germany
- Max Planck UCL Centre for Computational Psychiatry and Aging Research, Berlin, Germany
- Faculty of Human Sciences, Julius-Maximilians-Universität Würzburg, Würzburg, Germany
- Jacob L S Bellmund
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Nicolas W Schuck
- Max Planck Research Group NeuroCode, Max Planck Institute for Human Development, Berlin, Germany
- Max Planck UCL Centre for Computational Psychiatry and Aging Research, Berlin, Germany
- Institute of Psychology, Universität Hamburg, Hamburg, Germany
- Christian F Doeller
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
- Kavli Institute for Systems Neuroscience, Centre for Neural Computation, The Egil and Pauline Braathen and Fred Kavli Centre for Cortical Microcircuits, Jebsen Centre for Alzheimer's Disease, Norwegian University of Science and Technology, Trondheim, Norway.
- Wilhelm Wundt Institute for Psychology, Leipzig University, Leipzig, Germany.
- Department of Psychology, Technical University Dresden, Dresden, Germany.
23
Zhang Y, Rottman BM. Causal learning with delays up to 21 hours. Psychon Bull Rev 2024; 31:312-324. [PMID: 37580453 DOI: 10.3758/s13423-023-02342-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/13/2023] [Indexed: 08/16/2023]
Abstract
Considerable delays between causes and effects are commonly found in real life. However, previous studies have only investigated how well people can learn probabilistic relations with delays on the order of seconds. In the current study we tested whether people can learn a cause-effect relation with delays of 0, 3, 9, or 21 hours, and the study lasted 16 days. We found that learning was slowed with longer delays, but by the end of 16 days participants had learned the cause-effect relation in all four conditions, and they had learned the relation about equally well in all four conditions. This suggests that in real-world situations people may still be fairly accurate at inferring cause-effect relations with delays if they have enough experience. We also discuss ways that delays may interact with other real-world factors that could complicate learning.
24
Mikhailova A, Lightfoot S, Santos-Victor J, Coco MI. Differential effects of intrinsic properties of natural scenes and interference mechanisms on recognition processes in long-term visual memory. Cogn Process 2024; 25:173-187. [PMID: 37831320 DOI: 10.1007/s10339-023-01164-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2022] [Accepted: 09/20/2023] [Indexed: 10/14/2023]
Abstract
Humans display remarkable long-term visual memory (LTVM) processes. Even though images may be intrinsically memorable, the fidelity of their visual representations, and consequently the likelihood of successfully retrieving them, hinges on their similarity when concurrently held in LTVM. In this debate, it is still unclear whether intrinsic features of images (perceptual and semantic) may be mediated by mechanisms of interference generated at encoding, or during retrieval, and how these factors impinge on recognition processes. In the current study, participants (32) studied a stream of 120 natural scenes from 8 semantic categories, which varied in frequency (4, 8, 16 or 32 exemplars per category) to generate different levels of category interference, in preparation for a recognition test. They were then asked to indicate which of two images, presented side by side (i.e. two-alternative forced-choice), they remembered. The two images belonged to the same semantic category but varied in their perceptual similarity (similar or dissimilar). Participants also expressed their confidence (sure/not sure) about their recognition response, enabling us to tap into their metacognitive efficacy (meta-d'). Additionally, we extracted the activation of perceptual and semantic features in images (i.e. their informational richness) through deep neural network modelling and examined their impact on recognition processes. Corroborating previous literature, we found that category interference and perceptual similarity negatively impact recognition processes, as well as response times and metacognitive efficacy. Moreover, semantically rich images were less likely to be remembered, an effect that trumped the positive memorability boost from perceptual information. Critically, we did not observe any significant interaction between intrinsic features of images and interference generated either at encoding or during retrieval. All in all, our study calls for a more integrative understanding of the representational dynamics during encoding and recognition that enable us to form, maintain and access visual information.
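Recognition performance and metacognitive efficacy in such designs rest on signal-detection quantities; meta-d' re-expresses confidence data on the same scale as the d' sketched here (hit and false-alarm rates below are illustrative, not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Higher category interference should push hits down and false alarms up,
# lowering d'; meta-d' applies the same machinery to confidence-conditional rates.
print(round(d_prime(0.80, 0.20), 2))  # -> 1.68
```

Metacognitive efficacy is then often summarized as the ratio meta-d'/d', so that confidence resolution is evaluated relative to first-order sensitivity.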
Affiliation(s)
- Anastasiia Mikhailova
- Institute for Systems and Robotics, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal.
- José Santos-Victor
- Institute for Systems and Robotics, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal
- Moreno I Coco
- Sapienza, University of Rome, Rome, Italy.
- I.R.C.C.S. Santa Lucia, Fondazione Santa Lucia, Roma, Italy.
25
Kennedy BL, Most SB, Grootswagers T, Bowden VK. Memory benefits when actively, rather than passively, viewing images. Atten Percept Psychophys 2024; 86:1-8. [PMID: 38012474 DOI: 10.3758/s13414-023-02814-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 10/26/2023] [Indexed: 11/29/2023]
Abstract
Serial visual presentations of images exist both in the laboratory and, increasingly, on virtual platforms such as social media feeds. However, the way we interact with information differs between these. In many laboratory experiments, participants view stimuli passively, whereas on social media people tend to interact with information actively. This difference could influence the way information is remembered, which carries practical and theoretical implications. In the current study, 821 participants viewed streams containing seven landscape images that were presented at either a self-paced (active) or an automatic (passive) rate. Critically, the presentation speed in each automatic trial was matched to the speed of a self-paced trial for each participant. Both memory accuracy and memory confidence were greater on self-paced compared to automatic trials. These results indicate that active, self-paced progression through images increases the likelihood of them being remembered, relative to when participants have no control over presentation speed and duration.
Affiliation(s)
- Briana L Kennedy
- School of Psychological Science, The University of Western Australia, Perth, WA, Australia.
- Steven B Most
- School of Psychology, UNSW Sydney, Sydney, NSW, Australia
- Tijl Grootswagers
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW, Australia
- Vanessa K Bowden
- School of Psychological Science, The University of Western Australia, Perth, WA, Australia
26
Bouffard NR, Fidalgo C, Brunec IK, Lee ACH, Barense MD. Older adults can use memory for distinctive objects, but not distinctive scenes, to rescue associative memory deficits. NEUROPSYCHOLOGY, DEVELOPMENT, AND COGNITION. SECTION B, AGING, NEUROPSYCHOLOGY AND COGNITION 2024; 31:362-386. [PMID: 36703496 DOI: 10.1080/13825585.2023.2170966] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/02/2022] [Accepted: 01/17/2023] [Indexed: 01/28/2023]
Abstract
Associative memory deficits in aging are frequently characterized by false recognition of novel stimulus associations, particularly when stimuli are similar. Introducing distinctive stimuli, therefore, can help guide item differentiation in memory and can further our understanding of how age-related brain changes impact behavior. How older adults use different types of distinctive information to distinguish overlapping events in memory and to avoid false associative recognition is still unknown. To test this, we manipulated the distinctiveness of items from two stimulus categories, scenes and objects, across three conditions: (1) distinct scenes paired with similar objects, (2) similar scenes paired with distinct objects, and (3) similar scenes paired with similar objects. Young and older adults studied scene-object pairs and then made both remember/know judgments toward single items as well as associative memory judgments to old and novel scene-object pairs ("Were these paired together?"). Older adults showed intact single item recognition of scenes and objects, regardless of whether those objects and scenes were similar or distinct. In contrast, relative to younger adults, older adults showed elevated false recognition for scene-object pairs, even when the scenes were distinct. These age-related associative memory deficits, however, disappeared if the pair contained an object that was visually distinct. In line with neural evidence that hippocampal functioning and scene processing decline with age, these results suggest that older adults can rely on memory for distinct objects, but not for distinct scenes, to distinguish between memories with overlapping features.
Affiliation(s)
- Nichole R Bouffard
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Rotman Research Institute, Baycrest Hospital, Toronto, Ontario, Canada
| | - Celia Fidalgo
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
| | - Iva K Brunec
- Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
| | - Andy C H Lee
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Department of Psychology, University of Toronto Scarborough, Scarborough, Ontario, Canada
| | - Morgan D Barense
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Rotman Research Institute, Baycrest Hospital, Toronto, Ontario, Canada
27
Klink H, Kaiser D, Stecher R, Ambrus GG, Kovács G. Your place or mine? The neural dynamics of personally familiar scene recognition suggests category independent familiarity encoding. Cereb Cortex 2023; 33:11634-11645. [PMID: 37885126] [DOI: 10.1093/cercor/bhad397]
Abstract
Recognizing a stimulus as familiar is an important capacity in our everyday life. Recent investigation of visual processes has led to important insights into the nature of the neural representations of familiarity for human faces. Still, little is known about how familiarity affects the neural dynamics of non-face stimulus processing. Here we report the results of an EEG study, examining the representational dynamics of personally familiar scenes. Participants viewed highly variable images of their own apartments and unfamiliar ones, as well as personally familiar and unfamiliar faces. Multivariate pattern analyses were used to examine the time course of differential processing of familiar and unfamiliar stimuli. Time-resolved classification revealed that familiarity is decodable from the EEG data similarly for scenes and faces. The temporal dynamics showed delayed onsets and peaks for scenes as compared to faces. Familiarity information, starting at 200 ms, generalized across stimulus categories and led to a robust familiarity effect. In addition, familiarity enhanced category representations in early (250-300 ms) and later (>400 ms) processing stages. Our results extend previous face familiarity results to another stimulus category and suggest that familiarity as a construct can be understood as a general, stimulus-independent processing step during recognition.
Affiliation(s)
- Hannah Klink
- Department of Neurology, Universitätsklinikum Jena, Kastanienstraße 1, D-07747 Jena, Thüringen, Germany
- Department of Biological Psychology and Cognitive Neurosciences, Institute of Psychology, Friedrich Schiller University Jena, Leutragraben 1, D-07743 Jena, Thüringen, Germany
- Daniel Kaiser
- Mathematical Institute, Department of Mathematics and Computer Science, Physics, Geography, Justus-Liebig-University Gießen, Arndtstraße 2, D-35392 Gießen, Hessen, Germany
- Center for Mind, Brain and Behavior (CMBB), Justus-Liebig-University Gießen and Philipps-University Marburg, Hans-Meerwein-Straße 6 Mehrzweckgeb, 03C022, Marburg, D-35032, Hessen, Germany
- Rico Stecher
- Mathematical Institute, Department of Mathematics and Computer Science, Physics, Geography, Justus-Liebig-University Gießen, Arndtstraße 2, D-35392 Gießen, Hessen, Germany
- Géza G Ambrus
- Department of Psychology, Bournemouth University, Poole House P319, Talbot Campus, Fern Barrow, Poole, Dorset BH12 5BB, United Kingdom
- Gyula Kovács
- Department of Biological Psychology and Cognitive Neurosciences, Institute of Psychology, Friedrich Schiller University Jena, Leutragraben 1, D-07743 Jena, Thüringen, Germany
28
DiNicola LM, Sun W, Buckner RL. Side-by-side regions in dorsolateral prefrontal cortex estimated within the individual respond differentially to domain-specific and domain-flexible processes. J Neurophysiol 2023; 130:1602-1615. [PMID: 37937340] [PMCID: PMC11068361] [DOI: 10.1152/jn.00277.2023]
Abstract
A recurring debate concerns whether regions of primate prefrontal cortex (PFC) support domain-flexible or domain-specific processes. Here we tested the hypothesis with functional MRI (fMRI) that side-by-side PFC regions, within distinct parallel association networks, differentially support domain-flexible and domain-specialized processing. Individuals (N = 9) were intensively sampled, and all effects were estimated within their own idiosyncratic anatomy. Within each individual, we identified PFC regions linked to distinct networks, including a dorsolateral PFC (DLPFC) region coupled to the medial temporal lobe (MTL) and an extended region associated with the canonical multiple-demand network. We further identified an inferior PFC region coupled to the language network. Exploration in separate task data, collected within the same individuals, revealed a robust functional triple dissociation. The DLPFC region linked to the MTL was recruited during remembering and imagining the future, distinct from juxtaposed regions that were modulated in a domain-flexible manner during working memory. The inferior PFC region linked to the language network was recruited during sentence processing. Detailed analysis of the trial-level responses further revealed that the DLPFC region linked to the MTL specifically tracked processes associated with scene construction. These results suggest that the DLPFC possesses a domain-specialized region that is small and easily confused with nearby (larger) regions associated with cognitive control. The newly described region is domain specialized for functions traditionally associated with the MTL. We discuss the implications of these findings in relation to convergent anatomical analysis in the monkey. NEW & NOTEWORTHY: Competing hypotheses link regions of prefrontal cortex (PFC) to domain-flexible or domain-specific processes. Here, using a precision neuroimaging approach, we identify a domain-specialized region in dorsolateral PFC, coupled to the medial temporal lobe and recruited for scene construction. This region is juxtaposed to, but distinct from, broader PFC regions recruited flexibly for cognitive control. Region distinctions align with broader network differences, suggesting that PFC regions gain dissociable processing properties via segregated anatomical projections.
Affiliation(s)
- Lauren M DiNicola
- Department of Psychology, Center for Brain Science, Harvard University, Cambridge, Massachusetts, United States
- Wendy Sun
- Division of Medical Sciences, Harvard Medical School, Boston, Massachusetts, United States
- Randy L Buckner
- Department of Psychology, Center for Brain Science, Harvard University, Cambridge, Massachusetts, United States
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, United States
- Department of Psychiatry, Massachusetts General Hospital, Charlestown, Massachusetts, United States
29
Singletary NM, Horga G, Gottlieb J. A distinct neural code supports prospection of future probabilities during instrumental information-seeking. bioRxiv 2023:2023.11.27.568849. [PMID: 38076800] [PMCID: PMC10705234] [DOI: 10.1101/2023.11.27.568849]
Abstract
To make adaptive decisions, we must actively demand information, but relatively little is known about the mechanisms of active information gathering. An open question is how the brain estimates expected information gains (EIG) when comparing the current decision uncertainty with the uncertainty that is expected after gathering information. We examined this question using fMRI in a task in which people placed bids to obtain information in conditions that varied independently by prior decision uncertainty, information diagnosticity, and the penalty for an erroneous choice. Consistent with value of information theory, bids were sensitive to EIG and its components of prior certainty and expected posterior certainty. Expected posterior certainty was decoded above chance from multivoxel activation patterns in the posterior parietal and extrastriate cortices. This representation was independent of instrumental rewards and overlapped with distinct representations of EIG and prior certainty. Thus, posterior parietal and extrastriate cortices are candidates for mediating the prospection of posterior probabilities as a key step to estimate EIG during active information gathering.
Affiliation(s)
- Nicholas M Singletary
- Doctoral Program in Neurobiology and Behavior, Columbia University, New York, NY, USA
- Department of Neuroscience, Columbia University, New York, NY, USA
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- New York State Psychiatric Institute, New York, NY, USA
- Guillermo Horga
- New York State Psychiatric Institute, New York, NY, USA
- Department of Psychiatry, Columbia University, New York, NY, USA
- These authors contributed equally
- Jacqueline Gottlieb
- Department of Neuroscience, Columbia University, New York, NY, USA
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Kavli Institute for Brain Science, Columbia University, New York, NY, USA
- These authors contributed equally
30
Han S, Rezanejad M, Walther DB. Memorability of line drawings of scenes: the role of contour properties. Mem Cognit 2023. [PMID: 37903987] [DOI: 10.3758/s13421-023-01478-4]
Abstract
Why are some images more likely to be remembered than others? Previous work focused on the influence of global, low-level visual features as well as image content on memorability. To better understand the role of local, shape-based contours, we here investigate the memorability of photographs and line drawings of scenes. We find that the memorability of photographs and line drawings of the same scenes is correlated. We quantitatively measure the role of contour properties and their spatial relationships for scene memorability using a Random Forest analysis. To determine whether this relationship is merely correlational or if manipulating these contour properties causes images to be remembered better or worse, we split each line drawing into two half-images, one with high and the other with low predicted memorability according to the trained Random Forest model. In a new memorability experiment, we find that the half-images predicted to be more memorable were indeed remembered better, confirming a causal role of shape-based contour features, and, in particular, T junctions in scene memorability. We performed a categorization experiment on half-images to test for differential access to scene content. We found that half-images predicted to be more memorable were categorized more accurately. However, categorization accuracy for individual images was not correlated with their memorability. These results demonstrate that we can measure the contributions of individual contour properties to scene memorability and verify their causal involvement with targeted image manipulations, thereby bridging the gap between low-level features and scene semantics in our understanding of memorability.
Affiliation(s)
- Seohee Han
- Department of Psychology, University of Toronto, 100 St. George Street, Toronto, Canada
- Morteza Rezanejad
- Department of Psychology, University of Toronto, 100 St. George Street, Toronto, Canada
- Dirk B Walther
- Department of Psychology, University of Toronto, 100 St. George Street, Toronto, Canada
31
Burkhardt M, Bergelt J, Gönner L, Dinkelbach HÜ, Beuth F, Schwarz A, Bicanski A, Burgess N, Hamker FH. A large-scale neurocomputational model of spatial cognition integrating memory with vision. Neural Netw 2023; 167:473-488. [PMID: 37688954] [DOI: 10.1016/j.neunet.2023.08.034]
Abstract
We introduce a large-scale neurocomputational model of spatial cognition called 'Spacecog', which integrates recent findings from mechanistic models of visual and spatial perception. As a high-level cognitive ability, spatial cognition requires the processing of behaviourally relevant features in complex environments and, importantly, the updating of this information during processes of eye and body movement. The Spacecog model achieves this by interfacing spatial memory and imagery with mechanisms of object localisation, saccade execution, and attention through coordinate transformations in parietal areas of the brain. We evaluate the model in a realistic virtual environment where our neurocognitive model steers an agent to perform complex visuospatial tasks. Our modelling approach opens up new possibilities in the assessment of neuropsychological data and human spatial cognition.
Affiliation(s)
- Julia Bergelt
- Chemnitz University of Technology, 09107 Chemnitz, Germany
- Lorenz Gönner
- Technische Universität Dresden, Faculty of Psychology, 01062 Dresden, Germany; Technische Universität Dresden, Department of Psychiatry, 01307 Dresden, Germany
- Frederik Beuth
- Chemnitz University of Technology, 09107 Chemnitz, Germany
- Alex Schwarz
- Chemnitz University of Technology, 09107 Chemnitz, Germany
- Andrej Bicanski
- Newcastle University, Newcastle upon Tyne NE1 7RU, United Kingdom
- Neil Burgess
- University College London, London WC1E 6BT, United Kingdom
- Fred H Hamker
- Chemnitz University of Technology, 09107 Chemnitz, Germany
32
Melega G, Sheldon S. Conceptual relatedness promotes memory generalization at the cost of detailed recollection. Sci Rep 2023; 13:15575. [PMID: 37730718] [PMCID: PMC10511542] [DOI: 10.1038/s41598-023-40803-4]
Abstract
An adaptive memory system is one that allows us both to retrieve detailed memories and to generalize knowledge about our past; the latter, termed memory generalization, is useful for making inferences about new situations. Research has indicated that memory generalization relies on forming knowledge structures by integrating experiences with shared encountered elements. Whether memory generalization occurs more readily when experiences also have elements that share established (conceptual) information is less clear. It is also unclear if engaging in memory generalization during learning comes at the cost of retrieving detailed memories, the other function of episodic memory. To address these two knowledge gaps, we paired a modified version of the acquired equivalence task with a recognition memory test. Across three experiments, participants first learned a series of overlapping object-scene pairs (A-X, B-X and A-Y) in which half of the overlapping pairs contained conceptually-related objects (e.g., A-pencil; B-scissors; conceptual condition) and the other half contained unrelated objects (neutral condition). Participants' ability to generalize to new overlapping object-scene pairs (B-Y) as well as to not-learned but semantically-related objects was measured. Finally, participants completed a recognition memory test that included the encoded objects, perceptually similar lures or new foil objects. Across all experiments, we found higher rates of generalization but reduced detailed memory (indexed by increased false alarms to lure objects) for information learned in the conceptual than the neutral condition. These results suggest that the presence of conceptual knowledge biases an individual towards a generalization function of memory, which comes at the expense of detailed recollection.
Affiliation(s)
- Greta Melega
- Department of Neurology, Charité Universitätsmedizin Berlin, Berlin, Germany
- Department of Psychology, McGill University, 2001 McGill College, Montreal, QC, H3A 1G1, Canada
- Signy Sheldon
- Department of Psychology, McGill University, 2001 McGill College, Montreal, QC, H3A 1G1, Canada
33
Cooper PS, Colton E, Bode S, Chong TTJ. Standardised images of novel objects created with generative adversarial networks. Sci Data 2023; 10:575. [PMID: 37660073] [PMCID: PMC10475029] [DOI: 10.1038/s41597-023-02483-7]
Abstract
An enduring question in cognitive science is how perceptually novel objects are processed. Addressing this issue has been limited by the absence of a standardised set of object-like stimuli that appear realistic, but cannot possibly have been previously encountered. To this end, we created a dataset, at the core of which are images of 400 perceptually novel objects. These stimuli were created using Generative Adversarial Networks that integrated features of everyday stimuli to produce a set of synthetic objects that appear entirely plausible, yet do not in fact exist. We curated an accompanying dataset of 400 familiar stimuli, which were matched in terms of size, contrast, luminance, and colourfulness. For each object, we quantified their key visual properties (edge density, entropy, symmetry, complexity, and spectral signatures). We also confirmed that adult observers (N = 390) perceive the novel objects to be less familiar, yet similarly engaging, relative to the familiar objects. This dataset serves as an open resource to facilitate future studies on visual perception.
Affiliation(s)
- Patrick S Cooper
- Turner Institute for Brain and Mental Health, Monash University, Victoria, 3800, Australia
- Melbourne School of Psychological Sciences, University of Melbourne, Victoria, 3010, Australia
- Emily Colton
- Turner Institute for Brain and Mental Health, Monash University, Victoria, 3800, Australia
- Stefan Bode
- Melbourne School of Psychological Sciences, University of Melbourne, Victoria, 3010, Australia
- Trevor T-J Chong
- Turner Institute for Brain and Mental Health, Monash University, Victoria, 3800, Australia
- Department of Neurology, Alfred Health, Melbourne, Victoria, 3004, Australia
- Department of Clinical Neurosciences, St Vincent's Hospital, Victoria, 3065, Australia
34
Li YP, Wang Y, Turk-Browne NB, Kuhl BA, Hutchinson JB. Perception and memory retrieval states are reflected in distributed patterns of background functional connectivity. Neuroimage 2023; 276:120221. [PMID: 37290674] [PMCID: PMC10484747] [DOI: 10.1016/j.neuroimage.2023.120221]
Abstract
The same visual input can serve as the target of perception or as a trigger for memory retrieval depending on whether cognitive processing is externally oriented (perception) or internally oriented (memory retrieval). While numerous human neuroimaging studies have characterized how visual stimuli are differentially processed during perception versus memory retrieval, perception and memory retrieval may also be associated with distinct neural states that are independent of stimulus-evoked neural activity. Here, we combined human fMRI with full correlation matrix analysis (FCMA) to reveal potential differences in "background" functional connectivity across perception and memory retrieval states. We found that perception and retrieval states could be discriminated with high accuracy based on patterns of connectivity across (1) the control network, (2) the default mode network (DMN), and (3) retrosplenial cortex (RSC). In particular, clusters in the control network increased connectivity with each other during the perception state, whereas clusters in the DMN were more strongly coupled during the retrieval state. Interestingly, RSC switched its coupling between networks as the cognitive state shifted from retrieval to perception. Finally, we show that background connectivity (1) was fully independent from stimulus-related variance in the signal and, further, (2) captured distinct aspects of cognitive states compared to traditional classification of stimulus-evoked responses. Together, our results reveal that perception and memory retrieval are associated with sustained cognitive states that manifest as distinct patterns of connectivity among large-scale brain networks.
Affiliation(s)
- Y Peeta Li
- Department of Psychology, University of Oregon, Eugene, OR, United States
- Yida Wang
- Amazon Web Services, Palo Alto, CA, United States
- Nicholas B Turk-Browne
- Department of Psychology, Yale University, New Haven, CT, United States; Wu Tsai Institute, Yale University, New Haven, CT, United States
- Brice A Kuhl
- Department of Psychology, University of Oregon, Eugene, OR, United States
35
Greene NR, Naveh-Benjamin M. Forgetting of specific and gist visual associative episodic memory representations across time. Psychon Bull Rev 2023; 30:1484-1501. [PMID: 36877363] [DOI: 10.3758/s13423-023-02256-8]
Abstract
Associative binding between components of an episode is vulnerable to forgetting across time. We investigated whether these forgetting effects on inter-item associative memory occur only at specific or also at gist levels of representation. In two experiments, young adult participants (n = 90, and 86, respectively) encoded face-scene pairs and were then tested either immediately after encoding or following a 24-hour delay. Tests featured conjoint recognition judgments, in which participants were tasked with discriminating intact pairs from highly similar foils, less similar foils, and completely dissimilar foils. In both experiments, the 24-hour delay resulted in deficits in specific memory for face-scene pairs, as measured using multinomial-processing-tree analyses. In Experiment 1, gist memory was not affected by the 24-hour delay, but when associative memory was strengthened through pair repetition (Experiment 2), deficits in gist memory following a 24-hour delay were observed. Results suggest that specific representations of associations in episodic memory, and under some conditions gist representations, as well, are susceptible to forgetting across time.
Affiliation(s)
- Nathaniel R Greene
- Department of Psychological Sciences, University of Missouri, 9J McAlester Hall, Columbia, MO, 65211, USA
- Moshe Naveh-Benjamin
- Department of Psychological Sciences, University of Missouri, 106 McAlester Hall, Columbia, MO, 65211, USA
36
Parsons O, Baron-Cohen S. Extraction and generalisation of category-level information during visual statistical learning in autistic people. PLoS One 2023; 18:e0286018. [PMID: 37267333] [DOI: 10.1371/journal.pone.0286018]
Abstract
BACKGROUND We examined whether information extracted during a visual statistical learning task could be generalised from specific exemplars to semantically similar ones. We then looked at whether performance during a visual statistical learning task differed between autistic and non-autistic people, and specifically examined whether differences in performance between groups occurred when sequential information was presented at a semantic level. We did this by assessing recall performance using a two-alternative forced choice paradigm after presenting participants with a sequence of naturalistic scene images. METHODS 125 adult participants (61 participants with an autism diagnosis and 64 non-autistic controls) were presented with a fast serial presentation sequence of images and given a cover task to avoid attention being explicitly drawn to patterns in the underlying sequences. This was followed by a two-alternative forced choice task to assess participants' implicit recall. Participants were presented with 1 of 3 unique versions of the task, in which the presentation and assessment of statistical regularities was done at either a low feature-based level or a high semantic-based level. RESULTS Participants were able to generalise statistical information from specific exemplars to semantically similar ones. There was an overall significant reduction in visual statistical learning in the autistic group, but we were unable to determine whether group differences occurred specifically in conditions where the learning of semantic information was required. CONCLUSIONS These results provide evidence that participants are able to extract statistical information that is presented at the level of specific exemplars and generalise it to semantically similar contexts. We also showed a modest but statistically significant reduction in recall performance in the autistic participants relative to the non-autistic participants.
Affiliation(s)
- Owen Parsons
- Autism Research Centre, Department of Psychiatry, University of Cambridge, Cambridge, United Kingdom
- Simon Baron-Cohen
- Autism Research Centre, Department of Psychiatry, University of Cambridge, Cambridge, United Kingdom
37
Danieli K, Guyon A, Bethus I. Episodic Memory formation: A review of complex Hippocampus input pathways. Prog Neuropsychopharmacol Biol Psychiatry 2023; 126:110757. [PMID: 37086812] [DOI: 10.1016/j.pnpbp.2023.110757]
Abstract
Memories of everyday experiences involve the encoding of a rich and dynamic representation of present objects and their contextual features. Traditionally, the resulting mnemonic trace is referred to as Episodic Memory, i.e. the "what", "where" and "when" of a lived episode. The journey for such memory trace encoding begins with the perceptual data of an experienced episode handled in sensory brain regions. The information is then streamed to cortical areas located in the ventral Medial Temporal Lobe, which produces multi-modal representations concerning either the objects (in the Perirhinal cortex) or the spatial and contextual features (in the parahippocampal region) of the episode. Then, this high-level data is gated through the Entorhinal Cortex and forwarded to the Hippocampal Formation, where all the pieces get bound together. Eventually, the resulting encoded neural pattern is relayed back to the Neocortex for a stable consolidation. This review will detail these different stages and provide a systematic overview of the major cortical streams toward the Hippocampus relevant for Episodic Memory encoding.
Affiliation(s)
- Alice Guyon
- Université Côte d'Azur, Neuromod Institute, France; Université Côte d'Azur, CNRS UMR 7275, IPMC, Valbonne, France
- Ingrid Bethus
- Université Côte d'Azur, Neuromod Institute, France; Université Côte d'Azur, CNRS UMR 7275, IPMC, Valbonne, France
38
Xu Z, Hu J, Wang Y. Bilateral eye movements disrupt the involuntary perceptual representation of trauma-related memories. Behav Res Ther 2023; 165:104311. [PMID: 37037182] [DOI: 10.1016/j.brat.2023.104311]
Abstract
Bilateral eye movement (EM) is a critical component in eye movement desensitization and reprocessing (EMDR), an effective treatment for post-traumatic stress disorder. However, the role of bilateral EM in alleviating trauma-related symptoms is unclear. Here we hypothesize that bilateral EM selectively disrupts the perceptual representation of traumatic memories. We used the trauma film paradigm as an analog for trauma experience. Nonclinical participants viewed trauma films followed by a bilateral EM intervention or a static Fixation period as a control. Perceptual and semantic memories for the film were assessed with different measures. Results showed a significant decrease in perceptual memory recognition shortly after the EM intervention and subsequently in the frequency and vividness of film-related memory intrusions across one week, relative to the Fixation condition. The EM intervention did not affect the explicit recognition of semantic memories, suggesting a dissociation between perceptual and semantic memory disruption. Furthermore, the EM intervention effectively reduced psychophysiological affective responses, including the skin conductance response and pupil size, to film scenes and subjective affective ratings of film-related intrusions. Together, bilateral EMs effectively reduce the perceptual representation and affective response of trauma-related memories. Further theoretical developments are needed to elucidate the mechanism of bilateral EMs in trauma treatment.
Affiliation(s)
- Zhenjie Xu
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, 310028, Zhejiang, China
- Jie Hu
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, 310028, Zhejiang, China
- Yingying Wang
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, 310028, Zhejiang, China
39
Cheng A, Chen Z, Dilks DD. A stimulus-driven approach reveals vertical luminance gradient as a stimulus feature that drives human cortical scene selectivity. Neuroimage 2023; 269:119935. [PMID: 36764369] [PMCID: PMC10044493] [DOI: 10.1016/j.neuroimage.2023.119935]
Abstract
Human neuroimaging studies have revealed a dedicated cortical system for visual scene processing. But what is a "scene"? Here, we use a stimulus-driven approach to identify a stimulus feature that selectively drives cortical scene processing. Specifically, using fMRI data from BOLD5000, we examined the images that elicited the greatest response in the cortical scene processing system, and found that there is a common "vertical luminance gradient" (VLG), with the top half of a scene image brighter than the bottom half; moreover, across the entire set of images, VLG systematically increases with the neural response in the scene-selective regions (Study 1). Thus, we hypothesized that VLG is a stimulus feature that selectively engages cortical scene processing, and directly tested the role of VLG in driving cortical scene selectivity using tightly controlled VLG stimuli (Study 2). Consistent with our hypothesis, we found that the scene-selective cortical regions-but not an object-selective region or early visual cortex-responded significantly more to images of VLG over control stimuli with minimal VLG. Interestingly, such selectivity was also found for images with an "inverted" VLG, resembling the luminance gradient in night scenes. Finally, we also tested the behavioral relevance of VLG for visual scene recognition (Study 3); we found that participants even categorized tightly controlled stimuli of both upright and inverted VLG to be a place more than an object, indicating that VLG is also used for behavioral scene recognition. Taken together, these results reveal that VLG is a stimulus feature that selectively engages cortical scene processing, and provide evidence for a recent proposal that visual scenes can be characterized by a set of common and unique visual features.
Affiliation(s)
- Annie Cheng: Department of Psychology, Emory University, Atlanta, GA, USA; Department of Psychiatry, Yale School of Medicine, New Haven, CT, USA
- Zirui Chen: Department of Psychology, Emory University, Atlanta, GA, USA; Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA
- Daniel D Dilks: Department of Psychology, Emory University, Atlanta, GA, USA

40
Guttesen AÁV, Gaskell MG, Madden EV, Appleby G, Cross ZR, Cairney SA. Sleep loss disrupts the neural signature of successful learning. Cereb Cortex 2023; 33:1610-1625. PMID: 35470400; PMCID: PMC9977378; DOI: 10.1093/cercor/bhac159.
Abstract
Sleep supports memory consolidation as well as next-day learning. The influential "Active Systems" account of offline consolidation suggests that sleep-associated memory processing paves the way for new learning, but empirical evidence in support of this idea is scarce. Using a within-subjects (n = 30), crossover design, we assessed behavioral and electrophysiological indices of episodic encoding after a night of sleep or total sleep deprivation in healthy adults (aged 18-25 years) and investigated whether behavioral performance was predicted by the overnight consolidation of episodic associations from the previous day. Sleep supported memory consolidation and next-day learning as compared to sleep deprivation. However, the magnitude of this sleep-associated consolidation benefit did not significantly predict the ability to form novel memories after sleep. Interestingly, sleep deprivation prompted a qualitative change in the neural signature of encoding: Whereas 12-20 Hz beta desynchronization-an established marker of successful encoding-was observed after sleep, sleep deprivation disrupted beta desynchrony during successful learning. Taken together, these findings suggest that effective learning depends on sleep but not necessarily on sleep-associated consolidation.
Affiliation(s)
- Anna á V Guttesen: Department of Psychology, University of York, Heslington, York, YO10 5DD, UK
- M Gareth Gaskell: Department of Psychology, University of York, Heslington, York, YO10 5DD, UK; York Biomedical Research Institute, University of York, Heslington, York, YO10 5DD, UK
- Emily V Madden: Department of Psychology, University of York, Heslington, York, YO10 5DD, UK
- Gabrielle Appleby: Department of Psychology, University of York, Heslington, York, YO10 5DD, UK
- Zachariah R Cross: Cognitive Neuroscience Laboratory, Australian Research Centre for Interactive and Virtual Environments, Mawson Lakes Campus, Mawson Lakes, South Australia 5095, Australia
- Scott A Cairney: Department of Psychology, University of York, Heslington, York, YO10 5DD, UK; York Biomedical Research Institute, University of York, Heslington, York, YO10 5DD, UK

41
Wolfe JM, Wick FA, Mishra M, DeGutis J, Lyu W. Spatial and temporal massive memory in humans. Curr Biol 2023; 33:405-410.e4. PMID: 36693302; DOI: 10.1016/j.cub.2022.12.040.
Abstract
It is well known that humans have a massive memory for pictures and scenes [1,2,3,4]. They show an ability to encode thousands of images with only a few seconds of exposure to each. In addition to this massive memory for "what" observers have seen, three experiments reported here show that observers have a "spatial massive memory" (SMM) for "where" stimuli have been seen and a "temporal massive memory" (TMM) for "when" stimuli have been seen. The positions in time and space for at least dozens of items can be reported with good, if not perfect, accuracy. Previous work has suggested that there might be good memory for stimulus location [5,6], but there do not seem to have been concerted efforts to measure the extent of this memory. Moreover, in our method, observers recall where items were located rather than merely recognizing the correct location. This is interesting because massive memory is sometimes thought to be limited to recognition tasks based on a sense of familiarity.
Affiliation(s)
- Jeremy M Wolfe: Visual Attention Lab, Department of Surgery, Brigham & Women's Hospital, 900 Commonwealth Avenue, Boston, MA 02115, USA; Departments of Ophthalmology & Radiology, Harvard Medical School, 25 Shattuck Street, Boston, MA 02115, USA; Department of Psychological & Brain Sciences, Boston University, Boston, MA, USA
- Maruti Mishra: Department of Psychology, University of Richmond, Richmond Hall, 114 UR Drive, Richmond, VA 23173, USA
- Joseph DeGutis: Boston Attention and Learning Laboratory, VA Boston Healthcare System, 150 S. Huntington Avenue, Boston, MA 02130, USA; Department of Psychiatry, Harvard Medical School, 940 Belmont Street, Brockton, MA 02301, USA
- Wanyi Lyu: Department of Biology, Centre for Vision Research, Vision Science to Application, York University, 4700 Keele Street, Toronto, ON M3J 1P3, Canada

42
Long-term memory representations for audio-visual scenes. Mem Cognit 2023; 51:349-370. PMID: 36100821; PMCID: PMC9950240; DOI: 10.3758/s13421-022-01355-6.
Abstract
In this study, we investigated the nature of long-term memory representations for naturalistic audio-visual scenes. Whereas previous research has shown that audio-visual scenes are recognized more accurately than their unimodal counterparts, it remains unclear whether this benefit stems from audio-visually integrated long-term memory representations or a summation of independent retrieval cues. We tested two predictions for audio-visually integrated memory representations. First, we used a modeling approach to test whether recognition performance for audio-visual scenes is more accurate than would be expected from independent retrieval cues. This analysis shows that audio-visual integration is not necessary to explain the benefit of audio-visual scenes relative to purely auditory or purely visual scenes. Second, we report a series of experiments investigating the occurrence of study-test congruency effects for unimodal and audio-visual scenes. Most importantly, visually encoded information was immune to additional auditory information presented during testing, whereas auditory encoded information was susceptible to additional visual information presented during testing. This renders a true integration of visual and auditory information in long-term memory representations unlikely. In sum, our results instead provide evidence for visual dominance in long-term memory. Whereas associative auditory information is capable of enhancing memory performance, the long-term memory representations appear to be primarily visual.
43
Yates TS, Ellis CT, Turk‐Browne NB. Face processing in the infant brain after pandemic lockdown. Dev Psychobiol 2023; 65:e22346. PMID: 36567649; PMCID: PMC9877889; DOI: 10.1002/dev.22346.
Abstract
The role of visual experience in the development of face processing has long been debated. We present a new angle on this question through a serendipitous study that cannot easily be repeated. Infants viewed short blocks of faces during fMRI in a repetition suppression task. The same identity was presented multiple times in half of the blocks (repeat condition) and different identities were presented once each in the other half (novel condition). In adults, the fusiform face area (FFA) tends to show greater neural activity for novel versus repeat blocks in such designs, suggesting that it can distinguish same versus different face identities. As part of an ongoing study, we collected data before the COVID-19 pandemic and after an initial local lockdown was lifted. The resulting sample of 12 infants (9-24 months) was divided equally into pre- and post-lockdown groups with matching ages and data quantity/quality. The groups had strikingly different FFA responses: pre-lockdown infants showed repetition suppression (novel > repeat), whereas post-lockdown infants showed the opposite (repeat > novel), often referred to as repetition enhancement. These findings provide speculative evidence that altered visual experience during the lockdown, or other correlated environmental changes, may have affected face processing in the infant brain.
Affiliation(s)
- Cameron T. Ellis: Department of Psychology, Stanford University, Stanford, California, USA
- Nicholas B. Turk‐Browne: Department of Psychology, Yale University, New Haven, Connecticut, USA; Wu Tsai Institute, Yale University, New Haven, Connecticut, USA

44
Goujon A, Mathy F, Thorpe S. The fate of visual long term memories for images across weeks in adults and children. Sci Rep 2022; 12:21763. PMID: 36526824; PMCID: PMC9758234; DOI: 10.1038/s41598-022-26002-7.
Abstract
What is the content and the format of visual memories in long-term memory (LTM)? Is it similar in adults and children? To address these issues, we investigated, in both adults and 9-year-old children, how visual LTM is affected over time and whether visual versus semantic features are affected differentially. In a learning phase, participants were exposed to hundreds of meaningless and meaningful images presented once or twice for either 120 ms or 1920 ms. Memory was assessed using a recognition task either immediately after learning or after a delay of three or six weeks. The results suggest that multiple and extended exposures are crucial for retaining an image for several weeks. Although a benefit was observed in the meaningful condition when memory was assessed immediately after learning, this benefit tended to disappear over weeks, especially when the images were presented twice for 1920 ms. This pattern was observed for both adults and children. Together, the results call into question the dominant models of LTM for images: although semantic information enhances the encoding and maintenance of images in LTM when memory is assessed immediately, it appears not to be critical for LTM over weeks.
Affiliation(s)
- Annabelle Goujon: Laboratoire de Recherches Intégratives en Neurosciences et Psychologie Cognitive UR 481, Université de Franche-Comté, 19 rue Ambroise Paré, 25030 Besançon Cedex, France
- Fabien Mathy: Laboratory BCL, CNRS UMR 7320 & Université Côte d'Azur, Nice, France
- Simon Thorpe: CerCo-CNRS & Université de Toulouse 3, Toulouse, France

45
Luo T, Tian M. Chunking in Visual Working Memory: Are Visual Features of Real-World Objects Stored in Chunks? Percept Mot Skills 2022; 129:1641-1657. PMID: 35968723; DOI: 10.1177/00315125221121228.
Abstract
Are visual features of real-world objects stored as bound units? Previous research has shown that simple visual features (e.g., colored squares or geometric shapes) can be effectively bound together when forming predictable pairs in memory tasks. Through a "memory compression" process, observers can take advantage of these features to compress them into a chunk. However, a recent study found that visual features in real-world objects are stored independently. In the present study, we explored this issue by using drawings of fruits as memory stimuli, presenting four pictures of fruit in separate test trials in which we required observers to remember eight total features (i.e., four colors and four shapes). In the congruent trials, the color of the fruit matched its natural appearance (e.g., a red apple), while in incongruent trials, the color of the fruit mismatched its natural appearance (e.g., a red banana). We paired the shape of the fruits randomly with a color (without replacement). According to chunking theory, if visual features of real-world objects are stored in a chunk, the highest memory capacity should be accompanied by the longest response time in congruent trials due to an extra decoding process required from the chunk. We did find that participants had the highest memory capacity in the congruent condition, but their response times in the congruent condition were significantly faster than in the incongruent condition. Thus, observers did not undergo a decoding process in the congruent condition, and we concluded that visual features in real-world objects are not stored in a chunk.
Affiliation(s)
- Tianrui Luo: Department of Psychology, The Chinese University of Hong Kong, Hong Kong, PR China
- Mi Tian: School of Education Science, Nanjing Normal University, Nanjing, PR China

46
Long-term memory and working memory compete and cooperate to guide attention. Atten Percept Psychophys 2022. PMID: 36303020; DOI: 10.3758/s13414-022-02593-1.
Abstract
Multiple types of memory guide attention: Both long-term memory (LTM) and working memory (WM) effectively guide visual search. Furthermore, both types of memories can capture attention automatically, even when detrimental to performance. It is less clear, however, how LTM and WM cooperate or compete to guide attention in the same task. In a series of behavioral experiments, we show that LTM and WM reliably cooperate to guide attention: Visual search is faster when both memories cue attention to the same spatial location (relative to when only one memory can guide attention). LTM and WM competed to guide attention in more limited circumstances: Competition only occurred when these memories were in different dimensions - particularly when participants searched for a shape and held an accessory color in mind. Finally, we found no evidence for asymmetry in either cooperation or competition: There was no evidence that WM helped (or hindered) LTM-guided search more than the other way around. This lack of asymmetry was found despite differences in LTM-guided and WM-guided search overall, and differences in how two LTMs and two WMs compete or cooperate with each other to guide attention. This work suggests that, even if only one memory is currently task-relevant, WM and LTM can cooperate to guide attention; they can also compete when distracting features are salient enough. This work elucidates interactions between WM and LTM during attentional guidance, adding to the literature on costs and benefits to attention from multiple active memories.
47
Massive visual long-term memory is largely dependent on meaning. Psychon Bull Rev 2022; 30:666-675. PMID: 36221043; DOI: 10.3758/s13423-022-02193-y.
Abstract
Previous research demonstrated a massive capacity of visual long-term memory (VLTM) for meaningful images. However, the capacity and limits of a "pure" VLTM that is independent of conceptual information still need to be determined. In the encoding phase of three experiments, participants viewed hundreds of images depicting real-world objects, along with visually similar images that were stripped of their semantic meaning. VLTM was evaluated using a four-alternative-forced-choice test including old and new images and their counterpart mirror transformations. The results revealed superior memory for meaningful than for meaningless stimuli and importantly, there was no hint of a massive VLTM for the meaningless items. Furthermore, when examining memory recognition of visual properties per-se (i.e., original/mirror state), memory was overall poor, and practically negligible for the meaningless items. Taken together, our findings suggest that meaning is critical for massive VLTM and for the ability to store visual properties.
48
Schultz H, Yoo J, Meshi D, Heekeren HR. Category-specific memory encoding in the medial temporal lobe and beyond: the role of reward. Learn Mem 2022; 29:379-389. PMID: 36180131; PMCID: PMC9536755; DOI: 10.1101/lm.053558.121.
Abstract
The medial temporal lobe (MTL), including the hippocampus (HC), perirhinal cortex (PRC), and parahippocampal cortex (PHC), is central to memory formation. Reward enhances memory through interplay between the HC and substantia nigra/ventral tegmental area (SNVTA). While the SNVTA also innervates the MTL cortex and amygdala (AMY), their role in reward-enhanced memory is unclear. Prior research suggests category specificity in the MTL cortex, with the PRC and PHC processing object and scene memory, respectively. It is unknown, however, whether reward modulates category-specific memory processes. Furthermore, no study has demonstrated clear category specificity in the MTL for encoding processes contributing to subsequent recognition memory. To address these questions, we had 39 healthy volunteers (27 for all memory-based analyses) undergo functional magnetic resonance imaging while performing an incidental encoding task pairing objects or scenes with high or low reward, followed by a next-day recognition test. Behaviorally, high reward preferably enhanced object memory. Neural activity in the PRC and PHC reflected successful encoding of objects and scenes, respectively. Importantly, AMY encoding effects were selective for high-reward objects, with a similar pattern in the PRC. The SNVTA and HC showed no clear evidence of successful encoding. This behavioral and neural asymmetry may be conveyed through an anterior-temporal memory system, including the AMY and PRC, potentially in interplay with the ventromedial prefrontal cortex.
Affiliation(s)
- Heidrun Schultz: Department of Education and Psychology, Freie Universität Berlin, 14195 Berlin, Germany; Center for Cognitive Neuroscience Berlin, Freie Universität Berlin, 14195 Berlin, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, 04103 Leipzig, Germany
- Jungsun Yoo: Department of Education and Psychology, Freie Universität Berlin, 14195 Berlin, Germany; Center for Cognitive Neuroscience Berlin, Freie Universität Berlin, 14195 Berlin, Germany; Department of Cognitive Sciences, University of California at Irvine, Irvine, California 92697, USA
- Dar Meshi: Department of Education and Psychology, Freie Universität Berlin, 14195 Berlin, Germany; Center for Cognitive Neuroscience Berlin, Freie Universität Berlin, 14195 Berlin, Germany; Department of Advertising and Public Relations, Michigan State University, East Lansing, Michigan 48824, USA
- Hauke R Heekeren: Department of Education and Psychology, Freie Universität Berlin, 14195 Berlin, Germany; Center for Cognitive Neuroscience Berlin, Freie Universität Berlin, 14195 Berlin, Germany; Executive University Board, Universität Hamburg, 20148 Hamburg, Germany

49
Judgments of learning reactively facilitate visual memory by enhancing learning engagement. Psychon Bull Rev 2022; 30:676-687. PMID: 36109421; DOI: 10.3758/s13423-022-02174-1.
Abstract
Recent studies have found that making judgments of learning (JOLs) for verbal materials changes memory itself, a form of reactivity effect on memory. The current study explores the reactivity effect on visual (image) memory and tests the potential role of enhanced learning engagement in this effect. Experiment 1 employed object image pairs as stimuli and observed a positive reactivity effect on memory for visual details. Experiment 2 conceptually replicated this positive reactivity effect using pairs of scene images. Experiment 3 introduced mind wandering (MW) probes to measure participants' attentional state (learning engagement) and observed that making JOLs significantly reduced MW. More importantly, reduced MW mediated the reactivity effect. Lastly, Experiment 4 found that a manipulation that heightened learning motivation decreased the reactivity effect. Overall, the current study provides the first demonstration of the reactivity effect on visual memory, as well as support for the enhanced learning engagement explanation. Practical implications are discussed.
50
Baumann L, Valuch C. Priming of natural scene categorization during continuous flash suppression. Conscious Cogn 2022; 104:103387. PMID: 36007344; DOI: 10.1016/j.concog.2022.103387.
Abstract
Continuous Flash Suppression (CFS) reduces conscious awareness of stimuli. Whether stimuli suppressed by CFS are processed at categorical or semantic levels is still debated. Here, we approached this question using a large set of indoor and outdoor scene photographs in a priming paradigm. Perceptually suppressed primes were followed by visible targets. Participants rapidly reported whether the targets showed an indoor or an outdoor scene. Responses were faster (and fast responses more accurate) when primes and targets came from a congruent superordinate category (e.g., both were outdoor scenes). During CFS, priming effects were relatively small (up to 10 ms) and modulated by prime visibility and stimulus onset asynchrony (SOA) of prime and target. Without CFS, the stimuli elicited consistent and more robust priming effects (about 24 ms). Our results imply that scene category is processed during CFS, although some residual prime visibility is likely necessary for significant priming effects to occur.
Affiliation(s)
- Leonie Baumann: Department of Experimental Psychology, University of Goettingen, Germany
- Christian Valuch: Department of Experimental Psychology, University of Goettingen, Germany; Department of Cognition, Emotion, and Methods in Psychology, University of Vienna, Austria