1. Rolls ET, Treves A. A theory of hippocampal function: New developments. Prog Neurobiol 2024; 238:102636. PMID: 38834132. DOI: 10.1016/j.pneurobio.2024.102636.
Abstract
We develop further here the only quantitative theory of the storage of information in the hippocampal episodic memory system and its recall back to the neocortex. The theory is upgraded to account for a revolution in understanding of spatial representations in the primate, including human, hippocampus, that go beyond the place where the individual is located, to the location being viewed in a scene. This is fundamental to much primate episodic memory and navigation: functions supported in humans by pathways that build 'where' spatial view representations by feature combinations in a ventromedial visual cortical stream, separate from those for 'what' object and face information to the inferior temporal visual cortex, and for reward information from the orbitofrontal cortex. Key new computational developments include the capacity of the CA3 attractor network for storing whole charts of space; how the correlations inherent in self-organizing continuous spatial representations impact the storage capacity; how the CA3 network can combine continuous spatial and discrete object and reward representations; the roles of the rewards that reach the hippocampus in the later consolidation into long-term memory in part via cholinergic pathways from the orbitofrontal cortex; and new ways of analysing neocortical information storage using Potts networks.
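The CA3 computation summarized above rests on autoassociative attractor dynamics: patterns stored with a local Hebbian rule are recalled from partial cues. As a rough illustration only (a textbook Hopfield-style sketch under standard assumptions, with all names invented here; it is not the paper's quantitative capacity model), pattern completion can be simulated as:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 200, 5                      # neurons, stored patterns

# Store M random binary (+1/-1) patterns with a Hebbian outer-product rule.
patterns = rng.choice([-1, 1], size=(M, N))
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)           # no self-connections

def recall(cue, steps=10):
    """Iterate the attractor dynamics from a (possibly partial) cue."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

# Degrade a stored pattern (flip 20% of units), then complete it.
target = patterns[0]
cue = target.copy()
flip = rng.choice(N, size=N // 5, replace=False)
cue[flip] *= -1
out = recall(cue)
print((out == target).mean())      # overlap with the stored pattern
```

With only 5 patterns across 200 units the network operates far below capacity, so the degraded cue settles back onto the stored pattern; the paper's analysis concerns how many such continuous charts and discrete memories can coexist in one network.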
Affiliation(s)
- Edmund T Rolls
- Oxford Centre for Computational Neuroscience, Oxford, UK; Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK.
2. Rolls ET. The memory systems of the human brain and generative artificial intelligence. Heliyon 2024; 10:e31965. PMID: 38841455; PMCID: PMC11152951. DOI: 10.1016/j.heliyon.2024.e31965.
Abstract
Generative Artificial Intelligence foundation models (for example Generative Pre-trained Transformer - GPT - models) can generate the next token given a sequence of tokens. How can this 'generative AI' be compared with the 'real' intelligence of the human brain, when for example a human generates a whole memory in response to an incomplete retrieval cue, and then generates further prospective thoughts? Here these two types of generative intelligence, artificial in machines and real in the human brain are compared, and it is shown how when whole memories are generated by hippocampal recall in response to an incomplete retrieval cue, what the human brain computes, and how it computes it, are very different from generative AI. Key differences are the use of local associative learning rules in the hippocampal memory system, and of non-local backpropagation of error learning in AI. Indeed, it is argued that the whole operation of the human brain is performed computationally very differently to what is implemented in generative AI. Moreover, it is emphasized that the primate including human hippocampal system includes computations about spatial view and where objects and people are in scenes, whereas in rodents the emphasis is on place cells and path integration by movements between places. This comparison with generative memory and processing in the human brain has interesting implications for the further development of generative AI and for neuroscience research.
Affiliation(s)
- Edmund T. Rolls
- Oxford Centre for Computational Neuroscience, Oxford, UK
- Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK
- Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai, 200403, China
3. Saxena R, McNaughton BL. Bridging Neuroscience and AI: Environmental Enrichment as a Model for Forward Knowledge Transfer. arXiv 2024; arXiv:2405.07295v2. PMID: 38947919; PMCID: PMC11213130.
Abstract
Continual learning (CL) refers to an agent's capability to learn from a continuous stream of data and transfer knowledge without forgetting old information. One crucial aspect of CL is forward transfer, i.e., improved and faster learning on a new task by leveraging information from prior knowledge. While this ability comes naturally to biological brains, it poses a significant challenge for artificial intelligence (AI). Here, we suggest that environmental enrichment (EE) can be used as a biological model for studying forward transfer, inspiring human-like AI development. EE refers to animal studies that enhance cognitive, social, motor, and sensory stimulation and is a model for what, in humans, is referred to as 'cognitive reserve'. Enriched animals show significant improvement in learning speed and performance on new tasks, typically exhibiting forward transfer. We explore anatomical, molecular, and neuronal changes post-EE and discuss how artificial neural networks (ANNs) can be used to predict neural computation changes after enriched experiences. Finally, we provide a synergistic way of combining neuroscience and AI research that paves the path toward developing AI capable of rapid and efficient new task learning.
Affiliation(s)
- Rajat Saxena
- Department of Neurobiology and Behavior, University of California, Irvine, Irvine, CA 92697, USA
- Bruce L McNaughton
- Department of Neurobiology and Behavior, University of California, Irvine, Irvine, CA 92697, USA
- Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, AB, T1K 3M4 Canada
4. Tao Y, Schubert T, Wiley R, Stark C, Rapp B. Cortical and Subcortical Mechanisms of Orthographic Word-form Learning. J Cogn Neurosci 2024; 36:1071-1098. PMID: 38527084. DOI: 10.1162/jocn_a_02147.
Abstract
We examined the initial stages of orthographic learning in real time as literate adults learned spellings for spoken pseudowords during fMRI scanning. Participants were required to learn and store orthographic word forms because the pseudoword spellings were not uniquely predictable from sound to letter mappings. With eight learning trials per word form, we observed changes in the brain's response as learning was taking place. Accuracy was evaluated during learning, immediately after scanning, and 1 week later. We found evidence of two distinct learning systems-hippocampal and neocortical-operating during orthographic learning, consistent with the predictions of dual systems theories of learning/memory such as the complementary learning systems framework [McClelland, J. L., McNaughton, B. L., & O'Reilly, R. C. Why there are complementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory. Psychological Review, 102, 419-457, 1995]. The bilateral hippocampus and the visual word form area (VWFA) showed significant BOLD response changes over learning, with the former exhibiting a rising pattern and the latter exhibiting a falling pattern. Moreover, greater BOLD signal increase in the hippocampus was associated with better postscan recall. In addition, we identified two distinct bilateral brain networks that mirrored the rising and falling patterns of the hippocampus and VWFA. Functional connectivity analysis revealed that regions within each network were internally synchronized. These novel findings highlight, for the first time, the relevance of multiple learning systems in orthographic learning and provide a paradigm that can be used to address critical gaps in our understanding of the neural bases of orthographic learning in general and orthographic word-form learning specifically.
5. Reggev N. Motivation and prediction-driven processing of social memoranda. Neurosci Biobehav Rev 2024; 159:105613. PMID: 38437974. DOI: 10.1016/j.neubiorev.2024.105613.
Abstract
Social semantic memory guides many aspects of behavior. Individuals rely on acquired and inferred knowledge about personal characteristics and group membership to predict the behavior and character of social targets. These predictions then determine the expectations from, the behavior in, and the interpretations of social interactions. According to predictive processing accounts, mnemonic and attentional mechanisms should enhance the processing of prediction-violating events. However, empirical findings suggest that prediction-consistent social events are often better remembered. This mini-review integrates recent evidence from social and non-social memory research to highlight the role of motivation in explaining these discrepancies. A particular emphasis is given to the continuous nature of prediction-(in)consistency, the epistemic tendency of perceivers to maintain or update their knowledge, and the dynamic influences of motivation on multiple steps in prediction-driven social memory. The suggested framework provides a coherent outlook of existing work and offers promising future directions to better understand the ebb and flow of social memoranda.
Affiliation(s)
- Niv Reggev
- Department of Psychology, Ben-Gurion University of the Negev, Beer Sheva, Israel; School of Brain and Cognitive Sciences, Ben-Gurion University of the Negev, Beer Sheva, Israel.
6. Krasne FB, Fanselow MS. Remote memory in a Bayesian model of context fear conditioning (BaconREM). Front Behav Neurosci 2024; 17:1295969. PMID: 38515786; PMCID: PMC10955142. DOI: 10.3389/fnbeh.2023.1295969.
Abstract
Here, we propose a model of remote memory (BaconREM), which is an extension of a previously published Bayesian model of context fear learning (BACON) that accounts for many aspects of recently learned context fear. BaconREM simulates most known phenomenology of remote context fear as studied in rodents and makes new predictions. In particular, it predicts the well-known observation that fear that was conditioned to a recently encoded context becomes hippocampus-independent and shows much-enhanced generalization ("hyper-generalization") when systems consolidation occurs (i.e., when memory becomes remote). However, the model also predicts that there should be circumstances under which the generalizability of remote fear may not increase or even decrease. It also predicts the established finding that a "reminder" exposure to a feared context can abolish hyper-generalization while at the same time making remote fear again hippocampus-dependent. This observation has in the past been taken to suggest that reminders facilitate access to detail memory that remains permanently in the hippocampus even after systems consolidation is complete. However, the present model simulates this result even though it totally moves all the contextual memory that it retains to the neo-cortex when context fear becomes remote.
Affiliation(s)
- Franklin B. Krasne
- Department of Psychology, University of California, Los Angeles, Los Angeles, CA, United States
- Brain Research Institute, University of California, Los Angeles, Los Angeles, CA, United States
- Michael S. Fanselow
- Department of Psychology, University of California, Los Angeles, Los Angeles, CA, United States
- Brain Research Institute, University of California, Los Angeles, Los Angeles, CA, United States
- Department of Psychiatry and Biobehavioral Sciences, University of California, Los Angeles, Los Angeles, CA, United States
7. Yonelinas AP. The role of recollection and familiarity in visual working memory: A mixture of threshold and signal detection processes. Psychol Rev 2024; 131:321-348. PMID: 37326544; PMCID: PMC11089539. DOI: 10.1037/rev0000432.
Abstract
Whether working memory reflects a thresholded recollection process whereby only a limited number of items are maintained in memory, or a signal detection process in which each studied item is increased in familiarity strength, is a topic of considerable debate. A review of visual working memory studies that have examined receiver operating characteristics (ROCs) across a broad set of materials and test conditions indicates that both signal detection and threshold processes contribute to working memory. In addition, the role that these two processes play varies systematically across conditions, such that a threshold process plays a particularly critical role when binary old/new judgments are required, when changes are relatively discrete, and when the hippocampus does not contribute to performance. In contrast, a signal detection process plays a greater role when confidence judgments are required, when the materials or the changes are global in nature, and when the hippocampus contributes to performance. In addition, the ROC results indicate that in standard single-probe tests of working memory, items that are maintained in an active recollected state support both recall-to-accept and recall-to-reject responses; whereas in complex-probe tests, recollection preferentially supports recall-to-reject; and in item-recognition tests it preferentially supports recall-to-accept. Moreover, there is growing evidence that these threshold and strength-based processes are related to distinct states of conscious awareness whereby they support perceiving- and sensing-based responses, respectively.
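The threshold-plus-signal-detection mixture described above has a standard quantitative form, the dual-process signal detection (DPSD) model, in which the hit rate is R + (1 − R)·Φ(d′ − c) and the false-alarm rate is Φ(−c). The sketch below is a generic illustration of that model family (the parameter values are invented), not the paper's fitted analysis:

```python
import numpy as np
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def dpsd_roc(recollection, d_prime, criteria):
    """ROC points under a dual-process model: a threshold recollection
    component (probability R) mixed with an equal-variance signal
    detection component (sensitivity d')."""
    points = []
    for c in criteria:
        hit = recollection + (1.0 - recollection) * phi(d_prime - c)
        fa = phi(-c)
        points.append((fa, hit))
    return points

criteria = np.linspace(-1.5, 1.5, 7)
roc = dpsd_roc(recollection=0.3, d_prime=1.0, criteria=criteria)
for fa, hit in roc:
    print(f"FA={fa:.2f}  Hit={hit:.2f}")
```

The threshold (recollection) component appears as a nonzero ROC intercept: as the criterion becomes very strict, the false-alarm rate approaches 0 while the hit rate approaches R, which is the ROC signature the review uses to separate the two processes.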
8. Gattas S, Larson MS, Mnatsakanyan L, Sen-Gupta I, Vadera S, Swindlehurst AL, Rapp PE, Lin JJ, Yassa MA. Theta mediated dynamics of human hippocampal-neocortical learning systems in memory formation and retrieval. Nat Commun 2023; 14:8505. PMID: 38129375; PMCID: PMC10739909. DOI: 10.1038/s41467-023-44011-6.
Abstract
Episodic memory arises as a function of dynamic interactions between the hippocampus and the neocortex, yet the mechanisms have remained elusive. Here, using human intracranial recordings during a mnemonic discrimination task, we report that 4-5 Hz (theta) power is differentially recruited during discrimination vs. overgeneralization, and its phase supports hippocampal-neocortical interactions when memories are being formed and correctly retrieved. Interactions were largely bidirectional, with small but significant net directional biases: a hippocampus-to-neocortex bias during acquisition of new information that was subsequently correctly discriminated, and a neocortex-to-hippocampus bias during accurate discrimination of new stimuli from similar previously learned stimuli. The 4-5 Hz rhythm may facilitate the initial stages of information acquisition by neocortex during learning and the recall of stored information from cortex during retrieval. Future work should further probe these dynamics across different types of tasks and stimuli, and computational models may need to be expanded accordingly to accommodate these findings.
Affiliation(s)
- Sandra Gattas
- Department of Electrical Engineering and Computer Science, School of Engineering, University of California, Irvine, CA, 92617, USA
- Center for the Neurobiology of Learning and Memory, University of California, Irvine, CA, 92697, USA
- Myra Sarai Larson
- Center for the Neurobiology of Learning and Memory, University of California, Irvine, CA, 92697, USA
- Department of Neurobiology and Behavior, School of Biological Sciences, University of California, Irvine, CA, 92697, USA
- Lilit Mnatsakanyan
- Department of Neurology, School of Medicine, University of California, Irvine, CA, 92697, USA
- Indranil Sen-Gupta
- Department of Neurology, School of Medicine, University of California, Irvine, CA, 92697, USA
- Sumeet Vadera
- Department of Neurological Surgery, School of Medicine, University of California, Irvine, CA, 92697, USA
- A Lee Swindlehurst
- Department of Electrical Engineering and Computer Science, School of Engineering, University of California, Irvine, CA, 92617, USA
- Paul E Rapp
- Department of Military & Emergency Medicine, Uniformed Services University, Bethesda, MD, 20814, USA
- Jack J Lin
- Center for the Neurobiology of Learning and Memory, University of California, Irvine, CA, 92697, USA
- Department of Neurology, School of Medicine, University of California, Irvine, CA, 92697, USA
- Michael A Yassa
- Center for the Neurobiology of Learning and Memory, University of California, Irvine, CA, 92697, USA
- Department of Neurobiology and Behavior, School of Biological Sciences, University of California, Irvine, CA, 92697, USA
- Department of Neurology, School of Medicine, University of California, Irvine, CA, 92697, USA
9. John M, Wu Y. A simple illustration of interleaved learning using Kalman filter for linear least squares. Results in Applied Mathematics 2023; 20:100409. PMID: 38131008; PMCID: PMC10734634. DOI: 10.1016/j.rinam.2023.100409.
Abstract
Interleaved learning in machine learning algorithms is a biologically inspired training method with promising results. In this short note, we illustrate the interleaving mechanism via a simple statistical and optimization framework based on Kalman Filter for Linear Least Squares.
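As a sketch of the idea above, generic recursive least squares (the Kalman filter specialized to a static linear model) can be run over samples alternating between two data regimes; the variable names and the two-task setup below are invented for illustration and are not taken from the paper:

```python
import numpy as np

def rls_update(theta, P, x, y):
    """One recursive least-squares (Kalman-filter) update for y ~ x @ theta."""
    Px = P @ x
    k = Px / (1.0 + x @ Px)        # Kalman gain
    theta = theta + k * (y - x @ theta)
    P = P - np.outer(k, Px)        # posterior covariance update
    return theta, P

rng = np.random.default_rng(0)
true_theta = np.array([2.0, -1.0])
theta = np.zeros(2)
P = np.eye(2) * 100.0              # large prior uncertainty

# Two "tasks" (input regimes); present their samples interleaved rather
# than training on task A to completion and only then on task B.
xs_a = rng.normal(size=(50, 2))
xs_b = rng.normal(size=(50, 2)) + 1.0
for xa, xb in zip(xs_a, xs_b):
    for x in (xa, xb):             # interleaved presentation
        y = x @ true_theta + 0.01 * rng.normal()
        theta, P = rls_update(theta, P, x, y)

print(theta)                       # close to [2.0, -1.0]
```

Because the Kalman gain weighs each new observation against the accumulated uncertainty in P, alternating samples from the two regimes refines a single parameter estimate incrementally rather than overwriting it, which is the sense in which the filter illustrates interleaved (rather than blocked) learning.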
Affiliation(s)
- Majnu John
- Departments of Mathematics and of Psychiatry, Hofstra University, Hempstead, NY, USA
- Feinstein Institutes of Medical Research, Northwell Health System, Manhasset, NY, USA
- Yihren Wu
- Department of Mathematics, Hofstra University, Hempstead, NY, USA
10. Bein O, Gasser C, Amer T, Maril A, Davachi L. Predictions transform memories: How expected versus unexpected events are integrated or separated in memory. Neurosci Biobehav Rev 2023; 153:105368. PMID: 37619645; PMCID: PMC10591973. DOI: 10.1016/j.neubiorev.2023.105368.
Abstract
Our brains constantly generate predictions about the environment based on prior knowledge. Many of the events we experience are consistent with these predictions, while others might be inconsistent with prior knowledge and thus violate our predictions. To guide future behavior, the memory system must be able to strengthen, transform, or add to existing knowledge based on the accuracy of our predictions. We synthesize recent evidence suggesting that when an event is consistent with our predictions, it leads to neural integration between related memories, which is associated with enhanced associative memory, as well as memory biases. Prediction errors, in turn, can promote both neural integration and separation, and lead to multiple mnemonic outcomes. We review these findings and how they interact with factors such as memory reactivation, prediction error strength, and task goals, to offer insight into what determines memory for events that violate our predictions. In doing so, this review brings together recent neural and behavioral research to advance our understanding of how predictions shape memory, and why.
Affiliation(s)
- Oded Bein
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, United States
- Camille Gasser
- Department of Psychology, Columbia University, New York, NY, United States
- Tarek Amer
- Department of Psychology, University of Victoria, Victoria, Canada
- Anat Maril
- Department of Psychology, The Hebrew University of Jerusalem, Jerusalem, Israel; Department of Cognitive Science, The Hebrew University of Jerusalem, Jerusalem, Israel
- Lila Davachi
- Center for Clinical Research, The Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, United States
11. Jedlicka P, Tomko M, Robins A, Abraham WC. Contributions by metaplasticity to solving the Catastrophic Forgetting Problem (Trends in Neurosciences, 45, 656-666, 2022). Trends Neurosci 2023; 46:893-894. PMID: 37599184. DOI: 10.1016/j.tins.2023.07.008.
12. Gattas S, Larson MS, Mnatsakanyan L, Sen-Gupta I, Vadera S, Swindlehurst L, Rapp PE, Lin JJ, Yassa MA. Theta mediated dynamics of human hippocampal-neocortical learning systems in memory formation and retrieval. bioRxiv (preprint) 2023; 2023.09.20.558688. PMID: 37790541; PMCID: PMC10542525. DOI: 10.1101/2023.09.20.558688.
13. Wu X, Packard PA, García-Arch J, Bunzeck N, Fuentemilla L. Contextual incongruency triggers memory reinstatement and the disruption of neural stability. Neuroimage 2023; 273:120114. PMID: 37080120. DOI: 10.1016/j.neuroimage.2023.120114.
Abstract
Schemas, or internal representation models of the environment, are thought to be central in organising our everyday life behaviour by giving stability and predictiveness to the structure of the world. However, when an element from an unfolding event mismatches the schema-derived expectations, the coherent narrative is interrupted and an update to the current event model representation is required. Here, we asked whether the perceived incongruence of an item from an unfolding event and its impact on memory relied on the disruption of neural stability patterns preceded by the neural reactivation of the memory representations of the just-encoded event. Our study includes data from two different experiments whereby human participants (N = 33, 26 females and N = 18, 16 females, respectively) encoded images of objects preceded by trial-unique sequences of events depicting daily routine. We found that neural stability patterns gradually increased throughout the ongoing exposure to a schema-consistent episode, which was corroborated by the re-analysis of data from two other experiments, and that the brain stability pattern was interrupted when the encoding of an object of the event was incongruent with the ongoing schema. We found that the decrease in neural stability for low-congruence items was seen at ∼1000 ms from object encoding onset and that it was preceded by an enhanced N400 ERP and an increased degree of neural reactivation of the just-encoded episode. Current results offer new insights into the neural mechanisms and their temporal orchestration that are engaged during online encoding of schema-consistent episodic narratives and the detection of incongruencies.
Affiliation(s)
- Xiongbo Wu
- Cognition and Brain Plasticity Group, Bellvitge Institute for Biomedical Research, Hospitalet de Llobregat 08907, Spain; Department of Cognition, Development and Educational Psychology, University of Barcelona, Barcelona 08035, Spain; Institute of Neurosciences, University of Barcelona, Barcelona 08035, Spain
- Pau A Packard
- Multisensory Research Group, Center for Brain and Cognition, Pompeu Fabra University, Barcelona, Spain
- Josué García-Arch
- Cognition and Brain Plasticity Group, Bellvitge Institute for Biomedical Research, Hospitalet de Llobregat 08907, Spain; Department of Cognition, Development and Educational Psychology, University of Barcelona, Barcelona 08035, Spain; Institute of Neurosciences, University of Barcelona, Barcelona 08035, Spain
- Nico Bunzeck
- Department of Psychology, University of Lübeck, Lübeck 23562, Germany; Center of Brain, Behavior and Metabolism (CBBM), University of Lübeck, Lübeck 23562, Germany
- Lluís Fuentemilla
- Cognition and Brain Plasticity Group, Bellvitge Institute for Biomedical Research, Hospitalet de Llobregat 08907, Spain; Department of Cognition, Development and Educational Psychology, University of Barcelona, Barcelona 08035, Spain; Institute of Neurosciences, University of Barcelona, Barcelona 08035, Spain
14. Guo D, Yang J. Reactivation of schema representation in lateral occipital cortex supports successful memory encoding. Cereb Cortex 2022; 33:5968-5980. PMID: 36520467. DOI: 10.1093/cercor/bhac475.
Abstract
Schemas provide a scaffold onto which we can integrate new memories. Previous research has investigated the brain activity and connectivity underlying schema-related memory formation. However, how schemas are represented and reactivated in the brain, in order to enhance memory, remains unclear. To address this issue, we used an object–location spatial schema that was learned over multiple sessions, combined with similarity analyses of neural representations, to investigate the reactivation of schema representations of object–location memories when a new object–scene association is learned. In addition, we investigated how this reactivation affects subsequent memory performance under different strengths of schemas. We found that reactivation of a schema representation in the lateral occipital cortex (LOC) during object–scene encoding affected subsequent associative memory performance only in the schema-consistent condition and increased the functional connectivity between the LOC and the parahippocampal place area. Taken together, our findings provide new insight into how schema acts as a scaffold to support the integration of novel information into existing cortical networks and suggest a neural basis for schema-induced rapid cortical learning.
Affiliation(s)
- Dingrong Guo
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behaviour and Mental Health, Peking University, 5 Yiheyuan Road, Beijing 100871, China
- Jiongjiong Yang
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behaviour and Mental Health, Peking University, 5 Yiheyuan Road, Beijing 100871, China
15. Singh D, Norman KA, Schapiro AC. A model of autonomous interactions between hippocampus and neocortex driving sleep-dependent memory consolidation. Proc Natl Acad Sci U S A 2022; 119:e2123432119. PMID: 36279437; PMCID: PMC9636926. DOI: 10.1073/pnas.2123432119.
Abstract
How do we build up our knowledge of the world over time? Many theories of memory formation and consolidation have posited that the hippocampus stores new information, then "teaches" this information to the neocortex over time, especially during sleep. But it is unclear, mechanistically, how this actually works: how are these systems able to interact during periods with virtually no environmental input to accomplish useful learning and shifts in representation? We provide a framework for thinking about this question, with neural network model simulations serving as demonstrations. The model is composed of hippocampus and neocortical areas, which replay memories and interact with one another completely autonomously during simulated sleep. Oscillations are leveraged to support error-driven learning that leads to useful changes in memory representation and behavior. The model has a non-rapid eye movement (NREM) sleep stage, where dynamics between the hippocampus and neocortex are tightly coupled, with the hippocampus helping neocortex to reinstate high-fidelity versions of new attractors, and a REM sleep stage, where neocortex is able to more freely explore existing attractors. We find that alternating between NREM and REM sleep stages, which alternately focuses the model's replay on recent and remote information, facilitates graceful continual learning. We thus provide an account of how the hippocampus and neocortex can interact without any external input during sleep to drive useful new cortical learning and to protect old knowledge as new information is integrated.
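The NREM/REM alternation described above can be caricatured as a replay scheduler. This is a minimal sketch, not the authors' model: the function names, event counts, and the 90/10 bias toward recent items in NREM versus remote items in REM are illustrative assumptions.

```python
import random

def sleep_replay_schedule(recent, remote, cycles=4, nrem_events=8, rem_events=8, seed=0):
    """Alternate NREM (hippocampally driven, recent-biased replay) with
    REM (cortically driven, remote-biased replay). All proportions are
    illustrative assumptions, not parameters from the paper."""
    rng = random.Random(seed)
    replayed = []
    for _ in range(cycles):
        # NREM: hippocampus reinstates mostly recently encoded memories
        for _ in range(nrem_events):
            pool = recent if rng.random() < 0.9 else remote
            replayed.append(("NREM", rng.choice(pool)))
        # REM: neocortex more freely explores existing (remote) attractors
        for _ in range(rem_events):
            pool = remote if rng.random() < 0.9 else recent
            replayed.append(("REM", rng.choice(pool)))
    return replayed

events = sleep_replay_schedule(recent=["A", "B"], remote=["X", "Y", "Z"])
nrem = [m for s, m in events if s == "NREM"]
rem = [m for s, m in events if s == "REM"]
```

Alternating the two stages interleaves recent and remote content across the night, which is the property the paper ties to graceful continual learning.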
Affiliation(s)
- Dhairyya Singh
- Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104
- Kenneth A. Norman
- Department of Psychology, Princeton University, Princeton, NJ 08540
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08540
- Anna C. Schapiro
- Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104

16
Contributions of memory and brain development to the bioregulation of naps and nap transitions in early childhood. Proc Natl Acad Sci U S A 2022; 119:e2123415119. [PMID: 36279436 PMCID: PMC9636905 DOI: 10.1073/pnas.2123415119] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
The transition from multiple sleep bouts each day to a single overnight sleep bout (i.e., nap transition) is a universal process in human development. Naps are important during infancy and early childhood as they enhance learning through memory consolidation. However, a normal part of development is the transition out of naps. Understanding nap transitions is essential in order to maximize early learning and promote positive long-term cognitive outcomes. Here, we propose a novel hypothesis regarding the cognitive, physiological, and neural changes that accompany nap transitions. Specifically, we posit that maturation of the hippocampal-dependent memory network results in more efficient memory storage, which reduces the buildup of homeostatic sleep pressure across the cortex (as reflected by slow-wave activity), and eventually, contributes to nap transitions. This hypothesis synthesizes evidence of bioregulatory mechanisms underlying nap transitions and sheds new light on an important window of change in development. This framework can be used to evaluate multiple untested predictions from the field of sleep science and ultimately, yield science-based guidelines and policies regarding napping in childcare and early education settings.
17
Gattas S, Collett HA, Huff AE, Creighton SD, Weber SE, Buckhalter SS, Manning SA, Ryait HS, McNaughton BL, Winters BD. A rodent obstacle course procedure controls delivery of enrichment and enhances complex cognitive functions. NPJ SCIENCE OF LEARNING 2022; 7:21. [PMID: 36057661 PMCID: PMC9440923 DOI: 10.1038/s41539-022-00134-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/21/2021] [Accepted: 06/13/2022] [Indexed: 06/15/2023]
Abstract
Enrichment in rodents affects brain structure, improves behavioral performance, and is neuroprotective. Similarly, in humans, according to the cognitive reserve concept, enriched experience is functionally protective against neuropathology. Despite this parallel, the ability to translate rodent studies to human clinical situations is limited. This limitation is likely due to the simple cognitive processes probed in rodent studies and the inability to control, with existing methods, the degree of rodent engagement with enrichment material. We overcome these two difficulties with, respectively, behavioral tasks that probe, in a fine-grained manner, aspects of higher-order cognition that deteriorate with aging and dementia, and a new enrichment protocol, the 'Obstacle Course' (OC), which enables controlled delivery of enrichment. Together, these two advancements will enable better specification (and comparisons) of the nature of impairments in animal models of complex mental disorders and the potential for remediation from various types of intervention (e.g., enrichment, drugs). We found that two months of OC enrichment produced substantial and sustained enhancements in categorization memory, perceptual object invariance, and cross-modal sensory integration in mice. We also tested mice on behavioral tasks previously shown to benefit from traditional enrichment: spontaneous object recognition, object location memory, and pairwise visual discrimination. OC enrichment improved performance relative to standard housing on all six tasks and was in most cases superior to conventional home-cage enrichment and exercise track groups.
Affiliation(s)
- Sandra Gattas
- Department of Electrical Engineering and Computer Science, University of California, Irvine, CA, USA
- Medical Scientist Training Program, University of California, Irvine, CA, USA
- Heather A Collett
- Department of Psychology and Collaborative Neuroscience Program, University of Guelph, Guelph, ON, Canada
- Andrew E Huff
- Department of Psychology and Collaborative Neuroscience Program, University of Guelph, Guelph, ON, Canada
- Samantha D Creighton
- Department of Psychology and Collaborative Neuroscience Program, University of Guelph, Guelph, ON, Canada
- Siobhon E Weber
- Department of Psychology and Collaborative Neuroscience Program, University of Guelph, Guelph, ON, Canada
- Silas A Manning
- Department of Psychology and Collaborative Neuroscience Program, University of Guelph, Guelph, ON, Canada
- Hardeep S Ryait
- Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, AB, Canada
- Bruce L McNaughton
- Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, AB, Canada
- Department of Neurobiology and Behavior, University of California, Irvine, CA, USA
- Boyer D Winters
- Department of Psychology and Collaborative Neuroscience Program, University of Guelph, Guelph, ON, Canada

18
Jedlicka P, Tomko M, Robins A, Abraham WC. Contributions by metaplasticity to solving the Catastrophic Forgetting Problem. Trends Neurosci 2022; 45:656-666. [PMID: 35798611 DOI: 10.1016/j.tins.2022.06.002] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2022] [Revised: 06/06/2022] [Accepted: 06/09/2022] [Indexed: 10/17/2022]
Abstract
Catastrophic forgetting (CF) refers to the sudden and severe loss of prior information in learning systems when acquiring new information. CF has been an Achilles heel of standard artificial neural networks (ANNs) when learning multiple tasks sequentially. The brain, by contrast, has solved this problem during evolution. Modellers now use a variety of strategies to overcome CF, many of which have parallels to cellular and circuit functions in the brain. One common strategy, based on metaplasticity phenomena, controls the future rate of change at key connections to help retain previously learned information. However, the metaplasticity properties so far used are only a subset of those existing in neurobiology. We propose that as models become more sophisticated, there could be value in drawing on a richer set of metaplasticity rules, especially when promoting continual learning in agents moving about the environment.
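The metaplasticity strategy described here, controlling the future rate of change at key connections, can be sketched as a per-synapse plasticity state that damps updates at heavily modified weights. A minimal illustration; the 1/(1 + importance) damping rule and all names are our assumptions, not a rule taken from the article.

```python
import numpy as np

def metaplastic_update(w, grad, importance, base_lr=0.1):
    """One gradient step where each weight's effective learning rate shrinks
    with its accumulated past change ('importance'), so consolidated synapses
    resist further modification. Illustrative rule, not from the article."""
    lr = base_lr / (1.0 + importance)          # per-weight plasticity state
    new_w = w - lr * grad
    new_importance = importance + np.abs(new_w - w)  # track accumulated change
    return new_w, new_importance

w = np.zeros(3)
imp = np.array([0.0, 10.0, 100.0])             # naive vs consolidated synapses
new_w, new_imp = metaplastic_update(w, grad=np.ones(3), importance=imp)
```

The same gradient moves the naive synapse far more than the consolidated ones, which is the mechanism by which metaplasticity can protect previously learned information.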
Affiliation(s)
- Peter Jedlicka
- ICAR3R - Interdisciplinary Centre for 3Rs in Animal Research, Faculty of Medicine, Justus Liebig University, Giessen, Germany; Institute of Clinical Neuroanatomy, Neuroscience Center, Goethe University Frankfurt, Frankfurt/Main, Germany; Frankfurt Institute for Advanced Studies, Frankfurt 60438, Germany
- Matus Tomko
- ICAR3R - Interdisciplinary Centre for 3Rs in Animal Research, Faculty of Medicine, Justus Liebig University, Giessen, Germany; Institute of Molecular Physiology and Genetics, Centre of Biosciences, Slovak Academy of Sciences, Bratislava, Slovakia
- Anthony Robins
- Department of Computer Science, University of Otago, Dunedin 9016, New Zealand
- Wickliffe C Abraham
- Department of Psychology, Brain Health Research Centre, University of Otago, Dunedin 9054, New Zealand

19
Replay, the default mode network and the cascaded memory systems model. Nat Rev Neurosci 2022; 23:628-640. [PMID: 35970912 DOI: 10.1038/s41583-022-00620-6] [Citation(s) in RCA: 28] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/07/2022] [Indexed: 12/25/2022]
Abstract
The spontaneous replay of patterns of activity related to past experiences and memories is a striking feature of brain activity, as is the coherent activation of sets of brain areas, particularly those comprising the default mode network (DMN), during rest. We propose that these two phenomena are strongly intertwined and that their potential functions overlap. In the 'cascaded memory systems model' that we outline here, we hypothesize that the DMN forms the backbone for the propagation of replay, mediating interactions between the hippocampus and the neocortex that enable the consolidation of new memories. The DMN may also independently ignite replay cascades, which support reactivation of older memories or high-level semantic representations. We suggest that transient cortical activations, inducing long-range correlations across the neocortex, are a key mechanism supporting a hierarchy of representations that progresses from simple percepts to semantic representations of causes and, finally, to whole episodes.
20
Gore KR, Woollams AM, Bruehl S, Halai AD, Lambon Ralph MA. Direct Neural Evidence for the Contrastive Roles of the Complementary Learning Systems in Adult Acquisition of Native Vocabulary. Cereb Cortex 2022; 32:3392-3405. [PMID: 34875018 PMCID: PMC9376875 DOI: 10.1093/cercor/bhab422] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2021] [Revised: 10/27/2021] [Accepted: 10/28/2021] [Indexed: 01/01/2023] Open
Abstract
The Complementary Learning Systems (CLS) theory provides a powerful framework for considering the acquisition, consolidation, and generalization of new knowledge. We tested this proposed neural division of labor in adults through an investigation of the consolidation and long-term retention of newly learned native vocabulary with post-learning functional neuroimaging. Newly learned items were compared with two conditions: 1) previously known items to highlight the similarities and differences with established vocabulary and 2) unknown/untrained items to provide a control for non-specific perceptual and motor speech output. Consistent with the CLS, retrieval of newly learned items was supported by a combination of regions associated with episodic memory (including left hippocampus) and the language-semantic areas that support established vocabulary (left inferior frontal gyrus and left anterior temporal lobe). Furthermore, there was a shifting division of labor across these two networks in line with the items' consolidation status; faster naming was associated with more activation of language-semantic areas and lesser activation of episodic memory regions. Hippocampal activity during naming predicted more than half the variation in naming retention 6 months later.
Affiliation(s)
- Katherine R Gore
- Division of Neuroscience and Experimental Psychology, School of Biological Sciences, University of Manchester, Manchester M13 9GB, UK
- Anna M Woollams
- Division of Neuroscience and Experimental Psychology, School of Biological Sciences, University of Manchester, Manchester M13 9GB, UK
- Stefanie Bruehl
- Division of Neuroscience and Experimental Psychology, School of Biological Sciences, University of Manchester, Manchester M13 9GB, UK
- St Mauritius Rehabilitation Centre, Meerbusch & Heinrich-Heine University, 40225 Duesseldorf, Germany
- Clinical and Cognitive Neurosciences, Department of Neurology, Medical Faculty, RWTH Aachen University, 52074 Aachen, Germany
- Ajay D Halai
- MRC Cognition & Brain Sciences Unit, University of Cambridge, Cambridge CB2 7EF, UK

21
Learning in deep neural networks and brains with similarity-weighted interleaved learning. Proc Natl Acad Sci U S A 2022; 119:e2115229119. [PMID: 35759669 PMCID: PMC9271163 DOI: 10.1073/pnas.2115229119] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Unlike humans, artificial neural networks rapidly forget previously learned information when learning something new and must be retrained by interleaving the new and old items; however, interleaving all old items is time-consuming and might be unnecessary. It might be sufficient to interleave only old items having substantial similarity to new ones. We show that training with similarity-weighted interleaving of old items with new ones allows deep networks to learn new items rapidly without forgetting, while using substantially less data. We hypothesize how similarity-weighted interleaving might be implemented in the brain using persistent excitability traces on recently active neurons and attractor dynamics. These findings may advance both neuroscience and machine learning. Understanding how the brain learns throughout a lifetime remains a long-standing challenge. In artificial neural networks (ANNs), incorporating novel information too rapidly results in catastrophic interference, i.e., abrupt loss of previously acquired knowledge. Complementary Learning Systems Theory (CLST) suggests that new memories can be gradually integrated into the neocortex by interleaving new memories with existing knowledge. This approach, however, has been assumed to require interleaving all existing knowledge every time something new is learned, which is implausible because it is time-consuming and requires a large amount of data. We show that deep, nonlinear ANNs can learn new information by interleaving only a subset of old items that share substantial representational similarity with the new information. By using such similarity-weighted interleaved learning (SWIL), ANNs can learn new information rapidly with a similar accuracy level and minimal interference, while using a much smaller number of old items presented per epoch (fast and data-efficient). SWIL is shown to work with various standard classification datasets (Fashion-MNIST, CIFAR10, and CIFAR100), deep neural network architectures, and in sequential learning frameworks. We show that data efficiency and speedup in learning new items are increased roughly proportionally to the number of nonoverlapping classes stored in the network, which implies an enormous possible speedup in human brains, which encode a high number of separate categories. Finally, we propose a theoretical model of how SWIL might be implemented in the brain.
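The selection step at the heart of SWIL can be sketched as sampling old items in proportion to their representational similarity to the new class. A minimal illustration with stand-in representations; the function and variable names are ours, and the cosine-similarity weighting is a simplification of the paper's procedure.

```python
import numpy as np

def swil_minibatch(new_items, old_items, old_reps, new_rep_mean, k=4, seed=0):
    """Interleave new items with a small subset of old items sampled in
    proportion to the cosine similarity of their (hidden-layer)
    representations to the mean representation of the new class."""
    rng = np.random.default_rng(seed)
    sims = old_reps @ new_rep_mean
    sims /= np.linalg.norm(old_reps, axis=1) * np.linalg.norm(new_rep_mean)
    p = np.clip(sims, 1e-6, None)          # small floor so p is a valid distribution
    p /= p.sum()                           # similarity-weighted sampling weights
    idx = rng.choice(len(old_items), size=k, replace=False, p=p)
    return new_items + [old_items[i] for i in idx]

old_reps = np.eye(6)                       # six old classes, orthogonal stand-in reps
new_rep = np.array([1.0, 0.9, 0.0, 0.0, 0.0, 0.0])  # overlaps old classes 0 and 1
batch = swil_minibatch(["new"], [f"old{i}" for i in range(6)], old_reps, new_rep, k=3)
```

Because only the few similar old classes receive substantial weight, each epoch interleaves far fewer old items than full rehearsal, which is the source of SWIL's data efficiency.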
22
Masís-Obando R, Norman KA, Baldassano C. Schema representations in distinct brain networks support narrative memory during encoding and retrieval. eLife 2022; 11:70445. [PMID: 35393941 PMCID: PMC8993217 DOI: 10.7554/elife.70445] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2021] [Accepted: 02/09/2022] [Indexed: 11/13/2022] Open
Abstract
Schematic prior knowledge can scaffold the construction of event memories during perception and also provide structured cues to guide memory search during retrieval. We measured the activation of story-specific and schematic representations using fMRI while participants were presented with 16 stories and then recalled each of the narratives, and related these activations to memory for specific story details. We predicted that schema representations in medial prefrontal cortex (mPFC) would be correlated with successful recall of story details. In keeping with this prediction, an anterior mPFC region showed a significant correlation between activation of schema representations at encoding and subsequent behavioral recall performance; however, this mPFC region was not implicated in schema representation during retrieval. More generally, our analyses revealed largely distinct brain networks at encoding and retrieval in which schema activation was related to successful recall. These results provide new insight into when and where event knowledge can support narrative memory.
Affiliation(s)
- Kenneth A Norman
- Princeton Neuroscience Institute, Princeton, United States
- Department of Psychology, Princeton University, Princeton, United States

23
Kudithipudi D, Aguilar-Simon M, Babb J, Bazhenov M, Blackiston D, Bongard J, Brna AP, Chakravarthi Raja S, Cheney N, Clune J, Daram A, Fusi S, Helfer P, Kay L, Ketz N, Kira Z, Kolouri S, Krichmar JL, Kriegman S, Levin M, Madireddy S, Manicka S, Marjaninejad A, McNaughton B, Miikkulainen R, Navratilova Z, Pandit T, Parker A, Pilly PK, Risi S, Sejnowski TJ, Soltoggio A, Soures N, Tolias AS, Urbina-Meléndez D, Valero-Cuevas FJ, van de Ven GM, Vogelstein JT, Wang F, Weiss R, Yanguas-Gil A, Zou X, Siegelmann H. Biological underpinnings for lifelong learning machines. NAT MACH INTELL 2022. [DOI: 10.1038/s42256-022-00452-0] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
24
Zhang C, Liu S, Wang Z, Weissing FJ, Zhang J. The “self-bad, partner-worse” strategy inhibits cooperation in networked populations. Inf Sci (N Y) 2022. [DOI: 10.1016/j.ins.2021.11.041] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
25
Lu Q, Hasson U, Norman KA. A neural network model of when to retrieve and encode episodic memories. eLife 2022; 11:e74445. [PMID: 35142289 PMCID: PMC9000961 DOI: 10.7554/elife.74445] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2021] [Accepted: 02/09/2022] [Indexed: 11/23/2022] Open
Abstract
Recent human behavioral and neuroimaging results suggest that people are selective in when they encode and retrieve episodic memories. To explain these findings, we trained a memory-augmented neural network to use its episodic memory to support prediction of upcoming states in an environment where past situations sometimes reoccur. We found that the network learned to retrieve selectively as a function of several factors, including its uncertainty about the upcoming state. Additionally, we found that selectively encoding episodic memories at the end of an event (but not mid-event) led to better subsequent prediction performance. In all of these cases, the benefits of selective retrieval and encoding can be explained in terms of reducing the risk of retrieving irrelevant memories. Overall, these modeling results provide a resource-rational account of why episodic retrieval and encoding should be selective and lead to several testable predictions.
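The uncertainty-gated retrieval policy the network learns can be caricatured in a few lines: consult episodic memory only when the prediction over upcoming states is high-entropy. This is an illustrative sketch, with a hand-set threshold and a dictionary standing in for the memory module.

```python
import math

def entropy(p):
    """Shannon entropy (nats) of a discrete distribution."""
    return -sum(x * math.log(x) for x in p if x > 0)

def maybe_retrieve(pred_dist, memory, cue, threshold=0.5):
    """Retrieve an episodic memory only when the model is uncertain about the
    upcoming state, mirroring the selective-retrieval behavior the network
    learns. Threshold and dict-based memory are simplifying assumptions."""
    if entropy(pred_dist) > threshold:
        return memory.get(cue)        # pattern-complete from the stored episode
    return None                       # confident: rely on the ongoing prediction

memory = {"kitchen": "coffee"}
confident = maybe_retrieve([0.97, 0.01, 0.01, 0.01], memory, "kitchen")  # -> None
uncertain = maybe_retrieve([0.25, 0.25, 0.25, 0.25], memory, "kitchen")  # -> "coffee"
```

Gating retrieval this way captures the paper's resource-rational point: retrieving only under uncertainty reduces the risk of pulling up irrelevant memories when the ongoing prediction is already reliable.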
Affiliation(s)
- Qihong Lu
- Department of Psychology, Princeton University, Princeton, United States
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Uri Hasson
- Department of Psychology, Princeton University, Princeton, United States
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Kenneth A Norman
- Department of Psychology, Princeton University, Princeton, United States
- Princeton Neuroscience Institute, Princeton University, Princeton, United States

26
Hayes TL, Krishnan GP, Bazhenov M, Siegelmann HT, Sejnowski TJ, Kanan C. Replay in Deep Learning: Current Approaches and Missing Biological Elements. Neural Comput 2021; 33:2908-2950. [PMID: 34474476 PMCID: PMC9074752 DOI: 10.1162/neco_a_01433] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2021] [Accepted: 05/28/2021] [Indexed: 11/04/2022]
Abstract
Replay is the reactivation of one or more neural patterns that are similar to the activation patterns experienced during past waking experiences. Replay was first observed in biological neural networks during sleep, and it is now thought to play a critical role in memory formation, retrieval, and consolidation. Replay-like mechanisms have been incorporated in deep artificial neural networks that learn over time to avoid catastrophic forgetting of previous knowledge. Replay algorithms have been successfully used in a wide range of deep learning methods within supervised, unsupervised, and reinforcement learning paradigms. In this letter, we provide the first comprehensive comparison between replay in the mammalian brain and replay in artificial neural networks. We identify multiple aspects of biological replay that are missing in deep learning systems and hypothesize how they could be used to improve artificial neural networks.
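The basic rehearsal scheme such replay algorithms build on can be sketched as a bounded buffer of past examples interleaved with each new batch. A minimal sketch using reservoir sampling; the capacity and mixing ratio are illustrative choices, not values from the letter.

```python
import random

class RehearsalBuffer:
    """Bounded store of past (input, label) items, kept as a uniform sample
    of everything seen via reservoir sampling, and mixed into new batches,
    the standard rehearsal strategy compared here with biological replay."""
    def __init__(self, capacity=100, seed=0):
        self.capacity = capacity
        self.items = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, item):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(item)
        else:                          # reservoir step keeps the sample uniform
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = item

    def interleave(self, new_batch):
        k = min(len(new_batch), len(self.items))
        return new_batch + self.rng.sample(self.items, k)

buf = RehearsalBuffer(capacity=3)
for x in ["a", "b", "c", "d", "e"]:
    buf.add(x)
mixed = buf.interleave(["new1", "new2"])
```

Training on `mixed` rather than the new batch alone is what protects old knowledge; the biological elements the authors discuss (e.g., generative and prioritized replay) refine which items fill the buffer and how they are sampled.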
Affiliation(s)
- Tyler L Hayes
- Rochester Institute of Technology, Rochester, NY 14623, U.S.A.
- Giri P Krishnan
- University of California at San Diego, La Jolla, CA 92093, U.S.A.
- Maxim Bazhenov
- University of California at San Diego, La Jolla, CA 92093, U.S.A.
- Terrence J Sejnowski
- University of California at San Diego, La Jolla, CA 92093, U.S.A., and Salk Institute for Biological Studies, La Jolla, CA 92037, U.S.A.
- Christopher Kanan
- Rochester Institute of Technology, Rochester, NY 14623, U.S.A.; Paige, New York, NY 10036, U.S.A.; and Cornell Tech, New York, NY 10044, U.S.A.

27
Roscow EL, Chua R, Costa RP, Jones MW, Lepora N. Learning offline: memory replay in biological and artificial reinforcement learning. Trends Neurosci 2021; 44:808-821. [PMID: 34481635 DOI: 10.1016/j.tins.2021.07.007] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2021] [Revised: 07/13/2021] [Accepted: 07/21/2021] [Indexed: 10/20/2022]
Abstract
Learning to act in an environment to maximise rewards is among the brain's key functions. This process has often been conceptualised within the framework of reinforcement learning, which has also gained prominence in machine learning and artificial intelligence (AI) as a way to optimise decision making. A common aspect of both biological and machine reinforcement learning is the reactivation of previously experienced episodes, referred to as replay. Replay is important for memory consolidation in biological neural networks and is key to stabilising learning in deep neural networks. Here, we review recent developments concerning the functional roles of replay in the fields of neuroscience and AI. Complementary progress suggests how replay might support learning processes, including generalisation and continual learning, affording opportunities to transfer knowledge across the two fields to advance the understanding of biological and artificial learning and memory.
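A common point of contact between the two fields is prioritized replay, in which experiences are reactivated in proportion to their surprise (TD error). A minimal sketch with the conventional epsilon floor; names and values are illustrative, not code from the review.

```python
import random

def prioritized_sample(transitions, td_errors, k=2, seed=0):
    """Sample transitions for replay with probability proportional to the
    magnitude of their TD error, so surprising experiences are rehearsed
    more often. The epsilon floor keeps every transition replayable."""
    rng = random.Random(seed)
    eps = 1e-3
    weights = [abs(d) + eps for d in td_errors]
    return rng.choices(transitions, weights=weights, k=k)

transitions = ["t0", "t1", "t2", "t3"]
sampled = prioritized_sample(transitions, td_errors=[0.0, 5.0, 0.1, 0.0], k=4)
```

Biasing replay toward high-error experiences parallels reports of preferential reactivation of salient or rewarded episodes in hippocampal replay.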
Affiliation(s)
- Rui Ponte Costa
- Bristol Computational Neuroscience Unit, Intelligent Systems Lab, Department of Computer Science, University of Bristol, Bristol, UK
- Matt W Jones
- School of Physiology, Pharmacology and Neuroscience, University of Bristol, Bristol, UK
- Nathan Lepora
- Department of Engineering Mathematics and Bristol Robotics Laboratory, University of Bristol, Bristol, UK

28
Mau W, Hasselmo ME, Cai DJ. The brain in motion: How ensemble fluidity drives memory-updating and flexibility. eLife 2020; 9:e63550. [PMID: 33372892 PMCID: PMC7771967 DOI: 10.7554/elife.63550] [Citation(s) in RCA: 47] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2020] [Accepted: 12/12/2020] [Indexed: 12/18/2022] Open
Abstract
While memories are often thought of as flashbacks to a previous experience, they do not simply conserve veridical representations of the past but must continually integrate new information to ensure survival in dynamic environments. Therefore, 'drift' in neural firing patterns, typically construed as disruptive 'instability' or an undesirable consequence of noise, may actually be useful for updating memories. In our view, continual modifications in memory representations reconcile classical theories of stable memory traces with neural drift. Here we review how memory representations are updated through dynamic recruitment of neuronal ensembles on the basis of excitability and functional connectivity at the time of learning. Overall, we emphasize the importance of considering memories not as static entities, but instead as flexible network states that reactivate and evolve across time and experience.
Affiliation(s)
- William Mau
- Neuroscience Department, Icahn School of Medicine at Mount Sinai, New York, United States
- Denise J Cai
- Neuroscience Department, Icahn School of Medicine at Mount Sinai, New York, United States

29
Binte Mohd Ikhsan SN, Bisby JA, Bush D, Steins DS, Burgess N. EPS mid-career prize 2018: Inference within episodic memory reflects pattern completion. Q J Exp Psychol (Hove) 2020; 73:2047-2070. [PMID: 33030092 PMCID: PMC7691565 DOI: 10.1177/1747021820959797] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
Recollection of episodic memories is a process of reconstruction where coherent events are inferred from subsets of remembered associations. Here, we investigated the formation of multielement events from sequential presentation of overlapping pairs of elements (people, places, and objects/animals), interleaved with pairs from other events. Retrievals of paired associations from a fully observed event (e.g., AB, BC, AC) were statistically dependent, indicating a process of pattern completion, but retrievals from a partially observed event (e.g., AB, BC, CD) were not. However, inference for unseen “indirect” associations (i.e., AC, BD or AD) from a partially observed event showed strong dependency with each other and with linking direct associations from that event. In addition, inference of indirect associations correlated with the product of performance on the linking direct associations across events (e.g., AC with ABxBC) but not on the non-linking association (e.g., AC with CD). These results were seen across three experiments, with greater differences in dependency between indirect and direct associations when they were separately tested, but similar results following single and repeated presentations of the direct associations. The results could be accounted for by a simple auto-associative network model of hippocampal memory function. Our findings suggest that pattern completion supports recollection of fully observed multielement events and the inference of indirect associations in partly observed multielement events, mediated via the directly observed linking associations (although the direct associations themselves were retrieved independently). Together with previous work, our results suggest that associative inference plays a key role in reconstructive episodic memory and does so through hippocampal pattern completion.
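The "simple auto-associative network model" invoked here can be illustrated with a Hopfield-style network: store an event as a +/-1 pattern with Hebbian weights, then let the dynamics pattern-complete a degraded cue. A toy sketch (one stored pattern, synchronous updates); it is not the authors' implementation.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian weight matrix for an auto-associative (Hopfield-style) network.
    Patterns are +/-1 row vectors; self-connections are zeroed."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)
    return W / n

def complete(W, cue, steps=5):
    """Pattern completion: iterate the sign update until the cue settles."""
    s = cue.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

event = np.array([1, -1, 1, 1, -1, -1, 1, -1])  # one stored multielement 'event'
W = train_hopfield(event[None, :])
cue = event.copy()
cue[4:] = 1                                     # degrade half the elements
recalled = complete(W, cue)                     # recovers the stored event
```

A partial, corrupted cue settles back onto the stored attractor, which is the sense in which pattern completion can support retrieval of whole events, and of unseen "indirect" associations, from subsets of remembered elements.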
Affiliation(s)
- James A Bisby
- Division of Psychiatry, University College London, London, UK
- Daniel Bush
- UCL Institute of Cognitive Neuroscience, University College London, London, UK
- UCL Institute of Neurology, University College London, London, UK
- David S Steins
- UCL Institute of Cognitive Neuroscience, University College London, London, UK
- Neil Burgess
- UCL Institute of Cognitive Neuroscience, University College London, London, UK
- UCL Institute of Neurology, University College London, London, UK

30
McClelland JL, Hill F, Rudolph M, Baldridge J, Schütze H. Placing language in an integrated understanding system: Next steps toward human-level performance in neural language models. Proc Natl Acad Sci U S A 2020; 117:25966-25974. [PMID: 32989131 PMCID: PMC7585006 DOI: 10.1073/pnas.1910416117] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Language is crucial for human intelligence, but what exactly is its role? We take language to be a part of a system for understanding and communicating about situations. In humans, these abilities emerge gradually from experience and depend on domain-general principles of biological neural networks: connection-based learning, distributed representation, and context-sensitive, mutual constraint satisfaction-based processing. Current artificial language processing systems rely on the same domain-general principles, embodied in artificial neural networks. Indeed, recent progress in this field depends on query-based attention, which extends the ability of these systems to exploit context and has contributed to remarkable breakthroughs. Nevertheless, most current models focus exclusively on language-internal tasks, limiting their ability to perform tasks that depend on understanding situations. These systems also lack memory for the contents of prior situations outside of a fixed contextual span. We describe the organization of the brain's distributed understanding system, which includes a fast learning system that addresses the memory problem. We sketch a framework for future models of understanding drawing equally on cognitive neuroscience and artificial intelligence and exploiting query-based attention. We highlight relevant current directions and consider further developments needed to fully capture human-level language understanding in a computational system.
Affiliation(s)
- James L McClelland: Department of Psychology, Stanford University, Stanford, CA 94305; DeepMind, London N1C 4AG, United Kingdom
- Felix Hill: DeepMind, London N1C 4AG, United Kingdom
- Maja Rudolph: Bosch Center for Artificial Intelligence, Renningen 71272, Germany
- Hinrich Schütze: Center for Information and Language Processing, Ludwig Maximilian University of Munich, Munich 80538, Germany
31
Something old, something new: A review of the literature on sleep-related lexicalization of novel words in adults. Psychon Bull Rev 2020; 28:96-121. [PMID: 32939631 DOI: 10.3758/s13423-020-01809-5] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/30/2020] [Indexed: 11/08/2022]
Abstract
Word learning is a crucial aspect of human development that depends on the formation and consolidation of novel memory traces. In this paper, we critically review the behavioural research on sleep-related lexicalization of novel words in healthy young adult speakers. We first describe human memory systems and the processes underlying memory consolidation, and then present the complementary learning systems account of memory consolidation. We then review behavioural studies of novel word learning and sleep-related lexicalization in monolingual samples, highlighting their relevance to three main theoretical questions. Finally, we review the few studies that have investigated sleep-related lexicalization in L2 speakers. Overall, while several studies suggest that sleep promotes the gradual transformation of initially labile traces into more stable representations, a growing body of work points to a rich variety of time courses for novel word lexicalization. Moreover, more work is needed on sleep-related lexicalization in varied populations, such as L2 and bilingual speakers, and on individual differences, to fully understand the boundary conditions of this phenomenon.
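The complementary learning systems account this review builds on posits a fast, labile trace that is gradually transferred into a stable cortical representation across sleep. A purely illustrative two-store sketch (the function name, parameters, and constants below are invented for exposition, not drawn from any study in the review):

```python
def consolidate(nights, replay_gain=0.3, decay=0.2):
    """Toy complementary-learning-systems dynamics: a fast, hippocampus-like
    trace encodes a novel word in one shot and then fades, while a slow,
    cortex-like trace strengthens each night in proportion to replay of
    the fast trace."""
    fast, slow = 1.0, 0.0  # trace strengths just after initial learning
    for _ in range(nights):
        slow += replay_gain * fast * (1.0 - slow)  # replay-driven cortical learning
        fast *= 1.0 - decay                        # labile trace decays
    return fast, slow

fast, slow = consolidate(nights=7)
```

Changing `replay_gain` or `decay` changes how many nights the cortical trace needs to dominate, a cartoon of the varied lexicalization time courses the review highlights.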
32
González OC, Sokolov Y, Krishnan GP, Delanois JE, Bazhenov M. Can sleep protect memories from catastrophic forgetting? eLife 2020; 9:e51005. [PMID: 32748786 PMCID: PMC7440920 DOI: 10.7554/elife.51005] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2020] [Accepted: 07/19/2020] [Indexed: 11/13/2022] Open
Abstract
Continual learning remains an unsolved problem in artificial neural networks. The brain has evolved mechanisms to prevent catastrophic forgetting of old knowledge during new training. Building upon data suggesting the importance of sleep in learning and memory, we tested a hypothesis that sleep protects old memories from being forgotten after new learning. In the thalamocortical model, training a new memory interfered with previously learned old memories leading to degradation and forgetting of the old memory traces. Simulating sleep after new learning reversed the damage and enhanced old and new memories. We found that when a new memory competed for previously allocated neuronal/synaptic resources, sleep replay changed the synaptic footprint of the old memory to allow overlapping neuronal populations to store multiple memories. Our study predicts that memory storage is dynamic, and sleep enables continual learning by combining consolidation of new memory traces with reconsolidation of old memory traces to minimize interference.
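The interference-and-replay effect the abstract describes can be shown in miniature with a linear model trained by gradient descent; this is a toy stand-in for intuition only, not the authors' thalamocortical spiking model (all sizes, rates, and data below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

def train(W, X, Y, lr=0.1, steps=500):
    """Gradient descent on mean-squared error for the linear map Y ~ X @ W.T."""
    for _ in range(steps):
        W -= lr * (W @ X.T - Y.T) @ X / len(X)
    return W

def error(W, X, Y):
    return float(np.mean((X @ W.T - Y) ** 2))

# Two unrelated "memories": pattern sets A (old) and B (new).
XA, YA = rng.standard_normal((20, 8)), rng.standard_normal((20, 3))
XB, YB = rng.standard_normal((20, 8)), rng.standard_normal((20, 3))

W = train(np.zeros((3, 8)), XA, YA)  # learn the old memory A
err_A_before = error(W, XA, YA)

W_seq = train(W.copy(), XB, YB)      # new learning alone: A degrades
W_replay = train(W.copy(),           # new learning interleaved with
                 np.vstack([XB, XA]),  # "replayed" A patterns
                 np.vstack([YB, YA]))

assert error(W_seq, XA, YA) > error(W_replay, XA, YA)  # replay protects A
```

Training on B alone pulls the shared weights away from A's solution, while mixing replayed A patterns into the new training keeps one weight matrix compatible with both memories, mirroring at a cartoon level how the simulated sleep replay lets overlapping populations store multiple memories.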
Affiliation(s)
- Oscar C González: Department of Medicine, University of California, San Diego, La Jolla, United States
- Yury Sokolov: Department of Medicine, University of California, San Diego, La Jolla, United States
- Giri P Krishnan: Department of Medicine, University of California, San Diego, La Jolla, United States
- Jean Erik Delanois: Department of Medicine, University of California, San Diego, La Jolla, United States; Department of Computer Science and Engineering, University of California, San Diego, La Jolla, United States
- Maxim Bazhenov: Department of Medicine, University of California, San Diego, La Jolla, United States
33
Robertson EM, Genzel L. Memories replayed: reactivating past successes and new dilemmas. Philos Trans R Soc Lond B Biol Sci 2020; 375:20190226. [PMID: 32248775 DOI: 10.1098/rstb.2019.0226] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2023] Open
Abstract
Our experiences continue to be processed 'offline' in the ensuing hours of both wakefulness and sleep. During these different brain states, the memory formed during our experience is replayed or reactivated. Here, we discuss the unique challenges of studying offline reactivation; the growth in experimental and analytical techniques, across animals from rodents to humans, for capturing these offline events; the important challenges this innovation has brought; our still modest understanding of how reactivation drives diverse synaptic changes across circuits; and how those changes differ from (if at all), and perhaps complement, the changes at memory formation. Together, these discussions highlight critical emerging issues vital for identifying how reactivation affects circuits and, in turn, behaviour, and provide a broader context for the contributions in this special issue. This article is part of the Theo Murphy meeting issue 'Memory reactivation: replaying events past, present and future'.
Affiliation(s)
- Edwin M Robertson: Institute of Neuroscience & Psychology, University of Glasgow, Glasgow, UK
- Lisa Genzel: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands