1
Rao TS, George SJ, Kulkarni GU. Emulating working memory consolidation with a 1D supramolecular nanofibre-based neuromorphic device. Nanoscale Horizons 2025. [PMID: 40235453] [DOI: 10.1039/d5nh00034c]
Abstract
Cognitive activities in the human brain are driven by the processes of learning and forgetting. A third process, consolidation, acts as an interface that protects important learnt information from being forgotten; it is imperative for the formation of stable, long-term memories and is an integral part of memory formation. Despite significant efforts to emulate learning, forgetting, and several synaptic functionalities in various neuromorphic devices, the consolidation process has received little attention. Of the two forms of consolidation, long-term and working memory consolidation, the present study explores the latter, which stabilizes transient sensory input and enhances retention by counteracting decay-based forgetting. Herein, a two-terminal, optically active resistive neuromorphic device based on 1D supramolecular nanofibres is used to emulate and quantify consolidation in working memory. The phenomenon aligns with mathematical models using two time constants, drawing parallels with biological mechanisms. Given the excellent optical and humidity response of the nanofibres, the emulation was achieved by employing optical input as the stimulus and modulating the photoresponse through exposure to different humidities. By defining consolidation as a function of humidity, the study underscores its role as an active control, reinforcing the connection between environmental factors and memory stability. The variation in consolidation was studied during learning-relearning, a change in environment (hydrated and dehydrated states), fatigue, and habituation. Notably, a consolidation parameter is defined to quantify consolidation, an inseparable component of cognition.
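A two-time-constant retention model of the kind this abstract refers to can be sketched as a bi-exponential decay: a fast component that forgets quickly plus a slow, consolidated component. All function names and parameter values below are illustrative assumptions, not the device's fitted values or the paper's actual model.

```python
import math

def memory_trace(t, a1=0.6, tau1=2.0, a2=0.4, tau2=50.0):
    """Bi-exponential decay: a fast-forgetting component (tau1) plus a
    slow, consolidated component (tau2). Parameters are illustrative."""
    return a1 * math.exp(-t / tau1) + a2 * math.exp(-t / tau2)

def consolidation_fraction(t, a1=0.6, tau1=2.0, a2=0.4, tau2=50.0):
    """Share of the surviving trace carried by the slow component --
    one plausible way to define a 'consolidation parameter'."""
    slow = a2 * math.exp(-t / tau2)
    return slow / (a1 * math.exp(-t / tau1) + slow)
```

Under such a model the trace decays monotonically, while the consolidated share of what survives grows with time, which is one way a quantitative consolidation parameter could behave.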
Affiliation(s)
- Tejaswini S Rao
- Chemistry & Physics of Materials Unit, Jawaharlal Nehru Centre for Advanced Scientific Research, Jakkur P.O., Bangalore-560064, India.
- Subi J George
- Supramolecular Chemistry Laboratory, New Chemistry Unit, Jawaharlal Nehru Centre for Advanced Scientific Research, Bangalore-560064, India
- Giridhar U Kulkarni
- Chemistry & Physics of Materials Unit, Jawaharlal Nehru Centre for Advanced Scientific Research, Jakkur P.O., Bangalore-560064, India.
2
Finotelli P, Eustache F. Mathematical modeling of human memory. Front Psychol 2023; 14:1298235. [PMID: 38187417] [PMCID: PMC10771340] [DOI: 10.3389/fpsyg.2023.1298235]
Abstract
The mathematical study of human memory is still an open challenge. Cognitive psychology and neuroscience have contributed substantially to our understanding of how human memory is structured and how it works. Cognitive psychologists have developed experimental paradigms and conceived quantitative measures of performance in memory tasks for both healthy people and patients with memory disorders, but in terms of mathematically modeling human memory, much remains to be done. There are many ways to model human memory mathematically, for example, using mathematical analysis, linear algebra, statistics, and artificial neural networks. The aim of this study is to provide the reader with a description of some prominent models, involving mathematical analysis and linear algebra, designed to describe how memory works by predicting the results of psychological experiments. We have ordered the models chronologically and, for each model, emphasized what are, in our opinion, its strong and weak points. We are aware that this study covers just a part of human memory modeling, and that we have made a personal selection, which is arguable. Nevertheless, our hope is to help scientists model human memory and its diseases.
Affiliation(s)
- Paolo Finotelli
- Normandie Univ, UNICAEN, PSL Université Paris, EPHE, INSERM, U1077, CHU de Caen, Centre Cyceron, Neuropsychologie et Imagerie de la Mémoire Humaine, Caen, France
- Francis Eustache
- Normandie Univ, UNICAEN, PSL Université Paris, EPHE, INSERM, U1077, CHU de Caen, Centre Cyceron, Neuropsychologie et Imagerie de la Mémoire Humaine, Caen, France
3
Multiple traces and altered signal-to-noise in systems consolidation: Evidence from the 7T fMRI Natural Scenes Dataset. Proc Natl Acad Sci U S A 2022; 119:e2123426119. [PMID: 36279446] [PMCID: PMC9636924] [DOI: 10.1073/pnas.2123426119]
Abstract
How do the neural correlates of recognition change over time? We study natural scene image recognition spanning a year with 7-Tesla functional magnetic resonance imaging (fMRI) of the human brain. We find that the medial temporal lobe (MTL) contribution to recognition persists over 200 d, supporting multiple-trace theory and contradicting a trace transfer (from MTL to cortex) point of view. We then test the hypothesis that the signal-to-noise ratio of traces increases over time, presumably a consequence of synaptic “desaturation” in the weeks following learning. The fMRI trace signature associates with the rate of removal of competing traces and reflects a time-related enhancement of image-feature selectivity. We conclude that multiple MTL traces and improved signal-to-noise may underlie systems-level memory consolidation. The brain mechanisms of memory consolidation remain elusive. Here, we examine blood-oxygen-level-dependent (BOLD) correlates of image recognition through the scope of multiple influential systems consolidation theories. We utilize the longitudinal Natural Scenes Dataset, a 7-Tesla functional magnetic resonance imaging human study in which ∼135,000 trials of image recognition were conducted over the span of a year among eight subjects. We find that early- and late-stage image recognition associates with both medial temporal lobe (MTL) and visual cortex when evaluating regional activations and a multivariate classifier. Supporting multiple-trace theory (MTT), parts of the MTL activation time course show remarkable fit to a 20-y-old MTT time-dynamical model predicting early trace intensity increases and slight subsequent interference (R2 > 0.90). These findings contrast with a simplistic, yet common, view that memory traces are transferred from MTL to cortex. Next, we test the hypothesis that the MTL trace signature of memory consolidation should also reflect synaptic “desaturation,” as evidenced by an increased signal-to-noise ratio.
We find that the magnitude of relative BOLD enhancement among surviving memories is positively linked to the rate of removal (i.e., forgetting) of competing traces. Moreover, an image-feature and time interaction of MTL and visual cortex functional connectivity suggests that consolidation mechanisms improve the specificity of a distributed trace. These neurobiological effects do not replicate on a shorter timescale (within a session), implicating a prolonged, offline process. While recognition can potentially involve cognitive processes outside of memory retrieval (e.g., re-encoding), our work largely favors MTT and desaturation as perhaps complementary consolidative memory mechanisms.
4
Sardoo AM, Zhang S, Ferraro TN, Keck TM, Chen Y. Decoding brain memory formation by single-cell RNA sequencing. Brief Bioinform 2022; 23:6713514. [PMID: 36156112] [PMCID: PMC9677489] [DOI: 10.1093/bib/bbac412]
Abstract
How distinct memories are formed and stored in the brain is an important and fundamental question in neuroscience and computational biology. A population of neurons, termed engram cells, represents the physiological manifestation of a specific memory trace and is characterized by dynamic changes in gene expression, which in turn alter the synaptic connectivity and excitability of these cells. Recent applications of single-cell RNA sequencing (scRNA-seq) and single-nucleus RNA sequencing (snRNA-seq) are promising approaches for delineating the dynamic expression profiles in these subsets of neurons, and thus for understanding memory-specific genes, their combinatorial patterns, and regulatory networks. The aim of this article is to review and discuss the experimental and computational procedures of sc/snRNA-seq, new studies of molecular mechanisms of memory aided by sc/snRNA-seq in human brain diseases and related mouse models, and computational challenges in understanding the regulatory mechanisms underlying long-term memory formation.
Affiliation(s)
- Atlas M Sardoo
- Department of Biological & Biomedical Sciences, Rowan University, Glassboro, NJ 08028, USA
- Shaoqiang Zhang
- College of Computer and Information Engineering, Tianjin Normal University, Tianjin 300387, China
- Thomas N Ferraro
- Department of Biomedical Sciences, Cooper Medical School of Rowan University, Camden, NJ 08103, USA
- Thomas M Keck
- Department of Biological & Biomedical Sciences, Rowan University, Glassboro, NJ 08028, USA
- Department of Chemistry & Biochemistry, Rowan University, Glassboro, NJ 08028, USA
- Yong Chen
- Department of Biological and Biomedical Sciences, Rowan University, Glassboro, NJ 08028, USA (corresponding author; Tel.: +1 856 256 4500)
5
Murre JMJ. Randomly fluctuating neural connections may implement a consolidation mechanism that explains classic memory laws. Sci Rep 2022; 12:13423. [PMID: 35927567] [PMCID: PMC9352731] [DOI: 10.1038/s41598-022-17639-5]
Abstract
How can we reconcile the massive fluctuations in neural connections with a stable long-term memory? Two-photon microscopy studies have revealed that large portions of neural connections (spines, synapses) are unexpectedly active, changing unpredictably over time. This appears to invalidate the main assumption underlying the majority of memory models in cognitive neuroscience, which rely on stable connections that retain information over time. Here, we show that such random fluctuations may in fact implement a type of memory consolidation mechanism with a stable very long-term memory that offers novel explanations for several classic memory 'laws', namely Jost's Law (1897: superiority of spaced learning) and Ribot's Law (1881: loss of recent memories in retrograde amnesia), for which a common neural basis has been postulated but not established, as well as other general 'laws' of learning and forgetting. We show how these phenomena emerge naturally from massively fluctuating neural connections.
Affiliation(s)
- Jaap M J Murre
- Brain and Cognition Unit, Psychology Department, University of Amsterdam, P.O. Box 15915, 1001 NK, Amsterdam, The Netherlands.
6
Abstract
Humans have the remarkable ability to continually store new memories, while maintaining old memories for a lifetime. How the brain avoids catastrophic forgetting of memories due to interference between encoded memories is an open problem in computational neuroscience. Here we present a model for continual learning in a recurrent neural network combining Hebbian learning, synaptic decay and a novel memory consolidation mechanism: memories undergo stochastic rehearsals with rates proportional to the memory's basin of attraction, causing self-amplified consolidation. This mechanism gives rise to memory lifetimes that extend much longer than the synaptic decay time, and retrieval probability of memories that gracefully decays with their age. The number of retrievable memories is proportional to a power of the number of neurons. Perturbations to the circuit model cause temporally-graded retrograde and anterograde deficits, mimicking observed memory impairments following neurological trauma.
7
Yalnizyan-Carson A, Richards BA. Forgetting Enhances Episodic Control With Structured Memories. Front Comput Neurosci 2022; 16:757244. [PMID: 35399916] [PMCID: PMC8991683] [DOI: 10.3389/fncom.2022.757244]
Abstract
Forgetting is a normal process in healthy brains, and evidence suggests that the mammalian brain forgets more than is required based on limitations of mnemonic capacity. Episodic memories, in particular, are liable to be forgotten over time. Researchers have hypothesized that it may be beneficial for decision making to forget episodic memories over time. Reinforcement learning offers a normative framework in which to test such hypotheses. Here, we show that a reinforcement learning agent that uses an episodic memory cache to find rewards in maze environments can forget a large percentage of older memories without any performance impairments, if it utilizes mnemonic representations that contain structural information about space. Moreover, we show that some forgetting can actually provide a benefit in performance compared to agents with unbounded memories. Our analyses of the agents show that forgetting reduces the influence of outdated information, and of infrequently visited states, on the policies produced by the episodic control system. These results support the hypothesis that some degree of forgetting can be beneficial for decision making, which can help to explain why the brain forgets more than is required by capacity limitations.
Affiliation(s)
- Annik Yalnizyan-Carson
- Department of Biological Sciences, University of Toronto Scarborough, Toronto, ON, Canada
- Department of Cell and Systems Biology, University of Toronto, Toronto, ON, Canada
- Montreal Institute for Learning Algorithms (MILA), Montreal, QC, Canada
- Correspondence: Annik Yalnizyan-Carson
- Blake A. Richards
- Department of Cell and Systems Biology, University of Toronto, Toronto, ON, Canada
- Montreal Institute for Learning Algorithms (MILA), Montreal, QC, Canada
- Montreal Neurological Institute, Montreal, QC, Canada
- Department of Neurology and Neurosurgery, McGill University, Montreal, QC, Canada
- School of Computer Science, McGill University, Montreal, QC, Canada
8
Dabaghian Y. From Topological Analyses to Functional Modeling: The Case of Hippocampus. Front Comput Neurosci 2021; 14:593166. [PMID: 33505262] [PMCID: PMC7829363] [DOI: 10.3389/fncom.2020.593166]
Abstract
Topological data analyses are widely used for describing and conceptualizing large volumes of neurobiological data, e.g., for quantifying spiking outputs of large neuronal ensembles and thus understanding the functions of the corresponding networks. Below we discuss an approach in which convergent topological analyses produce insights into how information may be processed in mammalian hippocampus—a brain part that plays a key role in learning and memory. The resulting functional model provides a unifying framework for integrating spiking data at different timescales and following the course of spatial learning at different levels of spatiotemporal granularity. This approach allows accounting for contributions from various physiological phenomena into spatial cognition—the neuronal spiking statistics, the effects of spiking synchronization by different brain waves, the roles played by synaptic efficacies and so forth. In particular, it is possible to demonstrate that networks with plastic and transient synaptic architectures can encode stable cognitive maps, revealing the characteristic timescales of memory processing.
Affiliation(s)
- Yuri Dabaghian
- Department of Neurology, The University of Texas McGovern Medical School, Houston, TX, United States
9
Babichev A, Morozov D, Dabaghian Y. Replays of spatial memories suppress topological fluctuations in cognitive map. Netw Neurosci 2019; 3:707-724. [PMID: 31410375] [PMCID: PMC6663216] [DOI: 10.1162/netn_a_00076]
Abstract
The spiking activity of hippocampal place cells plays a key role in producing and sustaining an internalized representation of the ambient space: a cognitive map. Not only do these cells exhibit location-specific spiking during navigation, they may also rapidly replay the navigated routes through endogenous dynamics of the hippocampal network. Physiologically, such reactivations are viewed as manifestations of "memory replays" that help to learn new information and to consolidate previously acquired memories by reinforcing synapses in the parahippocampal networks. Below we propose a computational model of these processes that allows assessing the effect of replays on acquiring a robust topological map of the environment, and demonstrate that replays may play a key role in stabilizing the hippocampal representation of space.
Affiliation(s)
- Andrey Babichev
- Department of Computational and Applied Mathematics, Rice University, Houston, TX, USA
- Yuri Dabaghian
- Department of Computational and Applied Mathematics, Rice University, Houston, TX, USA
10
Babichev A, Morozov D, Dabaghian Y. Robust spatial memory maps encoded by networks with transient connections. PLoS Comput Biol 2018; 14:e1006433. [PMID: 30226836] [PMCID: PMC6161922] [DOI: 10.1371/journal.pcbi.1006433]
Abstract
The spiking activity of principal cells in mammalian hippocampus encodes an internalized neuronal representation of the ambient space—a cognitive map. Once learned, such a map enables the animal to navigate a given environment for a long period. However, the neuronal substrate that produces this map is transient: the synaptic connections in the hippocampus and in the downstream neuronal networks never cease to form and to deteriorate at a rapid rate. How can the brain maintain a robust, reliable representation of space using a network that constantly changes its architecture? We address this question using a computational framework that allows evaluating the effect produced by the decaying connections between simulated hippocampal neurons on the properties of the cognitive map. Using novel Algebraic Topology techniques, we demonstrate that emergence of stable cognitive maps produced by networks with transient architectures is a generic phenomenon. The model also points out that deterioration of the cognitive map caused by weakening or lost connections between neurons may be compensated by simulating the neuronal activity. Lastly, the model explicates the importance of the complementary learning systems for processing spatial information at different levels of spatiotemporal granularity. The reliability of our memories is nothing short of remarkable. Synaptic connections between neurons appear and disappear at a rapid rate, and the resulting networks constantly change their architecture due to various forms of neural plasticity. How can the brain develop a reliable representation of the world, learn and retain memories despite, or perhaps due to, such complex dynamics? Below we address these questions by modeling mechanisms of spatial learning in the hippocampal network, using novel algebraic topology methods. 
We demonstrate that although the functional units of the hippocampal network—the place cell assemblies—are unstable structures that may appear and disappear, the spatial memory map produced by a sufficiently large population of such assemblies robustly captures the topological structure of the environment.
Affiliation(s)
- Andrey Babichev
- Department of Computational and Applied Mathematics, Rice University, Houston, Texas, United States of America
- Dmitriy Morozov
- Lawrence Berkeley National Laboratory, Berkeley, California, United States of America
- Berkeley Institute for Data Science, University of California - Berkeley, Berkeley, California, United States of America
- Yuri Dabaghian
- Department of Neurology, The University of Texas McGovern Medical School, Houston, Texas, United States of America
11
Abstract
One of the mysteries of memory is that it can last despite changes in the underlying synaptic architecture. How can we, for example, maintain an internal spatial map of an environment over months or years when the underlying network is full of transient connections? In the following, we propose a computational model for describing the emergence of the hippocampal cognitive map in a network of transient place cell assemblies and demonstrate, using methods of algebraic topology, how such a network can maintain spatial memory over time.
12
Sotolongo-Costa O, Gaggero-Sager LM, Becker JT, Maestu F, Sotolongo-Grau O. A physical model for dementia. Physica A 2017; 472:86-93. [PMID: 28827893] [PMCID: PMC5562389] [DOI: 10.1016/j.physa.2016.12.086]
Abstract
Aging-associated brain decline often results in some form of dementia. Although this is a complex brain disorder, a physical model can be used to describe its general behavior. A probabilistic model for the development of dementia is obtained and fitted to experimental data from the Alzheimer's Disease Neuroimaging Initiative. The model explains how dementia appears as a consequence of aging and why it is irreversible.
Affiliation(s)
- O Sotolongo-Costa
- CInC-(IICBA), Universidad Autónoma del Estado de Morelos, 62209 Cuernavaca, Morelos, Mexico
- L M Gaggero-Sager
- CIICAP-(IICBA), Universidad Autónoma del Estado de Morelos, 62209 Cuernavaca, Morelos, Mexico
- J T Becker
- Department of Psychiatry, School of Medicine, University of Pittsburgh, Pittsburgh PA 15213, USA
- Department of Neurology, School of Medicine, University of Pittsburgh, Pittsburgh PA 15213, USA
- Department of Psychology, School of Medicine, University of Pittsburgh, Pittsburgh PA 15213, USA
- F Maestu
- Laboratory of Cognitive and Computational Neuroscience (UCM-UPM), Centre for Biomedical Technology (CTB), Campus de Montegancedo s/n, Pozuelo de Alarcón, 28223, Madrid, Spain
- O Sotolongo-Grau
- Alzheimer Research Center and Memory Clinic, Fundació ACE, Institut Català de Neurociències Aplicades, 08029 Barcelona, Spain
13
Abstract
In this article, learning curves for foreign vocabulary words are investigated, distinguishing between a subject-specific learning rate and a material-specific parameter that is related to the complexity of the items, such as the number of syllables. Two experiments are described, one with Turkish words and one with Italian words. In both, S-shaped learning curves were observed, which were most obvious if the subjects were not very familiar with the materials and if they were slow learners. With prolonged learning, the S shapes disappeared. Three different mathematical functions are proposed to explain these S-shaped curves. A further analysis clarifies why S-shaped learning curves may go unnoticed in many experiments.
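The abstract above does not name its three candidate functions, but a logistic curve is one common S-shaped form for mapping practice to recall probability. The sketch below is purely illustrative; the function name and parameter values are assumptions, not taken from the study.

```python
import math

def logistic_learning(trial, rate=0.8, midpoint=5.0):
    """One common S-shaped candidate: a logistic curve mapping trial
    number to recall probability. 'rate' sets the steepness and
    'midpoint' the trial at which recall reaches 50%; both illustrative."""
    return 1.0 / (1.0 + math.exp(-rate * (trial - midpoint)))
```

Such a curve is convex during early trials and concave later, which is the S shape the experiments detected in unfamiliar material and slow learners.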
14
Murre JMJ, Dros J. Replication and Analysis of Ebbinghaus' Forgetting Curve. PLoS One 2015; 10:e0120644. [PMID: 26148023] [PMCID: PMC4492928] [DOI: 10.1371/journal.pone.0120644]
Abstract
We present a successful replication of Ebbinghaus’ classic forgetting curve from 1880 based on the method of savings. One subject spent 70 hours learning lists and relearning them after 20 min, 1 hour, 9 hours, 1 day, 2 days, or 31 days. The results are similar to Ebbinghaus' original data. We analyze the effects of serial position on forgetting and investigate what mathematical equations present a good fit to the Ebbinghaus forgetting curve and its replications. We conclude that the Ebbinghaus forgetting curve has indeed been replicated and that it is not completely smooth but most probably shows a jump upwards starting at the 24 hour data point.
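Fitting a candidate equation to savings data, as described above, can be sketched with a power law fit by linear least squares in log-log space. The savings values below are invented for illustration; they are neither Ebbinghaus' nor the replication's actual numbers, and a power law is only one of the equations such studies compare.

```python
import math

# Illustrative savings fractions at increasing retention intervals (hours).
hours = [0.33, 1, 9, 24, 48, 744]
savings = [0.58, 0.44, 0.36, 0.33, 0.28, 0.21]

# Fit a power law s(t) = a * t**(-b) by least squares in log-log space,
# where the model becomes the line log s = log a - b * log t.
xs = [math.log(t) for t in hours]
ys = [math.log(s) for s in savings]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a, b = math.exp(my - slope * mx), -slope
```

A systematic deviation of the data from the fitted curve around the 24-hour point is how a "jump upwards" like the one the authors report would show up against any smooth candidate function.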
Affiliation(s)
- Joeri Dros
- University of Amsterdam, Amsterdam, The Netherlands