1
Kappel D, Cheng S. Global remapping emerges as the mechanism for renewal of context-dependent behavior in a reinforcement learning model. Front Comput Neurosci 2025; 18:1462110. PMID: 39881840; PMCID: PMC11774835; DOI: 10.3389/fncom.2024.1462110.
Abstract
Introduction: The hippocampal formation exhibits complex and context-dependent activity patterns and dynamics, e.g., place cell activity during spatial navigation in rodents or remapping of place fields when the animal switches between contexts. Furthermore, rodents show context-dependent renewal of extinguished behavior. However, the link between context-dependent neural codes and context-dependent renewal is not fully understood. Methods: We use a deep neural network-based reinforcement learning agent to study the learning dynamics that occur during spatial learning and context switching in a simulated ABA extinction and renewal paradigm in a 3D virtual environment. Results: Despite its simplicity, the network exhibits a number of features typically found in the CA1 and CA3 regions of the hippocampus. A significant proportion of neurons in deeper layers of the network are tuned to a specific spatial position of the agent in the environment, similar to place cells in the hippocampus. These complex spatial representations and dynamics occur spontaneously in the hidden layer of a deep network during learning. These spatial representations exhibit global remapping when the agent is exposed to a new context. The spatial maps are restored when the agent returns to the previous context, accompanied by renewal of the conditioned behavior. Remapping is facilitated by memory replay of experiences during training. Discussion: Our results show that integrated codes that jointly represent spatial and task-relevant contextual variables are the mechanism underlying renewal in a simulated DQN agent.
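A minimal sketch of this kind of remapping analysis (not the authors' code; array names, shapes, and the binning scheme are illustrative assumptions): bin the agent's logged positions, average each hidden unit's activation per spatial bin, and correlate the resulting tuning maps between the two contexts. Correlations near zero indicate global remapping.

```python
# Illustrative analysis sketch: spatial tuning maps and a remapping score.
import numpy as np

def tuning_maps(positions, activations, n_bins=10):
    """positions: (T, 2) x, y in [0, 1); activations: (T, N) hidden-unit values.
    Returns (N, n_bins, n_bins) maps of mean activation per spatial bin."""
    ix = np.clip((positions[:, 0] * n_bins).astype(int), 0, n_bins - 1)
    iy = np.clip((positions[:, 1] * n_bins).astype(int), 0, n_bins - 1)
    n_units = activations.shape[1]
    maps = np.zeros((n_units, n_bins, n_bins))
    counts = np.zeros((n_bins, n_bins))
    np.add.at(counts, (ix, iy), 1)
    for u in range(n_units):
        acc = np.zeros((n_bins, n_bins))
        np.add.at(acc, (ix, iy), activations[:, u])
        maps[u] = acc / np.maximum(counts, 1)
    return maps

def remapping_score(maps_a, maps_b):
    """Per-unit Pearson correlation of tuning maps across contexts A and B."""
    scores = []
    for ma, mb in zip(maps_a, maps_b):
        a, b = ma.ravel(), mb.ravel()
        if a.std() == 0 or b.std() == 0:
            continue
        scores.append(np.corrcoef(a, b)[0, 1])
    return np.array(scores)

# Random data standing in for logged trajectories and hidden activations:
rng = np.random.default_rng(0)
pos_a, act_a = rng.random((5000, 2)), rng.random((5000, 64))
pos_b, act_b = rng.random((5000, 2)), rng.random((5000, 64))
print(remapping_score(tuning_maps(pos_a, act_a), tuning_maps(pos_b, act_b)).mean())
```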
Affiliation(s)
- Sen Cheng
- Institute for Neural Computation, Faculty of Computer Science, Ruhr University Bochum, Bochum, Germany
2
Melchior J, Altamimi A, Bayati M, Cheng S, Wiskott L. A neural network model for online one-shot storage of pattern sequences. PLoS One 2024; 19:e0304076. PMID: 38900733; PMCID: PMC11189254; DOI: 10.1371/journal.pone.0304076.
Abstract
Based on the CRISP theory (Content Representation, Intrinsic Sequences, and Pattern completion), we present a computational model of the hippocampus that allows for online one-shot storage of pattern sequences without the need for a consolidation process. Rather than storing a sequence in CA3, our model lets CA3 provide a pre-trained sequence that is hetero-associated with the input sequence. That is, plasticity on a short timescale only occurs in the incoming and outgoing connections of CA3, not in its recurrent connections. We use a single learning rule, named Hebbian descent, to train all plastic synapses in the network. A forgetting mechanism in the learning rule allows the network to continuously store new patterns while forgetting those stored earlier. We find that a single cue pattern can reliably trigger the retrieval of sequences, even when cues are noisy or incomplete. Furthermore, pattern separation in subregion DG is necessary when sequences contain correlated patterns. Besides artificially generated input sequences, the model works with sequences of handwritten digits and natural images. Notably, the model can improve itself without external input, in a process that can be referred to as 'replay' or 'offline learning', which strengthens the stored associations and consolidates the learned patterns.
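A hedged sketch of the storage principle described above, using a generic error-driven associator with weight decay rather than the paper's exact Hebbian descent rule: each pattern pair is stored in a single online update, and the decay term gradually forgets earlier associations.

```python
# Simplified stand-in for online one-shot hetero-association with forgetting.
import numpy as np

rng = np.random.default_rng(1)

def store(W, x, t, lr=0.5, decay=0.05):
    """One online update associating input x with target t.
    The decay term gradually overwrites older associations."""
    y = np.tanh(W @ x)                 # current recall for x
    W *= (1.0 - decay)                 # forgetting
    W += lr * np.outer(t - y, x)       # error-driven (delta-rule-like) update
    return W

d = 100
W = np.zeros((d, d))
pairs = [(np.sign(rng.standard_normal(d)), np.sign(rng.standard_normal(d)))
         for _ in range(20)]

for x, t in pairs:                     # one-shot, online storage
    W = store(W, x, t)

# Recall quality (cosine similarity) tends to be higher for recent pairs:
for k in (0, 10, 19):
    x, t = pairs[k]
    y = np.tanh(W @ x)
    print(k, float(y @ t) / (np.linalg.norm(y) * np.linalg.norm(t) + 1e-9))
```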
Affiliation(s)
- Jan Melchior
- Institute for Neural Computation, Faculty of Computer Science, Ruhr University Bochum, Bochum, Germany
- Aya Altamimi
- Institute for Neural Computation, Faculty of Computer Science, Ruhr University Bochum, Bochum, Germany
- Mehdi Bayati
- Institute for Neural Computation, Faculty of Computer Science, Ruhr University Bochum, Bochum, Germany
- Sen Cheng
- Institute for Neural Computation, Faculty of Computer Science, Ruhr University Bochum, Bochum, Germany
- Laurenz Wiskott
- Institute for Neural Computation, Faculty of Computer Science, Ruhr University Bochum, Bochum, Germany
3
Kern S, Nagel J, Gerchen MF, Gürsoy Ç, Meyer-Lindenberg A, Kirsch P, Dolan RJ, Gais S, Feld GB. Reactivation strength during cued recall is modulated by graph distance within cognitive maps. eLife 2024; 12:RP93357. PMID: 38810249; PMCID: PMC11136493; DOI: 10.7554/elife.93357.
Abstract
Declarative memory retrieval is thought to involve reinstatement of neuronal activity patterns elicited and encoded during a prior learning episode. Furthermore, it has been suggested that two mechanisms operate during reinstatement, depending on task demands: individual memory items can be reactivated simultaneously as a clustered occurrence or, alternatively, replayed sequentially as temporally separate instances. In the current study, participants learned associations between images that were embedded in a directed graph network and retained this information over a brief 8-minute consolidation period. During a subsequent cued recall session, participants retrieved the learned information while undergoing magnetoencephalographic recording. Using a trained stimulus decoder, we found evidence for clustered reactivation of learned material. Reactivation strength of individual items during clustered reactivation decreased as a function of increasing graph distance, an ordering present for successful retrieval but not for retrieval failure. In line with previous research, we found evidence that sequential replay was dependent on retrieval performance and was most evident in low performers. The results provide evidence for distinct performance-dependent retrieval mechanisms, with graded clustered reactivation emerging as a plausible mechanism to search within abstract cognitive maps.
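For illustration only, the decoding logic can be sketched as follows: train a multiclass decoder on labeled localizer data, read out per-item probabilities from a recall epoch, and correlate those probabilities with graph distance from the cued item. The simulated data, decoder choice, and distances are assumptions, not the study's pipeline.

```python
# Illustrative sketch of decoder-based reactivation vs. graph distance.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_items, n_sensors = 8, 50

# Simulated localizer data: 60 trials per item with item-specific patterns.
patterns = rng.standard_normal((n_items, n_sensors))
X_loc = np.vstack([patterns[i] + 0.8 * rng.standard_normal((60, n_sensors))
                   for i in range(n_items)])
y_loc = np.repeat(np.arange(n_items), 60)

decoder = LogisticRegression(max_iter=2000).fit(X_loc, y_loc)

# Simulated recall epoch: activity resembling item 3 plus noise.
cued_item = 3
recall_epoch = patterns[cued_item] + 1.0 * rng.standard_normal(n_sensors)
proba = decoder.predict_proba(recall_epoch.reshape(1, -1))[0]

# Hypothetical graph distances of each item from the cued item.
graph_distance = np.abs(np.arange(n_items) - cued_item)

mask = graph_distance > 0              # exclude the cued item itself
r = np.corrcoef(graph_distance[mask], proba[mask])[0, 1]
print("correlation between graph distance and decoded probability:", r)
```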
Affiliation(s)
- Simon Kern
- Clinical Psychology, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany
- Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany
- Addiction Behavior and Addiction Medicine, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany
- Juliane Nagel
- Clinical Psychology, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany
- Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany
- Addiction Behavior and Addiction Medicine, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany
- Martin F Gerchen
- Clinical Psychology, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany
- Department of Psychology, Ruprecht Karl University of Heidelberg, Heidelberg, Germany
- Bernstein Center for Computational Neuroscience Heidelberg/Mannheim, Mannheim, Germany
- Çağatay Gürsoy
- Clinical Psychology, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany
- Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany
- Addiction Behavior and Addiction Medicine, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany
- Andreas Meyer-Lindenberg
- Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany
- Bernstein Center for Computational Neuroscience Heidelberg/Mannheim, Mannheim, Germany
- Peter Kirsch
- Clinical Psychology, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany
- Department of Psychology, Ruprecht Karl University of Heidelberg, Heidelberg, Germany
- Bernstein Center for Computational Neuroscience Heidelberg/Mannheim, Mannheim, Germany
- Raymond J Dolan
- Max Planck UCL Centre for Computational Psychiatry and Ageing Research, London, United Kingdom
- Wellcome Centre for Human Neuroimaging, University College London, London, United Kingdom
- Steffen Gais
- Institute of Medical Psychology and Behavioral Neurobiology, Eberhard-Karls-University Tübingen, Tübingen, Germany
- Gordon B Feld
- Clinical Psychology, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany
- Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany
- Addiction Behavior and Addiction Medicine, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany
- Department of Psychology, Ruprecht Karl University of Heidelberg, Heidelberg, Germany
4
Ambrogioni L. In Search of Dispersed Memories: Generative Diffusion Models Are Associative Memory Networks. Entropy (Basel) 2024; 26:381. PMID: 38785630; PMCID: PMC11119823; DOI: 10.3390/e26050381.
Abstract
Uncovering the mechanisms behind long-term memory is one of the most fascinating open problems in neuroscience and artificial intelligence. Artificial associative memory networks have been used to formalize important aspects of biological memory. Generative diffusion models are a type of generative machine learning technique that has shown great performance in many tasks. Like associative memory systems, these networks define a dynamical system that converges to a set of target states. In this work, we show that generative diffusion models can be interpreted as energy-based models and that, when trained on discrete patterns, their energy function is (asymptotically) identical to that of modern Hopfield networks. This equivalence allows us to interpret the supervised training of diffusion models as a synaptic learning process that encodes the associative dynamics of a modern Hopfield network in the weight structure of a deep neural network. Leveraging this connection, we formulate a generalized framework for understanding the formation of long-term memory, in which creative generation and memory recall can be seen as parts of a unified continuum.
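The modern Hopfield dynamics that the paper connects diffusion models to can be sketched in a few lines: stored patterns are retrieved by iterating a softmax-weighted average of the patterns, which converges toward the stored pattern most similar to the query. This is a textbook-style illustration, not the paper's derivation.

```python
# Minimal sketch of modern (continuous) Hopfield retrieval.
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def hopfield_retrieve(X, xi, beta=4.0, steps=5):
    """X: (d, N) stored patterns as columns; xi: (d,) query.
    Iterates xi <- X softmax(beta * X^T xi), converging toward the
    stored pattern most similar to the query."""
    for _ in range(steps):
        xi = X @ softmax(beta * (X.T @ xi))
    return xi

rng = np.random.default_rng(3)
d, N = 64, 10
X = rng.standard_normal((d, N))
target = X[:, 0]
query = target + 0.5 * rng.standard_normal(d)   # noisy cue

out = hopfield_retrieve(X, query)
sims = X.T @ out / (np.linalg.norm(X, axis=0) * np.linalg.norm(out))
print("retrieved pattern is closest to stored pattern", int(np.argmax(sims)))
```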
Affiliation(s)
- Luca Ambrogioni
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, 6525 XZ Nijmegen, The Netherlands
5
McNamee DC. The generative neural microdynamics of cognitive processing. Curr Opin Neurobiol 2024; 85:102855. PMID: 38428170; DOI: 10.1016/j.conb.2024.102855.
Abstract
The entorhinal cortex and hippocampus form a recurrent network that informs many cognitive processes, including memory, planning, navigation, and imagination. Neural recordings from these regions reveal spatially organized population codes corresponding to external environments and abstract spaces. Aligning the former cognitive functionalities with the latter neural phenomena is a central challenge in understanding the entorhinal-hippocampal circuit (EHC). Disparate experiments demonstrate a surprising level of complexity and apparent disorder in the intricate spatiotemporal dynamics of sequential non-local hippocampal reactivations, which occur particularly, though not exclusively, during immobile pauses and rest. We review these phenomena with a particular focus on their apparent lack of physical simulative realism. These observations are then integrated into a theoretical framework, together with proposed neural circuit mechanisms, that normatively characterizes this neural complexity by conceiving of different regimes of hippocampal microdynamics as neuromarkers of diverse cognitive computations.
6
Hoffman C, Cheng J, Morales R, Ji D, Dabaghian Y. Altered patterning of neural activity in a tauopathy mouse model. bioRxiv: The Preprint Server for Biology 2024:2024.03.23.586417. PMID: 38585991; PMCID: PMC10996513; DOI: 10.1101/2024.03.23.586417.
Abstract
Alzheimer's disease (AD) is a complex neurodegenerative condition that manifests at multiple levels and involves a spectrum of abnormalities ranging from the cellular to the cognitive. Here, we investigate the impact of AD-related tau-pathology on hippocampal circuits in mice engaged in spatial navigation, and study changes of neuronal firing and dynamics of extracellular fields. While most studies are based on analyzing instantaneous or time-averaged characteristics of neuronal activity, we focus on intermediate timescales: spike trains and waveforms of oscillatory potentials, which we consider as single entities. We find that, in healthy mice, spike arrangements and wave patterns (series of crests or troughs) are coupled to the animal's location, speed, and acceleration. In contrast, in tau-mice, neural activity is structurally disarrayed: brainwave cadence is detached from locomotion, spatial selectivity is lost, and the spike flow is scrambled. Importantly, these alterations start early and accumulate with age, exposing a progressive disengagement of the hippocampal circuit from spatial navigation. These features highlight qualitatively different neurodynamics from those captured by conventional analyses and are more salient, thus revealing a new level of hippocampal circuit disruption.
Affiliation(s)
- C Hoffman
- Department of Neurology, The University of Texas McGovern Medical School, 6431 Fannin St, Houston, TX 77030
- J Cheng
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030
- R Morales
- Department of Neurology, The University of Texas McGovern Medical School, 6431 Fannin St, Houston, TX 77030
- D Ji
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030
- Y Dabaghian
- Department of Neurology, The University of Texas McGovern Medical School, 6431 Fannin St, Houston, TX 77030
7
Jiang Y. A theory of the neural mechanisms underlying negative cognitive bias in major depression. Front Psychiatry 2024; 15:1348474. PMID: 38532986; PMCID: PMC10963437; DOI: 10.3389/fpsyt.2024.1348474.
Abstract
The widely acknowledged cognitive theory of depression, developed by Aaron Beck, focuses on biased information processing that emphasizes the negative aspects of affective and conceptual information. Current attempts to discover the neurological mechanism underlying such cognitive and affective bias have successfully identified various brain regions associated with biased functions such as emotion, attention, rumination, and inhibitory control. However, the neurobiological mechanisms by which individuals with depression develop this selective processing of negative information remain an open question. This paper introduces a neurological framework centered around the frontal-limbic circuit, specifically analyzing and synthesizing the activity and functional connectivity within the amygdala, hippocampus, and medial prefrontal cortex. First, a possible explanation is established for how a positive feedback loop contributes to the persistent hyperactivity of the amygdala in depression at an automatic level. Building upon this, two hypotheses are presented: hypothesis 1 revolves around the bidirectional amygdala-hippocampal projection, which facilitates the amplification of negative emotions and memories while impeding the retrieval of opposing information from the hippocampal attractor network. Hypothesis 2 highlights the involvement of the ventromedial prefrontal cortex in the establishment of a negative cognitive framework through the generalization of conceptual and emotional information in conjunction with the amygdala and hippocampus. The primary objective of this work is to improve and complement existing pathological models of depression, pushing the frontiers of current understanding in the neuroscience of affective disorders and ultimately contributing to successful recovery from these debilitating conditions.
Affiliation(s)
- Yuyue Jiang
- University of California, Santa Barbara, Santa Barbara, CA, United States
8
Yiu YH, Leibold C. A theory of hippocampal theta correlations accounting for extrinsic and intrinsic sequences. eLife 2023; 12:RP86837. PMID: 37792453; PMCID: PMC10550285; DOI: 10.7554/elife.86837.
Abstract
Hippocampal place cell sequences have been hypothesized to serve purposes as diverse as the induction of synaptic plasticity, the formation and consolidation of long-term memories, and navigation and planning. During spatial behaviors of rodents, sequential firing of place cells at the theta timescale (known as theta sequences) encodes running trajectories, which can be considered one-dimensional behavioral sequences of traversed locations. In a two-dimensional space, however, each single location can be visited along arbitrary one-dimensional running trajectories. Thus, a place cell will generally take part in multiple different theta sequences, raising questions about how this two-dimensional topology can be reconciled with the idea of hippocampal sequences underlying memory of (one-dimensional) episodes. Here, we propose a computational model of cornu ammonis 3 (CA3) and dentate gyrus (DG), in which sensorimotor input drives the direction-dependent (extrinsic) theta sequences within CA3, reflecting the two-dimensional spatial topology, whereas the intrahippocampal CA3-DG projections concurrently produce intrinsic sequences that are independent of the specific running trajectory. Consistent with experimental data, intrinsic theta sequences are less prominent, but can nevertheless be detected during theta activity, thereby serving as running-direction-independent landmark cues. We hypothesize that the intrinsic sequences largely reflect replay and preplay activity during non-theta states.
Affiliation(s)
- Yuk-Hoi Yiu
- Fakultät für Biologie & Bernstein Center Freiburg, Albert-Ludwigs-Universität Freiburg, Freiburg, Germany
- Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München, Munich, Germany
- Christian Leibold
- Fakultät für Biologie & Bernstein Center Freiburg, Albert-Ludwigs-Universität Freiburg, Freiburg, Germany
- BrainLinks-BrainTools, Albert-Ludwigs-Universität Freiburg, Freiburg, Germany
9
German JS, Cui G, Xu C, Jacobs RA. Rapid runtime learning by curating small datasets of high-quality items obtained from memory. PLoS Comput Biol 2023; 19:e1011445. PMID: 37792896; PMCID: PMC10578607; DOI: 10.1371/journal.pcbi.1011445.
Abstract
We propose the "runtime learning" hypothesis which states that people quickly learn to perform unfamiliar tasks as the tasks arise by using task-relevant instances of concepts stored in memory during mental training. To make learning rapid, the hypothesis claims that only a few class instances are used, but these instances are especially valuable for training. The paper motivates the hypothesis by describing related ideas from the cognitive science and machine learning literatures. Using computer simulation, we show that deep neural networks (DNNs) can learn effectively from small, curated training sets, and that valuable training items tend to lie toward the centers of data item clusters in an abstract feature space. In a series of three behavioral experiments, we show that people can also learn effectively from small, curated training sets. Critically, we find that participant reaction times and fitted drift rates are best accounted for by the confidences of DNNs trained on small datasets of highly valuable items. We conclude that the runtime learning hypothesis is a novel conjecture about the relationship between learning and memory with the potential for explaining a wide variety of cognitive phenomena.
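A rough sketch of the curation idea (not the paper's experimental pipeline; features, cluster count, and classifier are illustrative choices): pick, within each class, the items closest to k-means cluster centers in feature space and train only on that small set.

```python
# Illustrative sketch: curate a small training set of cluster-central items.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

# Toy "memory": 2000 feature vectors from 4 classes (Gaussian blobs).
centers = rng.standard_normal((4, 20)) * 3
X = np.vstack([c + rng.standard_normal((500, 20)) for c in centers])
y = np.repeat(np.arange(4), 500)

def curate(X, y, per_class=5):
    """Return indices of items nearest to k-means centers within each class."""
    chosen = []
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        km = KMeans(n_clusters=per_class, n_init=10, random_state=0).fit(X[idx])
        d = np.linalg.norm(X[idx][:, None, :] - km.cluster_centers_[None], axis=2)
        chosen.extend(idx[np.argmin(d, axis=0)])
    return np.array(chosen)

small = curate(X, y)                          # 20 curated items in total
clf = LogisticRegression(max_iter=2000).fit(X[small], y[small])
print("accuracy of a classifier trained on the curated set:", clf.score(X, y))
```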
Affiliation(s)
- Joseph Scott German
- Institute for Psychology and Centre for Cognitive Science, Technical University of Darmstadt, Darmstadt, Germany
- Guofeng Cui
- Department of Computer Science, Rutgers University, Piscataway, New Jersey, United States of America
- Chenliang Xu
- Department of Computer Science, University of Rochester, Rochester, New York, United States of America
- Robert A. Jacobs
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, New York, United States of America
10
Parra-Barrero E, Vijayabaskaran S, Seabrook E, Wiskott L, Cheng S. A map of spatial navigation for neuroscience. Neurosci Biobehav Rev 2023; 152:105200. PMID: 37178943; DOI: 10.1016/j.neubiorev.2023.105200.
Abstract
Spatial navigation has received much attention from neuroscientists, leading to the identification of key brain areas and the discovery of numerous spatially selective cells. Despite this progress, our understanding of how the pieces fit together to drive behavior is generally lacking. We argue that this is partly caused by insufficient communication between behavioral and neuroscientific researchers. This has led the latter to under-appreciate the relevance and complexity of spatial behavior, and to focus too narrowly on characterizing neural representations of space, disconnected from the computations these representations are meant to enable. We therefore propose a taxonomy of navigation processes in mammals that can serve as a common framework for structuring and facilitating interdisciplinary research in the field. Using the taxonomy as a guide, we review behavioral and neural studies of spatial navigation. In doing so, we validate the taxonomy and showcase its usefulness in identifying potential issues with common experimental approaches, designing experiments that adequately target particular behaviors, correctly interpreting neural activity, and pointing to new avenues of research.
Affiliation(s)
- Eloy Parra-Barrero
- Institute for Neural Computation, Faculty of Computer Science, Ruhr University Bochum, Bochum, Germany; International Graduate School of Neuroscience, Ruhr University Bochum, Bochum, Germany
- Sandhiya Vijayabaskaran
- Institute for Neural Computation, Faculty of Computer Science, Ruhr University Bochum, Bochum, Germany
- Eddie Seabrook
- Institute for Neural Computation, Faculty of Computer Science, Ruhr University Bochum, Bochum, Germany
- Laurenz Wiskott
- Institute for Neural Computation, Faculty of Computer Science, Ruhr University Bochum, Bochum, Germany; International Graduate School of Neuroscience, Ruhr University Bochum, Bochum, Germany
- Sen Cheng
- Institute for Neural Computation, Faculty of Computer Science, Ruhr University Bochum, Bochum, Germany; International Graduate School of Neuroscience, Ruhr University Bochum, Bochum, Germany.
11
Micou C, O'Leary T. Representational drift as a window into neural and behavioural plasticity. Curr Opin Neurobiol 2023; 81:102746. PMID: 37392671; DOI: 10.1016/j.conb.2023.102746.
Abstract
Large-scale recordings of neural activity over days and weeks have revealed that neural representations of familiar tasks, percepts and actions continually evolve without obvious changes in behaviour. We hypothesise that this steady drift in neural activity and the accompanying physiological changes are due in part to the continuous application of a learning rule at the cellular and population level. Explicit predictions of this drift can be found in neural network models that use iterative learning to optimise weights. Drift therefore provides a measurable signal that can reveal systems-level properties of biological plasticity mechanisms, such as their precision and effective learning rates.
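The core idea can be illustrated with a toy model (purely illustrative, not the authors' analysis): an overcomplete linear readout trained with ongoing noisy learning keeps its behavioural error low while its weights keep moving through the degenerate solution space, i.e., the representation drifts without a change in behaviour.

```python
# Toy demonstration: stable behaviour, drifting weights.
import numpy as np

rng = np.random.default_rng(5)
n_in, n_pop = 5, 50

A = rng.standard_normal((n_pop, n_in)) / np.sqrt(n_pop)  # fixed encoding of stimuli
w_true = rng.standard_normal(n_in)                       # defines the target behaviour
w = np.zeros(n_pop)                                      # plastic readout weights
probe = rng.standard_normal((200, n_in))                 # fixed probe stimuli

snapshots, errors = [], []
for step in range(1, 20001):
    x = rng.standard_normal(n_in)
    err = w @ (A @ x) - w_true @ x
    w -= 0.05 * err * (A @ x)                 # error-correcting learning step
    w += 1e-3 * rng.standard_normal(n_pop)    # ongoing synaptic noise
    if step % 5000 == 0:
        snapshots.append(w.copy())
        pred = (probe @ A.T) @ w
        errors.append(float(np.mean((pred - probe @ w_true) ** 2)))

drift = [float(np.linalg.norm(s - snapshots[0])) for s in snapshots]
print("behavioural error at each snapshot:", np.round(errors, 3))
print("weight drift relative to first snapshot:", np.round(drift, 2))
```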
Affiliation(s)
- Charles Micou
- Department of Engineering, University of Cambridge, United Kingdom
- Timothy O'Leary
- Department of Engineering, University of Cambridge, United Kingdom; Theoretical Sciences Visiting Program, Okinawa Institute of Science and Technology Graduate University, Onna, 904-0495, Japan.
12
Zeng X, Diekmann N, Wiskott L, Cheng S. Modeling the function of episodic memory in spatial learning. Front Psychol 2023; 14:1160648. PMID: 37138984; PMCID: PMC10149844; DOI: 10.3389/fpsyg.2023.1160648.
Abstract
Episodic memory has been studied extensively in the past few decades, but so far little is understood about how it drives future behavior. Here we propose that episodic memory can facilitate learning in two fundamentally different modes: retrieval and replay, the latter being the reinstatement of hippocampal activity patterns during later sleep or awake quiescence. We study their properties by comparing three learning paradigms using computational modeling based on visually-driven reinforcement learning. First, episodic memories are retrieved to learn from single experiences (one-shot learning); second, episodic memories are replayed to facilitate learning of statistical regularities (replay learning); and, third, learning occurs online as experiences arise with no access to memories of past experiences (online learning). We found that episodic memory benefits spatial learning in a broad range of conditions, but the performance difference is meaningful only when the task is sufficiently complex and the number of learning trials is limited. Furthermore, the two modes of accessing episodic memory affect spatial learning differently. One-shot learning is typically faster than replay learning, but the latter may reach a better asymptotic performance. Finally, we investigated the benefits of sequential replay and found that replaying stochastic sequences results in faster learning than random replay when the number of replays is limited. Understanding how episodic memory drives future behavior is an important step toward elucidating the nature of episodic memory.
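A toy sketch of the replay-learning mode, using tabular Q-learning on a short corridor as a stand-in for the paper's visually driven deep RL model (all parameters are illustrative): after each episode, stored transitions are replayed to squeeze more learning out of the same experience.

```python
# Illustrative comparison of online-only learning vs. learning with replay.
import numpy as np

rng = np.random.default_rng(6)
N_STATES, GOAL = 8, 7
ACTIONS = (-1, +1)

def take_step(s, a):
    s2 = int(np.clip(s + ACTIONS[a], 0, N_STATES - 1))
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

def run(n_episodes=30, n_replay=0, gamma=0.95, lr=0.2, eps=0.2):
    Q = np.zeros((N_STATES, 2))
    buffer, steps_per_episode = [], []
    for _ in range(n_episodes):
        s, done, t = 0, False, 0
        while not done and t < 300:
            explore = rng.random() < eps or Q[s, 0] == Q[s, 1]
            a = int(rng.integers(2)) if explore else int(np.argmax(Q[s]))
            s2, r, done = take_step(s, a)
            buffer.append((s, a, r, s2, done))
            Q[s, a] += lr * (r + gamma * (0.0 if done else Q[s2].max()) - Q[s, a])
            s, t = s2, t + 1
        for _ in range(n_replay):                 # replay stored experiences
            s_, a_, r_, s2_, d_ = buffer[rng.integers(len(buffer))]
            Q[s_, a_] += lr * (r_ + gamma * (0.0 if d_ else Q[s2_].max()) - Q[s_, a_])
        steps_per_episode.append(t)
    return steps_per_episode

print("mean steps per episode, online only:", round(float(np.mean(run(n_replay=0))), 1))
print("mean steps per episode, with replay:", round(float(np.mean(run(n_replay=200))), 1))
```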
Affiliation(s)
- Xiangshuai Zeng
- Department of Computer Science, Institute for Neural Computation, Ruhr University Bochum, Bochum, Germany
- International Graduate School of Neuroscience, Ruhr University Bochum, Bochum, Germany
- Nicolas Diekmann
- Department of Computer Science, Institute for Neural Computation, Ruhr University Bochum, Bochum, Germany
- International Graduate School of Neuroscience, Ruhr University Bochum, Bochum, Germany
- Laurenz Wiskott
- Department of Computer Science, Institute for Neural Computation, Ruhr University Bochum, Bochum, Germany
- International Graduate School of Neuroscience, Ruhr University Bochum, Bochum, Germany
- Sen Cheng
- Department of Computer Science, Institute for Neural Computation, Ruhr University Bochum, Bochum, Germany
- International Graduate School of Neuroscience, Ruhr University Bochum, Bochum, Germany
- *Correspondence: Sen Cheng
13
Skatchkovsky N, Jang H, Simeone O. Bayesian continual learning via spiking neural networks. Front Comput Neurosci 2022; 16:1037976. PMID: 36465962; PMCID: PMC9708898; DOI: 10.3389/fncom.2022.1037976.
Abstract
Among the main features of biological intelligence are energy efficiency, capacity for continual adaptation, and risk management via uncertainty quantification. Neuromorphic engineering has been thus far mostly driven by the goal of implementing energy-efficient machines that take inspiration from the time-based computing paradigm of biological brains. In this paper, we take steps toward the design of neuromorphic systems that are capable of adaptation to changing learning tasks, while producing well-calibrated uncertainty quantification estimates. To this end, we derive online learning rules for spiking neural networks (SNNs) within a Bayesian continual learning framework. In it, each synaptic weight is represented by parameters that quantify the current epistemic uncertainty resulting from prior knowledge and observed data. The proposed online rules update the distribution parameters in a streaming fashion as data are observed. We instantiate the proposed approach for both real-valued and binary synaptic weights. Experimental results using Intel's Lava platform show the merits of Bayesian over frequentist learning in terms of capacity for adaptation and uncertainty quantification.
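The underlying principle, that every weight carries an uncertainty estimate updated in a streaming fashion, can be illustrated with exact online Bayesian linear regression (a deliberate simplification; the paper's spiking-network rules are different):

```python
# Streaming Bayesian updates of per-weight mean and uncertainty (linear-Gaussian case).
import numpy as np

rng = np.random.default_rng(7)
d, noise_var = 5, 0.25

w_true = rng.standard_normal(d)
P = np.eye(d)            # prior precision (identity = unit-variance prior)
b = np.zeros(d)          # precision-weighted mean; posterior mean = inv(P) @ b

for t in range(200):     # observations arrive one at a time
    x = rng.standard_normal(d)
    y = w_true @ x + np.sqrt(noise_var) * rng.standard_normal()
    P += np.outer(x, x) / noise_var          # precision update
    b += x * y / noise_var

cov = np.linalg.inv(P)
mean = cov @ b
print("posterior mean error:", np.round(np.abs(mean - w_true), 3))
print("per-weight posterior std:", np.round(np.sqrt(np.diag(cov)), 3))
```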
Affiliation(s)
- Nicolas Skatchkovsky
- King's Communication, Learning and Information Processing (KCLIP) Lab, Department of Engineering, King's College London, London, United Kingdom
- Hyeryung Jang
- Department of Artificial Intelligence, Dongguk University, Seoul, South Korea
- Osvaldo Simeone
- King's Communication, Learning and Information Processing (KCLIP) Lab, Department of Engineering, King's College London, London, United Kingdom
14
Freezing revisited: coordinated autonomic and central optimization of threat coping. Nat Rev Neurosci 2022; 23:568-580. PMID: 35760906; DOI: 10.1038/s41583-022-00608-2.
Abstract
Animals have sophisticated mechanisms for coping with danger. Freezing is a unique state that, upon threat detection, allows evidence to be gathered, response possibilities to be previsioned and preparations to be made for worst-case fight or flight. We propose that, rather than reflecting a passive fear state, the particular somatic and cognitive characteristics of freezing help to conceal overt responses, while optimizing sensory processing and action preparation. Critical for these functions are the neurotransmitters noradrenaline and acetylcholine, which modulate neural information processing and also control the sympathetic and parasympathetic branches of the autonomic nervous system. However, the interactions between autonomic systems and the brain during freezing, and the way in which they jointly coordinate responses, remain incompletely explored. We review the joint actions of these systems and offer a novel computational framework to describe their temporally harmonized integration. This reconceptualization of freezing has implications for its role in decision-making under threat and for psychopathology.
15
Zhu S, Lakshminarasimhan KJ, Arfaei N, Angelaki DE. Eye movements reveal spatiotemporal dynamics of visually-informed planning in navigation. eLife 2022; 11:e73097. PMID: 35503099; PMCID: PMC9135400; DOI: 10.7554/elife.73097.
Abstract
Goal-oriented navigation is widely understood to depend upon internal maps. Although this may be the case in many settings, humans tend to rely on vision in complex, unfamiliar environments. To study the nature of gaze during visually-guided navigation, we tasked humans with navigating to transiently visible goals in virtual mazes of varying levels of difficulty, observing that they took near-optimal trajectories in all arenas. By analyzing participants' eye movements, we gained insights into how they performed visually-informed planning. The spatial distribution of gaze revealed that environmental complexity mediated a striking trade-off in the extent to which attention was directed towards two complementary aspects of the world model: the reward location and task-relevant transitions. The temporal evolution of gaze revealed rapid, sequential prospection of the future path, evocative of neural replay. These findings suggest that the spatiotemporal characteristics of gaze during navigation are significantly shaped by the unique cognitive computations underlying real-world, sequential decision making.
Affiliation(s)
- Seren Zhu
- Center for Neural Science, New York University, New York, United States
- Nastaran Arfaei
- Department of Psychology, New York University, New York, United States
- Dora E Angelaki
- Center for Neural Science, New York University, New York, United States
- Department of Mechanical and Aerospace Engineering, New York University, New York, United States
16
Teşileanu T, Golkar S, Nasiri S, Sengupta AM, Chklovskii DB. Neural Circuits for Dynamics-Based Segmentation of Time Series. Neural Comput 2022; 34:891-938. PMID: 35026035; DOI: 10.1162/neco_a_01476.
Abstract
The brain must extract behaviorally relevant latent variables from the signals streamed by the sensory organs. Such latent variables are often encoded in the dynamics that generated the signal rather than in the specific realization of the waveform. Therefore, one problem faced by the brain is to segment time series based on underlying dynamics. We present two algorithms for performing this segmentation task that are biologically plausible, which we define as operating in a streaming setting with all learning rules being local. One algorithm is model based and can be derived from an optimization problem involving a mixture of autoregressive processes. This algorithm relies on feedback in the form of a prediction error and can also be used for forecasting future samples. In some brain regions, such as the retina, the feedback connections necessary to use the prediction error for learning are absent. For this case, we propose a second, model-free algorithm that uses a running estimate of the autocorrelation structure of the signal to perform the segmentation. We show that both algorithms do well when tasked with segmenting signals drawn from autoregressive models with piecewise-constant parameters. In particular, the segmentation accuracy is similar to that obtained from oracle-like methods in which the ground-truth parameters of the autoregressive models are known. We also test our methods on data sets generated by alternating snippets of voice recordings. We provide implementations of our algorithms at https://github.com/ttesileanu/bio-time-series.
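The model-free idea can be sketched with a sliding-window estimate of lag-1 autocorrelation applied to a signal that alternates between two AR(1) regimes (an illustration of the principle only, not the paper's algorithm, whose implementations are linked above):

```python
# Segmenting a piecewise-AR(1) signal by its running autocorrelation.
import numpy as np

rng = np.random.default_rng(8)

def ar1(phi, n, x0=0.0):
    x = np.empty(n)
    prev = x0
    for i in range(n):
        prev = phi * prev + rng.standard_normal()
        x[i] = prev
    return x

# Alternate 400-sample snippets of two dynamics (phi = 0.9 vs. phi = -0.5).
signal = np.concatenate([ar1(0.9, 400), ar1(-0.5, 400), ar1(0.9, 400), ar1(-0.5, 400)])

def running_lag1_autocorr(x, win=100):
    out = np.full(len(x), np.nan)
    for t in range(win, len(x)):
        seg = x[t - win:t]
        out[t] = np.corrcoef(seg[:-1], seg[1:])[0, 1]
    return out

rho = running_lag1_autocorr(signal)
labels = (rho > 0.2).astype(float)      # simple threshold -> segment labels
print("fraction labelled 'phi = 0.9' in each 400-sample block:")
print([float(np.round(np.mean(labels[i:i + 400]), 2)) for i in range(0, 1600, 400)])
```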
Affiliation(s)
- Tiberiu Teşileanu
- Center for Computational Neuroscience, Flatiron Institute, New York, NY 10010, U.S.A.
- Siavash Golkar
- Center for Computational Neuroscience, Flatiron Institute, New York, NY 10010, U.S.A.
- Samaneh Nasiri
- Department of Neurology, Harvard Medical School, Boston, MA 02115, U.S.A.
- Anirvan M Sengupta
- Center for Computational Neuroscience, Flatiron Institute, New York, NY 10010, and Department of Physics and Astronomy, Rutgers University, Piscataway, NJ 08854, U.S.A.
- Dmitri B Chklovskii
- Center for Computational Neuroscience, Flatiron Institute, New York, NY 10010, and Neuroscience Institute, NYU Langone Medical Center, New York, NY, U.S.A.
17
Wittkuhn L, Chien S, Hall-McMaster S, Schuck NW. Replay in minds and machines. Neurosci Biobehav Rev 2021; 129:367-388. PMID: 34371078; DOI: 10.1016/j.neubiorev.2021.08.002.
Abstract
Experience-related brain activity patterns reactivate during sleep, wakeful rest, and brief pauses from active behavior. In parallel, machine learning research has found that experience replay can lead to substantial performance improvements in artificial agents. Together, these lines of research suggest that replay has a variety of computational benefits for decision-making and learning. Here, we provide an overview of putative computational functions of replay as suggested by machine learning and neuroscientific research. We show that replay can lead to faster learning, less forgetting, and reorganization or augmentation of experiences, and that it can support planning and generalization. In addition, we highlight the benefits of reactivating abstracted internal representations rather than veridical memories, and discuss how replay could provide a mechanism to build internal representations that improve learning and decision-making.
Affiliation(s)
- Lennart Wittkuhn
- Max Planck Research Group NeuroCode, Max Planck Institute for Human Development, Lentzeallee 94, D-14195 Berlin, Germany; Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Lentzeallee 94, D-14195 Berlin, Germany.
- Samson Chien
- Max Planck Research Group NeuroCode, Max Planck Institute for Human Development, Lentzeallee 94, D-14195 Berlin, Germany; Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Lentzeallee 94, D-14195 Berlin, Germany
- Sam Hall-McMaster
- Max Planck Research Group NeuroCode, Max Planck Institute for Human Development, Lentzeallee 94, D-14195 Berlin, Germany; Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Lentzeallee 94, D-14195 Berlin, Germany
- Nicolas W Schuck
- Max Planck Research Group NeuroCode, Max Planck Institute for Human Development, Lentzeallee 94, D-14195 Berlin, Germany; Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Lentzeallee 94, D-14195 Berlin, Germany.
18
Wiggin TD, Hsiao Y, Liu JB, Huber R, Griffith LC. Rest Is Required to Learn an Appetitively-Reinforced Operant Task in Drosophila. Front Behav Neurosci 2021; 15:681593. PMID: 34220464; PMCID: PMC8250850; DOI: 10.3389/fnbeh.2021.681593.
Abstract
Maladaptive operant conditioning contributes to the development of neuropsychiatric disorders. Candidate genes have been identified that contribute to this maladaptive plasticity, but the neural basis of operant conditioning in genetic model organisms remains poorly understood. The fruit fly Drosophila melanogaster is a versatile genetic model organism that readily forms operant associations with punishment stimuli. However, operant conditioning with a food reward has not been demonstrated in flies, limiting the types of neural circuits that can be studied. Here we present the first sucrose-reinforced operant conditioning paradigm for flies. In the paradigm, flies walk along a Y-shaped track with reward locations at the terminus of each hallway. When flies turn in the reinforced direction at the center of the track, they receive a sucrose reward at the end of the hallway. Only flies that rest early in training learn the reward contingency normally. Flies rewarded independently of their behavior do not form a learned association but have the same amount of rest as trained flies, showing that rest is not driven by learning. Optogenetically-induced sleep does not promote learning, indicating that sleep itself is not sufficient for learning the operant task. We validated the sensitivity of this assay to detect the effect of genetic manipulations by testing the classic learning mutant dunce. Dunce flies are learning-impaired in the Y-Track task, indicating a likely role for cAMP in the operant coincidence detector. This novel training paradigm will provide valuable insight into the molecular mechanisms of disease and the link between sleep and learning.
Affiliation(s)
- Timothy D. Wiggin
- Department of Biology, National Center for Behavioral Genomics and Volen Center for Complex Systems, Brandeis University, Waltham, MA, United States
- Yungyi Hsiao
- Department of Biology, National Center for Behavioral Genomics and Volen Center for Complex Systems, Brandeis University, Waltham, MA, United States
- Jeffrey B. Liu
- Department of Biology, National Center for Behavioral Genomics and Volen Center for Complex Systems, Brandeis University, Waltham, MA, United States
- Robert Huber
- Radcliffe Institute for Advanced Studies, Harvard University, Cambridge, MA, United States
- Juvatech, Toledo, MA, United States
- Leslie C. Griffith
- Department of Biology, National Center for Behavioral Genomics and Volen Center for Complex Systems, Brandeis University, Waltham, MA, United States
19
Walther T, Diekmann N, Vijayabaskaran S, Donoso JR, Manahan-Vaughan D, Wiskott L, Cheng S. Context-dependent extinction learning emerging from raw sensory inputs: a reinforcement learning approach. Sci Rep 2021; 11:2713. PMID: 33526840; PMCID: PMC7851139; DOI: 10.1038/s41598-021-81157-z.
Abstract
The context-dependence of extinction learning has been well studied and requires the hippocampus. However, the underlying neural mechanisms are still poorly understood. Using memory-driven reinforcement learning and deep neural networks, we developed a model that learns to navigate autonomously in biologically realistic virtual reality environments based on raw camera inputs alone. Neither is context represented explicitly in our model, nor is context change signaled. We find that memory-intact agents learn distinct context representations, and develop ABA renewal, whereas memory-impaired agents do not. These findings reproduce the behavior of control and hippocampal animals, respectively. We therefore propose that the role of the hippocampus in the context-dependence of extinction learning might stem from its function in episodic-like memory and not in context-representation per se. We conclude that context-dependence can emerge from raw visual inputs.
Affiliation(s)
- Thomas Walther
- Institute for Neural Computation, Ruhr University Bochum, Bochum, Germany
- Nicolas Diekmann
- Institute for Neural Computation, Ruhr University Bochum, Bochum, Germany
- José R Donoso
- Institute for Neural Computation, Ruhr University Bochum, Bochum, Germany
- Laurenz Wiskott
- Institute for Neural Computation, Ruhr University Bochum, Bochum, Germany
- Sen Cheng
- Institute for Neural Computation, Ruhr University Bochum, Bochum, Germany.
20
Dohmatob E, Dumas G, Bzdok D. Dark control: The default mode network as a reinforcement learning agent. Hum Brain Mapp 2020; 41:3318-3341. PMID: 32500968; PMCID: PMC7375062; DOI: 10.1002/hbm.25019.
Abstract
The default mode network (DMN) is believed to subserve the baseline mental activity in humans. Its higher energy consumption compared to other brain networks and its intimate coupling with conscious awareness both point to an unknown overarching function. Many research streams speak in favor of an evolutionarily adaptive role in envisioning experience to anticipate the future. In the present work, we propose a process model that tries to explain how the DMN may implement continuous evaluation and prediction of the environment to guide behavior. The main purpose of DMN activity, we argue, may be described by Markov decision processes that optimize action policies via value estimates through vicarious trial and error. Our formal perspective on DMN function naturally accommodates as special cases previous interpretations based on (a) predictive coding, (b) semantic associations, and (c) a sentinel role. Moreover, this process model for the neural optimization of complex behavior in the DMN offers parsimonious explanations for recent experimental findings in animals and humans.
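The formal device invoked here, a Markov decision process solved by iterating value estimates and reading off a policy, can be illustrated with standard value iteration on a small invented MDP:

```python
# Value iteration on a toy 3-state, 2-action MDP (purely illustrative).
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9
# P[a, s, s'] = transition probability; R[s, a] = expected reward.
P = np.array([[[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],
              [[0.2, 0.8, 0.0], [0.0, 0.2, 0.8], [0.1, 0.0, 0.9]]])
R = np.array([[0.0, 0.0], [0.0, 1.0], [0.5, 0.0]])

V = np.zeros(n_states)
for _ in range(200):                      # value iteration
    Q = R + gamma * np.einsum('ast,t->sa', P, V)
    V = Q.max(axis=1)

policy = Q.argmax(axis=1)                 # greedy action policy from the values
print("state values:", np.round(V, 2), "greedy policy:", policy)
```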
Affiliation(s)
- Elvis Dohmatob
- Criteo AI Lab, Paris, France
- INRIA, Parietal Team, Saclay, France
- Neurospin, CEA, Gif-sur-Yvette, France
- Guillaume Dumas
- Institut Pasteur, Human Genetics and Cognitive Functions Unit, Paris, France
- CNRS UMR 3571 Genes, Synapses and Cognition, Institut Pasteur, Paris, France
- University Paris Diderot, Sorbonne Paris Cité, Paris, France
- Centre de Bioinformatique, Biostatistique et Biologie Intégrative, Paris, France
- Danilo Bzdok
- Department of Biomedical Engineering, McConnell Brain Imaging Centre, Montreal Neurological Institute, Faculty of Medicine, School of Computer Science, McGill University, Montreal, Canada
- Mila - Quebec Artificial Intelligence Institute, Montreal, Canada
21
Inayat S, Qandeel, Nazariahangarkolaee M, Singh S, McNaughton BL, Whishaw IQ, Mohajerani MH. Low acetylcholine during early sleep is important for motor memory consolidation. Sleep 2020; 43:zsz297. PMID: 31825510; PMCID: PMC7294415; DOI: 10.1093/sleep/zsz297.
Abstract
The synaptic homeostasis theory of sleep proposes that low neurotransmitter activity in sleep optimizes memory consolidation. We tested this theory by asking whether increasing acetylcholine levels during early sleep would weaken motor memory consolidation. We trained separate groups of adult mice on the rotarod walking task and the single pellet reaching task, and after training, administered physostigmine, an acetylcholinesterase inhibitor, to increase cholinergic tone in subsequent sleep. Post-sleep testing showed that physostigmine impaired motor skill acquisition of both tasks. Home-cage video monitoring and electrophysiology revealed that physostigmine disrupted sleep structure, delayed non-rapid-eye-movement sleep onset, and reduced slow-wave power in the hippocampus and cortex. Additional experiments showed that: (1) the impaired performance associated with physostigmine was not due to its effects on sleep structure, as 1 h of sleep deprivation after training did not impair rotarod performance, (2) a reduction in cholinergic tone by inactivation of cholinergic neurons during early sleep did not affect rotarod performance, and (3) stimulating or blocking muscarinic and nicotinic acetylcholine receptors did not impair rotarod performance. Taken together, the experiments suggest that the increased slow wave activity and inactivation of both muscarinic and nicotinic receptors during early sleep due to reduced acetylcholine contribute to motor memory consolidation.
Affiliation(s)
- Samsoon Inayat
- Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, Alberta, Canada
- Qandeel
- Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, Alberta, Canada
- Surjeet Singh
- Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, Alberta, Canada
- Bruce L McNaughton
- Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, Alberta, Canada
- Center for the Neurobiology of Learning and Memory, University of California, Irvine
- Ian Q Whishaw
- Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, Alberta, Canada
- Majid H Mohajerani
- Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, Alberta, Canada
22
Tononi G, Cirelli C. Sleep and synaptic down-selection. Eur J Neurosci 2020; 51:413-421. PMID: 30614089; PMCID: PMC6612535; DOI: 10.1111/ejn.14335.
Abstract
The synaptic homeostasis hypothesis (SHY) proposes that sleep is an essential process needed by the brain to maintain the total amount of synaptic strength under control. SHY predicts that by the end of a waking day the synaptic connections of many neural circuits undergo a net increase in synaptic strength due to ongoing learning, which is mainly mediated by synaptic potentiation. Stronger synapses require more energy and supplies and are prone to saturation, creating the need for synaptic renormalization. Such renormalization should mainly occur during sleep, when the brain is disconnected from the environment and neural circuits can be broadly reactivated off-line to undergo a systematic but specific synaptic down-selection. In short, according to SHY sleep is the price to pay for waking plasticity, to avoid runaway potentiation, decreased signal-to-noise ratio, and impaired learning due to saturation. In this review, we briefly discuss the rationale of the hypothesis and recent supportive ultrastructural evidence obtained in our laboratory. We then examine recent studies by other groups showing the causal role of cortical slow waves and hippocampal sharp waves/ripples in sleep-dependent down-selection of neural activity and synaptic strength. Finally, we discuss some of the molecular mechanisms that could mediate synaptic weakening during sleep.
Affiliation(s)
- Giulio Tononi
- Department of Psychiatry, University of Wisconsin-Madison, Madison, Wisconsin
- Chiara Cirelli
- Department of Psychiatry, University of Wisconsin-Madison, Madison, Wisconsin
23
Marshall L, Cross N, Binder S, Dang-Vu TT. Brain Rhythms During Sleep and Memory Consolidation: Neurobiological Insights. Physiology (Bethesda) 2020; 35:4-15. DOI: 10.1152/physiol.00004.2019.
Abstract
Sleep can benefit memory consolidation. A characterization of the brain regions underlying memory consolidation during sleep, as well as of their temporal interplay as reflected by specific patterns of brain electrical activity, is beginning to emerge. Here, we provide an overview of recent concepts and results on the mechanisms of sleep-related memory consolidation. The latest studies, which strongly impact future directions of research in this field, are highlighted.
Affiliation(s)
- Lisa Marshall
- Institute for Experimental and Clinical Pharmacology and Toxicology, University of Luebeck, Luebeck, Germany
- Center for Brain, Behavior and Metabolism, University of Luebeck, Luebeck, Germany
- Nathan Cross
- Perform Center, Center for Studies in Behavioral Neurobiology, and Department of Health, Kinesiology and Applied Physiology, Concordia University, Montreal, Quebec, Canada
- Centre de Recherche de l’Institut Universitaire de Gériatrie de Montréal, CIUSSS Centre-Sud-de-l’Ile-de-Montréal, Montreal, Quebec, Canada
- Sonja Binder
- Institute for Experimental and Clinical Pharmacology and Toxicology, University of Luebeck, Luebeck, Germany
- Center for Brain, Behavior and Metabolism, University of Luebeck, Luebeck, Germany
- Thien Thanh Dang-Vu
- Perform Center, Center for Studies in Behavioral Neurobiology, and Department of Health, Kinesiology and Applied Physiology, Concordia University, Montreal, Quebec, Canada
- Centre de Recherche de l’Institut Universitaire de Gériatrie de Montréal, CIUSSS Centre-Sud-de-l’Ile-de-Montréal, Montreal, Quebec, Canada
24
Görler R, Wiskott L, Cheng S. Improving sensory representations using episodic memory. Hippocampus 2019; 30:638-656. DOI: 10.1002/hipo.23186.
Affiliation(s)
- Richard Görler
- Institute for Neural Computation, Ruhr University Bochum, Bochum, Germany
- International Graduate School of Neuroscience, Ruhr University Bochum, Bochum, Germany
- Laurenz Wiskott
- Institute for Neural Computation, Ruhr University Bochum, Bochum, Germany
- Sen Cheng
- Institute for Neural Computation, Ruhr University Bochum, Bochum, Germany
25
Bilkey DK, Jensen C. Neural Markers of Event Boundaries. Top Cogn Sci 2019; 13:128-141. DOI: 10.1111/tops.12470.
26
Drieu C, Zugaro M. Hippocampal Sequences During Exploration: Mechanisms and Functions. Front Cell Neurosci 2019; 13:232. PMID: 31263399; PMCID: PMC6584963; DOI: 10.3389/fncel.2019.00232.
Abstract
Although the hippocampus plays a critical role in spatial and episodic memories, the mechanisms underlying memory formation, stabilization, and recall for adaptive behavior remain relatively unknown. During exploration, within single cycles of the ongoing theta rhythm that dominates hippocampal local field potentials, place cells form precisely ordered sequences of activity. These neural sequences result from the integration of both external inputs conveying sensory-motor information, and intrinsic network dynamics possibly related to memory processes. Their endogenous replay during subsequent sleep is critical for memory consolidation. The present review discusses possible mechanisms and functions of hippocampal theta sequences during exploration. We present several lines of evidence suggesting that these neural sequences play a key role in information processing and support the formation of initial memory traces, and discuss potential functional distinctions between neural sequences emerging during theta vs. awake sharp-wave ripples.
Collapse
Affiliation(s)
- Céline Drieu
- Center for Interdisciplinary Research in Biology, Collège de France, CNRS UMR 7241, INSERM U 1050, PSL Research University, Paris, France
| | - Michaël Zugaro
- Center for Interdisciplinary Research in Biology, Collège de France, CNRS UMR 7241, INSERM U 1050, PSL Research University, Paris, France
| |
Collapse
|
27
|
Abstract
A hallmark feature of episodic memory is that of "mental time travel," whereby an individual feels they have returned to a prior moment in time. Cognitive and behavioral neuroscience methods have revealed a neurobiological counterpart: Successful retrieval often is associated with reactivation of a prior brain state. We review the emerging literature on memory reactivation and recapitulation, and we describe evidence for the effects of emotion on these processes. Based on this review, we propose a new model: Negative Emotional Valence Enhances Recapitulation (NEVER). This model diverges from existing models of emotional memory in three key ways. First, it underscores the effects of emotion during retrieval. Second, it stresses the importance of sensory processing to emotional memory. Third, it emphasizes how emotional valence - whether an event is negative or positive - affects the way that information is remembered. The model specifically proposes that, as compared to positive events, negative events both trigger increased encoding of sensory detail and elicit a closer resemblance between the sensory encoding signature and the sensory retrieval signature. The model also proposes that negative valence enhances the reactivation and storage of sensory details over offline periods, leading to a greater divergence between the sensory recapitulation of negative and positive memories over time. Importantly, the model proposes that these valence-based differences occur even when events are equated for arousal, thus rendering an exclusively arousal-based theory of emotional memory insufficient. We conclude by discussing implications of the model and suggesting directions for future research to test the tenets of the model.
Collapse
|
28
|
Sadeh T, Chen J, Goshen-Gottstein Y, Moscovitch M. Overlap between hippocampal pre-encoding and encoding patterns supports episodic memory. Hippocampus 2019; 29:836-847. [PMID: 30779457 DOI: 10.1002/hipo.23079] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2018] [Revised: 12/15/2018] [Accepted: 01/15/2019] [Indexed: 01/13/2023]
Abstract
It is well-established that whether the information will be remembered or not depends on the extent to which the learning context is reinstated during post-encoding rest and/or at retrieval. It has yet to be determined, however, if the fundamental importance of contextual reinstatement to memory extends to periods of spontaneous neurocognitive activity prior to learning. We thus asked whether memory performance can be predicted by the extent to which spontaneous pre-encoding neural patterns resemble patterns elicited during encoding. Individuals studied and retrieved lists of words while undergoing fMRI-scanning. Multivoxel hippocampal patterns during resting periods prior to encoding resembled hippocampal patterns at encoding most strongly for items that were subsequently remembered. Furthermore, across subjects, the magnitude of similarity correlated with a behavioral measure of episodic recall. The results indicate that the neural context before learning is an important determinant of memory.
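The pattern-similarity logic described above can be illustrated with a small sketch: correlate pre-encoding rest patterns with each item's encoding pattern and compare the similarity for later-remembered versus later-forgotten items. The synthetic "voxel" data, the Pearson-correlation measure, and the planted effect below are assumptions chosen for illustration, not the study's analysis pipeline.

```python
import numpy as np

# Toy pattern-similarity sketch (synthetic data; the planted effect is an
# assumption for illustration only): correlate pre-encoding rest patterns with
# each item's encoding pattern and compare similarity for later-remembered
# versus later-forgotten items.
rng = np.random.default_rng(8)
n_voxels, n_rest, n_items = 200, 50, 60

rest = rng.standard_normal((n_rest, n_voxels))       # pre-encoding rest patterns
remembered = rng.random(n_items) < 0.5               # later memory outcome

# Encoding patterns; remembered items are built to resemble the rest state more.
rest_mean = rest.mean(axis=0)
encoding = rng.standard_normal((n_items, n_voxels))
encoding[remembered] += 3.0 * rest_mean              # exaggerated planted effect

def corr(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Mean rest-to-encoding similarity per item.
sim = np.array([np.mean([corr(r, e) for r in rest]) for e in encoding])
print("remembered items:", round(sim[remembered].mean(), 3))
print("forgotten items :", round(sim[~remembered].mean(), 3))
```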
Collapse
Affiliation(s)
- Talya Sadeh
- Department of Cognitive and Brain Sciences, Ben-Gurion University of the Negev, Beer Sheva, Israel.,Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beer Sheva, Israel.,Department of Psychology, Ben-Gurion University of the Negev, Beer Sheva, Israel.,Department of Psychology, University of Toronto, Toronto, Ontario, Canada.,Rotman Research Institute, Baycrest Centre, Toronto, Ontario, Canada
| | - Janice Chen
- Psychological & Brain Sciences, Johns Hopkins University, Baltimore, Maryland
| | | | - Morris Moscovitch
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada.,Rotman Research Institute, Baycrest Centre, Toronto, Ontario, Canada
| |
Collapse
|
29
|
Mattar MG, Daw ND. Prioritized memory access explains planning and hippocampal replay. Nat Neurosci 2018; 21:1609-1617. [PMID: 30349103 PMCID: PMC6203620 DOI: 10.1038/s41593-018-0232-z] [Citation(s) in RCA: 159] [Impact Index Per Article: 22.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2018] [Accepted: 08/17/2018] [Indexed: 11/24/2022]
Abstract
To make decisions, animals must evaluate candidate choices by accessing memories of relevant experiences. Yet little is known about which experiences are considered or ignored during deliberation, which ultimately governs choice. We propose a normative theory predicting which memories should be accessed at each moment to optimize future decisions. Using nonlocal 'replay' of spatial locations in hippocampus as a window into memory access, we simulate a spatial navigation task in which an agent accesses memories of locations sequentially, ordered by utility: how much extra reward would be earned due to better choices. This prioritization balances two desiderata: the need to evaluate imminent choices versus the gain from propagating newly encountered information to preceding locations. Our theory offers a simple explanation for numerous findings about place cells; unifies seemingly disparate proposed functions of replay including planning, learning, and consolidation; and posits a mechanism whose dysfunction may underlie pathologies like rumination and craving.
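The prioritization principle summarized above, ranking candidate memory backups by how much the backup would improve behavior (gain) weighted by how likely the improved state is to be visited (need), can be written down compactly. The toy ring world, the softmax policy, and all constants in the sketch below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal sketch of utility-ordered replay (illustrative, not the authors' code):
# each stored one-step experience is scored by EVB = gain x need, where "gain"
# is the policy improvement a Bellman backup would produce at its state and
# "need" is the expected discounted future occupancy of that state.
rng = np.random.default_rng(0)
n_states, n_actions, gamma = 6, 2, 0.9

# Deterministic ring world: action 0 moves right, action 1 moves left.
def step(s, a):
    s2 = (s + 1) % n_states if a == 0 else (s - 1) % n_states
    r = 1.0 if s2 == 0 else 0.0              # reward on entering state 0
    return s2, r

Q = np.zeros((n_states, n_actions))
memory = [(s, a, *step(s, a)) for s in range(n_states) for a in range(n_actions)]

def softmax_policy(q, beta=5.0):
    e = np.exp(beta * (q - q.max(axis=1, keepdims=True)))
    return e / e.sum(axis=1, keepdims=True)

def need_vector(pi, start, gamma):
    # Successor-representation row of the start state under the current policy.
    T = np.zeros((n_states, n_states))
    for s in range(n_states):
        for a in range(n_actions):
            s2, _ = step(s, a)
            T[s, s2] += pi[s, a]
    sr = np.linalg.inv(np.eye(n_states) - gamma * T)
    return sr[start]

agent_state = 3
for it in range(50):
    pi = softmax_policy(Q)
    need = need_vector(pi, agent_state, gamma)
    # Score every stored experience by expected value of backup (gain x need).
    scores = []
    for (s, a, s2, r) in memory:
        q_new = Q[s].copy()
        q_new[a] = r + gamma * Q[s2].max()
        pi_new = softmax_policy(q_new[None])[0]
        gain = pi_new @ q_new - pi[s] @ q_new   # value gained by the policy change at s
        scores.append(gain * need[s])
    s, a, s2, r = memory[int(np.argmax(scores))]  # replay the highest-utility memory
    Q[s, a] = r + gamma * Q[s2].max()

print(np.round(Q, 2))
```

Run on this toy world, the highest-utility backups alternate between propagating newly informative reward values backward and refining the states the agent is about to visit, which is the balance the abstract describes.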
Collapse
Affiliation(s)
- Marcelo G Mattar
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA.
| | - Nathaniel D Daw
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Department of Psychology, Princeton University, Princeton, NJ, USA
| |
Collapse
|
30
|
Karimpanal TG, Bouffanais R. Experience Replay Using Transition Sequences. Front Neurorobot 2018; 12:32. [PMID: 29977200 PMCID: PMC6022201 DOI: 10.3389/fnbot.2018.00032] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2018] [Accepted: 05/31/2018] [Indexed: 11/20/2022] Open
Abstract
Experience replay is one of the most commonly used approaches to improve the sample efficiency of reinforcement learning algorithms. In this work, we propose an approach to select and replay sequences of transitions in order to accelerate the learning of a reinforcement learning agent in an off-policy setting. In addition to selecting appropriate sequences, we also artificially construct transition sequences using information gathered from previous agent-environment interactions. These sequences, when replayed, allow value function information to trickle down to larger sections of the state/state-action space, thereby making the most of the agent's experience. We demonstrate our approach on modified versions of standard reinforcement learning tasks such as the mountain car and puddle world problems and empirically show that it enables faster, and more accurate learning of value functions as compared to other forms of experience replay. Further, we briefly discuss some of the possible extensions to this work, as well as applications and situations where this approach could be particularly useful.
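The core idea, replaying whole sequences of transitions so that value information propagates along a trajectory rather than one step at a time, can be sketched with ordinary Q-learning on a toy chain task. The environment, constants, and reverse-order replay below are assumptions chosen for clarity; they are not the paper's method for selecting or constructing sequences.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm for selecting or constructing
# sequences): replaying a whole sequence of transitions in reverse order lets
# reward information propagate along the entire trajectory in a single pass.
n_states, alpha, gamma = 10, 0.5, 0.95
Q = np.zeros((n_states, 2))                    # actions: 0 = left, 1 = right
rng = np.random.default_rng(1)

def step(s, a):
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == n_states - 1 else 0.0)   # reward only at the goal

def run_episode():
    # Behavior policy: a random walk (off-policy data collection).
    s, seq = 0, []
    while s != n_states - 1:
        a = int(rng.integers(2))
        s2, r = step(s, a)
        seq.append((s, a, r, s2))
        s = s2
    return seq

def replay_sequence(seq):
    # Reverse-order replay: the terminal reward reaches early states immediately.
    for s, a, r, s2 in reversed(seq):
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])

for _ in range(5):
    replay_sequence(run_episode())

# Rightward action values now carry value propagated back from the goal.
print(np.round(Q[:, 1], 3))
```

A single reverse pass over one random-walk episode already assigns non-zero values to states far from the goal, which one-step replay of isolated transitions would need many passes to achieve.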
Collapse
Affiliation(s)
- Thommen George Karimpanal
- Engineering Product Development, Singapore University of Technology and Design, Singapore, Singapore
| | - Roland Bouffanais
- Engineering Product Development, Singapore University of Technology and Design, Singapore, Singapore
| |
Collapse
|
31
|
Maboudi K, Ackermann E, de Jong LW, Pfeiffer BE, Foster D, Diba K, Kemere C. Uncovering temporal structure in hippocampal output patterns. eLife 2018; 7:34467. [PMID: 29869611 PMCID: PMC6013258 DOI: 10.7554/elife.34467] [Citation(s) in RCA: 38] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2017] [Accepted: 05/14/2018] [Indexed: 12/02/2022] Open
Abstract
Place cell activity of hippocampal pyramidal cells has been described as the cognitive substrate of spatial memory. Replay is observed during hippocampal sharp-wave-ripple-associated population burst events (PBEs) and is critical for consolidation and recall-guided behaviors. PBE activity has historically been analyzed as a phenomenon subordinate to the place code. Here, we use hidden Markov models to study PBEs observed in rats during exploration of both linear mazes and open fields. We demonstrate that estimated models are consistent with a spatial map of the environment, and can even decode animals’ positions during behavior. Moreover, we demonstrate the model can be used to identify hippocampal replay without recourse to the place code, using only PBE model congruence. These results suggest that downstream regions may rely on PBEs to provide a substrate for memory. Additionally, by forming models independent of animal behavior, we lay the groundwork for studies of non-spatial memory.
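The modelling idea can be illustrated with a hand-specified Poisson hidden Markov model and the forward algorithm: a candidate population burst event is scored by its likelihood under a latent-state sequence model, without reference to a place code. The state count, firing rates, and toy events below are invented; the paper estimates the HMM from neural data rather than specifying it by hand.

```python
import numpy as np
from scipy.special import logsumexp

# Sketch of scoring a population burst event (PBE) against a Poisson hidden
# Markov model. The state count, firing rates and toy events are invented for
# illustration; the paper fits the HMM to data rather than specifying it.
rng = np.random.default_rng(2)
n_states, n_cells, n_bins = 8, 20, 8

# Transition matrix biased toward stepping to the next latent state.
A = np.full((n_states, n_states), 0.02)
for k in range(n_states):
    A[k, (k + 1) % n_states] += 0.84
A /= A.sum(axis=1, keepdims=True)
pi0 = np.full(n_states, 1.0 / n_states)

# Poisson emission rates: each latent state drives a different group of cells.
rates = np.full((n_states, n_cells), 0.1)
for k in range(n_states):
    rates[k, 2 * k:2 * k + 3] = 4.0

def log_poisson(counts, lam):
    # log P(counts | lam) per state, dropping the count-factorial term, which
    # is identical for an event and any bin-shuffle of it.
    return np.sum(counts * np.log(lam) - lam, axis=-1)

def event_loglik(counts):
    # Forward algorithm in log space over the time bins of one event.
    log_alpha = np.log(pi0) + log_poisson(counts[0], rates)
    for t in range(1, len(counts)):
        log_alpha = logsumexp(log_alpha[:, None] + np.log(A), axis=0) \
                    + log_poisson(counts[t], rates)
    return float(logsumexp(log_alpha))

# A "sequential" event sweeping through the latent states, and a bin-shuffled
# version of it; the ordered event typically scores higher under the model.
seq_event = np.stack([rng.poisson(rates[k]) for k in range(n_bins)])
shuffled_event = seq_event[rng.permutation(n_bins)]
print("sequential:", round(event_loglik(seq_event), 1),
      " shuffled:", round(event_loglik(shuffled_event), 1))
```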
Collapse
Affiliation(s)
- Kourosh Maboudi
- Department of Anesthesiology, University of Michigan, Ann Arbor, United States.,Department of Psychology, University of Wisconsin-Milwaukee, Milwaukee, United States
| | - Etienne Ackermann
- Department of Electrical and Computer Engineering, Rice University, Houston, United States
| | - Laurel Watkins de Jong
- Department of Anesthesiology, University of Michigan, Ann Arbor, United States.,Department of Psychology, University of Wisconsin-Milwaukee, Milwaukee, United States
| | - Brad E Pfeiffer
- Department of Neuroscience, University of Texas Southwestern, Dallas, United States
| | - David Foster
- Department of Psychology, University of California, Berkeley, Berkeley, United States.,Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, United States
| | - Kamran Diba
- Department of Anesthesiology, University of Michigan, Ann Arbor, United States.,Department of Psychology, University of Wisconsin-Milwaukee, Milwaukee, United States
| | - Caleb Kemere
- Department of Electrical and Computer Engineering, Rice University, Houston, United States
| |
Collapse
|
32
|
Reiner M, Lev DD, Rosen A. Theta Neurofeedback Effects on Motor Memory Consolidation and Performance Accuracy: An Apparent Paradox? Neuroscience 2018; 378:198-210. [PMID: 28736135 DOI: 10.1016/j.neuroscience.2017.07.022] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2017] [Revised: 07/08/2017] [Accepted: 07/10/2017] [Indexed: 11/25/2022]
Abstract
Previous studies have shown that theta neurofeedback enhances motor memory consolidation on an easy-to-learn finger-tapping task. However, the simplicity of the finger-tapping task precludes evaluating the putative effects of elevated theta on performance accuracy. Mastering a motor sequence is classically assumed to entail faster performance with fewer errors. The speed-accuracy tradeoff (SAT) principle states that as action speed increases, motor performance accuracy decreases. The current study investigated whether theta neurofeedback could improve both performance speed and performance accuracy, or would only enhance performance speed at the cost of reduced accuracy. A more complex task was used to study the effects of elevated parietal theta on 45 healthy volunteers. The findings confirmed previous results on the effects of theta neurofeedback on memory consolidation. In contrast to the two control groups, in the theta-neurofeedback group the speed-accuracy tradeoff was reversed. The speed-accuracy tradeoff patterns only stabilized after a night's sleep, implying enhancement in terms of both speed and accuracy.
Collapse
Affiliation(s)
- Miriam Reiner
- The Virtual Reality and Neurocognition Lab, Faculty of Education in Science and Technology, Technion, Haifa 32000, Israel
| | - Dror D Lev
- Rekhasim Municipality Psychological Services, Israel
| | - Amit Rosen
- The Virtual Reality and Neurocognition Lab, Faculty of Education in Science and Technology, Technion, Haifa 32000, Israel.
| |
Collapse
|
33
|
Resonance with subthreshold oscillatory drive organizes activity and optimizes learning in neural networks. Proc Natl Acad Sci U S A 2018; 115:E3017-E3025. [PMID: 29545273 PMCID: PMC5879670 DOI: 10.1073/pnas.1716933115] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/14/2023] Open
Abstract
Networks of neurons need to reliably encode and replay patterns and sequences of activity. In the brain, sequences of spatially coding neurons are replayed in both the forward and reverse direction in time with respect to their order in recent experience. As of yet there is no network-level or biophysical mechanism known that can produce both modes of replay within the same network. Here we propose that resonance, a property of neurons, paired with subthreshold oscillations in neural input facilitate network-level learning of fixed and sequential activity patterns and lead to both forward and reverse replay. Network oscillations across and within brain areas are critical for learning and performance of memory tasks. While a large amount of work has focused on the generation of neural oscillations, their effect on neuronal populations’ spiking activity and information encoding is less known. Here, we use computational modeling to demonstrate that a shift in resonance responses can interact with oscillating input to ensure that networks of neurons properly encode new information represented in external inputs to the weights of recurrent synaptic connections. Using a neuronal network model, we find that due to an input current-dependent shift in their resonance response, individual neurons in a network will arrange their phases of firing to represent varying strengths of their respective inputs. As networks encode information, neurons fire more synchronously, and this effect limits the extent to which further “learning” (in the form of changes in synaptic strength) can occur. We also demonstrate that sequential patterns of neuronal firing can be accurately stored in the network; these sequences are later reproduced without external input (in the context of subthreshold oscillations) in both the forward and reverse directions (as has been observed following learning in vivo). To test whether a similar mechanism could act in vivo, we show that periodic stimulation of hippocampal neurons coordinates network activity and functional connectivity in a frequency-dependent manner. We conclude that resonance with subthreshold oscillations provides a plausible network-level mechanism to accurately encode and retrieve information without overstrengthening connections between neurons.
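The resonance property this abstract builds on can be shown with a toy damped-oscillator membrane variable: its response to sinusoidal drive peaks when the drive frequency matches its intrinsic frequency. The complex-valued state, the 8 Hz intrinsic frequency, and the damping constant below are arbitrary choices for illustration, not the paper's network model.

```python
import numpy as np

# Toy subthreshold resonance (all parameters arbitrary): a damped oscillatory
# membrane variable responds most strongly when the drive frequency matches
# its intrinsic frequency.
a, f0, dt, T = 5.0, 8.0, 1e-4, 3.0            # damping (1/s), intrinsic freq (Hz)
omega0 = 2.0 * np.pi * f0
t = np.arange(0.0, T, dt)

def response_amplitude(f_drive):
    z = 0.0 + 0.0j                            # complex subthreshold state
    drive = np.exp(1j * 2.0 * np.pi * f_drive * t)
    for I in drive:                           # forward-Euler integration
        z += dt * ((-a + 1j * omega0) * z + I)
    return abs(z)                             # transient has decayed by t = T

freqs = np.arange(2.0, 15.0, 1.0)
amps = [response_amplitude(f) for f in freqs]
best = freqs[int(np.argmax(amps))]
print(f"preferred drive frequency: {best:.0f} Hz (intrinsic frequency {f0:.0f} Hz)")
```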
Collapse
|
34
|
Fang J, Rüther N, Bellebaum C, Wiskott L, Cheng S. The Interaction between Semantic Representation and Episodic Memory. Neural Comput 2018; 30:293-332. [DOI: 10.1162/neco_a_01044] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
The experimental evidence on the interrelation between episodic memory and semantic memory is inconclusive. Are they independent systems, different aspects of a single system, or separate but strongly interacting systems? Here, we propose a computational role for the interaction between the semantic and episodic systems that might help resolve this debate. We hypothesize that episodic memories are represented as sequences of activation patterns. These patterns are the output of a semantic representational network that compresses the high-dimensional sensory input. We show quantitatively that the accuracy of episodic memory crucially depends on the quality of the semantic representation. We compare two types of semantic representations: appropriate representations, which means that the representation is used to store input sequences that are of the same type as those that it was trained on, and inappropriate representations, which means that stored inputs differ from the training data. Retrieval accuracy is higher for appropriate representations because the encoded sequences are less divergent than those encoded with inappropriate representations. Consistent with our model prediction, we found that human subjects remember some aspects of episodes significantly more accurately if they had previously been familiarized with the objects occurring in the episode, as compared to episodes involving unfamiliar objects. We thus conclude that the interaction with the semantic system plays an important role for episodic memory.
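The claim that episodic retrieval accuracy depends on the quality of the semantic representation can be illustrated with a deliberately simple stand-in: a PCA projection plays the role of the semantic compressor, and a Hebbian hetero-association between successive compressed patterns plays the role of the sequence store. The data generator, dimensions, and retrieval rule are assumptions made for this sketch; the model in the paper differs.

```python
import numpy as np

# Simplified stand-in for the appropriate/inappropriate comparison: a PCA
# projection acts as the "semantic" compressor and a Hebbian hetero-association
# between successive compressed patterns acts as the episodic sequence store.
rng = np.random.default_rng(3)
dim, k, seq_len = 200, 40, 6

def make_basis(seed):
    return np.random.default_rng(seed).standard_normal((k, dim))

def make_data(n, basis, seed):
    r = np.random.default_rng(seed)
    return r.standard_normal((n, k)) @ basis + 0.1 * r.standard_normal((n, dim))

def fit_compressor(data):
    # Top-k right singular vectors of the training data (a PCA projection).
    _, _, vt = np.linalg.svd(data - data.mean(0), full_matrices=False)
    return vt[:k]

def sequence_recall_error(proj, episode):
    z = episode @ proj.T                                      # compress each pattern
    n = len(episode)
    M = sum(np.outer(z[t + 1], z[t]) for t in range(n - 1))   # hetero-association
    errs = []
    for t in range(n - 1):
        pred = M @ z[t] / (np.linalg.norm(z[t]) ** 2 + 1e-9)  # cue with pattern t
        recon = pred @ proj                                   # decompress prediction
        errs.append(np.linalg.norm(recon - episode[t + 1])
                    / np.linalg.norm(episode[t + 1]))
    return float(np.mean(errs))

basis_A = make_basis(10)                                      # "type A" inputs
episode = make_data(seq_len, basis_A, seed=2)                 # an episode of type A
appropriate = fit_compressor(make_data(500, basis_A, seed=1))
inappropriate = fit_compressor(make_data(500, make_basis(99), seed=3))
print("appropriate representation  :", round(sequence_recall_error(appropriate, episode), 3))
print("inappropriate representation:", round(sequence_recall_error(inappropriate, episode), 3))
```

As in the abstract's comparison, recall error is lower when the compressor was fit on data of the same type as the stored episode.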
Collapse
Affiliation(s)
- Jing Fang
- Mercator Research Group “Structure of Memory,” Institute for Neural Computation, and Faculty of Psychology, Ruhr University Bochum, Bochum 44801, Germany
| | - Naima Rüther
- Faculty of Psychology, Ruhr University Bochum, Bochum 44801, Germany
| | - Christian Bellebaum
- Institute of Experimental Psychology, Heinrich Heine University Düsseldorf, Düsseldorf 40225, Germany
| | - Laurenz Wiskott
- Institute for Neural Computation, Ruhr University Bochum, Bochum 44801, Germany
| | - Sen Cheng
- Mercator Research Group “Structure of Memory” and Institute for Neural Computation, Ruhr University Bochum, Bochum 44801, Germany
| |
Collapse
|
35
|
Pezzulo G, Kemere C, van der Meer MAA. Internally generated hippocampal sequences as a vantage point to probe future-oriented cognition. Ann N Y Acad Sci 2017; 1396:144-165. [PMID: 28548460 DOI: 10.1111/nyas.13329] [Citation(s) in RCA: 48] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2016] [Revised: 01/31/2017] [Accepted: 02/07/2017] [Indexed: 12/22/2022]
Abstract
Information processing in the rodent hippocampus is fundamentally shaped by internally generated sequences (IGSs), expressed during two different network states: theta sequences, which repeat and reset at the ∼8 Hz theta rhythm associated with active behavior, and punctate sharp wave-ripple (SWR) sequences associated with wakeful rest or slow-wave sleep. A potpourri of diverse functional roles has been proposed for these IGSs, resulting in a fragmented conceptual landscape. Here, we advance a unitary view of IGSs, proposing that they reflect an inferential process that samples a policy from the animal's generative model, supported by hippocampus-specific priors. The same inference affords different cognitive functions when the animal is in distinct dynamical modes, associated with specific functional networks. Theta sequences arise when inference is coupled to the animal's action-perception cycle, supporting online spatial decisions, predictive processing, and episode encoding. SWR sequences arise when the animal is decoupled from the action-perception cycle and may support offline cognitive processing, such as memory consolidation, the prospective simulation of spatial trajectories, and imagination. We discuss the empirical bases of this proposal in relation to rodent studies and highlight how the proposed computational principles can shed light on the mechanisms of future-oriented cognition in humans.
Collapse
Affiliation(s)
- Giovanni Pezzulo
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
| | - Caleb Kemere
- Electrical and Computer Engineering, Rice University, Houston, Texas
| | | |
Collapse
|
36
|
Debarnot U, Rossi M, Faraguna U, Schwartz S, Sebastiani L. Sleep does not facilitate insight in older adults. Neurobiol Learn Mem 2017; 140:106-113. [PMID: 28219752 DOI: 10.1016/j.nlm.2017.02.005] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2016] [Revised: 02/04/2017] [Accepted: 02/09/2017] [Indexed: 10/20/2022]
Abstract
Sleep has been shown to foster the process of insight generation in young adults during problem solving activities. Aging is characterized by substantial changes in sleep architecture altering memory consolidation. Whether sleep might promote the occurrence of insight in older adults as well has not yet been tested experimentally. To address this issue, we tested healthy young and old volunteers on an insight problem solving task, involving both explicit and implicit features, before and after a night of sleep or a comparable wakefulness period. Data showed that insight emerged significantly less frequently after a night of sleep in older adults compared to young adults. Moreover, there was no difference in the magnitude of insight occurrence following sleep and daytime consolidation in aged participants. We further found that acquisition of implicit knowledge in the task before sleep potentiated the gain of insight in young participants, but this effect was not observed in aged participants. Overall, the present findings demonstrate that a period of sleep does not significantly promote insight in problem solving in older adults.
Collapse
Affiliation(s)
- Ursula Debarnot
- Department of Neuroscience, Faculty of Medicine, University of Geneva, Geneva, Switzerland; Geneva Neuroscience Center, University of Geneva, Geneva, Switzerland; Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland; Inter-University Laboratory of Human Movement Science-EA 7424, University Claude Bernard Lyon 1, Villeurbanne, France.
| | - Marta Rossi
- Dipartimento di Ricerca Traslazionale e delle Nuove Tecnologie in Medicina e Chirurgia, Università degli Studi di Pisa, Italy; School of Life Sciences, University of Sussex, Brighton, United Kingdom
| | - Ugo Faraguna
- Dipartimento di Ricerca Traslazionale e delle Nuove Tecnologie in Medicina e Chirurgia, Università degli Studi di Pisa, Italy; Department of Developmental Neuroscience, IRCCS Fondazione Stella Maris, Pisa, Italy
| | - Sophie Schwartz
- Department of Neuroscience, Faculty of Medicine, University of Geneva, Geneva, Switzerland; Geneva Neuroscience Center, University of Geneva, Geneva, Switzerland; Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
| | - Laura Sebastiani
- Dipartimento di Ricerca Traslazionale e delle Nuove Tecnologie in Medicina e Chirurgia, Università degli Studi di Pisa, Italy
| |
Collapse
|
37
|
Barner C, Seibold M, Born J, Diekelmann S. Consolidation of Prospective Memory: Effects of Sleep on Completed and Reinstated Intentions. Front Psychol 2017; 7:2025. [PMID: 28111558 PMCID: PMC5216900 DOI: 10.3389/fpsyg.2016.02025] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2016] [Accepted: 12/13/2016] [Indexed: 11/13/2022] Open
Abstract
Sleep has been shown to facilitate the consolidation of prospective memory, which is the ability to execute intended actions at the appropriate time in the future. In a previous study, the sleep benefit for prospective memory was mainly expressed as a preservation of prospective memory performance under divided attention as compared to full attention. Based on evidence that intentions are only remembered as long as they have not been executed yet (cf. 'Zeigarnik effect'), here we asked whether the enhancement of prospective memory by sleep vanishes if the intention is completed before sleep and whether completed intentions can be reinstated to benefit from sleep again. In Experiment 1, subjects learned cue-associate word pairs in the evening and were prospectively instructed to detect the cue words and to type in the associates in a lexical decision task (serving as ongoing task) 2 h later before a night of sleep or wakefulness. At a second surprise test 2 days later, sleep and wake subjects did not differ in prospective memory performance. Specifically, both sleep and wake groups detected fewer cue words under divided compared to full attention, indicating that sleep does not facilitate the consolidation of completed intentions. Unexpectedly, in Experiment 2, reinstating the intention, by instructing subjects about the second test after completion of the first test, was not sufficient to restore the sleep benefit. However, in Experiment 3, where subjects were instructed about both test sessions immediately after learning, sleep facilitated prospective memory performance at the second test after 2 days, evidenced by comparable cue word detection under divided attention and full attention in sleep participants, whereas wake participants detected fewer cue words under divided relative to full attention. Together, these findings show that for prospective memory to benefit from sleep, (i) the intention has to be active across the sleep period, and (ii) the intention should be induced in temporal proximity to the initial learning session.
Collapse
Affiliation(s)
- Christine Barner
- Institute of Medical Psychology and Behavioral Neurobiology, University of Tübingen, Tübingen, Germany
| | - Mitja Seibold
- Institute of Medical Psychology and Behavioral Neurobiology, University of Tübingen, Tübingen, Germany
| | - Jan Born
- Institute of Medical Psychology and Behavioral Neurobiology, University of Tübingen, Tübingen, Germany
- Center for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
| | - Susanne Diekelmann
- Institute of Medical Psychology and Behavioral Neurobiology, University of Tübingen, Tübingen, Germany
| |
Collapse
|
38
|
Box M, Jones MW, Whiteley N. A hidden Markov model for decoding and the analysis of replay in spike trains. J Comput Neurosci 2016; 41:339-366. [PMID: 27624733 PMCID: PMC5097117 DOI: 10.1007/s10827-016-0621-9] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2014] [Revised: 06/12/2016] [Accepted: 08/23/2016] [Indexed: 11/28/2022]
Abstract
We present a hidden Markov model that describes variation in an animal's position associated with varying levels of activity in action potential spike trains of individual place cell neurons. The model incorporates a coarse-graining of position, which we find to be a more parsimonious description of the system than other models. We use a sequential Monte Carlo algorithm for Bayesian inference of model parameters, including the state space dimension, and we explain how to estimate position from spike train observations (decoding). We obtain greater accuracy over other methods in the conditions of high temporal resolution and small neuronal sample size. We also present a novel, model-based approach to the study of replay: the expression of spike train activity related to behaviour during times of motionlessness or sleep, thought to be integral to the consolidation of long-term memories. We demonstrate how we can detect the time, information content and compression rate of replay events in simulated and real hippocampal data recorded from rats in two different environments, and verify the correlation between the times of detected replay events and of sharp wave/ripples in the local field potential.
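A flavour of sequential Bayesian position decoding from spike trains can be given with a bootstrap particle filter over simulated Poisson place cells. This is only a stand-in for the general idea; it is not the paper's model, which coarse-grains position in a hidden Markov model and infers its parameters with sequential Monte Carlo. The tuning curves, motion prior, and all constants below are invented.

```python
import numpy as np

# A bootstrap particle filter decoding a one-dimensional position from Poisson
# spike counts of simulated place cells. Tuning curves, motion prior and all
# constants are invented for illustration.
rng = np.random.default_rng(4)
n_cells, track_len, dt = 30, 100.0, 0.05
centres = np.linspace(0.0, track_len, n_cells)

def rates(x, peak=15.0, width=8.0):
    # Gaussian place fields (spikes/s) evaluated at position(s) x.
    return peak * np.exp(-0.5 * ((np.atleast_1d(x)[:, None] - centres) / width) ** 2)

# Simulate a run along the track and the resulting spike counts per time bin.
true_x = np.clip(np.cumsum(rng.normal(1.0, 0.5, 200)), 0.0, track_len)
counts = rng.poisson(rates(true_x) * dt)

# Particle filter with a random-walk motion prior.
n_particles = 2000
particles = rng.uniform(0.0, track_len, n_particles)
estimates = []
for y in counts:
    particles = np.clip(particles + rng.normal(0.0, 2.0, n_particles), 0.0, track_len)
    lam = rates(particles) * dt                           # (n_particles, n_cells)
    log_w = (y * np.log(lam + 1e-12) - lam).sum(axis=1)   # Poisson log-likelihood
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    estimates.append(float(particles @ w))                # posterior mean position
    particles = rng.choice(particles, n_particles, p=w)   # resample

err = np.mean(np.abs(np.array(estimates) - true_x))
print(f"mean decoding error: {err:.1f} (track length {track_len:.0f})")
```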
Collapse
Affiliation(s)
- Marc Box
- Bristol Centre for Complexity Sciences, University of Bristol, Bristol, UK
| | - Matt W. Jones
- School of Physiology and Pharmacology, University of Bristol, Bristol, UK
| | - Nick Whiteley
- School of Mathematics, University of Bristol, Bristol, UK
| |
Collapse
|
39
|
Mader EC, Mader ACL. Sleep as spatiotemporal integration of biological processes that evolved to periodically reinforce neurodynamic and metabolic homeostasis: The 2m3d paradigm of sleep. J Neurol Sci 2016; 367:63-80. [PMID: 27423566 DOI: 10.1016/j.jns.2016.05.025] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2016] [Revised: 05/12/2016] [Accepted: 05/13/2016] [Indexed: 11/19/2022]
Abstract
Sleep continues to perplex scientists and researchers. Despite decades of sleep research, we still lack a clear understanding of the biological functions and evolution of sleep. In this review, we will examine sleep from a functional and phylogenetic perspective and describe some important conceptual gaps in understanding sleep. Classical theories of the biology and evolution of sleep emphasize sensory activation, energy balance, and metabolic homeostasis. Advances in electrophysiology, functional neuroimaging, and neuroplasticity allow us to view sleep within the framework of neural dynamics. With this paradigm shift, we have come to realize the importance of neurodynamic homeostasis in shaping the biology of sleep. Evidently, animals sleep to achieve neurodynamic and metabolic homeostasis. We are not aware of any framework for understanding sleep where neurodynamic, metabolic, homeostatic, chronophasic, and afferent variables are all taken into account. This motivated us to propose the two-mode three-drive (2m3d) paradigm of sleep. In the 2m3d paradigm, local neurodynamic/metabolic (N/M) processes switch between two modes-m0 and m1-in response to three drives-afferent, chronophasic, and homeostatic. The spatiotemporal integration of local m0/m1 operations gives rise to the global states of sleep and wakefulness. As a framework of evolution, the 2m3d paradigm allows us to view sleep as a robust adaptive strategy that evolved so animals can periodically reinforce neurodynamic and metabolic homeostasis while remaining sensitive to their internal and external environment.
Collapse
Affiliation(s)
- Edward Claro Mader
- Louisiana State University Health Sciences Center, Department of Neurology, New Orleans, LA 70112, USA.
| | | |
Collapse
|
40
|
Babichev A, Cheng S, Dabaghian YA. Topological Schemas of Cognitive Maps and Spatial Learning. Front Comput Neurosci 2016; 10:18. [PMID: 27014045 PMCID: PMC4781836 DOI: 10.3389/fncom.2016.00018] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2015] [Accepted: 02/16/2016] [Indexed: 01/16/2023] Open
Abstract
Spatial navigation in mammals is based on building a mental representation of their environment-a cognitive map. However, both the nature of this cognitive map and its underpinning in neural structures and activity remains vague. A key difficulty is that these maps are collective, emergent phenomena that cannot be reduced to a simple combination of inputs provided by individual neurons. In this paper we suggest computational frameworks for integrating the spiking signals of individual cells into a spatial map, which we call schemas. We provide examples of four schemas defined by different types of topological relations that may be neurophysiologically encoded in the brain and demonstrate that each schema provides its own large-scale characteristics of the environment-the schema integrals. Moreover, we find that, in all cases, these integrals are learned at a rate which is faster than the rate of complete training of neural networks. Thus, the proposed schema framework differentiates between the cognitive aspect of spatial learning and the physiological aspect at the neural network level.
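The step from individual cells to a map-level description can be caricatured with a co-firing graph. The paper's schemas use richer topological relations; the sketch below reads off only the crudest map-level ("integral") properties, connectivity and coverage, from simulated place fields on a circular track, and every parameter in it is an illustrative assumption.

```python
import numpy as np
from collections import deque

# Toy version of the schema idea: place fields on a circular track are combined
# into a co-firing graph, and map-level properties (connectivity, coverage) are
# read off the graph. Field widths, the overlap rule and the environment are
# invented for the example.
rng = np.random.default_rng(5)
n_cells, half_width = 40, 0.45                 # angular field half-width (rad)
centres = np.sort(rng.uniform(0.0, 2.0 * np.pi, n_cells))

def circular_dist(a, b):
    d = np.abs(a - b) % (2.0 * np.pi)
    return np.minimum(d, 2.0 * np.pi - d)

# Two cells are linked if their fields overlap, i.e. they can fire together.
adj = {i: [] for i in range(n_cells)}
for i in range(n_cells):
    for j in range(i + 1, n_cells):
        if circular_dist(centres[i], centres[j]) < 2.0 * half_width:
            adj[i].append(j)
            adj[j].append(i)

# Connected components of the co-firing graph (breadth-first search).
seen, components = set(), 0
for start in range(n_cells):
    if start in seen:
        continue
    components += 1
    queue = deque([start])
    seen.add(start)
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)

# Coverage: largest angular gap between neighbouring field centres.
gaps = np.diff(np.append(centres, centres[0] + 2.0 * np.pi))
covered = bool(gaps.max() < 2.0 * half_width)
print(f"connected components: {components}, environment fully covered: {covered}")
```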
Collapse
Affiliation(s)
- Andrey Babichev
- Department of Pediatrics Neurology, Baylor College of Medicine, Jan and Dan Duncan Neurological Research Institute, Houston, TX, USA; Department of Computational and Applied Mathematics, Rice University, Houston, TX, USA
| | - Sen Cheng
- Mercator Research Group "Structure of Memory" and Department of Psychology, Ruhr-University Bochum, Bochum, Germany
| | - Yuri A Dabaghian
- Department of Pediatrics Neurology, Baylor College of Medicine, Jan and Dan Duncan Neurological Research Institute, Houston, TX, USA; Department of Computational and Applied Mathematics, Rice University, Houston, TX, USA
| |
Collapse
|
41
|
Mahoney JM, Titiz AS, Hernan AE, Scott RC. Short-Range Temporal Interactions in Sleep; Hippocampal Spike Avalanches Support a Large Milieu of Sequential Activity Including Replay. PLoS One 2016; 11:e0147708. [PMID: 26866597 PMCID: PMC4750866 DOI: 10.1371/journal.pone.0147708] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2015] [Accepted: 01/07/2016] [Indexed: 11/23/2022] Open
Abstract
Hippocampal neural systems consolidate multiple complex behaviors into memory. However, the temporal structure of neural firing supporting complex memory consolidation is unknown. Replay of hippocampal place cells during sleep supports the view that a simple repetitive behavior modifies sleep firing dynamics, but does not explain how multiple episodes could be integrated into associative networks for recollection during future cognition. Here we decode sequential firing structure within spike avalanches of all pyramidal cells recorded in sleeping rats after running in a circular track. We find that short sequences that combine into multiple long sequences capture the majority of the sequential structure during sleep, including replay of hippocampal place cells. The ensemble, however, is not optimized for maximally producing the behavior-enriched episode. Thus behavioral programming of sequential correlations occurs at the level of short-range interactions, not whole behavioral sequences and these short sequences are assembled into a large and complex milieu that could support complex memory consolidation.
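The analysis idea, extracting firing order within population avalanches and counting recurring short sub-sequences, can be sketched on a synthetic raster. The avalanche criterion, the embedded bursts, and all thresholds below are invented for illustration and are not the paper's decoding procedure.

```python
import numpy as np
from collections import Counter
from itertools import combinations

# Toy sketch: population "avalanches" are maximal runs of bins with any
# spiking, each avalanche is reduced to the order in which cells first fire,
# and recurring ordered cell pairs (the shortest sub-sequences) are counted
# across avalanches. The synthetic raster and thresholds are invented.
rng = np.random.default_rng(6)
n_cells, n_bins = 15, 3000
raster = rng.random((n_cells, n_bins)) < 0.02          # background spiking

# Embed a few stereotyped bursts in which cells 0..7 fire in order.
for start in range(100, n_bins - 20, 400):
    for k in range(8):
        raster[k, start + k] = True

active = raster.sum(axis=0) > 0                        # crude avalanche criterion

# Split the recording into avalanches: maximal runs of active bins.
avalanches, t = [], 0
while t < n_bins:
    if active[t]:
        t0 = t
        while t < n_bins and active[t]:
            t += 1
        avalanches.append((t0, t))
    else:
        t += 1

pair_counts = Counter()
for t0, t1 in avalanches:
    # Order of first spikes within this avalanche.
    firsts = {c: int(np.argmax(raster[c, t0:t1])) for c in range(n_cells)
              if raster[c, t0:t1].any()}
    order = sorted(firsts, key=firsts.get)
    for a, b in combinations(order, 2):                # (earlier cell, later cell)
        pair_counts[(a, b)] += 1

print("most frequent ordered pairs:", pair_counts.most_common(5))
```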
Collapse
Affiliation(s)
- J. Matthew Mahoney
- Department of Neurological Sciences, University of Vermont College of Medicine, Burlington, Vermont, 05405, United States of America
| | - Ali S. Titiz
- Department of Neurosurgery, University of California Los Angeles, Los Angeles, California, 90095, United States of America
| | - Amanda E. Hernan
- Department of Neurological Sciences, University of Vermont College of Medicine, Burlington, Vermont, 05405, United States of America
| | - Rod C. Scott
- Department of Neurological Sciences, University of Vermont College of Medicine, Burlington, Vermont, 05405, United States of America
- Neurosciences Unit, University College London Institute of Child Health, London, United Kingdom
| |
Collapse
|
42
|
Dissociating memory traces and scenario construction in mental time travel. Neurosci Biobehav Rev 2015; 60:82-9. [PMID: 26627866 DOI: 10.1016/j.neubiorev.2015.11.011] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2015] [Revised: 11/17/2015] [Accepted: 11/23/2015] [Indexed: 12/31/2022]
Abstract
There has been a persistent debate about how to define episodic memory and whether it is a uniquely human capacity. On the one hand, many animal cognition studies employ content-based criteria, such as the what-where-when criterion, and argue that nonhuman animals possess episodic memory. On the other hand, many human cognition studies emphasize the subjective experience during retrieval as an essential property of episodic memory and the distinctly human foresight it purportedly enables. We propose that both perspectives may examine distinct but complementary aspects of episodic memory by drawing a conceptual distinction between episodic memory traces and mental time travel. Episodic memory traces are sequential mnemonic representations of particular, personally experienced episodes. Mental time travel draws on these traces, but requires other components to construct scenarios and embed them into larger narratives. Various nonhuman animals may store episodic memory traces, and yet it is possible that only humans are able to construct and reflect on narratives of their lives - and flexibly compare alternative scenarios of the remote future.
Collapse
|
43
|
Poucet B, Chaillan F, Truchet B, Save E, Sargolini F, Hok V. Is there a pilot in the brain? Contribution of the self-positioning system to spatial navigation. Front Behav Neurosci 2015; 9:292. [PMID: 26578920 PMCID: PMC4626564 DOI: 10.3389/fnbeh.2015.00292] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2015] [Accepted: 10/15/2015] [Indexed: 11/13/2022] Open
Abstract
Since the discovery of place cells, the hippocampus is thought to be the neural substrate of a cognitive map. The later discovery of head direction cells, grid cells and border cells, as well as of cells with more complex spatial signals, has led to the idea that there is a brain system devoted to providing the animal with the information required to achieve efficient navigation. Current questioning is focused on how these signals are integrated in the brain. In this review, we focus on the issue of how self-localization is performed in the hippocampal place cell map. To do so, we first shortly review the sensory information used by place cells and then explain how this sensory information can lead to two coding modes, respectively based on external landmarks (allothetic information) and self-motion cues (idiothetic information). We hypothesize that these two modes can be used concomitantly with the rat shifting from one mode to the other during its spatial displacements. We then speculate that sequential reactivation of place cells could participate in the resetting of self-localization under specific circumstances and in learning a new environment. Finally, we provide some predictions aimed at testing specific aspects of the proposed ideas.
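The two coding modes contrasted in the review can be illustrated with a toy simulation: idiothetic path integration accumulates error over time, while occasional allothetic landmark fixes reset the estimate. The noise levels, reset schedule, and two-dimensional arena below are arbitrary assumptions, not a model from the review.

```python
import numpy as np

# Toy illustration of the two localization modes: idiothetic path integration
# accumulates error, while occasional allothetic landmark fixes reset the
# estimate. All parameters are arbitrary.
rng = np.random.default_rng(7)
n_steps, step_noise, landmark_every = 500, 0.05, 100

true_pos = np.zeros(2)
pi_only = np.zeros(2)            # pure path integration
pi_reset = np.zeros(2)           # path integration plus landmark resets
err_only, err_reset = [], []

for t in range(1, n_steps + 1):
    move = rng.normal(0.0, 0.2, 2)                      # actual displacement
    true_pos = true_pos + move
    sensed = move + rng.normal(0.0, step_noise, 2)      # noisy self-motion signal
    pi_only = pi_only + sensed
    pi_reset = pi_reset + sensed
    if t % landmark_every == 0:                         # landmark encounter: allothetic fix
        pi_reset = true_pos.copy()
    err_only.append(np.linalg.norm(pi_only - true_pos))
    err_reset.append(np.linalg.norm(pi_reset - true_pos))

print(f"mean error, path integration only : {np.mean(err_only):.2f}")
print(f"mean error, with landmark resets  : {np.mean(err_reset):.2f}")
```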
Collapse
Affiliation(s)
- Bruno Poucet
- Laboratory of Cognitive Neuroscience, CNRS and Aix-Marseille University, Marseille, France; Fédération 3C, CNRS and Aix-Marseille University, Marseille, France
| | - Franck Chaillan
- Laboratory of Cognitive Neuroscience, CNRS and Aix-Marseille University, Marseille, France; Fédération 3C, CNRS and Aix-Marseille University, Marseille, France
| | - Bruno Truchet
- Laboratory of Cognitive Neuroscience, CNRS and Aix-Marseille University, Marseille, France; Fédération 3C, CNRS and Aix-Marseille University, Marseille, France
| | - Etienne Save
- Laboratory of Cognitive Neuroscience, CNRS and Aix-Marseille University, Marseille, France; Fédération 3C, CNRS and Aix-Marseille University, Marseille, France
| | - Francesca Sargolini
- Laboratory of Cognitive Neuroscience, CNRS and Aix-Marseille University, Marseille, France; Fédération 3C, CNRS and Aix-Marseille University, Marseille, France; Institut Universitaire de France, Paris, France
| | - Vincent Hok
- Laboratory of Cognitive Neuroscience, CNRS and Aix-Marseille University, Marseille, France; Fédération 3C, CNRS and Aix-Marseille University, Marseille, France
| |
Collapse
|
44
|
Ólafsdóttir HF, Barry C, Saleem AB, Hassabis D, Spiers HJ. Hippocampal place cells construct reward related sequences through unexplored space. eLife 2015; 4:e06063. [PMID: 26112828 PMCID: PMC4479790 DOI: 10.7554/elife.06063] [Citation(s) in RCA: 170] [Impact Index Per Article: 17.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2014] [Accepted: 05/22/2015] [Indexed: 12/04/2022] Open
Abstract
Dominant theories of hippocampal function propose that place cell representations are formed during an animal's first encounter with a novel environment and are subsequently replayed during off-line states to support consolidation and future behaviour. Here we report that viewing the delivery of food to an unvisited portion of an environment leads to off-line pre-activation of place cell sequences corresponding to that space. Such ‘preplay’ was not observed for an unrewarded but otherwise similar portion of the environment. These results suggest that a hippocampal representation of a visible, yet unexplored environment can be formed if the environment is of motivational relevance to the animal. We hypothesise such goal-biased preplay may support preparation for future experiences in novel environments. DOI:http://dx.doi.org/10.7554/eLife.06063.001

As an animal explores an area, part of the brain called the hippocampus creates a mental map of the space. When the animal is in one location, a few neurons called ‘place cells’ will fire. If the animal moves to a new spot, other place cells fire instead. Each time the animal returns to that spot, the same place cells will fire. Thus, as the animal moves, a place-specific pattern of firing emerges that scientists can view by recording the cells' activity and which can be used to reconstruct the animal's position. After exploring a space, the hippocampus may replay the new place-specific pattern of activity during sleep. By doing so, the brain consolidates the memory of the space for return visits. Recent evidence now suggests that these mental rehearsals—or internal simulations of the space—may begin even before a new space has been explored.

Now, Ólafsdóttir, Barry et al. report that whether an animal's brain simulates a first visit to a new space depends on whether the animal anticipates a reward. In the experiments, rats were allowed to run up to the junction in a T-shaped track. The animals could see into each of the arms, but not enter them. Food was then placed in one of the inaccessible arms. Ólafsdóttir, Barry et al. recorded the firing of place cells in the brain of the animals when they were on the track and during a rest period afterwards. The rats were then allowed onto the inaccessible arms, and again their brain activity was recorded.

In the rest period after the rats first viewed the inaccessible arms, the place cell pattern that would later form the mental map of a journey to and from the food-containing arm was pre-activated. However, the place cell pattern that would become the mental map of the other inaccessible arm was not activated before the rat explored that area. Therefore, Ólafsdóttir, Barry et al. suggest that the perception of reward influences which place cell pattern is simulated during rest. An implication of these findings is that the brain preferentially simulates past or future experiences that are deemed to be functionally significant, such as those associated with reward. A future challenge will be to determine whether this goal-related simulation of unvisited spaces predicts and is needed for behaviour such as successful navigation to a goal. DOI:http://dx.doi.org/10.7554/eLife.06063.002
Collapse
Affiliation(s)
- H Freyja Ólafsdóttir
- Institute of Behavioural Neuroscience, Department of Experimental Psychology, Division of Psychology and Language Sciences, University College London, London, United Kingdom
| | - Caswell Barry
- Department of Cell and Developmental Biology, University College London, London, United Kingdom
| | - Aman B Saleem
- UCL Institute of Ophthalmology, University College London, London, United Kingdom
| | - Demis Hassabis
- Gatsby Computational Neuroscience Unit, University College London, London, United Kingdom
| | - Hugo J Spiers
- Institute of Behavioural Neuroscience, Department of Experimental Psychology, Division of Psychology and Language Sciences, University College London, London, United Kingdom
| |
Collapse
|
45
|
Bayati M, Valizadeh A, Abbassian A, Cheng S. Self-organization of synchronous activity propagation in neuronal networks driven by local excitation. Front Comput Neurosci 2015; 9:69. [PMID: 26089794 PMCID: PMC4454885 DOI: 10.3389/fncom.2015.00069] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2014] [Accepted: 05/20/2015] [Indexed: 12/30/2022] Open
Abstract
Many experimental and theoretical studies have suggested that the reliable propagation of synchronous neural activity is crucial for neural information processing. The propagation of synchronous firing activity in so-called synfire chains has been studied extensively in feed-forward networks of spiking neurons. However, it remains unclear how such neural activity could emerge in recurrent neuronal networks through synaptic plasticity. In this study, we investigate whether local excitation, i.e., neurons that fire at a higher frequency than the other, spontaneously active neurons in the network, can shape a network to allow for synchronous activity propagation. We use two-dimensional, locally connected and heterogeneous neuronal networks with spike-timing dependent plasticity (STDP). We find that, in our model, local excitation drives profound network changes within seconds. In the emergent network, neural activity propagates synchronously through the network. This activity originates from the site of the local excitation and propagates through the network. The synchronous activity propagation persists, even when the local excitation is removed, since it derives from the synaptic weight matrix. Importantly, once this connectivity is established it remains stable even in the presence of spontaneous activity. Our results suggest that synfire-chain-like activity can emerge in a relatively simple way in realistic neural networks by locally exciting the desired origin of the neuronal sequence.
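The plasticity ingredient this model relies on can be shown in isolation with a minimal pair-based STDP rule: pre-before-post spike pairings strengthen a synapse and post-before-pre pairings weaken it. The constants below are arbitrary and the sketch is not the paper's full two-dimensional spiking network.

```python
import numpy as np

# Minimal pair-based STDP sketch (arbitrary constants; not the paper's full
# two-dimensional spiking network): pre-before-post spike pairings strengthen
# a synapse and post-before-pre pairings weaken it, the ingredient that lets
# locally driven firing order reorganize a weight matrix.
A_plus, A_minus, tau = 0.01, 0.012, 20.0       # amplitudes and time constant (ms)

def stdp_dw(pre_spikes, post_spikes, w, w_max=1.0):
    """Weight change from all pre/post spike pairs (times in ms), keeping w in [0, w_max]."""
    dw = 0.0
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            lag = t_post - t_pre
            if lag > 0:                         # pre leads post -> potentiation
                dw += A_plus * np.exp(-lag / tau)
            elif lag < 0:                       # post leads pre -> depression
                dw -= A_minus * np.exp(lag / tau)
    return float(np.clip(w + dw, 0.0, w_max) - w)

# Twenty pairings at 50-ms intervals: pre 5 ms before post potentiates,
# the reversed order depresses.
w = 0.5
pre = np.arange(0.0, 1000.0, 50.0)
w += stdp_dw(pre, pre + 5.0, w)
print("after pre-before-post pairings:", round(w, 3))
w += stdp_dw(pre, pre - 5.0, w)
print("after post-before-pre pairings:", round(w, 3))
```

Repeated locally driven firing in a fixed order therefore biases the weight matrix in the direction of propagation, which is the network-level effect the abstract describes.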
Collapse
Affiliation(s)
- Mehdi Bayati
- Mercator Research Group "Structure of Memory", Ruhr-Universität Bochum, Bochum, Germany
| | - Alireza Valizadeh
- Department of Physics, Institute for Advanced Studies in Basic Sciences, Zanjan, Iran; School of Cognitive Sciences, Institute for Research in Fundamental Sciences, Tehran, Iran
| | | | - Sen Cheng
- Mercator Research Group "Structure of Memory", Ruhr-Universität Bochum, Bochum, Germany; Department of Psychology, Ruhr-Universität Bochum, Bochum, Germany
| |
Collapse
|
46
|
Not only … but also: REM sleep creates and NREM Stage 2 instantiates landmark junctions in cortical memory networks. Neurobiol Learn Mem 2015; 122:69-87. [PMID: 25921620 DOI: 10.1016/j.nlm.2015.04.005] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2014] [Revised: 04/16/2015] [Accepted: 04/16/2015] [Indexed: 12/13/2022]
Abstract
This article argues both rapid eye movement (REM) and non-rapid eye movement (NREM) sleep contribute to overnight episodic memory processes but their roles differ. Episodic memory may have evolved from memory for spatial navigation in animals and humans. Equally, mnemonic navigation in world and mental space may rely on fundamentally equivalent processes. Consequently, the basic spatial network characteristics of pathways which meet at omnidirectional nodes or junctions may be conserved in episodic brain networks. A pathway is formally identified with the unidirectional, sequential phases of an episodic memory. In contrast, the function of omnidirectional junctions is not well understood. In evolutionary terms, both animals and early humans undertook tours to a series of landmark junctions, to take advantage of resources (food, water and shelter), whilst trying to avoid predators. Such tours required memory for emotionally significant landmark resource-place-danger associations and the spatial relationships amongst these landmarks. In consequence, these tours may have driven the evolution of both spatial and episodic memory. The environment is dynamic. Resource-place associations are liable to shift and new resource-rich landmarks may be discovered, these changes may require re-wiring in neural networks. To realise these changes, REM may perform an associative, emotional encoding function between memory networks, engendering an omnidirectional landmark junction which is instantiated in the cortex during NREM Stage 2. In sum, REM may preplay associated elements of past episodes (rather than replay individual episodes), to engender an unconscious representation which can be used by the animal on approach to a landmark junction in wake.
Collapse
|
47
|
Abstract
The establishment of memories involves reactivation of waking neuronal activity patterns and strengthening of associated neural circuits during slow-wave sleep (SWS), a process known as "cellular consolidation" (Dudai and Morris, 2013). Reactivation of neural activity patterns during waking behaviors that occurs on a timescale of seconds to minutes is thought to constitute memory recall (O'Keefe and Nadel, 1978), whereas consolidation of memory traces may be revealed and served by correlated firing (reactivation) that appears during sleep under conditions suitable for synaptic modification (Buhry et al., 2011). Although reactivation has been observed in human neuronal recordings (Gelbard-Sagiv et al., 2008; Miller et al., 2013), reactivation during sleep has not, likely because data are difficult to obtain and the effect is subtle. Seizures, however, provide intense and synchronous, yet sparse activation (Bower et al., 2012) that could produce a stronger consolidation effect if seizures activate learning-related mechanisms similar to those activated by learned tasks. Continuous wide-bandwidth recordings from patients undergoing intracranial monitoring for drug-resistant epilepsy revealed reactivation of seizure-related neuronal activity during subsequent SWS, but not wakefulness. Those neuronal assemblies that were most strongly activated during seizures showed the largest correlation changes, suggesting that consolidation selectively strengthened neuronal circuits activated by seizures. These results suggest that seizures "hijack" physiological learning mechanisms and also suggest a novel epilepsy therapy targeting neuronal dynamics during post-seizure sleep.
Collapse
|
48
|
Cohen N, Pell L, Edelson MG, Ben-Yakov A, Pine A, Dudai Y. Peri-encoding predictors of memory encoding and consolidation. Neurosci Biobehav Rev 2015; 50:128-42. [DOI: 10.1016/j.neubiorev.2014.11.002] [Citation(s) in RCA: 49] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2014] [Revised: 10/05/2014] [Accepted: 11/02/2014] [Indexed: 10/24/2022]
|
49
|
Hippocampal Sequences and the Cognitive Map. SPRINGER SERIES IN COMPUTATIONAL NEUROSCIENCE 2015. [DOI: 10.1007/978-1-4939-1969-7_5] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
|
50
|
Wikenheiser AM, Redish AD. Decoding the cognitive map: ensemble hippocampal sequences and decision making. Curr Opin Neurobiol 2014; 32:8-15. [PMID: 25463559 DOI: 10.1016/j.conb.2014.10.002] [Citation(s) in RCA: 54] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2014] [Revised: 09/29/2014] [Accepted: 10/01/2014] [Indexed: 10/24/2022]
Abstract
Tolman proposed that complex animal behavior is mediated by the cognitive map, an integrative learning system that allows animals to reconfigure previous experience in order to compute predictions about the future. The discovery of place cells in the rodent hippocampus immediately suggested a plausible neural mechanism to fulfill the 'map' component of Tolman's theory. Recent work examining hippocampal representations occurring at fast time scales suggests that these sequences might be important for supporting the inferential mental operations associated with the cognitive map function. New findings that hippocampal sequences play an important causal role in mediating adaptive behavior on a moment-by-moment basis suggest specific neural processes that may underlie Tolman's cognitive map framework.
Collapse
Affiliation(s)
- Andrew M Wikenheiser
- Graduate Program in Neuroscience, University of Minnesota, Minneapolis, MN 55455, USA
| | - A David Redish
- Department of Neuroscience, University of Minnesota, Minneapolis, MN 55455, USA.
| |
Collapse
|