1. Liao Z, Losonczy A. Learning, Fast and Slow: Single- and Many-Shot Learning in the Hippocampus. Annu Rev Neurosci 2024; 47:187-209. PMID: 38663090. DOI: 10.1146/annurev-neuro-102423-100258.
Abstract
The hippocampus is critical for memory and spatial navigation. The ability to map novel environments, as well as more abstract conceptual relationships, is fundamental to the cognitive flexibility that humans and other animals require to survive in a dynamic world. In this review, we survey recent advances in our understanding of how this flexibility is implemented anatomically and functionally by hippocampal circuitry, during both active exploration (online) and rest (offline). We discuss the advantages and limitations of spike timing-dependent plasticity and the more recently discovered behavioral timescale synaptic plasticity in supporting distinct learning modes in the hippocampus. Finally, we suggest complementary roles for these plasticity types in explaining many-shot and single-shot learning in the hippocampus and discuss how these rules could work together to support the learning of cognitive maps.
Affiliation(s)
- Zhenrui Liao
- Department of Neuroscience and Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Attila Losonczy
- Department of Neuroscience and Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
2. Choucry A, Nomoto M, Inokuchi K. Engram mechanisms of memory linking and identity. Nat Rev Neurosci 2024; 25:375-392. PMID: 38664582. DOI: 10.1038/s41583-024-00814-0.
Abstract
Memories are thought to be stored in neuronal ensembles referred to as engrams. Studies have suggested that when two memories occur in quick succession, a proportion of their engrams overlap and the memories become linked (in a process known as prospective linking) while maintaining their individual identities. In this Review, we summarize the key principles of memory linking through engram overlap, as revealed by experimental and modelling studies. We describe evidence of the involvement of synaptic memory substrates, spine clustering and non-linear neuronal capacities in prospective linking, and suggest a dynamic somato-synaptic model, in which memories are shared between neurons yet remain separable through distinct dendritic and synaptic allocation patterns. We also bring into focus retrospective linking, in which memories become associated after encoding via offline reactivation, and discuss key temporal and mechanistic differences between prospective and retrospective linking, as well as the potential differences in their cognitive outcomes.
Affiliation(s)
- Ali Choucry
- Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Research Center for Idling Brain Science, University of Toyama, Toyama, Japan
- Department of Pharmacology and Toxicology, Faculty of Pharmacy, Cairo University, Cairo, Egypt
- Masanori Nomoto
- Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Research Center for Idling Brain Science, University of Toyama, Toyama, Japan
- CREST, Japan Science and Technology Agency (JST), University of Toyama, Toyama, Japan
- Japan Agency for Medical Research and Development (AMED), Tokyo, Japan
- Kaoru Inokuchi
- Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Research Center for Idling Brain Science, University of Toyama, Toyama, Japan
- CREST, Japan Science and Technology Agency (JST), University of Toyama, Toyama, Japan
3. Ecker A, Egas Santander D, Bolaños-Puchet S, Isbister JB, Reimann MW. Cortical cell assemblies and their underlying connectivity: An in silico study. PLoS Comput Biol 2024; 20:e1011891. PMID: 38466752. PMCID: PMC10927091. DOI: 10.1371/journal.pcbi.1011891.
Abstract
Recent developments in experimental techniques have enabled simultaneous recordings from thousands of neurons, making it possible to study functional cell assemblies. However, determining the patterns of synaptic connectivity that give rise to these assemblies remains challenging. To address this, we developed a complementary, simulation-based approach using a detailed, large-scale cortical network model. Using a combination of established methods, we detected functional cell assemblies from the stimulus-evoked spiking activity of 186,665 neurons. We studied how the structure of synaptic connectivity underlies assembly composition, quantifying the effects of thalamic innervation, recurrent connectivity, and the spatial arrangement of synapses on dendrites. These features reduce the uncertainty about whether a neuron belongs to an assembly by up to 30%, 22%, and 10%, respectively. The detected assemblies were activated in a stimulus-specific sequence and could be grouped by their position in that sequence, with the different groups affected to different degrees by the structural features we considered. Additionally, connectivity was more predictive of assembly membership if its direction aligned with the temporal order of assembly activation, if it originated from strongly interconnected populations, and if synapses clustered on dendritic branches. In summary, reversing Hebb's postulate, we showed how cells that are wired together fire together, quantifying how connectivity patterns interact to shape the emergence of assemblies. This highlights a qualitative aspect of connectivity: not just the amount but also the local structure matters, from dendritic clustering at the subcellular level to the presence of specific network motifs.
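As an illustration of the kind of functional assembly detection this abstract describes, here is a minimal, hypothetical sketch (not the authors' actual pipeline, which applies several established methods to a large-scale model): neurons in a toy spike raster are grouped by thresholding their pairwise activity correlations and taking connected components of the resulting functional graph.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy spike raster: 2 planted assemblies of 5 neurons each, 200 time bins.
# Neurons in an assembly co-fire whenever their assembly is "active".
n_bins = 200
assembly_activity = rng.random((2, n_bins)) < 0.2       # activation times
raster = np.zeros((10, n_bins))
for a in range(2):
    for n in range(5):
        member = 5 * a + n
        raster[member] = assembly_activity[a] & (rng.random(n_bins) < 0.9)

# Detect assemblies: correlate binned activity, threshold, then label
# connected components of the functional graph.
corr = np.corrcoef(raster)
adj = corr > 0.5
labels = -np.ones(10, dtype=int)
current = 0
for seed in range(10):
    if labels[seed] == -1:
        stack = [seed]
        while stack:
            node = stack.pop()
            if labels[node] == -1:
                labels[node] = current
                stack.extend(np.flatnonzero(adj[node] & (labels == -1)))
        current += 1

print(labels)  # members of each planted assembly share a label
```

With within-assembly correlations near 0.9 and across-assembly correlations near 0, the recovered labels match the planted groups.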
Affiliation(s)
- András Ecker
- Blue Brain Project, École polytechnique fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
- Daniela Egas Santander
- Blue Brain Project, École polytechnique fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
- Sirio Bolaños-Puchet
- Blue Brain Project, École polytechnique fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
- James B. Isbister
- Blue Brain Project, École polytechnique fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
- Michael W. Reimann
- Blue Brain Project, École polytechnique fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
4. Nguyen ND, Lutas A, Amsalem O, Fernando J, Ahn AYE, Hakim R, Vergara J, McMahon J, Dimidschstein J, Sabatini BL, Andermann ML. Cortical reactivations predict future sensory responses. Nature 2024; 625:110-118. PMID: 38093002. PMCID: PMC11014741. DOI: 10.1038/s41586-023-06810-1.
Abstract
Many theories of offline memory consolidation posit that the pattern of neurons activated during a salient sensory experience will be faithfully reactivated, thereby stabilizing the pattern [1,2]. However, sensory-evoked patterns are not stable but instead drift across repeated experiences [3-6]. Here, to investigate the relationship between reactivations and the drift of sensory representations, we imaged the calcium activity of thousands of excitatory neurons in the mouse lateral visual cortex. During the minute after a visual stimulus, we observed transient, stimulus-specific reactivations, often coupled with hippocampal sharp-wave ripples. Stimulus-specific reactivations were abolished by local cortical silencing during the preceding stimulus. Reactivations early in a session systematically differed from the pattern evoked by the previous stimulus: they were more similar to future stimulus response patterns, thereby predicting both within-day and across-day representational drift. In particular, neurons that participated proportionally more or less in early stimulus reactivations than in stimulus response patterns gradually increased or decreased their future stimulus responses, respectively. Indeed, we could accurately predict future changes in stimulus responses and the separation of responses to distinct stimuli using only the rate and content of reactivations. Thus, reactivations may contribute to a gradual drift and separation in sensory cortical response patterns, thereby enhancing sensory discrimination [7].
Affiliation(s)
- Nghia D Nguyen
- Program in Neuroscience, Harvard University, Boston, MA, USA
- Andrew Lutas
- Division of Endocrinology, Department of Medicine, Beth Israel Deaconess Medical Center, Boston, MA, USA
- Diabetes, Endocrinology and Obesity Branch, National Institutes of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, MD, USA
- Oren Amsalem
- Division of Endocrinology, Department of Medicine, Beth Israel Deaconess Medical Center, Boston, MA, USA
- Jesseba Fernando
- Division of Endocrinology, Department of Medicine, Beth Israel Deaconess Medical Center, Boston, MA, USA
- Andy Young-Eon Ahn
- Division of Endocrinology, Department of Medicine, Beth Israel Deaconess Medical Center, Boston, MA, USA
- Richard Hakim
- Program in Neuroscience, Harvard University, Boston, MA, USA
- Howard Hughes Medical Institute, Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Josselyn Vergara
- Stanley Center for Psychiatric Research, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Justin McMahon
- Stanley Center for Psychiatric Research, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Jordane Dimidschstein
- Stanley Center for Psychiatric Research, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Bernardo L Sabatini
- Program in Neuroscience, Harvard University, Boston, MA, USA
- Howard Hughes Medical Institute, Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Mark L Andermann
- Program in Neuroscience, Harvard University, Boston, MA, USA
- Division of Endocrinology, Department of Medicine, Beth Israel Deaconess Medical Center, Boston, MA, USA
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
5. Boscaglia M, Gastaldi C, Gerstner W, Quian Quiroga R. A dynamic attractor network model of memory formation, reinforcement and forgetting. PLoS Comput Biol 2023; 19:e1011727. PMID: 38117859. PMCID: PMC10766193. DOI: 10.1371/journal.pcbi.1011727.
Abstract
Empirical evidence shows that memories that are frequently revisited are easy to recall, and that familiar items involve larger hippocampal representations than less familiar ones. In line with these observations, here we develop a modelling approach to provide a mechanistic understanding of how hippocampal neural assemblies evolve differently, depending on the frequency of presentation of the stimuli. For this, we added an online Hebbian learning rule, background firing activity, neural adaptation and heterosynaptic plasticity to a rate attractor network model, thus creating dynamic memory representations that can persist, increase or fade according to the frequency of presentation of the corresponding memory patterns. Specifically, we show that a dynamic interplay between Hebbian learning and background firing activity can explain the relationship between the memory assembly sizes and their frequency of stimulation. Frequently stimulated assemblies increase their size independently from each other (i.e. creating orthogonal representations that do not share neurons, thus avoiding interference). Importantly, connections between neurons of assemblies that are not further stimulated become labile so that these neurons can be recruited by other assemblies, providing a neuronal mechanism of forgetting.
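A minimal caricature of the mechanism described above (online Hebbian potentiation competing with uniform synaptic decay; the rule, rates, and assembly sizes are illustrative assumptions, not the paper's full model with background activity and heterosynaptic plasticity) shows how a frequently presented pattern maintains a stronger assembly than a rarely presented one:

```python
import numpy as np

n = 40
W = np.zeros((n, n))

# Two candidate assemblies; pattern A is presented 5x as often as B.
pat_a = np.zeros(n); pat_a[:10] = 1.0
pat_b = np.zeros(n); pat_b[10:20] = 1.0

eta, decay = 0.1, 0.02
for step in range(300):
    x = pat_a if step % 6 else pat_b   # B is shown only every 6th step
    W += eta * np.outer(x, x)          # online Hebbian potentiation
    W *= (1 - decay)                   # uniform synaptic decay ("forgetting")
    np.fill_diagonal(W, 0.0)

# Mean within-assembly weight (90 off-diagonal synapses per assembly)
within_a = W[:10, :10].sum() / 90
within_b = W[10:20, 10:20].sum() / 90
print(within_a, within_b)  # frequently revisited assembly ends up stronger
```

At steady state the within-assembly weight scales with presentation frequency over decay rate, so assembly A settles roughly five times stronger than B.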
Affiliation(s)
- Marta Boscaglia
- Centre for Systems Neuroscience, University of Leicester, United Kingdom
- School of Psychology and Vision Sciences, University of Leicester, United Kingdom
- Chiara Gastaldi
- School of Computer and Communication Sciences and School of Life Sciences, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
- Wulfram Gerstner
- School of Computer and Communication Sciences and School of Life Sciences, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
- Rodrigo Quian Quiroga
- Centre for Systems Neuroscience, University of Leicester, United Kingdom
- Hospital del Mar Medical Research Institute (IMIM), Barcelona, Spain
- Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
- Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, People’s Republic of China
6.
Abstract
According to the commonly accepted opinion, memory engrams are formed and stored at the level of neural networks due to a change in the strength of synaptic connections between neurons. This hypothesis of synaptic plasticity (HSP), formulated by Donald Hebb in the 1940s, continues to dominate the directions of experimental studies and the interpretations of experimental results in the field. The universal acceptance of the HSP has transformed it from a hypothesis into an incontrovertible theory. In this article, I show that the entire body of experimental and clinical data obtained in studies of long-term memory in mammals and humans is inconsistent with the HSP. Instead, these data suggest that long-term memory is formed and stored at the intracellular level where it is reliably protected from ongoing synaptic activity, including pathological epileptic activity. It seems that the generally accepted HSP became a serious obstacle to understanding the mechanisms of memory and that progress in this field requires rethinking this doctrine and shifting experimental efforts toward exploring the intracellular mechanisms.
Affiliation(s)
- Yuri I Arshavsky
- BioCircuits Institute, University of California San Diego, La Jolla, CA, USA
7. Gallinaro JV, Scholl B, Clopath C. Synaptic weights that correlate with presynaptic selectivity increase decoding performance. PLoS Comput Biol 2023; 19:e1011362. PMID: 37549193. PMCID: PMC10434873. DOI: 10.1371/journal.pcbi.1011362.
Abstract
The activity of neurons in the visual cortex is often characterized by tuning curves, which are thought to be shaped by Hebbian plasticity during development and sensory experience. This leads to the prediction that neural circuits should be organized such that neurons with similar functional preference are connected with stronger weights. In support of this idea, previous experimental and theoretical work has provided evidence for a model of the visual cortex characterized by such functional subnetworks. A recent experimental study, however, has found that the postsynaptic preferred stimulus was defined by the total number of spines activated by a given stimulus, independent of their individual strength. While this result might seem to contradict previous literature, many factors define how a given synaptic input influences postsynaptic selectivity. Here, we designed a computational model in which postsynaptic functional preference is defined by the number of inputs activated by a given stimulus. Using a plasticity rule in which synaptic weights tend to correlate with presynaptic selectivity, independent of the functional similarity between pre- and postsynaptic activity, we find that this model can be used to decode presented stimuli in a manner comparable to maximum likelihood inference.
Affiliation(s)
- Júlia V. Gallinaro
- Bioengineering Department, Imperial College London, London, United Kingdom
- Benjamin Scholl
- Department of Neuroscience, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Claudia Clopath
- Bioengineering Department, Imperial College London, London, United Kingdom
8. Micou C, O'Leary T. Representational drift as a window into neural and behavioural plasticity. Curr Opin Neurobiol 2023; 81:102746. PMID: 37392671. DOI: 10.1016/j.conb.2023.102746.
Abstract
Large-scale recordings of neural activity over days and weeks have revealed that neural representations of familiar tasks, percepts and actions continually evolve without obvious changes in behaviour. We hypothesise that this steady drift in neural activity and the accompanying physiological changes are due in part to the continuous application of a learning rule at the cellular and population level. Explicit predictions of this drift can be found in neural network models that use iterative learning to optimise weights. Drift therefore provides a measurable signal that can reveal systems-level properties of biological plasticity mechanisms, such as their precision and effective learning rates.
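The prediction that iterative, noisy learning produces drift along functionally irrelevant directions can be sketched in a two-parameter toy model (all numbers are illustrative assumptions): noisy gradient descent keeps the behaviourally relevant output near its target while the null-space coordinate diffuses.

```python
import numpy as np

rng = np.random.default_rng(2)

# Degenerate "circuit": the output y = w1 + w2 must equal 1, but the
# difference w1 - w2 is functionally irrelevant (a null direction).
w = np.array([0.5, 0.5])
target = 1.0
eta, noise = 0.1, 0.05

drift_coord, errors = [], []
for _ in range(2000):
    err = w.sum() - target
    grad = np.array([err, err])                 # dL/dw for L = err**2 / 2
    w += -eta * grad + noise * rng.standard_normal(2)
    errors.append(abs(w.sum() - target))
    drift_coord.append(w[0] - w[1])

# Behaviour (the output error) stays controlled, while the null-space
# coordinate performs a random walk: representational drift.
print(np.mean(errors[-500:]), np.std(drift_coord))
```

The learning rule continually corrects the task-relevant sum, so its error fluctuates around a small stationary value, whereas nothing constrains the difference coordinate and it wanders freely.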
Affiliation(s)
- Charles Micou
- Department of Engineering, University of Cambridge, United Kingdom
- Timothy O'Leary
- Department of Engineering, University of Cambridge, United Kingdom
- Theoretical Sciences Visiting Program, Okinawa Institute of Science and Technology Graduate University, Onna, 904-0495, Japan
9. The molecular memory code and synaptic plasticity: A synthesis. Biosystems 2023; 224:104825. PMID: 36610586. DOI: 10.1016/j.biosystems.2022.104825.
Abstract
The most widely accepted view of memory in the brain holds that synapses are the storage sites of memory, and that memories are formed through associative modification of synapses. This view has been challenged on conceptual and empirical grounds. As an alternative, it has been proposed that molecules within the cell body are the storage sites of memory, and that memories are formed through biochemical operations on these molecules. This paper proposes a synthesis of these two views, grounded in a computational model of memory. Synapses are conceived as storage sites for the parameters of an approximate posterior probability distribution over latent causes. Intracellular molecules are conceived as storage sites for the parameters of a generative model. The model stipulates how these two components work together as part of an integrated algorithm for learning and inference.
10. Folschweiller S, Sauer JF. Controlling neuronal assemblies: a fundamental function of respiration-related brain oscillations in neuronal networks. Pflugers Arch 2023; 475:13-21. PMID: 35637391. PMCID: PMC9816207. DOI: 10.1007/s00424-022-02708-5.
Abstract
Respiration exerts profound influence on cognition, which is presumed to rely on the generation of local respiration-coherent brain oscillations and the entrainment of cortical neurons. Here, we propose an addition to that view by emphasizing the role of respiration in pacing cortical assemblies (i.e., groups of synchronized, coactive neurons). We review recent findings of how respiration directly entrains identified assembly patterns and discuss how respiration-dependent pacing of assembly activations might be beneficial for cognitive functions.
Affiliation(s)
- Shani Folschweiller
- Institute for Physiology I, Medical Faculty, Albert-Ludwigs-University Freiburg, Hermann-Herder-Strasse 7, 79104, Freiburg, Germany
- Faculty of Biology, Albert-Ludwigs-University Freiburg, Schaenzlestrasse 1, 79104, Freiburg, Germany
- Jonas-Frederic Sauer
- Institute for Physiology I, Medical Faculty, Albert-Ludwigs-University Freiburg, Hermann-Herder-Strasse 7, 79104, Freiburg, Germany
11. Fukai T. Computational models of idling brain activity for memory processing. Neurosci Res 2022; 189:75-82. PMID: 36592825. DOI: 10.1016/j.neures.2022.12.024.
Abstract
Studying the neural mechanisms underlying the brain's cognitive functions is one of the central questions in modern biology, and it has significantly impacted the development of novel technologies in artificial intelligence. Spontaneous activity is a unique feature of the brain and is currently lacking in many artificially constructed intelligent machines. Spontaneous activity may represent the brain's idling states, which are internally driven by neuronal networks and possibly participate in offline processing during awake, sleep, and resting states. Evidence is accumulating that the brain's spontaneous activity is not mere noise but part of the mechanisms that process information about previous experiences. A substantial body of literature has shown how previous sensory and behavioral experiences influence subsequent patterns of brain activity, using various methods in various animals. It seems, however, that the patterns of neural activity and their computational roles differ significantly from area to area and from function to function. In this article, I review the various forms of the brain's spontaneous activity, especially those observed during memory processing, and some attempts to model the generation mechanisms and computational roles of such activities.
Affiliation(s)
- Tomoki Fukai
- Okinawa Institute of Science and Technology, Tancha 1919-1, Onna-son, Okinawa 904-0495, Japan
12. Miehl C, Onasch S, Festa D, Gjorgjieva J. Formation and computational implications of assemblies in neural circuits. J Physiol 2022. PMID: 36068723. DOI: 10.1113/jp282750.
Abstract
In the brain, patterns of neural activity represent sensory information and store it in non-random synaptic connectivity. A prominent theoretical hypothesis states that assemblies, groups of neurons that are strongly connected to each other, are the key computational units underlying perception and memory formation. Compatible with these hypothesised assemblies, experiments have revealed groups of neurons that display synchronous activity, either spontaneously or upon stimulus presentation, and exhibit behavioural relevance. While it remains unclear how assemblies form in the brain, theoretical work has vastly contributed to the understanding of various interacting mechanisms in this process. Here, we review the recent theoretical literature on assembly formation by categorising the involved mechanisms into four components: synaptic plasticity, symmetry breaking, competition and stability. We highlight different approaches and assumptions behind assembly formation and discuss recent ideas of assemblies as the key computational unit in the brain.
Abstract figure legend: Assembly formation. Assemblies are groups of strongly connected neurons formed by the interaction of multiple mechanisms and with vast computational implications. Four interacting components are thought to drive assembly formation: synaptic plasticity, symmetry breaking, competition and stability.
Affiliation(s)
- Christoph Miehl
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438, Frankfurt, Germany
- School of Life Sciences, Technical University of Munich, 85354, Freising, Germany
- Sebastian Onasch
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438, Frankfurt, Germany
- School of Life Sciences, Technical University of Munich, 85354, Freising, Germany
- Dylan Festa
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438, Frankfurt, Germany
- School of Life Sciences, Technical University of Munich, 85354, Freising, Germany
- Julijana Gjorgjieva
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438, Frankfurt, Germany
- School of Life Sciences, Technical University of Munich, 85354, Freising, Germany
13. Small, correlated changes in synaptic connectivity may facilitate rapid motor learning. Nat Commun 2022; 13:5163. PMID: 36056006. PMCID: PMC9440011. DOI: 10.1038/s41467-022-32646-w.
Abstract
Animals rapidly adapt their movements to external perturbations, a process paralleled by changes in neural activity in the motor cortex. Experimental studies suggest that these changes originate from altered inputs (Hinput) rather than from changes in local connectivity (Hlocal), as neural covariance is largely preserved during adaptation. Since measuring synaptic changes in vivo remains very challenging, we used a modular recurrent neural network to qualitatively test this interpretation. As expected, Hinput resulted in small activity changes and largely preserved covariance. Surprisingly, given the presumed dependence of stable covariance on preserved circuit connectivity, Hlocal led to only slightly larger changes in activity and covariance, still within the range of experimental recordings. This similarity arises because Hlocal requires only small, correlated connectivity changes for successful adaptation. Simulations of tasks that impose increasingly larger behavioural changes revealed a growing difference between Hinput and Hlocal, which could be exploited when designing future experiments.
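A linear toy model (purely illustrative; not the paper's modular recurrent network) makes the Hlocal intuition concrete: a small, structured ("correlated") weight change suffices to implement a slightly rotated output mapping while leaving the activity covariance almost unchanged.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy "motor network": activity y = W s for latent command s.
n, k = 50, 3
W = rng.standard_normal((n, k))
S = rng.standard_normal((k, 5000))          # commands across trials
Y = W @ S

# Perturbation: the task now requires outputs R y (a small rotation
# in the plane of the first two output neurons).
theta = 0.1
R = np.eye(n)
R[:2, :2] = [[np.cos(theta), -np.sin(theta)],
             [np.sin(theta),  np.cos(theta)]]

# "Hlocal": adapt by changing connectivity, W -> R W.  The required
# update (R - I) W is low-rank and correlated across synapses.
W_new = R @ W
Y_new = W_new @ S
rel_change = np.linalg.norm(W_new - W) / np.linalg.norm(W)

# Compare activity covariance before and after adaptation via the
# correlation of their entries (a crude similarity index).
cov_sim = np.corrcoef(np.cov(Y).ravel(), np.cov(Y_new).ravel())[0, 1]
print(rel_change, cov_sim)  # tiny weight change, covariance nearly preserved
```

The relative weight change is on the order of a few percent, yet the pre- and post-adaptation covariances are nearly identical, mirroring the abstract's point that preserved covariance does not rule out local connectivity changes.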
14.
Abstract
Humans have the remarkable ability to continually store new memories, while maintaining old memories for a lifetime. How the brain avoids catastrophic forgetting of memories due to interference between encoded memories is an open problem in computational neuroscience. Here we present a model for continual learning in a recurrent neural network combining Hebbian learning, synaptic decay and a novel memory consolidation mechanism: memories undergo stochastic rehearsals with rates proportional to the memory's basin of attraction, causing self-amplified consolidation. This mechanism gives rise to memory lifetimes that extend much longer than the synaptic decay time, and retrieval probability of memories that gracefully decays with their age. The number of retrievable memories is proportional to a power of the number of neurons. Perturbations to the circuit model cause temporally-graded retrograde and anterograde deficits, mimicking observed memory impairments following neurological trauma.
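The interplay of Hebbian storage, synaptic decay and rehearsal described above can be caricatured in a small Hopfield-style network (the sizes, decay rate and rehearsal schedule are arbitrary choices for illustration, not the paper's model with basin-proportional rehearsal rates): only the rehearsed memory and the most recent ones remain retrievable, while unrehearsed old memories fade.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100

def store(W, p, eta=1.0):
    """Hebbian outer-product encoding of pattern p."""
    W += eta * np.outer(p, p) / n
    np.fill_diagonal(W, 0.0)
    return W

def retrieve(W, cue, steps=20):
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s

patterns = [rng.choice([-1.0, 1.0], size=n) for _ in range(6)]
W = np.zeros((n, n))
for p in patterns:
    W = store(W, p)                     # encode a new memory
    W *= 0.5                            # synaptic decay between events
    W = store(W, patterns[0], eta=0.3)  # rehearsal re-consolidates memory 0

overlaps = [abs(retrieve(W, p) @ p) / n for p in patterns]
print(overlaps)  # rehearsed memory 0 and recent memories survive; old ones fade
```

The rehearsed pattern keeps a large weight coefficient despite decay, so its lifetime far exceeds the synaptic decay time, while an old, unrehearsed pattern is no longer an attractor.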
15. Hennig MH. The sloppy relationship between neural circuit structure and function. J Physiol 2022. PMID: 35876720. DOI: 10.1113/jp282757.
Abstract
Investigating and describing the relationships between the structure of a circuit and its function has a long tradition in neuroscience. Since neural circuits acquire their structure through sophisticated developmental programmes, and memories and experiences are maintained through synaptic modification, it is to be expected that structure is closely linked to function. Recent findings challenge this hypothesis from three different angles: function does not strongly constrain circuit parameters, many parameters in neural circuits are irrelevant and contribute little to function, and circuit parameters are unstable and subject to constant random drift. At the same time, however, recent work has shown that dynamics in neural circuit activity that are related to function are robust over time and across individuals. Here this apparent contradiction is addressed by considering the properties of neural manifolds that restrict circuit activity to functionally relevant subspaces, and it is suggested that degenerate, anisotropic and unstable parameter spaces are closely related to the structure and implementation of functionally relevant neural manifolds.
Abstract figure legend: What are the relationships between noisy and highly variable microscopic neural circuit variables on the one hand and the generation of behaviour on the other? Here it is proposed that an intermediate level of description exists where this relationship can be understood in terms of low-dimensional dynamics. Recordings of neural activity during unconstrained behaviour and the development of new machine learning methods will help to uncover these links.
Affiliation(s)
- Matthias H Hennig
- Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh
16. Anwar H, Caby S, Dura-Bernal S, D’Onofrio D, Hasegan D, Deible M, Grunblatt S, Chadderdon GL, Kerr CC, Lakatos P, Lytton WW, Hazan H, Neymotin SA. Training a spiking neuronal network model of visual-motor cortex to play a virtual racket-ball game using reinforcement learning. PLoS One 2022; 17:e0265808. PMID: 35544518. PMCID: PMC9094569. DOI: 10.1371/journal.pone.0265808.
Abstract
Recent models of spiking neuronal networks have been trained to perform behaviors in static environments using a variety of learning rules, with varying degrees of biological realism. Most of these models have not been tested in dynamic visual environments, where models must make predictions about future states and adjust their behavior accordingly. The models using these learning rules are often treated as black boxes, with little analysis of the circuit architectures and learning mechanisms supporting optimal performance. Here we developed visual/motor spiking neuronal network models and trained them to play a virtual racket-ball game using several reinforcement learning algorithms inspired by the dopaminergic reward system. We systematically investigated how different architectures and circuit motifs (feed-forward, recurrent, feedback) contributed to learning and performance. We also developed a new biologically inspired learning rule that significantly enhanced performance while reducing training time. Our models included visual areas encoding game inputs and relaying the information to motor areas, which used this information to learn to move the racket to hit the ball. Neurons in the early visual area relayed information encoding object location and motion direction across the network. Neuronal association areas encoded spatial relationships between objects in the visual scene. Motor populations received inputs from visual and association areas representing the dorsal pathway. Two populations of motor neurons generated commands to move the racket up or down. Model-generated actions updated the environment and triggered reward or punishment signals that adjusted synaptic weights so that the models could learn which actions led to reward. Here we demonstrate that our biologically plausible learning rules were effective in training spiking neuronal network models to solve problems in dynamic environments.
We used our models to dissect the circuit architectures and learning rules most effective for learning. Our model shows that learning mechanisms involving different neural circuits produce similar performance in sensory-motor tasks. In biological networks, all learning mechanisms may complement one another, accelerating the learning capabilities of animals. Furthermore, this also highlights the resilience and redundancy in biological systems.
Collapse
Affiliation(s)
- Haroon Anwar
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
| | - Simon Caby
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
| | - Salvador Dura-Bernal
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Dept. Physiology & Pharmacology, State University of New York Downstate, Brooklyn, New York, United States of America
| | - David D’Onofrio
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
| | - Daniel Hasegan
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
| | - Matt Deible
- University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
| | - Sara Grunblatt
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
| | - George L. Chadderdon
- Dept. Physiology & Pharmacology, State University of New York Downstate, Brooklyn, New York, United States of America
| | - Cliff C. Kerr
- Dept Physics, University of Sydney, Sydney, Australia
- Institute for Disease Modeling, Global Health Division, Bill & Melinda Gates Foundation, Seattle, Washington, United States of America
| | - Peter Lakatos
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Dept. Psychiatry, NYU Grossman School of Medicine, New York, New York, United States of America
| | - William W. Lytton
- Dept. Physiology & Pharmacology, State University of New York Downstate, Brooklyn, New York, United States of America
- Dept Neurology, Kings County Hospital Center, Brooklyn, New York, United States of America
| | - Hananel Hazan
- Dept of Biology, Tufts University, Medford, Massachusetts, United States of America
| | - Samuel A. Neymotin
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Dept. Psychiatry, NYU Grossman School of Medicine, New York, New York, United States of America
| |
Collapse
|
17
|
Harris JJ, Kollo M, Erskine A, Schaefer A, Burdakov D. Natural VTA activity during NREM sleep influences future exploratory behavior. iScience 2022; 25:104396. [PMID: 35663010 PMCID: PMC9156940 DOI: 10.1016/j.isci.2022.104396] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2020] [Revised: 11/23/2021] [Accepted: 05/09/2022] [Indexed: 11/26/2022] Open
Abstract
During wakefulness, the VTA represents the valence of experiences and mediates affective response to the outside world. Recent work revealed that two major VTA populations, dopamine and GABA neurons, are highly active during REM sleep and less active during NREM sleep. Using long-term cell type and brain state-specific recordings, machine learning, and optogenetics, we examined the role that the sleep-activity of these neurons plays in subsequent awake behavior. We found that VTA activity during NREM (but not REM) sleep correlated with exploratory features of the next day's behavior. Disrupting natural VTA activity during NREM (but not REM) sleep reduced future tendency to explore and increased preferences for familiarity and goal-directed actions, with no direct effect on learning or memory. Our data suggest that, during deep sleep, VTA neurons engage in offline processing, consolidating not memories but affective responses to remembered environments, shaping the way that animals respond to future experiences.
Highlights: Dopamine and GABA neurons in the VTA are active during NREM as well as REM sleep. VTA activity during NREM sleep, but not REM sleep, is correlated with exploration the next day. Inhibiting this activity during NREM sleep, but not REM sleep, reduces future exploration.
Collapse
|
18
|
Folschweiller S, Sauer JF. Phase-specific pooling of sparse assembly activity by respiration-related brain oscillations. J Physiol 2022; 600:1991-2011. [PMID: 35218015 DOI: 10.1113/jp282631] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2021] [Accepted: 02/10/2022] [Indexed: 11/08/2022] Open
Abstract
Key points: Neuronal assemblies activate phase-coupled to ongoing respiration-related oscillations (RROs) in the medial prefrontal cortex of mice. The phase coupling strength of assemblies exceeds that of individual neurons. Assemblies preferentially activate during the descending phase of RRO. Despite higher assembly frequency during descending RRO, overlap between active assemblies remains constant across RRO phase. Putative GABAergic interneurons are preferentially recruited by assembly neurons during descending RRO, suggesting that interneurons might contribute to the segregation of active assemblies during the descending phase of RRO.
Abstract: Nasal breathing affects cognitive functions, but it has remained largely unclear how respiration-driven inputs shape information processing in neuronal circuits. Current theories emphasize the role of neuronal assemblies, coalitions of transiently active pyramidal cells, as the core unit of cortical network computations. Here, we show that the phase of respiration-related oscillations (RROs) influences the likelihood of activation of a subset of neuronal assemblies in the medial prefrontal cortex (mPFC) of awake mice. RROs bias the activation of neuronal assemblies more efficiently than that of individual neurons by entraining the coactivity of assembly neurons. Moreover, the activation of assemblies is moderately biased towards the descending phase of RROs. Despite the enriched activation of assemblies during descending RRO, the overlap between individual assemblies remains constant across RRO phases. Putative GABAergic interneurons are shown to coactivate with assemblies and receive enhanced excitatory drive from assembly neurons during descending RRO, suggesting that the phase-specific recruitment of putative interneurons might help to keep the activation of different assemblies separated from each other during times of preferred assembly activation.
Our results thus identify respiration-synchronized brain rhythms as drivers of neuronal assemblies and point to a role of RROs in defining time windows of enhanced yet segregated assembly activity.
Collapse
Affiliation(s)
- Shani Folschweiller
- Institute for Physiology I, Medical Faculty, Albert-Ludwigs-University Freiburg, Hermann-Herder-Strasse 7, Freiburg, D-79104, Germany
- Faculty of Biology, Albert-Ludwigs-University Freiburg, Schaenzlestrasse 1, Freiburg, D-79104, Germany
| | - Jonas-Frederic Sauer
- Institute for Physiology I, Medical Faculty, Albert-Ludwigs-University Freiburg, Hermann-Herder-Strasse 7, Freiburg, D-79104, Germany
| |
Collapse
|
19
|
Rule ME, O'Leary T. Self-healing codes: How stable neural populations can track continually reconfiguring neural representations. Proc Natl Acad Sci U S A 2022; 119:e2106692119. [PMID: 35145024 PMCID: PMC8851551 DOI: 10.1073/pnas.2106692119] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2021] [Accepted: 12/29/2021] [Indexed: 12/19/2022] Open
Abstract
As an adaptive system, the brain must retain a faithful representation of the world while continuously integrating new information. Recent experiments have measured population activity in cortical and hippocampal circuits over many days and found that patterns of neural activity associated with fixed behavioral variables and percepts change dramatically over time. Such "representational drift" raises the question of how malleable population codes can interact coherently with stable long-term representations that are found in other circuits and with relatively rigid topographic mappings of peripheral sensory and motor signals. We explore how known plasticity mechanisms can allow single neurons to reliably read out an evolving population code without external error feedback. We find that interactions between Hebbian learning and single-cell homeostasis can exploit redundancy in a distributed population code to compensate for gradual changes in tuning. Recurrent feedback of partially stabilized readouts could allow a pool of readout cells to further correct inconsistencies introduced by representational drift. This shows how relatively simple, known mechanisms can stabilize neural tuning in the short term and provides a plausible explanation for how plastic neural codes remain integrated with consolidated, long-term representations.
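The interaction of Hebbian learning with single-cell homeostasis described above can be caricatured in a few lines (a sketch under strong simplifying assumptions, not the authors' model; all constants are hypothetical): a population encodes a latent variable along a direction that drifts randomly, and a readout updated by a Hebbian rule with homeostatic weight normalization tracks the drifting code without any external error feedback, while a frozen readout decays.

```python
import numpy as np

rng = np.random.default_rng(1)
n, eta, drift, steps = 50, 0.05, 0.002, 10000
u = rng.standard_normal(n); u /= np.linalg.norm(u)   # drifting encoding direction
w = u.copy()                                         # plastic readout, starts aligned
w_frozen = u.copy()                                  # control readout: no plasticity

for _ in range(steps):
    u += drift * rng.standard_normal(n)              # slow representational drift
    u /= np.linalg.norm(u)
    s = rng.standard_normal()                        # latent behavioral variable
    x = s * u + 0.1 * rng.standard_normal(n)         # noisy population activity
    y = w @ x                                        # readout estimate of s
    w += eta * y * x                                 # Hebbian: correlate output and input
    w /= np.linalg.norm(w)                           # homeostatic weight normalization

align_plastic = abs(w @ u)                           # plastic readout stays aligned
align_frozen = abs(w_frozen @ u)                     # frozen readout loses the code
```

The redundancy of the distributed code is what makes this work: each sample carries enough signal along the current encoding direction for the Hebbian rule to follow the drift.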
Collapse
Affiliation(s)
- Michael E Rule
- Engineering Department, University of Cambridge, Cambridge CB2 1PZ, United Kingdom
| | - Timothy O'Leary
- Engineering Department, University of Cambridge, Cambridge CB2 1PZ, United Kingdom
| |
Collapse
|
20
|
Gallinaro JV, Gašparović N, Rotter S. Homeostatic control of synaptic rewiring in recurrent networks induces the formation of stable memory engrams. PLoS Comput Biol 2022; 18:e1009836. [PMID: 35143489 PMCID: PMC8865699 DOI: 10.1371/journal.pcbi.1009836] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2021] [Revised: 02/23/2022] [Accepted: 01/14/2022] [Indexed: 12/04/2022] Open
Abstract
Brain networks store new memories using functional and structural synaptic plasticity. Memory formation is generally attributed to Hebbian plasticity, while homeostatic plasticity is thought to have an ancillary role in stabilizing network dynamics. Here we report that homeostatic plasticity alone can also lead to the formation of stable memories. We analyze this phenomenon using a new theory of network remodeling, combined with numerical simulations of recurrent spiking neural networks that exhibit structural plasticity based on firing rate homeostasis. These networks are able to store repeatedly presented patterns and recall them upon the presentation of incomplete cues. Storage is fast, governed by the homeostatic drift. In contrast, forgetting is slow, driven by a diffusion process. Joint stimulation of neurons induces the growth of associative connections between them, leading to the formation of memory engrams. These memories are stored in a distributed fashion throughout connectivity matrix, and individual synaptic connections have only a small influence. Although memory-specific connections are increased in number, the total number of inputs and outputs of neurons undergo only small changes during stimulation. We find that homeostatic structural plasticity induces a specific type of "silent memories", different from conventional attractor states.
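A heavily simplified, continuous-variable sketch of this mechanism (hypothetical constants; not the authors' spiking implementation): neurons grow free axonal/dendritic elements when below a target rate and prune synapses when above it, free elements pair at random, and transient joint stimulation of a subgroup leaves that subgroup preferentially interconnected afterward.

```python
import numpy as np

n, group, target, eps, gain = 40, 10, 1.0, 0.1, 0.02
conn = np.zeros((n, n))                  # fractional synapse counts (pre -> post)
free_ax = np.zeros(n)                    # unpaired axonal elements
free_de = np.zeros(n)                    # unpaired dendritic elements

def structural_step(ext):
    global conn, free_ax, free_de
    r = ext + gain * conn.sum(axis=0)            # toy rate: drive + recurrent input
    g = eps * (target - r)                       # firing-rate homeostasis signal
    free_ax = np.maximum(free_ax + g, 0.0)       # below target: grow free elements
    free_de = np.maximum(free_de + g, 0.0)
    prune = np.clip(-g, 0.0, 0.5)                # above target: prune synapses
    conn *= (1.0 - prune)[:, None] * (1.0 - prune)[None, :]
    k = min(free_ax.sum(), free_de.sum())        # free elements pair at random
    if k > 1e-9:
        conn += k * np.outer(free_ax / free_ax.sum(), free_de / free_de.sum())
        free_ax *= 1.0 - k / free_ax.sum()
        free_de *= 1.0 - k / free_de.sum()

ext = np.full(n, 0.8)
for _ in range(300):                             # grow a homeostatic baseline network
    structural_step(ext)
stim = ext.copy(); stim[:group] = 1.5
for _ in range(100):                             # jointly stimulate the engram group
    structural_step(stim)
for _ in range(400):                             # recovery: the group rewires together
    structural_step(ext)

within = conn[:group, :group].mean()             # engram: within-group connectivity
between = conn[:group, group:].mean()
```

The engram emerges only after the stimulus ends: the co-stimulated neurons regrow elements simultaneously, so their free elements preferentially pair with one another.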
Collapse
Affiliation(s)
- Júlia V. Gallinaro
- Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Freiburg im Breisgau, Germany
- Bioengineering Department, Imperial College London, London, United Kingdom
| | - Nebojša Gašparović
- Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Freiburg im Breisgau, Germany
| | - Stefan Rotter
- Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Freiburg im Breisgau, Germany
| |
Collapse
|
21
|
Drifting assemblies for persistent memory: Neuron transitions and unsupervised compensation. Proc Natl Acad Sci U S A 2021; 118:2023832118. [PMID: 34772802 DOI: 10.1073/pnas.2023832118] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/11/2021] [Indexed: 11/18/2022] Open
Abstract
Change is ubiquitous in living beings. In particular, the connectome and neural representations can change. Nevertheless, behaviors and memories often persist over long times. In a standard model, associative memories are represented by assemblies of strongly interconnected neurons. For faithful storage these assemblies are assumed to consist of the same neurons over time. Here we propose a contrasting memory model with complete temporal remodeling of assemblies, based on experimentally observed changes of synapses and neural representations. The assemblies drift freely as noisy autonomous network activity and spontaneous synaptic turnover induce neuron exchange. The gradual exchange allows activity-dependent and homeostatic plasticity to conserve the representational structure and keep inputs, outputs, and assemblies consistent. This leads to persistent memory. Our findings explain recent experimental results on temporal evolution of fear memory representations and suggest that memory systems need to be understood in their completeness as individual parts may constantly change.
Collapse
|
22
|
Folschweiller S, Sauer JF. Respiration-Driven Brain Oscillations in Emotional Cognition. Front Neural Circuits 2021; 15:761812. [PMID: 34790100 PMCID: PMC8592085 DOI: 10.3389/fncir.2021.761812] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2021] [Accepted: 10/05/2021] [Indexed: 12/21/2022] Open
Abstract
Respiration paces brain oscillations and the firing of individual neurons, revealing a profound impact of rhythmic breathing on brain activity. Intriguingly, respiration-driven entrainment of neural activity occurs in a variety of cortical areas, including those involved in higher cognitive functions such as associative neocortical regions and the hippocampus. Here we review recent findings of respiration-entrained brain activity with a particular focus on emotional cognition. We summarize studies from different brain areas involved in emotional behavior such as fear, despair, and motivation, and compile findings of respiration-driven activities across species. Furthermore, we discuss the proposed cellular and network mechanisms by which cortical circuits are entrained by respiration. The emerging synthesis from a large body of literature suggests that the impact of respiration on brain function is widespread across the brain and highly relevant for distinct cognitive functions. These intricate links between respiration and cognitive processes call for mechanistic studies of the role of rhythmic breathing as a timing signal for brain activity.
Collapse
Affiliation(s)
- Shani Folschweiller
- Institute for Physiology I, University of Freiburg, Freiburg, Germany
- Faculty of Biology, University of Freiburg, Freiburg, Germany
| | | |
Collapse
|
23
|
Raman DV, O'Leary T. Optimal plasticity for memory maintenance during ongoing synaptic change. eLife 2021; 10:62912. [PMID: 34519270 PMCID: PMC8504970 DOI: 10.7554/elife.62912] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2020] [Accepted: 09/13/2021] [Indexed: 11/13/2022] Open
Abstract
Synaptic connections in many brain circuits fluctuate, exhibiting substantial turnover and remodelling over hours to days. Surprisingly, experiments show that most of this flux in connectivity persists in the absence of learning or known plasticity signals. How can neural circuits retain learned information despite a large proportion of ongoing and potentially disruptive synaptic changes? We address this question from first principles by analysing how much compensatory plasticity would be required to optimally counteract ongoing fluctuations, regardless of whether fluctuations are random or systematic. Remarkably, we find that the answer is largely independent of plasticity mechanisms and circuit architectures: compensatory plasticity should be at most equal in magnitude to fluctuations, and often less, in direct agreement with previously unexplained experimental observations. Moreover, our analysis shows that a high proportion of learning-independent synaptic change is consistent with plasticity mechanisms that accurately compute error gradients.
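The headline result, that compensation should not exceed the size of the fluctuations it counteracts, can be reproduced in a toy simulation (a hypothetical setup with idealized exact-gradient compensation; with noisy gradient estimates the optimum shifts lower still, as the paper argues):

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma, steps = 100, 0.1, 6000
fluct = sigma * np.sqrt(n)                     # typical per-step fluctuation size

def steady_error(comp):
    """Mean distance from the learned optimum under drift plus compensation."""
    e = np.zeros(n)                            # synaptic error vector (w - w_target)
    errs = []
    for t in range(steps):
        e += sigma * rng.standard_normal(n)    # ongoing, learning-independent flux
        norm = np.linalg.norm(e)
        if norm > 0:
            e -= comp * e / norm               # compensatory step of fixed magnitude
        if t > steps // 2:
            errs.append(norm)                  # record error after burn-in
    return float(np.mean(errs))

matched = steady_error(fluct)                  # compensation == fluctuation size
over = steady_error(4 * fluct)                 # overcompensation
under = steady_error(0.25 * fluct)             # undercompensation
```

The steady-state error is smallest when the compensatory step matches the typical fluctuation and grows when plasticity over- or under-compensates, consistent with the bound derived in the abstract.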
Collapse
Affiliation(s)
- Dhruva V Raman
- Department of Engineering, University of Cambridge, Cambridge, United Kingdom
| | - Timothy O'Leary
- Department of Engineering, University of Cambridge, Cambridge, United Kingdom
| |
Collapse
|
24
|
Computational roles of intrinsic synaptic dynamics. Curr Opin Neurobiol 2021; 70:34-42. [PMID: 34303124 DOI: 10.1016/j.conb.2021.06.002] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2021] [Revised: 05/14/2021] [Accepted: 06/15/2021] [Indexed: 12/26/2022]
Abstract
Conventional theories assume that long-term information storage in the brain is implemented by modifying synaptic efficacy. Recent experimental findings challenge this view by demonstrating that dendritic spine sizes, or their corresponding synaptic weights, are highly volatile even in the absence of neural activity. Here, we review previous computational works on the roles of these intrinsic synaptic dynamics. We first present the possibility for neuronal networks to sustain stable performance in their presence, and we then hypothesize that intrinsic dynamics could be more than mere noise to withstand, but they may improve information processing in the brain.
Collapse
|
25
|
Mau W, Hasselmo ME, Cai DJ. The brain in motion: How ensemble fluidity drives memory-updating and flexibility. eLife 2020; 9:e63550. [PMID: 33372892 PMCID: PMC7771967 DOI: 10.7554/elife.63550] [Citation(s) in RCA: 47] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2020] [Accepted: 12/12/2020] [Indexed: 12/18/2022] Open
Abstract
While memories are often thought of as flashbacks to a previous experience, they do not simply conserve veridical representations of the past but must continually integrate new information to ensure survival in dynamic environments. Therefore, 'drift' in neural firing patterns, typically construed as disruptive 'instability' or an undesirable consequence of noise, may actually be useful for updating memories. In our view, continual modifications in memory representations reconcile classical theories of stable memory traces with neural drift. Here we review how memory representations are updated through dynamic recruitment of neuronal ensembles on the basis of excitability and functional connectivity at the time of learning. Overall, we emphasize the importance of considering memories not as static entities, but instead as flexible network states that reactivate and evolve across time and experience.
Collapse
Affiliation(s)
- William Mau
- Neuroscience Department, Icahn School of Medicine at Mount Sinai, New York, United States
| | | | - Denise J Cai
- Neuroscience Department, Icahn School of Medicine at Mount Sinai, New York, United States
| |
Collapse
|
26
|
Smolen P, Baxter DA, Byrne JH. Comparing Theories for the Maintenance of Late LTP and Long-Term Memory: Computational Analysis of the Roles of Kinase Feedback Pathways and Synaptic Reactivation. Front Comput Neurosci 2020; 14:569349. [PMID: 33390922 PMCID: PMC7772319 DOI: 10.3389/fncom.2020.569349] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2020] [Accepted: 11/16/2020] [Indexed: 11/26/2022] Open
Abstract
A fundamental neuroscience question is how memories are maintained from days to a lifetime, given turnover of proteins that underlie expression of long-term synaptic potentiation (LTP) or “tag” synapses as eligible for LTP. A likely solution relies on synaptic positive feedback loops, prominently including persistent activation of Ca2+/calmodulin kinase II (CaMKII) and self-activated synthesis of protein kinase Mζ (PKMζ). Data also suggest positive feedback based on recurrent synaptic reactivation within neuron assemblies, or engrams, is necessary to maintain memories. The relative importance of these mechanisms is controversial. To explore the likelihood that each mechanism is necessary or sufficient to maintain memory, we simulated maintenance of LTP with a simplified model incorporating persistent kinase activation, synaptic tagging, and preferential reactivation of strong synapses, and analyzed implications of recent data. We simulated three model variants, each maintaining LTP with one feedback loop: autonomous, self-activated PKMζ synthesis (model variant I); self-activated CaMKII (model variant II); and recurrent reactivation of strengthened synapses (model variant III). Variant I predicts that, for successful maintenance of LTP, either 1) PKMζ contributes to synaptic tagging, or 2) a low constitutive tag level persists during maintenance independent of PKMζ, or 3) maintenance of LTP is independent of tagging. Variant II maintains LTP and suggests persistent CaMKII activation could maintain PKMζ activity, a feedforward interaction not previously considered. However, we note data challenging the CaMKII feedback loop. In Variant III synaptic reactivation drives, and thus predicts, recurrent or persistent activation of CaMKII and other necessary kinases, plausibly contributing to persistent elevation of PKMζ levels. Reactivation is thus predicted to sustain recurrent rounds of synaptic tagging and incorporation of plasticity-related proteins.
We also suggest (model variant IV) that synaptic reactivation and autonomous kinase activation could synergistically maintain LTP. We propose experiments that could discriminate these maintenance mechanisms.
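The shared logic of these feedback-loop variants, a self-sustaining kinase switch, can be sketched as a generic bistable rate equation (hypothetical rate constants; not any of the paper's calibrated variants): self-activated synthesis saturates via a Hill term, degradation is first order, and a transient LTP-inducing stimulus flips the synapse from a low to a persistently high kinase state.

```python
def dpdt(p, stim=0.0):
    """Self-activated synthesis (Hill term) minus first-order degradation."""
    return 0.05 + stim + p**2 / (1.0 + p**2) - 0.5 * p

def simulate(p0, stim_on=None, stim_off=None, T=200.0, dt=0.01):
    """Euler-integrate the kinase level, with an optional transient stimulus."""
    p, t = p0, 0.0
    while t < T:
        s = 0.5 if stim_on is not None and stim_on <= t < stim_off else 0.0
        p += dt * dpdt(p, s)
        t += dt
    return p

baseline = simulate(0.15)                 # stays in the low (naive) state
potentiated = simulate(0.15, 50.0, 60.0)  # brief stimulus -> persistent high state
```

With these constants the system has stable fixed points near 0.14 and 1.45 separated by an unstable threshold at 0.5, so the high state persists long after the ten-time-unit stimulus ends, the defining property of a maintenance mechanism.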
Collapse
Affiliation(s)
- Paul Smolen
- Department of Neurobiology and Anatomy, W.M. Keck Center for the Neurobiology of Learning and Memory, McGovern Medical School at the University of Texas Health Science Center at Houston, Houston, TX, United States
| | - Douglas A Baxter
- Department of Neurobiology and Anatomy, W.M. Keck Center for the Neurobiology of Learning and Memory, McGovern Medical School at the University of Texas Health Science Center at Houston, Houston, TX, United States
- Engineering and Medicine, Texas A&M Health Science Center, Houston, TX, United States
| | - John H Byrne
- Department of Neurobiology and Anatomy, W.M. Keck Center for the Neurobiology of Learning and Memory, McGovern Medical School at the University of Texas Health Science Center at Houston, Houston, TX, United States
| |
Collapse
|
27
|
Quantitative Synaptic Biology: A Perspective on Techniques, Numbers and Expectations. Int J Mol Sci 2020; 21:ijms21197298. [PMID: 33023247 PMCID: PMC7582872 DOI: 10.3390/ijms21197298] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2020] [Revised: 09/24/2020] [Accepted: 09/28/2020] [Indexed: 12/31/2022] Open
Abstract
Synapses play a central role for the processing of information in the brain and have been analyzed in countless biochemical, electrophysiological, imaging, and computational studies. The functionality and plasticity of synapses are nevertheless still difficult to predict, and conflicting hypotheses have been proposed for many synaptic processes. In this review, we argue that the cause of these problems is a lack of understanding of the spatiotemporal dynamics of key synaptic components. Fortunately, a number of emerging imaging approaches, going beyond super-resolution, should be able to provide required protein positions in space at different points in time. Mathematical models can then integrate the resulting information to allow the prediction of the spatiotemporal dynamics. We argue that these models, to deal with the complexity of synaptic processes, need to be designed in a sufficiently abstract way. Taken together, we suggest that a well-designed combination of imaging and modelling approaches will result in a far more complete understanding of synaptic function than currently possible.
Collapse
|
28
|
Sugden AU, Zaremba JD, Sugden LA, McGuire KL, Lutas A, Ramesh RN, Alturkistani O, Lensjø KK, Burgess CR, Andermann ML. Cortical reactivations of recent sensory experiences predict bidirectional network changes during learning. Nat Neurosci 2020; 23:981-991. [PMID: 32514136 PMCID: PMC7392804 DOI: 10.1038/s41593-020-0651-5] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2019] [Accepted: 05/05/2020] [Indexed: 12/13/2022]
Abstract
Salient experiences are often relived in the mind. Human neuroimaging studies suggest that such experiences drive activity patterns in visual association cortex that are subsequently reactivated during quiet waking. Nevertheless, the circuit-level consequences of such reactivations remain unclear. Here, we imaged hundreds of neurons in visual association cortex across days as mice learned a visual discrimination task. Distinct patterns of neurons were activated by different visual cues. These same patterns were subsequently reactivated during quiet waking in darkness, with higher reactivation rates during early learning and for food-predicting versus neutral cues. Reactivations involving ensembles of neurons encoding both the food cue and the reward predicted strengthening of next-day functional connectivity of participating neurons, while the converse was observed for reactivations involving ensembles encoding only the food cue. We propose that task-relevant neurons strengthen while task-irrelevant neurons weaken their dialog with the network via participation in distinct flavors of reactivation.
Collapse
Affiliation(s)
- Arthur U Sugden
- Division of Endocrinology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
| | - Jeffrey D Zaremba
- Division of Endocrinology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
| | - Lauren A Sugden
- Department of Mathematics and Computer Science, Duquesne University, Pittsburgh, PA, USA
| | - Kelly L McGuire
- Division of Endocrinology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
- Program in Neuroscience, Harvard Medical School, Boston, MA, USA
| | - Andrew Lutas
- Division of Endocrinology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
| | - Rohan N Ramesh
- Division of Endocrinology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
- Program in Neuroscience, Harvard Medical School, Boston, MA, USA
| | - Osama Alturkistani
- Division of Endocrinology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
| | - Kristian K Lensjø
- Division of Endocrinology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
- Department of Biosciences, University of Oslo, Oslo, Norway
| | - Christian R Burgess
- Division of Endocrinology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
- Michigan Neuroscience Institute, University of Michigan, Ann Arbor, MI, USA
| | - Mark L Andermann
- Division of Endocrinology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA.
- Program in Neuroscience, Harvard Medical School, Boston, MA, USA.
| |
Collapse
|
29
|
Harris KM. Structural LTP: from synaptogenesis to regulated synapse enlargement and clustering. Curr Opin Neurobiol 2020; 63:189-197. [PMID: 32659458 DOI: 10.1016/j.conb.2020.04.009] [Citation(s) in RCA: 45] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2020] [Accepted: 04/30/2020] [Indexed: 02/09/2023]
Abstract
Nature teaches us that form precedes function, yet structure and function are intertwined. Such is the case with synapse structure, function, and plasticity underlying learning, especially in the hippocampus, a crucial brain region for memory formation. As the hippocampus matures, enduring changes in synapse structure produced by long-term potentiation (LTP) shift from synaptogenesis to synapse enlargement that is homeostatically balanced by stalled spine outgrowth and local spine clustering. Production of LTP leads to silent spine outgrowth at P15, and silent synapse enlargement in adult hippocampus at 2 hours, but not at 5 or 30 min following induction. Here we consider structural LTP in the context of developmental stage and variation in the availability of local resources of endosomes, smooth endoplasmic reticulum and polyribosomes. The emerging evidence supports a need for more nuanced analysis of synaptic plasticity in the context of subcellular resource availability and developmental stage.
Collapse
|
30
|
Locating the engram: Should we look for plastic synapses or information-storing molecules? Neurobiol Learn Mem 2020; 169:107164. [PMID: 31945459 DOI: 10.1016/j.nlm.2020.107164] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2019] [Revised: 09/18/2019] [Accepted: 01/10/2020] [Indexed: 12/12/2022]
Abstract
Karl Lashley began the search for the engram nearly seventy years ago. In the time since, much has been learned but divisions remain. In the contemporary neurobiology of learning and memory, two profoundly different conceptions contend: the associative/connectionist (A/C) conception and the computational/representational (C/R) conception. Both theories ground themselves in the belief that the mind is emergent from the properties and processes of a material brain. Where these theories differ is in their description of what the neurobiological substrate of memory is and where it resides in the brain. The A/C theory of memory emphasizes the need to distinguish memory cognition from the memory engram and postulates that memory cognition is an emergent property of patterned neural activity routed through engram circuits. In this model, learning re-organizes synapse association strengths to guide future neural activity. Importantly, the version of the A/C theory advocated for here contends that synaptic change is not symbolic and, despite normally being necessary, is not sufficient for memory cognition. Instead, synaptic change provides the capacity and a blueprint for reinstating symbolic patterns of neural activity. Unlike the A/C theory, which posits that memory emerges at the circuit level, the C/R conception suggests that memory manifests at the level of intracellular molecular structures. In C/R theory, these intracellular structures are information-conveying and have properties compatible with the view that brain computation utilizes a read/write memory, functionally similar to that in a computer. New research has energized both sides and highlighted the need for new discussion. Both theories, the key questions each theory has yet to resolve and several potential paths forward are presented here.
Collapse
|
31
|
Liu X, Kuzum D. Hippocampal-Cortical Memory Trace Transfer and Reactivation Through Cell-Specific Stimulus and Spontaneous Background Noise. Front Comput Neurosci 2019; 13:67. [PMID: 31680922 PMCID: PMC6798041 DOI: 10.3389/fncom.2019.00067] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2019] [Accepted: 09/10/2019] [Indexed: 01/07/2023] Open
Abstract
The hippocampus plays important roles in memory formation and retrieval through sharp-wave ripples (SWRs). Recent studies have shown that certain neuron populations in the prefrontal cortex (PFC) exhibit coordinated reactivations during awake ripple events. These experimental findings suggest that the awake ripple is an important biomarker through which the hippocampus interacts with the neocortex to assist memory formation and retrieval. However, the computational mechanisms of this ripple-based hippocampal-cortical coordination remain unclear, owing to the lack of unified models that include both the hippocampal and cortical networks. In this work, using a coupled biophysical model of both CA1 and the PFC, we investigate possible mechanisms of hippocampal-cortical memory trace transfer and the conditions that support reactivation of the transferred memory traces in the PFC. To validate our model, we first show that the local field potentials generated in the hippocampus and PFC exhibit ripple-range activity consistent with recent experimental studies. We then demonstrate that, during ripples, sequence replays can successfully transfer information stored in the hippocampus to PFC recurrent networks. We then investigate possible mechanisms of memory retrieval in PFC networks. Our results suggest that stored memory traces in the PFC network can be retrieved through two different mechanisms: cell-specific input representing external stimuli, and nonspecific spontaneous background noise representing spontaneous memory recall events. Importantly, in both cases, memory reactivation quality is robust to network connection loss. Finally, we find that the quality of sequence reactivations is enhanced both by an increased number of SWRs and by an optimal background noise intensity, which tunes neuronal excitability to a proper level. Our study presents a mechanistic explanation for memory trace transfer from the hippocampus to the neocortex through ripple coupling in awake states and reports two different mechanisms by which stored memory traces can be successfully retrieved.
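As an illustrative toy model (not the authors' biophysical implementation), the transfer-then-retrieve idea can be sketched with an asymmetric Hebbian rule that writes a replayed sequence into a recurrent "PFC" weight matrix, after which either a cell-specific cue or nonspecific background noise can trigger reactivation:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, seq_len = 100, 10, 5

# A hippocampal "replay" sequence: sparse binary patterns, k active units each
seq = np.zeros((seq_len, n))
for t in range(seq_len):
    seq[t, rng.choice(n, size=k, replace=False)] = 1.0

# Transfer during ripples: an asymmetric Hebbian rule writes each
# pattern -> next-pattern transition into the "PFC" recurrent weights
W = np.zeros((n, n))
for t in range(seq_len - 1):
    W += np.outer(seq[t + 1], seq[t])

def step(state, noise_sd=0.0):
    """Advance the PFC network one step; the k most strongly driven units fire."""
    drive = W @ state + rng.normal(0.0, noise_sd, size=n)
    out = np.zeros(n)
    out[np.argsort(drive)[-k:]] = 1.0
    return out

# Retrieval mode 1: a cell-specific cue (the first pattern) launches the sequence
state, replay = seq[0].copy(), [seq[0].copy()]
for _ in range(seq_len - 1):
    state = step(state)
    replay.append(state)

# Retrieval mode 2: nonspecific background noise alone can kick off reactivation
spontaneous = step(np.zeros(n), noise_sd=1.0)
```

With sparse patterns and a winner-take-all threshold, cueing the first pattern makes the recurrent weights step through the stored sequence, while noise alone selects a starting set from which the same dynamics can unfold.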
Collapse
Affiliation(s)
- Xin Liu
- Department of Electrical and Computer Engineering, University of California, San Diego, San Diego, CA, United States
| | - Duygu Kuzum
- Department of Electrical and Computer Engineering, University of California, San Diego, San Diego, CA, United States
| |
Collapse
|
32
|
Rule ME, O'Leary T, Harvey CD. Causes and consequences of representational drift. Curr Opin Neurobiol 2019; 58:141-147. [PMID: 31569062 PMCID: PMC7385530 DOI: 10.1016/j.conb.2019.08.005] [Citation(s) in RCA: 99] [Impact Index Per Article: 19.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2019] [Revised: 08/13/2019] [Accepted: 08/27/2019] [Indexed: 01/27/2023]
Abstract
The nervous system learns new associations while maintaining memories over long periods, exhibiting a balance between flexibility and stability. Recent experiments reveal that neuronal representations of learned sensorimotor tasks continually change over days and weeks, even after animals have achieved expert behavioral performance. How is learned information stored to allow consistent behavior despite ongoing changes in neuronal activity? What functions could ongoing reconfiguration serve? We highlight recent experimental evidence for such representational drift in sensorimotor systems, and discuss how this fits into a framework of distributed population codes. We identify recent theoretical work that suggests computational roles for drift and argue that the recurrent and distributed nature of sensorimotor representations permits drift while limiting disruptive effects. We propose that representational drift may create error signals between interconnected brain regions that can be used to keep neural codes consistent in the presence of continual change. These concepts suggest experimental and theoretical approaches to studying both learning and maintenance of distributed and adaptive population codes.
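One way to see how drift can coexist with stable behavior is a minimal linear sketch (an assumption-laden illustration, not the authors' model): if changes to the encoding weights stay within the null space of a fixed downstream readout, population activity reconfigures substantially while the decoded variable is untouched.

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_latent, n_trials = 50, 3, 200

x = rng.normal(size=(n_latent, n_trials))      # latent task variables over trials
W = rng.normal(size=(n_neurons, n_latent))     # encoding: population activity y = W @ x
D = np.linalg.pinv(W)                          # fixed downstream readout: x_hat = D @ y

# Drift the code within the readout's null space, so D @ delta = 0 by construction
null_basis = np.linalg.svd(D)[2][n_latent:].T  # shape (n_neurons, n_neurons - n_latent)
delta = 5.0 * null_basis @ rng.normal(size=(n_neurons - n_latent, n_latent))
W_drifted = W + delta

y_before, y_after = W @ x, W_drifted @ x
activity_change = np.linalg.norm(y_after - y_before) / np.linalg.norm(y_before)
readout_error = np.linalg.norm(D @ y_after - x) / np.linalg.norm(x)
```

Here individual neurons' tuning changes dramatically (activity_change is large), yet the readout error stays at numerical zero; a real circuit would additionally need an error signal, as proposed above, to keep drift confined to such behaviorally silent directions.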
Collapse
Affiliation(s)
- Michael E Rule
- Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, United Kingdom
| | - Timothy O'Leary
- Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, United Kingdom
| | | |
Collapse
|