1
Yang X, La Camera G. Co-existence of synaptic plasticity and metastable dynamics in a spiking model of cortical circuits. PLoS Comput Biol 2024; 20:e1012220. PMID: 38950068; PMCID: PMC11244818; DOI: 10.1371/journal.pcbi.1012220.
Abstract
Evidence for metastable dynamics and its role in brain function is emerging at a fast pace and is changing our understanding of neural coding by putting an emphasis on hidden states of transient activity. Clustered networks of spiking neurons have enhanced synaptic connections among groups of neurons forming structures called cell assemblies; such networks are capable of producing metastable dynamics that is in agreement with many experimental results. However, it is unclear how a clustered network structure producing metastable dynamics may emerge from a fully local plasticity rule, i.e., a plasticity rule where each synapse has only access to the activity of the neurons it connects (as opposed to the activity of other neurons or other synapses). Here, we propose a local plasticity rule producing ongoing metastable dynamics in a deterministic, recurrent network of spiking neurons. The metastable dynamics co-exists with ongoing plasticity and is the consequence of a self-tuning mechanism that keeps the synaptic weights close to the instability line where memories are spontaneously reactivated. In turn, the synaptic structure is stable to ongoing dynamics and random perturbations, yet it remains sufficiently plastic to remap sensory representations to encode new sets of stimuli. Both the plasticity rule and the metastable dynamics scale well with network size, with synaptic stability increasing with the number of neurons. Overall, our results show that it is possible to generate metastable dynamics over meaningful hidden states using a simple but biologically plausible plasticity rule which co-exists with ongoing neural dynamics.
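The abstract's key ingredients, a fully local rule plus a self-tuning mechanism that prevents runaway weights, can be illustrated with a toy rate-based update. This is not the paper's actual spiking model; the learning rate, target rate, and form of the decay term are all assumed here for illustration:

```python
LR = 0.01          # assumed learning rate
TARGET_RATE = 0.1  # assumed homeostatic set point

def local_update(w, r_pre, r_post):
    """Fully local rule: uses only the weight itself and the firing rates
    of the two neurons the synapse connects.
    Hebbian growth is balanced by a weight- and rate-dependent decay,
    so the weight self-tunes to a fixed point instead of running away."""
    hebb = r_pre * r_post
    decay = w * r_post ** 2 / TARGET_RATE
    return w + LR * (hebb - decay)

# Setting hebb == decay gives the fixed point w* = r_pre * TARGET_RATE / r_post.
w = 0.5
for _ in range(10_000):
    w = local_update(w, r_pre=0.1, r_post=0.1)  # converges toward w* = 0.1
```

Because the decay term grows with both the weight and the postsynaptic rate, the weight settles at a finite operating point, a minimal analogue of the self-tuning mechanism the abstract describes.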
Affiliation(s)
- Xiaoyu Yang
- Graduate Program in Physics and Astronomy, Stony Brook University, Stony Brook, New York, United States of America
- Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York, United States of America
- Center for Neural Circuit Dynamics, Stony Brook University, Stony Brook, New York, United States of America
- Giancarlo La Camera
- Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York, United States of America
- Center for Neural Circuit Dynamics, Stony Brook University, Stony Brook, New York, United States of America
2
Wang B, Torok Z, Duffy A, Bell DG, Wongso S, Velho TAF, Fairhall AL, Lois C. Unsupervised restoration of a complex learned behavior after large-scale neuronal perturbation. Nat Neurosci 2024; 27:1176-1186. PMID: 38684893; DOI: 10.1038/s41593-024-01630-6.
Abstract
Reliable execution of precise behaviors requires that brain circuits are resilient to variations in neuronal dynamics. Genetic perturbation of the majority of excitatory neurons in HVC, a brain region involved in song production, in adult songbirds with stereotypical songs triggered severe degradation of the song. The song fully recovered within 2 weeks, and substantial improvement occurred even when animals were prevented from singing during the recovery period, indicating that offline mechanisms enable recovery in an unsupervised manner. Song restoration was accompanied by increased excitatory synaptic input to neighboring, unmanipulated neurons in the same brain region. A model inspired by the behavioral and electrophysiological findings suggests that unsupervised single-cell and population-level homeostatic plasticity rules can support the functional restoration after large-scale disruption of networks that implement sequential dynamics. These observations suggest the existence of cellular and systems-level restorative mechanisms that ensure behavioral resilience.
Affiliation(s)
- Bo Wang
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Zsofia Torok
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Alison Duffy
- Department of Physiology and Biophysics, University of Washington, Seattle, WA, USA
- Computational Neuroscience Center, University of Washington, Seattle, WA, USA
- David G Bell
- Computational Neuroscience Center, University of Washington, Seattle, WA, USA
- Department of Physics, University of Washington, Seattle, WA, USA
- Shelyn Wongso
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Tarciso A F Velho
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Adrienne L Fairhall
- Department of Physiology and Biophysics, University of Washington, Seattle, WA, USA
- Computational Neuroscience Center, University of Washington, Seattle, WA, USA
- Department of Physics, University of Washington, Seattle, WA, USA
- Carlos Lois
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
3
Flores JC, Zito K. A synapse-specific refractory period for plasticity at individual dendritic spines. bioRxiv [Preprint] 2024:2024.05.24.595787. PMID: 38826343; PMCID: PMC11142223; DOI: 10.1101/2024.05.24.595787.
Abstract
How newly formed memories are preserved while brain plasticity is ongoing has been a source of debate. One idea is that synapses which experienced recent plasticity become resistant to further plasticity, a type of metaplasticity often referred to as saturation. Here, we probe the local dendritic mechanisms that limit plasticity at recently potentiated synapses. We show that recently potentiated individual synapses exhibit a synapse-specific refractory period for further potentiation. We further found that the refractory period is associated with reduced postsynaptic CaMKII signaling; however, stronger synaptic activation only partially restored the ability for further plasticity. Importantly, the refractory period is released after one hour, a timing that coincides with the enrichment of several postsynaptic proteins to pre-plasticity levels. Notably, increasing the level of the postsynaptic scaffolding protein, PSD95, but not of PSD93, overcomes the refractory period. Our results support a model in which potentiation at a single synapse is sufficient to initiate a synapse-specific refractory period that persists until key postsynaptic proteins regain their steady-state synaptic levels.
Affiliation(s)
- Juan C. Flores
- Center for Neuroscience, University of California, Davis, CA 95618
- Karen Zito
- Center for Neuroscience, University of California, Davis, CA 95618
4
Ratzon A, Derdikman D, Barak O. Representational drift as a result of implicit regularization. eLife 2024; 12:RP90069. PMID: 38695551; PMCID: PMC11065423; DOI: 10.7554/elife.90069.
Abstract
Recent studies show that, even in constant environments, the tuning of single neurons changes over time in a variety of brain regions. This representational drift has been suggested to be a consequence of continuous learning under noise, but its properties are still not fully understood. To investigate the underlying mechanism, we trained an artificial network on a simplified navigational task. The network quickly reached a state of high performance, and many units exhibited spatial tuning. We then continued training the network and noticed that the activity became sparser with time. Initial learning was orders of magnitude faster than ensuing sparsification. This sparsification is consistent with recent results in machine learning, in which networks slowly move within their solution space until they reach a flat area of the loss function. We analyzed four datasets from different labs, all demonstrating that CA1 neurons become sparser and more spatially informative with exposure to the same environment. We conclude that learning is divided into three overlapping phases: (i) Fast familiarity with the environment; (ii) slow implicit regularization; and (iii) a steady state of null drift. The variability in drift dynamics opens the possibility of inferring learning algorithms from observations of drift statistics.
Affiliation(s)
- Aviv Ratzon
- Rappaport Faculty of Medicine, Technion - Israel Institute of Technology, Haifa, Israel
- Network Biology Research Laboratory, Technion - Israel Institute of Technology, Haifa, Israel
- Dori Derdikman
- Rappaport Faculty of Medicine, Technion - Israel Institute of Technology, Haifa, Israel
- Omri Barak
- Rappaport Faculty of Medicine, Technion - Israel Institute of Technology, Haifa, Israel
- Network Biology Research Laboratory, Technion - Israel Institute of Technology, Haifa, Israel
5
Albesa-González A, Clopath C. Learning with filopodia and spines: Complementary strong and weak competition lead to specialized, graded, and protected receptive fields. PLoS Comput Biol 2024; 20:e1012110. PMID: 38743789; PMCID: PMC11125506; DOI: 10.1371/journal.pcbi.1012110.
Abstract
Filopodia are thin synaptic protrusions that have long been known to play an important role in early development. Recently, they have been found to be more abundant in the adult cortex than previously thought, and more plastic than spines (button-shaped mature synapses). Inspired by these findings, we introduce a new model of synaptic plasticity that jointly describes learning of filopodia and spines. The model assumes that filopodia exhibit strongly competitive learning dynamics, similar to additive spike-timing-dependent plasticity (STDP). At the same time, it proposes that, if filopodia undergo sufficient potentiation, they consolidate into spines. Spines follow weakly competitive learning, classically associated with multiplicative, soft-bounded models of STDP. This makes spines more stable and sensitive to the fine structure of input correlations. We show that our learning rule has a selectivity comparable to additive STDP and captures input correlations as well as multiplicative models of STDP. We also show how it can protect previously formed memories and perform synaptic consolidation. Overall, our results can be seen as a phenomenological description of how filopodia and spines could cooperate to overcome the individual difficulties faced by strong and weak competition mechanisms.
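The division of labour the abstract describes, strongly competitive additive STDP for filopodia versus soft-bounded multiplicative STDP for spines, with consolidation once a filopodium potentiates enough, can be sketched as follows. This is a minimal sketch; the amplitudes, weight bound, and consolidation threshold are assumed, not taken from the paper:

```python
import numpy as np

W_MAX = 1.0           # hard upper bound on synaptic efficacy (assumed)
CONSOLIDATE_AT = 0.8  # assumed threshold at which a filopodium becomes a spine

def additive_stdp(w, pre_before_post, a_plus=0.05, a_minus=0.055):
    """Additive (strongly competitive) STDP: the update size does not
    depend on the current weight, so weights race toward the hard bounds."""
    dw = a_plus if pre_before_post else -a_minus
    return float(np.clip(w + dw, 0.0, W_MAX))

def multiplicative_stdp(w, pre_before_post, a_plus=0.05, a_minus=0.055):
    """Multiplicative (soft-bounded, weakly competitive) STDP: potentiation
    shrinks near the upper bound and depression scales with the weight,
    yielding graded, stable efficacies."""
    return w + (a_plus * (W_MAX - w) if pre_before_post else -a_minus * w)

def pairing(w, is_spine, pre_before_post):
    """One pre/post pairing: filopodia follow the additive rule and
    consolidate into spines once sufficiently potentiated."""
    if is_spine:
        return multiplicative_stdp(w, pre_before_post), True
    w = additive_stdp(w, pre_before_post)
    return w, w >= CONSOLIDATE_AT

# Repeated potentiating pairings drive a filopodium across the
# consolidation threshold; afterwards the weight grows only gradually.
w, spine = 0.1, False
for _ in range(30):
    w, spine = pairing(w, spine, pre_before_post=True)
```

Once consolidated, further potentiation is soft-bounded, so the spine's weight approaches but never reaches `W_MAX`, capturing the "graded and protected" behaviour the title refers to.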
Affiliation(s)
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London, United Kingdom
6
Ratzon A, Derdikman D, Barak O. Representational drift as a result of implicit regularization. bioRxiv [Preprint] 2024:2023.05.04.539512. PMID: 38370656; PMCID: PMC10871206; DOI: 10.1101/2023.05.04.539512.
Abstract
Recent studies show that, even in constant environments, the tuning of single neurons changes over time in a variety of brain regions. This representational drift has been suggested to be a consequence of continuous learning under noise, but its properties are still not fully understood. To investigate the underlying mechanism, we trained an artificial network on a simplified navigational task. The network quickly reached a state of high performance, and many units exhibited spatial tuning. We then continued training the network and noticed that the activity became sparser with time. Initial learning was orders of magnitude faster than ensuing sparsification. This sparsification is consistent with recent results in machine learning, in which networks slowly move within their solution space until they reach a flat area of the loss function. We analyzed four datasets from different labs, all demonstrating that CA1 neurons become sparser and more spatially informative with exposure to the same environment. We conclude that learning is divided into three overlapping phases: (i) Fast familiarity with the environment; (ii) slow implicit regularization; (iii) a steady state of null drift. The variability in drift dynamics opens the possibility of inferring learning algorithms from observations of drift statistics.
Affiliation(s)
- Aviv Ratzon
- Rappaport Faculty of Medicine, Technion - Israel Institute of Technology, Haifa 31096, Israel
- Network Biology Research Laboratory, Technion - Israel Institute of Technology, Haifa 32000, Israel
- Dori Derdikman
- Rappaport Faculty of Medicine, Technion - Israel Institute of Technology, Haifa 31096, Israel
- Omri Barak
- Rappaport Faculty of Medicine, Technion - Israel Institute of Technology, Haifa 31096, Israel
- Network Biology Research Laboratory, Technion - Israel Institute of Technology, Haifa 32000, Israel
7
Masi M. An evidence-based critical review of the mind-brain identity theory. Front Psychol 2023; 14:1150605. PMID: 37965649; PMCID: PMC10641890; DOI: 10.3389/fpsyg.2023.1150605.
Abstract
In the philosophy of mind, neuroscience, and psychology, the causal relationship between phenomenal consciousness, mentation, and brain states has always been a matter of debate. On the one hand, material monism posits consciousness and mind as pure brain epiphenomena. One of its most stringent lines of reasoning relies on a 'loss-of-function lesion premise,' according to which, since brain lesions and neurochemical modifications lead to cognitive impairment and/or altered states of consciousness, there is no reason to doubt the mind-brain identity. On the other hand, dualism or idealism (in one form or another) regard consciousness and mind as something other than the sole product of cerebral activity, pointing to the ineffable, undefinable, and seemingly unphysical nature of our subjective qualitative experiences and their related mental dimension. Here, several neuroscientific findings are reviewed that question the idea that posits phenomenal experience as an emergent property of brain activity, and argue that the premise of material monism is based on a logical correlation-causation fallacy. While these (mostly ignored) findings, if considered separately from each other, could, in principle, be recast into a physicalist paradigm, once viewed from an integral perspective, they substantiate equally well an ontology that posits mind and consciousness as a primal phenomenon.
Affiliation(s)
- Marco Masi
- Independent Researcher, Knetzgau, Germany
8
Arshavsky YI. Memory: Synaptic or Cellular, That Is the Question. Neuroscientist 2023; 29:538-553. PMID: 35713238; DOI: 10.1177/10738584221086488.
Abstract
According to the commonly accepted opinion, memory engrams are formed and stored at the level of neural networks due to a change in the strength of synaptic connections between neurons. This hypothesis of synaptic plasticity (HSP), formulated by Donald Hebb in the 1940s, continues to dominate the directions of experimental studies and the interpretations of experimental results in the field. The universal acceptance of the HSP has transformed it from a hypothesis into an incontrovertible theory. In this article, I show that the entire body of experimental and clinical data obtained in studies of long-term memory in mammals and humans is inconsistent with the HSP. Instead, these data suggest that long-term memory is formed and stored at the intracellular level where it is reliably protected from ongoing synaptic activity, including pathological epileptic activity. It seems that the generally accepted HSP became a serious obstacle to understanding the mechanisms of memory and that progress in this field requires rethinking this doctrine and shifting experimental efforts toward exploring the intracellular mechanisms.
Affiliation(s)
- Yuri I Arshavsky
- BioCircuits Institute, University of California San Diego, La Jolla, CA, USA
9
Micou C, O'Leary T. Representational drift as a window into neural and behavioural plasticity. Curr Opin Neurobiol 2023; 81:102746. PMID: 37392671; DOI: 10.1016/j.conb.2023.102746.
Abstract
Large-scale recordings of neural activity over days and weeks have revealed that neural representations of familiar tasks, percepts and actions continually evolve without obvious changes in behaviour. We hypothesise that this steady drift in neural activity and accompanying physiological changes is due in part to the continuous application of a learning rule at the cellular and population level. Explicit predictions of this drift can be found in neural network models that use iterative learning to optimise weights. Drift therefore provides a measurable signal that can reveal systems-level properties of biological plasticity mechanisms, such as their precision and effective learning rates.
Affiliation(s)
- Charles Micou
- Department of Engineering, University of Cambridge, United Kingdom
- Timothy O'Leary
- Department of Engineering, University of Cambridge, United Kingdom
- Theoretical Sciences Visiting Program, Okinawa Institute of Science and Technology Graduate University, Onna, 904-0495, Japan
10
The molecular memory code and synaptic plasticity: A synthesis. Biosystems 2023; 224:104825. PMID: 36610586; DOI: 10.1016/j.biosystems.2022.104825.
Abstract
The most widely accepted view of memory in the brain holds that synapses are the storage sites of memory, and that memories are formed through associative modification of synapses. This view has been challenged on conceptual and empirical grounds. As an alternative, it has been proposed that molecules within the cell body are the storage sites of memory, and that memories are formed through biochemical operations on these molecules. This paper proposes a synthesis of these two views, grounded in a computational model of memory. Synapses are conceived as storage sites for the parameters of an approximate posterior probability distribution over latent causes. Intracellular molecules are conceived as storage sites for the parameters of a generative model. The model stipulates how these two components work together as part of an integrated algorithm for learning and inference.
11
Quantifying postsynaptic receptor dynamics: insights into synaptic function. Nat Rev Neurosci 2023; 24:4-22. PMID: 36352031; DOI: 10.1038/s41583-022-00647-9.
Abstract
The molecular composition of presynaptic and postsynaptic neuronal terminals is dynamic, and yet long-term stabilizations in postsynaptic responses are necessary for synaptic development and long-term plasticity. The need to reconcile these concepts is further complicated by learning- and memory-related plastic changes in the molecular make-up of synapses. Advances in single-particle tracking mean that we can now quantify the number and diffusive properties of specific synaptic molecules, while statistical thermodynamics provides a framework to analyse these molecular fluctuations. In this Review, we discuss the use of these approaches to gain quantitative descriptions of the processes underlying the turnover, long-term stability and plasticity of postsynaptic receptors and show how these can help us to understand the balance between local molecular turnover and synaptic structural identity and integrity.
12
Kasai H. Unraveling the mysteries of dendritic spine dynamics: Five key principles shaping memory and cognition. Proc Jpn Acad Ser B Phys Biol Sci 2023; 99:254-305. PMID: 37821392; PMCID: PMC10749395; DOI: 10.2183/pjab.99.018.
Abstract
Recent research extends our understanding of brain processes beyond just action potentials and chemical transmissions within neural circuits, emphasizing the mechanical forces generated by excitatory synapses on dendritic spines to modulate presynaptic function. From in vivo and in vitro studies, we outline five central principles of synaptic mechanics in brain function: P1: Stability - Underpinning the integral relationship between the structure and function of the spine synapses. P2: Extrinsic dynamics - Highlighting synapse-selective structural plasticity which plays a crucial role in Hebbian associative learning, distinct from pathway-selective long-term potentiation (LTP) and depression (LTD). P3: Neuromodulation - Analyzing the role of G-protein-coupled receptors, particularly dopamine receptors, in time-sensitive modulation of associative learning frameworks such as Pavlovian classical conditioning and Thorndike's reinforcement learning (RL). P4: Instability - Addressing the intrinsic dynamics crucial to memory management during continual learning, spotlighting their role in "spine dysgenesis" associated with mental disorders. P5: Mechanics - Exploring how synaptic mechanics influence both sides of synapses to establish structural traces of short- and long-term memory, thereby aiding the integration of mental functions. We also delve into the historical background and foresee impending challenges.
Affiliation(s)
- Haruo Kasai
- International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Bunkyo-ku, Tokyo, Japan
- Laboratory of Structural Physiology, Center for Disease Biology and Integrative Medicine, Faculty of Medicine, The University of Tokyo, Bunkyo-ku, Tokyo, Japan
13
Neurodynamical Computing at the Information Boundaries of Intelligent Systems. Cognit Comput 2022. DOI: 10.1007/s12559-022-10081-9.
Abstract
Artificial intelligence has not achieved defining features of biological intelligence despite models boasting more parameters than neurons in the human brain. In this perspective article, we synthesize historical approaches to understanding intelligent systems and argue that methodological and epistemic biases in these fields can be resolved by shifting away from cognitivist brain-as-computer theories and recognizing that brains exist within large, interdependent living systems. Integrating the dynamical systems view of cognition with the massive distributed feedback of perceptual control theory highlights a theoretical gap in our understanding of nonreductive neural mechanisms. Cell assemblies—properly conceived as reentrant dynamical flows and not merely as identified groups of neurons—may fill that gap by providing a minimal supraneuronal level of organization that establishes a neurodynamical base layer for computation. By considering information streams from physical embodiment and situational embedding, we discuss this computational base layer in terms of conserved oscillatory and structural properties of cortical-hippocampal networks. Our synthesis of embodied cognition, based in dynamical systems and perceptual control, aims to bypass the neurosymbolic stalemates that have arisen in artificial intelligence, cognitive science, and computational neuroscience.
14
Aitken K, Garrett M, Olsen S, Mihalas S. The geometry of representational drift in natural and artificial neural networks. PLoS Comput Biol 2022; 18:e1010716. PMID: 36441762; PMCID: PMC9731438; DOI: 10.1371/journal.pcbi.1010716.
Abstract
Neurons in sensory areas encode/represent stimuli. Surprisingly, recent studies have suggested that, even during persistent performance, these representations are not stable and change over the course of days and weeks. We examine stimulus representations from fluorescence recordings across hundreds of neurons in the visual cortex using in vivo two-photon calcium imaging and we corroborate previous studies finding that such representations change as experimental trials are repeated across days. This phenomenon has been termed "representational drift". In this study we geometrically characterize the properties of representational drift in the primary visual cortex of mice in two open datasets from the Allen Institute and propose a potential mechanism behind such drift. We observe representational drift both for passively presented stimuli, as well as for stimuli which are behaviorally relevant. Across experiments, the drift differs from in-session variance and most often occurs along directions that have the most in-class variance, leading to a significant turnover in the neurons used for a given representation. Interestingly, despite this significant change due to drift, linear classifiers trained to distinguish neuronal representations show little to no degradation in performance across days. The features we observe in the neural data are similar to properties of artificial neural networks where representations are updated by continual learning in the presence of dropout, i.e. a random masking of nodes/weights, but not other types of noise. Therefore, we conclude that a potential cause of representational drift in biological networks is an underlying dropout-like noise during continual learning, and that such a mechanism may be computationally advantageous for the brain in the same way it is for artificial neural networks, e.g. by preventing overfitting.
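The proposed mechanism, drift driven by dropout-like noise during continual learning, can be illustrated with a deliberately tiny example (all sizes, rates, and the task itself are assumed here, not taken from the paper): two redundant units carry the same signal, so many weight vectors solve the task, and SGD under inverted dropout wanders away from the initial solution toward the configuration dropout favours.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two redundant "neurons" both carry the same signal x, so any weights with
# w1 + w2 = 1 read out the target y = x perfectly: a 1-D solution space.
w = np.array([0.9, 0.1])          # start at an asymmetric solution
LR, P_DROP, STEPS = 0.01, 0.5, 5000

history = [w.copy()]
for _ in range(STEPS):
    x = rng.normal()
    mask = (rng.random(2) < 1 - P_DROP) / (1 - P_DROP)  # inverted dropout mask
    err = (w * mask).sum() * x - x    # readout error on this masked sample
    w = w - LR * err * mask * x       # SGD step on the surviving weights only
    history.append(w.copy())

w_avg = np.mean(history[-500:], axis=0)  # time-averaged late weights
```

The weights end up far from where they started even though the task never changes, the toy analogue of drift; dropout's variance penalty also equalizes the two weights, so the drift has a preferred direction rather than being a pure random walk.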
Affiliation(s)
- Kyle Aitken
- MindScope Program, Allen Institute, Seattle, Washington, United States of America
- Marina Garrett
- MindScope Program, Allen Institute, Seattle, Washington, United States of America
- Shawn Olsen
- MindScope Program, Allen Institute, Seattle, Washington, United States of America
- Stefan Mihalas
- MindScope Program, Allen Institute, Seattle, Washington, United States of America
15
Tuning instability of non-columnar neurons in the salt-and-pepper whisker map in somatosensory cortex. Nat Commun 2022; 13:6611. PMID: 36329010; PMCID: PMC9633707; DOI: 10.1038/s41467-022-34261-1.
Abstract
Rodent sensory cortex contains salt-and-pepper maps of sensory features, whose structure is not fully known. Here we investigated the structure of the salt-and-pepper whisker somatotopic map among L2/3 pyramidal neurons in somatosensory cortex, in awake mice performing one-vs-all whisker discrimination. Neurons tuned for columnar (CW) and non-columnar (non-CW) whiskers were spatially intermixed, with co-tuned neurons forming local (20 µm) clusters. Whisker tuning was markedly unstable in expert mice, with 35-46% of pyramidal cells significantly shifting tuning over 5-18 days. Tuning instability was highly concentrated in non-CW tuned neurons, and thus was structured in the map. Instability of non-CW neurons was unchanged during chronic whisker paralysis and when mice discriminated individual whiskers, suggesting it is an inherent feature. Thus, L2/3 combines two distinct components: a stable columnar framework of CW-tuned cells that may promote spatial perceptual stability, plus an intermixed, non-columnar surround with highly unstable tuning.
16
Abstract
Humans have the remarkable ability to continually store new memories, while maintaining old memories for a lifetime. How the brain avoids catastrophic forgetting of memories due to interference between encoded memories is an open problem in computational neuroscience. Here we present a model for continual learning in a recurrent neural network combining Hebbian learning, synaptic decay and a novel memory consolidation mechanism: memories undergo stochastic rehearsals with rates proportional to the memory's basin of attraction, causing self-amplified consolidation. This mechanism gives rise to memory lifetimes that extend much longer than the synaptic decay time, and retrieval probability of memories that gracefully decays with their age. The number of retrievable memories is proportional to a power of the number of neurons. Perturbations to the circuit model cause temporally-graded retrograde and anterograde deficits, mimicking observed memory impairments following neurological trauma.
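The abstract's three ingredients, Hebbian encoding, synaptic decay, and rehearsal-based consolidation, can be caricatured in a small Hopfield-style network. This is a sketch under assumed parameters, and the paper's rehearsal rate depends on the memory's basin of attraction, which is simplified here to a constant rehearsal of one old memory:

```python
import numpy as np

N, ETA, DECAY = 100, 0.01, 0.99  # network size, encoding strength, decay (assumed)

def hebb(p):
    """Hebbian outer-product term for one binary pattern, no self-connections."""
    J = np.outer(p, p)
    np.fill_diagonal(J, 0.0)
    return J

def recall_overlap(W, p, steps=20):
    """Run sign-unit dynamics starting from the pattern; return match fraction."""
    s = p.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return float(np.mean(s == p))

rng = np.random.default_rng(3)
old = rng.choice([-1.0, 1.0], size=N)  # the memory we care about

def simulate(rehearse, T=300):
    """Synapses decay while new random patterns keep being encoded;
    optional rehearsal re-applies the old memory's Hebbian term each step."""
    W = ETA * hebb(old)
    for _ in range(T):
        W *= DECAY                                        # ongoing synaptic decay
        W += ETA * hebb(rng.choice([-1.0, 1.0], size=N))  # new experience
        if rehearse:
            W += ETA * hebb(old)                          # rehearsal event
    return recall_overlap(W, old)
```

Without rehearsal the old pattern is washed out by decay and by the newly encoded patterns; with rehearsal its Hebbian trace is continually re-instated, so recall survives far beyond the synaptic decay time, the qualitative effect the abstract describes.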
17
Hennig MH. The sloppy relationship between neural circuit structure and function. J Physiol 2022. PMID: 35876720; DOI: 10.1113/jp282757.
Abstract
Investigating and describing the relationships between the structure of a circuit and its function has a long tradition in neuroscience. Since neural circuits acquire their structure through sophisticated developmental programmes, and memories and experiences are maintained through synaptic modification, it is to be expected that structure is closely linked to function. Recent findings challenge this hypothesis from three different angles: function does not strongly constrain circuit parameters, many parameters in neural circuits are irrelevant and contribute little to function, and circuit parameters are unstable and subject to constant random drift. At the same time, however, recent work has also shown that dynamics in neural circuit activity related to function are robust over time and across individuals. Here this apparent contradiction is addressed by considering the properties of neural manifolds that restrict circuit activity to functionally relevant subspaces, and it is suggested that degenerate, anisotropic and unstable parameter spaces are closely related to the structure and implementation of functionally relevant neural manifolds.
Abstract figure legend: What are the relationships between noisy and highly variable microscopic neural circuit variables on the one hand and the generation of behaviour on the other? Here it is proposed that an intermediate level of description exists where this relationship can be understood in terms of low-dimensional dynamics. Recordings of neural activity during unconstrained behaviour and the development of new machine learning methods will help to uncover these links.
Affiliation(s)
- Matthias H Hennig
- Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh
18
Redman WT, Wolcott NS, Montelisciani L, Luna G, Marks TD, Sit KK, Yu CH, Smith S, Goard MJ. Long-term transverse imaging of the hippocampus with glass microperiscopes. eLife 2022; 11:75391. [PMID: 35775393 PMCID: PMC9249394 DOI: 10.7554/elife.75391] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Received: 11/08/2021] [Accepted: 06/12/2022] [Indexed: 11/19/2022] Open
Abstract
The hippocampus consists of a stereotyped neuronal circuit repeated along the septal-temporal axis. This transverse circuit contains distinct subfields with stereotyped connectivity that support crucial cognitive processes, including episodic and spatial memory. However, comprehensive measurements across the transverse hippocampal circuit in vivo are intractable with existing techniques. Here, we developed an approach for two-photon imaging of the transverse hippocampal plane in awake mice via implanted glass microperiscopes, allowing optical access to the major hippocampal subfields and to the dendritic arbor of pyramidal neurons. Using this approach, we tracked dendritic morphological dynamics on CA1 apical dendrites and characterized spine turnover. We then used calcium imaging to quantify the prevalence of place and speed cells across subfields. Finally, we measured the anatomical distribution of spatial information, finding a non-uniform distribution of spatial selectivity along the DG-to-CA1 axis. This approach extends the existing toolbox for structural and functional measurements of hippocampal circuitry.
Affiliation(s)
- William T Redman
- Interdepartmental Graduate Program in Dynamical Neuroscience, University of California, Santa Barbara, United States
- Nora S Wolcott
- Department of Molecular, Cellular, and Developmental Biology, University of California, Santa Barbara, United States
- Luca Montelisciani
- Cognitive and Systems Neuroscience Group, University of Amsterdam, Amsterdam, Netherlands
- Gabriel Luna
- Neuroscience Research Institute, University of California, Santa Barbara, United States
- Tyler D Marks
- Neuroscience Research Institute, University of California, Santa Barbara, United States
- Kevin K Sit
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, United States
- Che-Hang Yu
- Department of Electrical and Computer Engineering, University of California, Santa Barbara, United States
- Spencer Smith
- Neuroscience Research Institute and Department of Electrical and Computer Engineering, University of California, Santa Barbara, United States
- Michael J Goard
- Department of Molecular, Cellular, and Developmental Biology; Neuroscience Research Institute; and Department of Psychological and Brain Sciences, University of California, Santa Barbara, United States
19
Masset P, Qin S, Zavatone-Veth JA. Drifting neuronal representations: Bug or feature? Biol Cybern 2022; 116:253-266. [PMID: 34993613 DOI: 10.1007/s00422-021-00916-3] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Received: 08/25/2021] [Accepted: 11/17/2021] [Indexed: 06/14/2023]
Abstract
The brain displays a remarkable ability to sustain stable memories, allowing animals to execute precise behaviors or recall stimulus associations years after they were first learned. Yet, recent long-term recording experiments have revealed that single-neuron representations continuously change over time, contravening the classical assumption that learned features remain static. How do unstable neural codes support robust perception, memories, and actions? Here, we review recent experimental evidence for such representational drift across brain areas, as well as dissections of its functional characteristics and underlying mechanisms. We emphasize theoretical proposals for how drift need not only be a form of noise for which the brain must compensate. Rather, it can emerge from computationally beneficial mechanisms in hierarchical networks performing robust probabilistic computations.
Affiliation(s)
- Paul Masset
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, USA
- Shanshan Qin
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
- Jacob A Zavatone-Veth
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Department of Physics, Harvard University, Cambridge, MA, USA
20
Anwar H, Caby S, Dura-Bernal S, D’Onofrio D, Hasegan D, Deible M, Grunblatt S, Chadderdon GL, Kerr CC, Lakatos P, Lytton WW, Hazan H, Neymotin SA. Training a spiking neuronal network model of visual-motor cortex to play a virtual racket-ball game using reinforcement learning. PLoS One 2022; 17:e0265808. [PMID: 35544518 PMCID: PMC9094569 DOI: 10.1371/journal.pone.0265808] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Received: 08/06/2021] [Accepted: 03/08/2022] [Indexed: 11/18/2022] Open
Abstract
Recent models of spiking neuronal networks have been trained to perform behaviors in static environments using a variety of learning rules, with varying degrees of biological realism. Most of these models have not been tested in dynamic visual environments, where models must make predictions about future states and adjust their behavior accordingly. The models using these learning rules are often treated as black boxes, with little analysis of the circuit architectures and learning mechanisms supporting optimal performance. Here we developed visual/motor spiking neuronal network models and trained them to play a virtual racket-ball game using several reinforcement learning algorithms inspired by the dopaminergic reward system. We systematically investigated how different architectures and circuit motifs (feed-forward, recurrent, feedback) contributed to learning and performance. We also developed a new biologically inspired learning rule that significantly enhanced performance while reducing training time. Our models included visual areas encoding game inputs and relaying the information to motor areas, which used this information to learn to move the racket to hit the ball. Neurons in the early visual area relayed information encoding object location and motion direction across the network. Neuronal association areas encoded spatial relationships between objects in the visual scene. Motor populations received inputs from visual and association areas representing the dorsal pathway. Two populations of motor neurons generated commands to move the racket up or down. Model-generated actions updated the environment and triggered reward or punishment signals that adjusted synaptic weights so that the models could learn which actions led to reward. Here we demonstrate that our biologically plausible learning rules were effective in training spiking neuronal network models to solve problems in dynamic environments.
We used our models to dissect the circuit architectures and learning rules most effective for learning. Our models show that learning mechanisms involving different neural circuits produce similar performance in sensory-motor tasks. In biological networks, all learning mechanisms may complement one another, accelerating the learning capabilities of animals. This also highlights the resilience and redundancy in biological systems.
Affiliation(s)
- Haroon Anwar
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Simon Caby
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Salvador Dura-Bernal
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Dept. Physiology & Pharmacology, State University of New York Downstate, Brooklyn, New York, United States of America
- David D’Onofrio
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Daniel Hasegan
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Matt Deible
- University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Sara Grunblatt
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- George L. Chadderdon
- Dept. Physiology & Pharmacology, State University of New York Downstate, Brooklyn, New York, United States of America
- Cliff C. Kerr
- Dept. Physics, University of Sydney, Sydney, Australia
- Institute for Disease Modeling, Global Health Division, Bill & Melinda Gates Foundation, Seattle, Washington, United States of America
- Peter Lakatos
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Dept. Psychiatry, NYU Grossman School of Medicine, New York, New York, United States of America
- William W. Lytton
- Dept. Physiology & Pharmacology, State University of New York Downstate, Brooklyn, New York, United States of America
- Dept. Neurology, Kings County Hospital Center, Brooklyn, New York, United States of America
- Hananel Hazan
- Dept. of Biology, Tufts University, Medford, Massachusetts, United States of America
- Samuel A. Neymotin
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Dept. Psychiatry, NYU Grossman School of Medicine, New York, New York, United States of America
21
Pinotsis DA, Miller EK. Beyond dimension reduction: Stable electric fields emerge from and allow representational drift. Neuroimage 2022; 253:119058. [PMID: 35272022 DOI: 10.1016/j.neuroimage.2022.119058] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Received: 01/05/2022] [Revised: 03/03/2022] [Accepted: 03/03/2022] [Indexed: 01/18/2023] Open
Abstract
It is known that the exact neurons maintaining a given memory (the neural ensemble) change from trial to trial. This raises the question of how the brain achieves stability in the face of this representational drift. Here, we demonstrate that this stability emerges at the level of the electric fields that arise from neural activity. We show that electric fields carry information about working memory content. The electric fields, in turn, can act as "guard rails" that funnel higher dimensional variable neural activity along stable lower dimensional routes. We obtained the latent space associated with each memory. We then confirmed the stability of the electric field by mapping the latent space to different cortical patches (that comprise a neural ensemble) and reconstructing information flow between patches. Stable electric fields can allow latent states to be transferred between brain areas, in accord with modern engram theory.
Affiliation(s)
- Dimitris A Pinotsis
- Centre for Mathematical Neuroscience and Psychology and Department of Psychology, City, University of London, London EC1V 0HB, United Kingdom; The Picower Institute for Learning and Memory and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Earl K Miller
- The Picower Institute for Learning and Memory and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
22
Wegner W, Steffens H, Gregor C, Wolf F, Willig KI. Environmental enrichment enhances patterning and remodeling of synaptic nanoarchitecture as revealed by STED nanoscopy. eLife 2022; 11:73603. [PMID: 35195066 PMCID: PMC8903838 DOI: 10.7554/elife.73603] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Received: 09/03/2021] [Accepted: 02/22/2022] [Indexed: 12/04/2022] Open
Abstract
Synaptic plasticity underlies long-lasting structural and functional changes to brain circuitry, and its experience-dependent remodeling can be fundamentally enhanced by environmental enrichment. It is, however, unknown whether and how environmental enrichment alters the morphology and dynamics of individual synapses. Here, we present a virtually crosstalk-free two-color in vivo stimulated emission depletion (STED) microscope to simultaneously superresolve the dynamics of endogenous PSD95 of the post-synaptic density and spine geometry in the mouse cortex. In general, the spine head geometry and PSD95 assemblies were highly dynamic; their changes depended linearly on their original size but correlated only mildly with each other. With environmental enrichment, the size distributions of PSD95 assemblies and spine heads were sharper than in controls, indicating that synaptic strength is set more uniformly. The topography of the PSD95 nanoorganization was also more dynamic after environmental enrichment; changes in size were smaller but more correlated than in mice housed in standard cages. Thus, two-color in vivo time-lapse imaging of synaptic nanoorganization uncovers a unique synaptic nanoplasticity associated with the enhanced learning capabilities under environmental enrichment.
Affiliation(s)
- Waja Wegner
- Center for Nanoscale Microscopy and Molecular Physiology of the Brain, University Medical Center Göttingen, Göttingen, Germany
- Heinz Steffens
- Center for Nanoscale Microscopy and Molecular Physiology of the Brain, University Medical Center Göttingen, Göttingen, Germany
- Carola Gregor
- Department of NanoBiophotonics, Max Planck Institute for Biophysical Chemistry, Göttingen, Germany
- Fred Wolf
- Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
- Katrin I Willig
- Center for Nanoscale Microscopy and Molecular Physiology of the Brain, University Medical Center Göttingen, Göttingen, Germany
23
Gallinaro JV, Gašparović N, Rotter S. Homeostatic control of synaptic rewiring in recurrent networks induces the formation of stable memory engrams. PLoS Comput Biol 2022; 18:e1009836. [PMID: 35143489 PMCID: PMC8865699 DOI: 10.1371/journal.pcbi.1009836] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Received: 05/16/2021] [Revised: 02/23/2022] [Accepted: 01/14/2022] [Indexed: 12/04/2022] Open
Abstract
Brain networks store new memories using functional and structural synaptic plasticity. Memory formation is generally attributed to Hebbian plasticity, while homeostatic plasticity is thought to have an ancillary role in stabilizing network dynamics. Here we report that homeostatic plasticity alone can also lead to the formation of stable memories. We analyze this phenomenon using a new theory of network remodeling, combined with numerical simulations of recurrent spiking neural networks that exhibit structural plasticity based on firing rate homeostasis. These networks are able to store repeatedly presented patterns and recall them upon the presentation of incomplete cues. Storage is fast, governed by the homeostatic drift. In contrast, forgetting is slow, driven by a diffusion process. Joint stimulation of neurons induces the growth of associative connections between them, leading to the formation of memory engrams. These memories are stored in a distributed fashion throughout the connectivity matrix, and individual synaptic connections have only a small influence. Although memory-specific connections are increased in number, the total numbers of inputs and outputs of neurons undergo only small changes during stimulation. We find that homeostatic structural plasticity induces a specific type of "silent memories", different from conventional attractor states.
Affiliation(s)
- Júlia V. Gallinaro
- Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Freiburg im Breisgau, Germany
- Bioengineering Department, Imperial College London, London, United Kingdom
- Nebojša Gašparović
- Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Freiburg im Breisgau, Germany
- Stefan Rotter
- Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Freiburg im Breisgau, Germany
24
Short-Term Synaptic Plasticity: Microscopic Modelling and (Some) Computational Implications. Adv Exp Med Biol 2022; 1359:105-121. [DOI: 10.1007/978-3-030-89439-9_5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 10/18/2022]
25
Ross TW, Easton A. The Hippocampal Horizon: Constructing and Segmenting Experience for Episodic Memory. Neurosci Biobehav Rev 2021; 132:181-196. [PMID: 34826509 DOI: 10.1016/j.neubiorev.2021.11.038] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Received: 10/04/2021] [Revised: 11/19/2021] [Accepted: 11/22/2021] [Indexed: 12/29/2022]
Abstract
How do we recollect specific events that have occurred during continuous ongoing experience? There is converging evidence from non-human animals that spatially modulated cellular activity of the hippocampal formation supports the construction of ongoing events. On the other hand, recent human-oriented event cognition models have outlined that our experience is segmented into discrete units, and that such segmentation can operate on shorter or longer timescales. Here, we describe a unification of how these dynamic physiological mechanisms of the hippocampus relate to ongoing externally and internally driven event segmentation, facilitating the demarcation of specific moments during experience. Our cross-species interdisciplinary approach offers a novel perspective on the way we construct and remember specific events, leading to the generation of many new hypotheses for future research.
Affiliation(s)
- T W Ross
- Department of Psychology, Durham University, South Road, Durham, DH1 3LE, United Kingdom; Centre for Learning and Memory Processes, Durham University, United Kingdom
- A Easton
- Department of Psychology, Durham University, South Road, Durham, DH1 3LE, United Kingdom; Centre for Learning and Memory Processes, Durham University, United Kingdom
26
Drifting assemblies for persistent memory: Neuron transitions and unsupervised compensation. Proc Natl Acad Sci U S A 2021; 118:e2023832118. [PMID: 34772802 DOI: 10.1073/pnas.2023832118] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Accepted: 09/11/2021] [Indexed: 11/18/2022] Open
Abstract
Change is ubiquitous in living beings. In particular, the connectome and neural representations can change. Nevertheless, behaviors and memories often persist over long times. In a standard model, associative memories are represented by assemblies of strongly interconnected neurons. For faithful storage these assemblies are assumed to consist of the same neurons over time. Here we propose a contrasting memory model with complete temporal remodeling of assemblies, based on experimentally observed changes of synapses and neural representations. The assemblies drift freely as noisy autonomous network activity and spontaneous synaptic turnover induce neuron exchange. The gradual exchange allows activity-dependent and homeostatic plasticity to conserve the representational structure and keep inputs, outputs, and assemblies consistent. This leads to persistent memory. Our findings explain recent experimental results on temporal evolution of fear memory representations and suggest that memory systems need to be understood in their completeness as individual parts may constantly change.
27
Bauer J, Rose T. Mouse vision: Variability and stability across the visual processing hierarchy. Curr Biol 2021; 31:R1129-R1132. [PMID: 34637715 DOI: 10.1016/j.cub.2021.08.071] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Indexed: 11/15/2022]
Abstract
The response of individual neurons to stable sensory input or behavioral output can change over time. A new study provides evidence from the mouse visual system that such drift does not follow the hierarchy of information flow across the brain.
Affiliation(s)
- Joel Bauer
- Max Planck Institute of Neurobiology, Am Klopferspitz 18, 82152 Martinsried, Germany
- Tobias Rose
- Institute for Experimental Epileptology and Cognition Research, University of Bonn, Venusberg-Campus 1, 53127 Bonn, Germany
28
Sherf N, Shamir M. STDP and the distribution of preferred phases in the whisker system. PLoS Comput Biol 2021; 17:e1009353. [PMID: 34534208 PMCID: PMC8480728 DOI: 10.1371/journal.pcbi.1009353] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 05/07/2021] [Revised: 09/29/2021] [Accepted: 08/17/2021] [Indexed: 11/19/2022] Open
Abstract
Rats and mice use their whiskers to probe the environment. By rhythmically swiping their whiskers back and forth they can detect the existence of an object, locate it, and identify its texture. Localization can be accomplished by inferring the whisker’s position. Rhythmic neurons that track the phase of the whisking cycle encode information about the azimuthal location of the whisker. These neurons are characterized by preferred phases of firing that are narrowly distributed. Consequently, pooling the rhythmic signal from several upstream neurons is expected to result in a much narrower distribution of preferred phases in the downstream population, which however has not been observed empirically. Here, we show how spike timing dependent plasticity (STDP) can provide a solution to this conundrum. We investigated the effect of STDP on the utility of a neural population to transmit rhythmic information downstream in a modeling study. We found that under a wide range of parameters, STDP facilitated the transfer of rhythmic information even though all the synaptic weights remained dynamic. As a result, the preferred phase of the downstream neuron was not fixed, but rather drifted in time at a drift velocity that depended on the preferred phase, thus inducing a distribution of preferred phases. We further analyzed how the STDP rule governs the distribution of preferred phases in the downstream population. This link between the STDP rule and the distribution of preferred phases constitutes a natural test for our theory. The distribution of preferred phases of whisking neurons in the somatosensory system of rats and mice presents a conundrum: a simple pooling model predicts a distribution that is an order of magnitude narrower than what is observed empirically. Here, we suggest that this non-trivial distribution may result from activity-dependent plasticity in the form of spike timing dependent plasticity (STDP).
We show that under STDP, the synaptic weights do not converge to a fixed value but rather remain dynamic. As a result, the preferred phases of the whisking neurons vary in time, inducing a non-trivial distribution of preferred phases that is governed by the STDP rule. Our results imply that the considerable synaptic volatility, which has long been viewed as a difficulty that needs to be overcome, may actually be an underlying principle of the organization of the central nervous system.
Affiliation(s)
- Nimrod Sherf
- Physics Department, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Maoz Shamir
- Physics Department, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Department of Physiology and Cell Biology, Faculty of Health Sciences, Ben-Gurion University of the Negev, Beer-Sheva, Israel
29
Raman DV, O'Leary T. Optimal plasticity for memory maintenance during ongoing synaptic change. eLife 2021; 10:62912. [PMID: 34519270 PMCID: PMC8504970 DOI: 10.7554/elife.62912] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Received: 09/08/2020] [Accepted: 09/13/2021] [Indexed: 11/13/2022] Open
Abstract
Synaptic connections in many brain circuits fluctuate, exhibiting substantial turnover and remodelling over hours to days. Surprisingly, experiments show that most of this flux in connectivity persists in the absence of learning or known plasticity signals. How can neural circuits retain learned information despite a large proportion of ongoing and potentially disruptive synaptic changes? We address this question from first principles by analysing how much compensatory plasticity would be required to optimally counteract ongoing fluctuations, regardless of whether fluctuations are random or systematic. Remarkably, we find that the answer is largely independent of plasticity mechanisms and circuit architectures: compensatory plasticity should be at most equal in magnitude to fluctuations, and often less, in direct agreement with previously unexplained experimental observations. Moreover, our analysis shows that a high proportion of learning-independent synaptic change is consistent with plasticity mechanisms that accurately compute error gradients.
Affiliation(s)
- Dhruva V Raman
- Department of Engineering, University of Cambridge, Cambridge, United Kingdom
- Timothy O'Leary
- Department of Engineering, University of Cambridge, Cambridge, United Kingdom
30
Computational roles of intrinsic synaptic dynamics. Curr Opin Neurobiol 2021; 70:34-42. [PMID: 34303124 DOI: 10.1016/j.conb.2021.06.002] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Received: 02/10/2021] [Revised: 05/14/2021] [Accepted: 06/15/2021] [Indexed: 12/26/2022]
Abstract
Conventional theories assume that long-term information storage in the brain is implemented by modifying synaptic efficacy. Recent experimental findings challenge this view by demonstrating that dendritic spine sizes, and their corresponding synaptic weights, are highly volatile even in the absence of neural activity. Here, we review previous computational work on the roles of these intrinsic synaptic dynamics. We first present the possibility that neuronal networks can sustain stable performance in their presence, and we then hypothesize that intrinsic dynamics could be more than mere noise to withstand: they may actually improve information processing in the brain.
31
Logiaco L, Abbott LF, Escola S. Thalamic control of cortical dynamics in a model of flexible motor sequencing. Cell Rep 2021; 35:109090. [PMID: 34077721 PMCID: PMC8449509 DOI: 10.1016/j.celrep.2021.109090] [Citation(s) in RCA: 36] [Impact Index Per Article: 12.0] [Received: 02/17/2020] [Revised: 03/04/2021] [Accepted: 04/16/2021] [Indexed: 12/26/2022] Open
Abstract
The neural mechanisms that generate an extensible library of motor motifs and flexibly string them into arbitrary sequences are unclear. We developed a model in which inhibitory basal ganglia output neurons project to thalamic units that are themselves bidirectionally connected to a recurrent cortical network. We model the basal ganglia inhibitory patterns as silencing some thalamic neurons while leaving others disinhibited and free to interact with cortex during specific motifs. We show that a small number of disinhibited thalamic neurons can control cortical dynamics to generate specific motor output in a noise-robust way. Additionally, a single "preparatory" thalamocortical network can produce fast cortical dynamics that support rapid transitions between any pair of learned motifs. If the thalamic units associated with each sequence component are segregated, many motor outputs can be learned without interference and then combined in arbitrary orders for the flexible production of long and complex motor sequences.
Affiliation(s)
- Laureline Logiaco
- Zuckerman Institute, Department of Neuroscience, Columbia University, New York, NY 10027, USA
- Zuckerman Institute, Department of Psychiatry, Columbia University, New York, NY 10027, USA
- L F Abbott
- Zuckerman Institute, Department of Neuroscience, Columbia University, New York, NY 10027, USA
- Sean Escola
- Zuckerman Institute, Department of Psychiatry, Columbia University, New York, NY 10027, USA
32
Steffens H, Mott AC, Li S, Wegner W, Švehla P, Kan VWY, Wolf F, Liebscher S, Willig KI. Stable but not rigid: Chronic in vivo STED nanoscopy reveals extensive remodeling of spines, indicating multiple drivers of plasticity. Sci Adv 2021; 7:eabf2806. [PMID: 34108204 PMCID: PMC8189587 DOI: 10.1126/sciadv.abf2806] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Received: 10/15/2020] [Accepted: 04/22/2021] [Indexed: 06/01/2023]
Abstract
Excitatory synapses on dendritic spines of pyramidal neurons are considered a central memory locus. To foster both continuous adaptation and the storage of long-term information, spines need to be plastic and stable at the same time. Here, we advanced in vivo STED nanoscopy to superresolve distinct features of spines (head size and neck length/width) in mouse neocortex for up to 1 month. While LTP-dependent changes predict highly correlated modifications of spine geometry, we find both uncorrelated and correlated dynamics, indicating multiple independent drivers of spine remodeling. The magnitude of this remodeling suggests substantial fluctuations in synaptic strength. Despite this high degree of volatility, all spine features exhibit persistent components that are maintained over long periods of time. Furthermore, chronic nanoscopy uncovers structural alterations in the cortex of a mouse model of neurodegeneration. Thus, at the nanoscale, stable dendritic spines exhibit a delicate balance of stability and volatility.
Affiliation(s)
- Heinz Steffens
- Optical Nanoscopy in Neuroscience, Center for Nanoscale Microscopy and Molecular Physiology of the Brain, University Medical Center Göttingen, Göttingen, Germany
- Max Planck Institute of Experimental Medicine, Göttingen, Germany
- Alexander C Mott
- Optical Nanoscopy in Neuroscience, Center for Nanoscale Microscopy and Molecular Physiology of the Brain, University Medical Center Göttingen, Göttingen, Germany
- Max Planck Institute of Experimental Medicine, Göttingen, Germany
- Siyuan Li
- Institute of Clinical Neuroimmunology, University Hospital, Ludwig-Maximilians-University Munich, Munich, Germany
- BioMedical Center, Faculty of Medicine, Ludwig-Maximilians-University Munich, Munich, Germany
- Waja Wegner
- Optical Nanoscopy in Neuroscience, Center for Nanoscale Microscopy and Molecular Physiology of the Brain, University Medical Center Göttingen, Göttingen, Germany
- Max Planck Institute of Experimental Medicine, Göttingen, Germany
- Pavel Švehla
- Institute of Clinical Neuroimmunology, University Hospital, Ludwig-Maximilians-University Munich, Munich, Germany
- BioMedical Center, Faculty of Medicine, Ludwig-Maximilians-University Munich, Munich, Germany
- Vanessa W Y Kan
- Institute of Clinical Neuroimmunology, University Hospital, Ludwig-Maximilians-University Munich, Munich, Germany
- BioMedical Center, Faculty of Medicine, Ludwig-Maximilians-University Munich, Munich, Germany
- Fred Wolf
- Max Planck Institute of Experimental Medicine, Göttingen, Germany
- Max Planck Institute for Dynamics and Self-Organization; Campus Institute for Dynamics of Biological Networks, Göttingen, Germany
- Sabine Liebscher
- Institute of Clinical Neuroimmunology, University Hospital, Ludwig-Maximilians-University Munich, Munich, Germany.
- BioMedical Center, Faculty of Medicine, Ludwig-Maximilians-University Munich, Munich, Germany
- Munich Cluster for Systems Neurology (SyNergy), Munich, Germany
- Katrin I Willig
- Optical Nanoscopy in Neuroscience, Center for Nanoscale Microscopy and Molecular Physiology of the Brain, University Medical Center Göttingen, Göttingen, Germany.
- Max Planck Institute of Experimental Medicine, Göttingen, Germany
- Cluster of Excellence "Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells" (MBExC), University of Göttingen, Göttingen, Germany
33
Kasai H, Ziv NE, Okazaki H, Yagishita S, Toyoizumi T. Spine dynamics in the brain, mental disorders and artificial neural networks. Nat Rev Neurosci 2021; 22:407-422. [PMID: 34050339] [DOI: 10.1038/s41583-021-00467-3]
Abstract
In the brain, most synapses are formed on minute protrusions known as dendritic spines. Unlike their artificial intelligence counterparts, spines are not merely tuneable memory elements: they also embody algorithms that implement the brain's ability to learn from experience and cope with new challenges. Importantly, they exhibit structural dynamics that depend on activity, excitatory input and inhibitory input (synaptic plasticity or 'extrinsic' dynamics) and dynamics independent of activity ('intrinsic' dynamics), both of which are subject to neuromodulatory influences and reinforcers such as dopamine. Here we succinctly review extrinsic and intrinsic dynamics, compare these with parallels in machine learning where they exist, describe the importance of intrinsic dynamics for memory management and adaptation, and speculate on how disruption of extrinsic and intrinsic dynamics may give rise to mental disorders. Throughout, we also highlight algorithmic features of spine dynamics that may be relevant to future artificial intelligence developments.
Affiliation(s)
- Haruo Kasai
- Laboratory of Structural Physiology, Center for Disease Biology and Integrative Medicine, Faculty of Medicine, The University of Tokyo, Tokyo, Japan
- International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Bunkyo-ku, Tokyo, Japan
- Noam E Ziv
- Technion Faculty of Medicine and Network Biology Research Labs, Technion City, Haifa, Israel
- Hitoshi Okazaki
- Laboratory of Structural Physiology, Center for Disease Biology and Integrative Medicine, Faculty of Medicine, The University of Tokyo, Tokyo, Japan
- International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Bunkyo-ku, Tokyo, Japan
- Sho Yagishita
- Laboratory of Structural Physiology, Center for Disease Biology and Integrative Medicine, Faculty of Medicine, The University of Tokyo, Tokyo, Japan
- International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Bunkyo-ku, Tokyo, Japan
- Taro Toyoizumi
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, Saitama, Japan
- Department of Mathematical Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
34
Gold AR, Glanzman DL. The central importance of nuclear mechanisms in the storage of memory. Biochem Biophys Res Commun 2021; 564:103-113. [PMID: 34020774] [DOI: 10.1016/j.bbrc.2021.04.125]
Abstract
The neurobiological nature of the memory trace (engram) remains controversial. The most widely accepted hypothesis at present is that long-term memory is stored as stable, learning-induced changes in synaptic connections. This hypothesis, the synaptic plasticity hypothesis of memory, is supported by extensive experimental data gathered from over 50 years of research. Nonetheless, there are important mnemonic phenomena that the synaptic plasticity hypothesis cannot, or cannot readily, account for. Furthermore, recent work indicates that epigenetic and genomic mechanisms play heretofore underappreciated roles in memory. Here, we critically assess the evidence that supports the synaptic plasticity hypothesis and discuss alternative non-synaptic, nuclear mechanisms of memory storage, including DNA methylation and retrotransposition. We argue that long-term encoding of memory is mediated by nuclear processes; synaptic plasticity, by contrast, represents a means of relatively temporary memory storage. In addition, we propose that memories are evaluated for their mnemonic significance during an initial period of synaptic storage; if assessed as sufficiently important, the memories then undergo nuclear encoding.
Affiliation(s)
- Adam R Gold
- Behavioral Neuroscience Program, Department of Psychology, University of California, Los Angeles, Los Angeles, CA, 90095, USA.
- David L Glanzman
- Department of Integrative Biology & Physiology, UCLA College, University of California, Los Angeles, Los Angeles, CA, 90095, USA; Department of Neurobiology, David Geffen School of Medicine at UCLA, University of California, Los Angeles, Los Angeles, CA, 90095, USA; Integrative Center for Learning and Memory, Brain Research Institute, University of California, Los Angeles, Los Angeles, CA, 90095, USA.
35
Zhu Y, Uytiepo M, Bushong E, Haberl M, Beutter E, Scheiwe F, Zhang W, Chang L, Luu D, Chui B, Ellisman M, Maximov A. Nanoscale 3D EM reconstructions reveal intrinsic mechanisms of structural diversity of chemical synapses. Cell Rep 2021; 35:108953. [PMID: 33826888] [PMCID: PMC8354523] [DOI: 10.1016/j.celrep.2021.108953]
Abstract
Chemical synapses of shared cellular origins have remarkably heterogeneous structures, but how this diversity is generated is unclear. Here, we use three-dimensional (3D) electron microscopy and artificial intelligence algorithms for image processing to reconstruct functional excitatory microcircuits in the mouse hippocampus and microcircuits in which neurotransmitter signaling is permanently suppressed with genetic tools throughout the lifespan. These nanoscale analyses reveal that experience is dispensable for morphogenesis of synapses with different geometric shapes and contents of membrane organelles and that arrangement of morphologically distinct connections in local networks is stochastic. Moreover, loss of activity increases the variability in sizes of opposed pre- and postsynaptic structures without disrupting their alignments, suggesting that inherently variable weights of naive connections become progressively matched with repetitive use. These results demonstrate that mechanisms for the structural diversity of neuronal synapses are intrinsic and provide insights into how circuits essential for memory storage assemble and integrate information.
Affiliation(s)
- Yongchuan Zhu
- Department of Neuroscience, The Dorris Neuroscience Center, The Scripps Research Institute, La Jolla, CA 92037, USA
- Marco Uytiepo
- Department of Neuroscience, The Dorris Neuroscience Center, The Scripps Research Institute, La Jolla, CA 92037, USA
- Eric Bushong
- National Center for Microscopy and Imaging Research, University of California, San Diego, CA 92037, USA; Department of Neurosciences, University of California, San Diego, School of Medicine, La Jolla, CA 92037, USA
- Matthias Haberl
- National Center for Microscopy and Imaging Research, University of California, San Diego, CA 92037, USA; Department of Neurosciences, University of California, San Diego, School of Medicine, La Jolla, CA 92037, USA
- Elizabeth Beutter
- Department of Neuroscience, The Dorris Neuroscience Center, The Scripps Research Institute, La Jolla, CA 92037, USA
- Frederieke Scheiwe
- Department of Neuroscience, The Dorris Neuroscience Center, The Scripps Research Institute, La Jolla, CA 92037, USA
- Weiheng Zhang
- Department of Neuroscience, The Dorris Neuroscience Center, The Scripps Research Institute, La Jolla, CA 92037, USA
- Lyanne Chang
- Department of Neuroscience, The Dorris Neuroscience Center, The Scripps Research Institute, La Jolla, CA 92037, USA
- Danielle Luu
- Department of Neuroscience, The Dorris Neuroscience Center, The Scripps Research Institute, La Jolla, CA 92037, USA
- Brandon Chui
- Department of Neuroscience, The Dorris Neuroscience Center, The Scripps Research Institute, La Jolla, CA 92037, USA
- Mark Ellisman
- National Center for Microscopy and Imaging Research, University of California, San Diego, CA 92037, USA; Department of Neurosciences, University of California, San Diego, School of Medicine, La Jolla, CA 92037, USA.
- Anton Maximov
- Department of Neuroscience, The Dorris Neuroscience Center, The Scripps Research Institute, La Jolla, CA 92037, USA.
36
Ryan TJ, Ortega-de San Luis C, Pezzoli M, Sen S. Engram cell connectivity: an evolving substrate for information storage. Curr Opin Neurobiol 2021; 67:215-225. [PMID: 33812274] [DOI: 10.1016/j.conb.2021.01.006]
Abstract
Understanding memory requires an explanation for how information can be stored in the brain in a stable state. The change in the brain that accounts for a given memory is referred to as an engram. In recent years, the term engram has been operationalized as the cells that are activated by a learning experience, undergo plasticity, and are sufficient and necessary for memory recall. Using this framework, and a growing toolbox of related experimental techniques, engram manipulation has become a central topic in behavioral, systems, and molecular neuroscience. Recent research on the topic has provided novel insights into the mechanisms of long-term memory storage and its overlap with instinct. We propose that memory and instinct may be embodied as isomorphic topological structures within the brain's microanatomical circuitry.
Affiliation(s)
- Tomás J Ryan
- School of Biochemistry and Immunology and Trinity College Institute for Neuroscience, Trinity College Dublin, Dublin, D02 PN40, Ireland; Florey Institute of Neuroscience and Mental Health, Melbourne Brain Centre, University of Melbourne, Parkville, VIC 3052, Australia; Child & Brain Development Program, Canadian Institute for Advanced Research (CIFAR), Toronto, Ontario M5G 1M1, Canada.
- Clara Ortega-de San Luis
- School of Biochemistry and Immunology and Trinity College Institute for Neuroscience, Trinity College Dublin, Dublin, D02 PN40, Ireland
- Maurizio Pezzoli
- School of Biochemistry and Immunology and Trinity College Institute for Neuroscience, Trinity College Dublin, Dublin, D02 PN40, Ireland
- Siddhartha Sen
- Centre for Research on Adaptive Nanostructures and Nanodevices and School of Physics, Trinity College Dublin, D02 PN40, Ireland
37
Steinberg J, Advani M, Sompolinsky H. New role for circuit expansion for learning in neural networks. Phys Rev E 2021; 103:022404. [PMID: 33736047] [DOI: 10.1103/physreve.103.022404]
Abstract
Many sensory pathways in the brain include sparsely active populations of neurons downstream from the input stimuli. The biological purpose of this expanded structure is unclear, but it may be beneficial due to the increased expressive power of the network. In this work, we show that certain ways of expanding a neural network can improve its generalization performance even when the expanded structure is pruned after the learning period. To study this setting, we use a teacher-student framework where a perceptron teacher network generates labels corrupted with small amounts of noise. We then train a student network structurally matched to the teacher. In this scenario, the student can achieve optimal accuracy if given the teacher's synaptic weights. We find that sparse expansion of the input layer of a student perceptron network both increases its capacity and improves its generalization performance when learning a noisy rule from a teacher perceptron, even when the expansion is pruned after learning. We find similar behavior when the expanded units are stochastic and uncorrelated with the input and analyze this network in the mean-field limit. By solving the mean-field equations, we show that the generalization error of the stochastic expanded student network continues to drop as the size of the network increases. This improvement in generalization performance occurs despite the increased complexity of the student network relative to the teacher it is trying to learn. We show that this effect is closely related to the addition of slack variables in artificial neural networks and suggest possible implications for artificial and biological neural networks.
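The basic teacher-student setup described in this abstract can be sketched in a few lines; the dimensions, noise level, and training schedule below are hypothetical placeholders, and the sketch omits the paper's sparse input expansion and pruning step:

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_samples = 50, 500          # hypothetical sizes, not the paper's scalings

# Teacher perceptron: random weights generate binary labels, a few of which are flipped.
w_teacher = rng.standard_normal(n_inputs)
X = rng.standard_normal((n_samples, n_inputs))
y = np.sign(X @ w_teacher)
flip = rng.random(n_samples) < 0.05    # small label noise
y[flip] *= -1

# Student structurally matched to the teacher, trained with the perceptron rule.
w_student = np.zeros(n_inputs)
for _ in range(100):                   # training epochs
    for x_i, y_i in zip(X, y):
        if np.sign(x_i @ w_student) != y_i:   # update only on misclassified examples
            w_student += y_i * x_i

# Generalization: agreement with the teacher on fresh, unseen inputs.
X_test = rng.standard_normal((2000, n_inputs))
gen_acc = np.mean(np.sign(X_test @ w_student) == np.sign(X_test @ w_teacher))
```

Because the flipped labels make the data non-separable, the student's weights never settle exactly, but they stay aligned with the teacher's direction, which is what the generalization measure probes.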
Affiliation(s)
- Julia Steinberg
- Center for Brain Science, Harvard University, Cambridge, Massachusetts 02138, USA
- Department of Physics, Harvard University, Cambridge, Massachusetts 02138, USA
- Madhu Advani
- Center for Brain Science, Harvard University, Cambridge, Massachusetts 02138, USA
- Haim Sompolinsky
- Center for Brain Science, Harvard University, Cambridge, Massachusetts 02138, USA
- Edmond and Lily Safra Center for Brain Sciences, Hebrew University, Jerusalem 91904, Israel
38
Wason TD. A model integrating multiple processes of synchronization and coherence for information instantiation within a cortical area. Biosystems 2021; 205:104403. [PMID: 33746019] [DOI: 10.1016/j.biosystems.2021.104403]
Abstract
What is the form of dynamic, e.g., sensory, information in the mammalian cortex? Information in the cortex is modeled as a coherence map of a mixed chimera state of synchronous, phasic, and disordered minicolumns. The theoretical model is built on neurophysiological evidence. Complex spatiotemporal information is instantiated through a system of interacting biological processes that generate a synchronized cortical area, a coherent aperture. Minicolumn elements are grouped in macrocolumns in an array analogous to a phased-array radar, modeled as an aperture, a "hole through which radiant energy flows." Coherence maps in a cortical area transform inputs from multiple sources into outputs to multiple targets, while reducing complexity and entropy. Coherent apertures can assume extremely large numbers of different information states as coherence maps, which can be communicated among apertures with corresponding very large bandwidths. The coherent aperture model incorporates considerable reported research, integrating five conceptually and mathematically independent processes: 1) a damped Kuramoto network model, 2) a pumped area field potential, 3) the gating of nearly coincident spikes, 4) the coherence of activity across cortical lamina, and 5) complex information formed through functions in macrocolumns. Biological processes and their interactions are described in equations and a functional circuit such that the mathematical pieces can be assembled the same way the neurophysiological ones are. The model can be conceptually convolved over the specifics of local cortical areas within and across species. A coherent aperture becomes a node in a graph of cortical areas with a corresponding distribution of information.
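The first of the five processes listed above, a Kuramoto network, can be illustrated with a minimal sketch. This is the plain all-to-all Kuramoto model with hypothetical parameters (oscillator count, coupling strength, frequency spread), not the paper's damped and pumped variant:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters: n phase oscillators (one per minicolumn), global
# coupling K chosen above the synchronization threshold, Euler integration.
n, K, dt, steps = 100, 5.0, 0.01, 2000
omega = rng.normal(10.0, 1.0, n)           # natural frequencies (arbitrary units)
theta = rng.uniform(0.0, 2.0 * np.pi, n)   # initial phases

def order_parameter(phases):
    """Kuramoto coherence r in [0, 1]; r -> 1 means full synchrony."""
    return float(abs(np.mean(np.exp(1j * phases))))

for _ in range(steps):
    # All-to-all coupling: each phase is pulled toward the population mean field.
    coupling = (K / n) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta += dt * (omega + coupling)

r_final = order_parameter(theta)
```

The order parameter r is one way to quantify the "coherence map" idea: a synchronized cortical area corresponds to a population whose r approaches 1.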
Affiliation(s)
- Thomas D Wason
- North Carolina State University, Department of Biological Sciences, Meitzen Laboratory, Campus Box 7617, 128 David Clark Labs, Raleigh, NC 27695-7617, USA.
39
Gershman SJ, Balbi PE, Gallistel CR, Gunawardena J. Reconsidering the evidence for learning in single cells. eLife 2021; 10:e61907. [PMID: 33395388] [PMCID: PMC7781593] [DOI: 10.7554/elife.61907]
Abstract
The question of whether single cells can learn led to much debate in the early 20th century. The view prevailed that they were capable of non-associative learning but not of associative learning, such as Pavlovian conditioning. Experiments indicating the contrary were considered either non-reproducible or subject to more acceptable interpretations. Recent developments suggest that the time is right to reconsider this consensus. We exhume the experiments of Beatrice Gelber on Pavlovian conditioning in the ciliate Paramecium aurelia, and suggest that criticisms of her findings can now be reinterpreted. Gelber was a remarkable scientist whose absence from the historical record testifies to the prevailing orthodoxy that single cells cannot learn. Her work, and more recent studies, suggest that such learning may be evolutionarily more widespread and fundamental to life than previously thought and we discuss the implications for different aspects of biology.
Affiliation(s)
- Samuel J Gershman
- Department of Psychology and Center for Brain Science, Harvard University, Cambridge, United States
- Center for Brains, Mind and Machines, MIT, Cambridge, United States
- Petra Em Balbi
- Department of Systems Biology, Harvard Medical School, Boston, United States
- C Randy Gallistel
- Rutgers Center for Cognitive Science, Rutgers University at New Brunswick, New Brunswick, United States
- Jeremy Gunawardena
- Department of Systems Biology, Harvard Medical School, Boston, United States
40
Glutamatergic Dysfunction and Synaptic Ultrastructural Alterations in Schizophrenia and Autism Spectrum Disorder: Evidence from Human and Rodent Studies. Int J Mol Sci 2020; 22(1):59. [PMID: 33374598] [PMCID: PMC7793137] [DOI: 10.3390/ijms22010059]
Abstract
The correlation between dysfunction in the glutamatergic system and neuropsychiatric disorders, including schizophrenia and autism spectrum disorder, is undisputed. Both disorders are associated with molecular and ultrastructural alterations that affect synaptic plasticity and thus the molecular and physiological basis of learning and memory. Altered synaptic plasticity, accompanied by changes in protein synthesis and trafficking of postsynaptic proteins, as well as structural modifications of excitatory synapses, are critically involved in the postnatal development of the mammalian nervous system. In this review, we summarize glutamatergic alterations and ultrastructural changes in synapses in schizophrenia and autism spectrum disorder of genetic or drug-related origin, and briefly comment on the possible reversibility of these neuropsychiatric disorders in the light of findings in regular synaptic physiology.
41
Lu J, Zuo Y. Shedding light on learning and memory: optical interrogation of the synaptic circuitry. Curr Opin Neurobiol 2020; 67:138-144. [PMID: 33279804] [DOI: 10.1016/j.conb.2020.10.015]
Abstract
In the quest for the physical substrate of learning and memory, a consensus is gradually emerging that memory traces are stored in specific neuronal populations and the synaptic circuits that connect them. In this review, we discuss recent progress in understanding the reorganization of synaptic circuits and neuronal assemblies associated with learning and memory, with an emphasis on optical techniques for in vivo interrogation. We also highlight some open questions on the missing link between synaptic modifications and neuronal coding, and on how stable memory persists despite synaptic and neuronal fluctuations.
Affiliation(s)
- Ju Lu
- Department of Molecular, Cell and Developmental Biology, University of California Santa Cruz, 1156 High Street, Santa Cruz, CA 95064, USA
- Yi Zuo
- Department of Molecular, Cell and Developmental Biology, University of California Santa Cruz, 1156 High Street, Santa Cruz, CA 95064, USA.
42
Gobbo F, Cattaneo A. Neuronal Activity at Synapse Resolution: Reporters and Effectors for Synaptic Neuroscience. Front Mol Neurosci 2020; 13:572312. [PMID: 33192296] [PMCID: PMC7609880] [DOI: 10.3389/fnmol.2020.572312]
Abstract
The development of methods for the activity-dependent tagging of neurons enabled a new way to tackle the problem of engram identification at the cellular level, giving rise to groundbreaking findings in the field of memory studies. However, the resolution of activity-dependent tagging remains limited to the whole-cell level. Notably, events taking place at the synapse level play a critical role in the establishment of new memories, and strong experimental evidence shows that learning and synaptic plasticity are tightly linked. Here, we provide a comprehensive review of the currently available techniques for identifying and tracking neuronal activity with synaptic spatial resolution. We also present recent technologies that allow selective interference with specific subsets of synapses. Lastly, we discuss how these technologies can be applied to the study of learning and memory.
Affiliation(s)
- Francesco Gobbo
- Bio@SNS Laboratory of Biology, Scuola Normale Superiore, Pisa, Italy
- Centre for Discovery Brain Sciences, University of Edinburgh, Edinburgh, United Kingdom
- Antonino Cattaneo
- Bio@SNS Laboratory of Biology, Scuola Normale Superiore, Pisa, Italy
43
Capogna M, Castillo PE, Maffei A. The ins and outs of inhibitory synaptic plasticity: Neuron types, molecular mechanisms and functional roles. Eur J Neurosci 2020; 54:6882-6901. [PMID: 32663353] [DOI: 10.1111/ejn.14907]
Abstract
GABAergic interneurons are highly diverse, and their synaptic outputs express various forms of plasticity. Compelling evidence indicates that activity-dependent changes of inhibitory synaptic transmission play a significant role in regulating neural circuits critically involved in learning and memory and in circuit refinement. Here, we provide an updated overview of inhibitory synaptic plasticity with a focus on the hippocampus and neocortex. To illustrate the diversity of inhibitory interneurons, we discuss the case of two highly divergent interneuron types, parvalbumin-expressing basket cells and neurogliaform cells, which play unique roles in circuit dynamics. We also present recent progress on the molecular mechanisms underlying long-term, activity-dependent plasticity of fast inhibitory transmission. Lastly, we discuss the role of inhibitory synaptic plasticity in the function of neuronal circuits. The emerging picture is that inhibitory synaptic transmission in the CNS is extremely diverse, undergoes various mechanistically distinct forms of plasticity, and contributes to a much more refined computational role than initially thought. Both the remarkable diversity of inhibitory interneurons and the various forms of plasticity expressed by GABAergic synapses provide an amazingly rich inhibitory repertoire that is central to a variety of complex neural circuit functions, including memory.
Affiliation(s)
- Marco Capogna
- Department of Biomedicine, Danish National Research Foundation Center of Excellence PROMEMO, Aarhus University, Aarhus, Denmark
- Pablo E Castillo
- Dominick P Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- Department of Psychiatry and Behavioral Sciences, Albert Einstein College of Medicine, Bronx, NY, USA
- Arianna Maffei
- Center for Neural Circuit Dynamics and Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, NY, USA
44
Multiplexing rhythmic information by spike timing dependent plasticity. PLoS Comput Biol 2020; 16:e1008000. [PMID: 32598350] [PMCID: PMC7351241] [DOI: 10.1371/journal.pcbi.1008000]
Abstract
Rhythmic activity has been associated with a wide range of cognitive processes, including the encoding of sensory information, navigation, and the transfer of information. Rhythmic activity in the brain has also been suggested as a substrate for multiplexing information. Multiplexing is the ability to transmit more than one signal via the same channel. Here we focus on frequency division multiplexing, in which different signals are transmitted in different frequency bands. Recent work showed that spike-timing-dependent plasticity (STDP) can facilitate the transfer of rhythmic activity downstream the information processing pathway. However, STDP is also known to generate strong winner-take-all-like competition between subgroups of correlated synaptic inputs. This competition between different rhythmicity channels, induced by STDP, may prevent the multiplexing of information, raising doubts about whether STDP is consistent with the idea of multiplexing. This study explores whether STDP can facilitate the multiplexing of information across multiple frequency channels, and if so, under what conditions. We address this question in a modelling study, investigating the STDP dynamics of two populations synapsing downstream onto the same neuron in a feed-forward manner. Each population was assumed to exhibit rhythmic activity, albeit in a different frequency band. Our theory reveals that the winner-take-all-like competition between the two populations is limited, in the sense that different rhythmic populations will not necessarily fully suppress each other. Furthermore, we found that for a wide range of parameters, the network converged to a solution in which the downstream neuron responded to both rhythms. Yet, the synaptic weights themselves did not converge to a fixed point but remained dynamic.
These findings imply that STDP can support the multiplexing of rhythmic information, and demonstrate how functionality (multiplexing of information) can be retained in the face of continuous remodeling of all the synaptic weights. The constraints on the types of STDP rules that can support multiplexing provide a natural test for our theory.
Spike-timing-dependent plasticity (STDP) quantifies the change in synaptic efficacy as a function of the temporal relationship between pre- and postsynaptic firing. STDP can be viewed as a microscopic unsupervised learning rule, and a wide range of such microscopic learning rules have been described empirically. Since there is no supervisor in unsupervised learning (which would provide the system with its goal), theoreticians have struggled with the question of the possible computational roles of the various STDP rules. Previous studies have focused on the possible contribution of STDP to the spontaneous development of spatial structure. However, the rich temporal repertoire of reported STDP rules has largely been ignored. Here we studied the contribution of STDP to the development of temporal structure. We show how STDP can shape synaptic efficacies to facilitate the transfer of rhythmic information downstream and to enable the multiplexing of information across different frequency channels. Our work emphasizes the relationship between the temporal structure of the STDP rule and the rhythmic activity it can support.
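The pair-based STDP window that such analyses typically start from can be sketched as follows; the amplitudes and time constants are hypothetical placeholders (a common slightly depression-dominated choice), not the specific rule treated in the paper:

```python
import numpy as np

# Hypothetical amplitudes and time constants (ms); slightly depression-dominated,
# a common choice for stability, not the specific rule analyzed in the paper.
A_plus, A_minus = 0.01, 0.012
tau_plus, tau_minus = 20.0, 20.0

def stdp(dt_ms):
    """Weight change for a pre/post spike pair, with dt_ms = t_post - t_pre."""
    dt_ms = np.asarray(dt_ms, dtype=float)
    return np.where(
        dt_ms > 0,
        A_plus * np.exp(-dt_ms / tau_plus),    # pre before post: potentiation
        -A_minus * np.exp(dt_ms / tau_minus),  # post before pre: depression
    )

# The window is temporally asymmetric: sign and magnitude depend on spike order.
dw = stdp([-40.0, -10.0, 10.0, 40.0])
```

It is this temporal asymmetry, evaluated against the phase relationships of rhythmic inputs, that determines whether one frequency channel suppresses another or both are transmitted.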
45
Bonilla-Quintana M, Wörgötter F, Tetzlaff C, Fauth M. Modeling the Shape of Synaptic Spines by Their Actin Dynamics. Front Synaptic Neurosci 2020; 12:9. [PMID: 32218728] [PMCID: PMC7078677] [DOI: 10.3389/fnsyn.2020.00009]
Abstract
Dendritic spines are the morphological basis of excitatory synapses in the cortex, and their size and shape correlate with functional synaptic properties. Recent experiments show that spines exhibit large shape fluctuations that are not related to activity-dependent plasticity but nonetheless might influence memory storage at their synapses. To investigate the determinants of such spontaneous fluctuations, we propose a mathematical model for the dynamics of the spine shape and analyze it in 2D (related to experimental microscopic imagery) and in 3D. We show that the spine shape is governed by a local imbalance between membrane tension and the expansive force from actin bundles that originates from discrete actin polymerization foci. Experiments have shown that only a few such polymerization foci co-exist at any time in a spine, each having a limited lifetime. The model shows that the momentarily existing set of such foci pushes the membrane along certain directions until foci are replaced and other directions may now be affected. We explore these relations in depth and use our model to predict shape and temporal characteristics of spines from the different biophysical parameters involved in actin polymerization. Approximating the model by a single recursive equation, we finally demonstrate that the temporal evolution of the number of active foci is sufficient to predict the size of the model-spines. Thus, our model provides the first platform to study the relation between molecular and morphological properties of the spine with a high degree of biophysical detail.
Collapse
Affiliation(s)
- Mayte Bonilla-Quintana
- Department for Computational Neuroscience, Third Institute of Physics-Biophysics, Georg-August-University, Göttingen, Germany
| | - Florentin Wörgötter
- Department for Computational Neuroscience, Third Institute of Physics-Biophysics, Georg-August-University, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Georg-August-University, Göttingen, Germany
| | - Christian Tetzlaff
- Department for Computational Neuroscience, Third Institute of Physics-Biophysics, Georg-August-University, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Georg-August-University, Göttingen, Germany
| | - Michael Fauth
- Department for Computational Neuroscience, Third Institute of Physics-Biophysics, Georg-August-University, Göttingen, Germany
| |
Collapse
|
46
|
Activity Dependent and Independent Determinants of Synaptic Size Diversity. J Neurosci 2020; 40:2828-2848. [PMID: 32127494 DOI: 10.1523/jneurosci.2181-19.2020] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2019] [Revised: 02/04/2020] [Accepted: 02/13/2020] [Indexed: 11/21/2022] Open
Abstract
The extraordinary diversity of excitatory synapse sizes is commonly attributed to activity-dependent processes that drive synaptic growth and diminution. Recent studies also point to activity-independent size fluctuations, possibly driven by innate synaptic molecule dynamics, as important generators of size diversity. To examine the contributions of activity-dependent and independent processes to excitatory synapse size diversity, we studied glutamatergic synapse size dynamics and diversification in cultured rat cortical neurons (both sexes), silenced from plating. We found that in networks with no history of activity whatsoever, synaptic size diversity was no less extensive than that observed in spontaneously active networks. Synapses in silenced networks were larger, size distributions were broader, yet these were rightward-skewed and similar in shape when scaled by mean synaptic size. Silencing reduced the magnitude of size fluctuations and weakened constraints on size distributions, yet these were sufficient to explain synaptic size diversity in silenced networks. Model-based exploration followed by experimental testing indicated that silencing-associated changes in innate molecular dynamics and fluctuation characteristics might negatively impact synaptic persistence, resulting in reduced synaptic numbers. This, in turn, would increase synaptic molecule availability, promote synaptic enlargement, and ultimately alter fluctuation characteristics. These findings suggest that activity-independent size fluctuations are sufficient to fully diversify glutamatergic synaptic sizes, with activity-dependent processes primarily setting the scale rather than the shape of size distributions. 
Moreover, they point to reciprocal relationships between synaptic size fluctuations, size distributions, and synaptic numbers mediated by the innate dynamics of synaptic molecules as they move in, out, and between synapses.
SIGNIFICANCE STATEMENT: Sizes of glutamatergic synapses vary tremendously, even when formed on the same neuron. This diversity is commonly thought to reflect the outcome of activity-dependent forms of synaptic plasticity, yet activity-independent processes might also play some part. Here we show that in neurons with no history of activity whatsoever, synaptic sizes are no less diverse. We show that this diversity is the product of activity-independent size fluctuations, which are sufficient to generate a full repertoire of synaptic sizes at correct proportions. By combining modeling and experimentation we expose reciprocal relationships between size fluctuations, synaptic sizes and synaptic counts, and show how these phenomena might be connected through the dynamics of synaptic molecules as they move in, out, and between synapses.
Collapse
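The activity-independent size fluctuations this entry describes are often modeled in the literature as a multiplicative (Kesten-like) stochastic process, which spontaneously produces the rightward-skewed size distributions mentioned in the abstract. The sketch below is a generic illustration of that idea, not the authors' model; all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_sizes(n_synapses=10000, n_steps=2000,
                   eps_mean=0.98, eps_sd=0.05, eta_sd=0.02):
    """Kesten-like size dynamics: s(t+1) = eps * s(t) + eta,
    with eps and eta drawn afresh each step. The multiplicative
    term eps models proportional fluctuations; the additive term
    eta models small size increments. No activity enters anywhere,
    yet the population converges to a skewed stationary distribution."""
    s = np.ones(n_synapses)
    for _ in range(n_steps):
        eps = rng.normal(eps_mean, eps_sd, n_synapses)
        eta = np.abs(rng.normal(0.0, eta_sd, n_synapses))
        s = np.clip(eps * s + eta, 0.0, None)  # sizes stay non-negative
    return s

sizes = simulate_sizes()
# Sample skewness: positive values indicate a rightward-skewed distribution
skew = np.mean(((sizes - sizes.mean()) / sizes.std()) ** 3)
```

Rescaling such distributions by their mean largely collapses them onto a common shape, which parallels the abstract's observation that silencing changed the scale but not the shape of the size distributions.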
|
47
|
Abstract
Although Alzheimer's disease (AD) was described over a century ago, there are no effective approaches to its prevention and treatment. Such slow progress is explained, at least in part, by our incomplete understanding of the mechanisms underlying the pathogenesis of AD. Here, I champion a hypothesis whereby AD is initiated by a disruption of the blood-brain barrier (BBB) caused by either genetic or non-genetic risk factors. The BBB disruption leads to an autoimmune response against pyramidal neurons located in the allo- and neocortical structures involved in memory formation and storage. The response caused by the adaptive immune system is not strong enough to directly kill neurons but may be sufficient to make them selectively vulnerable to neurofibrillary pathology. This hypothesis is based on recent data showing that memory formation is associated with epigenetic chromatin modifications and, therefore, may be accompanied by expression of memory-specific proteins recognized by the immune system as "non-self" antigens. The autoimmune hypothesis is testable, and I discuss potential ways for its experimental and clinical verification. If confirmed, this hypothesis could radically change therapeutic approaches to AD prevention and treatment.
Collapse
Affiliation(s)
- Yuri I Arshavsky
- BioCircuits Institute, University of California San Diego, La Jolla, CA, USA
| |
Collapse
|
48
|
Locating the engram: Should we look for plastic synapses or information-storing molecules? Neurobiol Learn Mem 2020; 169:107164. [PMID: 31945459 DOI: 10.1016/j.nlm.2020.107164] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2019] [Revised: 09/18/2019] [Accepted: 01/10/2020] [Indexed: 12/12/2022]
Abstract
Karl Lashley began the search for the engram nearly seventy years ago. In the time since, much has been learned, but divisions remain. In the contemporary neurobiology of learning and memory, two profoundly different conceptions contend: the associative/connectionist (A/C) conception and the computational/representational (C/R) conception. Both theories ground themselves in the belief that the mind is emergent from the properties and processes of a material brain. Where these theories differ is in their description of what the neurobiological substrate of memory is and where it resides in the brain. The A/C theory of memory emphasizes the need to distinguish memory cognition from the memory engram and postulates that memory cognition is an emergent property of patterned neural activity routed through engram circuits. In this model, learning re-organizes synapse association strengths to guide future neural activity. Importantly, the version of the A/C theory advocated for here contends that synaptic change is not symbolic and, despite normally being necessary, is not sufficient for memory cognition. Instead, synaptic change provides the capacity and a blueprint for reinstating symbolic patterns of neural activity. Unlike the A/C theory, which posits that memory emerges at the circuit level, the C/R conception suggests that memory manifests at the level of intracellular molecular structures. In C/R theory, these intracellular structures are information-conveying and have properties compatible with the view that brain computation utilizes a read/write memory, functionally similar to that in a computer. New research has energized both sides and highlighted the need for new discussion. Both theories, the key questions each has yet to resolve, and several potential paths forward are presented here.
Collapse
|
49
|
Rule ME, O'Leary T, Harvey CD. Causes and consequences of representational drift. Curr Opin Neurobiol 2019; 58:141-147. [PMID: 31569062 PMCID: PMC7385530 DOI: 10.1016/j.conb.2019.08.005] [Citation(s) in RCA: 83] [Impact Index Per Article: 16.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2019] [Revised: 08/13/2019] [Accepted: 08/27/2019] [Indexed: 01/27/2023]
Abstract
The nervous system learns new associations while maintaining memories over long periods, exhibiting a balance between flexibility and stability. Recent experiments reveal that neuronal representations of learned sensorimotor tasks continually change over days and weeks, even after animals have achieved expert behavioral performance. How is learned information stored to allow consistent behavior despite ongoing changes in neuronal activity? What functions could ongoing reconfiguration serve? We highlight recent experimental evidence for such representational drift in sensorimotor systems, and discuss how this fits into a framework of distributed population codes. We identify recent theoretical work that suggests computational roles for drift and argue that the recurrent and distributed nature of sensorimotor representations permits drift while limiting disruptive effects. We propose that representational drift may create error signals between interconnected brain regions that can be used to keep neural codes consistent in the presence of continual change. These concepts suggest experimental and theoretical approaches to studying both learning and maintenance of distributed and adaptive population codes.
Collapse
Affiliation(s)
- Michael E Rule
- Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, United Kingdom
| | - Timothy O'Leary
- Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, United Kingdom.
| | | |
Collapse
|
50
|
Susman L, Brenner N, Barak O. Stable memory with unstable synapses. Nat Commun 2019; 10:4441. [PMID: 31570719 PMCID: PMC6768856 DOI: 10.1038/s41467-019-12306-2] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2018] [Accepted: 08/20/2019] [Indexed: 12/22/2022] Open
Abstract
What is the physiological basis of long-term memory? The prevailing view in neuroscience attributes memory formation to changes in synaptic efficacy, implying that stable memories correspond to stable connectivity patterns. However, an increasing body of experimental evidence points to significant, activity-independent fluctuations in synaptic strengths. How memories can survive these fluctuations and the accompanying stabilizing homeostatic mechanisms is a fundamental open question. Here we explore the possibility of memory storage within a global component of network connectivity, while individual connections fluctuate. We find that homeostatic stabilization of fluctuations differentially affects different aspects of network connectivity. Specifically, memories stored as time-varying attractors of neural dynamics are more resilient to erosion than fixed points. Such dynamic attractors can be learned by biologically plausible learning rules and support associative retrieval. Our results suggest a link between the properties of learning rules and those of network-level memory representations, and point to experimentally measurable signatures. How are stable memories maintained in the brain despite significant ongoing fluctuations in synaptic strengths? Here, the authors show that a model consistent with fluctuations, homeostasis and biologically plausible learning rules naturally leads to memories implemented as dynamic attractors.
Collapse
Affiliation(s)
- Lee Susman
- Interdisciplinary Program in Applied Mathematics, Technion Israel Institute of Technology, Haifa, 32000, Israel; Network Biology Research Laboratories, Technion Israel Institute of Technology, Haifa, 32000, Israel.
| | - Naama Brenner
- Network Biology Research Laboratories, Technion Israel Institute of Technology, Haifa, 32000, Israel; Department of Chemical Engineering, Technion Israel Institute of Technology, Haifa, 32000, Israel.
| | - Omri Barak
- Network Biology Research Laboratories, Technion Israel Institute of Technology, Haifa, 32000, Israel; Rappaport Faculty of Medicine, Technion Israel Institute of Technology, Haifa, 32000, Israel.
| |
Collapse
|