1
Caya-Bissonnette L, Béïque JC. Half a century legacy of long-term potentiation. Curr Biol 2024; 34:R640-R662. PMID: 38981433. DOI: 10.1016/j.cub.2024.05.008.
Abstract
In 1973, two papers, from Bliss and Lømo and from Bliss and Gardner-Medwin, reported that high-frequency synaptic stimulation in the dentate gyrus of rabbits resulted in a long-lasting increase in synaptic strength. This form of synaptic plasticity, commonly referred to as long-term potentiation (LTP), was immediately considered an attractive mechanism to account for the brain's ability to store information. In this historical piece looking back over the past 50 years, we discuss how these two landmark contributions directly motivated a colossal research effort and detail some of the resulting milestones that have shaped our evolving understanding of the molecular and cellular underpinnings of LTP. We highlight the main features of LTP, cover key experiments that defined its induction and expression mechanisms, and outline the evidence supporting a potential role for LTP in learning and memory. We also briefly explore some ramifications of LTP for network stability, consider current limitations of LTP as a model of associative memory, and entertain future research orientations.
Affiliation(s)
- Léa Caya-Bissonnette
- Graduate Program in Neuroscience, University of Ottawa, 451 ch. Smyth Road (3501N), Ottawa, ON K1H 8M5, Canada
- Brain and Mind Research Institute's Centre for Neural Dynamics and Artificial Intelligence, 451 ch. Smyth Road (3501N), Ottawa, ON K1H 8M5, Canada
- Department of Cellular and Molecular Medicine, Faculty of Medicine, University of Ottawa, 451 ch. Smyth Road (3501N), Ottawa, ON K1H 8M5, Canada
- Jean-Claude Béïque
- Brain and Mind Research Institute's Centre for Neural Dynamics and Artificial Intelligence, 451 ch. Smyth Road (3501N), Ottawa, ON K1H 8M5, Canada
- Department of Cellular and Molecular Medicine, Faculty of Medicine, University of Ottawa, 451 ch. Smyth Road (3501N), Ottawa, ON K1H 8M5, Canada
2
Song Y, Benna MK. Parallel Synapses with Transmission Nonlinearities Enhance Neuronal Classification Capacity. bioRxiv 2024:2024.07.01.601490. PMID: 39005326. PMCID: PMC11244940. DOI: 10.1101/2024.07.01.601490.
Abstract
Cortical neurons often establish multiple synaptic contacts with the same postsynaptic neuron. To avoid functional redundancy of these parallel synapses, it is crucial that each synapse exhibits distinct computational properties. Here we model the current to the soma contributed by each synapse as a sigmoidal transmission function of its presynaptic input, with learnable parameters such as amplitude, slope, and threshold. We evaluate the classification capacity of a neuron equipped with such nonlinear parallel synapses, and show that with a small number of parallel synapses per axon, it substantially exceeds that of the Perceptron. Furthermore, the number of correctly classified data points can increase superlinearly as the number of presynaptic axons grows. When training with an unrestricted number of parallel synapses, our model neuron can effectively implement an arbitrary aggregate transmission function for each axon, constrained only by monotonicity. Nevertheless, successful learning in the model neuron often requires only a small number of parallel synapses. We also apply these parallel synapses in a feedforward neural network trained to classify MNIST images, and show that they can increase the test accuracy. This demonstrates that multiple nonlinear synapses per input axon can substantially enhance a neuron's computational power.
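The synapse model described in this abstract lends itself to a compact illustration. Below is a minimal sketch (function names and parameter choices are ours, not the paper's): each axon contacts the neuron through several parallel synapses, each passing the shared axonal input through its own learnable sigmoid, and the soma sums the resulting currents and thresholds them for binary classification.

```python
import numpy as np

def neuron_output(x, amp, slope, thresh, bias=0.0):
    """Sum of sigmoidal parallel-synapse currents, thresholded at the soma.

    x:                  presynaptic inputs, shape (n_axons,)
    amp, slope, thresh: learnable per-synapse parameters,
                        shape (n_axons, n_parallel) -- several synapses per axon.
    """
    # Each parallel synapse sees the same axonal input x[i] but
    # transforms it through its own sigmoid nonlinearity.
    drive = slope * (x[:, None] - thresh)            # (n_axons, n_parallel)
    current = amp / (1.0 + np.exp(-drive))           # sigmoidal transmission
    return 1.0 if current.sum() + bias > 0 else 0.0  # binary classification

rng = np.random.default_rng(0)
n_axons, n_parallel = 100, 3
x = rng.standard_normal(n_axons)
amp = rng.standard_normal((n_axons, n_parallel))
slope = rng.gamma(2.0, 1.0, (n_axons, n_parallel))   # positive slopes keep each
thresh = rng.standard_normal((n_axons, n_parallel))  # sigmoid monotonic
label = neuron_output(x, amp, slope, thresh)
```

Training would adjust `amp`, `slope`, and `thresh` per synapse; keeping slopes positive preserves the monotonicity constraint mentioned in the abstract.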
Affiliation(s)
- Yuru Song
- Neurosciences Graduate Program, University of California, San Diego, La Jolla, CA 92093, USA
- Marcus K. Benna
- Department of Neurobiology, School of Biological Sciences, University of California San Diego, La Jolla, CA 92093, USA
3
Choudhary K, Berberich S, Hahn TTG, McFarland JM, Mehta MR. Spontaneous persistent activity and inactivity in vivo reveals differential cortico-entorhinal functional connectivity. Nat Commun 2024; 15:3542. PMID: 38719802. PMCID: PMC11079062. DOI: 10.1038/s41467-024-47617-6.
Abstract
Understanding the functional connectivity between brain regions and its emergent dynamics is a central challenge. Here we present a hybrid theory-experiment approach involving iteration between a minimal computational model and in vivo electrophysiological measurements. Our model predicted not only spontaneous persistent activity (SPA) during Up-Down-state oscillations but also spontaneous persistent inactivity (SPI), which had not previously been reported. Both were confirmed in vivo in the membrane potential of neurons, especially from layer 3 of the medial and lateral entorhinal cortices. The data were then used to constrain two free parameters, yielding a unique, experimentally determined model for each neuron. Analytic and computational analysis of the model generated a dozen quantitative predictions about network dynamics, all of which were confirmed in vivo to high accuracy. Our technique predicted functional connectivity, e.g., that recurrent excitation is stronger in the medial than in the lateral entorhinal cortex; this too was confirmed with connectomics data. The technique uncovers how differential cortico-entorhinal dialogue generates SPA and SPI, which could form an energetically efficient working-memory substrate and influence the consolidation of memories during sleep. More broadly, our procedure can reveal the functional connectivity of large networks and a theory of their emergent dynamics.
Affiliation(s)
- Krishna Choudhary
- Department of Physics and Astronomy, University of California, Los Angeles, Los Angeles, CA, USA
- HRL Laboratories, Malibu, CA, USA
- Sven Berberich
- Department of Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Department of Psychiatry and Psychotherapy, University Medical Center, Johannes Gutenberg University, Mainz, Germany
- Mayank R Mehta
- Department of Physics and Astronomy, University of California, Los Angeles, Los Angeles, CA, USA
- W. M. Keck Center for Neurophysics, University of California, Los Angeles, CA, USA
- Department of Electrical and Computer Engineering, University of California, Los Angeles, CA, USA
- Departments of Neurology and Neurobiology, University of California, Los Angeles, Los Angeles, CA, USA
4
Sakelaris B, Riecke H. Adult Neurogenesis Reconciles Flexibility and Stability of Olfactory Perceptual Memory. bioRxiv 2024:2024.03.03.583153. PMID: 38737721. PMCID: PMC11087939. DOI: 10.1101/2024.03.03.583153.
Abstract
In brain regions featuring ongoing plasticity, quickly encoding new information without overwriting old memories presents a significant challenge. In the rodent olfactory bulb, which is renowned for substantial structural plasticity driven by adult neurogenesis and persistent turnover of dendritic spines, we show that such plasticity is vital to overcoming this flexibility-stability dilemma. To do so, we develop a computational model of structural plasticity in the olfactory bulb and show that the maturation of adult-born neurons makes it possible to learn quickly and forget slowly. Particularly important for achieving this are the transient enhancements of plasticity, excitability, and susceptibility to apoptosis that characterize young neurons. The model captures many experimental observations and makes a number of testable predictions. Overall, it identifies memory consolidation as an important role of adult neurogenesis in olfaction and exemplifies how the brain can maintain stable memories despite ongoing extensive plasticity.
Affiliation(s)
- Bennet Sakelaris
- Engineering Sciences and Applied Mathematics, Northwestern University, Evanston, Illinois, United States of America
- Hermann Riecke
- Engineering Sciences and Applied Mathematics, Northwestern University, Evanston, Illinois, United States of America
5
Ecker A, Egas Santander D, Bolaños-Puchet S, Isbister JB, Reimann MW. Cortical cell assemblies and their underlying connectivity: An in silico study. PLoS Comput Biol 2024; 20:e1011891. PMID: 38466752. PMCID: PMC10927091. DOI: 10.1371/journal.pcbi.1011891.
Abstract
Recent developments in experimental techniques have enabled simultaneous recordings from thousands of neurons, enabling the study of functional cell assemblies. However, determining the patterns of synaptic connectivity giving rise to these assemblies remains challenging. To address this, we developed a complementary, simulation-based approach, using a detailed, large-scale cortical network model. Using a combination of established methods, we detected functional cell assemblies from the stimulus-evoked spiking activity of 186,665 neurons. We studied how the structure of synaptic connectivity underlies assembly composition, quantifying the effects of thalamic innervation, recurrent connectivity, and the spatial arrangement of synapses on dendrites. We determined that these features reduce the uncertainty about a neuron's assembly membership by up to 30%, 22%, and 10%, respectively. The detected assemblies were activated in a stimulus-specific sequence and were grouped based on their position in the sequence. We found that the different groups were affected to different degrees by the structural features we considered. Additionally, connectivity was more predictive of assembly membership if its direction aligned with the temporal order of assembly activation, if it originated from strongly interconnected populations, and if synapses clustered on dendritic branches. In summary, reversing Hebb's postulate, we showed how cells that are wired together fire together, quantifying how connectivity patterns interact to shape the emergence of assemblies. This includes a qualitative aspect of connectivity: not just the amount but also the local structure matters, from dendritic clustering at the subcellular level to the presence of specific network motifs.
Affiliation(s)
- András Ecker
- Blue Brain Project, École polytechnique fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
- Daniela Egas Santander
- Blue Brain Project, École polytechnique fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
- Sirio Bolaños-Puchet
- Blue Brain Project, École polytechnique fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
- James B. Isbister
- Blue Brain Project, École polytechnique fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
- Michael W. Reimann
- Blue Brain Project, École polytechnique fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
6
Sosa M, Plitt MH, Giocomo LM. Hippocampal sequences span experience relative to rewards. bioRxiv 2024:2023.12.27.573490. PMID: 38234842. PMCID: PMC10793396. DOI: 10.1101/2023.12.27.573490.
Abstract
Hippocampal place cells fire in sequences that span spatial environments and non-spatial modalities, suggesting that hippocampal activity can anchor to the most behaviorally salient aspects of experience. As reward is a highly salient event, we hypothesized that sequences of hippocampal activity can anchor to rewards. To test this, we performed two-photon imaging of hippocampal CA1 neurons as mice navigated virtual environments with changing hidden reward locations. When the reward moved, the firing fields of a subpopulation of cells moved to the same relative position with respect to reward, constructing a sequence of reward-relative cells that spanned the entire task structure. The density of these reward-relative sequences increased with task experience as additional neurons were recruited to the reward-relative population. Conversely, a largely separate subpopulation maintained a spatially based place code. These findings thus reveal that separate hippocampal ensembles can flexibly encode multiple behaviorally salient reference frames, reflecting the structure of the experience.
Affiliation(s)
- Marielena Sosa
- Department of Neurobiology, Stanford University School of Medicine, Stanford, CA, USA
- Mark H. Plitt
- Department of Neurobiology, Stanford University School of Medicine, Stanford, CA, USA
- Present address: Department of Molecular and Cell Biology, University of California Berkeley, Berkeley, CA, USA
- Lisa M. Giocomo
- Department of Neurobiology, Stanford University School of Medicine, Stanford, CA, USA
7
Feng Y, Brunel N. Attractor neural networks with double well synapses. PLoS Comput Biol 2024; 20:e1011354. PMID: 38324630. PMCID: PMC10878535. DOI: 10.1371/journal.pcbi.1011354.
Abstract
It is widely believed that memory storage depends on activity-dependent synaptic modifications. Classical studies of learning and memory in neural networks describe synaptic efficacy either as continuous or discrete. However, recent results suggest an intermediate scenario in which synaptic efficacy can be described by a continuous variable whose distribution is peaked around a small set of discrete values. Motivated by these results, we explored a model in which each synapse is described by a continuous variable that evolves in a potential with multiple minima. External inputs to the network can switch synapses from one potential well to another. Our analytical and numerical results show that this model can interpolate between models with discrete synapses, which correspond to the deep-potential limit, and models in which synapses evolve in a single quadratic potential. We find that the storage capacity of the network with double well synapses exhibits a power law dependence on the network size, rather than the logarithmic dependence observed in models with single well synapses. In addition, synapses with deeper potential wells lead to more robust information storage in the presence of noise. When memories are sparsely encoded, the scaling of the capacity with network size is similar to previously studied network models in the sparse coding limit.
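The dynamics described here can be illustrated with a toy double-well potential (the specific potential U(w) = (w² − 1)²/4 and the drive amplitude below are illustrative choices, not the paper's): each weight performs gradient descent in the potential and so settles near one of the two minima at w = ±1, while a sufficiently strong external input can push it over the barrier into the other well.

```python
import numpy as np

def d_potential(w):
    """Gradient of the double-well potential U(w) = (w**2 - 1)**2 / 4,
    which has minima at w = -1 and w = +1 separated by a barrier at w = 0."""
    return w * (w**2 - 1.0)

def evolve(w, drive, dt=0.01, steps=1000):
    """Relax the synaptic weight in the potential by Euler integration;
    `drive` is a constant external input that can switch the well."""
    for _ in range(steps):
        w += dt * (-d_potential(w) + drive)
    return w

w_idle = evolve(-1.0, drive=0.0)      # no input: stays in the w = -1 well
w_switched = evolve(-1.0, drive=0.6)  # strong input: crosses into the w > 0 well
```

A weak drive (below the barrier-tilting value) would only displace the weight within its well, which is the sense in which deep wells make the stored bit robust to noise.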
Affiliation(s)
- Yu Feng
- Department of Physics, Duke University, Durham, North Carolina, United States of America
- Nicolas Brunel
- Department of Physics, Duke University, Durham, North Carolina, United States of America
- Department of Neurobiology, Duke University, Durham, North Carolina, United States of America
8
Wang X, Jin Y, Du W, Wang J. Evolving Dual-Threshold Bienenstock-Cooper-Munro Learning Rules in Echo State Networks. IEEE Trans Neural Netw Learn Syst 2024; 35:1572-1583. PMID: 35763483. DOI: 10.1109/tnnls.2022.3184004.
Abstract
In the existing Bienenstock-Cooper-Munro (BCM) learning rule, the strengthening and weakening of synaptic strength are determined by a sliding long-term potentiation (LTP) modification threshold and the afferent synaptic activities. However, synaptic long-term depression (LTD) affects even low-activity synapses during the induction of synaptic plasticity, which may lead to information loss. Biological experiments have found another, LTD, threshold that can induce potentiation, depression, or no change, even at activated synapses. In addition, existing BCM learning rules use a single set of fixed rule parameters, which is biologically implausible and practically inflexible for learning the structural information of input signals. In this article, an evolved dual-threshold BCM learning rule is proposed to regulate the internal reservoir connection weights of an echo state network (ESN); it alleviates information loss and enhances learning performance by introducing a different optimal LTD threshold for each postsynaptic neuron. Our experimental results show that the evolved dual-threshold BCM learning rule yields synergistic learning across different plasticity rules, effectively improving the learning performance of an ESN compared with existing neural plasticity learning rules and some state-of-the-art ESN variants on three widely used benchmark tasks and the prediction of an esterification process.
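To make the contrast concrete, here is a rough sketch (the threshold values, learning rate, and activity model are illustrative placeholders, not the evolved parameters from the article): the classical BCM rule updates every synapse in proportion to its presynaptic activity, while the dual-threshold variant leaves synapses whose presynaptic activity falls below an LTD threshold untouched, so depression cannot erase information stored at low-active synapses.

```python
import numpy as np

def bcm_update(w, x, y, theta_m, lr=0.01):
    """Classical BCM: potentiate when postsynaptic activity y exceeds the
    sliding modification threshold theta_m, depress otherwise -- at every
    synapse, scaled by its presynaptic activity x."""
    return w + lr * y * (y - theta_m) * x

def dual_threshold_bcm_update(w, x, y, theta_m, theta_ltd, lr=0.01):
    """Dual-threshold variant (illustrative): synapses with presynaptic
    activity below the LTD threshold theta_ltd are left unchanged."""
    dw = lr * y * (y - theta_m) * x
    return np.where(x >= theta_ltd, w + dw, w)

x = np.array([0.05, 0.8, 0.9])    # one low-active, two active synapses
w = np.full(3, 0.5)
y = float(w @ x)                  # linear postsynaptic activity (illustrative)
w_classic = bcm_update(w, x, y, theta_m=1.5)            # y < theta_m: LTD everywhere
w_dual = dual_threshold_bcm_update(w, x, y, theta_m=1.5, theta_ltd=0.1)
```

In the dual-threshold update the low-active synapse keeps its weight, which is the information-preservation effect the article attributes to the extra threshold.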
9
Li PY, Roxin A. Rapid memory encoding in a recurrent network model with behavioral time scale synaptic plasticity. PLoS Comput Biol 2023; 19:e1011139. PMID: 37624848. PMCID: PMC10484462. DOI: 10.1371/journal.pcbi.1011139.
Abstract
Episodic memories are formed after a single exposure to novel stimuli. The plasticity mechanisms underlying such fast learning remain largely unknown. Recently, it was shown that cells in area CA1 of the hippocampus of mice could form or shift their place fields after a single traversal of a virtual linear track. In vivo intracellular recordings in CA1 cells revealed that previously silent inputs from CA3 could be switched on when they occurred within a few seconds of a dendritic plateau potential (PP) in the postsynaptic cell, a phenomenon dubbed behavioral time-scale plasticity (BTSP). A recently developed computational framework for BTSP, in which the dynamics of synaptic traces related to the presynaptic activity and the postsynaptic PP are explicitly modelled, can account for these experimental findings. Here we show that this model of plasticity can be further simplified to a 1D map that describes the changes to the synaptic weights after a single trial. We use a temporally symmetric version of this map to study the storage of a large number of spatial memories in a recurrent network, such as CA3. Specifically, the simplicity of the map allows us to calculate the correlation of the synaptic weight matrix with any given past environment analytically. We show that the calculated memory trace can be used to predict the emergence and stability of bump attractors in a high-dimensional neural network model endowed with BTSP.
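In this spirit, a generic BTSP-like weight update can be sketched by pairing an exponentially decaying presynaptic eligibility trace with a plateau-potential trace and potentiating in proportion to their overlap (the time constants, learning rate, and trace shapes below are illustrative assumptions, not the fitted values from the framework described above).

```python
import numpy as np

def btsp_weight_change(pre_spike_times, plateau_time, tau_pre=1.5,
                       tau_pp=0.5, lr=0.2, t_max=20.0, dt=0.001):
    """Accumulate the overlap of a presynaptic eligibility trace and a
    plateau-potential (PP) trace; both decay exponentially over seconds."""
    t = np.arange(0.0, t_max, dt)
    trace_pre = np.zeros_like(t)
    for s in pre_spike_times:   # presynaptic trace: jump at each spike, then decay
        trace_pre += np.where(t >= s, np.exp(-(t - s) / tau_pre), 0.0)
    trace_pp = np.where(t >= plateau_time,
                        np.exp(-(t - plateau_time) / tau_pp), 0.0)
    return lr * np.sum(trace_pre * trace_pp) * dt   # overlap integral

dw_near = btsp_weight_change(pre_spike_times=[4.5], plateau_time=5.0)
dw_far = btsp_weight_change(pre_spike_times=[0.5], plateau_time=5.0)
```

Because either trace can outlast the other, inputs arriving shortly before or shortly after the plateau both get potentiated, giving a seconds-wide, roughly symmetric association window.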
Affiliation(s)
- Pan Ye Li
- Centre de Recerca Matemàtica, Barcelona, Spain
- Alex Roxin
- Centre de Recerca Matemàtica, Barcelona, Spain
10
Rahman M, Bose S, Chakrabartty S. On-device synaptic memory consolidation using Fowler-Nordheim quantum-tunneling. Front Neurosci 2023; 16:1050585. PMID: 36711131. PMCID: PMC9880265. DOI: 10.3389/fnins.2022.1050585.
Abstract
Introduction: For artificial synapses whose strengths are assumed to be bounded and can only be updated with finite precision, achieving optimal memory consolidation using primitives from classical physics leads to synaptic models that are too complex to be scaled in silico. Here we report that a relatively simple differential device operating via the physics of Fowler-Nordheim (FN) quantum-mechanical tunneling can achieve tunable memory consolidation characteristics with different plasticity-stability trade-offs.
Methods: A prototype FN-synapse array was fabricated in a standard silicon process and used to verify the optimal memory consolidation characteristics and to estimate the parameters of an FN-synapse analytical model. The analytical model was then used for large-scale memory consolidation and continual learning experiments.
Results: We show that, compared to other physical implementations of synapses for memory consolidation, the operation of the FN-synapse is near-optimal in terms of synaptic lifetime and consolidation properties. We also demonstrate that a network comprising FN-synapses outperforms a comparable elastic weight consolidation (EWC) network on some benchmark continual learning tasks.
Discussion: With an energy footprint of femtojoules per synaptic update, we believe the proposed FN-synapse provides an ultra-energy-efficient approach for implementing both synaptic memory consolidation and continual learning on a physical device.
11
Ji-An L, Stefanini F, Benna MK, Fusi S, La Porta CA. Face familiarity detection with complex synapses. iScience 2022; 26:105856. PMID: 36636347. PMCID: PMC9829748. DOI: 10.1016/j.isci.2022.105856.
Abstract
Synaptic plasticity is a complex phenomenon involving multiple biochemical processes that operate on different timescales. Complexity can greatly increase memory capacity when the variables characterizing the synaptic dynamics have limited precision, as shown in simple memory retrieval problems involving random patterns. Here we turn to a real-world problem, face familiarity detection, and we show that synaptic complexity can be harnessed to store in memory a large number of faces that can be recognized at a later time. The number of recognizable faces grows almost linearly with the number of synapses and quadratically with the number of neurons. Complex synapses outperform simple ones characterized by a single variable, even when the total number of dynamical variables is matched. Complex and simple synapses have distinct signatures that are testable in experiments. Our results indicate that a system with complex synapses can be used in real-world tasks such as face familiarity detection.
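One well-known family of complex-synapse models of this kind couples a chain of hidden variables with progressively longer timescales, in the spirit of Benna and Fusi's cascade models (the coupling constants and discretization below are illustrative, not the paper's): plasticity events hit only the first, visible variable, and the slower variables absorb and protect part of each change.

```python
import numpy as np

def step_complex_synapse(u, external_input, g=0.25, dt=0.1):
    """One Euler step for a chain of coupled synaptic variables u[0..n-1].
    u[0] is the visible weight; each variable relaxes toward its neighbors
    with couplings that shrink geometrically, so timescales grow deeper
    in the chain (illustrative discretization)."""
    n = len(u)
    du = np.zeros(n)
    couplings = g ** np.arange(1, n)   # n-1 inter-variable couplings: g, g^2, ...
    du[0] = external_input + couplings[0] * (u[1] - u[0])
    for k in range(1, n - 1):
        du[k] = couplings[k - 1] * (u[k - 1] - u[k]) \
              + couplings[k] * (u[k + 1] - u[k])
    du[n - 1] = couplings[-1] * (u[n - 2] - u[n - 1])
    return u + dt * du

u = np.zeros(5)
u = step_complex_synapse(u, external_input=1.0)  # a single potentiation event
for _ in range(200):                             # then let the chain relax
    u = step_complex_synapse(u, external_input=0.0)
```

After relaxation, part of the event has flowed into the slow variables while the total is preserved, which is what lets a chain like this forget slowly without saturating: a single-variable synapse with the same bounds would have to trade one property for the other.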
Affiliation(s)
- Li Ji-An
- Zuckerman Institute, Columbia University, New York, NY 10027, USA
- Neurosciences Graduate Program, University of California San Diego, La Jolla, CA 92093, USA
- Fabio Stefanini
- Zuckerman Institute, Columbia University, New York, NY 10027, USA
- Marcus K. Benna
- Zuckerman Institute, Columbia University, New York, NY 10027, USA
- Department of Neurobiology, School of Biological Sciences, University of California San Diego, La Jolla, CA 92093, USA
- Corresponding author
- Stefano Fusi
- Zuckerman Institute, Columbia University, New York, NY 10027, USA
- Corresponding author
12
Vardalaki D, Chung K, Harnett MT. Filopodia are a structural substrate for silent synapses in adult neocortex. Nature 2022; 612:323-327. PMID: 36450984. DOI: 10.1038/s41586-022-05483-6.
Abstract
Newly generated excitatory synapses in the mammalian cortex lack sufficient AMPA-type glutamate receptors to mediate neurotransmission, resulting in functionally silent synapses that require activity-dependent plasticity to mature. Silent synapses are abundant in early development, during which they mediate circuit formation and refinement, but they are thought to be scarce in adulthood [1]. However, adults retain a capacity for neural plasticity and flexible learning that suggests that the formation of new connections is still prevalent. Here we used super-resolution protein imaging to visualize synaptic proteins at 2,234 synapses from layer 5 pyramidal neurons in the primary visual cortex of adult mice. Unexpectedly, about 25% of these synapses lack AMPA receptors. These putative silent synapses were located at the tips of thin dendritic protrusions, known as filopodia, which were more abundant by an order of magnitude than previously believed (comprising about 30% of all dendritic protrusions). Physiological experiments revealed that filopodia do indeed lack AMPA-receptor-mediated transmission, but they exhibit NMDA-receptor-mediated synaptic transmission. We further showed that functionally silent synapses on filopodia can be unsilenced through Hebbian plasticity, recruiting new active connections into a neuron's input matrix. These results challenge the model that functional connectivity is largely fixed in the adult cortex and demonstrate a new mechanism for flexible control of synaptic wiring that expands the learning capabilities of the mature brain.
Affiliation(s)
- Dimitra Vardalaki
- McGovern Institute for Brain Research, MIT, Cambridge, MA, USA
- Department of Brain & Cognitive Sciences, MIT, Cambridge, MA, USA
- Kwanghun Chung
- Department of Brain & Cognitive Sciences, MIT, Cambridge, MA, USA
- Picower Institute for Learning and Memory, MIT, Cambridge, MA, USA
- Institute for Medical Engineering and Science, MIT, Cambridge, MA, USA
- Department of Chemical Engineering, MIT, Cambridge, MA, USA
- Broad Institute of Harvard University and MIT, Cambridge, MA, USA
- Mark T Harnett
- McGovern Institute for Brain Research, MIT, Cambridge, MA, USA
- Department of Brain & Cognitive Sciences, MIT, Cambridge, MA, USA
13
Driscoll LN, Duncker L, Harvey CD. Representational drift: Emerging theories for continual learning and experimental future directions. Curr Opin Neurobiol 2022; 76:102609. PMID: 35939861. DOI: 10.1016/j.conb.2022.102609.
Abstract
Recent work has revealed that the neural activity patterns correlated with sensation, cognition, and action often are not stable and instead undergo large-scale changes over days and weeks, a phenomenon called representational drift. Here, we highlight recent observations of drift, how drift is unlikely to be explained by experimental confounds, and how the brain can likely compensate for drift to allow stable computation. We propose that drift might have important roles in neural computation to allow continual learning, both for separating and relating memories that occur at distinct times. Finally, we present an outlook on future experimental directions that are needed to further characterize drift and to test emerging theories for drift's role in computation.
Affiliation(s)
- Laura N Driscoll
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Lea Duncker
- Howard Hughes Medical Institute, Stanford University, Stanford, CA, USA
14
Randomly fluctuating neural connections may implement a consolidation mechanism that explains classic memory laws. Sci Rep 2022; 12:13423. PMID: 35927567. PMCID: PMC9352731. DOI: 10.1038/s41598-022-17639-5.
Abstract
How can we reconcile the massive fluctuations in neural connections with a stable long-term memory? Two-photon microscopy studies have revealed that large portions of neural connections (spines, synapses) are unexpectedly active, changing unpredictably over time. This appears to invalidate the main assumption underlying the majority of memory models in cognitive neuroscience, which rely on stable connections that retain information over time. Here, we show that such random fluctuations may in fact implement a type of memory consolidation mechanism with a stable very long-term memory that offers novel explanations for several classic memory 'laws', namely Jost's Law (1897: superiority of spaced learning) and Ribot's Law (1881: loss of recent memories in retrograde amnesia), for which a common neural basis has been postulated but not established, as well as other general 'laws' of learning and forgetting. We show how these phenomena emerge naturally from massively fluctuating neural connections.
15
Giotis C, Serb A, Manouras V, Stathopoulos S, Prodromakis T. Palimpsest memories stored in memristive synapses. Sci Adv 2022; 8:eabn7920. PMID: 35731877. PMCID: PMC9217086. DOI: 10.1126/sciadv.abn7920.
Abstract
Biological synapses store multiple memories on top of each other in a palimpsest fashion and at different time scales. Palimpsest consolidation is facilitated by the interaction of hidden biochemical processes governing synaptic efficacy over varying lifetimes. This arrangement allows idle memories to be temporarily overwritten without being forgotten, while previously unseen memories are used in the short term. While embedded artificial intelligence could greatly benefit from this functionality, a practical demonstration in hardware has been missing. Here, we show how the intrinsic properties of metal-oxide volatile memristors emulate the processes supporting biological palimpsest consolidation. Our memristive synapses exhibit a doubled capacity and protect a consolidated memory while up to hundreds of uncorrelated short-term memories temporarily overwrite it, without requiring specialized instructions. We further demonstrate this technology in the context of visual working memory. This showcases how emerging memory technologies can efficiently expand the capabilities of artificial intelligence hardware toward more generalized learning memories.
Collapse
Affiliation(s)
- Christos Giotis
- Department of Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ, UK
| | - Alexander Serb
- Department of Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ, UK
- Centre for Electronics Frontiers, School of Engineering, University of Edinburgh, Edinburgh EH9 3FB, UK
| | - Vasileios Manouras
- Department of Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ, UK
| | - Spyros Stathopoulos
- Department of Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ, UK
| | - Themis Prodromakis
- Department of Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ, UK
- Centre for Electronics Frontiers, School of Engineering, University of Edinburgh, Edinburgh EH9 3FB, UK
| |
|
16
|
Masset P, Qin S, Zavatone-Veth JA. Drifting neuronal representations: Bug or feature? Biol Cybern 2022; 116:253-266. [PMID: 34993613 DOI: 10.1007/s00422-021-00916-3] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/25/2021] [Accepted: 11/17/2021] [Indexed: 06/14/2023]
Abstract
The brain displays a remarkable ability to sustain stable memories, allowing animals to execute precise behaviors or recall stimulus associations years after they were first learned. Yet, recent long-term recording experiments have revealed that single-neuron representations continuously change over time, contravening the classical assumption that learned features remain static. How do unstable neural codes support robust perception, memories, and actions? Here, we review recent experimental evidence for such representational drift across brain areas, as well as dissections of its functional characteristics and underlying mechanisms. We emphasize theoretical proposals for how drift need not only be a form of noise for which the brain must compensate. Rather, it can emerge from computationally beneficial mechanisms in hierarchical networks performing robust probabilistic computations.
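One reviewed proposal, that drift can live in directions a downstream readout ignores, fits in a short sketch. The geometry below (a fixed linear readout, a random walk projected into its null space) is our own minimal construction, not a model from the review:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100
d = rng.standard_normal(n)
d /= np.linalg.norm(d)            # fixed linear readout vector

steps = 200
drift = np.zeros((steps, n))      # slowly drifting population component
z = rng.standard_normal(n)
for t in range(1, steps):
    z = z + 0.3 * rng.standard_normal(n)   # random-walk drift
    drift[t] = z - (z @ d) * d             # confined to the readout null space

def encode(s, t):
    """Population activity for stimulus s at session t."""
    return d * s + drift[t]

s = 2.0
decoded = np.array([encode(s, t) @ d for t in range(steps)])
neuron0 = np.array([encode(s, t)[0] for t in range(steps)])
```

Single-neuron tuning wanders substantially across sessions while the decoded stimulus is untouched, the sense in which drift can be a feature rather than a bug.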
Affiliation(s)
- Paul Masset
- Center for Brain Science, Harvard University, Cambridge, MA, USA.
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, USA.
| | - Shanshan Qin
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
| | - Jacob A Zavatone-Veth
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Department of Physics, Harvard University, Cambridge, MA, USA
| |
|
17
|
Feng Y, Brunel N. Storage capacity of networks with discrete synapses and sparsely encoded memories. Phys Rev E 2022; 105:054408. [PMID: 35706193 DOI: 10.1103/physreve.105.054408] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2021] [Accepted: 03/11/2022] [Indexed: 06/15/2023]
Abstract
Attractor neural networks are one of the leading theoretical frameworks for the formation and retrieval of memories in networks of biological neurons. In this framework, a pattern imposed by external inputs to the network is said to be learned when this pattern becomes a fixed point attractor of the network dynamics. The storage capacity is the maximum number of patterns that can be learned by the network. In this paper, we study the storage capacity of fully connected and sparsely connected networks with a binarized Hebbian rule, for arbitrary coding levels. Our results show that a network with discrete synapses has a storage capacity similar to that of the model with continuous synapses, and that this capacity tends asymptotically towards the optimal capacity, in the space of all possible binary connectivity matrices, in the sparse coding limit. We also derive finite coding level corrections for the asymptotic solution in the sparse coding limit. The result indicates that the capacity of networks with Hebbian learning rules converges to the optimal capacity extremely slowly as the coding level becomes small. Our results also show that in networks with sparse binary connectivity matrices, the information capacity per synapse is larger than in the fully connected case, and thus such networks store information more efficiently.
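The flavor of the model, clipped Hebbian learning of sparse binary patterns, can be illustrated with a Willshaw-style toy. The parameters below are our own choices; the paper's analysis covers arbitrary coding levels and both full and diluted connectivity:

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 1000, 50          # neurons, stored patterns
f = 0.02                 # coding level (fraction of active units)
k = int(f * N)           # k = 20 active units per pattern

patterns = np.zeros((P, N), dtype=int)
for mu in range(P):
    patterns[mu, rng.choice(N, size=k, replace=False)] = 1

# binarized ("clipped") Hebbian rule: a synapse is 1 if pre and post were
# ever coactive in a stored pattern, else 0
W = (patterns.T @ patterns > 0).astype(int)
np.fill_diagonal(W, 0)

def recall(cue):
    h = W @ cue                         # dendritic sums
    out = np.zeros(N, dtype=int)
    out[np.argsort(h)[-k:]] = 1         # k winners (global inhibition)
    return out

cue = patterns[0].copy()
cue[np.flatnonzero(cue)[: k // 2]] = 0  # degrade: silence half the active units
```

At this sparse coding level the clipped connectivity matrix stays mostly empty, yet the degraded cue still retrieves the stored pattern exactly.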
Affiliation(s)
- Yu Feng
- Department of Physics, Duke University, Durham, North Carolina 27710, USA
| | - Nicolas Brunel
- Department of Physics, Duke University, Durham, North Carolina 27710, USA
- Department of Neurobiology, Duke University, Durham, North Carolina 27710, USA
| |
|
18
|
Gonzalez KC, Losonczy A, Negrean A. Dendritic Excitability and Synaptic Plasticity In Vitro and In Vivo. Neuroscience 2022; 489:165-175. [PMID: 34998890 PMCID: PMC9392867 DOI: 10.1016/j.neuroscience.2021.12.039] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2021] [Revised: 12/29/2021] [Accepted: 12/30/2021] [Indexed: 02/06/2023]
Abstract
Much of our understanding of dendritic and synaptic physiology comes from in vitro experimentation, where the afforded mechanical stability and the convenience of applying drugs allowed patch-clamp-based recording techniques to investigate ion channel distributions and gating kinetics, and to uncover dendritic integration and synaptic plasticity rules. However, with current efforts to study these questions in vivo, there is a great need to translate existing knowledge between in vitro and in vivo experimental conditions. In this review, we identify discrepancies between the in vitro and in vivo ionic composition of extracellular media and discuss how changes in ionic composition alter dendritic excitability and plasticity induction. Here, we argue that under physiological in vivo ionic conditions, dendrites are expected to be more excitable and the threshold for synaptic plasticity induction to be lowered. Consequently, the plasticity rules described in vitro differ significantly from those implemented in vivo.
Affiliation(s)
- Kevin C Gonzalez
- Department of Neuroscience, Columbia University, New York, NY, USA; Mortimer B. Zuckerman Mind Brain Behavior Institute, New York, NY, USA.
| | - Attila Losonczy
- Department of Neuroscience, Columbia University, New York, NY, USA; Mortimer B. Zuckerman Mind Brain Behavior Institute, New York, NY, USA; Kavli Institute for Brain Science, New York, NY, USA.
| | - Adrian Negrean
- Department of Neuroscience, Columbia University, New York, NY, USA; Mortimer B. Zuckerman Mind Brain Behavior Institute, New York, NY, USA.
| |
|
19
|
Remme MWH, Bergmann U, Alevi D, Schreiber S, Sprekeler H, Kempter R. Hebbian plasticity in parallel synaptic pathways: A circuit mechanism for systems memory consolidation. PLoS Comput Biol 2021; 17:e1009681. [PMID: 34874938 PMCID: PMC8683039 DOI: 10.1371/journal.pcbi.1009681] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2021] [Revised: 12/17/2021] [Accepted: 11/24/2021] [Indexed: 12/03/2022] Open
Abstract
Systems memory consolidation involves the transfer of memories across brain regions and the transformation of memory content. For example, declarative memories that transiently depend on the hippocampal formation are transformed into long-term memory traces in neocortical networks, and procedural memories are transformed within cortico-striatal networks. These consolidation processes are thought to rely on replay and repetition of recently acquired memories, but the cellular and network mechanisms that mediate the changes of memories are poorly understood. Here, we suggest that systems memory consolidation could arise from Hebbian plasticity in networks with parallel synaptic pathways, two ubiquitous features of neural circuits in the brain. We explore this hypothesis in the context of hippocampus-dependent memories. Using computational models and mathematical analyses, we illustrate how memories are transferred across circuits and discuss why their representations could change. The analyses suggest that Hebbian plasticity mediates consolidation by transferring a linear approximation of a previously acquired memory into a parallel pathway. Our modelling results are further in quantitative agreement with lesion studies in rodents. Moreover, a hierarchical iteration of the mechanism yields power-law forgetting, as observed in psychophysical studies in humans. The predicted circuit mechanism thus bridges spatial scales from single cells to cortical areas and time scales from milliseconds to years.
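The core mechanism, a parallel pathway that Hebbian-associates replayed input with the output imposed by a tutor pathway, reduces to a one-layer caricature. The linear rule with weight decay below is our own simplification, not the paper's circuit model:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 50
w_target = rng.standard_normal(n)   # the acquired input-output association
w_fast = w_target.copy()            # fast ("hippocampal") pathway after learning
w_slow = np.zeros(n)                # parallel slow ("neocortical") pathway

eta = 0.002
for _ in range(3000):
    x = rng.standard_normal(n)        # replayed activity pattern
    y = w_fast @ x                    # replay output, driven by the fast pathway
    w_slow += eta * (y * x - w_slow)  # Hebbian association plus weight decay

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
```

After "lesioning" the fast pathway, the slow pathway alone reproduces a close linear approximation of the original association, in the spirit of the lesion studies the paper compares against.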
Affiliation(s)
- Michiel W. H. Remme
- Department of Biology, Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Berlin, Germany
| | - Urs Bergmann
- Department of Biology, Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Berlin, Germany
| | - Denis Alevi
- Department for Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
| | - Susanne Schreiber
- Department of Biology, Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Einstein Center for Neurosciences Berlin, Berlin, Germany
| | - Henning Sprekeler
- Department for Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Einstein Center for Neurosciences Berlin, Berlin, Germany
- Excellence Cluster Science of Intelligence, Berlin, Germany
| | - Richard Kempter
- Department of Biology, Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Einstein Center for Neurosciences Berlin, Berlin, Germany
| |
|
20
|
Silent Synapses in Cocaine-Associated Memory and Beyond. J Neurosci 2021; 41:9275-9285. [PMID: 34759051 DOI: 10.1523/jneurosci.1559-21.2021] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2021] [Revised: 09/22/2021] [Accepted: 09/27/2021] [Indexed: 11/21/2022] Open
Abstract
Glutamatergic synapses are key cellular sites where cocaine experience creates memory traces that subsequently promote cocaine craving and seeking. In addition to making across-the-board synaptic adaptations, cocaine experience also generates a discrete population of new synapses that selectively encode cocaine memories. These new synapses are glutamatergic synapses that lack functionally stable AMPARs, often referred to as AMPAR-silent synapses or, simply, silent synapses. They are generated de novo in the NAc by cocaine experience. After drug withdrawal, some of these synapses mature by recruiting AMPARs, contributing to the consolidation of cocaine-associated memory. After cue-induced retrieval of cocaine memories, matured silent synapses alternate between two dynamic states (AMPAR-absent vs AMPAR-containing) that correspond with the behavioral manifestations of destabilization and reconsolidation of these memories. Here, we review the molecular mechanisms underlying silent synapse dynamics during behavior, discuss their contributions to circuit remodeling, and analyze their role in cocaine-memory-driven behaviors. We also propose several mechanisms through which silent synapses can form neuronal ensembles as well as cross-region circuit engrams for cocaine-specific behaviors. These perspectives lead to our hypothesis that cocaine-generated silent synapses stand as a distinct set of synaptic substrates encoding key aspects of cocaine memory that drive cocaine relapse.
|
21
|
Primavera BA, Shainline JM. Considerations for Neuromorphic Supercomputing in Semiconducting and Superconducting Optoelectronic Hardware. Front Neurosci 2021; 15:732368. [PMID: 34552465 PMCID: PMC8450355 DOI: 10.3389/fnins.2021.732368] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2021] [Accepted: 08/09/2021] [Indexed: 11/24/2022] Open
Abstract
Any large-scale spiking neuromorphic system striving for complexity at the level of the human brain and beyond will need to be co-optimized for communication and computation. Such reasoning leads to the proposal for optoelectronic neuromorphic platforms that leverage the complementary properties of optics and electronics. Starting from the conjecture that future large-scale neuromorphic systems will utilize integrated photonics and fiber optics for communication in conjunction with analog electronics for computation, we consider two possible paths toward achieving this vision. The first is a semiconductor platform based on analog CMOS circuits and waveguide-integrated photodiodes. The second is a superconducting approach that utilizes Josephson junctions and waveguide-integrated superconducting single-photon detectors. We discuss available devices, assess scaling potential, and provide a list of key metrics and demonstrations for each platform. Both platforms hold potential, but their development will diverge in important respects. Semiconductor systems benefit from a robust fabrication ecosystem and can build on extensive progress made in purely electronic neuromorphic computing but will require III-V light source integration with electronics at an unprecedented scale, further advances in ultra-low capacitance photodiodes, and success from emerging memory technologies. Superconducting systems place near theoretically minimum burdens on light sources (a tremendous boon to one of the most speculative aspects of either platform) and provide new opportunities for integrated, high-endurance synaptic memory. However, superconducting optoelectronic systems will also contend with interfacing low-voltage electronic circuits to semiconductor light sources, the serial biasing of superconducting devices on an unprecedented scale, a less mature fabrication ecosystem, and cryogenic infrastructure.
Affiliation(s)
- Bryce A. Primavera
- National Institute of Standards and Technology, Boulder, CO, United States
- Department of Physics, University of Colorado Boulder, Boulder, CO, United States
| | | |
|
22
|
Jiang L, Litwin-Kumar A. Models of heterogeneous dopamine signaling in an insect learning and memory center. PLoS Comput Biol 2021; 17:e1009205. [PMID: 34375329 PMCID: PMC8354444 DOI: 10.1371/journal.pcbi.1009205] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2020] [Accepted: 06/22/2021] [Indexed: 11/25/2022] Open
Abstract
The Drosophila mushroom body exhibits dopamine dependent synaptic plasticity that underlies the acquisition of associative memories. Recordings of dopamine neurons in this system have identified signals related to external reinforcement such as reward and punishment. However, other factors including locomotion, novelty, reward expectation, and internal state have also recently been shown to modulate dopamine neurons. This heterogeneity is at odds with typical modeling approaches in which these neurons are assumed to encode a global, scalar error signal. How is dopamine dependent plasticity coordinated in the presence of such heterogeneity? We develop a modeling approach that infers a pattern of dopamine activity sufficient to solve defined behavioral tasks, given architectural constraints informed by knowledge of mushroom body circuitry. Model dopamine neurons exhibit diverse tuning to task parameters while nonetheless producing coherent learned behaviors. Notably, reward prediction error emerges as a mode of population activity distributed across these neurons. Our results provide a mechanistic framework that accounts for the heterogeneity of dopamine activity during learning and behavior. Dopamine neurons across the animal kingdom are involved in the formation of associative memories. While numerous studies have recorded activity in these neurons related to external and predicted rewards, the diversity of these neurons’ activity and their tuning to non-reward-related quantities such as novelty, movement, and internal state have proved challenging to account for in traditional modeling approaches. Using a well-characterized model system for learning and memory, the mushroom body of Drosophila fruit flies, Jiang and Litwin-Kumar provide an account of the diversity of signals across dopamine neurons. 
They show that models optimized to solve tasks like those encountered by flies exhibit heterogeneous activity across dopamine neurons, but nonetheless this activity is sufficient for the system to solve the tasks. The models will be useful to generate testable hypotheses about dopamine neuron activity across different experimental conditions.
Affiliation(s)
- Linnie Jiang
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, New York, United States of America
- Neurosciences Program, Stanford University, Stanford, California, United States of America
| | - Ashok Litwin-Kumar
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, New York, United States of America
| |
|
23
|
Benedetti M, Dotsenko V, Fischetti G, Marinari E, Oshanin G. Recognition capabilities of a Hopfield model with auxiliary hidden neurons. Phys Rev E 2021; 103:L060401. [PMID: 34271731 DOI: 10.1103/physreve.103.l060401] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2021] [Accepted: 05/24/2021] [Indexed: 11/07/2022]
Abstract
We study the recognition capabilities of the Hopfield model with auxiliary hidden layers, which emerge naturally upon a Hubbard-Stratonovich transformation. We show that the recognition capabilities of such a model at zero temperature outperform those of the original Hopfield model, due to a substantial increase of the storage capacity and the lack of a naturally defined basin of attraction. The modified model does not fall abruptly into the regime of complete confusion when memory load exceeds a sharp threshold. This latter circumstance, together with an increase of the storage capacity, renders such a modified Hopfield model a promising candidate for further research, with possible diverse applications.
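For orientation, the baseline being extended here, the original Hopfield model with Hebbian couplings, recalls a stored pattern from a corrupted cue. The sketch below uses our own parameters and omits the hidden layer; it only shows the standard zero-temperature dynamics:

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 400, 5                        # load far below the ~0.14 N capacity limit
xi = rng.choice([-1, 1], size=(P, N))

W = (xi.T @ xi).astype(float) / N    # Hebbian couplings
np.fill_diagonal(W, 0)

def retrieve(s, steps=20):
    s = s.copy()
    for _ in range(steps):           # zero-temperature synchronous updates
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

cue = xi[0].copy()
flip = rng.choice(N, size=60, replace=False)
cue[flip] *= -1                      # corrupt 15% of the bits
m = retrieve(cue) @ xi[0] / N        # overlap with the stored pattern
```

At this low load the corrupted cue falls well inside the basin of attraction and retrieval is essentially perfect, while distinct stored memories remain mutually uncorrelated.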
Affiliation(s)
- Marco Benedetti
- Università di Roma La Sapienza, Piazzale Aldo Moro 5, I-00185 Rome, Italy
| | - Victor Dotsenko
- Sorbonne Université, CNRS, Laboratoire de Physique Théorique de la Matière Condensée (UMR 7600), 4 Place Jussieu, F-75252 Paris Cedex 05, France
| | - Giulia Fischetti
- Università di Roma La Sapienza, Piazzale Aldo Moro 5, I-00185 Rome, Italy
| | - Enzo Marinari
- Università di Roma La Sapienza, Piazzale Aldo Moro 5, I-00185 Rome, Italy.,CNR-Nanotec and INFN, Sezione di Roma 1, I-00185 Rome, Italy
| | - Gleb Oshanin
- Sorbonne Université, CNRS, Laboratoire de Physique Théorique de la Matière Condensée (UMR 7600), 4 Place Jussieu, F-75252 Paris Cedex 05, France
| |
|
24
|
Francioni V, Harnett MT. Rethinking Single Neuron Electrical Compartmentalization: Dendritic Contributions to Network Computation In Vivo. Neuroscience 2021; 489:185-199. [PMID: 34116137 DOI: 10.1016/j.neuroscience.2021.05.038] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2021] [Revised: 05/11/2021] [Accepted: 05/29/2021] [Indexed: 12/15/2022]
Abstract
Decades of experimental and theoretical work support a now well-established theory that active dendritic processing contributes to the computational power of individual neurons. This theory is based on the high degree of electrical compartmentalization observed in the dendrites of single neurons in ex vivo preparations. Compartmentalization allows dendrites to conduct semi-independent operations on their inputs before final integration and output at the axon, producing a "network-in-a-neuron." However, recent in vivo functional imaging experiments in mouse cortex have reported surprisingly little evidence for strong dendritic compartmentalization. In this review, we contextualize these new findings and discuss their impact on the future of the field. Specifically, we consider how highly coordinated, and thus less compartmentalized, activity in soma and dendrites can contribute to cortical computations including nonlinear mixed selectivity, prediction/expectation, multiplexing, and credit assignment.
Affiliation(s)
- Valerio Francioni
- McGovern Institute for Brain Research, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA.
| | - Mark T Harnett
- McGovern Institute for Brain Research, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA.
| |
|
25
|
Energetics of stochastic BCM type synaptic plasticity and storing of accurate information. J Comput Neurosci 2021; 49:71-106. [PMID: 33528721 PMCID: PMC8046702 DOI: 10.1007/s10827-020-00775-0] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2019] [Revised: 10/04/2020] [Accepted: 12/13/2020] [Indexed: 11/10/2022]
Abstract
Excitatory synaptic signaling in cortical circuits is thought to be metabolically expensive. Two fundamental brain functions, learning and memory, are associated with long-term synaptic plasticity, but we know very little about the energetics of these slow biophysical processes. This study investigates the energy requirement of information storing in plastic synapses for an extended version of BCM plasticity with a decay term, stochastic noise, and nonlinear dependence of the neuron's firing rate on synaptic current (adaptation). It is shown that synaptic weights in this model exhibit bistability. In order to analyze the system analytically, it is reduced to a simple dynamic mean-field model for a population-averaged plastic synaptic current. Next, using the concepts of nonequilibrium thermodynamics, we derive the energy rate (entropy production rate) for plastic synapses and a corresponding Fisher information for coding presynaptic input. That energy, which is of chemical origin, is primarily used for battling fluctuations in the synaptic weights and presynaptic firing rates; it increases steeply with synaptic weights, and more uniformly though nonlinearly with presynaptic firing. At the onset of synaptic bistability, Fisher information and memory lifetime both increase sharply, by a few orders of magnitude, but the plasticity energy rate changes only mildly. This implies that a huge gain in the precision of stored information does not have to cost large amounts of metabolic energy, which suggests that synaptic information is not directly limited by energy consumption. Interestingly, for very weak synaptic noise, such a limit on synaptic coding accuracy is imposed instead by a derivative of the plasticity energy rate with respect to the mean presynaptic firing, and this relationship has a general character that is independent of the plasticity type.
An estimate for primate neocortex reveals that the relative metabolic cost of BCM-type synaptic plasticity, as a fraction of the neuronal cost related to fast synaptic transmission and spiking, can vary from negligible to substantial, depending on the synaptic noise level and presynaptic firing.
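The classic BCM rule that the study extends can be sketched directly: a sliding threshold tracks the mean squared rate and makes the neuron selective. The two-input setting and all parameters below are our own illustrative choices, without the decay term, noise analysis, and adaptation treated in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
x1 = np.array([1.0, 0.0])           # two alternating input patterns
x2 = np.array([0.0, 1.0])
w = np.array([0.6, 0.4])            # slightly asymmetric initial weights
theta = 0.0                         # sliding modification threshold
eta, tau = 0.01, 20.0

for _ in range(20000):
    x = x1 if rng.random() < 0.5 else x2
    y = w @ x                       # linear rate neuron
    w += eta * y * (y - theta) * x  # BCM: LTD below theta, LTP above
    w = np.clip(w, 0.0, None)
    theta += (y**2 - theta) / tau   # threshold slides with E[y^2]

r1, r2 = w @ x1, w @ x2             # responses after development
```

The symmetric state is unstable under the sliding threshold, so the neuron develops selectivity: one response grows toward a fixed point while the other is depressed toward zero.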
|
26
|
Brivio S, Ly DRB, Vianello E, Spiga S. Non-linear Memristive Synaptic Dynamics for Efficient Unsupervised Learning in Spiking Neural Networks. Front Neurosci 2021; 15:580909. [PMID: 33633531 PMCID: PMC7901913 DOI: 10.3389/fnins.2021.580909] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2020] [Accepted: 01/06/2021] [Indexed: 11/13/2022] Open
Abstract
Spiking neural networks (SNNs) are a computational tool in which information is coded into spikes, as in some parts of the brain, differently from conventional neural networks (NNs), which compute over real numbers. SNNs can therefore implement intelligent information extraction in real time at the edge of data acquisition and constitute a complementary solution to conventional NNs working in cloud computing. Both NN classes face hardware constraints due to limited computing parallelism and the separation of logic and memory. Emerging memory devices, like resistive switching memories, phase change memories, or memristive devices in general, are strong candidates to remove these hurdles for NN applications. The well-established training procedures of conventional NNs helped in defining the desiderata for memristive device dynamics implementing synaptic units. The generally agreed requirements are a linear evolution of memristive conductance upon stimulation with a train of identical pulses and a symmetric conductance change for conductance increase and decrease. Conversely, little work has been done to understand the main properties of memristive devices supporting efficient SNN operation. The reason lies in the lack of a background theory for their training. As a consequence, requirements for NNs have been taken as a reference to develop memristive devices for SNNs. In the present work, we show that, for efficient CMOS/memristive SNNs, the requirements for synaptic memristive dynamics are very different from the needs of a conventional NN. System-level simulations of an SNN trained to classify hand-written digit images through a spike-timing-dependent plasticity protocol are performed considering various linear and non-linear plausible synaptic memristive dynamics. We consider memristive dynamics bounded by artificial hard conductance values and limited by the natural evolution of the dynamics toward asymptotic values (soft boundaries).
We quantitatively analyze the impact of resolution and non-linearity properties of the synapses on the network training and classification performance. Finally, we demonstrate that the non-linear synapses with hard boundary values enable higher classification performance and realize the best trade-off between classification accuracy and required training time. With reference to the obtained results, we discuss how memristive devices with non-linear dynamics constitute a technologically convenient solution for the development of on-line SNN training.
Affiliation(s)
- Stefano Brivio
- CNR - IMM, Unit of Agrate Brianza, Agrate Brianza, Italy
| | - Denys R B Ly
- Université Grenoble Alpes, CEA, Leti, Grenoble, France
| | | | - Sabina Spiga
- CNR - IMM, Unit of Agrate Brianza, Agrate Brianza, Italy
| |
|
27
|
Murray JM, Escola GS. Remembrance of things practiced with fast and slow learning in cortical and subcortical pathways. Nat Commun 2020; 11:6441. [PMID: 33361766 PMCID: PMC7758336 DOI: 10.1038/s41467-020-19788-5] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2020] [Accepted: 10/21/2020] [Indexed: 11/20/2022] Open
Abstract
The learning of motor skills unfolds over multiple timescales, with rapid initial gains in performance followed by a longer period in which the behavior becomes more refined, habitual, and automatized. While recent lesion and inactivation experiments have provided hints about how various brain areas might contribute to such learning, their precise roles and the neural mechanisms underlying them are not well understood. In this work, we propose neural- and circuit-level mechanisms by which motor cortex, thalamus, and striatum support motor learning. In this model, the combination of fast cortical learning and slow subcortical learning gives rise to a covert learning process through which control of behavior is gradually transferred from cortical to subcortical circuits, while protecting learned behaviors that are practiced repeatedly against overwriting by future learning. Together, these results point to a new computational role for thalamus in motor learning and, more broadly, provide a framework for understanding the neural basis of habit formation and the automatization of behavior through practice.
Affiliation(s)
- James M Murray
- Zuckerman Mind Brain and Behavior Institute, Columbia University, New York, NY, 10027, USA.
- Institute of Neuroscience, University of Oregon, Eugene, OR, 97403, USA.
| | - G Sean Escola
- Zuckerman Mind Brain and Behavior Institute, Columbia University, New York, NY, 10027, USA
- Department of Psychiatry, Columbia University, New York, NY, 10032, USA
| |
|
28
|
Helson P. A Mathematical Analysis of Memory Lifetime in a Simple Network Model of Memory. Neural Comput 2020; 32:1322-1354. [PMID: 32433900 DOI: 10.1162/neco_a_01286] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
We study the learning of an external signal by a neural network and the time to forget it when this network is subjected to noise. The presentation of an external stimulus to the recurrent network of binary neurons may change the state of the synapses. Multiple presentations of a unique signal lead to its learning. Then, during the forgetting time, the presentation of other signals (noise) may also modify the synaptic weights. We construct an estimator of the initial signal using the synaptic currents and in this way define a probability of error. In our model, these synaptic currents evolve as Markov chains. We study the dynamics of these Markov chains and obtain a lower bound on the number of external stimuli that the network can receive before the initial signal is considered forgotten (probability of error above a given threshold). Our results are based on a finite-time analysis rather than large-time asymptotics. Finally, we present numerical illustrations of our results.
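The flavor of the result, a signal stored in binary synapses degraded by subsequent random stimuli, with a lifetime defined by an error threshold, can be sketched numerically. This is a simplified palimpsest toy of our own, not the paper's Markov-chain estimator or its bound:

```python
import numpy as np

rng = np.random.default_rng(4)
N, q = 2000, 0.02        # binary synapses; per-stimulus rewrite probability
signal = rng.choice([-1, 1], size=N)
w = signal.copy()        # synapses initially encode the signal

errors = []
for _ in range(400):
    hit = rng.random(N) < q                      # synapses touched by a noise stimulus
    w[hit] = rng.choice([-1, 1], size=int(hit.sum()))
    errors.append(float(np.mean(w != signal)))   # per-synapse error of the estimate

# memory lifetime: number of noise stimuli before the error crosses a threshold
lifetime = next(t for t, e in enumerate(errors) if e > 0.25)
```

The error climbs from near zero toward the chance level of 0.5 as noise stimuli accumulate, and the threshold crossing gives a concrete lifetime in units of presented stimuli.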
|
29
|
Abstract
Synaptic plasticity, the activity-dependent change in neuronal connection strength, has long been considered an important component of learning and memory. Computational and engineering work corroborate the power of learning through the directed adjustment of connection weights. Here we review the fundamental elements of four broadly categorized forms of synaptic plasticity and discuss their functional capabilities and limitations. Although standard, correlation-based, Hebbian synaptic plasticity has been the primary focus of neuroscientists for decades, it is inherently limited. Three-factor plasticity rules supplement Hebbian forms with neuromodulation and eligibility traces, while true supervised types go even further by adding objectives and instructive signals. Finally, a recently discovered hippocampal form of synaptic plasticity combines the above elements, while leaving behind the primary Hebbian requirement. We suggest that the effort to determine the neural basis of adaptive behavior could benefit from renewed experimental and theoretical investigation of more powerful directed types of synaptic plasticity.
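The three-factor scheme described above, Hebbian coincidence writing an eligibility trace that a delayed neuromodulatory signal converts into a weight change, can be sketched as follows. The task, trace time constant, and reward rule are hypothetical choices of ours for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
n_in = 20
w = np.zeros(n_in)
target = rng.choice(n_in, size=5, replace=False)  # inputs that predict reward
tau_e, eta = 10.0, 0.5

for trial in range(500):
    x = (rng.random(n_in) < 0.2).astype(float)    # presynaptic activity
    y = 1.0                                       # postsynaptic activity (fixed here)
    elig = np.zeros(n_in)
    for t in range(11):
        elig -= elig / tau_e                      # eligibility trace decays
        if t == 0:
            elig += x * y                         # factors 1 x 2: pre-post coincidence
    # factor 3: a delayed reward gates the decayed trace into the weights
    r = 1.0 if x[target].sum() >= 2 else 0.0
    w += eta * r * elig
```

Synapses from reward-predictive inputs end up systematically stronger even though the reinforcement arrives well after the coincidence; co-active but non-predictive inputs are also credited, just less often.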
Affiliation(s)
- Jeffrey C Magee
- Department of Neuroscience and Howard Hughes Medical Institute, Baylor College of Medicine, Houston, Texas 77030, USA
- Christine Grienberger
- Department of Neuroscience and Howard Hughes Medical Institute, Baylor College of Medicine, Houston, Texas 77030, USA
30
Yu L, Jin M, Zhou K. Multi-channel biomimetic visual transformation for object feature extraction and recognition of complex scenes. APPL INTELL 2019. [DOI: 10.1007/s10489-019-01550-0] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
31
Jung K, Jeong J, Kralik JD. A Computational Model of Attention Control in Multi-Attribute, Context-Dependent Decision Making. Front Comput Neurosci 2019; 13:40. [PMID: 31354461 PMCID: PMC6635580 DOI: 10.3389/fncom.2019.00040] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2019] [Accepted: 06/11/2019] [Indexed: 11/17/2022] Open
Abstract
Real-life decisions often require a comparison of multi-attribute options with various benefits and costs, and the evaluation of each option depends partly on the others in the choice set (i.e., the choice context). Although reinforcement learning models have successfully described choice behavior, how to account for multi-attribute information when making a context-dependent decision remains unclear. Here we develop a computational model of attention control that includes context effects on multi-attribute decisions, linking a context-dependent choice model with a reinforcement learning model. The overall model suggests that the distinctiveness of attributes guides an individual's preferences among multi-attribute options via an attention-control mechanism that determines whether choices are selectively biased toward the most distinctive attribute (selective attention) or proportionally distributed based on the relative distinctiveness of attributes (divided attention). To test the model, we conducted a behavioral experiment in rhesus monkeys, in which they made simple multi-attribute decisions over three conditions that manipulated the degree of distinctiveness between alternatives: (1) four foods of different size and calorie; (2) four pieces of the same food in different colors; and (3) four identical pieces of food. The model simulation of the choice behavior captured the preference bias (i.e., overall preference structure) and the choice persistence (repeated choices) in the empirical data, providing evidence for the respective influences of attention and memory on preference bias and choice persistence. Our study provides insights into computations underlying multi-attribute decisions, linking attentional control to decision-making processes.
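The attention-control mechanism sketched in this abstract, selective versus divided attention as a function of attribute distinctiveness, can be illustrated as follows. Using the range as the distinctiveness measure and a softmax with inverse temperature beta are both illustrative assumptions, not the fitted model:

```python
import math

def attention_weights(options, beta):
    """Attribute attention from distinctiveness. options is a list of
    attribute tuples, one per alternative. Distinctiveness of an attribute
    is its range across the choice set; large beta concentrates attention
    on the most distinctive attribute (selective attention), while beta = 0
    divides attention evenly (divided attention)."""
    n_attr = len(options[0])
    dist = [max(o[k] for o in options) - min(o[k] for o in options)
            for k in range(n_attr)]
    expd = [math.exp(beta * d) for d in dist]
    total = sum(expd)
    return [e / total for e in expd]

def option_values(options, beta):
    """Attention-weighted value of each alternative."""
    w = attention_weights(options, beta)
    return [sum(wk * ok for wk, ok in zip(w, o)) for o in options]
```

In a choice set where one attribute varies much more than the other, a large beta biases the evaluation almost entirely toward that attribute, reproducing the selective regime described above.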
Affiliation(s)
- Kanghoon Jung
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, United States
- Jaeseung Jeong
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea
- Jerald D Kralik
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, United States
32
Deger M, Seeholzer A, Gerstner W. Multicontact Co-operativity in Spike-Timing-Dependent Structural Plasticity Stabilizes Networks. Cereb Cortex 2019; 28:1396-1415. [PMID: 29300903 PMCID: PMC6041941 DOI: 10.1093/cercor/bhx339] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2017] [Accepted: 11/30/2017] [Indexed: 12/12/2022] Open
Abstract
Excitatory synaptic connections in the adult neocortex consist of multiple synaptic contacts, almost exclusively formed on dendritic spines. Changes of spine volume, a correlate of synaptic strength, can be tracked in vivo for weeks. Here, we present a combined model of structural and spike-timing–dependent plasticity that explains the multicontact configuration of synapses in adult neocortical networks under steady-state and lesion-induced conditions. Our plasticity rule with Hebbian and anti-Hebbian terms stabilizes both the postsynaptic firing rate and correlations between the pre- and postsynaptic activity at an active synaptic contact. Contacts appear spontaneously at a low rate and disappear if their strength approaches zero. Many presynaptic neurons compete to make strong synaptic connections onto a postsynaptic neuron, whereas the synaptic contacts of a given presynaptic neuron co-operate via postsynaptic firing. We find that co-operation of multiple synaptic contacts is crucial for stable, long-term synaptic memories. In simulations of a simplified network model of barrel cortex, our plasticity rule reproduces whisker-trimming–induced rewiring of thalamocortical and recurrent synaptic connectivity on realistic time scales.
Affiliation(s)
- Moritz Deger
- School of Computer and Communication Sciences and School of Life Sciences, Brain Mind Institute, École Polytechnique Fédérale de Lausanne, 1015 Lausanne EPFL, Switzerland; Institute for Zoology, Faculty of Mathematics and Natural Sciences, University of Cologne, 50674 Cologne, Germany
- Alexander Seeholzer
- School of Computer and Communication Sciences and School of Life Sciences, Brain Mind Institute, École Polytechnique Fédérale de Lausanne, 1015 Lausanne EPFL, Switzerland
- Wulfram Gerstner
- School of Computer and Communication Sciences and School of Life Sciences, Brain Mind Institute, École Polytechnique Fédérale de Lausanne, 1015 Lausanne EPFL, Switzerland
33
Wu X, Mel GC, Strouse DJ, Mel BW. How Dendrites Affect Online Recognition Memory. PLoS Comput Biol 2019; 15:e1006892. [PMID: 31050662 PMCID: PMC6527246 DOI: 10.1371/journal.pcbi.1006892] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2018] [Revised: 05/20/2019] [Accepted: 02/18/2019] [Indexed: 11/18/2022] Open
Abstract
In order to record the stream of autobiographical information that defines our unique personal history, our brains must form durable memories from single brief exposures to the patterned stimuli that impinge on them continuously throughout life. However, little is known about the computational strategies or neural mechanisms that underlie the brain's ability to perform this type of "online" learning. Based on increasing evidence that dendrites act as both signaling and learning units in the brain, we developed an analytical model that relates online recognition memory capacity to roughly a dozen dendritic, network, pattern, and task-related parameters. We used the model to determine what dendrite size maximizes storage capacity under varying assumptions about pattern density and noise level. We show that over a several-fold range of both of these parameters, and over multiple orders-of-magnitude of memory size, capacity is maximized when dendrites contain a few hundred synapses-roughly the natural number found in memory-related areas of the brain. Thus, in comparison to entire neurons, dendrites increase storage capacity by providing a larger number of better-sized learning units. Our model provides the first normative theory that explains how dendrites increase the brain's capacity for online learning; predicts which combinations of parameter settings we should expect to find in the brain under normal operating conditions; leads to novel interpretations of an array of existing experimental results; and provides a tool for understanding which changes associated with neurological disorders, aging, or stress are most likely to produce memory deficits-knowledge that could eventually help in the design of improved clinical treatments for memory loss.
Affiliation(s)
- Xundong Wu
- School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, China
- Gabriel C. Mel
- Computer Science Department, University of Southern California, Los Angeles, CA, United States
- D. J. Strouse
- Physics Department, Princeton University, Princeton, NJ, United States
- Bartlett W. Mel
- Biomedical Engineering Department and Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States
34
Abstract
We study numerically the memory that forgets, introduced in 1986 by Parisi by bounding the synaptic strength, with a mechanism that avoids confusion; allows remembering the pattern learned more recently; and has a physiologically very well-defined meaning. We analyze a number of features of this learning for a finite number of neurons and finite number of patterns. We discuss how the system behaves in the large but finite [Formula: see text] limit. We analyze the basin of attraction of the patterns that have been learned, and we show that it is exponentially small in the age of the pattern.
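The bounded-synapse mechanism studied here can be sketched in a few lines: Hebbian outer-product increments clipped to a hard bound, so that new patterns gradually erase old ones. The network size, bound, and Hopfield-style recall dynamics below are illustrative simplifications of the model analyzed in the paper:

```python
import random

def train_bounded(patterns, n, bound=3):
    """Hebbian learning with hard-bounded (clipped) couplings, in the
    spirit of Parisi's 'memory that forgets' (toy sketch). Each pattern
    adds its outer-product increment, after which every coupling is
    clipped to [-bound/n, bound/n], so old memories are gradually erased
    by new ones instead of causing catastrophic confusion."""
    J = [[0.0] * n for _ in range(n)]
    for xi in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    Jij = J[i][j] + xi[i] * xi[j] / n
                    J[i][j] = max(-bound / n, min(bound / n, Jij))
    return J

def recall_overlap(J, xi, steps=5):
    """Run synchronous sign dynamics from the pattern itself and return
    the final overlap with it (1.0 = the pattern is a stable attractor)."""
    n = len(xi)
    s = list(xi)
    for _ in range(steps):
        h = [sum(J[i][j] * s[j] for j in range(n)) for i in range(n)]
        s = [1 if hi >= 0 else -1 for hi in h]
    return sum(si * xii for si, xii in zip(s, xi)) / n
```

After storing a sequence of random patterns, the most recently learned one is recovered almost perfectly while the oldest has a strongly degraded overlap, matching the qualitative picture of a palimpsest whose basins of attraction shrink with pattern age.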
Affiliation(s)
- Enzo Marinari
- Dipartimento di Fisica, Sapienza Università di Roma; INFN Sezione di Roma 1; and Nanotech-CNR, UOS di Roma, 00185 Roma, Italy
35
Brivio S, Conti D, Nair MV, Frascaroli J, Covi E, Ricciardi C, Indiveri G, Spiga S. Extended memory lifetime in spiking neural networks employing memristive synapses with nonlinear conductance dynamics. NANOTECHNOLOGY 2019; 30:015102. [PMID: 30378572 DOI: 10.1088/1361-6528/aae81c] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Spiking neural networks (SNNs) employing memristive synapses are capable of life-long online learning. Because of their ability to process and classify large amounts of data in real-time using compact and low-power electronic systems, they promise a substantial technology breakthrough. However, the critical issue that memristor-based SNNs have to face is the fundamental limitation in their memory capacity due to finite resolution of the synaptic elements, which leads to the replacement of old memories with new ones and to a finite memory lifetime. In this study we demonstrate that the nonlinear conductance dynamics of memristive devices can be exploited to improve the memory lifetime of a network. The network is simulated on the basis of a spiking neuron model of mixed-signal digital-analogue sub-threshold neuromorphic CMOS circuits, and on memristive synapse models derived from the experimental nonlinear conductance dynamics of resistive memory devices when stimulated by trains of identical pulses. The network learning circuits implement a spike-based plasticity rule compatible with both spike-timing and rate-based learning rules. In order to get an insight on the memory lifetime of the network, we analyse the learning dynamics in the context of a classical benchmark of neural network learning, that is hand-written digit classification. In the proposed architecture, the memory lifetime and the performance of the network are improved for memristive synapses with nonlinear dynamics with respect to linear synapses with similar resolution. These results demonstrate the importance of following holistic approaches that combine the study of theoretical learning models with the development of neuromorphic CMOS SNNs with memristive devices used to implement life-long on-chip learning.
Affiliation(s)
- S Brivio
- CNR-IMM, Unit of Agrate Brianza, via C. Olivetti 2, I-20864 Agrate Brianza, Italy
36
Elliott T. First Passage Time Memory Lifetimes for Simple, Multistate Synapses: Beyond the Eigenvector Requirement. Neural Comput 2018; 31:8-67. [PMID: 30576617 DOI: 10.1162/neco_a_01147] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Models of associative memory with discrete-strength synapses are palimpsests, learning new memories by forgetting old ones. Memory lifetimes can be defined by the mean first passage time (MFPT) for a perceptron's activation to fall below firing threshold. By imposing the condition that the vector of possible strengths available to a synapse is a left eigenvector of the stochastic matrix governing transitions in strength, we previously derived results for MFPTs and first passage time (FPT) distributions in models with simple, multistate synapses. This condition permits jump moments to be computed via a 1-dimensional Fokker-Planck approach. Here, we study memory lifetimes in the absence of this condition. To do so, we must introduce additional variables, including the perceptron activation, that parameterize synaptic configurations, permitting Markovian dynamics in these variables to be formulated. FPT problems in these variables require solving multidimensional partial differential or integral equations. However, the FPT dynamics can be analytically well approximated by focusing on the slowest eigenmode in this higher-dimensional space. We may also obtain a much better approximation by restricting to the two dominant variables in this space, the restriction making numerical methods tractable. Analytical and numerical methods are in excellent agreement with simulation data, validating our methods. These methods prepare the ground for the study of FPT memory lifetimes with complex rather than simple, multistate synapses.
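The central quantity in this analysis, a mean first-passage time (MFPT) for a Markov chain, can be computed directly for any small chain by solving the standard linear system (I - Q)t = 1 over the transient states. The sketch below is a generic illustration of that computation, not the paper's multistate synapse model:

```python
def mfpt(P, absorbing):
    """Mean first-passage time to a set of absorbing states for a
    discrete-time Markov chain with row-stochastic transition matrix P,
    obtained by solving (I - Q) t = 1 over the transient states with
    Gaussian elimination (Q is P restricted to transient states)."""
    states = [i for i in range(len(P)) if i not in absorbing]
    m = len(states)
    # Augmented system A t = b with A = I - Q and b = 1.
    A = [[(1.0 if r == c else 0.0) - P[states[r]][states[c]]
          for c in range(m)] + [1.0] for r in range(m)]
    for col in range(m):                      # forward elimination
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    t = [0.0] * m
    for r in range(m - 1, -1, -1):            # back substitution
        t[r] = (A[r][m] - sum(A[r][c] * t[c]
                              for c in range(r + 1, m))) / A[r][r]
    return {states[r]: t[r] for r in range(m)}
```

For a three-state chain in which state 2 is absorbing and the transient states exchange with probability 1/2, the solve gives t = (6, 4), matching the hand-computed recurrences t0 = 1 + 0.5 t0 + 0.5 t1 and t1 = 1 + 0.5 t0.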
Affiliation(s)
- Terry Elliott
- Department of Electronics and Computer Science, University of Southampton, Highfield, Southampton, SO17 1BJ, U.K.
37
Ursino M, Cuppini C, Cappa SF, Catricalà E. A feature-based neurocomputational model of semantic memory. Cogn Neurodyn 2018; 12:525-547. [PMID: 30483362 PMCID: PMC6233327 DOI: 10.1007/s11571-018-9494-0] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2018] [Revised: 06/19/2018] [Accepted: 06/29/2018] [Indexed: 11/25/2022] Open
Abstract
In accordance with a featural organization of semantic memory, this work investigates, through an attractor network, the role of different kinds of features in the representation of concepts, in both normal and neurodegenerative conditions. We implemented new synaptic learning rules in order to take into account the role of partially shared features and of distinctive features with different saliency. The model includes semantic and lexical layers, coding, respectively, for object features and word-forms. Connections among nodes are strongly asymmetrical. To account for feature saliency, asymmetrical synapses were created using Hebbian rules of potentiation and depotentiation, setting different pre-synaptic and post-synaptic thresholds. A variable post-synaptic threshold, which automatically changed to reflect the feature frequency in different concepts (i.e., how many concepts share a feature), was used to account for partially shared features. The trained network solved naming tasks and word recognition tasks very well, exploiting the different roles of salient versus marginal features in concept identification. In the case of damage, superordinate concepts were preserved better than subordinate ones. Interestingly, the degradation of salient features, but not of marginal ones, prevented object identification. The model suggests that Hebbian rules, with adjustable post-synaptic thresholds, can provide a reliable semantic representation of objects exploiting the statistics of input features.
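The rule class described here, Hebbian potentiation and depotentiation with distinct presynaptic and postsynaptic thresholds, can be sketched as follows. Threshold and rate values are illustrative; the paper additionally lets the postsynaptic threshold track feature frequency:

```python
def hebb_threshold_update(w, pre, post, theta_pre=0.5, theta_post=0.5, eta=0.1):
    """Hebbian rule with separate pre- and postsynaptic thresholds
    (toy sketch of the rule class, illustrative parameter values).
    The weight is potentiated only when both activities exceed their
    thresholds; an active presynaptic node paired with subthreshold
    postsynaptic activity is depotentiated; an inactive presynaptic
    node leaves the weight unchanged. Because theta_pre and theta_post
    may differ, the connections a->b and b->a trained on the same data
    end up asymmetrical."""
    if pre > theta_pre and post > theta_post:
        dw = eta * (pre - theta_pre) * (post - theta_post)   # potentiation
    elif pre > theta_pre:
        dw = -eta * (pre - theta_pre)                        # depotentiation
    else:
        dw = 0.0                                             # inactive pre
    return max(0.0, w + dw)
```

Run over feature co-occurrence statistics, a rule of this shape strengthens synapses from salient shared features while letting marginal ones decay, which is the asymmetry the abstract exploits.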
Affiliation(s)
- Mauro Ursino
- Department of Electrical, Electronic and Information Engineering, University of Bologna, Viale Risorgimento 2, 40136 Bologna, Italy
- Cristiano Cuppini
- Department of Electrical, Electronic and Information Engineering, University of Bologna, Viale Risorgimento 2, 40136 Bologna, Italy
- Stefano F. Cappa
- NEtS Center, Scuola Universitaria Superiore IUSS, Pavia, Italy
- IRCCS S. Giovanni di Dio, Brescia, Italy
38
Vadakkan KI. A potential mechanism for first-person internal sensation of memory provides evidence for the relationship between learning and LTP induction. Behav Brain Res 2018; 360:16-35. [PMID: 30502355 DOI: 10.1016/j.bbr.2018.11.038] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2018] [Revised: 11/26/2018] [Accepted: 11/26/2018] [Indexed: 12/21/2022]
Abstract
Studies conducted to verify learning-induced changes anticipated from Hebb's postulate led to the finding of long-term potentiation (LTP). Even though several correlations have been found between behavioural markers of memory retrieval and LTP, it is not known how memories are retrieved using learning-induced changes. In this context, the following non-correlated findings between learning and LTP induction provide constraints for discovering the mechanism: 1) Requirement of high stimulus intensity for LTP induction in contrast to what is expected for a learning mechanism, 2) Delay of at least 20 to 30 s from stimulation to LTP induction, in contrast to mere milliseconds for associative learning, and 3) A sudden drop in peak-potentiated effect (short-term potentiation) that matches with short-lasting changes expected during working memory and occurs only at the time of delayed LTP induction. When memories are viewed as first-person internal sensations, a newly uncovered mechanism provides explanation for the relationship between memory and LTP. This work interconnects large number of findings from the fields of neuroscience and psychology and provides a further verifiable mechanism of learning.
39
Neuhofer D, Kalivas P. Metaplasticity at the addicted tetrapartite synapse: A common denominator of drug induced adaptations and potential treatment target for addiction. Neurobiol Learn Mem 2018; 154:97-111. [PMID: 29428364 PMCID: PMC6112115 DOI: 10.1016/j.nlm.2018.02.007] [Citation(s) in RCA: 33] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2017] [Revised: 01/26/2018] [Accepted: 02/07/2018] [Indexed: 11/22/2022]
Abstract
In light of the current worldwide addiction epidemic, the need for successful therapies is more urgent than ever. Although we have made substantial progress in our basic understanding of addiction, reliable therapies are lacking. Since 40-60% of patients treated for substance use disorder return to active substance use within a year following treatment discharge, alleviating the vulnerability to relapse is regarded as the most promising avenue for addiction therapy. Preclinical addiction research often focuses on maladaptive synaptic plasticity within the reward pathway. However, drug-induced neuroadaptations do not only lead to a strengthening of distinct drug-associated cues and drug-conditioned behaviors, but also seem to increase plasticity thresholds for environmental stimuli that are not associated with the drug. This form of higher-order plasticity, or synaptic metaplasticity, is not expressed as a change in the efficacy of synaptic transmission but as a change in the direction or degree of plasticity induced by a distinct stimulation pattern. Experimental addiction research has demonstrated metaplasticity after exposure to multiple classes of addictive drugs. In this review we will focus on the concept of synaptic metaplasticity in the context of preclinical addiction research. We will take a closer look at the tetrapartite glutamatergic synapse and outline forms of metaplasticity that have been described at the addicted synapse. Finally, we will discuss the different potential avenues for pharmacotherapies that target glutamatergic synaptic plasticity and metaplasticity. Here we will argue that aberrant metaplasticity renders the reward-seeking circuitry more rigid and hence less able to adapt to changing environmental contingencies. An understanding of the molecular mechanisms that underlie this metaplasticity is crucial for the development of new strategies for addiction therapy. The correction of drug-induced metaplasticity could be used to support behavioral and pharmacotherapies for the treatment of addiction.
Affiliation(s)
- Daniela Neuhofer
- Department of Neurosciences, Medical University of South Carolina, Charleston, SC 29425, United States
- Peter Kalivas
- Department of Neurosciences, Medical University of South Carolina, Charleston, SC 29425, United States
40
Frascaroli J, Brivio S, Covi E, Spiga S. Evidence of soft bound behaviour in analogue memristive devices for neuromorphic computing. Sci Rep 2018; 8:7178. [PMID: 29740004 PMCID: PMC5940832 DOI: 10.1038/s41598-018-25376-x] [Citation(s) in RCA: 44] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2018] [Accepted: 04/11/2018] [Indexed: 12/04/2022] Open
Abstract
The development of devices that can modulate their conductance under the application of electrical stimuli constitutes a fundamental step towards the realization of synaptic connectivity in neural networks. Optimization of synaptic functionality requires the understanding of the analogue conductance update under different programming conditions. Moreover, properties of physical devices such as bounded conductance values and state-dependent modulation should be considered as they affect storage capacity and performance of the network. This work provides a study of the conductance dynamics produced by identical pulses as a function of the programming parameters in an HfO2 memristive device. The application of a phenomenological model that considers a soft approach to the conductance boundaries allows the identification of different operation regimes and to quantify conductance modulation in the analogue region. Device non-linear switching kinetics is recognized as the physical origin of the transition between different dynamics and motivates the crucial trade-off between degree of analog modulation and memory window. Different kinetics for the processes of conductance increase and decrease account for device programming asymmetry. The identification of programming trade-off together with an evaluation of device variations provide a guideline for the optimization of the analogue programming in view of hardware implementation of neural networks.
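The soft-bound behaviour at issue can be written as a one-line state-dependent update in which each identical pulse moves the conductance a fixed fraction of the remaining distance to its bound. This is a common phenomenological form; alpha and the bounds below are illustrative, not fitted device parameters:

```python
def soft_bound_step(g, g_min=0.0, g_max=1.0, alpha=0.1, depress=False):
    """One identical programming pulse under soft-bound dynamics
    (phenomenological sketch). The update is state-dependent: steps
    shrink as the conductance approaches its bound, producing a gradual
    analogue region, in contrast to hard bounds where steps stay
    constant until clipping."""
    if depress:
        return g - alpha * (g - g_min)
    return g + alpha * (g_max - g)

def pulse_train(g, n, **kw):
    """Apply n identical pulses and return the final conductance."""
    for _ in range(n):
        g = soft_bound_step(g, **kw)
    return g
```

Successive identical pulses then produce geometrically shrinking conductance steps, g_n = g_max - (g_max - g_0)(1 - alpha)^n, and the conductance approaches but never reaches its bound. Using a different alpha for the depressing branch reproduces the programming asymmetry discussed in the abstract.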
Affiliation(s)
- Jacopo Frascaroli
- Laboratorio MDM, IMM-CNR, Via C. Olivetti 2, 20864 Agrate Brianza (MB), Italy
- Stefano Brivio
- Laboratorio MDM, IMM-CNR, Via C. Olivetti 2, 20864 Agrate Brianza (MB), Italy
- Erika Covi
- Laboratorio MDM, IMM-CNR, Via C. Olivetti 2, 20864 Agrate Brianza (MB), Italy
- Sabina Spiga
- Laboratorio MDM, IMM-CNR, Via C. Olivetti 2, 20864 Agrate Brianza (MB), Italy
41
Tao CL, Liu YT, Sun R, Zhang B, Qi L, Shivakoti S, Tian CL, Zhang P, Lau PM, Zhou ZH, Bi GQ. Differentiation and Characterization of Excitatory and Inhibitory Synapses by Cryo-electron Tomography and Correlative Microscopy. J Neurosci 2018; 38:1493-1510. [PMID: 29311144 PMCID: PMC5815350 DOI: 10.1523/jneurosci.1548-17.2017] [Citation(s) in RCA: 113] [Impact Index Per Article: 18.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2017] [Revised: 12/17/2017] [Accepted: 12/24/2017] [Indexed: 11/21/2022] Open
Abstract
As key functional units in neural circuits, different types of neuronal synapses play distinct roles in brain information processing, learning, and memory. Synaptic abnormalities are believed to underlie various neurological and psychiatric disorders. Here, by combining cryo-electron tomography and cryo-correlative light and electron microscopy, we distinguished intact excitatory and inhibitory synapses of cultured hippocampal neurons, and visualized the in situ 3D organization of synaptic organelles and macromolecules in their native state. Quantitative analyses of >100 synaptic tomograms reveal that excitatory synapses contain a mesh-like postsynaptic density (PSD) with thickness ranging from 20 to 50 nm. In contrast, the PSD in inhibitory synapses assumes a thin sheet-like structure ∼12 nm from the postsynaptic membrane. On the presynaptic side, spherical synaptic vesicles (SVs) of 25-60 nm diameter and discus-shaped ellipsoidal SVs of various sizes coexist in both synaptic types, with more ellipsoidal ones in inhibitory synapses. High-resolution tomograms obtained using a Volta phase plate and electron filtering and counting reveal glutamate receptor-like and GABAA receptor-like structures that interact with putative scaffolding and adhesion molecules, reflecting details of receptor anchoring and PSD organization. These results provide an updated view of the ultrastructure of excitatory and inhibitory synapses, and demonstrate the potential of our approach to gain insight into the organizational principles of cellular architecture underlying distinct synaptic functions. SIGNIFICANCE STATEMENT To understand functional properties of neuronal synapses, it is desirable to analyze their structure at molecular resolution. We have developed an integrative approach combining cryo-electron tomography and correlative fluorescence microscopy to visualize 3D ultrastructural features of intact excitatory and inhibitory synapses in their native state. Our approach shows that inhibitory synapses contain uniform thin sheet-like postsynaptic densities (PSDs), while excitatory synapses contain previously known mesh-like PSDs. We discovered "discus-shaped" ellipsoidal synaptic vesicles, and their distributions along with regular spherical vesicles in synaptic types are characterized. High-resolution tomograms further allowed identification of putative neurotransmitter receptors and their heterogeneous interaction with synaptic scaffolding proteins. The specificity and resolution of our approach enables precise in situ analysis of ultrastructural organization underlying distinct synaptic functions.
Affiliation(s)
- Chang-Lu Tao
- National Laboratory for Physical Sciences at the Microscale
- School of Life Sciences
- Yun-Tao Liu
- National Laboratory for Physical Sciences at the Microscale
- School of Life Sciences
- Rong Sun
- National Laboratory for Physical Sciences at the Microscale
- Bin Zhang
- Chinese Academy of Sciences Key Laboratory of Brain Function and Disease
- School of Life Sciences
- Lei Qi
- Chinese Academy of Sciences Key Laboratory of Brain Function and Disease
- School of Life Sciences
- Sakar Shivakoti
- National Laboratory for Physical Sciences at the Microscale
- School of Life Sciences
- Chong-Li Tian
- Chinese Academy of Sciences Key Laboratory of Brain Function and Disease
- School of Life Sciences
- Peijun Zhang
- Division of Structural Biology, Wellcome Trust Centre for Human Genetics, University of Oxford, Oxford OX37BN, United Kingdom
- Pak-Ming Lau
- Chinese Academy of Sciences Key Laboratory of Brain Function and Disease
- School of Life Sciences
- Z Hong Zhou
- National Laboratory for Physical Sciences at the Microscale
- School of Life Sciences
- The California NanoSystems Institute
- Department of Microbiology, Immunology and Molecular Genetics, University of California, Los Angeles, Los Angeles, California 90095
- Guo-Qiang Bi
- National Laboratory for Physical Sciences at the Microscale
- School of Life Sciences
- Chinese Academy of Sciences Center for Excellence in Brain Science and Intelligence Technology, Innovation Center for Cell Signaling Network, University of Science and Technology of China, Hefei, Anhui 230026, China
42
Mehta A. Storing and retrieving long-term memories: cooperation and competition in synaptic dynamics. ADVANCES IN PHYSICS: X 2018. [DOI: 10.1080/23746149.2018.1480415] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022] Open
Affiliation(s)
- Anita Mehta
- Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany
43
Dynamical system with plastic self-organized velocity field as an alternative conceptual model of a cognitive system. Sci Rep 2017; 7:17007. [PMID: 29208976 PMCID: PMC5717027 DOI: 10.1038/s41598-017-16994-y] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2017] [Accepted: 11/20/2017] [Indexed: 01/14/2023] Open
Abstract
It is well known that architecturally the brain is a neural network, i.e. a collection of many relatively simple units coupled flexibly. However, it has been unclear how the possession of this architecture enables higher-level cognitive functions, which are unique to the brain. Here, we consider the brain from the viewpoint of dynamical systems theory and hypothesize that the unique feature of the brain, the self-organized plasticity of its architecture, could represent the means of enabling the self-organized plasticity of its velocity vector field. We propose that, conceptually, the principle of cognition could amount to the existence of appropriate rules governing self-organization of the velocity field of a dynamical system with an appropriate account of stimuli. To support this hypothesis, we propose a simple non-neuromorphic mathematical model with a plastic self-organized velocity field, which has no prototype in physical world. This system is shown to be capable of basic cognition, which is illustrated numerically and with musical data. Our conceptual model could provide an additional insight into the working principles of the brain. Moreover, hardware implementations of plastic velocity fields self-organizing according to various rules could pave the way to creating artificial intelligence of a novel type.
44
Exploring the limits of learning: Segregation of information integration and response selection is required for learning a serial reversal task. PLoS One 2017; 12:e0186959. [PMID: 29077735 PMCID: PMC5659652 DOI: 10.1371/journal.pone.0186959] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2017] [Accepted: 10/10/2017] [Indexed: 11/19/2022] Open
Abstract
Animals are proposed to learn the latent rules governing their environment in order to maximize their chances of survival. However, rules may change without notice, forcing animals to keep a memory of which one is currently at work. Rule switching can lead to situations in which the same stimulus/response pairing is positively and negatively rewarded in the long run, depending on variables that are not accessible to the animal. This fact raises questions on how neural systems are capable of reinforcement learning in environments where the reinforcement is inconsistent. Here we address this issue by asking about which aspects of connectivity, neural excitability and synaptic plasticity are key for a very general, stochastic spiking neural network model to solve a task in which rules change without being cued, taking the serial reversal task (SRT) as paradigm. Contrary to what could be expected, we found strong limitations for biologically plausible networks to solve the SRT. Especially, we proved that no network of neurons can learn a SRT if it is a single neural population that integrates stimuli information and at the same time is responsible of choosing the behavioural response. This limitation is independent of the number of neurons, neuronal dynamics or plasticity rules, and arises from the fact that plasticity is locally computed at each synapse, and that synaptic changes and neuronal activity are mutually dependent processes. We propose and characterize a spiking neural network model that solves the SRT, which relies on separating the functions of stimuli integration and response selection. The model suggests that experimental efforts to understand neural function should focus on the characterization of neural circuits according to their connectivity, neural dynamics, and the degree of modulation of synaptic plasticity with reward.
Collapse
|
45
|
Park Y, Choi W, Paik SB. Symmetry of learning rate in synaptic plasticity modulates formation of flexible and stable memories. Sci Rep 2017; 7:5671. [PMID: 28720795 PMCID: PMC5516032 DOI: 10.1038/s41598-017-05929-2] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2016] [Accepted: 06/06/2017] [Indexed: 01/06/2023] Open
Abstract
Spike-timing-dependent plasticity (STDP) is considered critical to learning and memory functions in the human brain. Across various types of synapse, STDP is observed as different profiles of Hebbian and anti-Hebbian learning rules. However, the specific roles of diverse STDP profiles in memory formation remain elusive. Here, we show that the symmetry of the learning-rate profile in STDP is crucial in determining the character of stored memory. Using computer simulations, we found that an asymmetric learning rate generates flexible memory that is volatile and easily overwritten by newly appended information. In contrast, a symmetric learning rate generates stable memory that can coexist with newly appended information. By combining these two conditions, we could realize a hybrid memory type that operates in a way intermediate between stable and flexible memory. Our results demonstrate that various attributes of memory function may originate from differences in synaptic stability.
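The asymmetric versus symmetric learning-rate profiles contrasted in this abstract can be sketched as weight-change windows. This is a generic illustration, not the authors' model; the exponential shapes, amplitudes, and time constants are assumptions chosen for clarity.

```python
import math

def asymmetric_stdp(dt, a_plus=0.05, a_minus=0.05, tau=20.0):
    """Classic antisymmetric Hebbian STDP window.

    dt = t_post - t_pre (ms): pre-before-post (dt >= 0) potentiates,
    post-before-pre (dt < 0) depresses, so reversing spike order
    flips the sign of the weight change.
    """
    if dt >= 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

def symmetric_stdp(dt, a=0.05, tau=20.0):
    """One possible symmetric window: the change depends only on |dt|,
    so pre/post spike order does not matter."""
    return a * math.exp(-abs(dt) / tau)
```

Under the asymmetric rule, replaying a pattern with reversed timing actively erases the earlier change, one intuition for why such a profile yields volatile, easily overwritten memory, whereas the symmetric rule updates identically for both orders.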
Collapse
Affiliation(s)
- Youngjin Park
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Republic of Korea
| | - Woochul Choi
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Republic of Korea; Program of Brain and Cognitive Engineering, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Republic of Korea
| | - Se-Bum Paik
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Republic of Korea; Program of Brain and Cognitive Engineering, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Republic of Korea.
| |
Collapse
|
46
|
Optimal structure of metaplasticity for adaptive learning. PLoS Comput Biol 2017; 13:e1005630. [PMID: 28658247 PMCID: PMC5509349 DOI: 10.1371/journal.pcbi.1005630] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2017] [Revised: 07/13/2017] [Accepted: 06/09/2017] [Indexed: 11/30/2022] Open
Abstract
Learning from reward feedback in a changing environment requires a high degree of adaptability, yet the precise estimation of reward information demands slow updates. In the framework of estimating reward probability, here we investigated how this tradeoff between adaptability and precision can be mitigated via metaplasticity, i.e., synaptic changes that do not always alter synaptic efficacy. Using mean-field and Monte Carlo simulations, we identified 'superior' metaplastic models that can substantially overcome the adaptability-precision tradeoff. These models achieve both adaptability and precision by forming two separate sets of meta-states: reservoirs and buffers. Synapses in reservoir meta-states do not change their efficacy upon reward feedback, whereas those in buffer meta-states can. Rapid changes in efficacy are limited to synapses occupying buffers, creating a bottleneck that reduces noise without significantly decreasing adaptability. In contrast, more-populated reservoirs can generate a strong signal without manifesting any observable plasticity. By comparing the behavior of our model and a few competing models during a dynamic probability-estimation task, we found that superior metaplastic models perform close to optimally over a wider range of model parameters. Finally, we found that metaplastic models are robust to changes in model parameters and that metaplastic transitions are crucial for adaptive learning, since replacing them with graded plastic transitions (transitions that change synaptic efficacy) reduces the ability to overcome the adaptability-precision tradeoff. Overall, our results suggest that the ubiquitous unreliability of synaptic changes evinces metaplasticity, which can provide a robust mechanism for mitigating the tradeoff between adaptability and precision and thus for adaptive learning.
Successful learning from experience and from environmental feedback requires that the reward value assigned to a given option or action be updated by a precise amount after each feedback. In the standard model of reward-based learning, known as reinforcement learning, a learning rate determines the strength of each update. A large learning rate allows fast updating of values (high adaptability) but introduces noise (low precision), whereas a small learning rate does the opposite. Learning therefore seems to be bounded by a tradeoff between adaptability and precision. Here, we asked whether there are synaptic mechanisms capable of adjusting the brain's level of plasticity according to reward statistics and thereby making the learning process adaptive. We showed that metaplasticity, changes in the synaptic state that shape future synaptic modifications without any observable change in the strength of synapses, could provide such a mechanism, and we identified the optimal structure of such metaplasticity. We propose that metaplasticity, which sometimes causes no observable change in behavior and thus could be perceived as a lack of learning, can provide a robust mechanism for adaptive learning.
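The adaptability-precision tradeoff described in this summary can be seen directly with the standard delta rule the authors refer to. A minimal sketch, with the function name, reward probabilities, and learning rates chosen as illustrative assumptions:

```python
import random

def track_reward_probability(alpha, p_blocks, trials_per_block, seed=0):
    """Delta-rule estimate of a reward probability that can switch across blocks.

    v <- v + alpha * (r - v). A large alpha adapts quickly after a
    switch but fluctuates trial to trial (noisy); a small alpha is
    precise at steady state but slow to adapt. Returns the sequence
    of estimates.
    """
    rng = random.Random(seed)
    v, estimates = 0.5, []
    for p in p_blocks:
        for _ in range(trials_per_block):
            r = 1.0 if rng.random() < p else 0.0
            v += alpha * (r - v)
            estimates.append(v)
    return estimates
```

Comparing the late-block variance of the estimate for a large versus small `alpha` on the same reward stream shows the noise cost of fast updating; metaplastic reservoir/buffer schemes aim to escape this fixed tradeoff.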
Collapse
|
47
|
Sprekeler H. Functional consequences of inhibitory plasticity: homeostasis, the excitation-inhibition balance and beyond. Curr Opin Neurobiol 2017; 43:198-203. [PMID: 28500933 DOI: 10.1016/j.conb.2017.03.014] [Citation(s) in RCA: 45] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2016] [Revised: 03/12/2017] [Accepted: 03/22/2017] [Indexed: 11/18/2022]
Abstract
Computational neuroscience has a long-standing tradition of investigating the consequences of excitatory synaptic plasticity. In contrast, the functions of inhibitory plasticity remain largely nebulous, particularly given the bewildering diversity of interneurons in the brain. Here, we review recent computational advances that offer first suggestions for the functional roles of inhibitory plasticity, such as maintenance of the excitation-inhibition balance, stabilization of recurrent network dynamics and decorrelation of sensory responses. The field is still in its infancy, but given the existing body of theory for excitatory plasticity, it is likely to mature quickly and deliver important insights into the self-organization of inhibitory circuits in the brain.
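One widely used inhibitory plasticity rule from the literature this review covers drives the postsynaptic rate toward a target, and hence toward excitation-inhibition balance. A rate-based sketch (function name, rates, and parameter values are illustrative assumptions):

```python
def inhibitory_plasticity_step(w, pre_rate, post_rate, eta=0.01, rho0=5.0):
    """One update of a homeostatic inhibitory plasticity rule (rate form).

    The inhibitory weight w grows when the postsynaptic rate exceeds
    the target rho0 (Hz) and shrinks below it, pushing the neuron
    toward the target rate; weights are kept non-negative.
    """
    return max(0.0, w + eta * pre_rate * (post_rate - rho0))
```

Iterating this step in a network clamps postsynaptic firing near `rho0`, which is the sense in which inhibitory plasticity can maintain the excitation-inhibition balance and stabilize recurrent dynamics.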
Collapse
Affiliation(s)
- Henning Sprekeler
- Department for Electrical Engineering and Computer Science, Berlin Institute of Technology, and Bernstein Center for Computational Neuroscience, Marchstr. 23, 10587 Berlin, Germany.
| |
Collapse
|
48
|
Synaptic plasticity in dendrites: complications and coping strategies. Curr Opin Neurobiol 2017; 43:177-186. [PMID: 28453975 DOI: 10.1016/j.conb.2017.03.012] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2016] [Revised: 03/20/2017] [Accepted: 03/22/2017] [Indexed: 12/15/2022]
Abstract
The elaborate morphology, nonlinear membrane mechanisms and spatiotemporally varying synaptic activation patterns of dendrites complicate the expression, compartmentalization and modulation of synaptic plasticity. To grapple with this complexity, we start with the observation that neurons in different brain areas face markedly different learning problems, and dendrites of different neuron types contribute to the cell's input-output function in markedly different ways. By committing to specific assumptions regarding a neuron's learning problem and its input-output function, specific inferences can be drawn regarding the synaptic plasticity mechanisms and outcomes that we 'ought' to expect for that neuron. Exploiting this assumption-driven approach can help both in interpreting existing experimental data and designing future experiments aimed at understanding the brain's myriad learning processes.
Collapse
|
49
|
Nguyen-Vu TB, Zhao GQ, Lahiri S, Kimpo RR, Lee H, Ganguli S, Shatz CJ, Raymond JL. A saturation hypothesis to explain both enhanced and impaired learning with enhanced plasticity. eLife 2017; 6. [PMID: 28234229 PMCID: PMC5386593 DOI: 10.7554/elife.20147] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2016] [Accepted: 02/02/2017] [Indexed: 11/19/2022] Open
Abstract
Across many studies, animals with enhanced synaptic plasticity exhibit either enhanced or impaired learning, raising a conceptual puzzle: how can enhanced plasticity yield opposite learning outcomes? Here, we show that the recent history of experience can determine whether mice with enhanced plasticity exhibit enhanced or impaired learning in response to the same training. Mice with enhanced cerebellar LTD, due to double knockout (DKO) of MHCI H2-Kb/H2-Db (KbDb−/−), exhibited oculomotor learning deficits. However, the same mice exhibited enhanced learning after appropriate pre-training. Theoretical analysis revealed that synapses with history-dependent learning rules could recapitulate the data, and suggested that saturation may be a key factor limiting the ability of enhanced plasticity to enhance learning. Optogenetic stimulation designed to saturate LTD produced the same impairment in wild-type (WT) mice as observed in DKO mice. Overall, our results suggest that the recent history of activity and the threshold for synaptic plasticity conspire to effect divergent learning outcomes. DOI:http://dx.doi.org/10.7554/eLife.20147.001 All animals can learn from their experiences. One of the main ideas for how learning occurs is that it involves changes in the strength of the connections between neurons, known as synapses. The ability of synapses to become stronger or weaker is referred to as synaptic plasticity. High levels of synaptic plasticity are generally thought to be good for learning, while low levels of synaptic plasticity make learning more difficult. Nevertheless, studies have also reported that high levels of synaptic plasticity can sometimes impair learning. To explain these mixed results, Nguyen-Vu, Zhao, Lahiri et al. studied mice that had been genetically modified to show greater synaptic plasticity than normal mice.
The same individual mutant animals were sometimes less able to learn an eye-movement task than unmodified mice, and at other times better able to learn exactly the same task. The main factor that determined how well the mice could learn was what the mice had experienced shortly before they began the training. Nguyen-Vu et al. propose that some experiences change the strength of synapses so much that they temporarily prevent those synapses from undergoing any further changes. Animals with these “saturated” synapses will struggle to learn a new task, even if their brains are normally capable of high levels of synaptic plasticity. Notably, even normal activity appears to be able to put the synapses of the mutant mice into a saturated state, whereas this saturation would only occur in normal mice under a restricted set of circumstances. Consistent with this idea, Nguyen-Vu et al. showed that a specific type of pre-training that desaturates synapses improved the ability of the modified mice to learn the eye-movement task. Conversely, a different procedure that is known to saturate synapses impaired the learning ability of the unmodified mice. A future challenge is to test these predictions experimentally by measuring changes in synaptic plasticity directly, both in brain slices and in living animals. The results could ultimately help to develop treatments that improve the ability to learn and so could provide benefits to a wide range of individuals, including people who have suffered a brain injury or stroke. DOI:http://dx.doi.org/10.7554/eLife.20147.002
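The saturation logic in this digest can be captured with a single bounded synapse: repeated depression drives the weight to its floor, after which further LTD-dependent training has no effect, while potentiating "pre-training" restores headroom. A toy sketch, not the authors' model; the function name, bounds, and step size are assumptions.

```python
def apply_plasticity(w, events, eta=0.2, w_min=0.0, w_max=1.0):
    """Bounded-weight sketch of the saturation hypothesis.

    events: sequence of +1 (LTP) / -1 (LTD) plasticity events.
    Once w sits at a bound, further events in the same direction are
    lost -- one way 'enhanced plasticity' can impair learning after
    LTD-saturating experience.
    """
    for e in events:
        w = min(w_max, max(w_min, w + eta * e))
    return w

# Heavy LTD drives the weight to the floor; later LTD does nothing.
saturated = apply_plasticity(0.4, [-1] * 10)
# LTP 'pre-training' first restores room for the same LTD training.
pretrained = apply_plasticity(apply_plasticity(0.4, [+1] * 5), [-1] * 2)
```

The same training events thus produce different outcomes depending solely on where recent history left the weight relative to its bounds, mirroring the history-dependent learning rules invoked in the paper.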
Collapse
Affiliation(s)
- Td Barbara Nguyen-Vu
- Department of Neurobiology, Stanford School of Medicine, Stanford, United States; Department of Molecular and Cellular Physiology, Stanford School of Medicine, Stanford, United States
| | - Grace Q Zhao
- Department of Neurobiology, Stanford School of Medicine, Stanford, United States
| | - Subhaneil Lahiri
- Department of Applied Physics, Stanford University, Stanford, United States
| | - Rhea R Kimpo
- Department of Neurobiology, Stanford School of Medicine, Stanford, United States
| | - Hanmi Lee
- Department of Neurobiology, Stanford School of Medicine, Stanford, United States
| | - Surya Ganguli
- Department of Neurobiology, Stanford School of Medicine, Stanford, United States; Department of Applied Physics, Stanford University, Stanford, United States
| | - Carla J Shatz
- Department of Neurobiology, Stanford School of Medicine, Stanford, United States; Department of Biology, Stanford University, Stanford, United States
| | - Jennifer L Raymond
- Department of Neurobiology, Stanford School of Medicine, Stanford, United States
| |
Collapse
|
50
|
Sun H, Li R, Xu S, Liu Z, Ma X. Hypothalamic Astrocytes Respond to Gastric Mucosal Damage Induced by Restraint Water-Immersion Stress in Rat. Front Behav Neurosci 2016; 10:210. [PMID: 27847472 PMCID: PMC5088369 DOI: 10.3389/fnbeh.2016.00210] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2016] [Accepted: 10/17/2016] [Indexed: 12/20/2022] Open
Abstract
Restraint water-immersion stress (RWIS), a compound stress model, combines psychological and physical stimulation. Studies have shown that neurons in the hypothalamus are involved in RWIS, but the role of astrocytes, and the interactions between astrocytes and neurons, in RWIS remain unclear. Here, we tested the hypothesis that hypothalamic astrocytes are involved in RWIS and interact with neurons to regulate the gastric mucosal damage it induces. The expression of glial fibrillary acidic protein (GFAP) and c-Fos in the paraventricular nucleus (PVN) and supraoptic nucleus (SON) increased significantly following RWIS. GFAP and c-Fos expression followed a similar temporal pattern: both peaked at 1 h after RWIS, then gradually declined, and reached a second maximum at 5 h, showing a "double-peak" profile. Intracerebroventricular administration of either the astroglial toxin L-α-aminoadipate (L-AA) or c-Fos antisense oligodeoxynucleotides (ASO) decreased RWIS-induced gastric mucosal damage. Immunohistochemistry revealed that both L-AA and ASO decreased the RWIS-induced activation of astrocytes and neurons in the hypothalamus. These results indicate that a hypothalamic neuron-astrocyte network is involved in the gastric mucosal damage induced by RWIS. This study may offer a theoretical basis for novel therapeutic strategies against RWIS-induced gastric ulcers.
Collapse
Affiliation(s)
- Haiji Sun
- College of Life Science, Shandong Normal University, Jinan, China
| | - Ruisheng Li
- Research Center for Clinical and Translational Medicine, 302 Hospital of PLA, Beijing, China
| | - Shiguo Xu
- College of Life Science, Shandong Normal University, Jinan, China
| | - Zhen Liu
- College of Life Science, Shandong Normal University, Jinan, China
| | - Xiaoli Ma
- Central Laboratory, Jinan Central Hospital Affiliated to Shandong University, Jinan, China
| |
Collapse
|