1. Lu Y, Wu S. Learning sequence attractors in recurrent networks with hidden neurons. Neural Netw 2024; 178:106466. [PMID: 38968778] [DOI: 10.1016/j.neunet.2024.106466]
Abstract
The brain is specialized for processing temporal sequence information, yet it remains largely unclear how it learns to store and retrieve sequence memories. Here, we study how recurrent networks of binary neurons learn sequence attractors to store predefined pattern sequences and retrieve them robustly. We show that, to store arbitrary pattern sequences, the network must include hidden neurons, even though their role in displaying sequence memories is indirect. We develop a local learning algorithm to learn sequence attractors in networks with hidden neurons. The algorithm is proven to converge and to yield sequence attractors. We demonstrate that the network model can store and retrieve sequences robustly on synthetic and real-world datasets. We hope that this study provides new insights into sequence memory and temporal information processing in the brain.
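For orientation, a minimal sketch of the classic visible-only construction for sequence attractors, in which an asymmetric Hebbian rule wires each stored pattern to drive its successor (the pattern statistics, sizes, and storage rule below are illustrative assumptions, not the authors' hidden-neuron algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 8                                # neurons, sequence length
xi = rng.choice([-1, 1], size=(P, N))        # random binary patterns

# Asymmetric Hebbian storage: pattern t drives pattern t+1.
W = sum(np.outer(xi[t + 1], xi[t]) for t in range(P - 1)) / N

s = xi[0].copy()                             # cue the first pattern
for t in range(1, P):
    s = np.sign(W @ s)                       # synchronous binary update
    s[s == 0] = 1
    print(f"step {t}: overlap with pattern {t} = {s @ xi[t] / N:.2f}")
```

For random, uncorrelated patterns this replays the sequence with overlaps near 1; the paper's point is that visible-only rules of this kind fail for arbitrary (e.g., correlated) sequences, which is what makes hidden neurons necessary.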
Affiliation(s)
- Yao Lu
- School of Psychological and Cognitive Sciences, IDG/McGovern Institute for Brain Research, Beijing Key Laboratory of Behavior and Mental Health, Peking-Tsinghua Center for Life Sciences, Center of Quantitative Biology, Academy for Advanced Interdisciplinary Studies, Peking University, China
- Si Wu
- School of Psychological and Cognitive Sciences, IDG/McGovern Institute for Brain Research, Beijing Key Laboratory of Behavior and Mental Health, Peking-Tsinghua Center for Life Sciences, Center of Quantitative Biology, Academy for Advanced Interdisciplinary Studies, Peking University, China

2. Jauch J, Becker M, Tetzlaff C, Fauth MJ. Differences in the consolidation by spontaneous and evoked ripples in the presence of active dendrites. PLoS Comput Biol 2024; 20:e1012218. [PMID: 38917228] [PMCID: PMC11230591] [DOI: 10.1371/journal.pcbi.1012218]
Abstract
Ripples are a typical form of neural activity in hippocampal neural networks, associated with the replay of episodic memories during sleep as well as with sleep-related plasticity and memory consolidation. The emergence of ripples has been observed both dependent on and independent of input from other brain areas, and it often coincides with dendritic spikes. Yet, it is unclear how input-evoked and spontaneous ripples, as well as dendritic excitability, affect plasticity and consolidation. Here, we use mathematical modeling to compare these cases. We find that consolidation, as well as the emergence of spontaneous ripples, depends on a reliable propagation of activity in the feed-forward structures which constitute memory representations. This propagation is facilitated by excitable dendrites, which entail that a few strong synapses are sufficient to trigger neuronal firing. In this situation, stimulation-evoked ripples lead to the potentiation of weak synapses within the feed-forward structure and, thus, to the consolidation of a more general sequence memory. However, spontaneous ripples that occur without stimulation only consolidate a sparse backbone of the existing strong feed-forward structure. Based on this, we test a recently hypothesized scenario in which the excitability of dendrites is transiently enhanced after learning, and show that such a transient increase can strengthen, restructure, and consolidate even weak hippocampal memories, which would otherwise be forgotten. Hence, a transient increase in dendritic excitability would indeed provide a mechanism for stabilizing memories.
Affiliation(s)
- Jannik Jauch
- Third Institute for Physics, Georg-August-University, Göttingen, Germany
- Moritz Becker
- Group of Computational Synaptic Physiology, Department for Neuro- and Sensory Physiology, University Medical Center Göttingen, Göttingen, Germany
- Christian Tetzlaff
- Group of Computational Synaptic Physiology, Department for Neuro- and Sensory Physiology, University Medical Center Göttingen, Göttingen, Germany
- Michael Jan Fauth
- Third Institute for Physics, Georg-August-University, Göttingen, Germany

3. Vignoud G, Venance L, Touboul JD. Anti-Hebbian plasticity drives sequence learning in striatum. Commun Biol 2024; 7:555. [PMID: 38724614] [PMCID: PMC11082161] [DOI: 10.1038/s42003-024-06203-8]
Abstract
Spatio-temporal activity patterns have been observed in a variety of brain areas in spontaneous activity, prior to or during action, or in response to stimuli. The biological mechanisms endowing neurons with the ability to distinguish between different sequences remain largely unknown. Learning sequences of spikes raises multiple challenges, such as maintaining spike history in memory and discriminating partially overlapping sequences. Here, we show that anti-Hebbian spike-timing-dependent plasticity (STDP), as observed at cortico-striatal synapses, can naturally lead to learning spike sequences. We design a spiking model of the striatal output neuron receiving spike patterns defined as sequential input from a fixed set of cortical neurons. We use a simple synaptic plasticity rule that combines anti-Hebbian STDP and non-associative potentiation for a subset of the presented patterns, called rewarded patterns. We study the ability of striatal output neurons to discriminate rewarded from non-rewarded patterns by firing only after the presentation of a rewarded pattern. In particular, we show that two biological properties of striatal networks, spiking latency and collateral inhibition, contribute to an increase in accuracy by allowing better discrimination of partially overlapping sequences. These results suggest that anti-Hebbian STDP may serve as a biological substrate for learning sequences of spikes.
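To fix the sign convention, a minimal sketch of an anti-Hebbian pair-based STDP kernel (the amplitudes and time constants are illustrative assumptions, not the paper's fitted cortico-striatal values): causal pre-before-post pairings depress the synapse, and the reverse order potentiates it.

```python
import numpy as np

def anti_hebbian_stdp(dt, a_plus=0.01, a_minus=0.01, tau=20.0):
    """Weight change for spike-time difference dt = t_post - t_pre (ms).

    The sign convention is reversed relative to Hebbian STDP:
    pre-before-post (dt > 0) depresses, post-before-pre potentiates.
    Amplitudes and time constant are illustrative, not fitted values.
    """
    return np.where(dt > 0,
                    -a_plus * np.exp(-dt / tau),
                     a_minus * np.exp(dt / tau))

dts = np.array([-40.0, -10.0, 10.0, 40.0])
print(anti_hebbian_stdp(dts))   # positive for dt < 0, negative for dt > 0
```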
Affiliation(s)
- Gaëtan Vignoud
- Center for Interdisciplinary Research in Biology (CIRB), College de France, CNRS, INSERM, Université PSL, Paris, France
- Laurent Venance
- Center for Interdisciplinary Research in Biology (CIRB), College de France, CNRS, INSERM, Université PSL, Paris, France
- Jonathan D Touboul
- Department of Mathematics and Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA

4. Soldado-Magraner S, Buonomano DV. Neural Sequences and the Encoding of Time. Adv Exp Med Biol 2024; 1455:81-93. [PMID: 38918347] [DOI: 10.1007/978-3-031-60183-5_5]
Abstract
Converging experimental and computational evidence indicates that, on the scale of seconds, the brain encodes time through changing patterns of neural activity. Experimentally, two general forms of neural dynamic regimes that can encode time have been observed: neural population clocks and ramping activity. Neural population clocks provide a high-dimensional code to generate complex spatiotemporal output patterns, in which each neuron exhibits a nonlinear temporal profile. A prototypical example of a neural population clock is the neural sequence, which has been observed across species, brain areas, and behavioral paradigms. Additionally, neural sequences emerge in artificial neural networks trained to solve time-dependent tasks. Here, we examine the role of neural sequences in the encoding of time, and how they may emerge in a biologically plausible manner. We conclude that neural sequences may represent a canonical computational regime for performing temporal computations.
Affiliation(s)
- Dean V Buonomano
- Department of Neurobiology, University of California, Los Angeles, Los Angeles, CA, USA.
- Department of Psychology, University of California, Los Angeles, Los Angeles, CA, USA.

5. Li PY, Roxin A. Rapid memory encoding in a recurrent network model with behavioral time scale synaptic plasticity. PLoS Comput Biol 2023; 19:e1011139. [PMID: 37624848] [PMCID: PMC10484462] [DOI: 10.1371/journal.pcbi.1011139]
Abstract
Episodic memories are formed after a single exposure to novel stimuli. The plasticity mechanisms underlying such fast learning remain largely unknown. Recently, it was shown that cells in area CA1 of the hippocampus of mice could form or shift their place fields after a single traversal of a virtual linear track. In vivo intracellular recordings in CA1 cells revealed that previously silent inputs from CA3 could be switched on when they occurred within a few seconds of a dendritic plateau potential (PP) in the post-synaptic cell, a phenomenon dubbed behavioral time-scale plasticity (BTSP). A recently developed computational framework for BTSP, in which the dynamics of synaptic traces related to the pre-synaptic activity and the post-synaptic PP are explicitly modelled, can account for these experimental findings. Here we show that this model of plasticity can be further simplified to a 1D map which describes the changes to the synaptic weights after a single trial. We use a temporally symmetric version of this map to study the storage of a large number of spatial memories in a recurrent network, such as CA3. Specifically, the simplicity of the map allows us to calculate the correlation of the synaptic weight matrix with any given past environment analytically. We show that the calculated memory trace can be used to predict the emergence and stability of bump attractors in a high-dimensional neural network model endowed with BTSP.
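The key ingredient of BTSP-style models is that both the presynaptic eligibility trace and the plateau trace decay over seconds, so a single pairing can associate events far outside the classical STDP window. A cartoon of that trace overlap (the time constants, field shape, and product rule below are illustrative assumptions, not the paper's exact 1D map):

```python
import numpy as np

dt = 0.01                                        # time step (s)
t = np.arange(0, 10, dt)                         # one 10 s lap
tau_e, tau_p = 1.5, 1.0                          # trace time constants (s)

pre = np.exp(-0.5 * ((t - 3.0) / 0.3) ** 2)      # presyn. place-field input
plateau = ((t > 5.0) & (t < 5.3)).astype(float)  # dendritic plateau at 5 s

e_trace = np.zeros_like(t)
p_trace = np.zeros_like(t)
for k in range(1, len(t)):
    e_trace[k] = e_trace[k-1] + dt * (-e_trace[k-1] / tau_e + pre[k])
    p_trace[k] = p_trace[k-1] + dt * (-p_trace[k-1] / tau_p + plateau[k])

# One-shot weight change: overlap of the two slowly decaying traces.
dw = dt * np.sum(e_trace * p_trace)
print(f"single-trial weight change ~ {dw:.3f}")  # nonzero despite a ~2 s gap
```

Both traces outlive the events that caused them, which is how a single pairing can bind an input to a plateau occurring seconds later.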
Affiliation(s)
- Pan Ye Li
- Centre de Recerca Matemàtica, Barcelona, Spain
- Alex Roxin
- Centre de Recerca Matemàtica, Barcelona, Spain

6. Xue X, Wimmer RD, Halassa MM, Chen ZS. Spiking Recurrent Neural Networks Represent Task-Relevant Neural Sequences in Rule-Dependent Computation. Cognit Comput 2023; 15:1167-1189. [PMID: 37771569] [PMCID: PMC10530699] [DOI: 10.1007/s12559-022-09994-2]
Abstract
Background Prefrontal cortical neurons play essential roles in performing rule-dependent tasks and working-memory-based decision making. Methods Motivated by PFC recordings of task-performing mice, we developed an excitatory-inhibitory spiking recurrent neural network (SRNN) to perform a rule-dependent two-alternative forced choice (2AFC) task. We imposed several important biological constraints onto the SRNN, and adapted spike frequency adaptation (SFA) and the SuperSpike gradient method to train the SRNN efficiently. Results The trained SRNN produced emergent rule-specific tunings in single-unit representations, showing rule-dependent population dynamics that resembled experimentally observed data. Under varying test conditions, we manipulated the SRNN parameters or configuration in computer simulations, and we investigated the impacts of rule-coding error, delay duration, recurrent weight connectivity and sparsity, and excitation/inhibition (E/I) balance on both task performance and neural representations. Conclusions Overall, our modeling study provides a computational framework for understanding neuronal representations at a fine timescale during working memory and cognitive control, and it provides new experimentally testable hypotheses for future experiments.
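SuperSpike-style training replaces the derivative of the non-differentiable spike threshold with a smooth surrogate during backpropagation. A minimal PyTorch sketch of that idea (the fast-sigmoid surrogate follows Zenke and Ganguli's SuperSpike; the beta value and usage here are illustrative assumptions, not this paper's exact training setup):

```python
import torch

class SuperSpike(torch.autograd.Function):
    """Heaviside spike with a SuperSpike-style surrogate gradient.

    Forward emits binary spikes; backward substitutes the fast-sigmoid
    derivative 1 / (beta * |u| + 1)^2 so the threshold becomes trainable
    by gradient descent. beta is an illustrative choice.
    """
    beta = 10.0

    @staticmethod
    def forward(ctx, u):
        ctx.save_for_backward(u)
        return (u > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (u,) = ctx.saved_tensors
        surrogate = 1.0 / (SuperSpike.beta * u.abs() + 1.0) ** 2
        return grad_output * surrogate

u = torch.randn(5, requires_grad=True)   # membrane potentials around threshold
spikes = SuperSpike.apply(u)
spikes.sum().backward()
print(spikes, u.grad)                    # binary spikes, smooth gradients
```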
Affiliation(s)
- Xiaohe Xue
- Courant Institute of Mathematical Sciences, New York University, New York, NY, USA
- Ralf D. Wimmer
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Michael M. Halassa
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Zhe Sage Chen
- Department of Psychiatry, New York University School of Medicine, New York, NY, USA
- Department of Neuroscience and Physiology, New York University School of Medicine, New York, NY, USA
- Neuroscience Institute, New York University School of Medicine, New York, NY, USA

7. Goldt S, Krzakala F, Zdeborová L, Brunel N. Bayesian reconstruction of memories stored in neural networks from their connectivity. PLoS Comput Biol 2023; 19:e1010813. [PMID: 36716332] [PMCID: PMC9910750] [DOI: 10.1371/journal.pcbi.1010813]
Abstract
The advent of comprehensive synaptic wiring diagrams of large neural circuits has created the field of connectomics and given rise to a number of open research questions. One such question is whether it is possible to reconstruct the information stored in a recurrent network of neurons, given its synaptic connectivity matrix. Here, we address this question by determining when solving such an inference problem is theoretically possible in specific attractor network models and by providing a practical algorithm to do so. The algorithm builds on ideas from statistical physics to perform approximate Bayesian inference and is amenable to exact analysis. We study its performance on three different models, compare the algorithm to standard algorithms such as PCA, and explore the limitations of reconstructing stored patterns from synaptic connectivity.
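As a point of intuition for the PCA comparison, consider the simplest case: a Hopfield-type connectivity matrix built from a few random patterns (sizes and storage rule below are illustrative assumptions, not the paper's models). The leading eigenvectors recover the stored subspace, but typically mix the patterns, and it is exactly this kind of limitation that a Bayesian inference approach can improve on.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 400, 3
xi = rng.choice([-1, 1], size=(P, N))     # stored patterns
W = xi.T @ xi / N                         # Hopfield-type connectivity

vals, vecs = np.linalg.eigh(W)            # eigenvalues in ascending order
for k in range(1, P + 1):
    est = np.sign(vecs[:, -k])            # binarize a leading eigenvector
    overlaps = np.abs(xi @ est) / N       # compare with each stored pattern
    print(f"component {k}: overlaps with patterns = {np.round(overlaps, 2)}")
```

Because the top eigenvalues are nearly degenerate, individual components often come out as mixtures of stored patterns (several overlaps well below 1), even though together they span the right subspace.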
Affiliation(s)
- Sebastian Goldt
- International School of Advanced Studies (SISSA), Trieste, Italy
- Florent Krzakala
- IdePHICS laboratory, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland
- Lenka Zdeborová
- SPOC laboratory, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland
- Nicolas Brunel
- Department of Neurobiology, Duke University, Durham, North Carolina, United States of America
- Department of Physics, Duke University, Durham, North Carolina, United States of America

8. Botha AE, Ansariara M, Emadi S, Kolahchi MR. Chimera Patterns of Synchrony in a Frustrated Array of Hebb Synapses. Front Comput Neurosci 2022; 16:888019. [PMID: 35814347] [PMCID: PMC9260432] [DOI: 10.3389/fncom.2022.888019]
Abstract
The union of the Kuramoto–Sakaguchi model and Hebb dynamics reproduces the Lisman switch through a bistability in synchronized states. Here, we show that, within certain ranges of the frustration parameter, a chimera pattern can emerge, causing a different, time-evolving distribution of the Hebbian synaptic strengths. We study the stability range of the chimera as a function of the frustration (phase-lag) parameter. Depending on the range of the frustration, two different types of chimeras can appear spontaneously, i.e., from randomized initial conditions. In the first type, the oscillators in the coherent region rotate, on average, more slowly than those in the incoherent region; in the second type, the average rotational frequencies of the two regions are reversed, i.e., the coherent region runs, on average, faster than the incoherent region. We also show that non-stationary behavior at finite N can be controlled by adjusting the natural frequency of a single pacemaker oscillator. By slowly cycling the frequency of the pacemaker, we observe hysteresis in the system. Finally, we discuss how this system can serve as a model for learning and memory.
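A minimal sketch of the underlying equations, assuming the commonly used form dθ_i/dt = ω_i + (1/N) Σ_j k_ij sin(θ_j − θ_i − α) with Hebb-like coupling adaptation dk_ij/dt = ε[cos(θ_j − θ_i) − k_ij] (the parameter values are illustrative, not the paper's, and whether a chimera appears depends on α and the initial conditions):

```python
import numpy as np

rng = np.random.default_rng(2)
N, alpha, eps, dt = 50, 1.3, 0.1, 0.01     # alpha: frustration (phase lag)
theta = rng.uniform(0.0, 2.0 * np.pi, N)
omega = np.zeros(N)                        # identical natural frequencies
k = rng.uniform(-1.0, 1.0, (N, N))         # plastic coupling strengths

for _ in range(20000):
    diff = theta[None, :] - theta[:, None]          # theta_j - theta_i
    theta += dt * (omega + (k * np.sin(diff - alpha)).mean(axis=1))
    k += dt * eps * (np.cos(diff) - k)              # Hebb-like adaptation

r = np.abs(np.exp(1j * theta).mean())
print(f"global order parameter r = {r:.2f}")        # r < 1 signals partial
                                                    # (chimera-like) coherence
```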
Affiliation(s)
- A. E. Botha
- Department of Physics, Science Campus, University of South Africa, Private Bag X6, Johannesburg, South Africa
- M. Ansariara
- Department of Physics, Institute for Advanced Studies in Basic Sciences, Zanjan, Iran
- S. Emadi
- Department of Biological Sciences, Institute for Advanced Studies in Basic Sciences, Zanjan, Iran
- M. R. Kolahchi
- Department of Physics, Institute for Advanced Studies in Basic Sciences, Zanjan, Iran

9. Self-healing codes: How stable neural populations can track continually reconfiguring neural representations. Proc Natl Acad Sci U S A 2022; 119:e2106692119. [PMID: 35145024] [PMCID: PMC8851551] [DOI: 10.1073/pnas.2106692119]
Abstract
The brain is capable of adapting while maintaining stable long-term memories and learned skills. Recent experiments show that neural responses are highly plastic in some circuits, while other circuits maintain consistent responses over time, raising the question of how these circuits interact coherently. We show how simple, biologically motivated Hebbian and homeostatic mechanisms in single neurons can allow circuits with fixed responses to continuously track a plastic, changing representation without reference to an external learning signal.

As an adaptive system, the brain must retain a faithful representation of the world while continuously integrating new information. Recent experiments have measured population activity in cortical and hippocampal circuits over many days and found that patterns of neural activity associated with fixed behavioral variables and percepts change dramatically over time. Such “representational drift” raises the question of how malleable population codes can interact coherently with stable long-term representations that are found in other circuits and with relatively rigid topographic mappings of peripheral sensory and motor signals. We explore how known plasticity mechanisms can allow single neurons to reliably read out an evolving population code without external error feedback. We find that interactions between Hebbian learning and single-cell homeostasis can exploit redundancy in a distributed population code to compensate for gradual changes in tuning. Recurrent feedback of partially stabilized readouts could allow a pool of readout cells to further correct inconsistencies introduced by representational drift. This shows how relatively simple, known mechanisms can stabilize neural tuning in the short term and provides a plausible explanation for how plastic neural codes remain integrated with consolidated, long-term representations.
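The core claim, that Hebbian learning plus single-cell homeostasis can track a drifting code without external error feedback, can be illustrated with an Oja-rule readout following a slowly rotating encoding direction (a toy demonstration under assumed drift and learning rates, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(3)
N, eta, drift = 100, 0.05, 0.002
e = rng.standard_normal(N); e /= np.linalg.norm(e)   # drifting encoding axis
w = rng.standard_normal(N); w /= np.linalg.norm(w)   # readout weights

for step in range(5000):
    e += drift * rng.standard_normal(N)              # representational drift
    e /= np.linalg.norm(e)
    x = rng.standard_normal()                        # latent variable
    r = e * x + 0.1 * rng.standard_normal(N)         # population activity
    y = w @ r                                        # readout
    w += eta * y * (r - y * w)                       # Oja: Hebb + homeostasis

print(f"readout alignment |w.e| = {abs(w @ e):.2f}")  # stays near 1
```

Because the homeostatic term keeps the readout normalized while the Hebbian term chases the signal, the readout exploits the code's redundancy to follow the drift, provided the drift is slow relative to learning.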

10. Calderon CB, Verguts T, Frank MJ. Thunderstruck: The ACDC model of flexible sequences and rhythms in recurrent neural circuits. PLoS Comput Biol 2022; 18:e1009854. [PMID: 35108283] [PMCID: PMC8843237] [DOI: 10.1371/journal.pcbi.1009854]
Abstract
Adaptive sequential behavior is a hallmark of human cognition. In particular, humans can learn to produce precise spatiotemporal sequences given a certain context. For instance, musicians can not only reproduce learned action sequences in a context-dependent manner, they can also quickly and flexibly reapply them in any desired tempo or rhythm without overwriting previous learning. Existing neural network models fail to account for these properties. We argue that this limitation emerges from the fact that sequence information (i.e., the position of the action) and timing (i.e., the moment of response execution) are typically stored in the same neural network weights. Here, we augment a biologically plausible recurrent neural network of cortical dynamics to include a basal ganglia-thalamic module which uses reinforcement learning to dynamically modulate action. This “associative cluster-dependent chain” (ACDC) model modularly stores sequence and timing information in distinct loci of the network. This feature increases computational power and allows ACDC to display a wide range of temporal properties (e.g., multiple sequences, temporal shifting, rescaling, and compositionality), while still accounting for several behavioral and neurophysiological empirical observations. Finally, we apply this ACDC network to show how it can learn the famous “Thunderstruck” song intro and then flexibly play it in a “bossa nova” rhythm without further training.

How do humans flexibly adapt action sequences? For instance, musicians can learn a song and quickly speed up or slow down the tempo, or even play the song following a completely different rhythm (e.g., a rock song using a bossa nova rhythm). In this work, we build a biologically plausible network of cortico-basal ganglia interactions that explains how this temporal flexibility may emerge in the brain. Crucially, our model factorizes sequence order and action timing, respectively represented in cortical and basal ganglia dynamics. This factorization allows full temporal flexibility, i.e., the timing of a learned action sequence can be recomposed without interfering with the order of the sequence. As such, our model is capable of learning asynchronous action sequences, and flexibly shift, rescale, and recompose them, while accounting for biological data.
Affiliation(s)
- Cristian Buc Calderon
- Department of Cognitive, Linguistic & Psychological Sciences, Brown University, Providence, Rhode Island, United States of America
- Department of Experimental Psychology, Ghent University, Ghent, Belgium
- Carney Institute for Brain Science, Brown University, Providence, Rhode Island, United States of America
- Tom Verguts
- Department of Experimental Psychology, Ghent University, Ghent, Belgium
- Michael J. Frank
- Department of Cognitive, Linguistic & Psychological Sciences, Brown University, Providence, Rhode Island, United States of America
- Carney Institute for Brain Science, Brown University, Providence, Rhode Island, United States of America

11. Metastable attractors explain the variable timing of stable behavioral action sequences. Neuron 2022; 110:139-153.e9. [PMID: 34717794] [PMCID: PMC9194601] [DOI: 10.1016/j.neuron.2021.10.011]
Abstract
The timing of self-initiated actions shows large variability even when they are executed in stable, well-learned sequences. Could this mix of reliability and stochasticity arise within the same neural circuit? We trained rats to perform a stereotyped sequence of self-initiated actions and recorded neural ensemble activity in secondary motor cortex (M2), which is known to reflect trial-by-trial action-timing fluctuations. Using hidden Markov models, we established a dictionary between activity patterns and actions. We then showed that metastable attractors, representing activity patterns with a reliable sequential structure and large transition timing variability, could be produced by reciprocally coupling a high-dimensional recurrent network and a low-dimensional feedforward one. Transitions between attractors relied on correlated variability in this mesoscale feedback loop, predicting a specific structure of low-dimensional correlations that were empirically verified in M2 recordings. Our results suggest a novel mesoscale network motif based on correlated variability supporting naturalistic animal behavior.

12. Wu YK, Zenke F. Nonlinear transient amplification in recurrent neural networks with short-term plasticity. eLife 2021; 10:e71263. [PMID: 34895468] [PMCID: PMC8820736] [DOI: 10.7554/eLife.71263]
Abstract
To rapidly process information, neural circuits have to amplify specific activity patterns transiently. How the brain performs this nonlinear operation remains elusive. Hebbian assemblies are one possibility whereby strong recurrent excitatory connections boost neuronal activity. However, such Hebbian amplification is often associated with dynamical slowing of network dynamics, non-transient attractor states, and pathological run-away activity. Feedback inhibition can alleviate these effects but typically linearizes responses and reduces amplification gain. Here, we study nonlinear transient amplification (NTA), a plausible alternative mechanism that reconciles strong recurrent excitation with rapid amplification while avoiding the above issues. NTA has two distinct temporal phases. Initially, positive feedback excitation selectively amplifies inputs that exceed a critical threshold. Subsequently, short-term plasticity quenches the run-away dynamics into an inhibition-stabilized network state. By characterizing NTA in supralinear network models, we establish that the resulting onset transients are stimulus selective and well-suited for speedy information processing. Further, we find that excitatory-inhibitory co-tuning widens the parameter regime in which NTA is possible in the absence of persistent activity. In summary, NTA provides a parsimonious explanation for how excitatory-inhibitory co-tuning and short-term plasticity collaborate in recurrent networks to achieve transient amplification.
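A one-dimensional cartoon of the two phases (a threshold-linear unit with depressing recurrent excitation; all parameters are illustrative assumptions, and the paper's networks use supralinear units with feedback inhibition): input above the firing threshold ignites explosive recurrent amplification, which short-term depression then quenches into a stable lower-rate state, while subthreshold input is not amplified at all.

```python
import numpy as np

dt, T = 1e-4, 1.0
t = np.arange(0, T, dt)
tau_r, tau_d, U = 0.02, 0.3, 0.2      # rate and depression time constants
w, theta = 2.0, 0.5                   # recurrent gain, firing threshold
stim = np.where((t > 0.2) & (t < 0.7), 1.0, 0.1)  # 0.1 stays subthreshold

r, x = 0.0, 1.0
rates = np.empty_like(t)
for k in range(len(t)):
    drive = w * x * r + stim[k] - theta
    r += dt / tau_r * (-r + max(drive, 0.0))   # threshold-linear unit
    x += dt * ((1.0 - x) / tau_d - U * x * r)  # short-term depression
    rates[k] = r

onset = rates[(t > 0.2) & (t < 0.45)].max()    # transient amplification
late = rates[(t > 0.6) & (t < 0.7)].mean()     # quenched steady state
print(f"onset peak = {onset:.1f}, steady state = {late:.1f}")
```

The baseline input never crosses threshold, so nothing is amplified until the stimulus arrives; after ignition, depression lowers the effective recurrent gain below 1 and stabilizes the response.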
Affiliation(s)
- Yue Kris Wu
- Friedrich Miescher Institute for Biomedical Research, Basel, Switzerland
- Friedemann Zenke
- Friedrich Miescher Institute for Biomedical Research, Basel, Switzerland

13. Drifting assemblies for persistent memory: Neuron transitions and unsupervised compensation. Proc Natl Acad Sci U S A 2021; 118:e2023832118. [PMID: 34772802] [DOI: 10.1073/pnas.2023832118]
Abstract
Change is ubiquitous in living beings. In particular, the connectome and neural representations can change. Nevertheless, behaviors and memories often persist over long times. In a standard model, associative memories are represented by assemblies of strongly interconnected neurons. For faithful storage these assemblies are assumed to consist of the same neurons over time. Here we propose a contrasting memory model with complete temporal remodeling of assemblies, based on experimentally observed changes of synapses and neural representations. The assemblies drift freely as noisy autonomous network activity and spontaneous synaptic turnover induce neuron exchange. The gradual exchange allows activity-dependent and homeostatic plasticity to conserve the representational structure and keep inputs, outputs, and assemblies consistent. This leads to persistent memory. Our findings explain recent experimental results on temporal evolution of fear memory representations and suggest that memory systems need to be understood in their completeness as individual parts may constantly change.

14. Rajakumar A, Rinzel J, Chen ZS. Stimulus-Driven and Spontaneous Dynamics in Excitatory-Inhibitory Recurrent Neural Networks for Sequence Representation. Neural Comput 2021; 33:2603-2645. [PMID: 34530451] [PMCID: PMC8750453] [DOI: 10.1162/neco_a_01418]
Abstract
Recurrent neural networks (RNNs) have been widely used to model the sequential neural dynamics ("neural sequences") of cortical circuits in cognitive and motor tasks. Efforts to incorporate biological constraints and Dale's principle help elucidate the neural representations and mechanisms of the underlying circuits. We trained an excitatory-inhibitory RNN to learn neural sequences in a supervised manner and studied the representations and dynamic attractors of the trained network. The trained RNN robustly triggered the sequence in response to various input signals and interpolated a time-warped input for sequence representation. Interestingly, a learned sequence can repeat periodically when the RNN evolves beyond the duration of a single sequence. The eigenspectrum of the learned recurrent connectivity matrix, with growing or damping modes, together with the RNN's nonlinearity, was sufficient to generate a limit-cycle attractor. We further examined the stability of dynamic attractors while training the RNN to learn two sequences. Together, our results provide a general framework for understanding neural sequence representation in excitatory-inhibitory RNNs.
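One standard way to build the excitatory-inhibitory constraint into a trainable RNN, in the spirit of Dale's principle, is to learn unconstrained magnitudes and multiply each presynaptic column by a fixed sign (a minimal sketch; the sizes and rectification scheme are illustrative assumptions, not necessarily the authors' exact parametrization):

```python
import torch

n_e, n_i = 80, 20                          # excitatory / inhibitory units
n = n_e + n_i
sign = torch.ones(n); sign[n_e:] = -1.0    # fixed sign per presynaptic unit

A = torch.randn(n, n, requires_grad=True)  # unconstrained trainable weights

def recurrent_weights(A):
    # Magnitudes are learned; each column keeps its E or I sign.
    return torch.relu(A) * sign            # broadcasts over columns

tau, dt = 0.05, 0.01
W = recurrent_weights(A)
r = torch.rand(n)
r = r + dt / tau * (-r + torch.relu(W @ r))    # one Euler step of a rate RNN
print(bool((W[:, :n_e] >= 0).all()), bool((W[:, n_e:] <= 0).all()))
```

Rectifying the magnitudes keeps every unit purely excitatory or purely inhibitory throughout training, no matter how the gradients move A.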
Affiliation(s)
- Alfred Rajakumar
- Courant Institute of Mathematical Sciences, New York University, New York, NY 10012, USA
- John Rinzel
- Courant Institute of Mathematical Sciences and Center for Neural Science, New York University, New York, NY 10012, USA
- Zhe S Chen
- Department of Psychiatry and Neuroscience Institute, New York University School of Medicine, New York, NY 10016, USA

15. Spalla D, Cornacchia IM, Treves A. Continuous attractors for dynamic memories. eLife 2021; 10:e69499. [PMID: 34520345] [PMCID: PMC8439658] [DOI: 10.7554/eLife.69499]
Abstract
Episodic memory has a dynamic nature: when we recall past episodes, we retrieve not only their content, but also their temporal structure. The phenomenon of replay, in the hippocampus of mammals, offers a remarkable example of this temporal dynamics. However, most quantitative models of memory treat memories as static configurations, neglecting the temporal unfolding of the retrieval process. Here, we introduce a continuous attractor network model with a memory-dependent asymmetric component in the synaptic connectivity, which spontaneously breaks the equilibrium of the memory configurations and produces dynamic retrieval. The detailed analysis of the model with analytical calculations and numerical simulations shows that it can robustly retrieve multiple dynamical memories, and that this feature is largely independent of the details of its implementation. By calculating the storage capacity, we show that the dynamic component does not impair memory capacity, and can even enhance it in certain regimes.

When we recall a past experience, accessing what is known as an ‘episodic memory’, it usually does not appear as a still image or a snapshot of what occurred. Instead, our memories tend to be dynamic: we remember how a sequence of events unfolded, and when we do this, we often re-experience at least part of that same sequence. If the memory includes physical movement, the sequence combines space and time to remember a trajectory. For example, a mouse might remember how it went down a hole and found cheese there. However, mathematical models of how past experiences are stored in our brains and retrieved when we remember them have so far focused on snapshot memories. ‘Attractor network models’ are one type of mathematical model that neuroscientists use to represent how neurons communicate with each other to store memories. These models can provide insights into how circuits of neurons, for example those in the hippocampus (a part of the brain crucial for memory), may have evolved to remember the past, but so far they have only focused on how single moments, rather than sequences of events, are represented by populations of neurons. Spalla et al. found a way to extend these models, so they could analyse how networks of neurons can store and retrieve dynamic memories. These memories are represented in the brain as ‘continuous attractors’, which can be thought of as arrows that attract mental trajectories first to the arrow itself, and once on the arrow, to the arrowhead. Each recalled event elicits the next one on the arrow, as the mental trajectory advances towards the arrowhead. Spalla et al. determined that memory networks in the hippocampus of mammals can store large numbers of these ‘arrows’, up to the same amount of ‘snapshot’ memories predicted to be stored with similar models. Spalla et al.’s results may allow researchers to better understand memory storage and recall, since they allow for the modelling of complex and realistic aspects of episodic memories. This could provide insights into processes such as why our minds wander, as well as having implications for the study of how neurons physically interact with each other to transmit information.
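The mechanism can be illustrated with a ring network: a symmetric cosine interaction stores a bump of activity, and adding a small asymmetric (odd, sine) component breaks the equilibrium so the bump travels, i.e., the memory is retrieved dynamically. A minimal sketch with illustrative coupling strengths and a saturating rate nonlinearity (assumptions for the demo, not the paper's exact model):

```python
import numpy as np

N, dt, tau = 120, 0.001, 0.02
phi = np.linspace(0, 2 * np.pi, N, endpoint=False)
d = phi[:, None] - phi[None, :]
# The cos() part stores the bump; the sin() part is the asymmetric
# component that sets the retrieved memory in motion.
J0, J1, gamma = -6.0, 12.0, 2.0
W = (J0 + J1 * np.cos(d) + gamma * np.sin(d)) / N

r = (np.cos(phi) > 0.8).astype(float)          # cue a bump near phi = 0
for step in range(4000):
    h = W @ r
    r += dt / tau * (-r + np.clip(h, 0.0, 1.0))  # saturating rate units
    if (step + 1) % 1000 == 0:
        print(f"t = {dt * (step + 1):.1f}s: bump center at "
              f"{phi[np.argmax(r)]:.2f} rad")
```

With gamma = 0 the bump stays where it is cued (a static snapshot memory); with gamma > 0 the same connectivity makes it drift steadily around the ring.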
Affiliation(s)
- Davide Spalla
- SISSA - Cognitive Neuroscience, Via Bonomea, Trieste, Italy
- Isabel Maria Cornacchia
- SISSA - Cognitive Neuroscience, Via Bonomea, Trieste, Italy
- University of Turin - Physics Department, Torino, Italy

16. Li Y, Feng X, Liu Y, Han X. Apple quality identification and classification by image processing based on convolutional neural networks. Sci Rep 2021; 11:16618. [PMID: 34404850] [PMCID: PMC8371106] [DOI: 10.1038/s41598-021-96103-2]
Abstract
This work investigated apple quality identification and classification from real images containing complicated disturbance information (backgrounds similar to the surface of the apples). The paper proposed a novel model based on convolutional neural networks (CNNs) aimed at accurate and fast grading of apple quality. Specific, complex, and useful image characteristics for detection and classification were captured by the proposed model. Compared with existing methods, the proposed model could better learn high-order features of two adjacent layers that are not in the same channel but are closely related. The proposed model was trained and validated, with best training and validation accuracies of 99% and 98.98% at the 2590th and 3000th steps, respectively. The overall accuracy of the proposed model, tested on an independent dataset of 300 apples, was 95.33%. The results showed that the training accuracy, overall test accuracy, and training time of the proposed model were better than those of the Google Inception v3 model and of a traditional image-processing method based on histogram of oriented gradients (HOG) and gray-level co-occurrence matrix (GLCM) feature merging with a support vector machine (SVM) classifier. The proposed model has great potential in apple quality detection and classification.
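As a point of reference, a minimal PyTorch sketch of a small CNN classifier of the general kind described (the layer sizes, input resolution, and three quality classes are illustrative assumptions; the paper's specific cross-channel high-order feature design is not reproduced here):

```python
import torch
import torch.nn as nn

class AppleCNN(nn.Module):
    """Small CNN sketch for grading apple images into quality classes."""
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # 64 -> 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # 32 -> 16
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):
        z = self.features(x)
        return self.classifier(z.flatten(1))

model = AppleCNN()
logits = model(torch.randn(4, 3, 64, 64))   # batch of four 64x64 RGB images
print(logits.shape)                          # torch.Size([4, 3])
```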
Affiliation(s)
- Yanfei Li
- School of Mechanical Engineering, Shandong University, Jinan, 250061, Shandong, China
- Key Laboratory of High Efficiency and Clean Mechanical Manufacture of Ministry of Education, Shandong University, Jinan, 250061, Shandong, China
- Xianying Feng
- School of Mechanical Engineering, Shandong University, Jinan, 250061, Shandong, China
- Key Laboratory of High Efficiency and Clean Mechanical Manufacture of Ministry of Education, Shandong University, Jinan, 250061, Shandong, China
- Yandong Liu
- School of Mechanical Engineering, Shandong University, Jinan, 250061, Shandong, China
- Key Laboratory of High Efficiency and Clean Mechanical Manufacture of Ministry of Education, Shandong University, Jinan, 250061, Shandong, China
- Xingchang Han
- School of Mechanical Engineering, Shandong University, Jinan, 250061, Shandong, China
- Key Laboratory of High Efficiency and Clean Mechanical Manufacture of Ministry of Education, Shandong University, Jinan, 250061, Shandong, China
- Shandong Academy of Agricultural Machinery Sciences, Jinan, 250100, Shandong, China

17. Aljadeff J, Gillett M, Pereira Obilinovic U, Brunel N. From synapse to network: models of information storage and retrieval in neural circuits. Curr Opin Neurobiol 2021; 70:24-33. [PMID: 34175521] [DOI: 10.1016/j.conb.2021.05.005]
Abstract
The mechanisms of information storage and retrieval in brain circuits are still the subject of debate. It is widely believed that information is stored at least in part through changes in synaptic connectivity in networks that encode this information and that these changes lead in turn to modifications of network dynamics, such that the stored information can be retrieved at a later time. Here, we review recent progress in deriving synaptic plasticity rules from experimental data and in understanding how plasticity rules affect the dynamics of recurrent networks. We show that the dynamics generated by such networks exhibit a large degree of diversity, depending on parameters, similar to experimental observations in vivo during delayed response tasks.
Affiliation(s)
- Johnatan Aljadeff
- Neurobiology Section, Division of Biological Sciences, UC San Diego, USA
- Nicolas Brunel
- Department of Neurobiology, Duke University, USA
- Department of Physics, Duke University, USA

18. Reifenstein ET, Bin Khalid I, Kempter R. Synaptic learning rules for sequence learning. eLife 2021; 10:e67171. [PMID: 33860763] [PMCID: PMC8175084] [DOI: 10.7554/eLife.67171]
Abstract
Remembering the temporal order of a sequence of events is a task easily performed by humans in everyday life, but the underlying neuronal mechanisms are unclear. This problem is particularly intriguing as human behavior often proceeds on a timescale of seconds, in stark contrast to the much faster millisecond timescale of neuronal processing in our brains. One long-held hypothesis in sequence learning suggests that a particular temporal fine-structure of neuronal activity - termed 'phase precession' - enables the compression of slow behavioral sequences down to the fast timescale of the induction of synaptic plasticity. Using mathematical analysis and computer simulations, we find that - for short enough synaptic learning windows - phase precession can improve temporal-order learning tremendously and that the asymmetric part of the synaptic learning window is essential for temporal-order learning. To test these predictions, we suggest experiments that selectively alter phase precession or the learning window and evaluate memory of temporal order.
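The compression argument can be made quantitative with back-of-the-envelope numbers (the running speed, field size, and theta period below are typical illustrative values, not the paper's): two place cells whose field centers are passed hundreds of milliseconds apart fire only tens of milliseconds apart within a theta cycle, which moves the spike-time lag into the range of an asymmetric STDP window.

```python
import numpy as np

v, field, T_theta = 0.2, 0.4, 0.125   # speed (m/s), field size (m), theta (s)
dx = 0.05                             # separation of two field centers (m)

dt_behavior = dx / v                  # lag at the behavioral timescale
dt_theta = (dx / field) * T_theta     # lag within one theta cycle
print(f"behavioral lag: {dt_behavior * 1e3:.0f} ms")   # 250 ms
print(f"compressed lag: {dt_theta * 1e3:.1f} ms")      # ~16 ms

tau_plus = 0.02                       # decay of an asymmetric STDP window (s)
print(f"relative potentiation: {np.exp(-dt_theta / tau_plus):.2f} with "
      f"phase precession vs {np.exp(-dt_behavior / tau_plus):.1e} without")
```

With these numbers the compression factor is about 16x, turning a pairing that would be invisible to STDP into one that falls squarely inside the learning window.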
Affiliation(s)
- Eric Torsten Reifenstein
- Institute for Theoretical Biology, Department of Biology, Humboldt-Universität zu Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Ikhwan Bin Khalid
- Institute for Theoretical Biology, Department of Biology, Humboldt-Universität zu Berlin, Berlin, Germany
- Richard Kempter
- Institute for Theoretical Biology, Department of Biology, Humboldt-Universität zu Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Einstein Center for Neurosciences Berlin, Berlin, Germany