1. Oberländer J, Bouhadjar Y, Morrison A. Learning and replaying spatiotemporal sequences: A replication study. Front Integr Neurosci 2022; 16:974177. PMID: 36310714; PMCID: PMC9614051; DOI: 10.3389/fnint.2022.974177.
Abstract
Learning and replaying spatiotemporal sequences are fundamental computations performed by the brain and specifically the neocortex. These features are critical for a wide variety of cognitive functions, including sensory perception and the execution of motor and language skills. Although several computational models demonstrate this capability, many are either hard to reconcile with biological findings or have limited functionality. To address this gap, a recent study proposed a biologically plausible model based on a spiking recurrent neural network supplemented with read-out neurons. After learning, the recurrent network develops precise switching dynamics by successively activating and deactivating small groups of neurons. The read-out neurons are trained to respond to particular groups and can thereby reproduce the learned sequence. For the model to serve as the basis for further research, it is important to determine its replicability. In this Brief Report, we give a detailed description of the model and identify missing details, inconsistencies or errors in or between the original paper and its reference implementation. We re-implement the full model in the neural simulator NEST in conjunction with the NESTML modeling language and confirm the main findings of the original work.
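The core mechanism can be illustrated compactly. The sketch below is a NumPy toy of the idea only, not the authors' NEST/NESTML implementation: small groups of spiking neurons activate one after another (driven externally here, standing in for the learned switching dynamics), and read-out units with Hebbian weights learn to respond to particular groups, so the learned sequence can be reproduced. All sizes, rates, and the sequence "A-B-C-D" are illustrative assumptions.

```python
# Toy of the model's core idea (not the authors' NEST/NESTML code): groups of
# spiking neurons fire one after another, and Hebbian read-out units learn to
# respond to "their" group, thereby replaying the sequence A-B-C-D.
import numpy as np

rng = np.random.default_rng(0)
group_size = 20                         # neurons per group (illustrative)
sequence = ["A", "B", "C", "D"]
n_groups = len(sequence)
window = 50                             # ms each group stays active
T = n_groups * window

# 1) "Switching dynamics": group g fires at a high rate only inside its window.
#    In the original model this emerges from learned recurrent connectivity.
rate_on, rate_off = 0.08, 0.002         # spike probability per ms per neuron
spikes = np.zeros((T, n_groups * group_size), dtype=int)
for g in range(n_groups):
    for t in range(T):
        r = rate_on if g * window <= t < (g + 1) * window else rate_off
        spikes[t, g * group_size:(g + 1) * group_size] = rng.random(group_size) < r

# 2) Hebbian read-out: during training, read-out unit g is clamped on in its
#    window; weights grow where pre- and post-activity coincide.
target = np.zeros((T, n_groups))
for g in range(n_groups):
    target[g * window:(g + 1) * window, g] = 1.0
W = (target.T @ spikes) / window        # simple correlation-based Hebbian rule

# 3) Replay: the same group activity now drives the read-out through the
#    learned weights, and the most active unit per window spells the sequence.
readout = spikes @ W.T
replayed = [sequence[int(np.argmax(readout[g * window:(g + 1) * window].sum(0)))]
            for g in range(n_groups)]
print("replayed sequence:", "".join(replayed))   # should print ABCD
```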
Affiliation(s)
- Jette Oberländer
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA-Institute Brain Structure-Function Relationship (JBI-1/INM-10), Research Centre Jülich, Jülich, Germany
- Department of Computer Science 3-Software Engineering, RWTH Aachen University, Aachen, Germany
- Younes Bouhadjar
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA-Institute Brain Structure-Function Relationship (JBI-1/INM-10), Research Centre Jülich, Jülich, Germany
- Jülich Research Centre and JARA, Peter Grünberg Institute (PGI-7, 10), Jülich, Germany
- RWTH Aachen University, Aachen, Germany
- Correspondence: Younes Bouhadjar
- Abigail Morrison
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA-Institute Brain Structure-Function Relationship (JBI-1/INM-10), Research Centre Jülich, Jülich, Germany
- Department of Computer Science 3-Software Engineering, RWTH Aachen University, Aachen, Germany
2. Maes A, Barahona M, Clopath C. Learning compositional sequences with multiple time scales through a hierarchical network of spiking neurons. PLoS Comput Biol 2021; 17:e1008866. PMID: 33764970; PMCID: PMC8023498; DOI: 10.1371/journal.pcbi.1008866.
Abstract
Sequential behaviour is often compositional and organised across multiple time scales: individual elements that develop on short time scales (motifs) are combined into longer functional sequences (syntax). Such organisation leads to a natural hierarchy that can be exploited for learning, since the motifs and the syntax can be acquired independently. Despite mounting experimental evidence for hierarchical structures in neuroscience, models of temporal learning based on neuronal networks have mostly focused on serial methods. Here, we introduce a network model of spiking neurons with a hierarchical organisation aimed at sequence learning on multiple time scales. Using biophysically motivated neuron dynamics and local plasticity rules, the model can learn motifs and syntax independently. Furthermore, the model can relearn sequences efficiently and store multiple sequences. Compared to serial learning, the hierarchical model displays faster learning, more flexible relearning, increased capacity, and higher robustness to perturbations. The hierarchical model redistributes the variability: it achieves high motif fidelity at the cost of higher variability in the between-motif timings.
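As a schematic of this two-level organisation (a plain-Python sketch, not the paper's spiking implementation; the motif contents and syntax below are made up), a slow "syntax" sequence is unrolled into fast "motif" element sequences, and the syntax can be relearned while the motifs are reused unchanged:

```python
# Toy two-level sequence generator: fast "motif" chains produce elements, a slow
# "syntax" chain strings motifs together, and the two levels can be (re)learned
# independently. Schematic only; not the spiking network of the paper.
import numpy as np

motifs = {                          # fast time scale: short element sequences
    "m1": list("abc"),
    "m2": list("dde"),
    "m3": list("fg"),
}
syntax = ["m1", "m3", "m2"]         # slow time scale: order of motifs

def replay(syntax, motifs, gap_jitter=0.0, seed=1):
    """Unroll syntax -> motifs -> (time, element) pairs; jitter only the
    between-motif gaps, mirroring the observation that variability
    concentrates in the between-motif timings."""
    rng = np.random.default_rng(seed)
    out, t = [], 0.0
    for label in syntax:
        for element in motifs[label]:
            out.append((round(t, 2), element))
            t += 1.0                                    # fixed within-motif timing
        t += 2.0 + gap_jitter * rng.standard_normal()   # variable between-motif gap
    return out

print(replay(syntax, motifs))
# Relearning only the syntax: motifs are reused unchanged.
print(replay(["m2", "m1"], motifs, gap_jitter=0.5))
```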
Affiliation(s)
- Amadeus Maes
- Bioengineering Department, Imperial College London, London, United Kingdom
- Mauricio Barahona
- Mathematics Department, Imperial College London, London, United Kingdom
- Claudia Clopath
- Bioengineering Department, Imperial College London, London, United Kingdom
3. Maes A, Barahona M, Clopath C. Learning spatiotemporal signals using a recurrent spiking network that discretizes time. PLoS Comput Biol 2020; 16:e1007606. PMID: 31961853; PMCID: PMC7028299; DOI: 10.1371/journal.pcbi.1007606.
Abstract
Learning to produce spatiotemporal sequences is a common task that the brain has to solve. The same neurons may be used to produce different sequential behaviours. How the brain learns and encodes such tasks remains unknown, as current computational models do not typically use realistic, biologically plausible learning. Here, we propose a model in which a recurrent network of excitatory and inhibitory spiking neurons drives a read-out layer: the dynamics of the recurrent network are trained to encode time, which is then mapped through the read-out neurons onto another dimension, such as space or phase. Different spatiotemporal patterns can be learned and encoded in the synaptic weights onto the read-out neurons, which follow common Hebbian learning rules. We demonstrate that the model can learn spatiotemporal dynamics on behaviourally relevant time scales, and we show that the learned sequences are robustly replayed during a regime of spontaneous activity.
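A minimal NumPy sketch of this division of labour (illustrative parameters, not the paper's): sequentially active clusters act as a clock that discretizes time, and Hebbian read-out weights map that time code onto an arbitrary spatiotemporal target, which the clock dynamics can then replay:

```python
# Sketch: a sequential "clock" code plus Hebbian read-out weights reproduce a
# spatiotemporal target. Parameters and the target signal are illustrative.
import numpy as np

n_clusters, steps_per_cluster = 30, 5
T = n_clusters * steps_per_cluster
t = np.arange(T)

# Time code: cluster k is active while the clock points at it.
clock = np.zeros((T, n_clusters))
clock[t, t // steps_per_cluster] = 1.0

# Target spatiotemporal signal: two read-out dimensions tracing a curve.
target = np.stack([np.sin(2 * np.pi * t / T),
                   np.cos(4 * np.pi * t / T)], axis=1)    # shape (T, 2)

# Hebbian learning of the read-out weights: accumulate pre/post coincidences.
eta = 1.0 / steps_per_cluster
W = eta * target.T @ clock                                 # shape (2, n_clusters)

# Replay: the same clock dynamics now drive the read-out.
replayed = clock @ W.T
print("mean reconstruction error:", float(np.abs(replayed - target).mean()))
```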
Affiliation(s)
- Amadeus Maes
- Department of Bioengineering, Imperial College London, London, United Kingdom
- Mauricio Barahona
- Department of Mathematics, Imperial College London, London, United Kingdom
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London, United Kingdom
4. Depannemaecker D, Canton Santos LE, Rodrigues AM, Scorza CA, Scorza FA, Almeida ACGD. Realistic spiking neural network: Non-synaptic mechanisms improve convergence in cell assembly. Neural Netw 2019; 122:420-433. PMID: 31841876; DOI: 10.1016/j.neunet.2019.09.038.
Abstract
Learning in neural networks inspired by brain tissue has been studied for machine learning applications. However, existing work has focused primarily on synaptic weight modulation, while other aspects of neuronal interaction, such as non-synaptic mechanisms, have been neglected. Non-synaptic interaction mechanisms play significant roles in the brain, and four classes can be highlighted: (i) electrotonic coupling; (ii) ephaptic interactions; (iii) electric field effects; and (iv) extracellular ionic fluctuations. In this work, we propose simple learning rules, inspired by recent findings in machine learning, adapted to a realistic spiking neural network, and show that including non-synaptic interaction mechanisms improves cell assembly convergence. In particular, incorporating extracellular ionic fluctuations, represented by extracellular electrodiffusion, demonstrates the importance of these mechanisms for convergence. We additionally observe a variety of electrophysiological patterns of neuronal activity, notably bursting and synchrony, when convergence improves.
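The toy below (a crude stand-in, not the paper's biophysical model; every parameter is invented) illustrates the qualitative claim: a shared, slowly varying "extracellular" variable that tracks recent population firing feeds back onto neuronal excitability, helping co-active cells fire together and thereby letting a Hebbian assembly converge faster than with synaptic interactions alone:

```python
# Toy leaky integrate-and-fire network with a crude Hebbian rule and a shared
# "extracellular" variable standing in for ionic fluctuations. Illustrative only.
import numpy as np

N, T, dt = 50, 4000, 1.0                  # neurons, time steps (ms), step size
tau_m, tau_k = 20.0, 200.0                # membrane and "ionic" time constants (ms)
v_th, v_reset = 1.0, 0.0

def run(nonsynaptic_gain, seed=2):
    """Simulate the toy network; return the mean learned assembly weight."""
    rng = np.random.default_rng(seed)     # same noise for both conditions
    v = rng.random(N)                     # membrane potentials
    k = 0.0                               # shared extracellular variable (K+ proxy)
    W = np.full((N, N), 0.02)
    np.fill_diagonal(W, 0.0)
    for _ in range(T):
        spiked = v >= v_th
        v[spiked] = v_reset
        drive = 1.0 + 0.5 * rng.random(N)              # noisy external input
        v += dt / tau_m * (-v + drive) + W @ spiked + nonsynaptic_gain * k
        k += dt / tau_k * (-k) + 0.02 * spiked.mean()  # tracks population firing
        W += 1e-3 * np.outer(spiked, spiked)           # coincidence-based Hebbian rule
        np.clip(W, 0.0, 0.1, out=W)
        np.fill_diagonal(W, 0.0)
    return float(W.mean())

print("mean assembly weight, synaptic only     :", run(0.0))
print("mean assembly weight, with non-synaptic :", run(0.05))
```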
Affiliation(s)
- Damien Depannemaecker
- Laboratório de Neurociência Experimental e Computacional, Departamento de Engenharia de Biossistemas, Universidade Federal de São João del-Rei (UFSJ), Brazil; Disciplina de Neurociência, Departamento de Neurologia e Neurocirurgia, Universidade Federal de São Paulo (UNIFESP), São Paulo, Brazil
- Luiz Eduardo Canton Santos
- Laboratório de Neurociência Experimental e Computacional, Departamento de Engenharia de Biossistemas, Universidade Federal de São João del-Rei (UFSJ), Brazil; Disciplina de Neurociência, Departamento de Neurologia e Neurocirurgia, Universidade Federal de São Paulo (UNIFESP), São Paulo, Brazil
- Antônio Márcio Rodrigues
- Laboratório de Neurociência Experimental e Computacional, Departamento de Engenharia de Biossistemas, Universidade Federal de São João del-Rei (UFSJ), Brazil
- Carla Alessandra Scorza
- Laboratório de Neurociência Experimental e Computacional, Departamento de Engenharia de Biossistemas, Universidade Federal de São João del-Rei (UFSJ), Brazil
- Fulvio Alexandre Scorza
- Disciplina de Neurociência, Departamento de Neurologia e Neurocirurgia, Universidade Federal de São Paulo (UNIFESP), São Paulo, Brazil
- Antônio-Carlos Guimarães de Almeida
- Laboratório de Neurociência Experimental e Computacional, Departamento de Engenharia de Biossistemas, Universidade Federal de São João del-Rei (UFSJ), Brazil.
5. La Camera G, Fontanini A, Mazzucato L. Cortical computations via metastable activity. Curr Opin Neurobiol 2019; 58:37-45. PMID: 31326722; DOI: 10.1016/j.conb.2019.06.007.
Abstract
Metastable brain dynamics are characterized by abrupt, jump-like modulations, so that neural activity in single trials appears to unfold as a sequence of discrete, quasi-stationary 'states'. Evidence that cortical neural activity unfolds as a sequence of metastable states is accumulating at a fast pace. Metastable activity occurs both in response to an external stimulus and during ongoing, self-generated activity. These spontaneous metastable states are increasingly found to subserve internal representations that are not locked to external triggers, including states of deliberation, attention, and expectation. Moreover, decoding stimuli or decisions via metastable states can be carried out on a trial-by-trial basis. Focusing on metastability will allow us to shift our perspective on neural coding from traditional concepts based on trial-averaging to models based on dynamic ensemble representations. Recent theoretical work has started to characterize the mechanistic origin and potential roles of metastable representations. In this article we review recent findings on metastable activity, how it may arise in biologically realistic models, and its potential role for representing internal states as well as relevant task variables.
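As a minimal illustration of such dynamics (a generic toy, not a model taken from the review; all parameters are invented), two mutually inhibiting rate populations with slow adaptation and noise dwell in quasi-stationary states and jump between them abruptly:

```python
# Two competing populations with adaptation and noise: activity dwells in one
# quasi-stationary state, then switches abruptly -- a cartoon of metastability.
import numpy as np

rng = np.random.default_rng(3)
dt, tau_r, tau_a, T = 1.0, 20.0, 400.0, 8000
w_self, w_cross, g_adapt, bias, noise = 1.5, -1.1, 1.0, 0.3, 0.15

def phi(x):
    return np.tanh(np.maximum(x, 0.0))    # rate nonlinearity

r = np.array([0.8, 0.1])                  # two competing populations
a = np.zeros(2)                           # slow adaptation variables
states = []
for _ in range(T):
    inp = w_self * r + w_cross * r[::-1] - g_adapt * a + bias
    r += dt / tau_r * (-r + phi(inp)) + noise * np.sqrt(dt / tau_r) * rng.standard_normal(2)
    a += dt / tau_a * (-a + r)
    r = np.clip(r, 0.0, 1.5)
    states.append(int(np.argmax(r)))      # label the currently dominant state

states = np.asarray(states)
switches = np.flatnonzero(np.diff(states)) + 1
dwell = np.diff(np.concatenate(([0], switches, [T])))
print("number of state switches:", len(switches))
print("mean dwell time (steps) :", float(dwell.mean()))
```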
Affiliation(s)
- Giancarlo La Camera
- Department of Neurobiology and Behavior, State University of New York at Stony Brook, Stony Brook, NY 11794, United States; Graduate Program in Neuroscience, State University of New York at Stony Brook, Stony Brook, NY 11794, United States.
- Alfredo Fontanini
- Department of Neurobiology and Behavior, State University of New York at Stony Brook, Stony Brook, NY 11794, United States; Graduate Program in Neuroscience, State University of New York at Stony Brook, Stony Brook, NY 11794, United States
- Luca Mazzucato
- Departments of Biology and Mathematics and Institute of Neuroscience, University of Oregon, Eugene, OR 97403, United States
6. Muscinelli SP, Gerstner W, Schwalger T. How single neuron properties shape chaotic dynamics and signal transmission in random neural networks. PLoS Comput Biol 2019; 15:e1007122. PMID: 31181063; PMCID: PMC6586367; DOI: 10.1371/journal.pcbi.1007122.
Abstract
While most models of randomly connected neural networks assume single-neuron models with simple dynamics, neurons in the brain exhibit complex intrinsic dynamics over multiple timescales. We analyze how the dynamical properties of single neurons and recurrent connections interact to shape the effective dynamics in large randomly connected networks. A novel dynamical mean-field theory for strongly connected networks of multi-dimensional rate neurons shows that the power spectrum of the network activity in the chaotic phase emerges from a nonlinear sharpening of the frequency response function of single neurons. For the case of two-dimensional rate neurons with strong adaptation, we find that the network exhibits a state of "resonant chaos", characterized by robust, narrow-band stochastic oscillations. The coherence of stochastic oscillations is maximal at the onset of chaos and their correlation time scales with the adaptation timescale of single units. Surprisingly, the resonance frequency can be predicted from the properties of isolated neurons, even in the presence of heterogeneity in the adaptation parameters. In the presence of these internally-generated chaotic fluctuations, the transmission of weak, low-frequency signals is strongly enhanced by adaptation, whereas signal transmission is not influenced by adaptation in the non-chaotic regime. Our theoretical framework can be applied to other mechanisms at the level of single neurons, such as synaptic filtering, refractoriness or spike synchronization. These results advance our understanding of the interaction between the dynamics of single units and recurrent connectivity, which is a fundamental step toward the description of biologically realistic neural networks.
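A sketch of the kind of simulation this theory describes (illustrative parameters, not the paper's exact equations or values): a random network of two-dimensional rate units with adaptation is simulated, and the power spectrum of one unit's chaotic fluctuations is estimated; with strong adaptation the spectrum is expected to peak at a nonzero frequency ("resonant chaos"):

```python
# Random network of rate units with adaptation; estimate the power spectrum of
# the chaotic activity. Parameters are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(4)
N, g = 200, 2.0                          # network size and coupling strength
tau_x, tau_a, beta = 10.0, 100.0, 4.0    # ms; membrane, adaptation, adaptation gain
dt, T_burn, T_rec = 0.5, 4000, 16384     # Euler step (ms), burn-in and recorded steps

J = g / np.sqrt(N) * rng.standard_normal((N, N))   # random coupling matrix
np.fill_diagonal(J, 0.0)

x = 0.5 * rng.standard_normal(N)         # "membrane" variables
a = np.zeros(N)                          # adaptation variables
trace = np.empty(T_rec)
for step in range(T_burn + T_rec):
    rec = J @ np.tanh(x)                 # recurrent input
    x += dt / tau_x * (-x - beta * a + rec)
    a += dt / tau_a * (-a + x)
    if step >= T_burn:
        trace[step - T_burn] = x[0]

# Power spectrum of one unit's fluctuations.
trace -= trace.mean()
power = np.abs(np.fft.rfft(trace)) ** 2
freqs = np.fft.rfftfreq(T_rec, d=dt * 1e-3)        # dt in seconds -> Hz
peak = freqs[1 + np.argmax(power[1:])]             # ignore the DC bin
print(f"spectral peak near {peak:.1f} Hz")
```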
Affiliation(s)
- Samuel P. Muscinelli
- School of Computer and Communication Sciences and School of Life Sciences, École polytechnique fédérale de Lausanne, Station 15, CH-1015 Lausanne EPFL, Switzerland
- Wulfram Gerstner
- School of Computer and Communication Sciences and School of Life Sciences, École polytechnique fédérale de Lausanne, Station 15, CH-1015 Lausanne EPFL, Switzerland
- Tilo Schwalger
- Bernstein Center for Computational Neuroscience, 10115 Berlin, Germany
- Institut für Mathematik, Technische Universität Berlin, 10623 Berlin, Germany