1
Maslennikov O, Perc M, Nekorkin V. Topological features of spike trains in recurrent spiking neural networks that are trained to generate spatiotemporal patterns. Front Comput Neurosci 2024; 18:1363514. PMID: 38463243; PMCID: PMC10920356; DOI: 10.3389/fncom.2024.1363514.
Abstract
In this study, we focus on training recurrent spiking neural networks to generate spatiotemporal patterns in the form of closed two-dimensional trajectories. Spike trains in the trained networks are examined in terms of their dissimilarity using the Victor-Purpura distance. We apply algebraic topology methods to the matrices obtained by rank-ordering the entries of the distance matrices, specifically calculating the persistence barcodes and Betti curves. By comparing the features of different types of output patterns, we uncover the complex relations between low-dimensional target signals and the underlying multidimensional spike trains.
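The analysis pipeline described in the abstract can be illustrated in a few lines of Python. This is a minimal sketch rather than the authors' code: it implements the standard Victor-Purpura edit distance by dynamic programming and tracks only the zeroth Betti number (connected components) over the rank-ordered pairwise distances, whereas the paper computes full persistence barcodes.

```python
import numpy as np

def victor_purpura(t1, t2, q=1.0):
    """Victor-Purpura spike-train distance via dynamic programming:
    cost 1 to insert/delete a spike, q*|dt| to shift one."""
    n, m = len(t1), len(t2)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1)
    D[0, :] = np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = min(D[i - 1, j] + 1,
                          D[i, j - 1] + 1,
                          D[i - 1, j - 1] + q * abs(t1[i - 1] - t2[j - 1]))
    return D[n, m]

def betti0_curve(dist):
    """Betti-0 curve: number of connected components as edges are added
    in rank order of the entries of the distance matrix (union-find)."""
    n = dist.shape[0]
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    iu = np.triu_indices(n, k=1)
    order = np.argsort(dist[iu])           # rank-ordering of the entries
    curve, components = [], n
    for k in order:
        a, b = find(iu[0][k]), find(iu[1][k])
        if a != b:
            parent[a] = b
            components -= 1
        curve.append(components)
    return curve
```

Higher Betti numbers (loops, voids), as used in the paper, would require a persistent-homology library rather than this union-find shortcut.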
Affiliation(s)
- Oleg Maslennikov
- Federal Research Center A.V. Gaponov-Grekhov Institute of Applied Physics of the Russian Academy of Sciences, Nizhny Novgorod, Russia
- Matjaž Perc
- Faculty of Natural Sciences and Mathematics, University of Maribor, Maribor, Slovenia
- Department of Medical Research, China Medical University Hospital, China Medical University, Taichung City, Taiwan
- Complexity Science Hub Vienna, Vienna, Austria
- Department of Physics, Kyung Hee University, Seoul, Republic of Korea
- Vladimir Nekorkin
- Federal Research Center A.V. Gaponov-Grekhov Institute of Applied Physics of the Russian Academy of Sciences, Nizhny Novgorod, Russia
2
Capone C, Lupo C, Muratore P, Paolucci PS. Beyond spiking networks: The computational advantages of dendritic amplification and input segregation. Proc Natl Acad Sci U S A 2023; 120:e2220743120. PMID: 38019856; PMCID: PMC10710097; DOI: 10.1073/pnas.2220743120.
Abstract
The brain can efficiently learn a wide range of tasks, motivating the search for biologically inspired learning rules to improve current artificial intelligence technology. Most biological models are composed of point neurons and cannot achieve state-of-the-art performance in machine learning. Recent works have proposed that input segregation (neurons receive sensory information and higher-order feedback in segregated compartments) and nonlinear dendritic computation would support error backpropagation in biological neurons. However, these approaches require propagating errors with a fine spatiotemporal structure to all the neurons, which is unlikely to be feasible in a biological network. To relax this assumption, we suggest that bursts and dendritic input segregation provide a natural support for target-based learning, which propagates targets rather than errors. A coincidence mechanism between the basal and the apical compartments allows for generating high-frequency bursts of spikes. This architecture supports a burst-dependent learning rule, based on the comparison between the target bursting activity triggered by the teaching signal and the one caused by the recurrent connections, providing support for target-based learning. We show that this framework can be used to efficiently solve spatiotemporal tasks, such as context-dependent store and recall of three-dimensional trajectories, and navigation tasks. Finally, we suggest that this neuronal architecture naturally allows for orchestrating "hierarchical imitation learning", enabling the decomposition of challenging long-horizon decision-making tasks into simpler subtasks. We show a possible implementation of this in a two-level network, where the higher-level network produces the contextual signal for the lower-level network.
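The burst-dependent rule can be caricatured in a few lines. This is a toy rate-based sketch under our own assumptions (the paper's networks are spiking); the compartment names, thresholds, and learning rate are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
N, steps, theta, eta = 40, 300, 1.0, 0.01

def bursts(basal, apical):
    # coincidence mechanism: a high-frequency burst is emitted only when
    # the basal and the apical compartments are both driven above threshold
    return ((basal > theta) & (apical > theta)).astype(float)

W_rec = rng.normal(0.0, 0.2, (N, N))   # recurrent weights onto the apical compartment (learned)
W_in = rng.normal(0.0, 0.2, (N, N))    # fixed input weights onto the basal compartment

for _ in range(steps):
    x = rng.random(N)                   # presynaptic activity
    teacher = 2.0 * rng.random(N)       # apical teaching signal
    basal = W_in @ x
    target_b = bursts(basal, teacher)   # target bursts triggered by the teacher
    recur_b = bursts(basal, W_rec @ x)  # bursts caused by the recurrent connections
    # burst-dependent rule: nudge recurrent drive toward the target bursting
    W_rec += eta * np.outer(target_b - recur_b, x)
```

The key point the sketch captures is that only the difference between teacher-triggered and recurrence-triggered bursting drives plasticity, i.e. targets rather than errors are propagated.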
Affiliation(s)
- Cristiano Capone
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome 00185, Italy
- Cosimo Lupo
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome 00185, Italy
- Paolo Muratore
- Scuola Internazionale Superiore di Studi Avanzati (SISSA), Visual Neuroscience Lab, Trieste 34136, Italy
3
Capone C, De Luca C, De Bonis G, Gutzen R, Bernava I, Pastorelli E, Simula F, Lupo C, Tonielli L, Resta F, Allegra Mascaro AL, Pavone F, Denker M, Paolucci PS. Simulations approaching data: cortical slow waves in inferred models of the whole hemisphere of mouse. Commun Biol 2023; 6:266. PMID: 36914748; PMCID: PMC10011502; DOI: 10.1038/s42003-023-04580-0.
Abstract
The development of novel techniques to record wide-field brain activity enables estimation of data-driven models from thousands of recording channels and hence across large regions of cortex. These in turn improve our understanding of the modulation of brain states and the richness of traveling-wave dynamics. Here, we infer data-driven models from high-resolution in vivo recordings of the mouse brain obtained from wide-field calcium imaging. We then assimilate experimental and simulated data through the characterization of the spatiotemporal features of cortical waves in experimental recordings. Inference is built in two steps: an inner loop that optimizes a mean-field model by likelihood maximization, and an outer loop that optimizes a periodic neuromodulation via direct comparison of observables that characterize cortical slow waves. The model reproduces most of the features of the non-stationary and non-linear dynamics present in the high-resolution in vivo recordings of the mouse brain. The proposed approach offers new methods of characterizing and understanding cortical waves for experimental and computational neuroscientists.
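The two-step inference scheme (inner likelihood loop, outer observable-matching loop) can be caricatured with a toy rate model. Every function below is a hypothetical stand-in for the paper's mean-field model and slow-wave observables, not the authors' pipeline.

```python
import numpy as np

def simulate(rate, period, T=500):
    """Toy mean-field rate model under a periodic neuromodulation
    (a hypothetical stand-in for the paper's inferred model)."""
    t = np.arange(T)
    return rate * (1.0 + 0.5 * np.sin(2 * np.pi * t / period))

def neg_log_likelihood(rate, data, period):
    # Gaussian likelihood up to a constant -> sum of squared errors
    return np.sum((data - simulate(rate, period, len(data))) ** 2)

def wave_observable(trace):
    # a slow-wave observable: index of the dominant spectral peak
    spec = np.abs(np.fft.rfft(trace - trace.mean()))
    return int(np.argmax(spec))

# synthetic "recording": true rate 2.0, true neuromodulation period 100
data = simulate(2.0, 100) + np.random.default_rng(1).normal(0, 0.1, 500)

best = None
for period in (50, 100, 200):            # outer loop: neuromodulation
    rates = np.linspace(0.5, 4.0, 40)    # inner loop: likelihood maximization
    fit = rates[np.argmin([neg_log_likelihood(r, data, period) for r in rates])]
    score = abs(wave_observable(simulate(fit, period)) - wave_observable(data))
    if best is None or score < best[0]:
        best = (score, period, fit)
# best recovers the true neuromodulation period (100) and a rate near 2.0
```

The nesting is the point: the inner loop fits model parameters to the data likelihood at fixed neuromodulation, and the outer loop selects the neuromodulation by directly comparing wave observables.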
Affiliation(s)
- Chiara De Luca
- INFN, Sezione di Roma, Rome, Italy
- PhD Program in Behavioural Neuroscience, "Sapienza" University of Rome, Rome, Italy
- Robin Gutzen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany
- Francesco Resta
- European Laboratory for Non-Linear Spectroscopy, Sesto Fiorentino, Italy
- Anna Letizia Allegra Mascaro
- European Laboratory for Non-Linear Spectroscopy, Sesto Fiorentino, Italy
- Neuroscience Institute, National Research Council, Pisa, Italy
- Francesco Pavone
- European Laboratory for Non-Linear Spectroscopy, Sesto Fiorentino, Italy
- University of Florence, Physics and Astronomy Department, Sesto Fiorentino, Italy
- Michael Denker
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
4
Yu C, Du Y, Chen M, Wang A, Wang G, Li E. MAP-SNN: Mapping spike activities with multiplicity, adaptability, and plasticity into bio-plausible spiking neural networks. Front Neurosci 2022; 16:945037. PMID: 36203801; PMCID: PMC9531034; DOI: 10.3389/fnins.2022.945037.
Abstract
Spiking Neural Networks (SNNs) are considered more biologically realistic and power-efficient as they imitate the fundamental mechanism of the human brain. Backpropagation (BP) based SNN learning algorithms that utilize deep learning frameworks have achieved good performance. However, those BP-based algorithms partially ignore bio-interpretability. In modeling spike activity for biologically plausible BP-based SNNs, we examine three properties: multiplicity, adaptability, and plasticity (MAP). Regarding multiplicity, we propose a Multiple-Spike Pattern (MSP) with multiple-spike transmission to improve model robustness in discrete time iterations. To realize adaptability, we adopt Spike Frequency Adaptation (SFA) under MSP to reduce spike activity for enhanced efficiency. For plasticity, we propose a trainable state-free synapse that models spike response current to increase the diversity of spiking neurons for temporal feature extraction. The proposed SNN model achieves competitive performance on the N-MNIST and SHD neuromorphic datasets. In addition, experimental results demonstrate that the proposed three aspects are significant for iterative robustness, spike efficiency, and the capacity to extract spikes' temporal features. In summary, this study presents a realistic approach to bio-inspired spike activity with MAP, presenting a novel neuromorphic perspective for incorporating biological properties into spiking neural networks.
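Spike frequency adaptation, one of the three MAP properties, can be illustrated with a generic adaptive-threshold LIF neuron. This is a textbook-style sketch, not the paper's MSP formulation; all parameter values are hypothetical.

```python
import numpy as np

def lif_sfa(inp, tau_m=20.0, tau_a=100.0, beta=0.5, v_th=1.0, dt=1.0):
    """Leaky integrate-and-fire neuron with spike-frequency adaptation:
    each spike raises a slowly decaying adaptation variable that lifts
    the effective threshold, so firing slows under sustained input."""
    v, a = 0.0, 0.0
    spikes = []
    for x in inp:
        v += dt * (-v / tau_m + x)       # membrane integration
        a += dt * (-a / tau_a)           # adaptation decay
        if v > v_th + beta * a:          # effective threshold rises with a
            spikes.append(1)
            v = 0.0                      # reset
            a += 1.0                     # adaptation jump per spike
        else:
            spikes.append(0)
    return np.array(spikes)

s = lif_sfa(np.full(300, 0.12))
# under constant drive, inter-spike intervals lengthen over time
```

This is the mechanism the abstract invokes for reducing spike counts: the same input elicits progressively fewer spikes as the adaptation variable accumulates.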
Affiliation(s)
- Chengting Yu
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Zhejiang University - University of Illinois at Urbana-Champaign Institute, Zhejiang University, Haining, China
- Yangkai Du
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- Mufeng Chen
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Aili Wang
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Zhejiang University - University of Illinois at Urbana-Champaign Institute, Zhejiang University, Haining, China
- Correspondence: Aili Wang
- Gaoang Wang
- Zhejiang University - University of Illinois at Urbana-Champaign Institute, Zhejiang University, Haining, China
- Erping Li
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Zhejiang University - University of Illinois at Urbana-Champaign Institute, Zhejiang University, Haining, China
5
Capone C, Muratore P, Paolucci PS. Error-based or target-based? A unified framework for learning in recurrent spiking networks. PLoS Comput Biol 2022; 18:e1010221. PMID: 35727852; PMCID: PMC9249234; DOI: 10.1371/journal.pcbi.1010221.
Abstract
The field of recurrent neural networks is over-populated by a variety of proposed learning rules and protocols. The scope of this work is to define a generalized framework, to move a step forward towards the unification of this fragmented scenario. In the field of supervised learning, two opposite approaches stand out: error-based and target-based. This duality gave rise to a scientific debate on which learning framework is the most likely to be implemented in biological networks of neurons. Moreover, the existence of spikes raises the question of whether the coding of information is rate-based or spike-based. To face these questions, we propose a learning model with two main parameters: the rank of the feedback learning matrix R and the tolerance to spike timing τ⋆. We demonstrate that a low (high) rank R accounts for an error-based (target-based) learning rule, while high (low) tolerance to spike timing promotes rate-based (spike-based) coding. We show that in a store-and-recall task, high ranks allow for lower MSE values, while low ranks enable a faster convergence. Our framework naturally lends itself to Imitation Learning (and Behavioral Cloning in particular) and allows for efficiently solving relevant closed-loop tasks, investigating which parameters (R, τ⋆) are optimal for a specific task. We found that a high R is essential for the button-and-food task, a navigation task that requires retaining memory for a long time. On the other hand, this is not relevant for a motor task such as the 2D Bipedal Walker, where precise spike-based coding (a low τ⋆) enables optimal performance. Finally, we show that our theoretical formulation allows for defining protocols to estimate the rank of the feedback error in biological networks. We release a PyTorch implementation of our model supporting GPU parallelization.
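The role of the feedback rank can be made concrete with a small sketch. This is a hypothetical construction for illustration, not the released PyTorch code: a rank-R feedback matrix delivers a learning signal that imposes exactly R independent constraints on the network.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100   # network size; the task's output dimension would be much smaller

def feedback(R):
    """Random feedback matrix of rank R (hypothetical construction):
    B = U V^T imposes exactly R independent learning constraints."""
    U = rng.normal(size=(N, R)) / np.sqrt(R)
    V = rng.normal(size=(N, R)) / np.sqrt(N)
    return U @ V.T

B_err = feedback(3)    # low rank: a few global constraints (error-based)
B_tgt = feedback(N)    # full rank: one constraint per neuron (target-based)

mismatch = rng.normal(size=N)   # target minus actual activity, per neuron
sig_err = B_err @ mismatch      # broadcast, effectively low-dimensional signal
sig_tgt = B_tgt @ mismatch      # independent per-neuron learning signal
```

Sweeping R between these two extremes is what lets the paper interpolate continuously between error-based and target-based learning.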
Affiliation(s)
- Paolo Muratore
- Cognitive Neuroscience, SISSA, Trieste, Italy
6
Linking Brain Structure, Activity, and Cognitive Function through Computation. eNeuro 2022; 9:ENEURO.0316-21.2022. PMID: 35217544; PMCID: PMC8925650; DOI: 10.1523/eneuro.0316-21.2022.
Abstract
Understanding the human brain is a “Grand Challenge” for 21st century research. Computational approaches enable large and complex datasets to be addressed efficiently, supported by artificial neural networks, modeling and simulation. Dynamic generative multiscale models, which enable the investigation of causation across scales and are guided by principles and theories of brain function, are instrumental for linking brain structure and function. An example of a resource enabling such an integrated approach to neuroscientific discovery is the BigBrain, which spatially anchors tissue models and data across different scales and ensures that multiscale models are supported by the data, making the bridge to both basic neuroscience and medicine. Research at the intersection of neuroscience, computing and robotics has the potential to advance neuro-inspired technologies by taking advantage of a growing body of insights into perception, plasticity and learning. To render data, tools and methods, theories, basic principles and concepts interoperable, the Human Brain Project (HBP) has launched EBRAINS, a digital neuroscience research infrastructure, which brings together a transdisciplinary community of researchers united by the quest to understand the brain, with fascinating insights and perspectives for societal benefits.
7
Golosio B, De Luca C, Capone C, Pastorelli E, Stegel G, Tiddia G, De Bonis G, Paolucci PS. Thalamo-cortical spiking model of incremental learning combining perception, context and NREM-sleep. PLoS Comput Biol 2021; 17:e1009045. PMID: 34181642; PMCID: PMC8270441; DOI: 10.1371/journal.pcbi.1009045.
Abstract
The brain exhibits capabilities of fast incremental learning from few noisy examples, as well as the ability to associate similar memories in autonomously created categories and to combine contextual hints with sensory perceptions. Together with sleep, these mechanisms are thought to be key components of many high-level cognitive functions. Yet, little is known about the underlying processes and the specific roles of different brain states. In this work, we exploited the combination of context and perception in a thalamo-cortical model based on a soft winner-take-all circuit of excitatory and inhibitory spiking neurons. After calibrating this model to express awake and deep-sleep states with features comparable to biological measures, we demonstrate the model's capability of fast incremental learning from few examples, its resilience when presented with noisy perceptions and contextual signals, and an improvement in visual classification after sleep due to induced synaptic homeostasis and association of similar memories. We created this thalamo-cortical spiking model (ThaCo) to demonstrate a link between two phenomena that we believe to be essential for the brain's capability of efficient incremental learning from few examples in noisy environments. Grounded in two experimental observations, the first about the effects of deep sleep on pre- and post-sleep firing rate distributions, and the second about the combination of perceptual and contextual information in pyramidal neurons, our model joins these two ingredients. ThaCo alternates phases of incremental learning, classification, and deep sleep. Memories of handwritten digit examples are learned through thalamo-cortical and cortico-cortical plastic synapses. In the absence of noise, the combination of contextual information with perception enables fast incremental learning. Deep sleep becomes crucial when noisy inputs are considered. We observed in ThaCo both homeostatic and associative processes: deep sleep fights noise in perceptual and internal knowledge, and it supports the categorical association of examples belonging to the same digit class through reinforcement of class-specific cortico-cortical synapses. The distributions of pre-sleep and post-sleep firing rates during classification change in a manner similar to experimental observations. These changes promote energetic efficiency during recall of memories, better representation of individual memories and categories, and higher classification performance.
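The combination of contextual hints with perception in a soft winner-take-all circuit can be illustrated with a rate-based toy. This is a generic divisive-inhibition sketch, not ThaCo's spiking implementation; the `gain` parameter and the input values are hypothetical.

```python
import numpy as np

def soft_wta(drive, gain=3.0):
    """Soft winner-take-all via a shared divisive-inhibition pool:
    the strongest input is amplified, but competitors are not fully
    silenced (unlike a hard argmax)."""
    e = np.exp(gain * np.asarray(drive))
    return e / e.sum()

perception = np.array([0.2, 0.9, 0.4])   # noisy sensory evidence per class
context = np.array([0.0, 0.3, 0.0])      # contextual hint biasing one class
rates = soft_wta(perception + context)   # combined drive resolves the winner
```

Because the competition is soft, a contextual hint can tip an ambiguous perception toward the correct class without erasing the alternatives, which is the combination the abstract exploits.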
Affiliation(s)
- Bruno Golosio
- Dipartimento di Fisica, Università di Cagliari, Cagliari, Italy
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Cagliari, Cagliari, Italy
- Chiara De Luca
- Ph.D. Program in Behavioural Neuroscience, “Sapienza” Università di Roma, Rome, Italy
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome, Italy
- Cristiano Capone
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome, Italy
- Elena Pastorelli
- Ph.D. Program in Behavioural Neuroscience, “Sapienza” Università di Roma, Rome, Italy
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome, Italy
- Giovanni Stegel
- Dipartimento di Chimica e Farmacia, Università di Sassari, Sassari, Italy
- Gianmarco Tiddia
- Dipartimento di Fisica, Università di Cagliari, Cagliari, Italy
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Cagliari, Cagliari, Italy
- Giulia De Bonis
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome, Italy