1
Gemo E, Spiga S, Brivio S. SHIP: a computational framework for simulating and validating novel technologies in hardware spiking neural networks. Front Neurosci 2024; 17:1270090. PMID: 38264497; PMCID: PMC10804805; DOI: 10.3389/fnins.2023.1270090.
Abstract
Investigations in the field of spiking neural networks (SNNs) encompass diverse, yet overlapping, scientific disciplines. Examples range from purely neuroscientific investigations, through research on computational aspects of neuroscience, to application-oriented studies aiming to improve SNN performance or to develop artificial hardware counterparts. However, the simulation of SNNs is a complex task that cannot be adequately addressed with a single platform applicable to all scenarios. The optimization of a simulation environment to meet specific metrics often entails compromises in other aspects. This computational challenge has led to an apparent dichotomy of approaches, with model-driven algorithms dedicated to the detailed simulation of biological networks, and data-driven algorithms designed for efficient processing of large input datasets. Nevertheless, material scientists, device physicists, and neuromorphic engineers who develop new technologies for spiking neuromorphic hardware would benefit from a simulation environment that borrows aspects from both approaches, facilitating modeling, analysis, and training of prospective SNN systems. This manuscript explores the numerical challenges deriving from the simulation of spiking neural networks, and introduces SHIP, Spiking (neural network) Hardware In PyTorch, a numerical tool that supports the investigation and/or validation of materials, devices, and small circuit blocks within SNN architectures. SHIP facilitates the algorithmic definition of models for the components of a network, the monitoring of the states and outputs of the modeled systems, and the training of the network's synaptic weights, by way of user-defined unsupervised learning rules or supervised training techniques derived from conventional machine learning. SHIP thus offers a valuable tool for researchers and developers in the field of hardware-based spiking neural networks, enabling efficient simulation and validation of novel technologies.
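To make the simulation task concrete, the following is a minimal sketch of the discrete-time leaky integrate-and-fire (LIF) update at the heart of most SNN simulators, including PyTorch-based ones such as SHIP. The function name and all parameter values are illustrative assumptions, not SHIP's actual API.

```python
# Minimal sketch (not SHIP's API): a discrete-time leaky
# integrate-and-fire (LIF) update of the kind such frameworks simulate.

def lif_step(v, i_in, v_th=1.0, v_reset=0.0, alpha=0.9):
    """One time step: leak, integrate the input, spike and reset on threshold crossing."""
    v = alpha * v + i_in          # leaky integration of the input current
    spike = v >= v_th             # threshold crossing produces a spike
    if spike:
        v = v_reset               # hard reset after the spike
    return v, spike

# Drive one neuron with a constant input and collect its spike train.
v, spikes = 0.0, []
for _ in range(20):
    v, s = lif_step(v, 0.3)
    spikes.append(int(s))
```

With these illustrative constants the neuron fires regularly every fourth step; swapping in measured device models in place of the ideal leak term is the kind of substitution a hardware-oriented simulator must support.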
Affiliation(s)
- Emanuele Gemo
- CNR–IMM, Unit of Agrate Brianza, Agrate Brianza, Italy
2
Stapmanns J, Hahne J, Helias M, Bolten M, Diesmann M, Dahmen D. Event-Based Update of Synapses in Voltage-Based Learning Rules. Front Neuroinform 2021; 15:609147. PMID: 34177505; PMCID: PMC8222618; DOI: 10.3389/fninf.2021.609147.
Abstract
Due to the point-like nature of neuronal spiking, efficient neural network simulators often employ event-based simulation schemes for synapses. Yet many types of synaptic plasticity rely on the membrane potential of the postsynaptic cell as a third factor, in addition to pre- and postsynaptic spike times. In some learning rules, membrane potentials influence synaptic weight changes not only at the time points of spike events but in a continuous manner. In these cases, synapses require information on the full time course of the membrane potential to update their strength, which a priori suggests a continuous update in a time-driven manner. The latter hinders the scaling of simulations to realistic cortical network sizes and to the time scales relevant for learning. Here, we derive two efficient algorithms for archiving postsynaptic membrane potentials, both compatible with modern simulation engines based on event-based synapse updates. We theoretically contrast the two algorithms with a time-driven synapse update scheme to analyze their advantages in terms of memory and computation. We further present a reference implementation in the spiking neural network simulator NEST for two prototypical voltage-based plasticity rules: the Clopath rule and the Urbanczik-Senn rule. For both rules, the two event-based algorithms significantly outperform the time-driven scheme. Depending on the amount of data to be stored for plasticity, which differs heavily between the rules, a strong performance increase can be achieved by compressing or sampling the information on membrane potentials. Our results on the computational efficiency of archiving information provide guidelines for the design of learning rules, in order to make them practically usable in large-scale networks.
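The core idea, deferring voltage-based weight updates to spike events by archiving the postsynaptic membrane potential, can be sketched as follows. All class and method names are invented for illustration; this is not the NEST implementation, and the voltage-based rule is a toy stand-in for Clopath- or Urbanczik-Senn-style rules.

```python
# Hedged sketch: the postsynaptic neuron archives its membrane potential,
# and the plastic synapse consumes the archived trace lazily at the next
# presynaptic spike event instead of updating every time step.

class PostNeuron:
    def __init__(self):
        self.archive = []          # (time, membrane potential) samples

    def record(self, t, v):
        self.archive.append((t, v))

class EventDrivenSynapse:
    def __init__(self, w=0.5, lr=0.01):
        self.w, self.lr, self.last_update = w, lr, 0

    def on_pre_spike(self, t, post):
        # Consume all archived voltages since the last update in one go,
        # applying a toy voltage-based rule (weight grows with mean voltage).
        vs = [v for (s, v) in post.archive if self.last_update <= s < t]
        if vs:
            self.w += self.lr * sum(vs) / len(vs)
        self.last_update = t
        # Entries older than t are no longer needed and can be pruned.
        post.archive = [(s, v) for (s, v) in post.archive if s >= t]

post = PostNeuron()
syn = EventDrivenSynapse()
for t in range(10):
    post.record(t, 0.1 * t)       # membrane potential ramps up
syn.on_pre_spike(10, post)        # one event-based weight update
```

The pruning step is what keeps memory bounded; the paper's compression and sampling variants reduce what gets archived in the first place.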
Affiliation(s)
- Jonas Stapmanns
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Institute for Theoretical Solid State Physics, RWTH Aachen University, Aachen, Germany
- Jan Hahne
- School of Mathematics and Natural Sciences, Bergische Universität Wuppertal, Wuppertal, Germany
- Moritz Helias
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Institute for Theoretical Solid State Physics, RWTH Aachen University, Aachen, Germany
- Matthias Bolten
- School of Mathematics and Natural Sciences, Bergische Universität Wuppertal, Wuppertal, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- David Dahmen
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany
3
FNS allows efficient event-driven spiking neural network simulations based on a neuron model supporting spike latency. Sci Rep 2021; 11:12160. PMID: 34108523; PMCID: PMC8190312; DOI: 10.1038/s41598-021-91513-8.
Abstract
Neural modelling tools are increasingly employed to describe, explain, and predict the human brain's behavior. Among them, spiking neural networks (SNNs) make it possible to simulate neural activity at the level of single neurons, but their use is often hindered by the processing and memory resources required. Emerging applications requiring a low energy budget (e.g. implanted neuroprostheses) motivate the exploration of new strategies able to capture the relevant principles of neuronal dynamics in reduced and efficient models. The recent Leaky Integrate-and-Fire with Latency (LIFL) spiking neuron model combines realistic neuronal features with efficiency, a combination of characteristics that may prove appealing for SNN-based brain modelling. In this paper we introduce FNS, the first LIFL-based SNN framework, which combines spiking/synaptic modelling with the event-driven approach, allowing us to define heterogeneous neuron groups and multi-scale connectivity, with delayed connections and plastic synapses. FNS allows multi-thread, precise simulations, integrating a novel parallelization strategy and a mechanism of periodic dumping. We evaluate the performance of FNS in terms of simulation time and memory usage, and compare it with that of neuronal models with a similar neurocomputational profile implemented in NEST, showing that FNS performs better on both measures. FNS can be advantageously used to explore the interaction within and between populations of spiking neurons, even over long time scales and with a limited hardware configuration.
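The spike-latency feature that distinguishes LIFL-style models from plain LIF, and that makes them a natural fit for event-driven scheduling, can be illustrated with a toy formulation (my own, not the FNS implementation): once the membrane potential exceeds threshold, the spike is emitted not instantly but after a delay that shrinks as the potential rises further above threshold.

```python
# Illustrative sketch of the spike-latency idea: suprathreshold neurons
# schedule a future spike event whose delay depends on how far above
# threshold they sit, and an event queue processes spikes in time order.

import heapq

def spike_latency(v, v_th=1.0, k=1.0):
    """Latency decreases with suprathreshold distance; None below threshold."""
    return k / (v - v_th) if v > v_th else None

# Event queue of (latency, neuron_id) spike events, processed in time order.
events = []
for nid, v in enumerate([1.5, 1.1, 2.0]):
    lat = spike_latency(v)
    if lat is not None:
        heapq.heappush(events, (lat, nid))

order = [nid for _, nid in sorted(events)]
```

The strongly driven neuron (id 2) fires first and the barely suprathreshold one (id 1) last, which is the firing-order information a pure event-driven simulator exploits.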
4
Bazenkov NI, Boldyshev BA, Dyakonova V, Kuznetsov OP. Simulating Small Neural Circuits with a Discrete Computational Model. Biol Cybern 2020; 114:349-362. PMID: 32170500; DOI: 10.1007/s00422-020-00826-w.
Abstract
Simulations of neural activity are commonly based on differential equations. We address the question of what can be achieved with a simplified discrete model. The proposed model resembles artificial neural networks enriched with additional biologically inspired features. A neuron has several states, and the state transitions follow endogenous patterns which roughly correspond to firing behaviors observed in biological neurons: oscillatory, tonic, plateauing, etc. Neural interactions consist of two components: synaptic connections and extrasynaptic emission of neurotransmitters. The dynamics is asynchronous and event-based; the events correspond to changes in neuron activity. The model is innovative in introducing a discrete framework for modeling neurotransmitter interactions, which play an important role in neuromodulation. We simulate the rhythmic activity of small neural ensembles such as central pattern generators (CPGs). The modeled examples include the biphasic rhythm generated by the half-center mechanism with post-inhibitory rebound (as in the leech heartbeat CPG), the triphasic rhythm (as in the pond snail feeding CPG), and the pattern switch in a system of several neurons (as in the switch between ingestion and egestion in the Aplysia feeding CPG). The asynchronous dynamics makes it possible to obtain multi-phasic rhythms with phase durations close to those of their biological prototypes. The perspectives of discrete modeling in biological research are discussed in the conclusion.
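The notion of endogenous firing patterns as discrete state sequences can be sketched in a few lines. The states and pattern definitions below are invented for illustration and are far simpler than the paper's model, which also covers synaptic and neurotransmitter interactions between neurons.

```python
# Toy sketch: a discrete neuron follows an endogenous cyclic pattern of
# states, and "events" are simply the successive state changes.

# Endogenous activity patterns as cyclic state sequences (illustrative).
PATTERNS = {
    "tonic":       ["fire", "rest"],
    "oscillatory": ["fire", "fire", "rest", "rest"],
    "plateau":     ["fire", "fire", "fire", "rest"],
}

def run(pattern, steps):
    """Advance a neuron through its endogenous cycle, returning the state trace."""
    cycle = PATTERNS[pattern]
    return [cycle[i % len(cycle)] for i in range(steps)]

trace = run("oscillatory", 6)
```

In the full model such cycles are perturbed by synaptic input and extrasynaptic neurotransmitter levels, which is what produces the CPG rhythms and pattern switches described above.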
Affiliation(s)
- Nikolay I Bazenkov
- V.A. Trapeznikov Institute of Control Sciences of Russian Academy of Sciences, Moscow, Russia
- Boris A Boldyshev
- V.A. Trapeznikov Institute of Control Sciences of Russian Academy of Sciences, Moscow, Russia
- Varvara Dyakonova
- N.K. Koltzov Institute of Developmental Biology of Russian Academy of Sciences, Moscow, Russia
- Oleg P Kuznetsov
- V.A. Trapeznikov Institute of Control Sciences of Russian Academy of Sciences, Moscow, Russia
5
Naveros F, Garrido JA, Carrillo RR, Ros E, Luque NR. Event- and Time-Driven Techniques Using Parallel CPU-GPU Co-processing for Spiking Neural Networks. Front Neuroinform 2017; 11:7. PMID: 28223930; PMCID: PMC5293783; DOI: 10.3389/fninf.2017.00007.
Abstract
Modeling and simulating the neural structures which make up our central nervous system is instrumental for deciphering the computational neural cues beneath. Higher levels of biological plausibility usually impose higher levels of complexity in mathematical modeling, from the neural to the behavioral level. This paper focuses on overcoming the simulation problems (accuracy and performance) derived from using higher levels of mathematical complexity at the neural level. This study proposes different techniques for simulating neural models of increasing mathematical complexity: the leaky integrate-and-fire (LIF), adaptive exponential integrate-and-fire (AdEx), and Hodgkin-Huxley (HH) neural models (ranging from low to high neural complexity). The studied techniques are classified into two main families depending on how the neural-model dynamics are evaluated: the event-driven and the time-driven families. Whilst event-driven techniques pre-compile and store the neural dynamics within look-up tables, time-driven techniques compute the neural dynamics iteratively during the simulation. We propose two modifications for the event-driven family: a look-up-table recombination to better cope with the incremental neural complexity, together with better handling of synchronous input activity. Regarding the time-driven family, we propose a modification in computing the neural dynamics: the bi-fixed-step integration method. This method automatically adjusts the simulation step size to better cope with the stiffness of the neural model dynamics running on CPU platforms. One version of this method is also implemented for hybrid CPU-GPU platforms. Finally, we analyze how the performance and accuracy of these modifications evolve with increasing levels of neural complexity. We also demonstrate how the proposed modifications, which constitute the main contribution of this study, systematically outperform the traditional event- and time-driven techniques under increasing levels of neural complexity.
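The bi-fixed-step idea can be sketched as follows: integrate with a coarse fixed step while the dynamics are smooth, and switch to a finer fixed step near the spiking threshold, where the model is stiff. All constants and the switching criterion below are illustrative assumptions, not the paper's implementation.

```python
# Sketch of bi-fixed-step integration on forward-Euler LIF dynamics:
# a coarse step far from threshold, a fine step close to it.

def integrate(v0, i_in, t_end, v_th=1.0, h_big=0.1, h_small=0.01, tau=1.0):
    t, v, steps = 0.0, v0, 0
    while t < t_end and v < v_th:
        # Fine step close to threshold, coarse step elsewhere.
        h = h_small if v > 0.8 * v_th else h_big
        v += h * (i_in - v / tau)      # forward-Euler LIF dynamics
        t += h
        steps += 1
    return v, t, steps

v, t, steps = integrate(0.0, 2.0, t_end=5.0)
```

A uniformly fine step would pay the small-step cost over the whole trajectory; here only the stiff approach to threshold is resolved finely, which is the trade-off the method exploits.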
Affiliation(s)
- Francisco Naveros
- Department of Computer Architecture and Technology, Research Centre for Information and Communication Technologies, University of Granada, Granada, Spain
- Jesus A Garrido
- Department of Computer Architecture and Technology, Research Centre for Information and Communication Technologies, University of Granada, Granada, Spain
- Richard R Carrillo
- Department of Computer Architecture and Technology, Research Centre for Information and Communication Technologies, University of Granada, Granada, Spain
- Eduardo Ros
- Department of Computer Architecture and Technology, Research Centre for Information and Communication Technologies, University of Granada, Granada, Spain
- Niceto R Luque
- Vision Institute, Aging in Vision and Action Lab, Paris, France; CNRS, INSERM, Pierre and Marie Curie University, Paris, France
6
Learning Probabilistic Inference through Spike-Timing-Dependent Plasticity. eNeuro 2016; 3:ENEURO.0048-15.2016. PMID: 27419214; PMCID: PMC4916275; DOI: 10.1523/ENEURO.0048-15.2016.
Abstract
Numerous experimental data show that the brain is able to extract information from complex, uncertain, and often ambiguous experiences. Furthermore, it can use such learnt information for decision making through probabilistic inference. Several models have been proposed that aim at explaining how probabilistic inference could be performed by networks of neurons in the brain. We propose here a model that can also explain how such a neural network could acquire the necessary information from examples. We show that spike-timing-dependent plasticity, in combination with intrinsic plasticity, generates in ensembles of pyramidal cells with lateral inhibition a fundamental building block for this: probabilistic associations between neurons that represent, through their firing, current values of random variables. Furthermore, by combining such adaptive network motifs in a recursive manner, the resulting network is enabled to extract statistical information from complex input streams and to build an internal model of the distribution p* that generates the examples it receives. This holds even if p* contains higher-order moments. The analysis of this learning process is supported by a rigorous theoretical foundation. Furthermore, we show that the network can use the learnt internal model immediately for prediction, decision making, and other types of probabilistic inference.
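For reference, the pair-based STDP rule underlying such learning schemes is sketched below: potentiation when the presynaptic spike precedes the postsynaptic one, depression otherwise, with exponentially decaying magnitude. The amplitudes and time constant are generic textbook values, not this paper's parameters, and the full model additionally relies on intrinsic plasticity and lateral inhibition.

```python
# Minimal pair-based STDP sketch: weight change as a function of the
# spike-time difference dt = t_post - t_pre (in ms).

import math

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change for one pre/post spike pair."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)    # pre before post: potentiation
    return -a_minus * math.exp(dt / tau)       # post before pre: depression

dw_pot = stdp_dw(10.0)    # causal pairing strengthens the synapse
dw_dep = stdp_dw(-10.0)   # anti-causal pairing weakens it
```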
7
Jonke Z, Habenschuss S, Maass W. Solving Constraint Satisfaction Problems with Networks of Spiking Neurons. Front Neurosci 2016; 10:118. PMID: 27065785; PMCID: PMC4811945; DOI: 10.3389/fnins.2016.00118.
Abstract
Networks of neurons in the brain apply, unlike processors in our current generation of computer hardware, an event-based processing strategy, where short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it turns out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems at the level of single spikes, rather than rates of spikes. We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a transparent manner by composing the network from simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes plays an essential role in their computations. Furthermore, for the Traveling Salesman Problem, networks of spiking neurons carry out a more efficient stochastic search for good solutions than stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling.
Affiliation(s)
- Zeno Jonke
- Faculty of Computer Science and Biomedical Engineering, Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Stefan Habenschuss
- Faculty of Computer Science and Biomedical Engineering, Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Wolfgang Maass
- Faculty of Computer Science and Biomedical Engineering, Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria