1
Miedema R, Strydis C. ExaFlexHH: an exascale-ready, flexible multi-FPGA library for biologically plausible brain simulations. Front Neuroinform 2024; 18:1330875. PMID: 38680548; PMCID: PMC11045893; DOI: 10.3389/fninf.2024.1330875. Received: 10/31/2023; Accepted: 02/05/2024. Open access.
Abstract
Introduction: In-silico simulations are a powerful tool in modern neuroscience for enhancing our understanding of complex brain systems at various physiological levels. To model biologically realistic and detailed systems, an ideal simulation platform must possess: (1) high performance and performance scalability, (2) flexibility, and (3) ease of use for non-technical users. However, most existing platforms and libraries do not meet all three criteria, particularly for complex models such as the Hodgkin-Huxley (HH) model or for complex neuron-connectivity modeling such as gap junctions.
Methods: This work introduces ExaFlexHH, an exascale-ready, flexible library for simulating HH models on multi-FPGA platforms. Utilizing FPGA-based Data-Flow Engines (DFEs) and the dataflow programming paradigm, ExaFlexHH addresses all three requirements. The library is also parameterizable and compliant with NeuroML, a prominent brain-description language in computational neuroscience. We demonstrate the performance scalability of the platform by implementing a highly demanding extended Hodgkin-Huxley (eHH) model of the inferior olive using ExaFlexHH.
Results: Model-simulation results show linear scalability for unconnected networks and near-linear scalability for networks with complex synaptic plasticity: a 1.99× performance increase using two FPGAs compared to a single-FPGA simulation, and 7.96× when using eight FPGAs in a scalable ring topology. Notably, our results also reveal consistent performance efficiency in GFLOPS per watt, further facilitating exascale-ready computing speeds and pushing the boundaries of future brain-simulation platforms.
Discussion: The ExaFlexHH library shows superior resource efficiency, quantified as FLOPS per hardware resource, when benchmarked against other competitive FPGA-based brain-simulation implementations.
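The eHH models targeted by this library extend the classic Hodgkin-Huxley formalism. As a point of reference, a minimal single-compartment HH integrator with the standard squid-axon parameters and plain forward Euler is sketched below; this is a generic textbook sketch, not the ExaFlexHH or NeuroML implementation.

```python
import numpy as np

# Classic Hodgkin-Huxley single-compartment model (squid-axon parameters).
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3   # uF/cm^2 and mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.387          # reversal potentials, mV

# Voltage-dependent gating rate functions (V in mV, rates in 1/ms).
def alpha_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def beta_n(V):  return 0.125 * np.exp(-(V + 65) / 80)
def alpha_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def beta_m(V):  return 4.0 * np.exp(-(V + 65) / 18)
def alpha_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def beta_h(V):  return 1.0 / (1 + np.exp(-(V + 35) / 10))

def simulate(T=50.0, dt=0.01, I_ext=10.0):
    """Forward-Euler integration for T ms with constant input current."""
    V, n, m, h = -65.0, 0.317, 0.053, 0.596    # resting-state values
    trace = []
    for _ in range(int(T / dt)):
        I_Na = g_Na * m**3 * h * (V - E_Na)    # transient sodium current
        I_K = g_K * n**4 * (V - E_K)           # delayed-rectifier potassium
        I_L = g_L * (V - E_L)                  # leak current
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        trace.append(V)
    return np.array(trace)

v = simulate()
print(v.max() > 0)  # sustained suprathreshold input drives spikes above 0 mV
```

Four such state variables per compartment, all updated every time step, illustrate why HH-type models are computationally demanding and why a deeply pipelined dataflow architecture suits them.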
Affiliation(s)
- Rene Miedema
- Department of Neuroscience, Erasmus Medical Center, Rotterdam, Netherlands
- Christos Strydis
- Department of Neuroscience, Erasmus Medical Center, Rotterdam, Netherlands
- Quantum and Computer Engineering Department, Delft University of Technology, Delft, Netherlands
2
Vieth M, Rahimi A, Gorgan Mohammadi A, Triesch J, Ganjtabesh M. Accelerating spiking neural network simulations with PymoNNto and PymoNNtorch. Front Neuroinform 2024; 18:1331220. PMID: 38444756; PMCID: PMC10913591; DOI: 10.3389/fninf.2024.1331220. Received: 10/31/2023; Accepted: 01/29/2024. Open access.
Abstract
Spiking neural network simulations are a central tool in Computational Neuroscience, Artificial Intelligence, and Neuromorphic Engineering research. A broad range of simulators and software frameworks for such simulations exist with different target application areas. Among these, PymoNNto is a recent Python-based toolbox for spiking neural network simulations that emphasizes the embedding of custom code in a modular and flexible way. While PymoNNto already supports GPU implementations, its backend relies on NumPy operations. Here we introduce PymoNNtorch, which is natively implemented with PyTorch while retaining PymoNNto's modular design. Furthermore, we demonstrate how changes to the implementations of common network operations in combination with PymoNNtorch's native GPU support can offer speed-up over conventional simulators like NEST, ANNarchy, and Brian 2 in certain situations. Overall, we show how PymoNNto's modular and flexible design in combination with PymoNNtorch's GPU acceleration and optimized indexing operations facilitate research and development of spiking neural networks in the Python programming language.
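The "optimized indexing operations" mentioned above can be illustrated generically (this is not the actual PymoNNto or PymoNNtorch API): with sparse spiking activity, summing only the weight columns of neurons that fired is mathematically equivalent to the full dense matrix-vector product but touches far less memory.

```python
import numpy as np

# Generic illustration of index-based spike propagation (hypothetical
# example, not PymoNNto/PymoNNtorch code): one step of synaptic input
# computation for a dense weight matrix W and a sparse spike vector.
n = 2000
rng = np.random.default_rng(0)
W = rng.standard_normal((n, n)).astype(np.float32)
spikes = rng.random(n) < 0.02          # ~2% of neurons fire this step

dense = W @ spikes.astype(np.float32)  # full dense propagation
indexed = W[:, spikes].sum(axis=1)     # sum only the active columns

print(np.allclose(dense, indexed, atol=1e-3))  # True: same result
```

The same indexed formulation maps directly onto PyTorch tensors, where it additionally benefits from GPU execution.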
Affiliation(s)
- Marius Vieth
- Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
- Ali Rahimi
- Department of Mathematics, Statistics, and Computer Science - College of Science, University of Tehran, Tehran, Iran
- Ashena Gorgan Mohammadi
- Department of Mathematics, Statistics, and Computer Science - College of Science, University of Tehran, Tehran, Iran
- Jochen Triesch
- Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
- Mohammad Ganjtabesh
- Department of Mathematics, Statistics, and Computer Science - College of Science, University of Tehran, Tehran, Iran
3
Arthur BJ, Kim CM, Chen S, Preibisch S, Darshan R. A scalable implementation of the recursive least-squares algorithm for training spiking neural networks. Front Neuroinform 2023; 17:1099510. PMID: 37441157; PMCID: PMC10333503; DOI: 10.3389/fninf.2023.1099510. Received: 11/15/2022; Accepted: 06/05/2023. Open access.
Abstract
Training spiking recurrent neural networks on neuronal recordings or behavioral tasks has become a popular way to study computations performed by the nervous system. As the size and complexity of neural recordings increase, there is a need for efficient algorithms that can train models in a short period of time using minimal resources. We present optimized CPU and GPU implementations of the recursive least-squares algorithm in spiking neural networks. The GPU implementation can train networks of one million neurons, with 100 million plastic synapses and a billion static synapses, about 1,000 times faster than an unoptimized reference CPU implementation. We demonstrate the code's utility by training a network, in less than an hour, to reproduce the activity of >66,000 recorded neurons of a mouse performing a decision-making task. The fast implementation enables a more interactive in-silico study of the dynamics and connectivity underlying multi-area computations. It also opens up the possibility of training models while in-vivo experiments are being conducted, thus closing the loop between modeling and experiments.
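The core recursive least-squares (RLS) update at the heart of such training can be written in a few lines. The sketch below is the standard FORCE-style recursion on synthetic data, not the paper's optimized CPU/GPU implementation; the target mapping and the random stand-in for filtered spike activity are hypothetical.

```python
import numpy as np

# Minimal recursive least-squares (RLS) readout training on synthetic data.
rng = np.random.default_rng(1)
n, steps = 50, 500
w = np.zeros(n)                  # readout weights being learned
P = np.eye(n)                    # running estimate of inverse correlation matrix
w_true = rng.standard_normal(n)  # hypothetical target linear mapping

for _ in range(steps):
    r = rng.standard_normal(n)       # stand-in for filtered spiking activity
    target = w_true @ r              # desired readout at this step
    err = w @ r - target             # instantaneous readout error
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)          # Kalman-like gain vector
    P -= np.outer(k, Pr)             # rank-1 update of the inverse correlation
    w -= err * k                     # error-proportional weight correction

print(np.abs(w - w_true).max() < 0.05)  # True: weights converge to the target
```

The O(n²) matrix-vector products in the `P` update are exactly what the paper's GPU implementation parallelizes across millions of plastic synapses.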
Affiliation(s)
- Benjamin J. Arthur
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, United States
- Christopher M. Kim
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, United States
- Laboratory of Biological Modeling, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, MD, United States
- Susu Chen
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, United States
- Stephan Preibisch
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, United States
- Ran Darshan
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, United States
4
Schmitt FJ, Rostami V, Nawrot MP. Efficient parameter calibration and real-time simulation of large-scale spiking neural networks with GeNN and NEST. Front Neuroinform 2023; 17:941696. PMID: 36844916; PMCID: PMC9950635; DOI: 10.3389/fninf.2023.941696. Received: 05/23/2022; Accepted: 01/16/2023. Open access.
Abstract
Spiking neural networks (SNNs) represent the state-of-the-art approach to the biologically realistic modeling of nervous system function. Systematic calibration of multiple free model parameters is necessary to achieve robust network function, and it demands high computing power and large memory resources. Special requirements arise from closed-loop model simulation in virtual environments and from real-time simulation in robotic applications. Here, we compare two complementary approaches to efficient large-scale and real-time SNN simulation. The widely used NEural Simulation Tool (NEST) parallelizes simulation across multiple CPU cores. The GPU-enhanced Neural Network (GeNN) simulator uses the highly parallel GPU-based architecture to gain simulation speed. We quantify fixed and variable simulation costs on single machines with different hardware configurations. As a benchmark model, we use a spiking cortical attractor network with a topology of densely connected excitatory and inhibitory neuron clusters with homogeneous or distributed synaptic time constants, in comparison to the random balanced network. We show that simulation time scales linearly with the simulated biological model time and, for large networks, approximately linearly with the model size as dominated by the number of synaptic connections. Additional fixed costs with GeNN are almost independent of model size, while fixed costs with NEST increase linearly with model size. We demonstrate how GeNN can be used for simulating networks with up to 3.5 × 10⁶ neurons (>3 × 10¹² synapses) on a high-end GPU, and up to 250,000 neurons (25 × 10⁹ synapses) on a low-cost GPU. Real-time simulation was achieved for networks with 100,000 neurons. Network calibration and parameter grid search can be achieved efficiently using batch processing. We discuss the advantages and disadvantages of both approaches for different use cases.
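The fixed-versus-variable cost decomposition described above can be captured in a toy cost model: total wall-clock time is a setup cost (size-independent for a GeNN-like backend, size-dependent for a NEST-like one) plus a simulation cost linear in both synapse count and biological time. All constants below are illustrative placeholders, not measurements from the paper.

```python
# Toy wall-clock cost model for SNN simulation (illustrative constants only).
def wall_clock(n_syn, t_bio, fixed_base, fixed_per_syn, var_per_syn_sec):
    fixed = fixed_base + fixed_per_syn * n_syn      # build/setup cost (s)
    variable = var_per_syn_sec * n_syn * t_bio      # per-bio-second cost (s)
    return fixed + variable

# GeNN-like backend: fixed cost nearly independent of model size.
genn = wall_clock(n_syn=1e9, t_bio=10.0,
                  fixed_base=30.0, fixed_per_syn=0.0, var_per_syn_sec=2e-9)
# NEST-like backend: fixed cost grows linearly with model size.
nest = wall_clock(n_syn=1e9, t_bio=10.0,
                  fixed_base=5.0, fixed_per_syn=1e-7, var_per_syn_sec=5e-9)
print(round(genn), round(nest))  # 50 155
```

The model makes the paper's observation concrete: for short simulations of large networks, the size-dependent fixed cost dominates, while for long biological times the variable term takes over.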
Affiliation(s)
- Felix Johannes Schmitt
- Computational Systems Neuroscience, Institute of Zoology, University of Cologne, Cologne, Germany
- Vahid Rostami
- Computational Systems Neuroscience, Institute of Zoology, University of Cologne, Cologne, Germany
5
Tiddia G, Golosio B, Albers J, Senk J, Simula F, Pronold J, Fanti V, Pastorelli E, Paolucci PS, van Albada SJ. Fast Simulation of a Multi-Area Spiking Network Model of Macaque Cortex on an MPI-GPU Cluster. Front Neuroinform 2022; 16:883333. PMID: 35859800; PMCID: PMC9289599; DOI: 10.3389/fninf.2022.883333. Received: 02/24/2022; Accepted: 06/02/2022. Open access.
Abstract
Spiking neural network models are increasingly establishing themselves as an effective tool for simulating the dynamics of neuronal populations and for understanding the relationship between these dynamics and brain function. Furthermore, the continuous development of parallel computing technologies and the growing availability of computational resources are leading to an era of large-scale simulations capable of describing regions of the brain of ever larger dimensions at increasing detail. Recently, the possibility to use MPI-based parallel codes on GPU-equipped clusters to run such complex simulations has emerged, opening up novel paths to further speed-ups. NEST GPU is a GPU library written in CUDA-C/C++ for large-scale simulations of spiking neural networks, which was recently extended with a novel algorithm for remote spike communication through MPI on a GPU cluster. In this work we evaluate its performance on the simulation of a multi-area model of macaque vision-related cortex, made up of about 4 million neurons and 24 billion synapses and representing 32 mm² surface area of the macaque cortex. The outcome of the simulations is compared against that obtained using the well-known CPU-based spiking neural network simulator NEST on a high-performance computing cluster. The results show not only an optimal match with the NEST statistical measures of the neural activity in terms of three informative distributions, but also remarkable achievements in terms of simulation time per second of biological activity. Indeed, NEST GPU was able to simulate a second of biological time of the full-scale macaque cortex model in its metastable state 3.1× faster than NEST using 32 compute nodes equipped with an NVIDIA V100 GPU each. Using the same configuration, the ground state of the full-scale macaque cortex model was simulated 2.4× faster than NEST.
Affiliation(s)
- Gianmarco Tiddia
- Department of Physics, University of Cagliari, Monserrato, Italy
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Cagliari, Monserrato, Italy
- Bruno Golosio
- Department of Physics, University of Cagliari, Monserrato, Italy
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Cagliari, Monserrato, Italy
- *Correspondence: Bruno Golosio
- Jasper Albers
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- RWTH Aachen University, Aachen, Germany
- Johanna Senk
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Francesco Simula
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome, Italy
- Jari Pronold
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- RWTH Aachen University, Aachen, Germany
- Viviana Fanti
- Department of Physics, University of Cagliari, Monserrato, Italy
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Cagliari, Monserrato, Italy
- Elena Pastorelli
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome, Italy
- Sacha J. van Albada
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Faculty of Mathematics and Natural Sciences, Institute of Zoology, University of Cologne, Cologne, Germany
6
Awile O, Kumbhar P, Cornu N, Dura-Bernal S, King JG, Lupton O, Magkanaris I, McDougal RA, Newton AJH, Pereira F, Săvulescu A, Carnevale NT, Lytton WW, Hines ML, Schürmann F. Modernizing the NEURON Simulator for Sustainability, Portability, and Performance. Front Neuroinform 2022; 16:884046. PMID: 35832575; PMCID: PMC9272742; DOI: 10.3389/fninf.2022.884046. Received: 02/25/2022; Accepted: 05/26/2022. Open access.
Abstract
The need for reproducible, credible, multiscale biological modeling has led to the development of standardized simulation platforms, such as the widely-used NEURON environment for computational neuroscience. Developing and maintaining NEURON over several decades has required attention to the competing needs of backwards compatibility, evolving computer architectures, the addition of new scales and physical processes, accessibility to new users, and efficiency and flexibility for specialists. In order to meet these challenges, we have now substantially modernized NEURON, providing continuous integration, an improved build system and release workflow, and better documentation. With the help of a new source-to-source compiler of the NMODL domain-specific language we have enhanced NEURON's ability to run efficiently, via the CoreNEURON simulation engine, on a variety of hardware platforms, including GPUs. Through the implementation of an optimized in-memory transfer mechanism this performance optimized backend is made easily accessible to users, providing training and model-development paths from laptop to workstation to supercomputer and cloud platform. Similarly, we have been able to accelerate NEURON's reaction-diffusion simulation performance through the use of just-in-time compilation. We show that these efforts have led to a growing developer base, a simpler and more robust software distribution, a wider range of supported computer architectures, a better integration of NEURON with other scientific workflows, and substantially improved performance for the simulation of biophysical and biochemical models.
Affiliation(s)
- Omar Awile
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
- Pramod Kumbhar
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
- Nicolas Cornu
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
- Salvador Dura-Bernal
- Department Physiology and Pharmacology, SUNY Downstate, Brooklyn, NY, United States
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, United States
- James Gonzalo King
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
- Olli Lupton
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
- Ioannis Magkanaris
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
- Robert A. McDougal
- Department of Biostatistics, Yale School of Public Health, New Haven, CT, United States
- Program in Computational Biology and Bioinformatics, Yale University, New Haven, CT, United States
- Yale Center for Medical Informatics, Yale University, New Haven, CT, United States
- Adam J. H. Newton
- Department Physiology and Pharmacology, SUNY Downstate, Brooklyn, NY, United States
- Department of Biostatistics, Yale School of Public Health, New Haven, CT, United States
- Fernando Pereira
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
- Alexandru Săvulescu
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
- William W. Lytton
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, United States
- Michael L. Hines
- Department of Neuroscience, Yale University, New Haven, CT, United States
- Felix Schürmann
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
7
Antonietti A, Geminiani A, Negri E, D'Angelo E, Casellato C, Pedrocchi A. Brain-Inspired Spiking Neural Network Controller for a Neurorobotic Whisker System. Front Neurorobot 2022; 16:817948. PMID: 35770277; PMCID: PMC9234954; DOI: 10.3389/fnbot.2022.817948. Received: 11/18/2021; Accepted: 05/17/2022. Open access.
Abstract
It is common for animals to use self-generated movements to actively sense the surrounding environment. For instance, rodents rhythmically move their whiskers to explore the space close to their body. The mouse whisker system has become a standard model for studying active sensing and sensorimotor integration through feedback loops. In this work, we developed a bioinspired spiking neural network model of the sensorimotor peripheral whisker system, modeling trigeminal ganglion, trigeminal nuclei, facial nuclei, and central pattern generator neuronal populations. This network was embedded in a virtual mouse robot, exploiting the Human Brain Project's Neurorobotics Platform, a simulation platform offering a virtual environment to develop and test robots driven by brain-inspired controllers. Eventually, the peripheral whisker system was adequately connected to an adaptive cerebellar network controller. The whole system was able to drive active whisking with learning capability, matching neural correlates of behavior experimentally recorded in mice.
Affiliation(s)
- Alberto Antonietti
- Neurocomputational Laboratory, Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
- Nearlab, Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- *Correspondence: Alberto Antonietti
- Alice Geminiani
- Neurocomputational Laboratory, Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
- Edoardo Negri
- Neurocomputational Laboratory, Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
- Nearlab, Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Egidio D'Angelo
- Neurocomputational Laboratory, Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
- Brain Connectivity Center, IRCCS Mondino Foundation, Pavia, Italy
- Claudia Casellato
- Neurocomputational Laboratory, Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
- Alessandra Pedrocchi
- Nearlab, Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
8
Yegenoglu A, Subramoney A, Hater T, Jimenez-Romero C, Klijn W, Pérez Martín A, van der Vlag M, Herty M, Morrison A, Diaz-Pier S. Exploring Parameter and Hyper-Parameter Spaces of Neuroscience Models on High Performance Computers With Learning to Learn. Front Comput Neurosci 2022; 16:885207. PMID: 35720775; PMCID: PMC9199579; DOI: 10.3389/fncom.2022.885207. Received: 02/27/2022; Accepted: 04/13/2022. Open access.
Abstract
Neuroscience models commonly have a high number of degrees of freedom, and only specific regions within the parameter space produce the dynamics of interest. Developing tools and strategies to find these regions efficiently is therefore highly important for advancing brain research. Exploring the high-dimensional parameter space using numerical simulations has been a frequently used technique in recent years in many areas of computational neuroscience. Today, high-performance computing (HPC) can provide a powerful infrastructure to speed up explorations and increase our general understanding of the behavior of the model in a reasonable time. Learning to learn (L2L) is a well-known concept in machine learning (ML) and a specific method for acquiring constraints to improve learning performance. This concept can be decomposed into a two-loop optimization process, where the target of optimization can be any program: an artificial neural network, a spiking network, a single-cell model, or a whole-brain simulation. In this work, we present L2L as an easy-to-use and flexible framework to perform parameter and hyper-parameter space exploration of neuroscience models on HPC infrastructure. The presented framework is an open-source Python implementation of the L2L concept. It allows several instances of an optimization target to be executed with different parameters in an embarrassingly parallel fashion on HPC. L2L provides a set of built-in optimizer algorithms, which make adaptive and efficient exploration of parameter spaces possible. Unlike other optimization toolboxes, L2L provides maximum flexibility in how the optimization target is executed. In this paper, we show a variety of examples of neuroscience models being optimized within the L2L framework to execute different types of tasks. The tasks used to illustrate the concept range from reproducing empirical data to learning how to solve a problem in a dynamic environment. We particularly focus on simulations with models ranging from the single cell to the whole brain, using a variety of simulation engines such as NEST, Arbor, TVB, OpenAI Gym, and NetLogo.
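The two-loop structure described above can be sketched generically: an outer optimizer proposes parameter sets, and an inner loop evaluates each candidate by running the optimization target. The sketch below uses a toy stand-in fitness function and a simple elitist random-search optimizer; it is a hypothetical illustration of the concept, not the L2L framework's API.

```python
import random

def fitness(params):
    """Stand-in for running an optimization target (e.g. a simulator) and
    scoring its output; the true optimum here is at (2.0, -1.0)."""
    x, y = params
    return -((x - 2.0) ** 2 + (y + 1.0) ** 2)

def outer_loop(generations=50, pop_size=20, sigma=0.3, seed=0):
    """Outer loop: propose candidates around the current best; the inner
    evaluations are independent and thus embarrassingly parallel."""
    rng = random.Random(seed)
    best = (rng.uniform(-5, 5), rng.uniform(-5, 5))
    for _ in range(generations):
        candidates = [(best[0] + rng.gauss(0, sigma),
                       best[1] + rng.gauss(0, sigma))
                      for _ in range(pop_size)]
        candidates.append(best)  # elitism: never lose the current best
        best = max(candidates, key=fitness)
    return best

x, y = outer_loop()
print(f"best ~ ({x:.2f}, {y:.2f})")  # converges near the optimum (2.0, -1.0)
```

In the framework described above, `fitness` would wrap a full simulator run (NEST, Arbor, TVB, etc.), and the candidate evaluations would be dispatched to HPC compute nodes instead of executed serially.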
Affiliation(s)
- Alper Yegenoglu
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Department of Mathematics, Institute of Geometry and Applied Mathematics, RWTH Aachen University, Aachen, Germany
- Anand Subramoney
- Institute of Neural Computation, Ruhr University Bochum, Bochum, Germany
- Thorsten Hater
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Cristian Jimenez-Romero
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Wouter Klijn
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Aarón Pérez Martín
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Michiel van der Vlag
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Michael Herty
- Department of Mathematics, Institute of Geometry and Applied Mathematics, RWTH Aachen University, Aachen, Germany
- Abigail Morrison
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Computer Science 3-Software Engineering, RWTH Aachen University, Aachen, Germany
- Sandra Diaz-Pier
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
9
Feldotto B, Eppler JM, Jimenez-Romero C, Bignamini C, Gutierrez CE, Albanese U, Retamino E, Vorobev V, Zolfaghari V, Upton A, Sun Z, Yamaura H, Heidarinejad M, Klijn W, Morrison A, Cruz F, McMurtrie C, Knoll AC, Igarashi J, Yamazaki T, Doya K, Morin FO. Deploying and Optimizing Embodied Simulations of Large-Scale Spiking Neural Networks on HPC Infrastructure. Front Neuroinform 2022; 16:884180. PMID: 35662903; PMCID: PMC9160925; DOI: 10.3389/fninf.2022.884180. Received: 01/25/2022; Accepted: 04/19/2022. Open access.
Abstract
Simulating the brain-body-environment trinity in closed loop is an attractive proposal for investigating how perception, motor activity, and interactions with the environment shape brain activity, and vice versa. The relevance of this embodied approach, however, hinges entirely on the modeled complexity of the various simulated phenomena. In this article, we introduce a software framework that is capable of simulating large-scale, biologically realistic networks of spiking neurons embodied in a biomechanically accurate musculoskeletal system that interacts with a physically realistic virtual environment. We deploy this framework on the high-performance computing resources of the EBRAINS research infrastructure and investigate its scaling performance by distributing computation across an increasing number of interconnected compute nodes. Our architecture is based on requested compute nodes as well as persistent virtual machines; this provides a high-performance simulation environment that is accessible to multi-domain users without expert knowledge, with a view to enabling users to instantiate and control simulations at custom scale via a web-based graphical user interface. Our simulation environment, entirely open source, is based on the Neurorobotics Platform developed in the context of the Human Brain Project and on the NEST simulator. We characterize the capabilities of our parallelized architecture for large-scale embodied brain simulations through two benchmark experiments, investigating the effects of scaling compute resources on performance defined in terms of experiment runtime, brain instantiation, and simulation time. The first benchmark is based on a large-scale balanced network, while the second is a multi-region embodied brain simulation consisting of more than a million neurons and a billion synapses. Both benchmarks clearly show how scaling compute resources improves the aforementioned performance metrics in a near-linear fashion. The second benchmark in particular is indicative of both the potential and the limitations of a highly distributed simulation, in terms of a trade-off between computation speed and resource cost. Our simulation architecture is being prepared to be made accessible to everyone as an EBRAINS service, thereby offering a community-wide tool with a unique workflow that should provide momentum to the investigation of closed-loop embodiment within the computational neuroscience community.
Affiliation(s)
- Benedikt Feldotto
- Robotics, Artificial Intelligence and Real-Time Systems, Faculty of Informatics, Technical University of Munich, Munich, Germany
- *Correspondence: Benedikt Feldotto
- Jochen Martin Eppler
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Cristian Jimenez-Romero
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Carlos Enrique Gutierrez
- Neural Computation Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa, Japan
- Ugo Albanese
- Department of Excellence in Robotics and AI, The BioRobotics Institute, Scuola Superiore Sant'Anna, Pontedera, Italy
- Eloy Retamino
- Department of Computer Architecture and Technology, Research Centre for Information and Communication Technologies, University of Granada, Granada, Spain
- Viktor Vorobev
- Robotics, Artificial Intelligence and Real-Time Systems, Faculty of Informatics, Technical University of Munich, Munich, Germany
- Vahid Zolfaghari
- Robotics, Artificial Intelligence and Real-Time Systems, Faculty of Informatics, Technical University of Munich, Munich, Germany
- Alex Upton
- Swiss National Supercomputing Centre (CSCS), ETH Zurich, Lugano, Switzerland
- Zhe Sun
- Image Processing Research Team, Center for Advanced Photonics, RIKEN, Wako, Japan
- Computational Engineering Applications Unit, Head Office for Information Systems and Cybersecurity, RIKEN, Wako, Japan
- Hiroshi Yamaura
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
- Morteza Heidarinejad
- Computational Engineering Applications Unit, Head Office for Information Systems and Cybersecurity, RIKEN, Wako, Japan
- Wouter Klijn
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Abigail Morrison
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Jülich Research Centre, Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA BRAIN Institute I, Jülich, Germany
- Computer Science 3-Software Engineering, RWTH Aachen University, Aachen, Germany
- Felipe Cruz
- Swiss National Supercomputing Centre (CSCS), ETH Zurich, Lugano, Switzerland
- Colin McMurtrie
- Swiss National Supercomputing Centre (CSCS), ETH Zurich, Lugano, Switzerland
- Alois C. Knoll
- Robotics, Artificial Intelligence and Real-Time Systems, Faculty of Informatics, Technical University of Munich, Munich, Germany
- Jun Igarashi
- Computational Engineering Applications Unit, Head Office for Information Systems and Cybersecurity, RIKEN, Wako, Japan
- Center for Computational Science, RIKEN, Kobe, Japan
- Tadashi Yamazaki
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
- Kenji Doya
- Neural Computation Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa, Japan
- Fabrice O. Morin
- Robotics, Artificial Intelligence and Real-Time Systems, Faculty of Informatics, Technical University of Munich, Munich, Germany
10
Müller E, Arnold E, Breitwieser O, Czierlinski M, Emmel A, Kaiser J, Mauch C, Schmitt S, Spilger P, Stock R, Stradmann Y, Weis J, Baumbach A, Billaudelle S, Cramer B, Ebert F, Göltz J, Ilmberger J, Karasenko V, Kleider M, Leibfried A, Pehle C, Schemmel J. A Scalable Approach to Modeling on Accelerated Neuromorphic Hardware. Front Neurosci 2022; 16:884128. [PMID: 35663548 PMCID: PMC9157770 DOI: 10.3389/fnins.2022.884128] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2022] [Accepted: 04/20/2022] [Indexed: 11/29/2022] Open
Abstract
Neuromorphic systems open up opportunities to enlarge the explorative space for computational research. However, it is often challenging to unite efficiency and usability. This work presents the software aspects of this endeavor for the BrainScaleS-2 system, a hybrid accelerated neuromorphic hardware architecture based on physical modeling. We introduce key aspects of the BrainScaleS-2 Operating System: experiment workflow, API layering, software design, and platform operation. We present use cases to discuss and derive requirements for the software and showcase the implementation. The focus lies on novel system and software features such as multi-compartmental neurons, fast re-configuration for hardware-in-the-loop training, applications for the embedded processors, the non-spiking operation mode, interactive platform access, and sustainable hardware/software co-development. Finally, we discuss further developments in terms of hardware scale-up, system usability, and efficiency.
Affiliation(s)
- Eric Müller
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Elias Arnold
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Oliver Breitwieser
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Milena Czierlinski
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Arne Emmel
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Jakob Kaiser
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Christian Mauch
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Sebastian Schmitt
- Third Institute of Physics, University of Göttingen, Göttingen, Germany
- Philipp Spilger
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Raphael Stock
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Yannik Stradmann
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Johannes Weis
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Andreas Baumbach
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Department of Physiology, University of Bern, Bern, Switzerland
- Benjamin Cramer
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Falk Ebert
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Julian Göltz
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Department of Physiology, University of Bern, Bern, Switzerland
- Joscha Ilmberger
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Vitali Karasenko
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Mitja Kleider
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Aron Leibfried
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Christian Pehle
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Johannes Schemmel
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
11
Albers J, Pronold J, Kurth AC, Vennemo SB, Haghighi Mood K, Patronis A, Terhorst D, Jordan J, Kunkel S, Tetzlaff T, Diesmann M, Senk J. A Modular Workflow for Performance Benchmarking of Neuronal Network Simulations. Front Neuroinform 2022; 16:837549. [PMID: 35645755 PMCID: PMC9131021 DOI: 10.3389/fninf.2022.837549] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2021] [Accepted: 03/11/2022] [Indexed: 11/13/2022] Open
Abstract
Modern computational neuroscience strives to develop complex network models to explain dynamics and function of brains in health and disease. This process goes hand in hand with advancements in the theory of neuronal networks and increasing availability of detailed anatomical data on brain connectivity. Large-scale models that study interactions between multiple brain areas with intricate connectivity and investigate phenomena on long time scales such as system-level learning require progress in simulation speed. The corresponding development of state-of-the-art simulation engines relies on information provided by benchmark simulations which assess the time-to-solution for scientifically relevant, complementary network models using various combinations of hardware and software revisions. However, maintaining comparability of benchmark results is difficult due to a lack of standardized specifications for measuring the scaling performance of simulators on high-performance computing (HPC) systems. Motivated by the challenging complexity of benchmarking, we define a generic workflow that decomposes the endeavor into unique segments consisting of separate modules. As a reference implementation for the conceptual workflow, we develop beNNch: an open-source software framework for the configuration, execution, and analysis of benchmarks for neuronal network simulations. The framework records benchmarking data and metadata in a unified way to foster reproducibility. For illustration, we measure the performance of various versions of the NEST simulator across network models with different levels of complexity on a contemporary HPC system, demonstrating how performance bottlenecks can be identified, ultimately guiding the development toward more efficient simulation technology.
Affiliation(s)
- Jasper Albers
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- RWTH Aachen University, Aachen, Germany
- Correspondence: Jasper Albers
- Jari Pronold
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- RWTH Aachen University, Aachen, Germany
- Anno Christopher Kurth
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- RWTH Aachen University, Aachen, Germany
- Stine Brekke Vennemo
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Alexander Patronis
- Jülich Supercomputing Centre (JSC), Jülich Research Centre, Jülich, Germany
- Dennis Terhorst
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Jakob Jordan
- Department of Physiology, University of Bern, Bern, Switzerland
- Susanne Kunkel
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Tom Tetzlaff
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, School of Medicine, RWTH Aachen University, Aachen, Germany
- Johanna Senk
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
12
Pronold J, Jordan J, Wylie BJN, Kitayama I, Diesmann M, Kunkel S. Routing Brain Traffic Through the Von Neumann Bottleneck: Parallel Sorting and Refactoring. Front Neuroinform 2022; 15:785068. [PMID: 35300490 PMCID: PMC8921864 DOI: 10.3389/fninf.2021.785068] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2021] [Accepted: 12/24/2021] [Indexed: 11/26/2022] Open
Abstract
Generic simulation code for spiking neuronal networks spends the major part of the time in the phase where spikes have arrived at a compute node and need to be delivered to their target neurons. These spikes were emitted over the last interval between communication steps by source neurons distributed across many compute nodes and are inherently irregular and unsorted with respect to their targets. For finding those targets, the spikes need to be dispatched to a three-dimensional data structure with decisions on target thread and synapse type to be made on the way. With growing network size, a compute node receives spikes from an increasing number of different source neurons until in the limit each synapse on the compute node has a unique source. Here, we show analytically how this sparsity emerges over the practically relevant range of network sizes from a hundred thousand to a billion neurons. By profiling a production code we investigate opportunities for algorithmic changes to avoid indirections and branching. Every thread hosts an equal share of the neurons on a compute node. In the original algorithm, all threads search through all spikes to pick out the relevant ones. With increasing network size, the fraction of hits remains invariant but the absolute number of rejections grows. Our new alternative algorithm equally divides the spikes among the threads and immediately sorts them in parallel according to target thread and synapse type. After this, every thread completes delivery solely of the section of spikes for its own neurons. Independent of the number of threads, all spikes are looked at only two times. The new algorithm halves the number of instructions in spike delivery which leads to a reduction of simulation time of up to 40%. Thus, spike delivery is a fully parallelizable process with a single synchronization point and thereby well suited for many-core systems. Our analysis indicates that further progress requires a reduction of the latency that the instructions experience in accessing memory. The study provides the foundation for the exploration of methods of latency hiding like software pipelining and software-induced prefetching.
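The two-pass delivery scheme described in this abstract can be illustrated with a short sketch. This is a single-process illustration of the idea only, not the simulator's actual code; the dictionary field names `target_thread` and `synapse_type` are assumptions introduced for the example.

```python
from collections import defaultdict

def deliver_spikes(spikes, n_threads):
    """Two-pass delivery: first bin the unsorted spike buffer by
    (target_thread, synapse_type); then each thread walks only the bins
    addressed to it. Every spike is touched exactly twice, independent of
    the thread count, instead of every thread scanning (and mostly
    rejecting) the full buffer."""
    bins = defaultdict(list)
    # pass 1: sort spikes by delivery target
    for spike in spikes:
        bins[(spike["target_thread"], spike["synapse_type"])].append(spike)
    # pass 2: each thread handles only its own section of the sorted spikes
    per_thread = {t: [] for t in range(n_threads)}
    for (target_thread, _), section in bins.items():
        per_thread[target_thread].extend(section)
    return per_thread

spikes = [{"target_thread": i % 2, "synapse_type": "exc", "id": i} for i in range(6)]
delivered = deliver_spikes(spikes, n_threads=2)
```

In a real simulator the second pass runs in parallel with a single synchronization point between the two passes, which is what makes the scheme suited for many-core systems.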
Affiliation(s)
- Jari Pronold
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- RWTH Aachen University, Aachen, Germany
- Jakob Jordan
- Department of Physiology, University of Bern, Bern, Switzerland
- Brian J. N. Wylie
- Jülich Supercomputing Centre, Jülich Research Centre, Jülich, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Susanne Kunkel
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
13
Schirner M, Kong X, Yeo BTT, Deco G, Ritter P. Dynamic primitives of brain network interaction. Neuroimage 2022; 250:118928. [PMID: 35101596 DOI: 10.1016/j.neuroimage.2022.118928] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2021] [Revised: 12/03/2021] [Accepted: 01/20/2022] [Indexed: 01/04/2023] Open
Abstract
What dynamic processes underly functional brain networks? Functional connectivity (FC) and functional connectivity dynamics (FCD) are used to represent the patterns and dynamics of functional brain networks. FC(D) is related to the synchrony of brain activity: when brain areas oscillate in a coordinated manner this yields a high correlation between their signal time series. To explain the processes underlying FC(D) we review how synchronized oscillations emerge from coupled neural populations in brain network models (BNMs). From detailed spiking networks to more abstract population models, there is strong support for the idea that the brain operates near critical instabilities that give rise to multistable or metastable dynamics that in turn lead to the intermittently synchronized slow oscillations underlying FC(D). We explore further consequences from these fundamental mechanisms and how they fit with reality. We conclude by highlighting the need for integrative brain models that connect separate mechanisms across levels of description and spatiotemporal scales and link them with cognitive function.
Affiliation(s)
- Michael Schirner
- Berlin Institute of Health at Charité - Universitätsmedizin Berlin, Charitéplatz 1, 10117 Berlin, Germany; Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Department of Neurology with Experimental Neurology, Charitéplatz 1, 10117 Berlin, Germany; Bernstein Focus State Dependencies of Learning & Bernstein Center for Computational Neuroscience, Berlin, Germany; Einstein Center for Neuroscience Berlin, Charitéplatz 1, 10117 Berlin, Germany; Einstein Center Digital Future, Wilhelmstraße 67, 10117 Berlin, Germany
- Xiaolu Kong
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore; Centre for Sleep & Cognition & Centre for Translational Magnetic Resonance Research, Yong Loo Lin School of Medicine, Singapore; N.1 Institute for Health & Institute for Digital Medicine, National University of Singapore, Singapore
- B T Thomas Yeo
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore; Centre for Sleep & Cognition & Centre for Translational Magnetic Resonance Research, Yong Loo Lin School of Medicine, Singapore; N.1 Institute for Health & Institute for Digital Medicine, National University of Singapore, Singapore; Integrative Sciences and Engineering Programme (ISEP), National University of Singapore, Singapore, Singapore; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, USA
- Gustavo Deco
- Center for Brain and Cognition, Computational Neuroscience Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain; Institució Catalana de la Recerca i Estudis Avançats, Barcelona, Spain; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; School of Psychological Sciences, Turner Institute for Brain and Mental Health, Monash University, Melbourne, Clayton, Australia
- Petra Ritter
- Berlin Institute of Health at Charité - Universitätsmedizin Berlin, Charitéplatz 1, 10117 Berlin, Germany; Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Department of Neurology with Experimental Neurology, Charitéplatz 1, 10117 Berlin, Germany; Bernstein Focus State Dependencies of Learning & Bernstein Center for Computational Neuroscience, Berlin, Germany; Einstein Center for Neuroscience Berlin, Charitéplatz 1, 10117 Berlin, Germany; Einstein Center Digital Future, Wilhelmstraße 67, 10117 Berlin, Germany
14
Dasbach S, Tetzlaff T, Diesmann M, Senk J. Dynamical Characteristics of Recurrent Neuronal Networks Are Robust Against Low Synaptic Weight Resolution. Front Neurosci 2021; 15:757790. [PMID: 35002599 PMCID: PMC8740282 DOI: 10.3389/fnins.2021.757790] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2021] [Accepted: 11/03/2021] [Indexed: 11/13/2022] Open
Abstract
The representation of the natural-density, heterogeneous connectivity of neuronal network models at relevant spatial scales remains a challenge for Computational Neuroscience and Neuromorphic Computing. In particular, the memory demands imposed by the vast number of synapses in brain-scale network simulations constitute a major obstacle. Limiting the number resolution of synaptic weights appears to be a natural strategy to reduce memory and compute load. In this study, we investigate the effects of a limited synaptic-weight resolution on the dynamics of recurrent spiking neuronal networks resembling local cortical circuits and develop strategies for minimizing deviations from the dynamics of networks with high-resolution synaptic weights. We mimic the effect of a limited synaptic weight resolution by replacing normally distributed synaptic weights with weights drawn from a discrete distribution, and compare the resulting statistics characterizing firing rates, spike-train irregularity, and correlation coefficients with the reference solution. We show that a naive discretization of synaptic weights generally leads to a distortion of the spike-train statistics. If the weights are discretized such that the mean and the variance of the total synaptic input currents are preserved, the firing statistics remain unaffected for the types of networks considered in this study. For networks with sufficiently heterogeneous in-degrees, the firing statistics can be preserved even if all synaptic weights are replaced by the mean of the weight distribution. We conclude that even for simple networks with non-plastic neurons and synapses, a discretization of synaptic weights can lead to substantial deviations in the firing statistics unless the discretization is performed with care and guided by a rigorous validation process. For the network model used in this study, the synaptic weights can be replaced by low-resolution weights without affecting its macroscopic dynamical characteristics, thereby saving substantial amounts of memory.
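A discretization that preserves the mean and variance of the summed synaptic input, as discussed in this abstract, can be sketched with a symmetric two-level distribution. This is a minimal illustration of the moment-preserving idea under that assumption; the cited study's exact discretization scheme may differ.

```python
import numpy as np

def discretize_two_level(weights, rng):
    """Replace each weight by one of two levels, mu - sigma or mu + sigma,
    with equal probability. This two-point distribution has the same mean
    and variance as the original sample, so the mean and variance of the
    total synaptic input current are (statistically) preserved, unlike a
    naive rounding of the weights."""
    mu, sigma = weights.mean(), weights.std()
    signs = rng.choice([-1.0, 1.0], size=weights.shape)
    return mu + signs * sigma

rng = np.random.default_rng(1)
w = rng.normal(0.5, 0.1, size=200_000)   # high-resolution reference weights
w_disc = discretize_two_level(w, rng)    # only two distinct values remain
```

A naive alternative, rounding every weight to the nearest grid point, keeps the mean but shrinks the variance of the input current, which is the kind of distortion of spike-train statistics the study warns against.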
Affiliation(s)
- Stefan Dasbach
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Tom Tetzlaff
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany
- Johanna Senk
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
15
Ebert MA, Gebski V, Baldock C. In the future simulations will replace clinical trials. Phys Eng Sci Med 2021; 44:997-1001. [PMID: 34855127 PMCID: PMC8638236 DOI: 10.1007/s13246-021-01079-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/09/2021] [Indexed: 01/10/2023]
Affiliation(s)
- Martin A Ebert
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Nedlands, WA, 6009, Australia
- School of Physics, Mathematics and Computing, University of Western Australia, Crawley, WA, 6009, Australia
- Val Gebski
- NHMRC Clinical Trials Centre, University of Sydney, Camperdown, NSW, 2050, Australia
- Clive Baldock
- Graduate Research School, Western Sydney University, Penrith, NSW, 2747, Australia
16
Liu W, Duan H, Zhang D, Zhang X, Luo Q, Xie T, Yan H, Peng L, Hu Y, Liang L, Zhao G, Xie Z, Hu J. Concepts and Application of DNA Origami and DNA Self-Assembly: A Systematic Review. Appl Bionics Biomech 2021; 2021:9112407. [PMID: 34824603 PMCID: PMC8610680 DOI: 10.1155/2021/9112407] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2021] [Accepted: 10/20/2021] [Indexed: 01/02/2023] Open
Abstract
With the arrival of the post-Moore Era, the development of traditional silicon-based computers has reached the limit, and it is urgent to develop new computing technology to meet the needs of science and life. DNA computing has become an essential branch and research hotspot of new computer technology because of its powerful parallel computing capability and excellent data storage capability. Due to good biocompatibility and programmability properties, DNA molecules have been widely used to construct novel self-assembled structures. In this review, DNA origami is briefly introduced firstly. Then, the applications of DNA self-assembly in material physics, biogenetics, medicine, and other fields are described in detail, which will aid the development of DNA computational model in the future.
Affiliation(s)
- Wei Liu
- Key Laboratory of Coarse Cereal Processing, Ministry of Agriculture and Rural Affairs, School of Pharmacy, Sichuan Industrial Institute of Antibiotics, Chengdu University, Chengdu 610106, China
- Huaichuan Duan
- Key Laboratory of Coarse Cereal Processing, Ministry of Agriculture and Rural Affairs, School of Pharmacy, Sichuan Industrial Institute of Antibiotics, Chengdu University, Chengdu 610106, China
- Derong Zhang
- School of Marxism, Chengdu Vocational & Technical College of Industry, Chengdu 610081, China
- Xun Zhang
- Key Laboratory of Coarse Cereal Processing, Ministry of Agriculture and Rural Affairs, School of Pharmacy, Sichuan Industrial Institute of Antibiotics, Chengdu University, Chengdu 610106, China
- Qing Luo
- Key Laboratory of Coarse Cereal Processing, Ministry of Agriculture and Rural Affairs, School of Pharmacy, Sichuan Industrial Institute of Antibiotics, Chengdu University, Chengdu 610106, China
- Tao Xie
- Key Laboratory of Coarse Cereal Processing, Ministry of Agriculture and Rural Affairs, School of Pharmacy, Sichuan Industrial Institute of Antibiotics, Chengdu University, Chengdu 610106, China
- Hailian Yan
- Key Laboratory of Coarse Cereal Processing, Ministry of Agriculture and Rural Affairs, School of Pharmacy, Sichuan Industrial Institute of Antibiotics, Chengdu University, Chengdu 610106, China
- Lianxin Peng
- Key Laboratory of Coarse Cereal Processing, Ministry of Agriculture and Rural Affairs, School of Pharmacy, Sichuan Industrial Institute of Antibiotics, Chengdu University, Chengdu 610106, China
- Yichen Hu
- Key Laboratory of Coarse Cereal Processing, Ministry of Agriculture and Rural Affairs, School of Pharmacy, Sichuan Industrial Institute of Antibiotics, Chengdu University, Chengdu 610106, China
- Li Liang
- Key Laboratory of Coarse Cereal Processing, Ministry of Agriculture and Rural Affairs, School of Pharmacy, Sichuan Industrial Institute of Antibiotics, Chengdu University, Chengdu 610106, China
- Gang Zhao
- Key Laboratory of Coarse Cereal Processing, Ministry of Agriculture and Rural Affairs, School of Pharmacy, Sichuan Industrial Institute of Antibiotics, Chengdu University, Chengdu 610106, China
- Zhenjian Xie
- Key Laboratory of Coarse Cereal Processing, Ministry of Agriculture and Rural Affairs, School of Pharmacy, Sichuan Industrial Institute of Antibiotics, Chengdu University, Chengdu 610106, China
- Jianping Hu
- Key Laboratory of Coarse Cereal Processing, Ministry of Agriculture and Rural Affairs, School of Pharmacy, Sichuan Industrial Institute of Antibiotics, Chengdu University, Chengdu 610106, China
17
Kobayashi T, Kuriyama R, Yamazaki T. Testing an Explicit Method for Multi-compartment Neuron Model Simulation on a GPU. Cognit Comput 2021. [DOI: 10.1007/s12559-021-09942-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
18
Jordan J, Schmidt M, Senn W, Petrovici MA. Evolving interpretable plasticity for spiking networks. eLife 2021; 10:66273. [PMID: 34709176 PMCID: PMC8553337 DOI: 10.7554/elife.66273] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2021] [Accepted: 08/19/2021] [Indexed: 11/25/2022] Open
Abstract
Continuous adaptation allows survival in an ever-changing world. Adjustments in the synaptic coupling strength between neurons are essential for this capability, setting us apart from simpler, hard-wired organisms. How these changes can be mathematically described at the phenomenological level, as so-called ‘plasticity rules’, is essential both for understanding biological information processing and for developing cognitively performant artificial systems. We suggest an automated approach for discovering biophysically plausible plasticity rules based on the definition of task families, associated performance measures and biophysical constraints. By evolving compact symbolic expressions, we ensure the discovered plasticity rules are amenable to intuitive understanding, fundamental for successful communication and human-guided generalization. We successfully apply our approach to typical learning scenarios and discover previously unknown mechanisms for learning efficiently from rewards, recover efficient gradient-descent methods for learning from target signals, and uncover various functionally equivalent STDP-like rules with tuned homeostatic mechanisms.

Our brains are incredibly adaptive. Every day we form memories, acquire new knowledge or refine existing skills. This stands in contrast to our current computers, which typically can only perform pre-programmed actions. Our own ability to adapt is the result of a process called synaptic plasticity, in which the strength of the connections between neurons can change. To better understand brain function and build adaptive machines, researchers in neuroscience and artificial intelligence (AI) are modeling the underlying mechanisms. So far, most work towards this goal was guided by human intuition – that is, by the strategies scientists think are most likely to succeed. Despite the tremendous progress, this approach has two drawbacks. First, human time is limited and expensive. And second, researchers have a natural – and reasonable – tendency to incrementally improve upon existing models, rather than starting from scratch. Jordan, Schmidt et al. have now developed a new approach based on ‘evolutionary algorithms’. These computer programs search for solutions to problems by mimicking the process of biological evolution, such as the concept of survival of the fittest. The approach exploits the increasing availability of cheap but powerful computers. Compared to its predecessors (or indeed human brains), it also uses search strategies that are less biased by previous models. The evolutionary algorithms were presented with three typical learning scenarios. In the first, the computer had to spot a repeating pattern in a continuous stream of input without receiving feedback on how well it was doing. In the second scenario, the computer received virtual rewards whenever it behaved in the desired manner – an example of reinforcement learning. Finally, in the third ‘supervised learning’ scenario, the computer was told exactly how much its behavior deviated from the desired behavior. For each of these scenarios, the evolutionary algorithms were able to discover mechanisms of synaptic plasticity to solve the new task successfully. Using evolutionary algorithms to study how computers ‘learn’ will provide new insights into how brains function in health and disease. It could also pave the way for developing intelligent machines that can better adapt to the needs of their users.
Affiliation(s)
- Jakob Jordan
- Department of Physiology, University of Bern, Bern, Switzerland
- Maximilian Schmidt
- Ascent Robotics, Tokyo, Japan
- RIKEN Center for Brain Science, Tokyo, Japan
- Walter Senn
- Department of Physiology, University of Bern, Bern, Switzerland
- Mihai A Petrovici
- Department of Physiology, University of Bern, Bern, Switzerland
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
19
Stapmanns J, Hahne J, Helias M, Bolten M, Diesmann M, Dahmen D. Event-Based Update of Synapses in Voltage-Based Learning Rules. Front Neuroinform 2021; 15:609147. [PMID: 34177505 PMCID: PMC8222618 DOI: 10.3389/fninf.2021.609147] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2020] [Accepted: 04/07/2021] [Indexed: 11/13/2022] Open
Abstract
Due to the point-like nature of neuronal spiking, efficient neural network simulators often employ event-based simulation schemes for synapses. Yet many types of synaptic plasticity rely on the membrane potential of the postsynaptic cell as a third factor in addition to pre- and postsynaptic spike times. In some learning rules, membrane potentials influence synaptic weight changes not only at the time points of spike events but in a continuous manner. In these cases, synapses require information on the full time course of membrane potentials to update their strength, which a priori suggests a continuous update in a time-driven manner. The latter hinders scaling of simulations to realistic cortical network sizes and time scales relevant for learning. Here, we derive two efficient algorithms for archiving postsynaptic membrane potentials, both compatible with modern simulation engines based on event-based synapse updates. We theoretically contrast the two algorithms with a time-driven synapse update scheme to analyze their advantages in terms of memory and computation. We further present a reference implementation in the spiking neural network simulator NEST for two prototypical voltage-based plasticity rules: the Clopath rule and the Urbanczik-Senn rule. For both rules, the two event-based algorithms significantly outperform the time-driven scheme. Depending on the amount of data to be stored for plasticity, which differs heavily between the rules, a strong performance increase can be achieved by compressing or sampling the information on membrane potentials. Our results on the computational efficiency of archiving provide guidelines for designing learning rules that remain practically usable in large-scale networks.
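The core idea — deferring synapse updates to spike events by archiving the postsynaptic voltage trace — can be illustrated with a toy sketch. This is our simplification, not the paper's algorithms; the class names and the voltage-based rule are invented for illustration, but the event-based result matches a step-by-step time-driven reference exactly.

```python
# Event-based vs. time-driven synapse updates (illustrative sketch).

class Neuron:
    def __init__(self):
        self.archive = []          # (step, voltage) history for plastic synapses

    def record(self, step, v):
        self.archive.append((step, v))

class EventDrivenSynapse:
    def __init__(self, lr=0.01):
        self.w = 0.5
        self.lr = lr
        self.last_update = 0       # step of the previous event-based update

    def on_presynaptic_spike(self, step, post):
        # Integrate the voltage-dependent weight change over every step
        # since the last spike event, using the archived voltage trace.
        for t, v in post.archive:
            if self.last_update <= t < step:
                self.w += self.lr * v      # toy voltage-based rule
        self.last_update = step

def time_driven(voltages, lr=0.01):
    """Reference scheme: touch the synapse on every time step."""
    w = 0.5
    for v in voltages:
        w += lr * v
    return w

voltages = [0.1 * (i % 7) for i in range(100)]
post = Neuron()
syn = EventDrivenSynapse()
for step, v in enumerate(voltages):
    post.record(step, v)
    if step in (30, 75, 99):      # presynaptic spikes trigger updates
        syn.on_presynaptic_spike(step + 1, post)

w_event = syn.w
w_time = time_driven(voltages)
```

The two schemes produce the same final weight, but the event-driven synapse is only visited three times instead of a hundred; the paper's contribution is making the archiving itself memory-efficient (e.g. by compression or sampling).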
Affiliation(s)
- Jonas Stapmanns
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Institute for Theoretical Solid State Physics, RWTH Aachen University, Aachen, Germany
- Jan Hahne
- School of Mathematics and Natural Sciences, Bergische Universität Wuppertal, Wuppertal, Germany
- Moritz Helias
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Institute for Theoretical Solid State Physics, RWTH Aachen University, Aachen, Germany
- Matthias Bolten
- School of Mathematics and Natural Sciences, Bergische Universität Wuppertal, Wuppertal, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- David Dahmen
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany

20
Kuriyama R, Casellato C, D'Angelo E, Yamazaki T. Real-Time Simulation of a Cerebellar Scaffold Model on Graphics Processing Units. Front Cell Neurosci 2021; 15:623552. [PMID: 33897369 PMCID: PMC8058369 DOI: 10.3389/fncel.2021.623552]
Abstract
Large-scale simulation of detailed computational models of neuronal microcircuits plays a prominent role in reproducing and predicting the dynamics of the microcircuits. To reconstruct a microcircuit, one must choose neuron and synapse models, placements, connectivity, and numerical simulation methods according to anatomical and physiological constraints. For reconstruction and refinement, it is useful to be able to replace one module easily while leaving the others as they are. One way to achieve this is via a scaffolding approach, in which a simulation code is built on independent modules for placements, connections, and network simulations. Owing to this modularity, the approach enables researchers to improve the performance of the entire simulation by simply replacing a problematic module with an improved one. Casali et al. (2019) developed a spiking network model of the cerebellar microcircuit using this approach, and while it reproduces electrophysiological properties of cerebellar neurons, it takes too much computational time. Here, we followed this scaffolding approach and replaced the simulation module with an accelerated version on graphics processing units (GPUs). Our cerebellar scaffold model ran roughly 100 times faster than the original version. In fact, our model is able to run faster than real time, with good weak and strong scaling properties. To demonstrate an application of real-time simulation, we implemented synaptic plasticity mechanisms at parallel fiber-Purkinje cell synapses and carried out simulations of a behavioral experiment known as gain adaptation of the optokinetic response. We confirmed that the computer simulation reproduced the experimental findings while completing in real time: a simulation of 2 s of biological time completed within 750 ms.
These results suggest that the scaffolding approach is a promising concept for gradual development and refactoring of simulation codes for large-scale elaborate microcircuits. Moreover, a real-time version of the cerebellar scaffold model, which is enabled by parallel computing technology owing to GPUs, may be useful for large-scale simulations and engineering applications that require real-time signal processing and motor control.
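The scaffolding idea described above can be made concrete with a schematic sketch (our construction, not the authors' code): placement, connection, and simulation live behind a common interface, so a stand-in "GPU" simulation module can replace the reference one without touching the rest of the pipeline.

```python
# Scaffold-style modular pipeline (illustrative; all functions are toys).

def place(n_neurons):
    """Placement module: assign toy 1-D positions."""
    return list(range(n_neurons))

def connect(positions, radius=2):
    """Connection module: link neurons closer than `radius`."""
    return [(i, j) for i in positions for j in positions
            if i != j and abs(i - j) <= radius]

def simulate_reference(edges, steps):
    """Reference simulation module (stand-in for the slow CPU version)."""
    return {"spikes": len(edges) * steps}

def simulate_fast(edges, steps):
    """Drop-in replacement (stand-in for a GPU-backed engine): same
    interface and same result, different implementation underneath."""
    return {"spikes": len(edges) * steps}

def run(simulator, n_neurons=10, steps=5):
    positions = place(n_neurons)
    edges = connect(positions)
    return simulator(edges, steps)

out_ref = run(simulate_reference)
out_gpu = run(simulate_fast)
```

Because only the simulation module changes, results can be checked for equality across engines — the property that let the authors swap in a GPU backend while keeping placements and connectivity fixed.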
Affiliation(s)
- Rin Kuriyama
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
- Claudia Casellato
- Neurophysiology Unit, Neurocomputational Laboratory, Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
- Egidio D'Angelo
- Neurophysiology Unit, Neurocomputational Laboratory, Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
- IRCCS Mondino Foundation, Pavia, Italy
- Tadashi Yamazaki
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan

21
Knight JC, Nowotny T. Larger GPU-accelerated brain simulations with procedural connectivity. Nat Comput Sci 2021; 1:136-142. [PMID: 38217218 DOI: 10.1038/s43588-020-00022-7]
Abstract
Simulations are an important tool for investigating brain function, but large models are needed to faithfully reproduce the statistics and dynamics of brain activity. Simulating large spiking neural network models has, until now, needed so much memory for storing synaptic connections that it required high-performance computer systems. Here, we present an alternative simulation method we call 'procedural connectivity', where connectivity and synaptic weights are generated 'on the fly' instead of stored and retrieved from memory. This method is particularly well suited for use on graphics processing units (GPUs), which are a common fixture in many workstations. Using procedural connectivity and an additional GPU code-generation optimization, we can simulate a recent model of the macaque visual cortex with 4.13 × 10^6 neurons and 24.2 × 10^9 synapses on a single GPU, a significant step forward in making large-scale brain modeling accessible to more researchers.
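The trick behind procedural connectivity is that pseudo-random connections seeded by the presynaptic neuron's index can be regenerated identically on demand. A minimal sketch of the idea (hypothetical code, not GeNN's implementation; population sizes and fan-out are made up):

```python
# Procedural connectivity: regenerate, don't store (illustrative sketch).
import random

N_POST = 1000          # postsynaptic population size (illustrative)
FAN_OUT = 10           # connections per presynaptic neuron

def targets_of(pre_id):
    """Regenerate (target, weight) pairs for one presynaptic neuron.
    Seeding by the neuron index makes the result fully reproducible."""
    rng = random.Random(pre_id)
    return [(rng.randrange(N_POST), rng.uniform(0.0, 1.0))
            for _ in range(FAN_OUT)]

def deliver_spike(pre_id, currents):
    """Accumulate synaptic input using on-the-fly connectivity: no
    synapse matrix is ever held in memory."""
    for post, w in targets_of(pre_id):
        currents[post] += w

currents = [0.0] * N_POST
deliver_spike(42, currents)
```

Memory for synapses drops from O(synapses) to essentially zero, at the cost of recomputing the targets on every spike — a trade that suits GPUs, where arithmetic is cheap relative to memory.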
Affiliation(s)
- James C Knight
- Centre for Computational Neuroscience and Robotics, School of Engineering and Informatics, University of Sussex, Brighton, UK
- Thomas Nowotny
- Centre for Computational Neuroscience and Robotics, School of Engineering and Informatics, University of Sussex, Brighton, UK

22
Yamazaki T, Igarashi J, Yamaura H. Human-scale Brain Simulation via Supercomputer: A Case Study on the Cerebellum. Neuroscience 2021; 462:235-246. [PMID: 33482329 DOI: 10.1016/j.neuroscience.2021.01.014]
Abstract
Performance of supercomputers has been steadily and exponentially increasing for the past 20 years, and is expected to increase further. This unprecedented computational power enables us to build and simulate large-scale neural network models composed of tens of billions of neurons and tens of trillions of synapses with detailed anatomical connections and realistic physiological parameters. Such "human-scale" brain simulation could be considered a milestone in computational neuroscience and even in general neuroscience. Towards this milestone, it is mandatory to introduce modern high-performance computing technology into neuroscience research. In this article, we provide an introductory landscape of large-scale brain simulation on supercomputers from the viewpoints of computational neuroscience and modern high-performance computing technology, for specialists in experimental as well as computational neuroscience. This introduction to modeling and simulation methods is followed by a review of various representative large-scale simulation studies conducted to date. Then, we direct our attention to the cerebellum, with a review of more simulation studies specific to that region. Furthermore, we present recent simulation results of a human-scale cerebellar network model composed of 68 billion neurons on the Japanese flagship supercomputer K (now retired). Finally, we discuss the necessity and importance of human-scale brain simulation, and suggest future directions of such large-scale brain simulation research.
Affiliation(s)
- Tadashi Yamazaki
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Japan
- Hiroshi Yamaura
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Japan

23
Giannakakis E, Han CE, Weber B, Hutchings F, Kaiser M. Towards simulations of long-term behavior of neural networks: Modeling synaptic plasticity of connections within and between human brain regions. Neurocomputing 2020; 416:38-44. [PMID: 33250573 PMCID: PMC7598092 DOI: 10.1016/j.neucom.2020.01.050]
Abstract
Simulations of neural networks can be used to study the direct effect of internal or external changes on brain dynamics. However, some changes are not immediate but occur on the timescale of weeks, months, or years. Examples include the effects of strokes, surgical tissue removal, or traumatic brain injury, but also gradual changes during brain development. Simulating network activity over a long time, even for a small number of nodes, is a computational challenge. Here, we model a coupled network of human brain regions, with a modified Wilson-Cowan model representing the dynamics of each region and with synaptic plasticity adjusting connection weights within and between regions. Using strategies ranging from different plasticity models to vectorization and a different differential-equation solver setup, we achieved a runtime of one second per second of biological time.
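The vectorization strategy mentioned above can be illustrated with a minimal coupled Wilson-Cowan network (a generic textbook form under our own assumed parameters, not the authors' modified model): every region's excitatory and inhibitory rates advance in one array operation per Euler step, rather than in a per-region loop.

```python
# Vectorized Euler integration of a coupled Wilson-Cowan network
# (illustrative sketch; coupling matrix and time constants are made up).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate(W, steps=1000, dt=0.1, tau_e=1.0, tau_i=2.0):
    """Advance E/I population rates for all regions at once."""
    n = W.shape[0]
    E = np.full(n, 0.1)
    I = np.full(n, 0.1)
    for _ in range(steps):
        inp = W @ E                                   # inter-region coupling
        dE = (-E + sigmoid(inp - 1.5 * I)) / tau_e    # excitatory populations
        dI = (-I + sigmoid(E)) / tau_i                # inhibitory populations
        E = E + dt * dE
        I = I + dt * dI
    return E, I

rng = np.random.default_rng(0)
W = rng.uniform(0, 0.2, size=(8, 8))                  # toy connectome weights
E, I = simulate(W)
```

With the state held in arrays, adding plasticity amounts to updating `W` with another array expression inside the loop, which keeps long simulated durations tractable.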
Affiliation(s)
- Emmanouil Giannakakis
- Interdisciplinary Computing and Complex BioSystems (ICOS) research group, School of Computing, Newcastle University, Newcastle upon Tyne NE4 5TG, United Kingdom
- Cheol E Han
- Department of Electronics and Information Engineering, Korea University, Sejong, Republic of Korea
- Bernd Weber
- Institute of Experimental Epileptology and Cognition Research, University of Bonn, Germany
- Frances Hutchings
- Interdisciplinary Computing and Complex BioSystems (ICOS) research group, School of Computing, Newcastle University, Newcastle upon Tyne NE4 5TG, United Kingdom
- Marcus Kaiser
- Interdisciplinary Computing and Complex BioSystems (ICOS) research group, School of Computing, Newcastle University, Newcastle upon Tyne NE4 5TG, United Kingdom; Institute of Neuroscience, Newcastle University, the Henry Wellcome Building, Newcastle upon Tyne NE2 4HH, United Kingdom; Department of Functional Neurosurgery, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200025, China

24
Oscillator Motif as Design Pattern for the Spinal Cord Circuitry Reconstruction. BioNanoScience 2020. [DOI: 10.1007/s12668-020-00743-z]
25
Cremonesi F, Schürmann F. Understanding Computational Costs of Cellular-Level Brain Tissue Simulations Through Analytical Performance Models. Neuroinformatics 2020; 18:407-428. [PMID: 32056104 PMCID: PMC7338826 DOI: 10.1007/s12021-019-09451-w]
Abstract
Computational modeling and simulation have become essential tools in the quest to better understand the brain's makeup and to decipher the causal interrelations of its components. The breadth of biochemical and biophysical processes and structures in the brain has led to the development of a large variety of model abstractions and specialized tools, often requiring high-performance computing resources for their timely execution. What has been missing so far is an in-depth analysis of the complexity of the computational kernels, hindering a systematic approach to identifying bottlenecks of algorithms and hardware. If whole-brain models are to be achieved on emerging computer generations, models and simulation engines will have to be carefully co-designed for the intrinsic hardware tradeoffs. For the first time, we present a systematic exploration based on analytic performance modeling. We base our analysis on three in silico models, chosen as representative examples of the most widely employed modeling abstractions: current-based point neurons, conductance-based point neurons, and conductance-based detailed neurons. We identify that the synaptic modeling formalism, i.e., the current- or conductance-based representation, and not the level of morphological detail, is the most significant factor in determining memory-bandwidth saturation and shared-memory scaling of in silico models. Even though general-purpose computing has, until now, largely been able to deliver high performance, we find that for all types of abstractions, network latency and memory bandwidth will become severe bottlenecks as the number of neurons to be simulated grows. By adapting and extending a performance modeling approach, we deliver a first characterization of the performance landscape of brain tissue simulations, allowing us to pinpoint current bottlenecks for state-of-the-art in silico models and make projections for future hardware and software requirements.
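A back-of-the-envelope calculation in the spirit of such analytic performance models looks like the following (all numbers are illustrative assumptions, not the paper's): per-update FLOP and byte counts combine with peak compute rate and memory bandwidth into a roofline-style time bound, exposing whether a kernel is compute- or memory-bound.

```python
# Roofline-style bound for a neuron-update kernel (illustrative sketch).

def roofline_time(flops, bytes_moved, peak_flops, bandwidth):
    """Lower bound on kernel time: limited by compute or by memory."""
    return max(flops / peak_flops, bytes_moved / bandwidth)

def is_memory_bound(flops, bytes_moved, peak_flops, bandwidth):
    # Memory-bound when arithmetic intensity (FLOP/byte) falls below the
    # machine balance (peak FLOP/s per byte/s of bandwidth).
    return flops / bytes_moved < peak_flops / bandwidth

# Assumed per-update costs for a conductance-based point neuron.
FLOPS_PER_UPDATE = 200
BYTES_PER_UPDATE = 400          # state variables and parameters touched

PEAK = 1e12                     # 1 TFLOP/s (assumed machine)
BW = 1e11                       # 100 GB/s memory bandwidth (assumed)

t = roofline_time(FLOPS_PER_UPDATE, BYTES_PER_UPDATE, PEAK, BW)
memory_bound = is_memory_bound(FLOPS_PER_UPDATE, BYTES_PER_UPDATE, PEAK, BW)
```

With these toy numbers the arithmetic intensity (0.5 FLOP/byte) sits well below the machine balance (10 FLOP/byte), so memory traffic, not arithmetic, sets the time bound — the kind of conclusion the paper draws systematically for each modeling abstraction.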
Affiliation(s)
- Francesco Cremonesi
- Blue Brain Project, Brain Mind Institute, École polytechnique fédérale de Lausanne (EPFL), Campus Biotech, 1202, Geneva, Switzerland
- Felix Schürmann
- Blue Brain Project, Brain Mind Institute, École polytechnique fédérale de Lausanne (EPFL), Campus Biotech, 1202, Geneva, Switzerland

26
Jordan J, Helias M, Diesmann M, Kunkel S. Efficient Communication in Distributed Simulations of Spiking Neuronal Networks With Gap Junctions. Front Neuroinform 2020; 14:12. [PMID: 32431602 PMCID: PMC7214808 DOI: 10.3389/fninf.2020.00012]
Abstract
Investigating the dynamics and function of large-scale spiking neuronal networks with realistic numbers of synapses is made possible today by state-of-the-art simulation code that scales to the largest contemporary supercomputers. However, simulations that involve electrical interactions (also called gap junctions) in addition to chemical synapses scale poorly, due to a communication scheme that collects global data on each compute node. In comparison to chemical synapses, gap junctions are far less abundant. To improve scalability, we exploit this sparsity by integrating an existing framework for continuous interactions with a recently proposed directed communication scheme for spikes. Using a reference implementation in the NEST simulator, we demonstrate excellent scalability of the integrated framework, accelerating large-scale simulations with gap junctions by more than an order of magnitude. This allows, for the first time, the efficient exploration of the interactions of chemical and electrical coupling in large-scale neuronal network models with natural synapse density, distributed across thousands of compute nodes.
Affiliation(s)
- Jakob Jordan
- Department of Physiology, University of Bern, Bern, Switzerland; Institute of Neuroscience and Medicine (INM-6), Jülich Research Centre, Jülich, Germany; Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany; JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany
- Moritz Helias
- Institute of Neuroscience and Medicine (INM-6), Jülich Research Centre, Jülich, Germany; Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany; JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany; Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6), Jülich Research Centre, Jülich, Germany; Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany; JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany; Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany; Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Susanne Kunkel
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway

27
Yamaura H, Igarashi J, Yamazaki T. Simulation of a Human-Scale Cerebellar Network Model on the K Computer. Front Neuroinform 2020; 14:16. [PMID: 32317955 PMCID: PMC7146068 DOI: 10.3389/fninf.2020.00016]
Abstract
Computer simulation of the human brain at an individual-neuron resolution is an ultimate goal of computational neuroscience. The Japanese flagship supercomputer, K, provides unprecedented computational capability toward this goal. The cerebellum contains 80% of the neurons in the whole brain. Therefore, computer simulation of the human-scale cerebellum is a challenge for modern supercomputers. In this study, we built a human-scale spiking network model of the cerebellum, composed of 68 billion spiking neurons, on the K computer. As a benchmark, we performed a computer simulation of a cerebellum-dependent eye-movement task known as the optokinetic response. We succeeded in reproducing plausible neuronal activity patterns that are observed experimentally in animals. The model was built on dedicated neural network simulation software called MONET (Millefeuille-like Organization NEural neTwork), which calculates layered-sheet types of neural networks with parallelization by tile partitioning. To examine the scalability of the MONET simulator, we repeatedly performed simulations while changing the number of compute nodes from 1,024 to 82,944 and measured the computational time. We observed a good weak-scaling property for our cerebellar network model. Using all 82,944 nodes, we succeeded in simulating a human-scale cerebellum for the first time, although the simulation ran 578 times slower than real time. These results suggest that the K computer is already capable of simulating a human-scale cerebellar model with the aid of the MONET simulator.
Affiliation(s)
- Hiroshi Yamaura
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
- Jun Igarashi
- Head Office for Information Systems and Cybersecurity, RIKEN, Saitama, Japan
- Tadashi Yamazaki
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan

28
Crone JC, Vindiola MM, Yu AB, Boothe DL, Beeman D, Oie KS, Franaszczuk PJ. Enabling Large-Scale Simulations With the GENESIS Neuronal Simulator. Front Neuroinform 2019; 13:69. [PMID: 31803040 PMCID: PMC6873326 DOI: 10.3389/fninf.2019.00069]
Abstract
In this paper, we evaluate the computational performance of the GEneral NEural SImulation System (GENESIS) for large-scale simulations of neural networks. While many benchmark studies have been performed for large-scale simulations with leaky integrate-and-fire neurons or neuronal models with only a few compartments, this work focuses on higher-fidelity neuronal models represented by 50–74 compartments per neuron. After making some modifications to the source code for GENESIS and its parallel implementation, PGENESIS, particularly to improve memory usage, we find that PGENESIS is able to scale efficiently on supercomputing resources to network sizes as large as 9 × 10^6 neurons with 18 × 10^9 synapses and 2.2 × 10^6 neurons with 45 × 10^9 synapses. The modifications to GENESIS that enabled these large-scale simulations have been incorporated into the May 2019 official release of PGENESIS 2.4, available for download from the GENESIS web site (genesis-sim.org).
Affiliation(s)
- Joshua C Crone
- Computational and Information Sciences Directorate, Army Research Laboratory, Aberdeen Proving Ground, MD, United States
- Manuel M Vindiola
- Computational and Information Sciences Directorate, Army Research Laboratory, Aberdeen Proving Ground, MD, United States
- Alfred B Yu
- Human Research and Engineering Directorate, Army Research Laboratory, Aberdeen Proving Ground, MD, United States
- David L Boothe
- Human Research and Engineering Directorate, Army Research Laboratory, Aberdeen Proving Ground, MD, United States
- David Beeman
- Department of Electrical, Computer, and Energy Engineering, University of Colorado, Boulder, CO, United States
- Kelvin S Oie
- Human Research and Engineering Directorate, Army Research Laboratory, Aberdeen Proving Ground, MD, United States
- Piotr J Franaszczuk
- Human Research and Engineering Directorate, Army Research Laboratory, Aberdeen Proving Ground, MD, United States; Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, United States

29
Chen S, He Z, Han X, He X, Li R, Zhu H, Zhao D, Dai C, Zhang Y, Lu Z, Chi X, Niu B. How Big Data and High-performance Computing Drive Brain Science. Genomics Proteomics Bioinformatics 2019; 17:381-392. [PMID: 31805369 PMCID: PMC6943776 DOI: 10.1016/j.gpb.2019.09.003]
Abstract
Brain science accelerates the study of intelligence and behavior, contributes fundamental insights into human cognition, and offers prospective treatments for brain disease. Faced with the challenges posed by imaging technologies and deep learning computational models, big data and high-performance computing (HPC) play essential roles in studying brain function, brain diseases, and large-scale brain models or connectomes. We review the driving forces behind big data and HPC methods applied to brain science, including deep learning, powerful data analysis capabilities, and computational performance solutions, each of which can be used to improve diagnostic accuracy and research output. This work reinforces predictions that big data and HPC will continue to improve brain science by making ultrahigh-performance analysis possible, by improving data standardization and sharing, and by providing new neuromorphic insights.
Affiliation(s)
- Shanyu Chen
- Computer Network Information Center, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100190, China
- Zhipeng He
- Computer Network Information Center, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100190, China
- Xinyin Han
- Computer Network Information Center, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100190, China
- Xiaoyu He
- Computer Network Information Center, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100190, China
- Ruilin Li
- Computer Network Information Center, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100190, China
- Haidong Zhu
- Computer Network Information Center, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100190, China
- Dan Zhao
- Computer Network Information Center, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100190, China
- Chuangchuang Dai
- Computer Network Information Center, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100190, China
- Yu Zhang
- Computer Network Information Center, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100190, China
- Zhonghua Lu
- Computer Network Information Center, Chinese Academy of Sciences, Beijing 100190, China
- Xuebin Chi
- Computer Network Information Center, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100190, China; Center of Scientific Computing Applications & Research, Chinese Academy of Sciences, Beijing 100190, China
- Beifang Niu
- Computer Network Information Center, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100190, China; Guizhou University School of Medicine, Guiyang 550025, China

30
Collins LT. The case for emulating insect brains using anatomical "wiring diagrams" equipped with biophysical models of neuronal activity. Biol Cybern 2019; 113:465-474. [PMID: 31696303 DOI: 10.1007/s00422-019-00810-z]
Abstract
Developing whole-brain emulation (WBE) technology would provide immense benefits across neuroscience, biomedicine, artificial intelligence, and robotics. At this time, constructing a simulated human brain lacks feasibility due to limited experimental data and limited computational resources. However, I suggest that progress toward this goal might be accelerated by working toward an intermediate objective, namely insect brain emulation (IBE). More specifically, this would entail creating biologically realistic simulations of entire insect nervous systems along with more approximate simulations of non-neuronal insect physiology to make "virtual insects." I argue that this could be realistically achievable within the next 20 years. I propose that developing emulations of insect brains will galvanize the global community of scientists, businesspeople, and policymakers toward pursuing the loftier goal of emulating the human brain. By demonstrating that WBE is possible via IBE, simulating mammalian brains and eventually the human brain may no longer be viewed as too radically ambitious to deserve substantial funding and resources. Furthermore, IBE will facilitate dramatic advances in cognitive neuroscience, artificial intelligence, and robotics through studies performed using virtual insects.
Affiliation(s)
- Logan T Collins
- Department of Psychology and Neuroscience, University of Colorado, Boulder, 2860 Wilderness Place, Boulder, CO, 80301, USA

31
Igarashi J, Yamaura H, Yamazaki T. Large-Scale Simulation of a Layered Cortical Sheet of Spiking Network Model Using a Tile Partitioning Method. Front Neuroinform 2019; 13:71. [PMID: 31849631 PMCID: PMC6895031 DOI: 10.3389/fninf.2019.00071]
Abstract
One of the grand challenges for computational neuroscience and high-performance computing is the simulation of a human-scale whole-brain model with spiking neurons and synaptic plasticity on supercomputers. To achieve such a simulation, the target network model must be partitioned across a number of computational nodes, and the sub-network models executed in parallel while spike information is communicated between nodes. However, it remains unclear how the target network model should be partitioned for efficient computing on the next generation of supercomputers. In particular, reducing the communication of spike information across compute nodes is essential, because network performance is slow relative to processors and memory. From a biological viewpoint, the cerebral cortex and cerebellum contain 99% of all neurons and synapses and form layered sheet structures, so an efficient partitioning method should exploit these structures. In this study, we show that a tile partitioning method leads to efficient communication. To demonstrate this, we developed a simulation software called MONET (Millefeuille-like Organization NEural neTwork simulator) that partitions a network model as described above. The MONET simulator was implemented on the Japanese flagship supercomputer K, which comprises 82,944 computational nodes. We examined the calculation, communication, and memory-consumption performance of the tile partitioning method for a cortical model with realistic anatomical and physiological parameters. The results showed that tile partitioning drastically reduced the amount of communication data by replacing network communication with DRAM access and by sharing communication data among neighboring neurons. We confirmed the scalability and efficiency of the tile partitioning method on up to 63,504 compute nodes of the K computer for the cortical model. In the companion paper by Yamaura et al., the performance for a cerebellar model is examined. These results suggest that the tile partitioning method will be advantageous for a human-scale whole-brain simulation on exascale computers.
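The tile idea in this abstract can be reduced to a few lines: each neuron of a 2D cortical sheet is owned by the compute node responsible for its tile, so that mostly-local connectivity stays node-local. The sketch below is illustrative only; `tile_of` and `partition_sheet` are hypothetical names, not MONET's actual API.

```python
def tile_of(x, y, tile_size):
    """Map a neuron's sheet coordinate to the tile (compute node) that owns it."""
    return (x // tile_size, y // tile_size)

def partition_sheet(width, height, tile_size):
    """Group every neuron coordinate of the sheet by its owning tile."""
    tiles = {}
    for x in range(width):
        for y in range(height):
            tiles.setdefault(tile_of(x, y, tile_size), []).append((x, y))
    return tiles

# An 8x8 sheet cut into 4x4 tiles yields 4 tiles of 16 neurons each;
# a neuron's nearest neighbors almost always share its tile.
tiles = partition_sheet(width=8, height=8, tile_size=4)
```

Because neighboring neurons land on the same node, their shared spike traffic becomes local memory access instead of network communication, which is the effect the abstract reports.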
Affiliation(s)
- Jun Igarashi
- Computational Engineering Applications Unit, Head Office for Information Systems and Cybersecurity, RIKEN, Saitama, Japan
- Hiroshi Yamaura
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
- Tadashi Yamazaki
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
32
Jordan J, Weidel P, Morrison A. A Closed-Loop Toolchain for Neural Network Simulations of Learning Autonomous Agents. Front Comput Neurosci 2019; 13:46. [PMID: 31427939 PMCID: PMC6687756 DOI: 10.3389/fncom.2019.00046]
Abstract
Neural network simulation is an important tool for generating and evaluating hypotheses on the structure, dynamics, and function of neural circuits. For scientific questions addressing organisms operating autonomously in their environments, in particular where learning is involved, it is crucial to be able to operate such simulations in a closed-loop fashion. In such a set-up, the neural agent continuously receives sensory stimuli from the environment and provides motor signals that manipulate the environment or move the agent within it. So far, most studies requiring such functionality have been conducted with custom simulation scripts and manually implemented tasks. This makes it difficult for other researchers to reproduce and build upon previous work and nearly impossible to compare the performance of different learning architectures. In this work, we present a novel approach to solve this problem, connecting benchmark tools from the field of machine learning and state-of-the-art neural network simulators from computational neuroscience. The resulting toolchain enables researchers in both fields to make use of well-tested high-performance simulation software supporting biologically plausible neuron, synapse and network models and allows them to evaluate and compare their approach on the basis of standardized environments with various levels of complexity. We demonstrate the functionality of the toolchain by implementing a neuronal actor-critic architecture for reinforcement learning in the NEST simulator and successfully training it on two different environments from the OpenAI Gym. We compare its performance to a previously suggested neural network model of reinforcement learning in the basal ganglia and a generic Q-learning algorithm.
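The closed-loop setup described here reduces to a sense-act skeleton: the agent observes, acts, and the environment advances before the next observation. The sketch below uses a toy environment and a threshold policy as stand-ins for an OpenAI Gym environment and the spiking actor-critic network; all names are illustrative, not the toolchain's API.

```python
class ToyEnv:
    """Stand-in for a Gym-style environment: one scalar state."""
    def __init__(self):
        self.state = 0.0

    def step(self, action):
        # Action 1 pushes the state up, action 0 pushes it down.
        self.state += 1.0 if action == 1 else -1.0
        # Reward the agent for keeping the state near zero.
        reward = 1.0 if abs(self.state) < 2.0 else 0.0
        return self.state, reward

def agent(observation):
    """Stand-in policy: push the state back toward zero."""
    return 1 if observation < 0 else 0

env = ToyEnv()
obs, total_reward = 0.0, 0.0
for _ in range(10):            # closed loop: sense -> act -> sense -> ...
    action = agent(obs)
    obs, reward = env.step(action)
    total_reward += reward
```

The point of the toolchain is that the `agent` side of this loop can be a full spiking network in a simulator such as NEST, while the `env` side is a standardized benchmark, making different learning architectures comparable.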
Affiliation(s)
- Jakob Jordan
- Department of Physiology, University of Bern, Bern, Switzerland
- Institute of Neuroscience and Medicine (INM-6) & Institute for Advanced Simulation (IAS-6) & JARA-Institute Brain Structure Function Relationship (JBI 1/INM-10), Research Centre Jülich, Jülich, Germany
- Philipp Weidel
- Institute of Neuroscience and Medicine (INM-6) & Institute for Advanced Simulation (IAS-6) & JARA-Institute Brain Structure Function Relationship (JBI 1/INM-10), Research Centre Jülich, Jülich, Germany
- aiCTX, Zurich, Switzerland
- Department of Computer Science, RWTH Aachen University, Aachen, Germany
- Abigail Morrison
- Institute of Neuroscience and Medicine (INM-6) & Institute for Advanced Simulation (IAS-6) & JARA-Institute Brain Structure Function Relationship (JBI 1/INM-10), Research Centre Jülich, Jülich, Germany
- Faculty of Psychology, Institute of Cognitive Neuroscience, Ruhr-University Bochum, Bochum, Germany
33
Pastorelli E, Capone C, Simula F, Sanchez-Vives MV, Del Giudice P, Mattia M, Paolucci PS. Scaling of a Large-Scale Simulation of Synchronous Slow-Wave and Asynchronous Awake-Like Activity of a Cortical Model With Long-Range Interconnections. Front Syst Neurosci 2019; 13:33. [PMID: 31396058 PMCID: PMC6664086 DOI: 10.3389/fnsys.2019.00033]
Abstract
Cortical synapse organization supports a range of dynamic states on multiple spatial and temporal scales, from synchronous slow wave activity (SWA), characteristic of deep sleep or anesthesia, to fluctuating, asynchronous activity during wakefulness (AW). Such dynamic diversity poses a challenge for producing efficient large-scale simulations that embody realistic metaphors of short- and long-range synaptic connectivity. In fact, during SWA and AW different spatial extents of the cortical tissue are active in a given timespan and at different firing rates, which implies a wide variety of loads of local computation and communication. A balanced evaluation of simulation performance and robustness should therefore include tests of a variety of cortical dynamic states. Here, we demonstrate performance scaling of our proprietary Distributed and Plastic Spiking Neural Networks (DPSNN) simulation engine in both SWA and AW for bidimensional grids of neural populations, which reflects the modular organization of the cortex. We explored networks up to 192 × 192 modules, each composed of 1,250 integrate-and-fire neurons with spike-frequency adaptation, and exponentially decaying inter-modular synaptic connectivity with varying spatial decay constant. For the largest networks the total number of synapses was over 70 billion. The execution platform included up to 64 dual-socket nodes, each socket mounting 8 Intel Xeon Haswell processor cores @ 2.40 GHz clock rate. Network initialization time, memory usage, and execution time showed good scaling performances from 1 to 1,024 processes, implemented using the standard Message Passing Interface (MPI) protocol. We achieved simulation speeds of between 2.3 × 10⁹ and 4.1 × 10⁹ synaptic events per second for both cortical states in the explored range of inter-modular interconnections.
Affiliation(s)
- Elena Pastorelli
- INFN, Sezione di Roma, Rome, Italy
- PhD Program in Behavioural Neuroscience, “Sapienza” University, Rome, Italy
- Cristiano Capone
- INFN, Sezione di Roma, Rome, Italy
- National Center for Radiation Protection and Computational Physics, Istituto Superiore di Sanità, Rome, Italy
- Maria V. Sanchez-Vives
- Systems Neuroscience, IDIBAPS, Barcelona, Spain
- Department of Life and Medical Sciences, ICREA, Barcelona, Spain
- Paolo Del Giudice
- National Center for Radiation Protection and Computational Physics, Istituto Superiore di Sanità, Rome, Italy
- Maurizio Mattia
- National Center for Radiation Protection and Computational Physics, Istituto Superiore di Sanità, Rome, Italy
34
Khoyratee F, Grassia F, Saïghi S, Levi T. Optimized Real-Time Biomimetic Neural Network on FPGA for Bio-hybridization. Front Neurosci 2019; 13:377. [PMID: 31068781 PMCID: PMC6491680 DOI: 10.3389/fnins.2019.00377]
Abstract
Neurological diseases can be studied by performing bio-hybrid experiments using a real-time biomimetic Spiking Neural Network (SNN) platform. The Hodgkin-Huxley model offers a set of equations with biophysical parameters that can serve as a basis for representing different classes of neurons and affected cells. Moreover, connecting artificial neurons to biological cells would allow us to understand the effect of SNN stimulation with different parameters on nerve cells. Thus, a real-time SNN could be useful for simulating parts of the brain. Here, we present a different approach to optimizing the Hodgkin-Huxley equations for Field Programmable Gate Array (FPGA) implementation. The conductance equations have been unified to allow the use of the same functions, with different parameters, for all ionic channels. The low-resource, high-speed implementation also includes features such as synaptic noise generated by an Ornstein-Uhlenbeck process and different synapse receptors, including AMPA, GABAa, GABAb, and NMDA receptors. The platform allows real-time modification of neuron parameters and can output different cortical neuron families, such as Fast Spiking (FS), Regular Spiking (RS), Intrinsically Bursting (IB), and Low Threshold Spiking (LTS) neurons, through a Digital-to-Analog Converter (DAC). The Gaussian distribution of the synaptic noise highlights similarities with biological noise. Cross-correlation between the implementation and the model shows strong correlations, and bifurcation analysis reproduces behavior similar to that of the original Hodgkin-Huxley model. One core of computation uses 3% of the FPGA's resources and computes 500 neurons with 25,000 synapses and synaptic noise in real time, which can be scaled up to 15,000 neurons using all resources. This is a first step toward a neuromorphic system that can be used to simulate bio-hybridization and to study neurological disorders, or for advanced research on neuroprostheses to restore lost function.
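The two ingredients named in the abstract can be sketched as follows, under the assumption of a commonly used generalized rate form (the paper's exact unified function may differ): one parameterized function reused for every channel's opening/closing rates, plus an Euler-Maruyama update of an Ornstein-Uhlenbeck noise current. All parameter values are illustrative.

```python
import math
import random

def rate(V, A, B, C, D, F):
    """One generic rate function: each ionic channel's alpha/beta rates
    reuse this form with its own (A, B, C, D, F) parameters, so a single
    hardware block can serve all channels."""
    return (A + B * V) / (C + math.exp((V + D) / F))

def ou_step(I, dt, mu, theta, sigma, rng):
    """Euler-Maruyama step of an Ornstein-Uhlenbeck noise current:
    mean-reverting drift toward mu plus scaled Gaussian increments."""
    return I + theta * (mu - I) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)

rng = random.Random(42)
I = 0.0
for _ in range(1000):
    I = ou_step(I, dt=0.1, mu=0.5, theta=1.0, sigma=0.2, rng=rng)
# I now fluctuates around the mean mu = 0.5
```

Reusing one rate function for all channels is what makes the hardware pipeline compact: only the parameter set changes per channel, not the arithmetic.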
Affiliation(s)
- Farad Khoyratee
- Laboratoire de l'Intégration du Matériau au Système, Bordeaux INP, CNRS UMR 5218, University of Bordeaux, Talence, France
- Filippo Grassia
- LTI Laboratory, EA 3899, University of Picardie Jules Verne, Amiens, France
- Sylvain Saïghi
- Laboratoire de l'Intégration du Matériau au Système, Bordeaux INP, CNRS UMR 5218, University of Bordeaux, Talence, France
- Timothée Levi
- Laboratoire de l'Intégration du Matériau au Système, Bordeaux INP, CNRS UMR 5218, University of Bordeaux, Talence, France
- Institute of Industrial Science, The University of Tokyo, Tokyo, Japan
35
Fernandez-Musoles C, Coca D, Richmond P. Communication Sparsity in Distributed Spiking Neural Network Simulations to Improve Scalability. Front Neuroinform 2019; 13:19. [PMID: 31001102 PMCID: PMC6454199 DOI: 10.3389/fninf.2019.00019]
Abstract
In the last decade there has been a surge in the number of big science projects interested in achieving a comprehensive understanding of the functions of the brain, using Spiking Neuronal Network (SNN) simulations to aid discovery and experimentation. Such an approach increases the computational demands on SNN simulators: if natural-scale brain-size simulations are to be realized, it is necessary to use parallel and distributed models of computing. Communication is recognized as the dominant part of distributed SNN simulations. As the number of computational nodes increases, the proportion of time the simulation spends in useful computing (its computational efficiency) is reduced, which limits scalability. This work targets the three phases of communication to improve overall computational efficiency in distributed simulations: implicit synchronization, process handshake, and data exchange. We introduce a connectivity-aware allocation of neurons to compute nodes by modeling the SNN as a hypergraph. Partitioning the hypergraph to reduce interprocess communication increases the sparsity of the communication graph. We propose dynamic sparse exchange as an improvement over simple point-to-point exchange on sparse communications. Results show a combined gain when using hypergraph-based allocation and dynamic sparse communication, increasing computational efficiency by up to 40.8 percentage points and reducing simulation time by up to 73%. The findings are applicable to other distributed complex-system simulations in which communication is modeled as a graph network.
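Why connectivity-aware allocation matters can be seen with a toy count of cross-process synapses under two assignments of neurons to processes. A hypergraph partitioner generalizes the block strategy below to arbitrary connectivity; this is a sketch of the motivation, not the paper's algorithm.

```python
def cross_edges(edges, owner):
    """Count synapses whose pre- and post-neuron live on different processes."""
    return sum(1 for pre, post in edges if owner[pre] != owner[post])

n, procs = 8, 2
edges = [(i, i + 1) for i in range(n - 1)]     # a chain: purely local structure

# Round-robin scatters neighbors across processes; block assignment
# (a stand-in for locality-aware partitioning) keeps neighbors together.
round_robin = {i: i % procs for i in range(n)}
block = {i: i // (n // procs) for i in range(n)}

# Every one of the 7 chain edges crosses processes under round-robin,
# but only the single edge at the block boundary crosses under blocking.
assert cross_edges(edges, round_robin) == 7
assert cross_edges(edges, block) == 1
```

Fewer cross-process edges means a sparser communication graph, which is what makes the dynamic sparse exchange proposed in the paper pay off.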
Affiliation(s)
- Daniel Coca
- Automatic Control and Systems Engineering, University of Sheffield, Sheffield, United Kingdom
- Paul Richmond
- Computer Science, University of Sheffield, Sheffield, United Kingdom
36
Wunderlich T, Kungl AF, Müller E, Hartel A, Stradmann Y, Aamir SA, Grübl A, Heimbrecht A, Schreiber K, Stöckel D, Pehle C, Billaudelle S, Kiene G, Mauch C, Schemmel J, Meier K, Petrovici MA. Demonstrating Advantages of Neuromorphic Computation: A Pilot Study. Front Neurosci 2019; 13:260. [PMID: 30971881 PMCID: PMC6444279 DOI: 10.3389/fnins.2019.00260]
Abstract
Neuromorphic devices represent an attempt to mimic aspects of the brain's architecture and dynamics with the aim of replicating its hallmark functional capabilities in terms of computational power, robust learning and energy efficiency. We employ a single-chip prototype of the BrainScaleS 2 neuromorphic system to implement a proof-of-concept demonstration of reward-modulated spike-timing-dependent plasticity in a spiking network that learns to play a simplified version of the Pong video game by smooth pursuit. This system combines an electronic mixed-signal substrate for emulating neuron and synapse dynamics with an embedded digital processor for on-chip learning, which in this work also serves to simulate the virtual environment and learning agent. The analog emulation of neuronal membrane dynamics enables a 1000-fold acceleration with respect to biological real-time, with the entire chip operating on a power budget of 57 mW. Compared to an equivalent simulation using state-of-the-art software, the on-chip emulation is at least one order of magnitude faster and three orders of magnitude more energy-efficient. We demonstrate how on-chip learning can mitigate the effects of fixed-pattern noise, which is unavoidable in analog substrates, while making use of temporal variability for action exploration. Learning compensates for imperfections of the physical substrate, as manifested in neuronal parameter variability, by adapting synaptic weights to match the respective excitability of individual neurons.
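Reward-modulated STDP, the learning rule demonstrated here, can be sketched in two parts: a pair-based STDP term accumulates an eligibility trace, and a scalar reward gates whether that trace is committed to the synaptic weight. The constants and function names below are illustrative, not BrainScaleS-2's implementation.

```python
import math

def stdp_trace(delta_t, a_plus=1.0, a_minus=1.0, tau=20.0):
    """Eligibility contribution of a pre/post spike pair separated by
    delta_t = t_post - t_pre (ms): potentiating when pre precedes post,
    depressing otherwise, with exponential time windows."""
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau)
    return -a_minus * math.exp(delta_t / tau)

def apply_reward(weight, eligibility, reward, lr=0.1):
    """Commit the eligibility trace to the weight, scaled by the reward."""
    return weight + lr * reward * eligibility

e = stdp_trace(10.0)              # pre fires 10 ms before post: positive trace
w = apply_reward(0.5, e, reward=1.0)
assert w > 0.5                    # rewarded causal pairing strengthens the synapse
```

Gating the STDP trace by reward is what ties synaptic credit assignment to task performance, here the Pong score, rather than to spike timing alone.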
Affiliation(s)
- Timo Wunderlich
- Department of Physics, Kirchhoff Institute for Physics, Heidelberg University, Heidelberg, Germany
- Akos F Kungl
- Department of Physics, Kirchhoff Institute for Physics, Heidelberg University, Heidelberg, Germany
- Eric Müller
- Department of Physics, Kirchhoff Institute for Physics, Heidelberg University, Heidelberg, Germany
- Andreas Hartel
- Department of Physics, Kirchhoff Institute for Physics, Heidelberg University, Heidelberg, Germany
- Yannik Stradmann
- Department of Physics, Kirchhoff Institute for Physics, Heidelberg University, Heidelberg, Germany
- Syed Ahmed Aamir
- Department of Physics, Kirchhoff Institute for Physics, Heidelberg University, Heidelberg, Germany
- Andreas Grübl
- Department of Physics, Kirchhoff Institute for Physics, Heidelberg University, Heidelberg, Germany
- Arthur Heimbrecht
- Department of Physics, Kirchhoff Institute for Physics, Heidelberg University, Heidelberg, Germany
- Korbinian Schreiber
- Department of Physics, Kirchhoff Institute for Physics, Heidelberg University, Heidelberg, Germany
- David Stöckel
- Department of Physics, Kirchhoff Institute for Physics, Heidelberg University, Heidelberg, Germany
- Christian Pehle
- Department of Physics, Kirchhoff Institute for Physics, Heidelberg University, Heidelberg, Germany
- Sebastian Billaudelle
- Department of Physics, Kirchhoff Institute for Physics, Heidelberg University, Heidelberg, Germany
- Gerd Kiene
- Department of Physics, Kirchhoff Institute for Physics, Heidelberg University, Heidelberg, Germany
- Christian Mauch
- Department of Physics, Kirchhoff Institute for Physics, Heidelberg University, Heidelberg, Germany
- Johannes Schemmel
- Department of Physics, Kirchhoff Institute for Physics, Heidelberg University, Heidelberg, Germany
- Karlheinz Meier
- Department of Physics, Kirchhoff Institute for Physics, Heidelberg University, Heidelberg, Germany
- Mihai A Petrovici
- Department of Physics, Kirchhoff Institute for Physics, Heidelberg University, Heidelberg, Germany
- Department of Physiology, University of Bern, Bern, Switzerland
37
Closed-Loop Systems and In Vitro Neuronal Cultures: Overview and Applications. Adv Neurobiol 2019; 22:351-387. [DOI: 10.1007/978-3-030-11135-9_15]
38
Knight JC, Nowotny T. GPUs Outperform Current HPC and Neuromorphic Solutions in Terms of Speed and Energy When Simulating a Highly-Connected Cortical Model. Front Neurosci 2018; 12:941. [PMID: 30618570 PMCID: PMC6299048 DOI: 10.3389/fnins.2018.00941]
Abstract
While neuromorphic systems may be the ultimate platform for deploying spiking neural networks (SNNs), their distributed nature and optimization for specific types of models make them unwieldy tools for developing them. Instead, SNN models tend to be developed and simulated on computers or clusters of computers with standard von Neumann CPU architectures. Over the last decade, as well as becoming a common fixture in many workstations, NVIDIA GPU accelerators have entered the High Performance Computing field and are now used in 50% of the Top 10 supercomputing sites worldwide. In this paper we use our GeNN code generator to re-implement two neocortex-inspired, circuit-scale, point-neuron network models on GPU hardware. We verify the correctness of our GPU simulations against prior results obtained with NEST running on traditional HPC hardware and compare the performance with respect to speed and energy consumption against published data from CPU-based HPC and neuromorphic hardware. A full-scale model of a cortical column can be simulated at speeds approaching 0.5× real-time using a single NVIDIA Tesla V100 accelerator, faster than is currently possible using a CPU-based cluster or the SpiNNaker neuromorphic system. In addition, we find that, across a range of GPU systems, the energy to solution as well as the energy per synaptic event of the microcircuit simulation is as much as 14× lower than on either SpiNNaker or CPU-based simulations. Besides performance in terms of speed and energy consumption of the simulation, efficient initialization of models is also a crucial concern, particularly in a research context where repeated runs and parameter-space exploration are required. Therefore, we also introduce in this paper some of the novel parallel initialization methods implemented in the latest version of GeNN and demonstrate how they can enable further speed and energy advantages.
Affiliation(s)
- James C. Knight
- Centre for Computational Neuroscience and Robotics, School of Engineering and Informatics, University of Sussex, Brighton, United Kingdom
39
Geminiani A, Casellato C, Locatelli F, Prestori F, Pedrocchi A, D'Angelo E. Complex Dynamics in Simplified Neuronal Models: Reproducing Golgi Cell Electroresponsiveness. Front Neuroinform 2018; 12:88. [PMID: 30559658 PMCID: PMC6287018 DOI: 10.3389/fninf.2018.00088]
Abstract
Brain neurons exhibit complex electroresponsive properties – including intrinsic subthreshold oscillations and pacemaking, resonance and phase-reset – which are thought to play a critical role in controlling neural network dynamics. Although these properties emerge from detailed representations of molecular-level mechanisms in “realistic” models, they cannot usually be generated by simplified neuronal models (although these may show spike-frequency adaptation and bursting). We report here that this whole set of properties can be generated by the extended generalized leaky integrate-and-fire (E-GLIF) neuron model. E-GLIF derives from the GLIF model family and is therefore mono-compartmental, keeps the limited computational load typical of a linear low-dimensional system, admits analytical solutions and can be tuned through gradient-descent algorithms. Importantly, E-GLIF is designed to maintain a correspondence between model parameters and neuronal membrane mechanisms through a minimum set of equations. In order to test its potential, E-GLIF was used to model a specific neuron showing rich and complex electroresponsiveness, the cerebellar Golgi cell, and was validated against experimental electrophysiological data recorded from Golgi cells in acute cerebellar slices. During simulations, E-GLIF was activated by stimulus patterns, including current steps and synaptic inputs, identical to those used for the experiments. The results demonstrate that E-GLIF can reproduce the whole set of complex neuronal dynamics typical of these neurons – including intensity-frequency curves, spike-frequency adaptation, post-inhibitory rebound bursting, spontaneous subthreshold oscillations, resonance, and phase-reset – providing a new effective tool to investigate brain dynamics in large-scale simulations.
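The model family E-GLIF extends can be illustrated with a leaky integrate-and-fire step carrying a spike-triggered adaptation current: one extra linear variable is already enough for spike-frequency adaptation, and E-GLIF adds further structure on top. The form and parameters below are a simplified, hypothetical sketch, not the paper's exact equations.

```python
def lif_adapt_step(V, w, I_ext, dt=0.1, tau_m=10.0, tau_w=100.0,
                   V_rest=-65.0, V_th=-50.0, V_reset=-65.0, b=5.0):
    """One Euler step of an adaptive LIF neuron; returns (V, w, spiked).
    The adaptation current w opposes the drive and decays with tau_w."""
    dV = (-(V - V_rest) - w + I_ext) / tau_m
    dw = -w / tau_w
    V, w = V + dt * dV, w + dt * dw
    if V >= V_th:                  # threshold crossing: reset and adapt
        return V_reset, w + b, True
    return V, w, False

V, w, spikes = -65.0, 0.0, 0
for _ in range(5000):              # 500 ms of constant suprathreshold drive
    V, w, sp = lif_adapt_step(V, w, I_ext=20.0)
    spikes += sp
# w grows with each spike, so later inter-spike intervals lengthen:
# spike-frequency adaptation from a purely linear subthreshold system
```

Because the subthreshold dynamics stay linear, such models keep the analytical tractability and low computational load the abstract emphasizes.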
Affiliation(s)
- Alice Geminiani
- NEARLab, Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Claudia Casellato
- Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
- Francesca Locatelli
- Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
- Francesca Prestori
- Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
- Alessandra Pedrocchi
- NEARLab, Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Egidio D'Angelo
- Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
40
Luo Y, Wan L, Liu J, Harkin J, McDaid L, Cao Y, Ding X. Low Cost Interconnected Architecture for the Hardware Spiking Neural Networks. Front Neurosci 2018; 12:857. [PMID: 30524230 PMCID: PMC6258738 DOI: 10.3389/fnins.2018.00857]
Abstract
A novel low-cost interconnected architecture (LCIA) is proposed in this paper as an efficient solution for neuron interconnection in hardware spiking neural networks (SNNs). It is based on an all-to-all connection that takes each paired input and output node of a multi-layer SNN as the source and destination of a connection. The aim is to maintain efficient routing performance under low hardware overhead. A Networks-on-Chip (NoC) router is proposed as the fundamental component of the LCIA, with an effective scheduler designed to address the traffic challenges posed by irregular spikes. The router can find requests rapidly, make arbitration decisions promptly, and provide equal service to different network traffic requests. Experimental results show that the LCIA can manage the intercommunication of multi-layer neural networks efficiently with a low hardware overhead, which maintains the scalability of hardware SNNs.
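The "equal service" property of the router's scheduler is the behavior of a round-robin arbiter: among pending requests, grant the port closest after the last winner, wrapping around. The sketch below is a software illustration of that fairness idea, not the paper's RTL.

```python
def arbitrate(requests, last_grant, n_ports):
    """Round-robin arbiter: grant the first requesting port strictly
    after last_grant, wrapping around, so no port can starve."""
    for offset in range(1, n_ports + 1):
        port = (last_grant + offset) % n_ports
        if port in requests:
            return port
    return None                    # no port is requesting

grants, last = [], 0
for _ in range(4):
    last = arbitrate({1, 3}, last, n_ports=4)
    grants.append(last)
# With ports 1 and 3 persistently requesting, grants alternate 1, 3, 1, 3:
# equal service regardless of port index.
```

In hardware this is typically a priority-rotating encoder rather than a loop, but the granted sequence is the same.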
Affiliation(s)
- Yuling Luo
- Faculty of Electronic Engineering, Guangxi Normal University, Guilin, China
- Lei Wan
- Faculty of Electronic Engineering, Guangxi Normal University, Guilin, China
- Junxiu Liu
- Faculty of Electronic Engineering, Guangxi Normal University, Guilin, China
- Jim Harkin
- School of Computing, Engineering and Intelligent Systems, University of Ulster, Londonderry, United Kingdom
- Liam McDaid
- School of Computing, Engineering and Intelligent Systems, University of Ulster, Londonderry, United Kingdom
- Yi Cao
- Management Science and Business Economics Group, Business School, University of Edinburgh, Edinburgh, United Kingdom
- Xuemei Ding
- School of Computing, Engineering and Intelligent Systems, University of Ulster, Londonderry, United Kingdom
- College of Mathematics and Informatics, Fujian Normal University, Fuzhou, China
41
Jordan J, Ippen T, Helias M, Kitayama I, Sato M, Igarashi J, Diesmann M, Kunkel S. Corrigendum: Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers. Front Neuroinform 2018; 12:34. [PMID: 30008668 PMCID: PMC6039790 DOI: 10.3389/fninf.2018.00034]
Affiliation(s)
- Jakob Jordan
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain-Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Tammo Ippen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain-Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Moritz Helias
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain-Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Itaru Kitayama
- Advanced Institute for Computational Science, RIKEN, Kobe, Japan
- Mitsuhisa Sato
- Advanced Institute for Computational Science, RIKEN, Kobe, Japan
- Jun Igarashi
- Computational Engineering Applications Unit, RIKEN, Wako, Japan
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain-Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Susanne Kunkel
- Department of Computational Science and Technology, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden
- Simulation Laboratory Neuroscience - Bernstein Facility for Simulation and Database Technology, Jülich Research Centre, Jülich, Germany
42
Nowke C, Diaz-Pier S, Weyers B, Hentschel B, Morrison A, Kuhlen TW, Peyser A. Toward Rigorous Parameterization of Underconstrained Neural Network Models Through Interactive Visualization and Steering of Connectivity Generation. Front Neuroinform 2018; 12:32. [PMID: 29937723 PMCID: PMC5992991 DOI: 10.3389/fninf.2018.00032]
Abstract
Simulation models in many scientific fields can have non-unique solutions, or unique solutions that are difficult to find. Moreover, in evolving systems, unique final-state solutions can be reached by multiple different trajectories. Neuroscience is no exception. Often, neural network models are subject to parameter fitting to obtain desirable output comparable to experimental data. Parameter fitting without sufficient constraints and a systematic exploration of the possible solution space can lead to conclusions valid only around local minima, or around non-minima. To address this issue, we have developed an interactive tool for visualizing and steering parameters in neural network simulation models. In this work, we focus particularly on connectivity generation, since finding suitable connectivity configurations for neural network models constitutes a complex parameter-search scenario. The development of the tool has been guided by several use cases: the tool allows researchers to steer the parameters of connectivity generation during the simulation, thus quickly growing networks composed of multiple populations with a targeted mean activity. The flexibility of the software allows scientists to explore other connectivity and neuron variables apart from the ones presented as use cases. With this tool, we enable interactive exploration of parameter spaces, foster a better understanding of neural network models, and grapple with the crucial problem of non-unique network solutions and trajectories. In addition, we observe a reduction in turnaround times for the assessment of these models, due to interactive visualization while the simulation is computed.
Affiliation(s)
- Christian Nowke
- Visual Computing Institute, RWTH Aachen University, JARA-HPC, Aachen, Germany
- Sandra Diaz-Pier
- SimLab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Benjamin Weyers
- Visual Computing Institute, RWTH Aachen University, JARA-HPC, Aachen, Germany
- Bernd Hentschel
- Visual Computing Institute, RWTH Aachen University, JARA-HPC, Aachen, Germany
- Abigail Morrison
- SimLab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Institute of Neuroscience and Medicine, Institute for Advanced Simulation, JARA Institute Brain Structure-Function Relationships, Forschungszentrum Jülich GmbH, Jülich, Germany
- Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr-University Bochum, Bochum, Germany
- Torsten W Kuhlen
- Visual Computing Institute, RWTH Aachen University, JARA-HPC, Aachen, Germany
- Alexander Peyser
- SimLab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
43
van Albada SJ, Rowley AG, Senk J, Hopkins M, Schmidt M, Stokes AB, Lester DR, Diesmann M, Furber SB. Performance Comparison of the Digital Neuromorphic Hardware SpiNNaker and the Neural Network Simulation Software NEST for a Full-Scale Cortical Microcircuit Model. Front Neurosci 2018; 12:291. [PMID: 29875620 PMCID: PMC5974216 DOI: 10.3389/fnins.2018.00291]
Abstract
The digital neuromorphic hardware SpiNNaker has been developed with the aim of enabling large-scale neural network simulations in real time and with low power consumption. Real-time performance is achieved with 1 ms integration time steps, and thus applies to neural networks for which faster time scales of the dynamics can be neglected. By slowing down the simulation, shorter integration time steps and hence faster time scales, which are often biologically relevant, can be incorporated. We here describe the first full-scale simulations of a cortical microcircuit with biological time scales on SpiNNaker. Since about half the synapses onto the neurons arise within the microcircuit, larger cortical circuits have only moderately more synapses per neuron. Therefore, the full-scale microcircuit paves the way for simulating cortical circuits of arbitrary size. With approximately 80,000 neurons and 0.3 billion synapses, this model is the largest simulated on SpiNNaker to date. The scale-up is enabled by recent developments in the SpiNNaker software stack that allow simulations to be spread across multiple boards. Comparison with simulations using the NEST software on a high-performance cluster shows that both simulators can reach a similar accuracy, despite the fixed-point arithmetic of SpiNNaker, demonstrating the usability of SpiNNaker for computational neuroscience applications with biological time scales and large network size. The runtime and power consumption are also assessed for both simulators, using the cortical microcircuit model as an example. To obtain an accuracy similar to that of NEST with 0.1 ms time steps, SpiNNaker requires a slowdown factor of around 20 compared to real time. The runtime for NEST saturates around 3 times real time using hybrid parallelization with MPI and multi-threading. However, achieving this runtime comes at the cost of increased power and energy consumption. The lowest total energy consumption for NEST is reached with around 144 parallel threads and a slowdown factor of 4.6. At this setting, NEST and SpiNNaker have a comparable energy consumption per synaptic event. Our results widen the application domain of SpiNNaker and help guide its development, showing that further optimizations such as a synapse-centric network representation are necessary to enable real-time simulation of large biological neural networks.
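The runtime/energy trade-off in the abstract reduces to simple arithmetic: energy per synaptic event is power draw times wall-clock time, divided by the number of events in the simulated interval. The slowdown factors below come from the abstract; the power draws and the mean firing rate are invented placeholders chosen only to make the comparison concrete.

```python
# Back-of-the-envelope illustration of energy per synaptic event.
# Slowdown factors are from the abstract; power draws (W), synapse count,
# and the ~4 Hz mean rate are hypothetical placeholders.
def energy_per_synaptic_event(power_w, slowdown, n_synapses, mean_rate_hz):
    """Energy (J) per synaptic event for 1 s of biological time:
    wall-clock time = slowdown * 1 s; events = synapses * rate * 1 s."""
    wall_clock_s = slowdown * 1.0
    events = n_synapses * mean_rate_hz * 1.0
    return power_w * wall_clock_s / events

n_syn, rate = 0.3e9, 4.0  # microcircuit scale from the abstract; rate assumed

nest_fast = energy_per_synaptic_event(5000.0, 3.0, n_syn, rate)    # 3x real time
nest_lowE = energy_per_synaptic_event(2000.0, 4.6, n_syn, rate)    # 144 threads
spinnaker = energy_per_synaptic_event(500.0, 20.0, n_syn, rate)    # slowdown ~20
```

With these illustrative numbers, the fast NEST configuration costs more energy per event than the 144-thread low-energy configuration, which in turn lands close to the SpiNNaker figure, mirroring the qualitative conclusion of the abstract.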
Affiliation(s)
- Sacha J van Albada
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Andrew G Rowley
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, United Kingdom
- Johanna Senk
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Michael Hopkins
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, United Kingdom
- Maximilian Schmidt
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany; Laboratory for Neural Circuit Theory, RIKEN Brain Science Institute, Wako, Japan
- Alan B Stokes
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, United Kingdom
- David R Lester
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, United Kingdom
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany; Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany; Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Steve B Furber
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, United Kingdom
|