1. Igarashi J. Future projections for mammalian whole-brain simulations based on technological trends in related fields. Neurosci Res 2024:S0168-0102(24)00138-X. PMID: 39571736; DOI: 10.1016/j.neures.2024.11.005.
Abstract
Large-scale brain simulation allows us to study the interactions of vast numbers of neurons with nonlinear dynamics, helping to elucidate the brain's information processing mechanisms. The scale of brain simulations continues to rise as computer performance improves exponentially. However, a simulation of the whole human brain had not yet been achieved as of 2024, owing to insufficient computational performance and brain measurement data. This paper examines technological trends in supercomputers, cell type classification, connectomics, and large-scale activity measurements relevant to whole-brain simulation. Based on these trends, we attempt to predict the feasible timeframe for mammalian whole-brain simulation. Our estimates suggest that mouse whole-brain simulation at the cellular level could be realized around 2034, marmoset around 2044, and human likely later than 2044.
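The projection logic in such forecasts can be made concrete with a back-of-the-envelope extrapolation. A minimal sketch follows; every constant in it (growth rate, per-neuron and per-synapse costs, neuron and synapse counts) is an illustrative placeholder rather than a value from the paper, so the printed dates will not reproduce the paper's estimates — only the structure of the extrapolation is the point:

```python
import math

# Illustrative assumption: sustained compute grows ~10x every 4 years,
# roughly the historical supercomputing trend such forecasts build on.
FLOPS_2024 = 1e18            # order of magnitude of an exascale machine
GROWTH_PER_YEAR = 10 ** (1 / 4)

def year_feasible(n_neurons, n_synapses,
                  flops_per_neuron=1e4,    # placeholder per-update costs
                  flops_per_synapse=1e3,
                  steps_per_second=1e4):   # 0.1 ms time step, real time
    """Year when a real-time cellular-level simulation fits the machine."""
    required = (n_neurons * flops_per_neuron
                + n_synapses * flops_per_synapse) * steps_per_second
    years = math.log(required / FLOPS_2024, GROWTH_PER_YEAR)
    return 2024 + max(years, 0.0)

print(year_feasible(7.1e7, 7e11))    # mouse-scale neuron/synapse counts
print(year_feasible(8.6e10, 1e15))   # human-scale counts
```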
Affiliation(s)
- Jun Igarashi
- High Performance Artificial Intelligence Systems Research Team, Center for Computational Science, RIKEN, Japan.
2. Wang C, Zhang T, Chen X, He S, Li S, Wu S. BrainPy, a flexible, integrative, efficient, and extensible framework for general-purpose brain dynamics programming. eLife 2023; 12:e86365. PMID: 38132087; PMCID: PMC10796146; DOI: 10.7554/elife.86365.
Abstract
Elucidating the intricate neural mechanisms underlying brain functions requires integrative brain dynamics modeling. To facilitate this process, it is crucial to develop a general-purpose programming framework that allows users to freely define neural models across multiple scales, efficiently simulate, train, and analyze model dynamics, and conveniently incorporate new modeling approaches. In response to this need, we present BrainPy. BrainPy leverages the advanced just-in-time (JIT) compilation capabilities of JAX and XLA to provide a powerful infrastructure tailored for brain dynamics programming. It offers an integrated platform for building, simulating, training, and analyzing brain dynamics models. Models defined in BrainPy can be JIT-compiled into binary instructions for various devices, including Central Processing Units (CPUs), Graphics Processing Units (GPUs), and Tensor Processing Units (TPUs), which ensures high running performance comparable to that of native C or CUDA. Additionally, BrainPy features an extensible architecture that allows for easy expansion of new infrastructure, utilities, and machine-learning approaches. This flexibility enables researchers to incorporate cutting-edge techniques and adapt the framework to their specific needs.
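The core idea — writing model dynamics once in Python and letting JAX/XLA JIT-compile them for CPU, GPU, or TPU — can be illustrated with plain JAX. This is a sketch of the compilation approach, not BrainPy's actual API; the model, parameter names, and self-feedback wiring are simplifications:

```python
import jax
import jax.numpy as jnp

@jax.jit  # traced once, compiled by XLA for the available backend
def lif_step(v, spikes_in, tau=10.0, v_rest=-65.0, v_th=-50.0,
             v_reset=-65.0, dt=0.1, w=0.5):
    """One Euler step of a population of leaky integrate-and-fire neurons."""
    i_syn = w * spikes_in                         # toy synaptic input
    v = v + (-(v - v_rest) + i_syn) * (dt / tau)  # leaky integration
    spiked = v >= v_th                            # threshold detection
    v = jnp.where(spiked, v_reset, v)             # reset spiking neurons
    return v, spiked.astype(jnp.float32)

v = jnp.full((1000,), -65.0)         # 1,000 neurons
spikes = jnp.zeros((1000,))
for _ in range(100):                 # 10 ms of model time at dt = 0.1 ms
    v, spikes = lif_step(v, spikes)  # runs as compiled device code
```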
Affiliation(s)
- Chaoming Wang
- School of Psychological and Cognitive Sciences, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Center of Quantitative Biology, Academy for Advanced Interdisciplinary Studies, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Guangdong Institute of Intelligence Science and Technology, Guangdong, China
- Tianqiu Zhang
- School of Psychological and Cognitive Sciences, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Center of Quantitative Biology, Academy for Advanced Interdisciplinary Studies, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Xiaoyu Chen
- School of Psychological and Cognitive Sciences, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Center of Quantitative Biology, Academy for Advanced Interdisciplinary Studies, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Sichao He
- Beijing Jiaotong University, Beijing, China
- Shangyang Li
- School of Psychological and Cognitive Sciences, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Center of Quantitative Biology, Academy for Advanced Interdisciplinary Studies, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Si Wu
- School of Psychological and Cognitive Sciences, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Center of Quantitative Biology, Academy for Advanced Interdisciplinary Studies, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Guangdong Institute of Intelligence Science and Technology, Guangdong, China
3. Schmitt FJ, Rostami V, Nawrot MP. Efficient parameter calibration and real-time simulation of large-scale spiking neural networks with GeNN and NEST. Front Neuroinform 2023; 17:941696. PMID: 36844916; PMCID: PMC9950635; DOI: 10.3389/fninf.2023.941696.
Abstract
Spiking neural networks (SNNs) represent the state-of-the-art approach to the biologically realistic modeling of nervous system function. The systematic calibration of multiple free model parameters is necessary to achieve robust network function and demands high computing power and large memory resources. Special requirements arise from closed-loop model simulation in virtual environments and from real-time simulation in robotic applications. Here, we compare two complementary approaches to efficient large-scale and real-time SNN simulation. The widely used NEural Simulation Tool (NEST) parallelizes simulation across multiple CPU cores. The GPU-enhanced Neural Network (GeNN) simulator uses the highly parallel GPU-based architecture to gain simulation speed. We quantify fixed and variable simulation costs on single machines with different hardware configurations. As a benchmark model, we use a spiking cortical attractor network with a topology of densely connected excitatory and inhibitory neuron clusters with homogeneous or distributed synaptic time constants, and compare it to the random balanced network. We show that simulation time scales linearly with the simulated biological model time and, for large networks, approximately linearly with the model size, as dominated by the number of synaptic connections. Additional fixed costs with GeNN are almost independent of model size, while fixed costs with NEST increase linearly with model size. We demonstrate how GeNN can be used for simulating networks with up to 3.5 × 10⁶ neurons (>3 × 10¹² synapses) on a high-end GPU, and up to 250,000 neurons (25 × 10⁹ synapses) on a low-cost GPU. Real-time simulation was achieved for networks with 100,000 neurons. Network calibration and parameter grid search can be efficiently achieved using batch processing. We discuss the advantages and disadvantages of both approaches for different use cases.
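The cost decomposition described here — a fixed setup cost plus a variable cost that scales with model size and biological time — can be recovered from benchmark runs with a linear fit. A minimal sketch with fabricated timing numbers (they are for illustration only, not measurements from the paper):

```python
import numpy as np

# Wall-clock cost (s) of simulating 1 s of biological time at several
# model sizes; the values below are invented for illustration.
n_synapses = np.array([1e9, 5e9, 1e10, 5e10])
t_sim_1s   = np.array([3.2, 15.1, 30.4, 150.8])
t_setup    = np.array([12.0, 13.1, 12.7, 14.0])  # ~size-independent (GeNN-like)

# Fit t_sim = a * n_synapses + b: per-synapse variable cost plus overhead.
a, b = np.polyfit(n_synapses, t_sim_1s, 1)
print(f"variable cost: {a:.2e} s per synapse per biological second")
print(f"residual overhead: {b:.2f} s; setup cost: {t_setup.mean():.1f} s")

# Real-time factor (wall seconds per biological second) at a target size:
print("real-time factor at 1e10 synapses:", a * 1e10 + b)
```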
Affiliation(s)
- Martin Paul Nawrot
- Computational Systems Neuroscience, Institute of Zoology, University of Cologne, Cologne, Germany
4. Végh J, Berki ÁJ. On the Role of Speed in Technological and Biological Information Transfer for Computations. Acta Biotheor 2022; 70:26. PMID: 36287247; PMCID: PMC9606061; DOI: 10.1007/s10441-022-09450-6.
Abstract
In all kinds of computing implementations, whether technological or biological, some material carrier for the information exists, so in real-world implementations the propagation speed of information cannot exceed the speed of its carrier. Because of this limitation, one must also consider the transfer time between computing units for any implementation. Accounting for this limitation requires a different mathematical treatment: classic mathematics can describe only computing systems whose implementations are idealized as infinitely fast and infinitely small. The two mathematical treatments lead to different descriptions of the systems' computing features. The proposed treatment also explains why biological implementations can sustain lifelong learning while technological ones cannot. Our conclusion about learning matches published experimental evidence, in both biological and technological computing.
Affiliation(s)
- Ádám József Berki
- Department of Neurology, Semmelweis University, 1085 Budapest, Hungary
- János Szentágothai Doctoral School of Neurosciences, Semmelweis University, 1085 Budapest, Hungary
5. Mascart C, Scarella G, Reynaud-Bouret P, Muzy A. Scalability of Large Neural Network Simulations via Activity Tracking With Time Asynchrony and Procedural Connectivity. Neural Comput 2022; 34:1915-1943. PMID: 35896155; DOI: 10.1162/neco_a_01524.
Abstract
We present a new algorithm to efficiently simulate random models of large neural networks satisfying the property of time asynchrony. The model parameters (average firing rate, number of neurons, synaptic connection probability, and postsynaptic duration) are of the order of magnitude of a small mammalian brain, or of human brain areas. Through the use of activity tracking and procedural connectivity (dynamical regeneration of synapses), the computational and memory complexities of this algorithm are proved to be theoretically linear in the number of neurons. These results are experimentally validated by sequential simulations of millions of neurons and billions of synapses running in a few minutes on a single thread of a standard desktop computer.
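Procedural connectivity is the central memory-saving device: a neuron's outgoing synapses are regenerated on demand from a deterministic per-neuron random stream instead of being stored. A minimal sketch of that idea (the sizes, probability, and seeding scheme are illustrative, not the authors' implementation):

```python
import numpy as np

N = 1_000_000   # neurons
P = 0.0005      # connection probability (~500 targets per neuron)

def targets_of(src: int, seed: int = 1234) -> np.ndarray:
    """Regenerate the postsynaptic targets of neuron `src` on demand.

    The same (seed, src) pair always yields the same stream, so the
    connectivity never has to be stored: memory stays O(N) rather
    than O(P * N^2)."""
    rng = np.random.default_rng((seed, src))     # per-neuron stream
    k = rng.binomial(N, P)                       # number of targets
    return rng.choice(N, size=k, replace=False)  # their indices

post = targets_of(42)                        # recomputed when neuron 42 spikes
assert np.array_equal(post, targets_of(42))  # reproducible across calls
```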
Affiliation(s)
- Gilles Scarella
- Université Côte d'Azur, CNRS, I3S, France
- Université Côte d'Azur, CNRS, LJAD, 06103 Nice, France
6. Trensch G, Morrison A. A System-on-Chip Based Hybrid Neuromorphic Compute Node Architecture for Reproducible Hyper-Real-Time Simulations of Spiking Neural Networks. Front Neuroinform 2022; 16:884033. PMID: 35846779; PMCID: PMC9277345; DOI: 10.3389/fninf.2022.884033.
Abstract
Despite the great strides neuroscience has made in recent decades, the underlying principles of brain function remain largely unknown. Advancing the field strongly depends on the ability to study large-scale neural networks and perform complex simulations. In this context, simulations in hyper-real-time are of high interest, as they would enable both comprehensive parameter scans and the study of slow processes, such as learning and long-term memory. Not even the fastest supercomputer available today is able to meet the challenge of accurate and reproducible simulation with hyper-real acceleration. The development of novel neuromorphic computer architectures holds out promise, but the high costs and long development cycles of application-specific hardware solutions make it difficult to keep pace with the rapid developments in neuroscience. However, advances in System-on-Chip (SoC) device technology and tools are now providing interesting new design possibilities for application-specific implementations. Here, we present a novel hybrid software-hardware architecture approach for a neuromorphic compute node intended to work in a multi-node cluster configuration. The node design builds on the Xilinx Zynq-7000 SoC device architecture, which combines a powerful field-programmable gate array (FPGA) and a dual-core ARM Cortex-A9 processor extension on a single chip. Our proposed architecture makes use of both and takes advantage of their tight coupling. We show that available SoC device technology can be used to build smaller neuromorphic computing clusters that enable hyper-real-time simulation of networks consisting of tens of thousands of neurons, and are thus capable of meeting the high demands for modeling and simulation in neuroscience.
Affiliation(s)
- Guido Trensch
- Simulation and Data Laboratory Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, Jülich Research Centre, Jülich, Germany
- Department of Computer Science 3—Software Engineering, RWTH Aachen University, Aachen, Germany
- Abigail Morrison
- Simulation and Data Laboratory Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, Jülich Research Centre, Jülich, Germany
- Department of Computer Science 3—Software Engineering, RWTH Aachen University, Aachen, Germany
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA-Institute Brain Structure-Function Relationship (JBI-1/INM-10), Research Centre Jülich, Jülich, Germany
7. Combining High-Resolution Hard X-ray Tomography and Histology for Stem Cell-Mediated Distraction Osteogenesis. Appl Sci (Basel) 2022. DOI: 10.3390/app12126286.
Abstract
Distraction osteogenesis is a clinically established technique for lengthening, molding and shaping bone by new bone formation. The experimental evaluation of this expensive and time-consuming treatment is highly valuable for a better understanding of tissue engineering, but it mainly relies on a limited number of histological slices. These tissue slices contain two-dimensional information comprising only about one percent of the volume of interest. In order to analyze the soft and hard tissues of the entire jaw of a single rat in a multimodal assessment, we combined micro computed tomography (µCT) with histology. The µCT data acquired before and after decalcification were registered to determine the impact of decalcification on local tissue shrinkage. Identification of the location of the H&E-stained specimen within the synchrotron radiation-based µCT data collected after decalcification was achieved via non-rigid slice-to-volume registration. The resulting bi- and tri-variate histograms were divided into clusters related to anatomical features of bone and soft tissues. This allowed for a comparison of the approaches and led to the hypothesis that the combination of laboratory-based µCT before decalcification, synchrotron radiation-based µCT after decalcification, and histology with hematoxylin-and-eosin staining could be used to discriminate between different types of collagen, key components of new bone formation.
8. Feldotto B, Eppler JM, Jimenez-Romero C, Bignamini C, Gutierrez CE, Albanese U, Retamino E, Vorobev V, Zolfaghari V, Upton A, Sun Z, Yamaura H, Heidarinejad M, Klijn W, Morrison A, Cruz F, McMurtrie C, Knoll AC, Igarashi J, Yamazaki T, Doya K, Morin FO. Deploying and Optimizing Embodied Simulations of Large-Scale Spiking Neural Networks on HPC Infrastructure. Front Neuroinform 2022; 16:884180. PMID: 35662903; PMCID: PMC9160925; DOI: 10.3389/fninf.2022.884180.
Abstract
Simulating the brain-body-environment trinity in closed loop is an attractive proposal to investigate how perception, motor activity and interactions with the environment shape brain activity, and vice versa. The relevance of this embodied approach, however, hinges entirely on the modeled complexity of the various simulated phenomena. In this article, we introduce a software framework that is capable of simulating large-scale, biologically realistic networks of spiking neurons embodied in a biomechanically accurate musculoskeletal system that interacts with a physically realistic virtual environment. We deploy this framework on the high-performance computing resources of the EBRAINS research infrastructure and investigate its scaling performance by distributing computation across an increasing number of interconnected compute nodes. Our architecture is based on requested compute nodes as well as persistent virtual machines; this provides a high-performance simulation environment that is accessible to multi-domain users without expert knowledge, with a view to enabling users to instantiate and control simulations at custom scale via a web-based graphical user interface. Our simulation environment, which is entirely open source, is based on the Neurorobotics Platform developed in the context of the Human Brain Project, and on the NEST simulator. We characterize the capabilities of our parallelized architecture for large-scale embodied brain simulations through two benchmark experiments, investigating the effects of scaling compute resources on performance defined in terms of experiment runtime, brain instantiation time, and simulation time. The first benchmark is based on a large-scale balanced network, while the second is a multi-region embodied brain simulation consisting of more than a million neurons and a billion synapses. Both benchmarks clearly show how scaling compute resources improves the aforementioned performance metrics in a near-linear fashion. The second benchmark in particular is indicative of both the potential and the limitations of a highly distributed simulation in terms of a trade-off between computation speed and resource cost. Our simulation architecture is being prepared to be made accessible to everyone as an EBRAINS service, thereby offering a community-wide tool with a unique workflow that should provide momentum to the investigation of closed-loop embodiment within the computational neuroscience community.
Affiliation(s)
- Benedikt Feldotto
- Robotics, Artificial Intelligence and Real-Time Systems, Faculty of Informatics, Technical University of Munich, Munich, Germany
- Jochen Martin Eppler
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Cristian Jimenez-Romero
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Carlos Enrique Gutierrez
- Neural Computation Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa, Japan
- Ugo Albanese
- Department of Excellence in Robotics and AI, The BioRobotics Institute, Scuola Superiore Sant'Anna, Pontedera, Italy
- Eloy Retamino
- Department of Computer Architecture and Technology, Research Centre for Information and Communication Technologies, University of Granada, Granada, Spain
- Viktor Vorobev
- Robotics, Artificial Intelligence and Real-Time Systems, Faculty of Informatics, Technical University of Munich, Munich, Germany
- Vahid Zolfaghari
- Robotics, Artificial Intelligence and Real-Time Systems, Faculty of Informatics, Technical University of Munich, Munich, Germany
- Alex Upton
- Swiss National Supercomputing Centre (CSCS), ETH Zurich, Lugano, Switzerland
- Zhe Sun
- Image Processing Research Team, Center for Advanced Photonics, RIKEN, Wako, Japan
- Computational Engineering Applications Unit, Head Office for Information Systems and Cybersecurity, RIKEN, Wako, Japan
- Hiroshi Yamaura
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
- Morteza Heidarinejad
- Computational Engineering Applications Unit, Head Office for Information Systems and Cybersecurity, RIKEN, Wako, Japan
- Wouter Klijn
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Abigail Morrison
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Jülich Research Centre, Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA BRAIN Institute I, Jülich, Germany
- Computer Science 3-Software Engineering, RWTH Aachen University, Aachen, Germany
- Felipe Cruz
- Swiss National Supercomputing Centre (CSCS), ETH Zurich, Lugano, Switzerland
- Colin McMurtrie
- Swiss National Supercomputing Centre (CSCS), ETH Zurich, Lugano, Switzerland
- Alois C. Knoll
- Robotics, Artificial Intelligence and Real-Time Systems, Faculty of Informatics, Technical University of Munich, Munich, Germany
- Jun Igarashi
- Computational Engineering Applications Unit, Head Office for Information Systems and Cybersecurity, RIKEN, Wako, Japan
- Center for Computational Science, RIKEN, Kobe, Japan
- Tadashi Yamazaki
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
- Kenji Doya
- Neural Computation Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa, Japan
- Fabrice O. Morin
- Robotics, Artificial Intelligence and Real-Time Systems, Faculty of Informatics, Technical University of Munich, Munich, Germany
9. Müller E, Arnold E, Breitwieser O, Czierlinski M, Emmel A, Kaiser J, Mauch C, Schmitt S, Spilger P, Stock R, Stradmann Y, Weis J, Baumbach A, Billaudelle S, Cramer B, Ebert F, Göltz J, Ilmberger J, Karasenko V, Kleider M, Leibfried A, Pehle C, Schemmel J. A Scalable Approach to Modeling on Accelerated Neuromorphic Hardware. Front Neurosci 2022; 16:884128. PMID: 35663548; PMCID: PMC9157770; DOI: 10.3389/fnins.2022.884128.
Abstract
Neuromorphic systems open up opportunities to enlarge the explorative space for computational research. However, it is often challenging to unite efficiency and usability. This work presents the software aspects of this endeavor for the BrainScaleS-2 system, a hybrid accelerated neuromorphic hardware architecture based on physical modeling. We introduce key aspects of the BrainScaleS-2 Operating System: experiment workflow, API layering, software design, and platform operation. We present use cases to discuss and derive requirements for the software and showcase the implementation. The focus lies on novel system and software features such as multi-compartmental neurons, fast re-configuration for hardware-in-the-loop training, applications for the embedded processors, the non-spiking operation mode, interactive platform access, and sustainable hardware/software co-development. Finally, we discuss further developments in terms of hardware scale-up, system usability, and efficiency.
Affiliation(s)
- Eric Müller
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Elias Arnold
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Oliver Breitwieser
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Milena Czierlinski
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Arne Emmel
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Jakob Kaiser
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Christian Mauch
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Sebastian Schmitt
- Third Institute of Physics, University of Göttingen, Göttingen, Germany
- Philipp Spilger
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Raphael Stock
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Yannik Stradmann
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Johannes Weis
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Andreas Baumbach
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Department of Physiology, University of Bern, Bern, Switzerland
- Benjamin Cramer
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Falk Ebert
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Julian Göltz
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Department of Physiology, University of Bern, Bern, Switzerland
- Joscha Ilmberger
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Vitali Karasenko
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Mitja Kleider
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Aron Leibfried
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Christian Pehle
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Johannes Schemmel
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
10. Albers J, Pronold J, Kurth AC, Vennemo SB, Haghighi Mood K, Patronis A, Terhorst D, Jordan J, Kunkel S, Tetzlaff T, Diesmann M, Senk J. A Modular Workflow for Performance Benchmarking of Neuronal Network Simulations. Front Neuroinform 2022; 16:837549. PMID: 35645755; PMCID: PMC9131021; DOI: 10.3389/fninf.2022.837549.
Abstract
Modern computational neuroscience strives to develop complex network models to explain dynamics and function of brains in health and disease. This process goes hand in hand with advancements in the theory of neuronal networks and increasing availability of detailed anatomical data on brain connectivity. Large-scale models that study interactions between multiple brain areas with intricate connectivity and investigate phenomena on long time scales such as system-level learning require progress in simulation speed. The corresponding development of state-of-the-art simulation engines relies on information provided by benchmark simulations which assess the time-to-solution for scientifically relevant, complementary network models using various combinations of hardware and software revisions. However, maintaining comparability of benchmark results is difficult due to a lack of standardized specifications for measuring the scaling performance of simulators on high-performance computing (HPC) systems. Motivated by the challenging complexity of benchmarking, we define a generic workflow that decomposes the endeavor into unique segments consisting of separate modules. As a reference implementation for the conceptual workflow, we develop beNNch: an open-source software framework for the configuration, execution, and analysis of benchmarks for neuronal network simulations. The framework records benchmarking data and metadata in a unified way to foster reproducibility. For illustration, we measure the performance of various versions of the NEST simulator across network models with different levels of complexity on a contemporary HPC system, demonstrating how performance bottlenecks can be identified, ultimately guiding the development toward more efficient simulation technology.
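The discipline at the heart of such a workflow — never record a time-to-solution without the metadata needed to reproduce it — fits in a few lines. A minimal sketch (the command, file layout, and field names are illustrative, not beNNch's actual schema):

```python
import json, platform, subprocess, time, uuid

def run_benchmark(cmd, model, nodes):
    """Run one benchmark and store its result with reproducibility metadata."""
    t0 = time.perf_counter()
    subprocess.run(cmd, check=True)  # e.g. an MPI/SLURM launcher invocation
    record = {
        "id": str(uuid.uuid4()),
        "model": model,
        "nodes": nodes,
        "time_to_solution_s": time.perf_counter() - t0,
        "host": platform.node(),
        "python": platform.python_version(),
        "cmd": cmd,
    }
    with open(f"bench_{record['id']}.json", "w") as f:
        json.dump(record, f, indent=2)  # result and metadata stay together
    return record

# Scaling sweep: identical model, growing node counts (names hypothetical).
for n in (1, 2, 4, 8):
    run_benchmark(["srun", "-N", str(n), "python", "microcircuit.py"],
                  "microcircuit", n)
```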
Affiliation(s)
- Jasper Albers
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- RWTH Aachen University, Aachen, Germany
- Jari Pronold
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- RWTH Aachen University, Aachen, Germany
- Anno Christopher Kurth
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- RWTH Aachen University, Aachen, Germany
- Stine Brekke Vennemo
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Alexander Patronis
- Jülich Supercomputing Centre (JSC), Jülich Research Centre, Jülich, Germany
- Dennis Terhorst
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Jakob Jordan
- Department of Physiology, University of Bern, Bern, Switzerland
- Susanne Kunkel
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Tom Tetzlaff
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, School of Medicine, RWTH Aachen University, Aachen, Germany
- Johanna Senk
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
11. Pronold J, Jordan J, Wylie BJN, Kitayama I, Diesmann M, Kunkel S. Routing Brain Traffic Through the Von Neumann Bottleneck: Parallel Sorting and Refactoring. Front Neuroinform 2022; 15:785068. PMID: 35300490; PMCID: PMC8921864; DOI: 10.3389/fninf.2021.785068.
Abstract
Generic simulation code for spiking neuronal networks spends the major part of the time in the phase where spikes have arrived at a compute node and need to be delivered to their target neurons. These spikes were emitted over the last interval between communication steps by source neurons distributed across many compute nodes and are inherently irregular and unsorted with respect to their targets. For finding those targets, the spikes need to be dispatched to a three-dimensional data structure, with decisions on target thread and synapse type made along the way. With growing network size, a compute node receives spikes from an increasing number of different source neurons, until in the limit each synapse on the compute node has a unique source. Here, we show analytically how this sparsity emerges over the practically relevant range of network sizes from a hundred thousand to a billion neurons. By profiling a production code, we investigate opportunities for algorithmic changes to avoid indirections and branching. Every thread hosts an equal share of the neurons on a compute node. In the original algorithm, all threads search through all spikes to pick out the relevant ones. With increasing network size, the fraction of hits remains invariant, but the absolute number of rejections grows. Our new alternative algorithm equally divides the spikes among the threads and immediately sorts them in parallel according to target thread and synapse type. After this, every thread completes delivery solely of the section of spikes for its own neurons. Independent of the number of threads, all spikes are looked at only twice. The new algorithm halves the number of instructions in spike delivery, which leads to a reduction of simulation time of up to 40%. Thus, spike delivery is a fully parallelizable process with a single synchronization point and thereby well suited for many-core systems. Our analysis indicates that further progress requires a reduction of the latency that the instructions experience in accessing memory. The study provides the foundation for the exploration of latency-hiding methods like software pipelining and software-induced prefetching.
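The algorithmic change can be paraphrased in a few lines: instead of every thread scanning the whole spike buffer for its own targets, the spikes are first divided equally among threads and binned by (target thread, synapse type); each thread then delivers only its own sections. A simplified sequential sketch of the two passes (the data layout is illustrative):

```python
from collections import defaultdict

# One spike record: (target_thread, synapse_type, target_neuron, weight)
def deliver(spikes, n_threads, apply_spike):
    # Pass 1: each thread takes an equal slice of the unsorted buffer and
    # bins it by target thread and synapse type (parallel in the real code).
    bins = [defaultdict(list) for _ in range(n_threads)]
    chunk = (len(spikes) + n_threads - 1) // n_threads
    for t in range(n_threads):
        for sp in spikes[t * chunk:(t + 1) * chunk]:
            bins[t][(sp[0], sp[1])].append(sp)
    # Pass 2: each thread walks only the sections addressed to it; no
    # thread ever rejects a spike, so every spike is touched exactly twice.
    for target in range(n_threads):
        for t in range(n_threads):
            for (tt, syn), section in bins[t].items():
                if tt != target:
                    continue
                for _, _, neuron, weight in section:
                    apply_spike(target, syn, neuron, weight)

# Toy usage: two spikes, two threads, delivery stubbed out.
deliver([(0, 0, 7, 0.1), (1, 0, 3, 0.2)], 2, lambda *args: None)
```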
Affiliation(s)
- Jari Pronold
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- RWTH Aachen University, Aachen, Germany
- Jakob Jordan
- Department of Physiology, University of Bern, Bern, Switzerland
- Brian J. N. Wylie
- Jülich Supercomputing Centre, Jülich Research Centre, Jülich, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Susanne Kunkel
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
12. Dasbach S, Tetzlaff T, Diesmann M, Senk J. Dynamical Characteristics of Recurrent Neuronal Networks Are Robust Against Low Synaptic Weight Resolution. Front Neurosci 2021; 15:757790. PMID: 35002599; PMCID: PMC8740282; DOI: 10.3389/fnins.2021.757790.
Abstract
The representation of the natural-density, heterogeneous connectivity of neuronal network models at relevant spatial scales remains a challenge for Computational Neuroscience and Neuromorphic Computing. In particular, the memory demands imposed by the vast number of synapses in brain-scale network simulations constitute a major obstacle. Limiting the number resolution of synaptic weights appears to be a natural strategy to reduce memory and compute load. In this study, we investigate the effects of a limited synaptic-weight resolution on the dynamics of recurrent spiking neuronal networks resembling local cortical circuits and develop strategies for minimizing deviations from the dynamics of networks with high-resolution synaptic weights. We mimic the effect of a limited synaptic weight resolution by replacing normally distributed synaptic weights with weights drawn from a discrete distribution, and compare the resulting statistics characterizing firing rates, spike-train irregularity, and correlation coefficients with the reference solution. We show that a naive discretization of synaptic weights generally leads to a distortion of the spike-train statistics. If the weights are discretized such that the mean and the variance of the total synaptic input currents are preserved, the firing statistics remain unaffected for the types of networks considered in this study. For networks with sufficiently heterogeneous in-degrees, the firing statistics can be preserved even if all synaptic weights are replaced by the mean of the weight distribution. We conclude that even for simple networks with non-plastic neurons and synapses, a discretization of synaptic weights can lead to substantial deviations in the firing statistics unless the discretization is performed with care and guided by a rigorous validation process. For the network model used in this study, the synaptic weights can be replaced by low-resolution weights without affecting its macroscopic dynamical characteristics, thereby saving substantial amounts of memory.
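The distinction between naive rounding and a moment-preserving discretization can be sketched directly. The affine rescaling below is one simple way to match the mean and variance of the weight distribution; the paper's actual procedure differs, and the grid size is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(0.1, 0.01, size=100_000)  # high-resolution reference weights

def discretize(weights, n_levels=8):
    """Naive rounding onto a uniform grid spanning the weight range."""
    grid = np.linspace(weights.min(), weights.max(), n_levels)
    return grid[np.abs(weights[:, None] - grid[None, :]).argmin(axis=1)]

w_naive = discretize(w)
# Correction so mean and variance of the summed synaptic input are
# preserved (values leave the grid slightly; a sketch, not the paper's method).
w_corr = (w_naive - w_naive.mean()) / w_naive.std() * w.std() + w.mean()

for name, ww in [("original", w), ("naive", w_naive), ("corrected", w_corr)]:
    print(f"{name:9s} mean={ww.mean():.5f} std={ww.std():.5f}")
```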
Affiliation(s)
- Stefan Dasbach
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Tom Tetzlaff
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany
- Johanna Senk
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
13. Kobayashi T, Kuriyama R, Yamazaki T. Testing an Explicit Method for Multi-compartment Neuron Model Simulation on a GPU. Cognit Comput 2021. DOI: 10.1007/s12559-021-09942-6.
14.
Abstract
Experience shows that cooperating and communicating computing systems, comprising segregated single processors, have severe performance limitations, which cannot be explained using von Neumann's classic computing paradigm. In his classic "First Draft," he warned that using a "too fast processor" vitiates his simple "procedure" (but not his computing model!); furthermore, that using the classic computing paradigm for imitating neuronal operations is unsound. Amdahl added that large machines, comprising many processors, have an inherent disadvantage. Artificial neural networks (ANNs) have components that communicate heavily with each other, yet they are built from large numbers of components designed and fabricated for use in conventional computing; they attempt to mimic biological operation using ill-suited technological solutions, and their achievable payload computing performance is conceptually modest. The type of workload that artificial intelligence-based systems generate leads to exceptionally low payload computational performance, and their design and technology limit their size to just above "toy"-level systems: the scaling of processor-based ANN systems is strongly nonlinear. Given the proliferation and growing size of ANN systems, we suggest ideas for estimating, in advance, the efficiency of a device or application; the wealth of ANN implementations and the proprietary nature of technical data do not enable more. Through analyzing published measurements, we provide evidence that data transfer time drastically influences both the performance and the feasibility of ANNs. We discuss how some major theoretical limiting factors, the layer structure of ANNs, and the technical implementation of their communication affect their efficiency. The paper starts from von Neumann's original model, without neglecting transfer time relative to processing time, and derives an appropriate interpretation and handling of Amdahl's law. It shows that, in this interpretation, Amdahl's law correctly describes ANNs.
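The paper's central point can be condensed into one formula. A sketch in my own notation (not taken verbatim from the paper): classic Amdahl's law bounds the speedup of a parallelizable fraction α on k processors, and adding an irreducible transfer-time fraction β(k) for data movement between components lowers the ceiling further:

```latex
\[
  S_k \;=\; \frac{1}{(1-\alpha) + \dfrac{\alpha}{k} + \beta(k)},
  \qquad
  \lim_{k\to\infty} S_k \;\le\; \frac{1}{(1-\alpha) + \beta(k)}.
\]
% Since beta(k) typically grows with the number of communicating components,
% heavily communicating ANN systems hit a size-dependent performance ceiling.
```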
15. Yamazaki T, Igarashi J, Yamaura H. Human-scale Brain Simulation via Supercomputer: A Case Study on the Cerebellum. Neuroscience 2021; 462:235-246. PMID: 33482329; DOI: 10.1016/j.neuroscience.2021.01.014.
Abstract
Performance of supercomputers has been steadily and exponentially increasing for the past 20 years, and is expected to increase further. This unprecedented computational power enables us to build and simulate large-scale neural network models composed of tens of billions of neurons and tens of trillions of synapses with detailed anatomical connections and realistic physiological parameters. Such "human-scale" brain simulation could be considered a milestone in computational neuroscience and even in general neuroscience. Towards this milestone, it is mandatory to introduce modern high-performance computing technology into neuroscience research. In this article, we provide an introductory landscape about large-scale brain simulation on supercomputers from the viewpoints of computational neuroscience and modern high-performance computing technology for specialists in experimental as well as computational neurosciences. This introduction to modeling and simulation methods is followed by a review of various representative large-scale simulation studies conducted to date. Then, we direct our attention to the cerebellum, with a review of more simulation studies specific to that region. Furthermore, we present recent simulation results of a human-scale cerebellar network model composed of 86 billion neurons on the Japanese flagship supercomputer K (now retired). Finally, we discuss the necessity and importance of human-scale brain simulation, and suggest future directions of such large-scale brain simulation research.
Affiliation(s)
- Tadashi Yamazaki
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Japan.
- Hiroshi Yamaura
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Japan
16. Azevedo Carvalho N, Contassot-Vivier S, Buhry L, Martinez D. Simulation of Large Scale Neural Models With Event-Driven Connectivity Generation. Front Neuroinform 2020; 14:522000. PMID: 33154719; PMCID: PMC7591773; DOI: 10.3389/fninf.2020.522000.
Abstract
Accurate simulation of brain structures is a major problem in neuroscience. Many works are dedicated to designing better models or developing more efficient simulation schemes. In this paper, we propose a hybrid simulation scheme that combines time-stepping second-order integration of Hodgkin-Huxley (HH) type neurons with event-driven updating of the synaptic currents. As the HH model is continuous, there are no explicit spike events. Thus, in order to preserve the accuracy of the integration method, a spike detection algorithm is developed that accurately determines spike times. This approach allows us to regenerate the outgoing connections at each event, thereby avoiding the storage of the connectivity. Consequently, memory consumption is significantly reduced while preserving execution time and the accuracy of the simulations, especially the spike times of detailed point neuron models. The efficiency of the method, implemented in the SiReNe software, is demonstrated by the simulation of a striatum model consisting of more than 10⁶ neurons and 10⁸ synapses (each neuron has a fan-out of 504 post-synaptic neurons), under normal and Parkinson's conditions.
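Because the HH model is continuous, spike events have to be detected from the integrated trace; a threshold crossing with interpolation of the crossing time preserves the integrator's accuracy. A minimal sketch (the threshold value and the scheduler call are illustrative):

```python
def spike_time(t_prev, v_prev, t_now, v_now, v_th=0.0):
    """Return the interpolated spike time if the membrane potential crossed
    v_th from below during the last integration step, else None."""
    if v_prev < v_th <= v_now:
        frac = (v_th - v_prev) / (v_now - v_prev)  # linear interpolation
        return t_prev + frac * (t_now - t_prev)
    return None

# Inside a time-stepped HH loop (dt in ms), with a hypothetical scheduler:
#   t_spk = spike_time(t, v_old, t + dt, v_new)
#   if t_spk is not None:
#       schedule_postsynaptic_events(t_spk)  # event-driven synapse update
```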
Affiliation(s)
- Laure Buhry
- Université de Lorraine, CNRS, Inria, LORIA, Nancy, France
17. Cremonesi F, Schürmann F. Understanding Computational Costs of Cellular-Level Brain Tissue Simulations Through Analytical Performance Models. Neuroinformatics 2020; 18:407-428. PMID: 32056104; PMCID: PMC7338826; DOI: 10.1007/s12021-019-09451-w.
Abstract
Computational modeling and simulation have become essential tools in the quest to better understand the brain's makeup and to decipher the causal interrelations of its components. The breadth of biochemical and biophysical processes and structures in the brain has led to the development of a large variety of model abstractions and specialized tools, often requiring high performance computing resources for their timely execution. What has been missing so far is an in-depth analysis of the complexity of the computational kernels, which has hindered a systematic approach to identifying bottlenecks of algorithms and hardware. If whole-brain models are to be achieved on emerging computer generations, models and simulation engines will have to be carefully co-designed for the intrinsic hardware tradeoffs. For the first time, we present a systematic exploration based on analytic performance modeling. We base our analysis on three in silico models, chosen as representative examples of the most widely employed modeling abstractions: current-based point neurons, conductance-based point neurons, and conductance-based detailed neurons. We identify that the synaptic modeling formalism, i.e., current- or conductance-based representation, and not the level of morphological detail, is the most significant factor in determining the properties of memory-bandwidth saturation and shared-memory scaling of in silico models. Even though general-purpose computing has, until now, largely been able to deliver high performance, we find that for all types of abstractions, network latency and memory bandwidth will become severe bottlenecks as the number of neurons to be simulated grows. By adapting and extending a performance modeling approach, we deliver a first characterization of the performance landscape of brain tissue simulations, allowing us to pinpoint current bottlenecks for state-of-the-art in silico models and make projections for future hardware and software requirements.
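The bandwidth argument boils down to comparing a kernel's arithmetic intensity (FLOPs per byte of state traffic) with the machine balance, as in a roofline model. A minimal sketch; all peak rates and per-update costs below are illustrative placeholders, not the paper's measured values:

```python
FLOPS_PEAK = 2e12   # illustrative node peak: 2 TFLOP/s
BW_PEAK = 1e11      # illustrative DRAM bandwidth: 100 GB/s

def attainable_flops(flops_per_update, bytes_per_update):
    """Roofline bound: compute-bound or memory-bandwidth-bound."""
    intensity = flops_per_update / bytes_per_update   # FLOP per byte
    return min(FLOPS_PEAK, BW_PEAK * intensity)

# Current-based point neuron: few FLOPs per byte of synaptic/state traffic.
print(attainable_flops(20, 100))    # far below peak: bandwidth-bound
# Conductance-based detailed neuron: more arithmetic per byte touched.
print(attainable_flops(500, 400))   # higher, but can still be bandwidth-bound
```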
Affiliation(s)
- Francesco Cremonesi
- Blue Brain Project, Brain Mind Institute, École polytechnique fédérale de Lausanne (EPFL), Campus Biotech, 1202, Geneva, Switzerland
- Felix Schürmann
- Blue Brain Project, Brain Mind Institute, École polytechnique fédérale de Lausanne (EPFL), Campus Biotech, 1202, Geneva, Switzerland.
18. Yamaura H, Igarashi J, Yamazaki T. Simulation of a Human-Scale Cerebellar Network Model on the K Computer. Front Neuroinform 2020; 14:16. PMID: 32317955; PMCID: PMC7146068; DOI: 10.3389/fninf.2020.00016.
Abstract
Computer simulation of the human brain at individual-neuron resolution is an ultimate goal of computational neuroscience. The Japanese flagship supercomputer, K, provides unprecedented computational capability toward this goal. The cerebellum contains 80% of the neurons in the whole brain. Therefore, computer simulation of the human-scale cerebellum is a challenge for modern supercomputers. In this study, we built a human-scale spiking network model of the cerebellum, composed of 68 billion spiking neurons, on the K computer. As a benchmark, we performed a computer simulation of a cerebellum-dependent eye-movement task known as the optokinetic response. We succeeded in reproducing plausible neuronal activity patterns that are observed experimentally in animals. The model was built on dedicated neural network simulation software called MONET (Millefeuille-like Organization NEural neTwork), which calculates layered-sheet types of neural networks with parallelization by tile partitioning. To examine the scalability of the MONET simulator, we repeatedly performed simulations while changing the number of compute nodes from 1,024 to 82,944 and measured the computational time. We observed a good weak-scaling property for our cerebellar network model. Using all 82,944 nodes, we succeeded in simulating a human-scale cerebellum for the first time, although the simulation ran 578 times slower than real time. These results suggest that the K computer is already capable of simulating a human-scale cerebellar model with the aid of the MONET simulator.
Affiliation(s)
- Hiroshi Yamaura
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
- Jun Igarashi
- Head Office for Information Systems and Cybersecurity, RIKEN, Saitama, Japan
- Tadashi Yamazaki
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
19. An Optimizing Multi-platform Source-to-source Compiler Framework for the NEURON MODeling Language. Lecture Notes in Computer Science 2020. PMCID: PMC7302241; DOI: 10.1007/978-3-030-50371-0_4.
Abstract
Domain-specific languages (DSLs) play an increasingly important role in the generation of high performing software. They allow the user to exploit domain knowledge for the generation of more efficient code on target architectures. Here, we describe a new code generation framework (NMODL) for an existing DSL in the NEURON framework, a widely used software for massively parallel simulation of biophysically detailed brain tissue models. Existing NMODL DSL transpilers lack either essential features to generate optimized code or the capability to parse the diversity of existing models in the user community. Our NMODL framework has been tested against a large number of previously published user models and offers high-level domain-specific optimizations and symbolic algebraic simplifications before target code generation. NMODL implements multiple SIMD and SPMD targets optimized for modern hardware. When comparing NMODL-generated kernels with NEURON, we observe a speedup of up to 20×, resulting in overall speedups of two different production simulations by ∼7×. When compared to SIMD-optimized kernels that heavily relied on auto-vectorization by the compiler, a speedup of up to ∼2× is still observed.
20. Igarashi J, Yamaura H, Yamazaki T. Large-Scale Simulation of a Layered Cortical Sheet of Spiking Network Model Using a Tile Partitioning Method. Front Neuroinform 2019; 13:71. PMID: 31849631; PMCID: PMC6895031; DOI: 10.3389/fninf.2019.00071.
Abstract
One of the grand challenges for computational neuroscience and high-performance computing is the computer simulation of a human-scale whole-brain model with spiking neurons and synaptic plasticity using supercomputers. To achieve such a simulation, the target network model must be partitioned onto a number of computational nodes, and the sub-network models are executed in parallel while communicating spike information across different nodes. However, it remains unclear how the target network model should be partitioned for efficient computing on the next generation of supercomputers. Specifically, reducing the communication of spike information across compute nodes is essential, because network performance is slow relative to processor and memory. From the viewpoint of biological features, the cerebral cortex and cerebellum contain 99% of neurons and synapses and form layered sheet structures. Therefore, an efficient method to split the network should exploit the layered sheet structures. In this study, we show that a tile partitioning method leads to efficient communication. To demonstrate this, we developed simulation software called MONET (Millefeuille-like Organization NEural neTwork simulator) that partitions a network model as described above. The MONET simulator was implemented on the Japanese flagship supercomputer K, which is composed of 82,944 computational nodes. We examined calculation performance, communication, and memory consumption of the tile partitioning method for a cortical model with realistic anatomical and physiological parameters. The results showed that the tile partitioning method drastically reduced the amount of communicated data by replacing network communication with DRAM access and by sharing communication data among neighboring neurons. We confirmed the scalability and efficiency of the tile partitioning method on up to 63,504 compute nodes of the K computer for the cortical model. In the companion paper by Yamaura et al., the performance for a cerebellar model is examined. These results suggest that the tile partitioning method will be advantageous for human-scale whole-brain simulation on exascale computers.
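Tile partitioning assigns each compute node a contiguous patch of the layered sheet, so that with local connectivity a neuron's traffic stays within its own tile and at most the eight surrounding ones, independent of total sheet size. A minimal sketch (tile size and connection radius are illustrative):

```python
SHEET = 10_000   # sheet of SHEET x SHEET neurons
TILE = 500       # each compute node owns one TILE x TILE tile
R = 300          # local connection radius, R < TILE

def node_of(x, y):
    """Compute node owning the neuron at sheet coordinate (x, y)."""
    return (y // TILE) * (SHEET // TILE) + (x // TILE)

def peer_nodes(x, y):
    """Nodes reachable from (x, y): own tile plus neighbors within R."""
    peers = set()
    for dx in (-R, 0, R):        # sampling corners suffices since R < TILE
        for dy in (-R, 0, R):
            px = min(max(x + dx, 0), SHEET - 1)
            py = min(max(y + dy, 0), SHEET - 1)
            peers.add(node_of(px, py))
    return peers

# A neuron mid-sheet talks to at most 9 tiles however large SHEET grows,
# so spike exchange stays between neighboring nodes as the model scales.
print(sorted(peer_nodes(4_999, 4_999)))
```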
Affiliation(s)
- Jun Igarashi
- Computational Engineering Applications Unit, Head Office for Information Systems and Cybersecurity, RIKEN, Saitama, Japan
- Hiroshi Yamaura
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
- Tadashi Yamazaki
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
21. Jordan J, Weidel P, Morrison A. A Closed-Loop Toolchain for Neural Network Simulations of Learning Autonomous Agents. Front Comput Neurosci 2019; 13:46. PMID: 31427939; PMCID: PMC6687756; DOI: 10.3389/fncom.2019.00046.
Abstract
Neural network simulation is an important tool for generating and evaluating hypotheses on the structure, dynamics, and function of neural circuits. For scientific questions addressing organisms operating autonomously in their environments, in particular where learning is involved, it is crucial to be able to operate such simulations in a closed-loop fashion. In such a set-up, the neural agent continuously receives sensory stimuli from the environment and provides motor signals that manipulate the environment or move the agent within it. So far, most studies requiring such functionality have been conducted with custom simulation scripts and manually implemented tasks. This makes it difficult for other researchers to reproduce and build upon previous work and nearly impossible to compare the performance of different learning architectures. In this work, we present a novel approach to solve this problem, connecting benchmark tools from the field of machine learning and state-of-the-art neural network simulators from computational neuroscience. The resulting toolchain enables researchers in both fields to make use of well-tested high-performance simulation software supporting biologically plausible neuron, synapse and network models and allows them to evaluate and compare their approach on the basis of standardized environments with various levels of complexity. We demonstrate the functionality of the toolchain by implementing a neuronal actor-critic architecture for reinforcement learning in the NEST simulator and successfully training it on two different environments from the OpenAI Gym. We compare its performance to a previously suggested neural network model of reinforcement learning in the basal ganglia and a generic Q-learning algorithm.
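The closed-loop pattern itself — observation in, spikes through the network, motor command out, repeat — is compact. A sketch using Gymnasium (the maintained successor of the OpenAI Gym used in the paper), with the spiking network stubbed out by a placeholder function:

```python
import gymnasium as gym   # successor of OpenAI Gym
import numpy as np

def spiking_policy(obs: np.ndarray) -> int:
    """Stand-in for the simulated network: encode `obs` into input spike
    rates, advance the simulator one control interval, decode the motor
    population's rate into a discrete action. Here: a toy rule."""
    return int(obs[2] > 0)   # push the cart toward the pole's lean

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)
ret = 0.0
for _ in range(500):
    action = spiking_policy(obs)
    obs, reward, terminated, truncated, info = env.step(action)
    ret += reward            # in the paper, reward modulates plasticity
    if terminated or truncated:
        break
print("episode return:", ret)
```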
Affiliation(s)
- Jakob Jordan
- Department of Physiology, University of Bern, Bern, Switzerland
- Institute of Neuroscience and Medicine (INM-6) & Institute for Advanced Simulation (IAS-6) & JARA-Institute Brain Structure Function Relationship (JBI 1/INM-10), Research Centre Jülich, Jülich, Germany
- Philipp Weidel
- Institute of Neuroscience and Medicine (INM-6) & Institute for Advanced Simulation (IAS-6) & JARA-Institute Brain Structure Function Relationship (JBI 1/INM-10), Research Centre Jülich, Jülich, Germany
- aiCTX, Zurich, Switzerland
- Department of Computer Science, RWTH Aachen University, Aachen, Germany
- Abigail Morrison
- Institute of Neuroscience and Medicine (INM-6) & Institute for Advanced Simulation (IAS-6) & JARA-Institute Brain Structure Function Relationship (JBI 1/INM-10), Research Centre Jülich, Jülich, Germany
- Faculty of Psychology, Institute of Cognitive Neuroscience, Ruhr-University Bochum, Bochum, Germany
22
Khoyratee F, Grassia F, Saïghi S, Levi T. Optimized Real-Time Biomimetic Neural Network on FPGA for Bio-hybridization. Front Neurosci 2019; 13:377. [PMID: 31068781 PMCID: PMC6491680 DOI: 10.3389/fnins.2019.00377] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2018] [Accepted: 04/02/2019] [Indexed: 01/04/2023] Open
Abstract
Neurological diseases can be studied by performing bio-hybrid experiments using a real-time biomimetic Spiking Neural Network (SNN) platform. The Hodgkin-Huxley model offers a set of equations with biophysical parameters that can serve as a basis for representing different classes of neurons and affected cells. Moreover, connecting the artificial neurons to biological cells would allow us to understand how SNN stimulation with different parameters affects nerve cells. Thus, designing a real-time SNN could be useful for simulating parts of the brain. Here, we present a different approach to optimizing the Hodgkin-Huxley equations for Field Programmable Gate Array (FPGA) implementation. The conductance equations have been unified to allow the use of the same functions, with different parameters, for all ionic channels. The low-resource, high-speed implementation also includes features such as synaptic noise using the Ornstein-Uhlenbeck process and different synapse receptors, including AMPA, GABAa, GABAb, and NMDA receptors. The platform allows real-time modification of neuron parameters and can output different cortical neuron families, such as Fast Spiking (FS), Regular Spiking (RS), Intrinsically Bursting (IB), and Low Threshold Spiking (LTS) neurons, using a Digital-to-Analog Converter (DAC). The Gaussian distribution of the synaptic noise highlights similarities with biological noise. Cross-correlation between the implementation and the model shows strong correlations, and bifurcation analysis reproduces behavior similar to that of the original Hodgkin-Huxley model. One core of calculation uses 3% of the FPGA's resources and computes 500 neurons with 25,000 synapses and synaptic noise in real time, which can be scaled up to 15,000 neurons using all resources. This is a first step toward a neuromorphic system that can be used to simulate bio-hybridization, to study neurological disorders, or for advanced research on neuroprostheses to regain lost function.
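The unification idea can be sketched compactly: write all gating-rate equations as one parameterized template, so a single arithmetic pipeline serves every ion channel. The template and coefficients below are a Python illustration of the principle, not the paper's FPGA implementation or exact coefficients; the classic HH sodium activation rates are shown as instances:

```python
import numpy as np

def generic_rate(v, A, B, C, H, D, F):
    """One rate template for all channels: (A + B*v) / (C + H*exp((v + D) / F))."""
    return (A + B * v) / (C + H * np.exp((v + D) / F))

def alpha_m(v):
    # Classic HH: 0.1*(v + 40) / (1 - exp(-(v + 40)/10)), as a template instance.
    return generic_rate(v, A=4.0, B=0.1, C=1.0, H=-1.0, D=40.0, F=-10.0)

def beta_m(v):
    # Classic HH: 4*exp(-(v + 65)/18), as a template instance.
    return generic_rate(v, A=4.0, B=0.0, C=0.0, H=1.0, D=65.0, F=18.0)

def update_gate(m, v, dt=0.01):
    """One forward-Euler step of dm/dt = alpha(v)*(1 - m) - beta(v)*m."""
    return m + dt * (alpha_m(v) * (1.0 - m) - beta_m(v) * m)

print(alpha_m(-65.0), beta_m(-65.0), update_gate(0.05, -65.0))
```

Only the six parameters change per channel, which is what lets the hardware reuse the same functional units.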
Affiliation(s)
- Farad Khoyratee
- Laboratoire de l'Intégration du Matériau au Système, Bordeaux INP, CNRS UMR 5218, University of Bordeaux, Talence, France
- Filippo Grassia
- LTI Laboratory, EA 3899, University of Picardie Jules Verne, Amiens, France
- Sylvain Saïghi
- Laboratoire de l'Intégration du Matériau au Système, Bordeaux INP, CNRS UMR 5218, University of Bordeaux, Talence, France
- Timothée Levi
- Laboratoire de l'Intégration du Matériau au Système, Bordeaux INP, CNRS UMR 5218, University of Bordeaux, Talence, France
- Institute of Industrial Science, The University of Tokyo, Tokyo, Japan
23
Végh J. How Amdahl's Law limits the performance of large artificial neural networks: why the functionality of full-scale brain simulation on processor-based simulators is limited. Brain Inform 2019; 6:4. [PMID: 30972504 PMCID: PMC6458202 DOI: 10.1186/s40708-019-0097-2] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2019] [Accepted: 03/10/2019] [Indexed: 12/01/2022] Open
Abstract
As more and more details become known about how neurons and complex neural networks work, and as demand grows for large yet performant artificial networks, increasing effort is devoted to building hardware and software simulators and supercomputers targeting artificial intelligence applications, which demand an exponentially increasing amount of computing capacity.
However, the inherently parallel operation of neural networks is mostly simulated on inherently sequential (or, at best, sequential-parallel) computing elements. This paper shows that neural network simulators, both software and hardware, like all other sequential-parallel computing systems, face a computing performance limitation arising from clock-driven electronic circuits, the 70-year-old computing paradigm, and Amdahl's law for parallelized computing systems. The findings explain the limitations and saturation observed in earlier studies.
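Amdahl's law itself is compact enough to state as a worked example: with serial fraction s, the speedup on N processors is 1/(s + (1-s)/N), bounded above by 1/s. The numbers below are illustrative, not the paper's measurements:

```python
# Amdahl's law: with serial fraction s and N processors,
#   speedup(N) = 1 / (s + (1 - s) / N)  <=  1 / s.
# Illustrative numbers only; the paper derives the effective serial fraction
# from the communication and synchronization overheads of the simulators.

def amdahl_speedup(s, n):
    return 1.0 / (s + (1.0 - s) / n)

for n in (10, 100, 10_000, 1_000_000):
    print(f"{n:>9} processors: speedup {amdahl_speedup(0.001, n):8.1f}")
# Even a 0.1% serial part caps the speedup near 1000, however many
# processors or neuromorphic cores the system provides.
```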
Affiliation(s)
- János Végh
- Kalimános BT, Komlóssy u 26, Debrecen, 4032, Hungary.
24
Fernandez-Musoles C, Coca D, Richmond P. Communication Sparsity in Distributed Spiking Neural Network Simulations to Improve Scalability. Front Neuroinform 2019; 13:19. [PMID: 31001102 PMCID: PMC6454199 DOI: 10.3389/fninf.2019.00019] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2018] [Accepted: 03/11/2019] [Indexed: 11/30/2022] Open
Abstract
In the last decade there has been a surge in the number of big science projects interested in achieving a comprehensive understanding of the functions of the brain, using Spiking Neuronal Network (SNN) simulations to aid discovery and experimentation. Such an approach increases the computational demands on SNN simulators: if natural scale brain-size simulations are to be realized, it is necessary to use parallel and distributed models of computing. Communication is recognized as the dominant part of distributed SNN simulations. As the number of computational nodes increases, the proportion of time the simulation spends in useful computing (computational efficiency) is reduced and therefore applies a limit to scalability. This work targets the three phases of communication to improve overall computational efficiency in distributed simulations: implicit synchronization, process handshake and data exchange. We introduce a connectivity-aware allocation of neurons to compute nodes by modeling the SNN as a hypergraph. Partitioning the hypergraph to reduce interprocess communication increases the sparsity of the communication graph. We propose dynamic sparse exchange as an improvement over simple point-to-point exchange on sparse communications. Results show a combined gain when using hypergraph-based allocation and dynamic sparse communication, increasing computational efficiency by up to 40.8 percentage points and reducing simulation time by up to 73%. The findings are applicable to other distributed complex system simulations in which communication is modeled as a graph network.
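The benefit of connectivity-aware allocation can be illustrated by counting cross-process connections under two placements. The sketch below uses a clustered random network as a stand-in for the output of the paper's hypergraph partitioning; all sizes and probabilities are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_ranks = 10000, 16
block_size = n_neurons // n_ranks

# Clustered connectivity: 90% of each neuron's targets lie in its own block.
block = np.arange(n_neurons) // block_size
src = rng.integers(0, n_neurons, size=200000)
stay_local = rng.random(src.size) < 0.9
tgt_local = block[src] * block_size + rng.integers(0, block_size, size=src.size)
tgt = np.where(stay_local, tgt_local, rng.integers(0, n_neurons, size=src.size))

round_robin = lambda idx: idx % n_ranks   # scatter neighbors across ranks
clustered   = lambda idx: block[idx]      # keep each block on one rank

for name, alloc in (("round-robin", round_robin), ("connectivity-aware", clustered)):
    cut = np.mean(alloc(src) != alloc(tgt))
    print(f"{name}: {cut:.1%} of connections cross process boundaries")
```

A hypergraph partitioner minimizes this cut explicitly rather than relying on a known block structure, but the effect on the communication graph's sparsity is the same.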
Affiliation(s)
- Daniel Coca
- Automatic Control and Systems Engineering, University of Sheffield, Sheffield, United Kingdom
- Paul Richmond
- Computer Science, University of Sheffield, Sheffield, United Kingdom
25
Chatzikonstantis G, Sidiropoulos H, Strydis C, Negrello M, Smaragdos G, De Zeeuw C, Soudris D. Multinode implementation of an extended Hodgkin–Huxley simulator. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2018.10.062] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
26
Closed-Loop Systems and In Vitro Neuronal Cultures: Overview and Applications. ADVANCES IN NEUROBIOLOGY 2019; 22:351-387. [DOI: 10.1007/978-3-030-11135-9_15] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/12/2023]
27
Schmidt M, Bakker R, Shen K, Bezgin G, Diesmann M, van Albada SJ. A multi-scale layer-resolved spiking network model of resting-state dynamics in macaque visual cortical areas. PLoS Comput Biol 2018; 14:e1006359. [PMID: 30335761 PMCID: PMC6193609 DOI: 10.1371/journal.pcbi.1006359] [Citation(s) in RCA: 47] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2017] [Accepted: 07/12/2018] [Indexed: 11/28/2022] Open
Abstract
Cortical activity has distinct features across scales, from the spiking statistics of individual cells to global resting-state networks. We here describe the first full-density multi-area spiking network model of cortex, using macaque visual cortex as a test system. The model represents each area by a microcircuit with area-specific architecture and features layer- and population-resolved connectivity between areas. Simulations reveal a structured asynchronous irregular ground state. In a metastable regime, the network reproduces spiking statistics from electrophysiological recordings and cortico-cortical interaction patterns in fMRI functional connectivity under resting-state conditions. Stable inter-area propagation is supported by cortico-cortical synapses that are moderately strong onto excitatory neurons and stronger onto inhibitory neurons. Causal interactions depend on both cortical structure and the dynamical state of populations. Activity propagates mainly in the feedback direction, similar to experimental results associated with visual imagery and sleep. The model unifies local and large-scale accounts of cortex, and clarifies how the detailed connectivity of cortex shapes its dynamics on multiple scales. Based on our simulations, we hypothesize that in the spontaneous condition the brain operates in a metastable regime where cortico-cortical projections target excitatory and inhibitory populations in a balanced manner that produces substantial inter-area interactions while maintaining global stability. The mammalian cortex fulfills its complex tasks by operating on multiple temporal and spatial scales from single cells to entire areas comprising millions of cells. These multi-scale dynamics are supported by specific network structures at all levels of organization. Since models of cortex hitherto tend to concentrate on a single scale, little is known about how cortical structure shapes the multi-scale dynamics of the network. We here present dynamical simulations of a multi-area network model at neuronal and synaptic resolution with population-specific connectivity based on extensive experimental data which accounts for a wide range of dynamical phenomena. Our model elucidates relationships between local and global scales in cortex and provides a platform for future studies of cortical function.
Affiliation(s)
- Maximilian Schmidt
- Laboratory for Neural Coding and Brain Computing, RIKEN Center for Brain Science, Wako-Shi, Saitama, Japan
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Rembrandt Bakker
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Donders Institute for Brain, Cognition and Behavior, Radboud University Nijmegen, Nijmegen, Netherlands
- Kelly Shen
- Rotman Research Institute, Baycrest, Toronto, Ontario, Canada
- Gleb Bezgin
- McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montreal, Canada
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Department of Physics, RWTH Aachen University, Aachen, Germany
- Sacha Jennifer van Albada
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
28
Blundell I, Plotnikov D, Eppler JM, Morrison A. Automatically Selecting a Suitable Integration Scheme for Systems of Differential Equations in Neuron Models. Front Neuroinform 2018; 12:50. [PMID: 30349471 PMCID: PMC6186990 DOI: 10.3389/fninf.2018.00050] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2017] [Accepted: 07/23/2018] [Indexed: 12/17/2022] Open
Abstract
On the level of the spiking activity, the integrate-and-fire neuron is one of the most commonly used descriptions of neural activity. A multitude of variants has been proposed to cope with the huge diversity of behaviors observed in biological nerve cells. The main appeal of this class of model is that it can be defined in terms of a hybrid model, where a set of mathematical equations describes the sub-threshold dynamics of the membrane potential and the generation of action potentials is often only added algorithmically without the shape of spikes being part of the equations. In contrast to more detailed biophysical models, this simple description of neuron models allows the routine simulation of large biological neuronal networks on standard hardware widely available in most laboratories these days. The time evolution of the relevant state variables is usually defined by a small set of ordinary differential equations (ODEs). A small number of evolution schemes for the corresponding systems of ODEs are commonly used for many neuron models, and form the basis of the neuron model implementations built into commonly used simulators like Brian, NEST and NEURON. However, an often neglected problem is that the implemented evolution schemes are only rarely selected through a structured process based on numerical criteria. This practice cannot guarantee accurate and stable solutions for the equations and the actual quality of the solution depends largely on the parametrization of the model. In this article, we give an overview of typical equations and state descriptions for the dynamics of the relevant variables in integrate-and-fire models. We then describe a formal mathematical process to automate the design or selection of a suitable evolution scheme for this large class of models. Finally, we present the reference implementation of our symbolic analysis toolbox for ODEs that can guide modelers during the implementation of custom neuron models.
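For the common case of linear, constant-coefficient subthreshold dynamics, the "suitable scheme" is exact propagation rather than a generic numeric solver. A minimal sketch, assuming a simple LIF membrane with constant input and illustrative parameters:

```python
import numpy as np

# A linear membrane ODE, tau*dV/dt = -V + R*I, admits an exact update over a
# fixed step dt; nonlinear or stiff models would instead need a numeric
# (possibly implicit) scheme -- the choice the toolbox in the paper automates.

tau_m, R, dt = 10.0, 1.0, 0.1      # ms, arbitrary resistance units, ms
P = np.exp(-dt / tau_m)            # exact propagator for one step

def exact_step(v, i_const):
    """Exact solution over dt: V' = V*exp(-dt/tau) + R*I*(1 - exp(-dt/tau))."""
    return v * P + R * i_const * (1.0 - P)

def euler_step(v, i_const):
    """Forward Euler for comparison; accurate only for dt << tau_m."""
    return v + dt * (-v + R * i_const) / tau_m

v_exact = v_euler = 0.0
for _ in range(1000):
    v_exact = exact_step(v_exact, 1.0)
    v_euler = euler_step(v_euler, 1.0)
print(v_exact, v_euler)  # both approach R*I = 1.0; the exact step has no dt error
```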
Affiliation(s)
- Inga Blundell
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), Jülich Aachen Research Alliance BRAIN Institute I, Forschungszentrum Jülich, Jülich, Germany
- Dimitri Plotnikov
- Simulation Lab Neuroscience, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich Supercomputing Centre (JSC), Forschungszentrum Jülich, Jülich, Germany
- Chair of Software Engineering, Jülich Aachen Research Alliance, RWTH Aachen University, Aachen, Germany
- Jochen M Eppler
- Simulation Lab Neuroscience, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich Supercomputing Centre (JSC), Forschungszentrum Jülich, Jülich, Germany
- Abigail Morrison
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), Jülich Aachen Research Alliance BRAIN Institute I, Forschungszentrum Jülich, Jülich, Germany
- Simulation Lab Neuroscience, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich Supercomputing Centre (JSC), Forschungszentrum Jülich, Jülich, Germany
- Faculty of Psychology, Institute of Cognitive Neuroscience, Ruhr-University Bochum, Bochum, Germany
29
Linne ML. Neuroinformatics and Computational Modelling as Complementary Tools for Neurotoxicology Studies. Basic Clin Pharmacol Toxicol 2018; 123 Suppl 5:56-61. [DOI: 10.1111/bcpt.13075] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2018] [Accepted: 06/18/2018] [Indexed: 11/28/2022]
Affiliation(s)
- Marja-Leena Linne
- BioMediTech and Faculty of Biomedical Sciences and Engineering, Tampere University of Technology, Tampere, Finland
30
Heiberg T, Kriener B, Tetzlaff T, Einevoll GT, Plesser HE. Firing-rate models for neurons with a broad repertoire of spiking behaviors. J Comput Neurosci 2018; 45:103-132. [PMID: 30146661 PMCID: PMC6208914 DOI: 10.1007/s10827-018-0693-9] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2018] [Revised: 08/01/2018] [Accepted: 08/02/2018] [Indexed: 11/29/2022]
Abstract
Capturing the response behavior of spiking neuron models with rate-based models facilitates the investigation of neuronal networks using powerful methods for rate-based network dynamics. To this end, we investigate the responses of two widely used neuron model types, the Izhikevich and augmented multi-adaptive threshold (AMAT) models, to a range of inputs, from steps to natural spike data. We find (i) that linear-nonlinear firing rate models fitted to test data can be used to describe the firing-rate responses of AMAT and Izhikevich spiking neuron models in many cases; (ii) that firing-rate responses are generally too complex to be captured by first-order low-pass filters and require bandpass filters instead; (iii) that linear-nonlinear models capture the response of AMAT models better than that of Izhikevich models; (iv) that the wide range of response types evoked by current-injection experiments collapses to a few response types when neurons are driven by stationary or sinusoidally modulated Poisson input; and (v) that AMAT and Izhikevich models show different responses to spike input despite identical responses to current injections. Together, these findings suggest that rate-based models of network dynamics may capture a wider range of neuronal response properties by incorporating second-order bandpass filters fitted to the responses of spiking model neurons. Such models may help bring rate-based network modeling closer to the reality of biological neuronal networks.
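Finding (ii) is easy to demonstrate: a second-order bandpass filter followed by a static nonlinearity produces the transient-then-decay response that a first-order low-pass cannot. A minimal linear-nonlinear sketch with illustrative, unfitted parameters, using SciPy's standard filter routines:

```python
import numpy as np
from scipy import signal

fs = 10000.0                          # Hz sampling rate (0.1 ms bins)
t = np.arange(0.0, 0.2, 1.0 / fs)     # 200 ms

# Second-order Butterworth bandpass (order-1 'band' design has a
# second-order transfer function); passband 5-100 Hz, illustrative.
b, a = signal.butter(1, [5.0, 100.0], btype="band", fs=fs)

def ln_rate(stimulus, threshold=0.0, gain=50.0):
    """Linear bandpass filtering followed by a static rectifying nonlinearity."""
    filtered = signal.lfilter(b, a, stimulus)
    return gain * np.maximum(filtered - threshold, 0.0)

stim = (t > 0.05).astype(float)       # step input at 50 ms
rate = ln_rate(stim)
print(rate.max(), rate[-1])           # strong transient; the DC plateau is suppressed
```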
Affiliation(s)
- Thomas Heiberg
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Birgit Kriener
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway
- Tom Tetzlaff
- Institute of Neuroscience and Medicine (INM-6), Jülich Research Centre, Jülich, Germany
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany
- JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Gaute T Einevoll
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Department of Physics, University of Oslo, Oslo, Norway
- Hans E Plesser
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Institute of Neuroscience and Medicine (INM-6), Jülich Research Centre, Jülich, Germany
31
van Albada SJ, Rowley AG, Senk J, Hopkins M, Schmidt M, Stokes AB, Lester DR, Diesmann M, Furber SB. Performance Comparison of the Digital Neuromorphic Hardware SpiNNaker and the Neural Network Simulation Software NEST for a Full-Scale Cortical Microcircuit Model. Front Neurosci 2018; 12:291. [PMID: 29875620 PMCID: PMC5974216 DOI: 10.3389/fnins.2018.00291] [Citation(s) in RCA: 57] [Impact Index Per Article: 8.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2017] [Accepted: 04/13/2018] [Indexed: 01/12/2023] Open
Abstract
The digital neuromorphic hardware SpiNNaker has been developed with the aim of enabling large-scale neural network simulations in real time and with low power consumption. Real-time performance is achieved with 1 ms integration time steps, and thus applies to neural networks for which faster time scales of the dynamics can be neglected. By slowing down the simulation, shorter integration time steps, and hence faster time scales, which are often biologically relevant, can be incorporated. We here describe the first full-scale simulations of a cortical microcircuit with biological time scales on SpiNNaker. Since about half the synapses onto the neurons arise within the microcircuit, larger cortical circuits have only moderately more synapses per neuron. Therefore, the full-scale microcircuit paves the way for simulating cortical circuits of arbitrary size. With approximately 80,000 neurons and 0.3 billion synapses, this model is the largest simulated on SpiNNaker to date. The scale-up is enabled by recent developments in the SpiNNaker software stack that allow simulations to be spread across multiple boards. Comparison with simulations using the NEST software on a high-performance cluster shows that both simulators can reach a similar accuracy, despite the fixed-point arithmetic of SpiNNaker, demonstrating the usability of SpiNNaker for computational neuroscience applications with biological time scales and large network size. The runtime and power consumption are also assessed for both simulators using the cortical microcircuit model as an example. To obtain an accuracy similar to that of NEST with 0.1 ms time steps, SpiNNaker requires a slowdown factor of around 20 compared to real time. The runtime for NEST saturates around 3 times real time using hybrid parallelization with MPI and multi-threading. However, achieving this runtime comes at the cost of increased power and energy consumption. The lowest total energy consumption for NEST is reached at around 144 parallel threads and 4.6 times slowdown. At this setting, NEST and SpiNNaker have a comparable energy consumption per synaptic event. Our results widen the application domain of SpiNNaker and help guide its development, showing that further optimizations such as synapse-centric network representation are necessary to enable real-time simulation of large biological neural networks.
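The "energy per synaptic event" metric used in this comparison is simple to compute once power, runtime, and spike traffic are known. A back-of-envelope sketch with made-up numbers, not the paper's measurements:

```python
# Illustrative only: energy per synaptic event = power * runtime / #events.
# All figures below are invented for the sake of the arithmetic.

def energy_per_synaptic_event(power_watts, runtime_s, n_events):
    return power_watts * runtime_s / n_events

# e.g., 1 s of biological time, ~0.3e9 synapses firing at ~4 Hz on average:
n_events = 0.3e9 * 4 * 1.0
print(energy_per_synaptic_event(power_watts=100.0, runtime_s=20.0,
                                n_events=n_events))   # joules per event
```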
Affiliation(s)
- Sacha J van Albada
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Andrew G Rowley
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, United Kingdom
- Johanna Senk
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Michael Hopkins
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, United Kingdom
- Maximilian Schmidt
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Laboratory for Neural Circuit Theory, RIKEN Brain Science Institute, Wako, Japan
- Alan B Stokes
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, United Kingdom
- David R Lester
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, United Kingdom
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Steve B Furber
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, United Kingdom
32
Jordan J, Ippen T, Helias M, Kitayama I, Sato M, Igarashi J, Diesmann M, Kunkel S. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers. Front Neuroinform 2018; 12:2. [PMID: 29503613 PMCID: PMC5820465 DOI: 10.3389/fninf.2018.00002] [Citation(s) in RCA: 45] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2017] [Accepted: 01/18/2018] [Indexed: 11/13/2022] Open
Abstract
State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.
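Directed communication can be sketched independently of MPI: each rank records which ranks host targets of its local neurons and addresses spike packets only to those, instead of broadcasting to all. A plain-Python stand-in for the idea, with illustrative names rather than NEST's internal data structures:

```python
from collections import defaultdict

class DirectedExchange:
    """Toy two-tier bookkeeping: per-neuron sets of destination ranks."""
    def __init__(self, n_ranks):
        self.n_ranks = n_ranks
        self.target_ranks = defaultdict(set)   # local neuron id -> ranks with targets

    def register_connection(self, src_neuron, tgt_rank):
        self.target_ranks[src_neuron].add(tgt_rank)

    def outbox(self, spikes):
        """Group spikes by destination rank; ranks without targets get no message."""
        out = defaultdict(list)
        for neuron in spikes:
            for rank in self.target_ranks[neuron]:
                out[rank].append(neuron)
        return dict(out)

ex = DirectedExchange(n_ranks=4)
ex.register_connection(src_neuron=7, tgt_rank=2)
ex.register_connection(src_neuron=7, tgt_rank=3)
print(ex.outbox([7]))   # {2: [7], 3: [7]} -- ranks 0 and 1 receive nothing
```

At brain scale, where each neuron's targets touch only a tiny fraction of all ranks, this is what turns the communication volume from all-to-all into something proportional to the actual connectivity.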
Affiliation(s)
- Jakob Jordan
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Tammo Ippen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Moritz Helias
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Itaru Kitayama
- Advanced Institute for Computational Science, RIKEN, Kobe, Japan
- Mitsuhisa Sato
- Advanced Institute for Computational Science, RIKEN, Kobe, Japan
- Jun Igarashi
- Computational Engineering Applications Unit, RIKEN, Wako, Japan
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Susanne Kunkel
- Department of Computational Science and Technology, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden
- Simulation Laboratory Neuroscience - Bernstein Facility for Simulation and Database Technology, Jülich Research Centre, Jülich, Germany
33
Bouchard KE, Aimone JB, Chun M, Dean T, Denker M, Diesmann M, Donofrio DD, Frank LM, Kasthuri N, Koch C, Ruebel O, Simon HD, Sommer FT, Prabhat. High-Performance Computing in Neuroscience for Data-Driven Discovery, Integration, and Dissemination. Neuron 2017; 92:628-631. [PMID: 27810006 DOI: 10.1016/j.neuron.2016.10.035] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2016] [Revised: 10/18/2016] [Accepted: 10/18/2016] [Indexed: 10/20/2022]
Abstract
Opportunities offered by new neuro-technologies are threatened by lack of coherent plans to analyze, manage, and understand the data. High-performance computing will allow exploratory analysis of massive datasets stored in standardized formats, hosted in open repositories, and integrated with simulations.
Affiliation(s)
- Kristofer E Bouchard
- Biological Systems and Engineering Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA; Kavli Institute for Fundamental Neuroscience, UC San Francisco, San Francisco, CA 94158, USA; Helen Wills Neuroscience Institute, UC Berkeley, Berkeley, CA 94720, USA.
- James B Aimone
- Center for Computing Research, Sandia National Laboratories, Albuquerque, NM 87185, USA
- Thomas Dean
- Google Research, Mountain View, CA 94043, USA
- Michael Denker
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA BRAIN Institute, Jülich Research Centre, 52425 Jülich, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA BRAIN Institute, Jülich Research Centre, 52425 Jülich, Germany; Department of Psychiatry, Psychotherapy and Psychosomatics, RWTH Aachen University, 52062 Aachen, Germany; Department of Physics, RWTH Aachen University, 52062 Aachen, Germany
- David D Donofrio
- Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
- Loren M Frank
- Kavli Institute for Fundamental Neuroscience, UC San Francisco, San Francisco, CA 94158, USA; Howard Hughes Medical Institute, UC San Francisco, San Francisco, CA 94158, USA; Department of Physiology, UC San Francisco, San Francisco, CA 94158, USA
- Narayanan Kasthuri
- Nanoscience Division, Argonne National Laboratory, Lemont, IL 60439, USA; Department of Neurobiology, University of Chicago, Chicago, IL 60637, USA
- Christof Koch
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Oliver Ruebel
- Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
- Horst D Simon
- Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
- Friedrich T Sommer
- Redwood Center for Theoretical Neuroscience, UC Berkeley, Berkeley, CA 94720, USA
- Prabhat
- NERSC, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA.
34
Amunts K, Ebell C, Muller J, Telefont M, Knoll A, Lippert T. The Human Brain Project: Creating a European Research Infrastructure to Decode the Human Brain. Neuron 2017; 92:574-581. [PMID: 27809997 DOI: 10.1016/j.neuron.2016.10.046] [Citation(s) in RCA: 128] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
Abstract
Decoding the human brain is perhaps the most fascinating scientific challenge of the 21st century. The Human Brain Project (HBP), a 10-year European Flagship, targets the reconstruction of the brain's multi-scale organization. It uses productive loops of experiments, medical data, data analytics, and simulation on all levels that will eventually bridge the scales. The HBP IT architecture is unique, utilizing cloud-based collaboration and development platforms with databases, workflow systems, petabyte storage, and supercomputers. The HBP is developing toward a European research infrastructure advancing brain research, medicine, and brain-inspired information technology.
Affiliation(s)
- Katrin Amunts
- Institute for Neuroscience and Medicine, 52425 Forschungszentrum Jülich, Germany; C. and O. Vogt Institute for Brain Research, University Hospital Düsseldorf, Heinrich Heine University Düsseldorf, 40225 Düsseldorf, Germany.
- Christoph Ebell
- Human Brain Project École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, Batiment B1, Chemin des Mines 9, CH-1202 Geneva, Switzerland
- Jeff Muller
- Human Brain Project École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, Batiment B1, Chemin des Mines 9, CH-1202 Geneva, Switzerland
- Martin Telefont
- Human Brain Project École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, Batiment B1, Chemin des Mines 9, CH-1202 Geneva, Switzerland
- Alois Knoll
- Institut für Informatik VI, Technische Universität München, Boltzmannstraße 3, 85748 Garching bei München, Germany
- Thomas Lippert
- Jülich Supercomputing Centre, Institute for Advanced Simulation, 52425 Forschungszentrum Jülich, Germany
35
Kunkel S, Schenck W. The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code. Front Neuroinform 2017; 11:40. [PMID: 28701946 PMCID: PMC5487483 DOI: 10.3389/fninf.2017.00040] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2016] [Accepted: 06/07/2017] [Indexed: 11/29/2022] Open
Abstract
NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling.
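The dry-run idea reduces to letting one process compute the per-rank share of an M-rank simulation while skipping communication. A minimal sketch under simple assumptions (round-robin neuron distribution, fixed per-object costs); the function and numbers are illustrative and are not NEST's dry-run interface:

```python
# One process pretends to be rank `rank` of an `n_ranks` simulation and
# predicts its memory footprint without any parallel environment.

def dry_run(n_neurons, n_ranks, rank=0, bytes_per_neuron=1000,
            conns_per_neuron=10000, bytes_per_conn=8):
    """Estimate local object count and memory for a round-robin distribution."""
    local_neurons = len(range(rank, n_neurons, n_ranks))
    mem = local_neurons * (bytes_per_neuron + conns_per_neuron * bytes_per_conn)
    return local_neurons, mem / 1e9   # neurons owned, memory in GB

print(dry_run(n_neurons=1_000_000, n_ranks=1000))
# A real dry run also steps through the update and (skipped) communication
# phases so that runtime per phase can be profiled on a single machine.
```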
Affiliation(s)
- Susanne Kunkel
- Simulation Laboratory Neuroscience, Bernstein Facility for Simulation and Database Technology, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Forschungszentrum Jülich, Jülich, Germany
- Department of Computational Science and Technology, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden
- Wolfram Schenck
- Simulation Laboratory Neuroscience, Bernstein Facility for Simulation and Database Technology, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Forschungszentrum Jülich, Jülich, Germany
- Faculty of Engineering and Mathematics, Bielefeld University of Applied Sciences, Bielefeld, Germany
36
Hahne J, Dahmen D, Schuecker J, Frommer A, Bolten M, Helias M, Diesmann M. Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator. Front Neuroinform 2017; 11:34. [PMID: 28596730 PMCID: PMC5442232 DOI: 10.3389/fninf.2017.00034] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2016] [Accepted: 05/01/2017] [Indexed: 01/21/2023] Open
Abstract
Contemporary modeling approaches to the dynamics of neural networks include two important classes of models: biologically grounded spiking neuron models and functionally inspired rate-based units. We present a unified simulation framework that supports the combination of the two for multi-scale modeling, enables the quantitative validation of mean-field approaches by spiking network simulations, and provides an increase in reliability by usage of the same simulation code and the same network model specifications for both model classes. While most spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures flexibility of the framework. In addition to the standard implementation we present an iterative approach based on waveform-relaxation techniques to reduce communication and increase performance for large-scale simulations of rate-based models with instantaneous interactions. Finally we demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural-field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation.
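Waveform relaxation replaces per-step exchange of continuous signals with iterated exchange of whole-window waveforms. A toy sketch for two linearly coupled rate units, with illustrative parameters; convergence holds here because the coupling gain is below one:

```python
import numpy as np

tau, dt, T = 10.0, 0.1, 50.0            # ms
steps = int(T / dt)
w12, w21 = 0.5, 0.5                      # mutual coupling weights

def integrate(drive):
    """Integrate tau*dx/dt = -x + drive(t) with forward Euler, x(0) = 0."""
    x = np.zeros(steps + 1)
    for k in range(steps):
        x[k + 1] = x[k] + dt * (-x[k] + drive[k]) / tau
    return x

ext = np.ones(steps)                     # external input to unit 1
x1 = np.zeros(steps + 1)
x2 = np.zeros(steps + 1)
for it in range(20):                     # waveform-relaxation iterations
    x1_new = integrate(ext + w12 * x2[:-1])   # use the other unit's last waveform
    x2_new = integrate(w21 * x1[:-1])
    delta = max(np.abs(x1_new - x1).max(), np.abs(x2_new - x2).max())
    x1, x2 = x1_new, x2_new
    if delta < 1e-10:
        break
print(it, x1[-1], x2[-1])                # converged endpoint values for the window
```

Exchanging whole windows a few times costs far less communication than exchanging values at every step, which is the performance argument made in the abstract.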
Affiliation(s)
- Jan Hahne
- School of Mathematics and Natural Sciences, Bergische Universität Wuppertal, Wuppertal, Germany
- David Dahmen
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Jannis Schuecker
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Andreas Frommer
- School of Mathematics and Natural Sciences, Bergische Universität Wuppertal, Wuppertal, Germany
- Matthias Bolten
- School of Mathematics and Natural Sciences, Bergische Universität Wuppertal, Wuppertal, Germany
- Moritz Helias
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
37
Ippen T, Eppler JM, Plesser HE, Diesmann M. Constructing Neuronal Network Models in Massively Parallel Environments. Front Neuroinform 2017; 11:30. [PMID: 28559808 PMCID: PMC5432669 DOI: 10.3389/fninf.2017.00030] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2016] [Accepted: 04/04/2017] [Indexed: 11/13/2022] Open
Abstract
Recent advances in the development of data structures to represent spiking neuron network models enable us to exploit the complete memory of petascale computers for a single brain-scale network simulation. In this work, we investigate how well we can exploit the computing power of such supercomputers for the creation of neuronal networks. Using an established benchmark, we divide the runtime of simulation code into the phase of network construction and the phase during which the dynamical state is advanced in time. We find that on multi-core compute nodes network creation scales well with process-parallel code but exhibits a prohibitively large memory consumption. Thread-parallel network creation, in contrast, exhibits speedup only up to a small number of threads but has little overhead in terms of memory. We further observe that the algorithms creating instances of model neurons and their connections scale well for networks of ten thousand neurons, but do not show the same speedup for networks of millions of neurons. Our work uncovers that the lack of scaling of thread-parallel network creation is due to inadequate memory allocation strategies and demonstrates that thread-optimized memory allocators recover excellent scaling. An analysis of the loop order used for network construction reveals that more complex tests on the locality of operations significantly improve scaling and reduce runtime by allowing construction algorithms to step through large networks more efficiently than in existing code. The combination of these techniques increases performance by an order of magnitude and harnesses the increasingly parallel compute power of the compute nodes in high-performance clusters and supercomputers.
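The locality test that recovers scaling can be sketched as follows: every thread scans the same connection descriptors but instantiates only the connections whose target it owns, so the per-thread target tables need no locks. A toy Python stand-in (ownership rule and sizes are illustrative; real implementations would also use thread-local memory allocators, and Python threads serialize on the GIL, so this only illustrates the ownership pattern):

```python
from concurrent.futures import ThreadPoolExecutor

N_NEURONS, N_THREADS = 100000, 8
local_targets = [[] for _ in range(N_THREADS)]   # per-thread connection stores

def owner(neuron_id):
    return neuron_id % N_THREADS                 # round-robin thread ownership

def build(thread_id, connections):
    for src, tgt in connections:
        if owner(tgt) == thread_id:              # locality test: skip foreign targets
            local_targets[thread_id].append((src, tgt))

connections = [(i, (i * 31) % N_NEURONS) for i in range(N_NEURONS)]
with ThreadPoolExecutor(max_workers=N_THREADS) as pool:
    for t in range(N_THREADS):
        pool.submit(build, t, connections)
print(sum(len(c) for c in local_targets))        # all connections created, lock-free
```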
Affiliation(s)
- Tammo Ippen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Jochen M Eppler
- Simulation Laboratory Neuroscience-Bernstein Facility Simulation and Database Technology, Institute for Advanced Simulation, Jülich Research Centre and JARA, Jülich, Germany
- Hans E Plesser
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Department of Biosciences, Centre for Integrative Neuroplasticity, University of Oslo, Oslo, Norway
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
38
Naveros F, Garrido JA, Carrillo RR, Ros E, Luque NR. Event- and Time-Driven Techniques Using Parallel CPU-GPU Co-processing for Spiking Neural Networks. Front Neuroinform 2017; 11:7. [PMID: 28223930 PMCID: PMC5293783 DOI: 10.3389/fninf.2017.00007] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2016] [Accepted: 01/18/2017] [Indexed: 12/12/2022] Open
Abstract
Modeling and simulating the neural structures which make up our central neural system is instrumental for deciphering the computational neural cues beneath. Higher levels of biological plausibility usually impose higher levels of complexity in mathematical modeling, from neural to behavioral levels. This paper focuses on overcoming the simulation problems (accuracy and performance) derived from using higher levels of mathematical complexity at a neural level. This study proposes different techniques for simulating neural models that hold incremental levels of mathematical complexity: leaky integrate-and-fire (LIF), adaptive exponential integrate-and-fire (AdEx), and Hodgkin-Huxley (HH) neural models (ranged from low to high neural complexity). The studied techniques are classified into two main families depending on how the neural-model dynamic evaluation is computed: the event-driven or the time-driven families. Whilst event-driven techniques pre-compile and store the neural dynamics within look-up tables, time-driven techniques compute the neural dynamics iteratively during the simulation time. We propose two modifications for the event-driven family: a look-up table recombination to better cope with the incremental neural complexity together with a better handling of the synchronous input activity. Regarding the time-driven family, we propose a modification in computing the neural dynamics: the bi-fixed-step integration method. This method automatically adjusts the simulation step size to better cope with the stiffness of the neural model dynamics running in CPU platforms. One version of this method is also implemented for hybrid CPU-GPU platforms. Finally, we analyze how the performance and accuracy of these modifications evolve with increasing levels of neural complexity. We also demonstrate how the proposed modifications which constitute the main contribution of this study systematically outperform the traditional event- and time-driven techniques under increasing levels of neural complexity.
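The bi-fixed-step method can be sketched as switching between two fixed step sizes based on a stiffness test. The toy integrator below applies the idea to an AdEx-like membrane equation, with illustrative thresholds and step sizes; it is a sketch of the principle, not the paper's implementation:

```python
import math

def dvdt(v, i_ext, tau=10.0, v_t=-50.0, delta_t=2.0):
    """AdEx-like membrane derivative (adaptation omitted); stiff near spike onset."""
    return (-(v + 65.0) + delta_t * math.exp((v - v_t) / delta_t) + i_ext) / tau

def bi_fixed_step(v, i_ext, h_coarse=0.1, h_fine=0.01, stiff_dvdt=5.0):
    """Advance one coarse interval; subdivide with the fine step only when stiff."""
    if abs(dvdt(v, i_ext)) < stiff_dvdt:
        return v + h_coarse * dvdt(v, i_ext)      # slow regime: one coarse step
    for _ in range(int(h_coarse / h_fine)):       # stiff regime: several fine steps
        v = v + h_fine * dvdt(v, i_ext)
        if v > 0.0:                               # crossed threshold mid-interval
            break
    return v

v = -70.0
for _ in range(2000):
    v = bi_fixed_step(v, i_ext=20.0)
    if v > 0.0:                                   # spike detected: reset
        v = -70.0
print(v)
```

Because the fine step is only engaged around spike onset, most of the trajectory is integrated cheaply, which is the accuracy-performance trade-off the abstract describes.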
Affiliation(s)
- Francisco Naveros
- Department of Computer Architecture and Technology, Research Centre for Information and Communication Technologies, University of Granada, Granada, Spain
- Jesus A Garrido
- Department of Computer Architecture and Technology, Research Centre for Information and Communication Technologies, University of Granada, Granada, Spain
- Richard R Carrillo
- Department of Computer Architecture and Technology, Research Centre for Information and Communication Technologies, University of Granada, Granada, Spain
- Eduardo Ros
- Department of Computer Architecture and Technology, Research Centre for Information and Communication Technologies, University of Granada, Granada, Spain
- Niceto R Luque
- Vision Institute, Aging in Vision and Action Lab, Paris, France
- CNRS, INSERM, Pierre and Marie Curie University, Paris, France
39
40
Falotico E, Vannucci L, Ambrosano A, Albanese U, Ulbrich S, Vasquez Tieck JC, Hinkel G, Kaiser J, Peric I, Denninger O, Cauli N, Kirtay M, Roennau A, Klinker G, Von Arnim A, Guyot L, Peppicelli D, Martínez-Cañada P, Ros E, Maier P, Weber S, Huber M, Plecher D, Röhrbein F, Deser S, Roitberg A, van der Smagt P, Dillman R, Levi P, Laschi C, Knoll AC, Gewaltig MO. Connecting Artificial Brains to Robots in a Comprehensive Simulation Framework: The Neurorobotics Platform. Front Neurorobot 2017; 11:2. [PMID: 28179882 PMCID: PMC5263131 DOI: 10.3389/fnbot.2017.00002] [Citation(s) in RCA: 53] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2016] [Accepted: 01/04/2017] [Indexed: 11/13/2022] Open
Abstract
Combined efforts in the fields of neuroscience, computer science, and biology have made it possible to design biologically realistic models of the brain based on spiking neural networks. For a proper validation of these models, an embodiment in a dynamic and rich sensory environment, where the model is exposed to a realistic sensory-motor task, is needed. Due to the complexity of these brain models, which at the current stage cannot meet real-time constraints, it is not possible to embed them in a real-world task; the embodiment has to be simulated as well. While adequate tools exist to simulate either complex neural networks or robots and their environments, there has so far been no tool that makes it easy to establish communication between brain and body models. The Neurorobotics Platform is a new web-based environment that aims to fill this gap by offering scientists and technology developers a software infrastructure that allows them to connect brain models to detailed simulations of robot bodies and environments and to use the resulting neurorobotic systems for in silico experimentation. To simplify the workflow and reduce the required programming skills, the platform provides editors for the specification of experimental sequences and conditions, environments, robots, and brain-body connectors. In addition, a variety of existing robots and environments are provided. This work presents the architecture of the first release of the Neurorobotics Platform, developed in subproject 10 "Neurorobotics" of the Human Brain Project (HBP). In its current state, the Neurorobotics Platform allows researchers to design and run basic experiments in neurorobotics using simulated robots and simulated environments linked to simplified versions of brain models. We illustrate the capabilities of the platform with three example experiments: a Braitenberg task implemented on a mobile robot, a sensory-motor learning task based on a robotic controller, and a visual tracking task embedding a retina model on the iCub humanoid robot. These use cases allow us to assess the applicability of the Neurorobotics Platform to robotic tasks as well as to neuroscientific experiments.
Affiliation(s)
- Egidio Falotico
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pontedera, Italy
- Lorenzo Vannucci
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pontedera, Italy
- Ugo Albanese
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pontedera, Italy
- Stefan Ulbrich
- Department of Intelligent Systems and Production Engineering (ISPE – IDS/TKS), FZI Research Center for Information Technology, Karlsruhe, Germany
- Juan Camilo Vasquez Tieck
- Department of Intelligent Systems and Production Engineering (ISPE – IDS/TKS), FZI Research Center for Information Technology, Karlsruhe, Germany
- Georg Hinkel
- Department of Software Engineering (SE), FZI Research Center for Information Technology, Karlsruhe, Germany
- Jacques Kaiser
- Department of Intelligent Systems and Production Engineering (ISPE – IDS/TKS), FZI Research Center for Information Technology, Karlsruhe, Germany
- Igor Peric
- Department of Intelligent Systems and Production Engineering (ISPE – IDS/TKS), FZI Research Center for Information Technology, Karlsruhe, Germany
- Oliver Denninger
- Department of Software Engineering (SE), FZI Research Center for Information Technology, Karlsruhe, Germany
- Nino Cauli
- Computer and Robot Vision Laboratory, Instituto de Sistemas e Robotica, Instituto Superior Tecnico, Lisbon, Portugal
- Murat Kirtay
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pontedera, Italy
- Arne Roennau
- Department of Intelligent Systems and Production Engineering (ISPE – IDS/TKS), FZI Research Center for Information Technology, Karlsruhe, Germany
- Gudrun Klinker
- Department of Informatics, Technical University of Munich, Garching, Germany
- Luc Guyot
- Blue Brain Project (BBP), École polytechnique fédérale de Lausanne (EPFL), Genève, Switzerland
- Daniel Peppicelli
- Blue Brain Project (BBP), École polytechnique fédérale de Lausanne (EPFL), Genève, Switzerland
- Pablo Martínez-Cañada
- Department of Computer Architecture and Technology, CITIC, University of Granada, Granada, Spain
- Eduardo Ros
- Department of Computer Architecture and Technology, CITIC, University of Granada, Granada, Spain
- Patrick Maier
- Department of Informatics, Technical University of Munich, Garching, Germany
- Sandro Weber
- Department of Informatics, Technical University of Munich, Garching, Germany
- Manuel Huber
- Department of Informatics, Technical University of Munich, Garching, Germany
- David Plecher
- Department of Informatics, Technical University of Munich, Garching, Germany
- Florian Röhrbein
- Department of Informatics, Technical University of Munich, Garching, Germany
- Stefan Deser
- Department of Informatics, Technical University of Munich, Garching, Germany
- Alina Roitberg
- Department of Informatics, Technical University of Munich, Garching, Germany
- Rüdiger Dillman
- Department of Intelligent Systems and Production Engineering (ISPE – IDS/TKS), FZI Research Center for Information Technology, Karlsruhe, Germany
- Paul Levi
- Department of Intelligent Systems and Production Engineering (ISPE – IDS/TKS), FZI Research Center for Information Technology, Karlsruhe, Germany
- Cecilia Laschi
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pontedera, Italy
- Alois C. Knoll
- Department of Informatics, Technical University of Munich, Garching, Germany
- Marc-Oliver Gewaltig
- Blue Brain Project (BBP), École polytechnique fédérale de Lausanne (EPFL), Genève, Switzerland
41
Hagen E, Dahmen D, Stavrinou ML, Lindén H, Tetzlaff T, van Albada SJ, Grün S, Diesmann M, Einevoll GT. Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks. Cereb Cortex 2016; 26:4461-4496. [PMID: 27797828 PMCID: PMC6193674 DOI: 10.1093/cercor/bhw237] [Citation(s) in RCA: 55] [Impact Index Per Article: 6.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2016] [Revised: 05/31/2016] [Accepted: 07/12/2016] [Indexed: 12/21/2022] Open
Abstract
With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allows for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network model for a ∼1 mm2 patch of primary visual cortex, predict laminar LFPs for different network states, assess the relative LFP contribution from different laminar populations, and investigate effects of input correlations and neuron density on the LFP. The generic nature of the hybrid scheme and its public implementation in hybridLFPy form the basis for LFP predictions from other and larger point-neuron network models, as well as extensions of the current application with additional biological detail.
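The full separation between network dynamics and LFP prediction means the LFP can be computed after the fact from population spike histories. The sketch below collapses the scheme's spatiotemporal kernels to made-up temporal kernels per population; hybridLFPy derives the real kernels from populations of multicompartment neuron models with layer-specific connectivity:

```python
import numpy as np

dt = 0.1                                   # ms per bin
t_kernel = np.arange(0, 20, dt)            # 20 ms kernel support

def lfp_kernel(tau, amp):
    """Toy temporal LFP kernel for one population at one electrode channel."""
    return amp * (t_kernel / tau) * np.exp(1 - t_kernel / tau)

rng = np.random.default_rng(0)
rate_exc = rng.poisson(5.0, size=5000).astype(float)   # spikes/bin, population E
rate_inh = rng.poisson(3.0, size=5000).astype(float)   # spikes/bin, population I

# LFP as a sum of per-population convolutions of spike rate with its kernel.
lfp = (np.convolve(rate_exc, lfp_kernel(tau=2.0, amp=-1.0), mode="same")
       + np.convolve(rate_inh, lfp_kernel(tau=5.0, amp=+0.5), mode="same"))
print(lfp[:5])
```

Because the kernels carry all the biophysical detail, the point-neuron network never has to be re-simulated to predict LFPs for new electrode positions.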
Affiliation(s)
- Espen Hagen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, 52425 Jülich, Germany.,Department of Mathematical Sciences and Technology, Norwegian University of Life Sciences, 1430 Ås, Norway
| | - David Dahmen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, 52425 Jülich, Germany
| | - Maria L Stavrinou
- Department of Mathematical Sciences and Technology, Norwegian University of Life Sciences, 1430 Ås, Norway.,Department of Psychology, University of Oslo, 0373 Oslo, Norway
| | - Henrik Lindén
- Department of Neuroscience and Pharmacology, University of Copenhagen, 2200 Copenhagen, Denmark.,Department of Computational Biology, School of Computer Science and Communication, Royal Institute of Technology, 100 44 Stockholm, Sweden
| | - Tom Tetzlaff
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, 52425 Jülich, Germany
| | - Sacha J van Albada
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, 52425 Jülich, Germany
| | - Sonja Grün
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, 52425 Jülich, Germany; Theoretical Systems Neurobiology, RWTH Aachen University, 52056 Aachen, Germany
| | - Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, 52425 Jülich, Germany; Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, 52074 Aachen, Germany; Department of Physics, Faculty 1, RWTH Aachen University, 52062 Aachen, Germany
| | - Gaute T Einevoll
- Department of Mathematical Sciences and Technology, Norwegian University of Life Sciences, 1430 Ås, Norway; Department of Physics, University of Oslo, 0316 Oslo, Norway
| |
Collapse
|
42
|
Weidel P, Djurfeldt M, Duarte RC, Morrison A. Closed Loop Interactions between Spiking Neural Network and Robotic Simulators Based on MUSIC and ROS. Front Neuroinform 2016; 10:31. [PMID: 27536234 PMCID: PMC4971076 DOI: 10.3389/fninf.2016.00031] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2016] [Accepted: 07/12/2016] [Indexed: 11/13/2022] Open
Abstract
In order to properly assess the function and computational properties of simulated neural systems, it is necessary to account for the nature of the stimuli that drive the system. However, providing stimuli that are rich and yet both reproducible and amenable to experimental manipulations is technically challenging, and even more so if a closed-loop scenario is required. In this work, we present a novel approach to solve this problem, connecting robotics and neural network simulators. We implement a middleware solution that bridges the Robot Operating System (ROS) to the Multi-Simulator Coordinator (MUSIC). This enables any robotic and neural simulators that implement the corresponding interfaces to be efficiently coupled, allowing real-time performance for a wide range of configurations. This work extends the toolset available for researchers in both neurorobotics and computational neuroscience, and creates the opportunity to perform closed-loop experiments of arbitrary complexity to address questions in multiple areas, including embodiment, agency, and reinforcement learning.
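The closed-loop pattern such a bridge enables is simple to state: read sensors, encode them as input rates, advance the network by one communication interval, decode the resulting spikes into a motor command, and advance the robot in lockstep. The sketch below shows this cycle with hypothetical RobotSim and SpikingNet stand-ins; none of the class or method names belong to the actual ROS or MUSIC interfaces.

```python
# Schematic closed loop between a robot simulator and a spiking network.
# RobotSim and SpikingNet are hypothetical stand-ins for the ROS and MUSIC
# endpoints; the real middleware exchanges these signals over its own ports.
import numpy as np

DT = 0.01  # communication interval in seconds

class RobotSim:
    """Toy robot: two light sensors, one turn actuator."""
    def __init__(self):
        self.heading = 0.0
    def read_sensors(self):
        return np.array([0.2, 0.8])  # normalized left/right readings
    def apply_command(self, turn):
        self.heading += turn
    def step(self, dt):
        pass  # a real simulator would integrate robot dynamics here

class SpikingNet:
    """Toy network: each population emits Poisson spikes at the input rate."""
    def step(self, rates, duration):
        return np.random.poisson(rates * duration)  # spike counts per population

def encode(sensor_values, max_rate=100.0):
    # Rate coding: normalized sensor values in [0, 1] -> firing rates in Hz.
    return max_rate * np.clip(sensor_values, 0.0, 1.0)

def decode(spike_counts, gain=0.5):
    # Readout: left/right spike-count difference -> turn command.
    return gain * float(spike_counts[0] - spike_counts[1])

robot, net = RobotSim(), SpikingNet()
for _ in range(1000):  # closed loop at the fixed interval DT
    counts = net.step(encode(robot.read_sensors()), duration=DT)
    robot.apply_command(decode(counts))
    robot.step(DT)
```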
Collapse
Affiliation(s)
- Philipp Weidel
- Institute for Advanced Simulation, Theoretical Neuroscience and Institute of Neuroscience and Medicine, Computational and Systems Neuroscience and Jülich Aachen Research Alliance BRAIN Institute I, Jülich Research Center and Jülich Aachen Research Alliance, Jülich, Germany
| | - Mikael Djurfeldt
- Institute for Advanced Simulation, Theoretical Neuroscience and Institute of Neuroscience and Medicine, Computational and Systems Neuroscience and Jülich Aachen Research Alliance BRAIN Institute I, Jülich Research Center and Jülich Aachen Research Alliance, Jülich, Germany; PDC Center for High Performance Computing, KTH Royal Institute of Technology, Stockholm, Sweden
| | - Renato C Duarte
- Institute for Advanced Simulation, Theoretical Neuroscience and Institute of Neuroscience and Medicine, Computational and Systems Neuroscience and Jülich Aachen Research Alliance BRAIN Institute I, Jülich Research Center and Jülich Aachen Research Alliance, Jülich, Germany; Faculty of Biology, Albert-Ludwig University of Freiburg, Freiburg im Breisgau, Germany; Bernstein Center Freiburg, Albert-Ludwig University of Freiburg, Freiburg im Breisgau, Germany
| | - Abigail Morrison
- Institute for Advanced Simulation, Theoretical Neuroscience and Institute of Neuroscience and Medicine, Computational and Systems Neuroscience and Jülich Aachen Research Alliance BRAIN Institute I, Jülich Research Center and Jülich Aachen Research Alliance, Jülich, Germany; Bernstein Center Freiburg, Albert-Ludwig University of Freiburg, Freiburg im Breisgau, Germany; Simulation Laboratory Neuroscience - Bernstein Facility for Simulation and Database Technology, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich Research Center, Jülich, Germany; Faculty of Psychology, Institute of Cognitive Neuroscience, Ruhr-University Bochum, Bochum, Germany
| |
Collapse
|
43
|
Diaz-Pier S, Naveau M, Butz-Ostendorf M, Morrison A. Automatic Generation of Connectivity for Large-Scale Neuronal Network Models through Structural Plasticity. Front Neuroanat 2016; 10:57. [PMID: 27303272 PMCID: PMC4880596 DOI: 10.3389/fnana.2016.00057] [Citation(s) in RCA: 32] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2015] [Accepted: 05/06/2016] [Indexed: 11/13/2022] Open
Abstract
With the emergence of new high-performance computing technology in the last decade, simulating large-scale neural networks that reproduce the behavior and structure of the brain has finally become an achievable target of neuroscience. Owing to the number of synaptic connections between neurons and the complexity of biological networks, most contemporary models have manually defined or static connectivity. However, modeling the dynamic generation and deletion of links among neurons, locally and between different regions of the brain, is expected to be crucial for unraveling important mechanisms associated with learning, memory, and healing. Moreover, for many neural circuits that could potentially be modeled, activity data are more readily and reliably available than connectivity data. Thus, a framework that enables networks to wire themselves on the basis of specified activity targets is of great value for specifying network models whose connectivity data are incomplete or carry large error margins. To address these issues, in the present work we present an implementation of a model of structural plasticity in the neural network simulator NEST. In this model, each synapse consists of two parts, a pre- and a postsynaptic element. Synapses are created and deleted during the execution of the simulation following local homeostatic rules until a mean level of electrical activity is reached in the network. We assess the scalability of the implementation in order to evaluate its potential for the self-generation of connectivity in large-scale networks. We show and discuss the results of simulations on simple two-population networks and on more complex models of the cortical microcircuit involving 8 populations and 4 layers using the new framework.
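For orientation, a minimal script in the spirit of the structural plasticity interface described here (and of the example shipped with NEST) might look as follows. The growth-curve keys, the 'Den_ex'/'Axon_ex' element labels, and the 'synapse_model' key are assumptions that differ across NEST releases (older versions use 'model'); treat this as a sketch, not a reference.

```python
# Hedged sketch of homeostatic structural plasticity in NEST, loosely
# following the example distributed with NEST. Key names ('growth_curve',
# 'Den_ex', 'synapse_model', ...) are assumptions and vary across versions.
import nest

nest.ResetKernel()
nest.SetStructuralPlasticityStatus({
    'structural_plasticity_update_interval': 1000,  # connectivity update period
})

# Gaussian growth curve: elements grow until the activity proxy reaches eps.
growth_curve = {
    'growth_curve': 'gaussian',
    'growth_rate': 1e-4,
    'continuous': False,
    'eta': 0.0,   # minimum of the growth curve
    'eps': 0.05,  # homeostatic set point (target mean activity)
}

# Each neuron carries pre- (axonal) and post- (dendritic) synaptic elements.
neurons = nest.Create('iaf_psc_alpha', 100, {
    'synaptic_elements': {'Den_ex': growth_curve, 'Axon_ex': growth_curve},
})

# Tell the kernel which synapse model pairs which elements.
nest.SetStructuralPlasticityStatus({
    'structural_plasticity_synapses': {
        'synapse_ex': {
            'synapse_model': 'static_synapse',
            'pre_synaptic_element': 'Axon_ex',
            'post_synaptic_element': 'Den_ex',
        },
    },
})

nest.EnableStructuralPlasticity()
nest.Simulate(10000.0)  # synapses are created/deleted as activity converges
```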
Collapse
Affiliation(s)
- Sandra Diaz-Pier
- Simulation Laboratory Neuroscience - Bernstein Facility for Simulation and Database Technology, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich Research Center, Jülich, Germany
| | - Mikaël Naveau
- Simulation Laboratory Neuroscience - Bernstein Facility for Simulation and Database Technology, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich Research Center, Jülich, Germany; Serine Proteases and Pathophysiology of the Neurovascular Unit, Institut National de la Santé et de la Recherche Médicale UMR-S U919, Caen Normandy University, Groupement d'Intérêt Public (GIP) CYCERON, Caen, France
| | - Markus Butz-Ostendorf
- Simulation Laboratory Neuroscience - Bernstein Facility for Simulation and Database Technology, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich Research Center, Jülich, Germany
| | - Abigail Morrison
- Simulation Laboratory Neuroscience - Bernstein Facility for Simulation and Database Technology, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich Research Center, Jülich, Germany; Institute of Neuroscience and Medicine (INM-6), Computational and Systems Neuroscience, Jülich Research Centre, Jülich, Germany; Faculty of Psychology, Institute of Cognitive Neuroscience, Ruhr-University Bochum, Bochum, Germany
| |
Collapse
|
44
|
van Albada SJ, Helias M, Diesmann M. Limits to the scalability of cortical network models. BMC Neurosci 2015; 16(Suppl 1):O1. [PMCID: PMC4697573 DOI: 10.1186/1471-2202-16-s1-o1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
|
45
|
Hahne J, Helias M, Kunkel S, Igarashi J, Bolten M, Frommer A, Diesmann M. A unified framework for spiking and gap-junction interactions in distributed neuronal network simulations. Front Neuroinform 2015; 9:22. [PMID: 26441628 PMCID: PMC4563270 DOI: 10.3389/fninf.2015.00022] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2015] [Accepted: 08/20/2015] [Indexed: 11/30/2022] Open
Abstract
Contemporary simulators for networks of point and few-compartment model neurons come with a plethora of ready-to-use neuron and synapse models and support complex network topologies. Recent technological advances have further broadened the range of applications to include the efficient simulation of brain-scale networks on supercomputers. In distributed network simulations, the amount of spike data that accrues per millisecond and process is typically low, so a common optimization strategy is to communicate spikes at relatively long intervals, where the upper limit is given by the shortest synaptic transmission delay in the network. This approach is well suited for simulations that employ only chemical synapses, but it has so far impeded the incorporation of gap-junction models, which require instantaneous neuronal interactions. Here, we present a numerical algorithm based on a waveform-relaxation technique that allows network simulations with gap junctions in a way that is compatible with the delayed communication strategy. Using a reference implementation in the NEST simulator, we demonstrate that the algorithm and the required data structures can be smoothly integrated with existing code such that they complement the infrastructure for spiking connections. To show that the unified framework for gap-junction and spiking interactions achieves high performance and delivers high accuracy in the presence of gap junctions, we present benchmarks for workstations, clusters, and supercomputers. Finally, we discuss limitations of the new technology.
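As a usage sketch, assuming a NEST release with gap-junction support: the kernel switches below control the waveform-relaxation iteration and follow the names in the published NEST gap-junction example, but the exact keys and the 'gap_junction'/'hh_psc_alpha_gap' model names should be checked against the installed version.

```python
# Hedged sketch of a two-neuron gap-junction simulation in NEST. The
# waveform-relaxation kernel keys ('use_wfr', ...) follow the published
# example; verify all names against your installed NEST version.
import nest

nest.ResetKernel()
nest.SetKernelStatus({
    'use_wfr': True,           # enable waveform relaxation
    'wfr_comm_interval': 1.0,  # communication interval (ms)
    'wfr_tol': 1e-4,           # convergence tolerance of the iteration
    'wfr_max_iterations': 15,
    'wfr_interpolation_order': 3,
})

# Gap junctions need a neuron model that exposes its membrane-potential trace.
neurons = nest.Create('hh_psc_alpha_gap', 2)

# Gap junctions are electrically symmetric; make_symmetric creates both
# directions of the connection in one call.
nest.Connect(neurons[0], neurons[1],
             {'rule': 'one_to_one', 'make_symmetric': True},
             {'synapse_model': 'gap_junction', 'weight': 0.5})

nest.Simulate(100.0)
```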
Collapse
Affiliation(s)
- Jan Hahne
- Department of Mathematics and Science, Bergische Universität Wuppertal, Wuppertal, Germany
| | - Moritz Helias
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany; Programming Environment Research Team, RIKEN Advanced Institute for Computational Science, Kobe, Japan
| | - Susanne Kunkel
- Programming Environment Research Team, RIKEN Advanced Institute for Computational Science, Kobe, Japan; Simulation Laboratory Neuroscience, Bernstein Facility for Simulation and Database Technology, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich Research Centre, Jülich, Germany
| | - Jun Igarashi
- Neural Computation Unit, Okinawa Institute of Science and Technology, Okinawa, Japan; Laboratory for Neural Circuit Theory, RIKEN Brain Science Institute, Wako, Japan
| | - Matthias Bolten
- Department of Mathematics and Science, Bergische Universität Wuppertal, Wuppertal, Germany
| | - Andreas Frommer
- Department of Mathematics and Science, Bergische Universität Wuppertal, Wuppertal, Germany
| | - Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany; Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany; Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
| |
Collapse
|
46
|
van Albada SJ, Helias M, Diesmann M. Scalability of Asynchronous Networks Is Limited by One-to-One Mapping between Effective Connectivity and Correlations. PLoS Comput Biol 2015; 11:e1004490. [PMID: 26325661 PMCID: PMC4556689 DOI: 10.1371/journal.pcbi.1004490] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2014] [Accepted: 08/05/2015] [Indexed: 11/19/2022] Open
Abstract
Network models are routinely downscaled relative to nature in terms of numbers of nodes or edges because of a lack of computational resources, often without explicit mention of the limitations this entails. While reliable methods have long existed to adjust parameters such that the first-order statistics of network dynamics are conserved, here we show that limitations already arise if second-order statistics are also to be maintained. The temporal structure of pairwise averaged correlations in the activity of recurrent networks is determined by the effective population-level connectivity. We first show that, in general, the converse is also true and explicitly mention degenerate cases in which this one-to-one relationship does not hold. The one-to-one correspondence between effective connectivity and the temporal structure of pairwise averaged correlations implies that network scalings should preserve the effective connectivity if pairwise averaged correlations are to be held constant. Changes in effective connectivity can even push a network from a linearly stable to an unstable, oscillatory regime and vice versa. On this basis, we derive conditions for the preservation of both mean population-averaged activities and pairwise averaged correlations under a change in the numbers of neurons or synapses in the asynchronous regime typical of cortical networks. We find that mean activities and correlation structure can be maintained by an appropriate scaling of the synaptic weights, but only over a range of numbers of synapses that is limited by the variance of external inputs to the network. Our results therefore show that the reducibility of asynchronous networks is fundamentally limited.
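One concrete recipe consistent with this conclusion: when the in-degree is reduced from K to K′, preserving the input variance requires scaling the weights as J′ = J·√(K/K′), and the mean drive lost in the process must be restored externally, e.g. as a DC current. The sketch below illustrates this under standard diffusion-approximation assumptions; the symbols and the compensation formula are illustrative, not the paper's exact derivation.

```python
# Illustrative downscaling recipe: J' = J*sqrt(K/K') preserves the input
# variance, and a compensating DC drive restores the reduced mean input.
# The compensation formula is an assumption based on the usual
# diffusion-approximation expression mu = tau_m * K * J * nu.
import numpy as np

def downscale(J, K, K_prime, nu, tau_m):
    """Scaled weight and compensating DC drive for in-degree K -> K_prime.

    J: synaptic weight, K: in-degree, nu: presynaptic rate (1/s),
    tau_m: membrane time constant (s).
    """
    J_prime = J * np.sqrt(K / K_prime)             # keeps the input variance fixed
    dc = (K * J - K_prime * J_prime) * nu * tau_m  # restores the mean input
    return J_prime, dc

# Example: quartering the in-degree doubles the weight (sqrt(4) = 2) and
# leaves a mean-input deficit that the DC term makes up.
J_prime, dc = downscale(J=0.1, K=10_000, K_prime=2_500, nu=8.0, tau_m=0.01)
# J_prime == 0.2; dc == (1000 - 500) * 8 * 0.01 == 40.0 (units of J)
```

Note that the DC term can substitute only the mean, not the variance, of the removed inputs, which is one way to read the abstract's statement that the valid range of such scalings is limited by the variance of external inputs.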
Collapse
|