1
Pimentel JM, Moioli RC, De Araujo MFP, Vargas PA. An Integrated Neurorobotics Model of the Cerebellar-Basal Ganglia Circuitry. Int J Neural Syst 2023; 33:2350059. PMID: 37791495. DOI: 10.1142/s0129065723500594.
Abstract
This work presents a neurorobotics model of the brain that integrates the cerebellum and the basal ganglia regions to coordinate movements in a humanoid robot. This cerebellar-basal ganglia circuitry is well known for its relevance to the motor control used by most mammals. Other computational models have been designed for similar applications in the robotics field. However, most of them completely ignore the interplay between neurons from the basal ganglia and cerebellum. Recently, neuroscientists have indicated that neurons from both regions communicate not only at the level of the cerebral cortex but also at the subcortical level. In this work, we built an integrated neurorobotics model to assess the capacity of the network to predict and adjust the motion of the hands of a robot in real time. Our model was capable of performing different movements in a humanoid robot by respecting the sensorimotor loop of the robot and the biophysical features of the neuronal circuitry. The experiments were executed both in simulation and in the real world. We believe that our proposed neurorobotics model can be an important tool for new studies on the brain and a reference toward new robot motor controllers.
Affiliation(s)
- Jhielson M Pimentel
- Edinburgh Centre for Robotics, Heriot-Watt University, Edinburgh EH14 4AS, UK
- Renan C Moioli
- Bioinformatics Multidisciplinary Environment, Digital Metropolis Institute, Federal University of Rio Grande do Norte, Natal, RN, Brazil
- Patricia A Vargas
- Edinburgh Centre for Robotics, Heriot-Watt University, Edinburgh EH14 4AS, UK
2
Cremonesi F, Schürmann F. Understanding Computational Costs of Cellular-Level Brain Tissue Simulations Through Analytical Performance Models. Neuroinformatics 2020; 18:407-428. PMID: 32056104. PMCID: PMC7338826. DOI: 10.1007/s12021-019-09451-w.
Abstract
Computational modeling and simulation have become essential tools in the quest to better understand the brain's makeup and to decipher the causal interrelations of its components. The breadth of biochemical and biophysical processes and structures in the brain has led to the development of a large variety of model abstractions and specialized tools, oftentimes requiring high performance computing resources for their timely execution. What has been missing so far is an in-depth analysis of the complexity of the computational kernels, hindering a systematic approach to identifying bottlenecks of algorithms and hardware. If whole brain models are to be achieved on emerging computer generations, models and simulation engines will have to be carefully co-designed for the intrinsic hardware tradeoffs. For the first time, we present a systematic exploration based on analytic performance modeling. We base our analysis on three in silico models, chosen as representative examples of the most widely employed modeling abstractions: current-based point neurons, conductance-based point neurons and conductance-based detailed neurons. We identify that the synaptic modeling formalism, i.e., current- or conductance-based representation, and not the level of morphological detail, is the most significant factor in determining the properties of memory bandwidth saturation and shared-memory scaling of in silico models. Even though general purpose computing has, until now, largely been able to deliver high performance, we find that for all types of abstractions, network latency and memory bandwidth will become severe bottlenecks as the number of neurons to be simulated grows. By adapting and extending a performance modeling approach, we deliver a first characterization of the performance landscape of brain tissue simulations, allowing us to pinpoint current bottlenecks for state-of-the-art in silico models, and make projections for future hardware and software requirements.
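For orientation, the flavour of such an analytic performance model can be captured by a generic roofline-style bound (an illustration only, not the specific model derived in the paper): the time per simulation step on a compute node is limited either by arithmetic work or by memory traffic,

    T_{step} \gtrsim \max\left( \frac{W}{F_{peak}}, \; \frac{Q}{B_{mem}} \right),

where W is the floating-point work per step, Q the volume of data moved to and from main memory, F_peak the peak arithmetic throughput, and B_mem the sustainable memory bandwidth. Kernels with a low ratio W/Q, such as synaptic state updates, hit the bandwidth term first, which is consistent with the bandwidth-saturation behaviour described above.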
Affiliation(s)
- Francesco Cremonesi
- Blue Brain Project, Brain Mind Institute, École polytechnique fédérale de Lausanne (EPFL), Campus Biotech, 1202, Geneva, Switzerland
- Felix Schürmann
- Blue Brain Project, Brain Mind Institute, École polytechnique fédérale de Lausanne (EPFL), Campus Biotech, 1202, Geneva, Switzerland.
3
Krzhizhanovskaya VV, Závodszky G, Lees MH, Dongarra JJ, Sloot PMA, Brissos S, Teixeira J. Fully-Asynchronous Fully-Implicit Variable-Order Variable-Timestep Simulation of Neural Networks. Lecture Notes in Computer Science 2020. PMCID: PMC7302593. DOI: 10.1007/978-3-030-50426-7_8.
Abstract
State-of-the-art simulations of detailed neurons follow the Bulk Synchronous Parallel execution model. Execution is divided into equidistant communication intervals, with parallel neuron interpolation and collective communication guiding synchronization. Such simulations, driven by stiff dynamics or a wide range of time scales, struggle with fixed-step interpolation methods, yielding excessive computation on intervals of quasi-constant activity and inaccurate interpolation of periods of high volatility in the solution. Alternative adaptive timestepping methods are inefficient in parallel executions due to computational imbalance at the synchronization barriers. We introduce a distributed fully-asynchronous execution model that removes global synchronization, allowing for long variable-timestep interpolations of neurons. Asynchronicity is provided by point-to-point communication notifying neurons' time advancement to synaptic connectivities. Timestepping is driven by scheduled neuron advancements based on interneuron synaptic delays, yielding an exhaustive yet not speculative execution. Benchmarks on 64 Cray XE6 compute nodes demonstrate a reduced number of interpolation steps, higher numerical accuracy, and lower runtime compared to state-of-the-art methods. Efficiency is shown to be activity-dependent, with scaling of the algorithm demonstrated on a simulation of a laboratory experiment.
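The scheduling rule at the core of such an execution model is that a neuron may advance its local time only as far as the earliest instant at which a spike from a presynaptic neuron could still arrive. A simplified sketch of that rule (an illustration under assumed data structures, not the implementation benchmarked in the paper):

    # pre_times[i]: latest simulation time communicated by presynaptic source i
    # delays[i]:    synaptic delay from source i to this neuron
    def safe_horizon(pre_times, delays):
        # the neuron may integrate safely up to the minimum of (time + delay)
        return min(t + d for t, d in zip(pre_times, delays))

    # Sources advanced to 5.0, 4.2 and 6.1 ms, with delays 1.0, 2.5 and 0.8 ms:
    print(safe_horizon([5.0, 4.2, 6.1], [1.0, 2.5, 0.8]))   # -> 6.0

The further ahead the presynaptic population is and the longer the delays, the longer the variable timestep a neuron can take before it must be notified again.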
4
Manninen T, Aćimović J, Havela R, Teppola H, Linne ML. Challenges in Reproducibility, Replicability, and Comparability of Computational Models and Tools for Neuronal and Glial Networks, Cells, and Subcellular Structures. Front Neuroinform 2018; 12:20. PMID: 29765315. PMCID: PMC5938413. DOI: 10.3389/fninf.2018.00020.
Abstract
The possibility to replicate and reproduce published research results is one of the biggest challenges in all areas of science. In computational neuroscience, there are thousands of models available. However, it is rarely possible to reimplement the models based on the information in the original publication, let alone rerun them, simply because the model implementations have not been made publicly available. We evaluate and discuss the comparability of a versatile choice of simulation tools: tools for biochemical reactions and spiking neuronal networks, and relatively new tools for growth in cell cultures. The replicability and reproducibility issues are considered for computational models that are equally diverse, including the models for intracellular signal transduction of neurons and glial cells, in addition to single glial cells, neuron-glia interactions, and selected examples of spiking neuronal networks. We also address the comparability of the simulation results with one another to comprehend whether the studied models can be used to answer similar research questions. In addition to presenting the challenges in reproducibility and replicability of published results in computational neuroscience, we highlight the need for developing recommendations and good practices for publishing simulation tools and computational models. Model validation and flexible model description must be an integral part of the tool used to simulate and develop computational models. Constant improvement in experimental techniques and recording protocols leads to increasing knowledge about the biophysical mechanisms in neural systems. This poses new challenges for computational neuroscience: extended or completely new computational methods and models may be required. Careful evaluation and categorization of the existing models and tools provide a foundation for these future needs, for constructing multiscale models or extending the models to incorporate additional or more detailed biophysical mechanisms. Improving the quality of publications in computational neuroscience, enabling progressive building of advanced computational models and tools, can be achieved only through adopting publishing standards which underline replicability and reproducibility of research results.
Affiliation(s)
- Tiina Manninen
- Computational Neuroscience Group, BioMediTech Institute and Faculty of Biomedical Sciences and Engineering, Tampere University of Technology, Tampere, Finland
- Laboratory of Signal Processing, Tampere University of Technology, Tampere, Finland
- Jugoslava Aćimović
- Computational Neuroscience Group, BioMediTech Institute and Faculty of Biomedical Sciences and Engineering, Tampere University of Technology, Tampere, Finland
- Laboratory of Signal Processing, Tampere University of Technology, Tampere, Finland
- Riikka Havela
- Computational Neuroscience Group, BioMediTech Institute and Faculty of Biomedical Sciences and Engineering, Tampere University of Technology, Tampere, Finland
- Laboratory of Signal Processing, Tampere University of Technology, Tampere, Finland
- Heidi Teppola
- Computational Neuroscience Group, BioMediTech Institute and Faculty of Biomedical Sciences and Engineering, Tampere University of Technology, Tampere, Finland
- Laboratory of Signal Processing, Tampere University of Technology, Tampere, Finland
- Marja-Leena Linne
- Computational Neuroscience Group, BioMediTech Institute and Faculty of Biomedical Sciences and Engineering, Tampere University of Technology, Tampere, Finland
- Laboratory of Signal Processing, Tampere University of Technology, Tampere, Finland
5
Mulugeta L, Drach A, Erdemir A, Hunt CA, Horner M, Ku JP, Myers JG, Vadigepalli R, Lytton WW. Credibility, Replicability, and Reproducibility in Simulation for Biomedicine and Clinical Applications in Neuroscience. Front Neuroinform 2018; 12:18. PMID: 29713272. PMCID: PMC5911506. DOI: 10.3389/fninf.2018.00018.
Abstract
Modeling and simulation in computational neuroscience is currently a research enterprise to better understand neural systems. It is not yet directly applicable to the problems of patients with brain disease. To be used for clinical applications, there must not only be considerable progress in the field but also a concerted effort to use best practices in order to demonstrate model credibility to regulatory bodies, to clinics and hospitals, to doctors, and to patients. In doing this for neuroscience, we can learn lessons from long-standing practices in other areas of simulation (aircraft, computer chips), from software engineering, and from other biomedical disciplines. In this manuscript, we introduce some basic concepts that will be important in the development of credible clinical neuroscience models: reproducibility and replicability; verification and validation; model configuration; and procedures and processes for credible mechanistic multiscale modeling. We also discuss how garnering strong community involvement can promote model credibility. Finally, in addition to direct usage with patients, we note the potential for simulation usage in the area of Simulation-Based Medical Education, an area which to date has been primarily reliant on physical models (mannequins) and scenario-based simulations rather than on numerical simulations.
Affiliation(s)
- Andrew Drach
- The Institute for Computational Engineering and Sciences, The University of Texas at Austin, Austin, TX, United States
- Ahmet Erdemir
- Department of Biomedical Engineering and Computational Biomodeling (CoBi) Core, Lerner Research Institute, Cleveland Clinic, Cleveland, OH, United States
- C A Hunt
- Department of Bioengineering and Therapeutic Sciences, University of California, San Francisco, San Francisco, CA, United States
- Joy P Ku
- Department of Bioengineering, Stanford University, Stanford, CA, United States
- Jerry G Myers
- NASA Glenn Research Center, Cleveland, OH, United States
- Rajanikanth Vadigepalli
- Department of Pathology, Anatomy and Cell Biology, Daniel Baugh Institute for Functional Genomics and Computational Biology, Thomas Jefferson University, Philadelphia, PA, United States
- William W Lytton
- Department of Neurology, SUNY Downstate Medical Center, The State University of New York, New York, NY, United States
- Department of Physiology and Pharmacology, SUNY Downstate Medical Center, The State University of New York, New York, NY, United States
- Department of Neurology, Kings County Hospital Center, New York, NY, United States
6
Lytton WW, Seidenstein AH, Dura-Bernal S, McDougal RA, Schürmann F, Hines ML. Simulation Neurotechnologies for Advancing Brain Research: Parallelizing Large Networks in NEURON. Neural Comput 2016; 28:2063-90. PMID: 27557104. DOI: 10.1162/neco_a_00876.
Abstract
Large multiscale neuronal network simulations are of increasing value as more big data are gathered about brain wiring and organization under the auspices of a current major research initiative, such as Brain Research through Advancing Innovative Neurotechnologies. The development of these models requires new simulation technologies. We describe here the current use of the NEURON simulator with message passing interface (MPI) for simulation in the domain of moderately large networks on commonly available high-performance computers (HPCs). We discuss the basic layout of such simulations, including the methods of simulation setup, the run-time spike-passing paradigm, and postsimulation data storage and data management approaches. Using the Neuroscience Gateway, a portal for computational neuroscience that provides access to large HPCs, we benchmark simulations of neuronal networks of different sizes (500-100,000 cells), and using different numbers of nodes (1-256). We compare three types of networks, composed of either Izhikevich integrate-and-fire neurons (I&F), single-compartment Hodgkin-Huxley (HH) cells, or a hybrid network with half of each. Results show simulation run time increased approximately linearly with network size and decreased almost linearly with the number of nodes. Networks with I&F neurons were faster than HH networks, although differences were small since all tested cells were point neurons with a single compartment.
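The setup and run pattern described here can be illustrated with a minimal NEURON-with-MPI sketch (a simplified example, not the benchmark code from the paper; the placeholder cell and the ring connectivity are assumptions made only for illustration):

    from neuron import h
    h.nrnmpi_init()                                  # initialize MPI; launch with mpiexec
    pc = h.ParallelContext()

    def make_cell():
        # placeholder cell: a single HH soma with one excitatory synapse
        sec = h.Section(name='soma')
        sec.insert('hh')
        cell = type('Cell', (), {})()
        cell.soma, cell.syn = sec, h.ExpSyn(sec(0.5))
        return cell

    ncell, cells, netcons = 1000, {}, []
    for gid in range(pc.id(), ncell, pc.nhost()):    # round-robin gid distribution
        pc.set_gid2node(gid, pc.id())                # this rank owns this gid
        cell = make_cell()
        detector = h.NetCon(cell.soma(0.5)._ref_v, None, sec=cell.soma)
        pc.cell(gid, detector)                       # register the cell's spike source
        cells[gid] = cell
        netcons.append(detector)

    for gid, cell in cells.items():                  # ring connectivity, purely illustrative
        nc = pc.gid_connect((gid + 1) % ncell, cell.syn)
        nc.weight[0], nc.delay = 0.001, 1.0
        netcons.append(nc)

    pc.set_maxstep(10)                               # spike-exchange interval <= minimum delay
    h.finitialize(-65)
    pc.psolve(100)                                   # integrate to 100 ms, exchanging spikes
    pc.barrier()

During psolve only gids and spike times cross process boundaries; each rank delivers received spikes to its local synapses through the NetCons created by gid_connect.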
Affiliation(s)
- William W Lytton
- Departments of Physiology, Pharmacology, Biomedical Engineering, and Neurology, SUNY Downstate Medical Center, Brooklyn 11023, New York, and Kings County Hospital Center, Brooklyn 11203, New York, U.S.A.
- Alexandra H Seidenstein
- Departments of Physiology, Pharmacology, Biomedical Engineering, and Neurology, SUNY Downstate Medical Center, Brooklyn, NY 11023, and Department of Chemical and Biomolecular Engineering, Tandon School of Engineering, New York University, Brooklyn, NY 11201, U.S.A.
- Salvador Dura-Bernal
- Departments of Physiology, Pharmacology, Biomedical Engineering, and Neurology, SUNY Downstate Medical Center, Brooklyn, NY 11023, U.S.A.
- Robert A McDougal
- Department of Neuroscience, Yale University, New Haven, CT 06520, U.S.A.
- Felix Schürmann
- Blue Brain Project, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne, 1015 Geneva, Switzerland
- Michael L Hines
- Blue Brain Project, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne, 1015 Geneva, Switzerland, and Department of Neuroscience, Yale University, New Haven, CT 06520, U.S.A.
7
Huang S, Hong S, De Schutter E. Non-linear leak currents affect mammalian neuron physiology. Front Cell Neurosci 2015; 9:432. PMID: 26594148. PMCID: PMC4635211. DOI: 10.3389/fncel.2015.00432.
Abstract
In their seminal works on squid giant axons, Hodgkin and Huxley approximated the membrane leak current as Ohmic, i.e., linear, since in their preparation, sub-threshold current rectification due to the influence of ionic concentration is negligible. Most studies on mammalian neurons have made the same, largely untested, assumption. Here we show that the membrane time constant and input resistance of mammalian neurons (when other major voltage-sensitive and ligand-gated ionic currents are discounted) vary non-linearly with membrane voltage, following the prediction of a Goldman-Hodgkin-Katz-based passive membrane model. The model predicts that under such conditions, the time constant/input resistance-voltage relationship will linearize if the concentration differences across the cell membrane are reduced. These properties were observed in patch-clamp recordings of cerebellar Purkinje neurons (in the presence of pharmacological blockers of other background ionic currents) and were more prominent in the sub-threshold region of the membrane potential. Model simulations showed that the non-linear leak affects voltage-clamp recordings and reduces temporal summation of excitatory synaptic input. Together, our results demonstrate the importance of trans-membrane ionic concentration in defining the functional properties of the passive membrane in mammalian neurons as well as other excitable cells.
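For reference, the non-Ohmic leak referred to here follows the Goldman-Hodgkin-Katz current equation (standard textbook form, not a formula copied from the paper), which for an ion of valence z with intracellular and extracellular concentrations [C]_i and [C]_o reads

    I = P z^2 \frac{F^2 V}{RT} \, \frac{[C]_i - [C]_o \, e^{-zFV/RT}}{1 - e^{-zFV/RT}},

where P is the membrane permeability, F the Faraday constant, R the gas constant, and T the absolute temperature. Because the slope conductance dI/dV depends on V whenever [C]_i and [C]_o differ, the apparent input resistance and membrane time constant become voltage dependent, and they linearize as the concentration gradient is reduced.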
Affiliation(s)
- Shiwei Huang
- Computational Neuroscience Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa, Japan
- Sungho Hong
- Computational Neuroscience Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa, Japan
- Erik De Schutter
- Computational Neuroscience Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa, Japan
8
Pecevski D, Kappel D, Jonke Z. NEVESIM: event-driven neural simulation framework with a Python interface. Front Neuroinform 2014; 8:70. PMID: 25177291. PMCID: PMC4132371. DOI: 10.3389/fninf.2014.00070.
Abstract
NEVESIM is a software package for event-driven simulation of networks of spiking neurons with a fast simulation core in C++, and a scripting user interface in the Python programming language. It supports simulation of heterogeneous networks with different types of neurons and synapses, and can be easily extended by the user with new neuron and synapse types. To enable heterogeneous networks and extensibility, NEVESIM is designed to decouple the simulation logic of communicating events (spikes) between the neurons at a network level from the implementation of the internal dynamics of individual neurons. In this paper we will present the simulation framework of NEVESIM, its concepts and features, as well as some aspects of the object-oriented design approaches and simulation strategies that were utilized to efficiently implement the concepts and functionalities of the framework. We will also give an overview of the Python user interface, its basic commands and constructs, and also discuss the benefits of integrating NEVESIM with Python. One of the valuable capabilities of the simulator is to simulate exactly and efficiently networks of stochastic spiking neurons from the recently developed theoretical framework of neural sampling. This functionality was implemented as an extension on top of the basic NEVESIM framework. Altogether, the intended purpose of the NEVESIM framework is to provide a basis for further extensions that support simulation of various neural network models incorporating different neuron and synapse types that can potentially also use different simulation strategies.
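The general shape of an event-driven core, a time-ordered queue of spike events that drives all state updates and is decoupled from the internals of the neuron models, can be sketched generically as follows (a toy illustration only; it does not use or reproduce NEVESIM's actual C++ or Python API, and the trivial relay neurons are placeholders):

    import heapq

    def simulate(initial_spikes, targets, delay, t_end):
        """initial_spikes: list of (time, neuron_id); targets: dict id -> list of ids;
        delay: dict (pre, post) -> delivery delay. Every delivered event simply makes
        the target emit a spike, so the network is a pure relay."""
        queue = list(initial_spikes)
        heapq.heapify(queue)
        emitted = []
        while queue:
            t, nid = heapq.heappop(queue)          # next event in causal (time) order
            if t > t_end:
                break
            emitted.append((t, nid))
            for post in targets.get(nid, []):      # schedule deliveries to targets
                heapq.heappush(queue, (t + delay[(nid, post)], post))
        return emitted

    # A 0 -> 1 -> 2 chain with 1 ms delays and one external spike at t = 0:
    print(simulate([(0.0, 0)], {0: [1], 1: [2]}, {(0, 1): 1.0, (1, 2): 1.0}, 10.0))

Replacing the relay rule with real neuron and synapse state updates, while keeping the queue logic untouched, is exactly the decoupling of network-level event communication from single-neuron dynamics described above.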
Affiliation(s)
- Dejan Pecevski
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
- David Kappel
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Zeno Jonke
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
9
D'Haene M, Hermans M, Schrauwen B. Toward unified hybrid simulation techniques for spiking neural networks. Neural Comput 2014; 26:1055-79. PMID: 24684451. DOI: 10.1162/neco_a_00587.
Abstract
In the field of neural network simulation techniques, the common conception is that spiking neural network simulators can be divided into two categories: time-step-based and event-driven methods. In this letter, we look at state-of-the-art simulation techniques in both categories and show that a clear distinction between both methods is increasingly difficult to define. In an attempt to improve the weak points of each simulation method, ideas of the alternative method are, sometimes unknowingly, incorporated into the simulation engine. Clearly the ideal simulation method is a mix of both methods. We formulate the key properties of such an efficient and generally applicable hybrid approach.
10
McDougal RA, Hines ML, Lytton WW. Reaction-diffusion in the NEURON simulator. Front Neuroinform 2013; 7:28. PMID: 24298253. PMCID: PMC3828620. DOI: 10.3389/fninf.2013.00028.
Abstract
In order to support research on the role of cell biological principles (genomics, proteomics, signaling cascades and reaction dynamics) on the dynamics of neuronal response in health and disease, NEURON's Reaction-Diffusion (rxd) module in Python provides specification and simulation for these dynamics, coupled with the electrophysiological dynamics of the cell membrane. Arithmetic operations on species and parameters are overloaded, allowing arbitrary reaction formulas to be specified using Python syntax. These expressions are then transparently compiled into bytecode that uses NumPy for fast vectorized calculations. At each time step, rxd combines NEURON's integrators with SciPy's sparse linear algebra library.
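A minimal example of the kind of specification this enables, written against NEURON's documented Python rxd interface (the species names, geometry, and rate constants below are illustrative placeholders, not values from the paper):

    from neuron import h, rxd
    h.load_file('stdrun.hoc')

    soma = h.Section(name='soma')
    soma.L = soma.diam = 10                      # one simple compartment

    cyt = rxd.Region([soma], nrn_region='i')     # intracellular region
    ca    = rxd.Species(cyt, name='ca',  charge=2, initial=1e-4)   # mM, illustrative
    buf   = rxd.Species(cyt, name='buf', initial=1e-3)
    cabuf = rxd.Species(cyt, name='cabuf', initial=0)

    # ca + buf <-> cabuf: the reaction scheme is written with ordinary Python
    # arithmetic on species objects; forward/backward rates are illustrative.
    buffering = rxd.Reaction(ca + buf, cabuf, 1e3, 1e-2)

    h.finitialize(-65)
    h.continuerun(50)                            # ms; membrane and reaction dynamics together
    print(ca.nodes[0].concentration)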
Affiliation(s)
- William W. Lytton
- Department of Physiology and Pharmacology, SUNY Downstate, Brooklyn, NY, USA
- Department of Neurology, SUNY Downstate, Brooklyn, NY, USA
- Kings County Hospital, Brooklyn, NY, USA
11
Kunkel S, Potjans TC, Eppler JM, Plesser HE, Morrison A, Diesmann M. Meeting the memory challenges of brain-scale network simulation. Front Neuroinform 2012; 5:35. PMID: 22291636. PMCID: PMC3264885. DOI: 10.3389/fninf.2011.00035.
Abstract
The development of high-performance simulation software is crucial for studying the brain connectome. Using connectome data to generate neurocomputational models requires software capable of coping with models on a variety of scales: from the microscale, investigating plasticity and dynamics of circuits in local networks, to the macroscale, investigating the interactions between distinct brain regions. Prior to any serious dynamical investigation, the first task of network simulations is to check the consistency of data integrated in the connectome and constrain ranges for yet unknown parameters. Thanks to distributed computing techniques, it is possible today to routinely simulate local cortical networks of around 10^5 neurons with up to 10^9 synapses on clusters and multi-processor shared-memory machines. However, brain-scale networks are orders of magnitude larger than such local networks, in terms of numbers of neurons and synapses as well as in terms of computational load. Such networks have been investigated in individual studies, but the underlying simulation technologies have neither been described in sufficient detail to be reproducible nor made publicly available. Here, we discover that as the network model sizes approach the regime of meso- and macroscale simulations, memory consumption on individual compute nodes becomes a critical bottleneck. This is especially relevant on modern supercomputers such as the Blue Gene/P architecture where the available working memory per CPU core is rather limited. We develop a simple linear model to analyze the memory consumption of the constituent components of neuronal simulators as a function of network size and the number of cores used. This approach has multiple benefits. The model enables identification of key contributing components to memory saturation and prediction of the effects of potential improvements to code before any implementation takes place. As a consequence, development cycles can be shorter and less expensive. Applying the model to our freely available Neural Simulation Tool (NEST), we identify the software components dominant at different scales, and develop general strategies for reducing the memory consumption, in particular by using data structures that exploit the sparseness of the local representation of the network. We show that these adaptations enable our simulation software to scale up to the order of 10,000 processors and beyond. As memory consumption issues are likely to be relevant for any software dealing with complex connectome data on such architectures, our approach and our findings should be useful for researchers developing novel neuroinformatics solutions to the challenges posed by the connectome project.
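The structure of such a linear memory model can be sketched schematically as follows (a generic form for illustration; the model in the paper resolves more components and NEST-specific data structures). The memory required on each of P processes for a network of N neurons with on average K synapses per neuron is approximately

    M(N, K, P) \approx m_0 + \frac{N}{P}\,m_n + \frac{N K}{P}\,m_s + N\,m_{inf},

where m_0 is the fixed overhead of the simulator, m_n and m_s are the per-neuron and per-synapse footprints, and m_inf is the per-neuron cost of any infrastructure replicated on every process. The last term is independent of P and therefore dominates at large machine sizes, which is exactly what sparse data structures for the local network representation are designed to remove.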
Affiliation(s)
- Susanne Kunkel
- Functional Neural Circuits Group, Albert-Ludwig University of Freiburg, Freiburg im Breisgau, Germany
12
Hanuschkin A, Kunkel S, Helias M, Morrison A, Diesmann M. A general and efficient method for incorporating precise spike times in globally time-driven simulations. Front Neuroinform 2010; 4:113. PMID: 21031031. PMCID: PMC2965048. DOI: 10.3389/fninf.2010.00113.
Abstract
Traditionally, event-driven simulations have been limited to the very restricted class of neuronal models for which the timing of future spikes can be expressed in closed form. Recently, the class of models that is amenable to event-driven simulation has been extended by the development of techniques to accurately calculate firing times for some integrate-and-fire neuron models that do not enable the prediction of future spikes in closed form. The motivation of this development is the general perception that time-driven simulations are imprecise. Here, we demonstrate that a globally time-driven scheme can calculate firing times that cannot be discriminated from those calculated by an event-driven implementation of the same model; moreover, the time-driven scheme incurs lower computational costs. The key insight is that time-driven methods are based on identifying a threshold crossing in the recent past, which can be implemented by a much simpler algorithm than the techniques for predicting future threshold crossings that are necessary for event-driven approaches. As run time is dominated by the cost of the operations performed at each incoming spike, which includes spike prediction in the case of event-driven simulation and retrospective detection in the case of time-driven simulation, the simple time-driven algorithm outperforms the event-driven approaches. Additionally, our method is generally applicable to all commonly used integrate-and-fire neuronal models; we show that a non-linear model employing a standard adaptive solver can reproduce a reference spike train with a high degree of precision.
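The retrospective detection step described here can be sketched with the simplest (linear) interpolation of the threshold crossing inside the just-completed grid step (a minimal illustration; the paper also treats exact and higher-order variants):

    def retrospective_spike_time(t, dt, v_prev, v_new, v_th):
        """After a time-driven step from t to t + dt, date a threshold crossing
        that happened inside the step by linear interpolation; return None if
        the membrane potential did not cross threshold in this step."""
        if v_prev < v_th <= v_new:
            frac = (v_th - v_prev) / (v_new - v_prev)   # fraction of the step elapsed
            return t + frac * dt
        return None

    # Example: step of 0.1 ms, V rises from -55 mV to -49 mV, threshold at -50 mV:
    print(retrospective_spike_time(0.0, 0.1, -55.0, -49.0, -50.0))   # ~0.0833 ms

Because this check only looks backward at the step just taken, it is far cheaper than predicting the next threshold crossing ahead of time, which is the point made above.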
Affiliation(s)
- Alexander Hanuschkin
- Functional Neural Circuits Group, Faculty of Biology, Albert-Ludwig University of Freiburg, Freiburg im Breisgau, Germany
13
D'Haene M, Schrauwen B, Van Campenhout J, Stroobandt D. Accelerating event-driven simulation of spiking neurons with multiple synaptic time constants. Neural Comput 2009; 21:1068-99. PMID: 18928367. DOI: 10.1162/neco.2008.02-08-707.
Abstract
The simulation of spiking neural networks (SNNs) is known to be a very time-consuming task. This limits the size of SNNs that can be simulated in reasonable time or forces users to overly limit the complexity of the neuron models. This is one of the driving forces behind much of the recent research on event-driven simulation strategies. Although event-driven simulation allows precise and efficient simulation of certain spiking neuron models, it is not straightforward to generalize the technique to more complex neuron models, mostly because the firing time of these neuron models is computationally expensive to evaluate. Most solutions proposed in the literature concentrate on algorithms that can solve this problem efficiently. However, these solutions do not scale well when more state variables are involved in the neuron model, which is, for example, the case when multiple synaptic time constants for each neuron are used. In this letter, we show that an exact prediction of the firing time is not required in order to guarantee exact simulation results. Several techniques are presented that try to do the least possible amount of work to predict the firing times. We propose an elegant algorithm for the simulation of leaky integrate-and-fire (LIF) neurons with an arbitrary number of (unconstrained) synaptic time constants, which is able to combine these algorithmic techniques efficiently, resulting in very high simulation speed. Moreover, our algorithm is highly independent of the complexity (i.e., number of synaptic time constants) of the underlying neuron model.
Affiliation(s)
- Michiel D'Haene
- Ghent University, Electronics and Information Systems Department, 9000 Ghent, Belgium.
14
Lytton WW, Omurtag A, Neymotin SA, Hines ML. Just-in-time connectivity for large spiking networks. Neural Comput 2008; 20:2745-56. PMID: 18533821. DOI: 10.1162/neco.2008.10-07-622.
Abstract
The scale of large neuronal network simulations is memory limited due to the need to store connectivity information: connectivity storage grows as the square of neuron number up to anatomically relevant limits. Using the NEURON simulator as a discrete-event simulator (no integration), we explored the consequences of avoiding the space costs of connectivity through regenerating connectivity parameters when needed: just in time after a presynaptic cell fires. We explored various strategies for automated generation of one or more of the basic static connectivity parameters: delays, postsynaptic cell identities, and weights, as well as run-time connectivity state: the event queue. Comparison of the JitCon implementation to NEURON's standard NetCon connectivity method showed substantial space savings, with associated run-time penalty. Although JitCon saved space by eliminating connectivity parameters, larger simulations were still memory limited due to growth of the synaptic event queue. We therefore designed a JitEvent algorithm that added items to the queue only when required: instead of alerting multiple postsynaptic cells, a spiking presynaptic cell posted a callback event at the shortest synaptic delay time. At the time of the callback, this same presynaptic cell directly notified the first postsynaptic cell and generated another self-callback for the next delay time. The JitEvent implementation yielded substantial additional time and space savings. We conclude that just-in-time strategies are necessary for very large network simulations but that a variety of alternative strategies should be considered whose optimality will depend on the characteristics of the simulation to be run.
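The core space-saving idea is to never store a cell's outgoing connectivity but to regenerate it deterministically from the presynaptic identifier each time that cell fires. A schematic illustration of that general strategy (names and distributions are placeholders, not the JitCon implementation):

    import random

    def outgoing_connections(pre_gid, n_cells, fan_out, base_seed=12345):
        """Regenerate (postsynaptic gid, delay in ms, weight) tuples on demand.
        Seeding the RNG from the presynaptic gid reproduces the same 'virtual'
        connectivity at every spike, so nothing is stored between spikes."""
        rng = random.Random(base_seed * 1_000_003 + pre_gid)   # deterministic per cell
        conns = []
        for _ in range(fan_out):
            post = rng.randrange(n_cells)
            delay = 1.0 + 4.0 * rng.random()        # 1-5 ms, illustrative
            weight = rng.gauss(0.5, 0.1)            # arbitrary units, illustrative
            conns.append((post, delay, weight))
        return conns

    # The same presynaptic cell always regenerates the same targets:
    assert outgoing_connections(42, 100000, 5) == outgoing_connections(42, 100000, 5)

The memory for connectivity is traded against the run-time cost of recomputing it at every presynaptic spike, which is the space-time trade-off quantified above.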
Affiliation(s)
- William W Lytton
- Department of Physiology and Pharmacology, Biomedical Engineering, and Neurology, SUNY Downstate, Brooklyn, NY 11203, USA.
15
Voltage-stepping schemes for the simulation of spiking neural networks. J Comput Neurosci 2008; 26:409-23. PMID: 19034641. DOI: 10.1007/s10827-008-0119-1.
Abstract
The numerical simulation of spiking neural networks requires particular attention. On the one hand, time-stepping methods are generic but they are prone to numerical errors and need specific treatments to deal with the discontinuities of integrate-and-fire models. On the other hand, event-driven methods are more precise but they are restricted to a limited class of neuron models. We present here a voltage-stepping scheme that combines the advantages of these two approaches and consists of a discretization of the voltage state-space. The numerical simulation is reduced to a local event-driven method that induces an implicit activity-dependent time discretization (time-steps automatically increase when the neuron is slowly varying). We show analytically that such a scheme leads to a high-order algorithm so that it accurately approximates the neuronal dynamics. The voltage-stepping method is generic and can be used to simulate any kind of neuron models. We illustrate it on nonlinear integrate-and-fire models and show that it outperforms time-stepping schemes of Runge-Kutta type in terms of simulation time and accuracy.
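The idea of discretizing the voltage state space rather than time can be conveyed with a deliberately crude first-order sketch (the paper constructs higher-order local approximations, so this is an illustration of the principle only):

    def voltage_step(v, t, dv, f):
        """Advance by one voltage increment dv: estimate the time needed for the
        membrane potential to change by dv from the local slope f(v) = dV/dt.
        The induced timestep dt = dv/|f(v)| shrinks automatically where the
        dynamics are fast and grows where the neuron is slowly varying."""
        slope = f(v)
        dt = dv / abs(slope)
        return v + (dv if slope > 0 else -dv), t + dt

    # Passive relaxation dV/dt = -(V - E)/tau with E = -65 mV, tau = 10 ms:
    f = lambda v: -(v + 65.0) / 10.0
    v, t = -55.0, 0.0
    for _ in range(5):
        v, t = voltage_step(v, t, 0.5, f)
        print(round(v, 2), round(t, 3))   # timesteps lengthen as the voltage slows down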
16
Hines ML, Markram H, Schürmann F. Fully implicit parallel simulation of single neurons. J Comput Neurosci 2008; 25:439-48. PMID: 18379867. DOI: 10.1007/s10827-008-0087-5.
Abstract
When a multi-compartment neuron is divided into subtrees such that no subtree has more than two connection points to other subtrees, the subtrees can be on different processors and the entire system remains amenable to direct Gaussian elimination with only a modest increase in complexity. Accuracy is the same as with standard Gaussian elimination on a single processor. It is often feasible to divide a 3-D reconstructed neuron model onto a dozen or so processors and experience almost linear speedup. We have also used the method for purposes of load balance in network simulations when some cells are so large that their individual computation time is much longer than the average processor computation time or when there are many more processors than cells. The method is available in the standard distribution of the NEURON simulation program.
17
Brette R, Rudolph M, Carnevale T, Hines M, Beeman D, Bower JM, Diesmann M, Morrison A, Goodman PH, Harris FC, Zirpe M, Natschläger T, Pecevski D, Ermentrout B, Djurfeldt M, Lansner A, Rochel O, Vieville T, Muller E, Davison AP, El Boustani S, Destexhe A. Simulation of networks of spiking neurons: a review of tools and strategies. J Comput Neurosci 2007; 23:349-98. PMID: 17629781. PMCID: PMC2638500. DOI: 10.1007/s10827-007-0038-6.
Abstract
We review different aspects of the simulation of spiking neural networks. We start by reviewing the different types of simulation strategies and algorithms that are currently implemented. We next review the precision of those simulation strategies, in particular in cases where plasticity depends on the exact timing of the spikes. We overview different simulators and simulation environments presently available (restricted to those freely available, open source and documented). For each simulation tool, its advantages and pitfalls are reviewed, with an aim to allow the reader to identify which simulator is appropriate for a given task. Finally, we provide a series of benchmark simulations of different types of networks of spiking neurons, including Hodgkin-Huxley type, integrate-and-fire models, interacting with current-based or conductance-based synapses, using clock-driven or event-driven integration strategies. The same set of models are implemented on the different simulators, and the codes are made available. The ultimate goal of this review is to provide a resource to facilitate identifying the appropriate integration strategy and simulation tool to use for a given modeling problem related to spiking neural networks.
18
Morrison A, Straube S, Plesser HE, Diesmann M. Exact subthreshold integration with continuous spike times in discrete-time neural network simulations. Neural Comput 2007; 19:47-79. PMID: 17134317. DOI: 10.1162/neco.2007.19.1.47.
Abstract
Very large networks of spiking neurons can be simulated efficiently in parallel under the constraint that spike times are bound to an equidistant time grid. Within this scheme, the subthreshold dynamics of a wide class of integrate-and-fire-type neuron models can be integrated exactly from one grid point to the next. However, the loss in accuracy caused by restricting spike times to the grid can have undesirable consequences, which has led to interest in interpolating spike times between the grid points to retrieve an adequate representation of network dynamics. We demonstrate that the exact integration scheme can be combined naturally with off-grid spike events found by interpolation. We show that by exploiting the existence of a minimal synaptic propagation delay, the need for a central event queue is removed, so that the precision of event-driven simulation on the level of single neurons is combined with the efficiency of time-driven global scheduling. Further, for neuron models with linear subthreshold dynamics, even local event queuing can be avoided, resulting in much greater efficiency on the single-neuron level. These ideas are exemplified by two implementations of a widely used neuron model. We present a measure for the efficiency of network simulations in terms of their integration error and show that for a wide range of input spike rates, the novel techniques we present are both more accurate and faster than standard techniques.
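For neuron models whose subthreshold dynamics are linear, the exact integration referred to here is simply the matrix-exponential propagator of the state equations (a standard statement with generic symbols, not notation copied from the paper):

    \dot{y}(t) = A\,y(t) \quad\Longrightarrow\quad y(t+h) = e^{Ah}\,y(t),

where y collects the membrane potential and the synaptic current variables and A is a constant matrix. The propagator e^{Ah} is computed once for the grid spacing h, so advancing a neuron costs one matrix-vector product, and an input arriving off-grid at t + d (with 0 < d < h) can be folded in exactly by splitting the step into e^{A(h-d)} applied after e^{Ad}.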
Affiliation(s)
- Abigail Morrison
- Computational Neurophysics, Institute of Biology III, and Bernstein Center for Computational Neuroscience, Albert-Ludwigs-University, 79104 Freiburg, Germany.
19
Ros E, Carrillo R, Ortigosa EM, Barbour B, Agís R. Event-driven simulation scheme for spiking neural networks using lookup tables to characterize neuronal dynamics. Neural Comput 2007; 18:2959-93. PMID: 17052155. DOI: 10.1162/neco.2006.18.12.2959.
Abstract
Nearly all neuronal information processing and interneuronal communication in the brain involves action potentials, or spikes, which drive the short-term synaptic dynamics of neurons, but also their long-term dynamics, via synaptic plasticity. In many brain structures, action potential activity is considered to be sparse. This sparseness of activity has been exploited to reduce the computational cost of large-scale network simulations, through the development of event-driven simulation schemes. However, existing event-driven simulation schemes use extremely simplified neuronal models. Here, we implement and evaluate critically an event-driven algorithm (ED-LUT) that uses precalculated look-up tables to characterize synaptic and neuronal dynamics. This approach enables the use of more complex (and realistic) neuronal models or data in representing the neurons, while retaining the advantage of high-speed simulation. We demonstrate the method's application for neurons containing exponential synaptic conductances, thereby implementing shunting inhibition, a phenomenon that is critical to cellular computation. We also introduce an improved two-stage event-queue algorithm, which allows the simulations to scale efficiently to highly connected networks with arbitrary propagation delays. Finally, the scheme readily accommodates implementation of synaptic plasticity mechanisms that depend on spike timing, enabling future simulations to explore issues of long-term learning and adaptation in large-scale networks.
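The table-driven update underlying such a scheme replaces on-line numerical integration between events by interpolation in a precomputed characterization of the neuron's dynamics. A toy two-dimensional table illustrates the principle (ED-LUT's actual tables span membrane potential, synaptic conductances, and elapsed time; the passive model below is only an illustration):

    import numpy as np

    # Passive leak dV/dt = -(V - E)/tau; tabulate V(elapsed) for a grid of start voltages.
    E, tau = -65.0, 10.0
    v_grid  = np.linspace(-80.0, -40.0, 81)             # initial-voltage axis (mV)
    dt_grid = np.linspace(0.0, 50.0, 501)               # elapsed-time axis (ms)
    table = E + (v_grid[:, None] - E) * np.exp(-dt_grid[None, :] / tau)

    def state_after(v0, dt):
        """At an incoming event, obtain the current voltage by interpolating the
        table instead of integrating: pick the two nearest start-voltage rows,
        interpolate each along elapsed time, then blend between the rows."""
        i = int(np.clip(np.searchsorted(v_grid, v0) - 1, 0, len(v_grid) - 2))
        w = (v0 - v_grid[i]) / (v_grid[i + 1] - v_grid[i])
        row = (1 - w) * table[i] + w * table[i + 1]
        return float(np.interp(dt, dt_grid, row))

    print(state_after(-50.0, 7.3))    # agrees with the closed form used to build the table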
20
Rudolph M, Destexhe A. Analytical integrate-and-fire neuron models with conductance-based dynamics for event-driven simulation strategies. Neural Comput 2006; 18:2146-210. PMID: 16846390. DOI: 10.1162/neco.2006.18.9.2146.
Abstract
Event-driven simulation strategies were proposed recently to simulate integrate-and-fire (IF) type neuronal models. These strategies can lead to computationally efficient algorithms for simulating large-scale networks of neurons; most important, such approaches are more precise than traditional clock-driven numerical integration approaches because the timing of spikes is treated exactly. The drawback of such event-driven methods is that in order to be efficient, the membrane equations must be solvable analytically, or at least provide simple analytic approximations for the state variables describing the system. This requirement prevents, in general, the use of conductance-based synaptic interactions within the framework of event-driven simulations and, thus, the investigation of network paradigms where synaptic conductances are important. We propose here a number of extensions of the classical leaky IF neuron model involving approximations of the membrane equation with conductance-based synaptic current, which lead to simple analytic expressions for the membrane state, and therefore can be used in the event-driven framework. These conductance-based IF (gIF) models are compared to commonly used models, such as the leaky IF model or biophysical models in which conductances are explicitly integrated. All models are compared with respect to various spiking response properties in the presence of synaptic activity, such as the spontaneous discharge statistics, the temporal precision in resolving synaptic inputs, and gain modulation under in vivo-like synaptic bombardment. Being based on the passive membrane equation with fixed-threshold spike generation, the proposed gIF models are situated in between leaky IF and biophysical models but are much closer to the latter with respect to their dynamic behavior and response characteristics, while still being nearly as computationally efficient as simple IF neuron models. gIF models should therefore provide a useful tool for efficient and precise simulation of large-scale neuronal networks with realistic, conductance-based synaptic interactions.
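The starting point for these gIF models is the passive membrane equation driven by time-varying synaptic conductances (written here in a generic textbook form), which has no convenient closed-form solution and therefore has to be approximated before the membrane state can be updated analytically from one synaptic event to the next:

    C_m \frac{dV}{dt} = -g_L (V - E_L) - g_e(t)\,(V - E_e) - g_i(t)\,(V - E_i),
    \qquad g_{e,i}(t) = \bar g_{e,i}\, e^{-(t - t_s)/\tau_{e,i}},

with a spike emitted and V reset whenever V reaches a fixed threshold. The gIF variants discussed above replace the exact solution of this equation between events with analytic approximations of increasing fidelity, keeping the per-event update cost close to that of a current-based leaky IF neuron.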
Affiliation(s)
- Michelle Rudolph
- Unité de Neuroscience Intégratives et Computationnelles, CNRS, 91198 Gif-sur-Yvette, France.
21
Migliore M, Cannia C, Lytton WW, Markram H, Hines ML. Parallel network simulations with NEURON. J Comput Neurosci 2006; 21:119-29. PMID: 16732488. PMCID: PMC2655137. DOI: 10.1007/s10827-006-7949-5.
Abstract
The NEURON simulation environment has been extended to support parallel network simulations. Each processor integrates the equations for its subnet over an interval equal to the minimum (interprocessor) presynaptic spike generation to postsynaptic spike delivery connection delay. The performance of three published network models with very different spike patterns exhibits superlinear speedup on Beowulf clusters and demonstrates that spike communication overhead is often less than the benefit of an increased fraction of the entire problem fitting into high speed cache. On the EPFL IBM Blue Gene, almost linear speedup was obtained up to 100 processors. Increasing one model from 500 to 40,000 realistic cells exhibited almost linear speedup on 2,000 processors, with an integration time of 9.8 seconds and communication time of 1.3 seconds. The potential for speed-ups of several orders of magnitude makes practical the running of large network simulations that could otherwise not be explored.
Affiliation(s)
- M Migliore
- Institute of Biophysics, National Research Council, via U La Malfa 153, 90146, Palermo, Italy.
22
Abstract
Computational neuroscience relies heavily on the simulation of large networks of neuron models. There are essentially two simulation strategies: (1) using an approximation method (e.g., Runge-Kutta) with spike times binned to the time step and (2) calculating spike times exactly in an event-driven fashion. In large networks, the computation time of the best algorithm for either strategy scales linearly with the number of synapses, but each strategy has its own assets and constraints: approximation methods can be applied to any model but are inexact; exact simulation avoids numerical artifacts but is limited to simple models. Previous work has focused on improving the accuracy of approximation methods. In this article, we extend the range of models that can be simulated exactly to a more realistic model: an integrate-and-fire model with exponential synaptic conductances.
Affiliation(s)
- Romain Brette
- Département d'Informatique, Equipe Odyssée, Ecole Normale Supérieure, 75230 Paris Cedex 05, France.
23
Rangan AV, Cai D. Fast numerical methods for simulating large-scale integrate-and-fire neuronal networks. J Comput Neurosci 2006; 22:81-100. PMID: 16896522. DOI: 10.1007/s10827-006-8526-7.
Abstract
We discuss numerical methods for simulating large-scale, integrate-and-fire (I&F) neuronal networks. Important elements in our numerical methods are (i) a neurophysiologically inspired integrating factor which casts the solution as a numerically tractable integral equation, and allows us to obtain stable and accurate individual neuronal trajectories (i.e., voltage and conductance time-courses) even when the I&F neuronal equations are stiff, such as in strongly fluctuating, high-conductance states; (ii) an iterated process of spike-spike corrections within groups of strongly coupled neurons to account for spike-spike interactions within a single large numerical time-step; and (iii) a clustering procedure of firing events in the network to take advantage of localized architectures, such as spatial scales of strong local interactions, which are often present in large-scale computational models-for example, those of the primary visual cortex. (We note that the spike-spike corrections in our methods are more involved than the correction of single neuron spike-time via a polynomial interpolation as in the modified Runge-Kutta methods commonly used in simulations of I&F neuronal networks.) Our methods can evolve networks with relatively strong local interactions in an asymptotically optimal way such that each neuron fires approximately once in [Formula: see text] operations, where N is the number of neurons in the system. We note that quantifications used in computational modeling are often statistical, since measurements in a real experiment to characterize physiological systems are typically statistical, such as firing rate, interspike interval distributions, and spike-triggered voltage distributions. We emphasize that it takes much less computational effort to resolve statistical properties of certain I&F neuronal networks than to fully resolve trajectories of each and every neuron within the system. For networks operating in realistic dynamical regimes, such as strongly fluctuating, high-conductance states, our methods are designed to achieve statistical accuracy when very large time-steps are used. Moreover, our methods can also achieve trajectory-wise accuracy when small time-steps are used.
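The integrating-factor construction mentioned in point (i) can be written out for a conductance-driven voltage equation as follows (a generic textbook form, not the exact formulation used in the paper). Writing the dynamics as

    \frac{dV}{dt} + g(t)\,V = g(t)\,V_s(t),

with g(t) the total conductance (leak plus synaptic, divided by the capacitance) and V_s(t) the conductance-weighted reversal potential, and multiplying by the integrating factor e^{G(t)} with G(t) = \int_0^t g(s)\,ds, gives the exact expression

    V(t) = e^{-G(t)} \left[ V(0) + \int_0^t e^{G(s)}\, g(s)\, V_s(s)\, ds \right].

Casting the update this way removes the stiffness associated with a large total conductance and leaves a smooth integral that can be approximated accurately even with large time steps.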
Affiliation(s)
- Aaditya V Rangan
- Courant Institute of Mathematical Sciences, New York University, New York, NY 10012, USA.