1. Cudone E, Lower AM, McDougal RA. Reproducibility of biophysical in silico neuron states and spikes from event-based partial histories. PLoS Comput Biol 2023; 19:e1011548. PMID: 37824576; PMCID: PMC10597496; DOI: 10.1371/journal.pcbi.1011548.
Abstract
Biophysically detailed simulations of neuronal activity often rely on solving large systems of differential equations; in some models, these systems have tens of thousands of states per cell. Numerically solving these equations is computationally intensive and requires making assumptions about the initial cell states. Additional realism from incorporating more biological detail is achieved at the cost of more states, more computational resources, and more modeling assumptions. We show that for both a point and a morphologically detailed cell model, the presence and timing of future action potentials is probabilistically well characterized by the relative timings of a moderate number of recent events alone. Knowledge of initial conditions or of the full synaptic input history is not required. While model time constants and similar parameters affect the specifics, we demonstrate that for both individual spikes and sustained cellular activity, the uncertainty in spike response decreases as the number of known input events increases, to the point of approximate determinism. Further, we show that cellular model states are reconstructable from ongoing synaptic events, despite unknown initial conditions. We propose that a strictly event-based modeling framework can represent the complexity of the cellular dynamics of differential-equation models with significantly fewer per-cell state variables, offering a pathway toward using modern data-driven modeling to scale up to larger network models while preserving individual cellular biophysics.
Affiliations:
- Evan Cudone: Program in Computational Biology and Bioinformatics, Yale University, New Haven, Connecticut, United States of America
- Amelia M. Lower: Yale College, Yale University, New Haven, Connecticut, United States of America
- Robert A. McDougal: Program in Computational Biology and Bioinformatics, Yale University; Department of Biostatistics, Yale School of Public Health; Section of Biomedical Informatics and Data Science, Yale School of Medicine; Wu Tsai Institute, Yale University, New Haven, Connecticut, United States of America
2. A novel time-event-driven algorithm for simulating spiking neural networks based on circular array. Neurocomputing 2018. DOI: 10.1016/j.neucom.2018.02.085.
3. Krishnan J, Porta Mana P, Helias M, Diesmann M, Di Napoli E. Perfect Detection of Spikes in the Linear Sub-threshold Dynamics of Point Neurons. Front Neuroinform 2018; 11:75. PMID: 29379430; PMCID: PMC5770835; DOI: 10.3389/fninf.2017.00075.
Abstract
Spiking neuronal networks are usually simulated with one of three main schemes: the classical time-driven and event-driven schemes, and the more recent hybrid scheme. All three schemes evolve the state of a neuron through a series of checkpoints: equally spaced in the first scheme and determined neuron-wise by spike events in the latter two. The time-driven and hybrid schemes determine whether the membrane potential of a neuron crosses a threshold at the end of the time interval between consecutive checkpoints. Threshold crossing can, however, occur within the interval even if this test is negative, so spikes can be missed. The present work offers an alternative, geometric point of view on neuronal dynamics and derives, implements, and benchmarks a method for perfect retrospective spike detection. This method can be applied to neuron models with affine or linear subthreshold dynamics. The idea behind the method is to propagate the threshold with a time-inverted dynamics, testing whether the threshold crosses the neuron state to be evolved, rather than vice versa. Algebraically, this translates into a set of inequalities necessary and sufficient for threshold crossing. This test is slower than the imperfect one, but can be optimized in several ways. Comparison confirms earlier results that the imperfect tests rarely miss spikes (fewer than 1 in 10^8 spikes missed) in biologically relevant settings.
Affiliations:
- Jeyashree Krishnan: Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), and JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany; Aachen Institute for Advanced Study in Computational Engineering Science, RWTH Aachen University, Aachen, Germany; Institute for Advanced Simulation, Jülich Research Centre, Jülich, Germany
- PierGianLuca Porta Mana: Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), and JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany
- Moritz Helias: Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), and JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany; Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Markus Diesmann: Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), and JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany; Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany; Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Edoardo Di Napoli: Aachen Institute for Advanced Study in Computational Engineering Science, RWTH Aachen University, Aachen, Germany; Institute for Advanced Simulation, Jülich Research Centre, Jülich, Germany
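The endpoint-test failure mode described in this abstract is easy to reproduce. The sketch below is not the paper's time-inverted inequality method; it is a much simpler analytic stand-in, assuming a current-based LIF with a single exponentially decaying synaptic current and invented parameters, showing how a trajectory can peak above threshold strictly inside a checkpoint interval while the endpoint test reports no crossing, and how a retrospective check of the interior extremum recovers the spike.

```python
import math

def v(t, v0, i0, tau_m, tau_s):
    # Closed-form LIF voltage for dV/dt = -V/tau_m + i0*exp(-t/tau_s):
    # a particular solution b*exp(-t/tau_s) plus a decaying homogeneous part.
    b = i0 / (1.0 / tau_m - 1.0 / tau_s)
    return (v0 - b) * math.exp(-t / tau_m) + b * math.exp(-t / tau_s)

def crossed(v0, i0, theta, h, tau_m=10.0, tau_s=2.0):
    # Naive endpoint test: only checks the voltage at the end of the interval.
    if v(h, v0, i0, tau_m, tau_s) >= theta:
        return True
    # Retrospective check: the trajectory may peak strictly inside [0, h].
    # Solve dV/dt = 0 analytically for the interior extremum t* and test it.
    b = i0 / (1.0 / tau_m - 1.0 / tau_s)
    a = v0 - b
    if a == 0.0:
        return False          # pure exponential: monotone, endpoint test suffices
    ratio = -(b / tau_s) / (a / tau_m)
    if ratio <= 0.0:
        return False          # no interior extremum exists
    t_star = math.log(ratio) / (1.0 / tau_s - 1.0 / tau_m)
    return 0.0 < t_star < h and v(t_star, v0, i0, tau_m, tau_s) >= theta
```

With v0 = 0, i0 = 1, theta = 1.3 and a long step h = 10, the voltage peaks near 1.34 around t ≈ 4 but ends the step near 0.9, so only the retrospective test reports the crossing.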
4. Albers C, Westkott M, Pawelzik K. Learning of Precise Spike Times with Homeostatic Membrane Potential Dependent Synaptic Plasticity. PLoS One 2016; 11:e0148948. PMID: 26900845; PMCID: PMC4763343; DOI: 10.1371/journal.pone.0148948.
Abstract
Precise spatio-temporal patterns of neuronal action potentials underlie, for example, sensory representations and the control of muscle activity. However, it is not known how the synaptic efficacies in the neuronal networks of the brain adapt such that they can reliably generate spikes at specific points in time. Existing activity-dependent plasticity rules like Spike-Timing-Dependent Plasticity are agnostic to the goal of learning spike times. On the other hand, existing formal and supervised learning algorithms perform a temporally precise comparison of projected activity with the target, but there is no known biologically plausible implementation of this comparison. Here, we propose a simple and local unsupervised synaptic plasticity mechanism that is derived from the requirement of a balanced membrane potential. Since the relevant signal for synaptic change is the postsynaptic voltage rather than spike times, we call the plasticity rule Membrane Potential Dependent Plasticity (MPDP). Combining our plasticity mechanism with spike after-hyperpolarization makes synaptic change sensitive to pre- and postsynaptic spike times, which can reproduce Hebbian spike-timing-dependent plasticity for inhibitory synapses, as found in experiments. In addition, the sensitivity of MPDP to the time course of the voltage when generating a spike allows MPDP to distinguish between weak (spurious) and strong (teacher) spikes, and therefore provides a neuronal basis for the comparison of actual and target activity. For spatio-temporal input spike patterns, our conceptually simple plasticity rule achieves a surprisingly high storage capacity for spike associations. The sensitivity of MPDP to the subthreshold membrane potential during training allows robust memory retrieval after learning, even in the presence of activity corrupted by noise. We propose that MPDP represents a biophysically plausible mechanism to learn temporal target activity patterns.
Affiliations:
- Christian Albers: Institute for Theoretical Physics, University of Bremen, Bremen, Germany
- Maren Westkott: Institute for Theoretical Physics, University of Bremen, Bremen, Germany
- Klaus Pawelzik: Institute for Theoretical Physics, University of Bremen, Bremen, Germany
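As a loose illustration of the abstract's central idea, that the relevant plasticity signal is the postsynaptic voltage rather than spike times, here is a minimal homeostatic, voltage-gated weight update in the spirit of MPDP. The actual rule in the paper differs in form; the target band, learning rate, and presynaptic trace used here are all invented for this sketch.

```python
def mpdp_update(w, pre_trace, v_post,
                v_target_lo=-65.0, v_target_hi=-55.0, eta=1e-3):
    """One homeostatic, voltage-dependent weight update (illustrative only).

    The synapse changes in proportion to its presynaptic activity trace and
    to how far the postsynaptic voltage lies outside a target band: too
    depolarized -> depress, too hyperpolarized -> potentiate, inside the
    band -> no change.  Values are in mV; all constants are invented.
    """
    too_high = max(v_post - v_target_hi, 0.0)  # excess depolarization
    too_low = max(v_target_lo - v_post, 0.0)   # excess hyperpolarization
    return w + eta * pre_trace * (too_low - too_high)
```

A weight of 0.5 with an active presynaptic trace is depressed to 0.495 at v_post = -50 mV, potentiated to 0.503 at -68 mV, and left untouched anywhere inside the band.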
5. Pecevski D, Kappel D, Jonke Z. NEVESIM: event-driven neural simulation framework with a Python interface. Front Neuroinform 2014; 8:70. PMID: 25177291; PMCID: PMC4132371; DOI: 10.3389/fninf.2014.00070.
Abstract
NEVESIM is a software package for event-driven simulation of networks of spiking neurons with a fast simulation core in C++, and a scripting user interface in the Python programming language. It supports simulation of heterogeneous networks with different types of neurons and synapses, and can be easily extended by the user with new neuron and synapse types. To enable heterogeneous networks and extensibility, NEVESIM is designed to decouple the simulation logic of communicating events (spikes) between the neurons at a network level from the implementation of the internal dynamics of individual neurons. In this paper we will present the simulation framework of NEVESIM, its concepts and features, as well as some aspects of the object-oriented design approaches and simulation strategies that were utilized to efficiently implement the concepts and functionalities of the framework. We will also give an overview of the Python user interface, its basic commands and constructs, and also discuss the benefits of integrating NEVESIM with Python. One of the valuable capabilities of the simulator is to simulate exactly and efficiently networks of stochastic spiking neurons from the recently developed theoretical framework of neural sampling. This functionality was implemented as an extension on top of the basic NEVESIM framework. Altogether, the intended purpose of the NEVESIM framework is to provide a basis for further extensions that support simulation of various neural network models incorporating different neuron and synapse types that can potentially also use different simulation strategies.
Affiliations:
- Dejan Pecevski: Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
- David Kappel: Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Zeno Jonke: Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
6. D'Haene M, Hermans M, Schrauwen B. Toward unified hybrid simulation techniques for spiking neural networks. Neural Comput 2014; 26:1055-79. PMID: 24684451; DOI: 10.1162/neco_a_00587.
Abstract
In the field of neural network simulation techniques, the common conception is that spiking neural network simulators can be divided into two categories: time-step-based and event-driven methods. In this letter, we look at state-of-the-art simulation techniques in both categories and show that a clear distinction between the two methods is increasingly difficult to define. In an attempt to improve the weak points of each simulation method, ideas from the alternative method are, sometimes unknowingly, incorporated into the simulation engine. Clearly, the ideal simulation method is a mix of both. We formulate the key properties of such an efficient and generally applicable hybrid approach.
7. Caron LC, D'Haene M, Mailhot F, Schrauwen B, Rouat J. Event management for large scale event-driven digital hardware spiking neural networks. Neural Netw 2013; 45:83-93. PMID: 23522624; DOI: 10.1016/j.neunet.2013.02.005.
Abstract
The interest in brain-like computation has led to the design of a plethora of innovative neuromorphic systems. Individually, spiking neural networks (SNNs), event-driven simulation and digital hardware neuromorphic systems get a lot of attention. Despite the popularity of event-driven SNNs in software, very few digital hardware architectures are found. This is because existing hardware solutions for event management scale badly with the number of events. This paper introduces the structured heap queue, a pipelined digital hardware data structure, and demonstrates its suitability for event management. The structured heap queue scales gracefully with the number of events, allowing the efficient implementation of large scale digital hardware event-driven SNNs. The scaling is linear for memory, logarithmic for logic resources and constant for processing time. The use of the structured heap queue is demonstrated on a field-programmable gate array (FPGA) with an image segmentation experiment and a SNN of 65,536 neurons and 513,184 synapses. Events can be processed at the rate of 1 every 7 clock cycles and a 406×158 pixel image is segmented in 200 ms.
Affiliations:
- Louis-Charles Caron: NECOTIS, Université de Sherbrooke, 2500 boul. de l'Université, Sherbrooke (Québec), J1K 2R1, Canada
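The event management task the structured heap queue solves in hardware, always delivering the earliest pending spike event next, has a straightforward software analogue in a binary heap. This sketch is not the paper's pipelined data structure (which achieves constant processing time per event in hardware; Python's heapq is O(log n)); it only illustrates the ordering contract such a queue must satisfy.

```python
import heapq

class EventQueue:
    """Priority queue delivering spike events in time order, a software
    analogue of the event management performed by the structured heap queue."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # FIFO tie-break for events with identical timestamps

    def push(self, t, target):
        heapq.heappush(self._heap, (t, self._seq, target))
        self._seq += 1

    def pop(self):
        t, _, target = heapq.heappop(self._heap)
        return t, target

    def __len__(self):
        return len(self._heap)

# Events arrive out of order but must be delivered strictly in time order.
q = EventQueue()
for t, nrn in [(3.0, "n2"), (1.0, "n0"), (2.0, "n1")]:
    q.push(t, nrn)
order = [q.pop() for _ in range(len(q))]
```

The neuron identifiers and timestamps above are arbitrary; in an event-driven SNN each popped event would trigger a state update of the target neuron, possibly pushing new events.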
8. Florian RV. The chronotron: a neuron that learns to fire temporally precise spike patterns. PLoS One 2012; 7:e40233. PMID: 22879876; PMCID: PMC3412872; DOI: 10.1371/journal.pone.0040233.
Abstract
In many cases, neurons process information carried by the precise timings of spikes. Here we show how neurons can learn to generate specific temporally precise output spikes in response to input patterns of spikes having precise timings, thus processing and memorizing information that is entirely temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons), one that provides high memory capacity (E-learning), and one that has a higher biological plausibility (I-learning). With I-learning, the neuron learns to fire the target spike trains through synaptic changes that are proportional to the synaptic currents at the timings of real and target output spikes. We study these learning rules in computer simulations where we train integrate-and-fire neurons. Both learning rules allow neurons to fire at the desired timings, with sub-millisecond precision. We show how chronotrons can learn to classify their inputs, by firing identical, temporally precise spike trains for different inputs belonging to the same class. When the input is noisy, the classification also leads to noise reduction. We compute lower bounds for the memory capacity of chronotrons and explore the influence of various parameters on chronotrons' performance. The chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm.
Affiliations:
- Răzvan V Florian: Center for Cognitive and Neural Studies, Romanian Institute of Science and Technology, Cluj-Napoca, Romania
9. Rudolph-Lilith M, Dubois M, Destexhe A. Analytical integrate-and-fire neuron models with conductance-based dynamics and realistic postsynaptic potential time course for event-driven simulation strategies. Neural Comput 2012; 24:1426-61. PMID: 22364504; DOI: 10.1162/neco_a_00278.
Abstract
In a previous paper (Rudolph & Destexhe, 2006), we proposed various models, the gIF neuron models, of analytical integrate-and-fire (IF) neurons with conductance-based (COBA) dynamics for use in event-driven simulations. These models are based on an analytical approximation of the differential equation describing the IF neuron with exponential synaptic conductances and were successfully tested with respect to their response to random and oscillating inputs. Because they are analytical and mathematically simple, the gIF models are best suited for fast event-driven simulation strategies. However, the drawback of such models is that they rely on an unrealistic postsynaptic potential (PSP) time course, consisting of a discontinuous jump followed by a decay governed by the membrane time constant. Here, we address this limitation by deriving an analytical approximation of the COBA IF neuron model with the full PSP time course. The subthreshold and suprathreshold response of this gIF4 model reproduces remarkably well the postsynaptic responses of the numerically solved passive membrane equation subject to conductance noise, while gaining at least two orders of magnitude in computational performance. Although the analytical structure of the gIF4 model is more complex than that of its predecessors, owing to the need to calculate future spike times, a simple and fast algorithmic implementation for use in large-scale neural network simulations is proposed.
Affiliations:
- Michelle Rudolph-Lilith: Unité de Neuroscience Intégratives et Computationnelles, CNRS, 91198 Gif-sur-Yvette, France
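Event-driven schemes like the gIF family share one core move: between synaptic events the neuron state is advanced analytically, with no time grid. The sketch below shows that move in its simplest form, a current-based LIF with instantaneous PSP jumps, which is precisely the unrealistic PSP shape the gIF4 model was designed to improve on. It illustrates the decay-then-jump update pattern, not the gIF4 model itself; all parameters are invented.

```python
import math

class EventDrivenLIF:
    """Minimal event-driven integrate-and-fire neuron: the state is advanced
    analytically only when a synaptic event arrives, never on a time grid."""

    def __init__(self, tau_m=20.0, v_th=1.0):
        self.tau_m, self.v_th = tau_m, v_th
        self.v, self.t_last = 0.0, 0.0

    def receive(self, t, w):
        # Exact exponential decay across the silent interval since the last
        # event, then the instantaneous PSP jump of weight w.
        self.v *= math.exp(-(t - self.t_last) / self.tau_m)
        self.t_last = t
        self.v += w
        if self.v >= self.v_th:
            self.v = 0.0   # reset; a real simulator would also queue the spike
            return True    # spike emitted at time t
        return False

n = EventDrivenLIF()
first = n.receive(0.0, 0.6)   # 0.6 is subthreshold
second = n.receive(1.0, 0.6)  # decayed 0.6 plus a fresh 0.6 crosses threshold
```

Two 0.6 inputs one time unit apart suffice because little decays over that interval (0.6 * exp(-1/20) + 0.6 ≈ 1.17 >= 1.0), whereas the same inputs far apart would not.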
10. Event and Time Driven Hybrid Simulation of Spiking Neural Networks. Advances in Computational Intelligence 2011. DOI: 10.1007/978-3-642-21501-8_69.
11. Hanuschkin A, Kunkel S, Helias M, Morrison A, Diesmann M. A general and efficient method for incorporating precise spike times in globally time-driven simulations. Front Neuroinform 2010; 4:113. PMID: 21031031; PMCID: PMC2965048; DOI: 10.3389/fninf.2010.00113.
Abstract
Traditionally, event-driven simulations have been limited to the very restricted class of neuronal models for which the timing of future spikes can be expressed in closed form. Recently, the class of models that is amenable to event-driven simulation has been extended by the development of techniques to accurately calculate firing times for some integrate-and-fire neuron models that do not enable the prediction of future spikes in closed form. The motivation of this development is the general perception that time-driven simulations are imprecise. Here, we demonstrate that a globally time-driven scheme can calculate firing times that cannot be discriminated from those calculated by an event-driven implementation of the same model; moreover, the time-driven scheme incurs lower computational costs. The key insight is that time-driven methods are based on identifying a threshold crossing in the recent past, which can be implemented by a much simpler algorithm than the techniques for predicting future threshold crossings that are necessary for event-driven approaches. As run time is dominated by the cost of the operations performed at each incoming spike, which includes spike prediction in the case of event-driven simulation and retrospective detection in the case of time-driven simulation, the simple time-driven algorithm outperforms the event-driven approaches. Additionally, our method is generally applicable to all commonly used integrate-and-fire neuronal models; we show that a non-linear model employing a standard adaptive solver can reproduce a reference spike train with a high degree of precision.
Affiliations:
- Alexander Hanuschkin: Functional Neural Circuits Group, Faculty of Biology, Albert-Ludwig University of Freiburg, Freiburg im Breisgau, Germany
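The retrospective, in-the-past threshold detection that this abstract credits for the efficiency of time-driven schemes can be sketched with linear interpolation: step on a fixed grid, and when a step ends above threshold, locate the crossing inside the step from the pre- and post-step voltages. Real implementations are more careful about interpolation order and event delivery; all parameters here are invented.

```python
import math

def simulate(i_ext, dt=0.1, tau_m=10.0, v_th=1.0):
    """Globally time-driven LIF with off-grid spike times.  Each grid step is
    exact for piecewise-constant input; when a step ends above threshold, the
    crossing time is found retrospectively by linear interpolation, so spikes
    carry a sub-step offset instead of being pinned to grid points."""
    decay = math.exp(-dt / tau_m)
    v, t, spikes = 0.0, 0.0, []
    for i in i_ext:
        v_prev, v = v, v * decay + i * (1.0 - decay)
        t += dt
        if v >= v_th:
            frac = (v_th - v_prev) / (v - v_prev)  # fraction of dt to crossing
            spikes.append(t - dt + frac * dt)      # precise, off-grid spike time
            v = 0.0                                # reset after the spike
    return spikes

# Constant drive toward a steady state of 2.0 crosses threshold 1.0 once here.
spikes = simulate([2.0] * 100)
```

With these numbers the analytic crossing of v(t) = 2(1 - e^(-t/10)) through 1.0 is at t = 10 ln 2 ≈ 6.93, and the interpolated spike time lands close to it rather than on the nearest grid point.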
12. D'Haene M, Schrauwen B. Fast and exact simulation methods applied on a broad range of neuron models. Neural Comput 2010; 22:1468-72. PMID: 20141478; DOI: 10.1162/neco.2010.07-09-1070.
Abstract
Recently van Elburg and van Ooyen (2009) published a generalization of the event-based integration scheme for an integrate-and-fire neuron model with exponentially decaying excitatory currents and double exponential inhibitory synaptic currents, introduced by Carnevale and Hines. In the paper, it was shown that the constraints on the synaptic time constants imposed by the Newton-Raphson iteration scheme, can be relaxed. In this note, we show that according to the results published in D'Haene, Schrauwen, Van Campenhout, and Stroobandt (2009), a further generalization is possible, eliminating any constraint on the time constants. We also demonstrate that in fact, a wide range of linear neuron models can be efficiently simulated with this computation scheme, including neuron models mimicking complex neuronal behavior. These results can change the way complex neuronal spiking behavior is modeled: instead of highly nonlinear neuron models with few state variables, it is possible to efficiently simulate linear models with a large number of state variables.
Affiliations:
- Michiel D'Haene: Ghent University, Electronics and Information Systems Department, 9000 Ghent, Belgium