1
Osborne H, de Kamps M. A numerical population density technique for N-dimensional neuron models. Front Neuroinform 2022; 16:883796. [PMID: 35935536] [PMCID: PMC9354936] [DOI: 10.3389/fninf.2022.883796]
Abstract
Population density techniques can be used to simulate the behavior of a population of neurons that adhere to a common underlying neuron model. They have previously been used to analyze models of orientation tuning and decision-making tasks, and they produce a fully deterministic solution to neural simulations that often involve a non-deterministic or noise component. Until now, numerical population density techniques have been limited to one- and two-dimensional models. For the first time, we demonstrate a method to take an N-dimensional underlying neuron model and simulate the behavior of a population. The technique enables so-called graceful degradation of the dynamics, allowing a balance between accuracy and simulation speed while maintaining important behavioral features such as rate curves and bifurcations. It is an extension of the numerical population density technique implemented in the MIIND software framework, which simulates networks of populations of neurons. Here, we describe the extension to N dimensions, simulate populations of leaky integrate-and-fire neurons with excitatory and inhibitory synaptic conductances, and demonstrate the effect of degrading the accuracy on the solution. We also simulate two populations in an E-I configuration to demonstrate the technique's ability to capture complex behaviors of interacting populations. Finally, we simulate a population of four-dimensional Hodgkin-Huxley neurons under the influence of noise. Although the MIIND software has so far been used only for neural modeling, the technique can simulate the behavior of a population of agents adhering to any system of ordinary differential equations under the influence of shot noise. MIIND has been modified to render a visualization of any three dimensions of an N-dimensional population state space, which encourages fast model prototyping and debugging and could prove a useful educational tool for understanding dynamical systems.
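As a concrete illustration of the population density idea in the one-dimensional case that this paper generalizes, the sketch below evolves a binned probability mass over the membrane potential of a leaky integrate-and-fire population under shot noise. All names and parameter values are illustrative and are not taken from MIIND; a faithful implementation would also transport mass with higher-order accuracy.

```python
def simulate_density(n_bins=200, v_rest=0.0, v_th=1.0, tau=0.02,
                     jump=0.03, rate=800.0, dt=1e-4, steps=300):
    """Evolve a binned membrane-potential density for a LIF population."""
    dv = (v_th - v_rest) / n_bins
    mass = [0.0] * n_bins
    mass[0] = 1.0                      # whole population starts at rest
    rates = []
    for _ in range(steps):
        # Deterministic leak: move each bin's mass to the bin its center
        # drifts into under dv/dt = -(v - v_rest)/tau.
        drifted = [0.0] * n_bins
        for i, m in enumerate(mass):
            if m == 0.0:
                continue
            v = v_rest + (i + 0.5) * dv
            v_new = v - dt * (v - v_rest) / tau
            j = min(n_bins - 1, max(0, int((v_new - v_rest) / dv)))
            drifted[j] += m
        # Shot noise: a fraction rate*dt of the population receives one
        # synaptic jump; mass pushed over threshold fires and resets.
        p = rate * dt
        shift = int(round(jump / dv))
        fired = 0.0
        mass = [0.0] * n_bins
        for i, m in enumerate(drifted):
            mass[i] += (1.0 - p) * m
            if i + shift >= n_bins:
                fired += p * m
                mass[0] += p * m       # reset to rest after firing
            else:
                mass[i + shift] += p * m
        rates.append(fired / dt)       # population firing rate (Hz)
    return mass, rates
```

Because the drift and the noise steps each redistribute mass without creating or destroying it, the total probability is conserved, which is a useful sanity check for any density-based solver.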
Affiliation(s)
- Hugh Osborne
- School of Computing, University of Leeds, Leeds, United Kingdom
- Marc de Kamps
- School of Computing, University of Leeds, Leeds, United Kingdom
- Leeds Institute for Data Analytics, University of Leeds, Leeds, United Kingdom
- The Alan Turing Institute, London, United Kingdom
- *Correspondence: Marc de Kamps
2
Valadez-Godínez S, Sossa H, Santiago-Montero R. On the accuracy and computational cost of spiking neuron implementation. Neural Netw 2019; 122:196-217. [PMID: 31689679] [DOI: 10.1016/j.neunet.2019.09.026]
Abstract
For more than a decade, three statements about spiking neuron (SN) implementations have been widely accepted: 1) the Hodgkin-Huxley (HH) model is computationally prohibitive, 2) the Izhikevich (IZH) artificial neuron is as efficient as the Leaky Integrate-and-Fire (LIF) model, and 3) the IZH model is more efficient than the HH model (Izhikevich, 2004). As suggested by Hodgkin and Huxley (1952), their model operates in two modes: by using the α and β rate functions directly (HH model) or by storing them in tables (HHT model) to reduce computational cost. Recently, it has been stated that: 1) the HHT model (HH using tables) is not prohibitive, 2) the IZH model is not efficient, and 3) the HHT and IZH models are comparable in computational cost (Skocik & Long, 2014). This controversy shows that there is no consensus concerning SN simulation capacities. Hence, in this work, we introduce a refined approach, based on multiobjective optimization theory, for describing SN simulation capacities and ultimately choosing optimal simulation parameters. We used normalized metrics to define the capacity levels of accuracy, computational cost, and efficiency; normalized metrics allow comparisons between SNs at the same level or scale. We conducted tests for balanced, lower, and upper boundary conditions under a regular spiking mode with constant and random current stimuli, and found optimal simulation parameters leading to a balance between computational cost and accuracy. Importantly, and in general, we found that 1) the HH model (without tables) is the most accurate, computationally inexpensive, and efficient, 2) the IZH model is the most expensive and inefficient, 3) the LIF and HHT models are the most inaccurate, 4) the HHT model is more expensive and inaccurate than the HH model because of the discretization of the α and β tables, and 5) the HHT model is not comparable in computational cost to the IZH model.
These results refute the theory formulated over a decade ago (Izhikevich, 2004) and go deeper into the statements formulated by Skocik and Long (2014). Our findings imply that the number of dimensions or FLOPS in an SN is a theoretical but not a practical indicator of the true computational cost. The metric we propose for computational cost is more precise than FLOPS and was found to be invariant to computer architecture. Moreover, we found that the firing frequency used in previous works is a necessary but insufficient metric for evaluating simulation accuracy. We also show that our results are consistent with the theory of numerical methods and the theory of SN discontinuity: discontinuous SNs, such as the LIF and IZH models, introduce a considerable error every time a spike is generated. In addition, compared to a constant input current, a random input current increases the computational cost and inaccuracy. We also found that the search for optimal simulation parameters is problem-specific. That matters because most previous works have sought a general and unique optimal simulation; here, we show that such a solution cannot exist because it is a multiobjective optimization problem that depends on several factors. This work sets up a renewed thesis concerning SN simulation that is useful to several related research areas, including the emerging Deep Spiking Neural Networks.
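The cost comparison can be made concrete by stepping both models. Below is a minimal forward-Euler step for each, assuming standard textbook parameter values rather than the specific configurations benchmarked in the paper; timing many such steps, and checking accuracy against a fine-step reference, is the essence of the comparison.

```python
import math

def hh_step(state, I, dt=0.01):
    # One forward-Euler step of the classic Hodgkin-Huxley equations
    # (standard 1952-style parameters; units: mV, ms, uA/cm^2).
    v, m, h, n = state
    # Rate functions, with the usual l'Hopital guards at singular points.
    am = (1.0 if abs(v + 40.0) < 1e-9
          else 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0)))
    bm = 4.0 * math.exp(-(v + 65.0) / 18.0)
    ah = 0.07 * math.exp(-(v + 65.0) / 20.0)
    bh = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
    an = (0.1 if abs(v + 55.0) < 1e-9
          else 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0)))
    bn = 0.125 * math.exp(-(v + 65.0) / 80.0)
    i_ion = (120.0 * m ** 3 * h * (v - 50.0)   # sodium
             + 36.0 * n ** 4 * (v + 77.0)      # potassium
             + 0.3 * (v + 54.387))             # leak
    return (v + dt * (I - i_ion),
            m + dt * (am * (1.0 - m) - bm * m),
            h + dt * (ah * (1.0 - h) - bh * h),
            n + dt * (an * (1.0 - n) - bn * n))

def izh_step(state, I, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    # One forward-Euler step of the Izhikevich model, including its
    # discontinuous reset (the source of the spike-time error discussed).
    v, u = state
    v = v + dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u = u + dt * a * (b * v - u)
    spiked = v >= 30.0
    if spiked:
        v, u = c, u + d
    return (v, u), spiked
```

Note that the HH step is smooth while the IZH step contains a jump; the paper's point is that per-step FLOP counts alone do not settle which loop is cheaper or more accurate in practice.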
Affiliation(s)
- Sergio Valadez-Godínez
- Laboratorio de Robótica y Mecatrónica, Centro de Investigación en Computación, Instituto Politécnico Nacional, Av. Juan de Dios Bátiz, S/N, Col. Nva. Industrial Vallejo, Ciudad de México, México, 07738, Mexico; División de Ingeniería Informática, Instituto Tecnológico Superior de Purísima del Rincón, Gto., México, 36413, Mexico; División de Ingenierías de Educación Superior, Universidad Virtual del Estado de Guanajuato, Gto., México, 36400, Mexico.
- Humberto Sossa
- Laboratorio de Robótica y Mecatrónica, Centro de Investigación en Computación, Instituto Politécnico Nacional, Av. Juan de Dios Bátiz, S/N, Col. Nva. Industrial Vallejo, Ciudad de México, México, 07738, Mexico; Tecnológico de Monterrey, Campus Guadalajara, Av. Gral. Ramón Corona 2514, Zapopan, Jal., México, 45138, Mexico.
- Raúl Santiago-Montero
- División de Estudios de Posgrado e Investigación, Instituto Tecnológico de León, Av. Tecnológico S/N, León, Gto., México, 37290, Mexico.
3
Zhang Y, Xiao Y, Zhou D, Cai D. Spike-Triggered Regression for Synaptic Connectivity Reconstruction in Neuronal Networks. Front Comput Neurosci 2017; 11:101. [PMID: 29209189] [PMCID: PMC5701668] [DOI: 10.3389/fncom.2017.00101]
Abstract
How neurons are connected in the brain to perform computation is a key issue in neuroscience. Recently, the development of calcium imaging and multi-electrode array techniques has greatly enhanced our ability to measure the firing activities of neuronal populations at the single-cell level. Meanwhile, the intracellular recording technique can measure the subthreshold voltage dynamics of a neuron. Our work addresses the issue of how to combine these measurements to reveal the underlying network structure. We propose the spike-triggered regression (STR) method, which employs both the voltage trace and the firing activity of the neuronal population to reconstruct the underlying synaptic connectivity. Our numerical study of conductance-based integrate-and-fire neuronal networks shows that recordings of only 20-100 s are required for an accurate recovery of the network topology as well as the corresponding coupling strengths. Our method can yield an accurate reconstruction of a large neuronal network even in the case of dense connectivity and nearly synchronous dynamics, which many other network reconstruction methods cannot successfully handle. In addition, we point out that, for sparse networks, the STR method can infer the coupling strength between each pair of neurons with high accuracy in the absence of global information about all other neurons.
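The following toy illustrates the spirit of spike-triggered estimation on a single synapse: voltage increments of the postsynaptic neuron at presynaptic spike times, compared with baseline increments, recover the synaptic weight. It is a deliberately simplified caricature (current-based, one synapse, known spike times), not the conductance-based STR method of the paper.

```python
import random

def estimate_weight(w_true=0.12, tau=20.0, dt=1.0, steps=4000, seed=0):
    """Recover a synaptic jump size from voltage increments."""
    rng = random.Random(seed)
    v = 0.0
    at_spike, at_rest = [], []
    for _ in range(steps):
        pre_spiked = rng.random() < 0.05           # presynaptic Poisson train
        dv = -v / tau * dt + rng.gauss(0.0, 0.01)  # leak + recording noise
        if pre_spiked:
            dv += w_true                           # synaptic jump
        (at_spike if pre_spiked else at_rest).append(dv)
        v += dv
    # Spike-triggered estimate: mean increment at presynaptic spike times
    # minus the baseline mean increment.
    return (sum(at_spike) / len(at_spike)
            - sum(at_rest) / len(at_rest))
```

Because the spike indicator is independent of the instantaneous voltage, the leak contributes equally to both averages and the difference isolates the jump, which is the intuition behind regressing voltage increments on presynaptic activity.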
Affiliation(s)
- Yaoyu Zhang
- NYUAD Institute, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Courant Institute of Mathematical Sciences and Center for Neural Sciences, New York University, New York, NY, United States
- Yanyang Xiao
- NYUAD Institute, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Courant Institute of Mathematical Sciences and Center for Neural Sciences, New York University, New York, NY, United States
- Douglas Zhou
- School of Mathematical Sciences, MOE-LSC and Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, China
- David Cai
- NYUAD Institute, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Courant Institute of Mathematical Sciences and Center for Neural Sciences, New York University, New York, NY, United States
- School of Mathematical Sciences, MOE-LSC and Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, China
4
Multirate method for co-simulation of electrical-chemical systems in multiscale modeling. J Comput Neurosci 2017; 42:245-256. [PMID: 28389716] [PMCID: PMC5403853] [DOI: 10.1007/s10827-017-0639-7]
Abstract
Multiscale modeling by means of co-simulation is a powerful tool to address many vital questions in neuroscience. It can, for example, be applied in the study of learning and memory formation in the brain. At the same time, the co-simulation technique makes it possible to take advantage of interoperability between existing tools and multi-physics models as well as distributed computing. However, the theoretical basis for multiscale modeling is not sufficiently understood. There is, for example, a need for efficient and accurate numerical methods for time integration. When the time constants of model components differ by several orders of magnitude, the individual dynamics and mathematical definitions of each component together impose stability, accuracy, and efficiency challenges for the time integrator. Following our numerical investigations in Brocke et al. (Frontiers in Computational Neuroscience, 10, 97, 2016), we present a new multirate algorithm that allows us to handle each component of a large system with a step size appropriate to its time scale. We handle error estimates in a recursive manner, allowing individual components to follow their own discretization time course while keeping the numerical error within acceptable bounds. The method is developed with the ultimate goal of minimizing communication between components, making it especially suitable for co-simulations. Our preliminary results support our confidence that the multirate approach can be used for the class of problems we are interested in. We show that the dynamics of a communication signal, as well as an appropriate choice of the discretization order between system components, may have a significant impact on the accuracy of the coupled simulation. Although the ideas presented in the paper have only been tested on a single model, it is likely that they can be applied to other problems without loss of generality. We believe that this work may significantly contribute to the establishment of a firm theoretical basis and to the development of an efficient computational framework for multiscale modeling and simulations.
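A minimal sketch of the multirate idea, assuming a linear fast-slow pair: each component is stepped with its own step size, and coupling terms are exchanged only at shared macro steps, mimicking co-simulation communication. The recursive error estimation of the paper is omitted, and all parameters are illustrative.

```python
def multirate(t_end=1.0, h_macro=0.01, n_sub=10):
    # Linearly coupled fast/slow pair:
    #   dx_f/dt = -x_f/tau_f + k*x_s
    #   dx_s/dt = -x_s/tau_s + k*x_f
    tau_f, tau_s, k = 0.005, 0.5, 0.3
    x_f, x_s = 1.0, 0.0
    n_macro = int(round(t_end / h_macro))
    for _ in range(n_macro):
        # Slow component: one step per macro interval, fast input frozen
        # at its value from the last communication point.
        x_s_new = x_s + h_macro * (-x_s / tau_s + k * x_f)
        # Fast component: n_sub substeps, slow input likewise frozen.
        h = h_macro / n_sub
        for _ in range(n_sub):
            x_f = x_f + h * (-x_f / tau_f + k * x_s)
        x_s = x_s_new
    return x_f, x_s
```

Freezing the exchanged values between communication points is exactly what introduces the coupling error the paper's recursive error estimates are designed to control; here both variables simply decay, so the sketch can be checked against the expected near-zero steady state.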
5
Stolyarov RM, Barreiro AK, Norris S. An efficient and accurate solver for large, sparse neural networks. BMC Neurosci 2015. [PMCID: PMC4699128] [DOI: 10.1186/1471-2202-16-s1-p179]
6
Zhou D, Zhang Y, Xiao Y, Cai D. Analysis of sampling artifacts on the Granger causality analysis for topology extraction of neuronal dynamics. Front Comput Neurosci 2014; 8:75. [PMID: 25126067] [PMCID: PMC4115622] [DOI: 10.3389/fncom.2014.00075]
Abstract
Granger causality (GC) is a powerful method for causal inference on time series. In general, the GC value is computed using discrete time series sampled from continuous-time processes with a certain sampling interval length τ, i.e., the GC value is a function of τ. Using GC analysis for topology extraction of the simplest integrate-and-fire neuronal network of two neurons, we discuss the behavior of the GC value as a function of τ, which (i) oscillates, often vanishing at certain finite sampling interval lengths, and (ii) vanishes linearly as one uses finer and finer sampling. We show that these sampling effects can occur in both linear and non-linear dynamics: the GC value may vanish in the presence of true causal influence or become non-zero in the absence of causal influence. Without properly taking this issue into account, GC analysis may produce unreliable conclusions about causal influence when applied to empirical data. These sampling artifacts greatly complicate the reliability of causal inference using GC analysis in general, and the validity of topology reconstruction for networks in particular. We use idealized linear models to illustrate possible mechanisms underlying these phenomena and to gain insight into the general spectral structures that give rise to these sampling effects. Finally, we present an approach to circumvent these sampling artifacts and obtain reliable GC values.
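A discrete-time toy showing how a GC value is computed from restricted versus full regressions, and that the value obtained depends on the subsampling interval τ. The process here is a bivariate AR(1) with arbitrary coefficients, not the integrate-and-fire dynamics studied in the paper.

```python
import math
import random

def var_data(n=6000, seed=1):
    # Bivariate AR(1): y drives x with lag-1 coupling 0.4; x does not drive y.
    rng = random.Random(seed)
    x = y = 0.0
    xs, ys = [], []
    for _ in range(n):
        y_new = 0.8 * y + rng.gauss(0.0, 1.0)
        x_new = 0.9 * x + 0.4 * y + rng.gauss(0.0, 1.0)
        x, y = x_new, y_new
        xs.append(x)
        ys.append(y)
    return xs, ys

def gc_y_to_x(xs, ys, tau):
    # Order-1 Granger causality of y -> x on data subsampled every tau steps.
    xs, ys = xs[::tau], ys[::tau]
    x1, x0, y0 = xs[1:], xs[:-1], ys[:-1]
    n = len(x1)
    # Restricted model: x_t = a*x_{t-1} + e
    a = sum(p * q for p, q in zip(x1, x0)) / sum(q * q for q in x0)
    var_r = sum((p - a * q) ** 2 for p, q in zip(x1, x0)) / n
    # Full model: x_t = b*x_{t-1} + c*y_{t-1} + e (2x2 normal equations)
    sxx = sum(q * q for q in x0)
    syy = sum(r * r for r in y0)
    sxy = sum(q * r for q, r in zip(x0, y0))
    bx = sum(p * q for p, q in zip(x1, x0))
    by = sum(p * r for p, r in zip(x1, y0))
    det = sxx * syy - sxy * sxy
    b = (bx * syy - by * sxy) / det
    c = (by * sxx - bx * sxy) / det
    var_f = sum((p - b * q - c * r) ** 2
                for p, q, r in zip(x1, x0, y0)) / n
    return math.log(var_r / var_f)
```

Computing `gc_y_to_x(xs, ys, tau)` for several values of `tau` makes the abstract's central point tangible: the same data yield different GC values at different sampling intervals, so τ must be treated as part of the analysis.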
Affiliation(s)
- Douglas Zhou
- Department of Mathematics, MOE-LSC, and Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, China
- Yaoyu Zhang
- Department of Mathematics, MOE-LSC, and Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, China
- Yanyang Xiao
- Department of Mathematics, MOE-LSC, and Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, China
- David Cai
- Department of Mathematics, MOE-LSC, and Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, China
- Courant Institute of Mathematical Sciences and Center for Neural Science, New York University, New York, NY, USA
- NYUAD Institute, New York University Abu Dhabi, Abu Dhabi, UAE
7
D'Haene M, Hermans M, Schrauwen B. Toward unified hybrid simulation techniques for spiking neural networks. Neural Comput 2014; 26:1055-79. [PMID: 24684451] [DOI: 10.1162/neco_a_00587]
Abstract
In the field of neural network simulation techniques, the common conception is that spiking neural network simulators can be divided into two categories: time-step-based and event-driven methods. In this letter, we look at state-of-the-art simulation techniques in both categories and show that a clear distinction between the two is increasingly difficult to define. In attempts to improve the weak points of each simulation method, ideas from the alternative method are, sometimes unknowingly, incorporated into the simulation engine. Clearly, the ideal simulation method is a mix of both. We formulate the key properties of such an efficient and generally applicable hybrid approach.
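A minimal sketch of such a hybrid loop for one leaky integrate-and-fire neuron: the outer loop is time-stepped, while spike deliveries sit in an event queue and are applied at their exact timestamps inside each step, so spike times are not forced onto the grid. The model and all parameters are illustrative.

```python
import heapq
import math

def run_hybrid(deliveries, t_end=0.1, dt=1e-3, tau=0.02, w=0.2, v_th=1.0):
    """Time-stepped outer loop with exact in-step event handling (LIF)."""
    events = list(deliveries)        # presynaptic arrival times (seconds)
    heapq.heapify(events)
    v, t, spikes = 0.0, 0.0, []
    while t < t_end - 1e-12:
        t_next = t + dt
        # Event-driven half: integrate the membrane exactly (exponential
        # leak) up to each event inside this step, then apply the jump.
        while events and events[0] <= t_next:
            t_ev = heapq.heappop(events)
            v *= math.exp(-(t_ev - t) / tau)
            v += w
            if v >= v_th:
                spikes.append(t_ev)  # spike time is exact, not grid-aligned
                v = 0.0
            t = t_ev
        # Time-stepping half: advance the remainder of the fixed step.
        v *= math.exp(-(t_next - t) / tau)
        t = t_next
    return spikes
```

Because the leak is integrated exactly between events, the grid spacing `dt` affects only when events are processed, not the accuracy of the membrane trajectory, which is the kind of blending of the two paradigms the letter describes.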
8
Zhou D, Xiao Y, Zhang Y, Xu Z, Cai D. Granger causality network reconstruction of conductance-based integrate-and-fire neuronal systems. PLoS One 2014; 9:e87636. [PMID: 24586285] [PMCID: PMC3929548] [DOI: 10.1371/journal.pone.0087636]
Abstract
Reconstruction of anatomical connectivity from the measured dynamical activities of coupled neurons is one of the fundamental issues in understanding the structure-function relationship of neuronal circuitry. Many approaches have been developed to address this issue based on either electrical or metabolic data observed in experiment. Granger causality (GC) analysis remains one of the major approaches for exploring the dynamical causal connectivity among individual neurons or neuronal populations. However, it is yet to be clarified how such causal connectivity, i.e., the GC connectivity, maps to the underlying anatomical connectivity in neuronal networks. We perform GC analysis on conductance-based integrate-and-fire (I&F) neuronal networks to obtain their causal connectivity. Through numerical experiments, we find that the underlying synaptic connectivity among individual neurons or subnetworks can be successfully reconstructed from the GC connectivity constructed from voltage time series. Furthermore, this reconstruction is insensitive to dynamical regimes and can be achieved without perturbing the system or prior knowledge of neuronal model parameters. Surprisingly, the synaptic connectivity can even be reconstructed by merely knowing the system's raster, i.e., the spike timing of the neurons. Using spike-triggered correlation techniques, we establish a direct mapping between the causal connectivity and the synaptic connectivity for conductance-based I&F neuronal networks, and show that the GC is quadratically related to the coupling strength. The theoretical approach we develop here may provide a framework for examining the validity of GC analysis in other settings.
Affiliation(s)
- Douglas Zhou
- Department of Mathematics, MOE-LSC, and Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, China
- Yanyang Xiao
- Department of Mathematics, MOE-LSC, and Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, China
- Yaoyu Zhang
- Department of Mathematics, MOE-LSC, and Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, China
- Zhiqin Xu
- Department of Mathematics, MOE-LSC, and Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, China
- David Cai
- Department of Mathematics, MOE-LSC, and Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, China
- Courant Institute of Mathematical Sciences and Center for Neural Science, New York University, New York, New York, United States of America
- NYUAD Institute, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
9
Dynamics of the exponential integrate-and-fire model with slow currents and adaptation. J Comput Neurosci 2014; 37:161-80. [PMID: 24443127] [PMCID: PMC4082791] [DOI: 10.1007/s10827-013-0494-0]
Abstract
In order to properly capture spike-frequency adaptation with a simplified point-neuron model, we study approximations of Hodgkin-Huxley (HH) models including slow currents by exponential integrate-and-fire (EIF) models that incorporate the same types of currents. We optimize the parameters of the EIF models under an external drive consisting of AMPA-type conductance pulses, using the current-voltage curves and the van Rossum metric to best capture the subthreshold membrane potential, the firing rate, and the jump size of the slow current at the neuron's spike times. Our numerical simulations demonstrate that, in addition to these quantities, the approximate EIF-type models faithfully reproduce the bifurcation properties of HH neurons with slow currents, including spike-frequency adaptation, phase-response curves, critical exponents at the transition between a finite and an infinite number of spikes under increasing constant external drive, and bifurcation diagrams of interspike intervals in time-periodically forced models. The dynamics of networks of HH neurons with slow currents can also be approximated by corresponding EIF-type networks, with the approximation being at least statistically accurate over a broad range of Poisson rates of the external drive. For a form of external drive resembling realistic, AMPA-like synaptic conductance responses to incoming action potentials, the EIF model affords great savings of computation time compared with the corresponding HH-type model. Our work shows that the EIF model with additional slow currents is well suited for use in large-scale point-neuron models in which spike-frequency adaptation is important.
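A minimal forward-Euler sketch of an EIF neuron with one slow adaptation current, with illustrative parameters rather than the optimized values from the paper; under constant drive the interspike intervals lengthen as the adaptation variable accumulates, which is the spike-frequency adaptation being modeled.

```python
import math

def eif_adapt(I=20.0, t_end=500.0, dt=0.01):
    # Units: mV and ms. EL rest, VT soft threshold, DT slope factor.
    EL, VT, DT, tau_m = -65.0, -50.0, 2.0, 10.0
    tau_w, b = 100.0, 1.0          # adaptation time constant and spike jump
    v, w, spikes = EL, 0.0, []
    n = int(round(t_end / dt))
    for k in range(n):
        t = k * dt
        # EIF voltage equation with a subtractive adaptation current w.
        v += dt * (-(v - EL) + DT * math.exp((v - VT) / DT) - w + I) / tau_m
        w += dt * (-w / tau_w)
        if v >= 0.0:               # numerical cutoff for the spike upswing
            spikes.append(t)
            v = EL                 # voltage reset
            w += b                 # adaptation increments at each spike
    return spikes
```

Plotting the sequence of interspike intervals from `eif_adapt()` shows them growing from the first interval toward a plateau, the hallmark adaptation behavior the EIF-with-slow-current approximation is designed to reproduce.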
10
Distribution of correlated spiking events in a population-based approach for Integrate-and-Fire networks. J Comput Neurosci 2013; 36:279-95. [PMID: 23851661] [DOI: 10.1007/s10827-013-0472-6]
Abstract
Randomly connected populations of spiking neurons display a rich variety of dynamics. However, much of the current modeling and theoretical work has focused on two dynamical extremes: on one hand homogeneous dynamics characterized by weak correlations between neurons, and on the other hand total synchrony characterized by large populations firing in unison. In this paper we address the conceptual issue of how to mathematically characterize the partially synchronous "multiple firing events" (MFEs) which manifest in between these two dynamical extremes. We further develop a geometric method for obtaining the distribution of magnitudes of these MFEs by recasting the cascading firing event process as a first-passage time problem, and deriving an analytical approximation of the first passage time density valid for large neuron populations. Thus, we establish a direct link between the voltage distributions of excitatory and inhibitory neurons and the number of neurons firing in an MFE that can be easily integrated into population-based computational methods, thereby bridging the gap between homogeneous firing regimes and total synchrony.
11
Madureira AL, Madureira DQ, Pinheiro PO. A multiscale numerical method for the heterogeneous cable equation. Neurocomputing 2012. [DOI: 10.1016/j.neucom.2011.08.007]
12
Coarse-grained event tree analysis for quantifying Hodgkin-Huxley neuronal network dynamics. J Comput Neurosci 2011; 32:55-72. [PMID: 21597895] [DOI: 10.1007/s10827-011-0339-7]
Abstract
We present an event tree analysis for studying the dynamics of Hodgkin-Huxley (HH) neuronal networks. Our study relies on a coarse-grained projection onto event trees, and onto the event chains that comprise these trees, using a statistical collection of spatial-temporal sequences of relevant physiological observables (such as spiking sequences of multiple neurons). This projection retains information about network dynamics covering multiple features, swiftly and robustly. We demonstrate that, even for small differences in inputs, some dynamical regimes of HH networks contain sufficiently high-order statistics, reflected in the event chains within the event tree analysis, that the analysis can discriminate these small input differences. Moreover, we use event trees to analyze results computed with an efficient library-based numerical method proposed in our previous work, in which a pre-computed high-resolution data library of typical neuronal trajectories during the interval of an action potential (spike) allows us to avoid resolving the spikes in detail. In this way, we can evolve the HH networks using time steps one order of magnitude larger than the typical time steps required to resolve the trajectories without the library, while achieving comparable statistical accuracy in terms of the average firing rate and the power spectra of voltage traces. Our simulation results show that the library method is efficient in the sense that, even with much larger time steps, it produces firing events whose high-order statistical structure is similar to that obtained with a regular HH solver; we use our event tree analysis to demonstrate these statistical similarities.
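The coarse-grained projection can be illustrated by reducing a spike raster to chains of "which neurons fired in each time bin" and counting chains of length k. The binning and chain length here are illustrative choices, and empty bins are simply skipped in this sketch.

```python
from collections import Counter

def event_chains(raster, bin_width, k=2):
    """Count length-k chains of per-bin firing sets in a spike raster."""
    # raster: iterable of (time, neuron_id) spike pairs.
    bins = {}
    for t, nid in raster:
        bins.setdefault(int(t / bin_width), set()).add(nid)
    # Event chain: the ordered sequence of firing sets, one per occupied bin
    # (a fuller implementation would also keep the empty bins).
    ordered = [frozenset(bins[b]) for b in sorted(bins)]
    return Counter(tuple(ordered[i:i + k])
                   for i in range(len(ordered) - k + 1))
```

Comparing the resulting chain histograms from two simulations (e.g., a fine-step solver versus the library method) is the kind of higher-order statistical comparison the abstract describes.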
13
Kaabi MG, Tonnelier A, Martinez D. On the Performance of Voltage Stepping for the Simulation of Adaptive, Nonlinear Integrate-and-Fire Neuronal Networks. Neural Comput 2011; 23:1187-204. [DOI: 10.1162/neco_a_00112]
Abstract
In traditional event-driven strategies, spike timings are given analytically or calculated with arbitrary precision (up to machine precision). Exact computation is possible only for simplified neuron models, mainly the leaky integrate-and-fire model. In a recent paper, Zheng, Tonnelier, and Martinez (2009) introduced an approximate event-driven strategy, named voltage stepping, that allows the generic simulation of nonlinear spiking neurons. Promising results were achieved in the simulation of single quadratic integrate-and-fire neurons. Here, we assess the performance of voltage stepping in network simulations by considering more complex neurons (quadratic integrate-and-fire neurons with adaptation) coupled through multiple synapses. To handle the discrete nature of synaptic interactions, we recast voltage stepping in a general framework, the discrete event system specification. The efficiency of the method is assessed through simulations and comparisons with a modified time-stepping scheme of the Runge-Kutta type. We demonstrate numerically that the original order of voltage stepping is preserved when simulating connected spiking neurons, independent of the network activity and connectivity.
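The core of voltage stepping can be sketched on a quadratic integrate-and-fire neuron whose right-hand side stays positive: the voltage advances by a fixed increment dv and the elapsed time is dt = dv / f(v), so temporal resolution sharpens automatically during the spike upswing. The full method additionally handles non-monotone voltages and synaptic events; parameters here are illustrative.

```python
import math

def qif_spike_time_vstep(I=1.0, v0=-1.0, v_spike=10.0, dv=0.01):
    # dv/dt = v^2 + I with I > 0, so f(v) > 0 and v rises monotonically;
    # each fixed voltage increment dv costs dt = dv / f(v).
    t, v = 0.0, v0
    while v < v_spike:
        t += dv / (v * v + I)
        v += dv
    return t

def qif_spike_time_exact(I=1.0, v0=-1.0, v_spike=10.0):
    # Closed form: the integral of dv/(v^2 + I) is atan(v/sqrt(I))/sqrt(I).
    s = math.sqrt(I)
    return (math.atan(v_spike / s) - math.atan(v0 / s)) / s
```

Comparing the stepped time against the closed form shows first-order convergence in dv, mirroring the order-preservation result the paper establishes in the network setting.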
Affiliation(s)
- Dominique Martinez
- Unité Mixte de Recherche 7503, LORIA, CNRS, 54506 Vandoeuvre-lès-Nancy, France, and Unité Mixte de Recherche 1272, Physiologie de l'Insecte Signalisation et Communication, Institut National de la Recherche Agronomique, 78026 Versailles, France
14
Sun Y, Zhou D, Rangan AV, Cai D. Pseudo-Lyapunov exponents and predictability of Hodgkin-Huxley neuronal network dynamics. J Comput Neurosci 2009; 28:247-66. [PMID: 20020192] [DOI: 10.1007/s10827-009-0202-2]
15
Spectrum of Lyapunov exponents of non-smooth dynamical systems of integrate-and-fire type. J Comput Neurosci 2009; 28:229-45. [DOI: 10.1007/s10827-009-0201-3]
16
Rangan AV. Diagrammatic expansion of pulse-coupled network dynamics in terms of subnetworks. Phys Rev E 2009; 80:036101. [PMID: 19905174] [DOI: 10.1103/physreve.80.036101]
Abstract
We introduce a framework wherein various measurements of a pulse-coupled network's stationary dynamics can be expanded in terms of the network's connectivity. Such measurements include the occurrence rate of pulses (e.g., firing rates within a neuronal network) as well as higher-order correlations in activity between various nodes in the network. The various terms in this expansion can be interpreted as diagrams corresponding to subnetworks of the original network, which span both space (in terms of the network's graph) as well as time (in the sense of causality).
Affiliation(s)
- Aaditya V Rangan
- Courant Institute of Mathematical Sciences, 251 Mercer Street, New York, New York 10012, USA
17
Zhou D, Rangan AV, Sun Y, Cai D. Network-induced chaos in integrate-and-fire neuronal ensembles. Phys Rev E 2009; 80:031918. [PMID: 19905157] [DOI: 10.1103/physreve.80.031918]
Abstract
It has been shown that a single standard linear integrate-and-fire (IF) neuron under a general time-dependent stimulus cannot possess chaotic dynamics despite the firing-reset discontinuity. Here we address the issue of whether conductance-based, pulse-coupled network interactions can induce chaos in an IF neuronal ensemble. Using numerical methods, we demonstrate that all-to-all, homogeneously pulse-coupled IF neuronal networks can indeed give rise to chaotic dynamics under an external periodic current drive. We also provide a precise characterization of the largest Lyapunov exponent for these high-dimensional nonsmooth dynamical systems. In addition, we present a stable and accurate numerical algorithm for evaluating the largest Lyapunov exponent, which can overcome difficulties encountered by traditional methods for nonsmooth dynamical systems with degeneracy induced by, e.g., the refractoriness of neurons.
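For a smooth system, the largest Lyapunov exponent is classically estimated by averaging the log derivative along an orbit; the sketch below does this for the logistic map, whose exponent is known analytically to be ln 2. The paper's contribution is precisely a stable analogue of such estimates for nonsmooth IF network dynamics, which this sketch does not attempt.

```python
import math

def largest_lyapunov(n=20000, burn=100, x0=0.3):
    """Tangent-space Lyapunov estimate for the logistic map x -> 4x(1-x)."""
    x = x0
    for _ in range(burn):           # discard the transient
        x = 4.0 * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        # |f'(x)| = |4(1 - 2x)|; the tiny offset guards log(0) at x = 1/2.
        acc += math.log(abs(4.0 * (1.0 - 2.0 * x)) + 1e-300)
        x = 4.0 * x * (1.0 - x)
    return acc / n
```

The estimate converges to ln 2 ≈ 0.693; the firing-reset discontinuity of IF networks breaks exactly the smooth-derivative assumption this method relies on, which is why a specialized algorithm is needed there.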
Affiliation(s)
- Douglas Zhou
- Courant Institute of Mathematical Sciences, New York University, New York, New York 10012, USA.
18
Library-based numerical reduction of the Hodgkin-Huxley neuron for network simulation. J Comput Neurosci 2009; 27:369-90. [PMID: 19401809 DOI: 10.1007/s10827-009-0151-9] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2008] [Revised: 02/13/2009] [Accepted: 03/18/2009] [Indexed: 10/20/2022]
Abstract
We present an efficient library-based numerical method for simulating Hodgkin-Huxley (HH) neuronal networks. The key components of our numerical method are (i) a pre-computed high-resolution data library containing typical neuronal trajectories (i.e., the time courses of the membrane potential and gating variables) during the interval of an action potential (spike), which allows us to avoid resolving the spikes in detail and to use large numerical time steps for evolving the HH neuron equations; and (ii) an algorithm of spike-spike corrections within groups of strongly coupled neurons to account for spike-spike interactions within a single large time step. Using the library method, we can evolve HH networks with time steps one order of magnitude larger than the typical time steps needed to resolve the trajectories without the library, while achieving comparable resolution in statistical quantifications of the network activity, such as the average firing rate, interspike interval distribution, and power spectra of voltage traces. Moreover, the large time steps permitted by the library method can exceed the stability limits of standard methods (such as Runge-Kutta (RK) methods) for the original dynamics. We compare our library-based method with RK methods and find that it captures very well the phase-locked, synchronous, and chaotic dynamics of HH neuronal networks. It is important to point out that, in essence, our library-based HH neuron solver can be viewed as a numerical reduction of the HH neuron to an integrate-and-fire (I&F) neuronal representation that does not sacrifice the gating dynamics (as is normally done in the analytical reduction to an I&F neuron).
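The core library idea can be sketched in a few lines: tabulate one stereotyped high-resolution spike trajectory offline, then during a network run replace detailed integration of a spiking neuron with interpolation into that table for the duration of the action potential. The Gaussian stand-in waveform, the 3 ms spike window, and both time steps below are illustrative assumptions; the actual method tabulates trajectories of the full HH membrane potential and gating variables.

```python
# Hedged sketch of the library lookup; all constants are illustrative.
import numpy as np

SPIKE_MS = 3.0     # assumed duration of the stereotyped spike window
DT_FINE = 1e-3     # library resolution (ms)
DT_COARSE = 0.1    # network time step (ms), much larger than DT_FINE

def build_library():
    """Tabulate a stereotyped spike waveform once at fine resolution.
    (In the real method this comes from integrating the HH equations.)"""
    ts = np.arange(0.0, SPIKE_MS, DT_FINE)
    # Illustrative stand-in: fast depolarization then return to rest (mV).
    v = -65.0 + 100.0 * np.exp(-((ts - 0.5) / 0.4) ** 2)
    return ts, v

LIB_T, LIB_V = build_library()

def lookup_voltage(time_since_threshold):
    """During a spike, replace integration by interpolation into the library,
    so the network can keep marching with the coarse time step DT_COARSE."""
    return np.interp(time_since_threshold, LIB_T, LIB_V)
```

Subthreshold dynamics are still integrated with the coarse step; only the brief, stereotyped spike interval is served from the table.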
19
Voltage-stepping schemes for the simulation of spiking neural networks. J Comput Neurosci 2008; 26:409-23. [PMID: 19034641 DOI: 10.1007/s10827-008-0119-1] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2008] [Revised: 10/14/2008] [Accepted: 10/20/2008] [Indexed: 10/21/2022]
Abstract
The numerical simulation of spiking neural networks requires particular care. On the one hand, time-stepping methods are generic but prone to numerical errors and need specific treatments to deal with the discontinuities of integrate-and-fire models. On the other hand, event-driven methods are more precise but restricted to a limited class of neuron models. We present here a voltage-stepping scheme that combines the advantages of these two approaches and consists of a discretization of the voltage state space. The numerical simulation reduces to a local event-driven method that induces an implicit, activity-dependent time discretization (time steps automatically increase when the neuron is varying slowly). We show analytically that such a scheme leads to a high-order algorithm, so it accurately approximates the neuronal dynamics. The voltage-stepping method is generic and can be used to simulate any kind of neuron model. We illustrate it on nonlinear integrate-and-fire models and show that it outperforms time-stepping schemes of Runge-Kutta type in terms of simulation time and accuracy.
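At first order, voltage stepping amounts to swapping the roles of the variables: fix a voltage increment dv and advance time by however long the neuron needs to cross it, so time steps shrink automatically where the dynamics are fast. A minimal sketch for a one-dimensional model dv/dt = f(v), assuming f stays positive up to threshold (the paper develops higher-order versions of this idea):

```python
# First-order voltage-stepping sketch; illustrative only.
def voltage_step(f, v0, v_thresh, dv=0.01, t_max=100.0):
    """March the *voltage* in fixed increments dv and accumulate the
    time needed to cross each increment. Returns (t_spike, trajectory);
    t_spike is None if a fixed point is hit or t_max is exceeded."""
    t, v = 0.0, v0
    traj = [(t, v)]
    while v < v_thresh and t < t_max:
        rate = f(v)
        if rate <= 0.0:        # fixed point reached before threshold
            return None, traj
        t += dv / rate         # local, activity-dependent time step
        v += dv
        traj.append((t, v))
    return (t if v >= v_thresh else None), traj

# Example: quadratic integrate-and-fire, dv/dt = v**2 + I with I = 1,
# whose exact spike time from v=0 to v=5 is arctan(5) ~ 1.373.
t_spike, _ = voltage_step(lambda v: v * v + 1.0, v0=0.0, v_thresh=5.0)
```

Note how the time step dv/f(v) contracts automatically during the rapid upstroke near threshold, which is the activity-dependent discretization the abstract describes.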
20
Quantifying neuronal network dynamics through coarse-grained event trees. Proc Natl Acad Sci U S A 2008; 105:10990-5. [PMID: 18667703 DOI: 10.1073/pnas.0804303105] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Animals process information about many stimulus features simultaneously, swiftly (within a few hundred milliseconds), and robustly (even when individual neurons do not themselves respond reliably). When the brain carries, codes, and certainly when it decodes information, it must do so through some coarse-grained projection mechanism. How can a projection retain information about network dynamics that covers multiple features, swiftly and robustly? Here, by a coarse-grained projection to event trees and to the event chains that comprise these trees, we propose a method of characterizing the dynamic information of neuronal networks using a statistical collection of spatial-temporal sequences of relevant physiological observables (such as spiking sequences across multiple neurons). We demonstrate, through idealized point-neuron simulations in small networks, that this event-tree analysis can reveal, with high reliability, information about multiple stimulus features within short, realistic observation times. Then, with a large-scale realistic computational model of V1, we show that coarse-grained event trees contain sufficient information, again over short observation times, for fine discrimination of orientation, with results consistent with recent experimental observations.
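One minimal way to read the event-chain construction: group spikes that fall within a short gap of one another into an event, and tally the ordered tuple of neuron identities in each event. The grouping rule and the gap parameter below are illustrative assumptions, not the paper's exact coarse-graining.

```python
# Hedged sketch of extracting event chains from a spike raster.
from collections import Counter

def event_chains(spikes, gap=5.0):
    """spikes: iterable of (time, neuron_id), any order.
    Consecutive spikes separated by more than `gap` start a new event;
    chains of length > 1 are tallied by their ordered neuron-id tuple."""
    spikes = sorted(spikes)
    chains, current, last_t = Counter(), [], None
    for t, nid in spikes:
        if last_t is not None and t - last_t > gap:
            if len(current) > 1:
                chains[tuple(current)] += 1
            current = []
        current.append(nid)
        last_t = t
    if len(current) > 1:
        chains[tuple(current)] += 1
    return chains

raster = [(0.0, 2), (1.5, 0), (2.0, 1), (20.0, 2), (21.0, 0), (50.0, 3)]
counts = event_chains(raster)   # {(2, 0, 1): 1, (2, 0): 1}
```

The resulting statistics over chains (and the trees they assemble into) are the coarse-grained observable the abstract proposes for stimulus discrimination.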
21
Rangan AV, Kovacic G, Cai D. Kinetic theory for neuronal networks with fast and slow excitatory conductances driven by the same spike train. PHYSICAL REVIEW. E, STATISTICAL, NONLINEAR, AND SOFT MATTER PHYSICS 2008; 77:041915. [PMID: 18517664 DOI: 10.1103/physreve.77.041915] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/23/2007] [Revised: 12/29/2007] [Indexed: 05/26/2023]
Abstract
We present a kinetic theory for all-to-all coupled networks of identical, linear, integrate-and-fire, excitatory point neurons in which a fast and a slow excitatory conductance are driven by the same spike train in the presence of synaptic failure. The maximal-entropy principle guides us in deriving a set of three (1+1) -dimensional kinetic moment equations from a Boltzmann-like equation describing the evolution of the one-neuron probability density function. We explain the emergence of correlation terms in the kinetic moment and Boltzmann-like equations as a consequence of simultaneous activation of both the fast and slow excitatory conductances and furnish numerical evidence for their importance in correctly describing the coarse-grained dynamics of the underlying neuronal network.
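The correlations discussed above arise because a single presynaptic train (thinned by synaptic failure) drives both conductances at once. A hedged Monte Carlo sketch of that setting follows, with illustrative time constants, rates, failure probability, and a normalized reversal potential; it is a stand-in for, not a solution of, the paper's kinetic equations.

```python
# Monte Carlo ensemble of LIF neurons whose fast and slow excitatory
# conductances are driven by the SAME (per-neuron) Poisson train.
import numpy as np

def simulate(n=2000, dt=0.05, t_max=200.0, rate=0.5,
             tau_f=1.0, tau_s=10.0, p_fail=0.3,
             a_f=0.05, a_s=0.02, seed=0):
    rng = np.random.default_rng(seed)
    v = np.zeros(n); gf = np.zeros(n); gs = np.zeros(n)
    for _ in range(int(t_max / dt)):
        arrivals = rng.random(n) < rate * dt          # shared input train
        ok = arrivals & (rng.random(n) >= p_fail)     # synaptic failure
        gf += a_f * ok                                # same events drive
        gs += a_s * ok                                # BOTH conductances
        gf -= dt * gf / tau_f                         # fast decay
        gs -= dt * gs / tau_s                         # slow decay
        v += dt * (-v + (gf + gs) * (4.67 - v))       # excitatory reversal
        v[v >= 1.0] = 0.0                             # threshold/reset
    return v, gf, gs

v, gf, gs = simulate()
corr = np.corrcoef(gf, gs)[0, 1]   # nonzero: shared drive correlates them
```

The nonzero correlation between gf and gs across the ensemble is the statistical structure whose correlation terms the kinetic moment equations must retain.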
Affiliation(s)
- Aaditya V Rangan
- Courant Institute of Mathematical Sciences, New York University, 251 Mercer Street, New York, NY 10012-1185, USA
22
Ly C, Tranchina D. Critical analysis of dimension reduction by a moment closure method in a population density approach to neural network modeling. Neural Comput 2007; 19:2032-92. [PMID: 17571938 DOI: 10.1162/neco.2007.19.8.2032] [Citation(s) in RCA: 68] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Computational techniques within the population density function (PDF) framework have provided time-saving alternatives to classical Monte Carlo simulations of neural network activity. Efficiency of the PDF method is lost as the underlying neuron model is made more realistic and the number of state variables increases. In a detailed theoretical and computational study, we elucidate strengths and weaknesses of dimension reduction by a particular moment closure method (Cai, Tao, Shelley, & McLaughlin, 2004; Cai, Tao, Rangan, & McLaughlin, 2006) as applied to integrate-and-fire neurons that receive excitatory synaptic input only. When the unitary postsynaptic conductance event has a single-exponential time course, the evolution equation for the PDF is a partial differential-integral equation in two state variables, voltage and excitatory conductance. In the moment closure method, one approximates the conditional kth centered moment of excitatory conductance given voltage by the corresponding unconditioned moment. The result is a system of k coupled partial differential equations with one state variable, voltage, and k coupled ordinary differential equations. Moment closure at k = 2 works well, and at k = 3 works even better, in the regime of high dynamically varying synaptic input rates. Both closures break down at lower synaptic input rates. Phase-plane analysis of the k = 2 problem with typical parameters proves, and reveals why, no steady-state solutions exist below a synaptic input rate that gives a firing rate of 59 s⁻¹ in the full 2D problem. Closure at k = 3 fails for similar reasons. Low firing-rate solutions can be obtained only with parameters for the amplitude or kinetics (or both) of the unitary postsynaptic conductance event that are on the edge of the physiological range. We conclude that this dimension-reduction method gives ill-posed problems for a wide range of physiological parameters, and we suggest future directions.
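The closure step itself, replacing a conditional centered moment of conductance given voltage by the unconditioned moment, can be probed numerically: draw correlated (v, g) samples and compare the variance of g within a narrow voltage slice to its overall variance. The synthetic joint distribution below is an illustrative stand-in, not the PDF of the actual conductance-based model; the gap between the two variances is exactly the conditional structure the closure discards.

```python
# Hedged numerical illustration of the k = 2 closure assumption.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
g = rng.gamma(shape=4.0, scale=0.25, size=n)     # synthetic conductance
v = 0.5 * g + 0.1 * rng.standard_normal(n)       # voltage correlated with g

# Unconditioned second centered moment (what the closure substitutes in).
var_g = g.var()

# Conditional second centered moment of g within a narrow voltage slice.
in_slice = np.abs(v - np.median(v)) < 0.02
var_g_given_v = g[in_slice].var()

# With v-g correlation present the conditional variance is much smaller
# than the unconditioned one, so the substitution is a real approximation.
print(var_g, var_g_given_v)
```

When the input regime makes the v-g correlation strong, as at low synaptic rates, this mismatch grows, consistent with the breakdown the abstract reports.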
Affiliation(s)
- Cheng Ly
- Courant Institute of Mathematical Sciences, New York University, New York, NY 10012, USA.
23
Rangan AV, Cai D, McLaughlin DW. Modeling the spatiotemporal cortical activity associated with the line-motion illusion in primary visual cortex. Proc Natl Acad Sci U S A 2005; 102:18793-800. [PMID: 16380423 PMCID: PMC1323193 DOI: 10.1073/pnas.0509481102] [Citation(s) in RCA: 47] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Our large-scale computational model of the primary visual cortex, which incorporates orientation-specific, long-range couplings with slow NMDA conductances, operates in a fluctuating dynamic state of intermittent desuppression (IDS) that captures the behavior of coherent spontaneous cortical activity, as revealed by in vivo optical imaging based on voltage-sensitive dyes. Here, we address the functional significance of the IDS cortical operating point by investigating our model cortex's response to the Hikosaka line-motion illusion (LMI) stimulus: a cue in which a quickly flashed stationary square is followed a few milliseconds later by a stationary bar. As revealed by voltage-sensitive dye imaging, there is an intriguing similarity between the cortical spatiotemporal activity in response to (i) the Hikosaka LMI stimulus and (ii) a small moving square. This similarity is believed to be associated with preattentive illusory motion perception. Our numerical cortex produces similar spatiotemporal patterns in response to the two stimuli, both in very good agreement with experimental results. The essential network mechanisms underpinning the LMI phenomenon in our model are (i) the spatiotemporal structure of the LMI input as sculpted by the lateral geniculate nucleus, (ii) a priming effect of the long-range NMDA-type cortical coupling, and (iii) the NMDA conductance-voltage correlation manifested in the IDS state. This mechanism in our model cortex, in turn, suggests a physiological underpinning for the LMI-associated patterns in the visual cortex of anaesthetized cat.
Affiliation(s)
- Aaditya V Rangan
- Courant Institute of Mathematical Sciences and Center for Neural Science, New York University, New York, NY 10012, USA