1
Dumont G, Pérez-Cervera A, Gutkin B. A framework for macroscopic phase-resetting curves for generalised spiking neural networks. PLoS Comput Biol 2022; 18:e1010363. PMID: 35913991; PMCID: PMC9371324; DOI: 10.1371/journal.pcbi.1010363.
Abstract
Brain rhythms emerge from synchronization among interconnected spiking neurons. Key properties of such rhythms can be gleaned from the phase-resetting curve (PRC). Inferring the PRC and developing a systematic phase reduction theory for large-scale brain rhythms remains an outstanding challenge. Here we present a theoretical framework and methodology to compute the PRC of generic spiking networks with emergent collective oscillations. We adopt a renewal approach where neurons are described by the time since their last action potential, a description that can reproduce the dynamical features of many cell types. For a sufficiently large number of neurons, the network dynamics are well captured by a continuity equation known as the refractory density equation. We develop an adjoint method for this equation, yielding a semi-analytical expression for the infinitesimal PRC. We confirm the validity of our framework for specific examples of neural networks. Our theoretical framework can link key biological properties at the individual neuron scale to the macroscopic oscillatory properties of the network. Beyond spiking networks, the approach is applicable to a broad class of systems that can be described by renewal processes. The formation of oscillatory neuronal assemblies at the network level has been hypothesized to be fundamental to many cognitive and motor functions. One prominent tool for understanding how oscillatory activity responds to stimuli, and hence the neural code for which it is a substrate, is a nonlinear measure called the phase-resetting curve (PRC). At the network scale, the PRC measures how a given synaptic input perturbs the timing of the next volley of spikes, either advancing or delaying it.
As a further application, PRCs can be used to make unambiguous predictions about whether communicating networks of neurons will phase-lock, as is often observed across cortical areas, and what the stable phase configuration would be: synchronous, asynchronous, or with asymmetric phase shifts. The latter configuration also implies a preferential flow of information from the leading network to the follower, thereby giving causal signatures of directed functional connectivity. Because of the key position of the PRC in studying synchrony, information flow, and entrainment to external forcing, it is crucial to move toward a theory that allows the PRCs of network-wide oscillations to be computed not only for the restricted class of models treated in the past, but for generalized network descriptions that can flexibly reflect single-cell properties. In this manuscript, we tackle this issue by showing how the PRC for network oscillations can be computed using the adjoint system of the partial differential equations that define the dynamics of the neural activity density.
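The paper derives the PRC analytically via an adjoint equation, but the quantity itself is easy to illustrate numerically with the classical direct method: kick an oscillator at a known phase and measure the shift of its next spike. The sketch below applies this to a single theta neuron rather than to a network; all names and parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def spike_time(I=1.0, dt=1e-4, kick_time=None, kick_eps=0.0):
    """Integrate the theta neuron theta' = (1 - cos theta) + (1 + cos theta)*I
    from reset (theta = -pi) to the spike at theta = +pi, optionally applying
    an instantaneous current-like kick eps*(1 + cos theta) at kick_time."""
    theta, t = -np.pi, 0.0
    kicked = kick_time is None
    while theta < np.pi:
        if not kicked and t >= kick_time:
            theta += kick_eps * (1.0 + np.cos(theta))
            kicked = True
        theta += dt * ((1.0 - np.cos(theta)) + (1.0 + np.cos(theta)) * I)
        t += dt
    return t

def direct_prc(phi, I=1.0, eps=1e-2):
    """Direct method: phase advance of the next spike per unit kick strength,
    for a kick delivered at phase phi in [0, 1)."""
    T0 = spike_time(I)                                   # unperturbed period
    T1 = spike_time(I, kick_time=phi * T0, kick_eps=eps) # perturbed period
    return (T0 - T1) / eps
```

For I = 1 the theta neuron traverses its cycle at constant speed, so the measured curve should follow the known proportionality of the theta-neuron PRC to 1 + cos(theta): about 1.0 for a kick at mid-cycle (phi = 0.5) and about 0.5 at phi = 0.25.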
Affiliation(s)
- Grégory Dumont
- Group for Neural Theory, LNC INSERM U960, DEC, Ecole Normale Supérieure - PSL University, Paris, France
- Alberto Pérez-Cervera
- Center for Cognition and Decision Making, Institute for Cognitive Neuroscience, National Research University Higher School of Economics, Moscow, Russia
- Instituto de Matemática Interdisciplinar, Universidad Complutense de Madrid, Madrid, Spain
- Boris Gutkin
- Group for Neural Theory, LNC INSERM U960, DEC, Ecole Normale Supérieure - PSL University, Paris, France
- Center for Cognition and Decision Making, Institute for Cognitive Neuroscience, National Research University Higher School of Economics, Moscow, Russia
2
Peterson AJ. A numerical method for computing interval distributions for an inhomogeneous Poisson point process modified by random dead times. Biol Cybern 2021; 115:177-190. PMID: 33742314; PMCID: PMC8036215; DOI: 10.1007/s00422-021-00868-8.
Abstract
The inhomogeneous Poisson point process is a common model for time series of discrete, stochastic events. When an event from a point process is detected, it may trigger a random dead time in the detector, during which subsequent events will fail to be detected. It can be difficult or impossible to obtain a closed-form expression for the distribution of intervals between detections, even when the rate function (often referred to as the intensity function) and the dead-time distribution are given. Here, a method is presented to numerically compute the interval distribution expected for any arbitrary inhomogeneous Poisson point process modified by dead times drawn from any arbitrary distribution. In neuroscience, such a point process is used to model trains of neuronal spikes triggered by the detection of excitatory events while the neuron is not refractory. The assumptions of the method are that the process is observed over a finite observation window and that the detector is not in a dead state at the start of the observation window. Simulations are used to verify the method for several example point processes. The method should be useful for modeling and understanding the relationships between the rate functions and interval distributions of the event and detection processes, and how these relationships depend on the dead-time distribution.
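The detection process described above can also be simulated directly, which is a natural way to cross-check any numerical method for the interval distribution. Below is a minimal sketch, assuming Lewis-Shedler thinning for the inhomogeneous Poisson part and a user-supplied sampler for the random dead times; function names and parameters are illustrative, not from the paper.

```python
import numpy as np

def inhom_poisson_dead_time(rate, rate_max, t_end, dead_time_sampler, rng):
    """Detections from an inhomogeneous Poisson point process (simulated by
    Lewis-Shedler thinning) where every detection triggers a random dead
    time during which subsequent events go undetected."""
    detections, t, blocked_until = [], 0.0, -np.inf
    while True:
        t += rng.exponential(1.0 / rate_max)      # homogeneous candidate stream
        if t > t_end:
            return np.array(detections)
        if t < blocked_until:
            continue                              # detector is dead: event lost
        if rng.random() < rate(t) / rate_max:     # thinning: keep w.p. rate/rate_max
            detections.append(t)
            blocked_until = t + dead_time_sampler(rng)
```

As a sanity check, a constant rate lambda with a fixed dead time d yields a renewal detection process with mean interval d + 1/lambda, e.g. a detected rate of 10 events/s for lambda = 20/s and d = 50 ms.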
Affiliation(s)
- Adam J Peterson
- Leibniz Institute for Neurobiology, Brenneckestrasse 6, 39118, Magdeburg, Germany.
3
Kostal L, Lansky P, Stiber M. Statistics of inverse interspike intervals: The instantaneous firing rate revisited. Chaos 2018; 28:106305. PMID: 30384662; DOI: 10.1063/1.5036831.
Abstract
The rate coding hypothesis is the oldest and still one of the most accepted and investigated scenarios in neuronal activity analyses. However, the actual neuronal firing rate, while informally understood, can be mathematically defined in several different ways. These definitions yield distinct results; even their average values may differ dramatically for the simplest neuronal models. Such an inconsistency, together with the importance of "firing rate," motivates us to revisit the classical concept of the instantaneous firing rate. We confirm that different notions of firing rate can in fact be compatible, at least in terms of their averages, by carefully discerning the time instant at which the neuronal activity is observed. Two general cases are distinguished: the inspection time is synchronized either with a reference time or with the neuronal spiking. The statistical properties of the instantaneous firing rate, including parameter estimation, are analyzed, and compatibility with the intuitively understood concept is demonstrated.
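The distinction between inspection synchronized with spiking and inspection synchronized with a reference time can be made concrete in a short simulation. The sketch below assumes gamma-distributed interspike intervals purely for illustration; it is not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
k, theta = 3.0, 0.01                   # gamma-distributed ISIs, mean k*theta = 30 ms
isi = rng.gamma(k, theta, size=200_000)

# Rate as the reciprocal of the mean ISI (the "intuitive" firing rate):
rate_from_mean_isi = 1.0 / isi.mean()            # ~ 1/(k*theta) = 33.3 Hz

# Inspection synchronised with spiking: average the inverse ISIs directly.
spike_avg_inverse = (1.0 / isi).mean()           # E[1/T] = 1/(theta*(k-1)) = 50 Hz

# Inspection synchronised with a reference time: the interval covering a random
# time instant is sampled with length bias, i.e. proportionally to its duration.
length_biased = rng.choice(isi, size=200_000, p=isi / isi.sum())
time_avg_inverse = (1.0 / length_biased).mean()  # recovers 1/E[T] = 33.3 Hz
```

Averaging 1/ISI over spikes gives E[1/T] (here 50 Hz), whereas inspecting at a random time instant samples intervals with length bias and recovers 1/E[T], about 33 Hz, matching the reciprocal of the mean ISI, i.e. the intuitively understood rate.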
Affiliation(s)
- Lubomir Kostal
- Institute of Physiology of the Czech Academy of Sciences, Videnska 1083, 14220 Prague 4, Czech Republic
- Petr Lansky
- Institute of Physiology of the Czech Academy of Sciences, Videnska 1083, 14220 Prague 4, Czech Republic
- Michael Stiber
- Institute of Physiology of the Czech Academy of Sciences, Videnska 1083, 14220 Prague 4, Czech Republic
4
Towards a theory of cortical columns: From spiking neurons to interacting neural populations of finite size. PLoS Comput Biol 2017; 13:e1005507. PMID: 28422957; PMCID: PMC5415267; DOI: 10.1371/journal.pcbi.1005507.
Abstract
Neural population equations such as neural mass or field models are widely used to study brain activity on a large scale. However, the relation of these models to the properties of single neurons is unclear. Here we derive an equation for several interacting populations at the mesoscopic scale starting from a microscopic model of randomly connected generalized integrate-and-fire neuron models. Each population consists of 50–2000 neurons of the same type but different populations account for different neuron types. The stochastic population equations that we find reveal how spike-history effects in single-neuron dynamics such as refractoriness and adaptation interact with finite-size fluctuations on the population level. Efficient integration of the stochastic mesoscopic equations reproduces the statistical behavior of the population activities obtained from microscopic simulations of a full spiking neural network model. The theory describes nonlinear emergent dynamics such as finite-size-induced stochastic transitions in multistable networks and synchronization in balanced networks of excitatory and inhibitory neurons. The mesoscopic equations are employed to rapidly integrate a model of a cortical microcircuit consisting of eight neuron types, which allows us to predict spontaneous population activities as well as evoked responses to thalamic input. Our theory establishes a general framework for modeling finite-size neural population dynamics based on single cell and synapse parameters and offers an efficient approach to analyzing cortical circuits and computations. Understanding the brain requires mathematical models on different spatial scales. On the “microscopic” level of nerve cells, neural spike trains can be well predicted by phenomenological spiking neuron models. On a coarse scale, neural activity can be modeled by phenomenological equations that summarize the total activity of many thousands of neurons. 
Such population models are widely used to model neuroimaging signals such as EEG, MEG, or fMRI. However, it is largely unknown how large-scale models are connected to an underlying microscale model. Linking the scales is vital for a correct description of rapid changes and fluctuations of the population activity, and is crucial for multiscale brain models. The challenge is to treat realistic spiking dynamics as well as fluctuations arising from the finite number of neurons. We obtained such a link by deriving stochastic population equations on the mesoscopic scale of 100–1000 neurons from an underlying microscopic model. These equations can be efficiently integrated and reproduce results of a microscopic simulation while achieving a high speed-up factor. We expect that our novel population theory on the mesoscopic scale will be instrumental for understanding experimental data on information processing in the brain, and will ultimately link microscopic and macroscopic activity patterns.
5
Levakova M, Tamborrino M, Ditlevsen S, Lansky P. A review of the methods for neuronal response latency estimation. Biosystems 2015; 136:23-34. PMID: 25939679; DOI: 10.1016/j.biosystems.2015.04.008.
Abstract
Neuronal response latency is usually vaguely defined as the delay between the stimulus onset and the beginning of the response. It carries important information for the understanding of the temporal code. For this reason, the detection of the response latency has been studied extensively over the last twenty years, yielding a variety of estimation methods. These fall into two classes: methods that detect a change of intensity in the firing-rate profile after stimulus onset, and methods that detect the spikes evoked by the stimulation from interspike intervals and spike times. The aim of this paper is to review the main techniques proposed in both classes, highlighting their advantages and shortcomings.
Affiliation(s)
- Marie Levakova
- Institute of Physiology, The Czech Academy of Sciences, Videnska 1083, 14220 Prague 4, Czech Republic.
- Massimiliano Tamborrino
- Institute for Stochastics, Johannes Kepler University Linz, Altenbergerstraße 69, 4040 Linz, Austria.
- Susanne Ditlevsen
- Department of Mathematical Sciences, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen, Denmark.
- Petr Lansky
- Institute of Physiology, The Czech Academy of Sciences, Videnska 1083, 14220 Prague 4, Czech Republic.
6
Lagzi F, Rotter S. A Markov model for the temporal dynamics of balanced random networks of finite size. Front Comput Neurosci 2014; 8:142. PMID: 25520644; PMCID: PMC4253948; DOI: 10.3389/fncom.2014.00142.
Abstract
The balanced state of recurrent networks of excitatory and inhibitory spiking neurons is characterized by fluctuations of population activity about an attractive fixed point. Numerical simulations show that these dynamics are essentially nonlinear, and that the intrinsic noise (self-generated fluctuations) in networks of finite size is state-dependent. Therefore, stochastic differential equations with additive noise of fixed amplitude cannot provide an adequate description of the stochastic dynamics; the noise model should, rather, result from a self-consistent description of the network dynamics. Here, we consider a two-state Markovian neuron model, where spikes correspond to transitions from the active state to the refractory state. Excitatory and inhibitory input to this neuron affects the transition rates between the two states. The corresponding nonlinear dependencies can be identified directly from numerical simulations of networks of leaky integrate-and-fire neurons, discretized at a time resolution in the sub-millisecond range. Deterministic mean-field equations, and a noise component that depends on the dynamic state of the network, are obtained from this model. The resulting stochastic model reflects the behavior observed in numerical simulations quite well, irrespective of the size of the network. In particular, the strong temporal correlation between the two populations, a hallmark of the balanced state in random recurrent networks, is well represented by our model. Numerical simulations of such networks further show that a log-normal distribution of short-term spike counts, a property not considered before, is characteristic of balanced random networks with fixed in-degree, and our model shares this statistical property. Furthermore, the reconstruction of the flow from simulated time series suggests that the mean-field dynamics of finite-size networks are essentially of Wilson-Cowan type.
We expect that this novel nonlinear stochastic model of the interaction between neuronal populations also opens new doors to analyze the joint dynamics of multiple interacting networks.
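A toy version of such a two-state Markov population can be written in a few lines. The transition-rate function below is a hypothetical choice for illustration, not the one identified from the leaky integrate-and-fire simulations in the paper:

```python
import numpy as np

def simulate(N, T=50.0, dt=1e-3, gamma=5.0, rng=None):
    """Two-state Markov population of N neurons: a spike is the transition
    active -> refractory, with per-neuron rate phi(a) growing with the active
    fraction a (recurrent excitation); recovery refractory -> active occurs
    at a fixed rate gamma.  Returns the active-fraction time series."""
    if rng is None:
        rng = np.random.default_rng(0)
    phi = lambda a: 2.0 + 8.0 * a   # hypothetical input-dependent spike rate
    n = N // 2                      # start with half the neurons active
    trace = np.empty(int(T / dt))
    for i in range(trace.size):
        a = n / N
        spikes = rng.binomial(n, 1.0 - np.exp(-phi(a) * dt))
        recoveries = rng.binomial(N - n, 1.0 - np.exp(-gamma * dt))
        n += recoveries - spikes    # n stays in [0, N] by construction
        trace[i] = n / N
    return trace
```

The stationary active fraction should settle near the mean-field fixed point of da/dt = gamma*(1 - a) - phi(a)*a, here a* = (-7 + sqrt(209))/16, about 0.466, while the standard deviation of the fluctuations shrinks roughly as 1/sqrt(N), the state-dependent finite-size noise the authors emphasize.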
Affiliation(s)
- Fereshteh Lagzi
- Bernstein Center Freiburg and Faculty of Biology, University of Freiburg, Freiburg, Germany
7
Deger M, Schwalger T, Naud R, Gerstner W. Fluctuations and information filtering in coupled populations of spiking neurons with adaptation. Phys Rev E 2014; 90:062704. PMID: 25615126; DOI: 10.1103/physreve.90.062704.
Abstract
Finite-sized populations of spiking elements are fundamental to brain function but also are used in many areas of physics. Here we present a theory of the dynamics of finite-sized populations of spiking units, based on a quasirenewal description of neurons with adaptation. We derive an integral equation with colored noise that governs the stochastic dynamics of the population activity in response to time-dependent stimulation and calculate the spectral density in the asynchronous state. We show that systems of coupled populations with adaptation can generate a frequency band in which sensory information is preferentially encoded. The theory is applicable to fully as well as randomly connected networks and to leaky integrate-and-fire as well as to generalized spiking neurons with adaptation on multiple time scales.
Affiliation(s)
- Moritz Deger
- School of Computer and Communication Sciences and School of Life Sciences, Brain Mind Institute, École polytechnique fédérale de Lausanne, Station 15, 1015 Lausanne EPFL, Switzerland
- Tilo Schwalger
- School of Computer and Communication Sciences and School of Life Sciences, Brain Mind Institute, École polytechnique fédérale de Lausanne, Station 15, 1015 Lausanne EPFL, Switzerland
- Richard Naud
- Department of Physics, University of Ottawa, 150 Louis Pasteur, Ottawa, Ontario, K1N 6N5 Canada
- Wulfram Gerstner
- School of Computer and Communication Sciences and School of Life Sciences, Brain Mind Institute, École polytechnique fédérale de Lausanne, Station 15, 1015 Lausanne EPFL, Switzerland
8
Deger M, Kumar A, Aertsen A, Rotter S. Linking neural mass signals and spike train statistics through point process and linear systems theory. BMC Neurosci 2013. PMCID: PMC3704725; DOI: 10.1186/1471-2202-14-s1-p330.
9
Deniz T, Rotter S. Going beyond Poisson processes: a new statistical framework in neuronal modeling and data analysis. BMC Neurosci 2013. PMCID: PMC3704707; DOI: 10.1186/1471-2202-14-s1-p332.
10
Tamborrino M, Ditlevsen S, Lansky P. Identification of noisy response latency. Phys Rev E 2012; 86:021128. PMID: 23005743; DOI: 10.1103/physreve.86.021128.
Abstract
In many physical systems there is a time delay before an applied input (stimulation) has an impact on the output (response), and the quantification of this delay is of paramount interest. If the response can only be observed on top of an indistinguishable background signal, the estimation can be highly unreliable unless the background signal is accounted for in the analysis. In fact, if the background signal is ignored, however small it is compared to the response and however large the delay is, the estimate of the time delay will tend to zero for any reasonable estimator as the number of observations increases. Here we propose a unified concept of response latency identification in event data corrupted by a background signal. We do so in the context of information transfer within a neural system, more specifically on spike trains from single neurons. The estimators are compared on simulated data, and those most suitable for specific situations are recommended.
Affiliation(s)
- Massimiliano Tamborrino
- Department of Mathematical Sciences, University of Copenhagen, Universitetsparken 5, DK 2100 Copenhagen, Denmark.
11
Reimer ICG, Staude B, Ehm W, Rotter S. Modeling and analyzing higher-order correlations in non-Poissonian spike trains. J Neurosci Methods 2012; 208:18-33. PMID: 22561088; DOI: 10.1016/j.jneumeth.2012.04.015.
Abstract
Measuring pairwise and higher-order spike correlations is crucial for studying their potential impact on neuronal information processing. In order to avoid misinterpretation of results, the tools used for data analysis need to be carefully calibrated with respect to their sensitivity and robustness. This, in turn, requires surrogate data with statistical properties common to experimental spike trains. Here, we present a novel method to generate correlated non-Poissonian spike trains and study the impact of single-neuron spike statistics on the inference of higher-order correlations. Our method to mimic cooperative neuronal spike activity allows the realization of a large variety of renewal processes with controlled higher-order correlation structure. Based on surrogate data obtained by this procedure we investigate the robustness of the recently proposed method empirical de-Poissonization (Ehm et al., 2007). It assumes Poissonian spiking, which is common also for many other estimation techniques. We observe that some degree of deviation from this assumption can generally be tolerated, that the results are more reliable for small analysis bins, and that the degree of misestimation depends on the detailed spike statistics. As a consequence of these findings we finally propose a strategy to assess the reliability of results for experimental data.
Affiliation(s)
- Imke C G Reimer
- Bernstein Center Freiburg and Faculty of Biology, Albert-Ludwig University, Freiburg, Germany
12
Cardanobile S, Rotter S. Emergent properties of interacting populations of spiking neurons. Front Comput Neurosci 2011; 5:59. PMID: 22207844; PMCID: PMC3245521; DOI: 10.3389/fncom.2011.00059.
Abstract
Dynamic neuronal networks are a key paradigm of increasing importance in brain research, concerned with the functional analysis of biological neuronal networks and, at the same time, with the synthesis of artificial brain-like systems. In this context, neuronal network models serve as mathematical tools to understand the function of brains, but they might as well develop into future tools for enhancing certain functions of our nervous system. Here, we present and discuss our recent achievements in developing multiplicative point processes into a viable mathematical framework for spiking network modeling. The perspective is that the dynamic behavior of these neuronal networks is faithfully reflected by a set of non-linear rate equations, describing all interactions on the population level. These equations are similar in structure to Lotka-Volterra equations, well known from their use in modeling predator-prey relations in population biology, though abundant applications to economic theory have also been described. We present a number of biologically relevant examples of spiking network function, which can be studied with the help of the aforementioned correspondence between spike trains and specific systems of non-linear coupled ordinary differential equations. We claim that, enabled by the use of multiplicative point processes, we can make essential contributions to a more thorough understanding of the dynamical properties of interacting neuronal populations.
13
Kumar A, Cardanobile S, Rotter S, Aertsen A. The role of inhibition in generating and controlling Parkinson's disease oscillations in the Basal Ganglia. Front Syst Neurosci 2011; 5:86. PMID: 22028684; PMCID: PMC3199726; DOI: 10.3389/fnsys.2011.00086.
Abstract
Movement disorders in Parkinson’s disease (PD) are commonly associated with slow oscillations and increased synchrony of neuronal activity in the basal ganglia. The neural mechanisms underlying this dynamic network dysfunction, however, are only poorly understood. Here, we show that the strength of inhibitory inputs from striatum to globus pallidus external (GPe) is a key parameter controlling oscillations in the basal ganglia. Specifically, the increase in striatal activity observed in PD is sufficient to unleash the oscillations in the basal ganglia. This finding allows us to propose a unified explanation for different phenomena: absence of oscillation in the healthy state of the basal ganglia, oscillations in dopamine-depleted state and quenching of oscillations under deep-brain-stimulation (DBS). These novel insights help us to better understand and optimize the function of DBS protocols. Furthermore, studying the model behavior under transient increase of activity of the striatal neurons projecting to the indirect pathway, we are able to account for both motor impairment in PD patients and for reduced response inhibition in DBS implanted patients.
Affiliation(s)
- Arvind Kumar
- Bernstein Center Freiburg, University of Freiburg, Freiburg, Germany
14
Deger M, Helias M, Boucsein C, Rotter S. Statistical properties of superimposed stationary spike trains. J Comput Neurosci 2011; 32:443-63. PMID: 21964584; PMCID: PMC3343236; DOI: 10.1007/s10827-011-0362-8.
Abstract
The Poisson process is an often employed model for the activity of neuronal populations. It is known, though, that superpositions of realistic, non-Poisson spike trains are not in general Poisson processes, not even for large numbers of superimposed processes. Here we construct superimposed spike trains from intracellular in vivo recordings from rat neocortex neurons and compare their statistics to specific point process models. The constructed superimposed spike trains reveal strong deviations from the Poisson model. We find that superpositions of model spike trains that take the effective refractoriness of the neurons into account yield a much better description. A minimal model of this kind is the Poisson process with dead-time (PPD). For this process, and for superpositions thereof, we obtain analytical expressions for several second-order statistical quantities, such as the count variability, inter-spike interval (ISI) variability, and ISI correlations, and demonstrate the match with the in vivo data. We conclude that effective refractoriness is the key property that shapes the statistical properties of the superposition spike trains. We present new, efficient algorithms to generate superpositions of PPDs and of gamma processes that can be used to provide more realistic background input in simulations of networks of spiking neurons. Using these generators, we show in simulations that neurons which receive superimposed spike trains as input are highly sensitive to the statistical effects induced by neuronal refractoriness.
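The PPD model and the statistical signatures described above are easy to reproduce with a naive generator (the paper's algorithms are more efficient; the parameter values below are illustrative):

```python
import numpy as np

def ppd_train(rate, dead, t_end, rng):
    """Poisson process with dead time (PPD): after each spike, a fixed dead
    time, then an exponential waiting time with the given rate."""
    t, spikes = 0.0, []
    while True:
        t += dead + rng.exponential(1.0 / rate)
        if t > t_end:
            return np.array(spikes)
        spikes.append(t)

rng = np.random.default_rng(2)
t_end = 1000.0
trains = [ppd_train(25.0, 0.02, t_end, rng) for _ in range(20)]
pooled = np.sort(np.concatenate(trains))      # superposition of 20 PPD trains

isi = np.diff(pooled)
cv = isi.std() / isi.mean()          # close to 1: pooled ISIs look Poisson-like
counts, _ = np.histogram(pooled, bins=np.arange(0.0, t_end, 1.0))
fano = counts.var() / counts.mean()  # well below 1: refractoriness is visible
```

With these numbers each train has ISI mean 60 ms and CV = 2/3, so the pooled ISIs look nearly Poisson (CV close to 1), while the count Fano factor over 1 s windows stays near the single-train CV^2 of about 0.44, the refractoriness signature the paper analyzes.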
Affiliation(s)
- Moritz Deger
- Bernstein Center Freiburg & Faculty of Biology, Albert-Ludwig University, 79104 Freiburg, Germany.