1
Fisco-Compte P, Aquilué-Llorens D, Roqueiro N, Fossas E, Guillamon A. Empirical modeling and prediction of neuronal dynamics. Biological Cybernetics 2024; 118:83-110. [PMID: 38597964] [PMCID: PMC11068704] [DOI: 10.1007/s00422-024-00986-z]
Abstract
Mathematical modeling of neuronal dynamics has experienced rapid growth in recent decades thanks to the biophysical formalism introduced by Hodgkin and Huxley in the 1950s. Other types of models (for instance, integrate-and-fire models), although less realistic, have also contributed to the understanding of neuronal dynamics. However, a vast volume of data has still not been associated with a mathematical model, mainly because data are acquired faster than they can be analyzed, or because they are difficult to analyze (for instance, when the number of ionic channels involved is huge). Developing new methodologies to obtain mathematical or computational models from data, even without prior knowledge of the source, can therefore help make future predictions. Here, we explore the capability of a wavelet neural network to identify neuronal (single-cell) dynamics. We present an optimized computational scheme that trains the artificial neural network (ANN) with biologically plausible input currents. We obtain successful identification for data generated from four different neuron models when using all variables as inputs to the network. We also show that the resulting empirical model generalizes: it predicts the neuronal dynamics generated by variable input currents different from those used to train the network. In the more realistic situation of using only the voltage and the injected current as input data to train the network, predictive ability is reduced, but for low-dimensional models the results are still satisfactory. We regard our contribution as a first step toward obtaining empirical models from experimental voltage traces.
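The system-identification task sketched in this abstract can be illustrated with a far simpler stand-in than the paper's wavelet network: simulate a known neuron model driven by a noisy current, then fit dV/dt as a function of (V, I) by least squares. Everything below (model, parameters, feature set) is an invented toy, not the authors' code.

```python
# Toy sketch of empirical model identification from a voltage trace:
# simulate a subthreshold leaky integrator, then regress dV/dt on
# polynomial features of (V, I). All values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
dt, tau, R = 0.1, 10.0, 1.0              # ms, ms, MOhm (assumed units)
T = 5000
I = 1.5 + 0.5 * rng.standard_normal(T)   # noisy injected current
V = np.zeros(T)
for t in range(T - 1):                   # ground-truth generator (no spikes)
    V[t + 1] = V[t] + dt * (-V[t] + R * I[t]) / tau

# Empirical model: dV/dt ~ c0 + c1*V + c2*I + c3*V^2, fitted by least squares
dVdt = (V[1:] - V[:-1]) / dt
X = np.column_stack([np.ones(T - 1), V[:-1], I[:-1], V[:-1] ** 2])
coef, *_ = np.linalg.lstsq(X, dVdt, rcond=None)
pred = X @ coef
r2 = 1.0 - np.sum((dVdt - pred) ** 2) / np.sum((dVdt - dVdt.mean()) ** 2)
```

Because the generator here is exactly linear in V and I, the fit recovers the true coefficients (c1 close to -1/tau); a real voltage trace would require a far richer function class, which is where a wavelet network comes in.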
Affiliation(s)
- Pau Fisco-Compte
- Departament d'Enginyeria Elèctrica, CITCEA-UPC, Universitat Politècnica de Catalunya - Barcelona TECH, Av. Diagonal, 647, (Edifici ETSEIB), Barcelona, Catalonia, 08028, Spain
- David Aquilué-Llorens
- Neuroscience BU, Starlab Barcelona S.L., Av Tibidabo 47 bis, Barcelona, Catalonia, 08035, Spain
- Nestor Roqueiro
- Depto. de Automação e Sistemas, Federal University of Santa Catarina, Bairro Trindade, Caixa Postal 476, Florianopolis, Santa Catarina, 88040-900, Brazil
- Enric Fossas
- Institut d'Organització i Control, Universitat Politècnica de Catalunya - Barcelona TECH, Av. Diagonal, 647, planta 11 (Edifici ETSEIB), Barcelona, Catalonia, 08028, Spain
- Antoni Guillamon
- Departament de Matemàtiques (EPSEB) and Institut de Matemàtiques de la UPC (IMTech), Universitat Politècnica de Catalunya - Barcelona TECH, Av. Dr. Marañón, 44-50, Barcelona, Catalonia, 08028, Spain.
- Centre de Recerca Matemàtica, Edifici C, Campus de Bellaterra, Cerdanyola del Vallès, Catalonia, 08193, Spain.
2
Exact mean-field models for spiking neural networks with adaptation. J Comput Neurosci 2022; 50:445-469. [PMID: 35834100] [DOI: 10.1007/s10827-022-00825-9]
Abstract
Networks of spiking neurons with adaptation have been shown to reproduce a wide range of neural activities, including the emergent population bursting and spike synchrony that underpin brain disorders and normal function. Exact mean-field models derived from spiking neural networks are extremely valuable, as such models can be used to determine how individual neurons and the network they reside in interact to produce macroscopic network behaviours. In this paper, we derive and analyze a set of exact mean-field equations for a neural network with spike-frequency adaptation. Specifically, our model is a network of Izhikevich neurons, where each neuron is modeled by a two-dimensional system consisting of a quadratic integrate-and-fire equation plus an equation implementing spike-frequency adaptation. Previous work deriving a mean-field model for this type of network relied on the assumption of sufficiently slow dynamics of the adaptation variable. However, this approximation did not establish an exact correspondence between the macroscopic description and the realistic neural network, especially when the adaptation time constant was not large. The challenge lies in obtaining a closed set of mean-field equations that includes the mean-field dynamics of the adaptation variable. We address this problem by using a Lorentzian ansatz combined with a moment-closure approach to arrive at a mean-field system in the thermodynamic limit. The resulting macroscopic description qualitatively and quantitatively captures the collective dynamics of the neural network, including transitions between states where the individual neurons exhibit asynchronous tonic firing and synchronous bursting. We extend the approach to a network of two populations of neurons and discuss the accuracy and efficacy of our mean-field approximations by examining all assumptions imposed during the derivation.
Numerical bifurcation analysis of our mean-field models reveals bifurcations not previously observed, including a novel mechanism for the emergence of bursting in the network. We anticipate that our results will provide a tractable and reliable tool for investigating the underlying mechanisms of brain function and dysfunction from the perspective of computational neuroscience.
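As a concrete, deliberately minimal illustration of the microscopic model class discussed above, the following sketch simulates a single quadratic integrate-and-fire neuron with a spike-triggered adaptation variable. Parameter values, units and reset rule are invented for illustration and are not taken from the paper; the point is only that adaptation progressively lengthens the interspike intervals.

```python
# Single QIF neuron with spike-frequency adaptation (toy parameters):
# dv/dt = v^2 + I - w,   dw/dt = a*(b*v - w),
# with reset v -> v_reset and w -> w + w_jump at each spike.
import numpy as np

dt, T = 0.01, 200.0                       # ms
a, b = 0.1, 0.0                           # adaptation rate and voltage coupling
v_peak, v_reset, w_jump, I = 50.0, -50.0, 5.0, 30.0
v, w, t, spikes = -50.0, 0.0, 0.0, []
while t < T:
    dv = v * v + I - w
    dw = a * (b * v - w)
    v += dt * dv
    w += dt * dw
    if v >= v_peak:                       # spike: reset and adapt
        spikes.append(t)
        v = v_reset
        w += w_jump
    t += dt
isi = np.diff(np.array(spikes))           # interspike intervals
```

The mean-field theory of the paper describes the thermodynamic limit of a large heterogeneous network of such units; this single-neuron sketch only shows the microscopic adaptation mechanism (later intervals are longer than early ones).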
3
Osborne H, Deutz L, de Kamps M. Multidimensional Dynamical Systems with Noise. Advances in Experimental Medicine and Biology 2022; 1359:159-178. [DOI: 10.1007/978-3-030-89439-9_7]
4
Schwalger T. Mapping input noise to escape noise in integrate-and-fire neurons: a level-crossing approach. Biological Cybernetics 2021; 115:539-562. [PMID: 34668051] [PMCID: PMC8551127] [DOI: 10.1007/s00422-021-00899-1]
Abstract
Noise in spiking neurons is commonly modeled by a noisy input current or by generating output spikes stochastically with a voltage-dependent hazard rate ("escape noise"). While input noise lends itself to modeling biophysical noise processes, the phenomenological escape noise is mathematically more tractable. Using the level-crossing theory for differentiable Gaussian processes, we derive an approximate mapping between colored input noise and escape noise in leaky integrate-and-fire neurons. This mapping requires the first-passage-time (FPT) density of an overdamped Brownian particle driven by colored noise with respect to an arbitrarily moving boundary. Starting from the Wiener-Rice series for the FPT density, we apply the second-order decoupling approximation of Stratonovich to the case of moving boundaries and derive a simplified hazard-rate representation that is local in time and numerically efficient. This simplification requires the calculation of the non-stationary autocorrelation function of the level-crossing process: for exponentially correlated input noise (Ornstein-Uhlenbeck process), we obtain an exact formula for the zero-lag autocorrelation as a function of the noise parameters, the mean membrane potential and its speed, as well as an exponential approximation of the full autocorrelation function. The theory accurately predicts the FPT and interspike-interval densities, as well as the population activities obtained from simulations with colored input noise and a time-dependent stimulus or boundary. The agreement with simulations is strongly enhanced across the sub- and suprathreshold firing regimes compared to a first-order decoupling approximation that neglects correlations between level crossings. The second-order approximation also improves upon a previously proposed theory in the subthreshold regime.
Depending on a simplicity-accuracy trade-off, all considered approximations represent useful mappings from colored input noise to escape noise, enabling progress in the theory of neuronal population dynamics.
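The "escape noise" mechanism that the paper maps input noise onto can be stated very compactly: in each time bin a neuron spikes with probability 1 - exp(-rho(V) dt), where rho is a voltage-dependent hazard. The sketch below uses an exponential hazard and invented parameters; it is a generic illustration of the mechanism, not the paper's mapping.

```python
# Leaky integrate-and-fire neuron with escape noise: deterministic
# subthreshold dynamics plus stochastic spiking via a hazard rate
# rho(V) = rho0 * exp((V - theta) / delta). Toy parameters throughout.
import numpy as np

rng = np.random.default_rng(1)
dt, tau = 0.1, 10.0                      # ms
theta, delta, rho0, mu = 1.0, 0.2, 1.0, 0.8
steps = 100000
V, n_spikes = 0.0, 0
for _ in range(steps):
    V += dt * (mu - V) / tau             # subthreshold drift toward mu
    hazard = rho0 * np.exp((V - theta) / delta)
    if rng.random() < 1.0 - np.exp(-hazard * dt):
        n_spikes += 1                    # stochastic "escape" spike
        V = 0.0                          # reset
rate = n_spikes / (steps * dt / 1000.0)  # spikes per second
```

Note that the neuron can fire even though its mean drive mu sits below the soft threshold theta, which is exactly the behavior that escape-noise models use to mimic input noise.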
Affiliation(s)
- Tilo Schwalger
- Institute of Mathematics, Technical University Berlin, 10623, Berlin, Germany.
- Bernstein Center for Computational Neuroscience Berlin, 10115, Berlin, Germany.
5
Huang CH, Lin CCK. A novel density-based neural mass model for simulating neuronal network dynamics with conductance-based synapses and membrane current adaptation. Neural Netw 2021; 143:183-197. [PMID: 34157643] [DOI: 10.1016/j.neunet.2021.06.009]
Abstract
Despite its success in understanding brain rhythms, the neural mass model, as a low-dimensional mean-field network model, is phenomenological in nature and cannot replicate some of the rich repertoire of responses seen in real neuronal tissues. Here, using a colored-synapse population density method, we derived a novel neural mass model, termed the density-based neural mass model (dNMM), as the mean-field description of the network dynamics of adaptive exponential integrate-and-fire (aEIF) neurons, in which two critical neuronal features, i.e., voltage-dependent conductance-based synaptic interactions and adaptation of firing rate responses, were included. Our results showed that the dNMM correctly estimates the firing rate responses of a population of aEIF neurons receiving stationary or time-varying excitatory and inhibitory inputs. Finally, it also quantitatively describes the effect of spike-frequency adaptation in the generation of asynchronous irregular activity in excitatory-inhibitory cortical networks. We conclude that, in terms of biological realism and computational efficiency, the dNMM is a suitable candidate for building large-scale network models involving multiple brain areas, in which the neuronal population is the smallest dynamic unit.
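For readers unfamiliar with the underlying single-neuron model, the adaptive exponential integrate-and-fire (aEIF) neuron that the dNMM describes at the population level can be sketched as follows; these are generic Brette-Gerstner-style parameters, not the values used in the paper.

```python
# aEIF neuron: exponential spike-initiation term plus an adaptation
# current w with subthreshold coupling a and spike-triggered jump b.
import numpy as np

C, gL, EL, VT, DT = 200.0, 10.0, -70.0, -50.0, 2.0   # pF, nS, mV
tau_w, a, b = 100.0, 2.0, 40.0                       # ms, nS, pA
Vr, Vpeak, I = -58.0, 0.0, 500.0                     # mV, mV, pA
dt, T = 0.05, 500.0                                  # ms
V, w, t, spikes = EL, 0.0, 0.0, []
while t < T:
    dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) - w + I) / C
    dw = (a * (V - EL) - w) / tau_w
    V += dt * dV
    w += dt * dw
    if V >= Vpeak:                   # numerical spike: reset and adapt
        spikes.append(t)
        V = Vr
        w += b
    t += dt
isi = np.diff(np.array(spikes))      # spike-frequency adaptation: ISIs grow
```

The adaptation current w is what the abstract refers to as "adaptation of firing rate responses": under constant drive the interspike intervals lengthen over time as w accumulates.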
Affiliation(s)
- Chih-Hsu Huang
- Department of Neurology, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan, Taiwan
- Chou-Ching K Lin
- Department of Neurology, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan, Taiwan.
6
Capone C, di Volo M, Romagnoni A, Mattia M, Destexhe A. State-dependent mean-field formalism to model different activity states in conductance-based networks of spiking neurons. Phys Rev E 2020; 100:062413. [PMID: 31962518] [DOI: 10.1103/physreve.100.062413]
Abstract
Large-scale spiking simulations of cerebral neuronal networks have attracted increasing interest in recent years, driven both by the availability of high-performance computers and by increasingly detailed experimental observations. In this context it is important to understand how population dynamics are generated by the designed parameters of the networks, which is the question addressed by mean-field theories. Although analytic solutions for the mean-field dynamics have been proposed for current-based (CUBA) neurons, a complete analytic description has not yet been achieved for more realistic neural properties, such as conductance-based (COBA) networks of adaptive exponential (AdEx) neurons. Here, we propose a principled approach to map a COBA model onto a CUBA one. This approach provides a state-dependent approximation capable of reliably predicting the firing-rate properties of an AdEx neuron with noninstantaneous COBA integration. We also applied our theory to population dynamics, predicting the dynamical properties of the network in very different regimes, such as asynchronous irregular and synchronous irregular (slow oscillations) activity. This result shows that a state-dependent approximation can capture the subtle effects of COBA integration and yields a theory that correctly predicts activity in regimes of alternating states such as slow oscillations.
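The core of the COBA-to-CUBA idea can be shown with a purely static toy calculation: replace the conductance-based synaptic drive by an effective current evaluated at a state-dependent mean voltage. The numbers and the particular choice of effective voltage below are illustrative, not the paper's derivation.

```python
# Static sketch of mapping a conductance-based (COBA) drive onto an
# effective current (CUBA). V_eff is the conductance-weighted mean
# potential; at V_eff the leak and the effective current balance.
import numpy as np

EL, EE, EI = -65.0, 0.0, -80.0     # leak / excitatory / inhibitory reversal (mV)
gL, gE, gI = 10.0, 5.0, 20.0       # leak and mean synaptic conductances (nS)
g_tot = gL + gE + gI
V_eff = (gL * EL + gE * EE + gI * EI) / g_tot
# Effective synaptic current of the CUBA surrogate, evaluated at V_eff:
I_eff = gE * (EE - V_eff) + gI * (EI - V_eff)
# Synaptic conductance load also shortens the membrane time constant:
tau_ratio = gL / g_tot             # tau_eff / tau_membrane
```

A state-dependent version of this idea, as in the paper, lets V_eff (and hence I_eff) track the network state instead of being a fixed constant.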
Affiliation(s)
- Cristiano Capone
- INFN, Sezione di Roma, 00185 Rome, Italy and Department of Integrative and Computational Neuroscience (ICN), Paris-Saclay Institute of Neuroscience (NeuroPSI), Centre National de la Recherche Scientifique (CNRS), 91198 Gif-sur-Yvette, France
- Matteo di Volo
- Department of Integrative and Computational Neuroscience (ICN), Paris-Saclay Institute of Neuroscience (NeuroPSI), Centre National de la Recherche Scientifique (CNRS), Laboratoire de Physique Théorique et Modélisation, Université de Cergy-Pontoise, 95302 Cergy-Pontoise cedex, France
- Alberto Romagnoni
- Data Team, Département d'informatique de l'ENS, École normale supérieure, CNRS, PSL Research University, 75005 Paris, France and Centre de Recherche sur l'Inflammation UMR 1149, Inserm-Université Paris Diderot, Paris, France
- Maurizio Mattia
- National Center for Radiation Protection and Computational Physics, Istituto Superiore di Sanità, 00161 Rome, Italy
- Alain Destexhe
- Department of Integrative and Computational Neuroscience (ICN), Paris-Saclay Institute of Neuroscience (NeuroPSI), Centre National de la Recherche Scientifique (CNRS), 91198 Gif-sur-Yvette, France
7
Mattia M, Biggio M, Galluzzi A, Storace M. Dimensional reduction in networks of non-Markovian spiking neurons: Equivalence of synaptic filtering and heterogeneous propagation delays. PLoS Comput Biol 2019; 15:e1007404. [PMID: 31593569] [PMCID: PMC6799936] [DOI: 10.1371/journal.pcbi.1007404]
Abstract
Message passing between components of a distributed physical system is non-instantaneous and contributes to determining the time scales of the emerging collective dynamics. In biological neuronal networks this is due in part to local synaptic filtering of exchanged spikes, and in part to the distribution of axonal transmission delays. How differently these two kinds of communication protocols affect the network dynamics is still an open issue, owing to the difficulty of dealing with the non-Markovian nature of synaptic transmission. Here, we develop a mean-field dimensional reduction yielding an effective Markovian dynamics for the population density of the neuronal membrane potential, valid under the hypothesis of small fluctuations of the synaptic current. Within this limit, the resulting theory allows us to prove the formal equivalence between the two transmission mechanisms, holding for any synaptic time scale, integrate-and-fire neuron model, and spike-emission regime, and for different network states even when the number of neurons is finite. The equivalence holds even for larger fluctuations of the synaptic input if white-noise currents are incorporated to model other possible biological features such as ionic-channel stochasticity.
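The claimed equivalence between synaptic filtering and heterogeneous transmission delays can be checked numerically in a toy setting: convolving a spike train with a normalized exponential synaptic kernel matches, in expectation, averaging many unfiltered copies of the train shifted by exponentially distributed delays. The sketch below illustrates that statement only; it is not the paper's mean-field derivation.

```python
# Compare (a) exponential synaptic filtering of one spike train with
# (b) averaging delta-synapse copies with exponentially distributed delays.
import numpy as np

rng = np.random.default_rng(4)
dt, tau_s, T = 0.1, 4.0, 2000           # ms, ms, time bins
spikes = (rng.random(T) < 0.05).astype(float)
# (a) normalized exponential kernel
k = np.exp(-np.arange(0.0, 10 * tau_s, dt) / tau_s)
k /= k.sum()
filtered = np.convolve(spikes, k)[:T]
# (b) many copies, each delayed by an Exp(tau_s)-distributed lag
n_copies = 4000
delayed = np.zeros(T)
for d in rng.exponential(tau_s, n_copies):
    shift = int(d / dt)
    if shift < T:
        delayed[shift:] += spikes[:T - shift]
delayed /= n_copies
rel_err = np.abs(filtered - delayed).mean() / filtered.mean()
```

rel_err shrinks as n_copies grows, which is the Monte Carlo analogue of the population-level equivalence proved in the paper.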
8
Schwalger T, Chizhov AV. Mind the last spike - firing rate models for mesoscopic populations of spiking neurons. Curr Opin Neurobiol 2019; 58:155-166. [PMID: 31590003] [DOI: 10.1016/j.conb.2019.08.003]
Abstract
The dominant modeling framework for understanding cortical computations is the heuristic firing rate model. Despite their success, such models fail to capture spike-synchronization effects, to link to biophysical parameters, and to describe finite-size fluctuations. In this opinion article, we propose that the refractory density method (RDM), also known as age-structured population dynamics or quasi-renewal theory, provides a powerful theoretical framework for building rate-based models of mesoscopic neural populations from realistic neuron dynamics at the microscopic level. We review recent advances achieved with the RDM in obtaining efficient population density equations for networks of generalized integrate-and-fire (GIF) neurons - a class of neuron models that has been successfully fitted to various cell types. The theory not only predicts the nonstationary dynamics of large populations of neurons but also permits an extension to finite-size populations and a systematic reduction to low-dimensional rate dynamics. These new types of rate models will allow a re-examination of models of cortical computations under biological constraints.
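A minimal numerical caricature of the refractory density idea may help: the population is represented by a density q(tau) over time since the last spike, neurons fire with an age-dependent hazard h(tau), and the firing mass re-enters at age zero. The hazard, grid and parameters below are toy choices, not a fitted GIF model.

```python
# Age-structured (refractory density) population update on a 1D grid.
import numpy as np

dt, n_bins = 0.5, 400                  # ms; ages 0 .. 200 ms
tau_age = np.arange(n_bins) * dt
h = np.where(tau_age < 5.0, 0.0, 0.05) # absolute refractoriness, then constant hazard (1/ms)
p_fire = 1.0 - np.exp(-h * dt)         # per-bin firing probability
q = np.zeros(n_bins)
q[0] = 1.0 / dt                        # all neurons have just spiked
for _ in range(2000):
    fired = q * p_fire                 # density mass firing this step
    A = fired.sum() * dt               # population activity (fraction per step)
    survivors = q - fired
    new_q = np.zeros_like(q)
    new_q[1:] = survivors[:-1]         # survivors age by one bin
    new_q[-1] += survivors[-1]         # lump ages beyond the grid
    new_q[0] = A / dt                  # fired mass re-enters at age zero
    q = new_q
pop_rate = A / dt                      # steady-state rate, spikes per ms
```

Mass is conserved exactly, and the population rate relaxes to roughly 1/(refractory period + 1/hazard), here about 0.04 spikes per ms.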
Affiliation(s)
- Tilo Schwalger
- Bernstein Center for Computational Neuroscience, 10115 Berlin, Germany; Institut für Mathematik, Technische Universität Berlin, 10623 Berlin, Germany.
- Anton V Chizhov
- Ioffe Institute, 194021 Saint-Petersburg, Russia; Sechenov Institute of Evolutionary Physiology and Biochemistry of the Russian Academy of Sciences, 194223 Saint-Petersburg, Russia
9
Conductance-Based Refractory Density Approach for a Population of Bursting Neurons. Bull Math Biol 2019; 81:4124-4143. [PMID: 31313084] [DOI: 10.1007/s11538-019-00643-8]
Abstract
The conductance-based refractory density (CBRD) approach is a parsimonious mathematical-computational framework for modelling interacting populations of regular spiking neurons which, however, has not yet been extended to populations of bursting neurons. The canonical CBRD method describes the firing activity of a statistical ensemble of uncoupled Hodgkin-Huxley-like neurons (differentiated by noise) and has been validated against experimental data. The present manuscript generalises the CBRD approach to a population of bursting neurons; in this pilot computational study, we consider the simplest setting, in which each individual neuron is governed by piecewise-linear bursting dynamics. The resulting population model makes use of slow-fast analysis, which leads to a novel methodology combining CBRD with the theory of multiple-timescale dynamics. The main prospect is that it opens novel avenues for mathematical exploration, as well as for deriving more sophisticated descriptions of population activity from Hodgkin-Huxley-like bursting neurons, which will make it possible to capture synchronised bursting activity in hyper-excitable brain states (e.g. the onset of epilepsy).
10
de Kamps M, Lepperød M, Lai YM. Computational geometry for modeling neural populations: From visualization to simulation. PLoS Comput Biol 2019; 15:e1006729. [PMID: 30830903] [PMCID: PMC6417745] [DOI: 10.1371/journal.pcbi.1006729]
Abstract
The importance of a mesoscopic description level of the brain is now well established. Rate-based models are widely used, but have limitations. Recently, several extremely efficient population-level methods have been proposed that go beyond characterizing a population in terms of a single variable. Here, we present a method for simulating neural populations based on two-dimensional (2D) point spiking neuron models that defines the state of the population in terms of a density function over the neural state space. Our method differs from these in that we make neither the diffusion approximation nor a reduction of the state space to a single dimension (1D). We do not hard-code the neural model, but read in a grid describing its state space in the relevant simulation region, so novel models can be studied without even recompiling the code. The method is highly modular: variations of the deterministic neural dynamics and of the stochastic process can be investigated independently. Currently, there is a trend toward reducing complex high-dimensional neuron models to 2D ones, as they offer a rich dynamical repertoire, such as limit cycles, that is not available in 1D. We demonstrate that our method is ideally suited to investigating noise in such systems, replicating results obtained in the diffusion limit and generalizing them to a regime of large jumps. The joint probability density function is much more informative than 1D marginals, and we argue that the study of 2D systems subject to noise is an important complement to that of 1D systems.
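The central object of this method, a joint density over a 2D neural state space driven by point-process input jumps, can be approximated crudely by Monte Carlo: evolve many copies of a 2D model with Poisson jump input and histogram their states. The 2D model and all numbers below are invented stand-ins for the grid-based solver described in the paper.

```python
# Monte Carlo estimate of a joint density over a 2D (v, w) state space
# for a linear 2D neuron model driven by finite-size Poisson jumps.
import numpy as np

rng = np.random.default_rng(5)
n, dt, steps = 5000, 0.1, 2000         # population copies, ms, steps
tau_v, tau_w, jump, rate_in = 10.0, 50.0, 0.05, 0.8   # rate_in: jumps/ms
v = np.zeros(n)
w = np.zeros(n)
for _ in range(steps):
    k = rng.poisson(rate_in * dt, n)   # discrete jumps, no diffusion approximation
    v += dt * (-v - w) / tau_v + jump * k
    w += dt * (v - w) / tau_w
density, v_edges, w_edges = np.histogram2d(v, w, bins=40, density=True)
```

For these parameters the stationary means are v = w = 0.2, and the estimated joint density integrates to one; the paper's grid method computes such densities deterministically and far more efficiently.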
Affiliation(s)
- Marc de Kamps
- Institute for Artificial and Biological Intelligence, University of Leeds, Leeds, West Yorkshire, United Kingdom
- Mikkel Lepperød
- Institute of Basic Medical Sciences, and Center for Integrative Neuroplasticity, University of Oslo, Oslo, Norway
- Yi Ming Lai
- Institute for Artificial and Biological Intelligence, University of Leeds, Leeds, West Yorkshire, United Kingdom; currently at the School of Mathematical Sciences, University of Nottingham, Nottingham, United Kingdom
11
An Efficient Population Density Method for Modeling Neural Networks with Synaptic Dynamics Manifesting Finite Relaxation Time and Short-Term Plasticity. eNeuro 2019; 5:eN-MNT-0002-18. [PMID: 30662939] [PMCID: PMC6336402] [DOI: 10.1523/eneuro.0002-18.2018]
Abstract
When more realistic synaptic dynamics are incorporated, the computational efficiency of population density methods (PDMs) declines sharply due to the increased dimension of the master equations. To avoid this decline, we develop an efficient PDM, termed the colored-synapse PDM (csPDM), in which the dimension of the master equations does not depend on the number of synapse-associated state variables in the underlying network model. Our goal is to allow the PDM to incorporate realistic synaptic dynamics possessing not only a finite relaxation time but also short-term plasticity (STP). The model equations of the csPDM are derived from the diffusion approximation of the synaptic dynamics and from probability density function methods for Langevin equations with colored noise. Numerical examples, given by simulations of the population dynamics of uncoupled exponential integrate-and-fire (EIF) neurons, show good agreement between the csPDM and Monte Carlo simulations (MCSs). Compared to the original full-dimensional PDM (fdPDM), the csPDM achieves far better computational efficiency because of the lower dimension of its master equations. In addition, it permits the network dynamics to exhibit the short-term plastic characteristics inherited from plastic synapses. The novel csPDM is potentially applicable to any spiking neuron model because it makes no assumptions about the neuronal dynamics and, more importantly, this is the first report of a PDM that successfully encompasses short-term facilitation/depression.
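The "colored noise" at the heart of the csPDM refers to synaptic variables with a finite relaxation time, i.e., Ornstein-Uhlenbeck rather than white-noise statistics. A minimal simulation of such a colored conductance, using the exact discrete-time OU update and invented parameters, looks like this:

```python
# Ornstein-Uhlenbeck ("colored") synaptic conductance with relaxation
# time tau_s, simulated with the exact one-step update.
import numpy as np

rng = np.random.default_rng(2)
dt, tau_s = 0.1, 5.0                       # ms
g_mean, g_std, T = 0.5, 0.1, 100000
decay = np.exp(-dt / tau_s)
kick = g_std * np.sqrt(1.0 - decay ** 2)
g = np.empty(T)
g[0] = g_mean
for t in range(T - 1):
    g[t + 1] = g_mean + (g[t] - g_mean) * decay + kick * rng.standard_normal()
lag = int(tau_s / dt)                      # autocorrelation at lag tau_s ~ 1/e
ac = np.corrcoef(g[:-lag], g[lag:])[0, 1]
```

Unlike white noise, this process has autocorrelation exp(-lag/tau_s), which is exactly the finite-relaxation-time feature the csPDM retains without enlarging the dimension of the master equations.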
12
Ly C, Marsat G. Variable synaptic strengths controls the firing rate distribution in feedforward neural networks. J Comput Neurosci 2017; 44:75-95. [DOI: 10.1007/s10827-017-0670-8]
13
Augustin M, Ladenbauer J, Baumann F, Obermayer K. Low-dimensional spike rate models derived from networks of adaptive integrate-and-fire neurons: Comparison and implementation. PLoS Comput Biol 2017. [PMID: 28644841] [PMCID: PMC5507472] [DOI: 10.1371/journal.pcbi.1005545]
Abstract
The spiking activity of single neurons can be well described by a nonlinear integrate-and-fire model that includes somatic adaptation. When exposed to fluctuating inputs, sparsely coupled populations of these model neurons exhibit stochastic collective dynamics that can be effectively characterized using the Fokker-Planck equation. This approach, however, leads to a model with an infinite-dimensional state space and non-standard boundary conditions. Here we derive from that description four simple models for the spike rate dynamics in terms of low-dimensional ordinary differential equations, using two different reduction techniques: one uses the spectral decomposition of the Fokker-Planck operator; the other is based on a cascade of two linear filters and a nonlinearity, which are determined from the Fokker-Planck equation and semi-analytically approximated. We evaluate the reduced models for a wide range of biologically plausible input statistics and find that both approximation approaches lead to spike rate models that accurately reproduce the spiking behavior of the underlying adaptive integrate-and-fire population. The cascade-based models are overall the most accurate and robust, especially in the sensitive region of rapidly changing input. For the mean-driven regime, however, when input fluctuations are not too strong and fast, the best-performing model is based on the spectral decomposition. The low-dimensional models also reproduce stable oscillatory spike rate dynamics generated either by recurrent synaptic excitation and neuronal adaptation or by delayed inhibitory synaptic feedback. The computational demands of the reduced models are very low, but the implementation complexity differs between the model variants.
Therefore, we have made available open-source implementations that allow the low-dimensional spike rate models, as well as the Fokker-Planck partial differential equation, to be integrated numerically in efficient ways for arbitrary model parametrizations. The derived spike rate descriptions retain a direct link to the properties of single neurons, allow for convenient mathematical analyses of network states, and are well suited for application in neural mass/mean-field based brain network models.
Characterizing the dynamics of biophysically modeled, large neuronal networks usually involves extensive numerical simulations. As an alternative to this expensive procedure we propose efficient models that describe the network activity in terms of a few ordinary differential equations. These systems are simple to solve and allow for convenient investigation of asynchronous, oscillatory or chaotic network states, because linear stability analyses and powerful related methods are readily applicable. We build upon two research lines on which substantial effort has been exerted in the last two decades: (i) the development of single-neuron models of reduced complexity that can accurately reproduce a large repertoire of observed neuronal behavior, and (ii) different approaches to approximating the Fokker-Planck equation that represents the collective dynamics of large neuronal networks. We combine these advances, extending recent approximation methods of the latter kind, to obtain spike rate models that reproduce the macroscopic dynamics of the underlying neuronal network surprisingly well, while the microscopic properties are retained through the single-neuron model parameters. To enable rapid adoption we have released an efficient Python implementation as open-source software under a free license.
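The cascade-type reduction mentioned above has a simple generic shape: pass the mean input through a linear filter, then through a static nonlinearity. The sketch below uses a single exponential filter and a softplus nonlinearity as stand-ins for the semi-analytically derived components of the paper (gains and time constants are invented).

```python
# Linear-nonlinear cascade rate model: exponential filter + softplus.
import numpy as np

dt, tau_f = 0.1, 5.0                       # ms
t_axis = np.arange(0.0, 200.0, dt)
mu = np.where(t_axis > 50.0, 2.0, 0.5)     # step in the mean input at t = 50 ms
x = np.empty_like(t_axis)
x[0] = mu[0]
for i in range(len(t_axis) - 1):           # linear filter stage
    x[i + 1] = x[i] + dt * (mu[i] - x[i]) / tau_f
rate = 10.0 * np.log1p(np.exp(x - 1.0))    # static nonlinearity -> rate (Hz)
```

The filter sets how fast the rate tracks input transients; in the paper both the filters and the nonlinearity are derived from the Fokker-Planck description rather than chosen by hand.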
Affiliation(s)
- Moritz Augustin
- Department of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Josef Ladenbauer
- Department of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany; Group for Neural Theory, Laboratoire de Neurosciences Cognitives, École Normale Supérieure, Paris, France
- Fabian Baumann
- Department of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Klaus Obermayer
- Department of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
14
When do correlations increase with firing rates in recurrent networks? PLoS Comput Biol 2017; 13:e1005506. [PMID: 28448499] [PMCID: PMC5426798] [DOI: 10.1371/journal.pcbi.1005506]
Abstract
A central question in neuroscience is to understand how noisy firing patterns are used to transmit information. Because neural spiking is noisy, spiking patterns are often quantified via pairwise correlations, or the probability that two cells will spike coincidentally, above and beyond their baseline firing rate. One observation frequently made in experiments is that correlations can increase systematically with firing rate. Theoretical studies have determined that stimulus-dependent correlations that increase with firing rate can have beneficial effects on information coding; however, we still have an incomplete understanding of which circuit mechanisms do, or do not, produce this correlation-firing rate relationship. Here, we studied the relationship between pairwise correlations and firing rates in recurrently coupled excitatory-inhibitory spiking networks with conductance-based synapses. We found that with stronger excitatory coupling, a positive relationship emerged between pairwise correlations and firing rates. To explain these findings, we used linear response theory to predict the full correlation matrix and to decompose correlations in terms of graph motifs. We then used this decomposition to explain why covariation of correlations with firing rate, a relationship previously explained in feedforward networks driven by correlated input, emerges in some recurrent networks but not in others. Furthermore, when correlations covary with firing rate, this relationship is reflected in low-rank structure in the correlation matrix.
Recent studies of a type of output cell in mouse retina found this relationship; furthermore, they determined that the increase of correlation with firing rate helped the cells encode information, provided the correlations were stimulus-dependent. Several theoretical studies have explored this basic structure and found that it is generally beneficial to modulate correlations in this way. However, aside from the mouse retinal cells referenced here, we do not yet have many examples of real neural circuits that show this correlation-firing rate pattern, so we do not know what common features (or mechanisms) might occur between them. In this study, we address this question via a computational model with features representative of a generic cortical network, to see whether correlations would increase with firing rate. To produce different firing patterns, we varied excitatory coupling. We found that with stronger excitatory coupling, there was a positive relationship between pairwise correlations and firing rates. We used a network linear response theory to show why correlations could increase with firing rates in some networks but not in others; this could be explained by how cells responded to fluctuations in inhibitory conductances.
|
15
|
Deniz T, Rotter S. Solving the two-dimensional Fokker-Planck equation for strongly correlated neurons. Phys Rev E 2017; 95:012412. [PMID: 28208505 DOI: 10.1103/physreve.95.012412] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2016] [Indexed: 06/06/2023]
Abstract
Pairs of neurons in brain networks often share much of the input they receive from other neurons. Due to essential nonlinearities of the neuronal dynamics, the consequences for the correlation of the output spike trains are generally not well understood. Here we analyze the case of two leaky integrate-and-fire neurons using an approach which is nonperturbative with respect to the degree of input correlation. Our treatment covers both weakly and strongly correlated dynamics, generalizing previous results based on linear response theory.
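The setting of this paper (two leaky integrate-and-fire neurons whose inputs are partially shared) is easy to reproduce numerically. The sketch below is a plain Euler-Maruyama Monte Carlo estimate of the spike-count correlation as a function of the shared-noise fraction c, not the authors' nonperturbative Fokker-Planck treatment; all parameter values (tau, mu, sigma, window size) are illustrative assumptions.

```python
import numpy as np

def pair_count_correlation(c, seed=0):
    """Monte Carlo spike-count correlation of two LIF neurons whose input
    noise has a shared fraction c (c=0: independent, c=1: identical).
    Units are milliseconds; parameters are illustrative, not from the paper."""
    rng = np.random.default_rng(seed)
    tau, mu, sigma = 20.0, 1.2, 0.4        # membrane time constant, drift, noise
    v_th, v_reset = 1.0, 0.0
    dt, steps, win = 0.5, 200_000, 200     # 100 s total, 100 ms count windows
    v = np.zeros(2)
    spikes = np.zeros((2, steps))
    for i in range(steps):
        xi_shared = rng.standard_normal()
        xi_private = rng.standard_normal(2)
        xi = np.sqrt(c) * xi_shared + np.sqrt(1.0 - c) * xi_private
        v += (mu - v) * dt / tau + sigma * np.sqrt(dt / tau) * xi
        fired = v >= v_th
        spikes[:, i] = fired
        v[fired] = v_reset
    counts = spikes.reshape(2, -1, win).sum(axis=2)
    return np.corrcoef(counts)[0, 1]
```

Increasing c raises the output spike-count correlation; the exact input-to-output correlation mapping, valid even for strong correlations, is what the paper computes analytically.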
Affiliation(s)
- Taşkın Deniz
- Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Hansastraße 9a, 79104 Freiburg, Germany
- Stefan Rotter
- Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Hansastraße 9a, 79104 Freiburg, Germany
|
16
|
Examining the limits of cellular adaptation bursting mechanisms in biologically-based excitatory networks of the hippocampus. J Comput Neurosci 2015; 39:289-309. [DOI: 10.1007/s10827-015-0577-1] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2015] [Revised: 09/08/2015] [Accepted: 09/10/2015] [Indexed: 01/21/2023]
|
17
|
Firing rate dynamics in recurrent spiking neural networks with intrinsic and network heterogeneity. J Comput Neurosci 2015; 39:311-27. [DOI: 10.1007/s10827-015-0578-0] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2015] [Revised: 07/06/2015] [Accepted: 09/23/2015] [Indexed: 11/25/2022]
|
18
|
|
19
|
Dumont G, Henry J, Tarniceriu CO. Well-posedness of a density model for a population of theta neurons. J Math Neurosci 2014; 4:2. [PMID: 24742324 DOI: 10.1186/2190-8567-4-2] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/15/2013] [Accepted: 04/17/2014] [Indexed: 06/03/2023]
Abstract
Population density models that describe the evolution of neural populations in a phase space are closely related to the underlying single-neuron model, which describes the individual trajectories of the neurons of the population and, in particular, determines the phase space where the computations are made. The so-called theta-neuron model is obtained from a transformation of the quadratic integrate-and-fire single-neuron model, and in this paper we introduce a corresponding population density model for it. Existence and uniqueness of a solution are proved and some numerical simulations are presented.
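The single-cell model underlying this density equation is the theta neuron, dθ/dt = (1 − cos θ) + (1 + cos θ)I, which emits a spike whenever θ crosses π. A minimal Euler sketch for constant input I (the values of I, T and dt are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def theta_spike_count(I, T=500.0, dt=0.01, theta0=-np.pi):
    """Integrate dtheta/dt = (1 - cos theta) + (1 + cos theta)*I and count
    crossings of theta = pi (the spike phase). For I > 0 the neuron fires
    periodically with frequency sqrt(I)/pi; for I < 0 it is excitable."""
    theta, spikes = theta0, 0
    for _ in range(int(T / dt)):
        theta += dt * ((1.0 - np.cos(theta)) + (1.0 + np.cos(theta)) * I)
        if theta >= np.pi:          # spike: wrap the phase back to -pi
            theta -= 2.0 * np.pi
            spikes += 1
    return spikes
```

For I = 0.25 the period is π/√I ≈ 6.28, so roughly 79 spikes are expected over T = 500; for I < 0 the phase settles at a stable fixed point and no spikes occur.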
|
20
|
Schaffer ES, Ostojic S, Abbott LF. A complex-valued firing-rate model that approximates the dynamics of spiking networks. PLoS Comput Biol 2013; 9:e1003301. [PMID: 24204236 PMCID: PMC3814717 DOI: 10.1371/journal.pcbi.1003301] [Citation(s) in RCA: 41] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2013] [Accepted: 09/11/2013] [Indexed: 11/18/2022] Open
Abstract
Firing-rate models provide an attractive approach for studying large neural networks because they can be simulated rapidly and are amenable to mathematical analysis. Traditional firing-rate models assume a simple form in which the dynamics are governed by a single time constant. These models fail to replicate certain dynamic features of populations of spiking neurons, especially those involving synchronization. We present a complex-valued firing-rate model derived from an eigenfunction expansion of the Fokker-Planck equation and apply it to the linear, quadratic and exponential integrate-and-fire models. Despite being almost as simple as a traditional firing-rate description, this model can reproduce firing-rate dynamics due to partial synchronization of the action potentials in a spiking model, and it successfully predicts the transition to spike synchronization in networks of coupled excitatory and inhibitory neurons.
Affiliation(s)
- Evan S. Schaffer
- Department of Neuroscience, Department of Physiology and Cellular Biophysics, Columbia University College of Physicians and Surgeons, New York, New York, United States of America
- Srdjan Ostojic
- Department of Neuroscience, Department of Physiology and Cellular Biophysics, Columbia University College of Physicians and Surgeons, New York, New York, United States of America
- Group for Neural Theory, Laboratoire de Neurosciences Cognitives, INSERM U960, Ecole Normale Superieure, Paris, France
- L. F. Abbott
- Department of Neuroscience, Department of Physiology and Cellular Biophysics, Columbia University College of Physicians and Surgeons, New York, New York, United States of America
|
21
|
|
22
|
Ly C. A principled dimension-reduction method for the population density approach to modeling networks of neurons with synaptic dynamics. Neural Comput 2013; 25:2682-708. [PMID: 23777517 DOI: 10.1162/neco_a_00489] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
The population density approach to neural network modeling has been utilized in a variety of contexts. The idea is to group many similar noisy neurons into populations and to track, for each population, the probability density function that encompasses the proportion of neurons in a particular state, rather than simulating individual neurons (i.e., Monte Carlo). It is commonly used both for analytic insight and as a time-saving computational tool. The main shortcoming of this method is that when realistic attributes are incorporated in the underlying neuron model, the dimension of the probability density function increases, leading to intractable equations or, at best, computationally intensive simulations. Thus, developing principled dimension-reduction methods is essential for the robustness of these powerful approaches; a more pragmatic, dimension-reduced tool would also be of great value to the larger theoretical neuroscience community. For exposition of this method, we consider a single uncoupled population of leaky integrate-and-fire neurons receiving external excitatory synaptic input only. We present a dimension-reduction method that reduces a two-dimensional partial differential-integral equation to a computationally efficient one-dimensional system and gives qualitatively accurate results in both the steady-state and nonequilibrium regimes. The method, termed the modified mean-field method, is based entirely on the governing equations and not on any auxiliary variables or parameters, and it does not require fine-tuning. The principles of the modified mean-field method have potential applicability to more realistic (i.e., higher-dimensional) neural networks.
Affiliation(s)
- Cheng Ly
- Department of Statistical Sciences and Operations Research, Virginia Commonwealth University, Richmond, VA 23284-3083, USA.
|
23
|
Bifurcations of large networks of two-dimensional integrate and fire neurons. J Comput Neurosci 2013; 35:87-108. [PMID: 23430291 DOI: 10.1007/s10827-013-0442-z] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2012] [Revised: 11/29/2012] [Accepted: 01/17/2013] [Indexed: 12/25/2022]
Abstract
Recently, a class of two-dimensional integrate and fire models has been used to faithfully model spiking neurons. This class includes the Izhikevich model, the adaptive exponential integrate and fire model, and the quartic integrate and fire model. The bifurcation types for the individual neurons have been thoroughly analyzed by Touboul (SIAM J Appl Math 68(4):1045-1079, 2008). However, when the models are coupled together to form networks, the networks can display bifurcations that an uncoupled oscillator cannot. For example, the networks can transition from firing with a constant rate to burst firing. This paper introduces a technique to reduce a full network of this class of neurons to a mean field model, in the form of a system of switching ordinary differential equations. The reduction uses population density methods and a quasi-steady state approximation to arrive at the mean field system. Reduced models are derived for networks with different topologies and different model neurons with biologically derived parameters. The mean field equations are able to qualitatively and quantitatively describe the bifurcations that the full networks display. Extensions and higher order approximations are discussed.
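One member of the two-dimensional integrate-and-fire class discussed here is the Izhikevich model. The mean-field reduction itself is beyond a snippet, but a minimal Euler integration of a single unit (standard regular-spiking parameters; the constant input I is an illustrative assumption) shows the voltage-plus-adaptation structure that the network reductions start from:

```python
def izhikevich_spike_times(I, T=1000.0, dt=0.1, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Izhikevich two-variable integrate-and-fire neuron: quadratic voltage v,
    slow adaptation u; spike cutoff at v = 30 mV, reset v -> c, u -> u + d.
    Returns spike times (ms) for a constant input current I."""
    v, u = -65.0, b * (-65.0)
    spikes = []
    for i in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:               # spike detected: reset with adaptation jump
            spikes.append(i * dt)
            v, u = c, u + d
    return spikes
```

With I = 0 the unit rests at its stable fixed point near v = -70 mV; a sustained I = 10 produces repetitive firing. The mean-field equations in the paper track population-averaged versions of exactly these (v, u) dynamics.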
|
24
|
Deville REL, Peskin CS. Synchrony and asynchrony for neuronal dynamics defined on complex networks. Bull Math Biol 2011; 74:769-802. [PMID: 21755391 DOI: 10.1007/s11538-011-9674-0] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2009] [Accepted: 06/14/2011] [Indexed: 11/29/2022]
Abstract
We describe and analyze a model for a stochastic pulse-coupled neuronal network with many sources of randomness: random external input, potential synaptic failure, and random connectivity topologies. We show that different classes of network topologies give rise to qualitatively different types of synchrony: uniform (Erdős-Rényi) and "small-world" networks give rise to synchronization phenomena similar to that in "all-to-all" networks (in which there is a sharp onset of synchrony as coupling is increased); in contrast, in "scale-free" networks the dependence of synchrony on coupling strength is smoother. Moreover, we show that in the uniform and small-world cases, the fine details of the network are not important in determining the synchronization properties; this depends only on the mean connectivity. In contrast, for scale-free networks, the dynamics are significantly affected by the fine details of the network; in particular, they are significantly affected by the local neighborhoods of the "hubs" in the network.
|
25
|
Moreno-Bote R, Parga N. Response of Integrate-and-Fire Neurons to Noisy Inputs Filtered by Synapses with Arbitrary Timescales: Firing Rate and Correlations. Neural Comput 2010; 22:1528-72. [DOI: 10.1162/neco.2010.06-09-1036] [Citation(s) in RCA: 34] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Delivery of neurotransmitter at a synapse produces a current that flows through the membrane and is transmitted into the soma of the neuron, where it is integrated. The decay time of the current depends on the type of synaptic receptor and ranges from a few milliseconds (e.g., AMPA receptors) to a few hundred milliseconds (e.g., NMDA receptors). The role of the variety of synaptic timescales, several of them coexisting in the same neuron, is at present not understood. A prime question to answer is what effect temporal filtering of the incoming spike trains at different timescales has on the neuron's response. Here, based on our previous work on linear synaptic filtering, we build a general theory for the stationary firing response of integrate-and-fire (IF) neurons receiving stochastic inputs filtered by one, two, or multiple synaptic channels, each characterized by an arbitrary timescale. The formalism applies to arbitrary IF model neurons and arbitrary forms of input noise (i.e., not required to be Gaussian or to have small amplitude), as well as to any form of synaptic filtering (linear or nonlinear). The theory determines with exact analytical expressions the firing rate of an IF neuron for long synaptic time constants using the adiabatic approach. The correlated spiking (cross-correlation function) of two neurons receiving common as well as independent sources of noise is also described. The theory is illustrated using leaky, quadratic, and noise-thresholded IF neurons. Although the adiabatic approach is exact when at least one of the synaptic timescales is long, it provides a good prediction of the firing rate even when the timescales of the synapses are comparable to that of the leak of the neuron; it is not required that the synaptic time constants are longer than the mean interspike intervals or that the noise has small variance. The distribution of the potential for general IF neurons is also characterized.
Our results provide powerful analytical tools that can allow a quantitative description of the dynamics of neuronal networks with realistic synaptic dynamics.
Affiliation(s)
- Rubén Moreno-Bote
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, New York, 14627, U.S.A., and Departamento de Física Teórica, Universidad Autónoma de Madrid, Cantoblanco 28049, Madrid, Spain
- Néstor Parga
- Departamento de Física Teórica, Universidad Autónoma de Madrid, Cantoblanco 28049, Madrid, Spain
|
26
|
Coombes S. Large-scale neural dynamics: simple and complex. Neuroimage 2010; 52:731-9. [PMID: 20096791 DOI: 10.1016/j.neuroimage.2010.01.045] [Citation(s) in RCA: 89] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2009] [Revised: 12/23/2009] [Accepted: 01/13/2010] [Indexed: 11/24/2022] Open
Abstract
We review the use of neural field models for modelling the brain at the large scales necessary for interpreting EEG, fMRI, MEG and optical imaging data. Although limited to coarse-grained or mean-field activity, neural field models provide a framework for unifying data from different imaging modalities. Starting with a description of neural mass models, we build up to spatially extended cortical models of layered two-dimensional sheets with long-range axonal connections mediating synaptic interactions. Reformulations of the fundamental non-local mathematical model in terms of more familiar local differential (brain wave) equations are described. Techniques for the analysis of such models, including how to determine the onset of spatio-temporal pattern-forming instabilities, are reviewed. Extensions of the basic formalism to treat refractoriness, adaptive feedback and inhomogeneous connectivity are described, along with open challenges for the development of multi-scale models that can integrate macroscopic models at large spatial scales with models at the microscopic scale.
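The basic object reviewed here is a neural field: a non-local evolution equation of the form ∂u/∂t = −u + w ⊛ f(u), with connectivity kernel w and firing-rate function f. A minimal 1D sketch on a periodic domain, with a Mexican-hat kernel and sigmoidal rate (all parameter values are illustrative assumptions; serious work would use the stability analyses the review describes):

```python
import numpy as np

def neural_field_1d(T=30.0, dt=0.05, n=256, L=40.0, seed=3):
    """Euler integration of du/dt = -u + w * f(u) on a periodic 1D domain;
    the spatial convolution with the kernel w is evaluated with the FFT."""
    rng = np.random.default_rng(seed)
    x = (np.arange(n) - n // 2) * (L / n)          # grid centered at 0
    # Mexican-hat connectivity: narrow excitation, broader inhibition
    w = 1.5 * np.exp(-x**2) - 0.75 * np.exp(-x**2 / 9.0)
    w_hat = np.fft.fft(np.fft.ifftshift(w)) * (L / n)   # kernel transform, dx scaling
    f = lambda u: 1.0 / (1.0 + np.exp(-5.0 * (u - 0.5)))  # sigmoid firing rate
    u = 0.1 * rng.standard_normal(n)               # small random initial data
    for _ in range(int(T / dt)):
        conv = np.real(np.fft.ifft(w_hat * np.fft.fft(f(u))))
        u += dt * (-u + conv)
    return x, u
```

Because f is bounded and the kernel is integrable, the field stays bounded; varying the kernel amplitudes and the sigmoid gain is how one probes the pattern-forming instabilities discussed in the review.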
Affiliation(s)
- S Coombes
- School of Mathematical Sciences, University of Nottingham, Nottingham, NG7 2RD, UK.
|
27
|
Ly C, Tranchina D. Spike train statistics and dynamics with synaptic input from any renewal process: a population density approach. Neural Comput 2009; 21:360-96. [PMID: 19431264 DOI: 10.1162/neco.2008.03-08-743] [Citation(s) in RCA: 29] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
In the probability density function (PDF) approach to neural network modeling, a common simplifying assumption is that the arrival times of elementary postsynaptic events are governed by a Poisson process. This assumption ignores temporal correlations in the input that sometimes have important physiological consequences. We extend PDF methods to models with synaptic event times governed by any modulated renewal process. We focus on the integrate-and-fire neuron with instantaneous synaptic kinetics and a random elementary excitatory postsynaptic potential (EPSP), A. Between presynaptic events, the membrane voltage, v, decays exponentially toward rest, while s, the time since the last synaptic input event, evolves with unit velocity. When a synaptic event arrives, v jumps by A, and s is reset to zero. If v crosses the threshold voltage, an action potential occurs, and v is reset to v_reset. The probability per unit time of a synaptic event at time t, given the elapsed time s since the last event, h(s, t), depends on specifics of the renewal process. We study how regularity of the train of synaptic input events affects output spike rate, the PDF and coefficient of variation (CV) of the interspike interval, and the autocorrelation function of the output spike train. In the limit of a deterministic, clocklike train of input events, the PDF of the interspike interval converges to a sum of delta functions, with coefficients determined by the PDF for A. The limiting autocorrelation function of the output spike train is a sum of delta functions whose coefficients fall under a damped oscillatory envelope. When the EPSP CV, σ_A/μ_A, is equal to 0.45, a CV for the intersynaptic event interval, σ_T/μ_T = 0.35, is functionally equivalent to a deterministic periodic train of synaptic input events (CV = 0) with respect to spike statistics. We discuss the relevance to neural network simulations.
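Because the synaptic events form a renewal process, the model described above is easy to simulate event by event: decay v exponentially between events, jump by A, reset on threshold crossing. The sketch below uses a gamma renewal process, whose input-interval CV is 1/√shape (shape = 1 recovers Poisson); the EPSP size is fixed rather than random, and all parameter values are illustrative assumptions.

```python
import numpy as np

def output_isi_cv(shape, rate=0.5, A=0.12, tau=20.0, v_th=1.0,
                  v_reset=0.0, n_events=100_000, seed=1):
    """LIF neuron with instantaneous EPSPs of size A driven by a gamma
    renewal process (mean interval 1/rate ms, interval CV 1/sqrt(shape));
    returns the coefficient of variation of the output interspike interval."""
    rng = np.random.default_rng(seed)
    intervals = rng.gamma(shape, 1.0 / (shape * rate), n_events)
    event_times = np.cumsum(intervals)
    v, t_prev, spike_times = 0.0, 0.0, []
    for t in event_times:
        v *= np.exp(-(t - t_prev) / tau)   # exponential leak between events
        v += A                             # EPSP jump
        if v >= v_th:                      # threshold crossing: spike and reset
            spike_times.append(t)
            v = v_reset
        t_prev = t
    isi = np.diff(spike_times)
    return isi.std() / isi.mean()
```

Making the input train more regular (larger shape) lowers the output ISI variability, which is the kind of input-regularity effect the paper quantifies analytically.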
Affiliation(s)
- Cheng Ly
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA 15260, USA.
|
28
|
Ly C, Doiron B. Divisive gain modulation with dynamic stimuli in integrate-and-fire neurons. PLoS Comput Biol 2009; 5:e1000365. [PMID: 19390603 PMCID: PMC2667215 DOI: 10.1371/journal.pcbi.1000365] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2008] [Accepted: 03/18/2009] [Indexed: 11/18/2022] Open
Abstract
The modulation of the sensitivity, or gain, of neural responses to input is an important component of neural computation. It has been shown that divisive gain modulation of neural responses can result from a stochastic shunting from balanced (mixed excitation and inhibition) background activity. This gain control scheme was developed and explored with static inputs, where the membrane and spike train statistics were stationary in time. However, input statistics, such as the firing rates of pre-synaptic neurons, are often dynamic, varying on timescales comparable to typical membrane time constants. Using a population density approach for integrate-and-fire neurons with dynamic and temporally rich inputs, we find that the same fluctuation-induced divisive gain modulation is operative for dynamic inputs driving nonequilibrium responses. Moreover, the degree of divisive scaling of the dynamic response is quantitatively the same as for the steady-state responses—thus, gain modulation via balanced conductance fluctuations generalizes in a straightforward way to a dynamic setting.
Many neural computations, including sensory and motor processing, require neurons to control their sensitivity (often termed 'gain') to stimuli. One common form of gain manipulation is divisive gain control, where the neural response to a specific stimulus is simply scaled by a constant. Most previous theoretical and experimental work on divisive gain control has assumed input statistics to be constant in time. However, realistic inputs can be highly time-varying, often with time-varying statistics, and divisive gain control remains to be extended to these cases. A widespread mechanism for divisive gain control with static inputs is an increase in stimulus-independent membrane fluctuations. We address the question of whether this divisive gain control scheme is indeed operative for time-varying inputs.
Using simplified spiking neuron models, we employ accurate theoretical methods to estimate the dynamic neural response. We find that gain control via membrane fluctuations does indeed extend to the time-varying regime, and moreover, the degree of divisive scaling does not depend on the timescales of the driving input. This significantly increases the relevance of this form of divisive gain control for neural computations where input statistics change in time, as expected during normal sensory and motor behavior.
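The core ingredient, background fluctuations changing how the firing rate responds to the mean drive, can be illustrated with a single LIF transfer function under different noise levels. This is only a sketch of the phenomenon, not the population density machinery the paper actually uses, and the parameter values are illustrative assumptions.

```python
import numpy as np

def lif_firing_rate(mu, sigma, tau=20.0, v_th=1.0, v_reset=0.0,
                    dt=0.2, T=50_000.0, seed=4):
    """Monte Carlo firing rate (spikes/ms) of a leaky integrate-and-fire
    neuron with mean drive mu and white-noise amplitude sigma."""
    rng = np.random.default_rng(seed)
    v, n_spikes = 0.0, 0
    sq = np.sqrt(dt / tau)
    for _ in range(int(T / dt)):
        v += (mu - v) * dt / tau + sigma * sq * rng.standard_normal()
        if v >= v_th:                    # threshold crossing: spike and reset
            v, n_spikes = v_reset, n_spikes + 1
    return n_spikes / T
```

A subthreshold drive (mu below threshold) produces no spikes without noise but a nonzero rate with noise: sweeping mu at several sigma values traces out how fluctuations reshape, and can divisively scale, the transfer function.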
Affiliation(s)
- Cheng Ly
- Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Brent Doiron
- Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
|
29
|
Marpeau F, Barua A, Josić K. A finite volume method for stochastic integrate-and-fire models. J Comput Neurosci 2008; 26:445-57. [PMID: 19067147 DOI: 10.1007/s10827-008-0121-7] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2008] [Revised: 09/26/2008] [Accepted: 10/24/2008] [Indexed: 11/30/2022]
Abstract
The stochastic integrate and fire neuron is one of the most commonly used stochastic models in neuroscience. Although some cases are analytically tractable, a full analysis typically calls for numerical simulations. We present a fast and accurate finite volume method to approximate the solution of the associated Fokker-Planck equation. The discretization of the boundary conditions offers a particular challenge, as standard operator splitting approaches cannot be applied without modification. We demonstrate the method using stationary and time dependent inputs, and compare them with Monte Carlo simulations. Such simulations are relatively easy to implement, but can suffer from convergence difficulties and long run times. In comparison, our method offers improved accuracy, and decreases computation times by several orders of magnitude. The method can easily be extended to two and three dimensional Fokker-Planck equations.
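A bare-bones version of such a scheme for the LIF Fokker-Planck equation: finite-volume cells in v, upwind drift flux, centered diffusion, and an absorbing face at threshold whose outgoing flux (the instantaneous firing rate) is reinjected at the reset potential. This is a sketch of the idea only, not the authors' operator-splitting scheme; the parameter values and boundary handling are simplified assumptions.

```python
import numpy as np

def fp_lif(mu=1.2, sigma=0.5, tau=20.0, v_th=1.0, v_reset=0.0,
           v_min=-1.0, nv=100, dt=0.01, T=300.0):
    """Explicit finite-volume integration of the LIF Fokker-Planck equation;
    returns (firing rate per ms, density P, cell centers v, cell width dv)."""
    dv = (v_th - v_min) / nv
    v = v_min + (np.arange(nv) + 0.5) * dv          # cell centers
    reset_cell = int(round((v_reset - v_min) / dv))
    P = np.zeros(nv)
    P[reset_cell] = 1.0 / dv                        # all mass at reset initially
    D = sigma**2 / (2.0 * tau)                      # diffusion coefficient
    faces = v_min + np.arange(1, nv) * dv           # interior face positions
    a_face = (mu - faces) / tau                     # drift evaluated at faces
    rate = 0.0
    for _ in range(int(T / dt)):
        F = np.zeros(nv + 1)                        # flux at the nv+1 faces
        # upwind advection plus centered diffusion on interior faces
        F[1:nv] = np.where(a_face > 0, a_face * P[:-1], a_face * P[1:])
        F[1:nv] -= D * (P[1:] - P[:-1]) / dv
        # absorbing threshold face: outgoing flux is the population firing rate
        F[nv] = max((mu - v_th) / tau, 0.0) * P[-1] + D * P[-1] / dv
        rate = F[nv]
        P -= dt * np.diff(F) / dv                   # conservative update
        P[reset_cell] += dt * rate / dv             # reinject fired mass at reset
    return rate, P, v, dv
```

The conservative flux form keeps total probability at one by construction; the explicit diffusion step requires dt ≲ dv²/(2D), which is the kind of restriction the paper's more careful treatment of the split operators avoids.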
Affiliation(s)
- Fabien Marpeau
- Department of Mathematics, University of Houston, Houston, TX 77204-3008, USA.
|
30
|
Ly C, Ermentrout GB. Synchronization dynamics of two coupled neural oscillators receiving shared and unshared noisy stimuli. J Comput Neurosci 2008; 26:425-43. [PMID: 19034640 DOI: 10.1007/s10827-008-0120-8] [Citation(s) in RCA: 44] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2008] [Revised: 09/23/2008] [Accepted: 10/23/2008] [Indexed: 11/27/2022]
Abstract
The response of neurons to external stimuli greatly depends on the intrinsic dynamics of the network. Here, the intrinsic dynamics are modeled as coupling and the external input is modeled as shared and unshared noise. We assume the neurons are repetitively firing action potentials (i.e., neural oscillators), are weakly and identically coupled, and the external noise is weak. Shared noise can induce bistability between the synchronous and anti-phase states even though the anti-phase state is the only stable state in the absence of noise. We study the Fokker-Planck equation of the system and perform an asymptotic reduction, ρ0. The ρ0 solution is more computationally efficient than both the Monte Carlo simulations and the 2D Fokker-Planck solver, and agrees remarkably well with the full system with weak noise and weak coupling. With moderate noise and coupling, ρ0 is still qualitatively correct despite the small-noise and small-coupling assumption in the asymptotic reduction. Our phase model accurately predicts the behavior of a realistic synaptically coupled Morris-Lecar system.
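The phase-reduction setting can be sketched directly: oscillators dθ = ω dt + Z(θ) dW with phase response curve Z(θ) = sin θ, where a fraction c of the noise is shared between the two cells. With fully shared noise the pairwise phase dispersion collapses (noise-induced synchrony); with independent noise it does not. This omits the coupling term and the asymptotic reduction of the paper, and all parameter values (including the choice of PRC) are illustrative assumptions.

```python
import numpy as np

def phase_dispersion(c, n_pairs=100, T=2000.0, dt=0.05,
                     omega=1.0, sigma=0.3, seed=2):
    """Pairs of uncoupled phase oscillators, dtheta = omega dt + Z(theta) dW,
    with PRC Z(theta) = sin(theta) and a fraction c of the noise shared.
    Returns mean pairwise phase dispersion (0 = synchronous) before and after."""
    rng = np.random.default_rng(seed)
    th = rng.uniform(0.0, 2.0 * np.pi, (n_pairs, 2))
    dispersion = lambda a: np.abs(np.sin((a[:, 0] - a[:, 1]) / 2.0)).mean()
    d0 = dispersion(th)
    for _ in range(int(T / dt)):
        shared = rng.standard_normal((n_pairs, 1))
        private = rng.standard_normal((n_pairs, 2))
        dW = (np.sqrt(c) * shared + np.sqrt(1.0 - c) * private) * np.sqrt(dt)
        th += omega * dt + sigma * np.sin(th) * dW
    return d0, dispersion(th)
```

Adding a coupling term whose stable state is anti-phase is what produces the noise-induced bistability analyzed in the paper.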
Affiliation(s)
- Cheng Ly
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA 15260, USA.
|
31
|
Liu CY, Nykamp DQ. A kinetic theory approach to capturing interneuronal correlation: the feed-forward case. J Comput Neurosci 2008; 26:339-68. [PMID: 18987967 DOI: 10.1007/s10827-008-0116-4] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2008] [Revised: 09/19/2008] [Accepted: 09/24/2008] [Indexed: 11/30/2022]
Abstract
We present an approach for using kinetic theory to capture first and second order statistics of neuronal activity. We coarse grain neuronal networks into populations of neurons and calculate the population average firing rate and output cross-correlation in response to time varying correlated input. We derive coupling equations for the populations based on first and second order statistics of the network connectivity. This coupling scheme is based on the hypothesis that second order statistics of the network connectivity are sufficient to determine second order statistics of neuronal activity. We implement a kinetic theory representation of a simple feed-forward network and demonstrate that the kinetic theory model captures key aspects of the emergence and propagation of correlations in the network, as long as the correlations do not become too strong. By analyzing the correlated activity of feed-forward networks with a variety of connectivity patterns, we provide evidence supporting our hypothesis of the sufficiency of second order connectivity statistics.
Affiliation(s)
- Chin-Yueh Liu
- School of Mathematics, University of Minnesota, 206 Church St., Minneapolis, MN 55455, USA
|
32
|
de Kamps M, Baier V, Drever J, Dietz M, Mösenlechner L, van der Velde F. The state of MIIND. Neural Netw 2008; 21:1164-81. [PMID: 18783918 DOI: 10.1016/j.neunet.2008.07.006] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2007] [Revised: 06/09/2008] [Accepted: 07/28/2008] [Indexed: 10/21/2022]
Abstract
MIIND (Multiple Interacting Instantiations of Neural Dynamics) is a highly modular multi-level C++ framework that aims to shorten the development time for models in Cognitive Neuroscience (CNS). It offers reusable code modules (libraries of classes and functions) aimed at solving problems that occur repeatedly in modelling, but tries not to impose a specific modelling philosophy or methodology. At the lowest level, it offers support for the implementation of sparse networks. For example, the library SparseImplementationLib supports sparse random networks and the library LayerMappingLib can be used for sparse regular networks of filter-like operators. The library DynamicLib, which builds on top of the library SparseImplementationLib, offers a generic framework for simulating network processes. Presently, several specific network process implementations are provided in MIIND: the Wilson-Cowan and Ornstein-Uhlenbeck type, and population density techniques for leaky-integrate-and-fire neurons driven by Poisson input. A design principle of MIIND is to support detailing: the refinement of an originally simple model into a form where more biological detail is included. Another design principle is extensibility: the reuse of an existing model in a larger, more extended one. One of the main uses of MIIND so far has been the instantiation of neural models of visual attention. Recently, we have added a library for implementing biologically inspired models of artificial vision, such as HMAX and recent successors. In the long run we hope to be able to apply suitably adapted neuronal mechanisms of attention to these artificial models.
Affiliation(s)
- Marc de Kamps
- Biosystems Group, School of Computing, University of Leeds, LS2 9JT Leeds, United Kingdom.
|
33
|
Synchrony and Asynchrony in a Fully Stochastic Neural Network. Bull Math Biol 2008; 70:1608-33. [DOI: 10.1007/s11538-008-9311-8] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2007] [Accepted: 02/12/2008] [Indexed: 10/22/2022]
|
34
|
Köndgen H, Geisler C, Fusi S, Wang XJ, Lüscher HR, Giugliano M. The dynamical response properties of neocortical neurons to temporally modulated noisy inputs in vitro. Cereb Cortex 2008; 18:2086-97. [PMID: 18263893 DOI: 10.1093/cercor/bhm235] [Citation(s) in RCA: 79] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
Cortical neurons are often classified by their current-frequency relationship. Such a static description is inadequate to interpret neuronal responses to time-varying stimuli. Theoretical studies suggested that single-cell dynamical response properties are necessary to interpret ensemble responses to fast input transients. Further, it was shown that input noise linearizes the response and boosts its bandwidth, and that the interplay between the barrage of noisy synaptic currents and the spike-initiation mechanisms determines the dynamical properties of the firing rate. To test these model predictions, we estimated the linear response properties of layer 5 pyramidal cells by injecting a superposition of a small-amplitude sinusoidal wave and a background noise. We characterized the evoked firing probability across many stimulation trials and a range of oscillation frequencies (1-1000 Hz), quantifying response amplitude and phase shift while changing the noise statistics. We found that neurons track unexpectedly fast transients, as their response amplitude shows no attenuation up to 200 Hz. This cut-off frequency is higher than the limits set by passive membrane properties (approximately 50 Hz) and average firing rate (approximately 20 Hz), and is not affected by the rate of change of the input. Finally, above 200 Hz, the response amplitude decays as a power law with an exponent that is independent of the voltage fluctuations induced by the background noise.
Collapse
Affiliation(s)
- Harold Köndgen
- Department of Physiology, University of Bern, Bern CH-3012, Switzerland
|
35
|
Chizhov AV, Graham LJ. Efficient evaluation of neuron populations receiving colored-noise current based on a refractory density method. Phys Rev E Stat Nonlin Soft Matter Phys 2008; 77:011910. [PMID: 18351879 DOI: 10.1103/physreve.77.011910] [Citation(s) in RCA: 34] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/05/2007] [Revised: 10/20/2007] [Indexed: 05/26/2023]
Abstract
The expected firing probability of a stochastic neuron is approximated by a function of the expected subthreshold membrane potential for the case of colored noise. We propose this approximation in order to extend the recently proposed white-noise model [A. V. Chizhov and L. J. Graham, Phys. Rev. E 75, 011924 (2007)] to the case of colored noise, applying a refractory density approach to conductance-based neurons. The uncoupled neurons of a single population receive a common input and are dispersed by the noise. Within the framework of the model, the effect of noise is expressed by the so-called hazard function, which is the probability density for a single neuron to fire given the average membrane potential in the presence of a noise term. To derive the hazard function, we solve the Kolmogorov-Fokker-Planck equation for a mean-voltage-driven neuron fluctuating due to colored noise current. We show that the sum of a self-similar solution for slowly changing mean voltage and a frozen stationary solution for rapidly changing mean voltage gives a satisfactory approximation of the hazard function in the arbitrary case. We demonstrate the quantitative effect of the temporal correlation of noisy input on neuron dynamics for leaky integrate-and-fire and detailed conductance-based neurons responding to an injected current step.
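The hazard-function idea admits a minimal sketch: given the mean membrane potential of the population and the noise-induced dispersion around it, the instantaneous escape rate follows from the Gaussian mass above threshold. The function below is a generic Gaussian-escape approximation under assumed parameter names (`V_th`, `sigma`, `tau`); it is not the published Chizhov-Graham hazard function, which combines self-similar and frozen stationary solutions.

```python
from math import erfc, sqrt

def frozen_hazard(U, V_th, sigma, tau):
    # Escape rate (per unit time) of a neuron whose membrane potential is
    # Gaussian with mean U and standard deviation sigma around the mean
    # trajectory; V_th is the firing threshold and tau the time scale of
    # an escape attempt. Generic Gaussian-escape sketch only.
    return 0.5 * erfc((V_th - U) / (sqrt(2.0) * sigma)) / tau

# At U = V_th half the Gaussian mass sits above threshold, so the rate
# equals 0.5/tau; the rate grows monotonically as U approaches threshold.
h_at_threshold = frozen_hazard(-50.0, -50.0, 2.0, 0.01)   # 0.5 / 0.01 = 50.0
```

The "frozen" label matches the abstract's fast-changing-mean-voltage limit, where the voltage distribution has no time to relax and the firing probability is read off the instantaneous Gaussian profile.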
Affiliation(s)
- Anton V Chizhov
- A.F. Ioffe Physico-Technical Institute of RAS, 26 Politekhnicheskaya Street, 194021 St. Petersburg, Russia.
|
36
|
Ly C, Tranchina D. Critical analysis of dimension reduction by a moment closure method in a population density approach to neural network modeling. Neural Comput 2007; 19:2032-92. [PMID: 17571938 DOI: 10.1162/neco.2007.19.8.2032] [Citation(s) in RCA: 68] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Computational techniques within the population density function (PDF) framework have provided time-saving alternatives to classical Monte Carlo simulations of neural network activity. The efficiency of the PDF method is lost as the underlying neuron model is made more realistic and the number of state variables increases. In a detailed theoretical and computational study, we elucidate strengths and weaknesses of dimension reduction by a particular moment closure method (Cai, Tao, Shelley, & McLaughlin, 2004; Cai, Tao, Rangan, & McLaughlin, 2006) as applied to integrate-and-fire neurons that receive excitatory synaptic input only. When the unitary postsynaptic conductance event has a single-exponential time course, the evolution equation for the PDF is a partial integro-differential equation in two state variables, voltage and excitatory conductance. In the moment closure method, one approximates the conditional kth centered moment of excitatory conductance given voltage by the corresponding unconditioned moment. The result is a system of k coupled partial differential equations in one state variable, voltage, together with k coupled ordinary differential equations. Moment closure at k = 2 works well, and at k = 3 even better, in the regime of high, dynamically varying synaptic input rates. Both closures break down at lower synaptic input rates. Phase-plane analysis of the k = 2 problem with typical parameters proves that no steady-state solutions exist below the synaptic input rate that gives a firing rate of 59 s⁻¹ in the full 2D problem, and reveals why. Closure at k = 3 fails for similar reasons. Low firing-rate solutions can be obtained only with parameters for the amplitude or kinetics (or both) of the unitary postsynaptic conductance event that lie at the edge of the physiological range. We conclude that this dimension-reduction method gives ill-posed problems for a wide range of physiological parameters, and we suggest future directions.
Affiliation(s)
- Cheng Ly
- Courant Institute of Mathematical Sciences, New York University, New York, NY 10012, USA.
|