1. Fisco-Compte P, Aquilué-Llorens D, Roqueiro N, Fossas E, Guillamon A. Empirical modeling and prediction of neuronal dynamics. Biological Cybernetics 2024; 118:83-110. [PMID: 38597964] [PMCID: PMC11068704] [DOI: 10.1007/s00422-024-00986-z]
Abstract
Mathematical modeling of neuronal dynamics has grown rapidly in recent decades thanks to the biophysical formalism introduced by Hodgkin and Huxley in the 1950s. Other types of models (for instance, integrate-and-fire models), although less realistic, have also contributed to our understanding of neuronal dynamics. However, a vast volume of data has still not been associated with a mathematical model, mainly because data are acquired more rapidly than they can be analyzed or because they are difficult to analyze (for instance, when the number of ionic channels involved is huge). Therefore, developing new methodologies to obtain mathematical or computational models associated with data (even without previous knowledge of the source) can be helpful for making future predictions. Here, we explore the capability of a wavelet neural network to identify neuronal (single-cell) dynamics. We present an optimized computational scheme that trains the artificial neural network (ANN) with biologically plausible input currents. We obtain successful identification for data generated from four different neuron models when using all variables as inputs of the network. We also show that the empirical model obtained is able to generalize and predict the neuronal dynamics generated by variable input currents different from those used to train the artificial network. In the more realistic situation of using only the voltage and the injected current as input data to train the network, we lose predictive ability but, for low-dimensional models, the results are still satisfactory. We regard our contribution as a first step toward obtaining empirical models from experimental voltage traces.
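The wavelet-network idea in this abstract can be caricatured with a fixed-grid wavelet regression. This sketch is not the authors' architecture (they also train dilations and translations, and identify dynamics rather than a static curve); the Ricker mother wavelet, the 40-unit grid, and the surrogate "voltage trace" are all illustrative assumptions.

```python
import numpy as np

def ricker(x):
    # Mexican-hat ("Ricker") mother wavelet, a common choice in wavelet networks
    return (1.0 - x**2) * np.exp(-0.5 * x**2)

def fit_wavelet_net(t, y, n_units=40):
    # Least-squares fit of y(t) on a fixed grid of dilated/translated wavelets.
    # (A full wavelet network would also train the dilations and translations.)
    centers = np.linspace(t.min(), t.max(), n_units)
    width = (t.max() - t.min()) / n_units
    Phi = ricker((t[:, None] - centers[None, :]) / width)   # design matrix
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return lambda s: ricker((s[:, None] - centers[None, :]) / width) @ coef

t = np.linspace(0.0, 1.0, 400)
y = np.sin(8 * np.pi * t) * np.exp(-3.0 * t)    # surrogate "voltage trace"
model = fit_wavelet_net(t, y)
err = float(np.sqrt(np.mean((model(t) - y) ** 2)))
print(f"training RMSE: {err:.4f}")
```

The grid-based least-squares step stands in for the network training loop; swapping in trainable dilations would recover something closer to a true wavelet neural network.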
Affiliation(s)
- Pau Fisco-Compte
- Departament d'Enginyeria Elèctrica, CITCEA-UPC, Universitat Politècnica de Catalunya - Barcelona TECH, Av. Diagonal, 647 (Edifici ETSEIB), Barcelona, Catalonia, 08028, Spain
- David Aquilué-Llorens
- Neuroscience BU, Starlab Barcelona S.L., Av Tibidabo 47 bis, Barcelona, Catalonia, 08035, Spain
- Nestor Roqueiro
- Depto. de Automação e Sistemas, Federal University of Santa Catarina, Bairro Trindade, Caixa Postal 476, Florianopolis, Santa Catarina, 88040-900, Brazil
- Enric Fossas
- Institut d'Organització i Control, Universitat Politècnica de Catalunya - Barcelona TECH, Av. Diagonal, 647, planta 11 (Edifici ETSEIB), Barcelona, Catalonia, 08028, Spain
- Antoni Guillamon
- Departament de Matemàtiques (EPSEB) and Institut de Matemàtiques de la UPC (IMTech), Universitat Politècnica de Catalunya - Barcelona TECH, Av. Dr. Marañón, 44-50, Barcelona, Catalonia, 08028, Spain
- Centre de Recerca Matemàtica, Edifici C, Campus de Bellaterra, Cerdanyola del Vallès, Catalonia, 08193, Spain
2. Exact mean-field models for spiking neural networks with adaptation. Journal of Computational Neuroscience 2022; 50:445-469. [PMID: 35834100] [DOI: 10.1007/s10827-022-00825-9]
Abstract
Networks of spiking neurons with adaptation have been shown to reproduce a wide range of neural activities, including the emergent population bursting and spike synchrony that underpin brain disorders and normal function. Exact mean-field models derived from spiking neural networks are extremely valuable, as such models can be used to determine how individual neurons and the network they reside within interact to produce macroscopic network behaviours. In this paper, we derive and analyze a set of exact mean-field equations for a neural network with spike frequency adaptation. Specifically, our model is a network of Izhikevich neurons, where each neuron is modeled by a two-dimensional system consisting of a quadratic integrate-and-fire equation plus an equation which implements spike frequency adaptation. Previous work deriving a mean-field model for this type of network relied on the assumption of sufficiently slow dynamics of the adaptation variable. However, this approximation did not succeed in establishing an exact correspondence between the macroscopic description and the realistic neural network, especially when the adaptation time constant was not large. The challenge lies in how to achieve a closed set of mean-field equations that includes the mean-field dynamics of the adaptation variable. We address this problem by using a Lorentzian ansatz combined with a moment closure approach to arrive at a mean-field system in the thermodynamic limit. The resulting macroscopic description is capable of qualitatively and quantitatively describing the collective dynamics of the neural network, including transitions between states where the individual neurons exhibit asynchronous tonic firing and synchronous bursting. We extend the approach to a network of two populations of neurons and discuss the accuracy and efficacy of our mean-field approximations by examining all assumptions imposed during the derivation. Numerical bifurcation analysis of our mean-field models reveals bifurcations not previously observed in the models, including a novel mechanism for the emergence of bursting in the network. We anticipate our results will provide a tractable and reliable tool to investigate the underlying mechanisms of brain function and dysfunction from the perspective of computational neuroscience.
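The flavor of such mean-field reductions can be sketched with the Montbrió-Pazó-Roxin exact mean field for quadratic integrate-and-fire neurons, naively augmented with a mean adaptation current. This is an assumed generic form, not the paper's derived system (whose careful treatment of adaptation is precisely its contribution), and all parameter values are hypothetical.

```python
import numpy as np

# Montbrio-Pazo-Roxin mean field (rate r, mean voltage v) plus a crude mean
# adaptation current w with decay time tau_w and spike-triggered drive
# w_jump * r.  All parameters are hypothetical, for illustration only.
tau, delta, eta, J = 1.0, 1.0, 5.0, 10.0
tau_w, w_jump = 10.0, 2.0

r, v, w = 0.1, -1.0, 0.0
dt, T = 1e-3, 40.0
rs = []
for _ in range(int(T / dt)):
    dr = (delta / (np.pi * tau) + 2.0 * r * v) / tau
    dv = (v * v + eta - w + J * tau * r - (np.pi * tau * r) ** 2) / tau
    dw = -w / tau_w + w_jump * r
    r, v, w = r + dt * dr, v + dt * dv, w + dt * dw
    rs.append(r)
rs = np.array(rs)
print(f"late-time mean rate: {rs[-5000:].mean():.3f}")
```

Adaptation feeds back negatively on the mean voltage, which is what generates the slow rate modulation (and, in the paper's exact treatment, bursting transitions).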
3. Byrne Á, Ross J, Nicks R, Coombes S. Mean-Field Models for EEG/MEG: From Oscillations to Waves. Brain Topography 2021; 35:36-53. [PMID: 33993357] [PMCID: PMC8813727] [DOI: 10.1007/s10548-021-00842-4]
Abstract
Neural mass models have been used since the 1970s to model the coarse-grained activity of large populations of neurons. They have proven especially fruitful for understanding brain rhythms. However, although motivated by neurobiological considerations, they are phenomenological in nature and cannot hope to recreate some of the rich repertoire of responses seen in real neuronal tissue. Here we consider a simple spiking neuron network model that has recently been shown to admit an exact mean-field description for both synaptic and gap-junction interactions. The mean-field model takes a similar form to a standard neural mass model, with an additional dynamical equation to describe the evolution of within-population synchrony. As well as reviewing the origins of this next-generation mass model, we discuss its extension to describe an idealised spatially extended planar cortex. To emphasise the usefulness of this model for EEG/MEG modelling, we show how it can be used to uncover the role of local gap-junction coupling in shaping large-scale synaptic waves.
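A distinctive feature of this next-generation mass model is that within-population synchrony is not a free add-on: it can be read off from the mean rate and voltage through a conformal map to the Kuramoto order parameter Z. A minimal sketch (one common form of the map, with τ = 1 assumed; only |Z| is used, so the conjugation convention does not matter):

```python
import numpy as np

def synchrony(r, v, tau=1.0):
    # Kuramoto order parameter magnitude |Z| from mean rate r and voltage v,
    # via the conformal map Z = (1 - conj(W)) / (1 + conj(W)), W = pi*tau*r + i*v.
    # |Z| = 1 indicates full coherence, |Z| = 0 a fully incoherent population.
    W = np.pi * tau * r + 1j * v
    Z = (1.0 - np.conj(W)) / (1.0 + np.conj(W))
    return float(abs(Z))

print(synchrony(1.0 / np.pi, 0.0))   # W = 1: fully incoherent state, |Z| = 0
print(synchrony(0.01, 0.0))          # near-silent population, |Z| close to 1
```

For any r >= 0 the map keeps |Z| <= 1, so the extra "synchrony" equation of the mass model always stays inside the unit disc.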
Affiliation(s)
- Áine Byrne
- School of Mathematics and Statistics, Science Centre, University College Dublin, South Belfield, Dublin 4, Ireland
- James Ross
- School of Mathematical Sciences, Centre for Mathematical Medicine and Biology, University of Nottingham, Nottingham, NG7 2RD, UK
- Rachel Nicks
- School of Mathematical Sciences, Centre for Mathematical Medicine and Biology, University of Nottingham, Nottingham, NG7 2RD, UK
- Stephen Coombes
- School of Mathematical Sciences, Centre for Mathematical Medicine and Biology, University of Nottingham, Nottingham, NG7 2RD, UK
4. Shao Y, Zhang J, Tao L. Dimensional reduction of emergent spatiotemporal cortical dynamics via a maximum entropy moment closure. PLoS Computational Biology 2020; 16:e1007265. [PMID: 32516336] [PMCID: PMC7304648] [DOI: 10.1371/journal.pcbi.1007265]
Abstract
Modern electrophysiological recordings and optical imaging techniques have revealed a diverse spectrum of spatiotemporal neural activities underlying fundamental cognitive processing. Oscillations, traveling waves and other complex population dynamical patterns are often concomitant with sensory processing, information transfer, decision making and memory consolidation. While neural population models such as neural mass, population density and kinetic theoretical models have been used to capture a wide range of the experimentally observed dynamics, a full account of how multi-scale dynamics emerge from the detailed biophysical properties of individual neurons and the network architecture remains elusive. Here we apply a recently developed coarse-graining framework for reduced-dimensional descriptions of neuronal networks to model visual cortical dynamics. We show that, without introducing any new parameters, a sequence of models culminating in an augmented system of spatially coupled ODEs can effectively model a wide range of the observed cortical dynamics, ranging from visual stimulus orientation dynamics to traveling waves induced by visual illusory stimuli. In addition to an efficient simulation method, this framework also offers an analytic approach to studying large-scale network dynamics. As such, the dimensional reduction naturally leads to mesoscopic variables that capture the interplay between neuronal population stochasticity and network architecture that we believe underlies many emergent cortical phenomena.
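The core trick, a maximum entropy moment closure, can be shown on a toy problem rather than the cortical model itself. Constraining only the first two moments, the maximum entropy distribution is Gaussian, which closes the moment hierarchy of a cubic Langevin equation (an assumed stand-in system; all parameters hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy moment closure for  dX = (a*X - X^3) dt + sigma dW.  The hierarchy
# couples <X>, <X^2> to <X^3>, <X^4>; the max-entropy (Gaussian) closure uses
#   <X^3> = m^3 + 3*m*s,   <X^4> = m^4 + 6*m^2*s + 3*s^2   (s = variance).
a, sigma, dt, T = 1.0, 0.3, 1e-3, 8.0
n_steps = int(T / dt)

m, s = 1.5, 0.01                     # closed two-moment ODEs, Euler stepping
for _ in range(n_steps):
    x3 = m**3 + 3.0 * m * s
    x4 = m**4 + 6.0 * m * m * s + 3.0 * s * s
    M2 = (s + m * m) + dt * (2.0 * (a * (s + m * m) - x4) + sigma**2)
    m = m + dt * (a * m - x3)
    s = M2 - m * m

x = np.full(5000, 1.5)               # Monte Carlo reference ensemble
for _ in range(n_steps):
    x += (a * x - x**3) * dt + sigma * np.sqrt(dt) * rng.standard_normal(x.size)
print(f"closure mean {m:.3f}  vs  MC mean {x.mean():.3f}")
```

The two-moment ODE system tracks the full stochastic ensemble at a fraction of the cost, which is the same economy the paper exploits for spatially coupled cortical populations.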
Affiliation(s)
- Yuxiu Shao
- Center for Bioinformatics, National Laboratory of Protein Engineering and Plant Genetic Engineering, School of Life Sciences, Peking University, Beijing, China
- Jiwei Zhang
- School of Mathematics and Statistics, and Hubei Key Laboratory of Computational Science, Wuhan University, China
- Louis Tao
- Center for Bioinformatics, National Laboratory of Protein Engineering and Plant Genetic Engineering, School of Life Sciences, Peking University, Beijing, China
- Center for Quantitative Biology, Peking University, Beijing, China
5. Rule ME, Schnoerr D, Hennig MH, Sanguinetti G. Neural field models for latent state inference: Application to large-scale neuronal recordings. PLoS Computational Biology 2019; 15:e1007442. [PMID: 31682604] [PMCID: PMC6855563] [DOI: 10.1371/journal.pcbi.1007442]
Abstract
Large-scale neural recording methods now allow us to observe large populations of identified single neurons simultaneously, opening a window into neural population dynamics in living organisms. However, distilling such large-scale recordings to build theories of emergent collective dynamics remains a fundamental statistical challenge. The neural field models of Wilson, Cowan, and colleagues remain the mainstay of mathematical population modeling owing to their interpretable, mechanistic parameters and amenability to mathematical analysis. Inspired by recent advances in biochemical modeling, we develop a method based on moment closure to interpret neural field models as latent state-space point-process models, making them amenable to statistical inference. With this approach we can infer the intrinsic states of neurons, such as active and refractory, solely from spiking activity in large populations. After validating this approach with synthetic data, we apply it to high-density recordings of spiking activity in the developing mouse retina. This confirms the essential role of a long-lasting refractory state in shaping the spatiotemporal properties of neonatal retinal waves. This conceptual and methodological advance opens up new theoretical connections between mathematical theory and point-process state-space models in neural data analysis.

Developing statistical tools to connect single-neuron activity to emergent collective dynamics is vital for building interpretable models of neural activity. Neural field models relate single-neuron activity to emergent collective dynamics in neural populations, but integrating them with data remains challenging. Recently, latent state-space models have emerged as a powerful tool for constructing phenomenological models of neural population activity, and the advent of high-density multi-electrode array recordings now enables us to examine large-scale collective neural activity. We show that classical neural field approaches can yield latent state-space equations and demonstrate that this enables inference of the intrinsic states of neurons from recorded spike trains in large populations.
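The latent states mentioned here (quiescent, active, refractory) suggest a simple mean-field caricature: a three-state population with transitions Q -> A (input-dependent), A -> R, and slow R -> Q. The paper's model is a spatially extended, stochastic neural field fit by inference; the rate constants below are purely hypothetical.

```python
import numpy as np

# Three-state population fractions (Q, A, R): quiescent, active, refractory.
# Excitation Q -> A is recurrently amplified by A; recovery R -> Q is slow,
# mimicking the long-lasting refractory state.  All rates are hypothetical.
def step(QAR, drive, dt, rho_a=2.0, rho_r=0.1):
    Q, A, R = QAR
    f = drive * Q * (1.0 + 4.0 * A)       # input- and activity-dependent firing
    dQ = rho_r * R - f
    dA = f - rho_a * A
    dR = rho_a * A - rho_r * R            # transitions conserve total mass
    return QAR + dt * np.array([dQ, dA, dR])

state = np.array([1.0, 0.0, 0.0])
for _ in range(60000):
    state = step(state, drive=0.5, dt=1e-3)
print("Q, A, R =", np.round(state, 3))
```

Because recovery is slow, most of the population ends up parked in the refractory pool, a cartoon of the refractory bottleneck that shapes neonatal retinal waves.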
Affiliation(s)
- Michael E. Rule
- Department of Engineering, University of Cambridge, Cambridge, United Kingdom
- David Schnoerr
- Theoretical Systems Biology, Imperial College London, London, United Kingdom
- Matthias H. Hennig
- Department of Informatics, University of Edinburgh, Edinburgh, United Kingdom
- Guido Sanguinetti
- Department of Informatics, University of Edinburgh, Edinburgh, United Kingdom
6. Mattia M, Biggio M, Galluzzi A, Storace M. Dimensional reduction in networks of non-Markovian spiking neurons: Equivalence of synaptic filtering and heterogeneous propagation delays. PLoS Computational Biology 2019; 15:e1007404. [PMID: 31593569] [PMCID: PMC6799936] [DOI: 10.1371/journal.pcbi.1007404]
Abstract
Message passing between components of a distributed physical system is non-instantaneous and helps determine the time scales of the emerging collective dynamics. In biological neuronal networks this is due in part to local synaptic filtering of exchanged spikes, and in part to the distribution of axonal transmission delays. How differently these two kinds of communication protocols affect the network dynamics is still an open issue, owing to the difficulty of dealing with the non-Markovian nature of synaptic transmission. Here, we develop a mean-field dimensional reduction yielding an effective Markovian dynamics for the population density of the neuronal membrane potential, valid under the hypothesis of small fluctuations of the synaptic current. Within this limit, the resulting theory allows us to prove the formal equivalence between the two transmission mechanisms, holding for any synaptic time scale, integrate-and-fire neuron model, and spike emission regime, and for different network states even when the neuron number is finite. The equivalence holds even for larger fluctuations of the synaptic input if white-noise currents are incorporated to model other possible biological features such as ionic channel stochasticity.
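The intuition behind the equivalence can be checked numerically at the level of a single spike: exponential synaptic filtering of a spike produces, on average, the same input time course as delivering the same spike after a random delay drawn from an exponential distribution with the same time constant. This is only the simplest instance of the paper's much more general result; the time constant and grid are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

tau_s = 2.0                                  # synaptic time constant
edges = np.linspace(0.0, 12.0, 49)           # time bins after the spike
mids = 0.5 * (edges[:-1] + edges[1:])
dt_bin = edges[1] - edges[0]

# (a) deterministic exponential filtering of a single spike at t = 0
filtered = np.exp(-mids / tau_s) / tau_s

# (b) ensemble of delayed deltas with delays ~ Exp(tau_s), binned as a density
delays = rng.exponential(tau_s, size=200000)
hist, _ = np.histogram(delays, bins=edges)
delayed = hist / (delays.size * dt_bin)

dev = float(np.abs(filtered - delayed).max())
print(f"max deviation between the two protocols: {dev:.4f}")
```

The two curves agree to within Monte Carlo noise, which is the single-spike shadow of the filtering/delay equivalence proven in the paper.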
7. Conductance-Based Refractory Density Approach for a Population of Bursting Neurons. Bulletin of Mathematical Biology 2019; 81:4124-4143. [PMID: 31313084] [DOI: 10.1007/s11538-019-00643-8]
Abstract
The conductance-based refractory density (CBRD) approach is a parsimonious mathematical-computational framework for modelling interacting populations of regular-spiking neurons which, however, has not yet been extended to a population of bursting neurons. The canonical CBRD method describes the firing activity of a statistical ensemble of uncoupled Hodgkin-Huxley-like neurons (differentiated by noise) and has demonstrated its validity against experimental data. The present manuscript generalises the CBRD approach to a population of bursting neurons; in this pilot computational study, however, we consider the simplest setting, in which each individual neuron is governed by piecewise-linear bursting dynamics. The resulting population model makes use of slow-fast analysis, leading to a novel methodology that combines the CBRD approach with the theory of multiple-timescale dynamics. The main prospect is that it opens novel avenues for mathematical exploration, as well as the derivation of more sophisticated population activity from Hodgkin-Huxley-like bursting neurons, which will make it possible to capture synchronised bursting activity in hyper-excitable brain states (e.g. at the onset of epilepsy).
8. Ly C, Shew WL, Barreiro AK. Efficient calculation of heterogeneous non-equilibrium statistics in coupled firing-rate models. Journal of Mathematical Neuroscience 2019; 9:2. [PMID: 31073652] [PMCID: PMC6509307] [DOI: 10.1186/s13408-019-0070-7]
Abstract
Understanding nervous system function requires careful study of the transient (non-equilibrium) neural response to rapidly changing, noisy input from the outside world. Such neural responses result from dynamic interactions among multiple, heterogeneous brain regions. Realistic modeling of these large networks requires enormous computational resources, especially when high-dimensional parameter spaces are considered. By assuming quasi-steady-state activity, one can neglect the complex temporal dynamics; however, in many cases the quasi-steady-state assumption fails. Here, we develop a new reduction method for a general heterogeneous firing-rate model receiving background correlated noisy inputs that accurately handles highly non-equilibrium statistics and interactions of heterogeneous cells. Our method involves solving an efficient set of nonlinear ODEs, rather than time-consuming Monte Carlo simulations or high-dimensional PDEs, and it captures the entire set of first- and second-order statistics while allowing significant heterogeneity in all model parameters.
Affiliation(s)
- Cheng Ly
- Department of Statistical Sciences and Operations Research, Virginia Commonwealth University, Richmond, USA
- Woodrow L. Shew
- Department of Physics, University of Arkansas, Fayetteville, USA
9. Ladenbauer J, Obermayer K. Weak electric fields promote resonance in neuronal spiking activity: Analytical results from two-compartment cell and network models. PLoS Computational Biology 2019; 15:e1006974. [PMID: 31009455] [PMCID: PMC6476479] [DOI: 10.1371/journal.pcbi.1006974]
Abstract
Transcranial brain stimulation and evidence of ephaptic coupling have sparked strong interest in understanding the effects of weak electric fields on the dynamics of neuronal populations. While their influence on the subthreshold membrane voltage can be biophysically well explained using spatially extended neuron models, mechanistic analyses of neuronal spiking and network activity have remained a methodological challenge. More generally, this challenge applies to phenomena for which single-compartment (point) neuron models are oversimplified. Here we employ a pyramidal neuron model that comprises two compartments, allowing us to distinguish basal-somatic from apical dendritic inputs and accounting for an extracellular field in a biophysically minimalistic way. Using an analytical approach, we fit its parameters to reproduce the response properties of a canonical, spatial model neuron and dissect the stochastic spiking dynamics of single cells and large networks. We show that oscillatory weak fields effectively mimic anti-correlated inputs at the soma and dendrite and strongly modulate neuronal spiking activity in a rather narrow frequency band. This effect carries over to coupled populations of pyramidal cells and inhibitory interneurons, boosting network-induced resonance in the beta and gamma frequency bands. Our work contributes a useful theoretical framework for mechanistic analyses of population dynamics going beyond point neuron models, and provides insights into the modulation effects of extracellular fields that arise from the morphology of pyramidal cells.

The elongated spatial structure of pyramidal neurons, which possess large apical dendrites, plays an important role in the integration of synaptic inputs and mediates sensitivity to weak extracellular electric fields. Modeling studies at the population level greatly contribute to our mechanistic understanding but face a methodological challenge because morphologically detailed neuron models are too complex for use in noisy, in-vivo-like conditions, and in large networks in particular. Here we present an analytical approach based on a two-compartment spiking neuron model that can distinguish synaptic inputs at the apical dendrite from those at the somatic region and accounts for an extracellular field in a biophysically minimalistic way. We devised efficient methods to approximate the responses of a spatially more detailed pyramidal neuron model, and to study the spiking dynamics of single neurons and sparsely coupled large networks in the presence of fluctuating inputs. Using these methods we focused on how responses are affected by oscillatory weak fields. Our results suggest that ephaptic coupling may play a mechanistic role in oscillations of population activity and indicate the potential to entrain networks by weak electric stimulation.
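The claim that an oscillatory field acts like anti-correlated somatic/dendritic input can be illustrated with a purely passive (subthreshold, non-spiking) two-compartment sketch; the paper's model additionally spikes and is fit to a detailed neuron, and all parameters here are hypothetical.

```python
import numpy as np

# Two passive compartments (soma vs, dendrite vd) coupled by an internal
# conductance g_c; a weak sinusoidal field enters as equal-and-opposite
# inputs to the two compartments.  Hypothetical parameters, units of ms/mV.
tau, g_c, eps, omega = 10.0, 0.5, 1.0, 2.0 * np.pi * 0.02   # 20 Hz field
dt, T = 0.05, 2000.0
vs = vd = 0.0
trace = []
for k in range(int(T / dt)):
    E = eps * np.sin(omega * k * dt)
    dvs = (-vs + g_c * (vd - vs) - E) / tau     # soma: field pushes one way
    dvd = (-vd + g_c * (vs - vd) + E) / tau     # dendrite: opposite polarity
    vs, vd = vs + dt * dvs, vd + dt * dvd
    trace.append(vs)
trace = np.array(trace)
half = trace[len(trace) // 2:]                  # discard transient
amp = 0.5 * (half.max() - half.min())
print(f"steady-state somatic modulation amplitude: {amp:.3f} mV")
```

Only the soma-dendrite voltage difference is driven (the common mode decays), which is exactly the anti-correlated-input picture; frequency dependence of `amp` would trace out the subthreshold resonance curve.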
Affiliation(s)
- Josef Ladenbauer
- Laboratoire de Neurosciences Cognitives et Computationnelles, École Normale Supérieure - PSL Research University, Paris, France
- Klaus Obermayer
- Department of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Germany
10. de Kamps M, Lepperød M, Lai YM. Computational geometry for modeling neural populations: From visualization to simulation. PLoS Computational Biology 2019; 15:e1006729. [PMID: 30830903] [PMCID: PMC6417745] [DOI: 10.1371/journal.pcbi.1006729]
Abstract
The importance of a mesoscopic description level of the brain is now well established. Rate-based models are widely used but have limitations. Recently, several extremely efficient population-level methods have been proposed that go beyond the characterization of a population in terms of a single variable. Here, we present a method for simulating neural populations based on two-dimensional (2D) point spiking neuron models that defines the state of the population in terms of a density function over the neural state space. Our method differs from others in that we make neither the diffusion approximation nor a reduction of the state space to a single dimension (1D). We do not hard-code the neural model but read in a grid describing its state space in the relevant simulation region, so novel models can be studied without even recompiling the code. The method is highly modular: variations of the deterministic neural dynamics and the stochastic process can be investigated independently. Currently, there is a trend to reduce complex high-dimensional neuron models to 2D ones, as they offer a rich dynamical repertoire that is not available in 1D, such as limit cycles. We demonstrate that our method is ideally suited to investigating noise in such systems, replicating results obtained in the diffusion limit and generalizing them to a regime of large jumps. The joint probability density function is much more informative than 1D marginals, and we argue that the study of 2D systems subject to noise is an important complement to that of 1D systems.
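The object the method evolves, a joint density over a 2D neural state space driven by finite (non-diffusive) synaptic jumps, can be approximated by brute force with a Monte Carlo ensemble. This is not the authors' geometric grid method, just a caricature of its target quantity; the neuron model (QIF plus adaptation) and all parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Ensemble of 2D neurons (voltage v, adaptation w) receiving Poisson input
# with finite jump size (no diffusion approximation).  The population state
# is the joint density over (v, w), here estimated with a 2D histogram.
n, dt, T = 5000, 1e-4, 0.5
rate, jump = 800.0, 0.03                      # input rate and jump amplitude
v = np.full(n, -0.5)
w = np.zeros(n)
for _ in range(int(T / dt)):
    v += dt * (v * v - w + 0.1)               # quadratic integrate-and-fire
    w += dt * 0.2 * (v - w)                   # slow adaptation variable
    v += jump * rng.poisson(rate * dt, n)     # large synaptic jumps
    spiked = v > 5.0
    v[spiked] = -1.0                          # reset after spike
    w[spiked] += 0.3                          # spike-triggered adaptation

H, v_edges, w_edges = np.histogram2d(v, w, bins=40)
P = H / H.sum()                               # empirical joint density
print("density grid", P.shape, "; total mass", P.sum())
```

The deterministic flow and the jump process are applied in separate sub-steps, mirroring the paper's modular split between the neural dynamics and the stochastic input.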
Affiliation(s)
- Marc de Kamps
- Institute for Artificial and Biological Intelligence, University of Leeds, Leeds, West Yorkshire, United Kingdom
- Mikkel Lepperød
- Institute of Basic Medical Sciences, and Center for Integrative Neuroplasticity, University of Oslo, Oslo, Norway
- Yi Ming Lai
- Institute for Artificial and Biological Intelligence, University of Leeds, Leeds, West Yorkshire, United Kingdom; currently at the School of Mathematical Sciences, University of Nottingham, Nottingham, United Kingdom
11. An Efficient Population Density Method for Modeling Neural Networks with Synaptic Dynamics Manifesting Finite Relaxation Time and Short-Term Plasticity. eNeuro 2019; 5:eN-MNT-0002-18. [PMID: 30662939] [PMCID: PMC6336402] [DOI: 10.1523/eneuro.0002-18.2018]
Abstract
When incorporating more realistic synaptic dynamics, the computational efficiency of population density methods (PDMs) declines sharply due to the increase in the dimension of the master equations. To avoid such a decline, we develop an efficient PDM, termed the colored-synapse PDM (csPDM), in which the dimension of the master equations does not depend on the number of synapse-associated state variables in the underlying network model. Our goal is to allow the PDM to incorporate realistic synaptic dynamics that possess not only a finite relaxation time but also short-term plasticity (STP). The model equations of the csPDM are derived based on the diffusion approximation of the synaptic dynamics and on probability density function methods for Langevin equations with colored noise. Numerical examples, given by simulations of the population dynamics of uncoupled exponential integrate-and-fire (EIF) neurons, show good agreement between the results of the csPDM and Monte Carlo simulations (MCSs). Compared to the original full-dimensional PDM (fdPDM), the csPDM exhibits far better computational efficiency because of the lower dimension of its master equations. In addition, it permits network dynamics to possess the short-term plastic characteristics inherited from plastic synapses. The novel csPDM is potentially applicable to any spiking neuron model because it makes no assumptions about the neuronal dynamics and, more importantly, this is the first PDM reported to successfully encompass short-term facilitation/depression properties.
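The Monte Carlo reference the csPDM is validated against can be sketched directly: uncoupled EIF neurons driven by a synaptic variable with a finite relaxation time (colored, Ornstein-Uhlenbeck-like input rather than white noise). Parameters below are hypothetical, not the paper's, and short-term plasticity is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

# Uncoupled exponential integrate-and-fire (EIF) neurons with colored
# synaptic input s (relaxation time tau_s).  All values in ms / mV.
n, dt, T = 2000, 0.02, 300.0
tau_m, E_L, v_T, d_T, v_re, v_th = 20.0, -65.0, -50.0, 3.0, -65.0, -30.0
tau_s, mu, sig = 5.0, 17.0, 6.0

v = np.full(n, E_L)
s = np.full(n, mu)
spikes = 0
for _ in range(int(T / dt)):
    # Ornstein-Uhlenbeck synaptic input: mean mu, std sig, correlation tau_s
    s += dt * (mu - s) / tau_s + sig * np.sqrt(2.0 * dt / tau_s) * rng.standard_normal(n)
    v += dt * (-(v - E_L) + d_T * np.exp((v - v_T) / d_T) + s) / tau_m
    fired = v >= v_th
    spikes += int(fired.sum())
    v[fired] = v_re
rate = spikes / (n * T / 1000.0)
print(f"population mean firing rate: {rate:.1f} Hz")
```

A density method like the csPDM replaces this whole ensemble with a single low-dimensional master equation while reproducing the same rate.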
12. Large deviations for randomly connected neural networks: I. Spatially extended systems. Advances in Applied Probability 2018. [DOI: 10.1017/apr.2018.42]
Abstract
In a series of two papers, we investigate the large deviations and asymptotic behavior of stochastic models of brain neural networks with random interaction coefficients. In this first paper, we take into account the spatial structure of the brain and consider first the presence of interaction delays that depend on the distance between cells and then the Gaussian random interaction amplitude with a mean and variance that depend on the position of the neurons and scale as the inverse of the network size. We show that the empirical measure satisfies a large deviations principle with a good rate function reaching its minimum at a unique spatially extended probability measure. This result implies an averaged convergence of the empirical measure and a propagation of chaos. The limit is characterized through a complex non-Markovian implicit equation in which the network interaction term is replaced by a nonlocal Gaussian process with a mean and covariance that depend on the statistics of the solution over the whole neural field.
13. Ly C, Marsat G. Variable synaptic strengths controls the firing rate distribution in feedforward neural networks. Journal of Computational Neuroscience 2017; 44:75-95. [DOI: 10.1007/s10827-017-0670-8]
14. A theoretical framework for analyzing coupled neuronal networks: Application to the olfactory system. PLoS Computational Biology 2017; 13:e1005780. [PMID: 28968384] [PMCID: PMC5638622] [DOI: 10.1371/journal.pcbi.1005780]
Abstract
Determining how synaptic coupling within and between regions is modulated during sensory processing is an important topic in neuroscience. Electrophysiological recordings provide detailed information about neural spiking but have traditionally been confined to a particular region or layer of cortex. Here we develop new theoretical methods to study interactions between and within two brain regions, based on experimental measurements of spiking activity simultaneously recorded from the two regions. By systematically comparing experimentally obtained spiking statistics to (efficiently computed) model spike rate statistics, we identify regions in model parameter space that are consistent with the experimental data. We apply our new technique to dual micro-electrode array in vivo recordings from two distinct regions: olfactory bulb (OB) and anterior piriform cortex (PC). Our analysis predicts that: i) inhibition within the afferent region (OB) has to be weaker than the inhibition within PC, ii) excitation from PC to OB is generally stronger than excitation from OB to PC, iii) excitation from PC to OB and inhibition within PC have to both be relatively strong compared to presynaptic inputs from OB. These predictions are validated in a spiking neural network model of the OB–PC pathway that satisfies the many constraints from our experimental data. We find that when the derived relationships are violated, the spiking statistics no longer satisfy the constraints from the data. In principle this modeling framework can be adapted to other systems and used to investigate relationships between other neural attributes besides network connection strengths. Thus, this work can serve as a guide to further investigations into the relationships of various neural attributes within and across different regions during sensory processing.
Sensory processing is known to span multiple regions of the nervous system. However, electrophysiological recordings during sensory processing have traditionally been limited to a single region or brain layer. With recent advances in experimental techniques, recording spiking activity from multiple regions simultaneously is now feasible. However, other important quantities, such as inter-region connection strengths, cannot yet be measured. Here, we develop new theoretical tools to leverage data obtained by recording from two different brain regions simultaneously. We address the following questions: what are the crucial neural network attributes that enable sensory processing across different regions, and how are these attributes related to one another? With a novel theoretical framework to efficiently calculate spiking statistics, we can characterize a high-dimensional parameter space that satisfies data constraints. We apply our results to the olfactory system to make specific predictions about effective network connectivity. Our framework relies on incorporating relatively easy-to-measure quantities to predict hard-to-measure interactions across multiple brain regions. Because this work is adaptable to other systems, we anticipate it will be a valuable tool for analysis of other larger-scale brain recordings.
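The search strategy described here (computing model statistics efficiently over a grid of connection strengths and retaining the parameter region consistent with experimental constraints) can be illustrated with a deliberately simplified sketch. The toy model below uses a two-population linear rate network in place of the paper's spiking statistics; all coupling values, drives, and constraint windows are hypothetical:

```python
import numpy as np

def steady_rates(w_ob_inh, w_pc_inh, w_ob_to_pc=1.0, w_pc_to_ob=1.2):
    """Fixed point of a toy linear rate model r' = -r + G r + drive,
    with population 0 = OB and population 1 = PC; returns None if the
    fixed point is not attracting."""
    drive = np.array([1.0, 0.5])
    G = np.array([[-w_ob_inh, w_pc_to_ob],
                  [w_ob_to_pc, -w_pc_inh]])
    if np.max(np.linalg.eigvals(G).real) >= 1.0:  # -r + G r must be stable
        return None
    return np.linalg.solve(np.eye(2) - G, drive)

# Scan the two within-region inhibition strengths; keep grid points whose
# model rates fall inside hypothetical experimentally observed windows.
grid = np.linspace(0.1, 2.0, 40)
consistent = []
for w_ob in grid:
    for w_pc in grid:
        r = steady_rates(w_ob, w_pc)
        if r is not None and 0.5 < r[0] < 3.0 and 0.2 < r[1] < 2.0:
            consistent.append((w_ob, w_pc))
```

The retained set `consistent` plays the role of the data-consistent region in parameter space; it can then be inspected for orderings of the kind reported here (e.g. whether OB inhibition must be weaker than PC inhibition).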
|
15
|
Avitabile D, Wedgwood KCA. Macroscopic coherent structures in a stochastic neural network: from interface dynamics to coarse-grained bifurcation analysis. J Math Biol 2017; 75:885-928. [PMID: 28150175 PMCID: PMC5562874 DOI: 10.1007/s00285-016-1070-9] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2016] [Revised: 10/06/2016] [Indexed: 11/05/2022]
Abstract
We study coarse pattern formation in a cellular automaton modelling a spatially-extended stochastic neural network. The model, originally proposed by Gong and Robinson (Phys Rev E 85(5):055101(R), 2012), is known to support stationary and travelling bumps of localised activity. We pose the model on a ring and study the existence and stability of these patterns in various limits using a combination of analytical and numerical techniques. In a purely deterministic version of the model, posed on a continuum, we construct bumps and travelling waves analytically using standard interface methods from neural field theory. In a stochastic version with Heaviside firing rate, we construct approximate analytical probability mass functions associated with bumps and travelling waves. In the full stochastic model posed on a discrete lattice, where a coarse analytic description is unavailable, we compute patterns and their linear stability using equation-free methods. The lifting procedure used in the coarse time-stepper is informed by the analysis in the deterministic and stochastic limits. In all settings, we identify the synaptic profile as a mesoscopic variable, and the width of the corresponding activity set as a macroscopic variable. Stationary and travelling bumps have similar meso- and macroscopic profiles, but different microscopic structure, hence we propose lifting operators which use microscopic motifs to disambiguate them. We provide numerical evidence that waves are supported by a combination of high synaptic gain and long refractory times, while meandering bumps are elicited by short refractory times.
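The interface construction used in the deterministic continuum limit reduces bump-finding to a scalar condition: a stationary bump of width Δ satisfies W(Δ) = h, where W(x) is the integral of the kernel w from 0 to x and h is the Heaviside firing threshold. A minimal sketch of this step, with an illustrative difference-of-exponentials kernel rather than the kernel of the paper:

```python
import math

def W(delta, a1=1.0, s1=1.0, a2=0.4, s2=2.0):
    """Integral from 0 to delta of w(x) = a1*exp(-|x|/s1) - a2*exp(-|x|/s2)."""
    return a1 * s1 * (1.0 - math.exp(-delta / s1)) - a2 * s2 * (1.0 - math.exp(-delta / s2))

def bisect(f, lo, hi, tol=1e-10):
    """Plain bisection; assumes f(lo) and f(hi) have opposite signs."""
    flo = f(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) * flo <= 0.0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

h = 0.3                                 # Heaviside firing threshold
peak = 2.0 * math.log(1.0 / 0.4)        # where w(delta) = 0, so W is maximal
narrow = bisect(lambda d: W(d) - h, 1e-9, peak)  # narrow bump branch
wide = bisect(lambda d: W(d) - h, peak, 50.0)    # wide bump branch
```

In the Amari-type construction the wide bump, for which w(Δ) < 0, is the stable one; the narrow bump is unstable.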
Affiliation(s)
- Daniele Avitabile
- Centre for Mathematical Medicine and Biology, School of Mathematical Sciences, University of Nottingham, Nottingham, NG2 7RD, UK
- Kyle C A Wedgwood
- Centre for Biomedical Modelling and Analysis, University of Exeter, Living Systems Institute, Stocker Road, Exeter, EX4 4QD, UK.
|
16
|
Barreiro AK, Ly C. Practical approximation method for firing-rate models of coupled neural networks with correlated inputs. Phys Rev E 2017; 96:022413. [PMID: 28950506 DOI: 10.1103/physreve.96.022413] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2017] [Indexed: 01/18/2023]
Abstract
Rapid experimental advances now enable simultaneous electrophysiological recording of neural activity at single-cell resolution across large regions of the nervous system. Models of this neural network activity will necessarily increase in size and complexity, thus increasing the computational cost of simulating them and the challenge of analyzing them. Here we present a method to approximate the activity and firing statistics of a general firing rate network model (of the Wilson-Cowan type) subject to noisy correlated background inputs. The method requires solving a system of transcendental equations and is fast compared to Monte Carlo simulations of coupled stochastic differential equations. We implement the method with several examples of coupled neural networks and show that the results are quantitatively accurate even with moderate coupling strengths and an appreciable amount of heterogeneity in many parameters. This work should be useful for investigating how various neural attributes qualitatively affect the spiking statistics of coupled neural networks.
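For orientation, here is the kind of Monte Carlo that the approximation is designed to avoid: an Euler-Maruyama simulation of a two-population Wilson-Cowan-type rate model with correlated background noise, compared against the zero-noise mean-field fixed point. All parameter values are illustrative, and the noise is taken additive for simplicity (in the paper the noisy background enters as an input):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)

tau = 10.0                        # relaxation time constant (ms)
G = np.array([[0.5, -1.0],        # coupling onto E: from E, from I
              [1.0, -0.5]])       # coupling onto I: from E, from I
mu = np.array([1.0, 0.8])         # mean background drive
C = np.array([[1.0, 0.4],         # covariance of correlated background noise
              [0.4, 1.0]])
L = np.linalg.cholesky(C)
sigma, dt, n_steps = 0.5, 0.1, 50_000

# Euler-Maruyama for tau*dx = (-x + F(G x + mu))*dt + sigma*dW
x = np.zeros(2)
acc = np.zeros(2)
for _ in range(n_steps):
    xi = L @ rng.standard_normal(2)
    x = x + dt / tau * (-x + sigmoid(G @ x + mu)) + (sigma * np.sqrt(dt) / tau) * xi
    acc += x
mc_mean = acc / n_steps           # Monte Carlo estimate of mean activity

# Zero-noise mean-field fixed point x* = F(G x* + mu), for comparison.
xf = np.zeros(2)
for _ in range(2000):
    xf = sigmoid(G @ xf + mu)
```

The paper's method replaces the slow sampling loop by a small system of transcendental equations whose solution also carries the noise corrections.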
Affiliation(s)
- Andrea K Barreiro
- Department of Mathematics, Southern Methodist University, P.O. Box 750235, Dallas, Texas 75275, USA
- Cheng Ly
- Department of Statistical Sciences and Operations Research, Virginia Commonwealth University, 1015 Floyd Avenue, Richmond, Virginia 23284, USA
|
17
|
Ghusinga KR, Vargas-Garcia CA, Lamperski A, Singh A. Exact lower and upper bounds on stationary moments in stochastic biochemical systems. Phys Biol 2017; 14:04LT01. [PMID: 28661893 DOI: 10.1088/1478-3975/aa75c6] [Citation(s) in RCA: 46] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
In the stochastic description of biochemical reaction systems, the time evolution of statistical moments for species population counts is described by a linear dynamical system. However, except for some ideal cases (such as zero- and first-order reaction kinetics), the moment dynamics is underdetermined as lower-order moments depend upon higher-order moments. Here, we propose a novel method to find exact lower and upper bounds on stationary moments for a given arbitrary system of biochemical reactions. The method exploits the fact that statistical moments of any positive-valued random variable must satisfy some constraints that are compactly represented through the positive semidefiniteness of moment matrices. Our analysis shows that solving moment equations at steady state in conjunction with constraints on moment matrices provides exact lower and upper bounds on the moments. These results are illustrated by three different examples: the commonly used logistic growth model, stochastic gene expression with auto-regulation, and an activator-repressor gene network motif. Interestingly, in all cases the accuracy of the bounds is shown to improve as moment equations are expanded to include higher-order moments. Our results provide avenues for development of approximation methods that provide explicit bounds on moments for nonlinear stochastic systems that are otherwise analytically intractable.
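The constraint at the heart of the method, namely that moment (Hankel) matrices of a nonnegative random variable must be positive semidefinite, is easy to verify numerically. A minimal illustration with an arbitrary nonnegative distribution standing in for a molecule-count distribution (in the paper these constraints are combined with the stationary moment equations and solved as a semidefinite program):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)  # any nonnegative r.v.

m = [np.mean(x**k) for k in range(5)]  # raw moments m0..m4

# Hankel moment matrix and "localizing" matrix; both must be positive
# semidefinite for a distribution supported on [0, inf).
H = np.array([[m[0], m[1], m[2]],
              [m[1], m[2], m[3]],
              [m[2], m[3], m[4]]])
Loc = np.array([[m[1], m[2]],
                [m[2], m[3]]])

eig_H = np.linalg.eigvalsh(H)
eig_Loc = np.linalg.eigvalsh(Loc)
```

The leading 2x2 block of H alone already implies the familiar bound m2 >= m1^2.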
Affiliation(s)
- Khem Raj Ghusinga
- Department of Electrical and Computer Engineering, University of Delaware, Newark, DE, United States of America
|
18
|
Droste F, Lindner B. Exact analytical results for integrate-and-fire neurons driven by excitatory shot noise. J Comput Neurosci 2017; 43:81-91. [PMID: 28585050 DOI: 10.1007/s10827-017-0649-5] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2017] [Revised: 04/14/2017] [Accepted: 05/04/2017] [Indexed: 11/24/2022]
Abstract
A neuron receives input from other neurons via electrical pulses, so-called spikes. The pulse-like nature of the input is frequently neglected in analytical studies; instead, the input is usually approximated to be Gaussian. Recent experimental studies have shown, however, that an assumption underlying this approximation is often not met: individual presynaptic spikes can have a significant effect on a neuron's dynamics. It is thus desirable to explicitly account for the pulse-like nature of neural input, i.e., to consider neurons driven by shot noise, a long-standing problem that is mathematically challenging. In this work, we exploit the fact that excitatory shot noise with exponentially distributed weights can be obtained as a limit case of dichotomous noise, a Markovian two-state process. This allows us to obtain novel exact expressions for the stationary voltage density and the moments of the interspike-interval density of general integrate-and-fire neurons driven by such an input. For the special case of leaky integrate-and-fire neurons, we also give expressions for the power spectrum and the linear response to a signal. We verify and illustrate our expressions by comparison to simulations of leaky-, quadratic- and exponential integrate-and-fire neurons.
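Exact expressions of this kind can be checked against a direct event-driven simulation of the setup: a leaky integrate-and-fire neuron whose only input is excitatory shot noise with exponentially distributed jump amplitudes. Parameter values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

tau = 10.0             # membrane time constant (ms)
v_th, v_r = 15.0, 0.0  # threshold and reset (mV)
rate_in = 0.5          # Poisson input rate (events/ms)
a = 2.0                # mean jump size (mV); jumps are Exp(a)-distributed

t, v, spikes = 0.0, 0.0, []
T = 50_000.0
while t < T:
    gap = rng.exponential(1.0 / rate_in)  # waiting time to next input event
    t += gap
    v *= np.exp(-gap / tau)               # exact decay between events
    v += rng.exponential(a)               # exponentially distributed jump
    if v >= v_th:
        spikes.append(t)
        v = v_r

isi = np.diff(spikes)
out_rate = len(spikes) / T    # output firing rate (spikes/ms)
cv = isi.std() / isi.mean()   # ISI coefficient of variation
```

Estimates such as `out_rate` and `cv` are exactly the quantities the analytical formulas predict.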
Affiliation(s)
- Felix Droste
- Bernstein Center for Computational Neuroscience, Haus 2, Philippstrasse 13, 10115, Berlin, Germany; Department of Physics, Humboldt Universität zu Berlin, Newtonstr 15, 12489, Berlin, Germany.
- Benjamin Lindner
- Bernstein Center for Computational Neuroscience, Haus 2, Philippstrasse 13, 10115, Berlin, Germany; Department of Physics, Humboldt Universität zu Berlin, Newtonstr 15, 12489, Berlin, Germany
|
19
|
Siettos C, Starke J. Multiscale modeling of brain dynamics: from single neurons and networks to mathematical tools. WILEY INTERDISCIPLINARY REVIEWS-SYSTEMS BIOLOGY AND MEDICINE 2016; 8:438-58. [PMID: 27340949 DOI: 10.1002/wsbm.1348] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/07/2016] [Revised: 05/01/2016] [Accepted: 05/14/2016] [Indexed: 11/09/2022]
Abstract
The extreme complexity of the brain naturally requires mathematical modeling approaches on a large variety of scales; the spectrum ranges from single-neuron dynamics over the behavior of groups of neurons to neuronal network activity. Thus, the connection between the microscopic scale (single-neuron activity) and macroscopic behavior (the emergent behavior of the collective dynamics), and vice versa, is key to understanding the brain in its complexity. In this work, we attempt a review of a wide range of approaches, ranging from the modeling of single-neuron dynamics to machine learning. These include biophysical as well as data-driven phenomenological models. The discussed models include Hodgkin-Huxley, FitzHugh-Nagumo, coupled oscillators (Kuramoto oscillators, Rössler oscillators, and the Hindmarsh-Rose neuron), integrate-and-fire, networks of neurons, and neural field equations. In addition to the mathematical models, important mathematical methods in multiscale modeling and reconstruction of the causal connectivity are sketched. The methods include linear and nonlinear tools from statistics, data analysis, and time series analysis up to differential equations, dynamical systems, and bifurcation theory, including Granger causal connectivity analysis, phase synchronization connectivity analysis, principal component analysis (PCA), independent component analysis (ICA), manifold learning algorithms such as ISOMAP and diffusion maps, and equation-free techniques. WIREs Syst Biol Med 2016, 8:438-458. doi: 10.1002/wsbm.1348
Affiliation(s)
- Constantinos Siettos
- School of Applied Mathematics and Physical Sciences, National Technical University of Athens, Athens, Greece
- Jens Starke
- School of Mathematical Sciences, Queen Mary University of London, London, UK
|
20
|
Klinshov V, Franović I. Mean-field dynamics of a random neural network with noise. PHYSICAL REVIEW. E, STATISTICAL, NONLINEAR, AND SOFT MATTER PHYSICS 2015; 92:062813. [PMID: 26764750 DOI: 10.1103/physreve.92.062813] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/08/2015] [Indexed: 06/05/2023]
Abstract
We consider a network of randomly coupled rate-based neurons influenced by external and internal noise. We derive a second-order stochastic mean-field model for the network dynamics and use it to analyze the stability and bifurcations in the thermodynamic limit, as well as to study the fluctuations due to finite-size effects. It is demonstrated that the two types of noise have substantially different impacts on the network dynamics. While both sources of noise give rise to stochastic fluctuations in the case of the finite-size network, only the external noise affects the stationary activity levels of the network in the thermodynamic limit. We compare the theoretical predictions with the direct simulation results and show that they agree for large enough network sizes and for parameter domains sufficiently away from bifurcations.
Affiliation(s)
- Vladimir Klinshov
- Institute of Applied Physics of the Russian Academy of Sciences, 46 Ulyanov Street, 603950 Nizhny Novgorod, Russia
- Igor Franović
- Scientific Computing Laboratory, Institute of Physics Belgrade, University of Belgrade, Pregrevica 118, 11080 Belgrade, Serbia
|
21
|
Examining the limits of cellular adaptation bursting mechanisms in biologically-based excitatory networks of the hippocampus. J Comput Neurosci 2015; 39:289-309. [DOI: 10.1007/s10827-015-0577-1] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2015] [Revised: 09/08/2015] [Accepted: 09/10/2015] [Indexed: 01/21/2023]
|
22
|
|
23
|
Ly C. A principled dimension-reduction method for the population density approach to modeling networks of neurons with synaptic dynamics. Neural Comput 2013; 25:2682-708. [PMID: 23777517 DOI: 10.1162/neco_a_00489] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
The population density approach to neural network modeling has been utilized in a variety of contexts. The idea is to group many similar noisy neurons into populations and to track, for each population, a probability density function giving the proportion of neurons in a particular state, rather than simulating individual neurons (i.e., Monte Carlo). It is commonly used both for analytic insight and as a time-saving computational tool. The main shortcoming of this method is that when realistic attributes are incorporated in the underlying neuron model, the dimension of the probability density function increases, leading to intractable equations or, at best, computationally intensive simulations. Thus, developing principled dimension-reduction methods is essential for the robustness of these powerful methods and, as a pragmatic tool, would be of great value to the broader theoretical neuroscience community. For exposition of this method, we consider a single uncoupled population of leaky integrate-and-fire neurons receiving external excitatory synaptic input only. We present a dimension-reduction method that reduces a two-dimensional partial differential-integral equation to a computationally efficient one-dimensional system and gives qualitatively accurate results in both the steady-state and nonequilibrium regimes. The method, termed the modified mean-field method, is based entirely on the governing equations and not on any auxiliary variables or parameters, and it does not require fine-tuning. The principles of the modified mean-field method have potential applicability to more realistic (i.e., higher-dimensional) neural networks.
Affiliation(s)
- Cheng Ly
- Department of Statistical Sciences and Operations Research, Virginia Commonwealth University Richmond, VA 23284-3083, USA.
|
24
|
Buice MA, Chow CC. Beyond mean field theory: statistical field theory for neural networks. JOURNAL OF STATISTICAL MECHANICS (ONLINE) 2013; 2013:P03003. [PMID: 25243014 PMCID: PMC4169078 DOI: 10.1088/1742-5468/2013/03/p03003] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
Mean field theories have been a stalwart for studying the dynamics of networks of coupled neurons. They are convenient because they are relatively simple and possible to analyze. However, classical mean field theory neglects the effects of fluctuations and correlations due to single neuron effects. Here, we consider various possible approaches for going beyond mean field theory and incorporating correlation effects. Statistical field theory methods, in particular the Doi-Peliti-Janssen formalism, are particularly useful in this regard.
Affiliation(s)
- Michael A Buice
- Center for Learning and Memory, University of Texas at Austin, Austin, TX, USA
- Carson C Chow
- Laboratory of Biological Modeling, NIDDK, NIH, Bethesda, MD, USA
|
25
|
Bifurcations of large networks of two-dimensional integrate and fire neurons. J Comput Neurosci 2013; 35:87-108. [PMID: 23430291 DOI: 10.1007/s10827-013-0442-z] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2012] [Revised: 11/29/2012] [Accepted: 01/17/2013] [Indexed: 12/25/2022]
Abstract
Recently, a class of two-dimensional integrate and fire models has been used to faithfully model spiking neurons. This class includes the Izhikevich model, the adaptive exponential integrate and fire model, and the quartic integrate and fire model. The bifurcation types for the individual neurons have been thoroughly analyzed by Touboul (SIAM J Appl Math 68(4):1045-1079, 2008). However, when the models are coupled together to form networks, the networks can display bifurcations that an uncoupled oscillator cannot. For example, the networks can transition from firing with a constant rate to burst firing. This paper introduces a technique to reduce a full network of this class of neurons to a mean field model, in the form of a system of switching ordinary differential equations. The reduction uses population density methods and a quasi-steady state approximation to arrive at the mean field system. Reduced models are derived for networks with different topologies and different model neurons with biologically derived parameters. The mean field equations are able to qualitatively and quantitatively describe the bifurcations that the full networks display. Extensions and higher order approximations are discussed.
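As a concrete instance of the model class, a minimal single-neuron simulation of the Izhikevich model with standard regular-spiking parameters is sketched below (the cited work couples many such units and reduces the network via population density methods; that reduction is not reproduced here):

```python
# Izhikevich (2003) model: dv/dt = 0.04 v^2 + 5 v + 140 - u + I,
# du/dt = a (b v - u); when v >= 30: v <- c and u <- u + d.
a, b, c, d = 0.02, 0.2, -65.0, 8.0  # regular-spiking parameters
I = 10.0                            # constant injected current
dt, T = 0.1, 1000.0                 # Euler time step and duration (ms)

v, u = -65.0, b * (-65.0)
spikes = []
for step in range(int(T / dt)):
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                   # spike detected: record and reset
        spikes.append(step * dt)
        v, u = c, u + d
```

The discontinuous reset of (v, u) is what makes the population density treatment of this class nontrivial.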
|
26
|
Perthame B, Salort D. On a voltage-conductance kinetic system for integrate & fire neural networks. Kinet Relat Models 2013; 6:841. [DOI: 10.3934/krm.2013.6.841] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
|
27
|
Population density models of integrate-and-fire neurons with jumps: well-posedness. J Math Biol 2012; 67:453-81. [DOI: 10.1007/s00285-012-0554-5] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2011] [Revised: 05/11/2012] [Indexed: 10/28/2022]
|
28
|
Baladron J, Fasoli D, Faugeras O, Touboul J. Mean-field description and propagation of chaos in networks of Hodgkin-Huxley and FitzHugh-Nagumo neurons. JOURNAL OF MATHEMATICAL NEUROSCIENCE 2012; 2:10. [PMID: 22657695 PMCID: PMC3497713 DOI: 10.1186/2190-8567-2-10] [Citation(s) in RCA: 60] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/19/2011] [Accepted: 03/09/2012] [Indexed: 05/20/2023]
Abstract
We derive the mean-field equations arising as the limit of a network of interacting spiking neurons, as the number of neurons goes to infinity. The neurons belong to a fixed number of populations and are represented either by the Hodgkin-Huxley model or by one of its simplified versions, the FitzHugh-Nagumo model. The synapses between neurons are either electrical or chemical. The network is assumed to be fully connected. The maximum conductances vary randomly. Under the condition that all neurons' initial conditions are drawn independently from the same law that depends only on the population they belong to, we prove that a propagation of chaos phenomenon takes place, namely that in the mean-field limit, any finite number of neurons become independent and, within each population, have the same probability distribution. This probability distribution is a solution of a set of implicit equations, either nonlinear stochastic differential equations resembling the McKean-Vlasov equations or non-local partial differential equations resembling the McKean-Vlasov-Fokker-Planck equations. We prove the well-posedness of the McKean-Vlasov equations, i.e. the existence and uniqueness of a solution. We also show the results of some numerical experiments that indicate that the mean-field equations are a good representation of the mean activity of a finite size network, even for modest sizes. These experiments also indicate that the McKean-Vlasov-Fokker-Planck equations may be a good way to understand the mean-field dynamics through, e.g. a bifurcation analysis. Mathematics Subject Classification (2000): 60F99, 60B10, 92B20, 82C32, 82C80, 35Q80.
Affiliation(s)
- Javier Baladron
- NeuroMathComp Laboratory, INRIA, Sophia-Antipolis Méditerranée, 06902, France.
|
29
|
A timestepper-based approach for the coarse-grained analysis of microscopic neuronal simulators on networks: Bifurcation and rare-events micro- to macro-computations. Neurocomputing 2011. [DOI: 10.1016/j.neucom.2011.06.018] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
|
30
|
Bobryk RV. Closure schemes in stochastic nonlinear dynamics: a validation case study. PHYSICAL REVIEW. E, STATISTICAL, NONLINEAR, AND SOFT MATTER PHYSICS 2011; 83:057701. [PMID: 21728701 DOI: 10.1103/physreve.83.057701] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/20/2010] [Indexed: 05/31/2023]
Abstract
It is known that randomness in dynamical systems often leads to infinite hierarchies of coupled equations for relevant probabilistic quantities. Several closure methods for truncation of the hierarchies have been proposed in the literature. In the present paper the performance of closure schemes for moment hierarchies is compared by using the well-known nonlinear equation of an overdamped oscillator with additive Gaussian white noise. In the case of bistable dynamics it is shown that the closure schemes can give incorrect results despite their good convergence properties. To overcome this deficiency, new closure procedures are proposed. They are based on Hermite polynomials and their generalization.
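The failure mode in the bistable case is easy to reproduce. Assuming the overdamped oscillator dx = (a*x - b*x^3) dt + sigma dW, a Gaussian closure at the symmetric state (m1 = 0, with E[x^4] closed as 3*m2^2) gives a steady second moment from 6*b*m2^2 - 2*a*m2 - sigma^2 = 0, which can be compared against the exact stationary value obtained by quadrature from p(x) proportional to exp((a*x^2 - b*x^4/2)/sigma^2):

```python
import numpy as np

a, b, sigma2 = 1.0, 1.0, 0.1  # bistable drift f(x) = a*x - b*x**3

# Gaussian closure at the symmetric state m1 = 0: steady state of
# dm2/dt = 2*(a*m2 - b*m4) + sigma2 with the closure m4 = 3*m2**2.
m2_closure = (a + np.sqrt(a * a + 6.0 * b * sigma2)) / (6.0 * b)

# Exact stationary density p(x) ~ exp((a*x**2 - 0.5*b*x**4)/sigma2),
# evaluated by quadrature on a uniform grid.
x = np.linspace(-4.0, 4.0, 20_001)
logp = (a * x**2 - 0.5 * b * x**4) / sigma2
p = np.exp(logp - logp.max())          # unnormalized stationary density
m2_exact = np.sum(x**2 * p) / np.sum(p)
```

For these parameters the Gaussian closure underestimates the exact second moment by more than a factor of two: the closure tracks fluctuations around x = 0, while the true stationary density concentrates around the two wells at x = ±1.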
Affiliation(s)
- Roman V Bobryk
- Institute of Mathematics, Jan Kochanowski University, PL-25-406 Kielce, Poland.
|
31
|
Touboul JD, Ermentrout GB. Finite-size and correlation-induced effects in mean-field dynamics. J Comput Neurosci 2011; 31:453-84. [PMID: 21384156 DOI: 10.1007/s10827-011-0320-5] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2010] [Revised: 01/28/2011] [Accepted: 02/16/2011] [Indexed: 10/18/2022]
Abstract
The brain's activity is characterized by the interaction of a very large number of neurons that are strongly affected by noise. However, signals often arise at macroscopic scales integrating the effect of many neurons into a reliable pattern of activity. In order to study such large neuronal assemblies, one is often led to derive mean-field limits summarizing the effect of the interaction of a large number of neurons into an effective signal. Classical mean-field approaches consider the evolution of a deterministic variable, the mean activity, thus neglecting the stochastic nature of neural behavior. In this article, we build upon two recent approaches that include correlations and higher order moments in mean-field equations, and study how these stochastic effects influence the solutions of the mean-field equations, both in the limit of an infinite number of neurons and for large yet finite networks. We introduce a new model, the infinite model, which arises from both approaches through a rescaling of the variables; the rescaling is invertible for finite-size networks and hence yields equations equivalent to the previously derived models. The study of this model allows us to understand the qualitative behavior of such large-scale networks. We show that, though the solutions of the deterministic mean-field equation constitute uncorrelated solutions of the new mean-field equations, the stability properties of limit cycles are modified by the presence of correlations, and additional non-trivial behaviors, including periodic orbits, appear when there were none in the mean field. The origin of all these behaviors is then explored in finite-size networks, where interesting mesoscopic-scale effects appear. This study leads us to show that the infinite-size system appears as a singular limit of the network equations, and for any finite network, the system will differ from the infinite system.
Affiliation(s)
- Jonathan D Touboul
- NeuroMathComp Laboratory, INRIA/ENS Paris, 23 Avenue d'Italie, 75013 Paris, France.
|
32
|
Bressloff PC. Metastable states and quasicycles in a stochastic Wilson-Cowan model of neuronal population dynamics. PHYSICAL REVIEW. E, STATISTICAL, NONLINEAR, AND SOFT MATTER PHYSICS 2010; 82:051903. [PMID: 21230496 DOI: 10.1103/physreve.82.051903] [Citation(s) in RCA: 54] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/13/2010] [Revised: 09/22/2010] [Indexed: 05/08/2023]
Abstract
We analyze a stochastic model of neuronal population dynamics with intrinsic noise. In the thermodynamic limit N → ∞, where N determines the size of each population, the dynamics is described by deterministic Wilson-Cowan equations. On the other hand, for finite N the dynamics is described by a master equation that determines the probability of spiking activity within each population. We first consider a single excitatory population that exhibits bistability in the deterministic limit. The steady-state probability distribution of the stochastic network has maxima at points corresponding to the stable fixed points of the deterministic network; the relative weighting of the two maxima depends on the system size. For large but finite N, we calculate the exponentially small rate of noise-induced transitions between the resulting metastable states using a Wentzel-Kramers-Brillouin (WKB) approximation and matched asymptotic expansions. We then consider a two-population excitatory-inhibitory network that supports limit cycle oscillations. Using a diffusion approximation, we reduce the dynamics to a neural Langevin equation, and show how the intrinsic noise amplifies subthreshold oscillations (quasicycles).
Affiliation(s)
- Paul C Bressloff
- Mathematical Institute, University of Oxford, 24-29 St. Giles', Oxford OX1 3LB, United Kingdom
|
33
|
Abstract
Population rate or activity equations are the foundation of a common approach to modeling for neural networks. These equations provide mean field dynamics for the firing rate or activity of neurons within a network given some connectivity. The shortcoming of these equations is that they take into account only the average firing rate, while leaving out higher-order statistics like correlations between firing. A stochastic theory of neural networks that includes statistics at all orders was recently formulated. We describe how this theory yields a systematic extension to population rate equations by introducing equations for correlations and appropriate coupling terms. Each level of the approximation yields closed equations; they depend only on the mean and specific correlations of interest, without an ad hoc criterion for doing so. We show in an example of an all-to-all connected network how our system of generalized activity equations captures phenomena missed by the mean field rate equations alone.
|
34
|
Shinozaki T, Okada M, Reyes AD, Câteau H. Flexible traffic control of the synfire-mode transmission by inhibitory modulation: nonlinear noise reduction. PHYSICAL REVIEW. E, STATISTICAL, NONLINEAR, AND SOFT MATTER PHYSICS 2010; 81:011913. [PMID: 20365405 DOI: 10.1103/physreve.81.011913] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/18/2009] [Revised: 12/02/2009] [Indexed: 05/29/2023]
Abstract
Intermingled neural connections apparent in the brain make us wonder what controls the traffic of propagating activity in the brain to secure signal transmission without harmful crosstalk. Here, we reveal that inhibitory input, but not excitatory input, works as a particularly useful traffic controller because it controls the degree of synchrony of population firing of neurons as well as controlling the size of the population firing bidirectionally. Our dynamical system analysis reveals that the synchrony enhancement depends crucially on the nonlinear membrane potential dynamics and a hidden slow dynamical variable. Our electrophysiological study with rodent slice preparations shows that the phenomenon happens in real neurons. Furthermore, our analysis with the Fokker-Planck equations demonstrates the phenomenon in a semianalytical manner.
|
35
|
Dynamic Causal Models for phase coupling. J Neurosci Methods 2009; 183:19-30. [PMID: 19576931 PMCID: PMC2751835 DOI: 10.1016/j.jneumeth.2009.06.029] [Citation(s) in RCA: 64] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2009] [Revised: 06/15/2009] [Accepted: 06/17/2009] [Indexed: 11/23/2022]
Abstract
This paper presents an extension of the Dynamic Causal Modelling (DCM) framework to the analysis of phase-coupled data. A weakly coupled oscillator approach is used to describe dynamic phase changes in a network of oscillators. The use of Bayesian model comparison allows one to infer the mechanisms underlying synchronization processes in the brain, for example whether activity is driven by master-slave versus mutual entrainment mechanisms. Results are presented on synthetic data from physiological models and on MEG data from a study of visual working memory.
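The weakly coupled oscillator description underlying this extension can be sketched in a few lines: two phase oscillators with sinusoidal coupling, where zeroing one coupling weight gives a master-slave arrangement and nonzero weights in both directions give mutual entrainment. Frequencies and weights below are illustrative:

```python
import numpy as np

def simulate(w1, w2, a12, a21, T=200.0, dt=0.01):
    """Two weakly coupled phase oscillators:
       dth1/dt = w1 + a12*sin(th2 - th1)
       dth2/dt = w2 + a21*sin(th1 - th2)"""
    th1, th2 = 0.0, 0.0
    for _ in range(int(T / dt)):
        d1 = w1 + a12 * np.sin(th2 - th1)
        d2 = w2 + a21 * np.sin(th1 - th2)
        th1 += dt * d1
        th2 += dt * d2
    return th1, th2

# Master-slave: oscillator 2 free-runs and entrains oscillator 1.
th1, th2 = simulate(w1=1.0, w2=1.5, a12=1.0, a21=0.0)

# Locking requires |w2 - w1| <= a12 + a21; the locked phase lag solves
# sin(lag) = (w2 - w1) / (a12 + a21).
predicted_lag = np.arcsin((1.5 - 1.0) / 1.0)
```

In the DCM setting, Bayesian model comparison is what decides between such candidate coupling configurations given the observed phases; this sketch only fixes the forward model.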
|
36
|
Ly C, Tranchina D. Spike train statistics and dynamics with synaptic input from any renewal process: a population density approach. Neural Comput 2009; 21:360-96. [PMID: 19431264 DOI: 10.1162/neco.2008.03-08-743] [Citation(s) in RCA: 29] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
In the probability density function (PDF) approach to neural network modeling, a common simplifying assumption is that the arrival times of elementary postsynaptic events are governed by a Poisson process. This assumption ignores temporal correlations in the input that sometimes have important physiological consequences. We extend PDF methods to models with synaptic event times governed by any modulated renewal process. We focus on the integrate-and-fire neuron with instantaneous synaptic kinetics and a random elementary excitatory postsynaptic potential (EPSP), A. Between presynaptic events, the membrane voltage, v, decays exponentially toward rest, while s, the time since the last synaptic input event, evolves with unit velocity. When a synaptic event arrives, v jumps by A, and s is reset to zero. If v crosses the threshold voltage, an action potential occurs, and v is reset to v_reset. The probability per unit time of a synaptic event at time t, given the elapsed time s since the last event, h(s, t), depends on specifics of the renewal process. We study how regularity of the train of synaptic input events affects output spike rate, PDF and coefficient of variation (CV) of the interspike interval, and the autocorrelation function of the output spike train. In the limit of a deterministic, clocklike train of input events, the PDF of the interspike interval converges to a sum of delta functions, with coefficients determined by the PDF for A. The limiting autocorrelation function of the output spike train is a sum of delta functions whose coefficients fall under a damped oscillatory envelope. When the EPSP CV, σ_A/μ_A, is equal to 0.45, a CV for the intersynaptic event interval, σ_T/μ_T = 0.35, is functionally equivalent to a deterministic periodic train of synaptic input events (CV = 0) with respect to spike statistics. We discuss the relevance to neural network simulations.
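The single-neuron dynamics described above can be sketched as a Monte Carlo simulation; note that the paper itself evolves probability densities rather than sample paths, and all parameter values below are arbitrary choices for illustration. Gamma-distributed inter-event intervals serve here as one concrete renewal process with a tunable CV.

```python
import numpy as np

rng = np.random.default_rng(0)

tau = 20.0                 # membrane time constant (ms)
v_th, v_reset = 1.0, 0.0   # threshold and reset voltage
mu_T, cv_T = 5.0, 0.35     # mean (ms) and CV of inter-event intervals
mu_A, cv_A = 0.3, 0.45     # mean and CV of the random EPSP amplitude A

# A gamma distribution with shape 1/cv^2 and scale mu*cv^2 has mean mu, CV cv.
shape_T, scale_T = 1.0 / cv_T**2, mu_T * cv_T**2
shape_A, scale_A = 1.0 / cv_A**2, mu_A * cv_A**2

def simulate(n_events=50_000):
    v, t, spikes = 0.0, 0.0, []
    for _ in range(n_events):
        s = rng.gamma(shape_T, scale_T)   # draw next inter-event interval
        t += s
        v *= np.exp(-s / tau)             # v decays exponentially toward rest
        v += rng.gamma(shape_A, scale_A)  # v jumps by a random EPSP A
        if v >= v_th:                     # threshold crossing: spike and reset
            spikes.append(t)
            v = v_reset
    return np.array(spikes)

spikes = simulate()
isi = np.diff(spikes)
print(f"output rate ~ {1e3 * len(spikes) / spikes[-1]:.1f} Hz, "
      f"ISI CV ~ {isi.std() / isi.mean():.2f}")
```

Varying cv_T from 0 (clocklike input) upward lets one reproduce, in sample-path form, the regularity effects on output spike statistics that the paper derives from the population density equations.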
Affiliation(s)
- Cheng Ly
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA 15260, USA.
|
37
|
Ly C, Ermentrout GB. Synchronization dynamics of two coupled neural oscillators receiving shared and unshared noisy stimuli. J Comput Neurosci 2008; 26:425-43. [PMID: 19034640 DOI: 10.1007/s10827-008-0120-8] [Citation(s) in RCA: 44] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2008] [Revised: 09/23/2008] [Accepted: 10/23/2008] [Indexed: 11/27/2022]
Abstract
The response of neurons to external stimuli greatly depends on the intrinsic dynamics of the network. Here, the intrinsic dynamics are modeled as coupling and the external input is modeled as shared and unshared noise. We assume the neurons are repetitively firing action potentials (i.e., neural oscillators), are weakly and identically coupled, and that the external noise is weak. Shared noise can induce bistability between the synchronous and anti-phase states even though the anti-phase state is the only stable state in the absence of noise. We study the Fokker-Planck equation of the system and perform an asymptotic reduction, yielding the leading-order density ρ_0. The ρ_0 solution is more computationally efficient than both Monte Carlo simulations and a 2D Fokker-Planck solver, and agrees remarkably well with the full system for weak noise and weak coupling. With moderate noise and coupling, ρ_0 remains qualitatively correct despite the small-noise, small-coupling assumption of the asymptotic reduction. Our phase model accurately predicts the behavior of a realistic synaptically coupled Morris-Lecar system.
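The setup above can be sketched as an Euler-Maruyama simulation of two identical, weakly coupled phase oscillators driven by shared and unshared white noise. The coupling function H and phase sensitivity Z below are simple assumed forms, not those of the paper's Morris-Lecar example, and the noise amplitudes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

omega = 1.0
eps = 0.1          # weak coupling
sigma_s = 0.3      # shared-noise amplitude
sigma_u = 0.05     # unshared-noise amplitude
dt, n_steps, burn = 1e-3, 200_000, 50_000

def H(phi):
    return -np.sin(phi)   # makes anti-phase the stable state without noise

def Z(theta):
    return np.sin(theta)  # phase response curve (assumed)

theta = np.array([0.0, np.pi])
phi_samples = np.empty(n_steps - burn)
sq = np.sqrt(dt)
for k in range(n_steps):
    dWs = rng.normal(0.0, sq)            # shared Wiener increment
    dWu = rng.normal(0.0, sq, size=2)    # one unshared increment per cell
    drift = omega + eps * H(theta[::-1] - theta)
    theta = theta + drift * dt + Z(theta) * (sigma_s * dWs + sigma_u * dWu)
    if k >= burn:
        phi_samples[k - burn] = (theta[0] - theta[1]) % (2 * np.pi)

near_sync = np.mean(np.minimum(phi_samples, 2 * np.pi - phi_samples) < 0.5)
print(f"fraction of time near synchrony: {near_sync:.3f}")
```

With sigma_s = 0 the phase difference settles at anti-phase; turning the shared noise on lets the system also spend time near synchrony, the noise-induced bistability the abstract describes.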
Affiliation(s)
- Cheng Ly
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA 15260, USA.
|
38
|
Mitchell CS, Lee RH. Output-based comparison of alternative kinetic schemes for the NMDA receptor within a glutamate spillover model. J Neural Eng 2007; 4:380-9. [PMID: 18057505 DOI: 10.1088/1741-2560/4/4/004] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
Recent experimental and theoretical work continues to explore the mechanisms and implications of neurotransmitter spillover. Here we examine N-methyl-D-aspartate receptor (NMDA-R) kinetics to determine their implication(s) in glutamate spillover by comparing two mechanistically different NMDA-R models, the 5-state Lester and Jahr (LJ) model and the 8-state Banke and Traynelis (BT) model, within the context of a glutamate spillover model. We employ a search-survey-and-summarize strategy to analyze the relationships within model behavior (model relational analysis) and form a model output landscape. Our results indicate that model relational analysis can reveal differences in models whose outputs would be considered the same. The analysis reveals that the BT model, with its more complex kinetics, is less reliant on diffusion compared to the LJ version, resulting in differences in the relationships between open probability and glutamate concentration despite the fact that both model versions were able to produce the same target output values. Additionally, model relational analysis is able to distinguish between the BT and LJ NMDA-R model versions even though factor analysis indicates that the overall model output space dimensions are the same for both NMDA-R models. Furthermore, the work presented here suggests that model relational analysis may be broadly applicable as a means to examine the complex interactions hidden within overall model behavior.
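The general shape of such a kinetic-scheme comparison can be sketched with a master-equation integration. The 5-state layout below is only in the spirit of the Lester-Jahr model (two binding steps, a desensitized state, and an open state); all rate constants are made-up placeholders, not the published values, and the spillover diffusion model is not included.

```python
import numpy as np

states = ["C0", "C1", "C2", "D", "O"]

def q_matrix(glu):
    # Q[i, j] = transition rate from state i to state j (1/ms);
    # the two binding steps scale with glutamate concentration `glu` (mM).
    Q = np.zeros((5, 5))
    Q[0, 1] = 10.0 * glu; Q[1, 0] = 5.0    # C0 <-> C1 (binding)
    Q[1, 2] = 5.0 * glu;  Q[2, 1] = 10.0   # C1 <-> C2 (binding)
    Q[2, 3] = 1.0;        Q[3, 2] = 0.5    # C2 <-> D  (desensitization)
    Q[2, 4] = 50.0;       Q[4, 2] = 30.0   # C2 <-> O  (gating)
    np.fill_diagonal(Q, -Q.sum(axis=1))    # conserve total probability
    return Q

def open_probability(glu, t_end=50.0, dt=1e-3):
    p = np.array([1.0, 0, 0, 0, 0])        # start fully unbound
    Q = q_matrix(glu)
    for _ in range(int(t_end / dt)):       # forward-Euler master equation
        p = p + dt * (p @ Q)
    return p[states.index("O")]

for glu in (0.01, 0.1, 1.0):
    print(f"glu = {glu} mM -> P(open) ~ {open_probability(glu):.3f}")
```

Swapping in an 8-state Q-matrix and comparing the resulting open-probability-versus-concentration curves is the kind of output-based comparison the paper performs, albeit with the published rate constants and a spillover diffusion model around it.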
Affiliation(s)
- Cassie S Mitchell
- Laboratory for Neuroengineering, The Wallace H Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA 30332-0535, USA
|