1.
Xiao ZC, Lin KK, Young LS. Efficient models of cortical activity via local dynamic equilibria and coarse-grained interactions. Proc Natl Acad Sci U S A 2024;121:e2320454121. PMID: 38923983. PMCID: PMC11228477. DOI: 10.1073/pnas.2320454121.
Abstract
Biologically detailed models of brain circuitry are challenging to build and simulate due to the large number of neurons, their complex interactions, and the many unknown physiological parameters. Simplified mathematical models are more tractable, but harder to evaluate when too far removed from neuroanatomy/physiology. We propose that a multiscale model, coarse-grained (CG) while preserving local biological details, offers the best balance between biological realism and computability. This paper presents such a model. Generally, CG models focus on the interaction between groups of neurons (here termed "pixels") rather than individual cells. In our case, dynamics are alternately updated at intra- and interpixel scales, with one informing the other, until convergence to equilibrium is achieved on both scales. An innovation is how we exploit the underlying biology: Taking advantage of the similarity in local anatomical structures across large regions of the cortex, we model intrapixel dynamics as a single dynamical system driven by "external" inputs. These inputs vary with events external to the pixel, but their ranges can be estimated a priori. Precomputing and tabulating all potential local responses speeds up the updating procedure significantly compared to direct multiscale simulation. We illustrate our methodology using a model of the primate visual cortex. Except for local neuron-to-neuron variability (necessarily lost in any CG approximation), our model reproduces various features of large-scale network models at a tiny fraction of the computational cost. These include neuronal responses as a consequence of their orientation selectivity, a primary function of visual neurons.
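The precompute-and-tabulate strategy described in this abstract can be sketched generically: treat the local (intrapixel) response as a function of a few bounded external inputs, tabulate it on a grid offline, then iterate the coarse-grained interpixel dynamics to equilibrium using table lookups instead of repeated local simulation. The sketch below is a schematic stand-in; the response function, grid ranges, coupling matrix, and fixed-point iteration are all illustrative assumptions, not the paper's actual model.

```python
import numpy as np

# Offline stage: tabulate an (illustrative) local "pixel" response over the
# a-priori ranges of its external inputs.
def local_response(exc_in, inh_in):
    # stand-in for an expensive intra-pixel simulation
    return np.maximum(0.0, np.tanh(2.0 * exc_in - inh_in))

exc_grid = np.linspace(0.0, 2.0, 101)
inh_grid = np.linspace(0.0, 2.0, 101)
E, I = np.meshgrid(exc_grid, inh_grid, indexing="ij")
table = local_response(E, I)

def bilinear(table, xg, yg, x, y):
    # bilinear interpolation on a regular grid (vectorised over x, y)
    ix = np.clip(np.searchsorted(xg, x) - 1, 0, len(xg) - 2)
    iy = np.clip(np.searchsorted(yg, y) - 1, 0, len(yg) - 2)
    tx = (x - xg[ix]) / (xg[ix + 1] - xg[ix])
    ty = (y - yg[iy]) / (yg[iy + 1] - yg[iy])
    return ((1 - tx) * (1 - ty) * table[ix, iy]
            + tx * (1 - ty) * table[ix + 1, iy]
            + (1 - tx) * ty * table[ix, iy + 1]
            + tx * ty * table[ix + 1, iy + 1])

# Online stage: iterate coarse-grained inter-pixel dynamics to equilibrium,
# querying the table instead of re-simulating each pixel.
rng = np.random.default_rng(0)
n_pixels = 50
W = 0.5 * rng.random((n_pixels, n_pixels)) / n_pixels   # inter-pixel coupling
drive = rng.uniform(0.2, 0.8, size=n_pixels)            # feedforward input
inh_in = np.full(n_pixels, 0.3)                         # fixed inhibition here
rates = np.zeros(n_pixels)
for _ in range(100):
    exc_in = np.clip(drive + W @ rates, 0.0, 2.0)
    new_rates = bilinear(table, exc_grid, inh_grid, exc_in, inh_in)
    if np.max(np.abs(new_rates - rates)) < 1e-10:
        break
    rates = new_rates
```

With the weak coupling chosen here, the update map is a contraction, so the iteration converges to a self-consistent equilibrium in a few dozen lookups.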
Affiliation(s)
- Zhuo-Cheng Xiao
- New York University - East China Normal University Institute of Mathematical Sciences, New York University, Shanghai 200124, China
- Institute of Brain and Cognitive Science, New York University - East China Normal University, New York University, Shanghai 200124, China
- College of Art and Sciences, New York University, Shanghai 200124, China
- Kevin K Lin
- Department of Mathematics, University of Arizona, Tucson, AZ 85721
- Lai-Sang Young
- Department of Mathematics, Courant Institute of Mathematical Sciences, New York University, New York, NY 10012
2.
Xiao ZC, Lin KK, Young LS. A data-informed mean-field approach to mapping of cortical parameter landscapes. PLoS Comput Biol 2021;17:e1009718. PMID: 34941863. PMCID: PMC8741023. DOI: 10.1371/journal.pcbi.1009718.
Abstract
Constraining the many biological parameters that govern cortical dynamics is computationally and conceptually difficult because of the curse of dimensionality. This paper addresses these challenges by proposing (1) a novel data-informed mean-field (MF) approach to efficiently map the parameter space of network models; and (2) an organizing principle for studying parameter space that enables the extraction of biologically meaningful relations from this high-dimensional data. We illustrate these ideas using a large-scale network model of the Macaque primary visual cortex. Of the 10-20 model parameters, we identify 7 that are especially poorly constrained, and use the MF algorithm in (1) to discover the firing rate contours in this 7D parameter cube. Defining a "biologically plausible" region to consist of parameters that exhibit spontaneous Excitatory and Inhibitory firing rates compatible with experimental values, we find that this region is a slightly thickened codimension-1 submanifold. An implication of this finding is that while plausible regimes depend sensitively on parameters, they are also robust and flexible provided one compensates appropriately when parameters are varied. Our organizing principle for conceptualizing parameter dependence is to focus on certain 2D parameter planes that govern lateral inhibition: Intersecting these planes with the biologically plausible region leads to very simple geometric structures which, when suitably scaled, have a universal character independent of where the intersections are taken. In addition to elucidating the geometry of the plausible region, this invariance suggests useful approximate scaling relations. Our study offers, for the first time, a complete characterization of the set of all biologically plausible parameters for a detailed cortical model, which has been out of reach due to the high dimensionality of parameter space.
Affiliation(s)
- Zhuo-Cheng Xiao
- Courant Institute of Mathematical Sciences, New York University, New York, New York, United States of America
- Kevin K. Lin
- Department of Mathematics, University of Arizona, Tucson, Arizona, United States of America
- Lai-Sang Young
- Courant Institute of Mathematical Sciences, New York University, New York, New York, United States of America
- Institute for Advanced Study, Princeton, New Jersey, United States of America
3.
Shao Y, Zhang J, Tao L. Dimensional reduction of emergent spatiotemporal cortical dynamics via a maximum entropy moment closure. PLoS Comput Biol 2020;16:e1007265. PMID: 32516336. PMCID: PMC7304648. DOI: 10.1371/journal.pcbi.1007265.
Abstract
Modern electrophysiological recordings and optical imaging techniques have revealed a diverse spectrum of spatiotemporal neural activities underlying fundamental cognitive processing. Oscillations, traveling waves and other complex population dynamical patterns are often concomitant with sensory processing, information transfer, decision making and memory consolidation. While neural population models such as neural mass, population density and kinetic theoretical models have been used to capture a wide range of the experimentally observed dynamics, a full account of how the multi-scale dynamics emerges from the detailed biophysical properties of individual neurons and the network architecture remains elusive. Here we apply a recently developed coarse-graining framework for reduced-dimensional descriptions of neuronal networks to model visual cortical dynamics. We show how, without introducing any new parameters, a sequence of models culminating in an augmented system of spatially coupled ODEs can effectively model a wide range of the observed cortical dynamics, ranging from visual stimulus orientation dynamics to traveling waves induced by visual illusory stimuli. In addition to an efficient simulation method, this framework also offers an analytic approach to studying large-scale network dynamics. As such, the dimensional reduction naturally leads to mesoscopic variables that capture the interplay between neuronal population stochasticity and network architecture that we believe to underlie many emergent cortical phenomena.
Affiliation(s)
- Yuxiu Shao
- Center for Bioinformatics, National Laboratory of Protein Engineering and Plant Genetic Engineering, School of Life Sciences, Peking University, Beijing, China
- Jiwei Zhang
- School of Mathematics and Statistics, and Hubei Key Laboratory of Computational Science, Wuhan University, China
- Louis Tao
- Center for Bioinformatics, National Laboratory of Protein Engineering and Plant Genetic Engineering, School of Life Sciences, Peking University, Beijing, China
- Center for Quantitative Biology, Peking University, Beijing, China
4.
Lima Dias Pinto I, Copelli M. Oscillations and collective excitability in a model of stochastic neurons under excitatory and inhibitory coupling. Phys Rev E 2019;100:062416. PMID: 31962449. DOI: 10.1103/physreve.100.062416.
Abstract
We study a network of excitable neurons modeled as stochastic units with three states, representing quiescence, firing, and refractoriness. The transition rates between quiescence and firing depend exponentially on the number of firing neighbors, whereas all other rates are kept constant. This model class was shown to exhibit collective oscillations (synchronization) if neurons are spiking autonomously, but not if neurons are in the excitable regime. In both cases, neurons were restricted to interact through excitatory coupling. Here we show that a plethora of collective phenomena appear if inhibitory coupling is added. Besides the usual transition between an absorbing and an active phase, the model with excitatory and inhibitory neurons can also undergo reentrant transitions to an oscillatory phase. In the mean-field description, oscillations can emerge through supercritical or subcritical Hopf bifurcations, as well as through infinite period bifurcations. The model has bistability between active and oscillating behavior, as well as collective excitability, a regime where the system can display a peak of global activity when subject to a sufficiently strong perturbation. We employ a variant of the Shinomoto-Kuramoto order parameter to characterize the phase transitions and their system-size dependence.
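The three-state stochastic-unit scheme described in this abstract can be simulated directly with a synchronous Monte Carlo update. The sketch below uses an all-to-all excitatory graph and illustrative rate constants (the saturating exponential activation, parameter values, and initial condition are assumptions for demonstration, not the regimes analyzed in the paper).

```python
import numpy as np

# Three-state stochastic neurons: 0 = quiescent, 1 = firing, 2 = refractory.
# Synchronous Monte Carlo on an all-to-all excitatory graph; the rate
# constants below are illustrative assumptions, not the paper's parameters.
def simulate(n=1000, steps=200, w=2.0, base=0.001, p_fr=0.5, p_rq=0.2, seed=1):
    rng = np.random.default_rng(seed)
    state = np.zeros(n, dtype=int)
    state[rng.choice(n, size=50, replace=False)] = 1   # seed some firing units
    activity = []
    for _ in range(steps):
        frac_firing = np.mean(state == 1)
        # Q -> F: transition probability depends exponentially on the
        # fraction of firing neighbours (saturating form, an assumption)
        p_qf = 1.0 - np.exp(-(base + w * frac_firing))
        u = rng.random(n)
        q, f, r = state == 0, state == 1, state == 2
        state = np.where(q & (u < p_qf), 1, state)     # Q -> F
        state = np.where(f & (u < p_fr), 2, state)     # F -> R (constant rate)
        state = np.where(r & (u < p_rq), 0, state)     # R -> Q (constant rate)
        activity.append(frac_firing)
    return np.array(activity)

rho = simulate()   # time series of the firing fraction
```

Varying the excitatory gain `w` (and adding an inhibitory population) is the kind of parameter sweep one would run to probe the absorbing, active, and oscillatory phases the abstract describes.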
Affiliation(s)
- Mauro Copelli
- Physics Department, Federal University of Pernambuco (UFPE), Recife, PE 50670-901, Brazil
5.
Mattia M, Biggio M, Galluzzi A, Storace M. Dimensional reduction in networks of non-Markovian spiking neurons: Equivalence of synaptic filtering and heterogeneous propagation delays. PLoS Comput Biol 2019;15:e1007404. PMID: 31593569. PMCID: PMC6799936. DOI: 10.1371/journal.pcbi.1007404.
Abstract
Message passing between components of a distributed physical system is non-instantaneous and helps determine the time scales of the emerging collective dynamics. In biological neuron networks this is due in part to local synaptic filtering of exchanged spikes, and in part to the distribution of axonal transmission delays. How differently these two kinds of communication protocols affect the network dynamics is still an open issue, due to the difficulties in dealing with the non-Markovian nature of synaptic transmission. Here, we develop a mean-field dimensional reduction yielding an effective Markovian dynamics of the population density of the neuronal membrane potential, valid under the hypothesis of small fluctuations of the synaptic current. Within this limit, the resulting theory allows us to prove the formal equivalence between the two transmission mechanisms, holding for any synaptic time scale, integrate-and-fire neuron model, and spike emission regime, and for different network states, even when the neuron number is finite. The equivalence holds even for larger fluctuations of the synaptic input if white-noise currents are incorporated to model other possible biological features such as ionic channel stochasticity.
6.
Li Y, Xu H. Stochastic neural field model: multiple firing events and correlations. J Math Biol 2019;79:1169-1204. PMID: 31292682. DOI: 10.1007/s00285-019-01389-6.
Abstract
This paper studies a nonlinear dynamical phenomenon called the multiple firing event (MFE) in a spatially heterogeneous stochastic neural field model, which extends that in our previous paper (Li et al. in J Math Biol 78:83-115, 2018). MFEs are partially synchronized spiking barrages believed to be responsible for the Gamma oscillation. Rigorous results about stochastic stability and the law of large numbers are proved, which further imply the well-definedness and computability of many quantities related to MFEs. We then study spatial and temporal properties of MFEs. Our key finding is that MFEs are spatially correlated, but the spatial correlation decays quickly. Detailed mathematical justifications are given, based on qualitative models that aim to demonstrate the mechanism of MFEs.
Affiliation(s)
- Yao Li
- Department of Mathematics and Statistics, University of Massachusetts Amherst, Amherst, MA, 01002, USA
- Hui Xu
- Department of Mathematics, Amherst College, Amherst, MA, 01002, USA
7.
Zhang J, Shao Y, Rangan AV, Tao L. A coarse-graining framework for spiking neuronal networks: from strongly-coupled conductance-based integrate-and-fire neurons to augmented systems of ODEs. J Comput Neurosci 2019;46:211-232. PMID: 30788694. DOI: 10.1007/s10827-019-00712-w.
Abstract
Homogeneously structured, fluctuation-driven networks of spiking neurons can exhibit a wide variety of dynamical behaviors, ranging from homogeneity to synchrony. We extend our partitioned-ensemble average (PEA) formalism, proposed in Zhang et al. (Journal of Computational Neuroscience, 37(1), 81-104, 2014a), to systematically coarse-grain the heterogeneous dynamics of strongly coupled, conductance-based integrate-and-fire neuronal networks. The population dynamics models derived here successfully capture the so-called multiple-firing events (MFEs), which emerge naturally in fluctuation-driven networks of strongly coupled neurons. Although these MFEs likely play a crucial role in the generation of the neuronal avalanches observed in vitro and in vivo, the mechanisms underlying them cannot easily be understood using standard population dynamics models. Using our PEA formalism, we systematically generate a sequence of model reductions, going from Master equations to Fokker-Planck equations and, finally, to an augmented system of ordinary differential equations. Furthermore, we show that these reductions faithfully describe the heterogeneous dynamic regimes underlying the generation of MFEs in strongly coupled conductance-based integrate-and-fire neuronal networks.
Affiliation(s)
- Jiwei Zhang
- School of Mathematics and Statistics, Wuhan University, Wuhan, 430072, China
- Hubei Key Laboratory of Computational Science, Wuhan University, Wuhan, 430072, China
- Yuxiu Shao
- Center for Bioinformatics, National Laboratory of Protein Engineering and Plant Genetic Engineering, School of Life Sciences, Peking University, Beijing, 100871, China
- Center for Quantitative Biology, Peking University, Beijing, 100871, China
- Aaditya V Rangan
- Courant Institute of Mathematical Sciences, New York University, New York, NY, USA
- Louis Tao
- Center for Bioinformatics, National Laboratory of Protein Engineering and Plant Genetic Engineering, School of Life Sciences, Peking University, Beijing, 100871, China
- Center for Quantitative Biology, Peking University, Beijing, 100871, China
8.
Large deviations for randomly connected neural networks: I. Spatially extended systems. Adv Appl Probab 2018. DOI: 10.1017/apr.2018.42.
Abstract
In a series of two papers, we investigate the large deviations and asymptotic behavior of stochastic models of brain neural networks with random interaction coefficients. In this first paper, we take into account the spatial structure of the brain and consider first the presence of interaction delays that depend on the distance between cells and then the Gaussian random interaction amplitude with a mean and variance that depend on the position of the neurons and scale as the inverse of the network size. We show that the empirical measure satisfies a large deviations principle with a good rate function reaching its minimum at a unique spatially extended probability measure. This result implies an averaged convergence of the empirical measure and a propagation of chaos. The limit is characterized through a complex non-Markovian implicit equation in which the network interaction term is replaced by a nonlocal Gaussian process with a mean and covariance that depend on the statistics of the solution over the whole neural field.
9.
Li Y, Chariker L, Young LS. How well do reduced models capture the dynamics in models of interacting neurons? J Math Biol 2018;78:83-115. PMID: 30062392. DOI: 10.1007/s00285-018-1268-0.
Abstract
This paper introduces a class of stochastic models of interacting neurons with emergent dynamics similar to those seen in local cortical populations. Rigorous results on existence and uniqueness of nonequilibrium steady states are proved. These network models are then compared to very simple reduced models driven by the same mean excitatory and inhibitory currents. Discrepancies in firing rates between network and reduced models are investigated and explained by correlations in spiking, or partial synchronization, working in concert with "nonlinearities" in the time evolution of membrane potentials. The use of simple random walks and their first passage times to simulate fluctuations in neuronal membrane potentials and interspike times is also considered.
Affiliation(s)
- Yao Li
- Department of Mathematics and Statistics, University of Massachusetts Amherst, Amherst, MA, 01002, USA
- Logan Chariker
- Courant Institute of Mathematical Sciences, New York University, New York, NY, 10012, USA
- Lai-Sang Young
- Courant Institute of Mathematical Sciences, New York University, New York, NY, 10012, USA
10.
Avitable D, Wedgwood KCA. Macroscopic coherent structures in a stochastic neural network: from interface dynamics to coarse-grained bifurcation analysis. J Math Biol 2017;75:885-928. PMID: 28150175. PMCID: PMC5562874. DOI: 10.1007/s00285-016-1070-9.
Abstract
We study coarse pattern formation in a cellular automaton modelling a spatially-extended stochastic neural network. The model, originally proposed by Gong and Robinson (Phys Rev E 85(5):055101(R), 2012), is known to support stationary and travelling bumps of localised activity. We pose the model on a ring and study the existence and stability of these patterns in various limits using a combination of analytical and numerical techniques. In a purely deterministic version of the model, posed on a continuum, we construct bumps and travelling waves analytically using standard interface methods from neural field theory. In a stochastic version with Heaviside firing rate, we construct approximate analytical probability mass functions associated with bumps and travelling waves. In the full stochastic model posed on a discrete lattice, where a coarse analytic description is unavailable, we compute patterns and their linear stability using equation-free methods. The lifting procedure used in the coarse time-stepper is informed by the analysis in the deterministic and stochastic limits. In all settings, we identify the synaptic profile as a mesoscopic variable, and the width of the corresponding activity set as a macroscopic variable. Stationary and travelling bumps have similar meso- and macroscopic profiles, but different microscopic structure, hence we propose lifting operators which use microscopic motifs to disambiguate them. We provide numerical evidence that waves are supported by a combination of high synaptic gain and long refractory times, while meandering bumps are elicited by short refractory times.
Affiliation(s)
- Daniele Avitable
- Centre for Mathematical Medicine and Biology, School of Mathematical Sciences, University of Nottingham, Nottingham, NG2 7RD, UK
- Kyle C A Wedgwood
- Centre for Biomedical Modelling and Analysis, University of Exeter, Living Systems Institute, Stocker Road, Exeter, EX4 4QD, UK
11.
Cowan JD, Neuman J, van Drongelen W. Wilson-Cowan Equations for Neocortical Dynamics. J Math Neurosci 2016;6:1. PMID: 26728012. PMCID: PMC4733815. DOI: 10.1186/s13408-015-0034-5.
Abstract
In 1972-1973 Wilson and Cowan introduced a mathematical model of the population dynamics of synaptically coupled excitatory and inhibitory neurons in the neocortex. The model dealt only with the mean numbers of activated and quiescent excitatory and inhibitory neurons, and said nothing about fluctuations and correlations of such activity. However, in 1997 Ohira and Cowan, and then in 2007-2009 Buice and Cowan introduced Markov models of such activity that included fluctuation and correlation effects. Here we show how both models can be used to provide a quantitative account of the population dynamics of neocortical activity.

We first describe how the Markov models account for many recent measurements of the resting or spontaneous activity of the neocortex. In particular we show that the power spectrum of large-scale neocortical activity has a Brownian motion baseline, and that the statistical structure of the random bursts of spiking activity found near the resting state indicates that such a state can be represented as a percolation process on a random graph, called directed percolation.

Other data indicate that resting cortex exhibits pair correlations between neighboring populations of cells, the amplitudes of which decay slowly with distance, whereas stimulated cortex exhibits pair correlations which decay rapidly with distance. Here we show how the Markov model can account for the behavior of the pair correlations.

Finally we show how the 1972-1973 Wilson-Cowan equations can account for recent data which indicates that there are at least two distinct modes of cortical responses to stimuli. In mode 1 a low intensity stimulus triggers a wave that propagates at a velocity of about 0.3 m/s, with an amplitude that decays exponentially. In mode 2 a high intensity stimulus triggers a larger response that remains local and does not propagate to neighboring regions.
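For reference, the Wilson-Cowan rate equations mentioned in this abstract take the canonical form (a common textbook notation; coefficients and refractory terms vary between presentations, and this may not match the exact variant used in the paper):

```latex
\begin{aligned}
\tau_E \frac{dE}{dt} &= -E + (1 - r_E E)\, S_E\big(w_{EE} E - w_{EI} I + P(t)\big),\\
\tau_I \frac{dI}{dt} &= -I + (1 - r_I I)\, S_I\big(w_{IE} E - w_{II} I + Q(t)\big),
\end{aligned}
```

where E and I are the fractions of active excitatory and inhibitory cells, S_E and S_I are sigmoidal response functions, r_E and r_I set the refractory periods, and P(t), Q(t) are external inputs.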
Affiliation(s)
- Jack D Cowan
- Department of Mathematics, University of Chicago, 5734 South University Avenue, Chicago, IL, 60637, USA
- Jeremy Neuman
- Department of Physics, University of Chicago, 5720 South Ellis Avenue, Chicago, IL, 60637, USA
- Wim van Drongelen
- Department of Pediatrics, University of Chicago, KCBD 900 East 57th Street, Chicago, IL, 60637, USA
12.
Siettos C, Starke J. Multiscale modeling of brain dynamics: from single neurons and networks to mathematical tools. Wiley Interdiscip Rev Syst Biol Med 2016;8:438-458. PMID: 27340949. DOI: 10.1002/wsbm.1348.
Abstract
The extreme complexity of the brain naturally requires mathematical modeling approaches on a large variety of scales; the spectrum ranges from single neuron dynamics over the behavior of groups of neurons to neuronal network activity. Thus, the connection from the microscopic scale (single-neuron activity) to macroscopic behavior (emergent collective dynamics), and vice versa, is key to understanding the brain in its complexity. In this work, we attempt a review of a wide range of approaches, ranging from the modeling of single neuron dynamics to machine learning. The models include biophysical as well as data-driven phenomenological models. The discussed models include Hodgkin-Huxley, FitzHugh-Nagumo, coupled oscillators (Kuramoto oscillators, Rössler oscillators, and the Hindmarsh-Rose neuron), Integrate and Fire, networks of neurons, and neural field equations. In addition to the mathematical models, important mathematical methods in multiscale modeling and reconstruction of the causal connectivity are sketched. The methods include linear and nonlinear tools from statistics, data analysis, and time series analysis up to differential equations, dynamical systems, and bifurcation theory, including Granger causal connectivity analysis, phase synchronization connectivity analysis, principal component analysis (PCA), independent component analysis (ICA), and manifold learning algorithms such as ISOMAP and diffusion maps, as well as equation-free techniques.
Affiliation(s)
- Constantinos Siettos
- School of Applied Mathematics and Physical Sciences, National Technical University of Athens, Athens, Greece
- Jens Starke
- School of Mathematical Sciences, Queen Mary University of London, London, UK
13.
Barranca VJ, Zhou D, Cai D. Compressive sensing reconstruction of feed-forward connectivity in pulse-coupled nonlinear networks. Phys Rev E 2016;93:060201. PMID: 27415190. DOI: 10.1103/physreve.93.060201.
Abstract
Utilizing the sparsity ubiquitous in real-world network connectivity, we develop a theoretical framework for efficiently reconstructing the sparse feed-forward connections of a pulse-coupled nonlinear network from its output activities. Using only a small ensemble of random inputs, we solve this inverse problem using compressive sensing theory, based on a hidden linear structure intrinsic to the nonlinear network dynamics. The accuracy of the reconstruction is further verified by the fact that complex inputs can be well recovered using the reconstructed connectivity. We expect this Rapid Communication to provide a new perspective for understanding the structure-function relationship, as well as the compressive sensing principle, in nonlinear network dynamics.
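As background on the compressive-sensing machinery invoked in this abstract: a k-sparse vector can be recovered from far fewer random linear measurements than its dimension. The sketch below uses orthogonal matching pursuit, a standard generic CS solver, not the authors' specific reconstruction scheme; the problem sizes, Gaussian measurement matrix, and planted coefficients are illustrative assumptions.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x with y = A @ x."""
    n = A.shape[1]
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(n)
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit on the chosen support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - A @ x
    return x

rng = np.random.default_rng(0)
n, m, k = 100, 40, 3                      # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
idx = rng.choice(n, size=k, replace=False)
x_true[idx] = np.array([1.5, -2.0, 1.0])  # planted sparse signal
x_hat = omp(A, A @ x_true, k)             # recover from only m = 40 samples
```

With m well above the information-theoretic threshold for k = 3, the greedy recovery is exact here; the paper's contribution is finding an analogous linear structure hidden inside nonlinear spiking dynamics.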
Affiliation(s)
- Victor J Barranca
- Department of Mathematics and Statistics, Swarthmore College, Swarthmore, Pennsylvania 19081, USA
- Douglas Zhou
- Department of Mathematics, MOE-LSC, Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai 200240, China
- David Cai
- Department of Mathematics, MOE-LSC, Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai 200240, China
- Courant Institute of Mathematical Sciences and Center for Neural Science, New York University, New York, New York 10012, USA
- NYUAD Institute, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
14.
Barranca VJ, Kovačič G, Zhou D, Cai D. Efficient image processing via compressive sensing of integrate-and-fire neuronal network dynamics. Neurocomputing 2016. DOI: 10.1016/j.neucom.2015.07.067.
15.
Zhang JW, Rangan AV. A reduction for spiking integrate-and-fire network dynamics ranging from homogeneity to synchrony. J Comput Neurosci 2015;38:355-404. PMID: 25601481. DOI: 10.1007/s10827-014-0543-3.
Abstract
In this paper we provide a general methodology for systematically reducing the dynamics of a class of integrate-and-fire networks down to an augmented 4-dimensional system of ordinary differential equations. The class of integrate-and-fire networks we focus on is homogeneously structured, strongly coupled, and fluctuation-driven. Our reduction succeeds where most current firing-rate and population-dynamics models fail because we account for the emergence of 'multiple-firing-events' involving the semi-synchronous firing of many neurons. These multiple-firing-events are largely responsible for the fluctuations generated by the network and, as a result, our reduction faithfully describes many dynamic regimes ranging from homogeneous to synchronous. Our reduction is based on first principles, and provides an analyzable link between the integrate-and-fire network parameters and the relatively low-dimensional dynamics underlying the 4-dimensional augmented ODE.
Affiliation(s)
- J W Zhang
- Courant Institute of Mathematical Sciences, New York University, New York, NY, USA
16.
Barranca VJ, Kovačič G, Zhou D, Cai D. Network dynamics for optimal compressive-sensing input-signal recovery. Phys Rev E 2014;90:042908. PMID: 25375568. DOI: 10.1103/physreve.90.042908.
Abstract
By using compressive sensing (CS) theory, a broad class of static signals can be reconstructed through a sequence of very few measurements in the framework of a linear system. For networks with nonlinear and time-evolving dynamics, is it similarly possible to recover an unknown input signal from only a small number of network output measurements? We address this question for pulse-coupled networks and investigate the network dynamics necessary for successful input signal recovery. Determining the specific network characteristics that correspond to a minimal input reconstruction error, we are able to achieve high-quality signal reconstructions with few measurements of network output. Using various measures to characterize dynamical properties of network output, we determine that networks with highly variable and aperiodic output can successfully encode network input information with high fidelity and achieve the most accurate CS input reconstructions. For time-varying inputs, we also find that high-quality reconstructions are achievable by measuring network output over a relatively short time window. Even when network inputs change with time, the same optimal choice of network characteristics and corresponding dynamics apply as in the case of static inputs.
Affiliation(s)
- Victor J Barranca
- Courant Institute of Mathematical Sciences & Center for Neural Science, New York University, New York, New York 10012, USA and NYUAD Institute, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Gregor Kovačič
- Mathematical Sciences Department, Rensselaer Polytechnic Institute, Troy, New York 12180, USA
- Douglas Zhou
- Department of Mathematics, MOE-LSC, and Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, China
- David Cai
- Courant Institute of Mathematical Sciences & Center for Neural Science, New York University, New York, New York 10012, USA and NYUAD Institute, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates and Department of Mathematics, MOE-LSC, and Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, China
17
Barranca VJ, Kovačič G, Zhou D, Cai D. Sparsity and compressed coding in sensory systems. PLoS Comput Biol 2014; 10:e1003793. [PMID: 25144745 PMCID: PMC4140640 DOI: 10.1371/journal.pcbi.1003793] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2014] [Accepted: 07/04/2014] [Indexed: 11/28/2022] Open
Abstract
Considering that many natural stimuli are sparse, can a sensory system evolve to take advantage of this sparsity? We explore this question and show that significant downstream reductions in the numbers of neurons transmitting stimuli observed in early sensory pathways might be a consequence of this sparsity. First, we model an early sensory pathway using an idealized neuronal network comprised of receptors and downstream sensory neurons. Then, by revealing a linear structure intrinsic to neuronal network dynamics, our work points to a potential mechanism for transmitting sparse stimuli, related to compressed-sensing (CS) type data acquisition. Through simulation, we examine the characteristics of networks that are optimal in sparsity encoding, and the impact of localized receptive fields beyond conventional CS theory. The results of this work suggest a new network framework of signal sparsity, freeing the notion from any dependence on specific component-space representations. We expect our CS network mechanism to provide guidance for studying sparse stimulus transmission along realistic sensory pathways as well as engineering network designs that utilize sparsity encoding. In forming a mental percept of the surrounding world, sensory information is processed and transmitted through a wide array of neuronal networks of various sizes and functionalities. Despite, and perhaps because of, this, sensory systems are able to render highly accurate representations of stimuli. In the retina, for example, photoreceptors transform light into electric signals, which are later processed by a significantly smaller network of ganglion cells before entering the optic nerve. How then is sensory information preserved along such a pathway? In this work, we put forth a possible answer to this question using compressed sensing, a recent advance in the field of signal processing that demonstrates how sparse signals can be reconstructed using very few samples. 
Through model simulation, we discover that stimuli can be recovered from ganglion-cell dynamics, and demonstrate how localized receptive fields improve stimulus encoding. We hypothesize that organisms have evolved to utilize the sparsity of stimuli, demonstrating that compressed sensing may be a universal information-processing framework underlying both information acquisition and retention in sensory systems.
Affiliation(s)
- Victor J. Barranca
- Courant Institute of Mathematical Sciences and Center for Neural Science, New York University, New York, New York, United States of America
- NYUAD Institute, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Gregor Kovačič
- Mathematical Sciences Department, Rensselaer Polytechnic Institute, Troy, New York, United States of America
- Douglas Zhou
- Department of Mathematics, MOE-LSC, and Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, China
- David Cai
- Courant Institute of Mathematical Sciences and Center for Neural Science, New York University, New York, New York, United States of America
- NYUAD Institute, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Department of Mathematics, MOE-LSC, and Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, China
18
A coarse-grained framework for spiking neuronal networks: between homogeneity and synchrony. J Comput Neurosci 2013; 37:81-104. [DOI: 10.1007/s10827-013-0488-y] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2013] [Revised: 11/06/2013] [Accepted: 11/11/2013] [Indexed: 10/25/2022]
19
Wei H, Ren Y, Wang ZY. A computational neural model of orientation detection based on multiple guesses: comparison of geometrical and algebraic models. Cogn Neurodyn 2013; 7:361-79. [PMID: 24427212 PMCID: PMC3773326 DOI: 10.1007/s11571-012-9235-8] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2012] [Revised: 11/09/2012] [Accepted: 12/14/2012] [Indexed: 02/03/2023] Open
Abstract
The implementation of the Hubel-Wiesel hypothesis that orientation selectivity of a simple cell is based on the ordered arrangement of its afferent cells has some difficulties. It requires the receptive fields (RFs) of those ganglion cells (GCs) and LGN cells to be similar in size and sub-structure and highly arranged in a perfect order. It also requires an adequate number of regularly distributed simple cells to match ubiquitous edges. However, the anatomical and electrophysiological evidence is not strong enough to support this geometry-based model. These strict regularities also make the model very uneconomical in both evolution and neural computation. We propose a new neural model based on an algebraic method to estimate orientations. This approach synthesizes the guesses made by multiple GCs or LGN cells and calculates local orientation information subject to a group of constraints. This algebraic model need not obey the constraints of the Hubel-Wiesel hypothesis, and is easily implemented with a neural network. By using the idea of a satisfiability problem with constraints, we also prove that the precision and efficiency of this model are mathematically practicable. The proposed model clarifies several major questions that the Hubel-Wiesel model does not account for. Image-rebuilding experiments are conducted to check whether this model misses any important boundary in the visual field because of the estimation strategy. This study is significant in terms of explaining the neural mechanism of orientation detection, and finding the circuit structure and computational route in neural networks. For engineering applications, our model can be used in orientation detection and as a simulation platform for cell-to-cell communications to develop bio-inspired eye chips.
Affiliation(s)
- Hui Wei
- Laboratory of Cognitive Model and Algorithm, School of Computer Science, Fudan University, Shanghai, 200433 China
- Yuan Ren
- Laboratory of Cognitive Model and Algorithm, School of Computer Science, Fudan University, Shanghai, 200433 China
- Zi Yan Wang
- Laboratory of Cognitive Model and Algorithm, School of Computer Science, Fudan University, Shanghai, 200433 China
20
Hong L, Yong WA. Simple moment-closure model for the self-assembly of breakable amyloid filaments. Biophys J 2013; 104:533-40. [PMID: 23442904 DOI: 10.1016/j.bpj.2012.12.039] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2012] [Revised: 12/19/2012] [Accepted: 12/21/2012] [Indexed: 11/25/2022] Open
Abstract
In this work, we derive a simple mathematical model from mass-action equations for amyloid fiber formation that takes into account the primary nucleation, elongation, and length-dependent fragmentation. The derivation is based on the principle of minimum free energy under certain constraints and is mathematically related to the partial equilibrium approximation. Direct numerical comparisons confirm the usefulness of our simple model. We further explore its basic kinetic and equilibrium properties, and show that the current model is a straightforward generalization of that with constant fragmentation rates.
Affiliation(s)
- Liu Hong
- Zhou-Pei Yuan Center for Applied Mathematics, Tsinghua University, Beijing, P R China.
21
Distribution of correlated spiking events in a population-based approach for Integrate-and-Fire networks. J Comput Neurosci 2013; 36:279-95. [PMID: 23851661 DOI: 10.1007/s10827-013-0472-6] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2013] [Revised: 06/12/2013] [Accepted: 06/16/2013] [Indexed: 10/26/2022]
Abstract
Randomly connected populations of spiking neurons display a rich variety of dynamics. However, much of the current modeling and theoretical work has focused on two dynamical extremes: on one hand homogeneous dynamics characterized by weak correlations between neurons, and on the other hand total synchrony characterized by large populations firing in unison. In this paper we address the conceptual issue of how to mathematically characterize the partially synchronous "multiple firing events" (MFEs) which manifest in between these two dynamical extremes. We further develop a geometric method for obtaining the distribution of magnitudes of these MFEs by recasting the cascading firing event process as a first-passage time problem, and deriving an analytical approximation of the first passage time density valid for large neuron populations. Thus, we establish a direct link between the voltage distributions of excitatory and inhibitory neurons and the number of neurons firing in an MFE that can be easily integrated into population-based computational methods, thereby bridging the gap between homogeneous firing regimes and total synchrony.
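The idea of recasting a cascading firing event as a first-passage problem can be mimicked with a Monte Carlo toy: a biased ±1 random walk absorbed at a threshold L, whose mean first-passage time has the closed form L/(2p - 1). This is only an illustrative analogue under invented parameters, not the paper's analytical approximation for MFE magnitudes:

```python
import random

# Toy first-passage problem: a +/-1 random walk with upward bias p_up,
# absorbed at `threshold`. Mean first-passage time is threshold/(2*p_up - 1).

def first_passage_time(threshold, p_up=0.7, rng=None, max_steps=100_000):
    """Number of +/-1 steps until the walk first reaches `threshold`."""
    rng = rng or random.Random()
    x = t = 0
    while x < threshold and t < max_steps:
        x += 1 if rng.random() < p_up else -1
        t += 1
    return t

# Monte Carlo mean vs. the closed form 10 / (2*0.7 - 1) = 25.
rng = random.Random(0)
mean_t = sum(first_passage_time(10, 0.7, rng) for _ in range(2000)) / 2000
```

With 2000 samples the empirical mean lands close to the theoretical value of 25 for these parameters.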
22
Ly C. A principled dimension-reduction method for the population density approach to modeling networks of neurons with synaptic dynamics. Neural Comput 2013; 25:2682-708. [PMID: 23777517 DOI: 10.1162/neco_a_00489] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
The population density approach to neural network modeling has been utilized in a variety of contexts. The idea is to group many similar noisy neurons into populations and track the probability density function for each population that encompasses the proportion of neurons with a particular state rather than simulating individual neurons (i.e., Monte Carlo). It is commonly used for both analytic insight and as a time-saving computational tool. The main shortcoming of this method is that when realistic attributes are incorporated in the underlying neuron model, the dimension of the probability density function increases, leading to intractable equations or, at best, computationally intensive simulations. Thus, developing principled dimension-reduction methods is essential for the robustness of these powerful methods. As a more pragmatic tool, it would be of great value for the larger theoretical neuroscience community. For exposition of this method, we consider a single uncoupled population of leaky integrate-and-fire neurons receiving external excitatory synaptic input only. We present a dimension-reduction method that reduces a two-dimensional partial differential-integral equation to a computationally efficient one-dimensional system and gives qualitatively accurate results in both the steady-state and nonequilibrium regimes. The method, termed modified mean-field method, is based entirely on the governing equations and not on any auxiliary variables or parameters, and it does not require fine-tuning. The principles of the modified mean-field method have potential applicability to more realistic (i.e., higher-dimensional) neural networks.
Affiliation(s)
- Cheng Ly
- Department of Statistical Sciences and Operations Research, Virginia Commonwealth University, Richmond, VA 23284-3083, USA.
23
Buice MA, Chow CC. Beyond mean field theory: statistical field theory for neural networks. J Stat Mech 2013; 2013:P03003. [PMID: 25243014 PMCID: PMC4169078 DOI: 10.1088/1742-5468/2013/03/p03003] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
Mean field theories have been a stalwart for studying the dynamics of networks of coupled neurons. They are convenient because they are relatively simple and possible to analyze. However, classical mean field theory neglects the effects of fluctuations and correlations due to single neuron effects. Here, we consider various possible approaches for going beyond mean field theory and incorporating correlation effects. Statistical field theory methods, in particular the Doi-Peliti-Janssen formalism, are particularly useful in this regard.
Affiliation(s)
- Michael A Buice
- Center for Learning and Memory, University of Texas at Austin, Austin, TX, USA
- Carson C Chow
- Laboratory of Biological Modeling, NIDDK, NIH, Bethesda, MD, USA
24
Buice MA, Chow CC. Dynamic finite size effects in spiking neural networks. PLoS Comput Biol 2013; 9:e1002872. [PMID: 23359258 PMCID: PMC3554590 DOI: 10.1371/journal.pcbi.1002872] [Citation(s) in RCA: 46] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2012] [Accepted: 11/21/2012] [Indexed: 11/19/2022] Open
Abstract
We investigate the dynamics of a deterministic finite-sized network of synaptically coupled spiking neurons and present a formalism for computing the network statistics in a perturbative expansion. The small parameter for the expansion is the inverse number of neurons in the network. The network dynamics are fully characterized by a neuron population density that obeys a conservation law analogous to the Klimontovich equation in the kinetic theory of plasmas. The Klimontovich equation does not possess well-behaved solutions but can be recast in terms of a coupled system of well-behaved moment equations, known as a moment hierarchy. The moment hierarchy is impossible to solve but in the mean field limit of an infinite number of neurons, it reduces to a single well-behaved conservation law for the mean neuron density. For a large but finite system, the moment hierarchy can be truncated perturbatively with the inverse system size as a small parameter, but the resulting set of reduced moment equations is still very difficult to solve. However, the entire moment hierarchy can also be re-expressed in terms of a functional probability distribution of the neuron density. The moments can then be computed perturbatively using methods from statistical field theory. Here we derive the complete mean field theory and the lowest order second moment corrections for physiologically relevant quantities. Although we focus on finite-size corrections, our method can be used to compute perturbative expansions in any parameter. One avenue towards understanding how the brain functions is to create computational and mathematical models. However, a human brain has on the order of a hundred billion neurons with a quadrillion synaptic connections. Each neuron is a complex cell composed of multiple compartments hosting a myriad of ions, proteins and other molecules.
Even if computing power continues to increase exponentially, directly simulating all the processes in the brain on a computer is not feasible in the foreseeable future and even if this could be achieved, the resulting simulation may be no simpler to understand than the brain itself. Hence, the need for more tractable models. Historically, systems with many interacting bodies are easier to understand in the two opposite limits of a small number or an infinite number of elements and most of the theoretical efforts in understanding neural networks have been devoted to these two limits. There has been relatively little effort directed to the very relevant but difficult regime of large but finite networks. In this paper, we introduce a new formalism that borrows from the methods of many-body statistical physics to analyze finite size effects in spiking neural networks.
Affiliation(s)
- Michael A. Buice
- Laboratory of Biological Modeling, NIDDK, NIH, Bethesda, Maryland, United States of America
- Carson C. Chow
- Laboratory of Biological Modeling, NIDDK, NIH, Bethesda, Maryland, United States of America
25
Perthame B, Salort D. On a voltage-conductance kinetic system for integrate & fire neural networks. Kinet Relat Models 2013; 6:841. [DOI: 10.3934/krm.2013.6.841] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
26
27
Baladron J, Fasoli D, Faugeras O, Touboul J. Mean-field description and propagation of chaos in networks of Hodgkin-Huxley and FitzHugh-Nagumo neurons. J Math Neurosci 2012; 2:10. [PMID: 22657695 PMCID: PMC3497713 DOI: 10.1186/2190-8567-2-10] [Citation(s) in RCA: 60] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/19/2011] [Accepted: 03/09/2012] [Indexed: 05/20/2023]
Abstract
We derive the mean-field equations arising as the limit of a network of interacting spiking neurons, as the number of neurons goes to infinity. The neurons belong to a fixed number of populations and are represented either by the Hodgkin-Huxley model or by one of its simplified versions, the FitzHugh-Nagumo model. The synapses between neurons are either electrical or chemical. The network is assumed to be fully connected. The maximum conductances vary randomly. Under the condition that all neurons' initial conditions are drawn independently from the same law that depends only on the population they belong to, we prove that a propagation of chaos phenomenon takes place, namely that in the mean-field limit, any finite number of neurons become independent and, within each population, have the same probability distribution. This probability distribution is a solution of a set of implicit equations, either nonlinear stochastic differential equations resembling the McKean-Vlasov equations or non-local partial differential equations resembling the McKean-Vlasov-Fokker-Planck equations. We prove the well-posedness of the McKean-Vlasov equations, i.e. the existence and uniqueness of a solution. We also show the results of some numerical experiments that indicate that the mean-field equations are a good representation of the mean activity of a finite size network, even for modest sizes. These experiments also indicate that the McKean-Vlasov-Fokker-Planck equations may be a good way to understand the mean-field dynamics through, e.g. a bifurcation analysis. Mathematics Subject Classification (2000): 60F99, 60B10, 92B20, 82C32, 82C80, 35Q80.
Affiliation(s)
- Javier Baladron
- NeuroMathComp Laboratory, INRIA, Sophia-Antipolis Méditerranée, 06902, France.
28
Shkarayev MS, Kovačič G, Cai D. Topological effects on dynamics in complex pulse-coupled networks of integrate-and-fire type. Phys Rev E Stat Nonlin Soft Matter Phys 2012; 85:036104. [PMID: 22587146 DOI: 10.1103/physreve.85.036104] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/15/2011] [Revised: 01/31/2012] [Indexed: 05/31/2023]
Abstract
For a class of integrate-and-fire, pulse-coupled networks with complex topology, we study the dependence of the pulse rate on the underlying architectural connectivity statistics. We derive the distribution of the pulse rate from this dependence and determine when the underlying scale-free architectural connectivity gives rise to a scale-free pulse-rate distribution. We identify the scaling of the pairwise coupling between the dynamical units in this network class that keeps their pulse rates bounded in the infinite-network limit. In the process, we determine the connectivity statistics for a specific scale-free network grown by preferential attachment.
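The scale-free connectivity analyzed here can be sketched by growing a network via preferential attachment, where each new node attaches to existing nodes with probability proportional to their current degree. The following is an illustrative Barabási-Albert-style sketch (pure Python, invented sizes); it reproduces neither the paper's specific growth rule nor the pulse dynamics:

```python
import random

# Grow a graph by preferential attachment: each new node makes m links,
# choosing targets with probability proportional to current degree.
# Sampling uniformly from a list holding one entry per half-edge realizes
# the degree-proportional choice.

def grow_preferential(n, m=2, rng=None):
    """Return the degree list of an n-node preferential-attachment graph."""
    rng = rng or random.Random()
    degree = [m] * (m + 1)                 # seed: complete graph on m+1 nodes
    targets = [i for i in range(m + 1) for _ in range(m)]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:             # m distinct, degree-biased targets
            chosen.add(rng.choice(targets))
        degree.append(m)
        for t in chosen:
            degree[t] += 1
            targets.append(t)
        targets.extend([new] * m)
    return degree

deg = grow_preferential(2000, m=2, rng=random.Random(1))
```

The resulting degree sequence is heavy-tailed: early nodes become hubs with degrees far above the mean of roughly 2m, the signature of scale-free architectural connectivity.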
Affiliation(s)
- Maxim S Shkarayev
- Mathematical Sciences Department, Rensselaer Polytechnic Institute, 110 8th Street, Troy, New York 12180, USA
29
A timestepper-based approach for the coarse-grained analysis of microscopic neuronal simulators on networks: Bifurcation and rare-events micro- to macro-computations. Neurocomputing 2011. [DOI: 10.1016/j.neucom.2011.06.018] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
30
Deville REL, Peskin CS. Synchrony and asynchrony for neuronal dynamics defined on complex networks. Bull Math Biol 2011; 74:769-802. [PMID: 21755391 DOI: 10.1007/s11538-011-9674-0] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2009] [Accepted: 06/14/2011] [Indexed: 11/29/2022]
Abstract
We describe and analyze a model for a stochastic pulse-coupled neuronal network with many sources of randomness: random external input, potential synaptic failure, and random connectivity topologies. We show that different classes of network topologies give rise to qualitatively different types of synchrony: uniform (Erdős-Rényi) and "small-world" networks give rise to synchronization phenomena similar to that in "all-to-all" networks (in which there is a sharp onset of synchrony as coupling is increased); in contrast, in "scale-free" networks the dependence of synchrony on coupling strength is smoother. Moreover, we show that in the uniform and small-world cases, the fine details of the network are not important in determining the synchronization properties; this depends only on the mean connectivity. In contrast, for scale-free networks, the dynamics are significantly affected by the fine details of the network; in particular, they are significantly affected by the local neighborhoods of the "hubs" in the network.
31
Touboul JD, Ermentrout GB. Finite-size and correlation-induced effects in mean-field dynamics. J Comput Neurosci 2011; 31:453-84. [PMID: 21384156 DOI: 10.1007/s10827-011-0320-5] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2010] [Revised: 01/28/2011] [Accepted: 02/16/2011] [Indexed: 10/18/2022]
Abstract
The brain's activity is characterized by the interaction of a very large number of neurons that are strongly affected by noise. However, signals often arise at macroscopic scales integrating the effect of many neurons into a reliable pattern of activity. In order to study such large neuronal assemblies, one is often led to derive mean-field limits summarizing the effect of the interaction of a large number of neurons into an effective signal. Classical mean-field approaches consider the evolution of a deterministic variable, the mean activity, thus neglecting the stochastic nature of neural behavior. In this article, we build upon two recent approaches that include correlations and higher order moments in mean-field equations, and study how these stochastic effects influence the solutions of the mean-field equations, both in the limit of an infinite number of neurons and for large yet finite networks. We introduce a new model, the infinite model, which arises from both equations by a rescaling of the variables; this rescaling is invertible for finite-size networks, and hence the infinite model provides equations equivalent to the previously derived ones. The study of this model allows us to understand the qualitative behavior of such large-scale networks. We show that, though the solutions of the deterministic mean-field equation constitute uncorrelated solutions of the new mean-field equations, the stability properties of limit cycles are modified by the presence of correlations, and additional non-trivial behaviors including periodic orbits appear when there were none in the mean field. The origin of all these behaviors is then explored in finite-size networks where interesting mesoscopic scale effects appear. This study leads us to show that the infinite-size system appears as a singular limit of the network equations, and for any finite network, the system will differ from the infinite system.
Affiliation(s)
- Jonathan D Touboul
- NeuroMathComp Laboratory, INRIA/ENS Paris, 23 Avenue d'Italie, 75013 Paris, France.
32
Bressloff PC. Metastable states and quasicycles in a stochastic Wilson-Cowan model of neuronal population dynamics. Phys Rev E Stat Nonlin Soft Matter Phys 2010; 82:051903. [PMID: 21230496 DOI: 10.1103/physreve.82.051903] [Citation(s) in RCA: 54] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/13/2010] [Revised: 09/22/2010] [Indexed: 05/08/2023]
Abstract
We analyze a stochastic model of neuronal population dynamics with intrinsic noise. In the thermodynamic limit N→∞, where N determines the size of each population, the dynamics is described by deterministic Wilson-Cowan equations. On the other hand, for finite N the dynamics is described by a master equation that determines the probability of spiking activity within each population. We first consider a single excitatory population that exhibits bistability in the deterministic limit. The steady-state probability distribution of the stochastic network has maxima at points corresponding to the stable fixed points of the deterministic network; the relative weighting of the two maxima depends on the system size. For large but finite N, we calculate the exponentially small rate of noise-induced transitions between the resulting metastable states using a Wentzel-Kramers-Brillouin (WKB) approximation and matched asymptotic expansions. We then consider a two-population excitatory-inhibitory network that supports limit cycle oscillations. Using a diffusion approximation, we reduce the dynamics to a neural Langevin equation, and show how the intrinsic noise amplifies subthreshold oscillations (quasicycles).
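The deterministic N→∞ limit mentioned in this abstract is the classical Wilson-Cowan rate system. A forward-Euler sketch follows (pure Python; the coupling weights and inputs are illustrative assumptions, not values from the paper):

```python
import math

# Deterministic Wilson-Cowan rate equations, integrated with forward
# Euler. Illustrative parameters only; the paper's stochastic master
# equation and its finite-N corrections are not modeled here.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def wilson_cowan(steps=5000, dt=0.01, w_ee=10.0, w_ei=8.0,
                 w_ie=12.0, w_ii=3.0, i_e=1.0, i_i=0.0):
    """Integrate dE/dt = -E + f(w_ee*E - w_ei*I + i_e) and the analogous
    inhibitory equation; return the final (E, I) activities."""
    e = i = 0.1
    for _ in range(steps):
        de = -e + sigmoid(w_ee * e - w_ei * i + i_e)
        di = -i + sigmoid(w_ie * e - w_ii * i + i_i)
        e, i = e + dt * de, i + dt * di
    return e, i

e_act, i_act = wilson_cowan()   # activities stay in (0, 1) by construction
```

Because the gain function is bounded in (0, 1) and dt is small, the Euler map keeps both activities inside (0, 1); with all couplings and inputs set to zero, both populations relax to the gain's midpoint f(0) = 0.5.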
Affiliation(s)
- Paul C Bressloff
- Mathematical Institute, University of Oxford, 24-29 St. Giles', Oxford OX1 3LB, United Kingdom
33
Lu W, Rossoni E, Feng J. On a Gaussian neuronal field model. Neuroimage 2010; 52:913-33. [DOI: 10.1016/j.neuroimage.2010.02.075] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2009] [Revised: 02/09/2010] [Accepted: 02/26/2010] [Indexed: 10/19/2022] Open
34
Laing CR, Frewen T, Kevrekidis IG. Reduced models for binocular rivalry. J Comput Neurosci 2010; 28:459-76. [PMID: 20182782 DOI: 10.1007/s10827-010-0227-6] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2009] [Revised: 01/21/2010] [Accepted: 02/15/2010] [Indexed: 11/28/2022]
Abstract
Binocular rivalry occurs when two very different images are presented to the two eyes, but a subject perceives only one image at a given time. A number of computational models for binocular rivalry have been proposed; most can be categorised as either "rate" models, containing a small number of variables, or as more biophysically-realistic "spiking neuron" models. However, a principled derivation of a reduced model from a spiking model is lacking. We present two such derivations, one heuristic and a second using recently-developed data-mining techniques to extract a small number of "macroscopic" variables from the results of a spiking neuron model simulation. We also consider bifurcations that can occur as parameters are varied, and the role of noise in such systems. Our methods are applicable to a number of other models of interest.
Affiliation(s)
- Carlo R Laing
- IIMS, Massey University, Private Bag 102-904, NSMC, Auckland, New Zealand.
35
Abstract
Population rate or activity equations are the foundation of a common approach to modeling for neural networks. These equations provide mean field dynamics for the firing rate or activity of neurons within a network given some connectivity. The shortcoming of these equations is that they take into account only the average firing rate, while leaving out higher-order statistics like correlations between firing. A stochastic theory of neural networks that includes statistics at all orders was recently formulated. We describe how this theory yields a systematic extension to population rate equations by introducing equations for correlations and appropriate coupling terms. Each level of the approximation yields closed equations; they depend only on the mean and specific correlations of interest, without an ad hoc criterion for doing so. We show in an example of an all-to-all connected network how our system of generalized activity equations captures phenomena missed by the mean field rate equations alone.
36
Coombes S. Large-scale neural dynamics: simple and complex. Neuroimage 2010; 52:731-9. [PMID: 20096791 DOI: 10.1016/j.neuroimage.2010.01.045] [Citation(s) in RCA: 89] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2009] [Revised: 12/23/2009] [Accepted: 01/13/2010] [Indexed: 11/24/2022] Open
Abstract
We review the use of neural field models for modelling the brain at the large scales necessary for interpreting EEG, fMRI, MEG and optical imaging data. Although limited to coarse-grained or mean-field activity, neural field models provide a framework for unifying data from different imaging modalities. Starting with a description of neural mass models, we build up to spatially extended cortical models of layered two-dimensional sheets with long-range axonal connections mediating synaptic interactions. Reformulations of the fundamental non-local mathematical model in terms of more familiar local differential (brain wave) equations are described. Techniques for the analysis of such models, including how to determine the onset of spatio-temporal pattern forming instabilities, are reviewed. Extensions of the basic formalism to treat refractoriness, adaptive feedback and inhomogeneous connectivity are described along with open challenges for the development of multi-scale models that can integrate macroscopic models at large spatial scales with models at the microscopic scale.
Affiliation(s)
- S Coombes
- School of Mathematical Sciences, University of Nottingham, Nottingham, NG7 2RD, UK.
37
Shinozaki T, Okada M, Reyes AD, Câteau H. Flexible traffic control of the synfire-mode transmission by inhibitory modulation: nonlinear noise reduction. Phys Rev E 2010; 81:011913. [PMID: 20365405] [DOI: 10.1103/physreve.81.011913]
Abstract
The intermingled neural connections apparent in the brain raise the question of what controls the traffic of propagating activity so as to secure signal transmission without harmful crosstalk. Here, we show that inhibitory input, but not excitatory input, works as a particularly useful traffic controller because it bidirectionally controls both the degree of synchrony of population firing and the size of the firing population. Our dynamical-systems analysis reveals that the synchrony enhancement depends crucially on the nonlinear membrane-potential dynamics and a hidden slow dynamical variable. Our electrophysiological study with rodent slice preparations shows that the phenomenon occurs in real neurons. Furthermore, our analysis with the Fokker-Planck equations demonstrates the phenomenon in a semianalytical manner.
38
Dimensionally-reduced visual cortical network model predicts network response and connects system- and cellular-level descriptions. J Comput Neurosci 2009; 28:91-106. [PMID: 19806444] [DOI: 10.1007/s10827-009-0189-8]
39
Rangan AV. Diagrammatic expansion of pulse-coupled network dynamics in terms of subnetworks. Phys Rev E 2009; 80:036101. [PMID: 19905174] [DOI: 10.1103/physreve.80.036101]
Abstract
We introduce a framework wherein various measurements of a pulse-coupled network's stationary dynamics can be expanded in terms of the network's connectivity. Such measurements include the occurrence rate of pulses (e.g., firing rates within a neuronal network) as well as higher-order correlations in activity between various nodes in the network. The terms in this expansion can be interpreted as diagrams corresponding to subnetworks of the original network, which span both space (in terms of the network's graph) and time (in the sense of causality).
Affiliation(s)
- Aaditya V Rangan
- Courant Institute of Mathematical Sciences, 251 Mercer Street, New York, New York 10012, USA
40
Kovacic G, Tao L, Rangan AV, Cai D. Fokker-Planck description of conductance-based integrate-and-fire neuronal networks. Phys Rev E 2009; 80:021904. [PMID: 19792148] [DOI: 10.1103/physreve.80.021904]
Abstract
The steady dynamics of coupled conductance-based integrate-and-fire neuronal networks in the limit of small fluctuations are studied via the equilibrium states of a Fokker-Planck equation. An asymptotic approximation for the membrane-potential probability density function is derived, and the corresponding gain curves are found. Validity conditions for the Fokker-Planck description are discussed and verified via direct numerical simulations.
Affiliation(s)
- Gregor Kovacic
- Department of Mathematical Sciences, Rensselaer Polytechnic Institute, Troy, New York 12180, USA
41
Rangan AV. Diagrammatic expansion of pulse-coupled network dynamics. Phys Rev Lett 2009; 102:158101. [PMID: 19518674] [DOI: 10.1103/physrevlett.102.158101]
Abstract
We introduce a framework wherein various long-time measurements of a pulse-coupled network's stationary dynamics can be expanded in terms of the network's connectivity. Such measurements include the occurrence rate of pulses as well as higher-order correlations in activity between various nodes in the network. The terms in this expansion can be interpreted as diagrams corresponding to subnetworks of the original network, which span both space (in terms of the network's graph) and time (in the sense of causality).
Affiliation(s)
- Aaditya V Rangan
- Courant Institute of Mathematical Sciences, New York University, New York, New York 10012-1185, USA
42
Buice MA, Cowan JD. Statistical mechanics of the neocortex. Prog Biophys Mol Biol 2009; 99:53-86. [PMID: 19695282] [DOI: 10.1016/j.pbiomolbio.2009.07.003]
43
Bressloff PC, Kilpatrick ZP. Nonlocal Ginzburg-Landau equation for cortical pattern formation. Phys Rev E 2008; 78:041916. [PMID: 18999464] [DOI: 10.1103/physreve.78.041916]
Abstract
We show how a nonlocal version of the real Ginzburg-Landau (GL) equation arises in a large-scale recurrent network model of primary visual cortex. We treat cortex as a continuous two-dimensional sheet of cells that signal both the position and orientation of a local visual stimulus. The recurrent circuitry is decomposed into a local part, which contributes primarily to the orientation tuning properties of the cells, and a long-range part that introduces spatial correlations. We assume that (a) the local network exists in a balanced state such that it operates close to a point of instability and (b) the long-range connections are weak and scale with the bifurcation parameter of the dynamical instability generated by the local circuitry. Carrying out a perturbation expansion with respect to the long-range coupling strength then generates a nonlocal coupling term in the GL amplitude equation. We use the nonlocal GL equation to analyze how axonal propagation delays arising from the slow conduction velocities of the long-range connections affect spontaneous pattern formation.
Affiliation(s)
- Paul C Bressloff
- Department of Mathematics, University of Utah, Salt Lake City, Utah 84112, USA
44
Zhu W, Shelley M, Shapley R. A neuronal network model of primary visual cortex explains spatial frequency selectivity. J Comput Neurosci 2008; 26:271-87. [PMID: 18668360] [DOI: 10.1007/s10827-008-0110-x]
Abstract
We address how spatial frequency selectivity arises in macaque primary visual cortex (V1) by simulating V1 with a large-scale network model consisting of O(10^4) excitatory and inhibitory integrate-and-fire neurons with realistic synaptic conductances. The new model introduces variability in the widths of subregions of V1 neuron receptive fields. As a consequence, different model V1 neurons prefer different spatial frequencies. The model cortex has distributions of spatial-frequency selectivity and preference that resemble experimental findings from the real V1. The two main sources of spatial-frequency selectivity in the model are the spatial arrangement of feedforward excitation and cortical nonlinear suppression, a result of cortical inhibition.
Affiliation(s)
- Wei Zhu
- Courant Institute of Mathematical Sciences, New York University, 251 Mercer Street, New York, NY 10012, USA.
45
Synchrony and Asynchrony in a Fully Stochastic Neural Network. Bull Math Biol 2008; 70:1608-33. [DOI: 10.1007/s11538-008-9311-8]
46
Rangan AV, Kovacic G, Cai D. Kinetic theory for neuronal networks with fast and slow excitatory conductances driven by the same spike train. Phys Rev E 2008; 77:041915. [PMID: 18517664] [DOI: 10.1103/physreve.77.041915]
Abstract
We present a kinetic theory for all-to-all coupled networks of identical, linear, integrate-and-fire, excitatory point neurons in which a fast and a slow excitatory conductance are driven by the same spike train in the presence of synaptic failure. The maximal-entropy principle guides us in deriving a set of three (1+1)-dimensional kinetic moment equations from a Boltzmann-like equation describing the evolution of the one-neuron probability density function. We explain the emergence of correlation terms in the kinetic moment and Boltzmann-like equations as a consequence of simultaneous activation of both the fast and slow excitatory conductances, and furnish numerical evidence for their importance in correctly describing the coarse-grained dynamics of the underlying neuronal network.
Affiliation(s)
- Aaditya V Rangan
- Courant Institute of Mathematical Sciences, New York University, 251 Mercer Street, New York, NY 10012-1185, USA
47
Abstract
We present a simple Markov model of spiking neural dynamics that can be solved analytically to characterize the stochastic dynamics of a finite-size spiking neural network. We give closed-form estimates for the equilibrium distribution, mean rate, variance, and autocorrelation function of the network activity. The model is applicable to any network in which the probability of firing of a neuron depends only on the number of neurons that fired in a previous temporal epoch. Networks with statistically homogeneous connectivity and membrane and synaptic time constants that are not excessively long could satisfy these conditions. Our model completely accounts for the size of the network and for correlations in the firing activity. It also allows us to examine how the network dynamics can deviate from mean field theory. We show that the model and its solutions are applicable to spiking neural networks in biophysically plausible parameter regimes.
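The kind of model described, where each neuron's firing probability depends only on the previous epoch's population count, can be sketched directly. The sigmoidal transfer function and its parameters below are invented for illustration; the paper's actual transfer function may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_markov_network(N=100, steps=5000, gain=0.08, bias=-2.0):
    """Each epoch, every neuron fires independently with probability
    sigmoid(bias + gain * k), where k is the number of neurons that
    fired in the previous epoch; k is then a Markov chain on {0..N}."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    k = 0
    counts = np.empty(steps, dtype=int)
    for t in range(steps):
        p = sigmoid(bias + gain * k)
        k = rng.binomial(N, p)       # finite-size fluctuations enter here
        counts[t] = k
    return counts

activity = simulate_markov_network()
mean_rate = activity.mean() / 100    # fraction of the N=100 neurons active per epoch
```

Because the state is a single integer, the chain's equilibrium distribution and autocorrelation can be computed exactly, which is what makes finite-size deviations from mean-field theory tractable in this setting.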
Affiliation(s)
- Hédi Soula
- Laboratory of Biological Modeling, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, MD 20892, USA.
48
Ly C, Tranchina D. Critical analysis of dimension reduction by a moment closure method in a population density approach to neural network modeling. Neural Comput 2007; 19:2032-92. [PMID: 17571938] [DOI: 10.1162/neco.2007.19.8.2032]
Abstract
Computational techniques within the population density function (PDF) framework have provided time-saving alternatives to classical Monte Carlo simulations of neural network activity. The efficiency of the PDF method is lost as the underlying neuron model is made more realistic and the number of state variables increases. In a detailed theoretical and computational study, we elucidate strengths and weaknesses of dimension reduction by a particular moment closure method (Cai, Tao, Shelley, & McLaughlin, 2004; Cai, Tao, Rangan, & McLaughlin, 2006) as applied to integrate-and-fire neurons that receive excitatory synaptic input only. When the unitary postsynaptic conductance event has a single-exponential time course, the evolution equation for the PDF is a partial differential-integral equation in two state variables, voltage and excitatory conductance. In the moment closure method, one approximates the conditional kth centered moment of excitatory conductance given voltage by the corresponding unconditioned moment. The result is a system of k coupled partial differential equations with one state variable, voltage, and k coupled ordinary differential equations. Moment closure at k = 2 works well, and at k = 3 even better, in the regime of high, dynamically varying synaptic input rates. Both closures break down at lower synaptic input rates. Phase-plane analysis of the k = 2 problem with typical parameters proves, and reveals why, no steady-state solutions exist below a synaptic input rate that gives a firing rate of 59 s^-1 in the full 2D problem. Closure at k = 3 fails for similar reasons. Low firing-rate solutions can be obtained only with parameters for the amplitude or kinetics (or both) of the unitary postsynaptic conductance event that are at the edge of the physiological range. We conclude that this dimension-reduction method gives ill-posed problems for a wide range of physiological parameters, and we suggest future directions.
Affiliation(s)
- Cheng Ly
- Courant Institute of Mathematical Sciences, New York University, New York, NY 10012, USA.
49
Renart A, Moreno-Bote R, Wang XJ, Parga N. Mean-driven and fluctuation-driven persistent activity in recurrent networks. Neural Comput 2007; 19:1-46. [PMID: 17134316] [DOI: 10.1162/neco.2007.19.1.1]
Abstract
Spike trains from cortical neurons show a high degree of irregularity, with coefficients of variation (CV) of their interspike interval (ISI) distribution close to or higher than one. It has been suggested that this irregularity might be a reflection of a particular dynamical state of the local cortical circuit in which excitation and inhibition balance each other. In this "balanced" state, the mean current to the neurons is below threshold, and firing is driven by current fluctuations, resulting in irregular Poisson-like spike trains. Recent data show that the degree of irregularity in neuronal spike trains recorded during the delay period of working memory experiments is the same for both low-activity states of a few Hz and for elevated, persistent activity states of a few tens of Hz. Since the difference between these persistent activity states cannot be due to external factors coming from sensory inputs, this suggests that the underlying network dynamics might support coexisting balanced states at different firing rates. We use mean field techniques to study the possible existence of multiple balanced steady states in recurrent networks of current-based leaky integrate-and-fire (LIF) neurons. To assess the degree of balance of a steady state, we extend existing mean-field theories so that not only the firing rate, but also the coefficient of variation of the interspike interval distribution of the neurons, are determined self-consistently. Depending on the connectivity parameters of the network, we find bistable solutions of different types. If the local recurrent connectivity is mainly excitatory, the two stable steady states differ mainly in the mean current to the neurons. In this case, the mean drive in the elevated persistent activity state is suprathreshold and typically characterized by low spiking irregularity. If the local recurrent excitatory and inhibitory drives are both large and nearly balanced, or even dominated by inhibition, two stable states coexist, both with subthreshold current drive. In this case, the spiking variability in both the resting state and the mnemonic persistent state is large, but the balance condition implies parameter fine-tuning. Since the degree of required fine-tuning increases with network size and, on the other hand, the size of the fluctuations in the afferent current to the cells increases for small networks, overall we find that fluctuation-driven persistent activity in the very simplified type of models we analyze is not a robust phenomenon. Possible implications of considering more realistic models are discussed.
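The contrast between mean-driven and fluctuation-driven firing, central to this analysis, can be illustrated with a single current-based LIF neuron driven by Gaussian white noise. All parameters here are illustrative choices, not the paper's; the network self-consistency is of course absent in this single-neuron sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

def lif_isi_cv(mu, sigma, tau=0.02, v_th=1.0, v_reset=0.0, dt=1e-4, T=100.0):
    """Current-based LIF with Gaussian white-noise input:
    tau dV = (mu - V) dt + sigma * sqrt(tau) dW.
    Returns the coefficient of variation (CV) of the ISIs."""
    v, t_last, isis = 0.0, 0.0, []
    n = int(T / dt)
    noise = rng.standard_normal(n)
    for i in range(n):
        v += (dt / tau) * (mu - v) + sigma * np.sqrt(dt / tau) * noise[i]
        if v >= v_th:
            isis.append(i * dt - t_last)
            t_last, v = i * dt, v_reset
    isis = np.array(isis)
    return isis.std() / isis.mean()

cv_fluct = lif_isi_cv(mu=0.8, sigma=0.5)  # subthreshold mean: fluctuation-driven
cv_mean = lif_isi_cv(mu=1.5, sigma=0.1)   # suprathreshold mean: mean-driven
```

The subthreshold-mean neuron fires irregularly (CV of order one), while the suprathreshold-mean neuron fires nearly periodically, matching the qualitative distinction between the two types of persistent-activity state discussed above.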
Affiliation(s)
- Alfonso Renart
- Departamento de Física Teórica, Universidad Autónoma de Madrid, Cantoblanco 28049, Madrid, Spain.
50
Hildebrand EJ, Buice MA, Chow CC. Kinetic theory of coupled oscillators. Phys Rev Lett 2007; 98:054101. [PMID: 17358861] [PMCID: PMC2561959] [DOI: 10.1103/physrevlett.98.054101]
Abstract
We present an approach for describing the fluctuations due to finite-system-size-induced correlations in the Kuramoto model of coupled oscillators. We construct a hierarchy for the moments of the density of oscillators that is analogous to the Bogoliubov-Born-Green-Kirkwood-Yvon hierarchy in the kinetic theory of plasmas and gases. To calculate the lowest-order system-size effect, we truncate this hierarchy at second order and solve the resulting closed equations for the two-oscillator correlation function around the incoherent state. We use this correlation function to compute the fluctuations of the order parameter, including the effect of transients, and compare this computation with numerical simulations.
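The incoherent regime analyzed here is easy to probe by direct simulation. In the sketch below, N, K, and the frequency distribution are illustrative choices; K is set below the classical synchronization threshold K_c = 2/(pi * g(0)) (approximately 1.60 for unit-variance Gaussian frequencies), so the order parameter |r| stays near its O(1/sqrt(N)) finite-size floor.

```python
import numpy as np

rng = np.random.default_rng(1)

def kuramoto_order(N=500, K=0.5, T=20.0, dt=0.01):
    """Simulate N Kuramoto oscillators,
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i),
    using the mean-field identity with the complex order parameter z.
    Returns the time series of |z|."""
    omega = rng.standard_normal(N)           # natural frequencies, g = N(0,1)
    theta = rng.uniform(0, 2 * np.pi, N)     # random initial phases
    r = []
    for _ in range(int(T / dt)):
        z = np.exp(1j * theta).mean()        # complex order parameter
        theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
        r.append(np.abs(z))
    return np.array(r)

r_t = kuramoto_order()
```

Comparing the empirical variance of |r| across runs with the truncated-hierarchy prediction is exactly the kind of check against numerical simulation the abstract describes.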
Affiliation(s)
- Eric J Hildebrand
- Department of Physics and Astronomy, University of Pittsburgh, Pittsburgh, Pennsylvania, USA