1. Qi Y. Moment neural network and an efficient numerical method for modeling irregular spiking activity. Phys Rev E 2024; 110:024310. [PMID: 39295055] [DOI: 10.1103/physreve.110.024310]
Abstract
Continuous rate-based neural networks have been widely applied to modeling the dynamics of cortical circuits. However, cortical neurons in the brain exhibit irregular spiking activity with complex correlation structures that cannot be captured by mean firing rate alone. To close this gap, we consider a framework for modeling irregular spiking activity, called the moment neural network, which naturally generalizes rate models to second-order moments and can accurately capture the firing statistics of spiking neural networks. We propose an efficient numerical method that allows for rapid evaluation of moment mappings for neuronal activations without solving the underlying Fokker-Planck equation. This allows simulation of the coupled interactions of mean firing rate and firing variability in large-scale neural circuits while retaining the advantage of analytical tractability of continuous rate models. We demonstrate how the moment neural network can explain a range of phenomena, including diverse Fano factors in networks with quenched disorder and the emergence of irregular oscillatory dynamics in excitation-inhibition networks with delay.
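The first-order component of such a moment mapping is the classical first-passage-time rate of a leaky integrate-and-fire neuron under the diffusion approximation. As a point of reference (this direct quadrature is what the paper's fast method is designed to avoid; all parameter values below are illustrative, not the paper's), a minimal sketch:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfcx  # erfcx(x) = exp(x**2) * erfc(x), numerically stable

def lif_mean_rate(mu, sigma, tau_m=0.02, v_reset=0.0, v_th=1.0, t_ref=0.002):
    """Stationary rate of an LIF neuron with Gaussian white-noise input
    (mean mu, noise amplitude sigma): the Siegert/Ricciardi formula."""
    lo, hi = (v_reset - mu) / sigma, (v_th - mu) / sigma
    # integrand exp(u^2)*(1 + erf(u)) rewritten as erfcx(-u) to avoid overflow
    integral, _ = quad(lambda u: erfcx(-u), lo, hi)
    return 1.0 / (t_ref + tau_m * np.sqrt(np.pi) * integral)

print(f"{lif_mean_rate(mu=0.8, sigma=0.5):.1f} Hz")
```

A full moment mapping additionally propagates second-order statistics (input covariances to output covariances), which is where the cost of repeatedly solving the Fokker-Planck equation would otherwise arise.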
Affiliation(s)
- Yang Qi
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China; Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence (Fudan University), Ministry of Education, Shanghai 200433, China; and MOE Frontiers Center for Brain Science, Fudan University, Shanghai 200433, China
2. Paliwal S, Ocker GK, Brinkman BAW. Metastability in networks of nonlinear stochastic integrate-and-fire neurons. arXiv 2024; arXiv:2406.07445v1. [PMID: 38947936] [PMCID: PMC11213153]
Abstract
Neurons in the brain continuously process the barrage of sensory inputs they receive from the environment. A wide array of experimental work has shown that the collective activity of neural populations encodes and processes this constant bombardment of information. How these collective patterns of activity depend on single neuron properties is often unclear. Single-neuron recordings have shown that individual neural responses to inputs are nonlinear, which prevents a straightforward extrapolation from single neuron features to emergent collective states. In this work, we use a field theoretic formulation of a stochastic leaky integrate-and-fire model to study the impact of nonlinear intensity functions on macroscopic network activity. We show that the interplay between nonlinear spike emission and membrane potential resets can (i) give rise to metastable transitions between active firing-rate states and (ii) enhance or suppress mean firing rates and membrane potentials in opposite directions.
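A minimal sketch of a single unit from this model class, a stochastically spiking neuron with a nonlinear firing intensity and a membrane reset (the softplus intensity and all parameter values are illustrative assumptions; the paper studies recurrent networks of such units):

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T, tau = 1e-3, 50.0, 0.02                  # step (s), duration (s), membrane time const.
phi = lambda v: 10.0 * np.log1p(np.exp(2.0 * (v - 1.0)))  # nonlinear intensity (Hz)

v, n_spikes, drive = 0.0, 0, 1.2
for _ in range(int(T / dt)):
    v += dt / tau * (drive - v)                # leaky integration of the input
    if rng.random() < phi(v) * dt:             # spike with probability phi(v)*dt
        n_spikes += 1
        v = 0.0                                # membrane potential reset
print(f"mean rate: {n_spikes / T:.1f} Hz")
```

In the network setting, the interplay between this intensity nonlinearity and the reset is what the authors show can produce metastable switching between firing-rate states.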
Affiliation(s)
- Siddharth Paliwal
- Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, NY, 11794, USA
- Gabriel Koch Ocker
- Department of Mathematics and Statistics, Boston University, Boston, MA, 02215, USA
- Braden A W Brinkman
- Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, NY, 11794, USA
3. Bardella G, Franchini S, Pan L, Balzan R, Ramawat S, Brunamonti E, Pani P, Ferraina S. Neural Activity in Quarks Language: Lattice Field Theory for a Network of Real Neurons. Entropy (Basel) 2024; 26:495. [PMID: 38920504] [PMCID: PMC11203154] [DOI: 10.3390/e26060495]
Abstract
Brain-computer interfaces have seen an extraordinary surge of development in recent years, and a significant discrepancy now exists between the abundance of available data and the limited headway made in achieving a unified theoretical framework. This discrepancy becomes particularly pronounced when examining collective neural activity at the micro- and mesoscale, where a coherent formalization that adequately describes neural interactions is still lacking. Here, we introduce a mathematical framework to analyze systems of natural neurons and interpret the related empirical observations in terms of lattice field theory, an established paradigm from theoretical particle physics and statistical mechanics. Our methods are tailored to interpret data from chronic neural interfaces, especially spike rasters from measurements of single neuron activity, and generalize the maximum entropy model for neural networks so that the time evolution of the system is also taken into account. This is obtained by bridging particle physics and neuroscience, paving the way for particle physics-inspired models of the neocortex.
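For orientation, the maximum entropy model that the paper generalizes is constrained by the mean activities and pairwise correlations of a binary spike raster; a sketch of extracting those sufficient statistics (the raster here is synthetic and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
raster = (rng.random((10000, 8)) < 0.1).astype(float)  # time bins x neurons

means = raster.mean(axis=0)                   # <s_i>, fixes the fields h_i
pairwise = raster.T @ raster / len(raster)    # <s_i s_j>, fixes the couplings J_ij

# A pairwise maximum entropy model P(s) ~ exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j)
# is then fit so its moments reproduce `means` and `pairwise`; the lattice field
# theory construction adds constraints on the time evolution of s as well.
print(means.round(3), pairwise[0, 1].round(4))
```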
Affiliation(s)
- Giampiero Bardella
- Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Roma, Italy
- Simone Franchini
- Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Roma, Italy
- Liming Pan
- School of Cyber Science and Technology, University of Science and Technology of China, Hefei 230026, China
- Riccardo Balzan
- Laboratoire de Chimie et Biochimie Pharmacologiques et Toxicologiques, UMR 8601, UFR Biomédicale et des Sciences de Base, Université Paris Descartes-CNRS, PRES Paris Sorbonne Cité, 75006 Paris, France
- Surabhi Ramawat
- Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Roma, Italy
- Emiliano Brunamonti
- Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Roma, Italy
- Pierpaolo Pani
- Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Roma, Italy
- Stefano Ferraina
- Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Roma, Italy
4. Painchaud V, Desrosiers P, Doyon N. The Determining Role of Covariances in Large Networks of Stochastic Neurons. Neural Comput 2024; 36:1121-1162. [PMID: 38657971] [DOI: 10.1162/neco_a_01656]
Abstract
Biological neural networks are notoriously hard to model due to their stochastic behavior and high dimensionality. We tackle this problem by constructing a dynamical model of both the expectations and covariances of the fractions of active and refractory neurons in the network's populations. We do so by describing the evolution of the states of individual neurons with a continuous-time Markov chain, from which we formally derive a low-dimensional dynamical system. This is done by solving a moment closure problem in a way that is compatible with the nonlinearity and boundedness of the activation function. Our dynamical system captures the behavior of the high-dimensional stochastic model even in cases where the mean-field approximation fails to do so. Taking into account the second-order moments modifies the solutions that would be obtained with the mean-field approximation and can lead to the appearance or disappearance of fixed points and limit cycles. We moreover perform numerical experiments where the mean-field approximation leads to periodically oscillating solutions, while the solutions of the second-order model can be interpreted as an average taken over many realizations of the stochastic model. Altogether, our results highlight the importance of including higher moments when studying stochastic networks and deepen our understanding of correlated neuronal activity.
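The flavor of the resulting equations can be seen in a generic Gaussian moment closure for a scalar stochastic rate equation dx = f(x)dt + sqrt(D)dW, where the covariance feeds back on the mean through the curvature of the nonlinearity (a toy analogue under Gaussian-closure assumptions, not the paper's Markov-chain model):

```python
import numpy as np
from scipy.integrate import solve_ivp

g, D = 2.0, 0.05
f  = lambda x: -x + np.tanh(g * x)
f1 = lambda x: -1 + g / np.cosh(g * x) ** 2                        # f'
f2 = lambda x: -2 * g ** 2 * np.tanh(g * x) / np.cosh(g * x) ** 2  # f''

def moments(t, y):
    mu, var = y
    dmu = f(mu) + 0.5 * f2(mu) * var   # variance corrects the mean-field drift
    dvar = 2 * f1(mu) * var + D        # linearized relaxation plus noise input
    return [dmu, dvar]

sol = solve_ivp(moments, (0, 20), [0.1, 0.0])
print("stationary mean and variance:", sol.y[:, -1].round(4))
```

Setting var = 0 recovers the mean-field approximation; the 0.5*f''(mu)*var term is the kind of second-order correction that can create or destroy fixed points.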
Affiliation(s)
- Vincent Painchaud
- Department of Mathematics and Statistics, McGill University, Montreal, Québec H3A 0B6, Canada
- Patrick Desrosiers
- Department of Physics, Engineering Physics, and Optics, Université Laval, Quebec City, Québec G1V 0A6, Canada
- CERVO Brain Research Center, Quebec City, Québec G1E 1T2, Canada
- Centre interdisciplinaire en modélisation mathématique de l'Université Laval, Quebec City, Québec G1V 0A6, Canada
- Nicolas Doyon
- Department of Mathematics and Statistics, Université Laval, Quebec City, Québec G1V 0A6, Canada
- CERVO Brain Research Center, Quebec City, Québec G1E 1T2, Canada
- Centre interdisciplinaire en modélisation mathématique de l'Université Laval, Quebec City, Québec G1V 0A6, Canada
5. Papo D, Buldú JM. Does the brain behave like a (complex) network? I. Dynamics. Phys Life Rev 2024; 48:47-98. [PMID: 38145591] [DOI: 10.1016/j.plrev.2023.12.006]
Abstract
Graph theory is now becoming a standard tool in systems-level neuroscience. However, endowing observed brain anatomy and dynamics with a complex network structure does not entail that the brain actually works as a network. Asking whether the brain behaves as a network means asking whether network properties count. From the viewpoint of neurophysiology and, possibly, of brain physics, the most substantial issues a network structure may be instrumental in addressing relate to the influence of network properties on brain dynamics and to whether these properties ultimately explain some aspects of brain function. Here, we address the dynamical implications of complex network structure, examining which aspects and scales of brain activity may be understood to genuinely behave as a network. To do so, we first define the meaning of networkness and analyse some of its implications. We then examine ways in which brain anatomy and dynamics can be endowed with a network structure and discuss possible ways in which network structure may be shown to represent a genuine organisational principle of brain activity, rather than just a convenient description of its anatomy and dynamics.
Affiliation(s)
- D Papo
- Department of Neuroscience and Rehabilitation, Section of Physiology, University of Ferrara, Ferrara, Italy; Center for Translational Neurophysiology, Fondazione Istituto Italiano di Tecnologia, Ferrara, Italy.
- J M Buldú
- Complex Systems Group & G.I.S.C., Universidad Rey Juan Carlos, Madrid, Spain
6. Stern M, Istrate N, Mazzucato L. A reservoir of timescales emerges in recurrent circuits with heterogeneous neural assemblies. eLife 2023; 12:e86552. [PMID: 38084779] [PMCID: PMC10810607] [DOI: 10.7554/elife.86552]
Abstract
The temporal activity of many physical and biological systems, from complex networks to neural circuits, exhibits fluctuations simultaneously varying over a large range of timescales. Long-tailed distributions of intrinsic timescales have been observed across neurons simultaneously recorded within the same cortical circuit. The mechanisms leading to this striking temporal heterogeneity remain unknown. Here, we show that neural circuits, endowed with heterogeneous neural assemblies of different sizes, naturally generate multiple timescales of activity spanning several orders of magnitude. We develop an analytical theory using rate networks, supported by simulations of spiking networks with cell-type specific connectivity, to explain how neural timescales depend on assembly size and show that our model can naturally explain the long-tailed timescale distribution observed in the awake primate cortex. When driving recurrent networks of heterogeneous neural assemblies with a time-dependent broadband input, we found that large and small assemblies preferentially entrain slow and fast spectral components of the input, respectively. Our results suggest that heterogeneous assemblies can provide a biologically plausible mechanism for neural circuits to demix complex temporal input signals by transforming temporal into spatial neural codes via frequency-selective neural assemblies.
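The intrinsic timescales the paper explains are typically estimated by fitting an exponential decay to an activity autocorrelation; a minimal sketch on a synthetic Ornstein-Uhlenbeck trace (all values are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
dt, tau_true, n = 1e-3, 0.05, 100000
x = np.zeros(n)
for i in range(n - 1):  # Ornstein-Uhlenbeck process with timescale tau_true
    x[i + 1] = x[i] - dt / tau_true * x[i] + np.sqrt(2 * dt / tau_true) * rng.normal()

lags = np.arange(1, 200)
ac = np.array([np.corrcoef(x[:-k], x[k:])[0, 1] for k in lags])
(tau_hat,), _ = curve_fit(lambda t, tau: np.exp(-t / tau), lags * dt, ac, p0=[0.01])
print(f"fitted timescale: {1e3 * tau_hat:.1f} ms (true: {1e3 * tau_true:.0f} ms)")
```

In the paper, the distribution of such fitted timescales across neurons is traced back to the distribution of assembly sizes.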
Affiliation(s)
- Merav Stern
- Institute of Neuroscience, University of Oregon, Eugene, United States
- Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem, Israel
- Nicolae Istrate
- Institute of Neuroscience, University of Oregon, Eugene, United States
- Departments of Physics, University of Oregon, Eugene, United States
- Luca Mazzucato
- Institute of Neuroscience, University of Oregon, Eugene, United States
- Departments of Physics, University of Oregon, Eugene, United States
- Mathematics and Biology, University of Oregon, Eugene, United States
7. Stoll EA. A thermodynamical model of non-deterministic computation in cortical neural networks. Phys Biol 2023; 21:016003. [PMID: 38078366] [DOI: 10.1088/1478-3975/ad0f2d]
Abstract
Neuronal populations in the cerebral cortex engage in probabilistic coding, effectively encoding the state of the surrounding environment with high accuracy and extraordinary energy efficiency. A new approach models the inherently probabilistic nature of cortical neuron signaling outcomes as a thermodynamic process of non-deterministic computation. A mean field approach is used, with the trial Hamiltonian maximizing available free energy and minimizing the net quantity of entropy, compared with a reference Hamiltonian. Thermodynamic quantities are always conserved during the computation; free energy must be expended to produce information, and free energy is released during information compression, as correlations are identified between the encoding system and its surrounding environment. Due to the relationship between the Gibbs free energy equation and the Nernst equation, any increase in free energy is paired with a local decrease in membrane potential. As a result, this process of thermodynamic computation adjusts the likelihood of each neuron firing an action potential. This model shows that non-deterministic signaling outcomes can be achieved by noisy cortical neurons, through an energy-efficient computational process that involves optimally redistributing a Hamiltonian over some time evolution. Calculations demonstrate that the energy efficiency of the human brain is consistent with this model of non-deterministic computation, with net entropy production far too low to retain the assumptions of a classical system.
Affiliation(s)
- Elizabeth A Stoll
- Western Institute for Advanced Study, Denver, Colorado, United States of America
8. Clark KB. Neural Field Continuum Limits and the Structure-Function Partitioning of Cognitive-Emotional Brain Networks. Biology (Basel) 2023; 12:352. [PMID: 36979044] [PMCID: PMC10045557] [DOI: 10.3390/biology12030352]
Abstract
In The cognitive-emotional brain, Pessoa overlooks continuum effects on nonlinear brain network connectivity by eschewing neural field theories and physiologically derived constructs representative of neuronal plasticity. The absence of this content, which is so very important for understanding the dynamic structure-function embedding and partitioning of brains, diminishes the rich competitive and cooperative nature of neural networks and trivializes Pessoa's arguments, and similar arguments by other authors, on the phylogenetic and operational significance of an optimally integrated brain filled with variable-strength neural connections. Riemannian neuromanifolds, containing limit-imposing metaplastic Hebbian- and antiHebbian-type control variables, simulate scalable network behavior that is difficult to capture from the simpler graph-theoretic analysis preferred by Pessoa and other neuroscientists. Field theories suggest the partitioning and performance benefits of embedded cognitive-emotional networks that optimally evolve between exotic classical and quantum computational phases, where matrix singularities and condensations produce degenerate structure-function homogeneities unrealistic of healthy brains. Some network partitioning, as opposed to unconstrained embeddedness, is thus required for effective execution of cognitive-emotional network functions and, in our new era of neuroscience, should be considered a critical aspect of proper brain organization and operation.
Affiliation(s)
- Kevin B. Clark
- Cures Within Reach, Chicago, IL 60602, USA;
- Felidae Conservation Fund, Mill Valley, CA 94941, USA
- Campus and Domain Champions Program, Multi-Tier Assistance, Training, and Computational Help (MATCH) Track, National Science Foundation’s Advanced Cyberinfrastructure Coordination Ecosystem: Services and Support (ACCESS), https://access-ci.org/
- Expert Network, Penn Center for Innovation, University of Pennsylvania, Philadelphia, PA 19104, USA
- Network for Life Detection (NfoLD), NASA Astrobiology Program, NASA Ames Research Center, Mountain View, CA 94035, USA
- Multi-Omics and Systems Biology & Artificial Intelligence and Machine Learning Analysis Working Groups, NASA GeneLab, NASA Ames Research Center, Mountain View, CA 94035, USA
- Frontier Development Lab, NASA Ames Research Center, Mountain View, CA 94035, USA & SETI Institute, Mountain View, CA 94043, USA
- Peace Innovation Institute, The Hague 2511, Netherlands & Stanford University, Palo Alto, CA 94305, USA
- Shared Interest Group for Natural and Artificial Intelligence (sigNAI), Max Planck Alumni Association, 14057 Berlin, Germany
- Biometrics and Nanotechnology Councils, Institute for Electrical and Electronics Engineers (IEEE), New York, NY 10016, USA
9. Lynn CW, Holmes CM, Bialek W, Schwab DJ. Decomposing the Local Arrow of Time in Interacting Systems. Phys Rev Lett 2022; 129:118101. [PMID: 36154397] [PMCID: PMC9751844] [DOI: 10.1103/physrevlett.129.118101]
Abstract
We show that the evidence for a local arrow of time, which is equivalent to the entropy production in thermodynamic systems, can be decomposed. In a system with many degrees of freedom, there is a term that arises from the irreversible dynamics of the individual variables, and then a series of non-negative terms contributed by correlations among pairs, triplets, and higher-order combinations of variables. We illustrate this decomposition on simple models of noisy logical computations, and then apply it to the analysis of patterns of neural activity in the retina as it responds to complex dynamic visual scenes. We find that neural activity breaks detailed balance even when the visual inputs do not, and that this irreversibility arises primarily from interactions between pairs of neurons.
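The "evidence for a local arrow of time" is, for a stationary sequence, the relative entropy between forward and time-reversed transition statistics. A sketch on the smallest irreversible example (a biased three-state cycle; two-state stationary chains always obey detailed balance, so three states is the minimal case):

```python
import numpy as np

rng = np.random.default_rng(3)
P = np.array([[0.1, 0.8, 0.1],   # transition matrix with a preferred
              [0.1, 0.1, 0.8],   # cyclic direction 0 -> 1 -> 2 -> 0
              [0.8, 0.1, 0.1]])
s = [0]
for _ in range(200000):
    s.append(rng.choice(3, p=P[s[-1]]))

joint = np.zeros((3, 3))         # empirical distribution of (x_t, x_{t+1})
for a, b in zip(s[:-1], s[1:]):
    joint[a, b] += 1
joint /= joint.sum()

# entropy production per step = KL(forward || time-reversed)
print(f"{np.sum(joint * np.log(joint / joint.T)):.3f} nats/step")
```

The paper's contribution is to split this quantity into single-variable terms plus non-negative contributions from pairs, triplets, and higher-order combinations of variables.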
Affiliation(s)
- Christopher W Lynn
- Initiative for the Theoretical Sciences, The Graduate Center, City University of New York, New York, New York 10016, USA
- Joseph Henry Laboratories of Physics and Lewis-Sigler Institute for Integrative Genomics, Princeton University, Princeton, New Jersey 08544, USA
- Caroline M Holmes
- Joseph Henry Laboratories of Physics and Lewis-Sigler Institute for Integrative Genomics, Princeton University, Princeton, New Jersey 08544, USA
- William Bialek
- Initiative for the Theoretical Sciences, The Graduate Center, City University of New York, New York, New York 10016, USA
- Joseph Henry Laboratories of Physics and Lewis-Sigler Institute for Integrative Genomics, Princeton University, Princeton, New Jersey 08544, USA
- David J Schwab
- Initiative for the Theoretical Sciences, The Graduate Center, City University of New York, New York, New York 10016, USA
10. Layer M, Senk J, Essink S, van Meegen A, Bos H, Helias M. NNMT: Mean-Field Based Analysis Tools for Neuronal Network Models. Front Neuroinform 2022; 16:835657. [PMID: 35712677] [PMCID: PMC9196133] [DOI: 10.3389/fninf.2022.835657]
Abstract
Mean-field theory of neuronal networks has led to numerous advances in our analytical and intuitive understanding of their dynamics during the past decades. In order to make mean-field based analysis tools more accessible, we implemented an extensible, easy-to-use open-source Python toolbox that collects a variety of mean-field methods for the leaky integrate-and-fire neuron model. The Neuronal Network Mean-field Toolbox (NNMT) in its current state allows for estimating properties of large neuronal networks, such as firing rates, power spectra, and dynamical stability in mean-field and linear response approximation, without running simulations. In this article, we describe how the toolbox is implemented, show how it is used to reproduce results of previous studies, and discuss different use-cases, such as parameter space explorations, or mapping different network models. Although the initial version of the toolbox focuses on methods for leaky integrate-and-fire neurons, its structure is designed to be open and extensible. It aims to provide a platform for collecting analytical methods for neuronal network model analysis, such that the neuroscientific community can take maximal advantage of them.
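The following is not the toolbox's actual API (see its documentation for that), but a sketch of the core computation such tools wrap: a self-consistent population firing rate found by damped fixed-point iteration, with a sigmoid standing in for the exact LIF transfer function; all weights and inputs are hypothetical:

```python
import numpy as np

def transfer(mu, sigma):
    """Toy rate (Hz) of a neuron receiving input with mean mu and std sigma."""
    return 50.0 / (1.0 + np.exp(-(mu - 1.0) / np.maximum(sigma, 1e-9)))

W = np.array([[0.02, -0.08],   # effective weights onto E and I populations
              [0.02, -0.08]])
ext = np.array([1.1, 1.0])     # external drive

nu = np.zeros(2)
for _ in range(500):           # iterate nu = F(mu(nu), sigma(nu)) to a fixed point
    mu = W @ nu + ext
    sigma = np.sqrt(np.abs(W) @ nu + 0.1)
    nu = 0.5 * nu + 0.5 * transfer(mu, sigma)   # damping aids convergence
print("self-consistent rates (Hz):", nu.round(2))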
Affiliation(s)
- Moritz Layer
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- RWTH Aachen University, Aachen, Germany
- Johanna Senk
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Simon Essink
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- RWTH Aachen University, Aachen, Germany
- Alexander van Meegen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Institute of Zoology, Faculty of Mathematics and Natural Sciences, University of Cologne, Cologne, Germany
- Hannah Bos
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Moritz Helias
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
11. The Mean Field Approach for Populations of Spiking Neurons. Adv Exp Med Biol 2022; 1359:125-157. [DOI: 10.1007/978-3-030-89439-9_6]
Abstract
Mean field theory is a device to analyze the collective behavior of a dynamical system comprising many interacting particles. The theory allows one to reduce the behavior of the system to the properties of a handful of parameters. In neural circuits, these parameters are typically the firing rates of distinct, homogeneous subgroups of neurons. Knowledge of the firing rates under conditions of interest can reveal essential information on both the dynamics of neural circuits and the way they can subserve brain function. The goal of this chapter is to provide an elementary introduction to the mean field approach for populations of spiking neurons. We introduce the general idea in networks of binary neurons, starting from the most basic results and then generalizing to more relevant situations. This allows us to derive the mean field equations in a simplified setting. We then derive the mean field equations for populations of integrate-and-fire neurons. An effort is made to derive the main equations of the theory using only elementary methods from calculus and probability theory. The chapter ends with a discussion of the assumptions of the theory and some of the consequences of violating those assumptions. This discussion includes an introduction to balanced and metastable networks and a brief catalogue of successful applications of the mean field approach to the study of neural circuits.
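The chapter's starting point can be condensed into a few lines: a network of binary neurons with uniform coupling, whose simulated population activity is compared against the mean-field self-consistency m = phi(J*m + h) (gain, coupling, and bias values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
N, J0, h = 2000, 1.2, -0.4
phi = lambda x: 1.0 / (1.0 + np.exp(-4.0 * x))   # activation probability

s = rng.random(N) < 0.5
for _ in range(500):                  # update a random subset each sweep,
    idx = rng.integers(N, size=N)     # given the current mean activity
    p = phi(J0 * s.mean() + h)        # uniform coupling: input set by the mean
    s[idx] = rng.random(N) < p

m = 0.5
for _ in range(1000):                 # mean-field fixed point
    m = phi(J0 * m + h)
print(f"simulation: {s.mean():.3f}   mean field: {m:.3f}")
```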
12. Cai Y, Wu T, Tao L, Xiao ZC. Model Reduction Captures Stochastic Gamma Oscillations on Low-Dimensional Manifolds. Front Comput Neurosci 2021; 15:678688. [PMID: 34489666] [PMCID: PMC8418102] [DOI: 10.3389/fncom.2021.678688]
Abstract
Gamma frequency oscillations (25–140 Hz), observed in the neural activities within many brain regions, have long been regarded as a physiological basis underlying many brain functions, such as memory and attention. Among numerous theoretical and computational modeling studies, gamma oscillations have been found in biologically realistic spiking network models of the primary visual cortex. However, due to its high dimensionality and strong non-linearity, it is generally difficult to perform detailed theoretical analysis of the emergent gamma dynamics. Here we propose a suite of Markovian model reduction methods with varying levels of complexity and apply it to spiking network models exhibiting heterogeneous dynamical regimes, ranging from nearly homogeneous firing to strong synchrony in the gamma band. The reduced models not only successfully reproduce gamma oscillations in the full model, but also exhibit the same dynamical features as we vary parameters. Most remarkably, the invariant measure of the coarse-grained Markov process reveals a two-dimensional surface in state space upon which the gamma dynamics mainly resides. Our results suggest that the statistical features of gamma oscillations strongly depend on the subthreshold neuronal distributions. Because of the generality of the Markovian assumptions, our dimensional reduction methods offer a powerful toolbox for theoretical examinations of other complex cortical spatio-temporal behaviors observed in both neurophysiological experiments and numerical simulations.
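The generic coarse-graining step behind such reductions: discretize the population state, estimate a Markov transition matrix from the trajectory, and read the invariant measure off the leading left eigenvector. A sketch on a surrogate trajectory (the paper's coarse states are of course built from the actual network variables):

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.cumsum(rng.normal(size=50000)) % 10   # surrogate trajectory on [0, 10)
states, K = x.astype(int), 10                # discretized into K coarse states

T = np.zeros((K, K))
for a, b in zip(states[:-1], states[1:]):
    T[a, b] += 1
T /= T.sum(axis=1, keepdims=True)            # row-stochastic transition matrix

w, v = np.linalg.eig(T.T)                    # invariant measure: left eigenvector
pi = np.real(v[:, np.argmax(np.real(w))])    # for the eigenvalue closest to 1
pi /= pi.sum()
print(pi.round(3))
```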
Affiliation(s)
- Yuhang Cai
- Department of Statistics, University of Chicago, Chicago, IL, United States
- Tianyi Wu
- School of Mathematical Sciences, Peking University, Beijing, China
- Center for Bioinformatics, National Laboratory of Protein Engineering and Plant Genetic Engineering, School of Life Sciences, Peking University, Beijing, China
- Louis Tao
- Center for Bioinformatics, National Laboratory of Protein Engineering and Plant Genetic Engineering, School of Life Sciences, Peking University, Beijing, China
- Center for Quantitative Biology, Peking University, Beijing, China
- Zhuo-Cheng Xiao
- Courant Institute of Mathematical Sciences, New York University, New York, NY, United States
13. Wein S, Deco G, Tomé AM, Goldhacker M, Malloni WM, Greenlee MW, Lang EW. Brain Connectivity Studies on Structure-Function Relationships: A Short Survey with an Emphasis on Machine Learning. Comput Intell Neurosci 2021; 2021:5573740. [PMID: 34135951] [PMCID: PMC8177997] [DOI: 10.1155/2021/5573740]
Abstract
This short survey reviews the recent literature on the relationship between the brain structure and its functional dynamics. Imaging techniques such as diffusion tensor imaging (DTI) make it possible to reconstruct axonal fiber tracks and describe the structural connectivity (SC) between brain regions. By measuring fluctuations in neuronal activity, functional magnetic resonance imaging (fMRI) provides insights into the dynamics within this structural network. One key for a better understanding of brain mechanisms is to investigate how these fast dynamics emerge on a relatively stable structural backbone. So far, computational simulations and methods from graph theory have been mainly used for modeling this relationship. Machine learning techniques have already been established in neuroimaging for identifying functionally independent brain networks and classifying pathological brain states. This survey focuses on methods from machine learning, which contribute to our understanding of functional interactions between brain regions and their relation to the underlying anatomical substrate.
Affiliation(s)
- Simon Wein
- CIML, Biophysics, University of Regensburg, Regensburg 93040, Germany
- Experimental Psychology, University of Regensburg, Regensburg 93040, Germany
- Gustavo Deco
- Center for Brain and Cognition, Department of Technology and Information, University Pompeu Fabra, Carrer Tanger, 122-140, Barcelona 08018, Spain
- Institució Catalana de la Recerca i Estudis Avançats, University Barcelona, Passeig Lluís Companys 23, Barcelona 08010, Spain
- Ana Maria Tomé
- IEETA/DETI, University of Aveiro, Aveiro 3810-193, Portugal
- Markus Goldhacker
- CIML, Biophysics, University of Regensburg, Regensburg 93040, Germany
- Experimental Psychology, University of Regensburg, Regensburg 93040, Germany
- Wilhelm M. Malloni
- Experimental Psychology, University of Regensburg, Regensburg 93040, Germany
- Mark W. Greenlee
- Experimental Psychology, University of Regensburg, Regensburg 93040, Germany
- Elmar W. Lang
- CIML, Biophysics, University of Regensburg, Regensburg 93040, Germany
14. Cofré R, Maldonado C, Cessac B. Thermodynamic Formalism in Neuronal Dynamics and Spike Train Statistics. Entropy (Basel) 2020; 22:E1330. [PMID: 33266513] [PMCID: PMC7712217] [DOI: 10.3390/e22111330]
Abstract
The Thermodynamic Formalism provides a rigorous mathematical framework for studying quantitative and qualitative aspects of dynamical systems. At its core, there is a variational principle that corresponds, in its simplest form, to the Maximum Entropy principle. It is used as a statistical inference procedure to represent, by specific probability measures (Gibbs measures), the collective behaviour of complex systems. This framework has found applications in different domains of science. In particular, it has been fruitful and influential in neurosciences. In this article, we review how the Thermodynamic Formalism can be exploited in the field of theoretical neuroscience, as a conceptual and operational tool, in order to link the dynamics of interacting neurons and the statistics of action potentials from either experimental data or mathematical models. We comment on perspectives and open problems in theoretical neuroscience that could be addressed within this formalism.
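At the center of the formalism is a Gibbs measure on spike words, with a potential playing the role of an energy; for a small population it can be written out exactly (the random fields and couplings below are placeholders for inferred ones):

```python
import itertools
import numpy as np

rng = np.random.default_rng(6)
N = 5
h = rng.normal(0, 0.5, N)                       # single-neuron potentials
J = np.triu(rng.normal(0, 0.3, (N, N)), k=1)    # pairwise couplings (i < j)

words = np.array(list(itertools.product([0, 1], repeat=N)))
energy = -(words @ h + np.einsum('wi,ij,wj->w', words, J, words))
weights = np.exp(-energy)
Z = weights.sum()
p = weights / Z                                 # Gibbs measure over all 2^N words

print("most probable word:", words[p.argmax()], " log Z =", np.log(Z).round(3))
```

The variational principle the review describes selects exactly this measure as the maximizer of entropy plus the average potential; log Z is the finite-size analogue of the topological pressure.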
Affiliation(s)
- Rodrigo Cofré
- CIMFAV-Ingemat, Facultad de Ingeniería, Universidad de Valparaíso, Valparaíso 2340000, Chile
- Cesar Maldonado
- IPICYT/División de Matemáticas Aplicadas, San Luis Potosí 78216, Mexico
- Bruno Cessac
- Inria Biovision team and Neuromod Institute, Université Côte d'Azur, 06901 CEDEX, France
15. Kulkarni A, Ranft J, Hakim V. Synchronization, Stochasticity, and Phase Waves in Neuronal Networks With Spatially-Structured Connectivity. Front Comput Neurosci 2020; 14:569644. [PMID: 33192427] [PMCID: PMC7604323] [DOI: 10.3389/fncom.2020.569644]
Abstract
Oscillations in the beta/low gamma range (10–45 Hz) are recorded in diverse neural structures. They have successfully been modeled as sparsely synchronized oscillations arising from reciprocal interactions between randomly connected excitatory (E) pyramidal cells and local interneurons (I). The synchronization of spatially distant oscillatory spiking E–I modules has been well-studied in the rate model framework but less so for modules of spiking neurons. Here, we first show that previously proposed modifications of rate models provide a quantitative description of spiking E–I modules of Exponential Integrate-and-Fire (EIF) neurons. This allows us to analyze the dynamical regimes of sparsely synchronized oscillatory E–I modules connected by long-range excitatory interactions, for two modules, as well as for a chain of such modules. For modules with a large number of neurons (> 10^5), we obtain results similar to previously obtained ones based on the classic deterministic Wilson-Cowan rate model, with the added bonus that the results quantitatively describe simulations of spiking EIF neurons. However, for modules with a moderate (~ 10^4) number of neurons, stochastic variations in the spike emission of neurons are important and need to be taken into account. On the one hand, they modify the oscillations in a way that tends to promote synchronization between different modules. On the other hand, independent fluctuations on different modules tend to disrupt synchronization. The correlations between distant oscillatory modules can be described by stochastic equations for the oscillator phases that have been intensely studied in other contexts. On shorter distances, we develop a description that also takes into account amplitude modes and that quantitatively accounts for our simulation data. Stochastic dephasing of neighboring modules produces transient phase gradients and the transient appearance of phase waves. We propose that these stochastically-induced phase waves provide an explanative framework for the observations of traveling waves in the cortex during beta oscillations.
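The reduced phase description invoked for distant modules can be written in a few lines: two noisy phase oscillators whose phase difference diffuses, locks, and occasionally slips (frequency, coupling, and noise values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
dt, T = 1e-3, 100.0
omega, K, s = 2 * np.pi * 30, 3.0, 2.0   # ~30 Hz modules, coupling, phase noise

th1, th2, diffs = 0.0, 0.0, []
for _ in range(int(T / dt)):
    th1 += dt * (omega + K * np.sin(th2 - th1)) + np.sqrt(dt) * s * rng.normal()
    th2 += dt * (omega + K * np.sin(th1 - th2)) + np.sqrt(dt) * s * rng.normal()
    diffs.append(th1 - th2)

plv = np.abs(np.exp(1j * np.array(diffs)).mean())   # phase-locking value
print(f"phase-locking value: {plv:.3f}")
```

Transient gradients of such phases across a chain of modules are what appear as traveling waves in the paper's simulations.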
Affiliation(s)
- Anirudh Kulkarni
- Laboratoire de Physique de l'Ecole Normale Supérieure, CNRS, Ecole Normale Supérieure, PSL University, Sorbonne Université, Université de Paris, Paris, France
- IBENS, Ecole Normale Supérieure, PSL University, CNRS, INSERM, Paris, France
- Jonas Ranft
- Laboratoire de Physique de l'Ecole Normale Supérieure, CNRS, Ecole Normale Supérieure, PSL University, Sorbonne Université, Université de Paris, Paris, France
- IBENS, Ecole Normale Supérieure, PSL University, CNRS, INSERM, Paris, France
- Vincent Hakim
- Laboratoire de Physique de l'Ecole Normale Supérieure, CNRS, Ecole Normale Supérieure, PSL University, Sorbonne Université, Université de Paris, Paris, France
16. Chow CC, Karimipanah Y. Before and Beyond the Wilson-Cowan Equations. J Neurophysiol 2020; 123:1645-1656. [DOI: 10.1152/jn.00404.2019]
Abstract
The Wilson-Cowan equations represent a landmark in the history of computational neuroscience. Along with the insights Wilson and Cowan offered for neuroscience, they crystallized an approach to modeling neural dynamics and brain function. Although their iconic equations are used in various guises today, the ideas that led to their formulation and the relationship to other approaches are not well known. Here, we give a little context to some of the biological and theoretical concepts that lead to the Wilson-Cowan equations and discuss how to extend beyond them.
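In one common modern form (sigmoidal gain and no refractory factor; the original 1972 equations include one), the Wilson-Cowan system for excitatory and inhibitory population activities reads as below; the weights are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

S = lambda x: 1.0 / (1.0 + np.exp(-x))   # sigmoidal population response

def wilson_cowan(t, y, wEE=16, wEI=12, wIE=15, wII=3, hE=1.0, hI=-2.0, tau=1.0):
    E, I = y
    dE = (-E + S(wEE * E - wEI * I + hE)) / tau   # excitatory population
    dI = (-I + S(wIE * E - wII * I + hI)) / tau   # inhibitory population
    return [dE, dI]

sol = solve_ivp(wilson_cowan, (0, 100), [0.1, 0.05], max_step=0.05)
print("final (E, I):", sol.y[:, -1].round(3))  # fixed point or limit cycle,
                                               # depending on the weights
```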
Affiliation(s)
- Carson C Chow
- Laboratory of Biological Modeling, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, Maryland
- Yahya Karimipanah
- Laboratory of Biological Modeling, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, Maryland
17. Qiu SW, Chow CC. Finite-size effects for spiking neural networks with spatially dependent coupling. Phys Rev E 2018; 98:062414. [PMID: 32478211] [PMCID: PMC7258138] [DOI: 10.1103/physreve.98.062414]
Abstract
We study finite-size fluctuations in a network of spiking deterministic neurons coupled with nonuniform synaptic coupling. We generalize a previously developed theory of finite-size effects for globally coupled neurons with a uniform coupling function. In the uniform coupling case, mean-field theory is well defined by averaging over the network as the number of neurons in the network goes to infinity. However, for nonuniform coupling it is no longer possible to average over the entire network if we are interested in fluctuations at a particular location within the network. We show that if the coupling function approaches a continuous function in the infinite system size limit, then an average over a local neighborhood can be defined such that mean-field theory is well defined for a spatially dependent field. We then use a path-integral formalism to derive a perturbation expansion in the inverse system size around the mean-field limit for the covariance of the input to a neuron (synaptic drive) and firing rate fluctuations due to dynamical deterministic finite-size effects.
Affiliation(s)
- Si-Wei Qiu
- Laboratory of Biological Modeling, National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK), National Institutes of Health (NIH), Bethesda, Maryland 20892, USA
- Carson C Chow
- Laboratory of Biological Modeling, National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK), National Institutes of Health (NIH), Bethesda, Maryland 20892, USA
18. Large deviations for randomly connected neural networks: I. Spatially extended systems. Adv Appl Probab 2018. [DOI: 10.1017/apr.2018.42]
Abstract
In a series of two papers, we investigate the large deviations and asymptotic behavior of stochastic models of brain neural networks with random interaction coefficients. In this first paper, we take into account the spatial structure of the brain and consider first the presence of interaction delays that depend on the distance between cells and then the Gaussian random interaction amplitude with a mean and variance that depend on the position of the neurons and scale as the inverse of the network size. We show that the empirical measure satisfies a large deviations principle with a good rate function reaching its minimum at a unique spatially extended probability measure. This result implies an averaged convergence of the empirical measure and a propagation of chaos. The limit is characterized through a complex non-Markovian implicit equation in which the network interaction term is replaced by a nonlocal Gaussian process with a mean and covariance that depend on the statistics of the solution over the whole neural field.
19. Prefronto-cortical dopamine D1 receptor sensitivity can critically influence working memory maintenance during delayed response tasks. PLoS One 2018; 13:e0198136. [PMID: 29813109] [PMCID: PMC5973564] [DOI: 10.1371/journal.pone.0198136]
Abstract
The dopamine (DA) hypothesis of cognitive deficits suggests that too low or too high an extracellular DA concentration in the prefrontal cortex (PFC) can severely impair working memory (WM) maintenance during the delay period. Thus, there exists only an optimal range of DA where the sustained-firing activity, the neural correlate of WM maintenance, in the cortex possesses optimal firing frequency as well as robustness against noisy distractions. Empirical evidence demonstrates changes even in the D1 receptor (D1R) sensitivity to extracellular DA, collectively manifested through D1R density and DA-binding affinity, in the PFC under neuropsychiatric conditions such as ageing and schizophrenia. However, the impact of alterations in cortical D1R sensitivity on WM maintenance has remained poorly addressed. Using a quantitative neural mass model of the prefronto-mesoprefrontal system, the present study reveals that higher D1R sensitivity may not only shrink the optimal DA range but also shift the range to lower concentrations. Moreover, higher sensitivity may significantly reduce WM robustness even within the optimal DA range and exacerbate the decline at abnormal DA levels. These findings have important clinical implications, such as dosage precision and variability of DA-correcting drugs across patients, and failure to acquire healthy WM maintenance even under drug-controlled normal cortical DA levels.
20. A theoretical framework for analyzing coupled neuronal networks: Application to the olfactory system. PLoS Comput Biol 2017; 13:e1005780. [PMID: 28968384] [PMCID: PMC5638622] [DOI: 10.1371/journal.pcbi.1005780]
Abstract
Determining how synaptic coupling within and between regions is modulated during sensory processing is an important topic in neuroscience. Electrophysiological recordings provide detailed information about neural spiking but have traditionally been confined to a particular region or layer of cortex. Here we develop new theoretical methods to study interactions between and within two brain regions, based on experimental measurements of spiking activity simultaneously recorded from the two regions. By systematically comparing experimentally-obtained spiking statistics to (efficiently computed) model spike rate statistics, we identify regions in model parameter space that are consistent with the experimental data. We apply our new technique to dual micro-electrode array in vivo recordings from two distinct regions: olfactory bulb (OB) and anterior piriform cortex (PC). Our analysis predicts that: i) inhibition within the afferent region (OB) has to be weaker than the inhibition within PC, ii) excitation from PC to OB is generally stronger than excitation from OB to PC, iii) excitation from PC to OB and inhibition within PC have to both be relatively strong compared to presynaptic inputs from OB. These predictions are validated in a spiking neural network model of the OB–PC pathway that satisfies the many constraints from our experimental data. We find when the derived relationships are violated, the spiking statistics no longer satisfy the constraints from the data. In principle this modeling framework can be adapted to other systems and be used to investigate relationships between other neural attributes besides network connection strengths. Thus, this work can serve as a guide to further investigations into the relationships of various neural attributes within and across different regions during sensory processing.

Sensory processing is known to span multiple regions of the nervous system. However, electrophysiological recordings during sensory processing have traditionally been limited to a single region or brain layer. With recent advances in experimental techniques, recording spiking activity from multiple regions simultaneously is feasible. However, other important quantities, such as inter-region connection strengths, cannot yet be measured. Here, we develop new theoretical tools to leverage data obtained by recording from two different brain regions simultaneously. We address the following questions: what are the crucial neural network attributes that enable sensory processing across different regions, and how are these attributes related to one another? With a novel theoretical framework to efficiently calculate spiking statistics, we can characterize a high dimensional parameter space that satisfies data constraints. We apply our results to the olfactory system to make specific predictions about effective network connectivity. Our framework relies on incorporating relatively easy-to-measure quantities to predict hard-to-measure interactions across multiple brain regions. Because this work is adaptable to other systems, we anticipate it will be a valuable tool for analysis of other larger scale brain recordings.
21. Barreiro AK, Ly C. Practical approximation method for firing-rate models of coupled neural networks with correlated inputs. Phys Rev E 2017; 96:022413. [PMID: 28950506] [DOI: 10.1103/physreve.96.022413]
Abstract
Rapid experimental advances now enable simultaneous electrophysiological recording of neural activity at single-cell resolution across large regions of the nervous system. Models of this neural network activity will necessarily increase in size and complexity, thus increasing the computational cost of simulating them and the challenge of analyzing them. Here we present a method to approximate the activity and firing statistics of a general firing rate network model (of the Wilson-Cowan type) subject to noisy correlated background inputs. The method requires solving a system of transcendental equations and is fast compared to Monte Carlo simulations of coupled stochastic differential equations. We implement the method with several examples of coupled neural networks and show that the results are quantitatively accurate even with moderate coupling strengths and an appreciable amount of heterogeneity in many parameters. This work should be useful for investigating how various neural attributes qualitatively affect the spiking statistics of coupled neural networks.
Affiliation(s)
- Andrea K Barreiro
- Department of Mathematics, Southern Methodist University, P.O. Box 750235, Dallas, Texas 75275, USA
- Cheng Ly
- Department of Statistical Sciences and Operations Research, Virginia Commonwealth University, 1015 Floyd Avenue, Richmond, Virginia 23284, USA
22. Ocker GK, Hu Y, Buice MA, Doiron B, Josić K, Rosenbaum R, Shea-Brown E. From the statistics of connectivity to the statistics of spike times in neuronal networks. Curr Opin Neurobiol 2017; 46:109-119. [PMID: 28863386] [DOI: 10.1016/j.conb.2017.07.011]
Abstract
An essential step toward understanding neural circuits is linking their structure and their dynamics. In general, this relationship can be almost arbitrarily complex. Recent theoretical work has, however, begun to identify some broad principles underlying collective spiking activity in neural circuits. The first is that local features of network connectivity can be surprisingly effective in predicting global statistics of activity across a network. The second is that, for the important case of large networks with excitatory-inhibitory balance, correlated spiking persists or vanishes depending on the spatial scales of recurrent and feedforward connectivity. We close by showing how these ideas, together with plasticity rules, can help to close the loop between network structure and activity statistics.
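The linear-response backbone underlying several of the reviewed results fits in a few lines: for a linearized network with interaction matrix W and intrinsic noise covariance D, the stationary activity covariance is C = (I - W)^{-1} D (I - W)^{-T}, which makes the route from connectivity statistics to correlation statistics explicit (the random network parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(8)
N, p, w = 100, 0.2, 0.1
W = w * (rng.random((N, N)) < p) * rng.choice([-1, 1], size=(N, N))
np.fill_diagonal(W, 0)                  # random signed connectivity, no self-loops

D = np.eye(N)                           # white intrinsic noise, unit strength
A = np.linalg.inv(np.eye(N) - W)        # linear-response propagator
C = A @ D @ A.T                         # stationary covariance of activity

off = C[~np.eye(N, dtype=bool)]
print("mean pairwise correlation:", (off.mean() / C.diagonal().mean()).round(4))
```

Expanding A = I + W + W^2 + ... shows how correlations accumulate over paths through the graph, which is why local motif statistics can predict global correlation statistics.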
Affiliation(s)
- Yu Hu
- Center for Brain Science, Harvard University, United States
- Michael A Buice
- Allen Institute for Brain Science, United States; Department of Applied Mathematics, University of Washington, United States
- Brent Doiron
- Department of Mathematics, University of Pittsburgh, United States; Center for the Neural Basis of Cognition, Pittsburgh, United States
- Krešimir Josić
- Department of Mathematics, University of Houston, United States; Department of Biology and Biochemistry, University of Houston, United States; Department of BioSciences, Rice University, United States
- Robert Rosenbaum
- Department of Mathematics, University of Notre Dame, United States
- Eric Shea-Brown
- Allen Institute for Brain Science, United States; Department of Applied Mathematics, University of Washington, United States; Department of Physiology and Biophysics, and University of Washington Institute for Neuroengineering, United States.
23. Ocker GK, Josić K, Shea-Brown E, Buice MA. Linking structure and activity in nonlinear spiking networks. PLoS Comput Biol 2017; 13:e1005583.
Abstract
Recent experimental advances are producing an avalanche of data on both neural connectivity and neural activity. To take full advantage of these two emerging datasets we need a framework that links them, revealing how collective neural activity arises from the structure of neural connectivity and intrinsic neural dynamics. This problem of structure-driven activity has drawn major interest in computational neuroscience. Existing methods for relating activity and architecture in spiking networks rely on linearizing activity around a central operating point and thus fail to capture the nonlinear responses of individual neurons that are the hallmark of neural information processing. Here, we overcome this limitation and present a new relationship between connectivity and activity in networks of nonlinear spiking neurons by developing a diagrammatic fluctuation expansion based on statistical field theory. We explicitly show how recurrent network structure produces pairwise and higher-order correlated activity, and how nonlinearities impact the networks' spiking activity. Our findings open new avenues to investigating how single-neuron nonlinearities, including those of different cell types, combine with connectivity to shape population activity and function.

Neuronal networks, like many biological systems, exhibit variable activity. This activity is shaped by both the underlying biology of the component neurons and the structure of their interactions. How can we combine knowledge of these two things, that is, models of individual neurons and of their interactions, to predict the statistics of single- and multi-neuron activity? Current approaches rely on linearizing neural activity around a stationary state. In the face of neural nonlinearities, however, these linear methods can fail to predict spiking statistics and even fail to correctly predict whether activity is stable or pathological. Here, we show how to calculate any spike train cumulant in a broad class of models, while systematically accounting for nonlinear effects. We then study a fundamental effect of nonlinear input-rate transfer, the coupling between different orders of spiking statistics, and how this depends on single-neuron and network properties.
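A sketch of the stochastically spiking nonlinear model class the expansion applies to, simulated in discrete time so that spike-train statistics can be estimated by Monte Carlo and compared against the diagrammatic predictions (network size, weights, and the exponential nonlinearity are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(9)
N, dt, steps, tau_s = 50, 1e-3, 100000, 0.01
W = 0.05 * rng.normal(size=(N, N)) / np.sqrt(N)   # weak random coupling
phi = lambda x: 20.0 * np.exp(x)                  # nonlinear rate function (Hz)
b = -0.5                                          # baseline input

r = np.zeros(N)                                   # synaptically filtered spikes
counts = np.zeros(N)
for _ in range(steps):
    spikes = rng.poisson(phi(b + W @ r) * dt)     # conditionally Poisson spiking
    counts += spikes
    r += dt / tau_s * (-r) + spikes / tau_s       # exponential synaptic filter
print("mean rate (Hz):", (counts / (steps * dt)).mean().round(2))
```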
Affiliation(s)
- Gabriel Koch Ocker
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Krešimir Josić
- Department of Mathematics and Department of Biology and Biochemistry, University of Houston, Houston, Texas, United States of America
- Department of BioSciences, Rice University, Houston, Texas, United States of America
- Eric Shea-Brown
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America
- Department of Physiology and Biophysics, and UW Institute of Neuroengineering, University of Washington, Seattle, Washington, United States of America
- Michael A. Buice
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America
24. Weber MF, Frey E. Master equations and the theory of stochastic path integrals. Rep Prog Phys 2017; 80:046601. [PMID: 28306551] [DOI: 10.1088/1361-6633/aa5ae2]
Abstract
This review provides a pedagogic and self-contained introduction to master equations and to their representation by path integrals. Since the 1930s, master equations have served as a fundamental tool to understand the role of fluctuations in complex biological, chemical, and physical systems. Despite their simple appearance, analyses of master equations most often rely on low-noise approximations such as the Kramers-Moyal or the system size expansion, or require ad-hoc closure schemes for the derivation of low-order moment equations. We focus on numerical and analytical methods going beyond the low-noise limit and provide a unified framework for the study of master equations. After deriving the forward and backward master equations from the Chapman-Kolmogorov equation, we show how the two master equations can be cast into either of four linear partial differential equations (PDEs). Three of these PDEs are discussed in detail. The first PDE governs the time evolution of a generalized probability generating function whose basis depends on the stochastic process under consideration. Spectral methods, WKB approximations, and a variational approach have been proposed for the analysis of the PDE. The second PDE is novel and is obeyed by a distribution that is marginalized over an initial state. It proves useful for the computation of mean extinction times. The third PDE describes the time evolution of a 'generating functional', which generalizes the so-called Poisson representation. Subsequently, the solutions of the PDEs are expressed in terms of two path integrals: a 'forward' and a 'backward' path integral. Combined with inverse transformations, one obtains two distinct path integral representations of the conditional probability distribution solving the master equations. We exemplify both path integrals in analysing elementary chemical reactions. Moreover, we show how a well-known path integral representation of averaged observables can be recovered from them. Upon expanding the forward and the backward path integrals around stationary paths, we then discuss and extend a recent method for the computation of rare event probabilities. Besides, we also derive path integral representations for processes with continuous state spaces whose forward and backward master equations admit Kramers-Moyal expansions. A truncation of the backward expansion at the level of a diffusion approximation recovers a classic path integral representation of the (backward) Fokker-Planck equation. One can rewrite this path integral in terms of an Onsager-Machlup function and, for purely diffusive Brownian motion, it simplifies to the path integral of Wiener. To make this review accessible to a broad community, we have used the language of probability theory rather than quantum (field) theory and do not assume any knowledge of the latter. The probabilistic structures underpinning various technical concepts, such as coherent states, the Doi-shift, and normal-ordered observables, are thereby made explicit.
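The processes these master equations govern can be simulated exactly with the Gillespie algorithm; a birth-death sketch whose stationary distribution is Poisson with mean birth/death, a standard consistency check (the rates are illustrative):

```python
import numpy as np

rng = np.random.default_rng(10)
birth, death = 5.0, 1.0        # constant birth rate; per-capita death rate
n, t, T = 0, 0.0, 5000.0
acc = 0.0                      # time-weighted accumulator for the mean

while t < T:
    rates = np.array([birth, death * n])
    total = rates.sum()
    wait = rng.exponential(1.0 / total)   # exponential waiting time
    acc += n * wait
    t += wait
    n += 1 if rng.random() < rates[0] / total else -1

print(f"time-averaged n: {acc / t:.2f} (master equation: {birth / death:.2f})")
```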
Affiliation(s)
- Markus F Weber
- Arnold Sommerfeld Center for Theoretical Physics and Center for NanoScience, Department of Physics, Ludwig-Maximilians-Universität München, Theresienstraße 37, 80333 München, Germany
- Erwin Frey
- Arnold Sommerfeld Center for Theoretical Physics and Center for NanoScience, Department of Physics, Ludwig-Maximilians-Universität München, Theresienstraße 37, 80333 München, Germany
25
Stability and instability of a neuron network with excitatory and inhibitory small-world connections. Neural Netw 2017; 89:50-60. [PMID: 28324759 DOI: 10.1016/j.neunet.2017.02.009] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2016] [Revised: 02/16/2017] [Accepted: 02/24/2017] [Indexed: 11/20/2022]
Abstract
This study considers a delayed neural network with excitatory and inhibitory shortcuts. The global stability of the trivial equilibrium is investigated based on Lyapunov's direct method, and delay-dependent stability criteria are obtained. It is shown that both the excitatory and inhibitory shortcuts decrease the stability interval, but a time delay can be employed as a global stabilizer. In addition, we analyze the bounds of the eigenvalues of the adjacency matrix using matrix perturbation theory and then obtain generalized sufficient conditions for local stability. Keeping the probability of inhibitory shortcuts small is helpful for maintaining stability. The mechanisms of instability, bifurcation modes, and chaos are also investigated. Compared with methods based on mean-field theory, the proposed method can guarantee the stability of the system in most realizations of the random network. The proposed method is effective for cases where excitatory and inhibitory shortcuts exist simultaneously in the network.
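The gain-versus-delay trade-off in the abstract can be illustrated with a toy scalar delay-differential equation (our simplification, not the paper's small-world network): x'(t) = -x(t) + c*tanh(x(t - tau)). For |c| < 1 the trivial equilibrium is stable for any delay, while for larger gain it loses stability; a minimal Euler-integration sketch in Python:

import numpy as np
from collections import deque

def simulate(c, tau, dt=1e-3, t_end=50.0, x0=0.5):
    d = int(round(tau / dt))
    hist = deque([x0] * (d + 1), maxlen=d + 1)   # hist[0] holds x(t - tau)
    for _ in range(int(t_end / dt)):
        x = hist[-1]
        hist.append(x + dt * (-x + c * np.tanh(hist[0])))
    return hist[-1]

print(abs(simulate(0.8, 2.0)))   # ~0: trivial equilibrium stable for |c| < 1
print(abs(simulate(3.0, 2.0)))   # ~2.98: trivial equilibrium has lost stability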
26
Zimmer FM, Schmidt M, Maziero J. Quantum correlated cluster mean-field theory applied to the transverse Ising model. Phys Rev E 2016; 93:062116. [PMID: 27415217 DOI: 10.1103/physreve.93.062116] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2016] [Indexed: 11/07/2022]
Abstract
Mean-field theory (MFT) is one of the main tools available for the analytical calculations entailed in investigations of many-body systems. Recently, there has been a surge of interest in improving this kind of method, mainly with the aim of incorporating geometric and correlation properties of these systems. The correlated cluster MFT (CCMFT) succeeded quite well in doing that for classical spin systems. Nevertheless, even the CCMFT presents some deficiencies when applied to quantum systems. In this article, we address this issue by proposing the quantum CCMFT (QCCMFT), which, in contrast to its classical predecessor, uses general quantum states in its self-consistent mean-field equations. We apply the QCCMFT to the transverse Ising model on honeycomb, square, and simple cubic lattices and obtain fairly good results both for the Curie temperature of the thermal phase transition and for the critical field of the quantum phase transition. Indeed, our results match those obtained via exact solutions, series expansions, or Monte Carlo simulations.
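For orientation (our sketch, not the QCCMFT itself), the single-site mean-field baseline that CCMFT and QCCMFT refine reduces the transverse Ising model to the self-consistency m = (zJm/E) tanh(E/T) with E = sqrt((zJm)^2 + Gamma^2); iterating it places the T -> 0 critical field at Gamma_c = zJ, which the cluster methods then correct toward the Monte Carlo value:

import numpy as np

def magnetization(Gamma, T=1e-3, z=4, J=1.0, iters=4000):
    m = 0.5
    for _ in range(iters):
        E = np.sqrt((z * J * m) ** 2 + Gamma ** 2)   # mean-field level splitting
        m = (z * J * m / E) * np.tanh(E / T)         # self-consistency update
    return m

# Single-site MFT: the ordered phase survives up to Gamma_c = z*J = 4 at T -> 0
# on the square lattice; cluster corrections pull this toward the exact value.
for Gamma in (3.0, 3.9, 4.1):
    print(Gamma, magnetization(Gamma))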
Affiliation(s)
- F M Zimmer
- Departamento de Física, Centro de Ciências Naturais e Exatas, Universidade Federal de Santa Maria, Avenida Roraima 1000, 97105-900, Santa Maria, RS, Brazil
- M Schmidt
- Departamento de Física, Centro de Ciências Naturais e Exatas, Universidade Federal de Santa Maria, Avenida Roraima 1000, 97105-900, Santa Maria, RS, Brazil
- Jonas Maziero
- Departamento de Física, Centro de Ciências Naturais e Exatas, Universidade Federal de Santa Maria, Avenida Roraima 1000, 97105-900, Santa Maria, RS, Brazil; Instituto de Física, Facultad de Ingeniería, Universidad de la República, J. Herrera y Reissig 565, 11300, Montevideo, Uruguay
27
Steyn-Ross ML, Steyn-Ross DA. From individual spiking neurons to population behavior: Systematic elimination of short-wavelength spatial modes. Phys Rev E 2016; 93:022402. [PMID: 26986357 DOI: 10.1103/physreve.93.022402] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2015] [Indexed: 12/14/2022]
Abstract
Mean-field models of the brain approximate spiking dynamics by assuming that each neuron responds to its neighbors via a naive spatial average that neglects local fluctuations and correlations in firing activity. In this paper we address this issue by introducing a rigorous formalism to enable spatial coarse-graining of spiking dynamics, scaling from the microscopic level of a single type 1 (integrator) neuron to a macroscopic assembly of spiking neurons that are interconnected by chemical synapses and nearest-neighbor gap junctions. Spiking behavior at the single-neuron scale ℓ≈10μm is described by Wilson's two-variable conductance-based equations [H. R. Wilson, J. Theor. Biol. 200, 375 (1999)], driven by fields of incoming neural activity from neighboring neurons. We map these equations to a coarser spatial resolution of grid length Bℓ, with B≫1 being the blocking ratio linking micro and macro scales. Our method systematically eliminates high-frequency (short-wavelength) spatial modes q in favor of low-frequency spatial modes Q using an adiabatic elimination procedure that has been shown to be equivalent to the path-integral coarse graining applied to renormalization group theory of critical phenomena. This bottom-up neural regridding allows us to track the percolation of synaptic and ion-channel noise from the single neuron up to the scale of macroscopic population-average variables. Anticipated applications of neural regridding include extraction of the current-to-firing-rate transfer function, investigation of fluctuation criticality near phase-transition tipping points, determination of spatial scaling laws for avalanche events, and prediction of the spatial extent of self-organized macrocolumnar structures. As a first-order exemplar of the method, we recover nonlinear corrections for a coarse-grained Wilson spiking neuron embedded in a network of identical diffusively coupled neurons whose chemical synapses have been disabled. Intriguingly, we find that reblocking transforms the original type 1 Wilson integrator into a type 2 resonator whose spike-rate transfer function exhibits abrupt spiking onset with near-vertical takeoff and chaotic dynamics just above threshold.
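The mode-elimination step can be caricatured in one dimension (a toy of ours, not the paper's regridding formalism): transform a fine-grid activity field, zero every Fourier mode not resolvable on a coarse grid of spacing B cells, and transform back. The residual measures the energy carried by the eliminated short-wavelength modes:

import numpy as np

rng = np.random.default_rng(1)
N, B = 1024, 8                            # fine grid points, blocking ratio
x = np.linspace(0.0, 1.0, N, endpoint=False)
field = np.sin(2 * np.pi * 3 * x) + 0.5 * rng.standard_normal(N)

modes = np.fft.rfft(field)
cutoff = N // (2 * B)                     # Nyquist index of the coarse grid
modes[cutoff:] = 0.0                      # eliminate short-wavelength modes q
coarse = np.fft.irfft(modes, n=N)         # field carried by the coarse modes Q

print(np.std(field - coarse))             # energy in the eliminated modes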
Affiliation(s)
- D A Steyn-Ross
- School of Engineering, University of Waikato, Hamilton, New Zealand
28
Chow CC, Buice MA. Path integral methods for stochastic differential equations. J Math Neurosci 2015; 5:8. [PMID: 25852983 PMCID: PMC4385267 DOI: 10.1186/s13408-015-0018-5] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/12/2015] [Accepted: 02/13/2015] [Indexed: 06/04/2023]
Abstract
Stochastic differential equations (SDEs) have multiple applications in mathematical neuroscience and are notoriously difficult to solve analytically. Here, we give a self-contained pedagogical review of perturbative field-theoretic and path integral methods for calculating moments of the probability density function of SDEs. The methods can be extended to high-dimensional systems such as networks of coupled neurons and even to deterministic systems with quenched disorder.
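A minimal worked check (ours, not from the review): for the Ornstein-Uhlenbeck SDE dx = -a x dt + s dW, the moments that the perturbative machinery produces analytically are E[x(t)] = x0 exp(-a t) and stationary Var[x] = s^2/(2a); an Euler-Maruyama Monte Carlo recovers both. Parameter values are arbitrary:

import numpy as np

rng = np.random.default_rng(2)
a, s, dt = 1.0, 0.5, 1e-3
paths, steps = 4000, 10_000               # 4000 sample paths, t_end = 10
x = np.ones(paths)                        # x0 = 1
for _ in range(steps):
    x += -a * x * dt + s * np.sqrt(dt) * rng.standard_normal(paths)

print(x.var(), s**2 / (2 * a))            # stationary variance: both ~0.125
print(x.mean(), np.exp(-a * steps * dt))  # mean: both ~exp(-10), essentially 0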
Affiliation(s)
- Carson C. Chow
- Mathematical Biology Section, Laboratory of Biological Modeling, NIDDK, NIH, Bethesda, MD 20892 USA
- Michael A. Buice
- Mathematical Biology Section, Laboratory of Biological Modeling, NIDDK, NIH, Bethesda, MD 20892 USA
29
Bressloff PC. Path-integral methods for analyzing the effects of fluctuations in stochastic hybrid neural networks. J Math Neurosci 2015; 5:4. [PMID: 25852979 PMCID: PMC4385107 DOI: 10.1186/s13408-014-0016-z] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/03/2014] [Accepted: 12/11/2014] [Indexed: 06/04/2023]
Abstract
We consider applications of path-integral methods to the analysis of a stochastic hybrid model representing a network of synaptically coupled spiking neuronal populations. The state of each local population is described in terms of two stochastic variables, a continuous synaptic variable and a discrete activity variable. The synaptic variables evolve according to piecewise-deterministic dynamics describing, at the population level, synapses driven by spiking activity. The dynamical equations for the synaptic currents are only valid between jumps in spiking activity, and the latter are described by a jump Markov process whose transition rates depend on the synaptic variables. We assume a separation of time scales between the fast spiking dynamics and the slower synaptic dynamics with time constant τ. This naturally introduces a small positive parameter ϵ, the ratio of the fast to the slow time scale, which can be used to develop various asymptotic expansions of the corresponding path-integral representation of the stochastic dynamics. First, we derive a variational principle for maximum-likelihood paths of escape from a metastable state (large deviations in the small noise limit ϵ → 0). We then show how the path integral provides an efficient method for obtaining a diffusion approximation of the hybrid system for small ϵ. The resulting Langevin equation can be used to analyze the effects of fluctuations within the basin of attraction of a metastable state, that is, ignoring the effects of large deviations. We illustrate this by using the Langevin approximation to analyze the effects of intrinsic noise on pattern formation in a spatially structured hybrid network. In particular, we show how noise enlarges the parameter regime over which patterns occur, in an analogous fashion to PDEs. Finally, we carry out a loop expansion of the path integral and use this to derive corrections to voltage-based mean-field equations, analogous to the modified activity-based equations generated from a neural master equation.
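A stripped-down piecewise-deterministic sketch of the hybrid setup (our toy; the rate function, weight, and parameters are invented for illustration): a synaptic variable u follows du/dt = (-u + w*a)/tau between jumps of a discrete activity state a in {0, 1}, whose switching rates are of order 1/eps and depend on u. For small eps the time average of u approaches the mean-field fixed point u* = w*f(u*):

import numpy as np

rng = np.random.default_rng(3)
tau, w, eps, dt = 1.0, 2.0, 0.05, 1e-3
f = lambda u: 1.0 / (1.0 + np.exp(-u))    # sigmoidal rate function (invented)

u, a = 0.0, 0
trace = []
for _ in range(20_000):
    u += dt * (-u + w * a) / tau          # deterministic flow between jumps
    rate = (f(u) if a == 0 else 1.0 - f(u)) / eps
    if rng.random() < rate * dt:          # fast jump of the activity state
        a = 1 - a
    trace.append(u)

# Mean-field fixed point solves u* = w * f(u*), i.e. u* ~ 1.69 here
print(np.mean(trace[10_000:]))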
Affiliation(s)
- Paul C. Bressloff
- Department of Mathematics, University of Utah, 155 South 1400 East, Salt Lake City, UT 84112 USA
30
Lagzi F, Rotter S. A Markov model for the temporal dynamics of balanced random networks of finite size. Front Comput Neurosci 2014; 8:142. [PMID: 25520644 PMCID: PMC4253948 DOI: 10.3389/fncom.2014.00142] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2014] [Accepted: 10/20/2014] [Indexed: 11/21/2022] Open
Abstract
The balanced state of recurrent networks of excitatory and inhibitory spiking neurons is characterized by fluctuations of population activity about an attractive fixed point. Numerical simulations show that these dynamics are essentially nonlinear, and that the intrinsic noise (self-generated fluctuations) in networks of finite size is state-dependent. Therefore, stochastic differential equations with additive noise of fixed amplitude cannot provide an adequate description of the stochastic dynamics; the noise model should, rather, result from a self-consistent description of the network dynamics. Here, we consider a two-state Markovian neuron model, where spikes correspond to transitions from the active state to the refractory state. Excitatory and inhibitory input to this neuron affects the transition rates between the two states. The corresponding nonlinear dependencies can be identified directly from numerical simulations of networks of leaky integrate-and-fire neurons, discretized at a time resolution in the sub-millisecond range. Deterministic mean-field equations, and a noise component that depends on the dynamic state of the network, are obtained from this model. The resulting stochastic model reflects the behavior observed in numerical simulations quite well, irrespective of the size of the network. In particular, the strong temporal correlation between the two populations, a hallmark of the balanced state in random recurrent networks, is well represented by our model. Numerical simulations of such networks also reveal a log-normal distribution of short-term spike counts, a previously unreported property of balanced random networks with fixed in-degree, and our model shares this statistical property. Furthermore, the reconstruction of the flow from simulated time series suggests that the mean-field dynamics of finite-size networks are essentially of Wilson-Cowan type. We expect that this novel nonlinear stochastic model of the interaction between neuronal populations also opens new doors for analyzing the joint dynamics of multiple interacting networks.
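In the spirit of the abstract's two-state picture (a simplified toy of ours; the transition rates are invented), each of N neurons is either active or refractory, a spike is the active-to-refractory transition, and the activation rate grows with the fraction m of active cells. Simulating this directly shows fluctuations about the attractive fixed point of the mean-field equation dm/dt = alpha(m)(1 - m) - beta*m:

import numpy as np

rng = np.random.default_rng(4)
N, dt = 1000, 1e-3
alpha = lambda m: 2.0 + 8.0 * m          # refractory -> active rate (recurrent drive)
beta = 5.0                                # active -> refractory rate (a spike)
active = np.zeros(N, dtype=bool)
ms = []
for _ in range(20_000):
    m = active.mean()
    turn_on = ~active & (rng.random(N) < alpha(m) * dt)
    spike = active & (rng.random(N) < beta * dt)
    active[turn_on] = True
    active[spike] = False
    ms.append(m)

ms = np.array(ms[5_000:])
# Mean-field fixed point: (2 + 8m)(1 - m) = 5m, i.e. m* ~ 0.57
print(ms.mean(), ms.std())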
Affiliation(s)
- Fereshteh Lagzi
- Bernstein Center Freiburg and Faculty of Biology, University of Freiburg, Freiburg, Germany
31
Buice MA, Chow CC. Generalized activity equations for spiking neural network dynamics. Front Comput Neurosci 2013; 7:162. [PMID: 24298252 PMCID: PMC3829481 DOI: 10.3389/fncom.2013.00162] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2013] [Accepted: 10/23/2013] [Indexed: 11/25/2022] Open
Abstract
Much progress has been made in uncovering the computational capabilities of spiking neural networks. However, spiking neurons will always be more expensive to simulate than rate neurons because of the inherent disparity in time scales: the spike duration is much shorter than the inter-spike interval, which is in turn much shorter than any learning time scale. In numerical analysis, this is a classic stiff problem. Spiking neurons are also much more difficult to study analytically. One possible approach to making spiking networks more tractable is to augment mean-field activity models with some information about spiking correlations. For example, such a generalized activity model could carry information about spiking rates and correlations between spikes self-consistently. Here, we show how this can be accomplished by constructing a complete formal probabilistic description of the network and then expanding around a small parameter such as the inverse of the number of neurons in the network. The mean-field theory of the system gives a rate-like description, and the first-order terms in the perturbation expansion keep track of covariances.
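The role of the small parameter can be seen numerically (our toy birth-death population, not the paper's equations): fluctuations of the population activity about its mean-field value shrink with network size, so the covariance correction tracked by the first-order terms scales like 1/N:

import numpy as np

rng = np.random.default_rng(5)

def run(N, dt=1e-4, T=20.0):
    r = lambda m: 0.5 + np.tanh(m)      # activation rate of a quiescent neuron
    k = N // 2                          # number of active neurons
    xs = []
    for _ in range(int(T / dt)):
        m = k / N
        if rng.random() < N * (1 - m) * r(m) * dt:
            k += 1                      # one neuron switches on
        if rng.random() < N * m * dt:
            k -= 1                      # one neuron switches off
        xs.append(m)
    return np.array(xs[50_000:])        # drop the transient

for N in (100, 400, 1600):
    print(N, run(N).var() * N)          # N * Var[m] is roughly N-independent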
Affiliation(s)
- Michael A. Buice
- Modeling, Analysis and Theory Team, Allen Institute for Brain Science, Seattle, WA, USA
- Carson C. Chow
- Laboratory of Biological Modeling, NIDDK, National Institutes of Health, Bethesda, MD, USA