1. Xia J, Jasper A, Kohn A, Miller KD. Circuit-motivated generalized affine models characterize stimulus-dependent visual cortical shared variability. iScience 2024; 27:110512. PMID: 39156642; PMCID: PMC11328009; DOI: 10.1016/j.isci.2024.110512.
Abstract
Correlated variability in the visual cortex is modulated by stimulus properties. The stimulus dependence of correlated variability impacts stimulus coding and is indicative of circuit structure. An affine model combining a multiplicative factor and an additive offset has been proposed to explain how correlated variability in primary visual cortex (V1) depends on stimulus orientations. However, whether the affine model could be extended to explain modulations by other stimulus variables or variability shared between two brain areas is unknown. Motivated by a simple neural circuit mechanism, we modified the affine model to better explain the contrast dependence of neural variability shared within either primary or secondary visual cortex (V1 or V2) as well as the orientation dependence of neural variability shared between V1 and V2. Our results bridge neural circuit mechanisms and statistical models and provide a parsimonious explanation for the stimulus dependence of correlated variability within and between visual areas.
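The affine structure described in this abstract is easy to demonstrate in simulation. Below is a minimal sketch (all variable names and parameter values are illustrative assumptions, not the paper's fits): each trial's population response is a tuned mean rate scaled by a shared multiplicative gain plus a shared additive offset, which suffices to induce positive correlated variability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal sketch of an affine model of shared variability (illustrative
# parameters): trial response = shared gain g * tuned mean f + shared offset h
# + private noise.
n_neurons, n_trials = 50, 2000
f = rng.uniform(5.0, 20.0, size=n_neurons)        # tuned mean rates (spikes/s)
g = 1.0 + 0.2 * rng.standard_normal(n_trials)     # shared multiplicative factor
h = 0.5 * rng.standard_normal(n_trials)           # shared additive offset
private = rng.standard_normal((n_trials, n_neurons))

r = g[:, None] * f[None, :] + h[:, None] + private  # trials x neurons

# The shared gain and offset induce positive pairwise correlations.
c = np.corrcoef(r, rowvar=False)
mean_corr = (c.sum() - n_neurons) / (n_neurons * (n_neurons - 1))
print(mean_corr)
```

Because both the gain and the offset are common to all neurons on a trial, every pairwise covariance picks up a positive shared term, so the mean pairwise correlation is well above zero.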
Affiliation(s)
- Ji Xia
- Center for Theoretical Neuroscience and Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Anna Jasper
- Dominick Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- Adam Kohn
- Dominick Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- Department of Ophthalmology and Visual Sciences, Albert Einstein College of Medicine, Bronx, NY, USA
- Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx, NY, USA
- Kenneth D. Miller
- Center for Theoretical Neuroscience and Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Department of Neuroscience, Swartz Program in Theoretical Neuroscience, Kavli Institute for Brain Science, College of Physicians and Surgeons and Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York City, NY 10027, USA

2. Negrón A, Getz MP, Handy G, Doiron B. The mechanics of correlated variability in segregated cortical excitatory subnetworks. Proc Natl Acad Sci U S A 2024; 121:e2306800121. PMID: 38959037; PMCID: PMC11252788; DOI: 10.1073/pnas.2306800121.
Abstract
Understanding the genesis of shared trial-to-trial variability in neuronal population activity within the sensory cortex is critical to uncovering the biological basis of information processing in the brain. Shared variability is often a reflection of the structure of cortical connectivity since it likely arises, in part, from local circuit inputs. A series of experiments from segregated networks of (excitatory) pyramidal neurons in the mouse primary visual cortex challenge this view. Specifically, the across-network correlations were found to be larger than predicted given the known weak cross-network connectivity. We aim to uncover the circuit mechanisms responsible for these enhanced correlations through biologically motivated cortical circuit models. Our central finding is that coupling each excitatory subpopulation with a specific inhibitory subpopulation provides the most robust network-intrinsic solution in shaping these enhanced correlations. This result argues for the existence of excitatory-inhibitory functional assemblies in early sensory areas which mirror not just response properties but also connectivity between pyramidal cells. Furthermore, our findings provide theoretical support for recent experimental observations showing that cortical inhibition forms structural and functional subnetworks with excitatory cells, in contrast to the classical view that inhibition is a nonspecific blanket suppression of local excitation.
Affiliation(s)
- Alex Negrón
- Department of Applied Mathematics, Illinois Institute of Technology, Chicago, IL 60616
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL 60637
- Matthew P. Getz
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL 60637
- Department of Neurobiology, University of Chicago, Chicago, IL 60637
- Department of Statistics, University of Chicago, Chicago, IL 60637
- Gregory Handy
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL 60637
- Department of Neurobiology, University of Chicago, Chicago, IL 60637
- Department of Statistics, University of Chicago, Chicago, IL 60637
- Brent Doiron
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL 60637
- Department of Neurobiology, University of Chicago, Chicago, IL 60637
- Department of Statistics, University of Chicago, Chicago, IL 60637

3. Tsubo Y, Shinomoto S. Nondifferentiable activity in the brain. PNAS Nexus 2024; 3:pgae261. PMID: 38994500; PMCID: PMC11238849; DOI: 10.1093/pnasnexus/pgae261.
Abstract
Spike raster plots of numerous neurons show vertical stripes, indicating that neurons exhibit synchronous activity in the brain. We seek to determine whether these coherent dynamics are caused by smooth brainwave activity or by something else. By analyzing biological data, we find that their cross-correlograms exhibit not only slow undulation but also a cusp at the origin, in addition to possible signs of monosynaptic connectivity. Here we show that undulation emerges if neurons are subject to smooth brainwave oscillations while a cusp results from nondifferentiable fluctuations. While modern analysis methods have achieved good connectivity estimation by adapting the models to slow undulation, they still make false inferences due to the cusp. We devise a new analysis method that may solve both problems. We also demonstrate that oscillations and nondifferentiable fluctuations may emerge in simulations of large-scale neural networks.
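The cusp mechanism described in this abstract can be illustrated with a toy simulation (this is not the authors' analysis method, and all parameters are assumptions): two Poisson neurons share an Ornstein-Uhlenbeck rate fluctuation whose exp(-|lag|/tau) autocovariance is nondifferentiable at zero lag, so the shared drive imprints a peak on the cross-correlogram that is sharpest at the origin.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two Poisson neurons driven by a shared Ornstein-Uhlenbeck rate fluctuation
# (illustrative parameters). The OU autocovariance exp(-|lag|/tau) has a kink
# at zero lag, which the spike cross-correlogram inherits.
dt, T, tau = 1.0, 500_000, 20.0               # bin (ms), number of bins, OU timescale (ms)
xi = 0.05 * np.sqrt(dt) * rng.standard_normal(T)
x = np.zeros(T)
for t in range(1, T):                          # Euler-Maruyama OU integration
    x[t] = x[t - 1] * (1.0 - dt / tau) + xi[t]
rate = np.clip(0.02 * (1.0 + 5.0 * x), 0.0, None)   # shared rate (spikes/ms)
n1, n2 = rng.poisson(rate), rng.poisson(rate)

# Cross-correlogram: mean-subtracted spike-count cross-covariance by lag.
lags = np.arange(-50, 51)
a, b = n1 - n1.mean(), n2 - n2.mean()
ccg = np.array([np.mean(a[max(0, -l):T - max(0, l)] * b[max(0, l):T - max(0, -l)])
                for l in lags])

# The zero-lag bin (index 50) dominates distant lags because of the shared drive.
print(ccg[50] > np.mean(ccg[:15]))
```

Replacing the OU process with a smooth oscillation would instead produce slow undulation without the sharp central peak, which is the contrast the abstract draws.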
Affiliation(s)
- Yasuhiro Tsubo
- College of Information Science and Engineering, Ritsumeikan University, Osaka 567-8570, Japan
- Shigeru Shinomoto
- Research Organization of Open Innovation and Collaboration, Ritsumeikan University, Osaka 567-8570, Japan
- Graduate School of Biostudies, Kyoto University, Kyoto 606-8501, Japan

4. Papo D, Buldú JM. Does the brain behave like a (complex) network? I. Dynamics. Phys Life Rev 2024; 48:47-98. PMID: 38145591; DOI: 10.1016/j.plrev.2023.12.006.
Abstract
Graph theory is now becoming a standard tool in system-level neuroscience. However, endowing observed brain anatomy and dynamics with a complex network structure does not entail that the brain actually works as a network. Asking whether the brain behaves as a network means asking whether network properties count. From the viewpoint of neurophysiology and, possibly, of brain physics, the most substantial issues a network structure may be instrumental in addressing relate to the influence of network properties on brain dynamics and to whether these properties ultimately explain some aspects of brain function. Here, we address the dynamical implications of complex network structure, examining which aspects and scales of brain activity may be understood to genuinely behave as a network. To do so, we first define the meaning of networkness, and analyse some of its implications. We then examine ways in which brain anatomy and dynamics can be endowed with a network structure and discuss possible ways in which network structure may be shown to represent a genuine organisational principle of brain activity, rather than just a convenient description of its anatomy and dynamics.
Affiliation(s)
- D Papo
- Department of Neuroscience and Rehabilitation, Section of Physiology, University of Ferrara, Ferrara, Italy
- Center for Translational Neurophysiology, Fondazione Istituto Italiano di Tecnologia, Ferrara, Italy
- J M Buldú
- Complex Systems Group & G.I.S.C., Universidad Rey Juan Carlos, Madrid, Spain

5. Crosser JT, Brinkman BAW. Applications of information geometry to spiking neural network activity. Phys Rev E 2024; 109:024302. PMID: 38491696; DOI: 10.1103/physreve.109.024302.
Abstract
The space of possible behaviors that complex biological systems may exhibit is unimaginably vast, and these systems often appear to be stochastic, whether due to variable noisy environmental inputs or intrinsically generated chaos. The brain is a prominent example of a biological system with complex behaviors. The number of possible patterns of spikes emitted by a local brain circuit is combinatorially large, although the brain may not make use of all of them. Understanding which of these possible patterns are actually used by the brain, and how those sets of patterns change as properties of neural circuitry change, is a major goal in neuroscience. Recently, tools from information geometry have been used to study embeddings of probabilistic models onto a hierarchy of model manifolds that encode how model outputs change as a function of their parameters, giving a quantitative notion of "distances" between outputs. We apply this method to a network model of excitatory and inhibitory neural populations to understand how the competition between membrane and synaptic response timescales shapes the network's information geometry. The hyperbolic embedding allows us to identify the statistical parameters to which the model behavior is most sensitive, and demonstrate how the ranking of these coordinates changes with the balance of excitation and inhibition in the network.
Affiliation(s)
- Jacob T Crosser
- Department of Applied Mathematics and Statistics, Stony Brook University, Stony Brook, New York 11794, USA
- Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York 11794, USA
- Braden A W Brinkman
- Department of Applied Mathematics and Statistics, Stony Brook University, Stony Brook, New York 11794, USA
- Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York 11794, USA

6. Zhang WH, Wu S, Josić K, Doiron B. Sampling-based Bayesian inference in recurrent circuits of stochastic spiking neurons. Nat Commun 2023; 14:7074. PMID: 37925497; PMCID: PMC10625605; DOI: 10.1038/s41467-023-41743-3.
Abstract
Two facts about cortex are widely accepted: neuronal responses show large spiking variability with near Poisson statistics and cortical circuits feature abundant recurrent connections between neurons. How these spiking and circuit properties combine to support sensory representation and information processing is not well understood. We build a theoretical framework showing that these two ubiquitous features of cortex combine to produce optimal sampling-based Bayesian inference. Recurrent connections store an internal model of the external world, and Poissonian variability of spike responses drives flexible sampling from the posterior stimulus distributions obtained by combining feedforward and recurrent neuronal inputs. We illustrate how this framework for sampling-based inference can be used by cortex to represent latent multivariate stimuli organized either hierarchically or in parallel. A neural signature of such network sampling is internally generated differential correlations whose amplitude is determined by the prior stored in the circuit, which provides an experimentally testable prediction for our framework.
Affiliation(s)
- Wen-Hao Zhang
- Department of Neurobiology and Statistics, University of Chicago, Chicago, IL, USA
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL, USA
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Lyda Hill Department of Bioinformatics, UT Southwestern Medical Center, Dallas, TX, USA
- Si Wu
- School of Psychological and Cognitive Sciences, Peking University, Beijing, 100871, China
- IDG/McGovern Institute for Brain Research, Peking University, Beijing, 100871, China
- Peking-Tsinghua Center for Life Sciences, Peking University, Beijing, 100871, China
- Center of Quantitative Biology, Peking University, Beijing, 100871, China
- Krešimir Josić
- Department of Mathematics, University of Houston, Houston, TX, USA
- Department of Biology and Biochemistry, University of Houston, Houston, TX, USA
- Brent Doiron
- Department of Neurobiology and Statistics, University of Chicago, Chicago, IL, USA
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL, USA
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA

7. Skaar JEW, Haug N, Stasik AJ, Einevoll GT, Tøndel K. Metamodelling of a two-population spiking neural network. PLoS Comput Biol 2023; 19:e1011625. PMID: 38032904; PMCID: PMC10688753; DOI: 10.1371/journal.pcbi.1011625.
Abstract
In computational neuroscience, hypotheses are often formulated as bottom-up mechanistic models of the systems in question, consisting of differential equations that can be numerically integrated forward in time. Candidate models can then be validated by comparison against experimental data. The outputs of neural network models depend on neuron parameters, connectivity parameters, and other model inputs. Successful model fitting requires sufficient exploration of the model parameter space, which can be computationally demanding. Additionally, identifying degeneracy in the parameters, i.e., different combinations of parameter values that produce similar outputs, is of interest, as they define the subset of parameter values consistent with the data. In this computational study, we apply metamodels to a two-population recurrent spiking network of point neurons, the so-called Brunel network. Metamodels are data-driven approximations to more complex models with more desirable computational properties, which can be run considerably faster than the original model. Specifically, we apply and compare two different metamodelling techniques, masked autoregressive flows (MAF) and deep Gaussian process regression (DGPR), to estimate the power spectra of two different signals: the population spiking activities and the local field potential (LFP). We find that the metamodels are able to accurately model the power spectra in the asynchronous irregular regime, and that the DGPR metamodel provides a more accurate representation of the simulator compared to the MAF metamodel. Using the metamodels, we estimate the posterior probability distributions over parameters given observed simulator outputs separately for both LFP and population spiking activities. We find that these distributions correctly identify parameter combinations that give similar model outputs, and that some parameters are significantly more constrained by observing the LFP than by observing the population spiking activities.
Affiliation(s)
- Jan-Eirik W. Skaar
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Nicolai Haug
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Alexander J. Stasik
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Department of Physics, University of Oslo, Oslo, Norway
- Gaute T. Einevoll
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Department of Physics, University of Oslo, Oslo, Norway
- Kristin Tøndel
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway

8. Cimeša L, Ciric L, Ostojic S. Geometry of population activity in spiking networks with low-rank structure. PLoS Comput Biol 2023; 19:e1011315. PMID: 37549194; PMCID: PMC10461857; DOI: 10.1371/journal.pcbi.1011315.
Abstract
Recurrent network models are instrumental in investigating how behaviorally relevant computations emerge from collective neural dynamics. A recently developed class of models based on low-rank connectivity provides an analytically tractable framework for understanding how connectivity structure determines the geometry of low-dimensional dynamics and the ensuing computations. Such models, however, lack some fundamental biological constraints, and in particular represent individual neurons as abstract units that communicate through continuous firing rates rather than discrete action potentials. Here we examine how far the theoretical insights obtained from low-rank rate networks transfer to more biologically plausible networks of spiking neurons. Adding a low-rank structure on top of random excitatory-inhibitory connectivity, we systematically compare the geometry of activity in networks of integrate-and-fire neurons to rate networks with statistically equivalent low-rank connectivity. We show that the mean-field predictions of rate networks allow us to identify low-dimensional dynamics at constant population-average activity in spiking networks, as well as novel non-linear regimes of activity such as out-of-phase oscillations and slow manifolds. We finally exploit these results to directly build spiking networks that perform nonlinear computations.
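The connectivity construction this abstract describes, a structured low-rank part added to a random bulk, can be illustrated at the level of the matrix itself. The sketch below uses a plain Gaussian bulk rather than the excitatory-inhibitory matrix of the paper (a simplifying assumption, with arbitrary parameter values), and checks the standard signature of rank-one structure: an eigenvalue outlier near (n·m)/N, well outside the random bulk.

```python
import numpy as np

rng = np.random.default_rng(2)

# Rank-one structure on top of a random bulk: J = chi + m n^T / N.
# (Gaussian bulk as a simplification; values illustrative.)
N = 400
g = 0.5                                              # bulk strength (spectral radius ~ g)
chi = g * rng.standard_normal((N, N)) / np.sqrt(N)
m = np.ones(N) + 0.3 * rng.standard_normal(N)        # left connectivity vector
n = 2.0 * np.ones(N) + 0.3 * rng.standard_normal(N)  # right connectivity vector
J = chi + np.outer(m, n) / N

# The rank-one term creates an eigenvalue outlier near (n . m) / N,
# separated from the random bulk of radius ~g.
eigs = np.linalg.eigvals(J)
outlier = eigs[np.argmax(np.abs(eigs))]
print(outlier.real, np.dot(n, m) / N)
```

In low-rank network theory this outlier mode, spanned by m and n, carries the low-dimensional dynamics that the rest of the abstract compares between rate and spiking implementations.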
Affiliation(s)
- Ljubica Cimeša
- Laboratoire de Neurosciences Cognitives Computationnelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
- Lazar Ciric
- Laboratoire de Neurosciences Cognitives Computationnelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives Computationnelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France

9. Handy G, Borisyuk A. Investigating the ability of astrocytes to drive neural network synchrony. PLoS Comput Biol 2023; 19:e1011290. PMID: 37556468; PMCID: PMC10441806; DOI: 10.1371/journal.pcbi.1011290.
Abstract
Recent experimental works have implicated astrocytes as a significant cell type underlying several neuronal processes in the mammalian brain, from encoding sensory information to neurological disorders. Despite this progress, it is still unclear how astrocytes are communicating with and driving their neuronal neighbors. While previous computational modeling works have helped propose mechanisms responsible for driving these interactions, they have primarily focused on interactions at the synaptic level, with microscale models of calcium dynamics and neurotransmitter diffusion. Since it is computationally infeasible to include the intricate microscale details in a network-scale model, little computational work has been done to understand how astrocytes may be influencing spiking patterns and synchronization of large networks. We overcome this issue by first developing an "effective" astrocyte that can be easily implemented in established network frameworks. We do this by showing that the astrocyte proximity to a synapse makes synaptic transmission faster, weaker, and less reliable. Thus, our "effective" astrocytes can be incorporated by considering heterogeneous synaptic time constants, which are parametrized only by the degree of astrocytic proximity at that synapse. We then apply our framework to large networks of exponential integrate-and-fire neurons with various spatial structures. Depending on key parameters, such as the number of synapses ensheathed and the strength of this ensheathment, we show that astrocytes can push the network to a synchronous state and exhibit spatially correlated patterns.
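The "effective" astrocyte reduces, per the abstract, to synapse parameters that depend on the degree of ensheathment: faster, weaker, less reliable. A toy parametrization of that idea (the specific linear mappings and values below are our assumption, not the paper's fitted model):

```python
import numpy as np

# Toy "effective astrocyte": ensheathment degree e in [0, 1] shortens the
# synaptic time constant, shrinks the amplitude, and lowers release
# probability. Linear mappings are hypothetical, for illustration only.
def effective_synapse(e, tau0=5.0, w0=1.0, p0=0.9):
    tau = tau0 * (1.0 - 0.5 * e)   # faster decay with more ensheathment
    w = w0 * (1.0 - 0.6 * e)       # weaker peak amplitude
    p = p0 * (1.0 - 0.4 * e)       # less reliable vesicle release
    return tau, w, p

def mean_psc_charge(e):
    """Expected charge transfer of an exponential PSC kernel p * w * exp(-t/tau)."""
    tau, w, p = effective_synapse(e)
    dt = 0.01
    t = np.arange(0.0, 50.0, dt)
    return p * w * np.sum(np.exp(-t / tau)) * dt

# Ensheathed synapses transmit less total charge and decay faster.
print(mean_psc_charge(0.0), mean_psc_charge(0.8))
```

In a network model, drawing one ensheathment degree per synapse then yields the heterogeneous synaptic time constants the abstract describes, without simulating calcium or neurotransmitter microdynamics.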
Affiliation(s)
- Gregory Handy
- Departments of Neurobiology and Statistics, University of Chicago, Chicago, Illinois, United States of America
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, Illinois, United States of America
- Alla Borisyuk
- Department of Mathematics, University of Utah, Salt Lake City, Utah, United States of America

10. Pérez-Cervera A, Gutkin B, Thomas PJ, Lindner B. A universal description of stochastic oscillators. Proc Natl Acad Sci U S A 2023; 120:e2303222120. PMID: 37432992; PMCID: PMC10629544; DOI: 10.1073/pnas.2303222120.
Abstract
Many systems in physics, chemistry, and biology exhibit oscillations with a pronounced random component. Such stochastic oscillations can emerge via different mechanisms, for example, linear dynamics of a stable focus with fluctuations, limit-cycle systems perturbed by noise, or excitable systems in which random inputs lead to a train of pulses. Despite their diverse origins, the phenomenology of random oscillations can be strikingly similar. Here, we introduce a nonlinear transformation of stochastic oscillators to a complex-valued function Q*(x) that greatly simplifies and unifies the mathematical description of the oscillator's spontaneous activity, its response to an external time-dependent perturbation, and the correlation statistics of different oscillators that are weakly coupled. The function Q*(x) is the eigenfunction of the Kolmogorov backward operator with the least negative (but nonvanishing) eigenvalue λ1 = μ1 + iω1. The resulting power spectrum of the complex-valued function is exactly given by a Lorentz spectrum with peak frequency ω1 and half-width μ1; its susceptibility with respect to a weak external forcing is given by a simple one-pole filter, centered around ω1; and the cross-spectrum between two coupled oscillators can be easily expressed by a combination of the spontaneous power spectra of the uncoupled systems and their susceptibilities. Our approach makes qualitatively different stochastic oscillators comparable, provides simple characteristics for the coherence of the random oscillation, and gives a framework for the description of weakly coupled oscillators.
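The spectral statement in the abstract can be written out explicitly. Using the abstract's notation λ1 = μ1 + iω1, the power spectrum of the transformed signal Q*(x(t)) is a Lorentzian (the overall proportionality constant is left unspecified here):

```latex
S(\omega) \;\propto\; \frac{\mu_1}{\mu_1^{2} + (\omega - \omega_1)^{2}},
\qquad \lambda_1 = \mu_1 + i\,\omega_1 ,
```

with peak frequency ω1 and half-width μ1, matching the "Lorentz spectrum" described in the abstract.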
Affiliation(s)
- Alberto Pérez-Cervera
- Department of Applied Mathematics, Instituto de Matemática Interdisciplinar, Universidad Complutense de Madrid, Madrid 28040, Spain
- Boris Gutkin
- Group for Neural Theory, LNC2 INSERM U960, Département d’Etudes Cognitives, Ecole Normale Supérieure - Paris Science Letters University, Paris 75005, France
- Peter J. Thomas
- Department of Mathematics, Applied Mathematics, and Statistics, Case Western Reserve University, Cleveland, OH 44106
- Benjamin Lindner
- Bernstein Center for Computational Neuroscience Berlin, Berlin 10115, Germany
- Department of Physics, Humboldt Universität zu Berlin, Berlin D-12489, Germany

11. Negrón A, Getz MP, Handy G, Doiron B. The mechanics of correlated variability in segregated cortical excitatory subnetworks. bioRxiv [Preprint] 2023:2023.04.25.538323. PMID: 37162867; PMCID: PMC10168290; DOI: 10.1101/2023.04.25.538323.
Abstract
Understanding the genesis of shared trial-to-trial variability in neural activity within sensory cortex is critical to uncovering the biological basis of information processing in the brain. Shared variability is often a reflection of the structure of cortical connectivity since this variability likely arises, in part, from local circuit inputs. A series of experiments from segregated networks of (excitatory) pyramidal neurons in mouse primary visual cortex challenge this view. Specifically, the across-network correlations were found to be larger than predicted given the known weak cross-network connectivity. We aim to uncover the circuit mechanisms responsible for these enhanced correlations through biologically motivated cortical circuit models. Our central finding is that coupling each excitatory subpopulation with a specific inhibitory subpopulation provides the most robust network-intrinsic solution in shaping these enhanced correlations. This result argues for the existence of excitatory-inhibitory functional assemblies in early sensory areas which mirror not just response properties but also connectivity between pyramidal cells.
Affiliation(s)
- Alex Negrón
- Department of Applied Mathematics, Illinois Institute of Technology
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago
- Matthew P. Getz
- Departments of Neurobiology and Statistics, University of Chicago
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago
- Gregory Handy
- Departments of Neurobiology and Statistics, University of Chicago
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago
- Brent Doiron
- Departments of Neurobiology and Statistics, University of Chicago
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago

12. Morales GB, di Santo S, Muñoz MA. Quasiuniversal scaling in mouse-brain neuronal activity stems from edge-of-instability critical dynamics. Proc Natl Acad Sci U S A 2023; 120:e2208998120. PMID: 36827262; PMCID: PMC9992863; DOI: 10.1073/pnas.2208998120.
Abstract
The brain is in a state of perpetual reverberant neural activity, even in the absence of specific tasks or stimuli. Shedding light on the origin and functional significance of such a dynamical state is essential to understanding how the brain transmits, processes, and stores information. An inspiring, albeit controversial, conjecture proposes that some statistical characteristics of empirically observed neuronal activity can be understood by assuming that brain networks operate in a dynamical regime with features, including the emergence of scale invariance, resembling those seen typically near phase transitions. Here, we present a data-driven analysis based on simultaneous high-throughput recordings of the activity of thousands of individual neurons in various regions of the mouse brain. To analyze these data, we construct a unified theoretical framework that synergistically combines a phenomenological renormalization group approach and techniques that infer the general dynamical state of a neural population, while designing complementary tools. This strategy allows us to uncover strong signatures of scale invariance that are "quasiuniversal" across brain regions and experiments, revealing that all the analyzed areas operate, to a greater or lesser extent, near the edge of instability.
Affiliation(s)
- Guillermo B. Morales
- Departamento de Electromagnetismo y Física de la Materia, Instituto Carlos I de Física Teórica y Computacional, Universidad de Granada, Granada E-18071, Spain
- Serena di Santo
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027
- Miguel A. Muñoz
- Departamento de Electromagnetismo y Física de la Materia, Instituto Carlos I de Física Teórica y Computacional, Universidad de Granada, Granada E-18071, Spain

13. Shomali SR, Rasuli SN, Ahmadabadi MN, Shimazaki H. Uncovering hidden network architecture from spiking activities using an exact statistical input-output relation of neurons. Commun Biol 2023; 6:169. PMID: 36792689; PMCID: PMC9932086; DOI: 10.1038/s42003-023-04511-z.
Abstract
Identifying network architecture from observed neural activities is crucial in neuroscience studies. A key requirement is knowledge of the statistical input-output relation of single neurons in vivo. By utilizing an exact analytical solution of the spike-timing for leaky integrate-and-fire neurons under noisy inputs balanced near the threshold, we construct a framework that links synaptic type, strength, and spiking nonlinearity with the statistics of neuronal population activity. The framework explains structured pairwise and higher-order interactions of neurons receiving common inputs under different architectures. We compared the theoretical predictions with the activity of monkey and mouse V1 neurons and found that excitatory inputs given to pairs explained the observed sparse activity characterized by strong negative triple-wise interactions, thereby ruling out the alternative explanation by shared inhibition. Moreover, we showed that the strong interactions are a signature of excitatory rather than inhibitory inputs whenever the spontaneous rate is low. We present a guide map of neural interactions that helps researchers specify the hidden neuronal motifs underlying interactions found in empirical data.
Affiliation(s)
- Safura Rashid Shomali
- School of Cognitive Sciences, Institute for Research in Fundamental Sciences (IPM), Tehran, 19395-5746, Iran.
| | - Seyyed Nader Rasuli
- School of Physics, Institute for Research in Fundamental Sciences (IPM), Tehran, 19395-5531, Iran
- Department of Physics, University of Guilan, Rasht, 41335-1914, Iran
| | - Majid Nili Ahmadabadi
- Control and Intelligent Processing Center of Excellence, School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, 14395-515, Iran
| | - Hideaki Shimazaki
- Graduate School of Informatics, Kyoto University, Kyoto, 606-8501, Japan
- Center for Human Nature, Artificial Intelligence, and Neuroscience (CHAIN), Hokkaido University, Hokkaido, 060-0812, Japan
| |
|
14
|
Shao Y, Ostojic S. Relating local connectivity and global dynamics in recurrent excitatory-inhibitory networks. PLoS Comput Biol 2023; 19:e1010855. PMID: 36689488; PMCID: PMC9894562; DOI: 10.1371/journal.pcbi.1010855
Abstract
How the connectivity of cortical networks determines the neural dynamics and the resulting computations is one of the key questions in neuroscience. Previous works have pursued two complementary approaches to quantify the structure in connectivity. One approach starts from the perspective of biological experiments where only the local statistics of connectivity motifs between small groups of neurons are accessible. Another approach is based instead on the perspective of artificial neural networks where the global connectivity matrix is known, and in particular its low-rank structure can be used to determine the resulting low-dimensional dynamics. A direct relationship between these two approaches is however currently missing. Specifically, it remains to be clarified how local connectivity statistics and the global low-rank connectivity structure are inter-related and shape the low-dimensional activity. To bridge this gap, here we develop a method for mapping local connectivity statistics onto an approximate global low-rank structure. Our method rests on approximating the global connectivity matrix using dominant eigenvectors, which we compute using perturbation theory for random matrices. We demonstrate that multi-population networks defined from local connectivity statistics for which the central limit theorem holds can be approximated by low-rank connectivity with Gaussian-mixture statistics. We specifically apply this method to excitatory-inhibitory networks with reciprocal motifs, and show that it yields reliable predictions for both the low-dimensional dynamics, and statistics of population activity. Importantly, it analytically accounts for the activity heterogeneity of individual neurons in specific realizations of local connectivity. 
Altogether, our approach allows us to disentangle the effects of mean connectivity and reciprocal motifs on the global recurrent feedback, and provides an intuitive picture of how local connectivity shapes global network dynamics.
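The mapping the abstract describes can be illustrated with a toy calculation: approximate a connectivity matrix that combines a structured mean component with random local variability by the component along its dominant eigenvector, then compare the resulting linear steady states. A minimal sketch, assuming an illustrative rank-1 mean structure and parameter values (not the paper's actual excitatory-inhibitory networks):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
J, g = 0.8, 0.4    # structured (mean) coupling and random local variability
W = (J / n) * np.ones((n, n)) + rng.normal(0.0, g / np.sqrt(n), (n, n))

# Rank-1 approximation of W built from its dominant eigenvector
eigval, eigvec = np.linalg.eig(W)
i = int(np.argmax(np.abs(eigval)))
left = np.linalg.inv(eigvec)[i, :]
W_lr = np.real(eigval[i] * np.outer(eigvec[:, i], left))

# Compare the linear steady states r = (I - W)^{-1} h under the same input
h = rng.normal(size=n)
r_full = np.linalg.solve(np.eye(n) - W, h)
r_lr = np.linalg.solve(np.eye(n) - W_lr, h)
corr = float(np.corrcoef(r_full, r_lr)[0, 1])
```

Because the dominant eigenvalue here comes from the structured mean part, the rank-1 network reproduces the strongly amplified mode of the full network, so the two steady states are highly correlated even though the random bulk is discarded.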
Affiliation(s)
- Yuxiu Shao
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure—PSL Research University, Paris, France
| | - Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure—PSL Research University, Paris, France
| |
|
15
|
Miehl C, Onasch S, Festa D, Gjorgjieva J. Formation and computational implications of assemblies in neural circuits. J Physiol 2022. PMID: 36068723; DOI: 10.1113/jp282750
Abstract
In the brain, patterns of neural activity represent sensory information and store it in non-random synaptic connectivity. A prominent theoretical hypothesis states that assemblies, groups of neurons that are strongly connected to each other, are the key computational units underlying perception and memory formation. Compatible with these hypothesised assemblies, experiments have revealed groups of neurons that display synchronous activity, either spontaneously or upon stimulus presentation, and exhibit behavioural relevance. While it remains unclear how assemblies form in the brain, theoretical work has contributed greatly to the understanding of the various interacting mechanisms in this process. Here, we review the recent theoretical literature on assembly formation by categorising the involved mechanisms into four components: synaptic plasticity, symmetry breaking, competition and stability. We highlight different approaches and assumptions behind assembly formation and discuss recent ideas of assemblies as the key computational unit in the brain. Abstract figure legend: Assemblies are groups of strongly connected neurons formed by the interaction of multiple mechanisms and with vast computational implications. Four interacting components are thought to drive assembly formation: synaptic plasticity, symmetry breaking, competition and stability.
Affiliation(s)
- Christoph Miehl
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438, Frankfurt, Germany.,School of Life Sciences, Technical University of Munich, 85354, Freising, Germany
| | - Sebastian Onasch
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438, Frankfurt, Germany.,School of Life Sciences, Technical University of Munich, 85354, Freising, Germany
| | - Dylan Festa
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438, Frankfurt, Germany.,School of Life Sciences, Technical University of Munich, 85354, Freising, Germany
| | - Julijana Gjorgjieva
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438, Frankfurt, Germany.,School of Life Sciences, Technical University of Munich, 85354, Freising, Germany
| |
|
16
|
Hu Y, Sompolinsky H. The spectrum of covariance matrices of randomly connected recurrent neuronal networks with linear dynamics. PLoS Comput Biol 2022; 18:e1010327. PMID: 35862445; PMCID: PMC9345493; DOI: 10.1371/journal.pcbi.1010327
Abstract
A key question in theoretical neuroscience is the relation between the connectivity structure and the collective dynamics of a network of neurons. Here we study the connectivity-dynamics relation as reflected in the distribution of eigenvalues of the covariance matrix of the dynamic fluctuations of the neuronal activities, which is closely related to the network dynamics’ Principal Component Analysis (PCA) and the associated effective dimensionality. We consider the spontaneous fluctuations around a steady state in a randomly connected recurrent network of stochastic neurons. An exact analytical expression for the covariance eigenvalue distribution in the large-network limit can be obtained using results from random matrices. The distribution has a finitely supported smooth bulk spectrum and exhibits an approximate power-law tail for coupling matrices near the critical edge. We generalize the results to include second-order connectivity motifs and discuss extensions to excitatory-inhibitory networks. The theoretical results are compared with those from finite-size networks and the effects of temporal and spatial sampling are studied. Preliminary application to whole-brain imaging data is presented. Using simple connectivity models, our work provides theoretical predictions for the covariance spectrum, a fundamental property of recurrent neuronal dynamics, that can be compared with experimental data. Here we study the distribution of eigenvalues, or spectrum, of the neuron-to-neuron covariance matrix in recurrently connected neuronal networks. The covariance spectrum is an important global feature of neuron population dynamics that requires simultaneous recordings of neurons. The spectrum is essential to the widely used Principal Component Analysis (PCA) and generalizes the dimensionality measure of population dynamics. 
We use a simple model to emulate the complex connections between neurons, where all pairs of neurons interact linearly at a strength specified randomly and independently. We derive a closed-form expression of the covariance spectrum, revealing an interesting long tail of large eigenvalues following a power law as the connection strength increases. To incorporate connectivity features important to biological neural circuits, we generalize the result to networks with an additional low-rank connectivity component that could come from learning and networks consisting of sparsely connected excitatory and inhibitory neurons. To facilitate comparing the theoretical results to experimental data, we derive the precise modifications needed to account for the effect of limited time samples and having unobserved neurons. Preliminary applications to large-scale calcium imaging data suggest our model can well capture the high dimensional population activity of neurons.
Affiliation(s)
- Yu Hu
- Department of Mathematics and Division of Life Science, The Hong Kong University of Science and Technology, Hong Kong SAR, China
| | - Haim Sompolinsky
- Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Center for Brain Science, Harvard University, Cambridge, Massachusetts, United States of America
| |
|
17
|
Layer M, Senk J, Essink S, van Meegen A, Bos H, Helias M. NNMT: Mean-Field Based Analysis Tools for Neuronal Network Models. Front Neuroinform 2022; 16:835657. PMID: 35712677; PMCID: PMC9196133; DOI: 10.3389/fninf.2022.835657
Abstract
Mean-field theory of neuronal networks has led to numerous advances in our analytical and intuitive understanding of their dynamics during the past decades. In order to make mean-field based analysis tools more accessible, we implemented an extensible, easy-to-use open-source Python toolbox that collects a variety of mean-field methods for the leaky integrate-and-fire neuron model. The Neuronal Network Mean-field Toolbox (NNMT) in its current state allows for estimating properties of large neuronal networks, such as firing rates, power spectra, and dynamical stability in mean-field and linear response approximation, without running simulations. In this article, we describe how the toolbox is implemented, show how it is used to reproduce results of previous studies, and discuss different use-cases, such as parameter space explorations, or mapping different network models. Although the initial version of the toolbox focuses on methods for leaky integrate-and-fire neurons, its structure is designed to be open and extensible. It aims to provide a platform for collecting analytical methods for neuronal network model analysis, such that the neuroscientific community can take maximal advantage of them.
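To give a flavor of the kind of estimate such mean-field tools provide, the sketch below evaluates the stationary firing rate of a leaky integrate-and-fire neuron from the mean and noise strength of its input, using one common parameterization of the Siegert formula. It deliberately does not use NNMT's actual API, and all parameter values are illustrative:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

def lif_rate(mu, sigma, tau_m=0.02, tau_ref=0.002, v_reset=10.0, v_th=20.0):
    """Stationary LIF firing rate (Hz) for mean input mu and noise sigma (mV)."""
    integrand = lambda s: np.exp(s**2) * (1.0 + erf(s))
    lower = (v_reset - mu) / sigma      # rescaled reset potential
    upper = (v_th - mu) / sigma         # rescaled threshold
    integral, _ = quad(integrand, lower, upper)
    return 1.0 / (tau_ref + tau_m * np.sqrt(np.pi) * integral)

rate = lif_rate(mu=18.0, sigma=5.0)     # rate at an example operating point
```

In a full mean-field analysis, formulas like this are iterated self-consistently, since the input statistics mu and sigma themselves depend on the population rates.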
Affiliation(s)
- Moritz Layer
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- RWTH Aachen University, Aachen, Germany
| | - Johanna Senk
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
| | - Simon Essink
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- RWTH Aachen University, Aachen, Germany
| | - Alexander van Meegen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Institute of Zoology, Faculty of Mathematics and Natural Sciences, University of Cologne, Cologne, Germany
| | - Hannah Bos
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
| | - Moritz Helias
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
| |
|
18
|
Zheng C, Pikovsky A. Stochastic bursting in networks of excitable units with delayed coupling. Biol Cybern 2022; 116:121-128. PMID: 34181074; PMCID: PMC9068677; DOI: 10.1007/s00422-021-00883-9
Abstract
We investigate the phenomenon of stochastic bursting in a noisy excitable unit with multiple weak delay feedbacks, by virtue of a directed tree lattice model. We find statistical properties of the appearing sequence of spikes and expressions for the power spectral density. This simple model is extended to a network of three units with delayed coupling of a star type. We find the power spectral density of each unit and the cross-spectral density between any two units. The basic assumptions behind the analytical approach are the separation of timescales, allowing for a description of the spike train as a point process, and weakness of coupling, allowing for a representation of the action of overlapped spikes via the sum of the one-spike excitation probabilities.
Affiliation(s)
- Chunming Zheng
- Institute for Physics and Astronomy, University of Potsdam, Karl-Liebknecht-Strasse 24/25, 14476, Potsdam-Golm, Germany
| | - Arkady Pikovsky
- Institute for Physics and Astronomy, University of Potsdam, Karl-Liebknecht-Strasse 24/25, 14476, Potsdam-Golm, Germany.
- Department of Control Theory, Nizhny Novgorod State University, Gagarin Avenue 23, Nizhny Novgorod, Russia, 606950.
| |
|
19
|
Knoll G, Lindner B. Information transmission in recurrent networks: Consequences of network noise for synchronous and asynchronous signal encoding. Phys Rev E 2022; 105:044411. PMID: 35590546; DOI: 10.1103/physreve.105.044411
Abstract
Information about natural time-dependent stimuli encoded by the sensory periphery or communication between cortical networks may span a large frequency range or be localized to a smaller frequency band. Biological systems have been shown to multiplex such disparate broadband and narrow-band signals and then discriminate them in later populations by employing either an integration (low-pass) or coincidence detection (bandpass) encoding strategy. Analytical expressions have been developed for both encoding methods in feedforward populations of uncoupled neurons and confirm that the integration of a population's output low-pass filters the information, whereas synchronous output encodes less information overall and retains signal information in a selected frequency band. The present study extends the theory to recurrent networks and shows that recurrence may sharpen the synchronous bandpass filter. The frequency of the pass band is significantly influenced by the synaptic strengths, especially for inhibition-dominated networks. Synchronous information transfer is also increased when network models take into account heterogeneity that arises from the stochastic distribution of the synaptic weights.
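Information measures of this kind are typically built from the spectral coherence between stimulus and response, via the classic Gaussian-channel lower bound on the mutual information rate, R >= -∫ log2(1 - C(f)) df. A toy sketch for a broadband Gaussian signal corrupted by additive noise (the signal model and parameters are illustrative, not the paper's recurrent network):

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(1)
fs, n = 1000.0, 2**17
stim = rng.normal(size=n)                 # broadband Gaussian stimulus
resp = stim + 0.5 * rng.normal(size=n)    # noisy encoding of the stimulus
# Welch estimate of the stimulus-response coherence C(f)
f, C = coherence(stim, resp, fs=fs, nperseg=1024)
# Lower bound on the mutual information rate (bits per second)
info_rate = -np.sum(np.log2(1.0 - C)) * (f[1] - f[0])
```

A low-pass (integration) readout keeps the coherence broad, whereas a coincidence-detection readout concentrates it in a pass band; the same integral then localizes the transmitted information in frequency.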
Affiliation(s)
- Gregory Knoll
- Bernstein Center for Computational Neuroscience Berlin, Philippstr. 13, Haus 2, 10115 Berlin, Germany and Physics Department of Humboldt University Berlin, Newtonstr. 15, 12489 Berlin, Germany
| | - Benjamin Lindner
- Bernstein Center for Computational Neuroscience Berlin, Philippstr. 13, Haus 2, 10115 Berlin, Germany and Physics Department of Humboldt University Berlin, Newtonstr. 15, 12489 Berlin, Germany
| |
|
20
|
Dahmen D, Layer M, Deutz L, Dąbrowska PA, Voges N, von Papen M, Brochier T, Riehle A, Diesmann M, Grün S, Helias M. Global organization of neuronal activity only requires unstructured local connectivity. eLife 2022; 11:e68422. PMID: 35049496; PMCID: PMC8776256; DOI: 10.7554/elife.68422
Abstract
Modern electrophysiological recordings simultaneously capture single-unit spiking activities of hundreds of neurons spread across large cortical distances. Yet, this parallel activity is often confined to relatively low-dimensional manifolds. This implies strong coordination even among neurons that are most likely not directly connected. Here, we combine in vivo recordings with network models and theory to characterize the nature of mesoscopic coordination patterns in macaque motor cortex and to expose their origin: we find that heterogeneity in local connectivity supports network states with complex long-range cooperation between neurons that arises from multi-synaptic, short-range connections. Our theory explains the experimentally observed spatial organization of covariances in resting-state recordings as well as the behaviorally related modulation of covariance patterns during a reach-to-grasp task. The ubiquity of heterogeneity in local cortical circuits suggests that the brain uses the described mechanism to flexibly adapt neuronal coordination to momentary demands.
Affiliation(s)
- David Dahmen
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
| | - Moritz Layer
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- RWTH Aachen University, Aachen, Germany
| | - Lukas Deutz
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- School of Computing, University of Leeds, Leeds, United Kingdom
| | - Paulina Anna Dąbrowska
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- RWTH Aachen University, Aachen, Germany
| | - Nicole Voges
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- Institut de Neurosciences de la Timone, CNRS - Aix-Marseille University, Marseille, France
| | - Michael von Papen
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
| | - Thomas Brochier
- Institut de Neurosciences de la Timone, CNRS - Aix-Marseille University, Marseille, France
| | - Alexa Riehle
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- Institut de Neurosciences de la Timone, CNRS - Aix-Marseille University, Marseille, France
| | - Markus Diesmann
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, School of Medicine, RWTH Aachen University, Aachen, Germany
| | - Sonja Grün
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany
| | - Moritz Helias
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
| |
|
21
|
Kullmann R, Knoll G, Bernardi D, Lindner B. Critical current for giant Fano factor in neural models with bistable firing dynamics and implications for signal transmission. Phys Rev E 2022; 105:014416. PMID: 35193262; DOI: 10.1103/physreve.105.014416
Abstract
Bistability in the firing rate is a prominent feature of different types of neurons as well as of neural networks. We show that for a constant input below a critical value, such bistability can lead to giant spike-count diffusion. We study the transmission of a periodic signal and demonstrate that close to the critical bias current, the signal-to-noise ratio undergoes a sharp increase, an effect that can be traced back to the giant diffusion and large Fano factor.
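The link between slow bistable switching and a large Fano factor can be seen in a toy doubly stochastic model: a Poisson-like spike train whose rate flips at random between two states. This is only a caricature of the bistable neuron models studied in the paper, and the rates and switching probability below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
rates = np.array([5.0, 50.0])   # firing rates (Hz) of the two metastable states
p_switch = 0.001                # switching probability per 1 ms time step
T = 200_000                     # 200 s of simulated time at dt = 1 ms
dt = 1e-3

state = np.zeros(T, dtype=int)
for t in range(1, T):           # slow two-state (telegraph) rate dynamics
    state[t] = state[t - 1] ^ int(rng.random() < p_switch)
spikes = rng.random(T) < rates[state] * dt      # Bernoulli approximation of Poisson firing
counts = spikes.reshape(-1, 1000).sum(axis=1)   # spike counts in 1 s windows
fano = counts.var() / counts.mean()             # far above 1: rate switching adds count variance
```

For a plain Poisson process the Fano factor would be close to 1; the slow rate switching inflates the spike-count variance, which is the qualitative effect behind the giant diffusion near the critical current.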
Affiliation(s)
- Richard Kullmann
- Bernstein Center for Computational Neuroscience Berlin, Philippstrasse 13, Haus 2, 10115 Berlin, Germany
- Physics Department of Humboldt University Berlin, Newtonstrasse 15, 12489 Berlin, Germany
| | - Gregory Knoll
- Bernstein Center for Computational Neuroscience Berlin, Philippstrasse 13, Haus 2, 10115 Berlin, Germany
- Physics Department of Humboldt University Berlin, Newtonstrasse 15, 12489 Berlin, Germany
| | - Davide Bernardi
- Center for Translational Neurophysiology of Speech and Communication, Fondazione Istituto Italiano di Tecnologia, via Fossato di Mortara 19, 44121 Ferrara, Italy
| | - Benjamin Lindner
- Bernstein Center for Computational Neuroscience Berlin, Philippstrasse 13, Haus 2, 10115 Berlin, Germany
- Physics Department of Humboldt University Berlin, Newtonstrasse 15, 12489 Berlin, Germany
| |
|
22
|
Azeredo da Silveira R, Rieke F. The Geometry of Information Coding in Correlated Neural Populations. Annu Rev Neurosci 2021; 44:403-424. PMID: 33863252; DOI: 10.1146/annurev-neuro-120320-082744
Abstract
Neurons in the brain represent information in their collective activity. The fidelity of this neural population code depends on whether and how variability in the response of one neuron is shared with other neurons. Two decades of studies have investigated the influence of these noise correlations on the properties of neural coding. We provide an overview of the theoretical developments on the topic. Using simple, qualitative, and general arguments, we discuss, categorize, and relate the various published results. We emphasize the relevance of the fine structure of noise correlation, and we present a new approach to the issue. Throughout this review, we emphasize a geometrical picture of how noise correlations impact the neural code.
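A standard way to make this geometric picture concrete is the linear Fisher information, I = f'ᵀ Σ⁻¹ f', which shows how noise correlations aligned with the signal direction ("differential correlations") limit coding while other correlations can be harmless. A small sketch with hypothetical cosine tuning curves (all parameters are illustrative):

```python
import numpy as np

n, sigma2, eps = 50, 1.0, 0.05
theta = 0.0                                   # stimulus value
pref = np.linspace(-np.pi, np.pi, n, endpoint=False)
f_prime = -np.sin(theta - pref)               # slopes of cosine tuning curves at theta

def fisher(fp, Sigma):
    # linear Fisher information I = f'^T Sigma^{-1} f'
    return float(fp @ np.linalg.solve(Sigma, fp))

Sigma_ind = sigma2 * np.eye(n)                            # independent noise
Sigma_dif = Sigma_ind + eps * np.outer(f_prime, f_prime)  # differential correlations
I_ind = fisher(f_prime, Sigma_ind)   # grows linearly with population size
I_dif = fisher(f_prime, Sigma_dif)   # saturates near 1/eps for large n
```

By the Sherman-Morrison identity, I_dif = I_ind / (1 + eps * I_ind): noise injected exactly along the signal direction caps the information no matter how many neurons are added, which is the geometric point the review emphasizes.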
Affiliation(s)
| | - Fred Rieke
- Department of Physics, Ecole Normale Supérieure, 75005 Paris, France
| |
|
23
|
Bernardi D, Doron G, Brecht M, Lindner B. A network model of the barrel cortex combined with a differentiator detector reproduces features of the behavioral response to single-neuron stimulation. PLoS Comput Biol 2021; 17:e1007831. PMID: 33556070; PMCID: PMC7895413; DOI: 10.1371/journal.pcbi.1007831
Abstract
The stimulation of a single neuron in the rat somatosensory cortex can elicit a behavioral response. The probability of a behavioral response does not depend appreciably on the duration or intensity of a constant stimulation, whereas the response probability increases significantly upon injection of an irregular current. Biological mechanisms that can potentially suppress a constant input signal are present in the dynamics of both neurons and synapses and seem ideal candidates to explain these experimental findings. Here, we study a large network of integrate-and-fire neurons with several salient features of neuronal populations in the rat barrel cortex. The model includes cellular spike-frequency adaptation, experimentally constrained numbers and types of chemical synapses endowed with short-term plasticity, and gap junctions. Numerical simulations of this model indicate that cellular and synaptic adaptation mechanisms alone may not suffice to account for the experimental results if the local network activity is read out by an integrator. However, a circuit that approximates a differentiator can detect the single-cell stimulation with a reliability that barely depends on the length or intensity of the stimulus, but that increases when an irregular signal is used. This finding is in accordance with the experimental results obtained for the stimulation of a regular-spiking excitatory cell. It is widely assumed that only a large group of neurons can encode a stimulus or control behavior. This tenet of neuroscience has been challenged by experiments in which stimulating a single cortical neuron has had a measurable effect on an animal’s behavior. Recently, theoretical studies have explored how a single-neuron stimulation could be detected in a large recurrent network. However, these studies missed essential biological mechanisms of cortical networks and are unable to explain more recent experiments in the barrel cortex.
Here, to describe the stimulated brain area, we propose and study a network model endowed with many important biological features of the barrel cortex. Importantly, we also investigate different readout mechanisms, i.e. ways in which the stimulation effects can propagate to other brain areas. We show that a readout network which tracks rapid variations in the local network activity is in agreement with the experiments. Our model demonstrates a possible mechanism for how the stimulation of a single neuron translates into a signal at the population level, which is taken as a proxy of the animal’s response. Our results illustrate the power of spiking neural networks to properly describe the effects of a single neuron’s activity.
Affiliation(s)
- Davide Bernardi
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Institut für Physik, Humboldt-Universität zu Berlin, Berlin, Germany
- Center for Translational Neurophysiology of Speech and Communication, Fondazione Istituto Italiano di Tecnologia, Ferrara, Italy
| | - Guy Doron
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
| | - Michael Brecht
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
| | - Benjamin Lindner
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Institut für Physik, Humboldt-Universität zu Berlin, Berlin, Germany
| |
|
24
|
Linear Response of General Observables in Spiking Neuronal Network Models. Entropy 2021; 23:155. PMID: 33514033; PMCID: PMC7911777; DOI: 10.3390/e23020155
Abstract
We establish a general linear response relation for spiking neuronal networks, based on chains with unbounded memory. This relation allows us to predict the influence of a weak-amplitude, time-dependent external stimulus on spatio-temporal spike correlations from the spontaneous statistics (without stimulus), in a general context where the memory in spike dynamics can extend arbitrarily far into the past. Using this approach, we show how the linear response is explicitly related to the collective effect of the stimuli, intrinsic neuronal dynamics, and network connectivity on spike train statistics. We illustrate our results with numerical simulations of a discrete-time integrate-and-fire model.
|
25
|
Kim J, Augustine GJ. Molecular Layer Interneurons: Key Elements of Cerebellar Network Computation and Behavior. Neuroscience 2020; 462:22-35. PMID: 33075461; DOI: 10.1016/j.neuroscience.2020.10.008
Abstract
Molecular layer interneurons (MLIs) play an important role in cerebellar information processing by controlling Purkinje cell (PC) activity via inhibitory synaptic transmission. A local MLI network, constructed from both chemical and electrical synapses, is organized into spatially structured clusters that amplify feedforward and lateral inhibition to shape the temporal and spatial patterns of PC activity. Several recent in vivo studies indicate that such MLI circuits contribute not only to sensorimotor information processing, but also to precise motor coordination and cognitive processes. Here, we review current understanding of the organization of MLI circuits and their roles in the function of the mammalian cerebellum.
Affiliation(s)
- Jinsook Kim
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore 308238, Singapore
| | - George J Augustine
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore 308238, Singapore
| |
|
26
|
Abstract
Brains are composed of networks of neurons that are highly interconnected. A central question in neuroscience is how such neuronal networks operate in tandem to make a functioning brain. To understand this, we need to study how neurons interact with each other in action, such as when viewing a visual scene or performing a motor task. One way to approach this question is by perturbing the activity of functioning neurons and measuring the resulting influence on other neurons. By using computational models of neuronal networks, we studied how this influence in visual networks depends on connectivity. Our results help to interpret contradictory results from previous experimental studies and explain how different connectivity patterns can enhance information processing during natural vision. To unravel the functional properties of the brain, we need to untangle how neurons interact with each other and coordinate in large-scale recurrent networks. One way to address this question is to measure the functional influence of individual neurons on each other by perturbing them in vivo. Application of such single-neuron perturbations in mouse visual cortex has recently revealed feature-specific suppression between excitatory neurons, despite the presence of highly specific excitatory connectivity, which was deemed to underlie feature-specific amplification. Here, we studied which connectivity profiles are consistent with these seemingly contradictory observations, by modeling the effect of single-neuron perturbations in large-scale neuronal networks. Our numerical simulations and mathematical analysis revealed that, contrary to the prima facie assumption, neither inhibition dominance nor broad inhibition alone were sufficient to explain the experimental findings; instead, strong and functionally specific excitatory–inhibitory connectivity was necessary, consistent with recent findings in the primary visual cortex of rodents. 
Such networks had a higher capacity to encode and decode natural images, and this was accompanied by the emergence of response gain nonlinearities at the population level. Our study provides a general computational framework to investigate how single-neuron perturbations are linked to cortical connectivity and sensory coding and paves the road to map the perturbome of neuronal networks in future studies.
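The perturbation logic above can be made concrete in a linearized rate network, where the steady-state effect of perturbing one neuron is read off from a column of (I - W)^-1. A minimal numpy sketch under that linear assumption (the random W here is a toy stand-in for the connectivity profiles studied in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100
# Toy random connectivity, scaled so the linear dynamics are stable
W = rng.normal(0.0, 0.5 / np.sqrt(N), size=(N, N))
np.fill_diagonal(W, 0.0)

# Steady state of dr/dt = -r + W r + h is r = (I - W)^{-1} h
L = np.linalg.inv(np.eye(N) - W)
h = rng.normal(1.0, 0.1, size=N)
r0 = L @ h

# Add input delta to neuron j and measure the influence on every neuron
j, delta = 0, 1.0
h_pert = h.copy()
h_pert[j] += delta
influence = (L @ h_pert - r0) / delta  # equals column j of (I - W)^{-1}
assert np.allclose(influence, L[:, j])
```

In this linear setting the influence map is independent of the baseline input; whether the net influence on similarly tuned neurons comes out amplifying or suppressive then depends on the structure of W, which is the question the paper addresses with spiking networks.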
27
Barbosa J, Stein H, Martinez RL, Galan-Gadea A, Li S, Dalmau J, Adam KCS, Valls-Solé J, Constantinidis C, Compte A. Interplay between persistent activity and activity-silent dynamics in the prefrontal cortex underlies serial biases in working memory. Nat Neurosci 2020; 23:1016-1024. [PMID: 32572236 PMCID: PMC7392810 DOI: 10.1038/s41593-020-0644-4] [Citation(s) in RCA: 106] [Impact Index Per Article: 26.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2019] [Accepted: 04/21/2020] [Indexed: 11/16/2022]
Abstract
Persistent neuronal spiking has long been considered the mechanism underlying working memory, but recent proposals argue for alternative 'activity-silent' substrates. Using monkey and human electrophysiology data, we show here that attractor dynamics that control neural spiking during mnemonic periods interact with activity-silent mechanisms in the prefrontal cortex (PFC). This interaction allows memory reactivations, which enhance serial biases in spatial working memory. Stimulus information was not decodable between trials, but remained present in activity-silent traces inferred from spiking synchrony in the PFC. Just before the new stimulus, this latent trace was reignited into activity that recapitulated the previous stimulus representation. Importantly, the reactivation strength correlated with the strength of serial biases in both monkeys and humans, as predicted by a computational model that integrates activity-based and activity-silent mechanisms. Finally, single-pulse transcranial magnetic stimulation applied to the human PFC between successive trials enhanced serial biases, thus demonstrating the causal role of prefrontal reactivations in determining working-memory behavior.
Affiliation(s)
- Joao Barbosa: Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain
- Heike Stein: Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain
- Rebecca L Martinez: Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain
- Adrià Galan-Gadea: Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain
- Sihai Li: Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, NC, USA
- Josep Dalmau: Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain; Service of Neurology, Hospital Clínic, Barcelona, Spain; University of Barcelona, Barcelona, Spain; ICREA, Barcelona, Spain; Department of Neurology, University of Pennsylvania, Philadelphia, PA, USA
- Kirsten C S Adam: Department of Psychology and Institute for Neural Computation, University of California San Diego, La Jolla, CA, USA
- Josep Valls-Solé: Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain
- Christos Constantinidis: Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, NC, USA
- Albert Compte: Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain

28
Stapmanns J, Kühn T, Dahmen D, Luu T, Honerkamp C, Helias M. Self-consistent formulations for stochastic nonlinear neuronal dynamics. Phys Rev E 2020; 101:042124. [PMID: 32422832 DOI: 10.1103/physreve.101.042124] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2019] [Accepted: 12/18/2019] [Indexed: 01/28/2023]
Abstract
Neural dynamics is often investigated with tools from bifurcation theory. However, many neuron models are stochastic, mimicking fluctuations in the input from unknown parts of the brain or the spiking nature of signals. Noise changes the dynamics with respect to the deterministic model; in particular classical bifurcation theory cannot be applied. We formulate the stochastic neuron dynamics in the Martin-Siggia-Rose de Dominicis-Janssen (MSRDJ) formalism and present the fluctuation expansion of the effective action and the functional renormalization group (fRG) as two systematic ways to incorporate corrections to the mean dynamics and time-dependent statistics due to fluctuations in the presence of nonlinear neuronal gain. To formulate self-consistency equations, we derive a fundamental link between the effective action in the Onsager-Machlup (OM) formalism, which allows the study of phase transitions, and the MSRDJ effective action, which is computationally advantageous. These results in particular allow the derivation of an OM effective action for systems with non-Gaussian noise. This approach naturally leads to effective deterministic equations for the first moment of the stochastic system; they explain how nonlinearities and noise cooperate to produce memory effects. Moreover, the MSRDJ formulation yields an effective linear system that has identical power spectra and linear response. Starting from the better known loopwise approximation, we then discuss the use of the fRG as a method to obtain self-consistency beyond the mean. We present a new efficient truncation scheme for the hierarchy of flow equations for the vertex functions by adapting the Blaizot, Méndez, and Wschebor approximation from the derivative expansion to the vertex expansion. The methods are presented by means of the simplest possible example of a stochastic differential equation that has generic features of neuronal dynamics.
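For orientation, the MSRDJ action the abstract refers to can be written down for the simplest case of a one-dimensional Langevin equation with Gaussian white noise (one common sign convention; the paper's effective-action and fRG machinery builds on top of this):

```latex
% Langevin dynamics: \dot{x}(t) = f(x(t)) + \xi(t),
% with \langle \xi(t)\,\xi(t') \rangle = D\,\delta(t - t')
Z[j, \tilde{\jmath}] = \int \mathcal{D}x \, \mathcal{D}\tilde{x} \;
  \exp\!\Big( S[x, \tilde{x}]
  + \int \! dt \, \big( j(t)\,x(t) + \tilde{\jmath}(t)\,\tilde{x}(t) \big) \Big),
\qquad
S[x, \tilde{x}] = \int \! dt \, \Big[ \tilde{x}(t)\big( \dot{x}(t) - f(x(t)) \big)
  - \tfrac{D}{2}\,\tilde{x}(t)^2 \Big].
```

Integrating out the response field $\tilde{x}$, which appears only quadratically, yields the Onsager-Machlup action $S_\mathrm{OM}[x] = -\frac{1}{2D}\int dt\,\big(\dot{x} - f(x)\big)^2$; this is the Gaussian-noise case of the OM-MSRDJ link that the paper generalizes to non-Gaussian noise.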
Affiliation(s)
- Jonas Stapmanns: Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany; Institute for Theoretical Solid State Physics, RWTH Aachen University, 52074 Aachen, Germany
- Tobias Kühn: Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany; Institute for Theoretical Solid State Physics, RWTH Aachen University, 52074 Aachen, Germany
- David Dahmen: Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Thomas Luu: Institut für Kernphysik (IKP-3), Institute for Advanced Simulation (IAS-4) and Jülich Center for Hadron Physics, Jülich Research Centre, Jülich, Germany
- Carsten Honerkamp: Institute for Theoretical Solid State Physics, RWTH Aachen University, 52074 Aachen, Germany; JARA-FIT, Jülich Aachen Research Alliance-Fundamentals of Future Information Technology, Germany
- Moritz Helias: Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany; Institute for Theoretical Solid State Physics, RWTH Aachen University, 52074 Aachen, Germany

29
Fu Y, Kang Y, Chen G. Stochastic Resonance Based Visual Perception Using Spiking Neural Networks. Front Comput Neurosci 2020; 14:24. [PMID: 32499690 PMCID: PMC7242793 DOI: 10.3389/fncom.2020.00024] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2019] [Accepted: 03/17/2020] [Indexed: 01/20/2023] Open
Abstract
Our aim is to propose an efficient algorithm for enhancing the contrast of dark images based on the principle of stochastic resonance in a global feedback spiking network of integrate-and-fire neurons. By linear approximation and direct simulation, we disclose the dependence of the peak signal-to-noise ratio on the spiking threshold and the feedback coupling strength. Based on this theoretical analysis, we then develop a dynamical system algorithm for enhancing dark images. In the new algorithm, an explicit formula is given for how to choose a suitable spiking threshold for the images to be enhanced, and a more effective quantifying index, the variance of the image, is used to replace the commonly used measure. Numerical tests verify the efficiency of the new algorithm. The investigation provides a good example of the application of stochastic resonance, and it might be useful for explaining the biophysical mechanism behind visual perception.
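The threshold-plus-noise mechanism at the heart of the algorithm can be sketched without any network at all: a subthreshold signal is transmitted best at an intermediate noise level. This toy illustration uses the correlation between signal and thresholded output as the quality measure, not the paper's peak signal-to-noise ratio or its feedback spiking network:

```python
import numpy as np

rng = np.random.default_rng(1)

def signal_output_corr(noise_sd, n=50000, threshold=1.0):
    """Correlation between a weak subthreshold sinusoid and the thresholded
    (spike-like) output: zero without noise, maximal at moderate noise,
    degraded again when noise swamps the signal."""
    t = np.arange(n)
    signal = 0.5 * np.sin(2 * np.pi * t / 100.0)  # always below threshold
    spikes = (signal + rng.normal(0.0, noise_sd, size=n) > threshold).astype(float)
    if spikes.std() == 0.0:
        return 0.0  # no threshold crossings at all
    return float(np.corrcoef(signal, spikes)[0, 1])

# Stochastic resonance: the intermediate noise level transmits the signal best
corrs = [signal_output_corr(sd) for sd in (0.0, 0.6, 5.0)]
```

The same non-monotonic dependence on noise strength is what the paper exploits when tuning the spiking threshold for a given image.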
Affiliation(s)
- Yuxuan Fu: Department of Applied Mathematics, School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, China
- Yanmei Kang: Department of Applied Mathematics, School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, China
- Guanrong Chen: Department of Electrical Engineering, City University of Hong Kong, Hong Kong, China

30
Montangie L, Miehl C, Gjorgjieva J. Autonomous emergence of connectivity assemblies via spike triplet interactions. PLoS Comput Biol 2020; 16:e1007835. [PMID: 32384081 PMCID: PMC7239496 DOI: 10.1371/journal.pcbi.1007835] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2019] [Revised: 05/20/2020] [Accepted: 03/31/2020] [Indexed: 01/08/2023] Open
Abstract
Non-random connectivity can emerge without structured external input driven by activity-dependent mechanisms of synaptic plasticity based on precise spiking patterns. Here we analyze the emergence of global structures in recurrent networks based on a triplet model of spike timing dependent plasticity (STDP), which depends on the interactions of three precisely-timed spikes, and can describe plasticity experiments with varying spike frequency better than the classical pair-based STDP rule. We derive synaptic changes arising from correlations up to third-order and describe them as the sum of structural motifs, which determine how any spike in the network influences a given synaptic connection through possible connectivity paths. This motif expansion framework reveals novel structural motifs under the triplet STDP rule, which support the formation of bidirectional connections and ultimately the spontaneous emergence of global network structure in the form of self-connected groups of neurons, or assemblies. We propose that under triplet STDP assembly structure can emerge without the need for externally patterned inputs or assuming a symmetric pair-based STDP rule common in previous studies. The emergence of non-random network structure under triplet STDP occurs through internally-generated higher-order correlations, which are ubiquitous in natural stimuli and neuronal spiking activity, and important for coding. We further demonstrate how neuromodulatory mechanisms that modulate the shape of the triplet STDP rule or the synaptic transmission function differentially promote structural motifs underlying the emergence of assemblies, and quantify the differences using graph theoretic measures. Emergent non-random connectivity structures in different brain regions are tightly related to specific patterns of neural activity and support diverse brain functions. 
For instance, self-connected groups of neurons, known as assemblies, have been proposed to represent functional units in brain circuits and can emerge even without patterned external instruction. Here we investigate the emergence of non-random connectivity in recurrent networks using a particular plasticity rule, triplet STDP, which relies on the interaction of spike triplets and can capture higher-order statistical dependencies in neural activity. We derive the evolution of the synaptic strengths in the network and explore the conditions for the self-organization of connectivity into assemblies. We demonstrate key differences of the triplet STDP rule compared to the classical pair-based rule in terms of how assemblies are formed, including the realistic asymmetric shape and influence of novel connectivity motifs on network plasticity driven by higher-order correlations. Assembly formation depends on the specific shape of the STDP window and synaptic transmission function, pointing towards an important role of neuromodulatory signals on formation of intrinsically generated assemblies.
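The pair-versus-triplet distinction can be seen in a minimal single-synapse implementation of a Pfister-Gerstner-style triplet rule, in which potentiation at a postsynaptic spike is gated by a second, slower postsynaptic trace. The time constants, amplitudes, and all-to-all trace scheme below are illustrative placeholders, not the parameters analyzed in the paper:

```python
def triplet_stdp(pre_spikes, post_spikes, T, dt=1.0,
                 tau_plus=16.8, tau_minus=33.7, tau_y=114.0,
                 a2_minus=7e-3, a3_plus=6.5e-3, w0=0.5):
    """Evolve one synaptic weight under a triplet STDP rule.
    r traces presynaptic spikes; o1, o2 trace postsynaptic spikes on a
    fast and a slow timescale. Depression is pair-based (pre after post);
    potentiation is the triplet term r * o2, so it requires an earlier
    postsynaptic spike and therefore grows with firing rate."""
    r = o1 = o2 = 0.0
    w = w0
    for step in range(int(T / dt)):
        t = step * dt
        if t in pre_spikes:
            w -= a2_minus * o1     # pair term: depression
            r += 1.0
        if t in post_spikes:
            w += a3_plus * r * o2  # triplet term: potentiation
            o1 += 1.0
            o2 += 1.0
        r -= dt * r / tau_plus     # Euler decay of all traces
        o1 -= dt * o1 / tau_minus
        o2 -= dt * o2 / tau_y
    return w

# A lone pre-post pairing leaves w unchanged (no earlier post spike exists),
# while repeated pairings potentiate and the reversed order depresses.
w_pair = triplet_stdp({10.0}, {12.0}, T=100.0)
w_pot = triplet_stdp({10.0, 20.0, 30.0, 40.0, 50.0},
                     {12.0, 22.0, 32.0, 42.0, 52.0}, T=100.0)
w_dep = triplet_stdp({12.0, 22.0, 32.0, 42.0, 52.0},
                     {10.0, 20.0, 30.0, 40.0, 50.0}, T=100.0)
```

This rate dependence of potentiation is precisely what the pair-based rule misses and what drives the novel structural motifs discussed in the abstract.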
Affiliation(s)
- Lisandro Montangie: Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany
- Christoph Miehl: Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany; Technical University of Munich, School of Life Sciences, Freising, Germany
- Julijana Gjorgjieva: Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany; Technical University of Munich, School of Life Sciences, Freising, Germany

31
Inference of synaptic connectivity and external variability in neural microcircuits. J Comput Neurosci 2020; 48:123-147. [PMID: 32080777 DOI: 10.1007/s10827-020-00739-4] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2019] [Revised: 11/15/2019] [Accepted: 01/03/2020] [Indexed: 10/25/2022]
Abstract
A major goal in neuroscience is to estimate neural connectivity from large scale extracellular recordings of neural activity in vivo. This is challenging in part because any such activity is modulated by the unmeasured external synaptic input to the network, known as the common input problem. Many different measures of functional connectivity have been proposed in the literature, but their direct relationship to synaptic connectivity is often assumed or ignored. For in vivo data, measurements of this relationship would require a knowledge of ground truth connectivity, which is nearly always unavailable. Instead, many studies use in silico simulations as benchmarks for investigation, but such approaches necessarily rely upon a variety of simplifying assumptions about the simulated network and can depend on numerous simulation parameters. We combine neuronal network simulations, mathematical analysis, and calcium imaging data to address the question of when and how functional connectivity, synaptic connectivity, and latent external input variability can be untangled. We show numerically and analytically that, even though the precision matrix of recorded spiking activity does not uniquely determine synaptic connectivity, it is in practice often closely related to synaptic connectivity. This relation becomes more pronounced when the spatial structure of neuronal variability is jointly considered.
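The abstract's central point, that the precision matrix does not uniquely determine connectivity but is often closely related to it, can be checked in the simplest linear-Gaussian surrogate, where the relation is exact up to a W^T W correction. This sketch replaces the paper's spiking simulations and calcium data with that toy model:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 80
# Sparse random synaptic connectivity (ground truth)
W = rng.normal(0.0, 0.4 / np.sqrt(N), size=(N, N)) * (rng.random((N, N)) < 0.2)
np.fill_diagonal(W, 0.0)

# Linear-Gaussian activity model: x = W x + xi  =>  x = (I - W)^{-1} xi
A = np.eye(N) - W
Ainv = np.linalg.inv(A)
cov = Ainv @ Ainv.T            # covariance for unit-variance noise
prec = np.linalg.inv(cov)      # analytically equal to A.T @ A

# Off-diagonal precision is -(W + W.T) plus a small W.T @ W correction,
# so it correlates strongly with the symmetrized connectivity.
iu = np.triu_indices(N, k=1)
r = float(np.corrcoef(prec[iu], -(W + W.T)[iu])[0, 1])
```

The correction term is also why the mapping is not unique: different connectivity matrices with the same A.T @ A produce identical activity statistics, matching the abstract's point that the precision matrix is informative about, but not determined by, synaptic connectivity.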
32
Synaptic Plasticity Shapes Brain Connectivity: Implications for Network Topology. Int J Mol Sci 2019; 20:6193. [PMID: 31817968 PMCID: PMC6940892 DOI: 10.3390/ijms20246193] [Citation(s) in RCA: 72] [Impact Index Per Article: 14.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2019] [Revised: 12/02/2019] [Accepted: 12/06/2019] [Indexed: 12/13/2022] Open
Abstract
Studies of brain network connectivity have improved our understanding of brain changes and adaptation in response to different pathologies. Synaptic plasticity, the ability of neurons to modify their connections, is involved in brain network remodeling following different types of brain damage (e.g., vascular, neurodegenerative, inflammatory). Although synaptic plasticity mechanisms have been extensively elucidated, how neural plasticity can shape network organization is far from being completely understood. Similarities between synaptic plasticity and the principles governing brain network organization could help define brain network properties and reorganization profiles after damage. In this review, we discuss how different forms of synaptic plasticity, including homeostatic and anti-homeostatic mechanisms, could be directly involved in generating specific brain network characteristics. We propose that long-term potentiation could represent the neurophysiological basis for the formation of highly connected nodes (hubs). Conversely, homeostatic plasticity may contribute to stabilizing network activity, preventing poor and excessive connectivity in the peripheral nodes. In addition, synaptic plasticity dysfunction may drive brain network disruption in neuropsychiatric conditions such as Alzheimer's disease and schizophrenia. Optimal network architecture, characterized by efficient information processing and resilience, and reorganization after damage strictly depend on the balance between these forms of plasticity.
33
Inferring and validating mechanistic models of neural microcircuits based on spike-train data. Nat Commun 2019; 10:4933. [PMID: 31666513 PMCID: PMC6821748 DOI: 10.1038/s41467-019-12572-0] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2018] [Accepted: 09/18/2019] [Indexed: 01/11/2023] Open
Abstract
The interpretation of neuronal spike train recordings often relies on abstract statistical models that allow for principled parameter estimation and model selection but provide only limited insights into underlying microcircuits. In contrast, mechanistic models are useful to interpret microcircuit dynamics, but are rarely quantitatively matched to experimental data due to methodological challenges. Here we present analytical methods to efficiently fit spiking circuit models to single-trial spike trains. Using derived likelihood functions, we statistically infer the mean and variance of hidden inputs, neuronal adaptation properties and connectivity for coupled integrate-and-fire neurons. Comprehensive evaluations on synthetic data, validations using ground truth in-vitro and in-vivo recordings, and comparisons with existing techniques demonstrate that parameter estimation is very accurate and efficient, even for highly subsampled networks. Our methods bridge statistical, data-driven and theoretical, model-based neurosciences at the level of spiking circuits, for the purpose of a quantitative, mechanistic interpretation of recorded neuronal population activity. It is difficult to fit mechanistic, biophysically constrained circuit models to spike train data from in vivo extracellular recordings. Here the authors present analytical methods that enable efficient parameter estimation for integrate-and-fire circuit models and inference of the underlying connectivity structure in subsampled networks.
34
Chen J, Mandel HB, Fitzgerald JE, Clark DA. Asymmetric ON-OFF processing of visual motion cancels variability induced by the structure of natural scenes. eLife 2019; 8:e47579. [PMID: 31613221 PMCID: PMC6884396 DOI: 10.7554/elife.47579] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2019] [Accepted: 10/12/2019] [Indexed: 02/05/2023] Open
Abstract
Animals detect motion using a variety of visual cues that reflect regularities in the natural world. Experiments in animals across phyla have shown that motion percepts incorporate both pairwise and triplet spatiotemporal correlations that could theoretically benefit motion computation. However, it remains unclear how visual systems assemble these cues to build accurate motion estimates. Here, we used systematic behavioral measurements of fruit fly motion perception to show how flies combine local pairwise and triplet correlations to reduce variability in motion estimates across natural scenes. By generating synthetic images with statistics controlled by maximum entropy distributions, we show that the triplet correlations are useful only when images have light-dark asymmetries that mimic natural ones. This suggests that asymmetric ON-OFF processing is tuned to the particular statistics of natural scenes. Since all animals encounter the world's light-dark asymmetries, many visual systems are likely to use asymmetric ON-OFF processing to improve motion estimation.
Affiliation(s)
- Juyue Chen: Interdepartmental Neuroscience Program, Yale University, New Haven, United States
- Holly B Mandel: Department of Molecular, Cellular and Developmental Biology, Yale University, New Haven, United States
- James E Fitzgerald: Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
- Damon A Clark: Interdepartmental Neuroscience Program, Yale University, New Haven, United States; Department of Molecular, Cellular and Developmental Biology, Yale University, New Haven, United States; Department of Physics, Yale University, New Haven, United States; Department of Neuroscience, Yale University, New Haven, United States

35
Marcos E, Londei F, Genovesio A. Hidden Markov Models Predict the Future Choice Better Than a PSTH-Based Method. Neural Comput 2019; 31:1874-1890. [PMID: 31335289 DOI: 10.1162/neco_a_01216] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Beyond average firing rate, other measurable signals of neuronal activity are fundamental to an understanding of behavior. Recently, hidden Markov models (HMMs) have been applied to neural recordings and have described how neuronal ensembles process information by going through sequences of different states. Such collective dynamics are impossible to capture by just looking at the average firing rate. To estimate how well HMMs can decode information contained in single trials, we compared HMMs with a recently developed classification method based on the peristimulus time histogram (PSTH). The accuracy of the two methods was tested by using the activity of prefrontal neurons recorded while two monkeys were engaged in a strategy task. In this task, the monkeys had to select one of three spatial targets based on an instruction cue and on their previous choice. We show that by using the single trial's neural activity in a period preceding action execution, both models were able to classify the monkeys' choice with an accuracy higher than chance. Moreover, the HMM was significantly more accurate than the PSTH-based method, even in cases in which the HMM performance was low, although always above chance. Furthermore, the accuracy of both methods was related to the number of neurons exhibiting spatial selectivity within an experimental session. Overall, our study shows that neural activity is better described when more than the mean activity of individual neurons is considered, and that the study of signals beyond the average firing rate is therefore fundamental to an understanding of the dynamics of neuronal ensembles.
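The model comparison in this abstract rests on computing single-trial likelihoods under choice-specific HMMs; a minimal discrete-emission version of that computation (the scaled forward algorithm) looks like this. The states, emission alphabet, and parameter values below are toy choices, not the fitted prefrontal models:

```python
import numpy as np

def hmm_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    via the scaled forward algorithm. pi: initial state probabilities (K,),
    A: state-transition matrix (K, K), B: emission probabilities (K, M)."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        loglik += np.log(s)
        alpha = alpha / s
    return loglik

def sample_hmm(T, pi, A, B, rng):
    """Draw one observation sequence of length T from the HMM."""
    K, M = B.shape
    z = rng.choice(K, p=pi)
    obs = []
    for _ in range(T):
        obs.append(rng.choice(M, p=B[z]))
        z = rng.choice(K, p=A[z])
    return obs

# Single-trial classification: pick the model with the higher log-likelihood.
rng = np.random.default_rng(4)
pi = np.array([0.5, 0.5])
A = np.array([[0.95, 0.05], [0.05, 0.95]])
B1 = np.array([[0.9, 0.1], [0.1, 0.9]])  # structured, state-dependent emissions
B2 = np.array([[0.5, 0.5], [0.5, 0.5]])  # unstructured alternative
trial = sample_hmm(200, pi, A, B1, rng)
decoded = 1 if hmm_loglik(trial, pi, A, B1) > hmm_loglik(trial, pi, A, B2) else 2
```

The scaling step (normalizing alpha and accumulating log sums) is what keeps the computation numerically stable over long trials.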
Affiliation(s)
- Encarni Marcos: Department of Physiology and Pharmacology, Sapienza University of Rome, Rome 00185, Italy; Instituto de Neurociencias de Alicante, Consejo Superior de Investigaciones Científicas-Universidad Miguel Hernández de Elche, Sant Joan d'Alacant, Alicante 03550, Spain
- Fabrizio Londei: Department of Physiology and Pharmacology, Sapienza University of Rome, Rome 00185, Italy
- Aldo Genovesio: Department of Physiology and Pharmacology, Sapienza University of Rome, Rome 00185, Italy

36
Recanatesi S, Ocker GK, Buice MA, Shea-Brown E. Dimensionality in recurrent spiking networks: Global trends in activity and local origins in connectivity. PLoS Comput Biol 2019; 15:e1006446. [PMID: 31299044 PMCID: PMC6655892 DOI: 10.1371/journal.pcbi.1006446] [Citation(s) in RCA: 31] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2018] [Revised: 07/24/2019] [Accepted: 04/03/2019] [Indexed: 11/25/2022] Open
Abstract
The dimensionality of a network's collective activity is of increasing interest in neuroscience. This is because dimensionality provides a compact measure of how coordinated network-wide activity is, in terms of the number of modes (or degrees of freedom) that it can independently explore. A low number of modes suggests a compressed low dimensional neural code and reveals interpretable dynamics [1], while findings of high dimension may suggest flexible computations [2, 3]. Here, we address the fundamental question of how dimensionality is related to connectivity, in both autonomous and stimulus-driven networks. Working with a simple spiking network model, we derive three main findings. First, the dimensionality of global activity patterns can be strongly, and systematically, regulated by local connectivity structures. Second, the dimensionality is a better indicator than average correlations in determining how constrained neural activity is. Third, stimulus evoked neural activity interacts systematically with neural connectivity patterns, leading to network responses of either greater or lesser dimensionality than the stimulus.
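Dimensionality in this sense is commonly summarized by the participation ratio of the eigenvalues of the activity covariance. The estimator below is the generic statistical measure, not the paper's analytical expression relating dimensionality to connectivity motifs:

```python
import numpy as np

def participation_ratio(X):
    """Dimensionality of population activity (X: neurons x time) as the
    participation ratio of the covariance eigenvalues:
    PR = (sum lambda_i)^2 / sum lambda_i^2."""
    lam = np.linalg.eigvalsh(np.cov(X))
    lam = np.clip(lam, 0.0, None)  # guard against tiny negative eigenvalues
    return lam.sum() ** 2 / (lam ** 2).sum()

rng = np.random.default_rng(3)
N, T = 50, 5000
independent = rng.normal(size=(N, T))               # PR close to N
shared = rng.normal(size=(1, T)) * np.ones((N, 1))  # one global mode
low_dim = shared + 0.1 * rng.normal(size=(N, T))    # PR close to 1
```

For N independent unit-variance neurons the participation ratio approaches N, while a single dominant shared mode pulls it toward 1; the paper's contribution is to derive how local connectivity structure sets this number in spiking networks.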
Affiliation(s)
- Stefano Recanatesi: Center for Computational Neuroscience, University of Washington, Seattle, Washington, United States of America
- Gabriel Koch Ocker: Allen Institute for Brain Science, Seattle, Washington, United States of America
- Michael A. Buice: Center for Computational Neuroscience, University of Washington, Seattle, Washington, United States of America; Allen Institute for Brain Science, Seattle, Washington, United States of America; Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America
- Eric Shea-Brown: Center for Computational Neuroscience, University of Washington, Seattle, Washington, United States of America; Allen Institute for Brain Science, Seattle, Washington, United States of America; Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America

37
Dahmen D, Grün S, Diesmann M, Helias M. Second type of criticality in the brain uncovers rich multiple-neuron dynamics. Proc Natl Acad Sci U S A 2019; 116:13051-13060. [PMID: 31189590 PMCID: PMC6600928 DOI: 10.1073/pnas.1818972116] [Citation(s) in RCA: 42] [Impact Index Per Article: 8.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/03/2022] Open
Abstract
Cortical networks that have been found to operate close to a critical point exhibit joint activations of large numbers of neurons. However, in motor cortex of the awake macaque monkey, we observe very different dynamics: massively parallel recordings of 155 single-neuron spiking activities show weak fluctuations on the population level. This a priori suggests that motor cortex operates in a noncritical regime, which in models, has been found to be suboptimal for computational performance. However, here, we show the opposite: The large dispersion of correlations across neurons is the signature of a second critical regime. This regime exhibits a rich dynamical repertoire hidden from macroscopic brain signals but essential for high performance in such concepts as reservoir computing. An analytical link between the eigenvalue spectrum of the dynamics, the heterogeneity of connectivity, and the dispersion of correlations allows us to assess the closeness to the critical point.
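The signature described here, a large dispersion of pairwise covariances despite weak population-level fluctuations, is easy to reproduce in a linear-response surrogate in which a coupling parameter g sets the distance to the critical point. This is an illustration of the general effect, not the paper's motor-cortex analysis:

```python
import numpy as np

def cov_mean_and_dispersion(g, N=200, seed=0):
    """Mean and standard deviation of pairwise covariances in a linear
    network x = (I - W)^{-1} xi with unit-variance noise. g sets the
    spectral radius of the random coupling W; as g -> 1 the network
    approaches the critical point and the dispersion of covariances
    across pairs grows while their mean stays comparatively small."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
    np.fill_diagonal(W, 0.0)
    L = np.linalg.inv(np.eye(N) - W)
    C = L @ L.T
    iu = np.triu_indices(N, k=1)
    return float(C[iu].mean()), float(C[iu].std())

far = cov_mean_and_dispersion(0.2)   # far from criticality
near = cov_mean_and_dispersion(0.9)  # near criticality
```

As g approaches 1 the dispersion across pairs grows sharply, which is why the spread, not the mean, of pairwise covariances diagnoses this second type of criticality.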
Affiliation(s)
- David Dahmen: Institute of Neuroscience and Medicine (INM-6), Jülich Research Centre, 52425 Jülich, Germany; Institute for Advanced Simulation (IAS-6), Jülich Research Centre, 52425 Jülich, Germany; JARA Institute Brain Structure-Function Relationships (INM-10), Jülich-Aachen Research Alliance, Jülich Research Centre, 52425 Jülich, Germany
- Sonja Grün: Institute of Neuroscience and Medicine (INM-6), Jülich Research Centre, 52425 Jülich, Germany; Institute for Advanced Simulation (IAS-6), Jülich Research Centre, 52425 Jülich, Germany; JARA Institute Brain Structure-Function Relationships (INM-10), Jülich-Aachen Research Alliance, Jülich Research Centre, 52425 Jülich, Germany; Theoretical Systems Neurobiology, RWTH Aachen University, 52056 Aachen, Germany
- Markus Diesmann: Institute of Neuroscience and Medicine (INM-6), Jülich Research Centre, 52425 Jülich, Germany; Institute for Advanced Simulation (IAS-6), Jülich Research Centre, 52425 Jülich, Germany; JARA Institute Brain Structure-Function Relationships (INM-10), Jülich-Aachen Research Alliance, Jülich Research Centre, 52425 Jülich, Germany; Department of Psychiatry, Psychotherapy and Psychosomatics, School of Medicine, RWTH Aachen University, 52074 Aachen, Germany; Department of Physics, Faculty 1, RWTH Aachen University, 52062 Aachen, Germany
- Moritz Helias: Institute of Neuroscience and Medicine (INM-6), Jülich Research Centre, 52425 Jülich, Germany; Institute for Advanced Simulation (IAS-6), Jülich Research Centre, 52425 Jülich, Germany; JARA Institute Brain Structure-Function Relationships (INM-10), Jülich-Aachen Research Alliance, Jülich Research Centre, 52425 Jülich, Germany; Department of Physics, Faculty 1, RWTH Aachen University, 52062 Aachen, Germany

38
Baker C, Ebsch C, Lampl I, Rosenbaum R. Correlated states in balanced neuronal networks. Phys Rev E 2019; 99:052414. [PMID: 31212573 DOI: 10.1103/physreve.99.052414] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2018] [Indexed: 06/09/2023]
Abstract
Understanding the magnitude and structure of interneuronal correlations and their relationship to synaptic connectivity structure is an important and difficult problem in computational neuroscience. Early studies show that neuronal network models with excitatory-inhibitory balance naturally create very weak spike train correlations, defining the "asynchronous state." Later work showed that, under some connectivity structures, balanced networks can produce larger correlations between some neuron pairs, even when the average correlation is very small. All of these previous studies assume that the local network receives feedforward synaptic input from a population of uncorrelated spike trains. We show that when spike trains providing feedforward input are correlated, the downstream recurrent network produces much larger correlations. We provide an in-depth analysis of the resulting "correlated state" in balanced networks and show that, unlike the asynchronous state, it produces a tight excitatory-inhibitory balance consistent with in vivo cortical recordings.
Affiliation(s)
- Cody Baker: Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, Indiana 46556, USA
- Christopher Ebsch: Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, Indiana 46556, USA
- Ilan Lampl: Department of Neurobiology, Weizmann Institute of Science, Rehovot 7610001, Israel
- Robert Rosenbaum: Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, Indiana 46556, USA; Interdisciplinary Center for Network Science and Applications, University of Notre Dame, Notre Dame, Indiana 46556, USA

39
Zheng C, Pikovsky A. Stochastic bursting in unidirectionally delay-coupled noisy excitable systems. CHAOS (WOODBURY, N.Y.) 2019; 29:041103. [PMID: 31042942 DOI: 10.1063/1.5093180] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/18/2019] [Accepted: 03/21/2019] [Indexed: 06/09/2023]
Abstract
We show that "stochastic bursting" is observed in a ring of unidirectional delay-coupled noisy excitable systems, thanks to the combinational action of time-delayed coupling and noise. Under the approximation of timescale separation, i.e., when the time delays in each connection are much larger than the characteristic duration of the spikes, the observed rather coherent spike pattern can be described by an idealized coupled point process with a leader-follower relationship. We derive analytically the statistics of the spikes in each unit, the pairwise correlations between any two units, and the spectrum of the total output from the network. Theory is in good agreement with the simulations with a network of theta-neurons.
Affiliation(s)
- Chunming Zheng
- Institute for Physics and Astronomy, University of Potsdam, Karl-Liebknecht-Strasse 24/25, 14476 Potsdam-Golm, Germany
- Arkady Pikovsky
- Institute for Physics and Astronomy, University of Potsdam, Karl-Liebknecht-Strasse 24/25, 14476 Potsdam-Golm, Germany
40
Abstract
Background: The roles of neuromodulation in a neural network, such as in a cortical microcolumn, are still incompletely understood. Neuromodulation influences neural processing by presynaptic and postsynaptic regulation of synaptic efficacy. Neuromodulation also affects ion channels and intrinsic excitability. Methods: Synaptic efficacy modulation is an effective way to rapidly alter network density and topology. We alter network topology and density to measure the effect on spike synchronization. We also operate with differently parameterized neuron models which alter the neuron's intrinsic excitability, i.e., activation function. Results: We find that (a) fast synaptic efficacy modulation influences the amount of correlated spiking in a network. Also, (b) synchronization in a network influences the read-out of intrinsic properties. Highly synchronous input drives neurons, such that differences in intrinsic properties disappear, while asynchronous input lets intrinsic properties determine output behavior. Thus, altering network topology can alter the balance between intrinsically vs. synaptically driven network activity. Conclusion: We conclude that neuromodulation may allow a network to shift between a more synchronized transmission mode and a more asynchronous intrinsic read-out mode. This has significant implications for our understanding of the flexibility of cortical computations.
Affiliation(s)
- Gabriele Scheler
- Carl Correns Foundation for Mathematical Biology, Mountain View, CA, 94040, USA
41
Williamson RC, Doiron B, Smith MA, Yu BM. Bridging large-scale neuronal recordings and large-scale network models using dimensionality reduction. Curr Opin Neurobiol 2019; 55:40-47. [PMID: 30677702 DOI: 10.1016/j.conb.2018.12.009] [Citation(s) in RCA: 33] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2018] [Revised: 12/16/2018] [Accepted: 12/17/2018] [Indexed: 12/21/2022]
Abstract
A long-standing goal in neuroscience has been to bring together neuronal recordings and neural network modeling to understand brain function. Neuronal recordings can inform the development of network models, and network models can in turn provide predictions for subsequent experiments. Traditionally, neuronal recordings and network models have been related using single-neuron and pairwise spike train statistics. We review here recent studies that have begun to relate neuronal recordings and network models based on the multi-dimensional structure of neuronal population activity, as identified using dimensionality reduction. This approach has been used to study working memory, decision making, motor control, and more. Dimensionality reduction has provided common ground for incisive comparisons and tight interplay between neuronal recordings and network models.
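A toy sketch of the dimensionality-reduction workflow this review discusses (synthetic data and parameters are invented for illustration): low-dimensional latent fluctuations embedded in a many-neuron population can be recovered by PCA on the population activity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic population activity: 30 "neurons" over 500 trials, driven by
# 2 latent variables plus small independent noise.
T, n_neurons, n_latents = 500, 30, 2
latents = rng.standard_normal((T, n_latents)) * np.array([3.0, 2.0])
loading = rng.standard_normal((n_latents, n_neurons))
activity = latents @ loading + 0.3 * rng.standard_normal((T, n_neurons))

# PCA via SVD of the mean-centered trial-by-neuron matrix.
X = activity - activity.mean(axis=0)
_, s, _ = np.linalg.svd(X, full_matrices=False)
var_explained = s**2 / np.sum(s**2)

# The bulk of the shared variance lies in only 2 dimensions.
print(var_explained[:2].sum() > 0.85)
```

Real analyses in this literature typically use factor analysis or related methods that separate shared from private variance; plain PCA is used here only to keep the sketch self-contained.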
Affiliation(s)
- Ryan C Williamson
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Machine Learning, Carnegie Mellon University, Pittsburgh, PA, USA; School of Medicine, University of Pittsburgh, Pittsburgh, PA, USA
- Brent Doiron
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA
- Matthew A Smith
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Ophthalmology, University of Pittsburgh, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Byron M Yu
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Electrical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
42
van Meegen A, Lindner B. Self-Consistent Correlations of Randomly Coupled Rotators in the Asynchronous State. PHYSICAL REVIEW LETTERS 2018; 121:258302. [PMID: 30608814 DOI: 10.1103/physrevlett.121.258302] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/14/2017] [Revised: 10/09/2018] [Indexed: 06/09/2023]
Abstract
We study a network of unidirectionally coupled rotators with independent identically distributed (i.i.d.) frequencies and i.i.d. coupling coefficients. Similar to biological networks, this system can attain an asynchronous state with pronounced temporal autocorrelations of the rotators. We derive differential equations for the self-consistent autocorrelation function that can be solved analytically in limit cases. For more involved scenarios, its numerical solution is confirmed by simulations of networks with Gaussian or sparsely distributed coupling coefficients. The theory is finally generalized for pulse-coupled units and tested on a standard model of computational neuroscience, a recurrent network of sparsely coupled exponential integrate-and-fire neurons.
Affiliation(s)
- Alexander van Meegen
- Bernstein Center for Computational Neuroscience Berlin, Philippstraße 13, Haus 2, 10115 Berlin, Germany
- Physics Department of Humboldt University Berlin, Newtonstraße 15, 12489 Berlin, Germany
- Benjamin Lindner
- Bernstein Center for Computational Neuroscience Berlin, Philippstraße 13, Haus 2, 10115 Berlin, Germany
- Physics Department of Humboldt University Berlin, Newtonstraße 15, 12489 Berlin, Germany
43
Kapoor V, Besserve M, Logothetis NK, Panagiotaropoulos TI. Parallel and functionally segregated processing of task phase and conscious content in the prefrontal cortex. Commun Biol 2018; 1:215. [PMID: 30534607 PMCID: PMC6281663 DOI: 10.1038/s42003-018-0225-1] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2018] [Accepted: 11/08/2018] [Indexed: 11/30/2022] Open
Abstract
The role of lateral prefrontal cortex (LPFC) in mediating conscious perception has been recently questioned due to potential confounds resulting from the parallel operation of task related processes. We have previously demonstrated encoding of contents of visual consciousness in LPFC neurons during a no-report task involving perceptual suppression. Here, we report a separate LPFC population that exhibits task-phase related activity during the same task. The activity profile of these neurons could be captured as canonical response patterns (CRPs), with their peak amplitudes sequentially distributed across different task phases. Perceptually suppressed visual input had a negligible impact on sequential firing and functional connectivity structure. Importantly, task-phase related neurons were functionally segregated from the neuronal population, which encoded conscious perception. These results suggest that neurons exhibiting task-phase related activity operate in the LPFC concurrently with, but segregated from neurons representing conscious content during a no-report task involving perceptual suppression. Vishal Kapoor et al. identify a population of cells in the lateral prefrontal cortex that exhibits task phase-related activity during a no-report task. This cell population is functionally segregated from the population encoding conscious perception, although the two operate in parallel.
Affiliation(s)
- Vishal Kapoor
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, 72076 Tübingen, Germany
- Graduate School of Neural and Behavioral Sciences, International Max Planck Research School, Eberhard-Karls University of Tübingen, 72074 Tübingen, Germany
- Michel Besserve
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, 72076 Tübingen, Germany
- Department of Empirical Inference, Max Planck Institute for Intelligent Systems and Max Planck ETH Center for Learning Systems, 72076 Tübingen, Germany
- Nikos K Logothetis
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, 72076 Tübingen, Germany
- Imaging Science and Biomedical Engineering, University of Manchester, Manchester, M13 9PL, UK
- Theofanis I Panagiotaropoulos
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, 72076 Tübingen, Germany
- Cognitive Neuroimaging Unit, CEA, DSV/I2BM, INSERM, Université Paris-Sud, Université Paris-Saclay, NeuroSpin Center, 91191 Gif/Yvette, France
45
|
Barreiro AK, Ly C. Investigating the Correlation-Firing Rate Relationship in Heterogeneous Recurrent Networks. JOURNAL OF MATHEMATICAL NEUROSCIENCE 2018; 8:8. [PMID: 29872932 PMCID: PMC5989010 DOI: 10.1186/s13408-018-0063-y] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/05/2018] [Accepted: 05/21/2018] [Indexed: 05/13/2023]
Abstract
The structure of spiking activity in cortical networks has important implications for how the brain ultimately codes sensory signals. However, our understanding of how network and intrinsic cellular mechanisms affect spiking is still incomplete. In particular, whether cell pairs in a neural network show a positive (or no) relationship between pairwise spike count correlation and average firing rate is generally unknown. This relationship is important because it has been observed experimentally in some sensory systems, and it can enhance information in a common population code. Here we extend our prior work in developing mathematical tools to succinctly characterize the correlation and firing rate relationship in heterogeneous coupled networks. We find that very modest changes in how heterogeneous networks occupy parameter space can dramatically alter the correlation-firing rate relationship.
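The positive correlation–firing rate relationship mentioned above can be illustrated with a textbook doubly stochastic model (an assumption chosen for illustration, not the authors' recurrent network): two conditionally independent Poisson neurons sharing a fluctuating gain show a spike-count correlation that grows with firing rate.

```python
import numpy as np

def count_corr(rate, T=0.1, gain_var=0.05):
    """Spike-count correlation of two conditionally independent Poisson
    neurons with common multiplicative gain g (E[g] = 1, Var[g] = gain_var),
    counting over a window of T seconds."""
    shared = gain_var * (rate * T) ** 2   # covariance from the shared gain
    total = rate * T + shared             # Poisson variance + gain variance
    return shared / total

rates = np.array([1.0, 5.0, 20.0, 80.0])
# Correlation rises monotonically with firing rate in this model.
print([round(count_corr(r), 3) for r in rates])
```

The closed form is corr(r) = σ²rT / (1 + σ²rT) with σ² the gain variance, so the rate dependence here comes entirely from the shared multiplicative noise, one of several mechanisms the paper's network analysis disentangles.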
Affiliation(s)
- Cheng Ly
- Department of Statistical Science and Operations Research, Virginia Commonwealth University, Richmond, USA
46
Pena RFO, Vellmer S, Bernardi D, Roque AC, Lindner B. Self-Consistent Scheme for Spike-Train Power Spectra in Heterogeneous Sparse Networks. Front Comput Neurosci 2018; 12:9. [PMID: 29551968 PMCID: PMC5840464 DOI: 10.3389/fncom.2018.00009] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2017] [Accepted: 02/07/2018] [Indexed: 11/13/2022] Open
Abstract
Recurrent networks of spiking neurons can be in an asynchronous state characterized by low or absent cross-correlations and spike statistics which resemble those of cortical neurons. Although spatial correlations are negligible in this state, neurons can show pronounced temporal correlations in their spike trains that can be quantified by the autocorrelation function or the spike-train power spectrum. Depending on cellular and network parameters, correlations display diverse patterns (ranging from simple refractory-period effects and stochastic oscillations to slow fluctuations) and it is generally not well-understood how these dependencies come about. Previous work has explored how the single-cell correlations in a homogeneous network (excitatory and inhibitory integrate-and-fire neurons with nearly balanced mean recurrent input) can be determined numerically from an iterative single-neuron simulation. Such a scheme is based on the fact that every neuron is driven by the network noise (i.e., the input currents from all its presynaptic partners) but also contributes to the network noise, leading to a self-consistency condition for the input and output spectra. Here we first extend this scheme to homogeneous networks with strong recurrent inhibition and a synaptic filter, in which instabilities of the previous scheme are avoided by an averaging procedure. We then extend the scheme to heterogeneous networks in which (i) different neural subpopulations (e.g., excitatory and inhibitory neurons) have different cellular or connectivity parameters; (ii) the number and strength of the input connections are random (Erdős-Rényi topology) and thus different among neurons. In all heterogeneous cases, neurons are lumped in different classes each of which is represented by a single neuron in the iterative scheme; in addition, we make a Gaussian approximation of the input current to the neuron. These approximations seem to be justified over a broad range of parameters as indicated by comparison with simulation results of large recurrent networks. Our method can help to elucidate how network heterogeneity shapes the asynchronous state in recurrent neural networks.
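The self-consistency logic in this abstract can be caricatured with a scalar fixed-point iteration (the transfer function below is hypothetical; the paper iterates over full spike-train power spectra, not a single variance):

```python
# Toy version of the self-consistency idea: each neuron's output
# fluctuations feed back through the recurrent connectivity as part of
# its own input noise, so the stationary variance solves a fixed point.

def transfer(v_in):
    """Hypothetical saturating map from input variance to output variance."""
    return v_in / (1.0 + v_in)

def self_consistent_variance(v_ext=0.5, g2=2.0, n_iter=100):
    v_out = 0.0
    for _ in range(n_iter):
        v_in = v_ext + g2 * v_out  # network noise = external + recurrent part
        v_out = transfer(v_in)     # single-neuron input -> output statistics
    return v_out

v = self_consistent_variance()
# At convergence, v satisfies v = transfer(v_ext + g2 * v).
print(abs(v - transfer(0.5 + 2.0 * v)) < 1e-10)
```

In the paper's scheme the analogous iteration runs a single-neuron simulation whose output spectrum is fed back as colored input noise until input and output spectra agree.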
Affiliation(s)
- Rodrigo F O Pena
- Laboratório de Sistemas Neurais, Department of Physics, School of Philosophy, Sciences and Letters of Ribeirão Preto, University of São Paulo, São Paulo, Brazil
- Sebastian Vellmer
- Theory of Complex Systems and Neurophysics, Bernstein Center for Computational Neuroscience, Berlin, Germany
- Department of Physics, Humboldt Universität zu Berlin, Berlin, Germany
- Davide Bernardi
- Theory of Complex Systems and Neurophysics, Bernstein Center for Computational Neuroscience, Berlin, Germany
- Department of Physics, Humboldt Universität zu Berlin, Berlin, Germany
- Antonio C Roque
- Laboratório de Sistemas Neurais, Department of Physics, School of Philosophy, Sciences and Letters of Ribeirão Preto, University of São Paulo, São Paulo, Brazil
- Benjamin Lindner
- Theory of Complex Systems and Neurophysics, Bernstein Center for Computational Neuroscience, Berlin, Germany
- Department of Physics, Humboldt Universität zu Berlin, Berlin, Germany
47
Kass RE, Amari SI, Arai K, Brown EN, Diekman CO, Diesmann M, Doiron B, Eden UT, Fairhall AL, Fiddyment GM, Fukai T, Grün S, Harrison MT, Helias M, Nakahara H, Teramae JN, Thomas PJ, Reimers M, Rodu J, Rotstein HG, Shea-Brown E, Shimazaki H, Shinomoto S, Yu BM, Kramer MA. Computational Neuroscience: Mathematical and Statistical Perspectives. ANNUAL REVIEW OF STATISTICS AND ITS APPLICATION 2018; 5:183-214. [PMID: 30976604 PMCID: PMC6454918 DOI: 10.1146/annurev-statistics-041715-033733] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/06/2023]
Abstract
Mathematical and statistical models have played important roles in neuroscience, especially by describing the electrical activity of neurons recorded individually, or collectively across large networks. As the field moves forward rapidly, new challenges are emerging. For maximal effectiveness, those working to advance computational neuroscience will need to appreciate and exploit the complementary strengths of mechanistic theory and the statistical paradigm.
Affiliation(s)
- Robert E Kass
- Carnegie Mellon University, Pittsburgh, PA, USA, 15213
- Shun-Ichi Amari
- RIKEN Brain Science Institute, Wako, Saitama Prefecture, Japan, 351-0198
- Emery N Brown
- Massachusetts Institute of Technology, Cambridge, MA, USA, 02139
- Harvard Medical School, Boston, MA, USA, 02115
- Markus Diesmann
- Jülich Research Centre, Jülich, Germany, 52428
- RWTH Aachen University, Aachen, Germany, 52062
- Brent Doiron
- University of Pittsburgh, Pittsburgh, PA, USA, 15260
- Uri T Eden
- Boston University, Boston, MA, USA, 02215
- Tomoki Fukai
- RIKEN Brain Science Institute, Wako, Saitama Prefecture, Japan, 351-0198
- Sonja Grün
- Jülich Research Centre, Jülich, Germany, 52428
- RWTH Aachen University, Aachen, Germany, 52062
- Moritz Helias
- Jülich Research Centre, Jülich, Germany, 52428
- RWTH Aachen University, Aachen, Germany, 52062
- Hiroyuki Nakahara
- RIKEN Brain Science Institute, Wako, Saitama Prefecture, Japan, 351-0198
- Peter J Thomas
- Case Western Reserve University, Cleveland, OH, USA, 44106
- Mark Reimers
- Michigan State University, East Lansing, MI, USA, 48824
- Jordan Rodu
- Carnegie Mellon University, Pittsburgh, PA, USA, 15213
- Hideaki Shimazaki
- Honda Research Institute Japan, Wako, Saitama Prefecture, Japan, 351-0188
- Kyoto University, Kyoto, Kyoto Prefecture, Japan, 606-8502
- Byron M Yu
- Carnegie Mellon University, Pittsburgh, PA, USA, 15213
48
Min B, Zhou D, Cai D. Effects of Firing Variability on Network Structures with Spike-Timing-Dependent Plasticity. Front Comput Neurosci 2018; 12:1. [PMID: 29410621 PMCID: PMC5787127 DOI: 10.3389/fncom.2018.00001] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2017] [Accepted: 01/03/2018] [Indexed: 11/17/2022] Open
Abstract
Synaptic plasticity is believed to be the biological substrate underlying learning and memory. One of the most widespread forms of synaptic plasticity, spike-timing-dependent plasticity (STDP), uses the spike timing information of presynaptic and postsynaptic neurons to induce synaptic potentiation or depression. An open question is how STDP organizes the connectivity patterns in neuronal circuits. Previous studies have placed much emphasis on the role of firing rate in shaping connectivity patterns. Here, we go beyond the firing rate description to develop a self-consistent linear response theory that incorporates the information of both firing rate and firing variability. By decomposing the pairwise spike correlation into one component associated with local direct connections and the other associated with indirect connections, we identify two distinct regimes regarding the network structures learned through STDP. In one regime, the contribution of the direct-connection correlations dominates over that of the indirect-connection correlations in the learning dynamics; this gives rise to a network structure consistent with the firing rate description. In the other regime, the contribution of the indirect-connection correlations dominates in the learning dynamics, leading to a network structure different from the firing rate description. We demonstrate that the heterogeneity of firing variability across neuronal populations induces a temporally asymmetric structure of indirect-connection correlations. This temporally asymmetric structure underlies the emergence of the second regime. Our study provides a new perspective that emphasizes the role of high-order statistics of spiking activity in the spike-correlation-sensitive learning dynamics.
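The core quantity in correlation-sensitive STDP analyses like this one — the mean weight drift as an integral of the STDP kernel against the pairwise spike cross-correlogram — can be sketched numerically (the Gaussian correlogram and all parameters below are hypothetical, not taken from the paper):

```python
import numpy as np

# Antisymmetric STDP kernel: potentiation when pre leads post (lag s > 0),
# depression when post leads pre (s < 0); s in ms.
def stdp_kernel(s, a_plus=1.0, a_minus=1.0, tau=20.0):
    return np.where(s > 0, a_plus * np.exp(-s / tau),
                    np.where(s < 0, -a_minus * np.exp(s / tau), 0.0))

# Mean drift of a synaptic weight ~ integral of the STDP kernel against
# the pairwise cross-correlogram (here a hypothetical Gaussian bump).
def weight_drift(corr_peak_lag, width=10.0):
    s = np.linspace(-100.0, 100.0, 2001)
    corr = np.exp(-((s - corr_peak_lag) ** 2) / (2 * width**2))
    return np.sum(stdp_kernel(s) * corr) * (s[1] - s[0])

print(weight_drift(+5.0) > 0)   # pre tends to lead -> net potentiation
print(weight_drift(-5.0) < 0)   # post tends to lead -> net depression
```

A temporally symmetric correlogram (peak at zero lag) yields zero net drift under this antisymmetric kernel, which is why the temporal asymmetry of the indirect-connection correlations emphasized in the abstract matters for the learned structure.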
Affiliation(s)
- Bin Min
- Center for Neural Science, Courant Institute of Mathematical Sciences, New York University, New York, NY, United States
- NYUAD Institute, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Douglas Zhou
- School of Mathematical Sciences, MOE-LSC, Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, China
- David Cai
- Center for Neural Science, Courant Institute of Mathematical Sciences, New York University, New York, NY, United States
- NYUAD Institute, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- School of Mathematical Sciences, MOE-LSC, Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, China
49
Pernice V, da Silveira RA. Interpretation of correlated neural variability from models of feed-forward and recurrent circuits. PLoS Comput Biol 2018; 14:e1005979. [PMID: 29408930 PMCID: PMC5833435 DOI: 10.1371/journal.pcbi.1005979] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2017] [Revised: 03/01/2018] [Accepted: 01/10/2018] [Indexed: 11/18/2022] Open
Abstract
Neural populations respond to the repeated presentations of a sensory stimulus with correlated variability. These correlations have been studied in detail, with respect to their mechanistic origin, as well as their influence on stimulus discrimination and on the performance of population codes. A number of theoretical studies have endeavored to link network architecture to the nature of the correlations in neural activity. Here, we contribute to this effort: in models of circuits of stochastic neurons, we elucidate the implications of various network architectures—recurrent connections, shared feed-forward projections, and shared gain fluctuations—on the stimulus dependence in correlations. Specifically, we derive mathematical relations that specify the dependence of population-averaged covariances on firing rates, for different network architectures. In turn, these relations can be used to analyze data on population activity. We examine recordings from neural populations in mouse auditory cortex. We find that a recurrent network model with random effective connections captures the observed statistics. Furthermore, using our circuit model, we investigate the relation between network parameters, correlations, and how well different stimuli can be discriminated from one another based on the population activity. As such, our approach allows us to relate properties of the neural circuit to information processing.

The response of neurons to a stimulus is variable across trials. A natural solution for reliable coding in the face of noise is the averaging across a neural population. The nature of this averaging depends on the structure of noise correlations in the neural population. In turn, the correlation structure depends on the way noise and correlations are generated in neural circuits. It is in general difficult to identify the origin of correlations from the observed population activity alone. In this article, we explore different theoretical scenarios of the way in which correlations can be generated, and we relate these to the architecture of feed-forward and recurrent neural circuits. Analyzing population recordings of the activity in mouse auditory cortex in response to sound stimuli, we find that population statistics are consistent with those generated in a recurrent network model. Using this model, we can then quantify the effects of network properties on average population responses, noise correlations, and the representation of sensory information.
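One generic form such architecture-to-covariance relations take — a linearized sketch under assumed unit-variance independent input noise, not the paper's stochastic-neuron model — is the recurrent-network identity C = (I − W)⁻¹ Σ (I − W)⁻ᵀ:

```python
import numpy as np

rng = np.random.default_rng(2)

# Linear-response network: x = W x + xi  =>  x = (I - W)^-1 xi.
n = 4
W = 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)  # weak random coupling
L = np.linalg.inv(np.eye(n) - W)

# Predicted covariance for independent, unit-variance input noise.
C_pred = L @ L.T

# Empirical covariance from many samples of the linear network.
xi = rng.standard_normal((200000, n))
x = xi @ L.T
C_emp = np.cov(x, rowvar=False)

print(np.max(np.abs(C_emp - C_pred)))  # small sampling error
```

Read forward, the identity predicts covariances from connectivity; read backward, as in the paper's data analysis, observed covariance statistics constrain the effective connectivity.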
Affiliation(s)
- Volker Pernice
- Department of Physics, Ecole Normale Supérieure, Paris, France
- Laboratoire de Physique Statistique, Ecole Normale Supérieure, PSL Research University; Université Paris Diderot Sorbonne Paris-Cité, Sorbonne Universités UPMC Univ Paris 06; CNRS, Paris, France
- Rava Azeredo da Silveira
- Department of Physics, Ecole Normale Supérieure, Paris, France
- Laboratoire de Physique Statistique, Ecole Normale Supérieure, PSL Research University; Université Paris Diderot Sorbonne Paris-Cité, Sorbonne Universités UPMC Univ Paris 06; CNRS, Paris, France
- Princeton Neuroscience Institute, Princeton University, Princeton, New Jersey, United States of America
50
Abstract
We expand the theory of Hawkes processes to the nonstationary case, in which the mutually exciting point processes receive time-dependent inputs. We derive an analytical expression for the time-dependent correlations, which can be applied to networks with arbitrary connectivity, and inputs with arbitrary statistics. The expression shows how the network correlations are determined by the interplay between the network topology, the transfer functions relating units within the network, and the pattern and statistics of the external inputs. We illustrate the correlation structure using several examples in which neural network dynamics are modeled as a Hawkes process. In particular, we focus on the interplay between internally and externally generated oscillations and their signatures in the spike and rate correlation functions.
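As background for the nonstationary extension described above, the stationary first-order statistics of a linear Hawkes network are already fixed by the integrated kernels: the mean rates obey ν = (I − K)⁻¹ μ, equivalently the branching ("generations of offspring") expansion Σₙ Kⁿ μ. A small numerical check, with kernels and baselines invented for illustration:

```python
import numpy as np

# K[i, j]: integrated interaction kernel = mean number of extra spikes in
# unit i triggered by one spike of unit j; mu: baseline (exogenous) rates.
K = np.array([[0.0, 0.4],
              [0.3, 0.1]])
mu = np.array([1.0, 0.5])

# Stationary rates: nu = (I - K)^-1 mu (valid when spectral radius of K < 1).
nu = np.linalg.solve(np.eye(2) - K, mu)

# Same result from the truncated branching expansion sum_n K^n mu.
nu_series = sum(np.linalg.matrix_power(K, n) @ mu for n in range(200))

print(nu)
print(np.allclose(nu, nu_series))
```

The paper's contribution is the analogous closed-form treatment of the time-dependent correlations when the inputs μ themselves vary in time.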
Affiliation(s)
- Neta Ravid Tannenbaum
- Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem 9190401, Israel
- Yoram Burak
- Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem 9190401, Israel
- Racah Institute of Physics, The Hebrew University of Jerusalem, Jerusalem 9190401, Israel