1. Shomali SR, Rasuli SN, Ahmadabadi MN, Shimazaki H. Uncovering hidden network architecture from spiking activities using an exact statistical input-output relation of neurons. Commun Biol 2023; 6:169. PMID: 36792689; PMCID: PMC9932086; DOI: 10.1038/s42003-023-04511-z.
Abstract
Identifying network architecture from observed neural activities is crucial in neuroscience studies. A key requirement is knowledge of the statistical input-output relation of single neurons in vivo. By utilizing an exact analytical solution of the spike-timing for leaky integrate-and-fire neurons under noisy inputs balanced near the threshold, we construct a framework that links synaptic type, strength, and spiking nonlinearity with the statistics of neuronal population activity. The framework explains structured pairwise and higher-order interactions of neurons receiving common inputs under different architectures. We compared the theoretical predictions with the activity of monkey and mouse V1 neurons and found that excitatory inputs given to pairs explained the observed sparse activity characterized by strong negative triple-wise interactions, thereby ruling out the alternative explanation by shared inhibition. Moreover, we showed that the strong interactions are a signature of excitatory rather than inhibitory inputs whenever the spontaneous rate is low. We present a guide map of neural interactions that helps researchers specify the hidden neuronal motifs underlying interactions observed in empirical data.
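The circuit motif at the heart of this framework can be sketched numerically: two leaky integrate-and-fire neurons receiving a shared noisy input, balanced near threshold, develop correlated spike counts. The parameters below are illustrative choices, not values from the paper.

```python
import numpy as np

def simulate_lif(common, private, dt=0.1, tau=20.0, v_th=1.0, v_reset=0.0):
    """Euler integration of a leaky integrate-and-fire neuron.

    `common` is the per-step input shared with other neurons; `private` is
    independent noise. Inputs are in units of voltage per step."""
    v = 0.0
    spikes = np.zeros(common.size, dtype=bool)
    for t in range(common.size):
        v += dt * (-v / tau) + common[t] + private[t]
        if v >= v_th:
            spikes[t], v = True, v_reset
    return spikes

rng = np.random.default_rng(0)
n_steps = 200_000                                # 0.1 ms steps -> 20 s
drive = lambda: 0.03 * rng.standard_normal(n_steps)
common = drive() + 0.005                         # shared drive; mean balances v near threshold
spk = [simulate_lif(common, drive()) for _ in range(2)]

# 50 ms spike counts (500 steps per bin) and their pairwise correlation
counts = [s.reshape(-1, 500).sum(axis=1) for s in spk]
rho = np.corrcoef(counts[0], counts[1])[0, 1]
```

With the shared drive removed, `rho` drops to roughly zero; that contrast between common-input and independent-input statistics is the kind of signature the framework inverts to infer hidden motifs.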
Affiliation(s)
- Safura Rashid Shomali
- School of Cognitive Sciences, Institute for Research in Fundamental Sciences (IPM), Tehran, 19395-5746, Iran.
- Seyyed Nader Rasuli
- School of Physics, Institute for Research in Fundamental Sciences (IPM), Tehran, 19395-5531, Iran; Department of Physics, University of Guilan, Rasht, 41335-1914, Iran
- Majid Nili Ahmadabadi
- Control and Intelligent Processing Center of Excellence, School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, 14395-515, Iran
- Hideaki Shimazaki
- Graduate School of Informatics, Kyoto University, Kyoto, 606-8501, Japan; Center for Human Nature, Artificial Intelligence, and Neuroscience (CHAIN), Hokkaido University, Hokkaido, 060-0812, Japan
2. Chelaru MI, Eagleman S, Andrei AR, Milton R, Kharas N, Dragoi V. High-order interactions explain the collective behavior of cortical populations in executive but not sensory areas. Neuron 2021; 109:3954-3961.e5. PMID: 34665999; PMCID: PMC8678300; DOI: 10.1016/j.neuron.2021.09.042.
Abstract
One influential view in neuroscience is that pairwise cell interactions explain the firing patterns of large populations. Despite its prevalence, this view originates from studies in the retina and visual cortex of anesthetized animals. Whether pairwise interactions predict the firing patterns of neurons across multiple brain areas in behaving animals remains unknown. Here, we performed multi-area electrical recordings to find that 2nd-order interactions explain a high fraction of entropy of the population response in macaque cortical areas V1 and V4. Surprisingly, despite the brain-state modulation of neuronal responses, the model based on pairwise interactions captured ∼90% of the spiking activity structure during wakefulness and sleep. However, regardless of brain state, pairwise interactions failed to explain the experimentally observed entropy in neural populations from the prefrontal cortex. Thus, while simple pairwise interactions explain the collective behavior of visual cortical networks across brain states, explaining the population dynamics in downstream areas involves higher-order interactions.
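The pairwise (Ising) maximum-entropy analysis can be illustrated on a population small enough to enumerate exactly. The toy distribution below is invented for illustration; the fit matches first- and second-order moments by gradient ascent, and `frac` is the fraction of the entropy gap (multi-information) captured by pairwise interactions, the quantity reported as ∼90% for V1/V4.

```python
import itertools
import numpy as np

n = 3
states = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)
pairs = list(itertools.combinations(range(n), 2))
feats = np.hstack([states] + [states[:, [i]] * states[:, [j]] for i, j in pairs])

def fit_maxent(p_target, feats, iters=20_000, lr=0.2):
    """Maximum-entropy fit matching the moments picked out by `feats`,
    by gradient ascent on the (concave) log-likelihood."""
    target = p_target @ feats
    theta = np.zeros(feats.shape[1])
    for _ in range(iters):
        logp = feats @ theta
        p = np.exp(logp - logp.max())
        p /= p.sum()
        theta += lr * (target - p @ feats)   # moment-matching gradient
    return p

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# Toy 3-neuron distribution with excess synchrony (a higher-order effect)
p_true = np.full(2**n, 0.02)
p_true[0], p_true[-1] = 0.80, 0.08
p_true /= p_true.sum()

p_ind  = fit_maxent(p_true, states)          # match firing rates only
p_pair = fit_maxent(p_true, feats)           # match rates and pairwise moments
S1, S2, S = entropy(p_ind), entropy(p_pair), entropy(p_true)
frac = (S1 - S2) / (S1 - S)                  # fraction of multi-information captured
```

Here `frac` falls strictly below 1 because the toy distribution's triple-spike excess cannot be produced by any pairwise model.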
Affiliation(s)
- Mircea I Chelaru
- Department of Neurobiology and Anatomy, McGovern Medical School, University of Texas, Houston, Houston, TX 77030, USA
- Sarah Eagleman
- Department of Neurobiology and Anatomy, McGovern Medical School, University of Texas, Houston, Houston, TX 77030, USA; Department of Anesthesiology, Stanford School of Medicine, Palo Alto, CA 94304, USA
- Ariana R Andrei
- Department of Neurobiology and Anatomy, McGovern Medical School, University of Texas, Houston, Houston, TX 77030, USA
- Russell Milton
- Department of Neurobiology and Anatomy, McGovern Medical School, University of Texas, Houston, Houston, TX 77030, USA
- Natasha Kharas
- Department of Neurobiology and Anatomy, McGovern Medical School, University of Texas, Houston, Houston, TX 77030, USA
- Valentin Dragoi
- Department of Neurobiology and Anatomy, McGovern Medical School, University of Texas, Houston, Houston, TX 77030, USA; Department of Electrical and Computer Engineering, Rice University, Houston, TX 77026, USA.
3. Azeredo da Silveira R, Rieke F. The Geometry of Information Coding in Correlated Neural Populations. Annu Rev Neurosci 2021; 44:403-424. PMID: 33863252; DOI: 10.1146/annurev-neuro-120320-082744.
Abstract
Neurons in the brain represent information in their collective activity. The fidelity of this neural population code depends on whether and how variability in the response of one neuron is shared with other neurons. Two decades of studies have investigated the influence of these noise correlations on the properties of neural coding. We provide an overview of the theoretical developments on the topic. Using simple, qualitative, and general arguments, we discuss, categorize, and relate the various published results. We emphasize the relevance of the fine structure of noise correlations, and we present a new approach to the issue. Throughout this review, we develop a geometrical picture of how noise correlations impact the neural code.
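The geometric point can be made quantitative in a two-neuron toy example (numbers invented): what noise correlations do to linear Fisher information depends on how the noise aligns with the signal direction f′, not on its magnitude alone.

```python
import numpy as np

def linear_fisher(fprime, sigma):
    """Linear Fisher information f'^T Sigma^{-1} f' for tuning slopes fprime
    and noise covariance sigma."""
    return float(fprime @ np.linalg.solve(sigma, fprime))

fprime = np.array([1.0, 0.5])        # tuning-curve slopes (signal direction)
base = np.eye(2)                     # independent unit-variance noise
eps = 0.5                            # strength of the added noise mode

# "Differential correlations": extra noise aligned with f' (information-limiting)
sigma_diff = base + eps * np.outer(fprime, fprime)
# Same amount of extra noise, but along a direction orthogonal to f'
g = np.array([0.5, -1.0])            # g . fprime == 0
sigma_orth = base + eps * np.outer(g, g)

I_ind  = linear_fisher(fprime, base)
I_diff = linear_fisher(fprime, sigma_diff)
I_orth = linear_fisher(fprime, sigma_orth)
```

Noise along f′ reduces the information, while the same noise power placed orthogonally leaves it untouched: the fine structure, not the overall correlation strength, is what matters.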
Affiliation(s)
- Rava Azeredo da Silveira
- Department of Physics, Ecole Normale Supérieure, 75005 Paris, France
- Fred Rieke
- Department of Physiology and Biophysics, University of Washington, Seattle, Washington, United States of America
4. Bojanek K, Zhu Y, MacLean J. Cyclic transitions between higher order motifs underlie sustained asynchronous spiking in sparse recurrent networks. PLoS Comput Biol 2020; 16:e1007409. PMID: 32997658; PMCID: PMC7549833; DOI: 10.1371/journal.pcbi.1007409.
Abstract
A basic—yet nontrivial—function which neocortical circuitry must satisfy is the ability to maintain stable spiking activity over time. Stable neocortical activity is asynchronous, critical, and low rate, and these features of spiking dynamics contribute to efficient computation and optimal information propagation. However, it remains unclear how neocortex maintains this asynchronous spiking regime. Here we algorithmically construct spiking neural network models, each composed of 5000 neurons. Network construction synthesized topological statistics from neocortex with a set of objective functions identifying naturalistic low-rate, asynchronous, and critical activity. We find that simulations run on the same topology exhibit sustained asynchronous activity under certain sets of initial membrane voltages but truncated activity under others. Synchrony, rate, and criticality do not provide a full explanation of this dichotomy. Consequently, in order to achieve mechanistic understanding of sustained asynchronous activity, we summarized activity as functional graphs where edges between units are defined by pairwise spike dependencies. We then analyzed the intersection between functional edges and synaptic connectivity, i.e., recruitment networks. Higher-order patterns, such as triplet or triangle motifs, have been tied to cooperativity and integration. We find, over time in each sustained simulation, low-variance periodic transitions between isomorphic triangle motifs in the recruitment networks. We quantify the phenomenon as a Markov process and discover that if the network fails to engage this stereotyped regime of motif dominance “cycling”, spiking activity truncates early. Cycling of motif dominance generalized across manipulations of synaptic weights and topologies, demonstrating the robustness of this regime for maintenance of network activity.
Our results point to the crucial role of excitatory higher-order patterns in sustaining asynchronous activity in sparse recurrent networks. They also provide a possible explanation why such connectivity and activity patterns have been prominently reported in neocortex. Neocortical spiking activity tends to be low-rate and non-rhythmic, and to operate near the critical point of a phase transition. It remains unclear how this kind of spiking activity can be maintained within a neuronal network. Neurons are leaky and individual synaptic connections are sparse and weak, making the maintenance of an asynchronous regime a nontrivial problem. Higher order patterns involving more than two units abound in neocortex, and several lines of evidence suggest that they may be instrumental for brain function. For example, stable activity in vivo displays elevated clustering dominated by specific three-node (triplet) motifs. In this study we demonstrate a link between the maintenance of asynchronous activity and triplet motifs. We algorithmically build spiking neural network models to mimic the topology of neocortex and the spiking statistics that characterize wakefulness. We show that higher order coordination of synapses is always present during sustained asynchronous activity. Coordination takes the form of transitions in time between specific triangle motifs. These motifs summarize the way spikes traverse the underlying synaptic topology. The results of our model are consistent with numerous experimental observations, and their generalizability to other weakly and sparsely connected networks is predicted.
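Two ingredients of this analysis can be sketched on toy data: counting directed triangle motif classes in a recruitment graph, and estimating an empirical Markov transition matrix over which motif class dominates each time window. The adjacency matrices and label sequence below are hypothetical, not the paper's networks.

```python
import numpy as np

def triangle_motif_counts(A):
    """Counts of two directed triangle classes in adjacency matrix A
    (A[i, j] = 1 for an edge i -> j; assumes no self-loops)."""
    A = np.asarray(A, dtype=int)
    A2 = A @ A
    cyclic = int(np.trace(A2 @ A)) // 3    # i->j->k->i; each cycle appears 3x in the trace
    feedforward = int((A2 * A).sum())      # path i->j->k closed by a shortcut edge i->k
    return cyclic, feedforward

def transition_matrix(dominant, n_states):
    """Row-normalized empirical Markov transition matrix from a sequence of
    dominant-motif labels (one label per time window)."""
    T = np.zeros((n_states, n_states))
    for a, b in zip(dominant[:-1], dominant[1:]):
        T[a, b] += 1
    rows = T.sum(axis=1, keepdims=True)
    return np.divide(T, rows, out=np.zeros_like(T), where=rows > 0)

# 3-cycle:            0 -> 1 -> 2 -> 0
A_cycle = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]])
# feedforward triad:  0 -> 1 -> 2 with shortcut 0 -> 2
A_ff    = np.array([[0, 1, 1], [0, 0, 1], [0, 0, 0]])
```

In the paper's terms, a sustained simulation would show a low-variance, strongly periodic `transition_matrix`, whereas a truncating run would fail to settle into that cycling regime.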
Affiliation(s)
- Kyle Bojanek
- Committee on Computational Neuroscience, University of Chicago, Chicago, Illinois, United States of America
- Yuqing Zhu
- Committee on Computational Neuroscience, University of Chicago, Chicago, Illinois, United States of America
- Jason MacLean
- Committee on Computational Neuroscience, University of Chicago, Chicago, Illinois, United States of America
- Department of Neurobiology, University of Chicago, Chicago, Illinois, United States of America
- Grossman Institute for Neuroscience, Quantitative Biology and Human Behavior, Chicago, Illinois, United States of America
5. Montangie L, Miehl C, Gjorgjieva J. Autonomous emergence of connectivity assemblies via spike triplet interactions. PLoS Comput Biol 2020; 16:e1007835. PMID: 32384081; PMCID: PMC7239496; DOI: 10.1371/journal.pcbi.1007835.
Abstract
Non-random connectivity can emerge without structured external input driven by activity-dependent mechanisms of synaptic plasticity based on precise spiking patterns. Here we analyze the emergence of global structures in recurrent networks based on a triplet model of spike timing dependent plasticity (STDP), which depends on the interactions of three precisely-timed spikes, and can describe plasticity experiments with varying spike frequency better than the classical pair-based STDP rule. We derive synaptic changes arising from correlations up to third-order and describe them as the sum of structural motifs, which determine how any spike in the network influences a given synaptic connection through possible connectivity paths. This motif expansion framework reveals novel structural motifs under the triplet STDP rule, which support the formation of bidirectional connections and ultimately the spontaneous emergence of global network structure in the form of self-connected groups of neurons, or assemblies. We propose that under triplet STDP assembly structure can emerge without the need for externally patterned inputs or assuming a symmetric pair-based STDP rule common in previous studies. The emergence of non-random network structure under triplet STDP occurs through internally-generated higher-order correlations, which are ubiquitous in natural stimuli and neuronal spiking activity, and important for coding. We further demonstrate how neuromodulatory mechanisms that modulate the shape of the triplet STDP rule or the synaptic transmission function differentially promote structural motifs underlying the emergence of assemblies, and quantify the differences using graph theoretic measures. Emergent non-random connectivity structures in different brain regions are tightly related to specific patterns of neural activity and support diverse brain functions. 
For instance, self-connected groups of neurons, known as assemblies, have been proposed to represent functional units in brain circuits and can emerge even without patterned external instruction. Here we investigate the emergence of non-random connectivity in recurrent networks using a particular plasticity rule, triplet STDP, which relies on the interaction of spike triplets and can capture higher-order statistical dependencies in neural activity. We derive the evolution of the synaptic strengths in the network and explore the conditions for the self-organization of connectivity into assemblies. We demonstrate key differences of the triplet STDP rule compared to the classical pair-based rule in terms of how assemblies are formed, including the realistic asymmetric shape and influence of novel connectivity motifs on network plasticity driven by higher-order correlations. Assembly formation depends on the specific shape of the STDP window and synaptic transmission function, pointing towards an important role of neuromodulatory signals on formation of intrinsically generated assemblies.
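A minimal event-driven sketch of a Pfister-Gerstner-style triplet rule (time constants and amplitudes below are illustrative, not the paper's values) reproduces its hallmark: potentiation grows with pairing frequency, because the slow postsynaptic trace accumulates across spikes, which the classical pair rule cannot capture.

```python
import numpy as np

def triplet_stdp_dw(rate_hz, n_pairs=60, dt_pair=10.0,
                    tau_plus=16.8, tau_minus=33.7, tau_y=114.0,
                    a2_plus=5e-3, a2_minus=7e-3, a3_plus=6e-3):
    """Total weight change for pre->post pairings (post lags pre by dt_pair ms)
    repeated at `rate_hz`, under a minimal triplet STDP rule: potentiation at a
    post spike is boosted by the slow postsynaptic trace o2 left by earlier posts."""
    period = 1000.0 / rate_hz
    events = sorted([(i * period, 'pre') for i in range(n_pairs)] +
                    [(i * period + dt_pair, 'post') for i in range(n_pairs)])
    r1 = o1 = o2 = 0.0          # presynaptic, fast postsynaptic, slow postsynaptic traces
    w, t_last = 0.0, 0.0
    for t, kind in events:
        r1 *= np.exp(-(t - t_last) / tau_plus)
        o1 *= np.exp(-(t - t_last) / tau_minus)
        o2 *= np.exp(-(t - t_last) / tau_y)
        t_last = t
        if kind == 'pre':
            w -= o1 * a2_minus                    # pair-based depression
            r1 += 1.0
        else:
            w += r1 * (a2_plus + a3_plus * o2)    # pair + triplet potentiation
            o1 += 1.0                             # o2 is read before its own update
            o2 += 1.0
    return w

dw_low, dw_high = triplet_stdp_dw(5.0), triplet_stdp_dw(40.0)
```

The frequency dependence (`dw_high` well above `dw_low` for identical pre-before-post timing) is exactly the extra degree of freedom that lets the triplet rule pick up higher-order correlations during assembly formation.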
Affiliation(s)
- Lisandro Montangie
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany
- Christoph Miehl
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany
- Technical University of Munich, School of Life Sciences, Freising, Germany
- Julijana Gjorgjieva
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany
- Technical University of Munich, School of Life Sciences, Freising, Germany
6. Baravalle R, Montani F. Higher-Order Cumulants Drive Neuronal Activity Patterns, Inducing UP-DOWN States in Neural Populations. Entropy 2020; 22:477. PMID: 33286251; PMCID: PMC7516951; DOI: 10.3390/e22040477.
Abstract
A major challenge in neuroscience is to understand the role of the higher-order correlation structure of neuronal populations. The dichotomized Gaussian model (DG) generates spike trains by thresholding a multivariate Gaussian random variable. Because its inputs are Gaussian distributed, the DG model contains no interactions beyond second order at the input stage; thresholding, however, can induce higher-order correlations in the outputs. We propose a combination of analytical and numerical techniques to estimate cumulants of the firing probability distributions beyond second order. Our findings show that a large amount of pairwise interaction in the inputs can drive the system into two possible regimes, one with low activity (“DOWN state”) and another with high activity (“UP state”), and the appearance of these states is due to a combination of the third- and fourth-order cumulants. This could be part of a mechanism that helps the neural code convey specific information about the stimuli, motivating us to examine the behavior of the critical fluctuations through the Binder cumulant close to the critical point. We show, using the Binder cumulant, that higher-order correlations in the outputs generate a critical neural system that portrays a second-order phase transition.
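Sampling the DG model makes the effect concrete (parameters invented): only pairwise-correlated Gaussian inputs go in, yet the thresholded population spike count comes out skewed, i.e., with a clearly nonzero third cumulant.

```python
import numpy as np

rng = np.random.default_rng(1)
n, rho, thresh = 10, 0.3, 1.0              # neurons, input correlation, threshold
# Equicorrelated Gaussian inputs: unit variance, pairwise correlation rho
cov = (1 - rho) * np.eye(n) + rho * np.ones((n, n))
z = rng.multivariate_normal(np.zeros(n), cov, size=100_000)
spikes = (z > thresh).astype(float)        # dichotomized Gaussian spike patterns

k = spikes.sum(axis=1)                     # population spike count per pattern
rate = spikes.mean()                       # per-neuron spike probability, ~ Phi(-thresh)
k3 = np.mean((k - k.mean()) ** 3)          # third central moment of the count
```

Pushing `rho` higher makes the count distribution bimodal, the UP/DOWN-state picture the abstract describes.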
Affiliation(s)
- Roman Baravalle
- Instituto de Física de La Plata (IFLP), Universidad Nacional de La Plata, CONICET CCT-La Plata, Diagonal 113 entre 63 y 64, La Plata, Buenos Aires 1900, Argentina;
- Departamento de Física, Facultad de Ciencias Exactas, UNLP Calle 49 y 115. C.C. 67, La Plata, Buenos Aires 1900, Argentina
- Fernando Montani
- Instituto de Física de La Plata (IFLP), Universidad Nacional de La Plata, CONICET CCT-La Plata, Diagonal 113 entre 63 y 64, La Plata, Buenos Aires 1900, Argentina;
- Departamento de Física, Facultad de Ciencias Exactas, UNLP Calle 49 y 115. C.C. 67, La Plata, Buenos Aires 1900, Argentina
7. Brinkman BAW, Rieke F, Shea-Brown E, Buice MA. Predicting how and when hidden neurons skew measured synaptic interactions. PLoS Comput Biol 2018; 14:e1006490. PMID: 30346943; PMCID: PMC6219819; DOI: 10.1371/journal.pcbi.1006490.
Abstract
A major obstacle to understanding neural coding and computation is the fact that experimental recordings typically sample only a small fraction of the neurons in a circuit. Measured neural properties are skewed by interactions between recorded neurons and the “hidden” portion of the network. To properly interpret neural data and determine how biological structure gives rise to neural circuit function, we thus need a better understanding of the relationships between measured effective neural properties and the true underlying physiological properties. Here, we focus on how the effective spatiotemporal dynamics of the synaptic interactions between neurons are reshaped by coupling to unobserved neurons. We find that the effective interactions from a pre-synaptic neuron r′ to a post-synaptic neuron r can be decomposed into a sum of the true interaction from r′ to r plus corrections from every directed path from r′ to r through unobserved neurons. Importantly, the resulting formula reveals when the hidden units have—or do not have—major effects on reshaping the interactions among observed neurons. As a particular example of interest, we derive a formula for the impact of hidden units in random networks with “strong” coupling—connection weights that scale with 1/√N, where N is the network size, precisely the scaling observed in recent experiments. With this quantitative relationship between measured and true interactions, we can study how network properties shape effective interactions, which properties are relevant for neural computations, and how to manipulate effective interactions. No experiment in neuroscience can record from more than a tiny fraction of the total number of neurons present in a circuit. This severely complicates measurement of a network’s true properties, as unobserved neurons skew measurements away from what would be measured if all neurons were observed.
For example, the measured post-synaptic response of a neuron to a spike from a particular pre-synaptic neuron incorporates direct connections between the two neurons as well as the effect of any number of indirect connections, including through unobserved neurons. To understand how measured quantities are distorted by unobserved neurons, we calculate a general relationship between measured “effective” synaptic interactions and the ground-truth interactions in the network. This allows us to identify conditions under which hidden neurons substantially alter measured interactions. Moreover, it provides a foundation for future work on manipulating effective interactions between neurons to better understand and potentially alter circuit function—or dysfunction.
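For a linearized network, the flavor of this path-sum decomposition collapses at zero frequency to a matrix identity: summing every path through hidden units geometrically gives J_eff = Joo + Joh (I - Jhh)^(-1) Jho, valid when the hidden-hidden spectral radius is below one. A numerical sketch with invented couplings (this is an illustration of the path-sum idea, not the paper's full frequency-dependent formula):

```python
import numpy as np

rng = np.random.default_rng(2)
N, n_obs = 50, 20                                # total neurons, observed subset
J = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))    # "strong" 1/sqrt(N)-scaled couplings
np.fill_diagonal(J, 0.0)

obs, hid = np.arange(n_obs), np.arange(n_obs, N)
Joo, Joh = J[np.ix_(obs, obs)], J[np.ix_(obs, hid)]
Jho, Jhh = J[np.ix_(hid, obs)], J[np.ix_(hid, hid)]

# Direct couplings plus all paths through hidden units:
# Joo + Joh (I + Jhh + Jhh^2 + ...) Jho = Joo + Joh (I - Jhh)^{-1} Jho
I_h = np.eye(len(hid))
J_eff = Joo + Joh @ np.linalg.solve(I_h - Jhh, Jho)
```

The difference `J_eff - Joo` is exactly the distortion an experimenter would attribute to "synapses" between observed neurons that is actually routed through the unobserved part of the circuit.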
Affiliation(s)
- Braden A W Brinkman
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America; Department of Physiology and Biophysics, University of Washington, Seattle, Washington, United States of America
- Fred Rieke
- Department of Physiology and Biophysics, University of Washington, Seattle, Washington, United States of America; Graduate Program in Neuroscience, University of Washington, Seattle, Washington, United States of America
- Eric Shea-Brown
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America; Department of Physiology and Biophysics, University of Washington, Seattle, Washington, United States of America; Graduate Program in Neuroscience, University of Washington, Seattle, Washington, United States of America; Allen Institute for Brain Science, Seattle, Washington, United States of America
- Michael A Buice
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America; Allen Institute for Brain Science, Seattle, Washington, United States of America
8. A Moment-Based Maximum Entropy Model for Fitting Higher-Order Interactions in Neural Data. Entropy 2018; 20:489. PMID: 33265579; PMCID: PMC7513015; DOI: 10.3390/e20070489.
Abstract
Correlations in neural activity have been demonstrated to have profound consequences for sensory encoding. To understand how neural populations represent stimulus information, it is therefore necessary to model how pairwise and higher-order spiking correlations between neurons contribute to the collective structure of population-wide spiking patterns. Maximum entropy models are an increasingly popular method for capturing collective neural activity by including successively higher-order interaction terms. However, incorporating higher-order interactions in these models is difficult in practice due to two factors. First, the number of parameters exponentially increases as higher orders are added. Second, because triplet (and higher) spiking events occur infrequently, estimates of higher-order statistics may be contaminated by sampling noise. To address this, we extend previous work on the Reliable Interaction class of models to develop a normalized variant that adaptively identifies the specific pairwise and higher-order moments that can be estimated from a given dataset for a specified confidence level. The resulting “Reliable Moment” model is able to capture cortical-like distributions of population spiking patterns. Finally, we show that, compared with the Reliable Interaction model, the Reliable Moment model infers fewer strong spurious higher-order interactions and is better able to predict the frequencies of previously unobserved spiking patterns.
9. Zylberberg J, Pouget A, Latham PE, Shea-Brown E. Robust information propagation through noisy neural circuits. PLoS Comput Biol 2017; 13:e1005497. PMID: 28419098; PMCID: PMC5413111; DOI: 10.1371/journal.pcbi.1005497.
Abstract
Sensory neurons give highly variable responses to stimulation, which can limit the amount of stimulus information available to downstream circuits. Much work has investigated the factors that affect the amount of information encoded in these population responses, leading to insights about the role of covariability among neurons, tuning curve shape, etc. However, the informativeness of neural responses is not the only relevant feature of population codes; of potentially equal importance is how robustly that information propagates to downstream structures. For instance, to quantify the retina’s performance, one must consider not only the informativeness of the optic nerve responses, but also the amount of information that survives the spike-generating nonlinearity and noise corruption in the next stage of processing, the lateral geniculate nucleus. Our study identifies the set of covariance structures for the upstream cells that optimize the ability of information to propagate through noisy, nonlinear circuits. Within this optimal family are covariances with “differential correlations”, which are known to reduce the information encoded in neural population activities. Thus, covariance structures that maximize information in neural population codes, and those that maximize the ability of this information to propagate, can be very different. Moreover, redundancy is neither necessary nor sufficient to make population codes robust against corruption by noise: redundant codes can be very fragile, and synergistic codes can—in some cases—optimize robustness against noise. Information about the outside world, which originates in sensory neurons, propagates through multiple stages of processing before reaching the neural structures that control behavior. 
While much work in neuroscience has investigated the factors that affect the amount of information contained in peripheral sensory areas, very little work has asked how much of that information makes it through subsequent processing stages. That’s the focus of this paper, and it’s an important issue because information that fails to propagate cannot be used to affect decision-making. We find a tradeoff between information content and information transmission: neural codes which contain a large amount of information can transmit that information poorly to subsequent processing stages. Thus, the problem of robust information propagation—which has largely been overlooked in previous research—may be critical for determining how our sensory organs communicate with our brains. We identify the conditions under which information propagates well—or poorly—through multiple stages of neural processing.
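The propagation question can be posed in a toy feedforward model (all parameters below are invented): decode a binary stimulus from an upstream population, and again from a downstream stage that sees only a noisy, thresholded transform of it. The downstream accuracy is the information that "survives" the spike-generating nonlinearity and noise.

```python
import numpy as np

rng = np.random.default_rng(3)
n_up, n_down, trials = 20, 20, 4000
f = rng.normal(0.0, 1.0, n_up)                         # tuning difference between stimuli
W = rng.normal(0.0, 1.0 / np.sqrt(n_up), (n_down, n_up))  # feedforward weights

def stage_responses(sign):
    """Upstream responses x and downstream responses y for stimulus `sign` (+1 or -1)."""
    x = sign * f + rng.normal(0.0, 2.0, (trials, n_up))            # noisy upstream code
    y = (x @ W.T + rng.normal(0.0, 1.0, (trials, n_down)) > 0.5)   # noisy threshold stage
    return x, y.astype(float)

xp, yp = stage_responses(+1)
xm, ym = stage_responses(-1)

def accuracy(rp, rm):
    """Nearest-centroid linear decoder, evaluated in-sample (illustration only)."""
    w = rp.mean(axis=0) - rm.mean(axis=0)
    b = (rp.mean(axis=0) + rm.mean(axis=0)) @ w / 2.0
    return np.concatenate([rp @ w > b, rm @ w <= b]).mean()

acc_up, acc_down = accuracy(xp, xm), accuracy(yp, ym)
```

The gap between `acc_up` and `acc_down` is the cost of transmission; the paper's point is that which upstream covariance structures minimize this gap differs from which ones maximize `acc_up` itself.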
Affiliation(s)
- Joel Zylberberg
- Department of Physiology and Biophysics, Center for Neuroscience, and Computational Bioscience Program, University of Colorado School of Medicine, Aurora, Colorado, United States of America
- Department of Applied Mathematics, University of Colorado, Boulder, Colorado, United States of America
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America
- Learning in Machines and Brains Program, Canadian Institute For Advanced Research, Toronto, Ontario, Canada
- Alexandre Pouget
- Department of Basic Neuroscience, University of Geneva, Geneva, Switzerland
- Gatsby Computational Neuroscience Unit, University College London, London, United Kingdom
- Peter E. Latham
- Gatsby Computational Neuroscience Unit, University College London, London, United Kingdom
- Eric Shea-Brown
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America
- Department of Physiology and Biophysics, Program in Neuroscience, University of Washington Institute for Neuroengineering, and Center for Sensorimotor Neural Engineering, University of Washington, Seattle, Washington, United States of America
- Allen Institute for Brain Science, Seattle, Washington, United States of America
10. Jovanović S, Rotter S. Interplay between Graph Topology and Correlations of Third Order in Spiking Neuronal Networks. PLoS Comput Biol 2016; 12:e1004963. PMID: 27271768; PMCID: PMC4894630; DOI: 10.1371/journal.pcbi.1004963.
Abstract
The study of processes evolving on networks has recently become a very popular research field, not only because of the rich mathematical theory that underpins it, but also because of its many possible applications, a number of them in the field of biology. Indeed, molecular signaling pathways, gene regulation, predator-prey interactions and the communication between neurons in the brain can be seen as examples of networks with complex dynamics. The properties of such dynamics depend largely on the topology of the underlying network graph. In this work, we want to answer the following question: Knowing network connectivity, what can be said about the level of third-order correlations that will characterize the network dynamics? We consider a linear point process as a model for pulse-coded, or spiking activity in a neuronal network. Using recent results from the theory of such processes, we study third-order correlations between spike trains in such a system and explain which features of the network graph (i.e. which topological motifs) are responsible for their emergence. Comparing two different models of network topology—random networks of Erdős-Rényi type and networks with highly interconnected hubs—we find that, in random networks, the average measure of third-order correlations does not depend on the local connectivity properties, but rather on global parameters, such as the connection probability. This, however, ceases to be the case in networks with a geometric out-degree distribution, where topological specificities have a strong impact on average correlations. Many biological phenomena can be viewed as dynamical processes on a graph. Understanding coordinated activity of nodes in such a network is of some importance, as it helps to characterize the behavior of the complex system. Of course, the topology of a network plays a pivotal role in determining the level of coordination among its different vertices.
In particular, correlations between triplets of events (here: action potentials generated by neurons) have recently garnered some interest in the theoretical neuroscience community. In this paper, we present a decomposition of an average measure of third-order coordinated activity of neurons in a spiking neuronal network in terms of the relevant topological motifs present in the underlying graph. We study different network topologies and show, in particular, that the presence of certain tree motifs in the synaptic connectivity graph greatly affects the strength of third-order correlations between spike trains of different neurons.
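The degree-distribution dependence can be checked with a back-of-the-envelope motif count (parameters illustrative): "common driver" triplets, one source projecting to three targets, feed third-order correlations, and their number is dominated by the out-degree tail, so a geometric (hub-heavy) distribution produces far more of them than an Erdős-Rényi-like one with the same mean degree.

```python
from math import comb

import numpy as np

rng = np.random.default_rng(4)
n, mean_deg = 200, 10

deg_er  = rng.poisson(mean_deg, n)          # Erdős-Rényi-like out-degrees
deg_hub = rng.geometric(1 / mean_deg, n)    # geometric out-degrees: same mean, heavy tail

def diverging_triplets(out_degrees):
    """Count 'common driver' motifs: one source node projecting to three targets.
    comb(d, 3) is the number of target triples reachable from a node of out-degree d."""
    return sum(comb(int(d), 3) for d in out_degrees)

er_count, hub_count = diverging_triplets(deg_er), diverging_triplets(deg_hub)
```

Because comb(d, 3) grows cubically in d, a few hubs contribute the bulk of `hub_count`, mirroring the paper's finding that local topology matters in hub networks but washes out in Erdős-Rényi graphs.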
Affiliation(s)
- Stojan Jovanović
- Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Freiburg, Germany
- CB, CSC, KTH Royal Institute of Technology, Stockholm, Sweden
- Stefan Rotter
- Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Freiburg, Germany
|
11
|
Leen DA, Shea-Brown E. A Simple Mechanism for Beyond-Pairwise Correlations in Integrate-and-Fire Neurons. Journal of Mathematical Neuroscience 2015; 5:30. [PMID: 26265217 PMCID: PMC4554967 DOI: 10.1186/s13408-015-0030-9] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Received: 10/31/2014] [Accepted: 07/23/2015] [Indexed: 06/04/2023]
Abstract
The collective dynamics of neural populations are often characterized in terms of correlations in the spike activity of different neurons. We have developed an understanding of the circuit mechanisms that lead to correlations among cell pairs, but little is known about what determines the population firing statistics among larger groups of cells. Here, we examine this question for a simple, but ubiquitous, circuit feature: common fluctuating input arriving at spiking neurons of integrate-and-fire type. We show that this leads to strong beyond-pairwise correlations, that is, correlations that cannot be captured by maximum entropy models that extrapolate from pairwise statistics, as found in earlier work with discrete threshold-crossing (dichotomous Gaussian) models. Moreover, we find that the same is true for another widely used, doubly stochastic model of neural spiking, the linear-nonlinear cascade. We demonstrate the strong connection between the collective dynamics produced by integrate-and-fire and dichotomous Gaussian models, and show that the latter is a surprisingly accurate model of the former. Our conclusion is that beyond-pairwise correlations can be both broadly expected and possible to describe with simplified (and tractable) statistical models.
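The dichotomous Gaussian construction discussed in this abstract can be sketched in a few lines: each neuron spikes when a correlated latent Gaussian crosses a threshold, and the shared Gaussian component makes joint firing far more likely than an independent model predicts. The correlation strength `c` and threshold `theta` below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal sketch of a dichotomous Gaussian (DG) model: spikes are
# threshold crossings of correlated Gaussian variables. Parameters are
# illustrative assumptions.
rng = np.random.default_rng(1)
T, N, c, theta = 200_000, 3, 0.3, 1.0

s = rng.standard_normal(T)                      # shared Gaussian input
noise = rng.standard_normal((N, T))             # private inputs
z = np.sqrt(c) * s + np.sqrt(1 - c) * noise     # latent Gaussians, corr. c
spikes = z > theta                              # binary spike indicators

p_single = spikes.mean(axis=1)                  # per-neuron firing rates
p_triple = spikes.all(axis=0).mean()            # P(all three fire together)
indep_pred = p_single.prod()                    # prediction if independent
# Common input makes joint firing exceed the independent prediction.
```

Quantifying how much of that excess survives after matching all pairwise statistics is exactly the beyond-pairwise question the paper addresses; this sketch only reproduces the common-input construction itself.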
Affiliation(s)
- David A. Leen
- Department of Applied Mathematics, University of Washington, Seattle, WA, USA
- Eric Shea-Brown
- Department of Applied Mathematics, University of Washington, Seattle, WA, USA
- Department of Physiology and Biophysics, University of Washington, Seattle, WA, USA
- Program in Neuroscience, University of Washington, Seattle, WA, USA
- Allen Institute for Brain Science, Seattle, WA, USA
|
12
|
Zylberberg J, Shea-Brown E. Input nonlinearities can shape beyond-pairwise correlations and improve information transmission by neural populations. Physical Review E: Statistical, Nonlinear, and Soft Matter Physics 2015; 92:062707. [PMID: 26764727 DOI: 10.1103/physreve.92.062707] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Received: 12/14/2012] [Indexed: 06/05/2023]
Abstract
While recent recordings from neural populations show beyond-pairwise, or higher-order, correlations (HOC), we have little understanding of how HOC arise from network interactions and of how they impact encoded information. Here, we show that input nonlinearities imply HOC in spin-glass-type statistical models. We then discuss one such model with parametrized pairwise- and higher-order interactions, revealing conditions under which beyond-pairwise interactions increase the mutual information between a given stimulus type and the population responses. For jointly Gaussian stimuli, coding performance is improved by shaping output HOC only when neural firing rates are constrained to be low. For stimuli with skewed probability distributions (like natural image luminances), performance improves for all firing rates. Our work suggests surprising connections between nonlinear integration of neural inputs, stimulus statistics, and normative theories of population coding. Moreover, it suggests that the inclusion of beyond-pairwise interactions could improve the performance of Boltzmann machines for machine learning and signal processing applications.
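The parametrized higher-order interactions mentioned here can be made concrete with a tiny enumerable model in the spirit of the spin-glass family: log p(x) = h·Σx_i + J·(pairwise products) + K·x₁x₂x₃ + const over binary states. The parameter values below are illustrative assumptions, not taken from the paper; they show how a negative triplet term suppresses simultaneous activation.

```python
import itertools
import numpy as np

# Three-unit binary model with a parametrized triplet interaction K.
# h, J, K values are illustrative assumptions.
h, J = -2.0, 0.5

def prob_table(K):
    states = np.array(list(itertools.product([0, 1], repeat=3)), dtype=float)
    pairs = (states[:, 0] * states[:, 1] + states[:, 0] * states[:, 2]
             + states[:, 1] * states[:, 2])
    logw = h * states.sum(axis=1) + J * pairs + K * states.prod(axis=1)
    w = np.exp(logw)                       # unnormalized weights
    return states, w / w.sum()             # normalized distribution

states, p_pair = prob_table(0.0)           # pairwise-only model (K = 0)
_, p_trip = prob_table(-1.5)               # with a negative triplet term
all_on = states.sum(axis=1) == 3           # the (1, 1, 1) state
# Negative K lowers the probability that all three units are active at once.
```

Because the state space is exhaustively enumerated, the normalization is exact, which is what makes such small models convenient for reasoning about how K reshapes the population response distribution.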
Affiliation(s)
- Joel Zylberberg
- Department of Applied Mathematics, University of Washington, Seattle, Washington 98195, USA
- Eric Shea-Brown
- Department of Applied Mathematics, Program in Neuroscience, Department of Physiology and Biophysics, University of Washington, Seattle, Washington 98195, USA
|
13
|
Zylberberg J, Hyde RA, Strowbridge BW. Dynamics of robust pattern separability in the hippocampal dentate gyrus. Hippocampus 2015; 26:623-32. [PMID: 26482936 DOI: 10.1002/hipo.22546] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Received: 07/31/2015] [Revised: 10/05/2015] [Accepted: 10/09/2015] [Indexed: 11/05/2022]
Abstract
The dentate gyrus (DG) is thought to perform pattern separation on inputs received from the entorhinal cortex, such that the DG forms distinct representations of different input patterns. Neuronal responses, however, are known to be variable, and that variability has the potential to confuse the representations of different inputs, thereby hindering the pattern separation function. This variability can be especially problematic for tissues such as the DG, in which the responses can persist for tens of seconds following stimulation: the long response duration allows variability from many different sources to accumulate. To understand how the DG can robustly encode different input patterns, we investigated a recently developed in vitro hippocampal DG preparation that generates persistent responses to transient electrical stimulation. For 10-20 s after stimulation, the responses are indicative of the pattern of stimulation that was applied, even though they exhibit significant trial-to-trial variability. Analyzing the dynamical trajectories of the evoked responses, we found that, following stimulation, the neural responses follow distinct paths through the space of possible neural activations, with a different path associated with each stimulation pattern. The responses' trial-to-trial variability shifts them along these paths rather than between them, maintaining the separability of the input patterns. Manipulations that redistributed the variability more isotropically over the space of possible neural activations impeded the pattern separation function. Consequently, we conclude that the confinement of neuronal variability to these one-dimensional paths mitigates the impact of variability on pattern encoding and, thus, may be an important aspect of the DG's ability to robustly encode input patterns.
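The geometric argument in this abstract, that noise confined along a pattern's response path degrades separability less than noise spread isotropically, can be illustrated with a toy model. This is not the paper's analysis; the dimensionality, path directions, and noise scales below are all illustrative assumptions.

```python
import numpy as np

# Toy model: responses to two input patterns lie on distinct 1-D paths
# in activation space. Noise along each path preserves separability
# better than isotropic noise of the same per-dimension scale.
rng = np.random.default_rng(3)
d, n = 20, 2000
dir_a, dir_b = np.eye(d)[0], np.eye(d)[1]       # orthogonal path directions
base_a, base_b = 2.0 * dir_a, 2.0 * dir_b       # mean responses to A and B

def trials(base, path_dir, isotropic):
    if isotropic:
        return base + rng.standard_normal((n, d))          # noise everywhere
    return base + rng.standard_normal((n, 1)) * path_dir   # along-path only

def separability(a_trials, b_trials):
    # fraction of trials that stay closer to their own pattern's mean
    da = (np.linalg.norm(a_trials - base_a, axis=1)
          < np.linalg.norm(a_trials - base_b, axis=1))
    db = (np.linalg.norm(b_trials - base_b, axis=1)
          < np.linalg.norm(b_trials - base_a, axis=1))
    return (da.mean() + db.mean()) / 2

sep_path = separability(trials(base_a, dir_a, False), trials(base_b, dir_b, False))
sep_iso = separability(trials(base_a, dir_a, True), trials(base_b, dir_b, True))
```

In this toy, along-path noise only rarely pushes a trial past the other pattern's path, whereas isotropic noise scatters trials in all directions and erodes the margin between representations.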
Affiliation(s)
- Joel Zylberberg
- Department of Applied Mathematics, University of Washington, Seattle, Washington; Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, Colorado
- Robert A Hyde
- Department of Neurosciences, Case Western Reserve University, Cleveland, Ohio
- Ben W Strowbridge
- Department of Neurosciences, Case Western Reserve University, Cleveland, Ohio
|