1. Paliwal S, Ocker GK, Brinkman BAW. Metastability in networks of nonlinear stochastic integrate-and-fire neurons. arXiv preprint 2024; arXiv:2406.07445v1. PMID: 38947936; PMCID: PMC11213153.
Abstract
Neurons in the brain continuously process the barrage of sensory inputs they receive from the environment. A wide array of experimental work has shown that the collective activity of neural populations encodes and processes this constant bombardment of information. How these collective patterns of activity depend on single-neuron properties is often unclear. Single-neuron recordings have shown that individual neural responses to inputs are nonlinear, which prevents a straightforward extrapolation from single-neuron features to emergent collective states. In this work, we use a field-theoretic formulation of a stochastic leaky integrate-and-fire model to study the impact of nonlinear intensity functions on macroscopic network activity. We show that the interplay between nonlinear spike emission and membrane potential resets can (i) give rise to metastable transitions between active firing rate states, and (ii) enhance or suppress mean firing rates and membrane potentials in opposite directions.
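The escape-noise mechanism this abstract describes (a nonlinear spike-emission intensity combined with a membrane-potential reset) can be sketched in a few lines. This is a generic illustration, not the paper's model: the intensity function, time constants, and input value are all illustrative assumptions.

```python
import math
import random

def simulate_lif(T=2000.0, dt=0.1, tau=10.0, mu=1.2,
                 v_reset=0.0, v_th=1.0, rate0=0.5, delta=0.2, seed=1):
    """Euler simulation of a stochastic ("escape-noise") LIF neuron.

    Spikes are emitted with nonlinear intensity
        phi(V) = rate0 * exp((V - v_th) / delta),
    and the membrane potential resets to v_reset after each spike --
    the two ingredients whose interplay the abstract discusses.
    All parameter values here are illustrative, not taken from the paper.
    """
    rng = random.Random(seed)
    v, spikes = v_reset, 0
    for _ in range(int(T / dt)):
        v += dt * (-v + mu) / tau                        # leaky integration
        intensity = rate0 * math.exp((v - v_th) / delta)  # nonlinear hazard
        if rng.random() < 1.0 - math.exp(-intensity * dt):
            spikes += 1
            v = v_reset                                   # reset after spike
    return spikes / T                                     # mean rate (spikes/ms)

rate = simulate_lif()
```

The exponential intensity makes spiking increasingly likely as the potential approaches threshold, while each emitted spike pulls the potential back down; the simulation returns a modest positive mean firing rate.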
Affiliation(s)
- Siddharth Paliwal
- Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, NY 11794, USA
- Gabriel Koch Ocker
- Department of Mathematics and Statistics, Boston University, Boston, MA 02215, USA
- Braden A W Brinkman
- Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, NY 11794, USA
2. Olsen VK, Whitlock JR, Roudi Y. The quality and complexity of pairwise maximum entropy models for large cortical populations. PLoS Comput Biol 2024; 20:e1012074. PMID: 38696532; DOI: 10.1371/journal.pcbi.1012074.
Abstract
We investigate the ability of the pairwise maximum entropy (PME) model to describe the spiking activity of large populations of neurons recorded from the visual, auditory, motor, and somatosensory cortices. To quantify this performance, we use (1) Kullback-Leibler (KL) divergences, (2) the extent to which the pairwise model predicts third-order correlations, and (3) its ability to predict the probability that multiple neurons are simultaneously active. We compare these with the performance of a model with independent neurons and study the relationship between the different performance measures, while varying the population size, the mean firing rate of the chosen population, and the bin size used for binarizing the data. We confirm the previously reported excellent performance of the PME model for small population sizes, N < 20, but we also find that larger mean firing rates and bin sizes generally decrease performance, and that performance for larger populations is generally not as good. For large populations, pairwise models may perform well at predicting third-order correlations and the probability of multiple neurons being active, but are still significantly worse than for small populations in terms of their improvement over the independent model in KL divergence. We show that these results are independent of the cortical area and of whether approximate methods or Boltzmann learning are used for inferring the pairwise couplings. We compare the scaling of the inferred couplings with N and find it to be well explained by the Sherrington-Kirkpatrick (SK) model, whose strong-coupling regime shows a complex phase with many metastable states. We find that, up to the maximum population size studied here, the fitted PME model remains outside its complex phase. However, the standard deviation of the couplings relative to their mean increases, and the model moves closer to the boundary of the complex phase as the population size grows.
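For intuition, the PME fit and the KL-based comparison against an independent-neuron model can be reproduced exactly for a toy population of three binary neurons, where all 2^3 states are enumerable. Everything here (fields, couplings, learning rate) is an illustrative assumption, not taken from the paper:

```python
import itertools
import math

def ising_dist(h, J, n):
    """Exact Boltzmann distribution P(s) ~ exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j)."""
    states = list(itertools.product([0, 1], repeat=n))
    weights = []
    for s in states:
        e = sum(h[i] * s[i] for i in range(n))
        e += sum(J[(i, j)] * s[i] * s[j]
                 for i in range(n) for j in range(i + 1, n))
        weights.append(math.exp(e))
    z = sum(weights)
    return states, [w / z for w in weights]

def moments(states, p, n):
    m1 = [sum(p[k] * s[i] for k, s in enumerate(states)) for i in range(n)]
    m2 = {(i, j): sum(p[k] * s[i] * s[j] for k, s in enumerate(states))
          for i in range(n) for j in range(i + 1, n)}
    return m1, m2

def kl(p, q):
    """Kullback-Leibler divergence D(p || q)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

n = 3
h_true = [0.2, -0.4, 0.1]                           # hypothetical fields
J_true = {(0, 1): 0.8, (0, 2): -0.5, (1, 2): 0.3}   # hypothetical couplings
states, p_data = ising_dist(h_true, J_true, n)
m1_data, m2_data = moments(states, p_data, n)

# independent model: product of single-neuron marginals
p_ind = [math.prod(m1_data[i] if s[i] else 1.0 - m1_data[i] for i in range(n))
         for s in states]

# fit the PME model by gradient ascent on the log-likelihood
# (the gradient is the mismatch between data moments and model moments)
h = [0.0] * n
J = {key: 0.0 for key in J_true}
for _ in range(5000):
    _, p = ising_dist(h, J, n)
    m1, m2 = moments(states, p, n)
    for i in range(n):
        h[i] += 0.4 * (m1_data[i] - m1[i])
    for key in J:
        J[key] += 0.4 * (m2_data[key] - m2[key])
_, p_pair = ising_dist(h, J, n)
# the pairwise fit essentially recovers p_data; the independent model
# cannot capture the couplings, so its KL divergence stays larger
```

Because the surrogate data are themselves drawn from a pairwise model, the fitted PME distribution matches them almost exactly, while the independent model retains a finite KL divergence, mirroring the paper's performance comparison at small N.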
Affiliation(s)
- Valdemar Kargård Olsen
- Kavli Institute for Systems Neuroscience, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
- Jonathan R Whitlock
- Kavli Institute for Systems Neuroscience, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
- Yasser Roudi
- Kavli Institute for Systems Neuroscience, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
- Department of Mathematics, King's College London, London, United Kingdom
3. Liang T, Brinkman BAW. Statistically inferred neuronal connections in subsampled neural networks strongly correlate with spike train covariances. Phys Rev E 2024; 109:044404. PMID: 38755896; DOI: 10.1103/PhysRevE.109.044404.
Abstract
Statistically inferred neuronal connections from observed spike train data are often skewed from ground truth by factors such as model mismatch, unobserved neurons, and limited data. Spike train covariances, sometimes referred to as "functional connections," are often used as a proxy for the connections between pairs of neurons, but reflect statistical relationships between neurons rather than anatomical connections. Moreover, covariances are not causal: spiking activity is correlated in both the past and the future, whereas neurons respond only to synaptic inputs in the past. Connections inferred by maximum likelihood inference, by contrast, can be constrained to be causal. In this work, however, we show that the inferred connections in spontaneously active networks modeled by stochastic leaky integrate-and-fire networks strongly correlate with the covariances between neurons, and may reflect noncausal relationships, when many neurons are unobserved or when neurons are weakly coupled. This phenomenon occurs across different network structures, including random networks and balanced excitatory-inhibitory networks. We use a combination of simulations and a mean-field analysis with fluctuation corrections to elucidate the relationships between spike train covariances, inferred synaptic filters, and ground-truth connections in partially observed networks.
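A minimal numerical illustration of the central observation, that spike-train covariances track ground-truth weights: simulate a small binary autoregressive spiking network with known (hypothetical) weights, then correlate lag-one cross-covariances with those weights. Network size, weight values, and bias are illustrative assumptions, not the paper's model.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

rng = random.Random(0)
n, T, bias = 5, 30000, -1.5
# hypothetical ground-truth weights (logit scale), zero self-coupling
W = [[0.0 if i == j else rng.choice([-0.9, 0.0, 0.9]) for j in range(n)]
     for i in range(n)]

# binary autoregressive spiking network: spike probability depends on
# the previous time step's spikes through W
s_prev = [0] * n
spikes = []
for _ in range(T):
    s = [1 if rng.random() < sigmoid(bias + sum(W[i][j] * s_prev[j]
                                                for j in range(n))) else 0
         for i in range(n)]
    spikes.append(s)
    s_prev = s

# lag-1 cross-covariances cov(s_i(t+1), s_j(t)) as "functional connections"
means = [sum(row[i] for row in spikes) / T for i in range(n)]
pairs_w, pairs_c = [], []
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        c = sum(spikes[t + 1][i] * spikes[t][j]
                for t in range(T - 1)) / (T - 1) - means[i] * means[j]
        pairs_w.append(W[i][j])
        pairs_c.append(c)

def pearson(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = math.sqrt(sum((xi - mx) ** 2 for xi in x)
                    * sum((yi - my) ** 2 for yi in y))
    return num / den

r = pearson(pairs_w, pairs_c)   # covariances correlate with true weights
```

In this fully observed toy network, the lag-one covariances correlate positively with the ground-truth weights; the paper's point is that the same statistical proxy (and inferred filters resembling it) can dominate inference once many neurons are hidden.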
Affiliation(s)
- Tong Liang
- Department of Physics and Astronomy, Stony Brook University, Stony Brook, New York 11794, USA
- Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York 11794, USA
- Braden A W Brinkman
- Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York 11794, USA
4. Crosser JT, Brinkman BAW. Applications of information geometry to spiking neural network activity. Phys Rev E 2024; 109:024302. PMID: 38491696; DOI: 10.1103/PhysRevE.109.024302.
Abstract
The space of possible behaviors that complex biological systems may exhibit is unimaginably vast, and these systems often appear to be stochastic, whether due to variable noisy environmental inputs or intrinsically generated chaos. The brain is a prominent example of a biological system with complex behaviors. The number of possible patterns of spikes emitted by a local brain circuit is combinatorially large, although the brain may not make use of all of them. Understanding which of these possible patterns are actually used by the brain, and how those sets of patterns change as properties of neural circuitry change is a major goal in neuroscience. Recently, tools from information geometry have been used to study embeddings of probabilistic models onto a hierarchy of model manifolds that encode how model outputs change as a function of their parameters, giving a quantitative notion of "distances" between outputs. We apply this method to a network model of excitatory and inhibitory neural populations to understand how the competition between membrane and synaptic response timescales shapes the network's information geometry. The hyperbolic embedding allows us to identify the statistical parameters to which the model behavior is most sensitive, and demonstrate how the ranking of these coordinates changes with the balance of excitation and inhibition in the network.
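The sensitivity analysis described here rests on the Fisher information matrix (FIM), whose large and small eigenvalues identify "stiff" and "sloppy" parameter combinations. A hedged sketch for a toy two-parameter tuning-curve model (not the excitatory-inhibitory network of the paper; parameter values are illustrative assumptions):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# toy tuning-curve model: p(spike | x) = sigmoid(a + b * x)
a, b = 0.3, 1.2                 # illustrative parameter values
stimuli = [-1.0, 0.0, 1.0]

# Fisher information matrix of the Bernoulli observation model,
# summed over stimuli: I(a, b) = sum_x p(1-p) * [[1, x], [x, x^2]]
fim = [[0.0, 0.0], [0.0, 0.0]]
for x in stimuli:
    p = sigmoid(a + b * x)
    w = p * (1.0 - p)
    fim[0][0] += w
    fim[0][1] += w * x
    fim[1][0] += w * x
    fim[1][1] += w * x * x

# eigenvalues of the 2x2 FIM: the large ("stiff") eigenvalue marks the
# parameter combination the model output is most sensitive to, and the
# small ("sloppy") one the combination it is least sensitive to
tr = fim[0][0] + fim[1][1]
det = fim[0][0] * fim[1][1] - fim[0][1] * fim[1][0]
disc = math.sqrt(tr * tr / 4.0 - det)
lam_stiff, lam_sloppy = tr / 2.0 + disc, tr / 2.0 - disc
```

Ranking parameters by such eigenvalues is the basic operation behind the manifold-embedding analysis the abstract describes; the paper applies it to a far richer spiking-network model.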
Affiliation(s)
- Jacob T Crosser
- Department of Applied Mathematics and Statistics, Stony Brook University, Stony Brook, New York 11794, USA
- Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York 11794, USA
- Braden A W Brinkman
- Department of Applied Mathematics and Statistics, Stony Brook University, Stony Brook, New York 11794, USA
- Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York 11794, USA
5.
Affiliation(s)
- Max Dabagia
- School of Computer Science, Georgia Institute of Technology, Atlanta, GA, USA
- Konrad P Kording
- Department of Biomedical Engineering, University of Pennsylvania, Philadelphia, PA, USA
- Eva L Dyer
- Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, GA, USA
6. Dahmen D, Layer M, Deutz L, Dąbrowska PA, Voges N, von Papen M, Brochier T, Riehle A, Diesmann M, Grün S, Helias M. Global organization of neuronal activity only requires unstructured local connectivity. eLife 2022; 11:e68422. PMID: 35049496; PMCID: PMC8776256; DOI: 10.7554/eLife.68422.
Abstract
Modern electrophysiological recordings simultaneously capture single-unit spiking activities of hundreds of neurons spread across large cortical distances. Yet this parallel activity is often confined to relatively low-dimensional manifolds. This implies strong coordination even among neurons that are most likely not directly connected. Here, we combine in vivo recordings with network models and theory to characterize the nature of mesoscopic coordination patterns in macaque motor cortex and to expose their origin: we find that heterogeneity in local connectivity supports network states with complex long-range cooperation between neurons that arises from multi-synaptic, short-range connections. Our theory explains the experimentally observed spatial organization of covariances in resting-state recordings as well as the behaviorally related modulation of covariance patterns during a reach-to-grasp task. The ubiquity of heterogeneity in local cortical circuits suggests that the brain uses the described mechanism to flexibly adapt neuronal coordination to momentary demands.
Affiliation(s)
- David Dahmen
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- Moritz Layer
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- RWTH Aachen University, Aachen, Germany
- Lukas Deutz
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- School of Computing, University of Leeds, Leeds, United Kingdom
- Paulina Anna Dąbrowska
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- RWTH Aachen University, Aachen, Germany
- Nicole Voges
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- Institut de Neurosciences de la Timone, CNRS - Aix-Marseille University, Marseille, France
- Michael von Papen
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- Thomas Brochier
- Institut de Neurosciences de la Timone, CNRS - Aix-Marseille University, Marseille, France
- Alexa Riehle
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- Institut de Neurosciences de la Timone, CNRS - Aix-Marseille University, Marseille, France
- Markus Diesmann
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, School of Medicine, RWTH Aachen University, Aachen, Germany
- Sonja Grün
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany
- Moritz Helias
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
7. Hurwitz C, Kudryashova N, Onken A, Hennig MH. Building population models for large-scale neural recordings: Opportunities and pitfalls. Curr Opin Neurobiol 2021; 70:64-73. PMID: 34411907; DOI: 10.1016/j.conb.2021.07.003.
Abstract
Modern recording technologies now enable simultaneous recording from large numbers of neurons. This has driven the development of new statistical models for analyzing and interpreting neural population activity. Here, we provide a broad overview of recent developments in this area. We compare and contrast different approaches, highlight strengths and limitations, and discuss biological and mechanistic insights that these methods provide.
Affiliation(s)
- Cole Hurwitz
- University of Edinburgh, Institute for Adaptive and Neural Computation, Edinburgh EH8 9AB, United Kingdom
- Nina Kudryashova
- University of Edinburgh, Institute for Adaptive and Neural Computation, Edinburgh EH8 9AB, United Kingdom
- Arno Onken
- University of Edinburgh, Institute for Adaptive and Neural Computation, Edinburgh EH8 9AB, United Kingdom
- Matthias H Hennig
- University of Edinburgh, Institute for Adaptive and Neural Computation, Edinburgh EH8 9AB, United Kingdom
8. Randi F, Leifer AM. Nonequilibrium Green's functions for functional connectivity in the brain. Phys Rev Lett 2021; 126:118102. PMID: 33798383; PMCID: PMC8454901; DOI: 10.1103/PhysRevLett.126.118102.
Abstract
A theoretical framework describing the set of interactions between neurons in the brain, or functional connectivity, should include dynamical functions representing the propagation of signal from one neuron to another. Green's functions and response functions are natural candidates for this but, while they are conceptually very useful, they are usually defined only for linear time-translationally invariant systems. The brain, instead, behaves nonlinearly and in a time-dependent way. Here, we use nonequilibrium Green's functions to describe the time-dependent functional connectivity of a continuous-variable network of neurons. We show how the connectivity is related to the measurable response functions, and provide two illustrative examples via numerical calculations inspired by Caenorhabditis elegans.
Affiliation(s)
- Francesco Randi
- Department of Physics, Princeton University, Jadwin Hall, Princeton, New Jersey 08544, USA
- Andrew M Leifer
- Department of Physics, Princeton University, Jadwin Hall, Princeton, New Jersey 08544, USA
- Princeton Neuroscience Institute, Princeton University, New Jersey 08544, USA
9. Das A, Fiete IR. Systematic errors in connectivity inferred from activity in strongly recurrent networks. Nat Neurosci 2020; 23:1286-1296. DOI: 10.1038/s41593-020-0699-2.
10. Stapmanns J, Kühn T, Dahmen D, Luu T, Honerkamp C, Helias M. Self-consistent formulations for stochastic nonlinear neuronal dynamics. Phys Rev E 2020; 101:042124. PMID: 32422832; DOI: 10.1103/PhysRevE.101.042124.
Abstract
Neural dynamics is often investigated with tools from bifurcation theory. However, many neuron models are stochastic, mimicking fluctuations in the input from unknown parts of the brain or the spiking nature of signals. Noise changes the dynamics with respect to the deterministic model; in particular, classical bifurcation theory cannot be applied. We formulate the stochastic neuron dynamics in the Martin-Siggia-Rose de Dominicis-Janssen (MSRDJ) formalism and present the fluctuation expansion of the effective action and the functional renormalization group (fRG) as two systematic ways to incorporate corrections to the mean dynamics and time-dependent statistics due to fluctuations in the presence of nonlinear neuronal gain. To formulate self-consistency equations, we derive a fundamental link between the effective action in the Onsager-Machlup (OM) formalism, which allows the study of phase transitions, and the MSRDJ effective action, which is computationally advantageous. These results in particular allow the derivation of an OM effective action for systems with non-Gaussian noise. This approach naturally leads to effective deterministic equations for the first moment of the stochastic system; they explain how nonlinearities and noise cooperate to produce memory effects. Moreover, the MSRDJ formulation yields an effective linear system that has identical power spectra and linear response. Starting from the better known loopwise approximation, we then discuss the use of the fRG as a method to obtain self-consistency beyond the mean. We present a new, efficient truncation scheme for the hierarchy of flow equations for the vertex functions by adapting the Blaizot, Méndez, and Wschebor approximation from the derivative expansion to the vertex expansion. The methods are presented by means of the simplest possible example of a stochastic differential equation that has generic features of neuronal dynamics.
Affiliation(s)
- Jonas Stapmanns
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Institute for Theoretical Solid State Physics, RWTH Aachen University, 52074 Aachen, Germany
- Tobias Kühn
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Institute for Theoretical Solid State Physics, RWTH Aachen University, 52074 Aachen, Germany
- David Dahmen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Thomas Luu
- Institut für Kernphysik (IKP-3), Institute for Advanced Simulation (IAS-4) and Jülich Center for Hadron Physics, Jülich Research Centre, Jülich, Germany
- Carsten Honerkamp
- Institute for Theoretical Solid State Physics, RWTH Aachen University, 52074 Aachen, Germany
- JARA-FIT, Jülich Aachen Research Alliance-Fundamentals of Future Information Technology, Germany
- Moritz Helias
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Institute for Theoretical Solid State Physics, RWTH Aachen University, 52074 Aachen, Germany
11. Inferring and validating mechanistic models of neural microcircuits based on spike-train data. Nat Commun 2019; 10:4933. PMID: 31666513; PMCID: PMC6821748; DOI: 10.1038/s41467-019-12572-0.
Abstract
The interpretation of neuronal spike train recordings often relies on abstract statistical models that allow for principled parameter estimation and model selection but provide only limited insights into underlying microcircuits. In contrast, mechanistic models are useful to interpret microcircuit dynamics, but are rarely quantitatively matched to experimental data due to methodological challenges. Here we present analytical methods to efficiently fit spiking circuit models to single-trial spike trains. Using derived likelihood functions, we statistically infer the mean and variance of hidden inputs, neuronal adaptation properties and connectivity for coupled integrate-and-fire neurons. Comprehensive evaluations on synthetic data, validations using ground truth in-vitro and in-vivo recordings, and comparisons with existing techniques demonstrate that parameter estimation is very accurate and efficient, even for highly subsampled networks. Our methods bridge statistical, data-driven and theoretical, model-based neurosciences at the level of spiking circuits, for the purpose of a quantitative, mechanistic interpretation of recorded neuronal population activity. It is difficult to fit mechanistic, biophysically constrained circuit models to spike train data from in vivo extracellular recordings. Here the authors present analytical methods that enable efficient parameter estimation for integrate-and-fire circuit models and inference of the underlying connectivity structure in subsampled networks.
12. Rule ME, O'Leary T, Harvey CD. Causes and consequences of representational drift. Curr Opin Neurobiol 2019; 58:141-147. PMID: 31569062; PMCID: PMC7385530; DOI: 10.1016/j.conb.2019.08.005.
Abstract
The nervous system learns new associations while maintaining memories over long periods, exhibiting a balance between flexibility and stability. Recent experiments reveal that neuronal representations of learned sensorimotor tasks continually change over days and weeks, even after animals have achieved expert behavioral performance. How is learned information stored to allow consistent behavior despite ongoing changes in neuronal activity? What functions could ongoing reconfiguration serve? We highlight recent experimental evidence for such representational drift in sensorimotor systems, and discuss how this fits into a framework of distributed population codes. We identify recent theoretical work that suggests computational roles for drift and argue that the recurrent and distributed nature of sensorimotor representations permits drift while limiting disruptive effects. We propose that representational drift may create error signals between interconnected brain regions that can be used to keep neural codes consistent in the presence of continual change. These concepts suggest experimental and theoretical approaches to studying both learning and maintenance of distributed and adaptive population codes.
Affiliation(s)
- Michael E Rule
- Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, United Kingdom
- Timothy O'Leary
- Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, United Kingdom
13. Constraining computational models using electron microscopy wiring diagrams. Curr Opin Neurobiol 2019; 58:94-100. PMID: 31470252; DOI: 10.1016/j.conb.2019.07.007.
Abstract
Numerous efforts to generate "connectomes," or synaptic wiring diagrams, of large neural circuits or entire nervous systems are currently underway. These efforts promise an abundance of data to guide theoretical models of neural computation and test their predictions. However, there is not yet a standard set of tools for incorporating the connectivity constraints that these datasets provide into the models typically studied in theoretical neuroscience. This article surveys recent approaches to building models with constrained wiring diagrams and the insights they have provided. It also describes challenges and the need for new techniques to scale these approaches to ever more complex datasets.
14. Recanatesi S, Ocker GK, Buice MA, Shea-Brown E. Dimensionality in recurrent spiking networks: Global trends in activity and local origins in connectivity. PLoS Comput Biol 2019; 15:e1006446. PMID: 31299044; PMCID: PMC6655892; DOI: 10.1371/journal.pcbi.1006446.
Abstract
The dimensionality of a network's collective activity is of increasing interest in neuroscience. This is because dimensionality provides a compact measure of how coordinated network-wide activity is, in terms of the number of modes (or degrees of freedom) that it can independently explore. A low number of modes suggests a compressed low dimensional neural code and reveals interpretable dynamics [1], while findings of high dimension may suggest flexible computations [2, 3]. Here, we address the fundamental question of how dimensionality is related to connectivity, in both autonomous and stimulus-driven networks. Working with a simple spiking network model, we derive three main findings. First, the dimensionality of global activity patterns can be strongly, and systematically, regulated by local connectivity structures. Second, the dimensionality is a better indicator than average correlations in determining how constrained neural activity is. Third, stimulus evoked neural activity interacts systematically with neural connectivity patterns, leading to network responses of either greater or lesser dimensionality than the stimulus.
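Dimensionality in this literature is commonly quantified by the participation ratio of covariance eigenvalues, PR = (tr C)^2 / tr(C^2). A small sketch on assumed Gaussian surrogate data (not the spiking model of the paper) shows how shared fluctuations collapse dimensionality:

```python
import random

def participation_ratio(X):
    """Dimensionality (tr C)^2 / tr(C^2) of the covariance C of X
    (rows = time points, columns = neurons). Uses the identity
    tr(C^2) = sum_ij C_ij^2, so no eigendecomposition is needed."""
    T, n = len(X), len(X[0])
    mu = [sum(row[i] for row in X) / T for i in range(n)]
    C = [[sum((row[i] - mu[i]) * (row[j] - mu[j]) for row in X) / (T - 1)
          for j in range(n)] for i in range(n)]
    tr = sum(C[i][i] for i in range(n))
    tr2 = sum(C[i][j] * C[j][i] for i in range(n) for j in range(n))
    return tr * tr / tr2

rng = random.Random(3)
T, n = 4000, 8
# independent surrogate activity: dimensionality close to n
indep = [[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(T)]
# shared-fluctuation surrogate: one common mode dominates all neurons,
# so the participation ratio collapses toward 1
shared = []
for _ in range(T):
    g = rng.gauss(0.0, 1.0)
    shared.append([g + 0.3 * rng.gauss(0.0, 1.0) for _ in range(n)])

d_indep = participation_ratio(indep)
d_shared = participation_ratio(shared)
```

The two surrogates bracket the regimes the abstract contrasts: weakly coordinated activity explores nearly n modes, while a single shared mode confines activity to a low-dimensional manifold regardless of population size.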
Affiliation(s)
- Stefano Recanatesi
- Center for Computational Neuroscience, University of Washington, Seattle, Washington, United States of America
- Gabriel Koch Ocker
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Michael A Buice
- Center for Computational Neuroscience, University of Washington, Seattle, Washington, United States of America
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America
- Eric Shea-Brown
- Center for Computational Neuroscience, University of Washington, Seattle, Washington, United States of America
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America
15. Dahmen D, Grün S, Diesmann M, Helias M. Second type of criticality in the brain uncovers rich multiple-neuron dynamics. Proc Natl Acad Sci U S A 2019; 116:13051-13060. PMID: 31189590; PMCID: PMC6600928; DOI: 10.1073/pnas.1818972116.
Abstract
Cortical networks that have been found to operate close to a critical point exhibit joint activations of large numbers of neurons. However, in motor cortex of the awake macaque monkey, we observe very different dynamics: massively parallel recordings of 155 single-neuron spiking activities show weak fluctuations on the population level. This a priori suggests that motor cortex operates in a noncritical regime, which in models, has been found to be suboptimal for computational performance. However, here, we show the opposite: The large dispersion of correlations across neurons is the signature of a second critical regime. This regime exhibits a rich dynamical repertoire hidden from macroscopic brain signals but essential for high performance in such concepts as reservoir computing. An analytical link between the eigenvalue spectrum of the dynamics, the heterogeneity of connectivity, and the dispersion of correlations allows us to assess the closeness to the critical point.
Affiliation(s)
- David Dahmen
- Institute of Neuroscience and Medicine (INM-6), Jülich Research Centre, 52425 Jülich, Germany
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, 52425 Jülich, Germany
- JARA Institute Brain Structure-Function Relationships (INM-10), Jülich-Aachen Research Alliance, Jülich Research Centre, 52425 Jülich, Germany
- Sonja Grün
- Institute of Neuroscience and Medicine (INM-6), Jülich Research Centre, 52425 Jülich, Germany
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, 52425 Jülich, Germany
- JARA Institute Brain Structure-Function Relationships (INM-10), Jülich-Aachen Research Alliance, Jülich Research Centre, 52425 Jülich, Germany
- Theoretical Systems Neurobiology, RWTH Aachen University, 52056 Aachen, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6), Jülich Research Centre, 52425 Jülich, Germany
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, 52425 Jülich, Germany
- JARA Institute Brain Structure-Function Relationships (INM-10), Jülich-Aachen Research Alliance, Jülich Research Centre, 52425 Jülich, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, School of Medicine, RWTH Aachen University, 52074 Aachen, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, 52062 Aachen, Germany
- Moritz Helias
- Institute of Neuroscience and Medicine (INM-6), Jülich Research Centre, 52425 Jülich, Germany
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, 52425 Jülich, Germany
- JARA Institute Brain Structure-Function Relationships (INM-10), Jülich-Aachen Research Alliance, Jülich Research Centre, 52425 Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, 52062 Aachen, Germany