1. Tian GJ, Zhu O, Shirhatti V, Greenspon CM, Downey JE, Freedman DJ, Doiron B. Neuronal firing rate diversity lowers the dimension of population covariability. bioRxiv 2024:2024.08.30.610535. [PMID: 39257801; PMCID: PMC11383671; DOI: 10.1101/2024.08.30.610535]
Abstract
Populations of neurons produce activity with two central features. First, neuronal responses are very diverse: specific stimuli or behaviors prompt some neurons to emit many action potentials, while other neurons remain relatively silent. Second, the trial-to-trial fluctuations of neuronal responses occupy a low-dimensional space, owing to significant correlations between the activity of neurons. These two features define the quality of neuronal representation. We link these two aspects of population response using a recurrent circuit model and derive the following relation: the more diverse the firing rates of neurons in a population, the lower the effective dimension of population trial-to-trial covariability. This surprising prediction is tested and validated using simultaneously recorded neuronal populations from numerous brain areas in mice and non-human primates, and from the motor cortex of human participants. Using our relation, we present a theory in which a more diverse neuronal code leads to better fine discrimination performance from population activity. In line with this theory, we show that neuronal populations across the brain exhibit both more diverse mean responses and lower-dimensional fluctuations when the brain is in heightened states of information processing. In sum, we present a key organizational principle of neuronal population response that is widely observed across the nervous system and acts to synergistically improve population representation.
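The "effective dimension" of trial-to-trial covariability referred to in this abstract is commonly quantified by the participation ratio of the covariance eigenvalues. The sketch below illustrates that measure on synthetic spike counts with diverse rates and one shared fluctuation; the data and estimator details are illustrative assumptions, not the authors' dataset or exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a trials x neurons matrix of spike counts: diverse mean
# rates plus one shared latent fluctuation (illustrative assumption only).
n_trials, n_neurons = 500, 80
rates = rng.gamma(shape=2.0, scale=5.0, size=n_neurons)   # diverse firing rates
shared = rng.normal(size=(n_trials, 1))                   # shared trial-to-trial fluctuation
counts = rng.poisson(rates * np.exp(0.2 * shared))

def participation_ratio(X):
    """Effective dimension of trial-to-trial covariability:
    PR = (sum_i lam_i)**2 / sum_i lam_i**2 over covariance eigenvalues."""
    lam = np.clip(np.linalg.eigvalsh(np.cov(X, rowvar=False)), 0.0, None)
    return lam.sum() ** 2 / (lam ** 2).sum()

d_eff = participation_ratio(counts)
```

In this toy setting, a single shared fluctuation keeps the effective dimension well below the number of neurons, which is the kind of low-dimensional covariability the abstract describes.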
2. Negrón A, Getz MP, Handy G, Doiron B. The mechanics of correlated variability in segregated cortical excitatory subnetworks. Proc Natl Acad Sci U S A 2024; 121:e2306800121. [PMID: 38959037; PMCID: PMC11252788; DOI: 10.1073/pnas.2306800121]
Abstract
Understanding the genesis of shared trial-to-trial variability in neuronal population activity within the sensory cortex is critical to uncovering the biological basis of information processing in the brain. Shared variability is often a reflection of the structure of cortical connectivity since it likely arises, in part, from local circuit inputs. A series of experiments from segregated networks of (excitatory) pyramidal neurons in the mouse primary visual cortex challenge this view. Specifically, the across-network correlations were found to be larger than predicted given the known weak cross-network connectivity. We aim to uncover the circuit mechanisms responsible for these enhanced correlations through biologically motivated cortical circuit models. Our central finding is that coupling each excitatory subpopulation with a specific inhibitory subpopulation provides the most robust network-intrinsic solution in shaping these enhanced correlations. This result argues for the existence of excitatory-inhibitory functional assemblies in early sensory areas which mirror not just response properties but also connectivity between pyramidal cells. Furthermore, our findings provide theoretical support for recent experimental observations showing that cortical inhibition forms structural and functional subnetworks with excitatory cells, in contrast to the classical view that inhibition is a nonspecific blanket suppression of local excitation.
Affiliation(s)
- Alex Negrón
- Department of Applied Mathematics, Illinois Institute of Technology, Chicago, IL 60616
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL 60637
- Matthew P. Getz
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL 60637
- Department of Neurobiology, University of Chicago, Chicago, IL 60637
- Department of Statistics, University of Chicago, Chicago, IL 60637
- Gregory Handy
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL 60637
- Department of Neurobiology, University of Chicago, Chicago, IL 60637
- Department of Statistics, University of Chicago, Chicago, IL 60637
- Brent Doiron
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL 60637
- Department of Neurobiology, University of Chicago, Chicago, IL 60637
- Department of Statistics, University of Chicago, Chicago, IL 60637
3. Papo D, Buldú JM. Does the brain behave like a (complex) network? I. Dynamics. Phys Life Rev 2024; 48:47-98. [PMID: 38145591; DOI: 10.1016/j.plrev.2023.12.006]
Abstract
Graph theory is now becoming a standard tool in system-level neuroscience. However, endowing observed brain anatomy and dynamics with a complex network structure does not entail that the brain actually works as a network. Asking whether the brain behaves as a network means asking whether network properties count. From the viewpoint of neurophysiology and, possibly, of brain physics, the most substantial issues a network structure may be instrumental in addressing relate to the influence of network properties on brain dynamics and to whether these properties ultimately explain some aspects of brain function. Here, we address the dynamical implications of complex networks, examining which aspects and scales of brain activity may be understood to genuinely behave as a network. To do so, we first define the meaning of networkness, and analyse some of its implications. We then examine ways in which brain anatomy and dynamics can be endowed with a network structure and discuss possible ways in which network structure may be shown to represent a genuine organisational principle of brain activity, rather than just a convenient description of its anatomy and dynamics.
Affiliation(s)
- D Papo
- Department of Neuroscience and Rehabilitation, Section of Physiology, University of Ferrara, Ferrara, Italy; Center for Translational Neurophysiology, Fondazione Istituto Italiano di Tecnologia, Ferrara, Italy
- J M Buldú
- Complex Systems Group & G.I.S.C., Universidad Rey Juan Carlos, Madrid, Spain
4. Crosser JT, Brinkman BAW. Applications of information geometry to spiking neural network activity. Phys Rev E 2024; 109:024302. [PMID: 38491696; DOI: 10.1103/physreve.109.024302]
Abstract
The space of possible behaviors that complex biological systems may exhibit is unimaginably vast, and these systems often appear to be stochastic, whether due to variable noisy environmental inputs or intrinsically generated chaos. The brain is a prominent example of a biological system with complex behaviors. The number of possible patterns of spikes emitted by a local brain circuit is combinatorially large, although the brain may not make use of all of them. Understanding which of these possible patterns are actually used by the brain, and how those sets of patterns change as properties of neural circuitry change, is a major goal in neuroscience. Recently, tools from information geometry have been used to study embeddings of probabilistic models onto a hierarchy of model manifolds that encode how model outputs change as a function of their parameters, giving a quantitative notion of "distances" between outputs. We apply this method to a network model of excitatory and inhibitory neural populations to understand how the competition between membrane and synaptic response timescales shapes the network's information geometry. The hyperbolic embedding allows us to identify the statistical parameters to which the model behavior is most sensitive, and demonstrate how the ranking of these coordinates changes with the balance of excitation and inhibition in the network.
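The sensitivity-ranking step described in this abstract can be loosely illustrated with a Fisher information matrix: its eigendecomposition separates "stiff" parameter directions (to which model output is most sensitive) from "sloppy" ones. The two-parameter Poisson rate model below is a hypothetical toy, not the paper's excitatory-inhibitory network or its hyperbolic embedding.

```python
import numpy as np

# Toy model assumption: two observed units fire as independent Poisson processes
# with mean rates depending on two parameters theta = (theta0, theta1).
def rates(theta):
    return np.array([theta[0] * np.exp(theta[1]), theta[0]])

def fisher(theta, eps=1e-6):
    """Numerical Fisher information for independent Poisson observations:
    F_ij = sum_k (d r_k / d theta_i)(d r_k / d theta_j) / r_k."""
    J = np.zeros((2, 2))              # Jacobian d r_k / d theta_i
    for i in range(2):
        d = np.zeros(2)
        d[i] = eps
        J[:, i] = (rates(theta + d) - rates(theta - d)) / (2 * eps)
    r = rates(theta)
    return (J.T / r) @ J

F = fisher(np.array([5.0, 0.2]))
evals, evecs = np.linalg.eigh(F)
stiff_direction = evecs[:, -1]        # parameter combination the model is most sensitive to
hierarchy = evals[-1] / evals[0]      # stiff-to-sloppy eigenvalue ratio
```

The eigenvalue hierarchy is the quantitative handle: a large ratio means behavior is controlled by a few stiff parameter combinations, which is the kind of ranking the abstract refers to.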
Affiliation(s)
- Jacob T Crosser
- Department of Applied Mathematics and Statistics, Stony Brook University, Stony Brook, New York 11794, USA; Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York 11794, USA
- Braden A W Brinkman
- Department of Applied Mathematics and Statistics, Stony Brook University, Stony Brook, New York 11794, USA; Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York 11794, USA
5. Oldenburg IA, Hendricks WD, Handy G, Shamardani K, Bounds HA, Doiron B, Adesnik H. The logic of recurrent circuits in the primary visual cortex. Nat Neurosci 2024; 27:137-147. [PMID: 38172437; PMCID: PMC10774145; DOI: 10.1038/s41593-023-01510-5]
Abstract
Recurrent cortical activity sculpts visual perception by refining, amplifying or suppressing visual input. However, the rules that govern the influence of recurrent activity remain enigmatic. We used ensemble-specific two-photon optogenetics in the mouse visual cortex to isolate the impact of recurrent activity from external visual input. We found that the spatial arrangement and the visual feature preference of the stimulated ensemble and the neighboring neurons jointly determine the net effect of recurrent activity. Photoactivation of these ensembles drives suppression in all cells beyond 30 µm but uniformly drives activation in closer similarly tuned cells. In nonsimilarly tuned cells, compact, cotuned ensembles drive net suppression, while diffuse, cotuned ensembles drive activation. Computational modeling suggests that highly local recurrent excitatory connectivity and selective convergence onto inhibitory neurons explain these effects. Our findings reveal a straightforward logic in which space and feature preference of cortical ensembles determine their impact on local recurrent activity.
Affiliation(s)
- Ian Antón Oldenburg
- Department of Molecular and Cell Biology, University of California, Berkeley, Berkeley, CA, USA
- The Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA
- Department of Neuroscience and Cell Biology, Robert Wood Johnson Medical School, and Center for Advanced Biotechnology and Medicine, Rutgers University, Piscataway, NJ, USA
- William D Hendricks
- Department of Molecular and Cell Biology, University of California, Berkeley, Berkeley, CA, USA
- The Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA
- Gregory Handy
- Department of Neurobiology and Statistics, University of Chicago, Chicago, IL, USA
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL, USA
- Department of Mathematics, University of Minnesota, Minneapolis, MN, USA
- Kiarash Shamardani
- Department of Molecular and Cell Biology, University of California, Berkeley, Berkeley, CA, USA
- Department of Neurology and Neurological Sciences, Stanford University, Stanford, CA, USA
- Hayley A Bounds
- The Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA
- Brent Doiron
- Department of Neurobiology and Statistics, University of Chicago, Chicago, IL, USA
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL, USA
- Hillel Adesnik
- Department of Molecular and Cell Biology, University of California, Berkeley, Berkeley, CA, USA
- The Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA
6. Becker LA, Li B, Priebe NJ, Seidemann E, Taillefumier T. Exact analysis of the subthreshold variability for conductance-based neuronal models with synchronous synaptic inputs. Phys Rev X 2024; 14:011021. [PMID: 38911939; PMCID: PMC11194039; DOI: 10.1103/physrevx.14.011021]
Abstract
The spiking activity of neocortical neurons exhibits a striking level of variability, even when these networks are driven by identical stimuli. The approximately Poisson firing of neurons has led to the hypothesis that these neural networks operate in the asynchronous state. In the asynchronous state, neurons fire independently from one another, so that the probability that a neuron experiences synchronous synaptic inputs is exceedingly low. While models of asynchronous neurons account for the observed spiking variability, it is not clear whether the asynchronous state can also account for the level of subthreshold membrane potential variability. We propose a new analytical framework to rigorously quantify the subthreshold variability of a single conductance-based neuron in response to synaptic inputs with prescribed degrees of synchrony. Technically, we leverage the theory of exchangeability to model input synchrony via jump-process-based synaptic drives; we then perform a moment analysis of the stationary response of a neuronal model with all-or-none conductances that neglects postspiking reset. As a result, we produce exact, interpretable closed forms for the first two stationary moments of the membrane voltage, with explicit dependence on the input synaptic numbers, strengths, and synchrony. For biophysically relevant parameters, we find that the asynchronous regime yields realistic subthreshold variability (voltage variance ≃ 4-9 mV²) only when driven by a restricted number of large synapses, compatible with strong thalamic drive. By contrast, we find that achieving realistic subthreshold variability with dense cortico-cortical inputs requires including weak but nonzero input synchrony, consistent with measured pairwise spiking correlations. We also show that, without synchrony, the neural variability averages out to zero for all scaling limits with vanishing synaptic weights, independent of any balanced state hypothesis. This result challenges the theoretical basis for mean-field theories of the asynchronous state.
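The regime described here can be probed qualitatively with a minimal Monte Carlo sketch: a leaky membrane receiving Poisson trains of jump-like ("all-or-none") conductance kicks, with no post-spiking reset. All parameter values below are illustrative assumptions, not taken from the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (assumptions, not the paper's fitted values).
tau = 0.02                          # membrane time constant (s)
E_L, E_e, E_i = -70.0, 0.0, -80.0   # leak and synaptic reversal potentials (mV)
a_e, a_i = 0.01, 0.02               # relative voltage jump per synaptic event
rate_e, rate_i = 2000.0, 1000.0     # total Poisson input rates (Hz)

dt, T = 1e-4, 20.0
n = int(T / dt)
ne = rng.poisson(rate_e * dt, size=n)   # excitatory event counts per time step
ni = rng.poisson(rate_i * dt, size=n)   # inhibitory event counts per time step

V, Vs = E_L, np.empty(n)
for k in range(n):
    V += -(V - E_L) * dt / tau                               # leak toward rest
    V += ne[k] * a_e * (E_e - V) + ni[k] * a_i * (E_i - V)   # conductance kicks
    Vs[k] = V

burn = n // 5                           # discard initial transient
v_mean, v_var = Vs[burn:].mean(), Vs[burn:].var()
```

With these assumed rates and jump sizes the estimated stationary variance lands in the few-mV² range; shrinking the jumps while scaling up the rates drives it toward zero, in line with the vanishing-weight scaling argument made in the abstract.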
Affiliation(s)
- Logan A. Becker
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Department of Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Baowang Li
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Department of Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Center for Perceptual Systems, The University of Texas at Austin, Austin, Texas 78712, USA
- Center for Learning and Memory, The University of Texas at Austin, Austin, Texas 78712, USA
- Department of Psychology, The University of Texas at Austin, Austin, Texas 78712, USA
- Nicholas J. Priebe
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Department of Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Center for Learning and Memory, The University of Texas at Austin, Austin, Texas 78712, USA
- Eyal Seidemann
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Department of Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Center for Perceptual Systems, The University of Texas at Austin, Austin, Texas 78712, USA
- Department of Psychology, The University of Texas at Austin, Austin, Texas 78712, USA
- Thibaud Taillefumier
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Department of Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Department of Mathematics, The University of Texas at Austin, Austin, Texas 78712, USA
7. Becker LA, Li B, Priebe NJ, Seidemann E, Taillefumier T. Exact analysis of the subthreshold variability for conductance-based neuronal models with synchronous synaptic inputs. arXiv 2023:arXiv:2304.09280v3. [PMID: 37131877; PMCID: PMC10153295]
Abstract
The spiking activity of neocortical neurons exhibits a striking level of variability, even when these networks are driven by identical stimuli. The approximately Poisson firing of neurons has led to the hypothesis that these neural networks operate in the asynchronous state. In the asynchronous state, neurons fire independently from one another, so that the probability that a neuron experiences synchronous synaptic inputs is exceedingly low. While models of asynchronous neurons account for the observed spiking variability, it is not clear whether the asynchronous state can also account for the level of subthreshold membrane potential variability. We propose a new analytical framework to rigorously quantify the subthreshold variability of a single conductance-based neuron in response to synaptic inputs with prescribed degrees of synchrony. Technically, we leverage the theory of exchangeability to model input synchrony via jump-process-based synaptic drives; we then perform a moment analysis of the stationary response of a neuronal model with all-or-none conductances that neglects post-spiking reset. As a result, we produce exact, interpretable closed forms for the first two stationary moments of the membrane voltage, with explicit dependence on the input synaptic numbers, strengths, and synchrony. For biophysically relevant parameters, we find that the asynchronous regime yields realistic subthreshold variability (voltage variance ≃ 4-9 mV²) only when driven by a restricted number of large synapses, compatible with strong thalamic drive. By contrast, we find that achieving realistic subthreshold variability with dense cortico-cortical inputs requires including weak but nonzero input synchrony, consistent with measured pairwise spiking correlations. We also show that, without synchrony, the neural variability averages out to zero for all scaling limits with vanishing synaptic weights, independent of any balanced state hypothesis. This result challenges the theoretical basis for mean-field theories of the asynchronous state.
Affiliation(s)
- Logan A. Becker
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin
- Department of Neuroscience, The University of Texas at Austin
- Baowang Li
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin
- Department of Neuroscience, The University of Texas at Austin
- Center for Perceptual Systems, The University of Texas at Austin
- Center for Learning and Memory, The University of Texas at Austin
- Department of Psychology, The University of Texas at Austin
- Nicholas J. Priebe
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin
- Department of Neuroscience, The University of Texas at Austin
- Center for Learning and Memory, The University of Texas at Austin
- Eyal Seidemann
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin
- Department of Neuroscience, The University of Texas at Austin
- Center for Perceptual Systems, The University of Texas at Austin
- Department of Psychology, The University of Texas at Austin
- Thibaud Taillefumier
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin
- Department of Neuroscience, The University of Texas at Austin
- Department of Mathematics, The University of Texas at Austin
8. Becker LA, Li B, Priebe NJ, Seidemann E, Taillefumier T. Exact analysis of the subthreshold variability for conductance-based neuronal models with synchronous synaptic inputs. bioRxiv 2023:2023.04.17.536739. [PMID: 37131647; PMCID: PMC10153111; DOI: 10.1101/2023.04.17.536739]
Abstract
The spiking activity of neocortical neurons exhibits a striking level of variability, even when these networks are driven by identical stimuli. The approximately Poisson firing of neurons has led to the hypothesis that these neural networks operate in the asynchronous state. In the asynchronous state, neurons fire independently from one another, so that the probability that a neuron experiences synchronous synaptic inputs is exceedingly low. While models of asynchronous neurons account for the observed spiking variability, it is not clear whether the asynchronous state can also account for the level of subthreshold membrane potential variability. We propose a new analytical framework to rigorously quantify the subthreshold variability of a single conductance-based neuron in response to synaptic inputs with prescribed degrees of synchrony. Technically, we leverage the theory of exchangeability to model input synchrony via jump-process-based synaptic drives; we then perform a moment analysis of the stationary response of a neuronal model with all-or-none conductances that neglects post-spiking reset. As a result, we produce exact, interpretable closed forms for the first two stationary moments of the membrane voltage, with explicit dependence on the input synaptic numbers, strengths, and synchrony. For biophysically relevant parameters, we find that the asynchronous regime yields realistic subthreshold variability (voltage variance ≃ 4-9 mV²) only when driven by a restricted number of large synapses, compatible with strong thalamic drive. By contrast, we find that achieving realistic subthreshold variability with dense cortico-cortical inputs requires including weak but nonzero input synchrony, consistent with measured pairwise spiking correlations. We also show that, without synchrony, the neural variability averages out to zero for all scaling limits with vanishing synaptic weights, independent of any balanced state hypothesis. This result challenges the theoretical basis for mean-field theories of the asynchronous state.
9. Bernáez Timón L, Ekelmans P, Kraynyukova N, Rose T, Busse L, Tchumatchenko T. How to incorporate biological insights into network models and why it matters. J Physiol 2023; 601:3037-3053. [PMID: 36069408; DOI: 10.1113/jp282755]
Abstract
Due to the staggering complexity of the brain and its neural circuitry, neuroscientists rely on the analysis of mathematical models to elucidate its function. From Hodgkin and Huxley's detailed description of the action potential in 1952 to today, new theories and increasing computational power have opened up novel avenues to study how neural circuits implement the computations that underlie behaviour. Computational neuroscientists have developed many models of neural circuits that differ in complexity, biological realism or emergent network properties. With recent advances in experimental techniques for detailed anatomical reconstructions or large-scale activity recordings, rich biological data have become more available. The challenge when building network models is to reflect experimental results, either through a high level of detail or by finding an appropriate level of abstraction. Meanwhile, machine learning has facilitated the development of artificial neural networks, which are trained to perform specific tasks. While they have proven successful at achieving task-oriented behaviour, they are often abstract constructs that differ in many features from the physiology of brain circuits. Thus, it is unclear whether the mechanisms underlying computation in biological circuits can be investigated by analysing artificial networks that accomplish the same function but differ in their mechanisms. Here, we argue that building biologically realistic network models is crucial to establishing causal relationships between neurons, synapses, circuits and behaviour. More specifically, we advocate for network models that consider the connectivity structure and the recorded activity dynamics while evaluating task performance.
Affiliation(s)
- Laura Bernáez Timón
- Institute for Physiological Chemistry, University of Mainz Medical Center, Mainz, Germany
- Pierre Ekelmans
- Frankfurt Institute for Advanced Studies, Frankfurt, Germany
- Nataliya Kraynyukova
- Institute of Experimental Epileptology and Cognition Research, University of Bonn Medical Center, Bonn, Germany
- Tobias Rose
- Institute of Experimental Epileptology and Cognition Research, University of Bonn Medical Center, Bonn, Germany
- Laura Busse
- Division of Neurobiology, Faculty of Biology, LMU Munich, Munich, Germany
- Bernstein Center for Computational Neuroscience, Munich, Germany
- Tatjana Tchumatchenko
- Institute for Physiological Chemistry, University of Mainz Medical Center, Mainz, Germany
- Institute of Experimental Epileptology and Cognition Research, University of Bonn Medical Center, Bonn, Germany
10. Handy G, Borisyuk A. Investigating the ability of astrocytes to drive neural network synchrony. PLoS Comput Biol 2023; 19:e1011290. [PMID: 37556468; PMCID: PMC10441806; DOI: 10.1371/journal.pcbi.1011290]
Abstract
Recent experimental works have implicated astrocytes as a significant cell type underlying several neuronal processes in the mammalian brain, from encoding sensory information to neurological disorders. Despite this progress, it is still unclear how astrocytes are communicating with and driving their neuronal neighbors. While previous computational modeling works have helped propose mechanisms responsible for driving these interactions, they have primarily focused on interactions at the synaptic level, with microscale models of calcium dynamics and neurotransmitter diffusion. Since it is computationally infeasible to include the intricate microscale details in a network-scale model, little computational work has been done to understand how astrocytes may be influencing spiking patterns and synchronization of large networks. We overcome this issue by first developing an "effective" astrocyte that can be easily incorporated into already established network frameworks. We do this by showing that astrocyte proximity to a synapse makes synaptic transmission faster, weaker, and less reliable. Thus, our "effective" astrocytes can be incorporated by considering heterogeneous synaptic time constants, which are parametrized only by the degree of astrocytic proximity at that synapse. We then apply our framework to large networks of exponential integrate-and-fire neurons with various spatial structures. Depending on key parameters, such as the number of synapses ensheathed and the strength of this ensheathment, we show that astrocytes can push the network to a synchronous state with spatially correlated patterns.
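The "effective astrocyte" idea described here (folding ensheathment into per-synapse parameters so that transmission becomes faster, weaker, and less reliable) can be sketched as follows. The specific linear parameter mapping and function names are hypothetical stand-ins, not the parametrization derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def effective_synapse(ensheathment, tau0=5e-3, w0=1.0, p0=0.9):
    """Map astrocytic ensheathment in [0, 1] to (time constant, weight,
    release probability).  The linear mapping is an illustrative assumption."""
    e = float(np.clip(ensheathment, 0.0, 1.0))
    return tau0 * (1 - 0.5 * e), w0 * (1 - 0.6 * e), p0 * (1 - 0.4 * e)

def psc_trace(spike_times, t, tau, w, p):
    """Summed exponential postsynaptic conductance with stochastic release."""
    released = spike_times[rng.random(spike_times.size) < p]
    lags = t[:, None] - released[None, :]
    kernel = np.exp(-np.clip(lags, 0.0, None) / tau) * (lags >= 0)
    return w * kernel.sum(axis=1)

t = np.linspace(0.0, 1.0, 10_000)
spikes = np.sort(rng.uniform(0.0, 1.0, size=50))
bare = psc_trace(spikes, t, *effective_synapse(0.0))      # unensheathed synapse
wrapped = psc_trace(spikes, t, *effective_synapse(1.0))   # fully ensheathed synapse
```

The point of the design is the second paragraph of the abstract: once the microscale effect is summarized by three per-synapse parameters, any established network simulation can include astrocytes simply by drawing heterogeneous synaptic parameters.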
Affiliation(s)
- Gregory Handy
- Departments of Neurobiology and Statistics, University of Chicago, Chicago, Illinois, United States of America
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, Illinois, United States of America
- Alla Borisyuk
- Department of Mathematics, University of Utah, Salt Lake City, Utah, United States of America
11. Negrón A, Getz MP, Handy G, Doiron B. The mechanics of correlated variability in segregated cortical excitatory subnetworks. bioRxiv 2023:2023.04.25.538323. [PMID: 37162867; PMCID: PMC10168290; DOI: 10.1101/2023.04.25.538323]
Abstract
Understanding the genesis of shared trial-to-trial variability in neural activity within sensory cortex is critical to uncovering the biological basis of information processing in the brain. Shared variability is often a reflection of the structure of cortical connectivity since this variability likely arises, in part, from local circuit inputs. A series of experiments from segregated networks of (excitatory) pyramidal neurons in mouse primary visual cortex challenge this view. Specifically, the across-network correlations were found to be larger than predicted given the known weak cross-network connectivity. We aim to uncover the circuit mechanisms responsible for these enhanced correlations through biologically motivated cortical circuit models. Our central finding is that coupling each excitatory subpopulation with a specific inhibitory subpopulation provides the most robust network-intrinsic solution in shaping these enhanced correlations. This result argues for the existence of excitatory-inhibitory functional assemblies in early sensory areas which mirror not just response properties but also connectivity between pyramidal cells.
Affiliation(s)
- Alex Negrón
- Department of Applied Mathematics, Illinois Institute of Technology
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago
- Matthew P. Getz
- Departments of Neurobiology and Statistics, University of Chicago
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago
- Gregory Handy
- Departments of Neurobiology and Statistics, University of Chicago
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago
- Brent Doiron
- Departments of Neurobiology and Statistics, University of Chicago
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago
12. Morales GB, di Santo S, Muñoz MA. Quasiuniversal scaling in mouse-brain neuronal activity stems from edge-of-instability critical dynamics. Proc Natl Acad Sci U S A 2023; 120:e2208998120. [PMID: 36827262; PMCID: PMC9992863; DOI: 10.1073/pnas.2208998120]
Abstract
The brain is in a state of perpetual reverberant neural activity, even in the absence of specific tasks or stimuli. Shedding light on the origin and functional significance of such a dynamical state is essential to understanding how the brain transmits, processes, and stores information. An inspiring, albeit controversial, conjecture proposes that some statistical characteristics of empirically observed neuronal activity can be understood by assuming that brain networks operate in a dynamical regime with features, including the emergence of scale invariance, resembling those seen typically near phase transitions. Here, we present a data-driven analysis based on simultaneous high-throughput recordings of the activity of thousands of individual neurons in various regions of the mouse brain. To analyze these data, we construct a unified theoretical framework that synergistically combines a phenomenological renormalization group approach and techniques that infer the general dynamical state of a neural population, while designing complementary tools. This strategy allows us to uncover strong signatures of scale invariance that are "quasiuniversal" across brain regions and experiments, revealing that all the analyzed areas operate, to a greater or lesser extent, near the edge of instability.
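One ingredient of the kind of analysis described above, phenomenological renormalization-group coarse-graining, can be sketched on synthetic data: repeatedly sum maximally correlated pairs of units and track how activity variance grows with cluster size. The function names and the toy latent-variable data below are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)

def coarse_grain(X):
    """Greedily pair the most-correlated columns of X (time x units) and sum
    each pair, halving the number of units (assumes an even column count)."""
    C = np.corrcoef(X, rowvar=False)
    np.fill_diagonal(C, -np.inf)
    remaining = list(range(X.shape[1]))
    merged = []
    while len(remaining) > 1:
        sub = C[np.ix_(remaining, remaining)]
        a, b = np.unravel_index(np.argmax(sub), sub.shape)
        merged.append(X[:, remaining[a]] + X[:, remaining[b]])
        for idx in sorted((a, b), reverse=True):   # pop larger position first
            remaining.pop(idx)
    return np.stack(merged, axis=1)

# Toy correlated "activity": one shared latent plus private noise, 64 units.
latent = rng.normal(size=(4000, 1))
X = 0.5 * latent + rng.normal(size=(4000, 64))

cluster_size, mean_var = [], []
K = 1
while X.shape[1] >= 2:
    cluster_size.append(K)
    mean_var.append(X.var(axis=0).mean())   # mean variance per coarse-grained unit
    X = coarse_grain(X)
    K *= 2
```

For independent units the variance would grow proportionally to cluster size; correlations make it grow faster, and it is the scaling of such curves across cluster sizes that signals the quasiuniversal behavior discussed in the abstract.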
Collapse
Affiliation(s)
- Guillermo B. Morales
- Departamento de Electromagnetismo y Física de la Materia, Instituto Carlos I de Física Teórica y Computacional, Universidad de Granada, E-18071 Granada, Spain
| | - Serena di Santo
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027
| | - Miguel A. Muñoz
- Departamento de Electromagnetismo y Física de la Materia, Instituto Carlos I de Física Teórica y Computacional, Universidad de Granada, E-18071 Granada, Spain
| |
Collapse
|
13
|
Shomali SR, Rasuli SN, Ahmadabadi MN, Shimazaki H. Uncovering hidden network architecture from spiking activities using an exact statistical input-output relation of neurons. Commun Biol 2023; 6:169. [PMID: 36792689 PMCID: PMC9932086 DOI: 10.1038/s42003-023-04511-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2022] [Accepted: 01/20/2023] [Indexed: 02/17/2023] Open
Abstract
Identifying network architecture from observed neural activities is crucial in neuroscience studies. A key requirement is knowledge of the statistical input-output relation of single neurons in vivo. By utilizing an exact analytical solution of the spike timing for leaky integrate-and-fire neurons under noisy inputs balanced near the threshold, we construct a framework that links synaptic type, strength, and spiking nonlinearity with the statistics of neuronal population activity. The framework explains structured pairwise and higher-order interactions of neurons receiving common inputs under different architectures. We compared the theoretical predictions with the activity of monkey and mouse V1 neurons and found that excitatory inputs given to pairs explained the observed sparse activity characterized by strong negative triple-wise interactions, thereby ruling out the alternative explanation by shared inhibition. Moreover, we showed that the strong interactions are a signature of excitatory rather than inhibitory inputs whenever the spontaneous rate is low. We present a guide map of neural interactions that helps researchers specify the hidden neuronal motifs underlying observed interactions found in empirical data.
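The triple-wise interaction the authors analyze can be estimated directly from binarized activity. Below is a hypothetical toy, not the paper's exact analytical LIF solution: three threshold units in which each pair receives its own shared excitatory input, with invented parameters `q`, `kick`, and `thresh`. It only illustrates how the log-linear (maximum entropy) triple-wise interaction is computed from pattern probabilities:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200_000  # time bins

# Each pair (1,2), (1,3), (2,3) gets its own shared excitatory input;
# a neuron fires in a bin when private noise plus shared kicks cross
# a threshold, giving sparse activity.
q, kick, thresh = 0.05, 1.2, 2.0
c12, c13, c23 = ((rng.random(T) < q).astype(float) for _ in range(3))
eps = rng.normal(size=(3, T))
x = np.empty((3, T), dtype=bool)
x[0] = eps[0] + kick * (c12 + c13) > thresh
x[1] = eps[1] + kick * (c12 + c23) > thresh
x[2] = eps[2] + kick * (c13 + c23) > thresh

# Empirical pattern probabilities, with Laplace smoothing so the
# log-linear estimate stays finite.
idx = x[0].astype(int) * 4 + x[1] * 2 + x[2]
counts = np.bincount(idx, minlength=8) + 1.0
p = counts / counts.sum()
p000, p001, p010, p011, p100, p101, p110, p111 = p

# Triple-wise interaction of the log-linear model.
theta = np.log(p111 * p100 * p010 * p001 / (p110 * p101 * p011 * p000))
rates = x.mean(axis=1)
```

The paper's contribution is to predict the sign and strength of such interactions analytically from the LIF input-output relation; here `theta` is simply measured from simulated patterns.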
Collapse
Affiliation(s)
- Safura Rashid Shomali
- School of Cognitive Sciences, Institute for Research in Fundamental Sciences (IPM), Tehran, 19395-5746, Iran.
| | - Seyyed Nader Rasuli
- School of Physics, Institute for Research in Fundamental Sciences (IPM), Tehran 19395-5531, Iran; Department of Physics, University of Guilan, Rasht 41335-1914, Iran
| | - Majid Nili Ahmadabadi
- Control and Intelligent Processing Center of Excellence, School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran 14395-515, Iran
| | - Hideaki Shimazaki
- Graduate School of Informatics, Kyoto University, Kyoto 606-8501, Japan; Center for Human Nature, Artificial Intelligence, and Neuroscience (CHAIN), Hokkaido University, Hokkaido 060-0812, Japan
| |
Collapse
|
14
|
Shao Y, Ostojic S. Relating local connectivity and global dynamics in recurrent excitatory-inhibitory networks. PLoS Comput Biol 2023; 19:e1010855. [PMID: 36689488 PMCID: PMC9894562 DOI: 10.1371/journal.pcbi.1010855] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2022] [Revised: 02/02/2023] [Accepted: 01/06/2023] [Indexed: 01/24/2023] Open
Abstract
How the connectivity of cortical networks determines the neural dynamics and the resulting computations is one of the key questions in neuroscience. Previous works have pursued two complementary approaches to quantify the structure in connectivity. One approach starts from the perspective of biological experiments where only the local statistics of connectivity motifs between small groups of neurons are accessible. Another approach is based instead on the perspective of artificial neural networks where the global connectivity matrix is known, and in particular its low-rank structure can be used to determine the resulting low-dimensional dynamics. A direct relationship between these two approaches is however currently missing. Specifically, it remains to be clarified how local connectivity statistics and the global low-rank connectivity structure are inter-related and shape the low-dimensional activity. To bridge this gap, here we develop a method for mapping local connectivity statistics onto an approximate global low-rank structure. Our method rests on approximating the global connectivity matrix using dominant eigenvectors, which we compute using perturbation theory for random matrices. We demonstrate that multi-population networks defined from local connectivity statistics for which the central limit theorem holds can be approximated by low-rank connectivity with Gaussian-mixture statistics. We specifically apply this method to excitatory-inhibitory networks with reciprocal motifs, and show that it yields reliable predictions for both the low-dimensional dynamics, and statistics of population activity. Importantly, it analytically accounts for the activity heterogeneity of individual neurons in specific realizations of local connectivity. 
Altogether, our approach allows us to disentangle the effects of mean connectivity and reciprocal motifs on the global recurrent feedback, and provides an intuitive picture of how local connectivity shapes global network dynamics.
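The mapping from local statistics to an approximate global low-rank description can be illustrated numerically. In this hedged sketch the mean connectivity is rank-1 by construction (every row is identical), and the dominant eigenpair of the full matrix supplies a rank-1 approximation found here by brute-force diagonalization rather than the paper's perturbative construction; the block means and disorder strength `g` are illustrative values, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n, ne = 400, 200          # neurons: first half excitatory, second half inhibitory
ni = n - ne

# Global connectivity from local statistics: block means plus
# zero-mean Gaussian disorder.
m = np.zeros((n, n))
m[:, :ne] = 1.6 / ne      # mean excitatory weight
m[:, ne:] = -0.8 / ni     # mean inhibitory weight
g = 0.2
J = m + rng.normal(scale=g / np.sqrt(n), size=(n, n))

# Dominant eigenpair of the full matrix: an outlier predicted by the
# mean structure (eigenvalue ~ 1.6 - 0.8 = 0.8, eigenvector ~ uniform).
w, v = np.linalg.eig(J)
k = np.argmax(w.real)
lam, vec = w[k].real, v[:, k].real

# Rank-1 approximation built from the dominant right/left eigenvectors.
wl, vl = np.linalg.eig(J.T)
left = vl[:, np.argmax(wl.real)].real
J1 = lam * np.outer(vec, left) / (left @ vec)

# Compare steady states of linear rate dynamics x' = -x + Jx + h.
h = rng.normal(size=n)
x_full = np.linalg.solve(np.eye(n) - J, h)
x_low = np.linalg.solve(np.eye(n) - J1, h)
```

The low-rank surrogate captures the mean-driven recurrent feedback; the residual differences between `x_full` and `x_low` come from the random part of the connectivity, which is what the paper's perturbation theory accounts for analytically.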
Collapse
Affiliation(s)
- Yuxiu Shao
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure—PSL Research University, Paris, France
| | - Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure—PSL Research University, Paris, France
| |
Collapse
|
15
|
Miehl C, Onasch S, Festa D, Gjorgjieva J. Formation and computational implications of assemblies in neural circuits. J Physiol 2022. [PMID: 36068723 DOI: 10.1113/jp282750] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2022] [Accepted: 08/22/2022] [Indexed: 11/08/2022] Open
Abstract
In the brain, patterns of neural activity represent sensory information and store it in non-random synaptic connectivity. A prominent theoretical hypothesis states that assemblies, groups of neurons that are strongly connected to each other, are the key computational units underlying perception and memory formation. Compatible with these hypothesised assemblies, experiments have revealed groups of neurons that display synchronous activity, either spontaneously or upon stimulus presentation, and exhibit behavioural relevance. While it remains unclear how assemblies form in the brain, theoretical work has vastly contributed to the understanding of various interacting mechanisms in this process. Here, we review the recent theoretical literature on assembly formation by categorising the involved mechanisms into four components: synaptic plasticity, symmetry breaking, competition and stability. We highlight different approaches and assumptions behind assembly formation and discuss recent ideas of assemblies as the key computational unit in the brain.
Abstract figure legend: Assembly formation. Assemblies are groups of strongly connected neurons formed by the interaction of multiple mechanisms and with vast computational implications. Four interacting components are thought to drive assembly formation: synaptic plasticity, symmetry breaking, competition and stability.
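As a toy illustration of how the four components can interact, the sketch below combines Hebbian plasticity, weak random initial weights (symmetry breaking), row normalization (competition), and a hard weight bound (stability) in a small rate network. This is an invented minimal example, not a model from the reviewed literature, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20
W = 0.1 + 0.01 * rng.random((n, n))   # weak random weights (symmetry breaking)
np.fill_diagonal(W, 0.0)

# Two alternating input patterns drive correlated activity in two groups.
groups = [np.arange(0, 10), np.arange(10, 20)]
eta, w_max = 0.02, 1.0

for step in range(2000):
    g = groups[step % 2]
    r = np.full(n, 0.1)
    r[g] = 1.0                        # rate pattern for this step
    W += eta * np.outer(r, r)         # Hebbian plasticity: co-active pairs strengthen
    np.fill_diagonal(W, 0.0)
    W = np.clip(W, 0.0, w_max)        # stability: hard weight bound
    W *= (0.1 * n) / W.sum(axis=1, keepdims=True)  # competition: row normalization

# Assemblies: within-group weights end up much stronger than between-group.
within = np.mean([W[np.ix_(g, g)].mean() for g in groups])
between = W[np.ix_(groups[0], groups[1])].mean()
```

Even this crude combination produces two self-connected groups; the review's point is that the specific plasticity rule, the source of symmetry breaking, and the form of competition each shape which assembly structures can emerge.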
Collapse
Affiliation(s)
- Christoph Miehl
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438 Frankfurt, Germany; School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
| | - Sebastian Onasch
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438 Frankfurt, Germany; School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
| | - Dylan Festa
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438 Frankfurt, Germany; School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
| | - Julijana Gjorgjieva
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438 Frankfurt, Germany; School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
| |
Collapse
|
16
|
Panzeri S, Moroni M, Safaai H, Harvey CD. The structures and functions of correlations in neural population codes. Nat Rev Neurosci 2022; 23:551-567. [PMID: 35732917 DOI: 10.1038/s41583-022-00606-4] [Citation(s) in RCA: 59] [Impact Index Per Article: 29.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/19/2022] [Indexed: 12/17/2022]
Abstract
The collective activity of a population of neurons, beyond the properties of individual cells, is crucial for many brain functions. A fundamental question is how activity correlations between neurons affect how neural populations process information. Over the past 30 years, major progress has been made on how the levels and structures of correlations shape the encoding of information in population codes. Correlations influence population coding through the organization of pairwise-activity correlations with respect to the similarity of tuning of individual neurons, by their stimulus modulation and by the presence of higher-order correlations. Recent work has shown that correlations also profoundly shape other important functions performed by neural populations, including generating codes across multiple timescales and facilitating information transmission to, and readout by, downstream brain areas to guide behaviour. Here, we review this recent work and discuss how the structures of correlations can have opposite effects on the different functions of neural populations, thus creating trade-offs and constraints for the structure-function relationships of population codes. Further, we present ideas on how to combine large-scale simultaneous recordings of neural populations, computational models, analyses of behaviour, optogenetics and anatomy to unravel how the structures of correlations might be optimized to serve multiple functions.
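The interaction between pairwise correlations and tuning similarity can be made concrete with a standard two-neuron linear discriminability calculation (a textbook quantity illustrating one theme of the review, not code from it):

```python
import numpy as np

def d2(delta_f, rho):
    """Squared linear discriminability of a two-neuron code:
    d^2 = df^T Sigma^{-1} df, unit variances, noise correlation rho."""
    sigma = np.array([[1.0, rho], [rho, 1.0]])
    return float(delta_f @ np.linalg.solve(sigma, delta_f))

similar = np.array([1.0, 1.0])    # both neurons prefer the same stimulus
opposite = np.array([1.0, -1.0])  # oppositely tuned pair

# Positive noise correlation hurts similarly tuned pairs (d^2 = 2/(1+rho))
# but helps oppositely tuned pairs (d^2 = 2/(1-rho)): the "sign rule".
results = {rho: (d2(similar, rho), d2(opposite, rho)) for rho in (-0.3, 0.0, 0.3)}
```

This is the simplest instance of the review's central theme: the same level of correlation can either degrade or improve a population code depending on how it is organized with respect to tuning similarity.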
Collapse
Affiliation(s)
- Stefano Panzeri
- Department of Excellence for Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany; Istituto Italiano di Tecnologia, Rovereto, Italy
| | | | - Houman Safaai
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
| | | |
Collapse
|
17
|
Huang C, Pouget A, Doiron B. Internally generated population activity in cortical networks hinders information transmission. SCIENCE ADVANCES 2022; 8:eabg5244. [PMID: 35648863 PMCID: PMC9159697 DOI: 10.1126/sciadv.abg5244] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/11/2021] [Accepted: 04/14/2022] [Indexed: 06/15/2023]
Abstract
How neuronal variability affects sensory coding is a central question in systems neuroscience, often with complex and model-dependent answers. Many studies explore population models with a parametric structure for response tuning and variability, preventing an analysis of how synaptic circuitry establishes neural codes. We study stimulus coding in networks of spiking neuron models with spatially ordered excitatory and inhibitory connectivity. The wiring structure is capable of producing rich population-wide shared neuronal variability that agrees with many features of recorded cortical activity. While both the spatial scales of feedforward and recurrent projections strongly affect noise correlations, only recurrent projections, and in particular inhibitory projections, can introduce correlations that limit the stimulus information available to a decoder. Using a spatial neural field model, we relate the recurrent circuit conditions for information limiting noise correlations to how recurrent excitation and inhibition can form spatiotemporal patterns of population-wide activity.
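The information-limiting effect can be illustrated with linear Fisher information: adding a small component of "differential" correlations along the tuning-derivative direction caps the information a decoder can extract, no matter how many neurons are added. A minimal sketch with an illustrative strength `eps` and Gaussian tuning derivatives (not the paper's spiking network):

```python
import numpy as np

rng = np.random.default_rng(4)
eps = 0.01  # strength of differential (information-limiting) correlations

def fisher(n):
    """Linear Fisher information I = f'^T Sigma^{-1} f' for a population
    with private noise plus differential correlations eps * f' f'^T."""
    fp = rng.normal(size=n)                  # tuning-curve derivatives f'(s)
    sigma = np.eye(n) + eps * np.outer(fp, fp)
    return float(fp @ np.linalg.solve(sigma, fp))

info = {n: fisher(n) for n in (10, 100, 1000)}
# Without the eps term, information would grow roughly linearly with n;
# with it, information saturates below 1/eps = 100.
```

By the Sherman-Morrison identity, I = I0 / (1 + eps * I0) with I0 the private-noise information, so I < 1/eps for any population size; the paper's contribution is showing which recurrent (particularly inhibitory) wiring generates such correlations.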
Collapse
Affiliation(s)
- Chengcheng Huang
- Department of Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
| | - Alexandre Pouget
- Department of Basic Neuroscience, University of Geneva, Geneva, Switzerland
| | - Brent Doiron
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Departments of Neurobiology and Statistics, University of Chicago, Chicago, IL, USA
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL, USA
| |
Collapse
|
18
|
Bloch J, Greaves-Tunnell A, Shea-Brown E, Harchaoui Z, Shojaie A, Yazdan-Shahmorad A. Network structure mediates functional reorganization induced by optogenetic stimulation of non-human primate sensorimotor cortex. iScience 2022; 25:104285. [PMID: 35573193 PMCID: PMC9095749 DOI: 10.1016/j.isci.2022.104285] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2021] [Revised: 03/22/2022] [Accepted: 04/19/2022] [Indexed: 11/04/2022] Open
Abstract
Because aberrant network-level functional connectivity underlies a variety of neural disorders, the ability to induce targeted functional reorganization would be a profound development toward therapies for neural disorders. Brain stimulation has been shown to induce large-scale network-wide functional connectivity changes (FCC), but the mapping from stimulation to the induced changes is unclear. Here, we develop a model which jointly considers the stimulation protocol and the cortical network structure to accurately predict network-wide FCC in response to optogenetic stimulation of non-human primate primary sensorimotor cortex. We observe that the network structure has a much stronger effect than the stimulation protocol on the resulting FCC. We also observe that the mappings from these input features to the FCC diverge over frequency bands and successive stimulations. Our framework represents a paradigm shift for targeted neural stimulation and can be used to interrogate, improve, and develop stimulation-based interventions for neural disorders.
Collapse
Affiliation(s)
- Julien Bloch
- Department of Bioengineering, University of Washington, Seattle, WA 98105, USA
- Center for Neurotechnology, University of Washington, Seattle, WA 98105, USA
- Computational Neuroscience Center, University of Washington, Seattle, WA 98105, USA
- Washington National Primate Research Center, University of Washington, Seattle, WA 98105, USA
| | | | - Eric Shea-Brown
- Department of Applied Mathematics, University of Washington, Seattle, WA 98105, USA
- Center for Neurotechnology, University of Washington, Seattle, WA 98105, USA
- Computational Neuroscience Center, University of Washington, Seattle, WA 98105, USA
| | - Zaid Harchaoui
- Department of Statistics, University of Washington, Seattle, WA 98105, USA
| | - Ali Shojaie
- Department of Biostatistics, University of Washington, Seattle, WA 98105, USA
| | - Azadeh Yazdan-Shahmorad
- Department of Bioengineering, University of Washington, Seattle, WA 98105, USA
- Department of Electrical and Computer Engineering, University of Washington, Seattle, WA 98105, USA
- Center for Neurotechnology, University of Washington, Seattle, WA 98105, USA
- Computational Neuroscience Center, University of Washington, Seattle, WA 98105, USA
- Washington National Primate Research Center, University of Washington, Seattle, WA 98105, USA
| |
Collapse
|
19
|
Xiao ZC, Lin KK, Young LS. A data-informed mean-field approach to mapping of cortical parameter landscapes. PLoS Comput Biol 2021; 17:e1009718. [PMID: 34941863 PMCID: PMC8741023 DOI: 10.1371/journal.pcbi.1009718] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2021] [Revised: 01/07/2022] [Accepted: 12/02/2021] [Indexed: 11/19/2022] Open
Abstract
Constraining the many biological parameters that govern cortical dynamics is computationally and conceptually difficult because of the curse of dimensionality. This paper addresses these challenges by proposing (1) a novel data-informed mean-field (MF) approach to efficiently map the parameter space of network models; and (2) an organizing principle for studying parameter space that enables the extraction of biologically meaningful relations from this high-dimensional data. We illustrate these ideas using a large-scale network model of the macaque primary visual cortex. Of the 10-20 model parameters, we identify 7 that are especially poorly constrained, and use the MF algorithm in (1) to discover the firing rate contours in this 7D parameter cube. Defining a "biologically plausible" region to consist of parameters that exhibit spontaneous Excitatory and Inhibitory firing rates compatible with experimental values, we find that this region is a slightly thickened codimension-1 submanifold. An implication of this finding is that while plausible regimes depend sensitively on parameters, they are also robust and flexible provided one compensates appropriately when parameters are varied. Our organizing principle for conceptualizing parameter dependence is to focus on certain 2D parameter planes that govern lateral inhibition: Intersecting these planes with the biologically plausible region leads to very simple geometric structures which, when suitably scaled, have a universal character independent of where the intersections are taken. In addition to elucidating the geometry of the plausible region, this invariance suggests useful approximate scaling relations. Our study offers, for the first time, a complete characterization of the set of all biologically plausible parameters for a detailed cortical model, which has been out of reach due to the high dimensionality of parameter space.
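The idea of scanning a parameter plane for a "biologically plausible" region can be sketched with a far smaller model than the paper's: a two-population (E, I) threshold-linear mean field whose fixed points are solved in closed form. All coupling values and rate windows below are invented for illustration:

```python
import numpy as np

# Fixed parameters of the toy mean field; fixed points in the linear
# regime satisfy r = W r + h, so r = (I - W)^{-1} h.
w_ee, w_ii = 1.5, 0.5
h = np.array([2.0, 1.0])

def rates(w_ei, w_ie):
    """Fixed-point E/I rates, or None if the point is unstable/negative.
    For dynamics tau r' = -r + W r + h, stability needs Re eig(W) < 1."""
    W = np.array([[w_ee, -w_ei], [w_ie, -w_ii]])
    if np.max(np.linalg.eigvals(W).real) >= 1.0:
        return None
    r = np.linalg.solve(np.eye(2) - W, h)
    return r if (r > 0).all() else None

# Scan a 2D plane of inhibitory couplings and mark the "plausible"
# region where rates fall inside illustrative target windows.
grid = np.linspace(0.5, 4.0, 36)
plausible = [
    (w_ei, w_ie)
    for w_ei in grid for w_ie in grid
    if (r := rates(w_ei, w_ie)) is not None
    and 1.0 < r[0] < 10.0 and 1.0 < r[1] < 20.0
]
```

In this 2D toy the plausible set is a band in the coupling plane; the paper does the analogous scan in a 7D parameter cube, where a fast MF surrogate for the spiking network is what makes the mapping computationally feasible.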
Collapse
Affiliation(s)
- Zhuo-Cheng Xiao
- Courant Institute of Mathematical Sciences, New York University, New York, New York, United States of America
| | - Kevin K. Lin
- Department of Mathematics, University of Arizona, Tucson, Arizona, United States of America
| | - Lai-Sang Young
- Courant Institute of Mathematical Sciences, New York University, New York, New York, United States of America
- Institute for Advanced Study, Princeton, New Jersey, United States of America
| |
Collapse
|
20
|
Huang C. Modulation of the dynamical state in cortical network models. Curr Opin Neurobiol 2021; 70:43-50. [PMID: 34403890 PMCID: PMC8688204 DOI: 10.1016/j.conb.2021.07.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2021] [Revised: 05/18/2021] [Accepted: 07/14/2021] [Indexed: 11/29/2022]
Abstract
Cortical neural responses can be modulated by various factors, such as stimulus inputs and the behavioral state of the animal. Understanding the circuit mechanisms underlying these modulations is key to explaining the flexibility of circuit computations. Identifying the dynamical state of a network is an important first step in predicting network responses to external stimuli and top-down modulatory inputs. Models in stable or unstable dynamical regimes require different analytic tools to estimate the network responses to inputs and the structure of neural variability. In this article, I review recent cortical models of state-dependent responses and their predictions about the underlying modulatory mechanisms.
Collapse
Affiliation(s)
- Chengcheng Huang
- Departments of Neuroscience and Mathematics, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA.
| |
Collapse
|
21
|
Bojanek K, Zhu Y, MacLean J. Cyclic transitions between higher order motifs underlie sustained asynchronous spiking in sparse recurrent networks. PLoS Comput Biol 2020; 16:e1007409. [PMID: 32997658 PMCID: PMC7549833 DOI: 10.1371/journal.pcbi.1007409] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2019] [Revised: 10/12/2020] [Accepted: 07/28/2020] [Indexed: 12/26/2022] Open
Abstract
A basic—yet nontrivial—function which neocortical circuitry must satisfy is the ability to maintain stable spiking activity over time. Stable neocortical activity is asynchronous, critical, and low rate, and these features of spiking dynamics contribute to efficient computation and optimal information propagation. However, it remains unclear how neocortex maintains this asynchronous spiking regime. Here we algorithmically construct spiking neural network models, each composed of 5000 neurons. Network construction synthesized topological statistics from neocortex with a set of objective functions identifying naturalistic low-rate, asynchronous, and critical activity. We find that simulations run on the same topology exhibit sustained asynchronous activity under certain sets of initial membrane voltages but truncated activity under others. Synchrony, rate, and criticality do not provide a full explanation of this dichotomy. Consequently, in order to achieve mechanistic understanding of sustained asynchronous activity, we summarized activity as functional graphs where edges between units are defined by pairwise spike dependencies. We then analyzed the intersection between functional edges and synaptic connectivity, i.e., recruitment networks. Higher-order patterns, such as triplet or triangle motifs, have been tied to cooperativity and integration. We find, over time in each sustained simulation, low-variance periodic transitions between isomorphic triangle motifs in the recruitment networks. We quantify the phenomenon as a Markov process and discover that if the network fails to engage this stereotyped regime of motif dominance "cycling", spiking activity truncates early. Cycling of motif dominance generalized across manipulations of synaptic weights and topologies, demonstrating the robustness of this regime for maintenance of network activity.
Our results point to the crucial role of excitatory higher-order patterns in sustaining asynchronous activity in sparse recurrent networks. They also provide a possible explanation of why such connectivity and activity patterns have been prominently reported in neocortex.

Neocortical spiking activity tends to be low-rate and non-rhythmic, and to operate near the critical point of a phase transition. It remains unclear how this kind of spiking activity can be maintained within a neuronal network. Neurons are leaky and individual synaptic connections are sparse and weak, making the maintenance of an asynchronous regime a nontrivial problem. Higher-order patterns involving more than two units abound in neocortex, and several lines of evidence suggest that they may be instrumental for brain function. For example, stable activity in vivo displays elevated clustering dominated by specific three-node (triplet) motifs. In this study we demonstrate a link between the maintenance of asynchronous activity and triplet motifs. We algorithmically build spiking neural network models to mimic the topology of neocortex and the spiking statistics that characterize wakefulness. We show that higher-order coordination of synapses is always present during sustained asynchronous activity. Coordination takes the form of transitions in time between specific triangle motifs. These motifs summarize the way spikes traverse the underlying synaptic topology. The results of our model are consistent with numerous experimental observations, and their generalizability to other weakly and sparsely connected networks is predicted.
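The Markov-process quantification of motif-dominance cycling can be sketched as follows. The motif labels and the synthetic cycling sequence are invented stand-ins for the per-bin dominant triangle motifs extracted from recruitment networks in the paper:

```python
import numpy as np

rng = np.random.default_rng(5)
motifs = ["fan-in", "fan-out", "middleman"]  # illustrative triangle-motif classes

# Synthetic dominant-motif sequence that cycles with occasional noise,
# standing in for per-time-bin motif dominance.
seq, state = [], 0
for _ in range(3000):
    seq.append(state)
    state = (state + 1) % 3 if rng.random() < 0.9 else rng.integers(3)

# Estimate the Markov transition matrix from the label sequence.
T = np.zeros((3, 3))
for a, b in zip(seq[:-1], seq[1:]):
    T[a, b] += 1
T /= T.sum(axis=1, keepdims=True)

# "Cycling" shows up as dominant probability mass on the cyclic
# successor i -> i+1 (mod 3).
cycle_mass = np.mean([T[i, (i + 1) % 3] for i in range(3)])
```

In the paper, a transition matrix of this kind is estimated from simulations, and simulations whose matrices lack the cyclic structure (low `cycle_mass` in this toy) are the ones whose activity truncates early.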
Collapse
Affiliation(s)
- Kyle Bojanek
- Committee on Computational Neuroscience, University of Chicago, Chicago, Illinois, United States of America
| | - Yuqing Zhu
- Committee on Computational Neuroscience, University of Chicago, Chicago, Illinois, United States of America
| | - Jason MacLean
- Committee on Computational Neuroscience, University of Chicago, Chicago, Illinois, United States of America
- Department of Neurobiology, University of Chicago, Chicago, Illinois, United States of America
- Grossman Institute for Neuroscience, Quantitative Biology and Human Behavior, Chicago, Illinois, United States of America
| |
Collapse
|
22
|
Stapmanns J, Kühn T, Dahmen D, Luu T, Honerkamp C, Helias M. Self-consistent formulations for stochastic nonlinear neuronal dynamics. Phys Rev E 2020; 101:042124. [PMID: 32422832 DOI: 10.1103/physreve.101.042124] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2019] [Accepted: 12/18/2019] [Indexed: 01/28/2023]
Abstract
Neural dynamics is often investigated with tools from bifurcation theory. However, many neuron models are stochastic, mimicking fluctuations in the input from unknown parts of the brain or the spiking nature of signals. Noise changes the dynamics with respect to the deterministic model; in particular classical bifurcation theory cannot be applied. We formulate the stochastic neuron dynamics in the Martin-Siggia-Rose de Dominicis-Janssen (MSRDJ) formalism and present the fluctuation expansion of the effective action and the functional renormalization group (fRG) as two systematic ways to incorporate corrections to the mean dynamics and time-dependent statistics due to fluctuations in the presence of nonlinear neuronal gain. To formulate self-consistency equations, we derive a fundamental link between the effective action in the Onsager-Machlup (OM) formalism, which allows the study of phase transitions, and the MSRDJ effective action, which is computationally advantageous. These results in particular allow the derivation of an OM effective action for systems with non-Gaussian noise. This approach naturally leads to effective deterministic equations for the first moment of the stochastic system; they explain how nonlinearities and noise cooperate to produce memory effects. Moreover, the MSRDJ formulation yields an effective linear system that has identical power spectra and linear response. Starting from the better known loopwise approximation, we then discuss the use of the fRG as a method to obtain self-consistency beyond the mean. We present a new efficient truncation scheme for the hierarchy of flow equations for the vertex functions by adapting the Blaizot, Méndez, and Wschebor approximation from the derivative expansion to the vertex expansion. The methods are presented by means of the simplest possible example of a stochastic differential equation that has generic features of neuronal dynamics.
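A minimal flavor of the fluctuation corrections discussed here, far short of the MSRDJ/fRG machinery itself: for a toy nonlinear SDE, a Gaussian-closure ("one-loop"-like) correction predicts how noise shifts the mean away from the naive deterministic fixed point. All parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
alpha, sigma, dt = 0.2, 0.5, 0.01

# Toy stochastic rate equation dx = (-x + alpha*x^2) dt + sigma dW.
# The naive deterministic mean field predicts a fixed point at x = 0,
# but fluctuations feed through the nonlinearity and shift the mean.
paths, steps = 200, 20_000
x = np.zeros(paths)
samples = []
for t in range(steps):
    x += (-x + alpha * x**2) * dt + sigma * np.sqrt(dt) * rng.normal(size=paths)
    if t > steps // 10:           # discard burn-in
        samples.append(x.mean())
m_sim = float(np.mean(samples))

# Gaussian closure: d<m>/dt = -m + alpha*(m^2 + v), with v ~ sigma^2/2
# the linear (OU) variance; the stationary mean solves m = alpha*(m^2 + v).
v = sigma**2 / 2
m_pred = (1 - np.sqrt(1 - 4 * alpha**2 * v)) / (2 * alpha)
```

This closure is the crudest member of the hierarchy the paper treats systematically; the effective-action and fRG formulations organize exactly such corrections (and the resulting memory effects) beyond lowest order.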
Collapse
Affiliation(s)
- Jonas Stapmanns
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany; Institute for Theoretical Solid State Physics, RWTH Aachen University, 52074 Aachen, Germany
| | - Tobias Kühn
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany; Institute for Theoretical Solid State Physics, RWTH Aachen University, 52074 Aachen, Germany
| | - David Dahmen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
| | - Thomas Luu
- Institut für Kernphysik (IKP-3), Institute for Advanced Simulation (IAS-4) and Jülich Center for Hadron Physics, Jülich Research Centre, Jülich, Germany
| | - Carsten Honerkamp
- Institute for Theoretical Solid State Physics, RWTH Aachen University, 52074 Aachen, Germany; JARA-FIT, Jülich Aachen Research Alliance-Fundamentals of Future Information Technology, Germany
| | - Moritz Helias
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany; Institute for Theoretical Solid State Physics, RWTH Aachen University, 52074 Aachen, Germany
| |
Collapse
|
23
|
Montangie L, Miehl C, Gjorgjieva J. Autonomous emergence of connectivity assemblies via spike triplet interactions. PLoS Comput Biol 2020; 16:e1007835. [PMID: 32384081 PMCID: PMC7239496 DOI: 10.1371/journal.pcbi.1007835] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2019] [Revised: 05/20/2020] [Accepted: 03/31/2020] [Indexed: 01/08/2023] Open
Abstract
Non-random connectivity can emerge without structured external input driven by activity-dependent mechanisms of synaptic plasticity based on precise spiking patterns. Here we analyze the emergence of global structures in recurrent networks based on a triplet model of spike timing dependent plasticity (STDP), which depends on the interactions of three precisely-timed spikes, and can describe plasticity experiments with varying spike frequency better than the classical pair-based STDP rule. We derive synaptic changes arising from correlations up to third-order and describe them as the sum of structural motifs, which determine how any spike in the network influences a given synaptic connection through possible connectivity paths. This motif expansion framework reveals novel structural motifs under the triplet STDP rule, which support the formation of bidirectional connections and ultimately the spontaneous emergence of global network structure in the form of self-connected groups of neurons, or assemblies. We propose that under triplet STDP assembly structure can emerge without the need for externally patterned inputs or assuming a symmetric pair-based STDP rule common in previous studies. The emergence of non-random network structure under triplet STDP occurs through internally-generated higher-order correlations, which are ubiquitous in natural stimuli and neuronal spiking activity, and important for coding. We further demonstrate how neuromodulatory mechanisms that modulate the shape of the triplet STDP rule or the synaptic transmission function differentially promote structural motifs underlying the emergence of assemblies, and quantify the differences using graph theoretic measures.

Emergent non-random connectivity structures in different brain regions are tightly related to specific patterns of neural activity and support diverse brain functions. For instance, self-connected groups of neurons, known as assemblies, have been proposed to represent functional units in brain circuits and can emerge even without patterned external instruction. Here we investigate the emergence of non-random connectivity in recurrent networks using a particular plasticity rule, triplet STDP, which relies on the interaction of spike triplets and can capture higher-order statistical dependencies in neural activity. We derive the evolution of the synaptic strengths in the network and explore the conditions for the self-organization of connectivity into assemblies. We demonstrate key differences of the triplet STDP rule compared to the classical pair-based rule in terms of how assemblies are formed, including the realistic asymmetric shape and influence of novel connectivity motifs on network plasticity driven by higher-order correlations. Assembly formation depends on the specific shape of the STDP window and synaptic transmission function, pointing towards an important role of neuromodulatory signals on formation of intrinsically generated assemblies.
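The triplet STDP rule itself is compact: alongside the usual pre- and postsynaptic traces, a slow second postsynaptic trace makes potentiation sensitive to post-pre-post spike triplets. The sketch below follows the minimal Pfister-Gerstner form with illustrative parameter values (the exact constants used in the paper's analysis may differ):

```python
import numpy as np

# Minimal triplet-STDP sketch (after Pfister & Gerstner, 2006): a slow
# postsynaptic trace o2 makes potentiation depend on spike triplets,
# not just on pre-post pairs. Parameter values are illustrative.
tau_plus, tau_minus, tau_y = 16.8, 33.7, 114.0   # trace time constants (ms)
a2_plus, a2_minus, a3_plus = 5e-3, 7e-3, 6e-3    # learning amplitudes

def run(pre, post, dt=1.0, w0=0.5):
    """Evolve one synaptic weight given binary pre/post spike trains."""
    r1 = o1 = o2 = 0.0
    w = w0
    for s_pre, s_post in zip(pre, post):
        if s_pre:
            w -= o1 * a2_minus                    # pair-based depression
        if s_post:
            w += r1 * (a2_plus + a3_plus * o2)    # triplet term uses o2 before update
        # Exponential decay plus spike increments of the traces.
        r1 += -dt * r1 / tau_plus + s_pre
        o1 += -dt * o1 / tau_minus + s_post
        o2 += -dt * o2 / tau_y + s_post
    return w

def pairing(freq_hz, n_pairs=60, lag=10):
    """Standard pairing protocol: pre followed by post at a +10 ms lag."""
    period = int(1000 / freq_hz)
    T = n_pairs * period + lag + 1
    pre, post = np.zeros(T), np.zeros(T)
    pre[::period][:n_pairs] = 1
    post[lag::period][:n_pairs] = 1
    return run(pre, post)

# Higher pairing frequency at the same lag yields more potentiation,
# a frequency dependence the pair-based rule cannot capture.
w_low, w_high = pairing(5), pairing(40)
```

The slow trace `o2` is what carries the third-order correlation sensitivity that, in the paper's motif expansion, generates the new structural motifs supporting bidirectional connections and assemblies.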
Collapse
Affiliation(s)
- Lisandro Montangie
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany
| | - Christoph Miehl
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany
- Technical University of Munich, School of Life Sciences, Freising, Germany
| | - Julijana Gjorgjieva
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany
- Technical University of Munich, School of Life Sciences, Freising, Germany
| |
Collapse
|
24
|
Inferring and validating mechanistic models of neural microcircuits based on spike-train data. Nat Commun 2019; 10:4933. [PMID: 31666513 PMCID: PMC6821748 DOI: 10.1038/s41467-019-12572-0] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2018] [Accepted: 09/18/2019] [Indexed: 01/11/2023] Open
Abstract
The interpretation of neuronal spike train recordings often relies on abstract statistical models that allow for principled parameter estimation and model selection but provide only limited insights into underlying microcircuits. In contrast, mechanistic models are useful for interpreting microcircuit dynamics, but are rarely quantitatively matched to experimental data due to methodological challenges. Here we present analytical methods to efficiently fit spiking circuit models to single-trial spike trains. Using derived likelihood functions, we statistically infer the mean and variance of hidden inputs, neuronal adaptation properties, and connectivity for coupled integrate-and-fire neurons. Comprehensive evaluations on synthetic data, validations using ground-truth in vitro and in vivo recordings, and comparisons with existing techniques demonstrate that parameter estimation is very accurate and efficient, even for highly subsampled networks. Our methods bridge statistical, data-driven and theoretical, model-based neurosciences at the level of spiking circuits, for the purpose of a quantitative, mechanistic interpretation of recorded neuronal population activity.
It is difficult to fit mechanistic, biophysically constrained circuit models to spike train data from in vivo extracellular recordings. Here the authors present analytical methods that enable efficient parameter estimation for integrate-and-fire circuit models and inference of the underlying connectivity structure in subsampled networks.
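The forward model at the heart of this approach can be illustrated with a toy simulation: a leaky integrate-and-fire neuron driven by white noise whose mean and strength are exactly the hidden input statistics that the fitting method recovers from spike trains alone. The scaling convention, parameter values, and function name below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def simulate_lif(mu, sigma=0.5, tau_m=20.0, v_th=1.0, v_reset=0.0,
                 dt=0.05, T=5000.0, seed=0):
    """Euler-Maruyama simulation of a leaky integrate-and-fire neuron,
    dV = (mu - V) dt/tau_m + sigma*sqrt(dt/tau_m)*N(0,1)
    (one common scaling convention; times in ms). Returns spike times."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    noise = sigma * np.sqrt(dt / tau_m) * rng.standard_normal(n)
    v = v_reset
    spikes = []
    for i in range(n):
        v += (mu - v) * dt / tau_m + noise[i]
        if v >= v_th:               # threshold crossing: record spike, reset
            spikes.append(i * dt)
            v = v_reset
    return np.array(spikes)
```

Fitting then amounts to the inverse problem: given only the output of a model like this, infer `mu`, `sigma`, and coupling parameters via the derived likelihood functions.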
Collapse
|
25
|
Constraining computational models using electron microscopy wiring diagrams. Curr Opin Neurobiol 2019; 58:94-100. [PMID: 31470252 DOI: 10.1016/j.conb.2019.07.007] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2019] [Accepted: 07/25/2019] [Indexed: 12/18/2022]
Abstract
Numerous efforts to generate "connectomes," or synaptic wiring diagrams, of large neural circuits or entire nervous systems are currently underway. These efforts promise an abundance of data to guide theoretical models of neural computation and test their predictions. However, there is not yet a standard set of tools for incorporating the connectivity constraints that these datasets provide into the models typically studied in theoretical neuroscience. This article surveys recent approaches to building models with constrained wiring diagrams and the insights they have provided. It also describes challenges and the need for new techniques to scale these approaches to ever more complex datasets.
Collapse
|
26
|
Shamir M. Theories of rhythmogenesis. Curr Opin Neurobiol 2019; 58:70-77. [PMID: 31408837 DOI: 10.1016/j.conb.2019.07.005] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/01/2019] [Accepted: 07/14/2019] [Indexed: 12/31/2022]
Abstract
Rhythmogenesis is the process that develops the capacity for rhythmic activity in a non-rhythmic system. Theoretical work has suggested a wide array of possible mechanisms for rhythmogenesis, ranging from the regulation of cellular properties to top-down control. Here we discuss theories of rhythmogenesis with an emphasis on spike-timing-dependent plasticity. We argue that even though the specifics of different mechanisms vary greatly, they all share certain key features. Namely, rhythmogenesis can be described as a flow on the phase diagram that leads the system into a rhythmic region and stabilizes it on a specific manifold characterized by the desired rhythmic activity. Functionality is retained despite biological diversity because the system is forced onto a specific manifold while fluctuations within that manifold are allowed.
Collapse
Affiliation(s)
- Maoz Shamir
- Department of Physiology and Cell Biology, Faculty of Health Sciences, Department of Physics, Faculty of Natural Sciences, Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Be'er-Sheva, Israel; The Kavli Institute for Theoretical Physics, University of California, Santa Barbara, USA.
| |
Collapse
|
27
|
Correlation Transfer by Layer 5 Cortical Neurons Under Recreated Synaptic Inputs In Vitro. J Neurosci 2019; 39:7648-7663. [PMID: 31346031 DOI: 10.1523/jneurosci.3169-18.2019] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2018] [Revised: 07/06/2019] [Accepted: 07/12/2019] [Indexed: 11/21/2022] Open
Abstract
Correlated electrical activity in neurons is a prominent characteristic of cortical microcircuits. Despite a growing amount of evidence concerning both spike-count and subthreshold membrane potential pairwise correlations, little is known about how different types of cortical neurons convert correlated inputs into correlated outputs. We studied pyramidal neurons and two classes of GABAergic interneurons of layer 5 in neocortical brain slices obtained from rats of both sexes, and we stimulated them with biophysically realistic correlated inputs generated using dynamic clamp. We found that the physiological differences between cell types manifested as unique features in their capacity to transfer correlated inputs. We used linear response theory and computational modeling to gain clear insights into how cellular properties determine both the gain and the timescale of correlation transfer, thus tying single-cell features to network interactions. Our results provide further ground for the functionally distinct roles played by various types of neuronal cells in the cortical microcircuit.
SIGNIFICANCE STATEMENT No matter how we probe the brain, we find correlated neuronal activity over a variety of spatial and temporal scales. For the cerebral cortex, significant evidence has accumulated on trial-to-trial covariability in synaptic input activation, subthreshold membrane potential fluctuations, and output spike trains. Although we do not yet fully understand their origin and whether they are detrimental or beneficial for information processing, we believe that clarifying how correlations emerge is pivotal for understanding large-scale neuronal network dynamics and computation. Here, we report quantitative differences between excitatory and inhibitory cells as they relay input correlations into output correlations. We explain this heterogeneity with simple biophysical models and provide the most experimentally validated test of a theory for the emergence of correlations.
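The notion of correlation transfer can be illustrated with a dichotomized-Gaussian caricature (an assumption for illustration, not the dynamic-clamp protocol of the study): two threshold units receive Gaussian inputs that share a fraction `c_in` of their variance, and the correlation of their binary outputs is smaller than the input correlation, with a transfer gain set by the threshold, a crude stand-in for cellular excitability.

```python
import numpy as np

def output_correlation(c_in=0.5, theta=1.0, n_trials=200_000, seed=1):
    """Correlation of the binary outputs of two threshold units whose
    Gaussian inputs share a fraction c_in of their variance."""
    rng = np.random.default_rng(seed)
    shared = rng.standard_normal(n_trials)
    # inputs with unit variance and correlation c_in via a shared component
    x1 = np.sqrt(1 - c_in) * rng.standard_normal(n_trials) + np.sqrt(c_in) * shared
    x2 = np.sqrt(1 - c_in) * rng.standard_normal(n_trials) + np.sqrt(c_in) * shared
    s1 = (x1 > theta).astype(float)   # "spike" whenever input exceeds threshold
    s2 = (x2 > theta).astype(float)
    return np.corrcoef(s1, s2)[0, 1]
```

Even this static model reproduces the qualitative point of the study: output correlations are an attenuated, cell-property-dependent image of input correlations.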
Collapse
|
28
|
Mogensen H, Norrlid J, Enander JMD, Wahlbom A, Jörntell H. Absence of Repetitive Correlation Patterns Between Pairs of Adjacent Neocortical Neurons in vivo. Front Neural Circuits 2019; 13:48. [PMID: 31379516 PMCID: PMC6658836 DOI: 10.3389/fncir.2019.00048] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2019] [Accepted: 07/05/2019] [Indexed: 11/13/2022] Open
Abstract
Neuroanatomy suggests that adjacent neocortical neurons share a similar set of afferent synaptic inputs, as opposed to neurons located in different areas of the neocortex. In the present study, we made simultaneous single-electrode patch clamp recordings from two or three adjacent neurons in the primary somatosensory cortex (S1) of the ketamine-xylazine-anesthetized rat in vivo to study the correlation patterns in their spike firing during both spontaneous and sensory-evoked activity. One difference from previous studies of pairwise neuronal spike firing correlations was that here we identified several quantifiable parameters in the correlation patterns by which different pairs could be compared. The questions asked were whether the correlation patterns between adjacent pairs were similar and whether there was a relationship between the degree of similarity and the layer location of the pairs. Contrary to the anatomical expectation, our results show that for putative pyramidal neurons within layer III and within layer V, each pair of neurons is to some extent unique in terms of its spiking correlation patterns. Interestingly, our results also indicated that these correlation patterns did not change substantially between spontaneous and evoked activity. Our findings are compatible with the view that the synaptic input connectivity of each neocortical neuron is, at least in some aspects, unique. A possible interpretation is that plasticity mechanisms, which could be initiated or supported by transcriptomic differences, tend to differentiate rather than harmonize the synaptic weight distributions between adjacent neurons of the same type.
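The basic quantity behind such pairwise comparisons is the spike-train cross-correlogram: a histogram of the time differences between the spikes of two cells. A minimal sketch (illustrative function name and bin settings; suitable for short trains, since it enumerates all spike pairs):

```python
import numpy as np

def cross_correlogram(t1, t2, max_lag=50.0, bin_w=1.0):
    """Histogram of spike-time differences t2 - t1 within +/- max_lag (ms).
    Returns bin-center lags and pair counts per bin. O(N1*N2) in spikes."""
    t1 = np.asarray(t1, dtype=float)
    t2 = np.asarray(t2, dtype=float)
    diffs = t2[None, :] - t1[:, None]          # all pairwise lags
    diffs = diffs[np.abs(diffs) <= max_lag]
    edges = np.arange(-max_lag, max_lag + bin_w, bin_w)
    counts, _ = np.histogram(diffs, bins=edges)
    lags = 0.5 * (edges[:-1] + edges[1:])
    return lags, counts
```

Parameters extracted from such correlograms (peak lag, peak width, baseline) are the kind of quantifiable features by which pairs can be compared across conditions.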
Collapse
Affiliation(s)
- Hannes Mogensen
- Neural Basis of Sensorimotor Control, Department of Experimental Medical Science, Faculty of Medicine, Lund University, Lund, Sweden
| | - Johanna Norrlid
- Neural Basis of Sensorimotor Control, Department of Experimental Medical Science, Faculty of Medicine, Lund University, Lund, Sweden
| | - Jonas M D Enander
- Neural Basis of Sensorimotor Control, Department of Experimental Medical Science, Faculty of Medicine, Lund University, Lund, Sweden
| | - Anders Wahlbom
- Neural Basis of Sensorimotor Control, Department of Experimental Medical Science, Faculty of Medicine, Lund University, Lund, Sweden
| | - Henrik Jörntell
- Neural Basis of Sensorimotor Control, Department of Experimental Medical Science, Faculty of Medicine, Lund University, Lund, Sweden
| |
Collapse
|
29
|
Curto C, Morrison K. Relating network connectivity to dynamics: opportunities and challenges for theoretical neuroscience. Curr Opin Neurobiol 2019; 58:11-20. [PMID: 31319287 DOI: 10.1016/j.conb.2019.06.003] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2019] [Accepted: 06/22/2019] [Indexed: 11/29/2022]
Abstract
We review recent work relating network connectivity to the dynamics of neural activity. While concepts stemming from network science provide a valuable starting point, the interpretation of graph-theoretic structures and measures can be highly dependent on the dynamics associated with the network. Properties that are quite meaningful for linear dynamics, such as random walk and network flow models, may be of limited relevance in the neuroscience setting. Theoretical and computational neuroscience are playing a vital role in understanding the relationship between network connectivity and the nonlinear dynamics associated with neural networks.
Collapse
Affiliation(s)
- Carina Curto
- The Pennsylvania State University, PA 16802, United States.
| | - Katherine Morrison
- School of Mathematical Sciences, University of Northern Colorado, Greeley, CO 80639, USA
| |
Collapse
|
30
|
Recanatesi S, Ocker GK, Buice MA, Shea-Brown E. Dimensionality in recurrent spiking networks: Global trends in activity and local origins in connectivity. PLoS Comput Biol 2019; 15:e1006446. [PMID: 31299044 PMCID: PMC6655892 DOI: 10.1371/journal.pcbi.1006446] [Citation(s) in RCA: 31] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2018] [Revised: 07/24/2019] [Accepted: 04/03/2019] [Indexed: 11/25/2022] Open
Abstract
The dimensionality of a network's collective activity is of increasing interest in neuroscience. This is because dimensionality provides a compact measure of how coordinated network-wide activity is, in terms of the number of modes (or degrees of freedom) that it can independently explore. A low number of modes suggests a compressed, low-dimensional neural code and reveals interpretable dynamics [1], while findings of high dimension may suggest flexible computations [2, 3]. Here, we address the fundamental question of how dimensionality is related to connectivity, in both autonomous and stimulus-driven networks. Working with a simple spiking network model, we derive three main findings. First, the dimensionality of global activity patterns can be strongly, and systematically, regulated by local connectivity structures. Second, the dimensionality is a better indicator than average correlations of how constrained neural activity is. Third, stimulus-evoked neural activity interacts systematically with neural connectivity patterns, leading to network responses of either greater or lesser dimensionality than the stimulus.
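A widely used compact measure of this dimensionality is the participation ratio of the covariance eigenvalue spectrum, PR = (Σλᵢ)² / Σλᵢ² (one of several definitions in the literature; the sketch below is illustrative, not the paper's exact estimator). It equals N for N independent units of equal variance and approaches 1 when a single shared mode dominates.

```python
import numpy as np

def participation_ratio(cov):
    """Effective dimensionality of activity with covariance matrix `cov`:
    PR = (sum of eigenvalues)^2 / (sum of squared eigenvalues)."""
    lam = np.linalg.eigvalsh(cov)            # eigenvalues of symmetric cov
    return lam.sum() ** 2 / (lam ** 2).sum()
```

For example, adding a single shared mode to otherwise independent units (`np.eye(N) + c * np.ones((N, N))`) collapses the participation ratio far below N, the sense in which coordinated fluctuations lower the dimension of population activity.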
Collapse
Affiliation(s)
- Stefano Recanatesi
- Center for Computational Neuroscience, University of Washington, Seattle, Washington, United States of America
| | - Gabriel Koch Ocker
- Allen Institute for Brain Science, Seattle, Washington, United States of America
| | - Michael A. Buice
- Center for Computational Neuroscience, University of Washington, Seattle, Washington, United States of America
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America
| | - Eric Shea-Brown
- Center for Computational Neuroscience, University of Washington, Seattle, Washington, United States of America
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America
| |
Collapse
|
31
|
Baker C, Ebsch C, Lampl I, Rosenbaum R. Correlated states in balanced neuronal networks. Phys Rev E 2019; 99:052414. [PMID: 31212573 DOI: 10.1103/physreve.99.052414] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2018] [Indexed: 06/09/2023]
Abstract
Understanding the magnitude and structure of interneuronal correlations and their relationship to synaptic connectivity structure is an important and difficult problem in computational neuroscience. Early studies showed that neuronal network models with excitatory-inhibitory balance naturally create very weak spike train correlations, defining the "asynchronous state." Later work showed that, under some connectivity structures, balanced networks can produce larger correlations between some neuron pairs, even when the average correlation is very small. All of these previous studies assume that the local network receives feedforward synaptic input from a population of uncorrelated spike trains. We show that when the spike trains providing feedforward input are correlated, the downstream recurrent network produces much larger correlations. We provide an in-depth analysis of the resulting "correlated state" in balanced networks and show that, unlike the asynchronous state, it produces a tight excitatory-inhibitory balance consistent with in vivo cortical recordings.
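The impact of correlated feedforward input can be illustrated with a thinned "mother process" construction (an assumption for illustration, not the paper's network model): each input train copies spikes from a common source with probability `c_ff` and otherwise fires independently, and two readout units each sum the counts of a separate pool of such inputs. Even modest input correlation produces large correlations between the pooled downstream signals, whereas independent inputs yield near-zero correlation.

```python
import numpy as np

def downstream_correlation(c_ff, n_inputs=100, n_bins=20_000, rate=0.02, seed=2):
    """Spike-count correlation between two readout units, each summing its
    own pool of feedforward trains that share a common 'mother' source.
    Each train copies a mother spike with prob. c_ff, keeping rate ~ `rate`."""
    rng = np.random.default_rng(seed)
    mother = rng.random(n_bins) < rate                      # common source train
    def pool_counts(n):
        copied = (rng.random((n, n_bins)) < c_ff) & mother  # shared spikes
        private = rng.random((n, n_bins)) < rate * (1 - c_ff)
        return (copied | private).sum(axis=0)               # summed pool count
    x1 = pool_counts(n_inputs)
    x2 = pool_counts(n_inputs)
    return np.corrcoef(x1, x2)[0, 1]
```

The pooling amplifies weak pairwise input correlations, which is why a downstream recurrent network receiving such input cannot remain in the asynchronous state.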
Collapse
Affiliation(s)
- Cody Baker
- Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, Indiana 46556, USA
| | - Christopher Ebsch
- Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, Indiana 46556, USA
| | - Ilan Lampl
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, 7610001, Israel
| | - Robert Rosenbaum
- Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, Indiana 46556, USA
- Interdisciplinary Center for Network Science and Applications, University of Notre Dame, Notre Dame, Indiana 46556, USA
| |
Collapse
|
32
|
Kanashiro T, Ocker GK, Cohen MR, Doiron B. Attentional modulation of neuronal variability in circuit models of cortex. eLife 2017; 6. [PMID: 28590902 PMCID: PMC5476447 DOI: 10.7554/elife.23978] [Citation(s) in RCA: 45] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2016] [Accepted: 05/20/2017] [Indexed: 01/12/2023] Open
Abstract
The circuit mechanisms behind shared neural variability (noise correlation) and its dependence on neural state are poorly understood. Visual attention is well suited to constrain cortical models of response variability because attention both increases firing rates and their stimulus sensitivity and decreases noise correlations. We provide a novel analysis of population recordings in rhesus primate visual area V4 showing that a single biophysical mechanism may underlie these diverse neural correlates of attention. We explore model cortical networks where top-down mediated increases in excitability, distributed across excitatory and inhibitory targets, capture the key neuronal correlates of attention. Our models predict that top-down signals primarily affect inhibitory neurons, whereas excitatory neurons are more sensitive to stimulus-specific bottom-up inputs. Accounting for trial variability in models of state-dependent modulation of neuronal activity is a critical step in building a mechanistic theory of neuronal cognition. DOI:http://dx.doi.org/10.7554/eLife.23978.001
The world around us is complex and our brains need to navigate this complexity. We must focus on relevant inputs from our senses – such as the bus we need to catch – while ignoring distractions – such as the eye-catching displays in the shop windows we pass on the same street. Selective attention is a tool that enables us to filter complex sensory scenes and focus on whatever is most important at the time. But how does selective attention work? Our sense of vision results from the activity of cells in a region of the brain called visual cortex. Paying attention to an object affects the activity of visual cortex in two ways. First, it causes the average activity of the brain cells in the visual cortex that respond to that object to increase. Second, it reduces spontaneous moment-to-moment fluctuations in the activity of those brain cells, known as noise. Both of these effects make it easier for the brain to process the object in question. Kanashiro et al. set out to build a mathematical model of visual cortex that captures these two components of selective attention. The cortex contains two types of brain cells: excitatory neurons, which activate other cells, and inhibitory neurons, which suppress other cells. Experiments suggest that excitatory neurons contribute to the flow of activity within the cortex, whereas inhibitory neurons help cancel out noise. The new mathematical model predicts that paying attention affects inhibitory neurons far more than excitatory ones. According to the model, selective attention works mainly by reducing the noise that would otherwise distort the activity of visual cortex. The next step is to test this prediction directly. This will require measuring the activity of the inhibitory neurons in an animal performing a selective attention task. Such experiments, which should be achievable using existing technologies, will allow scientists to confirm or disprove the current model, and to dissect the mechanisms that underlie visual attention. DOI:http://dx.doi.org/10.7554/eLife.23978.002
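The noise-cancelling role of inhibition can be caricatured with a minimal stochastic rate model (illustrative dynamics and parameters, not the study's fitted circuit): two excitatory units share half of their input noise, and an inhibitory unit that tracks their mean activity feeds back with gain `g_inh`, a stand-in for top-down attentional drive to inhibition. Increasing the gain suppresses the shared fluctuations and lowers the pairwise correlation.

```python
import numpy as np

def e_pair_correlation(g_inh, tau=1.0, dt=0.01, T=1000.0, seed=4):
    """Correlation between two excitatory units with shared input noise,
    under inhibitory feedback of gain g_inh tracking their mean rate."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    shared = rng.standard_normal(n)        # noise common to both E units
    priv = rng.standard_normal((2, n))     # private noise
    r1 = r2 = r_i = 0.0
    x1 = np.empty(n); x2 = np.empty(n)
    sq = np.sqrt(dt)
    for t in range(n):
        d1 = (-r1 - g_inh * r_i) * dt / tau + sq * (shared[t] + priv[0, t])
        d2 = (-r2 - g_inh * r_i) * dt / tau + sq * (shared[t] + priv[1, t])
        r_i += (-r_i + 0.5 * (r1 + r2)) * dt / tau   # inhibition tracks mean E rate
        r1 += d1; r2 += d2
        x1[t] = r1; x2[t] = r2
    return np.corrcoef(x1, x2)[0, 1]
```

Because the inhibitory unit only sees the symmetric (shared) mode of the pair, feedback selectively damps shared variability, which is the qualitative mechanism the model attributes to attention.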
Collapse
Affiliation(s)
- Tatjana Kanashiro
- Program for Neural Computation, Carnegie Mellon University and University of Pittsburgh, Pittsburgh, United States.,Department of Mathematics, University of Pittsburgh, Pittsburgh, United States.,Center for the Neural Basis of Cognition, Pittsburgh, United States
| | - Gabriel Koch Ocker
- Department of Mathematics, University of Pittsburgh, Pittsburgh, United States.,Center for the Neural Basis of Cognition, Pittsburgh, United States.,Allen Institute for Brain Science, Seattle, United States
| | - Marlene R Cohen
- Center for the Neural Basis of Cognition, Pittsburgh, United States.,Department of Neuroscience, University of Pittsburgh, Pittsburgh, United States
| | - Brent Doiron
- Department of Mathematics, University of Pittsburgh, Pittsburgh, United States.,Center for the Neural Basis of Cognition, Pittsburgh, United States
| |
Collapse
|