1. Papo D, Buldú JM. Does the brain behave like a (complex) network? I. Dynamics. Phys Life Rev 2024; 48:47-98. PMID: 38145591. DOI: 10.1016/j.plrev.2023.12.006.
Abstract
Graph theory is now becoming a standard tool in system-level neuroscience. However, endowing observed brain anatomy and dynamics with a complex network structure does not entail that the brain actually works as a network. Asking whether the brain behaves as a network means asking whether network properties count. From the viewpoint of neurophysiology and, possibly, of brain physics, the most substantial issues a network structure may help address concern the influence of network properties on brain dynamics, and whether these properties ultimately explain some aspects of brain function. Here, we address the dynamical implications of a complex network structure, examining which aspects and scales of brain activity may be understood to genuinely behave as a network. To do so, we first define the meaning of networkness and analyse some of its implications. We then examine ways in which brain anatomy and dynamics can be endowed with a network structure, and discuss how network structure may be shown to represent a genuine organisational principle of brain activity, rather than just a convenient description of its anatomy and dynamics.
Affiliation(s)
- D Papo
- Department of Neuroscience and Rehabilitation, Section of Physiology, University of Ferrara, Ferrara, Italy; Center for Translational Neurophysiology, Fondazione Istituto Italiano di Tecnologia, Ferrara, Italy.
- J M Buldú
- Complex Systems Group & G.I.S.C., Universidad Rey Juan Carlos, Madrid, Spain
2. Mejías JF, Wang XJ. Mechanisms of distributed working memory in a large-scale network of macaque neocortex. eLife 2022; 11:e72136. PMID: 35200137. PMCID: PMC8871396. DOI: 10.7554/elife.72136.
Abstract
Neural activity underlying working memory is not a local phenomenon but distributed across multiple brain regions. To elucidate the circuit mechanism of such distributed activity, we developed an anatomically constrained computational model of large-scale macaque cortex. We found that mnemonic internal states may emerge from inter-areal reverberation, even in a regime where none of the isolated areas is capable of generating self-sustained activity. The mnemonic activity pattern along the cortical hierarchy indicates a transition in space, separating areas that are engaged in working memory from those that are not. A host of spatially distinct attractor states is found, potentially subserving various internal processes. The model yields testable predictions, including the idea of counterstream inhibitory bias, the role of prefrontal areas in controlling distributed attractors, and the resilience of distributed activity to lesions or inactivation. This work provides a theoretical framework for identifying large-scale brain mechanisms and computational principles of distributed cognitive processes.
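The core claim, that persistent activity can arise from inter-areal reverberation even when no area is bistable on its own, can be illustrated with a toy two-area rate model (an illustrative sketch, not the authors' anatomically constrained network; the transfer function and all coupling values are assumptions):

```python
import numpy as np

def f(x):
    """Sigmoidal transfer function (gain 4, threshold 1; illustrative)."""
    return 1.0 / (1.0 + np.exp(-4.0 * (x - 1.0)))

def simulate(j_local, j_cross, pulse=5.0, t_pulse=10.0, t_total=100.0, dt=0.05):
    """Euler-integrate two coupled rate units; return the rates at t_total."""
    r = np.zeros(2)
    for step in range(int(t_total / dt)):
        inp = pulse if step * dt < t_pulse else 0.0
        drive = j_local * r + j_cross * r[::-1] + inp
        r += dt * (-r + f(drive))
    return r

# An isolated area (no cross-coupling) decays back to baseline after the pulse...
alone = simulate(j_local=0.8, j_cross=0.0)
# ...but mutual excitation between two such areas sustains the activity:
# a mnemonic state emerging from inter-areal reverberation.
coupled = simulate(j_local=0.8, j_cross=1.2)
```

The local coupling (0.8) is deliberately below the bistability threshold of a single unit, so only the loop through the other area can hold the memory.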
Affiliation(s)
- Jorge F Mejías
- Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, Netherlands
- Xiao-Jing Wang
- Center for Neural Science, New York University, New York, United States
3. Dasbach S, Tetzlaff T, Diesmann M, Senk J. Dynamical Characteristics of Recurrent Neuronal Networks Are Robust Against Low Synaptic Weight Resolution. Front Neurosci 2021; 15:757790. PMID: 35002599. PMCID: PMC8740282. DOI: 10.3389/fnins.2021.757790.
Abstract
The representation of the natural-density, heterogeneous connectivity of neuronal network models at relevant spatial scales remains a challenge for computational neuroscience and neuromorphic computing. In particular, the memory demands imposed by the vast number of synapses in brain-scale network simulations constitute a major obstacle. Limiting the numerical resolution of synaptic weights is a natural strategy to reduce memory and compute load. In this study, we investigate the effects of a limited synaptic-weight resolution on the dynamics of recurrent spiking neuronal networks resembling local cortical circuits, and develop strategies for minimizing deviations from the dynamics of networks with high-resolution synaptic weights. We mimic the effect of a limited synaptic-weight resolution by replacing normally distributed synaptic weights with weights drawn from a discrete distribution, and compare the resulting statistics characterizing firing rates, spike-train irregularity, and correlation coefficients with the reference solution. We show that a naive discretization of synaptic weights generally leads to a distortion of the spike-train statistics. If the weights are discretized such that the mean and the variance of the total synaptic input currents are preserved, the firing statistics remain unaffected for the types of networks considered in this study. For networks with sufficiently heterogeneous in-degrees, the firing statistics can be preserved even if all synaptic weights are replaced by the mean of the weight distribution. We conclude that even for simple networks with non-plastic neurons and synapses, a discretization of synaptic weights can lead to substantial deviations in the firing statistics unless the discretization is performed with care and guided by a rigorous validation process. For the network model used in this study, the synaptic weights can be replaced by low-resolution weights without affecting its macroscopic dynamical characteristics, thereby saving substantial amounts of memory.
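The contrast between naive and moment-preserving discretization can be sketched in a few lines (a schematic illustration of the idea, not the authors' simulation code; grid step and weight parameters are assumptions). Snapping Gaussian weights to a coarse grid distorts both moments, whereas a two-level distribution at mu ± sigma reproduces the mean and variance exactly, and hence the mean and variance of any summed input current:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 0.5, 0.1, 100_000

w = rng.normal(mu, sigma, size=n)        # reference high-resolution weights

# Naive discretization: snap every weight to a coarse grid.
step = 0.4
w_naive = np.round(w / step) * step

# Moment-matched discretization: a two-level distribution taking the values
# mu - sigma and mu + sigma with equal probability has exactly the mean and
# variance of the original weights, so the first two moments of the total
# synaptic input current are preserved.
signs = rng.choice([-1.0, 1.0], size=n)
w_matched = mu + signs * sigma
```

With these numbers the naive grid shifts the mean by several percent and roughly doubles the variance, while the two-level weights match both moments to within sampling error.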
Affiliation(s)
- Stefan Dasbach
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Tom Tetzlaff
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany
- Johanna Senk
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
4. Di Volo M, Destexhe A. Optimal responsiveness and information flow in networks of heterogeneous neurons. Sci Rep 2021; 11:17611. PMID: 34475456. PMCID: PMC8413388. DOI: 10.1038/s41598-021-96745-2.
Abstract
The cerebral cortex is characterized by strong neuron-to-neuron heterogeneity, yet most computational models consider networks of identical units, and it is unclear what consequences this heterogeneity may have for cortical computations. Here, we study network models of spiking neurons endowed with heterogeneity, which we treat independently for excitatory and inhibitory neurons. We find that heterogeneous networks are generally more responsive, with an optimal responsiveness occurring at levels of heterogeneity found experimentally in different published datasets, for both excitatory and inhibitory neurons. To investigate the underlying mechanisms, we introduce a mean-field model of heterogeneous networks. This mean-field model captures the optimal responsiveness and suggests that it is related to the stability of the spontaneous asynchronous state. The mean-field model also predicts that new dynamical states can emerge from heterogeneity, a prediction confirmed by network simulations. Finally, we show that heterogeneous networks maximise the information flow in large-scale networks through recurrent connections. We conclude that neuronal heterogeneity confers a distinct responsiveness on neural networks, which should be taken into account when investigating their information processing capabilities.
Affiliation(s)
- Matteo Di Volo
- Laboratoire de Physique Théorique et Modélisation, Université de Cergy-Pontoise, CNRS, UMR 8089, 95302, Cergy-Pontoise cedex, France.
- Alain Destexhe
- Paris-Saclay University, Institute of Neuroscience, CNRS, Gif sur Yvette, France
5. Laing CR. Effects of degree distributions in random networks of type-I neurons. Phys Rev E 2021; 103:052305. PMID: 34134197. DOI: 10.1103/physreve.103.052305.
Abstract
We consider large networks of theta neurons and use the Ott-Antonsen ansatz to derive degree-based mean-field equations governing the expected dynamics of the networks. Assuming random connectivity, we investigate the effects of varying the widths of the in- and out-degree distributions on the dynamics of excitatory or inhibitory synaptically coupled networks and gap junction coupled networks. For synaptically coupled networks, the dynamics are independent of the out-degree distribution. Broadening the in-degree distribution destroys oscillations in inhibitory networks and decreases the range of bistability in excitatory networks. For gap junction coupled neurons, broadening the degree distribution varies the values of parameters at which there is an onset of collective oscillations. Many of the results are shown to also occur in networks of more realistic neurons.
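The theta neuron at the heart of these mean-field reductions is simple enough to sketch directly (a single-neuron illustration under assumed parameters, not the paper's network or its Ott-Antonsen reduction). For excitability parameter eta < 0 the neuron is quiescent, and for eta > 0 it fires periodically at rate sqrt(eta)/pi, the square-root onset characteristic of type-I neurons:

```python
import math

def theta_spike_count(eta, t_total=200.0, dt=0.001):
    """Euler-integrate dtheta/dt = (1 - cos theta) + (1 + cos theta) * eta,
    counting a spike each time theta crosses pi."""
    theta, spikes = 0.0, 0
    for _ in range(int(t_total / dt)):
        theta += dt * ((1.0 - math.cos(theta)) + (1.0 + math.cos(theta)) * eta)
        if theta > math.pi:
            theta -= 2.0 * math.pi   # wrap around the circle
            spikes += 1
    return spikes

quiet = theta_spike_count(-1.0)   # excitable regime: rests at a fixed point
active = theta_spike_count(1.0)   # oscillatory regime: period pi / sqrt(eta)
```

For eta = 1 the continuous-time period is pi, so roughly 200/pi ≈ 64 spikes are expected over the simulated window.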
Affiliation(s)
- Carlo R Laing
- School of Natural and Computational Sciences, Massey University, Private Bag 102-904 NSMC, Auckland, New Zealand
6. Laing CR, Bläsche C. The effects of within-neuron degree correlations in networks of spiking neurons. Biol Cybern 2020; 114:337-347. PMID: 32124039. DOI: 10.1007/s00422-020-00822-0.
Abstract
We consider the effects of correlations between the in- and out-degrees of individual neurons on the dynamics of a network of neurons. By using theta neurons, we can derive a set of coupled differential equations for the expected dynamics of neurons with the same in-degree. A Gaussian copula is used to introduce correlations between a neuron's in- and out-degree, and numerical bifurcation analysis is used to determine the effects of these correlations on the network's dynamics. For excitatory coupling, we find that inducing positive correlations has a similar effect to increasing the coupling strength between neurons, while for inhibitory coupling it has the opposite effect. We also determine the propensity of various two- and three-neuron motifs to occur as correlations are varied and give a plausible explanation for the observed changes in dynamics.
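The Gaussian-copula construction for correlated within-neuron degrees can be sketched as follows (illustrative marginals and parameters; the paper builds theta-neuron networks on top of such degree sequences). Correlated standard normals are pushed through the normal CDF to give correlated uniforms, which are then mapped through the inverse CDF of the desired degree distribution, here Poisson:

```python
import math
import numpy as np

rng = np.random.default_rng(1)
n, lam, rho = 10_000, 20.0, 0.7      # neurons, mean degree, copula correlation

# Correlated standard normals define the Gaussian copula.
z1 = rng.standard_normal(n)
z2 = rho * z1 + math.sqrt(1.0 - rho**2) * rng.standard_normal(n)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def poisson_ppf(u, lam):
    """Smallest k whose Poisson(lam) CDF reaches u (inverse-CDF sampling)."""
    k, term = 0, math.exp(-lam)
    cdf = term
    while cdf < u and k < 10 * lam:   # guard against float underflow near u = 1
        k += 1
        term *= lam / k
        cdf += term
    return k

# Uniform marginals with Gaussian dependence -> correlated Poisson degrees.
k_in = np.array([poisson_ppf(norm_cdf(z), lam) for z in z1])
k_out = np.array([poisson_ppf(norm_cdf(z), lam) for z in z2])
```

The marginals stay Poisson with the requested mean, while the in-/out-degree correlation tracks the copula parameter rho.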
Affiliation(s)
- Carlo R Laing
- School of Natural and Computational Sciences, Massey University, NSMC, Private Bag 102-904, Auckland, New Zealand.
- Christian Bläsche
- School of Natural and Computational Sciences, Massey University, NSMC, Private Bag 102-904, Auckland, New Zealand
7. Montangie L, Miehl C, Gjorgjieva J. Autonomous emergence of connectivity assemblies via spike triplet interactions. PLoS Comput Biol 2020; 16:e1007835. PMID: 32384081. PMCID: PMC7239496. DOI: 10.1371/journal.pcbi.1007835.
Abstract
Non-random connectivity can emerge without structured external input driven by activity-dependent mechanisms of synaptic plasticity based on precise spiking patterns. Here we analyze the emergence of global structures in recurrent networks based on a triplet model of spike timing dependent plasticity (STDP), which depends on the interactions of three precisely-timed spikes, and can describe plasticity experiments with varying spike frequency better than the classical pair-based STDP rule. We derive synaptic changes arising from correlations up to third-order and describe them as the sum of structural motifs, which determine how any spike in the network influences a given synaptic connection through possible connectivity paths. This motif expansion framework reveals novel structural motifs under the triplet STDP rule, which support the formation of bidirectional connections and ultimately the spontaneous emergence of global network structure in the form of self-connected groups of neurons, or assemblies. We propose that under triplet STDP assembly structure can emerge without the need for externally patterned inputs or assuming a symmetric pair-based STDP rule common in previous studies. The emergence of non-random network structure under triplet STDP occurs through internally-generated higher-order correlations, which are ubiquitous in natural stimuli and neuronal spiking activity, and important for coding. We further demonstrate how neuromodulatory mechanisms that modulate the shape of the triplet STDP rule or the synaptic transmission function differentially promote structural motifs underlying the emergence of assemblies, and quantify the differences using graph theoretic measures.

Emergent non-random connectivity structures in different brain regions are tightly related to specific patterns of neural activity and support diverse brain functions. For instance, self-connected groups of neurons, known as assemblies, have been proposed to represent functional units in brain circuits and can emerge even without patterned external instruction. Here we investigate the emergence of non-random connectivity in recurrent networks using a particular plasticity rule, triplet STDP, which relies on the interaction of spike triplets and can capture higher-order statistical dependencies in neural activity. We derive the evolution of the synaptic strengths in the network and explore the conditions for the self-organization of connectivity into assemblies. We demonstrate key differences of the triplet STDP rule compared to the classical pair-based rule in terms of how assemblies are formed, including the realistic asymmetric shape and influence of novel connectivity motifs on network plasticity driven by higher-order correlations. Assembly formation depends on the specific shape of the STDP window and synaptic transmission function, pointing towards an important role of neuromodulatory signals on formation of intrinsically generated assemblies.
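A minimal event-driven implementation of the triplet rule (in the spirit of Pfister and Gerstner's trace-based model; the parameter values are of the order of published visual-cortex fits but should be treated as illustrative assumptions) shows its characteristic frequency dependence: pre-before-post pairing at a fixed +10 ms lag potentiates far more strongly at high pairing frequency, which a pure pair-based rule with these amplitudes would not capture:

```python
import math

def triplet_stdp_dw(freq, n_pairs=60, lag=0.010,
                    tau_p=0.0168, tau_m=0.0337,   # pair time constants (s)
                    tau_x=0.101, tau_y=0.125,     # triplet time constants (s)
                    a2p=5e-10, a2m=7e-3,          # pair amplitudes
                    a3p=6.2e-3, a3m=2.3e-4):      # triplet amplitudes
    """Total weight change for n_pairs pre->post pairings at rate freq (Hz)."""
    events = sorted([(i / freq, "pre") for i in range(n_pairs)] +
                    [(i / freq + lag, "post") for i in range(n_pairs)])
    r1 = r2 = o1 = o2 = 0.0   # presynaptic (r) and postsynaptic (o) traces
    w, t_last = 0.0, 0.0
    for t, kind in events:
        d = t - t_last
        r1 *= math.exp(-d / tau_p); r2 *= math.exp(-d / tau_x)
        o1 *= math.exp(-d / tau_m); o2 *= math.exp(-d / tau_y)
        if kind == "pre":
            w -= o1 * (a2m + a3m * r2)    # depression, triplet-gated by r2
            r1 += 1.0; r2 += 1.0          # traces updated after the weight
        else:
            w += r1 * (a2p + a3p * o2)    # potentiation, boosted by prior posts
            o1 += 1.0; o2 += 1.0
        t_last = t
    return w

dw_low = triplet_stdp_dw(5.0)     # 5 Hz pairing: weak net potentiation
dw_high = triplet_stdp_dw(40.0)   # 40 Hz pairing: much larger potentiation
```

The o2 trace accumulates across closely spaced postsynaptic spikes, which is exactly the third-spike interaction that drives the frequency dependence discussed above.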
Affiliation(s)
- Lisandro Montangie
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany
- Christoph Miehl
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany
- Technical University of Munich, School of Life Sciences, Freising, Germany
- Julijana Gjorgjieva
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany
- Technical University of Munich, School of Life Sciences, Freising, Germany
8. Synaptic Plasticity Shapes Brain Connectivity: Implications for Network Topology. Int J Mol Sci 2019; 20:6193. PMID: 31817968. PMCID: PMC6940892. DOI: 10.3390/ijms20246193.
Abstract
Studies of brain network connectivity have improved our understanding of brain changes and adaptation in response to different pathologies. Synaptic plasticity, the ability of neurons to modify their connections, is involved in brain network remodeling following different types of brain damage (e.g., vascular, neurodegenerative, inflammatory). Although synaptic plasticity mechanisms have been extensively elucidated, how neural plasticity shapes network organization is far from completely understood. Similarities between synaptic plasticity and the principles governing brain network organization could help define brain network properties and reorganization profiles after damage. In this review, we discuss how different forms of synaptic plasticity, including homeostatic and anti-homeostatic mechanisms, could be directly involved in generating specific brain network characteristics. We propose that long-term potentiation could represent the neurophysiological basis for the formation of highly connected nodes (hubs). Conversely, homeostatic plasticity may contribute to stabilizing network activity, preventing both insufficient and excessive connectivity in peripheral nodes. In addition, synaptic plasticity dysfunction may drive brain network disruption in neuropsychiatric conditions such as Alzheimer's disease and schizophrenia. Optimal network architecture, characterized by efficient information processing and resilience, and reorganization after damage strictly depend on the balance between these forms of plasticity.
9. Vegué M, Roxin A. Firing rate distributions in spiking networks with heterogeneous connectivity. Phys Rev E 2019; 100:022208. PMID: 31574753. DOI: 10.1103/physreve.100.022208.
Abstract
Mean-field theory for networks of spiking neurons based on the so-called diffusion approximation has been used to calculate certain measures of neuronal activity which can be compared with experimental data. This includes the distribution of firing rates across the network. However, the theory in its current form applies only to networks in which there is relatively little heterogeneity in the number of incoming and outgoing connections per neuron. Here we extend this theory to include networks with arbitrary degree distributions. Furthermore, the theory takes into account correlations in the in-degree and out-degree of neurons, which would arise, e.g., in the case of networks with hublike neurons. Finally, we show that networks with broad and positively correlated degrees can generate a large-amplitude sustained response to transient stimuli which does not occur in more homogeneous networks.
Affiliation(s)
- Marina Vegué
- Centre de Recerca Matemàtica, Bellaterra, Spain and Departament de Matemàtiques, Universitat Politècnica de Catalunya, Barcelona, Spain and Instituto de Neurociencias, Consejo Superior de Investigaciones Científicas y Universidad Miguel Hernández, Sant Joan d'Alacant, Alicante, Spain
- Alex Roxin
- Centre de Recerca Matemàtica, Bellaterra, Spain and Barcelona Graduate School of Mathematics, Barcelona, Spain
10. Curto C, Morrison K. Relating network connectivity to dynamics: opportunities and challenges for theoretical neuroscience. Curr Opin Neurobiol 2019; 58:11-20. PMID: 31319287. DOI: 10.1016/j.conb.2019.06.003.
Abstract
We review recent work relating network connectivity to the dynamics of neural activity. While concepts stemming from network science provide a valuable starting point, the interpretation of graph-theoretic structures and measures can be highly dependent on the dynamics associated with the network. Properties that are quite meaningful for linear dynamics, such as random walk and network flow models, may be of limited relevance in the neuroscience setting. Theoretical and computational neuroscience are playing a vital role in understanding the relationship between network connectivity and the nonlinear dynamics associated with neural networks.
Affiliation(s)
- Carina Curto
- The Pennsylvania State University, PA 16802, United States.
- Katherine Morrison
- School of Mathematical Sciences, University of Northern Colorado, Greeley, CO 80639, USA
11. Recanatesi S, Ocker GK, Buice MA, Shea-Brown E. Dimensionality in recurrent spiking networks: Global trends in activity and local origins in connectivity. PLoS Comput Biol 2019; 15:e1006446. PMID: 31299044. PMCID: PMC6655892. DOI: 10.1371/journal.pcbi.1006446.
Abstract
The dimensionality of a network's collective activity is of increasing interest in neuroscience. This is because dimensionality provides a compact measure of how coordinated network-wide activity is, in terms of the number of modes (or degrees of freedom) that it can independently explore. A low number of modes suggests a compressed low dimensional neural code and reveals interpretable dynamics [1], while findings of high dimension may suggest flexible computations [2, 3]. Here, we address the fundamental question of how dimensionality is related to connectivity, in both autonomous and stimulus-driven networks. Working with a simple spiking network model, we derive three main findings. First, the dimensionality of global activity patterns can be strongly, and systematically, regulated by local connectivity structures. Second, the dimensionality is a better indicator than average correlations in determining how constrained neural activity is. Third, stimulus evoked neural activity interacts systematically with neural connectivity patterns, leading to network responses of either greater or lesser dimensionality than the stimulus.
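Dimensionality in this sense is commonly quantified by the participation ratio of the covariance spectrum, PR = (Σᵢλᵢ)² / Σᵢλᵢ² (a standard estimator consistent with this usage, though the specific toy data below are our assumption). Uncorrelated neurons give PR near the neuron count; a shared fluctuation collapses it toward one:

```python
import numpy as np

def participation_ratio(x):
    """Effective dimensionality of activity x (time samples x neurons):
    (sum of covariance eigenvalues)^2 / (sum of squared eigenvalues)."""
    ev = np.linalg.eigvalsh(np.cov(x, rowvar=False))
    return ev.sum() ** 2 / (ev ** 2).sum()

rng = np.random.default_rng(0)
n_t, n = 5000, 50

# Uncorrelated neurons independently explore ~n modes...
indep = rng.standard_normal((n_t, n))
# ...while a common fluctuation shared by all neurons (plus weak private
# noise) confines the activity to essentially a single mode.
shared = (rng.standard_normal((n_t, 1)) * np.ones(n)
          + 0.1 * rng.standard_normal((n_t, n)))
```

This makes concrete why dimensionality can be more informative than the average pairwise correlation: it counts the modes the population actually explores.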
Affiliation(s)
- Stefano Recanatesi
- Center for Computational Neuroscience, University of Washington, Seattle, Washington, United States of America
- Gabriel Koch Ocker
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Michael A. Buice
- Center for Computational Neuroscience, University of Washington, Seattle, Washington, United States of America
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America
- Eric Shea-Brown
- Center for Computational Neuroscience, University of Washington, Seattle, Washington, United States of America
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America
12. Krauss P, Schuster M, Dietrich V, Schilling A, Schulze H, Metzner C. Weight statistics controls dynamics in recurrent neural networks. PLoS One 2019; 14:e0214541. PMID: 30964879. PMCID: PMC6456246. DOI: 10.1371/journal.pone.0214541.
Abstract
Recurrent neural networks are complex non-linear systems, capable of ongoing activity in the absence of driving inputs. The dynamical properties of these systems, in particular their long-time attractor states, are determined on the microscopic level by the connection strengths wij between the individual neurons. However, little is known about the extent to which network dynamics is tunable on a more coarse-grained level via the statistical features of the weight matrix. In this work, we investigate the dynamics of recurrent networks of Boltzmann neurons. In particular, we study the impact of three statistical parameters: density (the fraction of non-zero connections), balance (the ratio of excitatory to inhibitory connections), and symmetry (the fraction of neuron pairs with wij = wji). By computing a 'phase diagram' of network dynamics, we find that balance is the essential control parameter: its gradual increase from negative to positive values drives the system from oscillatory behavior into a chaotic regime, and eventually into stationary fixed points. Only directly at the border of the chaotic regime do the neural networks display rich but regular dynamics, thus enabling actual information processing. These results suggest that the brain, too, is fine-tuned to the 'edge of chaos' by assuring a proper balance between excitatory and inhibitory neural connections.
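The three control parameters can be realized with a simple generator (a sketch of the parametrization as described above, not the authors' code; the ±1 weight values and the symmetrization scheme are assumptions): density is the probability of a nonzero entry, balance sets the excitatory fraction of links to (1 + b)/2, and symmetry is the fraction of index pairs forced to satisfy wij = wji.

```python
import numpy as np

def make_weights(n, density, balance, symmetry, rng):
    """Random matrix with entries in {-1, 0, +1}: `density` fraction nonzero,
    excitatory fraction (1 + balance) / 2 among links, and `symmetry`
    fraction of index pairs with w[i, j] == w[j, i]."""
    mask = rng.random((n, n)) < density
    signs = np.where(rng.random((n, n)) < (1.0 + balance) / 2.0, 1.0, -1.0)
    w = mask * signs
    # Symmetrize a random fraction of pairs by copying w[i,j] onto w[j,i];
    # copying preserves each entry's marginal density and sign statistics.
    iu, ju = np.triu_indices(n, k=1)
    sym = rng.random(iu.size) < symmetry
    w[ju[sym], iu[sym]] = w[iu[sym], ju[sym]]
    np.fill_diagonal(w, 0.0)
    return w

w = make_weights(200, density=0.2, balance=0.5, symmetry=0.8,
                 rng=np.random.default_rng(0))
```

Sweeping `balance` from -1 to +1 in such a generator is the kind of scan that produces the phase diagram described in the abstract.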
Affiliation(s)
- Patrick Krauss
- Cognitive Computational Neuroscience Group at the Chair of English Philology and Linguistics, Department of English and American Studies, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
- Experimental Otolaryngology, Neuroscience Group, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
- Marc Schuster
- Experimental Otolaryngology, Neuroscience Group, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
- Verena Dietrich
- Experimental Otolaryngology, Neuroscience Group, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
- Achim Schilling
- Cognitive Computational Neuroscience Group at the Chair of English Philology and Linguistics, Department of English and American Studies, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
- Experimental Otolaryngology, Neuroscience Group, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
- Holger Schulze
- Experimental Otolaryngology, Neuroscience Group, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
- Claus Metzner
- Experimental Otolaryngology, Neuroscience Group, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
- Biophysics Group, Department of Physics, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
13. Gu Y, Qi Y, Gong P. Rich-club connectivity, diverse population coupling, and dynamical activity patterns emerging from local cortical circuits. PLoS Comput Biol 2019; 15:e1006902. PMID: 30939135. PMCID: PMC6461296. DOI: 10.1371/journal.pcbi.1006902.
Abstract
Experimental studies have begun revealing essential properties of the structural connectivity and the spatiotemporal activity dynamics of cortical circuits. To integrate these properties from anatomy and physiology, and to elucidate the links between them, we develop a novel cortical circuit model that captures a range of realistic features of synaptic connectivity. We show that the model accounts for the emergence of higher-order connectivity structures, including highly connected hub neurons that form an interconnected rich-club. The circuit model exhibits a rich repertoire of dynamical activity states, ranging from asynchronous to localized and global propagating wave states. We find that around the transition between asynchronous and localized propagating wave states, our model quantitatively reproduces a variety of major empirical findings regarding neural spatiotemporal dynamics, which otherwise remain disjointed in existing studies. These dynamics include diverse coupling (correlation) between spiking activity of individual neurons and the population, dynamical wave patterns with variable speeds and precise temporal structures of neural spikes. We further illustrate how these neural dynamics are related to the connectivity properties by analysing structural contributions to variable spiking dynamics and by showing that the rich-club structure is related to the diverse population coupling. These findings establish an integrated account of structural connectivity and activity dynamics of local cortical circuits, and provide new insights into understanding their working mechanisms.

To integrate essential anatomical and physiological properties of local cortical circuits and to elucidate mechanistic links between them, we develop a novel circuit model capturing key synaptic connectivity features. We show that the model explains the emergence of a range of connectivity patterns such as rich-club connectivity, and gives rise to a rich repertoire of cortical states. We identify both the anatomical and physiological mechanisms underlying the transition of these cortical states, and show that our model reconciles an otherwise disparate set of key physiological findings on neural activity dynamics. We further illustrate how these neural dynamics are related to the connectivity properties by analysing structural contributions to variable spiking dynamics and by showing that the rich-club structure is related to diverse neural population correlations as observed recently. Our model thus provides a framework for integrating and explaining a variety of neural connectivity properties and spatiotemporal activity dynamics observed in experimental studies, and provides novel experimentally testable predictions.
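Rich-club organization, the tendency of high-degree nodes to connect to one another more densely than the network at large, can be checked with a few lines (a toy undirected graph with a planted hub clique, not the paper's spiking circuit; sizes and probabilities are assumptions):

```python
import numpy as np

def rich_club(adj, k):
    """Edge density among nodes whose degree exceeds k (undirected 0/1 adj)."""
    deg = adj.sum(axis=1)
    rich = deg > k
    m = int(rich.sum())
    if m < 2:
        return 0.0
    sub = adj[np.ix_(rich, rich)]
    # sub.sum() counts each edge twice, matching the m * (m - 1) pair count.
    return sub.sum() / (m * (m - 1))

rng = np.random.default_rng(0)
n, n_hub, p_bg = 100, 10, 0.03

# Sparse symmetric background graph...
adj = np.triu((rng.random((n, n)) < p_bg).astype(float), 1)
adj = adj + adj.T
# ...plus a fully connected clique among the first n_hub (hub) nodes.
adj[:n_hub, :n_hub] = 1.0
np.fill_diagonal(adj, 0.0)

phi_all = rich_club(adj, k=0)    # overall density: low
phi_rich = rich_club(adj, k=9)   # density among high-degree nodes: near 1
```

A rising density at high degree thresholds, relative to the overall density, is the signature of the rich-club structure discussed above; careful analyses additionally normalize against degree-preserving random graphs.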
Affiliation(s)
- Yifan Gu
- School of Physics, University of Sydney, New South Wales, Australia
- ARC Centre of Excellence for Integrative Brain Function, University of Sydney, New South Wales, Australia
- Yang Qi
- School of Physics, University of Sydney, New South Wales, Australia
- ARC Centre of Excellence for Integrative Brain Function, University of Sydney, New South Wales, Australia
- Pulin Gong
- School of Physics, University of Sydney, New South Wales, Australia
- ARC Centre of Excellence for Integrative Brain Function, University of Sydney, New South Wales, Australia
14
|
Duarte R, Morrison A. Leveraging heterogeneity for neural computation with fading memory in layer 2/3 cortical microcircuits. PLoS Comput Biol 2019; 15:e1006781. [PMID: 31022182 PMCID: PMC6504118 DOI: 10.1371/journal.pcbi.1006781] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2018] [Revised: 05/07/2019] [Accepted: 01/09/2019] [Indexed: 11/24/2022] Open
Abstract
Complexity and heterogeneity are intrinsic to neurobiological systems, manifest in every process, at every scale, and are inextricably linked to the systems' emergent collective behaviours and function. However, the majority of studies addressing the dynamics and computational properties of biologically inspired cortical microcircuits tend to assume (often for the sake of analytical tractability) a great degree of homogeneity in both neuronal and synaptic/connectivity parameters. While simplification and reductionism are necessary to understand the brain's functional principles, disregarding the existence of the multiple heterogeneities in the cortical composition, which may be at the core of its computational proficiency, will inevitably fail to account for important phenomena and limit the scope and generalizability of cortical models. We address these issues by studying the individual and composite functional roles of heterogeneities in neuronal, synaptic and structural properties in a biophysically plausible layer 2/3 microcircuit model, built and constrained by multiple sources of empirical data. This approach was made possible by the emergence of large-scale, well curated databases, as well as the substantial improvements in experimental methodologies achieved over the last few years. Our results show that variability in single neuron parameters is the dominant source of functional specialization, leading to highly proficient microcircuits with much higher computational power than their homogeneous counterparts. We further show that fully heterogeneous circuits, which are closest to the biophysical reality, owe their response properties to the differential contribution of different sources of heterogeneity.
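As a toy illustration of the kind of single-neuron heterogeneity the study manipulates (a minimal sketch, not the authors' layer 2/3 model; all parameters are made-up), one can compare leaky integrate-and-fire populations with homogeneous versus heterogeneous firing thresholds driven by the same constant input:

```python
import numpy as np

def lif_rates(thresholds, i_ext=1.5, tau=20.0, v_reset=0.0, t_max=1000.0, dt=0.1):
    """Firing rates (spikes/s, times in ms) of independent LIF neurons
    that all receive the same constant input but differ in threshold."""
    n = len(thresholds)
    v = np.zeros(n)
    spikes = np.zeros(n)
    for _ in range(int(t_max / dt)):
        v += dt / tau * (-v + i_ext)
        fired = v >= thresholds
        spikes += fired
        v[fired] = v_reset
    return spikes / (t_max / 1000.0)

rng = np.random.default_rng(1)
homog = lif_rates(np.full(50, 1.0))              # identical thresholds
heter = lif_rates(rng.uniform(0.8, 1.2, 50))     # heterogeneous thresholds
print(homog.std(), heter.std())                  # spread of firing rates
```

The homogeneous population responds identically neuron-for-neuron, while the heterogeneous one produces a spread of rates, the raw material for the functional specialization discussed above.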
Collapse
Affiliation(s)
- Renato Duarte
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (JBI-1 / INM-10), Jülich Research Centre, Jülich, Germany
- Bernstein Center Freiburg, Albert-Ludwig University of Freiburg, Freiburg im Breisgau, Germany
- Faculty of Biology, Albert-Ludwig University of Freiburg, Freiburg im Breisgau, Germany
- Institute of Adaptive and Neural Computation, School of Informatics, University of Edinburgh, Edinburgh, United Kingdom
| | - Abigail Morrison
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (JBI-1 / INM-10), Jülich Research Centre, Jülich, Germany
- Bernstein Center Freiburg, Albert-Ludwig University of Freiburg, Freiburg im Breisgau, Germany
- Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr-University Bochum, Bochum, Germany
| |
Collapse
|
15
|
Gu QLL, Li S, Dai WP, Zhou D, Cai D. Balanced Active Core in Heterogeneous Neuronal Networks. Front Comput Neurosci 2019; 12:109. [PMID: 30745868 PMCID: PMC6360995 DOI: 10.3389/fncom.2018.00109] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2018] [Accepted: 12/21/2018] [Indexed: 11/23/2022] Open
Abstract
It is hypothesized that cortical neuronal circuits operate in a global balanced state, i.e., the majority of neurons fire irregularly by receiving balanced inputs of excitation and inhibition. Meanwhile, it has been observed in experiments that sensory information is often sparsely encoded by only a small set of firing neurons, while neurons in the rest of the network are silent. The phenomenon of sparse coding challenges the hypothesis of a global balanced state in the brain. To reconcile this, here we address the issue of whether a balanced state can exist in a small number of firing neurons by taking account of the heterogeneity of network structure such as scale-free and small-world networks. We propose necessary conditions and show that, under these conditions, for sparsely but strongly connected heterogeneous networks with various types of single-neuron dynamics, despite the fact that the whole network receives external inputs, there is a small active subnetwork (active core) inherently embedded within it. The neurons in this active core have relatively high firing rates while the neurons in the rest of the network are quiescent. Surprisingly, although the whole network is heterogeneous and unbalanced, the active core possesses a balanced state and its connectivity structure is close to a homogeneous Erdös-Rényi network. The dynamics of the active core can be well-predicted using the Fokker-Planck equation. Our results suggest that the balanced state may be maintained by a small group of spiking neurons embedded in a large heterogeneous network in the brain. The existence of the small active core reconciles the balanced state and the sparse coding, and also provides a potential dynamical scenario underlying sparse coding in neuronal networks.
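The idea of a densely connected core embedded in a heterogeneous network can be illustrated with a minimal sketch (not the paper's model; the heavy-tailed propensity distribution and the 5% cutoff are our own assumptions): when connection probabilities carry a heavy-tailed per-neuron factor, the subnetwork of top-degree neurons is far denser than the network as a whole.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
# Heavy-tailed connection propensities (a stand-in for a scale-free network).
w = rng.pareto(2.0, n) + 1.0
p = np.minimum(0.5, 0.002 * np.outer(w, w))      # P(i -> j), capped at 0.5
adj = (rng.random((n, n)) < p).astype(int)
np.fill_diagonal(adj, 0)

deg = adj.sum(axis=0) + adj.sum(axis=1)
core = np.argsort(deg)[-50:]                     # top 5% by degree: candidate core
sub = adj[np.ix_(core, core)]
core_density = sub.sum() / (50 * 49)
full_density = adj.sum() / (n * (n - 1))
print(core_density, full_density)                # core is much denser
```

The strongly interconnected core is the natural candidate for the "active core" whose recurrent excitation and inhibition can balance even while the rest of the network stays quiescent.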
Collapse
Affiliation(s)
- Qing-Long L Gu
- School of Mathematical Sciences, MOE-LSC, and Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, China
| | - Songting Li
- School of Mathematical Sciences, MOE-LSC, and Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, China
| | - Wei P Dai
- Department of Physics and Astronomy, and Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, China
| | - Douglas Zhou
- School of Mathematical Sciences, MOE-LSC, and Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, China
| | - David Cai
- School of Mathematical Sciences, MOE-LSC, and Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, China; Courant Institute of Mathematical Sciences and Center for Neural Science, New York University, New York, NY, United States; NYUAD Institute, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
| |
Collapse
|
16
|
Wu S, Zhang Y, Cui Y, Li H, Wang J, Guo L, Xia Y, Yao D, Xu P, Guo D. Heterogeneity of synaptic input connectivity regulates spike-based neuronal avalanches. Neural Netw 2018; 110:91-103. [PMID: 30508808 DOI: 10.1016/j.neunet.2018.10.017] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2018] [Revised: 09/26/2018] [Accepted: 10/30/2018] [Indexed: 10/27/2022]
Abstract
Our mysterious brain is believed to operate near a non-equilibrium point and generate critical self-organized avalanches in neuronal activity. A central topic in neuroscience is to elucidate the underlying circuitry mechanisms of neuronal avalanches in the brain. Recent experimental evidence has revealed significant heterogeneity in both synaptic input and output connectivity, but whether the structural heterogeneity participates in the regulation of neuronal avalanches remains poorly understood. By computational modeling, we predict that different types of structural heterogeneity contribute distinct effects on avalanche neurodynamics. In particular, neuronal avalanches can be triggered at an intermediate level of input heterogeneity, but heterogeneous output connectivity cannot evoke avalanche dynamics. In the criticality region, the co-emergence of multi-scale cortical activities is observed, and both the avalanche dynamics and neuronal oscillations are modulated by the input heterogeneity. Remarkably, we show similar results can be reproduced in networks with various types of in- and out-degree distributions. Overall, these findings not only provide details on the underlying circuitry mechanisms of nonrandom synaptic connectivity in the regulation of neuronal avalanches, but also inspire testable hypotheses for future experimental studies.
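The avalanche statistics discussed above are conventionally extracted from binned population activity; a minimal sketch of that bookkeeping (a common convention in the avalanche literature, not code from this paper):

```python
import numpy as np

def avalanche_sizes(spike_counts):
    """Avalanche sizes from a binned population spike-count series.

    An avalanche is a maximal run of consecutive non-empty time bins;
    its size is the total number of spikes in the run.
    """
    sizes, current = [], 0
    for c in spike_counts:
        if c > 0:
            current += c
        elif current > 0:
            sizes.append(current)
            current = 0
    if current > 0:
        sizes.append(current)
    return sizes

counts = [0, 3, 1, 0, 0, 2, 2, 5, 0, 1]          # toy binned population activity
print(avalanche_sizes(counts))                   # [4, 9, 1]
```

Criticality is then assessed from the distribution of these sizes (e.g., whether it approximates a power law), which is how input heterogeneity's effect on avalanche dynamics would be quantified.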
Collapse
Affiliation(s)
- Shengdun Wu
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu 611731, People's Republic of China
| | - Yangsong Zhang
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu 611731, People's Republic of China; School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu 611731, People's Republic of China
| | - Yan Cui
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu 611731, People's Republic of China
| | - Heng Li
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu 611731, People's Republic of China
| | - Jiakang Wang
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu 611731, People's Republic of China
| | - Lijun Guo
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu 611731, People's Republic of China
| | - Yang Xia
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu 611731, People's Republic of China; School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu 611731, People's Republic of China
| | - Dezhong Yao
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu 611731, People's Republic of China; School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu 611731, People's Republic of China
| | - Peng Xu
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu 611731, People's Republic of China; School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu 611731, People's Republic of China
| | - Daqing Guo
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu 611731, People's Republic of China; School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu 611731, People's Republic of China.
| |
Collapse
|
17
|
Martí D, Brunel N, Ostojic S. Correlations between synapses in pairs of neurons slow down dynamics in randomly connected neural networks. Phys Rev E 2018; 97:062314. [PMID: 30011528 DOI: 10.1103/physreve.97.062314] [Citation(s) in RCA: 30] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2017] [Indexed: 01/11/2023]
Abstract
Networks of randomly connected neurons are among the most popular models in theoretical neuroscience. The connectivity between neurons in the cortex is however not fully random, the simplest and most prominent deviation from randomness found in experimental data being the overrepresentation of bidirectional connections among pyramidal cells. Using numerical and analytical methods, we investigate the effects of partially symmetric connectivity on the dynamics in networks of rate units. We consider the two dynamical regimes exhibited by random neural networks: the weak-coupling regime, where the firing activity decays to a single fixed point unless the network is stimulated, and the strong-coupling or chaotic regime, characterized by internally generated fluctuating firing rates. In the weak-coupling regime, we compute analytically, for an arbitrary degree of symmetry, the autocorrelation of network activity in the presence of external noise. In the chaotic regime, we perform simulations to determine the timescale of the intrinsic fluctuations. In both cases, symmetry increases the characteristic asymptotic decay time of the autocorrelation function and therefore slows down the dynamics in the network.
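The slowing effect of symmetry can be seen in a small numerical sketch (ours, not the paper's calculation; g and n are illustrative): in the weak-coupling linearized dynamics tau*dx/dt = -x + J*x + noise, the slowest autocorrelation decay time is tau/(1 - max Re eigenvalue of J), and partial symmetry pushes the largest real eigenvalue part outward.

```python
import numpy as np

rng = np.random.default_rng(3)
n, g = 400, 0.4
a = rng.standard_normal((n, n))

def max_real_eig(eta):
    """Largest real part of the spectrum of
    J = g * (A + eta*A.T) / sqrt(n * (1 + eta**2)),
    a random coupling matrix with symmetry parameter eta
    (0 = fully asymmetric, 1 = fully symmetric)."""
    j = g * (a + eta * a.T) / np.sqrt(n * (1 + eta ** 2))
    return np.linalg.eigvals(j).real.max()

asym, sym = max_real_eig(0.0), max_real_eig(1.0)
# Slowest decay time of the autocorrelation, in units of tau:
print(1.0 / (1.0 - asym), 1.0 / (1.0 - sym))     # symmetric case is slower
```

For the asymmetric case the spectrum fills a disk of radius roughly g, while full symmetry stretches it along the real axis toward roughly 2g, lengthening the characteristic decay time accordingly.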
Collapse
Affiliation(s)
- Daniel Martí
- Laboratoire de Neurosciences Cognitives, Inserm UMR No. 960, Ecole Normale Supérieure, PSL Research University, 75230 Paris, France
| | - Nicolas Brunel
- Department of Statistics and Department of Neurobiology, University of Chicago, Chicago, Illinois 60637, USA; Department of Neurobiology and Department of Physics, Duke University, Durham, North Carolina 27710, USA
| | - Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives, Inserm UMR No. 960, Ecole Normale Supérieure, PSL Research University, 75230 Paris, France
| |
Collapse
|
18
|
Heiberg T, Kriener B, Tetzlaff T, Einevoll GT, Plesser HE. Firing-rate models for neurons with a broad repertoire of spiking behaviors. J Comput Neurosci 2018; 45:103-132. [PMID: 30146661 PMCID: PMC6208914 DOI: 10.1007/s10827-018-0693-9] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2018] [Revised: 08/01/2018] [Accepted: 08/02/2018] [Indexed: 11/29/2022]
Abstract
Capturing the response behavior of spiking neuron models with rate-based models facilitates the investigation of neuronal networks using powerful methods for rate-based network dynamics. To this end, we investigate the responses of two widely used neuron model types, the Izhikevich and augmented multi-adaptive threshold (AMAT) models, to spiking inputs ranging from step responses to natural spike data. We find (i) that linear-nonlinear firing rate models fitted to test data can be used to describe the firing-rate responses of AMAT and Izhikevich spiking neuron models in many cases; (ii) that firing-rate responses are generally too complex to be captured by first-order low-pass filters but require bandpass filters instead; (iii) that linear-nonlinear models capture the response of AMAT models better than that of Izhikevich models; (iv) that the wide range of response types evoked by current-injection experiments collapses to few response types when neurons are driven by stationary or sinusoidally modulated Poisson input; and (v) that AMAT and Izhikevich models show different responses to spike input despite identical responses to current injections. Together, these findings suggest that rate-based models of network dynamics may capture a wider range of neuronal response properties by incorporating second-order bandpass filters fitted to responses of spiking model neurons. These models may contribute to bringing rate-based network modeling closer to the reality of biological neuronal networks.
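A minimal sketch of a linear-nonlinear rate model with a second-order bandpass linear stage, as finding (ii) suggests (the filter time constants and rectifying nonlinearity are our illustrative choices, not the fitted models from the paper):

```python
import numpy as np

def ln_response(stim, dt=1.0, tau_fast=5.0, tau_slow=50.0, gain=1.0, theta=0.0):
    """Linear-nonlinear firing-rate model: a bandpass linear stage built as a
    difference of exponentials, followed by a rectifying nonlinearity."""
    t = np.arange(0.0, 300.0, dt)
    kernel = np.exp(-t / tau_fast) / tau_fast - np.exp(-t / tau_slow) / tau_slow
    drive = np.convolve(stim, kernel)[: len(stim)] * dt   # causal filtering
    return np.maximum(0.0, gain * (drive - theta))        # rectified output rate

stim = np.zeros(500)
stim[100:] = 1.0                       # step input at t = 100
rate = ln_response(stim)
print(rate[150], rate[-1])             # transient response, then decay toward baseline
```

Because the two exponentials have (approximately) cancelling areas, the filter passes transients and attenuates sustained input, which is the qualitative behavior a first-order low-pass stage cannot reproduce.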
Collapse
Affiliation(s)
- Thomas Heiberg
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
| | - Birgit Kriener
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway; Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway
| | - Tom Tetzlaff
- Institute of Neuroscience and Medicine (INM-6), Jülich Research Centre, Jülich, Germany; Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany; JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
| | - Gaute T Einevoll
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway; Department of Physics, University of Oslo, Oslo, Norway
| | - Hans E Plesser
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway; Institute of Neuroscience and Medicine (INM-6), Jülich Research Centre, Jülich, Germany.
| |
Collapse
|
19
|
Zhao J, Qin YM, Che YQ. Effects of topologies on signal propagation in feedforward networks. CHAOS (WOODBURY, N.Y.) 2018; 28:013117. [PMID: 29390642 DOI: 10.1063/1.4999996] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
We systematically investigate the effects of topologies on signal propagation in feedforward networks (FFNs) based on the FitzHugh-Nagumo neuron model. FFNs with different topological structures are constructed with the same number of both in-degrees and out-degrees in each layer and are given the same input signal. The propagation of firing patterns and firing rates is found to be affected by the distribution of neuron connections in the FFNs. Synchronous firing patterns emerge in the later layers of FFNs with identical, uniform, and exponential degree distributions, but the number of synchronous spike trains in the output layers differs markedly across the three topologies. The firing rates in the output layers of the three FFNs can be ordered from high to low according to their topological structures as exponential, uniform, and identical distributions, respectively. Interestingly, the sequence of spiking regularity in the output layers of the three FFNs is consistent with the firing rates, but their firing synchronization is in the opposite order. In summary, the node degree is an important factor that can dramatically influence neuronal network activity.
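The single-neuron model underlying these FFNs can be sketched as follows (a generic Euler integration of the FitzHugh-Nagumo equations with textbook parameters; the layered network construction itself is omitted):

```python
import numpy as np

def fitzhugh_nagumo(i_ext, t_max=200.0, dt=0.01, a=0.7, b=0.8, tau=12.5):
    """Euler integration of the FitzHugh-Nagumo model:
    dv/dt = v - v**3/3 - w + I_ext,  dw/dt = (v + a - b*w) / tau.
    Returns the membrane-variable trace v(t)."""
    steps = int(t_max / dt)
    v, w = -1.0, -0.5
    trace = np.empty(steps)
    for k in range(steps):
        v += dt * (v - v ** 3 / 3.0 - w + i_ext)
        w += dt * (v + a - b * w) / tau
        trace[k] = v
    return trace

quiet = fitzhugh_nagumo(0.0)      # weak input: settles to rest
active = fitzhugh_nagumo(0.5)     # stronger input: repetitive spiking
print(np.ptp(quiet[-10000:]), np.ptp(active[-10000:]))   # late-time amplitude
```

In the FFN setting, each neuron's `i_ext` would be replaced by synaptic input summed from the previous layer, so the degree distribution controls how much drive each neuron receives.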
Collapse
Affiliation(s)
- Jia Zhao
- Key Laboratory of Cognition and Personality (Ministry of Education) and Faculty of Psychology, Southwest University, Chongqing 400715, China
| | - Ying-Mei Qin
- Tianjin Key Laboratory of Information Sensing and Intelligent Control, Tianjin University of Technology and Education, Tianjin 300222, China
| | - Yan-Qiu Che
- Tianjin Key Laboratory of Information Sensing and Intelligent Control, Tianjin University of Technology and Education, Tianjin 300222, China
| |
Collapse
|
20
|
Berg RW. Neuronal Population Activity in Spinal Motor Circuits: Greater Than the Sum of Its Parts. Front Neural Circuits 2017; 11:103. [PMID: 29311842 PMCID: PMC5742103 DOI: 10.3389/fncir.2017.00103] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2017] [Accepted: 11/29/2017] [Indexed: 11/27/2022] Open
Abstract
The core elements of stereotypical movements such as locomotion, scratching and breathing are generated by networks in the lower brainstem and the spinal cord. Ensemble activities in spinal motor networks had until recently been merely a black box, but with the emergence of ultra-thin silicon multi-electrode technology it has become possible to reveal the spiking activity of larger parts of the network. A series of experiments revealed unexpected features of spinal networks, such as multiple spiking regimes and lognormal firing rate distributions. The lognormality renders the widespread idea of a typical firing rate ± standard deviation an ill-suited description, and therefore these findings define a new arithmetic of motor networks. Focusing on the population activity behind motor pattern generation, this review summarizes this advance and discusses its implications.
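Why mean ± SD is an ill-suited summary of a lognormal firing-rate distribution can be seen directly (the distribution parameters here are illustrative, not fitted to the recordings):

```python
import numpy as np

rng = np.random.default_rng(4)
# Lognormal firing-rate distribution; mu and sigma are illustrative only.
rates = rng.lognormal(mean=0.0, sigma=1.5, size=100_000)

mean, sd, median = rates.mean(), rates.std(), float(np.median(rates))
print(mean, sd, median)
# The "typical rate minus SD" bound is negative, and most neurons fire
# below the population mean: mean +/- SD mischaracterizes the distribution.
print(mean - sd, float(np.mean(rates < mean)))
```

For skewed distributions like this, the median (or the geometric mean and a multiplicative spread factor) is the more faithful "typical" value.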
Collapse
Affiliation(s)
- Rune W. Berg
- Department of Neuroscience, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
| |
Collapse
|
21
|
Ocker GK, Hu Y, Buice MA, Doiron B, Josić K, Rosenbaum R, Shea-Brown E. From the statistics of connectivity to the statistics of spike times in neuronal networks. Curr Opin Neurobiol 2017; 46:109-119. [PMID: 28863386 DOI: 10.1016/j.conb.2017.07.011] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2017] [Revised: 07/21/2017] [Accepted: 07/27/2017] [Indexed: 10/19/2022]
Abstract
An essential step toward understanding neural circuits is linking their structure and their dynamics. In general, this relationship can be almost arbitrarily complex. Recent theoretical work has, however, begun to identify some broad principles underlying collective spiking activity in neural circuits. The first is that local features of network connectivity can be surprisingly effective in predicting global statistics of activity across a network. The second is that, for the important case of large networks with excitatory-inhibitory balance, correlated spiking persists or vanishes depending on the spatial scales of recurrent and feedforward connectivity. We close by showing how these ideas, together with plasticity rules, can help to close the loop between network structure and activity statistics.
Collapse
Affiliation(s)
| | - Yu Hu
- Center for Brain Science, Harvard University, United States
| | - Michael A Buice
- Allen Institute for Brain Science, United States; Department of Applied Mathematics, University of Washington, United States
| | - Brent Doiron
- Department of Mathematics, University of Pittsburgh, United States; Center for the Neural Basis of Cognition, Pittsburgh, United States
| | - Krešimir Josić
- Department of Mathematics, University of Houston, United States; Department of Biology and Biochemistry, University of Houston, United States; Department of BioSciences, Rice University, United States
| | - Robert Rosenbaum
- Department of Mathematics, University of Notre Dame, United States
| | - Eric Shea-Brown
- Allen Institute for Brain Science, United States; Department of Applied Mathematics, University of Washington, United States; Department of Physiology and Biophysics, and University of Washington Institute for Neuroengineering, United States.
| |
Collapse
|
22
|
On the Structure of Cortical Microcircuits Inferred from Small Sample Sizes. J Neurosci 2017; 37:8498-8510. [PMID: 28760860 DOI: 10.1523/jneurosci.0984-17.2017] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2017] [Revised: 06/23/2017] [Accepted: 07/18/2017] [Indexed: 02/05/2023] Open
Abstract
The structure in cortical microcircuits deviates from what would be expected in a purely random network, which has been seen as evidence of clustering. To address this issue, we sought to reproduce the nonrandom features of cortical circuits by considering several distinct classes of network topology, including clustered networks, networks with distance-dependent connectivity, and those with broad degree distributions. To our surprise, we found that all of these qualitatively distinct topologies could account equally well for all reported nonrandom features despite being easily distinguishable from one another at the network level. This apparent paradox was a consequence of estimating network properties given only small sample sizes. In other words, networks that differ markedly in their global structure can look quite similar locally. This makes inferring network structure from small sample sizes, a necessity given the technical difficulty inherent in simultaneous intracellular recordings, problematic. We found that a network statistic called the sample degree correlation (SDC) overcomes this difficulty. The SDC depends only on parameters that can be estimated reliably given small sample sizes and is an accurate fingerprint of every topological family. We applied the SDC criterion to data from rat visual and somatosensory cortex and discovered that the connectivity was not consistent with any of these main topological classes. However, we were able to fit the experimental data with a more general network class, of which all previous topologies were special cases. The resulting network topology could be interpreted as a combination of physical spatial dependence and nonspatial, hierarchical clustering. SIGNIFICANCE STATEMENT: The connectivity of cortical microcircuits exhibits features that are inconsistent with a simple random network.
Here, we show that several classes of network models can account for this nonrandom structure despite qualitative differences in their global properties. This apparent paradox is a consequence of the small numbers of simultaneously recorded neurons in experiment: when inferred via small sample sizes, many networks may be indistinguishable despite being globally distinct. We develop a connectivity measure that successfully classifies networks even when estimated locally with a few neurons at a time. We show that data from rat cortex is consistent with a network in which the likelihood of a connection between neurons depends on spatial distance and on nonspatial, asymmetric clustering.
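The sampling problem described above is easy to reproduce (a minimal sketch with our own toy networks, not the paper's models or its SDC statistic): two globally different networks can yield nearly identical first-order statistics when probed a few neurons at a time.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 500, 0.1

# Two globally different networks with the same mean connection probability:
# an Erdos-Renyi graph and a network with broad (heterogeneous) out-degrees.
er = (rng.random((n, n)) < p).astype(int)
prop = rng.pareto(3.0, n) + 0.2                   # hypothetical per-neuron propensity
prop *= p / prop.mean()                           # match the mean probability
het = (rng.random((n, n)) < np.minimum(1.0, prop)[:, None]).astype(int)
for m in (er, het):
    np.fill_diagonal(m, 0)

def sampled_density(adj, k=4, trials=2000):
    """Mean connection density over many random samples of k neurons,
    mimicking small simultaneous-recording experiments."""
    est = []
    for _ in range(trials):
        idx = rng.choice(n, size=k, replace=False)
        sub = adj[np.ix_(idx, idx)]
        est.append(sub.sum() / (k * (k - 1)))
    return float(np.mean(est))

# Small samples see almost identical densities in both networks.
print(sampled_density(er), sampled_density(het))
```

Distinguishing the two requires a statistic that, like the SDC described above, remains informative about global structure while being estimable from small samples.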
Collapse
|
23
|
Hennequin G, Agnes EJ, Vogels TP. Inhibitory Plasticity: Balance, Control, and Codependence. Annu Rev Neurosci 2017; 40:557-579. [DOI: 10.1146/annurev-neuro-072116-031005] [Citation(s) in RCA: 140] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Affiliation(s)
- Guillaume Hennequin
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge CB2 3EJ, United Kingdom
| | - Everton J. Agnes
- Centre for Neural Circuits and Behaviour, Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford OX1 3SR, United Kingdom
| | - Tim P. Vogels
- Centre for Neural Circuits and Behaviour, Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford OX1 3SR, United Kingdom
| |
Collapse
|
24
|
Mirzakhalili E, Gourgou E, Booth V, Epureanu B. Synaptic Impairment and Robustness of Excitatory Neuronal Networks with Different Topologies. Front Neural Circuits 2017; 11:38. [PMID: 28659765 PMCID: PMC5468411 DOI: 10.3389/fncir.2017.00038] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2016] [Accepted: 05/22/2017] [Indexed: 11/13/2022] Open
Abstract
Synaptic deficiencies are a known hallmark of neurodegenerative diseases, but the diagnosis of impaired synapses on the cellular level is not an easy task. Nonetheless, changes in the system-level dynamics of neuronal networks with damaged synapses can be detected using techniques that do not require high spatial resolution. This paper investigates how the structure/topology of neuronal networks influences their dynamics when they suffer from synaptic loss. We study different neuronal network structures/topologies by specifying their degree distributions. The modes of the degree distribution can be used to construct networks that consist of rich clubs and also resemble small-world networks. We define two dynamical metrics to compare the activity of networks with different structures: persistent activity (namely, the self-sustained activity of the network upon removal of the initial stimulus) and quality of activity (namely, the percentage of neurons that participate in the persistent activity of the network). Our results show that synaptic loss affects the persistent activity of networks with bimodal degree distributions less than it affects random networks. The robustness of neuronal networks increases as the distance between the modes of the degree distribution increases, suggesting that the rich clubs of networks with distinct modes keep the whole network active. In addition, a tradeoff is observed between the quality of activity and the persistent activity. For a range of distributions, both of these dynamical metrics are considerably higher for networks with a bimodal degree distribution than for random networks. We also propose three different scenarios of synaptic impairment, which may correspond to different pathological or biological conditions. Regardless of the network structure/topology, results demonstrate that synaptic loss has more severe effects on the activity of the network when impairments are correlated with the activity of the neurons.
Collapse
Affiliation(s)
- Ehsan Mirzakhalili
- Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI, United States
| | - Eleni Gourgou
- Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI, United States; Division of Geriatrics, Department of Internal Medicine, Medical School, University of Michigan, Ann Arbor, MI, United States
| | - Victoria Booth
- Department of Mathematics, University of Michigan, Ann Arbor, MI, United States; Department of Anesthesiology, Medical School, University of Michigan, Ann Arbor, MI, United States
| | - Bogdan Epureanu
- Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI, United States
| |
Collapse
|
25
|
Abstract
Recent experimental advances are producing an avalanche of data on both neural connectivity and neural activity. To take full advantage of these two emerging datasets we need a framework that links them, revealing how collective neural activity arises from the structure of neural connectivity and intrinsic neural dynamics. This problem of structure-driven activity has drawn major interest in computational neuroscience. Existing methods for relating activity and architecture in spiking networks rely on linearizing activity around a central operating point and thus fail to capture the nonlinear responses of individual neurons that are the hallmark of neural information processing. Here, we overcome this limitation and present a new relationship between connectivity and activity in networks of nonlinear spiking neurons by developing a diagrammatic fluctuation expansion based on statistical field theory. We explicitly show how recurrent network structure produces pairwise and higher-order correlated activity, and how nonlinearities impact the networks’ spiking activity. Our findings open new avenues to investigating how single-neuron nonlinearities—including those of different cell types—combine with connectivity to shape population activity and function. Neuronal networks, like many biological systems, exhibit variable activity. This activity is shaped by both the underlying biology of the component neurons and the structure of their interactions. How can we combine knowledge of these two things—that is, models of individual neurons and of their interactions—to predict the statistics of single- and multi-neuron activity? Current approaches rely on linearizing neural activity around a stationary state. In the face of neural nonlinearities, however, these linear methods can fail to predict spiking statistics and even fail to correctly predict whether activity is stable or pathological. 
Here, we show how to calculate any spike train cumulant in a broad class of models, while systematically accounting for nonlinear effects. We then study a fundamental effect of nonlinear input-rate transfer, namely the coupling between different orders of spiking statistics, and how this depends on single-neuron and network properties.
Collapse
Affiliation(s)
- Gabriel Koch Ocker
- Allen Institute for Brain Science, Seattle, Washington, United States of America
| | - Krešimir Josić
- Department of Mathematics and Department of Biology and Biochemistry, University of Houston, Houston, Texas, United States of America
- Department of BioSciences, Rice University, Houston, Texas, United States of America
| | - Eric Shea-Brown
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America
- Department of Physiology and Biophysics, and UW Institute of Neuroengineering, University of Washington, Seattle, Washington, United States of America
| | - Michael A. Buice
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America
| |
Collapse
|
26
|
Setareh H, Deger M, Petersen CCH, Gerstner W. Cortical Dynamics in Presence of Assemblies of Densely Connected Weight-Hub Neurons. Front Comput Neurosci 2017; 11:52. [PMID: 28690508 PMCID: PMC5480278 DOI: 10.3389/fncom.2017.00052] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2017] [Accepted: 05/29/2017] [Indexed: 01/21/2023] Open
Abstract
Experimental measurements of pairwise connection probability of pyramidal neurons together with the distribution of synaptic weights have been used to construct randomly connected model networks. However, several experimental studies suggest that both wiring and synaptic weight structure between neurons show statistics that differ from random networks. Here we study a network containing a subset of neurons, which we call weight-hub neurons, characterized by strong inward synapses. We propose a connectivity structure for excitatory neurons that contains assemblies of densely connected weight-hub neurons, while the pairwise connection probability and synaptic weight distribution remain consistent with experimental data. Simulations of such a network with generalized integrate-and-fire neurons display regular and irregular slow oscillations akin to experimentally observed up/down state transitions in the activity of cortical neurons with a broad distribution of pairwise spike correlations. Moreover, stimulation of a model network in the presence or absence of assembly structure exhibits responses similar to light-evoked responses of cortical layers in optogenetically modified animals. We conclude that a high connection probability into and within assemblies of excitatory weight-hub neurons, as is likely present in some but not all cortical layers, changes the dynamics of a layer of cortical microcircuitry significantly.
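A hedged sketch of the kind of construction described above: keep the pairwise connection probability fixed and strengthen only the inward synaptic weights of a designated hub subset. The parameters, the lognormal weight distribution, and the 3x inward scaling are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(8)
N, p = 400, 0.1
A = (rng.random((N, N)) < p).astype(float)        # A[i, j] = 1 if j -> i
np.fill_diagonal(A, 0)
W = A * rng.lognormal(mean=-0.7, sigma=1.0, size=(N, N))  # lognormal weights

hubs = rng.choice(N, size=N // 5, replace=False)  # 20% weight-hub neurons
W[hubs, :] *= 3.0   # strengthen their *incoming* synapses; adjacency unchanged
```

The key design point mirrored here is that the hubs differ only in total inward weight: the connection probability (and hence any pairwise-connectivity measurement) is untouched.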
Collapse
Affiliation(s)
- Hesam Setareh
- Laboratory of Computational Neuroscience, School of Computer and Communication Sciences and Brain Mind Institute, School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| | - Moritz Deger
- Laboratory of Computational Neuroscience, School of Computer and Communication Sciences and Brain Mind Institute, School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland; Faculty of Mathematics and Natural Sciences, Institute for Zoology, University of Cologne, Cologne, Germany
| | - Carl C H Petersen
- Laboratory of Sensory Processing, Brain Mind Institute, School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| | - Wulfram Gerstner
- Laboratory of Computational Neuroscience, School of Computer and Communication Sciences and Brain Mind Institute, School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| |
Collapse
|
27
|
Gal E, London M, Globerson A, Ramaswamy S, Reimann MW, Muller E, Markram H, Segev I. Rich cell-type-specific network topology in neocortical microcircuitry. Nat Neurosci 2017; 20:1004-1013. [DOI: 10.1038/nn.4576] [Citation(s) in RCA: 87] [Impact Index Per Article: 12.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2017] [Accepted: 05/03/2017] [Indexed: 12/14/2022]
|
28
|
Nykamp DQ, Friedman D, Shaker S, Shinn M, Vella M, Compte A, Roxin A. Mean-field equations for neuronal networks with arbitrary degree distributions. Phys Rev E 2017; 95:042323. [PMID: 28505854 DOI: 10.1103/physreve.95.042323] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2016] [Indexed: 06/07/2023]
Abstract
The emergent dynamics in networks of recurrently coupled spiking neurons depends on the interplay between single-cell dynamics and network topology. Most theoretical studies on network dynamics have assumed simple topologies, such as connections made randomly and independently with a fixed probability (Erdős-Rényi, ER, networks) or all-to-all connected networks. However, recent findings from slice experiments suggest that the actual patterns of connectivity between cortical neurons are more structured than in the ER random network. Here we explore how introducing additional higher-order statistical structure into the connectivity can affect the dynamics in neuronal networks. Specifically, we consider networks in which the number of presynaptic and postsynaptic contacts for each neuron, the degrees, are drawn from a joint degree distribution. We derive mean-field equations for a single population of homogeneous neurons and for a network of excitatory and inhibitory neurons, where the neurons can have arbitrary degree distributions. Through analysis of the mean-field equations and simulation of networks of integrate-and-fire neurons, we show that such networks have potentially much richer dynamics than an equivalent ER network. Finally, we relate the degree distributions to so-called cortical motifs.
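Networks with a prescribed joint in/out-degree distribution, as studied above, can be sampled with a directed configuration model. A minimal sketch (the particular joint distribution, built from a shared Poisson component to induce in/out correlation, is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200
# Correlated in/out degree sequence: a shared component induces correlation
shared = rng.poisson(8, N)
k_in = shared + rng.poisson(2, N)
k_out = shared + rng.poisson(2, N)
# Balance the totals so every out-stub can be matched to an in-stub
diff = int(k_in.sum() - k_out.sum())
k_out[0] += max(diff, 0)
k_in[0] += max(-diff, 0)

# Directed configuration model: randomly pair out-stubs with in-stubs
out_stubs = np.repeat(np.arange(N), k_out)
in_stubs = np.repeat(np.arange(N), k_in)
rng.shuffle(out_stubs)
rng.shuffle(in_stubs)
edges = list(zip(out_stubs, in_stubs))  # may contain self-loops / multi-edges
```

Each neuron realizes its prescribed degrees exactly, which is the regime the mean-field equations above are written for; self-loops and multi-edges become negligible for large sparse networks.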
Collapse
Affiliation(s)
- Duane Q Nykamp
- School of Mathematics, University of Minnesota 127 Vincent Hall, Minneapolis, Minnesota 55455, USA
| | - Daniel Friedman
- School of Mathematics, University of Minnesota 127 Vincent Hall, Minneapolis, Minnesota 55455, USA
| | - Sammy Shaker
- School of Mathematics, University of Minnesota 127 Vincent Hall, Minneapolis, Minnesota 55455, USA
| | - Maxwell Shinn
- School of Mathematics, University of Minnesota 127 Vincent Hall, Minneapolis, Minnesota 55455, USA
| | - Michael Vella
- School of Mathematics, University of Minnesota 127 Vincent Hall, Minneapolis, Minnesota 55455, USA
| | - Albert Compte
- Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Carrer Rosselló 149, 08036 Barcelona, Spain
| | - Alex Roxin
- Centre de Recerca Matemàtica, Campus de Bellaterra, Edifici C 08193 Bellaterra, Spain
| |
Collapse
|
29
|
Aljadeff J, Renfrew D, Vegué M, Sharpee TO. Low-dimensional dynamics of structured random networks. Phys Rev E 2016; 93:022302. [PMID: 26986347 DOI: 10.1103/physreve.93.022302] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2015] [Indexed: 01/12/2023]
Abstract
Using a generalized random recurrent neural network model, and by extending our recently developed mean-field approach [J. Aljadeff, M. Stern, and T. Sharpee, Phys. Rev. Lett. 114, 088101 (2015)], we study the relationship between the network connectivity structure and its low-dimensional dynamics. Each connection in the network is a random number with mean 0 and variance that depends on pre- and postsynaptic neurons through a sufficiently smooth function g of their identities. We find that these networks undergo a phase transition from a silent to a chaotic state at a critical point we derive as a function of g. Above the critical point, although unit activation levels are chaotic, their autocorrelation functions are restricted to a low-dimensional subspace. This provides a direct link between the network's structure and some of its functional characteristics. We discuss example applications of the general results to neuroscience where we derive the support of the spectrum of connectivity matrices with heterogeneous and possibly correlated degree distributions, and to ecology where we study the stability of the cascade model for food web structure.
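The link claimed above between the variance function g and the spectrum of the connectivity matrix can be checked numerically for a two-block network. A sketch under the standard random-matrix result that the support radius satisfies r² = largest eigenvalue of the block-variance matrix weighted by block fractions (block sizes and variances below are illustrative assumptions, not the paper's examples):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 600
frac = np.array([0.5, 0.5])          # block fractions
g = np.array([[1.0, 2.0],            # g[a, b]: variance scale for b -> a entries
              [0.5, 0.8]])
sizes = (frac * N).astype(int)
labels = np.repeat([0, 1], sizes)

# J_ij ~ Normal(0, g[label_i, label_j] / N)
sigma = np.sqrt(g[labels][:, labels] / N)
J = rng.normal(0, 1, (N, N)) * sigma

r_emp = np.abs(np.linalg.eigvals(J)).max()
# Theory: r^2 = largest eigenvalue of the matrix with entries frac[b] * g[a, b]
r_th = np.sqrt(np.linalg.eigvals(g * frac[None, :]).real.max())
```

The empirical spectral radius should approach the theoretical support radius as N grows, which is the quantity controlling the silent-to-chaotic transition discussed in the abstract.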
Collapse
Affiliation(s)
- Johnatan Aljadeff
- Department of Neurobiology, University of Chicago, Chicago, Illinois, USA; Computational Neurobiology Laboratory, The Salk Institute for Biological Studies, La Jolla, California, USA
| | - David Renfrew
- Department of Mathematics, University of California Los Angeles, Los Angeles, California, USA
| | - Marina Vegué
- Centre de Recerca Matemàtica, Campus de Bellaterra, Barcelona, Spain; Departament de Matemàtiques, Universitat Politècnica de Catalunya, Barcelona, Spain
| | - Tatyana O Sharpee
- Computational Neurobiology Laboratory, The Salk Institute for Biological Studies, La Jolla, California, USA
| |
Collapse
|
30
|
Wainrib G, Galtier M. Regular graphs maximize the variability of random neural networks. PHYSICAL REVIEW. E, STATISTICAL, NONLINEAR, AND SOFT MATTER PHYSICS 2015; 92:032802. [PMID: 26465523 DOI: 10.1103/physreve.92.032802] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/17/2014] [Indexed: 06/05/2023]
Abstract
In this work we study the dynamics of systems composed of numerous interacting elements interconnected through a random weighted directed graph, such as models of random neural networks. We develop an original theoretical approach based on a combination of a classical mean-field theory originally developed in the context of dynamical spin-glass models, and the heterogeneous mean-field theory developed to study epidemic propagation on graphs. Our main result is that, surprisingly, increasing the variance of the in-degree distribution does not lead to more variable dynamical behavior; on the contrary, the most variable behaviors are obtained in the regular graph setting. We further study how the dynamical complexity of the attractors is influenced by the statistical properties of the in-degree distribution.
Collapse
Affiliation(s)
- Gilles Wainrib
- Ecole Normale Supérieure, Département d'Informatique, équipe DATA, Paris, France
| | - Mathieu Galtier
- European Institute for Theoretical Neuroscience, Paris, France
| |
Collapse
|
31
|
Ocker GK, Litwin-Kumar A, Doiron B. Self-Organization of Microcircuits in Networks of Spiking Neurons with Plastic Synapses. PLoS Comput Biol 2015; 11:e1004458. [PMID: 26291697 PMCID: PMC4546203 DOI: 10.1371/journal.pcbi.1004458] [Citation(s) in RCA: 54] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2014] [Accepted: 07/19/2015] [Indexed: 11/18/2022] Open
Abstract
The synaptic connectivity of cortical networks features an overrepresentation of certain wiring motifs compared to simple random-network models. This structure is shaped, in part, by synaptic plasticity that promotes or suppresses connections between neurons depending on their joint spiking activity. Frequently, theoretical studies focus on how feedforward inputs drive plasticity to create this network structure. We study the complementary scenario of self-organized structure in a recurrent network, with spike timing-dependent plasticity driven by spontaneous dynamics. We develop a self-consistent theory for the evolution of network structure by combining fast spiking covariance with a slow evolution of synaptic weights. Through a finite-size expansion of network dynamics we obtain a low-dimensional set of nonlinear differential equations for the evolution of two-synapse connectivity motifs. With this theory in hand, we explore how the form of the plasticity rule drives the evolution of microcircuits in cortical networks. When potentiation and depression are in approximate balance, synaptic dynamics depend on weighted divergent, convergent, and chain motifs. For additive, Hebbian STDP these motif interactions create instabilities in synaptic dynamics that either promote or suppress the initial network structure. Our work provides a consistent theoretical framework for studying how spiking activity in recurrent networks interacts with synaptic plasticity to determine network structure.
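The two-synapse motifs that drive the theory above (convergent, divergent, chain, and reciprocal) can be counted directly from an adjacency matrix. A sketch on an Erdős-Rényi control, for which all motif excesses over chance should vanish (the `A[i, j] = j -> i` convention and parameters are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
N, p = 400, 0.1
A = (rng.random((N, N)) < p).astype(float)   # A[i, j] = 1 if j -> i
np.fill_diagonal(A, 0)

n2 = N * (N - 1)
n3 = N * (N - 1) * (N - 2)
phat = A.sum() / n2
kin, kout = A.sum(axis=1), A.sum(axis=0)

# Empirical two-synapse motif probabilities and their excess over chance:
p_conv = (kin * (kin - 1)).sum() / n3          # convergent: shared target
p_div = (kout * (kout - 1)).sum() / n3         # divergent: shared source
p_ch = ((A @ A).sum() - np.trace(A @ A)) / n3  # chains k -> j -> i
p_rec = np.trace(A @ A) / n2                   # reciprocal pairs
q = {m: v - phat ** 2 for m, v in
     [("conv", p_conv), ("div", p_div), ("ch", p_ch), ("rec", p_rec)]}
```

In the paper's setting these excesses are the slow variables: STDP driven by spiking covariance moves them away from the ER baseline of zero.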
Collapse
Affiliation(s)
- Gabriel Koch Ocker
- Department of Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Melon University, Pittsburgh, Pennsylvania, United States of America
| | - Ashok Litwin-Kumar
- Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Melon University, Pittsburgh, Pennsylvania, United States of America
- Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
| | - Brent Doiron
- Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Melon University, Pittsburgh, Pennsylvania, United States of America
- Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
| |
Collapse
|
32
|
Schmeltzer C, Kihara AH, Sokolov IM, Rüdiger S. Degree Correlations Optimize Neuronal Network Sensitivity to Sub-Threshold Stimuli. PLoS One 2015; 10:e0121794. [PMID: 26115374 PMCID: PMC4482728 DOI: 10.1371/journal.pone.0121794] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2014] [Accepted: 01/02/2015] [Indexed: 11/19/2022] Open
Abstract
Information processing in the brain crucially depends on the topology of the neuronal connections. We investigate how the topology influences the response of a population of leaky integrate-and-fire neurons to a stimulus. We devise a method to calculate firing rates from a self-consistent system of equations taking into account the degree distribution and degree correlations in the network. We show that assortative degree correlations strongly improve the sensitivity for weak stimuli and propose that such networks possess an advantage in signal processing. We moreover find that there exists an optimum in assortativity at an intermediate level leading to a maximum in input/output mutual information.
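The assortativity measure underlying the result above is simply the correlation of endpoint degrees across edges. A sketch on an undirected Erdős-Rényi baseline, where it should be near zero (an assortative rewiring of the kind the paper studies would push it positive; graph and parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
N, p = 500, 0.05
upper = np.triu(rng.random((N, N)) < p, k=1)
A = upper | upper.T                     # undirected Erdős-Rényi graph
deg = A.sum(axis=1)

# Degree assortativity: Pearson correlation of endpoint degrees over edges,
# counting each edge in both orientations so the measure is symmetric
src, dst = np.nonzero(np.triu(A, k=1))
xs = np.concatenate([deg[src], deg[dst]])
ys = np.concatenate([deg[dst], deg[src]])
r = np.corrcoef(xs, ys)[0, 1]
```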
Collapse
Affiliation(s)
| | | | | | - Sten Rüdiger
- Institut für Physik, Humboldt-Universität zu Berlin, Germany
| |
Collapse
|
33
|
Higgins D, Graupner M, Brunel N. Memory maintenance in synapses with calcium-based plasticity in the presence of background activity. PLoS Comput Biol 2014; 10:e1003834. [PMID: 25275319 PMCID: PMC4183374 DOI: 10.1371/journal.pcbi.1003834] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2014] [Accepted: 07/28/2014] [Indexed: 11/19/2022] Open
Abstract
Most models of learning and memory assume that memories are maintained in neuronal circuits by persistent synaptic modifications induced by specific patterns of pre- and postsynaptic activity. For this scenario to be viable, synaptic modifications must survive the ubiquitous ongoing activity present in neural circuits in vivo. In this paper, we investigate the time scales of memory maintenance in a calcium-based synaptic plasticity model that has been shown recently to be able to fit different experimental data-sets from hippocampal and neocortical preparations. We find that in the presence of background activity on the order of 1 Hz, parameters that fit pyramidal layer 5 neocortical data lead to a very fast decay of synaptic efficacy, with time scales of minutes. We then identify two ways in which this memory time scale can be extended: (i) the extracellular calcium concentration in the experiments used to fit the model is larger than estimated concentrations in vivo. Lowering extracellular calcium concentration to in vivo levels leads to an increase in memory time scales of several orders of magnitude; (ii) adding a bistability mechanism so that each synapse has two stable states at sufficiently low background activity leads to a further boost in memory time scale, since memory decay is no longer described by an exponential decay from an initial state, but by an escape from a potential well. We argue that both features are expected to be present in synapses in vivo. These results are obtained first in a single synapse connecting two independent Poisson neurons, and then in simulations of a large network of excitatory and inhibitory integrate-and-fire neurons. Our results emphasise the need for studying plasticity at physiological extracellular calcium concentration, and highlight the role of synaptic bi- or multistability in the stability of learned synaptic structures.
Synaptic plasticity is widely believed to be the main mechanism underlying learning and memory. In recent years, several mathematical plasticity rules have been shown to fit satisfactorily a wide range of experimental data in hippocampal and neocortical in vitro preparations. In particular, a model in which plasticity is driven by the postsynaptic calcium concentration was shown to reproduce successfully how synaptic changes depend on spike timing, specific spike patterns, and firing rate. The advantage of calcium-based rules is the possibility of predicting how changes in extracellular concentrations will affect plasticity. This is particularly significant in the view that in vitro studies are typically done at higher concentrations than the ones measured in vivo. Using such a rule, with parameters fitting in vitro data, we explore how long the memory of a particular synaptic change can be maintained in the presence of background neuronal activity, ubiquitously observed in cortex. We find that the memory time scales increase by several orders of magnitude when calcium concentrations are lowered from typical in vitro experiments to in vivo. Furthermore, we find that synaptic bistability further extends the memory time scale, and estimate that synaptic changes in vivo could be stable on the scale of weeks to months.
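The bistability mechanism in point (ii) can be illustrated with a toy double-well synapse: two stable states persist under the deterministic dynamics, so memory decay requires a noise-driven escape over a barrier rather than exponential relaxation. This is a schematic stand-in, not the paper's calcium-based model; the potential and parameters are assumptions.

```python
import numpy as np

# Toy bistable synaptic variable w with drift -dU/dw for U(w) = (w^2 - 1)^2 / 4:
# stable states at w = -1 ("depressed") and w = +1 ("potentiated").
def relax(w, dt=0.01, T=50.0):
    for _ in range(int(T / dt)):
        w = w + dt * (-(w ** 2 - 1) * w)   # gradient descent on the double well
    return w

w_dep = relax(-0.3)   # initial condition in the left basin
w_pot = relax(0.3)    # initial condition in the right basin
```

Both trajectories settle into distinct attractors; with background-activity noise added, the transition time between them is set by the barrier height (a Kramers escape), which is why bistability can extend memory lifetimes far beyond the bare decay time.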
Collapse
Affiliation(s)
- David Higgins
- IBENS, École Normale Supérieure, Paris, France
- Departments of Statistics and Neurobiology, University of Chicago, Chicago, Illinois, United States of America
| | - Michael Graupner
- Center for Neural Science, New York University, New York, New York, United States of America
| | - Nicolas Brunel
- Departments of Statistics and Neurobiology, University of Chicago, Chicago, Illinois, United States of America
| |
Collapse
|
34
|
Gururangan SS, Sadovsky AJ, MacLean JN. Analysis of graph invariants in functional neocortical circuitry reveals generalized features common to three areas of sensory cortex. PLoS Comput Biol 2014; 10:e1003710. [PMID: 25010654 PMCID: PMC4091703 DOI: 10.1371/journal.pcbi.1003710] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2014] [Accepted: 05/24/2014] [Indexed: 11/25/2022] Open
Abstract
Correlations in local neocortical spiking activity can provide insight into the underlying organization of cortical microcircuitry. However, identifying structure in patterned multi-neuronal spiking remains a daunting task due to the high dimensionality of the activity. Using two-photon imaging, we monitored spontaneous circuit dynamics in large, densely sampled neuronal populations within slices of mouse primary auditory, somatosensory, and visual cortex. Using the lagged correlation of spiking activity between neurons, we generated functional wiring diagrams to gain insight into the underlying neocortical circuitry. By establishing the presence of graph invariants, which are label-independent characteristics common to all circuit topologies, our study revealed organizational features that generalized across functionally distinct cortical regions. Regardless of sensory area, random and k-nearest neighbors null graphs failed to capture the structure of experimentally derived functional circuitry. These null models indicated that despite a bias in the data towards spatially proximal functional connections, functional circuit structure is best described by non-random and occasionally distal connections. Eigenvector centrality, which quantifies the importance of a neuron in the temporal flow of circuit activity, was highly related to feedforwardness in all functional circuits. The number of nodes participating in a functional circuit did not scale with the number of neurons imaged regardless of sensory area, indicating that circuit size is not tied to the sampling of neocortex. Local circuit flow comprehensively covered angular space regardless of the spatial scale that we tested, demonstrating that circuitry itself does not bias activity flow toward pia. Finally, analysis revealed that a minimal numerical sample size of neurons was necessary to capture at least 90 percent of functional circuit topology.
These data and analyses indicated that functional circuitry exhibited rules of organization which generalized across three areas of sensory neocortex. Information in the brain is represented and processed by populations of interconnected neurons. However, there is a lack of a clear understanding of the structure and organization of circuit wiring, particularly at the mesoscale which spans multiple columns and layers. In this study, we sought to evaluate whether functional circuit architecture generalizes across the neocortex, testing the existence of a functional analogue to the neocortical microcircuit hypothesis. We analyzed the correlational structure of spontaneous circuit activations in primary auditory, somatosensory, and visual neocortex to generate functional topologies. In these graphs, neurons were represented as nodes, and time-lagged firing between neurons were directed edges. Edge weights reflected how many times the lagged firing occurred and was synonymous to the strength of the functional connection between two neurons. The presence of label-independent features, identified by investigating functional circuit topologies under a graph invariant framework, suggest that functionally distinct areas of the neocortex carry features of a generalized functional cortical circuit. Furthermore, our analyses show that the simultaneous recording of large sections of cortical circuitry is necessary to recognize these features and avoid undersampling errors.
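Eigenvector centrality, used above to quantify a neuron's importance in the temporal flow of circuit activity, can be computed by power iteration. A sketch on a random weighted directed graph (the graph itself and the incoming-edge convention `W[i, j] = weight of j -> i` are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 100
W = (rng.random((N, N)) < 0.1) * rng.random((N, N))  # weighted directed graph
np.fill_diagonal(W, 0)

# Power iteration: a node is central if it receives edges from central nodes,
# i.e. x satisfies W x = lam * x for the leading eigenvalue lam (Perron root)
x = np.ones(N) / N
for _ in range(500):
    x = W @ x
    x /= np.linalg.norm(x)
```

Because the weight matrix is nonnegative, Perron-Frobenius theory guarantees a nonnegative leading eigenvector, so the iteration converges to a well-defined centrality ranking.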
Collapse
Affiliation(s)
- Suchin S. Gururangan
- Department of Neurobiology, University of Chicago, Chicago, Illinois, United States of America
| | - Alexander J. Sadovsky
- Committee on Computational Neuroscience, University of Chicago, Chicago, Illinois, United States of America
| | - Jason N. MacLean
- Department of Neurobiology, University of Chicago, Chicago, Illinois, United States of America
- Committee on Computational Neuroscience, University of Chicago, Chicago, Illinois, United States of America
| |
Collapse
|
35
|
|
36
|
Tomm C, Avermann M, Petersen C, Gerstner W, Vogels TP. Connection-type-specific biases make uniform random network models consistent with cortical recordings. J Neurophysiol 2014; 112:1801-14. [PMID: 24944218 PMCID: PMC4200009 DOI: 10.1152/jn.00629.2013] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022] Open
Abstract
Uniform random sparse network architectures are ubiquitous in computational neuroscience, but the implicit hypothesis that they are a good representation of real neuronal networks has been met with skepticism. Here we used two experimental data sets, a study of triplet connectivity statistics and a data set measuring neuronal responses to channelrhodopsin stimuli, to evaluate the fidelity of thousands of model networks. Network architectures comprised three neuron types (excitatory, fast spiking, and nonfast spiking inhibitory) and were created from a set of rules that govern the statistics of the resulting connection types. In a high-dimensional parameter scan, we varied the degree distributions (i.e., how many cells each neuron connects with) and the synaptic weight correlations of synapses from or onto the same neuron. These variations converted initially uniform random and homogeneously connected networks, in which every neuron sent and received equal numbers of synapses with equal synaptic strength distributions, to highly heterogeneous networks in which the number of synapses per neuron, as well as average synaptic strength of synapses from or to a neuron were variable. By evaluating the impact of each variable on the network structure and dynamics, and their similarity to the experimental data, we could falsify the uniform random sparse connectivity hypothesis for 7 of 36 connectivity parameters, but we also confirmed the hypothesis in 8 cases. Twenty-one parameters had no substantial impact on the results of the test protocols we used.
Collapse
Affiliation(s)
- Christian Tomm
- School of Computer and Communication Sciences and School of Life Sciences, Brain Mind Institute, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| | - Michael Avermann
- Laboratory of Sensory Processing, Brain Mind Institute, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| | - Carl Petersen
- Laboratory of Sensory Processing, Brain Mind Institute, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| | - Wulfram Gerstner
- School of Computer and Communication Sciences and School of Life Sciences, Brain Mind Institute, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| | - Tim P Vogels
- School of Computer and Communication Sciences and School of Life Sciences, Brain Mind Institute, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland; Centre for Neural Circuits and Behaviour, Department of Anatomy, Physiology and Genetics, The University of Oxford, Oxford, United Kingdom
| |
Collapse
|
37
|
Mechanisms of zero-lag synchronization in cortical motifs. PLoS Comput Biol 2014; 10:e1003548. [PMID: 24763382 PMCID: PMC3998884 DOI: 10.1371/journal.pcbi.1003548] [Citation(s) in RCA: 90] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2013] [Accepted: 02/20/2014] [Indexed: 12/04/2022] Open
Abstract
Zero-lag synchronization between distant cortical areas has been observed in a diversity of experimental data sets and between many different regions of the brain. Several computational mechanisms have been proposed to account for such isochronous synchronization in the presence of long conduction delays: Of these, the phenomenon of “dynamical relaying” – a mechanism that relies on a specific network motif – has proven to be the most robust with respect to parameter mismatch and system noise. Surprisingly, despite a contrary belief in the community, the common driving motif is an unreliable means of establishing zero-lag synchrony. Although dynamical relaying has been validated in empirical and computational studies, the deeper dynamical mechanisms and comparison to dynamics on other motifs is lacking. By systematically comparing synchronization on a variety of small motifs, we establish that the presence of a single reciprocally connected pair – a “resonance pair” – plays a crucial role in disambiguating those motifs that foster zero-lag synchrony in the presence of conduction delays (such as dynamical relaying) from those that do not (such as the common driving triad). Remarkably, minor structural changes to the common driving motif that incorporate a reciprocal pair recover robust zero-lag synchrony. The findings are observed in computational models of spiking neurons, populations of spiking neurons and neural mass models, and arise whether the oscillatory systems are periodic, chaotic, noise-free or driven by stochastic inputs. The influence of the resonance pair is also robust to parameter mismatch and asymmetrical time delays amongst the elements of the motif. We call this manner of facilitating zero-lag synchrony resonance-induced synchronization, outline the conditions for its occurrence, and propose that it may be a general mechanism to promote zero-lag synchrony in the brain. 
Understanding large-scale neuronal dynamics – and how they relate to the cortical anatomy – is one of the key areas of neuroscience research. Despite a wealth of recent research, the key principles of this relationship have yet to be established. Here we employ computational modeling to study neuronal dynamics on small subgraphs – or motifs – across a hierarchy of spatial scales. We establish a novel organizing principle that we term a “resonance pair” (two mutually coupled nodes), which promotes stable, zero-lag synchrony amongst motif nodes. The bidirectional coupling between a resonance pair acts to mutually adjust their dynamics onto a common and relatively stable synchronized regime, which then propagates and stabilizes the synchronization of other nodes within the motif. Remarkably, we find that this effect can propagate along chains of coupled nodes and hence holds the potential to promote stable zero-lag synchrony in larger sub-networks of cortical systems. Our findings hence suggest a potential unifying account of the existence of zero-lag synchrony, an important phenomenon that may underlie crucial cognitive processes in the brain. Moreover, such pairs of mutually coupled oscillators are found in a wide variety of physical and biological systems suggesting a new, broadly relevant and unifying principle.
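The dynamical-relaying effect described above can be reproduced in a minimal model: three delay-coupled Kuramoto phase oscillators in a chain, where the outer pair, which share no direct connection, settle into zero-lag synchrony via the relay node. This toy model is an illustration of the mechanism, not the paper's spiking or neural-mass implementations; all parameters are assumptions.

```python
import numpy as np

# Relay (chain) motif 1 - 2 - 3: node 2 relays between the outer pair.
rng = np.random.default_rng(6)
omega, K, tau = 1.0, 1.0, 0.5      # natural frequency, coupling, conduction delay
dt, T = 0.01, 300.0
n_delay = int(tau / dt)
steps = int(T / dt)
theta = np.zeros((steps + n_delay, 3))
theta[: n_delay + 1] = rng.uniform(0, 0.5, 3)   # constant-phase history

for t in range(n_delay, steps + n_delay - 1):
    d = theta[t - n_delay]                       # phases one delay in the past
    drive = np.array([
        K * np.sin(d[1] - theta[t, 0]),                                     # 1 <- 2
        0.5 * K * (np.sin(d[0] - theta[t, 1]) + np.sin(d[2] - theta[t, 1])),  # 2 <- 1, 3
        K * np.sin(d[1] - theta[t, 2]),                                     # 3 <- 2
    ])
    theta[t + 1] = theta[t] + dt * (omega + drive)

# Zero-lag phase difference between the outer (not directly connected) nodes:
lag = np.angle(np.exp(1j * (theta[-1, 0] - theta[-1, 2])))
```

Because nodes 1 and 3 receive identical delayed input from the relay, their phase difference obeys a contracting equation whenever the locked phase offset stays within a quarter cycle, so the outer pair synchronizes at zero lag despite the conduction delay.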
Collapse
|
38
|
McDonnell MD, Ward LM. Small modifications to network topology can induce stochastic bistable spiking dynamics in a balanced cortical model. PLoS One 2014; 9:e88254. [PMID: 24743633 PMCID: PMC3990528 DOI: 10.1371/journal.pone.0088254] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2013] [Accepted: 01/06/2014] [Indexed: 12/27/2022] Open
Abstract
Directed random graph models frequently are used successfully in modeling the population dynamics of networks of cortical neurons connected by chemical synapses. Experimental results consistently reveal that neuronal network topology is complex, however, in the sense that it differs statistically from a random network, and differs for classes of neurons that are physiologically different. This suggests that complex network models whose subnetworks have distinct topological structure may be a useful, and more biologically realistic, alternative to random networks. Here we demonstrate that the balanced excitation and inhibition frequently observed in small cortical regions can transiently disappear in otherwise standard neuronal-scale models of fluctuation-driven dynamics, solely because the random network topology was replaced by a complex clustered one, whilst not changing the in-degree of any neurons. In this network, a small subset of cells whose inhibition comes only from outside their local cluster are the cause of bistable population dynamics, where different clusters of these cells irregularly switch back and forth from a sparsely firing state to a highly active state. Transitions to the highly active state occur when a cluster of these cells spikes sufficiently often to cause strong unbalanced positive feedback to each other. Transitions back to the sparsely firing state rely on occasional large fluctuations in the amount of non-local inhibition received. Neurons in the model are homogeneous in their intrinsic dynamics and in-degrees, but differ in the abundance of various directed feedback motifs in which they participate. 
Our findings suggest that (i) models and simulations should take into account complex structure that varies for neuron and synapse classes; (ii) differences in the dynamics of neurons with similar intrinsic properties may be caused by their membership in distinctive local networks; (iii) it is important to identify neurons that share physiological properties and location, but differ in their connectivity.
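The clustered-topology manipulation described above can be caricatured in a few lines. The sketch below (not the authors' code; all parameter values are invented) builds a directed network in which every neuron keeps exactly the same in-degree while drawing most of its inputs from its own cluster:

```python
import numpy as np

rng = np.random.default_rng(0)

def clustered_directed(n=120, n_clusters=4, k_in=12, p_local=0.9):
    """A[i, j] = 1 means a directed connection j -> i.  Every neuron
    receives exactly k_in inputs; most are drawn from its own cluster,
    the rest from the remainder of the network."""
    labels = np.repeat(np.arange(n_clusters), n // n_clusters)
    A = np.zeros((n, n), dtype=int)
    n_local = int(round(p_local * k_in))
    for i in range(n):
        same = np.flatnonzero((labels == labels[i]) & (np.arange(n) != i))
        other = np.flatnonzero(labels != labels[i])
        sources = np.concatenate([
            rng.choice(same, size=n_local, replace=False),
            rng.choice(other, size=k_in - n_local, replace=False),
        ])
        A[i, sources] = 1
    return A, labels

A, labels = clustered_directed()
in_degree = A.sum(axis=1)   # identical for every neuron, as in the model
```

Varying `p_local` moves the network between random and clustered wiring without touching any in-degree, which is the comparison the paper exploits.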
Collapse
Affiliation(s)
- Mark D. McDonnell
- Computational and Theoretical Neuroscience Laboratory, Institute for Telecommunications Research, University of South Australia, Mawson Lakes, South Australia, Australia
| | - Lawrence M. Ward
- Department of Psychology and Brain Research Centre, University of British Columbia, Vancouver, British Columbia, Canada
| |
Collapse
|
39
|
Lallouette J, De Pittà M, Ben-Jacob E, Berry H. Sparse short-distance connections enhance calcium wave propagation in a 3D model of astrocyte networks. Front Comput Neurosci 2014; 8:45. [PMID: 24795613 PMCID: PMC3997029 DOI: 10.3389/fncom.2014.00045] [Citation(s) in RCA: 33] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2013] [Accepted: 03/27/2013] [Indexed: 11/13/2022] Open
Abstract
Traditionally, astrocytes have been considered to couple via gap-junctions into a syncytium with only rudimentary spatial organization. However, this view is challenged by growing experimental evidence that astrocytes organize as a proper gap-junction mediated network with more complex region-dependent properties. On the other hand, the propagation range of intercellular calcium waves (ICW) within astrocyte populations is likewise highly variable, depending on the brain region considered. This suggests that the variability of the topology of gap-junction couplings could play a role in the variability of the ICW propagation range. Since this hypothesis is very difficult to investigate with current experimental approaches, we explore it here using a biophysically realistic model of three-dimensional astrocyte networks in which we varied the topology of the astrocyte network, while keeping intracellular properties and spatial cell distribution and density constant. Computer simulations of the model suggest that changing the topology of the network is indeed sufficient to reproduce the distinct ranges of ICW propagation reported experimentally. Unexpectedly, our simulations also predict that sparse connectivity and restriction of gap-junction couplings to short distances should favor propagation while long-distance or dense connectivity should impair it. Altogether, our results support recent experimental findings that point toward a significant functional role of the organization of gap-junction couplings into proper astroglial networks. Dynamic control of this topology by neurons and signaling molecules could thus constitute a new type of regulation of neuron-glia and glia-glia interactions.
Collapse
Affiliation(s)
- Jules Lallouette
- EPI Beagle, INRIA Rhône-Alpes Villeurbanne, France ; LIRIS, UMR 5205 CNRS-INSA, Université de Lyon Villeurbanne, France
| | - Maurizio De Pittà
- EPI Beagle, INRIA Rhône-Alpes Villeurbanne, France ; LIRIS, UMR 5205 CNRS-INSA, Université de Lyon Villeurbanne, France ; School of Physics and Astronomy, Tel Aviv University Ramat Aviv, Israel
| | - Eshel Ben-Jacob
- School of Physics and Astronomy, Tel Aviv University Ramat Aviv, Israel ; Center for Theoretical Biological Physics, Rice University Houston, TX, USA
| | - Hugues Berry
- EPI Beagle, INRIA Rhône-Alpes Villeurbanne, France ; LIRIS, UMR 5205 CNRS-INSA, Université de Lyon Villeurbanne, France
| |
Collapse
|
40
|
van Ooyen A, Carnell A, de Ridder S, Tarigan B, Mansvelder HD, Bijma F, de Gunst M, van Pelt J. Independently outgrowing neurons and geometry-based synapse formation produce networks with realistic synaptic connectivity. PLoS One 2014; 9:e85858. [PMID: 24454938 PMCID: PMC3894200 DOI: 10.1371/journal.pone.0085858] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2013] [Accepted: 12/03/2013] [Indexed: 11/18/2022] Open
Abstract
Neuronal signal integration and information processing in cortical networks critically depend on the organization of synaptic connectivity. During development, neurons can form synaptic connections when their axonal and dendritic arborizations come within close proximity of each other. Although many signaling cues are thought to be involved in guiding neuronal extensions, the extent to which accidental appositions between axons and dendrites can already account for synaptic connectivity remains unclear. To investigate this, we generated a local network of cortical L2/3 neurons that grew out independently of each other and that were not guided by any extracellular cues. Synapses were formed when axonal and dendritic branches came by chance within a threshold distance of each other. Despite the absence of guidance cues, we found that the emerging synaptic connectivity showed a good agreement with available experimental data on spatial locations of synapses on dendrites and axons, number of synapses by which neurons are connected, connection probability between neurons, distance between connected neurons, and pattern of synaptic connectivity. The connectivity pattern had a small-world topology but was not scale free. Together, our results suggest that baseline synaptic connectivity in local cortical circuits may largely result from accidentally overlapping axonal and dendritic branches of independently outgrowing neurons.
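A toy version of the proximity rule in this abstract, with random point clouds standing in for independently grown arborisations (a sketch under invented parameters, not the authors' model):

```python
import numpy as np

rng = np.random.default_rng(1)

n, pts, threshold = 30, 40, 8.0
somata = rng.uniform(0, 100, size=(n, 2))

# crude stand-in for independently grown arborisations: scatter axonal
# and dendritic sample points around each soma, with no guidance cues
axon_pts = somata[:, None, :] + rng.normal(0, 15, (n, pts, 2))
dend_pts = somata[:, None, :] + rng.normal(0, 10, (n, pts, 2))

A = np.zeros((n, n), dtype=bool)
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        # synapse i -> j whenever an axonal point of i falls within
        # `threshold` of a dendritic point of j
        d = np.linalg.norm(axon_pts[i][:, None] - dend_pts[j][None], axis=-1)
        A[i, j] = bool((d < threshold).any())

connection_prob = A.mean()
```

Even this caricature produces distance-dependent connection probability purely from accidental appositions, which is the paper's central point.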
Collapse
Affiliation(s)
- Arjen van Ooyen
- Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research, VU University Amsterdam, Amsterdam, The Netherlands
| | - Andrew Carnell
- Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research, VU University Amsterdam, Amsterdam, The Netherlands
| | - Sander de Ridder
- Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research, VU University Amsterdam, Amsterdam, The Netherlands
| | - Bernadetta Tarigan
- Department of Mathematics, VU University Amsterdam, Amsterdam, The Netherlands
| | - Huibert D. Mansvelder
- Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research, VU University Amsterdam, Amsterdam, The Netherlands
| | - Fetsje Bijma
- Department of Mathematics, VU University Amsterdam, Amsterdam, The Netherlands
| | - Mathisca de Gunst
- Department of Mathematics, VU University Amsterdam, Amsterdam, The Netherlands
| | - Jaap van Pelt
- Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research, VU University Amsterdam, Amsterdam, The Netherlands
| |
Collapse
|
41
|
Vasquez JC, Houweling AR, Tiesinga P. Simultaneous stability and sensitivity in model cortical networks is achieved through anti-correlations between the in- and out-degree of connectivity. Front Comput Neurosci 2013; 7:156. [PMID: 24223550 PMCID: PMC3819735 DOI: 10.3389/fncom.2013.00156] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2013] [Accepted: 10/17/2013] [Indexed: 12/03/2022] Open
Abstract
Neuronal networks in rodent barrel cortex are characterized by stable low baseline firing rates. However, they are sensitive to the action potentials of single neurons as suggested by recent single-cell stimulation experiments that reported quantifiable behavioral responses in response to short spike trains elicited in single neurons. Hence, these networks are stable against internally generated fluctuations in firing rate but at the same time remain sensitive to similarly-sized externally induced perturbations. We investigated stability and sensitivity in a simple recurrent network of stochastic binary neurons and determined numerically the effects of correlation between the number of afferent (“in-degree”) and efferent (“out-degree”) connections in neurons. The key advance reported in this work is that anti-correlation between in-/out-degree distributions increased the stability of the network in comparison to networks with no correlation or positive correlations, while being able to achieve the same level of sensitivity. The experimental characterization of degree distributions is difficult because all pre-synaptic and post-synaptic neurons have to be identified and counted. We explored whether the statistics of network motifs, which requires the characterization of connections between small subsets of neurons, could be used to detect evidence for degree anti-correlations. We find that the sample frequency of the 3-neuron “ring” motif (1→2→3→1), can be used to detect degree anti-correlation for sub-networks of size 30 using about 50 samples, which is of significance because the necessary measurements are achievable experimentally in the near future. Taken together, we hypothesize that barrel cortex networks exhibit degree anti-correlations and specific network motif statistics.
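The 3-neuron ring motif count and the in-/out-degree correlation discussed above are both cheap to compute from an adjacency matrix. A sketch on an arbitrary random network (not the authors' data): every directed 3-cycle is counted once from each of its three nodes by the trace of A cubed, so dividing by 3 gives the motif count.

```python
import numpy as np

rng = np.random.default_rng(2)

n, p = 30, 0.2
A = (rng.random((n, n)) < p).astype(int)   # A[i, j] = 1 means i -> j
np.fill_diagonal(A, 0)                     # no self-connections

# sample count of the 3-neuron ring motif (1 -> 2 -> 3 -> 1):
# trace(A^3) counts each directed 3-cycle three times
ring_count = np.trace(np.linalg.matrix_power(A, 3)) // 3

# in-/out-degree correlation across neurons
in_deg, out_deg = A.sum(axis=0), A.sum(axis=1)
degree_corr = np.corrcoef(in_deg, out_deg)[0, 1]
```

In an Erdős-Rényi graph `degree_corr` fluctuates around zero; the paper's hypothesis is that barrel cortex networks would show it negative, with a correspondingly shifted ring-motif frequency.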
Collapse
Affiliation(s)
- Juan C Vasquez
- Department of Neuroinformatics, Donders Institute for Brain, Cognition and Behavior, Radboud University Nijmegen Nijmegen, Netherlands
| | | | | |
Collapse
|
42
|
Mäki-Marttunen T, Aćimović J, Ruohonen K, Linne ML. Structure-dynamics relationships in bursting neuronal networks revealed using a prediction framework. PLoS One 2013; 8:e69373. [PMID: 23935998 PMCID: PMC3723901 DOI: 10.1371/journal.pone.0069373] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2013] [Accepted: 06/07/2013] [Indexed: 11/25/2022] Open
Abstract
The question of how the structure of a neuronal network affects its functionality has gained a lot of attention in neuroscience. However, the vast majority of the studies on structure-dynamics relationships consider few types of network structures and assess limited numbers of structural measures. In this in silico study, we employ a wide diversity of network topologies and search among many possibilities for the aspects of structure that have the greatest effect on the network excitability. The network activity is simulated using two point-neuron models, where the neurons are activated by noisy fluctuation of the membrane potential and their connections are described by chemical synapse models, and statistics on the number and quality of the emergent network bursts are collected for each network type. We apply a prediction framework to the obtained data in order to find out the most relevant aspects of network structure. In this framework, predictors that use different sets of graph-theoretic measures are trained to estimate the activity properties, such as burst count or burst length, of the networks. The performances of these predictors are compared with each other. We show that the best performance in prediction of activity properties for networks with sharp in-degree distribution is obtained when the prediction is based on clustering coefficient. By contrast, for networks with broad in-degree distribution, the maximum eigenvalue of the connectivity graph gives the most accurate prediction. The results shown for small networks hold with few exceptions when different neuron models, different choices of neuron population and different average degrees are applied. We confirm our conclusions using larger networks as well. Our findings reveal the relevance of different aspects of network structure from the viewpoint of network excitability, and our integrative method could serve as a general framework for structure-dynamics studies in biosciences.
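The two winning predictors named in the abstract, clustering coefficient and maximum eigenvalue, can both be read off the connectivity matrix. A minimal sketch on an undirected random graph (parameters arbitrary): trace(A³) counts each triangle six times, while the sum of k(k−1) counts the ordered connected triples, giving the global clustering coefficient (transitivity).

```python
import numpy as np

rng = np.random.default_rng(3)

n, p = 60, 0.15
A = (rng.random((n, n)) < p).astype(int)
A = np.triu(A, 1)
A = A + A.T                                  # undirected, no self-loops

# global clustering coefficient (transitivity)
k = A.sum(axis=1)
transitivity = np.trace(np.linalg.matrix_power(A, 3)) / (k * (k - 1)).sum()

# maximum eigenvalue of the connectivity graph
spectral_radius = np.abs(np.linalg.eigvalsh(A)).max()
```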
Collapse
Affiliation(s)
- Tuomo Mäki-Marttunen
- Department of Signal Processing, Tampere University of Technology, Tampere, Finland.
| | | | | | | |
Collapse
|
43
|
Voges N, Perrinet L. Complex dynamics in recurrent cortical networks based on spatially realistic connectivities. Front Comput Neurosci 2012; 6:41. [PMID: 22787446 PMCID: PMC3392693 DOI: 10.3389/fncom.2012.00041] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2012] [Accepted: 06/09/2012] [Indexed: 11/13/2022] Open
Abstract
Most studies on the dynamics of recurrent cortical networks are either based on purely random wiring or neighborhood couplings. Neuronal cortical connectivity, however, shows a complex spatial pattern composed of local and remote patchy connections. We ask to what extent such geometric traits influence the “idle” dynamics of two-dimensional (2d) cortical network models composed of conductance-based integrate-and-fire (iaf) neurons. In contrast to the typical 1 mm² used in most studies, we employ an enlarged spatial set-up of 25 mm² to provide for long-range connections. Our models range from purely random to distance-dependent connectivities including patchy projections, i.e., spatially clustered synapses. Analyzing the characteristic measures for synchronicity and regularity in neuronal spiking, we explore and compare the phase spaces and activity patterns of our simulation results. Depending on the input parameters, different dynamical states appear, similar to the known synchronous regular “SR” or asynchronous irregular “AI” firing in random networks. Our structured networks, however, exhibit shifted and sharper transitions, as well as more complex activity patterns. Distance-dependent connectivity structures induce a spatio-temporal spread of activity, e.g., propagating waves, that random networks cannot account for. Spatially and temporally restricted activity injections reveal that a high amount of local coupling induces rather unstable AI dynamics. We find that the amount of local versus long-range connections is an important parameter, whereas the structurally advantageous wiring cost optimization of patchy networks has little bearing on the phase space.
Collapse
Affiliation(s)
- N Voges
- Institut des Neurosciences de la Timone (INT), Aix-Marseille Université, CNRS (UMR 7289) Marseille, France
| | | |
Collapse
|
44
|
Hennequin G, Vogels TP, Gerstner W. Non-normal amplification in random balanced neuronal networks. PHYSICAL REVIEW. E, STATISTICAL, NONLINEAR, AND SOFT MATTER PHYSICS 2012; 86:011909. [PMID: 23005454 DOI: 10.1103/physreve.86.011909] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/13/2012] [Indexed: 06/01/2023]
Abstract
In dynamical models of cortical networks, the recurrent connectivity can amplify the input given to the network in two distinct ways. One is induced by the presence of near-critical eigenvalues in the connectivity matrix W, producing large but slow activity fluctuations along the corresponding eigenvectors (dynamical slowing). The other relies on W not being normal, which allows the network activity to make large but fast excursions along specific directions. Here we investigate the trade-off between non-normal amplification and dynamical slowing in the spontaneous activity of large random neuronal networks composed of excitatory and inhibitory neurons. We use a Schur decomposition of W to separate the two amplification mechanisms. Assuming linear stochastic dynamics, we derive an exact expression for the expected amount of purely non-normal amplification. We find that amplification is very limited if dynamical slowing must be kept weak. We conclude that, to achieve strong transient amplification with little slowing, the connectivity must be structured. We show that unidirectional connections between neurons of the same type together with reciprocal connections between neurons of different types, allow for amplification already in the fast dynamical regime. Finally, our results also shed light on the differences between balanced networks in which inhibition exactly cancels excitation and those where inhibition dominates.
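The Schur-based separation of dynamical slowing (eigenvalues) from non-normal feedforward amplification can be illustrated on a 2x2 toy matrix (invented numbers, not taken from the paper). The diagonal of the Schur form carries the eigenvalues; the strictly upper-triangular part is the purely feedforward, non-normal component.

```python
import numpy as np
from scipy.linalg import schur

# a 2x2 caricature of an E-I pair: both eigenvalues are stable
# (-1 and -3), yet a strong feedforward Schur component remains
W = np.array([[1.0, -4.0],
              [2.0, -5.0]])

T, U = schur(W)            # W = U @ T @ U.T, T upper triangular

eigvals = np.diag(T)                       # govern dynamical slowing
ff_strength = np.abs(np.triu(T, 1)).sum()  # purely non-normal part

# non-normality check: W does not commute with its transpose
non_normality = np.linalg.norm(W @ W.T - W.T @ W)
```

Here transient amplification is possible (large `ff_strength`) even though both eigenvalues sit well inside the stable region, which is the trade-off the paper analyses.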
Collapse
Affiliation(s)
- Guillaume Hennequin
- School of Computer and Communication Sciences and Brain-Mind Institute, Ecole Polytechnique Fédérale de Lausanne, 1015 EPFL, Switzerland.
| | | | | |
Collapse
|
45
|
Luczak A, Maclean JN. Default activity patterns at the neocortical microcircuit level. Front Integr Neurosci 2012; 6:30. [PMID: 22701405 PMCID: PMC3373160 DOI: 10.3389/fnint.2012.00030] [Citation(s) in RCA: 52] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2011] [Accepted: 05/24/2012] [Indexed: 11/17/2022] Open
Abstract
Even in the absence of sensory stimuli, cortical networks exhibit complex, self-organized activity patterns. While the function of those spontaneous patterns of activation remains poorly understood, recent studies both in vivo and in vitro have demonstrated that neocortical neurons activate in a surprisingly similar sequential order both spontaneously and following input into cortex. For example, neurons that tend to fire earlier within spontaneous bursts of activity also fire earlier than other neurons in response to sensory stimuli. These “default patterns” can last hundreds of milliseconds and are strongly conserved under a variety of conditions. In this paper, we will review recent evidence for these default patterns at the local cortical level. We speculate that cortical architecture imposes common constraints on spontaneous and evoked activity flow, which result in the similarity of the patterns.
Collapse
Affiliation(s)
- Artur Luczak
- Department of Neuroscience, Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, AB, Canada
| | | |
Collapse
|
46
|
Cardanobile S, Pernice V, Deger M, Rotter S. Inferring general relations between network characteristics from specific network ensembles. PLoS One 2012; 7:e37911. [PMID: 22701586 PMCID: PMC3368903 DOI: 10.1371/journal.pone.0037911] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2012] [Accepted: 04/30/2012] [Indexed: 12/01/2022] Open
Abstract
Different network models have been suggested for the topology underlying complex interactions in natural systems. These models are aimed at replicating specific statistical features encountered in real-world networks. However, it is rarely considered to which degree the results obtained for one particular network class can be extrapolated to real-world networks. We address this issue by comparing different classical and more recently developed network models with respect to their ability to generate networks with large structural variability. In particular, we consider the statistical constraints which the respective construction scheme imposes on the generated networks. After having identified the most variable networks, we address the issue of which constraints are common to all network classes and are thus suitable candidates for being generic statistical laws of complex networks. In fact, we find that generic, not model-related dependencies between different network characteristics do exist. This makes it possible to infer global features from local ones using regression models trained on networks with high generalization power. Our results confirm and extend previous findings regarding the synchronization properties of neural networks. Our method seems especially relevant for large networks, which are difficult to map completely, like the neural networks in the brain. The structure of such large networks cannot be fully sampled with the present technology. Our approach provides a method to estimate global properties of under-sampled networks in good approximation. Finally, we demonstrate on three different data sets (C. elegans neuronal network, R. prowazekii metabolic network, and a network of synonyms extracted from Roget’s Thesaurus) that real-world networks have statistical relations compatible with those obtained using regression models.
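The regression idea, estimating a global property from local measures using models trained on a network ensemble, can be caricatured as follows (undirected random graphs and arbitrary features standing in for the paper's ensembles; none of this is the authors' code):

```python
import numpy as np

rng = np.random.default_rng(6)

def local_features_and_global_target(n=50, p=0.1):
    """Cheap local measures (mean degree, transitivity) paired with a
    global property (spectral radius) for one undirected random graph."""
    A = (rng.random((n, n)) < p).astype(int)
    A = np.triu(A, 1)
    A = A + A.T
    k = A.sum(axis=1)
    triples = max((k * (k - 1)).sum(), 1)
    transitivity = np.trace(np.linalg.matrix_power(A, 3)) / triples
    return (k.mean(), transitivity), np.abs(np.linalg.eigvalsh(A)).max()

samples = [local_features_and_global_target(p=rng.uniform(0.05, 0.3))
           for _ in range(80)]
X = np.array([[1.0, *feats] for feats, _ in samples])  # with intercept
y = np.array([target for _, target in samples])

# linear regression from local features to the global property
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
r2 = 1 - ((y - X @ coef) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

The point mirrors the abstract: if relations between characteristics generalise across the ensemble, a regression fitted on fully observed networks can estimate global properties of under-sampled ones.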
Collapse
Affiliation(s)
- Stefano Cardanobile
- Bernstein Center Freiburg, University of Freiburg, Freiburg im Breisgau, Germany
- Faculty of Biology, University of Freiburg, Freiburg im Breisgau, Germany
| | - Volker Pernice
- Bernstein Center Freiburg, University of Freiburg, Freiburg im Breisgau, Germany
- Faculty of Biology, University of Freiburg, Freiburg im Breisgau, Germany
| | - Moritz Deger
- Bernstein Center Freiburg, University of Freiburg, Freiburg im Breisgau, Germany
- Faculty of Biology, University of Freiburg, Freiburg im Breisgau, Germany
| | - Stefan Rotter
- Bernstein Center Freiburg, University of Freiburg, Freiburg im Breisgau, Germany
- Faculty of Biology, University of Freiburg, Freiburg im Breisgau, Germany
| |
Collapse
|
47
|
Tattini L, Olmi S, Torcini A. Coherent periodic activity in excitatory Erdös-Renyi neural networks: the role of network connectivity. CHAOS (WOODBURY, N.Y.) 2012; 22:023133. [PMID: 22757540 DOI: 10.1063/1.4723839] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
In this article, we investigate the role of connectivity in promoting coherent activity in excitatory neural networks. In particular, we would like to understand if the onset of collective oscillations can be related to a minimal average connectivity and how this critical connectivity depends on the number of neurons in the networks. For these purposes, we consider an excitatory random network of leaky integrate-and-fire pulse coupled neurons. The neurons are connected as in a directed Erdös-Renyi graph with average connectivity <k> scaling as a power law with the number of neurons in the network. The scaling is controlled by a parameter γ, which allows one to pass from massively connected to sparse networks and therefore to modify the topology of the system. At a macroscopic level, we observe two distinct dynamical phases: an asynchronous state corresponding to a desynchronized dynamics of the neurons and a regime of partial synchronization (PS) associated with a coherent periodic activity of the network. At low connectivity, the system is in an asynchronous state, while PS emerges above a certain critical average connectivity <k>_c. For sufficiently large networks, <k>_c saturates to a constant value, suggesting that a minimal average connectivity is sufficient to observe coherent activity in systems of any size, irrespective of the kind of network considered: sparse or massively connected. However, this value depends on the nature of the synapses: reliable or unreliable. For unreliable synapses, the critical value required to observe the onset of macroscopic behaviors is noticeably smaller than for reliable synaptic transmission. Due to the disorder present in the system, for a finite number of neurons we have inhomogeneities in the neuronal behaviors, inducing a weak form of chaos, which vanishes in the thermodynamic limit. In this limit, the disordered systems exhibit regular (non-chaotic) dynamics and their properties correspond to those of a homogeneous fully connected network for any γ-value, apart from the peculiar exception of sparse networks, which remain intrinsically inhomogeneous at any system size.
Collapse
Affiliation(s)
- Lorenzo Tattini
- CNR-Consiglio Nazionale delle Ricerche-Istituto dei Sistemi Complessi, via Madonna del Piano 10, I-50019 Sesto Fiorentino, Italy.
| | | | | |
Collapse
|
48
|
Trousdale J, Hu Y, Shea-Brown E, Josić K. Impact of network structure and cellular response on spike time correlations. PLoS Comput Biol 2012; 8:e1002408. [PMID: 22457608 PMCID: PMC3310711 DOI: 10.1371/journal.pcbi.1002408] [Citation(s) in RCA: 97] [Impact Index Per Article: 8.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2011] [Accepted: 01/11/2012] [Indexed: 11/18/2022] Open
Abstract
Novel experimental techniques reveal the simultaneous activity of larger and larger numbers of neurons. As a result there is increasing interest in the structure of cooperative – or correlated – activity in neural populations, and in the possible impact of such correlations on the neural code. A fundamental theoretical challenge is to understand how the architecture of network connectivity along with the dynamical properties of single cells shape the magnitude and timescale of correlations. We provide a general approach to this problem by extending prior techniques based on linear response theory. We consider networks of general integrate-and-fire cells with arbitrary architecture, and provide explicit expressions for the approximate cross-correlation between constituent cells. These correlations depend strongly on the operating point (input mean and variance) of the neurons, even when connectivity is fixed. Moreover, the approximations admit an expansion in powers of the matrices that describe the network architecture. This expansion can be readily interpreted in terms of paths between different cells. We apply our results to large excitatory-inhibitory networks, and demonstrate first how precise balance – or lack thereof – between the strengths and timescales of excitatory and inhibitory synapses is reflected in the overall correlation structure of the network. We then derive explicit expressions for the average correlation structure in randomly connected networks. These expressions help to identify the important factors that shape coordinated neural activity in such networks. Is neural activity more than the sum of its individual parts? What is the impact of cooperative, or correlated, spiking among multiple cells? We can start addressing these questions, as rapid advances in experimental techniques allow simultaneous recordings from ever-increasing populations. 
However, we still lack a general understanding of the origin and consequences of the joint activity that is revealed. The challenge is compounded by the fact that both the intrinsic dynamics of single cells and the correlations among them vary depending on the overall state of the network. Here, we develop a toolbox that addresses this issue. Specifically, we show how linear response theory allows for the expression of correlations explicitly in terms of the underlying network connectivity and known single-cell properties – and that the predictions of this theory accurately match simulations of a touchstone, nonlinear model in computational neuroscience, the general integrate-and-fire cell. Thus, our theory should help unlock the relationship between network architecture, single-cell dynamics, and correlated activity in diverse neural circuits.
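The path expansion described above, correlations expressed in powers of the connectivity matrix, can be sketched for a stable linear system (a caricature of the linear-response formula with invented parameters, not the paper's derivation):

```python
import numpy as np

rng = np.random.default_rng(4)

n = 40
K = rng.normal(0.0, 0.05, (n, n))           # linearised interaction matrix
rho = np.abs(np.linalg.eigvals(K)).max()    # spectral radius < 1: stable

# full linear-response propagator: contributions from paths of all lengths
P = np.linalg.inv(np.eye(n) - K)

# truncated expansion in powers of K: the m-th term collects the
# contribution of all paths of length m between cells
P_paths = sum(np.linalg.matrix_power(K, m) for m in range(6))
rel_err = np.linalg.norm(P - P_paths) / np.linalg.norm(P)

# pairwise correlations for unit white-noise input (schematic only)
C = P @ P.T
```

Because the spectral radius is small, direct and short indirect paths already dominate, so the truncated series is close to the exact propagator.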
Collapse
Affiliation(s)
- James Trousdale
- Department of Mathematics, University of Houston, Houston, Texas, USA.
| | | | | | | |
Collapse
|
49
|
Pernice V, Staude B, Cardanobile S, Rotter S. Recurrent interactions in spiking networks with arbitrary topology. PHYSICAL REVIEW. E, STATISTICAL, NONLINEAR, AND SOFT MATTER PHYSICS 2012; 85:031916. [PMID: 22587132 DOI: 10.1103/physreve.85.031916] [Citation(s) in RCA: 34] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/10/2011] [Revised: 01/04/2012] [Indexed: 05/31/2023]
Abstract
The population activity of random networks of excitatory and inhibitory leaky integrate-and-fire neurons has been studied extensively. In particular, a state of asynchronous activity with low firing rates and low pairwise correlations emerges in sparsely connected networks. We apply linear response theory to evaluate the influence of detailed network structure on neuron dynamics. It turns out that pairwise correlations induced by direct and indirect network connections can be related to the matrix of direct linear interactions. Furthermore, we study the influence of the characteristics of the neuron model. Interpreting the reset as self-inhibition, we examine its influence, via the spectrum of single-neuron activity, on network autocorrelation functions and the overall correlation level. The neuron model also affects the form of interaction kernels and consequently the time-dependent correlation functions. We find that a linear instability of networks with Erdös-Rényi topology coincides with a global transition to a highly correlated network state. Our work shows that recurrent interactions have a profound impact on spike train statistics and provides tools to study the effects of specific network topologies.
Collapse
Affiliation(s)
- Volker Pernice
- Bernstein Center Freiburg and Faculty of Biology, University of Freiburg, Freiburg, Germany.
| | | | | | | |
Collapse
|
50
|
McDonnell MD, Mohan A, Stricker C, Ward LM. Input-rate modulation of γ oscillations is sensitive to network topology, delays and short-term plasticity. Brain Res 2011; 1434:162-77. [PMID: 22000590 DOI: 10.1016/j.brainres.2011.08.070] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2011] [Revised: 08/29/2011] [Accepted: 08/30/2011] [Indexed: 11/24/2022]
Abstract
Simulated networks of excitatory and inhibitory neurons have previously been shown to reproduce critical features of experimental data regarding neural coding in V1, such as a positive relationship between thalamic input spike rate and the power of gamma frequency oscillations. This effect, referred to as modulated gamma power, could represent a neural code in V1 for stimulus characteristics that affect thalamic spike rate such as contrast or intensity. The simulated network's assumptions included homogeneous random connectivity, equal synaptic delays after spike arrival, and constant synaptic efficacies. Plausible alternative assumptions include small world connectivity, a wide distribution of axonal propagation delays, and short term synaptic plasticity, and here we assess the individual impact of each of these on the model's success in reproducing modulated gamma power. First, we developed several alternative algorithms for simulating directed networks with clustered connectivity and balanced excitation and inhibition. We found that modulated gamma power was absent in all small-world networks that had a relatively low abundance of reciprocal connectivity, which suggests that such motifs are present in V1 cortical networks at levels at least equal to those found in random networks. We also found in a different network type that the balance of excitation and inhibition could be destroyed when the network was in the small-world regime. Given all neurons had identical in-degrees, this result suggests that balance relies on motif distributions as well as mean connectivity. Second, altering the distribution of axonal delays had little effect, but increasing the mean delay led to a secondary gamma modulation at harmonics of the main peak, and since this is not observed experimentally, it suggests a mean delay in V1 networks less than 2 ms. 
Finally, we compared two types of excitatory synaptic plasticity and found that, for one type, modulated beta power emerged in addition to gamma power in the presence of short-term depression in interneurons. This article is part of a Special Issue entitled "Neural Coding".
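The abundance of reciprocal connectivity that the abstract singles out is easy to quantify. A sketch for a directed random graph (arbitrary parameters): in an Erdős-Rényi digraph the fraction of connections whose reverse also exists converges to the connection probability p, so an excess above p would signal the over-representation of reciprocal motifs the study infers for V1.

```python
import numpy as np

rng = np.random.default_rng(5)

n, p = 200, 0.1
A = (rng.random((n, n)) < p).astype(int)   # A[i, j] = 1 means i -> j
np.fill_diagonal(A, 0)

# fraction of existing connections whose reverse also exists;
# ~p for a directed Erdos-Renyi graph
reciprocity = (A * A.T).sum() / A.sum()
```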
Collapse
Affiliation(s)
- Mark D McDonnell
- Computational & Theoretical Neuroscience Laboratory, Institute for Telecommunications Research, University of South Australia, Mawson Lakes, SA 5095, Australia.
| | | | | | | |
Collapse
|