1. Negrón A, Getz MP, Handy G, Doiron B. The mechanics of correlated variability in segregated cortical excitatory subnetworks. Proc Natl Acad Sci U S A 2024; 121:e2306800121. [PMID: 38959037] [DOI: 10.1073/pnas.2306800121]
Abstract
Understanding the genesis of shared trial-to-trial variability in neuronal population activity within the sensory cortex is critical to uncovering the biological basis of information processing in the brain. Shared variability is often a reflection of the structure of cortical connectivity since it likely arises, in part, from local circuit inputs. A series of experiments from segregated networks of (excitatory) pyramidal neurons in the mouse primary visual cortex challenge this view. Specifically, the across-network correlations were found to be larger than predicted given the known weak cross-network connectivity. We aim to uncover the circuit mechanisms responsible for these enhanced correlations through biologically motivated cortical circuit models. Our central finding is that coupling each excitatory subpopulation with a specific inhibitory subpopulation provides the most robust network-intrinsic solution in shaping these enhanced correlations. This result argues for the existence of excitatory-inhibitory functional assemblies in early sensory areas which mirror not just response properties but also connectivity between pyramidal cells. Furthermore, our findings provide theoretical support for recent experimental observations showing that cortical inhibition forms structural and functional subnetworks with excitatory cells, in contrast to the classical view that inhibition is a nonspecific blanket suppression of local excitation.
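The quantity at issue here, shared trial-to-trial (noise) correlation between pyramidal cells, can be illustrated with a minimal sketch. This is not the paper's circuit model; it is a toy generative process in which a Poisson input shared by two neurons induces a positive noise correlation across repeated trials.

```python
import numpy as np

def noise_correlations(counts):
    """Pairwise trial-to-trial (noise) correlations of spike counts.
    counts: array of shape (trials, neurons), spike counts in a fixed
    window for a repeated, identical stimulus."""
    return np.corrcoef(counts, rowvar=False)

# Toy generative model (illustrative only): each neuron's count is a
# private Poisson term plus a Poisson input shared by both neurons.
# With equal rates, the expected noise correlation is 5 / (5 + 5) = 0.5.
rng = np.random.default_rng(0)
shared = rng.poisson(5.0, size=(2000, 1))    # common input, all trials
private = rng.poisson(5.0, size=(2000, 2))   # independent inputs
counts = shared + private                    # (trials, neurons)
C = noise_correlations(counts)
```

The circuit question the paper addresses is what local connectivity, in place of the explicit `shared` term above, produces correlations of the observed magnitude.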
Affiliation(s)
- Alex Negrón
- Department of Applied Mathematics, Illinois Institute of Technology, Chicago, IL 60616
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL 60637
- Matthew P Getz
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL 60637
- Department of Neurobiology, University of Chicago, Chicago, IL 60637
- Department of Statistics, University of Chicago, Chicago, IL 60637
- Gregory Handy
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL 60637
- Department of Neurobiology, University of Chicago, Chicago, IL 60637
- Department of Statistics, University of Chicago, Chicago, IL 60637
- Brent Doiron
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL 60637
- Department of Neurobiology, University of Chicago, Chicago, IL 60637
- Department of Statistics, University of Chicago, Chicago, IL 60637
2. Waitzmann F, Wu YK, Gjorgjieva J. Top-down modulation in canonical cortical circuits with short-term plasticity. Proc Natl Acad Sci U S A 2024; 121:e2311040121. [PMID: 38593083] [PMCID: PMC11032497] [DOI: 10.1073/pnas.2311040121]
Abstract
Cortical dynamics and computations are strongly influenced by diverse GABAergic interneurons, including those expressing parvalbumin (PV), somatostatin (SST), and vasoactive intestinal peptide (VIP). Together with excitatory (E) neurons, they form a canonical microcircuit and exhibit counterintuitive nonlinear phenomena. One instance of such phenomena is response reversal, whereby SST neurons show opposite responses to top-down modulation via VIP depending on the presence of bottom-up sensory input, indicating that the network may function in different regimes under different stimulation conditions. Combining analytical and computational approaches, we demonstrate that model networks with multiple interneuron subtypes and experimentally identified short-term plasticity mechanisms can implement response reversal. Surprisingly, despite not directly affecting SST and VIP activity, PV-to-E short-term depression has a decisive impact on SST response reversal. We show how response reversal relates to inhibition stabilization and the paradoxical effect in the presence of several short-term plasticity mechanisms, demonstrating that response reversal coincides with a change in the indispensability of SST for network stabilization. In summary, our work suggests a role of short-term plasticity mechanisms in generating nonlinear phenomena in networks with multiple interneuron subtypes and makes several experimentally testable predictions.
Affiliation(s)
- Felix Waitzmann
- School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, 60438 Frankfurt, Germany
- Yue Kris Wu
- School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, 60438 Frankfurt, Germany
- Julijana Gjorgjieva
- School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, 60438 Frankfurt, Germany
3. Papo D, Buldú JM. Does the brain behave like a (complex) network? I. Dynamics. Phys Life Rev 2024; 48:47-98. [PMID: 38145591] [DOI: 10.1016/j.plrev.2023.12.006]
Abstract
Graph theory is now becoming a standard tool in system-level neuroscience. However, endowing observed brain anatomy and dynamics with a complex network structure does not entail that the brain actually works as a network. Asking whether the brain behaves as a network means asking whether network properties count. From the viewpoint of neurophysiology and, possibly, of brain physics, the most substantial issues a network structure may be instrumental in addressing relate to the influence of network properties on brain dynamics and to whether these properties ultimately explain some aspects of brain function. Here, we address the dynamical implications of complex network structure, examining which aspects and scales of brain activity may be understood to genuinely behave as a network. To do so, we first define the meaning of networkness, and analyse some of its implications. We then examine ways in which brain anatomy and dynamics can be endowed with a network structure and discuss possible ways in which network structure may be shown to represent a genuine organisational principle of brain activity, rather than just a convenient description of its anatomy and dynamics.
Affiliation(s)
- D Papo
- Department of Neuroscience and Rehabilitation, Section of Physiology, University of Ferrara, Ferrara, Italy; Center for Translational Neurophysiology, Fondazione Istituto Italiano di Tecnologia, Ferrara, Italy.
- J M Buldú
- Complex Systems Group & G.I.S.C., Universidad Rey Juan Carlos, Madrid, Spain
4. Masuda N, Aihara K, MacLaren NG. Anticipating regime shifts by mixing early warning signals from different nodes. Nat Commun 2024; 15:1086. [PMID: 38316802] [PMCID: PMC10844243] [DOI: 10.1038/s41467-024-45476-9]
Abstract
Real systems showing regime shifts, such as ecosystems, are often composed of many dynamical elements interacting on a network. Various early warning signals have been proposed for anticipating regime shifts from observed data. However, it is unclear how one should combine early warning signals from different nodes for better performance. Based on the theory of stochastic differential equations, we propose a method to optimize the node set from which to construct an early warning signal. The proposed method takes into account that uncertainty as well as the magnitude of the signal affects its predictive performance, that a large magnitude or small uncertainty of the signal in one situation does not imply the signal's high performance, and that combining early warning signals from different nodes is often but not always beneficial. The method performs well particularly when different nodes are subjected to different amounts of dynamical noise and stress.
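The two ingredients, a per-node early warning signal and a rule for mixing nodes, can be sketched minimally. The variance signal and plain averaging below are standard illustrative choices, not the optimized node set and weighting derived in the paper.

```python
import numpy as np

def node_variances(X):
    """Per-node variance, a standard early warning signal: a node's
    fluctuations typically grow as the system nears a regime shift.
    X: array of shape (timesteps, nodes)."""
    return X.var(axis=0, ddof=1)

def combined_ews(X, nodes):
    """Combine the per-node signal over a chosen node subset by simple
    averaging. (The paper instead optimizes which nodes to mix, trading
    off signal magnitude against its sampling uncertainty.)"""
    return node_variances(X)[list(nodes)].mean()

# Toy data: nodes closer to the instability fluctuate more strongly,
# modeled here as larger per-node noise amplitudes.
rng = np.random.default_rng(1)
scales = 1.0 + 0.2 * np.arange(10)           # per-node noise levels
X = rng.standard_normal((5000, 10)) * scales
```

On these toy data, the combined signal over the strongly fluctuating nodes exceeds that over the weakly fluctuating ones, which is the kind of contrast the paper's node-selection procedure exploits.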
Affiliation(s)
- Naoki Masuda
- Department of Mathematics, State University of New York at Buffalo, Buffalo, NY, 14260-2900, USA.
- Institute for Artificial Intelligence and Data Science, State University of New York at Buffalo, Buffalo, NY, 14260-5030, USA.
- Kazuyuki Aihara
- International Research Center for Neurointelligence, The University of Tokyo Institutes for Advanced Study, The University of Tokyo, Bunkyo City, Japan
- Neil G MacLaren
- Department of Mathematics, State University of New York at Buffalo, Buffalo, NY, 14260-2900, USA
5. Lizier JT, Bauer F, Atay FM, Jost J. Analytic relationship of relative synchronizability to network structure and motifs. Proc Natl Acad Sci U S A 2023; 120:e2303332120. [PMID: 37669393] [PMCID: PMC10500263] [DOI: 10.1073/pnas.2303332120]
Abstract
Synchronization phenomena on networks have attracted much attention in studies of neural, social, economic, and biological systems, yet we still lack a systematic understanding of how relative synchronizability relates to underlying network structure. Indeed, this question is of central importance to the key theme of how dynamics on networks relate to their structure more generally. We present an analytic technique to directly measure the relative synchronizability of noise-driven time-series processes on networks, in terms of the directed network structure. We consider both discrete-time autoregressive processes and continuous-time Ornstein-Uhlenbeck dynamics on networks, which can represent linearizations of nonlinear systems. Our technique builds on computation of the network covariance matrix in the space orthogonal to the synchronized state, enabling it to be more general than previous work in not requiring either symmetric (undirected) or diagonalizable connectivity matrices and allowing arbitrary self-link weights. More importantly, our approach quantifies the relative synchronization specifically in terms of the contribution of process motif (walk) structures. We demonstrate that in general the relative abundance of process motifs with convergent directed walks (including feedback and feedforward loops) hinders synchronizability. We also reveal subtle differences between the motifs involved for discrete or continuous-time dynamics. Our insights analytically explain several known general results regarding synchronizability of networks, including that small-world and regular networks are less synchronizable than random networks.
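For orientation, the classical synchronizability diagnostic that results like the final sentence refer to is the Laplacian eigenvalue ratio λ_N/λ_2 (smaller means easier to synchronize). The sketch below computes this textbook measure for a ring lattice versus a random graph; it is not the process-motif analysis developed in the paper, and the random graph here is somewhat denser than the ring.

```python
import numpy as np

def eigenratio(A):
    """lambda_N / lambda_2 of the graph Laplacian of adjacency matrix A:
    the classical master-stability synchronizability index."""
    L = np.diag(A.sum(axis=1)) - A
    w = np.sort(np.linalg.eigvalsh(L))
    return w[-1] / w[1]          # w[1] = algebraic connectivity

n = 60
# Regular ring lattice: each node linked to 5 neighbours on each side.
ring = np.zeros((n, n))
for i in range(n):
    for d in range(1, 6):
        ring[i, (i + d) % n] = ring[(i + d) % n, i] = 1.0

# Erdos-Renyi random graph, dense enough to be connected w.h.p.
rng = np.random.default_rng(2)
er = np.triu((rng.random((n, n)) < 0.25).astype(float), 1)
er = er + er.T
```

Here `eigenratio(ring)` comes out much larger than `eigenratio(er)`, reproducing the known observation that regular lattices are less synchronizable than comparably sized random graphs.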
Affiliation(s)
- Joseph T. Lizier
- School of Computer Science and Centre for Complex Systems, Faculty of Engineering, The University of Sydney, Sydney, NSW 2006, Australia
- Max Planck Institute for Mathematics in the Sciences, Leipzig 04103, Germany
- Frank Bauer
- Max Planck Institute for Mathematics in the Sciences, Leipzig 04103, Germany
- Department of Mathematics, Harvard University, Cambridge, MA 02138
- Fatihcan M. Atay
- Max Planck Institute for Mathematics in the Sciences, Leipzig 04103, Germany
- Department of Mathematics, Bilkent University, Ankara 06800, Turkey
- Jürgen Jost
- Max Planck Institute for Mathematics in the Sciences, Leipzig 04103, Germany
- Santa Fe Institute, Santa Fe, NM 87501
6. Negrón A, Getz MP, Handy G, Doiron B. The mechanics of correlated variability in segregated cortical excitatory subnetworks. bioRxiv 2023:2023.04.25.538323 (preprint). [PMID: 37162867] [PMCID: PMC10168290] [DOI: 10.1101/2023.04.25.538323]
Abstract
Understanding the genesis of shared trial-to-trial variability in neural activity within sensory cortex is critical to uncovering the biological basis of information processing in the brain. Shared variability is often a reflection of the structure of cortical connectivity since this variability likely arises, in part, from local circuit inputs. A series of experiments from segregated networks of (excitatory) pyramidal neurons in mouse primary visual cortex challenge this view. Specifically, the across-network correlations were found to be larger than predicted given the known weak cross-network connectivity. We aim to uncover the circuit mechanisms responsible for these enhanced correlations through biologically motivated cortical circuit models. Our central finding is that coupling each excitatory subpopulation with a specific inhibitory subpopulation provides the most robust network-intrinsic solution in shaping these enhanced correlations. This result argues for the existence of excitatory-inhibitory functional assemblies in early sensory areas which mirror not just response properties but also connectivity between pyramidal cells.
Affiliation(s)
- Alex Negrón
- Department of Applied Mathematics, Illinois Institute of Technology
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago
- Matthew P. Getz
- Departments of Neurobiology and Statistics, University of Chicago
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago
- Gregory Handy
- Departments of Neurobiology and Statistics, University of Chicago
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago
- Brent Doiron
- Departments of Neurobiology and Statistics, University of Chicago
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago
7. Morales GB, di Santo S, Muñoz MA. Quasiuniversal scaling in mouse-brain neuronal activity stems from edge-of-instability critical dynamics. Proc Natl Acad Sci U S A 2023; 120:e2208998120. [PMID: 36827262] [PMCID: PMC9992863] [DOI: 10.1073/pnas.2208998120]
Abstract
The brain is in a state of perpetual reverberant neural activity, even in the absence of specific tasks or stimuli. Shedding light on the origin and functional significance of such a dynamical state is essential to understanding how the brain transmits, processes, and stores information. An inspiring, albeit controversial, conjecture proposes that some statistical characteristics of empirically observed neuronal activity can be understood by assuming that brain networks operate in a dynamical regime with features, including the emergence of scale invariance, resembling those seen typically near phase transitions. Here, we present a data-driven analysis based on simultaneous high-throughput recordings of the activity of thousands of individual neurons in various regions of the mouse brain. To analyze these data, we construct a unified theoretical framework that synergistically combines a phenomenological renormalization group approach and techniques that infer the general dynamical state of a neural population, while designing complementary tools. This strategy allows us to uncover strong signatures of scale invariance that are "quasiuniversal" across brain regions and experiments, revealing that all the analyzed areas operate, to a greater or lesser extent, near the edge of instability.
Affiliation(s)
- Guillermo B. Morales
- Departamento de Electromagnetismo y Física de la Materia, Instituto Carlos I de Física Teórica y Computacional, Universidad de Granada, Granada E-18071, Spain
- Serena di Santo
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027
- Miguel A. Muñoz
- Departamento de Electromagnetismo y Física de la Materia, Instituto Carlos I de Física Teórica y Computacional, Universidad de Granada, Granada E-18071, Spain
8. Shomali SR, Rasuli SN, Ahmadabadi MN, Shimazaki H. Uncovering hidden network architecture from spiking activities using an exact statistical input-output relation of neurons. Commun Biol 2023; 6:169. [PMID: 36792689] [PMCID: PMC9932086] [DOI: 10.1038/s42003-023-04511-z]
Abstract
Identifying network architecture from observed neural activities is crucial in neuroscience studies. A key requirement is knowledge of the statistical input-output relation of single neurons in vivo. By utilizing an exact analytical solution of the spike-timing for leaky integrate-and-fire neurons under noisy inputs balanced near the threshold, we construct a framework that links synaptic type, strength, and spiking nonlinearity with the statistics of neuronal population activity. The framework explains structured pairwise and higher-order interactions of neurons receiving common inputs under different architectures. We compared the theoretical predictions with the activity of monkey and mouse V1 neurons and found that excitatory inputs given to pairs explained the observed sparse activity characterized by strong negative triple-wise interactions, thereby ruling out the alternative explanation by shared inhibition. Moreover, we showed that the strong interactions are a signature of excitatory rather than inhibitory inputs whenever the spontaneous rate is low. We present a guide map of neural interactions that helps researchers specify the hidden neuronal motifs underlying interactions found in empirical data.
Affiliation(s)
- Safura Rashid Shomali
- School of Cognitive Sciences, Institute for Research in Fundamental Sciences (IPM), Tehran, 19395-5746, Iran.
- Seyyed Nader Rasuli
- School of Physics, Institute for Research in Fundamental Sciences (IPM), Tehran, 19395-5531, Iran
- Department of Physics, University of Guilan, Rasht, 41335-1914, Iran
- Majid Nili Ahmadabadi
- Control and Intelligent Processing Center of Excellence, School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, 14395-515, Iran
- Hideaki Shimazaki
- Graduate School of Informatics, Kyoto University, Kyoto, 606-8501, Japan
- Center for Human Nature, Artificial Intelligence, and Neuroscience (CHAIN), Hokkaido University, Hokkaido, 060-0812, Japan
9. Evaluating the statistical similarity of neural network activity and connectivity via eigenvector angles. Biosystems 2023; 223:104813. [PMID: 36460172] [DOI: 10.1016/j.biosystems.2022.104813]
Abstract
Neural systems are networks, and strategic comparisons between multiple networks are a prevalent task in many research scenarios. In this study, we construct a statistical test for the comparison of matrices representing pairwise aspects of neural networks, in particular, the correlation between spiking activity and connectivity. The "eigenangle test" quantifies the similarity of two matrices by the angles between their ranked eigenvectors. We calibrate the behavior of the test for use with correlation matrices using stochastic models of correlated spiking activity and demonstrate how it compares to classical two-sample tests, such as the Kolmogorov-Smirnov distance, in the sense that it is able to evaluate also structural aspects of pairwise measures. Furthermore, the principle of the eigenangle test can be applied to compare the similarity of adjacency matrices of certain types of networks. Thus, the approach can be used to quantitatively explore the relationship between connectivity and activity with the same metric. By applying the eigenangle test to the comparison of connectivity matrices and correlation matrices of a random balanced network model before and after a specific synaptic rewiring intervention, we gauge the influence of connectivity features on the correlated activity. Potential applications of the eigenangle test include simulation experiments, model validation, and data analysis.
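The core quantity is simple to state: match the eigenvectors of two symmetric matrices by eigenvalue rank and measure the angles between them. Below is a minimal numpy sketch of that quantity only; the published eigenangle test additionally weights by eigenvalues and calibrates a null distribution, which this sketch omits.

```python
import numpy as np

def eigenangles(A, B):
    """Angles (degrees) between rank-matched eigenvectors of two
    symmetric matrices; identical matrices give all-zero angles."""
    wa, va = np.linalg.eigh(A)          # eigenvalues ascending
    wb, vb = np.linalg.eigh(B)
    va, vb = va[:, ::-1], vb[:, ::-1]   # largest eigenvalue first
    # An eigenvector's sign is arbitrary, so compare |cos(angle)|.
    cos = np.abs(np.sum(va * vb, axis=0))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Two correlation matrices: one from data, one from a noisy copy.
rng = np.random.default_rng(3)
X = rng.standard_normal((2000, 6))
C1 = np.corrcoef(X, rowvar=False)
C2 = np.corrcoef(X + 0.1 * rng.standard_normal(X.shape), rowvar=False)
```

Because `eigenangles` works on any symmetric matrix, the same comparison applies to correlation matrices and to (symmetrized) adjacency matrices, which is the point of using one metric for both activity and connectivity.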
10. Inferring the location of neurons within an artificial network from their activity. Neural Netw 2023; 157:160-175. [DOI: 10.1016/j.neunet.2022.10.012]
11. Miehl C, Onasch S, Festa D, Gjorgjieva J. Formation and computational implications of assemblies in neural circuits. J Physiol 2022. [PMID: 36068723] [DOI: 10.1113/jp282750]
Abstract
In the brain, patterns of neural activity represent sensory information and store it in non-random synaptic connectivity. A prominent theoretical hypothesis states that assemblies, groups of neurons that are strongly connected to each other, are the key computational units underlying perception and memory formation. Compatible with these hypothesised assemblies, experiments have revealed groups of neurons that display synchronous activity, either spontaneously or upon stimulus presentation, and exhibit behavioural relevance. While it remains unclear how assemblies form in the brain, theoretical work has vastly contributed to the understanding of various interacting mechanisms in this process. Here, we review the recent theoretical literature on assembly formation by categorising the involved mechanisms into four components: synaptic plasticity, symmetry breaking, competition and stability. We highlight different approaches and assumptions behind assembly formation and discuss recent ideas of assemblies as the key computational unit in the brain.

Abstract figure legend: Assembly formation. Assemblies are groups of strongly connected neurons formed by the interaction of multiple mechanisms and with vast computational implications. Four interacting components are thought to drive assembly formation: synaptic plasticity, symmetry breaking, competition and stability.
Affiliation(s)
- Christoph Miehl
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438 Frankfurt, Germany
- School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
- Sebastian Onasch
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438 Frankfurt, Germany
- School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
- Dylan Festa
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438 Frankfurt, Germany
- School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
- Julijana Gjorgjieva
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438 Frankfurt, Germany
- School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
12. Predicting brain structural network using functional connectivity. Med Image Anal 2022; 79:102463. [PMID: 35490597] [DOI: 10.1016/j.media.2022.102463]
Abstract
Uncovering the non-trivial brain structure-function relationship is fundamentally important for revealing the organizational principles of the human brain. However, it is challenging to infer a reliable relationship between individual brain structure and function, e.g., the relations between individual brain structural connectivity (SC) and functional connectivity (FC). The brain structure-function relationship displays a distributed and heterogeneous pattern, that is, many functional relationships arise from non-overlapping sets of anatomical connections. This complex relation can be interwoven with widely existing individual structural and functional variations. Motivated by the advances of generative adversarial networks (GAN) and graph convolutional networks (GCN) in the deep learning field, in this work we proposed a multi-GCN based GAN (MGCN-GAN) to infer individual SC from the corresponding FC by automatically learning the complex associations between individual brain structural and functional networks. The generator of MGCN-GAN is composed of multiple multi-layer GCNs which are designed to model complex indirect connections in brain networks. The discriminator of MGCN-GAN is a single multi-layer GCN which aims to distinguish the predicted SC from real SC. To overcome the inherent unstable behavior of GANs, we designed a new structure-preserving (SP) loss function to guide the generator to learn the intrinsic SC patterns more effectively. Using the Human Connectome Project (HCP) dataset and the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset as test beds, our MGCN-GAN model can generate reliable individual SC from FC. This result implies that there may exist a common regulation between specific brain structural and functional architectures across different individuals.
13. Mean-field limits for non-linear Hawkes processes with excitation and inhibition. Stoch Process Their Appl 2022. [DOI: 10.1016/j.spa.2022.07.006]
14. Hu Y, Sompolinsky H. The spectrum of covariance matrices of randomly connected recurrent neuronal networks with linear dynamics. PLoS Comput Biol 2022; 18:e1010327. [PMID: 35862445] [PMCID: PMC9345493] [DOI: 10.1371/journal.pcbi.1010327]
Abstract
A key question in theoretical neuroscience is the relation between the connectivity structure and the collective dynamics of a network of neurons. Here we study the connectivity-dynamics relation as reflected in the distribution of eigenvalues of the covariance matrix of the dynamic fluctuations of the neuronal activities, which is closely related to the network dynamics' Principal Component Analysis (PCA) and the associated effective dimensionality. We consider the spontaneous fluctuations around a steady state in a randomly connected recurrent network of stochastic neurons. An exact analytical expression for the covariance eigenvalue distribution in the large-network limit can be obtained using results from random matrices. The distribution has a finitely supported smooth bulk spectrum and exhibits an approximate power-law tail for coupling matrices near the critical edge. We generalize the results to include second-order connectivity motifs and discuss extensions to excitatory-inhibitory networks. The theoretical results are compared with those from finite-size networks and the effects of temporal and spatial sampling are studied. Preliminary application to whole-brain imaging data is presented. Using simple connectivity models, our work provides theoretical predictions for the covariance spectrum, a fundamental property of recurrent neuronal dynamics, that can be compared with experimental data.

Author summary: Here we study the distribution of eigenvalues, or spectrum, of the neuron-to-neuron covariance matrix in recurrently connected neuronal networks. The covariance spectrum is an important global feature of neuron population dynamics that requires simultaneous recordings of neurons. The spectrum is essential to the widely used Principal Component Analysis (PCA) and generalizes the dimensionality measure of population dynamics. We use a simple model to emulate the complex connections between neurons, where all pairs of neurons interact linearly at a strength specified randomly and independently. We derive a closed-form expression of the covariance spectrum, revealing an interesting long tail of large eigenvalues following a power law as the connection strength increases. To incorporate connectivity features important to biological neural circuits, we generalize the result to networks with an additional low-rank connectivity component that could come from learning and networks consisting of sparsely connected excitatory and inhibitory neurons. To facilitate comparing the theoretical results to experimental data, we derive the precise modifications needed to account for the effect of limited time samples and having unobserved neurons. Preliminary applications to large-scale calcium imaging data suggest our model can well capture the high dimensional population activity of neurons.
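The setting can be made concrete with a small sketch (assumptions: linearized rate dynamics dx/dt = (W - I)x plus unit-strength white noise, and scipy for the Lyapunov solve; parameter choices below are illustrative, not from the paper). The stationary covariance C then solves the Lyapunov equation (W - I)C + C(W - I)^T = -2I, and its eigenvalues form the covariance spectrum.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Linearized fluctuations dx/dt = (W - I) x + sqrt(2) * white noise.
# The stationary covariance C solves (W - I) C + C (W - I)^T = -2 I.
rng = np.random.default_rng(4)
n, g = 300, 0.6   # g < 1 keeps (W - I) stable for random Gaussian W
W = g * rng.standard_normal((n, n)) / np.sqrt(n)

C = solve_continuous_lyapunov(W - np.eye(n), -2.0 * np.eye(n))
C = 0.5 * (C + C.T)                       # symmetrize numerical output
spectrum = np.sort(np.linalg.eigvalsh(C))[::-1]

# Participation ratio: the effective dimensionality implied by the
# spectrum, as used in PCA-based analyses of population activity.
dimensionality = spectrum.sum() ** 2 / (spectrum ** 2).sum()
```

With W = 0 every eigenvalue equals 1; increasing g toward 1 spreads the spectrum and, per the paper, develops the power-law tail of large eigenvalues near the critical edge.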
Affiliation(s)
- Yu Hu
- Department of Mathematics and Division of Life Science, The Hong Kong University of Science and Technology, Hong Kong SAR, China
- Haim Sompolinsky
- Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Center for Brain Science, Harvard University, Cambridge, Massachusetts, United States of America
15. Metastable spiking networks in the replica-mean-field limit. PLoS Comput Biol 2022; 18:e1010215. [PMID: 35714155] [PMCID: PMC9246178] [DOI: 10.1371/journal.pcbi.1010215]
Abstract
Characterizing metastable neural dynamics in finite-size spiking networks remains a daunting challenge. We propose to address this challenge in the recently introduced replica-mean-field (RMF) limit. In this limit, networks are made of infinitely many replicas of the finite network of interest, but with randomized interactions across replicas. Such randomization renders certain excitatory networks fully tractable at the cost of neglecting activity correlations, but with explicit dependence on the finite size of the neural constituents. However, metastable dynamics typically unfold in networks with mixed inhibition and excitation. Here, we extend the RMF computational framework to point-process-based neural network models with exponential stochastic intensities, allowing for mixed excitation and inhibition. Within this setting, we show that metastable finite-size networks admit multistable RMF limits, which are fully characterized by stationary firing rates. Technically, these stationary rates are determined as the solutions of a set of delayed differential equations under certain regularity conditions that any physical solutions shall satisfy. We solve this original problem by combining the resolvent formalism and singular-perturbation theory. Importantly, we find that these rates specify probabilistic pseudo-equilibria which accurately capture the neural variability observed in the original finite-size network. We also discuss the emergence of metastability as a stochastic bifurcation, which can be interpreted as a static phase transition in the RMF limits. In turn, we expect to leverage the static picture of RMF limits to infer purely dynamical features of metastable finite-size networks, such as the transition rates between pseudo-equilibria.
16
Layer M, Senk J, Essink S, van Meegen A, Bos H, Helias M. NNMT: Mean-Field Based Analysis Tools for Neuronal Network Models. Front Neuroinform 2022; 16:835657. [PMID: 35712677 PMCID: PMC9196133 DOI: 10.3389/fninf.2022.835657] [Received: 12/14/2021] [Accepted: 03/17/2022]
Abstract
Mean-field theory of neuronal networks has led to numerous advances in our analytical and intuitive understanding of their dynamics over the past decades. To make mean-field-based analysis tools more accessible, we implemented an extensible, easy-to-use open-source Python toolbox that collects a variety of mean-field methods for the leaky integrate-and-fire neuron model. The Neuronal Network Mean-field Toolbox (NNMT) in its current state allows for estimating properties of large neuronal networks, such as firing rates, power spectra, and dynamical stability, in mean-field and linear-response approximation, without running simulations. In this article, we describe how the toolbox is implemented, show how it is used to reproduce results of previous studies, and discuss different use cases, such as parameter space exploration or mapping between different network models. Although the initial version of the toolbox focuses on methods for leaky integrate-and-fire neurons, its structure is designed to be open and extensible. It aims to provide a platform for collecting analytical methods for neuronal network model analysis, so that the neuroscientific community can take maximal advantage of them.
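NNMT's own API is not reproduced here; as a sketch of the kind of computation such mean-field tools perform, the following estimates a self-consistent stationary firing rate for a leaky integrate-and-fire population via the Siegert formula. All parameters (K, J, mu_ext, var_ext, and the neuron constants) are illustrative assumptions, not values from the paper.

```python
import math
import numpy as np

def siegert_rate(mu, sigma, tau_m=10.0, tau_ref=2.0, v_reset=0.0, theta=15.0):
    """Stationary rate (spikes/ms) of an LIF neuron driven by white noise.

    Trapezoidal quadrature of the Siegert integral; all values in mV and ms.
    """
    lo, hi = (v_reset - mu) / sigma, (theta - mu) / sigma
    u = np.linspace(lo, hi, 2001)
    f = np.array([math.exp(x * x) * (1.0 + math.erf(x)) for x in u])
    integral = np.sum((f[1:] + f[:-1]) * 0.5 * np.diff(u))
    return 1.0 / (tau_ref + tau_m * math.sqrt(math.pi) * integral)

# Self-consistency: recurrent input shifts the input mean and variance with the
# rate itself. K inputs of weight J (mV) on top of external drive; illustrative.
K, J, tau_m = 100, 0.1, 10.0
mu_ext, var_ext = 8.0, 16.0
nu = 0.005                                          # initial guess (spikes/ms)
for _ in range(200):
    mu = mu_ext + K * J * nu * tau_m
    sigma = math.sqrt(var_ext + K * J ** 2 * nu * tau_m)
    nu = 0.5 * nu + 0.5 * siegert_rate(mu, sigma)   # damped fixed-point iteration
```

The damping factor avoids oscillations of the naive iteration; toolboxes like NNMT wrap this kind of fixed-point solve (and its linear-response extensions) behind a uniform interface.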
Affiliation(s)
- Moritz Layer
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- RWTH Aachen University, Aachen, Germany

- Johanna Senk
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany

- Simon Essink
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- RWTH Aachen University, Aachen, Germany

- Alexander van Meegen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Institute of Zoology, Faculty of Mathematics and Natural Sciences, University of Cologne, Cologne, Germany

- Hannah Bos
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany

- Moritz Helias
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
17
Bloch J, Greaves-Tunnell A, Shea-Brown E, Harchaoui Z, Shojaie A, Yazdan-Shahmorad A. Network structure mediates functional reorganization induced by optogenetic stimulation of non-human primate sensorimotor cortex. iScience 2022; 25:104285. [PMID: 35573193 PMCID: PMC9095749 DOI: 10.1016/j.isci.2022.104285] [Received: 11/21/2021] [Revised: 03/22/2022] [Accepted: 04/19/2022]
Abstract
Because aberrant network-level functional connectivity underlies a variety of neural disorders, the ability to induce targeted functional reorganization would be a profound development toward therapies for neural disorders. Brain stimulation has been shown to induce large-scale network-wide functional connectivity changes (FCC), but the mapping from stimulation to the induced changes is unclear. Here, we develop a model which jointly considers the stimulation protocol and the cortical network structure to accurately predict network-wide FCC in response to optogenetic stimulation of non-human primate primary sensorimotor cortex. We observe that the network structure has a much stronger effect than the stimulation protocol on the resulting FCC. We also observe that the mappings from these input features to the FCC diverge over frequency bands and successive stimulations. Our framework represents a paradigm shift for targeted neural stimulation and can be used to interrogate, improve, and develop stimulation-based interventions for neural disorders.
Affiliation(s)
- Julien Bloch
- Department of Bioengineering, University of Washington, Seattle, WA 98105, USA
- Center for Neurotechnology, University of Washington, Seattle, WA 98105, USA
- Computational Neuroscience Center, University of Washington, Seattle, WA 98105, USA
- Washington National Primate Research Center, University of Washington, Seattle, WA 98105, USA

- Eric Shea-Brown
- Department of Applied Mathematics, University of Washington, Seattle, WA 98105, USA
- Center for Neurotechnology, University of Washington, Seattle, WA 98105, USA
- Computational Neuroscience Center, University of Washington, Seattle, WA 98105, USA

- Zaid Harchaoui
- Department of Statistics, University of Washington, Seattle, WA 98105, USA

- Ali Shojaie
- Department of Biostatistics, University of Washington, Seattle, WA 98105, USA

- Azadeh Yazdan-Shahmorad
- Department of Bioengineering, University of Washington, Seattle, WA 98105, USA
- Department of Electrical and Computer Engineering, University of Washington, Seattle, WA 98105, USA
- Center for Neurotechnology, University of Washington, Seattle, WA 98105, USA
- Computational Neuroscience Center, University of Washington, Seattle, WA 98105, USA
- Washington National Primate Research Center, University of Washington, Seattle, WA 98105, USA
18
Ziaei M, Oestreich L, Persson J, Reutens DC, Ebner NC. Neural correlates of affective empathy in aging: A multimodal imaging and multivariate approach. Neuropsychol Dev Cogn B Aging Neuropsychol Cogn 2022; 29:577-598. [PMID: 35156904 DOI: 10.1080/13825585.2022.2036684] [Received: 05/31/2021] [Accepted: 01/27/2022]
Abstract
Empathy is a social-cognitive capacity that undergoes age-related change. Currently, however, the structural and functional neurocircuitry underlying age-related differences in empathy is not well understood. This study aimed to delineate the brain structural and functional networks that subserve affective empathic responses in younger and older adults, using a modified version of the Multifaceted Empathy Task with both positive and negative emotions. Combining multimodal neuroimaging with multivariate partial least squares analysis yielded two novel findings in older but not younger adults: (a) faster empathic responding to negative emotions was related to greater fractional anisotropy of the anterior cingulum and greater functional activity of the anterior cingulate network; (b) in contrast, empathic responding to positive emotions was related to greater fractional anisotropy of the posterior cingulum and greater functional activity of the posterior cingulate network. Such differentiation of structural and functional networks may have critical implications for prosocial behavior and social connection among older adults.
Affiliation(s)
- Maryam Ziaei
- Kavli Institute for Systems Neuroscience, Norwegian University of Science and Technology, Trondheim, Norway
- Jebsen Centre for Alzheimer's Diseases, Norwegian University of Science and Technology, Trondheim, Norway
- Queensland Brain Institute, University of Queensland, Brisbane, Australia
- Centre for Advanced Imaging, University of Queensland, Brisbane, Australia

- Lena Oestreich
- Centre for Advanced Imaging, University of Queensland, Brisbane, Australia
- UQ Centre for Clinical Research, Faculty of Medicine, University of Queensland, Brisbane, Australia

- Jonas Persson
- Center for Lifespan Developmental Research, School of Law, Psychology and Social Work, Örebro University, Örebro, Sweden
- Aging Research Center, Karolinska Institute, Stockholm, Sweden

- David C Reutens
- Centre for Advanced Imaging, University of Queensland, Brisbane, Australia

- Natalie C Ebner
- Department of Psychology, University of Florida, Gainesville, FL, USA
- Department of Aging and Geriatric Research, Institute on Aging, University of Florida, Gainesville, FL, USA
- Center for Cognitive Aging and Memory, Department of Clinical and Health Psychology, University of Florida, Gainesville, FL, USA
19
Zheng C, Pikovsky A. Stochastic bursting in networks of excitable units with delayed coupling. Biol Cybern 2022; 116:121-128. [PMID: 34181074 PMCID: PMC9068677 DOI: 10.1007/s00422-021-00883-9] [Received: 03/02/2021] [Accepted: 06/17/2021]
Abstract
We investigate the phenomenon of stochastic bursting in a noisy excitable unit with multiple weak delay feedbacks, by means of a directed tree lattice model. We derive statistical properties of the resulting spike sequence and expressions for the power spectral density. This simple model is then extended to a network of three units with delayed coupling of a star type. We find the power spectral density of each unit and the cross-spectral density between any two units. The basic assumptions behind the analytical approach are the separation of timescales, which allows the spike train to be described as a point process, and the weakness of coupling, which allows the action of overlapping spikes to be represented via the sum of one-spike excitation probabilities.
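The power spectral density of a spike train described as a point process can also be estimated numerically. A minimal sketch, assuming a homogeneous Poisson spike train binned into Bernoulli counts (the rate and bin width are illustrative): for a Poisson process of rate r the PSD is flat at r, which the estimate below should scatter around.

```python
import numpy as np

rng = np.random.default_rng(1)

# Periodogram estimate of the spike-train power spectral density.
rate, dt = 0.02, 1.0                     # spikes/ms, bin width (ms)
n_bins, n_trials = 2 ** 14, 200

psd = np.zeros(n_bins // 2)
for _ in range(n_trials):
    x = (rng.random(n_bins) < rate * dt).astype(float)   # Bernoulli spike bins
    xf = np.fft.rfft(x - x.mean())                       # remove DC before transforming
    psd += np.abs(xf[1:n_bins // 2 + 1]) ** 2 / (n_bins * dt)
psd /= n_trials
# psd now scatters around the flat level r = 0.02 spikes/ms.
```

With delayed feedback, as in the model above, the flat spectrum acquires oscillatory modulation at multiples of the inverse delay; the same estimator applies unchanged.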
Affiliation(s)
- Chunming Zheng
- Institute for Physics and Astronomy, University of Potsdam, Karl-Liebknecht-Strasse 24/25, 14476 Potsdam-Golm, Germany

- Arkady Pikovsky
- Institute for Physics and Astronomy, University of Potsdam, Karl-Liebknecht-Strasse 24/25, 14476 Potsdam-Golm, Germany
- Department of Control Theory, Nizhny Novgorod State University, Gagarin Avenue 23, 606950 Nizhny Novgorod, Russia
20
Knoll G, Lindner B. Information transmission in recurrent networks: Consequences of network noise for synchronous and asynchronous signal encoding. Phys Rev E 2022; 105:044411. [PMID: 35590546 DOI: 10.1103/physreve.105.044411] [Received: 11/10/2021] [Accepted: 03/04/2022]
Abstract
Information about natural time-dependent stimuli encoded by the sensory periphery or communication between cortical networks may span a large frequency range or be localized to a smaller frequency band. Biological systems have been shown to multiplex such disparate broadband and narrow-band signals and then discriminate them in later populations by employing either an integration (low-pass) or coincidence detection (bandpass) encoding strategy. Analytical expressions have been developed for both encoding methods in feedforward populations of uncoupled neurons and confirm that the integration of a population's output low-pass filters the information, whereas synchronous output encodes less information overall and retains signal information in a selected frequency band. The present study extends the theory to recurrent networks and shows that recurrence may sharpen the synchronous bandpass filter. The frequency of the pass band is significantly influenced by the synaptic strengths, especially for inhibition-dominated networks. Synchronous information transfer is also increased when network models take into account heterogeneity that arises from the stochastic distribution of the synaptic weights.
Affiliation(s)
- Gregory Knoll
- Bernstein Center for Computational Neuroscience Berlin, Philippstr. 13, Haus 2, 10115 Berlin, Germany
- Physics Department, Humboldt University Berlin, Newtonstr. 15, 12489 Berlin, Germany

- Benjamin Lindner
- Bernstein Center for Computational Neuroscience Berlin, Philippstr. 13, Haus 2, 10115 Berlin, Germany
- Physics Department, Humboldt University Berlin, Newtonstr. 15, 12489 Berlin, Germany
21
Lei Z, Liu J, Zhao Y, Liu F, Qian Y, Zheng Z. New Burst-Oscillation Mode in Paced One-Dimensional Excitable Systems. Front Physiol 2022; 13:854887. [PMID: 35399268 PMCID: PMC8984196 DOI: 10.3389/fphys.2022.854887] [Received: 01/14/2022] [Accepted: 02/21/2022]
Abstract
A new type of burst-oscillation mode (BOM) is reported, based on an extensive investigation of the response dynamics of a one-dimensional (1D) paced excitable system with unidirectional coupling. The BOM state alternates between two distinct phases: a phase with multiple short spikes and a phase with a long interval. The realizable and unrealizable regions for the evolution of BOM are identified, determined by the initial pulse number in the system. In the realizable region, the initially inhomogeneous BOM eventually evolves to the homogeneously distributed spike-oscillation mode (SOM), whereas in the unrealizable region it can be maintained. Furthermore, several dynamical features of BOM and SOM are theoretically predicted and verified in numerical simulations. The mechanisms underlying the emergence of BOM are discussed in detail: three key factors, namely the linking time, the system length, and the local dynamics, can effectively modulate the pattern of BOM. Moreover, the parameter region of the external pacing (A, f) that produces the new type of BOM is explicitly identified. These results may facilitate a deeper understanding of bursts in nature and may prove useful in related fields.
Affiliation(s)
- Zhao Lei
- College of Physics and Optoelectronic Technology, Baoji University of Arts and Sciences, Baoji, China
- Advanced Titanium Alloys and Functional Coatings Cooperative Innovation Center, Baoji, China

- Jiajing Liu
- College of Physics and Optoelectronic Technology, Baoji University of Arts and Sciences, Baoji, China
- Advanced Titanium Alloys and Functional Coatings Cooperative Innovation Center, Baoji, China

- Yaru Zhao
- College of Physics and Optoelectronic Technology, Baoji University of Arts and Sciences, Baoji, China
- Advanced Titanium Alloys and Functional Coatings Cooperative Innovation Center, Baoji, China

- Fei Liu
- College of Physics and Optoelectronic Technology, Baoji University of Arts and Sciences, Baoji, China
- Advanced Titanium Alloys and Functional Coatings Cooperative Innovation Center, Baoji, China
- *Correspondence: Fei Liu

- Yu Qian
- College of Physics and Optoelectronic Technology, Baoji University of Arts and Sciences, Baoji, China
- Advanced Titanium Alloys and Functional Coatings Cooperative Innovation Center, Baoji, China

- Zhigang Zheng
- Institute of Systems Science, Huaqiao University, Xiamen, China
- School of Mathematical Sciences, Huaqiao University, Quanzhou, China
- College of Information Science and Engineering, Huaqiao University, Xiamen, China
22
Dahmen D, Layer M, Deutz L, Dąbrowska PA, Voges N, von Papen M, Brochier T, Riehle A, Diesmann M, Grün S, Helias M. Global organization of neuronal activity only requires unstructured local connectivity. eLife 2022; 11:e68422. [PMID: 35049496 PMCID: PMC8776256 DOI: 10.7554/elife.68422] [Received: 03/15/2021] [Accepted: 11/18/2021]
Abstract
Modern electrophysiological recordings simultaneously capture single-unit spiking activities of hundreds of neurons spread across large cortical distances. Yet, this parallel activity is often confined to relatively low-dimensional manifolds. This implies strong coordination even among neurons that are most likely not directly connected. Here, we combine in vivo recordings with network models and theory to characterize the nature of mesoscopic coordination patterns in macaque motor cortex and to expose their origin: we find that heterogeneity in local connectivity supports network states with complex long-range cooperation between neurons that arises from multi-synaptic, short-range connections. Our theory explains the experimentally observed spatial organization of covariances in resting-state recordings as well as the behaviorally related modulation of covariance patterns during a reach-to-grasp task. The ubiquity of heterogeneity in local cortical circuits suggests that the brain uses the described mechanism to flexibly adapt neuronal coordination to momentary demands.
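The linear-response picture behind such covariance analyses can be sketched in a few lines: for a linearized rate network with connectivity W and intrinsic noise covariance D, long-time covariances obey C = (I - W)^{-1} D (I - W)^{-T}. This is a standard linear-response result, not code from the study; N, g, and D are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Covariances from unstructured (heterogeneous but random) connectivity.
N, g = 200, 0.7                            # network size, connectivity strength < 1
W = rng.normal(0.0, g / np.sqrt(N), (N, N))
D = np.eye(N)                              # unit intrinsic variances for simplicity
B = np.linalg.inv(np.eye(N) - W)           # linear-response propagator
C = B @ D @ B.T                            # long-time covariance matrix

# Even without structured connectivity, pairwise covariances are widely dispersed,
# because multi-synaptic paths accumulate heterogeneous contributions.
offdiag = C[~np.eye(N, dtype=bool)]
dispersion = offdiag.std()
```

As g approaches 1, the propagator amplifies multi-synaptic paths and the dispersion of the off-diagonal covariances grows sharply, which is the regime the abstract's "complex long-range cooperation" refers to.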
Affiliation(s)
- David Dahmen
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA-Institute Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany

- Moritz Layer
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA-Institute Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- RWTH Aachen University, Aachen, Germany

- Lukas Deutz
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA-Institute Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- School of Computing, University of Leeds, Leeds, United Kingdom

- Paulina Anna Dąbrowska
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA-Institute Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- RWTH Aachen University, Aachen, Germany

- Nicole Voges
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA-Institute Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- Institut de Neurosciences de la Timone, CNRS - Aix-Marseille University, Marseille, France

- Michael von Papen
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA-Institute Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany

- Thomas Brochier
- Institut de Neurosciences de la Timone, CNRS - Aix-Marseille University, Marseille, France

- Alexa Riehle
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA-Institute Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- Institut de Neurosciences de la Timone, CNRS - Aix-Marseille University, Marseille, France

- Markus Diesmann
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA-Institute Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, School of Medicine, RWTH Aachen University, Aachen, Germany

- Sonja Grün
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA-Institute Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany

- Moritz Helias
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA-Institute Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
23
Frizzell TO, Phull E, Khan M, Song X, Grajauskas LA, Gawryluk J, D'Arcy RCN. Imaging functional neuroplasticity in human white matter tracts. Brain Struct Funct 2022; 227:381-392. [PMID: 34812936 PMCID: PMC8741691 DOI: 10.1007/s00429-021-02407-4] [Received: 07/16/2021] [Accepted: 09/26/2021]
Abstract
Magnetic resonance imaging (MRI) studies are sensitive to biological mechanisms of neuroplasticity in white matter (WM). In particular, diffusion tensor imaging (DTI) has been used to investigate structural changes. Historically, functional MRI (fMRI) neuroplasticity studies have been restricted to gray matter; fMRI has only recently expanded to WM. The current study evaluated WM neuroplasticity before and after motor training in healthy adults, focusing on motor learning in the non-dominant hand. Neuroplasticity changes were evaluated in two established WM regions of interest: the internal capsule and the corpus callosum. Behavioral improvements following training were greater for the non-dominant hand, which corresponded with MRI-based neuroplasticity changes in the internal capsule for DTI fractional anisotropy, fMRI hemodynamic response functions, and low-frequency oscillations (LFOs). In the corpus callosum, MRI-based neuroplasticity changes were detected in LFOs, DTI, and functional correlation tensors (FCT). Taken together, the LFO results converged as significant amplitude reductions, implicating a common underlying mechanism of optimized transmission through altered myelination. The structural and functional neuroplasticity findings open new avenues for direct WM investigations into mapping connectomes and advancing MRI clinical applications.
Affiliation(s)
- Tory O Frizzell
- BrainNET, Health and Technology District, Surrey, BC, Canada
- Faculty of Applied Sciences and Science, Simon Fraser University, Vancouver, BC, Canada

- Elisha Phull
- BrainNET, Health and Technology District, Surrey, BC, Canada
- Faculty of Applied Sciences and Science, Simon Fraser University, Vancouver, BC, Canada

- Mishaa Khan
- BrainNET, Health and Technology District, Surrey, BC, Canada
- Faculty of Applied Sciences and Science, Simon Fraser University, Vancouver, BC, Canada

- Xiaowei Song
- BrainNET, Health and Technology District, Surrey, BC, Canada
- Faculty of Applied Sciences and Science, Simon Fraser University, Vancouver, BC, Canada
- Health Sciences and Innovation, Surrey Memorial Hospital, Surrey, BC, Canada

- Lukas A Grajauskas
- BrainNET, Health and Technology District, Surrey, BC, Canada
- Faculty of Applied Sciences and Science, Simon Fraser University, Vancouver, BC, Canada
- Cumming School of Medicine, University of Calgary, Calgary, AB, Canada

- Jodie Gawryluk
- Division of Medical Sciences, Department of Psychology, University of Victoria, Victoria, BC, Canada
- DM Centre for Brain Health (Radiology), University of British Columbia, Vancouver, BC, Canada

- Ryan C N D'Arcy
- BrainNET, Health and Technology District, Surrey, BC, Canada
- Faculty of Applied Sciences and Science, Simon Fraser University, Vancouver, BC, Canada
- Health Sciences and Innovation, Surrey Memorial Hospital, Surrey, BC, Canada
- DM Centre for Brain Health (Radiology), University of British Columbia, Vancouver, BC, Canada
24
Wang X, Shojaie A. Causal Discovery in High-Dimensional Point Process Networks with Hidden Nodes. Entropy (Basel) 2021; 23:1622. [PMID: 34945928 PMCID: PMC8700240 DOI: 10.3390/e23121622] [Received: 09/22/2021] [Revised: 11/20/2021] [Accepted: 11/27/2021]
Abstract
Thanks to technological advances leading to near-continuous-time observations, emerging multivariate point process data offer new opportunities for causal discovery. However, a key obstacle to achieving this goal is that many relevant processes may not be observed in practice. Naïve estimation approaches that ignore these hidden variables can generate misleading results because of unadjusted confounding. To address this gap, we propose a deconfounding procedure for estimating high-dimensional point process networks when only a subset of the nodes is observed. Our method allows flexible connections between the observed and unobserved processes. It also allows the number of unobserved processes to be unknown and potentially larger than the number of observed nodes. Theoretical analyses and numerical studies highlight the advantages of the proposed method in identifying causal interactions among the observed processes.
Affiliation(s)
- Ali Shojaie
- Department of Biostatistics, University of Washington, Seattle, WA 98195, USA
25
Drifting assemblies for persistent memory: Neuron transitions and unsupervised compensation. Proc Natl Acad Sci U S A 2021; 118:2023832118. [PMID: 34772802 DOI: 10.1073/pnas.2023832118] [Accepted: 09/11/2021]
Abstract
Change is ubiquitous in living beings. In particular, the connectome and neural representations can change. Nevertheless, behaviors and memories often persist over long times. In a standard model, associative memories are represented by assemblies of strongly interconnected neurons. For faithful storage these assemblies are assumed to consist of the same neurons over time. Here we propose a contrasting memory model with complete temporal remodeling of assemblies, based on experimentally observed changes of synapses and neural representations. The assemblies drift freely as noisy autonomous network activity and spontaneous synaptic turnover induce neuron exchange. The gradual exchange allows activity-dependent and homeostatic plasticity to conserve the representational structure and keep inputs, outputs, and assemblies consistent. This leads to persistent memory. Our findings explain recent experimental results on temporal evolution of fear memory representations and suggest that memory systems need to be understood in their completeness as individual parts may constantly change.
26
Diffusion approximation of multi-class Hawkes processes: Theoretical and numerical analysis. Adv Appl Probab 2021. [DOI: 10.1017/apr.2020.73]
Abstract
Oscillatory systems of interacting Hawkes processes with Erlang memory kernels were introduced by Ditlevsen and Löcherbach (Stoch. Process. Appl., 2017). They are piecewise deterministic Markov processes (PDMP) and can be approximated by a stochastic diffusion. In this paper, first, a strong error bound between the PDMP and the diffusion is proved. Second, moment bounds for the resulting diffusion are derived. Third, approximation schemes for the diffusion, based on the numerical splitting approach, are proposed. These schemes are proved to converge with mean-square order 1 and to preserve the properties of the diffusion, in particular the hypoellipticity, the ergodicity, and the moment bounds. Finally, the PDMP and the diffusion are compared through numerical experiments, where the PDMP is simulated with an adapted thinning procedure.
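The thinning simulation mentioned for the PDMP can be illustrated in the simplest case: a univariate Hawkes process with an exponential (Erlang order-1) memory kernel. This is a generic Ogata-style thinning sketch, not the authors' implementation; mu, alpha, and beta are illustrative and chosen subcritical (alpha/beta < 1) so the process does not explode.

```python
import math
import random

random.seed(3)

mu, alpha, beta, t_end = 0.5, 0.8, 2.0, 200.0

t, h, events = 0.0, 0.0, []        # h is the decaying self-excitation term
while True:
    lam_bar = mu + h               # intensity only decays until the next event,
    w = random.expovariate(lam_bar)  # so the current value is a valid upper bound
    t += w
    if t >= t_end:
        break
    h *= math.exp(-beta * w)       # decay the excitation over the waiting time
    if random.random() * lam_bar <= mu + h:   # accept with probability lam(t)/lam_bar
        events.append(t)
        h += alpha                 # each accepted event adds one kernel jump
```

Because the exponential kernel makes (t, h) Markovian, no event history needs to be stored to evaluate the intensity, which is what makes the PDMP view (and its diffusion approximation) tractable.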
27
Robinson PA, Henderson JA, Gabay NC, Aquino KM, Babaie-Janvier T, Gao X. Determination of Dynamic Brain Connectivity via Spectral Analysis. Front Hum Neurosci 2021; 15:655576. [PMID: 34335207 PMCID: PMC8323754 DOI: 10.3389/fnhum.2021.655576] [Received: 01/19/2021] [Accepted: 06/03/2021]
Abstract
Spectral analysis based on neural field theory is used to analyze dynamic connectivity via methods based on the physical eigenmodes that are the building blocks of brain dynamics. These approaches integrate over space instead of averaging over time and thereby greatly reduce or remove the temporal averaging effects, windowing artifacts, and noise at fine spatial scales that have bedeviled the analysis of dynamical functional connectivity (FC). The dependences of FC on dynamics at various timescales, and on windowing, are clarified, and the results are illustrated on simple test cases, demonstrating how modes provide directly interpretable insights that can be related to brain structure and function. It is shown that FC is dynamic even when the brain structure and effective connectivity are fixed, and that the observed patterns of FC are dominated by relatively few eigenmodes. Common artifacts introduced by statistical analyses that do not incorporate the physical nature of the brain are discussed, and it is shown that these are avoided by spectral analysis using eigenmodes. Unlike most published artificially discretized "resting state networks" and other statistically derived patterns, eigenmodes overlap, with every mode extending across the whole brain and every region participating in every mode, just like the vibrations that give rise to the notes of a musical instrument. Despite this, modes are independent and do not interact in the linear limit. It is argued that for many purposes the intrinsic limitations of covariance-based FC instead favor the alternative of tracking eigenmode coefficients vs. time, which provides a compact representation that is directly related to biophysical brain dynamics.
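The mode-coefficient tracking advocated above can be sketched with synthetic data: expand activity in the orthonormal eigenmodes of a structural operator and follow the coefficients over time. Everything below is an illustrative assumption, not the study's pipeline; a ring-graph Laplacian stands in for the neural-field operator.

```python
import numpy as np

rng = np.random.default_rng(5)

# Build a ring-graph Laplacian and take its orthonormal eigenmodes.
N, T = 64, 500
L = 2 * np.eye(N) - np.roll(np.eye(N), 1, axis=0) - np.roll(np.eye(N), -1, axis=0)
evals, modes = np.linalg.eigh(L)           # eigenmodes in the columns of `modes`

x = rng.standard_normal((N, T))            # stand-in for recorded activity
coeffs = modes.T @ x                       # mode coefficients versus time
power = (coeffs ** 2).mean(axis=1)         # participation of each mode
```

Since the modes are orthonormal, `modes @ coeffs` reconstructs the activity exactly; tracking the rows of `coeffs` over time is the compact alternative to windowed covariance-based FC described in the abstract.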
Affiliation(s)
- Peter A Robinson
- School of Physics, University of Sydney, Sydney, NSW, Australia; Center of Excellence for Integrative Brain Function, University of Sydney, Sydney, NSW, Australia
- James A Henderson
- School of Physics, University of Sydney, Sydney, NSW, Australia; Center of Excellence for Integrative Brain Function, University of Sydney, Sydney, NSW, Australia
- Natasha C Gabay
- School of Physics, University of Sydney, Sydney, NSW, Australia; Center of Excellence for Integrative Brain Function, University of Sydney, Sydney, NSW, Australia
- Kevin M Aquino
- School of Physics, University of Sydney, Sydney, NSW, Australia; Center of Excellence for Integrative Brain Function, University of Sydney, Sydney, NSW, Australia
- Tara Babaie-Janvier
- School of Physics, University of Sydney, Sydney, NSW, Australia; Center of Excellence for Integrative Brain Function, University of Sydney, Sydney, NSW, Australia
- Xiao Gao
- School of Physics, University of Sydney, Sydney, NSW, Australia; Center of Excellence for Integrative Brain Function, University of Sydney, Sydney, NSW, Australia; Department of Biomedical Engineering, University of Melbourne, Parkville, VIC, Australia
28
Miller DR, Guenther DT, Maurer AP, Hansen CA, Zalesky A, Khoshbouei H. Dopamine Transporter Is a Master Regulator of Dopaminergic Neural Network Connectivity. J Neurosci 2021; 41:5453-5470. [PMID: 33980544] [PMCID: PMC8221606] [DOI: 10.1523/jneurosci.0223-21.2021] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Received: 01/29/2021] [Revised: 04/19/2021] [Accepted: 05/01/2021] [Indexed: 12/13/2022]
Abstract
Dopaminergic neurons of the substantia nigra pars compacta (SNC) and ventral tegmental area (VTA) exhibit spontaneous firing activity. The dopaminergic neurons in these regions have been shown to exhibit differential sensitivity to neuronal loss and to psychostimulants targeting the dopamine transporter. However, it remains unclear whether these regional differences scale beyond individual neuronal activity to regional neuronal networks. Here, we used live-cell calcium imaging to show that network connectivity greatly differs between the SNC and VTA regions, with a higher incidence of hub-like neurons in the VTA. Specifically, the frequency of hub-like neurons was significantly lower in the SNC than in the adjacent VTA, consistent with the interpretation of a lower network resilience to SNC neuronal loss. We tested this hypothesis, in DAT-cre/loxP-GCaMP6f mice of either sex, by suppressing the activity of an individual dopaminergic neuron, through whole-cell patch-clamp electrophysiology, in either SNC or VTA networks. Neuronal loss in the SNC increased network clustering, whereas the larger number of hub neurons in the VTA overcompensated, decreasing network clustering in the VTA. We further show that network properties are regulatable via a dopamine transporter-dependent, but not a D2 receptor-dependent, mechanism. Our results demonstrate novel regulatory mechanisms of functional network topology in dopaminergic brain regions. SIGNIFICANCE STATEMENT: In this work, we begin to untangle the differences in complex network properties between the substantia nigra pars compacta (SNC) and VTA that may underlie the differential sensitivity between regions. The methods and analysis employed provide a springboard for investigations of network topology in multiple deep brain structures and disorders.
Affiliation(s)
- Douglas R Miller
- Department of Neuroscience, University of Florida, Gainesville, Florida
- Dylan T Guenther
- Department of Neuroscience, University of Florida, Gainesville, Florida
- Andrew P Maurer
- Department of Neuroscience, University of Florida, Gainesville, Florida
- Carissa A Hansen
- Department of Neuroscience, University of Florida, Gainesville, Florida
- Andrew Zalesky
- Melbourne Neuropsychiatry Centre, The University of Melbourne and Melbourne Health, Melbourne, Victoria 3010, Australia
- Department of Biomedical Engineering, Melbourne School of Engineering, The University of Melbourne, Melbourne, Victoria 3010, Australia
29
Smith RJ, Alipourjeddi E, Garner C, Maser AL, Shrey DW, Lopour BA. Infant functional networks are modulated by state of consciousness and circadian rhythm. Netw Neurosci 2021; 5:614-630. [PMID: 34189380] [PMCID: PMC8233111] [DOI: 10.1162/netn_a_00194] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Received: 08/28/2020] [Accepted: 03/22/2021] [Indexed: 01/05/2023]
Abstract
Functional connectivity networks are valuable tools for studying development, cognition, and disease in the infant brain. In adults, such networks are modulated by the state of consciousness and the circadian rhythm; however, it is unknown if infant brain networks exhibit similar variation, given the unique temporal properties of infant sleep and circadian patterning. To address this, we analyzed functional connectivity networks calculated from long-term EEG recordings (average duration 20.8 hr) from 19 healthy infants. Networks were subject specific, as intersubject correlations between weighted adjacency matrices were low. However, within individual subjects, both sleep and wake networks were stable over time, with stronger functional connectivity during sleep than wakefulness. Principal component analysis revealed the presence of two dominant networks; visual sleep scoring confirmed that these corresponded to sleep and wakefulness. Lastly, we found that network strength, degree, clustering coefficient, and path length significantly varied with time of day, when measured in either wakefulness or sleep at the group level. Together, these results suggest that modulation of healthy functional networks occurs over ∼24 hr and is robust and repeatable. Accounting for such temporal periodicities may improve the physiological interpretation and use of functional connectivity analysis to investigate brain function in health and disease.
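The graph measures tracked in this abstract (strength, degree, clustering coefficient, path length) can be computed directly from a weighted functional connectivity matrix; the 4-node matrix below is a hypothetical illustration, not the study's EEG data:

```python
import numpy as np

# Illustrative sketch of common functional-connectivity graph measures.
# A is a hypothetical symmetric weighted adjacency (connectivity) matrix.
A = np.array([[0.0, 0.8, 0.5, 0.0],
              [0.8, 0.0, 0.6, 0.0],
              [0.5, 0.6, 0.0, 0.2],
              [0.0, 0.0, 0.2, 0.0]])

strength = A.sum(axis=1)                    # weighted degree ("network strength")
degree = (A > 0).sum(axis=1)                # binary degree

# Binary clustering coefficient: fraction of a node's neighbour pairs
# that are themselves connected.
B = (A > 0).astype(int)
tri = np.diag(B @ B @ B) / 2                # triangles through each node
pairs = degree * (degree - 1) / 2
clustering = np.where(pairs > 0, tri / np.maximum(pairs, 1), 0.0)

# Characteristic path length via Floyd-Warshall on hop counts.
n = len(B)
D = np.where(B > 0, 1.0, np.inf)
np.fill_diagonal(D, 0.0)
for k in range(n):
    D = np.minimum(D, D[:, [k]] + D[[k], :])
path_length = D[np.triu_indices(n, 1)].mean()

print(degree.tolist(), round(path_length, 3))  # → [2, 2, 3, 1] 1.333
```

Repeating such measures on connectivity matrices estimated in sliding windows is one simple way to expose the ~24 hr modulation the abstract reports.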
Affiliation(s)
- Rachel J. Smith
- Department of Biomedical Engineering, University of California, Irvine, CA, USA
- Ehsan Alipourjeddi
- Department of Biomedical Engineering, University of California, Irvine, CA, USA
- Cristal Garner
- Division of Neurology, Children’s Hospital of Orange County, Orange, CA, USA
- Amy L. Maser
- Department of Psychology, Children’s Hospital of Orange County, Orange, CA, USA
- Daniel W. Shrey
- Division of Neurology, Children’s Hospital of Orange County, Orange, CA, USA
- Department of Pediatrics, University of California, Irvine, Irvine, CA, USA
- Beth A. Lopour
- Department of Biomedical Engineering, University of California, Irvine, CA, USA
30
Robinson PA. Neural field theory of neural avalanche exponents. Biol Cybern 2021; 115:237-243. [PMID: 33939016] [DOI: 10.1007/s00422-021-00875-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 10/18/2020] [Accepted: 04/10/2021] [Indexed: 06/12/2023]
Abstract
The power-law exponents of observed size and lifetime distributions of near-critical neural avalanches are calculated from neural field theory using diagrammatic methods. This brings neural avalanches within the ambit of neural field theory, which has also previously explained near-critical 1/f spectra and many other observed features of neural activity. This strengthens the case for near-criticality of the brain and opens the way for these other phenomena to be interrelated with avalanches and their dynamics.
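The mean-field picture behind the exponents treated field-theoretically here is a critical branching (Galton-Watson) process, for which avalanche sizes follow P(s) ~ s^(-3/2); a minimal simulation (illustrative parameters, not the paper's diagrammatic calculation) is:

```python
import numpy as np

# Sketch: neuronal avalanches as a critical branching process
# (mean offspring = 1), the mean-field limit behind power-law
# avalanche size and lifetime distributions.
rng = np.random.default_rng(11)

def avalanche_size(rng, cap=10**5):
    """Total activations in one avalanche with Poisson(1) offspring."""
    active, size = 1, 1
    while active and size < cap:
        active = rng.poisson(1.0, size=active).sum()  # next generation
        size += active
    return size

sizes = np.array([avalanche_size(rng) for _ in range(20000)])

# At criticality P(s = 1) = P(root has no offspring) = e^(-1) exactly,
# and the s^(-3/2) tail produces very large avalanches.
frac_single = (sizes == 1).mean()
print(abs(frac_single - np.exp(-1)) < 0.01, sizes.max() > 1000)
```

The heavy tail of `sizes` is the signature of near-criticality that the abstract links to 1/f spectra and other observed features of neural activity.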
Affiliation(s)
- P A Robinson
- School of Physics, The University of Sydney, Sydney, New South Wales, 2006, Australia.
- Center of Excellence for Integrative Brain Function, The University of Sydney, Sydney, New South Wales, 2006, Australia.
31
Novelli L, Lizier JT. Inferring network properties from time series using transfer entropy and mutual information: Validation of multivariate versus bivariate approaches. Netw Neurosci 2021; 5:373-404. [PMID: 34189370] [PMCID: PMC8233116] [DOI: 10.1162/netn_a_00178] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Received: 07/22/2020] [Accepted: 12/03/2020] [Indexed: 02/02/2023]
Abstract
Functional and effective networks inferred from time series are at the core of network neuroscience. Interpreting properties of these networks requires inferred network models to reflect key underlying structural features. However, even a few spurious links can severely distort network measures, posing a challenge for functional connectomes. We study the extent to which micro- and macroscopic properties of underlying networks can be inferred by algorithms based on mutual information and bivariate/multivariate transfer entropy. The validation is performed on two macaque connectomes and on synthetic networks with various topologies (regular lattice, small-world, random, scale-free, modular). Simulations are based on a neural mass model and on autoregressive dynamics (employing Gaussian estimators for direct comparison to functional connectivity and Granger causality). We find that multivariate transfer entropy captures key properties of all network structures for longer time series. Bivariate methods can achieve higher recall (sensitivity) for shorter time series but are unable to control false positives (lower specificity) as available data increases. This leads to overestimated clustering, small-world, and rich-club coefficients, underestimated shortest path lengths and hub centrality, and fattened degree distribution tails. Caution should therefore be used when interpreting network properties of functional connectomes obtained via correlation or pairwise statistical dependence measures, rather than more holistic (yet data-hungry) multivariate models.
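The bivariate-versus-multivariate distinction can be demonstrated with the Gaussian estimator, under which transfer entropy coincides with Granger causality; the three-node chain X → Y → Z below is a hypothetical toy system, not the paper's benchmarks:

```python
import numpy as np

# Sketch: Gaussian-estimator transfer entropy on a chain X -> Y -> Z.
# The bivariate estimate reports a spurious X -> Z link that vanishes
# when Y is conditioned on (the multivariate estimate).
rng = np.random.default_rng(1)
T = 20000
x = np.zeros(T); y = np.zeros(T); z = np.zeros(T)
for t in range(1, T):
    x[t] = 0.9 * x[t - 1] + rng.standard_normal()
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
    z[t] = 0.8 * y[t - 1] + 0.1 * rng.standard_normal()

def resid_var(target, predictors):
    """Variance of OLS residuals of target on predictors (with intercept)."""
    A = np.column_stack([np.ones(len(target))] + predictors)
    beta, *_ = np.linalg.lstsq(A, target, rcond=None)
    return (target - A @ beta).var()

def te(src, tgt, cond=()):
    """Gaussian transfer entropy src -> tgt (lag 1), optionally conditioned."""
    past = [tgt[1:-1]] + [c[1:-1] for c in cond]
    full = past + [src[1:-1]]
    return 0.5 * np.log(resid_var(tgt[2:], past) / resid_var(tgt[2:], full))

biv = te(x, z)               # bivariate: inflated by the indirect path
mult = te(x, z, cond=(y,))   # multivariate: conditioning on Y removes it
print(biv > 0.1, mult < 0.01)
```

This is the mechanism behind the abstract's warning: pairwise estimates cannot distinguish direct links from indirect paths, so spurious edges accumulate as data grows.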
Affiliation(s)
- Leonardo Novelli
- Centre for Complex Systems, Faculty of Engineering, University of Sydney, Sydney, Australia
- Joseph T Lizier
- Centre for Complex Systems, Faculty of Engineering, University of Sydney, Sydney, Australia
32
Azeredo da Silveira R, Rieke F. The Geometry of Information Coding in Correlated Neural Populations. Annu Rev Neurosci 2021; 44:403-424. [PMID: 33863252] [DOI: 10.1146/annurev-neuro-120320-082744] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Indexed: 11/09/2022]
Abstract
Neurons in the brain represent information in their collective activity. The fidelity of this neural population code depends on whether and how variability in the response of one neuron is shared with other neurons. Two decades of studies have investigated the influence of these noise correlations on the properties of neural coding. We provide an overview of the theoretical developments on the topic. Using simple, qualitative, and general arguments, we discuss, categorize, and relate the various published results. We emphasize the relevance of the fine structure of noise correlation, and we present a new approach to the issue. Throughout this review, we emphasize a geometrical picture of how noise correlations impact the neural code.
Affiliation(s)
- Fred Rieke
- Department of Physics, Ecole Normale Supérieure, 75005 Paris, France
33
Raimondo S, De Domenico M. Measuring topological descriptors of complex networks under uncertainty. Phys Rev E 2021; 103:022311. [PMID: 33735966] [DOI: 10.1103/physreve.103.022311] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Received: 06/04/2020] [Accepted: 01/13/2021] [Indexed: 11/07/2022]
Abstract
Revealing the structural features of a complex system from the observed collective dynamics is a fundamental problem in network science. To compute the various topological descriptors commonly used to characterize the structure of a complex system (e.g., the degree, the clustering coefficient, etc.), it is usually necessary to completely reconstruct the network of relations between the subsystems. Several methods are available to detect the existence of interactions between the nodes of a network. By observing some physical quantities through time, the structural relationships are inferred using various discriminating statistics (e.g., correlations, mutual information, etc.). In this setting, the uncertainty about the existence of the edges is reflected in the uncertainty about the topological descriptors. In this study, we propose a methodological framework to evaluate this uncertainty, replacing the topological descriptors, even at the level of a single node, with appropriate probability distributions, eluding the reconstruction phase. Our theoretical framework agrees with the numerical experiments performed on a large set of synthetic and real-world networks. Our results provide a grounded framework for the analysis and the interpretation of widely used topological descriptors, such as degree centrality, clustering, and clusters, in scenarios in which the existence of network connectivity is statistically inferred or when the probabilities of existence π_{ij} of the edges are known. To this purpose, we also provide a simple and mathematically grounded process to transform the discriminating statistics into the probabilities π_{ij}.
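When only edge probabilities π_ij are known and edges are assumed independent, a node's degree is no longer a number but a Poisson-binomial random variable whose distribution can be computed exactly; the probabilities below are hypothetical, and this is a sketch of the idea rather than the paper's full framework:

```python
import numpy as np

# Sketch: replace a node's degree with its probability distribution,
# given inferred edge-existence probabilities pi_ij (assumed independent).
def degree_distribution(p):
    """Exact Poisson-binomial pmf of the degree for edge probabilities p."""
    dist = np.array([1.0])                 # P(degree = 0) with no edges yet
    for pij in p:
        new = np.zeros(len(dist) + 1)
        new[:-1] += dist * (1 - pij)       # edge absent
        new[1:] += dist * pij              # edge present
        dist = new
    return dist

pi = [0.9, 0.5, 0.5, 0.1]                  # hypothetical probabilities, one node
pmf = degree_distribution(pi)

mean_deg = sum(k * q for k, q in enumerate(pmf))
print(round(mean_deg, 10))                 # mean equals sum(pi) = 2.0
```

The full pmf, not just the mean, is what lets descriptor uncertainty propagate into downstream statistics without ever committing to a single reconstructed network.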
Affiliation(s)
- Sebastian Raimondo
- CoMuNe Lab, Center for Information and Communication Technology, Fondazione Bruno Kessler, Via Sommarive 18, 38123 Povo (TN), Italy and Department of Mathematics, University of Trento, Via Sommarive 9, 38123 Povo (TN), Italy
- Manlio De Domenico
- CoMuNe Lab, Center for Information and Communication Technology, Fondazione Bruno Kessler, Via Sommarive 18, 38123 Povo (TN), Italy
34
35
Abstract
We present a method for assembling directed networks given a prescribed bi-degree (in- and out-degree) sequence. This method utilises permutations of initial adjacency matrix assemblies that conform to the prescribed in-degree sequence, yet violate the given out-degree sequence. It combines directed edge-swapping and constrained Monte-Carlo edge-mixing for improving approximations to the given out-degree sequence until it is exactly matched. Our method permits inclusion or exclusion of 'multi-edges', allowing assembly of weighted or binary networks. It further allows prescribing the overall percentage of such multiple connections, permitting exploration of a weighted synthetic network space unlike any other method currently available for comparison of real-world networks with controlled multi-edge proportion null spaces. The graph space is sampled by the method non-uniformly, yet the algorithm provides weightings for the sample space across all possible realisations, allowing computation of statistical averages of network metrics as if they were sampled uniformly. Given a sequence of in- and out-degrees, the method can also produce simple graphs for sequences that satisfy conditions of graphicality. Our method successfully builds networks with on the order of 10^7 edges in minutes on a laptop running Matlab. We provide our implementation of the method on the GitHub repository for immediate use by the research community, and demonstrate its application to three real-world networks for null-space comparisons as well as the study of dynamics of neuronal networks.
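In the same spirit, a directed multigraph matching prescribed in- and out-degree sequences can be assembled by stub matching (the directed configuration model) and then rewired with degree-preserving edge swaps; this is an illustrative stand-in with made-up degree sequences, not the paper's constrained Monte-Carlo algorithm:

```python
import numpy as np

# Sketch: stub-matched directed multigraph plus degree-preserving swaps.
# Multi-edges and self-loops are allowed, as in the weighted-network case.
rng = np.random.default_rng(7)
out_deg = [2, 2, 1, 1]                     # hypothetical bi-degree sequence
in_deg = [1, 1, 2, 2]
assert sum(out_deg) == sum(in_deg)

sources = [i for i, d in enumerate(out_deg) for _ in range(d)]
targets = [j for j, d in enumerate(in_deg) for _ in range(d)]
targets = list(rng.permutation(targets))
edges = list(zip(sources, targets))

def swap(edges, rng, n_swaps=100):
    """Degree-preserving rewiring: (a->b, c->d) becomes (a->d, c->b)."""
    edges = edges[:]
    for _ in range(n_swaps):
        i, j = rng.integers(len(edges), size=2)
        (a, b), (c, d) = edges[i], edges[j]
        edges[i], edges[j] = (a, d), (c, b)
    return edges

mixed = swap(edges, rng)

# Both degree sequences survive every swap.
print([sum(1 for s, _ in mixed if s == i) for i in range(4)] == out_deg,
      [sum(1 for _, t in mixed if t == j) for j in range(4)] == in_deg)
```

Each swap exchanges edge endpoints without touching any node's in- or out-degree, which is why the bi-degree sequence is an invariant of the mixing.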
36
Abstract
Brains are composed of networks of neurons that are highly interconnected. A central question in neuroscience is how such neuronal networks operate in tandem to make a functioning brain. To understand this, we need to study how neurons interact with each other in action, such as when viewing a visual scene or performing a motor task. One way to approach this question is by perturbing the activity of functioning neurons and measuring the resulting influence on other neurons. By using computational models of neuronal networks, we studied how this influence in visual networks depends on connectivity. Our results help to interpret contradictory results from previous experimental studies and explain how different connectivity patterns can enhance information processing during natural vision. To unravel the functional properties of the brain, we need to untangle how neurons interact with each other and coordinate in large-scale recurrent networks. One way to address this question is to measure the functional influence of individual neurons on each other by perturbing them in vivo. Application of such single-neuron perturbations in mouse visual cortex has recently revealed feature-specific suppression between excitatory neurons, despite the presence of highly specific excitatory connectivity, which was deemed to underlie feature-specific amplification. Here, we studied which connectivity profiles are consistent with these seemingly contradictory observations, by modeling the effect of single-neuron perturbations in large-scale neuronal networks. Our numerical simulations and mathematical analysis revealed that, contrary to the prima facie assumption, neither inhibition dominance nor broad inhibition alone were sufficient to explain the experimental findings; instead, strong and functionally specific excitatory–inhibitory connectivity was necessary, consistent with recent findings in the primary visual cortex of rodents. 
Such networks had a higher capacity to encode and decode natural images, and this was accompanied by the emergence of response gain nonlinearities at the population level. Our study provides a general computational framework to investigate how single-neuron perturbations are linked to cortical connectivity and sensory coding and paves the road to map the perturbome of neuronal networks in future studies.
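A toy version of the perturbation logic in this abstract can be written down directly: in a linear rate network, the steady-state effect of a small input at neuron j is column j of (I - W)^(-1). The 6-neuron weight matrix below, with strong and functionally specific excitatory-inhibitory coupling, is a hypothetical illustration, not the study's model:

```python
import numpy as np

# Sketch: single-neuron perturbation "influence" in a linear rate network
# with two excitatory tuning groups and matched, specific inhibition.
W = np.zeros((6, 6))
groups = [[0, 1], [2, 3]]                  # E neurons, one I neuron per group
for g, inh in zip(groups, [4, 5]):
    for a in g:
        for b in g:
            if a != b:
                W[a, b] = 0.4              # specific E->E amplification
        W[inh, a] = 1.2                    # specific E->I
        W[a, inh] = -1.0                   # strong, specific I->E

influence = np.linalg.inv(np.eye(6) - W)   # steady-state perturbation response

# Perturbing E neuron 0: its same-group partner (neuron 1) is suppressed
# (feature-specific suppression), while the other group does not move.
col = influence[:, 0]
print(col[1] < 0, abs(col[2]) < 1e-9)
```

Despite the positive E→E weight, the similarly tuned partner is suppressed because the shared, functionally specific inhibition overturns the direct excitation, which is the reconciliation the abstract argues for.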
37
38
Individual differences in local functional brain connectivity affect TMS effects on behavior. Sci Rep 2020; 10:10422. [PMID: 32591568] [PMCID: PMC7320140] [DOI: 10.1038/s41598-020-67162-8] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Received: 12/16/2019] [Accepted: 05/18/2020] [Indexed: 11/25/2022]
Abstract
Behavioral effects of transcranial magnetic stimulation (TMS) often show substantial differences between subjects. One factor that might contribute to these inter-individual differences is the interaction of current brain states with the effects of local brain network perturbation. The aim of the current study was to identify brain regions whose connectivity before and following right parietal perturbation affects individual behavioral effects during a visuospatial target detection task. 20 subjects participated in an fMRI experiment where their brain hemodynamic response was measured during resting state, and then during a visuospatial target detection task following 1 Hz rTMS and sham stimulation. To select a parsimonious set of associated brain regions, an elastic net analysis was used in combination with a whole-brain voxel-wise functional connectivity analysis. TMS-induced changes in accuracy were significantly correlated with the pattern of functional connectivity during the task state following TMS. The functional connectivity of the left superior temporal, angular, and precentral gyri was identified as key explanatory variable for the individual behavioral TMS effects. Our results suggest that the brain must reach an appropriate state in which right parietal TMS can induce improvements in visual target detection. The ability to reach this state appears to vary between individuals.
39
O'Brien JD, Aleta A, Moreno Y, Gleeson JP. Quantifying uncertainty in a predictive model for popularity dynamics. Phys Rev E 2020; 101:062311. [PMID: 32688513] [DOI: 10.1103/physreve.101.062311] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Received: 01/26/2020] [Accepted: 06/03/2020] [Indexed: 11/07/2022]
Abstract
The Hawkes process has garnered attention in recent years for its suitability to describe the behavior of online information cascades. Here we present a fully tractable approach to analytically describe the distribution of the number of events in a Hawkes process, which, in contrast to purely empirical studies or simulation-based models, enables the effect of process parameters on cascade dynamics to be analyzed. We show that the presented theory also allows predictions regarding the future distribution of events after a given number of events have been observed during a time window. Our results are derived through a differential-equation approach to attain the governing equations of a general branching process. We confirm our theoretical findings through extensive simulations of such processes. This work provides the basis for more complete analyses of the self-exciting processes that govern the spreading of information through many communication platforms, including the potential to predict cascade dynamics within confidence limits.
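The Hawkes process at the heart of this abstract is easy to simulate by Ogata's thinning algorithm, and the mean event count can be checked against the classic branching-process expectation E[N(T)] ≈ μT/(1 - α/β) for an exponential kernel; parameter values below are hypothetical:

```python
import numpy as np

# Sketch: Hawkes process with intensity
#   lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i)),
# simulated by Ogata thinning.
def simulate_hawkes(mu, alpha, beta, T, rng):
    events, t = [], 0.0
    while True:
        # Intensity only decays until the next event, so the current value
        # is a valid upper bound for thinning.
        lam_star = mu + alpha * np.exp(-beta * (t - np.array(events))).sum()
        t += rng.exponential(1.0 / lam_star)       # candidate event time
        if t > T:
            return events
        lam_t = mu + alpha * np.exp(-beta * (t - np.array(events))).sum()
        if rng.uniform() < lam_t / lam_star:       # thinning acceptance
            events.append(t)

rng = np.random.default_rng(3)
mu, alpha, beta, T = 1.0, 0.5, 1.0, 50.0
counts = [len(simulate_hawkes(mu, alpha, beta, T, rng)) for _ in range(200)]
expected = mu * T / (1 - alpha / beta)             # = 100 for these values
print(abs(np.mean(counts) - expected) / expected < 0.1)
```

The ratio α/β is the branching ratio (expected offspring per event); the analytic event-count distributions derived in the paper refine this mean-level picture.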
Affiliation(s)
- Joseph D O'Brien
- MACSI, Department of Mathematics and Statistics, University of Limerick, Limerick V94 T9PX, Ireland
- Alberto Aleta
- Institute for Biocomputation and Physics of Complex Systems, University of Zaragoza, Zaragoza 50018, Spain; ISI Foundation, 10126 Turin, Italy
- Yamir Moreno
- Institute for Biocomputation and Physics of Complex Systems, University of Zaragoza, Zaragoza 50018, Spain; ISI Foundation, 10126 Turin, Italy; Department of Theoretical Physics, Faculty of Sciences, University of Zaragoza, Zaragoza 50009, Spain
- James P Gleeson
- MACSI, Department of Mathematics and Statistics, University of Limerick, Limerick V94 T9PX, Ireland
40
Stapmanns J, Kühn T, Dahmen D, Luu T, Honerkamp C, Helias M. Self-consistent formulations for stochastic nonlinear neuronal dynamics. Phys Rev E 2020; 101:042124. [PMID: 32422832] [DOI: 10.1103/physreve.101.042124] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Received: 01/16/2019] [Accepted: 12/18/2019] [Indexed: 01/28/2023]
Abstract
Neural dynamics is often investigated with tools from bifurcation theory. However, many neuron models are stochastic, mimicking fluctuations in the input from unknown parts of the brain or the spiking nature of signals. Noise changes the dynamics with respect to the deterministic model; in particular classical bifurcation theory cannot be applied. We formulate the stochastic neuron dynamics in the Martin-Siggia-Rose de Dominicis-Janssen (MSRDJ) formalism and present the fluctuation expansion of the effective action and the functional renormalization group (fRG) as two systematic ways to incorporate corrections to the mean dynamics and time-dependent statistics due to fluctuations in the presence of nonlinear neuronal gain. To formulate self-consistency equations, we derive a fundamental link between the effective action in the Onsager-Machlup (OM) formalism, which allows the study of phase transitions, and the MSRDJ effective action, which is computationally advantageous. These results in particular allow the derivation of an OM effective action for systems with non-Gaussian noise. This approach naturally leads to effective deterministic equations for the first moment of the stochastic system; they explain how nonlinearities and noise cooperate to produce memory effects. Moreover, the MSRDJ formulation yields an effective linear system that has identical power spectra and linear response. Starting from the better known loopwise approximation, we then discuss the use of the fRG as a method to obtain self-consistency beyond the mean. We present a new efficient truncation scheme for the hierarchy of flow equations for the vertex functions by adapting the Blaizot, Méndez, and Wschebor approximation from the derivative expansion to the vertex expansion. The methods are presented by means of the simplest possible example of a stochastic differential equation that has generic features of neuronal dynamics.
Affiliation(s)
- Jonas Stapmanns
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany; Institute for Theoretical Solid State Physics, RWTH Aachen University, 52074 Aachen, Germany
- Tobias Kühn
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany; Institute for Theoretical Solid State Physics, RWTH Aachen University, 52074 Aachen, Germany
- David Dahmen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Thomas Luu
- Institut für Kernphysik (IKP-3), Institute for Advanced Simulation (IAS-4) and Jülich Center for Hadron Physics, Jülich Research Centre, Jülich, Germany
- Carsten Honerkamp
- Institute for Theoretical Solid State Physics, RWTH Aachen University, 52074 Aachen, Germany; JARA-FIT, Jülich Aachen Research Alliance-Fundamentals of Future Information Technology, Germany
- Moritz Helias
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany; Institute for Theoretical Solid State Physics, RWTH Aachen University, 52074 Aachen, Germany
41
Baajour SJ, Chowdury A, Thomas P, Rajan U, Khatib D, Zajac-Benitez C, Falco D, Haddad L, Amirsadri A, Bressler S, Stanley JA, Diwadkar VA. Disordered directional brain network interactions during learning dynamics in schizophrenia revealed by multivariate autoregressive models. Hum Brain Mapp 2020; 41:3594-3607. [PMID: 32436639] [PMCID: PMC7416040] [DOI: 10.1002/hbm.25032] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Received: 11/12/2019] [Revised: 04/09/2020] [Accepted: 04/28/2020] [Indexed: 12/12/2022]
Abstract
Directional network interactions underpin normative brain function in key domains including associative learning. Schizophrenia (SCZ) is characterized by altered learning dynamics, yet dysfunctional directional functional connectivity (dFC) evoked during learning is rarely assessed. Here, nonlinear learning dynamics were induced using a paradigm alternating between conditions (Encoding and Retrieval). Evoked fMRI time series data were modeled using multivariate autoregressive (MVAR) models, to discover dysfunctional directional interactions between brain network constituents during learning stages (Early vs. Late) and conditions. A functionally derived subnetwork of coactivated (healthy controls [HC] ∩ SCZ) nodes was identified. MVAR models quantified directional interactions between pairs of nodes, and coefficients were evaluated for intergroup differences (HC ≠ SCZ). In exploratory analyses, we quantified statistical effects of neuroleptic dosage on performance and MVAR measures. During Early Encoding, SCZ showed reduced dFC within a frontal-hippocampal-fusiform network, though during Late Encoding reduced dFC was associated with pathways toward the dorsolateral prefrontal cortex (dlPFC). During Early Retrieval, SCZ showed increased dFC in pathways to and from the dorsal anterior cingulate cortex, though during Late Retrieval, patients showed increased dFC in pathways toward the dlPFC but decreased dFC in pathways from the dlPFC. These discoveries constitute novel extensions of our understanding of task-evoked dysconnection in schizophrenia and motivate understanding of the directional aspect of the dysconnection in schizophrenia. Disordered directionality should be investigated using computational psychiatric approaches that complement the MVAR method used in our work.
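The core of the MVAR approach is a least-squares fit of X_t = A X_{t-1} + noise, with directed interactions read off the coefficient matrix; the 3-node ground truth below is hypothetical, not the study's fMRI data:

```python
import numpy as np

# Sketch: fit a first-order multivariate autoregressive (MVAR) model and
# recover directed interactions. A_true[i, j] is the influence j -> i.
rng = np.random.default_rng(2)
A_true = np.array([[0.5, 0.0, 0.0],
                   [0.4, 0.5, 0.0],    # node 0 -> node 1
                   [0.0, 0.4, 0.5]])   # node 1 -> node 2
T = 5000
X = np.zeros((T, 3))
for t in range(1, T):
    X[t] = A_true @ X[t - 1] + 0.1 * rng.standard_normal(3)

# Least-squares MVAR fit: regress X_t on X_{t-1}.
A_hat, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
A_hat = A_hat.T                        # so A_hat[i, j] estimates j -> i

print(np.abs(A_hat - A_true).max() < 0.05)
```

The asymmetry of the recovered matrix (0→1 and 1→2 present, the reverse absent) is what makes the connectivity estimate directional, unlike correlation-based FC.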
Affiliation(s)
- Shahira J Baajour
- Department of Psychiatry and Behavioral Neuroscience, Wayne State University School of Medicine, Detroit, Michigan, USA
- Asadur Chowdury
- Department of Psychiatry and Behavioral Neuroscience, Wayne State University School of Medicine, Detroit, Michigan, USA
- Patricia Thomas
- Department of Psychiatry and Behavioral Neuroscience, Wayne State University School of Medicine, Detroit, Michigan, USA
- Usha Rajan
- Department of Psychiatry and Behavioral Neuroscience, Wayne State University School of Medicine, Detroit, Michigan, USA
- Dalal Khatib
- Department of Psychiatry and Behavioral Neuroscience, Wayne State University School of Medicine, Detroit, Michigan, USA
- Caroline Zajac-Benitez
- Department of Psychiatry and Behavioral Neuroscience, Wayne State University School of Medicine, Detroit, Michigan, USA
- Dimitri Falco
- Center for Complex Systems and Brain Sciences, Florida Atlantic University, Boca Raton, Florida, USA
- Luay Haddad
- Department of Psychiatry and Behavioral Neuroscience, Wayne State University School of Medicine, Detroit, Michigan, USA
- Alireza Amirsadri
- Department of Psychiatry and Behavioral Neuroscience, Wayne State University School of Medicine, Detroit, Michigan, USA
- Steven Bressler
- Center for Complex Systems and Brain Sciences, Florida Atlantic University, Boca Raton, Florida, USA; Department of Psychology, Florida Atlantic University, Boca Raton, Florida, USA
- Jeffery A Stanley
- Department of Psychiatry and Behavioral Neuroscience, Wayne State University School of Medicine, Detroit, Michigan, USA
- Vaibhav A Diwadkar
- Department of Psychiatry and Behavioral Neuroscience, Wayne State University School of Medicine, Detroit, Michigan, USA
42
Fu Y, Kang Y, Chen G. Stochastic Resonance Based Visual Perception Using Spiking Neural Networks. Front Comput Neurosci 2020; 14:24. [PMID: 32499690] [PMCID: PMC7242793] [DOI: 10.3389/fncom.2020.00024] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Received: 11/21/2019] [Accepted: 03/17/2020] [Indexed: 01/20/2023]
Abstract
Our aim is to propose an efficient algorithm for enhancing the contrast of dark images based on the principle of stochastic resonance in a global feedback spiking network of integrate-and-fire neurons. By linear approximation and direct simulation, we disclose the dependence of the peak signal-to-noise ratio on the spiking threshold and the feedback coupling strength. Based on this theoretical analysis, we then develop a dynamical system algorithm for enhancing dark images. In the new algorithm, an explicit formula is given on how to choose a suitable spiking threshold for the images to be enhanced, and a more effective quantifying index, the variance of image, is used to replace the commonly used measure. Numerical tests verify the efficiency of the new algorithm. The investigation provides a good example for the application of stochastic resonance, and it might be useful for explaining the biophysical mechanism behind visual perception.
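The stochastic-resonance principle the algorithm exploits can be shown in one dimension: a subthreshold signal crosses a spiking threshold only with the help of noise, and the signal-output correlation peaks at an intermediate noise level. The parameters below are illustrative, not those of the paper's feedback network:

```python
import numpy as np

# Sketch: stochastic resonance in a simple threshold (spiking) unit.
rng = np.random.default_rng(5)
t = np.linspace(0, 1, 2000)
signal = 0.4 * np.sin(2 * np.pi * 5 * t)   # subthreshold: never reaches theta
theta = 1.0                                 # spiking threshold

def output_correlation(noise_std, trials=200):
    """Mean correlation between the signal and the thresholded output."""
    corrs = []
    for _ in range(trials):
        spikes = (signal + noise_std * rng.standard_normal(len(t))) > theta
        if spikes.any():
            corrs.append(np.corrcoef(signal, spikes.astype(float))[0, 1])
        else:
            corrs.append(0.0)               # no spikes: nothing transmitted
    return np.mean(corrs)

levels = [0.05, 0.5, 5.0]                   # too little / just right / too much
corr = [output_correlation(s) for s in levels]
print(corr[1] > corr[0] and corr[1] > corr[2])
```

Too little noise and no threshold crossings occur; too much and the output is dominated by noise; in between, noise lifts the signal across threshold in a signal-dependent way, which is the effect the image-enhancement algorithm tunes for.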
Affiliation(s)
- Yuxuan Fu
- Department of Applied Mathematics, School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, China
| | - Yanmei Kang
- Department of Applied Mathematics, School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, China
- Guanrong Chen
- Department of Electrical Engineering, City University of Hong Kong, Hong Kong, China
43
Montangie L, Miehl C, Gjorgjieva J. Autonomous emergence of connectivity assemblies via spike triplet interactions. PLoS Comput Biol 2020; 16:e1007835. [PMID: 32384081 PMCID: PMC7239496 DOI: 10.1371/journal.pcbi.1007835]
Abstract
Non-random connectivity can emerge without structured external input driven by activity-dependent mechanisms of synaptic plasticity based on precise spiking patterns. Here we analyze the emergence of global structures in recurrent networks based on a triplet model of spike timing dependent plasticity (STDP), which depends on the interactions of three precisely-timed spikes, and can describe plasticity experiments with varying spike frequency better than the classical pair-based STDP rule. We derive synaptic changes arising from correlations up to third-order and describe them as the sum of structural motifs, which determine how any spike in the network influences a given synaptic connection through possible connectivity paths. This motif expansion framework reveals novel structural motifs under the triplet STDP rule, which support the formation of bidirectional connections and ultimately the spontaneous emergence of global network structure in the form of self-connected groups of neurons, or assemblies. We propose that under triplet STDP assembly structure can emerge without the need for externally patterned inputs or assuming a symmetric pair-based STDP rule common in previous studies. The emergence of non-random network structure under triplet STDP occurs through internally-generated higher-order correlations, which are ubiquitous in natural stimuli and neuronal spiking activity, and important for coding. We further demonstrate how neuromodulatory mechanisms that modulate the shape of the triplet STDP rule or the synaptic transmission function differentially promote structural motifs underlying the emergence of assemblies, and quantify the differences using graph theoretic measures. Emergent non-random connectivity structures in different brain regions are tightly related to specific patterns of neural activity and support diverse brain functions. 
For instance, self-connected groups of neurons, known as assemblies, have been proposed to represent functional units in brain circuits and can emerge even without patterned external instruction. Here we investigate the emergence of non-random connectivity in recurrent networks using a particular plasticity rule, triplet STDP, which relies on the interaction of spike triplets and can capture higher-order statistical dependencies in neural activity. We derive the evolution of the synaptic strengths in the network and explore the conditions for the self-organization of connectivity into assemblies. We demonstrate key differences of the triplet STDP rule compared to the classical pair-based rule in terms of how assemblies are formed, including the realistic asymmetric shape and influence of novel connectivity motifs on network plasticity driven by higher-order correlations. Assembly formation depends on the specific shape of the STDP window and synaptic transmission function, pointing towards an important role of neuromodulatory signals on formation of intrinsically generated assemblies.
Affiliation(s)
- Lisandro Montangie
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany
- Christoph Miehl
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany
- Technical University of Munich, School of Life Sciences, Freising, Germany
- Julijana Gjorgjieva
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany
- Technical University of Munich, School of Life Sciences, Freising, Germany
44
Novelli L, Atay FM, Jost J, Lizier JT. Deriving pairwise transfer entropy from network structure and motifs. Proc Math Phys Eng Sci 2020; 476:20190779. [PMID: 32398937 PMCID: PMC7209155 DOI: 10.1098/rspa.2019.0779]
Abstract
Transfer entropy (TE) is an established method for quantifying directed statistical dependencies in neuroimaging and complex systems datasets. The pairwise (or bivariate) TE from a source to a target node in a network does not depend solely on the local source-target link weight, but on the wider network structure that the link is embedded in. This relationship is studied using a discrete-time linearly coupled Gaussian model, which allows us to derive the TE for each link from the network topology. It is shown analytically that the dependence on the directed link weight is only a first approximation, valid for weak coupling. More generally, the TE increases with the in-degree of the source and decreases with the in-degree of the target, indicating an asymmetry of information transfer between hubs and low-degree nodes. In addition, the TE is directly proportional to weighted motif counts involving common parents or multiple walks from the source to the target, which are more abundant in networks with a high clustering coefficient than in random networks. Our findings also apply to Granger causality, which is equivalent to TE for Gaussian variables. Moreover, similar empirical results on random Boolean networks suggest that the dependence of the TE on the in-degree extends to nonlinear dynamics.
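Because TE equals Granger causality (up to a factor of two) for Gaussian variables, it can be estimated with two linear regressions. A minimal numerical check on a two-node linearly coupled Gaussian system follows; the autoregressive coefficient and coupling weights are illustrative choices, not taken from the paper:

```python
import numpy as np

def simulate_pair(c, n=50_000, a=0.5, seed=1):
    """x drives y with weight c in a discrete-time linear Gaussian system."""
    rng = np.random.default_rng(seed)
    x, y = np.zeros(n), np.zeros(n)
    ex, ey = rng.normal(size=n), rng.normal(size=n)
    for t in range(1, n):
        x[t] = a * x[t - 1] + ex[t]
        y[t] = a * y[t - 1] + c * x[t - 1] + ey[t]
    return x, y

def gaussian_te(x, y):
    """TE(x -> y) in nats: 0.5 * ln(reduced / full residual variance)."""
    yt, yp, xp = y[1:], y[:-1], x[:-1]
    red = yt - np.polyval(np.polyfit(yp, yt, 1), yp)       # past of y only
    A = np.column_stack([yp, xp, np.ones_like(yp)])
    full = yt - A @ np.linalg.lstsq(A, yt, rcond=None)[0]  # plus past of x
    return 0.5 * np.log(red.var() / full.var())

te_weak = gaussian_te(*simulate_pair(0.1))
te_strong = gaussian_te(*simulate_pair(0.4))
```

In this two-node toy the TE grows roughly with the squared link weight in the weak-coupling regime; the wider-network effects the paper derives (source and target in-degree, motif counts) are absent here by construction.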
Affiliation(s)
- Leonardo Novelli
- Centre for Complex Systems, Faculty of Engineering, The University of Sydney, Sydney, Australia
- Fatihcan M. Atay
- Department of Mathematics, Bilkent University, 06800 Ankara, Turkey
- Max Planck Institute for Mathematics in the Sciences, Inselstraße 22, 04103 Leipzig, Germany
- Jürgen Jost
- Max Planck Institute for Mathematics in the Sciences, Inselstraße 22, 04103 Leipzig, Germany
- Santa Fe Institute for the Sciences of Complexity, Santa Fe, New Mexico 87501, USA
- Joseph T. Lizier
- Centre for Complex Systems, Faculty of Engineering, The University of Sydney, Sydney, Australia
- Max Planck Institute for Mathematics in the Sciences, Inselstraße 22, 04103 Leipzig, Germany
45
Inference of synaptic connectivity and external variability in neural microcircuits. J Comput Neurosci 2020; 48:123-147. [PMID: 32080777 DOI: 10.1007/s10827-020-00739-4]
Abstract
A major goal in neuroscience is to estimate neural connectivity from large-scale extracellular recordings of neural activity in vivo. This is challenging in part because any such activity is modulated by unmeasured external synaptic input to the network, known as the common input problem. Many different measures of functional connectivity have been proposed in the literature, but their direct relationship to synaptic connectivity is often assumed or ignored. For in vivo data, measuring this relationship would require knowledge of ground-truth connectivity, which is nearly always unavailable. Instead, many studies use in silico simulations as benchmarks, but such approaches necessarily rely on a variety of simplifying assumptions about the simulated network and can depend on numerous simulation parameters. We combine neuronal network simulations, mathematical analysis, and calcium imaging data to address the question of when and how functional connectivity, synaptic connectivity, and latent external input variability can be untangled. We show numerically and analytically that, even though the precision matrix of recorded spiking activity does not uniquely determine synaptic connectivity, it is in practice often closely related to synaptic connectivity. This relation becomes more pronounced when the spatial structure of neuronal variability is jointly considered.
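The closeness of precision matrix and synaptic connectivity can be seen directly in a linear Gaussian toy model: with unit input noise the precision matrix is exactly (I - W)^T (I - W), i.e. the symmetrized weight matrix plus a second-order common-input term W^T W. A sketch, where network size, sparsity, and weight scale are arbitrary choices rather than the paper's model:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
# sparse random synaptic weight matrix (toy network, hypothetical scales)
W = 0.1 * rng.normal(size=(n, n)) * (rng.random((n, n)) < 0.1)
np.fill_diagonal(W, 0.0)

# stationary activity of the linear model x = W x + e  =>  x = (I - W)^{-1} e
I = np.eye(n)
cov = np.linalg.inv(I - W) @ np.linalg.inv(I - W).T  # unit input-noise covariance
prec = np.linalg.inv(cov)                            # equals (I - W).T @ (I - W)

# Off-diagonal precision ~ -(W + W.T): direct synapses dominate, while the
# W.T @ W term contributes only a second-order common-input correction.
off = ~np.eye(n, dtype=bool)
corr = np.corrcoef(prec[off], (I - W - W.T)[off])[0, 1]
```

With weak weights the correlation between off-diagonal precision and the symmetrized weight matrix is close to one, matching the abstract's claim that the precision matrix, while not uniquely determined by connectivity, tracks it closely in practice.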
46
Synaptic Plasticity Shapes Brain Connectivity: Implications for Network Topology. Int J Mol Sci 2019; 20:ijms20246193. [PMID: 31817968 PMCID: PMC6940892 DOI: 10.3390/ijms20246193]
Abstract
Studies of brain network connectivity have improved our understanding of brain changes and adaptation in response to different pathologies. Synaptic plasticity, the ability of neurons to modify their connections, is involved in brain network remodeling following different types of brain damage (e.g., vascular, neurodegenerative, inflammatory). Although synaptic plasticity mechanisms have been extensively elucidated, how neural plasticity can shape network organization is far from completely understood. Similarities between synaptic plasticity and the principles governing brain network organization could help define brain network properties and reorganization profiles after damage. In this review, we discuss how different forms of synaptic plasticity, including homeostatic and anti-homeostatic mechanisms, could be directly involved in generating specific brain network characteristics. We propose that long-term potentiation could represent the neurophysiological basis for the formation of highly connected nodes (hubs). Conversely, homeostatic plasticity may help stabilize network activity, preventing both poor and excessive connectivity in peripheral nodes. In addition, synaptic plasticity dysfunction may drive brain network disruption in neuropsychiatric conditions such as Alzheimer's disease and schizophrenia. Optimal network architecture, characterized by efficient information processing and resilience, and reorganization after damage strictly depend on the balance between these forms of plasticity.
47
Inferring and validating mechanistic models of neural microcircuits based on spike-train data. Nat Commun 2019; 10:4933. [PMID: 31666513 PMCID: PMC6821748 DOI: 10.1038/s41467-019-12572-0]
Abstract
The interpretation of neuronal spike train recordings often relies on abstract statistical models that allow for principled parameter estimation and model selection but provide only limited insights into underlying microcircuits. In contrast, mechanistic models are useful for interpreting microcircuit dynamics, but are rarely quantitatively matched to experimental data due to methodological challenges. Here we present analytical methods to efficiently fit spiking circuit models to single-trial spike trains. Using derived likelihood functions, we statistically infer the mean and variance of hidden inputs, neuronal adaptation properties, and connectivity for coupled integrate-and-fire neurons. Comprehensive evaluations on synthetic data, validations using ground-truth in vitro and in vivo recordings, and comparisons with existing techniques demonstrate that parameter estimation is very accurate and efficient, even for highly subsampled networks. Our methods bridge statistical, data-driven and theoretical, model-based neurosciences at the level of spiking circuits, for the purpose of a quantitative, mechanistic interpretation of recorded neuronal population activity. It is difficult to fit mechanistic, biophysically constrained circuit models to spike-train data from in vivo extracellular recordings. Here the authors present analytical methods that enable efficient parameter estimation for integrate-and-fire circuit models and inference of the underlying connectivity structure in subsampled networks.
48
From space to time: Spatial inhomogeneities lead to the emergence of spatiotemporal sequences in spiking neuronal networks. PLoS Comput Biol 2019; 15:e1007432. [PMID: 31652259 PMCID: PMC6834288 DOI: 10.1371/journal.pcbi.1007432]
Abstract
Spatio-temporal sequences of neuronal activity are observed in many brain regions in a variety of tasks and are thought to form the basis of meaningful behavior. However, the mechanisms by which a neuronal network can generate spatio-temporal activity sequences have remained obscure. Existing models are biologically untenable because they either require manual embedding of a feedforward network within a random network or supervised learning to train the connectivity of a network to generate sequences. Here, we propose a biologically plausible, generative rule to create spatio-temporal activity sequences in a network of spiking neurons with distance-dependent connectivity. We show that the emergence of spatio-temporal activity sequences requires that (1) individual neurons preferentially project a small fraction of their axons in a specific direction, and (2) the preferential projection directions of neighboring neurons are similar. Thus, anisotropic but correlated connectivity of neuron groups suffices to generate spatio-temporal activity sequences in an otherwise random neuronal network model.
49
Interneuronal correlations at longer time scales predict decision signals for bistable structure-from-motion perception. Sci Rep 2019; 9:11449. [PMID: 31391489 PMCID: PMC6686021 DOI: 10.1038/s41598-019-47786-1]
Abstract
Perceptual decisions are thought to depend on the activation of task-relevant neurons, whose activity is often correlated in time. Here, we examined how the temporal structure of shared variability in neuronal firing relates to perceptual choices. We recorded stimulus-selective neurons from visual area V5/MT while two monkeys (Macaca mulatta) made perceptual decisions about the rotation direction of structure-from-motion cylinders. Interneuronal correlations for a perceptually ambiguous cylinder stimulus were significantly higher than those for unambiguous cylinders or for random 2D motion during passive viewing. Much of the difference arose from correlations at relatively long timescales (hundreds of milliseconds). Choice-related neural activity (quantified as choice probability, CP) for ambiguous cylinders was positively correlated with interneuronal correlations and was specifically associated with their long-timescale component. Furthermore, the slope of the long-timescale component of the correlation, but not of the instantaneous component, predicted higher CPs towards the end of the trial, i.e., close to the decision. Our results suggest that the perceptual stability of structure-from-motion cylinders may be controlled by enhanced interneuronal correlations on longer timescales. We propose this as a potential signature of top-down influences onto V5/MT processing that shape and stabilize the appearance of 3D-motion percepts.
50
Marcos E, Londei F, Genovesio A. Hidden Markov Models Predict the Future Choice Better Than a PSTH-Based Method. Neural Comput 2019; 31:1874-1890. [PMID: 31335289 DOI: 10.1162/neco_a_01216]
Abstract
Beyond average firing rate, other measurable signals of neuronal activity are fundamental to an understanding of behavior. Recently, hidden Markov models (HMMs) have been applied to neural recordings and have described how neuronal ensembles process information by going through sequences of different states. Such collective dynamics are impossible to capture by looking only at the average firing rate. To estimate how well HMMs can decode information contained in single trials, we compared HMMs with a recently developed classification method based on the peristimulus time histogram (PSTH). The accuracy of the two methods was tested using the activity of prefrontal neurons recorded while two monkeys were engaged in a strategy task. In this task, the monkeys had to select one of three spatial targets based on an instruction cue and on their previous choice. We show that, using single-trial neural activity from a period preceding action execution, both models were able to classify the monkeys' choice with above-chance accuracy. Moreover, the HMM was significantly more accurate than the PSTH-based method, even in cases in which the HMM performance was low, although always above chance. Furthermore, the accuracy of both methods was related to the number of neurons exhibiting spatial selectivity within an experimental session. Overall, our study shows that neural activity is better described when more than the mean activity of individual neurons is considered, and that studying signals beyond the average firing rate is therefore fundamental to understanding the dynamics of neuronal ensembles.
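The HMM side of such a comparison reduces to standard machinery: fit one model per choice and assign each trial to the model with the higher sequence likelihood. A minimal sketch with hand-picked two-state models and a toy symbol sequence (all parameters hypothetical; real use would fit the models, e.g. with Baum-Welch, on binned spike counts):

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Sequence log-likelihood under a discrete HMM (scaled forward algorithm)."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate through transitions, then emit
        ll += np.log(alpha.sum())
        alpha /= alpha.sum()
    return ll

pi = np.array([0.5, 0.5])                 # initial state distribution
A = np.array([[0.9, 0.1], [0.1, 0.9]])    # sticky state transitions
B1 = np.array([[0.9, 0.1], [0.6, 0.4]])   # "choice 1" model: symbol 0 likely
B2 = B1[:, ::-1]                          # "choice 2" model: symbol 1 likely
trial = [0, 0, 0, 1, 0, 0]                # toy single-trial symbol sequence
choice = 1 if forward_loglik(trial, pi, A, B1) > forward_loglik(trial, pi, A, B2) else 2
```

A PSTH-based classifier, by contrast, compares each trial only against trial-averaged templates, which is why it cannot exploit the state-sequence structure the HMM captures.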
Affiliation(s)
- Encarni Marcos
- Department of Physiology and Pharmacology, Sapienza University of Rome, Rome 00185, Italy, and Instituto de Neurociencias de Alicante, Consejo Superior de Investigaciones Científicas-Universidad Miguel Hernández de Elche, Sant Joan d'Alacant, Alicante 03550, Spain
- Fabrizio Londei
- Department of Physiology and Pharmacology, Sapienza University of Rome, Rome 00185, Italy
- Aldo Genovesio
- Department of Physiology and Pharmacology, Sapienza University of Rome, Rome 00185, Italy