1
Olsen VK, Whitlock JR, Roudi Y. The quality and complexity of pairwise maximum entropy models for large cortical populations. PLoS Comput Biol 2024; 20:e1012074. PMID: 38696532. DOI: 10.1371/journal.pcbi.1012074.
Abstract
We investigate the ability of the pairwise maximum entropy (PME) model to describe the spiking activity of large populations of neurons recorded from the visual, auditory, motor, and somatosensory cortices. To quantify this performance, we use (1) Kullback-Leibler (KL) divergences, (2) the extent to which the pairwise model predicts third-order correlations, and (3) its ability to predict the probability that multiple neurons are simultaneously active. We compare these with the performance of a model with independent neurons and study the relationship between the different performance measures, while varying the population size, the mean firing rate of the chosen population, and the bin size used for binarizing the data. We confirm the previously reported excellent performance of the PME model for small population sizes, N < 20, but we also find that larger mean firing rates and bin sizes generally decrease performance, and that performance for larger populations is generally not as good. For large populations, pairwise models may perform well in predicting third-order correlations and the probability of multiple neurons being active, yet still be significantly worse than for small populations in terms of their improvement over the independent model in KL divergence. We show that these results are independent of the cortical area and of whether approximate methods or Boltzmann learning are used for inferring the pairwise couplings. We compare the scaling of the inferred couplings with N and find it to be well explained by the Sherrington-Kirkpatrick (SK) model, whose strong coupling regime shows a complex phase with many metastable states. We find that, up to the maximum population size studied here, the fitted PME model remains outside its complex phase. However, the standard deviation of the couplings relative to their mean increases, and the model moves closer to the boundary of the complex phase as the population size grows.
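For small populations, a PME fit of the kind described in this abstract can be carried out exactly by Boltzmann learning, since all 2^N binary states can be enumerated. Below is a minimal sketch in Python; the synthetic data, population size, and learning rate are illustrative assumptions, not the paper's settings.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binarized spike data: 5 neurons, 2000 time bins (illustrative, not the paper's data).
N, T = 5, 2000
data = (rng.random((T, N)) < 0.2).astype(float)
data[:, 1] = np.where(rng.random(T) < 0.5, data[:, 0], data[:, 1])  # correlate neurons 0 and 1

emp_m = data.mean(axis=0)      # empirical means <s_i>
emp_C = data.T @ data / T      # empirical pairwise moments <s_i s_j>

# All 2^N binary states, so model expectations are exact (feasible only for small N).
states = np.array(list(itertools.product([0, 1], repeat=N)), dtype=float)

def model_moments(h, J):
    E = states @ h + 0.5 * np.einsum('ti,ij,tj->t', states, J, states)
    p = np.exp(E - E.max())
    p /= p.sum()
    return p @ states, states.T @ (states * p[:, None])

# Boltzmann learning: gradient ascent on the log-likelihood until the model
# reproduces the empirical first- and second-order moments.
h, J = np.zeros(N), np.zeros((N, N))
for _ in range(20000):
    m, C = model_moments(h, J)
    h += 0.2 * (emp_m - m)
    dJ = 0.2 * (emp_C - C)
    np.fill_diagonal(dJ, 0.0)
    J += dJ

m, C = model_moments(h, J)
print(np.abs(m - emp_m).max(), np.abs(C - emp_C).max())  # both near zero after convergence
```

For large N the exact enumeration is intractable, which is where the approximate inference methods compared in the paper come in.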
Affiliation(s)
- Valdemar Kargård Olsen
- Kavli Institute for Systems Neuroscience, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
- Jonathan R Whitlock
- Kavli Institute for Systems Neuroscience, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
- Yasser Roudi
- Kavli Institute for Systems Neuroscience, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
- Department of Mathematics, King's College London, London, United Kingdom
2
Liang T, Brinkman BAW. Statistically inferred neuronal connections in subsampled neural networks strongly correlate with spike train covariances. Phys Rev E 2024; 109:044404. PMID: 38755896. DOI: 10.1103/physreve.109.044404.
Abstract
Statistically inferred neuronal connections from observed spike train data are often skewed from ground truth by factors such as model mismatch, unobserved neurons, and limited data. Spike train covariances, sometimes referred to as "functional connections," are often used as a proxy for the connections between pairs of neurons, but reflect statistical relationships between neurons, not anatomical connections. Moreover, covariances are not causal: spiking activity is correlated in both the past and the future, whereas neurons respond only to synaptic inputs in the past. Connections inferred by maximum likelihood, by contrast, can be constrained to be causal. Nevertheless, we show in this work that the inferred connections in spontaneously active networks modeled by stochastic leaky integrate-and-fire networks strongly correlate with the covariances between neurons, and may reflect noncausal relationships, when many neurons are unobserved or when neurons are weakly coupled. This phenomenon occurs across different network structures, including random networks and balanced excitatory-inhibitory networks. We use a combination of simulations and a mean-field analysis with fluctuation corrections to elucidate the relationships between spike train covariances, inferred synaptic filters, and ground-truth connections in partially observed networks.
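The covariance-versus-inference relationship is easiest to see in a fully observed linear toy network (the paper studies subsampled stochastic leaky integrate-and-fire networks; this linear analogue and all of its parameters are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear-rate analogue of a recurrent network: x(t+1) = W x(t) + noise.
N, T = 20, 20000
W = 0.5 * rng.standard_normal((N, N)) / np.sqrt(N)   # random coupling, stable regime
x = np.zeros((T, N))
for t in range(T - 1):
    x[t + 1] = x[t] @ W.T + rng.standard_normal(N)

xc = x - x.mean(axis=0)

# "Functional connections": time-lagged covariance between activity at t and t+1.
lag_cov = xc[1:].T @ xc[:-1] / (T - 1)

# Maximum-likelihood (least-squares) estimate of the coupling matrix.
W_hat = np.linalg.lstsq(xc[:-1], xc[1:], rcond=None)[0].T

# Off-diagonal inferred couplings correlate strongly with the lagged covariances.
mask = ~np.eye(N, dtype=bool)
r = np.corrcoef(W_hat[mask], lag_cov[mask])[0, 1]
print(r)
```

Here the correlation is high because the lagged covariance equals the coupling matrix times the (nearly diagonal) equal-time covariance; the paper shows that a similar entanglement persists for inferred couplings under subsampling and weak coupling.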
Affiliation(s)
- Tong Liang
- Department of Physics and Astronomy, Stony Brook University, Stony Brook, New York 11794, USA
- Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York 11794, USA
- Braden A W Brinkman
- Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York 11794, USA
3
Jiang W, Wang S. Detecting hidden nodes in networks based on random variable resetting method. Chaos 2023; 33:043109. PMID: 37097951. DOI: 10.1063/5.0134953.
Abstract
Reconstructing network connections from measurable data facilitates our understanding of the mechanism of interactions between nodes. However, the unmeasurable nodes in real networks, also known as hidden nodes, introduce new challenges for reconstruction. Several hidden node detection methods have been proposed, but most are limited by assumptions about the system model, the network structure, and other conditions. In this paper, we propose a general theoretical method for detecting hidden nodes based on the random variable resetting method. We construct a new time series containing hidden node information from the reconstruction results of random variable resetting, theoretically analyze the autocovariance of this time series, and finally provide a quantitative criterion for detecting hidden nodes. We numerically simulate our method in discrete and continuous systems and analyze the influence of the main factors. The simulation results validate our theoretical derivation and illustrate the robustness of the detection method under different conditions.
Affiliation(s)
- Weinuo Jiang
- School of Sciences, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Shihong Wang
- School of Sciences, Beijing University of Posts and Telecommunications, Beijing 100876, China
4
Da Costa L, Friston K, Heins C, Pavliotis GA. Bayesian mechanics for stationary processes. Proc Math Phys Eng Sci 2022; 477:20210518. PMID: 35153603. PMCID: PMC8652275. DOI: 10.1098/rspa.2021.0518.
Abstract
This paper develops a Bayesian mechanics for adaptive systems. First, we model the interface between a system and its environment with a Markov blanket. This affords conditions under which states internal to the blanket encode information about external states. Second, we introduce dynamics and represent adaptive systems as Markov blankets at steady state. This allows us to identify a wide class of systems whose internal states appear to infer external states, consistent with variational inference in Bayesian statistics and theoretical neuroscience. Finally, we partition the blanket into sensory and active states. It follows that active states can be seen as performing active inference and well-known forms of stochastic control (such as PID control), which are prominent formulations of adaptive behaviour in theoretical biology and engineering.
Affiliation(s)
- Lancelot Da Costa
- Department of Mathematics, Imperial College London, London SW7 2AZ, UK
- Wellcome Centre for Human Neuroimaging, University College London, London WC1N 3AR, UK
- Karl Friston
- Wellcome Centre for Human Neuroimaging, University College London, London WC1N 3AR, UK
- Conor Heins
- Department of Collective Behaviour, Max Planck Institute of Animal Behavior, Konstanz D-78457, Germany
- Centre for the Advanced Study of Collective Behaviour, University of Konstanz, Konstanz D-78457, Germany
- Department of Biology, University of Konstanz, Konstanz D-78457, Germany
5
Decelle A, Hwang S, Rocchi J, Tantari D. Inverse problems for structured datasets using parallel TAP equations and restricted Boltzmann machines. Sci Rep 2021; 11:19990. PMID: 34620934. PMCID: PMC8497629. DOI: 10.1038/s41598-021-99353-2.
Abstract
We propose an efficient algorithm to solve inverse problems in the presence of binary clustered datasets. We consider the paradigmatic Hopfield model in a teacher-student scenario, where this situation is found in the retrieval phase. This problem has been widely analyzed through various methods such as mean-field approaches or pseudo-likelihood optimization. Our approach is based on the estimation of the posterior using the Thouless-Anderson-Palmer (TAP) equations in a parallel updating scheme. Unlike other methods, it allows retrieval of the original patterns of the teacher dataset, and thanks to the parallel update it can be applied to large system sizes. We tackle the same problem using a restricted Boltzmann machine (RBM) and discuss analogies and differences between our algorithm and RBM learning.
Affiliation(s)
- Aurelien Decelle
- Laboratoire Interdisciplinaire des Sciences du Numérique, Université Paris-Saclay, CNRS, INRIA TAU team, 91190, Gif-sur-Yvette, France
- Departamento de Física Téorica I, Universidad Complutense, 28040, Madrid, Spain
- Sungmin Hwang
- LPTMS, Université Paris-Sud 11, UMR 8626 CNRS, Bat. 100, 91405, Orsay, Cedex, France
- Jacopo Rocchi
- LPTMS, Université Paris-Sud 11, UMR 8626 CNRS, Bat. 100, 91405, Orsay, Cedex, France
- Daniele Tantari
- Mathematics Department, University of Bologna, Piazza di Porta S. Donato 5, 40126, Bologna, Italy
6
de Heuvel J, Wilting J, Becker M, Priesemann V, Zierenberg J. Characterizing spreading dynamics of subsampled systems with nonstationary external input. Phys Rev E 2021; 102:040301. PMID: 33212575. DOI: 10.1103/physreve.102.040301.
Abstract
Many systems with propagation dynamics, such as spike propagation in neural networks and spreading of infectious diseases, can be approximated by autoregressive models. The estimation of model parameters can be complicated by the experimental limitation that one observes only a fraction of the system (subsampling) and potentially time-dependent parameters, leading to incorrect estimates. We show analytically how to overcome the subsampling bias when estimating the propagation rate for systems with certain nonstationary external input. This approach is readily applicable to trial-based experimental setups and seasonal fluctuations as demonstrated on spike recordings from monkey prefrontal cortex and spreading of norovirus and measles.
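For the stationary special case, the subsampling bias and its multistep-regression correction can be sketched on a Poisson branching process (the paper's contribution is extending this to nonstationary external input; all parameters below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Poisson branching process with stationary external drive h:
# a(t) ~ Poisson(m * a(t-1) + h), with propagation rate m.
m_true, h, T, p_obs = 0.9, 10.0, 100_000, 0.1
a = np.zeros(T, dtype=np.int64)
a[0] = int(h / (1 - m_true))            # start near the stationary mean
for t in range(1, T):
    a[t] = rng.poisson(m_true * a[t - 1] + h)

# Subsampling: each event is observed independently with probability p_obs.
y = rng.binomial(a, p_obs)

def slope(u, v):
    u, v = u - u.mean(), v - v.mean()
    return (u @ v) / (u @ u)

# Naive one-step regression on subsampled activity is biased toward zero...
m_naive = slope(y[:-1], y[1:])

# ...but the k-step slopes still decay as r_k = b * m^k, so an exponential
# fit across lags recovers m despite subsampling (multistep regression).
ks = np.arange(1, 16)
r_k = np.array([slope(y[:-k], y[k:]) for k in ks])
m_mr = np.exp(np.polyfit(ks, np.log(r_k), 1)[0])
print(m_naive, m_mr)  # m_naive well below 0.9; m_mr close to 0.9
```

The bias factor b depends on the subsampling probability but not on the lag, which is why the exponential decay rate, and hence m, survives subsampling.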
Affiliation(s)
- Jorge de Heuvel
- Max Planck Institute for Dynamics and Self-Organization, 37077 Göttingen, Germany
- Jens Wilting
- Max Planck Institute for Dynamics and Self-Organization, 37077 Göttingen, Germany
- Moritz Becker
- Max Planck Institute for Dynamics and Self-Organization, 37077 Göttingen, Germany
- Department of Computational Neuroscience, Third Institute of Physics-Biophysics, Georg-August-University, 37077 Göttingen, Germany
- Viola Priesemann
- Max Planck Institute for Dynamics and Self-Organization, 37077 Göttingen, Germany
- Johannes Zierenberg
- Max Planck Institute for Dynamics and Self-Organization, 37077 Göttingen, Germany
7
Lee S, Periwal V, Jo J. Inference of stochastic time series with missing data. Phys Rev E 2021; 104:024119. PMID: 34525568. PMCID: PMC9531145. DOI: 10.1103/physreve.104.024119.
Abstract
Inferring dynamics from time series is an important objective in data analysis. In particular, it is challenging to infer stochastic dynamics given incomplete data. We propose an expectation-maximization (EM) algorithm that alternates between two steps: the E-step restores missing data points, while the M-step infers an underlying network model from the restored data. Using synthetic data from a kinetic Ising model, we confirm that the algorithm works both for restoring missing data points and for inferring the underlying model. At the initial iteration of the EM algorithm, the model inference shows better model-data consistency with observed data points than with missing data points. As we keep iterating, however, missing data points show better model-data consistency. We find that demanding equal consistency of observed and missing data points provides an effective stopping criterion for the iteration, preventing it from going beyond the most accurate model inference. Using the EM algorithm and the stopping criterion together, we infer missing data points from time-series data of real neuronal activity. Our method reproduces collective properties of neuronal activity, such as correlations and firing statistics, even when 70% of the data points are masked as missing.
Affiliation(s)
- Sangwon Lee
- Department of Physics and Astronomy, Seoul National University, Seoul 08826, Korea
- Vipul Periwal
- Laboratory of Biological Modeling, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, Maryland 20892, USA
- Junghyo Jo
- Department of Physics Education and Center for Theoretical Physics and Artificial Intelligence Institute, Seoul National University, Seoul 08826, Korea
- School of Computational Sciences, Korea Institute for Advanced Study, Seoul 02455, Korea
8
Gemao B, Lai PY. Effects of hidden nodes on noisy network dynamics. Phys Rev E 2021; 103:062302. PMID: 34271711. DOI: 10.1103/physreve.103.062302.
Abstract
We consider coupled network dynamics under uncorrelated noises in which only a subset of the nodes and their dynamics can be observed. The effects of hidden nodes on the dynamics of the observed nodes can be viewed as an extra effective noise acting on the observed nodes. These effective noises possess spatial and temporal correlations whose properties are related to the hidden connections. We analyze the spatial and temporal correlations of these effective noises analytically and verify the results by simulations on undirected and directed weighted random networks and small-world networks. Furthermore, by exploiting the network reconstruction relation for the observed noisy network dynamics, we propose a scheme to infer information about the effects of the hidden nodes, such as the total number of hidden nodes and the weighted total hidden connections onto each observed node. The accuracy of these results is demonstrated by explicit simulations.
Affiliation(s)
- Beverly Gemao
- Department of Physics and Center for Complex Systems, National Central University, Chung-Li District, Taoyuan City 320, Taiwan, Republic of China
- Physics Department, MSU-Iligan Institute of Technology, 9200 Iligan City, Philippines
- Pik-Yin Lai
- Department of Physics and Center for Complex Systems, National Central University, Chung-Li District, Taoyuan City 320, Taiwan, Republic of China
9
Randi F, Leifer AM. Nonequilibrium Green's Functions for Functional Connectivity in the Brain. Phys Rev Lett 2021; 126:118102. PMID: 33798383. PMCID: PMC8454901. DOI: 10.1103/physrevlett.126.118102.
Abstract
A theoretical framework describing the set of interactions between neurons in the brain, or functional connectivity, should include dynamical functions representing the propagation of signal from one neuron to another. Green's functions and response functions are natural candidates for this but, while they are conceptually very useful, they are usually defined only for linear, time-translationally invariant systems. The brain, instead, behaves nonlinearly and in a time-dependent way. Here, we use nonequilibrium Green's functions to describe the time-dependent functional connectivity of a continuous-variable network of neurons. We show how the connectivity is related to the measurable response functions, and provide two illustrative examples via numerical calculations, inspired by Caenorhabditis elegans.
Affiliation(s)
- Francesco Randi
- Department of Physics, Princeton University, Jadwin Hall, Princeton, New Jersey 08544, USA
- Andrew M. Leifer
- Department of Physics, Princeton University, Jadwin Hall, Princeton, New Jersey 08544, USA
- Princeton Neuroscience Institute, Princeton University, New Jersey 08544, USA
10
Terada Y, Obuchi T, Isomura T, Kabashima Y. Inferring Neuronal Couplings From Spiking Data Using a Systematic Procedure With a Statistical Criterion. Neural Comput 2020; 32:2187-2211. PMID: 32946715. DOI: 10.1162/neco_a_01324.
Abstract
Recent remarkable advances in experimental techniques have provided a background for inferring neuronal couplings from point process data that include a great number of neurons. Here, we propose a systematic procedure for pre- and postprocessing generic point process data in an objective manner, so that the data can be handled within the framework of a simple binary statistical model, the Ising or generalized McCulloch-Pitts model. The procedure has two steps: (1) determining the time bin size for transforming the point process data into discrete-time binary data and (2) screening relevant couplings from the estimated couplings. For the first step, we decide the optimal time bin size by introducing the null hypothesis that all neurons fire independently, then choosing a time bin size so that the null hypothesis is rejected under strict criteria. The likelihood associated with the null hypothesis is analytically evaluated and used for the rejection process. For the second, postprocessing step, after a coupling estimate is obtained from the preprocessed data set (any estimator can be used with the proposed procedure), the estimate is compared with many other estimates derived from data sets obtained by randomizing the original data set in the time direction. We accept the original estimate as relevant only if its absolute value is sufficiently larger than those of the randomized data sets. These manipulations suppress false positive couplings induced by statistical noise. We apply this inference procedure to spiking data from synthetic and in vitro neuronal networks. The results show that the proposed procedure identifies the presence or absence of synaptic couplings fairly well, including their signs, for both synthetic and experimental data. In particular, the results indicate that we can infer the physical connections of the underlying systems in favorable situations, even when using a simple statistical model.
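The second, screening step can be sketched with a simple lagged-covariance estimator standing in for the coupling estimator (the procedure works with any estimator; the spike trains, rates, coupling strength, and threshold below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

# Binary spike trains (time bins): neuron 1 is driven by neuron 0 with a one-bin
# delay; neuron 2 fires independently.
T = 20_000
s0 = (rng.random(T) < 0.10).astype(float)
s1 = (rng.random(T) < 0.05 + 0.4 * np.roll(s0, 1)).astype(float)
s2 = (rng.random(T) < 0.10).astype(float)

def coupling(pre, post):
    """Stand-in coupling estimator: lagged covariance between pre(t) and post(t+1)."""
    a, b = pre[:-1] - pre[:-1].mean(), post[1:] - post[1:].mean()
    return (a @ b) / (T - 1)

def screen(pre, post, n_surrogates=200, n_sigma=4.0):
    """Accept a coupling only if it clearly exceeds the null distribution obtained
    by randomizing the presynaptic train in the time direction."""
    est = coupling(pre, post)
    null = np.array([coupling(rng.permutation(pre), post) for _ in range(n_surrogates)])
    return abs(est) > abs(null.mean()) + n_sigma * null.std()

flag_true = screen(s0, s1)    # real coupling: should survive screening
flag_false = screen(s2, s1)   # no coupling: should be rejected
print(flag_true, flag_false)
```

Time-randomizing the presynaptic train destroys any temporal coupling while preserving firing rates, so the surrogate estimates form a null distribution of purely noise-induced couplings.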
Affiliation(s)
- Yu Terada
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, Wako, Saitama 351-0198, Japan
- Tomoyuki Obuchi
- Department of Systems Science, Graduate School of Informatics, Kyoto University, Kyoto 606-8501, Japan
- Takuya Isomura
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, Wako, Saitama 351-0198, Japan
- Yoshiyuki Kabashima
- Institute for Physics of Intelligence, Graduate School of Science, The University of Tokyo, Tokyo 113-0033, Japan
11
Campajola C, Lillo F, Tantari D. Inference of the kinetic Ising model with heterogeneous missing data. Phys Rev E 2019; 99:062138. PMID: 31330593. DOI: 10.1103/physreve.99.062138.
Abstract
We consider the problem of inferring a causality structure from multiple binary time series by using the kinetic Ising model in datasets where a fraction of observations is missing. Inspired by recent work on mean-field methods for the inference of the model with hidden spins, we develop a pseudo-expectation-maximization algorithm that is able to work even under severe data sparsity. The methodology relies on the Martin-Siggia-Rose path integral method with a second-order saddle-point solution, making it possible to approximate the log-likelihood in polynomial time and giving as output an estimate of the couplings matrix and of the missing observations. We also propose a recursive version of the algorithm, where at every iteration some missing values are substituted by their maximum-likelihood estimate, showing that the method can be used together with sparsification schemes such as lasso regularization or decimation. We test the performance of the algorithm on synthetic data and find interesting properties regarding its dependence on the heterogeneity of the spins' observation frequencies, as well as its behavior when hypotheses necessary for the saddle-point approximation are violated, such as the small-couplings limit and the assumption of statistical independence between couplings.
Affiliation(s)
- Carlo Campajola
- Scuola Normale Superiore di Pisa, piazza dei Cavalieri 7, 56126 Pisa, Italy
- Fabrizio Lillo
- University of Bologna, Department of Mathematics, piazza di Porta San Donato 5, 40126 Bologna, Italy
- Daniele Tantari
- University of Florence, Department of Economics and Management, via delle Pandette 9, 50127 Firenze, Italy
12
Hoang DT, Jo J, Periwal V. Data-driven inference of hidden nodes in networks. Phys Rev E 2019; 99:042114. PMID: 31108681. DOI: 10.1103/physreve.99.042114.
Abstract
The explosion of activity in finding interactions in complex systems is driven by the availability of copious observations of complex natural systems. However, such systems, e.g., the human brain, are rarely completely observable. Interaction network inference must then contend with hidden variables affecting the behavior of the observed parts of the system. We present an effective approach for model inference with hidden variables. From configurations of observed variables, we identify the observed-to-observed, hidden-to-observed, observed-to-hidden, and hidden-to-hidden interactions, the configurations of hidden variables, and the number of hidden variables. We demonstrate the performance of our method by simulating a kinetic Ising model, and show that our method outperforms existing methods. Turning to real data, we infer the hidden nodes in a neuronal network in the salamander retina and in a stock market network. We show that predictive modeling with hidden variables is significantly more accurate than modeling without them. Finally, an important hidden variable problem is finding the number of clusters in a dataset. We apply our method to classify MNIST handwritten digits and find about 60 clusters, roughly equally distributed among the digits.
Affiliation(s)
- Danh-Tai Hoang
- Laboratory of Biological Modeling, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, Maryland 20892, USA
- Department of Natural Sciences, Quang Binh University, Dong Hoi, Quang Binh 510000, Vietnam
- Junghyo Jo
- Department of Statistics, Keimyung University, Daegu 42601, Korea
- School of Computational Sciences, Korea Institute for Advanced Study, Seoul 02455, Korea
- Vipul Periwal
- Laboratory of Biological Modeling, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, Maryland 20892, USA
13
Brinkman BAW, Rieke F, Shea-Brown E, Buice MA. Predicting how and when hidden neurons skew measured synaptic interactions. PLoS Comput Biol 2018; 14:e1006490. PMID: 30346943. PMCID: PMC6219819. DOI: 10.1371/journal.pcbi.1006490.
Abstract
A major obstacle to understanding neural coding and computation is the fact that experimental recordings typically sample only a small fraction of the neurons in a circuit. Measured neural properties are skewed by interactions between recorded neurons and the “hidden” portion of the network. To properly interpret neural data and determine how biological structure gives rise to neural circuit function, we thus need a better understanding of the relationships between measured effective neural properties and the true underlying physiological properties. Here, we focus on how the effective spatiotemporal dynamics of the synaptic interactions between neurons are reshaped by coupling to unobserved neurons. We find that the effective interactions from a pre-synaptic neuron r′ to a post-synaptic neuron r can be decomposed into a sum of the true interaction from r′ to r plus corrections from every directed path from r′ to r through unobserved neurons. Importantly, the resulting formula reveals when the hidden units have—or do not have—major effects on reshaping the interactions among observed neurons. As a particular example of interest, we derive a formula for the impact of hidden units in random networks with “strong” coupling—connection weights that scale with 1/N, where N is the network size, precisely the scaling observed in recent experiments. With this quantitative relationship between measured and true interactions, we can study how network properties shape effective interactions, which properties are relevant for neural computations, and how to manipulate effective interactions.

No experiment in neuroscience can record from more than a tiny fraction of the total number of neurons present in a circuit. This severely complicates measurement of a network’s true properties, as unobserved neurons skew measurements away from what would be measured if all neurons were observed. For example, the measured post-synaptic response of a neuron to a spike from a particular pre-synaptic neuron incorporates direct connections between the two neurons as well as the effect of any number of indirect connections, including through unobserved neurons. To understand how measured quantities are distorted by unobserved neurons, we calculate a general relationship between measured “effective” synaptic interactions and the ground-truth interactions in the network. This allows us to identify conditions under which hidden neurons substantially alter measured interactions. Moreover, it provides a foundation for future work on manipulating effective interactions between neurons to better understand and potentially alter circuit function—or dysfunction.
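For a linear network, integrating out the hidden units gives a closed form for this kind of decomposition, with corrections summing over all directed paths through hidden units (the paper derives the analogous result for nonlinear spiking networks; the sizes and coupling scale below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

# Linear-network analogue: dynamics x(t+1) = W x(t) + input, with W partitioned into
# observed (o) and hidden (h) blocks. Integrating out the hidden units gives
#   W_eff = W_oo + W_oh (I - W_hh)^(-1) W_ho,
# i.e. the true couplings plus corrections from every directed path through hidden units.
N, n_obs = 12, 8
W = 0.3 * rng.standard_normal((N, N)) / np.sqrt(N)
o, h = slice(0, n_obs), slice(n_obs, N)

I_h = np.eye(N - n_obs)
W_eff = W[o, o] + W[o, h] @ np.linalg.inv(I_h - W[h, h]) @ W[h, o]

# The matrix inverse is exactly the geometric series over hidden-path lengths:
series = sum(np.linalg.matrix_power(W[h, h], k) for k in range(50))
W_eff_paths = W[o, o] + W[o, h] @ series @ W[h, o]
print(np.abs(W_eff - W_eff_paths).max())  # ~0: the two expressions agree
```

The k-th term of the series collects all paths that traverse exactly k hidden units, so weak hidden-hidden coupling means short hidden paths dominate the correction.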
Affiliation(s)
- Braden A W Brinkman
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America
- Department of Physiology and Biophysics, University of Washington, Seattle, Washington, United States of America
- Fred Rieke
- Department of Physiology and Biophysics, University of Washington, Seattle, Washington, United States of America
- Graduate Program in Neuroscience, University of Washington, Seattle, Washington, United States of America
- Eric Shea-Brown
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America
- Department of Physiology and Biophysics, University of Washington, Seattle, Washington, United States of America
- Graduate Program in Neuroscience, University of Washington, Seattle, Washington, United States of America
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Michael A Buice
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America
- Allen Institute for Brain Science, Seattle, Washington, United States of America
14
Donner C, Opper M. Inverse Ising problem in continuous time: A latent variable approach. Phys Rev E 2017; 96:062104. PMID: 29347355. DOI: 10.1103/physreve.96.062104.
Abstract
We consider the inverse Ising problem: the inference of network couplings from observed spin trajectories for a model with continuous time Glauber dynamics. By introducing two sets of auxiliary latent random variables we render the likelihood into a form which allows for simple iterative inference algorithms with analytical updates. The variables are (1) Poisson variables to linearize an exponential term which is typical for point process likelihoods and (2) Pólya-Gamma variables, which make the likelihood quadratic in the coupling parameters. Using the augmented likelihood, we derive an expectation-maximization (EM) algorithm to obtain the maximum likelihood estimate of network parameters. Using a third set of latent variables we extend the EM algorithm to sparse couplings via L1 regularization. Finally, we develop an efficient approximate Bayesian inference algorithm using a variational approach. We demonstrate the performance of our algorithms on data simulated from an Ising model. For data which are simulated from a more biologically plausible network with spiking neurons, we show that the Ising model captures well the low order statistics of the data and how the Ising couplings are related to the underlying synaptic structure of the simulated network.
Affiliation(s)
- Christian Donner
- Artificial Intelligence Group, Technische Universität, Marchstr. 23, 10587 Berlin, Germany
- Manfred Opper
- Artificial Intelligence Group, Technische Universität, Marchstr. 23, 10587 Berlin, Germany
15
Lin TW, Das A, Krishnan GP, Bazhenov M, Sejnowski TJ. Differential Covariance: A New Class of Methods to Estimate Sparse Connectivity from Neural Recordings. Neural Comput 2017; 29:2581-2632. [PMID: 28777719] [DOI: 10.1162/neco_a_01008]
Abstract
With our ability to record more neurons simultaneously, making sense of these data is a challenge. Functional connectivity is one popular way to study the relationship between multiple neural signals. Correlation-based methods are a set of currently well-used techniques for functional connectivity estimation. However, due to explaining away and unobserved common inputs (Stevenson, Rebesco, Miller, & Körding, 2008), they produce spurious connections. The general linear model (GLM), which models spike trains as Poisson processes (Okatan, Wilson, & Brown, 2005; Truccolo, Eden, Fellows, Donoghue, & Brown, 2005; Pillow et al., 2008), avoids these confounds. We develop here a new class of methods based on differential signals derived from simulated intracellular voltage recordings; the core estimator is equivalent to a regularized AR(2) model. We also extend the method to simulated local field potential recordings and calcium imaging. On all of our simulated data, the differential covariance-based methods achieved performance better than or similar to the GLM method and required fewer data samples. This new class of methods provides alternative ways to analyze neural signals.
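A minimal sketch of the differential-covariance idea: correlate a finite-difference derivative of the signal with the signal itself. Everything below is a toy linear model of our own construction, and the final normalization by the inverse covariance is one simple choice for illustration, not claimed to match the paper's exact estimators:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear network: dV/dt = A V + noise, with two planted directed
# connections in A (illustrative, not the paper's simulation protocol).
N, T, dt = 8, 50000, 0.01
A = np.zeros((N, N))
A[np.diag_indices(N)] = -1.0            # leak term on every node
A[2, 5] = A[6, 1] = 0.8                 # planted directed connections
V = np.zeros(N)
rec = np.empty((T, N))
for t in range(T):
    V = V + dt * (A @ V) + np.sqrt(dt) * rng.normal(size=N)
    rec[t] = V

# Differential covariance: correlate the derivative with the signal.
# For this linear model E[dV/dt V^T] = A C, so right-multiplying by the
# inverse covariance recovers an estimate of the connectivity A.
dV = (rec[1:] - rec[:-1]) / dt          # forward-difference derivative
V0 = rec[:-1]
dVc = dV - dV.mean(axis=0)
Vc = V0 - V0.mean(axis=0)
dCov = dVc.T @ Vc / len(V0)
C = Vc.T @ Vc / len(V0)
A_hat = dCov @ np.linalg.inv(C)

print(np.round(A_hat[2, 5], 2), np.round(A_hat[6, 1], 2))
```

The two planted entries of `A_hat` come out near 0.8 while unconnected pairs stay near zero, which is the sense in which the derivative-based covariance is directional where the plain covariance is not.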
Affiliation(s)
- Tiger W Lin
- Howard Hughes Medical Institute, Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA 92037, U.S.A., and Neurosciences Graduate Program, University of California San Diego, La Jolla, CA 92092, U.S.A.
- Anup Das
- Howard Hughes Medical Institute, Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA 92037, U.S.A., and Jacobs School of Engineering, University of California San Diego, La Jolla, CA 92092, U.S.A.
- Giri P Krishnan
- Department of Medicine, University of California San Diego, La Jolla, CA 92092, U.S.A.
- Maxim Bazhenov
- Department of Medicine, University of California San Diego, La Jolla, CA 92092, U.S.A.
- Terrence J Sejnowski
- Howard Hughes Medical Institute, Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA 92037, U.S.A., and Institute for Neural Computation, University of California San Diego, La Jolla, CA 92092, U.S.A.
16
Learning with unknowns: Analyzing biological data in the presence of hidden variables. Curr Opin Syst Biol 2017. [DOI: 10.1016/j.coisb.2016.12.010]
17
Donner C, Obermayer K, Shimazaki H. Approximate Inference for Time-Varying Interactions and Macroscopic Dynamics of Neural Populations. PLoS Comput Biol 2017; 13:e1005309. [PMID: 28095421] [PMCID: PMC5283755] [DOI: 10.1371/journal.pcbi.1005309]
Abstract
Models from statistical physics, such as the Ising model, offer a convenient way to characterize stationary activity of neural populations. Such stationary activity may be expected for recordings from in vitro slices or anesthetized animals. However, modeling the activity of cortical circuitries of awake animals has been more challenging because both spike rates and interactions can change with sensory stimulation, behavior, or the internal state of the brain. Previous approaches to modeling the dynamics of neural interactions suffer from high computational cost; their application has therefore been limited to only about a dozen neurons. Here, by introducing multiple analytic approximation methods to a state-space model of neural population activity, we make it possible to estimate dynamic pairwise interactions of up to 60 neurons. More specifically, we applied the pseudolikelihood approximation to the state-space model and combined it with the Bethe or TAP mean-field approximation to make sequential Bayesian estimation of the model parameters possible. The large-scale analysis allows us to investigate the dynamics of macroscopic properties of neural circuitries underlying stimulus processing and behavior. We show on simulated data that the model accurately estimates the dynamics of network properties such as sparseness, entropy, and heat capacity, and demonstrate the utility of these measures by analyzing the activity of monkey V4 neurons as well as a simulated balanced network of spiking neurons. Simultaneous analysis of large-scale neural populations is necessary to understand the coding principles of neurons because they process information concertedly. Methods from thermodynamics and statistical mechanics are useful for understanding collective phenomena of interacting elements, and they have been successfully used to understand diverse activity of neurons.
However, most analysis methods assume stationary data, in which the activity rates of neurons and their correlations are constant over time. This assumption is easily violated in data recorded from awake animals. Neural correlations likely organize dynamically during behavior and cognition, possibly independently of the modulated activity rates of individual neurons. Recently, several methods were proposed to simultaneously estimate the dynamics of neural interactions, but these methods are applicable to only about 10 neurons. Here, by combining multiple analytic approximation methods, we make it possible to estimate time-varying interactions of much larger neural populations. The method allows us to trace dynamic macroscopic properties of neural circuitries such as sparseness, entropy, and sensitivity. Using these statistics, researchers can now quantify to what extent neurons are correlated or de-correlated, and test whether neural systems are susceptible within a specific behavioral period.
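For small populations, the macroscopic quantities named above (entropy, heat capacity, mean activity) can be computed exactly from an Ising/maximum-entropy model by enumerating all 2^N states. The sketch below uses random illustrative parameters rather than parameters fitted to data, and simply shows the bookkeeping:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)

# Small binary (0/1) maximum-entropy model with random illustrative
# parameters: negative biases make firing sparse, weak symmetric couplings.
N = 10
h = rng.normal(-0.5, 0.3, size=N)                    # biases
J = rng.normal(0.0, 0.3 / np.sqrt(N), size=(N, N))
J = np.triu(J, 1) + np.triu(J, 1).T                  # symmetric, zero diagonal

# Exact enumeration of all 2^N = 1024 states.
states = np.array(list(product([0, 1], repeat=N)), dtype=float)
E = -(states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states))
logZ = np.logaddexp.reduce(-E)                       # log partition function
p = np.exp(-E - logZ)                                # Boltzmann probabilities

entropy = -np.sum(p * np.log(p))                     # in nats
heat_capacity = np.sum(p * E**2) - np.sum(p * E)**2  # Var(E) at kT = 1
mean_activity = np.sum(p * states.sum(axis=1)) / N   # fraction of active units

print(f"S = {entropy:.2f} nats, C = {heat_capacity:.2f}, "
      f"mean activity = {mean_activity:.2f}")
```

Exhaustive enumeration scales as 2^N, which is exactly why larger populations require the pseudolikelihood and mean-field approximations developed in the paper.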
Affiliation(s)
- Christian Donner
- Bernstein Center for Computational Neuroscience, Berlin, Germany
- Neural Information Processing Group, Department of Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany
- Group for Methods of Artificial Intelligence, Department of Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany
- Klaus Obermayer
- Bernstein Center for Computational Neuroscience, Berlin, Germany
- Neural Information Processing Group, Department of Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany
18
Bravi B, Opper M, Sollich P. Inferring hidden states in Langevin dynamics on large networks: Average case performance. Phys Rev E 2017; 95:012122. [PMID: 28208380] [DOI: 10.1103/physreve.95.012122]
Abstract
We present average performance results for dynamical inference problems in large networks, where a set of nodes is hidden while the time trajectories of the others are observed. Examples of this scenario can occur in signal transduction and gene regulation networks. We focus on the linear stochastic dynamics of continuous variables interacting via random Gaussian couplings of generic symmetry. We analyze the inference error, given by the variance of the posterior distribution over hidden paths, in the thermodynamic limit and as a function of the system parameters and the ratio α between the number of hidden and observed nodes. By applying Kalman filter recursions we find that the posterior dynamics is governed by an "effective" drift that incorporates the effect of the observations. We present two approaches for characterizing the posterior variance that allow us to tackle, respectively, equilibrium and nonequilibrium dynamics. The first appeals to Random Matrix Theory and reveals average spectral properties of the inference error and typical posterior relaxation times; the second is based on dynamical functionals and yields the inference error as the solution of an algebraic equation.
Affiliation(s)
- B Bravi
- Department of Mathematics, King's College London, Strand, London WC2R 2LS, United Kingdom
- M Opper
- Department of Artificial Intelligence, Technische Universität Berlin, Marchstraße 23, Berlin 10587, Germany
- P Sollich
- Department of Mathematics, King's College London, Strand, London WC2R 2LS, United Kingdom
19
Roudi Y, Taylor G. Learning with hidden variables. Curr Opin Neurobiol 2015; 35:110-8. [DOI: 10.1016/j.conb.2015.07.006]
20
Roudi Y, Dunn B, Hertz J. Multi-neuronal activity and functional connectivity in cell assemblies. Curr Opin Neurobiol 2015; 32:38-44. [DOI: 10.1016/j.conb.2014.10.011]
21
Tyrcha J, Hertz J. Network inference with hidden units. Math Biosci Eng 2014; 11:149-165. [PMID: 24245678] [DOI: 10.3934/mbe.2014.11.149]
Abstract
We derive learning rules for finding the connections between units in stochastic dynamical networks from the recorded history of a "visible" subset of the units. We consider two models. In both of them, the visible units are binary and stochastic. In one model the "hidden" units are continuous-valued, with sigmoidal activation functions, and in the other they are binary and stochastic like the visible ones. We derive exact learning rules for both cases. For the stochastic case, performing the exact calculation requires, in general, repeated summations over a number of configurations that grows exponentially with the size of the system and the data length, which is not feasible for large systems. We derive a mean field theory, based on a factorized ansatz for the distribution of hidden-unit states, which offers an attractive alternative for large systems. We present the results of some numerical calculations that illustrate key features of the two models and, for the stochastic case, the exact and approximate calculations.
Affiliation(s)
- Joanna Tyrcha
- Department of Mathematics, Stockholm University, Kraftriket, S-106 91 Stockholm, Sweden.