1. Safavi S, Panagiotaropoulos TI, Kapoor V, Ramirez-Villegas JF, Logothetis NK, Besserve M. Uncovering the organization of neural circuits with Generalized Phase Locking Analysis. PLoS Comput Biol 2023; 19:e1010983. PMID: 37011110; PMCID: PMC10109521; DOI: 10.1371/journal.pcbi.1010983.
Abstract
Despite the considerable progress of in vivo neural recording techniques, inferring the biophysical mechanisms underlying large-scale coordination of brain activity from neural data remains challenging. One obstacle is the difficulty of linking high-dimensional functional connectivity measures to mechanistic models of network activity. We address this issue by investigating spike-field coupling (SFC) measurements, which quantify the synchronization between, on the one hand, the action potentials produced by neurons and, on the other hand, mesoscopic "field" signals, reflecting subthreshold activities at possibly multiple recording sites. As the number of recording sites gets large, the amount of pairwise SFC measurements becomes overwhelmingly challenging to interpret. We develop Generalized Phase Locking Analysis (GPLA) as an interpretable dimensionality reduction of this multivariate SFC. GPLA describes the dominant coupling between field activity and neural ensembles across space and frequencies. We show that GPLA features are biophysically interpretable when used in conjunction with appropriate network models, such that we can identify the influence of underlying circuit properties on these features. We demonstrate the statistical benefits and interpretability of this approach in various computational models and Utah array recordings. The results suggest that GPLA, used jointly with biophysical modeling, can help uncover the contribution of recurrent microcircuits to the spatio-temporal dynamics observed in multi-channel experimental recordings.
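The abstract describes GPLA as an interpretable dimensionality reduction of the matrix of pairwise spike-field couplings, which suggests, at its core, extracting the dominant singular triplet of a complex coupling matrix. The sketch below is a minimal illustration under that reading, not the authors' implementation; the phase-locking-value coupling and the power-iteration shortcut are simplifying assumptions.

```python
import cmath

def coupling_matrix(spike_phases):
    """Assemble a complex spike-field coupling matrix.

    spike_phases[n][c] is the list of field phases (radians, channel c)
    observed at the spike times of neuron n; each matrix entry is the
    corresponding phase-locking value (mean unit phasor).
    """
    return [[sum(cmath.exp(1j * p) for p in phases) / len(phases)
             for phases in neuron]
            for neuron in spike_phases]

def dominant_triplet(M, iters=200):
    """Leading singular value/vectors of a small complex matrix via
    power iteration on M^H M (an illustrative shortcut, not a full SVD)."""
    n_rows, n_cols = len(M), len(M[0])
    v = [1.0 + 0j] * n_cols
    for _ in range(iters):
        u = [sum(M[i][j] * v[j] for j in range(n_cols)) for i in range(n_rows)]
        w = [sum(M[i][j].conjugate() * u[i] for i in range(n_rows))
             for j in range(n_cols)]
        norm = sum(abs(x) ** 2 for x in w) ** 0.5
        v = [x / norm for x in w]
    u = [sum(M[i][j] * v[j] for j in range(n_cols)) for i in range(n_rows)]
    sigma = sum(abs(x) ** 2 for x in u) ** 0.5
    u = [x / sigma for x in u]
    return sigma, u, v  # coupling strength, neuron vector, field vector
```

For a rank-one coupling matrix the triplet reconstructs the matrix exactly; on data, `sigma` summarizes overall coupling strength while `u` and `v` give per-neuron and per-channel coupling patterns.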
Affiliation(s)
- Shervin Safavi
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- IMPRS for Cognitive and Systems Neuroscience, University of Tübingen, Tübingen, Germany
- Theofanis I. Panagiotaropoulos
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Cognitive Neuroimaging Unit, INSERM, CEA, CNRS, Université Paris-Saclay, NeuroSpin center, 91191 Gif/Yvette, France
- Vishal Kapoor
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- International Center for Primate Brain Research (ICPBR), Center for Excellence in Brain Science and Intelligence Technology (CEBSIT), Chinese Academy of Sciences (CAS), Shanghai 201602, China
- Juan F. Ramirez-Villegas
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Institute of Science and Technology Austria (IST Austria), Klosterneuburg, Austria
- Nikos K. Logothetis
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- International Center for Primate Brain Research (ICPBR), Center for Excellence in Brain Science and Intelligence Technology (CEBSIT), Chinese Academy of Sciences (CAS), Shanghai 201602, China
- Centre for Imaging Sciences, Biomedical Imaging Institute, The University of Manchester, Manchester, United Kingdom
- Michel Besserve
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Department of Empirical Inference, Max Planck Institute for Intelligent Systems and MPI-ETH Center for Learning Systems, Tübingen, Germany
2. Zhang W, Yin M, Jiang M, Dai Q. Partitioned estimation methodology of biological neuronal networks with topology-based module detection. Comput Biol Med 2023; 154:106552. PMID: 36738704; DOI: 10.1016/j.compbiomed.2023.106552.
Abstract
Parameter estimation of neuronal networks is closely related to information processing mechanisms in neural systems. Estimating synaptic parameters for neuronal networks is a time-consuming task, and because of the complex interactions between neurons, the computational efficiency and accuracy of estimation methods are relatively low. Meanwhile, inherent topological properties such as core-periphery and modular structures are not fully considered in estimation. In order to improve the efficiency and accuracy of estimation, this study proposes a two-stage PartitionMLE method which introduces detected neuronal modules as topological constraints in estimation. The proposed PartitionMLE method first decomposes the system into multiple non-overlapping neuronal modules by performing topology-based module detection. Dynamic parameters, including intra-modular and inter-modular parameters, are estimated in two stages, using detected hubs to connect the non-overlapping neuronal modules. The contributions of the PartitionMLE method are twofold: reducing estimation errors and improving model interpretability. Experiments on neuronal networks consisting of Hodgkin-Huxley (HH) and leaky integrate-and-fire (LIF) neurons validated the effectiveness of the PartitionMLE method, in comparison with the single-stage MLE method.
Affiliation(s)
- Wei Zhang
- Zhejiang Sci-Tech University, Second Street 928, Hangzhou, 310018, China.
- Muqi Yin
- Institute of Cyber-Systems and Control, Zhejiang University, Zheda Road 38, Hangzhou, 310027, China
- Mingfeng Jiang
- Zhejiang Sci-Tech University, Second Street 928, Hangzhou, 310018, China
- Qi Dai
- Zhejiang Sci-Tech University, Second Street 928, Hangzhou, 310018, China.
3. Shomali SR, Rasuli SN, Ahmadabadi MN, Shimazaki H. Uncovering hidden network architecture from spiking activities using an exact statistical input-output relation of neurons. Commun Biol 2023; 6:169. PMID: 36792689; PMCID: PMC9932086; DOI: 10.1038/s42003-023-04511-z.
Abstract
Identifying network architecture from observed neural activities is crucial in neuroscience studies. A key requirement is knowledge of the statistical input-output relation of single neurons in vivo. By utilizing an exact analytical solution of the spike-timing for leaky integrate-and-fire neurons under noisy inputs balanced near the threshold, we construct a framework that links synaptic type, strength, and spiking nonlinearity with the statistics of neuronal population activity. The framework explains structured pairwise and higher-order interactions of neurons receiving common inputs under different architectures. We compared the theoretical predictions with the activity of monkey and mouse V1 neurons and found that excitatory inputs given to pairs explained the observed sparse activity characterized by strong negative triple-wise interactions, thereby ruling out the alternative explanation by shared inhibition. Moreover, we showed that the strong interactions are a signature of excitatory rather than inhibitory inputs whenever the spontaneous rate is low. We present a guide map of neural interactions that helps researchers specify the hidden neuronal motifs underlying observed interactions found in empirical data.
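For orientation, the "triple-wise interactions" mentioned above are, in the log-linear modeling tradition this work belongs to, third-order interaction parameters of binary spike patterns; a plug-in estimate for three neurons can be sketched as follows (the function name and the smoothing constant are illustrative assumptions, not the paper's estimator):

```python
from collections import Counter
from math import log

def triplewise_interaction(patterns):
    """Plug-in estimate of the third-order log-linear interaction
    theta_123 for three binary neurons, from a list of (x1, x2, x3)
    spike patterns. Negative values indicate the sparse regime in
    which simultaneous silence is more common than independence
    predicts (the additive 0.5 is light smoothing to avoid log 0)."""
    counts = Counter(patterns)
    n = len(patterns)
    def p(w):
        return (counts[w] + 0.5) / (n + 4.0)
    num = p((1, 1, 1)) * p((1, 0, 0)) * p((0, 1, 0)) * p((0, 0, 1))
    den = p((1, 1, 0)) * p((1, 0, 1)) * p((0, 1, 1)) * p((0, 0, 0))
    return log(num / den)
```

Independent neurons give a value near zero; an excess of simultaneous silence relative to the pairwise terms drives the estimate negative, the regime the abstract attributes to shared excitatory input.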
Affiliation(s)
- Safura Rashid Shomali
- School of Cognitive Sciences, Institute for Research in Fundamental Sciences (IPM), Tehran, 19395-5746, Iran.
- Seyyed Nader Rasuli
- School of Physics, Institute for Research in Fundamental Sciences (IPM), Tehran, 19395-5531, Iran
- Department of Physics, University of Guilan, Rasht, 41335-1914, Iran
- Majid Nili Ahmadabadi
- Control and Intelligent Processing Center of Excellence, School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, 14395-515, Iran
- Hideaki Shimazaki
- Graduate School of Informatics, Kyoto University, Kyoto, 606-8501, Japan
- Center for Human Nature, Artificial Intelligence, and Neuroscience (CHAIN), Hokkaido University, Hokkaido, 060-0812, Japan
4. Reconstruction of sparse recurrent connectivity and inputs from the nonlinear dynamics of neuronal networks. J Comput Neurosci 2023; 51:43-58. PMID: 35849304; DOI: 10.1007/s10827-022-00831-x.
Abstract
Reconstructing the recurrent structural connectivity of neuronal networks is a challenge crucial to address in characterizing neuronal computations. While directly measuring the detailed connectivity structure is generally prohibitive for large networks, we develop a novel framework for reverse-engineering large-scale recurrent network connectivity matrices from neuronal dynamics by utilizing the widespread sparsity of neuronal connections. We derive a linear input-output mapping that underlies the irregular dynamics of a model network composed of both excitatory and inhibitory integrate-and-fire neurons with pulse coupling, thereby relating network inputs to evoked neuronal activity. Using this embedded mapping and experimentally feasible measurements of the firing rate as well as voltage dynamics in response to a relatively small ensemble of random input stimuli, we efficiently reconstruct the recurrent network connectivity via compressive sensing techniques. Through analogous analysis, we then recover high dimensional natural stimuli from evoked neuronal network dynamics over a short time horizon. This work provides a generalizable methodology for rapidly recovering sparse neuronal network data and underlines the natural role of sparsity in facilitating the efficient encoding of network data in neuronal dynamics.
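The compressive-sensing step described above, recovering a sparse connectivity vector from relatively few linear measurements, is typically posed as L1-regularized least squares. A generic iterative shrinkage-thresholding (ISTA) sketch is shown below; it is a standard textbook baseline rather than the authors' code, and `A`, `b`, `lam`, and `step` are placeholders.

```python
def soft(z, t):
    """Soft-thresholding: the proximal operator of t * |.|"""
    return max(abs(z) - t, 0.0) * (1.0 if z > 0 else -1.0)

def ista(A, b, lam, step, iters=500):
    """Iterative shrinkage-thresholding for
    min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.
    A is a list of rows; step must not exceed 1 / ||A||^2
    (the reciprocal of A's largest squared singular value)."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]  # A^T r
        x = [soft(x[j] - step * g[j], step * lam) for j in range(n)]
    return x
```

With an underdetermined measurement matrix (fewer rows than columns), the L1 term selects a sparse solution, which is the mechanism the abstract exploits for network reconstruction.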
5. Celotto M, Lemke S, Panzeri S. Inferring the temporal evolution of synaptic weights from dynamic functional connectivity. Brain Inform 2022; 9:28. PMID: 36480076; PMCID: PMC9732068; DOI: 10.1186/s40708-022-00178-0.
Abstract
How to capture the temporal evolution of synaptic weights from measures of dynamic functional connectivity between the activity of different simultaneously recorded neurons is an important and open problem in systems neuroscience. Here, we report methodological progress to address this issue. We first simulated recurrent neural network models of spiking neurons with spike timing-dependent plasticity mechanisms that generate time-varying synaptic and functional coupling. We then used these simulations to test analytical approaches that infer fixed and time-varying properties of synaptic connectivity from directed functional connectivity measures, such as cross-covariance and transfer entropy. We found that, while both cross-covariance and transfer entropy provide robust estimates of which synapses are present in the network and their communication delays, dynamic functional connectivity measured via cross-covariance better captures the evolution of synaptic weights over time. We also established how measures of information transmission delays from static functional connectivity computed over long recording periods (i.e., several hours) can improve shorter time-scale estimates of the temporal evolution of synaptic weights from dynamic functional connectivity. These results provide useful information about how to accurately estimate the temporal variation of synaptic strength from spiking activity measures.
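As a toy illustration of the cross-covariance approach summarized above, the lag of the cross-covariance peak between a presynaptic and a postsynaptic spike train can be read off as an estimate of the transmission delay. The binning, firing rates, and transmission model below are illustrative assumptions, not the paper's simulation setup.

```python
import random

def cross_covariance(x, y, max_lag):
    """c[l] = mean over t of (x[t] - mean(x)) * (y[t+l] - mean(y))
    for lags 0..max_lag, for two equal-length binned spike trains."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = []
    for lag in range(max_lag + 1):
        m = n - lag
        cov.append(sum((x[t] - mx) * (y[t + lag] - my) for t in range(m)) / m)
    return cov

random.seed(0)
pre = [1 if random.random() < 0.1 else 0 for _ in range(5000)]
delay = 3
# each presynaptic spike is transmitted with probability 0.8, three bins later
post = [0] * len(pre)
for t, s in enumerate(pre):
    if s and t + delay < len(post) and random.random() < 0.8:
        post[t + delay] = 1

c = cross_covariance(pre, post, max_lag=10)
estimated_delay = max(range(len(c)), key=c.__getitem__)
```

Tracking how the height of this peak changes across recording windows is, in essence, the "dynamic functional connectivity" signal the abstract relates to time-varying synaptic weight.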
Affiliation(s)
- Marco Celotto
- Department of Excellence for Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany
- Neural Computation Laboratory, Istituto Italiano di Tecnologia, Rovereto, Italy
- Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
- Stefan Lemke
- Neural Computation Laboratory, Istituto Italiano di Tecnologia, Rovereto, Italy
- Department of Cell Biology and Physiology, University of North Carolina, Chapel Hill, USA
- Stefano Panzeri
- Department of Excellence for Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany
- Neural Computation Laboratory, Istituto Italiano di Tecnologia, Rovereto, Italy
6. In vivo extracellular recordings of thalamic and cortical visual responses reveal V1 connectivity rules. Proc Natl Acad Sci U S A 2022; 119:e2207032119. PMID: 36191204; PMCID: PMC9564935; DOI: 10.1073/pnas.2207032119.
Abstract
The brain's connectome provides the scaffold for canonical neural computations. However, a comparison of connectivity studies in the mouse primary visual cortex (V1) reveals that the average number and strength of connections between specific neuron types can vary. Can variability in V1 connectivity measurements coexist with canonical neural computations? We developed a theory-driven approach to deduce V1 network connectivity from visual responses in mouse V1 and visual thalamus (dLGN). Our method revealed that the same recorded visual responses were captured by multiple connectivity configurations. Remarkably, the magnitude and selectivity of connectivity weights followed a specific order across most of the inferred connectivity configurations. We argue that this order stems from the specific shapes of the recorded contrast response functions and contrast invariance of orientation tuning. Remarkably, despite variability across connectivity studies, connectivity weights computed from individual published connectivity reports followed the order we identified with our method, suggesting that the relations between the weights, rather than their magnitudes, represent a connectivity motif supporting canonical V1 computations.
7. Rajalingham R, Piccato A, Jazayeri M. Recurrent neural networks with explicit representation of dynamic latent variables can mimic behavioral patterns in a physical inference task. Nat Commun 2022; 13:5865. PMID: 36195614; PMCID: PMC9532407; DOI: 10.1038/s41467-022-33581-6.
Abstract
Primates can richly parse sensory inputs to infer latent information. This ability is hypothesized to rely on establishing mental models of the external world and running mental simulations of those models. However, evidence supporting this hypothesis is limited to behavioral models that do not emulate neural computations. Here, we test this hypothesis by directly comparing the behavior of primates (humans and monkeys) in a ball interception task to that of a large set of recurrent neural network (RNN) models with or without the capacity to dynamically track the underlying latent variables. Humans and monkeys exhibit similar behavioral patterns. This primate behavioral pattern is best captured by RNNs endowed with dynamic inference, consistent with the hypothesis that the primate brain uses dynamic inferences to support flexible physical predictions. Moreover, our work highlights a general strategy for using model neural systems to test computational hypotheses of higher brain function.
Affiliation(s)
- Rishi Rajalingham
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Building 46, 43 Vassar St., Cambridge, MA, 02139, USA
- Aída Piccato
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Building 46, 43 Vassar St., Cambridge, MA, 02139, USA
- Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Building 46, 43 Vassar St., Cambridge, MA, 02139-4307, USA
- Mehrdad Jazayeri
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Building 46, 43 Vassar St., Cambridge, MA, 02139, USA
- Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Building 46, 43 Vassar St., Cambridge, MA, 02139-4307, USA
8. Optimal Population Coding for Dynamic Input by Nonequilibrium Networks. Entropy 2022; 24:598. PMID: 35626482; PMCID: PMC9140425; DOI: 10.3390/e24050598.
Abstract
The efficient coding hypothesis states that the neural response should maximize its information about the external input. Theoretical studies have focused on the optimal response of single neurons and on population codes in networks with weak pairwise interactions. However, more biological settings, with asymmetric connectivity and the encoding of dynamical stimuli, have not been well characterized. Here, we study the collective response in a kinetic Ising model that encodes the dynamic input. We apply a gradient-based method and mean-field approximation to reconstruct networks given the neural code that encodes dynamic input patterns. We measure network asymmetry, decoding performance, and entropy production from networks that generate optimal population code. We analyze how stimulus correlation, time scale, and reliability of the network affect optimal encoding networks. Specifically, we find network dynamics altered by statistics of the dynamic input, identify stimulus encoding strategies, and show optimal effective temperature in the asymmetric networks. We further discuss how this approach connects to the Bayesian framework and continuous recurrent neural networks. Together, these results bridge concepts of nonequilibrium physics with the analyses of dynamics and coding in networks.
9. Genkin M, Hughes O, Engel TA. Learning non-stationary Langevin dynamics from stochastic observations of latent trajectories. Nat Commun 2021; 12:5986. PMID: 34645828; PMCID: PMC8514604; DOI: 10.1038/s41467-021-26202-1.
Abstract
Many complex systems operating far from equilibrium exhibit stochastic dynamics that can be described by a Langevin equation. Inferring Langevin equations from data can reveal how transient dynamics of such systems give rise to their function. However, dynamics are often inaccessible directly and can only be gleaned through a stochastic observation process, which makes the inference challenging. Here we present a non-parametric framework for inferring the Langevin equation, which explicitly models the stochastic observation process and non-stationary latent dynamics. The framework accounts for the non-equilibrium initial and final states of the observed system and for the possibility that the system's dynamics define the duration of observations. Omitting any of these non-stationary components results in incorrect inference, in which erroneous features arise in the dynamics due to non-stationary data distribution. We illustrate the framework using models of neural dynamics underlying decision making in the brain.
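A stripped-down version of the inference problem posed above: simulate an Ornstein-Uhlenbeck process with Euler-Maruyama and recover the sign and rough shape of its drift with a binned plug-in (Kramers-Moyal) estimate. This naive estimator ignores exactly the observation-process and non-stationarity issues the paper addresses; all parameters are illustrative.

```python
import random

random.seed(1)
dt, theta, sigma = 0.01, 1.0, 0.5
x, traj = 0.0, []
for _ in range(200_000):
    traj.append(x)
    # Euler-Maruyama step of dx = -theta * x * dt + sigma * dW
    x += -theta * x * dt + sigma * (dt ** 0.5) * random.gauss(0.0, 1.0)

def drift_estimate(traj, dt, lo, hi):
    """Binned plug-in (Kramers-Moyal) drift estimate: the mean of
    (x[t+1] - x[t]) / dt over samples with lo <= x[t] < hi."""
    incs = [(traj[t + 1] - traj[t]) / dt
            for t in range(len(traj) - 1) if lo <= traj[t] < hi]
    return sum(incs) / len(incs)

# the recovered drift should be ~ -theta * x: negative right of 0, positive left
f_pos = drift_estimate(traj, dt, 0.3, 0.5)
f_neg = drift_estimate(traj, dt, -0.5, -0.3)
```

Repeating the estimate over many bins traces out the drift function; the paper's contribution is making such inference reliable when the trajectory itself is only observed indirectly.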
Affiliation(s)
- Mikhail Genkin
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
10. Shorten DP, Spinney RE, Lizier JT. Estimating Transfer Entropy in Continuous Time Between Neural Spike Trains or Other Event-Based Data. PLoS Comput Biol 2021; 17:e1008054. PMID: 33872296; PMCID: PMC8084348; DOI: 10.1371/journal.pcbi.1008054.
Abstract
Transfer entropy (TE) is a widely used measure of directed information flows in a number of domains including neuroscience. Many real-world time series for which we are interested in information flows come in the form of (near) instantaneous events occurring over time. Examples include the spiking of biological neurons, trades on stock markets and posts to social media, amongst myriad other systems involving events in continuous time throughout the natural and social sciences. However, there exist severe limitations to the current approach to TE estimation on such event-based data via discretising the time series into time bins: it is not consistent, has high bias, converges slowly and cannot simultaneously capture relationships that occur with very fine time precision as well as those that occur over long time intervals. Building on recent work which derived a theoretical framework for TE in continuous time, we present an estimation framework for TE on event-based data and develop a k-nearest-neighbours estimator within this framework. This estimator is provably consistent, has favourable bias properties and converges orders of magnitude more quickly than the current state-of-the-art in discrete-time estimation on synthetic examples. We demonstrate failures of the traditionally-used source-time-shift method for null surrogate generation. In order to overcome these failures, we develop a local permutation scheme for generating surrogate time series conforming to the appropriate null hypothesis in order to test for the statistical significance of the TE and, as such, test for the conditional independence between the history of one point process and the updates of another. Our approach is shown to be capable of correctly rejecting or accepting the null hypothesis of conditional independence even in the presence of strong pairwise time-directed correlations. This capacity to accurately test for conditional independence is further demonstrated on models of a spiking neural circuit inspired by the pyloric circuit of the crustacean stomatogastric ganglion, succeeding where previous related estimators have failed.
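For contrast with the continuous-time estimator developed in the paper, the conventional discrete-time plug-in estimator it improves on (time-binned sequences, history length 1) fits in a few lines; the variable names and the history length are simplifying assumptions.

```python
import random
from collections import Counter
from math import log

def transfer_entropy(x, y):
    """Plug-in discrete-time transfer entropy TE(X -> Y) in bits,
    with history length 1, for two equal-length binary sequences."""
    triples, pairs_yx, pairs_yy, singles = Counter(), Counter(), Counter(), Counter()
    n = len(y) - 1
    for t in range(n):
        triples[(y[t + 1], y[t], x[t])] += 1
        pairs_yx[(y[t], x[t])] += 1
        pairs_yy[(y[t + 1], y[t])] += 1
        singles[y[t]] += 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_full = c / pairs_yx[(y0, x0)]            # p(y1 | y0, x0)
        p_hist = pairs_yy[(y1, y0)] / singles[y0]  # p(y1 | y0)
        te += (c / n) * log(p_full / p_hist, 2)
    return te

random.seed(2)
x = [random.randint(0, 1) for _ in range(10_000)]
y = [0] + x[:-1]                 # y is x delayed by one bin
te_xy = transfer_entropy(x, y)   # large: x fully determines y's next bin
te_yx = transfer_entropy(y, x)   # near zero: x is i.i.d.
```

The bin width trade-off the abstract criticizes is visible here: fine bins explode the state space and the estimator's bias, while coarse bins blur fine-timescale relationships.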
Affiliation(s)
- David P. Shorten
- Complex Systems Research Group and Centre for Complex Systems, Faculty of Engineering, The University of Sydney, Sydney, Australia
- Richard E. Spinney
- Complex Systems Research Group and Centre for Complex Systems, Faculty of Engineering, The University of Sydney, Sydney, Australia
- School of Physics and EMBL Australia Node Single Molecule Science, School of Medical Sciences, The University of New South Wales, Sydney, Australia
- Joseph T. Lizier
- Complex Systems Research Group and Centre for Complex Systems, Faculty of Engineering, The University of Sydney, Sydney, Australia
11. Nilsson MNP, Jörntell H. Channel current fluctuations conclusively explain neuronal encoding of internal potential into spike trains. Phys Rev E 2021; 103:022407. PMID: 33736029; DOI: 10.1103/physreve.103.022407.
Abstract
Hodgkin and Huxley's seminal neuron model describes the propagation of voltage spikes in axons, but it cannot explain certain full-neuron features crucial for understanding the neural code. We consider channel current fluctuations in a trisection of the Hodgkin-Huxley model, allowing an analytic-mechanistic explanation of these features and yielding consistently excellent matches with in vivo recordings of cerebellar Purkinje neurons, which we use as model systems. This shows that the neuronal encoding is described conclusively by a soft-thresholding function having just three parameters.
Affiliation(s)
- Henrik Jörntell
- Department of Experimental Medical Science, Lund University, SE-221 00 Lund, Sweden
12. Wendling KP, Ly C. Statistical Analysis of Decoding Performances of Diverse Populations of Neurons. Neural Comput 2021; 33:764-801. PMID: 33400901; DOI: 10.1162/neco_a_01355.
Abstract
A central theme in computational neuroscience is determining the neural correlates of efficient and accurate coding of sensory signals. Diversity, or heterogeneity, of intrinsic neural attributes is known to exist in many brain areas and is thought to significantly affect neural coding. Recent theoretical and experimental work has argued that in uncoupled networks, coding is most accurate at intermediate levels of heterogeneity. Here we consider this question with data from in vivo recordings of neurons in the electrosensory system of weakly electric fish subject to the same realization of noisy stimuli; we use a generalized linear model (GLM) to assess the accuracy of (Bayesian) decoding of stimulus given a population spiking response. The long recordings enable us to consider many uncoupled networks and a relatively wide range of heterogeneity, as well as many instances of the stimuli, thus enabling us to address this question with statistical power. The GLM decoding is performed on a single long time series of data to mimic realistic conditions rather than using trial-averaged data for better model fits. For a variety of fixed network sizes, we generally find that the optimal levels of heterogeneity are at intermediate values, and this holds in all core components of GLM. These results are robust to several measures of decoding performance, including the absolute value of the error, error weighted by the uncertainty of the estimated stimulus, and the correlation between the actual and estimated stimulus. Although a quadratic fit to decoding performance as a function of heterogeneity is statistically significant, the result is highly variable with low R2 values. Taken together, intermediate levels of neural heterogeneity are indeed a prominent attribute for efficient coding even within a single time series, but the performance is highly variable.
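The Bayesian decoding step described above reduces, for a flat prior and independent Poisson spike counts, to maximizing the Poisson log-likelihood over a stimulus grid. The Gaussian tuning curves, gains, and grid below are illustrative assumptions rather than the paper's fitted GLM.

```python
from math import exp, lgamma, log

def tuning(s, center, gain=10.0, width=0.15):
    """Mean spike count of a neuron with a Gaussian tuning curve
    (the 0.1 floor keeps the Poisson rate strictly positive)."""
    return gain * exp(-((s - center) ** 2) / (2 * width ** 2)) + 0.1

def decode(counts, centers, grid):
    """MAP stimulus under an independent-Poisson likelihood, flat prior."""
    def loglik(s):
        return sum(k * log(tuning(s, c)) - tuning(s, c) - lgamma(k + 1)
                   for k, c in zip(counts, centers))
    return max(grid, key=loglik)

centers = [i / 10 for i in range(11)]                # preferred stimuli 0.0..1.0
true_s = 0.42
counts = [round(tuning(true_s, c)) for c in centers]  # noise-free counts
s_hat = decode(counts, centers, grid=[i / 100 for i in range(101)])
```

Heterogeneity in the sense of the abstract would correspond to varying `gain` and `width` across the population and measuring how decoding error changes with the spread.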
Affiliation(s)
- Kyle P Wendling
- Department of Statistical Sciences and Operations Research, Virginia Commonwealth University, Richmond, VA 23284, U.S.A.
- Cheng Ly
- Department of Statistical Sciences and Operations Research, Virginia Commonwealth University, Richmond, VA 23284, U.S.A.
13. Ren N, Ito S, Hafizi H, Beggs JM, Stevenson IH. Model-based detection of putative synaptic connections from spike recordings with latency and type constraints. J Neurophysiol 2020; 124:1588-1604. DOI: 10.1152/jn.00066.2020.
Abstract
Detecting synaptic connections using large-scale extracellular spike recordings is a difficult statistical problem. Here, we develop an extension of a generalized linear model that explicitly separates fast synaptic effects and slow background fluctuations in cross-correlograms between pairs of neurons while incorporating circuit properties learned from the whole network. This model outperforms two previously developed synapse detection methods in the simulated networks and recovers plausible connections from hundreds of neurons in in vitro multielectrode array data.
Affiliation(s)
- Naixin Ren
- Department of Psychological Sciences, University of Connecticut, Storrs, Connecticut
- Shinya Ito
- Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, California
- Hadi Hafizi
- Department of Physics, Indiana University, Bloomington, Indiana
- John M. Beggs
- Department of Physics, Indiana University, Bloomington, Indiana
- Ian H. Stevenson
- Department of Psychological Sciences, University of Connecticut, Storrs, Connecticut
- Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut
14. Cofré R, Maldonado C, Cessac B. Thermodynamic Formalism in Neuronal Dynamics and Spike Train Statistics. Entropy 2020; 22:1330. PMID: 33266513; PMCID: PMC7712217; DOI: 10.3390/e22111330.
Abstract
The Thermodynamic Formalism provides a rigorous mathematical framework for studying quantitative and qualitative aspects of dynamical systems. At its core, there is a variational principle that corresponds, in its simplest form, to the Maximum Entropy principle. It is used as a statistical inference procedure to represent, by specific probability measures (Gibbs measures), the collective behaviour of complex systems. This framework has found applications in different domains of science. In particular, it has been fruitful and influential in neurosciences. In this article, we review how the Thermodynamic Formalism can be exploited in the field of theoretical neuroscience, as a conceptual and operational tool, in order to link the dynamics of interacting neurons and the statistics of action potentials from either experimental data or mathematical models. We comment on perspectives and open problems in theoretical neuroscience that could be addressed within this formalism.
Affiliation(s)
- Rodrigo Cofré
- CIMFAV-Ingemat, Facultad de Ingeniería, Universidad de Valparaíso, Valparaíso 2340000, Chile
- Cesar Maldonado
- IPICYT/División de Matemáticas Aplicadas, San Luis Potosí 78216, Mexico
- Bruno Cessac
- Inria Biovision team and Neuromod Institute, Université Côte d'Azur, 06901 CEDEX, France
15. Gonçalves PJ, Lueckmann JM, Deistler M, Nonnenmacher M, Öcal K, Bassetto G, Chintaluri C, Podlaski WF, Haddad SA, Vogels TP, Greenberg DS, Macke JH. Training deep neural density estimators to identify mechanistic models of neural dynamics. eLife 2020; 9:e56261. PMID: 32940606; PMCID: PMC7581433; DOI: 10.7554/elife.56261.
Abstract
Mechanistic modeling in neuroscience aims to explain observed phenomena in terms of underlying causes. However, determining which model parameters agree with complex and stochastic neural data presents a significant challenge. We address this challenge with a machine learning tool that uses deep neural density estimators (trained using model simulations) to carry out Bayesian inference and retrieve the full space of parameters compatible with raw data or selected data features. Our method is scalable in parameters and data features and can rapidly analyze new data after initial training. We demonstrate the power and flexibility of our approach on receptive fields, ion channels, and Hodgkin-Huxley models. We also characterize the space of circuit configurations giving rise to rhythmic activity in the crustacean stomatogastric ganglion, and use these results to derive hypotheses for underlying compensation mechanisms. Our approach will help close the gap between data-driven and theory-driven models of neural dynamics.
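The general simulation-based idea can be illustrated with a far simpler stand-in for the deep neural density estimators used in this paper: rejection ABC on a toy simulator with one unknown parameter. All names, the simulator, and the parameter values here are hypothetical, chosen only to show the "simulate, compare summaries, keep compatible parameters" loop:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta, n=100):
    """Toy mechanistic model: n noisy observations with unknown mean theta."""
    return rng.normal(theta, 1.0, size=n)

def summary(x):
    """A single summary feature of the simulated data."""
    return x.mean()

# "Observed" data generated at a known ground-truth parameter.
theta_true = 2.0
s_obs = summary(simulator(theta_true))

# Rejection ABC: draw parameters from a flat prior, simulate, and keep
# those whose summary lands within eps of the observed summary.
prior_draws = rng.uniform(-5.0, 5.0, size=20000)
eps = 0.1
accepted = np.array([t for t in prior_draws
                     if abs(summary(simulator(t)) - s_obs) < eps])

# The accepted set approximates the posterior over theta.
print(accepted.mean(), accepted.std(), len(accepted))
```

The paper's contribution is to replace this wasteful accept/reject step with a trained conditional density estimator, which amortizes inference and scales to many parameters and features; the sketch above only conveys the underlying logic.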
Affiliation(s)
- Pedro J Gonçalves
- Computational Neuroengineering, Department of Electrical and Computer Engineering, Technical University of Munich, Munich, Germany
- Max Planck Research Group Neural Systems Analysis, Center of Advanced European Studies and Research (caesar), Bonn, Germany
- Jan-Matthis Lueckmann
- Computational Neuroengineering, Department of Electrical and Computer Engineering, Technical University of Munich, Munich, Germany
- Max Planck Research Group Neural Systems Analysis, Center of Advanced European Studies and Research (caesar), Bonn, Germany
- Michael Deistler
- Computational Neuroengineering, Department of Electrical and Computer Engineering, Technical University of Munich, Munich, Germany
- Machine Learning in Science, Excellence Cluster Machine Learning, Tübingen University, Tübingen, Germany
- Marcel Nonnenmacher
- Computational Neuroengineering, Department of Electrical and Computer Engineering, Technical University of Munich, Munich, Germany
- Max Planck Research Group Neural Systems Analysis, Center of Advanced European Studies and Research (caesar), Bonn, Germany
- Model-Driven Machine Learning, Institute of Coastal Research, Helmholtz Centre Geesthacht, Geesthacht, Germany
- Kaan Öcal
- Max Planck Research Group Neural Systems Analysis, Center of Advanced European Studies and Research (caesar), Bonn, Germany
- Mathematical Institute, University of Bonn, Bonn, Germany
- Giacomo Bassetto
- Computational Neuroengineering, Department of Electrical and Computer Engineering, Technical University of Munich, Munich, Germany
- Max Planck Research Group Neural Systems Analysis, Center of Advanced European Studies and Research (caesar), Bonn, Germany
- Chaitanya Chintaluri
- Centre for Neural Circuits and Behaviour, University of Oxford, Oxford, United Kingdom
- Institute of Science and Technology Austria, Klosterneuburg, Austria
- William F Podlaski
- Centre for Neural Circuits and Behaviour, University of Oxford, Oxford, United Kingdom
- Sara A Haddad
- Max Planck Institute for Brain Research, Frankfurt, Germany
- Tim P Vogels
- Centre for Neural Circuits and Behaviour, University of Oxford, Oxford, United Kingdom
- Institute of Science and Technology Austria, Klosterneuburg, Austria
- David S Greenberg
- Computational Neuroengineering, Department of Electrical and Computer Engineering, Technical University of Munich, Munich, Germany
- Model-Driven Machine Learning, Institute of Coastal Research, Helmholtz Centre Geesthacht, Geesthacht, Germany
- Jakob H Macke
- Computational Neuroengineering, Department of Electrical and Computer Engineering, Technical University of Munich, Munich, Germany
- Max Planck Research Group Neural Systems Analysis, Center of Advanced European Studies and Research (caesar), Bonn, Germany
- Machine Learning in Science, Excellence Cluster Machine Learning, Tübingen University, Tübingen, Germany
- Max Planck Institute for Intelligent Systems, Tübingen, Germany
16
Ghanbari A, Ren N, Keine C, Stoelzel C, Englitz B, Swadlow HA, Stevenson IH. Modeling the Short-Term Dynamics of in Vivo Excitatory Spike Transmission. J Neurosci 2020; 40:4185-4202. [PMID: 32303648 PMCID: PMC7244199 DOI: 10.1523/jneurosci.1482-19.2020] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2019] [Revised: 03/18/2020] [Accepted: 03/22/2020] [Indexed: 11/21/2022] Open
Abstract
Information transmission in neural networks is influenced by both short-term synaptic plasticity (STP) and nonsynaptic factors, such as after-hyperpolarization currents and changes in excitability. Although these effects have been widely characterized in vitro using intracellular recordings, how they interact in vivo is unclear. Here, we develop a statistical model of the short-term dynamics of spike transmission that aims to disentangle the contributions of synaptic and nonsynaptic effects based only on observed presynaptic and postsynaptic spiking. The model includes a dynamic functional connection with short-term plasticity as well as effects due to the recent history of postsynaptic spiking and slow changes in postsynaptic excitability. Using paired spike recordings, we find that the model accurately describes the short-term dynamics of in vivo spike transmission at a diverse set of identified and putative excitatory synapses, including a pair of connected neurons within thalamus in mouse, a thalamocortical connection in a female rabbit, and an auditory brainstem synapse in a female gerbil. We illustrate the utility of this modeling approach by showing how the spike transmission patterns captured by the model may be sufficient to account for stimulus-dependent differences in spike transmission in the auditory brainstem (endbulb of Held). Finally, we apply this model to large-scale multielectrode recordings to illustrate how such an approach has the potential to reveal cell type-specific differences in spike transmission in vivo. Although STP parameters estimated from ongoing presynaptic and postsynaptic spiking are highly uncertain, our results are partially consistent with previous intracellular observations in these synapses.
SIGNIFICANCE STATEMENT: Although synaptic dynamics have been extensively studied and modeled using intracellular recordings of postsynaptic currents and potentials, inferring synaptic effects from extracellular spiking is challenging. Whether or not a synaptic current contributes to postsynaptic spiking depends not only on the amplitude of the current, but also on many other factors, including the activity of other, typically unobserved, synapses, the overall excitability of the postsynaptic neuron, and how recently the postsynaptic neuron has spiked. Here, we developed a model that, using only observations of presynaptic and postsynaptic spiking, aims to describe the dynamics of in vivo spike transmission by modeling both short-term synaptic plasticity (STP) and nonsynaptic effects. This approach may provide a novel description of fast, structured changes in spike transmission.
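STP components of this kind are commonly parameterized in the Tsodyks-Markram form, in which each presynaptic spike consumes a fraction of a recovering resource variable while a utilization variable facilitates and decays. The sketch below is a generic illustration of that dynamic, not the authors' fitted model; all parameter values are illustrative:

```python
import numpy as np

def stp_efficacies(spike_times, U=0.4, tau_rec=0.2, tau_fac=0.05):
    """Tsodyks-Markram-style short-term plasticity.
    Returns the synaptic efficacy (u * R) at each presynaptic spike.
    U (baseline utilization), tau_rec and tau_fac (seconds) are
    illustrative values, not estimates from the paper."""
    R, u, prev = 1.0, U, None
    eff = []
    for t in spike_times:
        if prev is not None:
            dt = t - prev
            R = 1.0 - (1.0 - R) * np.exp(-dt / tau_rec)  # resources recover toward 1
            u = U + (u - U) * np.exp(-dt / tau_fac)      # utilization decays toward U
        u = u + U * (1.0 - u)                            # facilitation jump at the spike
        eff.append(u * R)
        R = R * (1.0 - u)                                # depression: resources consumed
        prev = t
    return np.array(eff)

# A regular 50 Hz train: with these parameters depression dominates,
# so efficacy falls below its first-spike value.
eff = stp_efficacies(np.arange(0.0, 0.2, 0.02))
print(eff.round(3))
```

In the paper's setting, such per-spike efficacies modulate a coupling term in a point-process (GLM-like) model of the postsynaptic spike train, alongside post-spike history and slow excitability covariates.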
Affiliation(s)
- Naixin Ren
- Department of Psychological Sciences, University of Connecticut, Storrs, CT 06268
- Christian Keine
- Carver College of Medicine, Iowa Neuroscience Institute, Department of Anatomy and Cell Biology, University of Iowa, Iowa City, IA 52242
- Carl Stoelzel
- Department of Psychological Sciences, University of Connecticut, Storrs, CT 06268
- Bernhard Englitz
- Department of Neurophysiology, Donders Institute for Brain, Cognition and Behavior, Radboud University, 6525 AJ Nijmegen, Netherlands
- Harvey A Swadlow
- Department of Psychological Sciences, University of Connecticut, Storrs, CT 06268
- Ian H Stevenson
- Department of Biomedical Engineering
- Department of Psychological Sciences, University of Connecticut, Storrs, CT 06268
17
Inference of synaptic connectivity and external variability in neural microcircuits. J Comput Neurosci 2020; 48:123-147. [PMID: 32080777 DOI: 10.1007/s10827-020-00739-4] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2019] [Revised: 11/15/2019] [Accepted: 01/03/2020] [Indexed: 10/25/2022]
Abstract
A major goal in neuroscience is to estimate neural connectivity from large-scale extracellular recordings of neural activity in vivo. This is challenging in part because any such activity is modulated by the unmeasured external synaptic input to the network, known as the common input problem. Many different measures of functional connectivity have been proposed in the literature, but their direct relationship to synaptic connectivity is often assumed or ignored. For in vivo data, measurements of this relationship would require knowledge of ground-truth connectivity, which is nearly always unavailable. Instead, many studies use in silico simulations as benchmarks for investigation, but such approaches necessarily rely upon a variety of simplifying assumptions about the simulated network and can depend on numerous simulation parameters. We combine neuronal network simulations, mathematical analysis, and calcium imaging data to address the question of when and how functional connectivity, synaptic connectivity, and latent external input variability can be untangled. We show numerically and analytically that, even though the precision matrix of recorded spiking activity does not uniquely determine synaptic connectivity, it is in practice often closely related to synaptic connectivity. This relation becomes more pronounced when the spatial structure of neuronal variability is jointly considered.
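The link between the precision matrix and coupling structure can be seen in a minimal Gaussian surrogate (much simpler than the paper's spiking simulations): if activity has a sparse inverse covariance, the empirical precision matrix recovers that sparsity pattern. The coupling values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth "connectivity": sparse symmetric couplings among 5 units,
# encoded directly in the precision matrix Theta (illustrative weights).
n = 5
Theta = np.eye(n)
for i, j, w in [(0, 1, 0.4), (1, 2, 0.4), (3, 4, 0.4)]:
    Theta[i, j] = Theta[j, i] = w

# Simulate "recorded activity" as Gaussian samples with covariance Theta^-1.
Sigma = np.linalg.inv(Theta)
X = rng.multivariate_normal(np.zeros(n), Sigma, size=20000)

# Empirical precision matrix from the sample covariance.
Theta_hat = np.linalg.inv(np.cov(X.T))

# Off-diagonal entries are large where true couplings exist (0-1)
# and near zero where they do not (0-3).
connected = abs(Theta_hat[0, 1])
unconnected = abs(Theta_hat[0, 3])
print(connected, unconnected)
```

The paper's point is subtler: for spiking networks with unobserved common input, this correspondence is no longer exact, but the precision matrix often remains informative about synaptic structure, especially when spatial variability structure is exploited.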