1. Tian ZQK, Chen K, Li S, McLaughlin DW, Zhou D. Causal connectivity measures for pulse-output network reconstruction: Analysis and applications. Proc Natl Acad Sci U S A 2024; 121:e2305297121. [PMID: 38551842; PMCID: PMC10998614; DOI: 10.1073/pnas.2305297121]
Abstract
The causal connectivity of a network is often inferred to understand network function. It is widely acknowledged that the inferred causal connectivity depends on the causality measure one applies and may differ from the network's underlying structural connectivity. However, the interpretation of causal connectivity remains to be fully clarified, in particular, how causal connectivity depends on causality measures and how causal connectivity relates to structural connectivity. Here, we focus on nonlinear networks with pulse signals as measured output, e.g., neural networks with spike output, and address the above issues based on four commonly used causality measures, i.e., time-delayed correlation coefficient, time-delayed mutual information, Granger causality, and transfer entropy. We theoretically show how these causality measures are related to one another when applied to pulse signals. Taking a simulated Hodgkin-Huxley network and a real mouse brain network as two illustrative examples, we further verify the quantitative relations among the four causality measures and demonstrate that the causal connectivity inferred by any of the four coincides well with the underlying network structural connectivity, therefore illustrating a direct link between the causal and structural connectivity. We stress that the structural connectivity of pulse-output networks can be reconstructed pairwise without conditioning on the global information of all other nodes in a network, thus circumventing the curse of dimensionality. Our framework provides a practical and effective approach for pulse-output network reconstruction.
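As a concrete illustration of the simplest of the four measures (not the authors' code; the pulse trains, rates, and lags below are invented for the sketch), a pairwise time-delayed correlation coefficient between two binary pulse trains can be computed as:

```python
import numpy as np

def time_delayed_corr(x, y, max_lag):
    """Largest-magnitude Pearson correlation of y against x shifted by each
    positive lag; a simple pairwise proxy for a directed x -> y interaction."""
    best = 0.0
    for lag in range(1, max_lag + 1):
        cx, cy = x[:-lag], y[lag:]
        if cx.std() == 0 or cy.std() == 0:
            continue
        r = np.corrcoef(cx, cy)[0, 1]
        if abs(r) > abs(best):
            best = r
    return best

rng = np.random.default_rng(0)
x = (rng.random(5000) < 0.05).astype(float)                   # presynaptic pulse train
y = np.clip(np.roll(x, 2) + (rng.random(5000) < 0.02), 0, 1)  # fires 2 steps later, plus noise
print(time_delayed_corr(x, y, max_lag=5))  # large and positive
print(time_delayed_corr(x, x, max_lag=5))  # near zero: no delayed self-structure
```

The scan over positive lags is what makes the measure directed: only shifts in which x precedes y are considered, which is also why the reconstruction can stay pairwise.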
Affiliation(s)
- Zhong-qi K. Tian
- School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai 200240, China
- Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai 200240, China
- Ministry of Education Key Laboratory of Scientific and Engineering Computing, Shanghai Jiao Tong University, Shanghai 200240, China
- Kai Chen
- School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai 200240, China
- Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai 200240, China
- Ministry of Education Key Laboratory of Scientific and Engineering Computing, Shanghai Jiao Tong University, Shanghai 200240, China
- Songting Li
- School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai 200240, China
- Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai 200240, China
- Ministry of Education Key Laboratory of Scientific and Engineering Computing, Shanghai Jiao Tong University, Shanghai 200240, China
- David W. McLaughlin
- Courant Institute of Mathematical Sciences, New York University, New York, NY 10012
- Center for Neural Science, New York University, New York, NY 10012
- Institute of Mathematical Sciences, New York University Shanghai, Shanghai 200122, China
- Neuroscience Institute of New York University Langone Health, New York University, New York, NY 10016
- Douglas Zhou
- School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai 200240, China
- Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai 200240, China
- Ministry of Education Key Laboratory of Scientific and Engineering Computing, Shanghai Jiao Tong University, Shanghai 200240, China
- Shanghai Frontier Science Center of Modern Analysis, Shanghai Jiao Tong University, Shanghai 200240, China
2. Liang T, Brinkman BAW. Statistically inferred neuronal connections in subsampled neural networks strongly correlate with spike train covariances. Phys Rev E 2024; 109:044404. [PMID: 38755896; DOI: 10.1103/physreve.109.044404]
Abstract
Neuronal connections statistically inferred from observed spike train data are often skewed from the ground truth by factors such as model mismatch, unobserved neurons, and limited data. Spike train covariances, sometimes referred to as "functional connections," are often used as a proxy for the connections between pairs of neurons, but reflect statistical relationships between neurons, not anatomical connections. Moreover, covariances are not causal: spiking activity is correlated in both the past and the future, whereas neurons respond only to synaptic inputs in the past. Connections inferred by maximum likelihood, by contrast, can be constrained to be causal. Nevertheless, we show in this work that the inferred connections in spontaneously active networks modeled by stochastic leaky integrate-and-fire networks strongly correlate with the covariances between neurons, and may reflect noncausal relationships, when many neurons are unobserved or when neurons are weakly coupled. This phenomenon occurs across different network structures, including random networks and balanced excitatory-inhibitory networks. We use a combination of simulations and a mean-field analysis with fluctuation corrections to elucidate the relationships between spike train covariances, inferred synaptic filters, and ground-truth connections in partially observed networks.
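The contrast the abstract draws between covariance in the past and in the future can be sketched on a hypothetical two-neuron toy (invented rates and coupling, not the paper's stochastic leaky integrate-and-fire model):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 20000
a = (rng.random(T) < 0.05).astype(float)        # neuron A: independent driver
b = np.zeros(T)                                 # neuron B: driven by A, one step later
b[1:] = (rng.random(T - 1) < 0.02) + 0.8 * a[:-1]
b = (b > 0.5).astype(float)

def xcov(x, y, lag):
    """Sample cross-covariance cov(x(t), y(t + lag)) for a signed lag."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return np.cov(x, y)[0, 1]

# large at +1 lag (A precedes B), near zero at -1 lag: the asymmetry
# carries the causal direction that equal-time covariance misses
print(xcov(a, b, 1), xcov(a, b, -1))
```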
Affiliation(s)
- Tong Liang
- Department of Physics and Astronomy, Stony Brook University, Stony Brook, New York 11794, USA
- Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York 11794, USA
- Braden A W Brinkman
- Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York 11794, USA
3. Yuste R, Cossart R, Yaksi E. Neuronal ensembles: Building blocks of neural circuits. Neuron 2024; 112:875-892. [PMID: 38262413; PMCID: PMC10957317; DOI: 10.1016/j.neuron.2023.12.008]
Abstract
Neuronal ensembles, defined as groups of neurons displaying recurring patterns of coordinated activity, represent an intermediate functional level between individual neurons and brain areas. Novel methods to measure and optically manipulate the activity of neuronal populations have provided evidence of ensembles in the neocortex and hippocampus. Ensembles can be activated intrinsically or in response to sensory stimuli and play a causal role in perception and behavior. Here we review ensemble phenomenology, developmental origin, biophysical and synaptic mechanisms, and potential functional roles across different brain areas and species, including humans. As modular units of neural circuits, ensembles could provide a mechanistic underpinning of fundamental brain processes, including neural coding, motor planning, decision-making, learning, and adaptability.
Affiliation(s)
- Rafael Yuste
- NeuroTechnology Center, Department of Biological Sciences, Columbia University, New York, NY, USA.
- Rosa Cossart
- Inserm, INMED, Turing Center for Living Systems, Aix-Marseille University, Marseille, France.
- Emre Yaksi
- Kavli Institute for Systems Neuroscience, Norwegian University of Science and Technology, Trondheim, Norway; Koç University Research Center for Translational Medicine, Koç University School of Medicine, Istanbul, Turkey.
4. Pospisil DA, Aragon MJ, Dorkenwald S, Matsliah A, Sterling AR, Schlegel P, Yu SC, McKellar CE, Costa M, Eichler K, Jefferis GSXE, Murthy M, Pillow JW. From connectome to effectome: learning the causal interaction map of the fly brain. bioRxiv 2024:2023.10.31.564922. [PMID: 37961285; PMCID: PMC10635032; DOI: 10.1101/2023.10.31.564922]
Abstract
A long-standing goal of neuroscience is to obtain a causal model of the nervous system. This would allow neuroscientists to explain animal behavior in terms of the dynamic interactions between neurons. The recently reported whole-brain fly connectome [1-7] specifies the synaptic paths by which neurons can affect each other but not whether, or how, they do affect each other in vivo. To overcome this limitation, we introduce a novel combined experimental and statistical strategy for efficiently learning a causal model of the fly brain, which we refer to as the "effectome". Specifically, we propose an estimator for a dynamical systems model of the fly brain that uses stochastic optogenetic perturbation data to accurately estimate causal effects and the connectome as a prior to drastically improve estimation efficiency. We then analyze the connectome to propose circuits that have the greatest total effect on the dynamics of the fly nervous system. We discover that, fortunately, the dominant circuits involve only relatively small populations of neurons; thus imaging, stimulation, and neuronal identification are feasible. Intriguingly, we find that this approach also re-discovers known circuits and generates testable hypotheses about their dynamics. Overall, our analyses of the connectome provide evidence that global dynamics of the fly brain are generated by a large collection of small and often anatomically localized circuits operating, largely, independently of each other. This in turn implies that a causal model of a brain, a principal goal of systems neuroscience, can be feasibly obtained in the fly.
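The connectome-as-prior idea can be sketched as ridge regression with a per-coefficient penalty on a toy linear dynamical system (everything below, the sizes, penalties, and noise model, is invented; it is not the paper's estimator):

```python
import numpy as np

rng = np.random.default_rng(4)
n, T = 6, 2000
mask = rng.random((n, n)) < 0.3                       # "connectome": where synapses exist
A_true = mask * rng.normal(scale=0.4, size=(n, n))    # hidden causal weights
A_true *= 0.7 / max(np.max(np.abs(np.linalg.eigvals(A_true))), 0.7)  # keep dynamics stable

# stochastic perturbations (standing in for random optogenetic input) drive the system
U = rng.normal(size=(T, n))
X = np.zeros((T, n))
for t in range(T - 1):
    X[t + 1] = X[t] @ A_true.T + U[t]

def estimate_row(i, lam_edge=0.1, lam_nonedge=1e6):
    """Ridge fit of neuron i's incoming weights, with the connectome as a prior:
    candidate synapses are lightly penalized, anatomically absent ones heavily."""
    lam = np.where(mask[i], lam_edge, lam_nonedge)
    G = X[:-1].T @ X[:-1] + np.diag(lam)
    return np.linalg.solve(G, X[:-1].T @ X[1:, i])

A_hat = np.vstack([estimate_row(i) for i in range(n)])
print(np.linalg.norm(A_hat - A_true) / np.linalg.norm(A_true))  # small relative error
```

The heavy penalty on anatomically absent entries is what buys the estimation efficiency: only weights the connectome permits need to be resolved from data.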
Affiliation(s)
- Dean A Pospisil
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Max J Aragon
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Sven Dorkenwald
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Computer Science Department, Princeton University, Princeton, NJ, USA
- Arie Matsliah
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Amy R Sterling
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Philipp Schlegel
- Neurobiology Division, MRC Laboratory of Molecular Biology, Cambridge, UK
- Drosophila Connectomics Group, Department of Zoology, University of Cambridge, Cambridge, UK
- Szi-Chieh Yu
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Claire E McKellar
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Marta Costa
- Drosophila Connectomics Group, Department of Zoology, University of Cambridge, Cambridge, UK
- Katharina Eichler
- Drosophila Connectomics Group, Department of Zoology, University of Cambridge, Cambridge, UK
- Gregory S X E Jefferis
- Neurobiology Division, MRC Laboratory of Molecular Biology, Cambridge, UK
- Drosophila Connectomics Group, Department of Zoology, University of Cambridge, Cambridge, UK
- Mala Murthy
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Jonathan W Pillow
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
5. Nardin M, Csicsvari J, Tkačik G, Savin C. The Structure of Hippocampal CA1 Interactions Optimizes Spatial Coding across Experience. J Neurosci 2023; 43:8140-8156. [PMID: 37758476; PMCID: PMC10697404; DOI: 10.1523/jneurosci.0194-23.2023]
Abstract
Although much is known about how single neurons in the hippocampus represent an animal's position, how circuit interactions contribute to spatial coding is less well understood. Using a novel statistical estimator and theoretical modeling, both developed in the framework of maximum entropy models, we reveal highly structured CA1 cell-cell interactions in male rats during open field exploration. The statistics of these interactions depend on whether the animal is in a familiar or novel environment. In both conditions the circuit interactions optimize the encoding of spatial information, but for regimes that differ in the informativeness of their spatial inputs. This structure facilitates linear decodability, making the information easy to read out by downstream circuits. Overall, our findings suggest that the efficient coding hypothesis is not only applicable to individual neuron properties in the sensory periphery, but also to neural interactions in the central brain.

SIGNIFICANCE STATEMENT: Local circuit interactions play a key role in neural computation and are dynamically shaped by experience. However, measuring and assessing their effects during behavior remains a challenge. Here, we combine techniques from statistical physics and machine learning to develop new tools for determining the effects of local network interactions on neural population activity. This approach reveals highly structured local interactions between hippocampal neurons, which make the neural code more precise and easier to read out by downstream circuits, across different levels of experience. More generally, the novel combination of theory and data analysis in the framework of maximum entropy models enables traditional neural coding questions to be asked in naturalistic settings.
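The pairwise maximum entropy framework the authors work in can be written down exactly at toy scale (three neurons, invented parameters; the paper's estimator additionally handles spatial inputs and large populations):

```python
import itertools
import numpy as np

# all 2^3 states of a toy three-neuron pairwise maximum entropy (Ising) model
states = np.array(list(itertools.product([0, 1], repeat=3)), dtype=float)

def maxent_probs(h, J):
    """P(s) proportional to exp(h.s + sum_{i<j} J_ij s_i s_j) over binary states."""
    E = states @ h + np.einsum('si,ij,sj->s', states, np.triu(J, 1), states)
    w = np.exp(E)
    return w / w.sum()

h = np.array([-1.0, -1.0, -1.0])           # biases: baseline firing propensities
J = np.zeros((3, 3))
J[0, 1] = 2.0                              # one strong pairwise interaction
p = maxent_probs(h, J)

rates = states.T @ p                       # marginal firing probabilities
pair = p @ (states[:, 0] * states[:, 1])   # joint firing of neurons 0 and 1
print(pair > rates[0] * rates[1])          # True: J_01 > 0 induces excess co-firing
```

At this scale the partition function is a sum over eight states; the interesting statistical work in the paper is doing the equivalent inference when enumeration is impossible.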
Affiliation(s)
- Michele Nardin
- Institute of Science and Technology Austria, Klosterneuburg AT-3400, Austria
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia 20147
- Jozsef Csicsvari
- Institute of Science and Technology Austria, Klosterneuburg AT-3400, Austria
- Gašper Tkačik
- Institute of Science and Technology Austria, Klosterneuburg AT-3400, Austria
- Cristina Savin
- Center for Neural Science, New York University, New York, New York 10003
- Center for Data Science, New York University, New York, New York 10011
6. Luo TZ, Kim TD, Gupta D, Bondy AG, Kopec CD, Elliot VA, DePasquale B, Brody CD. Transitions in dynamical regime and neural mode underlie perceptual decision-making. bioRxiv 2023:2023.10.15.562427. [PMID: 37904994; PMCID: PMC10614809; DOI: 10.1101/2023.10.15.562427]
Abstract
Perceptual decision-making is the process by which an animal uses sensory stimuli to choose an action or mental proposition. This process is thought to be mediated by neurons organized as attractor networks [1,2]. However, whether attractor dynamics underlie decision behavior and the complex neuronal responses remains unclear. Here we use an unsupervised, deep learning-based method to discover decision-related dynamics from the simultaneous activity of neurons in frontal cortex and striatum of rats while they accumulate pulsatile auditory evidence. We show that contrary to prevailing hypotheses, attractors play a role only after a transition from a regime in the dynamics that is strongly driven by inputs to one dominated by the intrinsic dynamics. The initial regime mediates evidence accumulation, and the subsequent intrinsic-dominant regime subserves decision commitment. This regime transition is coupled to a rapid reorganization in the representation of the decision process in the neural population (a change in the "neural mode" along which the process develops). A simplified model approximating the coupled transition in the dynamics and neural mode allows inferring, from each trial's neural activity, the internal decision commitment time in that trial, and captures diverse and complex single-neuron temporal profiles, such as ramping and stepping [3-5]. It also captures trial-averaged curved trajectories [6-8], and reveals distinctions between brain regions. Our results show that the formation of a perceptual choice involves a rapid, coordinated transition in both the dynamical regime and the neural mode of the decision process, and suggest pairing deep learning and parsimonious models as a promising approach for understanding complex data.
7. Lepperød ME, Stöber T, Hafting T, Fyhn M, Kording KP. Inferring causal connectivity from pairwise recordings and optogenetics. PLoS Comput Biol 2023; 19:e1011574. [PMID: 37934793; PMCID: PMC10656035; DOI: 10.1371/journal.pcbi.1011574]
Abstract
To understand the neural mechanisms underlying brain function, neuroscientists aim to quantify causal interactions between neurons, for instance by perturbing the activity of neuron A and measuring the effect on neuron B. Recently, manipulating neuron activity using light-sensitive opsins (optogenetics) has increased the specificity of neural perturbation. However, widefield optogenetic interventions usually perturb multiple neurons at once, producing a confound: any of the stimulated neurons may have affected the postsynaptic neuron, making it challenging to discern which neurons produced the causal effect. Here, we show how such confounds produce large biases in interpretations. We explain how confounding can be reduced by combining instrumental variables (IV) and difference-in-differences (DiD) techniques from econometrics. Combined, these methods can estimate (causal) effective connectivity by exploiting the weak, approximately random signal resulting from the interaction between stimulation and the absolute refractory period of the neuron. In simulated neural networks, we find that estimates using ideas from IV and DiD outperform naïve techniques, suggesting that methods from causal inference can be useful to disentangle neural interactions in the brain.
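The instrumental-variable logic borrowed from econometrics can be sketched on synthetic confounded data (generic two-stage least squares with invented coefficients, not the paper's refractory-period instrument):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50000
z = rng.normal(size=n)                      # instrument: affects x but not y directly
u = rng.normal(size=n)                      # unobserved confounder
x = 0.8 * z + u + rng.normal(size=n)        # upstream activity
y = 0.5 * x + 2.0 * u + rng.normal(size=n)  # true causal effect of x on y is 0.5

# naive regression is biased upward because u drives both x and y
ols = np.cov(x, y)[0, 1] / np.var(x)

# two-stage least squares: project x onto the instrument, then regress y on that
x_hat = z * (np.cov(z, x)[0, 1] / np.var(z))
iv = np.cov(x_hat, y)[0, 1] / np.var(x_hat)

print(ols, iv)  # ols overshoots (about 1.26 here); iv recovers roughly 0.5
```

Because z moves x but reaches y only through x, the projection x_hat is purged of the confounder, which is exactly the role the stimulation/refractory-period interaction plays in the paper.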
Affiliation(s)
- Mikkel Elle Lepperød
- Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway
- Simula Research Laboratory, Oslo, Norway
- Tristan Stöber
- Simula Research Laboratory, Oslo, Norway
- Institute for Neural Computation, Faculty of Computer Science, Ruhr University Bochum, Bochum, Germany
- Epilepsy Center Frankfurt Rhine-Main, Department of Neurology, Goethe University, Frankfurt, Germany
- Torkel Hafting
- Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway
- Marianne Fyhn
- Simula Research Laboratory, Oslo, Norway
- Department of Biosciences, University of Oslo, Oslo, Norway
- Konrad Paul Kording
- Department of Neuroscience, University of Pennsylvania, Pennsylvania, United States of America
8. Sachdeva P, Bak JH, Livezey J, Kirst C, Frank L, Bhattacharyya S, Bouchard KE. Resolving Non-identifiability Mitigates Bias in Models of Neural Tuning and Functional Coupling. bioRxiv 2023:2023.07.11.548615. [PMID: 37503030; PMCID: PMC10370036; DOI: 10.1101/2023.07.11.548615]
Abstract
In the brain, all neurons are driven by the activity of other neurons, some of which may be simultaneously recorded, but most are not. As such, models of neuronal activity need to account for simultaneously recorded neurons and the influences of unmeasured neurons. This can be done through inclusion of model terms for observed external variables (e.g., tuning to stimuli) as well as terms for latent sources of variability. Determining the influence of groups of neurons on each other relative to other influences is important to understand brain functioning. The parameters of statistical models fit to data are commonly used to gain insight into the relative importance of those influences. Scientific interpretation of models hinges upon unbiased parameter estimates. However, evaluation of bias in inference is rarely performed and sources of bias are poorly understood. Through extensive numerical study and analytic calculation, we show that common inference procedures and models are typically biased. We demonstrate that accurate parameter selection before estimation resolves model non-identifiability and mitigates bias. In diverse neurophysiology data sets, we found that contributions of coupling to other neurons are often overestimated while tuning to exogenous variables is underestimated in common methods. We explain heterogeneity in observed biases across data sets in terms of data statistics. Finally, counter to common intuition, we found that model non-identifiability contributes to bias, not variance, making it a particularly insidious form of statistical error. Together, our results identify the causes of statistical biases in common models of neural data, provide inference procedures to mitigate that bias, and reveal and explain the impact of those biases in diverse neural data sets.
Affiliation(s)
- Pratik Sachdeva
- Physics Department, UC Berkeley
- Redwood Center for Theoretical Neuroscience, UC Berkeley
- Ji Hyun Bak
- Kavli Institute for Fundamental Neuroscience, UC San Francisco
- Biological Systems and Engineering Division, Lawrence Berkeley National Lab
- Jesse Livezey
- Biological Systems and Engineering Division, Lawrence Berkeley National Lab
- Christoph Kirst
- Kavli Institute for Fundamental Neuroscience, UC San Francisco
- Scientific Data Division, Lawrence Berkeley National Lab
- Department of Anatomy, UC San Francisco
- Loren Frank
- Kavli Institute for Fundamental Neuroscience, UC San Francisco
- Departments of Physiology and Psychiatry, UC San Francisco
- Howard Hughes Medical Institute
- Kristofer E. Bouchard
- Redwood Center for Theoretical Neuroscience, UC Berkeley
- Kavli Institute for Fundamental Neuroscience, UC San Francisco
- Biological Systems and Engineering Division, Lawrence Berkeley National Lab
- Scientific Data Division, Lawrence Berkeley National Lab
- Helen Wills Neuroscience Institute, UC Berkeley
9. Negrón A, Getz MP, Handy G, Doiron B. The mechanics of correlated variability in segregated cortical excitatory subnetworks. bioRxiv 2023:2023.04.25.538323. [PMID: 37162867; PMCID: PMC10168290; DOI: 10.1101/2023.04.25.538323]
Abstract
Understanding the genesis of shared trial-to-trial variability in neural activity within sensory cortex is critical to uncovering the biological basis of information processing in the brain. Shared variability is often a reflection of the structure of cortical connectivity since this variability likely arises, in part, from local circuit inputs. A series of experiments from segregated networks of (excitatory) pyramidal neurons in mouse primary visual cortex challenges this view. Specifically, the across-network correlations were found to be larger than predicted given the known weak cross-network connectivity. We aim to uncover the circuit mechanisms responsible for these enhanced correlations through biologically motivated cortical circuit models. Our central finding is that coupling each excitatory subpopulation with a specific inhibitory subpopulation provides the most robust network-intrinsic solution for shaping these enhanced correlations. This result argues for the existence of excitatory-inhibitory functional assemblies in early sensory areas which mirror not just response properties but also connectivity between pyramidal cells.
Affiliation(s)
- Alex Negrón
- Department of Applied Mathematics, Illinois Institute of Technology
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago
- Matthew P. Getz
- Departments of Neurobiology and Statistics, University of Chicago
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago
- Gregory Handy
- Departments of Neurobiology and Statistics, University of Chicago
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago
- Brent Doiron
- Departments of Neurobiology and Statistics, University of Chicago
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago
10. Galgali AR, Sahani M, Mante V. Residual dynamics resolves recurrent contributions to neural computation. Nat Neurosci 2023; 26:326-338. [PMID: 36635498; DOI: 10.1038/s41593-022-01230-2]
Abstract
Relating neural activity to behavior requires an understanding of how neural computations arise from the coordinated dynamics of distributed, recurrently connected neural populations. However, inferring the nature of recurrent dynamics from partial recordings of a neural circuit presents considerable challenges. Here we show that some of these challenges can be overcome by a fine-grained analysis of the dynamics of neural residuals, that is, the trial-by-trial variability around the mean neural population trajectory for a given task condition. Residual dynamics in macaque prefrontal cortex (PFC) in a saccade-based perceptual decision-making task reveals recurrent dynamics that is time dependent but consistently stable, and suggests that pronounced rotational structure in PFC trajectories during saccades is driven by inputs from upstream areas. The properties of residual dynamics restrict the possible contributions of PFC to decision-making and saccade generation and suggest a path toward fully characterizing distributed neural computations with large-scale neural recordings and targeted causal perturbations.
Affiliation(s)
- Aniruddh R Galgali
- Institute of Neuroinformatics, University of Zurich & ETH Zurich, Zurich, Switzerland.
- Neuroscience Center Zurich, University of Zurich & ETH Zurich, Zurich, Switzerland.
- Department of Experimental Psychology, University of Oxford, Oxford, UK.
- Maneesh Sahani
- Gatsby Computational Neuroscience Unit, University College London, London, UK
- Valerio Mante
- Institute of Neuroinformatics, University of Zurich & ETH Zurich, Zurich, Switzerland.
- Neuroscience Center Zurich, University of Zurich & ETH Zurich, Zurich, Switzerland.
11. van der Plas TL, Tubiana J, Le Goc G, Migault G, Kunst M, Baier H, Bormuth V, Englitz B, Debrégeas G. Neural assemblies uncovered by generative modeling explain whole-brain activity statistics and reflect structural connectivity. eLife 2023; 12:83139. [PMID: 36648065; PMCID: PMC9940913; DOI: 10.7554/elife.83139]
Abstract
Patterns of endogenous activity in the brain reflect a stochastic exploration of the neuronal state space that is constrained by the underlying assembly organization of neurons. Yet, it remains to be shown that this interplay between neurons and their assembly dynamics indeed suffices to generate whole-brain data statistics. Here, we recorded the activity from ∼40,000 neurons simultaneously in zebrafish larvae, and show that a data-driven generative model of neuron-assembly interactions can accurately reproduce the mean activity and pairwise correlation statistics of their spontaneous activity. This model, the compositional Restricted Boltzmann Machine (cRBM), unveils ∼200 neural assemblies, which compose neurophysiological circuits and whose various combinations form successive brain states. We then performed in silico perturbation experiments to determine the interregional functional connectivity, which is conserved across individual animals and correlates well with structural connectivity. Our results showcase how cRBMs can capture the coarse-grained organization of the zebrafish brain. Notably, this generative model can readily be deployed to parse neural data obtained by other large-scale recording techniques.
Affiliation(s)
- Thijs L van der Plas
- Computational Neuroscience Lab, Department of Neurophysiology, Donders Center for Neuroscience, Radboud University, Nijmegen, Netherlands
- Sorbonne Université, CNRS, Institut de Biologie Paris-Seine (IBPS), Laboratoire Jean Perrin (LJP), Paris, France
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Jérôme Tubiana
- Blavatnik School of Computer Science, Tel Aviv University, Tel Aviv, Israel
- Guillaume Le Goc
- Sorbonne Université, CNRS, Institut de Biologie Paris-Seine (IBPS), Laboratoire Jean Perrin (LJP), Paris, France
- Geoffrey Migault
- Sorbonne Université, CNRS, Institut de Biologie Paris-Seine (IBPS), Laboratoire Jean Perrin (LJP), Paris, France
- Michael Kunst
- Department Genes – Circuits – Behavior, Max Planck Institute for Biological Intelligence, Martinsried, Germany
- Allen Institute for Brain Science, Seattle, United States
- Herwig Baier
- Department Genes – Circuits – Behavior, Max Planck Institute for Biological Intelligence, Martinsried, Germany
- Volker Bormuth
- Sorbonne Université, CNRS, Institut de Biologie Paris-Seine (IBPS), Laboratoire Jean Perrin (LJP), Paris, France
- Bernhard Englitz
- Computational Neuroscience Lab, Department of Neurophysiology, Donders Center for Neuroscience, Radboud University, Nijmegen, Netherlands
- Georges Debrégeas
- Sorbonne Université, CNRS, Institut de Biologie Paris-Seine (IBPS), Laboratoire Jean Perrin (LJP), Paris, France
12. Predicting network dynamics without requiring the knowledge of the interaction graph. Proc Natl Acad Sci U S A 2022; 119:e2205517119. [PMID: 36279454; PMCID: PMC9636954; DOI: 10.1073/pnas.2205517119]
Abstract
A network consists of two interdependent parts: the network topology or graph, consisting of the links between nodes and the network dynamics, specified by some governing equations. A crucial challenge is the prediction of dynamics on networks, such as forecasting the spread of an infectious disease on a human contact network. Unfortunately, an accurate prediction of the dynamics seems hardly feasible, because the network is often complicated and unknown. In this work, given past observations of the dynamics on a fixed graph, we show the contrary: Even without knowing the network topology, we can predict the dynamics. Specifically, for a general class of deterministic governing equations, we propose a two-step prediction algorithm. First, we obtain a surrogate network by fitting past observations of every nodal state to the dynamical model. Second, we iterate the governing equations on the surrogate network to predict the dynamics. Surprisingly, even though there is no similarity between the surrogate topology and the true topology, the predictions are accurate, for a considerable prediction time horizon, for a broad range of observation times, and in the presence of a reasonable noise level. The true topology is not needed for predicting dynamics on networks, since the dynamics evolve in a subspace of astonishingly low dimension compared to the size and heterogeneity of the graph. Our results constitute a fresh perspective on the broad field of nonlinear dynamics on complex networks.
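The two-step algorithm can be sketched on a toy linear system (the paper treats a general class of deterministic nonlinear dynamics; the sizes, horizons, and seed below are invented):

```python
import numpy as np

rng = np.random.default_rng(3)
n, T = 8, 60
A_true = rng.normal(scale=0.3, size=(n, n))                # hidden interaction graph
A_true /= 1.2 * np.max(np.abs(np.linalg.eigvals(A_true)))  # make the dynamics stable

# observe a trajectory of the (here linear) governing equations
X = np.zeros((T, n))
X[0] = rng.normal(size=n)
for t in range(T - 1):
    X[t + 1] = A_true @ X[t]

# step 1: fit a surrogate coupling matrix to past observations only
A_surr = np.linalg.lstsq(X[:40], X[1:41], rcond=None)[0].T

# step 2: iterate the surrogate forward to predict the held-out future
pred = X[40].copy()
for t in range(40, T - 1):
    pred = A_surr @ pred
print(np.linalg.norm(pred - X[-1]) / np.linalg.norm(X[-1]))  # small relative error
```

In this overdetermined toy the surrogate essentially recovers the true coupling; the paper's more striking regime, where the surrogate topology differs from the truth yet still predicts well because the dynamics occupy a low-dimensional subspace, requires the larger, subsampled setting the abstract describes.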
13
Small, correlated changes in synaptic connectivity may facilitate rapid motor learning. Nat Commun 2022; 13:5163. PMID: 36056006; PMCID: PMC9440011; DOI: 10.1038/s41467-022-32646-w.
Abstract
Animals rapidly adapt their movements to external perturbations, a process paralleled by changes in neural activity in the motor cortex. Experimental studies suggest that these changes originate from altered inputs (H_input) rather than from changes in local connectivity (H_local), as neural covariance is largely preserved during adaptation. Since measuring synaptic changes in vivo remains very challenging, we used a modular recurrent neural network to qualitatively test this interpretation. As expected, H_input resulted in small activity changes and largely preserved covariance. Surprisingly, given the presumed dependence of stable covariance on preserved circuit connectivity, H_local led to only slightly larger changes in activity and covariance, still within the range of experimental recordings. This similarity arises because H_local requires only small, correlated connectivity changes for successful adaptation. Simulations of tasks that impose increasingly larger behavioural changes revealed a growing difference between H_input and H_local, which could be exploited when designing future experiments.
14
Gallego-Carracedo C, Perich MG, Chowdhury RH, Miller LE, Gallego JÁ. Local field potentials reflect cortical population dynamics in a region-specific and frequency-dependent manner. eLife 2022; 11:e73155. PMID: 35968845; PMCID: PMC9470163; DOI: 10.7554/elife.73155.
Abstract
The spiking activity of populations of cortical neurons is well described by the dynamics of a small number of population-wide covariance patterns, the 'latent dynamics'. These latent dynamics are largely driven by the same correlated synaptic currents across the circuit that determine the generation of local field potentials (LFP). Yet, the relationship between latent dynamics and LFPs remains largely unexplored. Here, we characterised this relationship for three different regions of primate sensorimotor cortex during reaching. The correlation between latent dynamics and LFPs was frequency-dependent and varied across regions. However, for any given region, this relationship remained stable throughout the behaviour: in each of primary motor and premotor cortices, the LFP-latent dynamics correlation profile was remarkably similar between movement planning and execution. These robust associations between LFPs and neural population latent dynamics help bridge the wealth of studies reporting neural correlates of behaviour using either type of recordings.
Affiliation(s)
- Matthew G Perich
- Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, United States
- Raeed H Chowdhury
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, United States
- Lee E Miller
- Department of Biomedical Engineering, Northwestern University, Evanston, United States
- Juan Álvaro Gallego
- Department of Bioengineering, Imperial College London, London, United Kingdom
15
Wang X, Mi Y, Zhang Z, Chen Y, Hu G, Li H. Reconstructing distant interactions of multiple paths between perceptible nodes in dark networks. Phys Rev E 2022; 106:014302. PMID: 35974494; DOI: 10.1103/physreve.106.014302.
Abstract
Quantitative research on interdisciplinary fields, including biological and social systems, has attracted great attention in recent years, and complex networks are popular and important tools for such investigations. Practical networks generate rapidly growing volumes of data, from which useful information about the dynamics can be extracted. Going from data to network structure, i.e., network reconstruction, is a crucial task, and it faces many difficulties, including data shortage (the existence of hidden nodes) and time delays in signal propagation between adjacent nodes. In this paper, a deep network reconstruction method is proposed that works even when only two nodes (say, A and B) are perceptible and all other network nodes are hidden. With a well-designed stochastic driving on node A, this method can reconstruct multiple interaction paths from A to B based on measured data. The distance, effective intensity, and transmission time delay of each path can be inferred accurately.
Affiliation(s)
- Xinyu Wang
- School of Science, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Yuanyuan Mi
- Center for Neurointelligence, School of Medicine, Chongqing University, Chongqing 400044, China and AI Research Center, Peng Cheng Laboratory, Shenzhen 518005, China
- Zhaoyang Zhang
- Department of Physics, School of Physical Science and Technology, Ningbo University, Ningbo, Zhejiang 315211, China
- Yang Chen
- Brainnetome Center and National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Gang Hu
- Department of Physics, Beijing Normal University, Beijing 100875, China
- Haihong Li
- School of Science, Beijing University of Posts and Telecommunications, Beijing 100876, China
16
Ouchi T, Orsborn AL. Quantifying the influence of stimulation protocols on neural network connectivity inference to optimize rapid network measurements. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:2369-2372. PMID: 36085860; DOI: 10.1109/embc48229.2022.9871658.
Abstract
Connectivity is key to understanding neural circuit computations. However, estimating in vivo connectivity using recording of activity alone is challenging. Issues include common input and bias errors in inference, and limited temporal resolution due to large data requirements. Perturbations (e.g. stimulation) can improve inference accuracy and accelerate estimation. However, optimal stimulation protocols for rapid network estimation are not yet established. Here, we use neural network simulations to identify stimulation protocols that minimize connectivity inference errors when using generalized linear model inference. We find that stimulation parameters that balance excitatory and inhibitory activity minimize inference error. We also show that pairing optimized stimulation with adaptive protocols that choose neurons to stimulate via Bayesian inference may ultimately enable rapid network inference.
17
Dynamical differential covariance recovers directional network structure in multiscale neural systems. Proc Natl Acad Sci U S A 2022; 119:e2117234119. PMID: 35679342; PMCID: PMC9214501; DOI: 10.1073/pnas.2117234119.
Abstract
We sense, move, and think by dynamical interactions between neurons. It is now possible to simultaneously record from many individual neurons and brain regions. Methods for analyzing these large-scale recordings are needed that can reveal how the patterns of activity give rise to behavior. We developed dynamical differential covariance (DDC), an efficient, intuitive, and robust way to analyze these recordings, and validated it on simulations of model neural networks where the ground truth was known. It can estimate not only the presence of a connection but also which direction the information is flowing in a network between neurons or cortical areas. We applied DDC to recordings from functional magnetic resonance imaging in humans and confirmed predicted connectivity with direct measurements.
Investigating neural interactions is essential to understanding the neural basis of behavior. Many statistical methods have been used for analyzing neural activity, but estimating the direction of network interactions correctly and efficiently remains a difficult problem. Here, we derive dynamical differential covariance (DDC), a method based on dynamical network models that detects directional interactions with low bias and high noise tolerance under nonstationary conditions. Moreover, DDC scales well with the number of recording sites, and the computation required is comparable to that needed for covariance. DDC was validated and compared favorably with other methods on networks with false positive motifs and multiscale neural simulations where the ground-truth connectivity was known. When applied to recordings of resting-state functional magnetic resonance imaging (rs-fMRI), DDC consistently detected regional interactions with strong structural connectivity in over 1,000 individual subjects obtained by diffusion MRI (dMRI). DDC is a promising family of methods for estimating connectivity that can be generalized to a wide range of dynamical models and recording techniques and to other applications where system identification is needed.
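The central DDC quantity, an estimate of the coupling matrix from the covariance of states with their temporal derivatives, can be sketched on a toy linear stochastic system. Everything below (the matrix W_true, the noise model, all parameters) is assumed for illustration and is not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T = 0.01, 100_000

# Assumed ground truth: a directed chain 3 -> 0 -> 1 -> 2 with leak on the
# diagonal, driven by white noise: dx = W x dt + dB.
W_true = np.array([
    [-1.0,  0.0,  0.0,  0.5],
    [ 0.5, -1.0,  0.0,  0.0],
    [ 0.0,  0.5, -1.0,  0.0],
    [ 0.0,  0.0,  0.0, -1.0],
])
n = W_true.shape[0]

X = np.zeros((T, n))
noise = rng.standard_normal((T - 1, n)) * np.sqrt(dt)
for t in range(T - 1):
    X[t + 1] = X[t] + dt * (W_true @ X[t]) + noise[t]

# DDC-style estimator: W ~ Cov(dx/dt, x) Cov(x, x)^{-1}
dX = np.diff(X, axis=0) / dt
Xc = X[:-1] - X[:-1].mean(axis=0)
dXc = dX - dX.mean(axis=0)
W_est = (dXc.T @ Xc / len(Xc)) @ np.linalg.inv(Xc.T @ Xc / len(Xc))

print(np.round(W_est, 1))  # directed couplings appear at the right entries
```

Note that the estimate is asymmetric: it recovers the direction of each coupling, which a plain covariance cannot do.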
18
Javadzadeh M, Hofer SB. Dynamic causal communication channels between neocortical areas. Neuron 2022; 110:2470-2483.e7. PMID: 35690063; PMCID: PMC9616801; DOI: 10.1016/j.neuron.2022.05.011.
Abstract
Processing of sensory information depends on the interactions between hierarchically connected neocortical regions, but it remains unclear how the activity in one area causally influences the activity dynamics in another and how rapidly such interactions change with time. Here, we show that the communication between the primary visual cortex (V1) and high-order visual area LM is context-dependent and surprisingly dynamic over time. By momentarily silencing one area while recording activity in the other, we find that both areas reliably affected changing subpopulations of target neurons within one hundred milliseconds while mice observed a visual stimulus. The influence of LM feedback on V1 responses became even more dynamic when the visual stimuli predicted a reward, causing fast changes in the geometry of V1 population activity and affecting stimulus coding in a context-dependent manner. Therefore, the functional interactions between cortical areas are not static but unfold through rapidly shifting communication subspaces whose dynamics depend on context when processing sensory information.
- Optogenetic perturbations reveal the causal structure of long-range cortical influences
- How visual areas influence each other changes dynamically over tens of milliseconds
- Feedback to V1 improves visual stimulus encoding required for behavior
- The dynamics of feedback influences depend on the behavioral context
Affiliation(s)
- Mitra Javadzadeh
- Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London, London, UK.
- Sonja B Hofer
- Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London, London, UK.
19
Ngampruetikorn V, Sachdeva V, Torrence J, Humplik J, Schwab DJ, Palmer SE. Inferring couplings in networks across order-disorder phase transitions. Phys Rev Res 2022; 4:023240. PMID: 37576946; PMCID: PMC10421637; DOI: 10.1103/physrevresearch.4.023240.
Abstract
Statistical inference is central to many scientific endeavors, yet how it works remains unresolved. Answering this requires a quantitative understanding of the intrinsic interplay between statistical models, inference methods, and the structure in the data. To this end, we characterize the efficacy of direct coupling analysis (DCA), a highly successful method for analyzing amino acid sequence data, in inferring pairwise interactions from samples of ferromagnetic Ising models on random graphs. Our approach allows for physically motivated exploration of qualitatively distinct data regimes separated by phase transitions. We show that inference quality depends strongly on the nature of data-generating distributions: optimal accuracy occurs at an intermediate temperature where the detrimental effects from macroscopic order and thermal noise are minimal. Importantly, our results indicate that DCA does not always outperform its local-statistics-based predecessors; while DCA excels at low temperatures, it becomes inferior to simple correlation thresholding at virtually all temperatures when data are limited. Our findings offer insights into the regime in which DCA operates so successfully and, more broadly, into how inference interacts with the structure in the data.
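The contrast the authors draw between DCA and its local-statistics predecessors is easiest to see in a Gaussian toy, where the mean-field flavour of DCA reduces to inverting the empirical covariance. This is a sketch under assumed couplings, not the paper's Ising-model setup:

```python
import numpy as np

rng = np.random.default_rng(2)

# Chain graph 0-1-2: the only direct couplings are (0,1) and (1,2).
# The precision matrix J encodes direct interactions (Gaussian analogue
# of the Ising coupling matrix that DCA targets).
J = np.array([
    [ 1.0, -0.4,  0.0],
    [-0.4,  1.0, -0.4],
    [ 0.0, -0.4,  1.0],
])
X = rng.multivariate_normal(np.zeros(3), np.linalg.inv(J), size=50_000)

C = np.corrcoef(X.T)                 # local statistic: pairwise correlation
J_est = np.linalg.inv(np.cov(X.T))   # mean-field DCA analogue

# Correlation thresholding sees a sizeable *indirect* 0-2 correlation...
print(round(C[0, 2], 2))
# ...while the inverse covariance suppresses it, keeping only direct couplings.
print(round(J_est[0, 2], 2))
```

With abundant, well-behaved samples the global method cleanly removes the indirect link; the paper's point is that this advantage can invert when data are limited or the system sits near a phase transition.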
Affiliation(s)
- Vudtiwat Ngampruetikorn
- Initiative for the Theoretical Sciences, The Graduate Center, CUNY, New York, New York 10016, USA
- Vedant Sachdeva
- Department of Organismal Biology and Anatomy and Department of Physics, University of Chicago, Chicago, Illinois 60637, USA
- Johanna Torrence
- Department of Organismal Biology and Anatomy and Department of Physics, University of Chicago, Chicago, Illinois 60637, USA
- Jan Humplik
- Institute of Science and Technology Austria, 3400 Klosterneuburg, Austria
- David J Schwab
- Initiative for the Theoretical Sciences, The Graduate Center, CUNY, New York, New York 10016, USA
- Stephanie E Palmer
- Department of Organismal Biology and Anatomy and Department of Physics, University of Chicago, Chicago, Illinois 60637, USA
20
Urai AE, Doiron B, Leifer AM, Churchland AK. Large-scale neural recordings call for new insights to link brain and behavior. Nat Neurosci 2022; 25:11-19. PMID: 34980926; DOI: 10.1038/s41593-021-00980-9.
Abstract
Neuroscientists today can measure activity from more neurons than ever before, and are facing the challenge of connecting these brain-wide neural recordings to computation and behavior. In the present review, we first describe emerging tools and technologies being used to probe large-scale brain activity and new approaches to characterize behavior in the context of such measurements. We next highlight insights obtained from large-scale neural recordings in diverse model systems, and argue that some of these pose a challenge to traditional theoretical frameworks. Finally, we elaborate on existing modeling frameworks to interpret these data, and argue that the interpretation of brain-wide neural recordings calls for new theoretical approaches that may depend on the desired level of understanding. These advances in both neural recordings and theory development will pave the way for critical advances in our understanding of the brain.
Affiliation(s)
- Anne E Urai
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
- Cognitive Psychology Unit, Leiden University, Leiden, The Netherlands
- Anne K Churchland
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
- University of California Los Angeles, Los Angeles, CA, USA
21
van Albada SJ, Morales-Gregorio A, Dickscheid T, Goulas A, Bakker R, Bludau S, Palm G, Hilgetag CC, Diesmann M. Bringing Anatomical Information into Neuronal Network Models. Adv Exp Med Biol 2022; 1359:201-234. DOI: 10.1007/978-3-030-89439-9_9.
22
Puppo F, Pré D, Bang AG, Silva GA. Super-Selective Reconstruction of Causal and Direct Connectivity With Application to in vitro iPSC Neuronal Networks. Front Neurosci 2021; 15:647877. PMID: 34335152; PMCID: PMC8323822; DOI: 10.3389/fnins.2021.647877.
Abstract
Despite advancements in the development of cell-based in vitro neuronal network models, the lack of appropriate computational tools limits their analysis. Methods aimed at deciphering the effective connections between neurons from extracellular spike recordings would increase the utility of in vitro local neural circuits, especially for studies of human neural development and disease based on human induced pluripotent stem cells (hiPSCs). Current techniques allow statistical inference of functional couplings in the network but are fundamentally unable to correctly identify indirect and apparent connections between neurons, generating redundant maps with limited ability to model the causal dynamics of the network. In this paper, we describe a novel, mathematically rigorous, model-free method to map effective (direct and causal) connectivity of neuronal networks from multi-electrode array data. The inference algorithm uses a combination of statistical and deterministic indicators which first enables identification of all existing functional links in the network and then reconstructs the directed and causal connection diagram via a super-selective rule enabling highly accurate classification of direct, indirect, and apparent links. Our method can be applied generally to the functional characterization of any in vitro neuronal network. Here, we show that, given its accuracy, it can offer important insights into the functional development of in vitro hiPSC-derived neuronal cultures.
Affiliation(s)
- Francesca Puppo
- BioCircuits Institute and Center for Engineered Natural Intelligence, University of California, San Diego, La Jolla, CA, United States
- Deborah Pré
- Conrad Prebys Center for Chemical Genomics, Sanford Burnham Prebys Medical Discovery Institute, La Jolla, CA, United States
- Anne G. Bang
- Conrad Prebys Center for Chemical Genomics, Sanford Burnham Prebys Medical Discovery Institute, La Jolla, CA, United States
- Gabriel A. Silva
- BioCircuits Institute, Center for Engineered Natural Intelligence, Department of Bioengineering, Department of Neurosciences, University of California, San Diego, La Jolla, CA, United States
23
Cai W, Ryali S, Pasumarthy R, Talasila V, Menon V. Dynamic causal brain circuits during working memory and their functional controllability. Nat Commun 2021; 12:3314. PMID: 34188024; PMCID: PMC8241851; DOI: 10.1038/s41467-021-23509-x.
Abstract
Control processes associated with working memory play a central role in human cognition, but their underlying dynamic brain circuit mechanisms are poorly understood. Here we use system identification, network science, stability analysis, and control theory to probe functional circuit dynamics during working memory task performance. Our results show that dynamic signaling between distributed brain areas encompassing the salience (SN), fronto-parietal (FPN), and default mode networks can distinguish between working memory load and predict performance. Network analysis of directed causal influences suggests the anterior insula node of the SN and dorsolateral prefrontal cortex node of the FPN are causal outflow and inflow hubs, respectively. Network controllability decreases with working memory load and SN nodes show the highest functional controllability. Our findings reveal dissociable roles of the SN and FPN in systems control and provide novel insights into dynamic circuit mechanisms by which cognitive control circuits operate asymmetrically during cognition.
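The controllability notion invoked here can be illustrated with the standard finite-horizon controllability Gramian of a linear network model. This is a generic sketch: the coupling matrix A, the horizon, and the node labels are assumed, not the study's fMRI-derived dynamics:

```python
import numpy as np

# Toy directed network among four "regions"; discrete linear dynamics
# x(t+1) = A x(t) + B u(t), with the input u injected at one node at a time.
A = np.array([
    [0.5, 0.3, 0.0, 0.0],
    [0.0, 0.4, 0.2, 0.0],
    [0.0, 0.0, 0.5, 0.1],
    [0.1, 0.0, 0.0, 0.3],
])

def average_controllability(A, node, horizon=50):
    """Trace of the finite-horizon controllability Gramian for a single input node."""
    n = A.shape[0]
    B = np.zeros((n, 1))
    B[node, 0] = 1.0
    W = np.zeros((n, n))
    Ak = np.eye(n)
    for _ in range(horizon):
        W += Ak @ B @ B.T @ Ak.T   # accumulate (A^k B)(A^k B)^T
        Ak = Ak @ A
    return float(np.trace(W))

scores = [average_controllability(A, i) for i in range(4)]
print([round(s, 3) for s in scores])  # nodes differ in how far input spreads
```

Ranking nodes by such Gramian-based scores is the generic way "functional controllability" comparisons like the one in the abstract are operationalized for linear models.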
Affiliation(s)
- Weidong Cai
- Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, CA, USA.
- Wu Tsai Neurosciences Institute, Stanford University School of Medicine, Stanford, CA, USA.
- Srikanth Ryali
- Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, CA, USA
- Ramkrishna Pasumarthy
- Department of Electrical Engineering, Robert Bosch Center of Data Sciences and Artificial Intelligence, Indian Institute of Technology Madras, Chennai, India
- Viswanath Talasila
- Department of Electronics and Telecommunication Engineering, Center for Imaging Technologies, M.S. Ramaiah Institute of Technology, Bengaluru, India
- Vinod Menon
- Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, CA, USA.
- Wu Tsai Neurosciences Institute, Stanford University School of Medicine, Stanford, CA, USA.
- Department of Neurology and Neurological Sciences, Stanford University School of Medicine, Stanford, CA, USA.
24
Endo D, Kobayashi R, Bartolo R, Averbeck BB, Sugase-Miyamoto Y, Hayashi K, Kawano K, Richmond BJ, Shinomoto S. A convolutional neural network for estimating synaptic connectivity from spike trains. Sci Rep 2021; 11:12087. PMID: 34103546; PMCID: PMC8187444; DOI: 10.1038/s41598-021-91244-w.
Abstract
The recent increase in reliable, simultaneous high channel count extracellular recordings is exciting for physiologists and theoreticians because it offers the possibility of reconstructing the underlying neuronal circuits. We recently presented a method of inferring this circuit connectivity from neuronal spike trains by applying the generalized linear model to cross-correlograms. Although the algorithm can do a good job of circuit reconstruction, the parameters need to be carefully tuned for each individual dataset. Here we present another method using a Convolutional Neural Network for Estimating synaptic Connectivity from spike trains. After adaptation to huge amounts of simulated data, this method robustly captures the specific feature of monosynaptic impact in a noisy cross-correlogram. There are no user-adjustable parameters. With this new method, we have constructed diagrams of neuronal circuits recorded in several cortical areas of monkeys.
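The cross-correlograms that the network takes as input can be computed directly from binned spike trains. A minimal sketch with synthetic data (connection strength, delay, and firing rates are all assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Two toy spike trains in 1 ms bins: neuron B tends to fire 2 ms after
# neuron A, mimicking a monosynaptic excitatory connection A -> B.
T = 200_000  # ms
a = (rng.random(T) < 0.02).astype(float)           # presynaptic spikes
b = (rng.random(T) < 0.01).astype(float)           # background firing of B
evoked = np.roll(a, 2) * (rng.random(T) < 0.5)     # A evokes B at +2 ms lag
b = np.clip(b + evoked, 0, 1)

def cross_correlogram(pre, post, max_lag=10):
    """Counts of post spikes at each lag (in bins) relative to pre spikes."""
    return np.array([np.sum(pre * np.roll(post, -lag))
                     for lag in range(-max_lag, max_lag + 1)])

ccg = cross_correlogram(a, b)
lags = np.arange(-10, 11)
print(lags[np.argmax(ccg)])  # peak sits at the assumed synaptic delay
```

The monosynaptic impact appears as a sharp peak at a short positive lag riding on a flat baseline; detecting exactly this feature in noisy correlograms is what the paper's convolutional network is trained to do.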
Affiliation(s)
- Daisuke Endo
- Graduate School of Informatics, Kyoto University, Kyoto, 606-8501, Japan
- Ryota Kobayashi
- Mathematics and Informatics Center, The University of Tokyo, Tokyo, 113-8656, Japan
- Department of Complexity Science and Engineering, The University of Tokyo, Chiba, 277-8561, Japan
- JST, PRESTO, Saitama, 332-0012, Japan
- Ramon Bartolo
- Laboratory of Neuropsychology, NIMH/NIH/DHHS, Bethesda, MD, 20814, USA
- Bruno B Averbeck
- Laboratory of Neuropsychology, NIMH/NIH/DHHS, Bethesda, MD, 20814, USA
- Yasuko Sugase-Miyamoto
- Human Informatics and Interaction Research Institute, National Institute of Advanced Industrial Science and Technology, Tsukuba, 305-8568, Japan
- Kazuko Hayashi
- Human Informatics and Interaction Research Institute, National Institute of Advanced Industrial Science and Technology, Tsukuba, 305-8568, Japan
- Japan Society for the Promotion of Science, Tokyo, 102-0083, Japan
- Kenji Kawano
- Human Informatics and Interaction Research Institute, National Institute of Advanced Industrial Science and Technology, Tsukuba, 305-8568, Japan
- Barry J Richmond
- Laboratory of Neuropsychology, NIMH/NIH/DHHS, Bethesda, MD, 20814, USA
- Shigeru Shinomoto
- Graduate School of Informatics, Kyoto University, Kyoto, 606-8501, Japan.
- Brain Information Communication Research Laboratory Group, ATR Institute International, Kyoto, 619-0288, Japan.
25
Shorten DP, Spinney RE, Lizier JT. Estimating Transfer Entropy in Continuous Time Between Neural Spike Trains or Other Event-Based Data. PLoS Comput Biol 2021; 17:e1008054. PMID: 33872296; PMCID: PMC8084348; DOI: 10.1371/journal.pcbi.1008054.
Abstract
Transfer entropy (TE) is a widely used measure of directed information flows in a number of domains, including neuroscience. Many real-world time series for which we are interested in information flows come in the form of (near) instantaneous events occurring over time; examples include the spiking of biological neurons, trades on stock markets, and posts to social media, amongst myriad other systems involving events in continuous time throughout the natural and social sciences. However, there exist severe limitations to the current approach to TE estimation on such event-based data via discretising the time series into time bins: it is not consistent, has high bias, converges slowly, and cannot simultaneously capture relationships that occur with very fine time precision as well as those that occur over long time intervals. Building on recent work which derived a theoretical framework for TE in continuous time, we present an estimation framework for TE on event-based data and develop a k-nearest-neighbours estimator within this framework. This estimator is provably consistent, has favourable bias properties, and converges orders of magnitude more quickly than the current state of the art in discrete-time estimation on synthetic examples. We demonstrate failures of the traditionally used source-time-shift method for null surrogate generation. To overcome these failures, we develop a local permutation scheme for generating surrogate time series conforming to the appropriate null hypothesis, in order to test for the statistical significance of the TE and, as such, for the conditional independence between the history of one point process and the updates of another. Our approach is shown to be capable of correctly rejecting or accepting the null hypothesis of conditional independence even in the presence of strong pairwise time-directed correlations. This capacity to accurately test for conditional independence is further demonstrated on models of a spiking neural circuit inspired by the pyloric circuit of the crustacean stomatogastric ganglion, succeeding where previous related estimators have failed.
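For orientation, the discrete-time plug-in TE estimator that this continuous-time framework improves upon can be written in a few lines. The sketch below uses binary 1 ms bins and single-bin histories; it is an illustrative baseline, not the authors' estimator:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in discrete-time TE from x to y in bits, with 1-bin histories:
    TE = I(y_t ; x_{t-1} | y_{t-1})."""
    n = len(y) - 1
    triples = Counter(zip(y[1:], x[:-1], y[:-1]))
    te = 0.0
    for (yt, xp, yp), c in triples.items():
        p = c / n
        p_yp = sum(v for (_, _, z), v in triples.items() if z == yp) / n
        p_yt_yp = sum(v for (a, _, z), v in triples.items() if a == yt and z == yp) / n
        p_xp_yp = sum(v for (_, b, z), v in triples.items() if b == xp and z == yp) / n
        te += p * np.log2(p * p_yp / (p_yt_yp * p_xp_yp))
    return te

rng = np.random.default_rng(4)
x = (rng.random(100_000) < 0.1).astype(int)        # source spike train, 1 ms bins
y = np.roll(x, 1)                                   # target copies x one bin later...
y = y ^ (rng.random(100_000) < 0.05).astype(int)    # ...with 5% of bins flipped

print(transfer_entropy(x, y))  # strongly positive: information flows x -> y
print(transfer_entropy(y, x))  # near zero in the reverse direction
```

The limitations the paper targets are visible in this form: the answer depends on the bin width, and interactions finer than one bin or far longer than the chosen history are invisible to it.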
Affiliation(s)
- David P. Shorten
- Complex Systems Research Group and Centre for Complex Systems, Faculty of Engineering, The University of Sydney, Sydney, Australia
- Richard E. Spinney
- Complex Systems Research Group and Centre for Complex Systems, Faculty of Engineering, The University of Sydney, Sydney, Australia
- School of Physics and EMBL Australia Node Single Molecule Science, School of Medical Sciences, The University of New South Wales, Sydney, Australia
- Joseph T. Lizier
- Complex Systems Research Group and Centre for Complex Systems, Faculty of Engineering, The University of Sydney, Sydney, Australia
26
Kim J, Barath AS, Rusheen AE, Rojas Cabrera JM, Price JB, Shin H, Goyal A, Yuen JW, Jondal DE, Blaha CD, Lee KH, Jang DP, Oh Y. Automatic and Reliable Quantification of Tonic Dopamine Concentrations In Vivo Using a Novel Probabilistic Inference Method. ACS Omega 2021; 6:6607-6613. PMID: 33748573; PMCID: PMC7970470; DOI: 10.1021/acsomega.0c05217.
Abstract
Dysregulation of the neurotransmitter dopamine (DA) is implicated in several neuropsychiatric conditions. Multiple-cyclic square-wave voltammetry (MCSWV) is a state-of-the-art technique for measuring tonic DA levels with high sensitivity (<5 nM), selectivity, and spatiotemporal resolution. Currently, however, analysis of MCSWV data requires manual, qualitative adjustment of analysis parameters, which can inadvertently introduce bias. Here, we demonstrate the development of a computational technique using a statistical model for standardized, unbiased analysis of experimental MCSWV data and quantification of tonic DA. The oxidation current in the MCSWV signal was predicted to follow a lognormal distribution. The DA-related oxidation signal was inferred to be present in the top 5% of this analytical distribution and was used to predict a tonic DA level. The performance of this technique was compared against the previously used peak-based method on paired in vivo and post-calibration in vitro datasets. Analytical inference of DA signals derived from the predicted statistical model enabled high-fidelity conversion of the in vivo current signal to a concentration value via in vitro post-calibration. As a result, this technique demonstrated reliable and improved estimation of tonic DA levels in vivo compared with the conventional manual post-processing technique using peak current signals. These results show that probabilistic inference-based voltammetry signal-processing techniques can standardize the determination of tonic DA concentrations, enabling progress toward the development of MCSWV as a robust research and clinical tool.
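The inference step described above (fit a lognormal to the oxidation-current distribution, then read the DA-related signal from its top 5%) can be sketched as follows. The data, parameter values, and threshold are purely illustrative, not the paper's calibration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated oxidation-current samples (arbitrary units): lognormal background,
# as the statistical model predicts.
currents = rng.lognormal(mean=1.0, sigma=0.4, size=20_000)

# Fit the lognormal by moment matching in log space, then take the top 5%
# of the fitted distribution as the DA-related signal region.
mu, sigma = np.log(currents).mean(), np.log(currents).std()
threshold = np.exp(mu + 1.6449 * sigma)   # 95th percentile of fitted lognormal

da_signal = currents[currents >= threshold]
print(round(threshold, 2), len(da_signal) / len(currents))
```

In the published pipeline, the current inferred this way is then mapped to a tonic concentration via an in vitro post-calibration factor; that conversion step is omitted here.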
Affiliation(s)
- Jaekyung Kim
- Department of Neurology, University of California, San Francisco, San Francisco, California 94158, United States
- Neurology and Rehabilitation Service, San Francisco Veterans Affairs Medical Center, San Francisco, California 94158, United States
- Abhijeet S. Barath
- Department of Neurologic Surgery, Mayo Clinic, Rochester, Minnesota 55905, United States
- Aaron E. Rusheen
- Department of Neurologic Surgery, Mayo Clinic, Rochester, Minnesota 55905, United States
- Mayo Clinic Alix School of Medicine, Mayo Clinic, Rochester, Minnesota 55905, United States
- Juan M. Rojas Cabrera
- Department of Neurologic Surgery, Mayo Clinic, Rochester, Minnesota 55905, United States
- J. Blair Price
- Department of Neurologic Surgery, Mayo Clinic, Rochester, Minnesota 55905, United States
- Hojin Shin
- Department of Neurologic Surgery, Mayo Clinic, Rochester, Minnesota 55905, United States
- Abhinav Goyal
- Department of Neurologic Surgery, Mayo Clinic, Rochester, Minnesota 55905, United States
- Mayo Clinic Graduate School of Biomedical Sciences, Mayo Clinic, Rochester, Minnesota 55905, United States
- Jason W. Yuen
- Department of Neurologic Surgery, Mayo Clinic, Rochester, Minnesota 55905, United States
- Danielle E. Jondal
- Department of Neurologic Surgery, Mayo Clinic, Rochester, Minnesota 55905, United States
- Charles D. Blaha
- Department of Neurologic Surgery, Mayo Clinic, Rochester, Minnesota 55905, United States
- Kendall H. Lee
- Department of Neurologic Surgery, Mayo Clinic, Rochester, Minnesota 55905, United States
- Department of Biomedical Engineering, Mayo Clinic, Rochester, Minnesota 55905, United States
- Dong Pyo Jang
- Department of Biomedical Engineering, Hanyang University, Seoul 04763, Republic of Korea
- Yoonbae Oh
- Department of Neurologic Surgery, Mayo Clinic, Rochester, Minnesota 55905, United States
- Department of Biomedical Engineering, Mayo Clinic, Rochester, Minnesota 55905, United States
Collapse
27.
Young J, Neveu CL, Byrne JH, Aazhang B. Inferring functional connectivity through graphical directed information. J Neural Eng 2021; 18. [PMID: 33684898 PMCID: PMC8600965 DOI: 10.1088/1741-2552/abecc6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2020] [Accepted: 03/08/2021] [Indexed: 11/25/2022]
Abstract
Objective. Accurate inference of functional connectivity is critical for understanding brain function. Previous methods have limited ability to distinguish between direct and indirect connections because they scale poorly with dimensionality. This poor scaling performance reduces the number of nodes that can be included in conditioning. Our goal was to provide a technique that scales better and thereby enables minimization of indirect connections. Approach. Our major contribution is a powerful model-free framework, graphical directed information (GDI), that enables pairwise directed functional connections to be conditioned on the activity of substantially more nodes in a network, producing a more accurate graph of functional connectivity that reduces indirect connections. The key technology enabling this advancement is a recent advance in the estimation of mutual information (MI), which relies on multilayer perceptrons and exploits an alternative representation of the Kullback–Leibler divergence definition of MI. Our second major contribution is the application of this technique to both discretely valued and continuously valued time series. Main results. GDI correctly inferred the circuitry of arbitrary Gaussian, nonlinear, and conductance-based networks. Furthermore, GDI inferred many of the connections of a model of a central pattern generator circuit in Aplysia, while also reducing many indirect connections. Significance. GDI is a general and model-free technique that can be used on a variety of scales and data types to provide accurate direct connectivity graphs and addresses the critical issue of indirect connections in neural data analysis.
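GDI conditions pairwise directed measures on many other nodes via neural MI estimators; as a much simpler illustration of the kind of directed quantity being estimated, a first-order plug-in transfer entropy for discretely valued series can be computed as follows. The function and its Markov-order-1 restriction are illustrative assumptions for the sketch, not the GDI implementation.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in transfer entropy TE(x -> y) for discrete series, order 1.

    TE(x->y) = sum over (y_t, y_p, x_p) of
        p(y_t, y_p, x_p) * log[ p(y_t | y_p, x_p) / p(y_t | y_p) ],
    where y_p = y[t-1] and x_p = x[t-1]. A positive value means x's
    past improves prediction of y beyond y's own past.
    """
    x, y = np.asarray(x), np.asarray(y)
    triples = list(zip(y[1:], y[:-1], x[:-1]))  # (y_t, y_p, x_p)
    n = len(triples)
    c_full = Counter(triples)
    c_yp_xp = Counter((yp, xp) for _, yp, xp in triples)
    c_yt_yp = Counter((yt, yp) for yt, yp, _ in triples)
    c_yp = Counter(yp for _, yp, _ in triples)
    te = 0.0
    for (yt, yp, xp), cnt in c_full.items():
        p_joint = cnt / n
        p_cond_full = cnt / c_yp_xp[(yp, xp)]          # p(y_t | y_p, x_p)
        p_cond_rest = c_yt_yp[(yt, yp)] / c_yp[yp]     # p(y_t | y_p)
        te += p_joint * np.log(p_cond_full / p_cond_rest)
    return te
```

The plug-in estimate above breaks down as the conditioning set grows (the curse of dimensionality the abstract describes); GDI's contribution is replacing it with a neural estimator so that many more nodes can be conditioned on.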
Affiliation(s)
- Joseph Young: Department of Electrical & Computer Engineering, Rice University, 6100 Main St, Houston, Texas 77005, United States
- Curtis L Neveu: Department of Neurobiology & Anatomy, The University of Texas Health Science Center at Houston, John P and Katherine G McGovern Medical School, 6431 Fannin Street, Houston, Texas 77030-1501, United States
- John H Byrne: Department of Neurobiology and Anatomy, The University of Texas Health Science Center at Houston, John P and Katherine G McGovern Medical School, 6431 Fannin Street, Houston, Texas 77030-1501, United States
- Behnaam Aazhang: Department of Electrical & Computer Engineering, Rice University, 6100 Main St, Houston, Texas 77005, United States
28.
Daie K, Svoboda K, Druckmann S. Targeted photostimulation uncovers circuit motifs supporting short-term memory. Nat Neurosci 2021; 24:259-265. [PMID: 33495637 DOI: 10.1038/s41593-020-00776-3] [Citation(s) in RCA: 40] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2020] [Accepted: 12/15/2020] [Indexed: 12/25/2022]
Abstract
Short-term memory is associated with persistent neural activity that is maintained by positive feedback between neurons. To explore the neural circuit motifs that produce memory-related persistent activity, we measured coupling between functionally characterized motor cortex neurons in mice performing a memory-guided response task. Targeted two-photon photostimulation of small (<10) groups of neurons produced sparse calcium responses in coupled neurons over approximately 100 μm. Neurons with similar task-related selectivity were preferentially coupled. Photostimulation of different groups of neurons modulated activity in different subpopulations of coupled neurons. Responses of stimulated and coupled neurons persisted for seconds, far outlasting the duration of the photostimuli. Photostimuli produced behavioral biases that were predictable based on the selectivity of the perturbed neuronal population, even though photostimulation preceded the behavioral response by seconds. Our results suggest that memory-related neural circuits contain intercalated, recurrently connected modules, which can independently maintain selective persistent activity.
Affiliation(s)
- Kayvon Daie: Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Karel Svoboda: Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Shaul Druckmann: Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA; Stanford University, Stanford, CA, USA
29.
Perich MG, Rajan K. Rethinking brain-wide interactions through multi-region 'network of networks' models. Curr Opin Neurobiol 2020; 65:146-151. [PMID: 33254073 DOI: 10.1016/j.conb.2020.11.003] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2020] [Revised: 10/17/2020] [Accepted: 11/08/2020] [Indexed: 12/20/2022]
Abstract
The neural control of behavior is distributed across many functionally and anatomically distinct brain regions even in small nervous systems. While classical neuroscience models treated these regions as a set of hierarchically isolated nodes, the brain comprises a recurrently interconnected network in which each region is intimately modulated by many others. Uncovering these interactions is now possible through experimental techniques that access large neural populations from many brain regions simultaneously. Harnessing these large-scale datasets, however, requires new theoretical approaches. Here, we review recent work to understand brain-wide interactions using multi-region 'network of networks' models and discuss how they can guide future experiments. We also emphasize the importance of multi-region recordings, and posit that studying individual components in isolation will be insufficient to understand the neural basis of behavior.
Affiliation(s)
- Matthew G Perich: Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Kanaka Rajan: Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA
30.
Stepaniants G, Brunton BW, Kutz JN. Inferring causal networks of dynamical systems through transient dynamics and perturbation. Phys Rev E 2020; 102:042309. [PMID: 33212733 DOI: 10.1103/physreve.102.042309] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2020] [Accepted: 09/25/2020] [Indexed: 12/28/2022]
Abstract
Inferring causal relations from time series measurements is an ill-posed mathematical problem, where typically an infinite number of potential solutions can reproduce the given data. We explore in depth a strategy to disambiguate between possible underlying causal networks by perturbing the network, where the forcings are either targeted or applied at random. The resulting transient dynamics provide the critical information necessary to infer causality. Two methods are shown to provide accurate causal reconstructions: Granger causality (GC) with perturbations, and our proposed perturbation cascade inference (PCI). Perturbed GC is capable of inferring smaller networks under low coupling strength regimes. Our proposed PCI method demonstrated consistently strong performance in inferring causal relations for small (2-5 node) and large (10-20 node) networks, with both linear and nonlinear dynamics. Thus, the ability to apply a large and diverse set of perturbations to the network is critical for successfully and accurately determining causal relations and disambiguating between various viable networks.
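The Granger causality component of the comparison above can be illustrated with a minimal pairwise estimator: fit an autoregressive model of y from its own past, then from its own past plus x's past, and compare residual variances. This is a sketch of plain GC only; the paper's perturbed-GC protocol and the proposed PCI method are more involved, and the function name and lag order here are assumptions.

```python
import numpy as np

def granger_causality(x, y, order=2):
    """Pairwise Granger causality from x to y.

    Compares the residual variance of a "restricted" AR model of y
    (y's own past only) against a "full" model that also includes x's
    past. Returns log(var_restricted / var_full); values well above 0
    indicate that x's history helps predict y.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(y)
    target = y[order:]
    # Lagged design columns: lag 1 .. order for each series.
    past_y = np.column_stack([y[order - k: n - k] for k in range(1, order + 1)])
    past_x = np.column_stack([x[order - k: n - k] for k in range(1, order + 1)])
    intercept = np.ones((n - order, 1))

    def resid_var(design):
        beta, *_ = np.linalg.lstsq(design, target, rcond=None)
        return np.var(target - design @ beta)

    var_restricted = resid_var(np.hstack([intercept, past_y]))
    var_full = resid_var(np.hstack([intercept, past_y, past_x]))
    return np.log(var_restricted / var_full)
```

On passively observed data this statistic alone cannot disambiguate all candidate networks, which is exactly the ill-posedness the abstract addresses by adding perturbations.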
Affiliation(s)
- George Stepaniants: Department of Mathematics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA; Department of Mathematics, University of Washington, Seattle, Washington 98195, USA
- Bingni W Brunton: Department of Biology, University of Washington, Seattle, Washington 98195, USA
- J Nathan Kutz: Department of Applied Mathematics, University of Washington, Seattle, Washington 98195, USA