1. Wang X, Mi Y, Zhang Z, Chen Y, Hu G, Li H. Reconstructing distant interactions of multiple paths between perceptible nodes in dark networks. Phys Rev E 2022; 106:014302. PMID: 35974494. DOI: 10.1103/physreve.106.014302.
Abstract
Quantitative research on interdisciplinary fields, including biological and social systems, has attracted great attention in recent years, and complex networks are popular and important tools for such investigations. Practical networks generate explosively growing volumes of data, from which useful information about network dynamics can be extracted. Going from data to network structure, i.e., network reconstruction, is a crucial task, but it faces many difficulties, including data shortage (the existence of hidden nodes) and time delays in signal propagation between adjacent nodes. In this paper a deep network reconstruction method is proposed that works even when only two nodes (say A and B) are perceptible and all other network nodes are hidden. With a well-designed stochastic driving applied to node A, the method reconstructs multiple interaction paths from A to B based on measured data, and the distance, effective intensity, and transmission time delay of each path can be inferred accurately.
Affiliation(s)
- Xinyu Wang
- School of Science, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Yuanyuan Mi
- Center for Neurointelligence, School of Medicine, Chongqing University, Chongqing 400044, China and AI Research Center, Peng Cheng Laboratory, Shenzhen 518005, China
- Zhaoyang Zhang
- Department of Physics, School of Physical Science and Technology, Ningbo University, Ningbo, Zhejiang 315211, China
- Yang Chen
- Brainnetome Center and National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Gang Hu
- Department of Physics, Beijing Normal University, Beijing 100875, China
- Haihong Li
- School of Science, Beijing University of Posts and Telecommunications, Beijing 100876, China
2. Aljadeff J, Gillett M, Pereira Obilinovic U, Brunel N. From synapse to network: models of information storage and retrieval in neural circuits. Curr Opin Neurobiol 2021; 70:24-33. PMID: 34175521. DOI: 10.1016/j.conb.2021.05.005.
Abstract
The mechanisms of information storage and retrieval in brain circuits are still the subject of debate. It is widely believed that information is stored at least in part through changes in synaptic connectivity in networks that encode this information and that these changes lead in turn to modifications of network dynamics, such that the stored information can be retrieved at a later time. Here, we review recent progress in deriving synaptic plasticity rules from experimental data and in understanding how plasticity rules affect the dynamics of recurrent networks. We show that the dynamics generated by such networks exhibit a large degree of diversity, depending on parameters, similar to experimental observations in vivo during delayed response tasks.
Affiliation(s)
- Johnatan Aljadeff
- Neurobiology Section, Division of Biological Sciences, UC San Diego, USA
- Nicolas Brunel
- Department of Neurobiology, Duke University, USA; Department of Physics, Duke University, USA.
3. Das A, Fiete IR. Systematic errors in connectivity inferred from activity in strongly recurrent networks. Nat Neurosci 2020; 23:1286-1296. DOI: 10.1038/s41593-020-0699-2.
4. Yan Y, Jebara T, Abernathey R, Goes J, Gomes H. Robust learning algorithms for capturing oceanic dynamics and transport of Noctiluca blooms using linear dynamical models. PLoS One 2019; 14:e0218183. PMID: 31194825. PMCID: PMC6564007. DOI: 10.1371/journal.pone.0218183.
Abstract
The blooms of Noctiluca in the Gulf of Oman and the Arabian Sea have been intensifying in recent years, now posing a threat to regional fisheries and to the long-term health of an ecosystem that supports a coastal population of nearly 120 million people. We present the results of a local-scale data analysis investigating the onset and patterns of the Noctiluca blooms, which form annually during the winter monsoon in the Gulf of Oman and the Arabian Sea. Our approach combines methods from physical and biological oceanography with machine learning techniques. In particular, we present a robust algorithm, the variable-length Linear Dynamic Systems (vLDS) model, that extracts the causal factors and latent dynamics at the local scale along each individual drifter trajectory, and we demonstrate its effectiveness by using it to generate predictive plots for all variables and to test macroscopic scientific hypotheses. The vLDS model is a new algorithm designed specifically for the irregular dataset produced by surface velocity drifters, in which the multivariate time-series trajectories have variable or unequal lengths. The test results provide local-scale statistical evidence to support and check macroscopic physical and biological oceanography hypotheses about the Noctiluca blooms; they also help identify complementary local trajectory-scale dynamics that might not be visible or discoverable at the macroscopic scale. As a machine learning methodology, the vLDS model also generalizes to investigating important causal factors and hidden dynamics associated with ocean biogeochemical processes and phenomena at the population level and local trajectory scale.
Affiliation(s)
- Yan Yan
- Data Science Institute, Columbia University, New York, NY, United States of America
- Tony Jebara
- Data Science Institute, Columbia University, New York, NY, United States of America
- Department of Computer Sciences, Columbia University, New York, NY, United States of America
- Ryan Abernathey
- Lamont Doherty Earth Observatory, Columbia University, Palisades, NY, United States of America
- Joaquim Goes
- Lamont Doherty Earth Observatory, Columbia University, Palisades, NY, United States of America
- Helga Gomes
- Lamont Doherty Earth Observatory, Columbia University, Palisades, NY, United States of America
5. Paninski L, Cunningham JP. Neural data science: accelerating the experiment-analysis-theory cycle in large-scale neuroscience. Curr Opin Neurobiol 2019; 50:232-241. PMID: 29738986. DOI: 10.1016/j.conb.2018.04.007.
Abstract
Modern large-scale multineuronal recording methodologies, including multielectrode arrays, calcium imaging, and optogenetic techniques, produce single-neuron resolution data of a magnitude and precision that were the realm of science fiction twenty years ago. The major bottlenecks in systems and circuit neuroscience no longer lie in simply collecting data from large neural populations, but rather in understanding these data: developing novel scientific questions, together with the analysis techniques and experimental designs needed to fully harness these new capabilities and meaningfully interrogate those questions. Advances in methods for signal processing, network analysis, dimensionality reduction, and optimal control, developed in lockstep with advances in experimental neurotechnology, promise major breakthroughs on multiple fundamental neuroscience problems. These trends are clear across a broad array of subfields of modern neuroscience; this review focuses on recent advances in methods for analyzing neural time-series data with single-neuronal precision.
Affiliation(s)
- L Paninski
- Department of Statistics, Grossman Center for the Statistics of Mind, Zuckerman Mind Brain Behavior Institute, Center for Theoretical Neuroscience, Columbia University, United States; Department of Neuroscience, Grossman Center for the Statistics of Mind, Zuckerman Mind Brain Behavior Institute, Center for Theoretical Neuroscience, Columbia University, United States.
- J P Cunningham
- Department of Statistics, Grossman Center for the Statistics of Mind, Zuckerman Mind Brain Behavior Institute, Center for Theoretical Neuroscience, Columbia University, United States
6. Soudry D, Keshri S, Stinson P, Oh MH, Iyengar G, Paninski L. Efficient "Shotgun" Inference of Neural Connectivity from Highly Sub-sampled Activity Data. PLoS Comput Biol 2015; 11:e1004464. PMID: 26465147. PMCID: PMC4605541. DOI: 10.1371/journal.pcbi.1004464.
Abstract
Inferring connectivity in neuronal networks remains a key challenge in statistical neuroscience. The “common input” problem presents a major roadblock: it is difficult to reliably distinguish causal connections between pairs of observed neurons from correlations induced by common input from unobserved neurons. Available techniques allow us to simultaneously record, with sufficient temporal resolution, only a small fraction of the network. Consequently, naive connectivity estimators that neglect these common input effects are highly biased. This work proposes a “shotgun” experimental design, in which we observe multiple sub-networks briefly, in a serial manner. Thus, while the full network cannot be observed simultaneously at any given time, we may be able to observe much larger subsets of the network over the course of the entire experiment, thus ameliorating the common input problem. Using a generalized linear model for a spiking recurrent neural network, we develop a scalable approximate expected log-likelihood-based Bayesian method to perform network inference given this type of data, in which only a small fraction of the network is observed in each time bin. We demonstrate in simulation that the shotgun experimental design can eliminate the biases induced by common input effects. Networks with thousands of neurons, in which only a small fraction of the neurons is observed in each time bin, can be quickly and accurately estimated, achieving an orders-of-magnitude speedup over previous approaches. Optical imaging of the activity in a neuronal network is limited by the scanning speed of the imaging device. Therefore, typically, only a small fixed part of the network is observed during the entire experiment. However, in such an experiment, it can be hard to infer from the observed activity patterns whether (1) neuron A directly affects neuron B, or (2) another, unobserved neuron C affects both A and B.
To deal with this issue, we propose a “shotgun” observation scheme, in which, at each time point, we observe a small changing subset of the neurons from the network. Consequently, many fewer neurons remain completely unobserved during the entire experiment, enabling us to eventually distinguish between cases (1) and (2) given sufficiently long experiments. Since previous inference algorithms cannot efficiently handle so many missing observations, we develop a scalable algorithm for data acquired using the shotgun observation scheme, in which only a small fraction of the neurons are observed in each time bin. Using this kind of simulated data, we show the algorithm is able to quickly infer connectivity in spiking recurrent networks with thousands of neurons.
Affiliation(s)
- Daniel Soudry
- Department of Statistics, Department of Neuroscience, the Center for Theoretical Neuroscience, the Grossman Center for the Statistics of Mind, the Kavli Institute for Brain Science, and the NeuroTechnology Center, Columbia University, New York, New York, United States of America
- Suraj Keshri
- Department of Industrial Engineering and Operations Research, Columbia University, New York, New York, United States of America
- Patrick Stinson
- Department of Statistics, Department of Neuroscience, the Center for Theoretical Neuroscience, the Grossman Center for the Statistics of Mind, the Kavli Institute for Brain Science, and the NeuroTechnology Center, Columbia University, New York, New York, United States of America
- Min-Hwan Oh
- Department of Industrial Engineering and Operations Research, Columbia University, New York, New York, United States of America
- Garud Iyengar
- Department of Industrial Engineering and Operations Research, Columbia University, New York, New York, United States of America
- Liam Paninski
- Department of Statistics, Department of Neuroscience, the Center for Theoretical Neuroscience, the Grossman Center for the Statistics of Mind, the Kavli Institute for Brain Science, and the NeuroTechnology Center, Columbia University, New York, New York, United States of America
7. Grosenick L, Marshel JH, Deisseroth K. Closed-loop and activity-guided optogenetic control. Neuron 2015; 86:106-139.
Abstract
Advances in optical manipulation and observation of neural activity have set the stage for widespread implementation of closed-loop and activity-guided optical control of neural circuit dynamics. Closing the loop optogenetically (i.e., basing optogenetic stimulation on simultaneously observed dynamics in a principled way) is a powerful strategy for causal investigation of neural circuitry. In particular, observing and feeding back the effects of circuit interventions on physiologically relevant timescales is valuable for directly testing whether inferred models of dynamics, connectivity, and causation are accurate in vivo. Here we highlight technical and theoretical foundations as well as recent advances and opportunities in this area, and we review in detail the known caveats and limitations of optogenetic experimentation in the context of addressing these challenges with closed-loop optogenetic control in behaving animals.
Affiliation(s)
- Logan Grosenick
- Department of Bioengineering, Stanford University, Stanford, CA 94305 USA; CNC Program, Stanford University, Stanford, CA 94305 USA; Neurosciences Program, Stanford University, Stanford, CA 94305 USA
- James H Marshel
- Department of Bioengineering, Stanford University, Stanford, CA 94305 USA; CNC Program, Stanford University, Stanford, CA 94305 USA
- Karl Deisseroth
- Department of Bioengineering, Stanford University, Stanford, CA 94305 USA; CNC Program, Stanford University, Stanford, CA 94305 USA; Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA 94305 USA; Howard Hughes Medical Institute, Stanford University, Stanford, CA 94305 USA.
8. O'Leary T, Sutton AC, Marder E. Computational models in the age of large datasets. Curr Opin Neurobiol 2015; 32:87-94. PMID: 25637959. DOI: 10.1016/j.conb.2015.01.006.
Abstract
Technological advances in experimental neuroscience are generating vast quantities of data, from the dynamics of single molecules to the structure and activity patterns of large networks of neurons. How do we make sense of these voluminous, complex, disparate and often incomplete data? How do we find general principles in the morass of detail? Computational models are invaluable and necessary in this task and yield insights that cannot otherwise be obtained. However, building and interpreting good computational models is a substantial challenge, especially so in the era of large datasets. Fitting detailed models to experimental data is difficult and often requires onerous assumptions, while more loosely constrained conceptual models that explore broad hypotheses and principles can yield more useful insights.
Affiliation(s)
- Timothy O'Leary
- Biology Department and Volen Center, Brandeis University, Waltham, MA 02454, United States
- Alexander C Sutton
- Biology Department and Volen Center, Brandeis University, Waltham, MA 02454, United States
- Eve Marder
- Biology Department and Volen Center, Brandeis University, Waltham, MA 02454, United States