1. Karbasi A, Salavati AH, Vetterli M. Learning neural connectivity from firing activity: efficient algorithms with provable guarantees on topology. J Comput Neurosci 2018; 44:253-272. PMID: 29464489; PMCID: PMC5851696; DOI: 10.1007/s10827-018-0678-8.
Abstract
The connectivity of a neuronal network has a major effect on its functionality and role. It is generally believed that the complex network structure of the brain provides a physiological basis for information processing. Therefore, identifying the network's topology has received a lot of attention in neuroscience and has been the focus of many research initiatives, such as the Human Connectome Project. Nevertheless, direct and invasive approaches that slice and observe the neural tissue have proven to be time-consuming, complex and costly. As a result, inverse methods that use the firing activity of neurons to identify the (functional) connections have gained momentum recently, especially in light of rapid advances in recording technologies; it will soon be possible to simultaneously monitor the activities of tens of thousands of neurons in real time. While there are a number of excellent approaches that aim to identify functional connections from firing activities, the scalability of the proposed techniques poses a major challenge in applying them to large-scale datasets of recorded firing activities. In the exceptional cases where scalability has not been an issue, the theoretical performance guarantees are usually limited to a specific family of neurons or type of firing activity. In this paper, we formulate neural network reconstruction as an instance of a graph learning problem, in which we observe the behavior of nodes/neurons (i.e., firing activities) and aim to find the links/connections. We develop a scalable learning mechanism and derive the conditions under which the estimated graph for a network of leaky integrate-and-fire (LIF) neurons matches the true underlying synaptic connections. We then validate the performance of the algorithm using artificially generated data (for benchmarking) and real data recorded from multiple hippocampal areas in rats.
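As a rough illustration of the inverse problem this abstract describes (not the authors' algorithm), the following sketch simulates a small LIF network and scores each candidate connection by counting lag-one spike coincidences in excess of chance; all parameter values are invented for the example.

```python
import numpy as np

# Toy sketch: simulate a small LIF network, then score each ordered pair of
# neurons by how often a presynaptic spike precedes a postsynaptic spike by
# one time step, relative to chance. Illustrative only; this is not the
# paper's algorithm, and all parameters are invented.
rng = np.random.default_rng(0)
N, T = 5, 2000                 # neurons, time steps
W = np.zeros((N, N))           # true weights (row = presynaptic neuron)
W[0, 1] = W[1, 2] = 0.8        # a small feedforward chain 0 -> 1 -> 2

tau, thresh = 20.0, 1.0
v = np.zeros(N)                # membrane potentials
spikes = np.zeros((T, N), dtype=bool)
for t in range(T):
    drive = rng.normal(0.05, 0.3, N)       # noisy external input
    if t > 0:
        drive += spikes[t - 1] @ W         # synaptic input from last step
    v += -v / tau + drive                  # leaky integration (Euler, dt = 1)
    fired = v >= thresh
    v[fired] = 0.0                         # reset on spiking
    spikes[t] = fired

# Excess lag-1 coincidences over the chance level for each ordered pair.
score = spikes[:-1].T.astype(float) @ spikes[1:].astype(float)
score -= spikes.mean(0)[:, None] * spikes.mean(0)[None, :] * (T - 1)
```

With the true edges (0, 1) and (1, 2), the corresponding entries of `score` stand out, and thresholding `score` yields an estimated adjacency matrix. Real reconstruction methods, including the paper's, must additionally cope with hidden neurons, longer synaptic delays, and far larger networks.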
Affiliation(s)
- Amin Karbasi: Inference, Information and Decision Systems Group, Yale Institute for Network Science, Yale University, New Haven, CT 06520, USA
- Amir Hesam Salavati: Laboratory of Audiovisual Communications (LCAV), School of Computer and Communication Sciences, Ecole Polytechnique Federale de Lausanne (EPFL), Lausanne, Switzerland
- Martin Vetterli: Laboratory of Audiovisual Communications (LCAV), School of Computer and Communication Sciences, Ecole Polytechnique Federale de Lausanne (EPFL), Lausanne, Switzerland
2. Florescu D, Coca D. Identification of Linear and Nonlinear Sensory Processing Circuits from Spiking Neuron Data. Neural Comput 2018; 30:670-707. PMID: 29342394; DOI: 10.1162/neco_a_01051.
Abstract
Inferring mathematical models of sensory processing systems directly from input-output observations, while making the fewest assumptions about the model equations and the types of measurements available, is still a major issue in computational neuroscience. This letter introduces two new approaches for identifying sensory circuit models consisting of linear and nonlinear filters in series with spiking neuron models, based only on the sampled analog input to the filter and the recorded spike train output of the spiking neuron. For an ideal integrate-and-fire neuron model, the first algorithm can identify the spiking neuron parameters as well as the structure and parameters of an arbitrary nonlinear filter connected to it. The second algorithm can identify the parameters of the more general leaky integrate-and-fire spiking neuron model, as well as the parameters of an arbitrary linear filter connected to it. Numerical studies involving simulated and real experimental recordings are used to demonstrate the applicability and evaluate the performance of the proposed algorithms.
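The simplest version of the identification idea above can be seen with an ideal (non-leaky) integrate-and-fire neuron: with no leak, the membrane integrates its input between spikes, so each inter-spike interval satisfies "integral of input over the interval = threshold," and driving the neuron with known constant inputs lets us read the threshold off the spike times. The sketch below illustrates only this special case; the paper's algorithms are far more general, and all names and values here are illustrative.

```python
import numpy as np

# Toy identification of an ideal integrate-and-fire threshold from spike
# times: for constant input I, the t-transform gives I * ISI = threshold.
# Illustrative sketch only; parameters are invented.
dt, thresh_true = 1e-4, 2.0
levels = [0.5, 1.0, 1.5, 2.0]     # known constant input levels (one per trial)
estimates = []

for I in levels:
    v, spike_times, t = 0.0, [], 0.0
    while len(spike_times) < 3:
        v += I * dt               # ideal IAF: dv/dt = input
        t += dt
        if v >= thresh_true:
            spike_times.append(t)
            v -= thresh_true      # reset by subtracting the threshold
    isi = spike_times[2] - spike_times[1]
    estimates.append(I * isi)     # t-transform: I * ISI = threshold

thresh_hat = float(np.mean(estimates))
```

The estimate converges to the true threshold up to a discretization error of order `dt`; recovering the leak and an upstream filter, as in the paper, requires richer input ensembles than constants.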
Affiliation(s)
- Dorian Florescu: Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield S1 3JD, UK
- Daniel Coca: Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield S1 3JD, UK
3. Hesse J, Schleimer JH, Schreiber S. Qualitative changes in phase-response curve and synchronization at the saddle-node-loop bifurcation. Phys Rev E 2017; 95:052203. PMID: 28618541; DOI: 10.1103/physreve.95.052203.
Abstract
Prominent changes in neuronal dynamics have previously been attributed to a specific switch in onset bifurcation, the Bogdanov-Takens (BT) point. This study unveils another relevant, and so far underestimated, transition point: the saddle-node-loop bifurcation, which can be reached by varying several parameters, including capacitance, leak conductance, and temperature. This bifurcation turns out to induce even more drastic changes in synchronization than the BT transition. This result arises from a direct effect of the saddle-node-loop bifurcation on the limit cycle and hence on spike dynamics. In contrast, the BT bifurcation exerts its immediate influence upon the subthreshold dynamics and hence relates only indirectly to spiking. We specifically demonstrate that the saddle-node-loop bifurcation (i) occurs ubiquitously in planar neuron models with a saddle-node-on-invariant-cycle onset bifurcation, and (ii) results in a symmetry breaking of the system's phase-response curve. The latter entails an increase in the synchronization range of pulse-coupled oscillators, such as neurons. The derived bifurcation structure is of interest in any system for which a relaxation limit is admissible, such as Josephson junctions and chemical oscillators.
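A phase-response curve (PRC) of the kind discussed above can be measured numerically for any spiking oscillator: deliver a small pulse at a chosen phase of the cycle and record how much the next spike advances. The sketch below does this for a LIF oscillator under constant suprathreshold drive; the model and parameters are invented for illustration and are much simpler than the conductance-based models the paper analyzes.

```python
import numpy as np

# Toy numerical PRC for a LIF oscillator: perturb at a given phase and
# measure the spike advance. Illustrative only; parameters are invented.
dt, tau, I, thresh = 1e-4, 1.0, 1.5, 1.0

def time_to_spike(kick_at=None, eps=0.0):
    """Integrate dv/dt = (I - v)/tau from v = 0 until v crosses threshold."""
    v, t = 0.0, 0.0
    while v < thresh:
        v += dt * (I - v) / tau
        t += dt
        if kick_at is not None and abs(t - kick_at) < dt / 2:
            v += eps                     # small depolarizing pulse
    return t

T = time_to_spike()                      # unperturbed oscillation period
phases = np.linspace(0.1, 0.9, 9)
eps = 0.01
prc = [(T - time_to_spike(kick_at=p * T, eps=eps)) / eps for p in phases]
```

For this integrator-type model the PRC is positive and largest late in the cycle; changes in PRC shape, such as the symmetry breaking described in the abstract, are what alter the synchronization range of pulse-coupled networks.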
Affiliation(s)
- Janina Hesse: Institute for Theoretical Biology, Department of Biology, Humboldt-Universität zu Berlin, Philippstrasse 13, Haus 4, 10115 Berlin, Germany, and Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Jan-Hendrik Schleimer: Institute for Theoretical Biology, Department of Biology, Humboldt-Universität zu Berlin, Philippstrasse 13, Haus 4, 10115 Berlin, Germany, and Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Susanne Schreiber: Institute for Theoretical Biology, Department of Biology, Humboldt-Universität zu Berlin, Philippstrasse 13, Haus 4, 10115 Berlin, Germany, and Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
4. Givon LE, Lazar AA. Neurokernel: An Open Source Platform for Emulating the Fruit Fly Brain. PLoS One 2016; 11:e0146581. PMID: 26751378; PMCID: PMC4709234; DOI: 10.1371/journal.pone.0146581.
Abstract
We have developed an open software platform called Neurokernel for the collaborative development of comprehensive models of the brain of the fruit fly Drosophila melanogaster and for their execution and testing on multiple Graphics Processing Units (GPUs). Neurokernel provides a programming model that capitalizes upon the structural organization of the fly brain into a fixed number of functional modules to distinguish between these modules' local information processing capabilities and the connectivity patterns that link them. By defining mandatory communication interfaces that specify how data is transmitted between models of each of these modules regardless of their internal design, Neurokernel explicitly enables multiple researchers to collaboratively model the fruit fly's entire brain by integrating their independently developed models of its constituent processing units. We demonstrate the power of Neurokernel's model integration by combining independently developed models of the retina and lamina neuropils in the fly's visual system and by demonstrating their neuroinformation processing capability. We also illustrate Neurokernel's ability to take advantage of direct GPU-to-GPU data transfers with benchmarks that demonstrate how Neurokernel's communication performance scales both with the number of interface ports exposed by an emulation's constituent modules and with the total number of modules an emulation comprises.
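The interface idea described above, in which each module declares ports and an emulation only moves data between declared ports rather than reaching into a module's internals, can be sketched in a few lines. The class and method names below are illustrative, not Neurokernel's actual API, and the "retina" and "lamina" computations are trivial stand-ins.

```python
# Toy sketch of a port-based module interface, in the spirit of the
# pattern described above. Not Neurokernel's API; names are illustrative.

class Module:
    def __init__(self, name, in_ports, out_ports):
        self.name = name
        self.inputs = {p: 0.0 for p in in_ports}
        self.outputs = {p: 0.0 for p in out_ports}

    def step(self):
        raise NotImplementedError

class Retina(Module):
    def __init__(self):
        super().__init__("retina", [], ["photoreceptor"])

    def step(self):
        self.outputs["photoreceptor"] += 1.0   # stand-in for phototransduction

class Lamina(Module):
    def __init__(self):
        super().__init__("lamina", ["photoreceptor"], ["lmc"])

    def step(self):
        # Stand-in for a sign-inverting large monopolar cell (LMC).
        self.outputs["lmc"] = -self.inputs["photoreceptor"]

def run(modules, links, steps):
    """links: ((src_module, src_port), (dst_module, dst_port)) pairs."""
    for _ in range(steps):
        for m in modules:
            m.step()                           # local computation only
        for (src, sp), (dst, dp) in links:     # inter-module transfer
            dst.inputs[dp] = src.outputs[sp]

retina, lamina = Retina(), Lamina()
run([retina, lamina],
    [((retina, "photoreceptor"), (lamina, "photoreceptor"))], steps=3)
```

Because modules interact only through the declared ports, either module could be swapped for a more detailed model, or placed on a different GPU, without changing the other; that substitutability is the point of the mandatory-interface design.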
Affiliation(s)
- Lev E. Givon: Department of Electrical Engineering, Columbia University, New York, NY 10027, USA
- Aurel A. Lazar: Department of Electrical Engineering, Columbia University, New York, NY 10027, USA
5. Florescu D, Coca D. Nonlinear system identification of receptive fields from spiking neuron data. BMC Neurosci 2015. PMCID: PMC4697543; DOI: 10.1186/1471-2202-16-s1-p46.
6. Volgushev M, Ilin V, Stevenson IH. Identifying and tracking simulated synaptic inputs from neuronal firing: insights from in vitro experiments. PLoS Comput Biol 2015; 11:e1004167. PMID: 25823000; PMCID: PMC4379067; DOI: 10.1371/journal.pcbi.1004167.
Abstract
Accurately describing synaptic interactions between neurons, and how those interactions change over time, are key challenges for systems neuroscience. Although intracellular electrophysiology is a powerful tool for studying synaptic integration and plasticity, it is limited by the small number of neurons that can be recorded simultaneously in vitro and by the technical difficulty of intracellular recording in vivo. One way around these difficulties may be to use large-scale extracellular recording of spike trains and apply statistical methods to model and infer functional connections between neurons. These techniques have the potential to reveal large-scale connectivity structure based on spike timing alone. However, the interpretation of functional connectivity is often approximate, since only a small fraction of presynaptic inputs are typically observed. Here we use in vitro current injection in layer 2/3 pyramidal neurons to validate methods for inferring functional connectivity in a setting where input to the neuron is controlled. In experiments with partially defined input, we inject a single simulated input with known amplitude on a background of fluctuating noise. In a fully defined input paradigm, we then control the synaptic weights and timing of many simulated presynaptic neurons. By analyzing the firing of neurons in response to these artificial inputs, we ask (1) how does functional connectivity inferred from spikes relate to simulated synaptic input, and (2) what are the limitations of connectivity inference? We find that individual current-based synaptic inputs are detectable over a broad range of amplitudes and conditions. Detectability depends on input amplitude and output firing rate, and excitatory inputs are detected more readily than inhibitory ones. Moreover, as we model increasing numbers of presynaptic inputs, we are able to estimate connection strengths more accurately and to detect the presence of connections more quickly. These results illustrate the possibilities and outline the limits of inferring synaptic input from spikes.
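A basic instrument for the kind of detection question posed above is the cross-correlogram: count postsynaptic spikes at each lag after a presynaptic spike and look for a peak above chance. The sketch below detects a simulated excitatory connection with a known 2 ms delay; it is a simplified stand-in for the model-based methods the paper validates, and all parameters are invented.

```python
import numpy as np

# Toy connection detection via a cross-correlogram. Illustrative only;
# this is not the paper's method, and all parameters are invented.
rng = np.random.default_rng(1)
T = 50000                      # 1 ms bins
pre = rng.random(T) < 0.02     # presynaptic spikes, ~20 Hz
post = rng.random(T) < 0.01    # background postsynaptic firing, ~10 Hz

# Simulated excitatory connection: a pre spike evokes a post spike
# 2 ms later with probability 0.3.
coupled = np.zeros(T, dtype=bool)
coupled[2:] = pre[:-2] & (rng.random(T - 2) < 0.3)
post = post | coupled

def ccg(pre, post, max_lag=10):
    """Post spike counts at each lag (in bins) after a pre spike."""
    return np.array([np.sum(pre[:-lag] & post[lag:])
                     for lag in range(1, max_lag + 1)])

counts = ccg(pre, post)
expected = pre.sum() * post.mean()     # chance coincidences per lag
peak_lag = int(np.argmax(counts)) + 1  # peaks at the synaptic delay
```

As the abstract notes, detectability in practice depends on input amplitude, firing rate, and the many unobserved inputs contributing to the background, which is why controlled in vitro validation is informative.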
Affiliation(s)
- Maxim Volgushev: Department of Psychology, University of Connecticut, Storrs, Connecticut, USA
- Vladimir Ilin: Department of Psychology, University of Connecticut, Storrs, Connecticut, USA
- Ian H. Stevenson: Department of Psychology, University of Connecticut, Storrs, Connecticut, USA
7. Lazar AA, Slutskiy YB, Zhou Y. Massively parallel neural circuits for stereoscopic color vision: encoding, decoding and identification. Neural Netw 2015; 63:254-271. PMID: 25594573; DOI: 10.1016/j.neunet.2014.10.014.
Abstract
Past work demonstrated how monochromatic visual stimuli could be faithfully encoded and decoded under Nyquist-type rate conditions. Color visual stimuli were then traditionally encoded and decoded in multiple separate monochromatic channels. The brain, however, appears to mix information about color channels at the earliest stages of the visual system, including the retina itself. If information about color is mixed and encoded by a common pool of neurons, how can colors be demixed and perceived? We present Color Video Time Encoding Machines (Color Video TEMs) for encoding color visual stimuli that take into account a variety of color representations within a single neural circuit. We then derive a Color Video Time Decoding Machine (Color Video TDM) algorithm for color demixing and reconstruction of color visual scenes from spikes produced by a population of visual neurons. In addition, we formulate Color Video Channel Identification Machines (Color Video CIMs) for functionally identifying color visual processing performed by a spiking neural circuit. Furthermore, we derive a duality between TDMs and CIMs that unifies the two and leads to a general theory of neural information representation for stereoscopic color vision. We provide examples demonstrating that a massively parallel color visual neural circuit can be first identified with arbitrary precision and its spike trains can be subsequently used to reconstruct the encoded stimuli. We argue that evaluation of the functional identification methodology can be effectively and intuitively performed in the stimulus space. In this space, a signal reconstructed from spike trains generated by the identified neural circuit can be compared to the original stimulus.
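The building block underlying time encoding machines of this kind is a single integrate-and-fire encoder: the signal plus a bias is integrated, and a spike is emitted each time the integral reaches a threshold, so consecutive spike times satisfy the "t-transform" (integral of signal plus bias between spikes equals the threshold), which decoding algorithms invert. The sketch below verifies this relation numerically; parameter names and values are illustrative, and this is a single scalar channel rather than the paper's massively parallel color circuit.

```python
import numpy as np

# Toy time encoding with one integrate-and-fire neuron, verifying the
# t-transform on the emitted spike train. Illustrative parameters only.
dt, b, d = 1e-4, 1.5, 0.02
t = np.arange(0, 1.0, dt)
u = 0.5 * np.sin(2 * np.pi * 5 * t)        # bounded test stimulus, |u| < b

integral, spike_times = 0.0, []
for ti, ui in zip(t, u):
    integral += (ui + b) * dt              # integrate signal plus bias
    if integral >= d:
        spike_times.append(ti)
        integral -= d                      # reset by subtracting threshold

# t-transform check on one inter-spike interval:
k = 10
seg = (t >= spike_times[k]) & (t < spike_times[k + 1])
area = np.sum((u[seg] + b) * dt)           # should be close to d
```

Because the condition |u| < b keeps the integrand positive, spikes occur densely enough for faithful recovery; Nyquist-type rate conditions of the sort mentioned above formalize when such a spike train determines the stimulus.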
Affiliation(s)
- Aurel A Lazar: Department of Electrical Engineering, Columbia University, New York, NY, USA
- Yevgeniy B Slutskiy: Department of Electrical Engineering, Columbia University, New York, NY, USA
- Yiyin Zhou: Department of Electrical Engineering, Columbia University, New York, NY, USA
8. Lazar AA, Slutskiy YB. Channel identification machines for multidimensional receptive fields. Front Comput Neurosci 2014; 8:117. PMID: 25309413; PMCID: PMC4176398; DOI: 10.3389/fncom.2014.00117.
Abstract
We present algorithms for identifying multidimensional receptive fields directly from spike trains produced by biophysically-grounded neuron models. We demonstrate that only the projection of a receptive field onto the input stimulus space may be perfectly identified and derive conditions under which this identification is possible. We also provide detailed examples of identification of neural circuits incorporating spatiotemporal and spectrotemporal receptive fields.
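A classical baseline for this kind of problem is the spike-triggered average (STA): with Gaussian white-noise stimulation, averaging the stimulus windows that precede spikes recovers the receptive field up to scale, i.e., its projection onto the stimulus ensemble. The sketch below demonstrates this for a one-dimensional temporal receptive field; it is a simpler estimator than the paper's channel identification machines, and all parameters are invented.

```python
import numpy as np

# Toy receptive field recovery by spike-triggered averaging from a
# simulated linear-nonlinear spiking neuron. Illustrative only.
rng = np.random.default_rng(2)
T, L = 200000, 20
stim = rng.normal(size=T)                              # white-noise stimulus
k = np.arange(L)
rf = np.exp(-k / 5.0) * np.sin(k / 2.0)                # true temporal RF

# Linear filtering, then a rectifying nonlinearity sets the spike rate.
drive = np.convolve(stim, rf)[:T]
rate = np.clip(drive, 0.0, None) * 0.05
spikes = rng.random(T) < rate

# STA: mean stimulus window preceding each spike, time-reversed so that
# sta[j] aligns with rf[j] (the weight on the stimulus j bins in the past).
idx = np.nonzero(spikes)[0]
idx = idx[idx >= L]
sta = np.mean([stim[i - L + 1:i + 1] for i in idx], axis=0)[::-1]

corr = np.corrcoef(sta, rf)[0, 1]                      # close to 1
```

The STA only recovers the projection of the receptive field onto the stimulus space, which is exactly the identifiability limit the abstract makes precise for richer, multidimensional settings.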
Affiliation(s)
- Aurel A Lazar: Bionet Group, Department of Electrical Engineering, Columbia University in the City of New York, New York, NY, USA
- Yevgeniy B Slutskiy: Bionet Group, Department of Electrical Engineering, Columbia University in the City of New York, New York, NY, USA
9. Lazar AA, Zhou Y. Volterra dendritic stimulus processors and biophysical spike generators with intrinsic noise sources. Front Comput Neurosci 2014; 8:95. PMID: 25225477; PMCID: PMC4150400; DOI: 10.3389/fncom.2014.00095.
Abstract
We consider a class of neural circuit models with internal noise sources arising in sensory systems. The basic neuron model in these circuits consists of a dendritic stimulus processor (DSP) cascaded with a biophysical spike generator (BSG). The dendritic stimulus processor is modeled as a set of nonlinear operators that are assumed to have a Volterra series representation. Biophysical point neuron models, such as the Hodgkin-Huxley neuron, are used to model the spike generator. We address the question of how intrinsic noise sources affect the precision of encoding and decoding of sensory stimuli and the functional identification of sensory circuits. We investigate two intrinsic noise sources arising (i) in the active dendritic trees underlying the DSPs, and (ii) in the ion channels of the BSGs. Noise in dendritic stimulus processing arises from the combined effect of variability in synaptic transmission and dendritic interactions. Channel noise arises in the BSGs due to fluctuations in the number of active ion channels. Using a stochastic differential equations formalism, we show that encoding with a neuron model consisting of a nonlinear DSP cascaded with a BSG with intrinsic noise sources can be treated as generalized sampling with noisy measurements. For single-input multi-output neural circuit models with feedforward, feedback and cross-feedback DSPs cascaded with BSGs, we theoretically analyze the effect of noise sources on stimulus decoding. Building on a key duality property, we characterize the effect of noise parameters on the precision of the functional identification of the complete neural circuit with DSP/BSG neuron models. We demonstrate through extensive simulations the effects of noise on encoding stimuli with circuits that include neuron models akin to those commonly seen in sensory systems, e.g., complex cells in V1.
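The phrase "generalized sampling with noisy measurements" has a simple linear-algebra core: a stimulus confined to a low-dimensional subspace is observed only through noisy linear functionals (here standing in for the spike-derived measurements of the paper) and recovered by least squares. The sketch below shows only that core; sizes, names, and the noise model are all invented for illustration.

```python
import numpy as np

# Toy generalized sampling: recover subspace coefficients of a stimulus
# from noisy linear measurements via least squares. Illustrative only.
rng = np.random.default_rng(3)
c_true = rng.normal(size=5)               # coefficients of the stimulus
                                          # in a 5-dim representation space

Phi = rng.normal(size=(40, 5))            # 40 measurement functionals
q = Phi @ c_true + 0.01 * rng.normal(size=40)   # noisy measurements

c_hat, *_ = np.linalg.lstsq(Phi, q, rcond=None)
rel_err = np.linalg.norm(c_hat - c_true) / np.linalg.norm(c_true)
```

Oversampling (40 measurements for 5 unknowns) averages the measurement noise down, which is the mechanism behind the paper's analysis of how noise levels limit decoding and identification precision.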
Affiliation(s)
- Aurel A Lazar: Department of Electrical Engineering, Columbia University, New York, NY, USA
- Yiyin Zhou: Department of Electrical Engineering, Columbia University, New York, NY, USA
10. Spiking neural circuits with dendritic stimulus processors: encoding, decoding, and identification in reproducing kernel Hilbert spaces. J Comput Neurosci 2014; 38:1-24. PMID: 25175020; DOI: 10.1007/s10827-014-0522-8.
Abstract
We present a multi-input multi-output neural circuit architecture for nonlinear processing and encoding of stimuli in the spike domain. In this architecture a bank of dendritic stimulus processors implements nonlinear transformations of multiple temporal or spatio-temporal signals such as spike trains or auditory and visual stimuli in the analog domain. Dendritic stimulus processors may act on both individual stimuli and on groups of stimuli, thereby executing complex computations that arise as a result of interactions between concurrently received signals. The results of the analog-domain computations are then encoded into a multi-dimensional spike train by a population of spiking neurons modeled as nonlinear dynamical systems. We investigate general conditions under which such circuits faithfully represent stimuli and demonstrate algorithms for (i) stimulus recovery, or decoding, and (ii) identification of dendritic stimulus processors from the observed spikes. Taken together, our results demonstrate a fundamental duality between the identification of the dendritic stimulus processor of a single neuron and the decoding of stimuli encoded by a population of neurons with a bank of dendritic stimulus processors. This duality result enabled us to derive lower bounds on the number of experiments to be performed and the total number of spikes that need to be recorded for identifying a neural circuit.
11. Lazar AA, Yeh CH. Functional identification of an antennal lobe DM4 projection neuron of the fruit fly. BMC Neurosci 2014. PMCID: PMC4126497; DOI: 10.1186/1471-2202-15-s1-p5.