1
Lazar AA, Ukani NH, Zhou Y. Sparse Functional Identification of Complex Cells from Spike Times and the Decoding of Visual Stimuli. Journal of Mathematical Neuroscience 2018; 8:2. [PMID: 29349664] [PMCID: PMC5773573] [DOI: 10.1186/s13408-017-0057-1] [Received: 06/19/2017] [Accepted: 12/29/2017]
Abstract
We investigate the sparse functional identification of complex cells and the decoding of spatio-temporal visual stimuli encoded by an ensemble of complex cells. The reconstruction algorithm is formulated as a rank minimization problem that significantly reduces the number of sampling measurements (spikes) required for decoding. We also establish the duality between sparse decoding and functional identification and provide algorithms for identification of low-rank dendritic stimulus processors. The duality enables us to efficiently evaluate our functional identification algorithms by reconstructing novel stimuli in the input space. Finally, we demonstrate that our identification algorithms substantially outperform the generalized quadratic model, the nonlinear input model, and the widely used spike-triggered covariance algorithm.
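The rank-minimization idea can be illustrated with a generic low-rank recovery toy (a sketch, not the authors' algorithm): a rank-2 matrix, standing in for the stimulus representation, is recovered from a subset of its entries by alternating projections, with the rank assumed known for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 20, 2
X_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-2
mask = rng.random((n, n)) < 0.6       # ~60% of entries "measured"

X = np.zeros((n, n))
for _ in range(500):
    X = np.where(mask, X_true, X)     # enforce consistency with measurements
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X = (U[:, :r] * s[:r]) @ Vt[:r]   # project onto the set of rank-2 matrices

rel_err = np.linalg.norm(X - X_true) / np.linalg.norm(X_true)
```

The paper's decoder instead minimizes rank over spike-derived measurements; this toy only conveys why low rank lets far fewer measurements suffice than the ambient dimension suggests.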
Affiliation(s)
- Aurel A. Lazar
- Department of Electrical Engineering, Columbia University, 500 W 120th Street, Mudd 1300, New York, NY 10027 USA
- Nikul H. Ukani
- Department of Electrical Engineering, Columbia University, 500 W 120th Street, Mudd 1300, New York, NY 10027 USA
- Yiyin Zhou
- Department of Electrical Engineering, Columbia University, 500 W 120th Street, Mudd 1300, New York, NY 10027 USA
2
Geng K, Marmarelis VZ. Methodology of Recurrent Laguerre-Volterra Network for Modeling Nonlinear Dynamic Systems. IEEE Transactions on Neural Networks and Learning Systems 2017; 28:2196-2208. [PMID: 27352401] [PMCID: PMC5596897] [DOI: 10.1109/tnnls.2016.2581141]
Abstract
We introduce a general modeling approach for dynamic nonlinear systems that uses a variant of the simulated annealing algorithm to train the Laguerre-Volterra network (LVN), overcoming local-minima and convergence problems, and that employs an l1-regularized pruning technique to achieve sparse LVN representations. We tested this new approach with computer-simulated systems and extended it to autoregressive sparse LVN (ASLVN) model structures that are suitable for input-output modeling of nonlinear systems that exhibit transitions in dynamic states, such as the Hodgkin-Huxley (H-H) equations of neuronal firing. Application of the proposed ASLVN to the H-H equations yields a more parsimonious input-output model with improved predictive capability that is amenable to more insightful physiological/biological interpretation.
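The annealing component can be sketched generically; the 1-D toy objective below is hypothetical and merely stands in for the LVN training loss:

```python
import numpy as np

def objective(w):
    # toy 1-D loss with many local minima; global minimum near w = -0.3
    return w ** 2 + 2.0 * np.sin(5.0 * w) + 2.0

rng = np.random.default_rng(1)
w, T = 4.0, 2.0                                # start far away, hot
best_w, best_f = w, objective(w)
for _ in range(2000):
    cand = w + rng.normal(scale=0.5)           # random perturbation
    df = objective(cand) - objective(w)
    # accept all downhill moves; uphill moves with Boltzmann probability
    if df < 0 or rng.random() < np.exp(-df / T):
        w = cand
    T *= 0.997                                 # geometric cooling schedule
    if objective(w) < best_f:
        best_w, best_f = w, objective(w)
```

The uphill-acceptance step is what lets the search escape the local minima that trap plain gradient descent; the paper applies the same principle to LVN weights rather than a scalar.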
3
Hesse J, Schleimer JH, Schreiber S. Qualitative changes in phase-response curve and synchronization at the saddle-node-loop bifurcation. Phys Rev E 2017; 95:052203. [PMID: 28618541] [DOI: 10.1103/physreve.95.052203] [Received: 06/24/2016]
Abstract
Prominent changes in neuronal dynamics have previously been attributed to a specific switch in onset bifurcation, the Bogdanov-Takens (BT) point. This study unveils another relevant and so far underestimated transition point: the saddle-node-loop bifurcation, which can be reached by varying several parameters, including capacitance, leak conductance, and temperature. This bifurcation turns out to induce even more drastic changes in synchronization than the BT transition. This result arises from a direct effect of the saddle-node-loop bifurcation on the limit cycle and hence spike dynamics. In contrast, the BT bifurcation exerts its immediate influence upon the subthreshold dynamics and hence only indirectly relates to spiking. We specifically demonstrate that the saddle-node-loop bifurcation (i) ubiquitously occurs in planar neuron models with a saddle-node-on-invariant-cycle onset bifurcation, and (ii) results in a symmetry breaking of the system's phase-response curve. The latter entails an increase in synchronization range in pulse-coupled oscillators, such as neurons. The derived bifurcation structure is of interest in any system for which a relaxation limit is admissible, such as Josephson junctions and chemical oscillators.
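The locking-range argument can be checked back-of-envelope with illustrative PRC shapes (assumed forms, not the curves derived in the paper): for a pulse-coupled phase oscillator, phi -> phi + delta + eps*Z(phi) (mod 1), a 1:1 locked state exists while the detuning delta can be cancelled by eps*Z, so the locking range has width eps*(max Z - min Z).

```python
import numpy as np

phi = np.linspace(0.0, 1.0, 2001)    # phase over one cycle
eps = 0.05                            # coupling strength

Z_snic = 1.0 - np.cos(2.0 * np.pi * phi)                     # symmetric PRC
Z_broken = Z_snic * (1.0 + 0.8 * np.sin(2.0 * np.pi * phi))  # skewed variant

# width of the 1:1 locking range, eps * (max Z - min Z)
width_snic = eps * (Z_snic.max() - Z_snic.min())
width_broken = eps * (Z_broken.max() - Z_broken.min())
```

For these hypothetical shapes the symmetry-broken PRC spans a larger range, hence a wider locking window, qualitatively mirroring the synchronization result above.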
Affiliation(s)
- Janina Hesse
- Institute for Theoretical Biology, Department of Biology, Humboldt-Universität zu Berlin, Philippstrasse 13, Haus 4, 10115 Berlin, Germany and Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Jan-Hendrik Schleimer
- Institute for Theoretical Biology, Department of Biology, Humboldt-Universität zu Berlin, Philippstrasse 13, Haus 4, 10115 Berlin, Germany and Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Susanne Schreiber
- Institute for Theoretical Biology, Department of Biology, Humboldt-Universität zu Berlin, Philippstrasse 13, Haus 4, 10115 Berlin, Germany and Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
4
Lazar AA, Zhou Y. Volterra dendritic stimulus processors and biophysical spike generators with intrinsic noise sources. Front Comput Neurosci 2014; 8:95. [PMID: 25225477] [PMCID: PMC4150400] [DOI: 10.3389/fncom.2014.00095] [Received: 04/22/2014] [Accepted: 07/23/2014]
Abstract
We consider a class of neural circuit models with internal noise sources arising in sensory systems. The basic neuron model in these circuits consists of a dendritic stimulus processor (DSP) cascaded with a biophysical spike generator (BSG). The dendritic stimulus processor is modeled as a set of nonlinear operators that are assumed to have a Volterra series representation. Biophysical point neuron models, such as the Hodgkin-Huxley neuron, are used to model the spike generator. We address the question of how intrinsic noise sources affect the precision in encoding and decoding of sensory stimuli and the functional identification of the underlying sensory circuits. We investigate two intrinsic noise sources arising (i) in the active dendritic trees underlying the DSPs, and (ii) in the ion channels of the BSGs. Noise in dendritic stimulus processing arises from a combined effect of variability in synaptic transmission and dendritic interactions. Channel noise arises in the BSGs due to fluctuations in the number of active ion channels. Using a stochastic differential equation formalism, we show that encoding with a neuron model consisting of a nonlinear DSP cascaded with a BSG with intrinsic noise sources can be treated as generalized sampling with noisy measurements. For single-input multi-output neural circuit models with feedforward, feedback, and cross-feedback DSPs cascaded with BSGs, we theoretically analyze the effect of noise sources on stimulus decoding. Building on a key duality property, we characterize the effect of noise parameters on the precision of the functional identification of the complete neural circuit with DSP/BSG neuron models. We demonstrate through extensive simulations the effects of noise on encoding stimuli with circuits that include neuron models that are akin to those commonly seen in sensory systems, e.g., complex cells in V1.
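A second-order Volterra DSP of the kind assumed here can be sketched as follows; the kernel shapes are arbitrary illustrations, not fitted to any cell:

```python
import numpy as np

rng = np.random.default_rng(2)
M = 16                                # kernel memory, in samples
h1 = np.exp(-np.arange(M) / 4.0)      # first-order (linear) kernel
h2 = 0.05 * np.outer(h1, h1)          # separable second-order kernel

def volterra2(u, h1, h2):
    """y[t] = sum_k h1[k]*u[t-k] + sum_{k,l} h2[k,l]*u[t-k]*u[t-l]."""
    M = len(h1)
    y = np.zeros(len(u))
    for t in range(len(u)):
        past = u[max(0, t - M + 1): t + 1][::-1]   # u[t], u[t-1], ...
        m = len(past)
        y[t] = h1[:m] @ past + past @ h2[:m, :m] @ past
    return y

u = rng.standard_normal(200)          # input stimulus
v = volterra2(u, h1, h2)              # dendritic current fed to the BSG
```

Note the characteristic scaling: doubling the input doubles the first-order term but quadruples the second-order term, which is what distinguishes a Volterra DSP from a purely linear receptive field.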
Affiliation(s)
- Aurel A Lazar
- Department of Electrical Engineering, Columbia University, New York, NY, USA
- Yiyin Zhou
- Department of Electrical Engineering, Columbia University, New York, NY, USA
5
Spiking neural circuits with dendritic stimulus processors: encoding, decoding, and identification in reproducing kernel Hilbert spaces. J Comput Neurosci 2014; 38:1-24. [PMID: 25175020] [DOI: 10.1007/s10827-014-0522-8] [Received: 10/14/2013] [Revised: 06/03/2014] [Accepted: 07/25/2014]
Abstract
We present a multi-input multi-output neural circuit architecture for nonlinear processing and encoding of stimuli in the spike domain. In this architecture a bank of dendritic stimulus processors implements nonlinear transformations of multiple temporal or spatio-temporal signals such as spike trains or auditory and visual stimuli in the analog domain. Dendritic stimulus processors may act on both individual stimuli and on groups of stimuli, thereby executing complex computations that arise as a result of interactions between concurrently received signals. The results of the analog-domain computations are then encoded into a multi-dimensional spike train by a population of spiking neurons modeled as nonlinear dynamical systems. We investigate general conditions under which such circuits faithfully represent stimuli and demonstrate algorithms for (i) stimulus recovery, or decoding, and (ii) identification of dendritic stimulus processors from the observed spikes. Taken together, our results demonstrate a fundamental duality between the identification of the dendritic stimulus processor of a single neuron and the decoding of stimuli encoded by a population of neurons with a bank of dendritic stimulus processors. This duality enables us to derive lower bounds on the number of experiments to be performed and the total number of spikes that need to be recorded for identifying a neural circuit.
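The spike-domain encoding step can be made concrete with an ideal integrate-and-fire (IAF) encoder, whose t-transform turns each interspike interval into one linear measurement of the processed stimulus; all parameter values below are illustrative.

```python
import numpy as np

dt = 1e-4
t = np.arange(0.0, 1.0, dt)
v = 0.4 * np.sin(2 * np.pi * 3 * t) + 0.2 * np.sin(2 * np.pi * 7 * t)

b, kappa, delta = 1.0, 1.0, 0.02      # bias, integration constant, threshold
acc, spikes = 0.0, []
for i in range(len(t)):
    acc += (b + v[i]) * dt / kappa    # membrane integration
    if acc >= delta:                  # threshold crossing -> spike
        spikes.append(t[i])
        acc -= delta                  # reset by subtraction

spikes = np.array(spikes)
isi = np.diff(spikes)

# t-transform check on one interval:
#   integral of v over (t_k, t_{k+1}] = kappa*delta - b*(t_{k+1} - t_k)
k = 5
window = (t > spikes[k]) & (t <= spikes[k + 1])
integral_v = v[window].sum() * dt
rhs = kappa * delta - b * isi[k]
```

Each interval therefore constrains the stimulus linearly, which is why decoding and identification both reduce to solving systems of such measurement equations.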
6
Lazar AA, Yeh CH. Functional identification of an antennal lobe DM4 projection neuron of the fruit fly. BMC Neurosci 2014. [PMCID: PMC4126497] [DOI: 10.1186/1471-2202-15-s1-p5]
7
Sengupta B, Laughlin SB, Niven JE. Consequences of converting graded to action potentials upon neural information coding and energy efficiency. PLoS Comput Biol 2014; 10:e1003439. [PMID: 24465197] [PMCID: PMC3900385] [DOI: 10.1371/journal.pcbi.1003439] [Received: 03/25/2013] [Accepted: 12/02/2013]
Abstract
Information is encoded in neural circuits using both graded and action potentials, converting between them within single neurons and successive processing layers. This conversion is accompanied by information loss and a drop in energy efficiency. We investigate the biophysical causes of this loss of information and efficiency by comparing spiking neuron models, containing stochastic voltage-gated Na⁺ and K⁺ channels, with generator potential and graded potential models lacking voltage-gated Na⁺ channels. We identify three causes of information loss in the generator potential that are the by-product of action potential generation: (1) the voltage-gated Na⁺ channels necessary for action potential generation increase intrinsic noise, (2) they introduce non-linearities, and (3) the finite duration of the action potential creates a 'footprint' in the generator potential that obscures incoming signals. These three processes reduce information rates by ∼50% in generator potentials, to ∼3 times that of spike trains. Both generator potentials and graded potentials consume almost an order of magnitude less energy per second than spike trains. Because of the lower information rates of generator potentials, they are substantially less energy efficient than graded potentials. However, both are an order of magnitude more efficient than spike trains due to the higher energy costs and low information content of spikes, emphasizing that there is a two-fold cost of converting analogue to digital: information loss and cost inflation.
Affiliation(s)
- Biswa Sengupta
- Wellcome Trust Centre for Neuroimaging, University College London, London, United Kingdom
- Centre for Neuroscience, Indian Institute of Science, Bangalore, India
- Jeremy Edward Niven
- School of Life Sciences and Centre for Computational Neuroscience and Robotics, University of Sussex, Falmer, Brighton, United Kingdom
8
Lazar AA, Slutskiy YB. Functional identification of spike-processing neural circuits. Neural Comput 2013; 26:264-305. [PMID: 24206386] [DOI: 10.1162/neco_a_00543]
Abstract
We introduce a novel approach for a complete functional identification of biophysical spike-processing neural circuits. The circuits considered accept multidimensional spike trains as their input and comprise a multitude of temporal receptive fields and conductance-based models of action potential generation. Each temporal receptive field describes the spatiotemporal contribution of all synapses between any two neurons and incorporates the (passive) processing carried out by the dendritic tree. The aggregate dendritic current produced by a multitude of temporal receptive fields is encoded into a sequence of action potentials by a spike generator modeled as a nonlinear dynamical system. Our approach builds on the observation that during any experiment, an entire neural circuit, including its receptive fields and biophysical spike generators, is projected onto the space of stimuli used to identify the circuit. Employing the reproducing kernel Hilbert space (RKHS) of trigonometric polynomials to describe input stimuli, we quantitatively describe the relationship between underlying circuit parameters and their projections. We also derive experimental conditions under which these projections converge to the true parameters. In doing so, we achieve the mathematical tractability needed to characterize the biophysical spike generator and identify the multitude of receptive fields. The algorithms obviate the need to repeat experiments in order to compute the neurons' rate of response, rendering our methodology of interest to both experimental and theoretical neuroscientists.
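The reproducing property of the trigonometric-polynomial RKHS used above can be verified numerically with a Dirichlet kernel; the order and evaluation point below are illustrative constants.

```python
import numpy as np

M, T = 5, 1.0
ms = np.arange(-M, M + 1)
rng = np.random.default_rng(3)
c = rng.standard_normal(2 * M + 1) + 1j * rng.standard_normal(2 * M + 1)

def u(t):
    # a trigonometric polynomial of order M with coefficients c
    t = np.atleast_1d(np.asarray(t, dtype=float))
    return np.exp(2j * np.pi * np.outer(t, ms) / T) @ c

def K(s, t0):
    # Dirichlet reproducing kernel K(s, t0); real-valued by m <-> -m symmetry
    s = np.atleast_1d(np.asarray(s, dtype=float))
    return np.exp(2j * np.pi * np.outer(t0 - s, ms) / T).sum(axis=1).real / T

s = np.linspace(0.0, T, 8000, endpoint=False)
t0 = 0.37
ip = (u(s) * K(s, t0)).sum() * (T / len(s))   # L2 inner product <u, K(., t0)>
```

The inner product against the kernel reproduces the point evaluation u(t0) to machine precision, which is the property that makes spike-derived inner-product measurements tractable in this space.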
Affiliation(s)
- Aurel A Lazar
- Department of Electrical Engineering, Columbia University, New York, NY 10027, U.S.A.
9
The power of connectivity: identity preserving transformations on visual streams in the spike domain. Neural Netw 2013; 44:22-35. [PMID: 23545540] [DOI: 10.1016/j.neunet.2013.02.013] [Received: 07/09/2012] [Revised: 02/07/2013] [Accepted: 02/27/2013]
Abstract
We investigate neural architectures for identity preserving transformations (IPTs) on visual stimuli in the spike domain. The stimuli are encoded with a population of spiking neurons; the resulting spikes are processed and finally decoded. A number of IPTs are demonstrated, including faithful stimulus recovery, as well as simple transformations of the original visual stimulus such as translations, rotations, and zooming. We show that if the set of receptive fields satisfies certain symmetry properties, then IPTs can easily be realized and, additionally, the same basic stimulus decoding algorithm can be employed to recover the transformed input stimulus. Using group theoretic methods we advance two different neural encoding architectures and discuss the realization of exact and approximate IPTs. These are realized in the spike domain processing block by a "switching matrix" that regulates the input/output connectivity between the stimulus encoding and decoding blocks. For example, for a particular connectivity setting of the switching matrix, the original stimulus is faithfully recovered. For other settings, translations, rotations and dilations (or combinations of these operations) of the original video stream are obtained. We evaluate our theoretical derivations through extensive simulations on natural video scenes, and discuss implications of our results on the problem of invariant object recognition in the spike domain.
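A one-dimensional toy version of the switching-matrix idea, with receptive fields taken as circular shifts of a single filter (sizes and filter shape are arbitrary illustrations): translating the stimulus is equivalent to cyclically permuting the measurement vector before running the same decoder.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 32
g = np.exp(-0.5 * (np.arange(n) - n / 2) ** 2)   # one receptive field
G = np.stack([np.roll(g, i) for i in range(n)])  # circulant filter bank

u = rng.standard_normal(n)          # stimulus
y = G @ u                           # measurements, one per receptive field

k = 5                               # desired translation
y_switched = np.roll(y, k)          # "switching matrix" = cyclic permutation
u_translated = np.linalg.solve(G, y_switched)   # same decoder as for y
```

Because the filter bank is shift-symmetric, permuting the measurement channels commutes with translating the input, so the unchanged decoder outputs the shifted stimulus.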
10
Channel identification machines. Computational Intelligence and Neuroscience 2012; 2012:209590. [PMID: 23227035] [PMCID: PMC3505648] [DOI: 10.1155/2012/209590] [Received: 03/08/2012] [Revised: 06/29/2012] [Accepted: 07/16/2012]
Abstract
We present a formal methodology for identifying a channel in a system consisting of a communication channel in cascade with an asynchronous sampler. The channel is modeled as a multidimensional filter, while models of asynchronous samplers are taken from neuroscience and communications and include integrate-and-fire neurons, asynchronous sigma/delta modulators, and general oscillators in cascade with zero-crossing detectors. We devise channel identification algorithms that recover, loss-free, a projection of the filter(s) onto the space of input signals, for both scalar and vector-valued test signals. The test signals are modeled as elements of a reproducing kernel Hilbert space (RKHS) with a Dirichlet kernel. Under appropriate limiting conditions on the bandwidth and the order of the test signal space, the filter projection converges to the impulse response of the filter. We show that our results hold for a wide class of RKHSs, including the space of finite-energy bandlimited signals. We also extend our channel identification results to noisy circuits.
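A compact sketch of this pipeline under stated assumptions: a filter expanded in a deliberately tiny trigonometric basis, an ideal integrate-and-fire sampler, and least squares on the t-transform measurements (all parameter values are illustrative, and the filter coefficients are invented for the demonstration).

```python
import numpy as np

dt, T = 1e-4, 1.0
t = np.arange(0.0, T, dt)
M = 3
# real trigonometric basis (constant, cosines, sines) on [0, T)
B = np.column_stack(
    [np.ones_like(t)]
    + [np.cos(2 * np.pi * m * t / T) for m in range(1, M + 1)]
    + [np.sin(2 * np.pi * m * t / T) for m in range(1, M + 1)]
)

a_true = np.array([0.1, 0.25, -0.2, 0.15, -0.1, 0.2, -0.15])  # unknown filter
u = B.sum(axis=1)                          # known periodic test stimulus

def circ_conv(x, y):
    # periodic convolution approximating the continuous-time integral
    return np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(y)) * dt

V = np.column_stack([circ_conv(u, B[:, j]) for j in range(2 * M + 1)])
v = V @ a_true                             # channel (filter) output

# ideal integrate-and-fire sampler
b, kappa, delta = 1.0, 1.0, 0.03
acc, spike_idx = 0.0, []
for i in range(len(t)):
    acc += (b + v[i]) * dt / kappa
    if acc >= delta:
        spike_idx.append(i)
        acc -= delta

# t-transform: each interspike interval is one linear measurement of a_true
q, Phi = [], []
for k in range(len(spike_idx) - 1):
    i0, i1 = spike_idx[k], spike_idx[k + 1]
    q.append(kappa * delta - b * (i1 - i0) * dt)
    Phi.append(V[i0 + 1:i1 + 1].sum(axis=0) * dt)

a_hat = np.linalg.lstsq(np.array(Phi), np.array(q), rcond=None)[0]
```

With enough spikes per period, the least-squares solve recovers the projection of the filter onto the test-signal space up to the quantization error of the discrete simulation.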
11
Lazar AA, Slutskiy YB. Identifying dendritic processing in a [Filter]-[Hodgkin Huxley] circuit. BMC Neurosci 2011. [PMCID: PMC3240419] [DOI: 10.1186/1471-2202-12-s1-p306]
14
Ballard DH, Jehee JFM. Dual roles for spike signaling in cortical neural populations. Front Comput Neurosci 2011; 5:22. [PMID: 21687798] [PMCID: PMC3108387] [DOI: 10.3389/fncom.2011.00022] [Received: 01/07/2011] [Accepted: 05/05/2011]
Abstract
A prominent feature of signaling in cortical neurons is that of randomness in the action potential. The output of a typical pyramidal cell can be well fit with a Poisson model, and variations in the Poisson rate repeatedly have been shown to be correlated with stimuli. However, while the rate provides a very useful characterization of neural spike data, it may not be the most fundamental description of the signaling code. Recent data showing multi-cell action potential correlations in the γ frequency range, together with spike-timing-dependent plasticity, are spurring a re-examination of the classical model, since precise timing codes imply that the generation of spikes is essentially deterministic. Could the observed Poisson randomness and timing determinism reflect two separate modes of communication, or do they somehow derive from a single process? We investigate in a timing-based model whether the apparent incompatibility between these probabilistic and deterministic observations may be resolved by examining how spikes could be used in the underlying neural circuits. The crucial component of this model draws on dual roles for spike signaling. In learning receptive fields from ensembles of inputs, spikes need to behave probabilistically, whereas for fast signaling of individual stimuli, the spikes need to behave deterministically. Our simulations show that this combination is possible if deterministic signals using γ latency coding are probabilistically routed through different members of a cortical cell population at different times. This model exhibits standard features characteristic of Poisson models such as orientation tuning and exponential interval histograms. In addition, it makes testable predictions that follow from the γ latency coding.
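The routing idea can be caricatured in a few lines (numbers are illustrative, not taken from the paper): deterministic latency-coded spikes assigned to a random cell each γ cycle yield near-Poisson single-cell statistics.

```python
import numpy as np

rng = np.random.default_rng(6)
n_cells, n_cycles, cycle = 10, 200_000, 0.025   # 40 Hz gamma rhythm
latency = 0.007                                  # fixed latency code (s)

owner = rng.integers(0, n_cells, n_cycles)       # probabilistic routing
cell0_times = np.nonzero(owner == 0)[0] * cycle + latency
isi = np.diff(cell0_times)

cv = isi.std() / isi.mean()   # a Poisson process would give CV = 1
```

The per-cell interval distribution is geometric in cycle counts, hence the near-unity coefficient of variation and roughly exponential interval histogram, even though the within-cycle latency is fully deterministic.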
Affiliation(s)
- Dana H Ballard
- Department of Computer Science, University of Texas at Austin, Austin, TX, USA
15
Lazar AA, Turetsky RJ. Encoding auditory scenes with a population of Hodgkin-Huxley neurons. BMC Neurosci 2010. [PMCID: PMC3090873] [DOI: 10.1186/1471-2202-11-s1-p167]