1. Wodeyar A, Marshall FA, Chu CJ, Eden UT, Kramer MA. Different Methods to Estimate the Phase of Neural Rhythms Agree But Only During Times of Low Uncertainty. eNeuro 2023; 10:ENEURO.0507-22.2023. PMID: 37833061; PMCID: PMC10626504; DOI: 10.1523/eneuro.0507-22.2023.
Abstract
Rhythms are a common feature of brain activity. Across different types of rhythms, the phase has been proposed to have functional consequences, thus requiring its accurate specification from noisy data. Phase is conventionally specified using techniques that presume a frequency band-limited rhythm. However, in practice, observed brain rhythms are typically nonsinusoidal and amplitude modulated. How these features impact methods to estimate phase remains unclear. To address this, we consider three phase estimation methods, each with different underlying assumptions about the rhythm. We apply these methods to rhythms simulated with different generative mechanisms and demonstrate inconsistency in phase estimates across the different methods. We propose two improvements to the practice of phase estimation: (1) estimating confidence in the phase estimate, and (2) examining the consistency of phase estimates between two (or more) methods.
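As a minimal, self-contained illustration of the two recommendations above (not the authors' code; the synthetic signal, sampling rate, and filter bands are arbitrary assumptions), the sketch below estimates phase with two band-pass/Hilbert pipelines and uses their circular disagreement as a crude proxy for estimation confidence:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass_phase(x, fs, lo, hi, order=4):
    """Estimate instantaneous phase via band-pass filtering + Hilbert transform."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.angle(hilbert(filtfilt(b, a, x)))

# Synthetic amplitude-modulated 6 Hz rhythm in white noise (illustrative only).
rng = np.random.default_rng(0)
fs, T = 500.0, 20.0
t = np.arange(0, T, 1 / fs)
rhythm = (1 + 0.5 * np.sin(2 * np.pi * 0.2 * t)) * np.sin(2 * np.pi * 6 * t)
x = rhythm + 0.8 * rng.standard_normal(t.size)

# Two estimators that differ only in their assumed frequency band.
phase_narrow = bandpass_phase(x, fs, 5, 7)
phase_wide = bandpass_phase(x, fs, 4, 10)

# Circular difference between the two estimates; small values mean the methods agree.
dphi = np.angle(np.exp(1j * (phase_narrow - phase_wide)))
agreement = np.abs(np.mean(np.exp(1j * dphi)))   # 1 = perfect agreement
print(f"mean |circular difference| = {np.mean(np.abs(dphi)):.3f} rad, "
      f"agreement index = {agreement:.3f}")
```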
Affiliation(s)
- Anirudh Wodeyar: Department of Mathematics & Statistics, Boston University, Boston, MA 02215
- Catherine J Chu: Department of Neurology, Massachusetts General Hospital, Boston, MA 02215; Harvard Medical School, Boston, MA 02114
- Uri T Eden: Department of Mathematics & Statistics, Boston University, Boston, MA 02215; Center for Systems Neuroscience, Boston University, Boston, MA 02215
- Mark A Kramer: Department of Mathematics & Statistics, Boston University, Boston, MA 02215; Center for Systems Neuroscience, Boston University, Boston, MA 02215
2. Pérez-Cervera A, Gutkin B, Thomas PJ, Lindner B. A universal description of stochastic oscillators. Proc Natl Acad Sci U S A 2023; 120:e2303222120. PMID: 37432992; PMCID: PMC10629544; DOI: 10.1073/pnas.2303222120.
Abstract
Many systems in physics, chemistry, and biology exhibit oscillations with a pronounced random component. Such stochastic oscillations can emerge via different mechanisms, for example, linear dynamics of a stable focus with fluctuations, limit-cycle systems perturbed by noise, or excitable systems in which random inputs lead to a train of pulses. Despite their diverse origins, the phenomenology of random oscillations can be strikingly similar. Here, we introduce a nonlinear transformation of stochastic oscillators to a complex-valued function Q*_λ1(x) that greatly simplifies and unifies the mathematical description of the oscillator's spontaneous activity, its response to an external time-dependent perturbation, and the correlation statistics of different oscillators that are weakly coupled. The function Q*_λ1(x) is the eigenfunction of the Kolmogorov backward operator with the least negative (but nonvanishing) eigenvalue λ1 = μ1 + iω1. The resulting power spectrum of the complex-valued function is exactly given by a Lorentz spectrum with peak frequency ω1 and half-width μ1; its susceptibility with respect to a weak external forcing is given by a simple one-pole filter, centered around ω1; and the cross-spectrum between two coupled oscillators can be easily expressed by a combination of the spontaneous power spectra of the uncoupled systems and their susceptibilities. Our approach makes qualitatively different stochastic oscillators comparable, provides simple characteristics for the coherence of the random oscillation, and gives a framework for the description of weakly coupled oscillators.
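A compact symbolic restatement may help; the eigenvalue relation is taken from the abstract, while the sign convention, normalization, and the correlation-function step are standard assumptions rather than quotations from the paper:

```latex
% Q^*_{\lambda_1} is the backward (Kolmogorov / stochastic Koopman) eigenfunction named above.
\mathcal{L}^{\dagger} Q^{*}_{\lambda_1}(\mathbf{x}) = \lambda_1\, Q^{*}_{\lambda_1}(\mathbf{x}),
\qquad \lambda_1 = \mu_1 + i\,\omega_1 .

% Evaluating it along a stationary trajectory, y(t) = Q^{*}_{\lambda_1}(\mathbf{x}(t)),
% gives an exponentially damped oscillatory autocorrelation, hence a Lorentzian spectrum
% peaked at \omega_1 with half-width |\mu_1|:
\big\langle y(t+\tau)\, y^{*}(t) \big\rangle \;\propto\; e^{-|\mu_1|\,|\tau| \,+\, i\,\omega_1 \tau}
\;\;\Longrightarrow\;\;
S_{yy}(\omega) \;\propto\; \frac{2\,|\mu_1|}{\mu_1^{2} + (\omega - \omega_1)^{2}} .
```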
Affiliation(s)
- Alberto Pérez-Cervera: Department of Applied Mathematics, Instituto de Matemática Interdisciplinar, Universidad Complutense de Madrid, 28040 Madrid, Spain
- Boris Gutkin: Group for Neural Theory, LNC2 INSERM U960, Département d’Etudes Cognitives, Ecole Normale Supérieure - Paris Science Letters University, 75005 Paris, France
- Peter J. Thomas: Department of Mathematics, Applied Mathematics, and Statistics, Case Western Reserve University, Cleveland, OH 44106
- Benjamin Lindner: Bernstein Center for Computational Neuroscience Berlin, 10115 Berlin, Germany; Department of Physics, Humboldt Universität zu Berlin, D-12489 Berlin, Germany
3. Heteroclinic cycling and extinction in May-Leonard models with demographic stochasticity. J Math Biol 2023; 86:30. PMID: 36637504; PMCID: PMC9839821; DOI: 10.1007/s00285-022-01859-4.
Abstract
May and Leonard (SIAM J Appl Math 29:243-253, 1975) introduced a three-species Lotka-Volterra type population model that exhibits heteroclinic cycling. Rather than producing a periodic limit cycle, the trajectory takes longer and longer to complete each "cycle", passing closer and closer to unstable fixed points in which one population dominates and the others approach zero. Aperiodic heteroclinic dynamics have subsequently been studied in ecological systems (side-blotched lizards; colicinogenic Escherichia coli), in the immune system, in neural information processing models ("winnerless competition"), and in models of neural central pattern generators. Yet as May and Leonard observed "Biologically, the behavior (produced by the model) is nonsense. Once it is conceded that the variables represent animals, and therefore cannot fall below unity, it is clear that the system will, after a few cycles, converge on some single population, extinguishing the other two." Here, we explore different ways of introducing discrete stochastic dynamics based on May and Leonard's ODE model, with application to ecological population dynamics, and to a neuromotor central pattern generator system. We study examples of several quantitatively distinct asymptotic behaviors, including total extinction of all species, extinction to a single species, and persistent cyclic dominance with finite mean cycle length.
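One concrete way to introduce the demographic stochasticity discussed here is to treat births and competitive deaths as discrete events and simulate them with Gillespie's algorithm. The sketch below is an illustration under assumed rate definitions (carrying capacity K, cyclic competition coefficients alpha and beta); it is not necessarily one of the specific discretizations analyzed in the paper:

```python
import numpy as np

def gillespie_may_leonard(n0=(60, 30, 10), K=100, alpha=1.3, beta=0.7,
                          t_max=200.0, seed=1):
    """Stochastic May-Leonard competition: births n_i -> n_i + 1 at rate n_i,
    deaths n_i -> n_i - 1 at rate n_i * (n_i + alpha*n_{i+1} + beta*n_{i-1}) / K."""
    rng = np.random.default_rng(seed)
    n = np.array(n0, dtype=int)
    t, times, traj = 0.0, [0.0], [n.copy()]
    while t < t_max and n.sum() > 0:
        birth = n.astype(float)                                       # n_i -> n_i + 1
        death = n * (n + alpha * np.roll(n, -1) + beta * np.roll(n, 1)) / K
        rates = np.concatenate([birth, death])
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        event = rng.choice(6, p=rates / total)
        if event < 3:
            n[event] += 1
        else:
            n[event - 3] -= 1
        times.append(t)
        traj.append(n.copy())
    return np.array(times), np.array(traj)

times, traj = gillespie_may_leonard()
print("final populations:", traj[-1], "at t =", round(times[-1], 1))
```

Depending on K and the competition coefficients, runs end in total extinction, fixation of a single species, or long transients of cyclic dominance, which is the distinction the abstract draws.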
4. The Effects of Background Noise on a Biophysical Model of Olfactory Bulb Mitral Cells. Bull Math Biol 2022; 84:107. PMID: 36008641; DOI: 10.1007/s11538-022-01066-8.
Abstract
The spiking activity of mitral cells (MC) in the olfactory bulb is a key attribute in olfactory sensory information processing to downstream cortical areas. A more detailed understanding of the modulation of MC spike statistics could shed light on mechanistic studies of olfactory bulb circuits and olfactory coding. We study the spike response of a recently developed single-compartment biophysical MC model containing seven known ionic currents and calcium dynamics subject to constant current input with background white noise. We observe rich spiking dynamics even with constant current input, including multimodal peaks in the interspike interval (ISI) distribution. Although weak-to-moderate background noise for a fixed current input does not change the firing rate much, the spike dynamics can change dramatically, exhibiting non-monotonic spike variability not commonly observed in standard neuron models. We explain these dynamics with a phenomenological model of the ISI probability density function. Our study clarifies some of the complexities of MC spiking dynamics.
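For readers who want to reproduce the interval statistics mentioned here, the ISI distribution and its coefficient of variation are a few lines of array work. A minimal sketch; the spike times below are synthetic stand-ins, not output of the mitral-cell model:

```python
import numpy as np

def isi_statistics(spike_times, bins=50):
    """Return ISIs, their coefficient of variation, and a normalized ISI histogram."""
    isi = np.diff(np.sort(np.asarray(spike_times)))
    cv = isi.std(ddof=1) / isi.mean()
    counts, edges = np.histogram(isi, bins=bins, density=True)
    return isi, cv, (counts, edges)

# Synthetic example: mixing short and long intervals gives a bimodal ISI density.
rng = np.random.default_rng(2)
intervals = np.concatenate([rng.normal(20, 2, 400), rng.normal(60, 8, 200)])   # ms
spikes = np.cumsum(np.abs(intervals))
isi, cv, hist = isi_statistics(spikes)
print(f"n = {isi.size} intervals, mean = {isi.mean():.1f} ms, CV = {cv:.2f}")
```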
5. Kato Y, Nakao H. A definition of the asymptotic phase for quantum nonlinear oscillators from the Koopman operator viewpoint. Chaos 2022; 32:063133. PMID: 35778147; DOI: 10.1063/5.0088559.
Abstract
We propose a definition of the asymptotic phase for quantum nonlinear oscillators from the viewpoint of the Koopman operator theory. The asymptotic phase is a fundamental quantity for the analysis of classical limit-cycle oscillators, but it has not been defined explicitly for quantum nonlinear oscillators. In this study, we define the asymptotic phase for quantum oscillatory systems by using the eigenoperator of the backward Liouville operator associated with the fundamental oscillation frequency. By using the quantum van der Pol oscillator with a Kerr effect as an example, we illustrate that the proposed asymptotic phase appropriately yields isochronous phase values in both semiclassical and strong quantum regimes.
Affiliation(s)
- Yuzuru Kato: Department of Complex and Intelligent Systems, Future University Hakodate, Hokkaido 041-8655, Japan
- Hiroya Nakao: Department of Systems and Control Engineering, Tokyo Institute of Technology, Tokyo 152-8552, Japan
6. Pérez-Cervera A, Lindner B, Thomas PJ. Quantitative comparison of the mean-return-time phase and the stochastic asymptotic phase for noisy oscillators. Biol Cybern 2022; 116:219-234. PMID: 35320405; PMCID: PMC9068686; DOI: 10.1007/s00422-022-00929-6.
Abstract
Seminal work by A. Winfree and J. Guckenheimer showed that a deterministic phase variable can be defined either in terms of Poincaré sections or in terms of the asymptotic (long-time) behaviour of trajectories approaching a stable limit cycle. However, this equivalence between the deterministic notions of phase is broken in the presence of noise. Different notions of phase reduction for a stochastic oscillator can be defined either in terms of mean-return-time sections or as the argument of the slowest decaying complex eigenfunction of the Kolmogorov backwards operator. Although both notions of phase enjoy a solid theoretical foundation, their relationship remains unexplored. Here, we quantitatively compare both notions of stochastic phase. We derive an expression relating both notions of phase and use it to discuss differences (and similarities) between both definitions of stochastic phase for (i) a spiral sink motivated by stochastic models for electroencephalograms, (ii) noisy limit-cycle systems (neuroscience models), and (iii) a stochastic heteroclinic oscillator inspired by a simple motor-control system.
Affiliation(s)
- Alberto Pérez-Cervera: National Research University Higher School of Economics, Moscow, Russia; Instituto de Matemática Interdisciplinar, Universidad Complutense de Madrid, Madrid, Spain
- Benjamin Lindner: Bernstein Center for Computational Neuroscience Berlin, Institute of Physics, Humboldt University, Berlin, Germany
- Peter J. Thomas: Department of Mathematics, Applied Mathematics and Statistics, Case Western Reserve University, Cleveland, OH, USA
7. Holzhausen K, Ramlow L, Pu S, Thomas PJ, Lindner B. Mean-return-time phase of a stochastic oscillator provides an approximate renewal description for the associated point process. Biol Cybern 2022; 116:235-251. PMID: 35166932; PMCID: PMC9068687; DOI: 10.1007/s00422-022-00920-1.
Abstract
Stochastic oscillations can be characterized by a corresponding point process; this is a common practice in computational neuroscience, where oscillations of the membrane voltage under the influence of noise are often analyzed in terms of the interspike interval statistics, specifically the distribution and correlation of intervals between subsequent threshold-crossing times. More generally, crossing times and the corresponding interval sequences can be introduced for different kinds of stochastic oscillators that have been used to model variability of rhythmic activity in biological systems. In this paper we show that if we use the so-called mean-return-time (MRT) phase isochrons (introduced by Schwabedal and Pikovsky) to count the cycles of a stochastic oscillator with Markovian dynamics, the interphase interval sequence does not show any linear correlations, i.e., the corresponding sequence of passage times forms approximately a renewal point process. We first outline the general mathematical argument for this finding and illustrate it numerically for three models of increasing complexity: (i) the isotropic Guckenheimer-Schwabedal-Pikovsky oscillator that displays positive interspike interval (ISI) correlations if rotations are counted by passing the spoke of a wheel; (ii) the adaptive leaky integrate-and-fire model with white Gaussian noise that shows negative interspike interval correlations when spikes are counted in the usual way by the passage of a voltage threshold; (iii) a Hodgkin-Huxley model with channel noise (in the diffusion approximation represented by Gaussian noise) that exhibits weak but statistically significant interspike interval correlations, again for spikes counted when passing a voltage threshold. For all these models, linear correlations between intervals vanish when we count rotations by the passage of an MRT isochron. We finally discuss that the removal of interval correlations does not change the long-term variability and its effect on information transmission, especially in the neural context.
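The renewal property claimed for interphase intervals is easy to test on any interval sequence: estimate the serial correlation coefficients and compare them with zero (or with a shuffled surrogate). A minimal sketch, not the authors' code; the AR(1)-style interval sequence is only an illustrative stand-in:

```python
import numpy as np

def serial_correlation(intervals, max_lag=5):
    """Serial correlation coefficients rho_k = Cov(T_i, T_{i+k}) / Var(T_i)."""
    T = np.asarray(intervals, dtype=float)
    T = T - T.mean()
    var = np.mean(T * T)
    return np.array([np.mean(T[:-k] * T[k:]) / var for k in range(1, max_lag + 1)])

# Example: an AR(1)-like interval sequence has geometrically decaying correlations,
# while a shuffled copy (a renewal surrogate) has rho_k ~ 0 for all k >= 1.
rng = np.random.default_rng(3)
eps = rng.standard_normal(20000)
T = np.empty_like(eps)
T[0] = 1.0
for i in range(1, eps.size):
    T[i] = 1.0 + 0.4 * (T[i - 1] - 1.0) + 0.1 * eps[i]
print("rho_k (correlated):", np.round(serial_correlation(T), 3))
print("rho_k (shuffled):  ", np.round(serial_correlation(rng.permutation(T)), 3))
```

Applied to intervals defined by MRT-isochron crossings of a Markovian oscillator, the paper's claim is that the first print line would already look like the second.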
Affiliation(s)
- Konstantin Holzhausen: Bernstein Center for Computational Neuroscience Berlin, Philippstr. 13, Haus 2, 10115 Berlin, Germany; Physics Department of Humboldt University Berlin, Newtonstr. 15, 12489 Berlin, Germany
- Lukas Ramlow: Bernstein Center for Computational Neuroscience Berlin, Philippstr. 13, Haus 2, 10115 Berlin, Germany; Physics Department of Humboldt University Berlin, Newtonstr. 15, 12489 Berlin, Germany
- Shusen Pu: Department of Biomedical Engineering, 5814 Stevenson Center, Vanderbilt University, Nashville, TN 37215, USA
- Peter J. Thomas: Department of Mathematics, Applied Mathematics, and Statistics, 212 Yost Hall, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, Ohio, USA
- Benjamin Lindner: Bernstein Center for Computational Neuroscience Berlin, Philippstr. 13, Haus 2, 10115 Berlin, Germany; Physics Department of Humboldt University Berlin, Newtonstr. 15, 12489 Berlin, Germany
8. Toth K, Wilson D. Control of coupled neural oscillations using near-periodic inputs. Chaos 2022; 32:033130. PMID: 35364826; DOI: 10.1063/5.0076508.
Abstract
Deep brain stimulation (DBS) is a commonly used treatment for medication resistant Parkinson's disease and is an emerging treatment for other neurological disorders. More recently, phase-specific adaptive DBS (aDBS), whereby the application of stimulation is locked to a particular phase of tremor, has been proposed as a strategy to improve therapeutic efficacy and decrease side effects. In this work, in the context of these phase-specific aDBS strategies, we investigate the dynamical behavior of large populations of coupled neurons in response to near-periodic stimulation, namely, stimulation that is periodic except for a slowly changing amplitude and phase offset that can be used to coordinate the timing of applied input with a specified phase of model oscillations. Using an adaptive phase-amplitude reduction strategy, we illustrate that for a large population of oscillatory neurons, the temporal evolution of the associated phase distribution in response to near-periodic forcing can be captured using a reduced order model with four state variables. Subsequently, we devise and validate a closed-loop control strategy to disrupt synchronization caused by coupling. Additionally, we identify strategies for implementing the proposed control strategy in situations where underlying model equations are unavailable by estimating the necessary terms of the reduced order equations in real-time from observables.
Affiliation(s)
- Kaitlyn Toth: Department of Electrical Engineering and Computer Science, University of Tennessee, Knoxville, Tennessee 37996, USA
- Dan Wilson: Department of Electrical Engineering and Computer Science, University of Tennessee, Knoxville, Tennessee 37996, USA
9. Holzhausen K, Thomas PJ, Lindner B. Analytical approach to the mean-return-time phase of isotropic stochastic oscillators. Phys Rev E 2022; 105:024202. PMID: 35291171; DOI: 10.1103/physreve.105.024202.
Abstract
One notion of phase for stochastic oscillators is based on the mean return-time (MRT): a set of points represents a certain phase if the mean time to return from any point in this set to this set after one rotation is equal to the mean rotation period of the oscillator (irrespective of the starting point). For this so far only algorithmically defined phase, we derive here analytical expressions for the important class of isotropic stochastic oscillators. This allows us to evaluate cases from the literature explicitly and to study the behavior of the MRT phase in the limits of strong noise. We also use the same formalism to show that lines of constant return time variance (instead of constant mean return time) can be defined, and that they in general differ from the MRT isochrons.
Affiliation(s)
- Konstantin Holzhausen: Bernstein Center for Computational Neuroscience Berlin, Philippstr. 13, Haus 2, 10115 Berlin, Germany; Physics Department of Humboldt University Berlin, Newtonstr. 15, 12489 Berlin, Germany
- Peter J Thomas: Department of Mathematics, Applied Mathematics and Statistics, 212 Yost Hall, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, Ohio, USA
- Benjamin Lindner: Bernstein Center for Computational Neuroscience Berlin, Philippstr. 13, Haus 2, 10115 Berlin, Germany; Physics Department of Humboldt University Berlin, Newtonstr. 15, 12489 Berlin, Germany
10. Kullmann R, Knoll G, Bernardi D, Lindner B. Critical current for giant Fano factor in neural models with bistable firing dynamics and implications for signal transmission. Phys Rev E 2022; 105:014416. PMID: 35193262; DOI: 10.1103/physreve.105.014416.
Abstract
Bistability in the firing rate is a prominent feature in different types of neurons as well as in neural networks. We show that for a constant input below a critical value, such bistability can lead to a giant spike-count diffusion. We study the transmission of a periodic signal and demonstrate that close to the critical bias current, the signal-to-noise ratio suffers a sharp increase, an effect that can be traced back to the giant diffusion and large Fano factor.
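The Fano factor in question is the variance-to-mean ratio of the spike count in observation windows of length T. A minimal sketch for estimating it from a spike train; the homogeneous Poisson train used here is only a reference case (F close to 1 at all T), whereas the bistable models of the paper produce much larger values near the critical current:

```python
import numpy as np

def fano_factor(spike_times, window, t_max):
    """Fano factor F = Var(N) / Mean(N) of spike counts in non-overlapping windows."""
    edges = np.arange(0.0, t_max + window, window)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts.var(ddof=1) / counts.mean()

# Homogeneous Poisson spikes as a reference.
rng = np.random.default_rng(4)
rate, t_max = 20.0, 2000.0
spikes = np.cumsum(rng.exponential(1.0 / rate, size=int(2 * rate * t_max)))
spikes = spikes[spikes < t_max]
for T in (0.1, 1.0, 10.0):
    print(f"T = {T:5.1f} s  ->  F = {fano_factor(spikes, T, t_max):.2f}")
```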
Affiliation(s)
- Richard Kullmann: Bernstein Center for Computational Neuroscience Berlin, Philippstrasse 13, Haus 2, 10115 Berlin, Germany; Physics Department of Humboldt University Berlin, Newtonstrasse 15, 12489 Berlin, Germany
- Gregory Knoll: Bernstein Center for Computational Neuroscience Berlin, Philippstrasse 13, Haus 2, 10115 Berlin, Germany; Physics Department of Humboldt University Berlin, Newtonstrasse 15, 12489 Berlin, Germany
- Davide Bernardi: Center for Translational Neurophysiology of Speech and Communication, Fondazione Istituto Italiano di Tecnologia, via Fossato di Mortara 19, 44121 Ferrara, Italy
- Benjamin Lindner: Bernstein Center for Computational Neuroscience Berlin, Philippstrasse 13, Haus 2, 10115 Berlin, Germany; Physics Department of Humboldt University Berlin, Newtonstrasse 15, 12489 Berlin, Germany
11. Pérez-Cervera A, Lindner B, Thomas PJ. Isostables for Stochastic Oscillators. Phys Rev Lett 2021; 127:254101. PMID: 35029447; DOI: 10.1103/physrevlett.127.254101.
Abstract
Thomas and Lindner [Phys. Rev. Lett. 113, 254101 (2014)] defined an asymptotic phase for stochastic oscillators as the angle in the complex plane made by the eigenfunction of the backward Kolmogorov (or stochastic Koopman) operator whose complex eigenvalue has the least negative real part. We complete the phase-amplitude description of noisy oscillators by defining the stochastic isostable coordinate as the eigenfunction with the least negative nontrivial real eigenvalue. Our results suggest a framework for stochastic limit cycle dynamics that encompasses noise-induced oscillations.
Affiliation(s)
- Alberto Pérez-Cervera: National Research University Higher School of Economics, 109208 Moscow, Russia; Instituto de Matemática Interdisciplinar, Universidad Complutense de Madrid, 28040 Madrid, Spain
- Benjamin Lindner: Bernstein Center for Computational Neuroscience Berlin, Philippstraße 13, Haus 2, 10115 Berlin, Germany; Institute of Physics, Humboldt University at Berlin, Newtonstraße 15, D-12489 Berlin, Germany
- Peter J Thomas: Department of Mathematics, Applied Mathematics, and Statistics, Case Western Reserve University, Cleveland, Ohio 44106, USA
12. Asymptotic Phase and Amplitude for Classical and Semiclassical Stochastic Oscillators via Koopman Operator Theory. Mathematics 2021. DOI: 10.3390/math9182188.
Abstract
The asymptotic phase is a fundamental quantity for the analysis of deterministic limit-cycle oscillators, and generalized definitions of the asymptotic phase for stochastic oscillators have also been proposed. In this article, we show that the asymptotic phase and also amplitude can be defined for classical and semiclassical stochastic oscillators in a natural and unified manner by using the eigenfunctions of the Koopman operator of the system. We show that the proposed definition gives appropriate values of the phase and amplitude for strongly stochastic limit-cycle oscillators, excitable systems undergoing noise-induced oscillations, and also for quantum limit-cycle oscillators in the semiclassical regime.
13. Ramlow L, Lindner B. Interspike interval correlations in neuron models with adaptation and correlated noise. PLoS Comput Biol 2021; 17:e1009261. PMID: 34449771; PMCID: PMC8428727; DOI: 10.1371/journal.pcbi.1009261.
Abstract
The generation of neural action potentials (spikes) is random but nevertheless may result in a rich statistical structure of the spike sequence. In particular, contrary to the popular renewal assumption of theoreticians, the intervals between adjacent spikes are often correlated. Experimentally, different patterns of interspike-interval correlations have been observed and computational studies have identified spike-frequency adaptation and correlated noise as the two main mechanisms that can lead to such correlations. Analytical studies have focused on the single cases of either correlated (colored) noise or adaptation currents in combination with uncorrelated (white) noise. For low-pass filtered noise or adaptation, the serial correlation coefficient can be approximated as a single geometric sequence of the lag between the intervals, providing an explanation for some of the experimentally observed patterns. Here we address the problem of interval correlations for a widely used class of models, multidimensional integrate-and-fire neurons subject to a combination of colored and white noise sources and a spike-triggered adaptation current. Assuming weak noise, we derive a simple formula for the serial correlation coefficient, a sum of two geometric sequences, which accounts for a large class of correlation patterns. The theory is confirmed by means of numerical simulations in a number of special cases including the leaky, quadratic, and generalized integrate-and-fire models with colored noise and spike-frequency adaptation. Furthermore we study the case in which the adaptation current and the colored noise share the same time scale, corresponding to a slow stochastic population of adaptation channels; we demonstrate that our theory can account for a nonmonotonic dependence of the correlation coefficient on the channel's time scale. Another application of the theory is a neuron driven by network-noise-like fluctuations (green noise). We also discuss the range of validity of our weak-noise theory and show that by changing the relative strength of white and colored noise sources, we can change the sign of the correlation coefficient. Finally, we apply our theory to a conductance-based model which demonstrates its broad applicability.
Affiliation(s)
- Lukas Ramlow: Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany; Physics Department, Humboldt-Universität zu Berlin, Berlin, Germany
- Benjamin Lindner: Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany; Physics Department, Humboldt-Universität zu Berlin, Berlin, Germany
14. Pu S, Thomas PJ. Resolving molecular contributions of ion channel noise to interspike interval variability through stochastic shielding. Biol Cybern 2021; 115:267-302. PMID: 34021802; DOI: 10.1007/s00422-021-00877-7.
Abstract
Molecular fluctuations can lead to macroscopically observable effects. The random gating of ion channels in the membrane of a nerve cell provides an important example. The contributions of independent noise sources to the variability of action potential timing have not previously been studied at the level of molecular transitions within a conductance-based model ion-state graph. Here we study a stochastic Langevin model for the Hodgkin-Huxley (HH) system based on a detailed representation of the underlying channel state Markov process, the "14 × 28D model" introduced in (Pu and Thomas in Neural Computation 32(10):1775-1835, 2020). We show how to resolve the individual contributions that each transition in the ion channel graph makes to the variance of the interspike interval (ISI). We extend the mean return time (MRT) phase reduction developed in (Cao et al. in SIAM J Appl Math 80(1):422-447, 2020) to the second moment of the return time from an MRT isochron to itself. Because fixed-voltage spike detection triggers do not correspond to MRT isochrons, the inter-phase interval (IPI) variance only approximates the ISI variance. We find the IPI variance and ISI variance agree to within a few percent when both can be computed. Moreover, we prove rigorously, and show numerically, that our expression for the IPI variance is accurate in the small noise (large system size) regime; our theory is exact in the limit of small noise. By selectively including the noise associated with only those few transitions responsible for most of the ISI variance, our analysis extends the stochastic shielding (SS) paradigm (Schmandt and Galán in Phys Rev Lett 109(11):118101, 2012) from the stationary voltage clamp case to the current clamp case. We show numerically that the SS approximation has a high degree of accuracy even for larger, physiologically relevant noise levels. Finally, we demonstrate that the ISI variance is not an unambiguously defined quantity, but depends on the choice of voltage level set as the spike detection threshold. We find a small but significant increase in ISI variance as the spike detection voltage is raised, both for simulated stochastic HH data and for voltage traces recorded in in vitro experiments. In contrast, the IPI variance is invariant with respect to the choice of isochron used as a trigger for counting "spikes."
Affiliation(s)
- Shusen Pu: Department of Mathematics, Applied Mathematics, and Statistics, Case Western Reserve University, Cleveland, OH, USA; Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA
- Peter J Thomas: Department of Mathematics, Applied Mathematics, and Statistics, Case Western Reserve University, Cleveland, OH, USA; Department of Biology, Case Western Reserve University, Cleveland, OH, USA; Department of Cognitive Science, Case Western Reserve University, Cleveland, OH, USA; Department of Data and Computer Science, Case Western Reserve University, Cleveland, OH, USA; Department of Electrical, Control, and Systems Engineering, Case Western Reserve University, Cleveland, OH, USA
15. Duchet B, Weerasinghe G, Bick C, Bogacz R. Optimizing deep brain stimulation based on isostable amplitude in essential tremor patient models. J Neural Eng 2021; 18:046023. PMID: 33821809; PMCID: PMC7610712; DOI: 10.1088/1741-2552/abd90d.
Abstract
OBJECTIVE: Deep brain stimulation is a treatment for medically refractory essential tremor. To improve the therapy, closed-loop approaches are designed to deliver stimulation according to the system's state, which is constantly monitored by recording a pathological signal associated with symptoms (e.g. brain signal or limb tremor). Since the space of possible closed-loop stimulation strategies is vast and cannot be fully explored experimentally, how to stimulate according to the state should be informed by modeling. A typical modeling goal is to design a stimulation strategy that aims to maximally reduce the Hilbert amplitude of the pathological signal in order to minimize symptoms. Isostables provide a notion of amplitude related to convergence time to the attractor, which can be beneficial in model-based control problems. However, how isostable and Hilbert amplitudes compare when optimizing the amplitude response to stimulation in models constrained by data is unknown. APPROACH: We formulate a simple closed-loop stimulation strategy based on models previously fitted to phase-locked deep brain stimulation data from essential tremor patients. We compare the performance of this strategy in suppressing oscillatory power when based on Hilbert amplitude and when based on isostable amplitude. We also compare performance to phase-locked stimulation and open-loop high-frequency stimulation. MAIN RESULTS: For our closed-loop phase space stimulation strategy, stimulation based on isostable amplitude is significantly more effective than stimulation based on Hilbert amplitude when amplitude field computation time is limited to minutes. Performance is similar when there are no constraints; however, constraints on computation time are expected in clinical applications. Even when computation time is limited to minutes, closed-loop phase space stimulation based on isostable amplitude is advantageous compared to phase-locked stimulation, and is more efficient than high-frequency stimulation. SIGNIFICANCE: Our results suggest a potential benefit to using isostable amplitude more broadly for model-based optimization of stimulation in neurological disorders.
Affiliation(s)
- Benoit Duchet: Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom; MRC Brain Network Dynamics Unit, University of Oxford, Oxford, United Kingdom
16. Schleimer JH, Hesse J, Contreras SA, Schreiber S. Firing statistics in the bistable regime of neurons with homoclinic spike generation. Phys Rev E 2021; 103:012407. PMID: 33601551; DOI: 10.1103/physreve.103.012407.
Abstract
Neuronal voltage dynamics of regularly firing neurons typically has one stable attractor: either a fixed point (like in the subthreshold regime) or a limit cycle that defines the tonic firing of action potentials (in the suprathreshold regime). In two of the three spike onset bifurcation sequences that are known to give rise to all-or-none type action potentials, however, the resting-state fixed point and limit cycle spiking can coexist in an intermediate regime, resulting in bistable dynamics. Here, noise can induce switches between the attractors, i.e., between rest and spiking, and thus increase the variability of the spike train compared to neurons with only one stable attractor. Qualitative features of the resulting spike statistics depend on the spike onset bifurcations. This paper focuses on the creation of the spiking limit cycle via the saddle-homoclinic orbit (HOM) bifurcation and derives interspike interval (ISI) densities for a conductance-based neuron model in the bistable regime. The ISI densities of bistable homoclinic neurons are found to be unimodal yet distinct from the inverse Gaussian distribution associated with the saddle-node-on-invariant-cycle bifurcation. It is demonstrated that for the HOM bifurcation the transition between rest and spiking is mainly determined along the downstroke of the action potential, a dynamical feature that is not captured by the commonly used reset neuron models. The deduced spike statistics can help to identify HOM dynamics in experimental data.
Affiliation(s)
- Jan-Hendrik Schleimer: Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Philippstrasse 13, Haus 4, 10115 Berlin, Germany; Bernstein Center for Computational Neuroscience, 10115 Berlin, Germany
- Janina Hesse: Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Philippstrasse 13, Haus 4, 10115 Berlin, Germany; MSH Medical School Hamburg, Am Kaiserkai 1, 20457 Hamburg, Germany
- Susana Andrea Contreras: Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Philippstrasse 13, Haus 4, 10115 Berlin, Germany; Bernstein Center for Computational Neuroscience, 10115 Berlin, Germany
- Susanne Schreiber: Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Philippstrasse 13, Haus 4, 10115 Berlin, Germany; Bernstein Center for Computational Neuroscience, 10115 Berlin, Germany
17. Pérez-Cervera A, M-Seara T, Huguet G. Global phase-amplitude description of oscillatory dynamics via the parameterization method. Chaos 2020; 30:083117. PMID: 32872842; DOI: 10.1063/5.0010149.
Abstract
In this paper, we use the parameterization method to provide a complete description of the dynamics of an n-dimensional oscillator beyond the classical phase reduction. The parameterization method allows us, via efficient algorithms, to obtain a parameterization of the attracting invariant manifold of the limit cycle in terms of the phase-amplitude variables. The method has several advantages. It provides analytically a Fourier-Taylor expansion of the parameterization up to any order, as well as a simplification of the dynamics that allows for a numerical globalization of the manifolds. Thus, one can obtain the local and global isochrons and isostables, including the slow attracting manifold, up to high accuracy, which offer a geometrical portrait of the oscillatory dynamics. Furthermore, it provides straightforwardly the infinitesimal phase and amplitude response functions, that is, the extended infinitesimal phase and amplitude response curves, which monitor the phase and amplitude shifts beyond the asymptotic state. Thus, the methodology presented yields an accurate description of the phase dynamics for perturbations not restricted to the limit cycle but to its attracting invariant manifold. Finally, we explore some strategies to reduce the dimension of the dynamics, including the reduction of the dynamics to the slow stable submanifold. We illustrate our methods by applying them to different three-dimensional single neuron and neural population models in neuroscience.
Affiliation(s)
- Alberto Pérez-Cervera: Departament de Matemàtiques, Universitat Politècnica de Catalunya, Avda. Diagonal 647, 08028 Barcelona, Spain
- Tere M-Seara: Departament de Matemàtiques, Universitat Politècnica de Catalunya, Avda. Diagonal 647, 08028 Barcelona, Spain
- Gemma Huguet: Departament de Matemàtiques, Universitat Politècnica de Catalunya, Avda. Diagonal 647, 08028 Barcelona, Spain
18. Thomas PJ, Lindner B. Phase descriptions of a multidimensional Ornstein-Uhlenbeck process. Phys Rev E 2019; 99:062221. PMID: 31330649; DOI: 10.1103/physreve.99.062221.
Abstract
Stochastic oscillators play a prominent role in different fields of science. Their simplified description in terms of a phase has been advocated by different authors using distinct phase definitions in the stochastic case. One notion of phase that we put forward previously, the asymptotic phase of a stochastic oscillator, is based on the eigenfunction expansion of its probability density. More specifically, it is given by the complex argument of the eigenfunction of the backward operator corresponding to the least-negative eigenvalue. Formally, besides the "backward" phase, one can also define the "forward" phase as the complex argument of the eigenfunction of the forward Kolmogorov operator corresponding to the least-negative eigenvalue. Until now, the intuition about these phase descriptions has been limited. Here we study these definitions for a process that is analytically tractable, the two-dimensional Ornstein-Uhlenbeck process with complex eigenvalues. For this process, (i) we give explicit expressions for the two phases; (ii) we demonstrate that the isochrons are always the spokes of a wheel but that (iii) the spacing of these isochrons (their angular density) is different for backward and forward phases; (iv) we show that the isochrons of the backward phase are completely determined by the deterministic part of the vector field, whereas the forward phase also depends on the noise matrix; and (v) we demonstrate that the mean progression of the backward phase in time is always uniform, whereas this is not true for the forward phase except in the rotationally symmetric case. We illustrate our analytical results for a number of qualitatively different cases.
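For a two-dimensional OU process dx = A x dt + sigma dW with complex eigenvalues of A, a linear function Q(x) = c·x is an eigenfunction of the backward operator whenever A^T c = lambda c, because the diffusion term vanishes for linear functions; this is why the backward-phase isochrons are noise-independent spokes. The sketch below checks the uniform mean progression numerically for a rotationally symmetric example (parameter values are arbitrary choices, not taken from the paper):

```python
import numpy as np

# Linear drift with a stable focus (complex eigenvalues); isotropic noise amplitude sigma.
A = np.array([[-0.5,  2.0],
              [-2.0, -0.5]])
sigma = 0.4

# Backward-phase eigenfunction: Q(x) = c . x with A^T c = lambda c (complex lambda).
lam, vecs = np.linalg.eig(A.T)
k = np.argmax(lam.imag)            # pick the eigenvalue with positive imaginary part
lam1, c = lam[k], vecs[:, k]
print("lambda_1 =", np.round(lam1, 3))     # real part: decay rate, imag part: omega_1

def backward_phase(x):
    """Asymptotic (backward) phase of a state x, up to an overall constant offset."""
    return np.angle(x @ c)

# Euler-Maruyama simulation; the mean phase should advance uniformly at rate omega_1.
rng = np.random.default_rng(5)
dt, n = 1e-3, 200000
x = np.array([1.0, 0.0])
phases = np.empty(n)
for i in range(n):
    phases[i] = backward_phase(x)
    x = x + A @ x * dt + sigma * np.sqrt(dt) * rng.standard_normal(2)
mean_rate = np.mean(np.diff(np.unwrap(phases))) / dt
print(f"empirical phase rate = {mean_rate:.3f}, omega_1 = {lam1.imag:.3f}")
```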
Affiliation(s)
- Peter J Thomas: Department of Mathematics, Applied Mathematics, and Statistics, Case Western Reserve University, Cleveland, Ohio 44106, USA
- Benjamin Lindner: Bernstein Center for Computational Neuroscience Berlin, Philippstraße 13, Haus 2, 10115 Berlin, Germany; Department of Physics, Humboldt University Berlin, Newtonstraße 15, 12489 Berlin, Germany
19. Monga B, Wilson D, Matchen T, Moehlis J. Phase reduction and phase-based optimal control for biological systems: a tutorial. Biol Cybern 2019; 113:11-46. PMID: 30203130; DOI: 10.1007/s00422-018-0780-z.
Abstract
A powerful technique for the analysis of nonlinear oscillators is the rigorous reduction to phase models, with a single variable describing the phase of the oscillation with respect to some reference state. An analog to phase reduction has recently been proposed for systems with a stable fixed point, and phase reduction for periodic orbits has recently been extended to take into account transverse directions and higher-order terms. This tutorial gives a unified treatment of such phase reduction techniques and illustrates their use through mathematical and biological examples. It also covers the use of phase reduction for designing control algorithms which optimally change properties of the system, such as the phase of the oscillation. The control techniques are illustrated for example neural and cardiac systems.
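For orientation, the first-order phase reduction at the center of such a tutorial can be written compactly; this is the standard textbook form, not a quotation from the paper. For a limit-cycle system x' = F(x) + eps*G(x, t) with period T, to lowest order in eps:

```latex
\dot{\theta} \;=\; \omega \;+\; \varepsilon\, Z(\theta)\cdot G\!\big(\mathbf{x}^{\gamma}(\theta),\, t\big),
\qquad \omega = \frac{2\pi}{T},
\qquad Z(\theta) = \nabla_{\mathbf{x}}\theta \,\big|_{\mathbf{x}^{\gamma}(\theta)},
```

where x^γ(θ) parameterizes the unperturbed limit cycle and Z(θ) is the infinitesimal phase response curve; the transverse and higher-order extensions mentioned above add analogous equations for isostable (amplitude) coordinates.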
Affiliation(s)
- Bharat Monga: Department of Mechanical Engineering, University of California, Santa Barbara, CA 93106, USA
- Dan Wilson: Department of Electrical Engineering and Computer Science, University of Tennessee, Knoxville, TN 37996, USA
- Tim Matchen: Department of Mechanical Engineering, University of California, Santa Barbara, CA 93106, USA
- Jeff Moehlis: Department of Mechanical Engineering, University of California, Santa Barbara, CA 93106, USA
20. Voronenko SO, Lindner B. Improved lower bound for the mutual information between signal and neural spike count. Biol Cybern 2018; 112:523-538. PMID: 30155699; DOI: 10.1007/s00422-018-0779-5.
Abstract
The mutual information between a stimulus signal and the spike count of a stochastic neuron is in many cases difficult to determine. Therefore, it is often approximated by a lower bound formula that involves linear correlations between input and output only. Here, we improve the linear lower bound for the mutual information by incorporating nonlinear correlations. For the special case of a Gaussian output variable with nonlinear signal dependencies of mean and variance we also derive an exact integral formula for the full mutual information. In our numerical analysis, we first compare the linear and nonlinear lower bounds and the exact integral formula for two different Gaussian models and show under which conditions the nonlinear lower bound provides a significant improvement to the linear approximation. We then inspect two neuron models, the leaky integrate-and-fire model with white Gaussian noise and the Na-K model with channel noise. We show that for certain firing regimes and for intermediate signal strengths the nonlinear lower bound can provide a substantial improvement compared to the linear lower bound. Our results demonstrate the importance of nonlinear input-output correlations for neural information transmission and provide a simple nonlinear approximation for the mutual information that can be applied to more complicated neuron models as well as to experimental data.
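For context, the simplest scalar form of the linear lower bound being improved upon reads as follows; this generic Gaussian-signal version is shown only for orientation and is not the paper's refined nonlinear bound:

```latex
% Valid when the signal s is Gaussian; rho is the linear (Pearson) correlation
% between the signal and the spike count n.
I(s;n) \;\ge\; I_{\mathrm{LB}} \;=\; -\tfrac{1}{2}\,\log_{2}\!\big(1 - \rho^{2}\big),
\qquad
\rho \;=\; \frac{\operatorname{Cov}(s,n)}{\sqrt{\operatorname{Var}(s)\,\operatorname{Var}(n)}} .
```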
Affiliation(s)
- Sergej O Voronenko: Bernstein Center for Computational Neuroscience Berlin, Philippstr. 13, Haus 2, 10115 Berlin, Germany; Physics Department, Humboldt University Berlin, Newtonstr. 15, 12489 Berlin, Germany
- Benjamin Lindner: Bernstein Center for Computational Neuroscience Berlin, Philippstr. 13, Haus 2, 10115 Berlin, Germany; Physics Department, Humboldt University Berlin, Newtonstr. 15, 12489 Berlin, Germany
21. Ly C, Weinberg SH. Analysis of heterogeneous cardiac pacemaker tissue models and traveling wave dynamics. J Theor Biol 2018; 459:18-35. PMID: 30248329; DOI: 10.1016/j.jtbi.2018.09.023.
Abstract
The sinoatrial-node (SAN) is a complex heterogeneous tissue that generates a stable rhythm in healthy hearts, yet a general mechanistic explanation for when and how this tissue remains stable is lacking. Although computational and theoretical analyses could elucidate these phenomena, such methods have rarely been used in realistic (large-dimensional) gap-junction coupled heterogeneous pacemaker tissue models. In this study, we adapt a recent model of pacemaker cells (Severi et al., 2012), incorporating biophysical representations of ion channel and intracellular calcium dynamics, to capture physiological features of a heterogeneous population of pacemaker cells, in particular "center" and "peripheral" cells with distinct intrinsic frequencies and action potential morphology. Large-scale simulations of the SAN tissue, represented by a heterogeneous tissue structure of pacemaker cells, exhibit a rich repertoire of behaviors, including complete synchrony, traveling waves of activity originating from periphery to center, and transient traveling waves originating from the center. We use phase reduction methods that do not require fully simulating the large-scale model to capture these observations. Moreover, the phase reduced models accurately predict key properties of the tissue electrical dynamics, including wave frequencies when synchronization occurs, and wave propagation direction in a variety of tissue models. With the reduced phase models, we analyze the relationship between cell distributions and coupling strengths and the resulting transient dynamics. Further, the reduced phase model predicts parameter regimes of irregular electrical dynamics. Thus, we demonstrate that phase reduced oscillator models applied to realistic pacemaker tissue is a useful tool for investigating the spatial-temporal dynamics of cardiac pacemaker activity.
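After phase reduction, each pacemaker cell is represented by a single phase variable with an intrinsic frequency and a pairwise coupling function. The sketch below uses the simplest such stand-in, globally coupled Kuramoto oscillators with two intrinsic-frequency groups, to show how a locked common frequency emerges; the paper's tissue models use local gap-junction coupling and coupling functions derived from the biophysics, so every parameter here is an illustrative assumption:

```python
import numpy as np

def kuramoto(omega, K=4.0, dt=0.01, n_steps=40000, n_tail=10000, seed=6):
    """Globally coupled Kuramoto phase oscillators:
    d(theta_i)/dt = omega_i + K * r * sin(psi - theta_i),
    where r*exp(i*psi) is the population order parameter.
    Returns each oscillator's mean frequency (Hz) over the final n_tail steps."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, omega.size)
    theta_mark = None
    for step in range(n_steps):
        if step == n_steps - n_tail:
            theta_mark = theta.copy()
        z = np.mean(np.exp(1j * theta))              # order parameter r*exp(i*psi)
        theta = theta + dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
    return (theta - theta_mark) / (n_tail * dt) / (2 * np.pi)

# Two intrinsic-frequency groups, standing in for "peripheral" (1.0 Hz) and
# "center" (1.3 Hz) pacemaker cells after phase reduction.
omega = np.concatenate([np.full(33, 2 * np.pi * 1.0), np.full(17, 2 * np.pi * 1.3)])
freq = kuramoto(omega)
print(f"locked frequencies: min {freq.min():.3f} Hz, max {freq.max():.3f} Hz "
      f"(a single common value indicates full synchronization)")
```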
Affiliation(s)
- Cheng Ly: Department of Statistical Sciences & Operations Research, Virginia Commonwealth University, USA
- Seth H Weinberg: Department of Biomedical Engineering, Virginia Commonwealth University, USA. http://www.shweinberglab.com
22. Bressloff PC, Maclaurin JN. Stochastic Hybrid Systems in Cellular Neuroscience. J Math Neurosci 2018; 8:12. PMID: 30136005; PMCID: PMC6104574; DOI: 10.1186/s13408-018-0067-7.
Abstract
We review recent work on the theory and applications of stochastic hybrid systems in cellular neuroscience. A stochastic hybrid system or piecewise deterministic Markov process involves the coupling between a piecewise deterministic differential equation and a time-homogeneous Markov chain on some discrete space. The latter typically represents some random switching process. We begin by summarizing the basic theory of stochastic hybrid systems, including various approximation schemes in the fast switching (weak noise) limit. In subsequent sections, we consider various applications of stochastic hybrid systems, including stochastic ion channels and membrane voltage fluctuations, stochastic gap junctions and diffusion in randomly switching environments, and intracellular transport in axons and dendrites. Finally, we describe recent work on phase reduction methods for stochastic hybrid limit cycle oscillators.
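A stochastic hybrid system in the sense reviewed here couples a continuous flow to a discrete Markov chain. The smallest ion-channel example is a membrane voltage driven by one two-state channel with constant switching rates; the sketch below draws exponential holding times for the channel and integrates the voltage between switches with forward Euler (all parameter values are illustrative assumptions):

```python
import numpy as np

def simulate_pdmp(t_max=2.0, dt=1e-4, alpha=50.0, beta=100.0, seed=7):
    """Piecewise-deterministic Markov process: dv/dt depends on a channel state
    s in {0 (closed), 1 (open)} that switches with rates alpha (open) / beta (close)."""
    C, g_L, E_L, g_ch, E_ch, I_app = 1.0, 0.3, -65.0, 1.5, 50.0, 2.0
    rng = np.random.default_rng(seed)
    v, s, t = -65.0, 0, 0.0
    next_switch = rng.exponential(1.0 / alpha)       # time of the first opening
    ts, vs = [t], [v]
    while t < t_max:
        if t >= next_switch:                         # discrete jump of the Markov chain
            s = 1 - s
            rate = beta if s == 1 else alpha
            next_switch = t + rng.exponential(1.0 / rate)
        # deterministic flow between jumps (forward Euler)
        dv = (-g_L * (v - E_L) - g_ch * s * (v - E_ch) + I_app) / C
        v, t = v + dt * dv, t + dt
        ts.append(t)
        vs.append(v)
    return np.array(ts), np.array(vs)

ts, vs = simulate_pdmp()
print(f"voltage range over the run: [{vs.min():.1f}, {vs.max():.1f}] mV")
```

Replacing the two-state channel by the full Hodgkin-Huxley channel-state graph, or the Bernoulli-like switching environment by a diffusing cargo, gives the larger applications discussed in the review.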
23. Minas G, Rand DA. Long-time analytic approximation of large stochastic oscillators: Simulation, analysis and inference. PLoS Comput Biol 2017; 13:e1005676. PMID: 28742083; PMCID: PMC5555717; DOI: 10.1371/journal.pcbi.1005676.
Abstract
In order to analyse large complex stochastic dynamical models such as those studied in systems biology there is currently a great need for both analytical tools and also algorithms for accurate and fast simulation and estimation. We present a new stochastic approximation of biological oscillators that addresses these needs. Our method, called phase-corrected LNA (pcLNA), overcomes the main limitations of the standard Linear Noise Approximation (LNA) to remain uniformly accurate for long times, still maintaining the speed and analytical tractability of the LNA. As part of this, we develop analytical expressions for key probability distributions and associated quantities, such as the Fisher Information Matrix and Kullback-Leibler divergence, and we introduce a new approach to system-global sensitivity analysis. We also present algorithms for statistical inference and for long-term simulation of oscillating systems that are shown to be as accurate but much faster than leaping algorithms and algorithms for integration of diffusion equations. Stochastic versions of published models of the circadian clock and NF-κB system are used to illustrate our results.

Many cellular and molecular systems such as the circadian clock and the cell cycle are oscillators that are modelled using nonlinear dynamical systems. Moreover, oscillatory systems are ubiquitous elsewhere in science. There is an extensive theory for perfectly noise-free dynamical systems and very effective algorithms for simulating their temporal behaviour. On the other hand, biological systems are inherently stochastic and the presence of stochastic noise can play a crucial role. Unfortunately, there are far fewer analytical tools and much less understanding for stochastic models especially when they are nonlinear and have lots of state variables and parameters. Moreover simulation is not so effective and can be very slow if the system is large. In this article we describe how to accurately approximate such systems in a way that facilitates fast simulation, parameter estimation and new approaches to analysis, such as calculating probability distributions that describe the system's stochastic behaviour and describing how these distributions change when the parameters of the system are varied.
Affiliation(s)
- Giorgos Minas: Zeeman Institute for Systems Biology & Infectious Disease Epidemiology Research, University of Warwick, Coventry, United Kingdom; Mathematics Institute, University of Warwick, Coventry, United Kingdom
- David A. Rand: Zeeman Institute for Systems Biology & Infectious Disease Epidemiology Research, University of Warwick, Coventry, United Kingdom; Mathematics Institute, University of Warwick, Coventry, United Kingdom
24. Stiefel KM, Ermentrout GB. Neurons as oscillators. J Neurophysiol 2016; 116:2950-2960. PMID: 27683887; DOI: 10.1152/jn.00525.2015.
Abstract
Regularly spiking neurons can be described as oscillators. In this article we review some of the insights gained from this conceptualization and their relevance for systems neuroscience. First, we explain how a regularly spiking neuron can be viewed as an oscillator and how the phase-response curve (PRC) describes the response of the neuron's spike times to small perturbations. We then discuss the meaning of the PRC for a single neuron's spiking behavior and review the PRCs measured from a variety of neurons in a range of spiking regimes. Next, we show how the PRC can be related to a number of common measures used to quantify neuronal firing, such as the spike-triggered average and the peristimulus histogram. We further show that the response of a neuron to correlated inputs depends on the shape of the PRC. We then explain how the PRC of single neurons can be used to predict neural network behavior. Given the PRC, conduction delays, and the waveform and time course of the synaptic potentials, it is possible to predict neural population behavior such as synchronization. The PRC also allows us to quantify the robustness of the synchronization to heterogeneity and noise. We finally ask how to combine the measured PRCs and the predictions based on PRC to further the understanding of systems neuroscience. As an example, we discuss how the change of the PRC by the neuromodulator acetylcholine could lead to a destabilization of cortical network dynamics. Although all of these studies are grounded in mathematical abstractions that do not strictly hold in biology, they provide good estimates for the emergence of the brain's network activity from the properties of individual neurons. The study of neurons as oscillators can provide testable hypotheses and mechanistic explanations for systems neuroscience.
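The PRC discussed throughout this review can be measured numerically with the direct method: kick the oscillator at a known phase, let it relax back to the limit cycle, and record the asymptotic phase shift. The sketch below applies this to a Stuart-Landau (radial isochron) oscillator, whose x-direction PRC is proportional to -sin(phase), so the estimate can be checked against a known answer; the model and kick size are illustrative choices, not taken from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

omega = 2 * np.pi              # angular frequency of the limit cycle (period 1)

def rhs(t, z):
    """Stuart-Landau / radial isochron clock: r' = r(1 - r^2), phi' = omega."""
    x, y = z
    r2 = x * x + y * y
    return [x * (1 - r2) - omega * y, y * (1 - r2) + omega * x]

def phase_shift(phi0, eps=0.05, settle=10.0):
    """Direct-method PRC estimate: kick x by eps at phase phi0, integrate both the
    kicked and unkicked trajectories, and compare asymptotic polar angles."""
    z0 = [np.cos(phi0), np.sin(phi0)]                # point on the limit cycle (r = 1)
    z_kick = [z0[0] + eps, z0[1]]
    sol_ref = solve_ivp(rhs, (0, settle), z0, rtol=1e-9, atol=1e-11)
    sol_per = solve_ivp(rhs, (0, settle), z_kick, rtol=1e-9, atol=1e-11)
    ang = lambda z: np.arctan2(z[1], z[0])
    dphi = ang(sol_per.y[:, -1]) - ang(sol_ref.y[:, -1])
    return np.angle(np.exp(1j * dphi))               # wrap to (-pi, pi]

phis = np.linspace(0, 2 * np.pi, 9, endpoint=False)
est = np.array([phase_shift(p) / 0.05 for p in phis])   # normalize by kick size
print("phase      :", np.round(phis, 2))
print("PRC (est.) :", np.round(est, 2))
print("-sin(phase):", np.round(-np.sin(phis), 2))
```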
Affiliation(s)
- G Bard Ermentrout: Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania
25. Anderson DF, Ermentrout B, Friel DD, Galán RF, Lindner B, Pu S, Schmidt DR, Thomas PJ. Fast and accurate representations of stochastic ion channel fluctuations. BMC Neurosci 2015. PMCID: PMC4699020; DOI: 10.1186/1471-2202-16-s1-p258.
26. Pikovsky A. Comment on "Asymptotic Phase for Stochastic Oscillators". Phys Rev Lett 2015; 115:069401. PMID: 26296133; DOI: 10.1103/physrevlett.115.069401.
Affiliation(s)
- Arkady Pikovsky: Institute for Physics and Astronomy, University of Potsdam, Karl-Liebknecht-Strasse 24/25, 14476 Potsdam-Golm, Germany; Department of Control Theory, Nizhni Novgorod State University, Gagarin Avenue 23, 606950 Nizhni Novgorod, Russia
27. Thomas PJ, Lindner B. Thomas and Lindner Reply. Phys Rev Lett 2015; 115:069402. PMID: 26296134; DOI: 10.1103/physrevlett.115.069402.
Affiliation(s)
- Peter J Thomas: Department of Mathematics, Applied Mathematics, and Statistics, Case Western Reserve University, Cleveland, Ohio 44106, USA; Bernstein Center for Computational Neuroscience, 10115 Berlin, Germany
- Benjamin Lindner: Bernstein Center for Computational Neuroscience, 10115 Berlin, Germany; Department of Physics, Humboldt University, 12489 Berlin, Germany