1
Spaeth A, Haussler D, Teodorescu M. Model-agnostic neural mean field with a data-driven transfer function. Neuromorphic Computing and Engineering 2024; 4:034013. PMID: 39310743; PMCID: PMC11413991; DOI: 10.1088/2634-4386/ad787f.
Abstract
As one of the most complex systems known to science, modeling brain behavior and function is both fascinating and extremely difficult. Empirical data is increasingly available from ex vivo human brain organoids and surgical samples, as well as in vivo animal models, so the problem of modeling the behavior of large-scale neuronal systems is more relevant than ever. The statistical physics concept of a mean-field model offers a tractable way to bridge the gap between single-neuron and population-level descriptions of neuronal activity, by modeling the behavior of a single representative neuron and extending this to the population. However, existing neural mean-field methods typically either take the limit of small interaction sizes, or are applicable only to the specific neuron models for which they were derived. This paper derives a mean-field model by fitting a transfer function called Refractory SoftPlus, which is simple yet applicable to a broad variety of neuron types. The transfer function is fitted numerically to simulated spike time data, and is entirely agnostic to the underlying neuronal dynamics. The resulting mean-field model predicts the response of a network of randomly connected neurons to a time-varying external stimulus with a high degree of accuracy. Furthermore, it enables an accurate approximate bifurcation analysis as a function of the level of recurrent input. This model does not assume large presynaptic rates or small postsynaptic potential size, allowing mean-field models to be developed even for populations with large interaction terms.
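The fitted transfer function itself is not given in the abstract; as a rough sketch of the idea, a SoftPlus rate curve can be capped by an absolute refractory period so that the output rate saturates at 1/t_ref. The functional form and parameter names below (gain a, threshold x0, refractory period t_ref) are illustrative assumptions, not the fitted Refractory SoftPlus from the paper.

```python
import numpy as np

def softplus(x):
    # numerically stable log(1 + exp(x))
    return np.logaddexp(0.0, x)

def refractory_softplus(x, a=1.0, x0=0.0, t_ref=2e-3):
    """Illustrative firing-rate curve: a SoftPlus input-rate relation
    saturating at 1/t_ref, as an absolute refractory period would impose.
    Parameter names and form are assumptions, not the paper's fit."""
    r = a * softplus(x - x0)      # unconstrained SoftPlus rate (Hz)
    return r / (1.0 + t_ref * r)  # saturate at 1/t_ref

rates = refractory_softplus(np.linspace(-5.0, 50.0, 6))
```

The saturation step is what makes the curve usable for large recurrent inputs: however strong the drive, the predicted rate never exceeds the refractory limit 1/t_ref.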
Affiliation(s)
- Alex Spaeth
- Electrical and Computer Engineering Department, University of California, Santa Cruz, Santa Cruz, CA, United States of America
- Genomics Institute, University of California, Santa Cruz, Santa Cruz, CA, United States of America
- David Haussler
- Genomics Institute, University of California, Santa Cruz, Santa Cruz, CA, United States of America
- Biomolecular Engineering Department, University of California, Santa Cruz, Santa Cruz, CA, United States of America
- Mircea Teodorescu
- Electrical and Computer Engineering Department, University of California, Santa Cruz, Santa Cruz, CA, United States of America
- Genomics Institute, University of California, Santa Cruz, Santa Cruz, CA, United States of America
- Biomolecular Engineering Department, University of California, Santa Cruz, Santa Cruz, CA, United States of America
2
Cecchini G, DePass M, Baspinar E, Andujar M, Ramawat S, Pani P, Ferraina S, Destexhe A, Moreno-Bote R, Cos I. Cognitive mechanisms of learning in sequential decision-making under uncertainty: an experimental and theoretical approach. Front Behav Neurosci 2024; 18:1399394. PMID: 39188591; PMCID: PMC11346247; DOI: 10.3389/fnbeh.2024.1399394.
Abstract
Learning to make adaptive decisions involves making choices, assessing their consequences, and leveraging this assessment to attain higher rewarding states. Despite a vast literature on value-based decision-making, relatively little is known about the cognitive processes underlying decisions in highly uncertain contexts. Real-world decisions are rarely accompanied by immediate feedback, explicit rewards, or complete knowledge of the environment. Being able to make informed decisions in such contexts requires significant knowledge about the environment, which can only be gained via exploration. Here we aim at understanding and formalizing the brain mechanisms underlying these processes. To this end, we first designed and performed an experimental task. Human participants had to learn to maximize reward while making sequences of decisions with only basic knowledge of the environment, and in the absence of explicit performance cues. Participants had to rely on their own internal assessment of performance to reveal a covert relationship between their choices and their subsequent consequences to find a strategy leading to the highest cumulative reward. Our results show that the participants' reaction times were longer whenever the decision involved a future consequence, suggesting greater introspection whenever a delayed value had to be considered. The learning time varied significantly across participants. Second, we formalized the neurocognitive processes underlying decision-making within this task, combining mean-field representations of competing neural populations with a reinforcement learning mechanism. This model provided a plausible characterization of the brain dynamics underlying these processes, and reproduced each aspect of the participants' behavior, from their reaction times and choices to their learning rates. In summary, both the experimental results and the model provide a principled explanation of how delayed value may be computed and incorporated into the neural dynamics of decision-making, and of how learning occurs in these uncertain scenarios.
Affiliation(s)
- Gloria Cecchini
- Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Barcelona, Spain
- Center for Brain and Cognition, DTIC, Universitat Pompeu Fabra, Barcelona, Spain
- Michael DePass
- Center for Brain and Cognition, DTIC, Universitat Pompeu Fabra, Barcelona, Spain
- Emre Baspinar
- CNRS, Institute of Neuroscience (NeuroPSI), Paris-Saclay University, Saclay, France
- Marta Andujar
- Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy
- Surabhi Ramawat
- Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy
- Pierpaolo Pani
- Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy
- Stefano Ferraina
- Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy
- Alain Destexhe
- CNRS, Institute of Neuroscience (NeuroPSI), Paris-Saclay University, Saclay, France
- Rubén Moreno-Bote
- Center for Brain and Cognition, DTIC, Universitat Pompeu Fabra, Barcelona, Spain
- Serra-Hunter Fellow Programme, Barcelona, Spain
- Ignasi Cos
- Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Barcelona, Spain
- Serra-Hunter Fellow Programme, Barcelona, Spain
3
Kubo Y, Chalmers E, Luczak A. Biologically-inspired neuronal adaptation improves learning in neural networks. Commun Integr Biol 2023; 16:2163131. PMID: 36685291; PMCID: PMC9851208; DOI: 10.1080/19420889.2022.2163131.
Abstract
Since humans still outperform artificial neural networks on many tasks, drawing inspiration from the brain may help to improve current machine learning algorithms. Contrastive Hebbian learning (CHL) and equilibrium propagation (EP) are biologically plausible algorithms that update weights using only local information (without explicitly calculating gradients) and still achieve performance comparable to conventional backpropagation. In this study, we augmented CHL and EP with Adjusted Adaptation, inspired by the adaptation effect observed in neurons, in which a neuron's response to a given stimulus is adjusted after a short time. We added this adaptation feature to multilayer perceptrons and convolutional neural networks trained on MNIST and CIFAR-10. Surprisingly, adaptation improved the performance of these networks. We discuss the biological inspiration for this idea and investigate why neuronal adaptation could be an important brain mechanism to improve the stability and accuracy of learning.
Affiliation(s)
- Yoshimasa Kubo
- Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, AB, Canada
- Eric Chalmers
- Department of Mathematics & Computing, Mount Royal University, Calgary, AB, Canada
- Artur Luczak
- Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, AB, Canada
4
Kim SY, Lim W. Dynamical origin for winner-take-all competition in a biological network of the hippocampal dentate gyrus. Phys Rev E 2022; 105:014418. PMID: 35193268; DOI: 10.1103/PhysRevE.105.014418.
Abstract
We consider a biological network of the hippocampal dentate gyrus (DG). Computational models suggest that the DG would be a preprocessor for pattern separation (i.e., a process transforming a set of similar input patterns into distinct nonoverlapping output patterns) which could facilitate pattern storage and retrieval in the CA3 area of the hippocampus. The main encoding cells in the DG are the granule cells (GCs), which receive the input from the entorhinal cortex (EC) and send their output to the CA3. We note that the activation degree of GCs is very low (∼5%). This sparsity has been thought to enhance the pattern separation. We investigate the dynamical origin for winner-take-all (WTA) competition which leads to sparse activation of the GCs. The GCs are grouped into lamellar clusters. In each cluster, there is one inhibitory (I) basket cell (BC) along with excitatory (E) GCs. There are three kinds of external inputs into the GCs: the direct excitatory EC input; the indirect feedforward inhibitory EC input, mediated by the HIPP (hilar perforant path-associated) cells; and the excitatory input from the hilar mossy cells (MCs). The firing activities of the GCs are determined via competition between the external E and I inputs. The E-I conductance ratio R_{E-I}^{(con)*} (given by the time average of the ratio of the external E to I conductances) may represent well the degree of such external E-I input competition. We thus find that GCs become active when their R_{E-I}^{(con)*} is larger than a threshold R_{th}^*, and then the mean firing rates of the active GCs are strongly correlated with R_{E-I}^{(con)*}. In each cluster, the feedback inhibition from the BC may select the winner GCs. GCs with R_{E-I}^{(con)*} larger than the threshold R_{th}^* survive and become winners; all the other GCs with smaller R_{E-I}^{(con)*} become silent. 
In this way, WTA competition occurs via competition between the firing activity of the GCs and the feedback inhibition from the BC in each cluster. Finally, we also study the effects of MC death and adult-born immature GCs on the WTA competition.
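The selection rule summarized above reduces to a threshold test on each GC's time-averaged E-I conductance ratio. A minimal sketch of that test, with arbitrary illustrative conductance values rather than the paper's simulated ones:

```python
import numpy as np

rng = np.random.default_rng(0)

# Time-averaged external E and I conductances for 10 GCs in one cluster.
# Values are arbitrary illustrations, not fitted to the dentate gyrus model.
g_exc = rng.uniform(0.5, 2.0, size=10)
g_inh = rng.uniform(0.8, 1.5, size=10)

R = g_exc / g_inh          # analogue of R_{E-I}^{(con)*} per GC
R_th = 1.2                 # analogue of the threshold R_th^*

winners = np.flatnonzero(R > R_th)    # GCs that survive feedback inhibition
silent = np.flatnonzero(R <= R_th)    # GCs suppressed by the basket cell
```

The caricature captures only the thresholding step; in the full model the threshold emerges dynamically from the basket cell's feedback inhibition rather than being fixed in advance.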
Affiliation(s)
- Sang-Yoon Kim
- Institute for Computational Neuroscience and Department of Science Education, Daegu National University of Education, Daegu 42411, Korea
- Woochang Lim
- Institute for Computational Neuroscience and Department of Science Education, Daegu National University of Education, Daegu 42411, Korea
5
Akil AE, Rosenbaum R, Josić K. Balanced networks under spike-time dependent plasticity. PLoS Comput Biol 2021; 17:e1008958. PMID: 33979336; PMCID: PMC8143429; DOI: 10.1371/journal.pcbi.1008958.
Abstract
The dynamics of local cortical networks are irregular, but correlated. Dynamic excitatory-inhibitory balance is a plausible mechanism that generates such irregular activity, but it remains unclear how balance is achieved and maintained in plastic neural networks. In particular, it is not fully understood how plasticity-induced changes in the network affect balance, and in turn, how correlated, balanced activity impacts learning. How do the dynamics of balanced networks change under different plasticity rules? How does correlated spiking activity in recurrent networks change the evolution of weights, their eventual magnitude, and structure across the network? To address these questions, we develop a theory of spike-timing-dependent plasticity in balanced networks. We show that balance can be attained and maintained under plasticity-induced weight changes. We find that correlations in the input mildly affect the evolution of synaptic weights. Under certain plasticity rules, we find an emergence of correlations between firing rates and synaptic weights. Under these rules, synaptic weights converge to a stable manifold in weight space with their final configuration dependent on the initial state of the network. Lastly, we show that our framework can also describe the dynamics of plastic balanced networks when subsets of neurons receive targeted optogenetic input.

Animals are able to learn complex tasks through changes in individual synapses between cells. Such changes lead to the coevolution of neural activity patterns and the structure of neural connectivity, but the consequences of these interactions are not fully understood. We consider plasticity in model neural networks which achieve an average balance between the excitatory and inhibitory synaptic inputs to different cells, and display cortical-like, irregular activity. We extend the theory of balanced networks to account for synaptic plasticity and show which rules can maintain balance, and which will drive the network into a different state. This theory of plasticity can provide insights into the relationship between stimuli, network dynamics, and synaptic circuitry.
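The abstract does not specify the plasticity kernel; theories of this kind typically build on a pair-based STDP rule, which can be sketched as follows (amplitudes and time constants are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for a spike-time difference
    dt = t_post - t_pre (ms): potentiate when the presynaptic spike
    leads the postsynaptic one, depress when it lags.
    All parameters are illustrative."""
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))

# pre 5 ms before post -> potentiation; post 5 ms before pre -> depression
dw = stdp_dw(np.array([5.0, -5.0]))
```

With a_minus slightly larger than a_plus, uncorrelated spiking produces net depression on average, a common choice for keeping weights bounded in such models.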
Affiliation(s)
- Alan Eric Akil
- Department of Mathematics, University of Houston, Houston, Texas, United States of America
- Robert Rosenbaum
- Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, Indiana, United States of America
- Interdisciplinary Center for Network Science and Applications, University of Notre Dame, Notre Dame, Indiana, United States of America
- Krešimir Josić
- Department of Mathematics, University of Houston, Houston, Texas, United States of America
- Department of Biology and Biochemistry, University of Houston, Houston, Texas, United States of America
6
Biophysically grounded mean-field models of neural populations under electrical stimulation. PLoS Comput Biol 2020; 16:e1007822. PMID: 32324734; PMCID: PMC7200022; DOI: 10.1371/journal.pcbi.1007822.
Abstract
Electrical stimulation of neural systems is a key tool for understanding neural dynamics and ultimately for developing clinical treatments. Many applications of electrical stimulation affect large populations of neurons. However, computational models of large networks of spiking neurons are inherently hard to simulate and analyze. We evaluate a reduced mean-field model of excitatory and inhibitory adaptive exponential integrate-and-fire (AdEx) neurons which can be used to efficiently study the effects of electrical stimulation on large neural populations. The rich dynamical properties of this basic cortical model are described in detail and validated using large network simulations. Bifurcation diagrams reflecting the network's state reveal asynchronous up- and down-states, bistable regimes, and oscillatory regions corresponding to fast excitation-inhibition and slow excitation-adaptation feedback loops. The biophysical parameters of the AdEx neuron can be coupled to an electric field with realistic field strengths which then can be propagated up to the population description. We show how on the edge of bifurcation, direct electrical inputs cause network state transitions, such as turning on and off oscillations of the population rate. Oscillatory input can frequency-entrain and phase-lock endogenous oscillations. Relatively weak electric field strengths on the order of 1 V/m are able to produce these effects, indicating that field effects are strongly amplified in the network. The effects of time-varying external stimulation are well-predicted by the mean-field model, further underpinning the utility of low-dimensional neural mass models.
7
Durstewitz D, Koppe G, Meyer-Lindenberg A. Deep neural networks in psychiatry. Mol Psychiatry 2019; 24:1583-1598. PMID: 30770893; DOI: 10.1038/s41380-019-0365-9.
Abstract
Machine and deep learning methods, today's core of artificial intelligence, have been applied with increasing success and impact in many commercial and research settings. They are powerful tools for large-scale data analysis, prediction and classification, especially in very data-rich environments ("big data"), and have started to find their way into medical applications. Here we will first give an overview of machine learning methods, with a focus on deep and recurrent neural networks, their relation to statistics, and the core principles behind them. We will then discuss and review directions along which (deep) neural networks can be, or already have been, applied in the context of psychiatry, and will try to delineate their future potential in this area. We will also comment on an emerging area that so far has been much less well explored: by embedding semantically interpretable computational models of brain dynamics or behavior into a statistical machine learning context, insights into dysfunction beyond mere prediction and classification may be gained. In particular, this marriage of computational models with statistical inference may offer insights into neural and behavioral mechanisms that could open completely novel avenues for psychiatric treatment.
Affiliation(s)
- Daniel Durstewitz
- Department of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim/Heidelberg University, 68159 Mannheim, Germany
- Georgia Koppe
- Department of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim/Heidelberg University, 68159 Mannheim, Germany
- Department of Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim/Heidelberg University, 68159 Mannheim, Germany
- Andreas Meyer-Lindenberg
- Department of Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim/Heidelberg University, 68159 Mannheim, Germany
8
Koppe G, Toutounji H, Kirsch P, Lis S, Durstewitz D. Identifying nonlinear dynamical systems via generative recurrent neural networks with applications to fMRI. PLoS Comput Biol 2019; 15:e1007263. PMID: 31433810; PMCID: PMC6719895; DOI: 10.1371/journal.pcbi.1007263.
Abstract
A major tenet in theoretical neuroscience is that cognitive and behavioral processes are ultimately implemented in terms of the neural system dynamics. Accordingly, a major aim for the analysis of neurophysiological measurements should lie in the identification of the computational dynamics underlying task processing. Here we advance a state space model (SSM) based on generative piecewise-linear recurrent neural networks (PLRNN) to assess dynamics from neuroimaging data. In contrast to many other nonlinear time series models which have been proposed for reconstructing latent dynamics, our model is easily interpretable in neural terms, amenable to systematic dynamical systems analysis of the resulting set of equations, and can straightforwardly be transformed into an equivalent continuous-time dynamical system. The major contributions of this paper are the introduction of a new observation model suitable for functional magnetic resonance imaging (fMRI) coupled to the latent PLRNN, an efficient stepwise training procedure that forces the latent model to capture the 'true' underlying dynamics rather than just fitting (or predicting) the observations, and of an empirical measure based on the Kullback-Leibler divergence to evaluate from empirical time series how well this goal of approximating the underlying dynamics has been achieved. We validate and illustrate the power of our approach on simulated 'ground-truth' dynamical systems as well as on experimental fMRI time series, and demonstrate that the learnt dynamics harbors task-related nonlinear structure that a linear dynamical model fails to capture. Given that fMRI is one of the most common techniques for measuring brain activity non-invasively in human subjects, this approach may provide a novel step toward analyzing aberrant (nonlinear) dynamics for clinical assessment or neuroscientific research.
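The latent dynamics at the core of this approach, a piecewise-linear recurrent neural network, can be sketched as a single update equation. The dimensions and parameter values below are placeholders, and the fMRI observation model and stepwise training procedure are omitted entirely:

```python
import numpy as np

def plrnn_step(z, A, W, h, s=None, C=None, rng=None, sigma=0.0):
    """One latent step of a piecewise-linear RNN:
    z_t = A z_{t-1} + W relu(z_{t-1}) + h (+ C s_t + noise),
    with A diagonal (self-connections) and W off-diagonal (coupling)."""
    z_next = A @ z + W @ np.maximum(z, 0.0) + h
    if s is not None:
        z_next += C @ s                       # external task inputs
    if rng is not None and sigma > 0.0:
        z_next += rng.normal(0.0, sigma, size=z.shape)
    return z_next

rng = np.random.default_rng(1)
M = 3                                          # latent dimension (placeholder)
A = np.diag([0.9, 0.8, 0.7])                   # stable diagonal dynamics
W = 0.1 * (1.0 - np.eye(M)) * rng.standard_normal((M, M))
h = np.zeros(M)

z = rng.standard_normal(M)
traj = [z]
for _ in range(50):                            # deterministic rollout
    z = plrnn_step(z, A, W, h)
    traj.append(z)
```

Because each orthant of the state space has its own linear map, fixed points and cycles of the trained model can be analyzed piece by piece, which is what makes the model "easily interpretable in neural terms" compared to generic nonlinear time series models.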
Affiliation(s)
- Georgia Koppe
- Department of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Department of Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Hazem Toutounji
- Department of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Peter Kirsch
- Department of Clinical Psychology, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Stefanie Lis
- Institute for Psychiatric and Psychosomatic Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Daniel Durstewitz
- Department of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Faculty of Physics and Astronomy, Heidelberg University, Heidelberg, Germany
9
di Volo M, Romagnoni A, Capone C, Destexhe A. Biologically realistic mean-field models of conductance-based networks of spiking neurons with adaptation. Neural Comput 2019; 31:653-680. DOI: 10.1162/neco_a_01173.
Abstract
Accurate population models are needed to build very large-scale neural models, but their derivation is difficult for realistic networks of neurons, in particular when nonlinear properties are involved, such as conductance-based interactions and spike-frequency adaptation. Here, we consider such models based on networks of adaptive exponential integrate-and-fire excitatory and inhibitory neurons. Using a master equation formalism, we derive a mean-field model of such networks and compare it to the full network dynamics. The mean-field model is capable of correctly predicting the average spontaneous activity levels in asynchronous irregular regimes similar to in vivo activity. It also captures the transient temporal response of the network to complex external inputs. Finally, the mean-field model is also able to quantitatively describe regimes where high- and low-activity states alternate (up-down state dynamics), leading to slow oscillations. We conclude that such mean-field models are biologically realistic in the sense that they can capture both spontaneous and evoked activity, and they naturally appear as candidates to build very large-scale models involving multiple brain areas.
Affiliation(s)
- Matteo di Volo
- Unité de Neuroscience, Information et Complexité, CNRS FRE 3693, 91198 Gif-sur-Yvette, France
- Alberto Romagnoni
- Centre de Recherche sur l'inflammation UMR 1149, Inserm-Université Paris Diderot, 75018 Paris, France
- Data Team, Département d'informatique de l'École normale supérieure, CNRS, PSL Research University, 75005 Paris, France
- European Institute for Theoretical Neuroscience, 75012 Paris, France
- Cristiano Capone
- European Institute for Theoretical Neuroscience, 75012 Paris, France
- INFN Sezione di Roma, 00185 Rome, Italy
- Alain Destexhe
- Unité de Neuroscience, Information et Complexité, CNRS FRE 3693, 91198 Gif-sur-Yvette, France
- European Institute for Theoretical Neuroscience, 75012 Paris, France
10
An efficient population density method for modeling neural networks with synaptic dynamics manifesting finite relaxation time and short-term plasticity. eNeuro 2019; 5:ENEURO.0002-18.2018. PMID: 30662939; PMCID: PMC6336402; DOI: 10.1523/ENEURO.0002-18.2018.
Abstract
When incorporating more realistic synaptic dynamics, the computational efficiency of population density methods (PDMs) declines sharply due to the increase in the dimension of master equations. To avoid such a decline, we develop an efficient PDM, termed colored-synapse PDM (csPDM), in which the dimension of the master equations does not depend on the number of synapse-associated state variables in the underlying network model. Our goal is to allow the PDM to incorporate realistic synaptic dynamics that possesses not only finite relaxation time but also short-term plasticity (STP). The model equations of csPDM are derived based on the diffusion approximation on synaptic dynamics and probability density function methods for Langevin equations with colored noise. Numerical examples, given by simulations of the population dynamics of uncoupled exponential integrate-and-fire (EIF) neurons, show good agreement between the results of csPDM and Monte Carlo simulations (MCSs). Compared to the original full-dimensional PDM (fdPDM), the csPDM exhibits far better computational efficiency because of the lower dimension of the master equations. In addition, it permits network dynamics to possess the short-term plastic characteristics inherited from plastic synapses. The novel csPDM is potentially applicable to any spiking neuron model because it makes no assumptions about neuronal dynamics, and, more importantly, this is the first report of a PDM that successfully encompasses short-term facilitation/depression properties.
11
Setareh H, Deger M, Gerstner W. Excitable neuronal assemblies with adaptation as a building block of brain circuits for velocity-controlled signal propagation. PLoS Comput Biol 2018; 14:e1006216. PMID: 29979674; PMCID: PMC6051644; DOI: 10.1371/journal.pcbi.1006216.
Abstract
The time scale of neuronal network dynamics is determined by synaptic interactions and neuronal signal integration, both of which occur on the time scale of milliseconds. Yet many behaviors like the generation of movements or vocalizations of sounds occur on the much slower time scale of seconds. Here we ask the question of how neuronal networks of the brain can support reliable behavior on this time scale. We argue that excitable neuronal assemblies with spike-frequency adaptation may serve as building blocks that can flexibly adjust the speed of execution of neural circuit function. We show in simulations that a chain of neuronal assemblies can propagate signals reliably, similar to the well-known synfire chain, but with the crucial difference that the propagation speed is slower and tunable to the behaviorally relevant range. Moreover we study a grid of excitable neuronal assemblies as a simplified model of the somatosensory barrel cortex of the mouse and demonstrate that various patterns of experimentally observed spatial activity propagation can be explained.

Models of activity propagation in cortical networks have often been based on feedforward structures. Here we propose a model of activity propagation, called excitation chain, which does not need such a feedforward structure. The model is composed of excitable neural assemblies with spike-frequency adaptation, connected bidirectionally in a row or a grid. This prototypical neural circuit can propagate activity forwards, backwards or in both directions. Furthermore, the propagation speed is slow enough to trigger the generation of behaviors on the time scale of hundreds of milliseconds. A two-dimensional variant of the model is able to generate different activity propagation patterns, similar to spontaneous activity and stimulus-evoked responses in anesthetized mouse barrel cortex. We propose the excitation chain model as a basic component that can be employed in various ways to create spiking neural circuit models that generate signals on behavioral time scales. In contrast to abstract models of excitable media, our model makes an explicit link to the time scale of neuronal spikes.
Affiliation(s)
- Hesam Setareh
- School of Computer and Communication Sciences and Brain Mind Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Switzerland
- Moritz Deger
- School of Computer and Communication Sciences and Brain Mind Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Switzerland
- Institute for Zoology, Faculty of Mathematics and Natural Sciences, University of Cologne, Köln, Germany
- Wulfram Gerstner
- School of Computer and Communication Sciences and Brain Mind Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Switzerland
12
Imbalanced amplification: A mechanism of amplification and suppression from local imbalance of excitation and inhibition in cortical circuits. PLoS Comput Biol 2018; 14:e1006048. PMID: 29543827; PMCID: PMC5871018; DOI: 10.1371/journal.pcbi.1006048.
Abstract
Understanding the relationship between external stimuli and the spiking activity of cortical populations is a central problem in neuroscience. Dense recurrent connectivity in local cortical circuits can lead to counterintuitive response properties, raising the question of whether there are simple arithmetical rules for relating circuits’ connectivity structure to their response properties. One such arithmetic is provided by the mean field theory of balanced networks, which is derived in a limit where excitatory and inhibitory synaptic currents precisely balance on average. However, balanced network theory is not applicable to some biologically relevant connectivity structures. We show that cortical circuits with such structure are susceptible to an amplification mechanism arising when excitatory-inhibitory balance is broken at the level of local subpopulations, but maintained at a global level. This amplification, which can be quantified by a linear correction to the classical mean field theory of balanced networks, explains several response properties observed in cortical recordings and provides fundamental insights into the relationship between connectivity structure and neural responses in cortical circuits. Understanding how the brain represents and processes stimuli requires a quantitative understanding of how signals propagate through networks of neurons. Developing such an understanding is made difficult by the dense interconnectivity of neurons, especially in the cerebral cortex. One approach to quantifying neural processing in the cortex is derived from observations that excitatory (positive) and inhibitory (negative) interactions between neurons tend to balance each other in many brain areas. This balance is achieved under a class of computational models called “balanced networks.” However, previous approaches to the mathematical analysis of balanced network models are not applicable under some biologically relevant connectivity structures.
We show that, under these structures, balance between excitation and inhibition is necessarily broken and the resulting imbalance causes some stimulus features to be amplified. This “imbalanced amplification” of stimuli can explain several observations from recordings in mouse somatosensory and visual cortical circuits and provides fundamental insights into the relationship between connectivity structure and neural responses in cortical circuits.
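The "simple arithmetic" referenced here is, at leading order in the strong-coupling limit, linear: excitatory and inhibitory inputs cancel on average, so the steady-state population rates r satisfy W r + X ≈ 0 for mean connectivity W and external input X. A minimal numerical illustration of this balance condition (the weight and input values below are invented for the example, not taken from the paper):

```python
import numpy as np

# Mean-field connectivity: rows = target population (E, I), columns = source.
W = np.array([[0.8, -1.2],   # onto E: excitation from E, inhibition from I
              [1.0, -1.0]])  # onto I: excitation from E, inhibition from I
X = np.array([1.0, 0.7])     # external drive to the E and I populations

# Balance condition W r + X = 0 gives the leading-order population rates.
rates = np.linalg.solve(W, -X)
```

When a stimulus or connectivity perturbation makes this system insoluble with positive rates for a local subpopulation while the global balance survives, the linear correction to these equations yields the amplification the paper analyzes.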
|
13
|
Ly C, Marsat G. Variable synaptic strengths controls the firing rate distribution in feedforward neural networks. J Comput Neurosci 2017; 44:75-95. [DOI: 10.1007/s10827-017-0670-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2017] [Revised: 10/18/2017] [Accepted: 10/19/2017] [Indexed: 12/27/2022]
|
14
|
Østergaard J, Kramer MA, Eden UT. Capturing Spike Variability in Noisy Izhikevich Neurons Using Point Process Generalized Linear Models. Neural Comput 2017; 30:125-148. [PMID: 29064782 DOI: 10.1162/neco_a_01030] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
To understand neural activity, two broad categories of models exist: statistical and dynamical. While statistical models possess rigorous methods for parameter estimation and goodness-of-fit assessment, dynamical models provide mechanistic insight. In general, these two categories of models are separately applied; understanding the relationships between these modeling approaches remains an area of active research. In this letter, we examine this relationship using simulation. To do so, we first generate spike train data from a well-known dynamical model, the Izhikevich neuron, with a noisy input current. We then fit these spike train data with a statistical model (a generalized linear model, GLM, with multiplicative influences of past spiking). For different levels of noise, we show how the GLM captures both the deterministic features of the Izhikevich neuron and the variability driven by the noise. We conclude that the GLM captures essential features of the simulated spike trains, but for near-deterministic spike trains, goodness-of-fit analyses reveal that the model does not fit very well in a statistical sense; the essential random part of the GLM is not captured.
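The simulation half of this pipeline, an Izhikevich neuron driven by a noisy input current, is compact enough to sketch. This is a hedged illustration using the standard regular-spiking parameter set of the Izhikevich model, not the authors' code, and the subsequent GLM estimation step is omitted:

```python
import numpy as np

def izhikevich_spikes(I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5):
    """Euler simulation of the Izhikevich neuron:
    dv/dt = 0.04 v^2 + 5 v + 140 - u + I,  du/dt = a (b v - u);
    on v >= 30 mV: v <- c, u <- u + d. Returns spike step indices."""
    v, u = -65.0, b * -65.0
    spikes = []
    for t, I_t in enumerate(I):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I_t)
        u += dt * a * (b * v - u)
        if v >= 30.0:          # spike cutoff reached
            v, u = c, u + d    # reset with spike-triggered recovery increment
            spikes.append(t)
    return spikes

# Noisy drive as in the paper's setup: constant bias plus Gaussian fluctuations
# (bias and noise amplitude are illustrative assumptions).
rng = np.random.default_rng(1)
drive = 10.0 + 2.0 * rng.standard_normal(4000)   # 2 s at dt = 0.5 ms
spike_steps = izhikevich_spikes(drive)
```

Fitting the point-process GLM would then regress the per-bin spike probability on the stimulus and on multiplicative past-spiking terms; varying the noise amplitude in `drive` reproduces the regimes compared in the study.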
Affiliation(s)
- Uri T Eden
- Boston University, Boston, MA 02215, U.S.A.
|
15
|
Augustin M, Ladenbauer J, Baumann F, Obermayer K. Low-dimensional spike rate models derived from networks of adaptive integrate-and-fire neurons: Comparison and implementation. PLoS Comput Biol 2017. [PMID: 28644841 PMCID: PMC5507472 DOI: 10.1371/journal.pcbi.1005545] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/12/2023] Open
Abstract
The spiking activity of single neurons can be well described by a nonlinear integrate-and-fire model that includes somatic adaptation. When exposed to fluctuating inputs, sparsely coupled populations of these model neurons exhibit stochastic collective dynamics that can be effectively characterized using the Fokker-Planck equation. This approach, however, leads to a model with an infinite-dimensional state space and non-standard boundary conditions. Here we derive from that description four simple models for the spike rate dynamics in terms of low-dimensional ordinary differential equations using two different reduction techniques: one uses the spectral decomposition of the Fokker-Planck operator, the other is based on a cascade of two linear filters and a nonlinearity, which are determined from the Fokker-Planck equation and semi-analytically approximated. We evaluate the reduced models for a wide range of biologically plausible input statistics and find that both approximation approaches lead to spike rate models that accurately reproduce the spiking behavior of the underlying adaptive integrate-and-fire population. In particular, the cascade-based models are overall the most accurate and robust, especially in the sensitive region of rapidly changing input. For the mean-driven regime, however, when input fluctuations are not too strong and fast, the best performing model is based on the spectral decomposition. The low-dimensional models also faithfully reproduce stable oscillatory spike rate dynamics that are generated either by recurrent synaptic excitation and neuronal adaptation or through delayed inhibitory synaptic feedback. The computational demands of the reduced models are very low, but the implementation complexity differs between the model variants. 
We have therefore made available, as open source software, implementations that efficiently integrate the low-dimensional spike rate models as well as the Fokker-Planck partial differential equation for arbitrary model parametrizations. The derived spike rate descriptions retain a direct link to the properties of single neurons, allow for convenient mathematical analyses of network states, and are well suited for application in neural mass/mean-field based brain network models. Characterizing the dynamics of biophysically modeled, large neuronal networks usually involves extensive numerical simulations. As an alternative to this expensive procedure we propose efficient models that describe the network activity in terms of a few ordinary differential equations. These systems are simple to solve and allow for convenient investigations of asynchronous, oscillatory or chaotic network states because linear stability analyses and powerful related methods are readily applicable. We build upon two research lines on which substantial efforts have been exerted in the last two decades: (i) the development of single neuron models of reduced complexity that can accurately reproduce a large repertoire of observed neuronal behavior, and (ii) different approaches to approximate the Fokker-Planck equation that represents the collective dynamics of large neuronal networks. We combine these advances and extend recent approximation methods of the latter kind to obtain spike rate models that reproduce the macroscopic dynamics of the underlying neuronal network surprisingly well. At the same time the microscopic properties are retained through the single neuron model parameters. To enable fast adoption, we have released an efficient Python implementation as open source software under a free license.
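As a rough illustration of the cascade idea, reducing a spiking population to a linear temporal filter followed by a static nonlinearity, consider the single-filter sketch below. The filter, gain, and threshold here are invented placeholders; the paper determines a two-filter cascade semi-analytically from the Fokker-Planck equation rather than assuming one:

```python
import numpy as np

def cascade_rate(inp, dt=0.001, tau=0.02, gain=50.0, thresh=0.2):
    """One-filter cascade model: exponential low-pass filter of the input,
    then a rectifying static nonlinearity, returning a rate in Hz."""
    x = 0.0
    filtered = np.empty_like(inp, dtype=float)
    for t, s in enumerate(inp):
        x += dt * (s - x) / tau            # linear filter stage
        filtered[t] = x
    return gain * np.maximum(filtered - thresh, 0.0)   # static nonlinearity
```

For a unit step input, the output rate relaxes toward `gain * (1 - thresh)`; the reduced models in the paper work the same way but with filters and nonlinearity fitted to the underlying adaptive integrate-and-fire population.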
Affiliation(s)
- Moritz Augustin
- Department of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Josef Ladenbauer
- Department of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany; Group for Neural Theory, Laboratoire de Neurosciences Cognitives, École Normale Supérieure, Paris, France
- Fabian Baumann
- Department of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Klaus Obermayer
- Department of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
|
16
|
Setareh H, Deger M, Petersen CCH, Gerstner W. Cortical Dynamics in Presence of Assemblies of Densely Connected Weight-Hub Neurons. Front Comput Neurosci 2017; 11:52. [PMID: 28690508 PMCID: PMC5480278 DOI: 10.3389/fncom.2017.00052] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2017] [Accepted: 05/29/2017] [Indexed: 01/21/2023] Open
Abstract
Experimental measurements of pairwise connection probability of pyramidal neurons together with the distribution of synaptic weights have been used to construct randomly connected model networks. However, several experimental studies suggest that both wiring and synaptic weight structure between neurons show statistics that differ from random networks. Here we study a network containing a subset of neurons which we call weight-hub neurons, that are characterized by strong inward synapses. We propose a connectivity structure for excitatory neurons that contain assemblies of densely connected weight-hub neurons, while the pairwise connection probability and synaptic weight distribution remain consistent with experimental data. Simulations of such a network with generalized integrate-and-fire neurons display regular and irregular slow oscillations akin to experimentally observed up/down state transitions in the activity of cortical neurons with a broad distribution of pairwise spike correlations. Moreover, stimulation of a model network in the presence or absence of assembly structure exhibits responses similar to light-evoked responses of cortical layers in optogenetically modified animals. We conclude that a high connection probability into and within assemblies of excitatory weight-hub neurons, as it likely is present in some but not all cortical layers, changes the dynamics of a layer of cortical microcircuitry significantly.
Affiliation(s)
- Hesam Setareh
- Laboratory of Computational Neuroscience, School of Computer and Communication Sciences and Brain Mind Institute, School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Moritz Deger
- Laboratory of Computational Neuroscience, School of Computer and Communication Sciences and Brain Mind Institute, School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland; Faculty of Mathematics and Natural Sciences, Institute for Zoology, University of Cologne, Cologne, Germany
- Carl C H Petersen
- Laboratory of Sensory Processing, Brain Mind Institute, School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Wulfram Gerstner
- Laboratory of Computational Neuroscience, School of Computer and Communication Sciences and Brain Mind Institute, School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
|
17
|
A state space approach for piecewise-linear recurrent neural networks for identifying computational dynamics from neural measurements. PLoS Comput Biol 2017; 13:e1005542. [PMID: 28574992 PMCID: PMC5456035 DOI: 10.1371/journal.pcbi.1005542] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2016] [Accepted: 04/26/2017] [Indexed: 01/21/2023] Open
Abstract
The computational and cognitive properties of neural systems are often thought to be implemented in terms of their (stochastic) network dynamics. Hence, recovering the system dynamics from experimentally observed neuronal time series, like multiple single-unit recordings or neuroimaging data, is an important step toward understanding its computations. Ideally, one would not only seek a (lower-dimensional) state space representation of the dynamics, but would wish to have access to its statistical properties and their generative equations for in-depth analysis. Recurrent neural networks (RNNs) are a computationally powerful and dynamically universal formal framework which has been extensively studied from both the computational and the dynamical systems perspective. Here we develop a semi-analytical maximum-likelihood estimation scheme for piecewise-linear RNNs (PLRNNs) within the statistical framework of state space models, which accounts for noise in both the underlying latent dynamics and the observation process. The Expectation-Maximization algorithm is used to infer the latent state distribution, through a global Laplace approximation, and the PLRNN parameters iteratively. After validating the procedure on toy examples, and using inference through particle filters for comparison, the approach is applied to multiple single-unit recordings from the rodent anterior cingulate cortex (ACC) obtained during performance of a classical working memory task, delayed alternation. Models estimated from kernel-smoothed spike time data were able to capture the essential computational dynamics underlying task performance, including stimulus-selective delay activity. The estimated models were rarely multi-stable, however; rather, they were tuned to exhibit slow dynamics in the vicinity of a bifurcation point. 
In summary, the present work advances a semi-analytical (thus reasonably fast) maximum-likelihood estimation framework for PLRNNs that may enable recovery of relevant aspects of the nonlinear dynamics underlying observed neuronal time series, and directly link these to computational properties.
|
18
|
Wang L, Wang Y, Fu WL, Cao LH. Modulation of neuronal dynamic range using two different adaptation mechanisms. Neural Regen Res 2017; 12:447-451. [PMID: 28469660 PMCID: PMC5399723 DOI: 10.4103/1673-5374.202931] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
The capability of neurons to discriminate between intensities of an external stimulus is measured by their dynamic range. A larger dynamic range indicates a greater probability of neuronal survival. In this study, the potential roles of adaptation mechanisms (ion currents) in modulating the neuronal dynamic range were numerically investigated. Based on the adaptive exponential integrate-and-fire model, which includes two different adaptation mechanisms, i.e. subthreshold and suprathreshold (spike-triggered) adaptation, our results reveal that the two mechanisms play rather different roles in regulating the neuronal dynamic range. Specifically, subthreshold adaptation acts as a negative factor that observably decreases the neuronal dynamic range, while suprathreshold adaptation has little influence on it. Moreover, when stochastic noise was introduced into the adaptation mechanisms, the dynamic range was apparently enhanced, regardless of the state the neuron was in, e.g. adaptive or non-adaptive. Our results suggest that the neuronal dynamic range can be differentially modulated by different adaptation mechanisms. Additionally, noise was a non-negligible factor that could effectively modulate the neuronal dynamic range.
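The two mechanisms contrasted here map onto the two adaptation parameters of the adaptive exponential integrate-and-fire (AdEx) model the study builds on: the subthreshold coupling `a` and the spike-triggered increment `b`. A minimal Euler sketch of that model follows; the parameter values are illustrative assumptions in the spirit of standard AdEx settings, not the ones used in the paper:

```python
import numpy as np

def adex(I, dt=0.1, C=200.0, gL=10.0, EL=-70.0, VT=-50.0, DT=2.0,
         a=2.0, b=60.0, tau_w=120.0, Vr=-58.0, Vpeak=0.0):
    """Adaptive exponential integrate-and-fire neuron (pA, mV, ms, nS, pF).
    a (nS): subthreshold adaptation; b (pA): spike-triggered adaptation.
    I is the input current, one value per time step. Returns spike times (ms)."""
    V, w = EL, 0.0
    spikes = []
    for t, I_t in enumerate(I):
        dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) - w + I_t) / C
        dw = (a * (V - EL) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= Vpeak:      # spike: reset voltage, kick the adaptation current
            V = Vr
            w += b
            spikes.append(t * dt)
    return spikes
```

Setting `a = 0` isolates the suprathreshold (spike-triggered) mechanism and `b = 0` the subthreshold one, which is the decomposition the study performs when comparing their effects on the dynamic range.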
Affiliation(s)
- Lei Wang
- Neuroscience and Intelligent Media Institute, Communication University of China, Beijing, China
- Ye Wang
- Neuroscience and Intelligent Media Institute, Communication University of China, Beijing, China
- Wen-Long Fu
- Neuroscience and Intelligent Media Institute, Communication University of China, Beijing, China
- Li-Hong Cao
- Neuroscience and Intelligent Media Institute, Communication University of China, Beijing, China
|
19
|
Braun W, Thul R, Longtin A. Evolution of moments and correlations in nonrenewal escape-time processes. Phys Rev E 2017; 95:052127. [PMID: 28618562 DOI: 10.1103/physreve.95.052127] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2016] [Indexed: 06/07/2023]
Abstract
The theoretical description of nonrenewal stochastic systems is a challenge. Analytical results are often not available or can be obtained only under strong conditions, limiting their applicability. Also, numerical results have mostly been obtained by ad hoc Monte Carlo simulations, which are usually computationally expensive when a high degree of accuracy is needed. To gain quantitative insight into these systems under general conditions, we here introduce a numerical iterated first-passage time approach based on solving the time-dependent Fokker-Planck equation (FPE) to describe the statistics of nonrenewal stochastic systems. We illustrate the approach using spike-triggered neuronal adaptation in the leaky and perfect integrate-and-fire model, respectively. The transition to stationarity of first-passage time moments and their sequential correlations occur on a nontrivial time scale that depends on all system parameters. Surprisingly this is so for both single exponential and scale-free power-law adaptation. The method works beyond the small noise and time-scale separation approximations. It shows excellent agreement with direct Monte Carlo simulations, which allow for the computation of transient and stationary distributions. We compare different methods to compute the evolution of the moments and serial correlation coefficients (SCCs) and discuss the challenge of reliably computing the SCCs, which we find to be very sensitive to numerical inaccuracies for both the leaky and perfect integrate-and-fire models. In conclusion, our methods provide a general picture of nonrenewal dynamics in a wide range of stochastic systems exhibiting short- and long-range correlations.
Affiliation(s)
- Wilhelm Braun
- Department of Physics and Centre for Neural Dynamics, University of Ottawa, 598 King Edward, Ottawa K1N 6N5, Canada
- University of Ottawa Brain and Mind Research Institute, University of Ottawa, 451 Smyth Road, Ottawa, ON K1H 8M5, Canada
- Rüdiger Thul
- Centre for Mathematical Medicine and Biology, School of Mathematical Sciences, University of Nottingham, Nottingham NG7 2RD, United Kingdom
- André Longtin
- Department of Physics and Centre for Neural Dynamics, University of Ottawa, 598 King Edward, Ottawa K1N 6N5, Canada
- University of Ottawa Brain and Mind Research Institute, University of Ottawa, 451 Smyth Road, Ottawa, ON K1H 8M5, Canada
|
20
|
Towards a theory of cortical columns: From spiking neurons to interacting neural populations of finite size. PLoS Comput Biol 2017; 13:e1005507. [PMID: 28422957 PMCID: PMC5415267 DOI: 10.1371/journal.pcbi.1005507] [Citation(s) in RCA: 68] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2016] [Revised: 05/03/2017] [Accepted: 04/07/2017] [Indexed: 11/22/2022] Open
Abstract
Neural population equations such as neural mass or field models are widely used to study brain activity on a large scale. However, the relation of these models to the properties of single neurons is unclear. Here we derive an equation for several interacting populations at the mesoscopic scale starting from a microscopic model of randomly connected generalized integrate-and-fire neuron models. Each population consists of 50–2000 neurons of the same type but different populations account for different neuron types. The stochastic population equations that we find reveal how spike-history effects in single-neuron dynamics such as refractoriness and adaptation interact with finite-size fluctuations on the population level. Efficient integration of the stochastic mesoscopic equations reproduces the statistical behavior of the population activities obtained from microscopic simulations of a full spiking neural network model. The theory describes nonlinear emergent dynamics such as finite-size-induced stochastic transitions in multistable networks and synchronization in balanced networks of excitatory and inhibitory neurons. The mesoscopic equations are employed to rapidly integrate a model of a cortical microcircuit consisting of eight neuron types, which allows us to predict spontaneous population activities as well as evoked responses to thalamic input. Our theory establishes a general framework for modeling finite-size neural population dynamics based on single cell and synapse parameters and offers an efficient approach to analyzing cortical circuits and computations. Understanding the brain requires mathematical models on different spatial scales. On the “microscopic” level of nerve cells, neural spike trains can be well predicted by phenomenological spiking neuron models. On a coarse scale, neural activity can be modeled by phenomenological equations that summarize the total activity of many thousands of neurons. 
Such population models are widely used to model neuroimaging data such as EEG, MEG or fMRI data. However, it is largely unknown how large-scale models are connected to an underlying microscale model. Linking the scales is vital for a correct description of rapid changes and fluctuations of the population activity, and is crucial for multiscale brain models. The challenge is to treat realistic spiking dynamics as well as fluctuations arising from the finite number of neurons. We obtained such a link by deriving stochastic population equations on the mesoscopic scale of 100–1000 neurons from an underlying microscopic model. These equations can be efficiently integrated and reproduce results of a microscopic simulation while achieving a high speed-up factor. We expect that our novel population theory on the mesoscopic scale will be instrumental for understanding experimental data on information processing in the brain, and ultimately link microscopic and macroscopic activity patterns.
|
21
|
Schneider AD. Model Vestibular Nuclei Neurons Can Exhibit a Boosting Nonlinearity Due to an Adaptation Current Regulated by Spike-Triggered Calcium and Calcium-Activated Potassium Channels. PLoS One 2016; 11:e0159300. [PMID: 27427914 PMCID: PMC4948908 DOI: 10.1371/journal.pone.0159300] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2016] [Accepted: 06/30/2016] [Indexed: 11/18/2022] Open
Abstract
In vitro studies have previously found a class of vestibular nuclei neurons to exhibit a bidirectional afterhyperpolarization (AHP) in their membrane potential, due to calcium and calcium-activated potassium conductances. More recently, in vivo studies found such vestibular neurons to exhibit a boosting nonlinearity in their input-output tuning curves. In this paper, a Hodgkin-Huxley (HH) type neuron model, originally developed to reproduce the in vitro AHP, is shown to produce a boosting nonlinearity similar to that seen in vivo when the calcium conductance is increased. Indicative of a bifurcation, the HH model is reduced to a generalized integrate-and-fire (IF) model that preserves the bifurcation structure and boosting nonlinearity. By then projecting the neuron model’s phase space trajectories into 2D, the underlying geometric mechanism relating the AHP and boosting nonlinearity is revealed. Further simplifications and approximations are made to derive analytic expressions for the steady-state firing rate as a function of bias current, μ, as well as its gain (i.e. its slope) and the position of its peak at μ = μ*. Finally, although the boosting nonlinearity has not yet been experimentally observed in vitro, testable predictions indicate how it might be found.
|
22
|
Rosenbaum R. A Diffusion Approximation and Numerical Methods for Adaptive Neuron Models with Stochastic Inputs. Front Comput Neurosci 2016; 10:39. [PMID: 27148036 PMCID: PMC4840919 DOI: 10.3389/fncom.2016.00039] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2015] [Accepted: 04/04/2016] [Indexed: 11/16/2022] Open
Abstract
Characterizing the spiking statistics of neurons receiving noisy synaptic input is a central problem in computational neuroscience. Monte Carlo approaches to this problem are computationally expensive and often fail to provide mechanistic insight. Thus, the field has seen the development of mathematical and numerical approaches, often relying on a Fokker-Planck formalism. These approaches force a compromise between biological realism, accuracy and computational efficiency. In this article we develop an extension of existing diffusion approximations to more accurately approximate the response of neurons with adaptation currents and noisy synaptic currents. The implementation refines existing numerical schemes for solving the associated Fokker-Planck equations to improve computational efficiency and accuracy. Computer code implementing the developed algorithms is made available to the public.
Affiliation(s)
- Robert Rosenbaum
- Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, IN, USA
|
23
|
Schwalger T, Lindner B. Analytical approach to an integrate-and-fire model with spike-triggered adaptation. PHYSICAL REVIEW. E, STATISTICAL, NONLINEAR, AND SOFT MATTER PHYSICS 2015; 92:062703. [PMID: 26764723 DOI: 10.1103/physreve.92.062703] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/31/2015] [Indexed: 06/05/2023]
Abstract
The calculation of the steady-state probability density for multidimensional stochastic systems that do not obey detailed balance is a difficult problem. Here we present the analytical derivation of the stationary joint and various marginal probability densities for a stochastic neuron model with adaptation current. Our approach assumes weak noise but is valid for arbitrary adaptation strength and time scale. The theory predicts several effects of adaptation on the statistics of the membrane potential of a tonically firing neuron: (i) a membrane potential distribution with a convex shape, (ii) a strongly increased probability of hyperpolarized membrane potentials induced by strong and fast adaptation, and (iii) a maximized variability associated with the adaptation current at a finite adaptation time scale.
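The model treated analytically here, a leaky integrate-and-fire neuron with a spike-triggered adaptation current under weak noise, is straightforward to probe by direct simulation when a numerical check of the predicted densities is wanted. A dimensionless Euler-Maruyama sketch (all parameter values below are illustrative assumptions, not the paper's):

```python
import numpy as np

def lif_adapt(mu, delta=1.0, tau_a=10.0, D=0.1, dt=0.01, T=50000, seed=0):
    """Leaky IF neuron with spike-triggered adaptation (dimensionless units):
    dv/dt = mu - v - a + sqrt(2D) xi(t),  da/dt = -a / tau_a;
    on v >= 1: v <- 0 and a <- a + delta / tau_a.
    Returns (membrane potential samples, spike count)."""
    rng = np.random.default_rng(seed)
    v, a = 0.0, 0.0
    vs = np.empty(T)
    n_spikes = 0
    for t in range(T):
        v += dt * (mu - v - a) + np.sqrt(2.0 * D * dt) * rng.standard_normal()
        a -= dt * a / tau_a
        if v >= 1.0:                  # threshold crossing
            v = 0.0                   # reset
            a += delta / tau_a        # spike-triggered adaptation kick
            n_spikes += 1
        vs[t] = v
    return vs, n_spikes
```

Histogramming the returned samples estimates the marginal membrane-potential density; making the adaptation strong and fast (large `delta`, small `tau_a`) shifts probability mass toward hyperpolarized potentials, the effect the analysis predicts.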
Affiliation(s)
- Tilo Schwalger
- Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Station 15, CH-1015 Lausanne, Switzerland
- Bernstein Center for Computational Neuroscience, Haus 2, Philippstraße 13, 10115 Berlin, Germany
- Benjamin Lindner
- Bernstein Center for Computational Neuroscience, Haus 2, Philippstraße 13, 10115 Berlin, Germany
- Department of Physics, Humboldt-Universität zu Berlin, Newtonstraße 15, 12489 Berlin, Germany
|
24
|
Firing rate dynamics in recurrent spiking neural networks with intrinsic and network heterogeneity. J Comput Neurosci 2015; 39:311-27. [DOI: 10.1007/s10827-015-0578-0] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2015] [Revised: 07/06/2015] [Accepted: 09/23/2015] [Indexed: 11/25/2022]
|
25
|
Colliaux D, Yger P, Kaneko K. Impact of sub and supra-threshold adaptation currents in networks of spiking neurons. J Comput Neurosci 2015; 39:255-70. [PMID: 26400658 PMCID: PMC4649064 DOI: 10.1007/s10827-015-0575-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2014] [Revised: 07/30/2015] [Accepted: 08/04/2015] [Indexed: 11/26/2022]
Abstract
Neuronal adaptation is the intrinsic capacity of the brain to change, by various mechanisms, its dynamical responses as a function of the context. Such a phenomenon, widely observed in vivo and in vitro, is known to be crucial for homeostatic regulation of activity and for gain control. The effects of adaptation have already been studied at the single-cell level, resulting from either voltage- or calcium-gated channels, both activated by spiking activity and modulating the dynamical responses of neurons. In this study, by disentangling those effects into a linear (sub-threshold) and a non-linear (supra-threshold) part, we focus on the functional role of these two distinct components of adaptation in neuronal activity at various scales, from single-cell responses up to recurrent network dynamics, under stationary or non-stationary stimulation. The effects of slow currents on collective dynamics, such as modulation of population oscillations and reliability of spike patterns, are quantified for various types of adaptation in sparse recurrent networks.
Affiliation(s)
- David Colliaux
- Institut des Systèmes Intelligents et de Robotique (ISIR), CNRS UMR 7222, UPMC University Paris, 4 Place Jussieu, 75005, Paris, France
- Pierre Yger
- Institut d'Etudes de la Cognition, ENS, Paris, France
- Sorbonne Université, UPMC University Paris 06, UMR S968, Institut de la Vision, Paris, France
- INSERM, U968, Paris, France
- CNRS, UMR 7210, Paris, France
- Kunihiko Kaneko
- Department of Basic Science, The University of Tokyo, 3-8-1, Komaba, Meguro-ku, Tokyo, 153-8902, Japan
|