1
Yamakou ME, Kuehn C. Combined effects of spike-timing-dependent plasticity and homeostatic structural plasticity on coherence resonance. Phys Rev E 2023; 107:044302. [PMID: 37198865] [DOI: 10.1103/physreve.107.044302]
Abstract
Efficient processing and transfer of information in neurons have been linked to noise-induced resonance phenomena such as coherence resonance (CR), while adaptive rules in neural networks have mostly been linked to two prevalent mechanisms: spike-timing-dependent plasticity (STDP) and homeostatic structural plasticity (HSP). Thus, this paper investigates CR in small-world and random adaptive networks of Hodgkin-Huxley neurons driven by STDP and HSP. Our numerical study indicates that the degree of CR depends strongly, and in different ways, on the adjusting rate parameter P, which controls STDP, on the characteristic rewiring frequency parameter F, which controls HSP, and on the parameters of the network topology. In particular, we found two robust behaviors. (i) Decreasing P (which enhances the weakening effect of STDP on synaptic weights) and decreasing F (which slows down the swapping rate of synapses between neurons) always lead to higher degrees of CR in small-world and random networks, provided that the synaptic time delay τ_c takes appropriate values. (ii) Increasing the synaptic time delay τ_c induces multiple CR (MCR), i.e., the occurrence of multiple peaks in the degree of coherence as τ_c changes, in small-world and random networks, with MCR becoming more pronounced at smaller values of P and F. Our results imply that STDP and HSP can jointly play an essential role in enhancing the spike-timing precision necessary for optimal information processing and transfer in neural systems, and could thus have applications in designing networks of noisy artificial neural circuits engineered to use CR to optimize information processing and transfer.
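To make the two adaptive mechanisms concrete, the sketch below pairs an additive STDP update scaled by an adjusting rate P with a random synapse swap attempted at a characteristic frequency F, in the spirit of the abstract; the exponential STDP window, the constants, and the swap scheme are illustrative assumptions, not the paper's exact rules.

import numpy as np

rng = np.random.default_rng(0)

def stdp_dw(dt, P, A_plus=0.01, A_minus=0.012, tau=20.0):
    # Weight change for a spike-time difference dt = t_post - t_pre (ms);
    # P scales every update, standing in for the adjusting rate parameter.
    if dt >= 0:
        return P * A_plus * np.exp(-dt / tau)    # pre before post: potentiation
    return -P * A_minus * np.exp(dt / tau)       # post before pre: depression

def hsp_rewire(adj, F, dt_sim=0.1):
    # With probability F * dt_sim per time step, move one synapse of a random
    # presynaptic neuron to a new, previously unconnected postsynaptic target.
    n = adj.shape[0]
    if rng.random() < F * dt_sim:
        pre = rng.integers(n)
        old_targets = np.where(adj[pre] > 0)[0]
        free_targets = np.where((adj[pre] == 0) & (np.arange(n) != pre))[0]
        if old_targets.size and free_targets.size:
            old, new = rng.choice(old_targets), rng.choice(free_targets)
            adj[pre, new], adj[pre, old] = adj[pre, old], 0.0
    return adj

Under this reading, a smaller P shrinks every pairing-induced weight change and a smaller F makes synapse swaps rarer, which is the regime in which the abstract reports the highest degrees of coherence resonance.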
Affiliation(s)
- Marius E Yamakou
- Department of Data Science, Friedrich-Alexander-Universität Erlangen-Nürnberg, Cauerstr. 11, 91058 Erlangen, Germany
- Max-Planck-Institut für Mathematik in den Naturwissenschaften, Inselstr. 22, 04103 Leipzig, Germany
- Christian Kuehn
- Faculty of Mathematics, Technical University of Munich, Boltzmannstrasse 3, 85748 Garching bei München, Germany
- Complexity Science Hub Vienna, Josefstädter Strasse 39, 1080 Vienna, Austria
2
Spike-train level supervised learning algorithm based on bidirectional modification for liquid state machines. Appl Intell 2022. [DOI: 10.1007/s10489-022-04152-5]
3
Palgen JL, Perrillat-Mercerot A, Ceres N, Peyronnet E, Coudron M, Tixier E, Illigens BMW, Bosley J, L’Hostis A, Monteiro C. Integration of Heterogeneous Biological Data in Multiscale Mechanistic Model Calibration: Application to Lung Adenocarcinoma. Acta Biotheor 2022; 70:19. [PMID: 35796890] [PMCID: PMC9261258] [DOI: 10.1007/s10441-022-09445-3]
Abstract
Mechanistic models are built using knowledge as the primary information source, with well-established biological and physical laws determining the causal relationships within the model. Once the causal structure of the model is determined, parameters must be defined in order to accurately reproduce relevant data. Determining parameters and their values is particularly challenging for models of pathophysiology, for which calibration data are sparse. Multiple data sources might be required, and the data may not be in a uniform or desirable format. We describe a calibration strategy to address the challenges of scarcity and heterogeneity of calibration data. Our strategy focuses on parameters whose initial values cannot be easily derived from the literature, and our goal is to determine the values of these parameters via calibration with constraints set by relevant data. When combined with a covariance matrix adaptation evolution strategy (CMA-ES), this step-by-step approach can be applied to a wide range of biological models. We describe a stepwise, integrative and iterative approach to multiscale mechanistic model calibration, and provide an example of calibrating a pathophysiological lung adenocarcinoma model. Using the approach described here, we illustrate the successful calibration of a complex knowledge-based mechanistic model using only the limited heterogeneous datasets publicly available in the literature.
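As a minimal illustration of such a calibration loop (a sketch with made-up placeholders, not the authors' lung adenocarcinoma model), the snippet below fits two unknown parameters of a toy logistic-growth model to two heterogeneous datasets with the cma Python package, using box constraints to stand in for literature-informed bounds.

import numpy as np
import cma  # reference CMA-ES implementation (pip install cma)

def simulate(params, t):
    # Hypothetical mechanistic model: logistic tumor growth from volume 0.1.
    growth_rate, capacity = params
    return capacity / (1.0 + (capacity / 0.1 - 1.0) * np.exp(-growth_rate * t))

# Two heterogeneous calibration datasets on different time grids (made up).
t_a, y_a = np.array([0.0, 10.0, 20.0, 30.0]), np.array([0.1, 0.4, 1.2, 2.5])
t_b, y_b = np.array([5.0, 25.0]), np.array([0.2, 1.8])

def objective(params):
    # Normalized squared residuals so each dataset contributes comparably.
    r_a = (simulate(params, t_a) - y_a) / y_a.std()
    r_b = (simulate(params, t_b) - y_b) / y_b.std()
    return float(np.sum(r_a ** 2) + np.sum(r_b ** 2))

# CMA-ES search; the bounds play the role of constraints set by prior knowledge.
best, es = cma.fmin2(objective, [0.1, 5.0], 0.5,
                     options={'bounds': [[0.001, 0.5], [1.0, 10.0]], 'verbose': -9})
print("calibrated parameters:", best)

The step-by-step strategy of the paper would wrap many such CMA-ES runs around successively added parameters and datasets; the snippet only shows the innermost fitting step.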
Affiliation(s)
- Nicoletta Ceres
- Novadiscovery, Pl. Giovanni da Verrazzano, 69009 Lyon, Rhône, France
- Matthieu Coudron
- Novadiscovery, Pl. Giovanni da Verrazzano, 69009 Lyon, Rhône, France
- Eliott Tixier
- Novadiscovery, Pl. Giovanni da Verrazzano, 69009 Lyon, Rhône, France
- Ben M. W. Illigens
- Novadiscovery, Pl. Giovanni da Verrazzano, 69009 Lyon, Rhône, France
- Dresden International University, Freiberger Str. 37, 01067 Dresden, Germany
- Jim Bosley
- Novadiscovery, Pl. Giovanni da Verrazzano, 69009 Lyon, Rhône, France
- Adèle L’Hostis
- Novadiscovery, Pl. Giovanni da Verrazzano, 69009 Lyon, Rhône, France
- Claudio Monteiro
- Novadiscovery, Pl. Giovanni da Verrazzano, 69009 Lyon, Rhône, France
4
Surace SC, Pfister JP, Gerstner W, Brea J. On the choice of metric in gradient-based theories of brain function. PLoS Comput Biol 2020; 16:e1007640. [PMID: 32271761] [PMCID: PMC7144966] [DOI: 10.1371/journal.pcbi.1007640]
Abstract
This is a PLOS Computational Biology Education paper. The idea that the brain functions so as to minimize certain costs pervades theoretical neuroscience. Because a cost function by itself does not predict how the brain finds its minima, additional assumptions about the optimization method need to be made to predict the dynamics of physiological quantities. In this context, steepest descent (also called gradient descent) is often suggested as an algorithmic principle of optimization potentially implemented by the brain. In practice, researchers often consider the vector of partial derivatives as the gradient. However, the definition of the gradient and the notion of a steepest direction depend on the choice of a metric. Because the choice of the metric involves a large number of degrees of freedom, the predictive power of models that are based on gradient descent must be called into question, unless there are strong constraints on the choice of the metric. Here, we provide a didactic review of the mathematics of gradient descent, illustrate common pitfalls of using gradient descent as a principle of brain function with examples from the literature, and propose ways forward to constrain the metric.

A good skier may choose to follow the steepest direction to move as quickly as possible from the mountain peak to the base. Steepest descent in an abstract sense is also an appealing idea to describe adaptation and learning in the brain. For example, a scientist may hypothesize that synaptic or neuronal variables change in the direction of steepest descent in an abstract error landscape during learning of a new task or memorization of a new concept. There is, however, a pitfall in this reasoning: a multitude of steepest directions exists for any abstract error landscape because the steepest direction depends on how angles are measured, and it may be unclear how angles should be measured. Many scientists are taught that the steepest direction can be found by computing the vector of partial derivatives. But the vector of partial derivatives is equal to the steepest direction only if the angles in the abstract space are measured in a particular way. In this article, we provide a didactic review of the mathematics of finding steepest directions in abstract spaces, illustrate the pitfalls with examples from the neuroscience literature, and propose guidelines to constrain the way angles are measured in these spaces.
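For orientation, the standard geometric fact at issue (stated here as a textbook identity, not quoted from the paper) is that the steepest-descent direction of a cost E depends on the metric M used to measure lengths in parameter space:

\Delta\theta \;\propto\; -\,M(\theta)^{-1}\,\nabla E(\theta),
\qquad \text{with } \|v\|_{M}^{2} = v^{\top} M(\theta)\, v .

Only for the Euclidean choice M = I does this reduce to the bare vector of partial derivatives; the natural gradient instead takes M to be the Fisher information matrix, which makes the update direction independent of how the parameters are coordinatized.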
Affiliation(s)
- Simone Carlo Surace
- Department of Physiology, University of Bern, Bern, Switzerland
- Institute of Neuroinformatics and Neuroscience Center Zurich, University Zurich and ETH Zurich, Zurich, Switzerland
- Jean-Pascal Pfister
- Department of Physiology, University of Bern, Bern, Switzerland
- Institute of Neuroinformatics and Neuroscience Center Zurich, University Zurich and ETH Zurich, Zurich, Switzerland
- Wulfram Gerstner
- School of Computer and Communication Sciences and Brain Mind Institute, School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Johanni Brea
- School of Computer and Communication Sciences and Brain Mind Institute, School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
5
Unsupervised heart-rate estimation in wearables with Liquid states and a probabilistic readout. Neural Netw 2018; 99:134-147. [PMID: 29414535] [DOI: 10.1016/j.neunet.2017.12.015]
Abstract
Heart-rate estimation is a fundamental feature of modern wearable devices. In this paper we propose a machine learning technique to estimate heart-rate from electrocardiogram (ECG) data collected using wearable devices. The novelty of our approach lies in (1) encoding spatio-temporal properties of ECG signals directly into spike trains and using these to excite recurrently connected spiking neurons in a Liquid State Machine computation model; (2) a novel learning algorithm; and (3) an intelligently designed unsupervised readout based on Fuzzy c-Means clustering of spike responses from a subset of neurons (Liquid states), selected using particle swarm optimization. Our approach differs from existing works by learning directly from ECG signals (allowing personalization), without requiring costly data annotations. Additionally, our approach can be easily implemented on state-of-the-art spiking-based neuromorphic systems, offering high accuracy at a significantly lower energy footprint, leading to extended battery life of wearable devices. We validated our approach with CARLsim, a GPU-accelerated spiking neural network simulator modeling Izhikevich spiking neurons with Spike-Timing Dependent Plasticity (STDP) and homeostatic scaling. A range of subjects is considered from in-house clinical trials and public ECG databases. Results show high accuracy and a low energy footprint in heart-rate estimation across subjects with and without cardiac irregularities, signifying the strong potential of this approach to be integrated into future wearable devices.
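To make the unsupervised readout concrete, here is a compact fuzzy c-means routine applied to liquid-state vectors (e.g., per-window spike counts of the selected reservoir neurons); the cluster count, fuzzifier, and synthetic features are assumptions for illustration, and the spiking simulation and particle-swarm selection stages are omitted.

import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
    # X: (n_samples, n_features) liquid-state vectors; returns cluster centers
    # and an (n_samples, c) membership matrix whose rows sum to one.
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]          # weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)               # membership update
    return centers, U

# Toy use: 200 fake liquid-state vectors from 10 selected neurons, each sample
# assigned to the cluster (heart-rate band) with the highest membership.
states = np.random.default_rng(1).random((200, 10))
centers, memberships = fuzzy_c_means(states)
labels = memberships.argmax(axis=1)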
6
Marblestone AH, Wayne G, Kording KP. Toward an Integration of Deep Learning and Neuroscience. Front Comput Neurosci 2016; 10:94. [PMID: 27683554] [PMCID: PMC5021692] [DOI: 10.3389/fncom.2016.00094]
Abstract
Neuroscience has focused on the detailed implementation of computation, studying neural codes, dynamics and circuits. In machine learning, however, artificial neural networks tend to eschew precisely designed codes, dynamics or circuits in favor of brute force optimization of a cost function, often using simple and relatively uniform initial architectures. Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives. First, structured architectures are used, including dedicated systems for attention, recursion and various forms of short- and long-term memory storage. Second, cost functions and training procedures have become more complex and are varied across layers and over time. Here we think about the brain in terms of these ideas. We hypothesize that (1) the brain optimizes cost functions, (2) the cost functions are diverse and differ across brain locations and over development, and (3) optimization operates within a pre-structured architecture matched to the computational problems posed by behavior. In support of these hypotheses, we argue that a range of implementations of credit assignment through multiple layers of neurons are compatible with our current knowledge of neural circuitry, and that the brain's specialized systems can be interpreted as enabling efficient optimization for specific problem classes. Such a heterogeneously optimized system, enabled by a series of interacting cost functions, serves to make learning data-efficient and precisely targeted to the needs of the organism. We suggest directions by which neuroscience could seek to refine and test these hypotheses.
Affiliation(s)
- Adam H. Marblestone
- Synthetic Neurobiology Group, Massachusetts Institute of Technology Media Lab, Cambridge, MA, USA
- Konrad P. Kording
- Rehabilitation Institute of Chicago, Northwestern University, Chicago, IL, USA
7
Bellec G, Galtier M, Brette R, Yger P. Slow feature analysis with spiking neurons and its application to audio stimuli. J Comput Neurosci 2016; 40:317-329. [PMID: 27075919] [DOI: 10.1007/s10827-016-0599-3]
Abstract
Extracting invariant features in an unsupervised manner is crucial to perform complex computations such as object recognition, analyzing music, or understanding speech. While various algorithms have been proposed to perform such a task, Slow Feature Analysis (SFA) uses time as a means of detecting those invariants, extracting the slowly time-varying components in the input signals. In this work, we address the question of how such an algorithm can be implemented by neurons, and apply it in the context of audio stimuli. We propose a projected-gradient implementation of SFA that can be adapted to a Hebbian-like learning rule dealing with biologically plausible neuron models. Furthermore, we show that a Spike-Timing Dependent Plasticity learning rule, shaped as a smoothed second derivative, implements SFA for spiking neurons. The theory is supported by numerical simulations, and to illustrate a simple use of SFA, we apply it to auditory signals. We show that a single SFA neuron can learn to extract the tempo in sound recordings.
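As a point of comparison (a sketch of standard linear SFA, not the paper's spiking or Hebbian implementation), the slowest unit-variance projection of a signal can be computed by whitening the input and taking the eigenvectors of the derivative covariance with the smallest eigenvalues:

import numpy as np

def linear_sfa(X, n_components=1):
    # X: (T, d) input signal; returns d x n_components projection vectors whose
    # outputs have unit variance and minimal temporal variation.
    X = X - X.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))
    W_whiten = evecs / np.sqrt(evals)            # whitening: cov(X @ W_whiten) = I
    Z = X @ W_whiten
    dZ = np.diff(Z, axis=0)                      # slowness = variance of the derivative
    d_evals, d_evecs = np.linalg.eigh(np.cov(dZ, rowvar=False))
    return W_whiten @ d_evecs[:, :n_components]  # smallest eigenvalues = slowest features

# Toy use: a slow sinusoid mixed with a fast one is recovered as the slowest feature.
t = np.linspace(0.0, 20.0, 2000)
sources = np.stack([np.sin(0.5 * t), np.sin(7.0 * t)], axis=1)
X = sources @ np.array([[1.0, 0.6], [0.4, 1.0]])
slow_signal = X @ linear_sfa(X)

The paper's contribution is to realize this objective with spiking neurons, where an STDP window shaped like a smoothed second derivative plays the role of the derivative-covariance step above.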
Affiliation(s)
- Guillaume Bellec
- Institut de la Vision, Sorbonne Université, UPMC Univ Paris 06, UMR S968, Paris, France; INSERM, U968, Paris, France; CNRS, UMR 7210, Paris, France
- Mathieu Galtier
- European Institute for Theoretical Neuroscience, CNRS UNIC UPR-3293, Paris, France
- Romain Brette
- Institut de la Vision, Sorbonne Université, UPMC Univ Paris 06, UMR S968, Paris, France; INSERM, U968, Paris, France; CNRS, UMR 7210, Paris, France
- Pierre Yger
- Institut de la Vision, Sorbonne Université, UPMC Univ Paris 06, UMR S968, Paris, France; INSERM, U968, Paris, France; CNRS, UMR 7210, Paris, France; Institut d'Etudes de la Cognition, ENS, Paris, France
8
A local Echo State Property through the largest Lyapunov exponent. Neural Netw 2016; 76:39-45. [DOI: 10.1016/j.neunet.2015.12.013]
9
Effenberger F, Jost J, Levina A. Self-organization in Balanced State Networks by STDP and Homeostatic Plasticity. PLoS Comput Biol 2015; 11:e1004420. [PMID: 26335425] [PMCID: PMC4559467] [DOI: 10.1371/journal.pcbi.1004420]
Abstract
Structural inhomogeneities in synaptic efficacies have a strong impact on population response dynamics of cortical networks and are believed to play an important role in their functioning. However, little is known about how such inhomogeneities could evolve by means of synaptic plasticity. Here we present an adaptive model of a balanced neuronal network that combines two different types of plasticity, STDP and synaptic scaling. The plasticity rules yield both long-tailed distributions of synaptic weights and firing rates. Simultaneously, a highly connected subnetwork of driver neurons with strong synapses emerges. Coincident spiking activity of several driver cells can evoke population bursts, and driver cells have dynamical properties similar to those of leader neurons found experimentally. Our model allows us to observe the delicate interplay between structural and dynamical properties of the emergent inhomogeneities. It is simple, robust to parameter changes, and able to explain a multitude of different experimental findings in one basic network.

It is widely believed that the structure of neuronal circuits plays a major role in brain functioning. Although the full synaptic connectivity for larger populations is not yet assessable even by current experimental techniques, available data show that neither synaptic strengths nor the number of synapses per neuron are homogeneously distributed. Several studies have found long-tailed distributions of synaptic weights with many weak and a few exceptionally strong synaptic connections, as well as strongly connected cells and subnetworks that may play a decisive role for data processing in neural circuits. Little is known about how inhomogeneities could arise in the developing brain, and we hypothesize that there is a self-organizing principle behind their appearance. In this study we show how structural inhomogeneities can emerge by simple synaptic plasticity mechanisms from an initially homogeneous network. We perform numerical simulations and show analytically how a small imbalance in the initial structure is amplified by the synaptic plasticities and their interplay. Our network can simultaneously explain several experimental observations that were previously not linked.
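For orientation, a minimal sketch of the homeostatic half of such a model is multiplicative synaptic scaling, which nudges each neuron's incoming weights toward a target firing rate while leaving the relative weight structure shaped by STDP intact; the functional form, target rate, and learning rate below are illustrative assumptions, not the model's exact rules.

import numpy as np

def synaptic_scaling(W, rates, target_rate=5.0, eta=0.01):
    # W[i, j]: synaptic weight from neuron j onto neuron i.
    # rates[i]: running estimate of neuron i's firing rate (Hz).
    # All incoming weights of a neuron are scaled up when it fires below the
    # target and down when it fires above, preserving their relative ordering.
    scale = 1.0 + eta * (target_rate - rates) / target_rate
    return W * scale[:, None]

In a simulation loop, such a step would be interleaved with the STDP updates, counteracting runaway potentiation while the long-tailed weight distribution and the strongly connected driver subnetwork emerge.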
Affiliation(s)
- Felix Effenberger
- Max-Planck-Institute for Mathematics in the Sciences, Leipzig, Germany
- Jürgen Jost
- Max-Planck-Institute for Mathematics in the Sciences, Leipzig, Germany
- Anna Levina
- Max-Planck-Institute for Mathematics in the Sciences, Leipzig, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
10
Galtier MN, Marini C, Wainrib G, Jaeger H. Relative entropy minimizing noisy non-linear neural network to approximate stochastic processes. Neural Netw 2014; 56:10-21. [PMID: 24815743] [DOI: 10.1016/j.neunet.2014.04.002]
Abstract
A method is provided for designing and training noise-driven recurrent neural networks as models of stochastic processes. The method unifies and generalizes two known separate modeling approaches, Echo State Networks (ESN) and Linear Inverse Modeling (LIM), under the common principle of relative entropy minimization. The power of the new method is demonstrated on a stochastic approximation of the El Niño phenomenon studied in climate research.
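As a reference point for the shared principle (the textbook definition, not the paper's specific derivation), relative entropy minimization chooses the network parameters θ that make the model's distribution over trajectories, Q_θ, as close as possible to the distribution P of the observed stochastic process:

D_{\mathrm{KL}}\!\left(P \,\|\, Q_{\theta}\right) = \int p(x)\,\log\frac{p(x)}{q_{\theta}(x)}\,\mathrm{d}x,
\qquad
\theta^{\ast} = \arg\min_{\theta} D_{\mathrm{KL}}\!\left(P \,\|\, Q_{\theta}\right).

The abstract's claim is that both Echo State Network training and Linear Inverse Modeling arise as special cases of this objective.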
Affiliation(s)
- Mathieu N Galtier
- School of Engineering and Science, Jacobs University Bremen gGmbH, 28759 Bremen, Germany.
- Camille Marini
- Institut für Meereskunde, Zentrum für Meeres- und Klimaforschung, Universität Hamburg, Hamburg, Germany; MINES ParisTech, 1, rue Claude Daunesse, F-06904 Sophia Antipolis Cedex, France
- Gilles Wainrib
- Laboratoire Analyse Géométrie et Applications, Université Paris XIII, France
- Herbert Jaeger
- School of Engineering and Science, Jacobs University Bremen gGmbH, 28759 Bremen, Germany