1
Klinshov VV, Nekorkin VI. Adaptive myelination causes slow oscillations in recurrent neural loops. Chaos 2024; 34:033101. PMID: 38427934. DOI: 10.1063/5.0193265.
Abstract
The brain is known to be plastic, i.e., capable of changing and reorganizing as it develops and accumulates experience. Recently, a novel form of brain plasticity was described: activity-dependent myelination of nerve fibers. Since the propagation speed of action potentials along axons depends significantly on their degree of myelination, this process leads to adaptive changes of axonal delays depending on neural activity. To understand the possible influence of adaptive delays on the behavior of neural networks, we consider a simple setup: a neuronal oscillator with delayed feedback. We show that introducing delay plasticity into this circuit can lead to the occurrence of slow oscillations which are impossible with a constant delay.
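As a rough illustration of the mechanism summarized above, the following toy model (our own sketch, not the authors' equations; `w`, `tau0`, `eps` and `k` are illustrative parameters) couples a rate oscillator with delayed feedback to a slow, activity-dependent feedback delay:

```python
import numpy as np

def simulate(T=500.0, dt=0.1, w=-2.0, tau0=5.0, eps=0.005, k=2.0, seed=0):
    """Rate oscillator with delayed feedback and a slowly adapting delay.

    Hedged sketch: x'(t) = -x + tanh(w * x(t - tau)), with tau drifting
    slowly with the activity level ("adaptive myelination"). Illustrative
    only; not the published model.
    """
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    max_delay_steps = int(20.0 / dt)               # history-buffer capacity
    x = np.zeros(n)
    x[:max_delay_steps] = 0.1 * rng.standard_normal(max_delay_steps)
    tau = tau0
    taus = np.zeros(n)
    for t in range(max_delay_steps + 1, n):
        d = min(max(int(round(tau / dt)), 1), max_delay_steps)
        # oscillator driven by its own delayed output
        x[t] = x[t - 1] + dt * (-x[t - 1] + np.tanh(w * x[t - 1 - d]))
        # slow, activity-dependent adaptation of the delay
        tau += dt * eps * (tau0 - tau + k * abs(x[t]))
        taus[t] = tau
    return x, taus
```

The point of the sketch is only the two-timescale structure: fast activity, slow delay adaptation.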
Affiliation(s)
- Vladimir V Klinshov
- Institute of Applied Physics of the Russian Academy of Sciences, Ulyanova Street 46, 603950, Nizhny Novgorod, Russia
- National Research University Higher School of Economics, 25/12 Bol'shaya Pecherskaya street, Nizhny Novgorod 603155, Russia
- Vladimir I Nekorkin
- Institute of Applied Physics of the Russian Academy of Sciences, Ulyanova Street 46, 603950, Nizhny Novgorod, Russia
2
Li KT, Ji D, Zhou C. Memory rescue and learning in synaptic impaired neuronal circuits. iScience 2023; 26:106931. PMID: 37534172. PMCID: PMC10391582. DOI: 10.1016/j.isci.2023.106931.
Abstract
Neuronal impairment is a characteristic of Alzheimer's disease (AD), but its effect on the neural activity dynamics underlying memory deficits is unclear. Here, we studied the effects of synaptic impairment on neural activities associated with memory recall, memory rescue, and the learning of a new memory in an integrate-and-fire neuronal network. Our results showed that reducing connectivity decreases the synchronization of memory neurons and impairs memory recall performance. Although slow-gamma stimulation rescued memory recall and slow-gamma oscillations, the rescue caused a side effect of activating mixed memories. During the learning of a new memory, reducing connectivity impaired the storage of the new memory but did not affect previously stored memories. We also explored the effects of other types of impairment, including neuronal loss and excitation-inhibition imbalance, and rescue by a general increase of excitability. Our results reveal potential computational mechanisms underlying the memory deficits caused by synaptic impairment in AD.
Affiliation(s)
- Kwan Tung Li
- Department of Physics, Centre for Nonlinear Studies, Beijing–Hong Kong–Singapore Joint Centre for Nonlinear and Complex Systems (Hong Kong), Institute of Computational and Theoretical Studies, Hong Kong Baptist University, Hong Kong, China
- Research Center for Augmented Intelligence, Research Institute of Artificial Intelligence, Zhejiang Lab, Hangzhou 311100, China
- Daoyun Ji
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA
- Department of Molecular and Cellular Biology, Baylor College of Medicine, Houston, TX 77030, USA
- Changsong Zhou
- Department of Physics, Centre for Nonlinear Studies, Beijing–Hong Kong–Singapore Joint Centre for Nonlinear and Complex Systems (Hong Kong), Institute of Computational and Theoretical Studies, Hong Kong Baptist University, Hong Kong, China
3
Madadi Asl M, Valizadeh A, Tass PA. Decoupling of interacting neuronal populations by time-shifted stimulation through spike-timing-dependent plasticity. PLoS Comput Biol 2023; 19:e1010853. PMID: 36724144. PMCID: PMC9891531. DOI: 10.1371/journal.pcbi.1010853.
Abstract
The synaptic organization of the brain is constantly modified by activity-dependent synaptic plasticity. In several neurological disorders, abnormal neuronal activity and pathological synaptic connectivity may significantly impair normal brain function. Reorganization of neuronal circuits by therapeutic stimulation has the potential to restore normal brain dynamics. Increasing evidence suggests that the temporal stimulation pattern crucially determines the long-lasting therapeutic effects of stimulation. Here, we tested whether a specific pattern of brain stimulation can enable the suppression of pathologically strong inter-population synaptic connectivity through spike-timing-dependent plasticity (STDP). More specifically, we tested how introducing a time shift between stimuli delivered to two interacting populations of neurons can effectively decouple them. To that end, we first used a tractable model, i.e., two bidirectionally coupled leaky integrate-and-fire (LIF) neurons, to theoretically analyze the optimal range of stimulation frequency and time shift for decoupling. We then extended our results to two reciprocally connected neuronal populations (modules) where inter-population delayed connections were modified by STDP. As predicted by the theoretical results, appropriately time-shifted stimulation causes a decoupling of the two-module system through STDP, i.e., by unlearning pathologically strong synaptic interactions between the two populations. Based on the overall topology of the connections, the decoupling of the two modules, in turn, causes a desynchronization of the populations that outlasts the cessation of stimulation. Decoupling effects of the time-shifted stimulation can be realized by time-shifted burst stimulation as well as time-shifted continuous stimulation. Our results provide insight into the further optimization of a variety of multichannel stimulation protocols aiming at a therapeutic reshaping of diseased brain networks.
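The core intuition above — that the sign of the net STDP-driven weight change between two periodically stimulated neurons depends on the time shift between their spike trains — can be sketched as follows. This is our own minimal illustration with a standard pair-based STDP kernel; the amplitudes and time constant (`A_plus`, `A_minus`, `tau`) are illustrative, not the values used in the paper:

```python
import numpy as np

def stdp_dw(dt_post_pre, A_plus=1.0, A_minus=1.05, tau=20.0):
    """Pair-based STDP kernel; dt_post_pre = t_post - t_pre (ms)."""
    if dt_post_pre > 0:
        return A_plus * np.exp(-dt_post_pre / tau)    # pre before post: LTP
    return -A_minus * np.exp(dt_post_pre / tau)       # post before pre: LTD

def net_drift(shift, period=50.0, n_spikes=100):
    """Net weight drift for two periodic trains offset by `shift` ms."""
    pre = np.arange(n_spikes) * period
    post = pre + shift                                # time-shifted stimulation
    dw = 0.0
    for tp in pre:
        for tq in post:
            d = tq - tp
            if d != 0 and abs(d) < 100.0:             # truncate distant pairings
                dw += stdp_dw(d)
    return dw
```

With this kernel, a small positive shift (post lags pre) drives net potentiation and a small negative shift drives net depression, which is the handle that time-shifted stimulation exploits.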
Affiliation(s)
- Mojtaba Madadi Asl
- School of Biological Sciences, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
- Pasargad Institute for Advanced Innovative Solutions (PIAIS), Tehran, Iran
- Alireza Valizadeh
- Pasargad Institute for Advanced Innovative Solutions (PIAIS), Tehran, Iran
- Department of Physics, Institute for Advanced Studies in Basic Sciences (IASBS), Zanjan, Iran
- Peter A. Tass
- Department of Neurosurgery, Stanford University School of Medicine, Stanford, CA, United States of America
4
Chakraborty B, Mukhopadhyay S. Heterogeneous recurrent spiking neural network for spatio-temporal classification. Front Neurosci 2023; 17:994517. PMID: 36793542. PMCID: PMC9922697. DOI: 10.3389/fnins.2023.994517.
Abstract
Spiking neural networks (SNNs) are often touted as brain-inspired learning models for the third wave of artificial intelligence. Although recent SNNs trained with supervised backpropagation show classification accuracy comparable to deep networks, the performance of unsupervised learning-based SNNs remains much lower. This paper presents a heterogeneous recurrent spiking neural network (HRSNN) with unsupervised learning for spatio-temporal classification of video activity recognition tasks on RGB (KTH, UCF11, UCF101) and event-based datasets (DVS128 Gesture). We observed an accuracy of 94.32% for the KTH dataset, 79.58% and 77.53% for the UCF11 and UCF101 datasets, respectively, and 96.54% on the event-based DVS128 Gesture dataset using the novel unsupervised HRSNN model. The key novelty of HRSNN is that its recurrent layer consists of heterogeneous neurons with varying firing/relaxation dynamics, trained via heterogeneous spike-timing-dependent plasticity (STDP) with varying learning dynamics for each synapse. We show that this combination of heterogeneity in architecture and learning method outperforms current homogeneous spiking neural networks. We further show that HRSNN can achieve performance similar to state-of-the-art backpropagation-trained supervised SNNs, but with less computation (fewer neurons and sparser connectivity) and less training data.
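The "heterogeneity" ingredient described above — neuron dynamics and per-synapse STDP learning rates drawn from distributions rather than shared — can be sketched as below. The distributions and ranges are our illustrative choices, not the paper's:

```python
import numpy as np

def make_heterogeneous_layer(n, seed=0):
    """Sample per-neuron dynamics and per-synapse STDP learning rates.

    Hedged sketch of an HRSNN-style recurrent layer parameterization;
    all distributions and ranges are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    return {
        "tau_m": rng.uniform(10.0, 50.0, n),       # ms; varying relaxation dynamics
        "v_thresh": rng.uniform(-55.0, -45.0, n),  # mV; varying firing threshold
        # one STDP learning rate per recurrent synapse (n x n)
        "stdp_lr": rng.lognormal(np.log(1e-3), 0.5, size=(n, n)),
    }
```

In a homogeneous network all of these would be scalars; sampling them per neuron and per synapse is what gives the recurrent layer its diverse firing/relaxation and learning dynamics.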
5
Grimaldi A, Gruel A, Besnainou C, Jérémie JN, Martinet J, Perrinet LU. Precise Spiking Motifs in Neurobiological and Neuromorphic Data. Brain Sci 2022; 13:68. PMID: 36672049. PMCID: PMC9856822. DOI: 10.3390/brainsci13010068.
Abstract
Why do neurons communicate through spikes? By definition, spikes are all-or-none neural events which occur at continuous times. In other words, spikes are on one side binary, existing or not without further detail, and on the other can occur at any asynchronous time, without the need for a centralized clock. This stands in stark contrast to the analog representation of values and the discretized timing classically used in digital processing and at the basis of modern-day neural networks. As neural systems almost systematically use this so-called event-based representation in the living world, a better understanding of this phenomenon remains a fundamental challenge in neurobiology in order to better interpret the profusion of recorded data. With the growing need for intelligent embedded systems, it also emerges as a new computing paradigm enabling the efficient operation of a new class of sensors and event-based computers, called neuromorphic, which could yield significant gains in computation time and energy consumption, a major societal issue in the era of the digital economy and global warming. In this review paper, we provide evidence from biology, theory and engineering that the precise timing of spikes plays a crucial role in our understanding of the efficiency of neural networks.
Affiliation(s)
- Antoine Grimaldi
- INT UMR 7289, Aix Marseille Univ, CNRS, 27 Bd Jean Moulin, 13005 Marseille, France
- Amélie Gruel
- SPARKS, Côte d’Azur, CNRS, I3S, 2000 Rte des Lucioles, 06900 Sophia-Antipolis, France
- Camille Besnainou
- INT UMR 7289, Aix Marseille Univ, CNRS, 27 Bd Jean Moulin, 13005 Marseille, France
- Jean-Nicolas Jérémie
- INT UMR 7289, Aix Marseille Univ, CNRS, 27 Bd Jean Moulin, 13005 Marseille, France
- Jean Martinet
- SPARKS, Côte d’Azur, CNRS, I3S, 2000 Rte des Lucioles, 06900 Sophia-Antipolis, France
- Laurent U. Perrinet
- INT UMR 7289, Aix Marseille Univ, CNRS, 27 Bd Jean Moulin, 13005 Marseille, France
6
Kromer JA, Tass PA. Synaptic reshaping of plastic neuronal networks by periodic multichannel stimulation with single-pulse and burst stimuli. PLoS Comput Biol 2022; 18:e1010568. PMID: 36327232. PMCID: PMC9632832. DOI: 10.1371/journal.pcbi.1010568.
Abstract
Synaptic dysfunction is associated with several brain disorders, including Alzheimer's disease, Parkinson's disease (PD) and obsessive compulsive disorder (OCD). Utilizing synaptic plasticity, brain stimulation is capable of reshaping synaptic connectivity. This may pave the way for novel therapies that specifically counteract pathological synaptic connectivity. For instance, in PD, novel multichannel coordinated reset stimulation (CRS) was designed to counteract neuronal synchrony and down-regulate pathological synaptic connectivity. CRS was shown to entail long-lasting therapeutic aftereffects in PD patients and related animal models. This is in marked contrast to conventional deep brain stimulation (DBS) therapy, where PD symptoms return shortly after stimulation ceases. In the present paper, we study synaptic reshaping by periodic multichannel stimulation (PMCS) in networks of leaky integrate-and-fire (LIF) neurons with spike-timing-dependent plasticity (STDP). During PMCS, phase-shifted periodic stimulus trains are delivered to segregated neuronal subpopulations. Harnessing STDP, PMCS leads to changes of the synaptic network structure. We found that the PMCS-induced changes of the network structure depend on both the phase lags between stimuli and the shape of individual stimuli. Single-pulse stimuli and burst stimuli with low intraburst frequency down-regulate synapses between neurons receiving stimuli simultaneously. In contrast, burst stimuli with high intraburst frequency up-regulate these synapses. We derive theoretical approximations of the stimulation-induced network structure. This enables us to formulate stimulation strategies for inducing a variety of network structures. Our results provide testable hypotheses for future pre-clinical and clinical studies and suggest that periodic multichannel stimulation may be suitable for reshaping plastic neuronal networks to counteract pathological synaptic connectivity. 
Furthermore, we provide novel insight on how the stimulus type may affect the long-lasting outcome of conventional DBS. This may strongly impact parameter adjustment procedures for clinical DBS, which, so far, primarily focused on acute effects of stimulation.
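The stimulus geometry described above — the same periodic train delivered to each of M subpopulations with a per-channel phase lag, optionally as bursts — can be sketched as follows. The pulse-time layout (evenly spaced phase lags, fixed intraburst interval) is our illustrative reading, not the paper's exact protocol:

```python
import numpy as np

def pmcs_times(M=4, T=100.0, n_cycles=5, n_burst=1, intra_isi=5.0):
    """Pulse times for periodic multichannel stimulation (hedged sketch).

    Channel m receives, in each period T, a burst of n_burst pulses at
    intraburst interval intra_isi, starting at phase lag (m/M)*T.
    n_burst=1 reproduces single-pulse stimuli.
    """
    trains = []
    for m in range(M):
        phase = (m / M) * T
        pulses = [k * T + phase + j * intra_isi
                  for k in range(n_cycles) for j in range(n_burst)]
        trains.append(np.array(pulses))
    return trains
```

Sweeping `n_burst` and `intra_isi` is then how one would probe the single-pulse versus high-intraburst-frequency regimes that the abstract says push the stimulated synapses in opposite directions.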
Affiliation(s)
- Justus A Kromer
- Department of Neurosurgery, Stanford University, Stanford, California, United States of America
- Peter A Tass
- Department of Neurosurgery, Stanford University, Stanford, California, United States of America
7
Garg N, Balafrej I, Stewart TC, Portal JM, Bocquet M, Querlioz D, Drouin D, Rouat J, Beilliard Y, Alibart F. Voltage-dependent synaptic plasticity: unsupervised probabilistic Hebbian plasticity rule based on neurons membrane potential. Front Neurosci 2022; 16:983950. PMID: 36340782. PMCID: PMC9634260. DOI: 10.3389/fnins.2022.983950.
Abstract
This study proposes voltage-dependent synaptic plasticity (VDSP), a novel brain-inspired unsupervised local learning rule for the online implementation of Hebb's plasticity mechanism on neuromorphic hardware. The proposed VDSP learning rule updates the synaptic conductance on the spike of the postsynaptic neuron only, which reduces by a factor of two the number of updates with respect to standard spike-timing-dependent plasticity (STDP). The update depends on the membrane potential of the presynaptic neuron, which is readily available as part of the neuron implementation and hence requires no additional memory for storage. Moreover, the update is regularized on the synaptic weight, preventing weights from exploding or vanishing under repeated stimulation. Rigorous mathematical analysis is performed to draw an equivalence between VDSP and STDP. To validate the system-level performance of VDSP, we train a single-layer spiking neural network (SNN) for the recognition of handwritten digits. We report 85.01 ± 0.76% (mean ± SD) accuracy for a network of 100 output neurons on the MNIST dataset. The performance improves when scaling the network size (89.93 ± 0.41% for 400 output neurons, 90.56 ± 0.27% for 500 neurons), which validates the applicability of the proposed learning rule for spatial pattern recognition tasks. Future work will consider more complicated tasks. Interestingly, the learning rule adapts better than STDP to the frequency of the input signal and does not require hand-tuning of hyperparameters.
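The VDSP idea summarized above — update only at the postsynaptic spike, with the sign and size read from the *presynaptic membrane potential*, under soft weight bounds — can be sketched as below. The specific functional form (linear in the normalized potential, switching at its midpoint) is our illustrative guess, not the published rule:

```python
def vdsp_update(w, v_pre, v_rest=-65.0, v_thresh=-50.0, lr=0.1, w_max=1.0):
    """VDSP-style update applied at the time of a postsynaptic spike.

    Hedged sketch: a presynaptic neuron that spiked recently sits near
    threshold (depolarized) -> potentiate; one near rest -> depress.
    Soft bounds regularize the weight toward [0, w_max]. Constants are
    illustrative assumptions.
    """
    x = (v_pre - v_rest) / (v_thresh - v_rest)  # 0 near rest, 1 near threshold
    if x > 0.5:
        return w + lr * (w_max - w) * (x - 0.5) * 2.0   # soft-bounded LTP
    return w - lr * w * (0.5 - x) * 2.0                  # soft-bounded LTD
```

Because the presynaptic potential acts as an implicit spike-timing trace, no dedicated trace variable (and hence no extra memory) is needed, which is the hardware argument made in the abstract.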
Affiliation(s)
- Nikhil Garg
- Institut Interdisciplinaire d’Innovation Technologique (3IT), Université de Sherbrooke, Sherbrooke, QC, Canada
- Laboratoire Nanotechnologies Nanosystèmes (LN2)—CNRS UMI-3463, Université de Sherbrooke, Sherbrooke, QC, Canada
- Institute of Electronics, Microelectronics and Nanotechnology (IEMN), Université de Lille, Villeneuve-d’Ascq, France
- Ismael Balafrej
- Institut Interdisciplinaire d’Innovation Technologique (3IT), Université de Sherbrooke, Sherbrooke, QC, Canada
- Laboratoire Nanotechnologies Nanosystèmes (LN2)—CNRS UMI-3463, Université de Sherbrooke, Sherbrooke, QC, Canada
- NECOTIS Research Lab, Department of Electrical and Computer Engineering, University of Sherbrooke, Sherbrooke, QC, Canada
- Terrence C. Stewart
- National Research Council Canada, University of Waterloo Collaboration Centre, Waterloo, ON, Canada
- Jean-Michel Portal
- Aix-Marseille Université, Université de Toulon, CNRS, IM2NP, Marseille, France
- Marc Bocquet
- Institute of Electronics, Microelectronics and Nanotechnology (IEMN), Université de Lille, Villeneuve-d’Ascq, France
- Damien Querlioz
- Université Paris-Saclay, CNRS, Centre de Nanosciences et de Nanotechnologies, Palaiseau, France
- Dominique Drouin
- Institut Interdisciplinaire d’Innovation Technologique (3IT), Université de Sherbrooke, Sherbrooke, QC, Canada
- Laboratoire Nanotechnologies Nanosystèmes (LN2)—CNRS UMI-3463, Université de Sherbrooke, Sherbrooke, QC, Canada
- Jean Rouat
- Institut Interdisciplinaire d’Innovation Technologique (3IT), Université de Sherbrooke, Sherbrooke, QC, Canada
- Laboratoire Nanotechnologies Nanosystèmes (LN2)—CNRS UMI-3463, Université de Sherbrooke, Sherbrooke, QC, Canada
- NECOTIS Research Lab, Department of Electrical and Computer Engineering, University of Sherbrooke, Sherbrooke, QC, Canada
- Yann Beilliard
- Institut Interdisciplinaire d’Innovation Technologique (3IT), Université de Sherbrooke, Sherbrooke, QC, Canada
- Laboratoire Nanotechnologies Nanosystèmes (LN2)—CNRS UMI-3463, Université de Sherbrooke, Sherbrooke, QC, Canada
- Fabien Alibart
- Institut Interdisciplinaire d’Innovation Technologique (3IT), Université de Sherbrooke, Sherbrooke, QC, Canada
- Laboratoire Nanotechnologies Nanosystèmes (LN2)—CNRS UMI-3463, Université de Sherbrooke, Sherbrooke, QC, Canada
- Institute of Electronics, Microelectronics and Nanotechnology (IEMN), Université de Lille, Villeneuve-d’Ascq, France
8
Miehl C, Onasch S, Festa D, Gjorgjieva J. Formation and computational implications of assemblies in neural circuits. J Physiol 2022. PMID: 36068723. DOI: 10.1113/jp282750.
Abstract
In the brain, patterns of neural activity represent sensory information and store it in non-random synaptic connectivity. A prominent theoretical hypothesis states that assemblies, groups of neurons that are strongly connected to each other, are the key computational units underlying perception and memory formation. Compatible with these hypothesised assemblies, experiments have revealed groups of neurons that display synchronous activity, either spontaneously or upon stimulus presentation, and exhibit behavioural relevance. While it remains unclear how assemblies form in the brain, theoretical work has contributed substantially to the understanding of the various interacting mechanisms in this process. Here, we review the recent theoretical literature on assembly formation by categorising the involved mechanisms into four components: synaptic plasticity, symmetry breaking, competition and stability. We highlight different approaches and assumptions behind assembly formation and discuss recent ideas of assemblies as the key computational unit in the brain. Abstract figure legend: Assembly formation. Assemblies are groups of strongly connected neurons formed by the interaction of multiple mechanisms, with broad computational implications. Four interacting components are thought to drive assembly formation: synaptic plasticity, symmetry breaking, competition and stability.
Affiliation(s)
- Christoph Miehl
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438 Frankfurt, Germany
- School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
- Sebastian Onasch
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438 Frankfurt, Germany
- School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
- Dylan Festa
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438 Frankfurt, Germany
- School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
- Julijana Gjorgjieva
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438 Frankfurt, Germany
- School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
9
Vignoud G, Robert P. Spontaneous dynamics of synaptic weights in stochastic models with pair-based spike-timing-dependent plasticity. Phys Rev E 2022; 105:054405. PMID: 35706237. DOI: 10.1103/physreve.105.054405.
Abstract
We investigate spike-timing-dependent plasticity (STDP) in the case of a synapse connecting two neuronal cells. We develop a theoretical analysis of several STDP rules using Markovian theory. In this context there are two different timescales: fast neuronal activity and slower synaptic weight updates. Exploiting this timescale separation, we derive the long-time limits of a single synaptic weight subject to STDP. We show that the pairing model of presynaptic and postsynaptic spikes controls the synaptic weight dynamics for small external input on an excitatory synapse. This result implies in particular that mean-field analysis of plasticity may miss some important properties of STDP. Anti-Hebbian STDP favors the emergence of a stable synaptic weight. In the case of an inhibitory synapse the pairing scheme matters less, and we observe convergence of the synaptic weight to a nonzero value only for Hebbian STDP. We extensively study different asymptotic regimes for STDP rules, raising interesting questions for future work on adaptive neuronal networks and, more generally, on adaptive systems.
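The timescale-separation logic above has a textbook special case that is easy to write down: for independent Poisson pre- and postsynaptic trains and an all-to-all pair-based rule with multiplicative soft bounds, the averaged weight drift has a single stable zero. This is our illustrative sketch of the kind of long-time limit the paper derives rigorously; it is not the paper's model, and all constants are illustrative:

```python
def drift(w, r_pre, r_post, A_plus=1.0, tau_plus=17.0,
          A_minus=1.05, tau_minus=34.0):
    """Averaged weight drift (hedged sketch).

    Multiplicative soft bounds: potentiation scales with (1 - w),
    depression with w. For independent Poisson trains the pairing
    integral reduces to the areas A*tau of the two STDP branches.
    """
    return r_pre * r_post * (A_plus * tau_plus * (1.0 - w)
                             - A_minus * tau_minus * w)

def stable_weight(A_plus=1.0, tau_plus=17.0, A_minus=1.05, tau_minus=34.0):
    """Zero of the drift: the long-time limit of the synaptic weight."""
    return (A_plus * tau_plus) / (A_plus * tau_plus + A_minus * tau_minus)
```

Because the drift is positive below the fixed point and negative above it, the slow weight variable relaxes to `stable_weight()` regardless of the fast spiking details, which is the essence of the fast/slow averaging argument.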
Affiliation(s)
- Gaëtan Vignoud
- INRIA Paris, 2 rue Simone Iff, 75589 Paris Cedex 12, France
- Center for Interdisciplinary Research in Biology (CIRB), Collège de France (CNRS UMR 7241, INSERM U1050), 11 Place Marcelin Berthelot, 75005 Paris, France
10
Madadi Asl M, Vahabie AH, Valizadeh A, Tass PA. Spike-timing-dependent plasticity mediated by dopamine and its role in Parkinson's disease pathophysiology. Front Netw Physiol 2022; 2:817524. PMID: 36926058. PMCID: PMC10013044. DOI: 10.3389/fnetp.2022.817524.
Abstract
Parkinson's disease (PD) is a multi-systemic neurodegenerative brain disorder. Motor symptoms of PD are linked to the significant dopamine (DA) loss in substantia nigra pars compacta (SNc) followed by basal ganglia (BG) circuit dysfunction. Increasing experimental and computational evidence indicates that (synaptic) plasticity plays a key role in the emergence of PD-related pathological changes following DA loss. Spike-timing-dependent plasticity (STDP) mediated by DA provides a mechanistic model for synaptic plasticity to modify synaptic connections within the BG according to the neuronal activity. To shed light on how DA-mediated STDP can shape neuronal activity and synaptic connectivity in the PD condition, we reviewed experimental and computational findings addressing the modulatory effect of DA on STDP as well as other plasticity mechanisms and discussed their potential role in PD pathophysiology and related network dynamics and connectivity. In particular, reshaping of STDP profiles together with other plasticity-mediated processes following DA loss may abnormally modify synaptic connections in competing pathways of the BG. The cascade of plasticity-induced maladaptive or compensatory changes can impair the excitation-inhibition balance towards the BG output nuclei, leading to the emergence of pathological activity-connectivity patterns in PD. Pre-clinical, clinical as well as computational studies reviewed here provide an understanding of the impact of synaptic plasticity and other plasticity mechanisms on PD pathophysiology, especially PD-related network activity and connectivity, after DA loss. This review may provide further insights into the abnormal structure-function relationship within the BG contributing to the emergence of pathological states in PD. 
Specifically, this review is intended to provide detailed information for the development of computational network models for PD, serving as testbeds for the development and optimization of invasive and non-invasive brain stimulation techniques. Computationally derived hypotheses may accelerate the development of therapeutic stimulation techniques and potentially reduce the number of related animal experiments.
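A common computational reading of DA-mediated STDP, of the kind reviewed above, is a three-factor rule: pair-based STDP writes into an eligibility trace, and the dopamine signal gates the conversion of that trace into an actual weight change, so a DA drop (as in PD) reshapes the effective plasticity profile. The sketch below is this generic three-factor scheme, not a specific model from the review; all constants are illustrative:

```python
import numpy as np

def stdp_to_trace(elig, dt_post_pre, A_plus=1.0, A_minus=1.0, tau=20.0):
    """Write a pair-based STDP event into the eligibility trace."""
    if dt_post_pre > 0:
        return elig + A_plus * np.exp(-dt_post_pre / tau)   # pre-before-post
    return elig - A_minus * np.exp(dt_post_pre / tau)       # post-before-pre

def da_gated_update(w, elig, da, dt, tau_e=1000.0, lr=0.05):
    """Advance the trace by dt and convert it to a weight change gated by DA.

    `da` is the dopamine signal relative to baseline: positive DA converts
    a positive trace into potentiation; below-baseline DA flips the sign.
    """
    elig *= np.exp(-dt / tau_e)      # eligibility trace decays slowly
    w += lr * da * elig * dt         # dopamine gates the plasticity
    return w, elig
```

Under this scheme, the same spike pairing can potentiate or depress depending on the DA level, which is one concrete way the DA loss in PD can "reshape STDP profiles" as described above.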
Affiliation(s)
- Mojtaba Madadi Asl
- Department of Physics, Institute for Advanced Studies in Basic Sciences (IASBS), Zanjan, Iran
- Abdol-Hossein Vahabie
- School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran
- Department of Psychology, Faculty of Psychology and Education, University of Tehran, Tehran, Iran
- Alireza Valizadeh
- Department of Physics, Institute for Advanced Studies in Basic Sciences (IASBS), Zanjan, Iran
- Peter A Tass
- Department of Neurosurgery, Stanford University School of Medicine, Stanford, CA, United States
11
Li KT, Liang J, Zhou C. Gamma Oscillations Facilitate Effective Learning in Excitatory-Inhibitory Balanced Neural Circuits. Neural Plast 2021; 2021:6668175. PMID: 33542728. PMCID: PMC7840255. DOI: 10.1155/2021/6668175.
Abstract
Gamma oscillations in neural circuits are believed to be associated with effective learning in the brain, but the underlying mechanism is unclear. This paper studies how spike-timing-dependent plasticity (STDP), a typical mechanism of learning, interacts with gamma oscillations in neural circuits to shape the network's dynamical properties and structure formation. We study an excitatory-inhibitory (E-I) integrate-and-fire neuronal network with triplet STDP, heterosynaptic plasticity, and transmitter-induced plasticity. Our results show that the effect of plasticity differs across synchronization levels. We find that gamma oscillation is beneficial to synaptic potentiation among stimulated neurons by forming a special network structure in which the summed excitatory input synaptic strength is correlated with the summed inhibitory input synaptic strength. The circuit can maintain E-I balanced input on average, whereas the balance is temporarily broken during learning-induced oscillations. Our study reveals a potential mechanism for the benefits of gamma oscillations for learning in biological neural circuits.
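The triplet STDP component named above can be sketched in an event-driven form (after the Pfister-Gerstner triplet rule): two presynaptic traces and two postsynaptic traces decay between spikes, depression on a presynaptic spike reads the fast postsynaptic trace, and potentiation on a postsynaptic spike reads the fast presynaptic trace plus the slow postsynaptic "triplet" trace. Time constants and amplitudes below are illustrative defaults, not this paper's values:

```python
import numpy as np

def triplet_stdp(pre_spikes, post_spikes,
                 tau_plus=16.8, tau_x=101.0, tau_minus=33.7, tau_y=125.0,
                 A2p=5e-3, A3p=6e-3, A2m=7e-3, A3m=2e-4, w0=0.5):
    """Run a triplet-STDP weight update over two spike trains (hedged sketch)."""
    events = sorted([(t, "pre") for t in pre_spikes] +
                    [(t, "post") for t in post_spikes])
    r1 = r2 = o1 = o2 = 0.0          # fast/slow pre traces, fast/slow post traces
    w, t_last = w0, 0.0
    for t, kind in events:
        dt = t - t_last
        r1 *= np.exp(-dt / tau_plus)
        r2 *= np.exp(-dt / tau_x)
        o1 *= np.exp(-dt / tau_minus)
        o2 *= np.exp(-dt / tau_y)
        if kind == "pre":
            w -= o1 * (A2m + A3m * r2)   # depression: fast post trace, slow pre triplet
            r1 += 1.0
            r2 += 1.0
        else:
            w += r1 * (A2p + A3p * o2)   # potentiation: fast pre trace, slow post triplet
            o1 += 1.0
            o2 += 1.0
        t_last = t
    return w
```

With these defaults, repeated pre-before-post pairings potentiate and post-before-pre pairings depress, and the slow traces make the outcome frequency dependent, which is what couples the rule to oscillatory (gamma) activity.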
Affiliation(s)
- Kwan Tung Li
- Department of Physics, Centre for Nonlinear Studies and Beijing-Hong Kong-Singapore Joint Centre for Nonlinear and Complex Systems (Hong Kong), Institute of Computational and Theoretical Studies, Hong Kong Baptist University, Kowloon Tong, Hong Kong
- Junhao Liang
- Department of Physics, Centre for Nonlinear Studies and Beijing-Hong Kong-Singapore Joint Centre for Nonlinear and Complex Systems (Hong Kong), Institute of Computational and Theoretical Studies, Hong Kong Baptist University, Kowloon Tong, Hong Kong
- Changsong Zhou
- Department of Physics, Centre for Nonlinear Studies and Beijing-Hong Kong-Singapore Joint Centre for Nonlinear and Complex Systems (Hong Kong), Institute of Computational and Theoretical Studies, Hong Kong Baptist University, Kowloon Tong, Hong Kong
12
Gilson M, Dahmen D, Moreno-Bote R, Insabato A, Helias M. The covariance perceptron: a new paradigm for classification and processing of time series in recurrent neuronal networks. PLoS Comput Biol 2020; 16:e1008127. PMID: 33044953. PMCID: PMC7595646. DOI: 10.1371/journal.pcbi.1008127.
Abstract
Learning in neuronal networks has developed in many directions, in particular to reproduce cognitive tasks like image recognition and speech processing. Implementations have been inspired by stereotypical neuronal responses like tuning curves in the visual system, where, for example, ON/OFF cells fire or not depending on the contrast in their receptive fields. Classical models of neuronal networks therefore map a set of input signals to a set of activity levels in the output of the network. Each category of inputs is thereby predominantly characterized by its mean. In the case of time series, fluctuations around this mean constitute noise in this view. For this paradigm, the high variability exhibited by the cortical activity may thus imply limitations or constraints, which have been discussed for many years. For example, the need for averaging neuronal activity over long periods or large groups of cells to assess a robust mean and to diminish the effect of noise correlations. To reconcile robust computations with variable neuronal activity, we here propose a conceptual change of perspective by employing variability of activity as the basis for stimulus-related information to be learned by neurons, rather than merely being the noise that corrupts the mean signal. In this new paradigm both afferent and recurrent weights in a network are tuned to shape the input-output mapping for covariances, the second-order statistics of the fluctuating activity. When including time lags, covariance patterns define a natural metric for time series that capture their propagating nature. We develop the theory for classification of time series based on their spatio-temporal covariances, which reflect dynamical properties. We demonstrate that recurrent connectivity is able to transform information contained in the temporal structure of the signal into spatial covariances. 
Finally, we use the MNIST database to show how the covariance perceptron can capture specific second-order statistical patterns generated by moving digits.

The dynamics in cortex is characterized by highly fluctuating activity: Even under the very same experimental conditions the activity typically does not reproduce on the level of individual spikes. Given this variability, how then does the brain realize its quasi-deterministic function? One obvious solution is to compute averages over many cells, assuming that the mean activity, or rate, is actually the decisive signal. Variability across trials of an experiment is thus considered noise. We here explore the opposite view: Can fluctuations be used to actually represent information? And if yes, is there a benefit over a representation using the mean rate? We find that a fluctuation-based scheme is not only powerful in distinguishing signals into several classes, but also that networks can efficiently be trained in the new paradigm. Moreover, we argue why such a scheme of representation is more consistent with known forms of synaptic plasticity than rate-based network dynamics.
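The core mapping in this abstract can be sketched in a few lines: a linear readout y = Bx sends an input covariance P to the output covariance B P Bᵀ, so classes defined purely by second-order statistics become separable in the output variances. A minimal NumPy sketch (the covariance matrices and readout weights below are illustrative choices, not taken from the paper):

```python
import numpy as np

def output_covariance(B, P):
    """A linear readout y = B x maps an input covariance P to B P B^T."""
    return B @ P @ B.T

# Two input "classes" defined purely by their spatial covariance
# (illustrative matrices, not from the paper).
P0 = np.array([[1.0, 0.8], [0.8, 1.0]])    # correlated pair
P1 = np.array([[1.0, -0.8], [-0.8, 1.0]])  # anti-correlated pair

B = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # fixed readout weights

Q0 = output_covariance(B, P0)
Q1 = output_covariance(B, P1)

# The classes separate in the output variances: the "sum" channel (row 0)
# has high variance for P0, the "difference" channel (row 1) for P1.
print(Q0[0, 0], Q1[0, 0])  # 1.8 vs 0.2 (up to rounding)
```

Training the covariance perceptron amounts to tuning B (and, in the recurrent case, the network weights) so that each class lands on a distinct target output covariance.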
Affiliation(s)
- Matthieu Gilson
- Center for Brain and Cognition, Department of Information and Telecommunication technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- David Dahmen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Rubén Moreno-Bote
- Center for Brain and Cognition, Department of Information and Telecommunication technologies, Universitat Pompeu Fabra, Barcelona, Spain
- ICREA, Barcelona, Spain
- Andrea Insabato
- IDIBAPS (Institut d’Investigacions Biomèdiques August Pi i Sunyer), Barcelona, Spain
- Moritz Helias
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
13
Kromer JA, Khaledi-Nasab A, Tass PA. Impact of number of stimulation sites on long-lasting desynchronization effects of coordinated reset stimulation. CHAOS (WOODBURY, N.Y.) 2020; 30:083134. [PMID: 32872805 DOI: 10.1063/5.0015196] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/25/2020] [Accepted: 07/27/2020] [Indexed: 06/11/2023]
Abstract
Excessive neuronal synchrony is a hallmark of several neurological disorders, e.g., Parkinson's disease. An established treatment for medically refractory Parkinson's disease is high-frequency deep brain stimulation. However, it provides only acute relief, and symptoms return shortly after cessation of stimulation. A theory-based approach called coordinated reset (CR) has shown great promise in achieving long-lasting effects. During CR stimulation, phase-shifted stimuli are delivered to multiple stimulation sites to counteract neuronal synchrony. Computational studies in plastic neuronal networks reported that synaptic weights reduce during stimulation, which may cause sustained structural changes leading to stabilized desynchronized activity even after stimulation ceases. Corresponding long-lasting effects were found in recent preclinical and clinical studies. We study long-lasting desynchronization by CR stimulation in excitatory recurrent neuronal networks of integrate-and-fire neurons with spike-timing-dependent plasticity (STDP). We focus on the impact of the stimulation frequency and the number of stimulation sites on long-lasting effects. We compare theoretical predictions to simulations of plastic neuronal networks. Our results are important regarding CR calibration for two reasons. We reveal that long-lasting effects become most pronounced when stimulation parameters are adjusted to the characteristics of STDP-rather than to neuronal frequency characteristics. This is in contrast to previous studies where the CR frequency was adjusted to the dominant neuronal rhythm. In addition, we reveal a nonlinear dependence of long-lasting effects on the number of stimulation sites and the CR frequency. Intriguingly, optimal long-lasting desynchronization does not require larger numbers of stimulation sites.
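The CR scheduling described above is easy to make concrete: within each CR cycle of period 1/f_CR, the M sites receive stimuli shifted against each other by 1/(M·f_CR). A hedged sketch (the parameter values and the optional per-cycle shuffling of site order are illustrative assumptions, not the authors' calibration):

```python
import numpy as np

def cr_stimulus_onsets(f_cr, n_sites, n_cycles, shuffle=False, rng=None):
    """Stimulus onset times (s) per site for coordinated reset: within each
    CR cycle of period 1/f_cr the sites are stimulated one after another,
    each shifted by 1/(f_cr * n_sites).  Shuffling the site order every
    cycle is a commonly used CR variant."""
    t_cycle = 1.0 / f_cr
    onsets = {site: [] for site in range(n_sites)}
    order = np.arange(n_sites)
    for cycle in range(n_cycles):
        if shuffle:
            rng.shuffle(order)
        for slot, site in enumerate(order):
            onsets[site].append(cycle * t_cycle + slot * t_cycle / n_sites)
    return onsets

# 4 sites at a CR frequency of 10 Hz, fixed site order:
onsets = cr_stimulus_onsets(f_cr=10.0, n_sites=4, n_cycles=3)
print(onsets[1])  # site 1 is stimulated 25 ms into each 100 ms cycle
```

The paper's point is that f_cr and n_sites should be chosen against the STDP time constants rather than the dominant neuronal rhythm; the scheduler above only fixes the stimulus geometry.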
Affiliation(s)
- Justus A Kromer
- Department of Neurosurgery, Stanford University, Stanford, California 94305, USA
- Ali Khaledi-Nasab
- Department of Neurosurgery, Stanford University, Stanford, California 94305, USA
- Peter A Tass
- Department of Neurosurgery, Stanford University, Stanford, California 94305, USA
14
Cabessa J, Tchaptchet A. Automata complete computation with Hodgkin-Huxley neural networks composed of synfire rings. Neural Netw 2020; 126:312-334. [PMID: 32278841 DOI: 10.1016/j.neunet.2020.03.019] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2019] [Revised: 03/23/2020] [Accepted: 03/23/2020] [Indexed: 11/15/2022]
Abstract
Synfire rings are neural circuits capable of conveying synchronous, temporally precise and self-sustained activities in a robust manner. We propose a cell assembly based paradigm for abstract neural computation centered on the concept of synfire rings. More precisely, we empirically show that Hodgkin-Huxley neural networks modularly composed of synfire rings are automata complete. We provide an algorithmic construction which, starting from any given finite state automaton, builds a corresponding Hodgkin-Huxley neural network modularly composed of synfire rings and capable of simulating it. We illustrate the correctness of the construction on two specific examples. We further analyze the stability and robustness of the construction as a function of changes in the ring topologies as well as with respect to cell death and synaptic failure mechanisms, respectively. These results establish the possibility of achieving abstract computation with bio-inspired neural networks. They might constitute a theoretical ground for the realization of biological neural computers.
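An idealized sketch of the two ingredients named above — a self-sustained synfire ring and a finite-state transition table — assuming a maximally abstracted ring in which exactly one pool is active per time step (the paper's construction uses Hodgkin-Huxley networks; this is only the discrete skeleton, and the 2-state automaton is a hypothetical example):

```python
def simulate_synfire_ring(n_pools, steps, start_pool=0):
    """Idealized synfire ring: the active pool excites the next one each
    step, so a synchronous packet travels around the ring indefinitely."""
    active = start_pool
    trace = [active]
    for _ in range(steps):
        active = (active + 1) % n_pools  # feedforward chain closed into a loop
        trace.append(active)
    return trace

trace = simulate_synfire_ring(n_pools=5, steps=10)
print(trace)  # [0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0]

# Automaton layer: one ring per state; an input symbol routes activity
# from the currently active ring to the next one (hypothetical 2-state example).
delta = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 1, (1, 'b'): 0}
state = 0
for symbol in 'abba':
    state = delta[(state, symbol)]
print(state)  # 1
```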
Affiliation(s)
- Jérémie Cabessa
- Laboratory of Mathematical Economics and Applied Microeconomics (LEMMA), Université Paris 2, Panthéon-Assas, 75005 Paris, France; Institute of Computer Science of the Czech Academy of Sciences, P. O. Box 5, 18207 Prague 8, Czech Republic.
- Aubin Tchaptchet
- Institute of Physiology, Philipps University of Marburg, 35037 Marburg, Germany.
15
Lim S. Mechanisms underlying sharpening of visual response dynamics with familiarity. eLife 2019; 8:44098. [PMID: 31393260 PMCID: PMC6711664 DOI: 10.7554/elife.44098] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2018] [Accepted: 08/07/2019] [Indexed: 12/03/2022] Open
Abstract
Experience-dependent modifications of synaptic connections are thought to change patterns of network activities and stimulus tuning with learning. However, only a few studies explored how synaptic plasticity shapes the response dynamics of cortical circuits. Here, we investigated the mechanism underlying sharpening of both stimulus selectivity and response dynamics with familiarity observed in monkey inferotemporal cortex. Broadening the distribution of activities and stronger oscillations in the response dynamics after learning provide evidence for synaptic plasticity in recurrent connections modifying the strength of positive feedback. Its interplay with slow negative feedback via firing rate adaptation is critical in sharpening response dynamics. Analysis of changes in temporal patterns also enables us to disentangle recurrent and feedforward synaptic plasticity and provides a measure for the strengths of recurrent synaptic plasticity. Overall, this work highlights the importance of analyzing changes in dynamics as well as network patterns to further reveal the mechanisms of visual learning.
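The interplay of fast recurrent positive feedback with slow negative feedback from firing-rate adaptation can be illustrated with a one-unit rate model (all parameter values below are illustrative, not fitted to the inferotemporal data):

```python
import numpy as np

def simulate(w, I, T=2.0, dt=1e-3, tau_r=0.02, tau_a=0.3, b=1.0):
    """One rate unit with recurrent positive feedback (weight w) and slow
    negative feedback via firing-rate adaptation (variable a)."""
    f = lambda x: np.tanh(max(x, 0.0))  # saturating rate nonlinearity
    r, a = 0.0, 0.0
    rs = []
    for _ in range(int(T / dt)):
        r += dt / tau_r * (-r + f(w * r - a + I))
        a += dt / tau_a * (-a + b * r)
        rs.append(r)
    return np.array(rs)

novel = simulate(w=0.2, I=0.8)     # weak recurrent feedback ("novel" stimulus)
familiar = simulate(w=0.9, I=0.8)  # potentiated feedback ("familiar" stimulus)

# Fast positive feedback drives a high transient peak; slow adaptation then
# pulls the rate down, sharpening the response dynamics.
print(familiar.max() > novel.max())
```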
Affiliation(s)
- Sukbin Lim
- Neural Science, NYU Shanghai, Shanghai, China; NYU-ECNU Institute of Brain and Cognitive Science, NYU Shanghai, Shanghai, China
16
Kasatkin DV, Klinshov VV, Nekorkin VI. Itinerant chimeras in an adaptive network of pulse-coupled oscillators. Phys Rev E 2019; 99:022203. [PMID: 30934254 DOI: 10.1103/physreve.99.022203] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2018] [Indexed: 11/07/2022]
Abstract
In a network of pulse-coupled oscillators with adaptive coupling, we discover a dynamical regime which we call an "itinerant chimera." As in classical chimera states, the network splits into two domains, the coherent and the incoherent. The drastic difference is that the composition of the domains is volatile, i.e., the oscillators demonstrate spontaneous switching between the domains. This process can be seen as traveling of the oscillators from one domain to another or as traveling of the chimera core across the network. We explore the basic features of the itinerant chimeras, such as the mean and the variance of the core size, and the oscillators' lifetime within the core. We also study the scaling behavior of the system and show that the observed regime is not a finite-size effect but a key feature of the collective dynamics which persists even in large networks.
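The coherent/incoherent split can be quantified with the standard local (windowed) Kuramoto order parameter; a sketch on a hand-built phase snapshot (the window size and the synthetic phase pattern are illustrative, not the authors' diagnostic):

```python
import numpy as np

def local_order_parameter(phases, window=5):
    """Windowed Kuramoto order parameter |R_i|: close to 1 inside the
    coherent domain, clearly below 1 inside the incoherent domain."""
    n = len(phases)
    R = np.empty(n)
    for i in range(n):
        idx = [(i + d) % n for d in range(-window, window + 1)]
        R[i] = np.abs(np.mean(np.exp(1j * phases[idx])))
    return R

# Synthetic chimera snapshot: first half phase-locked, second half random.
rng = np.random.default_rng(1)
phases = np.zeros(100)
phases[50:] = rng.uniform(0.0, 2.0 * np.pi, 50)

R = local_order_parameter(phases)
# High in the coherent half, much lower in the incoherent half:
print(R[10:40].mean(), R[60:90].mean())
```

In an itinerant chimera, tracking which indices have R near 1 over time reveals the core wandering across the network.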
Affiliation(s)
- Dmitry V Kasatkin
- Institute of Applied Physics of the Russian Academy of Sciences, 46 Ul'yanov Street, 603950, Nizhny Novgorod, Russia
- Vladimir V Klinshov
- Institute of Applied Physics of the Russian Academy of Sciences, 46 Ul'yanov Street, 603950, Nizhny Novgorod, Russia
- Vladimir I Nekorkin
- Institute of Applied Physics of the Russian Academy of Sciences, 46 Ul'yanov Street, 603950, Nizhny Novgorod, Russia
17
Pang R, Fairhall AL. Fast and flexible sequence induction in spiking neural networks via rapid excitability changes. eLife 2019; 8:44324. [PMID: 31081753 PMCID: PMC6538377 DOI: 10.7554/elife.44324] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2018] [Accepted: 05/11/2019] [Indexed: 12/14/2022] Open
Abstract
Cognitive flexibility likely depends on modulation of the dynamics underlying how biological neural networks process information. While dynamics can be reshaped by gradually modifying connectivity, less is known about mechanisms operating on faster timescales. A compelling entry point to this problem is the observation that exploratory behaviors can rapidly cause selective hippocampal sequences to 'replay' during rest. Using a spiking network model, we asked whether simplified replay could arise from three biological components: fixed recurrent connectivity; stochastic 'gating' inputs; and rapid gating input scaling via long-term potentiation of intrinsic excitability (LTP-IE). Indeed, these enabled both forward and reverse replay of recent sensorimotor-evoked sequences, despite unchanged recurrent weights. LTP-IE 'tags' specific neurons with increased spiking probability under gating input, and ordering is reconstructed from recurrent connectivity. We further show how LTP-IE can implement temporary stimulus-response mappings. This elucidates a novel combination of mechanisms that might play a role in rapid cognitive flexibility.
Affiliation(s)
- Rich Pang
- Neuroscience Graduate Program, University of Washington, Seattle, United States; Department of Physiology and Biophysics, University of Washington, Seattle, United States; Computational Neuroscience Center, University of Washington, Seattle, United States
- Adrienne L Fairhall
- Department of Physiology and Biophysics, University of Washington, Seattle, United States; Computational Neuroscience Center, University of Washington, Seattle, United States
18
Interplay of multiple pathways and activity-dependent rules in STDP. PLoS Comput Biol 2018; 14:e1006184. [PMID: 30106953 PMCID: PMC6112684 DOI: 10.1371/journal.pcbi.1006184] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2018] [Revised: 08/28/2018] [Accepted: 05/09/2018] [Indexed: 12/13/2022] Open
Abstract
Hebbian plasticity describes a basic mechanism for synaptic plasticity whereby synaptic weights evolve depending on the relative timing of paired activity of the pre- and postsynaptic neurons. Spike-timing-dependent plasticity (STDP) constitutes a central experimental and theoretical synaptic Hebbian learning rule. Various mechanisms, mostly calcium-based, account for the induction and maintenance of STDP. Classically STDP is assumed to gradually emerge in a monotonic way as the number of pairings increases. However, non-monotonic STDP accounting for fast associative learning led us to challenge this monotonicity hypothesis and explore how the existence of multiple plasticity pathways affects the dynamical establishment of plasticity. To account for distinct forms of STDP emerging from increasing numbers of pairings and the variety of signaling pathways involved, we developed a general class of simple mathematical models of plasticity based on calcium transients and accommodating various calcium-based plasticity mechanisms. These mechanisms can either compete or cooperate for the establishment of long-term potentiation (LTP) and depression (LTD), that emerge depending on past calcium activity. Our model reproduces accurately the striatal STDP that involves endocannabinoid and NMDAR signaling pathways. Moreover, we predict how stimulus frequency alters plasticity, and how triplet rules are affected by the number of pairings. We further investigate the general model with an arbitrary number of pathways and show that depending on those pathways and their properties, a variety of plasticities may emerge upon variation of the number and/or the frequency of pairings, even when the outcome after large numbers of pairings is identical. These findings, built upon a biologically realistic example and generalized to other applications, argue that in order to fully describe synaptic plasticity it is not sufficient to record STDP curves at fixed pairing numbers and frequencies. 
In fact, considering the whole spectrum of activity-dependent parameters could have a great impact on the description of plasticity, and a better understanding of the engram.

The brain’s capacity to treat information, learn and store memory relies on synaptic connectivity patterns, which are altered through synaptic plasticity mechanisms. Experimentally, such plasticities were evidenced through protocols involving numerous repetitive stimulations of a given synapse, and were shown to be supported by multiple pathways. Using a simple biologically grounded mathematical model, we show how activation timescales and inactivation levels of each pathway interact and alter plasticity in an intricate manner as stimuli are presented. Building upon data from the synapse between cortex and striatum, we show that synaptic changes may revert or re-emerge as stimuli are presented, and predict specific responses to changes in stimulus frequency or to distinct stimulation patterns. Our general model shows that a given plasticity profile emerging in response to a repetitive stimulation protocol can unfold into various scenarios upon variations of the number of stimulus presentations or patterns, which tightly depends on the underlying activated pathways. Altogether, these results argue that in order to better understand learning and memory, single plasticity responses obtained through intensive stimulations do not reveal the complexity of the responses for smaller numbers of presentations, which may have a strong impact in fast learning of stimuli with low numbers of presentations.
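A minimal calcium-threshold sketch in the spirit of the model class described here, with one potentiation and one depression pathway gated by two calcium thresholds (all amplitudes, thresholds, and time constants below are illustrative assumptions, not the paper's fitted values):

```python
import numpy as np

def calcium_trace(pre_times, post_times, T=0.3, dt=1e-4,
                  c_pre=0.6, c_post=1.3, tau_ca=0.02):
    """Calcium as a linear superposition of exponentially decaying
    transients triggered by pre- and postsynaptic spikes."""
    t = np.arange(0.0, T, dt)
    c = np.zeros_like(t)
    for s in pre_times:
        c += np.where(t >= s, c_pre * np.exp(-(t - s) / tau_ca), 0.0)
    for s in post_times:
        c += np.where(t >= s, c_post * np.exp(-(t - s) / tau_ca), 0.0)
    return c

def weight_change(c, dt=1e-4, theta_d=0.8, theta_p=1.2,
                  gamma_d=1.0, gamma_p=2.0):
    """Two-threshold rule: time spent above theta_p drives potentiation,
    time spent between theta_d and theta_p drives depression."""
    ltp = gamma_p * dt * np.sum(c >= theta_p)
    ltd = gamma_d * dt * np.sum((c >= theta_d) & (c < theta_p))
    return ltp - ltd

dw_pre_post = weight_change(calcium_trace([0.05], [0.06]))  # pre leads by 10 ms
dw_post_pre = weight_change(calcium_trace([0.06], [0.05]))  # post leads by 10 ms
print(dw_pre_post > 0, dw_post_pre < 0)
```

The paper's point is that with several such pathways, each with its own activation timescale, the sign and size of the change depend on how many pairings have been delivered, not just on the pairing interval.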
19
Delay-Induced Multistability and Loop Formation in Neuronal Networks with Spike-Timing-Dependent Plasticity. Sci Rep 2018; 8:12068. [PMID: 30104713 PMCID: PMC6089910 DOI: 10.1038/s41598-018-30565-9] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2018] [Accepted: 08/02/2018] [Indexed: 12/16/2022] Open
Abstract
Spike-timing-dependent plasticity (STDP) adjusts synaptic strengths according to the precise timing of pre- and postsynaptic spike pairs. Theoretical and computational studies have revealed that STDP may contribute to the emergence of a variety of structural and dynamical states in plastic neuronal populations. In this manuscript, we show that by incorporating dendritic and axonal propagation delays in recurrent networks of oscillatory neurons, the asymptotic connectivity displays multistability, where different structures emerge depending on the initial distribution of the synaptic strengths. In particular, we show that the standard deviation of the initial distribution of synaptic weights, besides its mean, determines the main properties of the emergent structural connectivity such as the mean final synaptic weight, the number of two-neuron loops and the symmetry of the final structure. We also show that the firing rates of the neurons affect the evolution of the network, and a more symmetric configuration of the synapses emerges at higher firing rates. We justify the network results based on a two-neuron framework and show how the results translate to large recurrent networks.
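For context, the standard pair-based exponential STDP window — which, in the delay-free two-neuron setting, potentiates the leading synapse and depresses the reciprocal one — can be sketched directly (the amplitudes, time constant, and nearest-pair simplification below are generic textbook choices, not the paper's):

```python
import numpy as np

def stdp(lag, a_plus=0.01, a_minus=0.012, tau=0.02):
    """Pair-based exponential STDP window; lag = t_post - t_pre (s)."""
    return np.where(lag > 0,
                    a_plus * np.exp(-lag / tau),
                    -a_minus * np.exp(lag / tau))

# Two neurons firing periodically at 10 Hz, neuron 1 lagging neuron 0 by 5 ms.
t0 = np.arange(0.0, 2.0, 0.1)
t1 = t0 + 0.005

# Same-cycle (nearest) pairings only -- a simplification of all-to-all pairing.
dw_01 = stdp(t1 - t0).sum()  # synapse 0 -> 1: pre leads post, potentiated
dw_10 = stdp(t0 - t1).sum()  # synapse 1 -> 0: pre lags post, depressed
print(dw_01 > 0, dw_10 < 0)  # True True
```

With propagation delays, the lags entering this window are shifted, which is what opens the door to two-neuron loops surviving instead of being pruned to one direction.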
20
Min B, Zhou D, Cai D. Effects of Firing Variability on Network Structures with Spike-Timing-Dependent Plasticity. Front Comput Neurosci 2018; 12:1. [PMID: 29410621 PMCID: PMC5787127 DOI: 10.3389/fncom.2018.00001] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2017] [Accepted: 01/03/2018] [Indexed: 11/17/2022] Open
Abstract
Synaptic plasticity is believed to be the biological substrate underlying learning and memory. One of the most widespread forms of synaptic plasticity, spike-timing-dependent plasticity (STDP), uses the spike timing information of presynaptic and postsynaptic neurons to induce synaptic potentiation or depression. An open question is how STDP organizes the connectivity patterns in neuronal circuits. Previous studies have placed much emphasis on the role of firing rate in shaping connectivity patterns. Here, we go beyond the firing rate description to develop a self-consistent linear response theory that incorporates the information of both firing rate and firing variability. By decomposing the pairwise spike correlation into one component associated with local direct connections and the other associated with indirect connections, we identify two distinct regimes regarding the network structures learned through STDP. In one regime, the contribution of the direct-connection correlations dominates over that of the indirect-connection correlations in the learning dynamics; this gives rise to a network structure consistent with the firing rate description. In the other regime, the contribution of the indirect-connection correlations dominates in the learning dynamics, leading to a network structure different from the firing rate description. We demonstrate that the heterogeneity of firing variability across neuronal populations induces a temporally asymmetric structure of indirect-connection correlations. This temporally asymmetric structure underlies the emergence of the second regime. Our study provides a new perspective that emphasizes the role of high-order statistics of spiking activity in the spike-correlation-sensitive learning dynamics.
Affiliation(s)
- Bin Min
- Center for Neural Science, Courant Institute of Mathematical Sciences, New York University, New York, NY, United States; NYUAD Institute, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Douglas Zhou
- School of Mathematical Sciences, MOE-LSC, Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, China
- David Cai
- Center for Neural Science, Courant Institute of Mathematical Sciences, New York University, New York, NY, United States; NYUAD Institute, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates; School of Mathematical Sciences, MOE-LSC, Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, China
21
Sprekeler H. Functional consequences of inhibitory plasticity: homeostasis, the excitation-inhibition balance and beyond. Curr Opin Neurobiol 2017; 43:198-203. [PMID: 28500933 DOI: 10.1016/j.conb.2017.03.014] [Citation(s) in RCA: 45] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2016] [Revised: 03/12/2017] [Accepted: 03/22/2017] [Indexed: 11/18/2022]
Abstract
Computational neuroscience has a long-standing tradition of investigating the consequences of excitatory synaptic plasticity. In contrast, the functions of inhibitory plasticity are still largely nebulous, particularly given the bewildering diversity of interneurons in the brain. Here, we review recent computational advances that provide first suggestions for the functional roles of inhibitory plasticity, such as a maintenance of the excitation-inhibition balance, a stabilization of recurrent network dynamics and a decorrelation of sensory responses. The field is still in its infancy, but given the existing body of theory for excitatory plasticity, it is likely to mature quickly and deliver important insights into the self-organization of inhibitory circuits in the brain.
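One concrete instance of the homeostatic role discussed in this review is a rate-based caricature of a Vogels-Sprekeler-type inhibitory plasticity rule: the inhibitory weight changes in proportion to presynaptic activity times the deviation of the postsynaptic rate from a target (all numbers below are illustrative):

```python
def simulate_inhib_plasticity(g_exc, r_target=5.0, eta=0.05, w0=0.1, steps=2000):
    """Homeostatic inhibitory plasticity sketch: the inhibitory weight w
    grows when the postsynaptic rate exceeds the target and shrinks
    otherwise, driving the unit toward an excitation-inhibition balance."""
    r_inh = 10.0  # presynaptic interneuron rate (assumed constant)
    w = w0
    for _ in range(steps):
        r_post = max(g_exc - w * r_inh, 0.0)           # threshold-linear unit
        w += eta * r_inh * (r_post - r_target) * 1e-3  # dw ~ pre * (post - rho0)
        w = max(w, 0.0)
    return w, max(g_exc - w * r_inh, 0.0)

w, r = simulate_inhib_plasticity(g_exc=30.0)
print(round(r, 2))  # settles near the target rate of 5 Hz
```

The same rule automatically retunes w if the excitatory drive g_exc changes, which is the stabilizing property the review emphasizes.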
Affiliation(s)
- Henning Sprekeler
- Department for Electrical Engineering and Computer Science, Berlin Institute of Technology, and Bernstein Center for Computational Neuroscience, Marchstr. 23, 10587 Berlin, Germany.
22
Borges RR, Borges FS, Lameu EL, Batista AM, Iarosz KC, Caldas IL, Antonopoulos CG, Baptista MS. Spike timing-dependent plasticity induces non-trivial topology in the brain. Neural Netw 2017; 88:58-64. [PMID: 28189840 DOI: 10.1016/j.neunet.2017.01.010] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2016] [Revised: 01/14/2017] [Accepted: 01/24/2017] [Indexed: 10/20/2022]
Abstract
We study the capacity of Hodgkin-Huxley neurons in a network to change their connections and behavior temporarily or permanently, the so-called spike-timing-dependent plasticity (STDP), as a function of their synchronous behavior. We consider STDP of excitatory and inhibitory synapses driven by Hebbian rules. We show that the final state of networks evolved by STDP depends on the initial network configuration. Specifically, an initial all-to-all topology evolves to a complex topology. Moreover, external perturbations can induce the co-existence of clusters, those whose neurons are synchronous and those whose neurons are desynchronous. This work reveals that STDP based on Hebbian rules leads to a change in the direction of the synapses between high- and low-frequency neurons, and therefore, Hebbian learning can be explained in terms of preferential attachment between these two diverse communities of neurons: those with low-frequency spiking neurons, and those with higher-frequency spiking neurons.
Affiliation(s)
- R R Borges
- Pós-Graduação em Ciências, Universidade Estadual de Ponta Grossa, Ponta Grossa, PR, Brazil; Departamento de Matemática, Universidade Tecnológica Federal do Paraná, 86812-460, Apucarana, PR, Brazil
- F S Borges
- Pós-Graduação em Ciências, Universidade Estadual de Ponta Grossa, Ponta Grossa, PR, Brazil
- E L Lameu
- Pós-Graduação em Ciências, Universidade Estadual de Ponta Grossa, Ponta Grossa, PR, Brazil
- A M Batista
- Pós-Graduação em Ciências, Universidade Estadual de Ponta Grossa, Ponta Grossa, PR, Brazil; Departamento de Matemática e Estatística, Universidade Estadual de Ponta Grossa, Ponta Grossa, PR, Brazil; Instituto de Física, Universidade de São Paulo, São Paulo, SP, Brazil
- K C Iarosz
- Instituto de Física, Universidade de São Paulo, São Paulo, SP, Brazil
- I L Caldas
- Instituto de Física, Universidade de São Paulo, São Paulo, SP, Brazil
- C G Antonopoulos
- Department of Mathematical Sciences, University of Essex, Wivenhoe Park, UK
- M S Baptista
- Institute for Complex Systems and Mathematical Biology, University of Aberdeen, SUPA, Aberdeen, UK
23
Pedrosa V, Clopath C. The Role of Neuromodulators in Cortical Plasticity. A Computational Perspective. Front Synaptic Neurosci 2017; 8:38. [PMID: 28119596 PMCID: PMC5222801 DOI: 10.3389/fnsyn.2016.00038] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2016] [Accepted: 12/12/2016] [Indexed: 11/13/2022] Open
Abstract
Neuromodulators play a ubiquitous role across the brain in regulating plasticity. With recent advances in experimental techniques, it is possible to study the effects of diverse neuromodulatory states in specific brain regions. Neuromodulators are thought to impact plasticity predominantly through two mechanisms: the gating of plasticity and the upregulation of neuronal activity. However, the consequences of these mechanisms are poorly understood and there is a need for both experimental and theoretical exploration. Here we illustrate how neuromodulatory state affects cortical plasticity through these two mechanisms. First, we explore the ability of neuromodulators to gate plasticity by reshaping the learning window for spike-timing-dependent plasticity. Using a simple computational model, we implement four different learning rules and demonstrate their effects on receptive field plasticity. We then compare the neuromodulatory effects of upregulating learning rate versus the effects of upregulating neuronal activity. We find that these seemingly similar mechanisms do not yield the same outcome: upregulating neuronal activity can lead to either a broadening or a sharpening of receptive field tuning, whereas upregulating learning rate only intensifies the sharpening of receptive field tuning. This simple model demonstrates the need for further exploration of the rich landscape of neuromodulator-mediated plasticity. Future experiments, coupled with biologically detailed computational models, will elucidate the diversity of mechanisms by which neuromodulatory state regulates cortical plasticity.
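The distinction drawn here between the two mechanisms shows up already in a rate-based Hebbian caricature, where the update is learning rate times pre- times postsynaptic activity (the numbers are illustrative, and this toy rule stands in for the paper's STDP model): doubling the learning rate scales the update linearly, whereas doubling activity scales it quadratically and also boosts updates at weakly tuned inputs.

```python
# Rate-based Hebbian caricature: dw = eta * r_pre * r_post.
def dw(eta, r_pre, r_post):
    return eta * r_pre * r_post

base = dw(eta=0.01, r_pre=4.0, r_post=4.0)
lr_up = dw(eta=0.02, r_pre=4.0, r_post=4.0)   # neuromodulator doubles learning rate
act_up = dw(eta=0.01, r_pre=8.0, r_post=8.0)  # neuromodulator doubles activity
print(lr_up / base, act_up / base)  # 2.0 vs 4.0: the mechanisms are not equivalent
```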
Affiliation(s)
- Victor Pedrosa
- Department of Bioengineering, Imperial College London, London, UK; CAPES Foundation, Ministry of Education of Brazil, Brasilia, Brazil
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London, UK
24
Madadi Asl M, Valizadeh A, Tass PA. Dendritic and Axonal Propagation Delays Determine Emergent Structures of Neuronal Networks with Plastic Synapses. Sci Rep 2017; 7:39682. [PMID: 28045109 PMCID: PMC5206725 DOI: 10.1038/srep39682] [Citation(s) in RCA: 35] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2016] [Accepted: 11/25/2016] [Indexed: 11/09/2022] Open
Abstract
Spike-timing-dependent plasticity (STDP) modifies synaptic strengths based on the relative timing of pre- and postsynaptic spikes. The temporal order of spikes turned out to be crucial. We here take into account how propagation delays, composed of dendritic and axonal delay times, may affect the temporal order of spikes. In a minimal setting, characterized by neglecting dendritic and axonal propagation delays, STDP eliminates bidirectional connections between two coupled neurons and turns them into unidirectional connections. In this paper, however, we show that depending on the dendritic and axonal propagation delays, the temporal order of spikes at the synapses can be different from those in the cell bodies and, consequently, qualitatively different connectivity patterns emerge. In particular, we show that for a system of two coupled oscillatory neurons, bidirectional synapses can be preserved and potentiated. Intriguingly, this finding also translates to large networks of type-II phase oscillators and, hence, crucially impacts on the overall hierarchical connectivity patterns of oscillatory neuronal networks.
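The mechanism fits in one line: at the synapse, the presynaptic spike is shifted by the axonal delay and the postsynaptic (back-propagated) spike by the dendritic delay, so the lag seen by STDP is Δt + d_dend − d_axon rather than the somatic Δt. A small sketch (the delay values are illustrative):

```python
def synaptic_spike_lag(dt_soma, d_dend, d_axon):
    """Spike lag (post minus pre, in s) as seen at the synapse: the somatic
    lag dt_soma shifted by the dendritic and axonal propagation delays."""
    return dt_soma + d_dend - d_axon

# Two reciprocally coupled neurons with a somatic lag of 2 ms.
# With a sufficiently large dendritic delay, BOTH synapses see
# pre-before-post (positive lag), so pair-based STDP potentiates both
# directions and the bidirectional loop is preserved.
lag_ab = synaptic_spike_lag(+0.002, d_dend=0.004, d_axon=0.001)
lag_ba = synaptic_spike_lag(-0.002, d_dend=0.004, d_axon=0.001)
print(lag_ab > 0 and lag_ba > 0)  # True
```

Conversely, when d_axon − d_dend exceeds the somatic lag, both effective lags turn negative and both synapses are depressed.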
Affiliation(s)
- Mojtaba Madadi Asl
- Institute for Advanced Studies in Basic Sciences (IASBS), Department of Physics, Zanjan, 45195-1159, Iran
- Alireza Valizadeh
- Institute for Advanced Studies in Basic Sciences (IASBS), Department of Physics, Zanjan, 45195-1159, Iran; Institute for Research in Fundamental Sciences (IPM), School of Cognitive Sciences, Tehran, 19395-5746, Iran
- Peter A Tass
- Institute of Neuroscience and Medicine - Neuromodulation (INM-7), Research Center Jülich, Jülich, 52425, Germany; Stanford University, Department of Neurosurgery, Stanford, CA, 94305, USA; University of Cologne, Department of Neuromodulation, Cologne, 50937, Germany
25
Kato H, Ikeguchi T. Oscillation, Conduction Delays, and Learning Cooperate to Establish Neural Competition in Recurrent Networks. PLoS One 2016; 11:e0146044. [PMID: 26840529 PMCID: PMC4740405 DOI: 10.1371/journal.pone.0146044] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2015] [Accepted: 12/11/2015] [Indexed: 11/18/2022] Open
Abstract
Specific memory might be stored in a subnetwork consisting of a small population of neurons. To select neurons involved in memory formation, neural competition might be essential. In this paper, we show that excitable neurons are competitive and organize into two assemblies in a recurrent network with spike timing-dependent synaptic plasticity (STDP) and axonal conduction delays. Neural competition is established by the cooperation of spontaneously induced neural oscillation, axonal conduction delays, and STDP. We also suggest that the competition mechanism in this paper is one of the basic functions required to organize memory-storing subnetworks into fine-scale cortical networks.
Affiliation(s)
- Hideyuki Kato
- School of Engineering, Tokyo University of Technology, Tokyo, Japan
- Tohru Ikeguchi
- Faculty of Engineering Division I, Tokyo University of Science, Tokyo, Japan
26
Effenberger F, Jost J, Levina A. Self-organization in Balanced State Networks by STDP and Homeostatic Plasticity. PLoS Comput Biol 2015; 11:e1004420. [PMID: 26335425 PMCID: PMC4559467 DOI: 10.1371/journal.pcbi.1004420] [Citation(s) in RCA: 37] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2014] [Accepted: 06/30/2015] [Indexed: 11/18/2022] Open
Abstract
Structural inhomogeneities in synaptic efficacies have a strong impact on population response dynamics of cortical networks and are believed to play an important role in their functioning. However, little is known about how such inhomogeneities could evolve by means of synaptic plasticity. Here we present an adaptive model of a balanced neuronal network that combines two different types of plasticity, STDP and synaptic scaling. The plasticity rules yield both long-tailed distributions of synaptic weights and firing rates. Simultaneously, a highly connected subnetwork of driver neurons with strong synapses emerges. Coincident spiking activity of several driver cells can evoke population bursts and driver cells have similar dynamical properties as leader neurons found experimentally. Our model allows us to observe the delicate interplay between structural and dynamical properties of the emergent inhomogeneities. It is simple, robust to parameter changes and able to explain a multitude of different experimental findings in one basic network. It is widely believed that the structure of neuronal circuits plays a major role in brain functioning. Although the full synaptic connectivity for larger populations is not yet assessable even by current experimental techniques, available data show that neither synaptic strengths nor the number of synapses per neuron are homogeneously distributed. Several studies have found long-tailed distributions of synaptic weights with many weak and a few exceptionally strong synaptic connections, as well as strongly connected cells and subnetworks that may play a decisive role for data processing in neural circuits. Little is known about how inhomogeneities could arise in the developing brain and we hypothesize that there is a self-organizing principle behind their appearance. In this study we show how structural inhomogeneities can emerge by simple synaptic plasticity mechanisms from an initially homogeneous network. 
We perform numerical simulations and show analytically how a small imbalance in the initial structure is amplified by the synaptic plasticities and their interplay. Our network can simultaneously explain several experimental observations that were previously not linked.
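The homeostatic ingredient in the model above is synaptic scaling. A common textbook form, sketched below with an illustrative learning rate `eta` and a linear dependence on the rate mismatch (not the exact rule of Effenberger et al.), multiplies all incoming weights of a neuron toward a target firing rate:

```python
def scale_weights(weights, observed_rate, target_rate, eta=0.1):
    """Multiplicative homeostatic scaling: every incoming weight is
    multiplied by the same factor, so relative differences learned by
    STDP are preserved while the overall drive moves the neuron's firing
    rate toward the target. Illustrative form and parameters."""
    factor = 1.0 + eta * (target_rate - observed_rate) / target_rate
    return [w * factor for w in weights]

# A neuron firing below target (5 Hz vs. a 10 Hz target) has all of its
# weights scaled up by the same factor.
print(scale_weights([1.0, 2.0], observed_rate=5.0, target_rate=10.0))
```

Because the scaling is multiplicative, it can coexist with STDP-driven inhomogeneities such as the long-tailed weight distributions described in the abstract.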
Affiliation(s)
- Felix Effenberger
- Max-Planck-Institute for Mathematics in the Sciences, Leipzig, Germany
- Jürgen Jost
- Max-Planck-Institute for Mathematics in the Sciences, Leipzig, Germany
- Anna Levina
- Max-Planck-Institute for Mathematics in the Sciences, Leipzig, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
27
Chrol-Cannon J, Jin Y. Learning structure of sensory inputs with synaptic plasticity leads to interference. Front Comput Neurosci 2015; 9:103. [PMID: 26300769 PMCID: PMC4525052 DOI: 10.3389/fncom.2015.00103]
Abstract
Synaptic plasticity is often explored as a form of unsupervised adaptation in cortical microcircuits to learn the structure of complex sensory inputs and thereby improve performance of classification and prediction. The question of whether the specific structure of the input patterns is encoded in the structure of neural networks has been largely neglected. Existing studies that have analyzed input-specific structural adaptation have used simplified, synthetic inputs in contrast to complex and noisy patterns found in real-world sensory data. In this work, input-specific structural changes are analyzed for three empirically derived models of plasticity applied to three temporal sensory classification tasks that include complex, real-world visual and auditory data. Two forms of spike-timing dependent plasticity (STDP) and the Bienenstock-Cooper-Munro (BCM) plasticity rule are used to adapt the recurrent network structure during the training process before performance is tested on the pattern recognition tasks. It is shown that synaptic adaptation is highly sensitive to specific classes of input pattern. However, plasticity does not improve the performance on sensory pattern recognition tasks, partly due to synaptic interference between consecutively presented input samples. The changes in synaptic strength produced by one stimulus are reversed by the presentation of another, thus largely preventing input-specific synaptic changes from being retained in the structure of the network. To solve the problem of interference, we suggest that models of plasticity be extended to restrict neural activity and synaptic modification to a subset of the neural circuit, which is increasingly found to be the case in experimental neuroscience.
Affiliation(s)
- Joseph Chrol-Cannon
- Department of Computer Science, Faculty of Engineering and Physical Sciences, University of Surrey, Guildford, UK
- Yaochu Jin
- Department of Computer Science, Faculty of Engineering and Physical Sciences, University of Surrey, Guildford, UK
28
Mao H, Yuan Y, Si J. Improved discriminability of spatiotemporal neural patterns in rat motor cortical areas as directional choice learning progresses. Front Syst Neurosci 2015; 9:28. [PMID: 25798093 PMCID: PMC4351592 DOI: 10.3389/fnsys.2015.00028]
Abstract
Animals learn to choose a proper action among alternatives to improve their odds of success in food foraging and other activities critical for survival. Through trial-and-error, they learn correct associations between their choices and external stimuli. While a neural network that underlies such learning process has been identified at a high level, it is still unclear how individual neurons and a neural ensemble adapt as learning progresses. In this study, we monitored the activity of single units in the rat medial and lateral agranular (AGm and AGl, respectively) areas as rats learned to make a left or right side lever press in response to a left or right side light cue. We noticed that rat movement parameters during the performance of the directional choice task quickly became stereotyped during the first 2–3 days or sessions. But learning the directional choice problem took weeks to occur. Accompanying rats' behavioral performance adaptation, we observed neural modulation by directional choice in recorded single units. Our analysis shows that ensemble mean firing rates in the cue-on period did not change significantly as learning progressed, and the ensemble mean rate difference between left and right side choices did not show a clear trend of change either. However, the spatiotemporal firing patterns of the neural ensemble exhibited improved discriminability between the two directional choices through learning. These results suggest a spatiotemporal neural coding scheme in a motor cortical neural ensemble that may be responsible for and contributing to learning the directional choice task.
Affiliation(s)
- Hongwei Mao
- Electrical Engineering, School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, AZ, USA
- Yuan Yuan
- Electrical Engineering, School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, AZ, USA
- Jennie Si
- Electrical Engineering, School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, AZ, USA
- Graduate Faculty of the School of Biological and Health Systems Engineering, Arizona State University, Tempe, AZ, USA
- Affiliate Faculty of the Interdisciplinary Graduate Program in Neuroscience, Arizona State University, Tempe, AZ, USA
29
Borovkov K, Decrouez G, Gilson M. On Stationary Distributions of Stochastic Neural Networks. J Appl Probab 2014. [DOI: 10.1239/jap/1409932677]
Abstract
The paper deals with nonlinear Poisson neuron network models with bounded memory dynamics, which can include both Hebbian learning mechanisms and refractory periods. The state of the network is described by the times elapsed since its neurons fired within the post-synaptic transfer kernel memory span, and the current strengths of synaptic connections, the state spaces of our models being hierarchies of finite-dimensional components. We prove the ergodicity of the stochastic processes describing the behaviour of the networks, establish the existence of continuously differentiable stationary distribution densities (with respect to the Lebesgue measures of corresponding dimensionality) on the components of the state space, and find upper bounds for them. For the density components, we derive a system of differential equations that can be solved in a few simplest cases only. Approaches to approximate computation of the stationary density are discussed. One approach is to reduce the dimensionality of the problem by modifying the network so that each neuron cannot fire if the number of spikes it emitted within the post-synaptic transfer kernel memory span reaches a given threshold. We show that the stationary distribution of this ‘truncated’ network converges to that of the unrestricted network as the threshold increases, and that the convergence is at a superexponential rate. A complementary approach uses discrete Markov chain approximations to the network process.
30
Interplay between short- and long-term plasticity in cell-assembly formation. PLoS One 2014; 9:e101535. [PMID: 25007209 PMCID: PMC4090127 DOI: 10.1371/journal.pone.0101535]
Abstract
Various hippocampal and neocortical synapses of mammalian brain show both short-term plasticity and long-term plasticity, which are considered to underlie learning and memory by the brain. According to Hebb’s postulate, synaptic plasticity encodes memory traces of past experiences into cell assemblies in cortical circuits. However, it remains unclear how the various forms of long-term and short-term synaptic plasticity cooperatively create and reorganize such cell assemblies. Here, we investigate the mechanism in which the three forms of synaptic plasticity known in cortical circuits, i.e., spike-timing-dependent plasticity (STDP), short-term depression (STD) and homeostatic plasticity, cooperatively generate, retain and reorganize cell assemblies in a recurrent neuronal network model. We show that multiple cell assemblies generated by external stimuli can survive noisy spontaneous network activity for an adequate range of the strength of STD. Furthermore, our model predicts that a symmetric temporal window of STDP, such as observed in dopaminergic modulations on hippocampal neurons, is crucial for the retention and integration of multiple cell assemblies. These results may have implications for the understanding of cortical memory processes.
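The short-term depression (STD) invoked above is commonly modeled phenomenologically in the Tsodyks-Markram style: a synaptic resource recovers between spikes and is partially depleted by each one. A minimal sketch follows, with illustrative `tau_rec` and `use` values that are not fitted to this paper.

```python
import math

def std_resource(spike_times, tau_rec=200.0, use=0.5):
    """Return the fraction of synaptic resource available at each
    presynaptic spike. The resource x recovers exponentially toward 1
    with time constant tau_rec (ms) and loses a fraction `use` per spike."""
    x = 1.0
    last_t = None
    available = []
    for t in spike_times:
        if last_t is not None:
            # exponential recovery toward 1 during the inter-spike interval
            x = 1.0 - (1.0 - x) * math.exp(-(t - last_t) / tau_rec)
        available.append(x)
        x -= use * x  # depletion caused by this spike
        last_t = t
    return available

# A fast spike train progressively depletes the resource.
print(std_resource([0.0, 10.0, 20.0]))
```

In a cell-assembly model, the postsynaptic effect of each spike would scale with `use * x`, so a highly active assembly transiently weakens its own recurrent drive, which is one way STD can keep multiple assemblies from collapsing into one.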
31
Kleberg FI, Fukai T, Gilson M. Excitatory and inhibitory STDP jointly tune feedforward neural circuits to selectively propagate correlated spiking activity. Front Comput Neurosci 2014; 8:53. [PMID: 24847242 PMCID: PMC4019846 DOI: 10.3389/fncom.2014.00053]
Abstract
Spike-timing-dependent plasticity (STDP) has been well established between excitatory neurons and several computational functions have been proposed in various neural systems. Despite some recent efforts, however, there is a significant lack of functional understanding of inhibitory STDP (iSTDP) and its interplay with excitatory STDP (eSTDP). Here, we demonstrate by analytical and numerical methods that iSTDP contributes crucially to the balance of excitatory and inhibitory weights for the selection of a specific signaling pathway among other pathways in a feedforward circuit. This pathway selection is based on the high sensitivity of STDP to correlations in spike times, which complements a recent proposal for the role of iSTDP in firing-rate based selection. Our model predicts that asymmetric anti-Hebbian iSTDP exceeds asymmetric Hebbian iSTDP for supporting pathway-specific balance, which we show is useful for propagating transient neuronal responses. Furthermore, we demonstrate how STDPs at excitatory-excitatory, excitatory-inhibitory, and inhibitory-excitatory synapses cooperate to improve the pathway selection. We propose that iSTDP is crucial for shaping the network structure that achieves efficient processing of synchronous spikes.
Affiliation(s)
- Florence I Kleberg
- Lab for Neural Circuit Theory, RIKEN Brain Science Institute, Wako, Japan
- Tomoki Fukai
- Lab for Neural Circuit Theory, RIKEN Brain Science Institute, Wako, Japan
- Matthieu Gilson
- Lab for Neural Circuit Theory, RIKEN Brain Science Institute, Wako, Japan
32
Vasilaki E, Giugliano M. Emergence of connectivity motifs in networks of model neurons with short- and long-term plastic synapses. PLoS One 2014; 9:e84626. [PMID: 24454735 PMCID: PMC3893143 DOI: 10.1371/journal.pone.0084626]
Abstract
Recent experimental data from the rodent cerebral cortex and olfactory bulb indicate that specific connectivity motifs are correlated with short-term dynamics of excitatory synaptic transmission. It was observed that neurons with short-term facilitating synapses form predominantly reciprocal pairwise connections, while neurons with short-term depressing synapses form predominantly unidirectional pairwise connections. The cause of these structural differences in excitatory synaptic microcircuits is unknown. We show that these connectivity motifs emerge in networks of model neurons, from the interactions between short-term synaptic dynamics (SD) and long-term spike-timing dependent plasticity (STDP). While the impact of STDP on SD was shown in simultaneous neuronal pair recordings in vitro, the mutual interactions between STDP and SD in large networks are still the subject of intense research. Our approach combines an SD phenomenological model with an STDP model that faithfully captures long-term plasticity dependence on both spike times and frequency. As a proof of concept, we first simulate and analyze recurrent networks of spiking neurons with random initial connection efficacies and where synapses are either all short-term facilitating or all depressing. For identical external inputs to the network, and as a direct consequence of internally generated activity, we find that networks with depressing synapses evolve unidirectional connectivity motifs, while networks with facilitating synapses evolve reciprocal connectivity motifs. We then show that the same results hold for heterogeneous networks, including both facilitating and depressing synapses. This does not contradict a recent theory that proposes that motifs are shaped by external inputs, but rather complements it by examining the role of both the external inputs and the internally generated network activity. 
Our study highlights the conditions under which SD-STDP might explain the correlation between facilitation and reciprocal connectivity motifs, as well as between depression and unidirectional motifs.
Affiliation(s)
- Eleni Vasilaki
- Department of Computer Science, University of Sheffield, Sheffield, United Kingdom
- Department of Biomedical Sciences, University of Antwerp, Wilrijk, Belgium
- Michele Giugliano
- Department of Computer Science, University of Sheffield, Sheffield, United Kingdom
- Department of Biomedical Sciences, University of Antwerp, Wilrijk, Belgium
- Brain Mind Institute, Swiss Federal Institute of Technology of Lausanne, Lausanne, Switzerland
33
Grytskyy D, Tetzlaff T, Diesmann M, Helias M. A unified view on weakly correlated recurrent networks. Front Comput Neurosci 2013; 7:131. [PMID: 24151463 PMCID: PMC3799216 DOI: 10.3389/fncom.2013.00131]
Abstract
The diversity of neuron models used in contemporary theoretical neuroscience to investigate specific properties of covariances in the spiking activity raises the question how these models relate to each other. In particular it is hard to distinguish between generic properties of covariances and peculiarities due to the abstracted model. Here we present a unified view on pairwise covariances in recurrent networks in the irregular regime. We consider the binary neuron model, the leaky integrate-and-fire (LIF) model, and the Hawkes process. We show that linear approximation maps each of these models to either of two classes of linear rate models (LRM), including the Ornstein-Uhlenbeck process (OUP) as a special case. The distinction between both classes is the location of additive noise in the rate dynamics, which is located on the output side for spiking models and on the input side for the binary model. Both classes allow closed form solutions for the covariance. For output noise it separates into an echo term and a term due to correlated input. The unified framework enables us to transfer results between models. For example, we generalize the binary model and the Hawkes process to the situation with synaptic conduction delays and simplify derivations for established results. Our approach is applicable to general network structures and suitable for the calculation of population averages. The derived averages are exact for fixed out-degree network architectures and approximate for fixed in-degree. We demonstrate how taking into account fluctuations in the linearization procedure increases the accuracy of the effective theory and we explain the class dependent differences between covariances in the time and the frequency domain. 
Finally we show that the oscillatory instability emerging in networks of LIF models with delayed inhibitory feedback is a model-invariant feature: the same structure of poles in the complex frequency plane determines the population power spectra.
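Since the Ornstein-Uhlenbeck process appears above as a special case of the linear rate models, a minimal Euler-Maruyama sketch (with illustrative parameters) may help fix the notation:

```python
import math
import random

def simulate_ou(n_steps=2000, dt=0.01, tau=1.0, sigma=0.5, x0=0.0, seed=1):
    """Euler-Maruyama integration of the Ornstein-Uhlenbeck SDE
    dx = -(x / tau) dt + sigma dW. Returns the sampled path,
    including the initial condition."""
    rng = random.Random(seed)
    x = x0
    path = [x]
    sqrt_dt = math.sqrt(dt)
    for _ in range(n_steps):
        x += -(x / tau) * dt + sigma * sqrt_dt * rng.gauss(0.0, 1.0)
        path.append(x)
    return path
```

The stationary variance of this process is sigma^2 * tau / 2 (0.125 for the values above); in the framework summarized in the abstract, the location of the additive noise term, on the input or output side of the rate dynamics, is what distinguishes the two model classes.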
Affiliation(s)
- Dmytro Grytskyy
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6), Jülich Research Centre and JARA, Jülich, Germany
34
Abstract
Numerous experimental data suggest that simultaneously or sequentially activated assemblies of neurons play a key role in the storage and computational use of long-term memory in the brain. However, a model that elucidates how these memory traces could emerge through spike-timing-dependent plasticity (STDP) has been missing. We show here that stimulus-specific assemblies of neurons emerge automatically through STDP in a simple cortical microcircuit model. The model that we consider is a randomly connected network of well known microcircuit motifs: pyramidal cells with lateral inhibition. We show that the emergent assembly codes for repeatedly occurring spatiotemporal input patterns tend to fire in some loose, sequential manner that is reminiscent of experimentally observed stereotypical trajectories of network states. We also show that the emergent assembly codes add an important computational capability to standard models for online computations in cortical microcircuits: the capability to integrate information from long-term memory with information from novel spike inputs.
35
Babadi B, Abbott LF. Pairwise analysis can account for network structures arising from spike-timing dependent plasticity. PLoS Comput Biol 2013; 9:e1002906. [PMID: 23436986 PMCID: PMC3578766 DOI: 10.1371/journal.pcbi.1002906]
Abstract
Spike timing-dependent plasticity (STDP) modifies synaptic strengths based on timing information available locally at each synapse. Despite this, it induces global structures within a recurrently connected network. We study such structures both through simulations and by analyzing the effects of STDP on pair-wise interactions of neurons. We show how conventional STDP acts as a loop-eliminating mechanism and organizes neurons into in- and out-hubs. Loop-elimination increases when depression dominates and turns into loop-generation when potentiation dominates. STDP with a shifted temporal window such that coincident spikes cause depression enhances recurrent connections and functions as a strict buffering mechanism that maintains a roughly constant average firing rate. STDP with the opposite temporal shift functions as a loop eliminator at low rates and as a potent loop generator at higher rates. In general, studying pairwise interactions of neurons provides important insights about the structures that STDP can produce in large networks.
Affiliation(s)
- Baktash Babadi
- Center for Theoretical Neuroscience, Department of Neuroscience, Columbia University, New York, New York, United States of America.
36
Kerr RR, Burkitt AN, Thomas DA, Gilson M, Grayden DB. Delay selection by spike-timing-dependent plasticity in recurrent networks of spiking neurons receiving oscillatory inputs. PLoS Comput Biol 2013; 9:e1002897. [PMID: 23408878 PMCID: PMC3567188 DOI: 10.1371/journal.pcbi.1002897]
Abstract
Learning rules, such as spike-timing-dependent plasticity (STDP), change the structure of networks of neurons based on the firing activity. A network level understanding of these mechanisms can help infer how the brain learns patterns and processes information. Previous studies have shown that STDP selectively potentiates feed-forward connections that have specific axonal delays, and that this underlies behavioral functions such as sound localization in the auditory brainstem of the barn owl. In this study, we investigate how STDP leads to the selective potentiation of recurrent connections with different axonal and dendritic delays during oscillatory activity. We develop analytical models of learning with additive STDP in recurrent networks driven by oscillatory inputs, and support the results using simulations with leaky integrate-and-fire neurons. Our results show selective potentiation of connections with specific axonal delays, which depended on the input frequency. In addition, we demonstrate how this can lead to a network becoming selective in the amplitude of its oscillatory response to this frequency. We extend this model of axonal delay selection within a single recurrent network in two ways. First, we show the selective potentiation of connections with a range of both axonal and dendritic delays. Second, we show axonal delay selection between multiple groups receiving out-of-phase, oscillatory inputs. We discuss the application of these models to the formation and activation of neuronal ensembles or cell assemblies in the cortex, and also to missing fundamental pitch perception in the auditory brainstem. Our brain's ability to perform cognitive processes, such as object identification, problem solving, and decision making, comes from the specific connections between neurons. The neurons carry information as spikes that are transmitted to other neurons via connections with different strengths and propagation delays. 
Experimentally observed learning rules can modify the strengths of connections between neurons based on the timing of their spikes. The learning that occurs in neuronal networks due to these rules is thought to be vital to creating the structures necessary for different cognitive processes as well as for memory. The spiking rate of populations of neurons has been observed to oscillate at particular frequencies in various brain regions, and there is evidence that these oscillations play a role in cognition. Here, we use analytical and numerical methods to investigate the changes to the network structure caused by a specific learning rule during oscillatory neural activity. We find the conditions under which connections with propagation delays that resonate with the oscillations are strengthened relative to the other connections. We demonstrate that networks learn to oscillate more strongly to oscillations at the frequency they were presented with during learning. We discuss the possible application of these results to specific areas of the brain.
Affiliation(s)
- Robert R. Kerr
- NeuroEngineering Laboratory, Department of Electrical and Electronic Engineering, The University of Melbourne, Melbourne, Victoria, Australia
- Centre for Neural Engineering, The University of Melbourne, Melbourne, Victoria, Australia
- Anthony N. Burkitt
- NeuroEngineering Laboratory, Department of Electrical and Electronic Engineering, The University of Melbourne, Melbourne, Victoria, Australia
- Centre for Neural Engineering, The University of Melbourne, Melbourne, Victoria, Australia
- Bionics Institute, Melbourne, Victoria, Australia
- Doreen A. Thomas
- Department of Mechanical Engineering, The University of Melbourne, Melbourne, Victoria, Australia
- Matthieu Gilson
- NeuroEngineering Laboratory, Department of Electrical and Electronic Engineering, The University of Melbourne, Melbourne, Victoria, Australia
- Centre for Neural Engineering, The University of Melbourne, Melbourne, Victoria, Australia
- Laboratory for Neural Circuit Theory, RIKEN Brain Science Institute, Saitama, Japan
- David B. Grayden
- NeuroEngineering Laboratory, Department of Electrical and Electronic Engineering, The University of Melbourne, Melbourne, Victoria, Australia
- Centre for Neural Engineering, The University of Melbourne, Melbourne, Victoria, Australia
- Bionics Institute, Melbourne, Victoria, Australia
37
Ren Q, Kolwankar KM, Samal A, Jost J. Hopf bifurcation in the evolution of networks driven by spike-timing-dependent plasticity. Phys Rev E Stat Nonlin Soft Matter Phys 2012; 86:056103. [PMID: 23214839 DOI: 10.1103/physreve.86.056103]
Abstract
We study the interplay of topology and dynamics in a neural network connected with spike-timing-dependent plasticity (STDP) synapses. Stimulated with periodic spike trains, the STDP-driven network undergoes a synaptic pruning process and evolves to a residual network. We examine the variation of topological and dynamical properties of the residual network by varying two key parameters of STDP: synaptic delay and the ratio between potentiation and depression. Our extensive numerical simulations of the leaky integrate-and-fire model show that there exist two regions in the parameter space. The first corresponds to fixed-point configurations, where the distribution of peak synaptic conductances and the firing rate of neurons remain constant over time. The second corresponds to oscillating configurations, where both topological and dynamical properties vary periodically, which is a result of a fixed point becoming a limit cycle via a Hopf bifurcation. This leads to interesting questions regarding the implications of these rhythms in the topology and dynamics of the network for learning and cognitive processing.
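The leaky integrate-and-fire neuron simulated here (and in several other entries in this list) reduces to a one-variable update between spikes. A minimal forward-Euler sketch with illustrative constants, not the authors' exact parameters:

```python
def simulate_lif(i_ext, t_max=100.0, dt=0.1, tau_m=20.0, v_rest=-70.0,
                 v_thresh=-54.0, v_reset=-70.0, r_m=10.0):
    """Forward-Euler leaky integrate-and-fire neuron driven by a constant
    current i_ext. Membrane equation: tau_m dV/dt = -(V - v_rest) + r_m * i_ext.
    Returns the spike times in ms; V resets to v_reset after each spike."""
    v = v_rest
    spikes = []
    for step in range(int(t_max / dt)):
        v += (dt / tau_m) * (-(v - v_rest) + r_m * i_ext)
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset
    return spikes

# Suprathreshold drive fires periodically; weak drive stays silent.
print(len(simulate_lif(2.0)), len(simulate_lif(0.5)))
```

With these constants, the voltage asymptote is v_rest + r_m * i_ext, so i_ext = 2.0 (asymptote -50 mV) crosses threshold repeatedly while i_ext = 0.5 (asymptote -65 mV) never does.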
Affiliation(s)
- Quansheng Ren
- School of Electronics Engineering and Computer Science, Peking University, Beijing 100871, China.
38
Markram H, Gerstner W, Sjöström PJ. Spike-timing-dependent plasticity: a comprehensive overview. Front Synaptic Neurosci 2012; 4:2. [PMID: 22807913 PMCID: PMC3395004 DOI: 10.3389/fnsyn.2012.00002]
Affiliation(s)
- H Markram
- Brain Mind Institute, Life Science, Ecole Polytechnique Federale de Lausanne, Lausanne, Switzerland
39
Spectral analysis of input spike trains by spike-timing-dependent plasticity. PLoS Comput Biol 2012; 8:e1002584. [PMID: 22792056 PMCID: PMC3390410 DOI: 10.1371/journal.pcbi.1002584]
Abstract
Spike-timing-dependent plasticity (STDP) has been observed in many brain areas such as sensory cortices, where it is hypothesized to structure synaptic connections between neurons. Previous studies have demonstrated how STDP can capture spiking information at short timescales using specific input configurations, such as coincident spiking, spike patterns and oscillatory spike trains. However, the corresponding computation in the case of arbitrary input signals is still unclear. This paper provides an overarching picture of the algorithm inherent to STDP, tying together many previous results for commonly used models of pairwise STDP. For a single neuron with plastic excitatory synapses, we show how STDP performs a spectral analysis on the temporal cross-correlograms between its afferent spike trains. The postsynaptic responses and STDP learning window determine kernel functions that specify how the neuron “sees” the input correlations. We thus denote this unsupervised learning scheme as ‘kernel spectral component analysis’ (kSCA). In particular, the whole input correlation structure must be considered since all plastic synapses compete with each other. We find that kSCA is enhanced when weight-dependent STDP induces gradual synaptic competition. For a spiking neuron with a “linear” response and pairwise STDP alone, we find that kSCA resembles principal component analysis (PCA). However, plain STDP does not isolate correlation sources in general, e.g., when they are mixed among the input spike trains. In other words, it does not perform independent component analysis (ICA). Tuning the neuron to a single correlation source can be achieved when STDP is paired with a homeostatic mechanism that reinforces the competition between synaptic inputs. Our results suggest that neuronal networks equipped with STDP can process signals encoded in the transient spiking activity at the timescales of tens of milliseconds for usual STDP. 
Tuning feature extraction of sensory stimuli is an important function for synaptic plasticity models. A widely studied example is the development of orientation preference in the primary visual cortex, which can emerge using moving bars in the visual field. A crucial point is the decomposition of stimuli into basic information tokens, e.g., selecting individual bars even though they are presented in overlapping pairs (vertical and horizontal). Among classical unsupervised learning models, independent component analysis (ICA) is capable of isolating basic tokens, whereas principal component analysis (PCA) cannot. This paper focuses on spike-timing-dependent plasticity (STDP), whose functional implications for neural information processing have been intensively studied both theoretically and experimentally in the last decade. Following recent studies demonstrating that STDP can perform ICA for specific cases, we show how STDP relates to PCA or ICA, and in particular explains the conditions under which it switches between them. Here information at the neuronal level is assumed to be encoded in temporal cross-correlograms of spike trains. We find that a linear spiking neuron equipped with pairwise STDP requires additional mechanisms, such as a homeostatic regulation of its output firing, in order to separate mixed correlation sources and thus perform ICA.
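The pairwise scheme this abstract builds on can be sketched in a few lines. The exponential window shape is the standard form for pairwise STDP models, but the amplitudes and time constants here are illustrative assumptions, not the kernels derived in the paper:

```python
import math

# Illustrative constants for a standard exponential pairwise STDP window;
# these are NOT the fitted kernels from the paper.
A_PLUS, A_MINUS = 0.010, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 17.0, 34.0   # decay time constants (ms)

def stdp_window(dt_ms):
    """Weight change for one spike pair; dt = t_post - t_pre (ms)."""
    if dt_ms >= 0:                                   # pre leads -> potentiation
        return A_PLUS * math.exp(-dt_ms / TAU_PLUS)
    return -A_MINUS * math.exp(dt_ms / TAU_MINUS)    # post leads -> depression

def pairwise_drift(pre_spikes, post_spikes):
    """Weight drift from summing the window over all spike pairs.

    Averaged over many trains, this sum is the integral of the STDP window
    against the pre/post cross-correlogram -- the quantity the paper's
    spectral (kSCA) analysis operates on.
    """
    return sum(stdp_window(tp - t) for t in pre_spikes for tp in post_spikes)

# An input that reliably drives the neuron 5 ms later drifts upward on average:
dw = pairwise_drift([0.0, 100.0, 200.0], [5.0, 105.0, 205.0])
```

The point of the sketch is only that the sign and size of the drift are set by how the input correlation structure falls under the learning window, which is what makes a spectral reading of STDP possible.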
Collapse
|
40
|
Burbank KS, Kreiman G. Depression-biased reverse plasticity rule is required for stable learning at top-down connections. PLoS Comput Biol 2012; 8:e1002393. [PMID: 22396630 PMCID: PMC3291526 DOI: 10.1371/journal.pcbi.1002393] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2011] [Accepted: 01/01/2012] [Indexed: 11/19/2022] Open
Abstract
Top-down synapses are ubiquitous throughout neocortex and play a central role in cognition, yet little is known about their development and specificity. During sensory experience, lower neocortical areas are activated before higher ones, causing top-down synapses to experience a preponderance of post-synaptic activity preceding pre-synaptic activity. This timing pattern is the opposite of that experienced by bottom-up synapses, which suggests that different versions of spike-timing dependent synaptic plasticity (STDP) rules may be required at top-down synapses. We consider a two-layer neural network model and investigate which STDP rules can lead to a distribution of top-down synaptic weights that is stable, diverse and avoids strong loops. We introduce a temporally reversed rule (rSTDP) where top-down synapses are potentiated if post-synaptic activity precedes pre-synaptic activity. Combining analytical work and integrate-and-fire simulations, we show that only depression-biased rSTDP (and not classical STDP) produces stable and diverse top-down weights. The conclusions did not change upon addition of homeostatic mechanisms, multiplicative STDP rules or weak external input to the top neurons. Our prediction for rSTDP at top-down synapses, which are distally located, is supported by recent neurophysiological evidence showing the existence of temporally reversed STDP in synapses that are distal to the post-synaptic cell body. The complex circuitry in the cerebral cortex is characterized by bottom-up connections, which carry feedforward information from the sensory periphery to higher areas, and top-down connections, where the information flow is reversed. Changes over time in the strength of synaptic connections between neurons underlie development, learning and memory. 
A fundamental mechanism to change synaptic strength is spike timing dependent plasticity, whereby synapses are strengthened whenever pre-synaptic spikes shortly precede post-synaptic spikes and are weakened otherwise; the relative timing of spikes therefore dictates the direction of plasticity. Spike timing dependent plasticity has been observed in multiple species and different brain areas. Here, we argue that top-down connections obey a learning rule with a reversed temporal dependence, which we call reverse spike timing dependent plasticity. We use mathematical analysis and computational simulations to show that this reverse time learning rule, and not previous learning rules, leads to a biologically plausible connectivity pattern with stable synaptic strengths. This reverse time learning rule is supported by recent neuroanatomical and neurophysiological experiments and can explain empirical observations about the development and function of top-down synapses in the brain.
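The temporally reversed, depression-biased rule can be sketched directly. The amplitudes and time constant below are illustrative assumptions; the paper's structural requirement is only that depression outweigh potentiation (A_MINUS > A_PLUS):

```python
import math

# Illustrative rSTDP parameters with the depression bias the paper argues is
# required for stability (A_MINUS > A_PLUS); the time constant is arbitrary.
A_PLUS, A_MINUS, TAU = 0.010, 0.012, 20.0

def rstdp_window(dt_ms):
    """Reversed rule for top-down synapses; dt = t_post - t_pre (ms)."""
    if dt_ms < 0:                                 # post leads -> potentiation
        return A_PLUS * math.exp(dt_ms / TAU)
    return -A_MINUS * math.exp(-dt_ms / TAU)      # pre leads -> depression

# During sensory experience the lower-area (post-synaptic) neuron fires first,
# so a top-down synapse mostly sees dt < 0 and is strengthened:
dw_typical = rstdp_window(-10.0)
# while at equal |dt| depression outweighs potentiation (the stabilizing bias):
dw_reversed = rstdp_window(+10.0)
```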
Collapse
Affiliation(s)
- Kendra S. Burbank
- Department of Neurology and Ophthalmology, Children's Hospital Boston, Harvard Medical School, Boston, Massachusetts, United States of America
| | - Gabriel Kreiman
- Department of Neurology and Ophthalmology, Children's Hospital Boston, Harvard Medical School, Boston, Massachusetts, United States of America
- Center for Brain Science, Harvard University, Cambridge, Massachusetts, United States of America
- Swartz Center for Theoretical Neuroscience, Harvard University, Cambridge, Massachusetts, United States of America
| |
Collapse
|
41
|
Abstract
The mammalian cerebral cortex is characterized in vivo by irregular spontaneous activity, but how this ongoing dynamics affects signal processing and learning remains unknown. The associative plasticity rules demonstrated in vitro, mostly in silent networks, are based on the detection of correlations between presynaptic and postsynaptic activity and hence are sensitive to spontaneous activity and spurious correlations. Therefore, they cannot operate in realistic network states. Here, we present a new class of spike-timing-dependent plasticity learning rules with local floating plasticity thresholds, the slow dynamics of which account for metaplasticity. This novel algorithm is shown to both correctly predict homeostasis in synaptic weights and solve the problem of asymptotic stable learning in noisy states. It is shown to naturally encompass many other known types of learning rule, unifying them into a single coherent framework. The mixed presynaptic and postsynaptic dependency of the floating plasticity threshold is justified by a cascade of known molecular pathways, which leads to experimentally testable predictions.
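The entry gives no explicit equations, but the "floating threshold" idea can be illustrated with a minimal BCM-flavored sketch. The update rules and constants below are assumptions for illustration, not this paper's model:

```python
# Minimal BCM-flavored sketch of a slowly "floating" plasticity threshold.
# Update rules and constants are illustrative assumptions, not the paper's.
ETA, TAU_THETA = 0.01, 100.0   # learning rate; slow threshold time constant

def step(w, theta, pre, post, dt=1.0):
    """One plasticity step: potentiate while post > theta, depress otherwise."""
    w += ETA * pre * post * (post - theta) * dt
    # The threshold slowly tracks recent postsynaptic activity (metaplasticity),
    # so sustained firing raises it and eventually halts further potentiation:
    theta += (post ** 2 - theta) * dt / TAU_THETA
    return w, theta

w, theta = 0.5, 0.0
for _ in range(2000):          # sustained pre/post activity
    w, theta = step(w, theta, pre=1.0, post=1.0)
# w grows at first, then plateaus as theta approaches post**2: the slow
# threshold dynamics provide the homeostasis the abstract describes.
```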
Collapse
|
42
|
Gilson M, Fukai T, Toyoizumi T. Interplay between dendritic non-linearities and STDP. BMC Neurosci 2011. [PMCID: PMC3240454 DOI: 10.1186/1471-2202-12-s1-p338] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022] Open
|
43
|
Stability versus neuronal specialization for STDP: long-tail weight distributions solve the dilemma. PLoS One 2011; 6:e25339. [PMID: 22003389 PMCID: PMC3189213 DOI: 10.1371/journal.pone.0025339] [Citation(s) in RCA: 58] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2011] [Accepted: 09/01/2011] [Indexed: 11/19/2022] Open
Abstract
Spike-timing-dependent plasticity (STDP) modifies the weight (or strength) of synaptic connections between neurons and is considered to be crucial for generating network structure. It has been observed in physiology that, in addition to spike timing, the weight update also depends on the current value of the weight. The functional implications of this feature are still largely unclear. Additive STDP gives rise to strong competition among synapses, but due to the absence of weight dependence, it requires hard boundaries to secure the stability of weight dynamics. Multiplicative STDP with linear weight dependence for depression ensures stability, but it lacks sufficiently strong competition required to obtain a clear synaptic specialization. A solution to this stability-versus-function dilemma can be found with an intermediate parametrization between additive and multiplicative STDP. Here we propose a novel solution to the dilemma, named log-STDP, whose key feature is a sublinear weight dependence for depression. Due to its specific weight dependence, this new model can produce significantly broad weight distributions with no hard upper bound, similar to those recently observed in experiments. Log-STDP induces graded competition between synapses, such that synapses receiving stronger input correlations are pushed further in the tail of (very) large weights. Strong weights are functionally important to enhance the neuronal response to synchronous spike volleys. Depending on the input configuration, multiple groups of correlated synaptic inputs exhibit either winner-share-all or winner-take-all behavior. When the configuration of input correlations changes, individual synapses quickly and robustly readapt to represent the new configuration. We also demonstrate the advantages of log-STDP for generating a stable structure of strong weights in a recurrently connected network. These properties of log-STDP are compared with those of previous models. 
Through long-tailed weight distributions, log-STDP achieves both stable synaptic dynamics and robust competition among synapses, which are crucial for spike-based information processing.
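The defining ingredient, a sublinear weight dependence for depression, can be sketched as follows. The functional forms mirror the shape the abstract describes (roughly multiplicative below a reference weight, logarithmic above it), but the constants are illustrative assumptions:

```python
import math

# Illustrative log-STDP weight-dependence factors. The key feature from the
# abstract: depression grows only sublinearly (logarithmically) for weights
# above a reference value W0, so there is no hard upper bound and long-tailed
# weight distributions can emerge. All constants are assumptions.
C_P, C_D, W0, ALPHA = 1.0, 0.5, 1.0, 5.0

def potentiation_scale(w):
    """Potentiation weakens gently as the weight grows."""
    return C_P * math.exp(-w / W0)

def depression_scale(w):
    """Roughly multiplicative below W0, logarithmic (sublinear) above it."""
    if w <= W0:
        return C_D * (w / W0)
    return C_D * (1.0 + math.log(1.0 + ALPHA * (w / W0 - 1.0)) / ALPHA)
```

Because the depression factor at ten times W0 is far less than ten times its value at W0, strong synapses are restrained only softly — exactly what lets a heavy tail of large weights survive while keeping the bulk of weights stable.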
Collapse
|
44
|
Markram H, Gerstner W, Sjöström PJ. A history of spike-timing-dependent plasticity. Front Synaptic Neurosci 2011; 3:4. [PMID: 22007168 PMCID: PMC3187646 DOI: 10.3389/fnsyn.2011.00004] [Citation(s) in RCA: 201] [Impact Index Per Article: 15.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2011] [Accepted: 07/25/2011] [Indexed: 01/21/2023] Open
Abstract
How learning and memory is achieved in the brain is a central question in neuroscience. Key to today's research into information storage in the brain is the concept of synaptic plasticity, a notion that has been heavily influenced by Hebb's (1949) postulate. Hebb conjectured that repeatedly and persistently co-active cells should increase connective strength among populations of interconnected neurons as a means of storing a memory trace, also known as an engram. Hebb certainly was not the first to make such a conjecture, as we show in this history. Nevertheless, literally thousands of studies into the classical frequency-dependent paradigm of cellular learning rules were directly inspired by the Hebbian postulate. But in more recent years, a novel concept in cellular learning has emerged, where temporal order instead of frequency is emphasized. This new learning paradigm - known as spike-timing-dependent plasticity (STDP) - has rapidly gained tremendous interest, perhaps because of its combination of elegant simplicity, biological plausibility, and computational power. But what are the roots of today's STDP concept? Here, we discuss several centuries of diverse thinking, beginning with philosophers such as Aristotle, Locke, and Ribot, traversing, e.g., Lugaro's plasticità and Rosenblatt's perceptron, and culminating with the discovery of STDP. We highlight interactions between theoretical and experimental fields, showing how discoveries sometimes occurred in parallel, seemingly without much knowledge of the other field, and sometimes via concrete back-and-forth communication. We point out where the future directions may lie, which includes interneuron STDP, the functional impact of STDP, its mechanisms and its neuromodulatory regulation, and the linking of STDP to the developmental formation and continuous plasticity of neuronal networks.
Collapse
Affiliation(s)
- Henry Markram
- Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| | - Wulfram Gerstner
- Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| | - Per Jesper Sjöström
- Department of Neuroscience, Physiology and Pharmacology, University College London, London, UK
- Department of Neurology and Neurosurgery, Centre for Research in Neuroscience, The Research Institute of the McGill University Health Centre, Montreal General Hospital, Montreal, QC, Canada
| |
Collapse
|
45
|
Osan R, Tort ABL, Amaral OB. A mismatch-based model for memory reconsolidation and extinction in attractor networks. PLoS One 2011; 6:e23113. [PMID: 21826231 PMCID: PMC3149635 DOI: 10.1371/journal.pone.0023113] [Citation(s) in RCA: 51] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2011] [Accepted: 07/06/2011] [Indexed: 11/23/2022] Open
Abstract
The processes of memory reconsolidation and extinction have received increasing attention in recent experimental research, as their potential clinical applications begin to be uncovered. A number of studies suggest that amnestic drugs injected after reexposure to a learning context can disrupt either of the two processes, depending on the behavioral protocol employed. Hypothesizing that reconsolidation represents updating of a memory trace in the hippocampus, while extinction represents formation of a new trace, we have built a neural network model in which either simple retrieval, reconsolidation or extinction of a stored attractor can occur upon contextual reexposure, depending on the similarity between the representations of the original learning and reexposure sessions. This is achieved by assuming that independent mechanisms mediate Hebbian-like synaptic strengthening and mismatch-driven labilization of synaptic changes, with protein synthesis inhibition preferentially affecting the former. Our framework provides a unified mechanistic explanation for experimental data showing (a) the effect of reexposure duration on the occurrence of reconsolidation or extinction and (b) the requirement of memory updating during reexposure to drive reconsolidation.
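The central mechanism — similarity between the reexposure cue and the stored trace deciding between reinforcement and new-trace formation — can be illustrated with a small Hopfield-style attractor network. The network, the mismatch measure, and all constants below are illustrative assumptions, not the authors' model:

```python
import random

# Minimal attractor-network sketch of the mismatch idea: the overlap between
# a reexposure cue and the stored trace decides whether the memory would be
# reinforced (low mismatch, reconsolidation-like) or a new trace would form
# (high mismatch, extinction-like). Everything here is illustrative.
random.seed(1)
N = 200
memory = [random.choice((-1, 1)) for _ in range(N)]          # stored trace
W = [[0.0 if i == j else memory[i] * memory[j] / N           # Hebbian weights
      for j in range(N)] for i in range(N)]

def retrieve(cue, steps=5):
    """Iterate the network until it settles into an attractor."""
    s = list(cue)
    for _ in range(steps):
        s = [1 if sum(W[i][j] * s[j] for j in range(N)) >= 0 else -1
             for i in range(N)]
    return s

def mismatch(cue):
    """1 - overlap between the retrieved attractor and the cue."""
    r = retrieve(cue)
    return 1.0 - sum(a * b for a, b in zip(r, cue)) / N

# A mildly corrupted cue falls back into the stored attractor (low mismatch);
near_cue = list(memory)
for i in random.sample(range(N), 20):
    near_cue[i] *= -1
# an unrelated cue does not (high mismatch -> would drive a separate trace).
far_cue = [random.choice((-1, 1)) for _ in range(N)]
```

In this reading, a low mismatch would gate Hebbian-like strengthening of the existing trace, while a high mismatch would gate labilization and the storage of a new attractor, paralleling the reexposure-duration effects the abstract describes.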
Collapse
Affiliation(s)
- Remus Osan
- Center for Neuroscience, Boston University, Boston, Massachusetts, United States of America
- Center for Biodynamics, Boston University, Boston, Massachusetts, United States of America
- Department of Mathematics and Statistics, Boston University, Boston, Massachusetts, United States of America
| | - Adriano B. L. Tort
- Brain Institute, Federal University of Rio Grande do Norte, Natal, Rio Grande do Norte, Brazil
- Edmond and Lily Safra International Institute of Neuroscience of Natal, Natal, Rio Grande do Norte, Brazil
| | - Olavo B. Amaral
- Institute of Medical Biochemistry, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil
| |
Collapse
|
46
|
Kunkel S, Diesmann M, Morrison A. Limits to the development of feed-forward structures in large recurrent neuronal networks. Front Comput Neurosci 2011; 4:160. [PMID: 21415913 PMCID: PMC3042733 DOI: 10.3389/fncom.2010.00160] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2010] [Accepted: 12/25/2010] [Indexed: 11/25/2022] Open
Abstract
Spike-timing dependent plasticity (STDP) has traditionally been of great interest to theoreticians, as it seems to provide an answer to the question of how the brain can develop functional structure in response to repeated stimuli. However, despite this high level of interest, convincing demonstrations of this capacity in large, initially random networks have not been forthcoming. Such demonstrations as there are typically rely on constraining the problem artificially. Techniques include employing additional pruning mechanisms or STDP rules that enhance symmetry breaking, simulating networks with low connectivity that magnify competition between synapses, or combinations of the above. In this paper, we first review modeling choices that carry particularly high risks of producing non-generalizable results in the context of STDP in recurrent networks. We then develop a theory for the development of feed-forward structure in random networks and conclude that an unstable fixed point in the dynamics prevents the stable propagation of structure in recurrent networks with weight-dependent STDP. We demonstrate that the key predictions of the theory hold in large-scale simulations. The theory provides insight into the reasons why such development does not take place in unconstrained systems and enables us to identify biologically motivated candidate adaptations to the balanced random network model that might enable it.
Collapse
Affiliation(s)
- Susanne Kunkel
- Functional Neural Circuits Group, Faculty of Biology, Albert-Ludwig University of Freiburg, Germany
| | | | | |
Collapse
|