51
Mojarrad H, Azimirad V, Koohestani B. A framework for preparing a stochastic nonlinear integrate-and-fire model for integrated information theory. Network (Bristol, England) 2022; 33:17-61. [PMID: 35380085] [DOI: 10.1080/0954898x.2022.2049644]
Abstract
This paper presents a framework for preparing spiking neural networks for analysis under Integrated Information Theory (IIT), using a stochastic nonlinear integrate-and-fire model. The model includes the crucial dynamics of the all-or-none law and after-spike refractoriness. Noise is modelled as an additive term in the system's equations. Preparing the model for IIT analysis here means determining the length of the analysis time-window and the transition probability distributions required by IIT 3.0. To this end, a system of differential equations is proposed to estimate the time evolution of the system's mean and covariance. Taking the binary Fired/Silent activity as the possible states of each neuron, an algorithm is proposed to calculate the required probability distributions. As long as only the Fired/Silent probabilities are of concern, a Gaussian density with the estimated moments is a reasonable approximation. The synaptic inputs are treated as random variables with low variances to avoid the cost of conditioning on the system's past activities. Monte-Carlo simulation is used to validate the estimation methods. To increase the reliability of the inductive inference behind the Monte-Carlo method, various stimulation protocols are applied to evoke the dynamics of the equations.
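The moment-based Fired/Silent estimate can be illustrated with a minimal sketch (not the authors' code): a linear leaky membrane with additive Gaussian noise, whose mean and variance over the analysis window have closed forms, so the Gaussian fired-probability can be checked against Monte-Carlo simulation. All parameters (`TAU`, `MU_IN`, `SIGMA`, `THETA`, `T`, `DT`) are made up, and the after-spike reset is ignored so the state stays exactly Gaussian; the paper's nonlinear model would need the proposed moment ODEs instead.

```python
import math
import random

# Illustrative parameters (NOT from the paper): leak time constant, constant
# input drive, noise strength, firing threshold, window length, time step.
TAU, MU_IN, SIGMA, THETA, T, DT = 10.0, 1.2, 0.5, 10.0, 20.0, 0.1

def gaussian_fire_prob(mean, var):
    """P(V > THETA) under the Gaussian (moment-based) approximation."""
    return 0.5 * math.erfc((THETA - mean) / math.sqrt(2.0 * var))

def moment_estimate(t):
    """Closed-form mean and variance of the noisy leaky membrane at time t.

    For this linear (Ornstein-Uhlenbeck) reduction the moment equations
    integrate exactly, which is what makes the comparison easy here.
    """
    mean = MU_IN * TAU * (1.0 - math.exp(-t / TAU))
    var = SIGMA ** 2 * TAU / 2.0 * (1.0 - math.exp(-2.0 * t / TAU))
    return mean, var

def monte_carlo_fire_prob(n_trials=5000, seed=1):
    """Empirical P(V(T) > THETA) from Euler-Maruyama sample paths."""
    rng = random.Random(seed)
    fired = 0
    for _ in range(n_trials):
        v = 0.0
        for _ in range(int(T / DT)):
            v += (-v / TAU + MU_IN) * DT + SIGMA * math.sqrt(DT) * rng.gauss(0.0, 1.0)
        fired += v > THETA
    return fired / n_trials

mean, var = moment_estimate(T)
p_gauss = gaussian_fire_prob(mean, var)
p_mc = monte_carlo_fire_prob()
```

With these (arbitrary) numbers the two estimates agree to within Monte-Carlo error, which is the sense in which the Gaussian density with estimated moments is "a reasonable approximation" for the binary probabilities.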
Affiliation(s)
- Hossein Mojarrad
- Department of Mechatronics, Faculty of Mechanical Engineering, University of Tabriz, Tabriz, Iran
- Vahid Azimirad
- Department of Mechatronics, Faculty of Mechanical Engineering, University of Tabriz, Tabriz, Iran
- Behrooz Koohestani
- Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz, Iran
52
Khaledi-Nasab A, Kromer JA, Tass PA. Long-Lasting Desynchronization of Plastic Neuronal Networks by Double-Random Coordinated Reset Stimulation. Frontiers in Network Physiology 2022; 2:864859. [PMID: 36926109] [PMCID: PMC10013062] [DOI: 10.3389/fnetp.2022.864859]
Abstract
Hypersynchrony of neuronal activity is associated with several neurological disorders, including essential tremor and Parkinson's disease (PD). Chronic high-frequency deep brain stimulation (HF DBS) is the standard of care for medically refractory PD. Symptoms may effectively be suppressed by HF DBS, but return shortly after cessation of stimulation. Coordinated reset (CR) stimulation is a theory-based stimulation technique that was designed to specifically counteract neuronal synchrony by desynchronization. During CR, phase-shifted stimuli are delivered to multiple neuronal subpopulations. Computational studies on CR stimulation of plastic neuronal networks revealed long-lasting desynchronization effects obtained by down-regulating abnormal synaptic connectivity. This way, networks are moved into attractors of stable desynchronized states such that stimulation-induced desynchronization persists after cessation of stimulation. Preclinical and clinical studies confirmed corresponding long-lasting therapeutic and desynchronizing effects in PD. As PD symptoms are associated with different pathological synchronous rhythms, stimulation-induced long-lasting desynchronization effects should ideally be robust to variations of the stimulation frequency. Recent computational studies suggested that this robustness can be improved by randomizing the timing of stimulus deliveries. Here, we study the long-lasting effects of CR stimulation with randomized stimulus amplitudes and/or randomized stimulus timing in networks of leaky integrate-and-fire (LIF) neurons with spike-timing-dependent plasticity. Performing computer simulations and analytical calculations, we compare the long-lasting desynchronization effects of CR without randomization, with randomization of stimulus amplitudes alone, with randomization of stimulus times alone, and with the combination of both. Varying the CR stimulation frequency (with respect to the frequency of the abnormal target rhythm) and the number of separately stimulated neuronal subpopulations, we reveal parameter regions and related mechanisms where the two qualitatively different randomization mechanisms improve the robustness of the long-lasting desynchronization effects of CR. In particular, for clinically relevant parameter ranges, double-random CR stimulation, i.e., CR stimulation with the specific combination of stimulus amplitude randomization and stimulus time randomization, may outperform regular CR stimulation with respect to long-lasting desynchronization. In addition, our results provide the first evidence that an effective reduction of the overall stimulation current by stimulus amplitude randomization may improve the frequency robustness of long-lasting therapeutic effects of brain stimulation.
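What a double-random CR schedule might look like can be sketched as follows, assuming a site sequence that rotates each cycle, a uniform timing jitter, and a uniform amplitude draw. The parameter names and both randomization distributions are illustrative stand-ins, not the paper's exact choices.

```python
import random

def double_random_cr_schedule(n_sites=4, n_cycles=5, f_cr=1.5,
                              jitter_frac=0.25, amp_range=(0.5, 1.5), seed=7):
    """Return stimulus events (time, site, amplitude) for double-random CR.

    Each CR cycle of length 1/f_cr is split into n_sites slots and the site
    activation order is rotated from cycle to cycle (the basic CR pattern).
    'Double-random' is modelled here as (i) a uniform timing jitter of up to
    jitter_frac of one slot and (ii) a uniform amplitude draw per stimulus.
    """
    rng = random.Random(seed)
    cycle = 1.0 / f_cr
    slot = cycle / n_sites
    events = []
    for c in range(n_cycles):
        for k in range(n_sites):
            site = (k + c) % n_sites                    # rotated sequence
            t = c * cycle + k * slot + rng.uniform(-jitter_frac, jitter_frac) * slot
            events.append((t, site, rng.uniform(*amp_range)))
    return events

events = double_random_cr_schedule()
```

Because the jitter is bounded by a quarter slot, the randomized events keep their temporal order; each site is still stimulated exactly once per cycle.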
Affiliation(s)
- Ali Khaledi-Nasab
- Department of Neurosurgery, Stanford University, Stanford, CA, United States
- Justus A Kromer
- Department of Neurosurgery, Stanford University, Stanford, CA, United States
- Peter A Tass
- Department of Neurosurgery, Stanford University, Stanford, CA, United States
53
Dasbach S, Tetzlaff T, Diesmann M, Senk J. Dynamical Characteristics of Recurrent Neuronal Networks Are Robust Against Low Synaptic Weight Resolution. Front Neurosci 2021; 15:757790. [PMID: 35002599] [PMCID: PMC8740282] [DOI: 10.3389/fnins.2021.757790]
Abstract
The representation of the natural-density, heterogeneous connectivity of neuronal network models at relevant spatial scales remains a challenge for Computational Neuroscience and Neuromorphic Computing. In particular, the memory demands imposed by the vast number of synapses in brain-scale network simulations constitute a major obstacle. Limiting the number resolution of synaptic weights appears to be a natural strategy to reduce memory and compute load. In this study, we investigate the effects of a limited synaptic-weight resolution on the dynamics of recurrent spiking neuronal networks resembling local cortical circuits and develop strategies for minimizing deviations from the dynamics of networks with high-resolution synaptic weights. We mimic the effect of a limited synaptic weight resolution by replacing normally distributed synaptic weights with weights drawn from a discrete distribution, and compare the resulting statistics characterizing firing rates, spike-train irregularity, and correlation coefficients with the reference solution. We show that a naive discretization of synaptic weights generally leads to a distortion of the spike-train statistics. If the weights are discretized such that the mean and the variance of the total synaptic input currents are preserved, the firing statistics remain unaffected for the types of networks considered in this study. For networks with sufficiently heterogeneous in-degrees, the firing statistics can be preserved even if all synaptic weights are replaced by the mean of the weight distribution. We conclude that even for simple networks with non-plastic neurons and synapses, a discretization of synaptic weights can lead to substantial deviations in the firing statistics unless the discretization is performed with care and guided by a rigorous validation process. 
For the network model used in this study, the synaptic weights can be replaced by low-resolution weights without affecting its macroscopic dynamical characteristics, thereby saving substantial amounts of memory.
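The moment-preserving idea can be illustrated with the simplest possible discretization: a two-level quantizer at mu ± sigma. For a symmetric weight distribution this keeps the empirical mean and variance of the weight population (and hence of the summed synaptic input, at fixed in-degree) essentially unchanged. This is a hedged sketch of the principle, not the study's actual procedure; the weight statistics below are made up.

```python
import random
import statistics

def discretize_two_level(weights):
    """Map each weight to one of two levels, mu - sigma or mu + sigma.

    For weights drawn from a symmetric distribution this approximately
    preserves the empirical mean and variance of the weight population,
    a minimal instance of the moment-preserving discretization idea
    (a naive rounding to a fixed grid would not, in general).
    """
    mu = statistics.fmean(weights)
    sigma = statistics.pstdev(weights)
    return [mu + sigma if w >= mu else mu - sigma for w in weights]

# Hypothetical Gaussian weight distribution (mean 0.1, sd 0.02).
rng = random.Random(0)
w = [rng.gauss(0.1, 0.02) for _ in range(50000)]
wq = discretize_two_level(w)
```

After discretization only two distinct weight values remain, yet the first two moments of the population, which set the mean and variance of the total input current, are nearly identical to those of the high-resolution weights.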
Affiliation(s)
- Stefan Dasbach
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Tom Tetzlaff
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany
- Johanna Senk
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
54
Tao W, Lee J, Chen X, Díaz-Alonso J, Zhou J, Pleasure S, Nicoll RA. Synaptic memory requires CaMKII. eLife 2021; 10:e60360. [PMID: 34908526] [PMCID: PMC8798046] [DOI: 10.7554/elife.60360]
Abstract
Long-term potentiation (LTP) is arguably the most compelling cellular model for learning and memory. While the mechanisms underlying the induction of LTP ('learning') are well understood, the maintenance of LTP ('memory') has remained contentious over the last 20 years. Here, we find that Ca2+/calmodulin-dependent protein kinase II (CaMKII) contributes to synaptic transmission and is required for LTP maintenance. Acute inhibition of CaMKII erases LTP, and transient inhibition of CaMKII enhances subsequent LTP. These findings strongly support the role of CaMKII as a molecular storage device.
Affiliation(s)
- Wucheng Tao
- Key Laboratory of Brain Aging and Neurodegenerative Diseases, Fujian Medical University, Fuzhou, China
- Department of Cellular and Molecular Pharmacology, University of California, San Francisco, San Francisco, United States
- Joel Lee
- Department of Cellular and Molecular Pharmacology, University of California, San Francisco, San Francisco, United States
- Xiumin Chen
- Department of Cellular and Molecular Pharmacology, University of California, San Francisco, San Francisco, United States
- Javier Díaz-Alonso
- Department of Cellular and Molecular Pharmacology, University of California, San Francisco, San Francisco, United States
- Jing Zhou
- Department of Neurology, University of California, San Francisco, San Francisco, United States
- Samuel Pleasure
- Department of Neurology, University of California, San Francisco, San Francisco, United States
- Roger A Nicoll
- Department of Cellular and Molecular Pharmacology, University of California, San Francisco, San Francisco, United States
- Physiology, University of California, San Francisco, San Francisco, United States
55
Gallinaro JV, Clopath C. Memories in a network with excitatory and inhibitory plasticity are encoded in the spiking irregularity. PLoS Comput Biol 2021; 17:e1009593. [PMID: 34762644] [PMCID: PMC8610285] [DOI: 10.1371/journal.pcbi.1009593]
Abstract
Cell assemblies are thought to be the substrate of memory in the brain. Theoretical studies have previously shown that assemblies can be formed in networks with multiple types of plasticity. But how exactly they are formed and how they encode information is yet to be fully understood. One possibility is that memories are stored in silent assemblies. Here we used a computational model to study the formation of silent assemblies in a network of spiking neurons with excitatory and inhibitory plasticity. We found that even though the formed assemblies were silent in terms of mean firing rate, they had an increased coefficient of variation of inter-spike intervals. We also found that this spiking irregularity could be read out with support of short-term plasticity, and that it could contribute to the longevity of memories.
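The readout quantity here, the coefficient of variation (CV) of inter-spike intervals, is straightforward to compute: a perfectly regular train gives CV ≈ 0, while a Poisson train gives CV ≈ 1. The rates and seeds below are arbitrary illustration values, not the model's.

```python
import random
import statistics

def isi_cv(spike_times):
    """Coefficient of variation of inter-spike intervals: std(ISI) / mean(ISI)."""
    isi = [b - a for a, b in zip(spike_times, spike_times[1:])]
    return statistics.pstdev(isi) / statistics.fmean(isi)

# Perfectly regular 100 Hz train: CV is (numerically) zero.
regular = [0.01 * i for i in range(1000)]

# Poisson train (exponential ISIs at an arbitrary 50 Hz): CV is close to 1.
rng = random.Random(3)
t, poisson = 0.0, []
for _ in range(20000):
    t += rng.expovariate(50.0)
    poisson.append(t)

cv_regular, cv_poisson = isi_cv(regular), isi_cv(poisson)
```

A "silent" assembly with unchanged mean rate but elevated CV would thus be invisible to a rate readout yet detectable by an ISI-statistics readout, which is the distinction the study exploits.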
Affiliation(s)
- Júlia V. Gallinaro
- Bioengineering Department, Imperial College London, London, United Kingdom
- Claudia Clopath
- Bioengineering Department, Imperial College London, London, United Kingdom
56
Jordan J, Schmidt M, Senn W, Petrovici MA. Evolving interpretable plasticity for spiking networks. eLife 2021; 10:e66273. [PMID: 34709176] [PMCID: PMC8553337] [DOI: 10.7554/elife.66273]
Abstract
Continuous adaptation allows survival in an ever-changing world. Adjustments in the synaptic coupling strength between neurons are essential for this capability, setting us apart from simpler, hard-wired organisms. How these changes can be mathematically described at the phenomenological level, as so-called ‘plasticity rules’, is essential both for understanding biological information processing and for developing cognitively performant artificial systems. We suggest an automated approach for discovering biophysically plausible plasticity rules based on the definition of task families, associated performance measures and biophysical constraints. By evolving compact symbolic expressions, we ensure the discovered plasticity rules are amenable to intuitive understanding, fundamental for successful communication and human-guided generalization. We successfully apply our approach to typical learning scenarios and discover previously unknown mechanisms for learning efficiently from rewards, recover efficient gradient-descent methods for learning from target signals, and uncover various functionally equivalent STDP-like rules with tuned homeostatic mechanisms.

Our brains are incredibly adaptive. Every day we form memories, acquire new knowledge or refine existing skills. This stands in contrast to our current computers, which typically can only perform pre-programmed actions. Our own ability to adapt is the result of a process called synaptic plasticity, in which the strength of the connections between neurons can change. To better understand brain function and build adaptive machines, researchers in neuroscience and artificial intelligence (AI) are modeling the underlying mechanisms. So far, most work towards this goal was guided by human intuition – that is, by the strategies scientists think are most likely to succeed. Despite the tremendous progress, this approach has two drawbacks. First, human time is limited and expensive.
And second, researchers have a natural – and reasonable – tendency to incrementally improve upon existing models, rather than starting from scratch. Jordan, Schmidt et al. have now developed a new approach based on ‘evolutionary algorithms’. These computer programs search for solutions to problems by mimicking the process of biological evolution, such as the concept of survival of the fittest. The approach exploits the increasing availability of cheap but powerful computers. Compared to its predecessors (or indeed human brains), it also uses search strategies that are less biased by previous models. The evolutionary algorithms were presented with three typical learning scenarios. In the first, the computer had to spot a repeating pattern in a continuous stream of input without receiving feedback on how well it was doing. In the second scenario, the computer received virtual rewards whenever it behaved in the desired manner – an example of reinforcement learning. Finally, in the third ‘supervised learning’ scenario, the computer was told exactly how much its behavior deviated from the desired behavior. For each of these scenarios, the evolutionary algorithms were able to discover mechanisms of synaptic plasticity to solve the new task successfully. Using evolutionary algorithms to study how computers ‘learn’ will provide new insights into how brains function in health and disease. It could also pave the way for developing intelligent machines that can better adapt to the needs of their users.
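A toy stand-in for this kind of evolutionary search can be sketched as follows. The paper evolves symbolic expressions; here, as a deliberately reduced illustration, only four coefficients of a generic rule dw = a·pre·post + b·pre + c·post + d are evolved with a (1+λ) evolution strategy, and the target to be recovered is a hypothetical pure Hebbian rule. All names and settings are made up.

```python
import random

def rule(coeffs, pre, post):
    """Candidate plasticity rule: dw = a*pre*post + b*pre + c*post + d."""
    a, b, c, d = coeffs
    return a * pre * post + b * pre + c * post + d

def fitness(coeffs, data, target):
    """Negative squared error between candidate and target weight updates."""
    return -sum((rule(coeffs, x, y) - target(x, y)) ** 2 for x, y in data)

def evolve(target, n_gen=300, lam=8, seed=5):
    """(1+lambda) evolution strategy over the rule's four coefficients,
    with a slowly annealed mutation strength."""
    rng = random.Random(seed)
    data = [(rng.random(), rng.random()) for _ in range(64)]
    parent = [0.0, 0.0, 0.0, 0.0]
    best = fitness(parent, data, target)
    for g in range(n_gen):
        sigma = 0.2 * (0.99 ** g)            # anneal the mutation size
        for _ in range(lam):
            child = [c + rng.gauss(0.0, sigma) for c in parent]
            f = fitness(child, data, target)
            if f >= best:                    # greedy (1+lambda) selection
                parent, best = child, f
    return parent, best

# Hypothetical target rule to rediscover: dw = pre * post (pure Hebbian).
coeffs, best = evolve(lambda pre, post: pre * post)
```

The evolved coefficients approach a ≈ 1 with the remaining terms near zero, i.e. the search rediscovers the target rule; evolving free-form symbolic expressions, as the paper does, is the same loop with a richer genotype.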
Affiliation(s)
- Jakob Jordan
- Department of Physiology, University of Bern, Bern, Switzerland
- Maximilian Schmidt
- Ascent Robotics, Tokyo, Japan
- RIKEN Center for Brain Science, Tokyo, Japan
- Walter Senn
- Department of Physiology, University of Bern, Bern, Switzerland
- Mihai A Petrovici
- Department of Physiology, University of Bern, Bern, Switzerland
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
57
Zendrikov D, Paraskevov A. Emergent population activity in metric-free and metric networks of neurons with stochastic spontaneous spikes and dynamic synapses. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.11.073]
58
59
Khaledi-Nasab A, Kromer JA, Tass PA. Long-Lasting Desynchronization Effects of Coordinated Reset Stimulation Improved by Random Jitters. Front Physiol 2021; 12:719680. [PMID: 34630142] [PMCID: PMC8497886] [DOI: 10.3389/fphys.2021.719680]
Abstract
Abnormally strong synchronized activity is related to several neurological disorders, including essential tremor, epilepsy, and Parkinson's disease. Chronic high-frequency deep brain stimulation (HF DBS) is an established treatment for advanced Parkinson's disease. To reduce the delivered integral electrical current, novel theory-based stimulation techniques such as coordinated reset (CR) stimulation directly counteract the abnormal synchronous firing by delivering phase-shifted stimuli through multiple stimulation sites. In computational studies in neuronal networks with spike-timing-dependent plasticity (STDP), it was shown that CR stimulation down-regulates synaptic weights and drives the network into an attractor of a stable desynchronized state. This led to desynchronization effects that outlasted the stimulation. Corresponding long-lasting therapeutic effects were observed in preclinical and clinical studies. Computational studies suggest that long-lasting effects of CR stimulation depend on the adjustment of the stimulation frequency to the dominant synchronous rhythm. This may limit clinical applicability as different pathological rhythms may coexist. To increase the robustness of the long-lasting effects, we study randomized versions of CR stimulation in networks of leaky integrate-and-fire neurons with STDP. Randomization is obtained by adding random jitters to the stimulation times and by shuffling the sequence of stimulation site activations. We study the corresponding long-lasting effects using analytical calculations and computer simulations. We show that random jitters increase the robustness of long-lasting effects with respect to changes of the number of stimulation sites and the stimulation frequency. In contrast, shuffling does not increase parameter robustness of long-lasting effects. 
Studying the relation between acute, acute after-, and long-lasting effects of stimulation, we find that both acute after- and long-lasting effects are strongly determined by the stimulation-induced synaptic reshaping, whereas acute effects solely depend on the statistics of administered stimuli. We find that the stimulation duration is another important parameter, as effective stimulation only entails long-lasting effects after a sufficient stimulation duration. Our results show that long-lasting therapeutic effects of CR stimulation with random jitters are more robust than those of regular CR stimulation. This might reduce the parameter adjustment time in future clinical trials and make CR with random jitters more suitable for treating brain disorders with abnormal synchronization in multiple frequency bands.
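The plasticity mechanism that the stimulation reshapes can be sketched as a standard pair-based STDP rule with exponential windows: pre-before-post pairs potentiate, post-before-pre pairs depress. The amplitudes, time constants, bounds, and spike times below are illustrative, not the model's actual values.

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for dt = t_post - t_pre (ms).

    Pre-before-post (dt > 0) potentiates, post-before-pre (dt < 0) depresses,
    each with an exponentially decaying window.
    """
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)
    if dt < 0:
        return -a_minus * math.exp(dt / tau_minus)
    return 0.0

def apply_stdp(w, pre_spikes, post_spikes, wmin=0.0, wmax=1.0):
    """All-to-all pairing: accumulate dw over every pre/post pair, then clip."""
    for tp in pre_spikes:
        for tq in post_spikes:
            w += stdp_dw(tq - tp)
    return min(wmax, max(wmin, w))

# Causal pairing (pre leads post by 5 ms) strengthens the synapse...
w_up = apply_stdp(0.5, [10.0, 30.0], [15.0, 35.0])
# ...while acausal pairing weakens it.
w_down = apply_stdp(0.5, [15.0, 35.0], [10.0, 30.0])
```

Down-regulating weights by driving neurons into such acausal or decorrelated firing is, schematically, how CR stimulation pushes the network toward a stably desynchronized attractor.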
Affiliation(s)
- Ali Khaledi-Nasab
- Department of Neurosurgery, Stanford University, Stanford, CA, United States
- Justus A Kromer
- Department of Neurosurgery, Stanford University, Stanford, CA, United States
- Peter A Tass
- Department of Neurosurgery, Stanford University, Stanford, CA, United States
60
Sherf N, Shamir M. STDP and the distribution of preferred phases in the whisker system. PLoS Comput Biol 2021; 17:e1009353. [PMID: 34534208] [PMCID: PMC8480728] [DOI: 10.1371/journal.pcbi.1009353]
Abstract
Rats and mice use their whiskers to probe the environment. By rhythmically swiping their whiskers back and forth they can detect the existence of an object, locate it, and identify its texture. Localization can be accomplished by inferring the whisker's position. Rhythmic neurons that track the phase of the whisking cycle encode information about the azimuthal location of the whisker. These neurons are characterized by preferred phases of firing that are narrowly distributed. Consequently, pooling the rhythmic signal from several upstream neurons is expected to result in a much narrower distribution of preferred phases in the downstream population, which, however, has not been observed empirically. Here, we show how spike timing dependent plasticity (STDP) can provide a solution to this conundrum. In a modeling study, we investigated how STDP affects the ability of a neural population to transmit rhythmic information downstream. We found that under a wide range of parameters, STDP facilitated the transfer of rhythmic information even though all the synaptic weights remained dynamic. As a result, the preferred phase of the downstream neuron was not fixed, but rather drifted in time at a drift velocity that depended on the preferred phase, thus inducing a distribution of preferred phases. We further analyzed how the STDP rule governs the distribution of preferred phases in the downstream population. This link between the STDP rule and the distribution of preferred phases constitutes a natural test for our theory.

The distribution of preferred phases of whisking neurons in the somatosensory system of rats and mice presents a conundrum: a simple pooling model predicts a distribution that is an order of magnitude narrower than what is observed empirically. Here, we suggest that this non-trivial distribution may result from activity-dependent plasticity in the form of spike timing dependent plasticity (STDP). We show that under STDP, the synaptic weights do not converge to a fixed value, but rather remain dynamic. As a result, the preferred phases of the whisking neurons vary in time, hence inducing a non-trivial distribution of preferred phases, which is governed by the STDP rule. Our results imply that the considerable synaptic volatility, which has long been viewed as a difficulty that needs to be overcome, may actually be an underlying principle of the organization of the central nervous system.
Affiliation(s)
- Nimrod Sherf
- Physics Department, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Maoz Shamir
- Physics Department, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Department of Physiology and Cell Biology Faculty of Health Sciences, Ben-Gurion University of the Negev, Beer-Sheva, Israel
61
Socolovsky G, Shamir M. Robust rhythmogenesis via spike-timing-dependent plasticity. Phys Rev E 2021; 104:024413. [PMID: 34525545] [DOI: 10.1103/physreve.104.024413]
Abstract
Rhythmic activity has been observed in numerous animal species ranging from insects to humans, and in relation to a wide range of cognitive tasks. Various experimental and theoretical studies have investigated rhythmic activity. The theoretical efforts have mainly been focused on the neuronal dynamics, under the assumption that network connectivity satisfies certain fine-tuning conditions required to generate oscillations. However, it remains unclear how this fine-tuning is achieved. Here we investigated the hypothesis that spike-timing-dependent plasticity (STDP) can provide the underlying mechanism for tuning synaptic connectivity to generate rhythmic activity. We addressed this question in a modeling study. We examined STDP dynamics in the framework of a network of excitatory and inhibitory neuronal populations that has been suggested to underlie the generation of oscillations in the gamma range. Mean-field Fokker-Planck equations for the synaptic weight dynamics are derived in the limit of slow learning. We drew on this approximation to determine which types of STDP rules drive the system to exhibit rhythmic activity, and we demonstrate how the parameters that characterize the plasticity rule govern the rhythmic activity. Finally, we propose a mechanism that can ensure the robustness of self-developing processes in general, and for rhythmogenesis in particular.
Affiliation(s)
- Gabi Socolovsky
- Department of Physics, Faculty of Natural Sciences, Ben-Gurion University of the Negev, Be'er-Sheva 8410501, Israel
- Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Be'er-Sheva 8410501, Israel
- Maoz Shamir
- Department of Physics, Faculty of Natural Sciences, Ben-Gurion University of the Negev, Be'er-Sheva 8410501, Israel
- Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Be'er-Sheva 8410501, Israel
- Department of Physiology and Cell Biology, Faculty of Health Sciences, Ben-Gurion University of the Negev, Be'er-Sheva 8410501, Israel
62
Crodelle J, McLaughlin DW. Modeling the role of gap junctions between excitatory neurons in the developing visual cortex. PLoS Comput Biol 2021; 17:e1007915. [PMID: 34228707] [PMCID: PMC8284639] [DOI: 10.1371/journal.pcbi.1007915]
Abstract
Recent experiments in the developing mammalian visual cortex have revealed that gap junctions couple excitatory cells and potentially influence the formation of chemical synapses. In particular, cells that were coupled by a gap junction during development tend to share an orientation preference and are preferentially coupled by a chemical synapse in the adult cortex, a property that is diminished when gap junctions are blocked. In this work, we construct a simplified model of the developing mouse visual cortex including spike-timing-dependent plasticity of both the feedforward synaptic inputs and recurrent cortical synapses. We use this model to show that synchrony among gap-junction-coupled cells underlies their preference to form strong recurrent synapses and develop similar orientation preference; this effect decreases with an increase in coupling density. Additionally, we demonstrate that gap-junction coupling works, together with the relative timing of synaptic development of the feedforward and recurrent synapses, to determine the resulting cortical map of orientation preference. Gap junctions, or sites of direct electrical connections between neurons, have a significant presence in the cortex, both during development and in adulthood. Their primary function during either of these periods, however, is still poorly understood. In the adult cortex, gap junctions between local, inhibitory neurons have been shown to promote synchronous firing, a network characteristic thought to be important for learning, attention, and memory. During development, gap junctions between excitatory, pyramidal cells, have been conjectured to play a role in synaptic plasticity and the formation of cortical circuits. 
In the visual cortex, where neurons exhibit tuned responses to properties of visual input such as orientation and direction, recent experiments show that excitatory cells are coupled by gap junctions during the first postnatal week, and that these couplings are replaced by chemical synapses during the second week. In this work, we explore the possible contribution of gap-junction coupling during development to the formation of chemical synapses from the thalamus to the visual cortex and between cortical cells within the visual cortex. Specifically, using a mathematical model of the visual cortex during development, we identify the response properties of gap-junction-coupled cells and their influence on the formation of the cortical map of orientation preference.
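The synchronizing action of a gap junction can be sketched with two passive membranes coupled by the ohmic current g·(V_j − V_i): with coupling, the voltage difference decays much faster than by leak alone. All parameters are illustrative, and spiking is omitted so the effect is purely the electrical pull toward a common voltage.

```python
def simulate_pair(g, v0=(0.0, 1.0), tau=10.0, dt=0.01, t_end=20.0):
    """Two passive membranes coupled by a gap junction of conductance g.

    The gap-junction current into cell i is g * (v_j - v_i), so the
    voltage difference decays at rate 1/tau + 2*g instead of 1/tau.
    Returns the remaining absolute voltage difference at t_end.
    """
    v1, v2 = v0
    for _ in range(int(t_end / dt)):
        i_gap = g * (v2 - v1)
        v1, v2 = v1 + (-v1 / tau + i_gap) * dt, v2 + (-v2 / tau - i_gap) * dt
    return abs(v1 - v2)

diff_uncoupled = simulate_pair(g=0.0)   # decays only through the leak
diff_coupled = simulate_pair(g=0.5)     # coupling equalizes the voltages
```

This voltage equalization is the minimal version of the synchrony that, in the full model with STDP, biases coupled cells toward forming strong mutual chemical synapses.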
Affiliation(s)
- Jennifer Crodelle
- Middlebury College, Middlebury, Vermont, United States of America
- Courant Institute of Mathematical Sciences, NYU, New York, New York, United States of America
- David W. McLaughlin
- Courant Institute of Mathematical Sciences, NYU, New York, New York, United States of America
- Center for Neural Science, NYU, New York, New York, United States of America
- Neuroscience Institute of NYU Langone Health, New York, New York, United States of America
- New York University Shanghai, Shanghai, China
63
Golosio B, De Luca C, Capone C, Pastorelli E, Stegel G, Tiddia G, De Bonis G, Paolucci PS. Thalamo-cortical spiking model of incremental learning combining perception, context and NREM-sleep. PLoS Comput Biol 2021; 17:e1009045. [PMID: 34181642] [PMCID: PMC8270441] [DOI: 10.1371/journal.pcbi.1009045]
Abstract
The brain exhibits capabilities of fast incremental learning from few noisy examples, as well as the ability to associate similar memories in autonomously-created categories and to combine contextual hints with sensory perceptions. Together with sleep, these mechanisms are thought to be key components of many high-level cognitive functions. Yet, little is known about the underlying processes and the specific roles of different brain states. In this work, we exploited the combination of context and perception in a thalamo-cortical model based on a soft winner-take-all circuit of excitatory and inhibitory spiking neurons. After calibrating this model to express awake and deep-sleep states with features comparable to biological measures, we demonstrate the model's capability for fast incremental learning from few examples, its resilience when presented with noisy perceptions and contextual signals, and an improvement in visual classification after sleep due to induced synaptic homeostasis and association of similar memories.

We created a thalamo-cortical spiking model (ThaCo) with the purpose of demonstrating a link between two phenomena that we believe to be essential for the brain's capability of efficient incremental learning from few examples in noisy environments. Grounded in two experimental observations—the first about the effects of deep sleep on pre- and post-sleep firing rate distributions, the second about the combination of perceptual and contextual information in pyramidal neurons—our model joins these two ingredients. ThaCo alternates phases of incremental learning, classification and deep sleep. Memories of handwritten digit examples are learned through thalamo-cortical and cortico-cortical plastic synapses. In the absence of noise, the combination of contextual information with perception enables fast incremental learning. Deep sleep becomes crucial when noisy inputs are considered. We observed in ThaCo both homeostatic and associative processes: deep sleep counteracts noise in perceptual and internal knowledge, and it supports the categorical association of examples belonging to the same digit class, through reinforcement of class-specific cortico-cortical synapses. The distributions of pre-sleep and post-sleep firing rates during classification change in a manner similar to that observed experimentally. These changes promote energetic efficiency during recall of memories, better representation of individual memories and categories, and higher classification performance.
Collapse
Affiliation(s)
- Bruno Golosio
- Dipartimento di Fisica, Università di Cagliari, Cagliari, Italy
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Cagliari, Cagliari, Italy
| | - Chiara De Luca
- Ph.D. Program in Behavioural Neuroscience, “Sapienza” Università di Roma, Rome, Italy
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome, Italy
| | - Cristiano Capone
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome, Italy
| | - Elena Pastorelli
- Ph.D. Program in Behavioural Neuroscience, “Sapienza” Università di Roma, Rome, Italy
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome, Italy
| | - Giovanni Stegel
- Dipartimento di Chimica e Farmacia, Università di Sassari, Sassari, Italy
| | - Gianmarco Tiddia
- Dipartimento di Fisica, Università di Cagliari, Cagliari, Italy
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Cagliari, Cagliari, Italy
| | - Giulia De Bonis
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome, Italy
| | | |
Collapse
|
64
|
Stapmanns J, Hahne J, Helias M, Bolten M, Diesmann M, Dahmen D. Event-Based Update of Synapses in Voltage-Based Learning Rules. Front Neuroinform 2021; 15:609147. [PMID: 34177505 PMCID: PMC8222618 DOI: 10.3389/fninf.2021.609147] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2020] [Accepted: 04/07/2021] [Indexed: 11/13/2022] Open
Abstract
Due to the point-like nature of neuronal spiking, efficient neural network simulators often employ event-based simulation schemes for synapses. Yet many types of synaptic plasticity rely on the membrane potential of the postsynaptic cell as a third factor in addition to pre- and postsynaptic spike times. In some learning rules, membrane potentials influence synaptic weight changes not only at the time points of spike events but in a continuous manner. In these cases, synapses require information on the full time course of the membrane potential to update their strength, which a priori suggests a continuous update in a time-driven manner. The latter hinders scaling of simulations to realistic cortical network sizes and to time scales relevant for learning. Here, we derive two efficient algorithms for archiving postsynaptic membrane potentials, both compatible with modern simulation engines based on event-based synapse updates. We theoretically contrast the two algorithms with a time-driven synapse update scheme to analyze advantages in terms of memory and computations. We further present a reference implementation in the spiking neural network simulator NEST for two prototypical voltage-based plasticity rules: the Clopath rule and the Urbanczik-Senn rule. For both rules, the two event-based algorithms significantly outperform the time-driven scheme. Depending on the amount of data to be stored for plasticity, which differs heavily between the rules, a strong performance increase can be achieved by compressing or sampling the information on membrane potentials. Our results on the computational efficiency of archiving information provide guidelines for the design of learning rules intended to be practically usable in large-scale networks.
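The archiving idea described above can be sketched in a few lines: the neuron records its membrane potential on the simulation time grid, while a synapse reads out (and prunes) the stored trace only when a presynaptic spike event arrives. This is an illustrative toy under assumed names, not the paper's NEST implementation.

```python
# Event-based retrieval of an archived postsynaptic membrane-potential trace.
# All class and method names here are hypothetical.
from collections import deque

class VoltageArchive:
    """Store (time, voltage) samples; prune entries no synapse still needs."""
    def __init__(self):
        self.samples = deque()

    def record(self, t, v):
        self.samples.append((t, v))

    def read_and_prune(self, t_last, t_now):
        """Return samples in (t_last, t_now]; drop everything <= t_last."""
        while self.samples and self.samples[0][0] <= t_last:
            self.samples.popleft()
        return [(t, v) for (t, v) in self.samples if t <= t_now]

archive = VoltageArchive()
for step in range(10):             # time-driven neuron update on a 1 ms grid
    archive.record(step * 1.0, -65.0 + step)

# A synapse last updated at t = 3 ms processes a presynaptic spike at t = 8 ms:
trace = archive.read_and_prune(3.0, 8.0)
print(len(trace))                  # the 5 samples covering (3, 8]
```

Pruning on read is what keeps memory bounded: each sample is stored once and discarded as soon as every incoming synapse has consumed it.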
Collapse
Affiliation(s)
- Jonas Stapmanns
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Institute for Theoretical Solid State Physics, RWTH Aachen University, Aachen, Germany
| | - Jan Hahne
- School of Mathematics and Natural Sciences, Bergische Universität Wuppertal, Wuppertal, Germany
| | - Moritz Helias
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Institute for Theoretical Solid State Physics, RWTH Aachen University, Aachen, Germany
| | - Matthias Bolten
- School of Mathematics and Natural Sciences, Bergische Universität Wuppertal, Wuppertal, Germany
| | - Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
| | - David Dahmen
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany
| |
Collapse
|
65
|
Xiang S, Ren Z, Song Z, Zhang Y, Guo X, Han G, Hao Y. Computing Primitive of Fully VCSEL-Based All-Optical Spiking Neural Network for Supervised Learning and Pattern Classification. IEEE Trans Neural Netw Learn Syst 2021; 32:2494-2505. [PMID: 32673197 DOI: 10.1109/tnnls.2020.3006263] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
We propose a computing primitive for an all-optical spiking neural network (SNN) based on vertical-cavity surface-emitting lasers (VCSELs) for supervised learning using biologically plausible mechanisms. The spike-timing-dependent plasticity (STDP) model was established based on the dynamics of a vertical-cavity semiconductor optical amplifier (VCSOA) subject to dual-optical-pulse injection. A neuron-synapse self-consistent unified model of the all-optical SNN was developed, which reproduces the essential neuron-like dynamics and the STDP function. Optical character numbers are trained and tested by the proposed fully VCSEL-based all-optical SNN. Simulation results show that the proposed all-optical SNN is capable of recognizing ten numbers with a supervised learning algorithm in which the input and output patterns, as well as the teacher signals, are represented in spatiotemporal fashion. Moreover, lateral inhibition is not required in our proposed architecture, which is friendly to hardware implementation. The system-level unified model enables architecture-algorithm codesign and optimization of the all-optical SNN. To the best of our knowledge, a computing primitive of an all-optical SNN based on VCSELs for supervised learning has not yet been reported; this paves the way toward fully VCSEL-based large-scale photonic neuromorphic systems with low power consumption.
Collapse
|
66
|
Multistability in a star network of Kuramoto-type oscillators with synaptic plasticity. Sci Rep 2021; 11:9840. [PMID: 33972613 PMCID: PMC8110549 DOI: 10.1038/s41598-021-89198-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2021] [Accepted: 04/20/2021] [Indexed: 11/09/2022] Open
Abstract
We analyze multistability in a star-type network of phase oscillators with coupling weights governed by phase-difference-dependent plasticity. It is shown that a network with N leaves can evolve into 2^N various asymptotic states, characterized by different values of the coupling strength between the hub and the leaves. Starting from the simple case of two coupled oscillators, we develop an analytical approach based on two small parameters ε and μ, where ε is the ratio of the time scales of the phase variables and synaptic weights, and μ defines the sharpness of the plasticity boundary function. The limit μ → 0 corresponds to a hard boundary. The analytical results obtained on the model of two oscillators are generalized for multi-leaf star networks. Multistability with 2^N various asymptotic states is numerically demonstrated for one-, two-, three- and nine-leaf star-type networks.
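A toy numerical version of the setup described above — two Kuramoto-type oscillators with a slowly adapting, hard-bounded coupling weight — can be integrated with forward Euler. The specific plasticity function used here (relaxation toward the cosine of the phase difference) is an assumption for illustration, not the paper's model.

```python
# Two coupled phase oscillators with an adaptive coupling weight k.
# eps sets the time-scale separation between phases (fast) and weight (slow).
import math

theta1, theta2 = 0.0, 1.0      # phases
k = 0.5                         # plastic coupling weight
w1, w2 = 1.0, 1.1               # natural frequencies
eps, dt = 0.01, 0.001           # slow-weight ratio, Euler step size

for _ in range(100_000):
    d = theta2 - theta1
    theta1 += dt * (w1 + k * math.sin(d))
    theta2 += dt * (w2 - k * math.sin(d))
    k += dt * eps * (math.cos(d) - k)   # assumed phase-difference-dependent rule
    k = min(max(k, 0.0), 1.0)           # hard boundary on the weight

print(0.0 <= k <= 1.0)
```

Because the weight is clamped to a hard interval, each leaf's coupling can settle near either boundary, which is the source of the 2^N asymptotic states in the full star network.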
Collapse
|
67
|
Cachi PG, Ventura S, Cios KJ. CRBA: A Competitive Rate-Based Algorithm Based on Competitive Spiking Neural Networks. Front Comput Neurosci 2021; 15:627567. [PMID: 33967726 PMCID: PMC8100331 DOI: 10.3389/fncom.2021.627567] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2020] [Accepted: 03/22/2021] [Indexed: 11/30/2022] Open
Abstract
In this paper we present a Competitive Rate-Based Algorithm (CRBA) that approximates the operation of a Competitive Spiking Neural Network (CSNN). CRBA is based on modeling the competition between neurons during a sample presentation, which can be reduced to ranking the neurons by a dot-product operation, combined with a discrete Expectation-Maximization algorithm; the latter is equivalent to the spike-timing-dependent plasticity rule. CRBA's performance is compared with that of CSNN on the MNIST and Fashion-MNIST datasets. The results show that CRBA performs on par with CSNN while using three orders of magnitude less computational time. Importantly, we show that the weights and firing thresholds learned by CRBA can be used to initialize CSNN's parameters, which results in much more efficient operation.
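The competition step described in the abstract — reducing spiking competition to a ranking by dot product — can be sketched as follows. This is a hedged illustration on random data, not the authors' CRBA code; the shapes and names are assumptions.

```python
# Rate-based stand-in for CSNN competition: score each competing neuron by the
# dot product of its weight vector with the input, and let the top score win.
import numpy as np

rng = np.random.default_rng(0)
W = rng.random((10, 784))          # 10 competing neurons, MNIST-sized inputs
x = rng.random(784)                # one input sample

scores = W @ x                     # dot-product "activation" per neuron
winner = int(np.argmax(scores))    # competition reduces to a ranking
print(winner)
```

In the full algorithm this ranking replaces the expensive spike-by-spike simulation of inhibitory competition, which is where the reported three-orders-of-magnitude speedup comes from.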
Collapse
Affiliation(s)
- Paolo G Cachi
- Department of Computer Science, Virginia Commonwealth University, Richmond, VA, United States
| | - Sebastián Ventura
- Department of Computer Science, Universidad de Córdoba, Córdoba, Spain
| | - Krzysztof J Cios
- Department of Computer Science, Virginia Commonwealth University, Richmond, VA, United States; Polish Academy of Sciences, Gliwice, Poland
| |
Collapse
|
68
|
Gardner B, Grüning A. Supervised Learning With First-to-Spike Decoding in Multilayer Spiking Neural Networks. Front Comput Neurosci 2021; 15:617862. [PMID: 33912021 PMCID: PMC8072060 DOI: 10.3389/fncom.2021.617862] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2020] [Accepted: 03/08/2021] [Indexed: 11/18/2022] Open
Abstract
Experimental studies support the notion of spike-based neuronal information processing in the brain, with neural circuits exhibiting a wide range of temporally-based coding strategies to rapidly and efficiently represent sensory stimuli. Accordingly, it would be desirable to apply spike-based computation to tackling real-world challenges, and in particular to transfer such theory to neuromorphic systems for low-power embedded applications. Motivated by this, we propose a new supervised learning method that can train multilayer spiking neural networks to solve classification problems based on a rapid, first-to-spike decoding strategy. The proposed learning rule supports multiple spikes fired by stochastic hidden neurons, and yet is stable by relying on first-spike responses generated by a deterministic output layer. In addition, we explore several distinct, spike-based encoding strategies in order to form compact representations of presented input data. We demonstrate the classification performance of the learning rule as applied to several benchmark datasets, including MNIST. The learning rule is capable of generalizing from the data, and is successful even when used with constrained network architectures containing few input and hidden-layer neurons. Furthermore, we highlight a novel encoding strategy, termed "scanline encoding," that can transform image data into compact spatiotemporal patterns for subsequent network processing. Designing constrained but optimized network structures and performing input dimensionality reduction has strong implications for neuromorphic applications.
Collapse
Affiliation(s)
- Brian Gardner
- Department of Computer Science, University of Surrey, Guildford, United Kingdom
| | - André Grüning
- Faculty of Electrical Engineering and Computer Science, University of Applied Sciences, Stralsund, Germany
| |
Collapse
|
69
|
Lobov SA, Zharinov AI, Makarov VA, Kazantsev VB. Spatial Memory in a Spiking Neural Network with Robot Embodiment. Sensors (Basel) 2021; 21:2678. [PMID: 33920246 PMCID: PMC8070389 DOI: 10.3390/s21082678] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/19/2021] [Revised: 04/06/2021] [Accepted: 04/07/2021] [Indexed: 11/16/2022]
Abstract
Cognitive maps and spatial memory are fundamental paradigms of brain functioning. Here, we present a spiking neural network (SNN) capable of generating an internal representation of the external environment and implementing spatial memory. The SNN initially has a non-specific architecture, which is then shaped by Hebbian-type synaptic plasticity. The network receives stimuli at specific loci, while memory retrieval operates as a functional SNN response in the form of population bursts. The SNN function is explored through its embodiment in a robot moving in an arena with safe and dangerous zones. We propose a measure of the global network memory using the synaptic vector field approach to validate results and calculate information characteristics, including learning curves. We show that after training, the SNN can effectively control the robot's cognitive behavior, allowing it to avoid dangerous regions in the arena. However, the learning is not perfect: the robot eventually visits dangerous areas. Such behavior, also observed in animals, enables relearning in time-evolving environments. If a dangerous zone moves to another place, the SNN remaps positive and negative areas, escaping the catastrophic-interference phenomenon known for some AI architectures. Thus, the robot adapts to a changing world.
Collapse
Affiliation(s)
- Sergey A. Lobov
- Neurotechnology Department, Lobachevsky State University of Nizhny Novgorod, 23 Gagarin Ave., 603950 Nizhny Novgorod, Russia; (A.I.Z.); (V.A.M.); (V.B.K.)
- Neuroscience and Cognitive Technology Laboratory, Center for Technologies in Robotics and Mechatronics Components, Innopolis University, 1 Universitetskaya Str., 420500 Innopolis, Russia
- Center For Neurotechnology and Machine Learning, Immanuel Kant Baltic Federal University, 14 Nevsky Str., 236016 Kaliningrad, Russia
| | - Alexey I. Zharinov
- Neurotechnology Department, Lobachevsky State University of Nizhny Novgorod, 23 Gagarin Ave., 603950 Nizhny Novgorod, Russia; (A.I.Z.); (V.A.M.); (V.B.K.)
| | - Valeri A. Makarov
- Neurotechnology Department, Lobachevsky State University of Nizhny Novgorod, 23 Gagarin Ave., 603950 Nizhny Novgorod, Russia; (A.I.Z.); (V.A.M.); (V.B.K.)
- Instituto de Matemática Interdisciplinar, Facultad de Ciencias Matemáticas, Universidad Complutense de Madrid, 28040 Madrid, Spain
| | - Victor B. Kazantsev
- Neurotechnology Department, Lobachevsky State University of Nizhny Novgorod, 23 Gagarin Ave., 603950 Nizhny Novgorod, Russia; (A.I.Z.); (V.A.M.); (V.B.K.)
- Neuroscience and Cognitive Technology Laboratory, Center for Technologies in Robotics and Mechatronics Components, Innopolis University, 1 Universitetskaya Str., 420500 Innopolis, Russia
- Center For Neurotechnology and Machine Learning, Immanuel Kant Baltic Federal University, 14 Nevsky Str., 236016 Kaliningrad, Russia
- Lab of Neurocybernetics, Russian State Scientific Center for Robotics and Technical Cybernetics, 21 Tikhoretsky Ave., 194064 St. Petersburg, Russia
| |
Collapse
|
70
|
Golosio B, Tiddia G, De Luca C, Pastorelli E, Simula F, Paolucci PS. Fast Simulations of Highly-Connected Spiking Cortical Models Using GPUs. Front Comput Neurosci 2021; 15:627620. [PMID: 33679358 PMCID: PMC7925400 DOI: 10.3389/fncom.2021.627620] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2020] [Accepted: 01/26/2021] [Indexed: 11/16/2022] Open
Abstract
Over the past decade there has been a growing interest in the development of parallel hardware systems for simulating large-scale networks of spiking neurons. Compared to other highly-parallel systems, GPU-accelerated solutions have the advantage of relatively low cost and great versatility, thanks also to the possibility of using the CUDA-C/C++ programming languages. NeuronGPU is a GPU library for large-scale simulations of spiking neural network models, written in the C++ and CUDA-C++ programming languages, based on a novel spike-delivery algorithm. This library includes simple LIF (leaky-integrate-and-fire) neuron models as well as several multisynapse AdEx (adaptive-exponential-integrate-and-fire) neuron models with current- or conductance-based synapses, different types of spike generators, and tools for recording spikes, state variables and parameters, and it supports user-definable models. The numerical solution of the differential equations of the AdEx dynamics is performed through a parallel implementation, written in CUDA-C++, of the fifth-order Runge-Kutta method with adaptive step-size control. In this work we evaluate the performance of this library on the simulation of a cortical microcircuit model, based on LIF neurons and current-based synapses, and on balanced networks of excitatory and inhibitory neurons, using AdEx or Izhikevich neuron models and conductance-based or current-based synapses. On these models, we show that the proposed library achieves state-of-the-art performance in terms of simulation time per second of biological activity. In particular, using a single NVIDIA GeForce RTX 2080 Ti GPU board, the full-scale cortical-microcircuit model, which includes about 77,000 neurons and 3 × 10^8 connections, can be simulated at a speed very close to real time, while the simulation time of a balanced network of 1,000,000 AdEx neurons with 1,000 connections per neuron was about 70 s per second of biological activity.
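As a CPU-side sketch of the per-neuron parallel update such a GPU library performs on every time step, a vectorized forward-Euler LIF step in plain NumPy might look like this. The parameter values are illustrative, not NeuronGPU's defaults.

```python
# One forward-Euler step of a population of leaky integrate-and-fire neurons.
# On a GPU this same element-wise update runs as one thread per neuron.
import numpy as np

def lif_step(v, i_syn, dt=0.1, tau=10.0, v_rest=-65.0, v_th=-50.0, v_reset=-65.0):
    """Advance membrane potentials v (mV) by dt (ms) given synaptic input i_syn."""
    v = v + dt / tau * (v_rest - v + i_syn)
    spiked = v >= v_th              # boolean mask of neurons crossing threshold
    v[spiked] = v_reset             # reset fired neurons
    return v, spiked

v = np.full(1000, -65.0)            # population at rest
v, spiked = lif_step(v, i_syn=np.full(1000, 200.0))
print(int(spiked.sum()))            # one subthreshold step: no spikes yet
```

The expensive part at scale is not this update but spike delivery across hundreds of millions of connections, which is where the library's novel algorithm operates.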
Collapse
Affiliation(s)
- Bruno Golosio
- Department of Physics, University of Cagliari, Cagliari, Italy; Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Cagliari, Cagliari, Italy
| | - Gianmarco Tiddia
- Department of Physics, University of Cagliari, Cagliari, Italy; Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Cagliari, Cagliari, Italy
| | - Chiara De Luca
- Ph.D. Program in Behavioral Neuroscience, "Sapienza" University of Rome, Rome, Italy; Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome, Italy
| | - Elena Pastorelli
- Ph.D. Program in Behavioral Neuroscience, "Sapienza" University of Rome, Rome, Italy; Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome, Italy
| | - Francesco Simula
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome, Italy
| | | |
Collapse
|
71
|
Brivio S, Ly DRB, Vianello E, Spiga S. Non-linear Memristive Synaptic Dynamics for Efficient Unsupervised Learning in Spiking Neural Networks. Front Neurosci 2021; 15:580909. [PMID: 33633531 PMCID: PMC7901913 DOI: 10.3389/fnins.2021.580909] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2020] [Accepted: 01/06/2021] [Indexed: 11/13/2022] Open
Abstract
Spiking neural networks (SNNs) are a computational tool in which information is coded into spikes, as in some parts of the brain, unlike conventional neural networks (NNs) that compute over real numbers. SNNs can therefore implement intelligent information extraction in real time at the edge of data acquisition, and they constitute a solution complementary to conventional NNs operating in cloud computing. Both NN classes face hardware constraints due to limited computing parallelism and the separation of logic and memory. Emerging memory devices, such as resistive switching memories, phase change memories, or memristive devices in general, are strong candidates to remove these hurdles for NN applications. The well-established training procedures of conventional NNs helped in defining the desiderata for memristive device dynamics implementing synaptic units. The generally agreed requirements are a linear evolution of memristive conductance upon stimulation with a train of identical pulses and a symmetric conductance change for conductance increase and decrease. Conversely, little work has been done to understand the main properties of memristive devices supporting efficient SNN operation. The reason lies in the lack of a background theory for their training. As a consequence, requirements for NNs have been taken as a reference to develop memristive devices for SNNs. In the present work, we show that, for efficient CMOS/memristive SNNs, the requirements for synaptic memristive dynamics are very different from the needs of a conventional NN. System-level simulations of an SNN trained to classify handwritten digit images through a spike-timing-dependent plasticity protocol are performed considering various linear and non-linear plausible synaptic memristive dynamics. We consider memristive dynamics bounded by artificial hard conductance values as well as dynamics limited by the natural evolution toward asymptotic values (soft boundaries).
We quantitatively analyze the impact of resolution and non-linearity properties of the synapses on the network training and classification performance. Finally, we demonstrate that the non-linear synapses with hard boundary values enable higher classification performance and realize the best trade-off between classification accuracy and required training time. With reference to the obtained results, we discuss how memristive devices with non-linear dynamics constitute a technologically convenient solution for the development of on-line SNN training.
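The hard-boundary vs soft-boundary dynamics compared above can be illustrated with a minimal scalar conductance update. This is a generic textbook formulation with illustrative parameter values, not the specific device models simulated in the paper.

```python
# Two flavours of bounded synaptic-conductance update under identical pulses.

def hard_bound_update(g, dg, g_min=0.0, g_max=1.0):
    """Linear update, clipped at artificial hard boundaries."""
    return min(max(g + dg, g_min), g_max)

def soft_bound_update(g, dg, g_min=0.0, g_max=1.0):
    """Update whose magnitude shrinks as g approaches its asymptote."""
    scale = (g_max - g) if dg > 0 else (g - g_min)
    return g + dg * scale

g_hard = g_soft = 0.5
for _ in range(20):                 # 20 identical potentiating pulses
    g_hard = hard_bound_update(g_hard, 0.1)
    g_soft = soft_bound_update(g_soft, 0.1)

print(g_hard, round(g_soft, 3))     # hard update saturates at 1.0
```

The hard-bound synapse reaches its ceiling after a few pulses and then stops changing, while the soft-bound synapse only approaches its asymptote; the paper's finding is that the former, non-linear regime gives the better accuracy/training-time trade-off.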
Collapse
Affiliation(s)
- Stefano Brivio
- CNR - IMM, Unit of Agrate Brianza, Agrate Brianza, Italy
| | - Denys R B Ly
- Université Grenoble Alpes, CEA, Leti, Grenoble, France
| | | | - Sabina Spiga
- CNR - IMM, Unit of Agrate Brianza, Agrate Brianza, Italy
| |
Collapse
|
72
|
Gamma Oscillations Facilitate Effective Learning in Excitatory-Inhibitory Balanced Neural Circuits. Neural Plast 2021; 2021:6668175. [PMID: 33542728 PMCID: PMC7840255 DOI: 10.1155/2021/6668175] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2020] [Revised: 12/19/2020] [Accepted: 01/07/2021] [Indexed: 12/26/2022] Open
Abstract
Gamma oscillation in neural circuits is believed to be associated with effective learning in the brain, but the underlying mechanism is unclear. This paper studies how spike-timing-dependent plasticity (STDP), a typical mechanism of learning, interacts with gamma oscillation in neural circuits to shape the network's dynamical properties and structure formation. We study an excitatory-inhibitory (E-I) integrate-and-fire neuronal network with triplet STDP, heterosynaptic plasticity, and transmitter-induced plasticity. Our results show that the performance of plasticity differs across synchronization levels. We find that gamma oscillation is beneficial to synaptic potentiation among stimulated neurons by forming a special network structure in which the sum of excitatory input synaptic strengths is correlated with the sum of inhibitory input synaptic strengths. The circuit can maintain E-I balanced input on average, whereas the balance is temporarily broken during learning-induced oscillations. Our study reveals a potential mechanism underlying the benefits of gamma oscillation for learning in biological neural circuits.
Collapse
|
73
|
Rastogi M, Lu S, Islam N, Sengupta A. On the Self-Repair Role of Astrocytes in STDP Enabled Unsupervised SNNs. Front Neurosci 2021; 14:603796. [PMID: 33519358 PMCID: PMC7841294 DOI: 10.3389/fnins.2020.603796] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2020] [Accepted: 11/27/2020] [Indexed: 11/29/2022] Open
Abstract
Neuromorphic computing is emerging as a disruptive computational paradigm that attempts to emulate various facets of the underlying structure and functionalities of the brain in the algorithm and hardware design of next-generation machine learning platforms. This work goes beyond the focus of current neuromorphic computing architectures on computational models of the neuron and synapse to examine other computational units of the biological brain that might contribute to cognition, and especially to self-repair. We draw inspiration and insights from computational neuroscience regarding the functionalities of glial cells and explore their role in the fault-tolerant capacity of Spiking Neural Networks (SNNs) trained in an unsupervised fashion using Spike-Timing-Dependent Plasticity (STDP). We characterize the degree of self-repair that can be enabled in such networks with varying degrees of faults, ranging from 50 to 90%, and evaluate our proposal on the MNIST and Fashion-MNIST datasets.
Collapse
Affiliation(s)
- Mehul Rastogi
- School of Electrical Engineering and Computer Science, Pennsylvania State University (PSU), University Park, PA, United States
- Department of Computer Science and Information Systems, Birla Institute of Technology and Science Pilani, Goa Campus, India
| | - Sen Lu
- School of Electrical Engineering and Computer Science, Pennsylvania State University (PSU), University Park, PA, United States
| | - Nafiul Islam
- School of Electrical Engineering and Computer Science, Pennsylvania State University (PSU), University Park, PA, United States
| | - Abhronil Sengupta
- School of Electrical Engineering and Computer Science, Pennsylvania State University (PSU), University Park, PA, United States
| |
Collapse
|
74
|
Berberian N, Ross M, Chartier S. Embodied working memory during ongoing input streams. PLoS One 2021; 16:e0244822. [PMID: 33400724 PMCID: PMC7785253 DOI: 10.1371/journal.pone.0244822] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2020] [Accepted: 12/16/2020] [Indexed: 11/18/2022] Open
Abstract
Sensory stimuli endow animals with the ability to generate an internal representation. This representation can be maintained for a certain duration in the absence of previously elicited inputs. The reliance on an internal representation rather than purely on the basis of external stimuli is a hallmark feature of higher-order functions such as working memory. Patterns of neural activity produced in response to sensory inputs can continue long after the disappearance of previous inputs. Experimental and theoretical studies have largely invested in understanding how animals faithfully maintain sensory representations during ongoing reverberations of neural activity. However, these studies have focused on preassigned protocols of stimulus presentation, leaving out by default the possibility of exploring how the content of working memory interacts with ongoing input streams. Here, we study working memory using a network of spiking neurons with dynamic synapses subject to short-term and long-term synaptic plasticity. The formal model is embodied in a physical robot as a companion approach under which neuronal activity is directly linked to motor output. The artificial agent is used as a methodological tool for studying the formation of working memory capacity. To this end, we devise a keyboard listening framework to delineate the context under which working memory content is (1) refined, (2) overwritten or (3) resisted by ongoing new input streams. Ultimately, this study takes a neurorobotic perspective to resurface the long-standing implication of working memory in flexible cognition.
Collapse
Affiliation(s)
- Nareg Berberian
- Laboratory for Computational Neurodynamics and Cognition, School of Psychology, University of Ottawa, Ottawa, Ontario, Canada
| | - Matt Ross
- Laboratory for Computational Neurodynamics and Cognition, School of Psychology, University of Ottawa, Ottawa, Ontario, Canada
| | - Sylvain Chartier
- Laboratory for Computational Neurodynamics and Cognition, School of Psychology, University of Ottawa, Ottawa, Ontario, Canada
| |
Collapse
|
75
|
Demin VA, Nekhaev DV, Surazhevsky IA, Nikiruy KE, Emelyanov AV, Nikolaev SN, Rylkov VV, Kovalchuk MV. Necessary conditions for STDP-based pattern recognition learning in a memristive spiking neural network. Neural Netw 2020; 134:64-75. [PMID: 33291017 DOI: 10.1016/j.neunet.2020.11.005] [Citation(s) in RCA: 48] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2020] [Revised: 09/19/2020] [Accepted: 11/12/2020] [Indexed: 11/28/2022]
Abstract
This work studies experimental and theoretical approaches to the search for effective local training rules for unsupervised pattern recognition by high-performance memristor-based Spiking Neural Networks (SNNs). First, the possibility of weight change using Spike-Timing-Dependent Plasticity (STDP) is demonstrated with a pair of hardware analog neurons connected through a (CoFeB)x(LiNbO3)1-x nanocomposite memristor. Next, the convergence of learning to a solution of a binary clusterization task is analyzed across a wide range of memristive STDP parameters for a single-layer, fully connected feedforward SNN. The memristive STDP behavior that ensures convergence in this simple task is shown to also provide it in the handwritten-digit recognition domain for a more complex SNN architecture with Winner-Take-All competition between neurons. To investigate the basic conditions necessary for training convergence, an original probabilistic generative model of a rate-based single-layer network with independent or competing neurons is built and thoroughly analyzed. The main result is a "correlation growth-anticorrelation decay" principle that prompts a near-optimal policy for configuring model parameters. This principle is in line with requiring binary clusterization convergence, which can be taken as the necessary condition for optimal learning and used as a simple benchmark for tuning the parameters of various neural network realizations with population-rate information coding. Finally, a heuristic algorithm is described for experimentally finding the convergence conditions in a memristive SNN, including robustness to device variability. Due to the generality of the proposed approach, it can be applied to a wide range of memristors and neurons in software- or hardware-based rate-coding single-layer SNNs when searching for local rules that ensure unsupervised learning convergence in a pattern recognition task domain.
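The pair-based STDP rule at the core of this and several following entries can be summarized as an exponential weight-update window. A minimal sketch follows; the amplitudes and time constants are illustrative placeholders, not the measured characteristics of the memristive devices.

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012,
            tau_plus=0.02, tau_minus=0.02):
    """Pair-based STDP weight change for delta_t = t_post - t_pre (s).

    Pre-before-post spiking (delta_t >= 0) potentiates, post-before-pre
    depresses; both branches decay exponentially with the spike-time
    difference. All parameter values are illustrative.
    """
    dt = np.asarray(delta_t, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))
```

With `a_minus * tau_minus > a_plus * tau_plus` the rule is depression-dominated on average, one common way to keep total synaptic weight bounded during unsupervised learning.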
Affiliation(s)
- V A Demin
- National Research Center "Kurchatov Institute", Moscow, Russia
- D V Nekhaev
- National Research Center "Kurchatov Institute", Moscow, Russia
- I A Surazhevsky
- National Research Center "Kurchatov Institute", Moscow, Russia
- K E Nikiruy
- National Research Center "Kurchatov Institute", Moscow, Russia
- A V Emelyanov
- National Research Center "Kurchatov Institute", Moscow, Russia; Moscow Institute of Physics and Technology, Dolgoprudny, Russia
- S N Nikolaev
- National Research Center "Kurchatov Institute", Moscow, Russia
- V V Rylkov
- National Research Center "Kurchatov Institute", Moscow, Russia; Kotel'nikov Institute of Radio Engineering and Electronics RAS, 141190 Fryazino, Moscow Region, Russia
- M V Kovalchuk
- National Research Center "Kurchatov Institute", Moscow, Russia; Moscow Institute of Physics and Technology, Dolgoprudny, Russia; Lomonosov Moscow State University, Moscow, Russia
76
Deperrois N, Graupner M. Short-term depression and long-term plasticity together tune sensitive range of synaptic plasticity. PLoS Comput Biol 2020; 16:e1008265. [PMID: 32976516 PMCID: PMC7549837 DOI: 10.1371/journal.pcbi.1008265] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2019] [Revised: 10/12/2020] [Accepted: 08/17/2020] [Indexed: 01/24/2023] Open
Abstract
Synaptic efficacy is subject to activity-dependent changes on short and long time scales. While short-term changes decay over minutes, long-term modifications last from hours up to a lifetime and are thought to constitute the basis of learning and memory. Both plasticity mechanisms have been studied extensively, but little is known about how their interaction shapes synaptic dynamics. To investigate how short- and long-term plasticity together control the induction of synaptic depression and potentiation, we used numerical simulations and mathematical analysis of a calcium-based model, where pre- and postsynaptic activity induces calcium transients driving synaptic long-term plasticity. We found that the model, implementing known synaptic short-term dynamics in the calcium transients, can be successfully fitted to long-term plasticity data obtained in visual and somatosensory cortex. Interestingly, the impact of spike timing and firing-rate changes on plasticity occurs in the prevalent firing-rate range, which differs between the two cortical areas considered here. Our findings suggest that short- and long-term plasticity are together tuned to adapt plasticity to area-specific activity statistics such as firing rates.
Synaptic long-term plasticity, the long-lasting change in efficacy of connections between neurons, is believed to underlie learning and memory. Synapses furthermore change their efficacy reversibly in an activity-dependent manner on the subsecond time scale, referred to as short-term plasticity. It is not known how the two synaptic plasticity mechanisms, long- and short-term, interact during activity epochs. To address this question, we used a biologically inspired plasticity model in which calcium drives changes in synaptic efficacy. We applied the model to plasticity data from visual and somatosensory cortex and found that synaptic changes occur in very different firing-rate ranges, which correspond to the prevalent firing rates in the two structures. Our results suggest that short- and long-term plasticity act in a well-concerted fashion.
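The calcium-based induction mechanism described above can be sketched in a few lines: pre- and postsynaptic spikes add to a shared calcium trace, and the synaptic efficacy variable drifts up or down while the trace exceeds a potentiation or depression threshold. All parameter values below (calcium amplitudes, thresholds, drift rates) are illustrative placeholders, not the fitted cortical values from the paper.

```python
def run_calcium_rule(pre_spikes, post_spikes, T=1.0, dt=1e-4,
                     c_pre=1.0, c_post=2.0, tau_ca=0.02,
                     theta_d=1.0, theta_p=1.3,
                     gamma_d=200.0, gamma_p=320.0,
                     tau_rho=50.0, rho0=0.5):
    """Euler simulation of a minimal calcium-based plasticity rule.

    The calcium trace jumps by c_pre/c_post at spike times and decays
    with time constant tau_ca; the efficacy rho is driven toward 1
    while calcium exceeds theta_p and toward 0 while it exceeds
    theta_d. All parameters are illustrative.
    """
    pre = {int(round(t / dt)) for t in pre_spikes}
    post = {int(round(t / dt)) for t in post_spikes}
    ca, rho = 0.0, rho0
    for i in range(int(T / dt)):
        ca -= ca / tau_ca * dt                       # exponential decay
        ca += c_pre * (i in pre) + c_post * (i in post)
        rho += dt / tau_rho * (gamma_p * (1.0 - rho) * (ca > theta_p)
                               - gamma_d * rho * (ca > theta_d))
    return rho

# Coincident pre/post pairs push calcium above both thresholds long
# enough for net potentiation; no activity leaves rho unchanged.
pairs = [0.1 + 0.05 * k for k in range(10)]
```

The paper's short-term depression would, in this sketch, correspond to scaling `c_pre` by a running depression factor; here the presynaptic calcium amplitude is kept fixed for brevity.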
Affiliation(s)
- Nicolas Deperrois
- Université de Paris, CNRS, SPPIN - Saints-Pères Paris Institute for the Neurosciences, F-75006 Paris, France
- Michael Graupner
- Université de Paris, CNRS, SPPIN - Saints-Pères Paris Institute for the Neurosciences, F-75006 Paris, France
77
Ebner C, Clopath C, Jedlicka P, Cuntz H. Unifying Long-Term Plasticity Rules for Excitatory Synapses by Modeling Dendrites of Cortical Pyramidal Neurons. Cell Rep 2020; 29:4295-4307.e6. [PMID: 31875541 PMCID: PMC6941234 DOI: 10.1016/j.celrep.2019.11.068] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2018] [Revised: 05/02/2019] [Accepted: 11/15/2019] [Indexed: 11/30/2022] Open
Abstract
A large number of experiments have indicated that precise spike times, firing rates, and synapse locations crucially determine the dynamics of long-term plasticity induction in excitatory synapses. However, it remains unknown how plasticity mechanisms of synapses distributed along dendritic trees cooperate to produce the wide spectrum of outcomes for various plasticity protocols. Here, we propose a four-pathway plasticity framework that is well grounded in experimental evidence and apply it to a biophysically realistic cortical pyramidal neuron model. We show in computer simulations that several seemingly contradictory experimental landmark studies are consistent with one unifying set of mechanisms when considering the effects of signal propagation in dendritic trees with respect to synapse location. Our model identifies specific spatiotemporal contributions of dendritic and axo-somatic spikes as well as of subthreshold activation of synaptic clusters, providing a unified parsimonious explanation not only for rate and timing dependence but also for location dependence of synaptic changes.
Highlights:
- A phenomenological synaptic plasticity rule is applied to a pyramidal neuron model
- Model reproduces rate-, timing-, and location-dependent plasticity results
- Active dendrites allow plasticity via dendritic spikes and subthreshold events
- Cooperative plasticity exists across the dendritic tree and within single branches
Affiliation(s)
- Christian Ebner
- Frankfurt Institute for Advanced Studies, 60438 Frankfurt am Main, Germany; Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt am Main, Germany; NeuroCure Cluster of Excellence, Charité-Universitätsmedizin Berlin, 10117 Berlin, Germany; Institute for Biology, Humboldt-Universität zu Berlin, 10117 Berlin, Germany
- Claudia Clopath
- Computational Neuroscience Laboratory, Bioengineering Department, Imperial College London, London SW7 2AZ, UK
- Peter Jedlicka
- Frankfurt Institute for Advanced Studies, 60438 Frankfurt am Main, Germany; Institute of Clinical Neuroanatomy, Neuroscience Center, Goethe University Frankfurt, 60528 Frankfurt am Main, Germany; ICAR3R-Interdisciplinary Centre for 3Rs in Animal Research, Faculty of Medicine, Justus-Liebig-University, 35392 Giessen, Germany
- Hermann Cuntz
- Frankfurt Institute for Advanced Studies, 60438 Frankfurt am Main, Germany; Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt am Main, Germany
78
Phillips RS, Rosner I, Gittis AH, Rubin JE. The effects of chloride dynamics on substantia nigra pars reticulata responses to pallidal and striatal inputs. eLife 2020; 9:e55592. [PMID: 32894224 PMCID: PMC7476764 DOI: 10.7554/elife.55592] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2020] [Accepted: 08/14/2020] [Indexed: 11/20/2022] Open
Abstract
As a rodent basal ganglia (BG) output nucleus, the substantia nigra pars reticulata (SNr) is well positioned to impact behavior. SNr neurons receive GABAergic inputs from the striatum (direct pathway) and globus pallidus (GPe, indirect pathway). Dominant theories of action selection rely on these pathways' inhibitory actions. Yet, experimental results on SNr responses to these inputs are limited and include excitatory effects. Our study combines experimental and computational work to characterize, explain, and make predictions about these pathways. We observe diverse SNr responses to stimulation of SNr-projecting striatal and GPe neurons, including biphasic and excitatory effects, which our modeling shows can be explained by intracellular chloride processing. Our work predicts that ongoing GPe activity could tune the SNr operating mode, including its responses in decision-making scenarios, and GPe output may modulate synchrony and low-frequency oscillations of SNr neurons, which we confirm using optogenetic stimulation of GPe terminals within the SNr.
Affiliation(s)
- Ryan S Phillips
- Department of Mathematics, University of Pittsburgh, Pittsburgh, United States
- Center for the Neural Basis of Cognition, Pittsburgh, United States
- Ian Rosner
- Center for the Neural Basis of Cognition, Pittsburgh, United States
- Department of Biological Sciences, Carnegie Mellon University, Pittsburgh, United States
- Aryn H Gittis
- Center for the Neural Basis of Cognition, Pittsburgh, United States
- Department of Biological Sciences, Carnegie Mellon University, Pittsburgh, United States
- Jonathan E Rubin
- Department of Mathematics, University of Pittsburgh, Pittsburgh, United States
- Center for the Neural Basis of Cognition, Pittsburgh, United States
79
Taherkhani A, Cosma G, McGinnity TM. Optimization of Output Spike Train Encoding for a Spiking Neuron Based on its Spatio–Temporal Input Pattern. IEEE Trans Cogn Dev Syst 2020. [DOI: 10.1109/tcds.2019.2909355] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
80
Jackson MB. Hebbian and non-Hebbian timing-dependent plasticity in the hippocampal CA3 region. Hippocampus 2020; 30:1241-1256. [PMID: 32818312 DOI: 10.1002/hipo.23252] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2020] [Revised: 07/14/2020] [Accepted: 07/19/2020] [Indexed: 11/10/2022]
Abstract
The timing between synaptic inputs has been proposed to play a role in the induction of plastic changes that enable neural circuits to store information. In the case of spike timing-dependent plasticity (STDP), this relates to the interval between a synaptic input and a postsynaptic spike, thus providing a conceptual link to the Hebb learning rule. Experiments have documented STDP in many synapses and brain regions, and computational models have tested its utility in many neural network functions. However, questions remain about whether timing plays a role in plasticity during natural activity, and whether it can function in information storage. The present study used imaging with voltage sensitive dye to investigate the effectiveness of input timing in the plasticity of responses in the CA3 region of hippocampal slices. Plasticity was induced by sequential dual-site stimulation at 10 ms intervals of either synaptic inputs and cell bodies (synaptic-somatic induction) or of two sets of synaptic inputs (synaptic-synaptic induction). Both protocols potentiated responses, with greater potentiation of responses to the first stimulation of the sequence than the second. Neither of these protocols induced depression. Synaptic-somatic stimulation was much more effective than synaptic-synaptic stimulation in evoking somatic action potentials, but both protocols potentiated responses equally well. This suggests that sequential dual-site stimulation can potentiate equally well with very different degrees of somatic action potential firing. With synaptic-somatic induction, potentiation was focused at the sites of stimulation. In contrast, with synaptic-synaptic induction, the distribution of potentiation varied greatly. Changes in the spatial distribution of responses indicated that sequential dual-site stimulation functions poorly in the storage of activity patterns. 
These results suggest that in the hippocampal CA3 region, timed sequential activation of two inputs is less effective than theta bursts, both in the induction of LTP and in the storage of information.
Affiliation(s)
- Meyer B Jackson
- Department of Neuroscience, University of Wisconsin, Madison, Wisconsin, USA
81
Multiplexing rhythmic information by spike timing dependent plasticity. PLoS Comput Biol 2020; 16:e1008000. [PMID: 32598350 PMCID: PMC7351241 DOI: 10.1371/journal.pcbi.1008000] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2019] [Revised: 07/10/2020] [Accepted: 05/29/2020] [Indexed: 01/05/2023] Open
Abstract
Rhythmic activity has been associated with a wide range of cognitive processes, including the encoding of sensory information, navigation, and the transfer of information. Rhythmic activity in the brain has also been suggested to be used for multiplexing information. Multiplexing is the ability to transmit more than one signal via the same channel. Here we focus on frequency-division multiplexing, in which different signals are transmitted in different frequency bands. Recent work showed that spike-timing-dependent plasticity (STDP) can facilitate the transfer of rhythmic activity down the information-processing pathway. However, STDP is also known to generate strong winner-take-all-like competition between subgroups of correlated synaptic inputs. This competition between different rhythmicity channels, induced by STDP, may prevent the multiplexing of information, raising doubts about whether STDP is consistent with the idea of multiplexing. This study explores whether STDP can facilitate the multiplexing of information across multiple frequency channels, and if so, under what conditions. We address this question in a modelling study, investigating the STDP dynamics of two populations synapsing onto the same downstream neuron in a feed-forward manner. Each population was assumed to exhibit rhythmic activity, albeit in a different frequency band. Our theory reveals that the winner-take-all-like competition between the two populations is limited, in the sense that different rhythmic populations will not necessarily fully suppress each other. Furthermore, we found that for a wide range of parameters, the network converged to a solution in which the downstream neuron responded to both rhythms. Yet the synaptic weights themselves did not converge to a fixed point; rather, they remained dynamic.
These findings imply that STDP can support the multiplexing of rhythmic information, and demonstrate how functionality (multiplexing of information) can be retained in the face of continuous remodeling of all the synaptic weights. The constraints on the types of STDP rules that can support multiplexing provide a natural test for our theory.
Spike-timing-dependent plasticity (STDP) quantifies the change in synaptic efficacy as a function of the temporal relationship between pre- and postsynaptic firing. STDP can be viewed as a microscopic unsupervised learning rule, and a wide range of such microscopic learning rules have been described empirically. Since there is no supervisor in unsupervised learning (which would provide the system with its goal), theoreticians have struggled with the question of the possible computational roles of the various STDP rules. Previous studies have focused on the possible contribution of STDP to the spontaneous development of spatial structure. However, the rich temporal repertoire of reported STDP rules has largely been ignored. Here we studied the contribution of STDP to the development of temporal structure. We show how STDP can shape synaptic efficacies to facilitate the transfer of rhythmic information downstream and to enable the multiplexing of information across different frequency channels. Our work emphasizes the relationship between the temporal structure of the STDP rule and the rhythmic activity it can support.
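The two rhythmic input populations in this kind of frequency-division multiplexing setting are commonly generated as inhomogeneous Poisson processes whose rates oscillate in different frequency bands. A minimal sketch follows; the rates, band frequencies, and modulation depth are illustrative, not the paper's simulation parameters.

```python
import numpy as np

def rhythmic_poisson(rate_hz, f_hz, depth, T=2.0, dt=1e-3, seed=0):
    """Spike times of an inhomogeneous Poisson process whose rate is
    rate_hz * (1 + depth * sin(2*pi*f_hz*t)), i.e. a rhythmicity
    channel centred on f_hz. depth < 1 keeps the rate positive.
    Thinning is done per time bin (rate * dt must stay well below 1)."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, T, dt)
    rate = rate_hz * (1.0 + depth * np.sin(2.0 * np.pi * f_hz * t))
    return t[rng.random(t.size) < rate * dt]

# Two input channels occupying different frequency bands, feeding the
# same downstream neuron in the multiplexing setup described above.
theta_train = rhythmic_poisson(20.0, 8.0, 0.8, seed=1)
gamma_train = rhythmic_poisson(20.0, 40.0, 0.8, seed=2)
```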
82
Haessig G, Milde MB, Aceituno PV, Oubari O, Knight JC, van Schaik A, Benosman RB, Indiveri G. Event-Based Computation for Touch Localization Based on Precise Spike Timing. Front Neurosci 2020; 14:420. [PMID: 32528239 PMCID: PMC7248403 DOI: 10.3389/fnins.2020.00420] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2019] [Accepted: 04/07/2020] [Indexed: 11/13/2022] Open
Abstract
Precise spike timing and temporal coding are used extensively within the nervous system of insects and in the sensory periphery of higher-order animals. However, conventional Artificial Neural Networks (ANNs) and machine learning algorithms cannot take advantage of this coding strategy, due to their rate-based representation of signals. Even in the case of artificial Spiking Neural Networks (SNNs), identifying applications where temporal coding outperforms the rate-coding strategies of ANNs is still an open challenge. Neuromorphic sensory-processing systems provide an ideal context for exploring the potential advantages of temporal coding, as they are able to efficiently extract the information required to cluster or classify spatio-temporal activity patterns from relative spike timing. Here we propose a neuromorphic model inspired by the sand scorpion to explore the benefits of temporal coding, and validate it in an event-based sensory-processing task. The task consists of localizing a target using only the relative spike timing of eight spatially separated vibration sensors. We propose two different approaches in which the SNN learns to cluster spatio-temporal patterns in an unsupervised manner, and we demonstrate how the task can be solved both analytically and through numerical simulation of multiple SNN models. We argue that the models presented are optimal for spatio-temporal pattern classification using precise spike timing in a task that could be used as a standard benchmark for evaluating event-based sensory-processing models based on temporal coding.
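The localization principle, reading target direction out of the relative first-spike times of eight circularly arranged sensors, can be checked with a plane-wave toy model and a population-vector readout. The geometry and wave speed below are illustrative, not the sand scorpion's actual parameters or the paper's SNN implementation.

```python
import numpy as np

def arrival_times(target_angle, speed=50.0, radius=0.1, n_sensors=8):
    """First-arrival times of a plane surface wave from direction
    target_angle at n_sensors placed evenly on a circle of the given
    radius (m); speed in m/s. The sensor facing the source fires
    first. Geometry and wave speed are illustrative."""
    angles = 2.0 * np.pi * np.arange(n_sensors) / n_sensors
    delays = -radius * np.cos(angles - target_angle) / speed
    return delays - delays.min()          # earliest arrival at t = 0

def estimate_angle(times):
    """Population-vector readout: each sensor's direction is weighted
    by how early its first spike arrives."""
    angles = 2.0 * np.pi * np.arange(times.size) / times.size
    weights = times.max() - times
    return np.angle(np.sum(weights * np.exp(1j * angles)))
```

For this symmetric geometry the readout recovers the source direction exactly; with spike-time jitter it degrades gracefully, which is part of what makes relative-timing codes a natural benchmark task.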
Affiliation(s)
- Germain Haessig
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Moritz B Milde
- International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University, Penrith, NSW, Australia
- Pau Vilimelis Aceituno
- Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany; Max Planck School of Cognition, Leipzig, Germany
- Omar Oubari
- Institut de la Vision, Sorbonne Université, Paris, France
- James C Knight
- Centre for Computational Neuroscience and Robotics, School of Engineering and Informatics, University of Sussex, Brighton, United Kingdom
- André van Schaik
- International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University, Penrith, NSW, Australia
- Ryad B Benosman
- Institut de la Vision, Sorbonne Université, Paris, France; University of Pittsburgh, Pittsburgh, PA, United States; Carnegie Mellon University, Pittsburgh, PA, United States
- Giacomo Indiveri
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
83
Moradi K, Ascoli GA. A comprehensive knowledge base of synaptic electrophysiology in the rodent hippocampal formation. Hippocampus 2020; 30:314-331. [PMID: 31472001 PMCID: PMC7875289 DOI: 10.1002/hipo.23148] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2019] [Revised: 07/16/2019] [Accepted: 08/06/2019] [Indexed: 01/14/2023]
Abstract
The cellular and synaptic architecture of the rodent hippocampus has been described in thousands of peer-reviewed publications. However, no human- or machine-readable public catalog of synaptic electrophysiology data exists for this or any other neural system. Harnessing state-of-the-art information technology, we have developed a cloud-based toolset for identifying empirical evidence from the scientific literature pertaining to synaptic electrophysiology, for extracting the experimental data of interest, and for linking each entry to relevant text or figure excerpts. Mining more than 1,200 published journal articles, we have identified eight different signal modalities quantified by 90 different methods to measure synaptic amplitude, kinetics, and plasticity in hippocampal neurons. We have designed a data structure that both reflects the differences and maintains the existing relations among experimental modalities. Moreover, we mapped every annotated experiment to identified potential connections, that is, specific pairs of presynaptic and postsynaptic neuron types. To this aim, we leveraged Hippocampome.org, an open-access knowledge base of morphologically, electrophysiologically, and molecularly characterized neuron types in the rodent hippocampal formation. Specifically, we have implemented a computational pipeline to systematically translate neuron type properties into formal queries in order to find all compatible potential connections. With this system, we have collected nearly 40,000 synaptic data entities covering 88% of the 3,120 potential connections in Hippocampome.org. Correcting membrane potentials with respect to liquid junction potentials significantly reduced the difference between theoretical and experimental reversal potentials, thereby enabling the accurate conversion of all synaptic amplitudes to conductance. This data set allows for large-scale hypothesis testing of the general rules governing synaptic signals. 
To illustrate these applications, we confirmed several expected correlations between synaptic measurements and their covariates while suggesting previously unreported ones. We release all data open-source at Hippocampome.org in order to further research across disciplines.
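The conversion the authors describe, correcting the recorded membrane potential for the liquid junction potential before turning a synaptic current amplitude into a conductance, reduces to Ohm's law. A sketch with illustrative values follows; note that sign conventions for the junction potential vary between labs, and the one used here is an assumption.

```python
def synaptic_conductance_ns(i_pa, v_hold_mv, ljp_mv, e_rev_mv):
    """Convert a synaptic current amplitude (pA) to conductance (nS).

    The nominal holding potential is first corrected by subtracting
    the liquid junction potential, then g = I / (V_corrected - E_rev).
    pA / mV = nS, so no further unit conversion is needed.
    """
    driving_mv = (v_hold_mv - ljp_mv) - e_rev_mv
    if driving_mv == 0:
        raise ValueError("corrected holding potential equals E_rev")
    return i_pa / driving_mv

# Example: -50 pA inward current, nominal holding at -60 mV, 10 mV
# junction potential, glutamatergic reversal near 0 mV.
g = synaptic_conductance_ns(-50.0, -60.0, 10.0, 0.0)   # ≈ 0.71 nS
```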
Affiliation(s)
- Keivan Moradi
- Neuroscience Program, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA (USA)
- Giorgio A. Ascoli
- Neuroscience Program, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA (USA)
- Bioengineering Department, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA (USA)
84
Mendes A, Vignoud G, Perez S, Perrin E, Touboul J, Venance L. Concurrent Thalamostriatal and Corticostriatal Spike-Timing-Dependent Plasticity and Heterosynaptic Interactions Shape Striatal Plasticity Map. Cereb Cortex 2020; 30:4381-4401. [DOI: 10.1093/cercor/bhaa024] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022] Open
Abstract
The striatum integrates inputs from the cortex and thalamus, which display concomitant or sequential activity. The striatum assists in forming memory, with acquisition of the behavioral repertoire being associated with corticostriatal (CS) plasticity. The literature has mainly focused on CS plasticity, and little is known about thalamostriatal (TS) plasticity rules or about interactions between CS and TS plasticity. We undertook here the study of these plasticity rules. We found bidirectional Hebbian and anti-Hebbian spike-timing-dependent plasticity (STDP) at the thalamic and cortical inputs, respectively, which drove concurrent changes at the striatal synapses. Moreover, TS- and CS-STDP induced heterosynaptic plasticity. We developed a calcium-based mathematical model of the coupled TS and CS plasticity, and simulations predict complex changes in the CS and TS plasticity maps depending on the precise cortex-thalamus-striatum engram. These predictions were experimentally validated using triplet-based STDP stimulations, which revealed significant remodeling of the CS-STDP map upon TS activity, notably the induction of LTD areas in the CS-STDP map for specific timing regimes. TS-STDP exerts a greater influence on CS plasticity than CS-STDP exerts on TS plasticity. These findings highlight the major impact of precise timing of cortical and thalamic activity on the memory engram of striatal synapses.
Affiliation(s)
- Alexandre Mendes
- Dynamics and Pathophysiology of Neuronal Networks Team, Center for Interdisciplinary Research in Biology (CIRB), College de France, CNRS UMR7241, INSERM U1050, PSL Research University, Paris, 75005, France
- Gaetan Vignoud
- Dynamics and Pathophysiology of Neuronal Networks Team, Center for Interdisciplinary Research in Biology (CIRB), College de France, CNRS UMR7241, INSERM U1050, PSL Research University, Paris, 75005, France
- Department of Mathematics, Volen National Center for Complex Systems, Brandeis University, Waltham, MA 2454-9110, USA
- Sylvie Perez
- Dynamics and Pathophysiology of Neuronal Networks Team, Center for Interdisciplinary Research in Biology (CIRB), College de France, CNRS UMR7241, INSERM U1050, PSL Research University, Paris, 75005, France
- Elodie Perrin
- Dynamics and Pathophysiology of Neuronal Networks Team, Center for Interdisciplinary Research in Biology (CIRB), College de France, CNRS UMR7241, INSERM U1050, PSL Research University, Paris, 75005, France
- Jonathan Touboul
- Department of Mathematics, Volen National Center for Complex Systems, Brandeis University, Waltham, MA 2454-9110, USA
- Laurent Venance
- Dynamics and Pathophysiology of Neuronal Networks Team, Center for Interdisciplinary Research in Biology (CIRB), College de France, CNRS UMR7241, INSERM U1050, PSL Research University, Paris, 75005, France
85
Bačić I, Franović I. Two paradigmatic scenarios for inverse stochastic resonance. CHAOS (WOODBURY, N.Y.) 2020; 30:033123. [PMID: 32237779 DOI: 10.1063/1.5139628] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/22/2019] [Accepted: 03/04/2020] [Indexed: 06/11/2023]
Abstract
Inverse stochastic resonance comprises a nonlinear response of an oscillatory system to noise where the frequency of noise-perturbed oscillations becomes minimal at an intermediate noise level. We demonstrate two generic scenarios for inverse stochastic resonance by considering a paradigmatic model of two adaptively coupled stochastic active rotators whose local dynamics is close to a bifurcation threshold. In the first scenario, shown for the two rotators in the excitable regime, inverse stochastic resonance emerges due to a biased switching between the oscillatory and the quasi-stationary metastable states derived from the attractors of the noiseless system. In the second scenario, illustrated for the rotators in the oscillatory regime, inverse stochastic resonance arises due to a trapping effect associated with a noise-enhanced stability of an unstable fixed point. The details of the mechanisms behind the resonant effect are explained in terms of slow-fast analysis of the corresponding noiseless systems.
Affiliation(s)
- Iva Bačić
- Scientific Computing Laboratory, Center for the Study of Complex Systems, Institute of Physics Belgrade, University of Belgrade, Pregrevica 118, 11080 Belgrade, Serbia
- Igor Franović
- Scientific Computing Laboratory, Center for the Study of Complex Systems, Institute of Physics Belgrade, University of Belgrade, Pregrevica 118, 11080 Belgrade, Serbia
86
Lobov SA, Mikhaylov AN, Shamshin M, Makarov VA, Kazantsev VB. Spatial Properties of STDP in a Self-Learning Spiking Neural Network Enable Controlling a Mobile Robot. Front Neurosci 2020; 14:88. [PMID: 32174804 PMCID: PMC7054464 DOI: 10.3389/fnins.2020.00088] [Citation(s) in RCA: 54] [Impact Index Per Article: 13.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2019] [Accepted: 01/22/2020] [Indexed: 11/13/2022] Open
Abstract
Development of spiking neural networks (SNNs) controlling mobile robots is one of the modern challenges in computational neuroscience and artificial intelligence. Such networks, being replicas of biological ones, are expected to have a higher computational potential than traditional artificial neural networks (ANNs). The critical problem is in the design of robust learning algorithms aimed at building a “living computer” based on SNNs. Here, we propose a simple SNN equipped with a Hebbian rule in the form of spike-timing-dependent plasticity (STDP). The SNN implements associative learning by exploiting the spatial properties of STDP. We show that a LEGO robot controlled by the SNN can exhibit classical and operant conditioning. Competition of spike-conducting pathways in the SNN plays a fundamental role in establishing associations of neural connections. It replaces the irrelevant associations by new ones in response to a change in stimuli. Thus, the robot gets the ability to relearn when the environment changes. The proposed SNN and the stimulation protocol can be further enhanced and tested in developing neuronal cultures, and also admit the use of memristive devices for hardware implementation.
Affiliation(s)
- Sergey A Lobov
- Neurotechnology Department, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia; Neuroscience and Cognitive Technology Laboratory, Center for Technologies in Robotics and Mechatronics Components, Innopolis University, Innopolis, Russia
- Alexey N Mikhaylov
- Neurotechnology Department, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Maxim Shamshin
- Neurotechnology Department, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Valeri A Makarov
- Neurotechnology Department, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia; Instituto de Matemática Interdisciplinar, Facultad de Ciencias Matemáticas, Universidad Complutense de Madrid, Madrid, Spain
- Victor B Kazantsev
- Neurotechnology Department, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia; Neuroscience and Cognitive Technology Laboratory, Center for Technologies in Robotics and Mechatronics Components, Innopolis University, Innopolis, Russia
| |
87
Multi-level anomalous Hall resistance in a single Hall cross for the applications of neuromorphic device. Sci Rep 2020; 10:1285. [PMID: 31992806] [PMCID: PMC6987114] [DOI: 10.1038/s41598-020-58223-z]
Abstract
We demonstrate the process of obtaining memristive multi-state Hall resistance (RH) changes in a single Hall cross (SHC) structure. Moreover, the working mechanism successfully mimics the behavior of biological neural systems. The motion of a domain wall (DW) in the SHC was used to control the increase (or decrease) of the RH amplitude. The primary synaptic functions, such as long-term potentiation (LTP), long-term depression (LTD), and spike-time-dependent plasticity (STDP), could then be emulated by regulating RH. Programmable magnetic field pulses of varying intensity and duration were applied to adjust RH. These results show that analog readings of DW movement closely resemble changes of synaptic weight, holding great potential for bioinspired neuromorphic computing.
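The LTP/LTD behavior described above can be caricatured as a bounded analog weight nudged up or down by pulses. The following sketch is purely illustrative (the bounds, pulse amplitudes, and linear DW response are made-up assumptions, not measured device data):

```python
# Minimal sketch of a multi-level memristive synapse: the anomalous Hall
# resistance RH is nudged up or down by field pulses, bounded by the
# saturated domain-wall positions. All numbers are illustrative.
RH_MIN, RH_MAX = -1.0, 1.0

def apply_pulse(rh, amplitude, duration):
    """One magnetic-field pulse moves the domain wall; RH shifts proportionally."""
    rh += amplitude * duration
    return max(RH_MIN, min(RH_MAX, rh))  # DW cannot move past the cross edges

rh = 0.0
for _ in range(5):                 # five potentiating pulses -> LTP-like ramp
    rh = apply_pulse(rh, +0.1, 1.0)
ltp_level = rh
for _ in range(3):                 # depressing pulses -> LTD-like decay
    rh = apply_pulse(rh, -0.1, 1.0)
ltd_level = rh
```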
88
Lobov SA, Chernyshov AV, Krilova NP, Shamshin MO, Kazantsev VB. Competitive Learning in a Spiking Neural Network: Towards an Intelligent Pattern Classifier. Sensors 2020; 20:500. [PMID: 31963143] [PMCID: PMC7014236] [DOI: 10.3390/s20020500]
Abstract
One of the modern trends in the design of human–machine interfaces (HMI) is to involve so-called spiking neural networks (SNNs) in signal processing. SNNs can be trained by simple and efficient biologically inspired algorithms. In particular, we have shown that sensory neurons in the input layer of an SNN can simultaneously encode the input signal both in the spiking frequency rate and in the latency of spike generation. In the case of such mixed temporal-rate coding, the SNN should implement learning that works properly for both types of coding. Based on this, we investigate how a single neuron can be trained with pure rate and temporal patterns, and then build a universal SNN that is trained using mixed coding. In particular, we study Hebbian and competitive learning in SNNs in the context of temporal- and rate-coding problems. We show that Hebbian learning through pair-based and triplet-based spike-timing-dependent plasticity (STDP) rules is feasible for temporal coding, but not for rate coding. Synaptic competition inducing depression of poorly used synapses is required to ensure neural selectivity in rate coding. This kind of competition can be implemented by a so-called forgetting function that depends on neuron activity. We show that coherent use of the triplet-based STDP and synaptic competition with the forgetting function is sufficient for rate coding. Next, we propose an SNN capable of classifying electromyographic (EMG) patterns using an unsupervised learning procedure. Neuron competition achieved via lateral inhibition ensures the “winner takes all” principle among classifier neurons. The SNN also provides a gradual output response dependent on muscular contraction strength. Furthermore, we modify the SNN to implement a supervised learning method based on stimulation of the target classifier neuron synchronously with the network input. In a problem of discriminating three EMG patterns, the SNN with supervised learning shows a median accuracy of 99.5%, close to the result of a multi-layer perceptron trained by error back-propagation.
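The synaptic-competition idea can be sketched as follows: alongside a Hebbian term, every weight decays via a "forgetting function" that grows with postsynaptic activity, so rarely used synapses are driven to zero. The functional form and all constants here are illustrative assumptions, not the paper's:

```python
# Synaptic competition via an activity-dependent forgetting function.
# Functional form and constants are illustrative, not the paper's.
def forget_rate(post_rate_hz, k=0.002):
    return k * post_rate_hz

def update_weights(weights, usage, post_rate_hz, lr=0.5):
    decay = forget_rate(post_rate_hz)
    # Hebbian gain for causally used synapses, uniform forgetting for all
    return [min(1.0, max(0.0, w + lr * u - decay)) for w, u in zip(weights, usage)]

w = [0.5, 0.5]                  # two synapses start equal
for _ in range(50):
    w = update_weights(w, usage=[0.1, 0.0], post_rate_hz=10.0)
used, unused = w                # only the first synapse was causally active
```

The unused synapse loses the competition and decays to zero, which is what gives the neuron its rate-coding selectivity.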
89
Fast Convergence of Competitive Spiking Neural Networks with Sample-Based Weight Initialization. Information Processing and Management of Uncertainty in Knowledge-Based Systems 2020. [PMCID: PMC7274752] [DOI: 10.1007/978-3-030-50153-2_57]
Abstract
Recent work on spiking neural networks showed good progress towards unsupervised feature learning. In particular, networks called Competitive Spiking Neural Networks (CSNN) achieve reasonable accuracy in classification tasks. However, two major disadvantages limit their practical applications: high computational complexity and slow convergence. While the first problem has partially been addressed with the development of neuromorphic hardware, no work has addressed the latter problem. In this paper we show that the number of samples the CSNN needs to converge can be reduced significantly by a proposed new weight initialization. The proposed method uses input samples as initial values for the connection weights. Surprisingly, this simple initialization reduces the number of training samples needed for convergence by an order of magnitude without loss of accuracy. We use the MNIST dataset to show that the method is robust even when not all classes are seen during initialization.
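The initialization idea can be sketched directly: instead of random weights, each competitive neuron starts from a copy of one input sample, so its receptive field is immediately aligned with real data. The dimensions and data below are illustrative, not the paper's MNIST setup:

```python
import random

# Sample-based weight initialization for a competitive network (sketch).
random.seed(0)
N_IN, N_NEURONS = 64, 10

samples = [[random.random() for _ in range(N_IN)] for _ in range(200)]

def init_from_samples(samples):
    picked = random.sample(samples, N_NEURONS)
    return [list(s) for s in picked]  # copy so training cannot alias the data

W = init_from_samples(samples)
# Every row of W now matches some sample exactly -> zero initial mismatch,
# so far fewer training samples are needed before the weights converge.
mismatch = min(sum((w - x) ** 2 for w, x in zip(W[0], s)) for s in samples)
```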
90
Vosoughi A, Sadigh-Eteghad S, Ghorbani M, Shahmorad S, Farhoudi M, Rafi MA, Omidi Y. Mathematical Models to Shed Light on Amyloid-Beta and Tau Protein Dependent Pathologies in Alzheimer's Disease. Neuroscience 2019; 424:45-57. [PMID: 31682825] [DOI: 10.1016/j.neuroscience.2019.09.017]
Abstract
The number of patients suffering from dementia due to Alzheimer's disease (AD) is constantly rising worldwide, placing huge burdens on health systems and the families involved. The lack of a profound understanding of neural networking in the normal brain, and of its disruption in AD, makes the treatment of this multifaceted neurodegenerative disease a challenging issue. In recent years, mathematical and computational methods have paved the way towards a better understanding of brain functional connectivity. Much attention has therefore been paid to this matter by both basic-science researchers and clinicians, with an interdisciplinary approach to determining what is not functioning properly in AD patients and how this malfunctioning can be addressed. In this review, a number of AD-related articles and well-studied pathophysiologic topics (e.g., amyloid-beta, neurofibrillary tangles, Ca2+ dysregulation, and synaptic plasticity alterations) are surveyed from a computational and systems-biology point of view. Neural networks are discussed from biological and mathematical points of view, and their alterations in recent findings are highlighted. Application of graph-theoretical analysis in brain imaging is reviewed, depicting the relations between brain structure and function without diving into mathematical details. Moreover, differential rate equations are briefly articulated, emphasizing their potential use in simplifying complex processes relevant to the pathologies of AD. Comprehensive insights are given into AD progression from a neural-networks perspective, which may lead towards potential strategies for early diagnosis and effective treatment of AD.
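As a purely hypothetical illustration of the kind of differential rate equation such reviews refer to, consider a toy two-species model: soluble amyloid-beta monomers produced at rate p, cleared at rate c, and aggregating into plaque load at rate k. The model and every constant are illustrative, not taken from the reviewed literature:

```python
# Toy amyloid-beta rate-equation model, integrated with forward Euler.
# All parameters are illustrative, not from the reviewed literature.
P, C, K, DT = 1.0, 0.5, 0.1, 0.01

def step(m, a):
    dm = P - C * m - K * m      # production - clearance - aggregation
    da = K * m                  # plaque load only grows in this toy model
    return m + DT * dm, a + DT * da

m, a = 0.0, 0.0
for _ in range(5000):           # integrate 50 time units
    m, a = step(m, a)
steady_m = P / (C + K)          # analytic fixed point of dm/dt = 0
```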
Affiliation(s)
- Armin Vosoughi
- Neurosciences Research Center, Tabriz University of Medical Sciences, Tabriz, Iran; Research Center for Pharmaceutical Nanotechnology, Biomedicine Institute, Tabriz University of Medical Sciences, Tabriz, Iran
- Saeed Sadigh-Eteghad
- Neurosciences Research Center, Tabriz University of Medical Sciences, Tabriz, Iran
- Mehdi Farhoudi
- Neurosciences Research Center, Tabriz University of Medical Sciences, Tabriz, Iran
- Mohammad A Rafi
- Department of Neurology, Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, PA 19107, USA
- Yadollah Omidi
- Research Center for Pharmaceutical Nanotechnology, Biomedicine Institute, Tabriz University of Medical Sciences, Tabriz, Iran; Department of Pharmaceutics, Faculty of Pharmacy, Tabriz University of Medical Sciences, Tabriz, Iran
91
Reduced order models of myelinated axonal compartments. J Comput Neurosci 2019; 47:141-166. [PMID: 31659570] [DOI: 10.1007/s10827-019-00726-4]
Abstract
The paper presents a hierarchical series of computational models for myelinated axonal compartments. Three classes of models are considered, either with distributed parameters (2.5D EQS - ElectroQuasiStatic, 1D TL - Transmission Lines) or with lumped parameters (0D). They are systematically analyzed with both analytical and numerical approaches, the main goal being to identify the best procedure for order reduction in each case. An appropriate error estimator is proposed in order to assess the accuracy of the models. This is the foundation of a procedure able to find the simplest reduced model having an imposed precision. The most computationally efficient of the three geometries proved to be the analytical 1D model, which can reach an error below 0.1%. By order reduction with vector fitting, a finite-order model is generated with a relative difference of 10^-4 at order 5. The dynamical models thus extracted allow an efficient simulation of neurons and, consequently, of neuronal circuits. In such situations, the linear models of the myelinated compartments, coupled with the dynamical, nonlinear models of the Ranvier nodes, neuronal body (soma), and dendritic tree, give global reduced models. In order to ease the simulation of large-scale neuronal systems, the sub-models at each level, including those of the myelinated compartments, should have the lowest possible order. The presented procedure is a first step towards simulations of neural systems with accuracy control.
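The idea of "the simplest reduced model with an imposed precision" can be sketched abstractly: a full model is a long sum of first-order terms (a crude stand-in for a distributed-parameter compartment), the reduced model keeps only the first n terms, and an error estimator picks the smallest n meeting the tolerance. The transfer function below is illustrative only, not the paper's vector-fitting procedure:

```python
# Pick the smallest truncation order meeting an imposed precision (sketch).
FULL_ORDER = 2000

def model(s, order):
    # sum of first-order "pole" contributions 1 / (k^2 + s)
    return sum(1.0 / (k * k + s) for k in range(1, order + 1))

def smallest_order(s, tol):
    full = model(s, FULL_ORDER)          # reference "full" model
    for n in range(1, FULL_ORDER + 1):
        if abs(model(s, n) - full) / abs(full) < tol:
            return n                     # first order meeting the tolerance
    return FULL_ORDER

coarse = smallest_order(1.0, 1e-2)   # loose precision -> very low order
fine = smallest_order(1.0, 1e-4)     # tighter precision needs more terms
```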
92
Taherkhani A, Belatreche A, Li Y, Cosma G, Maguire LP, McGinnity TM. A review of learning in biologically plausible spiking neural networks. Neural Netw 2019; 122:253-272. [PMID: 31726331] [DOI: 10.1016/j.neunet.2019.09.036]
Abstract
Artificial neural networks have been used as a powerful processing tool in various areas such as pattern recognition, control, robotics, and bioinformatics. Their wide applicability has encouraged researchers to improve artificial neural networks by investigating the biological brain. Neurological research has significantly progressed in recent years and continues to reveal new characteristics of biological neurons. New technologies can now capture temporal changes in the internal activity of the brain in more detail and help clarify the relationship between brain activity and the perception of a given stimulus. This new knowledge has led to a new type of artificial neural network, the Spiking Neural Network (SNN), that draws more faithfully on biological properties to provide higher processing abilities. A review of recent developments in learning of spiking neurons is presented in this paper. First the biological background of SNN learning algorithms is reviewed. The important elements of a learning algorithm such as the neuron model, synaptic plasticity, information encoding and SNN topologies are then presented. Then, a critical review of the state-of-the-art learning algorithms for SNNs using single and multiple spikes is presented. Additionally, deep spiking neural networks are reviewed, and challenges and opportunities in the SNN field are discussed.
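The neuron model underlying most of the reviewed learning algorithms is the leaky integrate-and-fire (LIF) neuron, which can be sketched in a few lines; all parameter values below are illustrative:

```python
# Minimal leaky integrate-and-fire (LIF) neuron. Parameters are illustrative.
V_REST, V_THRESH, TAU_M, DT = 0.0, 1.0, 20.0, 1.0  # ms

def simulate_lif(input_current, n_steps):
    v, spikes = V_REST, []
    for t in range(n_steps):
        # Leak pulls v back to rest; input drives it toward threshold.
        v += DT / TAU_M * (-(v - V_REST) + input_current)
        if v >= V_THRESH:          # all-or-none spike, then reset
            spikes.append(t)
            v = V_REST
    return spikes

weak = simulate_lif(0.8, 200)   # subthreshold drive: v saturates below 1.0
strong = simulate_lif(1.5, 200) # suprathreshold drive: regular spiking
```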
Affiliation(s)
- Aboozar Taherkhani
- School of Computer Science and Informatics, Faculty of Computing, Engineering and Media, De Montfort University, Leicester, UK
- Ammar Belatreche
- Department of Computer and Information Sciences, Northumbria University, Newcastle upon Tyne, UK
- Yuhua Li
- School of Computer Science and Informatics, Cardiff University, Cardiff, UK
- Georgina Cosma
- Department of Computer Science, Loughborough University, Loughborough, UK
- Liam P Maguire
- Intelligent Systems Research Centre, Ulster University, Northern Ireland, Derry, UK
- T M McGinnity
- Intelligent Systems Research Centre, Ulster University, Northern Ireland, Derry, UK; School of Science and Technology, Nottingham Trent University, Nottingham, UK
93
Shamir M. Theories of rhythmogenesis. Curr Opin Neurobiol 2019; 58:70-77. [PMID: 31408837] [DOI: 10.1016/j.conb.2019.07.005]
Abstract
Rhythmogenesis is the process that develops the capacity for rhythmic activity in a non-rhythmic system. Theoretical works suggested a wide array of possible mechanisms for rhythmogenesis ranging from the regulation of cellular properties to top-down control. Here we discuss theories of rhythmogenesis with an emphasis on spike timing-dependent plasticity. We argue that even though the specifics of different mechanisms vary greatly they all share certain key features. Namely, rhythmogenesis can be described as a flow on the phase diagram leading the system into a rhythmic region and stabilizing it on a specific manifold characterized by the desired rhythmic activity. Functionality is retained despite biological diversity by forcing the system into a specific manifold, but allowing fluctuations within that manifold.
Affiliation(s)
- Maoz Shamir
- Department of Physiology and Cell Biology, Faculty of Health Sciences, Department of Physics, Faculty of Natural Sciences, Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Be'er-Sheva, Israel; The Kavli Institute for Theoretical Physics, University of California, Santa Barbara, USA.
94
Soto FA. Beyond the "Conceptual Nervous System": Can computational cognitive neuroscience transform learning theory? Behav Processes 2019; 167:103908. [PMID: 31381986] [DOI: 10.1016/j.beproc.2019.103908]
Abstract
In the last century, learning theory has been dominated by an approach assuming that associations between hypothetical representational nodes can support the acquisition of knowledge about the environment. The similarities between this approach and connectionism did not go unnoticed to learning theorists, with many of them explicitly adopting a neural network approach in the modeling of learning phenomena. Skinner famously criticized such use of hypothetical neural structures for the explanation of behavior (the "Conceptual Nervous System"), and one aspect of his criticism has proven to be correct: theory underdetermination is a pervasive problem in cognitive modeling in general, and in associationist and connectionist models in particular. That is, models implementing two very different cognitive processes often make the exact same behavioral predictions, meaning that important theoretical questions posed by contrasting the two models remain unanswered. We show through several examples that theory underdetermination is common in the learning theory literature, affecting the solvability of some of the most important theoretical problems that have been posed in the last decades. Computational cognitive neuroscience (CCN) offers a solution to this problem, by including neurobiological constraints in computational models of behavior and cognition. Rather than simply being inspired by neural computation, CCN models are built to reflect as much as possible about the actual neural structures thought to underlie a particular behavior. They go beyond the "Conceptual Nervous System" and offer a true integration of behavioral and neural levels of analysis.
Affiliation(s)
- Fabian A Soto
- Department of Psychology, Florida International University, 11200 SW 8th St, AHC4 460, Miami, FL 33199, United States.
95
Guo Y, Wu H, Gao B, Qian H. Unsupervised Learning on Resistive Memory Array Based Spiking Neural Networks. Front Neurosci 2019; 13:812. [PMID: 31447634] [PMCID: PMC6691091] [DOI: 10.3389/fnins.2019.00812]
Abstract
Spiking Neural Networks (SNNs) offer great potential to promote both the performance and efficiency of real-world computing systems, given their biological plausibility. Emerging analog Resistive Random Access Memory (RRAM) devices have drawn increasing interest as potential neuromorphic hardware for implementing practical SNNs. In this article, we propose a novel training approach (called greedy training) for SNNs that dilutes spike events along the temporal dimension, with the necessary control of input-encoding phase switching, endowing SNNs with the ability to cope with the inevitable conductance variations of RRAM devices. The SNNs use Spike-Timing-Dependent Plasticity (STDP) as the unsupervised learning rule, and this plasticity has been observed on our one-transistor-one-resistor (1T1R) RRAM devices under voltage pulses with designed waveforms. We also conducted handwritten-digit recognition simulations on the MNIST dataset. The results show that unsupervised SNNs trained by the proposed method relax the requirement on the number of gradual conductance levels of RRAM devices and are immune to both cycle-to-cycle and device-to-device conductance variations. Such SNNs cooperate better with real RRAM devices exhibiting non-ideal behaviors, promising high feasibility of RRAM-array-based neuromorphic systems for online training.
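The cycle-to-cycle variation problem can be sketched as follows: every programmed conductance change lands with multiplicative noise, and the claim under test is that repeated STDP-driven updates still move the device in the intended direction. The noise model and magnitudes here are illustrative assumptions, not measured device data:

```python
import random

# STDP-style weight update on a noisy memristive device (sketch).
# Noise model and magnitudes are illustrative, not device measurements.
random.seed(1)
G_MIN, G_MAX, SIGMA = 0.0, 1.0, 0.05

def program_device(g, target_dw):
    actual = target_dw * (1.0 + random.gauss(0.0, SIGMA))  # cycle-to-cycle noise
    return min(G_MAX, max(G_MIN, g + actual))              # bounded conductance

g = 0.5
for _ in range(100):             # repeated potentiation events
    g = program_device(g, +0.01)
potentiated = g
```

Despite the per-pulse noise, the accumulated update is reliably potentiating, which is the intuition behind noise-tolerant unsupervised training.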
Affiliation(s)
- Yilong Guo
- Institute of Microelectronics, Tsinghua University, Beijing, China
- Huaqiang Wu
- Institute of Microelectronics, Tsinghua University, Beijing, China
- Bin Gao
- Institute of Microelectronics, Tsinghua University, Beijing, China
- He Qian
- Institute of Microelectronics, Tsinghua University, Beijing, China
96
Farokhniaee A, McIntyre CC. Theoretical principles of deep brain stimulation induced synaptic suppression. Brain Stimul 2019; 12:1402-1409. [PMID: 31351911] [DOI: 10.1016/j.brs.2019.07.005]
Abstract
BACKGROUND: Deep brain stimulation (DBS) is a successful clinical therapy for a wide range of neurological disorders; however, the physiological mechanisms of DBS remain unresolved. While many different hypotheses currently exist, our analyses suggest that high frequency (∼100 Hz) stimulation-induced synaptic suppression represents the most basic concept that can be directly reconciled with experimental recordings of spiking activity in neurons that are being driven by DBS inputs.
OBJECTIVE: The goal of this project was to develop a simple model system to characterize the excitatory post-synaptic currents (EPSCs) and action potential signaling generated in a neuron that is strongly connected to pre-synaptic glutamatergic inputs that are being directly activated by DBS.
METHODS: We used the Tsodyks-Markram (TM) phenomenological synapse model to represent depressing, facilitating, and pseudo-linear synapses driven by DBS over a wide range of stimulation frequencies. The EPSCs were then used as inputs to a leaky integrate-and-fire neuron model and we measured the DBS-triggered post-synaptic spiking activity.
RESULTS: Synaptic suppression was a robust feature of high frequency stimulation, independent of the synapse type. As such, the TM equations were used to define alternative DBS pulsing strategies that maximized synaptic suppression with the minimum number of stimuli.
CONCLUSIONS: Synaptic suppression provides a biophysical explanation for the intermittent, but still time-locked, post-synaptic firing characteristics commonly seen in DBS experimental recordings. Therefore, network models attempting to analyze or predict the effects of DBS on neural activity patterns should integrate synaptic suppression into their simulations.
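The TM synapse dynamics at the core of this study can be sketched for a depressing synapse driven by a periodic DBS-like pulse train; the parameter values (`U`, `TAU_REC`, `TAU_FAC`) are illustrative depressing-synapse numbers, not the paper's fitted values:

```python
import math

# Tsodyks-Markram synapse driven by a periodic pulse train (sketch).
# u = utilization (facilitation variable), x = available resources.
U, TAU_REC, TAU_FAC = 0.5, 800.0, 20.0   # ms, illustrative values

def mean_release(freq_hz, n_pulses=100):
    isi = 1000.0 / freq_hz
    u, x, releases = 0.0, 1.0, []
    for _ in range(n_pulses):
        u += U * (1.0 - u)          # utilization jump on spike arrival
        r = u * x                   # fraction of resources released -> EPSC size
        releases.append(r)
        x -= r
        # recovery / decay during the inter-spike interval
        x = 1.0 - (1.0 - x) * math.exp(-isi / TAU_REC)
        u = u * math.exp(-isi / TAU_FAC)
    return sum(releases[-10:]) / 10.0   # steady-state EPSC amplitude

low = mean_release(10.0)    # 10 Hz: resources largely recover between pulses
high = mean_release(130.0)  # ~DBS rate: strong steady-state suppression
```

At the DBS-like rate the steady-state release collapses relative to low-frequency drive, which is the synaptic-suppression effect the paper builds on.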
Affiliation(s)
- AmirAli Farokhniaee
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Cameron C McIntyre
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
97
Deger M, Seeholzer A, Gerstner W. Multicontact Co-operativity in Spike-Timing-Dependent Structural Plasticity Stabilizes Networks. Cereb Cortex 2019; 28:1396-1415. [PMID: 29300903] [PMCID: PMC6041941] [DOI: 10.1093/cercor/bhx339]
Abstract
Excitatory synaptic connections in the adult neocortex consist of multiple synaptic contacts, almost exclusively formed on dendritic spines. Changes of spine volume, a correlate of synaptic strength, can be tracked in vivo for weeks. Here, we present a combined model of structural and spike-timing–dependent plasticity that explains the multicontact configuration of synapses in adult neocortical networks under steady-state and lesion-induced conditions. Our plasticity rule with Hebbian and anti-Hebbian terms stabilizes both the postsynaptic firing rate and correlations between the pre- and postsynaptic activity at an active synaptic contact. Contacts appear spontaneously at a low rate and disappear if their strength approaches zero. Many presynaptic neurons compete to make strong synaptic connections onto a postsynaptic neuron, whereas the synaptic contacts of a given presynaptic neuron co-operate via postsynaptic firing. We find that co-operation of multiple synaptic contacts is crucial for stable, long-term synaptic memories. In simulations of a simplified network model of barrel cortex, our plasticity rule reproduces whisker-trimming–induced rewiring of thalamocortical and recurrent synaptic connectivity on realistic time scales.
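The structural-plasticity component can be caricatured in a toy model: a connection consists of several synaptic contacts, contacts appear spontaneously at a low rate, and contacts are pruned once their weight approaches zero. The drift term below is a stand-in for the combined STDP rule, and all rates are illustrative assumptions:

```python
import random

# Toy multi-contact structural plasticity: spontaneous creation, pruning
# near zero weight. Rates and drifts are illustrative, not the paper's rule.
random.seed(2)
CREATE_P, PRUNE_BELOW, W_INIT = 0.05, 0.01, 0.1

def step_contacts(contacts, drift):
    # weights drift (stand-in for the STDP term), weak contacts are removed
    contacts = [min(1.0, w + drift) for w in contacts if w + drift > PRUNE_BELOW]
    if random.random() < CREATE_P:       # spontaneous contact creation
        contacts.append(W_INIT)
    return contacts

potentiating, depressing = [W_INIT], [W_INIT]
for _ in range(500):
    potentiating = step_contacts(potentiating, +0.01)
    depressing = step_contacts(depressing, -0.01)
n_strong = len(potentiating)
n_weak = len(depressing)
```

A connection whose contacts co-operate (net potentiation) accumulates and keeps many contacts, while a depressed connection loses contacts as fast as they appear.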
Affiliation(s)
- Moritz Deger
- School of Computer and Communication Sciences and School of Life Sciences, Brain Mind Institute, École Polytechnique Fédérale de Lausanne, 1015 Lausanne EPFL, Switzerland; Institute for Zoology, Faculty of Mathematics and Natural Sciences, University of Cologne, 50674 Cologne, Germany
- Alexander Seeholzer
- School of Computer and Communication Sciences and School of Life Sciences, Brain Mind Institute, École Polytechnique Fédérale de Lausanne, 1015 Lausanne EPFL, Switzerland
- Wulfram Gerstner
- School of Computer and Communication Sciences and School of Life Sciences, Brain Mind Institute, École Polytechnique Fédérale de Lausanne, 1015 Lausanne EPFL, Switzerland
98
Abstract
Parallel recordings of motor cortex show weak pairwise correlations on average but a wide dispersion across cells. This observation runs counter to the prevailing notion that optimal information processing requires networks to operate at a critical point, entailing strong correlations. We here reconcile this apparent contradiction by showing that the observed structure of correlations is consistent with network models that operate close to a critical point of a different nature than previously considered: dynamics that is dominated by inhibition yet nearly unstable due to heterogeneous connectivity. Our findings provide a different perspective on criticality in neural systems: network topology and heterogeneity endow the brain with two complementary substrates for critical dynamics of largely different complexities.
Cortical networks that have been found to operate close to a critical point exhibit joint activations of large numbers of neurons. However, in motor cortex of the awake macaque monkey, we observe very different dynamics: massively parallel recordings of 155 single-neuron spiking activities show weak fluctuations on the population level. This a priori suggests that motor cortex operates in a noncritical regime, which in models, has been found to be suboptimal for computational performance. However, here, we show the opposite: The large dispersion of correlations across neurons is the signature of a second critical regime. This regime exhibits a rich dynamical repertoire hidden from macroscopic brain signals but essential for high performance in such concepts as reservoir computing. An analytical link between the eigenvalue spectrum of the dynamics, the heterogeneity of connectivity, and the dispersion of correlations allows us to assess the closeness to the critical point.
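The spectral picture behind this second kind of criticality can be sketched numerically: for a random connectivity matrix with heterogeneous Gaussian weights, the eigenvalues fill a disk whose radius grows with the weight heterogeneity, and linear network dynamics become nearly unstable as the radius approaches 1. Sizes and gains below are illustrative:

```python
import numpy as np

# Spectral radius of random connectivity grows with weight heterogeneity.
rng = np.random.default_rng(0)
N = 300

def spectral_radius(g):
    # i.i.d. weights with std g/sqrt(N): circular law gives radius ~ g
    W = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
    return np.abs(np.linalg.eigvals(W)).max()

tame = spectral_radius(0.3)            # well inside the stable regime
near_critical = spectral_radius(0.95)  # close to the instability at radius 1
```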
99
Pedroni BU, Joshi S, Deiss SR, Sheik S, Detorakis G, Paul S, Augustine C, Neftci EO, Cauwenberghs G. Memory-Efficient Synaptic Connectivity for Spike-Timing-Dependent Plasticity. Front Neurosci 2019; 13:357. [PMID: 31110470] [PMCID: PMC6499189] [DOI: 10.3389/fnins.2019.00357]
Abstract
Spike-Timing-Dependent Plasticity (STDP) is a bio-inspired local incremental weight update rule commonly used for online learning in spike-based neuromorphic systems. In STDP, the intensity of long-term potentiation and depression in synaptic efficacy (weight) between neurons is expressed as a function of the relative timing between pre- and post-synaptic action potentials (spikes), while the polarity of change is dependent on the order (causality) of the spikes. Online STDP weight updates for causal and acausal relative spike times are activated at the onset of post- and pre-synaptic spike events, respectively, implying access to synaptic connectivity both in forward (pre-to-post) and reverse (post-to-pre) directions. Here we study the impact of different arrangements of synaptic connectivity tables on weight storage and STDP updates for large-scale neuromorphic systems. We analyze the memory efficiency for varying degrees of density in synaptic connectivity, ranging from crossbar arrays for full connectivity to pointer-based lookup for sparse connectivity. The study includes comparison of storage and access costs and efficiencies for each memory arrangement, along with a trade-off analysis of the benefits of each data structure depending on application requirements and budget. Finally, we present an alternative formulation of STDP via a delayed causal update mechanism that permits efficient weight access, requiring no more than forward connectivity lookup. We show functional equivalence of the delayed causal updates to the original STDP formulation, with substantial savings in storage and access costs and efficiencies for networks with sparse synaptic connectivity as typically encountered in large-scale models in computational neuroscience.
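The storage trade-off the paper analyzes can be sketched with a back-of-the-envelope cost model: a dense crossbar stores one word per *possible* synapse regardless of sparsity, while pointer-based lookup stores only the *existing* synapses plus row pointers. The word counts and pointer layout below are simplifying assumptions for illustration:

```python
# Back-of-the-envelope storage costs, in abstract "words" (illustrative).
def crossbar_words(n_pre, n_post):
    return n_pre * n_post                 # one word per possible synapse

def pointer_words(n_pre, n_post, density):
    n_syn = int(n_pre * n_post * density)
    return n_pre + 2 * n_syn              # row pointers + (target, weight) pairs

dense = (crossbar_words(1000, 1000), pointer_words(1000, 1000, 0.9))
sparse = (crossbar_words(1000, 1000), pointer_words(1000, 1000, 0.01))
```

At high density the crossbar wins; at the sparse connectivities typical of large-scale models the pointer layout is far cheaper, which is why the delayed causal update (needing forward lookup only) pays off there.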
Affiliation(s)
- Bruno U Pedroni
- Integrated Systems Neuroengineering Laboratory, Department of Bioengineering, University of California, San Diego, La Jolla, CA, United States
- Siddharth Joshi
- Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, United States
- Stephen R Deiss
- Integrated Systems Neuroengineering Laboratory, Department of Bioengineering, University of California, San Diego, La Jolla, CA, United States
- Georgios Detorakis
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, United States
- Somnath Paul
- Intel Corporation - Circuit Research Lab, Hillsboro, OR, United States
- Charles Augustine
- Intel Corporation - Circuit Research Lab, Hillsboro, OR, United States
- Emre O Neftci
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, United States
- Gert Cauwenberghs
- Integrated Systems Neuroengineering Laboratory, Department of Bioengineering, University of California, San Diego, La Jolla, CA, United States
100
Kasatkin DV, Klinshov VV, Nekorkin VI. Itinerant chimeras in an adaptive network of pulse-coupled oscillators. Phys Rev E 2019; 99:022203. [PMID: 30934254] [DOI: 10.1103/physreve.99.022203]
Abstract
In a network of pulse-coupled oscillators with adaptive coupling, we discover a dynamical regime which we call an "itinerant chimera." Similarly as in classical chimera states, the network splits into two domains, the coherent and the incoherent. The drastic difference is that the composition of the domains is volatile, i.e., the oscillators demonstrate spontaneous switching between the domains. This process can be seen as traveling of the oscillators from one domain to another or as traveling of the chimera core across the network. We explore the basic features of the itinerant chimeras, such as the mean and the variance of the core size, and the oscillators lifetime within the core. We also study the scaling behavior of the system and show that the observed regime is not a finite-size effect but a key feature of the collective dynamics which persists even in large networks.
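The ingredients of such a model (pulse-coupled phase oscillators whose coupling adapts slowly) can be sketched as follows. This is a minimal toy, not the paper's model: the network size, kick strength, and adaptation rule are illustrative assumptions, and no chimera detection is attempted:

```python
import random

# Pulse-coupled phase oscillators with slowly adapting coupling (toy sketch).
random.seed(3)
N, OMEGA, EPS, ALPHA, DT = 20, 1.0, 0.02, 0.001, 0.01

phase = [random.random() for _ in range(N)]
w = [[0.1] * N for _ in range(N)]       # adaptive coupling weights

for _ in range(2000):
    fired = []
    for i in range(N):
        phase[i] += DT * OMEGA
        if phase[i] >= 1.0:             # threshold crossing -> emit a pulse
            phase[i] -= 1.0
            fired.append(i)
    for j in fired:                     # pulse from j kicks every other unit
        for i in range(N):
            if i != j:
                phase[i] = min(0.999999, phase[i] + EPS * w[i][j])
                # slow adaptation: coupling from active units strengthens
                w[i][j] = min(1.0, w[i][j] + ALPHA)

all_in_range = all(0.0 <= p < 1.0 for p in phase)
adapted = max(max(row) for row in w)
```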
Affiliation(s)
- Dmitry V Kasatkin
- Institute of Applied Physics of the Russian Academy of Sciences, 46 Ul'yanov Street, 603950, Nizhny Novgorod, Russia
- Vladimir V Klinshov
- Institute of Applied Physics of the Russian Academy of Sciences, 46 Ul'yanov Street, 603950, Nizhny Novgorod, Russia
- Vladimir I Nekorkin
- Institute of Applied Physics of the Russian Academy of Sciences, 46 Ul'yanov Street, 603950, Nizhny Novgorod, Russia