51. A synaptic learning rule for exploiting nonlinear dendritic computation. Neuron 2021; 109:4001-4017.e10. PMID: 34715026; PMCID: PMC8691952; DOI: 10.1016/j.neuron.2021.09.044.
Abstract
Information processing in the brain depends on the integration of synaptic input distributed throughout neuronal dendrites. Dendritic integration is a hierarchical process, proposed to be equivalent to integration by a multilayer network, potentially endowing single neurons with substantial computational power. However, whether neurons can learn to harness dendritic properties to realize this potential is unknown. Here, we develop a learning rule from dendritic cable theory and use it to investigate the processing capacity of a detailed pyramidal neuron model. We show that computations using spatial or temporal features of synaptic input patterns can be learned, and even synergistically combined, to solve a canonical nonlinear feature-binding problem. The voltage dependence of the learning rule drives coactive synapses to engage dendritic nonlinearities, whereas spike-timing dependence shapes the time course of subthreshold potentials. Dendritic input-output relationships can therefore be flexibly tuned through synaptic plasticity, allowing optimal implementation of nonlinear functions by single neurons.
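The interplay described above, in which coactive synapses are driven to engage dendritic nonlinearities, can be caricatured with a toy voltage-gated Hebbian update (a minimal sketch with assumed gating and parameters, not the rule derived in the paper):

```python
import numpy as np

def voltage_gated_update(w, pre, v_dend, eta=0.01, v_theta=-55.0):
    # Toy voltage-dependent Hebbian update: synapses whose presynaptic
    # activity coincides with a depolarized dendritic branch are
    # potentiated; active synapses on hyperpolarized branches are
    # weakly depressed; inactive synapses are unchanged.
    gate = np.where(v_dend > v_theta, 1.0, -0.5)
    return w + eta * pre * gate

w = np.full(4, 0.5)
pre = np.array([1.0, 1.0, 0.0, 1.0])             # presynaptic activity
v_dend = np.array([-50.0, -50.0, -50.0, -70.0])  # local branch voltage (mV)
w_new = voltage_gated_update(w, pre, v_dend)     # coactive, depolarized synapses grow
```

Under this caricature, coactive inputs on a depolarized branch strengthen together, which is the qualitative route by which clustered synapses come to recruit a dendritic nonlinearity.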
52. Visual exposure enhances stimulus encoding and persistence in primary cortex. Proc Natl Acad Sci U S A 2021; 118:2105276118. PMID: 34663727; PMCID: PMC8639370; DOI: 10.1073/pnas.2105276118.
Abstract
Experience shapes sensory responses already at the earliest stages of cortical processing. We provide evidence that, in the primary visual cortex of anesthetized cats, brief repetitive exposure to a set of simple, abstract stimuli expands the range and decreases the variability of neuronal responses that persist after stimulus offset. These refinements increase the stimulus-specific clustering of neuronal population responses and result in a more efficient encoding of both stimulus identity and stimulus structure, thus potentially benefiting simple readouts in higher cortical areas. Similar results can be achieved via local plasticity mechanisms in recurrent networks, through self-organized refinements of internal dynamics that do not require changes in firing amplitudes.

The brain adapts to the sensory environment. For example, simple sensory exposure can modify the response properties of early sensory neurons. How these changes affect the overall encoding and maintenance of stimulus information across neuronal populations remains unclear. We perform parallel recordings in the primary visual cortex of anesthetized cats and find that brief, repetitive exposure to structured visual stimuli enhances stimulus encoding by decreasing the selectivity and increasing the range of the neuronal responses that persist after stimulus presentation. Low-dimensional projection methods and simple classifiers demonstrate that visual exposure increases the segregation of persistent neuronal population responses into stimulus-specific clusters. These observed refinements preserve the representational details required for stimulus reconstruction and are detectable in postexposure spontaneous activity. Assuming response facilitation and recurrent network interactions as the core mechanisms underlying stimulus persistence, we show that the exposure-driven segregation of stimulus responses can arise through strictly local plasticity mechanisms, even in the absence of firing rate changes. Our findings provide evidence for an automatic, unguided optimization process that enhances the encoding power of neuronal populations in early visual cortex, thus potentially benefiting simple readouts at higher stages of visual processing.
53. Sjöström PJ. Grand Challenge at the Frontiers of Synaptic Neuroscience. Front Synaptic Neurosci 2021; 13:748937. PMID: 34759809; PMCID: PMC8575031; DOI: 10.3389/fnsyn.2021.748937.
Affiliation(s)
- P. Jesper Sjöström
- Department of Medicine, Department of Neurology and Neurosurgery, Centre for Research in Neuroscience, The Research Institute of the McGill University Health Centre, Montreal General Hospital, Montreal, QC, Canada
54. Doborjeh M, Doborjeh Z, Merkin A, Bahrami H, Sumich A, Krishnamurthi R, Medvedev ON, Crook-Rumsey M, Morgan C, Kirk I, Sachdev PS, Brodaty H, Kang K, Wen W, Feigin V, Kasabov N. Personalised predictive modelling with brain-inspired spiking neural networks of longitudinal MRI neuroimaging data and the case study of dementia. Neural Netw 2021; 144:522-539. PMID: 34619582; DOI: 10.1016/j.neunet.2021.09.013.
Abstract
BACKGROUND: Longitudinal neuroimaging provides spatiotemporal brain data (STBD) measurements that can be used to understand dynamic changes in brain structure and/or function underpinning cognitive activities. Making sense of such highly interactive information is challenging, given that the features manifest intricate temporal and causal relations between the spatially distributed neural sources in the brain.
METHODS: The current paper argues for the advancement of deep learning algorithms in brain-inspired spiking neural networks (SNN), capable of modelling structural data across time (longitudinal measurement) and space (anatomical components). The paper proposes a methodology and a computational architecture based on SNN for building personalised predictive models from longitudinal brain data to accurately detect, understand, and predict the dynamics of an individual's functional brain state. The methodology includes finding clusters of data similar to each individual, data interpolation, deep learning in a 3-dimensional brain-template-structured SNN model, classification and prediction of individual outcomes, visualisation of structural brain changes related to the predicted outcomes, interpretation of results, and individual and group predictive marker discovery.
RESULTS: To demonstrate the functionality of the proposed methodology, the paper presents experimental results on a longitudinal magnetic resonance imaging (MRI) dataset derived from 175 older adults of the internationally recognised community-based cohort of the Sydney Memory and Ageing Study (MAS), spanning 6 years of follow-up.
SIGNIFICANCE: The models were able to classify, and to predict 2 years ahead, cognitive decline such as mild cognitive impairment (MCI) and dementia, with 95% and 91% accuracy, respectively. The proposed methodology also offers a 3-dimensional visualisation of the MRI models reflecting the dynamic patterns of regional changes in white matter hyperintensity (WMH) and brain volume over 6 years.
CONCLUSION: The method is efficient for personalised predictive modelling on a wide range of longitudinal neuroimaging data, including demographic, genetic, and clinical data. As a case study, it yielded predictive markers for MCI and dementia as dynamic brain patterns derived from MRI data.
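The first step of the methodology, finding clusters of data similar to each individual, can be sketched as nearest-neighbour selection over baseline features (the feature space, distance metric, and k here are illustrative assumptions, not the authors' exact procedure):

```python
import numpy as np

def personalised_cluster(subject, cohort, k=3):
    # Pick the k cohort members closest to the target subject in
    # feature space; a personalised predictive model is then trained
    # on that neighbourhood only. Euclidean distance is an
    # illustrative choice.
    d = np.linalg.norm(cohort - subject, axis=1)
    return np.argsort(d)[:k]

cohort = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0], [0.5, 0.2]])
neighbours = personalised_cluster(np.array([0.1, 0.1]), cohort, k=2)
```

Training on the selected neighbourhood rather than the whole cohort is what makes the resulting model "personalised" in this pipeline.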
Affiliation(s)
- Maryam Doborjeh
- Computer Science and Software Engineering Department, School of Engineering, Computer and Mathematical Sciences, Auckland University of Technology, New Zealand.
- Zohreh Doborjeh
- Department of Audiology, School of Population Health, Faculty of Medical and Health Sciences, The University of Auckland, New Zealand
- Alexander Merkin
- The National Institute for Stroke and Applied Neurosciences, School of Clinical Sciences, Auckland University of Technology, New Zealand
- Helena Bahrami
- School of Engineering, Computer and Mathematical Sciences, Auckland University of Technology, New Zealand
- Alexander Sumich
- NTU Psychology, Nottingham Trent University, Nottingham, United Kingdom
- Rita Krishnamurthi
- The National Institute for Stroke and Applied Neurosciences, School of Clinical Sciences, Auckland University of Technology, New Zealand
- Oleg N Medvedev
- School of Psychology, University of Waikato, Hamilton, New Zealand
- Mark Crook-Rumsey
- NTU Psychology, Nottingham Trent University, Nottingham, United Kingdom; School of Engineering, Computer and Mathematical Sciences, Auckland University of Technology, New Zealand
- Catherine Morgan
- School of Psychology and Centre for Brain Research, University of Auckland, New Zealand; Brain Research New Zealand - Rangahau Roro Aotearoa, Centre of Research Excellence, New Zealand
- Ian Kirk
- School of Psychology and Centre for Brain Research, University of Auckland, New Zealand; Brain Research New Zealand - Rangahau Roro Aotearoa, Centre of Research Excellence, New Zealand
- Perminder S Sachdev
- Centre for Healthy Brain Ageing (CHeBA), School of Psychiatry, University of New South Wales, Sydney, Australia; Neuropsychiatric Institute, the Prince of Wales Hospital, Sydney, Australia
- Henry Brodaty
- Centre for Healthy Brain Ageing (CHeBA), School of Psychiatry, University of New South Wales, Sydney, Australia
- Kristan Kang
- Centre for Healthy Brain Ageing (CHeBA), School of Psychiatry, University of New South Wales, Sydney, Australia
- Wei Wen
- Centre for Healthy Brain Ageing (CHeBA), School of Psychiatry, University of New South Wales, Sydney, Australia; Neuropsychiatric Institute, the Prince of Wales Hospital, Sydney, Australia
- Valery Feigin
- The National Institute for Stroke and Applied Neurosciences, School of Clinical Sciences, Auckland University of Technology, New Zealand; Research Center of Neurology, Moscow, Russia
- Nikola Kasabov
- School of Engineering, Computer and Mathematical Sciences, Auckland University of Technology, New Zealand; George Moore Chair, Ulster University, Londonderry, United Kingdom
55. Manos T, Diaz-Pier S, Tass PA. Long-Term Desynchronization by Coordinated Reset Stimulation in a Neural Network Model With Synaptic and Structural Plasticity. Front Physiol 2021; 12:716556. PMID: 34566681; PMCID: PMC8455881; DOI: 10.3389/fphys.2021.716556.
Abstract
Several brain disorders are characterized by abnormal neuronal synchronization. To specifically counteract abnormal neuronal synchrony and, hence, related symptoms, coordinated reset (CR) stimulation was developed computationally. In principle, successive epochs of synchronizing and desynchronizing stimulation may reversibly move neural networks with plastic synapses back and forth between stable regimes with synchronized and desynchronized firing. Computationally derived predictions have been verified in preclinical and clinical studies, paving the way for novel therapies. However, to date, computational models have been unable to reproduce the clinically observed increase of the desynchronizing effects of regularly administered CR stimulation interleaved with long stimulation-free epochs. We show that this clinically important phenomenon can be computationally reproduced by taking into account structural plasticity (SP), a mechanism that deletes or generates synapses in order to homeostatically adapt the firing rates of neurons to a set-point-like target firing rate over the course of days to months. If we assume that CR stimulation favorably reduces the target firing rate of SP, the desynchronizing effects of CR stimulation increase after long stimulation-free epochs, in accordance with clinically observed phenomena. Our study highlights the pivotal role of stimulation- and dosing-induced modulation of homeostatic set points in therapeutic processes.
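The structural-plasticity mechanism invoked here, deleting or generating synapses to pull firing rates toward a set point, can be caricatured in a few lines (an assumed discrete form for illustration; the actual model operates on individual synaptic elements over much slower time scales):

```python
import numpy as np

def structural_plasticity_step(n_syn, rates, target, gain=1.0):
    # Homeostatic structural plasticity in caricature: neurons firing
    # below the set-point target grow synaptic elements, neurons
    # firing above it delete them, pulling rates toward the target.
    delta = np.round(gain * (target - rates)).astype(int)
    return np.maximum(n_syn + delta, 0)   # synapse counts cannot go negative

n_syn = np.array([10, 10, 10])            # incoming synapses per neuron
rates = np.array([2.0, 5.0, 9.0])         # current firing rates (Hz)
n_next = structural_plasticity_step(n_syn, rates, target=5.0)
```

Lowering `target`, as the study assumes CR stimulation does, makes neurons shed synapses and thereby weakens the recurrent coupling that sustains pathological synchrony.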
Affiliation(s)
- Thanos Manos
- Institute of Neuroscience and Medicine, Brain and Behaviour (INM-7), Research Centre Jülich, Jülich, Germany; Medical Faculty, Institute of Systems Neuroscience, Heinrich Heine University Düsseldorf, Düsseldorf, Germany; Laboratoire de Physique Théorique et Modélisation, CNRS, UMR 8089, CY Cergy Paris Université, Cergy-Pontoise Cedex, France
- Sandra Diaz-Pier
- Simulation & Data Lab Neuroscience, Institute for Advanced Simulation, Jülich Supercomputing Centre (JSC), Forschungszentrum Jülich GmbH, JARA, Jülich, Germany
- Peter A Tass
- Department of Neurosurgery, Stanford University School of Medicine, Stanford, CA, United States
56. Teichmann M, Larisch R, Hamker FH. Performance of biologically grounded models of the early visual system on standard object recognition tasks. Neural Netw 2021; 144:210-228. PMID: 34507042; DOI: 10.1016/j.neunet.2021.08.009.
Abstract
Computational neuroscience models of vision and neural network models for object recognition are often framed by different research agendas. Computational neuroscience mainly aims at replicating experimental data, while (artificial) neural networks target high performance on classification tasks. However, we propose that models of vision should also be validated on object recognition tasks: at some point, the mechanisms of realistic neuro-computational models of the visual cortex have to prove themselves in object recognition as well. To foster this idea, we report the recognition accuracy of two different neuro-computational models of the visual cortex on several object recognition datasets. The models were trained using unsupervised Hebbian learning rules on natural scene inputs so that receptive fields comparable to their biological counterparts emerge. We assume that the emerged receptive fields form a general codebook of features, which should be applicable to a variety of visual scenes. We report performance on datasets with different levels of difficulty, ranging from the simple MNIST to the more complex CIFAR-10 and ETH-80. We found that both networks show good results on simple digit recognition, comparable with previously published biologically plausible models. We also observed that the deeper-layer neurons provide a better recognition codebook for naturalistic datasets. Because recognition results for biologically grounded models are not yet available for most datasets, our results provide a broad basis of performance values against which methodologically similar models can be compared.
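The unsupervised Hebbian training such models rely on can be illustrated with Oja's rule, a standard normalized Hebbian update of the kind used to learn V1-like receptive fields from natural input (a generic textbook example, not the specific rules used by the two models here):

```python
import numpy as np

def oja_step(w, x, eta=0.1):
    # One step of Oja's rule: Hebbian growth (eta * y * x) with a
    # normalizing decay (-eta * y**2 * w) that keeps |w| bounded.
    y = w @ x
    return w + eta * y * (x - y * w)

w = np.array([0.5, 0.5])
x = np.array([1.0, 0.0])     # toy input stream with one dominant direction
for _ in range(100):
    w = oja_step(w, x)
# w aligns with the dominant input direction -- a learned "feature"
```

Repeated over patches of natural scenes, updates of this family yield oriented, localized filters; the learned weight vectors then serve as the fixed feature codebook evaluated on the recognition datasets.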
Affiliation(s)
- Michael Teichmann
- Chemnitz University of Technology, Str. der Nationen 62, 09111 Chemnitz, Germany
- René Larisch
- Chemnitz University of Technology, Str. der Nationen 62, 09111 Chemnitz, Germany
- Fred H Hamker
- Chemnitz University of Technology, Str. der Nationen 62, 09111 Chemnitz, Germany
57. Madadi Asl M, Ramezani Akbarabadi S. Voltage-dependent plasticity of spin-polarized conductance in phenyl-based single-molecule magnetic tunnel junctions. PLoS One 2021; 16:e0257228. PMID: 34506579; PMCID: PMC8432808; DOI: 10.1371/journal.pone.0257228.
Abstract
Synaptic strengths between neurons in brain networks are highly adaptive due to synaptic plasticity. Spike-timing-dependent plasticity (STDP) is a form of synaptic plasticity induced by temporal correlations between the firing activity of neurons. The development of experimental techniques in recent years has enabled the realization of brain-inspired neuromorphic devices. In particular, magnetic tunnel junctions (MTJs) provide a suitable means for the implementation of learning processes in molecular junctions. Here, we first considered a two-neuron motif subjected to STDP. By employing theoretical analysis and computer simulations we showed that the dynamics and emergent structure of the motif can be predicted by introducing an effective two-neuron synaptic conductance. Then, we considered a phenyl-based single-molecule MTJ connected to two ferromagnetic (FM) cobalt electrodes and investigated its electrical properties using the non-equilibrium Green's function (NEGF) formalism. Similar to the two-neuron motif, we introduced an effective spin-polarized conductance in the MTJ. Depending on the polarity, frequency, and strength of the bias voltage applied to the MTJ, the system can learn input signals by adaptive changes of the effective conductance. Interestingly, this voltage-dependent plasticity is an intrinsic property of the MTJ, whose behavior is reminiscent of the classical temporally asymmetric STDP. Furthermore, the shape of voltage-dependent plasticity in the MTJ is determined by the molecule-electrode coupling strength or the length of the molecule. Our results may be relevant for the development of single-molecule devices that capture the adaptive properties of synapses in the brain.
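The "classical temporally asymmetric STDP" that the voltage-dependent plasticity is said to resemble has the familiar two-sided exponential form (textbook shape; the amplitudes and time constant below are arbitrary):

```python
import math

def stdp_window(dt, a_plus=1.0, a_minus=0.5, tau=20.0):
    # Classical temporally asymmetric STDP kernel: potentiation when
    # the presynaptic spike precedes the postsynaptic spike (dt > 0),
    # depression when the order is reversed (dt < 0). dt and tau in ms.
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

dw_causal = stdp_window(10.0)    # pre before post -> weight increases
dw_acausal = stdp_window(-10.0)  # post before pre -> weight decreases
```

In the MTJ analogy, the sign and magnitude of the applied bias voltage play the role of the spike-timing difference `dt` in shaping the conductance change.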
Affiliation(s)
- Mojtaba Madadi Asl
- Department of Physics, Institute for Advanced Studies in Basic Sciences (IASBS), Zanjan, Iran
58. Schubert F, Gros C. Nonlinear Dendritic Coincidence Detection for Supervised Learning. Front Comput Neurosci 2021; 15:718020. PMID: 34421566; PMCID: PMC8372750; DOI: 10.3389/fncom.2021.718020.
Abstract
Cortical pyramidal neurons have a complex dendritic anatomy whose function is an active field of research. In particular, the segregation between the soma and the apical dendritic tree is believed to play an active role in processing feed-forward sensory information and top-down or feedback signals. In this work, we use a simple two-compartment model accounting for the nonlinear interactions between basal and apical input streams and show that standard unsupervised Hebbian learning rules in the basal compartment allow the neuron to align the feed-forward basal input with the top-down target signal received by the apical compartment. We show that this learning process, termed coincidence detection, is robust against strong distractions in the basal input space, and we demonstrate its effectiveness in a linear classification task.
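A minimal sketch of this setup, with apical input gating the basal drive and Hebbian plasticity confined to the basal compartment, might look as follows (assumed multiplicative coupling and no weight normalization, for brevity; not the authors' exact model):

```python
import numpy as np

def two_compartment_output(w, x_basal, apical, gain=1.0):
    # Two-compartment caricature: the apical (top-down) signal
    # multiplicatively boosts the basal drive when both are active --
    # coincidence detection between the two input streams.
    return (w @ x_basal) * (1.0 + gain * apical)

def hebbian_basal_update(w, x_basal, y, eta=0.05):
    # Plain Hebbian plasticity restricted to basal synapses.
    return w + eta * y * x_basal

w = np.array([0.1, 0.1])
for _ in range(50):
    y = two_compartment_output(w, np.array([1.0, 0.0]), apical=1.0)  # target-paired input
    w = hebbian_basal_update(w, np.array([1.0, 0.0]), y)
    y = two_compartment_output(w, np.array([0.0, 1.0]), apical=0.0)  # distractor input
    w = hebbian_basal_update(w, np.array([0.0, 1.0]), y)
# basal weights align with the input that coincides with apical drive
```

Because the apical coincidence amplifies the postsynaptic signal, Hebbian growth is faster for target-paired inputs than for distractors, so the basal weights end up aligned with the top-down target.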
Affiliation(s)
- Fabian Schubert
- Institute for Theoretical Physics, Goethe University Frankfurt am Main, Frankfurt am Main, Germany
- Claudius Gros
- Institute for Theoretical Physics, Goethe University Frankfurt am Main, Frankfurt am Main, Germany
59. Sarazin MXB, Victor J, Medernach D, Naudé J, Delord B. Online Learning and Memory of Neural Trajectory Replays for Prefrontal Persistent and Dynamic Representations in the Irregular Asynchronous State. Front Neural Circuits 2021; 15:648538. PMID: 34305535; PMCID: PMC8298038; DOI: 10.3389/fncir.2021.648538.
Abstract
In the prefrontal cortex (PFC), higher-order cognitive functions and adaptive flexible behaviors rely on continuous dynamical sequences of spiking activity that constitute neural trajectories in the state space of activity. Neural trajectories subserve diverse representations, from explicit mappings in physical spaces to generalized mappings in task space, up to complex abstract transformations such as working memory, decision-making, and behavioral planning. Computational models have assessed learning and replay of neural trajectories separately, often using unrealistic learning rules or decoupling the simulations used for learning from those used for replay. Hence, it remains an open question how neural trajectories are learned, memorized, and replayed online, with permanently acting biological plasticity rules. The asynchronous irregular regime characterizing cortical dynamics in awake conditions is a major source of disorder that may jeopardize plasticity and the replay of locally ordered activity. Here, we show that a recurrent model of local PFC circuitry endowed with realistic synaptic spike-timing-dependent plasticity and scaling processes can learn, memorize, and replay large-size neural trajectories online under asynchronous irregular dynamics, at a regular or fast (sped-up) timescale. Presented trajectories are quickly learned (within seconds) as synaptic engrams in the network, and the model is able to chunk overlapping trajectories presented separately. These trajectory engrams last long term (dozens of hours), and trajectory replays can be triggered over an hour. In turn, we show the conditions under which trajectory engrams and replays preserve asynchronous irregular dynamics in the network. Functionally, spiking activity during trajectory replays at the regular timescale accounts for dynamical coding with temporal tuning in individual neurons, persistent activity at the population level, and large levels of variability consistent with observed cognition-related PFC dynamics. Together, these results offer a consistent theoretical framework accounting for how neural trajectories can be learned, memorized, and replayed in PFC network circuits to subserve flexible dynamic representations and adaptive behaviors.
Affiliation(s)
- Matthieu X B Sarazin
- Institut des Systèmes Intelligents et de Robotique, CNRS, Inserm, Sorbonne Université, Paris, France
- Julie Victor
- CEA Paris-Saclay, CNRS, NeuroSpin, Saclay, France
- David Medernach
- Institut des Systèmes Intelligents et de Robotique, CNRS, Inserm, Sorbonne Université, Paris, France
- Jérémie Naudé
- Neuroscience Paris Seine - Institut de biologie Paris Seine, CNRS, Inserm, Sorbonne Université, Paris, France
- Bruno Delord
- Institut des Systèmes Intelligents et de Robotique, CNRS, Inserm, Sorbonne Université, Paris, France
60. Kirchner JH, Gjorgjieva J. Emergence of local and global synaptic organization on cortical dendrites. Nat Commun 2021; 12:4005. PMID: 34183661; PMCID: PMC8239006; DOI: 10.1038/s41467-021-23557-3.
Abstract
Synaptic inputs on cortical dendrites are organized with remarkable subcellular precision at the micron level. This organization emerges during early postnatal development through patterned spontaneous activity and manifests both locally where nearby synapses are significantly correlated, and globally with distance to the soma. We propose a biophysically motivated synaptic plasticity model to dissect the mechanistic origins of this organization during development and elucidate synaptic clustering of different stimulus features in the adult. Our model captures local clustering of orientation in ferret and receptive field overlap in mouse visual cortex based on the receptive field diameter and the cortical magnification of visual space. Including action potential back-propagation explains branch clustering heterogeneity in the ferret and produces a global retinotopy gradient from soma to dendrite in the mouse. Therefore, by combining activity-dependent synaptic competition and species-specific receptive fields, our framework explains different aspects of synaptic organization regarding stimulus features and spatial scales.
Affiliation(s)
- Jan H. Kirchner
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany; School of Life Sciences, Technical University of Munich, Freising, Germany
- Julijana Gjorgjieva
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany; School of Life Sciences, Technical University of Munich, Freising, Germany
61. Aljadeff J, Gillett M, Pereira Obilinovic U, Brunel N. From synapse to network: models of information storage and retrieval in neural circuits. Curr Opin Neurobiol 2021; 70:24-33. PMID: 34175521; DOI: 10.1016/j.conb.2021.05.005.
Abstract
The mechanisms of information storage and retrieval in brain circuits are still the subject of debate. It is widely believed that information is stored at least in part through changes in synaptic connectivity in networks that encode this information and that these changes lead in turn to modifications of network dynamics, such that the stored information can be retrieved at a later time. Here, we review recent progress in deriving synaptic plasticity rules from experimental data and in understanding how plasticity rules affect the dynamics of recurrent networks. We show that the dynamics generated by such networks exhibit a large degree of diversity, depending on parameters, similar to experimental observations in vivo during delayed response tasks.
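The canonical picture reviewed here — storage through Hebbian changes in recurrent connectivity, retrieval through network dynamics at a later time — is captured by the textbook attractor-network example:

```python
import numpy as np

def store(patterns):
    # Hebbian storage: the weight matrix is the (zero-diagonal) sum of
    # outer products of the stored +/-1 patterns.
    p = np.array(patterns, dtype=float)
    w = p.T @ p / p.shape[1]
    np.fill_diagonal(w, 0.0)
    return w

def retrieve(w, cue, steps=10):
    # Retrieval: iterate the sign dynamics from a noisy cue until the
    # network settles into the nearest stored attractor.
    s = np.array(cue, dtype=float)
    for _ in range(steps):
        s = np.sign(w @ s)
        s[s == 0] = 1.0
    return s

pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
w = store([pattern])
cue = pattern.copy()
cue[0] = -cue[0]            # corrupt one bit of the cue
recalled = retrieve(w, cue)
```

The review's subject is precisely how far this idealized picture survives when the Hebbian outer-product rule is replaced by plasticity rules derived from data, and when the retrieval dynamics are those of realistic recurrent networks.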
Affiliation(s)
- Johnatan Aljadeff
- Neurobiology Section, Division of Biological Sciences, UC San Diego, USA
- Nicolas Brunel
- Department of Neurobiology, Duke University, USA; Department of Physics, Duke University, USA.
62. Gozel O, Gerstner W. A functional model of adult dentate gyrus neurogenesis. eLife 2021; 10:66463. PMID: 34137370; PMCID: PMC8260225; DOI: 10.7554/elife.66463.
Abstract
In adult dentate gyrus neurogenesis, the link between maturation of newborn neurons and their function, such as behavioral pattern separation, has remained puzzling. By analyzing a theoretical model, we show that the switch from excitation to inhibition of the GABAergic input onto maturing newborn cells is crucial for their proper functional integration. When the GABAergic input is excitatory, cooperativity drives the growth of synapses such that newborn cells become sensitive to stimuli similar to those that activate mature cells. When GABAergic input switches to inhibitory, competition pushes the configuration of synapses onto newborn cells toward stimuli that are different from previously stored ones. This enables the maturing newborn cells to code for concepts that are novel, yet similar to familiar ones. Our theory of newborn cell maturation explains both how adult-born dentate granule cells integrate into the preexisting network and why they promote separation of similar but not distinct patterns.
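The cooperation-to-competition switch can be caricatured in a single update step (an illustrative reading of the mechanism, with assumed linear dynamics and clipped weights, not the paper's actual model):

```python
import numpy as np

def newborn_update(w, stimulus, mature_drive, gaba_excitatory, eta=0.1):
    # Cartoon of the maturation switch: while GABAergic input is
    # excitatory, the mature cells' drive adds to the stimulus
    # (cooperation pulls newborn synapses toward familiar patterns);
    # once it switches to inhibitory, that drive subtracts
    # (competition pushes newborn synapses toward novel patterns).
    sign = 1.0 if gaba_excitatory else -1.0
    return np.clip(w + eta * (stimulus + sign * mature_drive), 0.0, 1.0)

w0 = np.array([0.2, 0.2])
stimulus = np.array([0.5, 0.5])
mature = np.array([0.5, 0.0])    # mature cells respond to feature 0 only
w_coop = newborn_update(w0, stimulus, mature, gaba_excitatory=True)
w_comp = newborn_update(w0, stimulus, mature, gaba_excitatory=False)
```

In the cooperative (excitatory-GABA) phase the newborn cell's weights grow fastest on features already coded by mature cells; in the competitive (inhibitory-GABA) phase that growth is suppressed, leaving the cell free to specialize on the remaining, novel features.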
Affiliation(s)
- Olivia Gozel
- School of Life Sciences and School of Computer and Communication Sciences, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland; Departments of Neurobiology and Statistics, University of Chicago, Chicago, United States; Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, United States
- Wulfram Gerstner
- School of Life Sciences and School of Computer and Communication Sciences, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
63. Romaro C, Najman FA, Lytton WW, Roque AC, Dura-Bernal S. NetPyNE Implementation and Scaling of the Potjans-Diesmann Cortical Microcircuit Model. Neural Comput 2021; 33:1993-2032. PMID: 34411272; PMCID: PMC8382011; DOI: 10.1162/neco_a_01400.
Abstract
The Potjans-Diesmann cortical microcircuit model is a widely used model originally implemented in NEST. Here, we reimplemented the model using NetPyNE, a high-level Python interface to the NEURON simulator, and reproduced the findings of the original publication. We also implemented a method for scaling the network size that preserves first- and second-order statistics, building on existing work on network theory. Our new implementation enabled the use of more detailed neuron models with multicompartmental morphologies and multiple biophysically realistic ion channels. This opens the model to new research, including the study of dendritic processing, the influence of individual channel parameters, the relation to local field potentials, and other multiscale interactions. The scaling method we used provides flexibility to increase or decrease the network size as needed when running these CPU-intensive detailed simulations. Finally, NetPyNE facilitates modifying or extending the model using its declarative language; optimizing model parameters; running efficient, large-scale parallelized simulations; and analyzing the model through built-in methods, including local field potential calculation and information flow measures.
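A common recipe for the kind of statistics-preserving downscaling described — reduce in-degrees, rescale weights to preserve input variance, and add a DC drive to restore the mean input — can be sketched as follows (this follows standard network-theory reasoning and is an assumption here, not necessarily the paper's exact formulas):

```python
import numpy as np

def scale_network(k_in, w, rate, alpha):
    # Downscale in-degree k_in by factor alpha while preserving the
    # first two moments of the synaptic input. Mean input is
    # k_in * w * rate, variance scales as k_in * w**2 * rate, so:
    #   - w / sqrt(alpha) keeps the variance unchanged,
    #   - a DC drive compensates the mean input lost to the reduced
    #     in-degree.
    k_scaled = alpha * k_in
    w_scaled = w / np.sqrt(alpha)
    dc_drive = (1 - np.sqrt(alpha)) * k_in * w * rate
    return k_scaled, w_scaled, dc_drive

k_in, w, rate, alpha = 1000, 0.1, 8.0, 0.25
k_s, w_s, dc = scale_network(k_in, w, rate, alpha)
```

Keeping both mean and variance of the input intact is what allows the scaled-down network to reproduce the firing statistics of the full-size model within sampling error.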
Affiliation(s)
- Cecilia Romaro
- Department of Physics, School of Philosophy, Sciences and Letters of Ribeirão Preto, University of São Paulo, Ribeirão Preto, SP 14049, Brazil
- Fernando Araujo Najman
- Institute of Mathematics and Statistics, University of São Paulo, São Paulo, SP 05508, Brazil
- William W Lytton
- Department of Physiology and Pharmacology, State University of New York Downstate Health Sciences University, New York, NY 11203, U.S.A.
- Antonio C Roque
- Department of Physics, School of Philosophy, Sciences and Letters of Ribeirão Preto, University of São Paulo, Ribeirão Preto, SP 14049, Brazil
- Salvador Dura-Bernal
- Department of Physiology and Pharmacology, State University of New York Downstate Health Sciences University, New York, NY 11203, U.S.A., and Nathan Kline Institute for Psychiatric Research, New York, NY 10962, U.S.A.
64. Stapmanns J, Hahne J, Helias M, Bolten M, Diesmann M, Dahmen D. Event-Based Update of Synapses in Voltage-Based Learning Rules. Front Neuroinform 2021; 15:609147. PMID: 34177505; PMCID: PMC8222618; DOI: 10.3389/fninf.2021.609147.
Abstract
Due to the point-like nature of neuronal spiking, efficient neural network simulators often employ event-based simulation schemes for synapses. Yet many types of synaptic plasticity rely on the membrane potential of the postsynaptic cell as a third factor in addition to pre- and postsynaptic spike times. In some learning rules, membrane potentials influence synaptic weight changes not only at the time points of spike events but also in a continuous manner. In these cases, synapses require information on the full time course of membrane potentials to update their strength, which a priori suggests a continuous update in a time-driven manner. The latter hinders scaling of simulations to realistic cortical network sizes and relevant time scales for learning. Here, we derive two efficient algorithms for archiving postsynaptic membrane potentials, both compatible with modern simulation engines based on event-based synapse updates. We theoretically contrast the two algorithms with a time-driven synapse update scheme to analyze advantages in terms of memory and computations. We further present a reference implementation in the spiking neural network simulator NEST for two prototypical voltage-based plasticity rules: the Clopath rule and the Urbanczik-Senn rule. For both rules, the two event-based algorithms significantly outperform the time-driven scheme. Depending on the amount of data to be stored for plasticity, which differs heavily between the rules, a strong performance increase can be achieved by compressing or sampling the information on membrane potentials. Our results on the computational efficiency of archiving provide guidelines for the design of learning rules intended to be practically usable in large-scale networks.
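The core idea, event-based synapse updates fed by an archive of voltage samples, can be sketched as follows: the neuron appends one voltage sample per time step, and the synapse consumes the archived time course only when a presynaptic spike arrives. A minimal single-synapse sketch with a toy voltage-based rule; the class names and the rule itself are illustrative assumptions, not the NEST implementation.

```python
from collections import deque

class VoltageArchive:
    """Archive of postsynaptic voltage samples, filled at every time step
    by the neuron and consumed by the synapse only at spike events
    (single-synapse sketch; a shared archive needs reference counting)."""

    def __init__(self, dt):
        self.dt = dt
        self.samples = deque()   # (time, voltage) pairs not yet consumed

    def record(self, t, v):
        self.samples.append((t, v))

    def read_all(self):
        """Return and discard every archived sample; called by the
        synapse when a presynaptic spike arrives."""
        out = list(self.samples)
        self.samples.clear()
        return out


class EventDrivenSynapse:
    """On each presynaptic spike, integrate a toy voltage-based rule over
    the archived interval (a stand-in for e.g. a Clopath-style rule;
    eta and theta are illustrative parameters)."""

    def __init__(self, w, eta, theta):
        self.w, self.eta, self.theta = w, eta, theta

    def on_pre_spike(self, archive):
        # catch up on the full voltage time course since the last event
        for _, v in archive.read_all():
            self.w += self.eta * (v - self.theta) * archive.dt
```

The point of the scheme is that the weight is touched only at spike events, yet the update still reflects the continuous voltage trajectory in between.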
Affiliation(s)
- Jonas Stapmanns
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Institute for Theoretical Solid State Physics, RWTH Aachen University, Aachen, Germany
- Jan Hahne
- School of Mathematics and Natural Sciences, Bergische Universität Wuppertal, Wuppertal, Germany
- Moritz Helias
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Institute for Theoretical Solid State Physics, RWTH Aachen University, Aachen, Germany
- Matthias Bolten
- School of Mathematics and Natural Sciences, Bergische Universität Wuppertal, Wuppertal, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- David Dahmen
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany

65
Farahani F, Kronberg G, FallahRad M, Oviedo HV, Parra LC. Effects of direct current stimulation on synaptic plasticity in a single neuron. Brain Stimul 2021; 14:588-597. [PMID: 33766677] [DOI: 10.1016/j.brs.2021.03.001] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Received: 10/13/2020] [Revised: 02/02/2021] [Accepted: 03/03/2021] [Indexed: 10/21/2022]
Abstract
BACKGROUND Transcranial direct current stimulation (DCS) has lasting effects that may be explained by a boost in synaptic long-term potentiation (LTP). We hypothesized that this boost results from a modulation of somatic spiking in the postsynaptic neuron, as opposed to indirect network effects. To test this directly, we recorded somatic spiking in a postsynaptic neuron during LTP induction with concurrent DCS. METHODS We performed rodent in vitro patch-clamp recordings at the soma of individual CA1 pyramidal neurons. LTP was induced with theta-burst stimulation (TBS) applied concurrently with DCS. To test the causal role of somatic polarization, we manipulated polarization via current injections. We also used a computational multi-compartment neuron model that captures the effect of electric fields on membrane polarization and activity-dependent synaptic plasticity. RESULTS TBS-induced LTP was enhanced when paired with anodal DCS as well as with depolarizing current injections. In both cases, somatic spiking during the TBS was increased, suggesting that evoked somatic activity is the primary factor affecting LTP modulation. However, the boost of LTP with DCS was smaller than expected given the increase in spiking activity alone. In some cells, we also observed DCS-induced spiking, suggesting that DCS also modulates LTP via induced network activity. The computational model reproduces these results and suggests that they are driven by both direct changes in postsynaptic spiking and indirect changes due to network activity. CONCLUSION DCS enhances synaptic plasticity by increasing postsynaptic somatic spiking, although an increase in network activity may both boost and limit this enhancement.
Affiliation(s)
- Forouzan Farahani
- Department of Biomedical Engineering, The City College of New York, New York, NY, USA
- Greg Kronberg
- Department of Biomedical Engineering, The City College of New York, New York, NY, USA
- Mohamad FallahRad
- Department of Biomedical Engineering, The City College of New York, New York, NY, USA
- Hysell V Oviedo
- Biology Department, The City College of New York, New York, NY, USA; CUNY Graduate Center, New York, NY, USA
- Lucas C Parra
- Department of Biomedical Engineering, The City College of New York, New York, NY, USA

66
Noise in Neurons and Synapses Enables Reliable Associative Memory Storage in Local Cortical Circuits. eNeuro 2021; 8:ENEURO.0302-20.2020. [PMID: 33408153] [PMCID: PMC8114874] [DOI: 10.1523/eneuro.0302-20.2020] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 07/07/2020] [Revised: 12/15/2020] [Accepted: 12/16/2020] [Indexed: 12/02/2022]
Abstract
Neural networks in the brain can function reliably despite various sources of errors and noise present at every step of signal transmission. These sources include errors in the presynaptic inputs to the neurons, noise in synaptic transmission, and fluctuations in the neurons’ postsynaptic potentials (PSPs). Collectively, they lead to errors in the neurons’ outputs which are, in turn, injected into the network. Does unreliable network activity hinder fundamental functions of the brain, such as learning and memory retrieval? To explore this question, this article examines the effects of errors and noise on the properties of model networks of inhibitory and excitatory neurons involved in associative sequence learning. The associative learning problem is solved analytically and numerically, and it is also shown how memory sequences can be loaded into the network with a biologically more plausible perceptron-type learning rule. Interestingly, the results reveal that errors and noise during learning increase the probability of memory recall. There is a trade-off between the capacity and reliability of stored memories, and noise during learning is required for optimal retrieval of stored information. What is more, networks loaded with associative memories to capacity display many structural and dynamical features observed in local cortical circuits in mammals. Based on the similarities between the associative and cortical networks, this article predicts that connections originating from more unreliable neurons or neuron classes in the cortex are more likely to be depressed or eliminated during learning, while connections onto noisier neurons or neuron classes have lower probabilities and higher weights.
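A sign-constrained perceptron-type rule of the general kind the abstract refers to, where excitatory and inhibitory weights keep their Dale-type signs during learning, can be sketched as follows. The concrete update, constraints, and parameters here are illustrative assumptions, not the paper's.

```python
import numpy as np

def perceptron_learn(X, y, sign, h=1.0, eta=0.05, epochs=500, seed=0):
    """Sign-constrained perceptron rule: each weight keeps its sign
    (excitatory >= 0, inhibitory <= 0). Generic sketch, not the paper's
    exact rule.
    X: (patterns x inputs) binary array, y: target 0/1 output per pattern,
    sign: +1/-1 per input column, h: firing threshold."""
    rng = np.random.default_rng(seed)
    w = rng.uniform(0.0, 0.1, X.shape[1]) * sign
    for _ in range(epochs):
        for x, target in zip(X, y):
            out = 1 if x @ w >= h else 0
            w += eta * (target - out) * x          # classic perceptron step
            # project back onto the allowed sign for each weight
            w = np.where(sign > 0, np.maximum(w, 0.0), np.minimum(w, 0.0))
    return w
```

For a linearly separable pattern set consistent with the sign constraints, this projected update converges to a valid weight vector just like the unconstrained perceptron.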
67
Muratore P, Capone C, Paolucci PS. Target spike patterns enable efficient and biologically plausible learning for complex temporal tasks. PLoS One 2021; 16:e0247014. [PMID: 33592040] [PMCID: PMC7886200] [DOI: 10.1371/journal.pone.0247014] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Received: 05/25/2020] [Accepted: 01/31/2021] [Indexed: 11/28/2022]
Abstract
Recurrent spiking neural networks (RSNNs) in the brain learn to perform a wide range of perceptual, cognitive and motor tasks very efficiently in terms of energy consumption, and their training requires very few examples. This motivates the search for biologically inspired learning rules for RSNNs, aiming to improve our understanding of brain computation and the efficiency of artificial intelligence. Several spiking models and learning rules have been proposed, but it remains a challenge to design RSNNs whose learning relies on biologically plausible mechanisms and which are capable of solving complex temporal tasks. In this paper, we derive a learning rule, local to the synapse, from a simple mathematical principle: the maximization of the likelihood for the network to solve a specific task. We propose a novel target-based learning scheme in which the rule derived from likelihood maximization is used to mimic a specific spatio-temporal spike pattern that encodes the solution to complex temporal tasks. This makes learning extremely rapid and precise, outperforming state-of-the-art algorithms for RSNNs. Whereas error-based approaches (e.g., e-prop) optimize the internal sequence of spikes trial after trial to progressively minimize the mean squared error, we assume that a signal randomly projected from an external origin (e.g., from other brain areas) directly defines the target sequence. This facilitates the learning procedure, since the network is trained from the beginning to reproduce the desired internal sequence. We propose two versions of our learning rule: spike-dependent and voltage-dependent. We find that the latter provides remarkable benefits in terms of learning speed and robustness to noise. We demonstrate the capacity of our model to tackle several problems, such as learning multidimensional trajectories and solving the classical temporal XOR benchmark. Finally, we show that an online approximation of the gradient ascent, in addition to guaranteeing complete locality in time and space, allows learning after very few presentations of the target output. Our model can be applied to different types of biological neurons. The analytically derived plasticity rule is specific to each neuron model and can produce a theoretical prediction for experimental validation.
Affiliation(s)
- Paolo Muratore
- SISSA—International School for Advanced Studies, Trieste, Italy

68
Energetics of stochastic BCM type synaptic plasticity and storing of accurate information. J Comput Neurosci 2021; 49:71-106. [PMID: 33528721] [PMCID: PMC8046702] [DOI: 10.1007/s10827-020-00775-0] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Received: 08/01/2019] [Revised: 10/04/2020] [Accepted: 12/13/2020] [Indexed: 11/10/2022]
Abstract
Excitatory synaptic signaling in cortical circuits is thought to be metabolically expensive. Two fundamental brain functions, learning and memory, are associated with long-term synaptic plasticity, but we know very little about the energetics of these slow biophysical processes. This study investigates the energy requirement of information storage in plastic synapses for an extended version of BCM plasticity with a decay term, stochastic noise, and a nonlinear dependence of the neuron’s firing rate on synaptic current (adaptation). It is shown that synaptic weights in this model exhibit bistability. To analyze the system analytically, it is reduced to a simple dynamic mean-field model for the population-averaged plastic synaptic current. Next, using the concepts of nonequilibrium thermodynamics, we derive the energy rate (entropy production rate) for plastic synapses and a corresponding Fisher information for coding presynaptic input. That energy, which is of chemical origin, is primarily used for battling fluctuations in the synaptic weights and presynaptic firing rates; it increases steeply with synaptic weight and more uniformly, though nonlinearly, with presynaptic firing. At the onset of synaptic bistability, Fisher information and memory lifetime both increase sharply, by a few orders of magnitude, while the plasticity energy rate changes only mildly. This implies that a huge gain in the precision of stored information need not cost large amounts of metabolic energy, which suggests that synaptic information is not directly limited by energy consumption. Interestingly, for very weak synaptic noise, such a limit on synaptic coding accuracy is instead imposed by a derivative of the plasticity energy rate with respect to the mean presynaptic firing, and this relationship has a general character that is independent of the plasticity type. An estimate for the primate neocortex reveals that the relative metabolic cost of BCM-type synaptic plasticity, as a fraction of the neuronal cost related to fast synaptic transmission and spiking, can vary from negligible to substantial, depending on the synaptic noise level and presynaptic firing.
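For orientation, the deterministic skeleton that the stochastic model above extends, a BCM-type update with a decay term and a sliding modification threshold, can be sketched as a single Euler step. This is the textbook form, not the paper's equations; the parameters are illustrative.

```python
def bcm_step(w, x, y, theta, eta=1e-3, lam=1e-4, tau_theta=100.0, dt=0.1):
    """One Euler step of a BCM-type rule with weight decay and a sliding
    threshold tracking <y^2> (deterministic textbook sketch; the paper
    adds stochastic noise and rate adaptation on top of this skeleton).
    w: weight, x: presynaptic rate, y: postsynaptic rate, theta: threshold."""
    dw = eta * x * y * (y - theta) - lam * w        # LTP above, LTD below theta
    theta_new = theta + dt * (y * y - theta) / tau_theta
    return w + dt * dw, theta_new
```

Postsynaptic activity above the current threshold potentiates the synapse, activity below it depresses, and the threshold itself slowly slides toward the recent mean-square rate, which is what makes the weight dynamics bistable in the extended model.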
69
Knight JC, Nowotny T. Larger GPU-accelerated brain simulations with procedural connectivity. Nat Comput Sci 2021; 1:136-142. [PMID: 38217218] [DOI: 10.1038/s43588-020-00022-7] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Received: 05/29/2020] [Accepted: 12/23/2020] [Indexed: 01/15/2024]
Abstract
Simulations are an important tool for investigating brain function, but large models are needed to faithfully reproduce the statistics and dynamics of brain activity. Simulating large spiking neural network models has, until now, needed so much memory for storing synaptic connections that it required high-performance computing systems. Here, we present an alternative simulation method we call 'procedural connectivity', where connectivity and synaptic weights are generated 'on the fly' instead of stored and retrieved from memory. This method is particularly well suited for use on graphics processing units (GPUs), which are a common fixture in many workstations. Using procedural connectivity and an additional GPU code-generation optimization, we can simulate a recent model of the macaque visual cortex with 4.13 × 10⁶ neurons and 24.2 × 10⁹ synapses on a single GPU, a significant step forward in making large-scale brain modeling accessible to more researchers.
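The procedural-connectivity idea can be illustrated on a CPU in a few lines: instead of storing a connectivity matrix, each presynaptic neuron's target list is regenerated from a deterministic per-neuron random stream whenever it spikes. The names and seeding scheme below are assumptions for illustration; the paper's implementation generates connectivity in parallel on the GPU at spike delivery.

```python
import random

def targets_of(pre_id, n_post, p_conn, seed=1234):
    """Regenerate the outgoing connections of neuron `pre_id` from a
    deterministic per-neuron RNG stream instead of storing them."""
    rng = random.Random(seed * 2654435761 + pre_id)  # fixed per-neuron stream
    return [post for post in range(n_post) if rng.random() < p_conn]

def deliver_spike(pre_id, input_current, n_post, p_conn, w=0.1):
    """Deliver a spike of `pre_id` with no stored connectivity matrix:
    the target list is recomputed on the fly and then discarded."""
    for post in targets_of(pre_id, n_post, p_conn):
        input_current[post] += w
    return input_current
```

Because the per-neuron stream is seeded deterministically, every call regenerates exactly the same targets, so the network's connectivity is fixed even though it is never stored.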
Affiliation(s)
- James C Knight
- Centre for Computational Neuroscience and Robotics, School of Engineering and Informatics, University of Sussex, Brighton, UK
- Thomas Nowotny
- Centre for Computational Neuroscience and Robotics, School of Engineering and Informatics, University of Sussex, Brighton, UK

70
Berberian N, Ross M, Chartier S. Embodied working memory during ongoing input streams. PLoS One 2021; 16:e0244822. [PMID: 33400724] [PMCID: PMC7785253] [DOI: 10.1371/journal.pone.0244822] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 06/18/2020] [Accepted: 12/16/2020] [Indexed: 11/18/2022]
Abstract
Sensory stimuli endow animals with the ability to generate an internal representation. This representation can be maintained for a certain duration in the absence of previously elicited inputs. The reliance on an internal representation rather than purely on the basis of external stimuli is a hallmark feature of higher-order functions such as working memory. Patterns of neural activity produced in response to sensory inputs can continue long after the disappearance of previous inputs. Experimental and theoretical studies have largely invested in understanding how animals faithfully maintain sensory representations during ongoing reverberations of neural activity. However, these studies have focused on preassigned protocols of stimulus presentation, leaving out by default the possibility of exploring how the content of working memory interacts with ongoing input streams. Here, we study working memory using a network of spiking neurons with dynamic synapses subject to short-term and long-term synaptic plasticity. The formal model is embodied in a physical robot as a companion approach under which neuronal activity is directly linked to motor output. The artificial agent is used as a methodological tool for studying the formation of working memory capacity. To this end, we devise a keyboard listening framework to delineate the context under which working memory content is (1) refined, (2) overwritten or (3) resisted by ongoing new input streams. Ultimately, this study takes a neurorobotic perspective to resurface the long-standing implication of working memory in flexible cognition.
Affiliation(s)
- Nareg Berberian
- Laboratory for Computational Neurodynamics and Cognition, School of Psychology, University of Ottawa, Ottawa, Ontario, Canada
- Matt Ross
- Laboratory for Computational Neurodynamics and Cognition, School of Psychology, University of Ottawa, Ottawa, Ontario, Canada
- Sylvain Chartier
- Laboratory for Computational Neurodynamics and Cognition, School of Psychology, University of Ottawa, Ottawa, Ontario, Canada

71
Meissner-Bernard C, Tsai MC, Logiaco L, Gerstner W. Dendritic Voltage Recordings Explain Paradoxical Synaptic Plasticity: A Modeling Study. Front Synaptic Neurosci 2020; 12:585539. [PMID: 33224033] [PMCID: PMC7670913] [DOI: 10.3389/fnsyn.2020.585539] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Received: 07/20/2020] [Accepted: 09/23/2020] [Indexed: 11/21/2022]
Abstract
Experiments have shown that the same stimulation pattern that causes long-term potentiation in proximal synapses will induce long-term depression in distal ones. To understand these and other surprising observations, we use a phenomenological model of Hebbian plasticity at the location of the synapse. Our model describes the Hebbian condition of joint activity of pre- and postsynaptic neurons in a compact form as the interaction of the glutamate trace left by a presynaptic spike with the time course of the postsynaptic voltage. Instead of simulating the voltage, we test the model using experimentally recorded dendritic voltage traces from hippocampus and neocortex. We find that the time course of the voltage in the neighborhood of a stimulated synapse is a reliable predictor of whether that synapse undergoes potentiation, depression, or no change. Our computational model can thus explain the existence of different, at first glance seemingly paradoxical, outcomes of synaptic potentiation and depression experiments depending on the dendritic location of the synapse and the frequency or timing of the stimulation.
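The model's core ingredient, a presynaptic glutamate trace interacting with the local voltage time course, can be sketched with a toy voltage-gated rule: strong local depolarization in the presence of the trace yields potentiation, intermediate depolarization yields depression. The thresholds and amplitudes below are illustrative assumptions, not the paper's fitted parameters.

```python
import math

def predicted_weight_change(v_trace, pre_spike_idx, dt=0.1, tau_glu=10.0,
                            theta_minus=-65.0, theta_plus=-45.0,
                            a_ltd=1e-4, a_ltp=1e-3):
    """Predict the sign of plasticity from a recorded dendritic voltage
    trace (minimal sketch in the spirit of the abstract's model).
    v_trace: local voltages sampled every dt ms;
    pre_spike_idx: sample indices of presynaptic spikes."""
    spikes = set(pre_spike_idx)
    glu, dw = 0.0, 0.0
    for i, v in enumerate(v_trace):
        glu *= math.exp(-dt / tau_glu)          # glutamate trace decays
        if i in spikes:
            glu += 1.0                          # each spike tops up the trace
        if v > theta_plus:
            dw += a_ltp * glu * (v - theta_plus) * dt   # strong depolarization: LTP
        elif v > theta_minus:
            dw -= a_ltd * glu * (v - theta_minus) * dt  # intermediate: LTD
    return dw
```

Feeding the same presynaptic spike train through different local voltage traces, proximal versus distal, then yields different plasticity outcomes, which is the mechanism the abstract describes.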
Affiliation(s)
- Laureline Logiaco
- Center for Theoretical Neuroscience, Columbia University, New York, NY, United States

72
Chen H, Xie L, Wang Y, Zhang H. Memory retention in pyramidal neurons: a unified model of energy-based homo- and heterosynaptic plasticity with homeostasis. Cogn Neurodyn 2020; 15:675-692. [PMID: 34367368] [DOI: 10.1007/s11571-020-09652-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 08/30/2019] [Revised: 10/27/2020] [Accepted: 11/09/2020] [Indexed: 01/07/2023]
Abstract
The brain can learn new tasks without forgetting old ones. This memory retention is closely associated with the long-term stability of synaptic strength. To understand the capacity of pyramidal neurons to preserve memory under different tasks, we established a plasticity model based on the postsynaptic membrane energy state, in which the change in synaptic strength depends on the difference between the energy state after stimulation and the resting energy state. If the post-stimulation energy state is higher than the resting energy state, synaptic depression occurs; otherwise, the synapse is strengthened. Our model unifies homo- and heterosynaptic plasticity and can reproduce synaptic plasticity observed in multiple experiments, such as spike-timing-dependent plasticity and cooperative plasticity, with a few common parameters. Based on the proposed plasticity model, we conducted a simulation study of how the activation patterns of dendritic branches by different tasks affect the synaptic connection strength of pyramidal neurons. We further investigated the mechanism by which different tasks come to activate different dendritic branches. Simulation results show that, compared with the classic plasticity model, our plasticity model achieves a better spatial separation of the branches activated by different tasks in pyramidal neurons, which deepens our insight into the memory retention mechanism of the brain.
Affiliation(s)
- Huanwen Chen
- The School of Automation, Central South University, Changsha 410083, Hunan, China
- Lijuan Xie
- The Institute of Physiology and Psychology, Changsha University of Science and Technology, Changsha 410076, Hunan, China
- Yijun Wang
- The School of Automation, Central South University, Changsha 410083, Hunan, China
- Hang Zhang
- The School of Automation, Central South University, Changsha 410083, Hunan, China

73
Sadeh S, Clopath C. Inhibitory stabilization and cortical computation. Nat Rev Neurosci 2020; 22:21-37. [PMID: 33177630] [DOI: 10.1038/s41583-020-00390-z] [Citation(s) in RCA: 54] [Impact Index Per Article: 13.5] [Accepted: 09/22/2020] [Indexed: 12/22/2022]
Abstract
Neuronal networks with strong recurrent connectivity provide the brain with a powerful means to perform complex computational tasks. However, high-gain excitatory networks are susceptible to instability, which can lead to runaway activity, as manifested in pathological regimes such as epilepsy. Inhibitory stabilization offers a dynamic, fast and flexible compensatory mechanism to balance otherwise unstable networks, thus enabling the brain to operate in its most efficient regimes. Here we review recent experimental evidence for the presence of such inhibition-stabilized dynamics in the brain and discuss their consequences for cortical computation. We show how the study of inhibition-stabilized networks in the brain has been facilitated by recent advances in the technological toolbox and perturbative techniques, as well as a concomitant development of biologically realistic computational models. By outlining future avenues, we suggest that inhibitory stabilization can offer an exemplary case of how experimental neuroscience can progress in tandem with technology and theory to advance our understanding of the brain.
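The hallmark signature of an inhibition-stabilized network, the paradoxical effect, can be reproduced with the standard linear two-population rate model: when recurrent excitation alone is unstable (w_EE > 1), extra input to the inhibitory population lowers its steady-state rate. A minimal sketch; the parameter values are illustrative, not from the review.

```python
def steady_state(wee, wei, wie, wii, he, hi):
    """Fixed point of the linear two-population rate model
        rE = wEE*rE - wEI*rI + hE
        rI = wIE*rE - wII*rI + hI
    solved as a 2x2 linear system (textbook ISN setup)."""
    a, b = 1.0 - wee, wei            # (a b; c d) @ (rE, rI) = (hE, hI)
    c, d = -wie, 1.0 + wii
    det = a * d - b * c
    re = (d * he - b * hi) / det
    ri = (a * hi - c * he) / det
    return re, ri
```

With wEE = 2 the excitatory subnetwork is unstable on its own, yet the full fixed point is stable, and raising hI lowers the inhibitory rate. This paradoxical response is the perturbative signature used experimentally to diagnose inhibition-stabilized dynamics.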
Affiliation(s)
- Sadra Sadeh
- Bioengineering Department, Imperial College London, London, UK
- Claudia Clopath
- Bioengineering Department, Imperial College London, London, UK

74
Auth JM, Nachstedt T, Tetzlaff C. The Interplay of Synaptic Plasticity and Scaling Enables Self-Organized Formation and Allocation of Multiple Memory Representations. Front Neural Circuits 2020; 14:541728. [PMID: 33117130] [PMCID: PMC7575689] [DOI: 10.3389/fncir.2020.541728] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Received: 03/10/2020] [Accepted: 08/19/2020] [Indexed: 12/23/2022]
Abstract
It is commonly assumed that memories about experienced stimuli are represented by groups of highly interconnected neurons called cell assemblies. This requires allocating and storing information in the neural circuitry, which happens through synaptic weight adaptations at different types of synapses. In general, memory allocation is associated with synaptic changes at feed-forward synapses, while memory storage is linked with adaptation of recurrent connections. It remains largely unknown, however, how memory allocation and storage can be achieved, and how the adaptation of the different synapses involved can be coordinated, to allow for a faithful representation of multiple memories without disruptive interference between them. In this theoretical study, using network simulations and phase space analyses, we show that the interplay between long-term synaptic plasticity and homeostatic synaptic scaling simultaneously organizes the adaptations of feed-forward and recurrent synapses such that a new stimulus forms a new memory and different stimuli are assigned to distinct cell assemblies. The resulting dynamics can reproduce experimental in vivo data, focusing on how diverse factors, such as neuronal excitability and network connectivity, influence memory formation. Thus, the model presented here suggests that a few fundamental synaptic mechanisms may suffice to implement memory allocation and storage in neural circuitry.
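The interplay of a Hebbian term with homeostatic synaptic scaling can be sketched as a single combined weight update. The quadratic multiplicative scaling term below is a common modeling choice and an assumption here, not the paper's exact equations.

```python
def weight_update(w, pre, post, r_target=5.0, mu=1e-3, gamma=1e-4, dt=0.1):
    """One Euler step combining Hebbian plasticity with homeostatic
    synaptic scaling (generic sketch of the interplay studied in the paper).
    pre, post: firing rates; r_target: homeostatic set point."""
    hebb = mu * pre * post                        # correlation-driven growth
    scaling = gamma * (r_target - post) * w * w   # multiplicative homeostasis
    return w + dt * (hebb + scaling)
```

Hebbian growth dominates for coactive pre- and postsynaptic neurons, while scaling pushes the postsynaptic rate back toward its set point, the combination that allows assemblies to form without runaway potentiation.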
Affiliation(s)
- Johannes Maria Auth
- Department of Computational Neuroscience, Third Institute of Physics, Georg-August-Universität, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
- Timo Nachstedt
- Department of Computational Neuroscience, Third Institute of Physics, Georg-August-Universität, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
- Christian Tetzlaff
- Department of Computational Neuroscience, Third Institute of Physics, Georg-August-Universität, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Göttingen, Germany

75
Ebner C, Clopath C, Jedlicka P, Cuntz H. Unifying Long-Term Plasticity Rules for Excitatory Synapses by Modeling Dendrites of Cortical Pyramidal Neurons. Cell Rep 2020; 29:4295-4307.e6. [PMID: 31875541] [PMCID: PMC6941234] [DOI: 10.1016/j.celrep.2019.11.068] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Received: 01/26/2018] [Revised: 05/02/2019] [Accepted: 11/15/2019] [Indexed: 11/30/2022]
Abstract
A large number of experiments have indicated that precise spike times, firing rates, and synapse locations crucially determine the dynamics of long-term plasticity induction in excitatory synapses. However, it remains unknown how plasticity mechanisms of synapses distributed along dendritic trees cooperate to produce the wide spectrum of outcomes for various plasticity protocols. Here, we propose a four-pathway plasticity framework that is well grounded in experimental evidence and apply it to a biophysically realistic cortical pyramidal neuron model. We show in computer simulations that several seemingly contradictory experimental landmark studies are consistent with one unifying set of mechanisms when considering the effects of signal propagation in dendritic trees with respect to synapse location. Our model identifies specific spatiotemporal contributions of dendritic and axo-somatic spikes as well as of subthreshold activation of synaptic clusters, providing a unified parsimonious explanation not only for rate and timing dependence but also for location dependence of synaptic changes.

Highlights:
- A phenomenological synaptic plasticity rule is applied to a pyramidal neuron model
- Model reproduces rate-, timing-, and location-dependent plasticity results
- Active dendrites allow plasticity via dendritic spikes and subthreshold events
- Cooperative plasticity exists across the dendritic tree and within single branches
Affiliation(s)
- Christian Ebner
- Frankfurt Institute for Advanced Studies, 60438 Frankfurt am Main, Germany; Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt am Main, Germany; NeuroCure Cluster of Excellence, Charité-Universitätsmedizin Berlin, 10117 Berlin, Germany; Institute for Biology, Humboldt-Universität zu Berlin, 10117 Berlin, Germany
- Claudia Clopath
- Computational Neuroscience Laboratory, Bioengineering Department, Imperial College London, London SW7 2AZ, UK
- Peter Jedlicka
- Frankfurt Institute for Advanced Studies, 60438 Frankfurt am Main, Germany; Institute of Clinical Neuroanatomy, Neuroscience Center, Goethe University Frankfurt, 60528 Frankfurt am Main, Germany; ICAR3R-Interdisciplinary Centre for 3Rs in Animal Research, Faculty of Medicine, Justus-Liebig-University, 35392 Giessen, Germany
- Hermann Cuntz
- Frankfurt Institute for Advanced Studies, 60438 Frankfurt am Main, Germany; Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt am Main, Germany

76
Machado JN, Matias FS. Phase bistability between anticipated and delayed synchronization in neuronal populations. Phys Rev E 2020; 102:032412. [PMID: 33075861] [DOI: 10.1103/physreve.102.032412] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Received: 07/16/2020] [Accepted: 08/26/2020] [Indexed: 06/11/2023]
Abstract
Two dynamical systems unidirectionally coupled in a sender-receiver configuration can synchronize with a nonzero phase lag. In particular, the system can exhibit anticipated synchronization (AS), which is characterized by a negative phase lag, if the receiver also receives a delayed negative self-feedback. Recently, AS was shown to occur between cortical-like neuronal populations in which the self-feedback is mediated by inhibitory synapses. In this biologically plausible scenario, a transition from the usual delayed synchronization (with positive phase lag) to AS can be mediated by the inhibitory conductances in the receiver population. Here we show that depending on the relation between excitatory and inhibitory synaptic conductances the system can also exhibit phase bistability between anticipated and delayed synchronization. Furthermore, we show that the amount of noise at the receiver and the synaptic conductances can mediate the transition from stable phase locking to a bistable regime and eventually to a phase drift. We suggest that our spiking neuronal populations model could be potentially useful to study phase bistability in cortical regions related to bistable perception.
Collapse
Affiliation(s)
- Júlio Nunes Machado
- Instituto de Física, Universidade Federal de Alagoas, Maceió, Alagoas 57072-970, Brazil
77
Semyanov A, Henneberger C, Agarwal A. Making sense of astrocytic calcium signals - from acquisition to interpretation. Nat Rev Neurosci 2020; 21:551-564. [DOI: 10.1038/s41583-020-0361-8]
78
Jackson MB. Hebbian and non-Hebbian timing-dependent plasticity in the hippocampal CA3 region. Hippocampus 2020; 30:1241-1256. [PMID: 32818312] [DOI: 10.1002/hipo.23252]
Abstract
The timing between synaptic inputs has been proposed to play a role in the induction of plastic changes that enable neural circuits to store information. In the case of spike timing-dependent plasticity (STDP), this relates to the interval between a synaptic input and a postsynaptic spike, providing a conceptual link to the Hebb learning rule. Experiments have documented STDP in many synapses and brain regions, and computational models have tested its utility in many neural network functions. However, questions remain about whether timing plays a role in plasticity during natural activity, and whether it can function in information storage. The present study used imaging with voltage-sensitive dye to investigate the effectiveness of input timing in the plasticity of responses in the CA3 region of hippocampal slices. Plasticity was induced by sequential dual-site stimulation at 10 ms intervals, either of synaptic inputs and cell bodies (synaptic-somatic induction) or of two sets of synaptic inputs (synaptic-synaptic induction). Both protocols potentiated responses, with greater potentiation of responses to the first stimulation of the sequence than the second. Neither protocol induced depression. Synaptic-somatic stimulation was much more effective than synaptic-synaptic stimulation in evoking somatic action potentials, yet both protocols potentiated responses equally well, indicating that sequential dual-site stimulation can potentiate with very different degrees of somatic action potential firing. With synaptic-somatic induction, potentiation was focused at the sites of stimulation; with synaptic-synaptic induction, the distribution of potentiation varied greatly. Changes in the spatial distribution of responses indicated that sequential dual-site stimulation functions poorly in the storage of activity patterns. These results suggest that in the hippocampal CA3 region, timed sequential activation of two inputs is less effective than theta bursts, both in the induction of LTP and in the storage of information.
Affiliation(s)
- Meyer B Jackson
- Department of Neuroscience, University of Wisconsin, Madison, Wisconsin, USA
79
Franović I, Yanchuk S, Eydam S, Bačić I, Wolfrum M. Dynamics of a stochastic excitable system with slowly adapting feedback. Chaos 2020; 30:083109. [PMID: 32872843] [DOI: 10.1063/1.5145176]
Abstract
We study an excitable active rotator with slowly adapting nonlinear feedback and noise. Depending on the adaptation and the noise level, this system may display noise-induced spiking, noise-perturbed oscillations, or stochastic bursting. We show how the system exhibits transitions between these dynamical regimes, as well as how one can enhance or suppress the coherence resonance or effectively control the features of the stochastic bursting. The setup can be considered a paradigmatic model for a neuron with a slow recovery variable or, more generally, as an excitable system under the influence of a nonlinear control mechanism. We employ a multiple timescale approach that combines the classical adiabatic elimination with averaging of rapid oscillations and stochastic averaging of noise-induced fluctuations by a corresponding stationary Fokker-Planck equation. This allows us to perform a numerical bifurcation analysis of a reduced slow system and to determine the parameter regions associated with different types of dynamics. In particular, we demonstrate the existence of a region of bistability, where the noise-induced switching between a stationary and an oscillatory regime gives rise to stochastic bursting.
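A minimal sketch of the kind of system studied, assuming an illustrative adaptation law (the paper's specific nonlinear feedback is not reproduced here): an excitable active rotator theta' = 1 - a*sin(theta) with a > 1, a slow feedback variable, and additive noise integrated by Euler-Maruyama. Without noise the rotator rests at a fixed point; with sufficient noise it produces noise-induced spikes (full rotations).

```python
import numpy as np

def active_rotator(sigma, a=1.05, eps=0.01, T=200.0, dt=0.01, seed=0):
    """Count full rotations ('spikes') of an excitable active rotator
    with a slow adapting feedback variable w and Gaussian white noise.
    The feedback law is a toy stand-in, not the paper's exact model."""
    rng = np.random.default_rng(seed)
    theta, w, spikes = 0.0, 0.0, 0
    for _ in range(int(T / dt)):
        dw = eps * (np.sin(theta) - w)              # slow adaptation
        dtheta = 1.0 - a * np.sin(theta) - 0.5 * w  # excitable phase dynamics
        theta += dt * dtheta + sigma * np.sqrt(dt) * rng.standard_normal()
        w += dt * dw
        if theta > 2.0 * np.pi:                     # one full rotation = one spike
            theta -= 2.0 * np.pi
            spikes += 1
    return spikes
```

Sweeping sigma and the adaptation rate eps in such a model moves it between the quiescent, noise-induced spiking, and bursting regimes discussed in the abstract.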
Affiliation(s)
- Igor Franović
- Scientific Computing Laboratory, Center for the Study of Complex Systems, Institute of Physics Belgrade, University of Belgrade, Pregrevica 118, 11080 Belgrade, Serbia
- Serhiy Yanchuk
- Institut für Mathematik, Technische Universität Berlin, Straße des 17. Juni 136, 10623 Berlin, Germany
- Sebastian Eydam
- Weierstrass Institute, Mohrenstrasse 39, 10117 Berlin, Germany
- Iva Bačić
- Scientific Computing Laboratory, Center for the Study of Complex Systems, Institute of Physics Belgrade, University of Belgrade, Pregrevica 118, 11080 Belgrade, Serbia
80
Using the hierarchical temporal memory spatial pooler for short-term forecasting of electrical load time series. Appl Comput Inform 2020. [DOI: 10.1016/j.aci.2018.09.002]
Abstract
In this paper, an emerging machine intelligence technique, Hierarchical Temporal Memory (HTM), is applied to the task of short-term load forecasting (STLF). An HTM Spatial Pooler (HTM-SP) stage continually forms sparse distributed representations (SDRs) from a univariate load time series, a temporal aggregator transforms the SDRs into a sequential bivariate representation space, and an overlap classifier makes temporal classifications from the bivariate SDRs through time. The comparative performance of HTM on several daily electrical load time series, including the Eunite competition dataset and the Polish power system dataset from 2002 to 2004, is presented. The robustness of HTM is further validated using hourly load data from three more recent electricity markets. The results obtained from the Eunite and Polish datasets indicate that HTM performs better than the existing techniques reported in the literature. In general, the robustness test also shows that the error distribution of the proposed HTM technique is positively skewed for most of the years considered, with kurtosis values mostly below the base value of 3, indicating a reasonable level of outlier rejection.
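The spatial-pooler stage can be sketched in a few lines. This is a bare-bones scalar encoder plus k-winners-take-all pooler (no boosting and no permanence learning, both of which the full HTM-SP has), with illustrative sizes; it shows only the key property that similar inputs map to overlapping sparse codes.

```python
import numpy as np

def encode(value, vmin, vmax, n_bits=100, w=15):
    """Scalar encoder: a block of w active bits whose position tracks value."""
    frac = (value - vmin) / (vmax - vmin)
    start = int(round(frac * (n_bits - w)))
    sdr = np.zeros(n_bits, dtype=int)
    sdr[start:start + w] = 1
    return sdr

class SpatialPooler:
    """Minimal spatial pooler: fixed random potential synapses, overlap
    scoring, and k-winners-take-all sparsification."""
    def __init__(self, n_in, n_cols=200, k=10, pot_frac=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.connected = (rng.random((n_cols, n_in)) < pot_frac).astype(int)
        self.k = k

    def compute(self, sdr):
        overlap = self.connected @ sdr          # input overlap per column
        winners = np.argsort(overlap)[-self.k:] # k most activated columns
        out = np.zeros(self.connected.shape[0], dtype=int)
        out[winners] = 1
        return out
```

Nearby load values produce output SDRs with many shared active columns, while distant values share almost none, which is what makes a downstream overlap classifier workable.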
81
A solution to the learning dilemma for recurrent networks of spiking neurons. Nat Commun 2020; 11:3625. [PMID: 32681001] [PMCID: PMC7367848] [DOI: 10.1038/s41467-020-17236-y]
Abstract
Recurrently connected networks of spiking neurons underlie the astounding information processing capabilities of the brain. Yet in spite of extensive research, how they can learn through synaptic plasticity to carry out complex network computations remains unclear. We argue that two pieces of this puzzle were provided by experimental data from neuroscience. A mathematical result tells us how these pieces need to be combined to enable biologically plausible online network learning through gradient descent, in particular deep reinforcement learning. This learning method, called e-prop, approaches the performance of backpropagation through time (BPTT), the best-known method for training recurrent neural networks in machine learning. In addition, it suggests a method for powerful on-chip learning in energy-efficient spike-based hardware for artificial intelligence.
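The core of e-prop is a factorization of the weight update into a forward-computed, local eligibility trace and a top-down learning signal. The toy below shows that factorization for a single LIF synapse; the random learning signal is only a stand-in for the paper's broadcast error term, and all constants are illustrative.

```python
import numpy as np

def pseudo_derivative(v, thr=1.0, gamma=0.3):
    """Surrogate derivative of the step spike function, as used in e-prop."""
    return gamma * max(0.0, 1.0 - abs(v - thr))

def eprop_toy(steps=200, alpha=0.9, lr=0.01, seed=1):
    """Single LIF neuron, one synapse. The e-prop update is
    dw = learning_signal * eligibility_trace, where the trace is built
    online from local quantities only (no backpropagation through time)."""
    rng = np.random.default_rng(seed)
    w, v, eps = 0.5, 0.0, 0.0
    for _ in range(steps):
        x = rng.integers(0, 2)              # presynaptic spike (0/1)
        v = alpha * v + w * x               # leaky membrane potential
        z = float(v >= 1.0)                 # postsynaptic spike
        eps = alpha * eps + x               # filtered presynaptic activity
        e_trace = pseudo_derivative(v) * eps
        v -= z                              # soft reset after spiking
        L = 0.1 * rng.standard_normal()     # stand-in online learning signal
        w += lr * L * e_trace               # local, online weight update
    return w

w = eprop_toy()
```

In the full method the eligibility trace also carries slow neuronal hidden state (e.g. spike-frequency adaptation), and the learning signal is derived from the task error rather than drawn at random.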
82
Kaiser J, Mostafa H, Neftci E. Synaptic Plasticity Dynamics for Deep Continuous Local Learning (DECOLLE). Front Neurosci 2020; 14:424. [PMID: 32477050] [PMCID: PMC7235446] [DOI: 10.3389/fnins.2020.00424]
Abstract
A growing body of work underlines striking similarities between biological neural networks and recurrent, binary neural networks. A relatively smaller body of work, however, addresses the similarities between learning dynamics employed in deep artificial neural networks and synaptic plasticity in spiking neural networks. The gap stems largely from the discrepancy between the dynamical properties of synaptic plasticity and the requirements for gradient backpropagation. Learning algorithms that approximate gradient backpropagation using local error functions can overcome this challenge. Here, we introduce Deep Continuous Local Learning (DECOLLE), a spiking neural network equipped with local error functions for online learning with no memory overhead for computing gradients. DECOLLE is capable of learning deep spatiotemporal representations from spikes relying solely on local information, making it compatible with neurobiology and neuromorphic hardware. Synaptic plasticity rules are derived systematically from user-defined cost functions and neural dynamics by leveraging existing autodifferentiation methods of machine learning frameworks. We benchmark our approach on the event-based neuromorphic datasets N-MNIST and DvsGesture, on which DECOLLE performs comparably to the state of the art. DECOLLE networks provide continuously learning machines that are relevant to biology and supportive of event-based, low-power computer vision architectures, matching the accuracies of conventional computers on tasks where temporal precision and speed are essential.
Affiliation(s)
- Jacques Kaiser
- FZI Research Center for Information Technology, Karlsruhe, Germany
- Hesham Mostafa
- Department of Bioengineering, University of California, San Diego, La Jolla, CA, United States
- Emre Neftci
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, United States
- Department of Computer Science, University of California, Irvine, Irvine, CA, United States
83
Montangie L, Miehl C, Gjorgjieva J. Autonomous emergence of connectivity assemblies via spike triplet interactions. PLoS Comput Biol 2020; 16:e1007835. [PMID: 32384081] [PMCID: PMC7239496] [DOI: 10.1371/journal.pcbi.1007835]
Abstract
Non-random connectivity can emerge without structured external input, driven by activity-dependent mechanisms of synaptic plasticity based on precise spiking patterns. Here we analyze the emergence of global structures in recurrent networks based on a triplet model of spike timing dependent plasticity (STDP), which depends on the interactions of three precisely timed spikes and can describe plasticity experiments with varying spike frequency better than the classical pair-based STDP rule. We derive synaptic changes arising from correlations up to third order and describe them as the sum of structural motifs, which determine how any spike in the network influences a given synaptic connection through possible connectivity paths. This motif expansion framework reveals novel structural motifs under the triplet STDP rule, which support the formation of bidirectional connections and ultimately the spontaneous emergence of global network structure in the form of self-connected groups of neurons, or assemblies. We propose that, under triplet STDP, assembly structure can emerge without the need for externally patterned inputs or for the symmetric pair-based STDP rule assumed in previous studies. The emergence of non-random network structure under triplet STDP occurs through internally generated higher-order correlations, which are ubiquitous in natural stimuli and neuronal spiking activity, and important for coding. We further demonstrate how neuromodulatory mechanisms that modulate the shape of the triplet STDP rule or the synaptic transmission function differentially promote structural motifs underlying the emergence of assemblies, and quantify the differences using graph theoretic measures.

Emergent non-random connectivity structures in different brain regions are tightly related to specific patterns of neural activity and support diverse brain functions. For instance, self-connected groups of neurons, known as assemblies, have been proposed to represent functional units in brain circuits and can emerge even without patterned external instruction. Here we investigate the emergence of non-random connectivity in recurrent networks using a particular plasticity rule, triplet STDP, which relies on the interaction of spike triplets and can capture higher-order statistical dependencies in neural activity. We derive the evolution of the synaptic strengths in the network and explore the conditions for the self-organization of connectivity into assemblies. We demonstrate key differences of the triplet STDP rule compared to the classical pair-based rule in terms of how assemblies are formed, including the realistic asymmetric shape and the influence of novel connectivity motifs on network plasticity driven by higher-order correlations. Assembly formation depends on the specific shape of the STDP window and the synaptic transmission function, pointing towards an important role of neuromodulatory signals in the formation of intrinsically generated assemblies.
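The triplet rule itself is compact. Below is a minimal all-to-all implementation in the Pfister-Gerstner form that the abstract builds on, with illustrative amplitudes and time constants; the second, slower postsynaptic trace o2 is what distinguishes it from the pair-based rule.

```python
import numpy as np

def triplet_stdp(pre, post, dt=1.0, tau_plus=16.8, tau_minus=33.7,
                 tau_y=114.0, A2p=5e-3, A2m=7e-3, A3p=6e-3):
    """Minimal all-to-all triplet STDP: depression on a pre spike scales
    with recent post activity (o1); potentiation on a post spike scales
    with recent pre activity (r1), boosted by a slower post trace (o2)
    that carries the triplet interaction. pre/post: binary spike trains
    sampled every dt (ms-like units); parameter values are illustrative."""
    r1 = o1 = o2 = 0.0
    w = 0.0
    for x, y in zip(pre, post):
        r1 *= np.exp(-dt / tau_plus)         # presynaptic trace
        o1 *= np.exp(-dt / tau_minus)        # fast postsynaptic trace
        o2 *= np.exp(-dt / tau_y)            # slow postsynaptic (triplet) trace
        o2_before = o2                       # value just before this post spike
        if x:                                # pre spike -> depression
            w -= A2m * o1
            r1 += 1.0
        if y:                                # post spike -> potentiation
            w += r1 * (A2p + A3p * o2_before)
            o1 += 1.0
            o2 += 1.0
    return w
```

With these parameters, pre-before-post pairing at +10 ms potentiates and the reversed order depresses, while the o2 term makes potentiation grow with postsynaptic rate, the frequency dependence the pair-based rule misses.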
Affiliation(s)
- Lisandro Montangie
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany
- Christoph Miehl
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany
- Technical University of Munich, School of Life Sciences, Freising, Germany
- Julijana Gjorgjieva
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany
- Technical University of Munich, School of Life Sciences, Freising, Germany
84
Mikhaylov A, Pimashkin A, Pigareva Y, Gerasimova S, Gryaznov E, Shchanikov S, Zuev A, Talanov M, Lavrov I, Demin V, Erokhin V, Lobov S, Mukhina I, Kazantsev V, Wu H, Spagnolo B. Neurohybrid Memristive CMOS-Integrated Systems for Biosensors and Neuroprosthetics. Front Neurosci 2020; 14:358. [PMID: 32410943] [PMCID: PMC7199501] [DOI: 10.3389/fnins.2020.00358]
Abstract
Here we provide a perspective concept of a neurohybrid memristive chip that combines living neural networks cultivated in a microfluidic/microelectrode system with metal-oxide memristive devices or arrays, integrated with a mixed-signal CMOS layer that controls the analog memristive circuits, processes the decoded information, and arranges feedback stimulation of the biological culture, together forming a bidirectional neurointerface. Our main focus is on state-of-the-art approaches for cultivating and spatially ordering networks of dissociated hippocampal neurons; fabricating large-scale cross-bar arrays of memristive devices tailored through device engineering, resistive-state programming, or non-linear dynamics; and implementing in hardware spiking neural networks (SNNs) based on arrays of memristive devices and integrated CMOS electronics. The concept represents an example of a brain-on-chip system belonging to a more general class of memristive neurohybrid systems for new-generation robotics, artificial intelligence, and personalized medicine, discussed in the framework of the proposed roadmap for the next decade.
Affiliation(s)
- Alexey Mikhaylov
- Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Alexey Pimashkin
- Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Yana Pigareva
- Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Evgeny Gryaznov
- Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Sergey Shchanikov
- Department of Information Technologies, Vladimir State University, Murom, Russia
- Anton Zuev
- Department of Information Technologies, Vladimir State University, Murom, Russia
- Max Talanov
- Neuroscience Laboratory, Kazan Federal University, Kazan, Russia
- Igor Lavrov
- Department of Neurologic Surgery, Mayo Clinic, Rochester, MN, United States
- Laboratory of Motor Neurorehabilitation, Kazan Federal University, Kazan, Russia
- Victor Erokhin
- Neuroscience Laboratory, Kazan Federal University, Kazan, Russia
- Kurchatov Institute, Moscow, Russia
- CNR-Institute of Materials for Electronics and Magnetism, Italian National Research Council, Parma, Italy
- Sergey Lobov
- Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Center for Technologies in Robotics and Mechatronics Components, Innopolis University, Innopolis, Russia
- Irina Mukhina
- Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Cell Technology Group, Privolzhsky Research Medical University, Nizhny Novgorod, Russia
- Victor Kazantsev
- Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Center for Technologies in Robotics and Mechatronics Components, Innopolis University, Innopolis, Russia
- Huaqiang Wu
- Institute of Microelectronics, Tsinghua University, Beijing, China
- Bernardo Spagnolo
- Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Dipartimento di Fisica e Chimica-Emilio Segrè, Group of Interdisciplinary Theoretical Physics, Università di Palermo and CNISM, Unità di Palermo, Palermo, Italy
- Istituto Nazionale di Fisica Nucleare, Sezione di Catania, Catania, Italy
85
Tonic GABAA Conductance Favors Spike-Timing-Dependent over Theta-Burst-Induced Long-Term Potentiation in the Hippocampus. J Neurosci 2020; 40:4266-4276. [PMID: 32327534] [DOI: 10.1523/jneurosci.2118-19.2020]
Abstract
Synaptic plasticity is triggered by different patterns of network activity. Here, we investigated how LTP at CA3-CA1 synapses induced by different stimulation patterns is affected by tonic GABAA conductances in rat hippocampal slices. Spike-timing-dependent LTP was induced by pairing Schaffer collateral stimulation with antidromic stimulation of CA1 pyramidal neurons. Theta-burst-induced LTP was induced by theta-burst stimulation of Schaffer collaterals. We mimicked increased tonic GABAA conductance by bath application of 30 μM GABA. Surprisingly, tonic GABAA conductance selectively suppressed theta-burst-induced LTP but not spike-timing-dependent LTP. We combined whole-cell patch-clamp electrophysiology, two-photon Ca2+ imaging, glutamate uncaging, and mathematical modeling to dissect the mechanisms underlying these differential effects. We found that Ca2+ transients during pairing of an action potential with an EPSP were less sensitive to the shunting inhibition produced by tonic GABAA conductance than Ca2+ transients induced by EPSP bursts. Our results may explain how different forms of memory are affected by increased tonic GABAA conductances under physiological or pathological conditions, as well as under the influence of substances that target extrasynaptic GABAA receptors (e.g., neurosteroids, sedatives, antiepileptic drugs, and alcohol).

SIGNIFICANCE STATEMENT Brain activity is associated with neuronal firing and synaptic signaling among neurons. Synaptic plasticity represents a mechanism for learning and memory. However, some neurotransmitters that escape the synaptic cleft or are released by astrocytes can target extrasynaptic receptors. Extrasynaptic GABAA receptors mediate tonic conductances that reduce the excitability of neurons by shunting. This decreases the ability of neurons to fire action potentials, but once action potentials are successfully triggered, tonic conductances cannot reduce them significantly. As such, tonic GABAA conductances have minimal effects on spike-timing-dependent synaptic plasticity while strongly attenuating the plasticity evoked by EPSP bursts. Our findings shed light on how changes in tonic conductances can selectively affect different forms of learning and memory.
86
Sweeney Y, Clopath C. Population coupling predicts the plasticity of stimulus responses in cortical circuits. eLife 2020; 9:e56053. [PMID: 32314959] [PMCID: PMC7224697] [DOI: 10.7554/elife.56053]
Abstract
Some neurons have stimulus responses that are stable over days, whereas other neurons have highly plastic stimulus responses. Using a recurrent network model, we explore whether this could be due to an underlying diversity in their synaptic plasticity. We find that, in a network with diverse learning rates, neurons with fast rates are more coupled to population activity than neurons with slow rates. This plasticity-coupling link predicts that neurons with high population coupling exhibit more long-term stimulus response variability than neurons with low population coupling. We substantiate this prediction using recordings from the Allen Brain Observatory, finding that a neuron's population coupling is correlated with the plasticity of its orientation preference. Simulations of a simple perceptual learning task suggest a particular functional architecture: a stable 'backbone' of stimulus representation formed by neurons with low population coupling, on top of which lies a flexible substrate of neurons with high population coupling.
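Population coupling, the quantity the prediction rests on, is straightforward to compute from simultaneously recorded activity: correlate each neuron's activity with the summed activity of all the other neurons. A minimal sketch, using synthetic data for illustration:

```python
import numpy as np

def population_coupling(rates):
    """Population coupling of each neuron: Pearson correlation between
    its own activity trace and the summed activity of all other neurons.
    rates: array of shape (n_neurons, n_timebins)."""
    total = rates.sum(axis=0)
    return np.array([np.corrcoef(r, total - r)[0, 1] for r in rates])
```

In the model described in the abstract, neurons scoring high on this measure are the ones whose stimulus responses are most variable over the long term, whereas low-coupling neurons form the stable "backbone" of the representation.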
Affiliation(s)
- Yann Sweeney
- Department of Bioengineering, Imperial College London, London, United Kingdom
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London, United Kingdom
87
Wybo WAM, Torben-Nielsen B, Nevian T, Gewaltig MO. Electrical Compartmentalization in Neurons. Cell Rep 2019; 26:1759-1773.e7. [PMID: 30759388] [DOI: 10.1016/j.celrep.2019.01.074]
Abstract
The dendritic tree of neurons plays an important role in information processing in the brain. While it is thought that dendrites require independent subunits to perform most of their computations, it is still not understood how they compartmentalize into functional subunits. Here, we show how these subunits can be deduced from the properties of dendrites. We devised a formalism that links the dendritic arborization to an impedance-based tree graph and show how the topology of this graph reveals independent subunits. This analysis reveals that cooperativity between synapses decreases slowly with increasing electrical separation and thus that few independent subunits coexist. We nevertheless find that balanced inputs or shunting inhibition can modify this topology and increase the number and size of the subunits in a context-dependent manner. We also find that this dynamic recompartmentalization can enable branch-specific learning of stimulus features. Analysis of dendritic patch-clamp recording experiments confirmed our theoretical predictions.
Affiliation(s)
- Willem A M Wybo
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland; Laboratory of Computational Neuroscience, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland; Department of Physiology, University of Bern, Bern, Switzerland
- Benjamin Torben-Nielsen
- Biocomputation Group, University of Hertfordshire, Hertfordshire, UK; Neurolinx Research Institute, La Jolla, CA, USA
- Thomas Nevian
- Department of Physiology, University of Bern, Bern, Switzerland
- Marc-Oliver Gewaltig
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
88
Köksal Ersöz E, Aguilar C, Chossat P, Krupa M, Lavigne F. Neuronal mechanisms for sequential activation of memory items: Dynamics and reliability. PLoS One 2020; 15:e0231165. [PMID: 32298290] [PMCID: PMC7161983] [DOI: 10.1371/journal.pone.0231165]
Abstract
In this article we present a biologically inspired model of activation of memory items in a sequence. Our model produces two types of sequences, corresponding to two different types of cerebral functions: activation of regular or irregular sequences. The switch between the two types of activation occurs through the modulation of biological parameters, without altering the connectivity matrix. Some of the parameters included in our model are neuronal gain, strength of inhibition, synaptic depression and noise. We investigate how these parameters enable the existence of sequences and influence the type of sequences observed. In particular we show that synaptic depression and noise drive the transitions from one memory item to the next and neuronal gain controls the switching between regular and irregular (random) activation.
Affiliation(s)
- Carlos Aguilar
- Lab by MANTU, Amaris Research Unit, Route des Colles, Biot, France
- Pascal Chossat
- Project Team MathNeuro, INRIA-CNRS-UNS, Sophia Antipolis, France
- Université Côte d'Azur, Laboratoire Jean-Alexandre Dieudonné, Nice, France
- Martin Krupa
- Project Team MathNeuro, INRIA-CNRS-UNS, Sophia Antipolis, France
- Université Côte d'Azur, Laboratoire Jean-Alexandre Dieudonné, Nice, France
89
Hey, look over there: Distraction effects on rapid sequence recall. PLoS One 2020; 15:e0223743. [PMID: 32275703] [PMCID: PMC7147745] [DOI: 10.1371/journal.pone.0223743]
Abstract
In the course of everyday life, the brain must store and recall a huge variety of representations of stimuli that are presented in an ordered or sequential way. The processes by which the ordering of these items is stored and recalled are only moderately well understood. Here, we use a computational model of a cortex-like recurrent neural network adapted by a multitude of plasticity mechanisms. We first demonstrate the learning of a sequence. Then, we examine the influence of different types of distractors on the network dynamics during recall of the encoded sequence. We broadly identify two distinct categories of distractor effects, arrive at a basic understanding of why this is so, and predict which distractors will fall into each category.
90
Brendel W, Bourdoukan R, Vertechi P, Machens CK, Denève S. Learning to represent signals spike by spike. PLoS Comput Biol 2020; 16:e1007692. [PMID: 32176682] [PMCID: PMC7135338] [DOI: 10.1371/journal.pcbi.1007692]
Abstract
Networks based on coordinated spike coding can encode information with high efficiency in the spike trains of individual neurons. These networks exhibit single-neuron variability and tuning curves as typically observed in cortex, but paradoxically coincide with a precise, non-redundant spike-based population code. However, it has remained unclear whether the specific synaptic connectivities required in these networks can be learnt with local learning rules. Here, we show how to learn the required architecture. Using coding efficiency as an objective, we derive spike-timing-dependent learning rules for a recurrent neural network, and we provide exact solutions for the networks' convergence to an optimal state. As a result, we deduce an entire network from its input distribution and a firing cost. After learning, basic biophysical quantities such as voltages, firing thresholds, excitation, inhibition, or spikes acquire precise functional interpretations.

Spiking neural networks can encode information with high efficiency in the spike trains of individual neurons if the synaptic weights between neurons are set to specific, optimal values. In this regime, the networks exhibit irregular spike trains, high trial-to-trial variability, and stimulus tuning, as typically observed in cortex. The strong variability at the level of single neurons paradoxically coincides with a precise, non-redundant, spike-based population code. However, it has remained unclear whether the specific synaptic connectivities required in these spiking networks can be learnt with local learning rules. In this study, we show how the required architecture can be learnt. We derive local and biophysically plausible learning rules for recurrent neural networks from first principles. We show both mathematically and using numerical simulations that these learning rules drive the networks into the optimal state, and we show that the optimal state is governed by the statistics of the input signals. After learning, the voltages of individual neurons can be interpreted as measuring the instantaneous error of the code, given by the difference between the desired output signal and the actual output signal.
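The spike rule at the heart of such networks can be sketched directly from the objective: a neuron fires exactly when adding its decoding kernel reduces the squared readout error plus a firing cost. The greedy encoder below uses fixed random kernels (learning them is the paper's contribution and is omitted here); sizes and constants are illustrative.

```python
import numpy as np

def greedy_spike_code(x_seq, D, lam=0.05, mu=1e-3):
    """Greedy spike-by-spike encoder: a neuron fires only if adding its
    decoding kernel D[:, j] reduces ||x - xhat||^2 plus a firing cost mu.
    The 'voltage' D.T @ (x - xhat) and the threshold ||D_j||^2 / 2 + mu
    fall out of that objective. Returns the readout estimate over time."""
    r = np.zeros(D.shape[1])                 # leaky filtered spike counts
    readouts = []
    for x in x_seq:
        r *= (1.0 - lam)                     # decoding filter leak
        err = x - D @ r                      # instantaneous coding error
        v = D.T @ err                        # per-neuron 'voltage'
        thr = 0.5 * (D ** 2).sum(axis=0) + mu
        j = int(np.argmax(v - thr))
        if v[j] > thr[j]:                    # spike only if it lowers the cost
            r[j] += 1.0
        readouts.append(D @ r)
    return np.array(readouts)
```

The quantity D.T @ (x - xhat) plays the role of the membrane voltage: it measures the instantaneous coding error, matching the interpretation of voltages given in the abstract.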
Affiliation(s)
- Wieland Brendel
- Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal
- Group for Neural Theory, INSERM U960, Département d'Etudes Cognitives, Ecole Normale Supérieure, Paris, France
- Tübingen AI Center, University of Tübingen, Germany
- Ralph Bourdoukan
- Group for Neural Theory, INSERM U960, Département d'Etudes Cognitives, Ecole Normale Supérieure, Paris, France
- Pietro Vertechi
- Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal
- Group for Neural Theory, INSERM U960, Département d'Etudes Cognitives, Ecole Normale Supérieure, Paris, France
- Christian K. Machens
- Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal
- Sophie Denève
- Group for Neural Theory, INSERM U960, Département d'Etudes Cognitives, Ecole Normale Supérieure, Paris, France
91
Mendes A, Vignoud G, Perez S, Perrin E, Touboul J, Venance L. Concurrent Thalamostriatal and Corticostriatal Spike-Timing-Dependent Plasticity and Heterosynaptic Interactions Shape Striatal Plasticity Map. Cereb Cortex 2020; 30:4381-4401. [DOI: 10.1093/cercor/bhaa024] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022] Open
Abstract
The striatum integrates inputs from the cortex and thalamus, which display concomitant or sequential activity. The striatum contributes to memory formation, with acquisition of the behavioral repertoire being associated with corticostriatal (CS) plasticity. The literature has mainly focused on CS plasticity, and little is known about thalamostriatal (TS) plasticity rules or about interactions between CS and TS plasticity. Here, we studied these plasticity rules. We found bidirectional Hebbian and anti-Hebbian spike-timing-dependent plasticity (STDP) at the thalamic and cortical inputs, respectively, driving concurrent changes at the striatal synapses. Moreover, TS- and CS-STDP induced heterosynaptic plasticity. We developed a calcium-based mathematical model of the coupled TS and CS plasticity, and simulations predict complex changes in the CS and TS plasticity maps depending on the precise cortex–thalamus–striatum engram. These predictions were experimentally validated using triplet-based STDP stimulations, which revealed significant remodeling of the CS-STDP map upon TS activity, notably the induction of LTD areas in the CS-STDP map for specific timing regimes. TS-STDP exerts a greater influence on CS plasticity than CS-STDP exerts on TS plasticity. These findings highlight the major impact of precise timing of cortical and thalamic activity on the memory engram of striatal synapses.
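For readers less familiar with STDP, the pair-based kernel that such plasticity maps are built from can be sketched as follows (a generic textbook form with illustrative parameters, not the calcium-based model used in the paper):

```python
import math

def stdp(dt, a_plus=1.0, a_minus=0.5, tau=20.0, hebbian=True):
    """Weight change for one spike pair; dt = t_post - t_pre in ms."""
    if dt > 0:
        dw = a_plus * math.exp(-dt / tau)    # pre leads post -> potentiation
    else:
        dw = -a_minus * math.exp(dt / tau)   # post leads pre -> depression
    return dw if hebbian else -dw            # anti-Hebbian flips the kernel

print(stdp(+10.0) > 0)                   # Hebbian: pre-before-post -> LTP
print(stdp(-10.0) < 0)                   # Hebbian: post-before-pre -> LTD
print(stdp(+10.0, hebbian=False) < 0)    # anti-Hebbian: sign reversed
```

In the paper's terms, the thalamic and cortical inputs follow kernels of opposite sign convention (Hebbian vs. anti-Hebbian), which this toy flag makes explicit.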
Collapse
Affiliation(s)
- Alexandre Mendes
- Dynamics and Pathophysiology of Neuronal Networks Team, Center for Interdisciplinary Research in Biology (CIRB), College de France, CNRS UMR7241, INSERM U1050, PSL Research University, Paris, 75005, France
| | - Gaetan Vignoud
- Dynamics and Pathophysiology of Neuronal Networks Team, Center for Interdisciplinary Research in Biology (CIRB), College de France, CNRS UMR7241, INSERM U1050, PSL Research University, Paris, 75005, France
- Department of Mathematics, Volen National Center for Complex Systems, Brandeis University, Waltham, MA 02454-9110, USA
| | - Sylvie Perez
- Dynamics and Pathophysiology of Neuronal Networks Team, Center for Interdisciplinary Research in Biology (CIRB), College de France, CNRS UMR7241, INSERM U1050, PSL Research University, Paris, 75005, France
| | - Elodie Perrin
- Dynamics and Pathophysiology of Neuronal Networks Team, Center for Interdisciplinary Research in Biology (CIRB), College de France, CNRS UMR7241, INSERM U1050, PSL Research University, Paris, 75005, France
| | - Jonathan Touboul
- Department of Mathematics, Volen National Center for Complex Systems, Brandeis University, Waltham, MA 02454-9110, USA
| | - Laurent Venance
- Dynamics and Pathophysiology of Neuronal Networks Team, Center for Interdisciplinary Research in Biology (CIRB), College de France, CNRS UMR7241, INSERM U1050, PSL Research University, Paris, 75005, France
| |
Collapse
|
92
|
|
93
|
Kronberg G, Rahman A, Sharma M, Bikson M, Parra LC. Direct current stimulation boosts Hebbian plasticity in vitro. Brain Stimul 2020; 13:287-301. [PMID: 31668982 PMCID: PMC6989352 DOI: 10.1016/j.brs.2019.10.014] [Citation(s) in RCA: 79] [Impact Index Per Article: 19.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2019] [Revised: 09/10/2019] [Accepted: 10/16/2019] [Indexed: 12/29/2022] Open
Abstract
BACKGROUND There is evidence that transcranial direct current stimulation (tDCS) can improve learning performance. Arguably, this effect is related to long-term potentiation (LTP), but the precise biophysical mechanisms remain unknown. HYPOTHESIS We propose that direct current stimulation (DCS) causes small changes in postsynaptic membrane potential during ongoing endogenous synaptic activity. The altered voltage dynamics in the postsynaptic neuron then modify synaptic strength via the machinery of endogenous voltage-dependent Hebbian plasticity. This hypothesis predicts that DCS should exhibit Hebbian properties, namely pathway specificity and associativity. METHODS We studied the effects of DCS applied during the induction of LTP in the CA1 region of rat hippocampal slices and in a biophysical computational model. RESULTS DCS enhanced LTP, but only at synapses that were undergoing plasticity, confirming that DCS respects Hebbian pathway specificity. When different synaptic pathways cooperated to produce LTP, DCS enhanced this cooperation, boosting Hebbian associativity. Further slice experiments and computer simulations support a model in which polarization of postsynaptic pyramidal neurons drives these plasticity effects through endogenous Hebbian mechanisms. The model is able to reconcile several experimental results by capturing the complex interaction between the induced electric field, neuron morphology, and endogenous neural activity. CONCLUSIONS These results suggest that tDCS can enhance associative learning. We propose that clinical tDCS should be applied during tasks that induce Hebbian plasticity to harness this phenomenon, and that its effects should be task specific through interaction with endogenous plasticity mechanisms. Models that incorporate brain state and plasticity mechanisms may help to improve prediction of tDCS outcomes.
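The proposed mechanism can be caricatured with a toy voltage-dependent Hebbian rule (an illustrative sketch; the threshold value and linear voltage dependence are assumptions, not the paper's biophysical model): a small DC polarization shifts postsynaptic voltage, so only synapses that are already undergoing plasticity see a boosted weight change.

```python
THETA = -55.0   # assumed plasticity voltage threshold (mV)

def hebbian_dw(pre_active, v_post, dc_offset=0.0, lr=0.01):
    """Toy voltage-dependent Hebbian weight change at one synapse."""
    v = v_post + dc_offset               # DCS adds a small polarization
    if not pre_active or v <= THETA:
        return 0.0                       # inactive pathway: no plasticity
    return lr * (v - THETA)              # active pathway: voltage-graded LTP

# active pathway during LTP induction, with and without a 1 mV DC boost
boost = hebbian_dw(True, -54.0, dc_offset=1.0) - hebbian_dw(True, -54.0)
print(boost > 0)                                 # DCS enhances ongoing LTP
print(hebbian_dw(False, -54.0, dc_offset=1.0))   # inactive synapses unchanged
```

The gating by presynaptic activity is what gives the rule its pathway specificity: the DC offset alone, without coincident synaptic drive, changes nothing.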
Collapse
Affiliation(s)
- Greg Kronberg
- Department of Biomedical Engineering, The City College of New York, CUNY, 160 Convent Avenue, New York, NY, USA.
| | - Asif Rahman
- Department of Biomedical Engineering, The City College of New York, CUNY, 160 Convent Avenue, New York, NY, USA
| | - Mahima Sharma
- Department of Biomedical Engineering, The City College of New York, CUNY, 160 Convent Avenue, New York, NY, USA
| | - Marom Bikson
- Department of Biomedical Engineering, The City College of New York, CUNY, 160 Convent Avenue, New York, NY, USA
| | - Lucas C Parra
- Department of Biomedical Engineering, The City College of New York, CUNY, 160 Convent Avenue, New York, NY, USA
| |
Collapse
|
94
|
Lobov SA, Mikhaylov AN, Shamshin M, Makarov VA, Kazantsev VB. Spatial Properties of STDP in a Self-Learning Spiking Neural Network Enable Controlling a Mobile Robot. Front Neurosci 2020; 14:88. [PMID: 32174804 PMCID: PMC7054464 DOI: 10.3389/fnins.2020.00088] [Citation(s) in RCA: 54] [Impact Index Per Article: 13.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2019] [Accepted: 01/22/2020] [Indexed: 11/13/2022] Open
Abstract
Development of spiking neural networks (SNNs) controlling mobile robots is one of the modern challenges in computational neuroscience and artificial intelligence. Such networks, being replicas of biological ones, are expected to have a higher computational potential than traditional artificial neural networks (ANNs). The critical problem is in the design of robust learning algorithms aimed at building a “living computer” based on SNNs. Here, we propose a simple SNN equipped with a Hebbian rule in the form of spike-timing-dependent plasticity (STDP). The SNN implements associative learning by exploiting the spatial properties of STDP. We show that a LEGO robot controlled by the SNN can exhibit classical and operant conditioning. Competition of spike-conducting pathways in the SNN plays a fundamental role in establishing associations of neural connections. It replaces the irrelevant associations by new ones in response to a change in stimuli. Thus, the robot gets the ability to relearn when the environment changes. The proposed SNN and the stimulation protocol can be further enhanced and tested in developing neuronal cultures, and also admit the use of memristive devices for hardware implementation.
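The pathway competition described here can be sketched in a toy model (illustrative only; the timing values, learning rate, and hard weight bounds are assumptions, not the robot controller itself): two input pathways converge on one output neuron, and STDP strengthens whichever pathway reliably fires before the output, so associations can be overwritten when the environment changes.

```python
import math

def train_association(lead_pathway, w, epochs=50, a=0.05, tau=10.0):
    """lead_pathway: index (0/1) of the input firing 5 ms before the output."""
    for _ in range(epochs):
        for i in range(2):
            dt = 5.0 if i == lead_pathway else -5.0   # t_post - t_pre (ms)
            dw = a * math.exp(-abs(dt) / tau) * (1.0 if dt > 0 else -1.0)
            w[i] = min(1.0, max(0.0, w[i] + dw))      # hard bounds (assumed)
    return w

w = train_association(0, [0.5, 0.5])
print(w[0] > w[1])                # pathway 0 wins the association
w = train_association(1, w)       # stimuli change: pathway 1 now leads
print(w[1] > w[0])                # the irrelevant association is replaced
```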
Collapse
Affiliation(s)
- Sergey A Lobov
- Neurotechnology Department, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia; Neuroscience and Cognitive Technology Laboratory, Center for Technologies in Robotics and Mechatronics Components, Innopolis University, Innopolis, Russia
| | - Alexey N Mikhaylov
- Neurotechnology Department, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
| | - Maxim Shamshin
- Neurotechnology Department, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
| | - Valeri A Makarov
- Neurotechnology Department, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia; Instituto de Matemática Interdisciplinar, Facultad de Ciencias Matemáticas, Universidad Complutense de Madrid, Madrid, Spain
| | - Victor B Kazantsev
- Neurotechnology Department, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia; Neuroscience and Cognitive Technology Laboratory, Center for Technologies in Robotics and Mechatronics Components, Innopolis University, Innopolis, Russia
| |
Collapse
|
95
|
Wang X, Lin X, Dang X. Supervised learning in spiking neural networks: A review of algorithms and evaluations. Neural Netw 2020; 125:258-280. [PMID: 32146356 DOI: 10.1016/j.neunet.2020.02.011] [Citation(s) in RCA: 43] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2019] [Revised: 12/15/2019] [Accepted: 02/20/2020] [Indexed: 01/08/2023]
Abstract
As a new brain-inspired computational model of the artificial neural network, a spiking neural network encodes and processes neural information through precisely timed spike trains. Spiking neural networks are composed of biologically plausible spiking neurons, which have become suitable tools for processing complex temporal or spatiotemporal information. However, because of their intricately discontinuous and implicit nonlinear mechanisms, the formulation of efficient supervised learning algorithms for spiking neural networks is difficult, and has become an important problem in this research field. This article presents a comprehensive review of supervised learning algorithms for spiking neural networks and evaluates them qualitatively and quantitatively. First, a comparison between spiking neural networks and traditional artificial neural networks is provided. The general framework and some related theories of supervised learning for spiking neural networks are then introduced. Furthermore, the state-of-the-art supervised learning algorithms in recent years are reviewed from the perspectives of applicability to spiking neural network architecture and the inherent mechanisms of supervised learning algorithms. A performance comparison of spike train learning of some representative algorithms is also made. In addition, we provide five qualitative performance evaluation criteria for supervised learning algorithms for spiking neural networks and further present a new taxonomy for supervised learning algorithms depending on these five performance evaluation criteria. Finally, some future research directions in this research field are outlined.
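As a concrete example of one algorithm family covered by such reviews, a ReSuMe-style rule adjusts weights using the difference between the desired and actual output spike trains, gated by a presynaptic eligibility trace (a toy discrete-time sketch with assumed parameters, not any specific published implementation):

```python
def resume_step(w, pre, desired, theta=1.0, lr=0.2, tau=0.8):
    """One pass over the spike trains; returns updated weights."""
    trace = [0.0] * len(w)
    for t in range(len(desired)):
        for i in range(len(w)):
            trace[i] = tau * trace[i] + pre[i][t]     # presynaptic trace
        potential = sum(wi * tr for wi, tr in zip(w, trace))
        out = 1 if potential >= theta else 0          # threshold unit "fires"
        err = desired[t] - out                        # desired - actual spike
        for i in range(len(w)):
            w[i] += lr * err * trace[i]               # error-driven update
    return w

pre = [[1, 0, 1, 0, 1, 0], [0, 1, 0, 1, 0, 1]]        # two input spike trains
desired = [1, 0, 1, 0, 1, 0]                          # coincides with input 0
w = [0.1, 0.1]
for _ in range(30):
    w = resume_step(w, pre, desired)
print(w[0] > 0 > w[1])            # the synapse predicting the target wins
```

The update has the perceptron-like structure common to this family: no change when the output matches the target, trace-weighted potentiation after a missed spike, and depression after a spurious one.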
Collapse
Affiliation(s)
- Xiangwen Wang
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, 730070, People's Republic of China
| | - Xianghong Lin
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, 730070, People's Republic of China.
| | - Xiaochao Dang
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, 730070, People's Republic of China
| |
Collapse
|
96
|
Sadeh S, Clopath C. Patterned perturbation of inhibition can reveal the dynamical structure of neural processing. eLife 2020; 9:e52757. [PMID: 32073400 PMCID: PMC7180056 DOI: 10.7554/elife.52757] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2019] [Accepted: 02/19/2020] [Indexed: 12/18/2022] Open
Abstract
Perturbation of neuronal activity is key to understanding the brain's functional properties; however, intervention studies typically perturb neurons in a nonspecific manner. Recent optogenetic techniques have enabled patterned perturbations, in which specific patterns of activity can be evoked in identified target neurons to reveal more specific cortical function. Here, we argue that patterned perturbation of neurons is in fact necessary to reveal the specific dynamics of inhibitory stabilization, which emerge in cortical networks with strong excitatory and inhibitory functional subnetworks, as recently reported in mouse visual cortex. We propose a specific perturbative signature of these networks and investigate how it can be measured under different experimental conditions. Functionally, rapid spontaneous transitions between selective ensembles of neurons emerge in such networks, consistent with experimental results. Our study outlines the dynamical and functional properties of feature-specific inhibition-stabilized networks and suggests experimental protocols that can be used to detect them in the intact cortex.
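The hallmark of inhibitory stabilization that such perturbations probe can be seen already in a standard two-population rate model (a minimal textbook sketch with illustrative weights, not the paper's feature-specific network): injecting extra excitatory drive into the inhibitory population paradoxically lowers its steady-state rate.

```python
def steady_state(w_ee, w_ei, w_ie, w_ii, i_e, i_i, dt=0.01, steps=20000):
    """Relax a 2-population rate model to steady state (rectified-linear)."""
    r_e, r_i = 1.0, 1.0
    for _ in range(steps):
        r_e += dt * (-r_e + max(0.0, w_ee * r_e - w_ei * r_i + i_e))
        r_i += dt * (-r_i + max(0.0, w_ie * r_e - w_ii * r_i + i_i))
    return r_e, r_i

# w_ee > 1: the excitatory subnetwork is unstable on its own, i.e. the
# network operates in the inhibition-stabilized regime (assumed weights)
base = steady_state(2.0, 1.0, 2.0, 0.5, i_e=1.0, i_i=0.5)
pert = steady_state(2.0, 1.0, 2.0, 0.5, i_e=1.0, i_i=1.0)  # drive I cells
print(pert[1] < base[1])     # paradoxical drop in the inhibitory rate
```

This "paradoxical effect" is the classic single-population signature; the paper's point is that with strong functional subnetworks the analogous signature only appears under perturbations patterned across the right ensembles.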
Collapse
Affiliation(s)
- Sadra Sadeh
- Bioengineering Department, Imperial College London, London, United Kingdom
| | - Claudia Clopath
- Bioengineering Department, Imperial College London, London, United Kingdom
| |
Collapse
|
97
|
Synaptic Plasticity Depends on the Fine-Scale Input Pattern in Thin Dendrites of CA1 Pyramidal Neurons. J Neurosci 2020; 40:2593-2605. [PMID: 32047054 PMCID: PMC7096145 DOI: 10.1523/jneurosci.2071-19.2020] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2019] [Revised: 01/16/2020] [Accepted: 01/23/2020] [Indexed: 12/19/2022] Open
Abstract
Coordinated long-term plasticity of nearby excitatory synaptic inputs has been proposed to shape experience-related neuronal information processing. To elucidate the induction rules leading to spatially structured forms of synaptic potentiation in dendrites, we explored plasticity of glutamate uncaging-evoked excitatory input patterns with various spatial distributions in perisomatic dendrites of CA1 pyramidal neurons in slices from adult male rats. We show that (1) the cooperativity rules governing the induction of synaptic LTP depend on dendritic location; (2) LTP of input patterns that are subthreshold or suprathreshold for evoking local dendritic spikes (d-spikes) requires different spatial organization; and (3) input patterns evoking d-spikes can strengthen nearby, nonsynchronous synapses through local heterosynaptic plasticity crosstalk mediated by NMDAR-dependent MEK/ERK signaling. These results suggest that multiple mechanisms can trigger spatially organized synaptic plasticity on various spatial and temporal scales, enriching the ability of neurons to use synaptic clustering for information processing. SIGNIFICANCE STATEMENT A fundamental question in neuroscience is how neuronal feature selectivity is established by combining dendritic processing of synaptic input patterns with long-term synaptic plasticity. Because these processes have mostly been studied separately, the relationship between the rules of integration and the rules of plasticity has remained elusive.
Here we explore how the fine-grained spatial pattern and the form of voltage integration determine plasticity of different excitatory synaptic input patterns in perisomatic dendrites of CA1 pyramidal cells. We demonstrate that the plasticity rules depend on three factors: (1) the location of the input within the dendritic branch (proximal vs. distal), (2) the strength of the input pattern (subthreshold or suprathreshold for dendritic spikes), and (3) the stimulation of neighboring synapses.
Collapse
|
98
|
Maes A, Barahona M, Clopath C. Learning spatiotemporal signals using a recurrent spiking network that discretizes time. PLoS Comput Biol 2020; 16:e1007606. [PMID: 31961853 PMCID: PMC7028299 DOI: 10.1371/journal.pcbi.1007606] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2019] [Revised: 02/18/2020] [Accepted: 12/13/2019] [Indexed: 12/15/2022] Open
Abstract
Learning to produce spatiotemporal sequences is a common task that the brain has to solve. The same neurons may be used to produce different sequential behaviours. The way the brain learns and encodes such tasks remains unknown as current computational models do not typically use realistic biologically-plausible learning. Here, we propose a model where a spiking recurrent network of excitatory and inhibitory spiking neurons drives a read-out layer: the dynamics of the driver recurrent network is trained to encode time which is then mapped through the read-out neurons to encode another dimension, such as space or a phase. Different spatiotemporal patterns can be learned and encoded through the synaptic weights to the read-out neurons that follow common Hebbian learning rules. We demonstrate that the model is able to learn spatiotemporal dynamics on time scales that are behaviourally relevant and we show that the learned sequences are robustly replayed during a regime of spontaneous activity.
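The read-out stage of such a model can be sketched by abstracting the recurrent network as a "clock" of sequentially active neuron groups (an illustrative simplification; a simple error-driven rule stands in for the paper's Hebbian read-out plasticity, and all names are assumptions):

```python
def learn_readout(target, n_groups, epochs=200, lr=0.1):
    """w[t][k]: weight from clock group t to read-out neuron k."""
    n_out = len(target[0])
    w = [[0.0] * n_out for _ in range(n_groups)]
    for _ in range(epochs):
        for t in range(n_groups):          # group t alone is active at time t
            for k in range(n_out):
                out = w[t][k]              # linear read-out of the active group
                w[t][k] += lr * (target[t][k] - out)   # error-driven update
    return w

target = [[1, 0], [0, 1], [1, 1]]          # desired spatiotemporal pattern
w = learn_readout(target, n_groups=3)
replay = [[round(w[t][k]) for k in range(2)] for t in range(3)]
print(replay == target)                    # the sequence is replayed
```

Because time is discretized by the clock, different target sequences only differ in the read-out weights, which is the separation of timing and content the abstract describes.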
Collapse
Affiliation(s)
- Amadeus Maes
- Department of Bioengineering, Imperial College London, London, United Kingdom
| | - Mauricio Barahona
- Department of Mathematics, Imperial College London, London, United Kingdom
| | - Claudia Clopath
- Department of Bioengineering, Imperial College London, London, United Kingdom
| |
Collapse
|
99
|
Lobov SA, Chernyshov AV, Krilova NP, Shamshin MO, Kazantsev VB. Competitive Learning in a Spiking Neural Network: Towards an Intelligent Pattern Classifier. SENSORS 2020; 20:s20020500. [PMID: 31963143 PMCID: PMC7014236 DOI: 10.3390/s20020500] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/03/2019] [Revised: 01/10/2020] [Accepted: 01/14/2020] [Indexed: 12/24/2022]
Abstract
One of the modern trends in the design of human–machine interfaces (HMIs) is to involve so-called spiking neural networks (SNNs) in signal processing. SNNs can be trained by simple and efficient biologically inspired algorithms. In particular, we have shown that sensory neurons in the input layer of an SNN can simultaneously encode the input signal based both on the spiking frequency rate and on varying the latency in generating spikes. In the case of such mixed temporal-rate coding, the SNN's learning must work properly for both types of coding. On this basis, we investigate how a single neuron can be trained with pure rate and temporal patterns, and then build a universal SNN that is trained using mixed coding. In particular, we study Hebbian and competitive learning in SNNs in the context of temporal and rate coding problems. We show that Hebbian learning through the pair-based and triplet-based spike-timing-dependent plasticity (STDP) rules is workable for temporal coding, but not for rate coding. Synaptic competition inducing depression of poorly used synapses is required to ensure neural selectivity in rate coding. This kind of competition can be implemented by a so-called forgetting function that depends on neuron activity. We show that coherent use of the triplet-based STDP and synaptic competition with the forgetting function is sufficient for rate coding. Next, we propose an SNN capable of classifying electromyographic (EMG) patterns using an unsupervised learning procedure. Neuron competition achieved via lateral inhibition ensures the "winner takes all" principle among classifier neurons. The SNN also provides a gradual output response dependent on muscular contraction strength. Furthermore, we modify the SNN to implement a supervised learning method based on stimulating the target classifier neuron synchronously with the network input.
In a task of discriminating three EMG patterns, the SNN with supervised learning achieves a median accuracy of 99.5%, close to the result demonstrated by a multilayer perceptron trained by error back-propagation.
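The role of the forgetting function in rate coding can be illustrated with a toy two-synapse neuron (parameter values, the rate-product Hebbian term, and the clipping bounds are assumptions, not the authors' SNN): without forgetting, both synapses saturate and the neuron is unselective; with an activity-dependent decay, the strongly driven synapse outcompetes the weak one.

```python
def train_rate(rates, forgetting, steps=500, lr=0.01, decay=0.02):
    """Rate-based Hebbian learning with optional activity-dependent decay."""
    w = [0.5, 0.5]
    for _ in range(steps):
        post = sum(wi * r for wi, r in zip(w, rates))   # postsynaptic rate
        for i, r in enumerate(rates):
            w[i] += lr * r * post                       # Hebbian growth
            if forgetting:
                w[i] -= decay * post * w[i]             # "forgetting" term
            w[i] = min(1.0, max(0.0, w[i]))             # hard bounds (assumed)
    return w

rates = [1.0, 0.2]                          # strong vs weak input channel
print(train_rate(rates, forgetting=False))  # both saturate: no selectivity
print(train_rate(rates, forgetting=True))   # strong synapse outcompetes weak
```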
Collapse
|
100
|
Xu X, Cang J, Riecke H. Development and binocular matching of orientation selectivity in visual cortex: a computational model. J Neurophysiol 2020; 123:1305-1319. [PMID: 31913758 DOI: 10.1152/jn.00386.2019] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
In mouse visual cortex, right after eye opening, binocular cells have different preferred orientations for input from the two eyes. With normal visual experience during a critical period, these preferred orientations evolve and eventually become well matched. To gain insight into the matching process, we developed a computational model of a cortical cell receiving orientation-selective inputs via plastic synapses. The model captures the experimentally observed matching of the preferred orientations, the dependence of matching on ocular dominance of the cell, and the relationship between the degree of matching and the resulting monocular orientation selectivity. Moreover, our model puts forward testable predictions: 1) The matching speed increases with initial ocular dominance. 2) While matching improves more slowly for cells that are more orientation selective, selectivity increases faster for better-matched cells during the matching process, suggesting that matching drives orientation selectivity but not vice versa. 3) There are two main routes to matching: the preferred orientations either drift toward each other or one of the orientations switches suddenly. The latter occurs for cells with large initial mismatch and can render the cells monocular. We expect that these results provide insight more generally into the development of neuronal systems that integrate inputs from multiple sources, including different sensory modalities. NEW & NOTEWORTHY Animals gather information through multiple modalities (vision, audition, touch, etc.). These information streams have to be merged coherently to provide a meaningful representation of the world. Thus, for neurons in visual cortex V1, the orientation selectivities for inputs from the two eyes have to match to enable binocular vision. We analyze the postnatal process underlying this matching using computational modeling.
The model captures recent experimental results and reveals the interdependence between matching, ocular dominance, and orientation selectivity.
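The drift route to matching can be caricatured by letting the two monocular preferred orientations attract each other along the shortest arc of the 180° orientation circle, with the drift split between the eyes according to ocular dominance (a toy sketch with assumed dynamics, not the authors' synaptic model):

```python
def match(theta_c, theta_i, odi=0.5, steps=400, lr=0.02):
    """theta_c/theta_i: contra/ipsi preferred orientations (deg, mod 180)."""
    for _ in range(steps):
        d = (theta_i - theta_c + 90.0) % 180.0 - 90.0       # shortest arc
        theta_c = (theta_c + lr * (1.0 - odi) * d) % 180.0  # drift together,
        theta_i = (theta_i - lr * odi * d) % 180.0          # split by ODI
    return theta_c, theta_i

c, i = match(10.0, 40.0)                 # 30 degrees of initial mismatch
mismatch = abs((i - c + 90.0) % 180.0 - 90.0)
print(mismatch < 1.0)                    # orientations become well matched
```

With `odi=1.0` only the non-dominant (ipsi) orientation moves, loosely mirroring the model's dependence of matching on ocular dominance.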
Collapse
Affiliation(s)
- Xize Xu
- Department of Engineering Science and Applied Mathematics, Northwestern University, Evanston, Illinois
| | - Jianhua Cang
- Department of Biology and Department of Psychology, University of Virginia, Charlottesville, Virginia
| | - Hermann Riecke
- Department of Engineering Science and Applied Mathematics, Northwestern University, Evanston, Illinois
| |
Collapse
|