1
Jannesar N, Akbarzadeh-Sherbaf K, Safari S, Vahabie AH. SSTE: Syllable-Specific Temporal Encoding to FORCE-learn audio sequences with an associative memory approach. Neural Netw 2024;177:106368. PMID: 38761415. DOI: 10.1016/j.neunet.2024.106368.
Abstract
The circuitry and pathways in the brains of humans and other species have long inspired researchers and system designers to develop accurate and efficient systems capable of solving real-world problems and responding in real time. We propose the Syllable-Specific Temporal Encoding (SSTE) to learn vocal sequences in a reservoir of Izhikevich neurons, by forming associations between exclusive input activities and their corresponding syllables in the sequence. Our model converts the audio signals to cochleograms using the CAR-FAC model to simulate a brain-like auditory learning and memorization process. The reservoir is trained using a hardware-friendly approach to FORCE learning. Reservoir computing can yield associative-memory dynamics with far less computational complexity than conventional RNNs. SSTE-based learning achieves competitive accuracy and stable recall of spatiotemporal sequences with fewer reservoir inputs than existing encodings in the literature for a similar purpose, offering resource savings. The encoding marks syllable onsets and allows recall from any desired point in the sequence, making it particularly suitable for recalling subsets of long vocal sequences. The SSTE can learn new signals without forgetting previously memorized sequences and is robust to occasional noise, a characteristic of real-world scenarios. The components of this model are configured to reduce resource consumption and computational intensity, addressing some of the cost-efficiency issues that might arise in future implementations aiming for compact, real-time, low-power operation. Overall, this model proposes a brain-inspired pattern-generation network for vocal sequences that can be extended with other bio-inspired computations to explore their potential for brain-like auditory perception. Future designs could draw on this model to implement embedded devices that learn vocal sequences and recall them as needed in real time. Such systems could acquire language and speech, operate as artificial assistants, and convert text to speech in the presence of natural noise and corruption in audio data.
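FORCE learning, referenced throughout this entry, fits a linear readout online with recursive least squares (RLS) while the network runs and feeds the output back into the reservoir. A minimal rate-based sketch of the core update (parameters illustrative; not the paper's Izhikevich reservoir, CAR-FAC front end, or hardware-friendly variant):

```python
import numpy as np

# Minimal rate-based FORCE/RLS sketch (illustrative, not the paper's model).
rng = np.random.default_rng(0)
N, T, dt, alpha = 300, 2000, 1e-3, 1.0
J = rng.normal(0, 1.5 / np.sqrt(N), (N, N))   # fixed random recurrent weights
w = np.zeros(N)                                # trainable readout weights
w_fb = rng.uniform(-1, 1, N)                   # output feedback weights
P = np.eye(N) / alpha                          # running inverse correlation matrix
x = rng.normal(0, 0.5, N)                      # reservoir state
target = np.sin(2 * np.pi * 5 * np.arange(T) * dt)  # toy 5 Hz target signal

for t in range(T):
    r = np.tanh(x)
    z = w @ r                                  # network output
    x += dt * (-x + J @ r + w_fb * z) / 0.01   # leaky dynamics, tau = 10 ms
    # RLS step: shrink the readout error while keeping weight changes small
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)
    P -= np.outer(k, Pr)
    w -= (z - target[t]) * k                   # FORCE weight update
```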
Affiliation(s)
- Nastaran Jannesar
- High Performance Embedded Architecture Lab., School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran.
- Saeed Safari
- High Performance Embedded Architecture Lab., School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran.
- Abdol-Hossein Vahabie
- Department of Psychology, Faculty of Psychology and Education, University of Tehran, Tehran, Iran; Cognitive Systems Laboratory, Control and Intelligent Processing Center of Excellence (CIPCE), School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran.
2
Bredenberg C, Savin C. Desiderata for Normative Models of Synaptic Plasticity. Neural Comput 2024;36:1245-1285. PMID: 38776950. DOI: 10.1162/neco_a_01671.
Abstract
Normative models of synaptic plasticity use computational rationales to arrive at predictions of behavioral and network-level adaptive phenomena. In recent years, there has been an explosion of theoretical work in this realm, but experimental confirmation remains limited. In this review, we organize work on normative plasticity models in terms of a set of desiderata that, when satisfied, are designed to ensure that a given model demonstrates a clear link between plasticity and adaptive behavior, is consistent with known biological evidence about neural plasticity, and yields specific, testable predictions. As a prototype, we include a detailed analysis of the REINFORCE algorithm. We also discuss how new models have begun to improve on the identified criteria and suggest avenues for further development. Overall, we provide a conceptual guide to help develop neural learning theories that are precise, powerful, and experimentally testable.
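The REINFORCE rule analyzed here as a prototype adjusts parameters by correlating a reward signal with the gradient of the log-probability of the emitted activity. A toy sketch for a single Bernoulli spiking unit (the task and all names are illustrative, not from the review):

```python
import numpy as np

# Toy REINFORCE sketch: a stochastic (Bernoulli) unit learns to fire for
# one stimulus and stay silent for the other. Illustrative only.
rng = np.random.default_rng(1)
w = np.zeros(2)                         # weights for a 2-dim one-hot input
eta = 0.1

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

for trial in range(2000):
    x = np.eye(2)[rng.integers(2)]      # one-hot stimulus
    p = sigmoid(w @ x)                  # firing probability
    s = float(rng.random() < p)         # sampled spike
    reward = 1.0 if s == x[0] else 0.0  # rewarded for firing iff stimulus 0
    # REINFORCE: dw = eta * reward * d/dw log p(s | x), here (s - p) * x
    w += eta * reward * (s - p) * x
```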
Affiliation(s)
- Colin Bredenberg
- Center for Neural Science, New York University, New York, NY 10003, U.S.A.
- Mila-Quebec AI Institute, Montréal, QC H2S 3H1, Canada
- Cristina Savin
- Center for Neural Science, New York University, New York, NY 10003, U.S.A.
- Center for Data Science, New York University, New York, NY 10011, U.S.A.
3
Lakshminarasimhan KJ, Xie M, Cohen JD, Sauerbrei BA, Hantman AW, Litwin-Kumar A, Escola S. Specific connectivity optimizes learning in thalamocortical loops. Cell Rep 2024;43:114059. PMID: 38602873. PMCID: PMC11104520. DOI: 10.1016/j.celrep.2024.114059.
Abstract
Thalamocortical loops have a central role in cognition and motor control, but precisely how they contribute to these processes is unclear. Recent studies showing evidence of plasticity in thalamocortical synapses indicate a role for the thalamus in shaping cortical dynamics through learning. Since signals are compressed as they pass from the cortex to the thalamus, we hypothesized that the computational role of the thalamus depends critically on the structure of corticothalamic connectivity. To test this, we identified the optimal corticothalamic structure that promotes biologically plausible learning in thalamocortical synapses. We found that corticothalamic projections specialized to communicate an efference copy of the cortical output benefit motor control, while communicating the modes of highest variance is optimal for working memory tasks. We analyzed neural recordings from mice performing grasping and delayed discrimination tasks and found corticothalamic communication consistent with these predictions. These results suggest that the thalamus orchestrates cortical dynamics in a functionally precise manner through structured connectivity.
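The two corticothalamic structures contrasted in the abstract can be sketched directly: an "efference copy" projection reuses the cortical output map, while a "highest-variance" projection aligns with the leading principal components of cortical activity. A hedged illustration with simulated data (dimensions, data, and names are placeholders, not the paper's analysis):

```python
import numpy as np

# Hedged sketch: two rank-k corticothalamic compressions of cortical
# activity, (a) modes of highest variance (PCA) vs. (b) an efference
# copy through the cortical output map. Everything is illustrative.
rng = np.random.default_rng(2)
n_cortex, n_thal, T = 100, 10, 5000
X = rng.normal(size=(T, n_cortex)) @ rng.normal(size=(n_cortex, n_cortex)) * 0.1

# (a) highest-variance modes: top-k principal axes of cortical activity
Xc = X - X.mean(0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
W_pca = Vt[:n_thal]                    # (n_thal, n_cortex) projection

# (b) efference copy: reuse the rows of a (here random) cortical output map
W_out = rng.normal(size=(n_thal, n_cortex)) / np.sqrt(n_cortex)

thalamic_input_a = Xc @ W_pca.T        # working-memory-style compression
thalamic_input_b = Xc @ W_out.T        # motor-control-style efference copy
```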
Affiliation(s)
- Marjorie Xie
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Jeremy D Cohen
- Neuroscience Center, University of North Carolina, Chapel Hill, NC 27559, USA
- Britton A Sauerbrei
- Department of Neurosciences, Case Western Reserve University, Cleveland, OH 44106, USA
- Adam W Hantman
- Neuroscience Center, University of North Carolina, Chapel Hill, NC 27559, USA
- Ashok Litwin-Kumar
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA.
- Sean Escola
- Department of Psychiatry, Columbia University, New York, NY 10032, USA.
4
Subramoney A, Bellec G, Scherr F, Legenstein R, Maass W. Fast learning without synaptic plasticity in spiking neural networks. Sci Rep 2024;14:8557. PMID: 38609429. PMCID: PMC11015027. DOI: 10.1038/s41598-024-55769-0.
Abstract
Spiking neural networks are of high current interest, both from the perspective of modelling neural networks of the brain and for porting their fast learning capability and energy efficiency into neuromorphic hardware. So far, however, the fast learning capabilities of the brain have not been reproduced in spiking neural networks. Biological data suggest that a synergy of synaptic plasticity on a slow time scale with network dynamics on a faster time scale is responsible for the fast learning capabilities of the brain. We show here that a suitable orchestration of this synergy between synaptic plasticity and network dynamics does in fact reproduce fast learning capabilities in generic recurrent networks of spiking neurons. This points to the important role of recurrent connections in spiking networks, since these are necessary for enabling salient network dynamics. We show more specifically that the proposed synergy enables synaptic weights to encode more general information such as priors and task structures, since moment-to-moment processing of new information can be delegated to the network dynamics.
Affiliation(s)
- Anand Subramoney
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Department of Computer Science, Royal Holloway University of London, Egham, UK
- Guillaume Bellec
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Laboratory of Computational Neuroscience, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Franz Scherr
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Robert Legenstein
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Wolfgang Maass
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria.
5
Maslennikov O, Perc M, Nekorkin V. Topological features of spike trains in recurrent spiking neural networks that are trained to generate spatiotemporal patterns. Front Comput Neurosci 2024;18:1363514. PMID: 38463243. PMCID: PMC10920356. DOI: 10.3389/fncom.2024.1363514.
Abstract
In this study, we focus on training recurrent spiking neural networks to generate spatiotemporal patterns in the form of closed two-dimensional trajectories. Spike trains in the trained networks are examined in terms of their dissimilarity using the Victor-Purpura distance. We apply algebraic topology methods to the matrices obtained by rank-ordering the entries of the distance matrices, specifically calculating the persistence barcodes and Betti curves. By comparing the features of different types of output patterns, we uncover the complex relations between low-dimensional target signals and the underlying multidimensional spike trains.
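The Victor-Purpura distance used above prices moving a spike in time at q per unit time, and inserting or deleting a spike at 1; it is computed with a standard dynamic program. A minimal sketch (the persistence barcodes and Betti curves would then be computed on the rank-ordered distance matrix with standard TDA tooling):

```python
import numpy as np

# Victor-Purpura spike-train distance via dynamic programming.
# q (1/s) is the cost per second of shifting a spike; insert/delete cost 1.
def victor_purpura(t1, t2, q):
    n, m = len(t1), len(t2)
    G = np.zeros((n + 1, m + 1))
    G[:, 0] = np.arange(n + 1)        # delete all spikes of train 1
    G[0, :] = np.arange(m + 1)        # insert all spikes of train 2
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            G[i, j] = min(G[i - 1, j] + 1,                       # delete
                          G[i, j - 1] + 1,                       # insert
                          G[i - 1, j - 1]                        # shift
                          + q * abs(t1[i - 1] - t2[j - 1]))
    return G[n, m]

# Example: distance between two short spike trains (times in seconds)
d = victor_purpura([0.10, 0.25, 0.40], [0.12, 0.43], q=10.0)
```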
Affiliation(s)
- Oleg Maslennikov
- Federal Research Center A.V. Gaponov-Grekhov Institute of Applied Physics of the Russian Academy of Sciences, Nizhny Novgorod, Russia
- Matjaž Perc
- Faculty of Natural Sciences and Mathematics, University of Maribor, Maribor, Slovenia
- Department of Medical Research, China Medical University Hospital, China Medical University, Taichung City, Taiwan
- Complexity Science Hub Vienna, Vienna, Austria
- Department of Physics, Kyung Hee University, Seoul, Republic of Korea
- Vladimir Nekorkin
- Federal Research Center A.V. Gaponov-Grekhov Institute of Applied Physics of the Russian Academy of Sciences, Nizhny Novgorod, Russia
6
Alemi A, Aksay ERF, Goldman MS. A Lyapunov theory demonstrating a fundamental limit on the speed of systems consolidation. arXiv 2024; arXiv:2402.01605v1. PMID: 38351934. PMCID: PMC10862927.
Abstract
The nervous system reorganizes memories from an early site to a late site, a commonly observed feature of learning and memory systems known as systems consolidation. Previous work has suggested learning rules by which consolidation may occur. Here, we provide conditions under which such rules are guaranteed to lead to stable convergence of learning and consolidation. We use the theory of Lyapunov functions, which enforces stability by requiring learning rules to decrease an energy-like (Lyapunov) function. We present the theory in the context of a simple circuit architecture motivated by classic models of learning in systems consolidation mediated by the cerebellum. Stability is only guaranteed if the learning rate in the late stage is not faster than the learning rate in the early stage. Further, the slower the learning rate at the late stage, the larger the perturbation the system can tolerate with a guarantee of stability. We provide intuition for this result by mapping the consolidation model to a damped driven oscillator system and showing that the ratio of early- to late-stage learning rates in the consolidation model can be directly identified with the (square of the) oscillator's damping ratio. This work suggests the power of the Lyapunov approach to provide constraints on nervous system function.
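The oscillator mapping can be pictured directly: since the abstract identifies the squared damping ratio with the early- to late-stage learning-rate ratio, a slow late stage corresponds to an overdamped, robustly decaying response to a perturbation of the memory. A toy simulation of that correspondence (parameter names illustrative; these are not the paper's circuit equations):

```python
import numpy as np

# Damped driven oscillator sketch: the abstract maps consolidation onto
# this system, with zeta^2 identified with the early/late learning-rate
# ratio. Semi-implicit Euler integration of a ring-down from x(0) = 1.
def ringdown(zeta, omega=2 * np.pi, dt=1e-3, T=10.0):
    x, v = 1.0, 0.0                        # initial perturbation of the memory
    xs = []
    for _ in range(int(T / dt)):
        a = -2 * zeta * omega * v - omega ** 2 * x
        v += dt * a
        x += dt * v
        xs.append(x)
    return np.array(xs)

slow_late = ringdown(zeta=2.0)   # early >> late rate: overdamped, monotone decay
fast_late = ringdown(zeta=0.1)   # late close to early: underdamped, oscillatory
```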
Affiliation(s)
- Alireza Alemi
- Center for Neuroscience and Department of Neurobiology, Physiology, and Behavior, University of California, Davis, Davis, CA 95616, USA
- Emre R. F. Aksay
- Institute for Computational Biomedicine and Department of Physiology and Biophysics, Weill Cornell Medical College, New York, NY 10021, USA
- Mark S. Goldman
- Center for Neuroscience and Department of Neurobiology, Physiology, and Behavior, University of California, Davis, Davis, CA 95616, USA
- Department of Ophthalmology and Vision Science, University of California, Davis, Davis, CA 95616, USA
7
Song Y, Millidge B, Salvatori T, Lukasiewicz T, Xu Z, Bogacz R. Inferring neural activity before plasticity as a foundation for learning beyond backpropagation. Nat Neurosci 2024;27:348-358. PMID: 38172438. PMCID: PMC7615830. DOI: 10.1038/s41593-023-01514-1.
Abstract
For both humans and machines, the essence of learning is to pinpoint which components in their information-processing pipeline are responsible for an error in their output, a challenge known as 'credit assignment'. It has long been assumed that credit assignment is best solved by backpropagation, which is also the foundation of modern machine learning. Here, we set out a fundamentally different principle of credit assignment called 'prospective configuration'. In prospective configuration, the network first infers the pattern of neural activity that should result from learning, and then the synaptic weights are modified to consolidate the change in neural activity. We demonstrate that this distinct mechanism, in contrast to backpropagation, (1) underlies learning in a well-established family of models of cortical circuits, (2) enables learning that is more efficient and effective in many contexts faced by biological organisms and (3) reproduces surprising patterns of neural activity and behavior observed in diverse human and rat learning experiments.
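Prospective configuration is closely related to energy-based and predictive-coding models: activities are first relaxed toward the configuration that learning should produce (with the target clamped), and only then are weights updated to consolidate it. A minimal two-layer sketch under that reading (illustrative, not the paper's exact model or parameters):

```python
import numpy as np

# Two-layer predictive-coding-style sketch of "inference before plasticity":
# relax hidden activity with the output target clamped, then update weights
# toward the inferred configuration. Illustrative names and constants.
rng = np.random.default_rng(3)
n_in, n_hid, n_out = 4, 8, 2
W1 = rng.normal(0, 0.5, (n_hid, n_in))
W2 = rng.normal(0, 0.5, (n_out, n_hid))

def train_step(x_in, y_target, eta_x=0.1, eta_w=0.05, n_relax=50):
    global W1, W2
    pre = W1 @ x_in
    h = np.tanh(pre)                          # feedforward initialization
    for _ in range(n_relax):                  # inference phase
        e_h = h - np.tanh(pre)                # hidden-layer prediction error
        e_y = y_target - W2 @ h               # error at the clamped output
        h += eta_x * (-e_h + W2.T @ e_y)      # relax hidden activity
    # plasticity phase: consolidate the inferred activity configuration
    W2 += eta_w * np.outer(y_target - W2 @ h, h)
    W1 += eta_w * np.outer((h - np.tanh(pre)) * (1 - np.tanh(pre) ** 2), x_in)

train_step(rng.normal(size=n_in), np.array([1.0, -1.0]))
```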
Affiliation(s)
- Yuhang Song
- Department of Computer Science, University of Oxford, Oxford, UK.
- Medical Research Council Brain Network Dynamics Unit, University of Oxford, Oxford, UK.
- Fractile, Ltd., London, UK.
- Beren Millidge
- Medical Research Council Brain Network Dynamics Unit, University of Oxford, Oxford, UK
- Tommaso Salvatori
- Department of Computer Science, University of Oxford, Oxford, UK
- Institute of Logic and Computation, Vienna University of Technology, Vienna, Austria
- VERSES AI Research Lab, Los Angeles, CA, USA
- Thomas Lukasiewicz
- Department of Computer Science, University of Oxford, Oxford, UK.
- Institute of Logic and Computation, Vienna University of Technology, Vienna, Austria.
- Zhenghua Xu
- Department of Computer Science, University of Oxford, Oxford, UK.
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin, China.
- Rafal Bogacz
- Medical Research Council Brain Network Dynamics Unit, University of Oxford, Oxford, UK.
8
Xing D, Yang Y, Zhang T, Xu B. A Brain-Inspired Approach for Probabilistic Estimation and Efficient Planning in Precision Physical Interaction. IEEE Trans Cybern 2023;53:6248-6262. PMID: 35442901. DOI: 10.1109/tcyb.2022.3164750.
Abstract
This article presents a novel structure of spiking neural networks (SNNs) to simulate the joint function of multiple brain regions in handling precision physical interactions. This task demands efficient movement planning that accounts for contact prediction and fast radial compensation. Contact prediction requires cognitive memory of the interaction model, and we propose a novel double recurrent network that imitates the hippocampus and captures the spatiotemporal properties of the distribution. The radial contact response needs rich spatial information, and we use a cerebellum-inspired module to achieve temporally dynamic prediction. We also use a block-based feedforward network to plan movements, behaving like the prefrontal cortex. These modules are integrated to realize the joint cognitive function of multiple brain regions in prediction, control, and planning. We present an appropriate controller and planner to generate teaching signals and provide a feasible network initialization for reinforcement learning, which modifies synapses in accordance with reality. The experimental results demonstrate the validity of the proposed method.
9
Bredenberg C, Savin C. Desiderata for normative models of synaptic plasticity. arXiv 2023; arXiv:2308.04988v1. PMID: 37608931. PMCID: PMC10441445.
Abstract
Normative models of synaptic plasticity use a combination of mathematics and computational simulations to arrive at predictions of behavioral and network-level adaptive phenomena. In recent years, there has been an explosion of theoretical work on these models, but experimental confirmation is relatively limited. In this review, we organize work on normative plasticity models in terms of a set of desiderata which, when satisfied, are designed to guarantee that a model has a clear link between plasticity and adaptive behavior, consistency with known biological evidence about neural plasticity, and specific testable predictions. We then discuss how new models have begun to improve on these criteria and suggest avenues for further development. As prototypes, we provide detailed analyses of two specific models - REINFORCE and the Wake-Sleep algorithm. We provide a conceptual guide to help develop neural learning theories that are precise, powerful, and experimentally testable.
Affiliation(s)
- Colin Bredenberg
- Center for Neural Science, New York University, New York, NY 10003, USA
- Mila-Quebec AI Institute, 6666 Rue Saint-Urbain, Montréal, QC H2S 3H1, Canada
- Cristina Savin
- Center for Neural Science, New York University, New York, NY 10003, USA
- Center for Data Science, New York University, New York, NY 10011, USA
10
Aceituno PV, Farinha MT, Loidl R, Grewe BF. Learning cortical hierarchies with temporal Hebbian updates. Front Comput Neurosci 2023;17:1136010. PMID: 37293353. PMCID: PMC10244748. DOI: 10.3389/fncom.2023.1136010.
Abstract
A key driver of mammalian intelligence is the ability to represent incoming sensory information across multiple abstraction levels. For example, in the visual ventral stream, incoming signals are first represented as low-level edge filters and then transformed into high-level object representations. Similar hierarchical structures routinely emerge in artificial neural networks (ANNs) trained for object recognition tasks, suggesting that similar structures may underlie biological neural networks. However, the classical ANN training algorithm, backpropagation, is considered biologically implausible, and alternative biologically plausible training methods have therefore been developed, such as Equilibrium Propagation, Deep Feedback Control, Supervised Predictive Coding, and Dendritic Error Backpropagation. Several of these models propose that local errors are calculated for each neuron by comparing apical and somatic activities. However, from a neuroscience perspective, it is not clear how a neuron could compare compartmental signals. Here, we propose a solution to this problem: we let the apical feedback signal change the postsynaptic firing rate and combine this with a differential Hebbian update, a rate-based version of classical spike-timing-dependent plasticity (STDP). We prove that weight updates of this form minimize two alternative loss functions, the inference latency and the amount of top-down feedback required, which we show to be equivalent to the error-based losses used in machine learning. Moreover, we show that differential Hebbian updates work similarly well in other feedback-based deep learning frameworks such as Predictive Coding or Equilibrium Propagation. Finally, our work removes a key requirement of biologically plausible models for deep learning and proposes a learning mechanism that would explain how temporal Hebbian learning rules can implement supervised hierarchical learning.
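The differential Hebbian update named above changes a weight in proportion to the presynaptic rate times the temporal derivative of the postsynaptic rate, so feedback that transiently raises postsynaptic firing potentiates the synapses that were active. A minimal sketch (the traces and constants are illustrative, not the paper's network):

```python
import numpy as np

# Differential Hebbian update sketch: dw ~ r_pre * d(r_post)/dt.
# A positive postsynaptic rate derivative (e.g., from apical feedback)
# potentiates currently active synapses. Illustrative traces only.
rng = np.random.default_rng(4)
dt, T = 1e-3, 1000
r_pre = rng.random(T)                                        # presynaptic rate
r_post = np.convolve(r_pre, np.ones(20) / 20, mode="same")   # toy post trace

w, eta = 0.0, 0.01
for t in range(1, T):
    dr_post = (r_post[t] - r_post[t - 1]) / dt   # postsynaptic rate derivative
    w += eta * dt * r_pre[t] * dr_post           # differential Hebbian rule
```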
Affiliation(s)
- Pau Vilimelis Aceituno
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- ETH AI Center, ETH Zurich, Zurich, Switzerland
- Reinhard Loidl
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Benjamin F. Grewe
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- ETH AI Center, ETH Zurich, Zurich, Switzerland
11
DePasquale B, Sussillo D, Abbott LF, Churchland MM. The centrality of population-level factors to network computation is demonstrated by a versatile approach for training spiking networks. Neuron 2023;111:631-649.e10. PMID: 36630961. PMCID: PMC10118067. DOI: 10.1016/j.neuron.2022.12.007.
Abstract
Neural activity is often described in terms of population-level factors extracted from the responses of many neurons. Factors provide a lower-dimensional description with the aim of shedding light on network computations. Yet, mechanistically, computations are performed not by continuously valued factors but by interactions among neurons that spike discretely and variably. Models provide a means of bridging these levels of description. We developed a general method for training model networks of spiking neurons by leveraging factors extracted from either data or firing-rate-based networks. In addition to providing a useful model-building framework, this formalism illustrates how reliable and continuously valued factors can arise from seemingly stochastic spiking. Our framework establishes procedures for embedding this property in network models with different levels of realism. The relationship between spikes and factors in such networks provides a foundation for interpreting (and subtly redefining) commonly used quantities such as firing rates.
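One way to picture the relation between spikes and factors described here is to regress filtered spike trains onto target population-level factors. The paper's full training procedure goes further, but a ridge-regression readout conveys the core idea; everything below (random spiking, sinusoidal factors, constants) is an illustrative stand-in:

```python
import numpy as np

# Sketch: recover continuously valued factors from discrete, variable
# spikes by filtering the spikes and fitting a linear readout (ridge
# regression). Illustrative stand-in for the paper's training framework.
rng = np.random.default_rng(5)
N, T, dt, tau = 200, 4000, 1e-3, 0.02
rates = 5 + 5 * rng.random(N)                       # Poisson rates (Hz)
spikes = rng.random((T, N)) < rates * dt            # spike raster (T, N)

filt = np.zeros(N)
R = np.empty((T, N))
for t in range(T):                                  # exponential synaptic filter
    filt += dt * (-filt / tau) + spikes[t] / tau
    R[t] = filt

ts = np.arange(T) * dt
F = np.stack([np.sin(2 * np.pi * ts), np.cos(2 * np.pi * ts)], axis=1)  # targets
lam = 1.0
W = np.linalg.solve(R.T @ R + lam * np.eye(N), R.T @ F)  # ridge regression
F_hat = R @ W                                       # factor estimate from spikes
```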
Affiliation(s)
- Brian DePasquale
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA; Department of Neuroscience, Columbia University, New York, NY, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY, USA.
- David Sussillo
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- L F Abbott
- Department of Neuroscience, Columbia University, New York, NY, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Department of Physiology and Cellular Biophysics, Columbia University, New York, NY, USA; Kavli Institute for Brain Science, Columbia University, New York, NY, USA
- Mark M Churchland
- Department of Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Kavli Institute for Brain Science, Columbia University, New York, NY, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
12
Rafati AH, Ardalan M, Vontell RT, Mallard C, Wegener G. Geometrical modelling of neuronal clustering and development. Heliyon 2022;8:e09871. PMID: 35847609. PMCID: PMC9283893. DOI: 10.1016/j.heliyon.2022.e09871.
Abstract
The dynamic geometry of neuronal development is an essential concept in theoretical neuroscience. We aimed to design a mathematical model that describes, step by step and in an innovative form, neuronal development geometrically and spatially models the neuronal-electrical field interaction. We demonstrated flexibility in forming the cell and its nucleus to show neuronal growth from inside to outside, using a fractal cylinder to generate neurons (pyramidal/sphere) in the form of what is mathematically called a 'surface of revolution'. Furthermore, we examined the effect of adjacent neurons on a free branch from one side by modelling a 'normal vector surface' that represented a group of neurons. Our model also indicated how the geometrical shapes and clustering of neurons can be transformed mathematically into a vector field that is equivalent to the neuronal electromagnetic activity/electric flux. We further simulated the neuronal-electrical field interaction, implemented spatially using a Van der Pol oscillator and taking the Laplacian vector field as it reflects the biophysical mechanism of neuronal activity and geometrical change. In brief, our study may serve as a platform and an inspiring model for more complicated geometrical and electrical constructions.
Affiliation(s)
- Ali H Rafati
- Translational Neuropsychiatry Unit, Department of Clinical Medicine, Aarhus University, 8000 Aarhus C, Denmark
- Maryam Ardalan
- Translational Neuropsychiatry Unit, Department of Clinical Medicine, Aarhus University, 8000 Aarhus C, Denmark; Institute of Neuroscience and Physiology, Centre for Perinatal Medicine and Health, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden; Center of Functionally Integrative Neuroscience-SKS, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Regina T Vontell
- Department of Neurology, University of Miami Miller School of Medicine, Brain Endowment Bank, Miami, USA
- Carina Mallard
- Institute of Neuroscience and Physiology, Centre for Perinatal Medicine and Health, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Gregers Wegener
- Translational Neuropsychiatry Unit, Department of Clinical Medicine, Aarhus University, 8000 Aarhus C, Denmark
13
Cramer B, Stradmann Y, Schemmel J, Zenke F. The Heidelberg Spiking Data Sets for the Systematic Evaluation of Spiking Neural Networks. IEEE Trans Neural Netw Learn Syst 2022;33:2744-2757. PMID: 33378266. DOI: 10.1109/tnnls.2020.3044364.
Abstract
Spiking neural networks are the basis of versatile and power-efficient information processing in the brain. Although we currently lack a detailed understanding of how these networks compute, recently developed optimization techniques allow us to instantiate increasingly complex functional spiking neural networks in silico. These methods hold the promise of building more efficient non-von-Neumann computing hardware and will offer new vistas in the quest to unravel brain circuit function. To accelerate the development of such methods, objective ways to compare their performance are indispensable. Presently, however, there are no widely accepted means for comparing the computational performance of spiking neural networks. To address this issue, we introduce two spike-based classification data sets, broadly applicable to benchmark both software and neuromorphic hardware implementations of spiking neural networks. To accomplish this, we developed a general audio-to-spiking conversion procedure inspired by neurophysiology. Furthermore, we applied this conversion to an existing and a novel speech data set. The latter is the free, high-fidelity, and word-level aligned Heidelberg digit data set that we created specifically for this study. By training a range of conventional and spiking classifiers, we show that leveraging spike timing information within these data sets is essential for good classification accuracy. These results serve as the first reference for future performance comparisons of spiking neural networks.
14
Ioannides G, Kourouklides I, Astolfi A. Spatiotemporal dynamics in spiking recurrent neural networks using modified-full-FORCE on EEG signals. Sci Rep 2022;12:2896. PMID: 35190579. PMCID: PMC8861015. DOI: 10.1038/s41598-022-06573-1.
Abstract
Methods for modelling the human brain as a complex system have increased remarkably in the literature as researchers seek to understand the underlying foundations of cognition, behaviour, and perception. Computational methods, especially graph-theoretic methods, have recently contributed significantly to understanding the wiring connectivity of the brain, modelling it as a set of nodes connected by edges. The brain's spatiotemporal dynamics can therefore be studied holistically by considering a network consisting of many neurons, represented by nodes. Various models have been proposed for modelling such neurons. A recently proposed method for training such networks, called full-FORCE, produces networks that perform tasks with fewer neurons and greater noise robustness than previous least-squares approaches (i.e., the FORCE method). In this paper, the first direct applicability of a variant of the full-FORCE method to biologically motivated spiking RNNs (SRNNs) is demonstrated. The SRNN is a graph consisting of modules, each modelled as a small-world network (SWN), a specific type of biologically plausible graph. The first direct applicability of a variant of the full-FORCE method to modular SWNs is thus demonstrated and evaluated through regression and information-theoretic metrics. For the first time, the aforementioned method is applied to spiking neuron models and trained on various real-life electroencephalography (EEG) signals. To the best of the authors' knowledge, all the contributions of this paper are novel. Results show that trained SRNNs match EEG signals almost perfectly, while network dynamics can mimic the target dynamics. This demonstrates that the holistic setup of the network model and the neuron model, both more biologically plausible than in previous work, can be tuned to real biological signal dynamics.
Affiliation(s)
- Georgios Ioannides
- Department of Electrical and Electronic Engineering, Imperial College London, London, SW7 2AZ, UK.
- Ioannis Kourouklides
- Department of Electrical Engineering, Computer Engineering and Informatics, Cyprus University of Technology, 33 Saripolou Street, 3036, Limassol, Cyprus
- Alessandro Astolfi
- Department of Electrical and Electronic Engineering, Imperial College London, London, SW7 2AZ, UK
15
Büchel J, Zendrikov D, Solinas S, Indiveri G, Muir DR. Supervised training of spiking neural networks for robust deployment on mixed-signal neuromorphic processors. Sci Rep 2021;11:23376. PMID: 34862429. PMCID: PMC8642544. DOI: 10.1038/s41598-021-02779-x.
Abstract
Mixed-signal analog/digital circuits emulate spiking neurons and synapses with extremely high energy efficiency, an approach known as "neuromorphic engineering". However, analog circuits are sensitive to process-induced variation among transistors in a chip ("device mismatch"). For neuromorphic implementation of Spiking Neural Networks (SNNs), mismatch causes parameter variation between identically-configured neurons and synapses. Each chip exhibits a different distribution of neural parameters, causing deployed networks to respond differently between chips. Current solutions to mitigate mismatch based on per-chip calibration or on-chip learning entail increased design complexity, area and cost, making deployment of neuromorphic devices expensive and difficult. Here we present a supervised learning approach that produces SNNs with high robustness to mismatch and other common sources of noise. Our method trains SNNs to perform temporal classification tasks by mimicking a pre-trained dynamical system, using a local learning rule from non-linear control theory. We demonstrate our method on two tasks requiring temporal memory, and measure the robustness of our approach to several forms of noise and mismatch. We show that our approach is more robust than common alternatives for training SNNs. Our method provides robust deployment of pre-trained networks on mixed-signal neuromorphic hardware, without requiring per-device training or calibration.
Affiliation(s)
- Julian Büchel
- SynSense, Thurgauerstrasse 40, 8050, Zurich, Switzerland
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Winterthurerstrasse 190, 8057, Zurich, Switzerland
- Dmitrii Zendrikov
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Winterthurerstrasse 190, 8057, Zurich, Switzerland
- Sergio Solinas
- Department of Biomedical Science, University of Sassari, Piazza Università, 21, 07100, Sassari, Sardegna, Italy
- Giacomo Indiveri
- SynSense, Thurgauerstrasse 40, 8050, Zurich, Switzerland
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Winterthurerstrasse 190, 8057, Zurich, Switzerland
- Dylan R Muir
- SynSense, Thurgauerstrasse 40, 8050, Zurich, Switzerland.
16
Gerum RC, Schilling A. Integration of Leaky-Integrate-and-Fire Neurons in Standard Machine Learning Architectures to Generate Hybrid Networks: A Surrogate Gradient Approach. Neural Comput 2021;33:2827-2852. PMID: 34280298. DOI: 10.1162/neco_a_01424.
Abstract
Up to now, modern machine learning (ML) has been based on approximating big data sets with high-dimensional functions, taking advantage of huge computational resources. We show that biologically inspired neuron models such as the leaky integrate-and-fire (LIF) neuron provide novel and efficient ways of information processing. They can be integrated into machine learning models and are a potential target for improving ML performance. To this end, we derived simple update rules for LIF units to numerically integrate the differential equations. We apply a surrogate gradient approach to train the LIF units via backpropagation. We demonstrate that tuning the leak term of the LIF neurons can be used to run the neurons in different operating modes, such as simple signal integrators or coincidence detectors. Furthermore, we show that the constant surrogate gradient, in combination with tuning the leak term of the LIF units, can be used to achieve the learning dynamics of more complex surrogate gradients. To prove the validity of our method, we applied it to established image data sets (the Oxford 102 flower data set, MNIST), implemented various network architectures, used several input data encodings, and demonstrated that the method is suitable for achieving state-of-the-art classification performance. We provide our method as well as further surrogate gradient methods to train spiking neural networks via backpropagation as an open-source KERAS package to make it available to the neuroscience and machine learning community. To increase the interpretability of the underlying effects and thus make a small step toward opening the black box of machine learning, we provide interactive illustrations, with the possibility of systematically monitoring the effects of parameter changes on the learning characteristics.
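The two ingredients described above, a discrete-time LIF update rule and a surrogate gradient for the non-differentiable spike, can be sketched compactly. This is a generic illustration (a boxcar surrogate as one simple choice; parameter names are ours, not the API of the paper's Keras package):

```python
import numpy as np

# Discrete-time LIF forward pass: leaky integration, threshold, soft reset.
def lif_forward(inputs, alpha=0.9, v_th=1.0):
    v, spikes, vs = 0.0, np.zeros(len(inputs)), np.zeros(len(inputs))
    for t, i_t in enumerate(inputs):
        v = alpha * v + i_t          # leaky integration (alpha = leak factor)
        if v >= v_th:                # threshold crossing produces a spike
            spikes[t] = 1.0
            v -= v_th                # soft reset keeps the residual potential
        vs[t] = v
    return spikes, vs

def surrogate_grad(v, v_th=1.0, scale=1.0, width=0.5):
    # The Heaviside spike function has zero derivative almost everywhere;
    # in the backward pass it is replaced by a simple bump so that errors
    # can flow through spiking units during backpropagation.
    return scale * (np.abs(v - v_th) < width).astype(float)

spikes, vs = lif_forward(np.full(50, 0.3))
grads = surrogate_grad(vs)
```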
Affiliation(s)
- Richard C Gerum
- Department of Physics and Center for Vision Research, York University, Toronto, Ontario M3J 1P3, Canada
- Achim Schilling
- Experimental Otolaryngology, Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany; Cognitive Computational Neuroscience Group at the Chair of English Philology and Linguistics, Friedrich-Alexander University Erlangen-Nürnberg, 91054 Erlangen, Germany; and Laboratoire Neurosciences Sensorielles et Cognitives, Aix-Marseille University, 13331 Marseille, France
17
Singanamalla SKR, Lin CT. Spiking Neural Network for Augmenting Electroencephalographic Data for Brain Computer Interfaces. Front Neurosci 2021;15:651762. PMID: 33867928. PMCID: PMC8047134. DOI: 10.3389/fnins.2021.651762.
Abstract
With the advent of advanced machine learning methods, the performance of brain–computer interfaces (BCIs) has improved unprecedentedly. However, electroencephalography (EEG), a commonly used brain imaging method for BCI, involves a tedious experimental setup and frequent data loss due to artifacts, and bulk trial recordings are too time-consuming to take full advantage of deep learning classifiers. Some studies have tried to address this issue by generating artificial EEG signals. However, some of these methods fail to retain the prominent features or biomarkers of the signal, and other deep-learning-based generative methods require a huge number of training samples, with most models handling augmentation of only one category or class of data per training session. There is therefore a need for a generative model that can synthesize multi-class EEG samples from as few available trials as possible while retaining the biomarkers of the signal. Since the EEG signal represents an accumulation of action potentials from neuronal populations beneath the scalp surface, and since a spiking neural network (SNN), an artificial neural network closer to biology, communicates via spiking behavior, we propose an SNN-based approach using surrogate-gradient-descent learning to reconstruct and generate multi-class artificial EEG signals from just a few original samples. The network was employed for augmenting motor imagery (MI) and steady-state visually evoked potential (SSVEP) data. The artificial data are validated through classification and correlation metrics to assess their resemblance to the original data, and in turn enhance MI classification performance.
Affiliation(s)
- Sai Kalyan Ranga Singanamalla
- Computational Intelligence and Brain Computer Interface Lab, School of Computer Science, University of Technology Sydney, Sydney, NSW, Australia
- Chin-Teng Lin
- Computational Intelligence and Brain Computer Interface Lab, School of Computer Science, University of Technology Sydney, Sydney, NSW, Australia; Centre for Artificial Intelligence, University of Technology Sydney, Sydney, NSW, Australia
18
Boffi NM, Slotine JJE. Implicit Regularization and Momentum Algorithms in Nonlinearly Parameterized Adaptive Control and Prediction. Neural Comput 2021;33:590-673. PMID: 33513321. DOI: 10.1162/neco_a_01360.
Abstract
Stable concurrent learning and control of dynamical systems is the subject of adaptive control. Despite being an established field with many practical applications and a rich theory, much of the development in adaptive control for nonlinear systems revolves around a few key algorithms. By exploiting strong connections between classical adaptive nonlinear control techniques and recent progress in optimization and machine learning, we show that there exists considerable untapped potential in algorithm development for both adaptive nonlinear control and adaptive dynamics prediction. We begin by introducing first-order adaptation laws inspired by natural gradient descent and mirror descent. We prove that when there are multiple dynamics consistent with the data, these non-Euclidean adaptation laws implicitly regularize the learned model. Local geometry imposed during learning may thus be used to select parameter vectors, out of the many that will achieve perfect tracking or prediction, for desired properties such as sparsity. We apply this result to regularized dynamics predictor and observer design, and as concrete examples, we consider Hamiltonian systems, Lagrangian systems, and recurrent neural networks. We subsequently develop a variational formalism based on the Bregman Lagrangian. We show that its Euler-Lagrange equations lead to natural-gradient and mirror-descent-like adaptation laws with momentum, and we recover their first-order analogues in the infinite-friction limit. We illustrate our analyses with simulations demonstrating our theoretical results.
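The implicit-regularization point can be illustrated with a mirror-descent-style update on an underdetermined regression: a Euclidean gradient update converges to a minimum-l2-norm interpolant, while an exponentiated (non-Euclidean) update biases the learned parameters toward sparsity. A hedged sketch of that phenomenon, not the paper's adaptation laws:

```python
import numpy as np

# Implicit regularization via a non-Euclidean (exponentiated-gradient)
# update on an underdetermined linear problem. The u - v parametrization
# with multiplicative updates favors approximately sparse interpolants.
rng = np.random.default_rng(6)
n, d = 20, 100
A = rng.normal(size=(n, d))
a_true = np.zeros(d)
a_true[:3] = [1.0, -2.0, 0.5]              # sparse ground-truth parameters
y = A @ a_true

def mirror_descent(steps=20000, eta=1e-3, eps=1e-2):
    u, v = np.full(d, eps), np.full(d, eps)   # parameters a = u - v, u,v > 0
    for _ in range(steps):
        g = A.T @ (A @ (u - v) - y)           # prediction-error gradient
        u *= np.exp(-eta * g)                 # multiplicative (mirror) update
        v *= np.exp(eta * g)
    return u - v

a_md = mirror_descent()   # interpolating solution biased toward sparsity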
Affiliation(s)
- Nicholas M Boffi
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, U.S.A.
19
Ilan Y. Improving Global Healthcare and Reducing Costs Using Second-Generation Artificial Intelligence-Based Digital Pills: A Market Disruptor. Int J Environ Res Public Health 2021;18:811. PMID: 33477865. PMCID: PMC7832873. DOI: 10.3390/ijerph18020811.
Abstract
Background and Aims: Improving global health requires making current and future drugs more effective and affordable. While healthcare systems around the world are faced with increasing costs, branded and generic drug companies are facing the challenge of creating market differentiators. Two of the problems associated with the partial or complete loss of response to chronic medications are a lack of adherence and compensatory responses to chronic drug administration, which leads to tolerance and loss of effectiveness. Approach and Results: First-generation artificial intelligence (AI) systems do not address these needs and suffer from a low adoption rate by patients and clinicians. Second-generation AI systems are focused on a single subject and on improving patients' clinical outcomes. The digital pill, which combines a personalized second-generation AI system with a branded or generic drug, improves the patient response to drugs by increasing adherence and overcoming the loss of response to chronic medications. By improving the effectiveness of drugs, the digital pill reduces healthcare costs and increases end-user adoption. The digital pill also provides a market differentiator for branded and generic drug companies. Conclusions: Implementing the use of a digital pill is expected to reduce healthcare costs, providing advantages for all the players in the healthcare system including patients, clinicians, healthcare authorities, insurance companies, and drug manufacturers. The described business model for the digital pill is based on distributing the savings across all stakeholders, thereby enabling improved global health.
Affiliation(s)
- Yaron Ilan
- Department of Medicine, The Hebrew University of Jerusalem-Hadassah Medical Center, Jerusalem 12000, Israel
20
Ilan Y. Second-Generation Digital Health Platforms: Placing the Patient at the Center and Focusing on Clinical Outcomes. Front Digit Health 2020;2:569178. PMID: 34713042. PMCID: PMC8521820. DOI: 10.3389/fdgth.2020.569178.
Abstract
Artificial intelligence (AI) digital health systems have drawn much attention over the last decade. However, their implementation into medical practice occurs at a much slower pace than expected. This paper reviews some of the achievements of first-generation AI systems and the barriers facing their implementation into medical practice. The development of second-generation AI systems is discussed with a focus on overcoming some of these obstacles. Second-generation systems are aimed at focusing on a single subject and on improving patients' clinical outcomes. A personalized closed-loop system designed to improve end-organ function and the patient's response to chronic therapies is presented. The system introduces a platform which implements a personalized therapeutic regimen and introduces quantifiable individualized-variability patterns into its algorithm. The platform is designed to achieve a clinically meaningful endpoint by ensuring that chronic therapies will have a sustainable effect while overcoming compensatory mechanisms associated with disease progression and drug resistance. Second-generation systems are expected to assist patients and providers in adopting these systems and implementing them in everyday care.
21
Gilson M, Dahmen D, Moreno-Bote R, Insabato A, Helias M. The covariance perceptron: A new paradigm for classification and processing of time series in recurrent neuronal networks. PLoS Comput Biol 2020;16:e1008127. PMID: 33044953. PMCID: PMC7595646. DOI: 10.1371/journal.pcbi.1008127.
Abstract
Learning in neuronal networks has developed in many directions, in particular to reproduce cognitive tasks like image recognition and speech processing. Implementations have been inspired by stereotypical neuronal responses like tuning curves in the visual system, where, for example, ON/OFF cells fire or not depending on the contrast in their receptive fields. Classical models of neuronal networks therefore map a set of input signals to a set of activity levels in the output of the network. Each category of inputs is thereby predominantly characterized by its mean. In the case of time series, fluctuations around this mean constitute noise in this view. For this paradigm, the high variability exhibited by cortical activity may thus imply limitations or constraints, which have been discussed for many years, such as the need to average neuronal activity over long periods or over large groups of cells to assess a robust mean and to diminish the effect of noise correlations. To reconcile robust computations with variable neuronal activity, we here propose a conceptual change of perspective by employing variability of activity as the basis for stimulus-related information to be learned by neurons, rather than merely being the noise that corrupts the mean signal. In this new paradigm both afferent and recurrent weights in a network are tuned to shape the input-output mapping for covariances, the second-order statistics of the fluctuating activity. When including time lags, covariance patterns define a natural metric for time series that capture their propagating nature. We develop the theory for classification of time series based on their spatio-temporal covariances, which reflect dynamical properties. We demonstrate that recurrent connectivity is able to transform information contained in the temporal structure of the signal into spatial covariances. Finally, we use the MNIST database to show how the covariance perceptron can capture specific second-order statistical patterns generated by moving digits. The dynamics in cortex are characterized by highly fluctuating activity: even under the very same experimental conditions, the activity typically does not reproduce at the level of individual spikes. Given this variability, how then does the brain realize its quasi-deterministic function? One obvious solution is to compute averages over many cells, assuming that the mean activity, or rate, is actually the decisive signal. Variability across trials of an experiment is thus considered noise. We here explore the opposite view: can fluctuations be used to actually represent information? And if yes, is there a benefit over a representation using the mean rate? We find that a fluctuation-based scheme is not only powerful in distinguishing signals into several classes, but also that networks can efficiently be trained in the new paradigm. Moreover, we argue why such a scheme of representation is more consistent with known forms of synaptic plasticity than rate-based network dynamics.
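In the static version of this idea, a linear readout y = Bx maps an input covariance Q to an output covariance BQB^T, and B is trained so that each input class yields a distinct output covariance. A minimal sketch without time lags (dimensions, input patterns, and targets are illustrative):

```python
import numpy as np

# Static covariance-perceptron sketch: train B so that B Q B^T matches a
# class-specific target output covariance for each input covariance Q.
# Gradient of 0.5 * ||B Q B^T - T||_F^2 with respect to B is 2 E B Q.
rng = np.random.default_rng(7)
m, n = 10, 2                                       # input / output dimensions
Q_classes = [np.eye(m), np.eye(m) * 0.5 + 0.5]     # two input covariances
targets = [np.diag([1.0, 0.2]), np.diag([0.2, 1.0])]  # desired output covs
B = rng.normal(0, 0.1, (n, m))
eta = 0.01

for _ in range(2000):
    for Q, T in zip(Q_classes, targets):
        E = B @ Q @ B.T - T                        # output-covariance error
        B -= eta * 2 * (E @ B @ Q)                 # gradient step on B

# After training, the output covariance pattern indicates the input class.
```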
Affiliation(s)
- Matthieu Gilson
- Center for Brain and Cognition, Department of Information and Telecommunication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- David Dahmen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Rubén Moreno-Bote
- Center for Brain and Cognition, Department of Information and Telecommunication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- ICREA, Barcelona, Spain
- Andrea Insabato
- IDIBAPS (Institut d’Investigacions Biomèdiques August Pi i Sunyer), Barcelona, Spain
- Moritz Helias
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
22
Akbarzadeh-Sherbaf K, Safari S, Vahabie AH. A digital hardware implementation of spiking neural networks with binary FORCE training. Neurocomputing 2020. DOI: 10.1016/j.neucom.2020.05.044.
23
Abstract
We present a theory of neural circuits’ design and function, inspired by the random connectivity of real neural circuits and the mathematical power of random projections. Specifically, we introduce a family of statistical models for large neural population codes, a straightforward neural circuit architecture that would implement these models, and a biologically plausible learning rule for such circuits. The resulting neural architecture suggests a design principle for neural circuits, namely that they learn to compute the mathematical surprise of their inputs, given past inputs, without an explicit teaching signal. We applied these models to recordings from large neural populations in monkeys’ visual and prefrontal cortices and show them to be highly accurate, efficient, and scalable. The brain represents and reasons probabilistically about complex stimuli and motor actions using a noisy, spike-based neural code. A key building block for such neural computations, as well as the basis for supervised and unsupervised learning, is the ability to estimate the surprise or likelihood of incoming high-dimensional neural activity patterns. Despite progress in statistical modeling of neural responses and deep learning, current approaches either do not scale to large neural populations or cannot be implemented using biologically realistic mechanisms. Inspired by the sparse and random connectivity of real neuronal circuits, we present a model for neural codes that accurately estimates the likelihood of individual spiking patterns and has a straightforward, scalable, efficient, learnable, and realistic neural implementation. This model’s performance on simultaneously recorded spiking activity of >100 neurons in the monkey visual and prefrontal cortices is comparable with or better than that of state-of-the-art models. Importantly, the model can be learned using a small number of samples and using a local learning rule that utilizes noise intrinsic to neural circuits. Slower, structural changes in random connectivity, consistent with rewiring and pruning processes, further improve the efficiency and sparseness of the resulting neural representations. Our results merge insights from neuroanatomy, machine learning, and theoretical neuroscience to suggest random sparse connectivity as a key design principle for neuronal computation.
24
A solution to the learning dilemma for recurrent networks of spiking neurons. Nat Commun 2020;11:3625. PMID: 32681001. PMCID: PMC7367848. DOI: 10.1038/s41467-020-17236-y.
Abstract
Recurrently connected networks of spiking neurons underlie the astounding information processing capabilities of the brain. Yet in spite of extensive research, how they can learn through synaptic plasticity to carry out complex network computations remains unclear. We argue that two pieces of this puzzle were provided by experimental data from neuroscience. A mathematical result tells us how these pieces need to be combined to enable biologically plausible online network learning through gradient descent, in particular deep reinforcement learning. This learning method, called e-prop, approaches the performance of backpropagation through time (BPTT), the best-known method for training recurrent neural networks in machine learning. In addition, it suggests a method for powerful on-chip learning in energy-efficient spike-based hardware for artificial intelligence.
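e-prop factorizes the gradient into a top-down learning signal and a local eligibility trace, the product of the neuron's spike pseudo-derivative with low-pass-filtered presynaptic activity. A minimal single-neuron sketch (shapes, constants, and the random learning signal are illustrative, not the paper's tasks):

```python
import numpy as np

# e-prop sketch for one LIF-like unit: accumulate the online product of a
# broadcast learning signal L[t] with a local eligibility trace. Illustrative.
rng = np.random.default_rng(8)
T, n_pre, alpha, v_th = 500, 10, 0.9, 1.0
w = rng.normal(0, 0.3, n_pre)
z_pre = (rng.random((T, n_pre)) < 0.05).astype(float)   # presynaptic spikes
L = rng.normal(0, 0.1, T)                               # toy learning signal

v, eps, grad = 0.0, np.zeros(n_pre), np.zeros(n_pre)
for t in range(T):
    v = alpha * v + w @ z_pre[t]                        # membrane potential
    psi = 0.3 * max(0.0, 1.0 - abs(v - v_th))           # spike pseudo-derivative
    eps = alpha * eps + z_pre[t]                        # filtered presyn. trace
    grad += L[t] * psi * eps                            # e-prop gradient estimate
    if v >= v_th:
        v -= v_th                                       # soft reset on spike

w -= 0.01 * grad                                        # apply after the episode
```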
25
Ilan Y. Order Through Disorder: The Characteristic Variability of Systems. Front Cell Dev Biol 2020; 8:186. [PMID: 32266266 PMCID: PMC7098948 DOI: 10.3389/fcell.2020.00186] [Citation(s) in RCA: 32] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2019] [Accepted: 03/05/2020] [Indexed: 12/17/2022] Open
Abstract
Randomness characterizes many processes in nature, and therefore its importance cannot be overstated. In the present study, we examine examples of randomness drawn from various fields in order to shed light on the processes that underlie it. The fields we address include physics, chemistry, biology (biological systems from genes to whole organs), medicine, and environmental science. Through the chosen examples, we explore the seemingly paradoxical nature of life and demonstrate that randomness is preferred under specific conditions. Furthermore, under certain conditions, promoting or making use of variability-associated parameters may be necessary for improving the function of processes and systems.
Affiliation(s)
- Yaron Ilan
- Department of Medicine, Hadassah-Hebrew University Medical Center, Jerusalem, Israel
26
Brendel W, Bourdoukan R, Vertechi P, Machens CK, Denève S. Learning to represent signals spike by spike. PLoS Comput Biol 2020; 16:e1007692. [PMID: 32176682 PMCID: PMC7135338 DOI: 10.1371/journal.pcbi.1007692] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2019] [Revised: 04/06/2020] [Accepted: 01/27/2020] [Indexed: 12/31/2022] Open
Abstract
Networks based on coordinated spike coding can encode information with high efficiency in the spike trains of individual neurons. These networks exhibit single-neuron variability and tuning curves as typically observed in cortex, but paradoxically coincide with a precise, non-redundant spike-based population code. However, it has remained unclear whether the specific synaptic connectivities required in these networks can be learnt with local learning rules. Here, we show how to learn the required architecture. Using coding efficiency as an objective, we derive spike-timing-dependent learning rules for a recurrent neural network, and we provide exact solutions for the networks’ convergence to an optimal state. As a result, we deduce an entire network from its input distribution and a firing cost. After learning, basic biophysical quantities such as voltages, firing thresholds, excitation, inhibition, or spikes acquire precise functional interpretations. Spiking neural networks can encode information with high efficiency in the spike trains of individual neurons if the synaptic weights between neurons are set to specific, optimal values. In this regime, the networks exhibit irregular spike trains, high trial-to-trial variability, and stimulus tuning, as typically observed in cortex. The strong variability on the level of single neurons paradoxically coincides with a precise, non-redundant, and spike-based population code. However, it has remained unclear whether the specific synaptic connectivities required in these spiking networks can be learnt with local learning rules. In this study, we show how the required architecture can be learnt. We derive local and biophysically plausible learning rules for recurrent neural networks from first principles. We show both mathematically and using numerical simulations that these learning rules drive the networks into the optimal state, and we show that the optimal state is governed by the statistics of the input signals. After learning, the voltages of individual neurons can be interpreted as measuring the instantaneous error of the code, given by the difference between the desired and the actual output signal.
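The coding scheme itself is easy to state. The sketch below simulates a spike-by-spike coding network after learning, with decoder weights equal to encoder weights and thresholds set to half the squared weight norm, so that each neuron fires exactly when its spike would reduce the reconstruction error. This is an illustration of the coding principle from the abstract, not the paper’s learning rule; the sizes and the 2-D test signal are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
N, D, T = 40, 2, 2000
dt, lam = 1e-3, 10.0                    # time step (s), readout decay rate

Gamma = 0.1 * rng.normal(0, 1, (N, D))  # decoding (= encoding) weights
Thr = 0.5 * np.sum(Gamma**2, axis=1)    # thresholds for efficient coding

t_ax = np.arange(T) * dt
x = np.stack([np.sin(2*np.pi*2*t_ax),   # 2-D signal to be encoded
              np.cos(2*np.pi*3*t_ax)], axis=1)

xhat = np.zeros(D)
hist = np.zeros((T, D))
spikes = np.zeros((T, N))
for t in range(T):
    xhat *= 1.0 - lam * dt              # leaky decay of the readout
    V = Gamma @ (x[t] - xhat)           # voltage = projected coding error
    i = int(np.argmax(V - Thr))         # at most one spike per step:
    if V[i] > Thr[i]:                   # fire only if above threshold,
        spikes[t, i] = 1.0              # i.e. if the spike shrinks the error
        xhat = xhat + Gamma[i]          # each spike kicks the readout
    hist[t] = xhat

print("mean |x - xhat| (2nd half):", np.abs(x[T//2:] - hist[T//2:]).mean())
print("population spikes:", int(spikes.sum()))
```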
Affiliation(s)
- Wieland Brendel
- Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal
- Group for Neural Theory, INSERM U960, Département d’Etudes Cognitives, Ecole Normale Supérieure, Paris, France
- Tübingen AI Center, University of Tübingen, Germany
- Ralph Bourdoukan
- Group for Neural Theory, INSERM U960, Département d’Etudes Cognitives, Ecole Normale Supérieure, Paris, France
- Pietro Vertechi
- Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal
- Group for Neural Theory, INSERM U960, Département d’Etudes Cognitives, Ecole Normale Supérieure, Paris, France
- Christian K. Machens
- Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal
- Sophie Denève
- Group for Neural Theory, INSERM U960, Département d’Etudes Cognitives, Ecole Normale Supérieure, Paris, France
27
Gandolfi D, Bigiani A, Porro CA, Mapelli J. Inhibitory Plasticity: From Molecules to Computation and Beyond. Int J Mol Sci 2020; 21:E1805. [PMID: 32155701 PMCID: PMC7084224 DOI: 10.3390/ijms21051805] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2020] [Revised: 02/28/2020] [Accepted: 03/03/2020] [Indexed: 11/17/2022] Open
Abstract
Synaptic plasticity is the cellular and molecular counterpart of learning and memory and, since its first discovery, the analysis of the mechanisms underlying long-term changes of synaptic strength has been focused almost exclusively on excitatory connections. Conversely, inhibition was considered a fixed controller of circuit excitability. Only recently have inhibitory networks been shown to be finely regulated by a wide range of mechanisms residing in their synaptic connections. Here, we review recent findings on the forms of inhibitory plasticity (IP) that have been discovered and characterized in different brain areas. In particular, we focus our attention on the molecular pathways involved in the induction and expression mechanisms leading to changes in synaptic efficacy, and we discuss, from the computational perspective, how IP can contribute to the emergence of functional properties of brain circuits.
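On the computational side, one of the most widely simulated forms of inhibitory plasticity is the homeostatic inhibitory STDP rule of Vogels and colleagues (2011): inhibitory weights are potentiated by near-coincident pre/post spiking and depressed by presynaptic spikes alone, which drives the postsynaptic rate toward a target. The sketch below shows that rule in isolation, as an illustrative example in the spirit of the review rather than a rule taken from it; rates are held fixed here instead of being simulated in closed loop.

```python
import numpy as np

def istdp_drift(post_rate, pre_rate=20.0, rho0=5.0, tau=0.02,
                eta=1e-3, dt=1e-4, steps=200000, seed=3):
    """Homeostatic inhibitory STDP: pre/post coincidences potentiate the
    inhibitory weight, and every pre spike depresses it by a constant
    alpha = 2*rho0*tau. Returns the net weight change at fixed rates."""
    rng = np.random.default_rng(seed)
    alpha = 2.0 * rho0 * tau
    x_pre = x_post = 0.0        # exponential spike traces
    w, w0 = 0.5, 0.5            # inhibitory weight, kept non-negative
    for _ in range(steps):
        pre = rng.random() < pre_rate * dt
        post = rng.random() < post_rate * dt
        x_pre += -dt * x_pre / tau + pre
        x_post += -dt * x_post / tau + post
        if pre:                          # pre spike: potentiate by post
            w += eta * (x_post - alpha)  # trace, minus constant depression
        if post:                         # post spike: potentiate by pre trace
            w += eta * x_pre
        w = max(w, 0.0)
    return w - w0

# Above the 5 Hz target rate inhibition strengthens; below it, it weakens.
print("post at 15 Hz:", istdp_drift(15.0))   # expected > 0
print("post at  2 Hz:", istdp_drift(2.0))    # expected < 0
```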
Affiliation(s)
- Daniela Gandolfi
- Department of Biomedical, Metabolic and Neural Sciences and Center for Neuroscience and Neurotechnology, University of Modena and Reggio Emilia, Via Campi 287, 41125 Modena, Italy
- Department of Brain and Behavioral Sciences, University of Pavia, 27100 Pavia, Italy
- Albertino Bigiani
- Department of Biomedical, Metabolic and Neural Sciences and Center for Neuroscience and Neurotechnology, University of Modena and Reggio Emilia, Via Campi 287, 41125 Modena, Italy
- Carlo Adolfo Porro
- Department of Biomedical, Metabolic and Neural Sciences and Center for Neuroscience and Neurotechnology, University of Modena and Reggio Emilia, Via Campi 287, 41125 Modena, Italy
- Jonathan Mapelli
- Department of Biomedical, Metabolic and Neural Sciences and Center for Neuroscience and Neurotechnology, University of Modena and Reggio Emilia, Via Campi 287, 41125 Modena, Italy
28
Wang X, Lin X, Dang X. Supervised learning in spiking neural networks: A review of algorithms and evaluations. Neural Netw 2020; 125:258-280. [PMID: 32146356 DOI: 10.1016/j.neunet.2020.02.011] [Citation(s) in RCA: 49] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2019] [Revised: 12/15/2019] [Accepted: 02/20/2020] [Indexed: 01/08/2023]
Abstract
As a brain-inspired alternative to the traditional artificial neural network, a spiking neural network encodes and processes neural information through precisely timed spike trains. Spiking neural networks are composed of biologically plausible spiking neurons, which have become suitable tools for processing complex temporal or spatiotemporal information. However, because of their intricately discontinuous and implicit nonlinear mechanisms, the formulation of efficient supervised learning algorithms for spiking neural networks is difficult, and this has become an important problem in the field. This article presents a comprehensive review of supervised learning algorithms for spiking neural networks and evaluates them qualitatively and quantitatively. First, a comparison between spiking neural networks and traditional artificial neural networks is provided. The general framework and some related theories of supervised learning for spiking neural networks are then introduced. Furthermore, the state-of-the-art supervised learning algorithms of recent years are reviewed from the perspectives of applicability to spiking neural network architectures and the inherent mechanisms of the learning algorithms themselves. A performance comparison of spike train learning across representative algorithms is also provided. In addition, we propose five qualitative performance evaluation criteria for supervised learning algorithms for spiking neural networks and present a new taxonomy of supervised learning algorithms based on these criteria. Finally, some future research directions in this field are outlined.
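Reviews of this kind typically score algorithms by how closely the learned spike train matches a target train, using a spike-train metric. As one concrete, commonly used example (an illustration, not necessarily the exact metric used in the review), the van Rossum distance filters both trains with an exponential kernel and compares the resulting traces:

```python
import numpy as np

def van_rossum_distance(t1, t2, tau=0.01, dt=1e-4, t_max=1.0):
    """Distance between two spike trains: convolve each train with a causal
    exponential kernel and take the L2 norm of the difference."""
    grid = np.arange(0.0, t_max, dt)
    def filtered(train):
        f = np.zeros_like(grid)
        for s in train:
            m = grid >= s
            f[m] += np.exp(-(grid[m] - s) / tau)
        return f
    diff = filtered(t1) - filtered(t2)
    return np.sqrt(np.sum(diff**2) * dt / tau)

target = np.array([0.1, 0.3, 0.55, 0.8])          # spike times in seconds
actual = np.array([0.12, 0.33, 0.5, 0.81])
print(van_rossum_distance(target, actual))        # small -> good match
print(van_rossum_distance(target, np.array([0.4])))   # larger mismatch
```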
Affiliation(s)
- Xiangwen Wang
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, 730070, People's Republic of China
- Xianghong Lin
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, 730070, People's Republic of China
- Xiaochao Dang
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, 730070, People's Republic of China
29
Calmus R, Wilson B, Kikuchi Y, Petkov CI. Structured sequence processing and combinatorial binding: neurobiologically and computationally informed hypotheses. Philos Trans R Soc Lond B Biol Sci 2020; 375:20190304. [PMID: 31840585 PMCID: PMC6939361 DOI: 10.1098/rstb.2019.0304] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/04/2019] [Indexed: 12/13/2022] Open
Abstract
Understanding how the brain forms representations of structured information distributed in time is a challenging endeavour for the neuroscientific community, requiring computationally and neurobiologically informed approaches. The neural mechanisms for segmenting continuous streams of sensory input and establishing representations of dependencies remain largely unknown, as do the transformations and computations occurring between the brain regions involved in these aspects of sequence processing. We propose a blueprint for a neurobiologically informed and informing computational model of sequence processing (entitled: Vector-symbolic Sequencing of Binding INstantiating Dependencies, or VS-BIND). This model is designed to support the transformation of serially ordered elements in sensory sequences into structured representations of bound dependencies, readily operates on multiple timescales, and encodes or decodes sequences with respect to chunked items wherever dependencies occur in time. The model integrates established vector symbolic additive and conjunctive binding operators with neurobiologically plausible oscillatory dynamics, and is compatible with modern spiking neural network simulation methods. We show that the model is capable of simulating previous findings from structured sequence processing tasks that engage fronto-temporal regions, specifying mechanistic roles for regions such as prefrontal areas 44/45 and the frontal operculum during interactions with sensory representations in temporal cortex. Finally, we are able to make predictions based on the configuration of the model alone that underscore the importance of serial position information, which requires input from time-sensitive cells, known to reside in the hippocampus and dorsolateral prefrontal cortex. This article is part of the theme issue 'Towards mechanistic models of meaning composition'.
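The “vector symbolic additive and conjunctive binding operators” that VS-BIND builds on have a compact standard form: items are high-dimensional random vectors, conjunctive binding is circular convolution, and superposition is vector addition. The sketch below encodes serial order that way and recovers it by unbinding; it illustrates the operator algebra only, not the model’s oscillatory or spiking machinery, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
D = 1024                                   # dimensionality of symbol vectors

def sym():                                 # random unit vector for one symbol
    v = rng.normal(0, 1.0/np.sqrt(D), D)
    return v / np.linalg.norm(v)

def bind(a, b):                            # conjunctive binding:
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=D)  # circ. conv.

def unbind(c, a):                          # approximate inverse (correlation)
    return np.fft.irfft(np.conj(np.fft.rfft(a)) * np.fft.rfft(c), n=D)

# Encode the ordered sequence (A, B, C): bind each item to a serial-position
# symbol (conjunctive), then superpose the bound pairs (additive).
A, B, C = sym(), sym(), sym()
P1, P2, P3 = sym(), sym(), sym()
seq = bind(A, P1) + bind(B, P2) + bind(C, P3)

# Query "what occupied position 2?": unbind and match against the vocabulary.
probe = unbind(seq, P2)
vocab = {"A": A, "B": B, "C": C}
print(max(vocab, key=lambda k: float(vocab[k] @ probe)))   # expected: B
```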
Affiliation(s)
- Ryan Calmus
- Newcastle University Medical School, Framlington Place, Newcastle upon Tyne, UK
30
Maes A, Barahona M, Clopath C. Learning spatiotemporal signals using a recurrent spiking network that discretizes time. PLoS Comput Biol 2020; 16:e1007606. [PMID: 31961853 PMCID: PMC7028299 DOI: 10.1371/journal.pcbi.1007606] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2019] [Revised: 02/18/2020] [Accepted: 12/13/2019] [Indexed: 12/15/2022] Open
Abstract
Learning to produce spatiotemporal sequences is a common task that the brain has to solve. The same neurons may be used to produce different sequential behaviours. How the brain learns and encodes such tasks remains unknown, as current computational models do not typically use realistic, biologically plausible learning rules. Here, we propose a model where a spiking recurrent network of excitatory and inhibitory spiking neurons drives a read-out layer: the dynamics of the driver recurrent network are trained to encode time, which is then mapped through the read-out neurons to encode another dimension, such as space or a phase. Different spatiotemporal patterns can be learned and encoded through the synaptic weights to the read-out neurons, which follow common Hebbian learning rules. We demonstrate that the model is able to learn spatiotemporal dynamics on time scales that are behaviourally relevant, and we show that the learned sequences are robustly replayed during a regime of spontaneous activity.
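The division of labour described here (a recurrent “clock” that discretizes time, plus a Hebbian read-out that maps each time bin to output activity) can be shown in a few lines. In the sketch below the clock is idealized as one cluster active per time bin, so only the read-out learning is simulated; this illustrates the architecture’s logic, not the paper’s spiking implementation.

```python
import numpy as np

rng = np.random.default_rng(5)
n_clusters, n_read = 20, 5            # clock states, read-out neurons

# Assume the trained recurrent network already steps through its clusters
# one per time bin, acting as a clock that discretizes time.
clock = np.eye(n_clusters)            # row t = cluster active at time t

# Target spatiotemporal pattern: which read-out neuron fires at each step.
target = rng.integers(0, n_read, n_clusters)
Y = np.zeros((n_clusters, n_read))
Y[np.arange(n_clusters), target] = 1.0

# Hebbian learning of read-out weights during supervised replays.
W = np.zeros((n_read, n_clusters))
eta = 0.5
for epoch in range(10):
    for t in range(n_clusters):
        pre = clock[t]                 # clock cluster activity
        post = Y[t]                    # read-out activity clamped to target
        W += eta * np.outer(post, pre) # Hebb: co-active pre/post potentiate
        W -= eta * 0.1 * W * pre       # mild decay keeps weights bounded

# Spontaneous replay: the clock runs, the read-out reproduces the sequence.
recalled = np.argmax(W @ clock.T, axis=0)
print("target :", target)
print("recall :", recalled)           # should match after a few epochs
```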
Affiliation(s)
- Amadeus Maes
- Department of Bioengineering, Imperial College London, London, United Kingdom
- Mauricio Barahona
- Department of Mathematics, Imperial College London, London, United Kingdom
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London, United Kingdom
31
Ilan Y. Advanced Tailored Randomness: A Novel Approach for Improving the Efficacy of Biological Systems. J Comput Biol 2020; 27:20-29. [DOI: 10.1089/cmb.2019.0231] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022] Open
Affiliation(s)
- Yaron Ilan
- Department of Medicine, Hebrew University-Hadassah Medical Center, Jerusalem, Israel
32
Ladenbauer J, McKenzie S, English DF, Hagens O, Ostojic S. Inferring and validating mechanistic models of neural microcircuits based on spike-train data. Nat Commun 2019; 10:4933. [PMID: 31666513 PMCID: PMC6821748 DOI: 10.1038/s41467-019-12572-0] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2018] [Accepted: 09/18/2019] [Indexed: 01/11/2023] Open
Abstract
The interpretation of neuronal spike train recordings often relies on abstract statistical models that allow for principled parameter estimation and model selection but provide only limited insights into underlying microcircuits. In contrast, mechanistic models are useful to interpret microcircuit dynamics, but are rarely quantitatively matched to experimental data due to methodological challenges. Here we present analytical methods to efficiently fit spiking circuit models to single-trial spike trains. Using derived likelihood functions, we statistically infer the mean and variance of hidden inputs, neuronal adaptation properties and connectivity for coupled integrate-and-fire neurons. Comprehensive evaluations on synthetic data, validations using ground-truth in vitro and in vivo recordings, and comparisons with existing techniques demonstrate that parameter estimation is very accurate and efficient, even for highly subsampled networks. Our methods bridge statistical, data-driven and theoretical, model-based neurosciences at the level of spiking circuits, for the purpose of a quantitative, mechanistic interpretation of recorded neuronal population activity. It is difficult to fit mechanistic, biophysically constrained circuit models to spike train data from in vivo extracellular recordings. Here the authors present analytical methods that enable efficient parameter estimation for integrate-and-fire circuit models and inference of the underlying connectivity structure in subsampled networks.
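The simplest instance of the likelihood-based program described here is the perfect integrate-and-fire neuron driven by white noise, whose interspike-interval likelihood is the inverse-Gaussian first-passage density and can be maximized directly. The sketch below fits the mean input and noise amplitude that way; it is a didactic stand-in (assuming SciPy is available), not the paper’s derived likelihood functions for adaptive, coupled neurons.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)

# Perfect integrate-and-fire: V integrates drift mu plus noise sigma,
# fires at threshold 1, resets to 0 -> ISIs are inverse-Gaussian.
def simulate_isis(mu, sigma, n, dt=1e-4):
    isis = []
    for _ in range(n):
        v, t = 0.0, 0.0
        while v < 1.0:
            v += mu*dt + sigma*np.sqrt(dt)*rng.normal()
            t += dt
        isis.append(t)
    return np.array(isis)

def neg_log_lik(params, isis):
    mu, sigma = params
    if mu <= 0 or sigma <= 0:
        return np.inf
    lam = 1.0 / sigma**2          # inverse-Gaussian shape parameter
    ll = (0.5*np.log(lam / (2*np.pi*isis**3))
          - lam*(mu*isis - 1.0)**2 / (2.0*isis))
    return -ll.sum()

isis = simulate_isis(mu=5.0, sigma=1.0, n=300)
fit = minimize(neg_log_lik, x0=[2.0, 2.0], args=(isis,), method="Nelder-Mead")
print("true (mu, sigma) = (5.0, 1.0), fitted:", fit.x)
```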
33
Kaiser J, Hoff M, Konle A, Vasquez Tieck JC, Kappel D, Reichard D, Subramoney A, Legenstein R, Roennau A, Maass W, Dillmann R. Embodied Synaptic Plasticity With Online Reinforcement Learning. Front Neurorobot 2019; 13:81. [PMID: 31632262 PMCID: PMC6786305 DOI: 10.3389/fnbot.2019.00081] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2019] [Accepted: 09/13/2019] [Indexed: 01/02/2023] Open
Abstract
The endeavor to understand the brain involves multiple collaborating research fields. Classically, synaptic plasticity rules derived by theoretical neuroscientists are evaluated in isolation on pattern classification tasks. This contrasts with the biological brain, whose purpose is to control a body in closed loop. This paper contributes to bringing the fields of computational neuroscience and robotics closer together by integrating open-source software components from these two fields. The resulting framework makes it possible to evaluate the validity of biologically plausible plasticity models in closed-loop robotics environments. We use this framework to evaluate Synaptic Plasticity with Online REinforcement learning (SPORE), a reward-learning rule based on synaptic sampling, on two visuomotor tasks: reaching and lane following. We show that SPORE is capable of learning to perform policies within the course of simulated hours for both tasks. Provisional parameter explorations indicate that the learning rate and the temperature driving the stochastic processes that govern synaptic learning dynamics need to be regulated for performance improvements to be retained. We conclude by discussing recent deep reinforcement learning techniques that could increase the functionality of SPORE on visuomotor tasks.
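Synaptic sampling casts learning as stochastic drift-diffusion on the parameters: a deterministic drift toward high prior probability and high expected reward, plus temperature-scaled noise that keeps the network exploring. The toy below has only that structure, with a REINFORCE-style reward-gradient estimate standing in for SPORE’s spiking machinery; the reward function, constants, and dimensionality are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
n, beta, temp, sig = 3, 1e-2, 0.02, 0.2   # dims, rate, temperature, probe std

theta = np.zeros(n)                       # "synaptic parameters"
theta_star = np.array([1.0, -0.8, 0.5])   # unknown good setting (toy task)
def reward(th):
    return -np.sum((th - theta_star) ** 2)

r_bar = reward(theta)                     # running reward baseline
for step in range(20000):
    xi = rng.normal(0.0, sig, n)          # exploration noise
    r = reward(theta + xi)
    grad_reward = (r - r_bar) * xi / sig**2   # REINFORCE gradient estimate
    r_bar += 0.01 * (r - r_bar)
    grad_prior = -theta                   # Gaussian prior, weakly weighted
    # Langevin-style "synaptic sampling" step: drift plus temperature noise
    theta += beta * (0.1 * grad_prior + grad_reward) \
             + np.sqrt(2.0 * beta * temp) * rng.normal(0.0, 1.0, n)

print("theta:", np.round(theta, 2), " reward:", round(reward(theta), 3))
```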
Affiliation(s)
- Jacques Kaiser
- FZI Research Center for Information Technology, Karlsruhe, Germany
- Michael Hoff
- FZI Research Center for Information Technology, Karlsruhe, Germany
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Andreas Konle
- FZI Research Center for Information Technology, Karlsruhe, Germany
- David Kappel
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Bernstein Center for Computational Neuroscience, III Physikalisches Institut-Biophysik, Georg-August Universität, Göttingen, Germany
- Technische Universität Dresden, Chair of Highly Parallel VLSI Systems and Neuromorphic Circuits, Dresden, Germany
- Daniel Reichard
- FZI Research Center for Information Technology, Karlsruhe, Germany
- Anand Subramoney
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Robert Legenstein
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Arne Roennau
- FZI Research Center for Information Technology, Karlsruhe, Germany
- Wolfgang Maass
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Rüdiger Dillmann
- FZI Research Center for Information Technology, Karlsruhe, Germany
34
Murray JM. Local online learning in recurrent networks with random feedback. eLife 2019; 8:e43299. [PMID: 31124785 PMCID: PMC6561704 DOI: 10.7554/elife.43299] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2018] [Accepted: 05/23/2019] [Indexed: 01/08/2023] Open
Abstract
Recurrent neural networks (RNNs) enable the production and processing of time-dependent signals such as those involved in movement or working memory. Classic gradient-based algorithms for training RNNs have been available for decades, but are inconsistent with biological features of the brain, such as causality and locality. We derive an approximation to gradient-based learning that comports with these constraints by requiring synaptic weight updates to depend only on local information about pre- and postsynaptic activities, in addition to a random feedback projection of the RNN output error. In addition to providing mathematical arguments for the effectiveness of the new learning rule, we show through simulations that it can be used to train an RNN to perform a variety of tasks. Finally, to overcome the difficulty of training over very large numbers of timesteps, we propose an augmented circuit architecture that allows the RNN to concatenate short-duration patterns into longer sequences.
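The learning rule has three local ingredients: a low-pass eligibility trace of (postsynaptic gain x presynaptic activity), an output error, and a fixed random feedback matrix that projects the error back to the recurrent units. Below is a compact NumPy rendering of an update with this structure for a rate network; the sizes, gains, and sine target are arbitrary choices, and this illustrates the form of the rule rather than reproducing the paper’s code.

```python
import numpy as np

rng = np.random.default_rng(8)
N, T, tau, eta = 40, 200, 10.0, 5e-4

W = rng.normal(0, 1.2/np.sqrt(N), (N, N))    # recurrent weights (gain 1.2)
W_out = rng.normal(0, 1/np.sqrt(N), (1, N))  # linear readout
B = rng.normal(0, 1.0, (N, 1))               # fixed random feedback weights
h0 = rng.normal(0, 0.5, N)                   # fixed initial condition

y_star = np.sin(np.linspace(0, 4*np.pi, T))[:, None]   # target output

for epoch in range(500):
    h = h0.copy()
    P = np.zeros((N, N))                     # eligibility trace per synapse
    for t in range(T):
        u = W @ h
        # Eligibility: leaky accumulation of postsyn gain x presyn activity;
        # local in space and time, so no backpropagation through time needed.
        P = (1 - 1/tau) * P + (1/tau) * np.outer(1 - np.tanh(u)**2, h)
        h = (1 - 1/tau) * h + (1/tau) * np.tanh(u)
        err = W_out @ h - y_star[t]          # instantaneous output error
        W -= eta * (B @ err)[:, None] * P    # random feedback x eligibility
        W_out -= eta * np.outer(err, h)      # delta rule for the readout

print("last-step squared error:", float(err[0]**2))
```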
Affiliation(s)
- James M Murray
- Zuckerman Mind, Brain and Behavior Institute, Columbia University, New York, United States
35
Zenke F, Ganguli S. SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks. Neural Comput 2018; 30:1514-1541. [PMID: 29652587 PMCID: PMC6118408 DOI: 10.1162/neco_a_01086] [Citation(s) in RCA: 135] [Impact Index Per Article: 22.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2017] [Accepted: 01/23/2018] [Indexed: 01/02/2023]
Abstract
A vast majority of computation in the brain is performed by spiking neural networks. Despite the ubiquity of such spiking, we currently lack an understanding of how biological spiking neural circuits learn and compute in vivo, as well as how we can instantiate such capabilities in artificial spiking circuits in silico. Here we revisit the problem of supervised learning in temporally coding multilayer spiking neural networks. First, by using a surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based three-factor learning rule capable of training multilayer networks of deterministic integrate-and-fire neurons to perform nonlinear computations on spatiotemporal spike patterns. Second, inspired by recent results on feedback alignment, we compare the performance of our learning rule under different credit assignment strategies for propagating output errors to hidden units. Specifically, we test uniform, symmetric, and random feedback, finding that simpler tasks can be solved with any type of feedback, while more complex tasks require symmetric feedback. In summary, our results open the door to obtaining a better scientific understanding of learning and computation in spiking neural networks by advancing our ability to train them to solve nonlinear problems involving transformations between different spatiotemporal spike time patterns.
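The heart of SuperSpike is replacing the non-existent derivative of the spike with a smooth surrogate of the membrane potential, then combining it with a presynaptic trace and an error term into a three-factor update. The toy below trains one integrate-and-fire unit to spike at target times; it compresses the paper’s filtered error signals and multilayer setting into the simplest form that still shows the three factors, with all constants invented.

```python
import numpy as np

rng = np.random.default_rng(9)
n_in, T = 50, 500
a_mem, a_syn, beta, eta = 0.9, 0.8, 1.0, 1e-2   # decays, surrogate slope, rate

x = (rng.random((T, n_in)) < 0.02).astype(float)     # input spike trains
target = np.zeros(T); target[[100, 250, 400]] = 1.0  # desired output spikes
w = rng.normal(0, 0.05, n_in)

for epoch in range(300):
    v = 0.0
    i_syn = np.zeros(n_in)   # presynaptic (PSP) traces
    elig = np.zeros(n_in)    # per-synapse eligibility traces
    out = np.zeros(T)
    for t in range(T):
        i_syn = a_syn * i_syn + x[t]
        v = a_mem * v + 0.1 * (w @ i_syn)
        s = 1.0 if v > 1.0 else 0.0
        if s: v = 0.0                                # reset after a spike
        surr = 1.0 / (1.0 + beta * abs(v - 1.0))**2  # surrogate derivative
        elig = a_mem * elig + surr * i_syn           # factor 1 x factor 2
        err = target[t] - s                          # factor 3: error signal
        w += eta * err * elig                        # three-factor update
        out[t] = s

print("output spike times:", np.nonzero(out)[0])     # ideally near 100/250/400
```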
Affiliation(s)
- Friedemann Zenke
- Department of Applied Physics, Stanford University, Stanford, CA 94305, U.S.A., and Centre for Neural Circuits and Behaviour, University of Oxford, Oxford OX1 3SR, U.K.
- Surya Ganguli
- Department of Applied Physics, Stanford University, Stanford, CA 94305, U.S.A.
36
Nicola W, Clopath C. Supervised learning in spiking neural networks with FORCE training. Nat Commun 2017; 8:2208. [PMID: 29263361 PMCID: PMC5738356 DOI: 10.1038/s41467-017-01827-3] [Citation(s) in RCA: 83] [Impact Index Per Article: 11.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2016] [Accepted: 10/19/2017] [Indexed: 12/31/2022] Open
Abstract
Populations of neurons display an extraordinary diversity in the behaviors they affect and display. Machine learning techniques have recently emerged that allow us to create networks of model neurons that display behaviors of similar complexity. Here we demonstrate the direct applicability of one such technique, the FORCE method, to spiking neural networks. We train these networks to mimic dynamical systems, classify inputs, and store discrete sequences that correspond to the notes of a song. Finally, we use FORCE training to create two biologically motivated model circuits. One is inspired by the zebra finch and successfully reproduces songbird singing. The second network is motivated by the hippocampus and is trained to store and replay a movie scene. FORCE trained networks reproduce behaviors comparable in complexity to their inspired circuits and yield information not easily obtainable with other techniques, such as behavioral responses to pharmacological manipulations and spike timing statistics.
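The engine inside FORCE training is a recursive least squares (RLS) update of the readout decoder while the network’s own output is fed back. The sketch below shows that loop on a rate network, where the decoder algebra is identical; making the same loop work with spiking neurons, as in this paper, additionally requires filtered spike trains and careful network setup. All constants here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(10)
N, T, dt, tau = 500, 3000, 1e-3, 0.01
g, alpha = 1.5, 1.0                           # chaotic gain, RLS regularizer

J = g * rng.normal(0, 1/np.sqrt(N), (N, N))   # static chaotic recurrence
w = np.zeros(N)                               # learned readout decoder
eta_fb = rng.uniform(-1, 1, N)                # fixed feedback weights
P = np.eye(N) / alpha                         # running inverse correlation

x = 0.5 * rng.normal(0, 1, N)
t_ax = np.arange(T) * dt
f = np.sin(2*np.pi*2*t_ax) * np.cos(2*np.pi*0.5*t_ax)  # target signal

test_err = []
for t in range(T):
    r = np.tanh(x)
    z = w @ r                                  # network output
    x += dt/tau * (-x + J @ r + eta_fb * z)    # the output is fed back in
    if t % 2 == 0 and t < 2000:                # RLS decoder update (FORCE)
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)
        P -= np.outer(k, Pr)
        w -= (z - f[t]) * k
    if t >= 2500:                              # weights frozen: test phase
        test_err.append((z - f[t])**2)

print("post-training mse:", float(np.mean(test_err)))
```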
Affiliation(s)
- Wilten Nicola
- Department of Bioengineering, Imperial College London, Royal School of Mines, London, SW7 2AZ, UK
- Claudia Clopath
- Department of Bioengineering, Imperial College London, Royal School of Mines, London, SW7 2AZ, UK