1. Weerasinghe MMA, Wang G, Whalley J, Crook-Rumsey M. Mental stress recognition on the fly using neuroplasticity spiking neural networks. Sci Rep 2023; 13:14962. [PMID: 37696860; PMCID: PMC10495416; DOI: 10.1038/s41598-023-34517-w]
Abstract
Mental stress is found to be strongly connected with human cognition and wellbeing. As the complexities of human life increase, the effects of mental stress have impacted human health and cognitive performance across the globe. This highlights the need for effective non-invasive stress detection methods. In this work, we introduce a novel artificial spiking neural network model called the Online Neuroplasticity Spiking Neural Network (O-NSNN) that utilizes a repertoire of learning concepts inspired by the brain to classify mental stress using electroencephalogram (EEG) data. These models are personalized and tested on EEG data recorded during sessions in which participants listened to different types of audio comments designed to induce acute stress. Our O-NSNN models learn on the fly, producing an average accuracy of 90.76% (σ = 2.09) when classifying EEG signals of brain states associated with these audio comments. The brain-inspired nature of the individual models makes them robust and efficient and gives them the potential to be integrated into wearable technology. Furthermore, this article presents an exploratory analysis of trained O-NSNNs to discover links between perceived and acute mental stress. The O-NSNN algorithm proved to be better for personalized stress recognition in terms of accuracy, efficiency, and model interpretability.
Affiliation(s)
- Mahima Milinda Alwis Weerasinghe: School of Engineering, Computer and Mathematical Sciences, Auckland University of Technology, Auckland, New Zealand; Brain-Inspired AI and Neuroinformatics Lab, Department of Data Science, Sri Lanka Technological Campus, Padukka, Sri Lanka
- Grace Wang: School of Psychology and Wellbeing, University of Southern Queensland, Toowoomba, Australia; Centre for Health Research, University of Southern Queensland, Toowoomba, Australia
- Jacqueline Whalley: Department of Computer Science and Software Engineering, Auckland University of Technology, Auckland, New Zealand
- Mark Crook-Rumsey: Department of Basic and Clinical Neuroscience, King's College London, London, UK; UK Dementia Research Institute, Centre for Care Research and Technology, Imperial College London, London, UK

2. Dorman DB, Blackwell KT. Synaptic Plasticity Is Predicted by Spatiotemporal Firing Rate Patterns and Robust to In Vivo-like Variability. Biomolecules 2022; 12:1402. [PMID: 36291612; PMCID: PMC9599115; DOI: 10.3390/biom12101402]
Abstract
Synaptic plasticity, the experience-induced change in connections between neurons, underlies learning and memory in the brain. Most of our understanding of synaptic plasticity derives from in vitro experiments with precisely repeated stimulus patterns; however, neurons exhibit significant variability in vivo during repeated experiences. Further, the spatial pattern of synaptic inputs to the dendritic tree influences synaptic plasticity, yet is not considered in most synaptic plasticity rules. Here, we investigate how spatiotemporal synaptic input patterns produce plasticity under in vivo-like conditions using a data-driven computational model with a plasticity rule based on calcium dynamics. Using in vivo spike train recordings as inputs to different size clusters of spines, we show that plasticity is strongly robust to trial-to-trial variability of spike timing. In addition, we derive general synaptic plasticity rules describing how spatiotemporal patterns of synaptic inputs control the magnitude and direction of plasticity. Synapses that strongly potentiate have greater firing rates and calcium concentrations later in the trial, whereas strongly depressing synapses have higher firing rates early in the trial. Neighboring synaptic activity influences the direction and magnitude of synaptic plasticity, with small clusters of spines producing the greatest increase in synaptic strength. Together, our results reveal that calcium dynamics can unify diverse plasticity rules and show how spatiotemporal firing rate patterns control synaptic plasticity.
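For readers unfamiliar with calcium-based plasticity rules of the kind used in this model, the sketch below illustrates the generic two-threshold idea (depression for intermediate calcium, potentiation for high calcium) on synthetic spike trains; the thresholds, time constants and rates are arbitrary placeholders and do not reproduce the authors' fitted model.

```python
# Illustrative sketch (not the authors' model): a generic calcium-threshold
# plasticity rule in which depression occurs for intermediate calcium and
# potentiation for high calcium, driven by presynaptic spike trains.
import numpy as np

rng = np.random.default_rng(0)

def calcium_trace(spike_times, t_grid, tau=0.05, amp=1.0):
    """Sum of exponentially decaying calcium transients, one per spike."""
    ca = np.zeros_like(t_grid)
    for ts in spike_times:
        ca += amp * np.exp(-(t_grid - ts) / tau) * (t_grid >= ts)
    return ca

def plasticity(ca, theta_d=0.8, theta_p=1.6, gamma_d=0.1, gamma_p=0.3, dt=1e-3):
    """Accumulate depression/potentiation depending on which threshold is crossed."""
    dw = 0.0
    dw -= gamma_d * np.sum((ca > theta_d) & (ca <= theta_p)) * dt   # intermediate Ca -> LTD
    dw += gamma_p * np.sum(ca > theta_p) * dt                       # high Ca -> LTP
    return dw

t = np.arange(0.0, 1.0, 1e-3)
# A burst of input spikes early in the trial vs. late in the trial
early = np.sort(rng.uniform(0.0, 0.3, 30))
late = np.sort(rng.uniform(0.7, 1.0, 30))
print("dw(early burst):", plasticity(calcium_trace(early, t)))
print("dw(late burst): ", plasticity(calcium_trace(late, t)))
```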
Affiliation(s)
- Daniel B. Dorman: Interdisciplinary Program in Neuroscience, George Mason University, Fairfax, VA 22030, USA
- Kim T. Blackwell: Interdisciplinary Program in Neuroscience, George Mason University, Fairfax, VA 22030, USA; Department of Bioengineering, Volgenau School of Engineering, George Mason University, Fairfax, VA 22030, USA

3. Macdonald FLA, Lepora NF, Conradt J, Ward-Cherrier B. Neuromorphic Tactile Edge Orientation Classification in an Unsupervised Spiking Neural Network. Sensors (Basel) 2022; 22:6998. [PMID: 36146344; PMCID: PMC9500632; DOI: 10.3390/s22186998]
Abstract
Dexterous manipulation in robotic hands relies on an accurate sense of artificial touch. Here we investigate neuromorphic tactile sensation with an event-based optical tactile sensor combined with spiking neural networks for edge orientation detection. The sensor incorporates an event-based vision system (mini-eDVS) into a low-form factor artificial fingertip (the NeuroTac). The processing of tactile information is performed through a Spiking Neural Network with unsupervised Spike-Timing-Dependent Plasticity (STDP) learning, and the resultant output is classified with a 3-nearest neighbours classifier. Edge orientations were classified in 10-degree increments while tapping vertically downward and sliding horizontally across the edge. In both cases, we demonstrate that the sensor is able to reliably detect edge orientation, and could lead to accurate, bio-inspired, tactile processing in robotics and prosthetics applications.
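The unsupervised learning step named above is STDP; a minimal pair-based, trace-driven STDP update in the standard textbook form is sketched below (parameter values are illustrative and not taken from the NeuroTac pipeline).

```python
# Minimal sketch of pair-based STDP with exponential traces (generic textbook
# form, not the exact pipeline used with the NeuroTac sensor): presynaptic
# spikes followed by a postsynaptic spike potentiate; the reverse order depresses.
import numpy as np

def stdp(pre_spikes, post_spikes, T, dt=1e-3, a_plus=0.01, a_minus=0.012,
         tau_plus=0.02, tau_minus=0.02, w0=0.5):
    steps = int(round(T / dt))
    pre = np.zeros(steps, bool)
    post = np.zeros(steps, bool)
    pre[np.round(np.asarray(pre_spikes) / dt).astype(int)] = True
    post[np.round(np.asarray(post_spikes) / dt).astype(int)] = True
    x = y = 0.0   # pre- and postsynaptic eligibility traces
    w = w0
    for t in range(steps):
        x += -x * dt / tau_plus + (1.0 if pre[t] else 0.0)
        y += -y * dt / tau_minus + (1.0 if post[t] else 0.0)
        if post[t]:
            w = min(1.0, w + a_plus * x)    # pre-before-post -> potentiation
        if pre[t]:
            w = max(0.0, w - a_minus * y)   # post-before-pre -> depression
    return w

print(stdp([0.010, 0.030], [0.012, 0.032], T=0.05))  # causal pairing, w grows
print(stdp([0.012, 0.032], [0.010, 0.030], T=0.05))  # anti-causal pairing, w shrinks
```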
Affiliation(s)
- Fraser L. A. Macdonald: Department of Engineering Mathematics, University of Bristol, Bristol BS8 1TW, UK; Bristol Robotics Laboratory, University of the West of England, Bristol BS34 8QZ, UK
- Nathan F. Lepora: Department of Engineering Mathematics, University of Bristol, Bristol BS8 1TW, UK; Bristol Robotics Laboratory, University of the West of England, Bristol BS34 8QZ, UK
- Jörg Conradt: School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, 114 28 Stockholm, Sweden
- Benjamin Ward-Cherrier: Department of Engineering Mathematics, University of Bristol, Bristol BS8 1TW, UK; Bristol Robotics Laboratory, University of the West of England, Bristol BS34 8QZ, UK

4. Kayikcioglu Bozkir I, Ozcan Z, Kose C, Kayikcioglu T, Cetin AE. Improving a cortical pyramidal neuron model's classification performance on a real-world ECG dataset by extending inputs. J Comput Neurosci 2022; 51:329-341. [PMID: 37148455; DOI: 10.1007/s10827-023-00851-1]
Abstract
Pyramidal neurons display a variety of active conductivities and complex morphologies that support nonlinear dendritic computation. Given growing interest in understanding the ability of pyramidal neurons to classify real-world data, in our study we applied both a detailed pyramidal neuron model and the perceptron learning algorithm to classify real-world ECG data. We used Gray coding to generate spike patterns from ECG signals as well as investigated the classification performance of the pyramidal neuron's subcellular regions. Compared with the equivalent single-layer perceptron, the pyramidal neuron performed poorly due to a weight constraint. A proposed mirroring approach for inputs, however, significantly boosted the classification performance of the neuron. We thus conclude that pyramidal neurons can classify real-world data and that the mirroring approach affects performance in a way similar to non-constrained learning.
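Two of the ingredients mentioned above, Gray coding and input mirroring, can be sketched as follows. The mirroring shown here (appending the bitwise complement of each pattern) is one common way to give a weight-constrained unit access to "negative" features; whether it matches the authors' exact scheme is an assumption, as are the quantisation range and bit depth.

```python
# Sketch of two ingredients described above, with hypothetical parameter choices:
# (1) Gray-coding quantised ECG samples into binary spike patterns, and
# (2) "mirroring" an input pattern by appending its complement, so that a unit
# restricted to non-negative weights can still realise inhibitory-like effects.
# The exact encoding and mirroring used by the authors may differ.
import numpy as np

def gray_code_bits(sample, n_bits=8, lo=-1.0, hi=1.0):
    """Quantise a sample to n_bits levels and return its Gray-code bit pattern."""
    level = int(np.clip((sample - lo) / (hi - lo), 0, 1) * (2**n_bits - 1))
    gray = level ^ (level >> 1)            # binary-reflected Gray code
    return np.array([(gray >> i) & 1 for i in range(n_bits - 1, -1, -1)])

def mirror(x):
    """Append the bitwise complement so every feature has a 'negative' copy."""
    return np.concatenate([x, 1 - x])

x = gray_code_bits(0.42)
print("gray bits:", x)
print("mirrored :", mirror(x))
```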
Affiliation(s)
- Ilknur Kayikcioglu Bozkir: Department of Computer Engineering, Karadeniz Technical University, Trabzon, Türkiye; Department of Computer Engineering, Bulent Ecevit University, Zonguldak, Türkiye
- Zubeyir Ozcan: Department of Electrical and Electronics Engineering, Karadeniz Technical University, Trabzon, Türkiye
- Cemal Kose: Department of Computer Engineering, Karadeniz Technical University, Trabzon, Türkiye
- Temel Kayikcioglu: Department of Electrical and Electronics Engineering, Karadeniz Technical University, Trabzon, Türkiye; Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, USA
- Ahmet Enis Cetin: Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, USA

5. Makarov VA, Lobov SA, Shchanikov S, Mikhaylov A, Kazantsev VB. Toward Reflective Spiking Neural Networks Exploiting Memristive Devices. Front Comput Neurosci 2022; 16:859874. [PMID: 35782090; PMCID: PMC9243340; DOI: 10.3389/fncom.2022.859874]
Abstract
The design of modern convolutional artificial neural networks (ANNs) composed of formal neurons copies the architecture of the visual cortex. Signals proceed through a hierarchy, where receptive fields become increasingly more complex and coding sparse. Nowadays, ANNs outperform humans in controlled pattern recognition tasks yet remain far behind in cognition. In part, it happens due to limited knowledge about the higher echelons of the brain hierarchy, where neurons actively generate predictions about what will happen next, i.e., the information processing jumps from reflex to reflection. In this study, we forecast that spiking neural networks (SNNs) can achieve the next qualitative leap. Reflective SNNs may take advantage of their intrinsic dynamics and mimic complex, not reflex-based, brain actions. They also enable a significant reduction in energy consumption. However, the training of SNNs is a challenging problem, strongly limiting their deployment. We then briefly overview new insights provided by the concept of a high-dimensional brain, which has been put forward to explain the potential power of single neurons in higher brain stations and deep SNN layers. Finally, we discuss the prospect of implementing neural networks in memristive systems. Such systems can densely pack on a chip 2D or 3D arrays of plastic synaptic contacts directly processing analog information. Thus, memristive devices are a good candidate for implementing in-memory and in-sensor computing. Then, memristive SNNs can diverge from the development of ANNs and build their niche, cognitive, or reflective computations.
Affiliation(s)
- Valeri A. Makarov: Instituto de Matemática Interdisciplinar, Universidad Complutense de Madrid, Madrid, Spain; Department of Neurotechnologies, Research Institute of Physics and Technology, Laboratory of Stochastic Multistable Systems, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Sergey A. Lobov: Department of Neurotechnologies, Research Institute of Physics and Technology, Laboratory of Stochastic Multistable Systems, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia; Neuroscience and Cognitive Technology Laboratory, Center for Technologies in Robotics and Mechatronics Components, Innopolis University, Innopolis, Russia; Center for Neurotechnology and Machine Learning, Immanuel Kant Baltic Federal University, Kaliningrad, Russia
- Sergey Shchanikov: Department of Neurotechnologies, Research Institute of Physics and Technology, Laboratory of Stochastic Multistable Systems, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia; Department of Information Technologies, Vladimir State University, Vladimir, Russia
- Alexey Mikhaylov: Department of Neurotechnologies, Research Institute of Physics and Technology, Laboratory of Stochastic Multistable Systems, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Viktor B. Kazantsev: Department of Neurotechnologies, Research Institute of Physics and Technology, Laboratory of Stochastic Multistable Systems, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia; Neuroscience and Cognitive Technology Laboratory, Center for Technologies in Robotics and Mechatronics Components, Innopolis University, Innopolis, Russia; Center for Neurotechnology and Machine Learning, Immanuel Kant Baltic Federal University, Kaliningrad, Russia

6. Mo L, Wang G, Long E, Zhuo M. ALSA: Associative Learning Based Supervised Learning Algorithm for SNN. Front Neurosci 2022; 16:838832. [PMID: 35431777; PMCID: PMC9008323; DOI: 10.3389/fnins.2022.838832]
Abstract
The spiking neural network (SNN) is considered the brain-like model that best conforms to the biological mechanisms of the brain. Because spikes are non-differentiable, training methods for SNNs remain incomplete. This paper proposes a supervised learning method for SNNs based on associative learning: ALSA. The method is based on the associative learning mechanism, and its realization resembles the conditioned-reflex process in animals, giving it strong physiological plausibility. The method uses improved spike-timing-dependent plasticity (STDP) rules, combined with a teacher layer that induces spikes in output neurons, to strengthen synaptic connections between input spike patterns and specified output neurons and to weaken synaptic connections between unrelated patterns and unrelated output neurons. Using ALSA, this paper also performs supervised classification on the Iris and MNIST datasets, achieving 95.7% and 91.58% recognition accuracy, respectively, which demonstrates that ALSA is a feasible supervised learning method for SNNs. The contribution of this paper is a biologically plausible supervised learning method for SNNs, based on STDP learning rules and the associative learning mechanism that is found widely in animal training.
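A conceptual sketch of the teacher-driven idea described above is given below; it is not the published ALSA algorithm, just a toy rule in which the labelled output is potentiated from recently active inputs while the other outputs are mildly depressed (all rates and sizes are hypothetical).

```python
# Conceptual sketch (not the published ALSA code): a teacher signal forces the
# labelled output neuron to spike, and an STDP-like rule then strengthens
# synapses from recently active inputs to that neuron while weakening synapses
# onto the other output neurons.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 20, 3
W = rng.uniform(0.0, 0.1, size=(n_out, n_in))

def train_step(W, input_trace, label, lr=0.05):
    """input_trace: recent presynaptic activity (e.g. low-pass filtered spikes)."""
    for k in range(W.shape[0]):
        if k == label:
            W[k] += lr * input_trace          # teacher-induced spike -> potentiate
        else:
            W[k] -= 0.2 * lr * input_trace    # unrelated outputs -> mild depression
    return np.clip(W, 0.0, 1.0)

pattern = (rng.random(n_in) < 0.3).astype(float)   # a toy input spike pattern
for _ in range(50):
    W = train_step(W, pattern, label=1)
print("response per output neuron:", W @ pattern)  # neuron 1 responds strongest
```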
7. Remme MWH, Bergmann U, Alevi D, Schreiber S, Sprekeler H, Kempter R. Hebbian plasticity in parallel synaptic pathways: A circuit mechanism for systems memory consolidation. PLoS Comput Biol 2021; 17:e1009681. [PMID: 34874938; PMCID: PMC8683039; DOI: 10.1371/journal.pcbi.1009681]
Abstract
Systems memory consolidation involves the transfer of memories across brain regions and the transformation of memory content. For example, declarative memories that transiently depend on the hippocampal formation are transformed into long-term memory traces in neocortical networks, and procedural memories are transformed within cortico-striatal networks. These consolidation processes are thought to rely on replay and repetition of recently acquired memories, but the cellular and network mechanisms that mediate the changes of memories are poorly understood. Here, we suggest that systems memory consolidation could arise from Hebbian plasticity in networks with parallel synaptic pathways-two ubiquitous features of neural circuits in the brain. We explore this hypothesis in the context of hippocampus-dependent memories. Using computational models and mathematical analyses, we illustrate how memories are transferred across circuits and discuss why their representations could change. The analyses suggest that Hebbian plasticity mediates consolidation by transferring a linear approximation of a previously acquired memory into a parallel pathway. Our modelling results are further in quantitative agreement with lesion studies in rodents. Moreover, a hierarchical iteration of the mechanism yields power-law forgetting-as observed in psychophysical studies in humans. The predicted circuit mechanism thus bridges spatial scales from single cells to cortical areas and time scales from milliseconds to years.
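The core mechanism, Hebbian transfer of a mapping into a parallel pathway, can be reproduced in a few lines. The toy below (dimensions, rates and the weak decay term are arbitrary choices, not the paper's model) shows a direct pathway converging to a linear approximation of an indirect two-stage pathway under replay of random activity.

```python
# Toy sketch of the consolidation mechanism described above: an "indirect"
# pathway (x -> hidden -> y) already stores a mapping; plain Hebbian learning on
# a parallel "direct" pathway (driven by x and the indirect output y) converges
# to a linear approximation of that mapping.
import numpy as np

rng = np.random.default_rng(2)
n_x, n_h, n_y = 8, 5, 4
A = rng.normal(size=(n_h, n_x))          # x -> hidden (e.g. hippocampal) weights
B = rng.normal(size=(n_y, n_h)) / n_h    # hidden -> y weights
W_direct = np.zeros((n_y, n_x))          # parallel "cortico-cortical" pathway

eta = 1e-3
for _ in range(20000):                    # "replay" of random activity patterns
    x = rng.normal(size=n_x)              # whitened input activity
    y = B @ (A @ x)                       # output driven by the indirect pathway
    W_direct += eta * np.outer(y, x)      # Hebbian update on the direct pathway
    W_direct *= (1 - eta)                 # weak decay keeps the weights bounded

# Small residual: the direct pathway approximates the indirect mapping B @ A
print("max |W_direct - B@A|:", np.max(np.abs(W_direct - B @ A)))
```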
Affiliation(s)
- Michiel W. H. Remme: Department of Biology, Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Berlin, Germany
- Urs Bergmann: Department of Biology, Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Berlin, Germany
- Denis Alevi: Department for Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Susanne Schreiber: Department of Biology, Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany; Einstein Center for Neurosciences Berlin, Berlin, Germany
- Henning Sprekeler: Department for Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany; Einstein Center for Neurosciences Berlin, Berlin, Germany; Excellence Cluster Science of Intelligence, Berlin, Germany
- Richard Kempter: Department of Biology, Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany; Einstein Center for Neurosciences Berlin, Berlin, Germany

8. A synaptic learning rule for exploiting nonlinear dendritic computation. Neuron 2021; 109:4001-4017.e10. [PMID: 34715026; PMCID: PMC8691952; DOI: 10.1016/j.neuron.2021.09.044]
Abstract
Information processing in the brain depends on the integration of synaptic input distributed throughout neuronal dendrites. Dendritic integration is a hierarchical process, proposed to be equivalent to integration by a multilayer network, potentially endowing single neurons with substantial computational power. However, whether neurons can learn to harness dendritic properties to realize this potential is unknown. Here, we develop a learning rule from dendritic cable theory and use it to investigate the processing capacity of a detailed pyramidal neuron model. We show that computations using spatial or temporal features of synaptic input patterns can be learned, and even synergistically combined, to solve a canonical nonlinear feature-binding problem. The voltage dependence of the learning rule drives coactive synapses to engage dendritic nonlinearities, whereas spike-timing dependence shapes the time course of subthreshold potentials. Dendritic input-output relationships can therefore be flexibly tuned through synaptic plasticity, allowing optimal implementation of nonlinear functions by single neurons.
9. Research on learning mechanism designing for equilibrated bipolar spiking neural networks. Artif Intell Rev 2020. [DOI: 10.1007/s10462-020-09818-5]
10. Taherkhani A, Cosma G, McGinnity TM. Optimization of Output Spike Train Encoding for a Spiking Neuron Based on its Spatio–Temporal Input Pattern. IEEE Trans Cogn Dev Syst 2020. [DOI: 10.1109/tcds.2019.2909355]
11. Wang X, Lin X, Dang X. Supervised learning in spiking neural networks: A review of algorithms and evaluations. Neural Netw 2020; 125:258-280. [PMID: 32146356; DOI: 10.1016/j.neunet.2020.02.011]
Abstract
As a new brain-inspired computational model of the artificial neural network, a spiking neural network encodes and processes neural information through precisely timed spike trains. Spiking neural networks are composed of biologically plausible spiking neurons, which have become suitable tools for processing complex temporal or spatiotemporal information. However, because of their intricately discontinuous and implicit nonlinear mechanisms, the formulation of efficient supervised learning algorithms for spiking neural networks is difficult, and has become an important problem in this research field. This article presents a comprehensive review of supervised learning algorithms for spiking neural networks and evaluates them qualitatively and quantitatively. First, a comparison between spiking neural networks and traditional artificial neural networks is provided. The general framework and some related theories of supervised learning for spiking neural networks are then introduced. Furthermore, the state-of-the-art supervised learning algorithms in recent years are reviewed from the perspectives of applicability to spiking neural network architecture and the inherent mechanisms of supervised learning algorithms. A performance comparison of spike train learning of some representative algorithms is also made. In addition, we provide five qualitative performance evaluation criteria for supervised learning algorithms for spiking neural networks and further present a new taxonomy for supervised learning algorithms depending on these five performance evaluation criteria. Finally, some future research directions in this research field are outlined.
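One quantitative ingredient of such performance comparisons is a spike-train distance; the van Rossum distance sketched below is a common choice (the review itself covers several metrics, so this is only an illustrative example with an arbitrary kernel width).

```python
# A minimal sketch of one spike-train similarity measure commonly used when
# evaluating spike-train learning (the van Rossum distance): each train is
# convolved with a causal exponential kernel and the L2 distance of the
# filtered traces is computed.
import numpy as np

def van_rossum_distance(train_a, train_b, tau=0.01, dt=1e-4, T=0.5):
    t = np.arange(0.0, T, dt)
    def filtered(train):
        f = np.zeros_like(t)
        for s in train:
            f += np.exp(-(t - s) / tau) * (t >= s)   # causal exponential kernel
        return f
    diff = filtered(train_a) - filtered(train_b)
    return np.sqrt(np.sum(diff**2) * dt / tau)

target = [0.05, 0.12, 0.30]
print(van_rossum_distance(target, [0.05, 0.12, 0.30]))   # 0.0 for identical trains
print(van_rossum_distance(target, [0.06, 0.15, 0.28]))   # grows with timing error
```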
Affiliation(s)
- Xiangwen Wang: College of Computer Science and Engineering, Northwest Normal University, Lanzhou, 730070, People's Republic of China
- Xianghong Lin: College of Computer Science and Engineering, Northwest Normal University, Lanzhou, 730070, People's Republic of China
- Xiaochao Dang: College of Computer Science and Engineering, Northwest Normal University, Lanzhou, 730070, People's Republic of China

12. Taherkhani A, Belatreche A, Li Y, Cosma G, Maguire LP, McGinnity TM. A review of learning in biologically plausible spiking neural networks. Neural Netw 2019; 122:253-272. [PMID: 31726331; DOI: 10.1016/j.neunet.2019.09.036]
Abstract
Artificial neural networks have been used as a powerful processing tool in various areas such as pattern recognition, control, robotics, and bioinformatics. Their wide applicability has encouraged researchers to improve artificial neural networks by investigating the biological brain. Neurological research has significantly progressed in recent years and continues to reveal new characteristics of biological neurons. New technologies can now capture temporal changes in the internal activity of the brain in more detail and help clarify the relationship between brain activity and the perception of a given stimulus. This new knowledge has led to a new type of artificial neural network, the Spiking Neural Network (SNN), that draws more faithfully on biological properties to provide higher processing abilities. A review of recent developments in learning of spiking neurons is presented in this paper. First the biological background of SNN learning algorithms is reviewed. The important elements of a learning algorithm such as the neuron model, synaptic plasticity, information encoding and SNN topologies are then presented. Then, a critical review of the state-of-the-art learning algorithms for SNNs using single and multiple spikes is presented. Additionally, deep spiking neural networks are reviewed, and challenges and opportunities in the SNN field are discussed.
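As a concrete reference point for the "neuron model" element discussed above, a leaky integrate-and-fire neuron, the model most commonly used in the reviewed algorithms, is sketched below with typical but arbitrary parameter values.

```python
# Sketch of the leaky integrate-and-fire (LIF) neuron, the spiking neuron model
# most commonly used in the SNN learning algorithms reviewed above. Parameter
# values here are arbitrary but typical.
import numpy as np

def lif(I, dt=1e-4, tau_m=0.02, v_rest=-70e-3, v_reset=-70e-3, v_th=-54e-3, R=1e8):
    """Simulate membrane voltage for an input current array I; return spike times."""
    v = v_rest
    spikes = []
    for i, current in enumerate(I):
        v += (-(v - v_rest) + R * current) * dt / tau_m   # leaky integration
        if v >= v_th:                 # threshold crossing -> emit spike and reset
            spikes.append(i * dt)
            v = v_reset
    return spikes

steps = 5000
I = np.full(steps, 2.0e-10)           # constant 0.2 nA drive for 0.5 s
print(lif(I)[:5])                      # regular firing at a rate set by the drive
```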
Affiliation(s)
- Aboozar Taherkhani: School of Computer Science and Informatics, Faculty of Computing, Engineering and Media, De Montfort University, Leicester, UK
- Ammar Belatreche: Department of Computer and Information Sciences, Northumbria University, Newcastle upon Tyne, UK
- Yuhua Li: School of Computer Science and Informatics, Cardiff University, Cardiff, UK
- Georgina Cosma: Department of Computer Science, Loughborough University, Loughborough, UK
- Liam P Maguire: Intelligent Systems Research Centre, Ulster University, Northern Ireland, Derry, UK
- T M McGinnity: Intelligent Systems Research Centre, Ulster University, Northern Ireland, Derry, UK; School of Science and Technology, Nottingham Trent University, Nottingham, UK

13. Lobo JL, Del Ser J, Bifet A, Kasabov N. Spiking Neural Networks and online learning: An overview and perspectives. Neural Netw 2019; 121:88-100. [PMID: 31536902; DOI: 10.1016/j.neunet.2019.09.004]
Abstract
Applications that generate huge amounts of data in the form of fast streams are becoming increasingly prevalent, making it necessary to learn in an online manner. These conditions usually impose memory and processing time restrictions, and they often turn into evolving environments where a change may affect the input data distribution. Such a change causes predictive models trained over these stream data to become obsolete and fail to adapt to new distributions. Especially in these non-stationary scenarios, there is a pressing need for new algorithms that adapt to these changes as fast as possible, while maintaining good performance scores. Unfortunately, most off-the-shelf classification models need to be retrained if they are used in changing environments, and fail to scale properly. Spiking Neural Networks have emerged as one of the most successful approaches to modeling the behavior and learning potential of the brain, and to exploiting that potential for practical online learning tasks. Moreover, some specific flavors of Spiking Neural Networks can overcome the necessity of retraining after a drift occurs. This work intends to merge both fields by serving as a comprehensive overview, motivating further developments that embrace Spiking Neural Networks for online learning scenarios, and providing a friendly entry point for non-experts.
Affiliation(s)
- Javier Del Ser: TECNALIA, 48160 Derio, Spain; Basque Center for Applied Mathematics (BCAM), 48009 Bilbao, Spain; University of the Basque Country UPV/EHU, 48013 Bilbao, Spain
- Albert Bifet: Télécom ParisTech, Paris, France; University of Waikato, Hamilton, New Zealand
- Nikola Kasabov: Auckland University of Technology (AUT), Auckland, New Zealand

14.

15. George JB, Abraham GM, Rashid Z, Amrutur B, Sikdar SK. Random neuronal ensembles can inherently do context dependent coarse conjunctive encoding of input stimulus without any specific training. Sci Rep 2018; 8:1403. [PMID: 29362477; PMCID: PMC5780417; DOI: 10.1038/s41598-018-19462-3]
Abstract
Conjunctive encoding of inputs has been hypothesized to be a key feature in the computational capabilities of the brain. This has been inferred based on behavioral studies and electrophysiological recordings from animals. In this report, we show that random neuronal ensembles grown on a multi-electrode array perform a coarse conjunctive encoding for a sequence of inputs, with the first input setting the context. Such an encoding scheme creates similar yet unique population codes at the output of the ensemble for related input sequences, which can then be decoded via a simple perceptron and hence a single STDP neuron layer. The random neuronal ensembles allow for pattern generalization and novel sequence classification without needing any specific learning or training of the ensemble. Such a representation of the inputs as population codes of neuronal ensemble outputs has inherent redundancy and is suitable for further decoding via even probabilistic/random connections to subsequent neuronal layers. We reproduce this behavior in a mathematical model to show that a random neuronal network with a mix of excitatory and inhibitory neurons and sufficient connectivity creates similar coarse-conjunctive encoding of input sequences.
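The decoding idea, an untrained random ensemble followed by a simple perceptron readout, can be illustrated with a toy rate-based stand-in for the biological ensemble (a fixed random threshold projection); the dimensions, noise level and training schedule below are arbitrary.

```python
# Sketch of the decoding scheme described above: a *fixed, random* ensemble maps
# inputs to high-dimensional population codes, and only a simple perceptron
# readout is trained on those codes. The "ensemble" here is a random threshold
# projection, not a biological network model.
import numpy as np

rng = np.random.default_rng(3)
n_in, n_ens, n_samples = 10, 200, 400

M = rng.normal(size=(n_ens, n_in))        # fixed random connectivity (never trained)

def ensemble(x):
    return (M @ x > 0.5).astype(float)    # untrained population code

# Two input "contexts" (prototypes) plus noise; labels 0/1
protos = rng.normal(size=(2, n_in))
labels = rng.integers(0, 2, n_samples)
X = protos[labels] + 0.3 * rng.normal(size=(n_samples, n_in))
codes = np.array([ensemble(x) for x in X])

# Train a perceptron readout on the ensemble output only
w, b = np.zeros(n_ens), 0.0
for _ in range(20):
    for c, y in zip(codes, labels):
        pred = 1 if c @ w + b > 0 else 0
        w += 0.1 * (y - pred) * c
        b += 0.1 * (y - pred)

acc = np.mean([(1 if c @ w + b > 0 else 0) == y for c, y in zip(codes, labels)])
print("training accuracy of the perceptron readout:", acc)
```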
Affiliation(s)
- Jude Baby George: Center for Nanoscience and Engineering, IISc Bangalore, Bengaluru, Karnataka, India
- Grace Mathew Abraham: Center for Nanoscience and Engineering, IISc Bangalore, Bengaluru, Karnataka, India
- Zubin Rashid: Center for Nanoscience and Engineering, IISc Bangalore, Bengaluru, Karnataka, India
- Bharadwaj Amrutur: Robert Bosch Center for Cyber-Physical Systems and Department of Electrical Communications Engineering, IISc Bangalore, Bengaluru, Karnataka, India

16. Suppa A, Quartarone A, Siebner H, Chen R, Di Lazzaro V, Del Giudice P, Paulus W, Rothwell J, Ziemann U, Classen J. The associative brain at work: Evidence from paired associative stimulation studies in humans. Clin Neurophysiol 2017; 128:2140-2164. [DOI: 10.1016/j.clinph.2017.08.003]
17. Pande S, Morgan F, Krewer F, Harkin J, McDaid L, McGinley B. Rapid application prototyping for hardware modular spiking neural network architectures. Neural Comput Appl 2017. [DOI: 10.1007/s00521-015-2136-0]
18. Supervised learning in multilayer spiking neural networks with inner products of spike trains. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2016.08.087]
19. Xu Y, Yang J, Zhong S. An online supervised learning method based on gradient descent for spiking neurons. Neural Netw 2017; 93:7-20. [PMID: 28525811; DOI: 10.1016/j.neunet.2017.04.010]
Abstract
The purpose of supervised learning with temporal encoding for spiking neurons is to make the neurons emit a specific spike train encoded by precise firing times of spikes. Gradient-descent-based (GDB) learning methods are widely used and verified in the current research. Although the existing GDB multi-spike learning (or spike sequence learning) methods have good performance, they work in an offline manner and still have some limitations. This paper proposes an online GDB spike sequence learning method for spiking neurons that is based on the online adjustment mechanism of real biological neuron synapses. The method constructs an error function and calculates the adjustment of synaptic weights as soon as the neurons emit a spike during their running process. We analyze and synthesize desired and actual output spikes to select appropriate input spikes in the calculation of the weight adjustment. The experimental results show that our method clearly improves learning performance compared with the offline learning manner and has a certain advantage in learning accuracy compared with other learning methods. Stronger learning ability determines that the method has a large pattern storage capacity.
Affiliation(s)
- Yan Xu: College of Information Science & Technology, Nanjing Agricultural University, Nanjing 210095, China
- Jing Yang: School of Management, Beijing Normal University, Zhuhai Campus, Zhuhai 519087, China
- Shuiming Zhong: School of Computer and Software, Nanjing University of Information Science & Technology, Nanjing 210044, China

20. Wang J, Belatreche A, Maguire LP, McGinnity TM. SpikeTemp: An Enhanced Rank-Order-Based Learning Approach for Spiking Neural Networks With Adaptive Structure. IEEE Trans Neural Netw Learn Syst 2017; 28:30-43. [PMID: 26642460; DOI: 10.1109/tnnls.2015.2501322]
Abstract
This paper presents an enhanced rank-order-based learning algorithm, called SpikeTemp, for spiking neural networks (SNNs) with a dynamically adaptive structure. The trained feed-forward SNN consists of two layers of spiking neurons: 1) an encoding layer which temporally encodes real-valued features into spatio-temporal spike patterns and 2) an output layer of dynamically grown neurons which perform spatio-temporal classification. Both Gaussian receptive fields and square cosine population encoding schemes are employed to encode real-valued features into spatio-temporal spike patterns. Unlike the rank-order-based learning approach, SpikeTemp uses the precise times of the incoming spikes for adjusting the synaptic weights such that early spikes result in a large weight change and late spikes lead to a smaller weight change. This removes the need to rank all the incoming spikes and, thus, reduces the computational cost of SpikeTemp. The proposed SpikeTemp algorithm is demonstrated on several benchmark data sets and on an image recognition task. The results show that SpikeTemp can achieve better classification performance and is much faster than the existing rank-order-based learning approach. In addition, the number of output neurons is much smaller when the square cosine encoding scheme is employed. Furthermore, SpikeTemp is benchmarked against a selection of existing machine learning algorithms, and the results demonstrate the ability of SpikeTemp to classify different data sets after just one presentation of the training samples with comparable classification performance.
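The key departure from rank-order learning stated above, making the weight change a direct function of precise spike time, is illustrated below with a simple exponential decay; the actual functional form and constants used in SpikeTemp are not reproduced here.

```python
# Illustration of the key idea stated above: instead of ranking incoming spikes,
# the weight change is a direct, decaying function of each spike's precise
# arrival time, so earlier spikes produce larger changes. The decay constant and
# exact functional form used by SpikeTemp may differ from this sketch.
import numpy as np

def weight_updates(spike_times_ms, eta=0.1, tau_ms=20.0):
    t = np.asarray(spike_times_ms, dtype=float)
    return eta * np.exp(-t / tau_ms)       # early spike -> large change, late -> small

print(weight_updates([1.0, 5.0, 20.0, 60.0]))
```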
21. Yu Q, Yan R, Tang H, Tan KC, Li H. A Spiking Neural Network System for Robust Sequence Recognition. IEEE Trans Neural Netw Learn Syst 2016; 27:621-635. [PMID: 25879976; DOI: 10.1109/tnnls.2015.2416771]
Abstract
This paper proposes a biologically plausible network architecture with spiking neurons for sequence recognition. This architecture is a unified and consistent system with functional parts of sensory encoding, learning, and decoding. This is the first systematic model attempting to reveal the neural mechanisms considering both the upstream and the downstream neurons together. The whole system is a consistent temporal framework, where the precise timing of spikes is employed for information processing and cognitive computing. Experimental results show that the system is competent to perform the sequence recognition, being robust to noisy sensory inputs and invariant to changes in the intervals between input stimuli within a certain range. The classification ability of the temporal learning rule used in the system is investigated through two benchmark tasks that outperform the other two widely used learning rules for classification. The results also demonstrate the computational power of spiking neurons over perceptrons for processing spatiotemporal patterns. In summary, the system provides a general way with spiking neurons to encode external stimuli into spatiotemporal spikes, to learn the encoded spike patterns with temporal learning rules, and to decode the sequence order with downstream neurons. The system structure would be beneficial for developments in both hardware and software.
22. Albers C, Westkott M, Pawelzik K. Learning of Precise Spike Times with Homeostatic Membrane Potential Dependent Synaptic Plasticity. PLoS One 2016; 11:e0148948. [PMID: 26900845; PMCID: PMC4763343; DOI: 10.1371/journal.pone.0148948]
Abstract
Precise spatio-temporal patterns of neuronal action potentials underlie, for example, sensory representations and the control of muscle activity. However, it is not known how the synaptic efficacies in the neuronal networks of the brain adapt such that they can reliably generate spikes at specific points in time. Existing activity-dependent plasticity rules like Spike-Timing-Dependent Plasticity are agnostic to the goal of learning spike times. On the other hand, the existing formal and supervised learning algorithms perform a temporally precise comparison of projected activity with the target, but there is no known biologically plausible implementation of this comparison. Here, we propose a simple and local unsupervised synaptic plasticity mechanism that is derived from the requirement of a balanced membrane potential. Since the relevant signal for synaptic change is the postsynaptic voltage rather than spike times, we call the plasticity rule Membrane Potential Dependent Plasticity (MPDP). Combining our plasticity mechanism with spike after-hyperpolarization causes a sensitivity of synaptic change to pre- and postsynaptic spike times which can reproduce Hebbian spike timing dependent plasticity for inhibitory synapses, as was found in experiments. In addition, the sensitivity of MPDP to the time course of the voltage when generating a spike allows MPDP to distinguish between weak (spurious) and strong (teacher) spikes, which therefore provides a neuronal basis for the comparison of actual and target activity. For spatio-temporal input spike patterns our conceptually simple plasticity rule achieves a surprisingly high storage capacity for spike associations. The sensitivity of MPDP to the subthreshold membrane potential during training allows robust memory retrieval after learning even in the presence of activity corrupted by noise. We propose that MPDP represents a biophysically plausible mechanism to learn temporal target activity patterns.
Affiliation(s)
- Christian Albers: Institute for Theoretical Physics, University of Bremen, Bremen, Germany
- Maren Westkott: Institute for Theoretical Physics, University of Bremen, Bremen, Germany
- Klaus Pawelzik: Institute for Theoretical Physics, University of Bremen, Bremen, Germany

23. Taherkhani A, Belatreche A, Li Y, Maguire LP. DL-ReSuMe: A Delay Learning-Based Remote Supervised Method for Spiking Neurons. IEEE Trans Neural Netw Learn Syst 2015; 26:3137-3149. [PMID: 25794401; DOI: 10.1109/tnnls.2015.2404938]
Abstract
Recent research has shown the potential capability of spiking neural networks (SNNs) to model complex information processing in the brain. There is biological evidence to prove the use of the precise timing of spikes for information coding. However, the exact learning mechanism in which the neuron is trained to fire at precise times remains an open problem. The majority of the existing learning methods for SNNs are based on weight adjustment. However, there is also biological evidence that the synaptic delay is not constant. In this paper, a learning method for spiking neurons, called delay learning remote supervised method (DL-ReSuMe), is proposed to merge the delay shift approach and ReSuMe-based weight adjustment to enhance the learning performance. DL-ReSuMe uses more biologically plausible properties, such as delay learning, and needs less weight adjustment than ReSuMe. Simulation results have shown that the proposed DL-ReSuMe approach achieves learning accuracy and learning speed improvements compared with ReSuMe.
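The weight-adjustment half of the method (the ReSuMe-style part; the delay-learning half is omitted) follows the general pattern sketched below: desired output spikes potentiate using the recent presynaptic trace, while spurious actual spikes depress by the same quantity. Parameter values are illustrative only.

```python
# Sketch of a ReSuMe-style weight update, the weight-adjustment half of the
# method described above (the delay-shift half is omitted): at each desired
# output spike the weight grows with the recent presynaptic trace, and at each
# actual (erroneous) output spike it shrinks by the same quantity.
import numpy as np

def resume_like_update(pre_spikes, desired_spikes, actual_spikes,
                       a=0.01, A=0.05, tau=0.02):
    def pre_trace(t):
        past = [s for s in pre_spikes if s <= t]
        return sum(np.exp(-(t - s) / tau) for s in past)
    dw = 0.0
    for t in desired_spikes:
        dw += a + A * pre_trace(t)     # move the output towards the desired spike
    for t in actual_spikes:
        dw -= a + A * pre_trace(t)     # suppress spurious output spikes
    return dw

pre = [0.010, 0.015, 0.040]
print(resume_like_update(pre, desired_spikes=[0.020], actual_spikes=[]))   # > 0
print(resume_like_update(pre, desired_spikes=[], actual_spikes=[0.020]))   # < 0
```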
24.

25. Soudry D, Di Castro D, Gal A, Kolodny A, Kvatinsky S. Memristor-based multilayer neural networks with online gradient descent training. IEEE Trans Neural Netw Learn Syst 2015; 26:2408-2421. [PMID: 25594981; DOI: 10.1109/tnnls.2014.2383395]
Abstract
Learning in multilayer neural networks (MNNs) relies on continuous updating of large matrices of synaptic weights by local rules. Such locality can be exploited for massive parallelism when implementing MNNs in hardware. However, these update rules require a multiply and accumulate operation for each synaptic weight, which is challenging to implement compactly using CMOS. In this paper, a method for performing these update operations simultaneously (incremental outer products) using memristor-based arrays is proposed. The method is based on the fact that, approximately, given a voltage pulse, the conductivity of a memristor will increment proportionally to the pulse duration multiplied by the pulse magnitude if the increment is sufficiently small. The proposed method uses a synaptic circuit composed of a small number of components per synapse: one memristor and two CMOS transistors. This circuit is expected to consume between 2% and 8% of the area and static power of previous CMOS-only hardware alternatives. Such a circuit can compactly implement hardware MNNs trainable by scalable algorithms based on online gradient descent (e.g., backpropagation). The utility and robustness of the proposed memristor-based circuit are demonstrated on standard supervised learning tasks.
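The incremental outer-product idea can be checked numerically: if row pulses carry the output errors as magnitudes and column pulses carry the input activities as durations, the small-signal conductance change of each device is proportional to their product, i.e. the desired outer-product update. The sketch below uses hypothetical unit-free quantities.

```python
# Numerical sketch of the pulse-coding idea described above (hypothetical units):
# a desired update dW = eta * outer(delta, x) is written to a memristor array by
# driving row i with a pulse of magnitude ~ delta_i and column j with a pulse of
# duration ~ x_j; in the small-increment regime each conductance then changes by
# roughly magnitude * duration, i.e. proportionally to delta_i * x_j.
import numpy as np

rng = np.random.default_rng(4)
delta = rng.normal(size=3)          # output errors (row drive magnitudes)
x = rng.uniform(0, 1, size=4)       # input activities (column pulse durations)
eta = 0.01

ideal = eta * np.outer(delta, x)          # update wanted by the learning rule
magnitude = eta * delta                   # one pulse magnitude per row
duration = x                              # one pulse duration per column
realised = np.outer(magnitude, duration)  # small-signal memristor response
print(np.allclose(ideal, realised))       # outer-product update realised in one step
```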
26. Bouchard KE, Ganguli S, Brainard MS. Role of the site of synaptic competition and the balance of learning forces for Hebbian encoding of probabilistic Markov sequences. Front Comput Neurosci 2015; 9:92. [PMID: 26257637; PMCID: PMC4508839; DOI: 10.3389/fncom.2015.00092]
Abstract
The majority of distinct sensory and motor events occur as temporally ordered sequences with rich probabilistic structure. Sequences can be characterized by the probability of transitioning from the current state to upcoming states (forward probability), as well as the probability of having transitioned to the current state from previous states (backward probability). Despite the prevalence of probabilistic sequencing of both sensory and motor events, the Hebbian mechanisms that mold synapses to reflect the statistics of experienced probabilistic sequences are not well understood. Here, we show through analytic calculations and numerical simulations that Hebbian plasticity (correlation, covariance, and STDP) with pre-synaptic competition can develop synaptic weights equal to the conditional forward transition probabilities present in the input sequence. In contrast, post-synaptic competition can develop synaptic weights proportional to the conditional backward probabilities of the same input sequence. We demonstrate that to stably reflect the conditional probability of a neuron's inputs and outputs, local Hebbian plasticity requires balance between competitive learning forces that promote synaptic differentiation and homogenizing learning forces that promote synaptic stabilization. The balance between these forces dictates a prior over the distribution of learned synaptic weights, strongly influencing both the rate at which structure emerges and the entropy of the final distribution of synaptic weights. Together, these results demonstrate a simple correspondence between the biophysical organization of neurons, the site of synaptic competition, and the temporal flow of information encoded in synaptic weights by Hebbian plasticity while highlighting the utility of balancing learning forces to accurately encode probability distributions, and prior expectations over such probability distributions.
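The central analytical claim, that Hebbian co-activation with presynaptic competition recovers forward transition probabilities (and postsynaptic competition the backward ones), can be verified on a toy Markov sequence; the explicit normalisation below stands in for the competitive learning force described in the paper.

```python
# Toy check of the claim above: Hebbian co-activation counting with *presynaptic*
# normalisation (competition among synapses sharing a presynaptic neuron)
# recovers the forward transition probabilities of a Markov sequence, while
# *postsynaptic* normalisation recovers the backward probabilities.
import numpy as np

rng = np.random.default_rng(5)
P = np.array([[0.1, 0.6, 0.3],      # P[i, j] = prob of state j following state i
              [0.5, 0.2, 0.3],
              [0.3, 0.3, 0.4]])

state, counts = 0, np.zeros((3, 3))
for _ in range(100000):              # generate a long probabilistic sequence
    nxt = rng.choice(3, p=P[state])
    counts[state, nxt] += 1.0        # Hebbian co-activation of pre=state, post=nxt
    state = nxt

W_forward = counts / counts.sum(axis=1, keepdims=True)   # presynaptic competition
W_backward = counts / counts.sum(axis=0, keepdims=True)  # postsynaptic competition
print(np.round(W_forward, 2))    # ~ P: synapses encode forward probabilities
print(np.round(W_backward, 2))   # columns ~ backward (predecessor) probabilities
```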
Affiliation(s)
- Kristofer E Bouchard: Life Sciences and Computational Research Divisions, Lawrence Berkeley National Laboratory, Berkeley, CA, USA
- Surya Ganguli: Department of Applied Physics, Stanford University, Stanford, CA, USA
- Michael S Brainard: Department of Physiology, University of California, San Francisco and Center for Integrative Neuroscience, University of California, San Francisco, San Francisco, CA, USA; Howard Hughes Medical Institute, Chevy Chase, MD, USA

27. Sadeh S, Clopath C, Rotter S. Emergence of Functional Specificity in Balanced Networks with Synaptic Plasticity. PLoS Comput Biol 2015; 11:e1004307. [PMID: 26090844; PMCID: PMC4474917; DOI: 10.1371/journal.pcbi.1004307]
Abstract
In rodent visual cortex, synaptic connections between orientation-selective neurons are unspecific at the time of eye opening, and become to some degree functionally specific only later during development. An explanation for this two-stage process was proposed in terms of Hebbian plasticity based on visual experience that would eventually enhance connections between neurons with similar response features. For this to work, however, two conditions must be satisfied: First, orientation selective neuronal responses must exist before specific recurrent synaptic connections can be established. Second, Hebbian learning must be compatible with the recurrent network dynamics contributing to orientation selectivity, and the resulting specific connectivity must remain stable for unspecific background activity. Previous studies have mainly focused on very simple models, where the receptive fields of neurons were essentially determined by feedforward mechanisms, and where the recurrent network was small, lacking the complex recurrent dynamics of large-scale networks of excitatory and inhibitory neurons. Here we studied the emergence of functionally specific connectivity in large-scale recurrent networks with synaptic plasticity. Our results show that balanced random networks, which already exhibit highly selective responses at eye opening, can develop feature-specific connectivity if appropriate rules of synaptic plasticity are invoked within and between excitatory and inhibitory populations. If these conditions are met, the initial orientation selectivity guides the process of Hebbian learning and, as a result, functionally specific connections, along with a surplus of bidirectional connections, emerge. Our results thus demonstrate the cooperation of synaptic plasticity and recurrent dynamics in large-scale functional networks with realistic receptive fields, highlight the role of inhibition as a critical element in this process, and pave the way for further computational studies of sensory processing in neocortical network models equipped with synaptic plasticity.

In the primary visual cortex of mammals, neurons are selective to the orientation of contrast edges. In some species, such as cats and monkeys, neurons preferring similar orientations are adjacent on the cortical surface, leading to smooth orientation maps. In rodents, in contrast, such spatial orientation maps do not exist, and neurons of different specificities are mixed in a salt-and-pepper fashion. During development, however, a "functional" map of orientation selectivity emerges, where connections between neurons of similar preferred orientations are selectively enhanced. Here we show how such feature-specific connectivity can arise in realistic neocortical networks of excitatory and inhibitory neurons. Our results demonstrate how recurrent dynamics can work in cooperation with synaptic plasticity to form networks where neurons preferring similar stimulus features connect more strongly together. Such networks, in turn, are known to enhance the specificity of neuronal responses to a stimulus. Our study thus reveals how self-organizing connectivity in neuronal networks enables them to achieve new or enhanced functions, and it underlines the essential role of recurrent inhibition and plasticity in this process.
Affiliation(s)
- Sadra Sadeh: Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Freiburg im Breisgau, Germany; Bioengineering Department, Imperial College London, London, United Kingdom
- Claudia Clopath: Bioengineering Department, Imperial College London, London, United Kingdom
- Stefan Rotter: Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Freiburg im Breisgau, Germany

28. Energy Efficient Sparse Connectivity from Imbalanced Synaptic Plasticity Rules. PLoS Comput Biol 2015; 11:e1004265. [PMID: 26046817; PMCID: PMC4457870; DOI: 10.1371/journal.pcbi.1004265]
Abstract
It is believed that energy efficiency is an important constraint in brain evolution. As synaptic transmission dominates energy consumption, energy can be saved by ensuring that only a few synapses are active. It is therefore likely that the formation of sparse codes and sparse connectivity are fundamental objectives of synaptic plasticity. In this work we study how sparse connectivity can result from a synaptic learning rule of excitatory synapses. Information is maximised when potentiation and depression are balanced according to the mean presynaptic activity level and the resulting fraction of zero-weight synapses is around 50%. However, an imbalance towards depression increases the fraction of zero-weight synapses without significantly affecting performance. We show that imbalanced plasticity corresponds to imposing a regularising constraint on the L1-norm of the synaptic weight vector, a procedure that is well-known to induce sparseness. Imbalanced plasticity is biophysically plausible and leads to more efficient synaptic configurations than a previously suggested approach that prunes synapses after learning. Our framework gives a novel interpretation to the high fraction of silent synapses found in brain regions like the cerebellum. Recent estimates point out that a large part of the energetic budget of the mammalian cortex is spent in transmitting signals between neurons across synapses. Despite this, studies of learning and memory do not usually take energy efficiency into account. In this work we address the canonical computational problem of storing memories with synaptic plasticity. However, instead of optimising solely for information capacity, we search for energy efficient solutions. This implies that the number of functional synapses needs to be small (sparse connectivity) while maintaining high information. We suggest imbalanced plasticity, a learning regime where net depression is stronger than potentiation, as a simple and plausible means to learn more efficient neural circuits. Our framework gives a novel interpretation to the high fraction of silent synapses found in brain regions like the cerebellum.
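The stated correspondence between depression-biased ("imbalanced") plasticity and L1 regularisation can be illustrated with a toy non-negative storage problem: a soft-threshold (proximal) step plays the role of the extra depression and drives many weights exactly to zero. Problem sizes and penalties below are arbitrary and not taken from the paper.

```python
# Toy illustration of the correspondence described above: adding an L1 penalty
# (here via a proximal/soft-threshold step, playing the role of an imbalance
# towards depression) drives a larger fraction of excitatory weights exactly to
# zero while the remaining weights still fit the stored associations.
import numpy as np

rng = np.random.default_rng(6)
n_syn, n_pat = 100, 60
X = rng.random((n_pat, n_syn))            # presynaptic activity patterns
y = rng.random(n_pat)                      # target postsynaptic responses

def train(l1=0.0, lr=1e-3, steps=5000):
    w = np.zeros(n_syn)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / n_pat   # potentiation/depression from the error
        w = w - lr * grad
        w = np.maximum(w - lr * l1, 0.0)   # soft-threshold + excitatory constraint
    return w

for l1 in (0.0, 0.5):
    w = train(l1)
    frac_zero = np.mean(w == 0.0)
    err = np.mean((X @ w - y) ** 2)
    print(f"l1={l1}: fraction of silent synapses={frac_zero:.2f}, error={err:.3f}")
```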
29. Avesani P, Hazan H, Koilis E, Manevitz LM, Sona D. Non-parametric temporal modeling of the hemodynamic response function via a liquid state machine. Neural Netw 2015. [PMID: 26218350; DOI: 10.1016/j.neunet.2015.04.009]
Abstract
Standard methods for the analysis of functional MRI data strongly rely on prior implicit and explicit hypotheses made to simplify the analysis. In this work the attention is focused on two such commonly accepted hypotheses: (i) the hemodynamic response function (HRF) to be searched in the BOLD signal can be described by a specific parametric model e.g., double-gamma; (ii) the effect of stimuli on the signal is taken to be linearly additive. While these assumptions have been empirically proven to generate high sensitivity for statistical methods, they also limit the identification of relevant voxels to what is already postulated in the signal, thus not allowing the discovery of unknown correlates in the data due to the presence of unexpected hemodynamics. This paper tries to overcome these limitations by proposing a method wherein the HRF is learned directly from data rather than induced from its basic form assumed in advance. This approach produces a set of voxel-wise models of HRF and, as a result, relevant voxels are filterable according to the accuracy of their prediction in a machine learning framework. This approach is instantiated using a temporal architecture based on the paradigm of Reservoir Computing wherein a Liquid State Machine is combined with a decoding Feed-Forward Neural Network. This splits the modeling into two parts: first a representation of the complex temporal reactivity of the hemodynamic response is determined by a universal global "reservoir" which is essentially temporal; second an interpretation of the encoded representation is determined by a standard feed-forward neural network, which is trained by the data. Thus the reservoir models the temporal state of information during and following temporal stimuli in a feed-back system, while the neural network "translates" this data to fit the specific HRF response as given, e.g. by BOLD signal measurements in fMRI. An empirical analysis on synthetic datasets shows that the learning process can be robust both to noise and to the varying shape of the underlying HRF. A similar investigation on real fMRI datasets provides evidence that BOLD predictability allows for discrimination between relevant and irrelevant voxels for a given set of stimuli.
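A minimal rate-based stand-in for this two-part architecture is sketched below: a fixed random "reservoir" (an echo-state network rather than a spiking liquid) generates state trajectories from an input time series, and only a readout is fitted, here with ridge regression instead of a feed-forward network, to predict a toy exponential-kernel response.

```python
# Minimal echo-state-style sketch of the two-part architecture described above:
# a fixed random recurrent reservoir turns an input time series into a rich
# state trajectory, and only the readout is fitted to predict the target.
import numpy as np

rng = np.random.default_rng(7)
T, n_res = 2000, 100

W_in = rng.normal(scale=0.5, size=n_res)
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1 (echo state)

u = rng.normal(size=T)                             # input stimulus time series
target = np.convolve(u, np.exp(-np.arange(50) / 10.0), mode="full")[:T]  # toy "HRF" response

states = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])               # reservoir state update (untrained)
    states[t] = x

ridge = 1e-2
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res), states.T @ target)
pred = states @ W_out
print("correlation between prediction and target:",
      np.round(np.corrcoef(pred, target)[0, 1], 3))
```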
Collapse
Affiliation(s)
- Paolo Avesani
- NeuroInformatics Laboratory (NILab), Fondazione Bruno Kessler, Trento, Italy; Centro Interdipartimentale Mente e Cervello (CIMeC), Università di Trento, Italy
| | - Hananel Hazan
- Department of Computer Science, University of Haifa, Israel; Network Biology Research Laboratory, Technion, Haifa, Israel
| | - Ester Koilis
- Department of Computer Science, University of Haifa, Israel
| | | | - Diego Sona
- NeuroInformatics Laboratory (NILab), Fondazione Bruno Kessler, Trento, Italy; Pattern Analysis and Computer Vision, Istituto Italiano di Tecnologia, Genova, Italy
| |
Collapse
|
30
|
Hiratani N, Fukai T. Mixed signal learning by spike correlation propagation in feedback inhibitory circuits. PLoS Comput Biol 2015; 11:e1004227. [PMID: 25910189 PMCID: PMC4409403 DOI: 10.1371/journal.pcbi.1004227] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2014] [Accepted: 03/06/2015] [Indexed: 11/18/2022] Open
Abstract
The brain can learn and detect mixed input signals masked by various types of noise, and spike-timing-dependent plasticity (STDP) is the candidate synaptic level mechanism. Because sensory inputs typically have spike correlation, and local circuits have dense feedback connections, input spikes cause the propagation of spike correlation in lateral circuits; however, it is largely unknown how this secondary correlation generated by lateral circuits influences learning processes through STDP, or whether it is beneficial to achieve efficient spike-based learning from uncertain stimuli. To explore the answers to these questions, we construct models of feedforward networks with lateral inhibitory circuits and study how propagated correlation influences STDP learning, and what kind of learning algorithm such circuits achieve. We derive analytical conditions at which neurons detect minor signals with STDP, and show that depending on the origin of the noise, different correlation timescales are useful for learning. In particular, we show that non-precise spike correlation is beneficial for learning in the presence of cross-talk noise. We also show that by considering excitatory and inhibitory STDP at lateral connections, the circuit can acquire a lateral structure optimal for signal detection. In addition, we demonstrate that the model performs blind source separation in a manner similar to the sequential sampling approximation of the Bayesian independent component analysis algorithm. Our results provide a basic understanding of STDP learning in feedback circuits by integrating analyses from both dynamical systems and information theory.
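For readers unfamiliar with the plasticity rule the study builds on, a minimal additive pairwise STDP update with exponential pre- and postsynaptic traces can be sketched as follows. All parameters are assumed, and the lateral inhibitory circuitry analysed in the paper is omitted.

    import numpy as np

    def stdp_weights(pre_spikes, post_spikes, n_pre, w0=0.5,
                     a_plus=0.01, a_minus=0.012, tau=20.0, dt=1.0, t_max=1000.0):
        """Additive pairwise STDP with exponential pre/post traces (assumed parameters)."""
        w = np.full(n_pre, w0)
        x_pre = np.zeros(n_pre)   # presynaptic traces
        x_post = 0.0              # postsynaptic trace
        for step in range(int(t_max / dt)):
            t = step * dt
            pre_now = np.array([t in s for s in pre_spikes])   # which inputs spike now
            post_now = t in post_spikes
            x_pre = x_pre * np.exp(-dt / tau) + pre_now
            x_post = x_post * np.exp(-dt / tau) + post_now
            # pre-before-post -> potentiation, post-before-pre -> depression
            if post_now:
                w += a_plus * x_pre
            w -= a_minus * x_post * pre_now
            w = np.clip(w, 0.0, 1.0)
        return w

    # Usage: input 0 reliably fires 5 ms before each postsynaptic spike, input 1 fires 5 ms after.
    post = {float(t) for t in range(50, 1000, 50)}
    pre = [{t - 5.0 for t in post}, {t + 5.0 for t in post}]
    print(stdp_weights(pre, post, n_pre=2))   # first weight potentiates, second depresses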
Collapse
Affiliation(s)
- Naoki Hiratani
- Department of Complexity Science and Engineering, The University of Tokyo, Kashiwa, Chiba, Japan
- Laboratory for Neural Circuit Theory, RIKEN Brain Science Institute, Wako, Saitama, Japan
- * E-mail: (NH); (TF)
| | - Tomoki Fukai
- Laboratory for Neural Circuit Theory, RIKEN Brain Science Institute, Wako, Saitama, Japan
- * E-mail: (NH); (TF)
| |
Collapse
|
31
|
Input coding for neuro-electronic hybrid systems. Biosystems 2014; 126:1-11. [DOI: 10.1016/j.biosystems.2014.08.002] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2014] [Revised: 07/31/2014] [Accepted: 08/05/2014] [Indexed: 11/20/2022]
|
32
|
Wang J, Belatreche A, Maguire L, McGinnity TM. An online supervised learning method for spiking neural networks with adaptive structure. Neurocomputing 2014. [DOI: 10.1016/j.neucom.2014.04.017] [Citation(s) in RCA: 38] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
|
33
|
Reid D, Hussain AJ, Tawfik H. Financial time series prediction using spiking neural networks. PLoS One 2014; 9:e103656. [PMID: 25170618 PMCID: PMC4149346 DOI: 10.1371/journal.pone.0103656] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2013] [Accepted: 07/05/2014] [Indexed: 11/18/2022] Open
Abstract
In this paper, a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, to financial time series prediction is presented. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data of this kind. The performance of the spiking neural network was benchmarked against three systems: two "traditional", rate-encoded neural networks (a Multi-Layer Perceptron and a Dynamic Ridge Polynomial neural network) and a standard Linear Predictor Coefficients model. For this comparison, three non-stationary and noisy time series were used: IBM stock data, US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-step-ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-To-Noise ratio. This work demonstrated the applicability of the Polychronous Spiking Network to financial data forecasting and, in turn, indicates the potential of using such networks over traditional systems in difficult-to-manage non-stationary environments.
Collapse
Affiliation(s)
- David Reid
- Department of Mathematics and Computer Science, Liverpool Hope University, Liverpool, United Kingdom
- School of Computing and Mathematical Sciences Liverpool John Moores University Liverpool, United Kingdom
| | - Abir Jaafar Hussain
- Department of Mathematics and Computer Science, Liverpool Hope University, Liverpool, United Kingdom
- School of Computing and Mathematical Sciences Liverpool John Moores University Liverpool, United Kingdom
| | - Hissam Tawfik
- Department of Mathematics and Computer Science, Liverpool Hope University, Liverpool, United Kingdom
- School of Computing and Mathematical Sciences Liverpool John Moores University Liverpool, United Kingdom
| |
Collapse
|
34
|
Yu Q, Tang H, Tan KC, Yu H. A brain-inspired spiking neural network model with temporal encoding and learning. Neurocomputing 2014. [DOI: 10.1016/j.neucom.2013.06.052] [Citation(s) in RCA: 81] [Impact Index Per Article: 8.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
35
|
Le Mouel C, Harris KD, Yger P. Supervised learning with decision margins in pools of spiking neurons. J Comput Neurosci 2014; 37:333-44. [PMID: 24862859 PMCID: PMC4159595 DOI: 10.1007/s10827-014-0505-9] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2013] [Revised: 04/25/2014] [Accepted: 05/01/2014] [Indexed: 11/29/2022]
Abstract
Learning to categorise sensory inputs by generalising from a few examples whose category is precisely known is a crucial step for the brain to produce appropriate behavioural responses. At the neuronal level, this may be performed by adaptation of synaptic weights under the influence of a training signal, in order to group spiking patterns impinging on the neuron. Here we describe a framework that allows spiking neurons to perform such "supervised learning", using principles similar to the Support Vector Machine, a well-established and robust classifier. Using a hinge-loss error function, we show that requesting a margin similar to that of the SVM improves performance on linearly non-separable problems. Moreover, we show that using pools of neurons to discriminate categories can also increase the performance by sharing the load among neurons.
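The SVM-like ingredient, requiring correct decisions by a margin via a hinge loss, can be sketched on abstracted feature vectors standing in for the synaptically filtered spike patterns the paper actually uses. The update rule and parameters below are assumptions, not the authors' neuron model.

    import numpy as np

    rng = np.random.default_rng(2)
    n_feat, n_pat = 50, 200

    # Abstracted input features standing in for synaptically filtered spike patterns.
    X = rng.standard_normal((n_pat, n_feat))
    labels = np.sign(X @ rng.standard_normal(n_feat) + 0.3 * rng.standard_normal(n_pat))

    w, b = np.zeros(n_feat), 0.0
    eta, margin, lam = 0.01, 1.0, 1e-3   # assumed learning rate, decision margin, regulariser

    for epoch in range(100):
        for i in rng.permutation(n_pat):
            score = labels[i] * (X[i] @ w + b)
            if score < margin:                 # hinge loss is active: error-driven update
                w += eta * (labels[i] * X[i] - lam * w)
                b += eta * labels[i]
            else:                              # inside the margin of safety: only weight decay
                w -= eta * lam * w

    acc = np.mean(np.sign(X @ w + b) == labels)
    print("training accuracy with margin:", acc)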
Collapse
Affiliation(s)
- Charlotte Le Mouel
- UCL Institute of Neurology and UCL Department of Neuroscience, Physiology, and Pharmacology, London, UK,
| | | | | |
Collapse
|
36
|
Masquelier T. Oscillations can reconcile slowly changing stimuli with short neuronal integration and STDP timescales. NETWORK (BRISTOL, ENGLAND) 2014; 25:85-96. [PMID: 24571100 DOI: 10.3109/0954898x.2014.881574] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
Oscillatory brain activity has been widely reported experimentally, yet its functional roles, if any, are still under debate. In this review we argue two things: firstly, thanks to oscillations, even slowly changing stimuli can be encoded in precise relative spike times, decodable by downstream "coincidence detector" neurons in a feedforward manner. Secondly, the required connectivity to do so can spontaneously emerge with spike timing-dependent plasticity (STDP), in an unsupervised manner. The key here is that a common oscillatory drive enables neurons to remain under a fluctuation-driven regime. In this regime spike time jitter does not accumulate and can thus be lower than the intrinsic timescales of stimulus fluctuations, which leads to so-called "temporal encoding". Furthermore, the oscillatory drive formats the spikes in discrete oversampling volleys, and the relative spike times between neurons indicate the eventual differences in their activation levels. The oversampling accelerates the STDP-based learning for downstream neurons. After learning, readout only takes one oscillatory cycle. Finally, we also discuss experimental evidence, and the question of how the theory is complementary to the so-called "communication through coherence" theory.
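The phase-coding mechanism argued for here can be caricatured as a latency code inside each oscillation cycle: the more strongly a neuron is activated, the earlier it spikes within the cycle. The toy mapping below is an assumption-laden sketch, not the review's biophysical model.

    import numpy as np

    def phase_encode(activations, period=25.0, t_min=2.0, t_max=20.0):
        """Map activation levels (0..1) to spike times within each oscillation cycle:
        stronger activation -> earlier spike in the cycle (assumed linear mapping)."""
        activations = np.clip(np.asarray(activations, dtype=float), 0.0, 1.0)
        latency = t_max - activations * (t_max - t_min)
        return latency, period

    # A slowly changing stimulus still yields precise, decodable relative spike times
    # on every cycle, because the oscillation restarts the "race" each period.
    acts = [0.9, 0.5, 0.1]
    latency, period = phase_encode(acts)
    for cycle in range(3):
        times = cycle * period + latency
        print(f"cycle {cycle}: spike times {np.round(times, 1)}, firing order {np.argsort(times)}")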
Collapse
Affiliation(s)
- Timothée Masquelier
- CNRS and UPMC, Lab. of Neurobiology of Adaptive Processes (UMR7102), 9 quai St. Bernard , Paris , 75005 France
| |
Collapse
|
37
|
Gütig R. To spike, or when to spike? Curr Opin Neurobiol 2014; 25:134-9. [PMID: 24468508 DOI: 10.1016/j.conb.2014.01.004] [Citation(s) in RCA: 42] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2013] [Revised: 12/16/2013] [Accepted: 01/02/2014] [Indexed: 11/25/2022]
Abstract
Recent experimental reports have suggested that cortical networks can operate in regimes where sensory information is encoded by relatively small populations of spikes and their precise relative timing. Combined with the discovery of spike-timing-dependent plasticity, these findings have sparked growing interest in the capabilities of neurons to encode and decode spike-timing-based neural representations. To address these questions, a novel family of methodologically diverse supervised learning algorithms for spiking neuron models has been developed. These models have demonstrated the high capacity of simple neural architectures to operate beyond the regime of well-established independent rate codes and to utilize the theoretical advantages of spike timing as an additional coding dimension.
Collapse
Affiliation(s)
- Robert Gütig
- Max Planck Institute of Experimental Medicine, Hermann-Rein-Str. 3, 37075 Göttingen, Germany.
| |
Collapse
|
38
|
Franosch JMP, Urban S, van Hemmen JL. Supervised Spike-Timing-Dependent Plasticity: A Spatiotemporal Neuronal Learning Rule for Function Approximation and Decisions. Neural Comput 2013; 25:3113-30. [PMID: 24047322 DOI: 10.1162/neco_a_00520] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
How can an animal learn from experience? How can it train sensors, such as the auditory or tactile system, based on other sensory input such as the visual system? Supervised spike-timing-dependent plasticity (supervised STDP) is a possible answer. Supervised STDP trains one modality using input from another one as “supervisor.” Quite complex time-dependent relationships between the senses can be learned. Here we prove that under very general conditions, supervised STDP converges to a stable configuration of synaptic weights leading to a reconstruction of primary sensory input.
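A minimal sketch of the general scheme, one modality supplying teacher spikes that gate plasticity at the synapses of another, is given below. The exact form of the rule and all parameters are assumptions for illustration and do not reproduce the paper's convergence result.

    import numpy as np

    rng = np.random.default_rng(3)
    n_in, T = 30, 400
    dt, tau, eta = 1.0, 15.0, 0.005

    pre = rng.random((T, n_in)) < 0.05      # spike trains of the modality being trained
    teacher = rng.random(T) < 0.04          # "supervisor" spikes from the other modality

    w = np.full(n_in, 0.2)
    trace = np.zeros(n_in)
    for t in range(T):
        trace = trace * np.exp(-dt / tau) + pre[t]
        fired = trace @ w > 1.0             # crude stand-in for the trained neuron firing
        # supervised STDP: potentiate recently active inputs when the teacher fires,
        # depress them when the neuron fires without the teacher (assumed rule form)
        if teacher[t]:
            w += eta * trace
        elif fired:
            w -= eta * trace
        w = np.clip(w, 0.0, 1.0)

    print("weight range after supervised STDP:", w.min(), w.max())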
Collapse
|
39
|
Beyeler M, Dutt ND, Krichmar JL. Categorization and decision-making in a neurobiologically plausible spiking network using a STDP-like learning rule. Neural Netw 2013; 48:109-24. [DOI: 10.1016/j.neunet.2013.07.012] [Citation(s) in RCA: 77] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2013] [Revised: 07/28/2013] [Accepted: 07/31/2013] [Indexed: 11/26/2022]
|
40
|
Yger P, Harris KD. The Convallis rule for unsupervised learning in cortical networks. PLoS Comput Biol 2013; 9:e1003272. [PMID: 24204224 PMCID: PMC3808450 DOI: 10.1371/journal.pcbi.1003272] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2013] [Accepted: 08/28/2013] [Indexed: 01/26/2023] Open
Abstract
The phenomenology and cellular mechanisms of cortical synaptic plasticity are becoming known in increasing detail, but the computational principles by which cortical plasticity enables the development of sensory representations are unclear. Here we describe a framework for cortical synaptic plasticity termed the "Convallis rule", mathematically derived from a principle of unsupervised learning via constrained optimization. Implementation of the rule caused a recurrent cortex-like network of simulated spiking neurons to develop rate representations of real-world speech stimuli, enabling classification by a downstream linear decoder. Applied to spike patterns used in in vitro plasticity experiments, the rule reproduced multiple results including and beyond STDP. However STDP alone produced poorer learning performance. The mathematical form of the rule is consistent with a dual coincidence detector mechanism that has been suggested by experiments in several synaptic classes of juvenile neocortex. Based on this confluence of normative, phenomenological, and mechanistic evidence, we suggest that the rule may approximate a fundamental computational principle of the neocortex.
Collapse
Affiliation(s)
- Pierre Yger
- UCL Institute of Neurology and UCL Department of Neuroscience, Physiology, and Pharmacology, London, United Kingdom
- * E-mail:
| | - Kenneth D. Harris
- UCL Institute of Neurology and UCL Department of Neuroscience, Physiology, and Pharmacology, London, United Kingdom
| |
Collapse
|
41
|
Yu Q, Tang H, Tan KC, Li H. Rapid feedforward computation by temporal encoding and learning with spiking neurons. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2013; 24:1539-52. [PMID: 24808592 DOI: 10.1109/tnnls.2013.2245677] [Citation(s) in RCA: 36] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/21/2023]
Abstract
Primates perform remarkably well in cognitive tasks such as pattern recognition. Motivated by recent findings in biological systems, a unified and consistent feedforward system network with a proper encoding scheme and supervised temporal rules is built for solving the pattern recognition task. The temporal rules used for processing precise spiking patterns have recently emerged as ways of emulating the brain's computation from its anatomy and physiology. Most of these rules could be used for recognizing different spatiotemporal patterns. However, there arises the question of whether these temporal rules could be used to recognize real-world stimuli such as images. Furthermore, how the information is represented in the brain still remains unclear. To tackle these problems, a proper encoding method and a unified computational model with consistent and efficient learning rule are proposed. Through encoding, external stimuli are converted into sparse representations, which also have properties of invariance. These temporal patterns are then learned through biologically derived algorithms in the learning layer, followed by the final decision presented through the readout layer. The performance of the model with images of digits from the MNIST database is presented. The results show that the proposed model is capable of recognizing images correctly with a performance comparable to that of current benchmark algorithms. The results also suggest a plausibility proof for a class of feedforward models of rapid and robust recognition in the brain.
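One simple instance of the temporal encoding step described here is intensity-to-latency conversion, in which brighter pixels fire earlier and dim pixels stay silent. The sketch below covers only that encoding step under assumed parameters; it is not the paper's full encoder or its learning and readout layers.

    import numpy as np

    def latency_encode(image, t_window=20.0, threshold=0.1):
        """Convert pixel intensities (0..1) to first-spike latencies:
        brighter pixels spike earlier; dim pixels below threshold stay silent."""
        img = np.asarray(image, dtype=float).ravel()
        times = np.full(img.shape, np.inf)             # inf = no spike (sparse code)
        active = img >= threshold
        times[active] = (1.0 - img[active]) * t_window
        return times

    rng = np.random.default_rng(4)
    patch = rng.random((5, 5))                         # stand-in for an image patch
    spike_times = latency_encode(patch)
    n_active = int(np.isfinite(spike_times).sum())
    print(f"{n_active}/25 pixels produce a spike; earliest at "
          f"{spike_times[np.isfinite(spike_times)].min():.2f} ms")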
Collapse
|
42
|
Hino T, Hasegawa T, Tanaka H, Tsuruoka T, Terabe K, Ogawa T, Aono M. Volatile and nonvolatile selective switching of a photo-assisted initialized atomic switch. NANOTECHNOLOGY 2013; 24:384006. [PMID: 23999187 DOI: 10.1088/0957-4484/24/38/384006] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
A photo-assisted atomic switch, which has a photoconductive molecular layer in a gap of about 20 nm between an Ag2S electrode and a Pt electrode, is set to a conventional gap-type atomic switch operation mode by light irradiation with the application of a small bias that precipitates Ag atoms from an Ag2S electrode. After this initialization, the switch operates only with application of a bias. In this study, we also found that after the set-operation a photo-assisted initialized atomic switch shows different switching modes depending on the bias range, i.e., volatile switching when the applied bias is smaller than the threshold bias, and nonvolatile switching when the applied bias is larger than the threshold bias. These characteristics can be useful in reconfiguring a circuit such as in neural computing systems.
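The reported bias dependence of the two switching modes can be summarised by a toy state model; all thresholds and time constants below are assumptions for illustration, not measured device values.

    import math

    def switch_state(bias_volts, hold_seconds, threshold=0.3, retention_tau=1.0):
        """Toy model: below the threshold bias the ON state decays (volatile switching),
        above it the conductance state is retained (nonvolatile switching)."""
        if bias_volts < threshold:
            return math.exp(-hold_seconds / retention_tau)   # remaining fraction of ON state
        return 1.0                                           # state fully retained

    print(switch_state(0.2, hold_seconds=5.0))   # volatile regime: mostly decayed
    print(switch_state(0.5, hold_seconds=5.0))   # nonvolatile regime: retained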
Collapse
Affiliation(s)
- T Hino
- International Center for Materials Nanoarchitectonics, National Institute for Materials Science, 1-1 Namiki, Tsukuba, Ibaraki, Japan.
| | | | | | | | | | | | | |
Collapse
|
43
|
Abstract
Spike-timing-dependent construction (STDC) is the production of new spiking neurons and connections in a simulated neural network in response to neuron activity. Following the discovery of spike-timing-dependent plasticity (STDP), significant effort has gone into the modeling and simulation of adaptation in spiking neural networks (SNNs). Limitations in computational power imposed by network topology, however, constrain learning capabilities through connection weight modification alone. Constructive algorithms produce new neurons and connections, allowing automatic structural responses for applications of unknown complexity and nonstationary solutions. A conceptual analogy is developed and extended to theoretical conditions for modeling synaptic plasticity as network construction. Generalizing past constructive algorithms, we propose a framework for the design of novel constructive SNNs and demonstrate its application in the development of simulations for the validation of developed theory. Potential directions of future research and applications of STDC for biological modeling and machine learning are also discussed.
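The constructive principle, adding a neuron when no existing neuron responds to the recent input activity, can be sketched as follows. The response threshold and the weight-setting rule are assumptions for illustration rather than the authors' formal conditions.

    import numpy as np

    def constructive_update(weights, recent_activity, threshold=1.5):
        """If no existing neuron responds to the recent input activity, construct a new
        neuron whose weights copy a normalised version of that activity pattern."""
        response = weights @ recent_activity > threshold     # empty array if no neurons yet
        if not response.any():
            new_w = recent_activity / (np.linalg.norm(recent_activity) + 1e-12)
            weights = np.vstack([weights, new_w])
        return weights

    rng = np.random.default_rng(5)
    n_in = 10
    weights = np.zeros((0, n_in))                            # start with no neurons
    for step in range(20):
        recent = (rng.random(n_in) < 0.3).astype(float)      # recently active inputs
        if recent.any():
            weights = constructive_update(weights, recent)

    print("neurons constructed:", weights.shape[0])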
Collapse
Affiliation(s)
- Toby Lightheart
- School of Mechanical Engineering, University of Adelaide, Adelaide, SA 5005, Australia.
| | | | | |
Collapse
|
44
|
Xu Y, Zeng X, Han L, Yang J. A supervised multi-spike learning algorithm based on gradient descent for spiking neural networks. Neural Netw 2013; 43:99-113. [DOI: 10.1016/j.neunet.2013.02.003] [Citation(s) in RCA: 50] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2011] [Revised: 02/05/2013] [Accepted: 02/08/2013] [Indexed: 10/27/2022]
|
45
|
Abstract
The purpose of supervised learning with temporal encoding for spiking neurons is to make the neurons emit a specific spike train encoded by the precise firing times of spikes. If only running time is considered, supervised learning for a spiking neuron is equivalent to distinguishing the times of desired output spikes from all other times during the running of the neuron by adjusting synaptic weights, which can be regarded as a classification problem. Based on this idea, this letter proposes a new supervised learning method for spiking neurons with temporal encoding; it first transforms the supervised learning into a classification problem and then solves the problem by using the perceptron learning rule. The experimental results show that the proposed method achieves higher learning accuracy and efficiency than existing learning methods, so it is more powerful for solving complex and real-time problems.
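The reduction described here, treating each time step as a "should fire" versus "should stay silent" classification and applying the perceptron rule to the momentary synaptic trace, can be sketched as follows. The neuron model and parameters are simplified assumptions, not the letter's exact formulation.

    import numpy as np

    rng = np.random.default_rng(6)
    n_in, T, dt, tau, eta = 40, 300, 1.0, 10.0, 0.02

    pre = rng.random((T, n_in)) < 0.05            # input spike trains
    desired = np.zeros(T, dtype=bool)
    desired[[60, 140, 220]] = True                # desired output spike times

    w = np.zeros(n_in)
    for epoch in range(50):
        trace = np.zeros(n_in)
        for t in range(T):
            trace = trace * np.exp(-dt / tau) + pre[t]
            fired = trace @ w > 1.0               # threshold crossing at this time step
            # perceptron rule on the two classes "should fire" vs "should stay silent"
            if desired[t] and not fired:
                w += eta * trace
            elif fired and not desired[t]:
                w -= eta * trace

    # One forward pass after training to see which times the neuron now fires at.
    trace, hits = np.zeros(n_in), []
    for t in range(T):
        trace = trace * np.exp(-dt / tau) + pre[t]
        if trace @ w > 1.0:
            hits.append(t)
    print("output spike times after training:", hits)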
Collapse
Affiliation(s)
- Yan Xu
- Institute of Intelligence Science and Technology, Hohai University, Nanjing, Jiangsu 211100, China
| | - Xiaoqin Zeng
- Institute of Intelligence Science and Technology, Hohai University, Nanjing, Jiangsu 211100, China
| | - Shuiming Zhong
- School of Computer and Software, Nanjing University of Information Science and Technology, Nanjing, China
| |
Collapse
|
46
|
Mohemmed A, Schliebs S, Matsuda S, Kasabov N. Training spiking neural networks to associate spatio-temporal input–output spike patterns. Neurocomputing 2013. [DOI: 10.1016/j.neucom.2012.08.034] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
|
47
|
Spike-Timing-dependent plasticity and short-term plasticity jointly control the excitation of Hebbian plasticity without weight constraints in neural networks. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2013; 2012:968272. [PMID: 23365563 PMCID: PMC3546465 DOI: 10.1155/2012/968272] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/03/2012] [Accepted: 11/28/2012] [Indexed: 11/17/2022]
Abstract
Hebbian plasticity precisely describes how synapses increase their synaptic strengths according to the correlated activities between two neurons; however, it fails to explain how these activities dilute the strength of the same synapses. Recent literature has proposed spike-timing-dependent plasticity and short-term plasticity on multiple dynamic stochastic synapses that can control synaptic excitation and remove many user-defined constraints. Under this hypothesis, a network model was implemented giving more computational power to receptors, and the behavior at a synapse was defined by the collective dynamic activities of stochastic receptors. An experiment was conducted to analyze whether spike-timing-dependent plasticity can interact with short-term plasticity to balance the excitation of Hebbian neurons without weight constraints and, if so, what underlying mechanisms help neurons maintain such excitation in a computational environment. According to our results, both plasticity mechanisms work together to balance the excitation of the neural network, as our neurons stabilized their weights for Poisson inputs with mean firing rates from 10 Hz to 40 Hz. The behavior generated by the two neurons was similar to the behavior discussed under synaptic redistribution: synaptic weights were stabilized while there was a continuous increase in the presynaptic probability of release and a higher turnover rate of postsynaptic receptors.
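A minimal sketch of letting additive STDP act on synapses that also show short-term depression is given below. It uses a deterministic Tsodyks-Markram-style resource variable rather than the stochastic multi-receptor synapses of the paper, and every parameter is an assumption.

    import numpy as np

    rng = np.random.default_rng(7)
    n_in, T, dt = 50, 5000, 1.0
    pre = rng.random((T, n_in)) < 0.02          # ~20 Hz Poisson inputs at 1 ms resolution

    w = np.full(n_in, 0.2)                      # long-term weights, updated by STDP
    x = np.ones(n_in)                           # short-term resources per synapse
    u, tau_rec = 0.4, 300.0                     # utilisation and recovery time (assumed)
    tr_pre, tr_post, tau = np.zeros(n_in), 0.0, 20.0
    a_plus, a_minus = 0.002, 0.0022             # slight bias toward depression
    v, theta = 0.0, 0.5

    for t in range(T):
        tr_pre *= np.exp(-dt / tau)
        tr_post *= np.exp(-dt / tau)
        x += dt * (1.0 - x) / tau_rec           # short-term resources recover
        v *= np.exp(-dt / tau)
        spikes = pre[t]
        v += np.sum(w[spikes] * u * x[spikes])  # drive scaled by available resources
        x[spikes] *= (1.0 - u)                  # short-term depression consumes resources
        tr_pre[spikes] += 1.0
        w[spikes] -= a_minus * tr_post          # post-before-pre: depression
        if v > theta:                           # postsynaptic spike and reset
            v = 0.0
            tr_post += 1.0
            w += a_plus * tr_pre                # pre-before-post: potentiation
        np.clip(w, 0.0, 0.5, out=w)

    print("mean weight after joint STDP / short-term plasticity:", round(float(w.mean()), 4))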
Collapse
|
48
|
Sheik S, Pfeiffer M, Stefanini F, Indiveri G. Spatio-temporal Spike Pattern Classification in Neuromorphic Systems. BIOMIMETIC AND BIOHYBRID SYSTEMS 2013. [DOI: 10.1007/978-3-642-39802-5_23] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/25/2023]
|
49
|
Abstract
We introduce a supervised learning algorithm for multilayer spiking neural networks. The algorithm overcomes a limitation of existing learning algorithms: it can be applied to neurons firing multiple spikes in artificial neural networks with hidden layers. It can also, in principle, be used with any linearizable neuron model and allows different coding schemes of spike train patterns. The algorithm is applied successfully to classic linearly nonseparable benchmarks such as the XOR problem and the Iris data set, as well as to more complex classification and mapping problems. The algorithm has been successfully tested in the presence of noise, requires smaller networks than reservoir computing, and results in faster convergence than existing algorithms for similar tasks such as SpikeProp.
Collapse
Affiliation(s)
- Ioana Sporea
- Department of Computing, University of Surrey, Guildford, GU2 7XH, U.K
| | - André Grüning
- Department of Computing, University of Surrey, Guildford, GU2 7XH, U.K
| |
Collapse
|
50
|
Associative memory of phase-coded spatiotemporal patterns in leaky Integrate and Fire networks. J Comput Neurosci 2012; 34:319-36. [PMID: 23053861 PMCID: PMC3605499 DOI: 10.1007/s10827-012-0423-7] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2012] [Revised: 08/16/2012] [Accepted: 09/06/2012] [Indexed: 11/09/2022]
Abstract
We study the collective dynamics of a Leaky Integrate and Fire network in which precise relative phase relationships of spikes among neurons are stored, as attractors of the dynamics, and selectively replayed at different time scales. Using an STDP-based learning process, we store in the connectivity several phase-coded spike patterns, and we find that, depending on the excitability of the network, different working regimes are possible, with transient or persistent replay activity induced by a brief signal. We introduce an order parameter to evaluate the similarity between stored and recalled phase-coded patterns, and measure the storage capacity. Modulation of spiking thresholds during replay changes the frequency of the collective oscillation or the number of spikes per cycle while preserving the phase relationships. This allows a coding scheme in which phase, rate and frequency are dissociable. Robustness with respect to noise and heterogeneity of neuron parameters is studied, showing that, because the dynamics is a retrieval process, the units preserve stable and precise phase relationships and a single oscillation frequency even in noisy conditions and with heterogeneous internal parameters.
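The storage step in models of this kind can be sketched with a connectivity kernel that depends on the phase difference between units. The cosine kernel and the one-step retrieval check below are stand-ins for the paper's STDP-based learning rule and spiking dynamics, with an assumed order parameter analogous to the one described.

    import numpy as np

    rng = np.random.default_rng(8)
    n = 100
    patterns = [rng.uniform(0, 2 * np.pi, n) for _ in range(3)]   # phase-coded patterns

    # Store each pattern with a kernel depending on the phase difference between units
    # (cosine kernel used here as a stand-in for an STDP-derived kernel).
    J = np.zeros((n, n))
    for phi in patterns:
        J += np.cos(phi[:, None] - phi[None, :]) / n
    np.fill_diagonal(J, 0.0)

    # Retrieval check: present a noisy cue, take one relaxation step on the phases,
    # and measure the overlap (order parameter) with the stored pattern.
    cue = patterns[0] + 0.3 * rng.standard_normal(n)
    relaxed = np.angle((J * np.exp(1j * cue)[None, :]).sum(axis=1))
    overlap = abs(np.mean(np.exp(1j * (relaxed - patterns[0]))))
    print("overlap with stored phase pattern:", round(overlap, 3))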
Collapse
|