1
Xie S, Jones E, Zhang S, Marsden E, Baistow I, Furber S, Mitra S, Hamilton A. FPGA-based fast bin-ratio spiking ensemble network for radioisotope identification. Neural Netw 2024; 176:106332. [PMID: 38678831] [DOI: 10.1016/j.neunet.2024.106332]
Abstract
In this work, we demonstrate the training, conversion, and implementation flow of an FPGA-based bin-ratio ensemble spiking neural network applied to radioisotope identification. A combination of techniques including learned step quantisation (LSQ) and pruning facilitated the implementation by compressing the network's parameters to 30% of their original size while retaining an accuracy of 97.04%, an accuracy loss of less than 1%. Meanwhile, the proposed ensemble of 20 three-layer spiking neural networks (SNNs), incorporating 1160 spiking neurons, needs only 334 μs for a single inference at a clock frequency of 100 MHz. With these optimisations, the FPGA implementation on an Artix-7 board consumes an estimated 157 μJ per inference.
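The LSQ compression named above treats the quantisation step size as a learnable parameter: weights are scaled by the step, rounded, clamped to a signed integer grid, and de-quantised. A minimal sketch of that forward pass (illustrative names and bit width, not the authors' implementation):

```python
def lsq_quantise(w: float, step: float, n_bits: int = 4) -> float:
    """LSQ-style forward pass: scale by a learnable step, round, clamp,
    and de-quantise. The step itself would be trained by gradient descent."""
    q_max = 2 ** (n_bits - 1) - 1      # e.g. +7 for 4-bit signed weights
    q_min = -q_max - 1                 # e.g. -8
    q = round(w / step)                # scale and round to the integer grid
    q = max(q_min, min(q_max, q))      # clamp to the representable range
    return q * step                    # map back to the real weight domain
```

Values inside the range snap to the nearest multiple of the step, while out-of-range values saturate at the grid edges.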
Affiliation(s)
- Shouyu Xie
- University of Edinburgh, Alexander Crum Brown Road, Kings Buildings, Edinburgh, EH9 3FF, United Kingdom
- Edward Jones
- University of Manchester, Manchester, United Kingdom
- Siru Zhang
- University of Liverpool, Liverpool, United Kingdom
- Steve Furber
- University of Manchester, Manchester, United Kingdom
- Srinjoy Mitra
- University of Edinburgh, Alexander Crum Brown Road, Kings Buildings, Edinburgh, EH9 3FF, United Kingdom
- Alister Hamilton
- University of Edinburgh, Alexander Crum Brown Road, Kings Buildings, Edinburgh, EH9 3FF, United Kingdom
2
Fan ZY, Tang Z, Fang JL, Jiang YP, Liu QX, Tang XG, Zhou YC, Gao J. Neuromorphic Computing of Optoelectronic Artificial BFCO/AZO Heterostructure Memristors Synapses. Nanomaterials (Basel) 2024; 14:583. [PMID: 38607116] [PMCID: PMC11013421] [DOI: 10.3390/nano14070583]
Abstract
Compared with purely electrical neuromorphic devices, those stimulated by optical signals have gained increasing attention owing to their realistic sensory simulation. In this work, an optoelectronic neuromorphic device based on a photoelectric memristor with a Bi2FeCrO6/Al-doped ZnO (BFCO/AZO) heterostructure is fabricated; it responds to both electrical and optical signals and successfully simulates a variety of synaptic behaviors, such as STP, LTP, and PPF. In addition, the photomemory mechanism is identified by analyzing the energy band structures of AZO and BFCO. A convolutional neural network (CNN) architecture for pattern classification was applied, and an improved stochastic adaptive algorithm raised the recognition accuracy on the Modified National Institute of Standards and Technology (MNIST) and Fashion-MNIST datasets to 95.21% and 74.19%, respectively. These results provide a feasible approach for the future implementation of optoelectronic synapses.
Affiliation(s)
- Zhao-Yuan Fan
- School of Physics and Optoelectric Engineering, Guangdong University of Technology, Guangzhou Higher Education Mega Center, Guangzhou 510006, China
- Zhenhua Tang
- School of Physics and Optoelectric Engineering, Guangdong University of Technology, Guangzhou Higher Education Mega Center, Guangzhou 510006, China
- Jun-Lin Fang
- School of Physics and Optoelectric Engineering, Guangdong University of Technology, Guangzhou Higher Education Mega Center, Guangzhou 510006, China
- Yan-Ping Jiang
- School of Physics and Optoelectric Engineering, Guangdong University of Technology, Guangzhou Higher Education Mega Center, Guangzhou 510006, China
- Qiu-Xiang Liu
- School of Physics and Optoelectric Engineering, Guangdong University of Technology, Guangzhou Higher Education Mega Center, Guangzhou 510006, China
- Xin-Gui Tang
- School of Physics and Optoelectric Engineering, Guangdong University of Technology, Guangzhou Higher Education Mega Center, Guangzhou 510006, China
- Yi-Chun Zhou
- School of Advanced Materials and Nanotechnology, Xidian University, Xi’an 710126, China
- Ju Gao
- Department of Physics, The University of Hong Kong, Hong Kong 999077, China
3
Pan W, Zhao F, Han B, Dong Y, Zeng Y. Emergence of brain-inspired small-world spiking neural network through neuroevolution. iScience 2024; 27:108845. [PMID: 38327781] [PMCID: PMC10847652] [DOI: 10.1016/j.isci.2024.108845]
Abstract
Studies suggest that the brain's high efficiency and low energy consumption may be closely related to its small-world topology and critical dynamics. However, existing efforts on the performance-oriented structural evolution of spiking neural networks (SNNs) are time-consuming and ignore the core structural properties of the brain. Here, we introduce a multi-objective Evolutionary Liquid State Machine (ELSM), which blends the small-world coefficient and criticality into the evolutionary objectives to guide the emergence of brain-inspired, efficient structures. Experiments reveal ELSM's consistent and competitive performance: it achieves 97.23% on NMNIST and outperforms LSM models on MNIST and Fashion-MNIST with accuracies of 98.12% and 88.81%, respectively. Further analysis shows its versatility and the spontaneous evolution of topological features such as hub nodes, short paths, long-tailed degree distributions, and numerous communities. This study evolves recurrent spiking neural networks into brain-inspired, energy-efficient structures, showcasing versatility across multiple tasks and potential for adaptive general artificial intelligence.
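The small-world coefficient used as an evolutionary objective is conventionally σ = (C/C_rand)/(L/L_rand), where C is the average clustering coefficient and L the characteristic path length, each normalised by a random-graph baseline. A minimal pure-Python sketch (hypothetical helper names; baselines passed in as constants rather than estimated from random graphs):

```python
from collections import deque

def clustering(adj):
    """Average clustering coefficient of an undirected graph (adjacency dict)."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue  # nodes with < 2 neighbours contribute 0
        links = sum(1 for i in nbrs for j in nbrs if i < j and j in adj[i])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def avg_path_length(adj):
    """Mean shortest-path length over all node pairs (assumes connectivity)."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:                    # breadth-first search from src
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def small_world_sigma(adj, c_rand, l_rand):
    """sigma = (C/C_rand) / (L/L_rand); sigma > 1 suggests small-worldness."""
    return (clustering(adj) / c_rand) / (avg_path_length(adj) / l_rand)
```

In the evolutionary setting, σ would be computed per candidate network and combined with task fitness and a criticality measure as objectives.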
Affiliation(s)
- Wenxuan Pan
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 101408, China
- Feifei Zhao
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Bing Han
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 101408, China
- Yiting Dong
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing 101408, China
- Yi Zeng
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 101408, China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing 101408, China
- Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
4
Kim Y, Kahana A, Yin R, Li Y, Stinis P, Karniadakis GE, Panda P. Rethinking skip connections in Spiking Neural Networks with Time-To-First-Spike coding. Front Neurosci 2024; 18:1346805. [PMID: 38419664] [PMCID: PMC10899405] [DOI: 10.3389/fnins.2024.1346805]
Abstract
Time-To-First-Spike (TTFS) coding in Spiking Neural Networks (SNNs) offers significant advantages in terms of energy efficiency, closely mimicking the behavior of biological neurons. In this work, we examine the role of skip connections, a concept widely used in Artificial Neural Networks (ANNs), within the domain of SNNs with TTFS coding. Our focus is on two distinct architectures: (1) addition-based skip connections and (2) concatenation-based skip connections. We find that addition-based skip connections introduce an additional delay in spike timing. Concatenation-based skip connections, on the other hand, circumvent this delay but produce time gaps between the post-convolution and skip-connection paths, thereby restricting the effective mixing of information from the two paths. To mitigate these issues, we propose a learnable delay for skip connections in the concatenation-based architecture, which successfully bridges the time gap between the convolutional and skip branches and facilitates improved information mixing. We conduct experiments on public datasets including MNIST and Fashion-MNIST, illustrating the advantage of skip connections in TTFS coding architectures. Additionally, we demonstrate the applicability of TTFS coding beyond image recognition and extend it to scientific machine-learning tasks, broadening the potential uses of SNNs.
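The two ingredients of the abstract can be sketched in a few lines: TTFS coding maps stronger inputs to earlier first-spike times, and a learnable delay on the skip branch closes the time gap between branches. A hedged illustration (encoding map and function names are generic, not the paper's code):

```python
def ttfs_encode(intensity: float, t_max: float = 10.0) -> float:
    """TTFS coding: map an intensity in [0, 1] to a first-spike time in
    [0, t_max]; a stronger input fires earlier."""
    return t_max * (1.0 - intensity)

def align_branches(conv_spike_t: float, skip_spike_t: float, delay: float) -> float:
    """Apply a (learnable) delay to the skip branch and return the remaining
    time gap between branches; training would drive this gap toward zero."""
    return abs(conv_spike_t - (skip_spike_t + delay))
```

The skip branch, having fewer layers, tends to fire earlier than the convolutional branch; the delay parameter shifts its spikes later so the two spike trains can mix.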
Affiliation(s)
- Youngeun Kim
- Department of Electrical Engineering, Yale University, New Haven, CT, United States
- Adar Kahana
- Division of Applied Mathematics, Brown University, Providence, RI, United States
- Ruokai Yin
- Department of Electrical Engineering, Yale University, New Haven, CT, United States
- Yuhang Li
- Department of Electrical Engineering, Yale University, New Haven, CT, United States
- Panos Stinis
- Division of Applied Mathematics, Brown University, Providence, RI, United States
- Advanced Computing, Mathematics and Data Division, Pacific Northwest National Laboratory, Richland, WA, United States
- George Em Karniadakis
- Division of Applied Mathematics, Brown University, Providence, RI, United States
- Advanced Computing, Mathematics and Data Division, Pacific Northwest National Laboratory, Richland, WA, United States
- Priyadarshini Panda
- Department of Electrical Engineering, Yale University, New Haven, CT, United States
5
Wang J. Training multi-layer spiking neural networks with plastic synaptic weights and delays. Front Neurosci 2024; 17:1253830. [PMID: 38328553] [PMCID: PMC10847234] [DOI: 10.3389/fnins.2023.1253830]
Abstract
Spiking neural networks are usually considered the third generation of neural networks; they hold the potential for ultra-low power consumption on corresponding hardware platforms and are very suitable for temporal information processing. However, how to train spiking neural networks efficiently remains an open question, and most existing learning methods consider only the plasticity of synaptic weights. In this paper, we propose a new supervised learning algorithm for multi-layer spiking neural networks based on the typical SpikeProp method. In the proposed method, both the synaptic weights and the delays are treated as adjustable parameters, improving both the biological plausibility and the learning performance. In addition, the proposed method inherits the advantages of SpikeProp, making full use of the temporal information of spikes. Various experiments are conducted to verify its performance, and the results demonstrate that it achieves competitive learning performance compared with existing related works. Finally, the differences between the proposed method and existing mainstream multi-layer training algorithms are discussed.
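Under a SpikeProp-style spike-response formulation, making delays plastic simply shifts each input kernel in time: the membrane potential is a weighted sum of kernels evaluated at t − t_i − d_i, and both w_i and d_i can receive gradients. A hedged sketch (the kernel choice and names are illustrative, not the paper's):

```python
import math

def srm_kernel(s: float, tau: float = 2.0) -> float:
    """Spike-response kernel: zero for s <= 0, peaks at s = tau with value 1."""
    return (s / tau) * math.exp(1.0 - s / tau) if s > 0 else 0.0

def membrane_potential(t, spike_times, weights, delays, tau=2.0):
    """Potential of a neuron whose inputs have plastic weights AND delays:
    each input spike at t_i is shifted by d_i and scaled by w_i."""
    return sum(w * srm_kernel(t - t_i - d, tau)
               for t_i, w, d in zip(spike_times, weights, delays))
```

Training would differentiate the output spike time with respect to each w_i and d_i, so a delay update slides a kernel along the time axis rather than rescaling it.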
Affiliation(s)
- Jing Wang
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
6
Li G, Gao Q, Yang M, Gao X. Active learning based on similarity level histogram and adaptive-scale sampling for very high resolution image classification. Neural Netw 2023; 167:22-35. [PMID: 37619511] [DOI: 10.1016/j.neunet.2023.08.012]
Abstract
In remote sensing image classification, active learning aims to obtain an excellent classification model by selecting informative or representative training samples. However, owing to the complexity of remote sensing images, the same class of ground objects usually has different spectral representations. Existing active learning methods may not take these diverse representations of the same targets into account, which can leave the collected samples lacking intra-class diversity. To alleviate this problem, we propose an active learning method based on a similarity level histogram (SLH) and adaptive-scale sampling to improve very high resolution remote sensing image classification. Specifically, we construct an SLH for each class of ground objects to effectively capture the intra-class diversity of the same target. To avoid the sample imbalance caused by over-sampling or under-sampling, we design an adaptive-scale sampling strategy. We then utilize active learning to mine representative samples from each SLH warehouse according to the adaptive-scale sampling strategy until the iteration condition is satisfied. Experiments on four publicly available datasets show that the proposed algorithm achieves better classification performance with limited training samples and is competitive with other methods.
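One way to picture adaptive-scale sampling over a similarity-level histogram: each bin contributes at least one candidate, the rest of the budget is allocated in proportion to bin size, and overshoot is trimmed from the largest allocations. This is an entirely illustrative sketch of the idea of balancing over- and under-sampling, not the paper's algorithm:

```python
def adaptive_scale_sample(bins, budget):
    """bins: list of per-similarity-level sample lists.
    Returns how many samples to draw from each bin, guaranteeing every
    non-empty level at least one pick while respecting the total budget."""
    total = sum(len(b) for b in bins)
    picks = [max(1, round(budget * len(b) / total)) for b in bins]
    # Trim overshoot from the largest allocations so the budget is respected.
    while sum(picks) > budget:
        i = max(range(len(picks)), key=lambda j: picks[j])
        picks[i] -= 1
    return picks
```

Proportional allocation alone would starve small bins (losing intra-class diversity); the floor of one pick per bin is what keeps rare similarity levels represented.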
Affiliation(s)
- Guangfei Li
- State Key Laboratory of Integrated Services Networks, Xidian University, Shaanxi 710071, China
- Quanxue Gao
- State Key Laboratory of Integrated Services Networks, Xidian University, Shaanxi 710071, China
- Ming Yang
- State Key Laboratory of Integrated Services Networks, Xidian University, Shaanxi 710071, China
- Xinbo Gao
- Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
7
Bitar A, Rosales R, Paulitsch M. Gradient-based feature-attribution explainability methods for spiking neural networks. Front Neurosci 2023; 17:1153999. [PMID: 37829721] [PMCID: PMC10565802] [DOI: 10.3389/fnins.2023.1153999]
Abstract
Introduction: Spiking neural networks (SNNs) are a model of computation that mimics the behavior of biological neurons. SNNs process event data (spikes) and operate more sparsely than artificial neural networks (ANNs), resulting in ultra-low latency and small power consumption. This paper aims to adapt and evaluate gradient-based explainability methods for SNNs, which were originally developed for conventional ANNs. Methods: The adapted methods create input feature attribution maps for SNNs trained through backpropagation that process either event-based spiking data or real-valued data. They address the limitations of existing work on explainability methods for SNNs, such as poor scalability, restriction to convolutional layers, the need to train a separate model, and the provision of activation maps instead of true attribution scores. The adapted methods are evaluated on classification tasks for both real-valued and spiking data, and their accuracy is confirmed through perturbation experiments at the pixel and spike levels. Results and discussion: The results reveal that gradient-based SNN attribution methods successfully identify highly contributing pixels and spikes with significantly less computation time than model-agnostic methods. Additionally, we observe that the chosen coding technique has a noticeable effect on which input features are most significant. These findings demonstrate the potential of gradient-based explainability methods for improving our understanding of how SNNs process information and for contributing to the development of more efficient and accurate SNNs.
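The spike-level perturbation check used above to validate attributions has a simple shape: flip one input spike at a time and measure how much the model output changes; good attribution maps should rank spikes the same way. A hedged sketch with a toy stand-in model (the real experiments use trained SNNs; every name here is illustrative):

```python
def perturbation_scores(spike_train, model):
    """Score each spike position by the output change when that spike is
    flipped; larger change = more influential input."""
    base = model(spike_train)
    scores = []
    for i in range(len(spike_train)):
        flipped = list(spike_train)
        flipped[i] = 1 - flipped[i]       # flip one binary spike
        scores.append(abs(model(flipped) - base))
    return scores

# Toy stand-in model: a weighted spike count, so the ground-truth influence
# of each position is exactly |weight|.
weights = [0.1, 0.9, 0.3]
model = lambda s: sum(w * x for w, x in zip(weights, s))
```

Gradient-based attribution earns its keep by approximating this ranking in one backward pass instead of one forward pass per spike.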
Affiliation(s)
- Ammar Bitar
- Intel Labs, Munich, Germany
- Department of Knowledge Engineering, Maastricht University, Maastricht, Netherlands
8
Weerasinghe MMA, Wang G, Whalley J, Crook-Rumsey M. Mental stress recognition on the fly using neuroplasticity spiking neural networks. Sci Rep 2023; 13:14962. [PMID: 37696860] [PMCID: PMC10495416] [DOI: 10.1038/s41598-023-34517-w]
Abstract
Mental stress is strongly connected with human cognition and wellbeing. As the complexities of human life increase, the effects of mental stress have impacted human health and cognitive performance across the globe, highlighting the need for effective non-invasive stress detection methods. In this work, we introduce a novel artificial spiking neural network model, the Online Neuroplasticity Spiking Neural Network (O-NSNN), that utilizes a repertoire of brain-inspired learning concepts to classify mental stress from electroencephalogram (EEG) data. The models are personalized and tested on EEG data recorded during sessions in which participants listened to different types of audio comments designed to induce acute stress. Our O-NSNN models learn on the fly, producing an average accuracy of 90.76% (σ = 2.09) when classifying EEG signals of brain states associated with these audio comments. The brain-inspired nature of the individual models makes them robust and efficient, with the potential to be integrated into wearable technology. Furthermore, this article presents an exploratory analysis of trained O-NSNNs to discover links between perceived and acute mental stress. The O-NSNN algorithm proved better suited to personalized stress recognition in terms of accuracy, efficiency, and model interpretability.
Affiliation(s)
- Mahima Milinda Alwis Weerasinghe
- School of Engineering, Computer and Mathematical Sciences, Auckland University of Technology, Auckland, New Zealand
- Brain-Inspired AI and Neuroinformatics Lab, Department of Data Science, Sri Lanka Technological Campus, Padukka, Sri Lanka
- Grace Wang
- School of Psychology and Wellbeing, University of Southern Queensland, Toowoomba, Australia
- Centre for Health Research, University of Southern Queensland, Toowoomba, Australia
- Jacqueline Whalley
- Department of Computer Science and Software Engineering, Auckland University of Technology, Auckland, New Zealand
- Mark Crook-Rumsey
- Department of Basic and Clinical Neuroscience, King's College London, London, UK
- UK Dementia Research Institute, Centre for Care Research and Technology, Imperial College London, London, UK
9
Yang G, Lee W, Seo Y, Lee C, Seok W, Park J, Sim D, Park C. Unsupervised Spiking Neural Network with Dynamic Learning of Inhibitory Neurons. Sensors (Basel) 2023; 23:7232. [PMID: 37631767] [PMCID: PMC10459513] [DOI: 10.3390/s23167232]
Abstract
A spiking neural network (SNN) is a type of artificial neural network that operates on discrete spikes to process timing information, similar to the manner in which the human brain processes real-world problems. In this paper, we propose a new SNN built on conventional, biologically plausible paradigms, such as the leaky integrate-and-fire model, spike-timing-dependent plasticity, and the adaptive spiking threshold, augmented with new biologically inspired mechanisms: dynamic inhibition weight change, a synaptic wiring method, and Bayesian inference. The proposed network is designed for image recognition tasks, which are frequently used to evaluate the performance of conventional deep neural networks. To realise a bio-realistic neural architecture, the learning is unsupervised and the inhibition weight is dynamically changed; this, in turn, affects the synaptic wiring method based on Hebbian learning and the neuronal population. In the inference phase, Bayesian inference successfully classifies the input digits by counting the spikes from the responding neurons. The experimental results demonstrate that the proposed biological model ensures a performance improvement over other biologically plausible SNN models.
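Two of the paradigms named above — leaky integration and the adaptive spiking threshold — combine naturally in a few lines: the threshold rises after each spike and decays back, discouraging any one neuron from dominating. A minimal sketch with illustrative constants (not the authors' model):

```python
def lif_adaptive(inputs, leak=0.9, v_th0=1.0, th_bump=0.5, th_decay=0.95):
    """Leaky integrate-and-fire neuron with an adaptive spiking threshold.
    Returns the binary spike train produced by the input current sequence."""
    v, theta, spikes = 0.0, 0.0, []
    for x in inputs:
        v = leak * v + x                   # leaky integration of input current
        if v >= v_th0 + theta:             # compare to baseline + adaptation
            spikes.append(1)
            v = 0.0                        # reset membrane potential
            theta += th_bump               # raise the adaptive component
        else:
            spikes.append(0)
        theta *= th_decay                  # adaptation decays toward zero
    return spikes
```

With a constant supra-threshold input, the neuron fires, then skips a step while the raised threshold decays — the homeostatic effect the adaptive threshold is meant to provide.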
Affiliation(s)
- Geunbo Yang
- Department of Computer Engineering, Kwangwoon University, Seoul 01897, Republic of Korea
- Wongyu Lee
- Department of Intelligent Information and Embedded Software Engineering, Kwangwoon University, Seoul 01897, Republic of Korea
- Youjung Seo
- Department of Computer Engineering, Kwangwoon University, Seoul 01897, Republic of Korea
- Choongseop Lee
- Department of Computer Engineering, Kwangwoon University, Seoul 01897, Republic of Korea
- Woojoon Seok
- Department of Intelligent Information and Embedded Software Engineering, Kwangwoon University, Seoul 01897, Republic of Korea
- Jongkil Park
- Center for Neuromorphic Engineering, Korea Institute of Science and Technology (KIST), Seoul 02792, Republic of Korea
- Donggyu Sim
- Department of Computer Engineering, Kwangwoon University, Seoul 01897, Republic of Korea
- Cheolsoo Park
- Department of Computer Engineering, Kwangwoon University, Seoul 01897, Republic of Korea
10
Zhang H, Li Y, He B, Fan X, Wang Y, Zhang Y. Direct training high-performance spiking neural networks for object recognition and detection. Front Neurosci 2023; 17:1229951. [PMID: 37614339] [PMCID: PMC10442545] [DOI: 10.3389/fnins.2023.1229951]
Abstract
Introduction: The spiking neural network (SNN) is a bionic model that is energy-efficient when implemented on neuromorphic hardware. The non-differentiability of spiking signals and the complicated neural dynamics make direct training of high-performance SNNs a great challenge. Numerous crucial issues remain to be explored for the deployment of directly trained SNNs, such as gradient vanishing and explosion, spiking-signal decoding, and applications to upstream tasks. Methods: To address gradient vanishing, we introduce a binary selection gate into the basic residual block and propose spiking gate (SG) ResNet to implement residual learning in SNNs. We propose two appropriate representations of the gate signal and verify, by analyzing gradient backpropagation, that SG ResNet can overcome gradient vanishing or explosion. For spiking-signal decoding, a better scheme than rate coding is achieved by our attention spike decoder (ASD), which dynamically assigns weights to spiking signals along the temporal, channel, and spatial dimensions. Results and discussion: The SG ResNet and ASD modules are evaluated on multiple object recognition datasets, including the static ImageNet, CIFAR-100, and CIFAR-10 datasets and the neuromorphic DVS-CIFAR10 dataset. Superior accuracy is demonstrated with only four simulation time steps: specifically, 94.52% top-1 accuracy on CIFAR-10 and 75.64% top-1 accuracy on CIFAR-100. Spiking RetinaNet, using SG ResNet as the backbone and the ASD module for information decoding, is proposed as the first directly trained hybrid SNN-ANN detector for RGB images; with an SG ResNet34 backbone it achieves an mAP of 0.296 on the object detection dataset MSCOCO.
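The decoding contrast at the heart of the ASD idea can be shown in miniature: rate coding averages spikes uniformly over time, whereas a weighted decoder lets informative time steps count more. A hedged sketch in which fixed weights stand in for learned attention (names are illustrative, not the paper's API):

```python
def rate_decode(spike_train):
    """Rate coding: output is the mean firing rate over all time steps."""
    return sum(spike_train) / len(spike_train)

def weighted_decode(spike_train, attn):
    """Attention-style decoding: weight each time step before pooling, so
    informative steps contribute more to the decoded value."""
    z = sum(attn)
    return sum(a * s for a, s in zip(attn, spike_train)) / z
```

With only four simulation time steps, uniform averaging is coarse (outputs quantised to quarters), which is one intuition for why a learned weighting over time, channel, and space can decode more information from the same spikes.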
Affiliation(s)
- Hong Zhang
- State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou, China
- Yang Li
- State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou, China
- Bin He
- State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou, China
- Xiongfei Fan
- State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou, China
- Yue Wang
- State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou, China
- Yu Zhang
- State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou, China
- Key Laboratory of Collaborative Sensing and Autonomous Unmanned Systems of Zhejiang Province, Hangzhou, China
11
Dong Y, Zhao D, Li Y, Zeng Y. An unsupervised STDP-based spiking neural network inspired by biologically plausible learning rules and connections. Neural Netw 2023; 165:799-808. [PMID: 37418862] [DOI: 10.1016/j.neunet.2023.06.019]
Abstract
The backpropagation algorithm has promoted the rapid development of deep learning, but it relies on a large amount of labeled data and still differs greatly from how humans learn. The human brain can quickly learn various conceptual knowledge in a self-organized and unsupervised manner, accomplished by coordinating diverse learning rules and structures. Spike-timing-dependent plasticity (STDP) is a general learning rule in the brain, but spiking neural networks (SNNs) trained with STDP alone are inefficient and perform poorly. In this paper, taking inspiration from short-term synaptic plasticity, we design an adaptive synaptic filter and introduce an adaptive spiking threshold as neuron plasticity to enrich the representation ability of SNNs. We also introduce an adaptive lateral inhibitory connection to dynamically adjust the spike balance and help the network learn richer features. To speed up and stabilize the training of unsupervised SNNs, we design samples temporal batch STDP (STB-STDP), which updates weights based on multiple samples and moments. By integrating the three adaptive mechanisms and STB-STDP, our model greatly accelerates the training of unsupervised SNNs and improves their performance on complex tasks. It achieves the current state-of-the-art performance of unsupervised STDP-based SNNs on the MNIST and FashionMNIST datasets, and results on the more complex CIFAR10 dataset fully illustrate its superiority; ours is the first work to apply unsupervised STDP-based SNNs to CIFAR10. Moreover, in the small-sample learning scenario, it far exceeds a supervised ANN with the same structure.
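The batching idea behind STB-STDP can be sketched on top of the standard pair-based STDP rule: compute the timing-dependent weight change for each sample, then apply the average rather than each per-sample update. A hedged illustration (exponential window and all constants are the textbook STDP form, not the paper's exact rule):

```python
import math

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP: potentiate when pre precedes post (dt = t_post - t_pre > 0),
    depress otherwise, with exponentially decaying magnitude."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

def batch_stdp_update(w, dt_batch, lr=1.0):
    """Average the STDP update over multiple samples/moments before applying
    it, which smooths the noisy per-sample updates."""
    return w + lr * sum(stdp_dw(dt) for dt in dt_batch) / len(dt_batch)
```

Averaging over a batch is the same variance-reduction move mini-batch SGD makes for gradients, here applied to a local plasticity rule.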
Affiliation(s)
- Yiting Dong
- School of Future Technology, University of Chinese Academy of Sciences, Beijing, China
- Brain-Inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences (CAS), Beijing, China
- Dongcheng Zhao
- Brain-Inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences (CAS), Beijing, China
- Yang Li
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Brain-Inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences (CAS), Beijing, China
- Yi Zeng
- School of Future Technology, University of Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Brain-Inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences (CAS), Beijing, China
- Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences (CAS), Shanghai, China
- State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences (CAS), Beijing, China
12
Xue X, Wimmer RD, Halassa MM, Chen ZS. Spiking Recurrent Neural Networks Represent Task-Relevant Neural Sequences in Rule-Dependent Computation. Cognit Comput 2023; 15:1167-1189. [PMID: 37771569] [PMCID: PMC10530699] [DOI: 10.1007/s12559-022-09994-2]
Abstract
Background: Prefrontal cortical neurons play essential roles in performing rule-dependent tasks and working-memory-based decision making. Methods: Motivated by prefrontal cortex (PFC) recordings of task-performing mice, we developed an excitatory-inhibitory spiking recurrent neural network (SRNN) to perform a rule-dependent two-alternative forced choice (2AFC) task. We imposed several important biological constraints on the SRNN and adapted spike frequency adaptation (SFA) and SuperSpike gradient methods to train it efficiently. Results: The trained SRNN produced emergent rule-specific tunings in single-unit representations, showing rule-dependent population dynamics that resembled experimentally observed data. Under varying test conditions, we manipulated the SRNN parameters or configuration in computer simulations and investigated the impacts of rule-coding error, delay duration, recurrent weight connectivity and sparsity, and excitation/inhibition (E/I) balance on both task performance and neural representations. Conclusions: Overall, our modeling study provides a computational framework for understanding neuronal representations at a fine timescale during working memory and cognitive control, and generates new experimentally testable hypotheses for future experiments.
Affiliation(s)
- Xiaohe Xue
- Courant Institute of Mathematical Sciences, New York University, New York, NY, USA
- Ralf D. Wimmer
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Michael M. Halassa
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Zhe Sage Chen
- Department of Psychiatry, New York University School of Medicine, New York, NY, USA
- Department of Neuroscience and Physiology, New York University School of Medicine, New York, NY, USA
- Neuroscience Institute, New York University School of Medicine, New York, NY, USA
| |
Collapse
|
13
|
Guo Y, Huang X, Ma Z. Direct learning-based deep spiking neural networks: a review. Front Neurosci 2023; 17:1209795. [PMID: 37397460 PMCID: PMC10313197 DOI: 10.3389/fnins.2023.1209795] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2023] [Accepted: 06/01/2023] [Indexed: 07/04/2023] Open
Abstract
The spiking neural network (SNN), as a promising brain-inspired computational model with a binary spike-based information transmission mechanism, rich spatio-temporal dynamics, and event-driven characteristics, has received extensive attention. However, its intricate, discontinuous spike mechanism makes the optimization of deep SNNs difficult. Since the surrogate gradient method can greatly mitigate this optimization difficulty and shows great potential for directly training deep SNNs, a variety of direct learning-based deep SNN works have been proposed and have achieved satisfactory progress in recent years. In this paper, we present a comprehensive survey of these direct learning-based deep SNN works, mainly categorized into accuracy improvement methods, efficiency improvement methods, and temporal dynamics utilization methods. In addition, we further divide these categories into finer granularities to better organize and introduce them. Finally, we outline the challenges and trends that future research may face.
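The surrogate gradient idea around which this survey is organized can be sketched in a few lines: the forward pass keeps the non-differentiable Heaviside spike function, while the backward pass substitutes a smooth pseudo-derivative (here a SuperSpike-style fast sigmoid). A minimal sketch under assumed parameter values:

```python
def spike_forward(v, threshold=1.0):
    """Non-differentiable spike generation: Heaviside step of (v - threshold)."""
    return 1.0 if v >= threshold else 0.0

def spike_surrogate_grad(v, threshold=1.0, slope=10.0):
    """Fast-sigmoid surrogate derivative used in place of the Heaviside's
    zero-almost-everywhere true derivative during backpropagation.
    `slope` controls how sharply the surrogate peaks at the threshold."""
    x = slope * (v - threshold)
    return slope / (1.0 + abs(x)) ** 2
```

The surrogate is maximal at the threshold and decays smoothly on either side, giving usable error gradients where the true derivative would be zero.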
Collapse
Affiliation(s)
- Yufei Guo
- Intelligent Science & Technology Academy of CASIC, Beijing, China
- Scientific Research Laboratory of Aerospace Intelligent Systems and Technology, Beijing, China
| | - Xuhui Huang
- Intelligent Science & Technology Academy of CASIC, Beijing, China
- Scientific Research Laboratory of Aerospace Intelligent Systems and Technology, Beijing, China
| | - Zhe Ma
- Intelligent Science & Technology Academy of CASIC, Beijing, China
- Scientific Research Laboratory of Aerospace Intelligent Systems and Technology, Beijing, China
| |
Collapse
|
14
|
Malakasis N, Chavlis S, Poirazi P. Synaptic turnover promotes efficient learning in bio-realistic spiking neural networks. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.05.22.541722. [PMID: 37292929 PMCID: PMC10245885 DOI: 10.1101/2023.05.22.541722] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
While artificial machine learning systems achieve superhuman performance in specific tasks such as language processing and image and video recognition, they do so using extremely large datasets and huge amounts of power. The brain, on the other hand, remains superior in several cognitively challenging tasks while operating with the energy of a small lightbulb. We use a biologically constrained spiking neural network model to explore how the neural tissue achieves such high efficiency and assess its learning capacity on discrimination tasks. We found that synaptic turnover, a form of structural plasticity in which the brain continuously forms and eliminates synapses, increases both the speed and the performance of our network on all tasks tested. Moreover, it allows accurate learning using a smaller number of examples. Importantly, these improvements are most significant under conditions of resource scarcity, such as when the number of trainable parameters is halved or when the task difficulty is increased. Our findings provide new insights into the mechanisms that underlie efficient learning in the brain and can inspire the development of more efficient and flexible machine learning algorithms.
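The synaptic turnover mechanism described above can be sketched as a prune-and-regrow step on a sparse connectome: the weakest synapses are eliminated and the same number of new random synapses are formed. This is a toy sketch of the idea only; the dict-based representation and all names are assumptions, not the paper's model.

```python
import random

def synaptic_turnover(weights, turnover_frac=0.1, init_scale=0.01, rng=None):
    """One structural-plasticity step on a connectome stored as a dict
    {(pre, post): weight}: prune the weakest `turnover_frac` of synapses
    and regrow the same number at random unused locations with small
    initial weights."""
    rng = rng or random.Random(0)
    n_turnover = int(len(weights) * turnover_frac)
    if n_turnover == 0:
        return weights
    pres = sorted({pre for pre, _ in weights})
    posts = sorted({post for _, post in weights})
    # Prune: remove the synapses with the smallest absolute weight
    for key in sorted(weights, key=lambda k: abs(weights[k]))[:n_turnover]:
        del weights[key]
    # Regrow: create new synapses at random free (pre, post) locations
    while n_turnover > 0:
        key = (rng.choice(pres), rng.choice(posts))
        if key not in weights:
            weights[key] = init_scale * rng.uniform(-1.0, 1.0)
            n_turnover -= 1
    return weights
```

The network size stays constant across a turnover step; only which connections exist changes, which is what lets learning redistribute a fixed synaptic budget.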
Collapse
Affiliation(s)
- Nikos Malakasis
- School of Medicine, University of Crete, Heraklion 70013, Greece
- Institute of Molecular Biology and Biotechnology, Foundation for Research and Technology-Hellas, Heraklion 70013, Greece
| | - Spyridon Chavlis
- Institute of Molecular Biology and Biotechnology, Foundation for Research and Technology-Hellas, Heraklion 70013, Greece
| | - Panayiota Poirazi
- Institute of Molecular Biology and Biotechnology, Foundation for Research and Technology-Hellas, Heraklion 70013, Greece
| |
Collapse
|
15
|
Sanchez-Garcia M, Chauhan T, Cottereau BR, Beyeler M. Efficient multi-scale representation of visual objects using a biologically plausible spike-latency code and winner-take-all inhibition. BIOLOGICAL CYBERNETICS 2023; 117:95-111. [PMID: 37004546 DOI: 10.1007/s00422-023-00956-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/30/2022] [Accepted: 02/10/2023] [Indexed: 05/05/2023]
Abstract
Deep neural networks have surpassed human performance in key visual challenges such as object recognition, but require large amounts of energy, computation, and memory. In contrast, spiking neural networks (SNNs) have the potential to improve both the efficiency and the biological plausibility of object recognition systems. Here we present an SNN model that uses spike-latency coding and winner-take-all inhibition (WTA-I) to efficiently represent visual stimuli using multi-scale parallel processing. Mimicking neuronal response properties in early visual cortex, images were preprocessed with three different spatial frequency (SF) channels before being fed to a layer of spiking neurons whose synaptic weights were updated using spike-timing-dependent plasticity. We investigate how the quality of the represented objects changes under different SF bands and WTA-I schemes. We demonstrate that a network of 200 spiking neurons tuned to three SFs can efficiently represent objects with as few as 15 spikes per neuron. Studying how core object recognition may be implemented using biologically plausible learning rules in SNNs may not only further our understanding of the brain, but also lead to novel and efficient artificial vision systems.
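The two building blocks named above, spike-latency coding and hard winner-take-all inhibition, can be sketched minimally: stronger inputs fire earlier, and the earliest spike suppresses all others. A toy sketch under assumed conventions (intensity in (0, 1], a single hard-WTA circuit), not the paper's full multi-scale model:

```python
def latency_encode(intensities, t_max=20.0):
    """Spike-latency code: stronger inputs fire earlier.
    An intensity in (0, 1] maps to a spike time in [0, t_max);
    zero intensity yields no spike (None)."""
    return [None if x <= 0 else t_max * (1.0 - x) for x in intensities]

def winner_take_all(spike_times):
    """Hard winner-take-all inhibition: only the earliest-spiking neuron
    responds; all later spikes are suppressed."""
    firing = [(t, i) for i, t in enumerate(spike_times) if t is not None]
    if not firing:
        return [None] * len(spike_times)
    t_win, winner = min(firing)
    return [t if i == winner else None for i, t in enumerate(spike_times)]
```

Because information sits in first-spike times, a single spike per winning neuron can already identify the dominant feature, which is where the "15 spikes per neuron" efficiency comes from.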
Collapse
Affiliation(s)
| | - Tushar Chauhan
- The Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Boston, MA, USA
- CerCo CNRS UMR5549, Université de Toulouse III-Paul Sabatier, Toulouse, France
| | - Benoit R Cottereau
- CerCo CNRS UMR5549, Université de Toulouse III-Paul Sabatier, Toulouse, France
- IPAL, CNRS IRL 2955, Singapore, Singapore
| | - Michael Beyeler
- Department of Computer Science, University of California, Santa Barbara, CA, USA
- Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, USA
| |
Collapse
|
16
|
Yi Z, Lian J, Liu Q, Zhu H, Liang D, Liu J. Learning Rules in Spiking Neural Networks: A Survey. Neurocomputing 2023. [DOI: 10.1016/j.neucom.2023.02.026] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/18/2023]
|
17
|
Chao Y, Augenstein P, Roennau A, Dillmann R, Xiong Z. Brain inspired path planning algorithms for drones. Front Neurorobot 2023; 17:1111861. [PMID: 36937552 PMCID: PMC10020216 DOI: 10.3389/fnbot.2023.1111861] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2022] [Accepted: 02/13/2023] [Indexed: 03/06/2023] Open
Abstract
Introduction With the development of artificial intelligence and brain science, brain-inspired navigation and path planning have attracted widespread attention. Methods In this paper, we present a place-cell-based path planning algorithm that uses a spiking neural network (SNN) to create efficient routes for drones. First, place cells are modeled with the leaky integrate-and-fire (LIF) neuron model. Then, the connection weights between neurons are trained with spike-timing-dependent plasticity (STDP) learning rules. Afterwards, a synaptic vector field is created to avoid obstacles and find the shortest path. Results Simulation experiments both in a Python simulation environment and in an Unreal Engine environment are conducted to evaluate the validity of the algorithms. Discussion Experimental results demonstrate the validity, robustness, and computational speed of the proposed model.
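The STDP rule used to train the place-cell connections above follows the classic pair-based form: potentiation when the presynaptic spike precedes the postsynaptic one, depression otherwise, with exponentially decaying magnitude. A minimal sketch with illustrative constants (not the paper's values):

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.05, a_minus=0.055, tau=20.0):
    """Pair-based spike-timing-dependent plasticity weight change.

    Potentiates when pre fires before post (dt > 0), depresses when post
    fires before pre (dt < 0); closer spike pairs change the weight more.
    """
    dt = t_post - t_pre
    if dt > 0:     # pre before post: long-term potentiation
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:   # post before pre: long-term depression
        return -a_minus * math.exp(dt / tau)
    return 0.0
```

Applied along traversed routes, such a rule strengthens connections in the direction of travel, which is what lets a vector field toward the goal emerge.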
Collapse
Affiliation(s)
- Yixun Chao
- Navigation Research Center, School of Automation Engineering in Nanjing University of Aeronautics and Astronautics, Nanjing, China
- FZI Research Center for Information Technology, Karlsruhe, Germany
| | | | - Arne Roennau
- FZI Research Center for Information Technology, Karlsruhe, Germany
| | | | - Zhi Xiong
- Navigation Research Center, School of Automation Engineering in Nanjing University of Aeronautics and Astronautics, Nanjing, China
- *Correspondence: Zhi Xiong
| |
Collapse
|
18
|
Liu F, Tao W, Yang J, Wu W, Wang J. STNet: A novel spiking neural network combining its own time signal with the spatial signal of an artificial neural network. Front Neurosci 2023; 17:1151949. [PMID: 37144088 PMCID: PMC10153670 DOI: 10.3389/fnins.2023.1151949] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2023] [Accepted: 03/10/2023] [Indexed: 05/06/2023] Open
Abstract
Introduction This article proposes a novel hybrid network, the Spatio-Temporal Combined Network (STNet), that combines the temporal signal of a spiking neural network (SNN) with the spatial signal of an artificial neural network (ANN). Methods Inspired by the way the visual cortex in the human brain processes visual information, two versions of STNet are designed: a concatenated one (C-STNet) and a parallel one (P-STNet). In the C-STNet, the ANN, simulating the primary visual cortex, first extracts the simple spatial information of objects; the obtained spatial information is then encoded as spike timing signals for transmission to a rear SNN, which simulates the extrastriate visual cortex to process and classify the spikes. Reflecting the view that information from the primary visual cortex reaches the extrastriate visual cortex via the ventral and dorsal streams, the P-STNet employs a parallel combination of the ANN and the SNN to extract the original spatio-temporal information from samples, and the extracted information is transferred to a posterior SNN for classification. Results The experimental results of the two STNets on six small and two large benchmark datasets were compared with eight commonly used approaches, demonstrating that the two STNets achieve improved performance in terms of accuracy, generalization, stability, and convergence. Discussion These results show that combining an ANN and an SNN is feasible and can greatly improve the performance of the SNN.
Collapse
Affiliation(s)
- Fang Liu
- School of Mathematical Sciences, Dalian University of Technology, Dalian, China
- Key Laboratory for Computational Mathematics and Data Intelligence of Liaoning Province, Dalian, China
| | - Wentao Tao
- School of Mathematical Sciences, Dalian University of Technology, Dalian, China
- Key Laboratory for Computational Mathematics and Data Intelligence of Liaoning Province, Dalian, China
| | - Jie Yang
- School of Mathematical Sciences, Dalian University of Technology, Dalian, China
- Key Laboratory for Computational Mathematics and Data Intelligence of Liaoning Province, Dalian, China
- *Correspondence: Jie Yang
| | - Wei Wu
- School of Mathematical Sciences, Dalian University of Technology, Dalian, China
- Key Laboratory for Computational Mathematics and Data Intelligence of Liaoning Province, Dalian, China
| | - Jian Wang
- College of Science, China University of Petroleum (East China), Qingdao, China
| |
Collapse
|
19
|
Sakemi Y, Morino K, Morie T, Aihara K. A Supervised Learning Algorithm for Multilayer Spiking Neural Networks Based on Temporal Coding Toward Energy-Efficient VLSI Processor Design. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2023; 34:394-408. [PMID: 34280109 DOI: 10.1109/tnnls.2021.3095068] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Spiking neural networks (SNNs) are brain-inspired mathematical models with the ability to process information in the form of spikes. SNNs are expected to provide not only new machine-learning algorithms but also energy-efficient computational models when implemented in very-large-scale integration (VLSI) circuits. In this article, we propose a novel supervised learning algorithm for SNNs based on temporal coding. A spiking neuron in this algorithm is designed to facilitate analog VLSI implementations with analog resistive memory, by which ultrahigh energy efficiency can be achieved. We also propose several techniques to improve the performance on recognition tasks and show that the classification accuracy of the proposed algorithm is as high as that of the state-of-the-art temporal coding SNN algorithms on the MNIST and Fashion-MNIST datasets. Finally, we discuss the robustness of the proposed SNNs against variations that arise from the device manufacturing process and are unavoidable in analog VLSI implementation. We also propose a technique to suppress the effects of variations in the manufacturing process on the recognition performance.
Collapse
|
20
|
A Parallel Spiking Neural Network Based on Adaptive Lateral Inhibition Mechanism for Objective Recognition. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:4242235. [DOI: 10.1155/2022/4242235] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/15/2022] [Revised: 09/17/2022] [Accepted: 10/03/2022] [Indexed: 11/17/2022]
Abstract
The spiking neural network (SNN) has attracted extensive attention in the field of machine learning because of its biological interpretability and low power consumption. However, its pattern recognition accuracy does not yet fully surpass that of deep neural networks (DNNs). The main reason is that the inherent nondifferentiability of spiking neurons prevents SNNs from being trained directly by gradient descent, and there is also no unified training algorithm for SNNs. Inspired by the biological vision system, this paper proposes a parallel convolutional SNN structure combined with an adaptive lateral inhibition mechanism, along with a way of dynamically evolving the time constant during SNN training to ensure the diversity of neurons. The paper verifies the effectiveness of the proposed methods on static and neuromorphic datasets and extends them to the recognition of breast tumors. Breast tumor recognition is likewise an edge-based task, because the edges of a medical image carry its most important information; such information can greatly help the noninvasive and accurate diagnosis of disease. Experimental results show that the SNN has clear advantages on dynamical datasets, that the proposed method comes very close to the recognition results of DNNs on static datasets, and that its performance on neuromorphic datasets exceeds that of DNNs.
Collapse
|
21
|
Li Y, Zhao D, Zeng Y. BSNN: Towards faster and better conversion of artificial neural networks to spiking neural networks with bistable neurons. Front Neurosci 2022; 16:991851. [PMID: 36312025 PMCID: PMC9597447 DOI: 10.3389/fnins.2022.991851] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2022] [Accepted: 09/26/2022] [Indexed: 11/30/2022] Open
Abstract
The spiking neural network (SNN) computes and communicates information through discrete binary events. Recent work has made substantial progress toward excellent performance by converting ANNs to SNNs. Due to the difference in information processing, however, the converted deep SNN usually suffers serious performance loss and long time delays. In this paper, we analyze the reasons for the performance loss and propose a novel bistable spiking neural network (BSNN) that addresses the problems of phase lead and phase lag. We also design synchronous neurons (SN) to efficiently improve performance when ResNet-structured ANNs are converted. BSNN significantly improves the performance of the converted SNN by enabling more accurate delivery of information to the next layer after one cycle. Experimental results show that the proposed method needs only 1/4-1/10 of the time steps of previous work to achieve nearly lossless conversion. We demonstrate better ANN-SNN conversion for VGG16, ResNet20, and ResNet34 on challenging datasets including CIFAR-10 (95.16% top-1), CIFAR-100 (78.12% top-1), and ImageNet (72.64% top-1).
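Conversion pipelines like the one above typically build on a data-based weight normalization (threshold balancing) step: each layer's weights are rescaled by the maximum ReLU activation observed on a sample set, so the converted spiking neurons fire within their dynamic range. The sketch below shows only that classic preliminary step, not the bistable mechanism itself; the list-of-lists layout is an assumption for illustration.

```python
def normalize_layer_weights(weights, activations_sample):
    """Data-based weight normalization for ANN-to-SNN conversion.

    `weights` is one layer's weight matrix (list of rows); the scale is
    the maximum ReLU activation that layer produced on a sample batch.
    After scaling, activations are bounded by 1, matching a spiking
    neuron with unit firing threshold."""
    max_act = max(max(row) for row in activations_sample)
    if max_act <= 0:
        return weights
    return [[w / max_act for w in row] for row in weights]
```

In a full pipeline this is applied layer by layer (each scale divided by the previous layer's scale) before the rate-coded spiking inference is run.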
Collapse
Affiliation(s)
- Yang Li
- Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
| | - Dongcheng Zhao
- Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China
| | - Yi Zeng
- Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
| |
Collapse
|
22
|
Spiking CapsNet: A Spiking Neural Network With A Biologically Plausible Routing Rule Between Capsules. Inf Sci (N Y) 2022. [DOI: 10.1016/j.ins.2022.07.152] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
|
23
|
Yu Q, Song S, Ma C, Wei J, Chen S, Tan KC. Temporal Encoding and Multispike Learning Framework for Efficient Recognition of Visual Patterns. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2022; 33:3387-3399. [PMID: 33531306 DOI: 10.1109/tnnls.2021.3052804] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Biological systems, through parallel and spike-based computation, endow individuals with the ability to respond promptly and reliably to different stimuli. Spiking neural networks (SNNs) have thus been developed to emulate their efficiency and to explore the principles of spike-based processing. However, the design of a biologically plausible and efficient SNN for image classification remains a challenging task. Previous efforts can generally be clustered into two major categories in terms of the coding scheme employed: rate and temporal. The rate-based schemes suffer from inefficiency, whereas the temporal-based ones typically end with relatively poor accuracy. It is intriguing and important to develop an SNN that considers both efficiency and efficacy. In this article, we focus on the temporal-based approaches, advancing their accuracy by a large margin while retaining their efficiency. A new temporal-based framework integrated with multispike learning is developed for efficient recognition of visual patterns. Different approaches to encoding and learning under our framework are evaluated with the MNIST and Fashion-MNIST data sets. Experimental results demonstrate the efficient and effective performance of our temporal-based approaches across a variety of conditions, improving accuracies to levels comparable to rate-based ones but, importantly, with a lighter network structure and far fewer spikes. This article extends advanced multispike learning to the challenging task of image recognition and brings the state of the art in temporal-based approaches to a new level. The experimental results are potentially favorable for low-power and high-speed requirements in the field of artificial intelligence and could attract more efforts toward brain-like computing.
Collapse
|
24
|
Li J, Xu H, Sun SY, Li N, Li Q, Li Z, Liu H. In Situ Learning in Hardware Compatible Multilayer Memristive Spiking Neural Network. IEEE Trans Cogn Dev Syst 2022. [DOI: 10.1109/tcds.2021.3049487] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Affiliation(s)
- Jiwei Li
- College of Electronic Science and Technology, National University of Defense Technology, Changsha, China
| | - Hui Xu
- College of Electronic Science and Technology, National University of Defense Technology, Changsha, China
| | - Sheng-Yang Sun
- College of Electronic Science and Technology, National University of Defense Technology, Changsha, China
| | - Nan Li
- College of Electronic Science and Technology, National University of Defense Technology, Changsha, China
| | - Qingjiang Li
- College of Electronic Science and Technology, National University of Defense Technology, Changsha, China
| | - Zhiwei Li
- College of Electronic Science and Technology, National University of Defense Technology, Changsha, China
| | - Haijun Liu
- College of Electronic Science and Technology, National University of Defense Technology, Changsha, China
| |
Collapse
|
25
|
Juarez-Lora A, Ponce-Ponce VH, Sossa H, Rubio-Espino E. R-STDP Spiking Neural Network Architecture for Motion Control on a Changing Friction Joint Robotic Arm. Front Neurorobot 2022; 16:904017. [PMID: 35663727 PMCID: PMC9161736 DOI: 10.3389/fnbot.2022.904017] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2022] [Accepted: 04/14/2022] [Indexed: 11/13/2022] Open
Abstract
Neuromorphic computing is a recent class of brain-inspired high-performance computing platforms and algorithms involving biologically inspired models implemented in hardware as integrated circuits. Neuromorphic computing applications have driven the rise of highly connected neurons and synapses in analog circuit systems that can be used to solve today's challenging machine learning problems. In conjunction with biologically plausible learning rules, such as Hebbian learning, and memristive devices, biologically inspired spiking neural networks are considered the next-generation neuromorphic hardware building blocks that will enable the deployment of new energy-efficient, brain-like devices capable of analog in situ learning. These features are envisioned for modern mobile robotic implementations, which currently struggle to overcome the pervasive von Neumann computer architecture. This study proposes a new neural architecture using the spike-timing-dependent plasticity learning method and a step-forward encoding algorithm for self-tuning neural control of motion in a robotic arm joint subjected to dynamic modifications. Simulations demonstrate the proposed neural architecture's feasibility, as the network successfully compensates for the changing dynamics in each simulation run.
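Reward-modulated STDP of the kind used for such motor control can be sketched with an eligibility trace: coincident pre/post activity is stored in a decaying trace, and the weight only changes when a (possibly delayed) reward signal arrives. This is a generic R-STDP sketch, not the paper's controller; names and constants are assumptions.

```python
def r_stdp_step(w, eligibility, reward, pre_spike, post_spike,
                lr=0.1, tau_e=50.0, dt=1.0, a_corr=1.0):
    """One time step of reward-modulated STDP on a single synapse.

    The eligibility trace decays with time constant `tau_e` and grows on
    pre/post spike coincidence; the weight update is gated by `reward`,
    so plasticity only takes effect when reinforcement arrives."""
    eligibility *= (1.0 - dt / tau_e)   # trace decay
    if pre_spike and post_spike:
        eligibility += a_corr           # tag the synapse as "responsible"
    w += lr * reward * eligibility      # reward-gated weight change
    return w, eligibility
```

Because the trace outlives the spikes that created it, a reward delivered a few steps later can still credit the right synapses, which is what makes the rule suitable for delayed control feedback.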
Collapse
Affiliation(s)
- Alejandro Juarez-Lora
- Instituto Politécnico Nacional, Centro de Investigación en Computación, Mexico City, México
| | - Victor H. Ponce-Ponce
- Instituto Politécnico Nacional, Centro de Investigación en Computación, Mexico City, México
| | | | | |
Collapse
|
26
|
Zhang M, Wang J, Wu J, Belatreche A, Amornpaisannon B, Zhang Z, Miriyala VPK, Qu H, Chua Y, Carlson TE, Li H. Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2022; 33:1947-1958. [PMID: 34534091 DOI: 10.1109/tnnls.2021.3110991] [Citation(s) in RCA: 32] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Spiking neural networks (SNNs) use spatiotemporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultralow-power event-driven neuromorphic implementation. Just like other deep learning techniques, deep SNNs (DeepSNNs) benefit from the deep architecture. However, the training of DeepSNNs is not straightforward because the well-studied error backpropagation (BP) algorithm is not directly applicable. In this article, we first establish an understanding as to why error BP does not work well in DeepSNNs. We then propose a simple yet efficient rectified linear postsynaptic potential function (ReL-PSP) for spiking neurons and a spike-timing-dependent BP (STDBP) learning algorithm for DeepSNNs, where the timing of individual spikes is used to convey information (temporal coding) and learning (BP) is performed in an event-driven manner based on spike timing. We show that DeepSNNs trained with the proposed single-spike-time-based learning algorithm can achieve state-of-the-art classification accuracy. Furthermore, by utilizing the trained model parameters obtained from the proposed STDBP learning algorithm, we demonstrate ultralow-power inference operations on a recently proposed neuromorphic inference accelerator. The experimental results show that the neuromorphic hardware consumes only 0.751 mW of total power and achieves a low latency of 47.71 ms in classifying an image from the Modified National Institute of Standards and Technology (MNIST) dataset. Overall, this work investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity, and decision-making, providing a new perspective on the design of future DeepSNNs and neuromorphic hardware.
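The rectified linear PSP idea can be sketched minimally: a presynaptic spike at time t_s contributes nothing before it fires and a linear ramp afterwards, so the membrane potential is piecewise linear in the input spike times and its derivatives with respect to those times are simple. A toy single-neuron sketch with assumed names, not the paper's full formulation:

```python
def rel_psp(t, t_spike):
    """Rectified linear PSP kernel: zero before the spike, then a linear ramp."""
    return max(0.0, t - t_spike)

def membrane_potential(t, spike_times, weights):
    """Membrane potential as a weighted sum of ReL-PSP kernels, so v(t) is
    piecewise linear in the input spike times (illustrative single neuron)."""
    return sum(w * rel_psp(t, ts) for w, ts in zip(weights, spike_times))
```

With this kernel, dv/dt_spike is just -w wherever the kernel is active, which is the property that makes spike-timing-based backpropagation tractable.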
Collapse
|
27
|
Yu Q, Ma C, Song S, Zhang G, Dang J, Tan KC. Constructing Accurate and Efficient Deep Spiking Neural Networks With Double-Threshold and Augmented Schemes. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2022; 33:1714-1726. [PMID: 33471769 DOI: 10.1109/tnnls.2020.3043415] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Spiking neural networks (SNNs) are considered a potential candidate for overcoming current challenges, such as the high power consumption of artificial neural networks (ANNs); however, there is still a gap between them with respect to recognition accuracy on various tasks. A conversion strategy was thus recently introduced to bridge this gap by mapping a trained ANN to an SNN. However, it is still unclear to what extent the obtained SNN can retain both the accuracy advantage of the ANN and the high efficiency of the spike-based paradigm of computation. In this article, we propose two new conversion methods, TerMapping and AugMapping. TerMapping is a straightforward extension of a typical threshold-balancing method with a double-threshold scheme, while AugMapping additionally incorporates a new scheme of augmented spikes that employs a spike coefficient to carry the number of typical all-or-nothing spikes occurring at a time step. We examine the performance of our methods on the MNIST, Fashion-MNIST, and CIFAR10 data sets. The results show that the proposed double-threshold scheme can effectively improve the accuracy of the converted SNNs. More importantly, the proposed AugMapping is more advantageous for constructing accurate, fast, and efficient deep SNNs than other state-of-the-art approaches. Our study therefore provides new approaches for further integrating advanced techniques in ANNs to improve the performance of SNNs, which could be of great merit to applied development with spike-based neuromorphic computing.
Collapse
|
28
|
Mo L, Wang G, Long E, Zhuo M. ALSA: Associative Learning Based Supervised Learning Algorithm for SNN. Front Neurosci 2022; 16:838832. [PMID: 35431777 PMCID: PMC9008323 DOI: 10.3389/fnins.2022.838832] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2021] [Accepted: 03/07/2022] [Indexed: 11/13/2022] Open
Abstract
The spiking neural network (SNN) is considered the brain-like model that best conforms to the biological mechanisms of the brain. Due to the non-differentiability of the spike, training methods for SNNs are still incomplete. This paper proposes a supervised learning method for SNNs based on associative learning: ALSA. The method is based on the associative learning mechanism, and its realization is similar to the conditioned reflex process in animal training, giving it strong physiological plausibility. The method uses improved spike-timing-dependent plasticity (STDP) rules, combined with a teacher layer that induces spiking in specified output neurons, to strengthen synaptic connections between input spike patterns and specified output neurons and to weaken synaptic connections between unrelated patterns and unrelated output neurons. Using ALSA, this paper also completes supervised learning classification tasks on the IRIS and MNIST datasets, achieving 95.7% and 91.58% recognition accuracy, respectively, which demonstrates that ALSA is a feasible supervised learning method for SNNs. The contribution of this paper is to establish a biologically plausible supervised learning method for SNNs, based on STDP learning rules and the associative learning mechanism that is widely present in animal training.
Collapse
|
29
|
Triche A, Maida AS, Kumar A. Exploration in neo-Hebbian reinforcement learning: Computational approaches to the exploration-exploitation balance with bio-inspired neural networks. Neural Netw 2022; 151:16-33. [DOI: 10.1016/j.neunet.2022.03.021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2021] [Revised: 03/08/2022] [Accepted: 03/14/2022] [Indexed: 10/18/2022]
|
30
|
Supervised learning algorithm based on spike optimization mechanism for multilayer spiking neural networks. INT J MACH LEARN CYB 2022. [DOI: 10.1007/s13042-021-01500-8] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/21/2023]
|
31
|
|
32
|
|
33
|
Song S, Ma C, Sun W, Xu J, Dang J, Yu Q. Efficient learning with augmented spikes: A case study with image classification. Neural Netw 2021; 142:205-212. [PMID: 34023641 DOI: 10.1016/j.neunet.2021.05.002] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2020] [Revised: 02/15/2021] [Accepted: 05/06/2021] [Indexed: 10/21/2022]
Abstract
Efficient learning of spikes plays a valuable role in training spiking neural networks (SNNs) to produce desired responses to input stimuli. However, current learning rules are limited to a binary form of spikes. The seemingly ubiquitous phenomenon of bursting in nervous systems suggests a way to carry more information with spike bursts in addition to spike times. Based on this, we introduce an advanced form, the augmented spike, where a spike coefficient is used to carry additional information. How neurons could learn from and benefit from augmented spikes remains unclear. In this paper, we propose two new efficient learning rules to process spatiotemporal patterns composed of augmented spikes. Moreover, we examine the learning abilities of our methods on a synthetic recognition task of augmented spike patterns and on two practical image classification tasks. Experimental results demonstrate that our rules are capable of extracting information carried by both the timing and the coefficient of spikes. Our proposed approaches achieve remarkable performance and good robustness under various noise conditions compared to benchmarks. The improved performance indicates the merits of augmented spikes and our learning rules, which could be beneficial and generalized to a broad range of spike-based platforms.
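The paper's learning rules are not reproduced here, but the core representational idea — a spike that carries a coefficient as well as a time — can be illustrated with a coefficient-scaled postsynaptic response (the exponential kernel and all names are assumptions for illustration, not the authors' code):

```python
import numpy as np

def response(spikes, weights, t, tau=10.0):
    """Membrane-potential-like response to augmented spikes.
    Each input spike is a tuple (afferent_index, time, coefficient);
    the coefficient scales the usual exponential PSP kernel, so the
    response depends on both spike timing and spike coefficient."""
    v = 0.0
    for i, t_s, c in spikes:
        if t_s <= t:                     # only past spikes contribute
            v += weights[i] * c * np.exp(-(t - t_s) / tau)
    return v
```

Doubling a spike's coefficient doubles its contribution, which is exactly the extra degree of freedom a learning rule for augmented spikes can exploit.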
Affiliation(s)
- Shiming Song
- Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, 300350, China
| | - Chenxiang Ma
- Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, 300350, China
| | - Wei Sun
- Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, 300350, China
| | - Junhai Xu
- Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, 300350, China
| | - Jianwu Dang
- Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, 300350, China
| | - Qiang Yu
- Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, 300350, China.
| |
|
34
|
Adaptive SNN for Anthropomorphic Finger Control. SENSORS 2021; 21:s21082730. [PMID: 33924453 PMCID: PMC8069700 DOI: 10.3390/s21082730] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/18/2021] [Revised: 04/09/2021] [Accepted: 04/10/2021] [Indexed: 11/16/2022]
Abstract
Anthropomorphic hands that mimic the smoothness of human hand motions should be controlled by artificial units of high biological plausibility. Adaptability is among the characteristics of such control units, as it provides the anthropomorphic hand with the ability to learn motions. This paper presents a simple structure of an adaptive spiking neural network, implemented in analogue hardware, that can be trained using Hebbian learning mechanisms to rotate the metacarpophalangeal joint of a robotic finger towards targeted angle intervals. Being bioinspired, the spiking neural network drives actuators made of shape memory alloy and receives feedback from neuromorphic sensors that convert the joint rotation angle and compression force into spiking frequency. The adaptive SNN activates independent neural paths that correspond to angle intervals and learns in which of these intervals the finger rotation is stopped by an external force. Learning occurs when angle-specific neural paths are stimulated concurrently with the supraliminal stimulus that activates all the neurons inhibiting the SNN output, thereby stopping the finger. The results showed that, after learning, the finger stopped in the angle interval in which the angle-specific neural path was active, without activation of the supraliminal stimulus. The proposed concept can be used to implement control units for anthropomorphic robots that are able to learn motions unsupervised, based on principles of high biological plausibility.
|
35
|
Zahra O, Tolu S, Navarro-Alarcon D. Differential mapping spiking neural network for sensor-based robot control. BIOINSPIRATION & BIOMIMETICS 2021; 16:036008. [PMID: 33706302 DOI: 10.1088/1748-3190/abedce] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/23/2020] [Accepted: 03/11/2021] [Indexed: 06/12/2023]
Abstract
In this work, a spiking neural network (SNN) is proposed for approximating differential sensorimotor maps of robotic systems. The computed model is used as a local Jacobian-like projection that relates changes in sensor space to changes in motor space. The SNN consists of an input (sensory) layer and an output (motor) layer connected through plastic synapses, with inter-inhibitory connections at the output layer. Spiking neurons are modeled as Izhikevich neurons with a synaptic learning rule based on spike-timing-dependent plasticity. Feedback data from proprioceptive and exteroceptive sensors are encoded and fed into the input layer through a motor babbling process. A guideline for tuning the network parameters is proposed and applied along with the particle swarm optimization technique. The proposed control architecture takes advantage of the biologically plausible tools of an SNN to achieve the target-reaching task while minimizing deviations from the desired path and, consequently, the execution time. Thanks to the chosen architecture and the optimization of its parameters, the number of neurons and the amount of data required for training are considerably low. The SNN is capable of handling noisy sensor readings to guide the robot movements in real time. Experimental results are presented to validate the control methodology with a vision-guided robot.
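As a rough sketch of the neuron model this entry relies on, here is a single Euler step of the Izhikevich equations with the standard regular-spiking parameters (the step size and parameter values are textbook defaults, not values tuned in the paper):

```python
def izhikevich_step(v, u, I, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    """One Euler step of the Izhikevich neuron model.
    v: membrane potential (mV), u: recovery variable, I: input current.
    Returns (v, u, spiked); on reaching the 30 mV peak, v resets to c
    and u is incremented by d."""
    v = v + dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u = u + dt * a * (b * v - u)
    if v >= 30.0:
        return c, u + d, True
    return v, u, False
```

Driving this unit with a constant suprathreshold current produces the tonic spiking used as the basic signalling primitive in such sensorimotor maps.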
Affiliation(s)
- Omar Zahra
- The Hong Kong Polytechnic University, Hong Kong Special Administrative Region of China
| | | | - David Navarro-Alarcon
- The Hong Kong Polytechnic University, Hong Kong Special Administrative Region of China
| |
|
36
|
Chu D, Le Nguyen H. Constraints on Hebbian and STDP learned weights of a spiking neuron. Neural Netw 2021; 135:192-200. [PMID: 33401225 DOI: 10.1016/j.neunet.2020.12.012] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2020] [Revised: 12/09/2020] [Accepted: 12/10/2020] [Indexed: 10/22/2022]
Abstract
We analyse mathematically the constraints on weights resulting from Hebbian and STDP learning rules applied to a spiking neuron with weight normalisation. In the case of pure Hebbian learning, we find that the normalised weights equal the promotion probabilities of weights up to correction terms that depend on the learning rate and are usually small. A similar relation can be derived for STDP algorithms, where the normalised weight values reflect a difference between the promotion and demotion probabilities of the weight. These relations are practically useful in that they allow checking for convergence of Hebbian and STDP algorithms. Another application is novelty detection. We demonstrate this using the MNIST dataset.
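The analysed relation — that under pure Hebbian learning with a small learning rate, the normalised weights approach the promotion probabilities — can be checked numerically with a minimal sketch (the Bernoulli-input setup is an illustrative stand-in for the paper's spiking neuron, not its actual model):

```python
import numpy as np

def hebbian_normalised(rates, lr=0.01, steps=5000, seed=0):
    """Hebbian updates with multiplicative weight normalisation.
    Active inputs (Bernoulli with the given rates) promote their
    weights, which are then rescaled to sum to 1. The fixed point of
    the expected update is w_i = p_i / sum_j p_j, i.e. the normalised
    weights track the relative promotion probabilities."""
    rates = np.asarray(rates, dtype=float)
    rng = np.random.default_rng(seed)
    w = np.full(len(rates), 1.0 / len(rates))
    for _ in range(steps):
        x = (rng.random(len(rates)) < rates).astype(float)  # input spikes
        w = w + lr * x          # Hebbian promotion of active synapses
        w = w / w.sum()         # weight normalisation
    return w
```

With rates (0.8, 0.2) the first weight settles near 0.8, matching the promotion-probability relation up to small learning-rate-dependent fluctuations.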
Affiliation(s)
- Dominique Chu
- CEMS, School of Computing, University of Kent, CT2 7NF, Canterbury, UK.
| | - Huy Le Nguyen
- CEMS, School of Computing, University of Kent, CT2 7NF, Canterbury, UK
| |
|
37
|
Abstract
Neuromorphic devices and systems have attracted attention as next-generation computing due to their high efficiency in processing complex data. So far, they have been demonstrated using both machine-learning software and complementary metal-oxide-semiconductor-based hardware. However, these approaches have drawbacks in power consumption and learning speed. An energy-efficient neuromorphic computing system requires hardware that can mimic the functions of a brain. Therefore, various materials have been introduced for the development of neuromorphic devices. Here, recent advances in neuromorphic devices are reviewed. First, the functions of biological synapses and neurons are discussed. Also, deep neural networks and spiking neural networks are described. Then, the operation mechanism and the neuromorphic functions of emerging devices are reviewed. Finally, the challenges and prospects for developing neuromorphic devices that use emerging materials are discussed.
Affiliation(s)
- Min-Kyu Kim
- Department of Materials Science and Engineering, Pohang University of Science and Technology (POSTECH), Pohang 37673, Republic of Korea
| | - Youngjun Park
- Department of Materials Science and Engineering, Pohang University of Science and Technology (POSTECH), Pohang 37673, Republic of Korea
| | - Ik-Jyae Kim
- Department of Materials Science and Engineering, Pohang University of Science and Technology (POSTECH), Pohang 37673, Republic of Korea
| | - Jang-Sik Lee
- Department of Materials Science and Engineering, Pohang University of Science and Technology (POSTECH), Pohang 37673, Republic of Korea
| |
|
38
|
|
39
|
Zhao D, Zeng Y, Zhang T, Shi M, Zhao F. GLSNN: A Multi-Layer Spiking Neural Network Based on Global Feedback Alignment and Local STDP Plasticity. Front Comput Neurosci 2020; 14:576841. [PMID: 33281591 PMCID: PMC7689090 DOI: 10.3389/fncom.2020.576841] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2020] [Accepted: 10/12/2020] [Indexed: 11/21/2022] Open
Abstract
Spiking Neural Networks (SNNs) are considered the third generation of artificial neural networks, operating more closely to information processing in biological brains. However, it remains a challenge to train non-differentiable SNNs efficiently and robustly in the spike domain. Here we give an alternative method to train SNNs using biologically plausible structural and functional inspirations from the brain. First, inspired by the brain's significant top-down structural connections, a global random feedback alignment is designed to help the SNN propagate the error target from the output layer directly to the preceding few layers. Then, inspired by the local plasticity of biological systems, in which synapses are tuned chiefly by neighbouring neurons, a differential STDP rule is used to optimise local plasticity. Extensive experimental results on the benchmark MNIST (98.62%) and Fashion-MNIST (89.05%) datasets show that the proposed algorithm performs favorably against several state-of-the-art SNNs trained with backpropagation.
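GLSNN's global pathway replaces weight transport with fixed random feedback matrices. A minimal rate-based sketch of feedback alignment (plain tanh units standing in for the paper's spiking dynamics and differential STDP; all names and values are illustrative) looks like:

```python
import numpy as np

def feedback_alignment_step(W1, W2, B, x, target, lr=0.1):
    """One training step of feedback alignment on a 2-layer network.
    The hidden-layer error is routed through a fixed random matrix B
    instead of W2.T, so no weight transport is required.
    Returns updated (W1, W2) and the squared output error."""
    h = np.tanh(W1 @ x)               # hidden activity
    y = W2 @ h                        # linear output
    e = y - target                    # output error
    e_h = (B @ e) * (1.0 - h ** 2)    # error via fixed feedback, tanh'
    W2 = W2 - lr * np.outer(e, h)
    W1 = W1 - lr * np.outer(e_h, x)
    return W1, W2, float(np.sum(e ** 2))
```

Despite B being random and fixed, the forward weights adapt so that the error decreases over iterations, which is the alignment effect the global pathway exploits.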
Affiliation(s)
- Dongcheng Zhao
- Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
| | - Yi Zeng
- Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Beijing, China
- National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
| | - Tielin Zhang
- Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China
| | - Mengting Shi
- Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
| | - Feifei Zhao
- Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China
| |
|
40
|
Synthesis of recurrent neural dynamics for monotone inclusion with application to Bayesian inference. Neural Netw 2020; 131:231-241. [PMID: 32818873 DOI: 10.1016/j.neunet.2020.07.037] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2020] [Revised: 07/06/2020] [Accepted: 07/31/2020] [Indexed: 11/22/2022]
Abstract
We propose a top-down approach to constructing recurrent neural circuit dynamics for the mathematical problem of monotone inclusion (MoI). MoI is a general optimization framework that encompasses a wide range of contemporary problems, including Bayesian inference and Markov decision making. We show that in a recurrent neural circuit/network with Poisson neurons, each neuron's firing curve can be understood as the proximal operator of a local objective function, while the overall circuit dynamics constitutes an operator-splitting system of ordinary differential equations whose equilibrium point corresponds to the solution of the MoI problem. Our analysis thus establishes that neural circuits are a substrate for solving a broad class of computational tasks. In this regard, we provide an explicit synthesis procedure for building neural circuits for specific MoI problems and demonstrate it for the specific cases of Bayesian inference and sparse neural coding.
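As a concrete instance of solving a monotone inclusion with an operator-splitting iteration, the sketch below applies forward-backward splitting to the lasso problem, where the soft-threshold proximal operator plays the role the paper assigns to a neuron's firing curve (this is a standard illustrative example, not the paper's synthesis procedure):

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam * ||x||_1 (the 'firing curve' analogue)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def forward_backward(A, b, lam=0.1, steps=500):
    """Forward-backward splitting for the monotone inclusion
    0 in A.T (A x - b) + lam * d||x||_1, i.e. the lasso problem.
    Alternates a gradient (forward) step on the smooth part with a
    proximal (backward) step on the nonsmooth part."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # step below 1/L ensures convergence
    for _ in range(steps):
        x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)
    return x
```

The equilibrium of this iteration is exactly a zero of the sum of the two monotone operators, mirroring the paper's claim that the circuit's equilibrium point solves the MoI problem.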
|