1
Kim Y, Kahana A, Yin R, Li Y, Stinis P, Karniadakis GE, Panda P. Rethinking skip connections in Spiking Neural Networks with Time-To-First-Spike coding. Front Neurosci 2024; 18:1346805. PMID: 38419664; PMCID: PMC10899405; DOI: 10.3389/fnins.2024.1346805.
Abstract
Time-To-First-Spike (TTFS) coding in Spiking Neural Networks (SNNs) offers significant advantages in terms of energy efficiency, closely mimicking the behavior of biological neurons. In this work, we delve into the role of skip connections, a widely used concept in Artificial Neural Networks (ANNs), within the domain of SNNs with TTFS coding. Our focus is on two distinct types of skip connection architectures: (1) addition-based skip connections, and (2) concatenation-based skip connections. We find that addition-based skip connections introduce an additional delay in terms of spike timing. On the other hand, concatenation-based skip connections circumvent this delay but produce time gaps between the post-convolution and skip-connection paths, thereby restricting the effective mixing of information from these two paths. To mitigate these issues, we propose a novel approach involving a learnable delay for skip connections in the concatenation-based skip connection architecture. This approach successfully bridges the time gap between the convolutional and skip branches, facilitating improved information mixing. We conduct experiments on public datasets including MNIST and Fashion-MNIST, illustrating the advantage of skip connections in TTFS coding architectures. Additionally, we demonstrate the applicability of TTFS coding beyond image recognition by extending it to scientific machine-learning tasks, broadening the potential uses of SNNs.
Affiliation(s)
- Youngeun Kim
- Department of Electrical Engineering, Yale University, New Haven, CT, United States
- Adar Kahana
- Division of Applied Mathematics, Brown University, Providence, RI, United States
- Ruokai Yin
- Department of Electrical Engineering, Yale University, New Haven, CT, United States
- Yuhang Li
- Department of Electrical Engineering, Yale University, New Haven, CT, United States
- Panos Stinis
- Division of Applied Mathematics, Brown University, Providence, RI, United States
- Advanced Computing, Mathematics and Data Division, Pacific Northwest National Laboratory, Richland, WA, United States
- George Em Karniadakis
- Division of Applied Mathematics, Brown University, Providence, RI, United States
- Advanced Computing, Mathematics and Data Division, Pacific Northwest National Laboratory, Richland, WA, United States
- Priyadarshini Panda
- Department of Electrical Engineering, Yale University, New Haven, CT, United States
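The learnable-delay idea summarized in the abstract above can be sketched in a few lines of numpy. This is an illustrative toy, not the authors' implementation: in TTFS coding each neuron fires at most once, the convolutional branch's spikes arrive later than the skip branch's, and a learnable scalar delay shifts the skip-path spike times forward so the two branches overlap before concatenation. All names and the toy delay value are assumptions.

```python
import numpy as np

def concat_with_learnable_delay(conv_times, skip_times, delay):
    """Align two TTFS spike-time tensors before concatenation.

    conv_times: spike times after the convolutional path (arrive later).
    skip_times: spike times on the skip path (arrive earlier).
    delay: learnable scalar shifting the skip path forward in time.
    """
    return np.concatenate([conv_times, skip_times + delay], axis=-1)

# Toy example: the conv path adds roughly one "layer delay" of ~1.1 time units.
conv = np.array([2.1, 2.4, 2.3])   # spike times after convolution
skip = np.array([1.0, 1.3, 1.2])   # earlier spike times on the skip path

merged_no_delay = concat_with_learnable_delay(conv, skip, delay=0.0)
merged_aligned = concat_with_learnable_delay(conv, skip, delay=1.1)

# With delay=1.1 both branches occupy the same time window, so downstream
# TTFS neurons can mix information from the two paths.
gap_before = conv.mean() - skip.mean()
gap_after = conv.mean() - (skip + 1.1).mean()
```

In training, `delay` would be a learnable parameter updated by backpropagation together with the weights; the sketch only shows the alignment it performs.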
2
Tian F, Yang J, Zhao S, Sawan M. NeuroCARE: A generic neuromorphic edge computing framework for healthcare applications. Front Neurosci 2023; 17:1093865. PMID: 36755733; PMCID: PMC9900119; DOI: 10.3389/fnins.2023.1093865.
Abstract
Highly accurate classification methods, including neural networks, have been reported for multi-task biomedical signal processing. However, the reported approaches are computationally expensive and power-hungry. Such bottlenecks make it hard to deploy them on edge platforms such as mobile and wearable devices. Motivated by the good performance and high energy efficiency of spiking neural networks (SNNs), a generic neuromorphic framework for edge healthcare and biomedical applications is proposed and evaluated on various tasks, including electroencephalography (EEG) based epileptic seizure prediction, electrocardiography (ECG) based arrhythmia detection, and electromyography (EMG) based hand gesture recognition. This approach, NeuroCARE, uses a unique sparse spike encoder to generate spike sequences from raw biomedical signals and makes classifications using a spike-based computing engine that combines the advantages of both CNNs and SNNs. An adaptive weight mapping method co-designed with the spike encoder can efficiently convert a CNN to an SNN without performance deterioration. The evaluation results show that the overall performance, measured as classification accuracy, sensitivity, and F1 score, reaches 92.7%, 96.7%, and 85.7% for seizure prediction, arrhythmia detection, and hand gesture recognition, respectively. Compared with CNN topologies, the computation complexity is reduced by over 80.7%, while the energy consumption and area occupation are reduced by over 80% and over 64.8%, respectively, indicating that the proposed neuromorphic computing approach is energy- and area-efficient as well as highly accurate, paving the way for deployment on edge platforms.
Affiliation(s)
- Fengshi Tian
- CenBRAIN Neurotech, School of Engineering, Westlake University, Hangzhou, Zhejiang, China; The Hong Kong University of Science and Technology (HKUST), New Territories, Hong Kong SAR, China
- Jie Yang
- CenBRAIN Neurotech, School of Engineering, Westlake University, Hangzhou, Zhejiang, China
- Shiqi Zhao
- CenBRAIN Neurotech, School of Engineering, Westlake University, Hangzhou, Zhejiang, China
- Mohamad Sawan
- CenBRAIN Neurotech, School of Engineering, Westlake University, Hangzhou, Zhejiang, China
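The abstract above does not specify NeuroCARE's encoder design, but a common way to obtain sparse spike sequences from slowly varying biomedical signals is threshold-crossing (delta) encoding. The sketch below is a generic toy in that spirit, with illustrative names and threshold; it is not the paper's actual encoder.

```python
import numpy as np

def sparse_delta_encode(signal, threshold=0.5):
    """Emit a spike (+1/-1) only when the signal has moved by more than
    `threshold` since the last spike; zeros elsewhere. For slowly varying
    biomedical signals this yields a sparse spike train."""
    spikes = np.zeros(len(signal), dtype=int)
    last = signal[0]  # reference level at the most recent spike
    for i in range(1, len(signal)):
        diff = signal[i] - last
        if diff > threshold:
            spikes[i] = 1          # upward crossing
            last = signal[i]
        elif diff < -threshold:
            spikes[i] = -1         # downward crossing
            last = signal[i]
    return spikes

sig = np.array([0.0, 0.2, 0.9, 1.0, 0.1])
sp = sparse_delta_encode(sig, threshold=0.5)
```

Only two of the five samples produce spikes here, which is the sparsity property that makes event-driven SNN inference cheap.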
3
Zhao J, Yang J, Wang J, Wu W. Spiking Neural Network Regularization With Fixed and Adaptive Drop-Keep Probabilities. IEEE Trans Neural Netw Learn Syst 2022; 33:4096-4109. PMID: 33571100; DOI: 10.1109/tnnls.2021.3055825.
Abstract
Dropout and DropConnect are two techniques that facilitate the regularization of neural network models and have achieved state-of-the-art results on several benchmarks. In this paper, to improve the generalization capability of spiking neural networks (SNNs), the two drop techniques are first applied to the state-of-the-art SpikeProp learning algorithm, resulting in two improved learning algorithms called SPDO (SpikeProp with Dropout) and SPDC (SpikeProp with DropConnect). Since a higher membrane potential of a biological neuron implies a higher probability of neural activation, three adaptive drop algorithms, SpikeProp with Adaptive Dropout (SPADO), SpikeProp with Adaptive DropConnect (SPADC), and SpikeProp with Group Adaptive Drop (SPGAD), are proposed that adaptively adjust the keep probability while training SNNs. A convergence theorem for SPDC is proven under the assumptions of a bounded norm of the connection weights and a finite number of equilibria. In addition, the five proposed algorithms are embedded in a collaborative neurodynamic optimization framework to improve the learning performance of SNNs. Experimental results on four benchmark data sets demonstrate that the three adaptive algorithms converge faster than SpikeProp, SPDO, and SPDC, and that the generalization errors of the five proposed algorithms are significantly smaller than that of SpikeProp. Furthermore, the experimental results also show that the five algorithms based on collaborative neurodynamic optimization improve on several measures.
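The key adaptive idea in the abstract above, keeping neurons with higher membrane potential more often because they are biologically more likely to fire, can be illustrated with a small numpy sketch. The mapping from potential to keep probability, the range [0.5, 1.0], and all names are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

rng = np.random.default_rng(0)

def adaptive_keep_prob(v_mem, p_min=0.5, p_max=1.0):
    """Map each neuron's membrane potential to a keep probability:
    neurons closer to the layer's peak potential (more likely to fire
    biologically) are kept with higher probability."""
    v = np.asarray(v_mem, dtype=float)
    span = v.max() - v.min()
    norm = (v - v.min()) / span if span > 0 else np.ones_like(v)
    return p_min + (p_max - p_min) * norm

def adaptive_dropout(activations, v_mem, rng):
    """Drop activations with a per-neuron, potential-dependent mask."""
    p_keep = adaptive_keep_prob(v_mem)
    mask = rng.random(len(activations)) < p_keep
    return activations * mask, p_keep

v = np.array([0.1, 0.5, 0.9])    # toy membrane potentials
a = np.array([1.0, 1.0, 1.0])    # toy activations
out, p_keep = adaptive_dropout(a, v, rng)
```

Fixed-probability Dropout is the special case `p_min == p_max`; the adaptive variants in the paper adjust the keep probability during training rather than using this one-shot normalization.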
4
Zhang M, Wang J, Wu J, Belatreche A, Amornpaisannon B, Zhang Z, Miriyala VPK, Qu H, Chua Y, Carlson TE, Li H. Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks. IEEE Trans Neural Netw Learn Syst 2022; 33:1947-1958. PMID: 34534091; DOI: 10.1109/tnnls.2021.3110991.
Abstract
Spiking neural networks (SNNs) use spatiotemporal spike patterns to represent and transmit information, which are not only biologically realistic but also suitable for ultralow-power event-driven neuromorphic implementation. Just like other deep learning techniques, deep SNNs (DeepSNNs) benefit from the deep architecture. However, the training of DeepSNNs is not straightforward because the well-studied error backpropagation (BP) algorithm is not directly applicable. In this article, we first establish an understanding as to why error BP does not work well in DeepSNNs. We then propose a simple yet efficient rectified linear postsynaptic potential function (ReL-PSP) for spiking neurons and a spike-timing-dependent BP (STDBP) learning algorithm for DeepSNNs, in which the timing of individual spikes is used to convey information (temporal coding) and learning (BP) is performed based on spike timing in an event-driven manner. We show that DeepSNNs trained with the proposed single-spike-time-based learning algorithm can achieve state-of-the-art classification accuracy. Furthermore, using the trained model parameters obtained from the proposed STDBP learning algorithm, we demonstrate ultralow-power inference operations on a recently proposed neuromorphic inference accelerator. The experimental results show that the neuromorphic hardware consumes a total power of 0.751 mW and achieves a low latency of 47.71 ms to classify an image from the Modified National Institute of Standards and Technology (MNIST) dataset. Overall, this work investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity, and decision-making, providing a new perspective on the design of future DeepSNNs and neuromorphic hardware.
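A rectified linear PSP kernel makes the membrane potential piecewise linear in time, V(t) = Σ_j w_j · max(0, t − t_j), so the first spike time can be solved in closed form between input arrivals. The sketch below illustrates that property under these assumptions; it is a minimal reading of the ReL-PSP idea, not the paper's reference code, and all names and parameter values are illustrative.

```python
import numpy as np

def relpsp_spike_time(in_times, weights, theta=1.0):
    """First spike time of a neuron with rectified-linear PSP kernels:
    V(t) = sum_j w_j * max(0, t - t_j). Between consecutive input
    arrivals V(t) is linear, so the threshold crossing t_out solves
    w_sum * t_out - wt_sum = theta. Returns np.inf if the threshold
    is never reached."""
    order = np.argsort(in_times)
    t = np.asarray(in_times, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    w_sum = 0.0   # sum of weights of inputs that have arrived
    wt_sum = 0.0  # sum of w_j * t_j over those inputs
    for i in range(len(t)):
        w_sum += w[i]
        wt_sum += w[i] * t[i]
        if w_sum > 0:
            t_out = (theta + wt_sum) / w_sum
            nxt = t[i + 1] if i + 1 < len(t) else np.inf
            if t_out <= nxt:   # fires before the next input arrives
                return t_out
    return np.inf

# Two inputs at t=0 and t=1 with weights 0.5 and 1.0, threshold 1.0.
t_fire = relpsp_spike_time([0.0, 1.0], [0.5, 1.0], theta=1.0)
```

Because `t_fire` is an explicit function of input spike times and weights, gradients for spike-timing-dependent BP follow directly, with no surrogate needed for the linear kernel.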
5
Supervised learning algorithm based on spike optimization mechanism for multilayer spiking neural networks. Int J Mach Learn Cybern 2022. DOI: 10.1007/s13042-021-01500-8.
6
Supervised Learning Algorithm for Multilayer Spiking Neural Networks with Long-Term Memory Spike Response Model. Comput Intell Neurosci 2021; 2021:8592824. PMID: 34868299; PMCID: PMC8635912; DOI: 10.1155/2021/8592824.
Abstract
As a new brain-inspired computational model of artificial neural networks, spiking neural networks transmit and process information via precisely timed spike trains. Constructing efficient learning methods is a significant research field in spiking neural networks. In this paper, we present a supervised learning algorithm for multilayer feedforward spiking neural networks in which all neurons, in all layers, can fire multiple spikes. The feedforward network consists of spiking neurons governed by a biologically plausible long-term memory spike response model, in which the effect of earlier spikes on refractoriness is not neglected, so that adaptation effects are incorporated. The gradient descent method is employed to derive a synaptic weight updating rule for learning spike trains. The proposed algorithm is tested and verified on spatiotemporal pattern learning problems, including a set of spike-train learning tasks and nonlinear pattern classification problems on four UCI datasets. Simulation results indicate that the proposed algorithm improves learning accuracy in comparison with other supervised learning algorithms.
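The "long-term memory" property described above means the refractory term sums over all of the neuron's earlier output spikes, not just the most recent one. A minimal SRM potential with that property can be sketched as follows; the kernel shapes and all parameter values are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def srm_potential(t, in_spikes, weights, out_spikes,
                  tau_m=4.0, tau_s=2.0, tau_r=8.0, eta0=2.0):
    """Membrane potential of a long-term-memory SRM neuron at time t:
    PSPs from all input spikes plus refractory kernels from ALL of the
    neuron's own earlier spikes (not just the latest one)."""
    def eps(s):   # double-exponential PSP kernel
        return (np.exp(-s / tau_m) - np.exp(-s / tau_s)) if s > 0 else 0.0

    def eta(s):   # refractory (after-spike) kernel, hyperpolarizing
        return -eta0 * np.exp(-s / tau_r) if s > 0 else 0.0

    # in_spikes[j] is the list of spike times on input synapse j.
    u = sum(w * eps(t - tf)
            for w, times in zip(weights, in_spikes) for tf in times)
    u += sum(eta(t - tf) for tf in out_spikes)
    return u

u_with_refractory = srm_potential(5.0, in_spikes=[[1.0], [2.0]],
                                  weights=[1.0, 0.5], out_spikes=[3.0])
u_no_refractory = srm_potential(5.0, in_spikes=[[1.0], [2.0]],
                                weights=[1.0, 0.5], out_spikes=[])
```

The gradient-descent weight update in the paper differentiates an error over spike trains through potentials of exactly this form.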
7
Hong C, Wei X, Wang J, Deng B, Yu H, Che Y. Training Spiking Neural Networks for Cognitive Tasks: A Versatile Framework Compatible With Various Temporal Codes. IEEE Trans Neural Netw Learn Syst 2020; 31:1285-1296. PMID: 31247574; DOI: 10.1109/tnnls.2019.2919662.
Abstract
Recent studies have demonstrated the effectiveness of supervised learning in spiking neural networks (SNNs). A trainable SNN provides a valuable tool not only for engineering applications but also for theoretical neuroscience studies. Here, we propose a modified SpikeProp learning algorithm, which ensures better learning stability for SNNs and supports more diverse network structures and coding schemes. Specifically, we designed a spike gradient threshold rule to solve the well-known exploding gradient problem in SNN training. In addition, regulation rules on firing rates and connection weights are proposed to control the network activity during training. Based on these rules, biologically realistic features such as lateral connections, complex synaptic dynamics, and sparse activities are included in the network to facilitate neural computation. We demonstrate the versatility of this framework by implementing three well-known temporal codes for different types of cognitive tasks, namely, handwritten digit recognition, spatial coordinate transformation, and motor sequence generation. Several important features observed in experimental studies, such as selective activity, excitatory-inhibitory balance, and weak pairwise correlation, emerged in the trained model. This agreement between experimental and computational results further confirmed the importance of these features in neural function. This work provides a new framework in which various neural behaviors can be modeled and the underlying computational mechanisms can be studied.
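In SpikeProp, the spike-time gradient contains a factor inversely proportional to the slope of the membrane potential at threshold crossing, which blows up when the potential crosses threshold slowly; a gradient threshold rule caps that factor. The sketch below shows the capping step only, under the assumption that it amounts to elementwise clipping; the actual rule in the paper may differ, and all names are illustrative.

```python
import numpy as np

def thresholded_spike_grad(grad, g_max=10.0):
    """Clip spike-time gradients elementwise to [-g_max, g_max].
    The 1/(du/dt) factor in SpikeProp's chain rule can become huge when
    the membrane potential grazes the threshold; a hard cap on the
    gradient keeps the weight updates (and training) stable."""
    return np.clip(grad, -g_max, g_max)

# A batch of raw gradients, two of which have exploded.
raw = np.array([0.3, -250.0, 7.0, 1e6])
safe = thresholded_spike_grad(raw, g_max=10.0)
```

Well-behaved gradients pass through unchanged; only the exploded entries are capped, so the update direction for stable neurons is preserved.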
8
Wang X, Lin X, Dang X. Supervised learning in spiking neural networks: A review of algorithms and evaluations. Neural Netw 2020; 125:258-280. PMID: 32146356; DOI: 10.1016/j.neunet.2020.02.011.
Abstract
As a new brain-inspired computational model of the artificial neural network, a spiking neural network encodes and processes neural information through precisely timed spike trains. Spiking neural networks are composed of biologically plausible spiking neurons, which have become suitable tools for processing complex temporal or spatiotemporal information. However, because of their intricate discontinuous and implicitly nonlinear mechanisms, formulating efficient supervised learning algorithms for spiking neural networks is difficult, and this has become an important problem in the field. This article presents a comprehensive review of supervised learning algorithms for spiking neural networks and evaluates them qualitatively and quantitatively. First, a comparison between spiking neural networks and traditional artificial neural networks is provided. The general framework and some related theories of supervised learning for spiking neural networks are then introduced. Furthermore, the state-of-the-art supervised learning algorithms of recent years are reviewed from the perspectives of applicability to spiking neural network architectures and the inherent mechanisms of the learning algorithms. A performance comparison of some representative algorithms on spike-train learning is also made. In addition, we provide five qualitative performance evaluation criteria for supervised learning algorithms for spiking neural networks and further present a new taxonomy of supervised learning algorithms based on these five criteria. Finally, some future research directions in this field are outlined.
Affiliation(s)
- Xiangwen Wang
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, 730070, People's Republic of China
- Xianghong Lin
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, 730070, People's Republic of China
- Xiaochao Dang
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, 730070, People's Republic of China
9
Shrestha SB, Song Q. Robust spike-train learning in spike-event based weight update. Neural Netw 2017; 96:33-46. PMID: 28957730; DOI: 10.1016/j.neunet.2017.08.010.
Abstract
Supervised learning algorithms in a spiking neural network either learn a spike-train pattern for a single neuron receiving input spike trains from multiple input synapses, or learn to output the first spike time in a feedforward network setting. In this paper, we build upon a spike-event based weight-update strategy to learn continuous spike trains in a spiking neural network with a hidden layer, using a dead-zone on-off adaptive learning-rate rule that ensures convergence of the learning process, in the sense of weight convergence, and robustness of the learning process to external disturbances. On different benchmark problems, we compare this new method with other relevant spike-train learning algorithms. The results show that both the speed of learning and the rate of successful learning are greatly improved.
Affiliation(s)
- Sumit Bam Shrestha
- Temasek Laboratories, 5A Engineering Drive 1, #09-02, Singapore 117411, Singapore
- Qing Song
- School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798, Singapore
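The dead-zone on-off rule in the abstract above can be read as: switch the update off when the spike-train error is small enough to be attributed to disturbances, and otherwise normalize the step by the gradient magnitude so weight changes stay bounded. The sketch below is one plausible rendering of that idea; the dead-zone bound, the normalization, and all names are illustrative assumptions, not the paper's exact rule.

```python
def deadzone_lr(error, grad_norm, base_lr=0.1, dead_zone=0.05):
    """On-off adaptive learning rate with a dead zone.

    error: current spike-train error measure.
    grad_norm: magnitude of the current weight gradient.
    Inside the dead zone the update is switched off (robustness to
    disturbances); outside it, the step is normalized by the gradient
    magnitude so the weight change stays bounded (weight convergence).
    """
    if abs(error) <= dead_zone:
        return 0.0                       # "off": error within dead zone
    return base_lr / (1.0 + grad_norm)   # "on": bounded, normalized step

lr_off = deadzone_lr(error=0.02, grad_norm=3.0)
lr_on = deadzone_lr(error=0.4, grad_norm=3.0)
```

The weight update would then be `w -= lr * grad`; with `lr_off == 0` the weights are frozen whenever the residual error looks like noise.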