1. Yu Y, Ding Z, Ren Y, Wang X, Quan H, Jia H, Jiang C. Understanding the Resistive Switching Behaviors of Top Electrode (Au, Cu, and Al)-Dependent TiO2-Based Memristive Devices. ACS Omega 2024; 9:24601-24609. [PMID: 38882132; PMCID: PMC11170736; DOI: 10.1021/acsomega.4c00320]
Abstract
Memristor-based neuromorphic computing is a promising approach to handling complex parallel tasks in the era of big data. To implement brain-inspired applications of spiking neural networks, new physical architecture designs are needed. Here, a serial memristive structure (SMS) consisting of memristive devices with different top electrodes is proposed. Top electrodes of Au, Cu, and Al are selected for nitrogen-doped TiO2 nanorod array-based memristive devices. The typical I-V cycles, retention, on/off ratio, and cycle-to-cycle variation of the top electrode-dependent memristive devices have been studied. Devices with Cu and Al electrodes exhibit a retention of over 10^4 s, and the resistance states of the device with the Al top electrode are reliable. Furthermore, the conductive mechanism underlying the I-V curves is discussed in detail. The interface-type mechanism and block conductance mechanism are illustrated, which are related to electron migration and ion/anion migration, respectively. Finally, an SMS has been constructed using memristive devices with Al and Cu top electrodes, which can mimic the spiking pulse-dependent plasticity of a synapse and a neuron body. The SMS provides a new approach to implementing a fundamental physical unit for neuromorphic computing.
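As a concrete point of reference for the I-V cycles described above, the following is a minimal simulation of a generic linear ion-drift memristor (the classic Strukov/HP formulation), not the paper's nitrogen-doped TiO2 nanorod devices; all parameter values are illustrative assumptions.

```python
import numpy as np

# Generic linear ion-drift memristor model (Strukov et al.) -- an
# illustrative sketch, NOT the paper's TiO2 nanorod device model.
R_on, R_off = 100.0, 16e3      # assumed limiting resistances (ohm)
D, mu_v = 10e-9, 1e-14         # film thickness (m), ion mobility (m^2/(V*s))

dt = 1e-4
t = np.arange(0.0, 0.1, dt)
v = 1.2 * np.sin(2 * np.pi * 50 * t)   # sinusoidal voltage sweep
w = 0.5 * D                            # state: doped-region width
i_trace = []
for vk in v:
    R = R_on * (w / D) + R_off * (1 - w / D)
    i = vk / R
    w += mu_v * (R_on / D) * i * dt    # linear ion drift
    w = min(max(w, 0.0), D)            # hard state bounds
    i_trace.append(i)
# Plotting i_trace against v yields the pinched-hysteresis loop
# characteristic of memristive I-V cycles.
```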
Affiliation(s)
- Yantao Yu
- College of Physics and Electronic Information, Luoyang Normal University, Luoyang 471934, China
- Zizhao Ding
- Powder Metallurgy Research Institute, Central South University, Changsha 410083, China
- Yaoying Ren
- College of Physics and Electronic Information, Luoyang Normal University, Luoyang 471934, China
- Xiangfei Wang
- College of Physics and Electronic Information, Luoyang Normal University, Luoyang 471934, China
- Hongguang Quan
- College of Physics and Electronic Information, Luoyang Normal University, Luoyang 471934, China
- Hong Jia
- College of Physics and Electronic Information, Luoyang Normal University, Luoyang 471934, China
- Chao Jiang
- Powder Metallurgy Research Institute, Central South University, Changsha 410083, China
2. Bukh AV, Rybalova EV, Shepelev IA, Vadivasova TE. Classification of musical intervals by spiking neural networks: Perfect student in solfège classes. Chaos 2024; 34:063102. [PMID: 38829796; DOI: 10.1063/5.0210790]
Abstract
We investigate the spike activity of a network of excitable FitzHugh-Nagumo neurons driven by constant two-frequency auditory signals. The neurons are supplemented with linear frequency filters and nonlinear input signal converters. We show that it is possible to configure the network to recognize a specific frequency ratio (musical interval) by selecting the parameters of the neurons, the input filters, and the coupling between neurons. A set of appropriately configured subnetworks with different topologies and coupling strengths can serve as a classifier for musical intervals. We have found that the selective properties of the classifier are due to the presence of a specific topology of coupling between the neurons of the network.
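As background for the model class used here, below is a minimal sketch of a single excitable FitzHugh-Nagumo neuron driven by a two-frequency signal. The parameters, the rescaled low frequencies, and the crude spike detector are illustrative assumptions, not the paper's filtered, coupled network.

```python
import numpy as np

# Single excitable FitzHugh-Nagumo neuron under a two-frequency drive.
# All values are assumptions chosen for readability, not the paper's setup.
eps, a = 0.08, 1.05      # time-scale separation, excitability parameter
f1, f2 = 2.0, 3.0        # 3:2 frequency ratio (a perfect fifth), rescaled
A = 0.5                  # drive amplitude

dt = 1e-3
x, y = -1.05, -0.66      # fast (voltage-like) and slow recovery variables
spike_times = []
for k in range(int(20.0 / dt)):
    t = k * dt
    s = A * (np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t))
    dx = (x - x**3 / 3.0 - y + s) / eps
    dy = x + a
    x += dx * dt
    y += dy * dt
    if x > 1.0 and (not spike_times or t - spike_times[-1] > 0.2):
        spike_times.append(t)  # crude threshold-crossing spike detector
print(f"{len(spike_times)} spikes in 20 time units")
```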
Affiliation(s)
- A V Bukh
- Institute of Physics, Saratov State University, 83 Astrakhanskaya Street, Saratov 410012, Russia
- E V Rybalova
- Institute of Physics, Saratov State University, 83 Astrakhanskaya Street, Saratov 410012, Russia
- I A Shepelev
- Institute of Physics, Saratov State University, 83 Astrakhanskaya Street, Saratov 410012, Russia
- Almetyevsk State Petroleum Institute, 2 Lenin Street, Almetyevsk 423462, Russia
- T E Vadivasova
- Institute of Physics, Saratov State University, 83 Astrakhanskaya Street, Saratov 410012, Russia
3. Kang C, Prokop J, Tong L, Zhou H, Hu Y, Novak D. InA: Inhibition Adaption on pre-trained language models. Neural Netw 2024; 178:106410. [PMID: 38850634; DOI: 10.1016/j.neunet.2024.106410]
Abstract
Fine-tuning pre-trained language models (LMs) may not always be the most practical approach for downstream tasks. While adaptation fine-tuning methods have shown promising results, a clearer explanation of their mechanisms, and of how to further inhibit the transmission of irrelevant information, is needed. To address this, we propose an Inhibition Adaptation (InA) fine-tuning method that aims to reduce the number of added tunable weights and appropriately reweight knowledge derived from pre-trained LMs. The InA method involves (1) inserting a small trainable vector into each Transformer attention architecture and (2) setting a threshold to directly eliminate irrelevant knowledge. This approach draws inspiration from shunting inhibition, which allows the inhibition of specific neurons to gate other functional neurons. With the inhibition mechanism, InA achieves competitive or even superior performance compared to other fine-tuning methods on BERT-large, RoBERTa-large, and DeBERTa-large for text classification and question-answering tasks.
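The core mechanism, a small trainable vector plus a hard threshold on attention, can be sketched as follows. This is a hedged reconstruction from the abstract: the module name, gate placement, and threshold value are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

# Hedged sketch of the InA idea: a small trainable vector rescales
# attention weights and a threshold gates weak (irrelevant) interactions,
# in the spirit of shunting inhibition. Names and placement are
# illustrative assumptions.
class InhibitionGate(nn.Module):
    def __init__(self, seq_len: int, threshold: float = 0.01):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(seq_len))  # small trainable vector
        self.threshold = threshold

    def forward(self, attn_weights: torch.Tensor) -> torch.Tensor:
        # attn_weights: (batch, heads, seq, seq), softmax-normalized
        gated = attn_weights * self.scale            # reweight knowledge
        # shunting-inhibition-style gating: drop sub-threshold attention
        return torch.where(gated >= self.threshold, gated,
                           torch.zeros_like(gated))

attn = torch.softmax(torch.randn(2, 8, 16, 16), dim=-1)
gate = InhibitionGate(seq_len=16)
print(gate(attn).shape)  # torch.Size([2, 8, 16, 16])
```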
Affiliation(s)
- Cheng Kang
- Department of Cybernetics, Czech Technical University in Prague, Czech Republic
- Jindrich Prokop
- Department of Cybernetics, Czech Technical University in Prague, Czech Republic
- Lei Tong
- School of Informatics, University of Leicester, UK
- Huiyu Zhou
- School of Informatics, University of Leicester, UK
- Yong Hu
- Li Ka Shing Faculty of Medicine, University of Hong Kong, Hong Kong
- Daniel Novak
- Department of Cybernetics, Czech Technical University in Prague, Czech Republic
4. Yan J, Liu Q, Zhang M, Feng L, Ma D, Li H, Pan G. Efficient spiking neural network design via neural architecture search. Neural Netw 2024; 173:106172. [PMID: 38402808; DOI: 10.1016/j.neunet.2024.106172]
Abstract
Spiking neural networks (SNNs) are brain-inspired models that utilize discrete and sparse spikes to transmit information and are therefore energy-efficient. Recent advances in learning algorithms have greatly improved SNN performance due to the automation of feature engineering. While the choice of neural architecture plays a significant role in deep learning, current SNN architectures are mainly designed manually, which is a time-consuming and error-prone process. In this paper, we propose a spiking neural architecture search (NAS) method that can automatically find efficient SNNs. To tackle the long search times SNNs face when utilizing NAS, the proposed NAS encodes candidate architectures in a branchless spiking supernet, which significantly reduces the computational requirements of the search process. Considering that real-world tasks prefer efficient networks with optimal accuracy under a limited computational budget, we propose a Synaptic Operation (SynOps)-aware optimization to automatically find the computationally efficient subspace of the supernet. Experimental results show that, in less search time, our proposed NAS can find SNNs with higher accuracy and lower computational cost than state-of-the-art SNNs. We also conduct experiments to validate the search process and the trade-off between accuracy and computational cost.
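For readers unfamiliar with the SynOps metric the search optimizes for, a common accounting is sketched below: each input spike to a layer triggers as many synaptic operations as that layer's fan-out. The exact objective in the paper may differ.

```python
import torch
import torch.nn as nn

# Illustrative SynOps accounting for a fully connected spiking layer:
# SynOps ~ (number of input spikes) x (fan-out per input). This is a
# common SNN cost proxy, an assumption rather than the paper's exact
# SynOps-aware objective.
def synops_linear(spikes: torch.Tensor, layer: nn.Linear) -> int:
    # spikes: (timesteps, batch, in_features), binary {0, 1}
    fan_out = layer.out_features     # each input spike drives this many synapses
    return int(spikes.sum().item()) * fan_out

T, B, n_in, n_out = 4, 8, 128, 64
spikes = (torch.rand(T, B, n_in) < 0.1).float()   # ~10% firing rate
fc = nn.Linear(n_in, n_out)
print(f"SynOps for this batch: {synops_linear(spikes, fc)}")
```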
Affiliation(s)
- Jiaqi Yan
- Zhejiang University, Hangzhou, 310027, China
- Qianhui Liu
- National University of Singapore, 119077, Singapore
- Malu Zhang
- University of Electronic Science and Technology of China, Chengdu, 611731, China
- Lang Feng
- Zhejiang University, Hangzhou, 310027, China
- De Ma
- Zhejiang University, Hangzhou, 310027, China
- Haizhou Li
- National University of Singapore, 119077, Singapore; The Chinese University of Hong Kong, Shenzhen, 518172, China
- Gang Pan
- Zhejiang University, Hangzhou, 310027, China
5. Kim Y, Kahana A, Yin R, Li Y, Stinis P, Karniadakis GE, Panda P. Rethinking skip connections in Spiking Neural Networks with Time-To-First-Spike coding. Front Neurosci 2024; 18:1346805. [PMID: 38419664; PMCID: PMC10899405; DOI: 10.3389/fnins.2024.1346805]
Abstract
Time-To-First-Spike (TTFS) coding in Spiking Neural Networks (SNNs) offers significant advantages in terms of energy efficiency, closely mimicking the behavior of biological neurons. In this work, we delve into the role of skip connections, a widely used concept in Artificial Neural Networks (ANNs), within the domain of SNNs with TTFS coding. Our focus is on two distinct types of skip connection architectures: (1) addition-based skip connections and (2) concatenation-based skip connections. We find that addition-based skip connections introduce an additional delay in terms of spike timing. On the other hand, concatenation-based skip connections circumvent this delay but produce time gaps between the convolutional and skip-connection paths, thereby restricting the effective mixing of information from these two paths. To mitigate these issues, we propose a novel approach involving a learnable delay for skip connections in the concatenation-based architecture. This approach successfully bridges the time gap between the convolutional and skip branches, facilitating improved information mixing. We conduct experiments on public datasets including MNIST and Fashion-MNIST, illustrating the advantage of skip connections in TTFS coding architectures. Additionally, we demonstrate the applicability of TTFS coding beyond image recognition and extend it to scientific machine-learning tasks, broadening the potential uses of SNNs.
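A minimal sketch of TTFS encoding itself is given below, assuming a linear intensity-to-latency mapping; conventions vary across papers, so treat the details as assumptions.

```python
import torch

# Hedged sketch of Time-To-First-Spike (TTFS) encoding: each input
# intensity produces a single spike whose timing is earlier for stronger
# inputs. The linear latency mapping and the "zero input -> no spike"
# convention are assumptions.
def ttfs_encode(x: torch.Tensor, T: int) -> torch.Tensor:
    # x in [0, 1], shape (batch, features); returns (T, batch, features)
    x = x.clamp(0.0, 1.0)
    t_spike = ((1.0 - x) * (T - 1)).round().long()   # bright -> early
    spikes = torch.zeros(T, *x.shape)
    spikes.scatter_(0, t_spike.unsqueeze(0), 1.0)    # one spike per input
    spikes[:, x == 0.0] = 0.0                        # zero input: no spike
    return spikes

x = torch.tensor([[0.0, 0.25, 1.0]])
print(ttfs_encode(x, T=4).squeeze(1))
```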
Affiliation(s)
- Youngeun Kim
- Department of Electrical Engineering, Yale University, New Haven, CT, United States
- Adar Kahana
- Division of Applied Mathematics, Brown University, Providence, RI, United States
- Ruokai Yin
- Department of Electrical Engineering, Yale University, New Haven, CT, United States
- Yuhang Li
- Department of Electrical Engineering, Yale University, New Haven, CT, United States
- Panos Stinis
- Division of Applied Mathematics, Brown University, Providence, RI, United States
- Advanced Computing, Mathematics and Data Division, Pacific Northwest National Laboratory, Richland, WA, United States
- George Em Karniadakis
- Division of Applied Mathematics, Brown University, Providence, RI, United States
- Advanced Computing, Mathematics and Data Division, Pacific Northwest National Laboratory, Richland, WA, United States
- Priyadarshini Panda
- Department of Electrical Engineering, Yale University, New Haven, CT, United States
6. Bahrami MK, Nazari S. Digital design of a spatial-pow-STDP learning block with high accuracy utilizing pow CORDIC for large-scale image classifier spatiotemporal SNN. Sci Rep 2024; 14:3388. [PMID: 38337032; PMCID: PMC10858263; DOI: 10.1038/s41598-024-54043-7]
Abstract
Highly accurate, energy-efficient computing in machines with significant cognitive capabilities aims to enhance the accuracy and efficiency of bio-inspired spiking neural networks (SNNs). This paper addresses that objective by introducing a novel spatial power spike-timing-dependent plasticity (Spatial-Pow-STDP) learning rule as a high-accuracy digital block in a bio-inspired SNN model. Motivated by the demand for precise and accelerated computation that reduces high-cost resources in neural network applications, this paper presents a methodology based on COordinate Rotation DIgital Computer (CORDIC) definitions. The proposed CORDIC algorithms for the exponential (Exp CORDIC), natural logarithm (Ln CORDIC), and arbitrary power function (Pow CORDIC) are meticulously detailed and evaluated to ensure optimal acceleration and accuracy, showing average errors near 10^-9, 10^-6, and 10^-5 with 4, 4, and 6 iterations, respectively. The engineered architectures for the Exp, Ln, and Pow CORDIC implementations are illustrated and assessed, showcasing the efficiency achieved through high frequency and leading to a Spatial-Pow-STDP learning block design based on Pow CORDIC that facilitates efficient and accurate hardware computation with an average error of 6.93 × 10^-3 at 9 iterations. The proposed learning mechanism integrates this structure into a large-scale spatiotemporal SNN consisting of three layers with reduced hyper-parameters, enabling unsupervised training in an event-based paradigm using excitatory and inhibitory synapses. As a result, applying the developed methodology and equations in the computational SNN model for image classification reveals superior accuracy and convergence speed compared to existing spiking networks, achieving up to 97.5%, 97.6%, 93.4%, and 93% accuracy when trained on the MNIST, EMNIST digits, EMNIST letters, and CIFAR10 datasets with 6, 2, 2, and 6 training epochs, respectively.
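Because the paper's building blocks are CORDIC evaluations of exp, ln, and pow, a floating-point sketch of hyperbolic CORDIC in rotation mode (which yields exp(t) = cosh(t) + sinh(t)) may help. The fixed-point hardware data path and the paper's iteration counts are not reproduced; the repeat-iteration schedule shown is the textbook convergence requirement.

```python
import math

# Floating-point sketch of hyperbolic CORDIC (rotation mode) computing
# exp(t) = cosh(t) + sinh(t). Valid for roughly |t| < 1.1; iterations
# 4 and 13 are repeated, per the standard convergence requirement.
def cordic_exp(t: float, n_iter: int = 20) -> float:
    idx, i = [], 1
    while len(idx) < n_iter:
        idx.append(i)
        if i in (4, 13) and len(idx) < n_iter:
            idx.append(i)          # repeated iteration
        i += 1
    x, y, z = 1.0, 0.0, t
    for i in idx:
        d = 1.0 if z >= 0.0 else -1.0
        x, y = x + d * y * 2.0**-i, y + d * x * 2.0**-i
        z -= d * math.atanh(2.0**-i)
    # the micro-rotations shrink the vector by a known constant gain
    gain = math.prod(math.sqrt(1.0 - 2.0**(-2 * i)) for i in idx)
    return (x + y) / gain

print(cordic_exp(0.5), math.exp(0.5))  # both ~1.64872
```

Pow CORDIC then follows from the composition pow(x, y) = exp(y * ln(x)), with ln computed by the vectoring-mode counterpart.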
Affiliation(s)
- Soheila Nazari
- Faculty of Electrical Engineering, Shahid Beheshti University, Tehran, 1983969411, Iran
7. Shi C, Wang L, Gao H, Tian M. Learnable Leakage and Onset-Spiking Self-Attention in SNNs with Local Error Signals. Sensors (Basel) 2023; 23:9781. [PMID: 38139626; PMCID: PMC10747667; DOI: 10.3390/s23249781]
Abstract
Spiking neural networks (SNNs) have garnered significant attention due to their computational patterns resembling biological neural networks. However, when it comes to deep SNNs, how to focus on critical information effectively and achieve balanced feature transformation both temporally and spatially becomes a critical challenge. To address these challenges, our research centers on two aspects: structure and strategy. Structurally, we optimize the leaky integrate-and-fire (LIF) neuron by making the leakage coefficient learnable, thus making it better suited for contemporary applications. Furthermore, a self-attention mechanism is introduced at the initial time step to ensure improved focus and processing. Strategically, we propose a new normalization method anchored on the learnable leakage coefficient (LLC) and introduce a local loss-signal strategy to enhance the SNN's training efficiency and adaptability. The effectiveness and performance of our proposed methods are validated on the MNIST, FashionMNIST, and CIFAR-10 datasets. Experimental results show that our model achieves superior, high-accuracy performance in just eight time steps. In summary, our research provides fresh insights into the structure and strategy of SNNs, paving the way for their efficient and robust application in practical scenarios.
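A hedged sketch of the structural idea, a LIF neuron whose leakage coefficient is a trainable parameter, is given below; the sigmoid parameterization and soft reset are assumptions, and a surrogate gradient (omitted here) would be needed to train through the spike itself.

```python
import torch
import torch.nn as nn

# LIF neuron with a learnable leakage coefficient -- an illustrative
# sketch, not the paper's implementation. A surrogate gradient for the
# hard threshold is omitted for brevity.
class LearnableLeakLIF(nn.Module):
    def __init__(self, threshold: float = 1.0):
        super().__init__()
        self.leak_raw = nn.Parameter(torch.tensor(2.0))  # sigmoid(2.0) ~ 0.88
        self.threshold = threshold

    def forward(self, inputs: torch.Tensor) -> torch.Tensor:
        # inputs: (timesteps, batch, features) of synaptic currents
        leak = torch.sigmoid(self.leak_raw)   # constrain leak to (0, 1)
        v = torch.zeros_like(inputs[0])
        out = []
        for x_t in inputs:
            v = leak * v + x_t                     # leaky integration
            spike = (v >= self.threshold).float()  # fire
            v = v - spike * self.threshold         # soft reset
            out.append(spike)
        return torch.stack(out)

lif = LearnableLeakLIF()
print(lif(torch.rand(8, 2, 4)).shape)  # torch.Size([8, 2, 4])
```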
Affiliation(s)
- Cong Shi
- School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
- Key Laboratory of Dependable Service Computing in Cyber Physical Society, Ministry of Education, Chongqing University, Chongqing 400044, China
- Li Wang
- School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
- Haoran Gao
- School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
- Min Tian
- School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
8. Hu Y, Zheng Q, Jiang X, Pan G. Fast-SNN: Fast Spiking Neural Network by Converting Quantized ANN. IEEE Trans Pattern Anal Mach Intell 2023; 45:14546-14562. [PMID: 37721891; DOI: 10.1109/tpami.2023.3275769]
Abstract
Spiking neural networks (SNNs) have shown advantages in computation and energy efficiency over traditional artificial neural networks (ANNs) thanks to their event-driven representations. SNNs also replace the weight multiplications in ANNs with additions, which are more energy-efficient and less computationally intensive. However, it remains a challenge to train deep SNNs due to the discrete spiking function. A popular approach to circumvent this challenge is ANN-to-SNN conversion. However, due to quantization error and accumulating error, conversion often requires many time steps (high inference latency) to achieve high performance, which negates SNNs' advantages. To this end, this paper proposes Fast-SNN, which achieves high performance with low latency. We demonstrate the equivalent mapping between temporal quantization in SNNs and spatial quantization in ANNs, based on which the minimization of the quantization error is transferred to quantized ANN training. With the quantization error minimized, we show that the sequential error is the primary cause of the accumulating error, which is addressed by introducing a signed IF neuron model and a layer-wise fine-tuning mechanism. Our method achieves state-of-the-art performance and low latency on various computer vision tasks, including image classification, object detection, and semantic segmentation. Code is available at: https://github.com/yangfan-hu/Fast-SNN.
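The equivalence the method builds on can be illustrated numerically: a uniformly quantized ReLU with L levels matches the rate-decoded output of an integrate-and-fire neuron run for L steps. The snippet below shows only this correspondence, not the paper's signed IF neuron or fine-tuning.

```python
import torch

# Quantized ReLU vs. integrate-and-fire (IF) spike count: for inputs in
# [0, vmax] the two agree, which is the kind of mapping ANN-to-SNN
# conversion exploits. This sketch is an assumption-level illustration.
def quantized_relu(x: torch.Tensor, L: int, vmax: float = 1.0) -> torch.Tensor:
    return torch.clamp(torch.floor(x * L / vmax), 0, L) * vmax / L

def if_rate(x: torch.Tensor, L: int, vmax: float = 1.0) -> torch.Tensor:
    # IF neuron with threshold vmax/L, driven for L steps, soft reset.
    theta = vmax / L
    v = torch.zeros_like(x)
    count = torch.zeros_like(x)
    for _ in range(L):
        v = v + x / L                # constant input current per step
        spike = (v >= theta).float()
        v = v - spike * theta
        count = count + spike
    return count * theta             # rate-decoded output

x = torch.rand(5)
print(quantized_relu(x, L=4))
print(if_rate(x, L=4))               # matches the quantized ReLU
```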
9. Fang W, Chen Y, Ding J, Yu Z, Masquelier T, Chen D, Huang L, Zhou H, Li G, Tian Y. SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence. Sci Adv 2023; 9:eadi1480. [PMID: 37801497; PMCID: PMC10558124; DOI: 10.1126/sciadv.adi1480]
Abstract
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency by introducing neural dynamics and spike properties. As the emerging spiking deep learning paradigm attracts increasing interest, traditional programming frameworks cannot meet the demands of automatic differentiation, parallel computation acceleration, and the integrated processing and deployment of neuromorphic datasets. In this work, we present the SpikingJelly framework to address this dilemma. We contribute a full-stack toolkit for preprocessing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips. Compared to existing methods, the training of deep SNNs can be accelerated 11×, and the superior extensibility and flexibility of SpikingJelly enable users to accelerate custom models at low cost through multilevel inheritance and semiautomatic code generation. SpikingJelly paves the way for synthesizing truly energy-efficient SNN-based machine intelligence systems, which will enrich the ecology of neuromorphic computing.
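A minimal usage sketch is shown below, based on the activation_based API of recent SpikingJelly releases; consult the project documentation in case the module layout has changed.

```python
import torch
import torch.nn as nn
from spikingjelly.activation_based import neuron, surrogate, functional

# Minimal SpikingJelly sketch: a two-layer SNN with LIF neurons and a
# surrogate gradient, run for T time steps with rate decoding.
net = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 100),
    neuron.LIFNode(tau=2.0, surrogate_function=surrogate.ATan()),
    nn.Linear(100, 10),
    neuron.LIFNode(tau=2.0, surrogate_function=surrogate.ATan()),
)

x = torch.rand(16, 1, 28, 28)
T = 4
out = torch.stack([net(x) for _ in range(T)]).mean(0)  # firing-rate readout
functional.reset_net(net)   # clear membrane states between samples
print(out.shape)            # torch.Size([16, 10])
```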
Affiliation(s)
- Wei Fang
- School of Computer Science, Peking University, China
- Peng Cheng Laboratory, China
- School of Electronic and Computer Engineering, Shenzhen Graduate School, Peking University, China
- Yanqi Chen
- School of Computer Science, Peking University, China
- Peng Cheng Laboratory, China
- Jianhao Ding
- School of Computer Science, Peking University, China
- Zhaofei Yu
- Institute for Artificial Intelligence, Peking University, China
- Timothée Masquelier
- Centre de Recherche Cerveau et Cognition (CERCO), UMR5549 CNRS–Université Toulouse 3, France
- Ding Chen
- Peng Cheng Laboratory, China
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, China
- Liwei Huang
- School of Computer Science, Peking University, China
- Peng Cheng Laboratory, China
- Guoqi Li
- Institute of Automation, Chinese Academy of Sciences, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, China
- Yonghong Tian
- School of Computer Science, Peking University, China
- Peng Cheng Laboratory, China
- School of Electronic and Computer Engineering, Shenzhen Graduate School, Peking University, China
10. Yan Z, Zhou J, Wong WF. CQ+ Training: Minimizing Accuracy Loss in Conversion From Convolutional Neural Networks to Spiking Neural Networks. IEEE Trans Pattern Anal Mach Intell 2023; 45:11600-11611. [PMID: 37314899; DOI: 10.1109/tpami.2023.3286121]
Abstract
Spiking neural networks (SNNs) are attractive for energy-constrained use cases due to their binarized activation, which eliminates the need for weight multiplication. However, their lag in accuracy compared to traditional convolutional neural networks (CNNs) has limited their deployment. In this paper, we propose CQ+ training (extended "clamped" and "quantized" training), an SNN-compatible CNN training algorithm that achieves state-of-the-art accuracy on both the CIFAR-10 and CIFAR-100 datasets. Using a 7-layer modified VGG model (VGG-*), we achieved 95.06% accuracy on the CIFAR-10 dataset for the equivalent SNN. The accuracy drop from converting the CNN solution to an SNN is only 0.09% when using a time step of 600. To reduce the latency, we propose a parameterized input encoding method and a threshold training method, which further reduce the time window size to 64 while still achieving an accuracy of 94.09%. For the CIFAR-100 dataset, we achieved an accuracy of 77.27% using the same VGG-* structure and a time window of 500. We also demonstrate the transformation of popular CNNs, including ResNet (basic, bottleneck, and shortcut block), MobileNet v1/2, and DenseNet, to SNNs with near-zero conversion accuracy loss and a time window size smaller than 60. The framework was developed in PyTorch and is publicly available.
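The "clamped and quantized" activation constraint at the heart of CQ-style training can be sketched with a straight-through estimator; the exact quantizer, input encoding, and threshold training of CQ+ are not reproduced here.

```python
import torch

# Hedged sketch of a clamped-and-quantized activation: during CNN
# training, activations are clamped to [0, 1] and quantized to T levels
# so they match the discrete firing rates an SNN can express over a
# T-step window. Details of CQ+ itself are omitted.
class ClampQuantize(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, T):
        ctx.save_for_backward(x)
        return torch.round(torch.clamp(x, 0.0, 1.0) * T) / T

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # straight-through estimator, gated to the clamp range
        pass_through = ((x >= 0.0) & (x <= 1.0)).float()
        return grad_out * pass_through, None

x = torch.randn(5, requires_grad=True)
y = ClampQuantize.apply(x, 8)
y.sum().backward()
print(y, x.grad)
```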
11. Bitar A, Rosales R, Paulitsch M. Gradient-based feature-attribution explainability methods for spiking neural networks. Front Neurosci 2023; 17:1153999. [PMID: 37829721; PMCID: PMC10565802; DOI: 10.3389/fnins.2023.1153999]
Abstract
Introduction: Spiking neural networks (SNNs) are a model of computation that mimics the behavior of biological neurons. SNNs process event data (spikes) and operate more sparsely than artificial neural networks (ANNs), resulting in ultra-low latency and small power consumption. This paper aims to adapt and evaluate gradient-based explainability methods for SNNs, which were originally developed for conventional ANNs.
Methods: The adapted methods create input feature attribution maps for SNNs trained through backpropagation that process either event-based spiking data or real-valued data. The methods address the limitations of existing work on explainability for SNNs: poor scalability, restriction to convolutional layers, the need to train another model, and maps of activation values instead of true attribution scores. The adapted methods are evaluated on classification tasks for both real-valued and spiking data, and their accuracy is confirmed through perturbation experiments at the pixel and spike levels.
Results and discussion: The results reveal that gradient-based SNN attribution methods successfully identify highly contributing pixels and spikes with significantly less computation time than model-agnostic methods. Additionally, we observe that the chosen coding technique has a noticeable effect on which input features are most significant. These findings demonstrate the potential of gradient-based explainability methods for SNNs in improving our understanding of how these networks process information, and they contribute to the development of more efficient and accurate SNNs.
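The gradient x input recipe underlying such attributions can be sketched as follows; the tiny surrogate-gradient spiking layer is an illustrative stand-in, not one of the paper's models.

```python
import torch
import torch.nn as nn

# Gradient x input attribution through a spiking layer -- a hedged
# sketch. The triangular surrogate below is an assumption; any smooth
# surrogate would serve the same illustrative purpose.
class SpikeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 1.0).float()

    @staticmethod
    def backward(ctx, g):
        (v,) = ctx.saved_tensors
        return g * torch.clamp(1.0 - (v - 1.0).abs(), min=0.0)

net_in, net_out = nn.Linear(20, 30), nn.Linear(30, 10)

x = torch.rand(1, 20, requires_grad=True)
spikes = SpikeSTE.apply(net_in(x))
logits = net_out(spikes)
logits[0, logits.argmax()].backward()    # attribute the top class
attribution = (x.grad * x).detach()      # gradient x input map
print(attribution.shape)                 # torch.Size([1, 20])
```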
Affiliation(s)
- Ammar Bitar
- Intel Labs, Munich, Germany
- Department of Knowledge Engineering, Maastricht University, Maastricht, Netherlands
12. Pei Y, Xu C, Wu Z, Liu Y, Yang Y. ALBSNN: ultra-low latency adaptive local binary spiking neural network with accuracy loss estimator. Front Neurosci 2023; 17:1225871. [PMID: 37771337; PMCID: PMC10525310; DOI: 10.3389/fnins.2023.1225871]
Abstract
The spiking neural network (SNN) is a brain-inspired model with greater spatio-temporal information processing capacity and computational energy efficiency. However, with the increasing depth of SNNs, the memory cost of their weights has gradually attracted attention. In this study, we propose an ultra-low latency adaptive local binary spiking neural network (ALBSNN) with accuracy loss estimators, which dynamically selects the network layers to be binarized to ensure a balance between quantization degree and classification accuracy by evaluating the error caused by the binarized weights during network learning. At the same time, to accelerate training, the global average pooling (GAP) layer is introduced to replace the fully connected layers by combining convolution and pooling. Finally, to further reduce the error caused by the binary weights, we propose binary weight optimization (BWO), which updates the overall weights by directly adjusting the binary weights. This method further reduces the loss of a network that has reached its training bottleneck. The combination of these methods balances the network's quantization and recognition ability, enabling the network to maintain recognition capability equivalent to the full-precision network while reducing storage space by more than 20%. SNNs can thus use a small number of time steps to obtain better recognition accuracy. In the extreme case of using only one time step, we can still achieve 93.39%, 92.12%, and 69.55% testing accuracy on three traditional static datasets, Fashion-MNIST, CIFAR-10, and CIFAR-100, respectively. At the same time, we evaluate our method on the neuromorphic N-MNIST, CIFAR10-DVS, and IBM DVS128 Gesture datasets and achieve advanced accuracy for SNNs with binary weights. Our network has greater advantages in terms of storage resources and training time.
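Binary-weight training with a latent full-precision copy, the general technique ALBSNN builds on, can be sketched as below; the layer-selection estimator and the BWO update are not reproduced.

```python
import torch
import torch.nn as nn

# Hedged sketch of binary-weight training with a straight-through
# estimator: forward uses sign(W) scaled by the mean magnitude, backward
# updates the latent full-precision weights.
class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        return w.sign() * w.abs().mean()   # binary weight with a scale
    @staticmethod
    def backward(ctx, g):
        return g                           # straight-through gradient

class BinaryLinear(nn.Linear):
    def forward(self, x):
        return nn.functional.linear(x, BinarizeSTE.apply(self.weight), self.bias)

layer = BinaryLinear(16, 8)
out = layer(torch.rand(4, 16))
out.sum().backward()
print(layer.weight.grad.shape)  # gradients land on the latent weights
```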
Affiliation(s)
- Yijian Pei
- Guangzhou Institute of Technology, Xidian University, Xi'an, China
- Changqing Xu
- Guangzhou Institute of Technology, Xidian University, Xi'an, China
- School of Microelectronics, Xidian University, Xi'an, China
- Zili Wu
- School of Computer Science and Technology, Xidian University, Xi'an, China
- Yi Liu
- School of Microelectronics, Xidian University, Xi'an, China
- Yintang Yang
- School of Microelectronics, Xidian University, Xi'an, China
13. Shen J, Zhao Y, Liu JK, Wang Y. HybridSNN: Combining Bio-Machine Strengths by Boosting Adaptive Spiking Neural Networks. IEEE Trans Neural Netw Learn Syst 2023; 34:5841-5855. [PMID: 34890341; DOI: 10.1109/tnnls.2021.3131356]
Abstract
Spiking neural networks (SNNs), inspired by the neuronal network in the brain, provide biologically relevant, low-power models for information processing. Existing studies either mimic the learning mechanism of brain neural networks as closely as possible, for example, the temporally local learning rule of spike-timing-dependent plasticity (STDP), or apply the gradient descent rule to optimize a multilayer SNN with fixed structure. However, the learning rule used in the former is local, and how the real brain might perform global-scale credit assignment is still unclear; such shallow SNNs are robust, but deep SNNs are difficult to train globally and may not work well. For the latter, the non-differentiability of discrete spike trains leads to inaccuracy in gradient computation and difficulty in training effective deep SNNs. Hence, a hybrid solution is attractive: combine shallow SNNs with an appropriate machine learning (ML) technique that does not require gradient computation, providing both energy savings and high performance. In this article, we propose HybridSNN, a deep and strong SNN composed of multiple simple SNNs, in which data-driven greedy optimization is used to build powerful classifiers, avoiding the derivative problem of gradient descent. During training, the output features (spikes) of selected weak classifiers are fed back to the pool for subsequent weak SNN training and selection. This guarantees that HybridSNN not only represents a linear combination of simple SNNs, as the regular AdaBoost algorithm generates, but also contains neuron connection information, thus closely resembling the neural networks of a brain. HybridSNN has the benefits of both low power consumption in its weak units and overall data-driven optimizing strength. The network structure in HybridSNN is learned from training samples, which is more flexible and effective compared with existing fixed multilayer SNNs. Moreover, the topological tree of HybridSNN resembles the neural system in the brain, where pyramidal neurons receive thousands of synaptic input signals through their dendrites. Experimental results show that the proposed HybridSNN is highly competitive among state-of-the-art SNNs.
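The boosting backbone, reweighting samples and combining weak learners, can be sketched in a few lines; here decision stumps stand in for the paper's weak SNN classifiers, and the feedback of spike features into the candidate pool is not modeled.

```python
import numpy as np

# AdaBoost-style combination of weak learners -- a hedged sketch of the
# boosting idea behind HybridSNN, with decision stumps standing in for
# weak SNNs. Labels are in {-1, +1}.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sign(X[:, 0] + 0.5 * rng.normal(size=200))

def best_stump(X, y, w):
    # pick the single best sign-threshold feature under sample weights w
    best = None
    for j in range(X.shape[1]):
        for s in (+1, -1):
            err = w[s * np.sign(X[:, j]) != y].sum()
            if best is None or err < best[0]:
                best = (err, j, s)
    return best

w = np.full(len(y), 1.0 / len(y))
ensemble = []
for _ in range(10):
    err, j, s = best_stump(X, y, w)
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
    pred = s * np.sign(X[:, j])
    w *= np.exp(-alpha * y * pred)    # upweight misclassified samples
    w /= w.sum()
    ensemble.append((alpha, j, s))

F = sum(a * s * np.sign(X[:, j]) for a, j, s in ensemble)
print("train accuracy:", (np.sign(F) == y).mean())
```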
14. Fan X, Zhang H, Zhang Y. IDSNN: Towards High-Performance and Low-Latency SNN Training via Initialization and Distillation. Biomimetics (Basel) 2023; 8:375. [PMID: 37622980; PMCID: PMC10452895; DOI: 10.3390/biomimetics8040375]
Abstract
Spiking neural networks (SNNs) are widely recognized for their biomimetic and efficient computing features. They utilize spikes to encode and transmit information. Despite their many advantages, SNNs suffer from low accuracy and large inference latency, caused respectively by direct training and by conversion from artificial neural network (ANN) training methods. Aiming to address these limitations, we propose a novel training pipeline (called IDSNN) based on parameter initialization and knowledge distillation, using an ANN as both parameter source and teacher. IDSNN maximizes the knowledge extracted from ANNs and achieves competitive top-1 accuracy on CIFAR10 (94.22%) and CIFAR100 (75.41%) with low latency. More importantly, it can achieve 14× faster convergence than directly training SNNs under limited training resources, which demonstrates its practical value in applications.
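The distillation half of the pipeline follows the standard softened-logit formulation, sketched below; the temperature, mixing weight, and the ANN-initialization step are assumptions or omissions here.

```python
import torch
import torch.nn.functional as F

# Standard knowledge-distillation loss (softened-logit KL plus hard
# cross-entropy) -- the generic technique an ANN-teacher/SNN-student
# pipeline like IDSNN builds on. Hyperparameters are illustrative.
def distillation_loss(student_logits, teacher_logits, labels,
                      T: float = 4.0, alpha: float = 0.7):
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

s = torch.randn(8, 10, requires_grad=True)   # student (SNN) logits
t = torch.randn(8, 10)                       # teacher (ANN) logits
y = torch.randint(0, 10, (8,))
print(distillation_loss(s, t, y))
```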
Affiliation(s)
- Xiongfei Fan
- State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou 310027, China
- Hong Zhang
- State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou 310027, China
- Yu Zhang
- State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou 310027, China
- Key Laboratory of Collaborative Sensing and Autonomous Unmanned Systems of Zhejiang Province, Hangzhou 310027, China
15. Zhang H, Fan X, Zhang Y. Energy-Efficient Spiking Segmenter for Frame and Event-Based Images. Biomimetics (Basel) 2023; 8:356. [PMID: 37622961; PMCID: PMC10452323; DOI: 10.3390/biomimetics8040356]
Abstract
Semantic segmentation predicts dense pixel-wise semantic labels, which is crucial for autonomous environment perception systems. For applications on mobile devices, current research focuses on energy-efficient segmenters for both frame- and event-based cameras. However, there is currently no artificial neural network (ANN) that can perform efficient segmentation on both types of images. This paper introduces the spiking neural network (SNN), a bionic model that is energy-efficient when implemented on neuromorphic hardware, and develops a Spiking Context Guided Network (Spiking CGNet) with substantially lower energy consumption and comparable performance for both frame- and event-based images. First, this paper proposes a spiking context guided block that can extract local features and context information with spike computations. On this basis, the directly trained SCGNet-S and SCGNet-L are established for both frame- and event-based images. Our method is verified on the frame-based dataset Cityscapes and the event-based dataset DDD17. On the Cityscapes dataset, SCGNet-S achieves comparable results to ANN CGNet with 4.85× the energy efficiency. On the DDD17 dataset, Spiking CGNet outperforms other spiking segmenters by a large margin.
Affiliation(s)
- Hong Zhang
- State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou 310027, China
- Xiongfei Fan
- State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou 310027, China
- Yu Zhang
- State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou 310027, China
- Key Laboratory of Collaborative Sensing and Autonomous Unmanned Systems of Zhejiang Province, Hangzhou 310027, China
16. Zhang H, Li Y, He B, Fan X, Wang Y, Zhang Y. Direct training high-performance spiking neural networks for object recognition and detection. Front Neurosci 2023; 17:1229951. [PMID: 37614339; PMCID: PMC10442545; DOI: 10.3389/fnins.2023.1229951]
Abstract
Introduction: The spiking neural network (SNN) is a bionic model that is energy-efficient when implemented on neuromorphic hardware. The non-differentiability of the spiking signals and the complicated neural dynamics make direct training of high-performance SNNs a great challenge. There are numerous crucial issues to explore for the deployment of directly trained SNNs, such as gradient vanishing and explosion, spiking signal decoding, and applications in upstream tasks.
Methods: To address gradient vanishing, we introduce a binary selection gate into the basic residual block and propose spiking gate (SG) ResNet to implement residual learning in SNNs. We propose two appropriate representations of the gate signal and verify that SG ResNet can overcome gradient vanishing or explosion by analyzing the gradient backpropagation. For spiking signal decoding, a better decoding scheme than rate coding is achieved by our attention spike decoder (ASD), which dynamically assigns weights to spiking signals along the temporal, channel, and spatial dimensions.
Results and discussion: The SG ResNet and ASD modules are evaluated on multiple object recognition datasets, including the static ImageNet, CIFAR-100, and CIFAR-10 datasets and the neuromorphic DVS-CIFAR10 dataset. Superior accuracy is demonstrated with a tiny simulation time step of four: specifically, 94.52% top-1 accuracy on CIFAR-10 and 75.64% top-1 accuracy on CIFAR-100. Spiking RetinaNet is proposed using SG ResNet as the backbone and the ASD module for information decoding, as the first directly trained hybrid SNN-ANN detector for RGB images. Spiking RetinaNet with an SG ResNet34 backbone achieves an mAP of 0.296 on the object detection dataset MSCOCO.
Affiliation(s)
- Hong Zhang
- State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou, China
- Yang Li
- State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou, China
- Bin He
- State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou, China
- Xiongfei Fan
- State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou, China
- Yue Wang
- State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou, China
- Yu Zhang
- State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou, China
- Key Laboratory of Collaborative Sensing and Autonomous Unmanned Systems of Zhejiang Province, Hangzhou, China
17. Kim Y, Li Y, Moitra A, Yin R, Panda P. Sharing leaky-integrate-and-fire neurons for memory-efficient spiking neural networks. Front Neurosci 2023; 17:1230002. [PMID: 37583415; PMCID: PMC10423932; DOI: 10.3389/fnins.2023.1230002]
Abstract
Spiking Neural Networks (SNNs) have gained increasing attention as energy-efficient neural networks owing to their binary and asynchronous computation. However, their non-linear activation, the Leaky-Integrate-and-Fire (LIF) neuron, requires additional memory to store a membrane voltage that captures the temporal dynamics of spikes. Although the memory cost for LIF neurons increases significantly as the input dimension grows, techniques to reduce this memory have not been explored so far. To address this, we propose a simple and effective solution, EfficientLIF-Net, which shares LIF neurons across different layers and channels. Our EfficientLIF-Net achieves accuracy comparable to standard SNNs while bringing up to ~4.3× forward memory efficiency and ~21.9× backward memory efficiency for LIF neurons. We conduct experiments on various datasets including CIFAR10, CIFAR100, TinyImageNet, ImageNet-100, and N-Caltech101. Furthermore, we show that our approach also offers advantages on Human Activity Recognition (HAR) datasets, which heavily rely on temporal information. The code has been released at https://github.com/Intelligent-Computing-Lab-Yale/EfficientLIF-Net.
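The sharing idea can be sketched as a single neuron module, hence a single membrane-state buffer, reused by multiple layers; shapes and reset details here are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hedged sketch of cross-layer LIF sharing: one neuron module, and thus
# one membrane buffer, serves several layers instead of each layer
# keeping its own state.
class SharedLIF(nn.Module):
    def __init__(self, threshold: float = 1.0, leak: float = 0.9):
        super().__init__()
        self.threshold, self.leak = threshold, leak
        self.v = None                      # single shared membrane buffer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.v is None or self.v.shape != x.shape:
            self.v = torch.zeros_like(x)
        self.v = self.leak * self.v + x
        spike = (self.v >= self.threshold).float()
        self.v = self.v - spike * self.threshold   # soft reset
        return spike

shared = SharedLIF()
fc1, fc2 = nn.Linear(32, 32), nn.Linear(32, 32)
x = torch.rand(4, 32)
h = shared(fc1(x))       # both layers reuse the same neuron state
out = shared(fc2(h))
print(out.shape)
```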
Affiliation(s)
- Youngeun Kim
- Department of Electrical Engineering, Yale University, New Haven, CT, United States
18. Wang X, Yang J, Kasabov NK. Integrating Spatial and Temporal Information for Violent Activity Detection from Video Using Deep Spiking Neural Networks. Sensors (Basel) 2023; 23:4532. [PMID: 37177737; PMCID: PMC10181528; DOI: 10.3390/s23094532]
Abstract
Increasing violence in workplaces such as hospitals seriously challenges public safety. However, it is time- and labor-consuming to visually monitor masses of video data in real time. Therefore, automatic and timely violent activity detection from videos is vital, especially for small monitoring systems. This paper proposes a two-stream deep learning architecture for video violent activity detection named SpikeConvFlowNet. First, RGB frames and their optical flow data are used as inputs for each stream to extract the spatiotemporal features of videos. After that, the spatiotemporal features from the two streams are concatenated and fed to the classifier for the final decision. Each stream utilizes a supervised neural network consisting of multiple convolutional spiking and pooling layers. Convolutional layers are used to extract high-quality spatial features within frames, and spiking neurons can efficiently extract temporal features across frames by remembering historical information. The spiking neuron-based optical flow can strengthen the capability of extracting critical motion information. This method combines their advantages to enhance the performance and efficiency for recognizing violent actions. The experimental results on public datasets demonstrate that, compared with the latest methods, this approach greatly reduces parameters and achieves higher inference efficiency with limited accuracy loss. It is a potential solution for applications in embedded devices that provide low computing power but require fast processing speeds.
Affiliation(s)
- Xiang Wang
- Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai 200400, China
- Jie Yang
- Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai 200400, China
- Nikola K Kasabov
- Knowledge Engineering and Discovery Research Institute, Auckland University of Technology, Auckland 1020, New Zealand
19. Ma C, Yan R, Yu Z, Yu Q. Deep Spike Learning With Local Classifiers. IEEE Trans Cybern 2023; 53:3363-3375. [PMID: 35867374; DOI: 10.1109/tcyb.2022.3188015]
Abstract
Backpropagation has been successfully generalized to optimize deep spiking neural networks (SNNs), where, nevertheless, gradients need to be propagated back through all layers, resulting in massive consumption of computing resources and an obstacle to the parallelization of training. A biologically motivated scheme of local learning provides an alternative to efficiently train deep networks but often suffers low accuracy on practical tasks. Thus, how to train deep SNNs with the local learning scheme to achieve both efficient and accurate performance remains an important challenge. In this study, we focus on a supervised local learning scheme where each layer is independently optimized with an auxiliary classifier. Accordingly, we first propose a spike-based efficient local learning rule that considers only the direct dependencies in the current time. We then propose two variants that additionally incorporate temporal dependencies through a backward and a forward process, respectively. The effectiveness and performance of our proposed methods are extensively evaluated on six mainstream datasets. Experimental results show that our methods can successfully scale up to large networks and substantially outperform spike-based local learning baselines on all studied benchmarks. Our results also reveal that gradients with temporal dependencies are essential for high performance on temporal tasks, while they have negligible effects on rate-based tasks. Our work is significant as it brings the performance of spike-based local learning to a new level, with the computational benefits retained.
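The supervised local learning scheme itself, one auxiliary classifier per block with gradients confined to that block, can be sketched as follows; the spike-based temporal-dependency variants are not reproduced.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hedged sketch of supervised local learning: each block is trained by
# its own auxiliary classifier, and inputs are detached at block
# boundaries so no gradient crosses blocks. ReLU blocks stand in for
# the paper's spiking layers.
blocks = nn.ModuleList([nn.Sequential(nn.Linear(20, 32), nn.ReLU()),
                        nn.Sequential(nn.Linear(32, 32), nn.ReLU())])
heads = nn.ModuleList([nn.Linear(32, 10), nn.Linear(32, 10)])
params = list(blocks.parameters()) + list(heads.parameters())
opt = torch.optim.SGD(params, lr=0.1)

x = torch.rand(8, 20)
y = torch.randint(0, 10, (8,))
opt.zero_grad()
h = x
for block, head in zip(blocks, heads):
    h = block(h.detach())                 # gradients stay inside this block
    F.cross_entropy(head(h), y).backward()
opt.step()
print("per-block local losses backpropagated independently")
```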
20. Amiri M, Jafari AH, Makkiabadi B, Nazari S, Van Hulle MM. A novel un-supervised burst time dependent plasticity learning approach for biologically pattern recognition networks. Inf Sci (N Y) 2023. [DOI: 10.1016/j.ins.2022.11.162]
21. Research Progress of spiking neural network in image classification: a review. Appl Intell 2023. [DOI: 10.1007/s10489-023-04553-0]
22. Shirsavar SR, Vahabie AH, Dehaqani MRA. Models Developed for Spiking Neural Networks. MethodsX 2023; 10:102157. [PMID: 37077894; PMCID: PMC10106956; DOI: 10.1016/j.mex.2023.102157]
Abstract
The emergence of deep neural networks (DNNs) has drawn enormous attention to artificial neural networks (ANNs) once again. They have become the state-of-the-art models and have won various machine learning challenges. Although these networks are inspired by the brain, they lack biological plausibility and differ structurally from the brain. Spiking neural networks (SNNs) have been around for a long time and have been investigated to understand the dynamics of the brain. However, their application to real-world, complicated machine learning tasks was limited. Recently, they have shown great potential in solving such tasks. Due to their energy efficiency and temporal dynamics, there is much promise in their future development. In this work, we review the structures and performances of SNNs on image classification tasks. The comparisons illustrate that these networks show great capability on more complicated problems. Furthermore, the simple learning rules developed for SNNs, such as STDP and R-STDP, can be a potential alternative to the backpropagation algorithm used in DNNs.
- Different building blocks of spiking neural networks are explained in this work.
- Developed models for SNNs are introduced based on their characteristics and building blocks.
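As a concrete reference for the learning rules the review discusses, here is the standard pair-based STDP window; the constants are illustrative.

```python
import numpy as np

# Pair-based STDP: pre-before-post spike pairs potentiate a synapse,
# post-before-pre pairs depress it, with exponentially decaying windows.
A_plus, A_minus = 0.01, 0.012
tau_plus, tau_minus = 20.0, 20.0   # ms

def stdp_dw(t_pre: float, t_post: float) -> float:
    dt = t_post - t_pre
    if dt > 0:    # pre fired first -> long-term potentiation
        return A_plus * np.exp(-dt / tau_plus)
    else:         # post fired first -> long-term depression
        return -A_minus * np.exp(dt / tau_minus)

for dt in (-40, -10, 10, 40):
    print(f"dt = {dt:+} ms -> dw = {stdp_dw(0.0, dt):+.5f}")
```

R-STDP additionally multiplies such weight changes by a scalar reward signal, turning the rule into a simple form of reinforcement learning.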
23. BPLC + NOSO: backpropagation of errors based on latency code with neurons that only spike once at most. Complex Intell Syst 2023. [DOI: 10.1007/s40747-023-00983-y]
Abstract
For mathematical completeness, we propose an error-backpropagation algorithm based on latency code (BPLC) with spiking neurons conforming to the spike-response model but allowed to spike once at most (NOSOs). BPLC is based on gradients derived without approximation, unlike previous temporal-code-based error-backpropagation algorithms. The latency code uses the spiking latency (the period from the first input spike to spiking) as a measure of neuronal activity. To support the latency code, we introduce a minimum-latency pooling layer that passes only the spike of minimum latency for a given patch. We also introduce a symmetric dual threshold for spiking (i) to avoid the dead-neuron issue and (ii) to confine the potential distribution to the range between the symmetric thresholds. Given that the number of spikes (rather than timesteps) is the major cause of inference delay for digital neuromorphic hardware, NOSONets trained using BPLC likely reduce inference delay significantly. To assess the feasibility of BPLC + NOSO, we trained CNN-based NOSONets on Fashion-MNIST and CIFAR-10. The classification accuracy on CIFAR-10 exceeds the state-of-the-art result from an SNN of the same depth and width by approximately 2%. Additionally, the number of spikes for inference is reduced significantly (by approximately one order of magnitude), highlighting a significant reduction in inference delay.
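Minimum-latency pooling has a compact formulation on latency maps, where smaller values mean earlier spikes and "no spike" is +inf; the sketch below expresses it via max-pooling on negated latencies and is an assumption about implementation, not the authors' code.

```python
import torch
import torch.nn.functional as F

# Minimum-latency pooling sketch: within each pooling patch, only the
# earliest spike (smallest latency) survives. "No spike" is +inf.
def min_latency_pool(latency: torch.Tensor, k: int = 2) -> torch.Tensor:
    # latency: (batch, channels, H, W); smaller = earlier spike
    return -F.max_pool2d(-latency, kernel_size=k)

lat = torch.full((1, 1, 4, 4), float("inf"))
lat[0, 0, 0, 1] = 3.0   # a spike at t=3 in the top-left patch
lat[0, 0, 2, 2] = 1.0   # a spike at t=1 in the bottom-right patch
print(min_latency_pool(lat))  # 2x2 map: earliest latency per patch
```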
24. Lin R, Dai B, Zhao Y, Chen G, Lu H. Constrain Bias Addition to Train Low-Latency Spiking Neural Networks. Brain Sci 2023; 13:319. [PMID: 36831862; PMCID: PMC9954654; DOI: 10.3390/brainsci13020319]
Abstract
In recent years, a third-generation neural network, the spiking neural network, has received a plethora of attention in the broad areas of machine learning and artificial intelligence. In this paper, a novel differential-based encoding method is proposed, and new spike-based learning rules for backpropagation are derived by constraining the addition of bias voltage in spiking neurons. The proposed differential encoding method can effectively exploit the correlation within the data and improve the performance of the proposed model, and the new learning rule can take full advantage of the modulating effect of the bias on the spike-firing threshold. We experiment with the proposed model on the environmental sound dataset RWCP and the image datasets MNIST and Fashion-MNIST, and we assign various conditions to test its learning ability and robustness. The experimental results demonstrate that the proposed model achieves near-optimal results with a smaller time step, maintaining the highest accuracy and robustness with less training data. On the MNIST dataset, compared with an original spiking neural network with the same network structure, we achieved a 0.39% accuracy improvement.
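A differential (change-based) encoder in the spirit described, spikes marking significant increases or decreases between successive samples, can be sketched as follows; the paper's exact encoder and bias-constrained learning rule differ in detail.

```python
import math
import torch

# Hedged sketch of differential spike encoding: on/off spike trains mark
# where the signal rises or falls by more than a step threshold,
# exploiting correlation between successive samples.
def differential_encode(x: torch.Tensor, step: float = 0.1):
    # x: (timesteps,) 1-D signal; returns on/off spike trains
    diff = x[1:] - x[:-1]
    on = (diff > step).float()     # significant increase
    off = (diff < -step).float()   # significant decrease
    return on, off

signal = torch.sin(torch.linspace(0.0, 2 * math.pi * 2, 50))
on, off = differential_encode(signal)
print(int(on.sum()), int(off.sum()))
```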
Affiliation(s)
- Ranxi Lin
- Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China
- University of Chinese Academy of Sciences, Beijing 100089, China
- Benzhe Dai
- Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China
- University of Chinese Academy of Sciences, Beijing 100089, China
- Yingkai Zhao
- Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China
- University of Chinese Academy of Sciences, Beijing 100089, China
- Gang Chen
- Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China
- Semiconductor Neural Network Intelligent Perception and Computing Technology Beijing Key Laboratory, Beijing 100083, China
- Huaxiang Lu
- Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China
- University of Chinese Academy of Sciences, Beijing 100089, China
- Semiconductor Neural Network Intelligent Perception and Computing Technology Beijing Key Laboratory, Beijing 100083, China
- College of Microelectronics, University of Chinese Academy of Sciences, Beijing 100049, China
- Materials and Optoelectronics Research Center, University of Chinese Academy of Sciences, Beijing 200031, China
25. Bio-inspired Active Learning method in spiking neural network. Knowl Based Syst 2023. [DOI: 10.1016/j.knosys.2022.110193]
26. S3NN: Time step reduction of spiking surrogate gradients for training energy efficient single-step spiking neural networks. Neural Netw 2023; 159:208-219. [PMID: 36657226; DOI: 10.1016/j.neunet.2022.12.008]
Abstract
As the scale of neural networks increases, techniques that enable them to run with low computational cost and high energy efficiency are required. Such demands have produced various efficient neural network paradigms, such as spiking neural networks (SNNs) and binary neural networks (BNNs). However, these carry persistent drawbacks, such as degraded inference accuracy and latency. To solve these problems, we propose the single-step spiking neural network (S3NN), an energy-efficient neural network with low computational cost and high precision. The proposed S3NN processes the information between hidden layers by spikes, as SNNs do. Nevertheless, it has no temporal dimension, so, like BNNs, it incurs no latency within the training and inference phases. Thus, the proposed S3NN has a lower computational cost than SNNs, which require time-series processing. However, S3NN cannot adopt naïve backpropagation algorithms due to the non-differentiability of spikes. We deduce a suitable neuron model by reducing the surrogate gradient for multi-time-step SNNs to a single time step. We experimentally demonstrate that the obtained surrogate gradient allows S3NN to be trained appropriately. We also show that the proposed S3NN can achieve accuracy comparable to full-precision networks while being highly energy-efficient.
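A single-step spiking layer with a surrogate derivative can be sketched as below; the rectangular surrogate window is an illustrative choice, whereas S3NN derives its surrogate by reducing the multi-time-step formulation to one step.

```python
import torch
import torch.nn as nn

# Single-step spiking layer sketch: a Heaviside spike forward, a
# surrogate derivative backward, and no temporal dimension at all.
class SingleStepSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 0.0).float()

    @staticmethod
    def backward(ctx, g):
        (v,) = ctx.saved_tensors
        return g * (v.abs() <= 0.5).float()  # rectangular surrogate window

fc = nn.Linear(784, 256)
x = torch.rand(8, 784)
spikes = SingleStepSpike.apply(fc(x) - 1.0)  # threshold at 1.0
spikes.sum().backward()                      # trains via the surrogate
print(spikes.mean())                         # layer firing rate
```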
27. Gao H, He J, Wang H, Wang T, Zhong Z, Yu J, Wang Y, Tian M, Shi C. High-accuracy deep ANN-to-SNN conversion using quantization-aware training framework and calcium-gated bipolar leaky integrate and fire neuron. Front Neurosci 2023; 17:1141701. [PMID: 36968504; PMCID: PMC10030499; DOI: 10.3389/fnins.2023.1141701]
Abstract
Spiking neural networks (SNNs) have attracted intensive attention due to their efficient, event-driven computing paradigm. Among SNN training methods, ANN-to-SNN conversion is usually regarded as achieving state-of-the-art recognition accuracy. However, many existing ANN-to-SNN techniques impose lengthy post-conversion steps, such as threshold balancing and weight renormalization, to compensate for the inherent behavioral discrepancy between artificial and spiking neurons. In addition, they require a long temporal window to encode and process as many spikes as possible to better approximate the real-valued ANN neurons, leading to high inference latency. To overcome these challenges, we propose a calcium-gated bipolar leaky integrate-and-fire (Ca-LIF) spiking neuron model to better approximate the functions of the ReLU neurons widely adopted in ANNs. We also propose a quantization-aware training (QAT)-based framework leveraging an off-the-shelf QAT toolkit for easy ANN-to-SNN conversion, which directly exports learned ANN weights to SNNs, requiring no post-conversion processing. We benchmarked our method on typical deep network structures with time-step lengths varying from 8 to 128. Compared to other research, our converted SNNs report competitively high accuracy while enjoying relatively short inference time steps.
Collapse
Affiliation(s)
- Haoran Gao
- The School of Microelectronics and Communication Engineering, Chongqing University, Chongqing, China
| | - Junxian He
- The School of Microelectronics and Communication Engineering, Chongqing University, Chongqing, China
| | - Haibing Wang
- The School of Microelectronics and Communication Engineering, Chongqing University, Chongqing, China
| | - Tengxiao Wang
- The School of Microelectronics and Communication Engineering, Chongqing University, Chongqing, China
| | - Zhengqing Zhong
- The School of Microelectronics and Communication Engineering, Chongqing University, Chongqing, China
| | - Jianyi Yu
- The School of Microelectronics and Communication Engineering, Chongqing University, Chongqing, China
| | - Ying Wang
- State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
| | - Min Tian
- The School of Microelectronics and Communication Engineering, Chongqing University, Chongqing, China
| | - Cong Shi
- The School of Microelectronics and Communication Engineering, Chongqing University, Chongqing, China
- *Correspondence: Cong Shi
| |
Collapse
|
28
|
Wu J, Chua Y, Zhang M, Li G, Li H, Tan KC. A Tandem Learning Rule for Effective Training and Rapid Inference of Deep Spiking Neural Networks. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2023; 34:446-460. [PMID: 34288879 DOI: 10.1109/tnnls.2021.3095724] [Citation(s) in RCA: 14] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Spiking neural networks (SNNs) represent the most prominent biologically inspired computing model for neuromorphic computing (NC) architectures. However, due to the non-differentiable nature of spiking neuronal functions, the standard error backpropagation algorithm is not directly applicable to SNNs. In this work, we propose a tandem learning framework that consists of an SNN and an artificial neural network (ANN) coupled through weight sharing. The ANN is an auxiliary structure that facilitates the error backpropagation for the training of the SNN at the spike-train level. To this end, we consider the spike count as the discrete neural representation in the SNN and design an ANN neuronal activation function that can effectively approximate the spike count of the coupled SNN. The proposed tandem learning rule demonstrates competitive pattern recognition and regression capabilities on both conventional frame-based and event-based vision datasets, with at least an order of magnitude lower inference time and total synaptic operations than other state-of-the-art SNN implementations. Therefore, the proposed tandem learning rule offers a novel solution to training efficient, low-latency, and high-accuracy deep SNNs with low computing resources.
Collapse
|
29
|
Guo W, Fouda ME, Eltawil AM, Salama KN. Efficient training of spiking neural networks with temporally-truncated local backpropagation through time. Front Neurosci 2023; 17:1047008. [PMID: 37090791 PMCID: PMC10117667 DOI: 10.3389/fnins.2023.1047008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2022] [Accepted: 03/20/2023] [Indexed: 04/25/2023] Open
Abstract
Directly training spiking neural networks (SNNs) remains challenging due to their complex neural dynamics and the intrinsic non-differentiability of firing functions. The well-known backpropagation through time (BPTT) algorithm proposed to train SNNs suffers from a large memory footprint and prohibits backward and update unlocking, making it impossible to exploit the potential of locally supervised training methods. This work proposes an efficient, direct training algorithm for SNNs that integrates a locally supervised training method with a temporally truncated BPTT algorithm. The proposed algorithm exploits both temporal and spatial locality in BPTT and contributes a significant reduction in computational cost, including GPU memory utilization, main memory access, and arithmetic operations. We thoroughly explore the design space concerning temporal truncation length and local training block size and benchmark their impact on the classification accuracy of different networks running different types of tasks. The results reveal that temporal truncation has a negative effect on accuracy when classifying frame-based datasets but leads to improved accuracy on event-based datasets. In spite of the resulting information loss, local training is capable of alleviating overfitting. The combined effect of temporal truncation and local training can slow the drop in accuracy and even improve it. In addition, training deep SNN models such as AlexNet to classify the CIFAR10-DVS dataset yields a 7.26% increase in accuracy, an 89.94% reduction in GPU memory, a 10.79% reduction in memory access, and a 99.64% reduction in MAC operations compared to standard end-to-end BPTT. Thus, the proposed method shows high potential to enable fast and energy-efficient on-chip training for real-time learning at the edge.
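A minimal PyTorch sketch of the temporal-truncation half of this scheme (the local, layer-block training is omitted): the hidden state is detached every K steps, so gradients flow at most K steps back in time. The recurrence, shapes, and hyperparameters are illustrative assumptions, not the paper's network.

```python
import torch

# A toy leaky-integrator recurrence standing in for an SNN layer; all
# names, shapes, and hyperparameters are illustrative assumptions.
W = torch.randn(16, 16, requires_grad=True)
decay, K = 0.9, 4                        # K = temporal truncation length

def step(state, x):
    return decay * state + torch.tanh(x @ W)

x_seq = torch.randn(12, 8, 16)           # (time, batch, features)
target = torch.zeros(8, 16)
state = torch.zeros(8, 16)
opt = torch.optim.SGD([W], lr=1e-2)

for t in range(x_seq.shape[0]):
    state = step(state, x_seq[t])
    if (t + 1) % K == 0:                 # end of a truncated window
        loss = ((state - target) ** 2).mean()
        opt.zero_grad()
        loss.backward()                  # gradients flow at most K steps back
        opt.step()
        state = state.detach()           # cut the graph: truncation point
```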
Collapse
Affiliation(s)
- Wenzhe Guo
- Sensors Lab, Advanced Membranes and Porous Materials Center (AMPMC), Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
- Communication and Computing Systems Lab, Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
| | - Mohammed E. Fouda
- Center for Embedded & Cyber-Physical Systems, University of California, Irvine, Irvine, CA, United States
| | - Ahmed M. Eltawil
- Communication and Computing Systems Lab, Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
- Center for Embedded & Cyber-Physical Systems, University of California, Irvine, Irvine, CA, United States
| | - Khaled Nabil Salama
- Sensors Lab, Advanced Membranes and Porous Materials Center (AMPMC), Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
- *Correspondence: Khaled Nabil Salama
| |
Collapse
|
30
|
Amiri M, Jafari AH, Makkiabadi B, Nazari S. A Novel Unsupervised Spatial–Temporal Learning Mechanism in a Bio-inspired Spiking Neural Network. Cognit Comput 2022. [DOI: 10.1007/s12559-022-10097-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
|
31
|
Zhang T, Jia S, Cheng X, Xu B. Tuning Convolutional Spiking Neural Network With Biologically Plausible Reward Propagation. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2022; 33:7621-7631. [PMID: 34125691 DOI: 10.1109/tnnls.2021.3085966] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Spiking neural networks (SNNs) contain more biologically realistic structures and biologically inspired learning principles than standard artificial neural networks (ANNs). SNNs are considered the third generation of ANNs, offering robust computation at a low computational cost. The neurons in SNNs are non-differentiable, carrying decaying historical states and generating event-based spikes once their states reach the firing threshold. These dynamic characteristics make SNNs difficult to train directly with standard backpropagation (BP), which is also considered biologically implausible. In this article, a biologically plausible reward propagation (BRP) algorithm is proposed and applied to an SNN architecture with both spiking-convolution (with both 1-D and 2-D convolutional kernels) and fully connected layers. Unlike standard BP, which propagates error signals from postsynaptic to presynaptic neurons layer by layer, BRP propagates target labels, instead of errors, directly from the output layer to all pre-hidden layers. This approach is more consistent with the top-down reward-guided learning observed in the cortical columns of the neocortex. Synaptic modifications with only local gradient differences are induced with pseudo-BP, which might also be replaced with spike-timing-dependent plasticity (STDP). The performance of the proposed BRP-SNN is verified on spatial (MNIST and Cifar-10) and temporal (TIDigits and DvsGesture) tasks, where the SNN using BRP reaches accuracy similar to other state-of-the-art (SOTA) BP-based SNNs while saving over 50% of the computational cost relative to ANNs. We believe that introducing biologically plausible learning rules into the training of biologically realistic SNNs will offer further hints and inspiration toward a better understanding of the biological system's intelligent nature.
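In the spirit of propagating targets rather than errors, here is a hedged NumPy sketch of one update step in which a one-hot label is sent to each hidden layer through fixed random projections. The exact update rule, shapes, and learning rate are illustrative assumptions, not the BRP rule itself.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0)

# Tiny fully connected net; all shapes, rates, and the update rule are
# illustrative assumptions in the spirit of reward/target propagation.
W1 = rng.normal(0, 0.1, (20, 30))
W2 = rng.normal(0, 0.1, (30, 30))
W3 = rng.normal(0, 0.1, (30, 10))
B1 = rng.normal(0, 0.1, (10, 30))   # fixed projection: label -> layer 1
B2 = rng.normal(0, 0.1, (10, 30))   # fixed projection: label -> layer 2

x, y = rng.normal(size=20), np.eye(10)[3]   # one sample, one-hot label
h1 = relu(x @ W1)
h2 = relu(h1 @ W2)
out = h2 @ W3

lr = 0.01
# The target label goes directly to every hidden layer through the fixed
# projections, instead of backpropagating errors layer by layer.
W1 += lr * np.outer(x, (y @ B1) * (h1 > 0))
W2 += lr * np.outer(h1, (y @ B2) * (h2 > 0))
W3 -= lr * np.outer(h2, out - y)            # output layer: plain delta rule
print("updated; output error:", np.abs(out - y).mean())
```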
Collapse
|
32
|
Lele AS, Fang Y, Anwar A, Raychowdhury A. Bio-mimetic high-speed target localization with fused frame and event vision for edge application. Front Neurosci 2022; 16:1010302. [PMID: 36507348 PMCID: PMC9732385 DOI: 10.3389/fnins.2022.1010302] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2022] [Accepted: 10/24/2022] [Indexed: 11/26/2022] Open
Abstract
Evolution has honed predatory skills in the natural world, where localizing and intercepting fast-moving prey is required. The current generation of robotic systems mimics these biological systems using deep learning. High-speed processing of camera frames using convolutional neural networks (CNNs) (the frame pipeline) quickly becomes resource-limited on constrained aerial edge robots. Adding more compute resources also eventually caps throughput at the frame rate of the camera, as traditional frame-only systems fail to capture the detailed temporal dynamics of the environment. Bio-inspired event cameras and spiking neural networks (SNNs) provide an asynchronous sensor-processor pair (the event pipeline) that captures the continuous temporal detail of the scene at high speed but lags in accuracy. In this work, we propose a target localization system combining event-camera- and SNN-based high-speed target estimation with frame-based camera and CNN-driven reliable object detection, fusing the complementary spatio-temporal strengths of the event and frame pipelines. One of our main contributions is the design of an SNN filter that borrows from the neural mechanism for ego-motion cancellation in houseflies: it fuses vestibular sensing with vision to cancel the activity corresponding to the predator's self-motion. We also integrate the neuro-inspired multi-pipeline processing with the task-optimized multi-neuronal pathway structure found in primates and insects. The system is validated to outperform CNN-only processing using prey-predator drone simulations in realistic 3D virtual environments. The system is then demonstrated in a real-world multi-drone setup with emulated event data. Subsequently, we use sensory data recorded from a multi-camera and inertial measurement unit (IMU) assembly to show the desired behavior while tolerating realistic noise in the vision and IMU sensors. We analyze the design space to identify optimal parameters for the spiking neurons and CNN models and to check their effect on the performance metrics of the fused system. Finally, we map the throughput-controlling SNN and fusion network onto an edge-compatible Zynq-7000 FPGA to show a potential 264 outputs per second even under constrained resource availability. This work may open new research directions by coupling multiple sensing and processing modalities inspired by discoveries in neuroscience to break fundamental trade-offs in frame-based computer vision.
Collapse
Affiliation(s)
- Ashwin Sanjay Lele
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, United States
| | - Yan Fang
- Department of Electrical and Computer Engineering, Kennesaw State University, Marietta, GA, United States
| | - Aqeel Anwar
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, United States
| | - Arijit Raychowdhury
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, United States
| |
Collapse
|
33
|
Wu J, Xu C, Han X, Zhou D, Zhang M, Li H, Tan KC. Progressive Tandem Learning for Pattern Recognition With Deep Spiking Neural Networks. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2022; 44:7824-7840. [PMID: 34546918 DOI: 10.1109/tpami.2021.3114196] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Spiking neural networks (SNNs) have shown clear advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency, due to their event-driven nature and sparse communication. However, the training of deep SNNs is not straightforward. In this paper, we propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition, referred to as progressive tandem learning. By studying the equivalence between ANNs and SNNs in the discrete representation space, a primitive network conversion method is introduced that takes full advantage of spike count to approximate the activation value of ANN neurons. To compensate for the approximation errors arising from the primitive network conversion, we further introduce a layer-wise learning method with an adaptive training scheduler to fine-tune the network weights. The progressive tandem learning framework also allows hardware constraints, such as limited weight precision and fan-in connections, to be progressively imposed during training. The SNNs thus trained have demonstrated remarkable classification and regression capabilities on large-scale object recognition, image reconstruction, and speech separation tasks, while requiring at least an order of magnitude less inference time and fewer synaptic operations than other state-of-the-art SNN implementations. It therefore opens up a myriad of opportunities for pervasive mobile and embedded devices with a limited power budget.
Collapse
|
34
|
Wu Z, Zhang H, Lin Y, Li G, Wang M, Tang Y. LIAF-Net: Leaky Integrate and Analog Fire Network for Lightweight and Efficient Spatiotemporal Information Processing. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2022; 33:6249-6262. [PMID: 33979292 DOI: 10.1109/tnnls.2021.3073016] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Spiking neural networks (SNNs) based on the leaky integrate and fire (LIF) model have been applied to energy-efficient temporal and spatiotemporal processing tasks. Thanks to its bioplausible neuronal dynamics and simplicity, the LIF-SNN benefits from event-driven processing but usually suffers reduced performance. This may be because, in a LIF-SNN, neurons transmit information only via spikes. To address this issue, we propose a leaky integrate and analog fire (LIAF) neuron model, so that analog values can be transmitted among neurons, and build a deep network termed LIAF-Net on it for efficient spatiotemporal processing. In the temporal domain, LIAF follows the traditional LIF dynamics to maintain its temporal processing capability. In the spatial domain, LIAF is able to integrate spatial information through convolutional or fully connected integration. As a spatiotemporal layer, LIAF can also be used jointly with traditional artificial neural network (ANN) layers. In addition, the resulting network can be trained directly with backpropagation through time (BPTT), which avoids the performance loss caused by ANN-to-SNN conversion. Experimental results indicate that LIAF-Net achieves performance comparable to the gated recurrent unit (GRU) and long short-term memory (LSTM) on bAbI question answering (QA) tasks and achieves state-of-the-art performance on spatiotemporal dynamic vision sensor (DVS) datasets, including MNIST-DVS, CIFAR10-DVS, and DVS128 Gesture, with far fewer synaptic weights and much less computational overhead than traditional networks built with LSTM, GRU, convolutional LSTM (ConvLSTM), or 3-D convolution (Conv3D). Compared with the traditional LIF-SNN, LIAF-Net also shows dramatic accuracy gains in all these experiments. In conclusion, LIAF-Net provides a framework combining the advantages of both ANNs and SNNs for lightweight and efficient spatiotemporal information processing.
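A minimal sketch contrasting a LIF step with a LIAF-style step, assuming a simple decay factor and a ReLU readout of the membrane potential: both share the same leaky temporal dynamics, but LIAF passes an analog value to the next layer instead of a binary spike.

```python
import numpy as np

def lif_step(v, x, decay=0.9, theta=1.0):
    # Standard LIF: leak, integrate, emit a binary spike, hard reset.
    v = decay * v + x
    s = 1.0 if v >= theta else 0.0
    return (0.0 if s else v), s

def liaf_step(v, x, decay=0.9):
    # LIAF (sketch): same leaky temporal dynamics, but the value passed
    # on is an analog activation of the membrane potential.
    v = decay * v + x
    return v, max(v, 0.0)           # e.g., ReLU of the potential

v_lif = v_liaf = 0.0
for x in [0.3, 0.5, 0.8, 0.1]:
    v_lif, s = lif_step(v_lif, x)
    v_liaf, a = liaf_step(v_liaf, x)
    print(f"input {x:.1f}  LIF spike {s:.0f}  LIAF output {a:.2f}")
```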
Collapse
|
35
|
Spike-Based Approximate Backpropagation Algorithm of Brain-Inspired Deep SNN for Sonar Target Classification. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:1633946. [PMID: 36313052 PMCID: PMC9613403 DOI: 10.1155/2022/1633946] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/01/2021] [Revised: 05/22/2022] [Accepted: 08/08/2022] [Indexed: 11/30/2022]
Abstract
With the development of neuromorphic computing, brain-inspired spiking neural networks (SNNs) have attracted increasing attention because of their ultralow energy consumption and high-performance spatiotemporal information processing. Due to the discontinuity of the spiking neuronal activation function, directly training brain-inspired deep SNNs remains difficult, so SNNs have not yet shown performance comparable to that of artificial neural networks. For this reason, a spike-based approximate backpropagation (SABP) algorithm and a general brain-inspired SNN framework are proposed in this paper. The combination of the two can be used for end-to-end direct training of brain-inspired deep SNNs. Experiments show that, compared with other spike-based methods for directly training SNNs, the classification accuracy of this method is close to the best results on the MNIST and CIFAR-10 datasets and achieves the best classification accuracy on small-sample sonar image target classification (SITC) datasets. Further analysis shows that, compared with artificial neural networks, our brain-inspired SNN has great advantages in computational complexity and energy consumption for sonar target classification.
Collapse
|
36
|
Hodassman S, Meir Y, Kisos K, Ben-Noam I, Tugendhaft Y, Goldental A, Vardi R, Kanter I. Brain inspired neuronal silencing mechanism to enable reliable sequence identification. Sci Rep 2022; 12:16003. [PMID: 36175466 PMCID: PMC9523036 DOI: 10.1038/s41598-022-20337-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2022] [Accepted: 09/12/2022] [Indexed: 11/25/2022] Open
Abstract
Real-time sequence identification is a core use-case of artificial neural networks (ANNs), ranging from recognizing temporal events to identifying verification codes. Existing methods apply recurrent neural networks, which suffer from training difficulties; however, performing this function without feedback loops remains a challenge. Here, we present an experimental neuronal long-term plasticity mechanism for high-precision feedforward sequence identification networks (ID-nets) without feedback loops, wherein input objects have a given order and timing. This mechanism temporarily silences neurons following their recent spiking activity. Therefore, transitory objects act on different dynamically created feedforward sub-networks. ID-nets are demonstrated to reliably identify 10 handwritten digit sequences, and are generalized to deep convolutional ANNs with continuous activation nodes trained on image sequences. Counterintuitively, their classification performance, even with a limited number of training examples, is high for sequences but low for individual objects. ID-nets are also implemented for writer-dependent recognition, and suggested as a cryptographic tool for encrypted authentication. The presented mechanism opens new horizons for advanced ANN algorithms.
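A hedged NumPy sketch of the silencing idea, assuming a fixed silencing window and toy thresholds: a neuron that spikes is removed from the pool for a few steps, so successive inputs are processed by different dynamically created sub-networks.

```python
import numpy as np

def run_with_silencing(inputs, theta=1.0, silence_steps=3):
    # After a neuron spikes it is silenced for `silence_steps` steps, so
    # successive objects excite different feedforward sub-networks.
    # All values here are illustrative, not the paper's parameters.
    n = inputs.shape[1]
    v = np.zeros(n)
    silenced_until = np.zeros(n, dtype=int)
    spikes = []
    for t, x in enumerate(inputs):
        active = t >= silenced_until            # mask of usable neurons
        v = np.where(active, v + x, v)
        s = active & (v >= theta)
        silenced_until[s] = t + 1 + silence_steps
        v[s] = 0.0
        spikes.append(s.astype(int))
    return np.array(spikes)

rng = np.random.default_rng(1)
print(run_with_silencing(rng.uniform(0, 0.8, size=(8, 5))))
```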
Collapse
Affiliation(s)
- Shiri Hodassman
- Department of Physics, Bar-Ilan University, 52900, Ramat-Gan, Israel
| | - Yuval Meir
- Department of Physics, Bar-Ilan University, 52900, Ramat-Gan, Israel
| | - Karin Kisos
- Department of Physics, Bar-Ilan University, 52900, Ramat-Gan, Israel
| | - Itamar Ben-Noam
- Department of Physics, Bar-Ilan University, 52900, Ramat-Gan, Israel
| | - Yael Tugendhaft
- Department of Physics, Bar-Ilan University, 52900, Ramat-Gan, Israel
| | - Amir Goldental
- Department of Physics, Bar-Ilan University, 52900, Ramat-Gan, Israel
| | - Roni Vardi
- Gonda Interdisciplinary Brain Research Center, Bar-Ilan University, 52900, Ramat-Gan, Israel
| | - Ido Kanter
- Department of Physics, Bar-Ilan University, 52900, Ramat-Gan, Israel
- Gonda Interdisciplinary Brain Research Center, Bar-Ilan University, 52900, Ramat-Gan, Israel
| |
Collapse
|
37
|
Mutascu M. CO2 emissions in the USA: new insights based on ANN approach. ENVIRONMENTAL SCIENCE AND POLLUTION RESEARCH INTERNATIONAL 2022; 29:68332-68356. [PMID: 35536471 PMCID: PMC9088728 DOI: 10.1007/s11356-022-20615-1] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/22/2022] [Accepted: 04/30/2022] [Indexed: 06/14/2023]
Abstract
The paper's main aim is to forecast carbon dioxide (CO2) emissions in the USA and their related components, analysing the contribution of each component to the total CO2 volume. The empirical ground is a mix of non-linear tools, combining the artificial neural network (ANN) parametric method with a vector autoregressive (VAR) estimator. The ANN includes one hidden layer with 20 neurons, with forecasting based on economic growth and net trade effects combined with different types of renewable energy consumption. The accuracy of the estimations for the 14 targeted categories of CO2 emissions is ensured by 4360 observations, with 10 types of inputs over 1984M01-2020M04. The ANN seems to offer superior forecasting accuracy compared with widely used autoregressive methods, such as the VAR model, but seems weak at capturing 'spike' forms in the output. The main findings show that, although economic growth and net trade contribute substantially to the targeted outputs, the more prominent drivers are wind, solar, and total biomass energy consumption. Therefore, CO2 emissions can be better controlled through non-polluting capacity, in parallel with the use of wind, solar, and total biomass energy. The tool predicts CO2 emissions well during pandemic crises, making it a good instrument for policy decisions. Energy consumption from waste, hydroelectric power, and renewable geothermal systems seems to contribute only modestly to CO2 prediction; this underlines their unclear current status, given their collateral environmental damage and high investment costs. The paper contributes to the literature in several ways. It is one of the first works to forecast CO2 emissions in the USA with a mixed ANN and VAR approach, considering an extended palette of inputs to predict the volume of total CO2 emissions as well as its components. As a novelty, the inputs combine both economic and environmental determinants. Not least, the estimations are performed over a large span at monthly frequency.
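As a rough illustration of the described architecture (one hidden layer, 20 neurons), the following sketch fits an MLP regressor on synthetic stand-ins for the paper's monthly inputs; the data, shapes, and scoring are placeholders, not the study's series.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Synthetic stand-ins for monthly inputs (growth, net trade, renewable
# consumption, ...) and the CO2 target; the real data are not included.
X = rng.normal(size=(436, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=436)

# One hidden layer with 20 neurons, mirroring the architecture described.
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(X[:400], y[:400])
print("held-out R^2:", model.score(X[400:], y[400:]))
```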
Collapse
Affiliation(s)
- Mihai Mutascu
- Zeppelin University in Friedrichshafen, Am Seemooser Horn 20, 88045, Friedrichshafen, Germany.
- Faculty of Economics and Business Administration, West University of Timisoara, 16, J. H. Pestalozzi St., 300115, Timisoara, Romania.
- LEO (Laboratoire d'Economie d'Orléans) and Labex Voltaire, CNRS FRE 2014, University of Orléans, Faculté de Droit d'Economie et de Gestion, Rue de Blois - B.P. 6739, 45067, Orléans, France.
| |
Collapse
|
38
|
Relaxation LIF: A gradient-based spiking neuron for direct training deep spiking neural networks. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.06.036] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
|
39
|
A heuristic approach to the hyperparameters in training spiking neural networks using spike-timing-dependent plasticity. Neural Comput Appl 2022. [DOI: 10.1007/s00521-021-06824-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
The third type of neural network, the spiking neural network, was developed to represent neuronal activity in living organisms more accurately. Spiking neural networks have many different parameters that can be difficult to adjust manually for the classification problem at hand. The selection of coefficient values in the network can be treated as an optimization problem, and a practical method for selecting them automatically can decrease the time needed to develop such a model. In this paper, we propose a heuristic approach to analyzing and selecting coefficients based on the idea of collaborative work: different coefficients are analyzed in parallel, and the best, or averaged, ones are chosen. This formulation of the optimization problem allows the selection of all variables that can significantly affect the convergence of the accuracy. Our proposal was tested using network simulators and popular databases to indicate the possibilities of the described approach. Five different heuristic algorithms were tested, and the best results were reached by the Cuckoo Search Algorithm, the Grasshopper Optimization Algorithm, and the Polar Bears Algorithm.
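A minimal sketch of the collaborative selection idea, assuming a synthetic fitness function in place of an actual STDP-trained SNN: candidate hyperparameter vectors are evaluated in parallel, and the next round samples around a blend of the best candidate and the population average. None of the specific heuristics named in the abstract are reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

def evaluate(params):
    # Stand-in for "train an STDP SNN with these hyperparameters and
    # return accuracy"; a synthetic fitness peaking at (0.01, 20.0).
    lr, tau = params
    return -((lr - 0.01) ** 2 * 1e4 + (tau - 20.0) ** 2 * 1e-2)

# Parallel candidates cooperating: each round, resample around a blend
# of the best candidate and the population average.
pop = rng.uniform([1e-4, 5.0], [0.1, 50.0], size=(10, 2))
for _ in range(30):
    fitness = np.array([evaluate(p) for p in pop])
    best, mean = pop[fitness.argmax()], pop.mean(axis=0)
    center = 0.5 * (best + mean)
    pop = center + rng.normal(scale=[0.005, 2.0], size=(10, 2))
print("selected hyperparameters:", center)
```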
Collapse
|
40
|
Straub J. Automating the design and development of gradient descent trained expert system networks. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.109465] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
41
|
Makarov VA, Lobov SA, Shchanikov S, Mikhaylov A, Kazantsev VB. Toward Reflective Spiking Neural Networks Exploiting Memristive Devices. Front Comput Neurosci 2022; 16:859874. [PMID: 35782090 PMCID: PMC9243340 DOI: 10.3389/fncom.2022.859874] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2022] [Accepted: 05/10/2022] [Indexed: 11/29/2022] Open
Abstract
The design of modern convolutional artificial neural networks (ANNs) composed of formal neurons copies the architecture of the visual cortex. Signals proceed through a hierarchy where receptive fields become increasingly complex and coding becomes sparse. Nowadays, ANNs outperform humans in controlled pattern recognition tasks yet remain far behind in cognition. In part, this is due to limited knowledge about the higher echelons of the brain hierarchy, where neurons actively generate predictions about what will happen next, i.e., where information processing jumps from reflex to reflection. In this study, we forecast that spiking neural networks (SNNs) can achieve the next qualitative leap. Reflective SNNs may take advantage of their intrinsic dynamics and mimic complex, non-reflex-based brain actions. They also enable a significant reduction in energy consumption. However, the training of SNNs is a challenging problem that strongly limits their deployment. We then briefly overview new insights provided by the concept of a high-dimensional brain, which has been put forward to explain the potential power of single neurons in higher brain stations and deep SNN layers. Finally, we discuss the prospect of implementing neural networks in memristive systems. Such systems can densely pack 2D or 3D arrays of plastic synaptic contacts on a chip, directly processing analog information; memristive devices are thus a good candidate for implementing in-memory and in-sensor computing. Memristive SNNs could then diverge from the development of ANNs and build their own niche of cognitive, or reflective, computation.
Collapse
Affiliation(s)
- Valeri A. Makarov
- Instituto de Matemática Interdisciplinar, Universidad Complutense de Madrid, Madrid, Spain
- Department of Neurotechnologies, Research Institute of Physics and Technology, Laboratory of Stochastic Multistable Systems, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- *Correspondence: Valeri A. Makarov
| | - Sergey A. Lobov
- Department of Neurotechnologies, Research Institute of Physics and Technology, Laboratory of Stochastic Multistable Systems, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Neuroscience and Cognitive Technology Laboratory, Center for Technologies in Robotics and Mechatronics Components, Innopolis University, Innopolis, Russia
- Center For Neurotechnology and Machine Learning, Immanuel Kant Baltic Federal University, Kaliningrad, Russia
| | - Sergey Shchanikov
- Department of Neurotechnologies, Research Institute of Physics and Technology, Laboratory of Stochastic Multistable Systems, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Department of Information Technologies, Vladimir State University, Vladimir, Russia
| | - Alexey Mikhaylov
- Department of Neurotechnologies, Research Institute of Physics and Technology, Laboratory of Stochastic Multistable Systems, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
| | - Viktor B. Kazantsev
- Department of Neurotechnologies, Research Institute of Physics and Technology, Laboratory of Stochastic Multistable Systems, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Neuroscience and Cognitive Technology Laboratory, Center for Technologies in Robotics and Mechatronics Components, Innopolis University, Innopolis, Russia
- Center For Neurotechnology and Machine Learning, Immanuel Kant Baltic Federal University, Kaliningrad, Russia
| |
Collapse
|
42
|
Shen G, Zhao D, Zeng Y. Backpropagation with biologically plausible spatiotemporal adjustment for training deep spiking neural networks. PATTERNS (NEW YORK, N.Y.) 2022; 3:100522. [PMID: 35755868 PMCID: PMC9214320 DOI: 10.1016/j.patter.2022.100522] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/24/2022] [Revised: 03/29/2022] [Accepted: 05/06/2022] [Indexed: 11/21/2022]
Abstract
The spiking neural network (SNN) mimics the information-processing operations of the human brain, but directly applying backpropagation to SNN training still leaves a performance gap compared with traditional deep neural networks. To address the problem, we first propose a biologically plausible spatial adjustment that rethinks the relationship between membrane potential and spikes and realizes a reasonable adjustment of gradients across different time steps; it precisely controls the backpropagation of the error along the spatial dimension. Second, we propose a biologically plausible temporal adjustment that makes the error propagate across spikes in the temporal dimension, which overcomes the temporal-dependency problem of traditional spiking neurons within a single spike period. We have verified our algorithm on several datasets, and the experimental results show that our algorithm greatly reduces network latency and energy consumption while also improving network performance.
Collapse
Affiliation(s)
- Guobin Shen
- Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing 100190, China
| | - Dongcheng Zhao
- Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
| | - Yi Zeng
- Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100190, China
| |
Collapse
|
43
|
Effective Conversion of a Convolutional Neural Network into a Spiking Neural Network for Image Recognition Tasks. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12115749] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
Due to their energy efficiency, spiking neural networks (SNNs) have gradually been considered an alternative to convolutional neural networks (CNNs) in various machine learning tasks. In image recognition tasks, leveraging the superior capability of CNNs, CNN–SNN conversion is considered one of the most successful approaches to training SNNs. However, previous works assume that a rather long inference period, called the inference latency, is allowed, trading inference latency against accuracy. One of the main reasons for this phenomenon is the difficulty of determining a proper firing threshold for spiking neurons; the threshold-determination procedure is called threshold balancing in the CNN–SNN conversion approach. This paper proposes a CNN–SNN conversion method with a new threshold balancing technique that yields converted SNN models with good accuracy even at low latency. The proposed method builds the SNN models with soft-reset IF spiking neurons, and the threshold balancing technique estimates the thresholds for spiking neurons from the maximum input current in a layerwise and channelwise manner. The experimental results show that our converted SNN models attain even higher accuracy than the corresponding trained CNN model on the MNIST dataset at low latency. In addition, on the Fashion-MNIST and CIFAR-10 datasets, our converted SNNs show less conversion loss than other methods at low latencies. The proposed method can be beneficial for deploying efficient SNN models for recognition tasks on resource-limited systems, because inference latency is strongly associated with energy consumption.
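A hedged sketch of the core of such a threshold balancing step, assuming calibration activations are available as a batch x channels x H x W array: one threshold per channel is taken from the maximum input current observed. The exact statistic and any scaling used in the paper may differ.

```python
import numpy as np

def channelwise_thresholds(activations):
    # Estimate per-channel firing thresholds for soft-reset IF neurons
    # from the maximum input current on a calibration batch
    # (shape: batch x channels x H x W); a sketch of the idea only.
    return activations.max(axis=(0, 2, 3))   # one threshold per channel

rng = np.random.default_rng(0)
calib = np.abs(rng.normal(size=(32, 8, 4, 4)))   # fake pre-activation currents
theta = channelwise_thresholds(calib)
print(theta.shape, theta[:3])
```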
Collapse
|
44
|
Yang Y, Ren J, Duan F. The Spiking Rates Inspired Encoder and Decoder for Spiking Neural Networks: An Illustration of Hand Gesture Recognition. Cognit Comput 2022. [DOI: 10.1007/s12559-022-10027-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
45
|
Wu D, Yi X, Huang X. A Little Energy Goes a Long Way: Build an Energy-Efficient, Accurate Spiking Neural Network From Convolutional Neural Network. Front Neurosci 2022; 16:759900. [PMID: 35692427 PMCID: PMC9179229 DOI: 10.3389/fnins.2022.759900] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2021] [Accepted: 02/28/2022] [Indexed: 11/13/2022] Open
Abstract
This article follows a recent trend of developing energy-efficient spiking neural networks (SNNs) by taking advantage of the sophisticated training regime of a convolutional neural network (CNN) and converting a well-trained CNN to an SNN. We observe that existing CNN-to-SNN conversion algorithms may leave a certain amount of residual current in the spiking neurons of the SNN, and that this residual current can cause significant accuracy loss when the inference time is short. To deal with this, we propose a unified framework that equalizes the output of the convolutional or dense layer in the CNN with the accumulated current in the SNN, and maximally aligns the spiking rate of a neuron with its corresponding charge. This framework enables us to design a novel explicit current control (ECC) method for CNN-to-SNN conversion that considers multiple objectives at the same time during the conversion, including accuracy, latency, and energy efficiency. We conduct an extensive set of experiments on different neural network architectures, e.g., VGG, ResNet, and DenseNet, to evaluate the resulting SNNs. The benchmark datasets include not only image datasets such as CIFAR-10/100 and ImageNet but also Dynamic Vision Sensor (DVS) datasets such as DVS-CIFAR-10. The experimental results show the superior performance of our ECC method over the state of the art.
Collapse
Affiliation(s)
- Dengyu Wu
- Department of Computer Science, University of Liverpool, Liverpool, United Kingdom
- *Correspondence: Dengyu Wu
| | - Xinping Yi
- Department of Electrical Engineering and Electronics, University of Liverpool, Liverpool, United Kingdom
| | - Xiaowei Huang
- Department of Computer Science, University of Liverpool, Liverpool, United Kingdom
| |
Collapse
|
46
|
Wang C, Lee C, Roy K. Noise resilient leaky integrate-and-fire neurons based on multi-domain spintronic devices. Sci Rep 2022; 12:8361. [PMID: 35589802 PMCID: PMC9120456 DOI: 10.1038/s41598-022-12555-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2022] [Accepted: 05/12/2022] [Indexed: 12/04/2022] Open
Abstract
The capability to emulate neural functionalities efficiently in hardware is crucial for building neuromorphic computing systems. While various types of neuro-mimetic devices have been investigated, it remains challenging to provide a compact device that can emulate spiking neurons. In this work, we propose a non-volatile spin-based device for efficiently emulating a leaky integrate-and-fire neuron. By incorporating an exchange-coupled composite free layer in spin-orbit torque magnetic tunnel junctions, multi-domain magnetization switching dynamics are exploited to realize gradual accumulation of membrane potential for a leaky integrate-and-fire neuron with a compact footprint. The proposed device offers significantly improved scalability compared with previously proposed spin-based neuro-mimetic implementations while exhibiting high energy efficiency and good controllability. Moreover, the proposed neuron device exhibits a varying leak constant and a varying membrane resistance that both depend on the magnitude of the membrane potential. Interestingly, we demonstrate that such device-inspired dynamic behaviors can be incorporated to construct more robust spiking neural network models, which show improved resilience against various noise-injection scenarios. The proposed spintronic neuro-mimetic devices may open up exciting opportunities for the development of efficient and robust neuro-inspired computational hardware.
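A toy NumPy sketch of a LIF neuron whose leak strength depends on the membrane potential, loosely mimicking the device behavior described; the leak law and all constants are illustrative assumptions, not device measurements.

```python
import numpy as np

def state_dependent_lif(x_seq, theta=1.0):
    # LIF variant where the leak grows with the membrane potential,
    # loosely mimicking the reported device dynamics (toy parameters).
    v, spikes = 0.0, []
    for x in x_seq:
        leak = 0.05 + 0.2 * abs(v) / theta   # leak depends on |v|
        v = v - leak * v + x
        if v >= theta:
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

rng = np.random.default_rng(3)
print(state_dependent_lif(rng.uniform(0, 0.5, size=20)))
```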
Collapse
Affiliation(s)
- Cheng Wang
- Department of Electrical and Computer Engineering, Purdue University, West Lafayette, 47907, IN, USA.
| | - Chankyu Lee
- Department of Electrical and Computer Engineering, Purdue University, West Lafayette, 47907, IN, USA
| | - Kaushik Roy
- Department of Electrical and Computer Engineering, Purdue University, West Lafayette, 47907, IN, USA
| |
Collapse
|
47
|
Lu S, Sengupta A. Neuroevolution Guided Hybrid Spiking Neural Network Training. Front Neurosci 2022; 16:838523. [PMID: 35546880 PMCID: PMC9082355 DOI: 10.3389/fnins.2022.838523] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2021] [Accepted: 03/11/2022] [Indexed: 11/16/2022] Open
Abstract
Neuromorphic computing algorithms based on spiking neural networks (SNNs) are evolving into a disruptive technology driving machine learning research. The overarching goal of this work is to develop a structured algorithmic framework for SNN training that optimizes unique SNN-specific properties, such as the neuron spiking threshold, using neuroevolution as a feedback strategy. We provide extensive results for this hybrid bio-inspired training strategy and show that such a feedback-based learning approach leads to explainable neuromorphic systems that adapt to the specific underlying application. Our analysis reveals 53.8, 28.8, and 28.2% latency improvements for the neuroevolution-based SNN training strategy on the CIFAR-10, CIFAR-100, and ImageNet datasets, respectively, in contrast to state-of-the-art conversion-based approaches. The proposed algorithm can be easily extended to other application domains, such as image classification in the presence of adversarial attacks, where 43.2 and 27.9% latency improvements were observed on the CIFAR-10 and CIFAR-100 datasets, respectively.
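A minimal sketch of neuroevolution as a feedback strategy over per-layer spiking thresholds, assuming a synthetic fitness function in place of actually training and evaluating an SNN; population size, mutation scale, and selection rule are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def fitness(thresholds):
    # Placeholder for "train/evaluate an SNN with these per-layer spiking
    # thresholds"; a synthetic optimum at 1.0 for every layer.
    return -np.sum((thresholds - 1.0) ** 2)

# Simple (mu + lambda)-style evolution of per-layer thresholds as feedback.
pop = rng.uniform(0.2, 3.0, size=(12, 4))       # 12 candidates, 4 layers
for gen in range(40):
    scores = np.array([fitness(t) for t in pop])
    parents = pop[np.argsort(scores)[-4:]]      # keep the best 4
    children = np.repeat(parents, 2, axis=0) + rng.normal(0, 0.1, (8, 4))
    pop = np.vstack([parents, children])
print("evolved thresholds:", pop[np.argmax([fitness(t) for t in pop])])
```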
Collapse
Affiliation(s)
- Sen Lu
- School of Electrical Engineering and Computer Science, The Pennsylvania State University, University Park, PA, United States
| | - Abhronil Sengupta
- School of Electrical Engineering and Computer Science, The Pennsylvania State University, University Park, PA, United States
| |
Collapse
|
48
|
Kim D, Chakraborty B, She X, Lee E, Kang B, Mukhopadhyay S. MONETA: A Processing-In-Memory-Based Hardware Platform for the Hybrid Convolutional Spiking Neural Network With Online Learning. Front Neurosci 2022; 16:775457. [PMID: 35478844 PMCID: PMC9037635 DOI: 10.3389/fnins.2022.775457] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2021] [Accepted: 03/07/2022] [Indexed: 11/24/2022] Open
Abstract
We present a processing-in-memory (PIM)-based hardware platform, referred to as MONETA, for on-chip acceleration of inference and learning in hybrid convolutional spiking neural networks. MONETA uses 8T static random-access memory (SRAM)-based PIM cores for vector-matrix multiplication (VMM), augmented with spike-timing-dependent plasticity (STDP)-based weight updates. An SNN-focused data flow is presented to minimize data movement in MONETA while ensuring learning accuracy, and MONETA supports online and on-chip training on the PIM architecture. The STDP-trained convolutional neural network within the SNN (ConvSNN) with the proposed data flow, 4-bit input precision, and 8-bit weight precision shows only 1.63% lower accuracy on CIFAR-10 than the STDP accuracy obtained in software. Further, the proposed architecture is used to accelerate a hybrid SNN architecture that couples off-chip supervised (backpropagation through time) and on-chip unsupervised (STDP) training, and we evaluate this hybrid network architecture with the proposed data flow. The accuracy of the hybrid network is 10.84% higher than the STDP-trained result and 1.4% higher than the backpropagation-trained ConvSNN result on the CIFAR-10 dataset. A physical design of MONETA in 65 nm complementary metal-oxide-semiconductor (CMOS) shows power efficiencies of 18.69, 7.25, and 10.41 tera operations per second (TOPS)/W for the inference, learning, and hybrid learning modes, respectively.
Collapse
Affiliation(s)
- Daehyun Kim
- Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, United States
| | - Biswadeep Chakraborty
- Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, United States
| | - Xueyuan She
- Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, United States
| | - Edward Lee
- Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, United States
| | - Beomseok Kang
- Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, United States
| | - Saibal Mukhopadhyay
- Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, United States
| |
Collapse
|
49
|
Rethinking the Role of Normalization and Residual Blocks for Spiking Neural Networks. SENSORS 2022; 22:s22082876. [PMID: 35458860 PMCID: PMC9028401 DOI: 10.3390/s22082876] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/01/2022] [Revised: 03/28/2022] [Accepted: 04/05/2022] [Indexed: 12/10/2022]
Abstract
Biologically inspired spiking neural networks (SNNs) are widely used to realize ultralow energy consumption. However, deep SNNs are not easy to train due to excessive firing of spiking neurons in the hidden layers. To tackle this problem, we propose a novel but simple normalization technique called postsynaptic potential normalization. This normalization removes the subtraction term from standard normalization and uses the second raw moment, instead of the variance, as the division term. By applying this simple normalization to the postsynaptic potential, spike firing can be controlled and training can proceed appropriately. The experimental results show that SNNs with our normalization outperformed models using other normalizations. Furthermore, through pre-activation residual blocks, the proposed model can be trained with more than 100 layers without other special techniques dedicated to SNNs.
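The normalization itself is simple enough to state in a few lines; here is a sketch assuming a batch of postsynaptic potentials of shape batch x features (the epsilon and shapes are illustrative):

```python
import numpy as np

def psp_normalize(u, eps=1e-5):
    # Postsynaptic potential normalization as described: drop the mean
    # subtraction of standard normalization and divide by the square
    # root of the second raw moment E[u^2] instead of the variance.
    return u / np.sqrt(np.mean(u ** 2, axis=0) + eps)

rng = np.random.default_rng(0)
u = rng.normal(loc=0.5, scale=2.0, size=(64, 16))   # batch of potentials
print(psp_normalize(u).std(axis=0)[:4])             # roughly unit scale
```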
Collapse
|
50
|
Wang X, Zhong M, Cheng H, Xie J, Zhou Y, Ren J, Liu M. SpikeGoogle: Spiking Neural Networks with GoogLeNet‐like inception module. CAAI TRANSACTIONS ON INTELLIGENCE TECHNOLOGY 2022. [DOI: 10.1049/cit2.12082] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022] Open
Affiliation(s)
- Xuan Wang
- School of Intelligent Systems Engineering, Sun Yat-sen University, Shenzhen, China
- Guangdong Provincial Key Laboratory of Fire Science and Intelligent Emergency Technology, Guangzhou, China
| | - Minghong Zhong
- School of Intelligent Systems Engineering, Sun Yat-sen University, Shenzhen, China
- Guangdong Provincial Key Laboratory of Fire Science and Intelligent Emergency Technology, Guangzhou, China
| | - Hoiyuen Cheng
- School of Intelligent Systems Engineering, Sun Yat-sen University, Shenzhen, China
- Guangdong Provincial Key Laboratory of Fire Science and Intelligent Emergency Technology, Guangzhou, China
| | - Junjie Xie
- School of Intelligent Systems Engineering, Sun Yat-sen University, Shenzhen, China
- Guangdong Provincial Key Laboratory of Fire Science and Intelligent Emergency Technology, Guangzhou, China
| | - Yingchu Zhou
- Shenzhen Academy of Metrology and Quality Inspection, Shenzhen, China
| | - Jun Ren
- Infocare Systems Limited, New Zealand
| | - Mengyuan Liu
- School of Intelligent Systems Engineering, Sun Yat-sen University, Shenzhen, China
- Guangdong Provincial Key Laboratory of Fire Science and Intelligent Emergency Technology, Guangzhou, China
| |
Collapse
|