1.
Yu Q, Gao J, Wei J, Li J, Tan KC, Huang T. Improving Multispike Learning With Plastic Synaptic Delays. IEEE Transactions on Neural Networks and Learning Systems 2023; 34:10254-10265. [PMID: 35442893] [DOI: 10.1109/tnnls.2022.3165527] [Citation(s) in RCA: 1] [Indexed: 06/14/2023]
Abstract
Emulating the spike-based processing of the brain, spiking neural networks (SNNs) have been developed as a promising candidate for a new generation of artificial neural networks that aim to achieve efficient cognition as the brain does. Due to the complex dynamics and nonlinearity of SNNs, designing efficient learning algorithms remains a major difficulty and attracts great research attention. Most existing algorithms focus on the adjustment of synaptic weights. However, other components, such as synaptic delays, are found to be adaptive and important in modulating neural behavior. How plasticity in different components could cooperate to improve the learning of SNNs remains an interesting open question. Advancing our previous multispike learning, we propose a new joint weight-delay plasticity rule, named TDP-DL, in this article. Plastic delays are integrated into the learning framework, and as a result, the performance of multispike learning is significantly improved. Simulation results highlight the effectiveness and efficiency of our TDP-DL rule compared to baseline ones. Moreover, we reveal the underlying principle of how synaptic weights and delays cooperate with each other through a synthetic interval-selectivity task, and show that plastic delays can enhance the selectivity and flexibility of neurons by shifting information across time. Due to this capability, useful information distributed across the time domain can be effectively integrated for better accuracy, as highlighted in our generalization tasks of image, speech, and event-based object recognition. Our work is thus valuable for improving the performance of spike-based neuromorphic computing.
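The weight-delay cooperation described in this abstract can be illustrated with a toy sketch (our own illustration in Python, not the authors' TDP-DL rule; the exponential kernel and all parameters here are assumptions): shifting a synapse's delay aligns spikes that arrive far apart in time so their post-synaptic potentials sum.

```python
import math

# Toy current-based neuron: each synapse carries a weight AND a plastic
# delay. An exponential post-synaptic kernel is assumed for illustration.
def membrane_trace(spike_times, weights, delays, t, tau=5.0):
    """Membrane potential at time t from delayed, weighted input spikes."""
    v = 0.0
    for s, w, d in zip(spike_times, weights, delays):
        arrival = s + d
        if t >= arrival:
            v += w * math.exp(-(t - arrival) / tau)
    return v

# Two equally weighted input spikes 8 ms apart.
spikes, weights = [0.0, 8.0], [0.6, 0.6]

# With zero delays the two potentials barely overlap at t = 8 ms ...
v_unaligned = membrane_trace(spikes, weights, [0.0, 0.0], t=8.0)

# ... but a plastic 8 ms delay on the first synapse aligns both arrivals,
# shifting information across time.
v_aligned = membrane_trace(spikes, weights, [8.0, 0.0], t=8.0)

assert v_aligned > v_unaligned  # alignment raises the peak potential
```

This is why a joint rule has more room than weights alone: a weight update scales a contribution, while a delay update moves it in time.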
2.
Yue Y, Baltes M, Abuhajar N, Sun T, Karanth A, Smith CD, Bihl T, Liu J. Spiking neural networks fine-tuning for brain image segmentation. Front Neurosci 2023; 17:1267639. [PMID: 38027484] [PMCID: PMC10646327] [DOI: 10.3389/fnins.2023.1267639] [Citation(s) in RCA: 0] [Received: 07/26/2023] [Accepted: 10/09/2023] [Indexed: 12/01/2023] Open
Abstract
Introduction: The field of machine learning has undergone a significant transformation with the progress of deep artificial neural networks (ANNs) and the growing accessibility of annotated data. ANNs usually require substantial power and memory to achieve optimal performance. Spiking neural networks (SNNs) have recently emerged as a low-power alternative to ANNs due to their sparse nature. Despite their energy efficiency, SNNs are generally more difficult to train than ANNs.
Methods: In this study, we propose a novel three-stage SNN training scheme designed specifically for segmenting human hippocampi from magnetic resonance images. Our training pipeline starts by optimizing an ANN to its maximum capacity, then employs a quick ANN-SNN conversion to initialize the corresponding spiking network. This is followed by spike-based backpropagation to fine-tune the converted SNN. To understand the reason behind the performance decline in converted SNNs, we conduct a set of experiments investigating the output scaling issue. Furthermore, we explore the impact of binary and ternary representations in SNNs and empirically evaluate their performance on image classification and segmentation tasks.
Results and discussion: With our hybrid training scheme, we observe significant advantages over both ANN-SNN conversion and direct SNN training in terms of segmentation accuracy and training efficiency. Experimental results demonstrate the effectiveness of our model in achieving our design goals.
Affiliation(s)
- Ye Yue
- School of Electrical Engineering and Computer Science, Ohio University, Athens, OH, United States
- Marc Baltes
- School of Electrical Engineering and Computer Science, Ohio University, Athens, OH, United States
- Nidal Abuhajar
- School of Electrical Engineering and Computer Science, Ohio University, Athens, OH, United States
- Tao Sun
- Centrum Wiskunde and Informatica (CWI), Machine Learning Group, Amsterdam, Netherlands
- Avinash Karanth
- School of Electrical Engineering and Computer Science, Ohio University, Athens, OH, United States
- Charles D. Smith
- Department of Neurology, University of Kentucky, Lexington, KY, United States
- Trevor Bihl
- Department of Biomedical, Industrial and Human Factors Engineering, Wright State University, Dayton, OH, United States
- Jundong Liu
- School of Electrical Engineering and Computer Science, Ohio University, Athens, OH, United States
3.
Sanchez-Garcia M, Chauhan T, Cottereau BR, Beyeler M. Efficient multi-scale representation of visual objects using a biologically plausible spike-latency code and winner-take-all inhibition. Biological Cybernetics 2023; 117:95-111. [PMID: 37004546] [DOI: 10.1007/s00422-023-00956-x] [Citation(s) in RCA: 0] [Received: 11/30/2022] [Accepted: 02/10/2023] [Indexed: 05/05/2023]
Abstract
Deep neural networks have surpassed human performance in key visual challenges such as object recognition, but require a large amount of energy, computation, and memory. In contrast, spiking neural networks (SNNs) have the potential to improve both the efficiency and biological plausibility of object recognition systems. Here we present an SNN model that uses spike-latency coding and winner-take-all inhibition (WTA-I) to efficiently represent visual stimuli using multi-scale parallel processing. Mimicking neuronal response properties in early visual cortex, images were preprocessed with three different spatial frequency (SF) channels before being fed to a layer of spiking neurons whose synaptic weights were updated using spike-timing-dependent plasticity. We investigate how the quality of the represented objects changes under different SF bands and WTA-I schemes. We demonstrate that a network of 200 spiking neurons tuned to three SFs can efficiently represent objects with as few as 15 spikes per neuron. Studying how core object recognition may be implemented using biologically plausible learning rules in SNNs may not only further our understanding of the brain, but also lead to novel and efficient artificial vision systems.
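The two mechanisms this abstract combines can be sketched in a few lines (our own toy, with assumed parameters; not the paper's model): intensities are encoded as spike latencies (stronger input fires earlier), and a winner-take-all stage keeps only the earliest-spiking neuron while the rest are inhibited.

```python
def latency_encode(intensities, t_max=10.0):
    """Map each intensity in [0, 1] to a latency: 1.0 -> 0, 0.0 -> t_max.
    Stronger stimuli produce earlier spikes."""
    return [t_max * (1.0 - i) for i in intensities]

def winner_take_all(latencies):
    """Index of the first neuron to spike; lateral inhibition silences
    the others, so one early spike carries the representation."""
    return min(range(len(latencies)), key=lambda j: latencies[j])

pixels = [0.2, 0.9, 0.5]
lat = latency_encode(pixels)      # approx. [8.0, 1.0, 5.0]
assert winner_take_all(lat) == 1  # the brightest pixel spikes first
```

This first-spike sparsity is what makes budgets like "15 spikes per neuron" plausible: most of the information sits in who fires first, not in how many spikes follow.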
Affiliation(s)
- Tushar Chauhan
- The Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- CerCo CNRS UMR5549, Université de Toulouse III-Paul Sabatier, Toulouse, France
- Benoit R Cottereau
- CerCo CNRS UMR5549, Université de Toulouse III-Paul Sabatier, Toulouse, France
- IPAL, CNRS IRL 2955, Singapore, Singapore
- Michael Beyeler
- Department of Computer Science, University of California, Santa Barbara, CA, USA
- Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, USA
4.
Dong J, Jiang R, Xiao R, Yan R, Tang H. Event stream learning using spatio-temporal event surface. Neural Netw 2022; 154:543-559. [DOI: 10.1016/j.neunet.2022.07.010] [Citation(s) in RCA: 0] [Received: 05/11/2021] [Revised: 06/12/2022] [Accepted: 07/10/2022] [Indexed: 11/29/2022]
5.
Spatio-Temporal Coding-Based Helicopter Trajectory Planning for Pulsed Neural Membrane System. Computational Intelligence and Neuroscience 2022; 2022:1787013. [PMID: 35498182] [PMCID: PMC9054418] [DOI: 10.1155/2022/1787013] [Citation(s) in RCA: 0] [Received: 12/03/2021] [Accepted: 02/19/2022] [Indexed: 12/04/2022]
Abstract
To address the trajectory planning problem posed by the nonlinear and strongly coupled dynamics of unmanned helicopters, membrane computing, with its distributed parallel processing capability, is introduced for unmanned helicopter trajectory planning. Global and local spatial information is characterized temporally; a temporal characterization algorithm under mapping information is designed; a hierarchical discriminant regression algorithm based on incremental principal component analysis is designed to realize the building and identification of trees in trajectory planning; and a pulsed neural membrane system (PNMS) with a spatio-temporal coding function under membrane computing is constructed. Compared with the RRT algorithm in two experimental environments, the original path length, the trimmed path length, the planning time, and the number of search nodes all show varying degrees of improvement, verifying the feasibility and effectiveness of the PNMS for unmanned helicopter trajectory planning. This work expands the theoretical research of membrane computing in the field of optimal control and provides theoretical support for subsequent applications.
6.
Ran X, Xu M, Mei L, Xu Q, Liu Q. Detecting out-of-distribution samples via variational auto-encoder with reliable uncertainty estimation. Neural Netw 2021; 145:199-208. [PMID: 34768090] [DOI: 10.1016/j.neunet.2021.10.020] [Citation(s) in RCA: 3] [Received: 07/17/2020] [Revised: 08/28/2021] [Accepted: 10/22/2021] [Indexed: 10/20/2022]
Abstract
Variational autoencoders (VAEs) are influential generative models with rich representation capabilities, combining deep neural network architectures with Bayesian methods. However, VAE models have a known weakness: they assign higher likelihood to out-of-distribution (OOD) inputs than to in-distribution (ID) inputs. To address this problem, reliable uncertainty estimation is considered critical for an in-depth understanding of OOD inputs. In this study, we propose an improved noise contrastive prior (INCP) that can be integrated into the encoder of a VAE, yielding INCPVAE. INCP is scalable, trainable, and compatible with VAEs, and it inherits the merits of noise contrastive priors for uncertainty estimation. Experiments on various datasets demonstrate that, compared to standard VAEs, our model is superior in uncertainty estimation for OOD data and is robust in anomaly detection tasks. The INCPVAE model obtains reliable uncertainty estimates for OOD inputs and mitigates the OOD problem in VAE models.
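The likelihood weakness this abstract targets can be reproduced with a one-dimensional Gaussian stand-in (our illustration, not a VAE): a density model fit to broad in-distribution data can assign an OOD input near its mode a higher likelihood than a perfectly typical in-distribution sample, which is why likelihood alone cannot flag OOD and uncertainty estimates are needed.

```python
import math

def gaussian_logpdf(x, mu, sigma):
    """Log density of N(mu, sigma^2) at x."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) \
        - (x - mu) ** 2 / (2 * sigma ** 2)

mu, sigma = 0.0, 2.0   # density model fit to broad ID data
typical_id = 2.0       # an ID sample about one std from the mean
ood_near_mode = 0.1    # an OOD input that happens to sit near the mode

# The OOD point scores HIGHER likelihood than the typical ID point.
assert gaussian_logpdf(ood_near_mode, mu, sigma) > \
       gaussian_logpdf(typical_id, mu, sigma)
```

The same geometry is behind the well-known result that VAEs trained on CIFAR-10 assign higher likelihood to SVHN images; a score that also carries uncertainty, as INCPVAE aims for, is one way out.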
Affiliation(s)
- Xuming Ran
- Shenzhen Key Laboratory of Smart Healthcare Engineering, Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen 518055, China; College of Mathematics and Statistics, Chongqing Jiaotong University, Chongqing 400074, China.
- Mingkun Xu
- Center for Brain Inspired Computing Research, Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Lingrui Mei
- China Automotive Engineering Research Institute, Chongqing 401122, China
- Qi Xu
- School of Artificial Intelligence, Dalian University of Technology, Dalian 116024, China; College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China
- Quanying Liu
- Shenzhen Key Laboratory of Smart Healthcare Engineering, Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen 518055, China.
7.
Kim Y, Panda P. Optimizing Deeper Spiking Neural Networks for Dynamic Vision Sensing. Neural Netw 2021; 144:686-698. [PMID: 34662827] [DOI: 10.1016/j.neunet.2021.09.022] [Citation(s) in RCA: 9] [Received: 02/13/2021] [Revised: 09/22/2021] [Accepted: 09/24/2021] [Indexed: 11/20/2022]
Abstract
Spiking Neural Networks (SNNs) have recently emerged as a new generation of low-power deep neural networks due to their sparse, asynchronous, binary event-driven processing. Most previous deep SNN optimization methods focus on static datasets (e.g., MNIST) from conventional frame-based cameras. On the other hand, optimization techniques for event data from Dynamic Vision Sensor (DVS) cameras are still in their infancy. Most prior SNN techniques handling DVS data are limited to shallow networks and thus show low performance. Generally, we observe that the integrate-and-fire behavior of spiking neurons diminishes spike activity in deeper layers, and this sparse spike activity results in a sub-optimal solution during training (i.e., performance degradation). To address this limitation, we propose novel algorithmic and architectural advances to accelerate the training of very deep SNNs on DVS data. Specifically, we propose Spike Activation Lift Training (SALT), which increases spike activity across all layers by optimizing both weights and thresholds in convolutional layers. After applying SALT, we train the weights based on the cross-entropy loss. SALT helps the network convey ample information across all layers during training and therefore improves performance. Furthermore, we propose a simple and effective architecture, called Switched-BN, which exploits Batch Normalization (BN). Previous methods show that standard BN is incompatible with the temporal dynamics of SNNs. In the Switched-BN architecture, we therefore apply BN to the last layer of an SNN after accumulating all spikes from the previous layer with a spike voltage accumulator (i.e., converting temporal spike information to a float value). Even though we apply BN in just one layer, our results demonstrate a considerable performance gain without significant computational overhead. Through extensive experiments, we show the effectiveness of SALT and Switched-BN for training very deep SNNs from scratch on various benchmarks including DVS-Cifar10, N-Caltech, DHP19, CIFAR10, and CIFAR100. To the best of our knowledge, this is the first work showing state-of-the-art performance with deep SNNs on DVS data.
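The Switched-BN idea in this abstract can be sketched in miniature (our own toy with assumed shapes, not the authors' code): binary spikes are first summed over time into a float "spike voltage", and only that accumulated value is batch-normalized, sidestepping BN's incompatibility with per-timestep spike dynamics.

```python
def accumulate(spike_trains):
    """Sum binary spikes over time for each neuron: temporal -> float."""
    return [sum(train) for train in spike_trains]

def batch_norm(values, eps=1e-5):
    """Normalize accumulated values to zero mean, unit variance.
    (Toy version: normalizes over all values rather than per-feature.)"""
    n = len(values)
    mean = sum(values) / n
    var = sum((x - mean) ** 2 for x in values) / n
    return [(x - mean) / (var + eps) ** 0.5 for x in values]

# Two samples, each with 3 neurons observed for 4 timesteps.
sample_a = accumulate([[1, 0, 1, 0], [0, 0, 0, 1], [1, 1, 1, 1]])  # [2, 1, 4]
sample_b = accumulate([[0, 0, 0, 0], [1, 1, 0, 1], [0, 1, 0, 0]])  # [0, 3, 1]

normed = batch_norm(sample_a + sample_b)
assert abs(sum(normed)) < 1e-6  # zero mean after normalization
```

Because the accumulator collapses the time axis before normalization, BN sees ordinary float activations, which is why a single BN layer at the end suffices in the paper's design.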
Affiliation(s)
- Youngeun Kim
- Department of Electrical Engineering, Yale University, New Haven, CT, USA.