1
Bukh AV, Rybalova EV, Shepelev IA, Vadivasova TE. Classification of musical intervals by spiking neural networks: Perfect student in solfége classes. Chaos 2024; 34:063102. PMID: 38829796. DOI: 10.1063/5.0210790.
Abstract
We investigate the spike activity of a network of excitable FitzHugh-Nagumo neurons driven by constant two-frequency auditory signals. The neurons are supplemented with linear frequency filters and nonlinear input-signal converters. We show that the network can be configured to recognize a specific frequency ratio (musical interval) by selecting the parameters of the neurons, the input filters, and the coupling between neurons. A set of appropriately configured subnetworks with different topologies and coupling strengths can serve as a classifier for musical intervals. We find that the selective properties of the classifier arise from a specific topology of coupling between the neurons of the network.
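The FitzHugh-Nagumo model at the heart of this classifier is easy to simulate directly. Below is a minimal forward-Euler sketch of a single excitable FitzHugh-Nagumo unit under constant drive; the parameter values, threshold, and initial conditions are illustrative choices, not those used in the paper.

```python
def fhn_step(v, w, I, eps=0.01, a=0.7, b=0.8, dt=0.1):
    """One forward-Euler step of the FitzHugh-Nagumo model.
    v: fast (voltage-like) variable, w: slow recovery variable,
    I: constant input current. Parameter values are illustrative."""
    dv = v - v**3 / 3.0 - w + I
    dw = eps * (v + a - b * w)
    return v + dt * dv, w + dt * dw

def spike_count(I, steps=20000, thresh=1.0):
    """Count upward threshold crossings of v under constant drive I."""
    v, w = -1.0, -0.5
    count, above = 0, False
    for _ in range(steps):
        v, w = fhn_step(v, w, I)
        if v > thresh and not above:
            count += 1
            above = True
        elif v < thresh:
            above = False
    return count
```

With no input the unit relaxes to its resting state, while sufficient constant drive pushes it into repetitive spiking, which is the excitable-versus-oscillatory distinction the network's frequency filters exploit.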
Affiliation(s)
- A V Bukh
- Institute of Physics, Saratov State University, 83 Astrakhanskaya Street, Saratov 410012, Russia
- E V Rybalova
- Institute of Physics, Saratov State University, 83 Astrakhanskaya Street, Saratov 410012, Russia
- I A Shepelev
- Institute of Physics, Saratov State University, 83 Astrakhanskaya Street, Saratov 410012, Russia
- Almetyevsk State Petroleum Institute, 2 Lenin Street, Almetyevsk 423462, Russia
- T E Vadivasova
- Institute of Physics, Saratov State University, 83 Astrakhanskaya Street, Saratov 410012, Russia
2
Vignoud G, Venance L, Touboul JD. Anti-Hebbian plasticity drives sequence learning in striatum. Commun Biol 2024; 7:555. PMID: 38724614. PMCID: PMC11082161. DOI: 10.1038/s42003-024-06203-8.
Abstract
Spatio-temporal activity patterns have been observed in a variety of brain areas during spontaneous activity, prior to or during action, and in response to stimuli. The biological mechanisms endowing neurons with the ability to distinguish between different sequences remain largely unknown. Learning sequences of spikes raises multiple challenges, such as maintaining spike history in memory and discriminating partially overlapping sequences. Here, we show that anti-Hebbian spike-timing dependent plasticity (STDP), as observed at cortico-striatal synapses, can naturally lead to the learning of spike sequences. We design a spiking model of the striatal output neuron receiving spike patterns defined as sequential input from a fixed set of cortical neurons. We use a simple synaptic plasticity rule that combines anti-Hebbian STDP with non-associative potentiation for a subset of the presented patterns, called rewarded patterns. We study the ability of striatal output neurons to discriminate rewarded from non-rewarded patterns by firing only after the presentation of a rewarded pattern. In particular, we show that two biological properties of striatal networks, spiking latency and collateral inhibition, increase accuracy by allowing better discrimination of partially overlapping sequences. These results suggest that anti-Hebbian STDP may serve as a biological substrate for learning sequences of spikes.
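The anti-Hebbian pair-based rule at the core of this result can be sketched in a few lines: the sign convention is simply reversed relative to Hebbian STDP, so a causal pre-before-post pairing depresses the synapse. The amplitudes and time constant below are illustrative placeholders, not the paper's fitted values.

```python
import math

def anti_hebbian_stdp(dt, a_plus=0.1, a_minus=0.1, tau=20.0):
    """Weight change for one pre/post spike pair, dt = t_post - t_pre (ms).
    Anti-Hebbian convention: pre-before-post (dt > 0) depresses,
    post-before-pre (dt < 0) potentiates. Constants are illustrative."""
    if dt > 0:            # causal order -> depression
        return -a_minus * math.exp(-dt / tau)
    elif dt < 0:          # anti-causal order -> potentiation
        return a_plus * math.exp(dt / tau)
    return 0.0
```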
Affiliation(s)
- Gaëtan Vignoud
- Center for Interdisciplinary Research in Biology (CIRB), College de France, CNRS, INSERM, Université PSL, Paris, France
- Laurent Venance
- Center for Interdisciplinary Research in Biology (CIRB), College de France, CNRS, INSERM, Université PSL, Paris, France
- Jonathan D Touboul
- Department of Mathematics and Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
3
Liu F, Zheng H, Ma S, Zhang W, Liu X, Chua Y, Shi L, Zhao R. Advancing brain-inspired computing with hybrid neural networks. Natl Sci Rev 2024; 11:nwae066. PMID: 38577666. PMCID: PMC10989656. DOI: 10.1093/nsr/nwae066.
Abstract
Brain-inspired computing, which draws inspiration from the fundamental structure and information-processing mechanisms of the human brain, has gained significant momentum in recent years, emerging as a research paradigm centered on brain-computer dual-driven design and multi-network integration. One noteworthy instance of this paradigm is the hybrid neural network (HNN), which integrates computer-science-oriented artificial neural networks (ANNs) with neuroscience-oriented spiking neural networks (SNNs). HNNs exhibit distinct advantages in various intelligent tasks, including perception, cognition, and learning. This paper presents a comprehensive review of HNNs, with an emphasis on their origin, concepts, biological perspective, construction framework, and supporting systems. Insights and suggestions for potential research directions are also provided, aiming to propel the advancement of the HNN paradigm.
Affiliation(s)
- Faqiang Liu, Hao Zheng, Songchen Ma, Weihao Zhang, Xue Liu, Luping Shi, Rong Zhao
- Center for Brain-Inspired Computing Research, Optical Memory National Engineering Research Center, Tsinghua University-China Electronics Technology HIK Group Co. Joint Research Center for Brain-inspired Computing, IDG/McGovern Institute for Brain Research, Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Yansong Chua
- Neuromorphic Computing Laboratory, China Nanhu Academy of Electronics and Information Technology, Jiaxing 314001, China
4
Wang J. Training multi-layer spiking neural networks with plastic synaptic weights and delays. Front Neurosci 2024; 17:1253830. PMID: 38328553. PMCID: PMC10847234. DOI: 10.3389/fnins.2023.1253830.
Abstract
Spiking neural networks are usually considered the third generation of neural networks; they hold the potential for ultra-low power consumption on corresponding hardware platforms and are well suited to temporal information processing. However, how to efficiently train spiking neural networks remains an open question, and most existing learning methods consider only the plasticity of synaptic weights. In this paper, we propose a new supervised learning algorithm for multi-layer spiking neural networks based on the classic SpikeProp method. In the proposed method, both the synaptic weights and the delays are treated as adjustable parameters, improving both biological plausibility and learning performance. In addition, the proposed method inherits SpikeProp's ability to make full use of the temporal information of spikes. Various experiments are conducted to verify the performance of the proposed method, and the results demonstrate competitive learning performance compared with existing related work. Finally, the differences between the proposed method and existing mainstream multi-layer training algorithms are discussed.
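To see why a plastic delay is a useful extra parameter, consider a single alpha-shaped postsynaptic potential: its peak arrives at a fixed offset after the (delayed) input spike, so adjusting the delay shifts the response in time. The toy update below nudges the delay until the response peak lands on a target time; it is a deliberately simplified stand-in for the SpikeProp-style gradient in the paper, with made-up constants.

```python
import math

def psp(t, w, d, tau=5.0):
    """Alpha-shaped PSP of weight w for a spike arriving after delay d;
    normalized so the peak value equals w, at time t = d + tau."""
    s = t - d
    return w * (s / tau) * math.exp(1.0 - s / tau) if s > 0 else 0.0

def train_delay(t_target, d=2.0, lr=0.5, steps=100, tau=5.0):
    """Toy delay update: the alpha kernel peaks at t = d + tau, so for a
    single input the response time depends linearly on the delay. We nudge
    d until the peak lands on t_target. Illustrative, not the paper's rule."""
    for _ in range(steps):
        t_peak = d + tau               # analytic peak time of psp()
        d -= lr * (t_peak - t_target)  # gradient-style correction
    return d
```

A joint weight-delay learner would apply an analogous correction to `w` at the same time, which is the combination the paper explores inside multi-layer SpikeProp.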
Affiliation(s)
- Jing Wang
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
5
Xu F, Pan D, Zheng H, Ouyang Y, Jia Z, Zeng H. EESCN: A novel spiking neural network method for EEG-based emotion recognition. Comput Methods Programs Biomed 2024; 243:107927. PMID: 38000320. DOI: 10.1016/j.cmpb.2023.107927.
Abstract
BACKGROUND AND OBJECTIVE: Although existing artificial neural networks have achieved good results in electroencephalograph (EEG) emotion recognition, further improvements are needed in terms of biological interpretability and robustness. In this research, we aim to develop a highly efficient and high-performance method for EEG-based emotion recognition.
METHODS: We propose Emo-EEGSpikeConvNet (EESCN), a novel emotion recognition method based on a spiking neural network (SNN). It consists of a neuromorphic data generation module and a NeuroSpiking framework. The neuromorphic data generation module converts EEG data into a 2D frame format as input to the NeuroSpiking framework, which extracts spatio-temporal features of the EEG for classification.
RESULTS: EESCN achieves high emotion recognition accuracy, ranging from 94.56% to 94.81% on DEAP and a mean accuracy of 79.65% on SEED-IV. Compared to existing SNN methods, EESCN significantly improves EEG emotion recognition performance, while also running faster and using less memory.
CONCLUSIONS: EESCN shows excellent performance and efficiency in EEG-based emotion recognition, with potential for practical applications that require portability under resource constraints.
Affiliation(s)
- FeiFan Xu, Deng Pan, Haohao Zheng, Yu Ouyang, Zhe Jia
- Hangzhou Dianzi University, School of Computer Science and Technology, Hangzhou, Zhejiang, China
- Hong Zeng
- Hangzhou Dianzi University, School of Computer Science and Technology, Hangzhou, Zhejiang, China
- Key Laboratory of Brain Machine Collaborative of Zhejiang Province, Hangzhou, Zhejiang, China
6
Yu Q, Gao J, Wei J, Li J, Tan KC, Huang T. Improving Multispike Learning With Plastic Synaptic Delays. IEEE Trans Neural Netw Learn Syst 2023; 34:10254-10265. PMID: 35442893. DOI: 10.1109/tnnls.2022.3165527.
Abstract
Emulating the spike-based processing of the brain, spiking neural networks (SNNs) have been developed as a promising candidate for a new generation of artificial neural networks that aim to produce cognition as efficiently as the brain. Due to the complex dynamics and nonlinearity of SNNs, designing efficient learning algorithms remains a major difficulty and attracts great research attention. Most existing algorithms focus on the adjustment of synaptic weights. However, other components, such as synaptic delays, are found to be adaptive and important in modulating neural behavior. How plasticity in different components could cooperate to improve the learning of SNNs remains an interesting question. Advancing our previous multispike learning, we propose a new joint weight-delay plasticity rule, named TDP-DL, in this article. Plastic delays are integrated into the learning framework, and as a result, the performance of multispike learning is significantly improved. Simulation results highlight the effectiveness and efficiency of our TDP-DL rule compared to baselines. Moreover, we reveal the underlying principle of how synaptic weights and delays cooperate through a synthetic task of interval selectivity, and show that plastic delays can enhance the selectivity and flexibility of neurons by shifting information across time. Owing to this capability, useful information distributed far apart in the time domain can be effectively integrated for better accuracy, as highlighted in our generalization tasks of image, speech, and event-based object recognition. Our work is thus valuable for improving the performance of spike-based neuromorphic computing.
7
Luo X, Qu H, Wang Y, Yi Z, Zhang J, Zhang M. Supervised Learning in Multilayer Spiking Neural Networks With Spike Temporal Error Backpropagation. IEEE Trans Neural Netw Learn Syst 2023; 34:10141-10153. PMID: 35436200. DOI: 10.1109/tnnls.2022.3164930.
Abstract
Brain-inspired spiking neural networks (SNNs) offer the advantages of lower power consumption and powerful computing capability. However, the lack of effective learning algorithms has obstructed both theoretical advances and applications of SNNs. The majority of existing learning algorithms for SNNs are based on synaptic weight adjustment. However, neuroscience findings confirm that synaptic delays can also be modulated to play an important role in the learning process. Here, we propose a gradient descent-based learning algorithm for synaptic delays to enhance the sequential learning performance of a single spiking neuron. Moreover, we extend the proposed method to multilayer SNNs with spike temporal-based error backpropagation. In the proposed multilayer learning algorithm, information is encoded in the relative timing of individual neuronal spikes, and learning is performed based on the exact derivatives of the postsynaptic spike times with respect to the presynaptic spike times. Experimental results on both synthetic and realistic datasets show significant improvements in learning efficiency and accuracy over existing spike temporal-based learning algorithms. We also evaluate the proposed learning method in an SNN-based multimodal computational model for audiovisual pattern recognition, where it achieves better performance than its counterparts.
8
Wu X, Song Y, Zhou Y, Jiang Y, Bai Y, Li X, Yang X. STCA-SNN: self-attention-based temporal-channel joint attention for spiking neural networks. Front Neurosci 2023; 17:1261543. PMID: 38027490. PMCID: PMC10667472. DOI: 10.3389/fnins.2023.1261543.
Abstract
Spiking Neural Networks (SNNs) have shown great promise in processing spatio-temporal information compared to Artificial Neural Networks (ANNs). However, a performance gap remains between SNNs and ANNs, which impedes the practical application of SNNs. With their intrinsic event-triggered property and temporal dynamics, SNNs have the potential to effectively extract spatio-temporal features from event streams. To leverage this temporal potential, we propose a self-attention-based temporal-channel joint attention SNN (STCA-SNN) with end-to-end training, which infers attention weights along both the temporal and channel dimensions concurrently. It models global temporal and channel information correlations with self-attention, enabling the network to learn 'what' and 'when' to attend simultaneously. Our experimental results show that STCA-SNNs achieve better performance on N-MNIST (99.67%), CIFAR10-DVS (81.6%), and N-Caltech 101 (80.88%) compared with state-of-the-art SNNs. Meanwhile, our ablation study demonstrates that STCA-SNNs improve the accuracy of event stream classification tasks.
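A toy version of joint temporal-channel attention can be written with plain dot-product self-attention over a (time, channel) feature map. The real STCA-SNN uses learned projections inside a deep SNN, so the function below is only a structural sketch of scoring both axes concurrently and applying the weights to the same features.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along one axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_channel_attention(x):
    """Toy joint attention over a (T, C) feature map (e.g., spike counts):
    dot-product attention scores are computed along the temporal axis and
    the channel axis, then both are applied to the features. Illustrative
    only; no learned query/key/value projections."""
    t_scores = softmax(x @ x.T / np.sqrt(x.shape[1]), axis=-1)  # (T, T)
    c_scores = softmax(x.T @ x / np.sqrt(x.shape[0]), axis=-1)  # (C, C)
    return t_scores @ x @ c_scores                              # (T, C)
```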
Affiliation(s)
- Yong Song
- School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Ya Zhou
- School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
9
Fang W, Chen Y, Ding J, Yu Z, Masquelier T, Chen D, Huang L, Zhou H, Li G, Tian Y. SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence. Sci Adv 2023; 9:eadi1480. PMID: 37801497. PMCID: PMC10558124. DOI: 10.1126/sciadv.adi1480.
Abstract
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency by introducing neural dynamics and spike properties. As the emerging spiking deep learning paradigm attracts increasing interest, traditional programming frameworks cannot meet the demands of automatic differentiation, parallel computation acceleration, and the tight integration of neuromorphic dataset processing and deployment. In this work, we present the SpikingJelly framework to address this dilemma. We contribute a full-stack toolkit for preprocessing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips. Compared to existing methods, the training of deep SNNs can be accelerated 11×, and the superior extensibility and flexibility of SpikingJelly enable users to accelerate custom models at low cost through multilevel inheritance and semiautomatic code generation. SpikingJelly paves the way for synthesizing truly energy-efficient SNN-based machine intelligence systems, which will enrich the ecology of neuromorphic computing.
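The basic building block such frameworks provide is a spiking neuron layer, most commonly a leaky integrate-and-fire (LIF) node. A minimal discrete-time LIF update of the kind SpikingJelly wraps, written here in plain Python with commonly used default constants rather than any specific API, looks like this:

```python
def lif_forward(inputs, tau=2.0, v_threshold=1.0, v_reset=0.0):
    """Discrete-time LIF dynamics as used in spiking deep learning:
    v[t+1] = v[t] + (x[t] - v[t]) / tau; emit a spike when v crosses
    v_threshold, then hard-reset to v_reset. Constants are common
    defaults, not a reproduction of a specific framework's API."""
    v = v_reset
    spikes = []
    for x in inputs:
        v = v + (x - v) / tau       # leaky integration toward the input
        if v >= v_threshold:
            spikes.append(1)        # fire
            v = v_reset             # hard reset
        else:
            spikes.append(0)
    return spikes
```

In a deep SNN this binary spike train feeds the next layer, and surrogate gradients replace the non-differentiable threshold during backpropagation.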
Affiliation(s)
- Wei Fang
- School of Computer Science, Peking University, China
- Peng Cheng Laboratory, China
- School of Electronic and Computer Engineering, Shenzhen Graduate School, Peking University, China
- Yanqi Chen
- School of Computer Science, Peking University, China
- Peng Cheng Laboratory, China
- Jianhao Ding
- School of Computer Science, Peking University, China
- Zhaofei Yu
- Institute for Artificial Intelligence, Peking University, China
- Timothée Masquelier
- Centre de Recherche Cerveau et Cognition (CERCO), UMR5549 CNRS–Université Toulouse 3, France
- Ding Chen
- Peng Cheng Laboratory, China
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, China
- Liwei Huang
- School of Computer Science, Peking University, China
- Peng Cheng Laboratory, China
- Guoqi Li
- Institute of Automation, Chinese Academy of Sciences, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, China
- Yonghong Tian
- School of Computer Science, Peking University, China
- Peng Cheng Laboratory, China
- School of Electronic and Computer Engineering, Shenzhen Graduate School, Peking University, China
10
Zhang Y, Xiang S, Jiang S, Han Y, Guo X, Zheng L, Shi Y, Hao Y. Hybrid photonic deep convolutional residual spiking neural networks for text classification. Opt Express 2023; 31:28489-28502. PMID: 37710902. DOI: 10.1364/oe.497218.
Abstract
Spiking neural networks (SNNs) offer powerful computation capability due to their event-driven nature and temporal processing. However, they are still limited to shallow structures and simple tasks because of training difficulty. In this work, we propose a deep convolutional residual spiking neural network (DCRSNN) for text classification tasks. In the DCRSNN, feature extraction is achieved via a convolutional SNN with residual connections, trained with the surrogate-gradient direct training technique; classification is performed by a fully connected network. We also propose a hybrid photonic DCRSNN, in which photonic SNNs are used for classification with a converted training method. The accuracy of hard and soft reset methods, as well as three different surrogate functions, was evaluated and compared across four datasets. Results indicated a maximum accuracy of 76.36% for MR, 91.03% for AG News, 88.06% for IMDB, and 93.99% for Yelp review polarity. Soft reset methods used in the deep convolutional SNN yielded slightly better accuracy than their hard reset counterparts. We also considered the effects of different pooling methods and observation time windows and found that the convergence accuracy achieved by convolutional SNNs was comparable to that of convolutional neural networks under the same conditions. Moreover, the hybrid photonic DCRSNN also shows comparable testing accuracy. This work provides new insights into extending SNN applications to text classification and natural language processing, which is interesting for resource-constrained scenarios.
11
Hwang S, Hwang Y, Kim D, Lee J, Choe HK, Lee J, Kang H, Kung J. ReplaceNet: real-time replacement of a biological neural circuit with a hardware-assisted spiking neural network. Front Neurosci 2023; 17:1161592. PMID: 37638314. PMCID: PMC10448768. DOI: 10.3389/fnins.2023.1161592.
Abstract
Recent developments in artificial neural networks and their learning algorithms have enabled new research directions in computer vision, language modeling, and neuroscience. Among various neural network algorithms, spiking neural networks (SNNs) are well-suited for understanding the behavior of biological neural circuits. In this work, we propose to guide the training of a sparse SNN in order to replace a sub-region of a cultured hippocampal network with limited hardware resources. To verify our approach in a realistic experimental setup, we record spikes of cultured hippocampal neurons with a microelectrode array (in vitro). The main focus of this work is to dynamically cut unimportant synapses during SNN training on the fly so that the model can be realized on resource-constrained hardware, e.g., implantable devices. To do so, we adopt a simple STDP learning rule to select important synapses that impact the quality of spike-timing learning. By combining the STDP rule with online supervised learning, we can precisely predict the spike pattern of the cultured network in real time. The reduction in model complexity, i.e., the reduced number of connections, significantly reduces the required hardware resources, which is crucial in developing an implantable chip for the treatment of neurological disorders. In addition to the new learning algorithm, we prototype sparse SNN hardware on a small FPGA with pipelined execution and parallel computing to verify the possibility of real-time replacement. As a result, we can replace a sub-region of the biological neural circuit within 22 μs using 2.5× fewer hardware resources, i.e., by allowing 80% sparsity in the SNN model, compared to the fully connected SNN model. With energy-efficient algorithms and hardware, this work presents an essential step toward real-time neuroprosthetic computation.
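The pruning step can be illustrated independently of the STDP-based importance measure: given any per-synapse importance score, cutting the lowest-scoring connections until a target sparsity is reached yields the reduced model. The sketch below uses weight magnitude as a generic stand-in for the paper's STDP-derived criterion.

```python
import numpy as np

def prune_to_sparsity(weights, sparsity=0.8):
    """Zero out the smallest-magnitude synapses until the requested fraction
    of connections is removed. Magnitude is a generic importance proxy here,
    standing in for an STDP-derived importance score."""
    w = weights.flatten()                  # flatten() copies, input untouched
    k = int(round(sparsity * w.size))      # number of synapses to cut
    if k == 0:
        return weights.copy()
    cut = np.argsort(np.abs(w))[:k]        # indices of the weakest synapses
    w[cut] = 0.0
    return w.reshape(weights.shape)
```

At 80% sparsity only one in five connections survives, which is roughly the regime the paper reports as yielding 2.5× fewer hardware resources on the FPGA prototype.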
Affiliation(s)
- Sangwoo Hwang, Yujin Hwang, Duhee Kim, Junhee Lee, Junghyup Lee, Hongki Kang
- Department of Electrical Engineering and Computer Science, DGIST, Daegu, Republic of Korea
- Han Kyoung Choe
- Department of Brain Sciences, DGIST, Daegu, Republic of Korea
- Jaeha Kung
- School of Electrical Engineering, Korea University, Seoul, Republic of Korea
12
Xue X, Wimmer RD, Halassa MM, Chen ZS. Spiking Recurrent Neural Networks Represent Task-Relevant Neural Sequences in Rule-Dependent Computation. Cognit Comput 2023; 15:1167-1189. PMID: 37771569. PMCID: PMC10530699. DOI: 10.1007/s12559-022-09994-2.
Abstract
Background: Prefrontal cortical neurons play essential roles in performing rule-dependent tasks and working memory-based decision making.
Methods: Motivated by PFC recordings of task-performing mice, we developed an excitatory-inhibitory spiking recurrent neural network (SRNN) to perform a rule-dependent two-alternative forced choice (2AFC) task. We imposed several important biological constraints on the SRNN and adapted spike frequency adaptation (SFA) and SuperSpike gradient methods to train it efficiently.
Results: The trained SRNN produced emergent rule-specific tunings in single-unit representations, showing rule-dependent population dynamics that resembled experimentally observed data. Under varying test conditions, we manipulated the SRNN parameters or configuration in computer simulations and investigated the impacts of rule-coding error, delay duration, recurrent weight connectivity and sparsity, and excitation/inhibition (E/I) balance on both task performance and neural representations.
Conclusions: Overall, our modeling study provides a computational framework for understanding neuronal representations at a fine timescale during working memory and cognitive control, and offers new experimentally testable hypotheses for future experiments.
Affiliation(s)
- Xiaohe Xue
- Courant Institute of Mathematical Sciences, New York University, New York, NY, USA
- Ralf D. Wimmer
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Michael M. Halassa
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Zhe Sage Chen
- Department of Psychiatry, New York University School of Medicine, New York, NY, USA
- Department of Neuroscience and Physiology, New York University School of Medicine, New York, NY, USA
- Neuroscience Institute, New York University School of Medicine, New York, NY, USA
13
Khatiboun DF, Rezaeiyan Y, Ronchini M, Sadeghi M, Zamani M, Moradi F. Digital Hardware Implementation of ReSuMe Learning Algorithm for Spiking Neural Networks. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. PMID: 38083592. DOI: 10.1109/embc40787.2023.10340282.
Abstract
Within this paper, we demonstrate the feasibility of an FPGA implementation as well as a 180 nm CMOS circuit design of a biologically plausible supervised learning algorithm (ReSuMe). Based on the spike-timing-dependent plasticity (STDP) learning phenomenon, this design proposes a fully configurable implementation of the STDP learning window function to adjust the learning process for different applications, optimizing results for each use case. The CMOS implementation in the 180 nm technology node, supplied at 1.8 V, has a core area of 0.78 mm² and verifies the suitability of an on-chip ReSuMe learning algorithm implementation and its capability to integrate with a multitude of external, already-designed spiking neural network (SNN) structures.
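For readers unfamiliar with ReSuMe (remote supervised method), a heavily simplified discrete-time version of its weight update can be written as follows: at every time step, the difference between the desired and actual output spike trains is correlated with an exponential trace of presynaptic activity. The learning rate, non-Hebbian term, and trace time constant are illustrative, not the values implemented on the chip.

```python
import math

def resume_update(pre, desired, actual, lr=0.1, a=0.01, tau=10.0):
    """Simplified discrete ReSuMe weight update for one synapse.
    pre, desired, actual: binary spike trains (lists of 0/1, one bin each).
    A missing desired spike potentiates, a spurious actual spike depresses,
    both scaled by the presynaptic eligibility trace. Constants illustrative."""
    dw, trace = 0.0, 0.0
    for s_pre, s_d, s_o in zip(pre, desired, actual):
        trace = trace * math.exp(-1.0 / tau) + s_pre   # presynaptic trace
        dw += lr * (s_d - s_o) * (a + trace)           # supervised term
    return dw
```

The hardware design's configurable STDP window corresponds to reshaping the trace kernel used here.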
14
Zhang Y, Xiang S, Han Y, Guo X, Zhang W, Tan Q, Han G, Hao Y. BP-based supervised learning algorithm for multilayer photonic spiking neural network and hardware implementation. Opt Express 2023; 31:16549-16559. PMID: 37157731. DOI: 10.1364/oe.487047.
Abstract
We introduce a supervised learning algorithm for a photonic spiking neural network (SNN) based on back propagation. In the algorithm, information is encoded into spike trains of different strengths, and the SNN is trained according to patterns composed of different spike numbers of the output neurons. Furthermore, a classification task is performed numerically and experimentally using the supervised learning algorithm. The SNN is composed of photonic spiking neurons based on vertical-cavity surface-emitting lasers, which are functionally similar to leaky integrate-and-fire neurons. The results demonstrate the implementation of the algorithm on hardware. In seeking ultra-low power consumption and ultra-low delay, it is of great significance to design and implement hardware-friendly learning algorithms for photonic neural networks and to realize hardware-algorithm collaborative computing.
Collapse
|
15
|
Jiang Z, Xu J, Zhang T, Poo MM, Xu B. Origin of the efficiency of spike timing-based neural computation for processing temporal information. Neural Netw 2023; 160:84-96. [PMID: 36621172 DOI: 10.1016/j.neunet.2022.12.017] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2021] [Revised: 10/12/2022] [Accepted: 12/21/2022] [Indexed: 12/28/2022]
Abstract
Although the advantage of spike timing-based over rate-based network computation has been recognized, the underlying mechanism remains unclear. Using the Tempotron and the Perceptron as elementary neural models, we examined the intrinsic difference between spike timing-based and rate-based computations. For a more direct comparison, we modified Tempotron computation into a rate-based computation that retains some temporal information. Previous studies have shown that spike timing-based computation is more powerful than rate-based computation in terms of the number of computational units required and the capability to classify random patterns. Our study showed that spike timing-based and rate-based Tempotron computations provide similar capability in classifying random spike patterns, as well as in text sentiment classification and spam text detection. However, spike timing-based computation is superior in a task that requires discriminating forward vs. reverse sequences of events, i.e., information that is mainly temporal in nature. Further studies revealed that this superiority requires asymmetry in the profile of the postsynaptic potential (PSP), and that temporal sequence information is converted into a biased spatial distribution of synaptic weight modifications during learning. Thus, the intrinsic PSP asymmetry is a mechanistic basis for the high efficiency of spike timing-based computation in processing temporal information.
Collapse
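The PSP asymmetry that the entry above identifies as the basis of efficient temporal processing can be sketched with the standard Tempotron potential: a double-exponential kernel with a fast rise and slow decay, summed over weighted input spike trains, with a binary decision at the peak. The kernel shape, time constants, and threshold below are generic textbook choices, not the authors' exact model.

```python
import math

def psp(t, tau_m=15.0, tau_s=3.75):
    """Unnormalized double-exponential PSP: sharp rise governed by tau_s,
    slow decay governed by tau_m. Asymmetric in time by construction."""
    return math.exp(-t / tau_m) - math.exp(-t / tau_s) if t > 0 else 0.0

def tempotron_potential(spike_times, weights, t, **kw):
    """Membrane potential at time t: weighted sum of PSPs over all
    input spikes (spike_times is one list of spike times per synapse)."""
    return sum(w * sum(psp(t - s, **kw) for s in times)
               for w, times in zip(weights, spike_times))

def tempotron_fires(spike_times, weights, theta=1.0, t_grid=None):
    """Binary decision: does the peak potential cross the threshold?"""
    if t_grid is None:
        t_grid = [0.5 * k for k in range(201)]  # 0..100 ms, 0.5 ms steps
    return max(tempotron_potential(spike_times, weights, t) for t in t_grid) >= theta
```

Replacing `psp` with a symmetric kernel is the kind of manipulation the entry uses to probe where the temporal-processing advantage comes from.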
Affiliation(s)
- Zhiwei Jiang
- Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China; University of Chinese Academy of Sciences, Beijing 100190, China
| | - Jiaming Xu
- Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100190, China
| | - Tielin Zhang
- Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100190, China
| | - Mu-Ming Poo
- Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China; University of Chinese Academy of Sciences, Beijing 100190, China; Shanghai Center for Brain Science and Brain-Inspired Intelligence Technology, Lingang Laboratory, Shanghai 200031, China.
| | - Bo Xu
- Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100190, China.
| |
Collapse
|
16
|
Guo L, Liu D, Wu Y, Xu G. Comparison of spiking neural networks with different topologies based on anti-disturbance ability under external noise. Neurocomputing 2023. [DOI: 10.1016/j.neucom.2023.01.085] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
|
17
|
Sakemi Y, Morino K, Morie T, Aihara K. A Supervised Learning Algorithm for Multilayer Spiking Neural Networks Based on Temporal Coding Toward Energy-Efficient VLSI Processor Design. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2023; 34:394-408. [PMID: 34280109 DOI: 10.1109/tnnls.2021.3095068] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Spiking neural networks (SNNs) are brain-inspired mathematical models with the ability to process information in the form of spikes. SNNs are expected to provide not only new machine-learning algorithms but also energy-efficient computational models when implemented in very-large-scale integration (VLSI) circuits. In this article, we propose a novel supervised learning algorithm for SNNs based on temporal coding. A spiking neuron in this algorithm is designed to facilitate analog VLSI implementations with analog resistive memory, by which ultrahigh energy efficiency can be achieved. We also propose several techniques to improve the performance on recognition tasks and show that the classification accuracy of the proposed algorithm is as high as that of the state-of-the-art temporal coding SNN algorithms on the MNIST and Fashion-MNIST datasets. Finally, we discuss the robustness of the proposed SNNs against variations that arise from the device manufacturing process and are unavoidable in analog VLSI implementation. We also propose a technique to suppress the effects of variations in the manufacturing process on the recognition performance.
Collapse
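The temporal coding underlying the entry above maps analog input values to spike times rather than rates. A common scheme is time-to-first-spike (latency) encoding, sketched below; the linear mapping and the range are assumptions for illustration, not the paper's circuit model.

```python
def latency_encode(x, t_max=100.0, eps=1e-9):
    """Time-to-first-spike encoding: an intensity in [0, 1] maps to a
    spike time in (0, t_max]; stronger inputs fire earlier. The linear
    mapping here is an illustrative assumption."""
    if not 0.0 <= x <= 1.0:
        raise ValueError("intensity must lie in [0, 1]")
    return t_max * (1.0 - x) + eps

# Example: brighter pixels spike earlier.
pixels = [0.0, 0.5, 1.0]
times = [latency_encode(p) for p in pixels]
```

Because each input contributes at most one spike, such codes are attractive for the energy-constrained VLSI setting the entry targets.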
|
18
|
Hu L, Liao X. Voltage slope guided learning in spiking neural networks. Front Neurosci 2022; 16:1012964. [PMID: 36440266 PMCID: PMC9685168 DOI: 10.3389/fnins.2022.1012964] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2022] [Accepted: 10/25/2022] [Indexed: 04/19/2024] Open
Abstract
A thorny problem in machine learning is how to extract useful clues related to delayed feedback signals from the clutter of input activity, known as the temporal credit-assignment problem. Aggregate-label learning algorithms represent this problem explicitly by training spiking neurons to assign the aggregate feedback signal to potentially effective clues. However, earlier aggregate-label learning algorithms were inefficient due to the large amount of computation required, while recent algorithms that solve this inefficiency may fail to learn because they cannot find adjustment points. We therefore propose a membrane voltage slope guided algorithm (VSG) to cope with this limitation. Relying directly on the membrane voltage to locate the key weight-adjustment points lets VSG avoid intensive computation; more importantly, because the membrane voltage is always available, an adjustment point can never be lost. Experimental results show that the proposed algorithm can correlate delayed feedback signals with the effective clues embedded in background spiking activity, and it also achieves excellent performance on real medical and speech classification datasets. This superior performance makes it a meaningful reference for aggregate-label learning in spiking neural networks.
Collapse
Affiliation(s)
- Lvhui Hu
- School of Intelligent Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, China
| | - Xin Liao
- Information Center, Hospital of Chengdu University of Traditional Chinese Medicine, Chengdu, China
| |
Collapse
|
19
|
Zhou Y, Xu N, Gao B, Zhuge F, Tang Z, Deng X, Li Y, He Y, Miao X. Complementary Memtransistor-Based Multilayer Neural Networks for Online Supervised Learning Through (Anti-)Spike-Timing-Dependent Plasticity. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2022; 33:6640-6651. [PMID: 34081587 DOI: 10.1109/tnnls.2021.3082911] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
We propose a complete hardware-based architecture for multilayer neural networks (MNNs), including electronic synapses, neurons, and periphery circuitry, to implement the supervised learning (SL) algorithm of the extended remote supervised method (ReSuMe). In this system, a pair of complementary (n- and p-type) memtransistors (C-MTs) is used as an electrical synapse. By applying the spike-timing-dependent plasticity (STDP) learning rule to the memtransistor connecting each presynaptic neuron to the output neuron, and the contrary anti-STDP rule to the other memtransistor connecting it to the teacher neuron, extended ReSuMe with multiple layers is realized without the complicated supervising modules of previous approaches. In this way, both the chip area and the power consumption of the learning circuit for the weight-update operation are drastically decreased compared with conventional single-memtransistor (S-MT) designs. Two typical benchmarks, the linearly nonseparable XOR problem and recognition on the Mixed National Institute of Standards and Technology database (MNIST), have been successfully tackled with the proposed MNN system, and the impact of nonideal factors of realistic devices has been evaluated.
Collapse
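The ReSuMe rule that the entry above maps onto STDP/anti-STDP memtransistor pairs can be sketched at the spike-train level: teacher (desired) spikes potentiate a synapse in proportion to a trace of recent presynaptic activity, while actual output spikes depress it symmetrically, so the update vanishes once the output matches the teacher train. This is a simplified software sketch with illustrative constants, not the circuit-level rule.

```python
import math

def resume_delta_w(pre, desired, actual, a=0.01, amp=1.0, tau=10.0):
    """Simplified spike-train-level ReSuMe update for one synapse.
    pre, desired, actual: lists of spike times (ms). a is the
    non-Hebbian bias term; amp and tau shape the exponential
    presynaptic trace. Constants are illustrative."""
    def trace(t):  # exponential trace of presynaptic activity at time t
        return sum(amp * math.exp(-(t - tp) / tau) for tp in pre if tp <= t)
    dw = sum(a + trace(td) for td in desired)   # teacher term (STDP-like)
    dw -= sum(a + trace(to) for to in actual)   # output term (anti-STDP-like)
    return dw
```

The two sum terms correspond directly to the two memtransistors of a C-MT synapse: one trained by STDP against the teacher neuron, the other by anti-STDP against the output neuron.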
|
20
|
George AM, Dey S, Banerjee D, Mukherjee A, Suri M. Online Time-Series Forecasting using Spiking Reservoir. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.10.067] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
|
21
|
Spike-train level supervised learning algorithm based on bidirectional modification for liquid state machines. APPL INTELL 2022. [DOI: 10.1007/s10489-022-04152-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
|
22
|
Wang Z, Liu J, Ma Y, Chen B, Zheng N, Ren P. Perturbation of Spike Timing Benefits Neural Network Performance on Similarity Search. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2022; 33:4361-4372. [PMID: 33606643 DOI: 10.1109/tnnls.2021.3056694] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Perturbation has a positive effect, as it contributes to the stability of neural systems through adaptation and robustness. For example, deep reinforcement learning generally engages in exploratory behavior by injecting noise into the action space and network parameters, which can consistently increase the agent's exploration ability and lead to richer sets of behaviors. Evolutionary strategies also apply parameter perturbations, which makes network architectures robust and diverse. Our main concern is whether the notion of synaptic perturbation introduced into a spiking neural network (SNN) is biologically relevant, or whether novel frameworks and components are needed to account for the perturbation properties of artificial neural systems. In this work, we first review the FLY algorithm, a locality-sensitive hashing (LSH) scheme for similarity search recently published in Science, and propose an improved architecture, time-shifted spiking LSH (TS-SLSH), which considers temporal perturbations of the firing moments of spike pulses. Experimental results show promising performance of the proposed method and demonstrate its generality across various spiking neuron models. We therefore expect temporal perturbation to play an active role in SNN performance.
Collapse
|
23
|
Gao S, Xiang SY, Song ZW, Han YN, Zhang YN, Hao Y. Motion detection and direction recognition in a photonic spiking neural network consisting of VCSELs-SA. OPTICS EXPRESS 2022; 30:31701-31713. [PMID: 36242247 DOI: 10.1364/oe.465653] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/01/2022] [Accepted: 08/04/2022] [Indexed: 06/16/2023]
Abstract
Motion detection and direction recognition are two fundamental visual functions among the many cognitive functions performed by the human visual system. The retina and the visual cortex are the core components of the visual nervous system: the retina transmits electrical signals converted from light signals to the visual cortex of the brain. We propose a photonic spiking neural network (SNN) based on vertical-cavity surface-emitting lasers with an embedded saturable absorber (VCSELs-SA) exhibiting temporal integration effects, and demonstrate that motion detection and direction recognition tasks can be solved by mimicking the visual nervous system. Simulation results reveal that the proposed photonic SNN, with a modified supervised algorithm combining the tempotron and the STDP rule, can correctly detect motion and recognize direction angles, and is robust to time jitter and to current differences between VCSEL-SAs. The proposed approach adopts a low-power photonic neuromorphic system for real-time information processing, providing theoretical support for future large-scale applications of hardware photonic SNNs.
Collapse
|
24
|
Spiking VGG7: Deep Convolutional Spiking Neural Network with Direct Training for Object Recognition. ELECTRONICS 2022. [DOI: 10.3390/electronics11132097] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
We propose a deep convolutional spiking neural network (DCSNN) with direct training to classify concrete bridge damage in a real engineering environment. The leaky integrate-and-fire (LIF) neuron model is employed in our DCSNN, whose architecture is similar to VGG. Both Poisson encoding and convolution encoding strategies are considered. The surrogate gradient method is introduced to realize supervised training of the DCSNN. In addition, we examine the effect of the observation time step on network performance, and the testing performance of the two spike encoding strategies is compared. The results show that the DCSNN trained with the surrogate gradient method achieves an accuracy of 97.83%, comparable to a traditional CNN. We also present a comparison with STDP-based unsupervised learning and a conversion-based algorithm, and the proposed DCSNN is shown to perform best. To demonstrate the generalization performance of the model, we also evaluate it on a public dataset. This work paves the way for practical engineering applications of deep SNNs.
Collapse
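The two ingredients of direct training in the entry above, the LIF neuron and the surrogate gradient, can be sketched in a few lines: a discrete-time LIF update in the forward pass, and a smooth pseudo-derivative of the spike function for the backward pass. The decay factor, threshold, and fast-sigmoid-style surrogate below are common defaults, not the paper's exact settings.

```python
def lif_step(v, i_in, v_th=1.0, beta=0.9):
    """One discrete-time leaky integrate-and-fire update: leak by beta,
    integrate the input, fire on threshold crossing, hard reset to 0."""
    v = beta * v + i_in
    spike = 1.0 if v >= v_th else 0.0
    v = v * (1.0 - spike)  # reset membrane after a spike
    return v, spike

def surrogate_grad(v, v_th=1.0, k=10.0):
    """Surrogate derivative of the non-differentiable spike function:
    a fast-sigmoid-style pseudo-derivative peaked at the threshold,
    substituted for the true gradient in backpropagation."""
    return 1.0 / (1.0 + k * abs(v - v_th)) ** 2
```

In a full training loop, an autodiff framework would use `surrogate_grad` in place of the derivative of the hard threshold inside `lif_step`.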
|
25
|
Cramer B, Stradmann Y, Schemmel J, Zenke F. The Heidelberg Spiking Data Sets for the Systematic Evaluation of Spiking Neural Networks. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2022; 33:2744-2757. [PMID: 33378266 DOI: 10.1109/tnnls.2020.3044364] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Spiking neural networks are the basis of versatile and power-efficient information processing in the brain. Although we currently lack a detailed understanding of how these networks compute, recently developed optimization techniques allow us to instantiate increasingly complex functional spiking neural networks in-silico. These methods hold the promise to build more efficient non-von-Neumann computing hardware and will offer new vistas in the quest of unraveling brain circuit function. To accelerate the development of such methods, objective ways to compare their performance are indispensable. Presently, however, there are no widely accepted means for comparing the computational performance of spiking neural networks. To address this issue, we introduce two spike-based classification data sets, broadly applicable to benchmark both software and neuromorphic hardware implementations of spiking neural networks. To accomplish this, we developed a general audio-to-spiking conversion procedure inspired by neurophysiology. Furthermore, we applied this conversion to an existing and a novel speech data set. The latter is the free, high-fidelity, and word-level aligned Heidelberg digit data set that we created specifically for this study. By training a range of conventional and spiking classifiers, we show that leveraging spike timing information within these data sets is essential for good classification accuracy. These results serve as the first reference for future performance comparisons of spiking neural networks.
Collapse
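The entry above builds its data sets with a neurophysiology-inspired audio-to-spike conversion. As a loose, generic stand-in only (the actual pipeline uses a far more detailed cochlea model), per-channel intensities can be turned into spike trains with a simple Bernoulli-binned Poisson generator:

```python
import random

def poisson_encode(rate_hz, duration_ms, dt_ms=1.0, rng=None):
    """Generic Poisson spike-train generator: in each time bin of width
    dt_ms, emit a spike with probability rate * dt. Only a schematic
    stand-in for the data sets' cochlea-model conversion."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    p = rate_hz * dt_ms / 1000.0
    return [t * dt_ms for t in range(int(duration_ms / dt_ms)) if rng.random() < p]
```

Applying such an encoder per frequency channel of a spectrogram yields the kind of multi-channel spike raster that spiking classifiers consume.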
|
26
|
Makarov VA, Lobov SA, Shchanikov S, Mikhaylov A, Kazantsev VB. Toward Reflective Spiking Neural Networks Exploiting Memristive Devices. Front Comput Neurosci 2022; 16:859874. [PMID: 35782090 PMCID: PMC9243340 DOI: 10.3389/fncom.2022.859874] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2022] [Accepted: 05/10/2022] [Indexed: 11/29/2022] Open
Abstract
The design of modern convolutional artificial neural networks (ANNs) composed of formal neurons copies the architecture of the visual cortex. Signals proceed through a hierarchy, where receptive fields become increasingly more complex and coding sparse. Nowadays, ANNs outperform humans in controlled pattern recognition tasks yet remain far behind in cognition. In part, this is due to limited knowledge about the higher echelons of the brain hierarchy, where neurons actively generate predictions about what will happen next, i.e., where information processing jumps from reflex to reflection. In this study, we forecast that spiking neural networks (SNNs) can achieve the next qualitative leap. Reflective SNNs may take advantage of their intrinsic dynamics and mimic complex, not reflex-based, brain actions. They also enable a significant reduction in energy consumption. However, the training of SNNs is a challenging problem, strongly limiting their deployment. We then briefly overview new insights provided by the concept of a high-dimensional brain, which has been put forward to explain the potential power of single neurons in higher brain stations and deep SNN layers. Finally, we discuss the prospect of implementing neural networks in memristive systems. Such systems can densely pack on a chip 2D or 3D arrays of plastic synaptic contacts directly processing analog information. Thus, memristive devices are a good candidate for implementing in-memory and in-sensor computing. Memristive SNNs can then diverge from the development path of ANNs and build their own niche of cognitive, or reflective, computations.
Collapse
Affiliation(s)
- Valeri A. Makarov
- Instituto de Matemática Interdisciplinar, Universidad Complutense de Madrid, Madrid, Spain
- Department of Neurotechnologies, Research Institute of Physics and Technology, Laboratory of Stochastic Multistable Systems, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- *Correspondence: Valeri A. Makarov
| | - Sergey A. Lobov
- Department of Neurotechnologies, Research Institute of Physics and Technology, Laboratory of Stochastic Multistable Systems, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Neuroscience and Cognitive Technology Laboratory, Center for Technologies in Robotics and Mechatronics Components, Innopolis University, Innopolis, Russia
- Center For Neurotechnology and Machine Learning, Immanuel Kant Baltic Federal University, Kaliningrad, Russia
| | - Sergey Shchanikov
- Department of Neurotechnologies, Research Institute of Physics and Technology, Laboratory of Stochastic Multistable Systems, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Department of Information Technologies, Vladimir State University, Vladimir, Russia
| | - Alexey Mikhaylov
- Department of Neurotechnologies, Research Institute of Physics and Technology, Laboratory of Stochastic Multistable Systems, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
| | - Viktor B. Kazantsev
- Department of Neurotechnologies, Research Institute of Physics and Technology, Laboratory of Stochastic Multistable Systems, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Neuroscience and Cognitive Technology Laboratory, Center for Technologies in Robotics and Mechatronics Components, Innopolis University, Innopolis, Russia
- Center For Neurotechnology and Machine Learning, Immanuel Kant Baltic Federal University, Kaliningrad, Russia
| |
Collapse
|
27
|
Iranmehr E, Shouraki SB, Faraji M. Developing a structural-based local learning rule for classification tasks using ionic liquid space-based reservoir. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07345-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
28
|
Agebure MA, Oyetunji EO, Baagyere EY. A three-tier road condition classification system using a spiking neural network model. JOURNAL OF KING SAUD UNIVERSITY - COMPUTER AND INFORMATION SCIENCES 2022. [DOI: 10.1016/j.jksuci.2020.08.012] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
29
|
Petschenig H, Bisio M, Maschietto M, Leparulo A, Legenstein R, Vassanelli S. Classification of Whisker Deflections From Evoked Responses in the Somatosensory Barrel Cortex With Spiking Neural Networks. Front Neurosci 2022; 16:838054. [PMID: 35495034 PMCID: PMC9047904 DOI: 10.3389/fnins.2022.838054] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2021] [Accepted: 03/14/2022] [Indexed: 11/13/2022] Open
Abstract
Spike-based neuromorphic hardware has great potential for low-energy brain-machine interfaces, leading to a novel paradigm for neuroprosthetics where spiking neurons in silicon read out and control activity of brain circuits. Neuromorphic processors can receive rich information about brain activity from both spikes and local field potentials (LFPs) recorded by implanted neural probes. However, it was unclear whether spiking neural networks (SNNs) implemented on such devices can effectively process that information. Here, we demonstrate that SNNs can be trained to classify whisker deflections of different amplitudes from evoked responses in a single barrel of the rat somatosensory cortex. We show that the classification performance is comparable or even superior to state-of-the-art machine learning approaches. We find that SNNs are rather insensitive to recorded signal type: both multi-unit spiking activity and LFPs yield similar results, where LFPs from cortical layers III and IV seem better suited than those of deep layers. In addition, no hand-crafted features need to be extracted from the data—multi-unit activity can directly be fed into these networks and a simple event-encoding of LFPs is sufficient for good performance. Furthermore, we find that the performance of SNNs is insensitive to the network state—their performance is similar during UP and DOWN states.
Collapse
Affiliation(s)
- Horst Petschenig
- Faculty of Computer Science and Biomedical Engineering, Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
| | - Marta Bisio
- NeuroChip Laboratory, Department of Biomedical Sciences, University of Padova, Padova, Italy
| | - Marta Maschietto
- NeuroChip Laboratory, Department of Biomedical Sciences, University of Padova, Padova, Italy
| | - Alessandro Leparulo
- NeuroChip Laboratory, Department of Biomedical Sciences, University of Padova, Padova, Italy
| | - Robert Legenstein
- Faculty of Computer Science and Biomedical Engineering, Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
| | - Stefano Vassanelli
- NeuroChip Laboratory, Department of Biomedical Sciences, University of Padova, Padova, Italy
- *Correspondence: Stefano Vassanelli
| |
Collapse
|
30
|
Mo L, Wang G, Long E, Zhuo M. ALSA: Associative Learning Based Supervised Learning Algorithm for SNN. Front Neurosci 2022; 16:838832. [PMID: 35431777 PMCID: PMC9008323 DOI: 10.3389/fnins.2022.838832] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2021] [Accepted: 03/07/2022] [Indexed: 11/13/2022] Open
Abstract
The spiking neural network (SNN) is considered the brain-like model that best conforms to the biological mechanisms of the brain. Due to the non-differentiability of the spike, training methods for SNNs are still incomplete. This paper proposes a supervised learning method for SNNs based on associative learning: ALSA. The method builds on the associative learning mechanism, and its realization is similar to the animal conditioned-reflex process, giving it strong physiological plausibility. It uses improved spike-timing-dependent plasticity (STDP) rules, combined with a teacher layer that induces spikes in the output neurons, to strengthen synaptic connections between input spike patterns and the specified output neurons and to weaken synaptic connections between unrelated patterns and unrelated output neurons. Based on ALSA, this paper also completes supervised classification tasks on the IRIS and MNIST datasets, achieving 95.7% and 91.58% recognition accuracy, respectively, which demonstrates that ALSA is a feasible supervised learning method for SNNs. The innovation of this paper is to establish a biologically plausible supervised learning method for SNNs, based on STDP learning rules and the associative learning mechanism that exists widely in animal training.
Collapse
|
31
|
Yu Q, Li S, Tang H, Wang L, Dang J, Tan KC. Toward Efficient Processing and Learning With Spikes: New Approaches for Multispike Learning. IEEE TRANSACTIONS ON CYBERNETICS 2022; 52:1364-1376. [PMID: 32356771 DOI: 10.1109/tcyb.2020.2984888] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Spikes are the currency of information transmission and processing in central nervous systems. They are also believed to play an essential role in the low power consumption of biological systems, whose efficiency attracts increasing attention to the field of neuromorphic computing. However, efficient processing and learning of discrete spikes remains a challenging problem. In this article, we make our contributions toward this direction. A simplified spiking neuron model is first introduced, with the effects of both synaptic input and firing output on the membrane potential modeled with an impulse function. An event-driven scheme is then presented to further improve processing efficiency. Based on this neuron model, we propose two new multispike learning rules that demonstrate better performance than other baselines on various tasks, including association, classification, and feature detection. In addition to efficiency, our learning rules demonstrate high robustness against strong noise of different types. They can also be generalized to different spike coding schemes for the classification task, and notably, a single neuron is capable of solving multicategory classification with our learning rules. In the feature detection task, we re-examine the ability of unsupervised spike-timing-dependent plasticity, present its limitations, and identify a new phenomenon of selectivity loss. In contrast, our proposed learning rules can reliably solve the task over a wide range of conditions without specific constraints. Moreover, our rules can not only detect features but also discriminate between them. The improved performance of our methods makes them a preferable choice for neuromorphic computing.
Collapse
|
32
|
Yu Q, Song S, Ma C, Pan L, Tan KC. Synaptic Learning With Augmented Spikes. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2022; 33:1134-1146. [PMID: 33471768 DOI: 10.1109/tnnls.2020.3040969] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Traditional neuron models use analog values for information representation and computation, while all-or-nothing spikes are employed in spiking ones. With a more brain-like processing paradigm, spiking neurons are more promising for improvements in efficiency and computational capability. They extend the computation of traditional neurons with an additional dimension of time carried by all-or-nothing spikes. Could one benefit from both the accuracy of analog values and the time-processing capability of spikes? In this article, we introduce the concept of augmented spikes, which carry complementary information in spike coefficients in addition to spike latencies. A new augmented spiking neuron model and synaptic learning rules are proposed to process and learn patterns of augmented spikes. We provide systematic insights into the properties and characteristics of our methods, including classification of augmented spike patterns, learning capacity, construction of causality, feature detection, robustness, and applicability to practical tasks such as acoustic and visual pattern recognition. Our augmented approaches show several advanced learning properties and reliably outperform the baseline ones that use typical all-or-nothing spikes. They significantly improve the accuracies of a temporal-based approach on sound and MNIST recognition tasks to 99.38% and 97.90%, respectively, highlighting the effectiveness and potential merits of our methods. More importantly, our augmented approaches are versatile and can be easily generalized to other spike-based systems, contributing to their future development, including in neuromorphic computing.
Collapse
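The augmented-spike idea in the entry above, each event carrying an analog coefficient alongside its latency, can be sketched with a membrane potential in which every PSP is scaled by its spike's coefficient. The exponential kernel and time constant below are assumptions for illustration, not the authors' model.

```python
import math

def potential(aug_spikes, weights, t, tau=10.0):
    """Membrane potential driven by augmented spikes: each input event is
    a (latency t_i, coefficient c_i) pair, and c_i scales its (here
    exponential) PSP. With every c_i fixed at 1 this reduces to ordinary
    all-or-nothing spikes."""
    return sum(w * c * math.exp(-(t - ti) / tau)
               for w, (ti, c) in zip(weights, aug_spikes) if ti <= t)
```

The coefficient thus adds an analog dimension on top of spike timing, which is exactly the hybrid that the entry argues improves learning capacity.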
|
33
|
Supervised learning algorithm based on spike optimization mechanism for multilayer spiking neural networks. INT J MACH LEARN CYB 2022. [DOI: 10.1007/s13042-021-01500-8] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/21/2023]
|
34
|
Jiang J, Tian F, Liang J, Shen Z, Liu Y, Zheng J, Wu H, Zhang Z, Fang C, Zhao Y, Shi J, Xue X, Zeng X. MSPAN: A Memristive Spike-Based Computing Engine With Adaptive Neuron for Edge Arrhythmia Detection. Front Neurosci 2021; 15:761127. [PMID: 34975373 PMCID: PMC8715923 DOI: 10.3389/fnins.2021.761127] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2021] [Accepted: 11/22/2021] [Indexed: 11/13/2022] Open
Abstract
In this work, a memristive spike-based computing-in-memory (CIM) system with adaptive neuron (MSPAN) is proposed to realize energy-efficient remote arrhythmia detection with high accuracy in edge devices through software-hardware co-design. A multi-layer deep integrative spiking neural network (DiSNN) is first designed, achieving an accuracy of 93.6% on 4-class ECG classification tasks. A memristor-based CIM architecture and the corresponding mapping method are then proposed to deploy the DiSNN. In evaluation, the overall system achieves an accuracy of over 92.25% on the MIT-BIH dataset, with an area of 3.438 mm² and an energy consumption of 0.178 μJ per heartbeat at a clock frequency of 500 MHz. These results show that the proposed MSPAN system is promising for arrhythmia detection in edge devices.
Affiliation(s)
- Xiaoyong Xue
- State Key Laboratory of ASIC and System, School of Microelectronics, Fudan University, Shanghai, China
|
35
|
Supervised Learning Algorithm for Multilayer Spiking Neural Networks with Long-Term Memory Spike Response Model. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2021; 2021:8592824. [PMID: 34868299 PMCID: PMC8635912 DOI: 10.1155/2021/8592824] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 06/07/2021] [Revised: 10/17/2021] [Accepted: 10/21/2021] [Indexed: 11/18/2022]
Abstract
As a new brain-inspired computational model of artificial neural networks, spiking neural networks transmit and process information via precisely timed spike trains. Constructing efficient learning methods is a significant research field in spiking neural networks. In this paper, we present a supervised learning algorithm for multilayer feedforward spiking neural networks in which all neurons can fire multiple spikes in all layers. The feedforward network consists of spiking neurons governed by a biologically plausible long-term memory spike response model, in which the effect of earlier spikes on refractoriness is not neglected, so that adaptation effects are incorporated. The gradient descent method is employed to derive the synaptic weight update rule for learning spike trains. The proposed algorithm is tested and verified on spatiotemporal pattern learning problems, including a set of spike train learning tasks and nonlinear pattern classification problems on four UCI datasets. Simulation results indicate that the proposed algorithm improves learning accuracy in comparison with other supervised learning algorithms.
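The defining feature here is that the refractory term keeps the influence of all earlier output spikes, not just the most recent one. A minimal sketch of such a long-term-memory spike response potential, with assumed kernel shapes and time constants (the paper's exact kernels may differ):

```python
import math

def eps(s, tau_m=4.0, tau_s=2.0):
    """Postsynaptic response kernel of the spike response model."""
    return math.exp(-s / tau_m) - math.exp(-s / tau_s) if s >= 0 else 0.0

def eta(s, theta=1.0, tau_r=8.0):
    """Refractory kernel: every earlier output spike keeps suppressing the
    potential (long-term memory of refractoriness, not just the last spike)."""
    return -theta * math.exp(-s / tau_r) if s >= 0 else 0.0

def srm_potential(t, input_spikes, weights, output_spikes):
    """input_spikes: list of (afferent_index, spike_time)."""
    u = sum(eta(t - tf) for tf in output_spikes)          # adaptation term
    u += sum(weights[i] * eps(t - tj) for i, tj in input_spikes)
    return u

# A recent output spike at t = 2.0 lowers the potential at t = 3.0.
u = srm_potential(3.0, [(0, 1.0)], [2.0], output_spikes=[0.5])
```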
|
36
|
Wang Z, Zhang Y, Shi H, Cao L, Yan C, Xu G. Recurrent spiking neural network with dynamic presynaptic currents based on backpropagation. INT J INTELL SYST 2021. [DOI: 10.1002/int.22772] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Indexed: 11/08/2022]
Affiliation(s)
- Zijian Wang
- School of Computer Science and Technology, Donghua University, Shanghai, China
- Yanting Zhang
- School of Computer Science and Technology, Donghua University, Shanghai, China
- Haibo Shi
- School of Statistics and Management, Shanghai University of Finance and Economics, Shanghai, China
- Lei Cao
- Department of Electronic Engineering, Shanghai Maritime University, Shanghai, China
- Cairong Yan
- School of Computer Science and Technology, Donghua University, Shanghai, China
- Guangwei Xu
- School of Computer Science and Technology, Donghua University, Shanghai, China
|
37
|
Tran T, Rekabdar B, Ekenna C. Deep Learning Methods in Predicting Gene Expression Levels for the Malaria Parasite. Front Genet 2021; 12:721068. [PMID: 34630516 PMCID: PMC8493083 DOI: 10.3389/fgene.2021.721068] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 06/28/2021] [Accepted: 08/25/2021] [Indexed: 11/13/2022] Open
Abstract
Malaria is a mosquito-borne disease caused by single-celled blood parasites of the genus Plasmodium; the most severe cases are caused by the species Plasmodium falciparum. Once infected, a human host experiences recurrent, intermittent fevers on a 48-hour cycle, attributed to the synchronized developmental cycle of the parasite during the blood stage. To understand the regulated periodicity of Plasmodium falciparum transcription, this paper forecasts P. falciparum gene transcription during the blood-stage life cycle using a well-tuned recurrent neural network with gated recurrent units. We also employ a spiking neural network to predict P. falciparum gene expression levels. We report prediction results for multiple genes, including candidate genes that express possible drug-target enzymes. Our results show a high level of accuracy in predicting and forecasting the expression levels of the different genes.
Affiliation(s)
- Tuan Tran
- Department of Computer Science, University at Albany, Albany, NY, United States
- Banafsheh Rekabdar
- Department of Computer Science, Southern Illinois University, Carbondale, IL, United States
- Chinwe Ekenna
- Department of Computer Science, University at Albany, Albany, NY, United States
|
38
|
Madhavan A, Daniels MW, Stiles MD. Temporal State Machines: Using Temporal Memory to Stitch Time-based Graph Computations. ACM JOURNAL ON EMERGING TECHNOLOGIES IN COMPUTING SYSTEMS 2021; 17:10.1145/3451214. [PMID: 36575655 PMCID: PMC9792072 DOI: 10.1145/3451214] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 09/01/2020] [Accepted: 02/01/2021] [Indexed: 06/17/2023]
Abstract
Race logic, an arrival-time-coded logic family, has demonstrated energy and performance improvements for applications ranging from dynamic programming to machine learning. However, the various ad hoc mappings of algorithms into hardware rely on researcher ingenuity and result in custom architectures that are difficult to systematize. We propose to associate race logic with the mathematical field of tropical algebra, enabling a more methodical approach toward building temporal circuits. This association between the mathematical primitives of tropical algebra and generalized race logic computations guides the design of temporally coded tropical circuits. It also serves as a framework for expressing high-level timing-based algorithms. This abstraction, when combined with temporal memory, allows for the systematic exploration of race logic-based temporal architectures by making it possible to partition feed-forward computations into stages and organize them into a state machine. We leverage analog memristor-based temporal memories to design such a state machine that operates purely on time-coded wavefronts. We implement a version of Dijkstra's algorithm to evaluate this temporal state machine. This demonstration shows the promise of expanding the expressibility of temporal computing to enable it to deliver significant energy and throughput advantages.
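The association between race logic and tropical algebra comes down to a change of primitives: delays add along a wire (tropical multiplication) and the earliest arriving edge wins a first-arrival race (tropical addition). A toy (min, +) matrix product over a made-up 3-node delay graph illustrates why repeated tropical squaring performs shortest-path relaxation, the operation behind the Dijkstra demonstration:

```python
INF = float("inf")

def min_plus(A, B):
    """Tropical (min, +) matrix product: entry (i, j) is the earliest arrival
    over all intermediate nodes k, i.e. min_k(A[i][k] + B[k][j])."""
    return [[min(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Delay matrix of a 3-node graph (0 on the diagonal, INF = no edge).
W = [[0, 2, INF],
     [INF, 0, 3],
     [1, INF, 0]]
D = min_plus(W, W)  # earliest arrival times over paths of at most two edges
```

Squaring log2(n) more times would close the matrix into all-pairs shortest arrival times, exactly what a cascade of race-logic stages computes in hardware.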
Affiliation(s)
- Advait Madhavan
- University of Maryland and National Institute of Standards and Technology
|
39
|
|
40
|
Xiang S, Ren Z, Song Z, Zhang Y, Guo X, Han G, Hao Y. Computing Primitive of Fully VCSEL-Based All-Optical Spiking Neural Network for Supervised Learning and Pattern Classification. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2021; 32:2494-2505. [PMID: 32673197 DOI: 10.1109/tnnls.2020.3006263] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Indexed: 06/11/2023]
Abstract
We propose a computing primitive for an all-optical spiking neural network (SNN) based on vertical-cavity surface-emitting lasers (VCSELs) for supervised learning using biologically plausible mechanisms. The spike-timing-dependent plasticity (STDP) model was established based on the dynamics of the vertical-cavity semiconductor optical amplifier (VCSOA) subject to dual-optical pulse injection. The neuron-synapse self-consistent unified model of the all-optical SNN was developed, which reproduces the essential neuron-like dynamics and the STDP function. Optical numeric characters are trained and tested with the proposed fully VCSEL-based all-optical SNN. Simulation results show that the proposed all-optical SNN is capable of recognizing ten digits with a supervised learning algorithm, in which the input and output patterns as well as the teacher signals are represented in spatiotemporal fashion. Moreover, lateral inhibition is not required in the proposed architecture, which is friendly to hardware implementation. The system-level unified model enables architecture-algorithm co-design and optimization of all-optical SNNs. To the best of our knowledge, a computing primitive of an all-optical SNN based on VCSELs for supervised learning has not yet been reported; this work paves the way toward fully VCSEL-based large-scale photonic neuromorphic systems with low power consumption.
|
41
|
Wu T, Pan L, Yu Q, Tan KC. Numerical Spiking Neural P Systems. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2021; 32:2443-2457. [PMID: 32649281 DOI: 10.1109/tnnls.2020.3005538] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Indexed: 06/11/2023]
Abstract
Spiking neural P (SN P) systems are a class of discrete neuron-inspired computation models, where information is encoded by the numbers of spikes in neurons and the timing of spikes. However, due to the discontinuous nature of the integrate-and-fire behavior of neurons and the symbolic representation of information, SN P systems are incompatible with gradient descent-based training algorithms such as backpropagation, and lack the capability to process numerical representations of information. In this work, motivated by the numerical nature of numerical P (NP) systems in membrane computing, a novel class of SN P systems is proposed, called numerical SN P (NSN P) systems. More precisely, information is encoded by the values of variables, and the integrate-and-fire behavior of neurons and the distribution of produced values are described by continuous production functions. The computational power of NSN P systems is investigated. We prove that NSN P systems are Turing universal as number-generating devices, where the production functions in each neuron are linear functions, each involving at most one variable; as number-accepting devices, NSN P systems are proved universal as well, even if each neuron contains only one production function. These results show that even if each neuron is simple, in the sense that it contains one or two production functions that are linear in a single variable, a network of such simple neurons is still computationally powerful. Given this computational power and the continuity of production functions, developing learning algorithms for NSN P systems is a promising direction.
|
42
|
Lan Y, Wang X, Wang Y. Spatio-Temporal Sequential Memory Model With Mini-Column Neural Network. Front Neurosci 2021; 15:650430. [PMID: 34121986 PMCID: PMC8195288 DOI: 10.3389/fnins.2021.650430] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 01/07/2021] [Accepted: 03/15/2021] [Indexed: 11/13/2022] Open
Abstract
Memory is an intricate process involving various faculties of the brain and is a central component of human cognition. However, the exact mechanism that brings about memory in our brain remains elusive, and the performance of existing memory models is not satisfactory. To overcome these problems, this paper puts forward a brain-inspired spatio-temporal sequential memory model based on spiking neural networks (SNNs). Inspired by the structure of the neocortex, the proposed model is composed of many mini-columns of biological spiking neurons. Each mini-column represents one memory item, and the firing of different spiking neurons in the mini-column depends on the context of the previous inputs. Spike-Timing-Dependent Plasticity (STDP) is used to update the connections between excitatory neurons and forms associations between memory items. In addition, inhibitory neurons are employed to prevent incorrect predictions, which contributes to improving retrieval accuracy. Experimental results demonstrate that the proposed model can effectively store large amounts of data and accurately retrieve them when sufficient context is provided. This work not only provides a new memory model but also suggests how memory could be formed with excitatory/inhibitory neurons, spike-based encoding, and a mini-column structure.
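The STDP rule that associates consecutive memory items can be sketched as the standard pair-based window; the amplitude and time-constant values below are illustrative conventions, not taken from the paper:

```python
import math

def stdp_dw(delta_t, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP weight change for delta_t = t_post - t_pre (ms).

    Pre-before-post (delta_t > 0) potentiates the synapse, binding the earlier
    item's mini-column to the later one; post-before-pre depresses it."""
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau)
    return -a_minus * math.exp(delta_t / tau)
```

Applied between excitatory neurons of successive mini-columns, repeated potentiation in one temporal direction is what turns co-activation into a stored sequential association.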
Affiliation(s)
- Yawen Lan
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
- School of Information Engineering, Southwest University of Science and Technology, Mianyang, China
- Xiaobin Wang
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Yuchen Wang
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
|
43
|
Song S, Ma C, Sun W, Xu J, Dang J, Yu Q. Efficient learning with augmented spikes: A case study with image classification. Neural Netw 2021; 142:205-212. [PMID: 34023641 DOI: 10.1016/j.neunet.2021.05.002] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 06/23/2020] [Revised: 02/15/2021] [Accepted: 05/06/2021] [Indexed: 10/21/2022]
Abstract
Efficient learning of spikes plays a valuable role in training spiking neural networks (SNNs) to have desired responses to input stimuli. However, current learning rules are limited to a binary form of spikes. The seemingly ubiquitous phenomenon of bursting in nervous systems suggests a new way to carry more information with spike bursts in addition to spike times. Based on this, we introduce an advanced form, the augmented spikes, where spike coefficients are used to carry additional information. How neurons could learn from and benefit from augmented spikes remains unclear. In this paper, we propose two new efficient learning rules to process spatiotemporal patterns composed of augmented spikes. Moreover, we examine the learning abilities of our methods with a synthetic recognition task of augmented spike patterns and two practical image classification tasks. Experimental results demonstrate that our rules are capable of extracting information carried by both the timing and the coefficient of spikes. Our proposed approaches achieve remarkable performance and good robustness under various noise conditions, as compared to benchmarks. The improved performance indicates the merits of augmented spikes and our learning rules, which could be beneficial to and generalized across a broad range of spike-based platforms.
Affiliation(s)
- Shiming Song
- Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, 300350, China
- Chenxiang Ma
- Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, 300350, China
- Wei Sun
- Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, 300350, China
- Junhai Xu
- Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, 300350, China
- Jianwu Dang
- Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, 300350, China
- Qiang Yu
- Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, 300350, China
|
44
|
Gardner B, Grüning A. Supervised Learning With First-to-Spike Decoding in Multilayer Spiking Neural Networks. Front Comput Neurosci 2021; 15:617862. [PMID: 33912021 PMCID: PMC8072060 DOI: 10.3389/fncom.2021.617862] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 10/15/2020] [Accepted: 03/08/2021] [Indexed: 11/18/2022] Open
Abstract
Experimental studies support the notion of spike-based neuronal information processing in the brain, with neural circuits exhibiting a wide range of temporally-based coding strategies to rapidly and efficiently represent sensory stimuli. Accordingly, it would be desirable to apply spike-based computation to tackling real-world challenges, and in particular transferring such theory to neuromorphic systems for low-power embedded applications. Motivated by this, we propose a new supervised learning method that can train multilayer spiking neural networks to solve classification problems based on a rapid, first-to-spike decoding strategy. The proposed learning rule supports multiple spikes fired by stochastic hidden neurons, and yet is stable by relying on first-spike responses generated by a deterministic output layer. In addition to this, we also explore several distinct, spike-based encoding strategies in order to form compact representations of presented input data. We demonstrate the classification performance of the learning rule as applied to several benchmark datasets, including MNIST. The learning rule is capable of generalizing from the data, and is successful even when used with constrained network architectures containing few input and hidden layer neurons. Furthermore, we highlight a novel encoding strategy, termed "scanline encoding," that can transform image data into compact spatiotemporal patterns for subsequent network processing. Designing constrained, but optimized, network structures and performing input dimensionality reduction has strong implications for neuromorphic applications.
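The decoding stage itself is simple: the predicted class is the output neuron that fires earliest. A minimal sketch, with hypothetical class labels and spike trains in milliseconds:

```python
def first_to_spike(output_trains):
    """First-to-spike readout. output_trains: {label: [spike times]}.

    Returns the label whose output neuron fires earliest; neurons that never
    fire cannot win. Returns None if no output neuron spikes at all."""
    firing = {label: min(ts) for label, ts in output_trains.items() if ts}
    return min(firing, key=firing.get) if firing else None

# Hypothetical trial: neuron "3" fires first, so the input is classified as 3.
pred = first_to_spike({"0": [12.0, 30.0], "3": [9.5], "7": []})
```

Because the decision is available as soon as the first output spike arrives, this readout yields the rapid, low-latency classification the abstract emphasizes.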
Affiliation(s)
- Brian Gardner
- Department of Computer Science, University of Surrey, Guildford, United Kingdom
- André Grüning
- Faculty of Electrical Engineering and Computer Science, University of Applied Sciences, Stralsund, Germany
|
45
|
WOLIF: An efficiently tuned classifier that learns to classify non-linear temporal patterns without hidden layers. APPL INTELL 2021. [DOI: 10.1007/s10489-020-01934-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Indexed: 10/23/2022]
|
46
|
Hwang S, Chang J, Oh MH, Min KK, Jang T, Park K, Yu J, Lee JH, Park BG. Low-Latency Spiking Neural Networks Using Pre-Charged Membrane Potential and Delayed Evaluation. Front Neurosci 2021; 15:629000. [PMID: 33679308 PMCID: PMC7935527 DOI: 10.3389/fnins.2021.629000] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 11/13/2020] [Accepted: 01/27/2021] [Indexed: 12/15/2022] Open
Abstract
Spiking neural networks (SNNs) have attracted many researchers' interest due to their biological plausibility and event-driven characteristics. In particular, many recent studies have reported high-performance SNNs, comparable to conventional analog-valued neural networks (ANNs), obtained by converting weights trained in ANNs into SNNs. However, unlike ANNs, SNNs have an inherent latency required to reach their best performance because of differences in neuron operation: SNNs perform not only spatial but also temporal integration, and information is encoded by spike trains rather than by values as in ANNs. Therefore, it takes time for an SNN to reach a steady state of performance. The latency is worse in deep networks and must be reduced for practical applications. In this work, we propose a pre-charged membrane potential (PCMP) for latency reduction in SNNs. A variety of neural network applications (e.g., classification and autoencoders using the MNIST and CIFAR-10 datasets) are trained and converted to SNNs to demonstrate the effect of the proposed approach. The latency of SNNs is successfully reduced without accuracy loss. In addition, we propose a delayed evaluation (DE) method, by which errors during the initial transient are discarded. The error spikes occurring in the initial transient are removed by DE, resulting in further latency reduction. DE can be used in combination with PCMP for further latency reduction. Finally, we also show the advantages of the proposed methods in reducing the number of spikes required to reach a steady state of performance, enabling energy-efficient computing.
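The latency effect of a pre-charged membrane potential can be seen in a toy integrate-and-fire neuron: starting the potential closer to threshold removes part of the integration time before the first output spike. All parameter values here are illustrative only, not taken from the paper:

```python
def time_to_first_spike(inputs, threshold=1.0, v0=0.0):
    """Integrate inputs step by step from an initial potential v0 (the
    pre-charge) and return the timestep of the first output spike, or None."""
    v = v0
    for t, x in enumerate(inputs, start=1):
        v += x
        if v >= threshold:
            return t
    return None

cold = time_to_first_spike([0.25] * 8)          # starts from rest
warm = time_to_first_spike([0.25] * 8, v0=0.5)  # pre-charged halfway to threshold
```

The pre-charged neuron fires earlier with the same input stream, which is the per-neuron mechanism behind the network-level latency reduction reported in the abstract.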
Affiliation(s)
- Byung-Gook Park
- Inter-university Semiconductor Research Center (ISRC) and Department of Electrical and Computer Engineering, Seoul National University, Seoul, South Korea
|
47
|
Zhang Y, Qu H, Luo X, Chen Y, Wang Y, Zhang M, Li Z. A new recursive least squares-based learning algorithm for spiking neurons. Neural Netw 2021; 138:110-125. [PMID: 33636484 DOI: 10.1016/j.neunet.2021.01.016] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Received: 12/16/2019] [Revised: 12/15/2020] [Accepted: 01/18/2021] [Indexed: 10/22/2022]
Abstract
Spiking neural networks (SNNs) are regarded as effective models for processing spatio-temporal information. However, the inherent complexity of temporal coding makes it an arduous task to put forward an effective supervised learning algorithm, which still puzzles researchers in this area. In this paper, we propose a Recursive Least Squares-Based Learning Rule (RLSBLR) for SNNs to generate desired spatio-temporal spike trains. During the learning process, the weight update is driven by a cost function defined by the difference between the membrane potential and the firing threshold. The amount of weight modification depends not only on the impact of the current error function but also on the previous error functions, which are evaluated under the current weights. To improve learning performance, we integrate modified synaptic delay learning into the proposed RLSBLR. We conduct experiments under different settings, such as spike train lengths, numbers of inputs, firing rates, noise levels, and learning parameters, to thoroughly investigate the performance of this learning algorithm. The proposed RLSBLR is compared with the competitive Perceptron-Based Spiking Neuron Learning Rule (PBSNLR) and Remote Supervised Method (ReSuMe). Experimental results demonstrate that the proposed RLSBLR achieves higher learning accuracy, higher efficiency, and better robustness against different types of noise. In addition, we apply the proposed RLSBLR to the open-source TIDIGITS database; the results show that our algorithm performs well in practical applications.
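The recursive least squares machinery behind such a rule, where each update weighs the current error against a running summary of all previous ones, can be sketched in its scalar textbook form. This is generic RLS, not the paper's exact RLSBLR; the variable names, forgetting factor, and target signal are assumptions for illustration:

```python
def rls_step(w, p, x, error, lam=0.99):
    """One scalar recursive-least-squares update.

    p is the decayed inverse input power (the scalar analogue of the inverse
    correlation matrix), so the gain k balances the current error against the
    history of previously seen samples; lam is the forgetting factor."""
    k = p * x / (lam + x * p * x)   # gain
    w = w + k * error               # error-driven weight update
    p = (p - k * x * p) / lam       # recursive update of p
    return w, p

# Fit w so that w * x tracks the hypothetical target y = 2 * x.
w, p = 0.0, 100.0
for x in [1.0, 0.5, 2.0, 1.5] * 10:
    w, p = rls_step(w, p, x, 2.0 * x - w * x)
```

In the paper's setting the error would instead be the membrane potential's deviation from the firing threshold at desired and undesired spike times, but the recursive structure of the update is the same.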
Affiliation(s)
- Yun Zhang
- Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China
- Hong Qu
- Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China
- Xiaoling Luo
- Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China
- Yi Chen
- Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China
- Yuchen Wang
- Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China
- Malu Zhang
- Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China
- Zefang Li
- China Coal Research Institute, Beijing 100013, PR China
|
48
|
Yu Q, Yao Y, Wang L, Tang H, Dang J, Tan KC. Robust Environmental Sound Recognition With Sparse Key-Point Encoding and Efficient Multispike Learning. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2021; 32:625-638. [PMID: 32203038 DOI: 10.1109/tnnls.2020.2978764] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Indexed: 06/10/2023]
Abstract
The capability for environmental sound recognition (ESR) can determine the fitness of individuals, enabling them to avoid dangers or pursue opportunities when critical sound events occur. The fundamental principles by which biological systems achieve such a remarkable ability remain mysterious. Additionally, the practical importance of ESR has attracted an increasing amount of research attention, but its chaotic and nonstationary nature continues to make it a challenging task. In this article, we propose a spike-based framework for the ESR task from a more brain-like perspective. Our framework is a unifying system with consistent integration of three major functional parts: sparse encoding, efficient learning, and robust readout. We first introduce a simple sparse encoding, where key points are used for feature representation, and demonstrate its generalization to both spike- and nonspike-based systems. Then, we evaluate the learning properties of different learning rules in detail, with our contributions added for improvement. Our results highlight the advantages of multispike learning, providing a selection reference for various spike-based developments. Finally, we combine the multispike readout with the other parts to form a complete ESR system. Experimental results show that our framework performs best compared to the other baseline approaches. In addition, we show that our spike-based framework has several advantageous characteristics, including early decision making, learning from small datasets, and ongoing dynamic processing. Our framework is the first attempt to apply the multispike characteristic of nervous neurons to ESR. The strong performance of our approach may draw more research effort toward pushing the boundaries of the spike-based paradigm to a new horizon.
|
49
|
Liu Z, Huang B, Wu J, Shi G. Lightweight Convolutional SNN for Address Event Representation Signal Recognition. ARTIF INTELL 2021. [DOI: 10.1007/978-3-030-93046-2_26] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 10/19/2022]
|
50
|
Yang JQ, Wang R, Ren Y, Mao JY, Wang ZP, Zhou Y, Han ST. Neuromorphic Engineering: From Biological to Spike-Based Hardware Nervous Systems. ADVANCED MATERIALS (DEERFIELD BEACH, FLA.) 2020; 32:e2003610. [PMID: 33165986 DOI: 10.1002/adma.202003610] [Citation(s) in RCA: 56] [Impact Index Per Article: 14.0] [Received: 05/26/2020] [Revised: 07/27/2020] [Indexed: 06/11/2023]
Abstract
The human brain is a sophisticated, high-performance biocomputer that processes multiple complex tasks in parallel with high efficiency and remarkably low power consumption. Scientists have long been pursuing an artificial intelligence (AI) that can rival the human brain. Spiking neural networks based on neuromorphic computing platforms simulate the architecture and information processing of the intelligent brain, providing new insights for building AIs. The rapid development of materials engineering, device physics, chip integration, and neuroscience has led to exciting progress in neuromorphic computing with the goal of overcoming the von Neumann bottleneck. Herein, fundamental knowledge related to the structures and working principles of neurons and synapses of the biological nervous system is reviewed. An overview is then provided on the development of neuromorphic hardware systems, from artificial synapses and neurons to spike-based neuromorphic computing platforms. It is hoped that this review will shed new light on the evolution of brain-like computing.
Affiliation(s)
- Jia-Qin Yang
- College of Electronics and Information Engineering, Shenzhen University, Shenzhen, 518060, P. R. China
- Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, 518060, P. R. China
- Ruopeng Wang
- College of Electronics and Information Engineering, Shenzhen University, Shenzhen, 518060, P. R. China
- Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, 518060, P. R. China
- Yi Ren
- Institute for Advanced Study, Shenzhen University, Shenzhen, 518060, P. R. China
- Jing-Yu Mao
- Institute for Advanced Study, Shenzhen University, Shenzhen, 518060, P. R. China
- Zhan-Peng Wang
- Institute for Advanced Study, Shenzhen University, Shenzhen, 518060, P. R. China
- Ye Zhou
- Institute for Advanced Study, Shenzhen University, Shenzhen, 518060, P. R. China
- Su-Ting Han
- Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, 518060, P. R. China
|