1. Zhang Y, Chen Y, Zhang J, Luo X, Zhang M, Qu H, Yi Z. Minicolumn-Based Episodic Memory Model With Spiking Neurons, Dendrites and Delays. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:7072-7086. [PMID: 36279337] [DOI: 10.1109/tnnls.2022.3213688]
Abstract
Episodic memory is fundamental to the brain's cognitive function, but how neuronal activity is temporally organized during its encoding and retrieval is still unknown. In this article, combining hippocampal structure with a spiking neural network (SNN), a new bionic spiking temporal memory (BSTM) model is proposed to explore the encoding, formation, and retrieval of episodic memory. For encoding, the spike-timing-dependent plasticity (STDP) learning algorithm and a proposed minicolumn selection algorithm are used to encode each input item into several active minicolumns. For formation, a sequential memory algorithm is proposed to store the contexts between items. For retrieval, a local retrieval algorithm and a global retrieval algorithm are proposed to retrieve sequence information, achieving multisentence prediction and multi-time-step prediction. All functions of the BSTM are based on bionic spiking neurons, which have biological characteristics including columnar and dendritic structures, firing and receiving spikes, and delayed transmission. To test the performance of the BSTM model, the Children's Book Test (CBT) data set was used to conduct a series of experiments under different settings, including changing the number of minicolumns, neurons, and sequences, and modifying sequence items. The experimental results show that, compared to other sequence memory algorithms, the proposed BSTM achieves higher accuracy and better robustness.
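Several of the entries in this list lean on the same STDP rule. For orientation, here is a minimal pair-based sketch of it; the function name and parameter values are illustrative assumptions, not any of the cited papers' implementations:

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0,
                w_min=0.0, w_max=1.0):
    """Pair-based STDP sketch: potentiate when the presynaptic spike
    precedes the postsynaptic spike, depress otherwise. The weight change
    decays exponentially with the spike-time difference."""
    dt = t_post - t_pre
    if dt > 0:                      # pre before post -> potentiation
        dw = a_plus * np.exp(-dt / tau)
    else:                           # post before (or with) pre -> depression
        dw = -a_minus * np.exp(dt / tau)
    return float(np.clip(w + dw, w_min, w_max))

w = stdp_update(0.5, t_pre=10.0, t_post=15.0)   # causal pair: weight grows
```

The exponential window means near-coincident spikes change the weight most, which is what lets STDP capture the item-to-item temporal structure these memory models store.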
2. Zhang Y, Shi K, Luo X, Chen Y, Wang Y, Qu H. A biologically inspired auto-associative network with sparse temporal population coding. Neural Netw 2023; 166:670-682. [PMID: 37604076] [DOI: 10.1016/j.neunet.2023.07.040]
Abstract
Associative systems have attracted increasing attention because they can store basic information and then infer details to match perception using an efficient self-organization algorithm. However, implementing an associative system on real-world data is relatively difficult. To address this issue, we propose a novel biologically inspired auto-associative (BIAA) network to explore the structure, encoding, and formation of associative memory, and to extend the approach to real-world applications. Our network is constructed by imitating the organization of cortical minicolumns, where each minicolumn contains many parallel biological spiking neurons. To allow the network to learn and predict one symbol per theta cycle, we incorporate synaptic delay and theta oscillation into the neuron dynamics. Subsequently, we design a sparse temporal population (STP) coding scheme that allows each input symbol to be represented as a stable, unique, and easily recallable sparsely distributed representation. By combining associative learning dynamics with STP coding, our network realizes efficient storage and inference in an ordered manner. Experimental results indicate that the proposed network successfully performs sequence retrieval from partial text and sequence recovery from distorted information. The BIAA network provides new insight into introducing biologically inspired mechanisms into associative systems and has enormous potential for hardware and software applications.
Affiliation(s)
- Ya Zhang
- Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China
- Kexin Shi
- Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China
- Xiaoling Luo
- Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China
- Yi Chen
- Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China
- Yucheng Wang
- Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China
- Hong Qu
- Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China
3. Xu Q, Shen J, Ran X, Tang H, Pan G, Liu JK. Robust Transcoding Sensory Information With Neural Spikes. IEEE Transactions on Neural Networks and Learning Systems 2022; 33:1935-1946. [PMID: 34665741] [DOI: 10.1109/tnnls.2021.3107449]
Abstract
Neural coding, including encoding and decoding, is one of the key problems in neuroscience for understanding how the brain uses neural signals to relate sensory perception and motor behaviors with neural systems. However, most existing studies deal only with the continuous signals of neural systems, neglecting a unique feature of biological neurons, the spike, which is the fundamental information unit of neural computation as well as a building block for brain-machine interfaces. To address these limitations, we propose a transcoding framework to encode multi-modal sensory information into neural spikes and then reconstruct stimuli from the spikes. Sensory information can be compressed to 10% of its original size in terms of neural spikes, yet 100% of the information can be re-extracted through reconstruction. Our framework can not only feasibly and accurately reconstruct dynamical visual and auditory scenes, but also rebuild stimulus patterns from functional magnetic resonance imaging (fMRI) brain activity. More importantly, it exhibits superb noise immunity against various types of artificial noise and background signals. The proposed framework provides efficient ways to perform multimodal feature representation and reconstruction in a high-throughput fashion, with potential usage for efficient neuromorphic computing in noisy environments.
4. Lan Y, Wang X, Wang Y. Spatio-Temporal Sequential Memory Model With Mini-Column Neural Network. Front Neurosci 2021; 15:650430. [PMID: 34121986] [PMCID: PMC8195288] [DOI: 10.3389/fnins.2021.650430]
Abstract
Memory is an intricate process involving various faculties of the brain and is a central component of human cognition. However, the exact mechanism that brings about memory in our brain remains elusive, and the performance of existing memory models is not satisfactory. To overcome these problems, this paper puts forward a brain-inspired spatio-temporal sequential memory model based on spiking neural networks (SNNs). Inspired by the structure of the neocortex, the proposed model is composed of many mini-columns of biological spiking neurons. Each mini-column represents one memory item, and the firing of different spiking neurons in the mini-column depends on the context of the previous inputs. Spike-timing-dependent plasticity (STDP) is used to update the connections between excitatory neurons and to form associations between two memory items. In addition, inhibitory neurons are employed to prevent incorrect predictions, which contributes to improving retrieval accuracy. Experimental results demonstrate that the proposed model can effectively store a large amount of data and accurately retrieve it when sufficient context is provided. This work not only provides a new memory model but also suggests how memory could be formed with excitatory/inhibitory neurons, spike-based encoding, and a mini-column structure.
Affiliation(s)
- Yawen Lan
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China; School of Information Engineering, Southwest University of Science and Technology, Mianyang, China
- Xiaobin Wang
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Yuchen Wang
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
5. Liu Q, Pan G, Ruan H, Xing D, Xu Q, Tang H. Unsupervised AER Object Recognition Based on Multiscale Spatio-Temporal Features and Spiking Neurons. IEEE Transactions on Neural Networks and Learning Systems 2020; 31:5300-5311. [PMID: 32054587] [DOI: 10.1109/tnnls.2020.2966058]
Abstract
This article proposes an unsupervised address event representation (AER) object recognition approach. The proposed approach consists of a novel multiscale spatio-temporal feature (MuST) representation of input AER events and a spiking neural network (SNN) using spike-timing-dependent plasticity (STDP) for object recognition with MuST. MuST extracts the features contained in both the spatial and temporal information of the AER event flow and forms an informative and compact feature spike representation. We show not only how MuST exploits spikes to convey information more effectively, but also how it benefits recognition using the SNN. The recognition process is performed in an unsupervised manner, which does not require specifying the desired status of every single neuron of the SNN, and thus can be flexibly applied to real-world recognition tasks. The experiments are performed on five AER datasets, including a new one named GESTURE-DVS. Extensive experimental results show the effectiveness and advantages of the proposed approach.
6. Yu S, Wu J, Xu H, Sun R, Sun L. Robustness Improvement of Visual Templates Matching Based on Frequency-Tuned Model in RatSLAM. Front Neurorobot 2020; 14:568091. [PMID: 33101002] [PMCID: PMC7546858] [DOI: 10.3389/fnbot.2020.568091]
Abstract
This paper describes an improved brain-inspired simultaneous localization and mapping (RatSLAM) algorithm that extracts visual features from saliency maps using a frequency-tuned (FT) model. In the traditional RatSLAM algorithm, the visual template feature is organized as a one-dimensional vector whose values depend only on pixel intensity; this feature is therefore susceptible to changes in illumination intensity. In contrast to this approach, which directly generates visual templates from raw RGB images, we propose an FT model that converts RGB images into saliency maps to obtain visual templates. The visual templates extracted from the saliency maps retain more of the feature information in the original images. Our experimental results demonstrate that the accuracy of loop closure detection was improved, as measured by the number of loop closures detected by our method compared with the traditional RatSLAM system. We additionally verified that the proposed FT model-based visual templates improve the robustness of familiar visual scene identification in RatSLAM.
Affiliation(s)
- Shumei Yu
- School of Mechanical and Electrical Engineering, Soochow University, Suzhou, China
- Junyi Wu
- School of Mechanical and Electrical Engineering, Soochow University, Suzhou, China
- Haidong Xu
- School of Mechanical and Electrical Engineering, Soochow University, Suzhou, China
- Rongchuan Sun
- School of Mechanical and Electrical Engineering, Soochow University, Suzhou, China
- Lining Sun
- School of Mechanical and Electrical Engineering, Soochow University, Suzhou, China
7. Liang Q, Zeng Y, Xu B. Temporal-Sequential Learning With a Brain-Inspired Spiking Neural Network and Its Application to Musical Memory. Front Comput Neurosci 2020; 14:51. [PMID: 32714173] [PMCID: PMC7343962] [DOI: 10.3389/fncom.2020.00051]
Abstract
Sequence learning is a fundamental cognitive function of the brain. However, the ways in which sequential information is represented and memorized are not dealt with satisfactorily by existing models. To overcome this deficiency, this paper introduces a spiking neural network based on psychological and neurobiological findings at multiple scales. Compared with existing methods, our model has four novel features: (1) It contains several collaborative subnetworks similar to those in brain regions with different cognitive functions. The individual building blocks of the simulated areas are neural functional minicolumns composed of biologically plausible neurons. Both excitatory and inhibitory connections between neurons are modulated dynamically using a spike-timing-dependent plasticity learning rule. (2) Inspired by the mechanisms of the brain's cortical-striatal loop, a dependent timing module is constructed to encode temporal information, which is essential in sequence learning but has not been handled well by traditional algorithms. (3) Goal-based and episodic retrievals can be achieved at different time scales. (4) Musical memory is used as an application to validate the model. Experiments show that the model can store a large number of melodies and recall them with high accuracy. In addition, it can recall an entire melody given only an episode of it, or the melody played at a different pace.
Affiliation(s)
- Qian Liang
- Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Yi Zeng
- Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China; Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- Bo Xu
- Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
8. Yuan M, Wu X, Yan R, Tang H. Reinforcement Learning in Spiking Neural Networks with Stochastic and Deterministic Synapses. Neural Comput 2019; 31:2368-2389. [PMID: 31614099] [DOI: 10.1162/neco_a_01238]
Abstract
Though they succeed in solving various learning tasks, most existing reinforcement learning (RL) models fail to take into account the complexity of synaptic plasticity in the neural system; models implementing reinforcement learning with spiking neurons involve only a single plasticity mechanism. Here, we propose a neurally realistic reinforcement learning model that coordinates the plasticities of two types of synapses: stochastic and deterministic. The plasticity of the stochastic synapse is achieved by a hedonistic rule that modulates the release probability of synaptic neurotransmitter, while the plasticity of the deterministic synapse is achieved by a variant of a reward-modulated spike-timing-dependent plasticity rule that modulates the synaptic strengths. We evaluate the proposed learning model on two benchmark tasks: learning a logic gate function and the 19-state random walk problem. Experimental results show that the coordination of diverse synaptic plasticities enables the RL model to learn rapidly and stably.
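The stochastic-synapse idea in this abstract can be caricatured in a few lines. The sketch below is a loose, hypothetical version of reward-modulated release-probability adjustment; the function name, learning rate, and eligibility term are assumptions for illustration, not the authors' hedonistic rule:

```python
import numpy as np

def hedonistic_update(p_release, reward, eligibility, lr=0.1):
    """Nudge a synapse's neurotransmitter release probability in the
    direction of reward x eligibility, keeping it a valid probability.
    (Illustrative rule only; the cited paper's formulation is more detailed.)"""
    return float(np.clip(p_release + lr * reward * eligibility, 0.0, 1.0))

p = 0.5
p = hedonistic_update(p, reward=+1.0, eligibility=0.2)   # rewarded: p rises
p = hedonistic_update(p, reward=-1.0, eligibility=0.2)   # punished: p falls
```

The point of the sketch is the coordination argument: this rule changes *whether* a spike is transmitted, while an STDP-like rule on deterministic synapses changes *how strongly* it is.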
Affiliation(s)
- Mengwen Yuan
- College of Computer Science, Sichuan University, Chengdu 610065, China
- Xi Wu
- College of Computer Science, Sichuan University, Chengdu 610065, China
- Rui Yan
- College of Computer Science, Sichuan University, Chengdu 610065, China
- Huajin Tang
- College of Computer Science, Sichuan University, Chengdu 610065, China; College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China
9. Liu K, Cui X, Zhong Y, Kuang Y, Wang Y, Tang H, Huang R. A Hardware Implementation of SNN-Based Spatio-Temporal Memory Model. Front Neurosci 2019; 13:835. [PMID: 31447641] [PMCID: PMC6697024] [DOI: 10.3389/fnins.2019.00835]
Abstract
Simulating the human brain with hardware has been an attractive project for many years, since memory is one of the fundamental functions of our brains and several memory models have been proposed to unveil how memory is organized in the brain. In this paper, we adopt the spatio-temporal memory (STM) model, in which both associative memory and episodic memory are analyzed and emulated, as the reference for our hardware network architecture. Furthermore, some reasonable adaptations are made for the hardware implementation. We finally implement this memory model on an FPGA, and additional experiments are performed to fine-tune the parameters of the network deployed on the FPGA.
Affiliation(s)
- Kefei Liu
- Institute of Microelectronics, Peking University, Beijing, China
- Xiaoxin Cui
- Institute of Microelectronics, Peking University, Beijing, China; National Key Laboratory of Science and Technology on Micro/Nano Fabrication, Peking University, Beijing, China
- Yi Zhong
- Institute of Microelectronics, Peking University, Beijing, China
- Yisong Kuang
- Institute of Microelectronics, Peking University, Beijing, China
- Yuan Wang
- Institute of Microelectronics, Peking University, Beijing, China; National Key Laboratory of Science and Technology on Micro/Nano Fabrication, Peking University, Beijing, China
- Huajin Tang
- College of Computer Science, Sichuan University, Chengdu, China
- Ru Huang
- Institute of Microelectronics, Peking University, Beijing, China; National Key Laboratory of Science and Technology on Micro/Nano Fabrication, Peking University, Beijing, China
10. Yu Q, Li H, Tan KC. Spike Timing or Rate? Neurons Learn to Make Decisions for Both Through Threshold-Driven Plasticity. IEEE Transactions on Cybernetics 2019; 49:2178-2189. [PMID: 29993593] [DOI: 10.1109/tcyb.2018.2821692]
Abstract
Spikes play an essential role in information transmission in the central nervous system, but how neurons learn from them remains a challenging question. Most algorithms study how to train spiking neurons to process patterns encoded under a sole assumption of either a rate or a temporal code. Is there a general learning algorithm capable of processing both codes, regardless of the intense debate over them within the neuroscience community? In this paper, we propose several threshold-driven plasticity algorithms to address this question. In addition to formulating the algorithms, we provide proofs of several properties, such as robustness and convergence. The experimental results illustrate that our algorithms are simple, effective, and efficient for training neurons to learn spike patterns. Due to their simplicity and high efficiency, our algorithms would be potentially beneficial for both software and hardware implementations. Neurons trained with our algorithms can also detect and recognize embedded features in background sensory activity. With the proposed algorithms, a single neuron can successfully perform multicategory classification by making decisions based on its output spike count in response to each category. The spike patterns being processed can be encoded with both spike rates and precise timings. When afferent spike timings matter, neurons automatically extract temporal features without being explicitly instructed when to fire.
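The rate-versus-temporal-code distinction this abstract debates can be made concrete with two toy read-outs of the same spike train. The helper names below are hypothetical, purely for illustration:

```python
def rate_readout(spike_times, window):
    """Rate code: only the spike count per time window carries information."""
    return len(spike_times) / window

def latency_readout(spike_times):
    """Temporal code: the precise timing of the first spike carries
    information; the total count is ignored."""
    return min(spike_times) if spike_times else float("inf")

spikes = [3.0, 7.5, 12.0, 18.2]            # spike times in ms
rate = rate_readout(spikes, window=20.0)   # 0.2 spikes/ms
latency = latency_readout(spikes)          # 3.0 ms
```

A jittered copy of `spikes` with the same count gives an identical rate read-out but a different latency read-out, which is why a learning rule that handles both codes is non-trivial.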
11. Yu S, Zheng N, Ma Y, Wu H, Chen B. A Novel Brain Decoding Method: A Correlation Network Framework for Revealing Brain Connections. IEEE Trans Cogn Dev Syst 2019. [DOI: 10.1109/tcds.2018.2854274]
12. Chen S, Zhang S, Shang J, Chen B, Zheng N. Brain-Inspired Cognitive Model With Attention for Self-Driving Cars. IEEE Trans Cogn Dev Syst 2019. [DOI: 10.1109/tcds.2017.2717451]
13. Zhang M, Qu H, Belatreche A, Chen Y, Yi Z. A Highly Effective and Robust Membrane Potential-Driven Supervised Learning Method for Spiking Neurons. IEEE Transactions on Neural Networks and Learning Systems 2019; 30:123-137. [PMID: 29993588] [DOI: 10.1109/tnnls.2018.2833077]
Abstract
Spiking neurons are becoming increasingly popular owing to their biological plausibility and promising computational properties. Unlike traditional rate-based neural models, spiking neurons encode information in the temporal patterns of the transmitted spike trains, which makes them more suitable for processing spatiotemporal information. One of the fundamental computations of spiking neurons is to transform streams of input spike trains into precisely timed firing activity. However, the existing learning methods used to realize such computation often result in relatively low accuracy and poor robustness to noise. To address these limitations, we propose a novel, highly effective, and robust membrane potential-driven supervised learning (MemPo-Learn) method, which enables the trained neurons to generate desired spike trains with higher precision, higher efficiency, and better noise robustness than current state-of-the-art spiking neuron learning methods. While traditional spike-driven learning methods use an error function based on the difference between the actual and desired output spike trains, the proposed MemPo-Learn method employs an error function based on the difference between the output neuron's membrane potential and its firing threshold. The efficiency of the proposed learning method is further improved through the introduction of an adaptive strategy, called the skip-scan training strategy, that selectively identifies the time steps at which to apply weight adjustment. This strategy enables the MemPo-Learn method to effectively and efficiently learn the desired output spike train even when much smaller time steps are used. In addition, the learning rule of MemPo-Learn is further improved to help mitigate the impact of input noise on the timing accuracy and reliability of the neuron firing dynamics. The proposed learning method is thoroughly evaluated on synthetic data and further demonstrated on real-world classification tasks. Experimental results show that the proposed method can achieve high learning accuracy with a significant improvement in learning time and better robustness to different types of noise.
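The membrane-potential-driven idea is easiest to see against a concrete neuron model. Below is a minimal leaky integrate-and-fire sketch (illustrative parameters and a hypothetical function name, not the paper's exact model); the recorded potential trace is the kind of dense signal a method like MemPo-Learn compares against the firing threshold, instead of comparing sparse output spike trains:

```python
def lif_simulate(input_current, dt=1.0, tau_m=10.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire sketch: the membrane potential leaks toward
    rest, integrates input, and a spike is emitted (with reset) whenever
    the potential crosses threshold."""
    v = v_rest
    spikes, trace = [], []
    for t, i_t in enumerate(input_current):
        v += dt / tau_m * (v_rest - v) + i_t   # leak + input integration
        if v >= v_thresh:                      # threshold crossing
            spikes.append(t)
            v = v_reset                        # reset after the spike
        trace.append(v)                        # post-reset potential record
    return spikes, trace

spikes, trace = lif_simulate([0.3] * 20)       # constant drive, regular firing
```

An error defined on `trace` relative to `v_thresh` is available at every time step, whereas an error defined only on the sparse `spikes` list is not; that denser error signal is the intuition behind membrane potential-driven learning.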
14. Zheng Y, Li S, Yan R, Tang H, Tan KC. Sparse Temporal Encoding of Visual Features for Robust Object Recognition by Spiking Neurons. IEEE Transactions on Neural Networks and Learning Systems 2018; 29:5823-5833. [PMID: 29994102] [DOI: 10.1109/tnnls.2018.2812811]
Abstract
Robust object recognition in spiking neural systems remains challenging in the neuromorphic computing area, as it requires both effective encoding of sensory information and its integration with downstream learning neurons. We target this problem by developing a spiking neural system consisting of sparse temporal encoding and a temporal classifier. We propose a sparse temporal encoding algorithm that exploits both spatial and temporal information derived from a spike-timing-dependent plasticity-based HMAX feature extraction process. The temporal feature representation thus becomes more appropriate to integrate with a temporal classifier based on spiking neurons than with a nontemporal classifier. The algorithm has been validated on two benchmark data sets, and the results show that the temporal feature encoding and learning-based method achieves high recognition accuracy. The proposed model provides an efficient approach to perform feature representation and recognition in a consistent temporal learning framework, which is easily adapted to neuromorphic implementations.
15. Ntinas V, Vourkas I, Abusleme A, Sirakoulis GC, Rubio A. Experimental Study of Artificial Neural Networks Using a Digital Memristor Simulator. IEEE Transactions on Neural Networks and Learning Systems 2018; 29:5098-5110. [PMID: 29994426] [DOI: 10.1109/tnnls.2018.2791458]
Abstract
This paper presents a fully digital implementation of a memristor hardware (HW) simulator, as the core of an emulator, based on a behavioral model of voltage-controlled threshold-type bipolar memristors. Compared to other analog solutions, the proposed digital design is compact, easily reconfigurable, demonstrates very good matching with the mathematical model on which it is based, and complies with all the required features for memristor emulators. We validated its functionality using Altera Quartus II and ModelSim tools targeting low-cost yet powerful field-programmable gate array families. We tested its suitability for complex memristive circuits as well as its synapse functioning in artificial neural networks, implementing examples of associative memory and unsupervised learning of spatiotemporal correlations in parallel input streams using a simplified spike-timing-dependent plasticity. We provide the full circuit schematics of all our digital circuit designs and comment on the required HW resources and their scaling trends, thus presenting a design framework for applications based on our HW simulator.
16. Zheng N, Mazumder P. Online Supervised Learning for Hardware-Based Multilayer Spiking Neural Networks Through the Modulation of Weight-Dependent Spike-Timing-Dependent Plasticity. IEEE Transactions on Neural Networks and Learning Systems 2018; 29:4287-4302. [PMID: 29990088] [DOI: 10.1109/tnnls.2017.2761335]
Abstract
In this paper, we propose an online learning algorithm for supervised learning in multilayer spiking neural networks (SNNs). It is found that the spike timings of neurons in an SNN can be exploited to estimate the gradients associated with each synapse. With the proposed method of estimating gradients, learning similar to the stochastic gradient descent process employed in a conventional artificial neural network (ANN) can be achieved. In addition to conventional layer-by-layer backpropagation, a one-pass direct backpropagation is possible using the proposed learning algorithm. Two neural networks, with one and two hidden layers, are employed as examples to demonstrate the effectiveness of the proposed learning algorithms. Several techniques for more effective learning are discussed, including utilizing a random refractory period to avoid saturation of spikes, employing a quantization noise injection technique and pseudorandom initial conditions to decorrelate spike timings, and leveraging the progressive precision in an SNN to reduce inference latency and energy. Extensive parametric simulations are conducted to examine the aforementioned techniques. The learning algorithm is developed with consideration for ease of hardware implementation and compatibility with classic ANN-based learning. Therefore, the proposed algorithm not only enjoys the high energy efficiency and good scalability of an SNN in its specialized hardware but also benefits from the well-developed theory and techniques of conventional ANN-based learning. The Modified National Institute of Standards and Technology (MNIST) database benchmark test is conducted to verify the newly proposed learning algorithm. Correct classification rates of 97.2% and 97.8% are achieved for the one-hidden-layer and two-hidden-layer networks, respectively. Moreover, a brief discussion of hardware implementations is presented for two mainstream architectures.
17. Tang H, Yan R, Tan KC. Cognitive Navigation by Neuro-Inspired Localization, Mapping, and Episodic Memory. IEEE Trans Cogn Dev Syst 2018. [DOI: 10.1109/tcds.2017.2776965]
18. Saputra AA, Toda Y, Botzheim J, Kubota N. Neuro-Activity-Based Dynamic Path Planner for 3-D Rough Terrain. IEEE Trans Cogn Dev Syst 2018. [DOI: 10.1109/tcds.2017.2711013]
19. Xing D, Qian C, Li H, Zhang S, Zhang Q, Hao Y, Zheng X, Wu Z, Wang Y, Pan G. Predicting Spike Trains from PMd to M1 Using Discrete Time Rescaling Targeted GLM. IEEE Trans Cogn Dev Syst 2018. [DOI: 10.1109/tcds.2017.2707466]
|
20
|
Xu X, Jin X, Yan R, Fang Q, Lu W. Visual Pattern Recognition Using Enhanced Visual Features and PSD-Based Learning Rule. IEEE Trans Cogn Dev Syst 2018. [DOI: 10.1109/tcds.2017.2769166] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
21
|
Zhang M, Qu H, Belatreche A, Xie X. EMPD: An Efficient Membrane Potential Driven Supervised Learning Algorithm for Spiking Neurons. IEEE Trans Cogn Dev Syst 2018. [DOI: 10.1109/tcds.2017.2651943] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
22
|
Chen Q, Luley R, Wu Q, Bishop M, Linderman RW, Qiu Q. AnRAD: A Neuromorphic Anomaly Detection Framework for Massive Concurrent Data Streams. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2018; 29:1622-1636. [PMID: 28328516 DOI: 10.1109/tnnls.2017.2676110] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
The evolution of high-performance computing technologies has enabled the large-scale implementation of neuromorphic models and pushed research in computational intelligence into a new era. Among machine learning applications, unsupervised detection of anomalous streams is especially challenging due to the requirements of detection accuracy and real-time performance. Designing a computing framework that harnesses the growing computing power of multicore systems while maintaining high sensitivity and specificity to anomalies is an urgent research topic. In this paper, we propose anomaly recognition and detection (AnRAD), a bioinspired detection framework that performs probabilistic inference. We analyze the feature dependency and develop a self-structuring method that learns an efficient confabulation network from unlabeled data. This network is capable of fast incremental learning, continuously refining its knowledge base with streaming data. Compared with several existing anomaly detection approaches, our method provides competitive detection quality. Furthermore, we exploit the massively parallel structure of the AnRAD framework. Our implementations of the detection algorithm on the graphics processing unit and the Xeon Phi coprocessor both obtain substantial speedups over the sequential implementation on a general-purpose microprocessor. The framework provides real-time service to concurrent data streams within diversified knowledge contexts and can be applied to large problems with multiple local patterns. Experimental results demonstrate high computing performance and memory efficiency. For vehicle behavior detection, the framework is able to monitor up to 16000 vehicles (data streams) and their interactions in real time with a single commodity coprocessor, using less than 0.2 ms per testing subject. Finally, the detection network is ported to our spiking neural network simulator to show the potential of adapting to emerging neuromorphic architectures.
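The incremental learning described in this abstract, scoring each new observation against co-occurrence statistics accumulated from the stream so far, can be caricatured in a few lines. This is a toy sketch, not the AnRAD confabulation network; the class, its methods, and the feature strings are all hypothetical.

```python
from collections import defaultdict

class CooccurrenceAnomalyDetector:
    """Toy streaming detector: rare feature co-occurrences score as anomalous."""

    def __init__(self):
        self.pair_counts = defaultdict(int)  # (feature_a, feature_b) -> count
        self.seen = 0                        # observations consumed so far

    def score(self, features):
        # 0.0 = feature pairs always seen together before; 1.0 = never seen.
        pairs = [(a, b) for i, a in enumerate(features) for b in features[i + 1:]]
        if self.seen == 0:
            return 1.0
        return 1.0 - sum(self.pair_counts[p] for p in pairs) / (len(pairs) * self.seen)

    def update(self, features):
        # Incremental learning: refine counts as the stream is consumed.
        for i, a in enumerate(features):
            for b in features[i + 1:]:
                self.pair_counts[(a, b)] += 1
        self.seen += 1

det = CooccurrenceAnomalyDetector()
for _ in range(100):
    det.update(("lane=1", "speed=slow", "dir=north"))   # normal traffic pattern
normal_score = det.score(("lane=1", "speed=slow", "dir=north"))
odd_score = det.score(("lane=1", "speed=fast", "dir=south"))  # unseen combination
```

Because the counts are updated in place, each data stream can be scored and learned from in a single pass, which is the property that makes this style of detector amenable to the massive concurrency the paper targets.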
|
23
|
Ren R, Hung T, Tan KC. A Generic Deep-Learning-Based Approach for Automated Surface Inspection. IEEE TRANSACTIONS ON CYBERNETICS 2018; 48:929-940. [PMID: 28252414 DOI: 10.1109/tcyb.2017.2668395] [Citation(s) in RCA: 55] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
Automated surface inspection (ASI) is a challenging task in industry, as collecting a training data set is usually costly and related methods are highly data-set-dependent. In this paper, a generic approach that requires only a small amount of training data for ASI is proposed. First, the approach builds a classifier on the features of image patches, where the features are transferred from a pretrained deep learning network. Next, pixel-wise prediction is obtained by convolving the trained classifier over the input image. Experiments on three public data sets and one industrial data set are carried out, involving two tasks: 1) image classification and 2) defect segmentation. The results of the proposed algorithm are compared against several of the best benchmarks in the literature. In the classification tasks, the proposed method improves accuracy by 0.66%-25.50%. In the segmentation tasks, it reduces error escape rates by 6.00%-19.00% on three defect types and improves accuracies by 2.29%-9.86% on all seven defect types. In addition, the proposed method achieves a 0.0% error escape rate in the segmentation task on the industrial data.
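The pipeline this abstract describes (extract features per patch, train a patch classifier, then slide it over the image for a pixel-wise prediction map) can be sketched compactly. In this illustration, simple intensity statistics stand in for features transferred from a pretrained deep network, and a nearest-centroid rule stands in for the trained classifier; every name, parameter, and the synthetic "defect" are hypothetical.

```python
import numpy as np

PATCH = 8

def features(patch):
    # Stand-in for features transferred from a pretrained deep network:
    # here, simple intensity statistics of the patch.
    return np.array([patch.mean(), patch.std(), patch.max() - patch.min()])

rng = np.random.default_rng(0)
# Toy training patches: "defects" are bright regions on a dark background.
normal = np.array([features(rng.normal(0.2, 0.05, (PATCH, PATCH))) for _ in range(50)])
defect = np.array([features(rng.normal(0.8, 0.05, (PATCH, PATCH))) for _ in range(50)])
c_normal, c_defect = normal.mean(axis=0), defect.mean(axis=0)

def defect_score(patch):
    # Nearest-centroid stand-in for the trained patch classifier.
    f = features(patch)
    dn = np.linalg.norm(f - c_normal)
    dd = np.linalg.norm(f - c_defect)
    return dn / (dn + dd)  # ~1 near the defect centroid, ~0 near normal

def score_map(image, stride=4):
    # Slide the patch classifier over the image ("convolving" it)
    # to obtain a coarse pixel-wise defect map.
    h, w = image.shape
    rows, cols = (h - PATCH) // stride + 1, (w - PATCH) // stride + 1
    out = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            win = image[i * stride:i * stride + PATCH, j * stride:j * stride + PATCH]
            out[i, j] = defect_score(win)
    return out

img = rng.normal(0.2, 0.05, (32, 32))
img[8:20, 8:20] = rng.normal(0.8, 0.05, (12, 12))  # injected defect region
pmap = score_map(img)
```

The appeal of this design, and a plausible reason the paper's approach needs little training data, is that only the small patch classifier is trained, while the heavy feature extractor is reused as-is.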
|
24
|
Ferguson MK, Ronay A, Lee YTT, Law KH. Detection and Segmentation of Manufacturing Defects with Convolutional Neural Networks and Transfer Learning. SMART AND SUSTAINABLE MANUFACTURING SYSTEMS 2018; 2:10.1520/SSMS20180033. [PMID: 31093604 PMCID: PMC6512995 DOI: 10.1520/ssms20180033] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Quality control is a fundamental component of many manufacturing processes, especially those involving casting or welding. However, manual quality control procedures are often time-consuming and error-prone. In order to meet the growing demand for high-quality products, the use of intelligent visual inspection systems is becoming essential in production lines. Recently, convolutional neural networks (CNNs) have shown outstanding performance in both image classification and localization tasks. In this article, a system is proposed for the identification of casting defects in X-ray images, based on the Mask Region-based CNN architecture. The proposed system performs defect detection and segmentation on input images simultaneously, making it suitable for a range of defect detection tasks. It is shown that training the network to simultaneously perform defect detection and defect instance segmentation results in higher defect detection accuracy than training on defect detection alone. Transfer learning is leveraged to reduce the training data demands and increase the prediction accuracy of the trained model. More specifically, the model is first trained with two large openly available image data sets before fine-tuning on a relatively small metal casting X-ray data set. The accuracy of the trained model exceeds state-of-the-art performance on the GRIMA database of X-ray images (GDXray) Castings data set and is fast enough to be used in a production setting. The system also performs well on the GDXray Welds data set. A number of in-depth studies are conducted to explore how transfer learning, multi-task learning, and multi-class learning influence the performance of the trained system.
Affiliation(s)
- Max K Ferguson
- Stanford University, Civil and Environmental Engineering, Stanford, CA, USA
- Ak Ronay
- National Institute of Standards and Technology, Systems Integration Division, Gaithersburg, MD, USA
- Yung-Tsun Tina Lee
- National Institute of Standards and Technology, Systems Integration Division, Gaithersburg, MD, USA
- Kincho H Law
- Stanford University, Civil and Environmental Engineering, Stanford, CA, USA
|
25
|
Zhang C, Lim P, Qin AK, Tan KC. Multiobjective Deep Belief Networks Ensemble for Remaining Useful Life Estimation in Prognostics. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2017; 28:2306-2318. [PMID: 27416606 DOI: 10.1109/tnnls.2016.2582798] [Citation(s) in RCA: 86] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
In numerous industrial applications where safety, efficiency, and reliability are among the primary concerns, condition-based maintenance (CBM) is often the most effective and reliable maintenance policy. Prognostics, as one of the key enablers of CBM, involves the core task of estimating the remaining useful life (RUL) of the system. Neural network-based approaches have produced promising results on RUL estimation, although their performance is influenced by handcrafted features and manually specified parameters. In this paper, we propose a multiobjective deep belief network ensemble (MODBNE) method. MODBNE employs a multiobjective evolutionary algorithm integrated with the traditional deep belief network (DBN) training technique to evolve multiple DBNs simultaneously, subject to accuracy and diversity as two conflicting objectives. The eventually evolved DBNs are combined into an ensemble model used for RUL estimation, whose combination weights are optimized via a single-objective differential evolution algorithm with a task-oriented objective function. We evaluate the proposed method on several prognostic benchmarking data sets and compare it with existing approaches. Experimental results demonstrate the superiority of the proposed method.
|
26
|
Zu C, Zhu L, Zhang D. Iterative sparsity score for feature selection and its extension for multimodal data. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2016.08.124] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
27
|
Goh SK, Abbass HA, Tan KC, Al-Mamun A, Wang C, Guan C. Automatic EEG Artifact Removal Techniques by Detecting Influential Independent Components. IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE 2017. [DOI: 10.1109/tetci.2017.2690913] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
28
|
Ren R, Hung T, Tan KC. Automatic Microstructure Defect Detection of Ti-6Al-4V Titanium Alloy by Regions-Based Graph. IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE 2017. [DOI: 10.1109/tetci.2017.2669523] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
29
|
Cognitive memory and mapping in a brain-like system for robotic navigation. Neural Netw 2016; 87:27-37. [PMID: 28064015 DOI: 10.1016/j.neunet.2016.08.015] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2015] [Revised: 07/01/2016] [Accepted: 08/04/2016] [Indexed: 11/21/2022]
Abstract
Electrophysiological studies in animals can provide great insight into developing brain-like models of spatial cognition for robots. These studies suggest that the spatial ability of animals requires proper functioning of the hippocampus and the entorhinal cortex (EC). The involvement of the hippocampus in spatial cognition has been extensively studied, both in animal experiments and in theoretical work, such as the brain-based models by Edelman and colleagues. In this work, we extend these earlier models, with a particular focus on the spatial coding properties of the EC and how it functions as an interface between the hippocampus and the neocortex, as proposed by previous work. By realizing the cognitive memory and mapping functions of the hippocampus and the EC, respectively, we develop a neurobiologically inspired system that enables a mobile robot to perform task-based navigation in a maze environment.
|