1. Kohan A, Rietman EA, Siegelmann HT. Signal Propagation: The Framework for Learning and Inference in a Forward Pass. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:8585-8596. [PMID: 37022224] [DOI: 10.1109/tnnls.2022.3230914]
Abstract
We propose a new learning framework, signal propagation (sigprop), which propagates a learning signal and updates neural network parameters via a forward pass, as an alternative to backpropagation (BP). In sigprop, there is only the forward path, for both inference and learning. Consequently, no structural or computational constraints beyond the inference model itself, such as the feedback connectivity, weight transport, or backward pass required by BP-based approaches, are necessary for learning to take place. That is, sigprop enables global supervised learning with only a forward path, which is ideal for parallel training of layers or modules. In biology, this explains how neurons without feedback connections can still receive a global learning signal; in hardware, it provides an approach to global supervised learning without backward connectivity. By construction, sigprop is more compatible than BP with models of learning in the brain and in hardware, including alternative approaches that relax learning constraints, and we demonstrate that sigprop is also more efficient in time and memory than those alternatives. To further explain the behavior of sigprop, we provide evidence that sigprop produces learning signals that are useful relative to BP. To further support its relevance to biological and hardware learning, we use sigprop to train continuous-time neural networks with Hebbian updates and to train spiking neural networks (SNNs) using only the membrane voltage or biologically and hardware-compatible surrogate functions.
2. Khatiboun DF, Rezaeiyan Y, Ronchini M, Sadeghi M, Zamani M, Moradi F. Digital Hardware Implementation of ReSuMe Learning Algorithm for Spiking Neural Networks. Annual International Conference of the IEEE Engineering in Medicine and Biology Society 2023; 2023:1-4. [PMID: 38083592] [DOI: 10.1109/embc40787.2023.10340282]
Abstract
In this paper, we demonstrate the feasibility of an FPGA implementation, as well as a 180 nm CMOS circuit design, of a biologically plausible supervised learning algorithm (ReSuMe). Building on the spike-timing-dependent plasticity (STDP) learning phenomenon, the design provides a fully configurable implementation of the STDP learning window function, so that the learning process can be adjusted and optimized for each application. The CMOS implementation in a 180 nm technology node, supplied at 1.8 V, has a core area of 0.78 mm² and verifies the suitability of an on-chip ReSuMe learning implementation and its capacity for integration with a multitude of external, already designed spiking neural network (SNN) structures.
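The configurable STDP learning window at the heart of this design can be illustrated with the common double-exponential form; the parameterization below (amplitudes and time constants) is a standard textbook sketch, not the paper's hardware parameterization:

```python
import math

def stdp_window(dt, a_plus=0.1, a_minus=0.12, tau_plus=20.0, tau_minus=20.0):
    """Double-exponential STDP window.

    dt = t_post - t_pre in ms. A causal pairing (pre before post, dt >= 0)
    yields potentiation; an anti-causal pairing yields depression. The four
    parameters are what a configurable hardware window would expose.
    """
    if dt >= 0:
        return a_plus * math.exp(-dt / tau_plus)
    return -a_minus * math.exp(dt / tau_minus)

# Causal pairs potentiate, anti-causal pairs depress:
assert stdp_window(5.0) > 0 > stdp_window(-5.0)
```

Tuning `a_plus`/`a_minus` and `tau_plus`/`tau_minus` per application is the software analogue of the configurability the chip provides.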
3. Zhao J, Yang J, Wang J, Wu W. Spiking Neural Network Regularization With Fixed and Adaptive Drop-Keep Probabilities. IEEE Transactions on Neural Networks and Learning Systems 2022; 33:4096-4109. [PMID: 33571100] [DOI: 10.1109/tnnls.2021.3055825]
Abstract
Dropout and DropConnect are two techniques for regularizing neural network models that have achieved state-of-the-art results on several benchmarks. In this paper, to improve the generalization capability of spiking neural networks (SNNs), the two drop techniques are first applied to the state-of-the-art SpikeProp learning algorithm, yielding two improved learning algorithms: SPDO (SpikeProp with Dropout) and SPDC (SpikeProp with DropConnect). Since a higher membrane potential in a biological neuron implies a higher probability of neural activation, three adaptive drop algorithms, SpikeProp with Adaptive Dropout (SPADO), SpikeProp with Adaptive DropConnect (SPADC), and SpikeProp with Group Adaptive Drop (SPGAD), are proposed that adaptively adjust the keep probability while training SNNs. A convergence theorem for SPDC is proven under the assumptions of a bounded norm of the connection weights and a finite number of equilibria. In addition, the five proposed algorithms are run in a collaborative neurodynamic optimization framework to improve the learning performance of SNNs. Experimental results on four benchmark data sets demonstrate that the three adaptive algorithms converge faster than SpikeProp, SPDO, and SPDC, and that the generalization errors of the five proposed algorithms are significantly smaller than that of SpikeProp. The results also show that the collaborative neurodynamic optimization versions of the five algorithms improve on several measures.
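The distinction between the two drop techniques, and the adaptive keep probability tied to membrane potential, can be sketched as follows; the linear mapping from potential to keep probability and all constants are illustrative assumptions, not the paper's rules:

```python
import numpy as np

def dropout_mask(n_units, p_keep, rng):
    """Dropout: zero out whole units with probability 1 - p_keep."""
    return (rng.random(n_units) < p_keep).astype(float)

def dropconnect_mask(shape, p_keep, rng):
    """DropConnect: zero out individual weights instead of units."""
    return (rng.random(shape) < p_keep).astype(float)

def adaptive_keep(v, v_rest=-70.0, v_thresh=-55.0, p_min=0.5, p_max=1.0):
    """Hypothetical adaptive rule: a neuron closer to threshold (hence more
    likely to fire) is kept with higher probability, following the paper's
    premise that higher membrane potential implies higher activation
    probability. The linear form is an assumption for illustration."""
    x = np.clip((v - v_rest) / (v_thresh - v_rest), 0.0, 1.0)
    return p_min + (p_max - p_min) * x

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 3))
w_dc = w * dropconnect_mask(w.shape, 0.8, rng)   # SPDC-style weight drop
a_do = np.ones(3) * dropout_mask(3, 0.8, rng)    # SPDO-style unit drop
```

The adaptive variants simply replace the fixed `p_keep` with `adaptive_keep(v)` evaluated per neuron (or per group, for SPGAD).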
4. A Spike Neural Network Model for Lateral Suppression of Spike-Timing-Dependent Plasticity with Adaptive Threshold. Applied Sciences 2022. [DOI: 10.3390/app12125980]
Abstract
To address the practical constraints of high resource occupancy and complex computation in existing spiking neural network (SNN) image-classification models, and to obtain a more lightweight and efficient machine-vision solution, this paper proposes an adaptive-threshold SNN model with lateral inhibition of spike-timing-dependent plasticity (STDP). Grayscale images are converted to spike sequences via convolution normalization and time-to-first-spike coding. Self-organized classification is realized by combining the classical STDP algorithm with a lateral-suppression algorithm, and the introduction of an adaptive threshold effectively suppresses overfitting. Experimental results on the MNIST data set show that, compared with a traditional SNN classification model, the complexity of the weight-update algorithm is reduced from O(n²) to O(1) while accuracy remains stable at about 96%. The proposed model facilitates migrating the software algorithm onto the underlying hardware platform and offers a reference for efficient, low-power edge-computing solutions on small intelligent hardware terminals.
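The combination of an adaptive threshold with hard lateral inhibition described above can be sketched as a winner-take-all layer in which the winner's threshold is raised after each spike; all constants and the exact homeostatic rule are illustrative assumptions, not the paper's values:

```python
import numpy as np

class AdaptiveThresholdLayer:
    """Sketch of LIF-style neurons with per-neuron adaptive thresholds and
    hard lateral inhibition (winner-take-all)."""

    def __init__(self, n, v_thresh=1.0, theta_plus=0.05, theta_decay=0.999):
        self.v = np.zeros(n)              # membrane potentials
        self.theta = np.zeros(n)          # adaptive threshold offsets
        self.v_thresh = v_thresh
        self.theta_plus = theta_plus      # added to the winner's threshold
        self.theta_decay = theta_decay    # slow relaxation toward baseline

    def step(self, i_in, leak=0.9):
        self.v = leak * self.v + i_in
        above = self.v - (self.v_thresh + self.theta)
        winner = int(np.argmax(above))
        spikes = np.zeros_like(self.v)
        if above[winner] > 0:             # lateral inhibition: one winner
            spikes[winner] = 1.0
            self.theta[winner] += self.theta_plus  # discourage domination
            self.v[:] = 0.0               # suppress the whole layer
        self.theta *= self.theta_decay
        return spikes
```

Raising the winner's threshold is what keeps a single neuron from capturing every input pattern, which is the overfitting-suppression role the abstract attributes to the adaptive threshold.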
5. Li J, Xu H, Sun SY, Li N, Li Q, Li Z, Liu H. In Situ Learning in Hardware Compatible Multilayer Memristive Spiking Neural Network. IEEE Trans Cogn Dev Syst 2022. [DOI: 10.1109/tcds.2021.3049487]
Affiliation(s)
- Jiwei Li, Hui Xu, Sheng-Yang Sun, Nan Li, Qingjiang Li, Zhiwei Li, Haijun Liu: College of Electronic Science and Technology, National University of Defense Technology, Changsha, China
6. Javanshir A, Nguyen TT, Mahmud MAP, Kouzani AZ. Advancements in Algorithms and Neuromorphic Hardware for Spiking Neural Networks. Neural Comput 2022; 34:1289-1328. [PMID: 35534005] [DOI: 10.1162/neco_a_01499]
Abstract
Artificial neural networks (ANNs) have advanced rapidly thanks to their success in various application domains, including autonomous driving and drone vision. Researchers have been improving the performance efficiency and computational requirements of ANNs inspired by mechanisms of the biological brain. Spiking neural networks (SNNs) provide a power-efficient, brain-inspired computing paradigm for machine learning applications. However, evaluating large-scale SNNs on classical von Neumann architectures (central processing units/graphics processing units) demands considerable power and time. Therefore, hardware designers have developed neuromorphic platforms that execute SNNs in an approach combining fast processing with low power consumption. Recently, field-programmable gate arrays (FPGAs) have been considered promising candidates for implementing neuromorphic solutions due to their advantages, such as high flexibility, short design time, and excellent stability. This review describes recent advances in SNNs and the neuromorphic hardware platforms (digital, analog, hybrid, and FPGA based) suitable for their implementation. We present the biological background of SNN learning, such as neuron models and information encoding techniques, followed by a categorization of SNN training methods. In addition, we describe state-of-the-art SNN simulators. Furthermore, we review and present FPGA-based hardware implementations of SNNs. Finally, we discuss some future directions for research in this field.
Affiliation(s)
- Thanh Thi Nguyen: School of Information Technology, Deakin University (Burwood Campus), Burwood, VIC 3125, Australia
- M A Parvez Mahmud: School of Engineering, Deakin University, Geelong, VIC 3216, Australia
- Abbas Z Kouzani: School of Engineering, Deakin University, Geelong, VIC 3216, Australia
7. Yu Q, Li S, Tang H, Wang L, Dang J, Tan KC. Toward Efficient Processing and Learning With Spikes: New Approaches for Multispike Learning. IEEE Transactions on Cybernetics 2022; 52:1364-1376. [PMID: 32356771] [DOI: 10.1109/tcyb.2020.2984888]
Abstract
Spikes are the currency of information transmission and processing in central nervous systems. They are also believed to play an essential role in the low power consumption of biological systems, whose efficiency attracts increasing attention in the field of neuromorphic computing. However, efficient processing and learning of discrete spikes remain a challenging problem. In this article, we make contributions toward this direction. A simplified spiking neuron model is first introduced, with the effects of both synaptic input and firing output on the membrane potential modeled with an impulse function. An event-driven scheme is then presented to further improve processing efficiency. Based on this neuron model, we propose two new multispike learning rules that outperform other baselines on various tasks, including association, classification, and feature detection. In addition to efficiency, our learning rules are highly robust against strong noise of different types. They also generalize to different spike coding schemes for the classification task, and, notably, a single neuron is capable of solving multicategory classification with our learning rules. In the feature detection task, we re-examine the ability of unsupervised spike-timing-dependent plasticity, present its limitations, and find a new phenomenon of losing selectivity. In contrast, our proposed learning rules reliably solve the task over a wide range of conditions without specific constraints, and they can not only detect features but also discriminate them. This improved performance makes our methods a preferable choice for neuromorphic computing.
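The two modeling ideas in this abstract, impulse-function effects on the membrane potential and event-driven updates, can be combined in a small sketch: the membrane trace is stored once and decayed lazily, only when an event arrives. The constants and the reset-as-negative-impulse choice are illustrative assumptions, not the paper's exact model:

```python
import math

class EventDrivenNeuron:
    """Event-driven neuron: state is updated only at input events, by
    decaying the stored membrane value over the elapsed interval."""

    def __init__(self, tau=20.0, v_thresh=1.0, v_reset_jump=-1.0):
        self.v = 0.0
        self.t_last = 0.0
        self.tau = tau                    # membrane time constant (ms)
        self.v_thresh = v_thresh
        self.v_reset_jump = v_reset_jump  # firing modeled as an impulse too

    def receive(self, t, weight):
        # Decay since the previous event, then apply the input impulse.
        self.v *= math.exp(-(t - self.t_last) / self.tau)
        self.t_last = t
        self.v += weight
        if self.v >= self.v_thresh:
            self.v += self.v_reset_jump   # output spike as negative impulse
            return True
        return False
```

Because nothing is computed between events, the cost scales with the number of spikes rather than with simulated time, which is the efficiency argument the abstract makes.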
8. Supervised learning algorithm based on spike optimization mechanism for multilayer spiking neural networks. Int J Mach Learn Cyb 2022. [DOI: 10.1007/s13042-021-01500-8]
9. Xiang S, Ren Z, Song Z, Zhang Y, Guo X, Han G, Hao Y. Computing Primitive of Fully VCSEL-Based All-Optical Spiking Neural Network for Supervised Learning and Pattern Classification. IEEE Transactions on Neural Networks and Learning Systems 2021; 32:2494-2505. [PMID: 32673197] [DOI: 10.1109/tnnls.2020.3006263]
Abstract
We propose a computing primitive for an all-optical spiking neural network (SNN) based on vertical-cavity surface-emitting lasers (VCSELs) for supervised learning using biologically plausible mechanisms. A spike-timing-dependent plasticity (STDP) model is established based on the dynamics of a vertical-cavity semiconductor optical amplifier (VCSOA) subject to dual optical pulse injection. A self-consistent, unified neuron-synapse model of the all-optical SNN is developed that reproduces the essential neuron-like dynamics and the STDP function. Optical characters of the ten digits are trained and tested with the proposed fully VCSEL-based all-optical SNN. Simulation results show that the proposed all-optical SNN is capable of recognizing the ten digits with a supervised learning algorithm in which the input and output patterns, as well as the teacher signals, are represented in a spatiotemporal fashion. Moreover, lateral inhibition is not required in the proposed architecture, which is friendly to hardware implementation. The system-level unified model enables architecture-algorithm codesign and optimization of all-optical SNNs. To the best of our knowledge, a computing primitive for an all-optical SNN based on VCSELs for supervised learning has not previously been reported; this work paves the way toward fully VCSEL-based, large-scale photonic neuromorphic systems with low power consumption.
10. Li J, Dong Z, Luo L, Duan S, Wang L. A novel versatile window function for memristor model with application in spiking neural network. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.04.111]
11. Wang X, Lin X, Dang X. Supervised learning in spiking neural networks: A review of algorithms and evaluations. Neural Netw 2020; 125:258-280. [PMID: 32146356] [DOI: 10.1016/j.neunet.2020.02.011]
Abstract
As a new brain-inspired computational model of artificial neural networks, the spiking neural network encodes and processes neural information through precisely timed spike trains. Spiking neural networks are composed of biologically plausible spiking neurons, which makes them suitable tools for processing complex temporal or spatiotemporal information. However, because of their intricately discontinuous and implicit nonlinear mechanisms, formulating efficient supervised learning algorithms for spiking neural networks is difficult, and this has become an important problem in the field. This article presents a comprehensive review of supervised learning algorithms for spiking neural networks and evaluates them qualitatively and quantitatively. First, a comparison between spiking neural networks and traditional artificial neural networks is provided. The general framework and some related theories of supervised learning for spiking neural networks are then introduced. Furthermore, the state-of-the-art supervised learning algorithms of recent years are reviewed from the perspectives of applicability to spiking neural network architectures and the inherent mechanisms of the algorithms themselves. A performance comparison of spike-train learning for some representative algorithms is also made. In addition, we provide five qualitative performance evaluation criteria for supervised learning algorithms for spiking neural networks and present a new taxonomy of supervised learning algorithms based on these five criteria. Finally, some future research directions in this field are outlined.
Affiliation(s)
- Xiangwen Wang, Xianghong Lin, Xiaochao Dang: College of Computer Science and Engineering, Northwest Normal University, Lanzhou, 730070, People's Republic of China
12. Nigus M, Priyadarshini R, Mehra RM. Stochastic and novel generic scalable window function-based deterministic memristor SPICE model comparison and implementation for synaptic circuit design. SN Applied Sciences 2020. [DOI: 10.1007/s42452-019-1888-z]
13. Frenkel C, Legat JD, Bol D. MorphIC: A 65-nm 738k-Synapse/mm² Quad-Core Binary-Weight Digital Neuromorphic Processor With Stochastic Spike-Driven Online Learning. IEEE Transactions on Biomedical Circuits and Systems 2019; 13:999-1010. [PMID: 31329562] [DOI: 10.1109/tbcas.2019.2928793]
Abstract
Recent trends in the field of neural network accelerators investigate weight quantization as a means to increase the resource- and power-efficiency of hardware devices. As full on-chip weight storage is necessary to avoid the high energy cost of off-chip memory accesses, memory reduction requirements for weight storage pushed toward the use of binary weights, which were demonstrated to incur only a limited accuracy reduction on many applications when quantization-aware training techniques are used. In parallel, spiking neural network (SNN) architectures are explored to further reduce power when processing sparse event-based data streams, while on-chip spike-based online learning appears as a key feature for applications constrained in power and resources during the training phase. However, designing power- and area-efficient SNNs still requires specific techniques to leverage on-chip online learning on binary weights without compromising synapse density. In this paper, we demonstrate MorphIC, a quad-core binary-weight digital neuromorphic processor embedding a stochastic version of the spike-driven synaptic plasticity (S-SDSP) learning rule and a hierarchical routing fabric for large-scale chip interconnection. The MorphIC SNN processor embeds a total of 2k leaky integrate-and-fire (LIF) neurons and more than two million plastic synapses in an active silicon area of 2.86 mm² in 65-nm CMOS, achieving a high density of 738k synapses/mm². MorphIC demonstrates an order-of-magnitude improvement in the area-accuracy tradeoff on the MNIST classification task compared to previously proposed SNNs, with no penalty in the energy-accuracy tradeoff.
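The idea of stochastic plasticity on binary weights can be sketched as follows: a candidate update direction is chosen from the postsynaptic state, and the one-bit weight actually switches only with some probability, which stands in for the high-resolution accumulation that full-precision rules use. The condition and probability below are illustrative simplifications, not MorphIC's exact S-SDSP rule (which also involves calcium-variable bounds):

```python
import numpy as np

def s_sdsp_update(w, pre_spike, v_post, theta, p_flip, rng):
    """Simplified stochastic spike-driven update on a binary weight.

    On a presynaptic spike, the candidate direction is potentiation when
    the postsynaptic potential v_post exceeds threshold theta, depression
    otherwise; the weight switches only with probability p_flip.
    """
    if not pre_spike:
        return w
    target = 1 if v_post >= theta else 0
    if w != target and rng.random() < p_flip:
        return target
    return w

rng = np.random.default_rng(1)
w = s_sdsp_update(0, True, 1.2, 1.0, 1.0, rng)  # p_flip=1: certain flip up
```

With small `p_flip`, individual updates are cheap single-bit writes, yet the expected weight trajectory tracks what a high-resolution accumulator would do.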
14. Camuñas-Mesa LA, Linares-Barranco B, Serrano-Gotarredona T. Neuromorphic Spiking Neural Networks and Their Memristor-CMOS Hardware Implementations. Materials (Basel) 2019; 12:E2745. [PMID: 31461877] [PMCID: PMC6747825] [DOI: 10.3390/ma12172745]
Abstract
Inspired by biology, neuromorphic systems have been trying to emulate the human brain for decades, taking advantage of its massive parallelism and sparse information coding. Recently, several large-scale hardware projects have demonstrated the outstanding capabilities of this paradigm for applications related to sensory information processing. These systems allow for the implementation of massive neural networks with millions of neurons and billions of synapses. However, realizing learning strategies in these systems consumes a significant proportion of resources in terms of area and power. The recent development of nanoscale memristors that can be integrated with Complementary Metal-Oxide-Semiconductor (CMOS) technology opens a very promising path to emulating the behavior of biological synapses. Therefore, hybrid memristor-CMOS approaches have been proposed to implement large-scale neural networks with learning capabilities, offering a scalable and lower-cost alternative to existing CMOS systems.
Affiliation(s)
- Luis A Camuñas-Mesa, Bernabé Linares-Barranco, Teresa Serrano-Gotarredona: Instituto de Microelectrónica de Sevilla (IMSE-CNM), CSIC and Universidad de Sevilla, 41092 Sevilla, Spain
15. Tapiador-Morales R, Linares-Barranco A, Jimenez-Fernandez A, Jimenez-Moreno G. Neuromorphic LIF Row-by-Row Multiconvolution Processor for FPGA. IEEE Transactions on Biomedical Circuits and Systems 2019; 13:159-169. [PMID: 30418884] [DOI: 10.1109/tbcas.2018.2880012]
Abstract
Deep learning algorithms have become state-of-the-art methods in multiple fields, including computer vision, speech recognition, natural language processing, and audio recognition. In image vision, convolutional neural networks (CNNs) stand out. This kind of network is expensive in terms of computational resources due to the large number of operations required to process a frame. In recent years, several frame-based chip solutions for deploying CNNs in real time have been developed. Despite the good power and accuracy results of these solutions, the number of operations is still high due to the complexity of current network models. However, the number of operations can be reduced using computer vision techniques other than frame-based ones, e.g., neuromorphic event-based techniques. Several neuromorphic vision sensors exist whose pixels detect changes in luminosity. Inspired by the leaky integrate-and-fire (LIF) neuron, we propose in this manuscript an event-based field-programmable gate array (FPGA) multiconvolution system. Its main novelty is a memory arbiter that provides efficient memory access for row-by-row kernel processing. The system can convolve 64 filters across multiple kernel sizes, from 1 × 1 to 7 × 7, with latencies of 1.3 μs and 9.01 μs, respectively, generating a continuous flow of output events. The proposed architecture easily accommodates spike-based CNNs.
16. Camuñas-Mesa LA, Domínguez-Cordero YL, Linares-Barranco A, Serrano-Gotarredona T, Linares-Barranco B. A Configurable Event-Driven Convolutional Node with Rate Saturation Mechanism for Modular ConvNet Systems Implementation. Front Neurosci 2018; 12:63. [PMID: 29515349] [PMCID: PMC5826227] [DOI: 10.3389/fnins.2018.00063]
Abstract
Convolutional Neural Networks (ConvNets) are a particular type of neural network often used for applications such as image recognition, video analysis, and natural language processing. They are inspired by the human brain, following a specific organization of the connectivity pattern between layers of neurons known as the receptive field. These networks have traditionally been implemented in software, but they become more computationally expensive as they scale up, limiting real-time processing of high-speed stimuli. Hardware implementations, on the other hand, are difficult to reuse across applications due to their reduced flexibility. In this paper, we propose a fully configurable event-driven convolutional node with a rate saturation mechanism that can be used to implement arbitrary ConvNets on FPGAs. This node includes a convolutional processing unit and a routing element that allows building large 2D arrays in which any multilayer structure can be implemented. The rate saturation mechanism emulates the refractory behavior of biological neurons, guaranteeing a minimum separation in time between consecutive events. A four-layer ConvNet with 22 convolutional nodes trained for poker card symbol recognition was implemented on a Spartan-6 FPGA. The network was tested with a stimulus in which 40 poker cards were observed by a Dynamic Vision Sensor (DVS) within 1 s. Different slow-down factors were applied to characterize the behavior of the system for high-speed processing. For slow stimulus play-back, a 96% recognition rate is obtained with a power consumption of 0.85 mW. At maximum play-back speed, a traffic-control mechanism downsamples the input stimulus, obtaining a recognition rate above 63% when fewer than 20% of the input events are processed, demonstrating the robustness of the network.
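The core event-driven convolution with a refractory-style rate saturation can be sketched as follows: each input event stamps the kernel onto a membrane map around its address, and any pixel crossing threshold emits an output event and resets, unless it fired too recently. Border handling is simplified and all constants are illustrative; routing and the node's hardware arbitration are omitted:

```python
import numpy as np

def process_event(vmem, t_last, x, y, t, kernel, v_th=1.0, t_refr=1.0):
    """Push one input event at address (x, y), time t, through an
    event-driven convolutional node sketch.

    vmem   : 2D membrane-potential map (modified in place)
    t_last : 2D map of each pixel's last output-event time (in place)
    Returns the list of output events as (x, y, t) tuples.
    """
    k = kernel.shape[0] // 2
    # Clip the kernel stamp to the map borders.
    ys = slice(max(y - k, 0), min(y + k + 1, vmem.shape[0]))
    xs = slice(max(x - k, 0), min(x + k + 1, vmem.shape[1]))
    kys = slice(ys.start - (y - k), kernel.shape[0] - ((y + k + 1) - ys.stop))
    kxs = slice(xs.start - (x - k), kernel.shape[1] - ((x + k + 1) - xs.stop))
    vmem[ys, xs] += kernel[kys, kxs]
    out = []
    for yy in range(*ys.indices(vmem.shape[0])):
        for xx in range(*xs.indices(vmem.shape[1])):
            # Rate saturation: suppress output during the refractory window.
            if vmem[yy, xx] >= v_th and t - t_last[yy, xx] >= t_refr:
                out.append((xx, yy, t))
                vmem[yy, xx] = 0.0
                t_last[yy, xx] = t
    return out
```

The `t_refr` check is the software analogue of the node's rate saturation mechanism: however dense the input stream, each pixel's output rate is bounded by `1 / t_refr`.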
Affiliation(s)
- Luis A. Camuñas-Mesa, Bernabé Linares-Barranco: Instituto de Microelectrónica de Sevilla (IMSE-CNM), CSIC and Universidad de Sevilla, Sevilla, Spain