1. Pietrzak P, Szczęsny S, Huderek D, Przyborowski Ł. Overview of Spiking Neural Network Learning Approaches and Their Computational Complexities. Sensors (Basel) 2023; 23:3037. [PMID: 36991750] [PMCID: PMC10053242] [DOI: 10.3390/s23063037] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Received: 02/07/2023] [Revised: 03/08/2023] [Accepted: 03/09/2023] [Indexed: 06/19/2023]
Abstract
Spiking neural networks (SNNs) are attracting growing interest. They resemble biological neural networks in the brain more closely than their second-generation counterparts, artificial neural networks (ANNs), and have the potential to be more energy efficient than ANNs on event-driven neuromorphic hardware. This could drastically reduce the maintenance cost of neural network models, since their energy consumption would be much lower than that of the regular deep learning models hosted in the cloud today. However, such hardware is still not widely available. On standard computer architectures built mainly around central processing units (CPUs) and graphics processing units (GPUs), ANNs have the upper hand in execution speed because of their simpler models of neurons and of the connections between neurons. They generally also win in terms of learning algorithms, as SNNs do not yet reach the performance of their second-generation counterparts on typical machine learning benchmark tasks such as classification. In this paper, we review existing learning algorithms for spiking neural networks, divide them into categories by type, and assess their computational complexity.
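The neuron-model gap this abstract describes (binary, event-driven spikes versus static activations) can be made concrete with a minimal leaky integrate-and-fire (LIF) neuron. This is a generic textbook sketch, not code from the paper; the leak, threshold, and reset values are illustrative.

```python
def lif_simulate(input_current, v_thresh=1.0, v_reset=0.0, leak=0.9):
    """Simulate a leaky integrate-and-fire neuron over discrete time steps.

    Unlike an ANN unit, the output is a binary spike train: the membrane
    potential leakily integrates input and emits a 1 only when it crosses
    the threshold, after which it is reset.
    """
    v = 0.0
    spikes = []
    for i_t in input_current:
        v = leak * v + i_t          # leaky integration of input current
        if v >= v_thresh:           # threshold crossing -> emit a spike
            spikes.append(1)
            v = v_reset             # reset membrane potential after firing
        else:
            spikes.append(0)
    return spikes
```

With a constant sub-threshold input, the neuron stays silent until enough charge accumulates, then fires and resets; this state-holding behaviour is exactly what makes SNN simulation costlier than a stateless ANN layer on CPUs/GPUs.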
2. Wang H, He Z, Wang T, He J, Zhou X, Wang Y, Liu L, Wu N, Tian M, Shi C. TripleBrain: A Compact Neuromorphic Hardware Core With Fast On-Chip Self-Organizing and Reinforcement Spike-Timing Dependent Plasticity. IEEE Transactions on Biomedical Circuits and Systems 2022; 16:636-650. [PMID: 35802542] [DOI: 10.1109/tbcas.2022.3189240] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 06/15/2023]
Abstract
The human brain cortex is a rich source of inspiration for constructing efficient artificial cognitive systems. In this paper, we investigate incorporating multiple brain-inspired computing paradigms into a compact, fast, and high-accuracy neuromorphic hardware implementation. We propose the TripleBrain hardware core, which tightly combines three common brain-inspired factors: spike-based processing and plasticity, the self-organizing map (SOM) mechanism, and a reinforcement learning scheme. This improves object recognition accuracy and processing throughput while keeping resource costs low. The proposed hardware core is fully event-driven to avoid unnecessary operations, and supports various on-chip learning rules (including the proposed SOM-STDP & R-STDP rule and the R-SOM-STDP rule, regarded as two variants of our TripleBrain learning rule) with different accuracy-latency tradeoffs to satisfy user requirements. An FPGA prototype of the neuromorphic core was implemented and thoroughly tested. It achieved high-speed learning (1349 frames/s) and inference (2698 frames/s), and obtained comparably high recognition accuracies of 95.10%, 80.89%, 100%, 94.94%, 82.32%, 100% and 97.93% on the MNIST, ETH-80, ORL-10, Yale-10, N-MNIST, Poker-DVS and Posture-DVS datasets, respectively, while consuming only 4146 (7.59%) slices, 32 (3.56%) DSPs and 131 (24.04%) Block RAMs on a Xilinx Zynq-7045 FPGA chip. The core is therefore attractive for real-time, resource-limited edge intelligent systems.
3. Ahmadi-Farsani J, Ricci S, Hashemkhani S, Ielmini D, Linares-Barranco B, Serrano-Gotarredona T. A CMOS-memristor hybrid system for implementing stochastic binary spike timing-dependent plasticity. Philosophical Transactions. Series A, Mathematical, Physical, and Engineering Sciences 2022; 380:20210018. [PMID: 35658675] [PMCID: PMC9168445] [DOI: 10.1098/rsta.2021.0018] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 07/15/2021] [Accepted: 02/08/2022] [Indexed: 06/15/2023]
Abstract
This paper describes a fully experimental hybrid system in which a [Formula: see text] memristive crossbar spiking neural network (SNN) was assembled using custom high-resistance state memristors with analogue CMOS neurons fabricated in 180 nm CMOS technology. The custom memristors used NMOS selector transistors, made available on a second 180 nm CMOS chip. One drawback is that memristors operate with currents in the micro-amperes range, while analogue CMOS neurons may need to operate with currents in the pico-amperes range. One possible solution was to use a compact circuit to scale the memristor-domain currents down to the analogue CMOS neuron domain currents by at least 5-6 orders of magnitude. Here, we proposed using an on-chip compact current splitter circuit based on MOS ladders to aggressively attenuate the currents by over 5 orders of magnitude. This circuit was added before each neuron. This paper describes the proper experimental operation of an SNN circuit using a [Formula: see text] 1T1R synaptic crossbar together with four post-synaptic CMOS circuits, each with a 5-decade current attenuator and an integrate-and-fire neuron. It also demonstrates one-shot winner-takes-all training and stochastic binary spike-timing-dependent-plasticity learning using this small system. This article is part of the theme issue 'Advanced neurotechnologies: translating innovation for health and well-being'.
Affiliation(s)
- Javad Ahmadi-Farsani
- Instituto de Microelectrónica de Sevilla, IMSE-CNM (CSIC and Universidad de Sevilla), Av. Américo Vespucio 28, 41092 Sevilla, Spain
- Saverio Ricci
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Piazza L. da Vinci 32, 20133 Milano, Italy
- Shahin Hashemkhani
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Piazza L. da Vinci 32, 20133 Milano, Italy
- Daniele Ielmini
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Piazza L. da Vinci 32, 20133 Milano, Italy
- Bernabé Linares-Barranco
- Instituto de Microelectrónica de Sevilla, IMSE-CNM (CSIC and Universidad de Sevilla), Av. Américo Vespucio 28, 41092 Sevilla, Spain
- Teresa Serrano-Gotarredona
- Instituto de Microelectrónica de Sevilla, IMSE-CNM (CSIC and Universidad de Sevilla), Av. Américo Vespucio 28, 41092 Sevilla, Spain
4. Bio-plausible digital implementation of a reward modulated STDP synapse. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07220-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/27/2022]
Abstract
Reward-modulated Spike-Timing-Dependent Plasticity (R-STDP) is a learning method for Spiking Neural Networks (SNNs) that uses an external learning signal to modulate the synaptic plasticity produced by Spike-Timing-Dependent Plasticity (STDP). By combining the advantages of reinforcement learning with the biological plausibility of STDP, it enables online learning on SNNs in real-world scenarios. This paper presents a fully digital architecture, implemented on a Field-Programmable Gate Array (FPGA), that includes the R-STDP learning mechanism in an SNN. The hardware results are comparable to software simulations using the Brian2 simulator, with a maximum error of 0.083 when 14-bit fixed-point precision is used in real time. The presented architecture achieves an accuracy of 95% on an obstacle-avoidance task in mobile robotics with minimal resource usage.
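As an illustration of the R-STDP scheme summarized above, the sketch below separates the STDP timing window from the reward gate via an eligibility trace: STDP pairings accumulate into the trace, and only the arrival of a reward commits the trace to the weight. All constants are illustrative assumptions, not values from the paper or its FPGA design.

```python
import math

# Illustrative constants, not values from the paper.
A_PLUS, A_MINUS = 0.1, 0.12   # STDP potentiation/depression amplitudes
TAU = 20.0                    # STDP time constant (ms)

def stdp_delta(dt):
    """Classic pair-based STDP window; dt = t_post - t_pre in ms."""
    if dt >= 0:
        return A_PLUS * math.exp(-dt / TAU)    # pre before post -> potentiate
    return -A_MINUS * math.exp(dt / TAU)       # post before pre -> depress

def r_stdp_step(weight, trace, dt, reward, trace_decay=0.9, lr=1.0):
    """One R-STDP update: STDP writes into an eligibility trace,
    and the external reward signal gates how much reaches the weight."""
    trace = trace_decay * trace + stdp_delta(dt)
    weight += lr * reward * trace
    return weight, trace
```

With reward equal to zero the pairing is remembered in the trace but the weight is untouched; a later positive reward reinforces the recorded correlation, a negative reward punishes it.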
5. Vanarse A, Osseiran A, Rassau A, van der Made P. Application of Neuromorphic Olfactory Approach for High-Accuracy Classification of Malts. Sensors (Basel) 2022; 22:440. [PMID: 35062402] [PMCID: PMC8778084] [DOI: 10.3390/s22020440] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 11/29/2021] [Revised: 12/30/2021] [Accepted: 01/03/2022] [Indexed: 06/14/2023]
Abstract
Current developments in artificial olfactory systems, also known as electronic nose (e-nose) systems, have benefited from advanced machine learning techniques that have significantly improved the conditioning and processing of multivariate feature-rich sensor data. These advancements are complemented by the application of bioinspired algorithms and architectures based on findings from neurophysiological studies of the biological olfactory pathway. The application of spiking neural networks (SNNs), and of concepts from neuromorphic engineering in general, is one of the key factors that have led to the design and development of efficient bioinspired e-nose systems. However, only a limited number of studies have focused on deploying these models on a natively event-driven hardware platform that exploits the benefits of neuromorphic implementation, such as ultra-low power consumption and real-time processing, for simplified integration in a portable e-nose system. In this paper, we extend our previously reported neuromorphic encoding and classification approach to a real-world dataset consisting of sensor responses from a commercial e-nose system exposed to eight different types of malts. We show that the proposed SNN-based classifier delivered 97% accurate classification at a maximum latency of 0.4 ms per inference with a power consumption of less than 1 mW when deployed on neuromorphic hardware. A key advantage of the proposed neuromorphic architecture is that the entire functionality, including pre-processing, event encoding, and classification, can be mapped onto a neuromorphic system-on-a-chip (NSoC) to build power-efficient, highly accurate real-time e-nose systems.
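The event-encoding step referenced above can be illustrated with a simple latency code, in which stronger sensor responses fire earlier. This is a generic sketch under an assumed linear scaling, not the paper's actual encoder.

```python
def latency_encode(values, t_max=100.0, v_max=1.0):
    """Map normalized analog sensor readings to spike times.

    Stronger responses spike earlier; a zero (or negative) response
    produces no spike at all (None), keeping the code sparse.
    """
    times = []
    for v in values:
        if v <= 0:
            times.append(None)                       # no response -> no spike
        else:
            v = min(v, v_max)                        # clip to the coding range
            times.append(t_max * (1.0 - v / v_max))  # linear latency code
    return times
```

A downstream SNN classifier then only has to react to the order and timing of the first spikes, which is what enables sub-millisecond inference latencies on event-driven hardware.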
Affiliation(s)
- Anup Vanarse
- Brainchip Research Institute, Perth 6000, Australia
- Adam Osseiran
- Brainchip Research Institute, Perth 6000, Australia
- Alexander Rassau
- School of Engineering, Edith Cowan University, Joondalup 6027, Australia
6.
Abstract
Stochastic computing is an emerging scientific field driven by the need to develop high-performance artificial intelligence systems in hardware that can quickly solve complex data processing problems. This is the case for virtual screening, a computational task that searches huge molecular databases for new drug leads. In this work, we present a classification framework in which molecules are described by an energy-based vector. This vector is then processed by an ultra-fast artificial neural network implemented on an FPGA using stochastic computing techniques. Compared to previously published virtual screening methods, this proposal provides similar or higher accuracy while improving processing speed by about two to three orders of magnitude.
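For readers unfamiliar with the technique, stochastic computing represents a value in [0, 1] as the probability that a bit in a stream is 1, so multiplication reduces to a bitwise AND of two independent streams. The FPGA realizes this with shift registers and single gates; the software sketch below only illustrates the principle.

```python
import random

def to_stream(p, n, rng):
    """Encode a probability p in [0, 1] as an n-bit Bernoulli bitstream."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def sc_multiply(a, b, n=10000, seed=0):
    """Multiply two values in [0, 1] by ANDing their stochastic streams
    and counting ones; accuracy improves with stream length n."""
    rng = random.Random(seed)
    sa = to_stream(a, n, rng)
    sb = to_stream(b, n, rng)
    return sum(x & y for x, y in zip(sa, sb)) / n
```

The trade-off is precision for area and speed: a single AND gate replaces a full multiplier, at the cost of a long bitstream and statistical error that shrinks as 1/sqrt(n).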
7. Chakraborty B, She X, Mukhopadhyay S. A Fully Spiking Hybrid Neural Network for Energy-Efficient Object Detection. IEEE Transactions on Image Processing 2021; 30:9014-9029. [PMID: 34705647] [DOI: 10.1109/tip.2021.3122092] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Indexed: 06/13/2023]
Abstract
This paper proposes a Fully Spiking Hybrid Neural Network (FSHNN) for energy-efficient and robust object detection on resource-constrained platforms. The network architecture is based on a spiking convolutional neural network using leaky integrate-and-fire neuron models. The model combines unsupervised Spike Time-Dependent Plasticity (STDP) learning with back-propagation (STBP) learning, and uses Monte Carlo Dropout to estimate the uncertainty error. FSHNN provides better accuracy than DNN-based object detectors while being more energy-efficient. It also outperforms these detectors when subjected to noisy input data or trained with less labeled data, while maintaining a lower uncertainty error.
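Monte Carlo Dropout, which FSHNN uses for its uncertainty estimate, keeps dropout active at inference time and reads the spread of repeated stochastic forward passes as an uncertainty signal. The toy sketch below applies the idea to a plain linear model rather than a spiking detector; all parameters are illustrative.

```python
import random

def mc_dropout_predict(weights, x, p_drop=0.5, n_samples=200, seed=0):
    """Run repeated forward passes with fresh random dropout masks and
    return the predictive mean and variance across samples; the variance
    serves as the uncertainty estimate."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_samples):
        # Inverted dropout: keep each weight with prob 1 - p_drop, rescale
        # the survivors so the expected output matches the full model.
        y = sum(w * xi / (1.0 - p_drop)
                for w, xi in zip(weights, x)
                if rng.random() >= p_drop)
        outputs.append(y)
    mean = sum(outputs) / n_samples
    var = sum((o - mean) ** 2 for o in outputs) / n_samples
    return mean, var
```

Inputs the model has not effectively learned produce a larger spread across the sampled masks, which is the signal FSHNN-style detectors use to flag unreliable detections.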
8. Kim Y, Panda P. Optimizing Deeper Spiking Neural Networks for Dynamic Vision Sensing. Neural Netw 2021; 144:686-698. [PMID: 34662827] [DOI: 10.1016/j.neunet.2021.09.022] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Received: 02/13/2021] [Revised: 09/22/2021] [Accepted: 09/24/2021] [Indexed: 11/20/2022]
Abstract
Spiking Neural Networks (SNNs) have recently emerged as a new generation of low-power deep neural networks thanks to sparse, asynchronous, binary, event-driven processing. Most previous deep SNN optimization methods focus on static datasets (e.g., MNIST) from conventional frame-based cameras, while optimization techniques for event data from Dynamic Vision Sensor (DVS) cameras are still in their infancy. Most prior SNN techniques handling DVS data are limited to shallow networks and thus show low performance. Generally, we observe that the integrate-and-fire behavior of spiking neurons diminishes spike activity in deeper layers, and this sparse spike activity results in a sub-optimal solution during training (i.e., performance degradation). To address this limitation, we propose novel algorithmic and architectural advances to accelerate the training of very deep SNNs on DVS data. Specifically, we propose Spike Activation Lift Training (SALT), which increases spike activity across all layers by optimizing both weights and thresholds in convolutional layers. After applying SALT, we train the weights based on the cross-entropy loss. SALT helps the networks convey ample information across all layers during training and therefore improves performance. Furthermore, we propose a simple and effective architecture, called Switched-BN, which exploits Batch Normalization (BN). Previous methods show that standard BN is incompatible with the temporal dynamics of SNNs. Therefore, in the Switched-BN architecture, we apply BN to the last layer of an SNN after accumulating all the spikes from the previous layer with a spike voltage accumulator (i.e., converting temporal spike information to a float value). Even though we apply BN in just one layer of the SNN, our results demonstrate a considerable performance gain without any significant computational overhead. Through extensive experiments, we show the effectiveness of SALT and Switched-BN for training very deep SNNs from scratch on various benchmarks including DVS-Cifar10, N-Caltech, DHP19, CIFAR10, and CIFAR100. To the best of our knowledge, this is the first work showing state-of-the-art performance with deep SNNs on DVS data.
Affiliation(s)
- Youngeun Kim
- Department of Electrical Engineering, Yale University, New Haven, CT, USA.
9. Nishi Y, Nomura K, Marukame T, Mizushima K. Stochastic binary synapses having sigmoidal cumulative distribution functions for unsupervised learning with spike timing-dependent plasticity. Sci Rep 2021; 11:18282. [PMID: 34521895] [PMCID: PMC8440757] [DOI: 10.1038/s41598-021-97583-y] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 05/14/2021] [Accepted: 08/23/2021] [Indexed: 11/17/2022]
Abstract
Spike timing-dependent plasticity (STDP), which is widely studied as a fundamental synaptic update rule for neuromorphic hardware, requires precise control of continuous weights. From the viewpoint of hardware implementation, a simplified update rule is desirable. Although simplified STDP with stochastic binary synapses was proposed previously, we find that it degrades memory maintenance during learning, which is unfavourable for unsupervised online learning. In this work, we propose a stochastic binary synaptic model in which the cumulative probability of a weight change evolves in a sigmoidal fashion with potentiation or depression trials; it can be implemented using a pair of switching devices, each consisting of multiple serially connected binary memristors. As a benchmark test we perform simulations of unsupervised learning of MNIST images with a two-layer network and show that simplified STDP in combination with this model can outperform conventional rules with continuous weights, not only in memory maintenance but also in recognition accuracy. Our method achieves 97.3% recognition accuracy, which is higher than that reported with standard STDP in the same framework. We also show that the high performance of our learning rule is robust against device-to-device variability in the memristors' probabilistic behaviour.
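The sigmoidal cumulative switching probability engineered in this paper can be reproduced in simulation: if a compound synapse changes state only once all N serially connected binary memristors have switched, and each still-unswitched device switches independently with probability p per trial, the probability of having switched by trial t rises sigmoidally rather than geometrically. A Monte Carlo sketch with illustrative N and p:

```python
import random

def cumulative_switch_prob(n_devices, p, trials, n_runs=5000, seed=0):
    """Estimate P(compound synapse has switched by trial t) for a synapse
    that flips only after all n_devices binary memristors have switched."""
    rng = random.Random(seed)
    switched_by = [0] * trials
    for _ in range(n_runs):
        remaining = n_devices
        for t in range(trials):
            # Each still-unswitched device switches with probability p.
            remaining -= sum(1 for _ in range(remaining) if rng.random() < p)
            if remaining == 0:
                for u in range(t, trials):   # switched at t stays switched
                    switched_by[u] += 1
                break
    return [c / n_runs for c in switched_by]
```

A single device (N = 1) gives the concave geometric curve of prior simplified-STDP proposals; several devices in series delay the first switches, producing the flat-then-steep sigmoidal onset that the paper ties to better memory maintenance.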
Affiliation(s)
- Yoshifumi Nishi
- Frontier Research Laboratory, Corporate R&D Center, Toshiba Corporation, 1, Komukai-Toshiba-Cho, Saiwai-ku, Kawasaki, 212-8582, Japan.
- Kumiko Nomura
- Frontier Research Laboratory, Corporate R&D Center, Toshiba Corporation, 1, Komukai-Toshiba-Cho, Saiwai-ku, Kawasaki, 212-8582, Japan
- Takao Marukame
- Frontier Research Laboratory, Corporate R&D Center, Toshiba Corporation, 1, Komukai-Toshiba-Cho, Saiwai-ku, Kawasaki, 212-8582, Japan
- Koichi Mizushima
- Frontier Research Laboratory, Corporate R&D Center, Toshiba Corporation, 1, Komukai-Toshiba-Cho, Saiwai-ku, Kawasaki, 212-8582, Japan
10. Bensimon M, Greenberg S, Haiut M. Using a Low-Power Spiking Continuous Time Neuron (SCTN) for Sound Signal Processing. Sensors (Basel) 2021; 21:1065. [PMID: 33557214] [PMCID: PMC7913968] [DOI: 10.3390/s21041065] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Received: 12/30/2020] [Revised: 01/23/2021] [Accepted: 02/01/2021] [Indexed: 11/16/2022]
Abstract
This work presents a new approach based on a spiking neural network for sound preprocessing and classification. The approach is biologically inspired, using spiking neurons and a Spike-Timing-Dependent Plasticity (STDP)-based learning rule. We propose a biologically plausible sound classification framework that uses a Spiking Neural Network (SNN) to detect the frequencies embedded within an acoustic signal, and demonstrate an efficient hardware implementation of the network based on the low-power Spiking Continuous Time Neuron (SCTN). The framework allows direct Pulse Density Modulation (PDM) interfacing of the acoustic sensor with the SCTN-based network, avoiding costly digital-to-analog conversions. This paper also presents a new connectivity approach for Spiking Neuron (SN)-based neural networks: we suggest treating the SCTN as a basic building block in the design of programmable analog electronic circuits. Usually, a neuron is a repeated modular element in a neural network, and the connectivity between neurons in different layers is well defined, yielding a modular network structure composed of several layers with full or partial connectivity. The proposed approach instead controls the behavior of the spiking neurons and applies smart connectivity to enable the design of simple analog circuits based on SNNs. Unlike existing NN-based solutions, in which the preprocessing phase is carried out with analog circuits and analog-to-digital conversion, we integrate the preprocessing phase into the network itself. This allows the basic SCTN to be treated as an analog module, enabling simple analog circuit designs based on SNNs with unique inter-connections between the neurons.
The efficiency of the proposed approach is demonstrated by implementing SCTN-based resonators for sound feature extraction and classification. The proposed SCTN-based sound classification approach demonstrates a classification accuracy of 98.73% using the Real-World Computing Partnership (RWCP) database.
Affiliation(s)
- Moshe Bensimon
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beersheba 8400711, Israel
- Shlomo Greenberg
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beersheba 8400711, Israel
11. Analogue pattern recognition with stochastic switching binary CMOS-integrated memristive devices. Sci Rep 2020; 10:14450. [PMID: 32879397] [PMCID: PMC7467933] [DOI: 10.1038/s41598-020-71334-x] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Received: 03/25/2020] [Accepted: 08/14/2020] [Indexed: 11/29/2022]
Abstract
Biological neural networks outperform current computer technology in both power consumption and computing speed when performing associative tasks such as pattern recognition. The analogue, massively parallel in-memory computing found in biology differs strongly from conventional transistor electronics based on the von Neumann architecture. Novel bio-inspired computing architectures have therefore been attracting considerable attention in the field of neuromorphic computing. Here, memristive devices, which serve as non-volatile resistive memory, are employed to emulate the plastic behaviour of biological synapses. In particular, CMOS-integrated resistive random access memory (RRAM) devices are promising candidates for extending conventional CMOS technology to neuromorphic systems. However, the inherent stochasticity of resistive switching can be challenging for network performance. In this work, the probabilistic switching is exploited to emulate stochastic plasticity with fully CMOS-integrated binary RRAM devices. Two RRAM technologies with different device variabilities are investigated in detail, and their potential applications in stochastic artificial neural networks (StochANNs) capable of solving MNIST pattern recognition tasks are examined. A mixed-signal implementation with hardware synapses and software neurons, combined with numerical simulations, shows that the proposed concept of stochastic computing is able to process analogue data with binary memory cells.
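The stochastic plasticity exploited above boils down to applying an analogue weight update to a binary cell only with a probability proportional to the update's magnitude, so that the expected weight change matches the analogue rule. A hedged sketch, where the probability mapping is an illustrative assumption rather than the paper's measured device characteristic:

```python
import random

def stochastic_binary_update(w_bin, delta_w, rng, scale=1.0):
    """Apply an analogue update delta_w to a binary weight in {0, 1}:
    flip toward the update's direction with probability |delta_w| * scale,
    so that E[w_after] moves by approximately delta_w."""
    p = min(1.0, abs(delta_w) * scale)
    if rng.random() < p:
        return 1 if delta_w > 0 else 0
    return w_bin

def mean_weight_after(delta_w, n=10000, seed=0):
    """Average the binary rule over many synapses starting at 0:
    the population mean tracks the analogue update."""
    rng = random.Random(seed)
    return sum(stochastic_binary_update(0, delta_w, rng) for _ in range(n)) / n
```

No single cell stores the analogue value, but across a population (or across training steps) the binary flips average out to the analogue learning rule, which is what lets StochANNs process analogue data with one-bit memory.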
12. Dutta S, Schafer C, Gomez J, Ni K, Joshi S, Datta S. Supervised Learning in All FeFET-Based Spiking Neural Network: Opportunities and Challenges. Front Neurosci 2020; 14:634. [PMID: 32670012] [PMCID: PMC7327100] [DOI: 10.3389/fnins.2020.00634] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Received: 01/16/2020] [Accepted: 05/22/2020] [Indexed: 11/13/2022]
Abstract
The two possible pathways toward artificial intelligence (AI), (i) neuroscience-oriented neuromorphic computing [such as spiking neural networks (SNNs)] and (ii) computer-science-driven machine learning (such as deep learning), differ widely in their fundamental formalism and coding schemes (Pei et al., 2019). Deviating from the traditional deep learning approach of relying on neuronal models with static nonlinearities, SNNs attempt to capture brain-like features such as computation using spikes, which holds the promise of improving the energy efficiency of computing platforms. To achieve much higher areal and energy efficiency than today's hardware implementations of SNNs, we need to go beyond the traditional route of CMOS-based digital or mixed-signal neuronal circuits and the segregation of computation and memory under the von Neumann architecture. Recently, ferroelectric field-effect transistors (FeFETs) have been explored as a promising alternative for building neuromorphic hardware, utilizing their non-volatile nature and rich polarization switching dynamics. In this work, we propose an all-FeFET-based SNN hardware that allows low-power spike-based information processing and co-localized memory and computing (a.k.a. in-memory computing). We experimentally demonstrate the essential neuronal and synaptic dynamics in a 28 nm high-K metal gate FeFET technology. Furthermore, drawing inspiration from the traditional machine learning approach of optimizing a cost function to adjust the synaptic weights, we implement a surrogate gradient (SG) learning algorithm on our SNN platform that allows us to perform supervised learning on the MNIST dataset. As such, we provide a pathway toward building energy-efficient neuromorphic hardware that can support traditional machine learning algorithms.
Finally, we undertake synergistic device-algorithm co-design by accounting for the impacts of device-level variation (stochasticity) and limited bit precision of on-chip synaptic weights (available analog states) on the classification accuracy.
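Surrogate gradient learning, as run on this FeFET platform, keeps the non-differentiable Heaviside spike in the forward pass but substitutes a smooth surrogate derivative in the backward pass. The minimal numerical sketch below uses a fast-sigmoid surrogate on a single synapse; the slope and learning rate are illustrative choices, not the paper's hyperparameters.

```python
def spike(v, v_th=1.0):
    """Forward pass: non-differentiable Heaviside spike function."""
    return 1.0 if v >= v_th else 0.0

def surrogate_grad(v, v_th=1.0, slope=10.0):
    """Backward pass: fast-sigmoid surrogate for dS/dv, peaked at threshold.
    This replaces the Heaviside's zero-almost-everywhere true derivative."""
    x = slope * (v - v_th)
    return slope / (1.0 + abs(x)) ** 2

def train_step(w, x, target, lr=0.5):
    """One gradient step on a single synapse driving a spiking unit,
    minimizing 0.5 * (spike - target)^2 via the surrogate derivative."""
    v = w * x
    out = spike(v)
    grad_w = (out - target) * surrogate_grad(v) * x
    return w - lr * grad_w
```

Even though the true derivative of the spike is zero almost everywhere (so plain backpropagation would never update the weight), the surrogate provides a useful descent direction whenever the membrane potential is near threshold.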
Affiliation(s)
- Sourav Dutta
- Department of Electrical Engineering, College of Engineering, University of Notre Dame, Notre Dame, IN, United States
- Clemens Schafer
- Department of Computer Science and Engineering, College of Engineering, University of Notre Dame, Notre Dame, IN, United States
- Jorge Gomez
- Department of Electrical Engineering, College of Engineering, University of Notre Dame, Notre Dame, IN, United States
- Kai Ni
- Department of Microsystems Engineering, Rochester Institute of Technology, Rochester, NY, United States
- Siddharth Joshi
- Department of Computer Science and Engineering, College of Engineering, University of Notre Dame, Notre Dame, IN, United States
- Suman Datta
- Department of Electrical Engineering, College of Engineering, University of Notre Dame, Notre Dame, IN, United States
13. Mikhaylov A, Pimashkin A, Pigareva Y, Gerasimova S, Gryaznov E, Shchanikov S, Zuev A, Talanov M, Lavrov I, Demin V, Erokhin V, Lobov S, Mukhina I, Kazantsev V, Wu H, Spagnolo B. Neurohybrid Memristive CMOS-Integrated Systems for Biosensors and Neuroprosthetics. Front Neurosci 2020; 14:358. [PMID: 32410943] [PMCID: PMC7199501] [DOI: 10.3389/fnins.2020.00358] [Citation(s) in RCA: 110] [Impact Index Per Article: 27.5] [Received: 10/31/2019] [Accepted: 03/24/2020] [Indexed: 11/18/2022]
Abstract
Here we provide a perspective concept of a neurohybrid memristive chip based on the combination of living neural networks cultivated in a microfluidic/microelectrode system with metal-oxide memristive devices or arrays, integrated with a mixed-signal CMOS layer to control the analog memristive circuits, process the decoded information, and arrange feedback stimulation of the biological culture as parts of a bidirectional neurointerface. Our main focus is on state-of-the-art approaches for the cultivation and spatial ordering of networks of dissociated hippocampal neurons, the fabrication of large-scale cross-bar arrays of memristive devices tailored using device engineering, resistive state programming, or non-linear dynamics, as well as the hardware implementation of spiking neural networks (SNNs) based on arrays of memristive devices and integrated CMOS electronics. The concept represents an example of a brain-on-chip system belonging to a more general class of memristive neurohybrid systems for new-generation robotics, artificial intelligence, and personalized medicine, discussed in the framework of the proposed roadmap for the next decade.
Affiliation(s)
- Alexey Mikhaylov
- Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Alexey Pimashkin
- Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Yana Pigareva
- Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Evgeny Gryaznov
- Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Sergey Shchanikov
- Department of Information Technologies, Vladimir State University, Murom, Russia
- Anton Zuev
- Department of Information Technologies, Vladimir State University, Murom, Russia
- Max Talanov
- Neuroscience Laboratory, Kazan Federal University, Kazan, Russia
- Igor Lavrov
- Department of Neurologic Surgery, Mayo Clinic, Rochester, MN, United States
- Laboratory of Motor Neurorehabilitation, Kazan Federal University, Kazan, Russia
- Victor Erokhin
- Neuroscience Laboratory, Kazan Federal University, Kazan, Russia
- Kurchatov Institute, Moscow, Russia
- CNR-Institute of Materials for Electronics and Magnetism, Italian National Research Council, Parma, Italy
- Sergey Lobov
- Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Center for Technologies in Robotics and Mechatronics Components, Innopolis University, Innopolis, Russia
- Irina Mukhina
- Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Cell Technology Group, Privolzhsky Research Medical University, Nizhny Novgorod, Russia
- Victor Kazantsev
- Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Center for Technologies in Robotics and Mechatronics Components, Innopolis University, Innopolis, Russia
- Huaqiang Wu
- Institute of Microelectronics, Tsinghua University, Beijing, China
- Bernardo Spagnolo
- Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Dipartimento di Fisica e Chimica-Emilio Segrè, Group of Interdisciplinary Theoretical Physics, Università di Palermo and CNISM, Unità di Palermo, Palermo, Italy
- Istituto Nazionale di Fisica Nucleare, Sezione di Catania, Catania, Italy
14. Chakraborty I, Agrawal A, Jaiswal A, Srinivasan G, Roy K. In situ unsupervised learning using stochastic switching in magneto-electric magnetic tunnel junctions. Philosophical Transactions. Series A, Mathematical, Physical, and Engineering Sciences 2020; 378:20190157. [PMID: 31865881] [PMCID: PMC6939242] [DOI: 10.1098/rsta.2019.0157] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Accepted: 09/30/2019] [Indexed: 06/10/2023]
Abstract
Spiking neural networks (SNNs) offer a bio-plausible and potentially power-efficient alternative to conventional deep learning. Although there has been progress towards implementing SNN functionalities in custom CMOS-based hardware using beyond Von Neumann architectures, the power-efficiency of the human brain has remained elusive. This has necessitated investigations of novel material systems which can efficiently mimic the functional units of SNNs, such as neurons and synapses. In this paper, we present a magnetoelectric-magnetic tunnel junction (ME-MTJ) device as a synapse. We arrange these synapses in a crossbar fashion and perform in situ unsupervised learning. We leverage the capacitive nature of write-ports in ME-MTJs, wherein by applying appropriately shaped voltage pulses across the write-port, the ME-MTJ can be switched in a probabilistic manner. We further exploit the sigmoidal switching characteristics of ME-MTJ to tune the synapses to follow the well-known spike timing-dependent plasticity (STDP) rule in a stochastic fashion. Finally, we use the stochastic STDP rule in ME-MTJ synapses to simulate a two-layered SNN to perform image classification tasks on a handwritten digit dataset. Thus, the capacitive write-port and the decoupled-nature of read-write path of ME-MTJs allow us to construct a transistor-less crossbar, suitable for energy-efficient implementation of in situ learning in SNNs. This article is part of the theme issue 'Harmonizing energy-autonomous computing and intelligence'.
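The stochastic STDP described above can be emulated in software: the spike-timing difference shapes a write pulse whose amplitude is pushed through the device's sigmoidal switching curve, so the binary ME-MTJ synapse flips with a timing-dependent probability. All curve constants below are illustrative assumptions, not measured ME-MTJ characteristics.

```python
import math
import random

def switch_prob(v_pulse, v0=1.0, beta=8.0):
    """Sigmoidal probability that the device flips for a write pulse of
    amplitude v_pulse (illustrative sigmoid standing in for the measured
    switching characteristic)."""
    return 1.0 / (1.0 + math.exp(-beta * (v_pulse - v0)))

def pulse_from_timing(dt, v_max=1.5, tau=20.0):
    """Shape the write pulse so closer pre/post spike pairs give a larger
    amplitude, mapping the STDP window onto the sigmoidal switching curve."""
    return v_max * math.exp(-abs(dt) / tau)

def stochastic_stdp_switch(dt, rng):
    """Return True if the binary synapse flips for timing difference dt (ms)."""
    return rng.random() < switch_prob(pulse_from_timing(dt))
```

Averaged over many presentations, the flip rate as a function of dt traces out an STDP-like window even though each individual synapse is a one-bit device, which is what enables in situ unsupervised learning on a transistor-less crossbar.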
Collapse
|
15
|
A Hardware-Deployable Neuromorphic Solution for Encoding and Classification of Electronic Nose Data. SENSORS 2019; 19:s19224831. [PMID: 31698785 PMCID: PMC6891685 DOI: 10.3390/s19224831] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/04/2019] [Revised: 10/26/2019] [Accepted: 10/31/2019] [Indexed: 11/17/2022]
Abstract
In several application domains, electronic nose systems employing conventional data processing approaches incur substantial power and computational costs, along with limitations such as significant latency and poor classification accuracy. Recent developments in spike-based bio-inspired approaches have delivered solutions for the highly accurate classification of multivariate sensor data with minimized computational and power requirements. Although these methods have addressed issues related to efficient data processing and classification accuracy, other areas, such as reducing the processing latency to support real-time application and deploying spike-based solutions on supported hardware, have yet to be studied in detail. Through this investigation, we proposed a spiking neural network (SNN)-based classifier, implemented in a chip-emulation-based development environment, that can be seamlessly deployed on a neuromorphic system-on-a-chip (NSoC). Under three different scenarios of increasing complexity, the SNN was shown to classify real-valued sensor data with greater than 90% accuracy and with a maximum latency of 3 s on the software-based platform. Highlights of this work included the design and implementation of a novel encoder for artificial olfactory systems, implementation of unsupervised spike-timing-dependent plasticity (STDP) for learning, and a foundational study on early classification capability using the SNN-based classifier.
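A common way to feed real-valued sensor readings into an SNN, and one plausible shape for the encoder this abstract mentions, is latency (time-to-first-spike) encoding: stronger inputs fire earlier. The sketch below is a generic encoder under assumed normalization bounds, not the paper's specific design.

```python
def latency_encode(values, t_max=100.0, v_min=0.0, v_max=1.0):
    """Map each sensor reading to a single spike time in [0, t_max]:
    the strongest input spikes at t=0, the weakest at t=t_max.
    (Generic time-to-first-spike scheme; bounds are assumptions.)"""
    times = []
    for v in values:
        v = min(max(v, v_min), v_max)          # clamp to the assumed range
        norm = (v - v_min) / (v_max - v_min)   # normalize to [0, 1]
        times.append((1.0 - norm) * t_max)     # invert: big value -> early spike
    return times
```

Early classification, one of the highlights above, falls out naturally from this scheme: the most informative (strongest) channels spike first, so a decision can often be made before the full encoding window elapses.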
Collapse
|
16
|
Frenkel C, Legat JD, Bol D. MorphIC: A 65-nm 738k-Synapse/mm² Quad-Core Binary-Weight Digital Neuromorphic Processor With Stochastic Spike-Driven Online Learning. IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS 2019; 13:999-1010. [PMID: 31329562 DOI: 10.1109/tbcas.2019.2928793] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Recent trends in the field of neural network accelerators investigate weight quantization as a means to increase the resource- and power-efficiency of hardware devices. As full on-chip weight storage is necessary to avoid the high energy cost of off-chip memory accesses, memory reduction requirements for weight storage pushed toward the use of binary weights, which were demonstrated to have a limited accuracy reduction on many applications when quantization-aware training techniques are used. In parallel, spiking neural network (SNN) architectures are explored to further reduce power when processing sparse event-based data streams, while on-chip spike-based online learning appears as a key feature for applications constrained in power and resources during the training phase. However, designing power- and area-efficient SNNs still requires the development of specific techniques in order to leverage on-chip online learning on binary weights without compromising the synapse density. In this paper, we demonstrate MorphIC, a quad-core binary-weight digital neuromorphic processor embedding a stochastic version of the spike-driven synaptic plasticity (S-SDSP) learning rule and a hierarchical routing fabric for large-scale chip interconnection. The MorphIC SNN processor embeds a total of 2k leaky integrate-and-fire (LIF) neurons and more than two million plastic synapses for an active silicon area of 2.86 mm² in 65-nm CMOS, achieving a high density of 738k synapses/mm². MorphIC demonstrates an order-of-magnitude improvement in the area-accuracy tradeoff on the MNIST classification task compared to previously-proposed SNNs, while having no penalty in the energy-accuracy tradeoff.
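The leaky integrate-and-fire (LIF) neuron that MorphIC instantiates 2k times is simple enough to state in one discrete-time update. This is the generic textbook form with assumed threshold, reset, and leak constants, not MorphIC's exact digital implementation.

```python
def lif_step(v, i_in, v_th=1.0, v_reset=0.0, leak=0.05):
    """One discrete-time LIF update: the membrane potential leaks toward
    zero, integrates the input current, and spikes on crossing threshold.
    Returns (new_potential, spiked). Constants are illustrative."""
    v = v * (1.0 - leak) + i_in
    if v >= v_th:
        return v_reset, True   # emit a spike and reset the membrane
    return v, False
```

Because each neuron's state is a single scalar updated with one multiply-accumulate and one compare, thousands of such units fit in a small silicon area, which is the density argument the abstract makes.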
Collapse
|
17
|
Abstract
In this paper, we present an electrical circuit of a leaky integrate-and-fire neuron with one VO2 switch, which models the properties of biological neurons. Based on VO2 neurons, a two-layer spiking neural network consisting of nine input and three output neurons is modeled in the SPICE simulator. The network contains excitatory and inhibitory couplings, and implements the winner-takes-all principle in pattern recognition. Using a supervised Spike-Timing-Dependent Plasticity training method and a timing method of information coding, the network was trained to recognize three patterns with dimensions of 3 × 3 pixels. The neural network is able to recognize up to 10⁵ images per second, and has the potential to increase the recognition speed further.
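The winner-takes-all principle used by this 9-input/3-output network reduces, at the readout level, to selecting the most strongly driven output neuron while lateral inhibition silences the rest. The sketch below shows only that outcome, abstracting away the VO2 circuit dynamics entirely.

```python
def winner_takes_all(output_potentials):
    """Lateral inhibition reduced to its result: only the output neuron
    with the largest membrane potential is allowed to fire.
    (Behavioral abstraction, not the SPICE-level VO2 circuit.)"""
    winner = max(range(len(output_potentials)),
                 key=output_potentials.__getitem__)
    return [i == winner for i in range(len(output_potentials))]
```

For a 3 × 3 pixel pattern task, each of the three outputs accumulates evidence for one trained pattern, and the winner identifies the recognized class.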
Collapse
|
18
|
Camuñas-Mesa LA, Linares-Barranco B, Serrano-Gotarredona T. Neuromorphic Spiking Neural Networks and Their Memristor-CMOS Hardware Implementations. MATERIALS (BASEL, SWITZERLAND) 2019; 12:E2745. [PMID: 31461877 PMCID: PMC6747825 DOI: 10.3390/ma12172745] [Citation(s) in RCA: 41] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/05/2019] [Revised: 08/02/2019] [Accepted: 08/10/2019] [Indexed: 11/17/2022]
Abstract
Inspired by biology, neuromorphic systems have been trying to emulate the human brain for decades, taking advantage of its massive parallelism and sparse information coding. Recently, several large-scale hardware projects have demonstrated the outstanding capabilities of this paradigm for applications related to sensory information processing. These systems allow for the implementation of massive neural networks with millions of neurons and billions of synapses. However, the realization of learning strategies in these systems consumes a significant share of resources in terms of area and power. The recent development of nanoscale memristors that can be integrated with Complementary Metal-Oxide-Semiconductor (CMOS) technology offers a very promising way to emulate the behavior of biological synapses. Therefore, hybrid memristor-CMOS approaches have been proposed to implement large-scale neural networks with learning capabilities, offering a scalable and lower-cost alternative to existing CMOS systems.
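The core operation a memristor-CMOS crossbar performs is an analog multiply-accumulate: applying voltages on the rows yields, by Ohm's and Kirchhoff's laws, column currents equal to the dot product of voltages and conductances. The sketch below is the idealized behavior, ignoring wire resistance, sneak paths, and device non-linearity.

```python
def crossbar_currents(voltages, conductances):
    """Idealized memristor crossbar: conductances[i][j] is the device at
    row i, column j; the current collected on column j is
    sum_i voltages[i] * conductances[i][j]. Non-idealities are ignored."""
    n_cols = len(conductances[0])
    return [sum(v * row[j] for v, row in zip(voltages, conductances))
            for j in range(n_cols)]
```

This is why crossbars are attractive for synaptic arrays: an N × M vector-matrix product happens in one analog step, with each memristor's conductance storing one synaptic weight in place.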
Collapse
Affiliation(s)
- Luis A Camuñas-Mesa, Instituto de Microelectrónica de Sevilla (IMSE-CNM), CSIC and Universidad de Sevilla, 41092 Sevilla, Spain.
- Bernabé Linares-Barranco, Instituto de Microelectrónica de Sevilla (IMSE-CNM), CSIC and Universidad de Sevilla, 41092 Sevilla, Spain.
- Teresa Serrano-Gotarredona, Instituto de Microelectrónica de Sevilla (IMSE-CNM), CSIC and Universidad de Sevilla, 41092 Sevilla, Spain.
Collapse
|
19
|
Srinivasan G, Roy K. ReStoCNet: Residual Stochastic Binary Convolutional Spiking Neural Network for Memory-Efficient Neuromorphic Computing. Front Neurosci 2019; 13:189. [PMID: 30941003 PMCID: PMC6434391 DOI: 10.3389/fnins.2019.00189] [Citation(s) in RCA: 31] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2018] [Accepted: 02/18/2019] [Indexed: 11/13/2022] Open
Abstract
In this work, we propose ReStoCNet, a residual stochastic multilayer convolutional Spiking Neural Network (SNN) composed of binary kernels, to reduce the synaptic memory footprint and enhance the computational efficiency of SNNs for complex pattern recognition tasks. ReStoCNet consists of an input layer followed by stacked convolutional layers for hierarchical input feature extraction, pooling layers for dimensionality reduction, and a fully-connected layer for inference. In addition, we introduce residual connections between the stacked convolutional layers to improve the hierarchical feature learning capability of deep SNNs. We propose a Spike Timing Dependent Plasticity (STDP)-based probabilistic learning algorithm, referred to as Hybrid-STDP (HB-STDP), incorporating Hebbian and anti-Hebbian learning mechanisms, to train the binary kernels forming ReStoCNet in a layer-wise unsupervised manner. We demonstrate the efficacy of ReStoCNet and the presented HB-STDP based unsupervised training methodology on the MNIST and CIFAR-10 datasets. We show that residual connections enable the deeper convolutional layers to self-learn useful high-level input features and mitigate the accuracy loss observed in deep SNNs devoid of residual connections. The proposed ReStoCNet offers >20× kernel memory compression compared to full-precision (32-bit) SNN while yielding high enough classification accuracy on the chosen pattern recognition tasks.
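A probabilistic binary-weight rule in the spirit of the HB-STDP described above can be sketched as follows: causal pre-before-post timing potentiates a kernel weight to +1 (Hebbian), anti-causal timing depresses it to -1 (anti-Hebbian), each with a probability that decays with the timing gap. The constants and the ±1 weight coding are illustrative assumptions, not the paper's exact rule.

```python
import math
import random

def hb_stdp_update(w, dt_ms, p_max=0.5, tau=20.0):
    """Probabilistic update of one binary kernel weight w in {-1, +1}.
    dt_ms = t_post - t_pre; the flip probability decays as exp(-|dt|/tau).
    (Illustrative sketch of a Hebbian/anti-Hebbian stochastic rule.)"""
    p = p_max * math.exp(-abs(dt_ms) / tau)
    if random.random() < p:
        return 1 if dt_ms >= 0 else -1   # Hebbian vs. anti-Hebbian branch
    return w                              # no switch this time
```

Because each weight is a single bit updated stochastically, a full-precision shadow copy is unnecessary, which is where the >20× kernel memory compression relative to 32-bit weights comes from.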
Collapse
|