1. Li K, Yao J, Zhao P, Luo Y, Ge X, Yang R, Cheng X, Miao X. Ovonic threshold switching-based artificial afferent neurons for thermal in-sensor computing. Materials Horizons 2024; 11:2106-2114. [PMID: 38545857] [DOI: 10.1039/d4mh00053f]
Abstract
Artificial afferent neurons inspired by the biological sensory nervous system have enormous potential for efficiently perceiving and processing environmental information. However, previously reported artificial afferent neurons suffer from two prominent challenges: considerable power consumption and limited scalability. To address these challenges, a bioinspired artificial thermal afferent neuron based on an N-doped SiTe ovonic threshold switching (OTS) device is presented for the first time. The engineered OTS device shows remarkable uniformity and robust endurance, ensuring the reliability and efficacy of the artificial afferent neurons. Nitrogen doping substantially decreases the leakage current of the SiTe OTS device, yielding ultra-low power consumption of less than 0.3 nJ per spike. The inherent temperature response of the N-doped SiTe OTS material allows a highly compact artificial thermal afferent neuron to be constructed that operates over a wide temperature range. An edge detection task further verifies its thermal perceptual computing function. Our work provides insight into OTS-based artificial afferent neurons for electronic skin and sensory neurorobotics.
Affiliation(s)
- Kai Li, Jiaping Yao, Peng Zhao, Yunhao Luo, Xiang Ge, Rui Yang, Xiaomin Cheng, Xiangshui Miao: School of Integrated Circuits, Hubei Key Laboratory for Advanced Memories, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074, China
- Rui Yang, Xiaomin Cheng, Xiangshui Miao: Hubei Yangtze Memory Laboratories, Wuhan 430205, China
2. Talin AA, Li Y, Robinson DA, Fuller EJ, Kumar S. ECRAM Materials, Devices, Circuits and Architectures: A Perspective. Advanced Materials 2023; 35:e2204771. [PMID: 36354177] [DOI: 10.1002/adma.202204771]
Abstract
Non-von-Neumann computing using neuromorphic systems based on two-terminal resistive nonvolatile memory elements has emerged as a promising approach, but its full potential has not been realized due to the lack of materials and devices with the appropriate attributes. Unlike memristors, which require large write currents to drive phase transformations or filament growth, electrochemical random access memory (ECRAM) decouples the "write" and "read" operations using a "gate" electrode to tune the conductance state through charge-transfer reactions, and every electron transferred through the external circuit in ECRAM corresponds to the migration of ≈1 ion used to store analog information. Like static dopants in traditional semiconductors, electrochemically inserted ions modulate the conductivity by locally perturbing a host's electronic structure; however, ECRAM does so in a dynamic and reversible manner. The resulting change in conductance can span orders of magnitude, from the gradual increments needed for analog elements to the large, abrupt changes suited to dynamically reconfigurable adaptive architectures. This in-depth perspective discusses the history of ECRAM; recent progress in devices spanning organic, inorganic, and 2D materials, circuits, and architectures; the rich portfolio of open fundamental questions; and how ECRAM can be harnessed to realize a new paradigm for low-power neuromorphic computing.
Affiliation(s)
- A Alec Talin: Sandia National Laboratories, Livermore, CA 94551, USA
- Yiyang Li: Department of Materials Science and Engineering, University of Michigan, Ann Arbor, MI 48109, USA
- Suhas Kumar: Sandia National Laboratories, Livermore, CA 94551, USA
3. Halaly R, Ezra Tsur E. Autonomous driving controllers with neuromorphic spiking neural networks. Front Neurorobot 2023; 17:1234962. [PMID: 37636326] [PMCID: PMC10451073] [DOI: 10.3389/fnbot.2023.1234962]
Abstract
Autonomous driving is one of the hallmarks of artificial intelligence. Neuromorphic (brain-inspired) control is poised to contribute significantly to autonomous behavior by leveraging energy-efficient computational frameworks based on spiking neural networks. In this work, we explored neuromorphic implementations of four prominent controllers for autonomous driving: pure pursuit, Stanley, PID, and MPC, using a physics-aware simulation framework. We extensively evaluated these models under various intrinsic parameters and compared their performance with conventional CPU-based implementations. We show that, although they are neural approximations, neuromorphic models can perform competitively with their conventional counterparts. We provide guidelines for building neuromorphic architectures for control and describe the importance of their underlying tuning parameters and neuronal resources. Our results show that most models converge to their optimal performance with merely 100-1,000 neurons. They also highlight the value of hybrid conventional-neuromorphic designs, as suggested here with the MPC controller. This study also highlights the limitations of neuromorphic implementations, particularly at higher speeds (> 15 m/s), where they tend to degrade faster than conventional designs.
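For orientation, the pure-pursuit law that such controllers approximate with neurons can be sketched in a few lines of conventional code. This is a hedged illustration of the textbook geometric law only; the function and parameter names are invented here, not taken from the paper:

```python
import math

def pure_pursuit_steering(x, y, yaw, target, wheelbase):
    """Textbook pure-pursuit steering: steer toward a lookahead point.

    Returns the front-wheel steering angle delta = atan(2 L sin(alpha) / ld),
    where alpha is the heading error to the target and ld its distance.
    """
    dx, dy = target[0] - x, target[1] - y
    alpha = math.atan2(dy, dx) - yaw      # angle between heading and target
    ld = math.hypot(dx, dy)               # lookahead distance
    return math.atan2(2.0 * wheelbase * math.sin(alpha), ld)

# A target straight ahead needs no steering; one to the left steers left (>0).
straight = pure_pursuit_steering(0.0, 0.0, 0.0, (10.0, 0.0), 2.5)
left = pure_pursuit_steering(0.0, 0.0, 0.0, (10.0, 5.0), 2.5)
```

A neuromorphic implementation would replace this arithmetic with populations of spiking neurons approximating the same mapping.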
Affiliation(s)
- Elishai Ezra Tsur: Neuro-Biomorphic Engineering Lab, Department of Mathematics and Computer Science, Open University of Israel, Ra'anana, Israel
4. Timcheck J, Kadmon J, Boahen K, Ganguli S. Optimal noise level for coding with tightly balanced networks of spiking neurons in the presence of transmission delays. PLoS Comput Biol 2022; 18:e1010593. [PMID: 36251693] [PMCID: PMC9576105] [DOI: 10.1371/journal.pcbi.1010593]
Abstract
Neural circuits consist of many noisy, slow components, with individual neurons subject to ion channel noise, axonal propagation delays, and unreliable and slow synaptic transmission. This raises a fundamental question: how can reliable computation emerge from such unreliable components? A classic strategy is to simply average over a population of N weakly coupled neurons to achieve errors that scale as 1/√N. More interestingly, recent work has introduced networks of leaky integrate-and-fire (LIF) neurons that achieve coding errors that scale superclassically as 1/N by combining the principles of predictive coding with fast and tight inhibitory-excitatory balance. However, spike transmission delays preclude such fast inhibition, and computational studies have observed that such delays can cause pathological synchronization that in turn destroys superclassical coding performance. Intriguingly, it has also been observed in simulations that noise can actually improve coding performance, and that there exists some optimal level of noise that minimizes coding error. However, we lack a quantitative theory that describes this fascinating interplay between delays, noise, and neural coding performance in spiking networks. In this work, we elucidate the mechanisms underpinning this beneficial role of noise by deriving analytical expressions for coding error as a function of spike propagation delay and noise levels in predictive-coding tight-balance networks of LIF neurons. Furthermore, we compute the minimal coding error and the associated optimal noise level, finding that they grow as power laws with the delay. Our analysis reveals quantitatively how optimal levels of noise can rescue neural coding performance in spiking neural networks with delays by preventing the build-up of pathological synchrony without overwhelming the overall spiking dynamics. This analysis can serve as a foundation for the further study of precise computation in the presence of noise and delays in efficient spiking neural circuits.
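The classic 1/√N averaging baseline the abstract contrasts against is easy to verify numerically. Below is a hedged sketch, with neurons reduced to noisy scalar readouts of a constant signal (all names and constants are illustrative, not from the paper):

```python
import random
import statistics

random.seed(0)

def coding_error(n_neurons, n_trials=2000, signal=1.0, sigma=1.0):
    """RMS error of decoding a constant signal by averaging n noisy readouts."""
    errs = []
    for _ in range(n_trials):
        est = sum(signal + random.gauss(0.0, sigma)
                  for _ in range(n_neurons)) / n_neurons
        errs.append((est - signal) ** 2)
    return statistics.mean(errs) ** 0.5

err_small = coding_error(10)
err_large = coding_error(1000)
# With 100x more neurons, the error shrinks by about sqrt(100) = 10x.
ratio = err_small / err_large
```

The superclassical 1/N scaling of tightly balanced LIF networks beats this baseline, which is why the delay-induced loss of that regime matters.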
Affiliation(s)
- Jonathan Timcheck: Department of Physics, Stanford University, Stanford, California, United States of America
- Jonathan Kadmon: Department of Applied Physics, Stanford University, Stanford, California, United States of America
- Kwabena Boahen: Department of Bioengineering, Stanford University, Stanford, California, United States of America
- Surya Ganguli: Department of Applied Physics, Stanford University, Stanford, California, United States of America
5. Wang J, Peng Z, Zhan Y, Li Y, Yu G, Chong KS, Wang C. A High-Accuracy and Energy-Efficient CORDIC Based Izhikevich Neuron With Error Suppression and Compensation. IEEE Transactions on Biomedical Circuits and Systems 2022; 16:807-821. [PMID: 35834464] [DOI: 10.1109/tbcas.2022.3191004]
Abstract
Bio-inspired neuron models are the key building blocks of brain-like neural networks for brain-science exploration and neuromorphic engineering applications. Efficient hardware design of such neuron models is challenging because model accuracy, energy consumption, and hardware cost must be balanced. This paper proposes a high-accuracy and energy-efficient Izhikevich neuron design based on a Fast-Convergence COordinate Rotation DIgital Computer (FC-CORDIC). To ensure model accuracy, an error propagation model of the Izhikevich neuron is presented for systematic error analysis and effective error reduction. A Parameter-Tuning Error Compensation (PTEC) method and a Bitwidth-Extension Error Suppression (BEES) method are proposed to reduce the error of the Izhikevich neuron design effectively. In addition, by using FC-CORDIC instead of conventional CORDIC for the square calculation in the Izhikevich model, redundant CORDIC iterations are removed; both the accumulated errors and the required computation are thereby reduced, significantly improving accuracy and energy efficiency. An optimized fixed-point design of FC-CORDIC is also proposed to save hardware overhead while preserving accuracy. FPGA implementation results show that the proposed Izhikevich neuron design achieves high accuracy and energy efficiency with acceptable hardware overhead compared with state-of-the-art designs.
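For readers unfamiliar with the underlying model, a floating-point forward-Euler sketch of the Izhikevich equations (v' = 0.04v² + 5v + 140 - u + I, u' = a(bv - u), with reset when v reaches the 30 mV cutoff) may help. This reference model is only an illustration and is not the paper's fixed-point CORDIC implementation:

```python
def izhikevich(I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.25, steps=4000):
    """Forward-Euler Izhikevich neuron with regular-spiking parameters.

    Returns the number of spikes over steps*dt milliseconds of constant
    input current I.
    """
    v, u = c, b * c          # membrane potential and recovery variable
    spikes = 0
    for _ in range(steps):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * (a * (b * v - u))
        if v >= 30.0:        # spike cutoff: reset v, bump recovery
            v, u = c, u + d
            spikes += 1
    return spikes

quiet = izhikevich(I=0.0)    # no input: the neuron rests
active = izhikevich(I=10.0)  # suprathreshold input: tonic spiking
```

The hardware challenge addressed by the paper is computing the v² term accurately and cheaply in fixed point, which is where FC-CORDIC comes in.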
6. Cramer B, Stradmann Y, Schemmel J, Zenke F. The Heidelberg Spiking Data Sets for the Systematic Evaluation of Spiking Neural Networks. IEEE Transactions on Neural Networks and Learning Systems 2022; 33:2744-2757. [PMID: 33378266] [DOI: 10.1109/tnnls.2020.3044364]
Abstract
Spiking neural networks are the basis of versatile and power-efficient information processing in the brain. Although we currently lack a detailed understanding of how these networks compute, recently developed optimization techniques allow us to instantiate increasingly complex functional spiking neural networks in silico. These methods hold the promise of building more efficient non-von-Neumann computing hardware and will offer new vistas in the quest to unravel brain circuit function. To accelerate the development of such methods, objective ways to compare their performance are indispensable. Presently, however, there are no widely accepted means for comparing the computational performance of spiking neural networks. To address this issue, we introduce two spike-based classification data sets, broadly applicable for benchmarking both software and neuromorphic hardware implementations of spiking neural networks. To accomplish this, we developed a general audio-to-spiking conversion procedure inspired by neurophysiology. We applied this conversion to an existing speech data set and to a novel one: the free, high-fidelity, word-level-aligned Heidelberg digit data set created specifically for this study. By training a range of conventional and spiking classifiers, we show that leveraging spike timing information within these data sets is essential for good classification accuracy. These results serve as a first reference for future performance comparisons of spiking neural networks.
7. Semenova N, Brunner D. Noise-mitigation strategies in physical feedforward neural networks. Chaos 2022; 32:061106. [PMID: 35778142] [DOI: 10.1063/5.0096637]
Abstract
Physical neural networks are promising candidates for next-generation artificial intelligence hardware. In such architectures, neurons and connections are physically realized and do not leverage digital concepts, with their practically infinite signal-to-noise ratio, to encode, transduce, and transform information. They are therefore prone to noise with a variety of statistical and architectural properties, and effective strategies that leverage network-inherent assets to mitigate noise in a hardware-efficient manner are important in the pursuit of next-generation neural network hardware. Based on analytical derivations, we introduce and analyze a variety of noise-mitigation approaches. We show analytically that intra-layer connections in which the connection matrix's squared mean exceeds the mean of its square fully suppress uncorrelated noise. We then develop two synergistic strategies for noise that is uncorrelated or correlated across populations of neurons. First, we introduce the concept of ghost neurons, where each group of neurons perturbed by correlated noise has a negative connection to a single neuron that receives no input information. Second, we show that pooling of neuron populations is an efficient approach to suppress uncorrelated noise. Together, these form a general noise-mitigation strategy leveraging the statistical properties of the noise terms most relevant in analog hardware. Finally, we demonstrate the effectiveness of this combined approach for a trained neural network classifying the handwritten digits of the Modified National Institute of Standards and Technology (MNIST) data set, for which we achieve a fourfold improvement of the output signal-to-noise ratio. Our noise mitigation lifts the classification accuracy of the noisy neural network from 92.07% to 97.49%, essentially identical to the 97.54% of the noise-free network.
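The ghost-neuron idea can be illustrated with a toy readout: a population perturbed by one shared (correlated) noise source, plus a "ghost" unit that sees only that noise and is subtracted from every population neuron. This is a hedged caricature of the concept, not the paper's network or analysis (all names and constants are invented here):

```python
import random
import statistics

random.seed(1)

def population_readout(signal, n=50, trials=4000, sigma=0.5, ghost=False):
    """RMS readout error of an n-neuron population with shared noise.

    With ghost=True, a ghost neuron carrying only the shared noise is
    subtracted from each neuron (a negative connection), cancelling the
    correlated term.
    """
    errs = []
    for _ in range(trials):
        shared = random.gauss(0.0, sigma)  # correlated across the population
        ghost_neuron = shared              # receives noise but no input
        est = 0.0
        for _ in range(n):
            neuron = signal + shared
            if ghost:
                neuron -= ghost_neuron     # negative connection to the ghost
            est += neuron
        errs.append((est / n - signal) ** 2)
    return statistics.mean(errs) ** 0.5

noisy = population_readout(1.0, ghost=False)      # pooling alone cannot help
corrected = population_readout(1.0, ghost=True)   # correlated noise cancelled
```

Note that averaging alone leaves the correlated error untouched (it is the same in every neuron), which is exactly why a dedicated cancellation path is needed; pooling handles only the uncorrelated part.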
Affiliation(s)
- N Semenova, D Brunner: Département d'Optique P. M. Duffieux, Institut FEMTO-ST, Université Bourgogne-Franche-Comté, CNRS UMR 6174, Besançon, France
8. Calaim N, Dehmelt FA, Gonçalves PJ, Machens CK. The geometry of robustness in spiking neural networks. eLife 2022; 11:e73276. [PMID: 35635432] [PMCID: PMC9307274] [DOI: 10.7554/elife.73276]
Abstract
Neural systems are remarkably robust against various perturbations, a phenomenon that still requires a clear explanation. Here, we graphically illustrate how neural networks can become robust. We study spiking networks that generate low-dimensional representations, and we show that the neurons' subthreshold voltages are confined to a convex region in a lower-dimensional voltage subspace, which we call a 'bounding box'. Changes in network parameters (such as the number of neurons, dimensionality of inputs, firing thresholds, synaptic weights, or transmission delays) can all be understood as deformations of this bounding box. Using these insights, we show that functionality is preserved as long as perturbations do not destroy the integrity of the bounding box. We suggest that the principles underlying robustness in these networks (low-dimensional representations, heterogeneity of tuning, and precise negative feedback) may be key to understanding the robustness of neural systems at the circuit level.
Affiliation(s)
- Pedro J Gonçalves: Department of Electrical and Computer Engineering, University of Tübingen, Tübingen, Germany
9. Neuromorphic Neural Engineering Framework-Inspired Online Continuous Learning with Analog Circuitry. Applied Sciences 2022. [DOI: 10.3390/app12094528]
Abstract
Neuromorphic hardware designs realize neural principles in electronics to provide high-performing, energy-efficient frameworks for machine learning. Here, we propose a neuromorphic analog design for continuous real-time learning. Our hardware design realizes the underlying principles of the neural engineering framework (NEF). NEF brings forth a theoretical framework for the representation and transformation of mathematical constructs with spiking neurons, thus providing efficient means for neuromorphic machine learning and the design of intricate dynamical systems. Our analog circuit design implements the neuromorphic prescribed error sensitivity (PES) learning rule with OZ neurons. OZ is an analog implementation of a spiking neuron, which was shown to have complete correspondence with NEF across firing rates, encoding vectors, and intercepts. We demonstrate PES-based neuromorphic representation of mathematical constructs with varying neuron configurations, the transformation of mathematical constructs, and the construction of a dynamical system with the design of an inducible leaky oscillator. We further designed a circuit emulator, allowing the evaluation of our electrical designs on a large scale. We used the circuit emulator in conjunction with a robot simulator to demonstrate adaptive learning-based control of a robotic arm with six degrees of freedom.
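The PES rule the circuit implements updates each neuron's decoding weight in proportion to its activity and a global error signal (delta d_i = -kappa * error * a_i). The rate-based sketch below shows the idea with rectified-linear "neurons" and random NEF-style encoders learning to represent their input online; all names, constants, and the neuron model are illustrative assumptions, not the paper's analog circuit:

```python
import random

random.seed(2)

N = 60
encoders = [random.choice((-1.0, 1.0)) for _ in range(N)]
gains = [random.uniform(0.5, 2.0) for _ in range(N)]
biases = [random.uniform(-1.0, 1.0) for _ in range(N)]
decoders = [0.0] * N
kappa = 1e-3  # PES learning rate

def activities(x):
    """Rectified-linear tuning curves: a_i = max(0, gain * encoder * x + bias)."""
    return [max(0.0, g * e * x + b) for g, e, b in zip(gains, encoders, biases)]

def decode(a):
    return sum(d * ai for d, ai in zip(decoders, a))

# Online PES learning: drive the decoded estimate toward the input x.
for _ in range(5000):
    x = random.uniform(-1.0, 1.0)
    a = activities(x)
    error = decode(a) - x                       # global error signal
    for i in range(N):
        decoders[i] -= kappa * error * a[i]     # PES: delta d_i = -kappa*e*a_i

# After learning, the decoded value should track the represented value.
test_error = max(abs(decode(activities(x)) - x)
                 for x in (-0.8, -0.3, 0.2, 0.7))
```

In the paper this update is carried out by analog circuitry on spiking OZ neurons rather than by a software loop.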
10. Kudithipudi D, Aguilar-Simon M, Babb J, Bazhenov M, Blackiston D, Bongard J, Brna AP, Chakravarthi Raja S, Cheney N, Clune J, Daram A, Fusi S, Helfer P, Kay L, Ketz N, Kira Z, Kolouri S, Krichmar JL, Kriegman S, Levin M, Madireddy S, Manicka S, Marjaninejad A, McNaughton B, Miikkulainen R, Navratilova Z, Pandit T, Parker A, Pilly PK, Risi S, Sejnowski TJ, Soltoggio A, Soures N, Tolias AS, Urbina-Meléndez D, Valero-Cuevas FJ, van de Ven GM, Vogelstein JT, Wang F, Weiss R, Yanguas-Gil A, Zou X, Siegelmann H. Biological underpinnings for lifelong learning machines. Nat Mach Intell 2022. [DOI: 10.1038/s42256-022-00452-0]
11.
Abstract
Neuromorphic systems aim to accomplish efficient computation in electronics by mirroring neurobiological principles. Taking advantage of neuromorphic technologies requires effective learning algorithms capable of instantiating high-performing neural networks, while also dealing with inevitable manufacturing variations of individual components, such as memristors or analog neurons. We present a learning framework resulting in bioinspired spiking neural networks with high performance, low inference latency, and sparse spike-coding schemes, which also self-corrects for device mismatch. We validate our approach on the BrainScaleS-2 analog spiking neuromorphic system, demonstrating state-of-the-art accuracy, low latency, and energy efficiency. Our work sketches a path for building powerful neuromorphic processors that take advantage of emerging analog technologies.

To rapidly process temporal information at a low metabolic cost, biological neurons integrate inputs as an analog sum, but communicate with spikes, binary events in time. Analog neuromorphic hardware uses the same principles to emulate spiking neural networks with exceptional energy efficiency. However, instantiating high-performing spiking networks on such hardware remains a significant challenge due to device mismatch and the lack of efficient training algorithms. Surrogate gradient learning has emerged as a promising training strategy for spiking networks, but its applicability for analog neuromorphic systems has not been demonstrated. Here, we demonstrate surrogate gradient learning on the BrainScaleS-2 analog neuromorphic system using an in-the-loop approach. We show that learning self-corrects for device mismatch, resulting in competitive spiking network performance on both vision and speech benchmarks. Our networks display sparse spiking activity with, on average, less than one spike per hidden neuron and input, perform inference at rates of up to 85,000 frames per second, and consume less than 200 mW. In summary, our work sets several benchmarks for low-energy spiking network processing on analog neuromorphic hardware and paves the way for future on-chip learning algorithms.
12. Semenova N, Larger L, Brunner D. Understanding and mitigating noise in trained deep neural networks. Neural Netw 2021; 146:151-160. [PMID: 34864223] [DOI: 10.1016/j.neunet.2021.11.008]
Abstract
Deep neural networks unlocked a vast range of new applications by solving tasks many of which were previously deemed reserved to higher human intelligence. One development enabling this success was a boost in computing power provided by special-purpose hardware, such as graphic or tensor processing units. However, these do not leverage fundamental features of neural networks like parallelism and analog state variables. Instead, they emulate neural networks using binary computing, which results in unsustainable energy consumption and comparatively low speed. Fully parallel and analog hardware promises to overcome these challenges, yet the impact of analog neuron noise and its propagation, i.e., accumulation, threatens to render such approaches inept. Here, we determine for the first time the propagation of noise in deep neural networks comprising noisy nonlinear neurons in trained fully connected layers. We study additive and multiplicative as well as correlated and uncorrelated noise, and develop analytical methods that predict the noise level in any layer of symmetric deep neural networks or deep neural networks trained with backpropagation. We find that noise accumulation is generally bound, and adding additional network layers does not worsen the signal-to-noise ratio beyond a limit. Most importantly, noise accumulation can be suppressed entirely when neuron activation functions have a slope smaller than unity. We thereby establish a framework for noise in fully connected deep neural networks implemented in analog systems, and identify criteria allowing engineers to design noise-resilient novel neural network hardware.
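The headline claim, that an activation slope below unity keeps accumulated noise bounded, can be checked with a one-neuron-per-layer caricature: each layer scales its input by the slope and adds fresh Gaussian noise. This is a hedged sketch of the mechanism only, far simpler than the paper's fully connected analysis (names and constants invented here):

```python
import random
import statistics

random.seed(3)

def output_noise_std(slope, n_layers=30, trials=3000, sigma=0.1):
    """Std of the output after propagating additive noise through a chain.

    Each layer applies a linear activation with the given slope and injects
    fresh Gaussian noise of std sigma.
    """
    outs = []
    for _ in range(trials):
        x = 0.0
        for _ in range(n_layers):
            x = slope * x + random.gauss(0.0, sigma)
        outs.append(x)
    return statistics.pstdev(outs)

contractive = output_noise_std(0.5)  # slope < 1: variance sums geometrically
neutral = output_noise_std(1.0)      # slope = 1: variance grows with depth
```

With slope r < 1 the variance converges to sigma²/(1 - r²) regardless of depth, whereas at slope 1 it grows linearly with the number of layers, matching the abstract's boundedness claim.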
Affiliation(s)
- Nadezhda Semenova: Département d'Optique P. M. Duffieux, Institut FEMTO-ST, Université Bourgogne-Franche-Comté, CNRS UMR 6174, Besançon, France; Institute of Physics, Saratov State University, 83 Astrakhanskaya str., 410012 Saratov, Russia
- Laurent Larger, Daniel Brunner: Département d'Optique P. M. Duffieux, Institut FEMTO-ST, Université Bourgogne-Franche-Comté, CNRS UMR 6174, Besançon, France
13. Shalumov A, Halaly R, Tsur EE. LiDAR-driven spiking neural network for collision avoidance in autonomous driving. Bioinspiration & Biomimetics 2021; 16:066016. [PMID: 34551395] [DOI: 10.1088/1748-3190/ac290c]
Abstract
Facilitated by advances in real-time sensing, low- and high-level control, and machine learning, autonomous vehicles draw ever-increasing attention from many branches of knowledge. Neuromorphic (brain-inspired) implementations of robotic control have been shown to outperform conventional control paradigms in terms of energy efficiency, robustness to perturbations, and adaptation to varying conditions. Here we propose LiDAR-driven neuromorphic control of both the vehicle's speed and steering. We evaluated and compared neuromorphic PID control and online learning for autonomous vehicle control in static and dynamic environments, suggesting proportional learning as the preferred control scheme. We employed biologically plausible basal ganglia and thalamus neural models for steering and collision avoidance, and extended them to support a null controller and a target-reaching optimization, significantly increasing performance.
Affiliation(s)
- Albert Shalumov, Raz Halaly, Elishai Ezra Tsur: Neuro-Biomorphic Engineering Lab at the Open University of Israel, Ra'anana, Israel
14. Zenke F, Vogels TP. The Remarkable Robustness of Surrogate Gradient Learning for Instilling Complex Function in Spiking Neural Networks. Neural Comput 2021; 33:899-925. [PMID: 33513328] [DOI: 10.1162/neco_a_01367]
Abstract
Brains process information in spiking neural networks. Their intricate connections shape the diverse functions these networks perform. Yet how network connectivity relates to function is poorly understood, and the functional capabilities of models of spiking networks are still rudimentary. The lack of both theoretical insight and practical algorithms to find the necessary connectivity poses a major impediment to both studying information processing in the brain and building efficient neuromorphic hardware systems. The training algorithms that solve this problem for artificial neural networks typically rely on gradient descent. But doing so in spiking networks has remained challenging due to the nondifferentiable nonlinearity of spikes. To avoid this issue, one can employ surrogate gradients to discover the required connectivity. However, the choice of a surrogate is not unique, raising the question of how its implementation influences the effectiveness of the method. Here, we use numerical simulations to systematically study how essential design parameters of surrogate gradients affect learning performance on a range of classification problems. We show that surrogate gradient learning is robust to different shapes of underlying surrogate derivatives, but the choice of the derivative's scale can substantially affect learning performance. When we combine surrogate gradients with suitable activity regularization techniques, spiking networks perform robust information processing at the sparse activity limit. Our study provides a systematic account of the remarkable robustness of surrogate gradient learning and serves as a practical guide to model functional spiking neural networks.
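The trick at the heart of surrogate gradient learning is to keep the non-differentiable spike in the forward pass but substitute a smooth surrogate derivative, here the SuperSpike-style 1/(1 + beta|v|)², in the backward pass. The toy below trains a single threshold unit this way on a hand-made two-pattern task; it is a minimal hedged sketch, not the multilayer spiking networks studied in the paper:

```python
def spike(v):
    """Non-differentiable step nonlinearity (the spike)."""
    return 1.0 if v > 0.0 else 0.0

def surrogate_grad(v, beta=1.0):
    """Smooth stand-in for the step's derivative on the backward pass."""
    return 1.0 / (1.0 + beta * abs(v)) ** 2

# Toy task: spike when the first input dominates, stay silent otherwise.
data = [([1.0, 0.2], 1.0), ([0.1, 1.0], 0.0),
        ([0.9, 0.1], 1.0), ([0.2, 0.8], 0.0)]
w, bias, lr = [0.0, 0.0], 0.0, 0.5

def loss():
    return sum((spike(w[0]*x[0] + w[1]*x[1] + bias) - t) ** 2 for x, t in data)

initial_loss = loss()
for _ in range(200):
    for x, t in data:
        v = w[0]*x[0] + w[1]*x[1] + bias
        err = spike(v) - t
        g = err * surrogate_grad(v)   # surrogate replaces d(spike)/dv
        w[0] -= lr * g * x[0]
        w[1] -= lr * g * x[1]
        bias -= lr * g
final_loss = loss()
```

Note the surrogate's scale (beta) and shape are design choices; the paper's finding is that learning is robust to the shape but sensitive to the scale.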
Affiliation(s)
- Friedemann Zenke: Centre for Neural Circuits and Behaviour, University of Oxford, Oxford OX1 3SR, U.K., and Friedrich Miescher Institute for Biomedical Research, 4058 Basel, Switzerland
- Tim P Vogels: Centre for Neural Circuits and Behaviour, University of Oxford, Oxford OX1 3SR, U.K., and Institute for Science and Technology, 3400 Klosterneuburg, Austria
15. Hazan A, Ezra Tsur E. Neuromorphic Analog Implementation of Neural Engineering Framework-Inspired Spiking Neuron for High-Dimensional Representation. Front Neurosci 2021; 15:627221. [PMID: 33692670] [PMCID: PMC7937893] [DOI: 10.3389/fnins.2021.627221]
Abstract
Brain-inspired hardware designs realize neural principles in electronics to provide high-performing, energy-efficient frameworks for artificial intelligence. The Neural Engineering Framework (NEF) brings forth a theoretical framework for representing high-dimensional mathematical constructs with spiking neurons to implement functional large-scale neural networks. Here, we present OZ, a programmable analog implementation of NEF-inspired spiking neurons. OZ neurons can be dynamically programmed to feature varying high-dimensional response curves with positive and negative encoders for a neuromorphic distributed representation of normalized input data. Our hardware design demonstrates full correspondence with NEF across firing rates, encoding vectors, and intercepts. OZ neurons can be independently configured in real time to allow efficient spanning of a representation space, thus using fewer neurons and therefore less power for neuromorphic data representation.
Affiliation(s)
- Avi Hazan
- Neuro-Biomorphic Engineering Lab, Department of Mathematics and Computer Science, The Open University of Israel, Ra'anana, Israel
- Elishai Ezra Tsur
- Neuro-Biomorphic Engineering Lab, Department of Mathematics and Computer Science, The Open University of Israel, Ra'anana, Israel
16
Zaidel Y, Shalumov A, Volinski A, Supic L, Ezra Tsur E. Neuromorphic NEF-Based Inverse Kinematics and PID Control. Front Neurorobot 2021; 15:631159. [PMID: 33613225 PMCID: PMC7887770 DOI: 10.3389/fnbot.2021.631159] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2020] [Accepted: 01/05/2021] [Indexed: 11/13/2022] Open
Abstract
Neuromorphic implementation of robotic control has been shown to outperform conventional control paradigms in terms of robustness to perturbations and adaptation to varying conditions. Two main ingredients of robotics are inverse kinematics and Proportional-Integral-Derivative (PID) control. Inverse kinematics is used to compute an appropriate state in a robot's configuration space, given a target position in task space. PID control applies responsive correction signals to a robot's actuators, allowing it to reach its target accurately. The Neural Engineering Framework (NEF) offers a theoretical framework for a neuromorphic encoding of mathematical constructs with spiking neurons for the implementation of functional large-scale neural networks. In this work, we developed NEF-based neuromorphic algorithms for inverse kinematics and PID control, which we used to manipulate a 6-degree-of-freedom robotic arm. We used online learning for inverse kinematics and signal integration and differentiation for PID, offering high-performing and energy-efficient neuromorphic control. The algorithms were evaluated in simulation as well as on Intel's Loihi neuromorphic hardware.
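The PID law the paper implements with spiking integration and differentiation is, in conventional form, a three-term controller. A minimal plain-Python sketch follows; the gains, the plant, and the time step are illustrative, not taken from the paper:

```python
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt           # integration term
        deriv = (err - self.prev_err) / self.dt  # differentiation term
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Drive a simple first-order plant toward a setpoint of 1.0.
pid, state = PID(kp=2.0, ki=0.5, kd=0.05, dt=0.01), 0.0
for _ in range(2000):
    u = pid.step(1.0, state)
    state += (u - state) * 0.01  # toy leaky plant
```

The neuromorphic version replaces the explicit `integral` and `deriv` computations with spiking ensembles performing signal integration and differentiation; the control law itself is unchanged.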
Affiliation(s)
- Yuval Zaidel
- Neuro-Biomorphic Engineering Lab, Department of Mathematics and Computer Science, Open University of Israel, Ra'anana, Israel
- Albert Shalumov
- Neuro-Biomorphic Engineering Lab, Department of Mathematics and Computer Science, Open University of Israel, Ra'anana, Israel
- Alex Volinski
- Neuro-Biomorphic Engineering Lab, Department of Mathematics and Computer Science, Open University of Israel, Ra'anana, Israel
- Lazar Supic
- Accenture Labs, San Francisco, CA, United States
- Elishai Ezra Tsur
- Neuro-Biomorphic Engineering Lab, Department of Mathematics and Computer Science, Open University of Israel, Ra'anana, Israel
17
Hulea M, Ghassemlooy Z, Rajbhandari S, Younus OI, Barleanu A. Optical Axons for Electro-Optical Neural Networks. SENSORS (BASEL, SWITZERLAND) 2020; 20:E6119. [PMID: 33121207 PMCID: PMC7663001 DOI: 10.3390/s20216119] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/10/2020] [Revised: 10/19/2020] [Accepted: 10/23/2020] [Indexed: 11/30/2022]
Abstract
Recently, neuromorphic sensors, which convert analogue signals to spiking frequencies, have been reported for neurorobotics. In bio-inspired systems these sensors are connected to the main neural unit to perform post-processing of the sensor data. The performance of spiking neural networks has been improved using optical synapses, which offer parallel communication between distant neural areas but are sensitive to intensity variations of the optical signal. For systems with several neuromorphic sensors connected optically to the main unit, the use of optical synapses is not an advantage. To address this, in this paper we propose and experimentally verify optical axons with synapses activated optically using digital signals. The synaptic weights are encoded by the energy of the stimuli, which are then optically transmitted independently. We show that optical intensity fluctuations and the link's misalignment result in delays in the activation of the synapses. For the proposed optical axon, we demonstrated line-of-sight transmission over a maximum link length of 190 cm with a delay of 8 μs. Furthermore, we show the axon delay as a function of the illuminance using a fitted model for which the root mean square (RMS) similarity is 0.95.
Affiliation(s)
- Mircea Hulea
- Faculty of Automatic Control and Computer Engineering at Gheorghe Asachi Technical University of Iasi, 700050 Iasi, Romania
- Zabih Ghassemlooy
- Optical Communications Research Group, Faculty of Engineering and Environment at Northumbria University, Newcastle upon Tyne NE7 7XA, UK
- Othman Isam Younus
- Optical Communications Research Group, Faculty of Engineering and Environment at Northumbria University, Newcastle upon Tyne NE7 7XA, UK
- Alexandru Barleanu
- Faculty of Automatic Control and Computer Engineering at Gheorghe Asachi Technical University of Iasi, 700050 Iasi, Romania
18
Stöckel A, Eliasmith C. Passive Nonlinear Dendritic Interactions as a Computational Resource in Spiking Neural Networks. Neural Comput 2020; 33:96-128. [PMID: 33080158 DOI: 10.1162/neco_a_01338] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Nonlinear interactions in the dendritic tree play a key role in neural computation. Nevertheless, modeling frameworks aimed at the construction of large-scale, functional spiking neural networks, such as the Neural Engineering Framework, tend to assume a linear superposition of postsynaptic currents. In this letter, we present a series of extensions to the Neural Engineering Framework that facilitate the construction of networks incorporating Dale's principle and nonlinear conductance-based synapses. We apply these extensions to a two-compartment LIF neuron that can be seen as a simple model of passive dendritic computation. We show that it is possible to incorporate neuron models with input-dependent nonlinearities into the Neural Engineering Framework without compromising high-level function and that nonlinear postsynaptic currents can be systematically exploited to compute a wide variety of multivariate, band-limited functions, including the Euclidean norm, controlled shunting, and nonnegative multiplication. By avoiding an additional source of spike noise, the function approximation accuracy of a single layer of two-compartment LIF neurons is on a par with or even surpasses that of two-layer spiking neural networks up to a certain target function bandwidth.
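The departure from linear superposition that the letter exploits can be seen in a toy conductance-based synapse model. The reversal potentials and conductances below are illustrative placeholders, not the letter's fitted parameters:

```python
# Conductance-based synaptic input: each term scales with the distance of
# the membrane voltage v from its reversal potential, so the total current
# depends jointly on v and the conductances rather than summing linearly.

E_EXC, E_INH = 4.5, -0.33  # illustrative (normalized) reversal potentials

def synaptic_current(v, g_exc, g_inh):
    return g_exc * (E_EXC - v) + g_inh * (E_INH - v)

def equilibrium_v(g_exc, g_inh, g_leak=1.0, e_leak=0.0):
    # Voltage at which the synaptic current balances the leak:
    #   g_exc*(E_EXC - v) + g_inh*(E_INH - v) = g_leak*(v - e_leak)
    return (g_exc * E_EXC + g_inh * E_INH + g_leak * e_leak) / (
        g_exc + g_inh + g_leak)

# Doubling both conductances does NOT double the equilibrium deflection,
# demonstrating the departure from linear superposition.
v1 = equilibrium_v(1.0, 1.0)
v2 = equilibrium_v(2.0, 2.0)
```

It is this saturating, divisive interaction between excitatory and inhibitory conductances that input-dependent nonlinearities of this kind turn into computations such as controlled shunting and nonnegative multiplication.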
Affiliation(s)
- Andreas Stöckel
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada
- Chris Eliasmith
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada
19
Spiking neurons with spatiotemporal dynamics and gain modulation for monolithically integrated memristive neural networks. Nat Commun 2020; 11:3399. [PMID: 32636385 PMCID: PMC7341810 DOI: 10.1038/s41467-020-17215-3] [Citation(s) in RCA: 65] [Impact Index Per Article: 16.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2020] [Accepted: 06/15/2020] [Indexed: 11/18/2022] Open
Abstract
As a key building block of the biological cortex, neurons are powerful information processing units that can achieve highly complex nonlinear computations even in individual cells. Hardware implementation of artificial neurons with similar capability is of great significance for the construction of intelligent, neuromorphic systems. Here, we demonstrate an artificial neuron based on an NbOx volatile memristor that not only realizes traditional all-or-nothing, threshold-driven spiking and spatiotemporal integration, but also enables dynamic logic, including the XOR function, which is not linearly separable, and multiplicative gain modulation among different dendritic inputs, thereby surpassing the neuronal functions described by a simple point-neuron model. A monolithically integrated 4 × 4 fully memristive neural network consisting of volatile NbOx memristor-based neurons and nonvolatile TaOx memristor-based synapses in a single crossbar array is experimentally demonstrated, showing capability in pattern recognition through online learning using a simplified δ-rule and in coincidence detection, which paves the way for bio-inspired intelligent systems. Designing energy-efficient and scalable artificial networks for neuromorphic computing remains a challenge. Here, the authors demonstrate online learning in a monolithically integrated 4 × 4 fully memristive neural network consisting of volatile NbOx memristor neurons and nonvolatile TaOx memristor synapses.
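The "all-or-nothing, threshold-driven spiking" behavior can be captured by a toy leaky integrate-and-fire loop. This is a generic sketch of the neuron's phenomenology, not a model of NbOx device physics; all parameter values are illustrative:

```python
# Leaky integration of input current with an all-or-nothing threshold
# event and a reset, mimicking threshold-driven spiking.

def lif_spikes(inputs, tau=20.0, v_th=1.0, dt=1.0):
    v, spikes = 0.0, []
    for i in inputs:
        v += dt * (-v / tau + i)   # leaky spatiotemporal integration
        if v >= v_th:              # threshold crossing: emit a spike
            spikes.append(1)
            v = 0.0                # reset after the all-or-nothing event
        else:
            spikes.append(0)
    return spikes

weak = lif_spikes([0.02] * 100)    # subthreshold drive: no output
strong = lif_spikes([0.2] * 100)   # suprathreshold drive: repetitive firing
```

In the device, the volatile threshold switching of the memristor plays the role of the `if v >= v_th` branch, firing only when the integrated input crosses its switching threshold.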
20
Chakraborty I, Agrawal A, Jaiswal A, Srinivasan G, Roy K. In situ unsupervised learning using stochastic switching in magneto-electric magnetic tunnel junctions. PHILOSOPHICAL TRANSACTIONS. SERIES A, MATHEMATICAL, PHYSICAL, AND ENGINEERING SCIENCES 2020; 378:20190157. [PMID: 31865881 PMCID: PMC6939242 DOI: 10.1098/rsta.2019.0157] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 09/30/2019] [Indexed: 06/10/2023]
Abstract
Spiking neural networks (SNNs) offer a bio-plausible and potentially power-efficient alternative to conventional deep learning. Although there has been progress towards implementing SNN functionalities in custom CMOS-based hardware using beyond-von Neumann architectures, the power efficiency of the human brain has remained elusive. This has necessitated investigations of novel material systems that can efficiently mimic the functional units of SNNs, such as neurons and synapses. In this paper, we present a magnetoelectric-magnetic tunnel junction (ME-MTJ) device as a synapse. We arrange these synapses in a crossbar fashion and perform in situ unsupervised learning. We leverage the capacitive nature of the write-ports in ME-MTJs, wherein by applying appropriately shaped voltage pulses across the write-port, the ME-MTJ can be switched in a probabilistic manner. We further exploit the sigmoidal switching characteristics of the ME-MTJ to tune the synapses to follow the well-known spike-timing-dependent plasticity (STDP) rule in a stochastic fashion. Finally, we use the stochastic STDP rule in ME-MTJ synapses to simulate a two-layer SNN that performs image classification on a handwritten-digit dataset. Thus, the capacitive write-port and the decoupled nature of the read-write path of ME-MTJs allow us to construct a transistor-less crossbar suitable for the energy-efficient implementation of in situ learning in SNNs. This article is part of the theme issue 'Harmonizing energy-autonomous computing and intelligence'.
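A stochastic STDP rule of this flavor can be sketched as a sigmoid mapping from write-pulse strength, shaped by the pre/post spike-time difference, to a switching probability. All constants and function names below are illustrative assumptions, not measured ME-MTJ parameters:

```python
import math
import random

def switch_probability(delta_t, a=1.0, tau=20.0):
    # Pulse strength decays with the pre/post spike-time gap |delta_t|;
    # a sigmoid maps pulse strength to the device switching probability.
    pulse = a * math.exp(-abs(delta_t) / tau)
    return 1.0 / (1.0 + math.exp(-8.0 * (pulse - 0.5)))

def stochastic_update(weight, delta_t, rng):
    # Binary synapse: probabilistically switch "up" on a causal spike pair.
    if rng.random() < switch_probability(delta_t):
        return 1
    return weight

rng = random.Random(0)
p_near = switch_probability(2.0)    # nearly coincident pair: high p
p_far = switch_probability(100.0)   # distant pair: low p
```

Averaged over many presentations, a population of such binary synapses traces out the familiar exponential STDP window, which is what lets the stochastic devices support unsupervised learning.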
21
Tsur EE, Rivlin-Etzion M. Neuromorphic implementation of motion detection using oscillation interference. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.09.072] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
22
Jeong Y, Lu W. Neuromorphic Computing Using Memristor Crossbar Networks: A Focus on Bio-Inspired Approaches. IEEE NANOTECHNOLOGY MAGAZINE 2018. [DOI: 10.1109/mnano.2018.2844901] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
23
Voelker AR, Eliasmith C. Improving Spiking Dynamical Networks: Accurate Delays, Higher-Order Synapses, and Time Cells. Neural Comput 2018; 30:569-609. [DOI: 10.1162/neco_a_01046] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Researchers building spiking neural networks face the challenge of improving the biological plausibility of their model networks while maintaining the ability to quantitatively characterize network behavior. In this work, we extend the theory behind the neural engineering framework (NEF), a method of building spiking dynamical networks, to permit the use of a broad class of synapse models while maintaining prescribed dynamics up to a given order. This theory improves our understanding of how low-level synaptic properties alter the accuracy of high-level computations in spiking dynamical networks. For completeness, we provide characterizations for both continuous-time (i.e., analog) and discrete-time (i.e., digital) simulations. We demonstrate the utility of these extensions by mapping an optimal delay line onto various spiking dynamical networks using higher-order models of the synapse. We show that these networks nonlinearly encode rolling windows of input history, using a scale invariant representation, with accuracy depending on the frequency content of the input signal. Finally, we reveal that these methods provide a novel explanation of time cell responses during a delay task, which have been observed throughout hippocampus, striatum, and cortex.
Affiliation(s)
- Aaron R. Voelker
- Centre for Theoretical Neuroscience and David R. Cheriton School of Computer Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Chris Eliasmith
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON N2L 3G1, Canada
24
Taube Navaraj W, García Núñez C, Shakthivel D, Vinciguerra V, Labeau F, Gregory DH, Dahiya R. Nanowire FET Based Neural Element for Robotic Tactile Sensing Skin. Front Neurosci 2017; 11:501. [PMID: 28979183 PMCID: PMC5611376 DOI: 10.3389/fnins.2017.00501] [Citation(s) in RCA: 82] [Impact Index Per Article: 11.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2017] [Accepted: 08/23/2017] [Indexed: 11/13/2022] Open
Abstract
This paper presents a novel Neural Nanowire Field Effect Transistor (υ-NWFET) based hardware-implementable neural network (HNN) approach for tactile data processing in electronic skin (e-skin). The viability of Si nanowires (NWs) as the active material for υ-NWFETs in HNNs is explored through modeling and demonstrated by fabricating the first device. Using υ-NWFETs to realize HNNs is an interesting approach, as by printing NWs on large-area flexible substrates it will be possible to develop a bendable tactile skin with distributed neural elements (for local data processing, as in biological skin) in the backplane. The modeling and simulation of υ-NWFET-based devices show that the overlapping areas between the individual gates and the floating gate determine the initial synaptic weights of the neural network, thus validating the working of υ-NWFETs as the building block for HNNs. The simulation has been further extended to υ-NWFET-based circuits and a neuronal computation system, which has been validated by interfacing it with a transparent tactile skin prototype (comprising a 6 × 6 array of ITO-based capacitive tactile sensors) integrated on the palm of a 3D-printed robotic hand. In this regard, a tactile data coding system is presented to detect touch gestures and the direction of touch. Following these simulation studies, a four-gated υ-NWFET was fabricated with a Pt/Ti metal stack for the gates, source, and drain, a Ni floating gate, and an Al2O3 high-k dielectric layer. The current-voltage characteristics of the fabricated υ-NWFET devices confirm the dependence of the turn-off voltages on the (synaptic) weight of each gate. The presented υ-NWFET approach is promising for a neuro-robotic tactile sensory system with distributed computing, as well as for numerous futuristic applications such as prosthetics and electroceuticals.
Affiliation(s)
- William Taube Navaraj
- Bendable Electronics and Sensing Technologies Group, School of Engineering, University of Glasgow, Glasgow, United Kingdom
- Carlos García Núñez
- Bendable Electronics and Sensing Technologies Group, School of Engineering, University of Glasgow, Glasgow, United Kingdom
- Dhayalan Shakthivel
- Bendable Electronics and Sensing Technologies Group, School of Engineering, University of Glasgow, Glasgow, United Kingdom
- Fabrice Labeau
- Department of Electrical and Computer Engineering, McGill University, Montreal, QC, Canada
- Duncan H. Gregory
- WestCHEM, School of Chemistry, University of Glasgow, Glasgow, United Kingdom
- Ravinder Dahiya
- Bendable Electronics and Sensing Technologies Group, School of Engineering, University of Glasgow, Glasgow, United Kingdom