1
Daddinounou S, Vatajelu EI. Bi-sigmoid spike-timing dependent plasticity learning rule for magnetic tunnel junction-based SNN. Front Neurosci 2024;18:1387339. PMID: 38817912; PMCID: PMC11137280; DOI: 10.3389/fnins.2024.1387339. Received 02/17/2024; accepted 04/22/2024.
Abstract
In this study, we explore spintronic synapses composed of several Magnetic Tunnel Junctions (MTJs), leveraging their attractive characteristics, such as endurance, nonvolatility, stochasticity, and energy efficiency, for the hardware implementation of unsupervised neuromorphic systems. Spiking Neural Networks (SNNs) running on dedicated hardware are suitable for edge computing and IoT devices, where continuous online learning and energy efficiency are important. In this work we focus on synaptic plasticity, conducting comprehensive electrical simulations to optimize the MTJ-based synapse design and to identify the neuronal pulse shapes responsible for Spike Timing Dependent Plasticity (STDP) behavior. Most proposals in the literature are based on hardware-independent algorithms that require the network to store its spiking history in order to update the weights accordingly. In this work, we developed a new learning rule, the Bi-Sigmoid STDP (B2STDP), which originates from the physical properties of MTJs. This rule enables immediate synaptic plasticity based on neuronal activity, leveraging in-memory computing. Finally, integrating this learning approach within an SNN framework leads to 91.71% accuracy in unsupervised image classification, demonstrating the potential of MTJ-based synapses for effective online learning in hardware-implemented SNNs.
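The abstract does not give the B2STDP rule itself; as a rough illustration of the shape such a rule implies, a weight update can be built from two opposing sigmoids of the pre/post spike-time difference. All names and constants below are hypothetical, not the paper's:

```python
import math

def b2stdp_dw(delta_t, a_plus=0.01, a_minus=0.01, slope=2.0):
    # Illustrative bi-sigmoid STDP window: potentiation when the presynaptic
    # spike precedes the postsynaptic one (delta_t > 0), depression otherwise.
    potentiation = a_plus / (1.0 + math.exp(-slope * delta_t))
    depression = a_minus / (1.0 + math.exp(slope * delta_t))
    return potentiation - depression
```

With symmetric amplitudes the update vanishes at delta_t = 0 and saturates smoothly for large time differences, which is the qualitative behavior a device-derived STDP window would need.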
2
Aguirre F, Sebastian A, Le Gallo M, Song W, Wang T, Yang JJ, Lu W, Chang MF, Ielmini D, Yang Y, Mehonic A, Kenyon A, Villena MA, Roldán JB, Wu Y, Hsu HH, Raghavan N, Suñé J, Miranda E, Eltawil A, Setti G, Smagulova K, Salama KN, Krestinskaya O, Yan X, Ang KW, Jain S, Li S, Alharbi O, Pazos S, Lanza M. Hardware implementation of memristor-based artificial neural networks. Nat Commun 2024;15:1974. PMID: 38438350; PMCID: PMC10912231; DOI: 10.1038/s41467-024-45670-9. Received 06/08/2023; accepted 02/01/2024.
Abstract
Artificial Intelligence (AI) is currently experiencing a boom driven by deep learning (DL) techniques, which rely on networks of connected simple computing units operating in parallel. The low communication bandwidth between memory and processing units in conventional von Neumann machines does not support the requirements of emerging applications that rely extensively on large sets of data. More recent computing paradigms, such as high parallelization and near-memory computing, help alleviate the data communication bottleneck to some extent, but paradigm-shifting concepts are required. Memristors, a novel beyond-complementary metal-oxide-semiconductor (CMOS) technology, are a promising choice for memory devices due to their unique intrinsic device-level properties, enabling both storing and computing with a small, massively parallel footprint at low power. Theoretically, this directly translates into a major boost in energy efficiency and computational throughput, but various practical challenges remain. In this work we review the latest efforts toward hardware-based memristive artificial neural networks (ANNs), describing in detail the working principles of each block and the different design alternatives with their advantages and disadvantages, as well as the tools required for accurate estimation of performance metrics. Ultimately, we aim to provide a comprehensive protocol of the materials and methods involved in memristive neural networks, both for those aiming to start working in this field and for experts looking for a holistic approach.
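The storing-and-computing claim rests on the crossbar's physics: applying a voltage vector to the rows of a conductance matrix yields, by Ohm's and Kirchhoff's laws, column currents equal to a vector-matrix product. A minimal idealized sketch (values illustrative; wire resistance and device variability ignored):

```python
import numpy as np

def crossbar_vmm(G, v):
    # Column current j is the Kirchhoff sum of Ohm's-law currents:
    # i[j] = sum_k G[k, j] * v[k], i.e. one analog vector-matrix multiply.
    return v @ G

G = np.array([[1e-6, 2e-6],
              [3e-6, 4e-6]])  # device conductances in siemens (illustrative)
v = np.array([0.2, 0.1])      # read voltages in volts
i_out = crossbar_vmm(G, v)    # column currents in amperes
```

The whole multiply happens in one read operation regardless of matrix size, which is where the throughput and energy claims of memristive ANNs come from.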
Affiliation(s)
- Fernando Aguirre: Physical Science and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Saudi Arabia; Departament d'Enginyeria Electrònica, Universitat Autònoma de Barcelona (UAB), 08193, Barcelona, Spain
- Wenhao Song: Department of Electrical and Computer Engineering, University of Southern California (USC), Los Angeles, CA, 90089, USA
- Tong Wang: Department of Electrical and Computer Engineering, University of Southern California (USC), Los Angeles, CA, 90089, USA
- J Joshua Yang: Department of Electrical and Computer Engineering, University of Southern California (USC), Los Angeles, CA, 90089, USA
- Wei Lu: Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, 48109, USA
- Meng-Fan Chang: Department of Electrical Engineering, National Tsing Hua University, Hsinchu, 30013, Taiwan
- Daniele Ielmini: Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano and IUNET, Piazza L. da Vinci 32, 20133, Milano, Italy
- Yuchao Yang: School of Electronic and Computer Engineering, Peking University, Shenzhen, China
- Adnan Mehonic: Department of Electronic and Electrical Engineering, University College London (UCL), Torrington Place, WC1E 7JE, London, UK
- Anthony Kenyon: Department of Electronic and Electrical Engineering, University College London (UCL), Torrington Place, WC1E 7JE, London, UK
- Marco A Villena: Physical Science and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Saudi Arabia
- Juan B Roldán: Departamento de Electrónica y Tecnología de Computadores, Facultad de Ciencias, Universidad de Granada, Avenida Fuentenueva s/n, 18071, Granada, Spain
- Yuting Wu: Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, 48109, USA
- Hung-Hsi Hsu: Department of Electrical Engineering, National Tsing Hua University, Hsinchu, 30013, Taiwan
- Nagarajan Raghavan: Engineering Product Development (EPD) Pillar, Singapore University of Technology & Design, 8 Somapah Road, 487372, Singapore, Singapore
- Jordi Suñé: Departament d'Enginyeria Electrònica, Universitat Autònoma de Barcelona (UAB), 08193, Barcelona, Spain
- Enrique Miranda: Departament d'Enginyeria Electrònica, Universitat Autònoma de Barcelona (UAB), 08193, Barcelona, Spain
- Ahmed Eltawil: Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Saudi Arabia
- Gianluca Setti: Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Saudi Arabia
- Kamilya Smagulova: Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Saudi Arabia
- Khaled N Salama: Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Saudi Arabia
- Olga Krestinskaya: Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Saudi Arabia
- Xiaobing Yan: Key Laboratory of Brain-Like Neuromorphic Devices and Systems of Hebei Province, Hebei University, Baoding, 071002, China
- Kah-Wee Ang: Department of Electrical and Computer Engineering, College of Design and Engineering, National University of Singapore (NUS), Singapore, Singapore
- Samarth Jain: Department of Electrical and Computer Engineering, College of Design and Engineering, National University of Singapore (NUS), Singapore, Singapore
- Sifan Li: Department of Electrical and Computer Engineering, College of Design and Engineering, National University of Singapore (NUS), Singapore, Singapore
- Osamah Alharbi: Physical Science and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Saudi Arabia
- Sebastian Pazos: Physical Science and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Saudi Arabia
- Mario Lanza: Physical Science and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Saudi Arabia
3
Vieth M, Rahimi A, Gorgan Mohammadi A, Triesch J, Ganjtabesh M. Accelerating spiking neural network simulations with PymoNNto and PymoNNtorch. Front Neuroinform 2024;18:1331220. PMID: 38444756; PMCID: PMC10913591; DOI: 10.3389/fninf.2024.1331220. Received 10/31/2023; accepted 01/29/2024.
Abstract
Spiking neural network simulations are a central tool in Computational Neuroscience, Artificial Intelligence, and Neuromorphic Engineering research. A broad range of simulators and software frameworks for such simulations exist with different target application areas. Among these, PymoNNto is a recent Python-based toolbox for spiking neural network simulations that emphasizes the embedding of custom code in a modular and flexible way. While PymoNNto already supports GPU implementations, its backend relies on NumPy operations. Here we introduce PymoNNtorch, which is natively implemented with PyTorch while retaining PymoNNto's modular design. Furthermore, we demonstrate how changes to the implementations of common network operations in combination with PymoNNtorch's native GPU support can offer speed-up over conventional simulators like NEST, ANNarchy, and Brian 2 in certain situations. Overall, we show how PymoNNto's modular and flexible design in combination with PymoNNtorch's GPU acceleration and optimized indexing operations facilitate research and development of spiking neural networks in the Python programming language.
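One of the "changes to the implementations of common network operations" the authors allude to can be illustrated with spike propagation: when spikes are sparse, summing only the weight rows of neurons that actually fired is mathematically equivalent to, and often cheaper than, a full matrix-vector product. A NumPy sketch of the idea (not PymoNNtorch code):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.random((5, 3))                  # weights: 5 pre -> 3 post neurons
spikes = np.array([True, False, True, False, False])  # binary spike vector

dense = spikes.astype(float) @ W        # full matrix-vector product
indexed = W[spikes].sum(axis=0)         # sum only rows of firing neurons
```

Both paths yield the same postsynaptic input, but the indexed path touches only the rows selected by the spike mask, which is where the speed-ups on sparse activity come from.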
Affiliation(s)
- Marius Vieth: Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
- Ali Rahimi: Department of Mathematics, Statistics, and Computer Science - College of Science, University of Tehran, Tehran, Iran
- Ashena Gorgan Mohammadi: Department of Mathematics, Statistics, and Computer Science - College of Science, University of Tehran, Tehran, Iran
- Jochen Triesch: Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
- Mohammad Ganjtabesh: Department of Mathematics, Statistics, and Computer Science - College of Science, University of Tehran, Tehran, Iran
4
Gorgan Mohammadi A, Ganjtabesh M. On computational models of theory of mind and the imitative reinforcement learning in spiking neural networks. Sci Rep 2024;14:1945. PMID: 38253595; PMCID: PMC10803361; DOI: 10.1038/s41598-024-52299-7. Received 09/10/2023; accepted 01/16/2024.
Abstract
Theory of Mind refers to the ability to infer others' mental states, and it plays a crucial role in social cognition and learning. Biological evidence indicates that complex circuits are involved in this ability, including the mirror neuron system. The mirror neuron system underlies imitation and action understanding, enabling learning through the observation of others. To simulate this imitative learning behavior, a Theory-of-Mind-based Imitative Reinforcement Learning (ToM-based ImRL) framework is proposed. Employing bio-inspired spiking neural networks and the mechanisms of the mirror neuron system, ToM-based ImRL is a computational model that enables an agent to learn effectively how to act in an interactive environment by observing an expert, inferring its goals, and imitating its behavior. This paper reviews some computational attempts at modeling ToM and explains the proposed ToM-based ImRL framework, which is tested in the environment of the River Raid game from the Atari 2600 series.
Affiliation(s)
- Ashena Gorgan Mohammadi: Department of Computer Science, School of Mathematics, Statistics, and Computer Science, College of Science, University of Tehran, Tehran, Iran
- Mohammad Ganjtabesh: Department of Computer Science, School of Mathematics, Statistics, and Computer Science, College of Science, University of Tehran, Tehran, Iran
5
Gemo E, Spiga S, Brivio S. SHIP: a computational framework for simulating and validating novel technologies in hardware spiking neural networks. Front Neurosci 2024;17:1270090. PMID: 38264497; PMCID: PMC10804805; DOI: 10.3389/fnins.2023.1270090. Received 07/31/2023; accepted 12/14/2023.
Abstract
Investigations in the field of spiking neural networks (SNNs) span diverse, yet overlapping, scientific disciplines, ranging from purely neuroscientific investigations through research on computational aspects of neuroscience to application-oriented studies aiming to improve SNN performance or to develop artificial hardware counterparts. However, the simulation of SNNs is a complex task that cannot be adequately addressed with a single platform applicable to all scenarios. Optimizing a simulation environment for specific metrics often entails compromises in other aspects. This computational challenge has led to an apparent dichotomy of approaches, with model-driven algorithms dedicated to the detailed simulation of biological networks and data-driven algorithms designed for efficient processing of large input datasets. Nevertheless, material scientists, device physicists, and neuromorphic engineers who develop new technologies for spiking neuromorphic hardware would benefit from a simulation environment that borrows aspects from both approaches, facilitating modeling, analysis, and training of prospective SNN systems. This manuscript explores the numerical challenges arising in the simulation of spiking neural networks and introduces SHIP, Spiking (neural network) Hardware In PyTorch, a numerical tool that supports the investigation and validation of materials, devices, and small circuit blocks within SNN architectures. SHIP facilitates the algorithmic definition of models for the components of a network, the monitoring of states and outputs of the modeled systems, and the training of the network's synaptic weights, by way of user-defined unsupervised learning rules or supervised training techniques derived from conventional machine learning. SHIP thereby offers a valuable tool for researchers and developers in the field of hardware-based spiking neural networks, enabling efficient simulation and validation of novel technologies.
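The component-model idea described above can be sketched as a small class hierarchy: a network block only needs to expose a common interface, so a new device technology is modeled by overriding one method. The interface and names below are hypothetical, not SHIP's actual API:

```python
class SynapseModel:
    """Base device model exposing a single forward() interface."""
    def __init__(self, g):
        self.g = g  # nominal conductance/weight of the device

    def forward(self, spike):
        # Output contribution of this synapse for one time step.
        return self.g if spike else 0.0

class DriftingSynapse(SynapseModel):
    """A different technology: conductance shifted by a fixed drift term."""
    def __init__(self, g, drift=0.0):
        super().__init__(g)
        self.drift = drift

    def forward(self, spike):
        return (self.g + self.drift) if spike else 0.0

ideal = SynapseModel(0.5)
drifty = DriftingSynapse(0.5, drift=0.1)
```

Because the rest of the network only calls forward(), swapping an ideal synapse for a non-ideal device model leaves the simulation loop untouched, which is the point of such a framework.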
Affiliation(s)
- Emanuele Gemo: CNR–IMM, Unit of Agrate Brianza, Agrate Brianza, Italy
6
Abunahla H, Abbas Y, Gebregiorgis A, Waheed W, Mohammad B, Hamdioui S, Alazzam A, Rezeq M. Analog monolayer SWCNTs-based memristive 2D structure for energy-efficient deep learning in spiking neural networks. Sci Rep 2023;13:21350. PMID: 38049534; PMCID: PMC10696067; DOI: 10.1038/s41598-023-48529-z. Received 08/14/2023; accepted 11/27/2023.
Abstract
Advances in materials science and memory devices work in tandem to drive the evolution of Artificial Intelligence systems. Energy-efficient computation is the ultimate goal of emerging memristor technology, in which storage and computation can take place in the same memory crossbar. In this work, an analog memristor device is fabricated that utilizes the unique characteristics of single-wall carbon nanotubes (SWCNTs) as the switching medium of the device. Owing to its planar structure, the memristor device exhibits analog switching with high state stability. The device's conductance and capacitance can be tuned simultaneously, increasing its potential and broadening its range of applications. The multi-state storage capability and long-term memory are the key factors that make the device a promising candidate for bio-inspired computing applications. As a demonstrator, the fabricated memristor is deployed in spiking neural networks (SNNs) to exploit its analog switching feature for energy-efficient classification. Results reveal that the computation-in-memory implementation performs Vector Matrix Multiplication with 95% inference accuracy and an energy efficiency of a few femtojoules per spike. The memristor device presented in this work offers new insights into utilizing the outstanding features of SWCNTs for efficient analog computation in deep learning systems.
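The "multi-state storage capability" implies that a trained weight must be mapped onto one of a finite number of programmable conductance levels before deployment. A minimal sketch of such a mapping (the level count and range are illustrative, not the device's measured values):

```python
import numpy as np

def quantize_to_levels(w, g_min=0.0, g_max=1.0, n_levels=5):
    # Map each weight to the nearest of n_levels evenly spaced conductances.
    levels = np.linspace(g_min, g_max, n_levels)
    idx = np.abs(np.asarray(w)[..., None] - levels).argmin(axis=-1)
    return levels[idx]

q = quantize_to_levels([0.12, 0.5, 0.93])
```

The more analog states a device reliably holds, the finer this grid and the smaller the accuracy loss when an SNN's weights are programmed into hardware.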
Affiliation(s)
- Heba Abunahla: Quantum & Computer Engineering Department, Delft University of Technology, Delft, The Netherlands
- Yawar Abbas: System on Chip Center (SoCC), Physics Department, Khalifa University, Abu Dhabi, UAE
- Anteneh Gebregiorgis: Quantum & Computer Engineering Department, Delft University of Technology, Delft, The Netherlands
- Waqas Waheed: SoCC, Mechanical Engineering Department, Khalifa University, Abu Dhabi, UAE
- Baker Mohammad: SoCC, Electrical Engineering & Computer Science Department, Khalifa University, Abu Dhabi, UAE
- Said Hamdioui: Quantum & Computer Engineering Department, Delft University of Technology, Delft, The Netherlands
- Anas Alazzam: SoCC, Mechanical Engineering Department, Khalifa University, Abu Dhabi, UAE
- Moh'd Rezeq: System on Chip Center (SoCC), Physics Department, Khalifa University, Abu Dhabi, UAE
7
Yue Y, Baltes M, Abuhajar N, Sun T, Karanth A, Smith CD, Bihl T, Liu J. Spiking neural networks fine-tuning for brain image segmentation. Front Neurosci 2023;17:1267639. PMID: 38027484; PMCID: PMC10646327; DOI: 10.3389/fnins.2023.1267639. Received 07/26/2023; accepted 10/09/2023.
Abstract
Introduction: The field of machine learning has undergone a significant transformation with the progress of deep artificial neural networks (ANNs) and the growing accessibility of annotated data. ANNs usually require substantial power and memory to achieve optimal performance. Spiking neural networks (SNNs) have recently emerged as a low-power alternative to ANNs due to their sparse nature. Despite their energy efficiency, SNNs are generally more difficult to train than ANNs.
Methods: In this study, we propose a novel three-stage SNN training scheme designed specifically for segmenting human hippocampi from magnetic resonance images. Our training pipeline starts by optimizing an ANN to its maximum capacity, then employs a quick ANN-SNN conversion to initialize the corresponding spiking network. This is followed by spike-based backpropagation to fine-tune the converted SNN. To understand the reason behind the performance decline in converted SNNs, we conduct a set of experiments investigating the output scaling issue. Furthermore, we explore the impact of binary and ternary representations in SNNs and empirically evaluate their performance on image classification and segmentation tasks.
Results and discussion: Our hybrid training scheme shows significant advantages over both ANN-SNN conversion and direct SNN training in terms of segmentation accuracy and training efficiency. Experimental results demonstrate the effectiveness of our model in achieving our design goals.
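The ANN-SNN conversion step in the pipeline above rests on a rate-coding argument: an integrate-and-fire neuron driven by a constant input fires at a rate approximating ReLU(input)/threshold, which is why ReLU activations can seed a spiking network. A sketch of that argument (not the paper's full three-stage method):

```python
def if_rate(x, v_th=1.0, steps=1000):
    # Simulate an integrate-and-fire neuron with constant input x and count
    # spikes; the rate approaches max(x, 0) / v_th as steps grows.
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += x
        if v >= v_th:
            v -= v_th
            spikes += 1
    return spikes / steps
```

The approximation error shrinks with the simulation length, which also hints at why converted SNNs lose accuracy at short time windows and benefit from the spike-based fine-tuning stage.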
Affiliation(s)
- Ye Yue: School of Electrical Engineering and Computer Science, Ohio University, Athens, OH, United States
- Marc Baltes: School of Electrical Engineering and Computer Science, Ohio University, Athens, OH, United States
- Nidal Abuhajar: School of Electrical Engineering and Computer Science, Ohio University, Athens, OH, United States
- Tao Sun: Centrum Wiskunde and Informatica (CWI), Machine Learning Group, Amsterdam, Netherlands
- Avinash Karanth: School of Electrical Engineering and Computer Science, Ohio University, Athens, OH, United States
- Charles D. Smith: Department of Neurology, University of Kentucky, Lexington, KY, United States
- Trevor Bihl: Department of Biomedical, Industrial and Human Factors Engineering, Wright State University, Dayton, OH, United States
- Jundong Liu: School of Electrical Engineering and Computer Science, Ohio University, Athens, OH, United States
8
Fang W, Chen Y, Ding J, Yu Z, Masquelier T, Chen D, Huang L, Zhou H, Li G, Tian Y. SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence. Sci Adv 2023;9:eadi1480. PMID: 37801497; PMCID: PMC10558124; DOI: 10.1126/sciadv.adi1480. Received 04/06/2023; accepted 09/05/2023.
Abstract
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency by introducing neural dynamics and spike properties. As the emerging spiking deep learning paradigm attracts increasing interest, traditional programming frameworks cannot meet its demands for automatic differentiation, parallel computation acceleration, and tight integration of neuromorphic dataset processing and deployment. In this work, we present the SpikingJelly framework to address this dilemma. We contribute a full-stack toolkit for preprocessing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips. Compared to existing methods, the training of deep SNNs can be accelerated by 11×, and the superior extensibility and flexibility of SpikingJelly enable users to accelerate custom models at low cost through multilevel inheritance and semiautomatic code generation. SpikingJelly paves the way for synthesizing truly energy-efficient SNN-based machine intelligence systems, which will enrich the ecology of neuromorphic computing.
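The automatic differentiation such frameworks provide for SNNs hinges on one trick: the derivative of the spike step function is zero almost everywhere, so the backward pass replaces it with a smooth surrogate. A minimal NumPy sketch of a sigmoid-shaped surrogate (not SpikingJelly's actual implementation; alpha is an assumed smoothing factor):

```python
import numpy as np

def spike(v):
    # Forward pass: Heaviside step on membrane potential minus threshold.
    return (v >= 0).astype(float)

def surrogate_grad(v, alpha=4.0):
    # Backward pass: derivative of a sigmoid, used in place of the step's
    # true derivative so gradients stay nonzero near the threshold.
    s = 1.0 / (1.0 + np.exp(-alpha * v))
    return alpha * s * (1.0 - s)
```

The forward pass still emits binary spikes; only the gradient is smoothed, which is what makes backpropagation through deep SNNs workable.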
Affiliation(s)
- Wei Fang: School of Computer Science, Peking University, China; Peng Cheng Laboratory, China; School of Electronic and Computer Engineering, Shenzhen Graduate School, Peking University, China
- Yanqi Chen: School of Computer Science, Peking University, China; Peng Cheng Laboratory, China
- Jianhao Ding: School of Computer Science, Peking University, China
- Zhaofei Yu: Institute for Artificial Intelligence, Peking University, China
- Timothée Masquelier: Centre de Recherche Cerveau et Cognition (CERCO), UMR5549 CNRS–Université Toulouse 3, France
- Ding Chen: Peng Cheng Laboratory, China; Department of Computer Science and Engineering, Shanghai Jiao Tong University, China
- Liwei Huang: School of Computer Science, Peking University, China; Peng Cheng Laboratory, China
- Guoqi Li: Institute of Automation, Chinese Academy of Sciences, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, China
- Yonghong Tian: School of Computer Science, Peking University, China; Peng Cheng Laboratory, China; School of Electronic and Computer Engineering, Shenzhen Graduate School, Peking University, China
9
Pham MD, D’Angiulli A, Dehnavi MM, Chhabra R. From Brain Models to Robotic Embodied Cognition: How Does Biological Plausibility Inform Neuromorphic Systems? Brain Sci 2023;13:1316. PMID: 37759917; PMCID: PMC10526461; DOI: 10.3390/brainsci13091316. Received 07/05/2023; revised 09/05/2023; accepted 09/07/2023.
Abstract
We examine the challenging "marriage" between computational efficiency and biological plausibility, a crucial node in the domain of spiking neural networks at the intersection of neuroscience, artificial intelligence, and robotics. Through a transdisciplinary review, we retrace the historical and most recent constraining influences that these parallel fields have exerted on descriptive analysis of the brain, construction of predictive brain models, and, ultimately, the embodiment of neural networks in an enacted robotic agent. We study models of Spiking Neural Networks (SNNs) as the central means enabling autonomous and intelligent behaviors in biological systems. We then provide a critical comparison of the available hardware and software for emulating SNNs, both to investigate biological entities and to apply them to artificial systems. Neuromorphics is identified as a promising tool for embodying SNNs in real physical systems, and different neuromorphic chips are compared. The concepts required for describing SNNs are dissected and contextualized in the new no man's land between cognitive neuroscience and artificial intelligence. Although there are recent reviews on the application of neuromorphic computing to various modules of the guidance, navigation, and control of robotic systems, the focus of this paper is on closing the cognition loop in SNN-embodied robotics. We argue that biologically viable spiking neuronal models used for electroencephalogram signals are excellent candidates for furthering our knowledge of the explainability of SNNs. We complete our survey by reviewing different robotic modules that can benefit from neuromorphic hardware, e.g., perception (with a focus on vision), localization, and cognition. We conclude that the tradeoff between symbolic computational power and the biological plausibility of hardware is best addressed by neuromorphics, whose presence in neurorobotics provides an accountable empirical testbench for investigating synthetic and natural embodied cognition. We argue this is where both theoretical and empirical future work should converge in multidisciplinary efforts involving neuroscience, artificial intelligence, and robotics.
Affiliation(s)
- Martin Do Pham: Department of Computer Science, University of Toronto, Toronto, ON M5S 1A1, Canada
- Amedeo D’Angiulli: Department of Neuroscience, Carleton University, Ottawa, ON K1S 5B6, Canada
- Maryam Mehri Dehnavi: Department of Computer Science, University of Toronto, Toronto, ON M5S 1A1, Canada
- Robin Chhabra: Department of Mechanical and Aerospace Engineering, Carleton University, Ottawa, ON K1S 5B6, Canada
10
Sanaullah, Koravuna S, Rückert U, Jungeblut T. Evaluation of Spiking Neural Nets-Based Image Classification Using the Runtime Simulator RAVSim. Int J Neural Syst 2023;33:2350044. PMID: 37604777; DOI: 10.1142/s0129065723500442.
Abstract
Spiking Neural Networks (SNNs) help achieve brain-like efficiency and functionality by building neurons and synapses that mimic the human brain's transmission of electrical signals. However, optimal SNN implementation requires a precise balance of parametric values. To design such ubiquitous neural networks, a graphical tool for visualizing, analyzing, and explaining the internal behavior of spikes is crucial. Although some popular SNN simulators are available, these tools do not allow users to interact with the neural network during simulation. To this end, we introduced the first runtime interactive simulator, called the Runtime Analyzing and Visualization Simulator (RAVSim), developed to analyze and dynamically visualize the behavior of SNNs, allowing end users to interact, observe output concentration reactions, and make changes directly during the simulation. In this paper, we present RAVSim with its current implementation of runtime interaction using the LIF neural model with different connectivity schemes, an image classification model using SNNs, and a dataset creation feature. Our main objective is to investigate binary classification of RGB images using SNNs. We created a feed-forward network using the LIF neural model for an image classification algorithm and evaluated it using RAVSim. The algorithm classifies faces with and without masks, achieving an accuracy of 91.8% using 1000 neurons in a hidden layer, an MSE of 0.0758, and an execution time of ∼10 min on the CPU. The experimental results show that using RAVSim not only increases network design speed but also accelerates user learning.
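The LIF model that RAVSim lets users interact with can be written as one discrete-time update: the membrane potential leaks toward rest, integrates input, and spikes and resets when it crosses threshold. The parameter values below are illustrative, not RAVSim's defaults:

```python
def lif_step(v, i_in, tau=10.0, v_th=1.0, v_reset=0.0, dt=1.0):
    # One Euler step of dv/dt = -v/tau + i_in, with spike-and-reset.
    v = v + dt * (-v / tau + i_in)
    if v >= v_th:
        return v_reset, 1  # emit a spike and reset the membrane
    return v, 0

v, total_spikes = 0.0, 0
for _ in range(20):
    v, s = lif_step(v, 0.3)  # constant suprathreshold drive
    total_spikes += s
```

The "precise balance of parametric values" the abstract mentions is visible here: tau, v_th, and the input scale jointly determine whether and how fast the neuron fires.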
Affiliation(s)
- Sanaullah: Department of Engineering and Mathematics, Bielefeld University of Applied Science, Bielefeld, Germany
- Shamini Koravuna: Department of Cognitive Interaction Technology Center, Bielefeld University, Bielefeld, Germany
- Ulrich Rückert: Department of Cognitive Interaction Technology Center, Bielefeld University, Bielefeld, Germany
- Thorsten Jungeblut: Department of Engineering and Mathematics, Bielefeld University of Applied Science, Bielefeld, Germany
11
Yang G, Lee W, Seo Y, Lee C, Seok W, Park J, Sim D, Park C. Unsupervised Spiking Neural Network with Dynamic Learning of Inhibitory Neurons. Sensors (Basel) 2023;23:7232. PMID: 37631767; PMCID: PMC10459513; DOI: 10.3390/s23167232. Received 05/31/2023; revised 07/23/2023; accepted 08/15/2023.
Abstract
A spiking neural network (SNN) is a type of artificial neural network that operates on discrete spikes to process timing information, similar to the manner in which the human brain processes real-world problems. In this paper, we propose a new SNN based on conventional, biologically plausible paradigms, such as the leaky integrate-and-fire model, spike timing-dependent plasticity, and the adaptive spiking threshold, extended with newly suggested biological mechanisms: dynamic inhibition weight change, a synaptic wiring method, and Bayesian inference. The proposed network is designed for image recognition tasks, which are frequently used to evaluate the performance of conventional deep neural networks. To manifest the bio-realistic neural architecture, the learning is unsupervised and the inhibition weight is dynamically changed; this, in turn, affects the synaptic wiring method based on Hebbian learning and the neuronal population. In the inference phase, Bayesian inference successfully classifies the input digits by counting the spikes from the responding neurons. The experimental results demonstrate that the proposed biological model achieves a performance improvement over other biologically plausible SNN models.
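The adaptive spiking threshold mentioned above is a homeostatic mechanism: each spike raises a neuron's effective threshold, which then decays back, preventing any single neuron from dominating during unsupervised learning. A minimal sketch (the constants are illustrative, not the paper's values):

```python
def update_threshold(theta, spiked, theta_plus=0.05, decay=0.99):
    # Exponential decay toward the baseline, plus a bump on each spike.
    theta *= decay
    if spiked:
        theta += theta_plus
    return theta

theta = 1.0
theta_after_spike = update_threshold(theta, spiked=True)   # ~1.04
theta_after_quiet = update_threshold(theta, spiked=False)  # ~0.99
```

Over many time steps, frequently firing neurons end up with higher thresholds, equalizing activity across the population, which is what lets winner-take-all style unsupervised STDP assign different neurons to different input classes.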
Affiliation(s)
- Geunbo Yang
- Department of Computer Engineering, Kwangwoon University, Seoul 01897, Republic of Korea; (G.Y.); (Y.S.); (C.L.)
- Wongyu Lee
- Department of Intelligent Information and Embedded Software Engineering, Kwangwoon University, Seoul 01897, Republic of Korea; (W.L.); (W.S.)
- Youjung Seo
- Department of Computer Engineering, Kwangwoon University, Seoul 01897, Republic of Korea; (G.Y.); (Y.S.); (C.L.)
- Choongseop Lee
- Department of Computer Engineering, Kwangwoon University, Seoul 01897, Republic of Korea; (G.Y.); (Y.S.); (C.L.)
- Woojoon Seok
- Department of Intelligent Information and Embedded Software Engineering, Kwangwoon University, Seoul 01897, Republic of Korea; (W.L.); (W.S.)
- Jongkil Park
- Center for Neuromorphic Engineering, Korea Institute of Science and Technology (KIST), Seoul 02792, Republic of Korea
- Donggyu Sim
- Department of Computer Engineering, Kwangwoon University, Seoul 01897, Republic of Korea; (G.Y.); (Y.S.); (C.L.)
- Cheolsoo Park
- Department of Computer Engineering, Kwangwoon University, Seoul 01897, Republic of Korea; (G.Y.); (Y.S.); (C.L.)
12
Zeng Y, Zhao D, Zhao F, Shen G, Dong Y, Lu E, Zhang Q, Sun Y, Liang Q, Zhao Y, Zhao Z, Fang H, Wang Y, Li Y, Liu X, Du C, Kong Q, Ruan Z, Bi W. BrainCog: A spiking neural network based, brain-inspired cognitive intelligence engine for brain-inspired AI and brain simulation. PATTERNS (NEW YORK, N.Y.) 2023; 4:100789. [PMID: 37602224 PMCID: PMC10435966 DOI: 10.1016/j.patter.2023.100789] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/14/2022] [Revised: 02/06/2023] [Accepted: 06/05/2023] [Indexed: 08/22/2023]
Abstract
Spiking neural networks (SNNs) serve as a promising computational framework for integrating insights from the brain into artificial intelligence (AI). Existing software infrastructures based on SNNs exclusively support brain simulation or brain-inspired AI, but not both simultaneously. To decode the nature of biological intelligence and create AI, we present the brain-inspired cognitive intelligence engine (BrainCog). This SNN-based platform provides essential infrastructure support for developing brain-inspired AI and brain simulation. BrainCog integrates different biological neurons, encoding strategies, learning rules, brain areas, and hardware-software co-design as essential components. Leveraging these user-friendly components, BrainCog incorporates various cognitive functions, including perception and learning, decision-making, knowledge representation and reasoning, motor control, social cognition, and brain structure and function simulations across multiple scales. BORN is an AI engine developed by BrainCog, showcasing seamless integration of BrainCog's components and cognitive functions to build advanced AI models and applications.
Affiliation(s)
- Yi Zeng
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 101408, China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing 101408, China
- Dongcheng Zhao
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Feifei Zhao
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Guobin Shen
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing 101408, China
- Yiting Dong
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing 101408, China
- Enmeng Lu
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Qian Zhang
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 101408, China
- Yinqian Sun
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing 101408, China
- Qian Liang
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Yuxuan Zhao
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Zhuoya Zhao
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing 101408, China
- Hongjian Fang
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing 101408, China
- Yuwei Wang
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Yang Li
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 101408, China
- Xin Liu
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Chengcheng Du
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing 101408, China
- Qingqun Kong
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing 101408, China
- Zizhe Ruan
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Weida Bi
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
13
Wu Z, Shen Y, Zhang J, Liang H, Zhao R, Li H, Xiong J, Zhang X, Chua Y. BIDL: a brain-inspired deep learning framework for spatiotemporal processing. Front Neurosci 2023; 17:1213720. [PMID: 37564366 PMCID: PMC10410154 DOI: 10.3389/fnins.2023.1213720] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2023] [Accepted: 06/22/2023] [Indexed: 08/12/2023] Open
Abstract
A brain-inspired deep spiking neural network (DSNN), which emulates the function of the biological brain, provides an effective approach to event-stream spatiotemporal perception (STP), especially for dynamic vision sensor (DVS) signals. However, there is a lack of generalized learning frameworks that can handle various spatiotemporal modalities beyond event streams, such as video clips and 3D imaging data. To provide a unified design flow for generalized spatiotemporal processing (STP) and to investigate the capability of lightweight STP processing via brain-inspired neural dynamics, this study introduces a training platform called brain-inspired deep learning (BIDL). This framework constructs deep neural networks that leverage neural dynamics for processing temporal information and ensure high-accuracy spatial processing via artificial neural network layers. We conducted experiments on various types of data, including video information processing, DVS information processing, 3D medical imaging classification, and natural language processing; these experiments demonstrate the efficiency of the proposed method. Moreover, as a research framework for researchers in the fields of neuroscience and machine learning, BIDL facilitates the exploration of different neural models and enables global-local co-learning. To fit easily onto neuromorphic chips and GPUs, the framework incorporates several optimizations, including iteration representation, a state-aware computational graph, and built-in neural functions. This study presents a user-friendly and efficient DSNN builder for lightweight STP applications and has the potential to drive future advancements in bio-inspired research.
Affiliation(s)
- Zhenzhi Wu
- Lynxi Technologies, Co. Ltd., Beijing, China
- Yangshu Shen
- Lynxi Technologies, Co. Ltd., Beijing, China
- Department of Precision Instruments and Mechanology, Tsinghua University, Beijing, China
- Jing Zhang
- Lynxi Technologies, Co. Ltd., Beijing, China
- Huaju Liang
- Neuromorphic Computing Laboratory, China Nanhu Academy of Electronics and Information Technology (CNAEIT), Jiaxing, Zhejiang, China
- Han Li
- Lynxi Technologies, Co. Ltd., Beijing, China
- Jianping Xiong
- Department of Precision Instruments and Mechanology, Tsinghua University, Beijing, China
- Xiyu Zhang
- School of Automation Science and Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi, China
- Yansong Chua
- Neuromorphic Computing Laboratory, China Nanhu Academy of Electronics and Information Technology (CNAEIT), Jiaxing, Zhejiang, China
14
Xue J, Xie L, Chen F, Wu L, Tian Q, Zhou Y, Ying R, Liu P. EdgeMap: An Optimized Mapping Toolchain for Spiking Neural Network in Edge Computing. SENSORS (BASEL, SWITZERLAND) 2023; 23:6548. [PMID: 37514842 PMCID: PMC10383546 DOI: 10.3390/s23146548] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/28/2023] [Revised: 07/13/2023] [Accepted: 07/18/2023] [Indexed: 07/30/2023]
Abstract
Spiking neural networks (SNNs) have attracted considerable attention as third-generation artificial neural networks, known for their powerful, intelligent features and energy-efficiency advantages. These characteristics make them ideally suited for edge computing scenarios. Nevertheless, current schemes for mapping SNNs onto neuromorphic hardware face limitations such as extended execution times, low throughput, and insufficient consideration of energy consumption and connectivity, which undermine their suitability for edge computing applications. To address these challenges, we introduce EdgeMap, an optimized mapping toolchain specifically designed for deploying SNNs onto edge devices without compromising performance. EdgeMap consists of two main stages. The first stage partitions the SNN graph into small neuron clusters using a streaming graph partition algorithm, with cluster sizes limited by the physical neuron cores. In the subsequent mapping stage, we adopt a multi-objective optimization algorithm geared toward mitigating energy and communication costs for efficient deployment. EdgeMap, evaluated across four typical SNN applications, substantially outperforms other state-of-the-art mapping schemes. The performance improvements include a reduction in average latency by up to 19.8%, energy consumption by 57%, and communication cost by 58%. Moreover, EdgeMap improves execution time by a factor of 1225.44× and throughput by up to 4.02×. These results highlight EdgeMap's efficiency and effectiveness, emphasizing its utility for deploying SNN applications in edge computing scenarios.
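The first stage, streaming graph partitioning under a per-core size cap, can be illustrated with a standard greedy heuristic. The function below is a generic sketch (an LDG-style rule with illustrative names and scoring), not EdgeMap's actual algorithm:

```python
from collections import defaultdict

def stream_partition(edges, n_nodes, capacity):
    """Greedy streaming graph partition: visit nodes in arrival order and
    assign each to the cluster that already holds most of its neighbors,
    subject to a per-cluster size cap (the physical neuron-core size).
    A simplified LDG-style heuristic, not EdgeMap's actual algorithm."""
    neighbors = defaultdict(set)
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    n_clusters = -(-n_nodes // capacity)        # ceil(n_nodes / capacity)
    assignment, sizes = {}, defaultdict(int)
    for node in range(n_nodes):
        best, best_score = None, -1
        for c in range(n_clusters):
            if sizes[c] >= capacity:            # this core is already full
                continue
            score = sum(1 for nb in neighbors[node] if assignment.get(nb) == c)
            if score > best_score:
                best, best_score = c, score
        assignment[node] = best
        sizes[best] += 1
    return assignment
```

Because connected nodes tend to land in the same cluster, inter-cluster spike traffic (the communication cost the second stage optimizes) is reduced.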
Affiliation(s)
- Jianwei Xue
- School of Electronic and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Lisheng Xie
- School of Electronic and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Faquan Chen
- School of Electronic and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Liangshun Wu
- School of Electronic and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Qingyang Tian
- School of Electronic and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Yifan Zhou
- School of Electronic and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Rendong Ying
- School of Electronic and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Peilin Liu
- School of Electronic and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
15
Nourse WRP, Jackson C, Szczecinski NS, Quinn RD. SNS-Toolbox: An Open Source Tool for Designing Synthetic Nervous Systems and Interfacing Them with Cyber-Physical Systems. Biomimetics (Basel) 2023; 8:247. [PMID: 37366842 DOI: 10.3390/biomimetics8020247] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2023] [Revised: 06/02/2023] [Accepted: 06/09/2023] [Indexed: 06/28/2023] Open
Abstract
One developing approach to robotic control is the use of networks of dynamic neurons connected with conductance-based synapses, also known as Synthetic Nervous Systems (SNS). These networks are often developed using cyclic topologies and heterogeneous mixtures of spiking and non-spiking neurons, which is a difficult proposition for existing neural simulation software. Most solutions target one of two extremes: detailed multi-compartment neural models in small networks, or large-scale networks of greatly simplified neural models. In this work, we present our open-source Python package SNS-Toolbox, which can simulate hundreds to thousands of spiking and non-spiking neurons in real-time or faster on consumer-grade computer hardware. We describe the neural and synaptic models supported by SNS-Toolbox and report performance on multiple software and hardware backends, including GPUs and embedded computing platforms. We also showcase two examples using the software: one controlling a simulated limb with muscles in the physics simulator Mujoco, and another controlling a mobile robot using ROS. We hope that the availability of this software will reduce the barrier to entry when designing SNS networks and will increase the prevalence of SNS networks in the field of robotic control.
Affiliation(s)
- William R P Nourse
- Department of Electrical, Computer, and Systems Engineering, Case Western Reserve University, Cleveland, OH 44106, USA
- Clayton Jackson
- Department of Mechanical and Aerospace Engineering, Case Western Reserve University, Cleveland, OH 44106, USA
- Nicholas S Szczecinski
- Department of Mechanical and Aerospace Engineering, West Virginia University, Morgantown, WV 26506, USA
- Roger D Quinn
- Department of Mechanical and Aerospace Engineering, Case Western Reserve University, Cleveland, OH 44106, USA
16
Deng S, Yu H, Park TJ, Islam AN, Manna S, Pofelski A, Wang Q, Zhu Y, Sankaranarayanan SK, Sengupta A, Ramanathan S. Selective area doping for Mott neuromorphic electronics. SCIENCE ADVANCES 2023; 9:eade4838. [PMID: 36930716 PMCID: PMC10022892 DOI: 10.1126/sciadv.ade4838] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/18/2022] [Accepted: 02/16/2023] [Indexed: 06/18/2023]
Abstract
The cointegration of artificial neuronal and synaptic devices with homotypic materials and structures can greatly simplify the fabrication of neuromorphic hardware. We demonstrate experimental realization of vanadium dioxide (VO2) artificial neurons and synapses on the same substrate through selective area carrier doping. By locally configuring pairs of catalytic and inert electrodes that enable nanoscale control over carrier density, volatility or nonvolatility can be appropriately assigned to each two-terminal Mott memory device per lithographic design, and both neuron- and synapse-like devices are successfully integrated on a single chip. Feedforward excitation and inhibition neural motifs are demonstrated at hardware level, followed by simulation of network-level handwritten digit and fashion product recognition tasks with experimental characteristics. Spatially selective electron doping opens up previously unidentified avenues for integration of emerging correlated semiconductors in electronic device technologies.
Affiliation(s)
- Sunbin Deng
- School of Materials Engineering, Purdue University, West Lafayette, IN 47907, USA
- Haoming Yu
- School of Materials Engineering, Purdue University, West Lafayette, IN 47907, USA
- Tae Joon Park
- School of Materials Engineering, Purdue University, West Lafayette, IN 47907, USA
- A. N. M. Nafiul Islam
- School of Electrical Engineering and Computer Science, The Pennsylvania State University, University Park, PA 16802, USA
- Sukriti Manna
- Center for Nanoscale Materials, Argonne National Laboratory, Lemont, IL 60439, USA
- Department of Mechanical and Industrial Engineering, University of Illinois, Chicago, IL 60607, USA
- Alexandre Pofelski
- Department of Condensed Matter Physics and Materials Science, Brookhaven National Laboratory, Upton, NY 11973, USA
- Qi Wang
- School of Materials Engineering, Purdue University, West Lafayette, IN 47907, USA
- Yimei Zhu
- Department of Condensed Matter Physics and Materials Science, Brookhaven National Laboratory, Upton, NY 11973, USA
- Subramanian K. R. S. Sankaranarayanan
- Center for Nanoscale Materials, Argonne National Laboratory, Lemont, IL 60439, USA
- Department of Mechanical and Industrial Engineering, University of Illinois, Chicago, IL 60607, USA
- Abhronil Sengupta
- School of Electrical Engineering and Computer Science, The Pennsylvania State University, University Park, PA 16802, USA
- Shriram Ramanathan
- School of Materials Engineering, Purdue University, West Lafayette, IN 47907, USA
17
Shirsavar SR, Vahabie AH, Dehaqani MRA. Models Developed for Spiking Neural Networks. MethodsX 2023; 10:102157. [PMID: 37077894 PMCID: PMC10106956 DOI: 10.1016/j.mex.2023.102157] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2023] [Accepted: 03/23/2023] [Indexed: 03/30/2023] Open
Abstract
The emergence of deep neural networks (DNNs) has drawn enormous attention to artificial neural networks (ANNs) once again. They have become the state-of-the-art models and have won various machine learning challenges. Although these networks are inspired by the brain, they lack biological plausibility and differ structurally from the brain. Spiking neural networks (SNNs) have been around for a long time and have been investigated to understand the dynamics of the brain. However, their application to real-world, complicated machine learning tasks was limited. Recently, they have shown great potential in solving such tasks. Given their energy efficiency and temporal dynamics, their future development holds much promise. In this work, we reviewed the structures and performance of SNNs on image classification tasks. The comparisons illustrate that these networks show great capabilities for more complicated problems. Furthermore, the simple learning rules developed for SNNs, such as STDP and R-STDP, can be a potential alternative to the backpropagation algorithm used in DNNs.
- Different building blocks of spiking neural networks are explained in this work.
- Developed models for SNNs are introduced based on their characteristics and building blocks.
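Of the learning rules mentioned, R-STDP is the less widely known. A minimal sketch of its textbook form follows, with illustrative constants not tied to any specific model in the review: STDP updates accumulate in an eligibility trace and reach the weights only when gated by a reward signal.

```python
import numpy as np

def rstdp_step(w, elig, pre_trace, post_trace, pre, post, reward,
               a_plus=0.01, a_minus=0.012, tau=20.0, tau_e=50.0,
               lr=0.1, dt=1.0):
    """One step of reward-modulated STDP (textbook form, illustrative
    constants): the pairwise STDP update accumulates in an eligibility
    trace and only changes the weights when scaled by a reward signal."""
    pre_trace += dt * (-pre_trace / tau) + pre      # presynaptic activity trace
    post_trace += dt * (-post_trace / tau) + post   # postsynaptic activity trace
    stdp = a_plus * pre_trace * post - a_minus * post_trace * pre
    elig += dt * (-elig / tau_e) + stdp             # eligibility trace stores the update
    w += lr * reward * elig                         # reward gates the weight change
    return np.clip(w, 0.0, 1.0)
```

With zero reward the weights stay put no matter how the neurons fire; a pre-before-post pairing followed by a positive reward potentiates the synapse, which is what lets R-STDP solve credit assignment without backpropagation.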
18
Schmitt FJ, Rostami V, Nawrot MP. Efficient parameter calibration and real-time simulation of large-scale spiking neural networks with GeNN and NEST. Front Neuroinform 2023; 17:941696. [PMID: 36844916 PMCID: PMC9950635 DOI: 10.3389/fninf.2023.941696] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2022] [Accepted: 01/16/2023] [Indexed: 02/12/2023] Open
Abstract
Spiking neural networks (SNNs) represent the state-of-the-art approach to the biologically realistic modeling of nervous system function. The systematic calibration for multiple free model parameters is necessary to achieve robust network function and demands high computing power and large memory resources. Special requirements arise from closed-loop model simulation in virtual environments and from real-time simulation in robotic application. Here, we compare two complementary approaches to efficient large-scale and real-time SNN simulation. The widely used NEural Simulation Tool (NEST) parallelizes simulation across multiple CPU cores. The GPU-enhanced Neural Network (GeNN) simulator uses the highly parallel GPU-based architecture to gain simulation speed. We quantify fixed and variable simulation costs on single machines with different hardware configurations. As a benchmark model, we use a spiking cortical attractor network with a topology of densely connected excitatory and inhibitory neuron clusters with homogeneous or distributed synaptic time constants and in comparison to the random balanced network. We show that simulation time scales linearly with the simulated biological model time and, for large networks, approximately linearly with the model size as dominated by the number of synaptic connections. Additional fixed costs with GeNN are almost independent of model size, while fixed costs with NEST increase linearly with model size. We demonstrate how GeNN can be used for simulating networks with up to 3.5 × 10⁶ neurons (>3 × 10¹² synapses) on a high-end GPU, and up to 250,000 neurons (25 × 10⁹ synapses) on a low-cost GPU. Real-time simulation was achieved for networks with 100,000 neurons. Network calibration and parameter grid search can be efficiently achieved using batch processing. We discuss the advantages and disadvantages of both approaches for different use cases.
Affiliation(s)
- Felix Johannes Schmitt
- Computational Systems Neuroscience, Institute of Zoology, University of Cologne, Cologne, Germany
- Vahid Rostami
- Computational Systems Neuroscience, Institute of Zoology, University of Cologne, Cologne, Germany
19
Nilsson M, Schelén O, Lindgren A, Bodin U, Paniagua C, Delsing J, Sandin F. Integration of neuromorphic AI in event-driven distributed digitized systems: Concepts and research directions. Front Neurosci 2023; 17:1074439. [PMID: 36875653 PMCID: PMC9981939 DOI: 10.3389/fnins.2023.1074439] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2022] [Accepted: 01/23/2023] [Indexed: 02/19/2023] Open
Abstract
Increasing complexity and data-generation rates in cyber-physical systems and the industrial Internet of things are calling for a corresponding increase in AI capabilities at the resource-constrained edges of the Internet. Meanwhile, the resource requirements of digital computing and deep learning are growing exponentially, in an unsustainable manner. One possible way to bridge this gap is the adoption of resource-efficient brain-inspired "neuromorphic" processing and sensing devices, which use event-driven, asynchronous, dynamic neurosynaptic elements with colocated memory for distributed processing and machine learning. However, since neuromorphic systems are fundamentally different from conventional von Neumann computers and clock-driven sensor systems, several challenges are posed to large-scale adoption and integration of neuromorphic devices into the existing distributed digital-computational infrastructure. Here, we describe the current landscape of neuromorphic computing, focusing on characteristics that pose integration challenges. Based on this analysis, we propose a microservice-based conceptual framework for neuromorphic systems integration, consisting of a neuromorphic-system proxy, which would provide virtualization and communication capabilities required in distributed systems of systems, in combination with a declarative programming approach offering engineering-process abstraction. We also present concepts that could serve as a basis for the realization of this framework, and identify directions for further research required to enable large-scale system integration of neuromorphic devices.
Affiliation(s)
- Mattias Nilsson
- Embedded Intelligent Systems Lab (EISLAB), Department of Computer Science, Electrical and Space Engineering, Luleå University of Technology, Lulea, Sweden
- Olov Schelén
- Embedded Intelligent Systems Lab (EISLAB), Department of Computer Science, Electrical and Space Engineering, Luleå University of Technology, Lulea, Sweden
- Anders Lindgren
- Embedded Intelligent Systems Lab (EISLAB), Department of Computer Science, Electrical and Space Engineering, Luleå University of Technology, Lulea, Sweden
- Applied AI and IoT, Industrial Systems, Digital Systems, RISE Research Institutes of Sweden, Kista, Sweden
- Ulf Bodin
- Embedded Intelligent Systems Lab (EISLAB), Department of Computer Science, Electrical and Space Engineering, Luleå University of Technology, Lulea, Sweden
- Cristina Paniagua
- Embedded Intelligent Systems Lab (EISLAB), Department of Computer Science, Electrical and Space Engineering, Luleå University of Technology, Lulea, Sweden
- Jerker Delsing
- Embedded Intelligent Systems Lab (EISLAB), Department of Computer Science, Electrical and Space Engineering, Luleå University of Technology, Lulea, Sweden
- Fredrik Sandin
- Embedded Intelligent Systems Lab (EISLAB), Department of Computer Science, Electrical and Space Engineering, Luleå University of Technology, Lulea, Sweden
20
Putra RVW, Hanif MA, Shafique M. RescueSNN: enabling reliable executions on spiking neural network accelerators under permanent faults. Front Neurosci 2023; 17:1159440. [PMID: 37123371 PMCID: PMC10130579 DOI: 10.3389/fnins.2023.1159440] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2023] [Accepted: 03/24/2023] [Indexed: 05/02/2023] Open
Abstract
To maximize the performance and energy efficiency of Spiking Neural Network (SNN) processing on resource-constrained embedded systems, specialized hardware accelerators/chips are employed. However, these SNN chips may suffer from permanent faults which can affect the functionality of weight memory and neuron behavior, thereby causing potentially significant accuracy degradation and system malfunctioning. Such permanent faults may come from manufacturing defects during the fabrication process, and/or from device/transistor damages (e.g., due to wear out) during the run-time operation. However, the impact of permanent faults in SNN chips and the respective mitigation techniques have not been thoroughly investigated yet. Toward this, we propose RescueSNN, a novel methodology to mitigate permanent faults in the compute engine of SNN chips without requiring additional retraining, thereby significantly cutting down the design time and retraining costs, while maintaining the throughput and quality. The key ideas of our RescueSNN methodology are (1) analyzing the characteristics of SNN under permanent faults; (2) leveraging this analysis to improve the SNN fault-tolerance through effective fault-aware mapping (FAM); and (3) devising lightweight hardware enhancements to support FAM. Our FAM technique leverages the fault map of SNN compute engine for (i) minimizing weight corruption when mapping weight bits on the faulty memory cells, and (ii) selectively employing faulty neurons that do not cause significant accuracy degradation to maintain accuracy and throughput, while considering the SNN operations and processing dataflow. The experimental results show that our RescueSNN improves accuracy by up to 80% while maintaining the throughput reduction below 25% in high fault rate (e.g., 0.5 of the potential fault locations), as compared to running SNNs on the faulty chip without mitigation. 
In this manner, the embedded systems that employ RescueSNN-enhanced chips can efficiently ensure reliable executions against permanent faults during their operational lifetime.
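The gist of fault-aware mapping, placing weights so that memory faults corrupt them as little as possible, can be shown with a toy model. The code below is an illustrative simplification of that idea, not RescueSNN's actual FAM technique:

```python
def corruption(weight, stuck_at_zero_mask):
    """Absolute value error when an 8-bit weight is stored in a memory word
    whose cells in `stuck_at_zero_mask` are stuck at 0."""
    return weight - (weight & ~stuck_at_zero_mask)

def fault_aware_place(weights, fault_masks):
    """Toy fault-aware mapping: greedily assign each weight (largest first)
    to the free memory word where stuck-at-0 faults corrupt it least.
    An illustrative simplification, not RescueSNN's actual FAM technique."""
    free = set(range(len(fault_masks)))
    placement = {}
    for idx, w in sorted(enumerate(weights), key=lambda p: -p[1]):
        best = min(free, key=lambda row: corruption(w, fault_masks[row]))
        placement[idx] = best
        free.remove(best)
    return placement
```

Handling the large weights first reserves the fault-free (or least-faulty) words for the values whose corruption would hurt accuracy most; faulty cells end up holding weights where they flip only low-significance bits.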
Affiliation(s)
- Rachmad Vidya Wicaksana Putra
- Embedded Computing Systems, Institute of Computer Engineering, Technische Universität Wien (TU Wien), Vienna, Austria
- *Correspondence: Rachmad Vidya Wicaksana Putra
- Muhammad Abdullah Hanif
- eBrain Lab, Division of Engineering, New York University Abu Dhabi (NYUAD), Abu Dhabi, United Arab Emirates
- Muhammad Shafique
- eBrain Lab, Division of Engineering, New York University Abu Dhabi (NYUAD), Abu Dhabi, United Arab Emirates
21
Precise Spiking Motifs in Neurobiological and Neuromorphic Data. Brain Sci 2022; 13:brainsci13010068. [PMID: 36672049 PMCID: PMC9856822 DOI: 10.3390/brainsci13010068] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2022] [Revised: 12/20/2022] [Accepted: 12/23/2022] [Indexed: 12/31/2022] Open
Abstract
Why do neurons communicate through spikes? By definition, spikes are all-or-none neural events which occur at continuous times. In other words, spikes are, on one hand, binary, existing or not without further detail, and on the other, can occur at any asynchronous time, without the need for a centralized clock. This stands in stark contrast to the analog representation of values and the discretized timing classically used in digital processing and at the base of modern-day neural networks. As neural systems almost systematically use this so-called event-based representation in the living world, a better understanding of this phenomenon remains a fundamental challenge in neurobiology in order to better interpret the profusion of recorded data. With the growing need for intelligent embedded systems, it also emerges as a new computing paradigm to enable the efficient operation of a new class of sensors and event-based computers, called neuromorphic, which could enable significant gains in computation time and energy consumption, a major societal issue in the era of the digital economy and global warming. In this review paper, we provide evidence from biology, theory, and engineering that the precise timing of spikes plays a crucial role in our understanding of the efficiency of neural networks.
22
Ma B, Zhang J, Zhao Y, Zou W. Analog-to-spike encoding and time-efficient RF signal processing with photonic neurons. OPTICS EXPRESS 2022; 30:46541-46551. [PMID: 36558605 DOI: 10.1364/oe.479077] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/24/2022] [Accepted: 11/22/2022] [Indexed: 06/17/2023]
Abstract
Real-time radio-frequency (RF) signal processing is indispensable for advanced information systems such as radar and communications. However, the latency of the conventional processing paradigm is worsened by high-speed analog-to-digital conversion (ADC), which generates massive data, and by computation-intensive digital processing. Here, we propose to encode and process RF signals in the fully analog domain by harnessing the photonic spiking response. The dependence of photonic analog-to-spike encoding on threshold level and time constant is theoretically and experimentally investigated. For two classes of waveforms from real RF devices, the photonic spiking neuron exhibits distinct distributions of encoded spike numbers. In a waveform classification task, the photonic-spiking-based scheme achieves an accuracy of 92%, comparable to the 94% of the K-nearest neighbor (KNN) digital algorithm, while the processing latency is reduced from approximately 0.7 s (code running time on a CPU platform) to 80 ns (light transmission delay), an improvement of more than one million times. It is anticipated that the asynchronous-encoding, binary-output nature of the photonic spiking response could pave the way to real-time RF signal processing.
Collapse
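The encoding mechanism described in the abstract above, a spiking response whose spike count depends on a threshold level and a time constant, can be sketched in software with a leaky integrate-and-fire model. This is an illustrative electronic analogue of the paper's photonic neuron, not its implementation; the waveforms, time constants, and thresholds below are arbitrary toy values.

```python
import numpy as np

def analog_to_spikes(signal, dt, tau, threshold):
    """Leaky-integrate-and-fire encoding of an analog waveform:
    the input is integrated with time constant tau, and a spike is
    emitted (with membrane reset) on each threshold crossing."""
    v = 0.0
    spike_times = []
    for i, x in enumerate(signal):
        v += dt * (-v / tau + x)      # leaky integration step
        if v >= threshold:
            spike_times.append(i * dt)
            v = 0.0                   # reset after the spike
    return spike_times

# Two toy waveform classes: a strong burst and a weak continuous tone.
t = np.arange(0.0, 1e-6, 1e-9)        # 1 us sampled at 1 ns
burst = np.abs(np.sin(2 * np.pi * 50e6 * t)) * (t < 0.5e-6)
weak = 0.2 * np.abs(np.sin(2 * np.pi * 50e6 * t))

n_burst = len(analog_to_spikes(burst, dt=1e-9, tau=20e-9, threshold=5e-9))
n_weak = len(analog_to_spikes(weak, dt=1e-9, tau=20e-9, threshold=5e-9))
```

With these parameters the weak tone never reaches threshold while the burst spikes repeatedly, so the spike count alone separates the two classes, which is the feature a spike-number-based classifier operates on.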
|
23
|
George AM, Dey S, Banerjee D, Mukherjee A, Suri M. Online Time-Series Forecasting using Spiking Reservoir. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.10.067] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
|
24
|
Haşegan D, Deible M, Earl C, D’Onofrio D, Hazan H, Anwar H, Neymotin SA. Training spiking neuronal networks to perform motor control using reinforcement and evolutionary learning. Front Comput Neurosci 2022; 16:1017284. [PMID: 36249482 PMCID: PMC9563231 DOI: 10.3389/fncom.2022.1017284] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2022] [Accepted: 08/31/2022] [Indexed: 11/13/2022] Open
Abstract
Artificial neural networks (ANNs) have been successfully trained to perform a wide range of sensory-motor behaviors. In contrast, the performance of spiking neuronal network (SNN) models trained to perform similar behaviors remains relatively suboptimal. In this work, we aimed to push the field of SNNs forward by exploring the potential of different learning mechanisms to achieve optimal performance. We trained SNNs to solve the CartPole reinforcement learning (RL) control problem using two learning mechanisms operating at different timescales: (1) spike-timing-dependent reinforcement learning (STDP-RL) and (2) evolutionary strategy (EVOL). Though the role of STDP-RL in biological systems is well established, several other mechanisms, though not fully understood, work in concert during learning in vivo. Recreating accurate models that capture the interaction of STDP-RL with these diverse learning mechanisms is extremely difficult. EVOL is an alternative method and has been successfully used in many studies to fit model neural responsiveness to electrophysiological recordings and, in some cases, for classification problems. One advantage of EVOL is that it may not need to capture all interacting components of synaptic plasticity and thus provides a better alternative to STDP-RL. Here, we compared the performance of each algorithm after training, which revealed EVOL as a powerful method for training SNNs to perform sensory-motor behaviors. Our modeling opens up new capabilities for SNNs in RL and could serve as a testbed for neurobiologists aiming to understand multi-timescale learning mechanisms and dynamics in neuronal circuits.
Collapse
Affiliation(s)
- Daniel Haşegan
- Vilcek Institute of Graduate Biomedical Sciences, NYU Grossman School of Medicine, New York, NY, United States
| | - Matt Deible
- Department of Computer Science, University of Pittsburgh, Pittsburgh, PA, United States
| | - Christopher Earl
- Department of Computer Science, University of Massachusetts Amherst, Amherst, MA, United States
| | - David D’Onofrio
- Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, United States
| | - Hananel Hazan
- Allen Discovery Center, Tufts University, Boston, MA, United States
| | - Haroon Anwar
- Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, United States
| | - Samuel A. Neymotin
- Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, United States
- Department of Psychiatry, NYU Grossman School of Medicine, New York, NY, United States
| |
Collapse
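The EVOL mechanism compared in the entry above can be sketched as a basic evolutionary strategy: sample Gaussian perturbations of the weights, score each perturbed network by episode reward, and step the parameters along the reward-weighted average of the perturbations. The sketch below substitutes a toy quadratic reward for the CartPole episode and is not the authors' implementation; all names and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def episode_reward(weights):
    # Toy stand-in for the CartPole episode reward: the closer the
    # parameter vector is to an arbitrary target, the higher the score.
    target = np.array([0.5, -1.0, 2.0])
    return -float(np.sum((weights - target) ** 2))

def evolve(weights, sigma=0.1, pop=50, lr=0.02, generations=200):
    """Basic evolutionary strategy: sample Gaussian perturbations,
    normalize their rewards, and move the weights along the
    reward-weighted average perturbation direction."""
    for _ in range(generations):
        noise = rng.normal(size=(pop, weights.size))
        rewards = np.array([episode_reward(weights + sigma * n) for n in noise])
        adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
        weights = weights + (lr / (pop * sigma)) * (noise.T @ adv)
    return weights

w0 = np.zeros(3)
w_final = evolve(w0.copy())
```

One appeal of this scheme, as the abstract notes, is that it needs only an episode-level score, so no per-synapse plasticity trace has to be modeled.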
|
25
|
Mo L, Tao Z. EvtSNN: Event-driven SNN simulator optimized by population and pre-filtering. Front Neurosci 2022; 16:944262. [PMID: 36248639 PMCID: PMC9560128 DOI: 10.3389/fnins.2022.944262] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2022] [Accepted: 08/30/2022] [Indexed: 11/13/2022] Open
Abstract
Recently, spiking neural networks (SNNs) have been widely studied due to their biological interpretability and potential for low-power computing. However, traditional clock-driven simulators suffer from two problems: their accuracy is limited by the time step, and lateral inhibition can fail. To address these issues, we introduce EvtSNN (Event SNN), a faster event-driven SNN simulator inspired by EDHA (Event-Driven High Accuracy). Two innovations are proposed to accelerate the computation of event-driven neurons. First, intermediate results can be reused in population computing without repeated calculations. Second, unnecessary peak calculations are skipped when a condition is met. In the MNIST classification task, EvtSNN took 56 s to complete one epoch of unsupervised training and achieved 89.56% accuracy, whereas EDHA takes 642 s. In benchmark experiments, the simulation speed of EvtSNN is 2.9-14.0 times that of EDHA across different network scales.
Collapse
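The event-driven principle behind this entry (computing only when a spike occurs, independent of any time slice) can be illustrated with a minimal sketch: between input events the LIF membrane is advanced analytically via its closed-form exponential decay rather than in fixed time steps. This is a generic illustration, not EvtSNN's actual code; the class name and constants are invented for the example.

```python
import math

class EventDrivenLIF:
    """LIF neuron updated only when an input spike arrives.

    Between events the membrane decays analytically as
    v(t) = v0 * exp(-(t - t0) / tau), so no per-time-step loop
    is needed and accuracy does not depend on a time slice."""
    def __init__(self, tau=20.0, threshold=1.0):
        self.tau = tau
        self.threshold = threshold
        self.v = 0.0
        self.t_last = 0.0

    def receive(self, t, weight):
        # Decay from the last event time directly to t, then integrate.
        self.v *= math.exp(-(t - self.t_last) / self.tau)
        self.t_last = t
        self.v += weight
        if self.v >= self.threshold:
            self.v = 0.0          # reset on output spike
            return True
        return False

n = EventDrivenLIF(tau=20.0, threshold=1.0)
events = [(1.0, 0.6), (2.0, 0.6), (100.0, 0.6)]   # (time, weight) pairs
out = [n.receive(t, w) for t, w in events]
# The two close-together inputs sum past threshold; the late input
# arrives after the membrane has fully decayed and stays subthreshold.
```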
|
26
|
Putra RVW, Hanif MA, Shafique M. EnforceSNN: Enabling resilient and energy-efficient spiking neural network inference considering approximate DRAMs for embedded systems. Front Neurosci 2022; 16:937782. [PMID: 36033624 PMCID: PMC9399768 DOI: 10.3389/fnins.2022.937782] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2022] [Accepted: 07/11/2022] [Indexed: 11/13/2022] Open
Abstract
Spiking Neural Networks (SNNs) have shown capabilities of achieving high accuracy under unsupervised settings and low operational power/energy due to their bio-plausible computations. Previous studies identified that DRAM-based off-chip memory accesses dominate the energy consumption of SNN processing. However, state-of-the-art works do not optimize the DRAM energy-per-access, thereby hindering the SNN-based systems from achieving further energy efficiency gains. To substantially reduce the DRAM energy-per-access, an effective solution is to decrease the DRAM supply voltage, but it may lead to errors in DRAM cells (i.e., so-called approximate DRAM). Toward this, we propose EnforceSNN, a novel design framework that provides a solution for resilient and energy-efficient SNN inference using reduced-voltage DRAM for embedded systems. The key mechanisms of our EnforceSNN are: (1) employing quantized weights to reduce the DRAM access energy; (2) devising an efficient DRAM mapping policy to minimize the DRAM energy-per-access; (3) analyzing the SNN error tolerance to understand its accuracy profile considering different bit error rate (BER) values; (4) leveraging the information for developing an efficient fault-aware training (FAT) that considers different BER values and bit error locations in DRAM to improve the SNN error tolerance; and (5) developing an algorithm to select the SNN model that offers good trade-offs among accuracy, memory, and energy consumption. The experimental results show that our EnforceSNN maintains the accuracy (i.e., no accuracy loss for BER ≤ 10⁻³) as compared to the baseline SNN with accurate DRAM while achieving up to 84.9% of DRAM energy saving and up to 4.1x speed-up of DRAM data throughput across different network sizes.
Collapse
Affiliation(s)
- Rachmad Vidya Wicaksana Putra
- Embedded Computing Systems, Institute of Computer Engineering, Technische Universität Wien, Vienna, Austria
- *Correspondence: Rachmad Vidya Wicaksana Putra
| | - Muhammad Abdullah Hanif
- eBrain Lab, Division of Engineering, New York University Abu Dhabi (NYUAD), Abu Dhabi, United Arab Emirates
| | - Muhammad Shafique
- eBrain Lab, Division of Engineering, New York University Abu Dhabi (NYUAD), Abu Dhabi, United Arab Emirates
| |
Collapse
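As a rough illustration of mechanisms (1) and (3) of the framework above (quantized weights and accuracy profiling under a bit error rate), the sketch below injects independent random bit flips into quantized weights, the kind of fault model used for reduced-voltage DRAM reads. It is not the EnforceSNN code; the bit width, BER value, and helper names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def quantize(weights, bits=8):
    """Uniformly quantize weights from [-1, 1] to signed integers."""
    scale = (1 << (bits - 1)) - 1
    return np.clip(np.round(weights * scale), -scale, scale).astype(np.int32)

def inject_bit_errors(q, bits, ber):
    """Flip each stored bit independently with probability ber,
    mimicking read errors in reduced-voltage (approximate) DRAM."""
    u = q.astype(np.int64) & ((1 << bits) - 1)   # raw two's-complement bits
    for b in range(bits):
        flips = rng.random(q.shape) < ber
        u[flips] ^= (1 << b)
    u[u >= (1 << (bits - 1))] -= (1 << bits)     # reinterpret as signed
    return u.astype(np.int32)

w = rng.uniform(-1.0, 1.0, size=1000)
q = quantize(w)
faulty = inject_bit_errors(q, bits=8, ber=1e-3)
fraction_corrupted = float(np.mean(q != faulty))
```

Sweeping `ber` and re-evaluating the network with the corrupted weights is the kind of accuracy-vs-BER profiling that fault-aware training then exploits.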
|
27
|
A heuristic approach to the hyperparameters in training spiking neural networks using spike-timing-dependent plasticity. Neural Comput Appl 2022. [DOI: 10.1007/s00521-021-06824-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
The third type of neural network, the spiking neural network, was developed to represent neuronal activity in living organisms more accurately. Spiking neural networks have many parameters that can be difficult to adjust manually for a given classification problem. The analysis and selection of coefficient values in the network can be treated as an optimization problem, and a practical method for selecting them automatically can decrease the time needed to develop such a model. In this paper, we propose a heuristic approach to analyzing and selecting coefficients based on the idea of collaborative working. The proposed idea rests on analyzing different coefficients in parallel and choosing the best or averaged ones. Framing this as an optimization problem allows the selection of all variables that can significantly affect the convergence of the accuracy. Our proposal was tested using network simulators and popular databases to demonstrate the possibilities of the described approach. Five different heuristic algorithms were tested, and the best results were reached by the Cuckoo Search Algorithm, Grasshopper Optimization Algorithm, and Polar Bears Algorithm.
Collapse
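The collaborative idea described above (several searchers analyzing coefficients in parallel and sharing the best ones) can be sketched as a simple shared-best heuristic search. The coefficient ranges and the surrogate objective below are invented stand-ins for a real SNN training run, and the loop is a generic illustration rather than any of the five tested algorithms.

```python
import random

random.seed(0)

SPACE = {                      # illustrative coefficient ranges
    "tau": (5.0, 50.0),
    "threshold": (0.1, 2.0),
    "learn_rate": (1e-4, 1e-1),
}

def objective(params):
    """Surrogate for classification accuracy: a smooth score that
    peaks at an arbitrary 'good' configuration, standing in for a
    full SNN training-and-evaluation run."""
    best = {"tau": 20.0, "threshold": 0.8, "learn_rate": 0.01}
    return -sum((params[k] - best[k]) ** 2 / (hi - lo) ** 2
                for k, (lo, hi) in SPACE.items())

def collaborative_search(agents=5, rounds=40, step=0.2):
    """Each round, every agent perturbs the best-known configuration;
    the best proposal (including the incumbent) is shared with all."""
    best = {k: random.uniform(lo, hi) for k, (lo, hi) in SPACE.items()}
    for _ in range(rounds):
        proposals = [
            {k: min(max(best[k] + random.gauss(0.0, step * (hi - lo)), lo), hi)
             for k, (lo, hi) in SPACE.items()}
            for _ in range(agents)
        ]
        best = max(proposals + [best], key=objective)
    return best

found = collaborative_search()
```

Because the incumbent is always kept in the candidate set, the shared best configuration never gets worse from round to round.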
|
28
|
Spiking Neural Networks and Their Applications: A Review. Brain Sci 2022; 12:brainsci12070863. [PMID: 35884670 PMCID: PMC9313413 DOI: 10.3390/brainsci12070863] [Citation(s) in RCA: 38] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2022] [Revised: 05/12/2022] [Accepted: 06/13/2022] [Indexed: 02/04/2023] Open
Abstract
The past decade has witnessed the great success of deep neural networks in various domains. However, deep neural networks are very resource-intensive in terms of energy consumption, data requirements, and high computational costs. With the recent increasing need for the autonomy of machines in the real world, e.g., self-driving vehicles, drones, and collaborative robots, exploitation of deep neural networks in those applications has been actively investigated. In those applications, energy and computational efficiencies are especially important because of the need for real-time responses and the limited energy supply. A promising solution to these previously infeasible applications has recently been given by biologically plausible spiking neural networks. Spiking neural networks aim to bridge the gap between neuroscience and machine learning, using biologically realistic models of neurons to carry out the computation. Due to their functional similarity to the biological neural network, spiking neural networks can embrace the sparsity found in biology and are highly compatible with temporal code. Our contributions in this work are: (i) we give a comprehensive review of theories of biological neurons; (ii) we present various existing spike-based neuron models, which have been studied in neuroscience; (iii) we detail synapse models; (iv) we provide a review of artificial neural networks; (v) we provide detailed guidance on how to train spike-based neuron models; (vi) we review available spike-based neuron frameworks that have been developed to support implementing spiking neural networks; (vii) finally, we cover existing spiking neural network applications in computer vision and robotics domains. The paper concludes with discussions of future perspectives.
Collapse
|
29
|
Müller E, Arnold E, Breitwieser O, Czierlinski M, Emmel A, Kaiser J, Mauch C, Schmitt S, Spilger P, Stock R, Stradmann Y, Weis J, Baumbach A, Billaudelle S, Cramer B, Ebert F, Göltz J, Ilmberger J, Karasenko V, Kleider M, Leibfried A, Pehle C, Schemmel J. A Scalable Approach to Modeling on Accelerated Neuromorphic Hardware. Front Neurosci 2022; 16:884128. [PMID: 35663548 PMCID: PMC9157770 DOI: 10.3389/fnins.2022.884128] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2022] [Accepted: 04/20/2022] [Indexed: 11/29/2022] Open
Abstract
Neuromorphic systems open up opportunities to enlarge the explorative space for computational research. However, it is often challenging to unite efficiency and usability. This work presents the software aspects of this endeavor for the BrainScaleS-2 system, a hybrid accelerated neuromorphic hardware architecture based on physical modeling. We introduce key aspects of the BrainScaleS-2 Operating System: experiment workflow, API layering, software design, and platform operation. We present use cases to discuss and derive requirements for the software and showcase the implementation. The focus lies on novel system and software features such as multi-compartmental neurons, fast re-configuration for hardware-in-the-loop training, applications for the embedded processors, the non-spiking operation mode, interactive platform access, and sustainable hardware/software co-development. Finally, we discuss further developments in terms of hardware scale-up, system usability, and efficiency.
Collapse
Affiliation(s)
- Eric Müller
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| | - Elias Arnold
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| | - Oliver Breitwieser
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| | - Milena Czierlinski
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| | - Arne Emmel
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| | - Jakob Kaiser
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| | - Christian Mauch
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| | - Sebastian Schmitt
- Third Institute of Physics, University of Göttingen, Göttingen, Germany
| | - Philipp Spilger
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| | - Raphael Stock
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| | - Yannik Stradmann
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| | - Johannes Weis
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| | - Andreas Baumbach
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Department of Physiology, University of Bern, Bern, Switzerland
| | | | - Benjamin Cramer
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| | - Falk Ebert
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| | - Julian Göltz
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Department of Physiology, University of Bern, Bern, Switzerland
| | - Joscha Ilmberger
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| | - Vitali Karasenko
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| | - Mitja Kleider
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| | - Aron Leibfried
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| | - Christian Pehle
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| | - Johannes Schemmel
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| |
Collapse
|
30
|
Lu S, Sengupta A. Neuroevolution Guided Hybrid Spiking Neural Network Training. Front Neurosci 2022; 16:838523. [PMID: 35546880 PMCID: PMC9082355 DOI: 10.3389/fnins.2022.838523] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2021] [Accepted: 03/11/2022] [Indexed: 11/16/2022] Open
Abstract
Neuromorphic computing algorithms based on Spiking Neural Networks (SNNs) are evolving to be a disruptive technology driving machine learning research. The overarching goal of this work is to develop a structured algorithmic framework for SNN training that optimizes unique SNN-specific properties like neuron spiking threshold using neuroevolution as a feedback strategy. We provide extensive results for this hybrid bio-inspired training strategy and show that such a feedback-based learning approach leads to explainable neuromorphic systems that adapt to the specific underlying application. Our analysis reveals 53.8, 28.8, and 28.2% latency improvement for the neuroevolution-based SNN training strategy on CIFAR-10, CIFAR-100, and ImageNet datasets, respectively in contrast to state-of-the-art conversion based approaches. The proposed algorithm can be easily extended to other application domains like image classification in presence of adversarial attacks where 43.2 and 27.9% latency improvements were observed on CIFAR-10 and CIFAR-100 datasets, respectively.
Collapse
Affiliation(s)
- Sen Lu
- School of Electrical Engineering and Computer Science, The Pennsylvania State University, University Park, PA, United States
| | - Abhronil Sengupta
- School of Electrical Engineering and Computer Science, The Pennsylvania State University, University Park, PA, United States
| |
Collapse
|
31
|
Javanshir A, Nguyen TT, Mahmud MAP, Kouzani AZ. Advancements in Algorithms and Neuromorphic Hardware for Spiking Neural Networks. Neural Comput 2022; 34:1289-1328. [PMID: 35534005 DOI: 10.1162/neco_a_01499] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2021] [Accepted: 01/18/2022] [Indexed: 11/04/2022]
Abstract
Artificial neural networks (ANNs) have advanced rapidly owing to their success in various application domains, including autonomous driving and drone vision. Researchers have been improving the performance efficiency and computational requirements of ANNs inspired by the mechanisms of the biological brain. Spiking neural networks (SNNs) provide a power-efficient and brain-inspired computing paradigm for machine learning applications. However, evaluating large-scale SNNs on classical von Neumann architectures (central processing units/graphics processing units) demands a high amount of power and time. Therefore, hardware designers have developed neuromorphic platforms to execute SNNs in an approach that combines fast processing and low power consumption. Recently, field-programmable gate arrays (FPGAs) have been considered promising candidates for implementing neuromorphic solutions due to their advantages, such as higher flexibility, shorter design time, and excellent stability. This review describes recent advances in SNNs and the neuromorphic hardware platforms (digital, analog, hybrid, and FPGA based) suitable for their implementation. We present the biological background of SNN learning, such as neuron models and information encoding techniques, followed by a categorization of SNN training. In addition, we describe state-of-the-art SNN simulators. Furthermore, we review and present FPGA-based hardware implementations of SNNs. Finally, we discuss some future directions for research in this field.
Collapse
Affiliation(s)
| | - Thanh Thi Nguyen
- School of Information Technology, Deakin University (Burwood Campus) Burwood, VIC 3125, Australia
| | - M A Parvez Mahmud
- School of Engineering, Deakin University, Geelong, VIC 3216, Australia
| | - Abbas Z Kouzani
- School of Engineering, Deakin University, Geelong, VIC 3216, Australia
| |
Collapse
|
32
|
Tan W, Kozma R, Patel D. Optimization methods for improved efficiency and performance of Deep Q-Networks upon conversion to neuromorphic population platforms. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.108257] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
33
|
Hussaini S, Milford M, Fischer T. Spiking Neural Networks for Visual Place Recognition Via Weighted Neuronal Assignments. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3149030] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Affiliation(s)
- Somayeh Hussaini
- QUT Centre for Robotics, Queensland University of Technology, Brisbane, QLD, Australia
| | - Michael Milford
- QUT Centre for Robotics, Queensland University of Technology, Brisbane, QLD, Australia
| | - Tobias Fischer
- QUT Centre for Robotics, Queensland University of Technology, Brisbane, QLD, Australia
| |
Collapse
|
34
|
Nagarajan K, Li J, Ensan SS, Kannan S, Ghosh S. Fault Injection Attacks in Spiking Neural Networks and Countermeasures. FRONTIERS IN NANOTECHNOLOGY 2022. [DOI: 10.3389/fnano.2021.801999] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
Spiking Neural Networks (SNN) are fast emerging as an alternative option to Deep Neural Networks (DNN). They are computationally more powerful and provide higher energy-efficiency than DNNs. While exciting at first glance, SNNs contain security-sensitive assets (e.g., neuron threshold voltage) and vulnerabilities (e.g., sensitivity of classification accuracy to neuron threshold voltage change) that can be exploited by adversaries. We explore global fault injection attacks using an external power supply and laser-induced local power glitches on SNNs designed using common analog neurons, to corrupt critical training parameters such as spike amplitude and the neuron's membrane threshold potential. We also analyze the impact of power-based attacks on the SNN for a digit classification task and observe a worst-case classification accuracy degradation of −85.65%. We explore the impact of various design parameters of the SNN (e.g., learning rate, spike trace decay constant, and number of neurons) and identify design choices for robust implementation. We recover classification accuracy degradation by 30–47% for a subset of power-based attacks by modifying SNN training parameters such as learning rate, trace decay constant, and neurons per layer. We also propose hardware-level defenses, e.g., a robust current driver design that is immune to power-oriented attacks, and improved circuit sizing of neuron components to reduce/recover the adversarial accuracy degradation at the cost of negligible area and a 25% power overhead. We also propose a dummy neuron-based detection of voltage fault injection at ∼1% power and area overhead each.
Collapse
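The attack surface described in this entry (a glitched membrane threshold changing classification behavior) can be illustrated with a minimal LIF model: raising the threshold parameter, as a power glitch might, suppresses output spikes for the same input stream. This is a toy sketch, not the paper's analog-neuron setup; the decay constant, input values, and thresholds are invented.

```python
def spike_count(inputs, threshold, decay=0.9):
    """Output spikes of a minimal LIF neuron for a fixed input stream."""
    v, count = 0.0, 0
    for x in inputs:
        v = v * decay + x             # leaky membrane update
        if v >= threshold:            # threshold crossing -> spike
            count += 1
            v = 0.0                   # reset
    return count

stimulus = [0.3] * 100                # identical input drive each step

nominal = spike_count(stimulus, threshold=1.0)
glitched = spike_count(stimulus, threshold=1.5)   # attacker-raised threshold
```

Since downstream layers see only the spike counts, this kind of parameter corruption propagates directly into classification errors, which is why the paper treats the threshold as a security-sensitive asset.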
|
35
|
Kozma R, Baars BJ, Geld N. Evolutionary Advantages of Stimulus-Driven EEG Phase Transitions in the Upper Cortical Layers. Front Syst Neurosci 2021; 15:784404. [PMID: 34955771 PMCID: PMC8692947 DOI: 10.3389/fnsys.2021.784404] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2021] [Accepted: 11/03/2021] [Indexed: 11/13/2022] Open
Abstract
Spatio-temporal brain activity monitored by EEG recordings in humans and other mammals has identified beta/gamma oscillations (20-80 Hz), which are self-organized into spatio-temporal structures recurring at theta/alpha rates (4-12 Hz). These structures have statistically significant correlations with sensory stimuli and reinforcement contingencies perceived by the subject. The repeated collapse of self-organized structures at theta/alpha rates generates laterally propagating phase gradients (phase cones), ignited at some specific location of the cortical sheet. Phase cones have been interpreted as neural signatures of transient perceptual experiences according to the cinematic theory of brain dynamics. The rapid expansion of essentially isotropic phase cones is consistent with the propagation of perceptual broadcasts postulated by Global Workspace Theory (GWT). What is the evolutionary advantage of brains operating with repeatedly collapsing dynamics? This question is answered using thermodynamic concepts. According to neuropercolation theory, waking brains are described as non-equilibrium thermodynamic systems operating at the edge of criticality, undergoing repeated phase transitions. This work analyzes the role of long-range axonal connections and metabolic processes in the regulation of critical brain dynamics. Historically, the near 10 Hz domain has been associated with conscious sensory integration, cortical "ignitions" linked to conscious visual perception, and conscious experiences. We can therefore combine a very large body of experimental evidence and theory, including graph theory, neuropercolation, and GWT. This cortical operating style may optimize a tradeoff between rapid adaptation to novelty vs. stable and widespread self-organization, therefore resulting in significant Darwinian benefits.
Collapse
Affiliation(s)
- Robert Kozma
- Center for Large-Scale Intelligent Optimization and Networks, Department of Mathematics, University of Memphis, Memphis, TN, United States
| | - Bernard J. Baars
- Center for the Future Mind, Florida Atlantic University, Boca Raton, FL, United States
- Society for MindBrain Sciences, San Diego, CA, United States
| | | |
Collapse
|
36
|
Spiking Neural Networks for Computational Intelligence: An Overview. BIG DATA AND COGNITIVE COMPUTING 2021. [DOI: 10.3390/bdcc5040067] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Deep neural networks with rate-based neurons have exhibited tremendous progress in the last decade. However, the same level of progress has not been observed in research on spiking neural networks (SNN), despite their capability to handle temporal data, energy-efficiency and low latency. This could be because the benchmarking techniques for SNNs are based on the methods used for evaluating deep neural networks, which do not provide a clear evaluation of the capabilities of SNNs. Particularly, the benchmarking of SNN approaches with regards to energy efficiency and latency requires realization in suitable hardware, which imposes additional temporal and resource constraints upon ongoing projects. This review aims to provide an overview of the current real-world applications of SNNs and identifies steps to accelerate research involving SNNs in the future.
Collapse
|
37
|
Vieth M, Stöber TM, Triesch J. PymoNNto: A Flexible Modular Toolbox for Designing Brain-Inspired Neural Networks. Front Neuroinform 2021; 15:715131. [PMID: 34790108 PMCID: PMC8591031 DOI: 10.3389/fninf.2021.715131] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2021] [Accepted: 09/07/2021] [Indexed: 11/13/2022] Open
Abstract
The Python Modular Neural Network Toolbox (PymoNNto) provides a versatile and adaptable Python-based framework to develop and investigate brain-inspired neural networks. In contrast to other commonly used simulators such as Brian2 and NEST, PymoNNto imposes only minimal restrictions for implementation and execution. The basic structure of PymoNNto consists of one network class with several neuron- and synapse-groups. The behaviour of each group can be flexibly defined by exchangeable modules. The implementation of these modules is up to the user and only limited by Python itself. Behaviours can be implemented in Python, Numpy, Tensorflow, and other libraries to perform computations on CPUs and GPUs. PymoNNto comes with convenient high level behaviour modules, allowing differential equation-based implementations similar to Brian2, and an adaptable modular Graphical User Interface for real-time observation and modification of the simulated network and its parameters.
Collapse
Affiliation(s)
- Marius Vieth
- Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
| | | | - Jochen Triesch
- Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
| |
Collapse
|
38
|
Abstract
In recent years, spiking neural networks (SNNs) have attracted increasing attention from researchers by virtue of their biological interpretability and low-power computing. An SNN simulator is an essential tool for accomplishing image classification, recognition, speech recognition, and other tasks using SNNs. However, most existing simulators for spiking neural networks are clock-driven, which has two main problems. First, the calculation result is affected by the time slice: when the calculation accuracy is low, the calculation speed is fast, but when the accuracy is high, the speed is unacceptable. The other is the failure of lateral inhibition, which severely affects SNN learning. To solve these problems, an event-driven, high-accuracy simulator named EDHA (Event-Driven High Accuracy) for spiking neural networks is proposed in this paper. EDHA takes full advantage of the event-driven characteristics of SNNs and computes only when a spike is generated, independent of the time slice. Compared with previous SNN simulators, EDHA is completely event-driven, which eliminates a large number of calculations and achieves higher computational accuracy. The calculation speed of EDHA in the MNIST classification task is more than 10 times faster than that of mainstream clock-driven simulators; by optimizing the spike encoding method, it can even be more than 100 times faster. Due to the cross-platform nature of Java, EDHA can run on x86, amd64, ARM, and other platforms that support Java.
Collapse
|
39
|
Beck M, Maier G, Flitter M, Gruna R, Längle T, Heizmann M, Beyerer J. An Extended Modular Processing Pipeline for Event-Based Vision in Automatic Visual Inspection. SENSORS 2021; 21:s21186143. [PMID: 34577349 PMCID: PMC8472878 DOI: 10.3390/s21186143] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/19/2021] [Revised: 09/02/2021] [Accepted: 09/03/2021] [Indexed: 11/16/2022]
Abstract
Dynamic Vision Sensors differ from conventional cameras in that only intensity changes of individual pixels are perceived and transmitted as an asynchronous stream, instead of entire frames. The technology promises, among other things, high temporal resolution, low latency, and low data rates. While such sensors currently enjoy much scientific attention, there are few publications on practical applications. One field of application that has hardly been considered so far, yet potentially fits the sensor principle well due to its special properties, is automatic visual inspection. In this paper, we evaluate current state-of-the-art processing algorithms in this new application domain. We further propose an algorithmic approach for identifying ideal time windows within an event stream for object classification. To evaluate our method, we acquire two novel datasets that contain typical visual inspection scenarios, i.e., the inspection of objects on a conveyor belt and during free fall. On the basis of these new datasets, we demonstrate that our algorithmic extension for data processing greatly increases the classification accuracy of current algorithms. By making our new datasets publicly available, we intend to stimulate further research on the application of Dynamic Vision Sensors in machine vision.
Collapse
Affiliation(s)
- Moritz Beck
- Fraunhofer IOSB, Karlsruhe, Institute of Optronics, System Technologies and Image Exploitation IOSB, 76131 Karlsruhe, Germany; (M.B.); (M.F.); (R.G.); (T.L.); (J.B.)
| | - Georg Maier
- Fraunhofer IOSB, Karlsruhe, Institute of Optronics, System Technologies and Image Exploitation IOSB, 76131 Karlsruhe, Germany; (M.B.); (M.F.); (R.G.); (T.L.); (J.B.)
- Correspondence:
| | - Merle Flitter
- Fraunhofer IOSB, Karlsruhe, Institute of Optronics, System Technologies and Image Exploitation IOSB, 76131 Karlsruhe, Germany; (M.B.); (M.F.); (R.G.); (T.L.); (J.B.)
| | - Robin Gruna
- Fraunhofer IOSB, Karlsruhe, Institute of Optronics, System Technologies and Image Exploitation IOSB, 76131 Karlsruhe, Germany; (M.B.); (M.F.); (R.G.); (T.L.); (J.B.)
| | - Thomas Längle
- Fraunhofer IOSB, Karlsruhe, Institute of Optronics, System Technologies and Image Exploitation IOSB, 76131 Karlsruhe, Germany; (M.B.); (M.F.); (R.G.); (T.L.); (J.B.)
| | - Michael Heizmann
- Institute of Industrial Information Technology (IIIT), Karlsruhe Institute of Technology (KIT), 76131 Karlsruhe, Germany;
| | - Jürgen Beyerer
- Fraunhofer IOSB, Karlsruhe, Institute of Optronics, System Technologies and Image Exploitation IOSB, 76131 Karlsruhe, Germany; (M.B.); (M.F.); (R.G.); (T.L.); (J.B.)
- Vision and Fusion Laboratory (IES), Karlsruhe Institute of Technology (KIT), 76131 Karlsruhe, Germany
| |
Collapse
|
40
|
Kulkarni SR, Parsa M, Mitchell JP, Schuman CD. Benchmarking the performance of neuromorphic and spiking neural network simulators. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.03.028] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
|
41
|
Ben Abdallah A, Dang KN. Toward Robust Cognitive 3D Brain-Inspired Cross-Paradigm System. Front Neurosci 2021; 15:690208. [PMID: 34248491 PMCID: PMC8267251 DOI: 10.3389/fnins.2021.690208] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2021] [Accepted: 06/04/2021] [Indexed: 11/13/2022] Open
Abstract
Spiking neuromorphic systems have been introduced as promising platforms for energy-efficient execution of spiking neural networks (SNNs). SNNs incorporate neuronal and synaptic states, in addition to varying time scales, into their computational model. Since each neuron in these networks is connected to many others, high bandwidth is required. Moreover, since spike times encode information in SNNs, precise communication latency is also needed, although an SNN as a whole tolerates spike-delay variation within limits. The two-dimensional packet-switched network-on-chip was proposed as a solution to provide a scalable interconnect fabric in large-scale spike-based neural networks. 3D-ICs have also attracted much attention as a potential solution to the interconnect bottleneck. Combining these two emerging technologies opens a new horizon for IC design to satisfy the demands for low power and small footprint in emerging AI applications. Moreover, although fault tolerance is a natural feature of biological systems, integrating many computation and memory units into neuromorphic chips raises reliability issues, where a defective part can affect the overall system's performance. This paper presents the design and simulation of R-NASH, a reliable three-dimensional digital neuromorphic system explicitly geared, through 3D-IC technology, toward the three-dimensional structure of the biological brain, where information in the network is represented by sparse patterns of spike timing and learning is based on the local spike-timing-dependent plasticity rule. Our platform enables high integration density and small spike delay for spiking networks and features a scalable design. R-NASH is based on Through-Silicon-Via technology, facilitating spiking-neural-network implementation on clustered neurons connected by a Network-on-Chip. We provide a memory interface with the host CPU, allowing online training and inference of spiking neural networks. Moreover, R-NASH supports fault recovery with graceful performance degradation.
Collapse
Affiliation(s)
- Abderazek Ben Abdallah
- Adaptive Systems Laboratory, Graduate School of Computer Science and Engineering, The University of Aizu, Aizu-Wakamatsu, Japan
| | - Khanh N Dang
- Adaptive Systems Laboratory, Graduate School of Computer Science and Engineering, The University of Aizu, Aizu-Wakamatsu, Japan.,VNU Key Laboratory for Smart Integrated Systems (SISLAB), VNU University of Engineering and Technology, Vietnam National University, Hanoi, Vietnam
| |
Collapse
|
42
|
Abstract
Understanding of the evolved biological function of sleep has advanced considerably in the past decade. However, no equivalent understanding of dreams has emerged. Contemporary neuroscientific theories often view dreams as epiphenomena, and many of the proposals for their biological function are contradicted by the phenomenology of dreams themselves. Now, the recent advent of deep neural networks (DNNs) has finally provided the novel conceptual framework within which to understand the evolved function of dreams. Notably, all DNNs face the issue of overfitting as they learn, which is when performance on one dataset increases but the network's performance fails to generalize (often measured by the divergence of performance on training versus testing datasets). This ubiquitous problem in DNNs is often solved by modelers via "noise injections" in the form of noisy or corrupted inputs. The goal of this paper is to argue that the brain faces a similar challenge of overfitting and that nightly dreams evolved to combat the brain's overfitting during its daily learning. That is, dreams are a biological mechanism for increasing generalizability via the creation of corrupted sensory inputs from stochastic activity across the hierarchy of neural structures. Sleep loss, specifically dream loss, leads to an overfitted brain that can still memorize and learn but fails to generalize appropriately. Herein this "overfitted brain hypothesis" is explicitly developed and then compared and contrasted with existing contemporary neuroscientific theories of dreams. Existing evidence for the hypothesis is surveyed within both neuroscience and deep learning, and a set of testable predictions is put forward that can be pursued both in vivo and in silico.
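The "noise injection" technique the hypothesis builds on can be sketched in a few lines (a toy setup under assumed parameters, not taken from the paper): each training input is corrupted with zero-mean Gaussian noise so the model cannot simply memorize exact input patterns, which empirically improves generalization.

```python
import random

def inject_noise(inputs, sigma=0.1, rng=None):
    """Return a corrupted copy of a batch of feature vectors.

    inputs: list of feature vectors (lists of floats)
    sigma:  standard deviation of the additive Gaussian noise (assumed 0.1)
    """
    rng = rng or random.Random(0)  # fixed seed for a reproducible sketch
    return [[x + rng.gauss(0.0, sigma) for x in row] for row in inputs]

# Hypothetical two-sample batch of 2-dimensional inputs.
batch = [[0.0, 1.0], [0.5, 0.5]]
noisy = inject_noise(batch, sigma=0.1)
# Training would then proceed on `noisy` rather than `batch`.
```

In the paper's framing, dreaming plays the role of `inject_noise`: stochastic activity generates corrupted pseudo-inputs that keep the brain's "training" from overfitting to waking experience.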
Collapse
Affiliation(s)
- Erik Hoel
- Allen Discovery Center, Tufts University, Medford, MA, USA
| |
Collapse
|
43
|
Lopez CD, Constant M, Anderson MJJ, Confino JE, Heffernan JT, Jobin CM. Using machine learning methods to predict nonhome discharge after elective total shoulder arthroplasty. JSES Int 2021; 5:692-698. [PMID: 34223417 PMCID: PMC8245980 DOI: 10.1016/j.jseint.2021.02.011] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/01/2022] Open
Abstract
Background: Machine learning has shown potential to accurately predict outcomes after orthopedic surgery, thereby allowing improved patient selection, risk stratification, and preoperative planning. This study sought to develop machine learning models to predict nonhome discharge after total shoulder arthroplasty (TSA). Methods: The American College of Surgeons National Surgical Quality Improvement Program (NSQIP) database was queried for patients who underwent elective TSA from 2012 to 2018. Boosted decision tree and artificial neural network (ANN) machine learning models were developed to predict nonhome discharge and 30-day postoperative complications. Model performance was measured using the area under the receiver operating characteristic curve (AUC) and overall accuracy (%). Multivariate binary logistic regression analyses were used to identify variables significantly associated with the predicted outcomes. Results: A total of 21,544 elective TSA cases in the NSQIP registry from 2012 to 2018 met the inclusion criteria.
Multivariate logistic regression identified several variables associated with increased risk of nonhome discharge including female sex (odds ratio [OR] = 2.83; 95% confidence interval [CI] = 2.53-3.17; P < .001), age older than 70 years (OR = 3.19; 95% CI = 2.86-3.57; P < .001), American Society of Anesthesiologists classification 3 or greater (OR = 2.70; 95% CI = 2.41-2.03; P < .001), prolonged operative time (OR = 1.38; 95% CI = 1.20-1.58; P < .001), as well as history of diabetes (OR = 1.56; 95% CI = 1.38-1.75; P < .001), chronic obstructive pulmonary disease (OR = 1.71; 95% CI = 1.46-2.01; P < .001), congestive heart failure (OR = 2.65; 95% CI = 1.72-4.01; P < .001), hypertension (OR = 1.35; 95% CI = 1.20-1.52; P = .004), dialysis (OR = 3.58; 95% CI = 2.01-6.39; P = .002), wound infection (OR = 5.67; 95% CI = 3.46-9.29; P < .001), steroid use (OR = 1.43; 95% CI = 1.18-1.74; P = .010), and bleeding disorder (OR = 1.84; 95% CI = 1.45-2.34; P < .001). The boosted decision tree model for predicting nonhome discharge had an AUC of 0.788 and an overall accuracy of 90.3%. The ANN model for predicting nonhome discharge had an AUC of 0.851 and an overall accuracy of 89.9%. For predicting the occurrence of 1 or more postoperative complications, the boosted decision tree model had an AUC of 0.795 and an overall accuracy of 95.5%. The ANN model yielded an AUC of 0.788 and an overall accuracy of 92.5%. Conclusions Both the boosted decision tree and ANN models performed well in predicting nonhome discharge with similar overall accuracy, but the ANN had higher discriminative ability. Based on the findings of this study, machine learning has the potential to accurately predict nonhome discharge after elective TSA. Surgeons can use such tools to guide patient expectations and to improve preoperative discharge planning, with the ultimate goal of decreasing hospital length of stay and improving cost-efficiency.
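The AUC values reported for these models have a simple probabilistic reading: the AUC is the probability that a randomly chosen positive case (here, a nonhome discharge) receives a higher predicted risk than a randomly chosen negative case. A minimal stdlib-only sketch with hypothetical scores (not the paper's data) makes this concrete:

```python
def roc_auc(labels, scores):
    """AUC computed directly from its pairwise-ranking definition.

    labels: 1 for positive (e.g., nonhome discharge), 0 for negative.
    scores: predicted risk for each case. Ties count as half-correct.
    """
    pairs = correct = ties = 0
    for li, si in zip(labels, scores):
        if li != 1:
            continue
        for lj, sj in zip(labels, scores):
            if lj != 0:
                continue
            pairs += 1
            if si > sj:
                correct += 1
            elif si == sj:
                ties += 1
    return (correct + 0.5 * ties) / pairs

# Toy cohort: two positives, three negatives, hypothetical model outputs.
labels = [1, 1, 0, 0, 0]
scores = [0.9, 0.4, 0.6, 0.3, 0.1]
auc = roc_auc(labels, scores)  # 5 of 6 positive/negative pairs ranked correctly
```

An AUC of 0.851, as reported for the ANN, therefore means the model ranks a true nonhome-discharge patient above a home-discharge patient about 85% of the time; the O(n^2) pairwise loop here is only for clarity, and a rank-based formula is used in practice.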
Collapse
Affiliation(s)
- Cesar D Lopez
- New York-Presbyterian/Columbia University Irving Medical Center, New York, NY, USA
| | - Michael Constant
- New York-Presbyterian/Columbia University Irving Medical Center, New York, NY, USA
| | - Matthew J J Anderson
- New York-Presbyterian/Columbia University Irving Medical Center, New York, NY, USA
| | - Jamie E Confino
- New York-Presbyterian/Columbia University Irving Medical Center, New York, NY, USA
| | - John T Heffernan
- New York-Presbyterian/Columbia University Irving Medical Center, New York, NY, USA
| | - Charles M Jobin
- New York-Presbyterian/Columbia University Irving Medical Center, New York, NY, USA
| |
Collapse
|
44
|
Meizlish ML, Pine AB, Bishai JD, Goshua G, Nadelmann ER, Simonov M, Chang CH, Zhang H, Shallow M, Bahel P, Owusu K, Yamamoto Y, Arora T, Atri DS, Patel A, Gbyli R, Kwan J, Won CH, Dela Cruz C, Price C, Koff J, King BA, Rinder HM, Wilson FP, Hwa J, Halene S, Damsky W, van Dijk D, Lee AI, Chun HJ. A neutrophil activation signature predicts critical illness and mortality in COVID-19. Blood Adv 2021; 5:1164-1177. [PMID: 33635335 PMCID: PMC7908851 DOI: 10.1182/bloodadvances.2020003568] [Citation(s) in RCA: 201] [Impact Index Per Article: 67.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2020] [Accepted: 01/13/2021] [Indexed: 12/29/2022] Open
Abstract
Pathologic immune hyperactivation is emerging as a key feature of critical illness in COVID-19, but the mechanisms involved remain poorly understood. We carried out proteomic profiling of plasma from cross-sectional and longitudinal cohorts of hospitalized patients with COVID-19 and analyzed clinical data from our health system database of more than 3300 patients. Using a machine learning algorithm, we identified a prominent signature of neutrophil activation, including resistin, lipocalin-2, hepatocyte growth factor, interleukin-8, and granulocyte colony-stimulating factor, which were the strongest predictors of critical illness. Evidence of neutrophil activation was present on the first day of hospitalization in patients who would only later require transfer to the intensive care unit, thus preceding the onset of critical illness and predicting increased mortality. In the health system database, early elevations in developing and mature neutrophil counts also predicted higher mortality rates. Altogether, these data suggest a central role for neutrophil activation in the pathogenesis of severe COVID-19 and identify molecular markers that distinguish patients at risk of future clinical decompensation.
Collapse
Affiliation(s)
| | | | - Jason D Bishai
- Yale Cardiovascular Research Center, Section of Cardiovascular Medicine, Department of Internal Medicine, and
- Department of Microbial Pathogenesis, Yale School of Medicine, New Haven, CT
| | - George Goshua
- Section of Hematology, Department of Internal Medicine
| | | | - Michael Simonov
- Clinical and Translational Research Accelerator, Department of Internal Medicine
- Department of Dermatology, and
| | - C-Hong Chang
- Yale Cardiovascular Research Center, Section of Cardiovascular Medicine, Department of Internal Medicine, and
| | - Hanming Zhang
- Yale Cardiovascular Research Center, Section of Cardiovascular Medicine, Department of Internal Medicine, and
| | - Marcus Shallow
- Yale Cardiovascular Research Center, Section of Cardiovascular Medicine, Department of Internal Medicine, and
| | - Parveen Bahel
- Department of Laboratory Medicine, Yale School of Medicine, New Haven, CT
| | - Kent Owusu
- Department of Pharmacy, Yale New Haven Health System, New Haven, CT
| | - Yu Yamamoto
- Clinical and Translational Research Accelerator, Department of Internal Medicine
| | - Tanima Arora
- Clinical and Translational Research Accelerator, Department of Internal Medicine
| | - Deepak S Atri
- Division of Cardiovascular Medicine, Brigham and Women's Hospital, Boston, MA; and
| | - Amisha Patel
- Section of Hematology, Department of Internal Medicine
| | - Rana Gbyli
- Section of Hematology, Department of Internal Medicine
| | - Jennifer Kwan
- Yale Cardiovascular Research Center, Section of Cardiovascular Medicine, Department of Internal Medicine, and
| | - Christine H Won
- Section of Pulmonary, Critical Care, and Sleep Medicine, Department of Internal Medicine, and
| | - Charles Dela Cruz
- Section of Pulmonary, Critical Care, and Sleep Medicine, Department of Internal Medicine, and
| | - Christina Price
- Section of Immunology, Department of Internal Medicine, Yale School of Medicine, New Haven, CT
| | - Jonathan Koff
- Section of Pulmonary, Critical Care, and Sleep Medicine, Department of Internal Medicine, and
| | - Brett A King
- Section of Immunology, Department of Internal Medicine, Yale School of Medicine, New Haven, CT
| | - Henry M Rinder
- Section of Hematology, Department of Internal Medicine
- Department of Laboratory Medicine, Yale School of Medicine, New Haven, CT
| | - F Perry Wilson
- Clinical and Translational Research Accelerator, Department of Internal Medicine
| | - John Hwa
- Yale Cardiovascular Research Center, Section of Cardiovascular Medicine, Department of Internal Medicine, and
| | | | | | - David van Dijk
- Yale Cardiovascular Research Center, Section of Cardiovascular Medicine, Department of Internal Medicine, and
| | - Alfred I Lee
- Section of Hematology, Department of Internal Medicine
| | - Hyung J Chun
- Yale Cardiovascular Research Center, Section of Cardiovascular Medicine, Department of Internal Medicine, and
| |
Collapse
|
45
|
Li H, Omange RW, Liang B, Toledo N, Hai Y, Liu LR, Schalk D, Crecente-Campo J, Dacoba TG, Lambe AB, Lim SY, Li L, Kashem MA, Wan Y, Correia-Pinto JF, Seaman MS, Liu XQ, Balshaw RF, Li Q, Schultz-Darken N, Alonso MJ, Plummer FA, Whitney JB, Luo M. Vaccine targeting SIVmac251 protease cleavage sites protects macaques against vaginal infection. J Clin Invest 2021; 130:6429-6442. [PMID: 32853182 DOI: 10.1172/jci138728] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2020] [Accepted: 08/20/2020] [Indexed: 01/03/2023] Open
Abstract
After over 3 decades of research, an effective anti-HIV vaccine remains elusive. The recently halted HVTN702 clinical trial not only further underscores the challenge of developing an effective HIV vaccine but also emphasizes that unconventional and novel vaccine strategies are urgently needed. Here, we report that a vaccine focusing the immune response on the sequences surrounding the 12 viral protease cleavage sites (PCSs) provided greater than 80% protection to Mauritian cynomolgus macaques against repeated intravaginal SIVmac251 challenges. The PCS-specific T cell responses correlated with vaccine efficacy. The PCS vaccine did not induce the immune activation or inflammation known to be associated with increased susceptibility to HIV infection. Machine learning analyses revealed that the immune microenvironment generated by the PCS vaccine was predictive of vaccine efficacy. Our study demonstrates, for the first time to our knowledge, that a vaccine that targets only viral maturation, and lacks full-length Env and Gag immunogens, can prevent intravaginal infection in a stringent macaque/SIV challenge model. Targeting HIV maturation thus offers a potentially novel approach to developing an effective HIV vaccine.
Collapse
Affiliation(s)
- Hongzhao Li
- Department of Medical Microbiology and Infectious Diseases, University of Manitoba, Winnipeg, Manitoba, Canada
| | - Robert W Omange
- Department of Medical Microbiology and Infectious Diseases, University of Manitoba, Winnipeg, Manitoba, Canada
| | - Binhua Liang
- National Microbiology Laboratory, Public Health Agency of Canada, Winnipeg, Manitoba, Canada.,Department of Biochemistry and Medical Genetics, University of Manitoba, Winnipeg, Manitoba, Canada
| | - Nikki Toledo
- Department of Medical Microbiology and Infectious Diseases, University of Manitoba, Winnipeg, Manitoba, Canada
| | - Yan Hai
- Department of Medical Microbiology and Infectious Diseases, University of Manitoba, Winnipeg, Manitoba, Canada
| | - Lewis R Liu
- Department of Medical Microbiology and Infectious Diseases, University of Manitoba, Winnipeg, Manitoba, Canada
| | - Dane Schalk
- Scientific Protocol Implementation Unit, Wisconsin National Primate Research Center, Madison, Wisconsin, USA
| | - Jose Crecente-Campo
- Center for Research in Molecular Medicine and Chronic Diseases (CIMUS), Campus Vida, Universidade de Santiago de Compostela, Santiago de Compostela, Spain
| | - Tamara G Dacoba
- Center for Research in Molecular Medicine and Chronic Diseases (CIMUS), Campus Vida, Universidade de Santiago de Compostela, Santiago de Compostela, Spain
| | | | - So-Yon Lim
- Center for Virology and Vaccine Research, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts, USA
| | - Lin Li
- National Microbiology Laboratory, Public Health Agency of Canada, Winnipeg, Manitoba, Canada
| | - Mohammad Abul Kashem
- Department of Medical Microbiology and Infectious Diseases, University of Manitoba, Winnipeg, Manitoba, Canada
| | - Yanmin Wan
- Nebraska Center for Virology, School of Biological Sciences, University of Nebraska-Lincoln, Lincoln, Nebraska, USA
| | - Jorge F Correia-Pinto
- Center for Research in Molecular Medicine and Chronic Diseases (CIMUS), Campus Vida, Universidade de Santiago de Compostela, Santiago de Compostela, Spain
| | - Michael S Seaman
- Center for Virology and Vaccine Research, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts, USA
| | - Xiao Qing Liu
- Department of Biochemistry and Medical Genetics, University of Manitoba, Winnipeg, Manitoba, Canada.,Department of Obstetrics, Gynecology and Reproductive Sciences, University of Manitoba, Winnipeg, Manitoba, Canada
| | - Robert F Balshaw
- Centre for Healthcare Innovation, University of Manitoba, Winnipeg, Manitoba, Canada
| | - Qingsheng Li
- Nebraska Center for Virology, School of Biological Sciences, University of Nebraska-Lincoln, Lincoln, Nebraska, USA
| | - Nancy Schultz-Darken
- Scientific Protocol Implementation Unit, Wisconsin National Primate Research Center, Madison, Wisconsin, USA
| | - Maria J Alonso
- Center for Research in Molecular Medicine and Chronic Diseases (CIMUS), Campus Vida, Universidade de Santiago de Compostela, Santiago de Compostela, Spain
| | - Francis A Plummer
- Department of Medical Microbiology and Infectious Diseases, University of Manitoba, Winnipeg, Manitoba, Canada.,National Microbiology Laboratory, Public Health Agency of Canada, Winnipeg, Manitoba, Canada
| | - James B Whitney
- Center for Virology and Vaccine Research, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts, USA.,Ragon Institute of MGH, MIT, and Harvard, Cambridge, Massachusetts, USA
| | - Ma Luo
- Department of Medical Microbiology and Infectious Diseases, University of Manitoba, Winnipeg, Manitoba, Canada.,National Microbiology Laboratory, Public Health Agency of Canada, Winnipeg, Manitoba, Canada
| |
Collapse
|
46
|
Zhang S, Zhang C, Du J, Zhang R, Yang S, Li B, Wang P, Deng W. Prediction of Lymph-Node Metastasis in Cancers Using Differentially Expressed mRNA and Non-coding RNA Signatures. Front Cell Dev Biol 2021; 9:605977. [PMID: 33644044 PMCID: PMC7905047 DOI: 10.3389/fcell.2021.605977] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2020] [Accepted: 01/07/2021] [Indexed: 12/12/2022] Open
Abstract
Accurate prediction of lymph-node metastasis in cancers is pivotal for guiding subsequent targeted clinical interventions and achieving a favorable prognosis for patients. Different molecular profiles (mRNA and non-coding RNAs) have been widely used to establish classifiers for cancer prediction (e.g., tumor origin, cancerous or non-cancerous state, cancer subtype). However, few studies focus on lymphatic metastasis evaluation using these profiles, and the performance of classifiers based on different profiles has not been compared. Here, differentially expressed mRNAs, miRNAs, and lncRNAs between lymph-node metastatic and non-metastatic groups were identified as molecular signatures to construct classifiers for lymphatic metastasis prediction in different cancers. With this shared feature selection strategy, support vector machine (SVM) classifiers based on the different profiles were systematically compared in their prediction performance. For representative cancers (a total of nine types), these classifiers achieved comparable overall accuracies of 81.00% (67.96-92.19%), 81.97% (70.83-95.24%), and 80.78% (69.61-90.00%) on independent mRNA, miRNA, and lncRNA datasets, with small sets of biomarkers (6, 12, and 4 on average). Therefore, our proposed feature selection strategies are economical and efficient for identifying biomarkers that aid in developing competitive classifiers for predicting lymph-node metastasis in cancers. A user-friendly webserver was also deployed to help researchers determine metastasis risk by submitting their expression profiles of different origins.
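The differential-expression feature selection step described above can be sketched with toy numbers (the data, group sizes, and the choice of a two-sample t-like statistic are assumptions for illustration, not the paper's exact pipeline): genes are ranked by how strongly their expression separates the metastatic and non-metastatic groups, and the top k become the biomarker signature fed to the SVM.

```python
import statistics

def t_stat(a, b):
    """Welch-style two-sample t statistic (unequal variances)."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

def select_features(group_a, group_b, k):
    """Return indices of the k genes that best separate the two groups.

    group_a / group_b: lists of samples, each a list of gene expressions.
    """
    n_genes = len(group_a[0])
    scored = []
    for g in range(n_genes):
        a = [sample[g] for sample in group_a]
        b = [sample[g] for sample in group_b]
        scored.append((abs(t_stat(a, b)), g))
    return [g for _, g in sorted(scored, reverse=True)[:k]]

# Hypothetical expression matrix: 3 genes, 3 samples per group.
metastatic = [[5.1, 1.0, 2.0], [5.3, 1.1, 2.2], [5.0, 0.9, 1.9]]
non_meta   = [[2.0, 1.0, 2.1], [2.2, 1.1, 2.0], [1.9, 0.9, 2.2]]
top = select_features(metastatic, non_meta, k=1)  # gene 0 clearly separates
```

The selected indices would then index the expression matrix passed to the SVM, keeping the classifier's input to the small signature sizes (4-12 biomarkers on average) reported in the abstract.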
Collapse
Affiliation(s)
- Shihua Zhang
- College of Life Science and Health, Wuhan University of Science and Technology, Wuhan, China
| | - Cheng Zhang
- College of Life Science and Health, Wuhan University of Science and Technology, Wuhan, China
| | - Jinke Du
- State Key Laboratory of Tea Plant Biology and Utilization, Anhui Agricultural University, Hefei, China
| | - Rui Zhang
- State Key Laboratory of Tea Plant Biology and Utilization, Anhui Agricultural University, Hefei, China
| | - Shixiong Yang
- Central Laboratory, Xiaogan Hospital Affiliated to Wuhan University of Science and Technology, Xiaogan, China
| | - Bo Li
- School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan, China
| | - Pingping Wang
- School of Life Science and Technology, Harbin Institute of Technology, Harbin, China
| | - Wensheng Deng
- College of Life Science and Health, Wuhan University of Science and Technology, Wuhan, China
| |
Collapse
|
47
|
Mansouri-Benssassi E, Ye J. Generalisation and robustness investigation for facial and speech emotion recognition using bio-inspired spiking neural networks. Soft comput 2021. [DOI: 10.1007/s00500-020-05501-7] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
Abstract
Emotion recognition through facial expression and non-verbal speech represents an important area in affective computing. It has been extensively studied, from classical feature-extraction techniques to more recent deep learning approaches. However, most of these approaches face two major challenges: (1) robustness: in the face of degradation such as noise, can a model still make correct predictions? and (2) cross-dataset generalisation: when a model is trained on one dataset, can it be used to make inferences on another dataset? To directly address these challenges, we first propose applying a spiking neural network (SNN) to predicting emotional states from facial expression and speech data, and then investigate and compare its accuracy when facing data degradation or unseen new input. We evaluate our approach on third-party, publicly available datasets and compare it with state-of-the-art techniques. Our approach demonstrates robustness to noise: it achieves an accuracy of 56.2% for facial expression recognition (FER), compared to 22.64% for a CNN and 14.10% for an SVM, when input images are degraded with a noise intensity of 0.5, and a highest accuracy of 74.3% for speech emotion recognition (SER), compared to 21.95% for the CNN and 14.75% for the SVM, when audio white noise is applied. For generalisation, our approach achieves consistently high accuracies of 89% for FER and 70% for SER in cross-dataset evaluation, suggesting that it learns more effective feature representations, which lead to good generalisation of facial features and vocal characteristics across subjects.
Collapse
|
48
|
Rastogi M, Lu S, Islam N, Sengupta A. On the Self-Repair Role of Astrocytes in STDP Enabled Unsupervised SNNs. Front Neurosci 2021; 14:603796. [PMID: 33519358 PMCID: PMC7841294 DOI: 10.3389/fnins.2020.603796] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2020] [Accepted: 11/27/2020] [Indexed: 11/29/2022] Open
Abstract
Neuromorphic computing is emerging as a disruptive computational paradigm that attempts to emulate various facets of the underlying structure and functionality of the brain in the algorithm and hardware design of next-generation machine learning platforms. This work goes beyond the focus of current neuromorphic architectures on computational models of the neuron and synapse to examine other computational units of the biological brain that might contribute to cognition, and especially to self-repair. We draw inspiration and insights from computational neuroscience regarding the functionality of glial cells and explore their role in the fault-tolerant capacity of Spiking Neural Networks (SNNs) trained in an unsupervised fashion using Spike-Timing Dependent Plasticity (STDP). We characterize the degree of self-repair that can be enabled in such networks with varying degrees of faults, ranging from 50 to 90%, and evaluate our proposal on the MNIST and Fashion-MNIST datasets.
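The pair-based STDP rule underlying the unsupervised training discussed in this entry can be written down compactly (the amplitudes and time constants below are illustrative assumptions, not the paper's values): a synapse is potentiated when the presynaptic spike precedes the postsynaptic one, and depressed otherwise, with exponentially decaying windows on both sides.

```python
import math

A_PLUS, A_MINUS = 0.01, 0.012     # learning-rate amplitudes (assumed)
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # window time constants in ms (assumed)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt >= 0:
        # Pre before post: causal pairing -> long-term potentiation.
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    # Post before pre: anti-causal pairing -> long-term depression.
    return -A_MINUS * math.exp(dt / TAU_MINUS)

ltp = stdp_dw(t_pre=10.0, t_post=15.0)  # causal pair, positive update
ltd = stdp_dw(t_pre=15.0, t_post=10.0)  # anti-causal pair, negative update
```

Because each update depends only on the relative spike timing, the rule is local to the synapse, which is what makes glia-assisted self-repair schemes like the one studied here compatible with unsupervised on-chip learning.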
Collapse
Affiliation(s)
- Mehul Rastogi
- School of Electrical Engineering and Computer Science, Pennsylvania State University (PSU), University Park, PA, United States
- Department of Computer Science and Information Systems, Birla Institute of Technology and Science Pilani, Goa Campus, India
| | - Sen Lu
- School of Electrical Engineering and Computer Science, Pennsylvania State University (PSU), University Park, PA, United States
| | - Nafiul Islam
- School of Electrical Engineering and Computer Science, Pennsylvania State University (PSU), University Park, PA, United States
| | - Abhronil Sengupta
- School of Electrical Engineering and Computer Science, Pennsylvania State University (PSU), University Park, PA, United States
| |
Collapse
|
49
|
Meizlish ML, Pine AB, Bishai JD, Goshua G, Nadelmann ER, Simonov M, Chang CH, Zhang H, Shallow M, Bahel P, Owusu K, Yamamoto Y, Arora T, Atri DS, Patel A, Gbyli R, Kwan J, Won CH, Dela Cruz C, Price C, Koff J, King BA, Rinder HM, Wilson FP, Hwa J, Halene S, Damsky W, van Dijk D, Lee AI, Chun H. A neutrophil activation signature predicts critical illness and mortality in COVID-19. MEDRXIV : THE PREPRINT SERVER FOR HEALTH SCIENCES 2020. [PMID: 32908988 DOI: 10.1101/2020.09.01.20183897] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
Pathologic immune hyperactivation is emerging as a key feature of critical illness in COVID-19, but the mechanisms involved remain poorly understood. We carried out proteomic profiling of plasma from cross-sectional and longitudinal cohorts of hospitalized patients with COVID-19 and analyzed clinical data from our health system database of over 3,300 patients. Using a machine learning algorithm, we identified a prominent signature of neutrophil activation, including resistin, lipocalin-2, HGF, IL-8, and G-CSF, as the strongest predictors of critical illness. Neutrophil activation was present on the first day of hospitalization in patients who would only later require transfer to the intensive care unit, thus preceding the onset of critical illness and predicting increased mortality. In the health system database, early elevations in developing and mature neutrophil counts also predicted higher mortality rates. Altogether, we define an essential role for neutrophil activation in the pathogenesis of severe COVID-19 and identify molecular neutrophil markers that distinguish patients at risk of future clinical decompensation.
Collapse
|
50
|
Berlin SJ, John M. Light weight convolutional models with spiking neural network based human action recognition. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2020. [DOI: 10.3233/jifs-191914] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Affiliation(s)
- S. Jeba Berlin
- Department of Electronics Engineering, Madras Institute of Technology, Anna University, Chennai, India
| | - Mala John
- Department of Electronics Engineering, Madras Institute of Technology, Anna University, Chennai, India
| |
Collapse
|