1
Ororbia AG. Contrastive signal-dependent plasticity: Self-supervised learning in spiking neural circuits. Sci Adv 2024; 10:eadn6076. PMID: 39441920; DOI: 10.1126/sciadv.adn6076.
Abstract
Brain-inspired machine intelligence research seeks to develop computational models that emulate the information processing and adaptability that distinguish biological systems of neurons. This has led to the development of spiking neural networks, a class of models that promises to address both the biological implausibility and the energy inefficiency inherent in modern-day deep neural networks. In this work, we address the challenge of designing neurobiologically motivated schemes for adjusting the synapses of spiking networks and propose contrastive signal-dependent plasticity, a process that generalizes ideas behind self-supervised learning to facilitate local adaptation in architectures of event-based neuronal layers that operate in parallel. Our experimental simulations demonstrate a consistent advantage over other biologically plausible approaches when training recurrent spiking networks, crucially sidestepping the need for extra structure such as feedback synapses.
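The abstract does not give the CSDP update itself, but the general idea of a layer-local, contrastive Hebbian-style rule can be sketched as follows. The two-phase (positive/negative) structure, learning rate, and binary spike vectors here are illustrative assumptions, not the published rule.

```python
import numpy as np

def local_contrastive_update(W, pre_pos, post_pos, pre_neg, post_neg, lr=0.01):
    """Strengthen pre/post correlations observed in the positive (data)
    phase and weaken those from the negative (contrastive) phase.
    Purely local: only this layer's activities are needed."""
    W += lr * (np.outer(post_pos, pre_pos) - np.outer(post_neg, pre_neg))
    return W

rng = np.random.default_rng(0)
W = np.zeros((4, 8))                 # 8 pre-neurons, 4 post-neurons
pre_pos = rng.integers(0, 2, 8)      # binary spike vectors standing in
post_pos = rng.integers(0, 2, 4)     # for event-based layer activity
pre_neg = rng.integers(0, 2, 8)
post_neg = rng.integers(0, 2, 4)
W = local_contrastive_update(W, pre_pos, post_pos, pre_neg, post_neg)
```

Because such an update depends only on activities local to the layer, no feedback synapses or global error pathway are required, which is the property the abstract emphasizes.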
Affiliation(s)
- Alexander G Ororbia
- Department of Computer Science, Rochester Institute of Technology, 1 Lomb Memorial Dr, Rochester, NY 14623, USA
2
Tang F, Zhang J, Zhang C, Liu L. Brain-Inspired Architecture for Spiking Neural Networks. Biomimetics (Basel) 2024; 9:646. PMID: 39451852; PMCID: PMC11506793; DOI: 10.3390/biomimetics9100646.
Abstract
Spiking neural networks (SNNs), which use action potentials (spikes) to represent and transmit information, are more biologically plausible than traditional artificial neural networks. However, most existing SNNs require a separate preprocessing step to convert the real-valued input into spikes that are then fed to the network for processing. This dissected spike-coding process may cause information loss, leading to degraded performance. In contrast, the biological nervous system performs no such separate preprocessing step. Moreover, the nervous system may not rely on a single pathway to respond to and process external stimuli, but allows multiple circuits to perceive the same stimulus. Inspired by these advantageous aspects of the biological neural system, we propose a self-adaptive encoding spiking neural network with a parallel architecture. The proposed network integrates the input-encoding process into the network architecture via convolutional operations, so that the network can accept real-valued input and automatically transform it into spikes for further processing. Meanwhile, the proposed network contains two identical parallel branches, inspired by the biological nervous system's serial and parallel information processing. Experimental results on multiple image classification tasks reveal that the proposed network obtains competitive performance, suggesting the effectiveness of the proposed architecture.
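A minimal sketch of the kind of integrated encoding the abstract describes: a convolution computes real-valued activations, and an integrate-and-fire stage turns them into spike trains inside the network, so no separate coding pass is needed. The kernel, threshold, and soft-reset scheme are assumptions for illustration.

```python
import numpy as np

def encode_to_spikes(image, kernel, timesteps=4, theta=0.5):
    """Convolve the real-valued image, then emit spikes over several
    timesteps by accumulating the activation into a membrane potential
    and thresholding it (soft reset on firing)."""
    h, w = image.shape
    kh, kw = kernel.shape
    act = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(act.shape[0]):
        for j in range(act.shape[1]):
            act[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    spikes, membrane = [], np.zeros_like(act)
    for _ in range(timesteps):            # integrate-and-fire over time
        membrane += act
        fired = membrane >= theta
        spikes.append(fired.astype(np.uint8))
        membrane[fired] -= theta          # soft reset keeps residual charge
    return np.stack(spikes)               # shape: (timesteps, H', W')

img = np.linspace(0, 1, 36).reshape(6, 6)
k = np.full((3, 3), 1 / 9)                # mean filter as a stand-in kernel
s = encode_to_spikes(img, k)
```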
Affiliation(s)
- Fengzhen Tang
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Nanta Street 114, Shenyang 110016, China
- Junhuai Zhang
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Nanta Street 114, Shenyang 110016, China
- School of Computer Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
- Chi Zhang
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Nanta Street 114, Shenyang 110016, China
- School of Computer Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
- Lianqing Liu
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Nanta Street 114, Shenyang 110016, China
3
Deng L, Tang H, Roy K. Editorial: Understanding and bridging the gap between neuromorphic computing and machine learning, volume II. Front Comput Neurosci 2024; 18:1455530. PMID: 39421849; PMCID: PMC11484035; DOI: 10.3389/fncom.2024.1455530.
Affiliation(s)
- Lei Deng
- Department of Precision Instrument, Center for Brain Inspired Computing Research, Tsinghua University, Beijing, China
- Huajin Tang
- College of Computer Science and Technology, The State Key Lab of Brain-Machine Intelligence, Zhejiang University, Hangzhou, China
- MOE Frontier Science Center for Brain Science and Brain-Machine Integration, Zhejiang University, Hangzhou, China
- Kaushik Roy
- Department of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, United States
4
Białas M, Mirończuk MM, Mańdziuk J. Leveraging spiking neural networks for topic modeling. Neural Netw 2024; 178:106494. PMID: 38972130; DOI: 10.1016/j.neunet.2024.106494.
Abstract
This article investigates the application of spiking neural networks (SNNs) to the problem of topic modeling (TM): the identification of significant groups of words that represent human-understandable topics in large sets of documents. Our research is based on the hypothesis that an SNN implementing the Hebbian learning paradigm can become specialized in detecting statistically significant word patterns when given adequately tailored sequential input. To support this hypothesis, we propose a novel spiking topic model (STM) that transforms text into a sequence of spikes and uses that sequence to train single-layer SNNs. In STM, each SNN neuron represents one topic, and each of the neuron's weights corresponds to one word. STM synaptic connections are modified according to spike-timing-dependent plasticity; after training, the neurons' strongest weights are interpreted as the words that represent topics. We compare the performance of STM with four other TM methods, namely Latent Dirichlet Allocation (LDA), the Biterm Topic Model (BTM), the Embedded Topic Model (ETM), and BERTopic, on three datasets: 20Newsgroups, BBC News, and AG News. The results demonstrate that STM can discover high-quality topics and successfully compete with these classical methods. This sheds new light on the possibility of adapting SNN models to unsupervised natural language processing.
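The read-out step the abstract describes (each neuron is one topic; its strongest weights are interpreted as that topic's words) can be sketched on a made-up weight matrix. The vocabulary and weights below are illustrative, not from the paper.

```python
import numpy as np

def topics_from_weights(W, vocab, top_k=3):
    """For each neuron (row of W), return the top_k words with the
    largest synaptic weights: the STM-style topic read-out."""
    return [[vocab[i] for i in np.argsort(row)[::-1][:top_k]] for row in W]

vocab = ["goal", "match", "player", "vote", "party", "election"]
W = np.array([[0.9, 0.8, 0.7, 0.1, 0.0, 0.2],   # sports-like neuron
              [0.1, 0.0, 0.2, 0.9, 0.8, 0.7]])  # politics-like neuron
topics = topics_from_weights(W, vocab)
# topics -> [['goal', 'match', 'player'], ['vote', 'party', 'election']]
```

Training would shape W via STDP on the spike-encoded text; only this final interpretation step is shown here.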
Affiliation(s)
- Marcin Białas
- National Information Processing Institute, al. Niepodległości 188b, 00-608, Warsaw, Poland.
- Jacek Mańdziuk
- Faculty of Mathematics and Information Science, Warsaw University of Technology, Warsaw, Poland.
5
Grimaldi A, Boutin V, Ieng SH, Benosman R, Perrinet LU. A robust event-driven approach to always-on object recognition. Neural Netw 2024; 178:106415. PMID: 38852508; DOI: 10.1016/j.neunet.2024.106415.
Abstract
We propose a neuromimetic architecture capable of always-on pattern recognition, i.e., at any time during processing. To achieve this, we extend an existing event-based algorithm (Lagorce et al., 2017), which introduced novel spatio-temporal features as a Hierarchy Of Time-Surfaces (HOTS). Built from asynchronous events captured by a neuromorphic camera, these time surfaces encode the local dynamics of a visual scene and enable an efficient event-based pattern recognition architecture. Inspired by neuroscience, we extend this method to improve its performance. First, we add homeostatic gain control on the activity of neurons to improve the learning of spatio-temporal patterns (Grimaldi et al., 2021). We also provide a new mathematical formalism that allows an analogy to be drawn between the HOTS algorithm and spiking neural networks (SNNs). Following this analogy, we transform the offline pattern categorization method into an online, event-driven layer: the classifier uses the spiking output of the network to define new time surfaces, and online classification is then performed with a neuromimetic implementation of multinomial logistic regression. These improvements not only consistently increase the performance of the network but also bring this event-driven pattern recognition algorithm fully online. The results have been validated on different datasets: Poker-DVS (Serrano-Gotarredona and Linares-Barranco, 2015), N-MNIST (Orchard, Jayawant et al., 2015), and DVS Gesture (Amir et al., 2017), demonstrating the efficiency of this bio-realistic SNN for ultra-fast object recognition through an event-by-event categorization process.
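A HOTS-style time surface, the central feature of the abstract, can be sketched as an exponential decay of the time elapsed since each pixel's most recent event. The (x, y, t) event format, patch size, and time constant are illustrative assumptions.

```python
import numpy as np

def time_surface(events, t_now, size=5, tau=50e-3):
    """For each pixel, exponentially decay the time since its most
    recent event: 1.0 means 'just fired', values near 0 mean 'long ago'
    (pixels that never fired decay to exactly 0)."""
    last = np.full((size, size), -np.inf)
    for x, y, t in events:
        if t <= t_now:                 # only past events contribute
            last[y, x] = max(last[y, x], t)
    return np.exp((last - t_now) / tau)

events = [(2, 2, 0.00), (1, 2, 0.04), (2, 2, 0.05)]
ts = time_surface(events, t_now=0.05)
```

In the full method, such surfaces are clustered into prototypes layer by layer, and the final layer's activations feed the online logistic-regression classifier.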
Affiliation(s)
- Antoine Grimaldi
- Aix-Marseille Université, Institut de Neurosciences de la Timone, CNRS, Marseille, France
- Victor Boutin
- Carney Institute for Brain Science, Brown University, Providence, RI, United States; Artificial and Natural Intelligence Toulouse Institute, Université de Toulouse, Toulouse, France
- Sio-Hoi Ieng
- Institut de la Vision, Sorbonne Université, CNRS, Paris, France
- Ryad Benosman
- Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, United States
- Laurent U Perrinet
- Aix-Marseille Université, Institut de Neurosciences de la Timone, CNRS, Marseille, France
6
Wu Y, Shi B, Zheng Z, Zheng H, Yu F, Liu X, Luo G, Deng L. Adaptive spatiotemporal neural networks through complementary hybridization. Nat Commun 2024; 15:7355. PMID: 39191782; PMCID: PMC11350166; DOI: 10.1038/s41467-024-51641-x.
Abstract
Processing spatiotemporal data sources with both high spatial dimension and rich temporal information is a ubiquitous need in machine intelligence. Recurrent neural networks in the machine learning domain and bio-inspired spiking neural networks in the neuromorphic computing domain are two promising candidate models for dealing with spatiotemporal data via extrinsic dynamics and intrinsic dynamics, respectively. Nevertheless, these networks have disparate modeling paradigms, which leads to different performance results, making it hard for them to cover diverse data sources and performance requirements in practice. Constructing a unified modeling framework that can effectively and adaptively process variable spatiotemporal data in different situations remains quite challenging. In this work, we propose hybrid spatiotemporal neural networks created by combining recurrent neural networks and spiking neural networks under a unified surrogate gradient learning framework and a Hessian-aware neuron selection method. By flexibly tuning the ratio between the two types of neurons, the hybrid model demonstrates better adaptive ability in balancing different performance metrics, including accuracy, robustness, and efficiency, on several typical benchmarks, and generally outperforms conventional single-paradigm recurrent neural networks and spiking neural networks. Furthermore, we demonstrate the great potential of the proposed network on a robotic task in varying environments. As a proof of concept, the proposed hybrid model provides a generic modeling route for processing spatiotemporal data sources in the open world.
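The unified surrogate gradient learning the abstract relies on can be sketched as follows: the non-differentiable spike function is kept in the forward pass, while its derivative is replaced by a smooth bump so both neuron types train under ordinary backpropagation. The fast-sigmoid surrogate and its sharpness below are common choices assumed here, not necessarily the paper's.

```python
import numpy as np

def spike_forward(v, theta=1.0):
    """Heaviside spike: the non-differentiable forward pass."""
    return (v >= theta).astype(float)

def spike_surrogate_grad(v, theta=1.0, beta=5.0):
    """Surrogate used in the backward pass: the derivative of a fast
    sigmoid, peaked at the threshold. beta sets the sharpness."""
    return beta / (2 * (1 + beta * np.abs(v - theta)) ** 2)

v = np.array([0.2, 0.99, 1.0, 1.7])   # membrane potentials
s = spike_forward(v)                  # spikes: 0/1
g = spike_surrogate_grad(v)           # pseudo-derivative, largest at v = theta
```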
Affiliation(s)
- Yujie Wu
- Center for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing, China
- Department of Computing, The Hong Kong Polytechnic University, Hong Kong, China
- Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Bizhao Shi
- School of Computer Science, Peking University, Beijing, China
- Center for Energy-Efficient Computing and Applications, Peking University, Beijing, China
- Zhong Zheng
- Center for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing, China
- Hanle Zheng
- Center for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing, China
- Fangwen Yu
- Center for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing, China
- Xue Liu
- Center for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing, China
- Guojie Luo
- School of Computer Science, Peking University, Beijing, China
- Center for Energy-Efficient Computing and Applications, Peking University, Beijing, China
- Lei Deng
- Center for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing, China
7
Vallejo-Mancero B, Madrenas J, Zapata M. Real-time execution of SNN models with synaptic plasticity for handwritten digit recognition on SIMD hardware. Front Neurosci 2024; 18:1425861. PMID: 39165339; PMCID: PMC11333227; DOI: 10.3389/fnins.2024.1425861.
Abstract
Recent advancements in neuromorphic computing have led to the development of hardware architectures inspired by Spiking Neural Networks (SNNs) that emulate the efficiency and parallel processing capabilities of the human brain. This work focuses on testing the HEENS architecture, specifically designed for highly parallel processing and biological realism in SNN emulation, implemented on a ZYNQ-family FPGA. The study applies this architecture to digit classification using the well-known MNIST database, with image resolutions adjusted to match HEENS' processing capacity. Results were compared with existing work, demonstrating that HEENS achieves performance comparable to other solutions. This study highlights the importance of balancing accuracy and efficiency in application execution. HEENS offers a flexible solution for SNN emulation, allowing the implementation of programmable neural and synaptic models; it encourages the exploration of novel algorithms and network architectures, providing an alternative for real-time processing with efficient energy consumption.
Affiliation(s)
- Jordi Madrenas
- Department of Electronic Engineering, Universitat Politècnica de Catalunya, Barcelona, Spain
- Mireya Zapata
- Centro de Investigación en Mecatrónica y Sistemas Interactivos—MIST, Universidad Indoamérica, Quito, Ecuador
8
Habara T, Sato T, Awano H. BayesianSpikeFusion: accelerating spiking neural network inference via Bayesian fusion of early prediction. Front Neurosci 2024; 18:1420119. PMID: 39161650; PMCID: PMC11330889; DOI: 10.3389/fnins.2024.1420119.
Abstract
Spiking neural networks (SNNs) have garnered significant attention due to their notable energy efficiency. However, conventional SNNs rely on spike firing frequency to encode information, necessitating a fixed sampling time and leaving room for further optimization. This study presents a novel approach to reduce sampling time and conserve energy by extracting early prediction results from the intermediate layer of the network and integrating them with the final layer's predictions in a Bayesian fashion. Experimental evaluations conducted on image classification tasks using MNIST, CIFAR-10, and CIFAR-100 datasets demonstrate the efficacy of our proposed method when applied to VGGNets and ResNets models. Results indicate a substantial energy reduction of 38.8% in VGGNets and 48.0% in ResNets, illustrating the potential for achieving significant efficiency gains in spiking neural networks. These findings contribute to the ongoing research in enhancing the performance of SNNs, facilitating their deployment in resource-constrained environments. Our code is available on GitHub: https://github.com/hanebarla/BayesianSpikeFusion.
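The Bayesian fusion step can be sketched generically: treat the intermediate-layer and final-layer class distributions as independent evidence, multiply them, and renormalize. The paper's precise fusion rule may differ, and the probabilities below are toy values.

```python
import numpy as np

def bayes_fuse(p_early, p_final):
    """Combine an early (intermediate-layer) prediction with the final
    layer's prediction by multiplying the class posteriors and
    renormalizing, as in naive Bayesian evidence combination."""
    fused = np.asarray(p_early) * np.asarray(p_final)
    return fused / fused.sum()

p_early = [0.5, 0.3, 0.2]   # early guess read out mid-network
p_final = [0.6, 0.3, 0.1]   # prediction at the output layer
p = bayes_fuse(p_early, p_final)
```

When both sources agree on a class, the fused posterior is sharper than either alone, which is what allows the sampling time to be cut without losing accuracy.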
Affiliation(s)
- Takehiro Habara
- Department of Communications and Computer Engineering, Graduate School of Informatics, Kyoto University, Kyoto, Japan
9
Granato G, Baldassarre G. Bridging flexible goal-directed cognition and consciousness: The Goal-Aligning Representation Internal Manipulation theory. Neural Netw 2024; 176:106292. PMID: 38657422; DOI: 10.1016/j.neunet.2024.106292.
Abstract
Goal-directed manipulation of internal representations is a key element of human flexible behaviour, while consciousness is commonly associated with higher-order cognition and human flexibility. Current perspectives have only partially linked these processes, thus preventing a clear understanding of how they jointly generate flexible cognition and behaviour. Moreover, these limitations prevent effective exploitation of this knowledge for technological purposes. We propose a new theoretical perspective that extends our 'three-component theory of flexible cognition' toward higher-order cognition and consciousness, based on the systematic integration of key concepts from cognitive neuroscience and AI/robotics. The theory proposes that the function of conscious processes is to support the alignment of representations with multi-level goals; this higher alignment leads to more flexible and effective behaviours. We analyse our previous model of goal-directed flexible cognition (validated with more than 20 human populations) as a first GARIM-inspired model. By bridging the main theories of consciousness and goal-directed behaviour, the theory has relevant implications for scientific and technological fields. In particular, it contributes to developing new experimental tasks and interpreting clinical evidence. Finally, it indicates directions for improving machine learning and robotics systems and for informing real-world applications (e.g., in digital-twin healthcare and roboethics).
Affiliation(s)
- Giovanni Granato
- Laboratory of Embodied Natural and Artificial Intelligence, Institute of Cognitive Sciences and Technologies, National Research Council of Italy, Rome, Italy.
- Gianluca Baldassarre
- Laboratory of Embodied Natural and Artificial Intelligence, Institute of Cognitive Sciences and Technologies, National Research Council of Italy, Rome, Italy.
10
Yuan Y, Kotiuga M, Park TJ, Patel RK, Ni Y, Saha A, Zhou H, Sadowski JT, Al-Mahboob A, Yu H, Du K, Zhu M, Deng S, Bisht RS, Lyu X, Wu CTM, Ye PD, Sengupta A, Cheong SW, Xu X, Rabe KM, Ramanathan S. Hydrogen-induced tunable remanent polarization in a perovskite nickelate. Nat Commun 2024; 15:4717. PMID: 38830914; PMCID: PMC11148064; DOI: 10.1038/s41467-024-49213-0.
Abstract
Materials with field-tunable polarization are of broad interest to condensed matter sciences and solid-state device technologies. Here, using hydrogen (H) donor doping, we modify the room-temperature metallic phase of a perovskite nickelate NdNiO3 into an insulating phase with both metastable dipolar polarization and space-charge polarization. We then demonstrate transient negative differential capacitance in thin-film capacitors. The space-charge polarization caused by long-range movement and trapping of protons dominates when the electric field exceeds the threshold value. First-principles calculations suggest the polarization originates from the polar structure created by H doping. We find that the polarization decays within ~1 second, which is an interesting temporal regime for neuromorphic computing hardware design, and we implement the transient characteristics in a neural network to demonstrate unsupervised learning. These discoveries open new avenues for designing ferroelectric materials and electrets using light-ion doping.
Affiliation(s)
- Yifan Yuan
- Department of Electrical & Computer Engineering, Rutgers, The State University of New Jersey, Piscataway, NJ, USA
- Michele Kotiuga
- Theory and Simulation of Materials (THEOS), National Centre for Computational Design and Discovery of Novel Materials (MARVEL), École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Tae Joon Park
- School of Materials Engineering, Purdue University, West Lafayette, IN, USA
- Ranjan Kumar Patel
- Department of Electrical & Computer Engineering, Rutgers, The State University of New Jersey, Piscataway, NJ, USA
- Yuanyuan Ni
- Department of Physics and Astronomy, University of Nebraska-Lincoln, Lincoln, NE, USA
- Arnob Saha
- School of Electrical Engineering and Computer Science, The Pennsylvania State University, University Park, State College, PA, USA
- Hua Zhou
- X-ray Science Division, Advanced Photon Source, Argonne National Laboratory, Lemont, IL, USA
- Jerzy T Sadowski
- Center for Functional Nanomaterials, Brookhaven National Laboratory, Upton, NY, USA
- Abdullah Al-Mahboob
- Center for Functional Nanomaterials, Brookhaven National Laboratory, Upton, NY, USA
- Haoming Yu
- School of Materials Engineering, Purdue University, West Lafayette, IN, USA
- Kai Du
- Department of Physics and Astronomy, Rutgers, The State University of New Jersey, Piscataway, NJ, USA
- Minning Zhu
- Department of Electrical & Computer Engineering, Rutgers, The State University of New Jersey, Piscataway, NJ, USA
- Sunbin Deng
- School of Materials Engineering, Purdue University, West Lafayette, IN, USA
- Ravindra S Bisht
- Department of Electrical & Computer Engineering, Rutgers, The State University of New Jersey, Piscataway, NJ, USA
- Xiao Lyu
- School of Electrical and Computer Engineering and Birck Nanotechnology Center, Purdue University, West Lafayette, IN, USA
- Chung-Tse Michael Wu
- Department of Electrical & Computer Engineering, Rutgers, The State University of New Jersey, Piscataway, NJ, USA
- Peide D Ye
- School of Electrical and Computer Engineering and Birck Nanotechnology Center, Purdue University, West Lafayette, IN, USA
- Abhronil Sengupta
- School of Electrical Engineering and Computer Science, The Pennsylvania State University, University Park, State College, PA, USA
- Sang-Wook Cheong
- Department of Physics and Astronomy, Rutgers, The State University of New Jersey, Piscataway, NJ, USA
- Xiaoshan Xu
- Department of Physics and Astronomy, University of Nebraska-Lincoln, Lincoln, NE, USA
- Karin M Rabe
- Department of Physics and Astronomy, Rutgers, The State University of New Jersey, Piscataway, NJ, USA
- Shriram Ramanathan
- Department of Electrical & Computer Engineering, Rutgers, The State University of New Jersey, Piscataway, NJ, USA
11
Kim H, Woo SY, Kim H. Neuron Circuit Based on a Split-gate Transistor with Nonvolatile Memory for Homeostatic Functions of Biological Neurons. Biomimetics (Basel) 2024; 9:335. PMID: 38921215; PMCID: PMC11201417; DOI: 10.3390/biomimetics9060335.
Abstract
To mimic the homeostatic functionality of biological neurons, a split-gate field-effect transistor (S-G FET) with a charge trap layer is proposed within a neuron circuit. By adjusting the number of charges trapped in the Si3N4 layer, the threshold voltage (Vth) of the S-G FET changes. To prevent degradation of the gate dielectric by program/erase pulses, the gates for the read operation and for Vth control are separated through the fin structure. A circuit that modulates the width and amplitude of the pulse was constructed to generate program/erase pulses for the S-G FET as the output pulse of the neuron circuit. By adjusting the Vth of the neuron circuit, the firing rate can be lowered: increasing the Vth of a neuron circuit with a high firing rate reduces its rate. To verify the performance of a neural network based on the S-G FET, online unsupervised learning and classification in a two-layer SNN were simulated. The results show that the recognition rate improved by 8% when the threshold of frequently firing neuron circuits was increased.
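The homeostatic behaviour the circuit emulates (raising Vth of an over-active neuron to lower its firing rate) can be sketched in software with a leaky integrate-and-fire neuron whose threshold increments after each spike. All constants here are illustrative, not device parameters.

```python
def homeostatic_lif(inputs, v_th0=1.0, dv_th=0.05, leak=0.9):
    """Leaky integrate-and-fire neuron with spike-driven threshold
    adaptation: each spike raises the threshold a little (analogous to
    trapped charge raising Vth), damping runaway firing rates."""
    v, v_th, spikes = 0.0, v_th0, []
    for x in inputs:
        v = leak * v + x              # leaky integration of input
        if v >= v_th:
            spikes.append(1)
            v = 0.0                   # reset membrane on spike
            v_th += dv_th             # homeostatic threshold increase
        else:
            spikes.append(0)
    return spikes, v_th

spikes, v_th_end = homeostatic_lif([0.6] * 20)  # constant drive
```

Under constant drive, the inter-spike interval grows as the threshold climbs, which is exactly the rate-limiting effect the abstract reports.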
Affiliation(s)
- Hansol Kim
- School of Electronic and Electrical Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
- Sung Yun Woo
- School of Electronic and Electrical Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
- Hyungjin Kim
- Division of Materials Science and Engineering, Hanyang University, Seoul 04763, Republic of Korea
12
Daddinounou S, Vatajelu EI. Bi-sigmoid spike-timing dependent plasticity learning rule for magnetic tunnel junction-based SNN. Front Neurosci 2024; 18:1387339. PMID: 38817912; PMCID: PMC11137280; DOI: 10.3389/fnins.2024.1387339.
Abstract
In this study, we explore spintronic synapses composed of several magnetic tunnel junctions (MTJs), leveraging their attractive characteristics, such as endurance, nonvolatility, stochasticity, and energy efficiency, for the hardware implementation of unsupervised neuromorphic systems. Spiking neural networks (SNNs) running on dedicated hardware are suitable for edge computing and IoT devices, where continuous online learning and energy efficiency are important. In this work, we focus on synaptic plasticity, conducting comprehensive electrical simulations to optimize the MTJ-based synapse design and to identify the neuronal pulses responsible for spike-timing-dependent plasticity (STDP) behavior. Most proposals in the literature are based on hardware-independent algorithms that require the network to store the spiking history in order to update the weights accordingly. In this work, we developed a new learning rule, the Bi-Sigmoid STDP (B2STDP), which originates from the physical properties of MTJs. This rule enables immediate synaptic plasticity based on neuronal activity, leveraging in-memory computing. Finally, integrating this learning approach within an SNN framework leads to 91.71% accuracy in unsupervised image classification, demonstrating the potential of MTJ-based synapses for effective online learning in hardware-implemented SNNs.
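The exact B2STDP curve follows from MTJ physics and is not given in the abstract; the sketch below only illustrates the general shape of a bi-sigmoid STDP window, built from two opposing sigmoids of the spike-timing difference. The amplitude and slope constants are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def b2stdp(dt, a=1.0, k=10.0):
    """Illustrative bi-sigmoid STDP window: weight change as two
    opposing sigmoids of dt = t_post - t_pre, giving smooth
    potentiation for pre-before-post and depression otherwise."""
    return a * (sigmoid(k * dt) - sigmoid(-k * dt))

dts = np.array([-0.5, -0.1, 0.0, 0.1, 0.5])  # timing differences (s)
dw = b2stdp(dts)                             # antisymmetric about dt = 0
```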
13
Ma D, Jin X, Sun S, Li Y, Wu X, Hu Y, Yang F, Tang H, Zhu X, Lin P, Pan G. Darwin3: a large-scale neuromorphic chip with a novel ISA and on-chip learning. Natl Sci Rev 2024; 11:nwae102. PMID: 38689713; PMCID: PMC11060491; DOI: 10.1093/nsr/nwae102.
Abstract
Spiking neural networks (SNNs) are gaining increasing attention for their biological plausibility and potential for improved computational efficiency. To match the high spatial-temporal dynamics of SNNs, neuromorphic chips that execute SNNs directly in hardware-based neuron and synapse circuits are highly desirable. This paper presents a large-scale neuromorphic chip named Darwin3 with a novel instruction set architecture comprising 10 primary instructions and a few extended instructions. It supports flexible neuron model programming and local learning rule designs. The Darwin3 chip architecture is designed as a mesh of computing nodes with an innovative routing algorithm. We used a compression mechanism to represent synaptic connections, significantly reducing memory usage. The Darwin3 chip supports up to 2.35 million neurons, making it the largest of its kind on the neuron scale. The experimental results showed that code density improved by up to 28.3x in Darwin3, and that neuron core fan-in and fan-out improved by up to 4096x and 3072x, respectively, through connection compression relative to the physical memory depth. Our Darwin3 chip also provided memory savings between 6.8x and 200.8x when mapping convolutional spiking neural networks onto the chip, demonstrating state-of-the-art performance in accuracy and latency compared to other neuromorphic chips.
Affiliation(s)
- De Ma
- College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China
- Research Center for Intelligent Computing Hardware, Zhejiang Lab, Hangzhou 311121, China
- The State Key Lab of Brain-Machine Intelligence, Zhejiang University, Hangzhou 310027, China
- MOE Frontier Science Center for Brain Science and Brain-machine Integration, Zhejiang University, Hangzhou 310027, China
- Xiaofei Jin
- College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China
- Research Center for Intelligent Computing Hardware, Zhejiang Lab, Hangzhou 311121, China
- Shichun Sun
- Research Center for Intelligent Computing Hardware, Zhejiang Lab, Hangzhou 311121, China
- Yitao Li
- College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China
- The State Key Lab of Brain-Machine Intelligence, Zhejiang University, Hangzhou 310027, China
- Xundong Wu
- Research Center for Intelligent Computing Hardware, Zhejiang Lab, Hangzhou 311121, China
- Youneng Hu
- College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China
- Fangchao Yang
- Research Center for Intelligent Computing Hardware, Zhejiang Lab, Hangzhou 311121, China
- Huajin Tang
- College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China
- Research Center for Intelligent Computing Hardware, Zhejiang Lab, Hangzhou 311121, China
- The State Key Lab of Brain-Machine Intelligence, Zhejiang University, Hangzhou 310027, China
- MOE Frontier Science Center for Brain Science and Brain-machine Integration, Zhejiang University, Hangzhou 310027, China
- Xiaolei Zhu
- College of Micro-Nano Electronics, Zhejiang University, Hangzhou 311200, China
- Research Center for Intelligent Computing Hardware, Zhejiang Lab, Hangzhou 311121, China
- Peng Lin
- College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China
- The State Key Lab of Brain-Machine Intelligence, Zhejiang University, Hangzhou 310027, China
- MOE Frontier Science Center for Brain Science and Brain-machine Integration, Zhejiang University, Hangzhou 310027, China
- Gang Pan
- College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China
- Research Center for Intelligent Computing Hardware, Zhejiang Lab, Hangzhou 311121, China
- The State Key Lab of Brain-Machine Intelligence, Zhejiang University, Hangzhou 310027, China
- MOE Frontier Science Center for Brain Science and Brain-machine Integration, Zhejiang University, Hangzhou 310027, China
| |
|
14
|
Florini D, Gandolfi D, Mapelli J, Benatti L, Pavan P, Puglisi FM. A Hybrid CMOS-Memristor Spiking Neural Network Supporting Multiple Learning Rules. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2024; 35:5117-5129. [PMID: 36099218 DOI: 10.1109/tnnls.2022.3202501] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Artificial intelligence (AI) is changing the way computing is performed to cope with real-world, ill-defined tasks for which traditional algorithms fail. AI requires significant memory access, thus running into the von Neumann bottleneck when implemented in standard computing platforms. In this respect, low-latency energy-efficient in-memory computing can be achieved by exploiting emerging memristive devices, given their ability to emulate synaptic plasticity, which provides a path to design large-scale brain-inspired spiking neural networks (SNNs). Several plasticity rules have been described in the brain and their coexistence in the same network largely expands the computational capabilities of a given circuit. In this work, starting from the electrical characterization and modeling of the memristor device, we propose a neuro-synaptic architecture that co-integrates in a unique platform with a single type of synaptic device to implement two distinct learning rules, namely, the spike-timing-dependent plasticity (STDP) and the Bienenstock-Cooper-Munro (BCM). This architecture, by exploiting the aforementioned learning rules, successfully addressed two different tasks of unsupervised learning.
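Both rules the platform co-integrates are standard in the plasticity literature. A minimal sketch follows, with illustrative parameter values rather than the paper's device-derived ones: pair-based STDP updates a weight from the relative spike timing, while BCM updates it from pre- and postsynaptic rates via a sliding threshold.

```python
import math

# Hedged sketch of the two plasticity rules the memristive platform
# co-integrates. Parameter values (a_plus, tau, eta, ...) are illustrative.

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: dt = t_post - t_pre (ms).
    Pre-before-post (dt > 0) potentiates; post-before-pre depresses."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

def bcm_dw(pre_rate, post_rate, theta, eta=1e-3):
    """BCM rate rule: dw = eta * pre * post * (post - theta).
    The sliding threshold theta separates depression from potentiation."""
    return eta * pre_rate * post_rate * (post_rate - theta)

assert stdp_dw(10.0) > 0 and stdp_dw(-10.0) < 0
assert bcm_dw(5.0, 2.0, theta=4.0) < 0  # below threshold: depression
assert bcm_dw(5.0, 6.0, theta=4.0) > 0  # above threshold: potentiation
```

Having both rules share one synaptic device means the same conductance update circuit can be driven by either timing- or rate-derived signals.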
|
15
|
Pan W, Zhao F, Han B, Dong Y, Zeng Y. Emergence of brain-inspired small-world spiking neural network through neuroevolution. iScience 2024; 27:108845. [PMID: 38327781 PMCID: PMC10847652 DOI: 10.1016/j.isci.2024.108845] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2023] [Revised: 08/23/2023] [Accepted: 01/03/2024] [Indexed: 02/09/2024] Open
Abstract
Studies suggest that the brain's high efficiency and low energy consumption may be closely related to its small-world topology and critical dynamics. However, existing efforts on the performance-oriented structural evolution of spiking neural networks (SNNs) are time-consuming and ignore the core structural properties of the brain. Here, we introduce a multi-objective Evolutionary Liquid State Machine (ELSM), which blends the small-world coefficient and criticality to evolve models and guide the emergence of brain-inspired, efficient structures. Experiments reveal ELSM's consistent and comparable performance, achieving 97.23% on NMNIST and outperforming LSM models on MNIST and Fashion-MNIST with 98.12% and 88.81% accuracies, respectively. Further analysis shows its versatility and spontaneous evolution of topologies such as hub nodes, short paths, long-tailed degree distributions, and numerous communities. This study evolves recurrent spiking neural networks into brain-inspired energy-efficient structures, showcasing versatility in multiple tasks and potential for adaptive general artificial intelligence.
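The small-world ingredients that ELSM blends into its objective can be computed directly. A stdlib-only sketch (function names are illustrative) of the two metrics, measured on a ring lattice, the classic starting point of Watts-Strogatz-style rewiring: high clustering with long paths; randomly rewiring a few edges then shortens paths sharply while clustering stays high, which is the small-world signature.

```python
from collections import deque

# Clustering coefficient C and characteristic path length L on a ring
# lattice where each node links to its k nearest neighbours on each side.

def ring_lattice(n, k=2):
    """Adjacency sets: node i connects to i±1..i±k (mod n)."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):
            adj[i].add((i + d) % n)
            adj[i].add((i - d) % n)
    return adj

def clustering(adj):
    """Mean fraction of a node's neighbour pairs that are themselves linked."""
    cs = []
    for v, nbrs in adj.items():
        nbrs = list(nbrs)
        pairs = [(a, b) for i, a in enumerate(nbrs) for b in nbrs[i + 1:]]
        linked = sum(1 for a, b in pairs if b in adj[a])
        cs.append(linked / len(pairs) if pairs else 0.0)
    return sum(cs) / len(cs)

def path_length(adj):
    """Mean shortest-path length over all node pairs (BFS from each node)."""
    total, count = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(dist.values())
        count += len(dist) - 1
    return total / count

adj = ring_lattice(20, k=2)
assert abs(clustering(adj) - 0.5) < 1e-9  # locally clustered
print(round(path_length(adj), 3))          # ~2.9 hops on average
```

A small-world coefficient then compares these two values against their expectations on an equivalent random graph.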
Affiliation(s)
- Wenxuan Pan
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 101408, China

- Feifei Zhao
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China

- Bing Han
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 101408, China

- Yiting Dong
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing 101408, China

- Yi Zeng
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 101408, China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing 101408, China
- Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
|
16
|
Kim Y, Kahana A, Yin R, Li Y, Stinis P, Karniadakis GE, Panda P. Rethinking skip connections in Spiking Neural Networks with Time-To-First-Spike coding. Front Neurosci 2024; 18:1346805. [PMID: 38419664 PMCID: PMC10899405 DOI: 10.3389/fnins.2024.1346805] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2023] [Accepted: 01/30/2024] [Indexed: 03/02/2024] Open
Abstract
Time-To-First-Spike (TTFS) coding in Spiking Neural Networks (SNNs) offers significant advantages in terms of energy efficiency, closely mimicking the behavior of biological neurons. In this work, we delve into the role of skip connections, a widely used concept in Artificial Neural Networks (ANNs), within the domain of SNNs with TTFS coding. Our focus is on two distinct types of skip connection architectures: (1) addition-based skip connections, and (2) concatenation-based skip connections. We find that addition-based skip connections introduce an additional delay in terms of spike timing. On the other hand, concatenation-based skip connections circumvent this delay but produce time gaps between after-convolution and skip connection paths, thereby restricting the effective mixing of information from these two paths. To mitigate these issues, we propose a novel approach involving a learnable delay for skip connections in the concatenation-based skip connection architecture. This approach successfully bridges the time gap between the convolutional and skip branches, facilitating improved information mixing. We conduct experiments on public datasets including MNIST and Fashion-MNIST, illustrating the advantage of the skip connection in TTFS coding architectures. Additionally, we demonstrate the applicability of TTFS coding on beyond image recognition tasks and extend it to scientific machine-learning tasks, broadening the potential uses of SNNs.
Affiliation(s)
- Youngeun Kim
- Department of Electrical Engineering, Yale University, New Haven, CT, United States

- Adar Kahana
- Division of Applied Mathematics, Brown University, Providence, RI, United States

- Ruokai Yin
- Department of Electrical Engineering, Yale University, New Haven, CT, United States

- Yuhang Li
- Department of Electrical Engineering, Yale University, New Haven, CT, United States

- Panos Stinis
- Division of Applied Mathematics, Brown University, Providence, RI, United States
- Advanced Computing, Mathematics and Data Division, Pacific Northwest National Laboratory, Richland, WA, United States

- George Em Karniadakis
- Division of Applied Mathematics, Brown University, Providence, RI, United States
- Advanced Computing, Mathematics and Data Division, Pacific Northwest National Laboratory, Richland, WA, United States

- Priyadarshini Panda
- Department of Electrical Engineering, Yale University, New Haven, CT, United States
|
17
|
Bahrami MK, Nazari S. Digital design of a spatial-pow-STDP learning block with high accuracy utilizing pow CORDIC for large-scale image classifier spatiotemporal SNN. Sci Rep 2024; 14:3388. [PMID: 38337032 PMCID: PMC10858263 DOI: 10.1038/s41598-024-54043-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2023] [Accepted: 02/07/2024] [Indexed: 02/12/2024] Open
Abstract
Highly accurate, energy-efficient computing in machines with significant cognitive capabilities demands enhancing the accuracy and efficiency of bio-inspired Spiking Neural Networks (SNNs). This paper addresses that objective by introducing a novel spatial power spike-timing-dependent plasticity (Spatial-Pow-STDP) learning rule as a high-accuracy digital block in a bio-inspired SNN model. Motivated by the demand for precise and accelerated computation that reduces high-cost resources in neural network applications, this paper presents a methodology based on COordinate Rotation DIgital Computer (CORDIC) definitions. The proposed CORDIC algorithms for the exponential (Exp CORDIC), natural logarithm (Ln CORDIC), and arbitrary power function (Pow CORDIC) are detailed and evaluated to ensure optimal acceleration and accuracy, showing average errors near 10⁻⁹, 10⁻⁶, and 10⁻⁵ with 4, 4, and 6 iterations, respectively. The engineered architectures for the Exp, Ln, and Pow CORDIC implementations are illustrated and assessed, showcasing the efficiency achieved through high frequency and leading to a Spatial-Pow-STDP learning block design based on Pow CORDIC that enables efficient and accurate hardware computation with a 6.93 × 10⁻³ average error at 9 iterations. The proposed learning mechanism integrates this structure into a large-scale spatiotemporal SNN consisting of three layers with reduced hyper-parameters, enabling unsupervised training in an event-based paradigm using excitatory and inhibitory synapses. As a result, applying the developed methodology and equations in the computational SNN model for image classification reveals superior accuracy and convergence speed compared to existing spiking networks, achieving up to 97.5%, 97.6%, 93.4%, and 93% accuracy when trained on the MNIST, EMNIST digits, EMNIST letters, and CIFAR10 datasets with 6, 2, 2, and 6 training epochs, respectively.
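The reason a Pow block can be built from the Exp and Ln blocks is the standard identity a^b = exp(b · ln a). The sketch below illustrates that decomposition, with `math.exp`/`math.log` standing in for the iterative CORDIC units (an illustrative stand-in, not the paper's fixed-point hardware):

```python
import math

# Pow from Exp and Ln: a**b = exp(b * ln(a)) for a > 0. In the paper this
# composition is realised with iterative CORDIC blocks; here the Python
# math library stands in for those units.

def pow_via_exp_ln(a, b):
    assert a > 0, "identity requires a positive base"
    return math.exp(b * math.log(a))

for a, b in [(2.0, 10.0), (3.5, 0.5), (9.0, -1.0)]:
    assert abs(pow_via_exp_ln(a, b) - a ** b) < 1e-9 * max(1.0, a ** b)
```

Errors from the Ln stage are amplified by the exponent before the Exp stage, which is why the composed Pow block's error (about 10⁻³ here) is larger than that of its constituent Exp and Ln blocks.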
Affiliation(s)
- Soheila Nazari
- Faculty of Electrical Engineering, Shahid Beheshti University, Tehran, 1983969411, Iran.
|
18
|
Gong B, Wang L, Wang S, Yu Z, Xiong L, Xiong R, Liu Q, Zhang Y. Optimizing skyrmionium movement and stability via stray magnetic fields in trilayer nanowire constructs. Phys Chem Chem Phys 2024; 26:4716-4723. [PMID: 38251958 DOI: 10.1039/d3cp05340g] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/23/2024]
Abstract
Skyrmioniums, known for their unique transport and regulatory properties, are emerging as potential cornerstones for future data storage systems. However, the stability of skyrmionium movement faces considerable challenges due to the skyrmion Hall effect, which is induced by deformation. In response, our research introduces an innovative solution: we utilized micro-magnetic simulations to create a sandwiched trilayer nanowire structure augmented with a stray magnetic field. This combination effectively guides the skyrmionium within the ferromagnetic (FM) layer. Our investigations reveal that the stray magnetic field not only reduces the size of the skyrmionium but also amplifies its stability. This dual effect mitigates deformation during skyrmionium movement and boosts thermal stability. We find these positive outcomes are most pronounced at a particular intensity of the stray magnetic field. Importantly, the required stray field can be generated using a heavy metal (HM1) layer of suitable thickness, rendering the practical application of this approach plausible in real-world experiments. Additionally, we analyze the operating mechanism based on the Landau-Lifshitz-Gilbert (LLG) equation and energy variation. We also develop a deep spiking neural network (DSNN) that achieves a recognition accuracy of 97% through supervised learning via the spike-timing-dependent plasticity (STDP) rule, treating the nanostructure as an artificial synapse device corresponding to its electrical properties. In conclusion, our study provides valuable insights for the design of innovative information storage devices utilizing skyrmionium technology. By tackling the issues presented by the skyrmion Hall effect, we outline a feasible route for the practical application of this advanced technology. Our research therefore serves as a robust platform for continued investigations in this field.
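For reference, the equation underlying the magnetization-dynamics analysis is the standard Gilbert form of the LLG equation (written here generically; the paper's micromagnetic solver may include additional spin-torque terms):

```latex
\frac{\partial \mathbf{m}}{\partial t}
  = -\gamma\, \mathbf{m} \times \mathbf{H}_{\mathrm{eff}}
  + \alpha\, \mathbf{m} \times \frac{\partial \mathbf{m}}{\partial t}
```

where m is the unit magnetization, γ the gyromagnetic ratio, α the Gilbert damping constant, and H_eff the effective field collecting exchange, anisotropy, demagnetizing, and (here) stray-field contributions.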
Affiliation(s)
- Bin Gong
- Hubei Key Laboratory of Optical Information and Pattern Recognition, School of Optical Information and Energy Engineering, Wuhan Institute of Technology, Wuhan 430205, P. R. China.
- Fujian Provincial Key Laboratory of Semiconductors and Applications, Collaborative Innovation Center for Optoelectronic Semiconductors and Efficient Devices, Department of Physics, Xiamen University, Xiamen 361005, P. R. China

- Luowen Wang
- Hubei Key Laboratory of Optical Information and Pattern Recognition, School of Optical Information and Energy Engineering, Wuhan Institute of Technology, Wuhan 430205, P. R. China.

- Sunan Wang
- Hubei Key Laboratory of Optical Information and Pattern Recognition, School of Optical Information and Energy Engineering, Wuhan Institute of Technology, Wuhan 430205, P. R. China.

- Ziyang Yu
- Hubei Key Laboratory of Optical Information and Pattern Recognition, School of Optical Information and Energy Engineering, Wuhan Institute of Technology, Wuhan 430205, P. R. China.

- Lun Xiong
- Hubei Key Laboratory of Optical Information and Pattern Recognition, School of Optical Information and Energy Engineering, Wuhan Institute of Technology, Wuhan 430205, P. R. China.

- Rui Xiong
- Key Laboratory of Artificial Micro- and Nano-structures of Ministry of Education, School of Physics and Technology, Wuhan University, Wuhan 430072, China

- Qingbo Liu
- Hubei Key Laboratory of Optical Information and Pattern Recognition, School of Optical Information and Energy Engineering, Wuhan Institute of Technology, Wuhan 430205, P. R. China.

- Yue Zhang
- School of Optical and Electronic Information, Huazhong University of Science and Technology, Wuhan, 430074, China
|
19
|
Taeckens EA, Shah S. A spiking neural network with continuous local learning for robust online brain machine interface. J Neural Eng 2024; 20:066042. [PMID: 38173230 DOI: 10.1088/1741-2552/ad1787] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2023] [Accepted: 12/20/2023] [Indexed: 01/05/2024]
Abstract
Objective. Spiking neural networks (SNNs) are powerful tools that are well suited for brain machine interfaces (BMI) due to their similarity to biological neural systems and computational efficiency. They have shown comparable accuracy to state-of-the-art methods, but current training methods require large amounts of memory, and they cannot be trained on a continuous input stream without pausing periodically to perform backpropagation. An ideal BMI should be capable of training continuously without interruption, to minimize disruption to the user and adapt to changing neural environments. Approach. We propose a continuous SNN weight update algorithm that can be trained to perform regression learning with no need for storing past spiking events in memory. As a result, the amount of memory needed for training is constant regardless of the input duration. We evaluate the accuracy of the network on recordings of neural data taken from the premotor cortex of a primate performing reaching tasks. Additionally, we evaluate the SNN in a simulated closed loop environment and observe its ability to adapt to sudden changes in the input neural structure. Main results. The continuous learning SNN achieves the same peak correlation (ρ = 0.7) as existing SNN training methods when trained offline on real neural data while reducing total memory usage by 92%. Additionally, it matches state-of-the-art accuracy in a closed loop environment, demonstrates adaptability when subjected to multiple types of neural input disruptions, and is capable of being trained online without any prior offline training. Significance. This work presents a neural decoding algorithm that can be trained rapidly in a closed loop setting. The algorithm shortens the time needed to acclimate a new user to the system and can adapt to sudden changes in neural behavior with minimal disruption to the user.
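The constant-memory property can be illustrated with a generic online decoder (a sketch in the spirit of the abstract, not the authors' exact rule): incoming spikes are low-pass filtered into per-synapse traces, and weights are nudged toward the target at every time step, so no spike history is ever stored.

```python
# Constant-memory online regression sketch: exponential spike traces plus
# a delta-rule update. Memory is O(n_in) regardless of stream length.
# Parameters (tau, lr) are illustrative, not taken from the paper.

def make_decoder(n_in, tau=0.9, lr=0.002):
    trace = [0.0] * n_in   # filtered spike activity, fixed size
    w = [0.0] * n_in       # decoder weights, fixed size

    def step(spikes, target):
        for i, s in enumerate(spikes):
            trace[i] = tau * trace[i] + s
        y = sum(wi * ti for wi, ti in zip(w, trace))
        err = target - y
        for i in range(n_in):
            w[i] += lr * err * trace[i]   # update uses only current state
        return y

    return step

step = make_decoder(n_in=2)
# Drive input 0 with a constant target; prediction error shrinks online.
errs = [abs(1.0 - step([1, 0], target=1.0)) for _ in range(200)]
assert errs[-1] < errs[0]
```

Because the update touches only the current traces and weights, the decoder can keep learning indefinitely in a closed loop, which is the adaptability property the paper tests under input disruptions.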
Affiliation(s)
- Elijah A Taeckens
- Department of Electrical and Computer Engineering, University of Maryland, College Park, United States of America

- Sahil Shah
- Department of Electrical and Computer Engineering, University of Maryland, College Park, United States of America
|
20
|
Liu Y, Liu T, Hu Y, Liao W, Xing Y, Sheik S, Qiao N. Chip-In-Loop SNN Proxy Learning: a new method for efficient training of spiking neural networks. Front Neurosci 2024; 17:1323121. [PMID: 38239830 PMCID: PMC10794440 DOI: 10.3389/fnins.2023.1323121] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2023] [Accepted: 11/23/2023] [Indexed: 01/22/2024] Open
Abstract
The primary approaches used to train spiking neural networks (SNNs) involve either training artificial neural networks (ANNs) first and then transforming them into SNNs, or directly training SNNs using surrogate gradient techniques. Nevertheless, both of these methods encounter a shared challenge: they rely on frame-based methodologies, where asynchronous events are gathered into synchronous frames for computation. This strays from the authentic asynchronous, event-driven nature of SNNs, resulting in notable performance degradation when deploying the trained models on SNN simulators or hardware chips for real-time asynchronous computation. To eliminate this performance degradation, we propose a hardware-based SNN proxy learning method called Chip-In-Loop SNN Proxy Learning (CIL-SPL). This approach effectively eliminates the performance degradation caused by the mismatch between synchronous and asynchronous computations. To demonstrate the effectiveness of our method, we trained models using public datasets such as N-MNIST and tested them on the SNN simulator or hardware chip, comparing our results to those of classical training methods.
Affiliation(s)
- Yalun Hu
- SynSense Co. Ltd., Chengdu, China

- Wei Liao
- SynSense Co. Ltd., Chengdu, China

- Sadique Sheik
- SynSense Co. Ltd., Chengdu, China
- SynSense AG., Zurich, Switzerland

- Ning Qiao
- SynSense Co. Ltd., Chengdu, China
- SynSense AG., Zurich, Switzerland
|
21
|
Yu Q, Gao J, Wei J, Li J, Tan KC, Huang T. Improving Multispike Learning With Plastic Synaptic Delays. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2023; 34:10254-10265. [PMID: 35442893 DOI: 10.1109/tnnls.2022.3165527] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Emulating the spike-based processing in the brain, spiking neural networks (SNNs) have been developed as a promising candidate for the new generation of artificial neural networks that aim to produce efficient cognition as the brain does. Due to the complex dynamics and nonlinearity of SNNs, designing efficient learning algorithms has remained a major difficulty, which attracts great research attention. Most existing algorithms focus on the adjustment of synaptic weights. However, other components, such as synaptic delays, are found to be adaptive and important in modulating neural behavior. How plasticity on different components could cooperate to improve the learning of SNNs remains an interesting question. Advancing our previous multispike learning, we propose a new joint weight-delay plasticity rule, named TDP-DL, in this article. Plastic delays are integrated into the learning framework, and as a result, the performance of multispike learning is significantly improved. Simulation results highlight the effectiveness and efficiency of our TDP-DL rule compared to baseline ones. Moreover, we reveal the underlying principle of how synaptic weights and delays cooperate with each other through a synthetic task of interval selectivity and show that plastic delays can enhance the selectivity and flexibility of neurons by shifting information across time. Due to this capability, useful information distributed away in the time domain can be effectively integrated for a better accuracy performance, as highlighted in our generalization tasks of image, speech, and event-based object recognition. Our work is thus valuable and significant for improving the performance of spike-based neuromorphic computing.
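Why a plastic delay "shifts information across time" can be seen with a toy postsynaptic potential (a generic sketch, not the TDP-DL rule itself): a synapse contributes an alpha-shaped potential peaking at t_pre + delay + tau, so tuning the delay moves that peak onto the desired readout instant.

```python
import math

# Alpha-kernel PSP: peaks one time constant tau after the delayed arrival.
# All constants here are illustrative.

def psp(t, t_pre, delay, tau=5.0):
    s = t - t_pre - delay
    return (s / tau) * math.exp(1.0 - s / tau) if s > 0 else 0.0

t_pre, t_read = 10.0, 30.0
tau = 5.0
best_delay = t_read - t_pre - tau   # peak of the alpha kernel sits at +tau
assert abs(psp(t_read, t_pre, best_delay) - 1.0) < 1e-12
# A mistuned delay makes the same spike arrive too late to contribute:
assert psp(t_read, t_pre, best_delay + 8.0) == 0.0
```

A gradient rule on the delay therefore changes *when* a synapse is effective, complementing the weight, which only changes *how much* it is effective.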
|
22
|
Luo X, Qu H, Wang Y, Yi Z, Zhang J, Zhang M. Supervised Learning in Multilayer Spiking Neural Networks With Spike Temporal Error Backpropagation. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2023; 34:10141-10153. [PMID: 35436200 DOI: 10.1109/tnnls.2022.3164930] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
The brain-inspired spiking neural networks (SNNs) hold the advantages of lower power consumption and powerful computing capability. However, the lack of effective learning algorithms has obstructed the theoretical advance and applications of SNNs. The majority of the existing learning algorithms for SNNs are based on synaptic weight adjustment. However, neuroscience findings confirm that synaptic delays can also be modulated to play an important role in the learning process. Here, we propose a gradient descent-based learning algorithm for synaptic delays to enhance the sequential learning performance of a single spiking neuron. Moreover, we extend the proposed method to multilayer SNNs with spike temporal-based error backpropagation. In the proposed multilayer learning algorithm, information is encoded in the relative timing of individual neuronal spikes, and learning is performed based on the exact derivatives of the postsynaptic spike times with respect to presynaptic spike times. Experimental results on both synthetic and realistic datasets show significant improvements in learning efficiency and accuracy over the existing spike temporal-based learning algorithms. We also evaluate the proposed learning method in an SNN-based multimodal computational model for audiovisual pattern recognition, and it achieves better performance compared with its counterparts.
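The "exact derivatives of the postsynaptic spike times" follow from implicit differentiation of the threshold-crossing condition V(t^post) = θ, sketched here in generic SpikeProp-style form (the paper's precise multilayer expressions may differ):

```latex
V\!\left(t^{\mathrm{post}}\right) = \theta
\quad\Longrightarrow\quad
\frac{\partial t^{\mathrm{post}}}{\partial t^{\mathrm{pre}}_j}
  = -\left.\frac{\partial V / \partial t^{\mathrm{pre}}_j}
            {\partial V / \partial t}\right|_{t = t^{\mathrm{post}}}
```

i.e. a presynaptic spike that raises the membrane potential near threshold (numerator positive, with the potential rising, denominator positive) advances the postsynaptic spike time, and these local derivatives chain across layers for backpropagation.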
|
23
|
Wu X, Song Y, Zhou Y, Jiang Y, Bai Y, Li X, Yang X. STCA-SNN: self-attention-based temporal-channel joint attention for spiking neural networks. Front Neurosci 2023; 17:1261543. [PMID: 38027490 PMCID: PMC10667472 DOI: 10.3389/fnins.2023.1261543] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2023] [Accepted: 10/23/2023] [Indexed: 12/01/2023] Open
Abstract
Spiking Neural Networks (SNNs) have shown great promise in processing spatio-temporal information compared to Artificial Neural Networks (ANNs). However, there remains a performance gap between SNNs and ANNs, which impedes the practical application of SNNs. With intrinsic event-triggered property and temporal dynamics, SNNs have the potential to effectively extract spatio-temporal features from event streams. To leverage the temporal potential of SNNs, we propose a self-attention-based temporal-channel joint attention SNN (STCA-SNN) with end-to-end training, which infers attention weights along both temporal and channel dimensions concurrently. It models global temporal and channel information correlations with self-attention, enabling the network to learn 'what' and 'when' to attend simultaneously. Our experimental results show that STCA-SNNs achieve better performance on N-MNIST (99.67%), CIFAR10-DVS (81.6%), and N-Caltech 101 (80.88%) compared with the state-of-the-art SNNs. Meanwhile, our ablation study demonstrates that STCA-SNNs improve the accuracy of event stream classification tasks.
Affiliation(s)
- Yong Song
- School of Optics and Photonics, Beijing Institute of Technology, Beijing, China

- Ya Zhou
- School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
|
24
|
Wang M, Yuan Y, Jiang Y. Realization of Artificial Neurons and Synapses Based on STDP Designed by an MTJ Device. MICROMACHINES 2023; 14:1820. [PMID: 37893257 PMCID: PMC10609371 DOI: 10.3390/mi14101820] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/07/2023] [Revised: 09/22/2023] [Accepted: 09/22/2023] [Indexed: 10/29/2023]
Abstract
As the third-generation neural network, the spiking neural network (SNN) has become one of the most promising neuromorphic computing paradigms to mimic brain neural networks over the past decade. The SNN shows many advantages in performing classification and recognition tasks in the artificial intelligence field. In the SNN, the communication between the pre-synapse neuron (PRE) and the post-synapse neuron (POST) is conducted by the synapse. The corresponding synaptic weights are dependent on the spiking patterns of both the PRE and the POST, which are updated by spike-timing-dependent plasticity (STDP) rules. The emergence and growing maturity of spintronic devices present a new approach for constructing the SNN. In this paper, a novel SNN is proposed, in which both the synapse and the neuron are mimicked with the spin transfer torque magnetic tunnel junction (STT-MTJ) device. The synaptic weight is represented by the conductance of the MTJ device. The mapping of the probabilistic spiking nature of the neuron to the stochastic switching behavior of the MTJ with thermal noise is presented based on the stochastic Landau-Lifshitz-Gilbert (LLG) equation. In this way, a simplified SNN is mimicked with the MTJ device. The function of the mimicked SNN is verified by a handwritten digit recognition task based on the MNIST database.
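The neuron mapping can be summarized with a toy model (a hedged sketch: the sigmoid form and parameters below are illustrative, not fitted to a real device): with thermal noise, an MTJ switches only probabilistically for a given drive current, so the device can stand in for a probabilistically spiking neuron.

```python
import math, random

# Illustrative switching-probability curve: the chance of an MTJ flip grows
# sigmoidally with drive current around the critical current i_c.

def switch_probability(i_drive, i_c=1.0, beta=8.0):
    """Probability that the MTJ flips for a pulse of amplitude i_drive."""
    return 1.0 / (1.0 + math.exp(-beta * (i_drive - i_c)))

def mtj_neuron_fires(i_drive, rng):
    return rng.random() < switch_probability(i_drive)

rng = random.Random(0)
fires_weak = sum(mtj_neuron_fires(0.5, rng) for _ in range(1000))
fires_strong = sum(mtj_neuron_fires(1.5, rng) for _ in range(1000))
assert fires_weak < fires_strong  # stronger drive -> more likely to spike
```

In the paper this probabilistic behavior is derived from the stochastic LLG dynamics rather than imposed as a sigmoid; the sketch only conveys the input-to-firing-probability mapping.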
Affiliation(s)
- Yanfeng Jiang
- Department of Electrical Engineering, School of Internet of Things (IoTs), Jiangnan University, Wuxi 214122, China; (M.W.); (Y.Y.)
|
25
|
Weerasinghe MMA, Wang G, Whalley J, Crook-Rumsey M. Mental stress recognition on the fly using neuroplasticity spiking neural networks. Sci Rep 2023; 13:14962. [PMID: 37696860 PMCID: PMC10495416 DOI: 10.1038/s41598-023-34517-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2022] [Accepted: 05/03/2023] [Indexed: 09/13/2023] Open
Abstract
Mental stress is found to be strongly connected with human cognition and wellbeing. As the complexities of human life increase, the effects of mental stress have impacted human health and cognitive performance across the globe. This highlights the need for effective non-invasive stress detection methods. In this work, we introduce a novel, artificial spiking neural network model called Online Neuroplasticity Spiking Neural Network (O-NSNN) that utilizes a repertoire of learning concepts inspired by the brain to classify mental stress using Electroencephalogram (EEG) data. These models are personalized and tested on EEG data recorded during sessions in which participants listen to different types of audio comments designed to induce acute stress. Our O-NSNN models learn on the fly producing an average accuracy of 90.76% (σ = 2.09) when classifying EEG signals of brain states associated with these audio comments. The brain-inspired nature of the individual models makes them robust and efficient and has the potential to be integrated into wearable technology. Furthermore, this article presents an exploratory analysis of trained O-NSNNs to discover links between perceived and acute mental stress. The O-NSNN algorithm proved to be better for personalized stress recognition in terms of accuracy, efficiency, and model interpretability.
Affiliation(s)
- Mahima Milinda Alwis Weerasinghe
- School of Engineering, Computer and Mathematical Sciences, Auckland University of Technology, Auckland, New Zealand.
- Brain-Inspired AI and Neuroinformatics Lab, Department of Data Science, Sri Lanka Technological Campus, Padukka, Sri Lanka.

- Grace Wang
- School of Psychology and Wellbeing, University of Southern Queensland, Toowoomba, Australia
- Centre for Health Research, University of Southern Queensland, Toowoomba, Australia

- Jacqueline Whalley
- Department of Computer Science and Software Engineering, Auckland University of Technology, Auckland, New Zealand

- Mark Crook-Rumsey
- Department of Basic and Clinical Neuroscience, King's College London, London, UK
- UK Dementia Research Institute, Centre for Care Research and Technology, Imperial College London, London, UK
|
26
|
Chunduri RK, Perera DG. Neuromorphic Sentiment Analysis Using Spiking Neural Networks. SENSORS (BASEL, SWITZERLAND) 2023; 23:7701. [PMID: 37765758 PMCID: PMC10536645 DOI: 10.3390/s23187701] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/27/2023] [Revised: 08/25/2023] [Accepted: 09/02/2023] [Indexed: 09/29/2023]
Abstract
Over the past decade, the artificial neural network domain has seen a considerable embrace of deep neural networks across many applications. However, deep neural networks are typically computationally complex and consume high power, hindering their applicability to resource-constrained applications such as self-driving vehicles, drones, and robotics. Spiking neural networks (SNNs), often employed to bridge the gap between machine learning and neuroscience, are considered a promising solution for resource-constrained applications. Since deploying SNNs on traditional von Neumann architectures requires significant processing time and high power, neuromorphic hardware is typically created to execute them. The objective of neuromorphic devices is to mimic the distinctive functionalities of the human brain in terms of energy efficiency, computational power, and robust learning. Furthermore, natural language processing (NLP), a machine learning technique, has been widely utilized to help machines comprehend human language; however, NLP techniques also cannot be deployed efficiently on traditional computing platforms. In this research work, we strive to enhance natural language processing abilities by harnessing and integrating SNN traits, and to deploy the integrated solution efficiently and effectively on neuromorphic hardware. To this end, we propose a novel and efficient sentiment analysis model built as a large-scale SNN on SpiNNaker neuromorphic hardware that responds to user inputs. SpiNNaker can simulate large spiking neural networks in real time at low power. We initially create an artificial neural network model and train it on the Internet Movie Database (IMDB) dataset. Next, the pre-trained artificial neural network model is converted into our proposed spiking neural network model, called the spiking sentiment analysis (SSA) model. Our SSA model on SpiNNaker, called SSA-SpiNNaker, is designed to respond to user inputs with a positive or negative response. The SSA-SpiNNaker model achieves 100% accuracy and consumes only 3970 joules of energy while processing around 10,000 words and predicting a positive/negative review. Our experimental results and analysis demonstrate that, by leveraging the parallel and distributed capabilities of SpiNNaker, the proposed SSA-SpiNNaker model achieves better performance than artificial neural network models. Our review of existing work revealed no similar models in the published literature, demonstrating the uniqueness of the proposed model. This work offers a synergy between SNNs and NLP within the neuromorphic computing domain, addressing challenges such as computational complexity and power consumption. Beyond enhancing sentiment analysis, it contributes to the advancement of brain-inspired computing and could be utilized in other resource-constrained, low-power applications such as robotics and autonomous and smart systems.
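The ANN-to-SNN conversion step described above rests on rate coding: a ReLU activation in the pre-trained network maps onto a proportional firing rate of an integrate-and-fire neuron, which is what lets the trained weights be reused. A minimal sketch of that principle only (illustrative constants; this is not the paper's SpiNNaker-based SSA model):

```python
def if_neuron_rate(input_current, n_steps=1000, threshold=1.0):
    """Simulate an integrate-and-fire neuron driven by a constant input and
    return its firing rate (spikes per timestep) over n_steps timesteps."""
    v, spikes = 0.0, 0
    for _ in range(n_steps):
        v += input_current
        if v >= threshold:
            v -= threshold  # reset-by-subtraction preserves rate information
            spikes += 1
    return spikes / n_steps

# A ReLU activation a in [0, 1) maps onto a proportional firing rate,
# which is the property that conversion-based SNNs rely on.
for a in (0.0, 0.25, 0.5):
    assert abs(if_neuron_rate(a) - a) < 0.01
```

Reset by subtraction (rather than reset to zero) is the usual choice here because it keeps the residual membrane potential, so the long-run rate tracks the input exactly.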
Affiliation(s)
- Darshika G. Perera
- Department of Electrical and Computer Engineering, University of Colorado Colorado Springs, 1420 Austin Bluffs Parkway, Colorado Springs, CO 80918, USA
27
Syed GS, Zhou Y, Warner J, Bhaskaran H. Atomically thin optomemristive feedback neurons. NATURE NANOTECHNOLOGY 2023; 18:1036-1043. [PMID: 37142710 DOI: 10.1038/s41565-023-01391-6]
Abstract
Cognitive functions such as learning in mammalian brains have been attributed to the presence of neuronal circuits with feed-forward and feedback topologies. Such networks have interactions within and between neurons that provide excitatory and inhibitory modulation effects. In neuromorphic computing, neurons that combine and broadcast both excitatory and inhibitory signals using one nanoscale device remain an elusive goal. Here we introduce a type-II, two-dimensional heterojunction-based optomemristive neuron, using a stack of MoS2, WS2 and graphene, that demonstrates both of these effects via optoelectronic charge-trapping mechanisms. We show that such neurons provide a nonlinear and rectified integration of information that can be optically broadcast. Such a neuron has applications in machine learning, particularly in winner-take-all networks. We then apply such networks in simulations to establish unsupervised competitive learning for data partitioning, as well as cooperative learning in solving combinatorial optimization problems.
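The winner-take-all usage mentioned in the abstract can be illustrated in software. A hedged sketch of unsupervised competitive learning for data partitioning (plain NumPy, synthetic data, illustrative constants; it does not model the optomemristive device itself):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic clusters to partition (stand-ins for input patterns)
data = np.vstack([rng.normal(0.2, 0.05, (100, 2)),
                  rng.normal(0.8, 0.05, (100, 2))])
rng.shuffle(data)

W = data[:2].copy()  # one weight vector per competing neuron
lr = 0.1
for x in data:
    winner = np.argmin(np.linalg.norm(W - x, axis=1))  # winner takes all
    W[winner] += lr * (x - W[winner])                  # only the winner learns

# After training, the two neurons sit near the two cluster centers
lo, hi = sorted(W.sum(axis=1) / 2)
assert abs(lo - 0.2) < 0.15 and abs(hi - 0.8) < 0.15
```

Because only the winning unit updates, the two weight vectors specialize on different clusters, which is the data-partitioning behavior the abstract refers to.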
Affiliation(s)
- Ghazi Sarwat Syed
- IBM Research - Europe, Rüschlikon, Switzerland
- Department of Materials, University of Oxford, Oxford, UK
- Yingqiu Zhou
- Department of Materials, University of Oxford, Oxford, UK
- Technical University of Denmark, Lyngby, Denmark
- Jamie Warner
- Walker Department of Mechanical Engineering, The University of Texas at Austin, Austin, TX, USA
- Texas Materials Institute, The University of Texas at Austin, Austin, TX, USA
28
Shen J, Zhao Y, Liu JK, Wang Y. HybridSNN: Combining Bio-Machine Strengths by Boosting Adaptive Spiking Neural Networks. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2023; 34:5841-5855. [PMID: 34890341 DOI: 10.1109/tnnls.2021.3131356]
Abstract
Spiking neural networks (SNNs), inspired by the neuronal networks of the brain, provide biologically relevant, low-power models for information processing. Existing studies either mimic the learning mechanisms of brain neural networks as closely as possible, for example, the temporally local learning rule of spike-timing-dependent plasticity (STDP), or apply the gradient descent rule to optimize a multilayer SNN with a fixed structure. However, the former's learning rule is local, and how the real brain might perform global-scale credit assignment is still unclear; as a result, such shallow SNNs are robust, but deep SNNs are difficult to train globally and do not work as well. For the latter, the nondifferentiability of discrete spike trains leads to inaccurate gradient computation and difficulty in building effective deep SNNs. Hence, a hybrid solution that combines shallow SNNs with an appropriate machine learning (ML) technique not requiring gradient computation is attractive, offering both energy-saving and high-performance advantages. In this article, we propose HybridSNN, a deep and strong SNN composed of multiple simple SNNs, in which data-driven greedy optimization is used to build powerful classifiers, avoiding the derivative problem in gradient descent. During training, the output features (spikes) of selected weak classifiers are fed back to the pool for subsequent weak-SNN training and selection. This guarantees that HybridSNN not only represents a linear combination of simple SNNs, as the regular AdaBoost algorithm generates, but also contains neuron connection information, thus closely resembling the neural networks of a brain. HybridSNN combines the low power consumption of weak units with overall data-driven optimizing strength. The network structure of HybridSNN is learned from training samples, which is more flexible and effective than existing fixed multilayer SNNs. Moreover, the topological tree of HybridSNN resembles the neural system of the brain, where pyramidal neurons receive thousands of synaptic input signals through their dendrites. Experimental results show that the proposed HybridSNN is highly competitive among state-of-the-art SNNs.
29
Xu M, Chen X, Sun A, Zhang X, Chen X. A Novel Event-Driven Spiking Convolutional Neural Network for Electromyography Pattern Recognition. IEEE Trans Biomed Eng 2023; 70:2604-2615. [PMID: 37030849 DOI: 10.1109/tbme.2023.3258606]
Abstract
Electromyography (EMG) pattern recognition is an important technology for prosthesis control, human-computer interaction, and related applications. However, its practical use is hampered by poor accuracy and robustness caused by electrode shift from repeatedly wearing the signal acquisition device. Moreover, user acceptance is low because traditional methods require large amounts of training data, imposing a heavy training burden. To explore the advantages of spiking neural networks (SNNs) in addressing these problems, this study proposes and implements a spiking convolutional neural network (SCNN) composed of a cyclic convolutional neural network (CNN) and fully connected modules. High-density surface electromyography (HD-sEMG) signals collected from 6 gestures of 10 subjects at 6 electrode positions serve as the dataset. Compared with a CNN of the same structure, a CNN-Long Short-Term Memory network (CNN-LSTM), a linear-kernel linear discriminant analysis classifier (LDA), and a spiking multilayer perceptron (spiking MLP), the accuracy of the SCNN is 50.69%, 33.92%, 32.94%, and 9.41% higher, respectively, in the small-sample training experiment, and 6.50%, 4.23%, 28.73%, and 2.57% higher, respectively, in the electrode-shift experiment. In addition, the power consumption of the SCNN is about 1/93 that of the CNN. The advantages of the proposed framework in alleviating user training burden, mitigating the adverse effect of electrode shifts, and reducing power consumption make it well suited to promoting the development of user-friendly real-time myoelectric control systems.
30
Wang C, Yan H, Huang W, Sheng W, Wang Y, Fan YS, Liu T, Zou T, Li R, Chen H. Neural encoding with unsupervised spiking convolutional neural network. Commun Biol 2023; 6:880. [PMID: 37640808 PMCID: PMC10462614 DOI: 10.1038/s42003-023-05257-4]
Abstract
Accurately predicting the brain responses to various stimuli poses a significant challenge in neuroscience. Despite recent breakthroughs in neural encoding using convolutional neural networks (CNNs) in fMRI studies, there remain critical gaps between the computational rules of traditional artificial neurons and real biological neurons. To address this issue, a spiking CNN (SCNN)-based framework is presented in this study to achieve neural encoding in a more biologically plausible manner. The framework utilizes unsupervised SCNN to extract visual features of image stimuli and employs a receptive field-based regression algorithm to predict fMRI responses from the SCNN features. Experimental results on handwritten characters, handwritten digits and natural images demonstrate that the proposed approach can achieve remarkably good encoding performance and can be utilized for "brain reading" tasks such as image reconstruction and identification. This work suggests that SNN can serve as a promising tool for neural encoding.
Affiliation(s)
- Chong Wang
- The Center of Psychosomatic Medicine, Sichuan Provincial Center for Mental Health, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu, 611731, China
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- MOE Key Lab for Neuroinformation; High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Hongmei Yan
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- MOE Key Lab for Neuroinformation; High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Wei Huang
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- MOE Key Lab for Neuroinformation; High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Wei Sheng
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- MOE Key Lab for Neuroinformation; High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Yuting Wang
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- MOE Key Lab for Neuroinformation; High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Yun-Shuang Fan
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- MOE Key Lab for Neuroinformation; High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Tao Liu
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Ting Zou
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Rong Li
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- MOE Key Lab for Neuroinformation; High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Huafu Chen
- The Center of Psychosomatic Medicine, Sichuan Provincial Center for Mental Health, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu, 611731, China
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- MOE Key Lab for Neuroinformation; High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, Chengdu, 610054, China
31
Zhang T, Cheng X, Jia S, Li CT, Poo MM, Xu B. A brain-inspired algorithm that mitigates catastrophic forgetting of artificial and spiking neural networks with low computational cost. SCIENCE ADVANCES 2023; 9:eadi2947. [PMID: 37624895 PMCID: PMC10456855 DOI: 10.1126/sciadv.adi2947]
Abstract
Neuromodulators in the brain act globally on many forms of synaptic plasticity, a phenomenon known as metaplasticity, which is rarely considered by existing spiking (SNNs) and nonspiking artificial neural networks (ANNs). Here, we report an efficient brain-inspired computing algorithm for SNNs and ANNs, referred to here as neuromodulation-assisted credit assignment (NACA), which uses expectation signals to induce defined levels of neuromodulators at selected synapses, whereby long-term synaptic potentiation and depression are modified nonlinearly depending on the neuromodulator level. The NACA algorithm achieved high recognition accuracy with substantially reduced computational cost in learning spatial and temporal classification tasks. Notably, NACA also proved efficient in learning five class-continual learning tasks with varying degrees of complexity, exhibiting markedly mitigated catastrophic forgetting at low computational cost. Mapping synaptic weight changes showed that these benefits could be explained by the sparse and targeted synaptic modifications attributed to expectation-based global neuromodulation.
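The core idea, an expectation signal setting a neuromodulator level that nonlinearly scales potentiation and depression at selected synapses, can be sketched loosely. The sigmoidal gate and all constants below are illustrative assumptions, not the published NACA algorithm:

```python
import numpy as np

def modulated_update(dw_raw, m, steepness=10.0):
    """Scale a raw STDP update by a neuromodulator level m in [0, 1].

    A sigmoidal gate (illustrative choice) suppresses plasticity when the
    expectation-driven neuromodulator level is low and passes the update
    through almost unchanged when it is high.
    """
    gate = 1.0 / (1.0 + np.exp(-steepness * (m - 0.5)))
    return dw_raw * gate

dw = np.array([0.02, -0.01])     # raw LTP / LTD updates
low = modulated_update(dw, 0.0)  # low neuromodulator level: ~no change
high = modulated_update(dw, 1.0) # high neuromodulator level: ~full update
assert np.all(np.abs(low) < 1e-3)
assert np.allclose(high, dw, atol=1e-3)
```

Gating updates this way makes modification sparse and targeted: only synapses whose neuromodulator level is driven high actually change, which matches the abstract's explanation for the mitigated forgetting.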
Affiliation(s)
- Tielin Zhang
- Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Shanghai Center for Brain Science and Brain-inspired Technology, Lingang Laboratory, Shanghai 200031, China
- Xiang Cheng
- Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Shuncheng Jia
- Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Chengyu T Li
- Shanghai Center for Brain Science and Brain-inspired Technology, Lingang Laboratory, Shanghai 200031, China
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai 200031, China
- Mu-ming Poo
- Shanghai Center for Brain Science and Brain-inspired Technology, Lingang Laboratory, Shanghai 200031, China
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai 200031, China
- Bo Xu
- Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
32
Yang G, Lee W, Seo Y, Lee C, Seok W, Park J, Sim D, Park C. Unsupervised Spiking Neural Network with Dynamic Learning of Inhibitory Neurons. SENSORS (BASEL, SWITZERLAND) 2023; 23:7232. [PMID: 37631767 PMCID: PMC10459513 DOI: 10.3390/s23167232]
Abstract
A spiking neural network (SNN) is a type of artificial neural network that operates on discrete spikes to process timing information, similar to the manner in which the human brain processes real-world problems. In this paper, we propose a new SNN built on conventional, biologically plausible paradigms, such as the leaky integrate-and-fire model, spike-timing-dependent plasticity, and the adaptive spiking threshold, and extend them with new biologically motivated mechanisms: dynamic inhibition weight change, a synaptic wiring method, and Bayesian inference. The proposed network is designed for image recognition tasks, which are frequently used to evaluate the performance of conventional deep neural networks. To reflect a bio-realistic neural architecture, learning is unsupervised, and the inhibition weight changes dynamically; this, in turn, affects the synaptic wiring method based on Hebbian learning and the neuronal population. In the inference phase, Bayesian inference successfully classifies the input digits by counting the spikes from the responding neurons. The experimental results demonstrate that the proposed biological model yields a performance improvement over other biologically plausible SNN models.
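The conventional ingredients named above (leaky integrate-and-fire dynamics, an adaptive spiking threshold, and trace-based STDP) can be sketched together in a few lines. All constants are illustrative, and this is not the authors' full network with dynamic inhibition or Bayesian inference:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative constants (not the paper's values)
tau_v, tau_th, tau_pre = 20.0, 100.0, 20.0
th_base, th_step, lr = 1.0, 0.05, 0.01

n_in = 100
w = rng.random(n_in) * 0.1   # input synapses of one LIF neuron
v, theta = 0.0, 0.0          # membrane potential, adaptive threshold offset
pre_trace = np.zeros(n_in)   # presynaptic eligibility trace for STDP

for t in range(500):
    pre = rng.random(n_in) < 0.05       # Poisson-like input spikes
    pre_trace = pre_trace * np.exp(-1.0 / tau_pre) + pre
    v += -v / tau_v + w @ pre           # leaky integration
    theta *= np.exp(-1.0 / tau_th)      # threshold offset decays back
    if v >= th_base + theta:            # postsynaptic spike
        v = 0.0                         # reset
        theta += th_step                # threshold adapts upward
        # trace-based STDP: recently active inputs are potentiated,
        # silent ones mildly depressed
        w = np.clip(w + lr * (pre_trace - 0.2), 0.0, 1.0)

assert w.shape == (n_in,)
assert np.all((0.0 <= w) & (w <= 1.0))
```

The adaptive threshold raises the firing bar for frequently active neurons, a homeostatic mechanism that, in networks like the one described, keeps any single neuron from dominating the population response.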
Affiliation(s)
- Geunbo Yang
- Department of Computer Engineering, Kwangwoon University, Seoul 01897, Republic of Korea; (G.Y.); (Y.S.); (C.L.)
- Wongyu Lee
- Department of Intelligent Information and Embedded Software Engineering, Kwangwoon University, Seoul 01897, Republic of Korea; (W.L.); (W.S.)
- Youjung Seo
- Department of Computer Engineering, Kwangwoon University, Seoul 01897, Republic of Korea; (G.Y.); (Y.S.); (C.L.)
- Choongseop Lee
- Department of Computer Engineering, Kwangwoon University, Seoul 01897, Republic of Korea; (G.Y.); (Y.S.); (C.L.)
- Woojoon Seok
- Department of Intelligent Information and Embedded Software Engineering, Kwangwoon University, Seoul 01897, Republic of Korea; (W.L.); (W.S.)
- Jongkil Park
- Center for Neuromorphic Engineering, Korea Institute of Science and Technology (KIST), Seoul 02792, Republic of Korea
- Donggyu Sim
- Department of Computer Engineering, Kwangwoon University, Seoul 01897, Republic of Korea; (G.Y.); (Y.S.); (C.L.)
- Cheolsoo Park
- Department of Computer Engineering, Kwangwoon University, Seoul 01897, Republic of Korea; (G.Y.); (Y.S.); (C.L.)
33
Zhang Y, Xiang S, Jiang S, Han Y, Guo X, Zheng L, Shi Y, Hao Y. Hybrid photonic deep convolutional residual spiking neural networks for text classification. OPTICS EXPRESS 2023; 31:28489-28502. [PMID: 37710902 DOI: 10.1364/oe.497218]
Abstract
Spiking neural networks (SNNs) offer powerful computation capability due to their event-driven nature and temporal processing. However, they are still limited to shallow structures and simple tasks because they are difficult to train. In this work, we propose a deep convolutional residual spiking neural network (DCRSNN) for text classification tasks. In the DCRSNN, feature extraction is achieved via a convolutional SNN with residual connections, trained directly with the surrogate gradient technique. Classification is performed by a fully connected network. We also suggest a hybrid photonic DCRSNN, in which photonic SNNs are used for classification with a converted training method. The accuracy of hard and soft reset methods, as well as three different surrogate functions, was evaluated and compared across four datasets. Results indicated a maximum accuracy of 76.36% for MR, 91.03% for AG News, 88.06% for IMDB, and 93.99% for Yelp review polarity. Soft reset methods used in the deep convolutional SNN yielded slightly better accuracy than their hard reset counterparts. We also considered the effects of different pooling methods and observation time windows, and found that the convergence accuracy achieved by convolutional SNNs was comparable to that of convolutional neural networks under the same conditions. Moreover, the hybrid photonic DCRSNN also shows comparable testing accuracy. This work provides new insights into extending SNN applications to text classification and natural language processing, which is attractive for resource-constrained scenarios.
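The surrogate gradient technique named in the abstract replaces the zero-or-undefined derivative of the spike (Heaviside) function with a smooth stand-in during backpropagation, while the forward pass stays binary. A minimal sketch with a sigmoid-shaped surrogate (the width parameter `alpha` is an illustrative choice):

```python
import numpy as np

def spike_forward(v, threshold=1.0):
    """Forward pass: non-differentiable Heaviside spike generation."""
    return (v >= threshold).astype(float)

def spike_surrogate_grad(v, threshold=1.0, alpha=2.0):
    """Backward pass: sigmoid-shaped surrogate derivative used in place of
    the step function's zero/undefined gradient."""
    s = 1.0 / (1.0 + np.exp(-alpha * (v - threshold)))
    return alpha * s * (1.0 - s)

v = np.array([0.2, 0.9, 1.0, 1.5])
out = spike_forward(v)           # binary spikes
grad = spike_surrogate_grad(v)   # largest exactly at the threshold
assert out.tolist() == [0.0, 0.0, 1.0, 1.0]
assert grad.argmax() == 2
```

Because the surrogate is largest near the threshold, gradients flow mainly through neurons that were close to firing, which is what makes direct training of deep convolutional SNNs tractable.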
34
Zeng Y, Zhao D, Zhao F, Shen G, Dong Y, Lu E, Zhang Q, Sun Y, Liang Q, Zhao Y, Zhao Z, Fang H, Wang Y, Li Y, Liu X, Du C, Kong Q, Ruan Z, Bi W. BrainCog: A spiking neural network based, brain-inspired cognitive intelligence engine for brain-inspired AI and brain simulation. PATTERNS (NEW YORK, N.Y.) 2023; 4:100789. [PMID: 37602224 PMCID: PMC10435966 DOI: 10.1016/j.patter.2023.100789]
Abstract
Spiking neural networks (SNNs) serve as a promising computational framework for integrating insights from the brain into artificial intelligence (AI). Existing software infrastructures based on SNNs exclusively support brain simulation or brain-inspired AI, but not both simultaneously. To decode the nature of biological intelligence and create AI, we present the brain-inspired cognitive intelligence engine (BrainCog). This SNN-based platform provides essential infrastructure support for developing brain-inspired AI and brain simulation. BrainCog integrates different biological neurons, encoding strategies, learning rules, brain areas, and hardware-software co-design as essential components. Leveraging these user-friendly components, BrainCog incorporates various cognitive functions, including perception and learning, decision-making, knowledge representation and reasoning, motor control, social cognition, and brain structure and function simulations across multiple scales. BORN is an AI engine developed by BrainCog, showcasing seamless integration of BrainCog's components and cognitive functions to build advanced AI models and applications.
Affiliation(s)
- Yi Zeng
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 101408, China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing 101408, China
- Dongcheng Zhao
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Feifei Zhao
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Guobin Shen
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing 101408, China
- Yiting Dong
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing 101408, China
- Enmeng Lu
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Qian Zhang
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 101408, China
- Yinqian Sun
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing 101408, China
- Qian Liang
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Yuxuan Zhao
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Zhuoya Zhao
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing 101408, China
- Hongjian Fang
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing 101408, China
- Yuwei Wang
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Yang Li
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 101408, China
- Xin Liu
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Chengcheng Du
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing 101408, China
- Qingqun Kong
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing 101408, China
- Zizhe Ruan
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Weida Bi
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
35
Dong Y, Zhao D, Li Y, Zeng Y. An unsupervised STDP-based spiking neural network inspired by biologically plausible learning rules and connections. Neural Netw 2023; 165:799-808. [PMID: 37418862 DOI: 10.1016/j.neunet.2023.06.019]
Abstract
The backpropagation algorithm has driven the rapid development of deep learning, but it relies on large amounts of labeled data and differs greatly from how humans learn. The human brain can quickly learn various conceptual knowledge in a self-organized, unsupervised manner, accomplished by coordinating diverse learning rules and structures. Spike-timing-dependent plasticity (STDP) is a general learning rule in the brain, but spiking neural networks (SNNs) trained with STDP alone are inefficient and perform poorly. In this paper, taking inspiration from short-term synaptic plasticity, we design an adaptive synaptic filter and introduce an adaptive spiking threshold as neuron plasticity to enrich the representation ability of SNNs. We also introduce an adaptive lateral inhibitory connection to dynamically adjust the spike balance and help the network learn richer features. To speed up and stabilize the training of unsupervised SNNs, we design a sample temporal batch STDP (STB-STDP), which updates weights based on multiple samples and moments. By integrating these three adaptive mechanisms with STB-STDP, our model greatly accelerates the training of unsupervised SNNs and improves their performance on complex tasks. Our model achieves the current state-of-the-art performance of unsupervised STDP-based SNNs on the MNIST and FashionMNIST datasets. We further tested it on the more complex CIFAR10 dataset, and the results fully illustrate the superiority of our algorithm; ours is the first work to apply unsupervised STDP-based SNNs to CIFAR10. Moreover, in the small-sample learning scenario, it far exceeds a supervised ANN with the same structure.
Affiliation(s)
- Yiting Dong
- School of Future Technology, University of Chinese Academy of Sciences, Beijing, China; Brain-Inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences (CAS), Beijing, China
- Dongcheng Zhao
- Brain-Inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences (CAS), Beijing, China
- Yang Li
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; Brain-Inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences (CAS), Beijing, China
- Yi Zeng
- School of Future Technology, University of Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; Brain-Inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences (CAS), Beijing, China; Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences (CAS), Shanghai, China; State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences (CAS), Beijing, China
36
Gautam A, Kohno T. Adaptive STDP-based on-chip spike pattern detection. Front Neurosci 2023; 17:1203956. [PMID: 37521704 PMCID: PMC10374023 DOI: 10.3389/fnins.2023.1203956]
Abstract
A spiking neural network (SNN) is a bottom-up tool used to describe information processing in brain microcircuits and is becoming a crucial neuromorphic computational model. Spike-timing-dependent plasticity (STDP) is an unsupervised brain-like learning rule implemented in many SNNs and neuromorphic chips. However, a significant performance gap exists between ideal model simulation and neuromorphic implementation. STDP learning deteriorates in neuromorphic chips because the resolution of synaptic efficacy in such chips is generally restricted to 6 bits or less, whereas simulations employ the entire 64-bit floating-point precision available on digital computers. Previously, we introduced a bio-inspired learning rule named adaptive STDP and demonstrated via numerical simulation that adaptive STDP (using only 4-bit fixed-point synaptic efficacy) performs similarly to STDP learning (using 64-bit floating-point precision) in a noisy spike pattern detection model. Herein, we present experimental results demonstrating the performance of adaptive STDP learning. To the best of our knowledge, this is the first study to demonstrate that unsupervised noisy spatiotemporal spike pattern detection can perform well and maintain simulation-level performance on a mixed-signal CMOS neuromorphic chip with low-resolution synaptic efficacy. The chip was designed in the Taiwan Semiconductor Manufacturing Company (TSMC) 250 nm CMOS technology node and comprises a soma circuit and 256 synapse circuits along with their learning circuitry.
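The low-resolution constraint at the heart of this line of work can be illustrated with 4-bit synaptic efficacy. A hedged sketch (not the authors' adaptive STDP rule) showing stochastic rounding, one common way updates smaller than a coarse weight step can still take effect in expectation:

```python
import numpy as np

rng = np.random.default_rng(42)

N_LEVELS = 2 ** 4          # 4-bit synaptic efficacy: 16 discrete levels
STEP = 1.0 / (N_LEVELS - 1)

def quantize_stochastic(w):
    """Round weights onto the 4-bit grid with stochastic rounding, so that
    updates smaller than one level still survive in expectation."""
    scaled = np.clip(w, 0.0, 1.0) / STEP
    low = np.floor(scaled)
    p_up = scaled - low                    # probability of rounding up
    return (low + (rng.random(w.shape) < p_up)) * STEP

w = rng.random(1000)
dw = 0.001                                 # update far below one 4-bit step
w_q = quantize_stochastic(w + dw)
assert np.allclose(w_q / STEP, np.round(w_q / STEP))  # all on the 4-bit grid
assert abs(w_q.mean() - (w + dw).mean()) < 0.01       # preserved on average
```

Deterministic rounding would simply discard every sub-step update; the stochastic variant keeps the population mean of the weights on track, which is the kind of effect low-resolution learning rules must account for.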
Collapse
|
37
|
Xu H, Cao K, Chen H, Abudusalamu A, Wu W, Xue Y. Emotional brain network decoded by biological spiking neural network. Front Neurosci 2023; 17:1200701. [PMID: 37496741 PMCID: PMC10366476 DOI: 10.3389/fnins.2023.1200701] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2023] [Accepted: 05/30/2023] [Indexed: 07/28/2023] Open
Abstract
Introduction: Emotional disorders are essential manifestations of many neurological and psychiatric diseases. Researchers are now exploring bi-directional brain-computer interface techniques to help patients; however, the related functional brain areas and biological markers remain unclear, and the dynamic connection mechanism is unknown. Methods: To find effective regions related to recognizing and intervening in different emotions, our research focuses on identifying emotional EEG brain networks using a spiking neural network algorithm with binary coding. We collected EEG data while human participants watched emotional videos (fear, sadness, happiness, and neutrality) and analyzed the dynamic connections between the electrodes and the biological rhythms of different emotions. Results: The analysis showed that the local high-activation brain network of fear and sadness lies mainly in the parietal lobe area, whereas the local high-activation brain network of happiness lies in the prefrontal-temporal lobe-central area. Furthermore, the α frequency band could effectively represent negative emotions and could also be used as a biological marker of happiness. The decoding accuracy of the three emotions reached 86.36%, 95.18%, and 89.09%, respectively, fully reflecting the excellent emotional decoding performance of the spiking neural network with self-backpropagation. Discussion: The introduction of the self-backpropagation mechanism effectively improves the performance of the spiking neural network model. Different emotions exhibit distinct EEG networks and neuro-oscillatory biological markers, which may provide important hints for brain-computer interface techniques that aid recovery from related brain diseases.
Collapse
Affiliation(s)
- Hubo Xu
- National Institute on Drug Dependence and Beijing Key Laboratory of Drug Dependence, Peking University, Beijing, China
- Department of Pharmacology, School of Basic Medical Sciences, Peking University, Beijing, China
| | - Kexin Cao
- National Institute on Drug Dependence and Beijing Key Laboratory of Drug Dependence, Peking University, Beijing, China
- Department of Pharmacology, School of Basic Medical Sciences, Peking University, Beijing, China
| | - Hongguang Chen
- NHC Key Laboratory of Mental Health (Peking University), National Clinical Research Center for Mental Disorders (Peking University Sixth Hospital), Peking University Institute of Mental Health, Peking University Sixth Hospital, Beijing, China
| | - Awuti Abudusalamu
- National Institute on Drug Dependence and Beijing Key Laboratory of Drug Dependence, Peking University, Beijing, China
- Department of Pharmacology, School of Basic Medical Sciences, Peking University, Beijing, China
| | - Wei Wu
- State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
| | - Yanxue Xue
- National Institute on Drug Dependence and Beijing Key Laboratory of Drug Dependence, Peking University, Beijing, China
- Chinese Institute for Brain Research, Beijing, China
- Key Laboratory for Neuroscience, Ministry of Education/National Health Commission, Peking University, Beijing, China
| |
Collapse
|
38
|
López C. Artificial Intelligence and Advanced Materials. ADVANCED MATERIALS (DEERFIELD BEACH, FLA.) 2023; 35:e2208683. [PMID: 36560859 DOI: 10.1002/adma.202208683] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/21/2022] [Revised: 12/01/2022] [Indexed: 06/09/2023]
Abstract
Artificial intelligence (AI) is gaining strength, and materials science can both contribute to and profit from it. In a simultaneous progress race, new materials, systems, and processes can be devised and optimized thanks to machine learning (ML) techniques, and such progress can in turn be harnessed for innovative computing platforms. Future materials scientists will profit from understanding how ML can boost the conception of advanced materials. This review covers aspects of computation from the fundamentals to the directions taken and the repercussions produced, in order to account for the origins, procedures, and applications of AI. ML and its methods are reviewed to provide basic knowledge of their implementation and potential. The materials and systems used to implement AI with electric charges are facing serious competition from other information-carrying and processing agents. The impact these techniques have on the inception of new advanced materials is so deep that a new paradigm is developing, in which implicit knowledge is mined to conceive materials and systems for target functions instead of finding applications for materials once they are found. How far this trend can be carried is hard to fathom, as exemplified by the power to discover unheard-of materials or physical laws buried in data.
Collapse
Affiliation(s)
- Cefe López
- Instituto de Ciencia de Materiales de Madrid (ICMM), Consejo Superior de Investigaciones Científicas (CSIC), Calle Sor Juana Inés de la Cruz 3, Madrid, 28049, Spain
- Donostia International Physics Centre (DIPC), Paseo Manuel de Lardizábal 4, San Sebastián, 20018, Spain
| |
Collapse
|
39
|
Aceituno PV, Farinha MT, Loidl R, Grewe BF. Learning cortical hierarchies with temporal Hebbian updates. Front Comput Neurosci 2023; 17:1136010. [PMID: 37293353 PMCID: PMC10244748 DOI: 10.3389/fncom.2023.1136010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2023] [Accepted: 04/25/2023] [Indexed: 06/10/2023] Open
Abstract
A key driver of mammalian intelligence is the ability to represent incoming sensory information across multiple abstraction levels. For example, in the visual ventral stream, incoming signals are first represented as low-level edge filters and then transformed into high-level object representations. Similar hierarchical structures routinely emerge in artificial neural networks (ANNs) trained for object recognition tasks, suggesting that similar structures may underlie biological neural networks. However, the classical ANN training algorithm, backpropagation, is considered biologically implausible, and thus alternative biologically plausible training methods have been developed such as Equilibrium Propagation, Deep Feedback Control, Supervised Predictive Coding, and Dendritic Error Backpropagation. Several of those models propose that local errors are calculated for each neuron by comparing apical and somatic activities. Notwithstanding, from a neuroscience perspective, it is not clear how a neuron could compare compartmental signals. Here, we propose a solution to this problem in that we let the apical feedback signal change the postsynaptic firing rate and combine this with a differential Hebbian update, a rate-based version of classical spiking time-dependent plasticity (STDP). We prove that weight updates of this form minimize two alternative loss functions that we prove to be equivalent to the error-based losses used in machine learning: the inference latency and the amount of top-down feedback necessary. Moreover, we show that the use of differential Hebbian updates works similarly well in other feedback-based deep learning frameworks such as Predictive Coding or Equilibrium Propagation. Finally, our work removes a key requirement of biologically plausible models for deep learning and proposes a learning mechanism that would explain how temporal Hebbian learning rules can implement supervised hierarchical learning.
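A minimal rate-based sketch of the differential Hebbian idea described above: apical feedback nudges the postsynaptic rate, and the weight change is the presynaptic rate multiplied by the resulting change (discrete-time derivative) of the postsynaptic rate. The tanh unit, nudging strength BETA, and learning rate ETA are assumptions for illustration, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(1)

ETA = 0.05   # learning rate (assumed)
BETA = 0.2   # strength of apical nudging (assumed)

def forward(w, r_pre):
    return np.tanh(w @ r_pre)            # somatic (basal-driven) rate

def differential_hebbian_step(w, r_pre, feedback):
    r0 = forward(w, r_pre)               # rate before apical input
    r1 = r0 + BETA * feedback            # rate after apical nudging
    dr = r1 - r0                         # discrete-time rate derivative
    return w + ETA * np.outer(dr, r_pre) # Hebbian: post-derivative x pre

w = rng.normal(scale=0.1, size=(3, 5))
r_pre = rng.uniform(0, 1, size=5)
target = np.array([0.5, -0.2, 0.1])
feedback = target - forward(w, r_pre)    # top-down error signal
w_new = differential_hebbian_step(w, r_pre, feedback)

# the update moves the output toward the target, shrinking the feedback
err_before = np.abs(target - forward(w, r_pre)).sum()
err_after = np.abs(target - forward(w_new, r_pre)).sum()
print(err_before, err_after)
```

No neuron compares compartments here: the error enters only through the temporal change of the postsynaptic rate, which is the point the abstract makes.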
Collapse
Affiliation(s)
- Pau Vilimelis Aceituno
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- ETH AI Center, ETH Zurich, Zurich, Switzerland
| | | | - Reinhard Loidl
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
| | - Benjamin F. Grewe
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- ETH AI Center, ETH Zurich, Zurich, Switzerland
| |
Collapse
|
40
|
Yu F, Wu Y, Ma S, Xu M, Li H, Qu H, Song C, Wang T, Zhao R, Shi L. Brain-inspired multimodal hybrid neural network for robot place recognition. Sci Robot 2023; 8:eabm6996. [PMID: 37163608 DOI: 10.1126/scirobotics.abm6996] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
Place recognition is an essential spatial intelligence capability for robots to understand and navigate the world. However, recognizing places in natural environments remains a challenging task for robots because of resource limitations and changing environments. In contrast, humans and animals can robustly and efficiently recognize hundreds of thousands of places in different conditions. Here, we report a brain-inspired general place recognition system, dubbed NeuroGPR, that enables robots to recognize places by mimicking the neural mechanism of multimodal sensing, encoding, and computing through a continuum of space and time. Our system consists of a multimodal hybrid neural network (MHNN) that encodes and integrates multimodal cues from both conventional and neuromorphic sensors. Specifically, to encode different sensory cues, we built various neural networks of spatial view cells, place cells, head direction cells, and time cells. To integrate these cues, we designed a multiscale liquid state machine that can process and fuse multimodal information effectively and asynchronously using diverse neuronal dynamics and bioinspired inhibitory circuits. We deployed the MHNN on Tianjic, a hybrid neuromorphic chip, and integrated it into a quadruped robot. Our results show that NeuroGPR achieves better performance compared with conventional and existing biologically inspired approaches, exhibiting robustness to diverse environmental uncertainty, including perceptual aliasing, motion blur, light, or weather changes. Running NeuroGPR as an overall multi-neural network workload on Tianjic showcases its advantages with 10.5 times lower latency and 43.6% lower power consumption than the commonly used mobile robot processor Jetson Xavier NX.
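The multiscale liquid state machine idea in the abstract can be caricatured as a fixed random recurrent reservoir that fuses two input streams over time, leaving only a readout to be trained. The leaky tanh units, the sizes, and the "vision"/"inertial" labels below are illustrative assumptions, not the MHNN's actual design.

```python
import numpy as np

rng = np.random.default_rng(4)

N = 50                           # reservoir size (assumed)
W_res = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))  # fixed recurrence
W_vis = rng.normal(size=(N, 3))  # "vision" input weights (assumed)
W_imu = rng.normal(size=(N, 2))  # "inertial" input weights (assumed)
LEAK = 0.3                       # leak rate of the reservoir units

def run_reservoir(vis_seq, imu_seq):
    x = np.zeros(N)
    for v, u in zip(vis_seq, imu_seq):   # fuse both modalities over time
        drive = W_res @ x + W_vis @ v + W_imu @ u
        x = (1 - LEAK) * x + LEAK * np.tanh(drive)
    return x                             # final state = place signature

T = 15
state_a = run_reservoir(rng.normal(size=(T, 3)), rng.normal(size=(T, 2)))
state_b = run_reservoir(rng.normal(size=(T, 3)), rng.normal(size=(T, 2)))
# different multimodal input streams leave separable traces in the reservoir
print(np.linalg.norm(state_a - state_b))
```

A place classifier would then be a simple linear readout on these reservoir states; only that readout needs training, which is the computational appeal of the liquid-state approach.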
Collapse
Affiliation(s)
- Fangwen Yu
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing 100084, China
| | - Yujie Wu
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
| | - Songchen Ma
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing 100084, China
| | - Mingkun Xu
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing 100084, China
| | - Hongyi Li
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing 100084, China
| | - Huanyu Qu
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing 100084, China
| | - Chenhang Song
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing 100084, China
| | - Taoyi Wang
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing 100084, China
| | - Rong Zhao
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China
| | - Luping Shi
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China
- THU-CET HIK Joint Research Center for Brain-Inspired Computing, Tsinghua University, Beijing 100084, China
| |
Collapse
|
41
|
Liang L, Hu X, Deng L, Wu Y, Li G, Ding Y, Li P, Xie Y. Exploring Adversarial Attack in Spiking Neural Networks With Spike-Compatible Gradient. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2023; 34:2569-2583. [PMID: 34473634 DOI: 10.1109/tnnls.2021.3106961] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Spiking neural networks (SNNs) are broadly deployed in neuromorphic devices to emulate brain function, so SNN security becomes important yet remains under-investigated. To this end, we target adversarial attacks against SNNs and identify several challenges distinct from the artificial neural network (ANN) attack setting: 1) current adversarial attacks mainly rely on gradient information, which in SNNs takes a spatiotemporal form that is hard to obtain with conventional backpropagation algorithms; 2) the continuous gradient of the input is incompatible with the binary spiking input during gradient accumulation, hindering the generation of spike-based adversarial examples; and 3) the input gradient can sometimes be all-zeros (i.e., vanishing) due to the zero-dominant derivative of the firing function. Recently, backpropagation through time (BPTT)-inspired learning algorithms have been widely introduced into SNNs to improve performance, which makes it possible to attack the models accurately given spatiotemporal gradient maps. We propose two approaches to address the above challenges of gradient-input incompatibility and gradient vanishing. Specifically, we design a gradient-to-spike (G2S) converter to convert continuous gradients into ternary ones compatible with spike inputs. We then design a restricted spike flipper (RSF) to construct ternary gradients that can randomly flip the spike inputs with a controllable turnover rate when all-zero gradients are encountered. Putting these methods together, we build an adversarial attack methodology for SNNs. Moreover, we analyze the influence of the training loss function and the firing threshold of the penultimate layer on attack effectiveness. Extensive experiments are conducted to validate our solution. Besides the quantitative analysis of the influence factors, we also compare SNNs and ANNs against adversarial attacks under different attack methods. This work can help reveal what happens in SNN attacks and might stimulate more research on the security of SNN models and neuromorphic devices.
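The gradient-to-spike conversion problem can be sketched as follows: a continuous input gradient is mapped to ternary values {-1, 0, +1} so that the perturbation keeps the input binary. The probabilistic magnitude-based thresholding here is an assumption for illustration, not the paper's exact G2S construction.

```python
import numpy as np

rng = np.random.default_rng(2)

def ternarize(grad):
    """Map a continuous gradient to {-1, 0, +1}, firing with a
    probability proportional to the normalized gradient magnitude."""
    g = np.abs(grad) / (np.abs(grad).max() + 1e-12)  # normalize to [0, 1]
    fire = rng.random(grad.shape) < g                # sample by magnitude
    return np.sign(grad) * fire

grad = rng.normal(size=(4, 6))           # continuous input gradient
t = ternarize(grad)                      # ternary, spike-compatible
spikes = rng.integers(0, 2, size=(4, 6)) # binary spike input
adversarial = np.clip(spikes + t, 0, 1)  # perturbed input stays binary
print(np.unique(t), np.unique(adversarial))
```

Because the perturbation is ternary, adding it to a binary spike train and clipping yields another valid spike train, which is exactly the compatibility issue the abstract identifies.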
Collapse
|
42
|
Ma C, Yan R, Yu Z, Yu Q. Deep Spike Learning With Local Classifiers. IEEE TRANSACTIONS ON CYBERNETICS 2023; 53:3363-3375. [PMID: 35867374 DOI: 10.1109/tcyb.2022.3188015] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Backpropagation has been successfully generalized to optimize deep spiking neural networks (SNNs), where, nevertheless, gradients need to be propagated back through all layers, resulting in a massive consumption of computing resources and an obstacle to the parallelization of training. A biologically motivated scheme of local learning provides an alternative for efficiently training deep networks but often suffers from low accuracy on practical tasks. Thus, training deep SNNs with a local learning scheme that achieves both efficient and accurate performance remains an important challenge. In this study, we focus on a supervised local learning scheme where each layer is independently optimized with an auxiliary classifier. Accordingly, we first propose a spike-based efficient local learning rule that considers only the direct dependencies at the current time step. We then propose two variants that additionally incorporate temporal dependencies through a backward and a forward process, respectively. The effectiveness and performance of our proposed methods are extensively evaluated on six mainstream datasets. Experimental results show that our methods can successfully scale up to large networks and substantially outperform the spike-based local learning baselines on all studied benchmarks. Our results also reveal that gradients with temporal dependencies are essential for high performance on temporal tasks, while they have negligible effects on rate-based tasks. Our work is significant as it brings the performance of spike-based local learning to a new level while retaining its computational benefits.
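The supervised local learning scheme can be sketched in a rate-based (non-spiking) form: each layer carries its own auxiliary linear classifier and is updated from its own local loss only, so no gradient ever crosses a layer boundary. All sizes, the ReLU stand-in, and the learning rate are assumptions, not the paper's spike-based rule.

```python
import numpy as np

rng = np.random.default_rng(3)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def local_loss(h, y, C):
    p = softmax(h @ C)
    return -np.mean(np.sum(y * np.log(p + 1e-12), axis=1))

def local_step(x, y, W, C, lr=0.1):
    h = np.maximum(0.0, x @ W)      # layer activity (ReLU stand-in)
    err = softmax(h @ C) - y        # gradient of this layer's LOCAL loss
    dC = h.T @ err / len(x)         # auxiliary classifier update
    dh = (err @ C.T) * (h > 0)
    dW = x.T @ dh / len(x)          # nothing propagates to earlier layers
    return W - lr * dW, C - lr * dC, h

X = rng.normal(size=(32, 10))
Y = np.eye(3)[rng.integers(0, 3, size=32)]
W1 = rng.normal(scale=0.1, size=(10, 16)); C1 = rng.normal(scale=0.1, size=(16, 3))
W2 = rng.normal(scale=0.1, size=(16, 16)); C2 = rng.normal(scale=0.1, size=(16, 3))

loss_before = local_loss(np.maximum(0.0, X @ W1), Y, C1)
for _ in range(100):
    W1, C1, H1 = local_step(X, Y, W1, C1)
    W2, C2, _ = local_step(H1, Y, W2, C2)  # second layer trained independently
loss_after = local_loss(np.maximum(0.0, X @ W1), Y, C1)
print(loss_before, loss_after)
```

Because each `local_step` touches only one layer's weights, the two layers could be trained in parallel, which is the efficiency argument the abstract makes.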
Collapse
|
43
|
Siddique A, Vai MI, Pun SH. A low cost neuromorphic learning engine based on a high performance supervised SNN learning algorithm. Sci Rep 2023; 13:6280. [PMID: 37072443 PMCID: PMC10113267 DOI: 10.1038/s41598-023-32120-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2022] [Accepted: 03/22/2023] [Indexed: 05/03/2023] Open
Abstract
Spiking neural networks (SNNs) are more energy- and resource-efficient than artificial neural networks (ANNs). However, supervised SNN learning is a challenging task due to the non-differentiability of spikes and the computation of complex terms. Moreover, the design of SNN learning engines is not an easy task given limited hardware resources and tight energy constraints. In this article, a novel hardware-efficient SNN back-propagation scheme that offers fast convergence is proposed. The learning scheme does not require any complex operation, such as error normalization or weight-threshold balancing, and can achieve an accuracy of around 97.5% on the MNIST dataset using only 158,800 synapses. The multiplier-less inference engine trained using the proposed hard sigmoid SNN training (HaSiST) scheme can operate at a frequency of 135 MHz, consumes only 1.03 slice registers and 2.8 slice look-up tables per synapse, and can infer about 0.03[Formula: see text] features per second, equivalent to 9.44 giga synaptic operations per second (GSOPS). The article also presents a high-speed, cost-efficient SNN training engine that consumes only 2.63 slice registers and 37.84 slice look-up tables per synapse and can operate at a maximum computational frequency of around 50 MHz on a Virtex 6 FPGA.
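A generic sketch of the hard-sigmoid surrogate-gradient idea behind schemes like HaSiST (the exact HaSiST formulation is not reproduced here): the forward pass emits a binary spike, while the backward pass substitutes the piecewise-constant slope of a hard sigmoid around the threshold, which needs only compares and shifts on hardware. The threshold and window width are assumptions.

```python
import numpy as np

THRESH = 1.0   # firing threshold (assumed)
WIDTH = 1.0    # half-width of the surrogate's linear region (assumed)

def spike(v):
    """Non-differentiable forward pass: fire iff v >= threshold."""
    return (v >= THRESH).astype(float)

def surrogate_grad(v):
    """d(spike)/dv approximated by a hard-sigmoid slope near threshold:
    constant 0.5/WIDTH inside the window, zero outside."""
    return np.where(np.abs(v - THRESH) < WIDTH, 0.5 / WIDTH, 0.0)

v = np.linspace(-1, 3, 9)   # sample membrane potentials
print(spike(v))
print(surrogate_grad(v))
```

During training the surrogate replaces the true (zero-almost-everywhere) derivative of `spike`, letting error signals flow through neurons whose potential is near threshold.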
Collapse
Affiliation(s)
- Ali Siddique
- Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Taipa, 999078, Macau.
| | - Mang I Vai
- Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Taipa, 999078, Macau
| | - Sio Hang Pun
- Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Taipa, 999078, Macau
| |
Collapse
|
44
|
Sanchez-Garcia M, Chauhan T, Cottereau BR, Beyeler M. Efficient multi-scale representation of visual objects using a biologically plausible spike-latency code and winner-take-all inhibition. BIOLOGICAL CYBERNETICS 2023; 117:95-111. [PMID: 37004546 DOI: 10.1007/s00422-023-00956-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/30/2022] [Accepted: 02/10/2023] [Indexed: 05/05/2023]
Abstract
Deep neural networks have surpassed human performance in key visual challenges such as object recognition but require large amounts of energy, computation, and memory. In contrast, spiking neural networks (SNNs) have the potential to improve both the efficiency and the biological plausibility of object recognition systems. Here we present an SNN model that uses spike-latency coding and winner-take-all inhibition (WTA-I) to efficiently represent visual stimuli using multi-scale parallel processing. Mimicking neuronal response properties in early visual cortex, images were preprocessed with three different spatial frequency (SF) channels before being fed to a layer of spiking neurons whose synaptic weights were updated using spike-timing-dependent plasticity (STDP). We investigate how the quality of the represented objects changes under different SF bands and WTA-I schemes. We demonstrate that a network of 200 spiking neurons tuned to three SFs can efficiently represent objects with as few as 15 spikes per neuron. Studying how core object recognition may be implemented using biologically plausible learning rules in SNNs may not only further our understanding of the brain but also lead to novel and efficient artificial vision systems.
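Spike-latency coding and WTA inhibition compose naturally: stronger inputs spike earlier, and once the first few neurons fire, inhibition silences the rest. The sketch below illustrates that composition; the linear time map, T_MAX, and k are assumptions, not the paper's circuit.

```python
import numpy as np

rng = np.random.default_rng(5)

T_MAX = 20.0   # latest possible spike time in ms (assumed)

def latency_encode(intensity):
    """Map intensities in [0, 1] to spike times: stronger -> earlier."""
    i = np.clip(intensity, 1e-6, 1.0)
    return T_MAX * (1.0 - i)

def wta(spike_times, k=3):
    """Winner-take-all: keep the k earliest spikes; inhibition
    silences every other neuron (inf = never fires)."""
    winners = np.argsort(spike_times)[:k]
    out = np.full_like(spike_times, np.inf)
    out[winners] = spike_times[winners]
    return out

intensity = rng.uniform(0, 1, size=10)  # e.g., filter responses in one SF band
times = latency_encode(intensity)
survivors = wta(times, k=3)
print(np.isfinite(survivors).sum())     # exactly 3 neurons fire
```

This is where the sparsity in the abstract comes from: only the earliest (strongest) responses in each spatial-frequency channel survive, so few spikes per neuron suffice to represent the stimulus.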
Collapse
Affiliation(s)
| | - Tushar Chauhan
- The Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Boston, MA, USA
- CerCo CNRS UMR5549, Université de Toulouse III-Paul Sabatier, Toulouse, France
| | - Benoit R Cottereau
- CerCo CNRS UMR5549, Université de Toulouse III-Paul Sabatier, Toulouse, France
- IPAL, CNRS IRL 2955, Singapore, Singapore
| | - Michael Beyeler
- Department of Computer Science, University of California, Santa Barbara, CA, USA
- Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, USA
| |
Collapse
|
45
|
Xiao M, Meng Q, Zhang Z, Wang Y, Lin Z. SPIDE: A purely spike-based method for training feedback spiking neural networks. Neural Netw 2023; 161:9-24. [PMID: 36736003 DOI: 10.1016/j.neunet.2023.01.026] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2022] [Revised: 11/19/2022] [Accepted: 01/19/2023] [Indexed: 01/26/2023]
Abstract
Spiking neural networks (SNNs) with event-based computation are promising brain-inspired models for energy-efficient applications on neuromorphic hardware. However, most supervised SNN training methods, such as conversion from artificial neural networks or direct training with surrogate gradients, require complex computation rather than the spike-based operations of spiking neurons during training. In this paper, we study spike-based implicit differentiation on the equilibrium state (SPIDE), which extends the recently proposed training method implicit differentiation on the equilibrium state (IDE) to supervised learning with purely spike-based computation, demonstrating the potential for energy-efficient training of SNNs. Specifically, we introduce ternary spiking neuron couples and prove that implicit differentiation can be solved by spikes based on this design, so the whole training procedure, including both forward and backward passes, is carried out as event-driven spike computation, and weights are updated locally with two-stage average firing rates. We then propose modifying the reset membrane potential to reduce the approximation error of spikes. With these key components, we can train SNNs with flexible structures in a small number of time steps and with firing sparsity during training, and a theoretical estimation of energy costs demonstrates the potential for high efficiency. Meanwhile, experiments show that even with these constraints, our trained models can still achieve competitive results on MNIST, CIFAR-10, CIFAR-100, and CIFAR10-DVS.
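The equilibrium idea behind IDE/SPIDE-style training can be illustrated with a toy: a feedback (recurrent) network has a rate fixed point, and the time-averaged firing rate of an integrate-and-fire simulation approximates that fixed point. The clipped-linear rate model, weak random feedback, and soft-reset neurons below are illustrative assumptions, not SPIDE's ternary-spike construction.

```python
import numpy as np

rng = np.random.default_rng(6)

N = 8
W = rng.normal(scale=0.2 / np.sqrt(N), size=(N, N))  # weak feedback -> contraction
u = rng.uniform(0.2, 0.8, size=N)                    # constant external input

def rate_fixed_point(steps=200):
    """Iterate the rate map a <- clip(W a + u) to its fixed point."""
    a = np.zeros(N)
    for _ in range(steps):
        a = np.clip(W @ a + u, 0.0, 1.0)
    return a

def spiking_average(T=20000):
    """Integrate-and-fire with soft reset; return average firing rates."""
    v = np.zeros(N)
    spikes = np.zeros(N)
    counts = np.zeros(N)
    for _ in range(T):
        v += W @ spikes + u              # integrate feedback + input
        spikes = (v >= 1.0).astype(float)
        v -= spikes                      # soft reset: subtract threshold
        counts += spikes
    return counts / T

a_star = rate_fixed_point()
rate = spiking_average()
print(np.max(np.abs(rate - a_star)))     # small gap: rates approach equilibrium
```

Training methods in this family differentiate through the equilibrium itself rather than through the time-unrolled dynamics, which is what lets forward and backward passes both be expressed as spike computation.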
Collapse
Affiliation(s)
- Mingqing Xiao
- National Key Laboratory of General Artificial Intelligence, School of Intelligence Science and Technology, Peking University, China.
| | - Qingyan Meng
- The Chinese University of Hong Kong, Shenzhen, China; Shenzhen Research Institute of Big Data, Shenzhen 518115, China.
| | - Zongpeng Zhang
- Center for Data Science, Academy for Advanced Interdisciplinary Studies, Peking University, China.
| | - Yisen Wang
- National Key Laboratory of General Artificial Intelligence, School of Intelligence Science and Technology, Peking University, China; Institute for Artificial Intelligence, Peking University, China.
| | - Zhouchen Lin
- National Key Laboratory of General Artificial Intelligence, School of Intelligence Science and Technology, Peking University, China; Institute for Artificial Intelligence, Peking University, China; Peng Cheng Laboratory, China.
| |
Collapse
|
46
|
Amiri M, Jafari AH, Makkiabadi B, Nazari S, Van Hulle MM. A novel un-supervised burst time dependent plasticity learning approach for biologically pattern recognition networks. Inf Sci (N Y) 2023. [DOI: 10.1016/j.ins.2022.11.162] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
|
47
|
Gong B, Wei C, Yang H, Yu Z, Wang L, Xiong L, Xiong R, Lu Z, Zhang Y, Liu Q. Control and regulation of skyrmionic topological charge in a novel synthetic antiferromagnetic nanostructure. NANOSCALE 2023; 15:5257-5264. [PMID: 36794971 DOI: 10.1039/d2nr06498g] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/18/2023]
Abstract
Skyrmionium is a nested combination of two skyrmions with opposite topological charges (Q = +1 and -1), resulting in a magnetic configuration with a total topological charge of Q = 0. Skyrmionium has distinctive characteristics, including a slightly higher velocity, motion restricted to the middle of the track without the skyrmion Hall effect (SkHE), and the absence of an acceleration phase. However, because the net magnetization and the total topological charge Q are both zero, skyrmionium produces little stray field, and detecting it remains challenging. In the present work, we propose a novel nanostructure composed of triple nanowires with a narrow channel. It was found that the skyrmionium is converted into a DW pair or a skyrmion by the concave channel, and that the topological charge Q can be regulated by Ruderman-Kittel-Kasuya-Yosida (RKKY) antiferromagnetic (AFM) exchange coupling. Moreover, we analyzed the underlying mechanism based on the Landau-Lifshitz-Gilbert (LLG) equation and energy variation and, treating the nanostructure as an artificial synapse device matched to its electrical properties, constructed a deep spiking neural network (DSNN) that achieves a recognition accuracy of 98.6% with supervised learning via the spike-timing-dependent plasticity (STDP) rule. These results provide a route toward skyrmion-skyrmionium hybrid applications and neuromorphic computing.
Collapse
Affiliation(s)
- Bin Gong
- Hubei Key Laboratory of Optical Information and Pattern Recognition, School of Optical Information and Energy Engineering, Wuhan Institute of Technology, Wuhan 430205, P. R. China.
| | - Chenhuinan Wei
- Hubei Provincial Key Laboratory of Green Materials for Light Industry, Hubei University of Technology, Wuhan 430068, China
| | - Han Yang
- Hubei Key Laboratory of Optical Information and Pattern Recognition, School of Optical Information and Energy Engineering, Wuhan Institute of Technology, Wuhan 430205, P. R. China.
| | - Ziyang Yu
- Hubei Key Laboratory of Optical Information and Pattern Recognition, School of Optical Information and Energy Engineering, Wuhan Institute of Technology, Wuhan 430205, P. R. China.
| | - Luowen Wang
- Hubei Key Laboratory of Optical Information and Pattern Recognition, School of Optical Information and Energy Engineering, Wuhan Institute of Technology, Wuhan 430205, P. R. China.
| | - Lun Xiong
- Hubei Key Laboratory of Optical Information and Pattern Recognition, School of Optical Information and Energy Engineering, Wuhan Institute of Technology, Wuhan 430205, P. R. China.
| | - Rui Xiong
- Key Laboratory of Artificial Micro- and Nano-structures of Ministry of Education, School of Physics and Technology, Wuhan University, Wuhan 430072, China
| | - Zhihong Lu
- The State Key Laboratory of Refractories and Metallurgy, School of Materials and Metallurgy, Wuhan University of Science and Technology, Wuhan 430081, China
| | - Yue Zhang
- School of Optical and Electronic Information, Huazhong University of Science and Technology, Wuhan, 430074, China
| | - Qingbo Liu
- Hubei Key Laboratory of Optical Information and Pattern Recognition, School of Optical Information and Energy Engineering, Wuhan Institute of Technology, Wuhan 430205, P. R. China.
| |
Collapse
|
48
|
Deng S, Yu H, Park TJ, Islam AN, Manna S, Pofelski A, Wang Q, Zhu Y, Sankaranarayanan SK, Sengupta A, Ramanathan S. Selective area doping for Mott neuromorphic electronics. SCIENCE ADVANCES 2023; 9:eade4838. [PMID: 36930716 PMCID: PMC10022892 DOI: 10.1126/sciadv.ade4838] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/18/2022] [Accepted: 02/16/2023] [Indexed: 06/18/2023]
Abstract
The cointegration of artificial neuronal and synaptic devices with homotypic materials and structures can greatly simplify the fabrication of neuromorphic hardware. We demonstrate experimental realization of vanadium dioxide (VO2) artificial neurons and synapses on the same substrate through selective area carrier doping. By locally configuring pairs of catalytic and inert electrodes that enable nanoscale control over carrier density, volatility or nonvolatility can be appropriately assigned to each two-terminal Mott memory device per lithographic design, and both neuron- and synapse-like devices are successfully integrated on a single chip. Feedforward excitation and inhibition neural motifs are demonstrated at hardware level, followed by simulation of network-level handwritten digit and fashion product recognition tasks with experimental characteristics. Spatially selective electron doping opens up previously unidentified avenues for integration of emerging correlated semiconductors in electronic device technologies.
Collapse
Affiliation(s)
- Sunbin Deng
- School of Materials Engineering, Purdue University, West Lafayette, IN 47907, USA
- Haoming Yu
- School of Materials Engineering, Purdue University, West Lafayette, IN 47907, USA
- Tae Joon Park
- School of Materials Engineering, Purdue University, West Lafayette, IN 47907, USA
- A. N. M. Nafiul Islam
- School of Electrical Engineering and Computer Science, The Pennsylvania State University, University Park, PA 16802, USA
- Sukriti Manna
- Center for Nanoscale Materials, Argonne National Laboratory, Lemont, IL 60439, USA
- Department of Mechanical and Industrial Engineering, University of Illinois, Chicago, IL 60607, USA
- Alexandre Pofelski
- Department of Condensed Matter Physics and Materials Science, Brookhaven National Laboratory, Upton, NY 11973, USA
- Qi Wang
- School of Materials Engineering, Purdue University, West Lafayette, IN 47907, USA
- Yimei Zhu
- Department of Condensed Matter Physics and Materials Science, Brookhaven National Laboratory, Upton, NY 11973, USA
- Subramanian K. R. S. Sankaranarayanan
- Center for Nanoscale Materials, Argonne National Laboratory, Lemont, IL 60439, USA
- Department of Mechanical and Industrial Engineering, University of Illinois, Chicago, IL 60607, USA
- Abhronil Sengupta
- School of Electrical Engineering and Computer Science, The Pennsylvania State University, University Park, PA 16802, USA
- Shriram Ramanathan
- School of Materials Engineering, Purdue University, West Lafayette, IN 47907, USA
Collapse
|
49
|
Spike timing-dependent plasticity and memory. Curr Opin Neurobiol 2023; 80:102707. [PMID: 36924615 DOI: 10.1016/j.conb.2023.102707] [Citation(s) in RCA: 13] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2022] [Revised: 01/18/2023] [Accepted: 02/15/2023] [Indexed: 03/16/2023]
Abstract
Spike timing-dependent plasticity (STDP) is a bidirectional form of synaptic plasticity discovered about 30 years ago, based on the relative timing of pre- and post-synaptic spiking activity with millisecond precision. STDP is thought to be involved in the formation of memory, but the millisecond-precision spike timing required for STDP is difficult to reconcile with the much slower timescales of behavioral learning. This review therefore aims to present and discuss recent findings about i) the multiple STDP learning rules at both excitatory and inhibitory synapses in vitro, ii) the contribution of STDP-like synaptic plasticity to the formation of memory in vivo, and iii) the implementation of STDP rules in artificial neural networks and memristive devices.
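The bidirectional, timing-based rule described above is commonly written as an exponential weight-update window. The sketch below is the standard textbook pairwise form, not code from this review; the amplitudes and time constants are illustrative values.

```python
import math

# Standard pairwise STDP window (illustrative parameters, hypothetical values).
A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # decay time constants in ms

def stdp_dw(delta_t_ms):
    """Weight change for one pre/post spike pair.

    delta_t_ms = t_post - t_pre: positive (pre before post) potentiates,
    negative (post before pre) depresses; both decay exponentially in |dt|.
    """
    if delta_t_ms > 0:
        return A_PLUS * math.exp(-delta_t_ms / TAU_PLUS)
    elif delta_t_ms < 0:
        return -A_MINUS * math.exp(delta_t_ms / TAU_MINUS)
    return 0.0
```

Note that spike pairs separated by many tens of milliseconds yield near-zero updates, which is exactly the millisecond-scale sensitivity that the review contrasts with the slower timescales of behavioral learning.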
Collapse
|
50
|
Pietrzak P, Szczęsny S, Huderek D, Przyborowski Ł. Overview of Spiking Neural Network Learning Approaches and Their Computational Complexities. SENSORS (BASEL, SWITZERLAND) 2023; 23:3037. [PMID: 36991750 PMCID: PMC10053242 DOI: 10.3390/s23063037] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/07/2023] [Revised: 03/08/2023] [Accepted: 03/09/2023] [Indexed: 06/19/2023]
Abstract
Spiking neural networks (SNNs) are a topic of growing interest. They resemble actual neural networks in the brain more closely than their second-generation counterparts, artificial neural networks (ANNs). SNNs have the potential to be more energy efficient than ANNs on event-driven neuromorphic hardware, which could drastically reduce maintenance costs, since energy consumption would be much lower than for the regular deep learning models hosted in the cloud today. However, such hardware is not yet widely available. On standard computer architectures consisting mainly of central processing units (CPUs) and graphics processing units (GPUs), ANNs have the upper hand in execution speed, owing to their simpler models of neurons and of the connections between them. In general, ANNs also win in terms of learning algorithms, as SNNs do not reach the same levels of performance as their second-generation counterparts on typical machine learning benchmark tasks, such as classification. In this paper, we review existing learning algorithms for spiking neural networks, divide them into categories by type, and assess their computational complexity.
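One family of SNN learning approaches that overviews like this typically cover is ANN-to-SNN conversion, which rests on the observation that, over many timesteps, the firing rate of an integrate-and-fire neuron approximates a ReLU of its input. The toy sketch below illustrates that correspondence; the `if_rate` helper and its parameters are hypothetical, not taken from the paper.

```python
# Toy illustration of the rate-coding idea behind ANN-to-SNN conversion
# (hypothetical helper, illustrative parameters): the spike rate of a
# non-leaky integrate-and-fire neuron under constant input approximates ReLU.

def if_rate(x, timesteps=1000, threshold=1.0):
    """Firing rate of a non-leaky integrate-and-fire neuron with constant input x."""
    v, spikes = 0.0, 0
    for _ in range(timesteps):
        v += x
        if v >= threshold:
            spikes += 1
            v -= threshold   # soft reset keeps the residual charge
    return spikes / timesteps

def relu(x):
    return max(0.0, x)

for x in (-0.3, 0.0, 0.25, 0.5):
    print(x, relu(x), if_rate(x))
```

The example also hints at the computational-complexity angle of the review: reproducing one ReLU evaluation to good accuracy takes on the order of hundreds of simulated timesteps, which is part of why SNNs lag ANNs in execution speed on CPUs and GPUs.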
Collapse
|