1. Yan J, Liu Q, Zhang M, Feng L, Ma D, Li H, Pan G. Efficient spiking neural network design via neural architecture search. Neural Netw 2024;173:106172. PMID: 38402808. DOI: 10.1016/j.neunet.2024.106172.
Abstract
Spiking neural networks (SNNs) are brain-inspired models that utilize discrete and sparse spikes to transmit information, thus having the property of energy efficiency. Recent advances in learning algorithms have greatly improved SNN performance due to the automation of feature engineering. While the choice of neural architecture plays a significant role in deep learning, the current SNN architectures are mainly designed manually, which is a time-consuming and error-prone process. In this paper, we propose a spiking neural architecture search (NAS) method that can automatically find efficient SNNs. To tackle the challenge of long search time faced by SNNs when utilizing NAS, the proposed NAS encodes candidate architectures in a branchless spiking supernet which significantly reduces the computation requirements in the search process. Considering that real-world tasks prefer efficient networks with optimal accuracy under a limited computational budget, we propose a Synaptic Operation (SynOps)-aware optimization to automatically find the computationally efficient subspace of the supernet. Experimental results show that, in less search time, our proposed NAS can find SNNs with higher accuracy and lower computational cost than state-of-the-art SNNs. We also conduct experiments to validate the search process and the trade-off between accuracy and computational cost.
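The SynOps-aware selection idea can be illustrated with a toy sketch. This is not the paper's algorithm (which trains a branchless supernet); it only shows budgeted candidate selection, and the layer-width choices, firing rates, and `accuracy_proxy` are all hypothetical stand-ins.

```python
import itertools

def synops(widths, rates):
    # SynOps estimate per layer: presynaptic neurons x firing rate x fan-out.
    return sum(w * r * w_next
               for (w, r), w_next in zip(zip(widths[:-1], rates), widths[1:]))

def search(choices, rates, budget, accuracy_proxy):
    # Enumerate candidate subnets, keep those inside the SynOps budget,
    # and return the best one under the (placeholder) accuracy proxy.
    feasible = [w for w in itertools.product(*choices)
                if synops(w, rates) <= budget]
    return max(feasible, key=accuracy_proxy, default=None)

# Hypothetical search space: two hidden-layer width choices, fixed output.
best = search([(8, 16), (8, 16), (4,)], rates=(0.2, 0.2),
              budget=40.0, accuracy_proxy=sum)
```

Swapping `accuracy_proxy` for validation accuracy measured with shared supernet weights recovers the usual NAS selection loop, restricted to the computationally efficient subspace.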
Affiliation(s)
- Jiaqi Yan: Zhejiang University, Hangzhou, 310027, China
- Qianhui Liu: National University of Singapore, 119077, Singapore
- Malu Zhang: University of Electronic Science and Technology of China, Chengdu, 611731, China
- Lang Feng: Zhejiang University, Hangzhou, 310027, China
- De Ma: Zhejiang University, Hangzhou, 310027, China
- Haizhou Li: National University of Singapore, 119077, Singapore; The Chinese University of Hong Kong, Shenzhen, 518172, China
- Gang Pan: Zhejiang University, Hangzhou, 310027, China
2. Su Q, He W, Wei X, Xu B, Li G. Multi-scale full spike pattern for semantic segmentation. Neural Netw 2024;176:106330. PMID: 38688068. DOI: 10.1016/j.neunet.2024.106330.
Abstract
Spiking neural networks (SNNs), as brain-inspired neural networks, encode information in spatio-temporal dynamics. They have the potential to serve as low-power alternatives to artificial neural networks (ANNs) due to their sparse and event-driven nature. However, existing SNN-based models for pixel-level semantic segmentation tasks suffer from poor performance and high memory overhead, failing to fully exploit the computational effectiveness and efficiency of SNNs. To address these challenges, we propose the multi-scale and full spike segmentation network (MFS-Seg), which is built on a directly trained deep SNN and represents the first attempt to train a deep SNN with surrogate gradients for semantic segmentation. Specifically, we design an efficient fully-spike residual block (EFS-Res) to alleviate representation issues caused by spiking noise on different channels. EFS-Res utilizes depthwise separable convolution to improve the distributions of spiking feature maps. The visualization shows that our model can effectively extract the edge features of segmented objects. Furthermore, it can significantly reduce the memory overhead and energy consumption of the network. In addition, we theoretically analyze and prove that EFS-Res can avoid the degradation problem based on block dynamical isometry theory. Experimental results on the Camvid, DDD17, and DSEC-Semantic datasets show that our model achieves comparable performance to the mainstream UNet network with up to 31× fewer parameters and over 13× lower power consumption. Overall, our MFS-Seg model demonstrates promising results in terms of performance, memory efficiency, and energy consumption, showcasing the potential of deep SNNs for semantic segmentation tasks. Our code is available at https://github.com/BICLab/MFS-Seg.
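The parameter savings behind EFS-Res's depthwise separable convolutions can be checked with a quick count (toy layer sizes; the abstract's 31× figure refers to the whole network versus UNet, not this layer-level ratio):

```python
def conv_params(c_in, c_out, k):
    # Standard convolution: one k x k kernel per (input, output) channel pair.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # Depthwise k x k filter per input channel, then 1x1 pointwise mixing.
    return c_in * k * k + c_in * c_out

standard = conv_params(64, 64, 3)                   # 36864 parameters
separable = depthwise_separable_params(64, 64, 3)   # 4672 parameters
ratio = standard / separable                        # roughly 8x fewer
```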
Affiliation(s)
- Qiaoyi Su: School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China; Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Weihua He: Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Xiaobao Wei: Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Bo Xu: School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China; Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Guoqi Li: School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China; Institute of Automation, Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, Chinese Academy of Sciences, Beijing 100190, China
3. Nikiruy K, Perez E, Baroni A, Reddy KDS, Pechmann S, Wenger C, Ziegler M. Blooming and pruning: learning from mistakes with memristive synapses. Sci Rep 2024;14:7802. PMID: 38565677. PMCID: PMC10987678. DOI: 10.1038/s41598-024-57660-4.
Abstract
Blooming and pruning is one of the most important developmental mechanisms of the biological brain in the first years of life, enabling it to adapt its network structure to the demands of the environment. The mechanism is thought to be fundamental for the development of cognitive skills. Inspired by this, Chialvo and Bak proposed in 1999 a learning scheme that learns from mistakes by eliminating from the initial surplus of synaptic connections those that lead to an undesirable outcome. Here, this idea is implemented in a neuromorphic circuit scheme using CMOS integrated HfO2-based memristive devices. The implemented two-layer neural network learns in a self-organized manner without positive reinforcement and exploits the inherent variability of the memristive devices. This approach provides hardware, local, and energy-efficient learning. A combined experimental and simulation-based parameter study is presented to find the relevant system and device parameters leading to a compact and robust memristive neuromorphic circuit that can handle association tasks.
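The Chialvo–Bak scheme the circuit implements can be sketched in a few lines: start with a surplus of equally strong synapses and depress only the ones that produced a wrong answer, with a small random term standing in for device variability. This is a conceptual sketch, not the memristive circuit itself; the penalty and network sizes are arbitrary.

```python
import random

def train_step(w, x, target, rng, penalty=0.1):
    # Winner-take-all readout: the output with the strongest synapse fires.
    winner = max(range(len(w[x])), key=lambda j: w[x][j])
    if winner != target:
        # Learning from mistakes only: depress the offending synapse.
        # The small random term mimics inherent device variability.
        w[x][winner] -= penalty + rng.random() * 1e-3
    return winner

rng = random.Random(0)
w = [[1.0, 1.0, 1.0] for _ in range(3)]   # initial surplus of strong synapses
mapping = {0: 2, 1: 0, 2: 1}              # desired input-output associations
for _ in range(100):
    x = rng.randrange(3)
    train_step(w, x, mapping[x], rng)
```

No synapse is ever strengthened: the network self-organizes purely by eliminating connections that lead to undesirable outcomes, as in the original 1999 proposal.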
Affiliation(s)
- Kristina Nikiruy: Micro- and Nanoelectronic Systems, Department of Electrical Engineering and Information Technology, TU Ilmenau, Ilmenau, Germany
- Eduardo Perez: IHP - Leibniz-Institut fuer innovative Mikroelektronik, Frankfurt/Oder, Germany; BTU Cottbus-Senftenberg, Cottbus, Germany
- Andrea Baroni: IHP - Leibniz-Institut fuer innovative Mikroelektronik, Frankfurt/Oder, Germany
- Stefan Pechmann: Chair of Micro- and Nanosystems Technology, Technical University of Munich, Munich, Germany
- Christian Wenger: IHP - Leibniz-Institut fuer innovative Mikroelektronik, Frankfurt/Oder, Germany; BTU Cottbus-Senftenberg, Cottbus, Germany
- Martin Ziegler: Micro- and Nanoelectronic Systems, Department of Electrical Engineering and Information Technology, TU Ilmenau, Ilmenau, Germany; Institute of Micro- and Nanotechnologies MacroNano, TU Ilmenau, Ilmenau, Germany
4. Zhou PK, Li Y, Zeng T, Chee MY, Huang Y, Yu Z, Yu H, Yu H, Huang W, Chen X. One-Dimensional Covalent Organic Framework-Based Multilevel Memristors for Neuromorphic Computing. Angew Chem Int Ed Engl 2024:e202402911. PMID: 38511343. DOI: 10.1002/anie.202402911.
Abstract
Memristors are essential components of neuromorphic systems that mimic the synaptic plasticity observed in biological neurons. In this study, a novel approach employing one-dimensional covalent organic framework (1D COF) films was explored to enhance the performance of memristors. The unique structural and electronic properties of two 1D COF films (COF-4,4'-methylenedianiline (MDA) and COF-4,4'-oxydianiline (ODA)) offer advantages for multilevel resistive switching, which is a key feature in neuromorphic computing applications. By further introducing a TiO2 layer on the COF-ODA film, a built-in electric field between the COF-TiO2 interfaces could be generated, demonstrating the feasibility of utilizing COFs as a platform for constructing memristors with tunable resistive states. The 1D nanochannels of these COF structures contributed to the efficient modulation of electrical conductance, enabling precise control over synaptic weights in neuromorphic circuits. This study also investigated the potential of these COF-based memristors to achieve energy-efficient and high-density memory devices.
Affiliation(s)
- Pan-Ke Zhou: State Key Laboratory of Photocatalysis on Energy and Environment, and Key Laboratory of Molecular Synthesis and Function Discovery, College of Chemistry, Fuzhou University, Fujian, 350108, China
- Yiping Li: State Key Laboratory of Photocatalysis on Energy and Environment, and Key Laboratory of Molecular Synthesis and Function Discovery, College of Chemistry, Fuzhou University, Fujian, 350108, China
- Tao Zeng: Department of Materials Science and Engineering, National University of Singapore, Singapore, 117575, Singapore
- Mun Yin Chee: School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore, 637371, Singapore
- Yuxing Huang: State Key Laboratory of Photocatalysis on Energy and Environment, and Key Laboratory of Molecular Synthesis and Function Discovery, College of Chemistry, Fuzhou University, Fujian, 350108, China
- Ziyue Yu: State Key Laboratory of Photocatalysis on Energy and Environment, and Key Laboratory of Molecular Synthesis and Function Discovery, College of Chemistry, Fuzhou University, Fujian, 350108, China
- Hongling Yu: State Key Laboratory of Photocatalysis on Energy and Environment, and Key Laboratory of Molecular Synthesis and Function Discovery, College of Chemistry, Fuzhou University, Fujian, 350108, China
- Hong Yu: State Key Laboratory of Photocatalysis on Energy and Environment, and Key Laboratory of Molecular Synthesis and Function Discovery, College of Chemistry, Fuzhou University, Fujian, 350108, China
- Weiguo Huang: State Key Laboratory of Structural Chemistry, Fujian Institute of Research on the Structure of Matter, Chinese Academy of Sciences, 155 Yangqiao West Road, Fuzhou, Fujian, 350002, China
- Xiong Chen: State Key Laboratory of Photocatalysis on Energy and Environment, and Key Laboratory of Molecular Synthesis and Function Discovery, College of Chemistry, Fuzhou University, Fujian, 350108, China
5. Halužan Vasle A, Moškon M. Synthetic biological neural networks: From current implementations to future perspectives. Biosystems 2024;237:105164. PMID: 38402944. DOI: 10.1016/j.biosystems.2024.105164.
Abstract
Artificial neural networks, inspired by the biological networks of the human brain, have become game-changing computing models in modern computer science. Inspired by their wide scope of applications, synthetic biology strives to create their biological counterparts, which we denote synthetic biological neural networks (SYNBIONNs). Their use in the fields of medicine, biosensors, biotechnology, and many more shows great potential and presents exciting possibilities. So far, many different synthetic biological networks have been successfully constructed; however, SYNBIONN implementations remain sparse, mostly based on neural networks pretrained in silico and heavily dependent on extensive human input. In this paper, we review current implementations and models of SYNBIONNs. We briefly present the biological platforms that show potential for designing and constructing perceptrons and/or multilayer SYNBIONNs. We explore their future possibilities along with the challenges that must be overcome to successfully implement a scalable in vivo biological neural network capable of online learning.
Affiliation(s)
- Ana Halužan Vasle: Faculty of Computer and Information Science, University of Ljubljana, Ljubljana, Slovenia
- Miha Moškon: Faculty of Computer and Information Science, University of Ljubljana, Ljubljana, Slovenia
6. Kwak H, Kim N, Jeon S, Kim S, Woo J. Electrochemical random-access memory: recent advances in materials, devices, and systems towards neuromorphic computing. Nano Converg 2024;11:9. PMID: 38416323. PMCID: PMC10902254. DOI: 10.1186/s40580-024-00415-8.
Abstract
Artificial neural networks (ANNs), inspired by the human brain's network of neurons and synapses, enable computing machines and systems to execute cognitive tasks, thus embodying artificial intelligence (AI). Since ANN performance generally improves with network size, and most of the computation time is spent on matrix operations, AI computations have been performed not only on general-purpose central processing units (CPUs) but also on architectures that facilitate parallel computation, such as graphics processing units (GPUs) and custom-designed application-specific integrated circuits (ASICs). Nevertheless, the substantial energy consumption stemming from frequent data transfers between processing units and memory has remained a persistent challenge. In response, a novel approach has emerged: an in-memory computing architecture harnessing analog memory elements. This innovation promises a notable advancement in energy efficiency. The core of this analog AI hardware accelerator lies in expansive arrays of non-volatile memory devices, known as resistive processing units (RPUs). These RPUs facilitate massively parallel matrix operations, leading to significant enhancements in both performance and energy efficiency. Electrochemical random-access memory (ECRAM), leveraging ion dynamics in secondary-ion battery materials, has emerged as a promising candidate for RPUs. ECRAM achieves over 1000 memory states through precise ion movement control, prompting early-stage research into material stacks such as mobile ion species and electrolyte materials. Crucially, the analog states in ECRAMs update symmetrically with pulse number (or voltage polarity), contributing to high network performance. Recent strides in device engineering in planar and three-dimensional structures and the understanding of ECRAM operation physics have marked significant progress in a short research period.
This paper aims to review ECRAM material advancements through literature surveys, offering a systematic discussion on engineering assessments for ion control and a physical understanding of array-level demonstrations. Finally, the review outlines future directions for improvements, co-optimization, and multidisciplinary collaboration in circuits, algorithms, and applications to develop energy-efficient, next-generation AI hardware systems.
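The RPU idea sketched above (weights stored as analog conductances, matrix-vector products read out in one shot) can be illustrated as follows. The 1000-state quantization mirrors the state count mentioned in the abstract, while the weight range and example values are arbitrary:

```python
import numpy as np

STATES, G_MAX = 1000, 1.0

def program(w):
    # Map target weights onto the discrete ECRAM conductance grid.
    g = np.clip(w, -G_MAX, G_MAX)
    return np.round((g + G_MAX) / (2 * G_MAX) * (STATES - 1))

def read_mvm(g_states, x):
    # Analog MVM: Ohm's law plus Kirchhoff current summation across the
    # array, modeled here as a dot product over de-quantized conductances.
    g = g_states / (STATES - 1) * 2 * G_MAX - G_MAX
    return g @ x

w = np.array([[0.5, -0.5], [0.25, 0.75]])
y = read_mvm(program(w), np.array([1.0, 2.0]))  # close to w @ x
```

With 1000 states the quantization step is ~0.002 of the conductance window, which is why the analog read stays close to the ideal product.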
Affiliation(s)
- Hyunjeong Kwak: Department of Materials Science and Engineering, Pohang University of Science and Technology (POSTECH), Pohang, 37673, South Korea
- Nayeon Kim: School of Electronic and Electrical Engineering, Kyungpook National University, Daegu, 41566, South Korea
- Seonuk Jeon: School of Electronic and Electrical Engineering, Kyungpook National University, Daegu, 41566, South Korea
- Seyoung Kim: Department of Materials Science and Engineering, Pohang University of Science and Technology (POSTECH), Pohang, 37673, South Korea
- Jiyong Woo: School of Electronic and Electrical Engineering, Kyungpook National University, Daegu, 41566, South Korea
7. Wang W, Wang Y, Yin F, Niu H, Shin YK, Li Y, Kim ES, Kim NY. Tailoring Classical Conditioning Behavior in TiO2 Nanowires: ZnO QDs-Based Optoelectronic Memristors for Neuromorphic Hardware. Nanomicro Lett 2024;16:133. PMID: 38411720. PMCID: PMC10899558. DOI: 10.1007/s40820-024-01338-z.
Abstract
Neuromorphic hardware equipped with associative learning capabilities presents fascinating applications in the next generation of artificial intelligence. However, research into synaptic devices exhibiting complex associative learning behaviors is still nascent. Here, an optoelectronic memristor based on Ag/TiO2 Nanowires: ZnO Quantum dots/FTO was proposed and constructed to emulate biological associative learning behaviors. Effective implementation of synaptic behaviors, including long- and short-term plasticity and learning-forgetting-relearning behaviors, was achieved in the device through the application of light and electrical stimuli. Leveraging the optoelectronic co-modulated characteristics, a simulation of neuromorphic computing was conducted, resulting in a handwritten digit recognition accuracy of 88.9%. Furthermore, a 3 × 7 memristor array was constructed, confirming its application in artificial visual memory. Most importantly, complex biological associative learning behaviors were emulated by mapping the light and electrical stimuli onto conditioned and unconditioned stimuli, respectively. After training through associative pairs, reflexes could be triggered solely using light stimuli. Under specific optoelectronic stimulation, the four features of classical conditioning, namely acquisition, extinction, recovery, and generalization, were emulated. This work provides an optoelectronic memristor with associative behavior capabilities, offering a pathway for advancing brain-machine interfaces, autonomous robots, and machine self-learning in the future.
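The acquisition/extinction behavior can be captured by a minimal Pavlovian model: pairing the conditioned stimulus (light) with the unconditioned stimulus (electrical pulse) potentiates a synaptic weight until light alone triggers the response, and unreinforced light presentations depress it again. The threshold, learning rate, and decay below are arbitrary, and the device physics is abstracted away entirely:

```python
THRESHOLD = 1.0

def trial(w_cs, cs, us, lr=0.25, decay=0.2):
    # The unconditioned stimulus always fires the response; the conditioned
    # stimulus fires it only once its weight crosses threshold.
    response = us >= 1 or w_cs * cs >= THRESHOLD
    if cs and us:
        w_cs += lr            # acquisition: potentiate on paired trials
    elif cs and response:
        w_cs -= decay         # extinction: depress on unreinforced trials
    return w_cs, response

w = 0.0
w, before = trial(w, cs=1, us=0)      # light alone: no reflex yet
for _ in range(5):                    # pair light with electrical pulses
    w, _ = trial(w, cs=1, us=1)
w, after = trial(w, cs=1, us=0)       # conditioned reflex acquired
for _ in range(10):                   # unreinforced light: extinction
    w, still_responding = trial(w, cs=1, us=0)
```

Recovery and generalization would need additional state (a slowly decaying trace, and a stimulus-similarity kernel), which this sketch omits.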
Affiliation(s)
- Wenxiao Wang: School of Information Science and Engineering, University of Jinan, Jinan, 250022, People's Republic of China; RFIC Centre, NDAC Centre, Kwangwoon University, Nowon-gu, Seoul, 139-701, South Korea; Department of Electronics Engineering, Kwangwoon University, Nowon-gu, Seoul, 139-701, South Korea
- Yaqi Wang: School of Information Science and Engineering, University of Jinan, Jinan, 250022, People's Republic of China
- Feifei Yin: RFIC Centre, NDAC Centre, Kwangwoon University, Nowon-gu, Seoul, 139-701, South Korea; Department of Electronics Engineering, Kwangwoon University, Nowon-gu, Seoul, 139-701, South Korea
- Hongsen Niu: RFIC Centre, NDAC Centre, Kwangwoon University, Nowon-gu, Seoul, 139-701, South Korea; Department of Electronics Engineering, Kwangwoon University, Nowon-gu, Seoul, 139-701, South Korea
- Young-Kee Shin: Department of Molecular Medicine and Biopharmaceutical Sciences, Seoul National University, Seoul, 08826, South Korea
- Yang Li: School of Information Science and Engineering, University of Jinan, Jinan, 250022, People's Republic of China; School of Microelectronics, Shandong University, Jinan, 250101, People's Republic of China
- Eun-Seong Kim: RFIC Centre, NDAC Centre, Kwangwoon University, Nowon-gu, Seoul, 139-701, South Korea; Department of Electronics Engineering, Kwangwoon University, Nowon-gu, Seoul, 139-701, South Korea
- Nam-Young Kim: RFIC Centre, NDAC Centre, Kwangwoon University, Nowon-gu, Seoul, 139-701, South Korea; Department of Electronics Engineering, Kwangwoon University, Nowon-gu, Seoul, 139-701, South Korea
8. Xue S, Wang S, Wu T, Di Z, Xu N, Sun Y, Zeng C, Ma S, Zhou P. Hybrid neuromorphic hardware with sparing 2D synapse and CMOS neuron for character recognition. Sci Bull (Beijing) 2023;68:2336-2343. PMID: 37714804. DOI: 10.1016/j.scib.2023.09.006.
Abstract
Neuromorphic computing enables efficient processing of data-intensive tasks, but requires numerous artificial synapses and neurons for certain functions, which leads to bulky systems and energy challenges. Achieving functionality with fewer synapses and neurons will improve integration density and computing power. Two-dimensional (2D) materials exhibit potential for artificial synapses, including diverse biomimetic plasticity and efficient computing. Considering the complexity of neuron circuits and the maturity of complementary metal-oxide-semiconductor (CMOS) technology, hybrid integration is attractive. Here, we demonstrate hybrid neuromorphic hardware with 2D MoS2 synaptic arrays and CMOS neural circuitry integrated on board. With the joint benefit of hybrid integration, frequency coding, and feature extraction, a total of twelve MoS2 synapses and three CMOS neurons, combined with digital-to-analogue converters, enables alphabetic and numeric recognition. The MoS2 synapses exhibit progressively tunable weight plasticity, and the CMOS neurons integrate and fire frequency-encoded spikes to display the target characters. The synapse- and neuron-saving hybrid hardware exhibits a competitive accuracy of 98.8% and a per-recognition power consumption of 11.4 μW. This work provides a viable solution for building neuromorphic hardware with high compactness and computing power.
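The integrate-and-fire frequency coding used by the CMOS neurons can be sketched as a discrete-time leaky integrate-and-fire (LIF) model: stronger synaptic current means more threshold crossings per time window, so character features map to spike frequencies. The leak factor and threshold below are illustrative, not the circuit's values:

```python
def lif_fire_count(current, steps, leak=0.9, v_th=1.0):
    # Leaky integrate-and-fire: accumulate input current each step, spike
    # and reset on threshold crossing; output frequency tracks input strength.
    v, spikes = 0.0, 0
    for _ in range(steps):
        v = leak * v + current
        if v >= v_th:
            spikes += 1
            v = 0.0
    return spikes

# Larger input currents yield monotonically higher spike counts per window.
rates = [lif_fire_count(i, steps=20) for i in (0.2, 0.5, 1.2)]
```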
Affiliation(s)
- Siwei Xue: Shanghai Key Laboratory for Future Computing Hardware and System, School of Microelectronics, Fudan University, Shanghai 200433, China
- Shuiyuan Wang: Shanghai Key Laboratory for Future Computing Hardware and System, School of Microelectronics, Fudan University, Shanghai 200433, China
- Tianxiang Wu: State Key Laboratory of ASIC and System, School of Information Science and Technology, Fudan University, Shanghai 200433, China
- Ziye Di: Shanghai Key Laboratory for Future Computing Hardware and System, School of Microelectronics, Fudan University, Shanghai 200433, China
- Nuo Xu: Shanghai Key Laboratory for Future Computing Hardware and System, School of Microelectronics, Fudan University, Shanghai 200433, China
- Yibo Sun: Shanghai Key Laboratory for Future Computing Hardware and System, School of Microelectronics, Fudan University, Shanghai 200433, China
- Chaofan Zeng: Shanghai Key Laboratory for Future Computing Hardware and System, School of Microelectronics, Fudan University, Shanghai 200433, China
- Shunli Ma: Shanghai Key Laboratory for Future Computing Hardware and System, School of Microelectronics, Fudan University, Shanghai 200433, China
- Peng Zhou: Shanghai Key Laboratory for Future Computing Hardware and System, School of Microelectronics, Fudan University, Shanghai 200433, China; Frontier Institute of Chip and System & Qizhi Institute, Fudan University, Shanghai 200433, China; Hubei Yangtze Memory Laboratories, Wuhan 430205, China
9. Schmid D, Jarvers C, Neumann H. Canonical circuit computations for computer vision. Biol Cybern 2023;117:299-329. PMID: 37306782. PMCID: PMC10600314. DOI: 10.1007/s00422-023-00966-9.
Abstract
Advanced computer vision mechanisms have been inspired by neuroscientific findings. However, with the focus on improving benchmark achievements, technical solutions have been shaped by application and engineering constraints. This includes the training of neural networks which led to the development of feature detectors optimally suited to the application domain. However, the limitations of such approaches motivate the need to identify computational principles, or motifs, in biological vision that can enable further foundational advances in machine vision. We propose to utilize structural and functional principles of neural systems that have been largely overlooked. They potentially provide new inspirations for computer vision mechanisms and models. Recurrent feedforward, lateral, and feedback interactions characterize general principles underlying processing in mammals. We derive a formal specification of core computational motifs that utilize these principles. These are combined to define model mechanisms for visual shape and motion processing. We demonstrate how such a framework can be adopted to run on neuromorphic brain-inspired hardware platforms and can be extended to automatically adapt to environment statistics. We argue that the identified principles and their formalization inspires sophisticated computational mechanisms with improved explanatory scope. These and other elaborated, biologically inspired models can be employed to design computer vision solutions for different tasks and they can be used to advance neural network architectures of learning.
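The feedforward/lateral/feedback motif described above can be written as a one-line rate dynamics sketch. This is not the paper's formal specification, only the structural idea that feedback modulates the driving input (rather than creating activity on its own) while lateral connections pool across units; all weights and constants are made up for illustration:

```python
import numpy as np

def motif_step(r, ff, w_lat, fb, dt=0.1):
    # Feedback multiplicatively modulates the feedforward drive; lateral
    # weights mediate pooling/inhibition; rectification keeps rates >= 0.
    drive = ff * (1.0 + fb)
    return np.maximum(0.0, r + dt * (-r + drive - w_lat @ r))

ff = np.array([1.0, 0.0])                       # only unit 0 is driven
w_lat = np.array([[0.0, 0.5], [0.5, 0.0]])      # mutual lateral inhibition
r_plain, r_mod = np.zeros(2), np.zeros(2)
for _ in range(300):                            # iterate to steady state
    r_plain = motif_step(r_plain, ff, w_lat, fb=0.0)
    r_mod = motif_step(r_mod, ff, w_lat, fb=0.5)
```

At steady state the feedback-modulated response is enhanced relative to the plain feedforward response, while the undriven unit stays silent, reflecting the gain-modulation role often attributed to cortical feedback.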
Affiliation(s)
- Daniel Schmid: Institute for Neural Information Processing, Ulm University, James-Franck-Ring, 89081 Ulm, Germany
- Christian Jarvers: Institute for Neural Information Processing, Ulm University, James-Franck-Ring, 89081 Ulm, Germany
- Heiko Neumann: Institute for Neural Information Processing, Ulm University, James-Franck-Ring, 89081 Ulm, Germany
10. Yao M, Zhang H, Zhao G, Zhang X, Wang D, Cao G, Li G. Sparser spiking activity can be better: Feature Refine-and-Mask spiking neural network for event-based visual recognition. Neural Netw 2023;166:410-423. PMID: 37549609. DOI: 10.1016/j.neunet.2023.07.008.
Abstract
Event-based vision, a new visual paradigm with bio-inspired dynamic perception and microsecond-level temporal resolution, has prominent advantages in many specific visual scenarios and has gained much research interest. Spiking neural networks (SNNs) are naturally suitable for dealing with event streams due to their temporal information processing capability and event-driven nature. However, existing SNN works neglect the fact that the input event streams are spatially sparse and temporally non-uniform, and treat these varying inputs equally. This situation limits the effectiveness and efficiency of existing SNNs. In this paper, we propose the feature Refine-and-Mask SNN (RM-SNN), which has the ability of self-adaption to regulate the spiking response in a data-dependent way. We use the Refine-and-Mask (RM) module to refine all features and mask the unimportant ones to optimize the membrane potential of spiking neurons, which in turn reduces spiking activity. Inspired by the fact that not all events in spatio-temporal streams are task-relevant, we execute the RM module in both temporal and channel dimensions. Extensive experiments on seven event-based benchmarks, DVS128 Gesture, DVS128 Gait, CIFAR10-DVS, N-Caltech101, DailyAction-DVS, UCF101-DVS, and HMDB51-DVS, demonstrate that under the multi-scale constraints of input time window, RM-SNN can significantly reduce the network's average spiking activity rate while improving task performance. In addition, by visualizing spiking responses, we analyze why sparser spiking activity can be better. Code is available.
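The masking idea can be sketched in the channel dimension: score channels, keep the informative fraction, and zero the rest before thresholding so masked channels cannot spike. The real RM module learns its scores end-to-end; this sketch simply ranks channels by mean absolute membrane potential, with made-up potentials and a fixed keep ratio:

```python
import numpy as np

def refine_and_mask(membrane, keep_ratio=0.5):
    # Score channels by mean |membrane potential|, keep the top fraction,
    # and zero the rest so masked channels cannot spike.
    scores = np.abs(membrane).mean(axis=(1, 2))
    k = max(1, int(len(scores) * keep_ratio))
    mask = np.zeros_like(scores)
    mask[np.argsort(scores)[-k:]] = 1.0
    return membrane * mask[:, None, None]

def spike(membrane, v_th=1.0):
    return (membrane >= v_th).astype(np.float32)

# Four channels with constant potentials: three cross threshold, but the
# mask keeps only the top-scoring two, so spiking activity drops.
m = np.stack([np.full((2, 2), v) for v in (1.2, 1.5, 0.3, 2.0)])
dense = spike(m).sum()
sparse = spike(refine_and_mask(m)).sum()
```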
Affiliation(s)
- Man Yao: School of Automation Science and Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China; Peng Cheng Laboratory, Shenzhen 518000, China
- Hengyu Zhang: School of Automation Science and Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China; Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518000, China
- Guangshe Zhao: School of Automation Science and Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China
- Xiyu Zhang: School of Automation Science and Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China
- Dingheng Wang: Northwest Institute of Mechanical & Electrical Engineering, Xianyang, Shaanxi, China
- Gang Cao: Beijing Academy of Artificial Intelligence, Beijing 100089, China
- Guoqi Li: Peng Cheng Laboratory, Shenzhen 518000, China; Institute of Automation, Chinese Academy of Sciences, Beijing 100089, China
11. Xiao M, Meng Q, Zhang Z, Wang Y, Lin Z. SPIDE: A purely spike-based method for training feedback spiking neural networks. Neural Netw 2023;161:9-24. PMID: 36736003. DOI: 10.1016/j.neunet.2023.01.026.
Abstract
Spiking neural networks (SNNs) with event-based computation are promising brain-inspired models for energy-efficient applications on neuromorphic hardware. However, most supervised SNN training methods, such as conversion from artificial neural networks or direct training with surrogate gradients, require complex computation rather than spike-based operations of spiking neurons during training. In this paper, we study spike-based implicit differentiation on the equilibrium state (SPIDE) that extends the recently proposed training method, implicit differentiation on the equilibrium state (IDE), for supervised learning with purely spike-based computation, which demonstrates the potential for energy-efficient training of SNNs. Specifically, we introduce ternary spiking neuron couples and prove that implicit differentiation can be solved by spikes based on this design, so the whole training procedure, including both forward and backward passes, is made as event-driven spike computation, and weights are updated locally with two-stage average firing rates. Then we propose to modify the reset membrane potential to reduce the approximation error of spikes. With these key components, we can train SNNs with flexible structures in a small number of time steps and with firing sparsity during training, and the theoretical estimation of energy costs demonstrates the potential for high efficiency. Meanwhile, experiments show that even with these constraints, our trained models can still achieve competitive results on MNIST, CIFAR-10, CIFAR-100, and CIFAR10-DVS.
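The equilibrium state that IDE/SPIDE differentiates through can be illustrated with plain fixed-point iteration on an averaged firing-rate map. The paper's contribution is computing both this equilibrium and its implicit gradient with purely spike-based operations; here the iteration is done in floating point, and the feedback weights are an arbitrary contraction chosen so the iteration converges:

```python
import numpy as np

def equilibrium_rate(W, F, x, iters=100):
    # Iterate a = clip(W a + F x, 0, 1) to its fixed point; the clip
    # reflects firing rates being bounded. Converges when W is contractive.
    a = np.zeros(W.shape[0])
    for _ in range(iters):
        a = np.clip(W @ a + F @ x, 0.0, 1.0)
    return a

W = np.array([[0.0, 0.2], [0.2, 0.0]])   # feedback weights, spectral norm < 1
x = np.array([0.5, 0.1])
a = equilibrium_rate(W, np.eye(2), x)    # equilibrium average firing rates
```

Training then treats `a` as an implicit function of the weights and differentiates through the fixed-point condition rather than through the unrolled dynamics.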
Affiliation(s)
- Mingqing Xiao
- National Key Laboratory of General Artificial Intelligence, School of Intelligence Science and Technology, Peking University, China.
- Qingyan Meng
- The Chinese University of Hong Kong, Shenzhen, China; Shenzhen Research Institute of Big Data, Shenzhen 518115, China.
- Zongpeng Zhang
- Center for Data Science, Academy for Advanced Interdisciplinary Studies, Peking University, China.
- Yisen Wang
- National Key Laboratory of General Artificial Intelligence, School of Intelligence Science and Technology, Peking University, China; Institute for Artificial Intelligence, Peking University, China.
- Zhouchen Lin
- National Key Laboratory of General Artificial Intelligence, School of Intelligence Science and Technology, Peking University, China; Institute for Artificial Intelligence, Peking University, China; Peng Cheng Laboratory, China.

12
Fu J, Wang J, He X, Ming J, Wang L, Wang Y, Shao H, Zheng C, Xie L, Ling H. Pseudo-transistors for emerging neuromorphic electronics. Sci Technol Adv Mater 2023; 24:2180286. PMID: 36970452. PMCID: PMC10035954. DOI: 10.1080/14686996.2023.2180286.
Abstract
Artificial synaptic devices are the cornerstone of neuromorphic electronics. Developing new artificial synaptic devices and simulating biological synaptic computational functions are important tasks in the field. Although two-terminal memristors and three-terminal synaptic transistors have shown significant capability as artificial synapses, practical applications demand more stable devices and simpler integration. Combining the configuration advantages of memristors and transistors, a novel pseudo-transistor has been proposed. Here, recent advances in pseudo-transistor-based neuromorphic electronics are reviewed. The working mechanisms, device structures, and materials of three typical pseudo-transistors, namely tunneling random access memory (TRAM), memflash, and memtransistor, are comprehensively discussed. Finally, future developments and challenges in this field are emphasized.
Affiliation(s)
- Jingwei Fu
- State Key Laboratory of Organic Electronics and Information Displays & Institute of Advanced Materials (IAM), Nanjing University of Posts & Telecommunications, Nanjing, China
- Jie Wang
- State Key Laboratory of Organic Electronics and Information Displays & Institute of Advanced Materials (IAM), Nanjing University of Posts & Telecommunications, Nanjing, China
- Xiang He
- State Key Laboratory of Organic Electronics and Information Displays & Institute of Advanced Materials (IAM), Nanjing University of Posts & Telecommunications, Nanjing, China
- Jianyu Ming
- State Key Laboratory of Organic Electronics and Information Displays & Institute of Advanced Materials (IAM), Nanjing University of Posts & Telecommunications, Nanjing, China
- Le Wang
- State Key Laboratory of Organic Electronics and Information Displays & Institute of Advanced Materials (IAM), Nanjing University of Posts & Telecommunications, Nanjing, China
- Yiru Wang
- State Key Laboratory of Organic Electronics and Information Displays & Institute of Advanced Materials (IAM), Nanjing University of Posts & Telecommunications, Nanjing, China
- He Shao
- State Key Laboratory of Organic Electronics and Information Displays & Institute of Advanced Materials (IAM), Nanjing University of Posts & Telecommunications, Nanjing, China
- Chaoyue Zheng
- State Key Laboratory of Organic Electronics and Information Displays & Institute of Advanced Materials (IAM), Nanjing University of Posts & Telecommunications, Nanjing, China
- Yangtze Delta Region Institute (Huzhou), University of Electronic Science and Technology of China, Huzhou, China
- Linghai Xie
- State Key Laboratory of Organic Electronics and Information Displays & Institute of Advanced Materials (IAM), Nanjing University of Posts & Telecommunications, Nanjing, China
- Haifeng Ling
- State Key Laboratory of Organic Electronics and Information Displays & Institute of Advanced Materials (IAM), Nanjing University of Posts & Telecommunications, Nanjing, China

13
Chen H, Li H, Ma T, Han S, Zhao Q. Biological function simulation in neuromorphic devices: from synapse and neuron to behavior. Sci Technol Adv Mater 2023; 24:2183712. PMID: 36926202. PMCID: PMC10013381. DOI: 10.1080/14686996.2023.2183712.
Abstract
With the boom in data storage and processing, brain-inspired computing provides an effective approach to the resulting computational bottleneck. Various emerging materials and devices have been reported to promote the development of neuromorphic computing. Among them, neuromorphic devices, represented by the memristor, have attracted extensive research owing to their outstanding ability to emulate the brain's functions, from synaptic plasticity and sensory-memory neurons to some intelligent behaviors of living creatures. Herein, we review the progress in mimicking these brain functions with neuromorphic devices, concentrating on synapses (i.e., various forms of synaptic plasticity triggered by electricity and/or light), neurons (including various sensory nervous systems), and intelligent behaviors (such as the conditioned reflex represented by Pavlov's dog experiment). Finally, some challenges and prospects related to neuromorphic devices are presented.
Affiliation(s)
- Hui Chen
- Heart Center of Henan Provincial People’s Hospital, Central China Fuwai Hospital, Central China Fuwai Hospital of Zhengzhou University, Zhengzhou, P. R. China
- Huilin Li
- Henan Key Laboratory of Photovoltaic Materials, Henan University, Kaifeng, P. R. China
- Ting Ma
- Henan Key Laboratory of Photovoltaic Materials, Henan University, Kaifeng, P. R. China
- Shuangshuang Han
- Henan Key Laboratory of Photovoltaic Materials, Henan University, Kaifeng, P. R. China
- Qiuping Zhao
- Heart Center of Henan Provincial People’s Hospital, Central China Fuwai Hospital, Central China Fuwai Hospital of Zhengzhou University, Zhengzhou, P. R. China

14
Zahoor F, Hussin FA, Isyaku UB, Gupta S, Khanday FA, Chattopadhyay A, Abbas H. Resistive random access memory: introduction to device mechanism, materials and application to neuromorphic computing. Discov Nano 2023; 18:36. PMID: 37382679. PMCID: PMC10409712. DOI: 10.1186/s11671-023-03775-y.
Abstract
Modern computing technologies are undergoing a rapidly changing landscape; thus, demand is growing for new memory types that are fast, energy efficient and durable. The limited scaling capabilities of conventional memory technologies are pushing data-intense applications beyond the scope of silicon-based complementary metal oxide semiconductors (CMOS). Resistive random access memory (RRAM) is one of the most suitable emerging memory candidates, having demonstrated the potential to replace state-of-the-art integrated electronic devices for advanced computing and for digital and analog circuit applications, including neuromorphic networks. RRAM has grown in prominence in recent years due to its simple structure, long retention, high operating speed, ultra-low-power operation, ability to scale to lower dimensions without affecting device performance, and the possibility of three-dimensional integration for high-density applications. Over the past few years, research has shown RRAM to be one of the most suitable candidates for designing efficient, intelligent and secure computing systems in the post-CMOS era. In this manuscript, the journey and the device engineering of RRAM are detailed, with a special focus on the resistive switching mechanism. This review also covers RRAM based on two-dimensional (2D) materials, as 2D materials offer unique electrical, chemical, mechanical and physical properties owing to their ultrathin, flexible and multilayer structure. Finally, the applications of RRAM in the field of neuromorphic computing are presented.
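The resistive switching mechanism at the heart of RRAM can be caricatured with the textbook linear ion-drift memristor model; the constants below are arbitrary and not tied to any particular RRAM stack:

```python
# Illustrative sketch of resistive switching using the linear ion-drift
# memristor model: an internal state w in [0, 1] sets the mix of low- and
# high-resistance regions, and applied voltage drifts w. Values are arbitrary.

R_ON, R_OFF = 100.0, 16000.0   # low/high resistance states (ohms)
K = 1e4                        # lumped ion-mobility factor (illustrative)
DT = 1e-4                      # time step (s)

def step(w, v):
    """Advance the internal state w under applied voltage v; return (w, R)."""
    r = R_ON * w + R_OFF * (1.0 - w)   # series model of doped/undoped regions
    i = v / r
    w += K * i * DT                    # ion drift moves the doped-region boundary
    return min(max(w, 0.0), 1.0), r

w = 0.1
for _ in range(5000):          # positive pulses: SET toward low resistance
    w, r_set = step(w, 1.0)
for _ in range(5000):          # negative pulses: RESET toward high resistance
    w, r_reset = step(w, -1.0)
print(r_set < r_reset)         # resistance depends on stimulus history
```

The history-dependent resistance is the nonvolatile state exploited for both memory and synaptic weight storage.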
Affiliation(s)
- Furqan Zahoor
- School of Computer Science and Engineering, Nanyang Technological University, Singapore, Singapore
- Fawnizu Azmadi Hussin
- Department of Electrical and Electronics Engineering, Universiti Teknologi Petronas, Seri Iskandar, Malaysia
- Usman Bature Isyaku
- Department of Electrical and Electronics Engineering, Universiti Teknologi Petronas, Seri Iskandar, Malaysia
- Shagun Gupta
- School of Electronics and Communication Engineering, Shri Mata Vaishno Devi University, Katra, India
- Farooq Ahmad Khanday
- Department of Electronics & Instrumentation Technology, University of Kashmir, Srinagar, India
- Anupam Chattopadhyay
- School of Computer Science and Engineering, Nanyang Technological University, Singapore, Singapore
- Haider Abbas
- Division of Material Science and Engineering, Hanyang University, Seoul, South Korea
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, Singapore

15

16
Kwon KC, Baek JH, Hong K, Kim SY, Jang HW. Memristive Devices Based on Two-Dimensional Transition Metal Chalcogenides for Neuromorphic Computing. Nanomicro Lett 2022; 14:58. PMID: 35122527. PMCID: PMC8818077. DOI: 10.1007/s40820-021-00784-3.
Abstract
Two-dimensional (2D) transition metal chalcogenides (TMCs) and their heterostructures are appealing as building blocks in a wide range of electronic and optoelectronic devices, particularly futuristic memristive and synaptic devices for brain-inspired neuromorphic computing systems. Distinct properties such as high durability, electrical and optical tunability, clean surfaces, flexibility, and LEGO-stacking capability enable simple fabrication with high integration density, energy-efficient operation, and high scalability. This review provides a thorough examination of high-performance memristors based on 2D TMCs for neuromorphic computing applications, including the promise of 2D TMC materials and heterostructures, as well as state-of-the-art demonstrations of memristive devices. The challenges and future prospects for the development of these emerging materials and devices are also discussed. The purpose of this review is to provide an outlook on the fabrication and characterization of neuromorphic memristors based on 2D TMCs.
Affiliation(s)
- Ki Chang Kwon
- Department of Materials Science and Engineering, Research Institute of Advanced Materials, Seoul National University, Seoul, 08826 Republic of Korea
- Interdisciplinary Materials Measurement Institute, Korea Research Institute of Standards and Science (KRISS), Daejeon, 34133 Republic of Korea
- Ji Hyun Baek
- Department of Materials Science and Engineering, Research Institute of Advanced Materials, Seoul National University, Seoul, 08826 Republic of Korea
- Kootak Hong
- Department of Materials Science and Engineering, Research Institute of Advanced Materials, Seoul National University, Seoul, 08826 Republic of Korea
- Soo Young Kim
- Department of Materials Science and Engineering, Institute of Green Manufacturing Technology, Korea University, Seoul, 02841 Republic of Korea
- Ho Won Jang
- Department of Materials Science and Engineering, Research Institute of Advanced Materials, Seoul National University, Seoul, 08826 Republic of Korea
- Advanced Institute of Convergence Technology, Seoul National University, Suwon, 16229 Korea

17
Sravya V, Pavithra VR, Thangadurai TD, Nataraj D, Kumar NS. Excitation-independent and fluorescence-reversible N-GQD for picomolar detection of inhibitory neurotransmitter in milk samples ‒ an alleyway for possible neuromorphic computing application. Talanta 2021; 239:123132. PMID: 34920264. DOI: 10.1016/j.talanta.2021.123132.
Abstract
N-GQDs with an average size of ca. 20-30 nm are utilized for the picomolar detection of the inhibitory neurotransmitter glycine (Gly) at pH ca. 7.0. The crystalline nature, morphology, elemental composition, and chemical state of the N-GQDs are investigated by XRD, FE-SEM, HR-TEM, XPS, and FT-IR techniques. The addition of Gly (100 × 10⁻⁹ M; 0 → 1.0 mL) steadily quenches the fluorescence intensity of the N-GQDs (1 × 10⁻⁶ M) at 432 nm (λex 333 nm) due to the inner filter effect (IFE) through the formation of a ground-state complex, N-GQD•Gly. The excitation-independent N-GQDs showed outstanding selectivity and sensitivity towards Gly, with a binding constant Ka = 8.97 × 10⁻³ M⁻¹ and an LoD of 21.04 pM (S/N = 3). A time-correlated single-photon counting experiment confirms the static quenching of the N-GQDs (8.77 → 8.85 ns) in the presence of Gly. The interference of other amino acids with the strong binding of the N-GQD•Gly complex in H2O is examined. Combinatorial Ex-OR and NOT gate logic circuits that could be useful in neuromorphic computing are developed based on the reversible fluorescence intensity changes of the N-GQDs upon the addition of Gly (ΦF 0.54 → 0.39). The real-time application of the N-GQDs was investigated using commercially available milk samples. Remarkably, not less than 99% cytotoxic reactivity of the N-GQDs is attained against HeLa cells.
Affiliation(s)
- V Sravya
- Department of Nanoscience and Technology, Sri Ramakrishna Engineering College, Affiliated with Anna University, Coimbatore, 641 022, Tamilnadu, India; Department of Physics, Kongunadu Arts and Science College, Affiliated to Bharathiar University, Coimbatore, 641 029, Tamilnadu, India
- V R Pavithra
- Department of Nanoscience and Technology, Sri Ramakrishna Engineering College, Affiliated with Anna University, Coimbatore, 641 022, Tamilnadu, India
- T Daniel Thangadurai
- Department of Nanoscience and Technology, Sri Ramakrishna Engineering College, Affiliated with Anna University, Coimbatore, 641 022, Tamilnadu, India.
- D Nataraj
- Department of Physics, Bharathiar University, Coimbatore, 641 046, Tamilnadu, India
- N Sathish Kumar
- Department of Electronics and Communication Engineering, Sri Ramakrishna Engineering College, Affiliated to Anna University, Coimbatore, 641 022, Tamilnadu, India

18
Abstract
Spiking Neural Networks (SNNs) have recently emerged as a new generation of low-power deep neural networks due to sparse, asynchronous, and binary event-driven processing. Most previous deep SNN optimization methods focus on static datasets (e.g., MNIST) from a conventional frame-based camera. On the other hand, optimization techniques for event data from Dynamic Vision Sensor (DVS) cameras are still in their infancy. Most prior SNN techniques handling DVS data are limited to shallow networks and thus show low performance. Generally, we observe that the integrate-and-fire behavior of spiking neurons diminishes spike activity in deeper layers. The sparse spike activity results in a sub-optimal solution during training (i.e., performance degradation). To address this limitation, we propose novel algorithmic and architectural advances to accelerate the training of very deep SNNs on DVS data. Specifically, we propose Spike Activation Lift Training (SALT), which increases spike activity across all layers by optimizing both weights and thresholds in convolutional layers. After applying SALT, we train the weights based on the cross-entropy loss. SALT helps the networks convey ample information across all layers during training and therefore improves performance. Furthermore, we propose a simple and effective architecture, called Switched-BN, which exploits Batch Normalization (BN). Previous methods show that standard BN is incompatible with the temporal dynamics of SNNs. Therefore, in the Switched-BN architecture, we apply BN to the last layer of an SNN after accumulating all the spikes from the previous layer with a spike voltage accumulator (i.e., converting temporal spike information to a float value). Even though we apply BN in just one layer of the SNN, our results demonstrate a considerable performance gain without any significant computational overhead. Through extensive experiments, we show the effectiveness of SALT and Switched-BN for training very deep SNNs from scratch on various benchmarks including DVS-Cifar10, N-Caltech, DHP19, CIFAR10, and CIFAR100. To the best of our knowledge, this is the first work showing state-of-the-art performance with deep SNNs on DVS data.
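The Switched-BN idea, accumulating spikes into a float tensor before a single batch-normalization step, can be sketched as follows; the random binary spikes and shapes are placeholders for a real SNN layer:

```python
# Sketch of the Switched-BN idea: run the last spiking layer over T timesteps,
# accumulate spikes into per-sample float activations, then apply standard
# batch normalization once. Random binary spikes stand in for a real SNN layer.

import random, math

random.seed(0)
B, F, T = 4, 3, 8   # batch size, features, timesteps

# Accumulate spike trains (0/1 per timestep) into float activations in [0, 1].
acc = [[sum(random.random() < 0.4 for _ in range(T)) / T for _ in range(F)]
       for _ in range(B)]

def batch_norm(x, eps=1e-5):
    """Normalize each feature over the batch dimension (gamma=1, beta=0)."""
    b, f = len(x), len(x[0])
    out = [[0.0] * f for _ in range(b)]
    for j in range(f):
        col = [x[i][j] for i in range(b)]
        mu = sum(col) / b
        var = sum((v - mu) ** 2 for v in col) / b
        for i in range(b):
            out[i][j] = (x[i][j] - mu) / math.sqrt(var + eps)
    return out

normed = batch_norm(acc)
col0 = [row[0] for row in normed]
print(abs(sum(col0)) < 1e-6)   # each feature is zero-mean after BN
```

Because BN acts on the accumulated float tensor rather than on individual timesteps, it sidesteps the incompatibility with temporal spike dynamics noted in the abstract.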
Affiliation(s)
- Youngeun Kim
- Department of Electrical Engineering, Yale University, New Haven, CT, USA.

19
Song S, Ma C, Sun W, Xu J, Dang J, Yu Q. Efficient learning with augmented spikes: A case study with image classification. Neural Netw 2021; 142:205-212. PMID: 34023641. DOI: 10.1016/j.neunet.2021.05.002.
Abstract
Efficient learning of spikes plays a valuable role in training spiking neural networks (SNNs) to have desired responses to input stimuli. However, current learning rules are limited to a binary form of spikes. The seemingly ubiquitous phenomenon of bursting in nervous systems suggests a new way to carry more information with spike bursts in addition to spike times. Based on this, we introduce an advanced form, the augmented spike, where spike coefficients are used to carry additional information. How neurons could learn from and benefit from augmented spikes remains unclear. In this paper, we propose two new efficient learning rules to process spatiotemporal patterns composed of augmented spikes. Moreover, we examine the learning abilities of our methods with a synthetic recognition task on augmented spike patterns and two practical image classification tasks. Experimental results demonstrate that our rules are capable of extracting information carried by both the timing and the coefficient of spikes. Our proposed approaches achieve remarkable performance and good robustness under various noise conditions compared to benchmarks. The improved performance indicates the merits of augmented spikes and our learning rules, which could be beneficial and generalizable to a broad range of spike-based platforms.
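A minimal sketch of how an augmented spike's coefficient adds information beyond its timing; the exponential kernel and all numbers are illustrative, not the paper's learning rules:

```python
# Sketch of "augmented spikes": each input spike carries a coefficient in
# addition to its time, and the postsynaptic potential scales with it.
# Kernel and parameters are illustrative.

import math

TAU = 10.0  # membrane time constant (ms)

def potential(t, inputs, weights):
    """V(t) for augmented spikes given as (afferent, time, coefficient)."""
    v = 0.0
    for (i, t_f, c) in inputs:
        if t >= t_f:
            v += weights[i] * c * math.exp(-(t - t_f) / TAU)
    return v

weights = [0.5, 0.5]
binary = [(0, 5.0, 1.0), (1, 7.0, 1.0)]      # plain spikes: coefficient 1
augmented = [(0, 5.0, 3.0), (1, 7.0, 1.0)]   # same times, larger burst coefficient

v_bin = potential(8.0, binary, weights)
v_aug = potential(8.0, augmented, weights)
print(v_aug > v_bin)  # identical timing, yet the coefficient carries extra information
```

A learning rule for augmented spikes can then adapt weights using both the kernel value (timing) and the coefficient, which is the extra degree of freedom the paper exploits.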
Affiliation(s)
- Shiming Song
- Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, 300350, China
- Chenxiang Ma
- Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, 300350, China
- Wei Sun
- Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, 300350, China
- Junhai Xu
- Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, 300350, China
- Jianwu Dang
- Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, 300350, China
- Qiang Yu
- Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, 300350, China.

20
Krishna A, Mittal D, Virupaksha SG, Nair AR, Narayanan R, Thakur CS. Biomimetic FPGA-based spatial navigation model with grid cells and place cells. Neural Netw 2021; 139:45-63. PMID: 33677378. DOI: 10.1016/j.neunet.2021.01.028.
Abstract
The mammalian spatial navigation system is characterized by an initial divergence of internal representations, with disparate classes of neurons responding to distinct features including location, speed, borders and head direction; an ensuing convergence finally enables navigation and path integration. Here, we report the algorithmic and hardware implementation of biomimetic neural structures encompassing a feed-forward trimodular, multi-layer architecture representing grid-cell, place-cell and decoding modules for navigation. The grid-cell module was composed of neurons that fired in a grid-like pattern and was built of distinct layers that constituted the dorsoventral span of the medial entorhinal cortex. Each layer was built as an independent continuous attractor network with a distinct grid-field spatial scale. The place-cell module was composed of neurons that fired at one or a few spatial locations, organized into different clusters based on convergent modular inputs from different grid-cell layers, replicating the gradient in place-field size along the hippocampal dorso-ventral axis. The decoding module, a two-layer neural network that constitutes the convergence of the divergent representations in the preceding modules, received inputs from the place-cell module and provided specific coordinates of the navigating object. After vital design optimizations involving all modules, we implemented the trimodular structure on a Zynq UltraScale+ field-programmable gate array and demonstrated its capacity to precisely estimate the navigational trajectory with minimal overall resource consumption, involving a mere 2.92% look-up table utilization. Our implementation of a biomimetic, digital spatial navigation system is stable, reliable, reconfigurable and real-time, with an execution time of about 32 s for 100k input samples (in contrast to 40 minutes on an Intel Core i7-7700 CPU with 8 cores clocking at 3.60 GHz), and thus can be deployed for autonomous robotic navigation without requiring additional sensors.
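The grid-cell firing pattern that the first module emulates is often modeled as a sum of three plane waves at 60-degree orientations; the sketch below uses this generic textbook model rather than the paper's continuous-attractor implementation:

```python
# Illustrative grid-cell rate map: the classic sum of three plane waves at
# 60-degree orientations yields hexagonal firing fields. Scale and phase
# parameters are illustrative.

import math

def grid_rate(x, y, scale=1.0, phase=(0.0, 0.0)):
    """Normalized firing rate in [0, 1] at location (x, y)."""
    k = 4 * math.pi / (math.sqrt(3) * scale)   # wave number for the field spacing
    total = 0.0
    for theta in (-math.pi / 3, 0.0, math.pi / 3):
        kx, ky = k * math.cos(theta), k * math.sin(theta)
        total += math.cos(kx * (x - phase[0]) + ky * (y - phase[1]))
    return (total + 1.5) / 4.5                 # rescale from [-1.5, 3] to [0, 1]

peak = grid_rate(0.0, 0.0)   # a vertex of the hexagonal firing lattice
print(round(peak, 3))
```

Grid cells with different `scale` values correspond to the distinct layers of the grid-cell module described above, whose outputs converge onto place cells.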
Affiliation(s)
- Adithya Krishna
- NeuRonICS Lab, Department of Electronic Systems Engineering, Indian Institute of Science, Bangalore 560012, India.
- Divyansh Mittal
- Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore 560012, India.
- Siri Garudanagiri Virupaksha
- NeuRonICS Lab, Department of Electronic Systems Engineering, Indian Institute of Science, Bangalore 560012, India.
- Abhishek Ramdas Nair
- NeuRonICS Lab, Department of Electronic Systems Engineering, Indian Institute of Science, Bangalore 560012, India.
- Rishikesh Narayanan
- Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore 560012, India.
- Chetan Singh Thakur
- NeuRonICS Lab, Department of Electronic Systems Engineering, Indian Institute of Science, Bangalore 560012, India.

21
Huang W, Xia X, Zhu C, Steichen P, Quan W, Mao W, Yang J, Chu L, Li X. Memristive Artificial Synapses for Neuromorphic Computing. Nanomicro Lett 2021; 13:85. PMID: 34138298. PMCID: PMC8006524. DOI: 10.1007/s40820-021-00618-2.
Abstract
Neuromorphic computing simulates the operation of biological brain function for information processing and can potentially overcome the bottleneck of the von Neumann architecture. Such computing is realized with memristive hardware neural networks in which synaptic devices that mimic the biological synapses of the brain are the primary units. Mimicking synaptic functions with these devices is critical in neuromorphic systems. In the last decade, electrical and optical signals have been incorporated into synaptic devices, promoting the simulation of various synaptic functions. In this review, these devices are discussed by categorizing them into electrically stimulated, optically stimulated, and photoelectric synergetic synaptic devices, based on the stimulation by electrical and optical signals. The working mechanisms of the devices are analyzed in detail, followed by a discussion of progress in mimicking synaptic functions. In addition, existing application scenarios of various synaptic devices are outlined. Furthermore, the performance and future development of synaptic devices that could be significant for building efficient neuromorphic systems are discussed.
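Gradual, saturating conductance modulation, the basic synaptic behavior such devices are asked to mimic, can be sketched with a simple pulse-update model; the rule and constants are illustrative, not tied to any specific device:

```python
# Sketch of gradual conductance modulation in a memristive synapse: each
# potentiation (depression) pulse nudges the conductance toward G_MAX (G_MIN)
# with a saturating update. Update rule and constants are illustrative.

G_MIN, G_MAX, ALPHA = 0.0, 1.0, 0.1

def pulse(g, potentiate):
    """Apply one programming pulse; return the new conductance."""
    if potentiate:
        return g + ALPHA * (G_MAX - g)   # long-term potentiation (LTP)
    return g - ALPHA * (g - G_MIN)       # long-term depression (LTD)

g = 0.0
ltp = []
for _ in range(20):                      # train of potentiating pulses
    g = pulse(g, True)
    ltp.append(g)
for _ in range(20):                      # train of depressing pulses
    g = pulse(g, False)

print(round(ltp[0], 3), round(ltp[-1], 3))  # early steps large, then saturating
```

The analog, multilevel conductance produced by such updates is what lets a single device store a synaptic weight in a hardware neural network.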
Affiliation(s)
- Wen Huang
- New Energy Technology Engineering Laboratory of Jiangsu Province and School of Science, Nanjing University of Posts and Telecommunications (NJUPT), Nanjing, 210023, People's Republic of China.
- Xuwen Xia
- New Energy Technology Engineering Laboratory of Jiangsu Province and School of Science, Nanjing University of Posts and Telecommunications (NJUPT), Nanjing, 210023, People's Republic of China
- Chen Zhu
- College of Electronic and Optical Engineering and College of Microelectronics, Nanjing University of Posts and Telecommunications (NJUPT), Nanjing, 210023, People's Republic of China
- Parker Steichen
- Department of Materials Science and Engineering, University of Washington, Seattle, WA, 98195-2120, USA
- Weidong Quan
- New Energy Technology Engineering Laboratory of Jiangsu Province and School of Science, Nanjing University of Posts and Telecommunications (NJUPT), Nanjing, 210023, People's Republic of China
- Weiwei Mao
- New Energy Technology Engineering Laboratory of Jiangsu Province and School of Science, Nanjing University of Posts and Telecommunications (NJUPT), Nanjing, 210023, People's Republic of China
- Jianping Yang
- New Energy Technology Engineering Laboratory of Jiangsu Province and School of Science, Nanjing University of Posts and Telecommunications (NJUPT), Nanjing, 210023, People's Republic of China
- Liang Chu
- New Energy Technology Engineering Laboratory of Jiangsu Province and School of Science, Nanjing University of Posts and Telecommunications (NJUPT), Nanjing, 210023, People's Republic of China.
- Xing'ao Li
- New Energy Technology Engineering Laboratory of Jiangsu Province and School of Science, Nanjing University of Posts and Telecommunications (NJUPT), Nanjing, 210023, People's Republic of China.
- Key Laboratory for Organic Electronics and Information Displays and Institute of Advanced Materials, Jiangsu National Synergistic Innovation Center for Advanced Materials, School of Materials Science and Engineering, Nanjing University of Posts and Telecommunications (NUPT), 9 Wenyuan Road, Nanjing, 210023, People's Republic of China.

22
Shirai S, Acharya SK, Bose SK, Mallinson JB, Galli E, Pike MD, Arnold MD, Brown SA. Long-range temporal correlations in scale-free neuromorphic networks. Netw Neurosci 2020; 4:432-447. PMID: 32537535. PMCID: PMC7286302. DOI: 10.1162/netn_a_00128.
Abstract
Biological neuronal networks are the computing engines of the mammalian brain. These networks exhibit structural characteristics such as hierarchical architectures, small-world attributes, and scale-free topologies, providing the basis for the emergence of rich temporal characteristics such as scale-free dynamics and long-range temporal correlations. Devices that have both the topological and the temporal features of a neuronal network would be a significant step toward constructing a neuromorphic system that can emulate the computational ability and energy efficiency of the human brain. Here we use numerical simulations to show that percolating networks of nanoparticles exhibit structural properties that are reminiscent of biological neuronal networks, and then show experimentally that stimulation of percolating networks by an external voltage stimulus produces temporal dynamics that are self-similar, follow power-law scaling, and exhibit long-range temporal correlations. These results are expected to have important implications for the development of neuromorphic devices, especially for those based on the concept of reservoir computing.

Author summary: Biological neuronal networks exhibit well-defined properties such as hierarchical structures and scale-free topologies, as well as a high degree of local clustering and short path lengths between nodes. These structural properties are intimately connected to the observed long-range temporal correlations in the network dynamics. Fabrication of artificial networks with similar structural properties would facilitate brain-like (“neuromorphic”) computing. Here we show experimentally that percolating networks of nanoparticles exhibit similar long-range temporal correlations to those of biological neuronal networks and use simulations to demonstrate that the dynamics arise from an underlying scale-free network architecture. We discuss similarities between the biological and percolating systems and highlight the potential for the percolating networks to be used in neuromorphic computing applications.
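Long-range temporal correlations of the kind reported here are conventionally quantified with detrended fluctuation analysis (DFA); the sketch below is a generic implementation, not the authors' analysis pipeline:

```python
# Sketch of detrended fluctuation analysis (DFA), the standard estimator of
# long-range temporal correlations: scaling exponent alpha ~ 0.5 for white
# noise, alpha > 0.5 for persistent (long-range correlated) signals.

import random, math

def dfa_alpha(x, scales=(8, 16, 32, 64, 128)):
    """Estimate the DFA scaling exponent of a 1-D signal."""
    mean = sum(x) / len(x)
    y, s = [], 0.0
    for v in x:                          # integrated profile of the signal
        s += v - mean
        y.append(s)
    logs = []
    for n in scales:
        sq, count = 0.0, 0
        for start in range(0, len(y) - n + 1, n):
            seg = y[start:start + n]
            # least-squares linear detrend of the segment
            t_mean = (n - 1) / 2.0
            s_mean = sum(seg) / n
            num = sum((t - t_mean) * (v - s_mean) for t, v in enumerate(seg))
            den = sum((t - t_mean) ** 2 for t in range(n))
            b = num / den
            a = s_mean - b * t_mean
            sq += sum((v - (a + b * t)) ** 2 for t, v in enumerate(seg))
            count += n
        logs.append((math.log(n), 0.5 * math.log(sq / count)))
    # slope of log F(n) versus log n is the DFA exponent alpha
    lx = sum(p[0] for p in logs) / len(logs)
    ly = sum(p[1] for p in logs) / len(logs)
    return (sum((p[0] - lx) * (p[1] - ly) for p in logs) /
            sum((p[0] - lx) ** 2 for p in logs))

random.seed(1)
white = [random.gauss(0, 1) for _ in range(4096)]
print(0.3 < dfa_alpha(white) < 0.7)  # uncorrelated noise: alpha near 0.5
```

Applied to device event trains, an exponent well above 0.5 is the signature of the long-range temporal correlations the paper reports.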
Collapse
Affiliation(s)
- Shota Shirai
- The MacDiarmid Institute for Advanced Materials and Nanotechnology, School of Physical and Chemical Sciences, Te Kura Matū, University of Canterbury, Christchurch, New Zealand
- Susant Kumar Acharya
- The MacDiarmid Institute for Advanced Materials and Nanotechnology, School of Physical and Chemical Sciences, Te Kura Matū, University of Canterbury, Christchurch, New Zealand
- Saurabh Kumar Bose
- The MacDiarmid Institute for Advanced Materials and Nanotechnology, School of Physical and Chemical Sciences, Te Kura Matū, University of Canterbury, Christchurch, New Zealand
- Joshua Brian Mallinson
- The MacDiarmid Institute for Advanced Materials and Nanotechnology, School of Physical and Chemical Sciences, Te Kura Matū, University of Canterbury, Christchurch, New Zealand
- Edoardo Galli
- The MacDiarmid Institute for Advanced Materials and Nanotechnology, School of Physical and Chemical Sciences, Te Kura Matū, University of Canterbury, Christchurch, New Zealand
- Matthew D Pike
- Electrical and Electronics Engineering, University of Canterbury, Christchurch, New Zealand
- Matthew D Arnold
- School of Mathematical and Physical Sciences, University of Technology Sydney, Australia
- Simon Anthony Brown
- The MacDiarmid Institute for Advanced Materials and Nanotechnology, School of Physical and Chemical Sciences, Te Kura Matū, University of Canterbury, Christchurch, New Zealand
23
Abderrahmane N, Lemaire E, Miramond B. Design Space Exploration of Hardware Spiking Neurons for Embedded Artificial Intelligence. Neural Netw 2019; 121:366-386. [PMID: 31593842 DOI: 10.1016/j.neunet.2019.09.024] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2019] [Revised: 09/16/2019] [Accepted: 09/17/2019] [Indexed: 11/30/2022]
Abstract
Machine learning is attracting unprecedented interest in research and industry, due to recent success in many applied contexts such as image classification and object recognition. However, the deployment of these systems requires huge computing capabilities, making them unsuitable for embedded systems. To deal with this limitation, many researchers are investigating brain-inspired computing, an alternative to conventional von Neumann architecture-based computers (CPUs/GPUs), which meet the requirements for computing performance but not for energy efficiency. Therefore, neuromorphic hardware circuits that are adaptable to both parallel and distributed computation need to be designed. In this paper, we focus on Spiking Neural Networks (SNNs) with a comprehensive study of neural coding methods and hardware exploration. In this context, we propose a framework for neuromorphic hardware design space exploration, which allows a suitable architecture to be defined from application-specific constraints, starting from a wide variety of possible architectural choices. For this framework, we have developed a behavioral-level simulator for neuromorphic hardware architectural exploration named NAXT. Moreover, we propose modified versions of the standard Rate Coding technique to make trade-offs with the Time Coding paradigm, which is characterized by the low number of spikes propagating in the network. Thus, we are able to reduce the number of spikes while keeping the same neuron model, which results in an SNN with fewer events to process. By doing so, we seek to reduce the amount of power consumed by the hardware. Furthermore, we present three neuromorphic hardware architectures in order to quantitatively study the implementation of SNNs. One of these architectures integrates a novel hybrid structure: a highly parallel computation core for the most solicited layers, and time-multiplexed computation units for deeper layers.
These architectures are derived from a novel funnel-like Design Space Exploration framework for neuromorphic hardware.
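The rate-coding versus time-coding trade-off explored above can be illustrated with a toy encoder: rate coding spreads a value over many spikes, while latency (time) coding uses a single, earlier-or-later spike. A hedged sketch; the window length and latency mapping are illustrative choices, not the paper's exact coding schemes:

```python
def rate_code(value, T=20):
    """Rate coding: deterministic train whose spike count grows with value in [0, 1]."""
    n = round(value * T)
    return [1 if t < n else 0 for t in range(T)]

def time_code(value, T=20):
    """Latency coding: exactly one spike; stronger inputs fire earlier."""
    train = [0] * T
    train[min(T - 1, int((1.0 - value) * (T - 1)))] = 1
    return train

v = 0.8
# Rate coding emits many spikes; time coding emits one, so fewer events to process.
print(sum(rate_code(v)), sum(time_code(v)))  # → 16 1
```

Fewer propagating spikes translate directly into fewer events for the hardware to process, which is the power argument made in the abstract.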
Affiliation(s)
- Edgar Lemaire
- Université Côte d'Azur, CNRS, LEAT, France; Thales Research Technology / STI Group / LCHP, Palaiseau, France.
24
Deng L, Wu Y, Hu X, Liang L, Ding Y, Li G, Zhao G, Li P, Xie Y. Rethinking the performance comparison between SNNS and ANNS. Neural Netw 2019; 121:294-307. [PMID: 31586857 DOI: 10.1016/j.neunet.2019.09.005] [Citation(s) in RCA: 50] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2019] [Revised: 09/04/2019] [Accepted: 09/05/2019] [Indexed: 01/21/2023]
Abstract
Artificial neural networks (ANNs), a popular path towards artificial intelligence, have experienced remarkable success via mature models, various benchmarks, open-source datasets, and powerful computing platforms. Spiking neural networks (SNNs), a category of promising models that mimic the neuronal dynamics of the brain, have gained much attention for brain-inspired computing and have been widely deployed on neuromorphic devices. However, for a long time there have been ongoing debates and skepticism about the value of SNNs in practical applications. Apart from the low-power benefit of spike-driven processing, SNNs usually perform worse than ANNs, especially in terms of application accuracy. Recently, researchers have attempted to address this issue by borrowing learning methodologies from ANNs, such as backpropagation, to train high-accuracy SNN models. The rapid progress in this domain continuously produces impressive results with ever-increasing network size, on a growth path that resembles the development of deep learning. Although these approaches endow SNNs with the capability to approach the accuracy of ANNs, the natural advantages of SNNs and the ways they might outperform ANNs are potentially lost through the use of ANN-oriented workloads and simplistic evaluation metrics. In this paper, we take the visual recognition task as a case study to answer two questions: what workloads are ideal for SNNs, and how should SNNs be evaluated? We design a series of contrast tests using different types of datasets (ANN-oriented and SNN-oriented), diverse processing models, signal conversion methods, and learning algorithms. We propose comprehensive metrics on application accuracy and the cost of memory and compute to evaluate these models, and conduct extensive experiments. We show that on ANN-oriented workloads, SNNs fail to beat their ANN counterparts, while on SNN-oriented workloads, SNNs can indeed perform better.
We further demonstrate that in SNNs there exists a trade-off between application accuracy and execution cost, which is affected by the simulation time window and the firing threshold. Based on these analyses, we recommend the most suitable model for each scenario. To the best of our knowledge, this is the first work to use systematic comparisons to explicitly show that the straightforward porting of workloads from ANNs to SNNs is unwise, although many works do so, and that comprehensive evaluation genuinely matters. Finally, we highlight the urgent need to build a benchmarking framework for SNNs with broader tasks, datasets, and metrics.
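A common compute-cost proxy in such SNN-versus-ANN comparisons counts event-driven synaptic operations for the SNN against dense multiply-accumulates for the ANN. A minimal sketch under illustrative layer sizes and spike counts (not the paper's metrics verbatim):

```python
def snn_synops(spike_counts, fanouts):
    """SNN cost proxy: each emitted spike triggers `fanout` synaptic accumulations."""
    return sum(s * f for s, f in zip(spike_counts, fanouts))

def ann_macs(layer_sizes):
    """ANN cost proxy: dense multiply-accumulates in one forward pass."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

layers = [784, 256, 10]   # layer widths (illustrative MLP)
spikes = [40, 12]         # average spikes emitted per layer per sample (assumed)
fanout = [256, 10]        # synapses driven by each spike
print(snn_synops(spikes, fanout), ann_macs(layers))  # → 10360 203264
```

Sparse spiking makes the event-driven count far smaller than the dense MAC count here, which is exactly why the choice of workload (and hence spike sparsity) dominates such comparisons.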
Affiliation(s)
- Lei Deng
- Department of Precision Instrument, Center for Brain Inspired Computing Research, Tsinghua University, Beijing 100084, China; Department of Electrical and Computer Engineering, University of California, Santa Barbara, CA 93106, USA.
- Yujie Wu
- Department of Precision Instrument, Center for Brain Inspired Computing Research, Tsinghua University, Beijing 100084, China.
- Xing Hu
- Department of Electrical and Computer Engineering, University of California, Santa Barbara, CA 93106, USA.
- Ling Liang
- Department of Electrical and Computer Engineering, University of California, Santa Barbara, CA 93106, USA.
- Yufei Ding
- Department of Computer Science, University of California, Santa Barbara, CA 93106, USA.
- Guoqi Li
- Department of Precision Instrument, Center for Brain Inspired Computing Research, Tsinghua University, Beijing 100084, China.
- Guangshe Zhao
- School of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an 710049, China.
- Peng Li
- Department of Electrical and Computer Engineering, University of California, Santa Barbara, CA 93106, USA.
- Yuan Xie
- Department of Electrical and Computer Engineering, University of California, Santa Barbara, CA 93106, USA.
25
Wu X, Wang Y, Tang H, Yan R. A structure-time parallel implementation of spike-based deep learning. Neural Netw 2019; 113:72-78. [PMID: 30785011 DOI: 10.1016/j.neunet.2019.01.010] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2018] [Revised: 01/08/2019] [Accepted: 01/22/2019] [Indexed: 10/27/2022]
Abstract
Motivated by the recent progress of deep spiking neural networks (SNNs), we propose a structure-time parallel strategy, based on layered structure and one-time computation over a time window, to speed up the prominent spike-based deep learning algorithm known as broadcast alignment. Furthermore, a well-designed deep hierarchical model based on the parallel broadcast alignment is proposed for object recognition. The parallel broadcast alignment achieves a significant 137× speedup compared to its original implementation on the MNIST dataset. The object recognition model achieves higher accuracy than the latest spiking deep convolutional neural networks on the ETH-80 dataset. The proposed parallel strategy and the object recognition model will facilitate both the simulation of deep SNNs for studying spiking neural dynamics and the application of spike-based deep learning to real-world problems.
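The "one-time computation over a time window" idea can be illustrated for a linear synaptic stage: the per-timestep matrix-vector products collapse into a single matrix-matrix product over the whole window. A sketch, not the paper's implementation; shapes and spike rates are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_in, n_out = 50, 100, 30
spikes = (rng.random((T, n_in)) < 0.1).astype(float)  # input spike raster (T, n_in)
W = rng.normal(size=(n_in, n_out))                     # synaptic weights

# Sequential: one matrix-vector product per timestep.
seq = np.stack([spikes[t] @ W for t in range(T)])

# Time-parallel: one matrix-matrix product over the whole window.
par = spikes @ W

print(np.allclose(seq, par))  # → True
```

The two computations are mathematically identical for a stateless synaptic stage; the speedup comes from replacing T small products with one large one, which modern BLAS kernels execute far more efficiently.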
Affiliation(s)
- Xi Wu
- Neuromorphic Computing Research Center, College of Computer Science, Sichuan University, Chengdu, 610065, China
- Yixuan Wang
- Neuromorphic Computing Research Center, College of Computer Science, Sichuan University, Chengdu, 610065, China
- Huajin Tang
- Neuromorphic Computing Research Center, College of Computer Science, Sichuan University, Chengdu, 610065, China
- Rui Yan
- Neuromorphic Computing Research Center, College of Computer Science, Sichuan University, Chengdu, 610065, China.
26
Detorakis G, Sheik S, Augustine C, Paul S, Pedroni BU, Dutt N, Krichmar J, Cauwenberghs G, Neftci E. Neural and Synaptic Array Transceiver: A Brain-Inspired Computing Framework for Embedded Learning. Front Neurosci 2018; 12:583. [PMID: 30210274 PMCID: PMC6123384 DOI: 10.3389/fnins.2018.00583] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2018] [Accepted: 08/03/2018] [Indexed: 11/13/2022] Open
Abstract
Embedded, continual learning for autonomous and adaptive behavior is a key application of neuromorphic hardware. However, neuromorphic implementations of embedded learning at large scales that are both flexible and efficient have been hindered by the lack of a suitable algorithmic framework. As a result, most neuromorphic hardware is trained off-line on large clusters of dedicated processors or GPUs and transferred post hoc to the device. We address this by introducing the neural and synaptic array transceiver (NSAT), a neuromorphic computational framework facilitating flexible and efficient embedded learning by matching algorithmic requirements and neural and synaptic dynamics. NSAT supports event-driven supervised, unsupervised and reinforcement learning algorithms, including deep learning. We demonstrate NSAT in a wide range of tasks, including the simulation of the Mihalas-Niebur neuron, dynamic neural fields, event-driven random back-propagation for event-based deep learning, event-based contrastive divergence for unsupervised learning, and voltage-based learning rules for sequence learning. We anticipate that this contribution will establish the foundation for a new generation of devices enabling adaptive mobile systems, wearable devices, and robots with data-driven autonomy.
Affiliation(s)
- Georgios Detorakis
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, United States
- Sadique Sheik
- Biocircuits Institute, University of California, San Diego, La Jolla, CA, United States
- Charles Augustine
- Intel Corporation-Circuit Research Lab, Hillsboro, OR, United States
- Somnath Paul
- Intel Corporation-Circuit Research Lab, Hillsboro, OR, United States
- Bruno U. Pedroni
- Department of Bioengineering and Institute for Neural Computation, University of California, San Diego, La Jolla, CA, United States
- Nikil Dutt
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, United States
- Department of Computer Science, University of California, Irvine, Irvine, CA, United States
- Jeffrey Krichmar
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, United States
- Department of Computer Science, University of California, Irvine, Irvine, CA, United States
- Gert Cauwenberghs
- Department of Bioengineering and Institute for Neural Computation, University of California, San Diego, La Jolla, CA, United States
- Emre Neftci
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, United States
- Department of Computer Science, University of California, Irvine, Irvine, CA, United States
27
Kulkarni SR, Rajendran B. Spiking neural networks for handwritten digit recognition-Supervised learning and network optimization. Neural Netw 2018; 103:118-127. [PMID: 29674234 DOI: 10.1016/j.neunet.2018.03.019] [Citation(s) in RCA: 36] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2017] [Revised: 02/13/2018] [Accepted: 03/27/2018] [Indexed: 12/17/2022]
Abstract
We demonstrate supervised learning in Spiking Neural Networks (SNNs) for the problem of handwritten digit recognition using the spike-triggered Normalized Approximate Descent (NormAD) algorithm. Our network, which employs neurons operating at sparse biological spike rates below 300 Hz, achieves a classification accuracy of 98.17% on the MNIST test database with four times fewer parameters than the state-of-the-art. We present several insights from extensive numerical experiments regarding optimization of learning parameters and network configuration to improve accuracy. We also describe a number of strategies to optimize the SNN for implementation in memory- and energy-constrained hardware, including approximations in computing the neuronal dynamics and reduced precision in storing the synaptic weights. Experiments reveal that even with 3-bit synaptic weights, the classification accuracy of the designed SNN degrades by less than 1% compared to the floating-point baseline. Further, the proposed SNN, which is trained on precise spike-timing information, outperforms an equivalent non-spiking artificial neural network (ANN) trained using backpropagation, especially at low bit precision. Thus, our study shows the potential for realizing efficient neuromorphic systems that use spike-based information encoding and learning for real-world applications.
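The reduced-precision storage experiment can be sketched with a uniform symmetric quantizer; 3-bit signed weights give a grid of 8 representable levels. A minimal illustration, where the scale choice is an assumption and not necessarily the paper's quantization scheme:

```python
import numpy as np

def quantize(w, bits=3):
    """Uniform symmetric quantization of weights to 2**bits signed levels."""
    n = 2 ** (bits - 1)               # 4 integer steps per side for 3 bits
    scale = np.abs(w).max() / n       # map the largest weight onto the top code
    return np.clip(np.round(w / scale), -n, n - 1) * scale

rng = np.random.default_rng(1)
w = rng.normal(scale=0.5, size=1000)  # stand-in for trained synaptic weights
wq = quantize(w, bits=3)

# At most 8 distinct values survive; quantization error is bounded by one step.
print(len(np.unique(wq)) <= 8)  # → True
```

The paper's observation is that the downstream classification accuracy is largely insensitive to this bounded per-weight error, which is what makes low-precision synaptic storage viable in hardware.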
Affiliation(s)
- Shruti R Kulkarni
- Department of Electrical and Computer Engineering, New Jersey Institute of Technology, NJ, 07102, USA
- Bipin Rajendran
- Department of Electrical and Computer Engineering, New Jersey Institute of Technology, NJ, 07102, USA.
28
Ielmini D, Milo V. Physics-based modeling approaches of resistive switching devices for memory and in-memory computing applications. J Comput Electron 2017; 16:1121-1143. [PMID: 31997981 PMCID: PMC6956947 DOI: 10.1007/s10825-017-1101-9] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
The semiconductor industry is currently challenged by the emergence of the Internet of Things, Big Data, and deep-learning techniques enabling object recognition and inference in portable computers. These revolutions demand new technologies for memory and computation going beyond the standard CMOS-based platform. In this scenario, resistive switching memory (RRAM) is extremely promising in the context of storage technology, memory devices, and in-memory computing circuits, such as memristive logic or neuromorphic machines. To serve as an enabling technology for these new fields, however, there is still a lack of industrial tools to predict device behavior under given operation schemes and to allow optimization of device properties through materials and stack engineering. This work provides an overview of modeling approaches for RRAM simulation, at the level of technology computer-aided design and of high-level compact models for circuit simulation. Finite element method modeling, kinetic Monte Carlo models, and physics-based analytical models are reviewed. The adaptation of modeling schemes to various RRAM concepts, such as filamentary switching and interface switching, is discussed. Finally, application cases of compact modeling to simulate simple RRAM circuits for computing are shown.
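The flavor of a high-level behavioral compact model can be conveyed by a toy RRAM state equation integrated with forward Euler: a voltage-driven internal variable g moves the device between a high-resistance (g = 0) and a low-resistance (g = 1) state. All constants below are illustrative, not fitted to any real device:

```python
import math

def simulate_rram(voltages, dt=1e-6, k=1e4, v0=0.25, g=0.0):
    """Toy behavioral compact model: dg/dt = k*sinh(V/v0), g clamped to [0, 1];
    conductance interpolates between HRS and LRS as g grows."""
    G_HRS, G_LRS = 1e-6, 1e-3  # illustrative conductances (siemens)
    currents = []
    for v in voltages:
        g = min(1.0, max(0.0, g + dt * k * math.sinh(v / v0)))  # Euler step
        currents.append((G_HRS + g * (G_LRS - G_HRS)) * v)      # I = G(g) * V
    return g, currents

g_set, _ = simulate_rram([0.8] * 200)                # positive pulses: SET toward LRS
g_reset, _ = simulate_rram([-0.8] * 200, g=g_set)    # negative pulses: RESET toward HRS
print(g_set > 0.9, g_reset < 0.1)  # → True True
```

The sinh dependence mimics the strongly nonlinear, field-accelerated switching kinetics that the physics-based models reviewed here capture in much greater detail.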
Affiliation(s)
- D. Ielmini
- Dipartimento di Elettronica, Informazione e Bioingegneria and IU.NET, Politecnico di Milano, Piazza L. da Vinci 32, 20133 Milan, Italy
- V. Milo
- Dipartimento di Elettronica, Informazione e Bioingegneria and IU.NET, Politecnico di Milano, Piazza L. da Vinci 32, 20133 Milan, Italy
29
Abstract
Frequency and phase of neural activity play important roles in the behaving brain. The emerging understanding of these roles has been informed by the design of analog devices that have been important to neuroscience, among them the neuroanalog computer developed by O. Schmitt and A. Hodgkin in the 1930s. Later, J. von Neumann, in a search for high-performance computing using microwaves, invented a logic machine based on crystal diodes that can perform logic functions, including binary arithmetic. Described here is an embodiment of his machine using nano-magnetics. Electrical currents through point contacts on a ferromagnetic thin film can create oscillations in the magnetization of the film. Under natural conditions these properties of a ferromagnetic thin film may be described by a nonlinear Schrödinger equation for the film's magnetization. Radiating solutions of this system are referred to as spin waves, and communication within the film may be by spin waves or by directed graphs of electrical connections. It is shown here how to formulate a spin-torque oscillator (STO) logic machine, and, by computer simulation, that this machine can perform several computations simultaneously using multiplexing of inputs, that it can evaluate iterated logic functions, and that spin waves may communicate frequency, phase, and binary information. Neural tissue and the Schmitt-Hodgkin, von Neumann, and STO devices share a common bifurcation structure, although these systems operate on vastly different space and time scales; namely, all may exhibit Andronov-Hopf bifurcations. This suggests that neural circuits may be capable of the computational functionality described by von Neumann.
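The shared Andronov-Hopf structure is captured by the radial part of the Hopf normal form, dr/dt = r(μ − r²): below the bifurcation (μ < 0) oscillations die out, while above it (μ > 0) a limit cycle of amplitude √μ appears. A minimal numerical sketch (the integration parameters are illustrative):

```python
def hopf_amplitude(mu, r=0.1, dt=0.01, steps=20000):
    """Forward-Euler integration of the radial Hopf normal form dr/dt = r*(mu - r**2);
    returns the long-time oscillation amplitude."""
    for _ in range(steps):
        r += dt * r * (mu - r ** 2)
    return r

# mu < 0: amplitude decays to zero; mu > 0: it settles at sqrt(mu).
print(round(hopf_amplitude(-0.2), 3), round(hopf_amplitude(0.25), 3))  # → 0.0 0.5
```

This single normal form is what lets systems as different as neurons, diode circuits, and spin-torque oscillators be compared on equal footing near the onset of oscillation.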
Affiliation(s)
- Frank Hoppensteadt
- Courant Institute of Mathematical Sciences, New York University, United States; Neurocirc LLC, United States.