1. Mehta SK, Mondal I, Yadav B, Kulkarni GU. Energy-efficient resistive switching synaptic devices based on patterned Ag nanotriangles with tunable gaps fabricated using plasma-assisted nanosphere lithography. Nanoscale 2024. [PMID: 39268707] [DOI: 10.1039/d4nr02748e]
Abstract
The development of synaptic devices featuring metallic nanostructures with brain-analog hierarchical architecture, capable of mimicking cognitive functionalities, has emerged as a focal point in neuromorphic computing. However, existing challenges, such as inconsistent and unpredictable switching, high voltage requirements, unguided filament formation, and intricate fabrication processes, have impeded technological progress in the domain. The present study addresses some of these challenges by leveraging periodic Ag nanostructures fabricated via plasma-assisted nanosphere lithography (NSL). The triangular nanostructures with a preferred orientation offer enhanced localized electric fields, facilitating low-voltage electromigration at the sharp edges to guide predictable filament formation. A thorough investigation into gap control between the nanostructures through oxygen plasma treatment enables an optimized low switching voltage of 0.86 V and retention at an ultra-low current compliance of 100 nA. The optimized device consumes low power, typically in the fJ range, akin to biological neurons. Furthermore, the device showcases intriguing synaptic characteristics, including a controlled transition from short- to long-term potentiation and associative learning, projecting its potential in perceptive learning, memory formation, and brain-inspired computing. COMSOL Multiphysics simulation, supported by ex situ electron microscopic imaging, confirms the controlled and predictable filament formation facilitated by electric field enhancement across the strategically patterned nanostructures. Thus, the work highlights the potential of NSL-based cost-effective fabrication techniques for realizing efficient and biomimetic synaptic devices for neuromorphic computing applications.
Affiliation(s)
- Shubham K Mehta
- Chemistry & Physics of Materials Unit, Jawaharlal Nehru Centre for Advanced Scientific Research, Jakkur P. O., Bangalore-560064, India.
- Indrajit Mondal
- Chemistry & Physics of Materials Unit, Jawaharlal Nehru Centre for Advanced Scientific Research, Jakkur P. O., Bangalore-560064, India.
- Bhupesh Yadav
- Chemistry & Physics of Materials Unit, Jawaharlal Nehru Centre for Advanced Scientific Research, Jakkur P. O., Bangalore-560064, India.
- Giridhar U Kulkarni
- Chemistry & Physics of Materials Unit, Jawaharlal Nehru Centre for Advanced Scientific Research, Jakkur P. O., Bangalore-560064, India.
2. Shu F, Chen W, Chen Y, Liu G. 2D Atomic-Molecular Heterojunctions toward Brainoid Applications. Macromol Rapid Commun 2024:e2400529. [PMID: 39101667] [DOI: 10.1002/marc.202400529]
Abstract
Brainoid computing using 2D atomic crystals and their heterostructures, by emulating the human brain's remarkable efficiency and minimal energy consumption in information processing, offers a promising solution to the energy-efficiency and processing-speed constraints inherent in the von Neumann architecture. However, conventional 2D-material-based heterostructures employed in brainoid devices are beset with limitations, including poor performance uniformity, fabrication intricacies, and weak interfacial adhesion, which restrain their broader application. The introduction of novel 2D atomic-molecular heterojunctions (2DAMH), achieved through covalent functionalization of 2D materials with functional molecules, ushers in a new era for brain-like devices by providing both stability and tunability of functionalities. This review chiefly delves into the electronic attributes of 2DAMH derived from the synergy of polymer materials with 2D materials, emphasizing the most recent advancements in their utilization within memristive devices, particularly their potential in replicating the functionality of biological synapses. Despite ongoing challenges pertaining to precision in modification, scalability in production, and the refinement of underlying theories, innovative research is actively pursuing solutions. These endeavors illuminate the vast potential for incorporating 2DAMH within brain-inspired intelligent systems, highlighting the prospect of a more efficient and energy-conserving computing paradigm.
Affiliation(s)
- Fan Shu
- Department of Micro/Nano Electronics, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Weilin Chen
- Department of Micro/Nano Electronics, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Yu Chen
- School of Chemistry and Molecular Engineering, East China University of Science and Technology, Shanghai, 200237, China
- Gang Liu
- Department of Micro/Nano Electronics, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
3. Luo X, Qu H, Wang Y, Yi Z, Zhang J, Zhang M. Supervised Learning in Multilayer Spiking Neural Networks With Spike Temporal Error Backpropagation. IEEE Trans Neural Netw Learn Syst 2023; 34:10141-10153. [PMID: 35436200] [DOI: 10.1109/tnnls.2022.3164930]
Abstract
Brain-inspired spiking neural networks (SNNs) hold the advantages of lower power consumption and powerful computing capability. However, the lack of effective learning algorithms has obstructed the theoretical advance and applications of SNNs. The majority of existing learning algorithms for SNNs are based on synaptic weight adjustment. However, neuroscience findings confirm that synaptic delays can also be modulated to play an important role in the learning process. Here, we propose a gradient descent-based learning algorithm for synaptic delays to enhance the sequential learning performance of a single spiking neuron. Moreover, we extend the proposed method to multilayer SNNs with spike temporal-based error backpropagation. In the proposed multilayer learning algorithm, information is encoded in the relative timing of individual neuronal spikes, and learning is performed based on the exact derivatives of the postsynaptic spike times with respect to presynaptic spike times. Experimental results on both synthetic and realistic datasets show significant improvements in learning efficiency and accuracy over existing spike temporal-based learning algorithms. We also evaluate the proposed learning method in an SNN-based multimodal computational model for audiovisual pattern recognition, where it achieves better performance than its counterparts.
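The core idea of delay learning can be illustrated with a toy sketch (not the paper's algorithm, which backpropagates exact derivatives of postsynaptic spike times): each synaptic delay is nudged so that the delayed spike arrival moves toward a target time, synchronizing inputs at the neuron.

```python
import numpy as np

def delay_update(spike_times, delays, t_target, lr=0.5):
    """One gradient-style step on synaptic delays: each delay is nudged so
    the delayed arrival time (spike + delay) moves toward t_target."""
    arrivals = spike_times + delays
    return delays - lr * (arrivals - t_target)

spikes = np.array([1.0, 3.0, 5.0])   # presynaptic spike times
delays = np.array([4.0, 2.0, 0.5])   # initial synaptic delays
for _ in range(50):
    delays = delay_update(spikes, delays, t_target=6.0)

arrivals = spikes + delays   # all inputs now arrive together at t = 6
```

Synchronizing arrivals in this way is what lets a spiking neuron respond selectively to a temporal input pattern.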
4. Grimaldi A, Perrinet LU. Learning heterogeneous delays in a layer of spiking neurons for fast motion detection. Biol Cybern 2023; 117:373-387. [PMID: 37695359] [DOI: 10.1007/s00422-023-00975-8]
Abstract
The precise timing of spikes emitted by neurons plays a crucial role in shaping the response of efferent biological neurons. This temporal dimension of neural activity holds significant importance in understanding information processing in neurobiology, especially for the performance of neuromorphic hardware, such as event-based cameras. Nonetheless, many artificial neural models disregard this critical temporal dimension of neural activity. In this study, we present a model designed to efficiently detect temporal spiking motifs using a layer of spiking neurons equipped with heterogeneous synaptic delays. Our model capitalizes on the diverse synaptic delays present on the dendritic tree, enabling specific arrangements of temporally precise synaptic inputs to synchronize upon reaching the basal dendritic tree. We formalize this process as a time-invariant logistic regression, which can be trained using labeled data. To demonstrate its practical efficacy, we apply the model to naturalistic videos transformed into event streams, simulating the output of the biological retina or event-based cameras. To evaluate the robustness of the model in detecting visual motion, we conduct experiments by selectively pruning weights and demonstrate that the model remains efficient even under significantly reduced workloads. In conclusion, by providing a comprehensive, event-driven computational building block, the incorporation of heterogeneous delays has the potential to greatly improve the performance of future spiking neural network algorithms, particularly in the context of neuromorphic chips.
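As a minimal illustration of the principle (with hand-set delays and a simple coincidence readout rather than the paper's trained logistic regression), delay lines matched to a temporal motif turn it into simultaneous arrivals that a downstream unit can detect:

```python
import numpy as np

def delayed_features(events, delays):
    """Stack time-shifted copies of a binary event train: feature j at time t
    sees events[t - delays[j]] (zero-padded), mimicking synaptic delay lines."""
    T = len(events)
    feats = np.zeros((T, len(delays)))
    for j, d in enumerate(delays):
        feats[d:, j] = events[: T - d]
    return feats

# A motif: events at offsets 0, 2, 5 relative to each motif onset.
T, motif, onsets = 200, [0, 2, 5], [20, 80, 140]
events = np.zeros(T)
for t0 in onsets:
    for off in motif:
        events[t0 + off] = 1.0

# Delays matched to the motif align all three events at t0 + 5.
X = delayed_features(events, delays=[5, 3, 0])
score = X.sum(axis=1)                      # perfect alignment scores 3
detected = np.flatnonzero(score == 3) - 5  # recovered motif onsets
```

A learned linear readout over such delayed features is exactly a (time-invariant) logistic regression over delay-weight pairs.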
Affiliation(s)
- Antoine Grimaldi
- Institut de Neurosciences de la Timone, Aix Marseille Univ, CNRS, 27 boulevard Jean Moulin, 13005, Marseille, France
- Laurent U Perrinet
- Institut de Neurosciences de la Timone, Aix Marseille Univ, CNRS, 27 boulevard Jean Moulin, 13005, Marseille, France.
5. Shen J, Zhao Y, Liu JK, Wang Y. HybridSNN: Combining Bio-Machine Strengths by Boosting Adaptive Spiking Neural Networks. IEEE Trans Neural Netw Learn Syst 2023; 34:5841-5855. [PMID: 34890341] [DOI: 10.1109/tnnls.2021.3131356]
Abstract
Spiking neural networks (SNNs), inspired by the neuronal networks in the brain, provide biologically relevant and low-power models for information processing. Existing studies either mimic the learning mechanisms of brain neural networks as closely as possible, for example, the temporally local learning rule of spike-timing-dependent plasticity (STDP), or apply the gradient descent rule to optimize a multilayer SNN with a fixed structure. The former relies on local learning rules, and it remains unclear how the real brain might perform global-scale credit assignment; as a result, such shallow SNNs are robust, but deep SNNs are difficult to train globally and perform poorly. For the latter, the non-differentiability of discrete spike trains leads to inaccurate gradient computation and difficulties in training effective deep SNNs. Hence, a hybrid solution is attractive: combining shallow SNNs with an appropriate machine learning (ML) technique that does not require gradient computation can provide both energy-saving and high-performance advantages. In this article, we propose HybridSNN, a deep and strong SNN composed of multiple simple SNNs, in which data-driven greedy optimization is used to build powerful classifiers, avoiding the derivative problem of gradient descent. During training, the output features (spikes) of selected weak classifiers are fed back to the pool for subsequent weak SNN training and selection. This guarantees that HybridSNN not only represents a linear combination of simple SNNs, as the regular AdaBoost algorithm generates, but also contains neuron connection information, thus closely resembling the neural networks of the brain. HybridSNN combines the low power consumption of weak units with overall data-driven optimizing strength. The network structure in HybridSNN is learned from training samples, which is more flexible and effective than existing fixed multilayer SNNs. Moreover, the topological tree of HybridSNN resembles the neural system in the brain, where pyramidal neurons receive thousands of synaptic input signals through their dendrites. Experimental results show that the proposed HybridSNN is highly competitive among state-of-the-art SNNs.
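The gradient-free boosting idea can be sketched with simple threshold units standing in for pre-trained weak SNNs (a hedged illustration of greedy AdaBoost-style selection, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

def weak_pool(X):
    """Threshold units standing in for weak pre-trained SNNs: each unit
    predicts +1/-1 by thresholding a single feature, in either polarity."""
    pool = []
    for f in range(X.shape[1]):
        for t in np.percentile(X[:, f], [10, 30, 50, 70, 90]):
            for s in (-1.0, 1.0):
                pool.append(lambda Z, f=f, t=t, s=s:
                            s * np.where(Z[:, f] > t, 1.0, -1.0))
    return pool

def boost(X, y, pool, rounds=10):
    """Greedy AdaBoost-style selection over the pool (no gradients needed):
    each round picks the weak unit with the lowest weighted error."""
    w = np.ones(len(y)) / len(y)
    ensemble = []
    for _ in range(rounds):
        errs = [np.sum(w * (h(X) != y)) for h in pool]
        best = int(np.argmin(errs))
        err = max(errs[best], 1e-12)
        if err >= 0.5:
            break
        alpha = 0.5 * np.log((1.0 - err) / err)
        h = pool[best]
        w = w * np.exp(-alpha * y * h(X))  # re-weight hard examples
        w = w / w.sum()
        ensemble.append((alpha, h))
    return ensemble

def predict(ensemble, X):
    return np.sign(sum(a * h(X) for a, h in ensemble))

# Toy data: the label thresholds feature 0 at its median.
X = rng.normal(size=(300, 5))
y = np.where(X[:, 0] > np.percentile(X[:, 0], 50), 1.0, -1.0)
ensemble = boost(X, y, weak_pool(X))
accuracy = np.mean(predict(ensemble, X) == y)
```

HybridSNN additionally feeds the spikes of selected units back into the pool, so the ensemble carries connection structure; the sketch above shows only the gradient-free weighted combination.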
6. Yang S, Kim T, Kim S, Chung D, Kim TH, Lee JK, Kim S, Ismail M, Mahata C, Kim S, Cho S. Synaptic plasticity and non-volatile memory characteristics in TiN-nanocrystal-embedded 3D vertical memristor-based synapses for neuromorphic systems. Nanoscale 2023; 15:13239-13251. [PMID: 37525621] [DOI: 10.1039/d3nr01930f]
Abstract
Although vertical configurations for high-density storage require challenging process steps, such as high-aspect-ratio etching and atomic layer deposition (ALD), they remain relatively affordable owing to a simple lithography process and have been employed in many studies. Herein, the potential of memristors with CMOS-compatible 3D vertically stacked structures of Pt/Ti/HfOx/TiN-NCs/HfOx/TiN is examined for use in neuromorphic systems. The electrical characteristics (including I-V properties, retention, and endurance) were investigated for both planar single cells and vertical resistive random-access memory (VRRAM) cells at each layer, demonstrating outstanding non-volatile memory capabilities. In addition, various synaptic functions were investigated, including potentiation and depression under different pulse schemes, excitatory postsynaptic current (EPSC), and spike-timing-dependent plasticity (STDP). In pattern recognition simulations, an improved recognition rate was achieved through linearly changing conductance, which was enhanced by the incremental pulse scheme. These results demonstrate the feasibility of employing VRRAM with TiN nanocrystals in neuromorphic systems that resemble the human brain.
Affiliation(s)
- Seyeong Yang
- Division of Electronics and Electrical Engineering, Dongguk University, Seoul 04620, South Korea.
- Taegyun Kim
- Division of Electronics and Electrical Engineering, Dongguk University, Seoul 04620, South Korea.
- Sunghun Kim
- Division of Electronics and Electrical Engineering, Dongguk University, Seoul 04620, South Korea.
- Daewon Chung
- Division of Electronics and Electrical Engineering, Dongguk University, Seoul 04620, South Korea.
- Tae-Hyeon Kim
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, 30332, USA
- Jung Kyu Lee
- Division of Electronics and Electrical Engineering, Dongguk University, Seoul 04620, South Korea.
- Sungjoon Kim
- Department of Electrical and Computer Engineering, Seoul National University, Seoul 08826, South Korea
- Muhammad Ismail
- Division of Electronics and Electrical Engineering, Dongguk University, Seoul 04620, South Korea.
- Chandreswar Mahata
- Division of Electronics and Electrical Engineering, Dongguk University, Seoul 04620, South Korea.
- Sungjun Kim
- Division of Electronics and Electrical Engineering, Dongguk University, Seoul 04620, South Korea.
- Seongjae Cho
- Department of Electronic and Electrical Engineering, Ewha Womans University, Seoul 03760, South Korea.
7. Cimeša L, Ciric L, Ostojic S. Geometry of population activity in spiking networks with low-rank structure. PLoS Comput Biol 2023; 19:e1011315. [PMID: 37549194] [PMCID: PMC10461857] [DOI: 10.1371/journal.pcbi.1011315]
Abstract
Recurrent network models are instrumental in investigating how behaviorally relevant computations emerge from collective neural dynamics. A recently developed class of models based on low-rank connectivity provides an analytically tractable framework for understanding how connectivity structure determines the geometry of low-dimensional dynamics and the ensuing computations. Such models, however, lack some fundamental biological constraints; in particular, they represent individual neurons as abstract units that communicate through continuous firing rates rather than discrete action potentials. Here we examine how far the theoretical insights obtained from low-rank rate networks transfer to more biologically plausible networks of spiking neurons. Adding a low-rank structure on top of random excitatory-inhibitory connectivity, we systematically compare the geometry of activity in networks of integrate-and-fire neurons to rate networks with statistically equivalent low-rank connectivity. We show that the mean-field predictions of rate networks allow us to identify low-dimensional dynamics at constant population-average activity in spiking networks, as well as novel non-linear regimes of activity such as out-of-phase oscillations and slow manifolds. We finally exploit these results to directly build spiking networks that perform nonlinear computations.
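The flavor of the low-rank framework can be conveyed with a rate-network sketch (the rate counterpart the paper compares against, with assumed parameters, not the authors' spiking simulations): with rank-one connectivity J = m n^T / N, the recurrent input is always proportional to m, so activity collapses onto that single direction.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 400
m = rng.normal(size=N)
n = 2.0 * m                 # overlap n.m/N of about 2 sustains nonzero activity
J = np.outer(m, n) / N      # rank-one connectivity matrix

x = rng.normal(size=N)      # random initial condition
dt = 0.1
for _ in range(500):
    x = x + dt * (-x + J @ np.tanh(x))   # rate dynamics tau x' = -x + J phi(x)

# Steady-state activity is one-dimensional: aligned with the vector m.
overlap = abs(x @ m) / (np.linalg.norm(x) * np.linalg.norm(m))
```

In the paper this low-rank part is added on top of random excitatory-inhibitory connectivity and the units emit spikes; the sketch keeps only the mechanism that makes the dynamics low-dimensional.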
Affiliation(s)
- Ljubica Cimeša
- Laboratoire de Neurosciences Cognitives Computationnelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
- Lazar Ciric
- Laboratoire de Neurosciences Cognitives Computationnelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives Computationnelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
8. Gautam A, Kohno T. Adaptive STDP-based on-chip spike pattern detection. Front Neurosci 2023; 17:1203956. [PMID: 37521704] [PMCID: PMC10374023] [DOI: 10.3389/fnins.2023.1203956]
Abstract
A spiking neural network (SNN) is a bottom-up tool used to describe information processing in brain microcircuits, and it is becoming a crucial neuromorphic computational model. Spike-timing-dependent plasticity (STDP) is an unsupervised brain-like learning rule implemented in many SNNs and neuromorphic chips. However, a significant performance gap exists between ideal model simulation and neuromorphic implementation. The performance of STDP learning in neuromorphic chips deteriorates because the resolution of synaptic efficacy in such chips is generally restricted to 6 bits or less, whereas simulations employ the full 64-bit floating-point precision available on digital computers. Previously, we introduced a bio-inspired learning rule named adaptive STDP and demonstrated via numerical simulation that adaptive STDP (using only 4-bit fixed-point synaptic efficacy) performs similarly to STDP learning (using 64-bit floating-point precision) in a noisy spike pattern detection model. Herein, we present experimental results demonstrating the performance of adaptive STDP learning. To the best of our knowledge, this is the first study to demonstrate that unsupervised noisy spatiotemporal spike pattern detection can perform well and maintain simulation-level performance on a mixed-signal CMOS neuromorphic chip with low-resolution synaptic efficacy. The chip was designed in a Taiwan Semiconductor Manufacturing Company (TSMC) 250 nm CMOS technology node and comprises a soma circuit and 256 synapse circuits along with their learning circuitry.
9. Aceituno PV, Farinha MT, Loidl R, Grewe BF. Learning cortical hierarchies with temporal Hebbian updates. Front Comput Neurosci 2023; 17:1136010. [PMID: 37293353] [PMCID: PMC10244748] [DOI: 10.3389/fncom.2023.1136010]
Abstract
A key driver of mammalian intelligence is the ability to represent incoming sensory information across multiple abstraction levels. For example, in the visual ventral stream, incoming signals are first represented as low-level edge filters and then transformed into high-level object representations. Similar hierarchical structures routinely emerge in artificial neural networks (ANNs) trained for object recognition tasks, suggesting that similar structures may underlie biological neural networks. However, the classical ANN training algorithm, backpropagation, is considered biologically implausible, and alternative biologically plausible training methods have therefore been developed, such as Equilibrium Propagation, Deep Feedback Control, Supervised Predictive Coding, and Dendritic Error Backpropagation. Several of those models propose that local errors are calculated for each neuron by comparing apical and somatic activities. From a neuroscience perspective, however, it is not clear how a neuron could compare such compartmental signals. Here, we propose a solution to this problem: we let the apical feedback signal change the postsynaptic firing rate and combine this with a differential Hebbian update, a rate-based version of classical spike-timing-dependent plasticity (STDP). We prove that weight updates of this form minimize two alternative loss functions that are equivalent to the error-based losses used in machine learning: the inference latency and the amount of top-down feedback necessary. Moreover, we show that differential Hebbian updates work similarly well in other feedback-based deep learning frameworks such as Predictive Coding or Equilibrium Propagation. Finally, our work removes a key requirement of biologically plausible models for deep learning and proposes a learning mechanism that explains how temporal Hebbian learning rules can implement supervised hierarchical learning.
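The proposed mechanism can be caricatured in a rate-based toy model (an illustrative sketch with assumed constants, not the authors' network): apical feedback transiently nudges the postsynaptic rate, and the weight change is the presynaptic rate times that rate change, which then behaves like a delta rule.

```python
import numpy as np

# Toy linear neuron: somatic (basal) rate is a weighted sum of inputs.
pre = np.array([0.5, 0.8, 0.3, 0.9, 0.6])   # presynaptic rates
w = np.zeros(5)
target = 1.5                                 # top-down teaching rate
eta, beta = 0.1, 0.2                         # learning rate, feedback gain

for _ in range(200):
    post = w @ pre                           # somatic activity
    nudged = post + beta * (target - post)   # apical feedback nudges the rate
    # Differential Hebbian update: presynaptic rate times the
    # feedback-induced change in postsynaptic rate.
    w += eta * pre * (nudged - post)

final_rate = w @ pre   # approaches the target rate
```

Because the rate change equals beta times the error, the update is proportional to pre * error, i.e., the delta rule recovered from a purely local temporal signal.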
Affiliation(s)
- Pau Vilimelis Aceituno
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- ETH AI Center, ETH Zurich, Zurich, Switzerland
- Reinhard Loidl
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Benjamin F. Grewe
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- ETH AI Center, ETH Zurich, Zurich, Switzerland
10. Liu W, Liu X. Pre-stimulus network responses affect information coding in neural variability quenching. Neurocomputing 2023. [DOI: 10.1016/j.neucom.2023.02.003]
11. The effects of distractors on brightness perception based on a spiking network. Sci Rep 2023; 13:1517. [PMID: 36707550] [PMCID: PMC9883501] [DOI: 10.1038/s41598-023-28326-4]
Abstract
Visual perception can be modified by the surrounding context. In particular, experimental observations have demonstrated that visual perception and primary visual cortical responses can be modified by the properties of surrounding distractors. However, the underlying mechanism remains unclear. To simulate primary visual cortical activity, we design a k-winner-take-all (k-WTA) spiking network whose responses are generated through probabilistic inference. In the simulations, images with the same target and various surrounding distractors serve as stimuli. The distractors vary in several properties, including luminance, size, and distance to the target. Simulations for each varying property are performed with the other properties fixed. Each property can modify second-layer neural responses and interactions in the network. For the same target in the designed images, the modified network responses reproduce distinct brightness percepts consistent with experimental observations. Our model provides a possible explanation of how surrounding distractors modify primary visual cortical responses to induce varied brightness perception of a given target.
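The k-WTA stage itself is simple to state (a generic sketch; the paper's network adds spiking dynamics and probabilistic inference on top): only the k most active units keep their activation, and all others are silenced.

```python
import numpy as np

def k_wta(activations, k):
    """k-winner-take-all: only the k most active units keep their
    activation; all others are set to zero."""
    out = np.zeros_like(activations)
    winners = np.argsort(activations)[-k:]   # indices of the top-k units
    out[winners] = activations[winners]
    return out

a = np.array([0.1, 0.9, 0.4, 0.7, 0.2])
sparse = k_wta(a, 2)   # keeps only units 1 and 3
```

This competitive normalization is what lets strong distractor-driven units suppress weaker responses to the same target.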
12. Zhou W, Wen S, Liu Y, Liu L, Liu X, Chen L. Forgetting memristor based STDP learning circuit for neural networks. Neural Netw 2023; 158:293-304. [PMID: 36493532] [DOI: 10.1016/j.neunet.2022.11.023]
Abstract
The circuit implementation of STDP based on memristors is of great significance for neural network applications. However, pure circuit implementations combining forgetting memristors with STDP remain rare. This paper proposes a new STDP learning rule implementation circuit based on the forgetting memristor. Such forgetting memristive synapses give the neural network a time-division multiplexing capability, but the instability of short-term memory can impair the network's learning ability. This paper analyzes the influence of synapses with long- and short-term memory on the STDP learning characteristics of neural networks, laying a foundation for constructing time-division multiplexing neural networks with long- and short-term memory synapses. Through this circuit, it is found that the volatile memristor responds differently to the stimulus signal depending on its initial state, and the resulting LTP phenomenon is more consistent with the forgetting effect in biology. The circuit has multiple adjustable parameters and can fit STDP learning rules under different conditions. A neural network application demonstrates the practicality of this circuit.
Affiliation(s)
- Wenhao Zhou
- Electronic Information and Engineering, Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, 400715, China.
- Shiping Wen
- Centre for Artificial Intelligence, Faculty of Engineering and Information Technology, University of Technology Sydney, Australia.
- Yi Liu
- Electronic Information and Engineering, Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, 400715, China
- Lu Liu
- Electronic Information and Engineering, Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, 400715, China
- Xin Liu
- Computer Vision and Pattern Recognition Laboratory, School of Engineering Science, Lappeenranta-Lahti University of Technology LUT, Finland.
- Ling Chen
- Electronic Information and Engineering, Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, 400715, China; Computer Vision and Pattern Recognition Laboratory, School of Engineering Science, Lappeenranta-Lahti University of Technology LUT, Finland.
13. Gautam A, Kohno T. A Conductance-Based Silicon Synapse Circuit. Biomimetics (Basel) 2022; 7:246. [PMID: 36546946] [PMCID: PMC9775663] [DOI: 10.3390/biomimetics7040246]
Abstract
Neuron, synapse, and learning circuits inspired by the brain comprise the key components of a neuromorphic chip. In this study, we present a conductance-based analog silicon synapse circuit suitable for the implementation of reduced or multi-compartment neuron models. Compartmental models are more bio-realistic. They are implemented in neuromorphic chips aiming to mimic the electrical activities of the neuronal networks in the brain and incorporate biomimetic soma and synapse circuits. Most contemporary low-power analog synapse circuits implement bioinspired "current-based" synaptic models suited for the implementation of single-compartment point neuron models. They emulate the exponential decay profile of the synaptic current but ignore the effect of the postsynaptic membrane potential on the synaptic current. This dependence is necessary to emulate shunting inhibition, which is thought to play important roles in information processing in the brain. The proposed circuit uses an oscillator-based resistor-type element at its output stage to incorporate this effect. This circuit is used to demonstrate the shunting inhibition phenomenon. Next, to demonstrate that the oscillatory nature of the induced synaptic current has no unforeseen effects, the synapse circuit is employed in a spatiotemporal spike pattern detection task. The task employs the adaptive spike-timing-dependent plasticity (STDP) learning rule, a bio-inspired learning rule introduced in a previous study. The mixed-signal chip is designed in a Taiwan Semiconductor Manufacturing Company (TSMC) 250 nm complementary metal oxide semiconductor technology node. It comprises a biomimetic soma circuit and 256 synapse circuits, along with their learning circuitries.
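The difference from a current-based synapse can be seen in a few lines (a schematic model with assumed parameters, not the chip's circuit equations): the synaptic current scales with the driving force E_rev - V, so an inhibitory synapse with E_rev near rest injects almost no current at rest yet strongly opposes depolarization, which is the shunting effect.

```python
import numpy as np

def conductance_synapse(V_post, t, g_max=1.0, tau=5.0, E_rev=-70.0):
    """Conductance-based synaptic current: depends on the postsynaptic
    membrane potential V_post, unlike a current-based model."""
    g = g_max * np.exp(-t / tau)   # exponentially decaying conductance
    return g * (E_rev - V_post)    # driving-force term gives the shunt

# Shunting inhibition with E_rev at the resting potential (-70 mV):
i_rest = conductance_synapse(-70.0, t=0.0)   # no current at rest
i_depol = conductance_synapse(-50.0, t=0.0)  # strong opposing current
```

A current-based model would return the same g(t)-shaped current in both cases; the voltage dependence is exactly what the oscillator-based output stage of the circuit is designed to reproduce.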
Affiliation(s)
- Ashish Gautam
- Institute of Industrial Science, The University of Tokyo, Tokyo 153-8505, Japan
14. Goujon A, Mathy F, Thorpe S. The fate of visual long term memories for images across weeks in adults and children. Sci Rep 2022; 12:21763. [PMID: 36526824] [PMCID: PMC9758234] [DOI: 10.1038/s41598-022-26002-7]
Abstract
What is the content and format of visual memories in long-term memory (LTM)? Is it similar in adults and children? To address these issues, we investigated, in both adults and 9-year-old children, how visual LTM is affected over time and whether visual vs semantic features are affected differentially. In a learning phase, participants were exposed to hundreds of meaningless and meaningful images presented once or twice for either 120 ms or 1920 ms. Memory was assessed using a recognition task either immediately after learning or after a delay of three or six weeks. The results suggest that multiple and extended exposures are crucial for retaining an image for several weeks. Although a benefit was observed in the meaningful condition when memory was assessed immediately after learning, this benefit tended to disappear over the weeks, especially when the images were presented twice for 1920 ms. This pattern was observed for both adults and children. Together, the results call into question the dominant models of LTM for images: although semantic information enhances the encoding and maintenance of images in LTM when assessed immediately, it seems not to be critical for LTM over weeks.
Affiliation(s)
- Annabelle Goujon
- Laboratoire de Recherches Intégratives en Neurosciences et Psychologie Cognitive UR 481, Université de Franche-Comté, 19 rue Ambroise Paré, 25030, Besançon, Cedex, France.
- Fabien Mathy
- Laboratory BCL CNRS UMR 7320 & Université Côte d'Azur, Nice, France
- Simon Thorpe
- CerCo-CNRS & Université de Toulouse 3, Toulouse, France
15
Gansel KS. Neural synchrony in cortical networks: mechanisms and implications for neural information processing and coding. Front Integr Neurosci 2022; 16:900715. [PMID: 36262373 PMCID: PMC9574343 DOI: 10.3389/fnint.2022.900715] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2022] [Accepted: 09/13/2022] [Indexed: 11/13/2022] Open
Abstract
Synchronization of neuronal discharges on the millisecond scale has long been recognized as a prevalent and functionally important attribute of neural activity. In this article, I review classical concepts and corresponding evidence of the mechanisms that govern the synchronization of distributed discharges in cortical networks and relate those mechanisms to their possible roles in coding and cognitive functions. To accommodate the need for a selective, directed synchronization of cells, I propose that synchronous firing of distributed neurons is a natural consequence of spike-timing-dependent plasticity (STDP) that associates cells repetitively receiving temporally coherent input: the “synchrony through synaptic plasticity” hypothesis. Neurons that are excited by a repeated sequence of synaptic inputs may learn to selectively respond to the onset of this sequence through synaptic plasticity. Multiple neurons receiving coherent input could thus actively synchronize their firing by learning to selectively respond at corresponding temporal positions. The hypothesis makes several predictions: first, the position of the cells in the network, as well as the source of their input signals, would be irrelevant as long as their input signals arrive simultaneously; second, repeating discharge patterns should get compressed until all or some part of the signals are synchronized; and third, this compression should be accompanied by a sparsening of signals. In this way, selective groups of cells could emerge that would respond to some recurring event with synchronous firing. Such a learned response pattern could further be modulated by synchronous network oscillations that provide a dynamic, flexible context for the synaptic integration of distributed signals. I conclude by suggesting experimental approaches to further test this new hypothesis.
16
Mao R, Li S, Zhang Z, Xia Z, Xiao J, Zhu Z, Liu J, Shan W, Chang L, Zhou J. An Ultra-Energy-Efficient and High Accuracy ECG Classification Processor With SNN Inference Assisted by On-Chip ANN Learning. IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS 2022; 16:832-841. [PMID: 35737625 DOI: 10.1109/tbcas.2022.3185720] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
The ECG classification processor is a key component in wearable intelligent ECG monitoring devices, which monitor ECG signals in real time and detect abnormalities automatically. State-of-the-art ECG classification processors for such devices face two challenges: an ultra-low energy consumption demand and a high classification accuracy demand against patient-to-patient variability. To address these two challenges, this work proposes an ultra-energy-efficient ECG classification processor with high classification accuracy. Several design techniques are proposed, including a reconfigurable SNN/ANN inference architecture for reducing energy consumption while maintaining classification accuracy, a reconfigurable on-chip learning architecture for improving the classification accuracy against patient-to-patient variability, and a dual-purpose binary encoding scheme for ECG heartbeats to further reduce the energy consumption. Fabricated in a 28 nm CMOS technology, the proposed design consumes extremely low classification energy (0.3 μJ) while achieving high classification accuracy (97.36%) against patient-to-patient variability, outperforming several state-of-the-art designs.
17
Wang Z, Liu J, Ma Y, Chen B, Zheng N, Ren P. Perturbation of Spike Timing Benefits Neural Network Performance on Similarity Search. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2022; 33:4361-4372. [PMID: 33606643 DOI: 10.1109/tnnls.2021.3056694] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Perturbation has a positive effect, as it contributes to the stability of neural systems through adaptation and robustness. For example, deep reinforcement learning generally engages in exploratory behavior by injecting noise into the action space and network parameters. This can consistently increase the agent's exploration ability and lead to richer sets of behaviors. Evolutionary strategies also apply parameter perturbations, which makes network architectures robust and diverse. Our main concern is whether the notion of synaptic perturbation introduced in a spiking neural network (SNN) is biologically relevant, or whether novel frameworks and components are needed to account for the perturbation properties of artificial neural systems. In this work, we first review the FLY algorithm, a locality-sensitive hashing (LSH) approach to similarity search recently published in Science, and propose an improved architecture, time-shifted spiking LSH (TS-SLSH), which considers temporal perturbations of the firing moments of spike pulses. Experimental results show promising performance of the proposed method and demonstrate its generality across various spiking neuron models. We therefore expect temporal perturbation to play an active role in SNN performance.
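The core operation, perturbing firing moments in time, can be sketched generically. This is a minimal Python illustration of temporal spike-time jitter, not the paper's TS-SLSH architecture; the function name and Gaussian-noise choice are assumptions for illustration.

```python
import random

def perturb_spike_times(spike_times, sigma=0.5, seed=None):
    """Shift each firing moment (ms) by Gaussian jitter, clipping at zero so
    perturbed times remain valid. A generic sketch of temporal perturbation."""
    rng = random.Random(seed)
    return [max(0.0, t + rng.gauss(0.0, sigma)) for t in spike_times]

spikes = [1.0, 4.2, 9.7]
jittered = perturb_spike_times(spikes, sigma=0.3, seed=42)
```

Seeding the generator keeps the perturbation reproducible, which matters when comparing a perturbed network against its unperturbed baseline.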
18
Personalized Spiking Neural Network Models of Clinical and Environmental Factors to Predict Stroke. Cognit Comput 2022. [DOI: 10.1007/s12559-021-09975-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/03/2022]
Abstract
The high incidence of stroke occurrence necessitates the understanding of its causes and possible ways for early prediction and prevention. In this respect, statistical methods offer the "big picture," but they have a weak predictive ability at an individual level. This research proposes a new personalized modeling method based on computational spiking neural networks (SNN) for the identification of causal associations between clinical and environmental time-series data that can be used to predict individual stroke events. The method is tested on 804 stroke patients. Given a clinical data set of patients who experienced a stroke in the past and the corresponding environmental time-series data for a selected time window before the stroke event, the method identifies the clusters of individuals with a high risk for stroke under similar conditions. The methodology involves a pipeline of processes when creating a personalized model for an individual x: (1) selecting a group of individuals Gx with personal records similar to those of x; (2) training a personalized SNN model on several days of environmental data related to the Gx group to predict the risk of stroke for x at least one day earlier; (3) model interpretability through 3D visualization; (4) discovery of personalized predictive markers. The results are twofold: first, a new computational methodology is proposed, and second, new findings are presented. It is found that certain environmental factors, such as SO2, PM10, CO, and PM2.5, increase the risk of stroke if an individual x belongs to a certain cluster of people characterized by a combination of family history of stroke and diabetes, overweight, vascular/heart disease, age, and other factors. For the population data used, the proposed method can accurately predict an individual's risk of stroke before the day of the stroke. The paper presents a new methodology for personalized machine learning to define subgroups of the population with a high risk of stroke and to predict early the individual risk of a stroke event. This makes the proposed cognitive computation method useful for reducing morbidity and mortality in society. The method is broadly applicable for predicting individual risk of other diseases and mental health conditions.
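Step (1) of the pipeline, selecting the group Gx of similar individuals, can be sketched as a nearest-neighbour lookup. This is a minimal Python illustration under stated assumptions: the paper does not prescribe a Euclidean metric, and the toy feature vectors here are hypothetical.

```python
import math

def select_similar_group(x, population, k=5):
    """Pick the k individuals whose clinical records are closest to x.
    Euclidean distance is a simplifying assumption for illustration."""
    return sorted(population, key=lambda p: math.dist(x, p))[:k]

# Toy records: [age, family_history, diabetes] (hypothetical features).
cohort = [[60, 1, 0], [62, 1, 1], [80, 0, 1], [45, 0, 0]]
gx = select_similar_group([61, 1, 0], cohort, k=2)
print(gx)  # [[60, 1, 0], [62, 1, 1]]
```

A personalized model would then be trained only on environmental data associated with this Gx group, as in step (2).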
19
Guo W, Yantir HE, Fouda ME, Eltawil AM, Salama KN. Toward the Optimal Design and FPGA Implementation of Spiking Neural Networks. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2022; 33:3988-4002. [PMID: 33571097 DOI: 10.1109/tnnls.2021.3055421] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
The performance of a biologically plausible spiking neural network (SNN) largely depends on the model parameters and neural dynamics. This article proposes a parameter optimization scheme for improving the performance of a biologically plausible SNN, together with a parallel field-programmable gate array (FPGA) online-learning neuromorphic platform for the digital implementation, based on two numerical methods, namely the Euler and third-order Runge-Kutta (RK3) methods. The optimization scheme explores the impact of biological time constants on information transmission in the SNN and improves the convergence rate of the SNN on digit recognition with a suitable choice of the time constants. The parallel digital implementation leads to a significant speedup over software simulation on a general-purpose CPU. The parallel implementation with the Euler method enables around 180× (20×) training (inference) speedup over a PyTorch-based SNN simulation on a CPU. Moreover, compared with previous work, our parallel implementation shows more than 300× (240×) improvement in speed and 180× (250×) reduction in energy consumption for training (inference). In addition, due to its high-order accuracy, the RK3 method is demonstrated to gain a 2× training speedup over the Euler method, which makes it suitable for online training in real-time applications.
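The Euler-vs-RK3 trade-off the abstract describes is easy to see on a leaky integrator: at the same step size, RK3's higher-order accuracy gives a much smaller error, so larger (faster) steps remain usable. A minimal Python sketch with illustrative parameters, not the paper's FPGA implementation:

```python
import math

def dv(v, i_in=1.5, tau=10.0):
    """Leaky-integrator membrane derivative dv/dt = (-v + I) / tau (arbitrary units)."""
    return (-v + i_in) / tau

def euler_step(v, dt):
    """Forward Euler: first-order accurate."""
    return v + dt * dv(v)

def rk3_step(v, dt):
    """Kutta's third-order Runge-Kutta method."""
    k1 = dv(v)
    k2 = dv(v + 0.5 * dt * k1)
    k3 = dv(v - dt * k1 + 2.0 * dt * k2)
    return v + dt * (k1 + 4.0 * k2 + k3) / 6.0

# One large step from v(0) = 0; the exact solution is v(t) = 1.5 * (1 - exp(-t / 10)).
exact = 1.5 * (1.0 - math.exp(-2.0 / 10.0))
err_euler = abs(euler_step(0.0, 2.0) - exact)   # ~2.8e-2
err_rk3 = abs(rk3_step(0.0, 2.0) - exact)       # ~1e-4
```

The roughly two-orders-of-magnitude error gap at dt = 2 is what lets an RK3 pipeline trade accuracy headroom for speed, consistent with the reported 2× training speedup.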
20
Li AA, Wang F, Wu S, Zhang X. Emergence of probabilistic representation in the neural network of primary visual cortex. iScience 2022; 25:103975. [PMID: 35310336 PMCID: PMC8924637 DOI: 10.1016/j.isci.2022.103975] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2021] [Revised: 10/27/2021] [Accepted: 02/21/2022] [Indexed: 11/12/2022] Open
Abstract
During the early development of the mammalian visual system, the distribution of neuronal preferred orientations in the primary visual cortex (V1) gradually shifts to match major orientation features of the environment, achieving its optimal representation. By combining computational modeling and electrophysiological recording, we provide a circuit plasticity mechanism that underlies the developmental emergence of such matched representation in the visual cortical network. Specifically, in a canonical circuit of densely interconnected pyramidal cells and inhibitory parvalbumin-expressing (PV+) fast-spiking interneurons in V1 layer 2/3, our model successfully simulates the experimental observations and further reveals that nonuniform inhibition plays a key role in shaping the network representation through spike-timing-dependent plasticity. The experimental results suggest that PV+ interneurons in V1 are capable of providing nonuniform inhibition shortly after vision onset. Our study elucidates a circuit mechanism for the acquisition of prior knowledge of the environment for optimal inference in sensory neural systems. Highlights: computational and experimental methods are combined to study representation in mouse V1; nonuniform inhibition plays a key role in shaping the network representation; PV+ interneurons provide nonuniform inhibition shortly after vision onset.
Affiliation(s)
- Ang A Li
- Academy for Advanced Interdisciplinary Studies, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Beijing, China
- Fengchao Wang
- Academy for Advanced Interdisciplinary Studies, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Beijing, China; State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Si Wu
- Academy for Advanced Interdisciplinary Studies, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Beijing, China; School of Psychology and Cognitive Sciences, Peking University, Beijing, China
- Xiaohui Zhang
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
21
Yu Q, Li S, Tang H, Wang L, Dang J, Tan KC. Toward Efficient Processing and Learning With Spikes: New Approaches for Multispike Learning. IEEE TRANSACTIONS ON CYBERNETICS 2022; 52:1364-1376. [PMID: 32356771 DOI: 10.1109/tcyb.2020.2984888] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Spikes are the currency of central nervous systems for information transmission and processing. They are also believed to play an essential role in the low power consumption of biological systems, whose efficiency attracts increasing attention in the field of neuromorphic computing. However, efficient processing and learning of discrete spikes remain a challenging problem. In this article, we make our contributions toward this direction. A simplified spiking neuron model is first introduced, with the effects of both synaptic input and firing output on the membrane potential modeled with an impulse function. An event-driven scheme is then presented to further improve the processing efficiency. Based on the neuron model, we propose two new multispike learning rules which demonstrate better performance than other baselines on various tasks, including association, classification, and feature detection. In addition to efficiency, our learning rules demonstrate high robustness against strong noise of different types. They can also be generalized to different spike coding schemes for the classification task, and notably, a single neuron is capable of solving multicategory classification with our learning rules. In the feature detection task, we re-examine the ability of unsupervised spike-timing-dependent plasticity, present its limitations, and find a new phenomenon of losing selectivity. In contrast, our proposed learning rules can reliably solve the task over a wide range of conditions without specific constraints being applied. Moreover, our rules can not only detect features but also discriminate them. The improved performance of our methods would make them a preferable choice for neuromorphic computing.
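The impulse-function idea the abstract describes makes the membrane potential a weighted sum of exponentially decaying kernels, which can be evaluated only when needed. The following is a simplified Python sketch of that event-driven evaluation; it is an assumption-laden illustration (the paper's model also subtracts impulses at the neuron's own firing times, which is omitted here).

```python
import math

def membrane_at(t, input_spikes, weights, tau=10.0):
    """Event-driven evaluation: the membrane potential is a weighted sum of
    exponentially decaying impulse responses from past input spikes, computed
    in closed form at the query time rather than by stepping a clock."""
    return sum(w * math.exp(-(t - ti) / tau)
               for ti, w in zip(input_spikes, weights) if ti <= t)

v = membrane_at(5.0, input_spikes=[0.0, 3.0], weights=[0.6, 0.8])
```

Because no state is stepped between events, the cost scales with the number of spikes rather than the simulated duration, which is the efficiency argument the article makes.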
22
Anisimova M, van Bommel B, Wang R, Mikhaylova M, Wiegert JS, Oertner TG, Gee CE. Spike-timing-dependent plasticity rewards synchrony rather than causality. Cereb Cortex 2022; 33:23-34. [PMID: 35203089 PMCID: PMC9758582 DOI: 10.1093/cercor/bhac050] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2021] [Revised: 12/22/2021] [Accepted: 01/24/2022] [Indexed: 11/14/2022] Open
Abstract
Spike-timing-dependent plasticity (STDP) is a candidate mechanism for information storage in the brain, but the whole-cell recordings required for the experimental induction of STDP are typically limited to 1 h. This mismatch of time scales is a long-standing weakness in synaptic theories of memory. Here we use spectrally separated optogenetic stimulation to fire precisely timed action potentials (spikes) in CA3 and CA1 pyramidal cells. Twenty minutes after optogenetic induction of STDP (oSTDP), we observed timing-dependent depression (tLTD) and timing-dependent potentiation (tLTP), depending on the sequence of spiking. As oSTDP does not require electrodes, we could also assess the strength of these paired connections three days later. At this late time point, late tLTP was observed for both causal (CA3 before CA1) and anticausal (CA1 before CA3) timing, but not for asynchronous activity patterns (Δt = 50 ms). Blocking activity after induction of oSTDP prevented stable potentiation. Our results confirm that neurons wire together if they fire together, but suggest that synaptic depression after anticausal activation (tLTD) is a transient phenomenon.
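The causal/anticausal asymmetry being tested here is usually formalized as the pair-based STDP window. A minimal Python sketch of that canonical window (textbook parameter values, not the optogenetic protocol itself) makes the terms concrete; the study's finding is precisely that the anticausal depression branch is transient over days.

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Canonical pair-based STDP window. dt = t_post - t_pre:
    causal pairings (dt > 0) potentiate, anticausal pairings (dt < 0) depress,
    and the effect decays exponentially with the timing interval."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau)
    return -a_minus * math.exp(dt_ms / tau)

print(stdp_dw(10.0) > 0)    # causal pre-before-post -> potentiation
print(stdp_dw(-10.0) < 0)   # anticausal -> early depression (transient per the study)
```

At |dt| far beyond tau the window vanishes, matching the asynchronous (Δt = 50 ms) condition producing no late potentiation.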
Affiliation(s)
- Margarita Anisimova
- Center for Molecular Neurobiology Hamburg, University Medical Center Hamburg-Eppendorf, Falkenried 94, D-20251 Hamburg, Germany
- Bas van Bommel
- Center for Molecular Neurobiology Hamburg, University Medical Center Hamburg-Eppendorf, Falkenried 94, D-20251 Hamburg, Germany; Institute for Chemistry and Biochemistry, Freie Universität Berlin, Berlin, Germany
- Rui Wang
- Center for Molecular Neurobiology Hamburg, University Medical Center Hamburg-Eppendorf, Falkenried 94, D-20251 Hamburg, Germany
- Marina Mikhaylova
- Center for Molecular Neurobiology Hamburg, University Medical Center Hamburg-Eppendorf, Falkenried 94, D-20251 Hamburg, Germany; Institute of Biology, Humboldt-Universität zu Berlin, Berlin, Germany
- Jörn Simon Wiegert
- Center for Molecular Neurobiology Hamburg, University Medical Center Hamburg-Eppendorf, Falkenried 94, D-20251 Hamburg, Germany
- Christine E Gee
- Corresponding author: Institute for Synaptic Physiology, Center for Molecular Neurobiology Hamburg, Falkenried 94, 20251 Hamburg, Germany.
23
Chakraborty B, Mukhopadhyay S. Characterization of Generalizability of Spike Timing Dependent Plasticity Trained Spiking Neural Networks. Front Neurosci 2021; 15:695357. [PMID: 34776837 PMCID: PMC8589121 DOI: 10.3389/fnins.2021.695357] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2021] [Accepted: 09/29/2021] [Indexed: 11/30/2022] Open
Abstract
A Spiking Neural Network (SNN) is trained with Spike Timing Dependent Plasticity (STDP), which is a neuro-inspired unsupervised learning method for various machine learning applications. This paper studies the generalizability properties of the STDP learning processes using the Hausdorff dimension of the trajectories of the learning algorithm. The paper analyzes the effects of STDP learning models and associated hyper-parameters on the generalizability properties of an SNN. The analysis is used to develop a Bayesian optimization approach to optimize the hyper-parameters for an STDP model for improving the generalizability properties of an SNN.
Affiliation(s)
- Biswadeep Chakraborty
- Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, United States
24
Gautam A, Kohno T. An Adaptive STDP Learning Rule for Neuromorphic Systems. Front Neurosci 2021; 15:741116. [PMID: 34630026 PMCID: PMC8498208 DOI: 10.3389/fnins.2021.741116] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2021] [Accepted: 08/13/2021] [Indexed: 11/18/2022] Open
Abstract
The promise of neuromorphic computing to develop ultra-low-power intelligent devices lies in its ability to localize information processing and memory storage in synaptic circuits, much like the synapses in the brain. Spiking neural networks modeled using high-resolution synapses and armed with local unsupervised learning rules like spike-timing-dependent plasticity (STDP) have shown promising results in tasks such as pattern detection and image classification. However, designing and implementing a conventional, multibit STDP circuit becomes complex both in terms of the circuitry and the required silicon area. In this work, we introduce a modified and hardware-friendly STDP learning rule (named adaptive STDP) implemented using just 4-bit synapses. We demonstrate the capability of this learning rule in a pattern recognition task, in which a neuron learns to recognize a specific spike pattern embedded within noisy inhomogeneous Poisson spikes. Our results demonstrate that the performance of the proposed learning rule (94% using just 4-bit synapses) is similar to that of conventional STDP learning (96% using 64-bit floating-point precision). The models used in this study are ideal for a CMOS neuromorphic circuit with analog soma and synapse circuits and mixed-signal learning circuits. The learning circuit stores the synaptic weight in a 4-bit digital memory that is updated asynchronously. In circuit simulation with a Taiwan Semiconductor Manufacturing Company (TSMC) 250 nm CMOS process design kit (PDK), the static power consumption of a single synapse and the energy per spike (to generate a synaptic current of amplitude 15 pA and time constant 3 ms) are less than 2 pW and 200 fJ, respectively. The static power consumption of the learning circuit is less than 135 pW, and the energy to process a pair of pre- and postsynaptic spikes corresponding to a single learning step is less than 235 pJ. A single 4-bit synapse (capable of being configured as excitatory, inhibitory, or shunting inhibitory), along with its learning circuitry and digital memory, occupies around 17,250 μm2 of silicon area.
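The 4-bit constraint means each synapse holds one of only 16 weight levels and is nudged by quantized steps. A minimal Python sketch of that quantized-update idea (a software analogue for illustration, not the mixed-signal circuit or the paper's exact update policy):

```python
def update_4bit_weight(w, potentiate):
    """One adaptive-STDP-style learning step on a 4-bit synapse: the weight is
    an integer in [0, 15], nudged by one level per qualifying spike pair and
    saturating at the 4-bit bounds."""
    step = 1 if potentiate else -1
    return max(0, min(15, w + step))

w = 7
w = update_4bit_weight(w, potentiate=True)   # causal pair -> 8
w = update_4bit_weight(w, potentiate=False)  # anticausal pair -> back to 7
```

Saturation at the bounds is what a 4-bit digital memory enforces for free, and the paper's result is that this coarse resolution costs only about two percentage points of accuracy versus 64-bit weights.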
Affiliation(s)
- Ashish Gautam
- Department of Electrical Engineering and Information Systems, Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Takashi Kohno
- Institute of Industrial Science, The University of Tokyo, Tokyo, Japan
25
Doborjeh M, Doborjeh Z, Merkin A, Bahrami H, Sumich A, Krishnamurthi R, Medvedev ON, Crook-Rumsey M, Morgan C, Kirk I, Sachdev PS, Brodaty H, Kang K, Wen W, Feigin V, Kasabov N. Personalised predictive modelling with brain-inspired spiking neural networks of longitudinal MRI neuroimaging data and the case study of dementia. Neural Netw 2021; 144:522-539. [PMID: 34619582 DOI: 10.1016/j.neunet.2021.09.013] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2020] [Revised: 08/11/2021] [Accepted: 09/12/2021] [Indexed: 11/27/2022]
Abstract
BACKGROUND: Longitudinal neuroimaging provides spatiotemporal brain data (STBD) measurements that can be utilised to understand dynamic changes in brain structure and/or function underpinning cognitive activities. Making sense of such highly interactive information is challenging, given that the features manifest intricate temporal, causal relations between the spatially distributed neural sources in the brain. METHODS: The current paper argues for the advancement of deep learning algorithms in brain-inspired spiking neural networks (SNN), capable of modelling structural data across time (longitudinal measurement) and space (anatomical components). The paper proposes a methodology and a computational architecture based on SNN for building personalised predictive models from longitudinal brain data to accurately detect, understand, and predict the dynamics of an individual's functional brain state. The methodology includes finding clusters of data similar to each individual, data interpolation, deep learning in a 3-dimensional brain-template-structured SNN model, classification and prediction of individual outcomes, visualisation of structural brain changes related to the predicted outcomes, interpretation of results, and individual and group predictive marker discovery. RESULTS: To demonstrate the functionality of the proposed methodology, the paper presents experimental results on a longitudinal magnetic resonance imaging (MRI) dataset derived from 175 older adults of the internationally recognised community-based cohort Sydney Memory and Ageing Study (MAS) spanning 6 years of follow-up. SIGNIFICANCE: The models were able to accurately classify and predict, 2 years ahead, cognitive decline such as mild cognitive impairment (MCI) and dementia, with 95% and 91% accuracy, respectively. The proposed methodology also offers a 3-dimensional visualisation of the MRI models reflecting the dynamic patterns of regional changes in white matter hyperintensity (WMH) and brain volume over 6 years. CONCLUSION: The method is efficient for personalised predictive modelling on a wide range of longitudinal neuroimaging data, including also demographic, genetic, and clinical data. As a case study, it resulted in finding predictive markers for MCI and dementia as dynamic brain patterns using MRI data.
Affiliation(s)
- Maryam Doborjeh
- Computer Science and Software Engineering Department, School of Engineering, Computer and Mathematical Sciences, Auckland University of Technology, New Zealand.
- Zohreh Doborjeh
- Department of Audiology, School of Population Health, Faculty of Medical and Health Sciences, The University of Auckland, New Zealand
- Alexander Merkin
- The National Institute for Stroke and Applied Neurosciences, School of Clinical Sciences, Auckland University of Technology, New Zealand
- Helena Bahrami
- School of Engineering, Computer and Mathematical Sciences, Auckland University of Technology, New Zealand
- Alexander Sumich
- NTU Psychology, Nottingham Trent University, Nottingham, United Kingdom
- Rita Krishnamurthi
- The National Institute for Stroke and Applied Neurosciences, School of Clinical Sciences, Auckland University of Technology, New Zealand
- Oleg N Medvedev
- University of Waikato, School of Psychology, Hamilton, New Zealand
- Mark Crook-Rumsey
- NTU Psychology, Nottingham Trent University, Nottingham, United Kingdom; School of Engineering, Computer and Mathematical Sciences, Auckland University of Technology, New Zealand
- Catherine Morgan
- School of Psychology and Centre for Brain Research, University of Auckland, New Zealand; Brain Research New Zealand - Rangahau Roro Aotearoa, Centre of Research Excellence, New Zealand
- Ian Kirk
- School of Psychology and Centre for Brain Research, University of Auckland, New Zealand; Brain Research New Zealand - Rangahau Roro Aotearoa, Centre of Research Excellence, New Zealand
- Perminder S Sachdev
- Centre for Healthy Brain Ageing (CHeBA), School of Psychiatry, University of New South Wales, Sydney, Australia; Neuropsychiatric Institute, the Prince of Wales Hospital, Sydney, Australia
- Henry Brodaty
- Centre for Healthy Brain Ageing (CHeBA), School of Psychiatry, University of New South Wales, Sydney, Australia
- Kristan Kang
- Centre for Healthy Brain Ageing (CHeBA), School of Psychiatry, University of New South Wales, Sydney, Australia
- Wei Wen
- Centre for Healthy Brain Ageing (CHeBA), School of Psychiatry, University of New South Wales, Sydney, Australia; Neuropsychiatric Institute, the Prince of Wales Hospital, Sydney, Australia
- Valery Feigin
- The National Institute for Stroke and Applied Neurosciences, School of Clinical Sciences, Auckland University of Technology, New Zealand; Research Center of Neurology, Moscow, Russia
- Nikola Kasabov
- School of Engineering, Computer and Mathematical Sciences, Auckland University of Technology, New Zealand; George Moore Chair, Ulster University, Londonderry, United Kingdom
26
Schmidgall S, Ashkanazy J, Lawson W, Hays J. SpikePropamine: Differentiable Plasticity in Spiking Neural Networks. Front Neurorobot 2021; 15:629210. [PMID: 34630063 PMCID: PMC8493296 DOI: 10.3389/fnbot.2021.629210] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2020] [Accepted: 08/11/2021] [Indexed: 11/17/2022] Open
Abstract
The adaptive changes in synaptic efficacy that occur between spiking neurons have been demonstrated to play a critical role in learning for biological neural networks. Despite this source of inspiration, many learning-focused applications using spiking neural networks (SNNs) retain static synaptic connections, preventing additional learning after the initial training period. Here, we introduce a framework for simultaneously learning, through gradient descent, the underlying fixed weights and the rules governing the dynamics of synaptic plasticity and neuromodulated synaptic plasticity in SNNs. We further demonstrate the capabilities of this framework on a series of challenging benchmarks, learning the parameters of several plasticity rules, including BCM, Oja's, and their respective sets of neuromodulatory variants. The experimental results show that SNNs augmented with differentiable plasticity can solve a set of challenging temporal learning tasks that a traditional SNN fails to solve, even in the presence of significant noise. These networks are also shown to be capable of producing locomotion in a high-dimensional robotic learning task, where near-minimal degradation in performance is observed in the presence of novel conditions not seen during the initial training period.
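The "differentiable plasticity" idea can be sketched in its simplest scalar form: the effective weight is a trainable fixed part plus a trainable gain on a Hebbian trace that keeps updating at run time. This is a Miconi-style illustration under simplifying assumptions (a single scalar connection, no spiking dynamics), not the paper's SNN formulation.

```python
def plastic_forward(x, w, alpha, hebb, eta=0.1):
    """One forward step through a plastic connection: the effective weight is
    w + alpha * hebb, where w and alpha are trained by gradient descent and
    hebb is a running Hebbian trace that continues to adapt after training."""
    y = x * (w + alpha * hebb)
    hebb = (1.0 - eta) * hebb + eta * x * y  # Hebbian trace update
    return y, hebb

y, h = plastic_forward(x=1.0, w=0.5, alpha=0.2, hebb=0.0)
print(y, h)  # 0.5 0.05
```

Because every operation here is differentiable, an autodiff framework can backpropagate through the trace update to learn w, alpha, and (in the neuromodulated variants) a signal gating eta.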
Affiliation(s)
- Julia Ashkanazy
- U.S. Naval Research Laboratory, Washington, DC, United States
- Wallace Lawson
- U.S. Naval Research Laboratory, Washington, DC, United States
- Joe Hays
- U.S. Naval Research Laboratory, Washington, DC, United States
27
Abstract
In recent years, spiking neural networks (SNNs) have attracted increasing numbers of researchers by virtue of their bio-interpretability and low-power computing. The SNN simulator is an essential tool for accomplishing image classification, recognition, speech recognition, and other tasks with SNNs. However, most existing SNN simulators are clock-driven, which raises two main problems. First, the calculation result is affected by the time slice: a coarse slice computes quickly but inaccurately, while a fine slice is accurate but unacceptably slow. The other is the failure of lateral inhibition, which severely affects SNN learning. To solve these problems, this paper proposes an event-driven, high-accuracy simulator named EDHA (Event-Driven High Accuracy) for spiking neural networks. EDHA takes full advantage of the event-driven characteristics of SNNs and only performs calculations when a spike is generated, independent of the time slice. Compared with previous SNN simulators, EDHA is completely event-driven, which eliminates a large amount of calculation and achieves higher computational accuracy. The calculation speed of EDHA in the MNIST classification task is more than 10 times faster than that of mainstream clock-driven simulators; by optimizing the spike encoding method, it can even be more than 100 times faster. Due to the cross-platform characteristics of Java, EDHA can run on x86, amd64, ARM, and other platforms that support Java.
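The clock-driven vs event-driven contrast comes down to when state is updated. The sketch below (a minimal Python illustration of the event-driven principle, not EDHA's Java implementation) advances a leaky integrate-and-fire neuron only at input-spike events, using the closed-form exponential decay between them, so there is no time slice and no slice-dependent error.

```python
import math

def run_event_driven(input_spikes, tau=10.0, threshold=1.0, w=0.6):
    """Event-driven LIF sketch: state is updated only at spike events.
    Between events the membrane decays in closed form, so accuracy does not
    depend on any discretization step."""
    v, t_last, out = 0.0, 0.0, []
    for t in sorted(input_spikes):
        v *= math.exp(-(t - t_last) / tau)  # exact decay since the last event
        v += w                              # impulse from the input spike
        if v >= threshold:
            out.append(t)
            v = 0.0                         # reset after an output spike
        t_last = t
    return out

print(run_event_driven([1.0, 2.0, 3.0]))  # [2.0]
```

A clock-driven simulator would instead loop over every time slice of the whole interval, with per-slice integration error; here each event costs one exponential regardless of how far apart spikes are.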
28
Xiang S, Ren Z, Song Z, Zhang Y, Guo X, Han G, Hao Y. Computing Primitive of Fully VCSEL-Based All-Optical Spiking Neural Network for Supervised Learning and Pattern Classification. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2021; 32:2494-2505. [PMID: 32673197 DOI: 10.1109/tnnls.2020.3006263] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
We propose a computing primitive for an all-optical spiking neural network (SNN) based on vertical-cavity surface-emitting lasers (VCSELs) for supervised learning using biologically plausible mechanisms. The spike-timing-dependent plasticity (STDP) model was established based on the dynamics of the vertical-cavity semiconductor optical amplifier (VCSOA) subject to dual-optical-pulse injection. A neuron-synapse self-consistent unified model of the all-optical SNN was developed, which reproduces the essential neuron-like dynamics and the STDP function. Optical characters (digits) are trained and tested by the proposed fully VCSEL-based all-optical SNN. Simulation results show that the proposed all-optical SNN is capable of recognizing ten digits with a supervised learning algorithm, in which the input and output patterns as well as the teacher signals are represented in spatiotemporal fashion. Moreover, lateral inhibition is not required in the proposed architecture, which is friendly to hardware implementation. The system-level unified model enables architecture-algorithm co-design and optimization of all-optical SNNs. To the best of our knowledge, a computing primitive of an all-optical SNN based on VCSELs for supervised learning has not yet been reported; this work paves the way toward fully VCSEL-based large-scale photonic neuromorphic systems with low power consumption.
29
Lan Y, Wang X, Wang Y. Spatio-Temporal Sequential Memory Model With Mini-Column Neural Network. Front Neurosci 2021; 15:650430. [PMID: 34121986 PMCID: PMC8195288 DOI: 10.3389/fnins.2021.650430] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2021] [Accepted: 03/15/2021] [Indexed: 11/13/2022] Open
Abstract
Memory is an intricate process involving various faculties of the brain and is a central component of human cognition. However, the exact mechanism that gives rise to memory in the brain remains elusive, and the performance of existing memory models is not satisfactory. To overcome these problems, this paper puts forward a brain-inspired spatio-temporal sequential memory model based on spiking neural networks (SNNs). Inspired by the structure of the neocortex, the proposed model is composed of many mini-columns of biological spiking neurons. Each mini-column represents one memory item, and the firing of different spiking neurons in a mini-column depends on the context of the previous inputs. Spike-Timing-Dependent Plasticity (STDP) is used to update the connections between excitatory neurons and to form associations between memory items. In addition, inhibitory neurons are employed to prevent incorrect predictions, which improves retrieval accuracy. Experimental results demonstrate that the proposed model can effectively store large amounts of data and accurately retrieve them when sufficient context is provided. This work not only provides a new memory model but also suggests how memory could be formed with excitatory/inhibitory neurons, spike-based encoding, and a mini-column structure.
Affiliation(s)
- Yawen Lan
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China; School of Information Engineering, Southwest University of Science and Technology, Mianyang, China
- Xiaobin Wang
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Yuchen Wang
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
30
The effects of eye movements on the visual cortical responding variability based on a spiking network. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.01.013] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
31
Dynamics of a Mutual Inhibition Circuit between Pyramidal Neurons Compared to Human Perceptual Competition. J Neurosci 2021; 41:1251-1264. [PMID: 33443089 DOI: 10.1523/jneurosci.2503-20.2020] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2020] [Revised: 11/16/2020] [Accepted: 12/09/2020] [Indexed: 11/21/2022] Open
Abstract
Neural competition plays an essential role in the active selection of noisy and ambiguous input signals, and it is assumed to underlie emergent properties of brain functioning such as perceptual organization and decision-making. Despite ample theoretical research on neural competition, experimental tools for neurophysiological investigation of competing neurons have not been available. We developed a "hybrid" system in which real-life neurons and a computer-simulated neural circuit interact. It enabled us to construct a mutual inhibition circuit between two real-life pyramidal neurons. We then asked what dynamics this minimal unit of neural competition exhibits and compared them with the known behavioral-level dynamics of neural competition. We found that the pair of neurons shows bistability when activated simultaneously by current injections. The addition of modeled synaptic noise and changes in the activation strength showed that the dynamics of the circuit are strikingly similar to the known properties of bistable visual perception: the distribution of dominance durations has a right-skewed shape, and changes in activation strength cause changes in dominance, dominance durations, and reversal rates, as stated in the well-known empirical laws of bistable perception known as Levelt's propositions.

SIGNIFICANCE STATEMENT Visual perception emerges as the result of neural systems actively organizing visual signals, which involves selection processes among competing neurons. While neural competition realized by a "mutual inhibition" circuit has been examined in many theoretical studies, its properties have not been investigated in real neurons. We developed a "hybrid" system in which two real-life pyramidal neurons in a mouse brain slice interact through a computer-simulated mutual inhibition circuit. We found that simultaneous activation of the neurons leads to bistable activity. We investigated the effects of noise and of changes in activation strength on the dynamics, and observed that the pair of neurons exhibits dynamics strikingly similar to the known properties of bistable visual perception.
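The bistability of a mutual inhibition circuit described in this abstract can be illustrated with a minimal two-unit rate-model sketch. This is a textbook abstraction with assumed parameter values, not the hybrid real-neuron setup of the study:

```python
import numpy as np

# Two rate units, each driven by an external input and inhibiting its
# competitor. With strong cross-inhibition (w_inh > 1) the symmetric state
# is unstable: a tiny initial asymmetry decides which unit "wins",
# i.e. the circuit is bistable. Parameters are illustrative assumptions.
def simulate(drive, w_inh=2.0, tau=10.0, dt=0.1, steps=2000, v0=(0.01, 0.0)):
    v = np.array(v0, dtype=float)
    for _ in range(steps):
        inhib = w_inh * v[::-1]                       # cross-inhibition
        dv = (-v + np.maximum(drive - inhib, 0.0)) / tau
        v += dt * dv                                  # forward Euler step
    return v

v = simulate(drive=1.0)
print(v.round(3))  # the slightly favoured unit suppresses the other
```

Swapping which unit receives the small initial bias flips the winner, which is the signature of bistability; adding noise to `dv` would produce the stochastic dominance switching that the study compares to Levelt's propositions.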
32
Time-Multiplexed Spiking Convolutional Neural Network Based on VCSELs for Unsupervised Image Classification. APPLIED SCIENCES-BASEL 2021. [DOI: 10.3390/app11041383] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
In this work, we present numerical results concerning a multilayer "deep" photonic spiking convolutional neural network, arranged so as to tackle a 2D image classification task. The spiking neurons used are typical two-section quantum-well vertical-cavity surface-emitting lasers that exhibit behavior isomorphic to biological neurons, such as integrate-and-fire excitability and timing encoding. The isomorphism of the proposed scheme to biological networks is extended by replicating the retinal ganglion cell for contrast detection in the photonic domain and by utilizing unsupervised spike-dependent plasticity as the main training technique. Finally, we also investigate the possibility of exploiting the fast carrier dynamics of lasers so as to time-multiplex spatial information and reduce the number of physical neurons used in the convolutional layers by orders of magnitude. This last feature unlocks new possibilities, where neuron count and processing speed can be interchanged so as to meet the constraints of different applications.
33
Sanders PJ, Doborjeh ZG, Doborjeh MG, Kasabov NK, Searchfield GD. Prediction of Acoustic Residual Inhibition of Tinnitus Using a Brain-Inspired Spiking Neural Network Model. Brain Sci 2021; 11:52. [PMID: 33466500 PMCID: PMC7824871 DOI: 10.3390/brainsci11010052] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2020] [Revised: 12/23/2020] [Accepted: 01/02/2021] [Indexed: 02/07/2023] Open
Abstract
Auditory Residual Inhibition (ARI) is a temporary suppression of tinnitus that occurs in some people following the presentation of masking sounds. Differences in neural response to ARI stimuli may enable classification of tinnitus and a tailored approach to intervention in the future. In an exploratory study, we investigated the use of a brain-inspired artificial neural network to examine the effects of ARI on electroencephalographic function, as well as the predictive ability of the model. Ten tinnitus patients underwent two auditory stimulation conditions (constant and amplitude modulated broadband noise) at two time points and were then characterised as responders or non-responders, based on whether they experienced ARI or not. Using a spiking neural network model, we evaluated concurrent neural patterns generated across space and time from features of electroencephalographic data, capturing the neural dynamic changes before and after stimulation. Results indicated that the model may be used to predict the effect of auditory stimulation on tinnitus on an individual basis. This approach may aid in the development of predictive models for treatment selection.
Affiliation(s)
- Philip J. Sanders
- Section of Audiology, The University of Auckland, Auckland 1023, New Zealand
- Eisdell Moore Centre, Auckland 1023, New Zealand
- Centre for Brain Research, The University of Auckland, Auckland 1023, New Zealand
- Zohreh G. Doborjeh
- Section of Audiology, The University of Auckland, Auckland 1023, New Zealand
- Eisdell Moore Centre, Auckland 1023, New Zealand
- Centre for Brain Research, The University of Auckland, Auckland 1023, New Zealand
- Maryam G. Doborjeh
- Information Technology and Software Engineering Department, Auckland University of Technology, Auckland 1010, New Zealand
- Nikola K. Kasabov
- School of Engineering, Computer and Mathematical Sciences, Auckland University of Technology, Auckland 1010, New Zealand
- Intelligent Systems Research Centre, Ulster University, Derry/Londonderry BT48 7JL, UK
- Auckland Bioengineering Institute, The University of Auckland, Auckland 1010, New Zealand
- Grant D. Searchfield
- Section of Audiology, The University of Auckland, Auckland 1023, New Zealand
- Eisdell Moore Centre, Auckland 1023, New Zealand
- Centre for Brain Research, The University of Auckland, Auckland 1023, New Zealand
34
Doborjeh Z, Doborjeh M, Crook-Rumsey M, Taylor T, Wang GY, Moreau D, Krägeloh C, Wrapson W, Siegert RJ, Kasabov N, Searchfield G, Sumich A. Interpretability of Spatiotemporal Dynamics of the Brain Processes Followed by Mindfulness Intervention in a Brain-Inspired Spiking Neural Network Architecture. SENSORS (BASEL, SWITZERLAND) 2020; 20:E7354. [PMID: 33371459 PMCID: PMC7767448 DOI: 10.3390/s20247354] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/24/2020] [Revised: 12/16/2020] [Accepted: 12/17/2020] [Indexed: 01/05/2023]
Abstract
Mindfulness training is associated with improvements in psychological wellbeing and cognition, yet the specific underlying neurophysiological mechanisms underpinning these changes are uncertain. This study uses a novel brain-inspired artificial neural network to investigate the effect of mindfulness training on electroencephalographic function. Participants completed a 4-tone auditory oddball task (that included targets and physically similar distractors) at three assessment time points. In Group A (n = 10), these tasks were given immediately prior to 6-week mindfulness training, immediately after training and at a 3-week follow-up; in Group B (n = 10), these were during an intervention waitlist period (3 weeks prior to training), pre-mindfulness training and post-mindfulness training. Using a spiking neural network (SNN) model, we evaluated concurrent neural patterns generated across space and time from features of electroencephalographic data capturing the neural dynamics associated with the event-related potential (ERP). This technique capitalises on the temporal dynamics of the shifts in polarity throughout the ERP and spatially across electrodes. Findings support anteriorisation of connection weights in response to distractors relative to target stimuli. Right frontal connection weights to distractors were associated with trait mindfulness (positively) and depression (inversely). Moreover, mindfulness training was associated with an increase in connection weights to targets (bilateral frontal, left frontocentral, and temporal regions only) and distractors. SNN models were superior to other machine learning methods in the classification of brain states as a function of mindfulness training. Findings suggest SNN models can provide useful information that differentiates brain states based on distinct task demands and stimuli, as well as changes in brain states as a function of psychological intervention.
Affiliation(s)
- Zohreh Doborjeh
- Faculty of Medical and Health Sciences, School of Population Health, Section of Audiology, The University of Auckland, Auckland 1142, New Zealand
- Eisdell Moore Centre, The University of Auckland, Auckland 1142, New Zealand
- Centre for Brain Research, The University of Auckland, Auckland 1142, New Zealand
- Maryam Doborjeh
- Information Technology and Software Engineering Department, Auckland University of Technology, Auckland 1010, New Zealand
- Mark Crook-Rumsey
- School of Psychology, Nottingham Trent University, Nottingham NG25 0QF, UK
- Tamasin Taylor
- Faculty of Medical and Health Sciences, The University of Auckland, Auckland 1142, New Zealand
- Grace Y. Wang
- Department of Psychology and Neuroscience, Auckland University of Technology, Auckland 0627, New Zealand
- David Moreau
- Centre for Brain Research, The University of Auckland, Auckland 1142, New Zealand
- School of Psychology, The University of Auckland, Auckland 1142, New Zealand
- Christian Krägeloh
- Department of Psychology and Neuroscience, Auckland University of Technology, Auckland 0627, New Zealand
- Wendy Wrapson
- School of Public Health and Interdisciplinary Studies, Auckland University of Technology, Auckland 0627, New Zealand
- Richard J. Siegert
- Department of Psychology and Neuroscience, Auckland University of Technology, Auckland 0627, New Zealand
- Nikola Kasabov
- Intelligent Systems Research Centre, Ulster University, Londonderry BT48 7JL, UK
- School of Engineering, Computing and Mathematical Sciences, Auckland University of Technology, Auckland 1010, New Zealand
- Grant Searchfield
- Faculty of Medical and Health Sciences, School of Population Health, Section of Audiology, The University of Auckland, Auckland 1142, New Zealand
- Eisdell Moore Centre, The University of Auckland, Auckland 1142, New Zealand
- Centre for Brain Research, The University of Auckland, Auckland 1142, New Zealand
- Alexander Sumich
- School of Psychology, Nottingham Trent University, Nottingham NG25 0QF, UK
35
Yang JQ, Wang R, Ren Y, Mao JY, Wang ZP, Zhou Y, Han ST. Neuromorphic Engineering: From Biological to Spike-Based Hardware Nervous Systems. ADVANCED MATERIALS (DEERFIELD BEACH, FLA.) 2020; 32:e2003610. [PMID: 33165986 DOI: 10.1002/adma.202003610] [Citation(s) in RCA: 68] [Impact Index Per Article: 17.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/26/2020] [Revised: 07/27/2020] [Indexed: 06/11/2023]
Abstract
The human brain is a sophisticated, high-performance biocomputer that processes multiple complex tasks in parallel with high efficiency and remarkably low power consumption. Scientists have long been pursuing an artificial intelligence (AI) that can rival the human brain. Spiking neural networks based on neuromorphic computing platforms simulate the architecture and information processing of the intelligent brain, providing new insights for building AIs. The rapid development of materials engineering, device physics, chip integration, and neuroscience has led to exciting progress in neuromorphic computing with the goal of overcoming the von Neumann bottleneck. Herein, fundamental knowledge related to the structures and working principles of neurons and synapses of the biological nervous system is reviewed. An overview is then provided on the development of neuromorphic hardware systems, from artificial synapses and neurons to spike-based neuromorphic computing platforms. It is hoped that this review will shed new light on the evolution of brain-like computing.
Affiliation(s)
- Jia-Qin Yang
- College of Electronics and Information Engineering, Shenzhen University, Shenzhen, 518060, P. R. China
- Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, 518060, P. R. China
- Ruopeng Wang
- College of Electronics and Information Engineering, Shenzhen University, Shenzhen, 518060, P. R. China
- Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, 518060, P. R. China
- Yi Ren
- Institute for Advanced Study, Shenzhen University, Shenzhen, 518060, P. R. China
- Jing-Yu Mao
- Institute for Advanced Study, Shenzhen University, Shenzhen, 518060, P. R. China
- Zhan-Peng Wang
- Institute for Advanced Study, Shenzhen University, Shenzhen, 518060, P. R. China
- Ye Zhou
- Institute for Advanced Study, Shenzhen University, Shenzhen, 518060, P. R. China
- Su-Ting Han
- Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, 518060, P. R. China
36
Wang W, Song W, Yao P, Li Y, Van Nostrand J, Qiu Q, Ielmini D, Yang JJ. Integration and Co-design of Memristive Devices and Algorithms for Artificial Intelligence. iScience 2020; 23:101809. [PMID: 33305176 PMCID: PMC7718163 DOI: 10.1016/j.isci.2020.101809] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022] Open
Abstract
Memristive devices share remarkable similarities with biological synapses, dendrites, and neurons at both the physical-mechanism level and the unit-functionality level, making the memristive approach to neuromorphic computing a promising technology for future artificial intelligence. However, these similarities do not directly translate into efficient computation without device-algorithm co-design and optimization. Contemporary deep learning algorithms demand that memristive artificial synapses ideally possess analog weighting and linear weight-update behavior, requiring substantial device-level and circuit-level optimization. Such co-design and optimization have been the main focus of memristive neuromorphic engineering, which often discards the "non-ideal" behaviors of memristive devices, even though many of them resemble behaviors observed in biological components. Novel brain-inspired algorithms are being proposed to exploit such behaviors as unique features to further enhance the efficiency and intelligence of neuromorphic computing, which calls for collaboration among electrical engineers, computer scientists, and neuroscientists.
Affiliation(s)
- Wei Wang
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano and IU.NET, Piazza L. da Vinci 32, Milano 20133, Italy
- Wenhao Song
- Electrical and Computer Engineering Department, University of Southern California, Los Angeles, CA, USA
- Peng Yao
- Electrical and Computer Engineering Department, University of Southern California, Los Angeles, CA, USA
- Yang Li
- The Andrew and Erna Viterbi Department of Electrical Engineering, Technion-Israel Institute of Technology, Haifa 32000, Israel
- Qinru Qiu
- Electrical Engineering and Computer Science Department, Syracuse University, NY, USA
- Daniele Ielmini
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano and IU.NET, Piazza L. da Vinci 32, Milano 20133, Italy
- J Joshua Yang
- Electrical and Computer Engineering Department, University of Southern California, Los Angeles, CA, USA
37
Galindo SE, Toharia P, Robles ÓD, Ros E, Pastor L, Garrido JA. Simulation, visualization and analysis tools for pattern recognition assessment with spiking neuronal networks. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.02.114] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
38
Panda P, Aketi SA, Roy K. Toward Scalable, Efficient, and Accurate Deep Spiking Neural Networks With Backward Residual Connections, Stochastic Softmax, and Hybridization. Front Neurosci 2020; 14:653. [PMID: 32694977 PMCID: PMC7339963 DOI: 10.3389/fnins.2020.00653] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2020] [Accepted: 05/26/2020] [Indexed: 11/24/2022] Open
Abstract
Spiking Neural Networks (SNNs) may offer an energy-efficient alternative for implementing deep learning applications. In recent years, there have been several proposals focused on supervised (conversion, spike-based gradient descent) and unsupervised (spike-timing-dependent plasticity) training methods to improve the accuracy of SNNs on large-scale tasks. However, each of these methods suffers from scalability, latency, and accuracy limitations. In this paper, we propose novel algorithmic techniques that modify the SNN configuration with backward residual connections, stochastic softmax, and hybrid artificial-and-spiking neuronal activations to improve the learning ability of the training methodologies, yielding competitive accuracy together with large efficiency gains over their artificial counterparts (conventional deep learning/artificial neural networks). Our techniques apply to VGG/residual architectures and are compatible with all forms of training methodologies. Our analysis reveals that the proposed solutions yield near state-of-the-art accuracy with significant energy efficiency and reduced parameter overhead, translating to hardware improvements on complex visual recognition tasks such as the CIFAR10 and ImageNet datasets.
Affiliation(s)
- Priyadarshini Panda
- Department of Electrical Engineering, Yale University, New Haven, CT, United States
- Sai Aparna Aketi
- School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, United States
- Kaushik Roy
- School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, United States
39
Somatodendritic consistency check for temporal feature segmentation. Nat Commun 2020; 11:1554. [PMID: 32214100 PMCID: PMC7096495 DOI: 10.1038/s41467-020-15367-w] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2019] [Accepted: 03/06/2020] [Indexed: 11/08/2022] Open
Abstract
The brain identifies potentially salient features within continuous information streams to process hierarchical temporal events. This requires the compression of information streams, for which effective computational principles are yet to be explored. Backpropagating action potentials can induce synaptic plasticity in the dendrites of cortical pyramidal neurons. By analogy with this effect, we model a self-supervising process that increases the similarity between dendritic and somatic activities, where the somatic activity is normalized by a running average. We further show that a family of networks composed of the two-compartment neurons performs a surprisingly wide variety of complex unsupervised learning tasks, including chunking of temporal sequences and the source separation of mixed correlated signals. Common methods applicable to these temporal feature analyses were previously unknown. Our results suggest the powerful ability of neural networks with dendrites to analyze temporal features. This simple neuron model may also be potentially useful in neural engineering applications.

The authors propose a learning rule for a neuron model with a dendrite. In their model, somatodendritic interaction implements self-supervised learning applicable to a wide range of sequence learning tasks, including spike pattern detection, chunking of temporal input, and blind source separation.
40
Efficient and hardware-friendly methods to implement competitive learning for spiking neural networks. Neural Comput Appl 2020. [DOI: 10.1007/s00521-020-04755-4] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
41
Emelyanov AV, Nikiruy KE, Serenko AV, Sitnikov AV, Presnyakov MY, Rybka RB, Sboev AG, Rylkov VV, Kashkarov PK, Kovalchuk MV, Demin VA. Self-adaptive STDP-based learning of a spiking neuron with nanocomposite memristive weights. NANOTECHNOLOGY 2020; 31:045201. [PMID: 31578002 DOI: 10.1088/1361-6528/ab4a6d] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Neuromorphic systems consisting of artificial neurons and memristive synapses could provide much better performance and a significantly more energy-efficient approach to implementing different types of neural network algorithms than traditional hardware with the von Neumann architecture. However, memristive weight adjustment in formal neuromorphic networks by standard back-propagation techniques suffers from poor device-to-device reproducibility. One of the most promising approaches to overcoming this problem is to use local learning rules for spiking neuromorphic architectures, which could potentially adapt to the variability issue mentioned above. Local rules for learning in spiking systems are mostly realized through a bio-inspired spike-timing-dependent plasticity (STDP) mechanism, an improved form of classical Hebbian learning. Whereas an STDP-like mechanism has already been shown to emerge naturally in memristive devices, the demonstration of its self-adaptive learning property, potentially overcoming the variability problem, is more challenging and has yet to be reported. Here we experimentally demonstrate an STDP-based learning protocol that ensures self-adaptation of the memristor resistive states after only a very few spikes and makes the plasticity sensitive only to the input signal configuration, not to the initial state of the devices or their device-to-device variability. We then show that self-adaptive learning of a spiking neuron with memristive weights on rate-coded patterns can also be realized with hardware-based STDP rules. The experiments were carried out with nanocomposite (Co40Fe40B20)x(LiNbO3-y)100-x memristive structures, but the results are believed to be applicable to a wide range of memristive devices. All experimental data were supported and extended by numerical simulations. It is hoped that these results pave the way for building reliable spiking neuromorphic systems composed of partially unreliable analog memristive elements, with more complex architectures and the capability of unsupervised learning.
Affiliation(s)
- A V Emelyanov
- National Research Center 'Kurchatov Institute', 123182 Moscow, Russia; Moscow Institute of Physics and Technology (State University), 141700 Dolgoprudny, Moscow Region, Russia
42
Lobov SA, Chernyshov AV, Krilova NP, Shamshin MO, Kazantsev VB. Competitive Learning in a Spiking Neural Network: Towards an Intelligent Pattern Classifier. SENSORS 2020; 20:s20020500. [PMID: 31963143 PMCID: PMC7014236 DOI: 10.3390/s20020500] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/03/2019] [Revised: 01/10/2020] [Accepted: 01/14/2020] [Indexed: 12/24/2022]
Abstract
One of the modern trends in the design of human–machine interfaces (HMI) is to involve the so called spiking neuron networks (SNNs) in signal processing. The SNNs can be trained by simple and efficient biologically inspired algorithms. In particular, we have shown that sensory neurons in the input layer of SNNs can simultaneously encode the input signal based both on the spiking frequency rate and on varying the latency in generating spikes. In the case of such mixed temporal-rate coding, the SNN should implement learning working properly for both types of coding. Based on this, we investigate how a single neuron can be trained with pure rate and temporal patterns, and then build a universal SNN that is trained using mixed coding. In particular, we study Hebbian and competitive learning in SNN in the context of temporal and rate coding problems. We show that the use of Hebbian learning through pair-based and triplet-based spike timing-dependent plasticity (STDP) rule is accomplishable for temporal coding, but not for rate coding. Synaptic competition inducing depression of poorly used synapses is required to ensure a neural selectivity in the rate coding. This kind of competition can be implemented by the so-called forgetting function that is dependent on neuron activity. We show that coherent use of the triplet-based STDP and synaptic competition with the forgetting function is sufficient for the rate coding. Next, we propose a SNN capable of classifying electromyographical (EMG) patterns using an unsupervised learning procedure. The neuron competition achieved via lateral inhibition ensures the “winner takes all” principle among classifier neurons. The SNN also provides gradual output response dependent on muscular contraction strength. Furthermore, we modify the SNN to implement a supervised learning method based on stimulation of the target classifier neuron synchronously with the network input. 
In a problem of discriminating three EMG patterns, the SNN with supervised learning shows a median accuracy of 99.5%, close to the result demonstrated by a multi-layer perceptron trained by the error backpropagation algorithm.
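The pair-based STDP rule and the activity-dependent forgetting function described in the abstract can be sketched as follows. This is a minimal illustration: the function names, parameter values, and the exact form of the forgetting term are assumptions for exposition, not the paper's precise formulation.

```python
import math

def pair_stdp(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change for dt = t_post - t_pre (ms).
    Pre-before-post (dt >= 0) potentiates; post-before-pre depresses."""
    if dt >= 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

def forget(w, activity, beta=1e-3):
    """Activity-dependent forgetting: synapses on a poorly used pathway
    (low activity) decay faster, inducing the synaptic competition the
    authors find necessary for rate coding."""
    return w - beta * w * (1.0 - activity)
```

A causally ordered spike pair strengthens the synapse, an anti-causal pair weakens it, and the forgetting term prunes synapses whose pathway is rarely active.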
43
Milo V, Malavena G, Monzio Compagnoni C, Ielmini D. Memristive and CMOS Devices for Neuromorphic Computing. MATERIALS (BASEL, SWITZERLAND) 2020; 13:E166. [PMID: 31906325 PMCID: PMC6981548 DOI: 10.3390/ma13010166] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/28/2019] [Revised: 12/17/2019] [Accepted: 12/18/2019] [Indexed: 11/17/2022]
Abstract
Neuromorphic computing has emerged as one of the most promising paradigms to overcome the limitations of the von Neumann architecture of conventional digital processors. The aim of neuromorphic computing is to faithfully reproduce the computing processes in the human brain, thus paralleling its outstanding energy efficiency and compactness. Toward this goal, however, some major challenges have to be faced. Since the brain processes information in high-density neural networks with ultra-low power consumption, novel device concepts combining high scalability, low-power operation, and advanced computing functionality must be developed. This work provides an overview of the most promising device concepts in neuromorphic computing, including complementary metal-oxide semiconductor (CMOS) and memristive technologies. First, the physics and operation of CMOS-based floating-gate memory devices in artificial neural networks will be addressed. Then, several memristive concepts will be reviewed and discussed for applications in deep neural network and spiking neural network architectures. Finally, the main technology challenges and perspectives of neuromorphic computing will be discussed.
Affiliation(s)
- Daniele Ielmini
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano and Italian Universities Nanoelectronics Team (IU.NET), Piazza L. da Vinci 32, 20133 Milano, Italy; (V.M.); (G.M.); (C.M.C.)
44
Roy K, Jaiswal A, Panda P. Towards spike-based machine intelligence with neuromorphic computing. Nature 2019; 575:607-617. [PMID: 31776490 DOI: 10.1038/s41586-019-1677-2] [Citation(s) in RCA: 319] [Impact Index Per Article: 63.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2018] [Accepted: 07/09/2019] [Indexed: 11/08/2022]
Abstract
Guided by brain-like 'spiking' computational frameworks, neuromorphic computing (brain-inspired computing for machine intelligence) promises to realize artificial intelligence while reducing the energy requirements of computing platforms. This interdisciplinary field began with the implementation of silicon circuits for biological neural routines, but has since evolved to encompass the hardware implementation of algorithms with spike-based encoding and event-driven representations. Here we provide an overview of developments in neuromorphic computing for both algorithms and hardware and highlight the fundamentals of learning and hardware frameworks. We discuss the main challenges and future prospects of neuromorphic computing, with emphasis on algorithm-hardware co-design.
45
46
Bernert M, Yvert B. An Attention-Based Spiking Neural Network for Unsupervised Spike-Sorting. Int J Neural Syst 2019; 29:1850059. [DOI: 10.1142/s0129065718500594] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Bio-inspired computing using artificial spiking neural networks promises performance outperforming currently available computational approaches. Yet, the number of applications of such networks remains limited due to the absence of generic training procedures for complex pattern recognition, which require the design of dedicated architectures for each situation. We developed a spike-timing-dependent plasticity (STDP) spiking neural network (SNN) to address spike-sorting, a central pattern recognition problem in neuroscience. This network is designed to process an extracellular neural signal in an online and unsupervised fashion. The signal stream is continuously fed to the network and processed through several layers to output spike trains matching the ground truth after a short learning period requiring only a small amount of data. The network features an attention mechanism to handle the scarcity of action potential occurrences in the signal, and a threshold adaptation mechanism to handle patterns of different sizes. This method outperforms two existing spike-sorting algorithms at low signal-to-noise ratio (SNR) and can be adapted to process several channels simultaneously in the case of tetrode recordings. Such an attention-based STDP network applied to spike-sorting opens perspectives to embed neuromorphic processing of neural data in future brain implants.
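The threshold adaptation mechanism mentioned in the abstract can be illustrated with a minimal leaky integrate-and-fire sketch: the threshold jumps after each spike and relaxes back toward baseline, so waveforms of different sizes can be detected without retuning. All names and parameter values here are illustrative assumptions, not the network's actual design.

```python
def lif_adaptive_threshold(signal, leak=0.9, v_th0=1.0, dth=0.5, decay=0.99):
    """Leaky integrate-and-fire neuron with an adaptive firing threshold.
    Returns a binary spike train the same length as the input signal."""
    v, v_th, spikes = 0.0, v_th0, []
    for x in signal:
        v = leak * v + x                       # leaky integration of the input
        if v >= v_th:
            spikes.append(1)
            v = 0.0                            # reset membrane potential
            v_th += dth                        # raise threshold after firing
        else:
            spikes.append(0)
        v_th = v_th0 + (v_th - v_th0) * decay  # threshold relaxes to baseline
    return spikes
```

A sustained large input fires the neuron, but each spike makes the next one harder to elicit, which curbs the response to big waveforms while small ones can still cross the relaxed baseline threshold.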
Affiliation(s)
- Marie Bernert
- BrainTech Laboratory U1205, INSERM, 2280 Rue de la Piscine, 38400 Saint-Martin-d’Hères, France
- BrainTech Laboratory U1205, Université Grenoble Alpes, 2280 Rue de la Piscine, 38400 Saint-Martin-d’Hères, France
- LETI, CEA Grenoble, 17 Rue des Martyrs, 38000 Grenoble, France
- Blaise Yvert
- BrainTech Laboratory U1205, INSERM, 2280 Rue de la Piscine, 38400 Saint-Martin-d’Hères, France
- BrainTech Laboratory U1205, Université Grenoble Alpes, 2280 Rue de la Piscine, 38400 Saint-Martin-d’Hères, France
47
Kumarasinghe K, Kasabov N, Taylor D. Deep learning and deep knowledge representation in Spiking Neural Networks for Brain-Computer Interfaces. Neural Netw 2019; 121:169-185. [PMID: 31568895 DOI: 10.1016/j.neunet.2019.08.029] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2019] [Revised: 08/26/2019] [Accepted: 08/26/2019] [Indexed: 01/21/2023]
Abstract
OBJECTIVE: This paper argues that Brain-Inspired Spiking Neural Network (BI-SNN) architectures can learn and reveal deep-in-time-space functional and structural patterns from spatio-temporal data. These patterns can be represented as deep knowledge, in a partial case in the form of deep spatio-temporal rules. This is a promising direction for building a new type of Brain-Computer Interface called a Brain-Inspired Brain-Computer Interface (BI-BCI). A theoretical framework and its experimental validation on deep knowledge extraction and representation using SNNs are presented.
RESULTS: The proposed methodology was applied in a case study to extract deep knowledge of the functional and structural organisation of the brain's neural network during the execution of a grasp-and-lift task. The BI-BCI successfully extracted the neural trajectories that represent the dorsal and ventral visual information processing streams, as well as their connection to the motor cortex. Deep spatio-temporal rules on the functional and structural interaction of distinct brain areas were then used for event prediction in the BI-BCI.
SIGNIFICANCE: The computational framework can be used for unveiling the topological patterns of the brain, and such knowledge can be effectively used to enhance the state of the art in BCI.
Affiliation(s)
- Kaushalya Kumarasinghe
- Knowledge Engineering and Discovery Research Institute, Auckland University of Technology, Auckland, New Zealand; Health and Rehabilitation Research Institute, Auckland University of Technology, Auckland, New Zealand.
- Nikola Kasabov
- Knowledge Engineering and Discovery Research Institute, Auckland University of Technology, Auckland, New Zealand.
- Denise Taylor
- Health and Rehabilitation Research Institute, Auckland University of Technology, Auckland, New Zealand.
48
Camuñas-Mesa LA, Linares-Barranco B, Serrano-Gotarredona T. Neuromorphic Spiking Neural Networks and Their Memristor-CMOS Hardware Implementations. MATERIALS (BASEL, SWITZERLAND) 2019; 12:E2745. [PMID: 31461877 PMCID: PMC6747825 DOI: 10.3390/ma12172745] [Citation(s) in RCA: 41] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/05/2019] [Revised: 08/02/2019] [Accepted: 08/10/2019] [Indexed: 11/17/2022]
Abstract
Inspired by biology, neuromorphic systems have been trying to emulate the human brain for decades, taking advantage of its massive parallelism and sparse information coding. Recently, several large-scale hardware projects have demonstrated the outstanding capabilities of this paradigm for applications related to sensory information processing. These systems allow for the implementation of massive neural networks with millions of neurons and billions of synapses. However, the realization of learning strategies in these systems consumes a significant proportion of resources in terms of area and power. The recent development of nanoscale memristors that can be integrated with Complementary Metal-Oxide-Semiconductor (CMOS) technology offers a very promising way to emulate the behavior of biological synapses. Therefore, hybrid memristor-CMOS approaches have been proposed to implement large-scale neural networks with learning capabilities, offering a scalable and lower-cost alternative to existing CMOS systems.
Affiliation(s)
- Luis A Camuñas-Mesa
- Instituto de Microelectrónica de Sevilla (IMSE-CNM), CSIC and Universidad de Sevilla, 41092 Sevilla, Spain.
- Bernabé Linares-Barranco
- Instituto de Microelectrónica de Sevilla (IMSE-CNM), CSIC and Universidad de Sevilla, 41092 Sevilla, Spain.
- Teresa Serrano-Gotarredona
- Instituto de Microelectrónica de Sevilla (IMSE-CNM), CSIC and Universidad de Sevilla, 41092 Sevilla, Spain.
49
Guo Y, Wu H, Gao B, Qian H. Unsupervised Learning on Resistive Memory Array Based Spiking Neural Networks. Front Neurosci 2019; 13:812. [PMID: 31447634 PMCID: PMC6691091 DOI: 10.3389/fnins.2019.00812] [Citation(s) in RCA: 36] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2019] [Accepted: 07/22/2019] [Indexed: 11/13/2022] Open
Abstract
Spiking Neural Networks (SNNs) offer great potential to promote both the performance and efficiency of real-world computing systems, given their biological plausibility. Emerging analog Resistive Random Access Memory (RRAM) devices have drawn increasing interest as potential neuromorphic hardware for implementing practical SNNs. In this article, we propose a novel training approach (called greedy training) for SNNs that dilutes spike events along the temporal dimension, with necessary controls on input-encoding phase switching, endowing SNNs with the ability to cope with the inevitable conductance variations of RRAM devices. The SNNs can use Spike-Timing-Dependent Plasticity (STDP) as the unsupervised learning rule, and this plasticity has been observed on our one-transistor-one-resistor (1T1R) RRAM devices under voltage pulses with designed waveforms. We have also conducted handwritten digit recognition simulations on the MNIST dataset. The results show that unsupervised SNNs trained by the proposed method can relax the requirement on the number of gradual conductance levels of RRAM devices and are immune to both cycle-to-cycle and device-to-device RRAM conductance variations. Unsupervised SNNs trained by the proposed method cooperate better with real RRAM devices exhibiting non-ideal behaviors, promising high feasibility of RRAM-array-based neuromorphic systems for online training.
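The cycle-to-cycle conductance variation the abstract refers to is often modeled as multiplicative noise on each programming pulse, clipped to the device's conductance window. A minimal sketch under that assumption follows; the function name, noise model, and window values are illustrative, not the authors' device parameters.

```python
import random

def rram_pulse(g, dg_nominal, g_min=1e-6, g_max=1e-4, sigma=0.1):
    """Apply one programming pulse to an RRAM synapse's conductance g:
    the nominal step dg_nominal is perturbed by multiplicative
    cycle-to-cycle variation, and the result is clipped to the
    device's conductance window [g_min, g_max]."""
    dg = dg_nominal * (1.0 + random.gauss(0.0, sigma))
    return min(g_max, max(g_min, g + dg))
```

A training scheme that tolerates this noise (as greedy training is reported to) should converge even though identical pulses never produce identical conductance steps.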
Affiliation(s)
- Yilong Guo
- Institute of Microelectronics, Tsinghua University, Beijing, China
- Huaqiang Wu
- Institute of Microelectronics, Tsinghua University, Beijing, China
- Bin Gao
- Institute of Microelectronics, Tsinghua University, Beijing, China
- He Qian
- Institute of Microelectronics, Tsinghua University, Beijing, China
50
Diamond A, Schmuker M, Nowotny T. An unsupervised neuromorphic clustering algorithm. BIOLOGICAL CYBERNETICS 2019; 113:423-437. [PMID: 30944983 PMCID: PMC6658584 DOI: 10.1007/s00422-019-00797-7] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/26/2017] [Accepted: 03/23/2019] [Indexed: 06/09/2023]
Abstract
Brains perform complex tasks using a fraction of the power that would be required to do the same on a conventional computer. New neuromorphic hardware systems are now becoming widely available that are intended to emulate the more power-efficient, highly parallel operation of brains. However, to use these systems in applications, we need "neuromorphic algorithms" that can run on them. Here we develop a spiking neural network model for neuromorphic hardware that uses spike timing-dependent plasticity and lateral inhibition to perform unsupervised clustering. With this model, time-invariant, rate-coded datasets can be mapped into a feature space with a specified resolution, i.e., number of clusters, using exclusively neuromorphic hardware. We developed and tested implementations on the SpiNNaker neuromorphic system and on GPUs using the GeNN framework. We show that our neuromorphic clustering algorithm achieves results comparable to those of conventional clustering algorithms such as self-organizing maps, neural gas, or k-means clustering. We then combine it with a previously reported supervised neuromorphic classifier network to demonstrate its practical use as a neuromorphic preprocessing module.
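The clustering principle the abstract describes (lateral inhibition lets one neuron win and only the winner learns) can be sketched as a simplified rate-based analogue of the spiking mechanism. The function name and learning-rate value are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def wta_step(weights, x, lr=0.05):
    """One winner-take-all clustering update: the neuron whose weight
    vector is most driven by input x wins (lateral inhibition silences
    the rest), and only the winner's weights move toward the input.
    Modifies `weights` in place and returns the winner's index."""
    winner = int(np.argmax(weights @ x))       # lateral inhibition: one winner
    weights[winner] += lr * (x - weights[winner])
    return winner
```

Repeated over a dataset, each row of `weights` drifts toward the centroid of the inputs it wins, which is the feature-space mapping the abstract describes.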
Affiliation(s)
- Alan Diamond
- School of Engineering and Informatics, University of Sussex, Falmer, Brighton, BN1 9QJ, UK
- Michael Schmuker
- Department of Computer Science, University of Hertfordshire, Hatfield, Hertfordshire, AL10 9AB, UK
- Thomas Nowotny
- School of Engineering and Informatics, University of Sussex, Falmer, Brighton, BN1 9QJ, UK