1. Das H, Schuman C, Chakraborty NN, Rose GS. Enhanced read resolution in reconfigurable memristive synapses for Spiking Neural Networks. Sci Rep 2024; 14:8897. PMID: 38632304; PMCID: PMC11024114; DOI: 10.1038/s41598-024-58947-2.
Abstract
The synapse is a key circuit element in any memristor-based neuromorphic computing system. A memristor is a two-terminal analog memory device. Memristive synapses suffer from various challenges, including high voltage requirements, SET or RESET failures, and READ margin issues that can degrade the distinguishability of stored weights. Enhancing READ resolution is therefore important for improving the reliability of memristive synapses; for a memristive synapse with 4-bit data precision, the READ resolution is usually very small. This work presents a step-by-step analysis to enhance the READ current resolution, i.e., the READ current difference between two resistance levels, for a current-controlled memristor-based synapse. An empirical model is used to characterize the HfO2-based memristive device. Scaling the first- and second-stage devices of the proposed synapse design enhances the READ current margin by up to ∼4.3× and ∼21%, respectively. Moreover, READ current resolution can be further improved with run-time adaptation techniques: READ voltage scaling and body biasing improve it by about 46% and 15%, respectively. TENNLab's neuromorphic computing framework is leveraged to evaluate the effect of READ current resolution on classification, control, and reservoir computing applications. Higher READ current resolution yields better accuracy than lower resolution, even under different levels of READ noise.
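The READ-margin idea in this abstract can be sketched numerically: with Ohmic READs, the current difference between two adjacent resistance levels scales linearly with the READ voltage. The resistance range and level spacing below are illustrative placeholders, not the paper's HfO2 device parameters.

```python
# Sketch: READ current margin between adjacent resistance levels of a
# 4-bit (16-level) memristive synapse. All values are hypothetical.

def read_currents(v_read, resistances):
    """Ohmic READ current for each stored resistance level."""
    return [v_read / r for r in resistances]

def min_read_margin(currents):
    """Smallest current difference between adjacent levels."""
    ordered = sorted(currents)
    return min(b - a for a, b in zip(ordered, ordered[1:]))

# 16 levels spaced linearly between 10 kOhm and 100 kOhm (illustrative only)
levels = [10e3 + i * 6e3 for i in range(16)]
margin_lo = min_read_margin(read_currents(0.1, levels))  # 0.1 V READ
margin_hi = min_read_margin(read_currents(0.2, levels))  # scaled READ voltage
# Doubling the READ voltage doubles every current, and hence the margin.
```

This only illustrates why READ voltage scaling widens the margin; the paper's device-level gains come from its empirical HfO2 model and circuit design.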
Affiliation(s)
- Hritom Das: Department of Electrical Engineering and Computer Science, University of Tennessee, Knoxville, TN 37996, USA
- Catherine Schuman: Department of Electrical Engineering and Computer Science, University of Tennessee, Knoxville, TN 37996, USA
- Nishith N Chakraborty: Department of Electrical Engineering and Computer Science, University of Tennessee, Knoxville, TN 37996, USA
- Garrett S Rose: Department of Electrical Engineering and Computer Science, University of Tennessee, Knoxville, TN 37996, USA
2. The Impact of Trap-Assisted Tunneling and Poole–Frenkel Emission on Synaptic Potentiation in an α-Fe2O3/p-Si Memristive Device. Sci 2023; 5:3. DOI: 10.3390/sci5010003.
Abstract
A signature of synaptic potentiation conductance has been observed in an α-Fe2O3/p-Si device fabricated using spin coating. The conductance of the device in dark conditions and illumination with a white light source was characterized as a function of the application of a periodic bias (voltage) with a triangular profile. The conductance of the device increases with the number of voltage cycles applied and plateaus to its maximum value of 0.70 μS under dark conditions and 12.00 μS under illumination, and this mimics the analog synaptic weight change with the action potential of a neuron. In the range of applied voltage from 0 V to 0.7 V, the conduction mechanism corresponds to trap-assisted tunneling (TAT) and in the range of 0.7–5 V it corresponds to the Poole–Frenkel emission (PFE). The conductance as a function of electrical pulses was fitted with a Hill function, which is a measure of cooperation in biological systems. In this case, it allows one to determine the turn-on threshold (K) of the device in terms of the number of voltage pulses, which are found to be 3 and 166 under dark and illumination conditions, respectively. The gradual conductance change and activation after a certain number of pulses perfectly mimics the synaptic potentiation of neurons. In addition, the threshold parameter extracted from the Hill equation fit, acting as the number of pulses for synaptic activation, is found to have programmability with the intensity of the light illumination.
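The Hill-function fit described above has a convenient property: at n = K pulses the conductance is half its plateau value. A minimal sketch, using the plateau conductances and thresholds quoted in the abstract but an assumed Hill coefficient h = 2 (the abstract does not report h):

```python
# Hill function: conductance as a function of pulse number n.
# G(n) = G_max * n^h / (K^h + n^h); K is the turn-on threshold in pulses.

def hill(n, g_max, k, h):
    """Conductance after n voltage pulses under a Hill-function fit."""
    return g_max * n**h / (k**h + n**h)

# Dark: plateau 0.70 uS, threshold K = 3 pulses (from the abstract)
g_dark = hill(3, 0.70e-6, 3, 2.0)      # at n = K the response is half-maximal
# Illuminated: plateau 12.00 uS, threshold K = 166 pulses
g_light = hill(166, 12.00e-6, 166, 2.0)
```

The half-maximum-at-K property holds for any h, which is what makes K a natural "activation threshold" for the synaptic potentiation curve.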
3. Mayacela M, Rentería L, Contreras L, Medina S. Comparative Analysis of Reconfigurable Platforms for Memristor Emulation. Materials (Basel) 2022; 15:4487. PMID: 35806617; PMCID: PMC9267316; DOI: 10.3390/ma15134487.
Abstract
The memristor is the fourth fundamental element in the electronic circuit field, whose combination of memory and resistance properties makes it unique. Although commercial electronic solutions based on the memristor are not yet available, interest in application development has increased significantly. Nevertheless, only numerical MATLAB or SPICE models can be used for simulating memristor systems, so design work is limited to memristor emulators. A memristor emulator is an electronic circuit that mimics a memristor; a common research approach is therefore to build discrete-component emulators for study without using actual devices. In this work, two reconfigurable hardware architectures are proposed for prototyping a nonlinear memristor emulator: the FPAA (Field-Programmable Analog Array) and the FPGA (Field-Programmable Gate Array). The easy programming and reprogramming of the former, and the performance, high area density, and parallelism of the latter, make them suitable for implementing this type of system. In addition, a detailed comparison underlines the main differences between the two approaches. These platforms could be used in more complex analog and/or digital systems, such as neural networks, CNNs, and digital circuits.
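As a rough illustration of what such an emulator mimics, here is a minimal linear-drift (HP-style) memristor model of the kind a numerical MATLAB or SPICE model implements. All device constants are illustrative placeholders, not values from the paper:

```python
# Linear ionic-drift memristor model (HP-style sketch).
# State variable w is the doped-region width, 0 <= w <= d.

def simulate_memristor(voltages, dt, r_on=100.0, r_off=16e3, mu=1e-14, d=1e-8):
    """Apply a voltage waveform (one sample per dt seconds); return memristance."""
    w = 0.5 * d  # start with the device half-doped
    for v in voltages:
        r = r_on * (w / d) + r_off * (1 - w / d)  # current memristance
        i = v / r                                  # Ohmic current
        w += mu * (r_on / d) * i * dt              # linear ionic drift
        w = min(max(w, 0.0), d)                    # state stays inside the device
    return r_on * (w / d) + r_off * (1 - w / d)

# Positive bias drives w up and lowers the resistance; negative bias raises it.
r_set = simulate_memristor([1.0] * 100, 1e-6)
r_reset = simulate_memristor([-1.0] * 100, 1e-6)
```

An FPAA or FPGA emulator reproduces exactly this kind of state-dependent resistance with analog blocks or digital arithmetic, respectively.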
Affiliation(s)
- Margarita Mayacela (corresponding author; Tel.: +593-960596700): Faculty of Civil and Mechanical Engineering, Research and Development Directorate, Technical University of Ambato, Ambato 180207, Ecuador
- Leonardo Rentería: Faculty of Engineering, National University of Chimborazo, Av. Antonio José de Sucre, Riobamba 060108, Ecuador
- Luis Contreras: Faculty of Civil and Mechanical Engineering, Research and Development Directorate, Technical University of Ambato, Ambato 180207, Ecuador
- Santiago Medina: Faculty of Civil and Mechanical Engineering, Research and Development Directorate, Technical University of Ambato, Ambato 180207, Ecuador
4. Suzuki Y, Asakawa N. Stochastic Resonance in Organic Electronic Devices. Polymers (Basel) 2022; 14:747. PMID: 35215663; PMCID: PMC8878602; DOI: 10.3390/polym14040747.
Abstract
Stochastic resonance (SR) is a phenomenon in which noise improves the performance of a system. With the addition of noise, a weak input signal that would not by itself exceed the threshold of a nonlinear system is transformed into an output signal; in other words, noise-driven signal transfer is achieved. SR has been observed in nonlinear response systems, both biological and artificial, and this review focuses mainly on previous studies of mathematical models and experimental realizations of SR using poly(hexylthiophene)-based organic field-effect transistors (OFETs). The phenomenon may contribute to signal processing with low energy consumption; however, generating SR requires a noise source. The focus is therefore on OFETs built from organic materials with unstable electrical properties, and on critical elements with unidirectional signal transmission, such as neural synapses. It has been reported that SR can be observed in OFETs by applying external noise, but SR does not occur when the input signal exceeds the OFET threshold without external noise. Here, we present an example of a study analyzing the behavior of SR in OFET systems, explain how SR can be made observable, and discuss the role of internal noise in OFETs.
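The threshold mechanism behind SR can be demonstrated with a toy model: a subthreshold sine produces no output events on its own, while added noise lets events through, preferentially near the signal peaks. All parameter values here are illustrative, not OFET measurements:

```python
import math
import random

def threshold_output(noise_std, threshold=1.0, amplitude=0.5, n=5000, seed=1):
    """Count samples where subthreshold sine + Gaussian noise crosses the threshold."""
    rng = random.Random(seed)
    count = 0
    for i in range(n):
        signal = amplitude * math.sin(2 * math.pi * i / 100)  # peak 0.5 < threshold
        if signal + rng.gauss(0, noise_std) > threshold:
            count += 1
    return count

no_noise = threshold_output(0.0)    # subthreshold alone: no output events
with_noise = threshold_output(0.4)  # noise-assisted transmission
```

In a full SR analysis one would sweep `noise_std` and find an intermediate value that maximizes the output's correlation with the input; too much noise degrades the signal again.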
5. Yoo J, Shoaran M. Neural interface systems with on-device computing: machine learning and neuromorphic architectures. Curr Opin Biotechnol 2021; 72:95-101. PMID: 34735990; DOI: 10.1016/j.copbio.2021.10.012.
Abstract
Development of neural interface and brain-machine interface (BMI) systems enables the treatment of neurological disorders, including cognitive, sensory, and motor dysfunctions. While neural interfaces have steadily decreased in form factor, recent developments target pervasive implantables. Along with advances in electrodes, neural recording, and neurostimulation circuits, the integration of disease biomarkers and machine learning algorithms enables real-time, on-site processing of neural activity with no need for power-demanding telemetry. This recent trend of combining artificial intelligence and machine learning with modern neural interfaces will lead to a new generation of low-power, smart, and miniaturized therapeutic devices for a wide range of neurological and psychiatric disorders. This paper reviews recent developments in on-chip machine learning and neuromorphic architectures, one of the key puzzles in devising next-generation clinically viable neural interface systems.
Affiliation(s)
- Jerald Yoo: Department of Electrical and Computer Engineering, National University of Singapore, Singapore 117585, Singapore; The N.1 Institute for Health, Singapore 117456, Singapore
- Mahsa Shoaran: Institute of Electrical Engineering, Center for Neuroprosthetics, École Polytechnique Fédérale de Lausanne (EPFL), 1202 Geneva, Switzerland
6. James A. The Why, What and How of Artificial General Intelligence Chip Development. IEEE Trans Cogn Dev Syst 2021. DOI: 10.1109/tcds.2021.3069871.
7.

8. Shukla A, Ganguly U. An On-Chip Trainable and the Clock-Less Spiking Neural Network With 1R Memristive Synapses. IEEE Trans Biomed Circuits Syst 2018; 12:884-893. PMID: 29993721; DOI: 10.1109/tbcas.2018.2831618.
Abstract
Spiking neural networks (SNNs) are being explored in an attempt to mimic the brain's capability to learn and recognize at low power. A crossbar architecture, with a highly scalable resistive RAM (RRAM) array serving as synaptic weights and neuronal drivers in the periphery, is an attractive option for SNNs. Recognition (akin to "reading" the synaptic weight) requires a small-amplitude bias applied across the RRAM to minimize conductance change, while learning (akin to "writing" or updating the synaptic weight) requires large-amplitude bias pulses to produce a conductance change. The contradictory bias-amplitude requirements for performing reading and writing simultaneously and asynchronously, as in biology, are a major challenge. Solutions suggested in the literature rely on clock-based time-division multiplexing of read and write operations, or on approximations that ignore reads coinciding with writes. In this paper, we overcome this challenge and present a clock-less approach wherein reading and writing are performed in different frequency domains, enabling learning and recognition simultaneously on an SNN. We validate our scheme in a SPICE circuit simulator by translating a two-layered feed-forward Iris-classifying SNN to demonstrate software-equivalent performance. The system performance is not adversely affected by the voltage dependence of conductance in realistic RRAMs, despite departing from linearity. Overall, our approach enables direct implementation of biological SNN algorithms in hardware.
9. Agarwal S, Quach TT, Parekh O, Hsia AH, DeBenedictis EP, James CD, Marinella MJ, Aimone JB. Energy Scaling Advantages of Resistive Memory Crossbar Based Computation and Its Application to Sparse Coding. Front Neurosci 2016; 9:484. PMID: 26778946; PMCID: PMC4701906; DOI: 10.3389/fnins.2015.00484.
Abstract
The exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational properties of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an N × N crossbar, these two kernels can be O(N) more energy efficient than a conventional digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm when run with finite precision. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning.
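The two crossbar kernels named in this abstract can be sketched directly: a parallel read is a vector-matrix multiply (column currents summing by Kirchhoff's current law), and a parallel write is a rank-1 update of the conductance matrix. This is a plain software sketch, not the authors' hardware model:

```python
# Crossbar kernels as plain Python: G[i][j] is the conductance at row i, column j.

def crossbar_read(G, v):
    """Parallel read: column currents I_j = sum_i v_i * G[i][j]."""
    n = len(G)
    return [sum(v[i] * G[i][j] for i in range(n)) for j in range(len(G[0]))]

def rank1_update(G, row_pulses, col_pulses, eta=1.0):
    """Parallel write: in-place rank-1 update G[i][j] += eta * r_i * c_j."""
    for i, r in enumerate(row_pulses):
        for j, c in enumerate(col_pulses):
            G[i][j] += eta * r * c
    return G

G = [[1.0, 2.0], [3.0, 4.0]]
I = crossbar_read(G, [1.0, 0.5])            # vector-matrix multiply
rank1_update(G, [1.0, 0.0], [0.5, 0.25])    # only row 0 conductances change
```

In hardware both kernels happen in one step across the whole array, which is the source of the O(N) energy advantage the abstract describes; software must still loop over every element.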
Affiliation(s)
- Sapan Agarwal: Microsystems Science and Technology, Sandia National Laboratories, Albuquerque, NM, USA
- Tu-Thach Quach: Sensor Exploitation, Sandia National Laboratories, Albuquerque, NM, USA
- Ojas Parekh: Center for Computing Research, Sandia National Laboratories, Albuquerque, NM, USA
- Alexander H Hsia: Microsystems Science and Technology, Sandia National Laboratories, Albuquerque, NM, USA
- Erik P DeBenedictis: Center for Computing Research, Sandia National Laboratories, Albuquerque, NM, USA
- Conrad D James: Microsystems Science and Technology, Sandia National Laboratories, Albuquerque, NM, USA
- Matthew J Marinella: Microsystems Science and Technology, Sandia National Laboratories, Albuquerque, NM, USA
- James B Aimone: Center for Computing Research, Sandia National Laboratories, Albuquerque, NM, USA
10. Srinivasa N, Stepp ND, Cruz-Albrecht J. Criticality as a Set-Point for Adaptive Behavior in Neuromorphic Hardware. Front Neurosci 2015; 9:449. PMID: 26648839; PMCID: PMC4664726; DOI: 10.3389/fnins.2015.00449.
Abstract
Neuromorphic hardware is designed by drawing inspiration from biology to overcome limitations of current computer architectures, while forging the development of a new class of autonomous systems that can exhibit adaptive behaviors. Several recent designs are capable of emulating large-scale networks but avoid complexity in network dynamics by minimizing the number of dynamic variables that are supported and tunable in hardware. We believe this is due to the lack of a clear understanding of how to design self-tuning complex systems. It has been widely demonstrated that criticality appears to be the default state of the brain, manifesting as spontaneous scale-invariant cascades of neural activity. Experiment, theory, and recent models have shown that neuronal networks at criticality demonstrate optimal information transfer, learning, and information-processing capabilities that affect behavior. In this perspective article, we argue that understanding how large-scale neuromorphic electronics can be designed to enable emergent adaptive behavior will require an understanding of how the networks emulated by such hardware can self-tune local parameters to maintain criticality as a set-point. We believe that such a capability will enable the design of truly scalable intelligent systems using neuromorphic hardware that embraces complexity in network dynamics rather than avoiding it.
Affiliation(s)
- Narayan Srinivasa: Information and System Sciences Lab, Center for Neural and Emergent Systems, HRL Laboratories LLC, Malibu, CA, USA
- Nigel D Stepp: Information and System Sciences Lab, Center for Neural and Emergent Systems, HRL Laboratories LLC, Malibu, CA, USA
11. Neuromorphic implementations of neurobiological learning algorithms for spiking neural networks. Neural Netw 2015; 72:152-167. DOI: 10.1016/j.neunet.2015.07.004.
12. Nazari S, Amiri M, Faez K, Amiri M. Multiplier-less digital implementation of neuron–astrocyte signalling on FPGA. Neurocomputing 2015. DOI: 10.1016/j.neucom.2015.02.041.
13. Stromatias E, Neil D, Pfeiffer M, Galluppi F, Furber SB, Liu SC. Robustness of spiking Deep Belief Networks to noise and reduced bit precision of neuro-inspired hardware platforms. Front Neurosci 2015; 9:222. PMID: 26217169; PMCID: PMC4496577; DOI: 10.3389/fnins.2015.00222.
Abstract
Increasingly large deep learning architectures, such as Deep Belief Networks (DBNs) are the focus of current machine learning research and achieve state-of-the-art results in different domains. However, both training and execution of large-scale Deep Networks require vast computing resources, leading to high power requirements and communication overheads. The on-going work on design and construction of spike-based hardware platforms offers an alternative for running deep neural networks with significantly lower power consumption, but has to overcome hardware limitations in terms of noise and limited weight precision, as well as noise inherent in the sensor signal. This article investigates how such hardware constraints impact the performance of spiking neural network implementations of DBNs. In particular, the influence of limited bit precision during execution and training, and the impact of silicon mismatch in the synaptic weight parameters of custom hybrid VLSI implementations is studied. Furthermore, the network performance of spiking DBNs is characterized with regard to noise in the spiking input signal. Our results demonstrate that spiking DBNs can tolerate very low levels of hardware bit precision down to almost two bits, and show that their performance can be improved by at least 30% through an adapted training mechanism that takes the bit precision of the target platform into account. Spiking DBNs thus present an important use-case for large-scale hybrid analog-digital or digital neuromorphic platforms such as SpiNNaker, which can execute large but precision-constrained deep networks in real time.
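The limited-weight-precision constraint studied above amounts to rounding each weight to one of 2^bits uniformly spaced levels. A minimal sketch, with an assumed weight range of [-1, 1] (the actual range depends on the platform):

```python
# Uniform weight quantization: the rounding step a precision-constrained
# hardware platform implicitly applies to every synaptic weight.

def quantize(w, bits, w_min=-1.0, w_max=1.0):
    """Round w to the nearest of 2**bits uniformly spaced levels in [w_min, w_max]."""
    levels = 2 ** bits
    step = (w_max - w_min) / (levels - 1)
    idx = round((w - w_min) / step)
    return w_min + idx * step

# Worst-case rounding error over a sweep of weights, at 2-bit vs 8-bit precision.
err2 = max(abs(w / 100 - quantize(w / 100, 2)) for w in range(-100, 101))
err8 = max(abs(w / 100 - quantize(w / 100, 8)) for w in range(-100, 101))
```

The abstract's finding is that network accuracy survives surprisingly coarse steps (down to roughly two bits), especially when training already accounts for the target precision.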
Affiliation(s)
- Evangelos Stromatias: Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, UK
- Daniel Neil: Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Michael Pfeiffer: Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Francesco Galluppi: Centre National de la Recherche Scientifique UMR 7210, Equipe de Vision et Calcul Naturel, Vision Institute, UMR S968 Inserm, CHNO des Quinze-Vingts, Université Pierre et Marie Curie, Paris, France
- Steve B Furber: Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, UK
- Shih-Chii Liu: Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
14. Synaptic plasticity enables adaptive self-tuning critical networks. PLoS Comput Biol 2015; 11:e1004043. PMID: 25590427; PMCID: PMC4295840; DOI: 10.1371/journal.pcbi.1004043.
Abstract
During rest, the mammalian cortex displays spontaneous neural activity. Spiking of single neurons during rest has been described as irregular and asynchronous. In contrast, recent in vivo and in vitro population measures of spontaneous activity, using the LFP, EEG, MEG or fMRI suggest that the default state of the cortex is critical, manifested by spontaneous, scale-invariant, cascades of activity known as neuronal avalanches. Criticality keeps a network poised for optimal information processing, but this view seems to be difficult to reconcile with apparently irregular single neuron spiking. Here, we simulate a 10,000 neuron, deterministic, plastic network of spiking neurons. We show that a combination of short- and long-term synaptic plasticity enables these networks to exhibit criticality in the face of intrinsic, i.e. self-sustained, asynchronous spiking. Brief external perturbations lead to adaptive, long-term modification of intrinsic network connectivity through long-term excitatory plasticity, whereas long-term inhibitory plasticity enables rapid self-tuning of the network back to a critical state. The critical state is characterized by a branching parameter oscillating around unity, a critical exponent close to -3/2 and a long tail distribution of a self-similarity parameter between 0.5 and 1.
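The branching parameter mentioned above can be estimated from binned spike counts as the average ratio of activity in successive time bins; a value oscillating around unity marks the critical state. The count series below are toy examples, not simulation output:

```python
# Branching parameter sigma: average number of descendant spikes per
# ancestor spike, estimated from consecutive time-bin counts.

def branching_parameter(counts):
    """Mean ratio counts[t+1] / counts[t] over bins with nonzero activity."""
    ratios = [b / a for a, b in zip(counts, counts[1:]) if a > 0]
    return sum(ratios) / len(ratios)

critical = branching_parameter([4, 4, 4, 4, 4])   # sigma = 1: activity sustained
subcrit = branching_parameter([16, 8, 4, 2, 1])   # sigma = 0.5: activity dies out
```

In the paper's networks, long-term inhibitory plasticity is what steers this quantity back toward 1 after perturbations; sigma > 1 would correspond to runaway (supercritical) activity.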
15. Thibeault CM. A role for neuromorphic processors in therapeutic nervous system stimulation. Front Syst Neurosci 2014; 8:187. PMID: 25339869; PMCID: PMC4187612; DOI: 10.3389/fnsys.2014.00187.
Affiliation(s)
- Corey M Thibeault: Center for Neural and Emergent Systems, Information and System Sciences Laboratory, HRL Laboratories LLC, Malibu, CA, USA
16. Stefanini F, Neftci EO, Sheik S, Indiveri G. PyNCS: a microkernel for high-level definition and configuration of neuromorphic electronic systems. Front Neuroinform 2014; 8:73. PMID: 25232314; PMCID: PMC4152885; DOI: 10.3389/fninf.2014.00073.
Abstract
Neuromorphic hardware offers an electronic substrate for the realization of asynchronous event-based sensory-motor systems and large-scale spiking neural network architectures. In order to characterize these systems, configure them, and carry out modeling experiments, it is often necessary to interface them to workstations. The software used for this purpose typically consists of a large monolithic block of code which is highly specific to the hardware setup used. While this approach can lead to highly integrated hardware/software systems, it hampers the development of modular and reconfigurable infrastructures thus preventing a rapid evolution of such systems. To alleviate this problem, we propose PyNCS, an open-source front-end for the definition of neural network models that is interfaced to the hardware through a set of Python Application Programming Interfaces (APIs). The design of PyNCS promotes modularity, portability and expandability and separates implementation from hardware description. The high-level front-end that comes with PyNCS includes tools to define neural network models as well as to create, monitor and analyze spiking data. Here we report the design philosophy behind the PyNCS framework and describe its implementation. We demonstrate its functionality with two representative case studies, one using an event-based neuromorphic vision sensor, and one using a set of multi-neuron devices for carrying out a cognitive decision-making task involving state-dependent computation. PyNCS, already applicable to a wide range of existing spike-based neuromorphic setups, will accelerate the development of hybrid software/hardware neuromorphic systems, thanks to its code flexibility. The code is open-source and available online at https://github.com/inincs/pyNCS.
Affiliation(s)
- Fabio Stefanini: Department of Information Technology and Electrical Engineering, Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Emre O Neftci: Department of Bioengineering, Institute for Neural Computation, University of California at San Diego, La Jolla, CA, USA
- Sadique Sheik: Department of Information Technology and Electrical Engineering, Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Giacomo Indiveri: Department of Information Technology and Electrical Engineering, Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
17. Srinivasa N, Zhang D, Grigorian B. A robust and scalable neuromorphic communication system by combining synaptic time multiplexing and MIMO-OFDM. IEEE Trans Neural Netw Learn Syst 2014; 25:585-608. PMID: 24807453; DOI: 10.1109/tnnls.2013.2280126.
Abstract
This paper describes a novel architecture for enabling robust and efficient neuromorphic communication. The architecture combines two concepts: 1) synaptic time multiplexing (STM) that trades space for speed of processing to create an intragroup communication approach that is firing rate independent and offers more flexibility in connectivity than cross-bar architectures and 2) a wired multiple input multiple output (MIMO) communication with orthogonal frequency division multiplexing (OFDM) techniques to enable a robust and efficient intergroup communication for neuromorphic systems. The MIMO-OFDM concept for the proposed architecture was analyzed by simulating large-scale spiking neural network architecture. Analysis shows that the neuromorphic system with MIMO-OFDM exhibits robust and efficient communication while operating in real time with a high bit rate. Through combining STM with MIMO-OFDM techniques, the resulting system offers a flexible and scalable connectivity as well as a power and area efficient solution for the implementation of very large-scale spiking neural architectures in hardware.
18. Carlson KD, Nageswaran JM, Dutt N, Krichmar JL. An efficient automated parameter tuning framework for spiking neural networks. Front Neurosci 2014; 8:10. PMID: 24550771; PMCID: PMC3912986; DOI: 10.3389/fnins.2014.00010.
Abstract
As the desire for biologically realistic spiking neural networks (SNNs) increases, tuning the enormous number of open parameters in these models becomes a difficult challenge. SNNs have been used to successfully model complex neural circuits that explore various neural phenomena such as neural plasticity, vision systems, auditory systems, neural oscillations, and many other important topics of neural function. Additionally, SNNs are particularly well-adapted to run on neuromorphic hardware that will support biological brain-scale architectures. Although the inclusion of realistic plasticity equations, neural dynamics, and recurrent topologies has increased the descriptive power of SNNs, it has also made the task of tuning these biologically realistic SNNs difficult. To meet this challenge, we present an automated parameter tuning framework capable of tuning SNNs quickly and efficiently using evolutionary algorithms (EA) and inexpensive, readily accessible graphics processing units (GPUs). A sample SNN with 4104 neurons was tuned to give V1 simple cell-like tuning curve responses and produce self-organizing receptive fields (SORFs) when presented with a random sequence of counterphase sinusoidal grating stimuli. A performance analysis comparing the GPU-accelerated implementation to a single-threaded central processing unit (CPU) implementation was carried out and showed a speedup of 65× of the GPU implementation over the CPU implementation, or 0.35 h per generation for GPU vs. 23.5 h per generation for CPU. Additionally, the parameter value solutions found in the tuned SNN were studied and found to be stable and repeatable. The automated parameter tuning framework presented here will be of use to both the computational neuroscience and neuromorphic engineering communities, making the process of constructing and tuning large-scale SNNs much quicker and easier.
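The evolutionary tuning loop described above can be caricatured with a minimal (1+1)-style search; here a toy quadratic fitness stands in for the paper's GPU-accelerated SNN evaluation, and the target parameter value is hypothetical:

```python
import random

# (1+1)-style evolutionary search: mutate the current best parameter with
# Gaussian noise and keep the candidate only if its fitness improves.
# Real frameworks evaluate populations in parallel on the fitness function.

def tune(fitness, x0, sigma=0.5, iters=200, seed=0):
    """Minimize fitness over a single scalar parameter."""
    rng = random.Random(seed)
    best_x, best_f = x0, fitness(x0)
    for _ in range(iters):
        cand = best_x + rng.gauss(0, sigma)  # mutation
        f = fitness(cand)
        if f < best_f:                       # greedy selection
            best_x, best_f = cand, f
    return best_x

target = 3.2  # hypothetical "correct" parameter value
found = tune(lambda x: (x - target) ** 2, x0=0.0)
```

For an SNN, `fitness` would run a full simulation and score the network's response (e.g., against the desired tuning curves), which is why the paper's GPU acceleration matters.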
Affiliation(s)
- Kristofor D Carlson: Department of Cognitive Sciences, University of California Irvine, Irvine, CA, USA
- Nikil Dutt: Department of Computer Science, University of California Irvine, Irvine, CA, USA
- Jeffrey L Krichmar: Department of Cognitive Sciences and Department of Computer Science, University of California Irvine, Irvine, CA, USA
19. Neftci E, Das S, Pedroni B, Kreutz-Delgado K, Cauwenberghs G. Event-driven contrastive divergence for spiking neuromorphic systems. Front Neurosci 2014; 7:272. PMID: 24574952; PMCID: PMC3922083; DOI: 10.3389/fnins.2013.00272.
Abstract
Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation, and real-time interfacing with the environment. However, the traditional RBM architecture and the commonly used training algorithm, known as Contrastive Divergence (CD), are based on discrete updates and exact arithmetic, which do not map directly onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train an RBM constructed with integrate-and-fire (I&F) neurons, constrained by the limitations of existing and near-future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The recurrent activity of the network replaces the discrete steps of the CD algorithm, while spike-timing-dependent plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM composed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST handwritten-digit dataset, and by testing it on recognition, generation, and cue-integration tasks. Our results contribute to a machine-learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality.
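The STDP weight updates mentioned above are commonly modeled with exponential timing windows: pre-before-post spike pairs potentiate, post-before-pre pairs depress. This is a generic pair-based STDP sketch with assumed amplitudes and time constant, not the paper's exact rule:

```python
import math

# Pair-based STDP: weight change as a function of the spike-time difference
# dt = t_post - t_pre (milliseconds). Amplitudes and tau are illustrative.

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for one pre/post spike pair under exponential STDP windows."""
    if dt >= 0:
        return a_plus * math.exp(-dt / tau)   # pre before post: potentiate
    return -a_minus * math.exp(dt / tau)      # post before pre: depress

pot = stdp_dw(5.0)    # causal pairing: positive weight change
dep = stdp_dw(-5.0)   # anti-causal pairing: negative weight change
```

In the event-driven CD scheme, updates of exactly this local form, accumulated asynchronously over the network's recurrent activity, replace the batched weight updates of standard CD.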
Affiliation(s)
- Emre Neftci: Institute for Neural Computation, University of California San Diego, La Jolla, CA, USA
- Srinjoy Das: Institute for Neural Computation and Electrical and Computer Engineering Department, University of California San Diego, La Jolla, CA, USA
- Bruno Pedroni: Department of Bioengineering, University of California San Diego, La Jolla, CA, USA
- Kenneth Kreutz-Delgado: Institute for Neural Computation and Electrical and Computer Engineering Department, University of California San Diego, La Jolla, CA, USA
- Gert Cauwenberghs: Institute for Neural Computation and Department of Bioengineering, University of California San Diego, La Jolla, CA, USA