1. Sandin F, Nilsson M. Synaptic Delays for Insect-Inspired Temporal Feature Detection in Dynamic Neuromorphic Processors. Front Neurosci 2020; 14:150. PMID: 32180698; PMCID: PMC7059595; DOI: 10.3389/fnins.2020.00150.
Abstract
Spiking neural networks are well suited for spatiotemporal feature detection and learning, and naturally involve dynamic delay mechanisms in the synapses, dendrites, and axons. Dedicated delay neurons and axonal delay circuits have been considered when implementing such pattern recognition networks in dynamic neuromorphic processors. Inspired by an auditory feature detection circuit in crickets, featuring a delayed excitation by post-inhibitory rebound, we investigate disynaptic delay elements formed by inhibitory-excitatory pairs of dynamic synapses. We configured such disynaptic delay elements in the DYNAP-SE neuromorphic processor and characterized the distribution of delayed excitations resulting from device mismatch. Interestingly, we found that the disynaptic delay elements can be configured such that the timing and magnitude of the delayed excitation depend mainly on the efficacy of the inhibitory and excitatory synapses, respectively, and that a neuron with multiple delay elements can be tuned to respond selectively to a specific pattern. Furthermore, we present a network with one disynaptic delay element that mimics the auditory feature detection circuit of crickets, and we demonstrate how varying synaptic weights, input noise, and processor temperature affect the circuit. Dynamic delay elements of this kind enable synapse-level temporal feature tuning with configurable delays of up to 100 ms.
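
The mechanism is compact enough to simulate directly. Below is a minimal software sketch (not the DYNAP-SE circuit): a single input spike drives an inhibitory and a slower excitatory synapse on one leaky integrate-and-fire neuron, and the net current turns excitatory only after the inhibition decays, yielding a delayed output spike. All constants (time constants, efficacies, threshold) are illustrative assumptions.

```python
import numpy as np

# Toy disynaptic delay element: one input spike, two exponential synaptic
# currents of opposite sign, one LIF membrane. The inhibitory efficacy sets
# when the net current turns positive (timing); the excitatory efficacy sets
# how strong the late excitation is (magnitude). All units are arbitrary.
dt, T = 1e-4, 0.3                          # 0.1 ms steps, 300 ms window
tau_m, tau_inh, tau_exc = 20e-3, 30e-3, 60e-3
w_inh, w_exc, v_th = -2.0, 1.2, 0.1        # assumed efficacies and threshold

n = int(T / dt)
spike_in = np.zeros(n)
spike_in[int(0.02 / dt)] = 1.0             # input spike at t = 20 ms

v = i_inh = i_exc = 0.0
for k in range(n):
    i_inh += -dt * i_inh / tau_inh + w_inh * spike_in[k]
    i_exc += -dt * i_exc / tau_exc + w_exc * spike_in[k]
    v += dt * (-v + i_inh + i_exc) / tau_m
    if v >= v_th:                          # delayed excitation reaches threshold
        print(f"output spike at t = {k * dt * 1e3:.1f} ms")
        break
```

In this toy model, increasing |w_inh| postpones the membrane's threshold crossing while w_exc mainly scales its height, mirroring the timing/magnitude separation reported above.
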
Affiliation(s)
- Fredrik Sandin, Embedded Intelligent Systems Lab (EISLAB), Luleå University of Technology, Luleå, Sweden
- Mattias Nilsson, Embedded Intelligent Systems Lab (EISLAB), Luleå University of Technology, Luleå, Sweden

2. Gomar S, Ahmadi M. Digital Hardware Implementation of Gaussian Wilson–Cowan Neocortex Model. IEEE Trans Emerg Top Comput Intell 2019. DOI: 10.1109/tetci.2018.2849095.

3. Wang R, van Schaik A. Breaking Liebig's Law: An Advanced Multipurpose Neuromorphic Engine. Front Neurosci 2018; 12:593. PMID: 30210278; PMCID: PMC6123369; DOI: 10.3389/fnins.2018.00593.
Abstract
We present a massively parallel, scalable, multi-purpose neuromorphic engine. All existing neuromorphic hardware systems suffer from Liebig's law (the performance of a system is limited by the component in shortest supply), as they have fixed numbers of dedicated neurons and synapses for specific types of plasticity. For any application, it is always the availability of one of these components that limits the size of the model, leaving the others unused. To overcome this problem, our engine adopts a novel architecture: an array of identical components, each of which can be configured as a leaky-integrate-and-fire (LIF) neuron, a learning synapse, or an axon with a trainable delay. Spike timing dependent plasticity (STDP) and spike timing dependent delay plasticity (STDDP) are the two supported learning rules. All parameters are stored in SRAMs, so runtime reconfiguration is supported. As a proof of concept, we implemented a prototype system with 16 neural engines, each consisting of 32768 (32k) components, yielding half a million components, on an entry-level FPGA (Altera Cyclone V). We verified the prototype system with measurement results. To demonstrate that our neuromorphic engine is a high-performance and scalable digital design, we implemented it in TSMC 28 nm HPC technology. Place-and-route results using Cadence Innovus with a clock frequency of 2.5 GHz show that this engine achieves an excellent area efficiency of 1.68 μm² per component: 256k (2¹⁸) components in a silicon area of 650 μm × 680 μm (∼0.44 mm², with 98.7% utilization of the silicon area). The power consumption of this engine is 37 mW, yielding a power efficiency of 0.92 pJ per synaptic operation (SOP).
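
To make the "array of identical components" idea concrete, here is a hedged Python sketch of a single element that can be configured at runtime as a LIF neuron, a learning synapse, or a delay axon, with a parameter dictionary standing in for the per-component SRAM. Class and parameter names are illustrative, not the actual RTL design.

```python
from collections import deque

# Illustrative reconfigurable component (a software analogue, not the RTL):
# the same object behaves as a LIF neuron, a weighted synapse, or a delay
# axon depending on its configuration, which can be changed at runtime.
class Component:
    def __init__(self, mode, **params):
        self.mode = mode                   # "lif" | "synapse" | "delay_axon"
        self.p = dict(params)              # stands in for per-component SRAM
        self.v = 0.0                       # LIF membrane state
        self.pipe = deque()                # pending spikes for the delay axon

    def step(self, spike_in=0.0):
        if self.mode == "lif":             # leaky integrate-and-fire
            self.v += spike_in - self.p["leak"] * self.v
            if self.v >= self.p["v_th"]:
                self.v = 0.0
                return 1.0
            return 0.0
        if self.mode == "synapse":         # scale the spike by a trainable weight
            return self.p["weight"] * spike_in
        if self.mode == "delay_axon":      # re-emit spikes after a trainable delay
            self.pipe = deque(d - 1 for d in self.pipe)
            if spike_in:
                self.pipe.append(self.p["delay_steps"])
            if self.pipe and self.pipe[0] <= 0:
                self.pipe.popleft()
                return 1.0
            return 0.0
```

Because every element is identical, a model that needs more delay axons and fewer synapses simply reassigns modes instead of exhausting a dedicated resource, which is the Liebig's-law point the abstract makes.
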
Affiliation(s)
- Runchun Wang, The MARCS Institute, Western Sydney University, Sydney, NSW, Australia
- André van Schaik, The MARCS Institute, Western Sydney University, Sydney, NSW, Australia

4. Hwu T, Wang AY, Oros N, Krichmar JL. Adaptive Robot Path Planning Using a Spiking Neuron Algorithm With Axonal Delays. IEEE Trans Cogn Dev Syst 2018. DOI: 10.1109/tcds.2017.2655539.

5. Wang RM, Thakur CS, van Schaik A. An FPGA-Based Massively Parallel Neuromorphic Cortex Simulator. Front Neurosci 2018; 12:213. PMID: 29692702; PMCID: PMC5902707; DOI: 10.3389/fnins.2018.00213.
Abstract
This paper presents a massively parallel and scalable neuromorphic cortex simulator designed for simulating large and structurally connected spiking neural networks, such as complex models of various areas of the cortex. The main novelty of this work is the abstraction of a neuromorphic architecture into clusters represented by minicolumns and hypercolumns, analogous to the fundamental structural units observed in neurobiology. Without this approach, simulating large-scale fully connected networks requires prohibitively large memory to store look-up tables for point-to-point connections. Instead, we use a novel architecture, based on the structural connectivity in the neocortex, such that all the required parameters and connections can be stored in on-chip memory. The cortex simulator can be easily reconfigured for simulating different neural networks without any change in hardware structure by programming the memory. A hierarchical communication scheme allows one neuron to have a fan-out of up to 200k neurons. As a proof of concept, an implementation on one Altera Stratix V FPGA was able to simulate 20 million to 2.6 billion leaky-integrate-and-fire (LIF) neurons in real time. We verified the system by emulating a simplified auditory cortex (with 100 million neurons). The cortex simulator achieved a low power dissipation of 1.62 μW per neuron. With the advent of commercially available FPGA boards, our system offers an accessible and scalable tool for the design, real-time simulation, and analysis of large-scale spiking neural networks.
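
The look-up-table problem the abstract describes can be illustrated in a few lines: if fan-out is generated from a neuron's structural coordinates by a fixed rule, nothing per-connection needs to be stored. The rule below (dense connectivity within a minicolumn plus a few lateral projections) is an assumed toy scheme, not the paper's actual mapping; only the principle carries over.

```python
# Assumed toy connectivity rule: fan-out is computed on the fly from
# (hypercolumn, minicolumn, neuron) coordinates, so no point-to-point
# look-up table is stored.
NEURONS_PER_MC = 100       # neurons per minicolumn (illustrative)
MCS_PER_HC = 100           # minicolumns per hypercolumn (illustrative)

def targets(hc, mc, neuron):
    # Rule 1: dense local connectivity inside the source minicolumn.
    local = [(hc, mc, n) for n in range(NEURONS_PER_MC) if n != neuron]
    # Rule 2: a few lateral projections to neighbouring minicolumns,
    # parameterised by the source index instead of a stored table.
    lateral = [(hc, (mc + k) % MCS_PER_HC, neuron) for k in (1, 2, 3)]
    return local + lateral
```
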
Affiliation(s)
- Runchun M Wang, The MARCS Institute, University of Western Sydney, Sydney, NSW, Australia
- Chetan S Thakur, Department of Electronic Systems Engineering, Indian Institute of Science, Bangalore, India
- André van Schaik, The MARCS Institute, University of Western Sydney, Sydney, NSW, Australia

6. Bhaduri A, Banerjee A, Roy S, Kar S, Basu A. Spiking Neural Classifier with Lumped Dendritic Nonlinearity and Binary Synapses: A Current Mode VLSI Implementation and Analysis. Neural Comput 2018; 30:723-760. DOI: 10.1162/neco_a_01045.
Abstract
We present a neuromorphic current-mode implementation of a spiking neural classifier with a lumped square-law dendritic nonlinearity. It has been shown previously in software simulations that such a system with binary synapses can be trained with structural plasticity algorithms to achieve classification accuracy comparable to conventional algorithms while using fewer synaptic resources. We show that even in real analog systems with manufacturing imperfections (coefficients of variation of 23.5% and 14.4% for dendritic branch gains and leaks, respectively), this network produces comparable results with fewer synaptic resources. The chip, fabricated in a [Formula: see text]m complementary metal oxide semiconductor process, has eight dendrites per cell and uses two opposing cells per class to cancel common-mode inputs. The chip can operate down to [Formula: see text] V and dissipates 19 nW of static power per neuronal cell and ∼125 pJ/spike. For two-class classification of high-dimensional, rate-encoded binary patterns, the hardware achieves performance comparable to a software implementation of the same network, with only about a 0.5% reduction in accuracy. On two UCI data sets, the integrated circuit achieves classification accuracy comparable to standard machine learners such as support vector machines and extreme learning machines while using two to five times fewer binary synapses. We also show that the system can operate on mean-rate-encoded spike patterns as well as on short bursts of spikes. To the best of our knowledge, this is the first attempt in hardware to perform classification exploiting dendritic properties and binary synapses.
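
The lumped square-law dendrite is simple to state in code. The sketch below follows the abstract's description (binary synaptic connections summed per branch, squared, then summed across branches, with two opposing cells per class cancelling common-mode input); the function names and decision rule are illustrative.

```python
import numpy as np

# Sketch of the classifier's transfer function: binary synapses on each
# dendritic branch are summed, passed through a lumped square-law
# nonlinearity, and the branch outputs are summed into the cell output.
def cell_output(x, connections):
    # x: binary input vector; connections: (branches, inputs) 0/1 matrix
    branch_sums = connections @ x      # linear sum on each dendritic branch
    return np.sum(branch_sums ** 2)    # lumped square-law nonlinearity

def classify(x, conn_plus, conn_minus):
    # Two opposing cells per class: the sign of the difference decides,
    # cancelling input components common to both cells.
    return 1 if cell_output(x, conn_plus) > cell_output(x, conn_minus) else 0
```

Structural plasticity in this setting means learning the 0/1 entries of the connection matrices rather than analog weights, which is why binary synapses suffice.
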
Affiliation(s)
- Aritra Bhaduri, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798
- Amitava Banerjee, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798
- Subhrajit Roy, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798
- Sougata Kar, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798
- Arindam Basu, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798

7. Wang R, Thakur CS, Cohen G, Hamilton TJ, Tapson J, van Schaik A. Neuromorphic Hardware Architecture Using the Neural Engineering Framework for Pattern Recognition. IEEE Trans Biomed Circuits Syst 2017; 11:574-584. PMID: 28436888; DOI: 10.1109/tbcas.2017.2666883.
Abstract
We present a hardware architecture that uses the neural engineering framework (NEF) to implement large-scale neural networks on field-programmable gate arrays (FPGAs) for massively parallel, real-time pattern recognition. The NEF is a framework capable of synthesising large-scale cognitive systems from subnetworks, and we have previously presented an FPGA implementation of the NEF that successfully performs nonlinear mathematical computations. That work was based on a compact digital neural core, which consists of 64 neurons instantiated by a single physical neuron using a time-multiplexing approach. We have now scaled this approach up to build a pattern recognition system by combining identical neural cores. As a proof of concept, we developed a handwritten digit recognition system using the MNIST database and achieved a recognition rate of 96.55%. The system is implemented on a state-of-the-art FPGA and can process 5.12 million digits per second. The architecture and hardware optimisations presented offer a high-speed and resource-efficient means of performing neuromorphic, massively parallel pattern recognition and classification tasks.
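
For readers unfamiliar with the NEF, the core computation each neural core supports is linear decoding over fixed nonlinear tuning curves: decoders found by regularised least squares reconstruct a target function of the represented variable from firing rates. The sketch below shows that principle with rectified-linear rate neurons; the neuron model, population size, and regulariser are assumptions for illustration, not the paper's FPGA design.

```python
import numpy as np

# NEF principle in miniature: encode x with a population of rate neurons,
# then solve for linear decoders that compute a function of x (here x^2).
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 200)                      # represented variable
n_neurons = 64
gains = rng.uniform(0.5, 2.0, n_neurons)
biases = rng.uniform(-1.0, 1.0, n_neurons)
encoders = rng.choice([-1.0, 1.0], n_neurons)

# Rectified-linear tuning curves: rows are x samples, columns are neurons.
A = np.maximum(0.0, gains * (x[:, None] * encoders) + biases)

target = x ** 2                                  # function to be computed
reg = 0.1 * n_neurons                            # least-squares regulariser
decoders = np.linalg.solve(A.T @ A + reg * np.eye(n_neurons), A.T @ target)
print("max decode error:", np.max(np.abs(A @ decoders - target)))
```
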

8. Lee WW, Kukreja SL, Thakor NV. CONE: Convex-Optimized-Synaptic Efficacies for Temporally Precise Spike Mapping. IEEE Trans Neural Netw Learn Syst 2017; 28:849-861. PMID: 27046881; DOI: 10.1109/tnnls.2015.2509479.
Abstract
Spiking neural networks are well suited to time-dependent pattern recognition problems because they encode the temporal dimension in precise spike times. With an appropriate set of weights, a spiking neuron can emit precisely timed action potentials in response to spatiotemporal input spikes. However, deriving supervised learning rules for spike mapping is nontrivial due to the increased complexity. Existing methods rely on heuristic approaches that do not guarantee a convex objective function and, therefore, may not converge to a global minimum. In this paper, we present a novel technique for obtaining the weights of spiking neurons by formulating the problem in a convex optimization framework, rendering it compatible with established optimization methods. We introduce techniques to influence the weight distribution and membrane trajectory, and study how these factors affect robustness in the presence of noise. In addition, we show how the existence of a solution can be determined, and we assess the memory capacity limits of a neuron model using synthetic examples. The practical utility of our technique is further demonstrated by its application to gait-event detection using experimental data.
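
The convexity claim follows from the membrane potential being linear in the weights, so "spike at the desired time, stay subthreshold elsewhere" becomes a set of linear inequalities. The sketch below poses a toy version as a linear program with SciPy, minimising the L1 norm of the weights; the PSP kernel, margin, and the small unconstrained window around the target time are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Toy spike-mapping LP: V(t) = P @ w must reach threshold theta at t_d and
# stay a margin below it elsewhere (outside a small window around t_d).
dt, T, theta, margin = 1e-3, 0.2, 1.0, 0.05
t = np.arange(0.0, T, dt)

def kernel(s):                                   # double-exponential PSP
    return np.where(s > 0, np.exp(-s / 20e-3) - np.exp(-s / 5e-3), 0.0)

in_spikes = [0.01, 0.03, 0.05, 0.09, 0.12]       # input spike times (s)
t_d = 0.07                                       # desired output spike time
P = np.stack([kernel(t - ts) for ts in in_spikes], axis=1)

i_d = int(np.argmin(np.abs(t - t_d)))
sub = np.abs(t - t_d) > 5e-3                     # where subthreshold is enforced

# Minimise the L1 norm of the weights via the standard split w = wp - wm.
n = P.shape[1]
c = np.ones(2 * n)
A_ub = np.vstack([
    np.hstack([P[sub], -P[sub]]),                # V(t) <= theta - margin
    np.hstack([-P[i_d:i_d + 1], P[i_d:i_d + 1]]),  # -V(t_d) <= -theta
])
b_ub = np.concatenate([np.full(int(sub.sum()), theta - margin), [-theta]])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
if res.success:
    w = res.x[:n] - res.x[n:]
    print("weights:", np.round(w, 2))
else:
    print("no feasible weight vector for these constraints")
```

Because both the objective and the constraints are linear, the solver either returns the global optimum or certifies infeasibility, which is the solvability test the abstract alludes to.
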

9. Wang RM, Hamilton TJ, Tapson JC, van Schaik A. A neuromorphic implementation of multiple spike-timing synaptic plasticity rules for large-scale neural networks. Front Neurosci 2015; 9:180. PMID: 26041985; PMCID: PMC4438254; DOI: 10.3389/fnins.2015.00180.
Abstract
We present a neuromorphic implementation of multiple synaptic plasticity learning rules, which include both Spike Timing Dependent Plasticity (STDP) and Spike Timing Dependent Delay Plasticity (STDDP). We present a fully digital implementation as well as a mixed-signal implementation, both of which use a novel dynamic-assignment time-multiplexing approach and support up to 2²⁶ (64M) synaptic plasticity elements. Rather than implementing dedicated synapses for particular types of synaptic plasticity, we implemented a more generic synaptic plasticity adaptor array that is separate from the neurons in the neural network. Each adaptor performs synaptic plasticity according to the arrival times of the pre- and post-synaptic spikes assigned to it, and sends out a weighted or delayed pre-synaptic spike to the post-synaptic neuron in the neural network. This strategy provides great flexibility for building complex large-scale neural networks, as a neural network can be configured for multiple synaptic plasticity rules without changing its structure. We validate the proposed neuromorphic implementations with measurement results and illustrate that the circuits are capable of performing both STDP and STDDP. We argue that it is practical to scale the work presented here up to 2³⁶ (64G) synaptic adaptors on a current high-end FPGA platform.
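
A stripped-down software analogue of the adaptor clarifies the division of labour: the adaptor sees only pre- and post-synaptic spike times and applies either a weight update (STDP) or a delay update (STDDP), leaving the neuron untouched. The update rules and constants below are simplified textbook forms assumed for illustration, not the measured circuits.

```python
import math

# Simplified plasticity adaptor: configured for either STDP (weight) or
# STDDP (delay) and driven purely by pre/post spike times (in ms).
class PlasticityAdaptor:
    def __init__(self, rule, weight=0.5, delay=5.0):
        self.rule = rule                          # "stdp" or "stddp"
        self.weight, self.delay = weight, delay

    def on_spike_pair(self, t_pre, t_post, lr=0.1, tau=20.0):
        if self.rule == "stdp":
            dt = t_post - t_pre                   # >0: causal, potentiate
            self.weight += lr * math.copysign(math.exp(-abs(dt) / tau), dt)
            self.weight = min(max(self.weight, 0.0), 1.0)
        else:                                     # STDDP: nudge the delay until
            err = t_post - (t_pre + self.delay)   # the delayed pre-spike arrives
            self.delay = max(self.delay + lr * err, 0.0)  # with the post-spike
```
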
Affiliation(s)
- Runchun M Wang, The MARCS Institute, University of Western Sydney, Sydney, NSW, Australia
- Tara J Hamilton, The MARCS Institute, University of Western Sydney, Sydney, NSW, Australia
- Jonathan C Tapson, The MARCS Institute, University of Western Sydney, Sydney, NSW, Australia
- André van Schaik, The MARCS Institute, University of Western Sydney, Sydney, NSW, Australia

10. Afshar S, George L, Thakur CS, Tapson J, van Schaik A, de Chazal P, Hamilton TJ. Turn Down That Noise: Synaptic Encoding of Afferent SNR in a Single Spiking Neuron. IEEE Trans Biomed Circuits Syst 2015; 9:188-196. PMID: 25910252; DOI: 10.1109/tbcas.2015.2416391.
Abstract
We have added a simplified neuromorphic model of Spike Time Dependent Plasticity (STDP) to the previously described Synapto-dendritic Kernel Adapting Neuron (SKAN), a hardware-efficient neuron model capable of learning spatio-temporal spike patterns. The resulting neuron model is the first to perform synaptic encoding of afferent signal-to-noise ratio in addition to the unsupervised learning of spatio-temporal spike patterns. The neuron model is particularly suitable for implementation in digital neuromorphic hardware, as it does not use any complex mathematical operations and uses a novel shift-based normalization approach to achieve synaptic homeostasis. The neuron's noise compensation properties are characterized and tested on random spatio-temporal spike patterns as well as on a noise-corrupted subset of the zero images of the MNIST handwritten digit dataset. Results show the neuron simultaneously learning common patterns in its input data while dynamically weighting individual afferents based on their signal-to-noise ratio. Despite its simplicity, the neuron model's behaviors and the resulting computational power may also offer insights into biological systems.
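
The shift-based normalization is the hardware-friendly detail worth spelling out: scaling by powers of two needs only bit-shifts, no divider. The minimal form below, with its target band and shift limit, is an assumed illustration of the idea rather than the paper's circuit.

```python
# Assumed sketch of shift-based synaptic homeostasis: integer weights are
# halved or doubled (single bit-shifts) until their sum re-enters a target
# band, avoiding division entirely.
def shift_normalize(weights, target=1024, max_shifts=4):
    total, shifts = sum(weights), 0
    while total > 2 * target and shifts < max_shifts:
        weights = [w >> 1 for w in weights]    # halve every weight
        total, shifts = sum(weights), shifts + 1
    while 0 < total < target // 2 and shifts > -max_shifts:
        weights = [w << 1 for w in weights]    # double every weight
        total, shifts = sum(weights), shifts - 1
    return weights
```
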

11. Delbruck T, van Schaik A, Hasler J. Research topic: neuromorphic engineering systems and applications. A snapshot of neuromorphic systems engineering. Front Neurosci 2014; 8:424. PMID: 25565952; PMCID: PMC4271593; DOI: 10.3389/fnins.2014.00424.
Affiliation(s)
- Tobi Delbruck, Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- André van Schaik, Bioelectronics and Neuroscience, The MARCS Institute, University of Western Sydney, Sydney, NSW, Australia
- Jennifer Hasler, School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA

12. Afshar S, George L, Tapson J, van Schaik A, Hamilton TJ. Racing to learn: statistical inference and learning in a single spiking neuron with adaptive kernels. Front Neurosci 2014; 8:377. PMID: 25505378; PMCID: PMC4243566; DOI: 10.3389/fnins.2014.00377.
Abstract
This paper describes the Synapto-dendritic Kernel Adapting Neuron (SKAN), a simple spiking neuron model that performs statistical inference and unsupervised learning of spatiotemporal spike patterns. SKAN is the first proposed neuron model to investigate the effects of dynamic synapto-dendritic kernels and demonstrate their computational power even at the single-neuron scale. The rule-set defining the neuron is simple: there are no complex mathematical operations such as normalization, exponentiation, or even multiplication. The functionalities of SKAN emerge from the real-time interaction of simple additive and binary processes. Like a biological neuron, SKAN is robust to signal and parameter noise, and can utilize both in its operations. At the network scale, neurons are locked in a race with each other, with the fastest neuron to spike effectively "hiding" its learnt pattern from its neighbors. The robustness to noise, high speed, and simple building blocks not only make SKAN an interesting neuron model for computational neuroscience, but also make it ideal for implementation in digital and analog neuromorphic systems, as demonstrated through an implementation on a Field Programmable Gate Array (FPGA). Matlab, Python, and Verilog implementations of SKAN are available at: http://www.uws.edu.au/bioelectronics_neuroscience/bens/reproducible_research.
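
The race dynamic can be captured in a few lines. The sketch below assumes a simplified continuous update, where only the first neuron past threshold adapts its kernel toward the input so that the pattern it learns stops driving its neighbours; the actual SKAN rule-set uses purely additive and binary operations instead of this continuous form.

```python
import numpy as np

# Simplified race-to-learn step: the winning neuron (fastest past threshold,
# approximated here by the largest margin) adapts its kernel toward the
# input pattern; all other neurons are left unchanged.
def race_step(responses, thresholds, kernels, pattern, lr=0.1):
    # responses: (neurons,) accumulated kernel responses to this input
    above = responses >= thresholds
    if not above.any():
        return None                              # nobody spiked, nobody learns
    winner = int(np.argmax(responses - thresholds))
    kernels[winner] += lr * (pattern - kernels[winner])  # winner-only update
    return winner
```
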
Affiliation(s)
- Saeed Afshar, Bioelectronics and Neurosciences, The MARCS Institute, University of Western Sydney, Penrith, NSW, Australia
- Libin George, School of Electrical Engineering and Telecommunications, The University of New South Wales, Sydney, NSW, Australia
- Jonathan Tapson, Bioelectronics and Neurosciences, The MARCS Institute, University of Western Sydney, Penrith, NSW, Australia
- André van Schaik, Bioelectronics and Neurosciences, The MARCS Institute, University of Western Sydney, Penrith, NSW, Australia
- Tara J. Hamilton, Bioelectronics and Neurosciences, The MARCS Institute, University of Western Sydney, Penrith, NSW, Australia; School of Electrical Engineering and Telecommunications, The University of New South Wales, Sydney, NSW, Australia

13. Yaghini Bonabi S, Asgharian H, Safari S, Nili Ahmadabadi M. FPGA implementation of a biological neural network based on the Hodgkin-Huxley neuron model. Front Neurosci 2014; 8:379. PMID: 25484854; PMCID: PMC4240168; DOI: 10.3389/fnins.2014.00379.
Abstract
A set of techniques for efficient implementation of a Hodgkin-Huxley-based (H-H) neural network model on an FPGA (Field Programmable Gate Array) is presented. The central implementation challenge is the complexity of the H-H model, which limits both the network size and the execution speed. However, the basics of the original model cannot be compromised when the effect of synaptic specifications on network behavior is the subject of study. To solve this problem, we used computational techniques such as the CORDIC (Coordinate Rotation Digital Computer) algorithm and step-by-step integration in the implementation of the arithmetic circuits. In addition, we employed resource-sharing techniques to preserve the details of the model, increase the network size, and keep the network execution speed close to real time while maintaining high precision. An implementation of a two-minicolumn network with 120/30 excitatory/inhibitory neurons is provided to investigate the characteristics of our method in practice. The implementation techniques provide an opportunity to construct large FPGA-based network models for investigating the effects of different neurophysiological mechanisms, such as voltage-gated channels and synaptic activities, on the behavior of a neural network within an acceptable execution time. In addition to the inherent properties of FPGAs, such as parallelism and reconfigurability, our approach makes FPGA-based systems suitable candidates for studying the neural control of cognitive robots and systems.
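
As context for the model complexity the paper is managing, here is the plain step-by-step (forward Euler) form of the Hodgkin-Huxley equations in software, using the standard squid-axon parameters; an FPGA pipeline like the one described would evaluate the exponential rate functions with CORDIC hardware rather than floating-point library calls.

```python
import numpy as np

# Forward-Euler Hodgkin-Huxley integration, standard squid-axon parameters.
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3      # uF/cm^2 and mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.4            # reversal potentials (mV)
dt, I_in = 0.01, 10.0                        # ms time step, uA/cm^2 stimulus

def a_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * np.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * np.exp(-(V + 65.0) / 80.0)

V, m, h, n = -65.0, 0.05, 0.6, 0.32          # resting state
for _ in range(int(50.0 / dt)):              # 50 ms of stimulation
    I_ion = (gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK)
             + gL * (V - EL))
    V += dt * (I_in - I_ion) / C             # membrane update
    m += dt * (a_m(V) * (1.0 - m) - b_m(V) * m)   # gating-variable updates
    h += dt * (a_h(V) * (1.0 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1.0 - n) - b_n(V) * n)
```
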
Affiliation(s)
- Safa Yaghini Bonabi, Cognitive Robotic Lab., School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran
- Hassan Asgharian, Research Center of Information Technology, Department of Computer Engineering, Iran University of Science and Technology, Tehran, Iran
- Saeed Safari, High Performance Embedded Computing Lab., School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran
- Majid Nili Ahmadabadi, Cognitive Robotic Lab., School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran; School of Cognitive Sciences, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran