1. Hore A, Bandyopadhyay S, Chakrabarti S. Persistent spiking activity in neuromorphic circuits incorporating post-inhibitory rebound excitation. J Neural Eng 2024; 21:036048. PMID: 38861961; DOI: 10.1088/1741-2552/ad56c8.
Abstract
Objective. This study introduces a novel approach for integrating the post-inhibitory rebound excitation (PIRE) phenomenon into a neuronal circuit. Excitatory and inhibitory synapses are designed to connect two hardware neurons, effectively forming a network. The model demonstrates the occurrence of PIRE under strong inhibitory input. Emphasizing the significance of incorporating PIRE in neuromorphic circuits, the study showcases the generation of persistent activity within cyclic and recurrent spiking neuronal networks. Approach. The neuronal and synaptic circuits are designed and simulated in Cadence Virtuoso using TSMC 180 nm technology. The operating mechanism of the PIRE phenomenon integrated into a hardware neuron is discussed. The proposed circuit provides several parameters for effectively controlling multiple electrophysiological features of a neuron. Main results. The neuronal circuit has been tuned to match the response of a biological neuron. The efficiency of the circuit is evaluated by computing the average power dissipation and energy consumption per spike through simulation. Sustained firing of neural spikes is observed for up to 1.7 s using the two neuronal networks. Significance. Persistent activity has significant implications for various cognitive functions such as working memory, decision-making, and attention; hardware implementations of these functions will therefore require a PIRE-integrated model. Energy-efficient neuromorphic systems are useful in many artificial intelligence applications, including human-machine interaction, IoT devices, autonomous systems, and brain-computer interfaces.
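The paper's circuit is transistor-level, but the PIRE phenomenon it implements can be illustrated in software. The sketch below uses the Izhikevich neuron model with its textbook "rebound spike" parameter set (an illustrative assumption, not the paper's circuit values): the neuron is silent at rest, and a strong hyperpolarizing pulse triggers a spike only after the inhibition is released.

```python
# Post-inhibitory rebound in the Izhikevich model, using the standard
# "rebound spike" parameters (a, b, c, d) and a forward-Euler step.
def simulate(a=0.03, b=0.25, c=-60.0, d=4.0, dt=0.2, t_end=200.0):
    v, u = -64.0, 0.25 * -64.0      # membrane potential and recovery variable
    spikes = []
    t = 0.0
    while t < t_end:
        I = -15.0 if 20.0 < t < 25.0 else 0.0   # strong inhibitory pulse
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:               # spike detected: reset
            spikes.append(t)
            v, u = c, u + d
        t += dt
    return spikes

spikes = simulate()
# The neuron fires only after the inhibitory pulse ends (t > 25 ms):
# release from inhibition leaves the recovery variable depressed, which
# transiently depolarizes the cell past threshold.
```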
2.
Abstract
The design of robots that interact autonomously with the environment and exhibit complex behaviours is an open challenge that can benefit from understanding what makes living beings fit to act in the world. Neuromorphic engineering studies neural computational principles to develop technologies that can provide a computing substrate for building compact and low-power processing systems. We discuss why endowing robots with neuromorphic technologies - from perception to motor control - represents a promising approach for the creation of robots that can seamlessly integrate into society. We present initial attempts in this direction, highlight open challenges, and propose actions required to overcome current limitations.
Affiliation(s)
- Chiara Bartolozzi
- Event-Driven Perception for Robotics, Istituto Italiano di Tecnologia, via San Quirico 19D, 16163, Genova, Italy.
- Giacomo Indiveri
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Winterthurerstr. 190, 8057, Zurich, Switzerland
- Elisa Donati
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Winterthurerstr. 190, 8057, Zurich, Switzerland
3.
Abstract
Recurrent neural networks can solve a variety of computational tasks and produce patterns of activity that capture key properties of brain circuits. However, the learning rules designed to train these models are time-consuming and prone to inaccuracies when tuning connection weights located deep within the network. Here, we describe a rapid one-shot learning rule to train recurrent networks composed of biologically grounded neurons. First, inputs to the model are compressed onto a smaller number of recurrent neurons. Then, a non-iterative rule adjusts the output weights of these neurons based on a target signal. The model learned to reproduce natural images, sequential patterns, and a high-resolution movie scene. Together, these results provide a novel avenue for one-shot learning in biologically realistic recurrent networks and open a path to solving complex tasks by merging brain-inspired models with rapid optimization rules.
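The abstract does not spell out the non-iterative rule itself, so the sketch below is an assumption-laden illustration in the reservoir-computing style it evokes: inputs are compressed onto a fixed random recurrent network, and only the output weights are learned, in a single least-squares step.

```python
import numpy as np

rng = np.random.default_rng(0)
N_res, T = 100, 500

# Fixed random recurrent weights, scaled below unit spectral radius so the
# network forgets its initial condition (echo-state property).
W_rec = rng.normal(0.0, 1.0, (N_res, N_res))
W_rec *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_rec)))
W_in = rng.normal(0.0, 1.0, N_res)

signal = np.sin(np.linspace(0.0, 8 * np.pi, T + 1))  # a sequential pattern

# Step 1: compress the input history onto the recurrent neurons.
x = np.zeros(N_res)
states = np.empty((T, N_res))
for t in range(T):
    x = np.tanh(W_in * signal[t] + W_rec @ x)
    states[t] = x
target = signal[1:]                                  # next-step target

# Step 2: non-iterative ("one-shot") adjustment of the output weights.
W_out, *_ = np.linalg.lstsq(states, target, rcond=None)
mse = np.mean((states @ W_out - target) ** 2)
```

Because the readout is solved in closed form, no iterative credit assignment through the recurrent weights is needed, which is the appeal the abstract highlights.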
4. George R, Chiappalone M, Giugliano M, Levi T, Vassanelli S, Partzsch J, Mayr C. Plasticity and Adaptation in Neuromorphic Biohybrid Systems. iScience 2020; 23:101589. PMID: 33083749; PMCID: PMC7554028; DOI: 10.1016/j.isci.2020.101589.
Abstract
Neuromorphic systems take inspiration from the principles of biological information processing to form hardware platforms that enable the large-scale implementation of neural networks. Recent years have seen advances in the theory of spiking neural networks for classification and control tasks, as well as progress in electrophysiological methods that is pushing the frontiers of intelligent neural interfacing and signal-processing technologies. At the forefront of these new technologies, artificial and biological neural networks are tightly coupled, offering a novel "biohybrid" experimental framework for engineers and neurophysiologists. Indeed, biohybrid systems can constitute a new class of neuroprostheses, opening important perspectives in the treatment of neurological disorders. Moreover, the use of biologically plausible learning rules allows the formation of an overall fault-tolerant system of co-developing subsystems. To identify opportunities and challenges in neuromorphic biohybrid systems, we discuss the field from the perspectives of neurobiology, computational neuroscience, and neuromorphic engineering.
Affiliation(s)
- Richard George
- Department of Electrical Engineering and Information Technology, Technical University of Dresden, Dresden, Germany
- Michele Giugliano
- Neuroscience Area, International School of Advanced Studies, Trieste, Italy
- Timothée Levi
- Laboratoire de l'Intégration du Matériau au Système, University of Bordeaux, Bordeaux, France
- LIMMS/CNRS, Institute of Industrial Science, The University of Tokyo, Tokyo, Japan
- Stefano Vassanelli
- Department of Biomedical Sciences and Padova Neuroscience Center, University of Padova, Padova, Italy
- Johannes Partzsch
- Department of Electrical Engineering and Information Technology, Technical University of Dresden, Dresden, Germany
- Christian Mayr
- Department of Electrical Engineering and Information Technology, Technical University of Dresden, Dresden, Germany
5. Keren H, Partzsch J, Marom S, Mayr CG. A Biohybrid Setup for Coupling Biological and Neuromorphic Neural Networks. Front Neurosci 2019; 13:432. PMID: 31133779; PMCID: PMC6517490; DOI: 10.3389/fnins.2019.00432.
Abstract
Developing technologies for coupling neural activity with artificial neural components is key to advancing neural interfaces and neuroprosthetics. We present a biohybrid experimental setting in which the activity of a biological neural network is coupled to a biomimetic hardware network. The hardware network (denoted NeuroSoC) exhibits complex dynamics with a multiplicity of time scales, emulating 2880 neurons and 12.7 M synapses on a VLSI chip. This network is coupled to a neural network in vitro, where the activities of both the biological and the hardware networks can be recorded, processed, and integrated bidirectionally in real time. The setup enables an adjustable and well-monitored coupling while providing access to key functional features of neural networks. We demonstrate the feasibility of functionally coupling the two networks and of implementing control circuits to modify the biohybrid activity. Overall, we provide an experimental model for neuromorphic-neural interfaces that we hope will advance the capability to interface with neural activity, and with its irregularities in pathology.
Affiliation(s)
- Hanna Keren
- Department of Physiology, Biophysics and Systems Biology, Ruth and Bruce Rappaport Faculty of Medicine, Technion - Israel Institute of Technology, Haifa, Israel
- Network Biology Research Laboratory, Faculty of Electrical Engineering, Technion - Israel Institute of Technology, Haifa, Israel
- Institute of Circuits and Systems, Faculty of Electrical and Computer Engineering, School of Engineering Sciences, Dresden University of Technology, Dresden, Germany
- Johannes Partzsch
- Institute of Circuits and Systems, Faculty of Electrical and Computer Engineering, School of Engineering Sciences, Dresden University of Technology, Dresden, Germany
- Shimon Marom
- Department of Physiology, Biophysics and Systems Biology, Ruth and Bruce Rappaport Faculty of Medicine, Technion - Israel Institute of Technology, Haifa, Israel
- Network Biology Research Laboratory, Faculty of Electrical Engineering, Technion - Israel Institute of Technology, Haifa, Israel
- Christian G Mayr
- Institute of Circuits and Systems, Faculty of Electrical and Computer Engineering, School of Engineering Sciences, Dresden University of Technology, Dresden, Germany
6. Wang R, van Schaik A. Breaking Liebig's Law: An Advanced Multipurpose Neuromorphic Engine. Front Neurosci 2018; 12:593. PMID: 30210278; PMCID: PMC6123369; DOI: 10.3389/fnins.2018.00593.
Abstract
We present a massively parallel, scalable, multipurpose neuromorphic engine. All existing neuromorphic hardware systems suffer from Liebig's law (the performance of a system is limited by the component in shortest supply), as they have fixed numbers of dedicated neurons and synapses for specific types of plasticity. For any application, it is always the availability of one of these components that limits the size of the model, leaving the others unused. To overcome this problem, our engine adopts a novel architecture: an array of identical components, each of which can be configured as a leaky-integrate-and-fire (LIF) neuron, a learning synapse, or an axon with trainable delay. Spike timing dependent plasticity (STDP) and spike timing dependent delay plasticity (STDDP) are the two supported learning rules. All parameters are stored in SRAM, so that runtime reconfiguration is supported. As a proof of concept, we have implemented a prototype system with 16 neural engines, each of which consists of 32768 (32k) components, yielding half a million components, on an entry-level FPGA (Altera Cyclone V). We verified the prototype system with measurement results. To demonstrate that our neuromorphic engine is a high-performance and scalable digital design, we implemented it using TSMC 28 nm HPC technology. Place-and-route results using Cadence Innovus with a clock frequency of 2.5 GHz show that this engine achieves an excellent area efficiency of 1.68 μm² per component: 256k (2^18) components in a silicon area of 650 μm × 680 μm (∼0.44 mm², with 98.7% utilization of the silicon area). The power consumption of this engine is 37 mW, yielding a power efficiency of 0.92 pJ per synaptic operation (SOP).
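The RTL is not reproduced here, but the architectural idea (one generic component, configured at runtime as a neuron, a synapse, or a delay axon, so that no single component type runs out while others sit idle) can be sketched in software; all numeric parameters below are invented for illustration.

```python
# Illustrative sketch (not the paper's hardware): a pool of identical
# components, each configured at runtime into one of three roles.
class Component:
    def __init__(self):
        self.mode = None
        self.state = {}

    def configure(self, mode, **params):
        self.mode = mode
        self.state = dict(params)

    def step(self, inp):
        if self.mode == "lif":                 # leaky integrate-and-fire
            s = self.state
            s["v"] = s["v"] * s["leak"] + inp
            if s["v"] >= s["thresh"]:
                s["v"] = 0.0
                return 1.0                     # spike
            return 0.0
        if self.mode == "delay":               # axon with programmable delay
            buf = self.state["buf"]
            buf.append(inp)
            return buf.pop(0)
        if self.mode == "synapse":             # weighted connection
            return self.state["w"] * inp
        raise ValueError("unconfigured component")

pool = [Component() for _ in range(8)]         # one homogeneous resource pool
neuron, syn, axon = pool[0], pool[1], pool[2]
neuron.configure("lif", v=0.0, leak=0.9, thresh=1.0)
syn.configure("synapse", w=0.6)
axon.configure("delay", buf=[0.0] * 3)         # 3-step axonal delay
```

Because every element is identical, the ratio of neurons to synapses to axons is decided per application rather than fixed in silicon, which is exactly the Liebig's-law bottleneck the abstract describes.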
Affiliation(s)
- Runchun Wang
- The MARCS Institute, Western Sydney University, Sydney, NSW, Australia
- André van Schaik
- The MARCS Institute, Western Sydney University, Sydney, NSW, Australia
7. Sherfey JS, Ardid S, Hass J, Hasselmo ME, Kopell NJ. Flexible resonance in prefrontal networks with strong feedback inhibition. PLoS Comput Biol 2018; 14:e1006357. PMID: 30091975; PMCID: PMC6103521; DOI: 10.1371/journal.pcbi.1006357.
Abstract
Oscillations are ubiquitous features of brain dynamics that undergo task-related changes in synchrony, power, and frequency. The impact of those changes on target networks is poorly understood. In this work, we used a biophysically detailed model of prefrontal cortex (PFC) to explore the effects of varying the spike rate, synchrony, and waveform of strong oscillatory inputs on the behavior of cortical networks driven by them. Interacting populations of excitatory and inhibitory neurons with strong feedback inhibition are inhibition-based network oscillators that exhibit resonance (i.e., larger responses to preferred input frequencies). We quantified network responses in terms of mean firing rates and the population frequency of network oscillation, and characterized their behavior in terms of the natural response to asynchronous input and the resonant response to oscillatory inputs. We show that strong feedback inhibition causes the PFC to generate internal (natural) oscillations in the beta/gamma frequency range (>15 Hz) and to maximize principal cell spiking in response to external oscillations at slightly higher frequencies. Importantly, we found that the fastest oscillation frequency that can be relayed by the network maximizes local inhibition and is even higher than the frequency that maximizes the firing rate of excitatory cells; we call this phenomenon population frequency resonance. This form of resonance is shown to determine the optimal driving frequency for suppressing responses to asynchronous activity. Lastly, we demonstrate that the natural and resonant frequencies can be tuned by changes in neuronal excitability, the duration of feedback inhibition, and dynamic properties of the input. Our results predict that PFC networks are tuned for generating and selectively responding to beta- and gamma-rhythmic signals due to the natural and resonant properties of inhibition-based oscillators. They also suggest strategies for optimizing transcranial stimulation and using oscillatory networks in neuromorphic engineering.
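The generic notion of input-frequency resonance the abstract builds on can be illustrated, deliberately far from the biophysical PFC model, with the steady-state response of a damped driven oscillator, whose amplitude peaks near its natural frequency.

```python
import numpy as np

# Generic resonance illustration (not the PFC network model): steady-state
# amplitude of x'' + 2*zeta*omega0*x' + omega0^2 * x = cos(omega * t).
def response_amplitude(omega, omega0=1.0, zeta=0.2):
    return 1.0 / np.sqrt((omega0**2 - omega**2) ** 2
                         + (2.0 * zeta * omega0 * omega) ** 2)

freqs = np.linspace(0.01, 3.0, 1000)
amps = response_amplitude(freqs)
peak = freqs[np.argmax(amps)]
# Analytically the peak sits at omega0 * sqrt(1 - 2*zeta^2), i.e. the
# "preferred input frequency" of this simple resonator.
```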
Affiliation(s)
- Jason S. Sherfey
- Department of Mathematics and Statistics, Boston University, Boston, Massachusetts, United States of America
- Department of Psychological and Brain Sciences, Center for Systems Neuroscience, Boston University, Massachusetts, United States of America
- Salva Ardid
- Department of Mathematics and Statistics, Boston University, Boston, Massachusetts, United States of America
- Joachim Hass
- Department of Theoretical Neuroscience, Bernstein Center for Computational Neuroscience, Central Institute of Mental Health, Heidelberg University, Mannheim, Germany
- Faculty of Applied Psychology, SRH University for Applied Sciences Heidelberg, Heidelberg, Germany
- Michael E. Hasselmo
- Department of Psychological and Brain Sciences, Center for Systems Neuroscience, Boston University, Massachusetts, United States of America
- Nancy J. Kopell
- Department of Mathematics and Statistics, Boston University, Boston, Massachusetts, United States of America
8. You H, Wang DH. Neuromorphic Implementation of Attractor Dynamics in a Two-Variable Winner-Take-All Circuit with NMDARs: A Simulation Study. Front Neurosci 2017; 11:40. PMID: 28223913; PMCID: PMC5293789; DOI: 10.3389/fnins.2017.00040.
Abstract
Neural networks configured with winner-take-all (WTA) competition and N-methyl-D-aspartate receptor (NMDAR)-mediated synaptic dynamics are endowed with various dynamic characteristics of attractors underlying many cognitive functions. This paper presents a novel method for neuromorphic implementation of a two-variable WTA circuit with NMDARs, aimed at implementing decision-making, working memory, and hysteresis in visual perception. The method proposed is a dynamical-system approach of circuit synthesis based on a biophysically plausible WTA model. Notably, the slow and non-linear temporal dynamics of NMDAR-mediated synapses were generated. Circuit simulations in Cadence reproduced the ramping neural activities observed in electrophysiological recordings in decision-making experiments, the sustained activities observed in the prefrontal cortex during working memory, and classical hysteresis behavior during visual discrimination tasks. Furthermore, theoretical analysis of the dynamical-system approach illuminated the underlying mechanisms of decision-making, memory capacity, and hysteresis loops. The consistency between the circuit simulations and theoretical analysis demonstrated that the WTA circuit with NMDARs was able to capture the attractor dynamics underlying these cognitive functions. Their physical implementations as elementary modules are promising for assembly into integrated neuromorphic cognitive systems.
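The paper synthesizes a circuit from a biophysically plausible two-variable WTA model; the toy rate model below (custom parameters, threshold-linear rates, slow NMDA-like gating, not the paper's exact equations) only illustrates the qualitative behavior: the population receiving slightly stronger input wins, and its activity is sustained by slow recurrent excitation while the competitor is suppressed.

```python
import numpy as np

# Two-variable WTA rate model with slow NMDA-like synaptic gating.
# All parameters are illustrative assumptions.
def wta(I1, I2, t_end=3.0, dt=1e-3, tau=0.1, gamma=0.641):
    s = np.zeros(2)                      # slow NMDA gating variables
    J_self, J_cross = 0.3, 0.6           # recurrent excitation, cross inhibition
    for _ in range(int(t_end / dt)):
        x = np.array([J_self * s[0] - J_cross * s[1] + I1,
                      J_self * s[1] - J_cross * s[0] + I2])
        r = 100.0 * np.maximum(x, 0.0)   # threshold-linear firing rates (Hz)
        s += dt * (-s / tau + (1.0 - s) * gamma * r)
    return s

s = wta(I1=0.33, I2=0.27)   # slightly stronger evidence for option 1
# Population 1 wins and stays active (a decision held as persistent
# activity); population 2 is driven below threshold and decays.
```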
Affiliation(s)
- Hongzhi You
- Key Laboratory for NeuroInformation of Ministry of Education, Center for Information in BioMedicine, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Da-Hui Wang
- School of Systems Science and National Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
9. Giulioni M, Corradi F, Dante V, del Giudice P. Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems. Sci Rep 2015; 5:14730. PMID: 26463272; PMCID: PMC4604465; DOI: 10.1038/srep14730.
Abstract
Neuromorphic chips embody the computational principles operating in the nervous system in microelectronic devices. In this domain it is important to identify computational primitives that theory and experiments suggest as generic and reusable cognitive elements. One such element is provided by attractor dynamics in recurrent networks. Point attractors are equilibrium states of the dynamics (up to fluctuations), determined by the synaptic structure of the network; a 'basin' of attraction comprises all initial states leading to a given attractor upon relaxation, making attractor dynamics suitable for implementing robust associative memory. The initial network state is dictated by the stimulus, and relaxation to the attractor state implements the retrieval of the corresponding memorized prototypical pattern. In a previous work we demonstrated that a neuromorphic recurrent network of spiking neurons and suitably chosen, fixed synapses supports attractor dynamics. Here we focus on learning: activating on-chip synaptic plasticity and using a theory-driven strategy for choosing network parameters, we show that autonomous learning, following repeated presentation of simple visual stimuli, shapes a synaptic connectivity supporting stimulus-selective attractors. Associative memory develops on chip as the result of the coupled stimulus-driven neural activity and ensuing synaptic dynamics, with no artificial separation between learning and retrieval phases.
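The chip's plasticity is analog hardware, but the attractor-based associative memory principle the abstract describes can be sketched with a classic Hopfield network (an illustrative software stand-in, not the chip's dynamics): Hebbian learning turns stored patterns into point attractors, and a corrupted cue relaxes to the nearest stored prototype.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 200, 5                               # neurons, stored prototypes

patterns = rng.choice([-1, 1], size=(P, N))
# Hebbian outer-product synaptic matrix: each pattern becomes an attractor.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def retrieve(cue, steps=20):
    """Relax a noisy initial state toward the nearest stored attractor."""
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Corrupt 10% of one stored pattern and let the dynamics clean it up,
# i.e. retrieval from within the basin of attraction.
cue = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
cue[flip] *= -1
recalled = retrieve(cue)
```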
Affiliation(s)
- Federico Corradi
- Department of Technologies and Health, Istituto Superiore di Sanità, Roma, Italy
- Institute of Neuroinformatics, University of Zürich and ETH Zürich, Switzerland
- Vittorio Dante
- Department of Technologies and Health, Istituto Superiore di Sanità, Roma, Italy
- Paolo del Giudice
- Department of Technologies and Health, Istituto Superiore di Sanità, Roma, Italy
- National Institute for Nuclear Physics, Rome, Italy
10. Wang RM, Hamilton TJ, Tapson JC, van Schaik A. A neuromorphic implementation of multiple spike-timing synaptic plasticity rules for large-scale neural networks. Front Neurosci 2015; 9:180. PMID: 26041985; PMCID: PMC4438254; DOI: 10.3389/fnins.2015.00180.
Abstract
We present a neuromorphic implementation of multiple synaptic plasticity learning rules, including both Spike Timing Dependent Plasticity (STDP) and Spike Timing Dependent Delay Plasticity (STDDP). We present a fully digital implementation as well as a mixed-signal implementation, both of which use a novel dynamic-assignment time-multiplexing approach and support up to 2^26 (64M) synaptic plasticity elements. Rather than implementing dedicated synapses for particular types of synaptic plasticity, we implemented a more generic synaptic plasticity adaptor array that is separate from the neurons in the neural network. Each adaptor performs synaptic plasticity according to the arrival times of the pre- and post-synaptic spikes assigned to it, and sends out a weighted or delayed pre-synaptic spike to the post-synaptic neuron in the neural network. This strategy provides great flexibility for building complex large-scale neural networks, as a neural network can be configured for multiple synaptic plasticity rules without changing its structure. We validate the proposed neuromorphic implementations with measurement results and illustrate that the circuits are capable of performing both STDP and STDDP. We argue that it is practical to scale the work presented here up to 2^36 (64G) synaptic adaptors on a current high-end FPGA platform.
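A minimal pair-based STDP update of the kind such adaptors implement can be sketched as follows (generic textbook form with assumed amplitudes and time constant, not the hardware's exact rule): a causal pre-before-post pair potentiates the synapse, an anti-causal pair depresses it, and the magnitude decays exponentially with the spike-time difference.

```python
import math

# Generic pair-based STDP rule; amplitudes and time constant are assumptions.
A_PLUS, A_MINUS = 0.05, 0.025   # potentiation / depression amplitudes
TAU = 20.0                      # STDP time constant (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post: causal, potentiate
        return A_PLUS * math.exp(-dt / TAU)
    if dt < 0:    # post before pre: anti-causal, depress
        return -A_MINUS * math.exp(dt / TAU)
    return 0.0

w = 0.5
w += stdp_dw(t_pre=10.0, t_post=15.0)   # causal pair: w increases
w += stdp_dw(t_pre=40.0, t_post=32.0)   # anti-causal pair: w decreases
```

An adaptor in the sense of the abstract would apply such an update from spike arrival times alone, independently of which neuron circuits the synapse connects.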
Affiliation(s)
- Runchun M Wang
- The MARCS Institute, University of Western Sydney, Sydney, NSW, Australia
- Tara J Hamilton
- The MARCS Institute, University of Western Sydney, Sydney, NSW, Australia
- Jonathan C Tapson
- The MARCS Institute, University of Western Sydney, Sydney, NSW, Australia
- André van Schaik
- The MARCS Institute, University of Western Sydney, Sydney, NSW, Australia
11. Qiao N, Mostafa H, Corradi F, Osswald M, Stefanini F, Sumislawska D, Indiveri G. A reconfigurable on-line learning spiking neuromorphic processor comprising 256 neurons and 128K synapses. Front Neurosci 2015; 9:141. PMID: 25972778; PMCID: PMC4413675; DOI: 10.3389/fnins.2015.00141.
Abstract
Implementing compact, low-power artificial neural processing systems with real-time on-line learning abilities is still an open challenge. In this paper we present a full-custom mixed-signal VLSI device with neuromorphic learning circuits that emulate the biophysics of real spiking neurons and dynamic synapses, for exploring the properties of computational neuroscience models and for building brain-inspired computing systems. The proposed architecture allows the on-chip configuration of a wide range of network connectivities, including recurrent and deep networks, with short-term and long-term plasticity. The device comprises 128K analog synapse circuits and 256 neuron circuits with biologically plausible dynamics and bi-stable spike-based plasticity mechanisms that endow it with on-line learning abilities. In addition to the analog circuits, the device also comprises asynchronous digital logic for setting different synapse and neuron properties as well as different network configurations. This prototype device, fabricated using a 180 nm 1P6M CMOS process, occupies an area of 51.4 mm² and consumes approximately 4 mW for typical experiments, for example those involving attractor networks. Here we describe the details of the overall architecture and of the individual circuits, and present experimental results that showcase its potential. By supporting a wide range of cortical-like computational modules comprising plasticity mechanisms, this device will enable the realization of intelligent autonomous systems with on-line learning capabilities.
Affiliation(s)
- Ning Qiao
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Hesham Mostafa
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Federico Corradi
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Marc Osswald
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Fabio Stefanini
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Dora Sumislawska
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Giacomo Indiveri
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
12. Petrovici MA, Vogginger B, Müller P, Breitwieser O, Lundqvist M, Muller L, Ehrlich M, Destexhe A, Lansner A, Schüffny R, Schemmel J, Meier K. Characterization and compensation of network-level anomalies in mixed-signal neuromorphic modeling platforms. PLoS One 2014; 9:e108590. PMID: 25303102; PMCID: PMC4193761; DOI: 10.1371/journal.pone.0108590.
Abstract
Advancing the size and complexity of neural network models leads to an ever-increasing demand for computational resources for their simulation. Neuromorphic devices offer a number of advantages over conventional computing architectures, such as high emulation speed or low power consumption, but this usually comes at the price of reduced configurability and precision. In this article, we investigate the consequences of several such factors that are common to neuromorphic devices, specifically limited hardware resources, limited parameter configurability, and parameter variations due to fixed-pattern noise and trial-to-trial variability. Our final aim is to provide an array of methods for coping with such inevitable distortion mechanisms. As a platform for testing our proposed strategies, we use an executable system specification (ESS) of the BrainScaleS neuromorphic system, which has been designed as a universal emulation back-end for neuroscientific modeling. We address the most essential limitations of this device in detail and study their effects on three prototypical benchmark network models within a well-defined, systematic workflow. For each network model, we start by defining quantifiable functionality measures by which we then assess the effects of typical hardware-specific distortion mechanisms, both in idealized software simulations and on the ESS. For those effects that cause unacceptable deviations from the original network dynamics, we suggest generic compensation mechanisms and demonstrate their effectiveness. Both the suggested workflow and the investigated compensation mechanisms are largely back-end independent and do not require additional hardware configurability beyond the one required to emulate the benchmark networks in the first place. We hereby provide a generic methodological environment for configurable neuromorphic devices that are targeted at emulating large-scale, functional neural networks.
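One generic compensation pattern of the kind discussed is calibrate-and-invert: measure each unit's fixed-pattern deviation once, then pre-distort its input. The sketch below uses an invented threshold-linear neuron model (not the ESS workflow or the BrainScaleS calibration procedure) to show the idea.

```python
import numpy as np

rng = np.random.default_rng(2)

# Fixed-pattern noise: each "hardware" neuron's gain and offset deviate
# from the ideal (gain 1, offset 0). Model and spreads are assumptions.
N = 50
gain_true = 1.0 + 0.1 * rng.normal(size=N)
offset_true = 0.05 * rng.normal(size=N)

def hardware_rate(inp):
    """Threshold-linear response with per-neuron fixed-pattern deviations."""
    return np.maximum(gain_true * inp + offset_true, 0.0)

# Calibration: probe every neuron at two input levels and fit gain/offset.
x1, x2 = 1.0, 2.0
r1, r2 = hardware_rate(x1), hardware_rate(x2)
gain_est = (r2 - r1) / (x2 - x1)
offset_est = r1 - gain_est * x1

def compensated_input(target_inp):
    """Pre-distort the input so every neuron behaves like the ideal one."""
    return (target_inp - offset_est) / gain_est

rates = hardware_rate(compensated_input(1.5))
# After compensation all neurons produce the ideal response to input 1.5.
```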
Affiliation(s)
- Mihai A. Petrovici
- Ruprecht-Karls-Universität Heidelberg, Kirchhoff Institute for Physics, Heidelberg, Germany
- Bernhard Vogginger
- Technische Universität Dresden, Institute of Circuits and Systems, Dresden, Germany
- Paul Müller
- Ruprecht-Karls-Universität Heidelberg, Kirchhoff Institute for Physics, Heidelberg, Germany
- Oliver Breitwieser
- Ruprecht-Karls-Universität Heidelberg, Kirchhoff Institute for Physics, Heidelberg, Germany
- Mikael Lundqvist
- Department of Computational Biology, School of Computer Science and Communication, Stockholm University and Royal Institute of Technology, Stockholm, Sweden
- Lyle Muller
- CNRS, Unité de Neuroscience, Information et Complexité, Gif sur Yvette, France
- Matthias Ehrlich
- Technische Universität Dresden, Institute of Circuits and Systems, Dresden, Germany
- Alain Destexhe
- CNRS, Unité de Neuroscience, Information et Complexité, Gif sur Yvette, France
- Anders Lansner
- Department of Computational Biology, School of Computer Science and Communication, Stockholm University and Royal Institute of Technology, Stockholm, Sweden
- René Schüffny
- Technische Universität Dresden, Institute of Circuits and Systems, Dresden, Germany
- Johannes Schemmel
- Ruprecht-Karls-Universität Heidelberg, Kirchhoff Institute for Physics, Heidelberg, Germany
- Karlheinz Meier
- Ruprecht-Karls-Universität Heidelberg, Kirchhoff Institute for Physics, Heidelberg, Germany
13. Stefanini F, Neftci EO, Sheik S, Indiveri G. PyNCS: a microkernel for high-level definition and configuration of neuromorphic electronic systems. Front Neuroinform 2014; 8:73. PMID: 25232314; PMCID: PMC4152885; DOI: 10.3389/fninf.2014.00073.
Abstract
Neuromorphic hardware offers an electronic substrate for the realization of asynchronous event-based sensory-motor systems and large-scale spiking neural network architectures. In order to characterize these systems, configure them, and carry out modeling experiments, it is often necessary to interface them to workstations. The software used for this purpose typically consists of a large monolithic block of code that is highly specific to the hardware setup used. While this approach can lead to highly integrated hardware/software systems, it hampers the development of modular and reconfigurable infrastructures, thus preventing a rapid evolution of such systems. To alleviate this problem, we propose PyNCS, an open-source front-end for the definition of neural network models that is interfaced to the hardware through a set of Python Application Programming Interfaces (APIs). The design of PyNCS promotes modularity, portability, and expandability, and separates implementation from hardware description. The high-level front-end that comes with PyNCS includes tools to define neural network models as well as to create, monitor, and analyze spiking data. Here we report the design philosophy behind the PyNCS framework and describe its implementation. We demonstrate its functionality with two representative case studies, one using an event-based neuromorphic vision sensor, and one using a set of multi-neuron devices for carrying out a cognitive decision-making task involving state-dependent computation. PyNCS, already applicable to a wide range of existing spike-based neuromorphic setups, will accelerate the development of hybrid software/hardware neuromorphic systems, thanks to its code flexibility. The code is open-source and available online at https://github.com/inincs/pyNCS.
Affiliation(s)
- Fabio Stefanini
- Department of Information Technology and Electrical Engineering, Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Emre O Neftci
- Department of Bioengineering, Institute for Neural Computation, University of California at San Diego, La Jolla, CA, USA
- Sadique Sheik
- Department of Information Technology and Electrical Engineering, Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Giacomo Indiveri
- Department of Information Technology and Electrical Engineering, Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
14. Wang R, Cohen G, Stiefel KM, Hamilton TJ, Tapson J, van Schaik A. An FPGA Implementation of a Polychronous Spiking Neural Network with Delay Adaptation. Front Neurosci 2013; 7:14. PMID: 23408739; PMCID: PMC3570898; DOI: 10.3389/fnins.2013.00014.
Abstract
We present an FPGA implementation of a reconfigurable polychronous spiking neural network with a large capacity for spatio-temporal patterns. The proposed neural network generates delay paths de novo, so that only connections that actually appear in the training patterns will be created. This allows the proposed network to use all the axons (variables) to store information. Spike Timing Dependent Delay Plasticity is used to fine-tune and add dynamics to the network. We use a time-multiplexing approach, allowing us to achieve 4096 (4k) neurons and up to 1.15 million programmable delay axons on a Virtex 6 FPGA. Test results show that the proposed neural network is capable of successfully recalling more than 95% of all spikes for 96% of the stored patterns. The tests also show that the neural network is robust to noise from random input spikes.
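Spike Timing Dependent Delay Plasticity can be sketched generically: the programmable axonal delay is nudged so that the delayed pre-synaptic spike arrives exactly at the post-synaptic spike time (learning rate and delay bounds below are assumptions, not the paper's values).

```python
# Generic delay-adaptation sketch: move the delay toward the value that
# makes spike arrival (t_pre + delay) coincide with the post-synaptic spike.
def adapt_delay(delay, t_pre, t_post, lr=0.5, d_min=1.0, d_max=40.0):
    arrival = t_pre + delay
    delay += lr * (t_post - arrival)          # shrink the timing error
    return min(max(delay, d_min), d_max)      # keep delay programmable range

delay = 5.0
for _ in range(20):              # repeated presentations of one pattern
    delay = adapt_delay(delay, t_pre=10.0, t_post=22.0)
# The delay converges toward t_post - t_pre = 12 time steps, so the
# axon becomes selective for this particular spike-timing pattern.
```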
Affiliation(s)
- Runchun Wang
- Bioelectronics and Neuroscience, The MARCS Institute, University of Western Sydney, Sydney, NSW, Australia