1
Renner A, Sheldon F, Zlotnik A, Tao L, Sornborger A. The backpropagation algorithm implemented on spiking neuromorphic hardware. Nat Commun 2024; 15:9691. PMID: 39516210; PMCID: PMC11549378; DOI: 10.1038/s41467-024-53827-9.
Abstract
The capabilities of natural neural systems have inspired both new generations of machine learning algorithms and neuromorphic, very-large-scale integrated circuits capable of fast, low-power information processing. However, it has been argued that most modern machine learning algorithms are not neurophysiologically plausible. In particular, the workhorse of modern deep learning, the backpropagation algorithm, has proven difficult to translate to neuromorphic hardware. This study presents a neuromorphic, spiking backpropagation algorithm based on synfire-gated dynamical information coordination and processing implemented on Intel's Loihi neuromorphic research processor. We demonstrate a proof-of-principle three-layer circuit that learns to classify digits and clothing items from the MNIST and Fashion MNIST datasets. To our knowledge, this is the first work to show a spiking neural network (SNN) implementation of the exact backpropagation algorithm that is fully on-chip, without a computer in the loop. It is competitive in accuracy with off-chip trained SNNs and achieves an energy-delay product suitable for edge computing. This implementation shows a path for using in-memory, massively parallel neuromorphic processors for low-power, low-latency deployment of modern deep learning applications.
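The paper's contribution is performing this computation with spikes on Loihi; the exact three-layer backpropagation it implements can be sketched in conventional dense-matrix form. A minimal NumPy illustration on toy data (network sizes, learning rate, and data here are hypothetical stand-ins, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for MNIST: 100 samples, 16 features, 3 classes.
X = rng.normal(size=(100, 16))
y = rng.integers(0, 3, size=100)
T = np.eye(3)[y]                            # one-hot targets

W1 = rng.normal(scale=0.1, size=(16, 12))   # input -> hidden
W2 = rng.normal(scale=0.1, size=(12, 3))    # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1)
    return h, sigmoid(h @ W2)

loss_start = np.mean((forward(X)[1] - T) ** 2)

lr = 0.5
for _ in range(200):
    h, o = forward(X)
    d_o = (o - T) * o * (1 - o)        # exact output-layer gradient (MSE)
    d_h = (d_o @ W2.T) * h * (1 - h)   # error backpropagated to the hidden layer
    W2 -= lr * h.T @ d_o / len(X)
    W1 -= lr * X.T @ d_h / len(X)

h, o = forward(X)
loss_end = np.mean((o - T) ** 2)
acc = np.mean(o.argmax(axis=1) == y)
```

The spiking implementation replaces these dense matrix products with synfire-gated spike propagation, but the gradient computation is the same.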
Affiliation(s)
- Alpha Renner
  - Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, 8057, Switzerland
  - Forschungszentrum Jülich, Jülich, 52428, Germany
- Forrest Sheldon
  - Physics of Condensed Matter & Complex Systems (T-4), Los Alamos National Laboratory, Los Alamos, NM, 87545, USA
  - London Institute for Mathematical Sciences, Royal Institution, London, W1S 4BS, UK
- Anatoly Zlotnik
  - Applied Mathematics & Plasma Physics (T-5), Los Alamos National Laboratory, Los Alamos, NM, 87545, USA
- Louis Tao
  - Center for Bioinformatics, National Laboratory of Protein Engineering and Plant Genetic Engineering, School of Life Sciences, Peking University, Beijing, 100871, China
  - Center for Quantitative Biology, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, 100871, China
- Andrew Sornborger
  - Information Sciences (CCS-3), Los Alamos National Laboratory, Los Alamos, NM, 87545, USA
2
Hore A, Bandyopadhyay S, Chakrabarti S. Persistent spiking activity in neuromorphic circuits incorporating post-inhibitory rebound excitation. J Neural Eng 2024; 21:036048. PMID: 38861961; DOI: 10.1088/1741-2552/ad56c8.
Abstract
Objective. This study introduces a novel approach for integrating the post-inhibitory rebound excitation (PIRE) phenomenon into a neuronal circuit. Excitatory and inhibitory synapses are designed to establish a connection between two hardware neurons, effectively forming a network. The model demonstrates the occurrence of PIRE under strong inhibitory input. Emphasizing the significance of incorporating PIRE in neuromorphic circuits, the study showcases the generation of persistent activity within cyclic and recurrent spiking neuronal networks. Approach. The neuronal and synaptic circuits are designed and simulated in Cadence Virtuoso using TSMC 180 nm technology. The operating mechanism of the PIRE phenomenon integrated into a hardware neuron is discussed. The proposed circuit encompasses several parameters for effectively controlling multiple electrophysiological features of a neuron. Main results. The neuronal circuit has been tuned to match the response of a biological neuron. The efficiency of this circuit is evaluated by computing the average power dissipation and energy consumption per spike through simulation. Sustained firing of neural spikes is observed for up to 1.7 s using the two neuronal networks. Significance. Persistent activity has significant implications for various cognitive functions such as working memory, decision-making, and attention; hardware implementation of these functions will therefore require a PIRE-integrated model. Energy-efficient neuromorphic systems are useful in many artificial intelligence applications, including human-machine interaction, IoT devices, autonomous systems, and brain-computer interfaces.
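Post-inhibitory rebound can be reproduced in software with a standard two-variable neuron model. The sketch below releases an Izhikevich-type neuron from a hyperpolarizing current pulse; all parameter values are illustrative, a software analogue rather than the paper's transistor-level circuit:

```python
import numpy as np

# Izhikevich-type neuron in a parameter regime that shows a rebound spike
# after release from hyperpolarization (illustrative values).
a, b, c, d = 0.03, 0.25, -60.0, 4.0
dt = 0.25                                # ms
steps = int(400.0 / dt)
v, u = -64.0, b * -64.0                  # start near the resting fixed point
v_trace = np.empty(steps)
n_spikes = 0
for i in range(steps):
    t = i * dt
    i_inj = -15.0 if 100.0 <= t < 200.0 else 0.0   # strong inhibitory pulse
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_inj)
    u += dt * a * (b * v - u)
    if v >= 30.0:                        # spike detected: reset
        v, u = c, u + d
        n_spikes += 1
    v_trace[i] = min(v, 30.0)

v_min = float(v_trace.min())                         # hyperpolarized minimum
rebound_peak = float(v_trace[int(200.0 / dt):].max())  # activity after release
```

The slow recovery variable u tracks the hyperpolarized membrane during the pulse; when the inhibition is released, the lagging u transiently depolarizes the neuron and produces the rebound spike.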
3
Cotteret M, Greatorex H, Ziegler M, Chicca E. Vector Symbolic Finite State Machines in Attractor Neural Networks. Neural Comput 2024; 36:549-595. PMID: 38457766; DOI: 10.1162/neco_a_01638.
Abstract
Hopfield attractor networks are robust distributed models of human memory, but they lack a general mechanism for effecting state-dependent attractor transitions in response to input. We propose construction rules such that an attractor network may implement an arbitrary finite state machine (FSM), where states and stimuli are represented by high-dimensional random vectors and all state transitions are enacted by the attractor network's dynamics. Numerical simulations show the capacity of the model, in terms of the maximum size of implementable FSM, to be linear in the size of the attractor network for dense bipolar state vectors and approximately quadratic for sparse binary state vectors. We show that the model is robust to imprecise and noisy weights, and so is a prime candidate for implementation with high-density but unreliable devices. By endowing attractor networks with the ability to emulate arbitrary FSMs, we propose a plausible path by which FSMs could exist as a distributed computational primitive in biological neural networks.
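The building block the construction extends is the classical Hopfield network over dense bipolar vectors. A minimal sketch of outer-product storage and attractor recall (network size, load, and noise level are illustrative, not the paper's capacity experiments):

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 200, 5                                 # neurons, stored patterns

patterns = rng.choice([-1, 1], size=(P, N))   # dense bipolar state vectors

# Hebbian outer-product storage with zeroed self-connections.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def recall(state, steps=20):
    """Synchronous attractor dynamics: repeatedly apply sign(W s)."""
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

cue = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
cue[flip] *= -1                               # corrupt 10% of the bits

out = recall(cue)
overlap = float(np.mean(out == patterns[0]))  # fraction of recovered bits
```

The paper's contribution is adding stimulus-gated transitions between such attractors so that the network's relaxation dynamics execute FSM state changes.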
Affiliation(s)
- Madison Cotteret
  - Micro- and Nanoelectronic Systems, Institute of Micro- and Nanotechnologies (IMN) MacroNano, Technische Universität Ilmenau, 98693 Ilmenau, Germany
  - Bio-Inspired Circuits and Systems Lab, Zernike Institute for Advanced Materials, and Groningen Cognitive Systems and Materials Center, University of Groningen, 9747 AG Groningen, Netherlands
- Hugh Greatorex
  - Bio-Inspired Circuits and Systems Lab, Zernike Institute for Advanced Materials, and Groningen Cognitive Systems and Materials Center, University of Groningen, 9747 AG Groningen, Netherlands
- Martin Ziegler
  - Micro- and Nanoelectronic Systems, Institute of Micro- and Nanotechnologies (IMN) MacroNano, Technische Universität Ilmenau, 98693 Ilmenau, Germany
- Elisabetta Chicca
  - Bio-Inspired Circuits and Systems Lab, Zernike Institute for Advanced Materials, and Groningen Cognitive Systems and Materials Center, University of Groningen, 9747 AG Groningen, Netherlands
4
Loeffler A, Diaz-Alvarez A, Zhu R, Ganesh N, Shine JM, Nakayama T, Kuncic Z. Neuromorphic learning, working memory, and metaplasticity in nanowire networks. Sci Adv 2023; 9:eadg3289. PMID: 37083527; PMCID: PMC10121165; DOI: 10.1126/sciadv.adg3289.
Abstract
Nanowire networks (NWNs) mimic the brain's neurosynaptic connectivity and emergent dynamics. Consequently, NWNs may also emulate the synaptic processes that enable higher-order cognitive functions such as learning and memory. A quintessential cognitive task used to measure human working memory is the n-back task. In this study, task variations inspired by the n-back task are implemented in a NWN device, and external feedback is applied to emulate brain-like supervised and reinforcement learning. NWNs are found to retain information in working memory to at least n = 7 steps back, remarkably similar to the originally proposed "seven plus or minus two" rule for human subjects. Simulations elucidate how synapse-like NWN junction plasticity depends on previous synaptic modifications, analogous to "synaptic metaplasticity" in the brain, and how memory is consolidated via strengthening and pruning of synaptic conductance pathways.
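The n-back protocol itself is straightforward to state in code. A minimal target scorer over a hypothetical stimulus stream (illustrative only, not the NWN experiment):

```python
import random

def nback_targets(stream, n):
    """Indices i at which stream[i] matches the item presented n steps back."""
    return [i for i in range(n, len(stream)) if stream[i] == stream[i - n]]

random.seed(7)
stream = [random.choice("ABCD") for _ in range(30)]
targets = nback_targets(stream, 7)   # 7-back, the depth the NWNs reached
```

A subject (or device) performing the task must hold the last n items in working memory and flag exactly these indices.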
Affiliation(s)
- Alon Loeffler
  - The University of Sydney, School of Physics, Sydney, Australia
  - Corresponding author
- Adrian Diaz-Alvarez
  - International Center for Young Scientist (ICYS), National Institute for Materials Science (NIMS), Tsukuba, Japan
  - International Center for Materials Nanoarchitectonics (WPI-MANA), National Institute for Materials Science (NIMS), Tsukuba, Japan
  - Corresponding author
- Ruomin Zhu
  - The University of Sydney, School of Physics, Sydney, Australia
- Natesh Ganesh
  - National Institute of Standards and Technology (NIST), Boulder, CO, USA
  - University of Colorado, Boulder, CO, USA
- James M. Shine
  - The University of Sydney, School of Physics, Sydney, Australia
  - Brain and Mind Centre, The University of Sydney, Sydney, Australia
  - The University of Sydney, School of Medical Sciences, Sydney, Australia
- Tomonobu Nakayama
  - The University of Sydney, School of Physics, Sydney, Australia
  - International Center for Materials Nanoarchitectonics (WPI-MANA), National Institute for Materials Science (NIMS), Tsukuba, Japan
  - Graduate School of Pure and Applied Sciences, University of Tsukuba, Tsukuba, Japan
- Zdenka Kuncic
  - The University of Sydney, School of Physics, Sydney, Australia
  - International Center for Materials Nanoarchitectonics (WPI-MANA), National Institute for Materials Science (NIMS), Tsukuba, Japan
  - The University of Sydney Nano Institute, Sydney, Australia
  - Corresponding author
5
Schmitt FJ, Rostami V, Nawrot MP. Efficient parameter calibration and real-time simulation of large-scale spiking neural networks with GeNN and NEST. Front Neuroinform 2023; 17:941696. PMID: 36844916; PMCID: PMC9950635; DOI: 10.3389/fninf.2023.941696.
Abstract
Spiking neural networks (SNNs) represent the state-of-the-art approach to the biologically realistic modeling of nervous system function. The systematic calibration of multiple free model parameters is necessary to achieve robust network function and demands high computing power and large memory resources. Special requirements arise from closed-loop model simulation in virtual environments and from real-time simulation in robotic applications. Here, we compare two complementary approaches to efficient large-scale and real-time SNN simulation. The widely used NEural Simulation Tool (NEST) parallelizes simulation across multiple CPU cores. The GPU-enhanced Neural Network (GeNN) simulator uses the highly parallel GPU-based architecture to gain simulation speed. We quantify fixed and variable simulation costs on single machines with different hardware configurations. As a benchmark model, we use a spiking cortical attractor network with a topology of densely connected excitatory and inhibitory neuron clusters with homogeneous or distributed synaptic time constants, and compare it to the random balanced network. We show that simulation time scales linearly with the simulated biological model time and, for large networks, approximately linearly with the model size as dominated by the number of synaptic connections. Additional fixed costs with GeNN are almost independent of model size, while fixed costs with NEST increase linearly with model size. We demonstrate how GeNN can be used for simulating networks with up to 3.5 × 10⁶ neurons (>3 × 10¹² synapses) on a high-end GPU, and up to 250,000 neurons (25 × 10⁹ synapses) on a low-cost GPU. Real-time simulation was achieved for networks with 100,000 neurons. Network calibration and parameter grid search can be efficiently achieved using batch processing. We discuss the advantages and disadvantages of both approaches for different use cases.
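Why variable cost is dominated by synaptic connections can be illustrated with a toy current-based LIF simulation that counts synaptic events: doubling the neuron count at fixed connection probability roughly quadruples the events to process. This is a schematic cost model, not GeNN or NEST:

```python
import numpy as np

def simulate_lif(n_neurons, p_connect, t_steps=200, seed=0):
    """Minimal current-based LIF network; returns the number of synaptic
    events processed, the quantity that dominates cost for large networks."""
    rng = np.random.default_rng(seed)
    W = (rng.random((n_neurons, n_neurons)) < p_connect) * 0.01
    np.fill_diagonal(W, 0.0)
    v = np.zeros(n_neurons)
    syn_events = 0
    for _ in range(t_steps):
        v += 0.5 + rng.normal(0.0, 0.3, n_neurons)   # constant + noise drive
        spikes = v >= 1.0
        v[spikes] = 0.0                               # threshold and reset
        syn_events += int((W[:, spikes] > 0).sum())   # one event per active synapse
        v += W[:, spikes].sum(axis=1)                 # propagate spikes
        v *= 0.9                                      # leak
    return syn_events

events_small = simulate_lif(100, 0.1)
events_large = simulate_lif(200, 0.1)   # twice the neurons, four times the synapses
```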
Affiliation(s)
- Martin Paul Nawrot
  - Computational Systems Neuroscience, Institute of Zoology, University of Cologne, Cologne, Germany
6
Jadaun P, Cui C, Liu S, Incorvia JAC. Adaptive cognition implemented with a context-aware and flexible neuron for next-generation artificial intelligence. PNAS Nexus 2022; 1:pgac206. PMID: 36712357; PMCID: PMC9802372; DOI: 10.1093/pnasnexus/pgac206.
Abstract
Neuromorphic computing mimics the organizational principles of the brain in its quest to replicate the brain's intellectual abilities. An impressive ability of the brain is its adaptive intelligence, which allows the brain to regulate its functions "on the fly" to cope with myriad and ever-changing situations. In particular, the brain displays three adaptive and advanced intelligence abilities: context-awareness, cross-frequency coupling, and feature binding. To mimic these adaptive cognitive abilities, we design and simulate a novel, hardware-based adaptive oscillatory neuron using a lattice of magnetic skyrmions. Charge current fed to the neuron reconfigures the skyrmion lattice, thereby modulating the neuron's state, its dynamics and its transfer function "on the fly." This adaptive neuron is used to demonstrate the three cognitive abilities, of which context-awareness and cross-frequency coupling have not been previously realized in hardware neurons. Additionally, the neuron is used to construct an adaptive artificial neural network (ANN) and perform context-aware diagnosis of breast cancer. Simulations show that the adaptive ANN diagnoses cancer with higher accuracy while learning faster and using a more compact and energy-efficient network than a nonadaptive ANN. The work further describes how hardware-based adaptive neurons can mitigate several critical challenges facing contemporary ANNs. Modern ANNs require large amounts of training data, energy, and chip area, and are highly task-specific; conversely, hardware-based ANNs built with adaptive neurons show faster learning, compact architectures, energy-efficiency, fault-tolerance, and can lead to the realization of broader artificial intelligence.
Affiliation(s)
- Sam Liu
  - Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX 78712, USA
7
Abstract
Computational properties of neuronal networks have been applied to computing systems using simplified models comprising repeated connected nodes, e.g., perceptrons, with decision-making capabilities and flexible weighted links. Analogously to their revolutionary impact on computing, neuro-inspired models can transform synthetic gene circuit design in a manner that is reliable, efficient in resource utilization, and readily reconfigurable for different tasks. To this end, we introduce the perceptgene, a perceptron that computes in the logarithmic domain, which enables efficient implementation of artificial neural networks in Escherichia coli cells. We successfully modify perceptgene parameters to create devices that encode a minimum, maximum, and average of analog inputs. With these devices, we create multi-layer perceptgene circuits that compute a soft majority function, perform an analog-to-digital conversion, and implement a ternary switch. We also create a programmable perceptgene circuit whose computation can be modified from OR to AND logic using small molecule induction. Finally, we show that our approach enables circuit optimization via artificial intelligence algorithms.
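One convenient mathematical picture of log-domain computation (an illustration, not the paper's gene-circuit model) is the weighted product exp(Σ wᵢ ln xᵢ); power-law weighting likewise yields the min/max/average behaviors described:

```python
import math

def perceptgene_like(xs, ws):
    """Log-domain 'weighted product' primitive: exp(sum w_i * ln x_i).
    With weights summing to 1 this is a weighted geometric mean.
    (Hypothetical helper name; the biological implementation differs.)"""
    return math.exp(sum(w * math.log(x) for x, w in zip(xs, ws)))

def power_mean(xs, p):
    """Generalized mean via powers: p = 1 gives the average, large +p
    approaches the maximum, large -p approaches the minimum."""
    return (sum(x ** p for x in xs) / len(xs)) ** (1.0 / p)

xs = [1.0, 2.0, 4.0]
geo = perceptgene_like(xs, [1/3, 1/3, 1/3])   # geometric mean
avg = power_mean(xs, 1)                       # arithmetic mean
near_max = power_mean(xs, 40)                 # approaches max(xs)
near_min = power_mean(xs, -40)                # approaches min(xs)
```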
8
Abstract
The design of robots that interact autonomously with the environment and exhibit complex behaviours is an open challenge that can benefit from understanding what makes living beings fit to act in the world. Neuromorphic engineering studies neural computational principles to develop technologies that can provide a computing substrate for building compact and low-power processing systems. We discuss why endowing robots with neuromorphic technologies, from perception to motor control, represents a promising approach for the creation of robots that can seamlessly integrate into society. We present initial attempts in this direction, highlight open challenges, and propose actions required to overcome current limitations.
Affiliation(s)
- Chiara Bartolozzi
  - Event-Driven Perception for Robotics, Istituto Italiano di Tecnologia, via San Quirico 19D, 16163, Genova, Italy
- Giacomo Indiveri
  - Institute of Neuroinformatics, University of Zurich and ETH Zurich, Winterthurerstr. 190, 8057, Zurich, Switzerland
- Elisa Donati
  - Institute of Neuroinformatics, University of Zurich and ETH Zurich, Winterthurerstr. 190, 8057, Zurich, Switzerland
9
Yang S, Gao T, Wang J, Deng B, Lansdell B, Linares-Barranco B. Efficient Spike-Driven Learning With Dendritic Event-Based Processing. Front Neurosci 2021; 15:601109. PMID: 33679295; PMCID: PMC7933681; DOI: 10.3389/fnins.2021.601109.
Abstract
A critical challenge in neuromorphic computing is the design of computationally efficient learning algorithms. When implementing gradient-based learning, error information must be routed through the network, such that each neuron knows its contribution to the output, and thus how to adjust its weight. This is known as the credit assignment problem. Exactly implementing a solution like backpropagation involves weight sharing, which requires additional bandwidth and computations in a neuromorphic system. Instead, models of learning from neuroscience can provide inspiration for how to communicate error information efficiently, without weight sharing. Here we present a novel dendritic event-based processing (DEP) algorithm, using a two-compartment leaky integrate-and-fire neuron with partially segregated dendrites that effectively solves the credit assignment problem. In order to optimize the proposed algorithm, a dynamic fixed-point representation method and a piecewise linear approximation approach are presented, and the synaptic events are binarized during learning. These optimizations make the proposed DEP algorithm very suitable for implementation in digital or mixed-signal neuromorphic hardware. The experimental results show that spiking networks can learn rapidly, achieving high performance with the proposed DEP algorithm. We find the learning capability is affected by the degree of dendritic segregation and the form of the synaptic feedback connections. This study provides a bridge between biological and neuromorphic learning, and is relevant for real-time applications in the field of artificial intelligence.
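The two-compartment neuron at the heart of such schemes can be sketched schematically: a dendritic compartment integrates feedback (credit-assignment) current and couples into a spiking soma. All parameters below are illustrative, and the sketch omits the paper's fixed-point and binarization optimizations:

```python
def two_compartment_lif(i_soma, i_dend, g_c=0.1, tau_s=20.0, tau_d=30.0,
                        v_th=1.0, dt=1.0):
    """Leaky two-compartment neuron: the dendrite integrates a feedback
    current and couples into the soma through conductance g_c."""
    v_s = v_d = 0.0
    spike_times = []
    for t, (i_s, i_d) in enumerate(zip(i_soma, i_dend)):
        v_d += dt * (-v_d / tau_d + i_d)                      # dendrite
        v_s += dt * (-v_s / tau_s + i_s + g_c * (v_d - v_s))  # soma
        if v_s >= v_th:
            spike_times.append(t)
            v_s = 0.0                                         # somatic reset
    return spike_times, v_d

n = 200
spike_times, v_dend = two_compartment_lif([0.1] * n, [0.02] * n)
```

With the chosen drives, the dendritic potential slowly charges and tips the soma over threshold, so feedback arriving at the dendrite modulates somatic firing, which is the signal the learning rule exploits.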
Affiliation(s)
- Shuangming Yang
  - School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Tian Gao
  - School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Jiang Wang
  - School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Bin Deng
  - School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Benjamin Lansdell
  - Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, United States
10
Haessig G, Milde MB, Aceituno PV, Oubari O, Knight JC, van Schaik A, Benosman RB, Indiveri G. Event-Based Computation for Touch Localization Based on Precise Spike Timing. Front Neurosci 2020; 14:420. PMID: 32528239; PMCID: PMC7248403; DOI: 10.3389/fnins.2020.00420.
Abstract
Precise spike timing and temporal coding are used extensively within the nervous system of insects and in the sensory periphery of higher order animals. However, conventional Artificial Neural Networks (ANNs) and machine learning algorithms cannot take advantage of this coding strategy, due to their rate-based representation of signals. Even in the case of artificial Spiking Neural Networks (SNNs), identifying applications where temporal coding outperforms the rate coding strategies of ANNs is still an open challenge. Neuromorphic sensory-processing systems provide an ideal context for exploring the potential advantages of temporal coding, as they are able to efficiently extract the information required to cluster or classify spatio-temporal activity patterns from relative spike timing. Here we propose a neuromorphic model inspired by the sand scorpion to explore the benefits of temporal coding, and validate it in an event-based sensory-processing task. The task consists of localizing a target using only the relative spike timing of eight spatially separated vibration sensors. We propose two different approaches in which the SNN learns to cluster spatio-temporal patterns in an unsupervised manner, and we demonstrate how the task can be solved both analytically and through numerical simulation of multiple SNN models. We argue that the models presented are optimal for spatio-temporal pattern classification using precise spike timing in a task that could be used as a standard benchmark for evaluating event-based sensory processing models based on temporal coding.
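The geometry of the task can be sketched without any SNN: first-spike times encode travel distance, and a population-vector readout of the relative timings recovers the bearing. Sensor layout, wave speed, and the decoding rule below are illustrative assumptions, not the paper's models:

```python
import math

def arrival_times(target_angle, r_sensor=1.0, r_target=5.0, v=1.0):
    """First-spike times at 8 sensors on a circle: travel distance / speed."""
    tx = r_target * math.cos(target_angle)
    ty = r_target * math.sin(target_angle)
    times = []
    for k in range(8):
        sx = r_sensor * math.cos(2 * math.pi * k / 8)
        sy = r_sensor * math.sin(2 * math.pi * k / 8)
        times.append(math.hypot(tx - sx, ty - sy) / v)
    return times

def estimate_angle(times):
    """Decode bearing from relative spike timing: sensors that fire earlier
    pull the population vector toward the target."""
    weights = [max(times) - t for t in times]   # earlier spike -> larger weight
    x = sum(w * math.cos(2 * math.pi * k / 8) for k, w in enumerate(weights))
    y = sum(w * math.sin(2 * math.pi * k / 8) for k, w in enumerate(weights))
    return math.atan2(y, x)

true_angle = 0.9
est = estimate_angle(arrival_times(true_angle))   # close to true_angle
```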
Affiliation(s)
- Germain Haessig
  - Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Moritz B Milde
  - International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University, Penrith, NSW, Australia
- Pau Vilimelis Aceituno
  - Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany
  - Max Planck School of Cognition, Leipzig, Germany
- Omar Oubari
  - Institut de la Vision, Sorbonne Université, Paris, France
- James C Knight
  - Centre for Computational Neuroscience and Robotics, School of Engineering and Informatics, University of Sussex, Brighton, United Kingdom
- André van Schaik
  - International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University, Penrith, NSW, Australia
- Ryad B Benosman
  - Institut de la Vision, Sorbonne Université, Paris, France
  - University of Pittsburgh, Pittsburgh, PA, United States
  - Carnegie Mellon University, Pittsburgh, PA, United States
- Giacomo Indiveri
  - Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
11
Inferring and validating mechanistic models of neural microcircuits based on spike-train data. Nat Commun 2019; 10:4933. PMID: 31666513; PMCID: PMC6821748; DOI: 10.1038/s41467-019-12572-0.
Abstract
The interpretation of neuronal spike train recordings often relies on abstract statistical models that allow for principled parameter estimation and model selection but provide only limited insights into underlying microcircuits. In contrast, mechanistic models are useful to interpret microcircuit dynamics, but are rarely quantitatively matched to experimental data due to methodological challenges. Here we present analytical methods to efficiently fit spiking circuit models to single-trial spike trains. Using derived likelihood functions, we statistically infer the mean and variance of hidden inputs, neuronal adaptation properties and connectivity for coupled integrate-and-fire neurons. Comprehensive evaluations on synthetic data, validations using ground truth in-vitro and in-vivo recordings, and comparisons with existing techniques demonstrate that parameter estimation is very accurate and efficient, even for highly subsampled networks. Our methods bridge statistical, data-driven and theoretical, model-based neurosciences at the level of spiking circuits, for the purpose of a quantitative, mechanistic interpretation of recorded neuronal population activity.
12
Passian A, Imam N. Nanosystems, Edge Computing, and the Next Generation Computing Systems. Sensors (Basel) 2019; 19:E4048. PMID: 31546907; PMCID: PMC6767340; DOI: 10.3390/s19184048.
Abstract
It is widely recognized that nanoscience and nanotechnology and their subfields, such as nanophotonics, nanoelectronics, and nanomechanics, have had a tremendous impact on recent advances in sensing, imaging, and communication, with notable developments including novel transistors and processor architectures. For example, in addition to being supremely fast, optical and photonic components and devices are capable of operating across length, power, and spectral scales spanning multiple orders of magnitude, encompassing the range from macroscopic device sizes and kW energies to atomic domains and single-photon energies. The extreme versatility of the associated electromagnetic phenomena and applications, both classical and quantum, is therefore highly appealing to the rapidly evolving computing and communication realms, where innovations in both hardware and software are necessary to meet the growing speed and memory requirements. Development of all-optical components, photonic chips, interconnects, and processors will bring the speed of light, photon coherence properties, field confinement and enhancement, information-carrying capacity, and the broad spectrum of light into high-performance computing, the internet of things, and industries related to cloud, fog, and, recently, edge computing. Conversely, owing to their extraordinary properties, 0D, 1D, and 2D materials are being explored as a physical basis for the next generation of logic components and processors. Carbon nanotubes, for example, have recently been used to create a new processor beyond proof of principle. These developments, in conjunction with neuromorphic and quantum computing, are envisioned to maintain the growth of computing power beyond the projected plateau for silicon technology. We survey the qualitative figures of merit of technologies of current interest for next-generation computing, with an emphasis on edge computing.
Affiliation(s)
- Ali Passian
  - Computing & Computational Sciences Directorate, Oak Ridge National Laboratory, Oak Ridge, TN 37830, USA
- Neena Imam
  - Computing & Computational Sciences Directorate, Oak Ridge National Laboratory, Oak Ridge, TN 37830, USA
13
Haessig G, Berthelon X, Ieng SH, Benosman R. A Spiking Neural Network Model of Depth from Defocus for Event-based Neuromorphic Vision. Sci Rep 2019; 9:3744. PMID: 30842458; PMCID: PMC6403400; DOI: 10.1038/s41598-019-40064-0.
Abstract
Depth from defocus is an important mechanism that enables vision systems to perceive depth. While machine vision has developed several algorithms to estimate depth from the amount of defocus present at the focal plane, existing techniques are slow, energy-demanding, and rely mainly on numerous acquisitions and massive amounts of filtering operations on the pixels' absolute luminance values. Recent advances in neuromorphic engineering allow an alternative to this problem, with the use of event-based silicon retinas and neural processing devices inspired by the organizing principles of the brain. In this paper, we present a low-power, compact and computationally inexpensive setup to estimate depth in a 3D scene in real time at high rates that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. Exploiting the high temporal resolution of the event-based silicon retina, we are able to extract depth at 100 Hz for a power budget lower than 200 mW (10 mW for the camera, 90 mW for the liquid lens and ~100 mW for the computation). We validate the model with experimental results, highlighting features that are consistent with both computational neuroscience and recent findings in retinal physiology. We demonstrate its efficiency with a prototype of a neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological depth-from-defocus experiments reported in the literature.
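The depth cue itself comes from the thin-lens relation: during a focal sweep, the focal length at which the image is sharpest (here, where the event rate peaks) encodes object depth. A minimal sketch with illustrative numbers, not the paper's optical parameters:

```python
def object_depth(f_mm, d_image_mm):
    """Thin-lens relation 1/f = 1/d_obj + 1/d_img, solved for object depth:
    the focal length of sharpest focus plus the lens-to-sensor distance
    determine how far away the object is."""
    return 1.0 / (1.0 / f_mm - 1.0 / d_image_mm)

# Illustrative numbers: sensor 50 mm behind the lens, sharpest at f = 49 mm.
depth_mm = object_depth(49.0, 50.0)
```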
Affiliation(s)
- Germain Haessig
  - Sorbonne Universite, INSERM, CNRS, Institut de la Vision, 17 rue Moreau, 75012, Paris, France
- Xavier Berthelon
  - Sorbonne Universite, INSERM, CNRS, Institut de la Vision, 17 rue Moreau, 75012, Paris, France
- Sio-Hoi Ieng
  - Sorbonne Universite, INSERM, CNRS, Institut de la Vision, 17 rue Moreau, 75012, Paris, France
- Ryad Benosman
  - Sorbonne Universite, INSERM, CNRS, Institut de la Vision, 17 rue Moreau, 75012, Paris, France
  - University of Pittsburgh Medical Center, Biomedical Science Tower 3, Fifth Avenue, Pittsburgh, PA, USA
  - Carnegie Mellon University, Robotics Institute, 5000 Forbes Avenue, Pittsburgh, PA, 15213-3890, USA
15
Frenkel C, Lefebvre M, Legat JD, Bol D. A 0.086-mm² 12.7-pJ/SOP 64k-Synapse 256-Neuron Online-Learning Digital Spiking Neuromorphic Processor in 28-nm CMOS. IEEE Trans Biomed Circuits Syst 2019; 13:145-158. PMID: 30418919; DOI: 10.1109/tbcas.2018.2880425.
Abstract
Shifting computing architectures from von Neumann to event-based spiking neural networks (SNNs) uncovers new opportunities for low-power processing of sensory data in applications such as vision or sensorimotor control. Exploring roads toward cognitive SNNs requires the design of compact, low-power and versatile experimentation platforms with the key requirement of online learning in order to adapt and learn new features in uncontrolled environments. However, embedding online learning in SNNs is currently hindered by high incurred complexity and area overheads. In this paper, we present ODIN, a 0.086-mm² 64k-synapse 256-neuron online-learning digital spiking neuromorphic processor in 28-nm FDSOI CMOS achieving a minimum energy per synaptic operation (SOP) of 12.7 pJ. It leverages an efficient implementation of the spike-driven synaptic plasticity (SDSP) learning rule for high-density embedded online learning with only 0.68 μm² per 4-bit synapse. Neurons can be independently configured as a standard leaky integrate-and-fire model or as a custom phenomenological model that emulates the 20 Izhikevich behaviors found in biological spiking neurons. Using a single presentation of 6k 16 × 16 MNIST training images to a single-layer fully-connected 10-neuron network with on-chip SDSP-based learning, ODIN achieves a classification accuracy of 84.5%, while consuming only 15 nJ/inference at 0.55 V using rank order coding. ODIN thus enables further developments toward cognitive neuromorphic devices for low-power, adaptive and low-cost processing.
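The SDSP rule evaluates a weight update at each presynaptic spike from the postsynaptic membrane potential and a calcium trace. A simplified single-window sketch with 4-bit weights (thresholds are illustrative; the full rule uses separate potentiation and depression calcium windows):

```python
def sdsp_update(w, v_post, ca, *, theta_v=0.8, ca_lo=0.2, ca_hi=1.0,
                dw=1, w_min=0, w_max=15):
    """Spike-driven synaptic plasticity, evaluated at a presynaptic spike:
    potentiate when the postsynaptic membrane is high and calcium is in
    range, depress when the membrane is low. Weights are 4-bit (0..15),
    matching ODIN's synapse resolution; thresholds are illustrative."""
    if ca_lo <= ca <= ca_hi:
        if v_post > theta_v:
            w = min(w + dw, w_max)   # potentiation
        else:
            w = max(w - dw, w_min)   # depression
    return w                          # outside the calcium window: no change

w = 7
w = sdsp_update(w, v_post=0.9, ca=0.5)   # potentiation: 7 -> 8
w = sdsp_update(w, v_post=0.1, ca=0.5)   # depression:   8 -> 7
```

Because the update depends only on locally available quantities at spike time, it maps naturally onto a small per-synapse digital circuit.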
16.
Kreiser R, Aathmani D, Qiao N, Indiveri G, Sandamirskaya Y. Organizing Sequential Memory in a Neuromorphic Device Using Dynamic Neural Fields. Front Neurosci 2018; 12:717. [PMID: 30524218] [PMCID: PMC6262404] [DOI: 10.3389/fnins.2018.00717]
Abstract
Neuromorphic Very Large Scale Integration (VLSI) devices emulate the activation dynamics of biological neuronal networks using either mixed-signal analog/digital or purely digital electronic circuits. Using analog circuits in silicon to physically emulate the functionality of biological neurons and synapses enables faithful modeling of neural and synaptic dynamics at ultra-low power consumption in real time, and thus may serve as a computational substrate for a new generation of efficient neural controllers for artificial intelligent systems. Although one of the main advantages of neural networks is their ability to perform on-line learning, only a small number of neuromorphic hardware devices implement this feature on-chip. In this work, we use a reconfigurable on-line learning spiking (ROLLS) neuromorphic processor chip to build a neuronal architecture for sequence learning. The proposed neuronal architecture uses the attractor properties of winner-takes-all (WTA) dynamics to cope with mismatch and noise in the ROLLS analog computing elements, and it uses its on-chip plasticity features to store sequences of states. We demonstrate, in a proof-of-concept feasibility study, how this architecture can store, replay, and update sequences of states induced by external inputs. Controlled by the attractor dynamics and an explicit destabilizing signal, the items in a sequence can last for varying amounts of time, and thus reliable sequence learning and replay can be robustly implemented in a real sensorimotor system.
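The winner-takes-all attractor dynamics used here can be sketched with a rate-based model: nonnegative linear-threshold units with self-excitation and global inhibition. The parameters below are illustrative, not the ROLLS chip's, and are chosen so the unit with the largest input wins:

```python
def wta(inputs, steps=400, dt=0.1, w_exc=0.5, w_inh=1.0):
    """Rate-based soft winner-takes-all network (Euler integration).

    Each unit receives its external input, self-excitation (w_exc),
    and global inhibition from all other units (w_inh). Rates are
    clipped at zero (linear-threshold units).
    """
    rates = [0.0] * len(inputs)
    for _ in range(steps):
        total = sum(rates)
        rates = [max(0.0, r + dt * (-r + inputs[i] + w_exc * r
                                    - w_inh * (total - r)))
                 for i, r in enumerate(rates)]
    return rates
```

With inputs [1.0, 0.5, 0.2], the first unit wins, settling at input/(1 - w_exc) = 2.0, while the losers are silenced by the inhibition.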
Affiliation(s)
- Raphaela Kreiser
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Dora Aathmani
- The School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, United States
- Ning Qiao
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Giacomo Indiveri
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Yulia Sandamirskaya
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
17.
Thakur CS, Molin JL, Cauwenberghs G, Indiveri G, Kumar K, Qiao N, Schemmel J, Wang R, Chicca E, Olson Hasler J, Seo JS, Yu S, Cao Y, van Schaik A, Etienne-Cummings R. Large-Scale Neuromorphic Spiking Array Processors: A Quest to Mimic the Brain. Front Neurosci 2018; 12:891. [PMID: 30559644] [PMCID: PMC6287454] [DOI: 10.3389/fnins.2018.00891]
Abstract
Neuromorphic engineering (NE) encompasses a diverse range of approaches to information processing that are inspired by neurobiological systems, and this feature distinguishes neuromorphic systems from conventional computing systems. The brain has evolved over billions of years to solve difficult engineering problems by using efficient, parallel, low-power computation. The goal of NE is to design systems capable of brain-like computation. Numerous large-scale neuromorphic projects have emerged recently. This interdisciplinary field was listed among the top 10 technology breakthroughs of 2014 by the MIT Technology Review and among the top 10 emerging technologies of 2015 by the World Economic Forum. NE has a twofold goal: first, a scientific goal to understand the computational properties of biological neural systems by using models implemented in integrated circuits (ICs); second, an engineering goal to exploit the known properties of biological systems to design and implement efficient devices for engineering applications. Building hardware neural emulators can be extremely useful for simulating large-scale neural models to explain how intelligent behavior arises in the brain. The principal advantages of neuromorphic emulators are that they are highly energy efficient, parallel and distributed, and require a small silicon area. Thus, compared to conventional CPUs, these neuromorphic emulators are beneficial in many engineering applications, such as porting deep learning algorithms for various recognition tasks. In this review article, we describe some of the most significant neuromorphic spiking emulators, compare the different architectures and approaches used by them, illustrate their advantages and drawbacks, and highlight the capabilities that each can deliver to neural modelers. This article focuses on the discussion of large-scale emulators and is a continuation of a previous review of various neural and synapse circuits (Indiveri et al., 2011). We also explore applications where these emulators have been used and discuss some of their promising future applications.
Affiliation(s)
- Chetan Singh Thakur
- Department of Electronic Systems Engineering, Indian Institute of Science, Bangalore, India
- Jamal Lottier Molin
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, United States
- Gert Cauwenberghs
- Department of Bioengineering and Institute for Neural Computation, University of California, San Diego, La Jolla, CA, United States
- Giacomo Indiveri
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Kundan Kumar
- Department of Electronic Systems Engineering, Indian Institute of Science, Bangalore, India
- Ning Qiao
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Johannes Schemmel
- Kirchhoff Institute for Physics, University of Heidelberg, Heidelberg, Germany
- Runchun Wang
- The MARCS Institute, Western Sydney University, Kingswood, NSW, Australia
- Elisabetta Chicca
- Cognitive Interaction Technology – Center of Excellence, Bielefeld University, Bielefeld, Germany
- Jennifer Olson Hasler
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, United States
- Jae-sun Seo
- School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, AZ, United States
- Shimeng Yu
- School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, AZ, United States
- Yu Cao
- School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, AZ, United States
- André van Schaik
- The MARCS Institute, Western Sydney University, Kingswood, NSW, Australia
- Ralph Etienne-Cummings
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, United States
18.
Abstract
The best way to develop a Turing test passing AI is to follow the human model: an embodied agent that functions over a wide range of domains, is a human cognitive model, follows human neural functioning and learns. These properties will endow the agent with the deep semantics required to pass the test. An embodied agent functioning over a wide range of domains is needed to be exposed to and learn the semantics of those domains. Following human cognitive and neural functioning simplifies the search for sufficiently sophisticated mechanisms by reusing mechanisms that are already known to be sufficient. This is a difficult task, but initial steps have been taken, including the development of CABots, neural agents embodied in virtual environments. Several different CABots run in response to natural language commands, performing a cognitive mapping task. These initial agents are quite some distance from passing the test, and to develop an agent that passes will require broad collaboration. Several next steps are proposed, and these could be integrated using, for instance, the Platforms from the Human Brain Project as a foundation for this collaboration.
Affiliation(s)
- Christian Huyck
- Department of Computer Science, Middlesex University, London, United Kingdom
19.
Detorakis G, Sheik S, Augustine C, Paul S, Pedroni BU, Dutt N, Krichmar J, Cauwenberghs G, Neftci E. Neural and Synaptic Array Transceiver: A Brain-Inspired Computing Framework for Embedded Learning. Front Neurosci 2018; 12:583. [PMID: 30210274] [PMCID: PMC6123384] [DOI: 10.3389/fnins.2018.00583]
Abstract
Embedded, continual learning for autonomous and adaptive behavior is a key application of neuromorphic hardware. However, neuromorphic implementations of embedded learning at large scales that are both flexible and efficient have been hindered by a lack of a suitable algorithmic framework. As a result, most neuromorphic hardware is trained off-line on large clusters of dedicated processors or GPUs and transferred post hoc to the device. We address this by introducing the neural and synaptic array transceiver (NSAT), a neuromorphic computational framework facilitating flexible and efficient embedded learning by matching algorithmic requirements and neural and synaptic dynamics. NSAT supports event-driven supervised, unsupervised and reinforcement learning algorithms, including deep learning. We demonstrate NSAT in a wide range of tasks, including the simulation of the Mihalas-Niebur neuron, dynamic neural fields, event-driven random back-propagation for event-based deep learning, event-based contrastive divergence for unsupervised learning, and voltage-based learning rules for sequence learning. We anticipate that this contribution will establish the foundation for a new generation of devices enabling adaptive mobile systems, wearable devices, and robots with data-driven autonomy.
Affiliation(s)
- Georgios Detorakis
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, United States
- Sadique Sheik
- Biocircuits Institute, University of California, San Diego, La Jolla, CA, United States
- Charles Augustine
- Intel Corporation-Circuit Research Lab, Hillsboro, OR, United States
- Somnath Paul
- Intel Corporation-Circuit Research Lab, Hillsboro, OR, United States
- Bruno U. Pedroni
- Department of Bioengineering and Institute for Neural Computation, University of California, San Diego, La Jolla, CA, United States
- Nikil Dutt
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, United States
- Department of Computer Science, University of California, Irvine, Irvine, CA, United States
- Jeffrey Krichmar
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, United States
- Department of Computer Science, University of California, Irvine, Irvine, CA, United States
- Gert Cauwenberghs
- Department of Bioengineering and Institute for Neural Computation, University of California, San Diego, La Jolla, CA, United States
- Emre Neftci
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, United States
- Department of Computer Science, University of California, Irvine, Irvine, CA, United States
20.
Neftci EO. Data and Power Efficient Intelligence with Neuromorphic Learning Machines. iScience 2018; 5:52-68. [PMID: 30240646] [PMCID: PMC6123858] [DOI: 10.1016/j.isci.2018.06.010]
Abstract
The success of deep networks and recent industry involvement in brain-inspired computing is igniting a widespread interest in neuromorphic hardware that emulates the biological processes of the brain on an electronic substrate. This review explores interdisciplinary approaches anchored in machine learning theory that enable the applicability of neuromorphic technologies to real-world, human-centric tasks. We find that (1) recent work in binary deep networks and approximate gradient descent learning are strikingly compatible with a neuromorphic substrate; (2) where real-time adaptability and autonomy are necessary, neuromorphic technologies can achieve significant advantages over mainstream ones; and (3) challenges in memory technologies, compounded by a tradition of bottom-up approaches in the field, block the road to major breakthroughs. We suggest that a neuromorphic learning framework, tuned specifically for the spatial and temporal constraints of the neuromorphic substrate, will help guide hardware-algorithm co-design and the deployment of neuromorphic hardware for proactive learning of real-world data.
Affiliation(s)
- Emre O Neftci
- Department of Cognitive Sciences, UC Irvine, Irvine, CA 92697-5100, USA; Department of Computer Science, UC Irvine, Irvine, CA 92697-5100, USA
21.

22.
Rutishauser U, Slotine JJ, Douglas RJ. Solving Constraint-Satisfaction Problems with Distributed Neocortical-Like Neuronal Networks. Neural Comput 2018; 30:1359-1393. [PMID: 29566357] [PMCID: PMC5930080] [DOI: 10.1162/neco_a_01074]
Abstract
Finding actions that satisfy the constraints imposed by both external inputs and internal representations is central to decision making. We demonstrate that some important classes of constraint satisfaction problems (CSPs) can be solved by networks composed of homogeneous cooperative-competitive modules that have connectivity similar to motifs observed in the superficial layers of neocortex. The winner-take-all modules are sparsely coupled by programming neurons that embed the constraints onto the otherwise homogeneous modular computational substrate. We show rules that embed any instance of the CSPs planar four-color graph coloring, maximum independent set, and Sudoku on this substrate, and provide mathematical proofs that guarantee these graph coloring problems will converge to a solution. The network is composed of nonsaturating linear threshold neurons. Their lack of right saturation allows the overall network to explore the problem space driven through the unstable dynamics generated by recurrent excitation. The direction of exploration is steered by the constraint neurons. While many problems can be solved using only linear inhibitory constraints, network performance on hard problems benefits significantly when these negative constraints are implemented by nonlinear multiplicative inhibition. Overall, our results demonstrate the importance of instability rather than stability in network computation and offer insight into the computational role of dual inhibitory mechanisms in neural circuits.
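The constraint-driven exploration described here can be illustrated, in spirit only, by min-conflicts local search on graph coloring: each "constraint" penalizes equal colors on adjacent nodes, and conflicted nodes are repeatedly recolored. This is stochastic local search, not the paper's neuronal dynamics or its convergence proofs:

```python
import random

def color_graph(adj, n_colors, steps=1000, seed=0):
    """Min-conflicts graph coloring (illustrative sketch).

    adj maps each node to its list of neighbours. Returns a proper
    coloring dict, or None if none was found within `steps` moves.
    """
    rng = random.Random(seed)
    color = {v: rng.randrange(n_colors) for v in adj}
    for _ in range(steps):
        conflicted = [v for v in adj
                      if any(color[u] == color[v] for u in adj[v])]
        if not conflicted:
            return color                      # all constraints satisfied
        v = rng.choice(conflicted)
        # choose the colour with the fewest conflicting neighbours
        counts = [sum(color[u] == c for u in adj[v])
                  for c in range(n_colors)]
        best = min(counts)
        color[v] = rng.choice([c for c in range(n_colors)
                               if counts[c] == best])
    return None
```

On a 4-cycle with two colors, the search settles into one of the two alternating colorings.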
Affiliation(s)
- Ueli Rutishauser
- Computation and Neural Systems, Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA 91125, U.S.A., and Cedars-Sinai Medical Center, Departments of Neurosurgery, Neurology and Biomedical Sciences, Los Angeles, CA 90048, U.S.A.
- Jean-Jacques Slotine
- Nonlinear Systems Laboratory, Department of Mechanical Engineering and Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, U.S.A.
- Rodney J Douglas
- Institute of Neuroinformatics, University and ETH Zurich, Zurich 8057, Switzerland
23.
Chen Y, Wang X, Tang B. Structural regularity exploration in multidimensional networks via Bayesian inference. Neural Comput Appl 2018. [DOI: 10.1007/s00521-017-3041-5]
24.
Rasouli M, Chen Y, Basu A, Kukreja SL, Thakor NV. An Extreme Learning Machine-Based Neuromorphic Tactile Sensing System for Texture Recognition. IEEE Transactions on Biomedical Circuits and Systems 2018; 12:313-325. [PMID: 29570059] [DOI: 10.1109/tbcas.2018.2805721]
Abstract
Despite significant advances in computational algorithms and development of tactile sensors, artificial tactile sensing is strikingly less efficient and capable than human tactile perception. Inspired by the efficiency of biological systems, we aim to develop a neuromorphic system for tactile pattern recognition. We particularly target texture recognition, as it is one of the most necessary and challenging tasks for artificial sensory systems. Our system consists of a piezoresistive fabric material as the sensor to emulate skin, an interface that produces spike patterns to mimic neural signals from mechanoreceptors, and an extreme learning machine (ELM) chip to analyze spiking activity. Benefiting from intrinsic advantages of biologically inspired event-driven systems and massively parallel and energy-efficient processing capabilities of the ELM chip, the proposed architecture offers a fast and energy-efficient alternative for processing tactile information. Moreover, it provides the opportunity for the development of low-cost tactile modules for large-area applications by integration of sensors and processing circuits. We demonstrate the recognition capability of our system in a texture discrimination task, where it achieves a classification accuracy of 92% for categorization of ten graded textures. Our results confirm that there exists a tradeoff between response time and classification accuracy (and information transfer rate). A faster decision can be achieved at early time steps or by using a shorter time window. This, however, results in deterioration of the classification accuracy and information transfer rate. We further observe that there exists a tradeoff between the classification accuracy and the input spike rate (and thus energy consumption). Our work substantiates the importance of development of efficient sparse codes for encoding sensory data to improve the energy efficiency. These results have significance for a wide range of wearable, robotic, prosthetic, and industrial applications.
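The core of an extreme learning machine is a fixed random hidden layer with a readout solved in closed form. A minimal software sketch (the paper's ELM runs on a custom chip over spike data; the toy task below is an assumption for illustration):

```python
import math, random

def elm_fit(X, y, n_hidden=20, seed=0, ridge=1e-6):
    """Extreme learning machine sketch: random fixed tanh features,
    readout weights from ridge-regularized normal equations solved
    by Gaussian elimination. Returns a prediction function."""
    rng = random.Random(seed)
    d = len(X[0])
    Wh = [[rng.uniform(-2, 2) for _ in range(d)] for _ in range(n_hidden)]
    bh = [rng.uniform(-1, 1) for _ in range(n_hidden)]
    feats = lambda x: [math.tanh(sum(Wh[j][i] * x[i] for i in range(d)) + bh[j])
                       for j in range(n_hidden)]
    H = [feats(x) for x in X]
    n = n_hidden
    # A = H^T H + ridge*I,  b = H^T y
    A = [[sum(H[r][i] * H[r][j] for r in range(len(H)))
          + (ridge if i == j else 0.0) for j in range(n)] for i in range(n)]
    b = [sum(H[r][i] * y[r] for r in range(len(H))) for i in range(n)]
    for col in range(n):                      # elimination with partial pivoting
        p = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[p] = A[p], A[col]
        b[col], b[p] = b[p], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    w = [0.0] * n
    for i in range(n - 1, -1, -1):
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, n))) / A[i][i]
    return lambda x: sum(w[j] * f for j, f in enumerate(feats(x)))
```

Because only the linear readout is trained, fitting is a single solve; with enough random features even XOR-type labelings are separated.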
25.
van Gerven M. Computational Foundations of Natural Intelligence. Front Comput Neurosci 2017; 11:112. [PMID: 29375355] [PMCID: PMC5770642] [DOI: 10.3389/fncom.2017.00112]
Abstract
New developments in AI and neuroscience are revitalizing the quest to understand natural intelligence, offering insight into how to equip machines with human-like capabilities. This paper reviews some of the computational principles relevant for understanding natural intelligence and, ultimately, achieving strong AI. After reviewing basic principles, a variety of computational modeling approaches are discussed. Subsequently, I concentrate on the use of artificial neural networks as a framework for modeling cognitive processes. This paper ends by outlining some of the challenges that remain to fulfill the promise of machines that show human-like intelligence.
Affiliation(s)
- Marcel van Gerven
- Computational Cognitive Neuroscience Lab, Department of Artificial Intelligence, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
26.
Sadeh S, Silver RA, Mrsic-Flogel TD, Muir DR. Assessing the Role of Inhibition in Stabilizing Neocortical Networks Requires Large-Scale Perturbation of the Inhibitory Population. J Neurosci 2017; 37:12050-12067. [PMID: 29074575] [PMCID: PMC5719979] [DOI: 10.1523/jneurosci.0963-17.2017]
Abstract
Neurons within cortical microcircuits are interconnected with recurrent excitatory synaptic connections that are thought to amplify signals (Douglas and Martin, 2007), form selective subnetworks (Ko et al., 2011), and aid feature discrimination. Strong inhibition (Haider et al., 2013) counterbalances excitation, enabling sensory features to be sharpened and represented by sparse codes (Willmore et al., 2011). This balance between excitation and inhibition makes it difficult to assess the strength, or gain, of recurrent excitatory connections within cortical networks, which is key to understanding their operational regime and the computations that they perform. Networks that combine an unstable high-gain excitatory population with stabilizing inhibitory feedback are known as inhibition-stabilized networks (ISNs) (Tsodyks et al., 1997). Theoretical studies using reduced network models predict that ISNs produce paradoxical responses to perturbation, but experimental perturbations failed to find evidence for ISNs in cortex (Atallah et al., 2012). Here, we reexamined this question by investigating how cortical network models consisting of many neurons behave after perturbations and found that results obtained from reduced network models fail to predict responses to perturbations in more realistic networks. Our models predict that a large proportion of the inhibitory network must be perturbed to reliably detect an ISN regime in cortex. We propose that wide-field optogenetic suppression of inhibition under promoters targeting a large fraction of inhibitory neurons may provide a perturbation of sufficient strength to reveal the operating regime of cortex. Our results suggest that detailed computational models of optogenetic perturbations are necessary to interpret the results of experimental paradigms. SIGNIFICANCE STATEMENT: Many useful computational mechanisms proposed for cortex require local excitatory recurrence to be very strong, such that local inhibitory feedback is necessary to avoid epileptiform runaway activity (an "inhibition-stabilized network" or "ISN" regime). However, recent experimental results suggest that this regime may not exist in cortex. We simulated activity perturbations in cortical networks of increasing realism and found that, to detect ISN-like properties in cortex, large proportions of the inhibitory population must be perturbed. Current experimental methods for inhibitory perturbation are unlikely to satisfy this requirement, implying that existing experimental observations are inconclusive about the computational regime of cortex. Our results suggest that new experimental designs targeting a majority of inhibitory neurons may be able to resolve this question.
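The paradoxical ISN response mentioned here is easy to reproduce in a reduced two-population rate model: with strong recurrent excitation (w_EE > 1), adding excitatory drive to the inhibitory population *lowers* its steady-state rate. The parameters below are illustrative values chosen only to place the model in the ISN regime, not fits from the paper:

```python
def isn_steady_state(i_E, i_I, steps=2000, dt=0.05):
    """Steady state of a 2-population linear-threshold rate model.

    w_EE = 2 > 1 makes the excitatory subnetwork unstable on its own;
    inhibitory feedback stabilizes it (the ISN regime). Returns (E, I).
    """
    w_EE, w_EI, w_IE, w_II = 2.0, 2.5, 2.0, 1.0
    E = I = 0.0
    for _ in range(steps):                      # Euler integration to fixed point
        E += dt * (-E + max(0.0, w_EE * E - w_EI * I + i_E))
        I += dt * (-I + max(0.0, w_IE * E - w_II * I + i_I))
    return E, I
```

Solving the fixed-point equations by hand gives (E, I) = (0.25, 0.5) for inputs (1.0, 0.5); raising the inhibitory input to 0.6 moves the fixed point to (1/6, 7/15), so the inhibitory rate *drops* despite receiving more drive.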
Affiliation(s)
- Sadra Sadeh
- Department of Neuroscience, Physiology, and Pharmacology, University College London, WC1E 6BT London, United Kingdom
- R Angus Silver
- Department of Neuroscience, Physiology, and Pharmacology, University College London, WC1E 6BT London, United Kingdom
27.
Muir DR, Molina-Luna P, Roth MM, Helmchen F, Kampa BM. Specific excitatory connectivity for feature integration in mouse primary visual cortex. PLoS Comput Biol 2017; 13:e1005888. [PMID: 29240769] [PMCID: PMC5746254] [DOI: 10.1371/journal.pcbi.1005888]
Abstract
Local excitatory connections in mouse primary visual cortex (V1) are stronger and more prevalent between neurons that share similar functional response features. However, the details of how functional rules for local connectivity shape neuronal responses in V1 remain unknown. We hypothesised that complex responses to visual stimuli may arise as a consequence of rules for selective excitatory connectivity within the local network in the superficial layers of mouse V1. In mouse V1 many neurons respond to overlapping grating stimuli (plaid stimuli) with highly selective and facilitatory responses, which are not simply predicted by responses to single gratings presented alone. This complexity is surprising, since excitatory neurons in V1 are considered to be mainly tuned to single preferred orientations. Here we examined the consequences for visual processing of two alternative connectivity schemes: in the first case, local connections are aligned with visual properties inherited from feedforward input (a 'like-to-like' scheme specifically connecting neurons that share similar preferred orientations); in the second case, local connections group neurons into excitatory subnetworks that combine and amplify multiple feedforward visual properties (a 'feature binding' scheme). By comparing predictions from large scale computational models with in vivo recordings of visual representations in mouse V1, we found that responses to plaid stimuli were best explained by assuming feature binding connectivity. Unlike under the like-to-like scheme, selective amplification within feature-binding excitatory subnetworks replicated experimentally observed facilitatory responses to plaid stimuli; explained selective plaid responses not predicted by grating selectivity; and was consistent with broad anatomical selectivity observed in mouse V1. 
Our results show that visual feature binding can occur through local recurrent mechanisms without requiring feedforward convergence, and that such a mechanism is consistent with visual responses and cortical anatomy in mouse V1.
Affiliation(s)
- Dylan R. Muir
- Biozentrum, University of Basel, Basel, Switzerland
- Laboratory of Neural Circuit Dynamics, Brain Research Institute, University of Zurich, Zurich, Switzerland
- Patricia Molina-Luna
- Laboratory of Neural Circuit Dynamics, Brain Research Institute, University of Zurich, Zurich, Switzerland
- Morgane M. Roth
- Biozentrum, University of Basel, Basel, Switzerland
- Laboratory of Neural Circuit Dynamics, Brain Research Institute, University of Zurich, Zurich, Switzerland
- Fritjof Helmchen
- Laboratory of Neural Circuit Dynamics, Brain Research Institute, University of Zurich, Zurich, Switzerland
- Björn M. Kampa
- Laboratory of Neural Circuit Dynamics, Brain Research Institute, University of Zurich, Zurich, Switzerland
- Department of Neurophysiology, Institute of Biology 2, RWTH Aachen University, Aachen, Germany
- JARA-BRAIN, Aachen, Germany
28.
Abstract
Recurrent neural network architectures can have useful computational properties, with complex temporal dynamics and input-sensitive attractor states. However, evaluation of recurrent dynamic architectures requires solving systems of differential equations, and the number of evaluations required to determine their response to a given input can vary with the input or can be indeterminate altogether in the case of oscillations or instability. In feedforward networks, by contrast, only a single pass through the network is needed to determine the response to a given input. Modern machine learning systems are designed to operate efficiently on feedforward architectures. We hypothesized that two-layer feedforward architectures with simple, deterministic dynamics could approximate the responses of single-layer recurrent network architectures. By identifying the fixed-point responses of a given recurrent network, we trained two-layer networks to directly approximate the fixed-point response to a given input. These feedforward networks then embodied useful computations, including competitive interactions, information transformations, and noise rejection. Our approach was able to find useful approximations to recurrent networks, which can then be evaluated in linear and deterministic time complexity.
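The fixed-point responses described above can be sketched in a toy rate model: the recurrent network's output is found by iterating its dynamics, and that input-to-fixed-point map is exactly what a feedforward network would be trained to produce in a single pass. This is an illustrative model, not the paper's trained networks:

```python
def recurrent_fixed_point(W, inp, steps=500, dt=0.1):
    """Iterate a rectified linear recurrent rate network toward its
    fixed point: dr/dt = -r + relu(W r + inp). Returns the rate vector."""
    n = len(inp)
    r = [0.0] * n
    for _ in range(steps):
        drive = [max(0.0, sum(W[i][j] * r[j] for j in range(n)) + inp[i])
                 for i in range(n)]
        r = [r[i] + dt * (drive[i] - r[i]) for i in range(n)]
    return r
```

With mutual inhibition between two units, the unit with the larger input suppresses the other, so the fixed point embodies a competitive interaction of the kind the abstract mentions.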
Affiliation(s)
- Dylan R Muir
- Biozentrum, University of Basel, Basel 4056, Switzerland
29.
Park J, Yu T, Joshi S, Maier C, Cauwenberghs G. Hierarchical Address Event Routing for Reconfigurable Large-Scale Neuromorphic Systems. IEEE Transactions on Neural Networks and Learning Systems 2017; 28:2408-2422. [PMID: 27483491] [DOI: 10.1109/tnnls.2016.2572164]
Abstract
We present a hierarchical address-event routing (HiAER) architecture for scalable communication of neural and synaptic spike events between neuromorphic processors, implemented with five Xilinx Spartan-6 field-programmable gate arrays and four custom analog neuromorphic integrated circuits serving 262k neurons and 262M synapses. The architecture extends the single-bus address-event representation protocol to a hierarchy of multiple nested buses, routing events across increasing scales of spatial distance. The HiAER protocol provides individually programmable axonal delay in addition to strength for each synapse, lending itself toward biologically plausible neural network architectures, and scales across a range of hierarchies suitable for multichip and multiboard systems in reconfigurable large-scale neuromorphic systems. We show approximately linear scaling of net global synaptic event throughput with number of routing nodes in the network, at 3.6×10⁷ synaptic events per second per 16k-neuron node in the hierarchy.
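The nested-bus idea can be sketched as routing on a tree of buses: an event climbs from its source leaf to the lowest common ancestor, then descends to the destination. The tuple addressing scheme below is a hypothetical stand-in for HiAER's actual address format:

```python
def route(src, dst):
    """Path of an address event through a routing tree.

    Nodes are tuples of path components from the root; the event climbs
    until the current node is a prefix of the destination address, then
    descends. Returns the list of intermediate and final nodes visited.
    """
    node, up = src, []
    while node != dst[:len(node)]:        # climb toward the common ancestor
        node = node[:-1]
        up.append(node)
    down = [dst[:i] for i in range(len(node) + 1, len(dst) + 1)]
    return up + down
```

Events between nearby leaves stay on a local bus, while distant pairs traverse higher levels, which is the property that lets throughput scale with the number of routing nodes.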
Affiliation(s)
- Jongkil Park
- Department of Electrical and Computer Engineering, Jacobs School of Engineering, Institute of Neural Computation, University of California at San Diego, La Jolla, CA, USA
- Siddharth Joshi
- Department of Electrical and Computer Engineering, Jacobs School of Engineering, Institute of Neural Computation, University of California at San Diego, La Jolla, CA, USA
- Christoph Maier
- Institute of Neural Computation, University of California at San Diego, La Jolla, CA, USA
- Gert Cauwenberghs
- Department of Bioengineering, Jacobs School of Engineering, Institute of Neural Computation, University of California at San Diego, La Jolla, CA, USA
30.
Serruya MD. Connecting the Brain to Itself through an Emulation. Front Neurosci 2017; 11:373. [PMID: 28713235] [PMCID: PMC5492113] [DOI: 10.3389/fnins.2017.00373]
Abstract
Pilot clinical trials of human patients implanted with devices that can chronically record and stimulate ensembles of hundreds to thousands of individual neurons offer the possibility of expanding the substrate of cognition. Parallel trains of firing rate activity can be delivered in real-time to an array of intermediate external modules that in turn can trigger parallel trains of stimulation back into the brain. These modules may be built in software, VLSI firmware, or biological tissue as in vitro culture preparations or in vivo ectopic construct organoids. Arrays of modules can be constructed as early stage whole brain emulators, following canonical intra- and inter-regional circuits. By using machine learning algorithms and classic tasks known to activate quasi-orthogonal functional connectivity patterns, bedside testing can rapidly identify ensemble tuning properties and in turn cycle through a sequence of external module architectures to explore which can causatively alter perception and behavior. Whole brain emulation both (1) serves to augment human neural function, compensating for disease and injury as an auxiliary parallel system, and (2) has its independent operation bootstrapped by a human-in-the-loop to identify optimal micro- and macro-architectures, update synaptic weights, and entrain behaviors. In this manner, closed-loop brain-computer interface pilot clinical trials can advance strong artificial intelligence development and forge new therapies to restore independence in children and adults with neurological conditions.
Affiliation(s)
- Mijail D Serruya
- Neurology, Thomas Jefferson University, Philadelphia, PA, United States
|
31
|
Neftci EO, Augustine C, Paul S, Detorakis G. Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines. Front Neurosci 2017; 11:324. PMID: 28680387; PMCID: PMC5478701; DOI: 10.3389/fnins.2017.00324.
Abstract
An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient-descent-based Back Propagation (BP) rule, often relies on the immediate availability of network-wide information stored in high-precision memory during learning, and on precise operations that are difficult to realize in neuromorphic hardware. Remarkably, recent work showed that exact backpropagated gradients are not essential for learning deep representations. Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses error-modulated synaptic plasticity for learning deep representations. Using a two-compartment Leaky Integrate & Fire (I&F) neuron, the rule requires only one addition and two comparisons for each synaptic weight, making it very suitable for implementation in digital or mixed-signal neuromorphic hardware. Our results show that using eRBP, deep representations are rapidly learned, achieving classification accuracies on permutation-invariant datasets comparable to those obtained in artificial neural network simulations on GPUs, while being robust to neural and synaptic state quantizations during learning.
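The random-feedback idea that eRBP builds on can be sketched in a few lines of rate-based NumPy. This is an illustrative reduction under assumptions of our own (a tiny ReLU network, arbitrary sizes and learning rate), not the authors' spiking, two-compartment implementation: the output error is routed backwards through a fixed random matrix B instead of the transpose of the forward weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer rate network: x -> h -> y (sizes are arbitrary)
n_in, n_hid, n_out = 4, 8, 3
W1 = rng.normal(0.0, 0.5, (n_hid, n_in))
W2 = rng.normal(0.0, 0.5, (n_out, n_hid))
B = rng.normal(0.0, 0.5, (n_hid, n_out))  # fixed random feedback, replaces W2.T

def train_step(x, target, lr=0.02):
    global W1, W2
    h = np.maximum(0.0, W1 @ x)   # ReLU hidden activity
    y = W2 @ h                    # linear readout
    e = y - target                # output error
    dh = (B @ e) * (h > 0)        # error routed through the fixed random B
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(dh, x)
    return float(0.5 * e @ e)

x = rng.normal(size=n_in)
target = np.array([1.0, 0.0, -1.0])
losses = [train_step(x, target) for _ in range(300)]
print(f"{losses[0]:.3f} -> {losses[-1]:.3f}")  # loss decreases despite random feedback
```

Even though B carries no information about W2, the forward weights tend to align with the feedback pathway during training, which is why learning still converges.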
Affiliation(s)
- Emre O. Neftci
- Neuromorphic Machine Intelligence Laboratory, Department of Cognitive Sciences, University of California Irvine, Irvine, CA, United States
- Somnath Paul
- Circuit Research Lab, Intel Corporation, Hillsboro, OR, United States
- Georgios Detorakis
- Neuromorphic Machine Intelligence Laboratory, Department of Cognitive Sciences, University of California Irvine, Irvine, CA, United States
|
32
|
You H, Wang DH. Neuromorphic Implementation of Attractor Dynamics in a Two-Variable Winner-Take-All Circuit with NMDARs: A Simulation Study. Front Neurosci 2017; 11:40. PMID: 28223913; PMCID: PMC5293789; DOI: 10.3389/fnins.2017.00040.
Abstract
Neural networks configured with winner-take-all (WTA) competition and N-methyl-D-aspartate receptor (NMDAR)-mediated synaptic dynamics are endowed with various dynamic characteristics of attractors underlying many cognitive functions. This paper presents a novel method for neuromorphic implementation of a two-variable WTA circuit with NMDARs aimed at implementing decision-making, working memory and hysteresis in visual perception. The method proposed is a dynamical-systems approach to circuit synthesis based on a biophysically plausible WTA model. Notably, the slow and non-linear temporal dynamics of NMDAR-mediated synapses were generated. Circuit simulations in Cadence reproduced the ramping neural activities observed in electrophysiological recordings during decision-making experiments, the sustained activities observed in the prefrontal cortex during working memory, and classical hysteresis behavior during visual discrimination tasks. Furthermore, theoretical analysis of the dynamical-systems approach illuminated the underlying mechanisms of decision-making, memory capacity and hysteresis loops. The consistency between the circuit simulations and the theoretical analysis demonstrated that the WTA circuit with NMDARs was able to capture the attractor dynamics underlying these cognitive functions. Their physical implementations as elementary modules are promising for assembly into integrated neuromorphic cognitive systems.
Affiliation(s)
- Hongzhi You
- Key Laboratory for NeuroInformation of Ministry of Education, Center for Information in BioMedicine, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Da-Hui Wang
- School of Systems Science and National Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
|
33
|
Osswald M, Ieng SH, Benosman R, Indiveri G. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems. Sci Rep 2017; 7:40703. PMID: 28079187; PMCID: PMC5227683; DOI: 10.1038/srep40703.
Abstract
Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.
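The temporal-coincidence principle at the heart of spike-based stereo correspondence can be illustrated with a toy event-matching loop. This is a drastic simplification of the paper's network (which uses coincidence plus cooperative constraints between spiking neurons); the event format `(timestamp, x_position)` and the coincidence window are assumptions for this sketch.

```python
def match_disparity(left_events, right_events, window=2.0):
    # each event is (timestamp, x_position) from one sensor
    disparities = []
    for t_l, x_l in left_events:
        for t_r, x_r in right_events:
            if abs(t_l - t_r) <= window:        # temporal coincidence
                disparities.append(x_l - x_r)   # spatial offset = disparity
    return disparities

left = [(10.0, 14), (20.0, 15)]
right = [(10.5, 10), (20.4, 11), (90.0, 3)]     # last event has no partner
print(match_disparity(left, right))             # -> [4, 4]
```

Because event sensors report changes with microsecond timestamps, near-simultaneous events in the two views are strong candidates for the same world point, which is what makes this matching cue cheap compared to frame-based block matching.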
Affiliation(s)
- Marc Osswald
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Sio-Hoi Ieng
- Université Pierre et Marie Curie, Institut de la Vision, Paris, France
- Ryad Benosman
- Université Pierre et Marie Curie, Institut de la Vision, Paris, France
- Giacomo Indiveri
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
|
34
|
|
35
|
Tuma T, Pantazi A, Le Gallo M, Sebastian A, Eleftheriou E. Stochastic phase-change neurons. Nat Nanotechnol 2016; 11:693-9. PMID: 27183057; DOI: 10.1038/nnano.2016.70.
Abstract
Artificial neuromorphic systems based on populations of spiking neurons are an indispensable tool in understanding the human brain and in constructing neuromimetic computational systems. To reach areal and power efficiencies comparable to those seen in biological systems, electroionics-based and phase-change-based memristive devices have been explored as nanoscale counterparts of synapses. However, progress on scalable realizations of neurons has so far been limited. Here, we show that chalcogenide-based phase-change materials can be used to create an artificial neuron in which the membrane potential is represented by the phase configuration of the nanoscale phase-change device. By exploiting the physics of reversible amorphous-to-crystal phase transitions, we show that the temporal integration of postsynaptic potentials can be achieved on a nanosecond timescale. Moreover, we show that this is inherently stochastic because of the melt-quench-induced reconfiguration of the atomic structure occurring when the neuron is reset. We demonstrate the use of these phase-change neurons, and their populations, in the detection of temporal correlations in parallel data streams and in sub-Nyquist representation of high-bandwidth signals.
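The stochastic firing described above can be caricatured as an integrate-and-fire process in which the membrane state (the crystalline fraction of the device) accumulates input, and each "melt-quench" reset leaves a random residual state. This is an abstraction of the device behavior, not a physical model; all constants below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def fire_times(drive=0.02, threshold=1.0, steps=5000):
    phase, spikes = 0.0, []
    for t in range(steps):
        phase += drive                       # input pulses slowly crystallize the cell
        if phase >= threshold:
            spikes.append(t)
            phase = rng.uniform(0.0, 0.15)   # melt-quench reset leaves a random residue
    return np.array(spikes)

isi = np.diff(fire_times())
print(isi.mean(), isi.std())  # inter-spike intervals jitter due to the stochastic reset
```

A deterministic reset to zero would give a perfectly regular spike train; the random residual state is what produces the interval variability that the paper exploits for population coding.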
Affiliation(s)
- Tomas Tuma
- IBM Research-Zurich, CH-8803 Rüschlikon, Switzerland
- Manuel Le Gallo
- IBM Research-Zurich, CH-8803 Rüschlikon, Switzerland
- ETH Zurich, CH-8092 Zurich, Switzerland
- Abu Sebastian
- IBM Research-Zurich, CH-8803 Rüschlikon, Switzerland
|
36
|
Hu J, Tang H, Tan K, Li H. How the Brain Formulates Memory: A Spatio-Temporal Model Research Frontier. IEEE Comput Intell Mag 2016. DOI: 10.1109/mci.2016.2532268.
|
37
|
Almási AD, Woźniak S, Cristea V, Leblebici Y, Engbersen T. Review of advances in neural networks: Neural design technology stack. Neurocomputing 2016. DOI: 10.1016/j.neucom.2015.02.092.
|
38
|
Binas J, Indiveri G, Pfeiffer M. Local structure supports learning of deterministic behavior in recurrent neural networks. BMC Neurosci 2015. PMCID: PMC4698769; DOI: 10.1186/1471-2202-16-s1-p195.
|
39
|
Giulioni M, Corradi F, Dante V, del Giudice P. Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems. Sci Rep 2015; 5:14730. PMID: 26463272; PMCID: PMC4604465; DOI: 10.1038/srep14730.
Abstract
Neuromorphic chips embody, in microelectronic devices, computational principles operating in the nervous system. In this domain it is important to identify computational primitives that theory and experiments suggest as generic and reusable cognitive elements. One such element is provided by attractor dynamics in recurrent networks. Point attractors are equilibrium states of the dynamics (up to fluctuations), determined by the synaptic structure of the network; a 'basin' of attraction comprises all initial states leading to a given attractor upon relaxation, making attractor dynamics suitable for implementing robust associative memory. The initial network state is dictated by the stimulus, and relaxation to the attractor state implements the retrieval of the corresponding memorized prototypical pattern. In a previous work we demonstrated that a neuromorphic recurrent network of spiking neurons and suitably chosen, fixed synapses supports attractor dynamics. Here we focus on learning: activating on-chip synaptic plasticity and using a theory-driven strategy for choosing network parameters, we show that autonomous learning, following repeated presentation of simple visual stimuli, shapes a synaptic connectivity supporting stimulus-selective attractors. Associative memory develops on chip as the result of the coupled stimulus-driven neural activity and ensuing synaptic dynamics, with no artificial separation between learning and retrieval phases.
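The retrieval dynamics described above (noisy initial state relaxing into a stored prototype) can be illustrated with a minimal Hopfield-style network. This is a textbook rate-based reduction, not the chip's spiking, on-chip-plastic implementation; the two stored patterns below are invented.

```python
import numpy as np

# Two stored prototypes over 8 binary (+/-1) units
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
W = patterns.T @ patterns / patterns.shape[1]  # Hebbian weight matrix
np.fill_diagonal(W, 0.0)                       # no self-connections

state = np.array([1, -1, 1, -1, 1, -1, -1, 1])  # prototype 0 with 2 bits flipped
for _ in range(5):                               # relax into the basin of attraction
    state = np.sign(W @ state)
print(state)  # recovers the stored prototype
```

The corrupted stimulus lies inside the basin of attraction of the first prototype, so the recurrent dynamics restore it exactly; this is the associative-memory retrieval that the abstract's attractor language refers to.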
Affiliation(s)
- Federico Corradi
- Department of Technologies and Health, Istituto Superiore di Sanitá, Roma, Italy
- Institute of Neuroinformatics, University of Zürich and ETH Zürich, Switzerland
- Vittorio Dante
- Department of Technologies and Health, Istituto Superiore di Sanitá, Roma, Italy
- Paolo del Giudice
- Department of Technologies and Health, Istituto Superiore di Sanitá, Roma, Italy
- National Institute for Nuclear Physics, Rome, Italy
|
40
|
Corradi F, Indiveri G. A Neuromorphic Event-Based Neural Recording System for Smart Brain-Machine-Interfaces. IEEE Trans Biomed Circuits Syst 2015; 9:699-709. PMID: 26513801; DOI: 10.1109/tbcas.2015.2479256.
Abstract
Neural recording systems are a central component of Brain-Machine Interfaces (BMIs). In most of these systems the emphasis is on faithful reproduction and transmission of the recorded signal to remote systems for further processing or data analysis. Here we follow an alternative approach: we propose a neural recording system that can be directly interfaced locally to neuromorphic spiking neural processing circuits for compressing the large amounts of data recorded, carrying out signal processing and neural computation to extract relevant information, and transmitting only the low-bandwidth outcome of the processing to remote computing or actuating modules. The fabricated system includes a low-noise amplifier, a delta-modulator analog-to-digital converter, and a low-power band-pass filter. The bio-amplifier has a programmable gain of 45-54 dB, with a Root Mean Squared (RMS) input-referred noise level of 2.1 μV, and consumes 90 μW. The band-pass filter and delta-modulator circuits include asynchronous handshaking interface logic compatible with event-based communication protocols. We describe the properties of the neural recording circuits, validating them with experimental measurements, and present system-level application examples, by interfacing these circuits to a reconfigurable neuromorphic processor comprising an array of spiking neurons with plastic and dynamic synapses. The pool of neurons within the neuromorphic processor was configured to implement a recurrent neural network, and to process the events generated by the neural recording system in order to carry out pattern recognition.
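The delta-modulation principle used by such a converter can be sketched behaviorally: instead of sampling the signal at a fixed rate, emit an UP or DOWN event whenever the input has moved by more than a fixed step since the last event. This is a software caricature of the idea, not the fabricated circuit; the step size and test signal are invented.

```python
import numpy as np

def delta_modulate(signal, delta=0.25):
    events, ref = [], signal[0]
    for t, x in enumerate(signal):
        while x - ref >= delta:      # signal rose by one step: UP event
            ref += delta
            events.append((t, +1))
        while ref - x >= delta:      # signal fell by one step: DOWN event
            ref -= delta
            events.append((t, -1))
    return events

t = np.linspace(0.0, 1.0, 200)
events = delta_modulate(np.sin(2 * np.pi * t))
print(len(events))  # a handful of events encode the whole sine cycle
```

The receiver can reconstruct the waveform to within one step by accumulating the signed events, which is why this encoding suits the low-bandwidth, event-based transmission the abstract describes.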
|
41
|
Chung YH, Lee T, Yoo SY, Min J, Choi JW. Electrochemical Bioelectronic Device Consisting of Metalloprotein for Analog Decision Making. Sci Rep 2015; 5:14501. PMID: 26400018; PMCID: PMC4585857; DOI: 10.1038/srep14501.
Abstract
We demonstrate an analog-type logical device that combines metalloprotein and organic/inorganic materials and can make an interactive analog decision. Myoglobin is used as a functional biomolecule to generate electrochemical signals, and its original redox signal is controlled with various mercapto-acids by the distance effect between myoglobin and a metal surface in the process of electron transfer. Controlled signals are modulated with the introduction of inorganic materials including nanoparticles and metal ions. By forming a hybrid structure with various constituents of organic/inorganic materials, several functions for signal manipulation were achieved, including enhancement, suppression, and shift. Based on the manipulated signals of biomolecules, a novel logical system for interactive decision-making processes is proposed by selectively combining different signals. Through the arrangement of various output signals, we can define interactive logical results regulated by an inherent tendency (by metalloprotein), personal experience (by organic spacer sets), and environments (by inorganic materials). As a practical application, a group decision process is presented using the proposed logical device. The proposed flexible logic process could facilitate the realization of an artificial intelligence system by mimicking the sophisticated human logic process.
Affiliation(s)
- Yong-Ho Chung
- Department of Chemical and Biomolecular Engineering, Sogang University, Seoul 121-742, Korea
- Department of Chemical Engineering, Hoseo University, Asan 336-795, Korea
- Taek Lee
- Department of Chemical and Biomolecular Engineering, Sogang University, Seoul 121-742, Korea
- Si-Youl Yoo
- Department of Chemical and Biomolecular Engineering, Sogang University, Seoul 121-742, Korea
- Junhong Min
- School of Integrative Engineering, Chung-Ang University, Seoul 156-756, Korea
- Jeong-Woo Choi
- Department of Chemical and Biomolecular Engineering, Sogang University, Seoul 121-742, Korea
- Interdisciplinary Program of Integrated Biotechnology, Sogang University, Seoul 121-742, Korea
|
42
|
Nazari S, Amiri M, Faez K, Amiri M. Multiplier-less digital implementation of neuron–astrocyte signalling on FPGA. Neurocomputing 2015. DOI: 10.1016/j.neucom.2015.02.041.
|
43
|
Qiao N, Mostafa H, Corradi F, Osswald M, Stefanini F, Sumislawska D, Indiveri G. A reconfigurable on-line learning spiking neuromorphic processor comprising 256 neurons and 128K synapses. Front Neurosci 2015; 9:141. PMID: 25972778; PMCID: PMC4413675; DOI: 10.3389/fnins.2015.00141.
Abstract
Implementing compact, low-power artificial neural processing systems with real-time on-line learning abilities is still an open challenge. In this paper we present a full-custom mixed-signal VLSI device with neuromorphic learning circuits that emulate the biophysics of real spiking neurons and dynamic synapses for exploring the properties of computational neuroscience models and for building brain-inspired computing systems. The proposed architecture allows the on-chip configuration of a wide range of network connectivities, including recurrent and deep networks, with short-term and long-term plasticity. The device comprises 128K analog synapse circuits and 256 neuron circuits with biologically plausible dynamics and bi-stable spike-based plasticity mechanisms that endow it with on-line learning abilities. In addition to the analog circuits, the device also comprises asynchronous digital logic circuits for setting different synapse and neuron properties as well as different network configurations. This prototype device, fabricated using a 180 nm 1P6M CMOS process, occupies an area of 51.4 mm², and consumes approximately 4 mW for typical experiments, for example those involving attractor networks. Here we describe the details of the overall architecture and of the individual circuits and present experimental results that showcase its potential. By supporting a wide range of cortical-like computational modules comprising plasticity mechanisms, this device will enable the realization of intelligent autonomous systems with on-line learning capabilities.
Affiliation(s)
- Ning Qiao
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Hesham Mostafa
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Federico Corradi
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Marc Osswald
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Fabio Stefanini
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Dora Sumislawska
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Giacomo Indiveri
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
|
44
|
Rutishauser U, Slotine JJ, Douglas R. Computation in dynamically bounded asymmetric systems. PLoS Comput Biol 2015; 11:e1004039. PMID: 25617645; PMCID: PMC4305289; DOI: 10.1371/journal.pcbi.1004039.
Abstract
Previous explanations of computations performed by recurrent networks have focused on symmetrically connected saturating neurons and their convergence toward attractors. Here we analyze the behavior of asymmetrically connected networks of linear threshold neurons, whose positive response is unbounded. We show that, for a wide range of parameters, this asymmetry brings interesting and computationally useful dynamical properties. When driven by input, the network explores potential solutions through highly unstable 'expansion' dynamics. This expansion is steered and constrained by negative divergence of the dynamics, which ensures that the dimensionality of the solution space continues to reduce until an acceptable solution manifold is reached. Then the system contracts stably on this manifold towards its final solution trajectory. The unstable positive feedback and cross-inhibition that underlie expansion and divergence are common motifs in molecular and neuronal networks. Therefore we propose that very simple organizational constraints that combine these motifs can lead to spontaneous computation and so to the spontaneous modification of entropy that is characteristic of living systems.
Affiliation(s)
- Ueli Rutishauser
- Computation and Neural Systems, California Institute of Technology, Pasadena, California, United States of America
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, California, United States of America
- Departments of Neurosurgery, Neurology and Biomedical Sciences, Cedars-Sinai Medical Center, Los Angeles, California, United States of America
- Jean-Jacques Slotine
- Nonlinear Systems Laboratory, Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Rodney Douglas
- Institute of Neuroinformatics, University and ETH Zurich, Zurich, Switzerland
|
45
|
Stefanini F, Neftci EO, Sheik S, Indiveri G. PyNCS: a microkernel for high-level definition and configuration of neuromorphic electronic systems. Front Neuroinform 2014; 8:73. PMID: 25232314; PMCID: PMC4152885; DOI: 10.3389/fninf.2014.00073.
Abstract
Neuromorphic hardware offers an electronic substrate for the realization of asynchronous event-based sensory-motor systems and large-scale spiking neural network architectures. In order to characterize these systems, configure them, and carry out modeling experiments, it is often necessary to interface them to workstations. The software used for this purpose typically consists of a large monolithic block of code which is highly specific to the hardware setup used. While this approach can lead to highly integrated hardware/software systems, it hampers the development of modular and reconfigurable infrastructures thus preventing a rapid evolution of such systems. To alleviate this problem, we propose PyNCS, an open-source front-end for the definition of neural network models that is interfaced to the hardware through a set of Python Application Programming Interfaces (APIs). The design of PyNCS promotes modularity, portability and expandability and separates implementation from hardware description. The high-level front-end that comes with PyNCS includes tools to define neural network models as well as to create, monitor and analyze spiking data. Here we report the design philosophy behind the PyNCS framework and describe its implementation. We demonstrate its functionality with two representative case studies, one using an event-based neuromorphic vision sensor, and one using a set of multi-neuron devices for carrying out a cognitive decision-making task involving state-dependent computation. PyNCS, already applicable to a wide range of existing spike-based neuromorphic setups, will accelerate the development of hybrid software/hardware neuromorphic systems, thanks to its code flexibility. The code is open-source and available online at https://github.com/inincs/pyNCS.
Affiliation(s)
- Fabio Stefanini
- Department of Information Technology and Electrical Engineering, Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Emre O Neftci
- Department of Bioengineering, Institute for Neural Computation, University of California at San Diego, La Jolla, CA, USA
- Sadique Sheik
- Department of Information Technology and Electrical Engineering, Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Giacomo Indiveri
- Department of Information Technology and Electrical Engineering, Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
|
46
|
Binas J, Rutishauser U, Indiveri G, Pfeiffer M. Learning and stabilization of winner-take-all dynamics through interacting excitatory and inhibitory plasticity. Front Comput Neurosci 2014; 8:68. PMID: 25071538; PMCID: PMC4086298; DOI: 10.3389/fncom.2014.00068.
Abstract
Winner-Take-All (WTA) networks are recurrently connected populations of excitatory and inhibitory neurons that represent promising candidate microcircuits for implementing cortical computation. WTAs can perform powerful computations, ranging from signal-restoration to state-dependent processing. However, such networks require fine-tuned connectivity parameters to keep the network dynamics within stable operating regimes. In this article, we show how such stability can emerge autonomously through an interaction of biologically plausible plasticity mechanisms that operate simultaneously on all excitatory and inhibitory synapses of the network. A weight-dependent plasticity rule is derived from the triplet spike-timing dependent plasticity model, and its stabilization properties in the mean-field case are analyzed using contraction theory. Our main result provides simple constraints on the plasticity rule parameters, rather than on the weights themselves, which guarantee stable WTA behavior. The plastic network we present is able to adapt to changing input conditions, and to dynamically adjust its gain, therefore exhibiting self-stabilization mechanisms that are crucial for maintaining stable operation in large networks of interconnected subunits. We show how distributed neural assemblies can adjust their parameters for stable WTA function autonomously while respecting anatomical constraints on neural wiring.
Affiliation(s)
- Jonathan Binas
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Ueli Rutishauser
- Department of Neurosurgery and Department of Neurology, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Computation and Neural Systems Program, Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Giacomo Indiveri
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Michael Pfeiffer
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
|
47
|
Carlson KD, Nageswaran JM, Dutt N, Krichmar JL. An efficient automated parameter tuning framework for spiking neural networks. Front Neurosci 2014; 8:10. PMID: 24550771; PMCID: PMC3912986; DOI: 10.3389/fnins.2014.00010.
Abstract
As the desire for biologically realistic spiking neural networks (SNNs) increases, tuning the enormous number of open parameters in these models becomes a difficult challenge. SNNs have been used to successfully model complex neural circuits that explore various neural phenomena such as neural plasticity, vision systems, auditory systems, neural oscillations, and many other important topics of neural function. Additionally, SNNs are particularly well-adapted to run on neuromorphic hardware that will support biological brain-scale architectures. Although the inclusion of realistic plasticity equations, neural dynamics, and recurrent topologies has increased the descriptive power of SNNs, it has also made the task of tuning these biologically realistic SNNs difficult. To meet this challenge, we present an automated parameter tuning framework capable of tuning SNNs quickly and efficiently using evolutionary algorithms (EA) and inexpensive, readily accessible graphics processing units (GPUs). A sample SNN with 4104 neurons was tuned to give V1 simple cell-like tuning curve responses and produce self-organizing receptive fields (SORFs) when presented with a random sequence of counterphase sinusoidal grating stimuli. A performance analysis comparing the GPU-accelerated implementation to a single-threaded central processing unit (CPU) implementation was carried out and showed a speedup of 65× of the GPU implementation over the CPU implementation, or 0.35 h per generation for GPU vs. 23.5 h per generation for CPU. Additionally, the parameter value solutions found in the tuned SNN were studied and found to be stable and repeatable. The automated parameter tuning framework presented here will be of use to both the computational neuroscience and neuromorphic engineering communities, making the process of constructing and tuning large-scale SNNs much quicker and easier.
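The generic evaluate-select-mutate loop behind such evolutionary parameter tuning can be sketched in a few lines. This is a toy stand-in, not the authors' GPU-accelerated framework: the fitness function below is a cheap quadratic surrogate for what would, in their setting, be a full SNN simulation, and all population sizes and mutation scales are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

def tune(fitness, n_params, pop=24, gens=50, sigma=0.2):
    population = rng.normal(0.0, 1.0, (pop, n_params))
    for _ in range(gens):
        scores = np.array([fitness(p) for p in population])       # evaluate (costly step)
        elite = population[np.argsort(scores)[-pop // 4:]]        # keep the top quarter
        parents = elite[rng.integers(0, len(elite), pop)]         # resample with replacement
        population = parents + rng.normal(0.0, sigma, (pop, n_params))  # mutate
    scores = np.array([fitness(p) for p in population])
    return population[np.argmax(scores)]

# Toy stand-in for an expensive SNN simulation: fitness peaks at `target`.
target = np.array([0.5, -1.0, 2.0])
best = tune(lambda p: -np.sum((p - target) ** 2), n_params=3)
print(best)  # converges toward the target parameter vector
```

Because each candidate is evaluated independently, the evaluation step parallelizes trivially, which is exactly what makes GPU acceleration attractive for this class of tuning problem.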
Affiliation(s)
- Kristofor D Carlson
- Department of Cognitive Sciences, University of California Irvine, Irvine, CA, USA
- Nikil Dutt
- Department of Computer Science, University of California Irvine, Irvine, CA, USA
- Jeffrey L Krichmar
- Department of Cognitive Sciences, University of California Irvine, Irvine, CA, USA
- Department of Computer Science, University of California Irvine, Irvine, CA, USA
|
48
|
Coath M, Sheik S, Chicca E, Indiveri G, Denham SL, Wennekers T. A robust sound perception model suitable for neuromorphic implementation. Front Neurosci 2014; 7:278. PMID: 24478621; PMCID: PMC3894459; DOI: 10.3389/fnins.2013.00278.
Abstract
We have recently demonstrated the emergence of dynamic feature sensitivity through exposure to formative stimuli in a real-time neuromorphic system implementing a hybrid analog/digital network of spiking neurons. This network, inspired by models of auditory processing in mammals, includes several mutually connected layers with distance-dependent transmission delays and learning in the form of spike timing dependent plasticity, which effects stimulus-driven changes in the network connectivity. Here we present results that demonstrate that the network is robust to a range of variations in the stimulus pattern, such as are found in naturalistic stimuli and neural responses. This robustness is a property critical to the development of realistic, electronic neuromorphic systems. We analyze the variability of the response of the network to “noisy” stimuli which allows us to characterize the acuity in information-theoretic terms. This provides an objective basis for the quantitative comparison of networks, their connectivity patterns, and learning strategies, which can inform future design decisions. We also show, using stimuli derived from speech samples, that the principles are robust to other challenges, such as variable presentation rate, that would have to be met by systems deployed in the real world. Finally we demonstrate the potential applicability of the approach to real sounds.
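The information-theoretic characterization of acuity mentioned above can be illustrated with a small sketch. This is a generic mutual-information estimate over paired (stimulus, response-class) observations, not the paper's analysis pipeline; the example data are hypothetical.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """I(S;R) in bits from a list of (stimulus, response) label pairs.

    The higher the value, the more reliably the network's response
    identifies the stimulus despite noise in the input.
    """
    n = len(pairs)
    joint = Counter(pairs)                   # counts of (s, r)
    s_marg = Counter(s for s, _ in pairs)    # counts of s
    r_marg = Counter(r for _, r in pairs)    # counts of r
    mi = 0.0
    for (s, r), c in joint.items():
        # p(s,r) * log2( p(s,r) / (p(s) * p(r)) ), with counts scaled by n
        mi += (c / n) * math.log2(c * n / (s_marg[s] * r_marg[r]))
    return mi

# Perfectly discriminating responses: 1 bit for two equiprobable stimuli.
perfect = [("A", 0), ("B", 1)] * 50
# Noisy responses: some "A" trials evoke the "B" response class.
noisy = [("A", 0)] * 40 + [("A", 1)] * 10 + [("B", 1)] * 50
```

A measure of this kind gives the objective basis the abstract refers to: two networks (or connectivity patterns, or learning strategies) can be compared by how many bits their responses carry about the stimulus under the same noise conditions.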
Affiliation(s)
- Martin Coath
- Cognition Institute, Plymouth University, Plymouth, UK; Faculty of Health and Human Sciences, School of Psychology, Plymouth University, Plymouth, UK
- Sadique Sheik
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Elisabetta Chicca
- Faculty of Technology, Cognitive Interaction Technology - Center of Excellence, Bielefeld University, Bielefeld, Germany
- Giacomo Indiveri
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Susan L Denham
- Cognition Institute, Plymouth University, Plymouth, UK; Faculty of Health and Human Sciences, School of Psychology, Plymouth University, Plymouth, UK
- Thomas Wennekers
- Cognition Institute, Plymouth University, Plymouth, UK; Faculty of Science and Environment, School of Computing and Mathematics, Plymouth University, Plymouth, UK
49
Neftci E, Das S, Pedroni B, Kreutz-Delgado K, Cauwenberghs G. Event-driven contrastive divergence for spiking neuromorphic systems. Front Neurosci 2014; 7:272. [PMID: 24574952 PMCID: PMC3922083 DOI: 10.3389/fnins.2013.00272] [Citation(s) in RCA: 113] [Impact Index Per Article: 11.3] [Received: 10/07/2013] [Accepted: 12/22/2013] [Indexed: 11/13/2022] Open
Abstract
Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation, and real-time interfacing with the environment. However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetic, which do not directly map onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train an RBM constructed with Integrate & Fire (I&F) neurons that is constrained by the limitations of existing and near-future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The recurrent activity of the network replaces the discrete steps of the CD algorithm, while Spike Time Dependent Plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM composed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST hand-written digit dataset, and by testing it in recognition, generation, and cue integration tasks. Our results contribute to a machine learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality.
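The core idea above — STDP carrying out CD's weight updates at spike times — can be sketched minimally. This is a hedged illustration, not the paper's implementation: it uses a generic pair-based STDP rule whose sign is gated by a global "data" vs. "model" phase signal, mirroring CD's positive and negative phases; the time constants and amplitudes are assumed.

```python
import math

TAU = 20.0       # STDP trace time constant in ms (assumed)
A_PLUS = 0.01    # potentiation amplitude (assumed)
A_MINUS = 0.01   # depression amplitude (assumed)

def stdp_update(w, pre_spikes, post_spikes, gate):
    """Accumulate pair-based STDP updates for one synapse.

    pre_spikes / post_spikes: spike times (ms) of the two neurons.
    gate: +1 during the data-driven (positive) phase, -1 during the
          model-driven (negative) phase, as in contrastive divergence.
    """
    dw = 0.0
    for t_post in post_spikes:
        for t_pre in pre_spikes:
            dt = t_post - t_pre
            if dt > 0:    # pre before post -> potentiation
                dw += A_PLUS * math.exp(-dt / TAU)
            elif dt < 0:  # post before pre -> depression
                dw -= A_MINUS * math.exp(dt / TAU)
    return w + gate * dw

w = 0.5
# Causal pairing (pre leads post) strengthens the synapse in the
# positive phase...
w_pos = stdp_update(w, [10.0, 30.0], [12.0, 33.0], gate=+1)
# ...and the same correlations are unlearned in the negative phase.
w_neg = stdp_update(w, [10.0, 30.0], [12.0, 33.0], gate=-1)
```

Because each update is triggered by spike pairs rather than by a discrete batch step, the rule runs online and asynchronously, which is what makes it compatible with the event-driven neuromorphic substrates the abstract targets.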
Affiliation(s)
- Emre Neftci
- Institute for Neural Computation, University of California San Diego, La Jolla, CA, USA
- Srinjoy Das
- Institute for Neural Computation, University of California San Diego, La Jolla, CA, USA
- Electrical and Computer Engineering Department, University of California San Diego, La Jolla, CA, USA
- Bruno Pedroni
- Department of Bioengineering, University of California San Diego, La Jolla, CA, USA
- Kenneth Kreutz-Delgado
- Institute for Neural Computation, University of California San Diego, La Jolla, CA, USA
- Electrical and Computer Engineering Department, University of California San Diego, La Jolla, CA, USA
- Gert Cauwenberghs
- Institute for Neural Computation, University of California San Diego, La Jolla, CA, USA
- Department of Bioengineering, University of California San Diego, La Jolla, CA, USA
50