1. Sinha A, Lee J, Kim J, So H. An evaluation of recent advancements in biological sensory organ-inspired neuromorphically tuned biomimetic devices. Materials Horizons 2024; 11:5181-5208. PMID: 39114942. DOI: 10.1039/d4mh00522h.
Abstract
In the field of neuroscience, significant progress has been made in understanding how the brain processes information. Unlike computer processors, the brain comprises neurons and synapses instead of memory blocks and transistors. Despite advancements in artificial neural networks, a complete understanding of brain function remains elusive. For example, to achieve more accurate neuron replication, we must better understand signal transmission during synaptic processes, neural network tunability, and the creation of nanodevices featuring neurons and synapses. This study discusses the latest algorithms utilized in neuromorphic systems, the production of synaptic devices, differences between single-sensory and multisensory devices, recent advances in multisensory devices, and the promising research opportunities in this field. We also explore the ability of artificial synaptic devices to mimic biological neural systems across diverse applications. Despite existing challenges, neuroscience-based computing technology holds promise for attracting scientists seeking to improve solutions and augment the capabilities of neuromorphic devices, thereby fostering future breakthroughs in algorithms and the widespread application of cutting-edge technologies.
Affiliations
- Animesh Sinha: Department of Mechanical Convergence Engineering, Hanyang University, Seoul 04763, South Korea.
- Jihun Lee: Department of Mechanical Convergence Engineering, Hanyang University, Seoul 04763, South Korea.
- Junho Kim: Department of Mechanical Convergence Engineering, Hanyang University, Seoul 04763, South Korea.
- Hongyun So: Department of Mechanical Convergence Engineering, Hanyang University, Seoul 04763, South Korea; Institute of Nano Science and Technology, Hanyang University, Seoul 04763, South Korea.

2. Farisco M, Evers K, Changeux JP. Is artificial consciousness achievable? Lessons from the human brain. Neural Netw 2024; 180:106714. PMID: 39270349. DOI: 10.1016/j.neunet.2024.106714.
Abstract
Here we analyse the question of developing artificial consciousness from an evolutionary perspective, taking the evolution of the human brain and its relation to consciousness as a reference model or benchmark. This analysis reveals several structural and functional features of the human brain that appear to be key to reaching human-like complex conscious experience and that current research on Artificial Intelligence (AI) should take into account in its attempt to develop systems capable of human-like conscious processing. We argue that, even if AI is limited in its ability to emulate human consciousness for both intrinsic (i.e., structural and architectural) and extrinsic (i.e., related to the current stage of scientific and technological knowledge) reasons, taking inspiration from those characteristics of the brain that make human-like conscious processing possible and/or modulate it is a potentially promising strategy towards developing conscious AI. Nor can it be theoretically excluded that AI research will develop partial or alternative forms of consciousness that are qualitatively different from the human form, and that may be either more or less sophisticated depending on the perspective taken. Therefore, we recommend neuroscience-inspired caution in talking about artificial consciousness: since the use of the same word "consciousness" for humans and AI becomes ambiguous and potentially misleading, we propose to clearly specify which level and/or type of consciousness AI research aims to develop, as well as what would be common to, and what would differ between, AI conscious processing and human conscious experience.
Affiliations
- Michele Farisco: Centre for Research Ethics and Bioethics, Department of Public Health and Caring Sciences, Uppsala University, Uppsala, Sweden; Biogem, Biology and Molecular Genetics Institute, Ariano Irpino (AV), Italy.
- Kathinka Evers: Centre for Research Ethics and Bioethics, Department of Public Health and Caring Sciences, Uppsala University, Uppsala, Sweden.

3. Yang S, Wang J, Deng B, Azghadi MR, Linares-Barranco B. Neuromorphic context-dependent learning framework with fault-tolerant spike routing. IEEE Trans Neural Netw Learn Syst 2022; 33:7126-7140. PMID: 34115596. DOI: 10.1109/tnnls.2021.3084250.
Abstract
Neuromorphic computing is a promising technology that realizes computation based on event-based spiking neural networks (SNNs). However, fault-tolerant on-chip learning remains a challenge in neuromorphic systems. This study presents the first scalable neuromorphic fault-tolerant context-dependent learning (FCL) hardware framework. We show how this system can learn associations between stimulation and response in two context-dependent learning tasks from experimental neuroscience, despite possible faults in the hardware nodes. Furthermore, we demonstrate how our fault-tolerant neuromorphic spike-routing scheme can successfully route around multiple faulty nodes and can enhance the maximum throughput of the neuromorphic network by 0.9%-16.1% in comparison with previous studies. By exploiting the real-time computational capability and multi-fault tolerance of the proposed system, the neuronal mechanisms underlying the spiking activities of neuromorphic networks can be readily explored. In addition, the proposed system can be applied in real-time learning and decision-making applications, brain-machine integration, and the investigation of brain cognition during learning.
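
The routing scheme itself is hardware-specific, but the underlying idea (delivering spike packets around known-faulty nodes of a mesh network-on-chip) can be sketched in a few lines. A minimal illustration, assuming a simple 2D mesh and a breadth-first detour search rather than the paper's actual algorithm:

```python
# Minimal sketch of fault-tolerant spike routing on a 2D mesh NoC.
# This is NOT the FCL routing scheme from the paper; it only illustrates
# the generic idea of finding a minimal detour around faulty router nodes.
from collections import deque

def route_around_faults(size, src, dst, faulty):
    """Breadth-first search for a shortest path from src to dst on a
    size x size mesh, treating nodes in `faulty` as unusable."""
    if src in faulty or dst in faulty:
        return None
    queue, came_from = deque([src]), {src: None}
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in faulty and nxt not in came_from):
                came_from[nxt] = node
                queue.append(nxt)
    return None  # destination unreachable

# Example: route a spike packet from (0, 0) to (3, 3) around two faults.
print(route_around_faults(4, (0, 0), (3, 3), faulty={(1, 1), (2, 1)}))
```
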

4. Michaelis C, Lehr AB, Oed W, Tetzlaff C. Brian2Loihi: an emulator for the neuromorphic chip Loihi using the spiking neural network simulator Brian. Front Neuroinform 2022; 16:1015624. PMID: 36439945. PMCID: PMC9682266. DOI: 10.3389/fninf.2022.1015624.
Abstract
Developing intelligent neuromorphic solutions remains a challenging endeavor. It requires a solid conceptual understanding of the hardware's fundamental building blocks. Beyond this, accessible and user-friendly prototyping is crucial to speed up the design pipeline. We developed an open-source Loihi emulator based on the neural network simulator Brian that can easily be incorporated into existing simulation workflows. We demonstrate errorless Loihi emulation in software for a single neuron and for a recurrently connected spiking neural network. On-chip learning is also reviewed and implemented, with only a small remaining discrepancy attributable to stochastic rounding. This work provides a coherent presentation of Loihi's computational unit and introduces a new, easy-to-use Loihi prototyping package with the aim of helping to streamline the conceptualization and deployment of new algorithms.
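
For readers unfamiliar with Brian, the following is a minimal example of the kind of network such an emulator targets. It uses plain Brian2 only; the Brian2Loihi package's own classes and fixed-point parameterization are not reproduced here:

```python
# A minimal Brian2 LIF network of the kind the emulator targets.
# Plain Brian2 is shown; the Brian2Loihi package wraps such models in
# Loihi-compatible (discretized, fixed-point) classes whose exact API
# is not reproduced here.
from brian2 import (NeuronGroup, Synapses, SpikeMonitor, PoissonGroup,
                    ms, mV, Hz, run)

tau = 10 * ms
eqs = 'dv/dt = -v / tau : volt'
neurons = NeuronGroup(10, eqs, threshold='v > 15*mV',
                      reset='v = 0*mV', method='exact')
inputs = PoissonGroup(10, rates=100 * Hz)
syn = Synapses(inputs, neurons, on_pre='v_post += 2*mV')
syn.connect(p=0.5)
mon = SpikeMonitor(neurons)
run(100 * ms)
print(f'{mon.num_spikes} spikes recorded')
```
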
Affiliations
- All authors: Department of Computational Neuroscience, University of Göttingen, Göttingen, Germany; Bernstein Center for Computational Neuroscience, University of Göttingen, Göttingen, Germany.
- Correspondence: Carlo Michaelis

5. Müller E, Schmitt S, Mauch C, Billaudelle S, Grübl A, Güttler M, Husmann D, Ilmberger J, Jeltsch S, Kaiser J, Klähn J, Kleider M, Koke C, Montes J, Müller P, Partzsch J, Passenberg F, Schmidt H, Vogginger B, Weidner J, Mayr C, Schemmel J. The operating system of the neuromorphic BrainScaleS-1 system. Neurocomputing 2022. DOI: 10.1016/j.neucom.2022.05.081.

6. Klassert R, Baumbach A, Petrovici MA, Gärttner M. Variational learning of quantum ground states on spiking neuromorphic hardware. iScience 2022; 25:104707. PMID: 35992070. PMCID: PMC9386107. DOI: 10.1016/j.isci.2022.104707.
Abstract
Recent research has demonstrated the usefulness of neural networks as variational ansatz functions for quantum many-body states. However, high-dimensional sampling spaces and transient autocorrelations confront these approaches with a challenging computational bottleneck. Compared to conventional neural networks, physical model devices offer a fast, efficient and inherently parallel substrate capable of related forms of Markov chain Monte Carlo sampling. Here, we demonstrate the ability of a neuromorphic chip to represent the ground states of quantum spin models by variational energy minimization. We develop a training algorithm and apply it to the transverse field Ising model, showing good performance at moderate system sizes (N ≤ 10). A systematic hyperparameter study shows that performance depends on sample quality, which is limited by temporal parameter variations on the analog neuromorphic chip. Our work thus provides an important step towards harnessing the capabilities of neuromorphic hardware for tackling the curse of dimensionality in quantum many-body problems.
Highlights:
- Variational scheme for representing quantum ground states with neuromorphic hardware
- Accelerated physical system yields system-size-independent sample generation time
- Accurate learning of ground states across a quantum phase transition
- Detailed analysis of algorithmic and technical limitations
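
The variational objective the abstract refers to has a standard form, stated here for the transverse-field Ising model as a sketch of the textbook setup (not equations copied from the paper):

```latex
% Standard variational problem for the transverse-field Ising model (TFIM):
% minimize the energy of a parametrized state |\psi_\theta>.
\begin{align}
  H_{\mathrm{TFIM}} &= -J \sum_{i=1}^{N-1} \sigma^z_i \sigma^z_{i+1}
                       - h \sum_{i=1}^{N} \sigma^x_i, \\
  E(\theta) &= \frac{\langle \psi_\theta | H_{\mathrm{TFIM}} | \psi_\theta \rangle}
                    {\langle \psi_\theta | \psi_\theta \rangle},
  \qquad
  \theta^\ast = \arg\min_\theta E(\theta),
\end{align}
where the network (here, the states sampled from the spiking chip) supplies
the amplitudes $\psi_\theta(s)$ over spin configurations $s$, and $E(\theta)$
is estimated by Monte Carlo sampling.
```
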

7. Yang S, Wang J, Hao X, Li H, Wei X, Deng B, Loparo KA. BiCoSS: toward large-scale cognition brain with multigranular neuromorphic architecture. IEEE Trans Neural Netw Learn Syst 2022; 33:2801-2815. PMID: 33428574. DOI: 10.1109/tnnls.2020.3045492.
Abstract
The further exploration of the neural mechanisms underlying the biological activities of the human brain depends on the development of large-scale spiking neural networks (SNNs) with different categories at different levels, as well as the corresponding computing platforms. Neuromorphic engineering provides approaches to high-performance, biologically plausible computational paradigms inspired by neural systems. In this article, we present a biologically inspired cognitive supercomputing system (BiCoSS) that integrates multiple granules (GRs) of SNNs to realize a hybrid compatible neuromorphic platform. A scalable hierarchical heterogeneous multicore architecture is presented, and a synergistic routing scheme for hybrid neural information is proposed. The BiCoSS system can accommodate different levels of GRs and biological plausibility of SNN models in an efficient and scalable manner. Over four million neurons can be realized on BiCoSS, with a power efficiency 2.8k times higher than that of a GPU platform, and the average latency of BiCoSS is 3.62 and 2.49 times lower than that of conventional digital neuromorphic architectures. For verification, BiCoSS is used to replicate various biological cognitive activities, including motor learning, action selection, context-dependent learning, and movement disorders. Comprehensively considering programmability, biological plausibility, learning capability, computational power, and scalability, BiCoSS is shown to outperform alternative state-of-the-art works for large-scale SNNs, while its real-time computational capability enables a wide range of potential applications.

8. Ostrau C, Klarhorst C, Thies M, Rückert U. Benchmarking neuromorphic hardware and its energy expenditure. Front Neurosci 2022; 16:873935. PMID: 35720731. PMCID: PMC9201569. DOI: 10.3389/fnins.2022.873935.
Abstract
We propose and discuss a platform-overarching benchmark suite for neuromorphic hardware. This suite covers benchmarks from low-level characterization to high-level application evaluation using benchmark-specific metrics. With this rather broad approach we are able to compare various hardware systems, including mixed-signal and fully digital neuromorphic architectures. Selected benchmarks are discussed and results for several target platforms are presented, revealing characteristic differences between the various systems. Furthermore, a proposed energy model allows benchmark performance metrics to be combined with energy efficiency. This model enables the prediction of the energy expenditure of a network on a target system without actually having access to it. To quantify the efficiency gap between neuromorphics and the biological paragon of the human brain, the energy model is used to estimate the energy required for a full brain simulation. This reveals that current neuromorphic systems are at least four orders of magnitude less efficient. It is argued that, even with a modern fabrication process, a gap of two to three orders of magnitude would remain. Finally, for selected benchmarks, the performance and efficiency of the neuromorphic solutions are compared to standard approaches.
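
As an illustration of how such an energy model enables prediction without hardware access, consider a linear per-event model; the functional form and coefficients below are illustrative placeholders, not the paper's fitted model:

```python
# Sketch of a linear energy model of the kind the abstract describes.
# Total energy = idle power over the run time plus per-event costs,
# with coefficients measured once per target platform.

def predict_energy_joules(runtime_s, n_neurons, n_synaptic_events,
                          p_idle_w, e_neuron_update_j, e_syn_event_j,
                          timestep_s=1e-3):
    """Predict energy for running a network on a platform without access
    to it, given previously measured platform coefficients."""
    n_neuron_updates = n_neurons * (runtime_s / timestep_s)
    return (p_idle_w * runtime_s
            + e_neuron_update_j * n_neuron_updates
            + e_syn_event_j * n_synaptic_events)

# Hypothetical coefficients for some digital platform:
print(predict_energy_joules(runtime_s=10.0, n_neurons=1_000,
                            n_synaptic_events=5_000_000,
                            p_idle_w=0.5, e_neuron_update_j=2e-9,
                            e_syn_event_j=5e-10))
```
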

10. Yang S, Deng B, Wang J, Li H, Lu M, Che Y, Wei X, Loparo KA. Scalable digital neuromorphic architecture for large-scale biophysically meaningful neural network with multi-compartment neurons. IEEE Trans Neural Netw Learn Syst 2020; 31:148-162. PMID: 30892250. DOI: 10.1109/tnnls.2019.2899936.
Abstract
Multicompartment emulation is an essential step to enhance the biological realism of neuromorphic systems and to further understand the computational power of neurons. In this paper, we present a hardware-efficient, scalable, and real-time computing strategy for implementing large-scale, biologically meaningful neural networks with one million multi-compartment neurons (CMNs). The hardware platform uses four Altera Stratix III field-programmable gate arrays, and both the cellular and the network levels are considered, which provides an efficient implementation of a large-scale spiking neural network with biophysically plausible dynamics. At the cellular level, a cost-efficient multi-CMN model is presented, which can reproduce detailed neuronal dynamics with representative neuronal morphology. A set of efficient neuromorphic techniques for single-CMN implementation is presented, eliminating the hardware cost of memory and multiplier resources and enhancing computational speed by 56.59% in comparison with the classical digital implementation method. At the network level, a scalable network-on-chip (NoC) architecture is proposed with a novel routing algorithm that enhances NoC performance, including throughput and computational latency, leading to higher computational efficiency and capability in comparison with state-of-the-art projects. The experimental results demonstrate that the proposed work provides an efficient model and architecture for large-scale, biologically meaningful networks, while the hardware synthesis results demonstrate low area utilization and high computational speed, supporting the scalability of the approach.
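
The kind of dynamics such multi-compartment neurons discretize can be summarized by textbook two-compartment coupling; the following equations are a generic sketch, not the paper's specific model:

```latex
% Generic two-compartment (soma s, dendrite d) dynamics of the kind
% multi-compartment neuron models discretize (textbook cable-equation
% coupling, not the paper's specific model):
\begin{align}
  C_m \frac{dV_s}{dt} &= -g_L \,(V_s - E_L) + g_c \,(V_d - V_s) + I_s^{\mathrm{syn}}, \\
  C_m \frac{dV_d}{dt} &= -g_L \,(V_d - E_L) + g_c \,(V_s - V_d) + I_d^{\mathrm{syn}},
\end{align}
where $g_c$ is the coupling conductance between compartments; digital
hardware implementations typically replace the multiplications by
shift-and-add arithmetic, which is how multiplier resources are removed.
```
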

11. Kungl AF, Schmitt S, Klähn J, Müller P, Baumbach A, Dold D, Kugele A, Müller E, Koke C, Kleider M, Mauch C, Breitwieser O, Leng L, Gürtler N, Güttler M, Husmann D, Husmann K, Hartel A, Karasenko V, Grübl A, Schemmel J, Meier K, Petrovici MA. Accelerated physical emulation of Bayesian inference in spiking neural networks. Front Neurosci 2019; 13:1201. PMID: 31798400. PMCID: PMC6868054. DOI: 10.3389/fnins.2019.01201.
Abstract
The massive parallelism of biological information processing plays an important role in its superiority over human-engineered computing devices. In particular, it may hold the key to overcoming the von Neumann bottleneck that limits contemporary computer architectures. Physical-model neuromorphic devices seek to replicate not only this inherent parallelism, but also aspects of its microscopic dynamics in analog circuits emulating neurons and synapses. However, these machines require network models that are not only adept at solving particular tasks, but that can also cope with the inherent imperfections of analog substrates. We present a spiking network model that performs Bayesian inference through sampling on the BrainScaleS neuromorphic platform, where we use it for generative and discriminative computations on visual data. By illustrating its functionality on this platform, we implicitly demonstrate its robustness to various substrate-specific distortive effects, as well as its capability for accelerated computation. These results showcase the advantages of brain-inspired physical computation and provide important building blocks for large-scale neuromorphic applications.
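
The sampling framework such networks build on is standard in this line of work; as a reminder (not specific to this paper's hardware mapping), network states are interpreted as samples from a Boltzmann distribution:

```latex
% Spike-based sampling in a nutshell: network states z (z_k = 1 if
% neuron k spiked within the last refractory window tau) are read as
% samples from a Boltzmann distribution.
\begin{equation}
  p(\mathbf{z}) = \frac{1}{Z}\,
      \exp\!\Big( \tfrac{1}{2}\,\mathbf{z}^{\!\top} W \mathbf{z}
                  + \mathbf{b}^{\!\top} \mathbf{z} \Big),
\end{equation}
with symmetric weights $W$ and biases $\mathbf{b}$ mapped to synaptic
weights and leak potentials of the emulated LIF neurons; generative and
discriminative queries then reduce to clamping subsets of $\mathbf{z}$
and reading out the sampled marginals.
```
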
Affiliations
- All authors: Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany.
- Mihai A. Petrovici: also Department of Physiology, University of Bern, Bern, Switzerland.

12. Dold D, Bytschok I, Kungl AF, Baumbach A, Breitwieser O, Senn W, Schemmel J, Meier K, Petrovici MA. Stochasticity from function - why the Bayesian brain may need no noise. Neural Netw 2019; 119:200-213. PMID: 31450073. DOI: 10.1016/j.neunet.2019.08.002.
Abstract
An increasing body of evidence suggests that the trial-to-trial variability of spiking activity in the brain is not mere noise, but rather the reflection of a sampling-based encoding scheme for probabilistic computing. Since the precise statistical properties of neural activity are important in this context, many models assume an ad-hoc source of well-behaved, explicit noise, either on the input or on the output side of single neuron dynamics, most often assuming an independent Poisson process in either case. However, these assumptions are somewhat problematic: neighboring neurons tend to share receptive fields, rendering both their input and their output correlated; at the same time, neurons are known to behave largely deterministically, as a function of their membrane potential and conductance. We suggest that spiking neural networks may have no need for noise to perform sampling-based Bayesian inference. We study analytically the effect of auto- and cross-correlations in functional Bayesian spiking networks and demonstrate how their effect translates to synaptic interaction strengths, rendering them controllable through synaptic plasticity. This allows even small ensembles of interconnected deterministic spiking networks to simultaneously and co-dependently shape their output activity through learning, enabling them to perform complex Bayesian computation without any need for noise, which we demonstrate in silico, both in classical simulation and in neuromorphic emulation. These results close a gap between the abstract models and the biology of functionally Bayesian spiking networks, effectively reducing the architectural constraints imposed on physical neural substrates required to perform probabilistic computing, be they biological or artificial.
Affiliations
- All authors except Walter Senn: Kirchhoff-Institute for Physics, Heidelberg University, Im Neuenheimer Feld 227, D-69120 Heidelberg, Germany.
- Dominik Dold, Akos F. Kungl, Mihai A. Petrovici: also Department of Physiology, University of Bern, Bühlplatz 5, CH-3012 Bern, Switzerland.
- Walter Senn: Department of Physiology, University of Bern, Bühlplatz 5, CH-3012 Bern, Switzerland.

13. Wunderlich T, Kungl AF, Müller E, Hartel A, Stradmann Y, Aamir SA, Grübl A, Heimbrecht A, Schreiber K, Stöckel D, Pehle C, Billaudelle S, Kiene G, Mauch C, Schemmel J, Meier K, Petrovici MA. Demonstrating advantages of neuromorphic computation: a pilot study. Front Neurosci 2019; 13:260. PMID: 30971881. PMCID: PMC6444279. DOI: 10.3389/fnins.2019.00260.
Abstract
Neuromorphic devices represent an attempt to mimic aspects of the brain's architecture and dynamics with the aim of replicating its hallmark functional capabilities in terms of computational power, robust learning and energy efficiency. We employ a single-chip prototype of the BrainScaleS 2 neuromorphic system to implement a proof-of-concept demonstration of reward-modulated spike-timing-dependent plasticity in a spiking network that learns to play a simplified version of the Pong video game by smooth pursuit. This system combines an electronic mixed-signal substrate for emulating neuron and synapse dynamics with an embedded digital processor for on-chip learning, which in this work also serves to simulate the virtual environment and learning agent. The analog emulation of neuronal membrane dynamics enables a 1000-fold acceleration with respect to biological real-time, with the entire chip operating on a power budget of 57 mW. Compared to an equivalent simulation using state-of-the-art software, the on-chip emulation is at least one order of magnitude faster and three orders of magnitude more energy-efficient. We demonstrate how on-chip learning can mitigate the effects of fixed-pattern noise, which is unavoidable in analog substrates, while making use of temporal variability for action exploration. Learning compensates for imperfections of the physical substrate, manifested as neuronal parameter variability, by adapting synaptic weights to the respective excitabilities of individual neurons.
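
Reward-modulated STDP can be sketched generically with eligibility traces; the constants and matrix form below are illustrative and do not reproduce the chip's on-chip implementation:

```python
# Generic reward-modulated STDP with eligibility traces, sketching the
# kind of learning rule the abstract refers to (the chip's actual
# implementation and constants are not reproduced here).
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post = 32, 4
w = rng.normal(0.0, 0.1, (n_pre, n_post))   # synaptic weights
elig = np.zeros_like(w)                      # eligibility traces
tau_e, eta = 50.0, 0.01                      # trace time constant, learning rate

def step(dt, pre_spikes, post_spikes, stdp_window, reward):
    """One simulation step: decay traces, accumulate pairing terms,
    and apply a reward-gated weight update."""
    global w, elig
    elig *= np.exp(-dt / tau_e)
    # outer product of recent pre/post activity stands in for the
    # pair-based STDP term accumulated into the eligibility trace
    elig += stdp_window * np.outer(pre_spikes, post_spikes)
    w += eta * reward * elig  # reward > 0 reinforces recent pairings

# Toy usage: random spikes, positive reward when the agent does well.
for t in range(100):
    step(dt=1.0,
         pre_spikes=rng.random(n_pre) < 0.1,
         post_spikes=rng.random(n_post) < 0.2,
         stdp_window=1.0,
         reward=rng.choice([-0.1, 1.0]))
print(w.mean())
```
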
Affiliations
- All authors: Department of Physics, Kirchhoff Institute for Physics, Heidelberg University, Heidelberg, Germany.
- Mihai A. Petrovici: also Department of Physiology, University of Bern, Bern, Switzerland.

14. Yang S, Deng B, Li H, Liu C, Wang J, Yu H, Qin Y. FPGA implementation of hippocampal spiking network and its real-time simulation on dynamical neuromodulation of oscillations. Neurocomputing 2018. DOI: 10.1016/j.neucom.2017.12.031.

15. Stöckel A, Jenzen C, Thies M, Rückert U. Binary associative memories as a benchmark for spiking neuromorphic hardware. Front Comput Neurosci 2017; 11:71. PMID: 28878642. PMCID: PMC5572441. DOI: 10.3389/fncom.2017.00071.
Abstract
Large-scale neuromorphic hardware platforms, specialized computer systems for the energy-efficient simulation of spiking neural networks, are being developed around the world, for example as part of the European Human Brain Project (HBP). Due to conceptual differences, a universal performance analysis of these systems in terms of runtime, accuracy and energy efficiency is non-trivial, yet indispensable for further hardware and software development. In this paper we describe a scalable benchmark based on a spiking neural network implementation of the binary neural associative memory. We treat neuromorphic hardware and software simulators as black boxes and execute exactly the same network description across all devices. Experiments on the HBP platforms under varying configurations of the associative memory show that the presented method allows one to test the quality of the neuron-model implementation and to explain significant deviations from the expected reference output.
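
The underlying (non-spiking) Willshaw binary associative memory is compact enough to state in code; this sketch shows the reference computation such a benchmark compares against, not the paper's spiking implementation or metrics:

```python
# The classical Willshaw binary associative memory underlying the
# benchmark (the spiking-network mapping from the paper is omitted).
import numpy as np

def train(patterns_in, patterns_out):
    """Store hetero-associations as a clipped sum of outer products."""
    W = np.zeros((patterns_in.shape[1], patterns_out.shape[1]), dtype=bool)
    for x, y in zip(patterns_in, patterns_out):
        W |= np.outer(x, y)
    return W

def recall(W, x, k_active):
    """Dendritic sums followed by a threshold at the input activity."""
    sums = x.astype(int) @ W.astype(int)
    return sums >= k_active  # exact recall if the memory is not overloaded

rng = np.random.default_rng(1)
n, m, k, n_pat = 64, 64, 8, 20
xs = np.zeros((n_pat, n), dtype=bool)
ys = np.zeros((n_pat, m), dtype=bool)
for i in range(n_pat):   # sparse random binary patterns
    xs[i, rng.choice(n, k, replace=False)] = True
    ys[i, rng.choice(m, k, replace=False)] = True
W = train(xs, ys)
print('exact recall:', np.array_equal(recall(W, xs[0], k), ys[0]))
```
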
Affiliations
- All authors: Cognitronics and Sensor Systems, Cluster of Excellence Cognitive Interaction Technology, Faculty of Technology, Bielefeld University, Bielefeld, Germany.

16. Petrovici MA, Bill J, Bytschok I, Schemmel J, Meier K. Stochastic inference with spiking neurons in the high-conductance state. Phys Rev E 2016; 94:042312. PMID: 27841474. DOI: 10.1103/physreve.94.042312.
Abstract
The highly variable dynamics of neocortical circuits observed in vivo have been hypothesized to represent a signature of ongoing stochastic inference but stand in apparent contrast to the deterministic response of neurons measured in vitro. Based on a propagation of the membrane autocorrelation across spike bursts, we provide an analytical derivation of the neural activation function that holds for a large parameter space, including the high-conductance state. On this basis, we show how an ensemble of leaky integrate-and-fire neurons with conductance-based synapses embedded in a spiking environment can attain the correct firing statistics for sampling from a well-defined target distribution. For recurrent networks, we examine convergence toward stationarity in computer simulations and demonstrate sample-based Bayesian inference in a mixed graphical model. This points to a new computational role of high-conductance states and establishes a rigorous link between deterministic neuron models and functional stochastic dynamics on the network level.
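
The activation function derived in this line of work typically takes a logistic form; the following shows the standard shape only (the exact dependence of the parameters on the LIF and background statistics is derived in the paper):

```latex
% Typical form of the neural activation function in this framework: the
% probability of finding a neuron in the refractory/"on" state follows
% a logistic function of its mean free membrane potential \bar{u}.
\begin{equation}
  p(z = 1 \mid \bar{u}) = \sigma\!\left( \frac{\bar{u} - u_0}{\alpha} \right)
  = \frac{1}{1 + \exp\!\left( -\,\frac{\bar{u} - u_0}{\alpha} \right)},
\end{equation}
where $u_0$ and $\alpha$ translate the LIF parameters and the
high-conductance-state statistics into the offset and slope of an
abstract sampling unit, establishing the link to Boltzmann-machine-style
sampling.
```
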
Affiliations
- All authors: Kirchhoff Institute for Physics, University of Heidelberg, Heidelberg, Germany.
- Johannes Bill: also Institute for Theoretical Computer Science, Graz University of Technology, Styria, Austria.

17. Spike-based Bayesian-Hebbian learning of temporal sequences. PLoS Comput Biol 2016; 12:e1004954. PMID: 27213810. PMCID: PMC4877102. DOI: 10.1371/journal.pcbi.1004954.
Abstract
Many cognitive and motor functions are enabled by the temporal representation and processing of stimuli, but it remains an open issue how neocortical microcircuits can reliably encode and replay such sequences of information. To better understand this, a modular attractor memory network is proposed in which meta-stable sequential attractor transitions are learned through changes to synaptic weights and intrinsic excitabilities via the spike-based Bayesian Confidence Propagation Neural Network (BCPNN) learning rule. We find that the formation of distributed memories, embodied by increased periods of firing in pools of excitatory neurons, together with asymmetrical associations between these distinct network states, can be acquired through plasticity. The model's feasibility is demonstrated using simulations of adaptive exponential integrate-and-fire model neurons (AdEx). We show that the learning and speed of sequence replay depend on a confluence of biophysically relevant parameters including stimulus duration, level of background noise, ratio of synaptic currents, and strengths of short-term depression and adaptation. Moreover, sequence elements are shown to flexibly participate multiple times in the sequence, suggesting that spiking attractor networks of this type can support an efficient combinatorial code. The model provides a principled approach towards understanding how multiple interacting plasticity mechanisms can coordinate hetero-associative learning in unison.

Author summary: From one moment to the next, in an ever-changing world, and awash in a deluge of sensory data, the brain fluidly guides our actions throughout an astonishing variety of tasks. Processing this ongoing bombardment of information is a fundamental problem faced by its underlying neural circuits. Given that the structure of our actions, along with the organization of the environment in which they are performed, can be intuitively decomposed into sequences of simpler patterns, an encoding strategy reflecting the temporal nature of these patterns should offer an efficient approach for assembling more complex memories and behaviors. We present a model that demonstrates how activity could propagate through recurrent cortical microcircuits as a result of a learning rule based on neurobiologically plausible time courses and dynamics. The model predicts that the interaction between several learning and dynamical processes constitutes a compound mnemonic engram that can flexibly generate sequential step-wise increases of activity within neural populations.
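
The fixed point of BCPNN learning has a standard Bayesian form; as a reminder (the paper's spike-based trace dynamics are omitted):

```latex
% Standard BCPNN weight and bias expressions that the learning rule
% converges to:
\begin{equation}
  w_{ij} = \log \frac{p_{ij}}{p_i \, p_j}, \qquad
  \beta_j = \log p_j,
\end{equation}
where $p_i$, $p_j$ are the (low-pass-filtered) activation probabilities
of pre- and postsynaptic units and $p_{ij}$ their coactivation
probability; temporally asymmetric filtering of these traces is what
stores sequence order.
```
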

18. Knight JC, Tully PJ, Kaplan BA, Lansner A, Furber SB. Large-scale simulations of plastic neural networks on neuromorphic hardware. Front Neuroanat 2016; 10:37. PMID: 27092061. PMCID: PMC4823276. DOI: 10.3389/fnana.2016.00037.
Abstract
SpiNNaker is a digital neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning, since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network, which we simulate at scales of up to 2.0 × 10^4 neurons and 5.1 × 10^7 plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer and find that, to match the run-time of our SpiNNaker simulation, the supercomputer uses approximately 45× more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models.
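
The event-driven strategy mentioned in the abstract (solving the trace equations analytically between spikes instead of integrating every time-step) can be sketched for a single exponential trace; constants here are illustrative:

```python
# Event-driven trace update: decay is evaluated analytically between
# spikes rather than every time-step. Shown for a single exponential
# trace; the full BCPNN rule chains several such traces.
import math

class EventDrivenTrace:
    def __init__(self, tau_ms):
        self.tau = tau_ms
        self.value = 0.0
        self.last_t = 0.0

    def at(self, t_ms):
        """Analytical value at time t without intermediate updates."""
        return self.value * math.exp(-(t_ms - self.last_t) / self.tau)

    def on_spike(self, t_ms, increment=1.0):
        """Only touch state when an event (spike) actually occurs."""
        self.value = self.at(t_ms) + increment
        self.last_t = t_ms

z = EventDrivenTrace(tau_ms=20.0)
for t in (5.0, 12.0, 40.0):   # presynaptic spike times
    z.on_spike(t)
print(round(z.at(60.0), 4))   # trace read out long after the last spike
```
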
Affiliations
- James C. Knight: Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, UK.
- Philip J. Tully: Department of Computational Biology, Royal Institute of Technology, Stockholm, Sweden; Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden; Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh, Edinburgh, UK.
- Bernhard A. Kaplan: Department of Visualization and Data Analysis, Zuse Institute Berlin, Berlin, Germany.
- Anders Lansner: Department of Computational Biology, Royal Institute of Technology, Stockholm, Sweden; Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden; Department of Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden.
- Steve B. Furber: Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, UK.

19. Jordan J, Tetzlaff T, Petrovici M, Breitwieser O, Bytschok I, Bill J, Schemmel J, Meier K, Diesmann M. Deterministic neural networks as sources of uncorrelated noise for probabilistic computations. BMC Neurosci 2015. PMCID: PMC4697608. DOI: 10.1186/1471-2202-16-s1-p62.

20. Partzsch J, Schüffny R. Network-driven design principles for neuromorphic systems. Front Neurosci 2015; 9:386. PMID: 26539079. PMCID: PMC4611986. DOI: 10.3389/fnins.2015.00386.
Abstract
Synaptic connectivity is typically the most resource-demanding part of neuromorphic systems. Commonly, the architecture of these systems is chosen mainly on the basis of technical considerations. As a consequence, the potential for optimization arising from the inherent constraints of connectivity models is left unused. In this article, we develop an alternative, network-driven approach to neuromorphic architecture design. We describe methods for analysing how well existing neuromorphic architectures emulate certain connectivity models. Furthermore, we show step by step how to derive a neuromorphic architecture from a given connectivity model. For this, we introduce a generalized description for architectures with a synapse matrix that takes into account the shared use of circuit components to reduce total silicon area. Architectures designed with this approach are fitted to a connectivity model, essentially adapting to its connection density; they guarantee faithful reproduction of the model on chip while requiring less total silicon area. In sum, our methods allow designers to implement more area-efficient neuromorphic systems and to verify the usability of the connectivity resources in these systems.
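
As a toy illustration of the area trade-off the abstract describes (hypothetical unit costs, not the paper's model):

```python
# Illustrative comparison of silicon-area cost for a full synapse matrix
# versus a density-fitted layout with shared circuit components.
# Unit costs are hypothetical placeholders.

def full_matrix_area(n_pre, n_post, a_synapse=1.0):
    """Classic crossbar: one synapse circuit per (pre, post) pair."""
    return n_pre * n_post * a_synapse

def density_fitted_area(n_pre, n_post, density, sharing=4,
                        a_synapse=1.0, a_mux=0.25):
    """Provision only the expected number of connections, with groups of
    `sharing` synapses time-multiplexing one driver circuit."""
    n_syn = density * n_pre * n_post
    return n_syn * a_synapse + (n_syn / sharing) * a_mux

n = 256
for density in (0.05, 0.2, 1.0):
    ratio = density_fitted_area(n, n, density) / full_matrix_area(n, n)
    print(f'density {density:4.2f}: fitted/full area = {ratio:.2f}')
```
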
Affiliations
- Both authors: Chair for Highly Parallel VLSI Systems and Neuromorphic Circuits, Department of Electrical Engineering and Information Technology, Technische Universität Dresden, Dresden, Germany.

21. Qiao N, Mostafa H, Corradi F, Osswald M, Stefanini F, Sumislawska D, Indiveri G. A reconfigurable on-line learning spiking neuromorphic processor comprising 256 neurons and 128K synapses. Front Neurosci 2015; 9:141. PMID: 25972778. PMCID: PMC4413675. DOI: 10.3389/fnins.2015.00141.
Abstract
Implementing compact, low-power artificial neural processing systems with real-time on-line learning abilities is still an open challenge. In this paper we present a full-custom mixed-signal VLSI device with neuromorphic learning circuits that emulate the biophysics of real spiking neurons and dynamic synapses, for exploring the properties of computational neuroscience models and for building brain-inspired computing systems. The proposed architecture allows the on-chip configuration of a wide range of network connectivities, including recurrent and deep networks, with short-term and long-term plasticity. The device comprises 128K analog synapse and 256 neuron circuits with biologically plausible dynamics and bi-stable spike-based plasticity mechanisms that endow it with on-line learning abilities. In addition to the analog circuits, the device also comprises asynchronous digital logic circuits for setting different synapse and neuron properties as well as different network configurations. This prototype device, fabricated using a 180 nm 1P6M CMOS process, occupies an area of 51.4 mm² and consumes approximately 4 mW in typical experiments, for example those involving attractor networks. Here we describe the details of the overall architecture and of the individual circuits, and present experimental results that showcase its potential. By supporting a wide range of cortical-like computational modules comprising plasticity mechanisms, this device will enable the realization of intelligent autonomous systems with on-line learning capabilities.
Affiliations
- All authors: Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland.

22. Probst D, Petrovici MA, Bytschok I, Bill J, Pecevski D, Schemmel J, Meier K. Probabilistic inference in discrete spaces can be implemented into networks of LIF neurons. Front Comput Neurosci 2015; 9:13. PMID: 25729361. PMCID: PMC4325917. DOI: 10.3389/fncom.2015.00013.
Abstract
The means by which cortical neural networks are able to efficiently solve inference problems remains an open question in computational neuroscience. Recently, abstract models of Bayesian computation in neural circuits have been proposed, but they lack a mechanistic interpretation at the single-cell level. In this article, we describe a complete theoretical framework for building networks of leaky integrate-and-fire neurons that can sample from arbitrary probability distributions over binary random variables. We test our framework for a model inference task based on a psychophysical phenomenon (the Knill-Kersten optical illusion) and further assess its performance when applied to randomly generated distributions. As the local computations performed by the network strongly depend on the interaction between neurons, we compare several types of couplings mediated by either single synapses or interneuron chains. Due to its robustness to substrate imperfections such as parameter noise and background noise correlations, our model is particularly interesting for implementation on novel, neuro-inspired computing architectures, which can thereby serve as a fast, low-power substrate for solving real-world inference problems.
Affiliations
- Dimitri Probst, Mihai A. Petrovici, Ilja Bytschok, Johannes Schemmel, Karlheinz Meier: Kirchhoff Institute for Physics, University of Heidelberg, Heidelberg, Germany.
- Johannes Bill, Dejan Pecevski: Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria.